\begin{document} \author{Erwan Koch\footnote{EPFL, Chair of Statistics STAT, EPFL-SB-MATH-STAT, MA B1 433 (B\^atiment MA), Station 8, 1015 Lausanne, Switzerland. Email: [email protected]},~~ Cl\'ement Dombry\footnote{Universit\'e Bourgogne Franche-Comt\'e, Laboratoire de Math\'ematiques de Besan\c{c}on, UMR 6623, 16 route de Gray, 25030 Besan\c{c}on, France. Email: [email protected]},~~ Christian Y. Robert\footnote{ISFA Universit\'e Lyon 1, 50 Avenue Tony Garnier, 69366 Lyon cedex 07, France. \newline Email: [email protected]}} \title{A central limit theorem for functions of stationary max-stable random fields on $\mathds{R}^d$} \maketitle \begin{abstract} Max-stable random fields are very appropriate for the statistical modelling of spatial extremes. Hence, integrals of functions of max-stable random fields over a given region can play a key role in the assessment of the risk of natural disasters, meaning that it is relevant to improve our understanding of their probabilistic behaviour. For this purpose, in this paper, we propose a general central limit theorem for functions of stationary max-stable random fields on $\mathds{R}^d$. Then, we show that appropriate functions of the Brown-Resnick random field with a power variogram and of the Smith random field satisfy the central limit theorem. Another strong motivation for our work lies in the fact that central limit theorems for random fields on $\mathds{R}^d$ have barely been considered in the literature. As an application, we briefly show the usefulness of our results in a risk assessment context. \noindent \textbf{Key words}: Central limit theorem; Max-stable random fields on $\mathds{R}^d$; Mixing; Risk assessment. \end{abstract} \section{Introduction} Max-stable random fields constitute an extension of extreme-value theory to the level of random fields \citep[in the case of stochastic processes, see, e.g.,][]{haan1984spectral, de2007extreme} and turn out to be fundamental for spatial extremes.
Indeed, they are particularly well suited to model the temporal maxima of a given variable (for instance a meteorological variable) at all points in space since they arise as the pointwise maxima taken over an infinite number of appropriately rescaled independent and identically distributed (iid) random fields. Thus, appropriate functions of max-stable random fields can be adequate models for the costs triggered by extreme environmental events. Hence, normalized integrals on subsets of $\mathds{R}^2$ of functions of max-stable random fields and associated spatial risk measures \citep[see][]{koch2017spatial, koch2018SpatialRiskAxiomsGeneral} are useful for assessing the impact of natural disasters. The existence of a central limit theorem (CLT) for functions of max-stable random fields on $\mathds{R}^2$ would provide insights about the asymptotic probabilistic behaviour of the previously mentioned normalized integrals. Moreover, as explained in \cite{koch2017spatial, koch2018SpatialRiskAxiomsGeneral}, it is relevant to look at the evolution of spatial risk measures when the region over which they are applied becomes increasingly large; see the axiom of asymptotic spatial homogeneity of order $- \alpha$ in \cite{koch2017spatial, koch2018SpatialRiskAxiomsGeneral}, which quantifies the rate of spatial diversification when the region becomes large. As shown in the latter reference, under relatively mild assumptions, asymptotic spatial homogeneity of order $-2$, $-1$ and $-1$ of spatial risk measures associated respectively with variance, Value-at-Risk and expected shortfall is satisfied when there is a CLT for the cost field. Finally, from a statistical viewpoint, the existence of a CLT allows one to show the asymptotic normality of various estimators. For all these reasons, in this paper, we are interested in showing a CLT for functions of max-stable random fields on $\mathds{R}^d$.
A CLT has already been proved in \cite{dombry2012strong} in the case of stationary\footnote{Throughout the paper, stationarity refers to strict stationarity.} max-infinitely divisible random fields on $\mathds{Z}^d$. Several CLTs for stochastic processes on $\mathds{Z}$ have been proposed under various (especially mixing) conditions \citep[see, e.g.,][]{ibragimov1962some, gordin1969, ibragimov1975note}. Similarly, in the case of random fields on $\mathds{Z}^d$, many CLTs have been introduced under miscellaneous (especially mixing) conditions and in diverse contexts \citep[see, e.g.,][]{bolthausen1982central, chen1991uniform, nahapetian1992martingale, guyon1995random, perera1997geometry, dedecker1998central, el2013central}. For instance, the influential paper by \cite{bolthausen1982central} establishes a CLT for stationary random fields on $\mathds{Z}^d$ satisfying specific strong mixing conditions. However, the literature about CLTs for stochastic processes on $\mathds{R}$ or random fields on $\mathds{R}^d$ is, surprisingly, limited. This provides an additional strong motivation for our work. \cite{bulinski2010central} proposes a variant of the classical CLT where he considers a random field on $\mathds{R}^d$ but observed on a grid. His results involve two asymptotics at the same time: both the spatial domain and the grid resolution increase. The second type of asymptotics is known as infill asymptotics. To the best of our knowledge, only \cite{gorodetskii1984central, gorodetskii1987moment} has proposed a general CLT for strong mixing random fields on $\mathds{R}^d$. However, the strong mixing condition needed for this theorem to hold seems very difficult to check. Finally, CLTs for the indicator function of stationary random fields exceeding a given threshold have been obtained \citep[see, e.g.,][]{spodarev2014limit, bulinski2012central}. For more references about CLTs for random fields, we refer the reader for instance to \cite{ivanov1989statistical}. 
In this paper, we show a CLT for functions of stationary max-stable random fields on $\mathds{R}^d$. For the reason explained above, we could not use the results by \cite{gorodetskii1984central, gorodetskii1987moment} and, instead, we take advantage of the CLT by \cite{bolthausen1982central}. In essence, we extend Theorem 2.3 in \cite{dombry2012strong} from $\mathds{Z}^d$ to $\mathds{R}^d$ in the max-stable case, using the bound for the $\beta$-mixing coefficient established in that paper. Then, we show that appropriate functions of the Brown-Resnick random field with a power variogram and of the Smith random field satisfy the CLT. Finally, we briefly show the usefulness of our results in a risk assessment context. The remainder of the paper is organized as follows. In Section \ref{Sec_Notations_CLT_Bolthausen}, we introduce some concepts about mixing as well as the previously mentioned CLT by \cite{bolthausen1982central} and give some insights about max-stable random fields. In Section \ref{Sec_CLT_Functions_Stationary_Maxstable_Processes}, we first establish our general CLT for functions of stationary max-stable random fields and then consider the cases of the Brown-Resnick and Smith random fields. We briefly illustrate our results in a risk assessment context in Section \ref{Sec_Application}. Finally, Section \ref{Sec_Conclusion} contains a brief summary as well as some perspectives. Throughout the paper, the elements belonging to $\mathds{R}^d$ or $\mathds{Z}^d$ for some $d \in \mathds{N} \backslash \{0 \}$ are denoted using bold symbols, whereas those in more general spaces are denoted using normal font. All proofs can be found in the Appendix. \section{Some notations and concepts} \label{Sec_Notations_CLT_Bolthausen} In the following, ``$\bigvee$'' denotes the supremum when the latter is taken over a countable set. Moreover, $\overset{d}{=}$ and $\overset{d}{\to}$ stand for equality and convergence in distribution, respectively.
Unless otherwise stated, in the case of random fields, distribution has to be understood as the set of all finite-dimensional distributions. Finally, let $\lambda$ denote the Lebesgue measure in $\mathds{R}^d$. \subsection{Brief introduction to mixing and the central limit theorem by Bolthausen} Let $(\Omega, \mathcal{F}, \mathds{P})$ be a probability space and $\mathcal{X}$ be a locally compact metric space. Let $\{ X(x) \}_{x \in \mathcal{X}}$ be a real-valued random field. For $S \subset \mathcal{X}$ a closed subset, we denote by $\mathcal{F}^X_S$ the $\sigma$-field generated by the random variables $\{ X(x): x \in S \}$. Moreover, for any $\mathcal{F}_1 \subset \mathcal{F}$, we denote by $\sigma(\mathcal{F}_1)$ the $\sigma$-field generated by $\mathcal{F}_1$. Let $S_1, S_2 \subset \mathcal{X}$ be disjoint closed subsets. The $\alpha$-mixing coefficient \citep[introduced by][]{rosenblatt1956central} between the $\sigma$-fields $\mathcal{F}^X_{S_1}$ and $\mathcal{F}^X_{S_2}$ is given by $$ \alpha^X(S_1,S_2)=\sup\Big\{|\mathds{P}(A\cap B)-\mathds{P}(A)\mathds{P}(B)|: A\in \mathcal{F}^X_{S_1}, B\in \mathcal{F}^X_{S_2} \Big\}.$$ The $\beta$-mixing coefficient or absolute regularity coefficient \citep[attributed to Kolmogorov in][]{volkonskii1959some} between the $\sigma$-fields $\mathcal{F}^X_{S_1}$ and $\mathcal{F}^X_{S_2}$ is defined by \begin{equation} \label{Eq_Def_2_Beta_Mixing} \beta^X(S_1, S_2)= \frac{1}{2} \sup \left \{ \sum_{i=1}^{I} \sum_{j=1}^J | \mathds{P}(A_i \cap B_j) - \mathds{P}(A_i) \mathds{P}(B_j) | \right \}, \end{equation} where the supremum is taken over all partitions $\{ A_1,\dots , A_I \}$ and $\{ B_1, \dots , B_J \}$ of $\Omega$ with the $A_i$'s in $\mathcal{F}^X_{S_1}$ and the $B_j$'s in $\mathcal{F}^X_{S_2}$. The inequality \begin{equation} \label{Eq_Majoration_Alphamixing_With_Betamixing} \alpha^X(S_1, S_2) \leq \frac{1}{2} \beta^X(S_1, S_2), \quad \mbox{for all } S_1, S_2 \subset \mathcal{X}, \end{equation} will be very useful. 
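To make these coefficients concrete, the following sketch (an illustrative aside in Python, not part of the formal development) computes $\alpha$ and $\beta$ for two binary random variables with an arbitrary toy joint law; the $\sigma$-fields are then finite, so the supremum defining $\alpha$ can be evaluated exhaustively, while the supremum in \eqref{Eq_Def_2_Beta_Mixing} is attained at the partition into atoms. The inequality \eqref{Eq_Majoration_Alphamixing_With_Betamixing} can then be checked numerically.

```python
from itertools import combinations, chain

# Toy joint law of two binary random variables: p[i][j] = P(X = i, Y = j).
# The numbers are arbitrary, chosen only for illustration.
p = [[0.3, 0.2], [0.1, 0.4]]

atoms = [(i, j) for i in range(2) for j in range(2)]
prob = {(i, j): p[i][j] for i, j in atoms}
px = [sum(p[i]) for i in range(2)]          # marginal law of X
py = [p[0][j] + p[1][j] for j in range(2)]  # marginal law of Y

def subsets(s):
    """All subsets of s (events of a finite sigma-field are unions of atoms)."""
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

# alpha-mixing coefficient: sup of |P(A cap B) - P(A)P(B)| over events
# A in sigma(X) and B in sigma(Y), evaluated exhaustively.
alpha = 0.0
for A in subsets(range(2)):        # event {X in A}
    for B in subsets(range(2)):    # event {Y in B}
        pab = sum(prob[(i, j)] for i in A for j in B)
        pa = sum(px[i] for i in A)
        pb = sum(py[j] for j in B)
        alpha = max(alpha, abs(pab - pa * pb))

# beta-mixing coefficient: for finite sigma-fields, the supremum over
# partitions is attained at the partition into atoms.
beta = 0.5 * sum(abs(prob[(i, j)] - px[i] * py[j]) for i, j in atoms)

print(alpha, beta)
assert alpha <= 0.5 * beta + 1e-12  # the inequality alpha <= beta / 2
```

For this particular joint law the inequality holds with equality ($\alpha = 0.1$ and $\beta = 0.2$), which shows that the bound $\alpha \leq \beta/2$ can be sharp.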
We now present the previously mentioned CLT for stationary random fields on $\mathds{Z}^d$ due to \cite{bolthausen1982central} since it will play a key role in the proof of our general CLT for functions of stationary max-stable random fields (Theorem \ref{Th_General_Case}). Let $\mathcal{X}=\mathds{Z}^d$. For $\mathbf{h}_1, \mathbf{h}_2 \in \mathds{Z}^d$, let $d(\mathbf{h}_1, \mathbf{h}_2)=\max_{1 \leq i \leq d} \{ |\mathbf{h}_1(i)-\mathbf{h}_2(i) | \}$, where, for $\mathbf{h} \in \mathds{Z}^d$, the $\mathbf{h}(i)$, $1 \leq i \leq d$, are the components of $\mathbf{h}$. If $S_1, S_2 \subset \mathds{Z}^d$, let $d(S_1, S_2)=\inf \{ d(\mathbf{h}_1, \mathbf{h}_2): \mathbf{h}_1 \in S_1, \mathbf{h}_2 \in S_2 \}$. For $\Lambda\subset\mathds{Z}^d$, we denote by $|\Lambda|$ the number of elements in $\Lambda$ and by $\partial\Lambda$ the set of elements $\mathbf{h}_1\in\Lambda$ such that there exists $\mathbf{h}_2 \notin \Lambda$ with $d(\mathbf{h}_1,\mathbf{h}_2)=1$. For a real-valued stationary random field $\{ X(\mathbf{h}) \}_{\mathbf{h}\in\mathds{Z}^d}$, we define \begin{equation}\label{eq:amix} \alpha^X_{kl}(m)=\sup\Big\{\alpha^X(S_1,S_2):\ S_1, S_2 \subset \mathds{Z}^d,\ |S_1| \leq k, |S_2| \leq l,\ d(S_1,S_2)\geq m \Big\}, \end{equation} defined for $m\geq 1$ and $k,l\in\mathds{N}\cup\{\infty\}$. Finally, let $\mathrm{Cov}$ denote the covariance.
\begin{theorem}\label{theo:BCLT} Let $\{ X(\mathbf{h}) \}_{\mathbf{h}\in\mathds{Z}^d}$ be a real-valued stationary random field satisfying the following three conditions: \begin{align} &\sum_{m=1}^\infty m^{d-1}\alpha^X_{kl}(m)<\infty\quad \mbox{for\ all\ \ } k\geq 1,\ l\geq 1\ \mbox{such\ that\ } k+l\leq 4;\label{Eq_TCL2}\\ &\alpha^X_{1\infty}(m)=o(m^{-d});\label{Eq_TCL1}\\ &\mathds{E}\big[|X(\mathbf{0})|^{2+\delta}\big]<\infty \quad\mbox{and}\quad \sum_{m=1}^\infty m^{d-1} (\alpha^X_{11}(m) )^{\delta/(2+\delta)}<\infty\quad\mbox{for\ some}\ \delta>0.\label{Eq_TCL3} \end{align} \noindent Moreover, let $(\Lambda_n)_{n \in \mathds{N}}$ be a fixed sequence of finite subsets of $\mathds{Z}^d$ which increases to $\mathds{Z}^d$ and such that $\lim_{n\to\infty} |\partial \Lambda_n|/|\Lambda_n|=0$. Then we have that $\sum_{\mathbf{h}\in\mathds{Z}^d} | \mathrm{Cov}(X(\mathbf{0}),X(\mathbf{h})) | < \infty$ and, \\ if $\sigma^2=\sum_{\mathbf{h}\in\mathds{Z}^d} \mathrm{Cov}(X(\mathbf{0}),X(\mathbf{h}))>0$, $$\frac{1}{|\Lambda_n|^{1/2}} \sum_{\mathbf{h}\in\Lambda_n} (X(\mathbf{h})-\mathds{E}[X(\mathbf{h})]) \overset{d}{\rightarrow} \mathcal{N}(0,\sigma^2),\quad \mbox{as}\ n\to\infty,$$ where $\mathcal{N}(\mu, \sigma^2)$ denotes the normal distribution with expectation $\mu \in \mathds{R}$ and variance $\sigma^2>0$. 
\end{theorem} \subsection{Brief introduction to max-stable random fields} \begin{definition}[Max-stable random field] \label{Def_Maxstable_Processes} A random field $\left \{ Z(\mathbf{x}) \right \}_{\mathbf{x} \in \mathds{R}^d}$ is said to be max-stable if there exist sequences of functions $\left( a_T(\mathbf{x}), \mathbf{x} \in \mathds{R}^d \right)_{T \geq 1}> 0$ and \\ $\left( b_T(\mathbf{x}), \mathbf{x} \in \mathds{R}^d \right)_{T \geq 1} \in \mathds{R}$ such that, for all $T \geq 1$, $$ \left \{ \frac{ \bigvee_{t=1}^T \left \{ Z_t(\mathbf{x}) \right \}-b_T(\mathbf{x} )}{a_T(\mathbf{x} )} \right \} _{\mathbf{x} \in \mathds{R}^d} \overset{d}{=} \{ Z(\mathbf{x}) \}_{\mathbf{x} \in \mathds{R}^d}, $$ where the $\{ Z_t(\mathbf{x})\}_{\mathbf{x} \in \mathds{R}^d }, t=1, \dots, T,$ are independent replications of $Z$. \end{definition} A max-stable random field is said to be simple if it has standard Fr\'echet margins, i.e., for all $\mathbf{x} \in \mathds{R}^d$, $\mathds{P}(Z(\mathbf{x}) < z)=\exp \left( -1/z \right), z>0$. Now, let $\{ \tilde{T}_i(\mathbf{x}) \}_{\mathbf{x} \in \mathds{R}^d}, i=1, \dots, n,$ be independent replications of a random field $\{ \tilde{T}(\mathbf{x}) \}_{\mathbf{x} \in \mathds{R}^d}$. Let $\left( c_n(\mathbf{x}), \mathbf{x} \in \mathds{R}^d \right)_{n \geq 1} >0$ and $\left( d_n(\mathbf{x}), \mathbf{x} \in \mathds{R}^d \right)_{n \geq 1} \in \mathds{R}$ be sequences of functions. If there exists a non-degenerate random field $\{ G(\mathbf{x}) \}_{\mathbf{x} \in \mathds{R}^d}$ such that $$ \left \{ \frac{\bigvee_{i=1}^n \left \{ \tilde{T}_i(\mathbf{x}) \right \} -d_n(\mathbf{x})}{c_n(\mathbf{x})} \right \}_{\mathbf{x} \in \mathds{R}^d} \overset{d}{\rightarrow} \left \{ G(\mathbf{x}) \right \}_{\mathbf{x} \in \mathds{R}^d}, \mbox{ for } n \to \infty, $$ then $G$ is necessarily max-stable; see, e.g., \cite{haan1984spectral}. This explains the relevance of max-stable random fields in the modelling of spatial extremes. 
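At the level of a single margin, Definition \ref{Def_Maxstable_Processes} can be made very concrete: for standard Fr\'echet margins one may take $a_T(\mathbf{x})=T$ and $b_T(\mathbf{x})=0$, since $\left[\exp\left(-1/(Tz)\right)\right]^T=\exp(-1/z)$. The following short sketch (a numerical aside, not part of the formal development) verifies this marginal max-stability identity.

```python
import math

def frechet_cdf(z):
    """Standard Frechet distribution function: P(Z <= z) = exp(-1/z), z > 0."""
    return math.exp(-1.0 / z)

# Marginal max-stability: if Z_1, ..., Z_T are iid standard Frechet, then
# P(max_t Z_t / T <= z) = F(Tz)^T, which equals F(z) for every T >= 1.
for T in (1, 2, 10, 1000):
    for z in (0.1, 0.5, 1.0, 3.0, 25.0):
        assert abs(frechet_cdf(T * z) ** T - frechet_cdf(z)) < 1e-12
print("marginal max-stability of the standard Frechet law checked")
```

Of course, max-stability of the full random field involves all finite-dimensional distributions; the identity above only checks the one-dimensional margins.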
We know \citep[see, e.g.,][]{haan1984spectral} that any simple max-stable random field $Z$ can be written as \begin{equation} \label{Eq_Spectral_Representation_Stochastic_Processes} \left \{ Z(\mathbf{x}) \right \}_{\mathbf{x} \in \mathds{R}^d} \overset{d}{=} \left \{ \bigvee_{i=1}^{\infty} \{ U_i Y_i(\mathbf{x}) \} \right \}_{\mathbf{x} \in \mathds{R}^d}, \end{equation} where the $(U_i)_{i \geq 1}$ are the points of a Poisson point process on $(0, \infty)$ with intensity $u^{-2} \lambda(\mathrm{d}u)$ and the $Y_i, i\geq 1$, are independent replications of a random field $\{ Y(\mathbf{x}) \}_{\mathbf{x} \in \mathds{R}^d}$ such that, for all $\mathbf{x} \in \mathds{R}^d$, $\mathds{E}[Y(\mathbf{x})]=1$. The random field $Y$ is not unique and is referred to as a spectral random field of $Z$. Conversely, any random field of the form \eqref{Eq_Spectral_Representation_Stochastic_Processes} is a simple max-stable random field. Below, we introduce the Brown-Resnick random field, defined in \cite{kabluchko2009stationary} as a generalization of the stochastic process introduced in \cite{brown1977extreme}. Recall that a random field $\{ W(\mathbf{x}) \}_{\mathbf{x} \in \mathds{R}^d}$ is said to have stationary increments if the distribution of the random field $\{ W(\mathbf{x}+\mathbf{x}_0)-W(\mathbf{x}_0) \}_{\mathbf{x} \in \mathds{R}^d}$ does not depend on the choice of $\mathbf{x}_0 \in \mathds{R}^d$. Provided the increments of $W$ have a second moment, the variogram of $W$, $\gamma_W$, is defined by $$ \gamma_W(\mathbf{x})=\mathrm{Var}(W(\mathbf{x})-W(\mathbf{0})), \quad \mathbf{x} \in \mathds{R}^d,$$ where $\mathrm{Var}$ stands for the variance. 
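Representation \eqref{Eq_Spectral_Representation_Stochastic_Processes} also suggests a simple way to simulate $Z$ at a single site: the points $(U_i)_{i \geq 1}$ can be generated as $1/\Gamma_i$, where the $\Gamma_i$ are the arrival times of a unit-rate Poisson process, and, when the spectral variable is bounded, the supremum can be truncated exactly. The sketch below uses a Uniform$(0,2)$ spectral variable, an arbitrary unit-mean choice made purely for illustration, and checks the resulting standard Fr\'echet margin empirically.

```python
import math
import random

random.seed(7)

def simulate_Z_at_point():
    """One draw of Z = sup_i U_i Y_i at a single site, with U_i = 1 / Gamma_i
    for Gamma_i the arrival times of a unit-rate Poisson process and
    Y ~ Uniform(0, 2), so that E[Y] = 1 and Y <= 2. Since the U_i are
    decreasing, we may stop as soon as 2 * U_i cannot improve the maximum."""
    gamma = random.expovariate(1.0)  # Gamma_1
    z = 0.0
    while 2.0 / gamma > z:           # the next point could still contribute
        z = max(z, random.uniform(0.0, 2.0) / gamma)
        gamma += random.expovariate(1.0)  # next Poisson arrival
    return z

# Empirical check of the margin: P(Z <= 1) should be exp(-E[Y]) = exp(-1).
n = 200_000
hits = sum(simulate_Z_at_point() <= 1.0 for _ in range(n))
print(hits / n, math.exp(-1.0))
assert abs(hits / n - math.exp(-1.0)) < 0.01
```

The stopping rule is exact here only because the spectral variable is bounded; for unbounded spectral fields such as the log-normal one of the Brown-Resnick construction, dedicated exact algorithms are needed.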
Moreover, for a random field $\{ W(\mathbf{x}) \}_{\mathbf{x} \in \mathds{R}^d}$ having a second moment, we introduce the function $\sigma_W$ defined by $$ \sigma_W(\mathbf{x})=\left[ \mathrm{Var}(W(\mathbf{x})) \right]^{\frac{1}{2}}, \quad \mathbf{x} \in \mathds{R}^d.$$ \begin{definition}[Brown-Resnick random field] \label{Def_Brown_Resnick_Process} Let $\{ W(\mathbf{x}) \}_{\mathbf{x} \in \mathds{R}^d}$ be a centred Gaussian random field with stationary increments and with variogram $\gamma_W$. Let us consider the random field $Y$ written as $$ \{ Y(\mathbf{x}) \}_{\mathbf{x} \in \mathds{R}^d}=\left \{ \exp \left( W(\mathbf{x})-\frac{\sigma^2_W(\mathbf{x})}{2} \right) \right \}_{\mathbf{x} \in \mathds{R}^d}.$$ Then the simple max-stable random field defined by \eqref{Eq_Spectral_Representation_Stochastic_Processes} with $Y$ is referred to as the Brown-Resnick random field associated with the variogram $\gamma_W$. In the following, we will call this field the Brown-Resnick random field built with $W$. \end{definition} It is worth noting that the Brown-Resnick random field is stationary \citep[see][Theorem 2]{kabluchko2009stationary} and that its distribution only depends on the variogram \citep[see][Proposition 11]{kabluchko2009stationary}. We now present the Smith random field. Let $(U_i, \mathbf{C}_i)_{i \geq 1}$ be the points of a Poisson point process on $(0,\infty) \times \mathds{R}^d$ with intensity measure $u^{-2} \lambda(\mathrm{d}u) \times \lambda(\mathrm{d}\mathbf{c})$. Independently, let $f_i, i \geq 1$, be independent replicates of some non-negative random function $f$ on $\mathds{R}^d$ satisfying $\mathds{E} \left[ \int_{\mathds{R}^d} f(\mathbf{x}) \ \lambda(\mathrm{d}\mathbf{x}) \right]=1$. 
Then, the Mixed Moving Maxima (M3) random field \begin{equation} \label{Eq_Mixed_Moving_Maxima_Representation} \{ Z(\mathbf{x}) \}_{\mathbf{x} \in \mathds{R}^d}= \left \{ \bigvee_{i=1}^{\infty} \{ U_i f_i(\mathbf{x}-\mathbf{C}_i) \} \right \}_{\mathbf{x} \in \mathds{R}^d} \end{equation} is a stationary and simple max-stable random field. The so-called Smith random field, introduced by \cite{smith1990max}, is a special case of an M3 random field. \begin{definition}[Smith random field] Let $Z$ be written as in \eqref{Eq_Mixed_Moving_Maxima_Representation} with $f$ being the density of a $d$-variate Gaussian random vector with mean $\mathbf{0}$ and covariance matrix $\Sigma$. Then, the field $Z$ is referred to as the Smith random field with covariance matrix $\Sigma$. \end{definition} We conclude this section by giving some insights about a well-known dependence measure for max-stable random fields, the extremal coefficient. Let $\left \{ Z(\mathbf{x}) \right \}_{\mathbf{x} \in \mathds{R}^d}$ be a simple max-stable random field. For any compact $S \subset \mathds{R}^d$, the areal extremal coefficient of $Z$ for $S$, $\theta^Z(S)$, is defined by \begin{equation} \label{Def_Extremal_Coefficient} \mathds{P} \left( \sup_{\mathbf{x} \in S} \{ Z(\mathbf{x}) \} \leq z \right)=\exp \left( - \frac{\theta^Z(S)}{z} \right), \quad z>0. \end{equation} It is easily shown that, for any compact $S \subset \mathds{R}^d$, \begin{equation} \label{Eq_Property_Extremal_Coefficient} \theta^Z(S)=\mathds{E} \left[ \sup_{\mathbf{x} \in S} \{ Y(\mathbf{x}) \} \right], \end{equation} where $Y$ is a spectral random field of $Z$. \section{A CLT for functions of stationary max-stable random fields on $\mathds{R}^d$} \label{Sec_CLT_Functions_Stationary_Maxstable_Processes} We start with some notations and definitions. Let $\|.\|$ stand for the Euclidean norm in $\mathds{R}^d$ and $'$ designate transposition.
Moreover, for $\mathbf{h}=(h_1, \dots, h_d)^{'} \in \mathds{Z}^d$, we adopt the notation $[\mathbf{h}, \mathbf{h}+1]=[h_1,h_1+1] \times \dots \times [h_d, h_d+1]$. We introduce $\mathds{S}= \left \{ \mathbf{x} \in \mathds{R}^d: \| \mathbf{x} \|=1 \right \}$, the unit sphere of $\mathds{R}^d$. Moreover, for two functions $f$ and $g$ from $\mathds{R}^d$ or $\mathds{Z}^d$ to $\mathds{R}$, the notations $f(\mathbf{h}) \underset{\| \mathbf{h} \| \to \infty}{=} o(g(\mathbf{h}))$ and $f(\mathbf{h}) \underset{\| \mathbf{h} \| \to \infty}{\sim} g(\mathbf{h})$ mean that $\lim_{\| \mathbf{h} \| \to \infty} f(\mathbf{h})/g(\mathbf{h})=0$ and $\lim_{\| \mathbf{h} \| \to \infty} f(\mathbf{h})/g(\mathbf{h})=1$, respectively, where, for $L<\infty$, $\lim_{\| \mathbf{h} \| \to \infty} f(\mathbf{h})/g(\mathbf{h})=L$ signifies $\lim_{h \to \infty} \sup_{\mathbf{u} \in \mathds{S}} \left \{ | f(h \mathbf{u})/g(h \mathbf{u}) - L| \right \}=0$. Finally, $\lim_{\| \mathbf{h} \| \to \infty} f(\mathbf{h})=\infty$ must be understood as $\lim_{h \to \infty} \inf_{\mathbf{u} \in \mathds{S}} \left \{ f(h \mathbf{u}) \right \}=\infty$. For $V \subset \mathds{R}^d$ and $r>0$, we denote $N(V, r)=\{ \mathbf{x} \in \mathds{R}^d: \mathrm{dist}(\mathbf{x}, V) \leq r \}$, where $\mathrm{dist}$ designates the Euclidean distance. Furthermore, for $V \subset \mathds{R}^d$, we denote $\partial V$ the boundary of $V$. For a compact and convex set $V \subset \mathds{R}^d$, let $s(V)$ be the inradius of $V$, i.e., the largest $s>0$ such that $V$ contains a ball of radius $s$. Finally, let $\mathcal{B}(\mathds{R})$ and $\mathcal{B}((0, \infty))$ be the Borel $\sigma$-fields on $\mathds{R}$ and $(0, \infty)$, respectively. We now present the concept of Van Hove sequence, which will play an important role in the following. 
\begin{definition} \label{Def_Van_Hove_Sequence} A Van Hove sequence in $\mathds{R}^d$ is a sequence $( V_n )_{n \in \mathds{N}}$ of bounded measurable subsets of $\mathds{R}^d$ satisfying $V_n \uparrow \mathds{R}^d$, $\lim_{n \to \infty} \lambda(V_n)=\infty$, and $\lim_{n \to \infty} \lambda(N(\partial V_n, r) )/\lambda(V_n) =0, \mbox{ for all } r>0$. \end{definition} Note that the assumption ``bounded'' does not always appear in the definition of a Van Hove sequence. It is worth mentioning that many sequences of bounded measurable subsets of $\mathds{R}^d$ are Van Hove sequences. For instance, any sequence $(V_n)_{n \in \mathds{N}}$ of compact convex subsets of $\mathds{R}^d$ such that $\lim_{n \to \infty} s(V_n)=\infty$ is a Van Hove sequence \citep[see, e.g.,][Lemma 3.11]{lenz2005ergodic}. \subsection{CLT in the general case} In the following, we say that a random field $\{ X(\mathbf{x}) \}_{\mathbf{x} \in \mathds{R}^d}$ such that, for all $\mathbf{x} \in \mathds{R}^d$, $\mathds{E}[X(\mathbf{x})^2]<\infty$, satisfies the CLT if \begin{equation} \label{Eq_Convergence_Abs_Int_Cov_Thm} \int_{\mathds{R}^d} | \mathrm{Cov}(X(\mathbf{0}), X(\mathbf{x})) | \ \lambda(\mathrm{d}\mathbf{x}) < \infty, \end{equation} \begin{equation} \label{Eq_Def_Sigma_Square} \sigma^2= \int_{\mathds{R}^d} \mathrm{Cov}(X(\mathbf{0}), X(\mathbf{x})) \ \lambda(\mathrm{d}\mathbf{x})>0, \end{equation} and, for any Van Hove sequence $(V_n)_{n \in \mathds{N}}$ in $\mathds{R}^d$, \begin{equation} \label{Eq_Normality_Normalized_Integral} \frac{1}{[\lambda(V_n)]^{1/2}} \int_{V_n} (X(\mathbf{x})-\mathds{E}[X(\mathbf{x})]) \ \lambda(\mathrm{d}\mathbf{x}) \overset{d}{\rightarrow} \mathcal{N}(0, \sigma^2), \quad \mbox{as } n\to\infty. \end{equation} The main result of this section, stated directly below, gives sufficient conditions such that a function of a stationary max-stable random field satisfies the CLT. 
\begin{theorem} \label{Th_General_Case} Let $\{ Z(\mathbf{x}) \}_{\mathbf{x} \in \mathds{R}^d}$ be a simple, stationary and sample-continuous max-stable random field and $F$ be a measurable function from $((0, \infty), \mathcal{B}((0,\infty)))$ to $(\mathds{R},\mathcal{B}(\mathds{R}))$ satisfying \begin{equation} \label{Eq_Assumption_F} \mathds{E}\left[ |F(Z(\mathbf{0}))|^{2+\delta} \right]<\infty, \end{equation} for some $\delta>0$ and \begin{equation} \label{Eq_Condition_Positivity_Sigma2} \sigma^2= \int_{\mathds{R}^d} \mathrm{Cov} \left(F(Z(\mathbf{0})), F(Z(\mathbf{x}))\right) \ \lambda(\mathrm{d}\mathbf{x})>0. \end{equation} Furthermore, assume that, for all $\mathbf{h} \in \mathds{Z}^d$, \begin{equation} \label{Eq_Decay_Cubes_Y} \mathds{E} \left[ \min \left \{ \sup_{\mathbf{x} \in [0,1]^d} \{ Y(\mathbf{x}) \}, \sup_{\mathbf{x} \in [\mathbf{h},\mathbf{h}+1]} \{ Y(\mathbf{x}) \} \right \} \right] \leq K \| \mathbf{h} \|^{-b}, \end{equation} for some $K>0$, $b > d \max \left \{ 2, (2+\delta)/\delta \right \}$ and where $\{ Y(\mathbf{x}) \}_{\mathbf{x} \in \mathds{R}^d}$ is a spectral random field of $Z$ (see \eqref{Eq_Spectral_Representation_Stochastic_Processes}). \noindent Then the random field $\{ X(\mathbf{x}) \}_{\mathbf{x} \in \mathds{R}^d}=\{ F(Z(\mathbf{x})) \}_{\mathbf{x} \in \mathds{R}^d}$ satisfies the CLT. \end{theorem} It should be noted that this result constitutes, in the max-stable case, an extension of Theorem 2.3 in \cite{dombry2012strong} where the CLT for discrete random fields indexed by $\mathds{Z}^d$ is considered. Another connection with \cite{dombry2012strong} lies in the fact that we take advantage of the upper-bound for the $\beta$-mixing coefficient of simple and sample-continuous max-stable random fields established in that paper (Theorem 2.2). We provide here the structure of the proof in order to convey some of the main ideas. For the detailed proof, we refer the reader to the Appendix. 
Without loss of generality, we assume that $\mathds{E}[X(\mathbf{0})]=0$. The proof is divided into three main parts. The first one is dedicated to the proof of \eqref{Eq_Convergence_Abs_Int_Cov_Thm}. Then, the second and third ones show \eqref{Eq_Normality_Normalized_Integral}. Let $(V_n)_{n \in \mathds{N}}$ be a Van Hove sequence in $\mathds{R}^d$. In order to prove \eqref{Eq_Normality_Normalized_Integral}, we take advantage of the fact that, for all $n \in \mathds{N}$, \begin{equation} \label{Eq_Decomposition_Integral_Total} \frac{1}{[\lambda(V_n)]^{\frac{1}{2}}} \int_{V_n} X(\mathbf{x}) \ \lambda(\mathrm{d}\mathbf{x})= I_{n,1} + I_{n,2}, \end{equation} where \begin{equation} \label{Eq_Def_In1_In2} I_{n,1}=\frac{1}{[\lambda(V_n)]^{\frac{1}{2}}} \int_{A_n} X(\mathbf{x}) \ \lambda(\mathrm{d}\mathbf{x}) \quad \mbox{and} \quad I_{n,2} =\frac{1}{[\lambda(V_n)]^{\frac{1}{2}}} \int_{V_n \backslash A_n} X(\mathbf{x}) \ \lambda(\mathrm{d}\mathbf{x}), \end{equation} with $$A_n= \bigcup_{\mathbf{h} \in \mathds{Z}^d:[\mathbf{h}, \mathbf{h}+1] \subset V_n} [\mathbf{h}, \mathbf{h}+1].$$ The second part of the proof is devoted to the study of $(I_{n,1})_{n \in \mathds{N}}$. For any $n \in \mathds{N}$, the domain of the related integral, $A_n$, consists of the union of all cubes $[\mathbf{h}, \mathbf{h}+1]$, for $\mathbf{h} \in \mathds{Z}^d$, which are entirely included in $V_n$. As will be seen, considering such sets allows us to deal with a random field on $\mathds{Z}^d$ instead of $\mathds{R}^d$. Thus, we show that the assumptions of Theorem \ref{theo:BCLT} (Bolthausen's theorem) are satisfied, obtaining that $$ I_{n,1} \overset{d}{\rightarrow} \mathcal{N}(0, \sigma^2), \mbox{ for } n \to \infty.$$ Finally, the third part concerns the study of $(I_{n,2})_{n \in \mathds{N}}$. For any $n \in \mathds{N}$, points belonging to the domain of the related integral, $V_n \backslash A_n$, are at a Euclidean distance not larger than $\sqrt{d}$ from the boundary of $V_n$.
Hence, using the fact that $(V_n)_{n \in \mathds{N}}$ is a Van Hove sequence, we show that $\lim_{n \to \infty} \mathrm{Var}(I_{n,2})=0$, which allows us to obtain \eqref{Eq_Normality_Normalized_Integral}. \begin{Rq} It is worth mentioning that the left-hand side of \eqref{Eq_Decay_Cubes_Y} does not depend on the choice of the spectral random field $Y$. It only depends on the areal extremal coefficient function of $Z$. Indeed, the same computation as that leading to \eqref{Eq_Sum_Extremal_Coefficients} gives that \begin{align} & \quad \ \mathds{E} \left[ \min \left \{ \sup_{\mathbf{x} \in [0,1]^d} \{ Y(\mathbf{x}) \}, \sup_{\mathbf{x} \in [\mathbf{h},\mathbf{h}+1]} \{ Y(\mathbf{x}) \} \right \} \right] \nonumber \\&=\theta^Z\left([ 0,1]^d \right)+\theta^Z\left([\mathbf{h},\mathbf{h}+1] \right) -\theta^Z\left([ 0,1 ]^d \cup[\mathbf{h},\mathbf{h}+1] \right). \label{Eq_Explanation_Main_Condition} \end{align} \end{Rq} We now provide some insights about \eqref{Eq_Decay_Cubes_Y}, which is the main condition in Theorem \ref{Th_General_Case}.
Using \eqref{Def_Extremal_Coefficient}, it follows from \eqref{Eq_Explanation_Main_Condition} that, for all $z >0$, \begin{align} & \quad \ \mathds{E} \left[ \min \left \{ \sup_{\mathbf{x} \in [0,1]^d} \{ Y(\mathbf{x}) \}, \sup_{\mathbf{x} \in [\mathbf{h},\mathbf{h}+1]} \{ Y(\mathbf{x}) \} \right \} \right] \nonumber \\& = - z \log \left( \mathds{P} \left( \sup_{\mathbf{x} \in [0,1]^d} \{ Z(\mathbf{x}) \} \leq z \right) \right) - z \log \left( \mathds{P} \left( \sup_{\mathbf{x} \in [\mathbf{h},\mathbf{h}+1]} \{ Z(\mathbf{x}) \} \leq z \right) \right) \nonumber \\& \quad \ + z \log \left( \mathds{P} \left( \left \{ \sup_{\mathbf{x} \in [0,1]^d} \{ Z(\mathbf{x}) \} \leq z \right \} \bigcap \left \{ \sup_{\mathbf{x} \in [\mathbf{h},\mathbf{h}+1]} \{ Z(\mathbf{x}) \} \leq z \right \} \right) \right) \nonumber \\& = z \log \left( \frac{\mathds{P} \left( \left \{ \sup_{\mathbf{x} \in [0,1]^d} \{ Z(\mathbf{x}) \} \leq z \right \} \bigcap \left \{ \sup_{\mathbf{x} \in [\mathbf{h},\mathbf{h}+1]} \{ Z(\mathbf{x}) \} \leq z \right \} \right)}{\mathds{P} \left( \sup_{\mathbf{x} \in [0,1]^d} \{ Z(\mathbf{x}) \} \leq z \right) \mathds{P} \left( \sup_{\mathbf{x} \in [\mathbf{h},\mathbf{h}+1]} \{ Z(\mathbf{x}) \} \leq z \right)} \right). \label{Eq_Expression_Fraction_Probabilities} \end{align} Therefore, \eqref{Eq_Decay_Cubes_Y} implies that, for all $z >0$, \begin{equation} \label{Eq_Implication_1_Main_Condition} \lim_{\| \mathbf{h} \| \to \infty} \frac{\mathds{P} \left( \left \{ \sup_{\mathbf{x} \in [0,1]^d} \{ Z(\mathbf{x}) \} \leq z \right \} \bigcap \left \{ \sup_{\mathbf{x} \in [\mathbf{h},\mathbf{h}+1]} \{ Z(\mathbf{x}) \} \leq z \right \} \right)}{\mathds{P} \left( \sup_{\mathbf{x} \in [0,1]^d} \{ Z(\mathbf{x}) \} \leq z \right) \mathds{P} \left( \sup_{\mathbf{x} \in [\mathbf{h},\mathbf{h}+1]} \{ Z(\mathbf{x}) \} \leq z \right)}=1. \end{equation} Consequently, \eqref{Eq_Decay_Cubes_Y} appears as a mixing condition. This is confirmed by the following fact. 
As can be seen in the proof of Theorem \ref{Th_General_Case}, \eqref{Eq_Decay_Cubes_Y} entails that, for all $\mathbf{x} \in \mathds{R}^d$, $$ 2-\theta^Z(\{ \mathbf{0}, \mathbf{x} \}) \leq K \| \mathbf{x} \|^{-b}, $$ which gives that \begin{equation} \label{Eq_Condition_Mixing_Maxstable} \lim_{\| \mathbf{x} \| \to \infty} 2-\theta^Z(\{ \mathbf{0}, \mathbf{x} \}) =0. \end{equation} From \cite{kabluchko2010ergodic}, Theorem 3.1, and the fact that this result can be extended to random fields on $\mathds{R}^d$, $d>1$ \citep[see, e.g.,][p.20]{DombryHDR2012}, we know that \eqref{Eq_Condition_Mixing_Maxstable} means that $Z$ is mixing. Finally, we have, for all $z>0$, that \begin{align*} & \quad \ \frac{\mathds{P} \left( \left \{ \sup_{\mathbf{x} \in [0,1]^d} \{ Z(\mathbf{x}) \} \leq z \right \} \bigcap \left \{ \sup_{\mathbf{x} \in [\mathbf{h},\mathbf{h}+1]} \{ Z(\mathbf{x}) \} \leq z \right \} \right)}{\mathds{P} \left( \sup_{\mathbf{x} \in [0,1]^d} \{ Z(\mathbf{x}) \} \leq z \right) \mathds{P} \left( \sup_{\mathbf{x} \in [\mathbf{h},\mathbf{h}+1]} \{ Z(\mathbf{x}) \} \leq z \right)} \\& = 1+ \frac{\mathds{E} \left[ \mathds{I}_{ \left \{ \sup_{\mathbf{x} \in [0,1]^d} \{ Z(\mathbf{x}) \} \leq z \right \} } \mathds{I}_{ \left \{ \sup_{\mathbf{x} \in [\mathbf{h},\mathbf{h}+1]} \{ Z(\mathbf{x}) \} \leq z \right \} } \right] - \mathds{E} \left[ \mathds{I}_{ \left \{ \sup_{\mathbf{x} \in [0,1]^d} \{ Z(\mathbf{x}) \} \leq z \right \} } \right] \mathds{E} \left [ \mathds{I}_{ \left \{ \sup_{\mathbf{x} \in [\mathbf{h},\mathbf{h}+1]} \{ Z(\mathbf{x}) \} \leq z \right \} } \right] }{\mathds{E} \left[ \mathds{I}_{ \left \{ \sup_{\mathbf{x} \in [0,1]^d} \{ Z(\mathbf{x}) \} \leq z \right \} } \right] \mathds{E} \left [ \mathds{I}_{ \left \{ \sup_{\mathbf{x} \in [\mathbf{h},\mathbf{h}+1]} \{ Z(\mathbf{x}) \} \leq z \right \} } \right]}. 
\\& = 1+ \frac{ \mathrm{Cov} \left( \mathds{I}_{ \left \{ \sup_{\mathbf{x} \in [0,1]^d} \{ Z(\mathbf{x}) \} \leq z \right \} }, \mathds{I}_{ \left \{ \sup_{\mathbf{x} \in [\mathbf{h},\mathbf{h}+1]} \{ Z(\mathbf{x}) \} \leq z \right \} } \right)}{\mathds{E} \left[ \mathds{I}_{ \left \{ \sup_{\mathbf{x} \in [0,1]^d} \{ Z(\mathbf{x}) \} \leq z \right \} } \right] \mathds{E} \left [ \mathds{I}_{ \left \{ \sup_{\mathbf{x} \in [\mathbf{h},\mathbf{h}+1]} \{ Z(\mathbf{x}) \} \leq z \right \} } \right]}. \end{align*} Therefore, since $\log(1+u) \sim u$ as $u \to 0$, it follows from \eqref{Eq_Expression_Fraction_Probabilities} and \eqref{Eq_Implication_1_Main_Condition} that, for all $z>0$, $$\mathds{E} \left[ \min \left \{ \sup_{\mathbf{x} \in [0,1]^d} \{ Y(\mathbf{x}) \}, \sup_{\mathbf{x} \in [\mathbf{h},\mathbf{h}+1]} \{ Y(\mathbf{x}) \} \right \} \right] \underset{\| \mathbf{h} \| \to \infty}{\sim} z \, \frac{ \mathrm{Cov} \left( \mathds{I}_{ \left \{ \sup_{\mathbf{x} \in [0,1]^d} \{ Z(\mathbf{x}) \} \leq z \right \} }, \mathds{I}_{ \left \{ \sup_{\mathbf{x} \in [\mathbf{h},\mathbf{h}+1]} \{ Z(\mathbf{x}) \} \leq z \right \} } \right)}{\mathds{E} \left[ \mathds{I}_{ \left \{ \sup_{\mathbf{x} \in [0,1]^d} \{ Z(\mathbf{x}) \} \leq z \right \} } \right] \mathds{E} \left [ \mathds{I}_{ \left \{ \sup_{\mathbf{x} \in [\mathbf{h},\mathbf{h}+1]} \{ Z(\mathbf{x}) \} \leq z \right \} } \right]}.$$ Hence, we deduce from \eqref{Eq_Decay_Cubes_Y} that, for all $z>0$, $$ \lim_{\| \mathbf{h} \| \to \infty} \mathrm{Cov} \left( \mathds{I}_{ \left \{ \sup_{\mathbf{x} \in [0,1]^d} \{ Z(\mathbf{x}) \} \leq z \right \} }, \mathds{I}_{ \left \{ \sup_{\mathbf{x} \in [\mathbf{h},\mathbf{h}+1]} \{ Z(\mathbf{x}) \} \leq z \right \} } \right)=0.$$ In the next proposition, we provide conditions ensuring that \eqref{Eq_Condition_Positivity_Sigma2} is satisfied. Before stating this result, we briefly recall the concept of association, which plays an important role for max-stable random vectors.
\begin{definition}[Association] \label{Def_Association} A random vector $\mathbf{R} \in \mathds{R}^q$, for $q \geq 1$, is said to be associated if $\mathrm{Cov}(g_1(\mathbf{R}), g_2(\mathbf{R})) \geq 0$ for all non-decreasing functions $g_i: \mathds{R}^q \to \mathds{R}$ such that $\mathds{E}[|g_i(\mathbf{R})|] < \infty$ and $\mathds{E}[|g_1(\mathbf{R}) g_2(\mathbf{R})|] < \infty$ ($i=1,2$). Here, the term ``non-decreasing'' function must be understood in the following sense: for $\mathbf{r}_1, \mathbf{r}_2 \in \mathds{R}^q$, $\mathbf{r}_1 \leq \mathbf{r}_2$ implies $g_i(\mathbf{r}_1) \leq g_i(\mathbf{r}_2)$ ($i=1,2$), where $\mathbf{r}_1 \leq \mathbf{r}_2$ is a coordinatewise inequality. \end{definition} \begin{proposition} \label{Prop_Condition_Sigma_Positive} Let $\{ Z(\mathbf{x}) \}_{\mathbf{x} \in \mathds{R}^d}$ be a simple, stationary and sample-continuous max-stable random field. For any function $F$ which is measurable from $((0, \infty), \mathcal{B}((0,\infty)))$ to $(\mathds{R},\mathcal{B}(\mathds{R}))$, satisfies \eqref{Eq_Assumption_F} and is moreover non-decreasing and non-constant, the random field $\{ X(\mathbf{x}) \}_{\mathbf{x} \in \mathds{R}^d}=\{ F(Z(\mathbf{x})) \}_{\mathbf{x} \in \mathds{R}^d}$ satisfies $$ \sigma^2= \int_{\mathds{R}^d} \mathrm{Cov}(X(\mathbf{0}), X(\mathbf{x})) \ \lambda(\mathrm{d}\mathbf{x})>0.$$ \end{proposition} \subsection{CLT in the case of the Brown-Resnick and Smith random fields} In this section, we show that if $\{ Z(\mathbf{x}) \}_{\mathbf{x} \in \mathds{R}^d}$ is the Smith or the Brown-Resnick random field with an appropriate variogram, then the random field $\{ F(Z(\mathbf{x})) \}_{\mathbf{x} \in \mathds{R}^d}$, where $F$ is as in Theorem \ref{Th_General_Case}, satisfies the CLT. In order to establish these results, we need the following proposition about the spectral random field of the Brown-Resnick random field. 
\begin{proposition} \label{Prop_Upper_Bound_Expectation_Min_Brown_Resnick} Let $\{ W(\mathbf{x}) \}_{\mathbf{x} \in \mathds{R}^d}$ be a centred Gaussian random field with stationary increments such that $W(\mathbf{0})=0$. Moreover, assume that $W$ is a.s. bounded on $[0,1]^d$ and that, for $\mathbf{h} \in \mathds{Z}^d$, \begin{equation} \label{Eq_Condition1_Variogram_TCL} \sup_{\mathbf{x} \in [0,1]^d} \{ \sigma_W^2(\mathbf{h})-\sigma_W^2(\mathbf{x}+\mathbf{h}) \} \underset{\| \mathbf{h} \| \to \infty}{=} o (\sigma_W^2(\mathbf{h})), \end{equation} and \begin{equation} \label{Eq_Condition2_Variogram_TCL} \lim_{\| \mathbf{h} \| \to \infty} \frac{\sigma_W^2(\mathbf{h})}{\ln( \| \mathbf{h} \|)} =\infty. \end{equation} Then the random field $Y$ defined by $$ \{ Y(\mathbf{x}) \}_{\mathbf{x} \in \mathds{R}^d}=\left \{ \exp \left( W(\mathbf{x})-\frac{\sigma_W^2(\mathbf{x})}{2} \right) \right \}_{\mathbf{x} \in \mathds{R}^d}$$ satisfies Condition \eqref{Eq_Decay_Cubes_Y} for all $b>0$. \end{proposition} \begin{proposition} \label{Prop_Variogram_Power} Let $\{ W(\mathbf{x}) \}_{\mathbf{x} \in \mathds{R}^d}$ be a centred Gaussian random field with stationary increments such that $W(\mathbf{0})=0$. Moreover, assume that \begin{equation} \label{Eq_Condition_Variogram_Power} \sigma_W^2(\mathbf{x})=\eta \| \mathbf{x} \|^{\alpha}, \quad \mathbf{x} \in \mathds{R}^d, \end{equation} where $\eta>0$ and $\alpha \in (0, 2]$. Then $W$ is sample-continuous and the random field $Y$ defined by $$ \{ Y(\mathbf{x}) \}_{\mathbf{x} \in \mathds{R}^d}=\left \{ \exp \left( W(\mathbf{x})-\frac{\sigma_W^2(\mathbf{x})}{2} \right) \right \}_{\mathbf{x} \in \mathds{R}^d}$$ satisfies Condition \eqref{Eq_Decay_Cubes_Y} for all $b>0$. \end{proposition} \begin{Rq} The field $W$ in Proposition \ref{Prop_Variogram_Power} is a (L\'evy) fractional Brownian field with Hurst parameter $\alpha/2 \in (0,1]$; see, e.g., \cite{Bierme2017random}, Section 1.2.3 and \cite{samorodnitsky1994stable}, Example 8.1.3. 
Its covariance function is given by $$ \mathrm{Cov}(W(\mathbf{x}), W(\mathbf{y}))=\frac{\eta}{2} \left( \| \mathbf{x} \|^{\alpha} + \| \mathbf{y} \|^{\alpha} - \| \mathbf{x}-\mathbf{y} \|^{\alpha} \right), \quad \mathbf{x}, \mathbf{y} \in \mathds{R}^d.$$ \end{Rq} Combining Theorem \ref{Th_General_Case} and Proposition \ref{Prop_Variogram_Power}, we directly obtain the following result. \begin{theorem} \label{Th_2_TCL_Brown_Resnick} Let $\{ Z(\mathbf{x}) \}_{\mathbf{x} \in \mathds{R}^d}$ be the Brown-Resnick random field associated with the variogram $\gamma_W(\mathbf{x})=\eta \| \mathbf{x} \|^{\alpha}$, where $\eta >0$ and $\alpha \in (0,2]$, and $F$ be as in Theorem \ref{Th_General_Case}. Then $\{ F(Z(\mathbf{x})) \}_{\mathbf{x} \in \mathds{R}^d}$ satisfies the CLT. \end{theorem} \begin{Rq} Using Proposition \ref{Prop_Upper_Bound_Expectation_Min_Brown_Resnick} and a proof very similar to that of Theorem \ref{Th_2_TCL_Brown_Resnick}, we obtain the following result. Let $\{ Z(\mathbf{x}) \}_{\mathbf{x} \in \mathds{R}^d}$ be the Brown-Resnick random field built with a random field $\{ W(\mathbf{x}) \}_{\mathbf{x} \in \mathds{R}^d}$ which is sample-continuous and whose variogram satisfies $$\sup_{\mathbf{x} \in [0,1]^d} \{ \gamma_W(\mathbf{h})-\gamma_W(\mathbf{x}+\mathbf{h}) \}\underset{\| \mathbf{h} \| \to \infty}{=}o(\gamma_W(\mathbf{h})),$$ and $$ \lim_{\| \mathbf{h} \| \to \infty} \frac{\gamma_W(\mathbf{h})}{\ln( \| \mathbf{h} \|)} =\infty.$$ Moreover, let $F$ be as in Theorem \ref{Th_General_Case}. Then $\{ F(Z(\mathbf{x})) \}_{\mathbf{x} \in \mathds{R}^d}$ satisfies the CLT. \end{Rq} Proceeding similarly, we obtain the following result for the Smith random field. \begin{theorem} \label{Th_TCL_Smith} Let $\{ Z(\mathbf{x}) \}_{\mathbf{x} \in \mathds{R}^d}$ be the Smith random field with covariance matrix $\Sigma$ and $F$ be as in Theorem \ref{Th_General_Case}. Then the random field $\{ F(Z(\mathbf{x})) \}_{\mathbf{x} \in \mathds{R}^d}$ satisfies the CLT.
\end{theorem} \section{Application to risk assessment} \label{Sec_Application} If the random field $\{ X(\mathbf{x}) \}_{\mathbf{x} \in \mathds{R}^2}$ is a model for the cost field (e.g., the economic cost or the cost for an insurance company) due to damaging events having a spatial extent (typically such as weather events), then, as detailed in \cite{koch2017spatial, koch2018SpatialRiskAxiomsGeneral}, the random variable $$ L_N(V_n)=\frac{1}{\lambda(V_n)} \int_{V_n} X(\mathbf{x}) \ \lambda (\mathrm{d}\mathbf{x}),$$ where $V_n \subset \mathds{R}^2$, is relevant for risk assessment. It can be interpreted as the loss per surface unit (or, less rigorously, as the loss per insurance policy) over the region $V_n$. If $X$ has a first moment and a constant expectation (i.e., $X$ is first-order stationary), then we have that \begin{align} \frac{1}{[\lambda(V_n)]^{1/2}} \int_{V_n} (X(\mathbf{x})-\mathds{E}[X(\mathbf{x})]) \ \lambda(\mathrm{d}\mathbf{x}) &= \frac{1}{[\lambda(V_n)]^{1/2}} \int_{V_n} X(\mathbf{x}) \ \lambda(\mathrm{d}\mathbf{x}) - [\lambda(V_n)]^{1/2} \mathds{E}[X(\mathbf{0})] \nonumber \\& = [\lambda(V_n)]^{1/2} (L_N(V_n)-\mathds{E}[X(\mathbf{0})]). \label{Eq_Expression_Quantity_Function_L_N} \end{align} Hence, the asymptotic (when $V_n \uparrow \mathds{R}^2$) probabilistic behaviour of $L_N(V_n)$ can be derived from that of the left-hand side of \eqref{Eq_Expression_Quantity_Function_L_N}, a quantity which appears in the CLT for the random field $X$, provided such a CLT exists. This explains the usefulness of a CLT in a risk assessment context. As a short application, we now consider an insurance company called Ins. We assume that, during year $n$, Ins only covers the totality of the risk associated with a specified hazard over a whole continuous region, denoted by $V_n$ and referred to as the domain of Ins in the following. Moreover, let us assume that each insurance policy has a deductible $v>0$.
Suppose that the process of the cost due to the mentioned hazard during a specified period (say one year) is given by a stationary and sample-continuous max-stable random field $\{ Z_G(\mathbf{x}) \}_{\mathbf{x} \in \mathds{R}^2}$ having standard Gumbel margins, i.e. such that, for all $\mathbf{x} \in \mathds{R}^2$, $\mathds{P}(Z_G(\mathbf{x}) \leq z)= \exp(-\exp(-z))$, $z \in \mathds{R}$. On the region $V_n$, this cost field is related to policies covered by Ins only. Thus, the normalized loss for Ins is given by $$ L_{N}(V_n)= \frac{1}{\lambda(V_n)} \int_{V_n} (Z_G(\mathbf{x})-v) \ \mathds{I}_{ \{Z_G(\mathbf{x}) >v \} } \ \lambda(\mathrm{d} \mathbf{x}).$$ Now, observe that $\{ Z_G(\mathbf{x}) \}_{\mathbf{x} \in \mathds{R}^2}=\{ \ln(Z(\mathbf{x})) \}_{\mathbf{x} \in \mathds{R}^2}$ for a simple, stationary and sample-continuous max-stable random field $\{ Z(\mathbf{x}) \}_{\mathbf{x} \in \mathds{R}^2}$. Hence, denoting $u = \exp(v)$, we have that $$ L_{N}(V_n)= \frac{1}{\lambda(V_n)} \int_{V_n} F(Z(\mathbf{x})) \ \lambda(\mathrm{d} \mathbf{x}),$$ where the function $F$ is defined by $F(z)=\ln \left( z/u \right) \ \mathds{I}_{ \{z>u\} }$, $z>0$. It is clear that $F$ is measurable from $((0, \infty), \mathcal{B}((0,\infty)))$ to $(\mathds{R},\mathcal{B}(\mathds{R}))$. Moreover, by construction, the random field $\{ \ln ( Z(\mathbf{x}) ) \}_{\mathbf{x} \in \mathds{R}^2}$ has Gumbel margins. Hence, for all $\delta>0$, we have that $\mathds{E}\left[ |F(Z(\mathbf{0}))|^{2+\delta} \right]< \infty$. In addition, $F$ is non-constant and non-decreasing and, thus, we deduce from Proposition \ref{Prop_Condition_Sigma_Positive} that \eqref{Eq_Condition_Positivity_Sigma2} is satisfied. Let us choose a $\delta>0$ and assume that $Z$ satisfies \eqref{Eq_Decay_Cubes_Y}. Furthermore, assume that the sequence of domains (over the years) of Ins, $(V_n)_{n \in \mathds{N}}$, is a Van Hove sequence in $\mathds{R}^2$. 
Then, applying Theorem \ref{Th_General_Case} and using \eqref{Eq_Expression_Quantity_Function_L_N}, we obtain that \begin{equation} \label{TCL_Application} [\lambda(V_n)]^{1/2} (L_{N}(V_n)-\mathds{E}[F(Z(\mathbf{0}))]) \overset{d}{\rightarrow} \mathcal{N}(0, \sigma^2), \quad \mbox{as } n\to\infty. \end{equation} This gives the asymptotic probabilistic behaviour of the normalized loss suffered by Ins. If the sequence $(V_n)_{n \in \mathds{N}}$ is constant (i.e. if Ins does not plan to extend its domain) but the region $V_n$ is large enough, \eqref{TCL_Application} provides an approximation of the distribution of $L_{N}(V_n)$: $$L_N(V_n) \approx \mathcal{N} \left(\mathds{E}[F(Z(\mathbf{0}))], \frac{\sigma^2}{\lambda(V_n)} \right),$$ where $\approx$ means ``approximately follows''. If $V_n$ increases in the Van Hove sense (e.g., if Ins extends its domain), \eqref{TCL_Application} gives, for instance, insights about how the Value-at-Risk of $L_N(V_n)$ evolves. This is related to the axiom of asymptotic spatial homogeneity of order $- \alpha$, see \cite{koch2017spatial, koch2018SpatialRiskAxiomsGeneral}. \section{Conclusion} \label{Sec_Conclusion} We have shown a general CLT for functions of stationary max-stable random fields on $\mathds{R}^d$. Moreover, we have seen that appropriate functions of the Brown-Resnick random field with a power variogram and of the Smith random field satisfy the CLT. As briefly discussed, such results can be useful in a risk assessment context. Moreover, this paper proposes a new contribution to the limited literature about CLTs for random fields on $\mathds{R}^d$. Future work might consist in relaxing the sample-continuity and stationarity assumptions on the max-stable random field $Z$ as well as letting the function $F$ depend on the location $\mathbf{x}$ (with the notations of Theorem \ref{Th_General_Case}).
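As a numerical complement to Section \ref{Sec_Application}, the normal approximation of $L_N(V_n)$ can be sketched in a few lines of Python. The snippet below estimates $\mathds{E}[F(Z(\mathbf{0}))]=\mathds{E}[(Z_G(\mathbf{0})-v)\mathds{I}_{\{Z_G(\mathbf{0})>v\}}]$ by Monte Carlo (only the standard Gumbel margin is needed for this expectation) and then evaluates an approximate Value-at-Risk; the values of $\sigma^2$ and $\lambda(V_n)$ used are hypothetical placeholders, since $\sigma^2$ depends on the whole spatial dependence structure of $Z$ and must be computed or estimated for the chosen model.

```python
import math
import random
from statistics import NormalDist

# Illustrative sketch of the normal approximation of L_N(V_n).
# sigma2 (= sigma^2) and area (= lambda(V_n)) below are HYPOTHETICAL
# placeholder values; in practice sigma2 depends on the max-stable model.

def mean_excess_gumbel(v, n_sim=200_000, seed=1):
    """Monte Carlo estimate of E[(G - v)^+] for a standard Gumbel G,
    i.e. of E[F(Z(0))]; theoretical value: int_v^inf (1 - exp(-exp(-z))) dz."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_sim):
        u = min(max(rng.random(), 1e-12), 1.0 - 1e-12)
        g = -math.log(-math.log(u))  # standard Gumbel draw by inversion
        total += max(g - v, 0.0)
    return total / n_sim

v = 1.0           # deductible
mu = mean_excess_gumbel(v)
sigma2 = 0.5      # hypothetical value of sigma^2
area = 10_000.0   # hypothetical lambda(V_n)

# L_N(V_n) approximately follows N(mu, sigma2 / area); for instance its
# 99% Value-at-Risk under this approximation is:
var_99 = NormalDist(mu=mu, sigma=math.sqrt(sigma2 / area)).inv_cdf(0.99)
print(mu, var_99)
```

Under this (purely illustrative) parametrization, the approximate Value-at-Risk exceeds the mean loss per surface unit by $z_{0.99}\,\sigma/[\lambda(V_n)]^{1/2}$, which shrinks as the domain grows, in line with the asymptotic spatial homogeneity discussed above.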
\section{Appendix: Proofs} \subsection{For Theorem \ref{Th_General_Case}} \begin{proof} \noindent \textbf{Part 1: Proof of \eqref{Eq_Convergence_Abs_Int_Cov_Thm}} Using \eqref{Eq_Assumption_F} and the stationarity of $X$ (by stationarity of $Z$), we have, for all $\mathbf{x} \in \mathds{R}^d$, $\mathds{E} \left[ |X(\mathbf{x})|^{2+\delta} \right]<\infty$. Hence, Davydov's inequality (\citeauthor{davydov1968convergence}, \citeyear{davydov1968convergence}, Equation (2.2); \citeauthor{ivanov1989statistical}, \citeyear{ivanov1989statistical}, Lemma 1.6.2) gives that \begin{equation} \label{Eq_Davydov_Inequality} | \mathrm{Cov}(X(\mathbf{0}), X(\mathbf{x})) | \leq 10 \left[ \alpha^X(\{ \mathbf{0} \}, \{ \mathbf{x} \}) \right]^{\frac{\delta}{2+\delta}} \left( \mathds{E} \left[ |X(\mathbf{0})|^{2+\delta} \right] \right)^{\frac{1}{2+\delta}} \left( \mathds{E} \left[ |X(\mathbf{x})|^{2+\delta} \right] \right)^{\frac{1}{2+\delta}}. \end{equation} For all $\mathbf{x} \in \mathds{R}^d$, since $F$ is Borel-measurable, $X(\mathbf{x})=F(Z(\mathbf{x}))$ is $\mathcal{F}^Z_{ \{ \mathbf{x} \} }$-measurable. Hence, $\mathcal{F}^X_{ \{ \mathbf{x} \} } \subset \mathcal{F}^Z_{ \{ \mathbf{x} \} }$, which gives that, for all $\mathbf{x} \in \mathds{R}^d$, \begin{equation} \label{Eq_Majoration_Alpha_Mixing_X_With_Alpha_Mixing_Z} \alpha^X \left( \{ \mathbf{0} \}, \{ \mathbf{x} \} \right) \leq \alpha^Z \left( \{ \mathbf{0} \}, \{ \mathbf{x} \} \right). \end{equation} Moreover, using \eqref{Eq_Majoration_Alphamixing_With_Betamixing} and Corollary 2.2 in \cite{dombry2012strong}, we obtain that, for all $\mathbf{x} \in \mathds{R}^d$, \begin{equation} \label{Eq_Majoration_Alpha_Mixing_Z} \alpha^Z \left( \{ \mathbf{0} \}, \{ \mathbf{x} \} \right) \leq 2 [2-\theta^Z(\{ \mathbf{0}, \mathbf{x} \})]. 
\end{equation} Therefore, the combination of \eqref{Eq_Majoration_Alpha_Mixing_X_With_Alpha_Mixing_Z} and \eqref{Eq_Majoration_Alpha_Mixing_Z} gives that $$ \alpha^X \left( \{ \mathbf{0} \}, \{ \mathbf{x} \} \right) \leq 2 [2-\theta^Z(\{ \mathbf{0}, \mathbf{x} \})].$$ Hence, \eqref{Eq_Davydov_Inequality} and the stationarity of $X$ give that \begin{equation} \label{Eq_Reformulation_Davydov_Inequality} | \mathrm{Cov}(X(\mathbf{0}), X(\mathbf{x})) | \leq 2^{\frac{\delta}{2+\delta}} 10 \left( \mathds{E} \left[ |X(\mathbf{0})|^{2+\delta} \right] \right)^{\frac{2}{2+\delta}} [2-\theta^Z(\{ \mathbf{0}, \mathbf{x} \})]^{\frac{\delta}{2+\delta}}. \end{equation} Now, it follows from \eqref{Eq_Property_Extremal_Coefficient} that $\theta^Z(\{ \mathbf{0}, \mathbf{x} \})= \mathds{E}[\max \{ Y(\mathbf{0}), Y(\mathbf{x}) \}]$. Thus, using the facts that, for all $\mathbf{x} \in \mathds{R}^d$, $\mathds{E}[Y(\mathbf{x})]=1$, and, for all $a, b \in \mathds{R}, a+b-\max \{a,b\}=\min\{a,b\}$, as well as the linearity of the expectation and \eqref{Eq_Decay_Cubes_Y}, we have that \begin{align} 2-\theta^Z(\{ \mathbf{0}, \mathbf{x} \}) &=\mathds{E}[Y(\mathbf{0})+Y(\mathbf{x})-\max \{ Y(\mathbf{0}), Y(\mathbf{x}) \}] \nonumber \\& =\mathds{E}[ \min \{ Y(\mathbf{0}), Y(\mathbf{x}) \}] \nonumber \\& \leq \mathds{E} \left[ \min \left \{ \sup_{\mathbf{y} \in [0,1]^d} \{ Y(\mathbf{y}) \}, \sup_{\mathbf{y} \in [\mathbf{x},\mathbf{x}+1]} \{ Y(\mathbf{y}) \} \right \} \right] \nonumber \\& \leq K \| \mathbf{x} \|^{-b}.
\nonumber \end{align} As for all $\mathbf{x} \in \mathds{R}^d$, $\theta^Z(\{ \mathbf{0}, \mathbf{x} \}) \leq 2$, and $b>d(2+\delta)/\delta$, this directly implies that $$ \int_{\mathds{R}^d} \left[ 2-\theta^Z(\{ \mathbf{0}, \mathbf{x} \}) \right]^{\frac{\delta}{2+\delta}} \ \lambda(\mathrm{d}\mathbf{x}) < \infty.$$ Finally, we obtain, using \eqref{Eq_Assumption_F} and \eqref{Eq_Reformulation_Davydov_Inequality}, that \begin{equation} \label{Eq_Convergence_Int_Abs_Cov} \int_{\mathds{R}^d} | \mathrm{Cov}(X(\mathbf{0}), X(\mathbf{x})) | \ \lambda(\mathrm{d}\mathbf{x}) = K_1 < \infty, \end{equation} which shows \eqref{Eq_Convergence_Abs_Int_Cov_Thm}. \noindent \textbf{Part 2: Study of $(I_{n,1})_{n \in \mathds{N}}$} Introducing the random field \begin{equation} \label{Eq_Def_Xtilde} \tilde X(\mathbf{h})=\int_{[\mathbf{h}, \mathbf{h}+1]} X(\mathbf{x}) \ \lambda(\mathrm{d}\mathbf{x}),\quad \mathbf{h} \in \mathds{Z}^d, \end{equation} we have, for all $n \in \mathds{N}$, that \begin{equation} \label{Eq_Integral_Sum_Representation} \int_{A_n} X(\mathbf{x}) \ \lambda(\mathrm{d}\mathbf{x})=\sum_{\mathbf{h} \in \Lambda_n} \tilde X(\mathbf{h}), \end{equation} where $\Lambda_n=\{ \mathbf{h} \in \mathds{Z}^d: [\mathbf{h}, \mathbf{h}+1] \subset A_n \}$. We will now show that the random field $\left \{ \tilde{X}(\mathbf{h}) \right \}_{\mathbf{h} \in \mathds{Z}^d}$ and the sequence $(\Lambda_n)_{n \in \mathds{N}}$ satisfy the assumptions of Theorem \ref{theo:BCLT} (Bolthausen's theorem). First, note that $\tilde{X}$ is stationary since $X$ is stationary. As already mentioned, since $F$ is Borel-measurable, we have, for all $\mathbf{x} \in \mathds{R}^d$, that $X(\mathbf{x})=F(Z(\mathbf{x}))$ is $\mathcal{F}^Z_{ \{ \mathbf{x} \} }$-measurable. Moreover, we know that the integral of measurable functions is again measurable. Hence, we have that $\tilde{X}(\mathbf{h})$ is $\mathcal{F}_{[\mathbf{h}, \mathbf{h}+1]}^{Z}$-measurable.
This directly gives that, for all $\mathbf{h} \in \mathds{Z}^d$, \begin{equation} \label{Eq_Inclusion_SigmafieldXk_SigmafieldZ} \mathcal{F}^{\tilde{X}}_{ \{ \mathbf{h} \} } \subset \mathcal{F}_{[ \mathbf{h}, \mathbf{h}+1]}^{Z}. \end{equation} Let $S \subset \mathds{Z}^d$. From \eqref{Eq_Inclusion_SigmafieldXk_SigmafieldZ}, it follows that \begin{equation} \label{Eq_Inclusion_0_Sigmafield_Xtilde} \sigma \left( \bigcup_{\mathbf{h} \in S} \mathcal{F}^{\tilde{X}}_{ \{ \mathbf{h} \} } \right) \subset \sigma \left( \bigcup_{\mathbf{h} \in S} \mathcal{F}_{[ \mathbf{h}, \mathbf{h}+1]}^{Z} \right). \end{equation} Additionally, it is easily shown that $$ \sigma \left( \bigcup_{\mathbf{h} \in S} \mathcal{F}^{\tilde{X}}_{ \{ \mathbf{h} \} } \right) = \mathcal{F}^{\tilde{X}}_{ S } \quad \mbox{and} \quad \sigma \left( \bigcup_{\mathbf{h} \in S} \mathcal{F}^{Z}_{ [ \mathbf{h}, \mathbf{h}+1] } \right) = \mathcal{F}^{Z}_{ \bigcup_{\mathbf{h} \in S} [ \mathbf{h}, \mathbf{h}+1]}, $$ which yield, using \eqref{Eq_Inclusion_0_Sigmafield_Xtilde}, that \begin{equation} \label{Eq_Majoration_Sigmafield_Xtilde_Sigmafield_X} \mathcal{F}^{\tilde{X}}_{ S } \subset \mathcal{F}^{Z}_{ \bigcup_{\mathbf{h} \in S} [ \mathbf{h}, \mathbf{h}+1] }. \end{equation} Thus, using \eqref{Eq_Def_2_Beta_Mixing} and \eqref{Eq_Majoration_Sigmafield_Xtilde_Sigmafield_X}, we obtain that, for all $S_1, S_2$ disjoint subsets of $\mathds{Z}^d$, \begin{equation} \label{Eq_Majoration_Beta_Xtilde_Beta_Z} \beta^{\tilde{X}} \left( S_1, S_2 \right) \leq \beta^Z \left( \bigcup_{\mathbf{h}_1 \in S_1} [ \mathbf{h}_1, \mathbf{h}_1+1], \bigcup_{\mathbf{h}_2 \in S_2} [ \mathbf{h}_2, \mathbf{h}_2+1] \right). \end{equation} Now, $([ \mathbf{h}_1, \mathbf{h}_1+1])_{ \mathbf{h}_1 \in S_1}$ and $([ \mathbf{h}_2, \mathbf{h}_2+1])_{ \mathbf{h}_2 \in S_2}$ are countable families of compact subsets of $\mathds{R}^d$.
Therefore, as $Z$ is a simple and sample-continuous max-stable random field on $\mathds{R}^d$, we can apply Theorem 2.2 in \cite{dombry2012strong}. The first point gives that, for any compact $S \subset \mathds{R}^d$, the quantity $C^Z(S)=\mathds{E} \left[ \sup_{\mathbf{x} \in S} \left \{ Z(\mathbf{x})^{-1} \right \} \right]$ is finite. Moreover, the third point yields that, for all $S_1, S_2$ disjoint subsets of $\mathds{Z}^d$, \begin{align} & \quad \ \beta^Z \left( \bigcup_{\mathbf{h}_1 \in S_1} [ \mathbf{h}_1, \mathbf{h}_1+1], \bigcup_{\mathbf{h}_2 \in S_2} [ \mathbf{h}_2, \mathbf{h}_2+1] \right) \nonumber \\& \leq 2 \sum_{\mathbf{h}_1 \in S_1} \sum_{\mathbf{h}_2 \in S_2} \left[ C^Z \left( [ \mathbf{h}_1, \mathbf{h}_1+1] \right)+C^Z \left( [\mathbf{h}_2, \mathbf{h}_2+1] \right) \right] \nonumber \\& \quad \quad \quad \quad \quad \quad \ \ \left[\theta^Z\left([ \mathbf{h}_1, \mathbf{h}_1+1] \right)+\theta^Z\left([\mathbf{h}_2,\mathbf{h}_2+1] \right) -\theta^Z\left([ \mathbf{h}_1, \mathbf{h}_1+1]\cup[\mathbf{h}_2,\mathbf{h}_2+1] \right) \right]. \label{Eq_Majoration_Beta_Z_With_Extremal_Coefficients} \end{align} Let us introduce $K_2=C^Z \left( [0,1]^d \right)$. By stationarity of $Z$, we have that, for all $\mathbf{h} \in \mathds{Z}^d$, \begin{equation} \label{Eq_Value_C_Cube_k} C^Z \left( [\mathbf{h}, \mathbf{h}+1] \right)=K_2. \end{equation} Furthermore, let $Y$ be a spectral random field of $Z$. 
Using the stationarity of $Z$, \eqref{Eq_Property_Extremal_Coefficient}, the linearity of the expectation and the fact that, for all $a, b \in \mathds{R}, a+b-\max \{a,b\}=\min\{a,b\}$, we have that \begin{align} & \quad \ \theta^Z\left([ \mathbf{h}_1, \mathbf{h}_1+1] \right)+\theta^Z\left([\mathbf{h}_2,\mathbf{h}_2+1] \right) -\theta^Z\left([\mathbf{h}_1, \mathbf{h}_1+1] \cup[\mathbf{h}_2,\mathbf{h}_2+1] \right) \nonumber \\& = \theta^Z\left([ 0,1]^d \right)+\theta^Z\left([\mathbf{h}_2-\mathbf{h}_1,\mathbf{h}_2-\mathbf{h}_1+1] \right) -\theta^Z\left([ 0,1 ]^d \cup[\mathbf{h}_2-\mathbf{h}_1,\mathbf{h}_2-\mathbf{h}_1+1] \right) \nonumber \\& = \mathds{E} \left[ \sup_{\mathbf{x} \in [ 0,1]^d} \{ Y(\mathbf{x}) \} + \sup_{\mathbf{x} \in [\mathbf{h}_2-\mathbf{h}_1,\mathbf{h}_2-\mathbf{h}_1+1]} \{ Y(\mathbf{x}) \} - \sup_{\mathbf{x} \in [ 0,1]^d \bigcup [\mathbf{h}_2-\mathbf{h}_1,\mathbf{h}_2-\mathbf{h}_1 +1]} \{ Y(\mathbf{x}) \} \right] \nonumber \\& = \mathds{E} \left[ \sup_{\mathbf{x} \in [ 0,1]^d} \{ Y(\mathbf{x}) \} + \sup_{\mathbf{x} \in [\mathbf{h}_2-\mathbf{h}_1,\mathbf{h}_2-\mathbf{h}_1+1]} \{ Y(\mathbf{x}) \} - \max \left \{ \sup_{\mathbf{x} \in [ 0,1]^d} \{ Y(\mathbf{x}) \}, \sup_{\mathbf{x} \in [\mathbf{h}_2-\mathbf{h}_1,\mathbf{h}_2-\mathbf{h}_1+1]} \{ Y(\mathbf{x}) \} \right \} \right] \nonumber \\& = \mathds{E} \left[ \min \left \{ \sup_{\mathbf{x} \in [ 0,1]^d} \{ Y(\mathbf{x}) \}, \sup_{\mathbf{x} \in [\mathbf{h}_2-\mathbf{h}_1,\mathbf{h}_2-\mathbf{h}_1+1]} \{ Y(\mathbf{x}) \} \right \} \right]. 
\label{Eq_Sum_Extremal_Coefficients} \end{align} Finally, combining \eqref{Eq_Majoration_Beta_Xtilde_Beta_Z}, \eqref{Eq_Majoration_Beta_Z_With_Extremal_Coefficients}, \eqref{Eq_Value_C_Cube_k} and \eqref{Eq_Sum_Extremal_Coefficients}, we obtain, for all $S_1, S_2$ disjoint subsets of $\mathds{Z}^d$, that \begin{equation} \label{Eq_Majoration_Beta_Xtilde_With_Extremal_Coefficients} \beta^{\tilde{X}} \left( S_1, S_2 \right) \leq 4K_2 \sum_{\mathbf{h}_1 \in S_1} \sum_{\mathbf{h}_2 \in S_2} \mathds{E} \left[ \min \left \{ \sup_{\mathbf{x} \in [ 0,1]^d} \{ Y(\mathbf{x}) \}, \sup_{\mathbf{x} \in [\mathbf{h}_2-\mathbf{h}_1,\mathbf{h}_2-\mathbf{h}_1+1]} \{ Y(\mathbf{x}) \} \right \} \right]. \end{equation} Now, we show that \eqref{Eq_TCL2}, \eqref{Eq_TCL1} and \eqref{Eq_TCL3} are satisfied. Using \eqref{Eq_Majoration_Alphamixing_With_Betamixing} and \eqref{eq:amix}, we obtain, for all $m\geq 1$ and $k,l\in\mathds{N}\cup\{\infty\}$, that \begin{equation} \label{Eq_Majoration_Alphakl_Xtilde_BetaklXtilde} \alpha^{\tilde{X}}_{kl}(m) \leq \frac{1}{2} \sup \Big\{\beta^{\tilde{X}}(S_1,S_2):\ S_1, S_2 \subset \mathds{Z}^d,\ \ |S_1| \leq k, |S_2| \leq l,\ d(S_1,S_2)\geq m \Big\}. \end{equation} Let $S_1$ and $S_2$ be subsets of $\mathds{Z}^d$ such that $|S_1| \leq k$, $|S_2| \leq l$ and $d(S_1, S_2) \geq m$, where $k, l \in \mathds{N}$ and $m \geq 1$. We have that $$ \beta^{\tilde{X}}(S_1,S_2) \leq 4K_2kl \max_{\mathbf{h}_1 \in S_1, \mathbf{h}_2 \in S_2} \left \{ \mathds{E} \left[ \min \left \{ \sup_{\mathbf{x} \in [0,1]^d} \{ Y(\mathbf{x}) \}, \sup_{\mathbf{x} \in [\mathbf{h}_2-\mathbf{h}_1, \mathbf{h}_2-\mathbf{h}_1+1]} \{ Y(\mathbf{x}) \} \right \} \right] \right \}.$$ Since $d(S_1, S_2) \geq m$, we have, for all $\mathbf{h}_1 \in S_1$ and $\mathbf{h}_2 \in S_2$, that $\| \mathbf{h}_2-\mathbf{h}_1 \| \geq m$. 
Thus, using \eqref{Eq_Decay_Cubes_Y}, we obtain that $$\beta^{\tilde{X}}(S_1,S_2) \leq 4KK_2kl m^{-b}.$$ Therefore, using \eqref{Eq_Majoration_Alphakl_Xtilde_BetaklXtilde}, for all $m \geq 1$ and $k, l \in \mathds{N}$, we have that \begin{equation} \label{Eq_Majoration_alphakl_Xtilde} \alpha^{\tilde{X}}_{kl}(m) \leq 2KK_2kl m^{-b}. \end{equation} Hence, since $b > 2d$, we obtain, for all $k, l \geq 1$, that $$ m^{d-1} \alpha^{\tilde{X}}_{kl}(m) \leq 2KK_2klm^{-(1+d)},$$ which immediately gives, since $d>0$ and $\alpha$-mixing coefficients are non-negative, that $$ \sum_{m=1}^{\infty} m^{d-1} \alpha^{\tilde{X}}_{kl}(m) < \infty.$$ Thus, \eqref{Eq_TCL2} is satisfied. Now, let $S_1$ and $S_2$ be subsets of $\mathds{Z}^d$ such that $|S_1| \leq 1$, $|S_2| \leq l$ and $d(S_1, S_2) \geq m$, where $l= \infty$ and $m \geq 1$. As $Y$ is positive, it is clear that, for all $\mathbf{h}_1, \mathbf{h}_2 \in \mathds{Z}^d$, $$ \mathds{E} \left[ \min \left \{ \sup_{\mathbf{x} \in [ 0,1]^d} \{ Y(\mathbf{x}) \}, \sup_{\mathbf{x} \in [\mathbf{h}_2-\mathbf{h}_1,\mathbf{h}_2-\mathbf{h}_1+1]} \{ Y(\mathbf{x}) \} \right \} \right] \geq 0.$$ Hence, \eqref{Eq_Majoration_Beta_Xtilde_With_Extremal_Coefficients} gives that $$ \beta^{\tilde{X}} \left( S_1, S_2 \right) \leq 4K_2 \sum_{\mathbf{h}_1 \in S_1} \sum_{\mathbf{h}_2 \in \mathds{Z}^d: \| \mathbf{h}_2-\mathbf{h}_1 \| \geq m} \mathds{E} \left[ \min \left \{ \sup_{\mathbf{x} \in [ 0,1]^d} \{ Y(\mathbf{x}) \}, \sup_{\mathbf{x} \in [\mathbf{h}_2-\mathbf{h}_1,\mathbf{h}_2-\mathbf{h}_1+1]} \{ Y(\mathbf{x}) \} \right \} \right].$$ It follows from \eqref{Eq_Decay_Cubes_Y} that $$ \beta^{\tilde{X}} \left( S_1, S_2 \right) \leq 4KK_2 \sum_{\mathbf{h} \in \mathds{Z}^d: \| \mathbf{h} \| \geq m} \| \mathbf{h} \|^{-b}.$$ Consequently, \eqref{Eq_Majoration_Alphakl_Xtilde_BetaklXtilde} gives that $$ \alpha_{1, \infty}^{\tilde{X}}(m) \leq 2KK_2 \sum_{\mathbf{h} \in \mathds{Z}^d: \| \mathbf{h} \| \geq m} \| \mathbf{h} \|^{-b}.$$ Since
$b>2d$ and $\alpha$-mixing coefficients are non-negative, we easily obtain that $\alpha_{1, \infty}^{\tilde{X}}(m)=o\left( m^{-d} \right)$ and hence that \eqref{Eq_TCL1} is satisfied. Using H\"older's inequality and the fact that $\lambda([0,1]^d)=1$, we have that \begin{align} \left| \int_{[0,1]^d} X(\mathbf{x}) \ \lambda(\mathrm{d}\mathbf{x}) \right | &\leq \left( \int_{[ 0,1]^d} |X(\mathbf{x})|^{2+\delta} \ \lambda(\mathrm{d}\mathbf{x}) \right)^{\frac{1}{2+\delta}} \left( \int_{[ 0,1]^d} 1^{\frac{2+\delta}{1+\delta}} \ \lambda(\mathrm{d}\mathbf{x}) \right)^{\frac{1+\delta}{2+\delta}} \nonumber \\& = \left( \int_{[ 0,1]^d} |X(\mathbf{x})|^{2+\delta} \ \lambda(\mathrm{d}\mathbf{x}) \right)^{\frac{1}{2+\delta}}. \nonumber \end{align} Hence, using \eqref{Eq_Def_Xtilde} and taking advantage of the stationarity of $X$, we have that \begin{align*} \mathds{E} \left[ |\tilde{X}(\mathbf{0})|^{2+\delta} \right] \leq \mathds{E}\left[ \int_{[ 0,1]^d} |X(\mathbf{x})|^{2+\delta} \ \lambda(\mathrm{d}\mathbf{x}) \right] &=\int_{[0,1]^d} \mathds{E} \left[ |X(\mathbf{x})|^{2+\delta} \right] \ \lambda(\mathrm{d}\mathbf{x}) \\& =\mathds{E} \left[ |X(\mathbf{0})|^{2+\delta} \right]. \end{align*} Thus, using \eqref{Eq_Assumption_F}, we obtain that $\mathds{E}\left[ |\tilde{X}(\mathbf{0})|^{2+\delta} \right]<\infty$. Using \eqref{Eq_Majoration_alphakl_Xtilde}, we obtain that $$ m^{d-1} \left( \alpha^{\tilde{X}}_{11}(m) \right)^{\frac{\delta}{2+\delta}} \leq m^{d-1} (2KK_2)^{\frac{\delta}{2+\delta}} m^{-\frac{b \delta }{2+\delta}} = (2KK_2)^{\frac{\delta}{2+\delta}} m^{d-1-\frac{b \delta }{2+\delta}}.$$ Since $b > d(2+\delta)/\delta$, we have that $d-1-b \delta/(2+\delta)<-1$. Therefore, since $\alpha$-mixing coefficients are non-negative, this yields $$\sum_{m=1}^{\infty} m^{d-1} \left( \alpha^{\tilde{X}}_{11}(m) \right)^{\frac{\delta}{2+\delta}}< \infty.$$ Hence, \eqref{Eq_TCL3} is satisfied. 
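As a side check, outside the formal argument, the exponent bookkeeping of the last step can be verified numerically. The following sketch uses the illustrative values $d=2$, $\delta=1$ and $b=7$ (so that $b>d(2+\delta)/\delta=6$), and sets the constant $2KK_2$ to $1$, since it only rescales the sum:

```python
# Check that sum_m m^{d-1} (alpha_11(m))^{delta/(2+delta)} is finite when
# b > d(2+delta)/delta, using the bound alpha_11(m) <= C m^{-b} with C = 1
# (illustrative; C = 2*K*K_2 only rescales the sum).
d, delta, b = 2, 1.0, 7.0
assert b > d * (2 + delta) / delta  # assumption of the theorem

exponent = (d - 1) - b * delta / (2 + delta)  # below -1, so the sum converges

def partial_sum(n_terms):
    """Partial sum of sum_m m^{exponent} up to n_terms."""
    return sum(m ** exponent for m in range(1, n_terms + 1))

s1, s2 = partial_sum(10_000), partial_sum(20_000)
print(exponent, s1, s2)  # the partial sums stabilise, indicating convergence
```

With these values the exponent equals $-4/3 < -1$, and the partial sums of the series stabilise quickly, consistent with the convergence established above.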
Now, we recall that $\Lambda_n=\{ \mathbf{h} \in \mathds{Z}^d: [\mathbf{h}, \mathbf{h}+1] \subset A_n \}$, where $$A_n= \bigcup_{\mathbf{h} \in \mathds{Z}^d:[\mathbf{h}, \mathbf{h}+1] \subset V_n} [\mathbf{h}, \mathbf{h}+1].$$ Since, for any $n \in \mathds{N}$, $A_n$ is a bounded subset of $\mathds{R}^d$ (as a subset of $V_n$ which is bounded), and, by definition, $\Lambda_n \subset A_n$, we have that $\Lambda_n$ is a finite subset of $\mathds{Z}^d$. Moreover, as $(V_n)_{n \in \mathds{N}}$ is a Van Hove sequence, we have, for all $n \in \mathds{N}$, that $V_n \subset V_{n+1}$. This directly implies that $A_n \subset A_{n+1}$ and thus that $\Lambda_n \subset \Lambda_{n+1}$. Furthermore, by definition, we know that $\lim_{n \to \infty} V_n= \mathds{R}^d$. Hence, since, for all $n \in \mathds{N}$ and $\mathbf{x} \in \partial A_n$, $\mathrm{dist}(\mathbf{x},\partial V_n) \leq \sqrt{d}$, we obtain that $\lim_{n \to \infty} A_n= \mathds{R}^d$. Therefore, $\lim_{n \to \infty} \Lambda_n=\mathds{Z}^d$. Hence, $(\Lambda_n)_{n \in \mathds{N}}$ is a sequence of finite subsets of $\mathds{Z}^d$ which increases to $\mathds{Z}^d$. Furthermore, we have that, for all $n \in \mathds{N}$, \begin{equation} \label{Eq_Inclusion_Remainder_Tube} V_n \backslash A_n \subset N \left( \partial V_n, \sqrt{d} \right). \end{equation} Hence, $\lambda(V_n \backslash A_n) \leq \lambda \left( N \left( \partial V_n, \sqrt{d} \right) \right)$, which gives, since $(V_n)_{n \in \mathds{N}}$ is a Van Hove sequence, that \begin{equation} \label{Eq_Lim_LambdaReminder_On_LambdaVn} \lim_{n \to \infty} \frac{\lambda(V_n \backslash A_n)}{\lambda(V_n)}=0. \end{equation} Since $\lambda(A_n)=\lambda(V_n)-\lambda(V_n \backslash A_n)$, we have that $$ \frac{\lambda(A_n)}{\lambda(V_n)}=1-\frac{\lambda(V_n \backslash A_n)}{\lambda(V_n)}.$$ Thus, using \eqref{Eq_Lim_LambdaReminder_On_LambdaVn}, we obtain that \begin{equation} \label{Eq_Lim_LambdaAn_On_LambdaVn} \lim_{n \to \infty} \frac{\lambda(A_n)}{\lambda(V_n)}=1. 
\end{equation} Moreover, since, for all $\mathbf{h} \in \mathds{Z}^d$, $\lambda([\mathbf{h}, \mathbf{h}+1])=1$, we obtain that, for all $n \in \mathds{N}$, \begin{equation} \label{Eq_Equality_Cardinal_Lambdan_Measure_An} |\Lambda_n|=\lambda(A_n). \end{equation} For the same reason, we have, for all $n \in \mathds{N}$, that $|\partial \Lambda_n| \leq \lambda \left( N \left( \partial A_n, \sqrt{d} \right) \right)$. Additionally, as $\mathrm{dist}(\partial V_n, \partial A_n) \leq \sqrt{d}$, we obtain that $N \left( \partial A_n, \sqrt{d} \right) \subset N \left( \partial V_n, 2\sqrt{d} \right)$. Hence, we have, for all $n \in \mathds{N}$, that $|\partial \Lambda_n| \leq \lambda \left( N \left( \partial V_n, 2\sqrt{d} \right) \right)$. Therefore, using \eqref{Eq_Equality_Cardinal_Lambdan_Measure_An}, it follows that $$ \frac{| \partial \Lambda_n |}{|\Lambda_n|}=\frac{|\partial \Lambda_n|}{\lambda(V_n)} \frac{\lambda(V_n)}{\lambda(A_n)} \leq \frac{\lambda \left( N \left( \partial V_n, 2\sqrt{d} \right) \right)}{\lambda(V_n)} \frac{\lambda(V_n)}{\lambda(A_n)}.$$ Consequently, using \eqref{Eq_Lim_LambdaAn_On_LambdaVn} and the fact that $(V_n)_{n \in \mathds{N}}$ is a Van Hove sequence, we obtain that $\lim_{n \to \infty} |\partial \Lambda_n|/|\Lambda_n|=0$. To summarize, we have that $(\Lambda_n)_{n \in \mathds{N}}$ is a sequence of finite subsets of $\mathds{Z}^d$ which increases to $\mathds{Z}^d$ and such that $\lim_{n \to \infty} |\partial \Lambda_n|/|\Lambda_n|=0$. Thus, Theorem \ref{theo:BCLT} gives that $\sum_{\mathbf{h} \in \mathds{Z}^d} \left | \mathrm{Cov} \left(\tilde{X}(\mathbf{0}), \tilde{X}(\mathbf{h})\right) \right | < \infty$. We introduce $\sigma_1^2=\sum_{\mathbf{h} \in \mathds{Z}^d} \mathrm{Cov} \left(\tilde{X}(\mathbf{0}), \tilde{X}(\mathbf{h})\right)$. 
Using the fact that the covariance is bilinear, the stationarity of $X$ and the definition of $\sigma^2$ in \eqref{Eq_Def_Sigma_Square}, we have that \begin{align} \sigma_1^2 & = \int_{[0,1]^d} \left( \sum_{\mathbf{h} \in \mathds{Z}^d} \int_{[\mathbf{h}, \mathbf{h}+1]} \mathrm{Cov}(X(\mathbf{x}), X(\mathbf{y})) \ \lambda(\mathrm{d}\mathbf{y}) \right) \lambda(\mathrm{d}\mathbf{x}) \nonumber \\& = \int_{[0,1]^d} \left( \int_{\mathds{R}^d} \mathrm{Cov}(X(\mathbf{x}), X(\mathbf{y})) \ \lambda(\mathrm{d}\mathbf{y}) \right) \ \lambda(\mathrm{d}\mathbf{x}) \nonumber \\& = \int_{[0,1]^d} \left( \int_{\mathds{R}^d} \mathrm{Cov}(X(\mathbf{0}), X(\mathbf{y}-\mathbf{x})) \ \lambda(\mathrm{d}\mathbf{y}) \right) \ \lambda(\mathrm{d}\mathbf{x}) \nonumber \\& = \sigma^2. \label{Eq_Equality_Covariance_Xtilde_Covariance} \end{align} Consequently, it follows from \eqref{Eq_Condition_Positivity_Sigma2} that $\sigma_1^2>0$. Hence, Theorem \ref{theo:BCLT} yields that \begin{equation} \label{Eq_TCL_Xtilde} \frac{1}{|\Lambda_n|^{\frac{1}{2}}} \sum_{\mathbf{h} \in \Lambda_n} \tilde{X}(\mathbf{h}) \overset{d}{\rightarrow} \mathcal{N}(0,\sigma_1^2). \end{equation} Finally, combining \eqref{Eq_Def_In1_In2}, \eqref{Eq_Integral_Sum_Representation}, \eqref{Eq_Equality_Cardinal_Lambdan_Measure_An}, \eqref{Eq_Equality_Covariance_Xtilde_Covariance} and \eqref{Eq_TCL_Xtilde}, we obtain that $$ \left( \frac{\lambda(V_n)}{\lambda(A_n)} \right)^{\frac{1}{2}} I_{n,1} \overset{d}{\rightarrow} \mathcal{N}(0, \sigma^2), \mbox{ for } n \to \infty.$$ Hence, at last, using \eqref{Eq_Lim_LambdaAn_On_LambdaVn}, Slutsky's theorem yields that \begin{equation} \label{Eq_Convergence_In1} I_{n,1} \overset{d}{\rightarrow} \mathcal{N}(0, \sigma^2), \mbox{ for } n \to \infty. \end{equation} \noindent \textbf{Part 3: Study of $(I_{n,2})_{n \in \mathds{N}}$} We now focus on the second term in \eqref{Eq_Decomposition_Integral_Total}, i.e. $I_{n,2}$. 
Using \eqref{Eq_Inclusion_Remainder_Tube}, the stationarity of $X$ and \eqref{Eq_Convergence_Int_Abs_Cov}, we obtain that, for all $n \in \mathds{N}$, \begin{align} \mathrm{Var}(I_{n,2})&=\frac{1}{\lambda(V_n)} \mathrm{Var} \left( \int_{V_n \backslash A_n} X(\mathbf{x}) \ \lambda(\mathrm{d}\mathbf{x}) \right) \nonumber \\& = \frac{1}{\lambda(V_n)} \int_{V_n \backslash A_n} \int_{V_n \backslash A_n} \mathrm{Cov}(X(\mathbf{x}), X(\mathbf{y})) \ \lambda(\mathrm{d}\mathbf{x}) \ \lambda(\mathrm{d}\mathbf{y}) \nonumber \\& \leq \frac{1}{\lambda(V_n)} \int_{N \left( \partial V_n, \sqrt{d} \right)} \int_{N \left( \partial V_n, \sqrt{d} \right)} | \mathrm{Cov}(X(\mathbf{x}), X(\mathbf{y}))| \ \lambda(\mathrm{d}\mathbf{x}) \ \lambda(\mathrm{d}\mathbf{y}) \nonumber \\& = \frac{1}{\lambda(V_n)} \int_{N \left( \partial V_n, \sqrt{d} \right)} \left( \int_{N \left( \partial V_n, \sqrt{d} \right)} | \mathrm{Cov}(X(\mathbf{0}), X(\mathbf{y}-\mathbf{x})) | \ \lambda(\mathrm{d}\mathbf{y}) \right) \ \lambda(\mathrm{d}\mathbf{x}) \nonumber \\& \leq \frac{\lambda \left( N \left( \partial V_n, \sqrt{d} \right) \right)}{\lambda(V_n)} K_1 \nonumber. \end{align} Since $(V_n)_{n \in \mathds{N}}$ is a Van Hove sequence, we have that $\lim_{n \to \infty} \lambda \left( N \left( \partial V_n, \sqrt{d} \right) \right)/\lambda(V_n)=0$, giving that $\lim_{n \to \infty} \mathrm{Var}(I_{n,2})=0$. Since $\mathds{E}[I_{n,2}]=0$, this shows (using Bienaym\'e-Tchebychev's inequality) that $(I_{n,2})_{n \in \mathds{N}}$ tends towards $0$ in probability. Finally, using \eqref{Eq_Decomposition_Integral_Total} and \eqref{Eq_Convergence_In1} and applying Slutsky's theorem, we obtain that $$\frac{1}{[\lambda(V_n)]^{\frac{1}{2}}} \int_{V_n} X(\mathbf{x}) \ \lambda(\mathrm{d}\mathbf{x}) \overset{d}{\rightarrow} \mathcal{N}(0, \sigma^2), \mbox{ for } n \to \infty.$$ This completes the proof. 
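As a sanity check of the identity $\sigma_1^2=\sigma^2$ obtained in \eqref{Eq_Equality_Covariance_Xtilde_Covariance}, consider a toy one-dimensional stationary field with the purely illustrative covariance $\mathrm{Cov}(X(0),X(t))=\exp(-|t|)$, for which $\sigma^2=\int_{\mathds{R}} \exp(-|t|) \ \mathrm{d}t=2$; summing the covariance cube by cube recovers this value:

```python
import math

def cov(t):
    # illustrative stationary covariance; its integral over R equals 2
    return math.exp(-abs(t))

# sigma_1^2 = sum over cubes [h, h+1] of int_{[0,1]} int_{[h,h+1]} Cov(y - x) dy dx,
# approximated by a midpoint rule (truncating the sum to |h| <= 20)
N = 100  # midpoint nodes per unit interval (arbitrary resolution)
sigma1_sq = 0.0
for h in range(-20, 20):
    for i in range(N):
        x = (i + 0.5) / N
        for j in range(N):
            y = h + (j + 0.5) / N
            sigma1_sq += cov(y - x) / (N * N)
```

The cube-by-cube sum is exactly the rearrangement used in \eqref{Eq_Equality_Covariance_Xtilde_Covariance}.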
\end{proof} \subsection{For Proposition \ref{Prop_Condition_Sigma_Positive}} \begin{proof} Since the random field $Z$ is max-stable, we know that, for all $\mathbf{x} \in \mathds{R}^d$, the random vector $\mathbf{Z}=(Z(\mathbf{0}), Z(\mathbf{x}))^{'}$ is max-stable and thus max-infinitely divisible. Hence, Proposition 5.29 in \cite{resnickextreme} gives that it is associated. In Definition \ref{Def_Association}, let us choose $q=2$ and define $g_1$ and $g_2$ as follows: $$ g_1(z_1,z_2)=F(z_1) \quad \mathrm{and} \quad g_2(z_1,z_2)=F(z_2), \quad z_1, z_2 \in \mathds{R}.$$ As $F$ is non-decreasing, $g_1$ and $g_2$ are non-decreasing in the sense of Definition \ref{Def_Association}. Moreover, since $F$ satisfies \eqref{Eq_Assumption_F}, we have that $\mathds{E}[|F(Z(\mathbf{0}))|^2] < \infty$, $\mathds{E}[|F(Z(\mathbf{x}))|^2] < \infty$ and, using the Cauchy-Schwarz inequality, $\mathds{E}[|F(Z(\mathbf{0}))F(Z(\mathbf{x}))|] < \infty$. This implies that $\mathds{E}[|g_1(\mathbf{Z})|^2] < \infty$, $\mathds{E}[|g_2(\mathbf{Z})|^2] < \infty$ and $\mathds{E}[|g_1(\mathbf{Z}) g_2(\mathbf{Z})|] < \infty$. By definition of association, it follows that, for all $\mathbf{x} \in \mathds{R}^d$, $\mathrm{Cov}(F(Z(\mathbf{0})), F(Z(\mathbf{x}))) \geq 0$, i.e. \begin{equation} \label{Eq_Nonnegativity_Covariance} \mathrm{Cov}(X(\mathbf{0}), X(\mathbf{x})) \geq 0. \end{equation} Now, since $Z$ is max-stable and $F$ is measurable, non-decreasing and non-constant, we have that \begin{equation} \label{Eq_Positivity_Var} \mathrm{Var}(X(\mathbf{0}))>0. \end{equation} Since $F$ is monotone, the set of points at which $F$ is not continuous, denoted $\mathcal{D}_F$, is at most countable. Hence, for all $\mathbf{x}_0 \in \mathds{R}^d$, since $Z(\mathbf{x}_0)$ is a continuous random variable (standard Fr\'echet), we have that $\mathds{P}(Z(\mathbf{x}_0) \in \mathcal{D}_F )=0$. Thus, as $Z$ is sample-continuous, $X$ is almost surely (a.s.)
continuous at $\mathbf{x}_0$, which implies that, for all $\mathbf{x}_0 \in \mathds{R}^d$, \begin{equation} \label{Eq_Convergence_Square_Diff_To_Zero} \mathds{P} \left( \lim_{\mathbf{x} \to \mathbf{x}_0} |X(\mathbf{x})-X(\mathbf{x}_0)|^2=0 \right)=1. \end{equation} We introduce $\delta_1=\delta/2$, where $\delta$ appears in \eqref{Eq_Assumption_F}. Using the well-known fact that, for all $a,b \in \mathds{R}$ and $p \geq 1$, $|a-b|^p \leq 2^{p-1} (|a|^p+|b|^p)$, we obtain that $$ \left (|X(\mathbf{x})-X(\mathbf{x}_0)|^2 \right)^{1+\delta_1} \leq 2^{1+\delta} (|X(\mathbf{x})|^{2+\delta} + |X(\mathbf{x}_0)|^{2+\delta} ).$$ Using the stationarity of $X$ and \eqref{Eq_Assumption_F}, we obtain, for all $\mathbf{x}_0 \in \mathds{R}^d$, that $$ \sup_{\mathbf{x} \in \mathds{R}^d} \left \{ \mathds{E} \left[ \left (|X(\mathbf{x})-X(\mathbf{x}_0)|^2 \right)^{1+\delta_1} \right] \right \} \leq 2^{2+\delta} \mathds{E}\left[ |X(\mathbf{0})|^{2+\delta} \right]<\infty,$$ implying, since $\delta_1>0$, that the family $\left \{ |X(\mathbf{x})-X(\mathbf{x}_0)|^2 \right \}_{\mathbf{x} \in \mathds{R}^d}$ is uniformly integrable. Consequently, we obtain using \eqref{Eq_Convergence_Square_Diff_To_Zero} that, for all $\mathbf{x}_0 \in \mathds{R}^d$, $\lim_{\mathbf{x} \to \mathbf{x}_0} \mathds{E}[|X(\mathbf{x})-X(\mathbf{x}_0)|^2]=0$, meaning that $X$ is continuous in quadratic mean. Hence, since $X$ is second-order stationary (since it is strictly stationary and has a second moment), its covariance function, defined by $\mathrm{Cov}_X(\mathbf{x})=\mathrm{Cov}(X(\mathbf{0}), X(\mathbf{x}))$, $\mathbf{x} \in \mathds{R}^d$, is continuous at the origin. Therefore, there exists $\xi>0$ such that, for all $\mathbf{x} \in \mathds{R}^d$ satisfying $\| \mathbf{x} \| \leq \xi$, $|\mathrm{Cov}(X(\mathbf{0}), X(\mathbf{x}))-\mathrm{Var}(X(\mathbf{0}))| \leq \mathrm{Var}(X(\mathbf{0}))/2$, implying that $\mathrm{Cov}(X(\mathbf{0}), X(\mathbf{x})) \geq \mathrm{Var}(X(\mathbf{0}))/2$.
Thus, using \eqref{Eq_Nonnegativity_Covariance}, \eqref{Eq_Positivity_Var} and the fact that $\xi>0$, we obtain that $$\sigma^2 \geq \int_{\mathbf{x} \in \mathds{R}^d: \| \mathbf{x} \| \leq \xi} \mathrm{Cov}(X(\mathbf{0}), X(\mathbf{x})) \ \lambda (\mathrm{d}\mathbf{x}) \geq \lambda \left( \left \{\mathbf{x} \in \mathds{R}^d: \| \mathbf{x} \| \leq \xi \right \} \right) \frac{\mathrm{Var}(X(\mathbf{0}))}{2} > 0.$$ This concludes the proof. \end{proof} \subsection{For Proposition \ref{Prop_Upper_Bound_Expectation_Min_Brown_Resnick}} \begin{proof} We introduce \begin{equation} \label{Eq_Definition_Eta} \nu(\mathbf{h})=\mathds{E} \left[ \min \left \{ \sup_{\mathbf{x} \in [0,1]^d} \{ Y(\mathbf{x}) \}, \sup_{\mathbf{x} \in [\mathbf{h},\mathbf{h}+1]} \{ Y(\mathbf{x}) \} \right \} \right], \quad \mathbf{h} \in \mathds{Z}^d. \end{equation} Since $Y$ is positive, we have, for all $\mathbf{h} \in \mathds{Z}^d$, that \begin{equation} \label{Eq_Positivity_Nu} \nu(\mathbf{h}) \geq 0. \end{equation} Moreover, we have, for all $\mathbf{h} \in \mathds{Z}^d$, that \begin{align} \nu(\mathbf{h}) &=\mathds{E} \left[ \min \left \{ \sup_{\mathbf{x} \in [0,1]^d} \{ Y(\mathbf{x}) \}, \sup_{\mathbf{x} \in [0,1]^d} \{ Y(\mathbf{x}+\mathbf{h}) \} \right \} \right] \nonumber \\& = \mathds{E} \left[ \min \left \{ \sup_{\mathbf{x} \in [0,1]^d} \{ Y(\mathbf{x}) \}, Y(\mathbf{h}) \sup_{\mathbf{x} \in [0,1]^d} \left \{ \frac{Y(\mathbf{x}+\mathbf{h})}{Y(\mathbf{h})} \right \} \right \} \right]. \label{Eq_First_Computation_Nu} \end{align} Let $\varepsilon$ denote any function from $\mathds{Z}^d$ to $(0,\infty)$. 
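The decay of $\nu$ that the remainder of this proof establishes can be observed by simulation in a simple special case: $d=1$ and $W$ a standard Brownian motion, so that $\sigma_W^2(x)=x$ (a power variogram with $\alpha=1$). The Monte Carlo sketch below is illustrative only and is not part of the proof; the grid step and sample size are arbitrary choices:

```python
import math
import random

random.seed(7)
STEP = 0.05       # Euler grid step for the Brownian path (arbitrary choice)
NSAMPLES = 3000   # Monte Carlo sample size (arbitrary choice)

def nu_estimate(h):
    """Monte Carlo estimate of nu(h) = E[min(sup_{[0,1]} Y, sup_{[h,h+1]} Y)] for
    Y(x) = exp(W(x) - x/2), with W a standard Brownian motion (sigma_W^2(x) = x)."""
    npts = int(round((h + 1.0) / STEP))
    acc = 0.0
    for _ in range(NSAMPLES):
        w = 0.0
        sup0 = 1.0  # Y(0) = 1
        suph = 0.0
        for k in range(1, npts + 1):
            w += random.gauss(0.0, math.sqrt(STEP))
            x = k * STEP
            y = math.exp(w - x / 2.0)
            if x <= 1.0:
                sup0 = max(sup0, y)
            if x >= h:
                suph = max(suph, y)
        acc += min(sup0, suph)
    return acc / NSAMPLES

nu2, nu10 = nu_estimate(2.0), nu_estimate(10.0)
# nu decreases quickly with h, in line with the bound established in this proof
```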
We have that \begin{align*} & \quad \ \sup_{\mathbf{x} \in [0,1]^d} \{ Y(\mathbf{x}) \} \ \mathds{I}_{ \{ Y(\mathbf{h})>\varepsilon(\mathbf{h}) \} } + \varepsilon(\mathbf{h}) \sup_{\mathbf{x} \in [0,1]^d} \left \{ \frac{Y(\mathbf{x}+\mathbf{h})}{Y(\mathbf{h})} \right \} \ \mathds{I}_{ \{ Y(\mathbf{h}) \leq\varepsilon(\mathbf{h}) \} } \\& = \left \{ \begin{array}{cc} \sup_{\mathbf{x} \in [0,1]^d} \{ Y(\mathbf{x}) \} & \mbox{if } Y(\mathbf{h})>\varepsilon(\mathbf{h}), \\ \varepsilon(\mathbf{h}) \sup_{\mathbf{x} \in [0,1]^d} \left \{ \frac{Y(\mathbf{x}+\mathbf{h})}{Y(\mathbf{h})} \right \} & \mbox{if } Y(\mathbf{h}) \leq \varepsilon(\mathbf{h}), \end{array} \right. \end{align*} yielding, since $Y$ is positive, that \begin{align} & \quad \ \ \min \left \{ \sup_{\mathbf{x} \in [0,1]^d} \{ Y(\mathbf{x}) \}, Y(\mathbf{h}) \sup_{\mathbf{x} \in [0,1]^d} \left \{ \frac{Y(\mathbf{x}+\mathbf{h})}{Y(\mathbf{h})} \right \} \right \} \nonumber \\& \leq \sup_{\mathbf{x} \in [0,1]^d} \{ Y(\mathbf{x}) \} \ \mathds{I}_{ \{ Y(\mathbf{h})>\varepsilon(\mathbf{h}) \} } + \varepsilon(\mathbf{h}) \sup_{\mathbf{x} \in [0,1]^d} \left \{ \frac{Y(\mathbf{x}+\mathbf{h})}{Y(\mathbf{h})} \right \} \ \mathds{I}_{ \{ Y(\mathbf{h}) \leq\varepsilon(\mathbf{h}) \} }. 
\label{Eq_Explanation_Majoration_Epsilon} \end{align} We obtain, using \eqref{Eq_First_Computation_Nu}, \eqref{Eq_Explanation_Majoration_Epsilon} and the Cauchy-Schwarz inequality, that, for all $\mathbf{h} \in \mathds{Z}^d$, \begin{align} \nu(\mathbf{h}) &\leq \mathds{E} \left[ \sup_{\mathbf{x} \in [0,1]^d} \{ Y(\mathbf{x}) \} \ \mathds{I}_{ \{ Y(\mathbf{h})>\varepsilon(\mathbf{h}) \} } + \varepsilon(\mathbf{h}) \sup_{\mathbf{x} \in [0,1]^d} \left \{ \frac{Y(\mathbf{x}+\mathbf{h})}{Y(\mathbf{h})} \right \} \ \mathds{I}_{ \{ Y(\mathbf{h}) \leq\varepsilon(\mathbf{h}) \} } \right] \nonumber \\& \leq \mathds{E} \left[ \sup_{\mathbf{x} \in [0,1]^d} \{ Y(\mathbf{x}) \} \ \mathds{I}_{ \{ Y(\mathbf{h})>\varepsilon(\mathbf{h}) \} } \right] + \mathds{E} \left[ \varepsilon(\mathbf{h}) \sup_{\mathbf{x} \in [0,1]^d} \left \{ \frac{Y(\mathbf{x}+\mathbf{h})}{Y(\mathbf{h})} \right \} \ \mathds{I}_{ \{ Y(\mathbf{h}) \leq\varepsilon(\mathbf{h}) \} } \right] \nonumber \\& \leq \mathds{E} \left [ \sup_{\mathbf{x} \in [0,1]^d} \left \{ Y^2(\mathbf{x}) \right \} \right ]^{\frac{1}{2}} \mathds{P}(Y(\mathbf{h}) > \varepsilon(\mathbf{h}) )^{\frac{1}{2}} + \varepsilon(\mathbf{h}) \mathds{E} \left [ \sup_{\mathbf{x} \in [0,1]^d} \left \{ \frac{Y(\mathbf{x}+\mathbf{h})}{Y(\mathbf{h})} \right \} \right]. \label{Eq_First_Majoration_Gamma} \end{align} Since $W(\mathbf{h})$ is a centred Gaussian random variable with variance $\sigma_W^2(\mathbf{h})$, we have, for all $\mathbf{h} \in \mathds{Z}^d$ such that $\sigma_W^2(\mathbf{h})>0$, that \begin{equation} \label{Eq_Gaussian_Survivor} \mathds{P}(Y(\mathbf{h}) > \varepsilon(\mathbf{h})) = \mathds{P} \left( W(\mathbf{h}) > \frac{\sigma_W^2(\mathbf{h})}{2} + \log(\varepsilon(\mathbf{h})) \right)=\bar{\Phi} \left( \frac{\sigma_W(\mathbf{h})}{2} + \frac{\log(\varepsilon(\mathbf{h}))}{\sigma_W(\mathbf{h})} \right), \end{equation} where $\bar{\Phi}=1-\Phi$ with $\Phi$ being the standard Gaussian distribution function.
Now, for all $\mathbf{h} \in \mathds{Z}^d$ and $\mathbf{x} \in [0,1]^d$, we have that $$ \frac{Y(\mathbf{x}+\mathbf{h})}{Y(\mathbf{h})} = \exp \left( W(\mathbf{x}+\mathbf{h})-W(\mathbf{h})-\frac{\sigma_W^2(\mathbf{x}+\mathbf{h})-\sigma_W^2(\mathbf{h})}{2} \right).$$ Hence, since $W$ has stationary increments, we obtain that, for all $\mathbf{h} \in \mathds{Z}^d$, $$ \left \{ \frac{Y(\mathbf{x}+\mathbf{h})}{Y(\mathbf{h})} \right \}_{\mathbf{x} \in \mathds{R}^d} \overset{d}{=} \left \{ \exp \left( W(\mathbf{x})-\frac{\sigma_W^2(\mathbf{x}+\mathbf{h})-\sigma_W^2(\mathbf{h})}{2} \right) \right \}_{\mathbf{x} \in \mathds{R}^d},$$ which yields \begin{equation} \label{Eq_Equality_Distribution_Fraction} \left \{ \frac{Y(\mathbf{x}+\mathbf{h})}{Y(\mathbf{h})} \right \}_{\mathbf{x} \in \mathds{R}^d} \overset{d}{=} \left \{ \exp \left( W(\mathbf{x}) -\frac{\sigma_W^2(\mathbf{x})}{2} \right) \exp \left( \frac{\sigma_W^2(\mathbf{x}) + \sigma_W^2(\mathbf{h})-\sigma_W^2(\mathbf{x}+\mathbf{h})}{2} \right) \right \}_{\mathbf{x} \in \mathds{R}^d}. \end{equation} Now, we show that $\mathds{E} \left[ \sup_{\mathbf{x} \in [0,1]^d} \{ Y(\mathbf{x}) \} \right] < \infty.$ Using the fact that the exponential function is increasing, we have that \begin{equation} \label{Eq_Expectation_Y} \mathds{E} \left[ \sup_{\mathbf{x} \in [0,1]^d} \{ Y(\mathbf{x}) \} \right] = \mathds{E} \left[ \exp \left( \sup_{\mathbf{x} \in [0,1]^d} \left \{ W(\mathbf{x})-\frac{\sigma_W^2(\mathbf{x})}{2} \right \} \right) \right] \leq \mathds{E} \left [ \exp \left( \sup_{\mathbf{x} \in [0,1]^d} \left \{ W(\mathbf{x}) \right \} \right) \right]. \end{equation} As $W$ is a centred Gaussian random field which is a.s. 
bounded on $[0,1]^d$, Theorem 2.1.2 in \cite{adler2007random} yields that $\sup_{\mathbf{x} \in [0,1]^d} \left \{ \mathds{E} \left[ (W(\mathbf{x}))^2 \right] \right \} < \infty.$ Hence, let us introduce \begin{equation} \label{Eq_Def_Tau} \tau= \sup_{\mathbf{x} \in [0,1]^d} \left \{ \mathds{E} \left[ (W(\mathbf{x}))^2 \right] \right \}. \end{equation} It is clear that $\tau >0$. Since $W$ is a centred Gaussian random field which is a.s. bounded on $[0,1]^d$, Theorem 2.1.1 in \cite{adler2007random} gives that, for all $u>0$, $$ \mathds{P} \left ( \sup_{\mathbf{x} \in [0,1]^d} \left \{ W(\mathbf{x}) \right \} > u \right ) \leq \exp \left( -\frac{u^2}{2 \tau} \right),$$ which yields, for all $w >1$, that \begin{equation} \label{Eq_Majoration_Survival_Exp_Sup_W} \mathds{P} \left ( \exp \left( \sup_{\mathbf{x} \in [0,1]^d} \left \{ W(\mathbf{x}) \right \} \right) > w \right ) \leq \exp \left( -\frac{\ln^2(w)}{2 \tau} \right). \end{equation} Using \eqref{Eq_Majoration_Survival_Exp_Sup_W}, bounding the probability by $1$ for $w \in (0,1]$, and performing the changes of variable $v=\ln(w)$ and then $v_1=v-\tau$, we obtain that \begin{align} \mathds{E} \left [ \exp \left( \sup_{\mathbf{x} \in [0,1]^d} \left \{ W(\mathbf{x}) \right \} \right) \right] &=\int_{0}^{\infty} \mathds{P} \left ( \exp \left( \sup_{\mathbf{x} \in [0,1]^d} \left \{ W(\mathbf{x}) \right \} \right) > w \right ) \ \mathrm{d}w \nonumber \\& \leq 1+ \int_{0}^{\infty} \exp \left( -\frac{v^2}{2 \tau} + v \right) \ \mathrm{d}v \nonumber \\& \leq 1+ \exp \left( \frac{\tau}{2} \right) \int_{- \infty}^{\infty} \exp \left( -\frac{v_1^2}{2 \tau} \right) \ \mathrm{d}v_1 < \infty. \label{Eq_Majoration_Expectation_Exp_Sup_W} \end{align} Combining \eqref{Eq_Expectation_Y} and \eqref{Eq_Majoration_Expectation_Exp_Sup_W}, we obtain that $\mathds{E} \left[ \sup_{\mathbf{x} \in [0,1]^d} \{ Y(\mathbf{x}) \} \right] < \infty$. Very similar arguments yield that $\mathds{E} \left[ \sup_{\mathbf{x} \in [0,1]^d} \left \{ Y^2(\mathbf{x}) \right \} \right] < \infty$.
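The last display reduces to the Gaussian integral $\int_{\mathds{R}} \exp \left( -v^2/(2\tau)+v \right) \mathrm{d}v = \exp(\tau/2) \sqrt{2\pi\tau}$ (complete the square). An illustrative numerical confirmation:

```python
import math

def integral_numeric(tau, lo=-40.0, hi=40.0, n=200000):
    # midpoint rule for int exp(-v^2/(2 tau) + v) dv over [lo, hi]
    step = (hi - lo) / n
    total = 0.0
    for i in range(n):
        v = lo + (i + 0.5) * step
        total += math.exp(-v * v / (2.0 * tau) + v)
    return total * step

def integral_closed(tau):
    # completing the square gives exp(tau/2) * sqrt(2 pi tau)
    return math.exp(tau / 2.0) * math.sqrt(2.0 * math.pi * tau)

checks = [(tau, integral_numeric(tau), integral_closed(tau)) for tau in (0.5, 1.0, 3.0)]
```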
Hence, we introduce \begin{equation} \label{Def_C1_C2} C_1 = \mathds{E} \left[ \sup_{\mathbf{x} \in [0,1]^d} \{ Y(\mathbf{x}) \} \right] \quad \mbox{and} \quad C_2 = \mathds{E} \left[ \sup_{\mathbf{x} \in [0,1]^d} \left \{ Y^2(\mathbf{x}) \right \} \right]. \end{equation} The random fields $Y$ and $Y^2$ being positive, we have $C_1, C_2>0$. Furthermore, let \begin{equation} \label{Eq_Def_Delta} \delta(\mathbf{h})=\sup_{\mathbf{x} \in [0,1]^d} \left \{ \sigma_W^2(\mathbf{x}) + \sigma_W^2(\mathbf{h})-\sigma_W^2(\mathbf{x}+\mathbf{h}) \right \}, \quad \mathbf{h} \in \mathds{Z}^d. \end{equation} The combination of \eqref{Eq_First_Majoration_Gamma}, \eqref{Eq_Gaussian_Survivor}, \eqref{Eq_Equality_Distribution_Fraction}, \eqref{Def_C1_C2} and \eqref{Eq_Def_Delta} gives that, for all $\mathbf{h} \in \mathds{Z}^d$ such that $\sigma_W^2(\mathbf{h})>0$, $$ \nu(\mathbf{h}) \leq C_2^{\frac{1}{2}} \bar{\Phi}^{\frac{1}{2}} \left( \frac{\sigma_W(\mathbf{h})}{2} + \frac{\log(\varepsilon(\mathbf{h}))}{\sigma_W(\mathbf{h})} \right) + \varepsilon(\mathbf{h}) C_1 \exp \left( \frac{\delta(\mathbf{h})}{2} \right).$$ Let us take $\varepsilon(\mathbf{h})=\exp \left( -\frac{\sigma_W^2(\mathbf{h})}{4} \right), \mathbf{h} \in \mathds{Z}^d$, so that $\frac{\sigma_W(\mathbf{h})}{2} + \frac{\log(\varepsilon(\mathbf{h}))}{\sigma_W(\mathbf{h})} = \frac{\sigma_W(\mathbf{h})}{4}$. Hence, we obtain \begin{equation} \label{Eq_Second_Majoration_Gamma} \nu(\mathbf{h}) \leq C_2^{\frac{1}{2}} \bar{\Phi}^{\frac{1}{2}} \left( \frac{\sigma_W(\mathbf{h})}{4} \right) +C_1 \exp \left( \frac{\delta(\mathbf{h})}{2} -\frac{\sigma_W^2(\mathbf{h})}{4} \right). \end{equation} We obtain using \eqref{Eq_Condition1_Variogram_TCL} that $$ \delta(\mathbf{h}) \leq \sup_{\mathbf{x} \in [0,1]^d} \{ \sigma_W^2(\mathbf{x}) \} + \sup_{\mathbf{x} \in [0,1]^d} \{ \sigma_W^2(\mathbf{h})-\sigma_W^2(\mathbf{x}+\mathbf{h}) \} \leq \sup_{\mathbf{x} \in [0,1]^d} \{ \sigma_W^2(\mathbf{x}) \} + o(\sigma_W^2(\mathbf{h})).$$ As $W$ is centred, it follows from \eqref{Eq_Def_Tau} that $\sup_{\mathbf{x} \in [0,1]^d} \{ \sigma_W^2(\mathbf{x}) \}=\tau$.
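The argument below relies on Mill's ratio, $\bar{\Phi}(x) \sim \phi(x)/x$ as $x \to \infty$, where $\phi$ denotes the standard Gaussian density. A quick numerical sanity check (illustrative only):

```python
import math

def phi_bar(x):
    # standard Gaussian survival function, via the complementary error function
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def mills_approx(x):
    # phi(x)/x, the Mill's-ratio approximation of phi_bar(x) for large x
    return math.exp(-x * x / 2.0) / (math.sqrt(2.0 * math.pi) * x)

# the ratio phi_bar(x) / (phi(x)/x) tends to 1 as x grows
ratios = [phi_bar(x) / mills_approx(x) for x in (4.0, 6.0, 8.0)]
```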
Thus, \begin{equation} \label{Eq_Inequality_Difference_Delta_Sigma2Over4} \frac{\delta(\mathbf{h})}{2} -\frac{\sigma_W^2(\mathbf{h})}{4} \leq \frac{\tau}{2} + o \left(\sigma_W^2(\mathbf{h})\right) - \frac{\sigma_W^2(\mathbf{h})}{4}. \end{equation} Since $\lim_{\| \mathbf{h} \| \to \infty} \sigma_W^2(\mathbf{h})=\infty$, it is clear that $$ \frac{\tau}{2} + o(\sigma_W^2(\mathbf{h})) - \frac{\sigma_W^2(\mathbf{h})}{4} \underset{\| \mathbf{h} \| \to \infty}{\sim} - \frac{\sigma_W^2(\mathbf{h})}{4},$$ and hence that there exist $A, A_1>0$ such that, for all $\mathbf{h} \in \mathds{Z}^d$ satisfying $\| \mathbf{h} \| \geq A$, $$ \frac{\tau}{2} + o(\sigma_W^2(\mathbf{h})) - \frac{\sigma_W^2(\mathbf{h})}{4} \leq -A_1 \frac{\sigma_W^2(\mathbf{h})}{4}. $$ Therefore, using \eqref{Eq_Inequality_Difference_Delta_Sigma2Over4}, we obtain, for all $\mathbf{h} \in \mathds{Z}^d$ satisfying $\| \mathbf{h} \| \geq A$, that \begin{equation} \label{Eq_Upper_Bound_Delta_Difference_Delta_Sigma2Over4} \exp \left( \frac{\delta(\mathbf{h})}{2} - \frac{\sigma_W^2(\mathbf{h})}{4} \right) \leq \exp \left( -A_1 \frac{\sigma_W^2(\mathbf{h})}{4} \right). \end{equation} Now, Mill's ratio gives that $$ \bar{\Phi} \left( \frac{\sigma_W(\mathbf{h})}{4} \right) \underset{\| \mathbf{h} \| \to \infty}{\sim} \frac{4 \exp \left( - \frac{\sigma_W^2(\mathbf{h})}{32} \right)}{(2\pi)^{\frac{1}{2}} \sigma_W(\mathbf{h})} \mbox{ and thus } \bar{\Phi}^{\frac{1}{2}} \left( \frac{\sigma_W(\mathbf{h})}{4} \right) \underset{\| \mathbf{h} \| \to \infty}{\sim} \frac{2 \exp \left( - \frac{\sigma_W^2(\mathbf{h})}{64} \right)}{(2\pi)^{\frac{1}{4}} \sigma_W^{\frac{1}{2}}(\mathbf{h})}.$$ Hence, we easily obtain that there exists $A_2>0$ such that, for all $\mathbf{h} \in \mathds{Z}^d$ with $\sigma_W^2(\mathbf{h})>0$, \begin{equation} \label{Eq_Upper_Bound_Phi_Bar} \bar{\Phi}^{\frac{1}{2}} \left( \frac{\sigma_W(\mathbf{h})}{4} \right) \leq A_2 \frac{\exp \left( - \frac{\sigma_W^2(\mathbf{h})}{64}\right)}{\sigma_W^{\frac{1}{2}}(\mathbf{h})}.
\end{equation} Combining \eqref{Eq_Second_Majoration_Gamma}, \eqref{Eq_Upper_Bound_Delta_Difference_Delta_Sigma2Over4} and \eqref{Eq_Upper_Bound_Phi_Bar}, noting that $\nu(\mathbf{h}) \leq C_1$ for all $\mathbf{h} \in \mathds{Z}^d$ and that $\lim_{\| \mathbf{h} \| \to \infty} \sigma_W^2(\mathbf{h})=\infty$, we obtain that there exist $A_3, A_5>0$ (one may take $A_5=\min \{ A_1/4, 1/64 \}$) such that, for all $\mathbf{h} \in \mathds{Z}^d$, \begin{equation} \label{Eq_Second_Majoration_Nu} \nu(\mathbf{h}) \leq A_3 \exp \left( -A_5 \sigma_W^2(\mathbf{h}) \right). \end{equation} Using \eqref{Eq_Condition2_Variogram_TCL}, we have, for all $b>0$, that $$ \lim_{\| \mathbf{h} \| \to \infty} \frac{\exp \left( -A_5 \sigma_W^2(\mathbf{h}) \right)}{\| \mathbf{h} \|^{-b}}= \lim_{\| \mathbf{h} \| \to \infty} \exp \left( -A_5 \sigma_W^2(\mathbf{h})+b \ln(\| \mathbf{h} \| ) \right)=0,$$ giving, using \eqref{Eq_Positivity_Nu} and \eqref{Eq_Second_Majoration_Nu}, that $\nu(\mathbf{h}) \underset{\| \mathbf{h} \| \to \infty}{=} o(\| \mathbf{h} \|^{-b})$. Thus, for all $b>0$, there exists $A_4>0$ such that, for all $\mathbf{h} \in \mathds{Z}^d$ with $\mathbf{h} \neq \mathbf{0}$, \begin{equation} \label{Eq_Final_Upper_Bound_Eta_Brown_Resnick} \nu(\mathbf{h}) \leq A_4 \| \mathbf{h} \|^{-b}. \end{equation} Finally, combining \eqref{Eq_Definition_Eta} and \eqref{Eq_Final_Upper_Bound_Eta_Brown_Resnick}, we obtain that \eqref{Eq_Decay_Cubes_Y} is satisfied for all $b>0$. \end{proof} \subsection{For Proposition \ref{Prop_Variogram_Power}} \begin{proof} First, we show that $W$ is sample-continuous. Since $W$ is centred and has stationary increments, it follows from \eqref{Eq_Condition_Variogram_Power} that, for all $\mathbf{x}_1, \mathbf{x}_2 \in \mathds{R}^d$, \begin{align} \mathds{E}[(W(\mathbf{x}_1)-W(\mathbf{x}_2))^2]=\mathrm{Var}(W(\mathbf{x}_1)-W(\mathbf{x}_2))&=\mathrm{Var}(W(\mathbf{x}_1-\mathbf{x}_2)-W(\mathbf{0})) \nonumber \\&=\sigma_W^2(\mathbf{x}_1-\mathbf{x}_2) \nonumber \\&=\eta \| \mathbf{x}_2 - \mathbf{x}_1 \|^{\alpha}. \label{Eq_Expectation_Variogram} \end{align} Let us take $C \in (0, \infty)$ and $\rho >0$. As $\alpha >0$, it is well-known that $\lim_{h \to 0} h^{\alpha} |\log(h)|^{1+\rho}=0$.
Therefore, there exists $\xi_1>0$ such that, for all $\mathbf{x}_1, \mathbf{x}_2 \in \mathds{R}^d$ satisfying $\| \mathbf{x}_1 - \mathbf{x}_2 \|<\xi_1$, $$ \eta \| \mathbf{x}_2 - \mathbf{x}_1 \|^{\alpha} | \log(\| \mathbf{x}_2 - \mathbf{x}_1 \|)|^{1+\rho} \leq C.$$ This means, using \eqref{Eq_Expectation_Variogram}, that there exist $C \in (0, \infty)$ and $\rho, \xi_1 >0$ such that, for all $\mathbf{x}_1, \mathbf{x}_2 \in \mathds{R}^d$ satisfying $\| \mathbf{x}_1 - \mathbf{x}_2 \| < \xi_1$, $$ \mathds{E}[(W(\mathbf{x}_1)-W(\mathbf{x}_2))^2] \leq \frac{C}{| \log(\| \mathbf{x}_2 - \mathbf{x}_1 \|)|^{1+\rho}}.$$ Hence, Theorem 1.4.1 in \cite{adler2007random} gives that $W$ is sample-continuous and a.s. bounded on $[0,1]^d$. Second, we prove that $\sigma_W^2$ satisfies \eqref{Eq_Condition1_Variogram_TCL}. It follows from \eqref{Eq_Condition_Variogram_Power} that, for all $\mathbf{x} \in [0,1]^d$ and $\mathbf{h} \in \mathds{Z}^d$, \begin{equation} \label{Eq_Expression_Difference_Sigma2} \sigma_W^2(\mathbf{h})-\sigma_W^2(\mathbf{x}+\mathbf{h}) = \eta \left( \| \mathbf{h} \|^{\alpha}- \| \mathbf{x}+ \mathbf{h} \|^{\alpha} \right), \end{equation} where $\eta>0$ and $\alpha \in (0,2]$. Now, the reverse triangle inequality gives that $ \| \mathbf{x}+ \mathbf{h} \| \geq \| \mathbf{h} \| - \| \mathbf{x} \|$. This gives, for all $\mathbf{h} \in \mathds{Z}^d$ satisfying $\| \mathbf{h} \| > d^{\frac{1}{2}}$, that \begin{equation} \label{Majoration_Sup_Quantity_Interest} \sup_{\mathbf{x} \in [0,1]^d} \left \{ \| \mathbf{h} \|^{\alpha} - \| \mathbf{x}+ \mathbf{h} \|^{\alpha} \right \} \leq \sup_{\mathbf{x} \in [0,1]^d} \left \{ \| \mathbf{h} \|^{\alpha} - (\| \mathbf{h} \| - \| \mathbf{x} \|)^{\alpha} \right \}.
\end{equation} Now, we have, for all $\mathbf{x} \in [0,1]^d$ and $\mathbf{h} \in \mathds{Z}^d$ satisfying $\| \mathbf{h} \| > d^{\frac{1}{2}}$, that $$ (\| \mathbf{h} \| - \| \mathbf{x} \|)^{\alpha} = \| \mathbf{h} \|^{\alpha} \left( 1- \frac{\| \mathbf{x} \|}{\| \mathbf{h} \|} \right)^{\alpha}.$$ Hence, using a classical Taylor expansion, we obtain, uniformly in $\mathbf{x} \in [0,1]^d$, that $$ \| \mathbf{h} \|^{\alpha} - (\| \mathbf{h} \| - \| \mathbf{x} \|)^{\alpha} \underset{\| \mathbf{h} \| \to \infty}{=} \alpha \| \mathbf{x} \| \| \mathbf{h} \|^{\alpha-1} + O \left( \| \mathbf{x} \|^2 \| \mathbf{h} \|^{\alpha-2} \right).$$ Therefore, there exists $B>0$ such that, for all $\mathbf{x} \in [0,1]^d$ and $\mathbf{h} \in \mathds{Z}^d$ satisfying $\| \mathbf{h} \| > d^{\frac{1}{2}}$, $$ 0 \leq \| \mathbf{h} \|^{\alpha} - (\| \mathbf{h} \| - \| \mathbf{x} \|)^{\alpha} \leq B \| \mathbf{x} \| \| \mathbf{h} \|^{\alpha-1}.$$ Thus, since $\| \mathbf{x} \|$ is bounded on $[0,1]^d$, we directly obtain that $$ \sup_{\mathbf{x} \in [0,1]^d} \left \{ \| \mathbf{h} \|^{\alpha} - (\| \mathbf{h} \| - \| \mathbf{x} \|)^{\alpha} \right \} \underset{\| \mathbf{h} \| \to \infty}{=}o(\| \mathbf{h} \|^{\alpha}).$$ Moreover, taking $\mathbf{x}=\mathbf{0}$ in the supremum, we have that \begin{equation} \label{Positivity_Sup_Interest} \sup_{\mathbf{x} \in [0,1]^d} \left \{ \| \mathbf{h} \|^{\alpha}- \| \mathbf{x}+ \mathbf{h} \|^{\alpha} \right \} \geq 0. \end{equation} Combining \eqref{Eq_Condition_Variogram_Power}, \eqref{Eq_Expression_Difference_Sigma2}, \eqref{Majoration_Sup_Quantity_Interest} and \eqref{Positivity_Sup_Interest}, we obtain $$\sup_{\mathbf{x} \in [0,1]^d} \{ \sigma_W^2(\mathbf{h})-\sigma_W^2(\mathbf{x}+\mathbf{h}) \}\underset{\| \mathbf{h} \| \to \infty}{=}o(\sigma_W^2(\mathbf{h})).$$ Hence, \eqref{Eq_Condition1_Variogram_TCL} is satisfied.
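Condition \eqref{Eq_Condition1_Variogram_TCL} can also be checked numerically for the power variogram. The sketch below is illustrative only ($d=2$, $\mathbf{h}=(-m,0)$ so that the supremum is non-trivial, and the grid resolution is an arbitrary choice); it computes the ratio $\sup_{\mathbf{x} \in [0,1]^2} \{ \sigma_W^2(\mathbf{h})-\sigma_W^2(\mathbf{x}+\mathbf{h}) \}/\sigma_W^2(\mathbf{h})$, which indeed vanishes as $m$ grows:

```python
import math

def sup_ratio(m, alpha, grid=21):
    """sup over x in [0,1]^2 of (||h||^a - ||x+h||^a) / ||h||^a, with h = (-m, 0)."""
    h = (-float(m), 0.0)
    denom = float(m) ** alpha
    best = 0.0
    for i in range(grid):
        for j in range(grid):
            x = (i / (grid - 1), j / (grid - 1))
            val = (denom - math.hypot(x[0] + h[0], x[1] + h[1]) ** alpha) / denom
            best = max(best, val)
    return best

# the ratio behaves like alpha/m and hence vanishes as ||h|| -> infinity
results = {alpha: (sup_ratio(10, alpha), sup_ratio(1000, alpha)) for alpha in (0.5, 1.5)}
```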
Third, it is clear, using \eqref{Eq_Condition_Variogram_Power}, that $$\lim_{\| \mathbf{h} \| \to \infty} \frac{\ln(\| \mathbf{h} \|)}{\sigma^2_W(\mathbf{h})}= 0.$$ Hence, $\sigma_W^2$ satisfies \eqref{Eq_Condition2_Variogram_TCL}. Finally, since $W$ is a.s. bounded on $[0,1]^d$, Proposition \ref{Prop_Upper_Bound_Expectation_Min_Brown_Resnick} yields that the random field $Y$ satisfies Condition \eqref{Eq_Decay_Cubes_Y} for all $b>0$. \end{proof} \subsection{For Theorem \ref{Th_2_TCL_Brown_Resnick}} \begin{proof} We assume that $Z$ has been built with a Gaussian random field $\{ W(\mathbf{x}) \}_{\mathbf{x} \in \mathds{R}^d}$ having variogram $\gamma_W$. We consider the random field $\{ W_1(\mathbf{x}) \}_{\mathbf{x} \in \mathds{R}^d}=\{ W(\mathbf{x})-W(\mathbf{0}) \}_{\mathbf{x} \in \mathds{R}^d}$ and we denote by $Z_1$ the Brown-Resnick random field built with $W_1$. It is clear that $W_1$ is a centred Gaussian random field with stationary increments such that $W_1(\mathbf{0})=0$. The random fields $W$ and $W_1$ have the same variogram, and thus the variogram of $W_1$ is written $\gamma_{W_1}(\mathbf{x})=\eta \| \mathbf{x} \|^{\alpha}$, $\mathbf{x} \in \mathds{R}^d$, where $\eta >0$ and $\alpha \in (0,2]$. Now, $\gamma_{W_1}(\mathbf{x})=\mathrm{Var}(W_1(\mathbf{x})-W_1(\mathbf{0}))=\mathrm{Var}(W_1(\mathbf{x}))=\sigma_{W_1}^2(\mathbf{x})$. Hence, for all $\mathbf{x} \in \mathds{R}^d$, $\sigma_{W_1}^2(\mathbf{x})=\eta \| \mathbf{x} \|^{\alpha}$. Thus, it follows from Proposition \ref{Prop_Variogram_Power} that $W_1$ is sample-continuous, which directly gives that $W$ is sample-continuous. Therefore, applying Proposition 13 in \cite{kabluchko2009stationary}, we obtain that $Z$ and $Z_1$ are sample-continuous. As $W$ and $W_1$ have the same variogram, $Z$ and $Z_1$ have the same finite-dimensional distributions. 
Moreover, since $Z$ and $Z_1$ are sample-continuous, they have the same distribution in the sense of the induced measure on the space of continuous functions from $\mathds{R}^d$ to $(0, \infty)$. Consequently, we can assume that $Z$ has been built with $W_1$. The random field $Z$ is simple (by definition), stationary \citep[see][Theorem 2]{kabluchko2009stationary} and sample-continuous. Moreover, Proposition \ref{Prop_Variogram_Power} gives that the random field $Y$ defined by $$ \{ Y(\mathbf{x}) \}_{\mathbf{x} \in \mathds{R}^d}=\left \{ \exp \left( W_1(\mathbf{x})-\frac{\sigma_{W_1}^2(\mathbf{x})}{2} \right) \right \}_{\mathbf{x} \in \mathds{R}^d}$$ satisfies Condition \eqref{Eq_Decay_Cubes_Y} for all $b>0$. Hence, Theorem \ref{Th_General_Case} yields the result. \end{proof} \subsection{For Theorem \ref{Th_TCL_Smith}} \begin{proof} It is known \cite[see, e.g.,][]{huser2013composite} that the Smith random field with covariance matrix $\Sigma$ corresponds to the Brown-Resnick random field associated with the variogram $\gamma(\mathbf{x})=\mathbf{x}^{'} \Sigma^{-1} \mathbf{x}$, $\mathbf{x} \in \mathds{R}^d$. This variogram can be rewritten as $\gamma(\mathbf{x})= \| \mathbf{x} \|_{\Sigma}^2$, where $\| . \|_{\Sigma}$ is the norm associated with the inner product induced by the matrix $\Sigma^{-1}$. Let $\{ W(\mathbf{x}) \}_{\mathbf{x} \in \mathds{R}^d}$ be a centred Gaussian random field with stationary increments such that $W(\mathbf{0})=0$ and $\sigma_W^2(\mathbf{x})=\gamma(\mathbf{x})$, $\mathbf{x} \in \mathds{R}^d$. Since all norms are equivalent in $\mathds{R}^d$, there exist $C_3, C_4>0$ such that, for all $\mathbf{x} \in \mathds{R}^d$, $C_3 \| \mathbf{x} \| \leq \| \mathbf{x} \|_{\Sigma} \leq C_4 \| \mathbf{x} \|$. 
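The norm equivalence can be made explicit: for a positive definite $\Sigma$, one may take $C_3=\lambda_{\max}(\Sigma)^{-1/2}$ and $C_4=\lambda_{\min}(\Sigma)^{-1/2}$, where $\lambda_{\min}(\Sigma)$ and $\lambda_{\max}(\Sigma)$ denote the extreme eigenvalues of $\Sigma$. An illustrative check with an arbitrary $2 \times 2$ matrix:

```python
import math
import random

# Illustrative 2x2 example (the entries of Sigma are arbitrary choices).
S = [[2.0, 0.5], [0.5, 1.0]]
det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
Sinv = [[S[1][1] / det, -S[0][1] / det], [-S[1][0] / det, S[0][0] / det]]

def norm_sigma(x):
    # ||x||_Sigma = sqrt(x' Sigma^{-1} x)
    return math.sqrt(Sinv[0][0] * x[0] ** 2 + 2.0 * Sinv[0][1] * x[0] * x[1] + Sinv[1][1] * x[1] ** 2)

# eigenvalues of a symmetric 2x2 matrix, in closed form
mean = (S[0][0] + S[1][1]) / 2.0
gap = math.sqrt(((S[0][0] - S[1][1]) / 2.0) ** 2 + S[0][1] ** 2)
C3, C4 = 1.0 / math.sqrt(mean + gap), 1.0 / math.sqrt(mean - gap)

random.seed(0)
ok = all(
    C3 * math.hypot(*x) - 1e-9 <= norm_sigma(x) <= C4 * math.hypot(*x) + 1e-9
    for x in ((random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(1000))
)
```

The two-sided bound is the Rayleigh-quotient estimate $\lambda_{\min}(\Sigma^{-1}) \| \mathbf{x} \|^2 \leq \mathbf{x}^{'} \Sigma^{-1} \mathbf{x} \leq \lambda_{\max}(\Sigma^{-1}) \| \mathbf{x} \|^2$.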
Hence, a slightly modified version of the proof of Proposition \ref{Prop_Variogram_Power} shows that $W$ is sample-continuous and that the random field $$ \{ Y(\mathbf{x}) \}_{\mathbf{x} \in \mathds{R}^d}=\left \{ \exp \left( W(\mathbf{x})-\frac{\sigma_W^2(\mathbf{x})}{2} \right) \right \}_{\mathbf{x} \in \mathds{R}^d}$$ satisfies Condition \eqref{Eq_Decay_Cubes_Y} for all $b>0$. Then, the same proof as that of Theorem \ref{Th_2_TCL_Brown_Resnick} leads to the result. \end{proof} \end{document}
\begin{document} \global\long\def\bea{ \begin{eqnarray} {\rm e}nd{eqnarray} } \global\long\def {eqnarray}{ {eqnarray}} \global\long\def\begin{itemize}{\rm e}nd{itemize}{\begin{itemize}{\rm e}nd{itemize}} \global\long\def {itemize}{ {itemize}} \global\long\def\be{ \begin{equation} {\rm e}nd{equation} } \global\long\def{equation}{{equation}} \global\long\def\rangle{\ranglengle} \global\long\def\langle{\langlengle} \global\long\def{\hat{U}}{\widetilde{U}} \global\long\def\bra#1{{\langlengle#1|}} \global\long\def\ket#1{{|#1\ranglengle}} \global\long\def\bracket#1#2{{\langlengle#1|#2\ranglengle}} \global\long\def\inner#1#2{{\langlengle#1|#2\ranglengle}} \global\long\def{\rm e}xpect#1{{\langlengle#1\ranglengle}} \global\long\def{\rm e}{{\rm e}} \global\long\def{\hat{{\cal P}}}{{\hat{{\cal P}}}} \global\long\def{\rm Tr}{{\rm Tr}} \global\long\def{\hat{H}}{{\hat{H}}} \global\long\def{\hat{H}}dag{{\hat{H}}^{\dagger}} \global\long\def{\cal L}{{\cal L}} \global\long\def{\hat{E}}{{\hat{E}}} \global\long\def{\hat{E}}^{\dagger}{{\hat{E}}^{\dagger}} \global\long\def\hat{S}{\hat{S}} \global\long\def{\hat{S}}^{\dagger}{{\hat{S}}^{\dagger}} \global\long\def{\hat{A}}{{\hat{A}}} \global\long\def{\hat{A}}^{\dagger}{{\hat{A}}^{\dagger}} \global\long\def{\hat{U}}{{\hat{U}}} \global\long\def{\hat{U}}dag{{\hat{U}}^{\dagger}} \global\long\def{\hat{Z}}{{\hat{Z}}} \global\long\def{\hat{P}}{{\hat{P}}} \global\long\def{\hat{O}}{{\hat{O}}} \global\long\def{\hat{I}}{{\hat{I}}} \global\long\def{\hat{x}}{{\hat{x}}} \global\long\def{\hat{P}}{{\hat{P}}} \global\long\def{\hat{P}}x{{\hat{{\cal P}}}_{x}} \global\long\def{\hat{P}}r{{\hat{{\cal P}}}_{R}} \global\long\def{\hat{P}}l{{\hat{{\cal P}}}_{L}} \title{Quantum and classical complexity in coupled maps} \author{Pablo D. Bergamasco} \affiliation{Departamento de Física, CNEA, Libertador 8250, (C1429BNP) Buenos Aires, Argentina} \affiliation{Departamento de Física, FCEyN, Universidad de Buenos Aires, Argentina} \author{Gabriel G. 
Carlo} \affiliation{Departamento de Física, CNEA, CONICET, Libertador 8250, (C1429BNP) Buenos Aires, Argentina} \author{Alejandro M. F. Rivas} \affiliation{Departamento de Física, CNEA, CONICET, Libertador 8250, (C1429BNP) Buenos Aires, Argentina} {\rm e}mail{[email protected],[email protected],[email protected]} \selectlanguage{american} \date{\today} \begin{abstract} We study a generic and paradigmatic two degrees of freedom system consisting of two coupled perturbed cat maps with different types of dynamics. The Wigner separability entropy (WSE) -- equivalent to the operator space entanglement entropy -- and the classical separability entropy (CSE) are used as measures of complexity. For the case where both degrees of freedom are hyperbolic, the maps are classically ergodic and the WSE and the CSE behave similarly, growing up to higher values than in the doubly elliptic case. However, when one map is elliptic and the other hyperbolic, the WSE reaches the same asymptotic value than that of the doubly hyperbolic case, but at a much slower rate. The CSE only follows the WSE for a few map steps, revealing that classical dynamical features are not enough to explain complexity growth. {\rm e}nd{abstract} \pacs{05.45.Mt, 05.45.Pq, 03.67.Mn, 03.65.Ud} \maketitle \section{Introduction} \langlebel{sec1} Chaotic behaviour is a classical property that implies the exponential divergence of close initial conditions. The ability to explore the whole available phase space, i.e. ergodicity, is indeed the main ingredient for statistical thermodynamics. On the other hand, quantum mechanics is governed by the Schr\:odinger equation whose linearity forbids exponential divergences of close initial conditions. Also, entanglement is a quantum characteristic that has no classical counterpart. Consequently, as recently pointed out in \cite{nature2017}, there is a battle between quantum and thermodynamic laws. We mention a few contributions to this discussion. 
From the quantum-to-classical correspondence point of view, a pioneering work \cite{Patta99} related classically ergodic behaviour to quantum entanglement production. Very recently, a small quantum system of three superconducting qubits was considered \cite{nature2016}, showing a coincidence between regions of high (quantum) entanglement entropy and (classical) chaotic dynamics. On the other hand, entropy production in regular regions has been reported \cite{Lombardi-Matzkin}. Also, in a Toda model of two interacting particles the chaotic and integrable cases could hardly be distinguished in terms of entanglement generation \cite{Casati}. Moreover, the thermalization of quantum systems according to their type of dynamics is a subject of fundamental interest nowadays \cite{physrep2016}. In order to perform an explicit comparison between quantum and classical mechanics it is of great help to have a quantity that can be calculated in both realms. Wigner functions represent quantum mechanics in phase space, providing a very suitable analogue of Liouville distributions. Recently, in the spirit of algorithmic complexity, the WSE \cite{Benenti-Carlo-Prosen} and the CSE \cite{prosen1} have been introduced as measures of complexity of quantum and (discretized) classical distributions, respectively. In this work, we use these complexity measures to analyze quantum-to-classical correspondence in a two-degrees-of-freedom system. We consider two coupled perturbed cat maps, where one of them can be seen as the system and the other as the environment. The dynamics of these maps can be both hyperbolic (chaotic) (HH), both elliptic (regular) (EE), or mixed, where one degree of freedom is hyperbolic and the other elliptic (HE-EH). We have found that in the HH case the Wigner and Liouville distributions develop similar structures of increasing complexity, which are reflected in the hand-in-hand growth of the WSE and the CSE until a saturation value \cite{lak2002}.
At the classical level, after an evolution of the order of the Ehrenfest time, the entropy decreases due to discretization. For the EE case, the quantum and classical measures do not always follow each other. The WSE and the CSE both reach lower values than in the previous case. These results are similar to what was found in \cite{nature2016} for chaotic and regular regions of phase space. Finally, for the mixed HE case the quantum complexity saturates at the same values as in the HH case, although the growth rate is much slower. The classical complexity only grows during the first few map steps and then decreases, unable to reach the quantum asymptotics. In this way, we can observe that one hyperbolic degree of freedom is enough to generate high values of complexity (entanglement). The classical behaviour differs, and quantum mechanisms of complexity growth play a main role. This paper is organized as follows: in Section \ref{sec2} we explain the concepts of WSE and CSE, and how they are used in our study. In Section \ref{sec3} we present our model with a brief discussion of its properties. In Section \ref{sec4} we explain our results in detail and, in Section \ref{sec5}, we state our conclusions. \section{Wigner and classical separability entropies} \label{sec2} A state of a quantum system is described by means of the density operator $\hat{\rho}$ acting on the Hilbert space $\mathcal{H}$, with $\text{Tr}\left(\hat{\rho}\right)=1$. This density operator is a vector belonging to the space $B(\mathcal{H})$ of Hilbert-Schmidt operators, on which an inner product is defined as $\hat{A}\cdot\hat{B}=\text{Tr}(\hat{A}^{\dagger}\hat{B})$, so that the norm satisfies $\|\hat{\rho}\|=\sqrt{\text{Tr}(\hat{\rho}^{2})}\le1$.
Decomposing the Hilbert space $\mathcal{H}$ as a tensor product $\mathcal{H}=\mathcal{H}_{1}\otimes\mathcal{H}_{2}$, the density operator has a Schmidt decomposition, \begin{equation} \hat{\rho}=\sum_{n}\sigma_{n}\hat{a}_{n}\otimes\hat{b}_{n},\label{SVDrho} \end{equation} with $n\in\mathbb{N}$, where $\{\hat{a}_{n}\}$ and $\{\hat{b}_{n}\}$, satisfying $\text{Tr}(\hat{a}_{m}^{\dagger}\hat{a}_{n})=\delta_{mn}$ and $\text{Tr}(\hat{b}_{m}^{\dagger}\hat{b}_{n})=\delta_{mn}$, are orthonormal bases for $B(\mathcal{H}_{1})$ and $B(\mathcal{H}_{2})$, respectively. The Schmidt coefficients $\sigma_{1}\ge\sigma_{2}\ge\ldots\ge0$ satisfy $\sum_{n}\sigma_{n}^{2}=\text{Tr}(\hat{\rho}^{2})=\|\hat{\rho}\|^{2}$. The operator space entanglement entropy~\cite{prosenOSEE} is then defined as \begin{equation} h[\hat{\rho}]=-\sum_{n}\tilde{\sigma}_{n}^{2}\ln\tilde{\sigma}_{n}^{2}, \quad\textrm{with}\quad\tilde{\sigma}_{n}\equiv\frac{\sigma_{n}}{\|\hat{\rho}\|}.\label{eq:OSEE} \end{equation} Among the several representations of quantum mechanics, the Weyl-Wigner representation decomposes the operators acting on the Hilbert space $\mathcal{H}$ on the basis spanned by $\hat{R}_{\boldsymbol{x}}$, the set of unitary reflection operators on points $\boldsymbol{x}\equiv(\boldsymbol{q},\boldsymbol{p})$ \cite{ozrep,opetor} in a $2d$-dimensional compact phase space $\Omega=\Omega_{1}\oplus\Omega_{2}$.
These reflection operators are orthogonal in the sense that \begin{equation} \text{Tr}\left[\hat{R}_{\boldsymbol{x}_{a}}\hat{R}_{\boldsymbol{x}_{b}}\right]=\left(2\pi\hbar\right)^{d}\; \delta(\boldsymbol{x}_{b}-\boldsymbol{x}_{a}).\label{trR} \end{equation} Hence, any operator $\hat{A}$ acting on the Hilbert space $\mathcal{H}$ can be uniquely decomposed in terms of reflection operators as follows: \begin{equation} \hat{A}=\left(\frac{1}{2\pi\hbar}\right)^{d}\int d\boldsymbol{x}\ A_{W}(\boldsymbol{x})\ \hat{R}_{\boldsymbol{x}}.\label{rep} \end{equation} With this decomposition, the operator $\hat{A}$ is mapped onto a function $A_{W}(\boldsymbol{x})$ living in the $2d$-dimensional compact phase space $\Omega$, the so-called Weyl-Wigner symbol of the operator. Using Eq.~(\ref{trR}), it is easy to show that $A_{W}(\boldsymbol{x})$ can be obtained by means of the trace operation \[ A_{W}(\boldsymbol{x})=\text{Tr}\left[\hat{R}_{\boldsymbol{x}}\ \hat{A}\right]. \] The Wigner function is defined in terms of the Weyl-Wigner symbol of the density operator, \[ W(\boldsymbol{x})=(2\pi\hbar)^{-d/2}\rho(\boldsymbol{x})=(2\pi\hbar)^{-d/2} \text{Tr}\left[\hat{R}_{\boldsymbol{x}}\hat{\rho}\right]. \] Normalization of the density operator implies that \[ \int d\boldsymbol{x}\,W(\boldsymbol{x})=\text{Tr}\left(\hat{\rho}\right)=1, \quad\textrm{while}\quad\int d\boldsymbol{x}\,W^{2}(\boldsymbol{x})=\|\hat{\rho}\|^{2}.
\] Also, from the Schmidt decomposition of the density operator given in Eq.~(\ref{SVDrho}), we obtain the Schmidt (singular value) decomposition of the Wigner function: \begin{equation} W(\boldsymbol{x})=\sum_{n}\sigma_{n}a_{n}(\boldsymbol{x}_{1})b_{n}(\boldsymbol{x}_{2}),\label{eq:SVDWigner} \end{equation} where $\{a_{n}\}$ and $\{b_{n}\}$ are now orthonormal bases for $L^{2}(\Omega_{1})$ and $L^{2}(\Omega_{2})$ (which are associated with the Hilbert space decomposition), such that \[ a_{n}(\boldsymbol{x}_{1})=\text{Tr}\left[\hat{R}_{\boldsymbol{x}_{1}}\hat{a}_{n}\right] \quad\textrm{and}\quad b_{n}(\boldsymbol{x}_{2})=\text{Tr}\left[\hat{R}_{\boldsymbol{x}_{2}}\hat{b}_{n}\right]. \] The \emph{Wigner separability entropy} is defined as \cite{Benenti-Carlo-Prosen} \begin{equation} h[W]=-\sum_{n}\tilde{\sigma}_{n}^{2}\ln\tilde{\sigma}_{n}^{2},\label{eq:Wignerentropy} \end{equation} where \begin{equation} \tilde{\sigma}_{n}\equiv\frac{\sigma_{n}}{\sqrt{\int d\boldsymbol{x}\,W^{2}(\boldsymbol{x})}}. \end{equation} The coefficients $\{\tilde{\sigma}_{n}\}$ in Eq.~(\ref{eq:Wignerentropy}) are then the same as those in Eq.~(\ref{eq:OSEE}); they are the Schmidt coefficients of the singular value decomposition of $\tilde{W}\equiv W/\sqrt{\int d\boldsymbol{x}\,W^{2}(\boldsymbol{x})}$, so that $\tilde{W}$ is normalized in $L^{2}(\Omega)$: $\int d\boldsymbol{x}\,\tilde{W}^{2}(\boldsymbol{x})=1$. The WSE $h[W]$ quantifies the logarithm of the number of terms that effectively contribute to the decomposition of Eq.~(\ref{eq:SVDWigner}) and therefore provides a measure of separability of the Wigner function with respect to the chosen phase space decomposition. Comparing Eq.~(\ref{eq:OSEE}) with Eq.~(\ref{eq:Wignerentropy}), it is easy to see that the WSE is equal to the \emph{operator space entanglement entropy} \cite{Benenti-Carlo-Prosen}, i.e. $h[W]=h[\hat{\rho}]$.
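As a concrete numerical illustration of the definitions above (ours, not part of the original paper; the function name \texttt{osee} and the two-qubit test states are illustrative choices), the operator Schmidt coefficients $\sigma_n$ can be obtained from a singular value decomposition of the density matrix reshaped according to the bipartition:

```python
import numpy as np

def osee(rho, d1, d2):
    """Operator space entanglement entropy h[rho] for a density matrix
    on a bipartite Hilbert space of dimensions d1 x d2."""
    # Regroup rho_{(i1 i2),(j1 j2)} into R_{(i1 j1),(i2 j2)}: the SVD of R
    # gives the operator Schmidt coefficients sigma_n of rho.
    R = rho.reshape(d1, d2, d1, d2).transpose(0, 2, 1, 3).reshape(d1 * d1, d2 * d2)
    s = np.linalg.svd(R, compute_uv=False)
    s2 = (s / np.linalg.norm(s)) ** 2        # normalized coefficients sigma_tilde^2
    s2 = s2[s2 > 1e-15]                      # drop numerical zeros
    return -np.sum(s2 * np.log(s2))

# A product state has a single operator Schmidt coefficient: zero entropy.
rho_prod = np.kron(np.diag([1.0, 0.0]), np.diag([0.5, 0.5]))
print(osee(rho_prod, 2, 2))  # -> ~0.0

# Bell state |00>+|11>: four equal coefficients, h = 2 ln 2 (twice the
# entanglement entropy, consistent with h[W] = 2 E for pure states).
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho_bell = np.outer(psi, psi)
print(osee(rho_bell, 2, 2))  # -> ~1.386 (= 2 ln 2)
```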
The main advantage of defining the separability entropy in phase space by means of the Wigner function is that this quantity can be directly translated to classical mechanics. The classical analogue of the Wigner separability entropy is the CSE (or s-entropy) $h[\rho_{c}]$ defined in Ref.~\cite{prosen1}, where a classical phase space distribution $\rho_{c}(\boldsymbol{x})$ (discretized at the $\hbar$ scale) is used instead of the Wigner function $W(\boldsymbol{x})$. The CSE estimates the minimal amount of computational resources required to simulate the classical Liouvillian evolution and grows linearly in time for dynamics that cannot be efficiently simulated. Both the WSE and the CSE measure the complexity of the distributions on the same footing. For our purposes this bridges the gap between quantum and classical mechanics. It is worth mentioning that when the density operator $\hat{\rho}$ describes a pure state, $\hat{\rho}=|\psi\rangle\langle\psi|$, there exists a simple relation between the WSE and the entanglement content of the state $|\psi\rangle\in\mathcal{H}=\mathcal{H}_{1}\otimes\mathcal{H}_{2}$ \cite{Benenti-Carlo-Prosen}. In fact, \[ h[W]=2S(\hat{\rho}_{1})=2S(\hat{\rho}_{2}), \] where $\hat{\rho}_{1}=\text{Tr}_{2}(\hat{\rho})$ and $\hat{\rho}_{2}=\text{Tr}_{1}(\hat{\rho})$ are the reduced density operators for subsystems 1 and 2, and $S$ is the von Neumann entropy. Since for a pure state $|\psi\rangle$ the von Neumann entropy of the reduced density matrix quantifies the entanglement $E$ of $|\psi\rangle$~\cite{Nielsen,qcbook}, \begin{equation} E(|\psi\rangle)=S(\hat{\rho}_{1})=S(\hat{\rho}_{2}), \end{equation} the WSE is twice the \emph{entanglement entropy} $E(|\psi\rangle)$: \begin{equation} h[W]=2\,E(|\psi\rangle). \end{equation} \section{Model system} \label{sec3} Despite their simplicity, dynamical maps capture all the essential features of different types of complicated dynamical systems.
This property and their relatively straightforward quantization turn them into a suitable tool to explore quantum-to-classical correspondence. The quantization of the cat map \cite{Hannay 1980} (a paradigmatic linear automorphism on the torus and one of the simplest models of chaotic dynamics) has contributed to elucidating many questions in the quantum chaos area \cite{Hannay 1980,Ozorio 1994,Haake,Espositi 2005}. We here investigate the behaviour of two coupled perturbed cat maps, a two-degrees-of-freedom system. These two maps can have different types of dynamics. Each degree of freedom is defined on the 2-torus as \cite{Hannay 1980} \begin{equation} \left(\begin{array}{c} q_{t+1}\\ p_{t+1} \end{array}\right)=\mathcal{M}\left(\begin{array}{c} q_{t}\\ p_{t}+\epsilon\left(q_{t}\right) \end{array}\right) \end{equation} with $q$ and $p$ taken modulo $1$, and \[ \epsilon\left(q_{t}\right)=-\frac{K}{2\pi}\sin\left(2\pi q_{t}\right). \] For the ergodic case we use the hyperbolic map \begin{equation} \mathcal{M}_{h}=\left(\begin{array}{cc} 2 & 1\\ 3 & 2 \end{array}\right),\label{eq:Mhyper} \end{equation} while for regular behaviour we choose the elliptic map \begin{equation} \mathcal{M}_{e}=\left(\begin{array}{cc} 0 & 1\\ -1 & 0 \end{array}\right).\label{eq:Meliptic} \end{equation} Quantum mechanics on the torus implies a finite Hilbert space of dimension $N=\frac{1}{2\pi\hbar}$, where positions and momenta take discrete values on a lattice of separation $\frac{1}{N}$ \cite{Hannay 1980}. In the coordinate representation the corresponding propagator is given by an $N\times N$ unitary matrix \begin{equation} U_{jk}=A\exp\left[\frac{i\pi}{N\mathcal{M}_{12}}(\mathcal{M}_{11}j^{2}-2jk +\mathcal{M}_{22}k^{2})+F\right],\label{uqq} \end{equation} where \[ A=\left[1/\left(iN\mathcal{M}_{12}\right)\right]^{1/2}, \] \[ F=\left[iKN/\left(2\pi\right)\right]\cos\left(2\pi j/N\right).
\] The states $\langle q|\mathbf{q}_{j}\rangle$ are periodic combs of Dirac delta distributions at positions $q=j/N \bmod 1$, with $j$ an integer in $[0,N-1]$. The two-degrees-of-freedom system is defined in a four-dimensional phase space with coordinates $\left(q^{1},q^{2},p^{1},p^{2}\right)$ \cite{Benenti-Carlo-Prosen} as \[ \left(\begin{array}{c} q_{t+1}^{1}\\ p_{t+1}^{1} \end{array}\right)=\mathcal{M}_{1}\left(\begin{array}{c} q_{t}^{1}\\ p_{t}^{1}+\epsilon\left(q_{t}^{1}\right)+\kappa\left(q_{t}^{1},q_{t}^{2}\right) \end{array}\right) \] and \[ \left(\begin{array}{c} q_{t+1}^{2}\\ p_{t+1}^{2} \end{array}\right)=\mathcal{M}_{2}\left(\begin{array}{c} q_{t}^{2}\\ p_{t}^{2}+\epsilon\left(q_{t}^{2}\right)+\kappa\left(q_{t}^{1},q_{t}^{2}\right) \end{array}\right) \] where the coupling between both maps is given by \[ \kappa\left(q_{t}^{1},q_{t}^{2}\right)=-\frac{K_{c}}{2\pi}\sin\left(2\pi q_{t}^{1}+2\pi q_{t}^{2}\right). \] The quantized version of the two-degrees-of-freedom system is obtained as the tensor product of the quantized one-degree-of-freedom maps, given by an $N^{2}\times N^{2}$ unitary matrix \[ U_{j_{1}j_{2},k_{1}k_{2}}^{2D}=U_{j_{1}k_{1}}U_{j_{2}k_{2}}C_{j_{1}j_{2}}, \] with the coupling matrix (diagonal in the coordinate representation) \[ C_{j_{1}j_{2}}=\exp\left\{ \left(\frac{iNK_{c}}{2\pi}\right) \cos\left[\frac{2\pi}{N}\left(j_{1}+j_{2}\right)\right]\right\}, \] where $j_{1},j_{2},k_{1},k_{2}\in\{0,\ldots,N-1\}$. We use $K=0.25$ and $K_c=0.5$ throughout this work, which guarantees the Anosov condition \cite{Ozorio 1994}. \section{Results} \label{sec4} To investigate the quantum-to-classical correspondence regarding complexity growth, we study the evolution in time of $h[W]$ and of its classical counterpart $h[\rho_{c}]$.
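To make the one-degree-of-freedom building blocks of the model concrete, the following short sketch (ours, not from the paper; the helper names \texttt{cat\_step} and \texttt{propagator} are illustrative) iterates one classical perturbed cat map and builds the quantized propagator of Eq.~(\ref{uqq}) for the hyperbolic map, checking numerically that it is unitary and that $(q,p)=(0.5,0.5)$ is a fixed point of both maps:

```python
import numpy as np

def cat_step(q, p, M, K=0.25):
    """One step of the classical perturbed cat map on the 2-torus."""
    eps = -(K / (2 * np.pi)) * np.sin(2 * np.pi * q)
    q1 = M[0, 0] * q + M[0, 1] * (p + eps)
    p1 = M[1, 0] * q + M[1, 1] * (p + eps)
    return q1 % 1.0, p1 % 1.0

def propagator(N, M, K=0.25):
    """Quantized perturbed cat map: the N x N matrix U_{jk} of Eq. (uqq)."""
    j = np.arange(N).reshape(-1, 1)
    k = np.arange(N).reshape(1, -1)
    A = (1.0 / (1j * N * M[0, 1])) ** 0.5
    phase = (1j * np.pi / (N * M[0, 1])) * (M[0, 0] * j**2 - 2 * j * k + M[1, 1] * k**2)
    F = (1j * K * N / (2 * np.pi)) * np.cos(2 * np.pi * j / N)
    return A * np.exp(phase + F)

M_h = np.array([[2, 1], [3, 2]])   # hyperbolic (ergodic) map
M_e = np.array([[0, 1], [-1, 0]])  # elliptic (regular) map

# (q,p) = (0.5, 0.5) is a period-1 fixed point of both maps (sin(pi) = 0)
print(cat_step(0.5, 0.5, M_h))  # -> (0.5, 0.5)
print(cat_step(0.5, 0.5, M_e))  # -> (0.5, 0.5)

# The propagator of the hyperbolic map (M_12 = 1) is unitary
U = propagator(2**6, M_h)
print(np.allclose(U.conj().T @ U, np.eye(2**6)))  # -> True
```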
As initial states we have used a Gaussian phase space distribution with dispersion equal to $\sqrt{\hbar}$, and its quantum analogue, a coherent state on the torus, for both degrees of freedom. Taking advantage of the fact that the latter is a pure state, we just compute the von Neumann entropy, which is half the WSE. In the following, when we refer to the WSE and the CSE we mean WSE/2 and CSE/2. We take $N=2^{6}$ for each map. First, we consider the HH case with the initial distributions centered at $(q,p)=(0.5,0.5)$, which is a period-1 fixed point of both the hyperbolic and the elliptic maps. The CSE and the WSE as a function of time (map steps) are displayed in Fig.~\ref{fig1}. \begin{figure}[htp] \hspace{0cm} \includegraphics[width=8cm]{fig1} \caption{(Color online) CSE ((red) gray line with crosses) and WSE (black line with triangles) as a function of time $t$ (in map steps) in the HH case for $N=2^{6}$. Initial distributions are centered at $(q,p)=(0.5,0.5)$.} \label{fig1} \end{figure} The Liouville and Wigner distributions develop similar structures of increasing complexity as the evolution takes place. At time $t=3$, after growing at a rate given by the average Lyapunov exponent, they show the maximum complexity, where features of the stable manifold are still visible in the quantum case (see Fig.~\ref{fig2} a) and b)). From $t=3$ on, the classical distribution becomes less complex due to discretization, while the quantum one keeps its complexity through intertwined coherence patterns, as can be seen in Fig.~\ref{fig2} c) and d). If the classical distribution were not discretized, the CSE would continue to grow. The WSE grows until saturation at a value of the order of $\ln(0.6\times N)$, as predicted in \cite{lak2002}. At time $t=10$ the classical distribution is almost completely smoothed out, while the quantum one keeps the same morphology as at $t=4$ (see Fig.~\ref{fig2} e) and f)).
We note that we have removed the effects of the torus periodicity on the Wigner distributions in all figures \cite{dittrich}. \begin{figure}[htp] \includegraphics[width=8cm]{fig2} \caption{Liouville and Wigner distributions at times $t=3$ (a) and b)), $t=4$ (c) and d)), and $t=10$ (e) and f)), for the HH case with initial conditions centered at $(q,p)=(0.5,0.5)$.} \label{fig2} \end{figure} The EE case depends strongly on where the initial conditions are taken. We first show the evolution of the CSE/WSE as a function of time for distributions centered at the previous values, i.e. at the fixed point of period 1. As can be seen in Fig.~\ref{fig3}, complexity does not grow significantly and the quantum and classical behaviours are remarkably similar (although from $t=3$ on the agreement worsens). Small oscillations reflect the rotation of the distributions, which do not explore much of the phase space. \begin{figure}[htp] \hspace{0cm} \includegraphics[width=8cm]{fig3} \caption{(Color online) CSE ((red) gray line with crosses) and WSE (black line with triangles) as a function of time $t$ (in map steps) in the EE case for $N=2^{6}$. Initial distributions are centered at $(q,p)=(0.5,0.5)$.} \label{fig3} \end{figure} However, if we select initial distributions centered at $(q,p)=(\pi/4,\pi/4)$, for instance (we just take a representative case among those of maximum complexity), complexity grows as shown in Fig.~\ref{fig4}. The saturation values of the WSE are always lower than in the HH case, but greater than those of the corresponding CSE, suggesting that quantum effects begin to play an important role. In fact, from $t=10$ on, the two curves follow completely different behaviours. Moreover, the WSE reaches its maximum value after approximately $160$ map steps, at a much slower rate than in the HH case, reflecting a different mechanism of complexity growth. In the inset we show the same quantities in log-log scale.
Here not only the power-law behaviour becomes clear, but also the abrupt change of slope at $t=10$. \begin{figure}[htp] \hspace{0cm} \includegraphics[width=8cm]{fig4} \caption{(Color online) CSE ((red) gray line with crosses) and WSE (black line with triangles) as a function of time $t$ (in map steps) in the EE case for $N=2^{6}$. Initial distributions are centered at $(q,p)=(\pi/4,\pi/4)$.} \label{fig4} \end{figure} If we look at Fig.~\ref{fig5}, we can see how the quantum distribution develops interference fringes from $t=8$ (panel b)) to $t=11$ (panel d)). The classical distribution just develops a secondary bulb of high density, but of course coherences do not appear (see panels a) and c)). Finally, for $t=50$ we can already see the typical morphology of the quantum distribution, which has developed many fringes, in contrast with the classical one (see panels e) and f)). \begin{figure}[htp] \includegraphics[width=8cm]{fig5} \caption{Liouville and Wigner distributions at times $t=8$ (a) and b)), $t=11$ (c) and d)), and $t=50$ (e) and f)), for the EE case with initial conditions centered at $(q,p)=(\pi/4,\pi/4)$.} \label{fig5} \end{figure} It is interesting to note that up to now everything seems to agree with the results of \cite{nature2016} regarding the entangling power of chaotic and regular regions of the classical phase space, keeping in mind, however, the marked dependence on the initial conditions in the regular case. Finally, we analyze the HE case, for which we take the initial distributions centered at $(q,p)=(0.5,0.5)$. The WSE saturates at values similar to the HH case, although it takes a much longer time to reach them (see Fig.~\ref{fig6}). The CSE only grows until $t=5$ and then decreases due to discretization. It is remarkable that just one hyperbolic degree of freedom is enough to reach maximum complexity, although at the classical level the dynamics is not completely ergodic.
In this sense the behaviour is strongly different from that of a mixed phase space with regular and chaotic regions. \begin{figure}[htp] \hspace{0cm} \includegraphics[width=8cm]{fig6} \caption{(Color online) CSE ((red) gray line with crosses) and WSE (black line with triangles) as a function of time $t$ (in map steps) in the HE case for $N=2^{6}$. Initial distributions are centered at $(q,p)=(0.5,0.5)$.} \label{fig6} \end{figure} Looking at Fig.~\ref{fig7}, it becomes clear that the Liouville distribution at times $t=3$ and $t=5$ (see panels a) and c)) develops more and more complex structures associated with the stable manifold, but that at $t=50$ (see panel e)) it has already washed out almost all details. The corresponding quantum distributions (for the same times) in the right panels show a different mechanism of complexity growth, mainly based on coherences after $t=3$. \begin{figure}[htp] \includegraphics[width=8cm]{fig7} \caption{Liouville and Wigner distributions at times $t=3$ (a) and b)), $t=5$ (c) and d)), and $t=50$ (e) and f)), for the HE case with initial conditions centered at $(q,p)=(0.5,0.5)$.} \label{fig7} \end{figure} \section{Conclusions} \label{sec5} We have studied a generic system consisting of two coupled perturbed cat maps, considering the doubly hyperbolic, doubly elliptic, and mixed cases. By using the WSE and the CSE as two sides of the same complexity notion, we find that in the HH case the quantum and classical complexity growth share the same behaviour (apart from discretization effects in the classical distribution). The WSE and the CSE reach the maximum theoretical limit predicted in \cite{lak2002}, which is of the order of $\ln(0.6\times N)$. In the EE case this quantum-to-classical correspondence depends on the initial conditions, but the complexity maximum of the HH case is not reached.
This confirms recent findings published in \cite{nature2016} regarding the production of entanglement in chaotic and regular regions of phase space. It is important to clarify, however, that in these cases entanglement entropy generation and classical chaotic or regular behaviour are directly related through the complexity notion; thus the connection is not surprising. Moreover, this is not always the case, as shown for instance by the non-generic baker map, which is chaotic but not complex \cite{Benenti-Carlo-Prosen}. In the HE case of our generic system, the WSE reaches the maximum theoretical value around $\ln(0.6\times N)$, similarly to the HH case, although the transient time is much longer. The CSE does not reach this value and has a markedly different behaviour. This reveals that in the mixed scenario the quantum mechanisms of complexity growth (namely coherences) play a central role. On the other hand, just one hyperbolic degree of freedom is enough to reach maximum complexity, even though the dynamics is not completely ergodic at the classical level. Our results provide a wider picture of complexity growth, including mixed dynamics scenarios. These findings suggest new experiments with controllable quantum systems, where the behaviour of each component could be selected to be regular or chaotic. In the future, we will study the role played by complex eigenvalues of the symplectic matrix leading to loxodromic behaviour \cite{multicat}. \section*{Acknowledgments} Support from CONICET is gratefully acknowledged. \begin{thebibliography}{10} \bibitem{nature2017} D. Castelvecchi, Nature \textbf{543}, 597 (2017). \bibitem{Patta99} A. K. Pattanayak, Phys. Rev. Lett. \textbf{83}, 4526 (1999). \bibitem{nature2016} C. Neill \textit{et al.}, Nature Physics \textbf{12}, 1037 (2016). \bibitem{Lombardi-Matzkin} M. Lombardi and A. Matzkin, Phys. Rev. E \textbf{83}, 016207 (2011). \bibitem{Casati} G. Casati, I. Guarneri, and J. Reslen, Phys. Rev. E \textbf{85}, 036208 (2012).
\bibitem{physrep2016} F. Borgonovi \textit{et al.}, Phys. Rep. \textbf{626}, 1 (2016). \bibitem{Benenti-Carlo-Prosen} G. Benenti, G. G. Carlo, and T. Prosen, Phys. Rev. E \textbf{85}, 051129 (2012). \bibitem{prosen1} T. Prosen, Phys. Rev. E \textbf{83}, 031124 (2011). \bibitem{lak2002} J. Bandyopadhyay and A. Lakshminarayan, Phys. Rev. Lett. \textbf{89}, 060402 (2002). \bibitem{prosenOSEE} P. Zanardi, Phys. Rev. A \textbf{63}, 040304(R) (2001); X. Wang and P. Zanardi, Phys. Rev. A \textbf{66}, 044303 (2002); M. A. Nielsen \textit{et al.}, Phys. Rev. A \textbf{67}, 052301 (2003); T. Prosen and I. Pi\v{z}orn, Phys. Rev. A \textbf{76}, 032316 (2007). \bibitem{ozrep} A. M. Ozorio de Almeida, Phys. Rep. \textbf{295}, 266 (1998). \bibitem{opetor} A. M. F. Rivas and A. M. Ozorio de Almeida, Ann. Phys. \textbf{276}, 223 (1999). \bibitem{Nielsen} M. A. Nielsen and I. L. Chuang, \textit{Quantum Computation and Quantum Information} (Cambridge University Press, Cambridge, 2000). \bibitem{qcbook} G. Benenti, G. Casati, and G. Strini, \textit{Principles of Quantum Computation and Information, Vol. II} (World Scientific, Singapore, 2007). \bibitem{Hannay 1980} J. H. Hannay and M. V. Berry, Physica D \textbf{1}, 267 (1980). \bibitem{Ozorio 1994} M. Basilio De Matos and A. M. Ozorio De Almeida, Ann. Phys. \textbf{237}, 46 (1995). \bibitem{Haake} F. Haake, \textit{Quantum Signatures of Chaos} (Springer-Verlag, New York, 2001). \bibitem{Espositi 2005} M. Degli Esposti and B. Winn, J. Phys. A: Math. Gen. \textbf{38}, 5895 (2005). \bibitem{dittrich} A. Arguelles and T. Dittrich, Physica A \textbf{356}, 72 (2005). \bibitem{multicat} A. M. F. Rivas, M. Saraceno, and A. M. Ozorio de Almeida, Nonlinearity \textbf{13}, 341 (2000). \end{thebibliography} \end{document}
\begin{document} \title{Asymptotic enumeration of sparse uniform hypergraphs with given degrees} \begin{abstract} Let $r\geq 2$ be a fixed integer. For infinitely many $n$, let ${\boldsymbol{k}} = (k_1,\ldots, k_n)$ be a vector of nonnegative integers such that their sum $M$ is divisible by $r$. We present an asymptotic enumeration formula for simple $r$-uniform hypergraphs with degree sequence ${\boldsymbol{k}}$. (Here ``simple'' means that all edges are distinct and no edge contains a repeated vertex.) Our formula holds whenever the maximum degree $k_{\mathrm{max}}$ satisfies $k_{\mathrm{max}}^{3} = o(M)$. \end{abstract} \section{Introduction}\label{s:introduction} Hypergraphs are combinatorial structures which can model very general relational systems, including some real-world networks~\cite{ER,GZCN,Dih}. Formally, a \emph{hypergraph} or a set system is defined as a pair $(V,E)$, where $V$ is a finite set and $E$ is a multiset of multisubsets of $V$. (We refer to elements of $E$ as \emph{edges}.) Note that under this definition, a hypergraph may contain repeated edges and an edge may contain repeated vertices. If a vertex $v$ has multiplicity at least 2 in the edge $e$, we say that $v$ is a \emph{loop} in $e$. A hypergraph is \emph{simple} if it has no loops and no repeated edges. Here it is possible that distinct edges may have more than one vertex in common. Let $r\geq 2$ be a fixed integer. We say that the hypergraph $(V,E)$ is $r$-\emph{uniform} if each edge $e\in E$ contains exactly $r$ vertices (counting multiplicities). Uniform hypergraphs are a particular focus of study, not least because a 2-uniform hypergraph is precisely a graph. We seek an asymptotic enumeration formula for the number of $r$-uniform simple hypergraphs with a given degree sequence, when $r\geq 3$ is constant and the maximum degree is not too large (the sparse range). To state our result precisely, we need some definitions. 
Let $k_{i,n}$ be a nonnegative integer for all pairs $(i,n)$ of integers which satisfy $1\leq i\leq n$. Then for each $n\geq 1$, let ${\boldsymbol{k}} = {\boldsymbol{k}}(n) = (k_{1,n},\ldots, k_{n,n})$. We usually write $k_i$ instead of $k_{i,n}$. Define $M = \sum_{i=1}^n k_i$. We assume that $M$ is divisible by $r$ for an infinite number of values of $n$, and tacitly restrict ourselves to such $n$. We write $(a)_m$ to denote the falling factorial $a(a-1)\cdots (a-m+1)$, for integers $a$ and $m$. For each positive integer $t$, let $M_t = \sum_{i=1}^n (k_i)_t$. Notice that $M_1=M$ and that $M_t\leq k_{\mathrm{max}} M_{t-1}$ for $t\geq 2$. Let ${\mathcal{H}_r(\kvec)}$ be the set of simple $r$-uniform hypergraphs on the vertex set $\{ 1,2,\ldots, n\}$ with degrees given by ${\boldsymbol{k}} = (k_1,\ldots, k_n)$. Our main theorem is the following. \begin{theorem} Let $r\geq 3$ be a fixed integer. Suppose that $n\to\infty$, $M\to\infty$ and that $k_{\mathrm{max}}$ satisfies $k_{\mathrm{max}}\geq 2$ and $k_{\mathrm{max}}^{3} = o(M)$. Then \[ |{\mathcal{H}_r(\kvec)}| = \frac{M!}{\left(M/r\right)!\, (r!)^{M/r}\, \prod_{i=1}^{n}\, k_i!}\,\, \exp\biggl( - \frac{ (r-1)\, M_2}{2M} + O(k_{\mathrm{max}}^{3}/M)\, \biggr).\] \label{main} \end{theorem} As a corollary, we immediately obtain the corresponding formula for regular hypergraphs. Let $\mathcal{H}_r(k,n)$ denote the set of all $k$-regular $r$-uniform hypergraphs on the vertex set $\{ 1,\ldots, n\}$, where $k\geq 2$ is an integer, which may be a function of $n$. \begin{corollary} \label{main-regular} Suppose that $n\to\infty$ and that $k$ satisfies $k\geq 2$ and $k^{2} = o(n)$. Then \[ |\mathcal{H}_{r}(k,n)| = \frac{(kn)!}{(kn/r)!\, (r!)^{kn/r}\, (k!)^n}\, \exp\biggl( - \dfrac{1}{2}\, (k-1)(r-1) + O(k^{2}/n)\, \biggr). \] \end{corollary} \subsection{History}\label{s:history} In the case of graphs, the best asymptotic formula in the sparse range is given by McKay and Wormald~\cite{McKW91}. 
See that paper for further history of the problem. Note that their formula has a similar form to ours, but with many more terms in the exponential factor. This is due to the fact that it is harder to avoid creating a repeated edge with a switching when $r=2$. The dense range for $r=2$ was treated in~\cite{ranX,MW90}, but there is a gap between these two ranges in which nothing is known. An early result in the asymptotic enumeration of hypergraphs was given by Cooper et al.~\cite{CFMR}, who considered simple $k$-regular hypergraphs when $k=O(1)$. Dudek et al.~\cite{DFRS} proved an asymptotic formula for the number of simple $k$-regular hypergraphs with $k=o(n^{1/2})$. A restatement of their result in our notation is the following: \begin{theorem} \emph{(~\cite[Theorem 1]{DFRS})}\ For each integer $r\geq 3$, define \[ \kappa = \kappa(r) = \begin{cases} 1 & \text{ if $r\geq 4$,}\\ \tfrac{1}{2} & \text{ if $r=3$.} \end{cases} \] Let $\mathcal{H}(r,k)$ denote the set of all simple $k$-regular $r$-uniform hypergraphs on the vertex set $\{ 1,\ldots, n\}$. For every $r\geq 3$, if $k=o(n^{\kappa})$ then \[ |\mathcal{H}(r,k)| = \frac{(kn)!}{(kn/r)!\, (r!)^{kn/r}\, (k!)^n}\, \exp\left( -\dfrac{1}{2} (k-1)(r-1)\bigl(1 + O(\delta(n))\bigr) \right)\] where $\delta(n) = (kn)^{-1/2} + k/n$. \end{theorem} Note that the factor outside the exponential part matches ours (see Corollary~\ref{main-regular}), and that the exponential part of their formula can be rewritten as \[ \exp\left( -\dfrac{1}{2} (k-1)(r-1) + O(k\delta(n))\right)\] with relative error \[ O(k\delta(n)) = O\bigl(\sqrt{k/n} + k^2/n\bigr).\] This relative error is only $o(1)$ when $k^2=o(n)$, matching the range of $k$ covered by Corollary~\ref{main-regular}. Hence Theorem~\ref{main} can be seen as an extension of~\cite{DFRS} to irregular degree sequences. For an asymptotic formula for the number of dense simple $r$-uniform hypergraphs with a given degree sequence, see~\cite{KLP}.
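For numerical intuition about the size of these counts, the following small sketch (ours, not from the paper; the function name is an illustrative choice) evaluates the right-hand side of Theorem~\ref{main} with the $O(k_{\mathrm{max}}^{3}/M)$ correction dropped. As a degenerate sanity check, for $r=3$ and ${\boldsymbol{k}}=(1,1,1)$ the estimate returns exactly $1$, matching the unique simple hypergraph consisting of the single edge $\{1,2,3\}$ (the theorem itself assumes $k_{\mathrm{max}}\geq 2$, so this case is only illustrative):

```python
from math import exp, factorial, prod

def hypergraph_count_estimate(r, k):
    """Leading term of Theorem 1: M! / ((M/r)! (r!)^{M/r} prod_i k_i!)
    times exp(-(r-1) M_2 / (2M)), ignoring the O(k_max^3 / M) error."""
    M = sum(k)
    assert M % r == 0, "sum of degrees must be divisible by r"
    M2 = sum(ki * (ki - 1) for ki in k)  # M_2 = sum_i (k_i)_2
    lead = factorial(M) / (factorial(M // r) * factorial(r) ** (M // r)
                           * prod(factorial(ki) for ki in k))
    return lead * exp(-(r - 1) * M2 / (2 * M))

print(hypergraph_count_estimate(3, [1, 1, 1]))  # -> 1.0
print(hypergraph_count_estimate(3, [2] * 9))    # estimate for 2-regular 3-uniform, n = 9
```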
\subsection{The model, some early results and a plan of the proof} We work in a generalisation of the configuration model. Let $B_1, B_2,\ldots, B_n$ be disjoint sets, which we call \emph{cells}, and define $\mathcal{B} = \bigcup_{i=1}^n B_i$. Elements of $\mathcal{B}$ are called \emph{points}. Assume that cell $B_i$ contains exactly $k_i$ points, for $i=1,\ldots, n$. We assume that there is a fixed ordering on the $M$ points of $\mathcal{B}$. Denote by $\Lambda_r({\boldsymbol{k}})$ the set of all unordered partitions $Q = \{ U_1,\ldots, U_{M/r}\}$ of $\mathcal{B}$ into $M/r$ parts, where each part has exactly $r$ points. Then \begin{equation} \label{Lambda} |\Lambda_r({\boldsymbol{k}})| = \frac{M!}{(M/r)!\, (r!)^{M/r}}. \end{equation} Each partition $Q\in\Lambda_r({\boldsymbol{k}})$ defines a hypergraph $G(Q)$ on the vertex set $\{ 1,\ldots, n\}$ in a natural way: vertex $i$ corresponds to the cell $B_i$, and each part $U\in Q$ gives rise to an edge $e_U$ such that the multiplicity of vertex $i$ in $e_U$ equals $|U\cap B_i|$, for $i=1,\ldots, n$. Then $G(Q)$ is an $r$-uniform hypergraph with degree sequence ${\boldsymbol{k}}$. The partition $Q\in\Lambda_r({\boldsymbol{k}})$ is called \emph{simple} if $G(Q)$ is simple. The edge $e_U$ has a loop at $i$ if and only if $|U\cap B_i| \geq 2$. In this case, each pair of distinct points in $U\cap B_i$ is called a \emph{loop} in $U$. We reserve the letters $e, f$ for edges in a hypergraph, and use $U$, $W$ for parts in a partition $Q$ (that is, in the configuration model). Now we will consider random partitions. Each hypergraph in ${\mathcal{H}_r(\kvec)}$ corresponds to exactly \[ \prod_{i=1}^n k_i!\] partitions $Q\in\Lambda_r({\boldsymbol{k}})$. Hence, when $Q\in\Lambda_r({\boldsymbol{k}})$ is chosen uniformly at random, conditioned on $G(Q)$ being simple, the probability distribution of $G(Q)$ is uniform over ${\mathcal{H}_r(\kvec)}$.
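Equation~(\ref{Lambda}) can be checked by brute force for tiny parameters. The following sketch (ours; the helper name is an illustrative choice) enumerates all unordered partitions of a $9$-point set into parts of size $3$ by always placing the first remaining point, which avoids counting any partition twice:

```python
from itertools import combinations
from math import factorial

def count_partitions(points, r):
    """Count unordered partitions of `points` into parts of size r,
    fixing the first remaining point in each part to avoid double counting."""
    if not points:
        return 1
    total = 0
    for rest in combinations(points[1:], r - 1):
        remaining = [p for p in points[1:] if p not in rest]
        total += count_partitions(remaining, r)
    return total

M, r = 9, 3
brute = count_partitions(list(range(M)), r)
formula = factorial(M) // (factorial(M // r) * factorial(r) ** (M // r))
print(brute, formula)  # -> 280 280
```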
Let $P_r({\boldsymbol{k}})$ denote the probability that a partition $Q\in \Lambda_r({\boldsymbol{k}})$ chosen uniformly at random is simple. Then \begin{equation} \label{pr-equation} |{\mathcal{H}_r(\kvec)}| = \frac{M!}{(M/r)!\, (r!)^{M/r}\, \prod_{i=1}^n k_i!}\, P_r({\boldsymbol{k}}). \end{equation} Hence it suffices to show that $P_r({\boldsymbol{k}})$ equals the exponential factor in the statement of Theorem~\ref{main}. As a first step, we identify several events which have probability $O(k_{\mathrm{max}}^{3}/M)$ in the uniform probability space over $\Lambda_r({\boldsymbol{k}})$. The following lemma will be used repeatedly. In most applications, $c$ will be a small positive integer. (Throughout the paper, ``$\log$'' denotes the natural logarithm.) \begin{lemma} \label{c-parts} Let $U_1,\ldots, U_c$ be fixed, disjoint $r$-subsets of the set of points $\mathcal{B}$, where $r\geq 3$ is a fixed integer and $c = o(M^{1/2})$. The probability that a uniformly random $Q\in\Lambda_r({\boldsymbol{k}})$ contains the parts $\{U_1,\ldots, U_c\}$ is \[ (1+o(1)) \frac{((r-1)!)^c}{M^{c(r-1)}}. \] \end{lemma} \begin{proof} Using (\ref{Lambda}), the required probability is \begin{align*} \frac{r!^c\, (M/r)_c}{(M)_{rc}} &= \frac{(r-1)!^c}{M^{(r-1)c}}\, \exp\left( -\sum_{j=0}^{rc-1} \,\log(1 - j/M) + \sum_{i=0}^{c-1}\, \log(1-ri/M) \right)\\ &= \frac{(r-1)!^c}{M^{(r-1)c}}\, \exp\left( O\left(\frac{r^2 c^2}{M}\right)\right). \end{align*} But $r^2c^2 = o(M)$ by assumption, which completes the proof. \end{proof} Let \[ N = \max\{ \lceil \log M\rceil,\, \lceil 9(r-1) M_2/M\rceil\}. \] Now define $\Lambda_r^+({\boldsymbol{k}})$ to be the set of partitions $Q\in \Lambda_r({\boldsymbol{k}})$ which satisfy the following properties: \begin{enumerate} \item[(i)] For each part $U\in Q$ we have $|U\cap B_i|\leq 2$ for $i=1,\ldots, n$. \item[(ii)] For each part $U\in Q$ there is at most one $i\in \{ 1,\ldots, n\}$ with $|U\cap B_i|=2$. 
\item[(iii)] For each pair $(U_1,U_2)$ of distinct parts in $Q$, the intersection $e_1\cap e_2$ of the corresponding edges contains at most 2 vertices. (It is possible that $e_1\cap e_2$ consists of a loop.) \item[(iv)] There are at most $N$ parts which contain loops. \end{enumerate} Note in particular that whenever $r\geq 3$, property (iii) implies that $G(Q)$ has no repeated edges. \begin{lemma} Under the assumptions of Theorem~\ref{main}, we have \[ \frac{|\Lambda_r^+({\boldsymbol{k}})|}{|\Lambda_r({\boldsymbol{k}})|} = 1 + O(k_{\mathrm{max}}^{3}/M). \] \label{pr-simple} \end{lemma} \begin{proof} Consider $Q\in\Lambda_r({\boldsymbol{k}})$ chosen uniformly at random. (i) The expected number of parts in $Q$ which contain three or more points from the same cell is \[ O\left(\frac{M_3 M^{r-3}}{M^{r-1}}\right) = O(k_{\mathrm{max}}^2/M),\] using Lemma~\ref{c-parts}. Hence, the probability that property (i) fails to hold is also $O(k_{\mathrm{max}}^2/M)$. (ii) Similarly, the expected number of parts in $Q$ which contain two loops (where each loop is from a distinct cell) is \[ O\left(\frac{M_2^2 M^{r-4}}{M^{r-1}}\right) = O(k_{\mathrm{max}}^2/M).\] (iii) Using Lemma~\ref{c-parts}, the expected number of ordered pairs of distinct parts $(U_1,U_2)$ which give rise to edges $e_1, e_2$ such that $|e_1\cap e_2| \geq 3$ is \[ O\left(\frac{M_2^3\, M^{2(r-3)} + M_2 M_4 M^{2(r-3)}}{M^{2(r-1)}}\right) = O(k_{\mathrm{max}}^3/M). \] (Here the first term arises if $e_1\cap e_2$ does not contain a loop while the second term covers the possibility that $e_1\cap e_2$ contains a loop. By (i) we can assume that $e_1\cap e_2$ contains at least two distinct vertices.) (iv) Let $\ell=N + 1$. We bound the expected number of sets $\{ U_1,\ldots, U_\ell\}$ of $\ell$ parts which each contain a loop. Given $(U_1,\ldots, U_{i-1})$, there are at most $M_2 M^{r-2}/(2(r-2)!)$ choices for $U_i$. 
Hence there are \[ O\left(\frac{1}{\ell!}\, \left(\frac{M_2 M^{r-2}}{2(r-2)!}\right)^{\ell} \right) \] possible sets $\{ U_1,\ldots, U_\ell\}$ of parts which each contain a loop. Now \[ \ell = O(N) = O(k_{\mathrm{max}} + \log M) = o(M^{1/2}), \] by definition of $N$. Hence Lemma~\ref{c-parts} applies, and we conclude that the expected number of sets of $\ell = N + 1$ parts which each contain a loop is \[ O\left(\frac{1}{\ell!}\, \left(\frac{(r-1)M_2}{2M}\right)^\ell\right) = O\left(\left(\frac{e(r-1)M_2}{2\ell M}\right)^\ell\right) = O\left((e/18)^{\log M}\right) = o(1/M),\] completing the proof. \end{proof} In Section~\ref{s:switchings} we will calculate $|\Lambda^+_{r}({\boldsymbol{k}})|$ by analysing switchings which make local changes to a partition to reduce (or increase) the number of loops by precisely 1. \section{The switchings}\label{s:switchings} For a given nonnegative integer $\ell$, let $\mathcal{C}_\ell$ be the set of partitions $Q\in \Lambda_r^+({\boldsymbol{k}})$ with exactly $\ell$ parts which contain a loop. Then partitions in $\mathcal{C}_0$ give rise to hypergraphs in ${\mathcal{H}_r(\kvec)}$. Now $\mathcal{C}_0$ is nonempty whenever $r$ divides $M$, and we restrict ourselves to this situation. Hence it follows from Lemma~\ref{pr-simple} that \begin{equation} \frac{1}{P_r({\boldsymbol{k}})} = \bigl(1 + O(k_{\mathrm{max}}^{3}/M)\bigr)\, \sum_{\ell = 0}^{N}\, \frac{|\mathcal{C}_\ell|}{|\mathcal{C}_0|}. \label{a=0} \end{equation} We estimate the above sum using a switching designed to remove loops. An $\ell$-\emph{switching} in a partition $Q$ is specified by a 4-tuple $(x_1,x_2,y_1,y_2)$ of points where $x_1$ belongs to the part $U$, and $y_j$ belongs to the part $W_j$ for $j=1,2$, such that: \begin{itemize} \item $U$, $W_1$ and $W_2$ are distinct parts of $Q$, \item $y_1$ and $y_2$ belong to distinct cells, and \item $U$ contains a loop $\{ x_1, x_2\}$ (so in particular, $x_1$ and $x_2$ belong to the same cell). 
\end{itemize} The $\ell$-switching maps $Q$ to the partition $Q'$ defined by \begin{equation} \label{QQ'-loops} Q' = \bigl(Q - \{ U, W_1, W_2\}\bigr) \cup \{\widehat{U}, \widehat{W}_1, \widehat{W}_2\} \end{equation} where \[ \widehat{U} = \bigl( U - \{ x_1,x_2\}\bigr) \cup \{ y_1,y_2\},\quad \widehat{W}_1 = \bigl(W_1 - \{ y_1\}\bigr)\cup \{ x_1\},\quad \widehat{W}_2 = \bigl(W_2 - \{ y_2\}\bigr) \cup \{ x_2\}. \] This operation is illustrated in Figure~\ref{f:l-switch}. It is the same operation used by Dudek et al.~\cite{DFRS}, but we use a somewhat different approach when analysing the switching. \begin{figure} \caption{An $\ell$-switching} \label{f:l-switch} \end{figure} Let $e$ be the edge of $G(Q)$ corresponding to $U$, and let $f_j$ be the edge of $G(Q)$ corresponding to $W_j$, for $j=1,2$. Similarly, let $\widehat{e}$ be the edge of $G(Q')$ corresponding to $\widehat{U}$, and let $\widehat{f}_j$ be the edge of $G(Q')$ corresponding to $\widehat{W}_j$ for $j=1,2$. Given $Q\in\mathcal{C}_\ell$, we say that the $\ell$-switching specified by the 4-tuple of points $(x_1,x_2,y_1,y_2)$ is \emph{legal for $Q$} if the resulting partition $Q'$ belongs to $\mathcal{C}_{\ell-1}$, and otherwise we say that the switching is \emph{illegal for $Q$}. \begin{lemma} With notation as above, if the $\ell$-switching $(x_1,x_2,y_1,y_2)$ is illegal for $Q$ then at least one of the following conditions must hold: \begin{itemize} \item[\emph{(I)}] At least one of $W_1$, $W_2$ contains a loop. \item[\emph{(II)}] $e$, $f_1$ and $f_2$ are not pairwise disjoint. \item[\emph{(III)}] Some edge of $G(Q)\setminus \{ e,\, f_1,\, f_2\}$ intersects both $e$ and $f_j$, for some $j\in \{1,2\}$. \end{itemize} \label{forward-legal} \end{lemma} \begin{proof} Given $Q\in\mathcal{C}_\ell$, suppose that the 4-tuple $(x_1,x_2,y_1,y_2)$ specifies an $\ell$-switching in $Q$ such that the resulting partition $Q'$ does not belong to $\mathcal{C}_{\ell-1}$. 
It could be that $Q'\in\Lambda_r^+({\boldsymbol{k}})$ but that $Q'$ has strictly more than $\ell-1$ parts which contain a loop. Here the $\ell$-switching has (accidentally) introduced at least one new loop. But this implies that (II) holds, since we know that $y_1$ and $y_2$ do not belong to the same cell. Next, suppose that $Q'\in\Lambda_r^+({\boldsymbol{k}})$ but that $Q'$ has at most $\ell-2$ parts which contain a loop. This means that the $\ell$-switching has removed more than one loop. Then property (I) must hold: the point $y_j$ must have been involved in a loop in $W_j$ for some $j\in\{ 1,2\}$. It remains to consider the case that $Q'\not\in\Lambda_r^+({\boldsymbol{k}})$. Then at least one of the properties (i)--(iv) used to define $\Lambda_r^+({\boldsymbol{k}})$ no longer holds for $Q'$. Arguing as above, if (i), (ii) or (iv) fails then we have introduced at least one loop, or increased the multiplicity of a vertex in some edge from 2 to at least 3. This implies that (I) or (II) holds, using arguments similar to those above. Finally, suppose that (iii) fails for $Q'$. Then $G(Q')$ has a pair of edges which intersect in at least 3 vertices. We say that this pair of edges has \emph{large intersection}. At least one of the new edges $\widehat{e}$, $\widehat{f}_1$, $\widehat{f}_2$ must be involved in any such pair, since $Q\in\Lambda_r^+({\boldsymbol{k}})$. If $\widehat{f}_1$ and $\widehat{f}_2$ have large intersection then $f_1$ and $f_2$ are not disjoint, which shows that (II) holds. Similarly, if $\widehat{e}$ and $\widehat{f}_j$ have large intersection for some $j\in \{ 1,2\}$ then $e$ and $f_j$ are not disjoint, and (II) holds. Now suppose that an edge $e'\in G(Q')\setminus \{ \widehat{e},\, \widehat{f}_1,\, \widehat{f}_2\}$ has large intersection with one of the new edges. Note that $e'$ is also an edge of $G(Q)\setminus \{ e,f_1,f_2\}$. 
\begin{itemize} \item If $e'$ has large intersection with $\widehat{f}_j$ for some $j\in \{ 1,2\}$ then $e'$ must contain the vertex corresponding to the point $x_j$, or else $e'$ and $f_j$ would have large intersection in $G(Q)$, contradicting the fact that $Q\in\Lambda_r^+({\boldsymbol{k}})$. Furthermore, $e'\cap \widehat{f}_j$ contains at least one other vertex, corresponding to a point in $\widehat{W}_j\setminus \{ x_j\} = W_j \setminus \{ y_j\}$. Hence $e'$ intersects both $e$ and $f_j$ in $G(Q)$, showing that (III) holds. \item If $e'$ has large intersection with $\widehat{e}$ then $e'$ must contain the vertex corresponding to $y_j$ for some $j\in \{ 1,2\}$ (perhaps both), otherwise $e'$ and $e$ would have large intersection in $G(Q)$, a contradiction. Even if $e'$ contains both of these vertices, it must still contain a vertex corresponding to a point in $\widehat{U} \setminus \{ y_1,y_2\} = U \setminus \{ x_1,x_2\}$. Hence $e'$ intersects both $f_j$ and $e$ in $G(Q)$ for some $j\in \{ 1,2\}$, which again proves that (III) holds. \end{itemize} This completes the proof. \end{proof} A \emph{reverse $\ell$-switching} in a given partition $Q'$ is the reverse of an $\ell$-switching. It is described by a 4-tuple $(x_1,x_2,y_1,y_2)$ of points, where $\widehat{W}_j$ is the part of $Q'$ containing $x_j$, for $j=1,2$, and $y_1, y_2$ are distinct points in the part $\widehat{U}$ of $Q'$, such that \begin{itemize} \item $\widehat{U}$, $\widehat{W}_1$ and $\widehat{W}_2$ are distinct parts of $Q'$, \item $x_1$ and $x_2$ belong to the same cell, and \item $y_1$ and $y_2$ belong to distinct cells. \end{itemize} This reverse $\ell$-switching acting on $Q'$ produces the partition $Q$ defined by (\ref{QQ'-loops}), as depicted in Figure~\ref{f:l-switch} by following the arrow in reverse. 
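To make the operation concrete, here is a small Python sketch (illustrative only; the names `loops` and `ell_switching` are ours) that applies the forward $\ell$-switching of (\ref{QQ'-loops}) to a toy partition with $r=3$ and checks that exactly one loop is removed:

```python
def loops(Q, cell):
    """Number of parts of Q containing a loop, i.e. two points of the same cell."""
    return sum(1 for part in Q if len({cell[p] for p in part}) < len(part))

def ell_switching(Q, x1, x2, y1, y2, cell):
    """Forward ell-switching: U contains the loop {x1, x2}; y1, y2 lie in the
    distinct parts W1, W2 and belong to distinct cells.  Returns Q'."""
    U = next(P for P in Q if x1 in P)
    W1 = next(P for P in Q if y1 in P)
    W2 = next(P for P in Q if y2 in P)
    assert x2 in U and cell[x1] == cell[x2] and cell[y1] != cell[y2]
    assert U != W1 and U != W2 and W1 != W2
    U_hat = (U - {x1, x2}) | {y1, y2}
    W1_hat = (W1 - {y1}) | {x1}
    W2_hat = (W2 - {y2}) | {x2}
    return [P for P in Q if P not in (U, W1, W2)] + [U_hat, W1_hat, W2_hat]

# Toy instance with r = 3: cell 0 holds points 0 and 1; all other cells are singletons.
cell = {0: 0, 1: 0, 2: 1, 3: 2, 4: 3, 5: 4, 6: 5, 7: 6, 8: 7}
Q = [frozenset({0, 1, 2}), frozenset({3, 4, 5}), frozenset({6, 7, 8})]
Qp = ell_switching(Q, 0, 1, 3, 6, cell)
print(loops(Q, cell), loops(Qp, cell))  # 1 0
```

The switched partition still has $M/r$ parts of size $r$, and the single loop $\{0,1\}$ has been removed, as in Figure~\ref{f:l-switch}.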
Given $Q'\in\mathcal{C}_{\ell-1}$, we say that the reverse $\ell$-switching specified by $(x_1,x_2,y_1,y_2)$ is \emph{legal for $Q'$} if the resulting partition $Q$ belongs to $\mathcal{C}_\ell$, and otherwise we say that the switching is \emph{illegal for $Q'$}. For completeness we give the full proof of the following, though it is very similar to the proof of Lemma~\ref{forward-legal}. \begin{lemma} \label{legal-reverse} With notation as above, if the reverse $\ell$-switching specified by $(x_1,x_2,y_1,y_2)$ is illegal for $Q'\in\mathcal{C}_{\ell-1}$ then at least one of the following conditions must hold: \begin{itemize} \item[\emph{(I${}'$)}] At least one of $\widehat{U}$, $\widehat{W}_1$, $\widehat{W}_2$ contains a loop. \item[\emph{(II${}'$)}] $\widehat{e}\cap\widehat{f}_j\neq \emptyset$ for some $j\in \{ 1,2\}$. \item[\emph{(III${}'$)}] Some edge of $G(Q')\setminus \{ \widehat{e}, \widehat{f}_1, \widehat{f}_2\}$ intersects both $\widehat{e}$ and $\widehat{f}_j$ for some $j\in\{ 1,2\}$. \end{itemize} \end{lemma} \begin{proof} Fix $Q'\in \mathcal{C}_{\ell-1}$ and let $(x_1,x_2,y_1,y_2)$ describe a reverse $\ell$-switching such that the resulting partition $Q$ does not belong to $\mathcal{C}_\ell$. If $Q\in\Lambda_r^+({\boldsymbol{k}})$ but $Q$ has more than $\ell$ parts which contain loops then an extra loop has been unintentionally introduced. In this case, either $\widehat{W}_j\setminus \{ x_j\}$ contains a point from the same cell as $y_j$, or $\widehat{U}\setminus \{ y_1,y_2\}$ contains a point from the same cell as $x_j$, for some $j\in\{ 1,2\}$. In either case we have $\widehat{e}\cap \widehat{f}_j\neq\emptyset$, so (II${}'$) holds. Next, suppose that $Q\in\Lambda_r^+({\boldsymbol{k}})$ but that $Q$ has at most $\ell-1$ parts which contain a loop. Then the reverse switching has removed at least one loop, which implies that (I${}'$) holds. Now suppose that $Q\not\in\Lambda_r^+({\boldsymbol{k}})$. Then one of the properties (i)--(iv) fails for $Q$. 
If (i), (ii) or (iv) fails then arguing as above we see that (I${}'$) or (II${}'$) holds. Now suppose that (iii) fails. Then some edge of $G(Q)$ has large intersection with one of $e, f_1, f_2$ (recalling that terminology from the proof of Lemma~\ref{forward-legal}). Now $f_1$ and $f_2$ cannot have large intersection, since their intersection is contained in the intersection of $\widehat{f}_1$ and $\widehat{f}_2$, and $Q'\in\Lambda_r^+({\boldsymbol{k}})$. If $e$ and $f_j$ have large intersection for some $j\in\{ 1,2\}$ then either this intersection contains the vertex corresponding to $x_j$ (and hence $\widehat{W}_j$ contains a loop), or the intersection contains the vertex corresponding to $y_j$ (and hence $\widehat{U}$ contains a loop), or $\widehat{e}\cap\widehat{f}_j\neq \emptyset$. Again (I${}'$) or (II${}'$) holds. Finally, suppose that the large intersection involves an edge $e'\in G(Q)\setminus \{ e,f_1,f_2\}$. Then $e'$ also belongs to $G(Q')\setminus \{ \widehat{e},\, \widehat{f}_1,\, \widehat{f}_2\}$. If $e'$ has large intersection with $e$ in $G(Q)$ then $e'$ contains the vertex corresponding to the point $x_j$, for some $j\in\{1,2\}$ (or else $e'$ and $\widehat{e}$ have large intersection in $G(Q')$, a contradiction), and $e'$ contains at least one vertex corresponding to a point of $U\setminus \{ x_1,x_2\} = \widehat{U}\setminus \{ y_1,y_2\}$. Therefore $e'$ overlaps both $\widehat{e}$ and $\widehat{f}_j$, so (III${}'$) holds. Similarly, if $e'$ has large intersection with $\widehat{f}_j$ for some $j\in\{1,2\}$ then $e'$ contains the vertex corresponding to $y_j$ (or else $e'$ and $\widehat{f}_j$ have large intersection in $G(Q')$, a contradiction), and $e'$ contains at least one vertex corresponding to a point in $W_j \setminus \{ y_j\} = \widehat{W}_j\setminus \{ x_j\}$. Again, $e'$ overlaps both $\widehat{e}$ and $\widehat{f}_j$, proving that (III${}'$) holds, as required. 
\end{proof} Next we analyse these switchings to find a relationship between the sizes of $\mathcal{C}_\ell$ and $\mathcal{C}_{\ell-1}$. \begin{lemma} Assume that the conditions of Theorem~\ref{main} hold and let $\ell'$ be the first value of $\ell\leq N$ such that $\mathcal{C}_{\ell}=\emptyset$, or $\ell'=N+1$ if no such value exists. Then \[ |\mathcal{C}_\ell| = |\mathcal{C}_{\ell-1}|\, \frac{(r-1)M_2}{2\ell M}\, \left( 1 + O\left( \frac{k_{\mathrm{max}}^{3} + \ell\, k_{\mathrm{max}}}{M_2} \right)\right) \] uniformly for $1\leq \ell < \ell'$. \label{ratio} \end{lemma} \begin{proof} Fix $\ell\in \{ 1,\ldots, \ell'-1\}$ and let $Q\in\mathcal{C}_\ell$ be given. Let $\mathcal{S}$ be the set of all 4-tuples $(x_1,x_2,y_1,y_2)$ of distinct points such that \begin{itemize} \item $y_1$ and $y_2$ belong to distinct cells, \item $\{ x_1, x_2\}$ is a loop in $U$ and $y_j\in W_j$ for $j=1,2$, for some distinct parts $U, W_1, W_2\in Q$, and \item neither $W_1$ nor $W_2$ contains a loop. \end{itemize} Note that $\mathcal{S}$ contains every 4-tuple which defines a legal $\ell$-switching from $Q$, so $|\mathcal{S}|$ is an upper bound for the number of legal $\ell$-switchings which can be performed in $Q$. There are precisely $2\ell$ ways to choose a pair of points $(x_1,x_2)$ which form a loop in some part $U$, using properties (i) and (ii) of the definition of $\Lambda_r^+({\boldsymbol{k}})$. For an easy upper bound, there are at most $M^2$ ways to select $(y_1,y_2)$ with the required properties, giving $|\mathcal{S}|\leq 2\ell M^2$. 
In fact \begin{equation} |\mathcal{S}| = 2\ell\, M^2 \left(1 + O\left(\frac{k_{\mathrm{max}} + \ell}{M}\right)\right), \end{equation} since there are precisely $M-r\ell$ ways to select a point $y_1$ which belongs to some part $W_1$ which does not contain a loop, and then there are $M - r(\ell+1) + O(k_{\mathrm{max}}) = M + O(k_{\mathrm{max}} + \ell)$ ways to select a point $y_2$ which lies in a part $W_2$ which contains no loops and which is distinct from $W_1$, such that $y_1$ and $y_2$ are not in the same cell. We now find an upper bound for the number of 4-tuples in $\mathcal{S}$ which give rise to illegal $\ell$-switchings, and subtract this value from $|\mathcal{S}|$. By Lemma~\ref{forward-legal} it suffices to find an upper bound for the number of 4-tuples in $\mathcal{S}$ which satisfy one of Conditions (I), (II), (III). First note that no 4-tuple in $\mathcal{S}$ satisfies Condition (I), by definition of $\mathcal{S}$. If Condition (II) holds then $f_1\cap f_2\neq \emptyset$ or $e\cap f_j\neq \emptyset$ for some $j\in\{1,2\}$. This occurs for at most $O(\ell k_{\mathrm{max}} M)$ 4-tuples in $\mathcal{S}$. If Condition (III) holds then some edge $e'$ of $G(Q)\setminus \{ e, f_1, f_2\}$ intersects two of $e$, $f_1$ and $f_2$. There are $O(\ell k_{\mathrm{max}}^2 M)$ choices of 4-tuples in $\mathcal{S}$ which satisfy this condition. Combining these contributions, we find that there are \begin{equation} \label{l-for} 2\ell M^2\left( 1 + O\left(\frac{k_{\mathrm{max}}^{2} + \ell}{M}\right)\right) \end{equation} 4-tuples $(x_1,x_2,y_1,y_2)$ which give a legal $\ell$-switching from $Q$. Next, suppose that $Q'\in \mathcal{C}_{\ell-1}$ (and note that $\mathcal{C}_{\ell-1}$ is nonempty, by definition of $\ell'$). 
Let $\mathcal{S}'$ be the set of all 4-tuples $(x_1,x_2,y_1,y_2)$ of distinct points such that \begin{itemize} \item $x_1$ and $x_2$ belong to the same cell, \item $x_j\in \widehat{W}_j$ for $j=1,2$ and $y_1,y_2\in \widehat{U}$, for some distinct parts $\widehat{U}$, $\widehat{W}_1$, $\widehat{W}_2$ of $Q'$, and \item $\widehat{U}$ does not contain a loop (so in particular, $y_1$ and $y_2$ belong to distinct cells). \end{itemize} Again, $\mathcal{S}'$ contains every 4-tuple which describes a legal reverse $\ell$-switching from $Q'$, so the number of legal reverse $\ell$-switchings which may be performed in $Q'$ is at most $|\mathcal{S}'|$. There are $M_2$ choices for $(x_1,x_2)$, and each such choice determines two distinct parts $\widehat{W}_1$, $\widehat{W}_2$ unless $\{x_1,x_2\}$ is a loop in some part of $Q'$. Using properties (i) and (ii) of the definition of $\Lambda_r^+({\boldsymbol{k}})$, there are exactly $2(\ell-1)$ choices of $(x_1,x_2)$ such that $\{ x_1,x_2\}$ is a loop in $Q'$. Next, there are precisely $M-r(\ell-1)$ choices for $y_1$ belonging to some part $\widehat{U}$ which does not contain a loop, and then there are $r-1$ choices for $y_2\in \widehat{U}\setminus\{ y_1\}$. For a lower bound, there are at least $(r-1)(M-r(\ell+1))$ choices for $(y_1,y_2)$ which ensure that $\widehat{U}$ contains no loop and is distinct from both $\widehat{W}_1$ and $\widehat{W}_2$. Therefore \[ (r-1)\left(M - r(\ell+1)\right)\, \left(M_2 - 2(\ell-1)\right)\leq |\mathcal{S}'| \leq (r-1)\left(M - r(\ell-1)\right)\, M_2, \] which implies that $|\mathcal{S}'| = (r-1)M\, M_2\, (1 + O(\ell/M + \ell/M_2))$. Now we must find an upper bound for the number of 4-tuples in $\mathcal{S}'$ which give an illegal reverse $\ell$-switching from $Q'$, and subtract this number from $|\mathcal{S}'|$. By Lemma~\ref{legal-reverse} it suffices to find upper bounds for the number of elements of $\mathcal{S}'$ which satisfy (at least) one of conditions (I${}'$), (II${}'$) or (III${}'$). 
If Condition (I${}'$) holds then $\widehat{W}_j$ contains a loop for some $j\in\{1,2\}$, which is true for $O(\ell k_{\mathrm{max}} M)$ 4-tuples in $\mathcal{S}'$. (Recall that $\widehat{U}$ has no loop, by definition of $\mathcal{S}'$.) Condition (II${}'$) holds if $\widehat{e}\cap\widehat{f}_j$ is nonempty for some $j\in \{ 1,2\}$. This occurs for at most $O(k_{\mathrm{max}} M_2)$ 4-tuples in $\mathcal{S}'$. Next, suppose that Condition (III${}'$) holds. Then there exists an edge $e'\in G(Q')\setminus \{ \widehat{e},\, \widehat{f}_1,\, \widehat{f}_2\}$ which intersects both $\widehat{e}$ and $\widehat{f}_j$ for some $j\in \{ 1,2\}$. The number of 4-tuples in $\mathcal{S}'$ which satisfy this condition is $O(k_{\mathrm{max}}^2 M_2)$. Putting these contributions together, the number of 4-tuples in $\mathcal{S}'$ which give a legal reverse $\ell$-switching from $Q'$ is \begin{equation} \label{l-rev} (r-1)M M_2\left(1 + O\left(\frac{k_{\mathrm{max}}^2}{M} + \frac{\ell k_{\mathrm{max}}}{M_2}\right) \right) = (r-1)M M_2\left(1 + O\left(\frac{k_{\mathrm{max}}^3 + \ell k_{\mathrm{max}}}{M_2}\right) \right), \end{equation} since $1/M \leq k_{\mathrm{max}}/M_2$. Combining (\ref{l-for}) and (\ref{l-rev}) completes the proof. \end{proof} The following summation lemma from~\cite{GMW} will be needed, and for completeness we state it here. (The statement has been adapted slightly from that given in~\cite{GMW}, without affecting the proof given there.) \begin{lemma}[{\cite[Corollary 4.5]{GMW}}]\label{sumcor2} Let $N\geq 2$ be an integer and, for $1\leq i\leq N$, let real numbers $A(i)$, $C(i)$ be given such that $A(i)\geq 0$ and $A(i)-(i-1)C(i) \ge 0$. Define $A_1 = \min_{i=1}^N A(i)$, $A_2 = \max_{i=1}^N A(i)$, $C_1 = \min_{i=1}^N C(i)$ and $C_2=\max_{i=1}^N C(i)$. Suppose that there exists a real number $\hat{c}$ with $0<\hat{c} < \tfrac{1}{3}$ such that $\max\{ A_2/N,\, |C_1|, \, |C_2|\} \leq \hat{c}$. 
Define $n_0,\ldots ,n_N$ by $n_0=1$ and \[ n_i = \frac{1}{i}\bigl(A(i)-(i-1)C(i)\bigr)\, n_{i-1}\] for $1\leq i\leq N$. Then \[ \varSigma_1 \leq \sum_{i=0}^N n_i\leq \varSigma_2, \] where \begin{align*} \varSigma_1 &= \exp\bigl( A_1 - \tfrac{1}{2} A_1 C_2 \bigr) - (2e\hat{c})^N,\\ \varSigma_2 &= \exp\bigl( A_2 - \tfrac{1}{2} A_2 C_1 + \tfrac12 A_2 C_1^2 \bigr) + (2e\hat{c})^N. \quad\qedsymbol \end{align*} \end{lemma} This summation lemma will now be applied. \begin{lemma} Under the conditions of Theorem~\ref{main} we have \[ \sum_{\ell = 0}^{N} |\mathcal{C}_\ell| = |\mathcal{C}_0| \, \exp\left( \frac{(r-1)M_2}{2M} + O\left(\frac{k_{\mathrm{max}}^{3}}{M}\right)\right).\] \label{l-switch} \end{lemma} \begin{proof} Let $\ell'$ be as defined in Lemma~\ref{ratio}. By (\ref{l-for}), any $Q\in\mathcal{C}_\ell$ can be converted to some $Q'\in\mathcal{C}_{\ell-1}$ using an $\ell$-switching. Hence $\mathcal{C}_\ell=\emptyset$ for $\ell'\leq \ell\leq N$. In particular, the lemma holds if $\mathcal{C}_0=\emptyset$, so we assume that $\ell'\geq 1$. By Lemma~\ref{ratio}, there exists some uniformly bounded function $\beta_\ell$ such that \begin{equation}\label{lrat} \frac{|\mathcal{C}_\ell|}{|\mathcal{C}_0|} = \frac{1}{\ell}\, \frac{|\mathcal{C}_{\ell-1}|}{|\mathcal{C}_0|}\, \bigl( A(\ell) - (\ell-1)C(\ell)\bigr) \end{equation} for $\ell = 1,\ldots, N$, where \[ A(\ell) = \frac{(r-1)M_2-\beta_\ell\, k_{\mathrm{max}}^{3}}{2M}, \quad C(\ell) = \frac{\beta_\ell\, k_{\mathrm{max}}}{2M} \] for $1\le\ell<\ell'$, and $A(\ell) = C(\ell) = 0$ for $\ell'\leq \ell \leq N$. Now we apply Lemma~\ref{sumcor2}. It is clear that $A(\ell) - (\ell-1)C(\ell)\geq 0$, from (\ref{lrat}) if $1\leq \ell < \ell'$, or by definition if $\ell'\leq \ell \leq N$. If $\beta_\ell \geq 0$ then $A(\ell)\geq A(\ell)-(\ell-1)C(\ell) \geq 0$, while if $\beta_\ell < 0$ then $A(\ell)$ is nonnegative by definition. 
Next, define $A_1, A_2, C_1, C_2$ to be the minimum and maximum of $A(\ell)$ and $C(\ell)$ over $1\leq \ell\leq N$, as in Lemma~\ref{sumcor2}, and set $\hat c=\frac{1}{16}$. Since $A_2 = (r-1)M_2/(2M) + o(1)$ and $C_1,C_2 = o(1)$, we have that $\max\{ A_2/N,\, |C_1|,\, |C_2|\}\leq \hat{c}$ for $M$ sufficiently large, by definition of $N$. Lemma~\ref{sumcor2} applies and gives an upper bound \[ \sum_{\ell = 0}^{N} \frac{|\mathcal{C}_\ell|} {|\mathcal{C}_0|} \leq \exp\left( \frac{(r-1)M_2}{2M} + O\left(\frac{k_{\mathrm{max}}^{3}}{M}\right)\right) + O\bigl( (e/8)^{N}\bigr).\] Now $(e/8)^{N}\leq (e/8)^{\log M} \leq M^{-1}$, which leads to \begin{equation} \label{upper} \sum_{\ell = 0}^{N} \frac{|\mathcal{C}_\ell|} {|\mathcal{C}_0|} \leq \exp\left( \frac{(r-1)M_2}{2M} + O\left(\frac{k_{\mathrm{max}}^{3}}{M}\right)\right). \end{equation} If $\ell' = N + 1$ then the lower bound given by Lemma~\ref{sumcor2} is the same as the upper bound (\ref{upper}), within the stated error term, establishing the result in this case. Finally suppose that $1\leq \ell'\leq N$. Then (\ref{l-rev}) shows that \[ M_2 = O(k_{\mathrm{max}}^3 + \ell' k_{\mathrm{max}}) = o(M + M^{1/3}\log M) = o(M).\] If $\ell'=1$ then $M_2 = O(k_{\mathrm{max}}^{3})$ and hence $(r-1)M_2/(2M) = O\bigl(k_{\mathrm{max}}^3/M\bigr)$, so in this case the trivial lower bound of 1 matches the upper bound (\ref{upper}), within the stated error term. If $2\leq \ell'\leq N$ then using (\ref{lrat}) with $\ell=1$, we obtain \[ \sum_{\ell = 0}^{N} \frac{|\mathcal{C}_\ell|}{|\mathcal{C}_0|} \geq 1 + \frac{|\mathcal{C}_1|}{|\mathcal{C}_0|} = 1 + A(1) = 1 + \frac{(r-1)M_2}{2M} + O\bigl(k_{\mathrm{max}}^3/M\bigr). \] Since here $M_2 = o(M)$, this expression matches the upper bound (\ref{upper}), within the stated error term. This completes the proof. \end{proof} Theorem~\ref{main} now follows immediately, by combining (\ref{pr-equation}), (\ref{a=0}) and Lemma~\ref{l-switch}. \end{document}
\begin{document} \algrenewcommand\algorithmicrequire{\textbf{Input:}} \algrenewcommand\algorithmicensure{\textbf{Output:}} \title{Competition and Recall in Selection Problems \footnote{\today}\footnote{The authors gratefully acknowledge funding from ANITI ANR-3IA Artificial and Natural Intelligence Toulouse Institute, grant ANR-19-PI3A-0004, and from the ANR under the Investments for the Future program, grant ANR-17-EURE- 0010. J. Renault also acknowledges the support of ANR MaSDOL-19-CE23-0017-01.}} \author{Fabien Gensbittel \thanks{Toulouse School of Economics, University of Toulouse Capitole, Toulouse, France, [email protected]} \and Dana Pizarro \thanks{Toulouse School of Economics, Universit\'e Toulouse 1 Capitole and ANITI, France, [email protected]} \and J\'{e}r\^{o}me Renault \thanks{Toulouse School of Economics, University of Toulouse Capitole, Toulouse, France, [email protected]} } \date{ } \maketitle \thispagestyle{empty} \begin{abstract} We extend the prophet inequality problem to a competitive setting. At every period $ t \in \{1,\ldots,n\}$, a new sample $X_t$ from a known distribution $F$ arrives, which is publicly observed. Then two players simultaneously decide whether to pick an available value or to pass and wait until the next period (ties are broken uniformly at random). As soon as a player gets one sample, he leaves the market and his payoff is the value of this item. In the first variant, the ``no recall'' case, the agents can only bid in period $t$ for the current value $X_t$. In the second variant, the ``full recall'' case, the agents can also bid at period $t$ for any of the previous samples with values $X_1,\ldots,X_{t-1}$ that has not already been selected. For each variant, we study the subgame-perfect Nash equilibrium payoffs of the corresponding game, as a function of the number of periods $n$ and the distribution $F$. 
More specifically, we give a full characterization in the full recall case, and show in particular that both players always get the same payoff at equilibrium, whereas in the no recall case the set of equilibrium payoffs typically has full dimension. Regarding the welfare at equilibrium, surprisingly it is possible that the best equilibrium payoff a player can have is strictly higher in the no recall case than in the full recall case. However, symmetric equilibrium payoffs are always better when the players have full recall. Finally, we show that in the case of 2 arrivals and arbitrary distributions on $[0,1]$, the prices of Anarchy and Stability in the no recall case are at most 4/3, and this bound is tight.\\ \noindent \textbf{Keywords}: Optimal stopping, Competing agents, Recall, Prophet inequalities, Price of anarchy, Price of stability, Subgame-perfect equilibria, Game theory. \end{abstract} \vskip4truept \pagenumbering{arabic} \section{Introduction} \subsection{Context} The theory of optimal stopping has a vast history and is concerned with the problem of a decision-maker who observes a sequence of random variables arriving over time and has to decide when to stop optimizing a particular objective. Probably, the two best-known problems in optimal stopping are the Secretary Problem and the Prophet Inequality. In the classical model of the former, introduced in the ‘60s, a decision-maker observes a sequence of values arriving over time and has to pick one in a take-it-or-leave-it fashion maximizing the probability of picking the highest one. In other words, after observing an arrival, he has to decide whether to pick this value (and gets a reward equal to the value picked) or to pass and continue observing the sequence. Once a value is picked, the game ends and the goal of the decision-maker is to maximize the probability of getting the highest value. 
Lindley \cite{L61} proved that an optimal stopping rule for this problem consists in rejecting a certain number of values first and then accepting the first value higher than the maximum observed so far. When the number of arrivals goes to infinity, the probability of picking the best value approaches $1/e$. Since then, several variants have been studied in the literature (see, e.g., \cite{EFK20, F83, IKM06}). Related to the secretary problem is the optimal selection problem where the decision-maker knows not only the total number of arrivals but also the distribution behind them, and he has to decide when to stop, with the goal of maximizing the expected value of what he gets. Instead of looking at this problem as an optimal stopping problem, in the ‘70s researchers started to answer the question of how well a decision-maker can play compared to what a prophet can do, where a prophet is someone who knows all the realizations of the random variables in advance and simply picks the maximum. These inequalities are called Prophet Inequalities and it was in the '70s that Krengel and Sucheston, and Garling \cite{KS78} proved that the decision-maker can get at least 1/2 of what a prophet gets, and that this bound is tight. Later, in 1984 Samuel-Cahn \cite{SC84} proved that instead of looking at all feasible stopping rules, it is enough to look at a single threshold strategy to get the 1/2 bound. These results are for a general setting where the random variables are independent but not necessarily identically distributed, and then one natural question that arose was whether this bound could be improved assuming i.i.d. random variables. Kertz \cite{K86} answered this question positively and provided a lower bound of roughly 0.7451. Quite recently, Correa et al. \cite{CFHOV17} proved that this bound is tight. 
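As an illustration of Lindley's cutoff rule (a standalone numerical sketch, not taken from the papers cited above; the function names are ours), one can compute the exact success probability of the strategy that rejects the first $k$ of $n$ values and then accepts the first value exceeding all previous ones, namely $\frac{k}{n}\sum_{i=k+1}^{n}\frac{1}{i-1}$ for $k\geq 1$; optimizing over $k$ recovers the $1/e$ limit:

```python
def success_prob(n, k):
    """P(select the best of n values) when rejecting the first k values, then
    accepting the first value that beats everything seen so far."""
    if k == 0:
        return 1.0 / n  # the very first value is accepted
    return (k / n) * sum(1.0 / (i - 1) for i in range(k + 1, n + 1))

def best_cutoff(n):
    """Cutoff k maximizing the success probability."""
    return max(range(n), key=lambda k: success_prob(n, k))

for n in (10, 100, 1000):
    k = best_cutoff(n)
    print(n, k, round(success_prob(n, k), 4))
```

For $n=10$ the optimal cutoff is $k=3$ with success probability $\approx 0.3987$, and as $n$ grows the optimal cutoff approaches $n/e$ with success probability tending to $1/e \approx 0.3679$.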
A lot of work has appeared considering different model variants (infinitely many arrivals, feasibility constraints, multi-selection, etc.) but in recent decades the problem has gained particular attention due to its surprising connection with online mechanisms (see, e.g., \cite{CHMS10, CFPV19, HKS07}). \subsection{Our paper} In the i.i.d. setting of the prophet inequality problem, there is a decision-maker who observes an i.i.d. sample $X_1, \dots, X_n$ distributed according to a known distribution $F$. Only one value can be selected, and the benchmark is $\mathbb{E}(\max_i X_i)$. After each arrival, the decision-maker must make an immediate and irrevocable decision whether to pick this value or not. If he selects it, he leaves the market and the game ends, otherwise he continues observing the next arrival. This problem is commonly motivated by applications to online auction theory, where at each time period a seller, who wants to sell an item, receives an offer from a potential buyer and has to decide whether to accept it and sell the item or to reject it and wait for the next offer. Most of the work on the prophet inequality problem relies on the fact that there is only one decision-maker and that the decision must be made immediately after observing the offer. However, in some situations of interest-- a person who wants to buy a house, a company hiring an employee, among others-- it seems reasonable to allow more than one decision-maker, as well as to allow the decision to be made later in time. Driven by this fact, in this paper we study two variants of the classic setting. More specifically, on the one hand, we consider the setting where two decision-makers compete to get the best possible value, and we call it \textit{competitive selection problem with no recall}. 
On the other hand, we consider the problem where the two decision-makers are allowed to select any available value that appeared in the past, and not only the one that just arrived, and we call it the \textit{competitive selection problem with full recall}. For each variant, we study the two-player game induced by the optimal stopping problem, focusing on subgame-perfect Nash equilibria (SPE for short). The main contributions of this paper can be divided into three lines, which are summarized in what follows. \\ \noindent{\bf Description of the subgame-perfect equilibrium payoffs.} The first stream of contributions refers to the study of the set of SPE payoffs (SPEP) for both settings. Regarding the full recall case, we fully characterize the set of SPEP (Theorems~\ref{thm:SPEPcharact} and \ref{thm:caract}) and we obtain that each such payoff is symmetric, meaning that every SPE gives the same payoff to both players. In the no recall case, the set of SPEP is clearly symmetric with respect to the diagonal but contains points outside of the diagonal, inducing different payoffs for the players. In this case, we give in Theorem~\ref{thm:norecall} recursive formulas to compute the best and the worst SPEP (in terms of the sum of the players' payoffs), as well as the worst payoff a single player can get at a subgame-perfect equilibrium. To illustrate these results, in Section~\ref{sec:unif} we provide a detailed study of the particular case where $F$ is the uniform distribution on $[0,1]$. \\ \noindent{\bf Comparison of the two variants.} The second stream of results focuses on comparing the highest SPEP in both settings. Surprisingly, an example (see Section~\ref{sec:simple:example}) shows that the best SPEP a player can obtain may be higher in the no recall case than in the full recall case.
This can be explained by the possible existence of an asymmetric SPE in the no recall case which significantly favors one player at the expense of the other, who picks a value early in the game. However, if we restrict our attention to symmetric SPEP, we show in Theorem~\ref{thm:comparison} that players are always better off in the variant with recall. More precisely, we prove that for every number of periods $n$ and continuous distribution $F$, if $(u,u)$ is a SPEP of the game with full recall and $(v,v)$ is a SPEP of the game without recall, then $u\geq v$. Furthermore, this advantage can be significant: if $F$ is the uniform distribution on $[0,1]$ and $n=5$, the payoff of the players in the full recall case is at least $7\%$ higher than in the no recall case.\\ \noindent {\bf Efficiency of equilibria.} To analyze efficiency, we use in Section~\ref{sec:eff} the standard notions of Price of Anarchy and Price of Stability, and we introduce the notion of Prophet Ratio, defined as the sum of payoffs a team of prophets would obtain, divided by the best sum of payoffs at equilibrium. When $F$ is uniform on $[0,1]$, we find numerically that equilibria are quite efficient and that having full recall gives a small advantage to the players, in the sense that their payoff is closer to the one obtained by playing the best feasible strategy in the corresponding setting. We also show that both the Price of Anarchy and the Price of Stability are maximized in the no recall case with $n=2$; intuitively, this is the case that maximizes the probability that a player gets nothing. The Prophet Ratio in the same context is maximized for $n=5$, which is less intuitive. Finally, we consider the case $n=2$ without recall, and let $F$ be any distribution with support contained in $[0,1]$.
We prove in Theorem \ref{prop:boundeff} that both the Price of Anarchy and the Price of Stability are at most $4/3$, and that this bound (reminiscent of the bound for routing problems with linear latencies, see \cite{RT02}) is tight in both cases. \\ It is worth noting that, although competition and recall variants have already been considered in the literature for some optimal stopping problems (as we discuss in the next section), to the best of our knowledge, our paper is the first to consider them from a game-theoretic perspective with a focus on the study of the set of SPEP, and it thus constitutes a good starting point for future research of interest across the academic communities in Operations Research, Computer Science, and Economics. \subsection{Related literature} \label{sec:rel:lit} As mentioned above, the literature on optimal stopping theory is extensive and mainly focused on finding optimal or near-optimal policies for the different model variants, as well as on studying the guarantees of some simple strategies, such as single threshold strategies, even when they are not optimal. This paper, instead, introduces a game-theoretic approach for a model with competition in which recall is allowed. In what follows, we revisit some of the existing literature on optimal stopping problems featuring these two particular characteristics. \vskip4truept \noindent{\bf Optimal Stopping with competition.} Abdelaziz and Krichen \cite{AK07} survey the literature on optimal stopping problems with more than one decision-maker up to the 2000s. More recently, Immorlica et al. \cite{IKM06} and Ezra et al. \cite{EFK20} study the secretary problem with competition. The former considers a classical setting where decisions are made in a take-it-or-leave-it fashion and ties are broken uniformly at random, and they show that as the number of competitors grows, the moment at which ``accept'' is played for the first time in an equilibrium decreases.
The latter incorporates the recall option, and studies the structure and performance of equilibria in this game when ties are broken uniformly at random or according to a global ranking. Our paper considers a different model, since the problem is more closely related to the prophet inequality setting. In this sense, the work closest to ours is the recent paper by Ezra et al. \cite{EFK21}, who introduce, independently of our paper, the no recall case, again with the two variants for tie breaking. However, the novelty of our paper is the incorporation of recall in addition to the competition between players. Moreover, instead of studying the reward guarantees under single-threshold strategies, we focus on the study of equilibria of the game. \vskip4truept \noindent{\bf Optimal Stopping with recall.} Allowing decision-makers to choose among any of the values that have arrived so far is a variant of the classic problem that may have interesting applications. If this extension is considered without competition, it is easy to see that the optimum is just to wait until the end and pick the best value. However, adding competition to the model makes the problem interesting, and characterizing the set of equilibrium payoffs and studying their efficiency are challenging questions that we address in this paper. Notice that this notion of recall is not new for some optimal stopping problems. For example, Yang \cite{Y74} considers a variant of the secretary problem where the interviewer is allowed to make an offer to any applicant already interviewed. In his model, the applicant rejects the offer with some probability that depends on when the offer is made, and he studies the optimal stopping rules in this context. Thereafter, several authors have studied other variants of the secretary problem with recall (see, e.g., \cite{EFK20, P81, S94}).
Our work differs from most of them not only in the model we consider but also in the questions we ask, since we are more interested in a game-theoretic approach to the problem. \\ \subsection{Roadmap} The remainder of this paper is organized as follows: We start by presenting the model in Section~\ref{sec:model}, including the description of the games for the full recall and no recall cases in Section~\ref{sec:model:game}. The results of our paper are presented formally in Section~\ref{sec:results}. This section is divided into four parts: In Section~\ref{sec:payoffs}, we study the set of SPEP, starting with its computation for a very simple example (see Section~\ref{sec:simple:example}), and then making a complete analysis for the full recall case (see Section~\ref{sec:payoffs:FR}) and for the no recall case (see Section~\ref{sec:payoff:NR}). In Section~\ref{sec:comparison} we present the results concerning the comparison of the two variants, whereas in Section~\ref{sec:eff} we study the efficiency of equilibria. In Section~\ref{sec:unif} we make a detailed analysis of the case where $F$ is uniform on $[0,1]$. Finally, the proofs of the results are relegated to Section~\ref{sec:proofs}. \section{Model}\label{sec:model} Consider a sequence of samples $X_1, \dots, X_n$ of a random variable $X$ distributed according to a c.d.f.\ $F$ with support included in $[0,1]$.\footnote{We assume that the support of $F$ is included in $[0,1]$ to keep the notation light, but the results can be easily extended to the general case.} There are two players, or decision-makers, competing to select the highest value among $X_1, \dots, X_n$. More explicitly, at each time period $t=1,\dots,n$, the decision-makers (who know the distribution $F$) observe $X_{t}$ and simultaneously decide whether or not to select one value in the current {\it{feasible set}} $\mathcal{F}_t$.
Once a decision-maker chooses a value $X_j$, he leaves the market obtaining a payoff of $X_{j}$, and this value no longer belongs to the feasible set. Decision-makers must decide when to stop, maximizing the expected value of their payoff. If at time $t$ both agents want to select the same value, we break the tie uniformly at random. That is, each of them gets the value with probability $1/2$, and the decision-maker who gets it leaves the market, whereas the other passes to the next period. It remains to specify the sets of feasible values, distinguishing the two cases we consider. In the first one, namely the {\it{full recall case}}, the set $\mathcal{F}_t \subset \{1, \dots, t\}$ of feasible elements at time $t$ consists of all the periods whose values have not yet been selected by a decision-maker, and a {\it{feasible action}} is a probability distribution over the set $\mathcal{F}_t \cup \{ \emptyset \}$, where $\emptyset$ represents the action of not selecting anything. In this case, the game ends when both decision-makers have chosen an element, which could happen after period $n$ if both players are still present at period $n$. We assume that at stage $n$ all players who are present select the element corresponding to the highest value. If both players are present, there is a tie, and the player who loses the toss gets the second best value. The second variant we consider is the {\it{no recall case}}. In this case, the decision-makers face a take-it-or-leave-it decision at each time period. That is, after observing the sample that has just arrived, they must decide whether to take it or not. Regardless of their decision, the value cannot be chosen later. Thus, the set of feasible elements at time $t$ is just $\mathcal{F}_t= \{t\}$ and a feasible action is a probability distribution on $\{t, \emptyset\}$, where $\emptyset$ represents the action of not selecting anything.
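The no recall dynamics just described can be sketched in a short simulation. The following snippet is our own illustration (the fixed-threshold strategies and all names are assumptions chosen for concreteness, not the equilibrium strategies studied below): both remaining players bid for the current sample whenever it exceeds their personal threshold, and ties are broken by a fair coin.

```python
import random

def play_no_recall(thresholds, n=5, seed=0):
    """Simulate one play of the no recall game with two players using
    fixed-threshold strategies. Returns the pair of payoffs; a player
    who never selects anything gets 0."""
    rng = random.Random(seed)
    payoffs = [None, None]          # None = still in the market
    for t in range(n):
        x = rng.random()            # X_t ~ Uniform(0,1) for illustration
        bidders = [j for j in range(2)
                   if payoffs[j] is None and x > thresholds[j]]
        if len(bidders) == 2:       # tie: winner gets x, loser stays
            winner = rng.choice(bidders)
            payoffs[winner] = x
        elif len(bidders) == 1:
            payoffs[bidders[0]] = x
    return tuple(0.0 if p is None else p for p in payoffs)
```

Averaging `play_no_recall` over many seeds gives a quick estimate of the expected payoffs induced by a given pair of threshold strategies.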
Notice that in the 1-player case (a decision problem), the optimal strategy with full recall is simply to wait until the end and pick the element corresponding to the maximum of $\{X_1,\dots,X_n\}$, while in the 1-player case with no recall we have a standard prophet problem, whose value is smaller than the expectation of the maximum of $\{X_1,\dots,X_n\}$. Obviously, having the possibility of getting a sample observed in the past is beneficial to the player. In our multiplayer setting, we cannot say {\it a priori} that the full recall case is beneficial to the players, as there are examples where having more information or more actions decreases the sum of the payoffs of the players at equilibrium. This motivated us to ask how important it is to be able to choose a value observed in the past. To answer this question, we study the games behind the two model variants described above. In particular, we study the set of SPEP as well as the Price of Anarchy (PoA) and the Price of Stability (PoS). We remark here that throughout the paper we will write ``the decision-maker selects value $X_i$'' to refer to the decision-maker selecting element $i$ from $\mathcal{F}_t$. \subsection{Description of the games} \label{sec:model:game} We now formally describe the games induced by the full recall case and the no recall case, denoted by $\Gamma^{FR}_n$ and $\Gamma^{NR}_n$, respectively. Then, we specify the notions of equilibrium that will be used throughout the paper. \begin{enumerate} \item {\bf Full recall case: game $\Gamma^{FR}_n$.} For each $t \in \{0,1,\dots,n+1\}$ we denote by $\mathcal{H}_t$ the set of possible histories up to stage $t$. $\mathcal{H}_0$ only contains the empty history. $\mathcal{H}_1$ contains what happens at stage 1, i.e., the sample $X_1$, who tried to get $X_1$ and who got $X_1$ (possibly nobody). $\mathcal{H}_2$ contains everything that happened at stages 1 and 2, and so on.
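For uniform $F$, both 1-player benchmarks admit exact expressions: the full recall value is $\mathbb{E}(\max_i X_i)=n/(n+1)$, while the no recall value satisfies the standard backward-induction recursion $c_1=1/2$, $c_k=(1+c_{k-1}^2)/2$, since $\mathbb{E}(X\vee c)=(1+c^2)/2$ for uniform $X$. A minimal sketch (our own illustration; the function name is ours) comparing the two:

```python
from fractions import Fraction

def one_player_values(n):
    """Exact 1-player benchmarks for F uniform on [0,1]:
    no recall gives the backward-induction value c_n with
    c_1 = 1/2 and c_k = (1 + c_{k-1}^2)/2, while full recall
    gives E[max] = n/(n+1). Returns (no-recall, full-recall)."""
    c = Fraction(1, 2)
    for _ in range(n - 1):
        c = (1 + c * c) / 2
    return c, Fraction(n, n + 1)
```

For instance, `one_player_values(2)` returns $(5/8, 2/3)$: waiting until the end strictly beats the take-it-or-leave-it value as soon as $n \geq 2$.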
As usual, a strategy for player $j \in \{1,2\}$ is an element $\sigma_j=(\sigma_{j,t})_{t=0,\dots,n}$, where $\sigma_{j,t}$ is a measurable map which associates with every history in $\mathcal{H}_t$ an available action, that is, a probability distribution over $\mathcal{F}_t \cup \{ \emptyset \}$. A strategy profile $(\sigma_1, \sigma_2)$ induces a probability distribution over the set of possible plays $\mathcal{H}_{n+1}$, and the payoff (or utility) of each player is defined as the expectation of the value he gets, with the convention that getting none of the samples yields a payoff of $0$. \item {\bf No recall case: game $\Gamma^{NR}_n$.} Here, the set of available elements at stage $t$ is $\mathcal{F}_t =\{t\}$, and we only need to consider histories $\mathcal{H}_t$ for $t \in \{0,\dots,n\}$, where $\mathcal{H}_t$ contains everything that happened up to stage $t$ under the no recall assumption. A strategy for player $j \in \{1,2\}$ is an element $\sigma_j=(\sigma_{j,t})_{t=0,\dots,n-1}$, where $\sigma_{j,t}$ is a measurable map which associates with every history in $\mathcal{H}_t$ an available action, that is, a probability distribution over $\{t, \emptyset\}$. A strategy profile $(\sigma_1, \sigma_2)$ induces a probability distribution over the set of possible plays $\mathcal{H}_{n}$, and payoffs are defined as in the full recall case. \item{\bf Equilibrium notions.} We recall here the usual notions of Nash equilibrium (NE) and subgame perfect equilibrium (see, e.g., \cite{FT91}). The following definitions apply to both games $\Gamma^{FR}_n$ and $\Gamma^{NR}_n$. \begin{definition} A strategy profile $\sigma= (\sigma_1,\sigma_2)$ is a Nash equilibrium (NE) of the game if for every agent $i$ and every strategy $\sigma'_i$, player $i$'s utility when $(\sigma'_i, \sigma_{-i})$ is played is not greater than the one obtained if $\sigma$ is played.
\end{definition} Given a stage number $t$ and a finite history $h_t$ describing everything that happened up to stage $t$, we can define the continuation game after $h_t$. \begin{definition} A strategy profile $\sigma= (\sigma_1,\sigma_2)$ is a subgame perfect equilibrium (SPE) if it induces a NE in every proper subgame of the game (i.e., in any continuation game after a finite history). \end{definition} When studying SPE, we will assume w.l.o.g.\ that as soon as a player is alone in the game, he plays optimally in the remaining decision problem. By best (resp.\ worst) equilibrium, we mean an equilibrium maximizing (resp.\ minimizing) the sum of the payoffs of the players. \end{enumerate} \section{Results} \label{sec:results} The goal of this section is to present the main results of the paper. First, we study the structure of the sets of SPEP. We start by introducing a simple example, where the computation for both the full recall and no recall cases is easy. Then, we present our main result in the full recall case, which fully characterizes the set of SPEP. Regarding the no recall case, we provide recursive formulas to compute the worst payoff a player can get in equilibrium, as well as the sum of the payoffs of the players for both the best and worst SPE, when the distribution function $F$ is continuous. After that, we move on to understanding the relationship between the payoffs a player gets in the two problems. In other words, we answer the question: Is the recall case always beneficial for the players? Finally, we study the efficiency of equilibria for the full recall and no recall cases. The proofs of the results are deferred to Section~\ref{sec:proofs}.
\subsection{Motivating example.}\label{sec:simple:example} Let us consider the particular instance of the problem in which $n$ samples of $X$ arrive sequentially over time and the law $F$ of $X$ is given by: \[ X= \begin{cases} 1/3 & \text{with probability } 1/2,\\ 2/3 &\text{with probability } 1/2.\\ \end{cases} \] We compute the set of SPEP in the full recall and no recall cases, denoted by $E_n^{FR}$ and $E_n^{NR}$, respectively.\\ \noindent \textit{Full recall case.} Note that in this case, the unique SPE is to bid at any time if $X=2/3$ and to pass otherwise until the last period, where the best available value is chosen. Then, if all samples take the value $1/3$, the players obtain a payoff of $1/3$; if only one sample is equal to $2/3$, both players bid for this value and then they obtain $2/3$ with probability $1/2$ and $1/3$ with probability $1/2$; and if more than one realization of $X$ has value $2/3$, both players obtain a payoff of $2/3$. Thus, the expected payoff of a player in equilibrium, namely $h_n$, can be computed as follows \[ h_n=\frac{1}{3} \mathbb{P}( X_{(n)} =1/3)+ \frac{1}{2} \mathbb{P}( \exists! \ i : X_i=2/3 ) + \frac{2}{3} \mathbb{P}(\exists i\neq j : X_i=X_j=2/3), \] where $X_{(n)}$ denotes the maximum of $n$ i.i.d. samples of $X \sim F$. The probabilities above are easy to compute: $\mathbb{P}( X_{(n)} =1/3)=1/2^n$, $\mathbb{P}( \exists! \ i : X_i=2/3 )= n/2^{n}$ and $\mathbb{P}(\exists i\neq j : X_i=X_j=2/3)=1-(n+1)/2^{n}$. Putting everything together, we conclude that $E_n^{FR}= \left\{\left( h_n,h_n\right) \right\}$, with $$h_n=\frac{2}{3}-\frac{n+2}{3 \cdot 2^{n+1}}.$$ \noindent \textit{No recall case.} Because we are considering an instance with $n$ time periods and the random variable only takes two possible values, for the first $n-2$ arrivals, at equilibrium, players bid if and only if the sample has value $2/3$.
Therefore, at equilibrium: if at least two samples have value $2/3$ up to time $n-2$, the players obtain $2/3$; if only one sample has value $2/3$ up to time $n-2$, each player obtains $2/3$ with probability $1/2$ and the expected value of $X_{(2)}$ with probability $1/2$; and if all samples up to time $n-2$ are $1/3$, the players obtain an expected payoff $(e_2^1,e_2^2) \in E_2^{NR}$. Note that $\mathbb{E}(X_{(2)})$ is the value of the decision problem in a standard prophet setting with two arrivals from $X$, and thus $\mathbb{E}\left(X_{(2)}\right)=7/12$. Therefore, the expected payoff of player 1 can be computed as \begin{eqnarray*} \mathbb{P}\left( X_{(n-2)}=1/3\right) e_2^1+\mathbb{P}( \exists! \ i\leq n-2 : X_i=2/3 )\frac{15}{24}+ \mathbb{P}(\exists i\neq j : X_i=X_j=2/3, i,j \leq n-2) \frac{2}{3}, \end{eqnarray*} whereas for player 2 we have \begin{eqnarray*} \mathbb{P}\left( X_{(n-2)}=1/3\right) e_2^2+\mathbb{P}( \exists! \ i\leq n-2 : X_i=2/3 )\frac{15}{24}+ \mathbb{P}(\exists i\neq j : X_i=X_j=2/3, i,j \leq n-2) \frac{2}{3}. \end{eqnarray*} Again, the probabilities above are easy to compute: $\mathbb{P}\left( X_{(n-2)}=1/3\right)=1/2^{n-2}$, $\mathbb{P}( \exists! \ i\leq n-2 : X_i=2/3 )=(n-2)/2^{n-2}$ and $\mathbb{P}(\exists i\neq j : X_i=X_j=2/3, i,j \leq n-2)=1-(n-1)/2^{n-2}$. Thus, we obtain that the expected payoff of player $i$ is given by: \begin{eqnarray}\label{eqn:example:NR} \frac{e_2^i}{2^{n-2}}+ \frac{15}{24} \cdot\frac{n-2}{2^{n-2}}+\frac{2}{3} \cdot \left(1-\frac{n-1}{2^{n-2}} \right)=\frac{2}{3}+\frac{e_2^i}{2^{n-2}}-\frac{n+14}{3\cdot 2^{n+1}}. \end{eqnarray} It remains to compute the set $E_2^{NR}$. To this end, for $a \in \{1/3,2/3\}$ we consider the game, called $\Gamma_1^{NR}(a)$, defined as $\Gamma_1^{NR}$ but with an initial available value $a$. That is, there is a time period ``zero'' where players choose between selecting the value $a$ or passing, before observing the value of the single sample of the game.
We denote by $E_1^{NR}(a)$ the set of SPEP of the game $\Gamma_1^{NR}(a)$. Notice that \begin{equation}\label{eqn:nr:simple:example} E_2^{NR}= \left\{(\gamma_1,\gamma_2) : \gamma_i= \sum_{a \in \{1/3,2/3\} } \mathbb{P}(X=a) e_1^i(a), \text{ where } \left(e_1^1(a), e_1^2(a) \right) \in E_1^{NR}(a)\right\}, \end{equation} and then $E_2^{NR}$ can be easily computed from $E_1^{NR}(a)$. To study $E_1^{NR}(a)$, we consider the payoff matrix of the game $\Gamma_1^{NR}(a)$, represented in Table~\ref{tab:payoff:example}. \begin{table}[h] \centering \setlength{\extrarowheight}{7pt} \begin{tabular}{cc|c|c|c|} & \multicolumn{1}{c}{} & \multicolumn{2}{c}{Player $2$}\\ & \multicolumn{1}{c}{} & \multicolumn{1}{c}{$a$} & \multicolumn{1}{c}{$\emptyset$} \\\cline{3-4} \multirow{2}*{Player $1$} & $a$ & $\left(\frac{1}{2} a + \frac{1}{4} , \frac{1}{2} a + \frac{1}{4}\right)$ & $\left( a, \frac{1}{2}\right)$ \\\cline{3-4} \multirow{2}*{} & $\emptyset$ & $\left(\frac{1}{2},a\right)$ & $\left( \frac{1}{4}, \frac{1}{4}\right)$ \\\cline{3-4} \end{tabular} \caption{\label{tab:payoff:example}Payoff matrix of the game $\Gamma_1^{NR}(a)$.} \end{table} Observe that if $a=2/3$, $(a,a)$ is the unique NE, with payoff $(7/12, 7/12)$, and thus $E_1^{NR}(2/3)=\{(7/12, 7/12)\}$. Otherwise, if $a=1/3$, there are three NE: $(a, \emptyset)$, $(\emptyset,a)$, and a symmetric mixed equilibrium in which both agents play $a$ with probability $1/2$ and pass with probability $1/2$.
The equilibrium payoffs are $(1/3,1/2)$, $(1/2,1/3)$ and $(3/8,3/8)$, respectively, and then $E_1^{NR}(1/3)=\{(1/3,1/2), (1/2,1/3), (3/8,3/8)\}.$ We can now compute the set $E_2^{NR}$ by using \eqref{eqn:nr:simple:example}, obtaining \[E_2^{NR}= \{(11/24,13/24), (13/24, 11/24), (23/48,23/48)\}.\] Finally, we go back to equation \eqref{eqn:example:NR} and conclude that $E_n^{NR}$ is the three-element set \begin{eqnarray}\label{set:SPE:exaple:NR} E_n^{NR}=\left\{ \left(p_n,q_n\right), \left(q_n,p_n \right),\left(r_n, r_n \right) \right\}, \end{eqnarray} with \[ p_n:=\frac{2}{3}-\frac{n+1}{3 \cdot 2^{n+1}}, \ \ q_n:=\frac{2}{3}- \frac{n+3}{3 \cdot 2^{n+1}} \ \ \text{ and } \ \ r_n:=\frac{2}{3}-\frac{n+5/2}{3 \cdot 2^{n+1}}.\] Note that $p_n$ and $q_n$ represent the best and worst possible payoffs of a player in equilibrium, respectively, whereas $r_n$ represents the payoff in between. Recall that $h_n=2/3-(n+2)/(3 \cdot 2^{n+1})$ and therefore $p_n > h_n > r_n > q_n.$ Moreover, $p_n+q_n=2h_n>2r_n$. The latter means that the sum of the equilibrium payoffs of the players in the full recall case is equal to the sum of the payoffs of the players in the no recall case if an asymmetric equilibrium is played. Otherwise, i.e., if in the no recall case the symmetric equilibrium is played, then the sum of the payoffs of the players is strictly lower than in the full recall case, which is somewhat unsurprising. However, for this example we observe that, surprisingly, the best payoff a single player can obtain at equilibrium is strictly higher in the no recall case than in the full recall case. Intuitively, what happens here is that the lack of recall power implies that with positive probability one player will take the ``bad'' value in period $n-1$, and this gives an advantage to the other player, who is now alone in the game. A natural question is to ask how efficient the equilibria of the games are compared with the best feasible strategy.
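The closed forms of this example are easy to double-check by exact arithmetic. The sketch below (our own verification aid; the helper names are ours) recomputes the full recall payoff $h_n$ by conditioning on the number $k$ of samples equal to $2/3$, and checks the relations $p_n > h_n > r_n > q_n$ and $p_n + q_n = 2h_n$.

```python
from fractions import Fraction
from math import comb

Fr = Fraction  # shorthand for exact rationals

def full_recall_payoff(n):
    """Per-player SPE payoff in the full recall game of the example,
    obtained by conditioning on the number k of samples equal to 2/3."""
    total = Fr(0)
    for k in range(n + 1):
        if k == 0:
            v = Fr(1, 3)      # all samples equal 1/3
        elif k == 1:
            v = Fr(1, 2)      # fair coin between 2/3 and 1/3
        else:
            v = Fr(2, 3)      # both players end up with 2/3
        total += Fr(comb(n, k), 2 ** n) * v
    return total

def closed_forms(n):
    """Closed forms h_n, p_n, q_n, r_n stated in the example."""
    d = Fr(1, 3 * 2 ** (n + 1))
    return (Fr(2, 3) - (n + 2) * d,           # h_n
            Fr(2, 3) - (n + 1) * d,           # p_n
            Fr(2, 3) - (n + 3) * d,           # q_n
            Fr(2, 3) - Fr(2 * n + 5, 2) * d)  # r_n
```

For $n=2$ this gives $h_2 = 1/2$, in agreement with the direct computation above.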
We use the well-known notions of Price of Stability (PoS) and Price of Anarchy (PoA), defined as the ratio between the maximal sum of payoffs obtained by the players under any feasible strategy and the best and worst sum of payoffs over SPEs, respectively. For the example, this analysis is easy to do, and we start by noting that in both model variants there exists a feasible strategy profile (not the same in both problems) in which the players obtain the first and the second best values. This implies that the numerator of the PoA and the PoS will be the same and equal to $\mathbb{E}(X_{(n)}+X_{(2:n)})$, where $X_{(2:n)}$ denotes the second best sample, which is given by $4/3-(n+2)/(3 \cdot 2^n ).$ Observe that this value is exactly $2 h_n$ and $p_n+q_n$, and then the PoS is 1 in both settings and the PoA is 1 in the full recall case, meaning that in the full recall case the resulting equilibrium is efficient. On the other hand, the PoA in the no recall case is given by $\mathbb{E}(X_{(n)}+X_{(2:n)})/(2 r_n)$, which is strictly greater than 1 but converges quickly to 1 as $n$ grows. This example highlights the relevance of our work in three ways. First, even in this case where the random variable only takes two possible values, we observe that the computation of the SPEP requires a detailed analysis, and hence computing these sets for a general probability distribution, even under some mild assumptions, seems difficult and challenging. Second, this example shows that the power of recall is not always favorable to all players, so it is an interesting question to understand under which assumptions the equilibrium payoff of a player in the full recall case is guaranteed to be at least as good as in the no recall case. Finally, we see in this example that the SPEP are efficient, which motivates us to study whether this still holds for a general distribution.
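For this example, the no recall PoA admits the exact expression $\mathbb{E}(X_{(n)}+X_{(2:n)})/(2r_n)$; the following short sketch (illustrative only; the function name is ours) evaluates it with exact rationals and confirms that it is strictly greater than 1 and decreases towards 1.

```python
from fractions import Fraction

def no_recall_poa(n):
    """PoA of the no recall example: E[X_(n) + X_(2:n)] divided by
    the worst SPE sum 2*r_n, using the closed forms of the example."""
    top_two = Fraction(4, 3) - Fraction(n + 2, 3 * 2 ** n)
    r = Fraction(2, 3) - Fraction(2 * n + 5, 2) / (3 * 2 ** (n + 1))
    return top_two / (2 * r)
```

For instance, `no_recall_poa(2)` equals $24/23 \approx 1.043$, and the ratio is already below $1.01$ at $n=4$.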
\subsection{Description of the subgame-perfect equilibrium payoffs} \label{sec:payoffs} Here we fully characterize the set of SPEP in the full recall case. For the no recall case, we provide recursive formulas to compute the best and worst sums of SPEP. \subsubsection{Full recall case}\label{sec:payoffs:FR} We go back to the general case and study here the game with full recall, that is, at time $t$, any of the values arrived so far that has not yet been selected can be chosen. We introduce a two-player game $\Gamma^{FR}_n(a,b)$, where for each natural number $n$ and $1 \geq a \geq b \geq 0$, $\Gamma^{FR}_n(a,b)$ is defined as $\Gamma^{FR}_n$ with $n$ samples to arrive and two values $a$ and $b$ available at the beginning of the game. That is, we have a time period ``zero'' where players choose between taking $a$, taking $b$, or passing, before the sequential arrival of the samples. We denote by $E_n^{FR}(a,b) \subset \mathbb{R}_+^2$ the set of SPEP of the game $\Gamma^{FR}_n(a,b).$ Note that the set of SPEP of the game $\Gamma^{FR}_n$ is just the set $E_n^{FR}(0,0)$. The next theorem states that the set of SPEP of the auxiliary game $\Gamma^{FR}_n(a,b)$ is contained in the diagonal, i.e., at each SPE both players get the same payoff. We see this as a surprising result. \begin{thm}\label{thm:SPEPcharact} Consider an instance of the game $\Gamma^{FR}_n(a,b)$, for $a,b$ real numbers such that $0\leq b \leq a\leq 1$ and $n \in \mathbb{N}$. The set of SPE payoffs is contained in the diagonal, that is, $E_n^{FR}(a,b) \subset \{(u,u) : u\in [0, 1]\}$.
Furthermore, if we define $\text{P}E_n^{FR}(a,b)$ as the projection of $E_n^{FR}(a,b)$ onto $\mathbb{R}$, we have $\min~\text{P}E_n^{FR}(a,b)=l_n(a,b)$ and $\max \text{P}E_n^{FR}(a,b)=h_n(a,b)$, where $l_n$ and $h_n$ are defined recursively as follows: \begin{itemize} \item[i)]$l_0(a,b)=h_0(a,b)=\frac{a+b}{2};$ \item[ii)] for $n\geq 1$: \[l_n(a,b)=L\left(a,\mathbb{E}\left(X_{(n)} \vee b\right), d^-_n(a,b) \right) \text{ with } d^-_n(a,b)= \mathbb{E}_X\left( l_{n-1}(a \vee X, \text{med}[a,b,X])\right),\] \[h_n(a,b)=H\left(a,\mathbb{E}\left(X_{(n)} \vee b\right), d_n^+(a,b) \right) \text{ with } d_n^+(a,b)= \mathbb{E}_X\left( h_{n-1}(a \vee X, \text{med}[a,b,X])\right),\] where $X_{(n)}=\max\{X_1, \dots, X_n\}$, med denotes the median, and $L: \mathbb{R}^3 \rightarrow \mathbb{R} $ and $H:\mathbb{R}^3 \rightarrow \mathbb{R}$ are defined by: \[ L(x,y,z) = \begin{cases} z & \text{if } x \leq y ,\\ \frac{1}{2}(x+y) &\text{if } x > y,\\ \end{cases} \;\; {\it and}\;\; H(x,y,z) = \begin{cases} \frac{1}{2}(x+y) & \text{if } x > y \vee z,\\ z &\text{if } x \leq y \vee z.\\ \end{cases} \] \end{itemize} \end{thm} Using the theorem above, we can prove the following result, which fully characterizes the set of SPEP of the game $\Gamma^{FR}_n$ when $F$ is continuous (i.e., when the corresponding distribution is atomless). \begin{thm}\label{thm:caract} Assume $F$ is continuous. Then, for $n\geq 1$, the set $E^{FR}_n$ of SPE payoffs of the game $\Gamma^{FR}_n$ is the segment \[E^{FR}_n=\{\lambda (l_n,l_n)+(1-\lambda) (h_n,h_n) : \lambda \in [0,1]\}, \text{ where } l_n=l_n(0,0) \text{ and } h_n=h_n(0,0).\] That is, $E^{FR}_n$ is convex, contained in the diagonal, and its extreme points are $(l_n(0,0),l_n(0,0))$ and $(h_n(0,0),h_n(0,0))$, where $l_n(0,0)$ and $h_n(0,0)$ are defined in the statement of Theorem~\ref{thm:SPEPcharact}.
\end{thm} \subsubsection{No recall case}\label{sec:payoff:NR} We now consider the no recall variant, where players can only act in a take-it-or-leave-it fashion, without being able to select a sample that arrived in the past. In this section, we assume that $F$ is continuous, which ensures that the set of SPEP is convex, and allows us to derive explicit recursive formulas for the support function of this set in particular directions. We introduce here the two-player game $\Gamma^{NR}_n(a)$, where for each natural number $n$ and $a \in [0, 1]$, $\Gamma^{NR}_n(a)$ is defined as $\Gamma_n^{NR}$ with $n$ samples to arrive, but with a value $a$ available at the beginning. That is, we have a time period ``zero'' where players choose between taking $a$ or passing, before the sequential arrival of the samples. Denoting by $E_n^{NR}(a) \subset \mathbb{R}_{+}^2$ the set of SPEP of the game $\Gamma^{NR}_n(a)$, we have that the set of SPEP of $\Gamma^{NR}_n$ is just $E^{NR}_n:=E_n^{NR}(0)=\mathbb{E}_{a \sim F} (E_{n-1}^{NR}(a)),$ where \begin{eqnarray*} \mathbb{E}_{a \sim F} (E_{n-1}^{NR}(a))= \left\{ \int_0^1 f(a) \mathrm{d}F(a), f:[0,1] \rightarrow \mathbb{R}^2 \text{ measurable with } f(a)\in E_{n-1}^{NR}(a) \text{ for each } a \right\}. \end{eqnarray*} Below we present a technical result, stating that the set of SPE payoffs in the no recall case is symmetric with respect to the diagonal and, when $F$ is continuous, also convex and compact. \begin{proposition}\label{prop:convexity} For each natural number $n$, the set of SPE payoffs $E^{NR}_n$ is symmetric with respect to the diagonal. If $F$ is continuous, $E^{NR}_n$ is convex and compact. \end{proposition} Although the set of SPEP in the no recall case is convex and symmetric with respect to the diagonal, it is in general not a subset of the diagonal. The recursive structure of the SPEP in the no recall case is more complex than in the full recall case.
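Before moving on, we note that the full recall recursion of Theorem~\ref{thm:SPEPcharact} is easy to run by exact arithmetic on finitely supported distributions. The sketch below (our own illustration; all names are ours) implements it for the two-point distribution of the motivating example and recovers the value $h_n = 2/3-(n+2)/(3\cdot 2^{n+1})$ computed there.

```python
from fractions import Fraction
from functools import lru_cache

# Two-point distribution of the motivating example: X in {1/3, 2/3},
# each with probability 1/2.
X_VALS = (Fraction(1, 3), Fraction(2, 3))

def med(a, b, x):
    """Median of three values."""
    return sorted([a, b, x])[1]

def H(x, y, z):
    return (x + y) / 2 if x > max(y, z) else z

def L(x, y, z):
    return z if x <= y else (x + y) / 2

def e_max_or(n, b):
    """E[X_(n) v b] for the two-point distribution."""
    p_low = Fraction(1, 2 ** n)   # P(max of n samples = 1/3)
    return p_low * max(X_VALS[0], b) + (1 - p_low) * max(X_VALS[1], b)

@lru_cache(maxsize=None)
def h_rec(n, a, b):
    if n == 0:
        return (a + b) / 2
    d = sum(h_rec(n - 1, max(a, x), med(a, b, x)) for x in X_VALS) / 2
    return H(a, e_max_or(n, b), d)

@lru_cache(maxsize=None)
def l_rec(n, a, b):
    if n == 0:
        return (a + b) / 2
    d = sum(l_rec(n - 1, max(a, x), med(a, b, x)) for x in X_VALS) / 2
    return L(a, e_max_or(n, b), d)
```

In this example $l_n(0,0) = h_n(0,0)$, reflecting the fact that the SPE payoff is unique there.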
However, in the main result of this section we give explicit recursive formulas to compute the sum of the SPE payoffs for the best and worst equilibria under the no recall setting. Recall that by best (resp.\ worst) equilibrium we mean a SPE which maximizes (resp.\ minimizes) the sum of the payoffs of the two players. Before presenting the result, we introduce some necessary notation. We first define by induction (with $X \sim F$): $$c_1=\mathbb{E}(X), \; \; {\rm and}\; \forall n>1, \; c_n=\mathbb{E}(X \vee c_{n-1}).$$ Note that $c_n$ is the value of the decision problem in a standard prophet setting with $n$ arrivals. On the other hand, we denote by $\alpha_n$ (resp.\ $\beta_n$) the smallest (resp.\ largest) coordinate value of a point of $E_n^{NR}$ belonging to the diagonal, and by $\alpha'_n$ the smallest coordinate of a point belonging to $E_n^{NR}$. That is, $\alpha_n:= \min \{ x : (x,x) \in E^{NR}_n\}$, $\beta_n:= \max \{x : (x,x) \in E^{NR}_n \}$ and $\alpha'_n= \min \{ \min \{ x,y \} : (x,y) \in E^{NR}_n\}$ (see Figure~\ref{fig:def}). It is easy to see that $\alpha'_1=\alpha_1=\beta_1= \frac{1}{2}\mathbb{E}(X).$ \begin{figure} \caption{Representation of $\alpha_n$, $\beta_n$ and $\alpha'_n$ on the set $E_n^{NR}$.}\label{fig:def} \end{figure} \begin{thm}\label{thm:norecall} Assume $F$ is continuous.
In the game $\Gamma^{NR}_n$, the following holds: \begin{itemize} \small \item[a)] the {\bf worst payoff} a player can get at equilibrium is $\alpha'_n$, where for $n\geq1$: $$\alpha'_{n+1}=\frac{c_n+1}{2}-\int_{\alpha'_n}^{c_n} F(a) \mathrm{d}a -\frac{1}{2}\int_{c_n}^{1} F(a) \mathrm{d}a.$$ \item [b)] the sum of payoffs for the {\bf best SPE} is $2 \beta_n$, where for $n>1$: $$2\beta_{n}=\int_{0}^{\alpha'_{n-1}} 2 \beta_{n-1} \mathrm{d}F(a) +\int_{\alpha'_{n-1}}^{\beta_{n-1}} \max\{a+c_{n-1}, 2\beta_{n-1} \} \mathrm{d}F(a) +\int_{\beta_{n-1}}^{1} (a+c_{n-1}) \mathrm{d}F(a).$$ \item [c)] the sum of payoffs for the {\bf worst SPE} is $2 \alpha_n$, where for $n>1$: \begin{eqnarray*} 2\alpha_n&=& \int_{c_{n-1}}^{1} (a+c_{n-1})\mathrm{d}F(a) + \int_{\beta_{n-1}}^{c_{n-1}} \frac{4ac_{n-1}-2\beta_{n-1}(a+c_{n-1})}{c_{n-1}+a-2 \beta_{n-1}}\mathrm{d}F(a) + \int_{\alpha_{n-1}}^{\beta_{n-1}} 2 a \mathrm{d}F(a) \\ &+& \int_{\alpha_{n-1}'}^{\alpha_{n-1}}\min \{2 \alpha_{n-1}, a+c_{n-1}\}\mathrm{d}F(a) + \int_{0}^{\alpha_{n-1}'} 2 \alpha_{n-1} \mathrm{d}F(a). \end{eqnarray*} \end{itemize} \end{thm} \subsection{Comparison of the two model variants} \label{sec:comparison} We now come back to the general case and no longer assume that $F$ is continuous, and want to compare the SPEP obtained by the players with and without recalling power. We have that the best SPE payoff of a player under full recall is $h_n=h_n(0,0)$, defined recursively in Theorem \ref{thm:SPEPcharact}. We now denote by $$\beta'_n=\max\{\max \{x,y\}: (x,y) \in E_n^{NR}\}$$ the best possible SPEP of a player under no recall. We know from the simple example presented in Section~\ref{sec:simple:example} that we may have $\beta'_n>h_n$, that is, the best possible SPE payoff for a player may be strictly higher under no recall than under full recall. Unsurprisingly, this is not a general result, and we will give here one example where $\beta'_2<h_2$ and one example where $\beta'_2=h_2$.
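Before turning to these comparisons, note that the recursion in part a) of Theorem~\ref{thm:norecall} only involves one-dimensional integrals of $F$, so it is straightforward to evaluate numerically for any continuous distribution supported on $[0,1]$. A minimal sketch (our own illustration, not part of the results; the quadrature routine and function names are ours), tested on $F=\text{Unif}[0,1]$:

```python
import numpy as np

def integral(g, lo, hi, m=100_000):
    """Midpoint-rule quadrature, adequate for a smooth cdf on [0, 1]."""
    if hi <= lo:
        return 0.0
    x = np.linspace(lo, hi, m, endpoint=False) + (hi - lo) / (2 * m)
    return float(np.mean(g(x)) * (hi - lo))

def worst_equilibrium_payoffs(F, n):
    """alpha'_1, ..., alpha'_n from Theorem part a), for continuous F on [0, 1]."""
    # c_1 = E(X) = int_0^1 (1 - F(a)) da, and c_{k+1} = c_k + int_{c_k}^1 (1 - F(a)) da
    c = integral(lambda a: 1.0 - F(a), 0.0, 1.0)
    ap = c / 2.0                      # alpha'_1 = E(X) / 2
    out = [ap]
    for _ in range(n - 1):
        ap = (c + 1) / 2 - integral(F, ap, c) - 0.5 * integral(F, c, 1.0)
        out.append(ap)
        c = c + integral(lambda a: 1.0 - F(a), c, 1.0)
    return out

# sanity check with the uniform distribution, F(a) = a
print(worst_equilibrium_payoffs(lambda a: a, 3))  # ~ [0.25, 0.46875, 0.5747]
```

For the uniform case the three values agree with the closed-form computations carried out in Section~\ref{sec:unif}.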
Then, we will consider the maximal sum of the payoffs of the players at equilibrium. In the full recall case, we know that the best SPE payoff is $(h_n,h_n)$ and the worst SPE payoff is $(l_n,l_n)$. In the no recall case, when $F$ is continuous, the SPE payoff set $E_n^{NR}$ is convex, compact and symmetric with respect to the diagonal, hence the best sum of payoffs is obtained at the symmetric equilibrium payoff $(\beta_n,\beta_n)$. Theorem \ref{thm:comparison} will show that $l_n \geq \beta_n$, implying that when $F$ is continuous, at a symmetric equilibrium, players are always better off under full recall than under no recall. \begin{example} \rm An example with $h_2 > \beta'_2$. Consider the following discrete random variable: \[ X= \begin{cases} 1/10 & \text{with probability } 1/2,\\ 1/2 &\text{with probability } 1/2.\\ \end{cases} \] We will show that the expected payoff of a player at equilibrium in the full recall case is always higher than the expected payoff of a player at equilibrium in the no recall case, when there are two samples of $X$ arriving sequentially. In other words, we will prove that $h_2>\beta'_2$. Let us start with the full recall case. In this setting, the only SPE is to bid in the first stage if and only if $X=1/2$. Then, the expected payoff of a player playing the SPE is given by: \[ h_2=\frac{1}{10} \mathbb{P}(X_1=X_2=1/10)+\left(\frac{1}{2} \cdot \frac{1}{2}+\frac{1}{2} \cdot \frac{1}{10}\right)\mathbb{P}( \exists! i: X_i=1/10)+\frac{1}{2} \mathbb{P}(X_1=X_2=1/2)=\frac{3}{10}. \] In the no recall case, Table~\ref{tab:payoff:example1} represents the matrix of expected payoffs for the game if the first arrival is $x$.
\begin{table}[h] \centering \setlength{\extrarowheight}{7pt} \begin{tabular}{cc|c|c|} & \multicolumn{1}{c}{} & \multicolumn{2}{c}{Player $2$}\\ & \multicolumn{1}{c}{} & \multicolumn{1}{c}{$x$} & \multicolumn{1}{c}{$\emptyset$} \\\cline{3-4} \multirow{2}*{Player $1$} & $x$ & $\left(\frac{1}{2} x + \frac{3}{20} , \frac{1}{2} x + \frac{3}{20}\right)$ & $\left( x, \frac{3}{10}\right)$ \\\cline{3-4} \multirow{2}*{} & $\emptyset$ & $\left(\frac{3}{10},x\right)$ & $\left( \frac{3}{20}, \frac{3}{20}\right)$ \\\cline{3-4} \end{tabular} \caption{\label{tab:payoff:example1}Payoff matrix for the game with $n=2$ if $X_1=x$.} \end{table} Note that if $x=1/10$, $(\emptyset, \emptyset)$ is the unique NE, with payoff $(3/20,3/20)$, and if $x=1/2$, $(x,x)$ is the unique NE, with payoff $(2/5,2/5)$. Then, the SPE payoff of each player is $1/2 \cdot 3/20 + 1/2 \cdot 2/5=11/40.$ This means that $\beta'_2=11/40$, and we then conclude that $h_2 > \beta'_2$. \end{example} \begin{example} \rm An example with $h_2 = \beta'_2$. Let us now consider the game with two samples of $X$ arriving sequentially over time, with $X \sim \text{Unif}[0,1]$. Using Theorem~\ref{thm:SPEPcharact} we can easily obtain that $h_2=\mathbb{E}(X)$, and then in this case we have $h_2=1/2$. Regarding the no recall case, Table~\ref{tab:payoff:example} represents the expected payoff matrix if the first arrival is $x$. We now compute the set of NE depending on the value of $x$: \begin{itemize} \item[\textit{Case 1.}] If $x<1/4$, $(\emptyset,\emptyset)$ is the unique NE and the payoff is given by $(1/4,1/4)$. \item[\textit{Case 2.}] If $x>1/2$, $(x,x)$ is the unique NE with payoff $(x/2+1/4, x/2+1/4)$. \item[\textit{Case 3.}] If $x \in [1/4,1/2]$, $(x,\emptyset)$ and $(\emptyset,x)$ are the pure NE with payoff $(x,1/2)$ and $(1/2,x)$, respectively, and there is also a mixed NE with payoff $(3/4-1/(8x),3/4-1/(8x))$.
\end{itemize} Therefore we have that \[ E_1^{NR}(x) = \begin{cases} \{(1/4,1/4) \}& \text{if } x < 1/4\\ \{(x,1/2),(1/2,x), (3/4-1/(8x),3/4-1/(8x)) \} &\text{if } x \in [1/4,1/2] \\ \{(x/2+1/4, x/2+1/4) \} &\text{if } x > 1/2. \\ \end{cases} \] Note that the maximum possible expected payoff for one player is obtained if he passes for every $x \in [1/4,1/2]$, and the payoff obtained is given by \[\beta'_2= \int_0^{1/4} \frac{1}{4} \mathrm{d}x +\int_{1/4}^{1/2} \frac{1}{2} \mathrm{d}x + \int_{1/2}^1 \left(\frac{x}{2}+\frac{1}{4}\right) \mathrm{d}x =\frac{1}{2},\] concluding that $h_2=\beta'_2$. \end{example} \noindent{\bf Symmetric equilibrium payoffs.} From the examples above, we conclude that it is not true that at equilibrium players always take advantage of the recalling power. However, if we restrict the set of SPE in the no recall case to the symmetric SPE, it holds that the expected payoff of a player in the full recall case is always at least the one obtained in the no recall case. We state this result formally in Theorem~\ref{thm:comparison}. \begin{thm}\label{thm:comparison} Assume that $F$ is continuous. Let $(u,u)\in E_n^{FR}$ be a SPE payoff under full recall, and $(v,v)\in E_n^{NR}$ be a {\it symmetric} SPE payoff under no recall. Then $u\geq v$. \end{thm} The proof in Appendix~\ref{app:comparison} will use the following technical lemma, which gives a lower bound for a SPEP of a player in the full recall case. \begin{lemma}\label{lem:bound:SPEP} Let $a$ be a positive real number such that $a \leq \mathbb{E}\left(X_{(n+1)}\right)$. If $\gamma_n^{FR}$ denotes the expected payoff of one player in some SPE in $\Gamma_n^{FR}(a \vee X, a \wedge X)$, then \[ \gamma_n^{FR} \geq \frac{a+c_{n+1}}{2}, \] where $c_n$ is the value of the decision problem in the no recall case with one decision-maker and $n$ arrivals. \end{lemma} \subsection{Efficiency of equilibria}\label{sec:eff} The goal of this section is to study how efficient the SPE payoffs are.
To this end, we define as usual the Price of Anarchy and Price of Stability, and we introduce what we call the Prophet Ratio of the game. Given an instance of a game, the first two notions refer to the ratio between the maximal sum of payoffs obtained by the players under any feasible strategy and the sum of payoffs for the worst and best SPEs, respectively. On the other hand, we define the Prophet Ratio of an instance of the problem as the ratio between the optimal prophet value of the problem (that is, the expected sum of the two best values) and the sum of payoffs for the best SPE. We call this quantity Prophet Ratio because we are comparing the best sum of payoffs obtained by playing a SPE strategy with what a prophet would do if he knew all the values of the samples in advance. Next, we formally introduce these definitions. \begin{definition} Consider an instance $\Gamma^{FR}_n$ or $\Gamma^{NR}_n$, where $n$ samples from a distribution $F$ arrive sequentially over time. Denote by $\Sigma$ the set of all feasible strategy pairs and by SPE the set of subgame-perfect equilibria.
We define: \begin{itemize} \item[a)] the Price of Anarchy of this game instance, denoted by $\text{PoA}_n(F)$, as the ratio \[ \text{PoA}_n(F):= \frac{ \max_{ \sigma \in \Sigma}\left(\gamma^1(\sigma)+\gamma^2(\sigma)\right)}{\min_{\sigma \in SPE} \left(\gamma^1(\sigma)+\gamma^2(\sigma)\right)}, \] \item[b)] the Price of Stability of this game instance, denoted by $\text{PoS}_n(F)$, as the ratio \[ \text{PoS}_n(F):= \frac{ \max_{ \sigma \in \Sigma}\left(\gamma^1(\sigma)+\gamma^2(\sigma)\right)}{\max_{\sigma \in SPE} \left(\gamma^1(\sigma)+\gamma^2(\sigma)\right)}, \] \item[c)] the Prophet Ratio of this game instance, denoted by $\text{PR}_n(F)$, as the ratio \[ \text{PR}_n(F):= \frac{\mathbb{E}(X_{(1:n)}+X_{(2:n)})}{\max_{\sigma \in SPE} \left(\gamma^1(\sigma)+\gamma^2(\sigma)\right)}, \] where $X_{(1:n)}$ and $X_{(2:n)}$ represent the first and second order statistics of the sequence of samples $\{X_i\}_{i \in [n]}$. \end{itemize} \end{definition} Clearly, by definition it holds that for each $n$ and $F$, \[ \min\{\text{PoA}_n(F),\text{PR}_n(F)\}\geq \text{PoS}_n(F) \geq 1. \] Notice that $\text{PoS}_n(F)/\text{PR}_n(F)= \frac{ \max_{ \sigma \in \Sigma}\left(\gamma^1(\sigma)+\gamma^2(\sigma)\right)}{\mathbb{E}(X_{(1:n)}+X_{(2:n)})}$ is usually called the competitive ratio for the two-sample optimal selection problem. For each $n$, we define the Price of Anarchy, Price of Stability and Prophet Ratio of our competitive selection problems as the worst case ratio over all possible value distributions $F$. That is: $$\text{PoA}_n:= \max_F \text{PoA}_n(F), \hspace{0.4cm} \text{PoS}_n:= \max_F \text{PoS}_n(F),\hspace{0.4cm} \text{PR}_n:= \max_F \text{PR}_n(F).$$ In what follows, we study these quantities for each of the model variants, and at the end of the section, we consider the case where the number of arrivals is two, and we present a tight bound for the ratios in Theorem~\ref{prop:boundeff}.
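To fix ideas, these ratios can be computed by hand for the two-point distribution used in the first example of Section~\ref{sec:comparison} with $n=2$: under full recall each sample is eventually picked, so all three ratios equal $1$; under no recall, the unique SPE gives each player $11/40$, while the maximal feasible sum of payoffs and the prophet value both equal $2\,\mathbb{E}(X)=3/5$, so all three ratios equal $12/11$. A small sketch of this computation (our own illustration; \texttt{ne\_payoff} hard-codes the equilibrium logic of that specific example and is not a general NE solver):

```python
from fractions import Fraction as Fr

values, p = [Fr(1, 10), Fr(1, 2)], Fr(1, 2)   # X = 1/10 or 1/2, each w.p. 1/2
EX = sum(p * v for v in values)                # = 3/10, one-player continuation value

def ne_payoff(x):
    """NE payoff per player when the first arrival is x (n = 2, no recall).
    Payoffs: both bid -> x/2 + EX/2 ; bid vs pass -> (x, EX) ; both pass -> EX/2 each."""
    if x / 2 + EX / 2 >= EX and x >= EX / 2:   # bidding is a best response to both actions
        return x / 2 + EX / 2                   # unique NE: both bid
    return EX / 2                               # unique NE: both pass

spe = sum(p * ne_payoff(v) for v in values)     # no-recall SPE payoff per player
max_sum = 2 * EX                                # feasible: player i simply takes X_i
print(spe)                  # 11/40
print(max_sum / (2 * spe))  # PoA = PoS = PR = 12/11
```

The value $12/11 \approx 1.09$ is, of course, below the worst-case bound of $4/3$ established later for two arrivals.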
\subsubsection{Full recall case } Note that in this case, the maximal feasible sum of payoffs obtained by the players is simply the expected value of the two best samples (they can wait until the end of the horizon and pick the best and second best values). Thus, here, the notions of Price of Stability and Prophet Ratio are equivalent. By Theorem~\ref{thm:caract}, the sum of payoffs for the worst SPE is given by $2l_n$ and for the best SPE by $2h_n$, and therefore given an instance of the game we have in this setting: \[ \text{PoA}^{FR}_n(F)= \frac{\mathbb{E}(X_{(1:n)}+X_{(2:n)})}{2 l_n} \; \text{ and }\; \text{PoS}^{FR}_n(F)=\text{PR}^{FR}_n(F)= \frac{\mathbb{E}(X_{(1:n)}+X_{(2:n)})}{2 h_n}.\] Notice that if $n=2$, that is, if we have only two arrivals, each sample will eventually be picked by a player, so that $\text{PoA}^{FR}_2(F)=\text{PoS}^{FR}_2(F)=\text{PR}^{FR}_2(F)=1$ for every value distribution $F$. On the other hand, if $n$ goes to infinity, then both the Price of Anarchy and the Price of Stability go to 1. Thus, the interesting question is what happens to these ratios when $n$ is finite and greater than 2. Although we have a general characterization of the ratios for any value of $n$ and distribution $F$, these quantities are not always easy to compute for a given $n$, even if we fix the distribution $F$. \subsubsection{No recall case} If $n=2$, picking the two best samples is a feasible strategy since player one can get $X_1$ and player two $X_2$, and then $\text{PoS}_2^{NR}(F)=\text{PR}_2^{NR}(F)$. However, for $n\geq 3$, as soon as the support of $F$ has at least three points, picking almost surely the two best samples is no longer feasible and $\text{PR}_n^{NR}(F)>\text{PoS}_n^{NR}(F).$ Note that in this case, the maximal feasible strategy is the same as the strategy of one player selecting two values among $n$ in the classical online selection problem.
The following lemma gives us a recursive formula for the expected sum of payoffs of the maximal feasible strategy. \begin{lemma}\label{lem:maxfeasible} Assume we are in the no recall case with $n$ arrivals following a distribution $F$ with mean $m$. Let $X$ denote a random variable with law $F$, and $c_n$ the value of the decision problem in the no recall case with one decision-maker and $n$ arrivals. Then, the expected maximal feasible sum of payoffs $s_n$ satisfies \begin{itemize} \item[a)] $s_1= m$ if $n=1$, \item[b)] $s_2=2 m$ if $n=2$, \item[c)] $s_{n}=\mathbb{P}(X \geq s_{n-1}-c_{n-1}) \mathbb{E}(X+c_{n-1} | X \geq s_{n-1}-c_{n-1})+ s_{n-1} \mathbb{P}(X < s_{n-1}-c_{n-1})$ if $n>2.$ \end{itemize} \end{lemma} Using Theorem~\ref{thm:norecall} together with Lemma~\ref{lem:maxfeasible} we have: \[ \text{PoA}^{NR}_n(F)=\frac{s_n}{2\alpha_n}, \hspace{0.6cm} \text{PoS}^{NR}_n(F)=\frac{s_n}{2\beta_n} \hspace{0.6cm} \text{and} \hspace{0.6cm} \text{PR}^{NR}_n(F)=\frac{\mathbb{E}(X_{(1:n)}+X_{(2:n)})}{2\beta_n}. \] If $n=1$ the ratios are equal to 1, and as $n$ goes to infinity, the ratios also go to 1. Thus, as in the full recall case, the interesting cases are the ones in between. Recall that from Theorem~\ref{thm:comparison} we have that for every $n$ and every continuous distribution $F$, $h_n \geq\beta_n$ and therefore $\text{PR}_n^{FR}(F) \leq \text{PR}_n^{NR}(F)$. However, the comparison is not direct for PoA and PoS, as the numerators are different in the full recall and the no recall cases. \subsubsection{Two arrivals case} To conclude this section, we study the efficiency of SPE when we fix the number of arrivals at two and we look at the worst case ratios over $F$. In the case with full recall all ratios are 1 and the question is trivial, so we consider the no recall case here.
In other words, we consider the game $\Gamma_2^{NR}$ and we want to study how bad it may be to play the best or worst SPE, in terms of the sum of payoffs obtained, compared with the maximal feasible sum of payoffs. We obtain the following result, which states that both the Price of Anarchy and the Price of Stability are upper bounded by $4/3$, and that this bound is tight. \begin{thm} \label{prop:boundeff} If $n=2$, under the no recall case it holds that for every distribution $F$, \[ \text{PoS}^{NR}_2(F) \leq 4/3 \hspace{0.6cm} \text{and } \hspace{0.6cm} \text{PoA}^{NR}_2(F) \leq 4/3. \] Furthermore, this bound is tight for both the Price of Stability and the Price of Anarchy. \end{thm} \subsection{Example: Uniform distribution} \label{sec:unif} In this section, we apply the results obtained in the previous sections to the particular case where the samples are from a random variable uniformly distributed on $[0,1]$. \subsubsection{Computation of SPEP} We start with the computation of the SPEP. To this end, we divide the analysis according to whether we are in the full recall or the no recall case. For the former, we compute $E_3^{FR}$ to illustrate how to apply Theorem \ref{thm:SPEPcharact}. For the latter, we implement the recursive formulas for $\alpha_n$, $\beta_n$ and $\alpha'_n$. \vskip4truept \noindent{\bf Full recall case.} We now compute the set of SPEP of the game $\Gamma^{FR}_3$, when we have three samples arriving from a uniform $[0,1]$ distribution. Given $ 1 \geq a \geq b \geq 0$, by Theorem~\ref{thm:SPEPcharact} it holds that $l_0(a,b)=h_0(a,b)=(a+b)/2$ and \[ E_0^{FR}(a,b)= \left\{ \left( (a+b)/2, (a+b)/2\right) \right\}. \] Let us now compute a closed-form expression for $E_1^{FR}(a,b)$. Using the notation introduced in Theorem \ref{thm:SPEPcharact}: \begin{eqnarray*} d_1^-(a,b)=d_1^+(a,b)=\mathbb{E}\left[\frac{a+b}{2}\indi_{X< b}+\frac{a+X}{2}\indi_{X\geq b} \right]= b(a+b)/2 +(1-b) (a/2+(1+b)/4)=\frac{1}{2}a+\frac{1}{4}+\frac{1}{4}b^2.
\end{eqnarray*} If $X \sim \text{Unif}[0,1]$ and $k \in [0,1]$, we have: \begin{eqnarray*} \mathbb{E}(X \vee k)&=& k \mathbb{P}(X \leq k)+ \mathbb{E}(X|X >k) \mathbb{P}(X>k) \\ &=& k^2+\frac{1-k^2}{2} \\ &=& \frac{1+k^2}{2}. \end{eqnarray*} Using that $\frac{1+2a+b^2}{4}=\frac{1}{2}\left(a+ \frac{1+b^2}{2} \right)$, we deduce that \[ l_1(a,b)=L\left(a, \frac{1+b^2}{2},\frac{1+2a+b^2}{4} \right)=h_1(a,b)=H\left(a, \frac{1+b^2}{2},\frac{1+2a+b^2}{4} \right)=\frac{1+2a+b^2}{4}.\] By Theorem~\ref{thm:SPEPcharact} we obtain \[ E_1^{FR}(a,b)=\left\{\left(\frac{a}{2}+\frac{1+b^2}{4},\frac{a}{2}+\frac{1+b^2}{4}\right)\right\}.\] To compute $E_2^{FR}(a,b)$, we need to compute $\mathbb{E}(X_{(2)}\vee b)$. The expected value of the maximum of $n$ $\text{Unif}[0,1]$ random variables and a constant $k \in [0,1]$ is given by: \begin{eqnarray*} \mathbb{E}(X_{(n)} \vee k)&=& k \mathbb{P}(X_{(n)} \leq k)+ \mathbb{E}(X_{(n)}|X_{(n)} >k) \mathbb{P}(X_{(n)}>k) \\ &=& k^{n+1}+\frac{n}{n+1}(1-k^{n+1}) \\ &=& \frac{n+k^{n+1}}{n+1}, \end{eqnarray*} and therefore $\mathbb{E}(X_{(2)}\vee b)=(2+b^{3})/3$. We have: \begin{eqnarray*} d_2^-(a,b)=d_2^+(a,b)=\mathbb{E}_X\left(\frac{a \vee X}{2}+\frac{1+((a \wedge X) \vee b)^2}{4}\right) &=&\frac{1+a^2}{2}+\frac{b^3-a^3}{6}. \end{eqnarray*} We obtain: \[ l_2(a,b)=\begin{cases} (1+a^2)/2+(b^3-a^3)/6 & \text{if } a \leq (2+b^3)/3 \\ a/2+(2+b^3)/6 & \text{if } a > (2+b^3)/3 \end{cases}\] \[ h_2(a,b)=\begin{cases} (1+a^2)/2+(b^3-a^3)/6 & \text{if } a \leq \max\{(2+b^3)/3, (1+a^2)/2+(b^3-a^3)/6\} \\ a/2+(2+b^3)/6 & \text{if } a > \max\{(2+b^3)/3, (1+a^2)/2+(b^3-a^3)/6\}\end{cases}\] In particular, $l_2(0,0)=h_2(0,0)=1/2$, so that $E_2^{FR}=E_2^{FR}(0,0)=\{(1/2,1/2)\}$. Now, we compute $E_3^{FR}$.
We have \[ l_2(a,0)=\begin{cases} (1+a^2)/2-a^3/6 & \text{if } a \leq 2/3 \\ a/2+1/3 & \text{if } a > 2/3 \end{cases}\] \[ h_2(a,0)=\begin{cases} (1+a^2)/2-a^3/6 & \text{if } a \leq a^* \\ a/2+1/3 & \text{if } a > a^*\end{cases},\] where $a^*$ is the unique root in $[0,1]$ of $a=(1+a^2)/2-a^3/6$. We deduce that \[ l_3(0,0)= d_3^-(0,0)=\mathbb{E}[l_2(X,0)]=\int_{0}^{2/3} ( (1+a^2)/2 - a^3/6) \mathrm{d}a+\int_{2/3}^{1} (a/2+1/3)\mathrm{d}a= \frac{607}{972}\approx 0.6244, \] \[h_3(0,0)=d_3^+(0,0)=\mathbb{E}[h_2(X,0)]=\int_{0}^{a^*} ( (1+a^2)/2-a^3/6) \mathrm{d}a+\int_{a^*}^{1} (a/2+1/3) \mathrm{d}a \approx 0.6245.\] We conclude that \[ E_3^{FR}(0,0)=\left\{ (u,u) : u \in [l_3(0,0),h_3(0,0)]\right\}. \] What is played at equilibrium? In the best equilibrium, both players pick $X_1$ if and only if $X_1\geq a^*$, whereas in the worst equilibrium, both players pick $X_1$ if and only if $X_1\geq 2/3$. Competition induces the players to pick $X_1$ at relatively low values in the worst equilibrium, and this decreases the sum of expected payoffs. \vskip4truept \noindent{\bf No recall case.} We now specialize the formulas obtained in Theorem~\ref{thm:norecall} to the particular case where the values are uniformly distributed on the interval $[0,1]$. Recall that $c_1=\mathbb{E}(X)$, $c_n=\mathbb{E}(X \vee c_{n-1})$ for $n>1$ and $X \sim F$. To obtain the expressions of $\beta_n$ and $\alpha_n$ for the uniform case, we use the following technical result. \begin{lemma} \label{lem:ineq:norecall} If $\alpha'_n <a< \beta_n$, then $a+c_n \geq 2 \beta_n$. \end{lemma} \begin{proof} It is enough to show that $\alpha_n'+c_n \geq 2 \beta_n$. We prove this by induction on $n$. Notice that $E_1^{NR}=\{(1/4,1/4)\}$ and thus $\beta_1=\alpha'_1=1/4$.
On the other hand, $c_1=\mathbb{E}(X)=1/2,$ and putting all this together we have that \[ \alpha'_1+c_1 \geq 2 \beta_1, \] so the lemma holds for $n=1.$ Let us assume now that the inequality holds for $n$ and prove that it also holds for $n+1,$ that is: \[ \alpha'_{n+1}+c_{n+1} \geq 2 \beta_{n+1}. \] By Theorem~\ref{thm:norecall}, we have that \begin{eqnarray*} 2\beta_{n+1}&=& 2 {\alpha'_n} \beta_n + \int_{\alpha'_n}^{\beta_n} \max\{a+c_n, 2\beta_n \} \mathrm{d}a +(1-\beta_n)c_n + \frac{1}{2}(1-\beta_n^2) \\ &=&2 {\alpha'_n} \beta_n +(\beta_n-\alpha_n')c_n + \frac{1}{2}(\beta_n^2-{\alpha_n'}^2)+(1-\beta_n)c_n + \frac{1}{2}(1-\beta_n^2)\\ &=&2 {\alpha'_n} \beta_n +c_n (1-\alpha_n')+\frac{1}{2}(1-{\alpha'_n}^2) \\ &\leq& ({\alpha_n'}+c_n)\alpha_n'+c_n (1-\alpha_n')+\frac{1}{2}(1-{\alpha'_n}^2) \\ &=& c_n+ \frac{1}{2} {\alpha_n'}^2+\frac{1}{2}, \end{eqnarray*} where the second equality and the inequality follow from the induction hypothesis. On the other hand, from Theorem~\ref{thm:norecall}, we obtain \[ \alpha_{n+1}'=\frac{1}{2}{\alpha_n'}^2+\frac{1}{2}c_n-\frac{1}{4} c_n^2+\frac{1}{4}. \] Then, $\alpha'_{n+1}+c_{n+1} \geq 2 \beta_{n+1}$ holds as soon as \[ \frac{1}{4}(c_n+1)^2 \leq c_{n+1}, \] and the latter holds since $c_{n+1}=\mathbb{E}(X \vee c_n)=(1+c_n^2)/2$ and $(1+c_n^2)/2-(1+c_n)^2/4=(1-c_n)^2/4 \geq 0.$ Therefore, we have that $ 2 \beta_{n+1} \leq \alpha'_{n+1}+c_{n+1},$ and the proof is complete. \end{proof} Using Theorem~\ref{thm:norecall} together with Lemma~\ref{lem:ineq:norecall}, we obtain that the sum of payoffs for the best SPE for $n>1$ is \begin{eqnarray*} 2\beta_{n}&=& \int_{0}^{\alpha'_{n-1}} 2 \beta_{n-1} \mathrm{d}a + \int_{\alpha'_{n-1}}^{1} (a+c_{n-1}) \mathrm{d}a \\ &=&2 \beta_{n-1} \alpha'_{n-1}+ c_{n-1}(1-\alpha'_{n-1})+\frac{1}{2}(1-{\alpha_{n-1}'}^2). \end{eqnarray*} Notice that by definition, $\beta_n \ge \alpha_n$ for all $n$, and therefore, by Lemma~\ref{lem:ineq:norecall} we conclude that if $\alpha_n' < a< \alpha_n$ then $a+c_n \geq 2 \alpha_n$.
Thus, applying Theorem~\ref{thm:norecall}, the sum of payoffs for the worst SPE for $n>1$ is \begin{eqnarray*} 2\alpha_n=\frac{1}{2}+\frac{5}{2} c_{n-1}^2+3 \beta_{n-1}^2+c_{n-1}-6c_{n-1}\beta_{n-1}+\alpha_{n-1}^2-4(c_{n-1}-\beta_{n-1})^2 \ln(2). \end{eqnarray*} If the samples are uniformly distributed on the interval $[0,1]$, the recursive formulas given in Theorem \ref{thm:norecall} are easy to implement numerically. In Table~\ref{tab:alphas:beta} we report the values of $\alpha'_n, \alpha_n$ and $\beta_n$ for $n$ from 1 to 4 and $F=\text{Unif}[0,1]$, whereas in Figure~\ref{fig:alp:bet} we compare the sum of payoffs for the best and worst SPE for $n$ up to 10. \begin{table}[ht] \begin{center} \begin{tabular}{|c|c|c|c|} \hline & $\alpha'_n$ & $\alpha_n$ & $\beta_n$ \\ \hline \hline n=1 & 0.25 & 0.25 & 0.25 \\ \hline n=2 & 0.4688 & 0.4759 & 0.4844\\ \hline n=3 & 0.5747 & 0.5803 & 0.5881 \\ \hline n=4 & 0.6419 & 0.6465 & 0.6533 \\ \hline \end{tabular} \end{center} \caption{Values of $\alpha'_n, \alpha_n$ and $\beta_n$ for one, two, three and four arrivals and values uniformly distributed in $[0,1]$.} \label{tab:alphas:beta} \end{table} \begin{figure} \caption{Values of the sum of payoffs for the best SPE (dashed line) and for the worst SPE (continuous line) for up to 10 arrivals, no recall and values uniformly distributed in $[0,1]$.} \label{fig:alp:bet} \end{figure} \subsubsection{Comparison of model variants} We know by Theorem~\ref{thm:comparison} that $\alpha_n \leq \beta_n \leq l_n \leq h_n$ for every $n$, meaning that having full recall is advantageous to the players. We quantify this difference for small values of $n$ numerically. In Table~\ref{tab:l:h:alpha:beta} we present the values of $\alpha_n, \beta_n , l_n$ and $h_n$ for $n \leq 5$ arrivals.
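For concreteness, the uniform-case recursions derived above (the closed forms for $\alpha'_n$, $2\beta_n$ and $2\alpha_n$, together with $c_n=(1+c_{n-1}^2)/2$) can be implemented in a few lines. A minimal sketch (our own illustration, not part of the paper's code; it reproduces the values reported in Table~\ref{tab:alphas:beta}):

```python
import math

def uniform_no_recall(n_max):
    """(n, alpha'_n, alpha_n, beta_n) for F = Unif[0,1], via the closed-form recursions."""
    c, ap, al, be = 0.5, 0.25, 0.25, 0.25   # c_1 = 1/2, alpha'_1 = alpha_1 = beta_1 = 1/4
    rows = [(1, ap, al, be)]
    for n in range(2, n_max + 1):
        two_beta = 2 * be * ap + c * (1 - ap) + (1 - ap ** 2) / 2
        two_alpha = (0.5 + 2.5 * c ** 2 + 3 * be ** 2 + c
                     - 6 * c * be + al ** 2 - 4 * (c - be) ** 2 * math.log(2))
        ap = ap ** 2 / 2 + c / 2 - c ** 2 / 4 + 0.25  # alpha'_n from alpha'_{n-1} and c_{n-1}
        al, be = two_alpha / 2, two_beta / 2
        c = (1 + c ** 2) / 2                          # c_n = E(X v c_{n-1}) = (1 + c_{n-1}^2)/2
        rows.append((n, ap, al, be))
    return rows

for n, ap, al, be in uniform_no_recall(4):
    print(f"n={n}: alpha'={ap:.4f}  alpha={al:.4f}  beta={be:.4f}")
```

Note the update order: both sums of payoffs at level $n$ are computed from the level-$(n-1)$ quantities before $\alpha'$ and $c$ are advanced. The four printed rows match Table~\ref{tab:alphas:beta} to the displayed precision.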
The values corresponding to the no recall case were computed using the formulas obtained in the former section, whereas for the full recall case, for $n=1,2,3$ the values were obtained explicitly and for $n=4,5$ numerically via discretization. From the values of Table~\ref{tab:l:h:alpha:beta} we can see that under the full recall case, each player has an advantage compared to the no recall case of between 3$\%$ and 5$\%$ for $n=2$, between $6\%$ and $7.6\%$ for $n=3$, between $6.9\%$ and $8\%$ for $n=4$ and between $7\%$ and $8\%$ for $n=5$. \begin{table}[ht] \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline & $l_n$ & $h_n$ & $\alpha_n$ & $\beta_n$ \\ \hline \hline n=1 & 0.25 & 0.25 & 0.25 & 0.25 \\ \hline n=2 & 0.5 & 0.5 & 0.4759 & 0.4844\\ \hline n=3 & 0.6244 & 0.6245 & 0.5803 & 0.5881 \\ \hline n=4 & 0.6989 & 0.699 & 0.6465 & 0.6533 \\ \hline n=5 & 0.7484 & 0.7486 & 0.6932 & 0.6991 \\ \hline \end{tabular} \end{center} \caption{Values of $l_n, h_n, \alpha_n$ and $\beta_n$ for up to 5 arrivals and values uniformly distributed in $[0,1]$.} \label{tab:l:h:alpha:beta} \end{table} \subsubsection{Efficiency of equilibria} We report in Table~\ref{tab:multicol} the values of the ratios $\text{PoA}_n(F), \text{PoS}_n(F)$ and $\text{PR}_n(F)$ for $X \sim\text{Unif}[0,1]$ and $n$ up to 5 in both settings. We notice that the ratios are close to 1, i.e. equilibria are close to being efficient in both model variants, and more efficient in the full recall case. We highlight here that these ratios are a measure of the efficiency of equilibria within each game, but not between the two different games. That is to say, it is not correct to conclude that one setting is better than the other by comparing the PoA or PoS, because for each game these ratios are computed in a different way (the numerators are different). For such a comparison, we could use the value of PR, but comparing the PR for both problems is equivalent to comparing $\beta_n$ and $h_n$, which is a comparison we already made.
\begin{table}[ht] \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|} \hline & \multicolumn{2}{c|}{PoA$_n$(F)} & \multicolumn{2}{c|}{PoS$_n$(F)} & \multicolumn{2}{c|}{PR$_n$(F)}\\ \hline \hline & Full Recall & No Recall & Full Recall & No Recall & Full Recall & No Recall\\ \hline n=2 & 1 & 1.0507 & 1 & 1.0323 & 1 & 1.0323 \\ \hline n=3 & 1.000823 & 1.0299 & 1.0008 & 1.0161 & 1.0008 & 1.0627\\ \hline n=4 & 1.00157 & 1.0212 & 1.00143 & 1.0105 & 1.00143 & 1.0714 \\ \hline n=5 & 1.0021 & 1.0164 & 1.00187 & 1.0077 & 1.00187 & 1.0728 \\ \hline \end{tabular} \end{center} \caption{Efficiency of equilibria up to 5 arrivals from the Unif[0,1] distribution.} \label{tab:multicol} \end{table} Another question we address here is which number of arrivals gives the worst ratios; we do so in the no recall case, for which the numerics are easier. We obtain that, for both the Price of Anarchy and the Price of Stability, the ratios reach their maximum when $n=2$, see Figures~\ref{fig:PoA} and \ref{fig:PoS}. This result is somewhat intuitive: as we are in the no recall case, there exists a positive probability of getting nothing, and the smaller the number of arrivals, the more likely this is to happen. Regarding the Prophet Ratio, we also compute it as a function of $n$ (see Figure~\ref{fig:PR}), and we obtain that the maximum is reached when $n=5$. Here, the result is more surprising. \begin{figure} \caption{$\text{PoA}_n(F)$} \label{fig:PoA} \caption{$\text{PoS}_n(F)$} \label{fig:PoS} \caption{$\text{PR}_n(F)$} \label{fig:PR} \caption{Price of anarchy, price of stability and prophet ratio as a function of $n$ when $F=\text{Unif}[0,1]$.} \label{fig:ratios} \end{figure} \section{Proofs}\label{sec:proofs} In this section we provide the proofs omitted in Section~\ref{sec:results}. \subsection{Omitted proofs from Section~\ref{sec:payoffs}} \vskip4truept \noindent{\bf Full recall case.} In this section, we prove Theorems~\ref{thm:SPEPcharact} and \ref{thm:caract}.
Before that, we need some preliminary results stating properties of the game $\Gamma^{FR}_n(a,b)$ defined in Section~\ref{sec:payoffs}. Recall that this game is defined as $\Gamma_n^{FR}$ with $n$ arrivals but with two initial values $a \geq b$ already present in the market. First, in order to study the SPEP of $\Gamma_n^{FR}$, note that this game has the same SPEP as the game which is identical to $\Gamma_n^{FR}$ but terminates at the first time a player gets an item, or at stage $n$. If the game terminates because one player got an item, the payoff of the other player is the value of the one-player continuation problem, and the payoffs of the players if neither of them stopped strictly before stage $n$ are the expected payoffs given by the pair of strategies ``bid for the best available item''. Indeed, it is easy to check that in both these situations the SPEP in the continuation games are unique and correspond to the payoffs of this auxiliary game. In the following, we assume that $\Gamma_n^{FR}$ is the auxiliary game. In this game, a history is just a sequence of values $(X_1,...,X_t)$ and the strategy of player $i$ at time $t<n$ is a measurable map $\sigma_{i,t}$ from histories into the probabilities over $\{\emptyset\} \cup \{1,...,t\}$. We use the same identification for the game $\Gamma^{FR}_n(a,b)$. Given a history $h=(x_1,...,x_t)$ of length $t$, let $\Gamma_n(h)$ denote the subgame of $\Gamma^{FR}_{n+t}$ starting at stage $t$ after observing $h$ in which the two players are still present, and $E_n(h)$ the set of SPEP of this game. In the following, the variables $(X,X_1,X_2,\ldots)$ will denote independent variables with distribution $F$, and $X_{(t)}=\max(X_1,...,X_t)$. Without loss of generality, we assume that $\mathbb{P}(X>0)>0$ (otherwise the set of equilibrium payoffs is reduced to $\{(0,0)\}$). We now state without proof some properties of SPEP which follow easily from usual arguments in dynamic game theory.
Recall that if $x \in [0,1] \mapsto S(x)$ is a set-valued map, $\mathbb{E}[S(X)]$ is defined by \[ \mathbb{E}[S(X)]= \{ \mathbb{E}[f(X)] \,|\, \text{$f$ measurable such that $\forall x\in [0,1], f(x) \in S(x)$} \}. \] \begin{lemma}\label{lemma:propertiesSPE} The following properties hold: \begin{enumerate} \item $E_n=E_n(0,0)$. \item If $h=(x_1,...,x_t)$ is a history of length $t \geq 2$, then $E_n(h)= E_n(a,b)$ where $a$ and $b$ denote the first and second largest items in $h$ respectively. \item $(x,y) \in E_n(a,b)$ with $a\geq b$ if and only if there exists $(d,e) \in \mathbb{E}[ E_{n-1}(a\vee X,med[a,b,X])]$ such that $(x,y)$ is a mixed Nash equilibrium payoff of the finite game $G_n(a,b;d,e)$ with payoff matrix: \begin{table}[h] \centering \setlength{\extrarowheight}{7pt} \begin{tabular}{cc|c|c|c|} & \multicolumn{1}{c}{} & \multicolumn{3}{c}{Player $2$}\\ & \multicolumn{1}{c}{} & \multicolumn{1}{c}{$a$} & \multicolumn{1}{c}{$b$} & \multicolumn{1}{c}{$\emptyset$} \\\cline{3-5} \multirow{2}*{Player $1$} & $a$ & $\left(\frac{a+ \mathbb{E}(X_{(n)} \vee b)}{2} , \frac{a+ \mathbb{E}(X_{(n)} \vee b)}{2} \right)$ & $(a,b)$ & $\left( a, \mathbb{E}(X_{(n)} \vee b)\right)$ \\\cline{3-5} & $b$ & $(b,a)$ & $\left(\frac{b+ \mathbb{E}(X_{(n)} \vee a)}{2} , \frac{b+ \mathbb{E}(X_{(n)} \vee a)}{2} \right)$ & $(b, \mathbb{E}(X_{(n)} \vee a))$ \\\cline{3-5} \multirow{2}*{} & $\emptyset$ & $\left( \mathbb{E}(X_{(n)} \vee b),a\right)$ & $( \mathbb{E}(X_{(n)} \vee a),b)$ & $\left( d, e\right)$ \\\cline{3-5} \end{tabular} \caption{\label{tab:payoffs:general0} Payoff matrix of $G_n(a,b;d,e)$.} \end{table} \item Similarly, $(\sigma_1,\sigma_2)$ is a pair of first-stage strategies of some SPE in $\Gamma_n^{FR}(a,b)$ with payoff $(x,y)$ in $\Gamma_n^{FR}(a,b)$ if and only if there exists $(d,e) \in \mathbb{E}[ E_{n-1}(a\vee X,med[a,b,X])]$ such that $(\sigma_1,\sigma_2)$ is a mixed Nash equilibrium of the matrix game $G_n(a,b;d,e)$ with payoff $(x,y)$.
\end{enumerate} \end{lemma} The first point follows from the fact that any strategy in any subgame of $\Gamma_n(0,0)$ which bids with positive probability for one of the two initial items with value $0$ is strictly dominated by the modified strategy which waits until the last stage and bids for the best available item whenever the initial strategy bids for an item with value zero. Therefore such strategies are not played in any equilibrium in $\Gamma_n(0,0)$ and the result follows easily. The other points can be proven by induction on $n$ using the recursive structure of SPE, the one-shot deviation principle and measurable selection arguments. In the following proposition, we prove that if a player prefers to pass rather than take $a$ given that the other player takes $a$, then he also prefers to pass rather than take $a$ when the other player passes. \begin{proposition}\label{prop:ineq} Let $a>b$, $(d,e) \in \mathbb{E}[ E_{n-1}(a\vee X,med[a,b,X])]$ and $c=\mathbb{E}[X_{(n)}\vee b]$. If $c \geq a$ then $d \geq a$ and $e \geq a$. Similarly, if $c>a$ then $d>a$ and $e>a$. \end{proposition} By the symmetry of the game, it is enough to prove the inequality for $d$; the same arguments then hold for $e$. To prove the proposition, we need the following two lemmas. \begin{lemma}\label{lem:tech:ineq} If $X, Y$ are integrable random variables and $a$ a real number such that $\mathbb{E}(X \vee Y)\geq a$ (resp. $>a$), then it holds that $\frac{1}{2}\mathbb{E}(X \vee a)+ \frac{1}{2} \mathbb{E}((X \wedge a) \vee Y)\geq a$ (resp. $>a$). \end{lemma} \begin{proof} Let us first prove that if $x,y$ are real numbers, then \begin{equation}\label{ineq:lemma} x \vee y \leq x^++(y \vee (x \wedge 0)). \end{equation} Indeed, we verify this in each of the four possible cases: \begin{itemize} \item [(a)] If $x \vee y=x \geq 0$, the left hand side in \eqref{ineq:lemma} is $x$ and the right hand side is $x+\max\{y,0\}$, which is at least $x$, and the inequality holds.
\item [(b)] If $x \vee y=x < 0$, the left hand side in \eqref{ineq:lemma} is $x$ and the right hand side is $0+\max\{y,x\}=x$ and \eqref{ineq:lemma} follows. \item [(c)] If $x \vee y=y \geq 0$, the left hand side in \eqref{ineq:lemma} is $y$ and the right hand side is $x^++y$, which is at least $y$, so the inequality holds. \item [(d)] If $x \vee y=y < 0$, the left hand side in \eqref{ineq:lemma} is $y$ and the right hand side is $0+y$ and \eqref{ineq:lemma} follows. \end{itemize} Then, using the inequality above pointwise, \begin{equation}\label{ineq:rv} X \vee Y \leq X^++(Y \vee (X \wedge 0)) \ \ \ \ \text{a.s.} \end{equation} Take $X'=X-a$ and $Y'=Y-a$; then by \eqref{ineq:rv} we have that \begin{equation}\label{ineq:rv_2} X' \vee Y' \leq X'^++(Y' \vee (X' \wedge 0)) \ \ \ \ \text{a.s.} \end{equation} But \begin{itemize} \item [i)]$X' \vee Y' = (X-a) \vee (Y-a)= (X \vee Y)-a$, \item [ii)] $X'^+=(X-a)^+=(X \vee a)-a$, and \item [iii)] $Y' \vee (X' \wedge 0)= (Y-a) \vee ((X-a) \wedge 0)=Y\vee(X \wedge a)-a,$ \end{itemize} and thus \eqref{ineq:rv_2} means that \begin{equation*} (X \vee Y)-a \leq (X \vee a)-a +Y\vee(X \wedge a)-a \ \ \ \text{a.s.,} \end{equation*} which is equivalent to \begin{equation*} (X \vee Y)+a \leq (X \vee a) +Y\vee(X \wedge a) \ \ \ \text{a.s.} \end{equation*} In particular, the latter implies that \begin{equation*} \mathbb{E}(X \vee Y)+a \leq \mathbb{E}(X \vee a) +\mathbb{E}(Y\vee(X \wedge a)), \end{equation*} and due to $ \mathbb{E}(X \vee Y) \geq a$, we conclude that \begin{equation*} 2a \leq \mathbb{E}(X \vee a) +\mathbb{E}(Y\vee(X \wedge a)), \end{equation*} and the result follows. The case with strict inequalities is similar.
\end{proof} \begin{lemma}\label{lem:ineq:dn} Under the same notation and assumptions as in Proposition \ref{prop:ineq}, we have \[ d \geq \frac{1}{2} \mathbb{E}(X \vee a)+ \frac{1}{2}\mathbb{E}(\beta(X)),\] where $X$ is a random variable with distribution $F$, $\beta(x)= \mathbb{E}[med[a,b,x] \vee X_{(n-1)}]$ and $X_{(n-1)}=\max(X_1,\ldots,X_{n-1})$ for some i.i.d. sequence $X_1,\ldots,X_{n-1}$ of copies of $X$. \end{lemma} \begin{proof} By assumption, $d=\mathbb{E}[f(X)]$ for some measurable function $f$ such that $f(x)$ is the expected payoff of player 1 in some SPE in $\Gamma_{n-1}^{FR}(a\vee x, med[a,b,x])$. We will obtain a lower bound for $f(x)$ by providing a lower bound for the payoff associated with a particular strategy in $\Gamma_{n-1}^{FR}(a\vee x, med[a,b,x])$. To this end, suppose that Player 1 plays the following strategy: \begin{enumerate} \item If $x \vee a \geq \beta(x)$, bid for $x \vee a$. \item If $x \vee a < \beta(x)$, wait until the end and get at least the second best item. \end{enumerate} Define $\Omega_1:= \{ x \in [0,1] \,:\, x \vee a \geq \beta(x) \}$ and $\Omega_2:= \{ x \in [0,1] \,:\, x \vee a < \beta(x) \}$. Since the second best item in $\Gamma_{n-1}^{FR}(a\vee x, med[a,b,x])$ has expected value $\beta(x)$, we deduce that the payoff of player $1$, independently of the strategy of player $2$, is at least $\frac{1}{2}(x\vee a + \beta(x))$ if $x\in \Omega_1$ and $\beta(x)$ if $x\in \Omega_2$. We deduce that \[ d = \mathbb{E}[ f(X)] \geq \mathbb{E} \left[ \frac{1}{2}(X\vee a + \beta(X))\indi_{X\in \Omega_1}+\beta(X)\indi_{X\in \Omega_2} \right] .\] Note that the last term is at least $\mathbb{E}[ \frac{1}{2}( X \vee a+ \beta(X))\indi_{X\in \Omega_2}]$ because $x \vee a < \beta(x)$ for $x\in \Omega_2$.
Therefore, we conclude that \begin{eqnarray*} d &\geq& \mathbb{E} \left[ \frac{1}{2}(X\vee a + \beta(X))\indi_{X\in \Omega_1}+\frac{1}{2}(X\vee a + \beta(X))\indi_{X\in \Omega_2} \right] \\ &=& \mathbb{E} \left[\frac{1}{2}(X\vee a + \beta(X))\right] \end{eqnarray*} and the result follows. \end{proof} \begin{proof}[Proof of Proposition \ref{prop:ineq}] Take $X$ with distribution $F$ independent of some i.i.d. sequence $(X_1,\ldots,X_{n-1})$ of variables distributed according to $F$, and $Y=X_{(n-1)} \vee b$. Then $\mathbb{E}(X \vee Y)= c \geq a$, and applying Lemma~\ref{lem:tech:ineq} we have \[ \frac{1}{2}\mathbb{E}(X \vee a)+ \frac{1}{2} \mathbb{E}((X \wedge a) \vee X_{(n-1)} \vee b)\geq a. \] On the other hand, by independence and since $med[a,b,x]=(x\wedge a)\vee b$, we have \[ \mathbb{E}((X \wedge a) \vee X_{(n-1)} \vee b) = \mathbb{E}[\beta(X)].\] By Lemma~\ref{lem:ineq:dn}, $d \geq \frac{1}{2}\mathbb{E}(X \vee a)+ \frac{1}{2} \mathbb{E}(\beta(X))$. Putting everything together, we obtain $d \geq a$ and the proof is completed. The case with the strict inequality is similar. \end{proof} The next result allows us to reduce the analysis of $G_n(a,b;d,e)$ to a smaller matrix game. \begin{lemma}\label{lem:dominance} Let $a\geq b$ in $[0,1]$, let $n$ be a positive integer, and let $(d,e) \in \mathbb{E}[ E_{n-1}(a\vee X,med[a,b,X])]$.
The game $G_n(a,b;d,e)$ has the same Nash equilibrium payoffs as the game with matrix: \begin{table}[h] \centering \setlength{\extrarowheight}{7pt} \begin{tabular}{cc|c|c|c|} & \multicolumn{1}{c}{} & \multicolumn{2}{c}{Player $2$}\\ & \multicolumn{1}{c}{} & \multicolumn{1}{c}{$a$} & \multicolumn{1}{c}{$\emptyset$} \\\cline{3-4} \multirow{2}*{Player $1$} & $a$ & $\left(\frac{a+ \mathbb{E}(X_{(n)} \vee b)}{2} , \frac{a+ \mathbb{E}(X_{(n)} \vee b)}{2} \right)$ & $\left( a, \mathbb{E}(X_{(n)} \vee b)\right)$ \\\cline{3-4} \multirow{2}*{} & $\emptyset$ & $\left( \mathbb{E}(X_{(n)} \vee b),a\right)$ & $\left( d, e \right)$ \\\cline{3-4} \end{tabular} \caption{\label{tab:payoff:matrix:reduced}Reduced payoffs matrix for $G_n(a,b;d,e)$.} \end{table} \end{lemma} \begin{proof} We first consider the case $a>b$. For both players, the strategy $b$ is strictly dominated by the mixed strategy consisting of playing $a$ with probability $1/2$ and passing with probability $1/2$. Due to the symmetry of the payoffs matrix, it is enough to show that for one player (say player 1) the expected payoff if he plays $a$ with probability $1/2$ and passes with probability $1/2$ is strictly higher than the expected payoff he obtains if he plays $b$. Note that because $a > b$ we have that: \begin{itemize} \item[i-] $\frac{1}{2} \frac{a+ \mathbb{E}(X_{(n)} \vee b)}{2} + \frac{1}{2} \mathbb{E}(X_{(n)} \vee b) = \frac{a}{4} + \frac{3}{4} \mathbb{E}(X_{(n)} \vee b) > b, $ \item[ii-] $\frac{1}{2} a + \frac{1}{2} \mathbb{E}(X_{(n)} \vee a) > \frac{b+ \mathbb{E}(X_{(n)} \vee a)}{2},$ and \item[iii-] $ \frac{1}{2} a + \frac{1}{2} d > \frac{1}{2}b+ \frac{1}{2}b=b ,$ \end{itemize} where the last inequality follows because $d \ge b$. Indeed, in any SPE, player $1$ obtains at least the second largest item (otherwise he could deviate to the strategy which does not bid until stage $n$), which is not smaller than $b$ in $\Gamma_n^{FR}(a,b)$.
We conclude that $b$ is a strictly dominated strategy for both players (due to the symmetry of the game), and the result follows. If $a=b$, we consider two subcases. If $\mathbb{E}(X_{(n)} \vee a)>a$, then Proposition \ref{prop:ineq} implies that $d>a$ and $e>a$, so that the strategies $a$ and $b$ are strictly dominated by $\emptyset$. There is a unique Nash equilibrium $(\emptyset,\emptyset)$ with payoff $(d,e)$. Since the same holds for the matrix game given in Table~\ref{tab:payoff:matrix:reduced}, the result follows. If $\mathbb{E}(X_{(n)} \vee a)=a$, the actions $a$ and $b$ are equivalent in the sense that they induce the same payoffs. Eliminating $b$ leads to a matrix game with the same Nash equilibrium payoffs. \end{proof} Denoting $c=\mathbb{E}(X_{(n)} \vee b)$, the payoff matrix introduced in Lemma \ref{lem:dominance} takes the particular form shown in Table~\ref{tab:payoff:cn}. Thus, it is enough to compute the NE of this matrix game, where $(d,e)$ are parameters which correspond to the expected continuation equilibrium payoffs for the players. \begin{table}[h] \centering \setlength{\extrarowheight}{4pt} \begin{tabular}{cc|c|c|c|} & \multicolumn{1}{c}{} & \multicolumn{2}{c}{Player $2$}\\ & \multicolumn{1}{c}{} & \multicolumn{1}{c}{$a$} & \multicolumn{1}{c}{$\emptyset$} \\\cline{3-4} \multirow{2}*{Player $1$} & $a$ & $\left(\frac{1}{2}(a+c),\frac{1}{2}(a+c) \right)$ & $\left( a, c\right)$ \\\cline{3-4} \multirow{2}*{} & $\emptyset$ & $\left( c,a\right)$ & $\left( d, e\right)$ \\\cline{3-4} \end{tabular} \caption{\label{tab:payoff:cn} General form of the reduced payoffs matrix for $G_n(a,b;d,e)$.} \end{table} We are now ready to prove Theorem~\ref{thm:SPEPcharact}. \begin{proof}[Proof of Theorem~\ref{thm:SPEPcharact}] Let us first analyze the game $G_n(a,b;d,d)$ with $(d,d)\in \mathbb{E}[ E_{n-1}(a\vee X,med[a,b,X])]$, i.e.
the case of continuation payoffs which belong to the diagonal in $\mathbb{R}^2$, given in Table~\ref{tab:payoff:cn:final} below where $c=\mathbb{E}(X_{(n)} \vee b)$. \begin{table}[h] \centering \setlength{\extrarowheight}{4pt} \begin{tabular}{cc|c|c|c|} & \multicolumn{1}{c}{} & \multicolumn{2}{c}{Player $2$}\\ & \multicolumn{1}{c}{} & \multicolumn{1}{c}{$a$} & \multicolumn{1}{c}{$\emptyset$} \\\cline{3-4} \multirow{2}*{Player $1$} & $a$ & $\left(\frac{1}{2}(a+c),\frac{1}{2}(a+c) \right)$ & $\left( a, c\right)$ \\\cline{3-4} \multirow{2}*{} & $\emptyset$ & $\left( c,a\right)$ & $\left( d, d\right)$ \\\cline{3-4} \end{tabular} \caption{\label{tab:payoff:cn:final} Payoffs matrix for $G_n(a,b;d,d)$.} \end{table} Let us compute the mixed Nash equilibria of this game. Using Proposition \ref{prop:ineq}, $c>a \Rightarrow d>a$ and $c\geq a \Rightarrow d\geq a$. We have that: \begin{itemize} \item[(a)] If $a> c \vee d$, then $(a,a)$ is the unique NE with payoff $\left(\frac{1}{2}(a+c),\frac{1}{2}(a+c)\right)$. \item[(b)] If $d>a>c$, there are two pure NE $(a,a)$ and $(\emptyset, \emptyset)$ and a symmetric mixed equilibrium in which both agents play $a$ with probability $\frac{2(d-a)}{2d-a-c} $ and pass with probability $\frac{a-c}{2d-a-c}.$ Furthermore, the equilibrium payoffs are: \[\left(\frac{1}{2}(a+c), \frac{1}{2}(a+c)\right) , (d,d) \text{ and } \left( \frac{dc-2ac+ad}{2d-a-c}, \frac{dc-2ac+ad}{2d-a-c}\right), \text{ respectively.}\] \item[(c)] If $a <c$, then $(\emptyset, \emptyset)$ is the unique NE, and $(d,d)$ is the expected payoff. \item[(d)] If $d=a>c$ or $d>a=c$, there are two pure NE $(a,a)$ and $(\emptyset, \emptyset)$ with payoffs: \[\left(\frac{1}{2}(a+c), \frac{1}{2}(a+c)\right) \text{ and } (d,d) .\] \item[(e)] If $d=a=c$, then any profile is a NE with payoff $(d,d)$. \end{itemize} From the analysis above we conclude that the set of NE payoffs of the game is contained in the diagonal. 
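The closed forms appearing in case (b) can be double-checked numerically. The following sketch (our own illustration; the values of $a$, $c$, $d$ are arbitrary choices with $d>a>c$, not taken from the text) verifies the indifference condition behind the mixing probability $\frac{2(d-a)}{2d-a-c}$ and the mixed-equilibrium payoff $\frac{dc-2ac+ad}{2d-a-c}$:

```python
# Numerical sanity check for case (b) (d > a > c) of the symmetric game
# G_n(a,b;d,d); the concrete values of a, c, d below are illustrative.
def payoff_bid(a, c, q):
    # expected payoff of bidding for a when the opponent bids with prob. q
    return q * (a + c) / 2 + (1 - q) * a

def payoff_pass(c, d, q):
    # expected payoff of passing: c if the opponent bids, continuation d otherwise
    return q * c + (1 - q) * d

def mixed_equilibrium(a, c, d):
    # closed forms from the case analysis above
    q = 2 * (d - a) / (2 * d - a - c)
    value = (d * c - 2 * a * c + a * d) / (2 * d - a - c)
    return q, value

a, c, d = 0.6, 0.4, 0.9
q, value = mixed_equilibrium(a, c, d)
assert 0 < q < 1
assert abs(payoff_bid(a, c, q) - payoff_pass(c, d, q)) < 1e-12  # indifference
assert abs(payoff_pass(c, d, q) - value) < 1e-12                # equilibrium payoff
assert (a + c) / 2 < value < d   # strictly between the two pure equilibrium payoffs
```

The last assertion reflects that, in case (b), the mixed payoff lies strictly between the payoffs of the two pure equilibria $(a,a)$ and $(\emptyset,\emptyset)$.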
We now prove by induction on $n$ that $E_n(a,b)$ is also a subset of the diagonal, that is: \[ E_n(a,b) \subset \{ (u,u) : u \in \mathbb{R}_+\}. \] At first, $E_0(a,b)=\{ ( \frac{a+b}{2}, \frac{a+b}{2})\}$, so the statement is correct for $n=0$. Let us assume that the statement is correct for $E_n(a,b)$ for all pairs $(a,b)$. This implies that $\mathbb{E}[ E_n( a\vee X, med[a,b,X])]$ is also a subset of the diagonal. Therefore, by point 3) of Lemma \ref{lemma:propertiesSPE} and the above analysis, we deduce that $E_{n+1}(a,b)$ is a subset of the diagonal. The first statement of the theorem is proved. Let us now show that $\min \text{P}E_n(a,b)=l_n(a,b)$ and $\max \text{P}E_n(a,b)=h_n(a,b)$, where $\text{P}E_n(a,b)$ is the projection of $E_n(a,b)$ to its first coordinate in $\mathbb{R}$. Note that from the analysis done above, if $a>c \vee d$ or $a < c$ or $a=c=d$, there is only one NE payoff, while in the other cases we have multiple equilibrium payoffs. When playing a SPE, the expected payoff of the players when both pass depends on which equilibrium is played in the following stages of the game. It is known that, in general, playing ``the worst'' (or ``the best'') equilibrium at each stage does not necessarily result in the worst (or best) equilibrium of the whole game. However, below we show that this is indeed true for the game we are considering; the result then follows by computing the expected payoff corresponding to the best and the worst equilibrium. To this end, it is enough to prove that the expected payoff of a player is non-decreasing in $d$.
Define $\Omega=\{(a,c,d) \in \mathbb{R}^3 : ( c>a \Rightarrow d>a) \text{ and } ( c \geq a \Rightarrow d \geq a) \} $ and consider the multivalued function $\psi: \Omega \rightrightarrows \mathbb{R}$ defined by \[ \psi(a,c,d) = \begin{cases} d & \text{if } c>a \text{ or } a=c=d\\ \left\{ \frac{a+c}{2},d\right\} &\text{if } c<a<d \text{ or } c=a<d\text{ or } c<a=d \\ \frac{a+c}{2} & \text{if } c\vee d < a. \end{cases} \] Note that if $d>a>c$, then \[ d>\frac{dc-2ac+ad}{2d-a-c}> \frac{1}{2} (a+c), \] and thus $d$ and $\frac{1}{2} (a+c)$ are respectively the ``best'' and the ``worst'' equilibrium payoffs for player $1$ in $G_n(a,b;d,d)$. Therefore, $\psi(a,c,d)$ represents the ``best'' and ``worst'' NE payoffs for player $1$ in the game represented by Table~\ref{tab:payoff:cn:final}. We say that the multivalued function is non-decreasing in $d$ if for each $a,c,d_1,d_2$ such that $d_1 < d_2$ and $(a,c,d_i) \in \Omega$ for $i=1,2$, it holds that $\min \psi (a,c,d_1) \leq \min \psi(a,c,d_2)$ and $\max \psi(a,c,d_1) \leq \max \psi(a,c,d_2)$. Let us see that $\psi$ is non-decreasing in $d$. To show this, fix $a$, $c$ and take $d_1,d_2$ such that $d_1 < d_2$ and $(a,c,d_1) \in \Omega$, $(a,c,d_2) \in \Omega$. We have the following cases: \begin{itemize} \item[(a)] If $c>a$, then $\psi(a,c,d_i)=\{d_i\}$ for $i=1,2$ and the result is obvious. \item[(b)] If $c<a \leq d_1$, then $c<a<d_2$ and $\frac{a+c}{2} < d_1<d_2$. Thus, $\min \psi(a,c,d_1)=\frac{a+c}{2}= \min \psi(a,c,d_2)$ and $\max \psi(a,c,d_1)=d_1 \leq d_2= \max \psi(a,c,d_2).$ \item[(c)] If $a>c \vee d_2$, then $\psi(a,c,d_1)=\{\frac{a+c}{2}\}=\psi(a,c,d_2)$ and the result is obvious.
\item[(d)] If $c \vee d_1 < a \leq d_2$, then $\psi(a,c,d_1)=\{\frac{a+c}{2}\}$ and $\psi(a,c,d_2)=\left\{ \frac{a+c}{2},d_2\right\}$, obtaining $\min\psi(a,c,d_1)=\max\psi(a,c,d_1) \leq \min \psi(a,c,d_2) \leq \max \psi(a,c,d_2)$. \item[(e)] If $a=c \leq d_1$, then $\psi(a,c,d_1)=\{a\}$ or $\{a,d_1\}$ and $\psi(a,c,d_2)=\{a,d_2\}$, and the result follows. \end{itemize} Thus, we conclude that $\psi$ is non-decreasing in $d$. The monotonicity property we proved for $\psi$ means that playing a ``better'' equilibrium in the continuation game gives a ``better'' equilibrium for $\Gamma_n^{FR}(a,b)$ and playing a ``worse'' one gives a ``worse'' equilibrium for $\Gamma_n^{FR}(a,b)$. Precisely, define $L: \Omega \rightarrow \mathbb{R} $ and $H:\Omega \rightarrow \mathbb{R}$ by: \[ L(a,c,d) = \begin{cases} d & \text{ if } a < c ,\\ \frac{1}{2}(a+c) & \text{ if } a \geq c\\ \end{cases} \; \; {\it and}\;\; H(a,c,d) = \begin{cases} \frac{1}{2}(a+c) & \text{if } a > c \vee d,\\ d &\text{if } a \leq c \vee d.\\ \end{cases} \] $L(a,c,d)$ and $H(a,c,d)$ represent the lowest and highest equilibrium expected payoffs of player $1$ in $G_n(a,b;d,d)$. As mentioned above, we are interested in computing the expected payoff corresponding to the best and the worst NE of the game $\Gamma_n^{FR}(a,b)$, which correspond to the extreme values of the set $\text{E}_n(a,b)$, denoted $l_n(a,b)$ and $h_n(a,b)$. Using the monotonicity of $\psi$ (and thus of $L$ and $H$) with respect to $d$, $l_n(a,b)$ is the lowest equilibrium payoff of $G_n(a,b;d,d)$, i.e.
$L(a,\mathbb{E}[X_{(n)}\vee b],d)$, when $d$ is the expected continuation payoff obtained by playing the lowest equilibrium in every continuation game, that is: \[ d=\min \mathbb{E}[ E_{n-1}(a\vee X,med[a,b,X])] = \mathbb{E}_X\left[ l_{n-1}(a \vee X,\text{med}[a,b,X])\right],\] where $l_0(a,b)=\frac{a+b}{2}$. This completes the proof for $l_n$, and the same analysis applies to $h_n(a,b)$. \end{proof} \begin{remark} Note that we did not use any assumption on the distribution $F$ except that it is supported on $[0,1]$; therefore Theorem~\ref{thm:SPEPcharact} holds when considering discrete distributions. \end{remark} Using Theorem~\ref{thm:SPEPcharact}, we now prove Theorem~\ref{thm:caract}. \begin{proof}[Proof of Theorem~\ref{thm:caract}] By Theorem~\ref{thm:SPEPcharact}, we know that $E^{FR}_n \subset \{ (u,u) : l_n \leq u \leq h_n\}$. Furthermore, $E^{FR}_n= \left\{ \int_{[0,1]} f(x) \mathrm{d}F(x) : f(x) \in \text{E}_{n-1}(x,0) \right\}$, and then $(l_n,l_n)$ and $(h_n,h_n)$ belong to $E^{FR}_n$ (we obtain them by taking $f(x)=(l_{n-1}(x,0),l_{n-1}(x,0))$ and $f(x)=(h_{n-1}(x,0),h_{n-1}(x,0))$, respectively). To obtain the result, it is then enough to prove that the set $E_n$ is convex. To this end, let us define the function $\mu: [0, 1] \rightarrow \mathbb{R}$ by \[ \mu(\alpha)=\int_{[0,\alpha)} l_{n-1}(x,0) \mathrm{d}F(x)+\int_{[\alpha,1]} h_{n-1}(x,0) \mathrm{d}F(x). \] Notice that $\mu$ is continuous since $F$ is atomless, with $\mu(0)=h_{n}$ and $\mu(1)=l_{n}$; by the intermediate value theorem, all values between $l_n$ and $h_n$ are taken by $\mu$. But $(\mu(\alpha),\mu(\alpha))$ belongs to $E_n$ for every $\alpha$, by choosing $f(x)$ equal to $(l_{n-1}(x,0),l_{n-1}(x,0))$ on $[0,\alpha)$ and to $(h_{n-1}(x,0),h_{n-1}(x,0))$ on $[\alpha,1]$. Therefore $E_n$ is convex. \end{proof} \paragraph{No recall case.} We now pass to the no recall case; the goal is to prove Theorem~\ref{thm:norecall}.
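A central quantity in the no recall analysis is $c_n$, the value of the one decision-maker stopping problem with $n$ arrivals, which satisfies the standard recursion $c_{n+1}=\mathbb{E}(X\vee c_n)$. As a minimal sketch (specialised, as an illustration of ours, to $F$ uniform on $[0,1]$, for which $\mathbb{E}(X\vee c)=(1+c^2)/2$):

```python
# Value c_n of the one decision-maker problem with n arrivals, via
# c_{n+1} = E(X v c_n); the uniform specialisation is an illustrative choice.
def next_value(c):
    # E[max(X, c)] = c*c + (1 - c**2)/2 = (1 + c**2)/2 for X ~ U(0, 1)
    return (1 + c * c) / 2

def c_value(n):
    # with no arrival left the decision-maker gets 0
    c = 0.0
    for _ in range(n):
        c = next_value(c)
    return c

assert c_value(1) == 0.5              # c_1 = E(X)
assert c_value(2) == 0.625            # c_2 = (1 + 1/4)/2
assert c_value(5) < c_value(10) < 1   # c_n increases towards 1
```

For a general atomless $F$, the closed form $(1+c^2)/2$ would be replaced by a quadrature of $\mathbb{E}(X\vee c)$.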
Recall that the game $\Gamma_n^{NR}(a)$ is defined as $\Gamma_n^{NR}$, but there are $n+1$ stages: at the first stage, the players can bid for an item with value $a$, and the items for the next $n$ stages are randomly drawn as in $\Gamma_n^{NR}$. At first, in order to study the SPEP of $\Gamma_n^{NR}$, note that this game has the same SPEP as the game which is identical to $\Gamma_n^{NR}$ but terminates at the first time a player gets an item (or after stage $n$). If the game terminates because one player got an item, the payoff of the other player is the value of the one-player continuation problem. As in the full recall case, it is easy to check that, after a player gets an item, the SPEP in the continuation game are unique and correspond to the payoffs of this auxiliary game. In the following, we assume that $\Gamma_n^{NR}$ is the auxiliary game. In this game, a history is just a sequence of values $(X_1,...,X_t)$ and the strategy of player $i$ at time $t<n$ is a measurable map $\sigma_{i,t}$ from histories into the probabilities over $\{\emptyset\} \cup \{1,...,t\}$. We use the same identification for the game $\Gamma^{NR}_n(a)$. Given a history $h=(x_1,...,x_t)$ of length $t$, let $\Gamma^{NR}_n(h)$ denote the subgame of $\Gamma^{NR}_{n+t}$ starting at stage $t$ after observing $h$ in which the two players are still present, and $E_n^{NR}(h)$ the set of SPEP of this game. In the following, the variables $(X,X_1,X_2,\ldots)$ will denote independent variables with distribution $F$, and $X_{(t)}=\max(X_1,...,X_t)$. Without loss of generality, we assume that $\mathbb{P}(X>0)>0$ (otherwise the set of equilibrium payoffs is reduced to $\{(0,0)\}$). As in the full recall case, we state without proof some properties of SPEP which follow easily from usual arguments in dynamic game theory.
\begin{lemma}\label{lemma:propertiesSPE2} The following properties hold: \begin{enumerate} \item $E_0^{NR}=\{(0,0)\}$ and for $n\geq 1$, $E_n^{NR}=E^{NR}_n(0)=\mathbb{E}[E_{n-1}^{NR}(X)]$. \item If $h=(x_1,...,x_t)$ is a history of length $t \geq 1$, then $E_n^{NR}(h)= E_n^{NR}(a)$ where $a$ denotes the largest item in $h$. \item $(x,y) \in E^{NR}_n(a)$ if and only if there exists $(d,e) \in E^{NR}_{n}$ such that $(x,y)$ is a mixed Nash equilibrium payoff of the finite game $G_n(a;d,e)$ with payoff matrix: \begin{table}[h] \centering \setlength{\extrarowheight}{4pt} \begin{tabular}{cc|c|c|c|} & \multicolumn{1}{c}{} & \multicolumn{2}{c}{Player $2$}\\ & \multicolumn{1}{c}{} & \multicolumn{1}{c}{$a$} & \multicolumn{1}{c}{$\emptyset$} \\\cline{3-4} \multirow{2}*{Player $1$} & $a$ & $\left(\frac{1}{2}(a+c_n),\frac{1}{2}(a+c_n) \right)$ & $\left( a, c_n\right)$ \\\cline{3-4} \multirow{2}*{} & $\emptyset$ & $\left( c_n,a\right)$ & $\left( d, e\right)$ \\\cline{3-4} \end{tabular} \caption{\label{tab:payoff_no_recall0} Payoffs matrix of $G_n(a;d,e)$.} \end{table} where $c_n$ denotes the value of the decision problem with $n$ stages in a standard prophet setting. \item Similarly, $(\sigma_1,\sigma_2)$ is a pair of first-stage strategies of some SPE in $\Gamma_n^{NR}(a)$ with payoff $(x,y)$ if and only if there exists $(d,e) \in E^{NR}_{n}$ such that $(\sigma_1,\sigma_2)$ is a mixed Nash equilibrium of the matrix game $G_n(a;d,e)$ with payoff $(x,y)$. \end{enumerate} \end{lemma} Before concluding the section with the proof of Theorem~\ref{thm:norecall}, we show Proposition~\ref{prop:convexity}. \begin{proof}[Proof of Proposition~\ref{prop:convexity}.] The fact that $E^{NR}_n$ is symmetric with respect to the diagonal is a direct consequence of the fact that the game is symmetric. To prove convexity, we can use the properties of the expectation of a set-valued map, also called the Aumann integral.
Theorem 8.6.3 in \cite{AF09} implies that the expectation of a set-valued map with non-empty closed values and compact graph from $[0,1]$ to $\mathbb{R}^2$ with respect to an atomless measure on $[0,1]$ is a non-empty convex compact set. At first, $E^{NR}_0=\{(0,0)\}$ is non-empty compact convex. Then, $a \rightarrow E^{NR}_0(a)$ is a set-valued map with non-empty closed values and compact graph, using the classical properties of Nash equilibrium payoffs of matrix games. It follows that $E^{NR}_1$ is non-empty compact convex. Let us assume that $E^{NR}_n$ is non-empty compact convex. Let $NEPG_n(a;d,e)$ denote the set of Nash equilibrium payoffs of $G_n(a;d,e)$; then $(a,d,e) \in [0,1]\times E^{NR}_n \rightarrow NEPG_n(a;d,e)$ is a set-valued map with non-empty closed values and a compact graph, and thus $a \rightarrow E^{NR}_n(a)$ is a set-valued map with non-empty closed values and a compact graph. We conclude that $E^{NR}_{n+1}$ is a non-empty compact convex set. \end{proof} Below, we prove the theorem. \begin{proof}[Proof of Theorem~\ref{thm:norecall}.] Given $a \in [0,1]$ and $n$ a natural number, we consider the game $\Gamma_n^{NR}(a)$ defined in Section~\ref{sec:payoffs}. We first analyze the game $G_n(a;d,e)$ described in Table~\ref{tab:payoff_no_recall0} with $(d,e)\in E^{NR}_n$. A first remark regarding this game is that what a player gets if he stays alone in the game, that is $c_n$, is at least what he gets if both stay. In other words, $c_n\ge d $ and $c_n \ge e$. Now, we use Table~\ref{tab:payoff_no_recall0} and the remark above to study the NE of the game $G_n(a;d,e)$, depending on the relation between the parameters $a,c_n,d,e$. It is easy to check that the following holds: \begin{itemize} \item[(a)] If $a>c_n$, $(a,a)$ is the unique NE with payoff $\left( \frac{1}{2}(a+c_n),\frac{1}{2}(a+c_n)\right)$. \item[(b)] If $a=c_n$, there is a unique NE payoff $(c_n,c_n)$.
\item[(c)] If $c_n>a>d \vee e$, there are two pure NE $(a,\emptyset)$ and $(\emptyset,a)$, and a mixed equilibrium, with payoffs $\left( a,c_n\right), (c_n,a)$ and $(\gamma_n^1,\gamma_n^2)$, respectively, with $\gamma_n^1=\frac{2ac_n-d(c_n+a)}{c_n+a-2d}, \gamma_n^2=\frac{2ac_n-e(c_n+a)}{c_n+a-2e}$. \item[(d)] If $c_n>a=d>e$, the NE payoffs are $(c_n,a)$ and $(a,\lambda)$ with $\lambda \in [\gamma_n^2,c_n]$. \item[(e)] If $c_n>a=e>d$, the NE payoffs are $(a,c_n)$ and $(\lambda,a)$ with $\lambda \in [\gamma_n^1,c_n]$. \item[(f)] If $c_n> a=d=e$, the NE payoffs are $(a,\lambda)$ and $(\lambda,a)$ with $\lambda \in [a,c_n]$. \item[(g)] If $d \wedge e>a$, $(\emptyset,\emptyset)$ is the unique NE with payoff $(d,e)$. \item[(h)] If $d>a>e$, $(\emptyset,a)$ is the unique NE with payoff $(c_n,a)$. \item[(i)] If $e>a>d$, $(a, \emptyset)$ is the unique NE with payoff $(a,c_n)$. \item[(j)] If $d> a=e$, the NE payoffs are $(d,e)$ and $(\lambda,a)$ with $ \lambda \in [d,c_n]$. \item[(k)] If $e>a=d$, the NE payoffs are $(d,e)$ and $(a,\lambda)$ with $\lambda \in [e,c_n]$. \end{itemize} In particular, we are interested in computing the sum of the expected payoffs corresponding to the best and the worst SPEs, that is, $\max\{x+y: (x,y) \in E^{NR}_{n+1}\}$ and $\min\{x+y: (x,y) \in E^{NR}_{n+1}\}$, respectively, as well as the worst payoff a player can get at equilibrium, that is $ \min \{ \min \{ x,y \} : (x,y) \in E^{NR}_n\}$. Due to Proposition~\ref{prop:convexity}, we have that \[ \max\{x+y: (x,y) \in E^{NR}_{n+1}\}=2 \max\{x: (x,x) \in E^{NR}_{n+1}\}, \] and \[ \min\{x+y: (x,y) \in E^{NR}_{n+1}\}=2 \min\{x: (x,x) \in E^{NR}_{n+1}\}. \] Therefore, defining $\alpha_n:= \min \{ x : (x,x) \in E^{NR}_n\}$, $\beta_n:= \max \{x : (x,x) \in E^{NR}_n \}$ and $\alpha'_n= \min \{ \min\{x,y\} : (x,y) \in E^{NR}_n\}$, it follows that it is enough to compute $2 \alpha_{n+1}$, $2\beta_{n+1}$ and $\alpha'_{n+1}$.
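The mixed-equilibrium payoff $\gamma_n^1$ in case (c) can be checked numerically as well; in the sketch below the values of $a$, $c_n$ (written $c$) and $d$ are illustrative choices of ours satisfying $c_n>a>d$:

```python
# Sanity check of gamma_n^1 in case (c): c > a > d (here c stands for c_n).
def gamma(a, c, x):
    # mixed-equilibrium payoff of a player with continuation payoff x
    return (2 * a * c - x * (c + a)) / (c + a - 2 * x)

def opponent_mix(a, c, x):
    # opponent's bidding probability making this player indifferent
    return 2 * (a - x) / (c + a - 2 * x)

a, c, d = 0.5, 0.8, 0.3
q = opponent_mix(a, c, d)
bid = q * (a + c) / 2 + (1 - q) * a   # expected payoff of bidding for a
ppass = q * c + (1 - q) * d           # expected payoff of passing
assert 0 < q < 1
assert abs(bid - ppass) < 1e-12            # indifference condition
assert abs(ppass - gamma(a, c, d)) < 1e-12 # equals gamma_n^1
```

The analogous check for $\gamma_n^2$ uses $e$ in place of $d$, by the symmetry of the game.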
To prove part a) of the theorem, we compute a recursive formula for $\alpha_{n+1}'$. Define $\alpha'_{n}(a)$ as the minimal NE payoff of player $1$ over the family of games $G_n(a;d,e)$ when $(d,e)$ ranges through $E^{NR}_n$. It is clear from the previous characterization that $\alpha'_{n+1}=\int_{[0,1]} \alpha'_{n}(a)\mathrm{d}F(a)$. Using our previous analysis, notice that: \[ \alpha_n'(a) = \begin{cases} \alpha'_n & \text{ if } a < \alpha'_n ,\\ a & \text{ if } \alpha'_n <a< c_n,\\ \frac{a+c_n}{2} & \text{ if } a>c_n. \end{cases} \] Since $F$ is atomless, we conclude that for $n\ge 1$ \begin{eqnarray}\label{eq:alpha'} \nonumber \alpha'_{n+1}&=&\int_{0}^{\alpha'_n} \alpha'_n \mathrm{d}F(a)+\int_{\alpha'_n}^{c_n}a \mathrm{d}F(a) + \int_{c_n}^{1} \frac{a+c_n}{2} \mathrm{d}F(a) \\ \nonumber &=&{\alpha'_n} (F({\alpha'_n})-F(0))+(c_n F(c_n)-\alpha'_n F(\alpha'_n))-\int_{\alpha'_n}^{c_n} F(a) \mathrm{d}a \\ \nonumber &+& \frac{c_n}{2}(F(1)-F(c_n))+\frac{1}{2}(1 F(1)-c_n F(c_n))-\frac{1}{2}\int_{c_n}^{1} F(a) \mathrm{d}a \\ &=&\frac{(c_n+1)}{2}-\int_{\alpha'_n}^{c_n} F(a) \mathrm{d}a -\frac{1}{2}\int_{c_n}^{1} F(a) \mathrm{d}a, \end{eqnarray} and the first statement of Theorem~\ref{thm:norecall} is proved. Next, we compute $2\beta_{n+1}$ as a function of $\beta_n, \alpha'_n$ and $c_n$. Define $2\beta_n(a)$ as the maximal sum of payoffs in any NE over the family of games $G_n(a;d,e)$ when $(d,e)$ ranges through $E^{NR}_n$, so that $\beta_{n+1}=\int_{[0,1]} \beta_{n}(a)\mathrm{d}F(a)$. We have: \[ 2\beta_n(a) = \begin{cases} 2\beta_n & \text{ if } a < \alpha'_n ,\\ \max\{a+c_n,2\beta_n\} & \text{ if } \alpha'_n <a< \beta_n,\\ a+c_n & \text{ if } a>\beta_n. \end{cases} \] Since $F$ is atomless, we conclude that for $n \ge 1$ \begin{eqnarray}\label{eq:beta} 2\beta_{n+1}&=& \int_{0}^{\alpha'_n} 2 \beta_n \mathrm{d}F(a) + \int_{\alpha'_n}^{\beta_n} \max\{a+c_n, 2\beta_n \} \mathrm{d}F(a) +\int_{\beta_n}^{1} (a+c_n) \mathrm{d}F(a), \end{eqnarray} and the second statement of the theorem is obtained. It remains to compute $2 \alpha_{n+1}.$ As before, define $2\alpha_n(a)$ as the minimal sum of payoffs in any NE over the family of games $G_n(a;d,e)$ when $(d,e)$ ranges through $E^{NR}_n$, so that $\alpha_{n+1}=\int_{[0,1]} \alpha_{n}(a)\mathrm{d}F(a)$. Notice that if $c_n>a>d \vee e$, the mixed equilibrium gives a worse sum of payoffs than the pure ones. We have: \[ 2\alpha_n(a) = \begin{cases} 2\alpha_n & \text{ if } a < \alpha'_n ,\\ \min\{2\alpha_n, a+c_n\} & \text{ if } \alpha'_n<a<\alpha_n ,\\ 2a & \text{ if } \alpha_n <a< \beta_n,\\ 2 \frac{2ac_n-\beta_n(a+c_n)}{c_n+a-2 \beta_n} & \text{ if } \beta_n <a< c_n,\\ a+c_n & \text{ if } a>c_n. \end{cases} \] Since $F$ is atomless, we conclude that for $n \ge 1$ \begin{eqnarray}\label{eqn:alphan} 2\alpha_{n+1}= \int_{c_{n}}^{1} (a+c_{n})\mathrm{d}F(a) + \int_{\beta_{n}}^{c_{n}} \psi_n(a) \mathrm{d}F(a) + \int_{\alpha_{n}}^{\beta_{n}} 2 a \mathrm{d}F(a) + \int_{\alpha_{n}'}^{\alpha_{n}} \xi_n(a) \mathrm{d}F(a) + \int_{0}^{\alpha_{n}'} 2 \alpha_{n} \mathrm{d}F(a), \end{eqnarray} where $\psi_n(a)=\frac{4ac_{n}-2\beta_{n}(a+c_{n})}{c_{n}+a-2 \beta_{n}} $ and $\xi_n(a)=\min \{2 \alpha_{n}, a+c_{n}\}$, which is the third statement of the theorem. Putting together all the foregoing analysis, we obtain the desired result, concluding the proof. \end{proof} \subsection{Omitted proofs from Section~\ref{sec:comparison}} \label{app:comparison} \begin{proof}[Proof of Lemma \ref{lem:bound:SPEP}.]
We assume that one player, say player 1, bids in the first stage if $a\vee X_1 \geq c_n$, and passes otherwise, where $X_1$ is the realization of the first arriving random variable. Let us show that player 1 obtains an expected payoff, denoted $\gamma$, of at least $(a+c_{n+1})/2$, independently of what player 2 does; therefore player 1 will obtain a payoff of at least $(a+c_{n+1})/2$ playing any SPE. We divide the proof into two cases: \paragraph{Case 1.} Assume that $a\geq c_n.$ Note that in this case, player 1 bids for $M=a\vee X_1$, and we have that \begin{equation*} \gamma = \mathbb{E}_{X_1}\left( (a \vee X_1) \mathbb{P}(A)+\frac{1}{2}\left((a \vee X_1)+ \mathbb{E}\left((a \wedge X_1)\vee X_{(n)}\right)\right) \mathbb{P}(A^c) \right), \end{equation*} where $A$ is the event \textit{player 2 does not bid for M}. Notice that $\left((a \vee X_1)+ \mathbb{E}\left((a \wedge X_1)\vee X_{(n)}\right)\right)$ is lower bounded by $a+c_n$ and by $a+X_1$, and therefore we have that \begin{eqnarray*} \gamma &\geq& \mathbb{E}_{X_1}\left( (a \vee X_1) \mathbb{P}(A)+\frac{1}{2}\left((a +c_n) \textbf{1}_{\{X_1<c_n\}}+ (a + X_1) \textbf{1}_{\{X_1 \geq c_n\}} \right) \mathbb{P}(A^c) \right) \\ &=& \frac{a}{2}+ \frac{1}{2}\mathbb{E}_{X_1}\left( \mathbb{P}(A) (2(a \vee X_1)-a)+ \mathbb{P}(A^c) \left( c_n \textbf{1}_{\{X_1<c_n\}}+ X_1 \textbf{1}_{\{X_1 \geq c_n\}} \right) \right) \\ &\geq & \frac{a}{2}+\frac{1}{2} \mathbb{E}_{X_1}\left( c_n \textbf{1}_{\{X_1<c_n\}}+ X_1 \textbf{1}_{\{X_1 \geq c_n\}} \right) = \frac{a}{2}+ \frac{c_{n+1}}{2}, \end{eqnarray*} where the second inequality holds because $2(a \vee X_1)-a \geq c_n \textbf{1}_{\{X_1<c_n\}}+ X_1 \textbf{1}_{\{X_1 \geq c_n\}}$ due to $a\geq c_n$, and the last equality because $c_{n+1}=\mathbb{E}(X\vee c_n).$ We conclude that $\gamma\geq (a+c_{n+1})/2$ if $a \geq c_n.$ \paragraph{Case 2.} Assume that $a< c_n.$ In this case, we have that \begin{eqnarray*} \gamma &\geq& \mathbb{E}_{X_1}\left( \left( X_1 \wedge \frac{1}{2} \left( X_1 + \mathbb{E}\left(a \vee X_{(n)}\right)\right) \right) \textbf{1}_{\{X_1 \geq c_n\}} + \left(\frac{(a \vee X_1)+c_n}{2} \wedge \mathbb{E}\left( X_{(n)} \right) \right) \textbf{1}_{\{X_1< c_n\}}\right) \\ &=& \mathbb{E}_{X_1}\left( \left( \frac{X_1}{2} + \frac{1}{2} \left( X_1 \wedge \mathbb{E}\left(a \vee X_{(n)}\right)\right) \right) \textbf{1}_{\{X_1 \geq c_n\}}+ \left( \frac{c_n}{2} + \frac{1}{2} \left( \left(a\vee X_1\right) \wedge \left( 2 \mathbb{E}\left(X_{(n)}\right) -c_n \right)\right) \right) \textbf{1}_{\{X_1< c_n\}} \right) \\ &=& \frac{c_{n+1}}{2}+ \mathbb{E}_{X_1}\left( \frac{1}{2} \left( X_1 \wedge \mathbb{E}\left(a \vee X_{(n)}\right)\right) \textbf{1}_{\{X_1 \geq c_n\}} + \frac{1}{2} \left( \left(a\vee X_1\right) \wedge \left( 2 \mathbb{E}\left(X_{(n)}\right) -c_n \right)\right) \textbf{1}_{\{X_1< c_n\}} \right) \\ &\geq& \frac{a+c_{n+1}}{2}, \end{eqnarray*} where the second equality holds because $c_{n+1}=\mathbb{E}(X\vee c_n)$ and the last inequality because both $ X_1 \wedge \mathbb{E}\left(a \vee X_{(n)}\right)$ and $\left(a\vee X_1\right) \wedge \left( 2 \mathbb{E}\left(X_{(n)}\right) -c_n \right)$ are at least $a$ when $a<c_n$. Thus, we conclude that $\gamma \geq (a+c_{n+1})/2$ if $a<c_n$. Putting everything together, we obtain that $\gamma_n^{FR}\geq (a+c_{n+1})/2$ and the proof is complete. \end{proof} \begin{proof} [Proof of Theorem \ref{thm:comparison}] Note that it is enough to prove that $l_n \geq \beta_n$ for all $n$, where $l_n$ denotes the lowest SPE payoff in the full recall case and $\beta_n$ denotes the highest symmetric SPE payoff in the no recall case.
To this end, we prove by induction on $n$ that $l_n(a,0) \geq \beta_n(a)$ for all $n$ and all $a$, where $l_n(a,0)$ is defined as in the statement of Theorem~\ref{thm:SPEPcharact} and $\beta_n(a)$ is defined by \[ \beta_n(a) = \begin{cases} \beta_n & \text{if } a < \alpha'_n\\ \max\{\beta_n, \frac{a+c_n}{2}\} &\text{if } a \in [\alpha'_n, \beta_n] \\ \frac{a+c_n}{2} &\text{if } a > \beta_n, \\ \end{cases} \] where $c_n$ is the value of the decision problem in the no recall case with one decision-maker and $n$ arrivals. First, notice that $l_1(a,0)=(a+\mathbb{E}(X))/2$, $c_1=\mathbb{E}(X)$ and $\beta_1=\mathbb{E}(X)/2$, and thus $l_1(a,0) \geq \beta_1$ and $l_1(a,0) \geq (a+c_1)/2$ for all $a$, concluding that $l_1(a,0) \geq \beta_1(a)$ for all $a$. We now assume that $l_n(a,0) \geq \beta_n(a)$ for all $a$ and prove that $l_{n+1}(a,0) \geq \beta_{n+1}(a)$ for all $a$. We divide the proof into two cases, depending on whether $a$ exceeds $\mathbb{E}\left(X_{(n+1)}\right)$ or not. \vskip4truept \noindent\paragraph{Case 1.} Assume that $a>\mathbb{E}\left(X_{(n+1)}\right)$. In this case, we have that $a>\beta_{n+1}$ because $\mathbb{E}\left(X_{(n+1)}\right)\geq c_{n+1}\geq \beta_{n+1}.$ Thus, by the definition of $\beta_{n+1}(a),$ we have that \begin{equation}\label{eqn:comp1} \beta_{n+1}(a)=\frac{a+c_{n+1}}{2}. \end{equation} On the other hand, it holds that \begin{equation}\label{eqn:comp2} l_{n+1}(a,0)=\frac{a+\mathbb{E}\left(X_{(n+1)}\right)}{2}\geq \frac{a+c_{n+1}}{2}, \end{equation} where the equality follows from the definition of $l_{n+1}(a,0)$ when $a>\mathbb{E}\left(X_{(n+1)}\right)$ and the inequality holds because $\mathbb{E}\left(X_{(n+1)}\right)\geq c_{n+1}$. Therefore, combining \eqref{eqn:comp1} and \eqref{eqn:comp2}, we obtain $l_{n+1}(a,0) \geq \beta_{n+1}(a)$ for all $a>\mathbb{E}\left(X_{(n+1)}\right)$.
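For concreteness, the objects appearing in Case 1 can be computed explicitly in a simple instance (our illustration only; the argument above is for a general distribution). If $X\sim\mathrm{Unif}[0,1]$, the recursion $c_{n+1}=\mathbb{E}(X\vee c_n)$ reads $c_{n+1}=(1+c_n^2)/2$, and $\mathbb{E}\left(X_{(n)}\right)=n/(n+1)$, so that

```latex
\[
c_1=\tfrac{1}{2},\quad c_2=\tfrac{5}{8},\quad c_3=\tfrac{89}{128},\quad\ldots
\qquad\text{while}\qquad
\mathbb{E}\left(X_{(2)}\right)=\tfrac{2}{3}\geq\tfrac{5}{8},\quad
\mathbb{E}\left(X_{(3)}\right)=\tfrac{3}{4}\geq\tfrac{89}{128},
\]
```

illustrating the inequality $\mathbb{E}\left(X_{(n+1)}\right)\geq c_{n+1}$ used above.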
\vskip4truept \noindent\paragraph{Case 2.} Assume that $a \leq \mathbb{E}\left(X_{(n+1)}\right).$ To prove that $l_{n+1}(a,0) \geq \beta_{n+1}(a)$, we show that $l_{n+1}(a,0) \geq \beta_{n+1}$ and that $l_{n+1}(a,0) \geq (a+c_{n+1})/2$. The latter holds by Lemma~\ref{lem:bound:SPEP}. On the other hand, \begin{equation*} l_{n+1}(a,0)=\mathbb{E}_X(l_n(a \vee X, a \wedge X)) \geq \mathbb{E}_X(l_n( X,0)) \geq \mathbb{E}_X( \beta_n(X))= \beta_{n+1}, \end{equation*} where the first equality follows from the definition of $l_{n+1}(a,0)$ for $a \leq \mathbb{E}\left(X_{(n+1)}\right)$, the first inequality holds due to the monotonicity of the function $l_n$ in both components, the second inequality follows from the induction hypothesis, and the last equality follows from the definition of $\beta_{n+1}.$ Therefore, we conclude that $l_{n+1}(a,0) \geq \beta_{n+1}(a)$ for all $a \leq \mathbb{E}\left(X_{(n+1)}\right)$. Putting everything together, we obtain that $l_{n}\geq \beta_{n}$ for all $n$, and the desired result follows. \end{proof} \subsection{Omitted proofs from Section~\ref{sec:eff}} The main result from Section~\ref{sec:eff} that we prove here is Theorem~\ref{prop:boundeff}; throughout, we work in the competitive selection problem with no recall and two arrivals. Before turning to the proof, we derive expressions for the price of anarchy and the price of stability when the random variables are distributed according to $F$, denoted $\text{PoA}_2^{NR}(F)$ and $\text{PoS}_2^{NR}(F)$ respectively. Regarding the price of stability, we need to compute \[ \frac{ \mathbb{E}\left(X_{(1:2)}+X_{(2:2)}\right)}{2 \beta_2}, \] where $2\beta_2$ is obtained from equation \eqref{eq:beta} by taking $n=1$. That is: \begin{eqnarray*} 2\beta_{2}&=& \int_{0}^{\alpha'_1} 2 \beta_1 \mathrm{d}F(a) + \int_{\alpha'_1}^{\beta_1} \max\{a+c_1, 2\beta_1 \} \mathrm{d}F(a) +\int_{\beta_1}^{1} (a+c_1) \mathrm{d}F(a).
\end{eqnarray*} Now, due to $E^{NR}_1=\left\{ \left(\frac{\mathbb{E}(X)}{2},\frac{\mathbb{E}(X)}{2} \right)\right\}$, we have that $\alpha_1'=\beta_1=\frac{\mathbb{E}(X)}{2}$. Noting that $c_1= \mathbb{E}(X)$ and putting everything together, we have \begin{eqnarray*} 2\beta_{2}&=& \int_{0}^{\mathbb{E}(X)/2} \mathbb{E}(X) \mathrm{d}F(a) + \int_{\mathbb{E}(X)/2}^{\mathbb{E}(X)/2} \max\{a+\mathbb{E}(X), \mathbb{E}(X) \} \mathrm{d}F(a) +\int_{\mathbb{E}(X)/2}^{1} (a+\mathbb{E}(X)) \mathrm{d}F(a) \\ &=&\int_{0}^{1} \mathbb{E}(X) \mathrm{d}F(a) + \int_{\mathbb{E}(X)/2}^{1} a \mathrm{d}F(a) \\ &=& \mathbb{E}(X) + \mathbb{P}(X \geq \mathbb{E}(X)/2) \mathbb{E}(X | X \geq \mathbb{E}(X)/2). \end{eqnarray*} On the other hand, $\mathbb{E}\left(X_{(1:2)}+X_{(2:2)}\right)= 2 \mathbb{E}(X),$ and therefore \begin{eqnarray}\label{eqn:pos2} \frac{1}{\text{PoS}_2^{NR}(F)}= \frac{\mathbb{E}(X) + \mathbb{P}(X \geq \mathbb{E}(X)/2) \mathbb{E}(X | X \geq \mathbb{E}(X)/2)}{2 \mathbb{E}(X)}=\frac{1}{2}+\frac{\mathbb{P}(X \geq \mathbb{E}(X)/2) \mathbb{E}(X | X \geq \mathbb{E}(X)/2)}{2 \mathbb{E}(X)}. \end{eqnarray} Regarding the price of anarchy, we need to compute \[ \frac{\mathbb{E}\left(X_{(1:2)}+X_{(2:2)}\right)}{2 \alpha_2}, \] where $2 \alpha_2$ follows from equation \eqref{eqn:alphan} by taking $n=1$. After some algebra, we obtain \begin{eqnarray}\label{eqn:alpha2}\nonumber 2 \alpha_2 &=& \mathbb{E}(X)+ \int_{\mathbb{E}(X)/2}^{\mathbb{E}(X)} \left(2\mathbb{E}(X)- \frac{\mathbb{E}(X)^2}{a}\right) \mathrm{d}F(a)+ \int_{\mathbb{E}(X)}^1 a \mathrm{d}F(a) \\ &=& 2 \mathbb{E}(X) - \int_0^{\mathbb{E}(X)/2} a \mathrm{d}F(a) - \int_{\mathbb{E}(X)/2}^{\mathbb{E}(X)} \left(a - 2\mathbb{E}(X) + \frac{\mathbb{E}(X)^2}{a}\right) \mathrm{d}F(a), \end{eqnarray} and thus \begin{eqnarray}\label{eqn:poa2} \frac{1}{\text{PoA}_2^{NR}(F)}&=&1- \frac{1}{2 \mathbb{E}(X)} \int_0^{\mathbb{E}(X)/2} a \mathrm{d}F(a) - \frac{1}{2 \mathbb{E}(X)} \int_{\mathbb{E}(X)/2}^{\mathbb{E}(X)} \left(a - 2\mathbb{E}(X) + \frac{\mathbb{E}(X)^2}{a}\right) \mathrm{d}F(a).
\end{eqnarray} Also, we can write the inverse of the price of anarchy as follows: \begin{eqnarray}\label{eqn:poa2_2} \frac{1}{\text{PoA}_2^{NR}(F)}&=&\frac{1}{\text{PoS}_2^{NR}(F)}- \frac{1}{2 \mathbb{E}(X)} \int_{\mathbb{E}(X)/2}^{\mathbb{E}(X)} \left(a-2\mathbb{E}(X)+ \mathbb{E}(X)^2/a\right) \mathrm{d}F(a). \end{eqnarray} We now prove Theorem~\ref{prop:boundeff} using the formulas obtained above. \begin{proof}[Proof of Theorem~\ref{prop:boundeff}.] To prove that $\text{PoS}_2^{NR}(F) \leq 4/3,$ let us consider the second term on the right-hand side of \eqref{eqn:pos2} and notice that \begin{eqnarray*} \mathbb{P}(X \geq \mathbb{E}(X)/2) \mathbb{E}(X | X \geq \mathbb{E}(X)/2)&=&\mathbb{E}(X)-\mathbb{P}(X < \mathbb{E}(X)/2) \mathbb{E}(X | X < \mathbb{E}(X)/2) \\ &>&\mathbb{E}(X)-\mathbb{P}(X < \mathbb{E}(X)/2)\frac{\mathbb{E}(X)}{2} \\ &\geq& \mathbb{E}(X)-\frac{\mathbb{E}(X)}{2} = \frac{\mathbb{E}(X)}{2}. \end{eqnarray*} Then, \begin{eqnarray*} \frac{1}{\text{PoS}_2^{NR}(F)} \geq \frac{1}{2}+ \frac{\mathbb{E}(X)}{2} \frac{1}{2 \mathbb{E}(X)}= \frac{3}{4}. \end{eqnarray*} Regarding the price of anarchy, by equation \eqref{eqn:poa2} and using that $ a - 2\mathbb{E}(X) + \frac{\mathbb{E}(X)^2}{a} \leq \frac{\mathbb{E}(X)}{2}$ if $a \in [\mathbb{E}(X)/2,\mathbb{E}(X)]$, we have \begin{eqnarray*} \frac{1}{\text{PoA}_2^{NR}(F)}&\geq&1- \frac{\mathbb{P}(X < \mathbb{E}(X)/2) \mathbb{E}(X | X < \mathbb{E}(X)/2)}{2 \mathbb{E}(X)} - \frac{1}{4}\mathbb{P}(\mathbb{E}(X)/2 \leq X \leq \mathbb{E}(X)) . \end{eqnarray*} But $\mathbb{E}(X | X < \mathbb{E}(X)/2) \leq \mathbb{E}(X)/2$, and thus \begin{eqnarray*} \frac{1}{\text{PoA}_2^{NR}(F)}&\geq&1- \frac{1}{4} \mathbb{P}(X < \mathbb{E}(X)/2) - \frac{1}{4}\mathbb{P}(\mathbb{E}(X)/2 \leq X \leq \mathbb{E}(X)) \\ &=& 1- \frac{1}{4} \mathbb{P}(X \leq \mathbb{E}(X)) \geq \frac{3}{4} , \end{eqnarray*} obtaining the desired inequalities.
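As an illustration of the bounds just obtained (with $X\sim\mathrm{Unif}[0,1]$, an example of our choosing), formula \eqref{eqn:pos2} can be evaluated in closed form:

```latex
\[
\mathbb{E}(X)=\tfrac{1}{2},\qquad
\mathbb{P}\left(X \geq \tfrac{1}{4}\right)=\tfrac{3}{4},\qquad
\mathbb{E}\left(X \,\middle|\, X \geq \tfrac{1}{4}\right)=\tfrac{5}{8},
\qquad\text{so that}\qquad
\frac{1}{\text{PoS}_2^{NR}(F)}
=\frac{1}{2}+\frac{(3/4)(5/8)}{2\cdot(1/2)}
=\frac{31}{32}\geq\frac{3}{4},
\]
```

in agreement with the bound $\text{PoS}_2^{NR}(F)\leq 4/3$ proved above.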
To prove the tightness of the bound, let $\varepsilon > 0$ and $\eta>0$ be two small positive real numbers and consider the random variable $X_{\varepsilon, \eta}=(1-\eta) X + \eta\, U$, where $U\sim\mathrm{Unif}[0,1]$ is independent of $X$ and \[ X= \begin{cases} \varepsilon-\varepsilon^2 & \text{with probability } 1-\varepsilon,\\ 1 &\text{with probability } \varepsilon.\\ \end{cases} \] Note that when $\eta$ goes to $0$, $$\frac{1}{\text{PoS}_2^{NR}(F_{\varepsilon,\eta})} \to \frac{1}{2}+\frac{\mathbb{P}(X \geq \mathbb{E}(X)/2) \mathbb{E}(X | X \geq \mathbb{E}(X)/2)}{2 \mathbb{E}(X)}, $$ where $F_{\varepsilon, \eta}$ represents the c.d.f. of $X_{\varepsilon, \eta}$, and therefore it is enough to prove that $$\frac{1}{2}+\frac{\mathbb{P}(X \geq \mathbb{E}(X)/2) \mathbb{E}(X | X \geq \mathbb{E}(X)/2)}{2 \mathbb{E}(X)} \to 3/4$$ when $\varepsilon$ goes to $0$. The c.d.f. of $X$ is given by \[ F_\varepsilon(x) = \begin{cases} 0 & \text{if } x < \varepsilon-\varepsilon^2 ,\\ 1-\varepsilon &\text{if } x \in [\varepsilon-\varepsilon^2,1),\\ 1 & \text{if } x \geq 1; \\ \end{cases} \] and the expected value is $\mathbb{E}(X)=(\varepsilon-\varepsilon^2)(1-\varepsilon)+\varepsilon=\varepsilon (1+(1-\varepsilon)^2).$ Therefore, after some algebra, it follows that \[ \frac{1}{2}+\frac{\mathbb{P}(X \geq \mathbb{E}(X)/2) \mathbb{E}(X | X \geq \mathbb{E}(X)/2)}{2 \mathbb{E}(X)}= \frac{1}{2}+ \frac{\varepsilon}{2 \varepsilon (1+(1-\varepsilon)^2)}=\frac{1}{2}+ \frac{1}{2(1+(1-\varepsilon)^2)}, \] which converges to $3/4$ as $\varepsilon \to 0$; therefore the price of stability can be made arbitrarily close to $4/3.$ On the other hand, note that, by definition, $\text{PoS}_2(F) \leq \text{PoA}_2(F)$ for every distribution $F$; hence $\text{PoA}_2(F_{\varepsilon,\eta})$ also approaches $4/3$, which is the upper bound we already proved. Therefore, the bound is tight for the price of anarchy as well. \end{proof} \end{document}
\begin{document} \title[Recurrence of multiples of composition operators on weighted Dirichlet spaces]{Recurrence of multiples of composition operators on weighted Dirichlet spaces} \author[N. Karim]{Noureddine Karim} \author[O. Benchiheb]{Otmane Benchiheb} \author[M. Amouch]{Mohamed Amouch} \address[Noureddine Karim, Otmane Benchiheb, and Mohamed Amouch]{Chouaib Doukkali University. Department of Mathematics, Faculty of science El Jadida, Morocco} \email{[email protected]} \email{[email protected]} \email{[email protected]} \keywords{Hypercyclicity, Recurrence, Composition operator, Dirichlet spaces.} \subjclass[2010]{ Primarily 47A16, 37B20, Secondarily 46E50, 46T25 } \date{} \begin{abstract} A bounded linear operator $T$ acting on a Hilbert space $\mathcal{H}$ is said to be recurrent if for every non-empty open subset $U\subset \mathcal{H}$ there is a positive integer $n$ such that $T^n (U)\cap U\neq\emptyset$. In this paper, we completely characterize the recurrence of scalar multiples of composition operators, induced by linear fractional self-maps of the unit disk, acting on weighted Dirichlet spaces $S_\nu$; in particular on the Bergman space, the Hardy space, and the Dirichlet space. Consequently, we complete a previous work of Costakis et al. \cite{costakis} on the recurrence of linear fractional composition operators on the Hardy space. In this manner, we determine the triples $(\lambda,\nu,\phi)\in \mathbb{C}\times \mathbb{R}\times LFM(\mathbb{D})$ for which the scalar multiple of the composition operator $\lambda C_\phi$ acting on $S_\nu$ fails to be recurrent. \end{abstract} \renewcommand\thetable{\Roman{table}} \maketitle \section{Introduction and preliminaries} Throughout this paper, $\mathbb{C}$ will represent the complex plane, $\mathbb{C}^*$ the punctured plane $\mathbb{C}\backslash\{0\}$, and $\hat{\mathbb{C}} = \mathbb{C}\cup \{\infty\}$ will be the one-point compactification of $\mathbb{C}$.
Moreover, $\mathbb{D}$ will stand for the open unit disk of $\mathbb{C}$, while $\mathbb{T}$ will represent the unit circle of $\mathbb{C}$. A bounded linear operator $T$ acting on a Hilbert space $\mathcal{H}$ is said to be hypercyclic if there is a vector $f\in \mathcal{H}$ whose $T$-orbit, $$\mathcal{O}(f,T)=\{T^n f;\ n\in \mathbb{N}\},$$ is dense in $\mathcal{H}$. In such a case the vector $f$ is called a hypercyclic vector. An operator $T$ is said to be cyclic if there is a vector $f\in \mathcal{H}$ such that the space generated by $\mathcal{O}(f,T)$, $$\textrm{span}(\mathcal{O}(f,T))=\{p(T)f;\ p\ \textrm{a polynomial}\},$$ is dense in $\mathcal{H}$. In this case the vector $f$ is called a cyclic vector. A notion stronger than cyclicity but weaker than hypercyclicity is supercyclicity. An operator $T$ is said to be supercyclic if there is a vector $f\in \mathcal{H}$ whose projective orbit, $$\mathbb{C}.\mathcal{O}(f,T)=\{\lambda T^n f;\ n\in \mathbb{N}\ \textrm{and}\ \lambda\in \mathbb{C} \},$$ is dense in $\mathcal{H}$. The vector $f$ is called a supercyclic vector. Recall from \cite{universal} that an operator $T$ on a Hilbert space $\mathcal{H}$ is hypercyclic if and only if it is topologically transitive; that is, for any pair of non-empty open subsets $U$, $V$ of $\mathcal{H}$ there exists some $n \in \mathbb{N}$ such that $$T^n (U)\cap V\neq\emptyset.$$ Recently, the notions of hypercyclicity and transitivity have been generalized and studied; see \cite{recpro,uppfre,strtra,onthespe}.\\ Another important concept in topological dynamics is that of recurrence. This notion was initiated by Poincar\'{e} and Birkhoff, while a systematic study was given by Costakis et al. in \cite{costakis}.
A bounded linear operator $T$ on a Hilbert space $\mathcal{H}$ is called recurrent if for every non-empty open subset $U\subset \mathcal{H}$ there is a positive integer $n$ such that $$T^n (U)\cap U\neq\emptyset.$$ A vector $f\in \mathcal{H}$ is said to be recurrent for $T$ if there exists a strictly increasing sequence of positive integers $(n_k)_{k\in \mathbb{N}}$ such that $$T^{n_k}f\rightarrow f \ \ \mbox{ as } \ \ k\rightarrow \infty.$$ In that sense, every hypercyclic operator is recurrent, and every hypercyclic vector is recurrent. For more information on linear dynamics we refer to \cite{lincha} and \cite{dynoflinope}. The study of linear dynamics has become a very active area of research. This work is devoted to studying the recurrence of composition operators. Recall that if $\mathcal{H}$ is a Hilbert space of analytic functions on the unit disk $\mathbb{D}$, and if $\phi$ is a nonconstant self-map of $\mathbb{D}$, then the composition operator $C_\phi$ associated to $\phi$ on $\mathcal{H}$ is defined by $$C_\phi f=f\circ \phi \ \ \mbox{ for all } \ \ f \in \mathcal{H}.$$ In this case, the function $\phi$ is called the symbol of $C_\phi$. For general references on the theory of composition operators, see, e.g., Cowen and MacCluer's book \cite{CM}, Shapiro's book \cite{Sh} and K. H. Zhu's book \cite{spaholfun}. What makes composition operators special is that the properties of $C_\phi$ depend significantly on the behaviour of the symbol $\phi$. In this paper, we show that the recurrence of the composition operator is influenced by the location of the fixed points of its symbol. For each real number $\nu$, the weighted Dirichlet space $\mathcal{S}_\nu$ is the space of functions $f(z)=\sum_{n=0}^{\infty}a_n z^n$ analytic on $\mathbb{D}$ such that the norm given by \begin{equation}\label{21} \|f\|_\nu^2=\sum_{n=0}^{\infty}|a_n|^2(n+1)^{2\nu} \end{equation} is finite.
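To fix ideas, here is a small worked example (ours, purely for illustration) evaluating the norm \eqref{21} for the polynomial $f(z)=1+z+z^2$ at three classical exponents:

```latex
\[
\|f\|_{-1/2}^2=1+\tfrac{1}{2}+\tfrac{1}{3}=\tfrac{11}{6},\qquad
\|f\|_{0}^2=1+1+1=3,\qquad
\|f\|_{1/2}^2=1+2+3=6.
\]
```

The norms increase with $\nu$, consistent with the containments between the spaces $\mathcal{S}_\nu$.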
Endowed with the inner product $$\left\langle \sum_{n=0}^{\infty}a_n z^n,\sum_{n=0}^{\infty}b_n z^n\right\rangle =\sum_{n=0}^{\infty}a_n \bar{b}_n(n+1)^{2\nu},$$ the spaces $\mathcal{S}_\nu$ are Hilbert spaces; see \cite[p. 16]{CM} or \cite[p. 1]{EM}. For some values of $\nu$, the spaces $\mathcal{S}_\nu$ are very well known classical spaces of analytic functions: for instance, if $\nu = 1/2$, then $S_\nu$ is the Dirichlet space $\mathcal{D}$; for $\nu = 0$ it is the Hardy space $H^2$; and for $\nu = -1/2$ it is the Bergman space $\mathcal{A}^2$. Observe that if $\nu_1>\nu_2$, then the space $\mathcal{S}_{\nu_1}$ is strictly contained in $\mathcal{S}_{\nu_2}$, and that $\|f\|_{\nu_2}\leq\|f\|_{\nu_1}$ for every $f\in \mathcal{S}_{\nu_1}$. Alternatively, we can define the Dirichlet space as the collection of functions analytic on $\mathbb{D}$ whose first derivatives have square integrable modulus over $\mathbb{D}$. For $f\in \mathcal{D}$, the norm in $\mathcal{D}$ has the representation $$\|f\|_{\mathcal{D}}^2=|f(0)|^2+\int_{\mathbb{D}}|f'(z)|^2dA(z),$$ where $dA$ is the Lebesgue area measure on $\mathbb{D}$ normalized to have unit mass. In the Hardy space $H^2$ there is also an integral representation of the norm, namely $$\|f\|_{H^2}^2=\frac{1}{2\pi}\sup_{0<r<1}\int_{-\pi}^{\pi}|f(re^{i\theta})|^2d\theta.$$ The Dirichlet space and the Hardy space will play a central role in our study. For $\nu\in \mathbb{R}$, we define the function $k$ on $\mathbb{D}$ by $$k(z)=\sum_{n=0}^{\infty}\frac{z^n}{(n+1)^{2\nu}},$$ which is analytic on $\mathbb{D}$. Then for each $w\in \mathbb{D}$, the reproducing kernel is defined by $$K_w(z)=k(\bar{w}z).$$ It is easily seen that $\|K_w\|^2=k(|w|^2)$ and that for every $f(z)=\sum_{n=0}^{\infty}a_nz^n\in S_\nu$ we have $$\langle f,K_w\rangle=\sum_{n=0}^{\infty}a_nw^n=f(w).$$ It is known according to a result of P.R.
Hurst \cite{PH} that the composition operator $C_\phi$ is always bounded on $S_\nu$ when $\phi$ is a linear fractional self-map of $\mathbb{D}$.\\ Recall (see, e.g., \cite[Chapter 3]{Ah}, \cite[Chapter 0]{Sh}) that linear fractional maps are those maps of the form $$\phi(z) = \frac{az + b}{cz + d},$$ where $a,b,c$ and $d$ are complex numbers satisfying $ ad - bc\neq0$. They extend to the extended complex plane $\mathbb{\hat{C}}$ by defining $\phi(\infty)=a/c$ and $\phi(-d/c)=\infty$ if $c\neq0$, while $\phi(\infty)=\infty$ if $c=0$. Linear fractional maps can be classified according to their fixed points: if $\phi$ is not the identity, it has at most two fixed points. Two linear fractional maps $\phi$ and $\psi$ are called conjugate if there is another linear fractional map $T$ such that $$\phi=T^{-1}\psi T.$$ The map $\phi$ is called parabolic if it has only one fixed point $\alpha$ or, equivalently, if it is conjugate to a translation $\psi(z)=z+\tau$, where $\tau\neq0$. If $\phi$ has two distinct fixed points $\alpha$ and $\beta$, then $\phi$ is conjugate to a dilation $\psi(z)=\mu z$. In this case, the linear fractional map $\phi$ is called elliptic if $|\mu|=1$, hyperbolic if $\mu>0$, and loxodromic otherwise; see \cite{Ah} for more details. The chain rule shows that the value of the derivative at the fixed point is preserved under conjugation. Therefore, the derivative of a parabolic linear fractional map at its fixed point is $1$, while the derivative of a hyperbolic one is strictly less than $1$ at its attractive fixed point and greater than $1$ at its repulsive point. We write $LFM(\mathbb{D})$ for the set of all such maps that are, in addition, self-maps of the unit disk $\mathbb{D}$. It is known that if $\phi\in LFM(\mathbb{D})$, then the derivative $\phi'(\eta)$ exists and is finite for every $\eta$ in the unit circle $\mathbb{T}$. The condition $\phi(\mathbb{D})\subset \mathbb{D}$ imposes restrictions on the location of the fixed points of $\phi$.
We have that (see \cite[p. 5]{Sh}): \begin{enumerate} \item If $\phi$ is parabolic, then its fixed point $\eta$ is in $\mathbb{T}$ and it satisfies $\phi'(\eta)=1$. \item If $\phi$ is hyperbolic, then the attractive fixed point is in $\bar{\mathbb{D}}$ and the other fixed point lies outside of $\mathbb{D}$; both fixed points are on $\mathbb{T}$ if and only if $\phi$ is an automorphism of $\mathbb{D}$. \item If $\phi$ is loxodromic or elliptic, then one fixed point is in $\mathbb{D}$ and the other fixed point lies outside of $\bar{\mathbb{D}}$. The elliptic ones are always automorphisms of $\mathbb{D}$. The fixed point in $\mathbb{D}$ for the loxodromic ones is attractive. \end{enumerate} Concerning the dynamics of composition operators on Dirichlet spaces, much work has been done. In her dissertation \cite{Zo}, Nina Zorboska studied composition operators induced by non-elliptic disk automorphisms and proved that they are all cyclic on the Hardy space. In addition, in \cite{zo3} she obtained some results about cyclicity and hypercyclicity on the so-called smooth weighted Hardy spaces, which are the weighted Hardy spaces whose functions have continuous first derivatives on the boundary of the unit disk (basically, weighted Hardy spaces strictly smaller than $\mathcal{S}_{3/2}$). In \cite{BS1,BS2}, Bourdon and Shapiro thoroughly characterized the cyclicity, the supercyclicity, and the hypercyclicity of linear fractional composition operators on the Hardy space $H^2(\mathbb{D})$ (see Table I). Moreover, in \cite{BS2} they gave a program for transferring the cyclic and hypercyclic properties of linear fractional composition operators to general composition operators on the Hardy space.
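As a concrete instance of the fixed-point classification above (our computation, included only for illustration), consider $\phi(z)=\frac{3z+1}{z+3}$, which appears in Table III below as an example of a hyperbolic automorphism:

```latex
Solving $\phi(z)=z$, i.e. $z(z+3)=3z+1$, gives $z^2=1$, so the fixed points
are $\pm1\in\mathbb{T}$; hence $\phi$ is an automorphism of $\mathbb{D}$.
Moreover,
\[
\phi'(z)=\frac{ad-bc}{(cz+d)^2}=\frac{8}{(z+3)^2},\qquad
\phi'(1)=\frac{1}{2}<1,\qquad \phi'(-1)=2>1,
\]
so $1$ is the attractive fixed point, $-1$ is the repulsive one, and $\phi$
is hyperbolic.
```

This is the pattern described above: a hyperbolic automorphism has both fixed points on $\mathbb{T}$, with derivative less than $1$ at the attractive one.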
In \cite{EM}, answering some open questions posed by Zorboska \cite{zo3}, Gallardo-Guti\'{e}rrez and Montes-Rodr\'{\i}guez obtained a complete characterization of the cyclic and hypercyclic composition operators induced by linear fractional maps on weighted Dirichlet spaces $\mathcal{S}_\nu$ (see Table II). In the present work, we give a complete characterization of the recurrence of scalar multiples of composition operators whose symbols are linear fractional maps, acting on weighted Dirichlet spaces $\mathcal{S}_\nu$; in particular, on the Bergman space $\mathcal{S}_{-1/2}$, the Hardy space $\mathcal{S}_0$ and the classical Dirichlet space $\mathcal{S}_{1/2}$. To do this, spectral techniques will play a significant role. The paper is organized as follows: In the second section, we characterize the recurrence of the linear fractional composition operator $C_\phi$ on $\mathcal{S}_\nu$ by providing a necessary and sufficient condition on $\phi$ and the real number $\nu$. The third section is devoted to studying the recurrence of scalar multiples of linear fractional composition operators $\lambda C_\phi$ on the weighted Dirichlet spaces by providing a necessary and sufficient condition on $\phi$, the real number $\nu$, and the complex scalar $\lambda$. In Tables I and II, we display the classification of Gallardo-Guti\'{e}rrez and Montes-Rodr\'{\i}guez of the dynamics of composition operators and their scalar multiples on $\mathcal{S}_\nu$. In Table III, we summarize our results about the recurrence of linear fractional composition operators on $\mathcal{S}_\nu$. Finally, in Table IV, we give a summary of the recurrence of scalar multiples of linear fractional composition operators on $\mathcal{S}_\nu$. \begin{table}[h!]
\centering \begin{tabular}{|p{3cm}|c|c|c|} \hline Type of $\phi$ & Cyclic & Supercyclic & Hypercyclic \\ \hline\hline Hyperbolic Automorphism & $\nu<1/2$ & $\nu<1/2$ & $\nu<1/2$ \\ \hline Parabolic Automorphism & $\nu<1/2$ & $\nu<1/2$ & $\nu<1/2$ \\ \hline Hyperbolic Non-Automorphism & Always & $\nu\leq1/2$ & $\nu<1/2$ \\ \hline Parabolic Non-Automorphism& $\nu\leq3/2$ & Never & Never \\ \hline Interior and Exterior& Always & Never & Never \\ \hline Interior and Boundary& Never & Never & Never \\ \hline Elliptic Irrational rotation& Always & Never & Never \\ \hline Elliptic Rational rotation& Never & Never & Never \\ \hline \end{tabular} \caption{Cyclic properties of $C_\phi$ on $\mathcal{S}_\nu$, $\phi$ a linear fractional map.} \end{table} \begin{table}[h!] \begin{tabular}{|p{3cm}|c|} \hline Type of $\phi$& Hypercyclicity \\ \hline\hline Hyperbolic Automorphism& $\nu<1/2$ and $\phi'(\eta)^{(1-2\nu)/2}<|\lambda|<\phi'(\eta)^{(2\nu-1)/2}$ \\ \hline Parabolic automorphism& $\nu<1/2$ and $|\lambda|=1$ \\ \hline Hyperbolic Non-Automorphism & $\nu\leq1/2$ and $|\lambda|>\phi'(\eta)^{(1-2\nu)/2}$ \\ \hline \end{tabular} \caption{Hypercyclicity of $\lambda C_\phi$ on $\mathcal{S}_\nu$, with $\phi$ a linear fractional map and $\eta$ the attractive fixed point of $\phi$.} \end{table} Our goal in the next two sections is to complete the previous tables by adding the recurrence of $C_\phi$ and of $\lambda C_\phi$ on $\mathcal{S}_\nu$. All the results about the recurrence of composition operators and their scalar multiples on the $\mathcal{S}_\nu$ spaces are summarized in Table III and Table IV below. \begin{table}[h!]
\centering \begin{tabular}{|p{3cm}|c|c|} \hline Type of $\phi$ & Recurrence & Examples \\ \hline\hline Hyperbolic Automorphism & $\nu<1/2$ & $\frac{3z+1}{z+3}$ \\ \hline Parabolic Automorphism & $\nu<1/2$ & $\frac{(1+i)z-1}{z+i-1}$ \\ \hline Hyperbolic Non-Automorphism & $\nu<1/2$ & $\frac{1+z}{2}$ \\ \hline Parabolic Non-Automorphism& Never & $\frac{1}{2-z}$ \\ \hline Interior and Exterior& Never & $\frac{-z}{2+z}$\\ \hline Interior and Boundary& Never & $\frac{z}{2-z}$ \\ \hline Elliptic Irrational rotation& Always & $e^{i\sqrt{2}}z$ \\ \hline Elliptic Rational rotation& Always & $e^{2i\pi/3}z$ \\ \hline \end{tabular} \caption{Recurrence of $C_\phi$ on $\mathcal{S}_\nu$, $\phi$ a linear fractional map.} \end{table} \begin{table}[h!] \begin{tabular}{|p{3cm}|c|} \hline Type of $\phi$& Recurrence \\ \hline\hline Elliptic Automorphism& $\nu\in \mathbb{R}$ and $|\lambda|=1$\\ \hline Hyperbolic Automorphism& $\nu<1/2$ and $\phi'(\eta)^{(1-2\nu)/2}<|\lambda|<\phi'(\eta)^{(2\nu-1)/2}$ \\ \hline Parabolic automorphism& $\nu<1/2$ and $|\lambda|=1$ \\ \hline Hyperbolic Non-Automorphism & $\nu\leq1/2$ and $|\lambda|>\phi'(\eta)^{(1-2\nu)/2}$ \\ \hline \end{tabular} \caption{Recurrence of $\lambda C_\phi$ on $\mathcal{S}_\nu$, $\phi$ a linear fractional map and $\eta$ the attractive fixed point of $\phi$.} \end{table} For the cases not appearing in Table IV, the operator $\lambda C_\phi$ cannot be recurrent for any $\lambda\in \mathbb{C}$. \section{Recurrence of $C_\phi$ on $\mathcal{S}_\nu$} A useful tool in the study of any of the dynamical properties is the following proposition, known as the comparison principle (see \cite[p. 111]{Sh} and \cite{salas}). \begin{proposition} Let $\mathcal{E}$ be a metric space and $\mathcal{F}$ be a subspace that is itself a linear metric space with a stronger topology. Suppose that $T$ is a continuous linear transformation on $\mathcal{E}$ that maps the smaller space $\mathcal{F}$ into itself and is also continuous in the topology of this space.
If $T$ is cyclic on $\mathcal{F}$, then it is also cyclic on $\mathcal{E}$ and has an $\mathcal{E}$-cyclic vector that belongs to $\mathcal{F}$. Furthermore, the same is true for supercyclic and hypercyclic operators. \end{proposition} \begin{remark} Observe that the comparison principle is also true for recurrent operators. If $\nu_1>\nu_2$, we have that $\mathcal{S}_{\nu_1}$ is a dense subspace of the space $\mathcal{S}_{\nu_2}$. Thus we can apply the comparison principle in the sense that, if an operator $T$ acting on $\mathcal{S}_{\nu_1}$ is recurrent, then it is recurrent on $\mathcal{S}_{\nu_2}$. \end{remark} \begin{theorem}\label{thm31} Let $\phi$ be a linear fractional map on $\mathbb{D}$ with an interior fixed point $p\in \mathbb{D}$. Then $C_\phi$ is recurrent on any of the $\mathcal{S}_\nu$ spaces if and only if $\phi$ is an elliptic automorphism of $\mathbb{D}$. \end{theorem} \begin{proof} If $\phi$ is an elliptic automorphism, then $\phi$ is conjugate to a rotation (see \cite[Chapter 0]{Sh}). Thus, there exist a composition operator $S$ induced by a linear fractional map and a complex number $\lambda\in \mathbb{T}$ such that $C_\phi=S^{-1}C_{\phi_{\lambda}}S$, where $\phi_{\lambda}(z)=\lambda z$, $z\in \mathbb{D}$. In order to prove that $C_\phi$ is recurrent, it suffices to prove that $C_{\phi_{\lambda}}$ is recurrent. Since $\lambda\in \mathbb{T}$, there exists a sequence $(n_k)_{k\in \mathbb{N}}$ such that $\lambda^{n_k}\rightarrow 1$. For any $f(z) = \sum_{m\geq0}a_mz^m \in \mathcal{S}_\nu$ we have that $$\|C_{\phi_{\lambda}}^{n_k}(f)- f\|_{\mathcal{S}_\nu}^2=\sum_{m\geq0}|a_m|^2 |\lambda^{m n_k}-1|^2(m+1)^{2\nu}\rightarrow 0,\ \textrm{as}\ k\rightarrow\infty.$$ Thus, every $f\in \mathcal{S}_\nu$ is a recurrent vector for $C_{\phi_\lambda}$, and so $C_{\phi_\lambda}$ is recurrent on $\mathcal{S}_\nu$.
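The choice of the sequence $(n_k)$ in the argument above can be made explicit in simple cases (our illustration): for the rational rotation $\lambda=e^{2i\pi/3}$ appearing in Table III, one may take $n_k=3k$, since

```latex
\[
\lambda^{3k}=e^{2i\pi k}=1 \qquad \text{for all } k\in\mathbb{N},
\]
```

while for an irrational rotation such as $\lambda=e^{i\sqrt{2}}$, the orbit $(\lambda^{n})_{n\geq1}$ is dense in $\mathbb{T}$ by Weyl's equidistribution theorem, so a subsequence with $\lambda^{n_k}\rightarrow1$ always exists.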
Now, if $\phi$ is not an elliptic automorphism, then the Denjoy-Wolff Iteration Theorem, \cite[Proposition 1, Chapter 5]{Sh}, implies that $\phi_n(z)$ converges to $p$ for every $z\in \mathbb{D}$. Hence, if $f$ is a recurrent function of $C_\phi$ on $\mathcal{S}_\nu$, then there exists a sequence of positive integers $(n_k)_{k\in \mathbb{N}}$ such that $f\circ\phi_{n_k}\rightarrow f$ in $\mathcal{S}_\nu$. Thus, by the continuity of the point evaluation functionals on $S_\nu$, we have that for each $z\in \mathbb{D}$: $$f(z)=\lim_{k\rightarrow\infty}f(\phi_{n_k}(z))=f(p).$$ We conclude that the only recurrent vectors of $C_\phi$ are the constant functions; since these are not dense in $\mathcal{S}_\nu$, the operator $C_\phi$ is not recurrent. \end{proof} \begin{theorem} Let $\phi$ be a parabolic automorphism of the unit disk. Then $C_\phi$ is recurrent on $\mathcal{S}_\nu$ if and only if $\nu<1/2$. \end{theorem} \begin{proof} First, if $\nu<1/2$, then $C_\phi$ is hypercyclic, hence recurrent. We now prove that this condition is necessary, starting with the case $\nu=1/2$. Since $\phi$ is parabolic, it has a fixed point on the unit circle; without loss of generality, we may assume this fixed point is $1$. Suppose that $C_\phi$ is recurrent, and let $f$ be a recurrent vector for $C_\phi$. Then there exists a sequence $(n_k)_{k\in \mathbb{N}}$ of positive integers such that $$C_{\phi_{n_k}}(f)\rightarrow f$$ in $\mathcal{D}$. Since $\phi$ has no fixed point in $\mathbb{D}$, by the Denjoy-Wolff Iteration Theorem, the fixed point $1$ is attractive. Thus, $\phi_{n_k}$ converges to $1$ uniformly on compact subsets of $\mathbb{D}$; in particular, $\phi_{n_k}(z)$ converges to $1$ for every $z\in \mathbb{D}$. Therefore, we have that $$\|C_{\phi_{n_k}}(f)-f(1)\|_{\mathcal{D}}^2=\|f\circ\phi_{n_k}-f(1)\|_{\mathcal{D}}^2 =|f\circ\phi_{n_k}(0)-f(1)|^2+ \int_{\mathbb{D}}|f'(\phi_{n_k}(z))|^2 |\phi_{n_k}'(z)|^2 dA(z)\rightarrow 0,$$ as $k\rightarrow \infty$.
Thus, all recurrent vectors for $C_\phi$ are constant, and so $C_\phi$ cannot be recurrent on $\mathcal{D}$. By the comparison principle, $C_\phi$ cannot be recurrent on any $S_\nu$ with $\nu>1/2$ either. \end{proof} \begin{theorem} Let $\phi$ be a hyperbolic automorphism of the unit disk. Then $C_\phi$ is recurrent on $\mathcal{S}_\nu$ if and only if $\nu<1/2$. \end{theorem} \begin{proof} If $\nu<1/2$, then $C_\phi$ is hypercyclic, hence recurrent. For $\nu\geq1/2$, the non-recurrence of $C_\phi$ follows exactly as in the case of the parabolic automorphism. \end{proof} In the sequel, let $\mathcal{S}_\nu^{0}$ be the space obtained from $\mathcal{S}_\nu$ by identifying functions that differ by a constant; that is, two functions $f,g\in \mathcal{S}_\nu$ represent the same element of $\mathcal{S}_\nu^{0}$ if and only if $f-g$ is constant. The space $\mathcal{S}_\nu^{0}$ is invariant under the operator $C_\phi$, and so we can consider $\tilde{C}_\phi$, the restriction of $C_\phi$ to $\mathcal{S}_\nu^{0}$. \begin{lemma}\label{lem36} Let $\phi$ be an analytic self-map of the unit disk. If $C_\phi$ is recurrent on $\mathcal{S}_{\nu}$, then so is $\tilde{C}_\phi$ on $\mathcal{S}_\nu^{0}$. \end{lemma} \begin{proof} We have that for every $f\in \mathcal{S}_\nu$ and $k\in \mathbb{N}$, \begin{align*} \big\|C_{\phi_k}f-f\big\|_{S_\nu}&\geq\big\|C_{\phi_k}f-f+f(0)-f(\phi_k(0))\big\|_{S_\nu}\\ &=\big\|\tilde{C}_{\phi_k}f-f\big\|_{\mathcal{S}_\nu^{0}}. \end{align*} Thus, $Rec(\tilde{C}_\phi)=Rec(C_\phi)\cap \mathcal{S}_\nu^{0}$. Assume that $C_\phi$ is recurrent; then $Rec(C_\phi)$ is dense in $\mathcal{S}_\nu$, which implies that $Rec(\tilde{C}_\phi)$ is dense in $\mathcal{S}_\nu^{0}$. \end{proof} \begin{theorem} Let $\phi$ be a linear fractional map on $\mathbb{D}$ without an interior fixed point. If $\phi$ is a hyperbolic non-automorphism, then $C_\phi$ is recurrent on $\mathcal{S}_\nu$ if and only if $\nu<1/2$.
\end{theorem} \begin{proof} If $\nu<1/2$, then $C_\phi$ is hypercyclic, thus recurrent. Now, suppose that $\nu>1/2$. Since $\phi$ has no interior fixed point, $\phi$ must have an attractive fixed point $p\in \mathbb{T}$; that is, $\phi$ is a hyperbolic non-automorphism with attractive fixed point $p\in \mathbb{T}$. Hence, the spectrum of $C_\phi$ on $S_\nu$ is given by $$\sigma(C_\phi)=\{\alpha\in \mathbb{C};\ |\alpha|\leq \phi'(p)^{-\gamma}\}\cup \{\phi'(p)^{k};\ k=0,1,...\},$$ where $\phi'(p)$ is the angular derivative of $\phi$ at $p$ and $\gamma=(1-2\nu)/2$; see \cite{PH}. If $\nu>1/2$, then $\gamma<0$. In this case the singleton $\{\phi'(p)\}$ is a component of the spectrum that does not intersect $\mathbb{T}$, and so, by \cite[Proposition 2.11]{costakis}, $C_\phi$ cannot be recurrent. Now, suppose that $\nu=1/2$, that is, $\mathcal{S}_\nu$ is the Dirichlet space $\mathcal{D}$. If $C_\phi$ is recurrent on the Dirichlet space, then, by Lemma \ref{lem36}, so is $\tilde{C}_\phi$ on $\mathcal{D}_0:=\mathcal{S}_{1/2}^0$. Since $\phi$ is not an automorphism, $\phi(\mathbb{D})$ is properly contained in $\mathbb{D}$, so there exists a non-empty open set $U\subset \mathbb{D}$ such that $\phi(\mathbb{D})\cap U=\emptyset$. Hence, since $\phi$ is injective, on $\mathcal{D}_0$ we have $$\int_\mathbb{D} |f'(\phi(z))|^2 |\phi'(z)|^2 dA(z)=\int_{\phi(\mathbb{D})}|f'(z)|^2 dA(z)\leq \int_{\mathbb{D}}|f'(z)|^2 dA(z)-\int_{U}|f'(z)|^2 dA(z)<\int_{\mathbb{D}}|f'(z)|^2 dA(z),$$ where the last inequality is strict for non-constant $f$ because $f'$, being analytic and not identically zero, cannot vanish identically on the open set $U$. Thus, $\|\tilde{C}_\phi f\|_{\mathcal{D}_0}<\|f\|_{\mathcal{D}_0}$ for every non-zero $f\in \mathcal{D}_0$; since then $\|\tilde{C}_\phi^{n}f\|_{\mathcal{D}_0}\leq\|\tilde{C}_\phi f\|_{\mathcal{D}_0}<\|f\|_{\mathcal{D}_0}$ for all $n\geq1$, no non-zero vector can be recurrent, and therefore $\tilde{C}_\phi$ cannot be recurrent on $\mathcal{D}_0$. \end{proof} \begin{theorem} Let $\phi$ be a parabolic non-automorphism of the unit disk. Then $C_\phi$ is never recurrent on any of the weighted Dirichlet spaces $S_\nu$.
\end{theorem} \begin{proof} Since $\phi$ is a parabolic non-automorphism, the spectrum of $C_\phi$ on $\mathcal{S}_\nu$ is given by (see \cite{rikka}) $$\sigma(C_\phi)=\{e^{-\alpha t};\ t\geq0\}\cup\{0\},$$ so $\{0\}$ is a component of the spectrum which does not intersect the unit circle. Hence, by \cite[Proposition 2.11]{costakis}, the operator $C_\phi$ cannot be recurrent. \end{proof} \section{Recurrence of $\lambda C_\phi$ on $\mathcal{S}_\nu$} \begin{theorem} Let $\phi$ be a holomorphic self-map of $\mathbb{D}$ with an interior fixed point. Then the operator $\lambda C_\phi$ is recurrent on $\mathcal{S}_\nu$ if and only if $\phi$ is an elliptic automorphism and $\lambda\in \mathbb{T}$. \end{theorem} \begin{proof} Assume that $\phi$ is an elliptic automorphism and $|\lambda|=1$. Then $C_\phi$ is recurrent by Theorem \ref{thm31}, and since $|\lambda|=1$, the operator $\lambda C_\phi$ is recurrent as well. If $\phi$ is not an elliptic automorphism, then $C_\phi$ is not recurrent, which implies that $\lambda C_\phi$ is not recurrent either. Finally, suppose that $|\lambda|\neq1$ and $\phi$ is an elliptic automorphism. Let $p\in \mathbb{D}$ be the fixed point of $\phi$. Suppose that $\lambda C_{\phi}$ is recurrent and let $f$ be a recurrent vector. Then there exists a sequence $(n_k)_{k\in \mathbb{N}}$ of positive integers such that $$\lambda^{n_k}f\circ\phi_{n_k}\rightarrow f$$ in $\mathcal{S}_\nu$. Thus, $$f(p)=\lim_{k\rightarrow\infty}\lambda^{n_k}(C_{\phi_{n_k}}f)(p)= \lim_{k\rightarrow\infty}\lambda^{n_k}f(\phi_{n_k}(p)).$$ Observe that $f(\phi_{n_k}(p))\rightarrow f(p)$ as $k\rightarrow \infty$. Therefore, if $|\lambda|<1$, then $f(p)=0$; and if $|\lambda|>1$, then $\lambda^{n_k}f(\phi_{n_k}(p))$ diverges unless $f(p)=0$, so again $f(p)=0$. But $f(p)$ cannot vanish for every recurrent vector $f$, because the set of recurrent vectors is dense in $\mathcal{S}_\nu$. \end{proof} \begin{theorem}\label{thm41} Let $\phi$ be a hyperbolic non-automorphism and $\eta$ its boundary fixed point.
Then $\lambda C_\phi$ is recurrent on $\mathcal{S}_\nu$ if and only if $|\lambda|>\phi'(\eta)^{(1-2\nu)/2}$ and $\nu\leq1/2$. In particular, for the Dirichlet space ($\nu=1/2$), the operator $\lambda C_\phi$ is recurrent if and only if $|\lambda|>1$. \end{theorem} To prove Theorem \ref{thm41}, we need the following lemma and proposition. \begin{lemma}\label{lem42} Let $T$ be a bounded operator on a separable Hilbert space $\mathcal{H}$. Suppose that there exists a non-zero vector $g\in \mathcal{H}$ such that the set of complex numbers $$A:=\{\langle T f,g\rangle;\ f\in \mathcal{H}\}$$ is not dense in $\mathbb{C}$. Then $T$ is not recurrent. \end{lemma} \begin{proof} The non-zero vector $g\in \mathcal{H}$ defines a linear functional $\psi:f\mapsto \langle f, g\rangle$ whose range is $\mathbb{C}$. Suppose that $T$ is recurrent; then its range is dense in $\mathcal{H}$, since every recurrent vector $f=\lim_{k}T^{n_k}f$ belongs to $\overline{\textrm{Im}(T)}$ and the recurrent vectors are dense. Thus, $$\mathbb{C}=\psi(\mathcal{H})=\psi(\overline{\textrm{Im}(T)})\subset \overline{\psi(\textrm{Im}(T))}=\overline{A},$$ which implies that the set $A$ is dense in $\mathbb{C}$, a contradiction. \end{proof} \begin{proposition}\cite[Proposition 2.16]{EM}\label{prop44} If $\nu<1/2$, then for each $f(z)=\sum_{n=0}^{\infty}a_n z^n\in \mathcal{S}_\nu$ there is a constant $C>0$ such that $$|f'(z)|\leq\frac{C}{(1-|z|^2)^{(3-2\nu)/2}}\ (z\in \mathbb{D}).$$ \end{proposition} Now we prove Theorem \ref{thm41}. \begin{proof}[Proof of Theorem \ref{thm41}] First, if $\nu\leq1/2$ and $|\lambda|>\phi'(\eta)^{(1-2\nu)/2}$, then $\lambda C_\phi$ is hypercyclic on $\mathcal{S}_\nu$ and thus recurrent on $\mathcal{S}_\nu$. Now we prove that the conditions are necessary. Since recurrence is invariant under similarity, we may suppose that the boundary fixed point is $1$. Suppose first that $\nu>1/2$. Then the reproducing kernel at $1$, $$K_1(z)=\sum_{n=0}^{\infty}\frac{z^n}{(n+1)^{2\nu}},$$ belongs to $\mathcal{S}_\nu$.
Furthermore, for any $f\in \mathcal{S}_\nu$ we have $$\langle\overline{\lambda}C_\phi^* K_1,f\rangle=\overline{\lambda}\langle K_1,C_\phi f\rangle=\overline{\lambda}\langle K_1,f\circ\phi\rangle=\overline{\lambda}f(\phi(1))=\overline{\lambda}f(1)= \langle\overline{\lambda}K_1,f\rangle.$$ Hence $K_1$ is an eigenvector of $\overline{\lambda}C_\phi^*$ with eigenvalue $\overline{\lambda}$. Thus, if $|\lambda|\neq1$, the operator $\lambda C_\phi$ is not recurrent, since the point spectrum $\sigma_p(\overline{\lambda}C_\phi^*)$ of its adjoint $\overline{\lambda}C_\phi^*$ must be contained in the unit circle by \cite[Proposition 2.15]{costakis}. If $|\lambda|=1$, then $\lambda C_\phi$ is again not recurrent, since $C_\phi$ is not recurrent on $\mathcal{S}_\nu$. Now let $\nu=1/2$, so that $\mathcal{S}_\nu$ is the Dirichlet space $\mathcal{D}$. Assume that $\lambda C_\phi$ is recurrent on $\mathcal{D}$ for some $|\lambda|\leq1$; then so is $\lambda \tilde{C}_{\phi}$ on $\mathcal{D}_0$. But on $\mathcal{D}_0$ we have $$\int_\mathbb{D}|\lambda f'(\phi(z))|^2|\phi'(z)|^2 dA(z)=|\lambda|^2\int_{\phi(\mathbb{D})}|f'(z)|^2dA(z)<\int_\mathbb{D}|f'(z)|^2dA(z)$$ for every non-zero $f\in\mathcal{D}_0$. Thus $\|\lambda \tilde{C}_\phi f\|_{\mathcal{D}_0}<\|f\|_{\mathcal{D}_0}$ for every non-zero $f\in\mathcal{D}_0$, and therefore, arguing as in the case of $C_\phi$, the operator $\lambda \tilde{C}_\phi$ cannot be recurrent on $\mathcal{D}_0$, a contradiction. It remains to prove that if $|\lambda|\leq\mu^{(1-2\nu)/2}$ and $\nu<1/2$, then $\lambda C_\phi$ cannot be recurrent on $\mathcal{S}_\nu$. The growth estimate for the derivative in Proposition \ref{prop44} provides a constant $C$ such that the first inequality below holds. Thus we have \begin{align*} |\langle \lambda C_\phi f,z \rangle| &= 2^{2\nu}|\lambda||(f\circ\phi)'(0)| \\ &= 2^{2\nu}|\lambda||f'(\phi(0))||\phi'(0)| \\ &= 2^{2\nu}|\lambda||f'(1-\mu)|\mu\\ &\leq\frac{2^{2\nu}C|\lambda|\mu}{(1-|1-\mu|^2)^{(3-2\nu)/2}}\\ &\leq\frac{2^{2\nu}C \mu^{(3-2\nu)/2}}{(2\mu-\mu^2)^{(3-2\nu)/2}}\\ &=\frac{2^{2\nu}C}{(2-\mu)^{(3-2\nu)/2}}.
\end{align*} By applying Lemma \ref{lem42} we see that $\lambda C_\phi$ is not recurrent. \end{proof} \begin{theorem} Let $\phi$ be a parabolic automorphism of the unit disk. Then the operator $\lambda C_\phi$ is recurrent on $\mathcal{S}_\nu$ if and only if $\nu<1/2$ and $|\lambda|=1$. \end{theorem} \begin{proof} First, if $\nu<1/2$ and $|\lambda|=1$, then $\lambda C_\phi$ is hypercyclic on $\mathcal{S}_\nu$, which implies that it is recurrent. Now we prove that these conditions are necessary. Let us first examine the case $\nu=1/2$. Suppose that $|\lambda|\leq1$, and let $f$ be a recurrent vector for $\lambda C_\phi$; then there is a sequence $(n_k)_{k\in \mathbb{N}}$ of positive integers such that $\lambda^{n_k}C_\phi^{n_k}f\rightarrow f$ in $\mathcal{D}$, where $\phi$ is a parabolic automorphism with fixed point $1$. If $|\lambda|>1$, then, by what we have just proved, $\lambda^{-1}C_{\phi_{-1}}$ is not recurrent. Now, an invertible operator is recurrent if and only if its inverse is. Consequently, $\lambda C_\phi$ is not recurrent on $\mathcal{D}$.\\ Now suppose that $\nu>1/2$; we distinguish two cases:\\ \textbf{Case 1}: If $|\lambda|\neq1$, the reproducing kernel at $1$ in $\mathcal{S}_\nu$ is an eigenvector of the adjoint $\overline{\lambda}C_\phi^{*}$ associated with the eigenvalue $\overline{\lambda}$, whose modulus is not $1$; this contradicts \cite[Proposition 2.15]{costakis}.\\ \textbf{Case 2}: If $|\lambda|=1$, we know that $C_\phi$ is not recurrent on $\mathcal{S}_\nu$, and hence $\lambda C_\phi$ is not recurrent on $\mathcal{S}_\nu$ either.\\ Now suppose that $\nu<1/2$. Since $\phi$ is parabolic, its unique fixed point lies on the boundary of the unit disk, and we may suppose that this fixed point is $1$. Let $$\sigma(w)=\frac{i(1+w)}{1-w}\quad \textrm{and}\quad \psi=\sigma\circ\phi\circ\sigma^{-1}.$$ Then $\sigma$ is a linear fractional map of the unit disk onto the upper half-plane that takes $1$ to $\infty$, which implies that $\infty$ is the only fixed point of $\psi$, and so $\psi(w)=w+b$ with $b\neq0$; moreover, $b$ is real, since $\phi$ corresponds to an automorphism of the upper half-plane. Setting $a=-ib$, so that $a\neq0$ and $\operatorname{Re} a=0$, and coming back to the unit disk, $\phi$ satisfies the following formula: \begin{equation} \phi(z)=\frac{(2-a)z+a}{-az+2+a}\quad\textrm{with}\ a\neq0\ \textrm{and}\ \operatorname{Re} a=0. \end{equation} Suppose that $|\lambda|<1$. By the growth estimate of Proposition \ref{prop44}, there is a constant $C$ such that the inequality below holds: \begin{align*} |\langle\lambda C_\phi f,z\rangle|&=2^{2\nu}|\lambda||f'(\phi(0))||\phi'(0)|\\ &=2^{2\nu}|\lambda|\big|f'\big(\frac{a}{2+a}\big)\big|\frac{4}{4+|a|^2}\\ &\leq2^{2\nu+2}C|\lambda|\big(1-\frac{|a|^2}{4+|a|^2}\big)^{(2\nu-3)/2}\frac{1}{4+|a|^2}\\ &=\frac{2^{4\nu-1}C|\lambda|}{(4+|a|^2)^{(2\nu-1)/2}}\\ &<\frac{2^{4\nu-1}C}{(4+|a|^2)^{(2\nu-1)/2}}. \end{align*} Thus, by Lemma \ref{lem42}, the operator $\lambda C_\phi$ is not recurrent. If $|\lambda|>1$, then $\lambda^{-1}C_{\phi_{-1}}$ is not recurrent and, therefore, neither is $\lambda C_\phi$. \end{proof} \begin{theorem} Let $\phi$ be a hyperbolic automorphism of the unit disk and $\eta$ its attractive fixed point. Then $\lambda C_\phi$ is recurrent if and only if $\nu<1/2$ and $\phi'(\eta)^{(1-2\nu)/2}<|\lambda|<\phi'(\eta)^{(2\nu-1)/2}$. \end{theorem} \begin{proof} For $\nu\geq1/2$, the non-recurrence of $\lambda C_\phi$ follows exactly as in the case of the parabolic automorphism.\\ Now suppose that $\nu<1/2$. We need an explicit expression for $\phi$. Without loss of generality, we may suppose that $\phi$ has $-1$ and $1$ as its fixed points, and that $1$ is the attractive fixed point. In order to compute $\phi$ explicitly, we use the change of variables $$\sigma(z)=\frac{i(1-z)}{1+z},$$ which sends the unit disk onto the upper half-plane, the fixed points $1$ and $-1$ to $0$ and $\infty$, respectively, and conjugates $\phi$ to the contraction map $w\mapsto\mu w$, where $0<\mu<1$.
Coming back to the unit disk, we have $$\phi(z)=\frac{(1+\mu)z+1-\mu}{(1-\mu)z+1+\mu}\quad \textrm{with}\ 0<\mu<1.$$ Observe that the derivative at the attractive fixed point is $\phi'(1)=\mu$.\\ Now, for any $f\in \mathcal{S}_\nu$ we have the following estimate for some constant $C$: \begin{align*} |\langle \lambda C_\phi f,z\rangle|&=2^{2\nu}|\lambda||f'(\phi(0))||\phi'(0)|\\ &=2^{2\nu}|\lambda|\big|f'\big(\frac{1-\mu}{1+\mu}\big)\big|\frac{4\mu}{(1+\mu)^2}\\ &\leq\frac{2^{2\nu}C|\lambda|\mu^{(2\nu-1)/2}}{(1+\mu)^{2\nu-1}}, \end{align*} which remains bounded when $|\lambda|\mu^{(2\nu-1)/2}\leq1$. Therefore, if $\lambda C_\phi$ is recurrent, then $|\lambda|>\mu^{(1-2\nu)/2}$. In addition, the inverse operator $\lambda^{-1}C_{\phi_{-1}}$ must also be recurrent. The attractive fixed point of $\phi_{-1}$ is $-1$ and $\phi'_{-1}(-1)=\mu$. Therefore, we must also have $|\lambda^{-1}|>\mu^{(1-2\nu)/2}$. Thus the conditions on $\lambda$ are necessary for $\lambda C_\phi$ to be recurrent. \end{proof} \begin{theorem} Let $\phi$ be a parabolic non-automorphism of the unit disk. Then the operator $\lambda C_\phi$ is never recurrent on any $\mathcal{S}_\nu$. \end{theorem} \begin{proof} The singleton $\{0\}$ is a component of the spectrum of $\lambda C_\phi$, and every component of the spectrum of a recurrent operator must intersect the unit circle; hence $\lambda C_\phi$ is not recurrent. \end{proof} \end{document}
\begin{document} \title [Two remarks on sums of squares with rational coefficients] {Two remarks on sums of squares\\with rational coefficients} \author {Jose Capco} \address {Research Institute for Symbolic Computation, Johannes Kepler University \\ Altenberger Stra\ss e 69, 4040 Linz, Austria} \email {[email protected]} \author {Claus Scheiderer} \address {Fachbereich Mathematik und Statistik, Universit\"at Konstanz, 78457 Konstanz, Germany} \email {[email protected]} \begin{abstract} There exist homogeneous polynomials $f$ with ${\mathbb{Q}}$-coefficients that are sums of squares over ${\mathbb{R}}$ but not over ${\mathbb{Q}}$. The only systematic construction of such polynomials that is known so far uses as its key ingredient totally imaginary number fields $K/{\mathbb{Q}}$ with specific Galois-theoretic properties. We first show that one may relax these properties considerably without losing the conclusion, and that this relaxation is sharp at least in a weak sense. In the second part we discuss the open question whether any $f$ as above necessarily has a (non-trivial) real zero. In the minimal open cases $(3,6)$ and $(4,4)$, we prove that all examples without a real zero are contained in a thin subset of the boundary of the sum of squares cone. \end{abstract} \maketitle \section{Introduction} Let $f\in{\mathbb{Q}}[x_1,\dots,x_n]$ be a polynomial with rational coefficients. Given a field extension $E/{\mathbb{Q}}$ we say that $f$ is a sum of squares over $E$, or briefly that $f$ is \emph{$E$-sos}, if there is a polynomial identity $f=\sum_{i=1}^rp_i^2$ with $p_1,\dots,p_r\in E[x_1,\dots,x_n]$. Suppose that $f$ is ${\mathbb{R}}$-sos. Then does it follow that $f$ is ${\mathbb{Q}}$-sos? This question is certainly of theoretical interest, but is also relevant from a practical perspective. Indeed, it is one particular instance of the general problem of finding exact (rather than floating-point) positivity certificates. 
In general, the answer is negative, as was shown in \cite{sch:rat}. In fact, an explicit construction was presented there of ${\mathbb{Q}}$-polynomials $f$ that are ${\mathbb{R}}$-sos but not ${\mathbb{Q}}$-sos. From this negative answer, a series of natural follow-up questions arises; see Section~5 in \cite{sch:rat}. In this article we discuss two of them. Throughout we assume that our polynomials are forms, i.e.\ they are homogeneous. Recall that the basic construction in \cite{sch:rat} starts out with a totally imaginary number field $K/{\mathbb{Q}}$ of even degree $2d$ and a linear form $l\in K[x_1,\dots,x_n]$ with sufficiently general coefficients. The norm form $f=N_{K/{\mathbb{Q}}}(l)$ is a degree~$2d$ form over ${\mathbb{Q}}$ and is ${\mathbb{R}}$-sos. Let $K^{\textrm{gal}}/{\mathbb{Q}}$ be the Galois hull of $K/{\mathbb{Q}}$, and let $G=\Gal(K^{\textrm{gal}}/{\mathbb{Q}})$ act on the set $X=\Hom(K,{\mathbb{C}})$ of complex embeddings of $K$ (note $|X|=2d$). According to \cite{sch:rat}, if $G$ is sufficiently ``big'' as a subgroup of the symmetric group $S_{2d}$, then $f$ cannot be ${\mathbb{Q}}$-sos. For example, it is enough that $G$ is doubly transitive on $X$. In Section~2 below we show that a condition much weaker than $2$-transitivity suffices to make the construction work (condition $(**)$), and that this relaxation is sharp at least in a weak sense. Moreover we present empirical data showing that up to degree $2d=16$, all transitive group actions satisfying this condition do actually arise from a number field $K/{\mathbb{Q}}$ as before. All known examples of ${\mathbb{R}}$-sos forms $f\in{\mathbb{Q}}[x_1,\dots,x_n]$ that are not ${\mathbb{Q}}$-sos have real zeros. In fact, the proofs for the impossibility of writing $f$ as a sum of squares over ${\mathbb{Q}}$ make crucial use of the existence of these real zeros. Therefore it is natural to ask if there can be any examples of such forms without any (non-trivial) real zero.
See also Question 5.1 in \cite{sch:rat}. Let $f\in{\mathbb{Q}}[x_1,\dots,x_n]$ be a form with $\deg(f)=2d$ that is ${\mathbb{R}}$-sos and has strictly positive values. In Section~3 below we discuss the first open case, namely $(n,2d)=(3,6)$. We conjecture that $f$ is ${\mathbb{Q}}$-sos in this case, and we prove this conjecture for all $f$ outside of a Zariski-thin subset of the boundary of the sum of squares cone $\Sigma_6$. An analogous result holds in the other ``Hilbert case'' $(n,2d)=(4,4)$. In Section~3 we use some simple facts on Gram spectrahedra of forms. For the reader's convenience we have included a brief introduction to Gram spectrahedra in Section~4, together with proofs or references for the facts that we use. We remark that Question 5.3 from \cite{sch:rat} has recently been given a negative answer by Laplagne \cite{la}. This question asked whether $f$ is ${\mathbb{Q}}$-sos if it becomes $K$-sos in an odd degree extension $K/{\mathbb{Q}}$. Laplagne constructs examples of degree~$4$ polynomials $f\in{\mathbb{Q}}[x_1,\dots,x_4]$ that are sums of squares over ${\mathbb{Q}}(\root3\of2)$ but not over~${\mathbb{Q}}$. \emph{Acknowledgement}: We are indebted to J\"urgen Kl\"uners for sharing valuable information on the Database for Number Fields \cite{dnf} with us. \section{Conditions on the Galois group} \begin{lab}\label{recall} Let $K/{\mathbb{Q}}$ be a totally imaginary number field of degree $[K:{\mathbb{Q}}]=2d \ge4$, let $K^{\textrm{gal}}/{\mathbb{Q}}$ be its Galois hull and $G=\Gal(K^{\textrm{gal}}/{\mathbb{Q}})$ the Galois group. The group $G$ acts transitively on the set $X=\Hom(K,{\mathbb{C}})$ of cardinality $|X|=2d$. (This action can be identified with the $G$-action on the roots of the minimal polynomial of a primitive element of $K/{\mathbb{Q}}$.) We fix once and for all an embedding $K\subseteq{\mathbb{C}}$ and denote complex conjugation (restricted to $K^{\textrm{gal}}$) by $\tau\in G$.
Since $K$ has no real place, $\tau$ acts on $X$ without fixed point, i.e.\ as a product of $d$ pairwise disjoint transpositions. Let $l\in K[x_1,x_2,x_3]$ be a linear form, and let $f=N_{K/{\mathbb{Q}}}(l)$ be the $K/{\mathbb{Q}}$-norm of $l$. So $f\in{\mathbb{Q}}[x_1,x_2,x_3]$, and $f$ is the product of the $2d$ Galois conjugates of~$l$. The form $f$ is a sum of two squares of forms over the field ${\mathbb{R}}$ of real numbers. The following was proved in \cite{sch:rat} (Sect.~2): \end{lab} \begin{thm}\label{jemsthm} Suppose that the $G$-action on $X$ is $2$-transitive, or more generally satisfies condition $(*)$ below. Then, for $l$ a linear form with sufficiently general coefficients, the form $f=N_{K/{\mathbb{Q}}}(l)$ fails to be ${\mathbb{Q}}$-sos. \end{thm} ``Sufficiently general coefficients'' means that no three of the $2d$ Galois conjugates of $l$ have a common nontrivial (complex) zero. For example, when $\alpha$ is a primitive element for $K/{\mathbb{Q}}$, the form $l=x_1+\alpha x_2+\alpha^2x_3$ has sufficiently general coefficients in this sense. Condition $(*)$ is the following condition on the Galois action on~$X$: \begin{itemize} \item[$(*)$] \emph{For any $x,\,y\in X$ with $x\ne y$ there exists $z\in X$ and $\sigma\in G$ such that $x=\sigma z$ and $y=\sigma\tau z$.} \end{itemize} Condition $(*)$ requires, in other words, that every $2$-element subset $\{x,y\}\subseteq X$ is $G$-conjugate to a subset of the form $\{z,\tau z\}$ with $z\in X$. This is a weaker condition than $2$-transitivity, and is in fact strictly weaker (see \cite{sch:rat} Remark 2.9 or \ref{empdata} below). \begin{lab}\label{charnumber} We are going to show that Theorem \ref{jemsthm} remains true under a condition that is still much more general than condition $(*)$. To this end let $G$ be a finite group, and let $X$ be a transitive and faithful $G$-set (i.e.\ only $1\in G$ acts as the identity). 
Let $t\in G$ be a fixed-point-free involution, i.e.\ $t$ acts on $X$ without fixed point and satisfies $t^2=1$. This forces $|X|$ to be even. For $x\in X$ let \begin{align*} M_t(x) & =\ \{y\in X\colon\exists\, z\in X,\ \exists\, g\in G\text{ such that } x=gz,\ y=gtz\} \\ & =\ \{gtg^{-1}x\colon g\in G\}, \end{align*} be the ``orbit'' of $x$ under the conjugacy class of $t$ in $G$. It is easy to see that $M_t(hx)=hM_t(x)$ for any $h\in G$, $x\in X$, and that $x\notin M_t(x)$. Therefore the cardinality $|M_t(x)|$ is independent of $x\in X$ and depends only on the conjugacy class of $t$ in $G$. We write $c(G,X,t):=|M_t(x)|$ and call this the \emph{characteristic number} of the triple $(G,X,t)$. Property $(*)$ above says $M_t(x)=X\smallsetminus\{x\}$ for $x\in X$, or in other words, $(*)$ says $c(G,X,t)=|X|-1$. \end{lab} \begin{dfn} Let the finite group $G$ act transitively and faithfully on the set $X$, and let $t\in G$ be an involution without fixed points in $X$. We say that the triple $(G,X,t)$ has property $(**)$, if $c(G,X,t)>\frac12|X|$. \end{dfn} Clearly, property $(*)$ implies property $(**)$. We return to the setting in \ref{recall}. So let again $K\subseteq{\mathbb{C}}$ be a totally imaginary field extension of ${\mathbb{Q}}$, of degree $[K:{\mathbb{Q}}]=2d\ge4$ and with Galois hull $K^{\textrm{gal}}/{\mathbb{Q}}$. We consider the action of $G=\Gal(K^{\textrm{gal}}/{\mathbb{Q}})$ on $X=\Hom(K,{\mathbb{C}})$. Let $\tau\in G$ be complex conjugation. Theorem \ref{jemsthm} holds when condition $(*)$ is replaced by the weaker condition~$(**)$: \begin{thm}\label{suff} Suppose that the triple $(G,X,\tau)$ satisfies condition $(**)$. Then, for $l$ a linear form with sufficiently general coefficients, the form $f=N_{K/{\mathbb{Q}}}(l)$ fails to be ${\mathbb{Q}}$-sos. \end{thm} \begin{proof} Let $l\in K[x_1,x_2,x_3]$ be a linear form, let $l_1,\dots,l_{2d}$ be its $G$-conjugates, and assume that no three of them have a common nontrivial complex zero.
For $i=1,\dots,2d$ let $L_i\subseteq{\mathbb{P}}^2({\mathbb{C}})$ be the projective zero set of $l_i$. For $i\ne j$ in $\{1,\dots,2d\}$ let $M_{ij}=L_i\cap L_j$, the intersection point of $L_i$ and $L_j$. Let $Q:=\{M_{ij}\colon1\le i<j\le2d\}$, and let $Q_0\subseteq Q$ be the subset of all real points in $Q$. By the general position assumption, $Q_0$ consists just of the $d$ intersection points $L_i\cap\overline L_i$ ($1\le i\le2d$). As in \cite{sch:rat}, we can identify the $2$-element subsets of $X$ with the points in $Q$, by letting $\{i,j\}\subseteq X$ correspond to $M_{ij}\in Q$. This identifies the subsets $\{x,\tau x\}$ ($x\in X$) with the points in $Q_0$. Condition $(**)$ therefore says that one (in fact, any) of the lines $L_1,\dots,L_{2d}$ contains at least $d+1$ points that are $G$-conjugate to a point in $Q_0$. Assume that $f$ is ${\mathbb{Q}}$-sos, i.e.\ $f=\sum_\nu p_\nu^2$ with forms $p_\nu\in{\mathbb{Q}}[x_1,x_2,x_3]$. Since $f(\xi)=0$ for every $\xi\in Q_0$ and the points of $Q_0$ are real, we have $p_\nu(\xi)=0$ for every $\nu$ and every $\xi\in Q_0$. Since the $p_\nu$ have rational coefficients, they also vanish at every $G$-conjugate of a point of $Q_0$. Therefore every $p_\nu$ vanishes in at least $d+1$ different points of every line $L_j$. Since $\deg(p_\nu)=d$, this implies that the $p_\nu$ vanish identically on each line $L_j$; hence each $l_j$ divides $p_\nu$, and since $\deg(p_\nu)=d<2d$, it follows that $p_\nu=0$, a contradiction. \end{proof} \begin{rem}\label{weaklysharp} In a weak sense at least, condition $(**)$ is sharp for Theorem \ref{suff}. Indeed, $(**)$ requires $c(G,X,t)\ge d+1$. If we allow $c(G,X,t)=d$, then Theorem \ref{suff} fails in general. An example showing this is provided by the dihedral group $G$ of order~$8$, acting on the vertices $X$ of a square by symmetries of the square (so $2d=4$ here). If $t\in G$ is one of the two fixed-point-free reflections then $M_t(x)$ consists of the two vertices adjacent to the vertex $x\in X$, and so $c(G,X,t)=2=d$.
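This dihedral computation is small enough to verify by brute force. The following Python sketch (plain permutation arithmetic; the function names are ours, not from any cited source) enumerates the dihedral group of order $8$ acting on the four vertices of a square and recomputes $c(G,X,t)$ for each fixed-point-free involution $t$:

```python
# Vertices of the square, labelled cyclically 0-1-2-3.
X = (0, 1, 2, 3)
e = (0, 1, 2, 3)                # identity; permutations stored as tuples p with p[x] = image of x

def compose(g, h):
    """Composition (g o h)(x) = g(h(x))."""
    return tuple(g[h[x]] for x in X)

def inverse(g):
    inv = [0] * len(X)
    for x in X:
        inv[g[x]] = x
    return tuple(inv)

# Dihedral group of order 8: four rotations and four reflections of the square.
r = (1, 2, 3, 0)                # rotation by 90 degrees
s = (3, 2, 1, 0)                # the edge reflection 0<->3, 1<->2
rotations = [e]
for _ in range(3):
    rotations.append(compose(r, rotations[-1]))
G = rotations + [compose(g, s) for g in rotations]
assert len(set(G)) == 8

def is_fpf_involution(t):
    """t^2 = 1 and t moves every point of X."""
    return compose(t, t) == e and all(t[x] != x for x in X)

def char_number(t):
    """c(G, X, t) = |M_t(x)| with M_t(x) = {g t g^{-1} x : g in G}."""
    sizes = {len({compose(compose(g, t), inverse(g))[x] for g in G}) for x in X}
    assert len(sizes) == 1      # |M_t(x)| is independent of x, as it must be
    return sizes.pop()

fpf = [t for t in G if is_fpf_involution(t)]
for t in fpf:
    print(t, char_number(t))
```

The two fixed-point-free reflections indeed give $c(G,X,t)=2=d$, while the half-turn gives $c=1$; in particular no fixed-point-free involution of this action satisfies $(**)$, which requires $c>\frac12|X|=2$, in accordance with the degree-$4$ statistics below.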
But when $K/{\mathbb{Q}}$ is an extension with $[K:{\mathbb{Q}}]=4$ and $\Gal(K^{\textrm{gal}}/{\mathbb{Q}})\cong G$, the construction in \ref{recall} always produces forms that are ${\mathbb{Q}}$-sos. Indeed, this is a consequence of \cite{sch:rat} Theorem~4.1. \end{rem} \begin{lab} To complete this discussion, we add some empirical observations. We consider finite transitive (faithful) group actions $(G,X)$ up to isomorphism, i.e.\ $G$ is a finite group that acts transitively and faithfully on the set $X$. The notion of isomorphism $(G_1,X_1)\to(G_2,X_2)$ is obvious. For small degrees, the isomorphism classes of such transitive group actions are well known, and corresponding data is both available on the web and implemented in computer algebra systems. For example, the transitive permutation groups of degree $\le30$ are contained in a Magma database \cite{mag}. Given a finite extension $K\subseteq{\mathbb{C}}$ of ${\mathbb{Q}}$ with Galois closure $K^{\textrm{gal}}\subseteq{\mathbb{C}}$, we say that $K$ realizes the transitive group $(G,X)$ if there is an isomorphism $(G,X)\isoto(\Gal(K^{\textrm{gal}}/{\mathbb{Q}}),\> \Hom(K,{\mathbb{C}}))$. Given moreover an involution $t\in G$, we say that $K$ realizes the triple $(G,X,t)$ if such an isomorphism can be found that sends $t$ to (the restriction of) complex conjugation. Of course, the realization question for $(G,X)$ is a strong version of the inverse Galois problem, which becomes even stronger when an involution $t\in G$ is fixed. Since we are interested in the group actions of totally imaginary number fields, we only consider transitive groups $(G,X)$ that contain a fixed-point-free (\emph{fpf}) involution. \end{lab} \begin{lab}\label{empdata} The following information was obtained with the help of the Magma computer algebra system \cite{mag}. For $2d=4$ there are 5 transitive groups $(G,X)$ (up to isomorphism). Only the $2$-transitive groups $S_4$ and $A_4$ satisfy condition $(**)$.
For $2d=6$ there are 16 transitive groups $(G,X)$, $11$ of which contain a fpf involution. Two of these $11$ are $2$-transitive. Two other groups contain a fpf involution which satisfies $(*)$, namely the transitive groups $6{\mathrm{T}}8$ and $6{\mathrm{T}}11$. (The first of them was already discussed in \cite{sch:rat} 2.9.) No further group satisfies $(**)$. For $2d=8$ we have the first examples of transitive groups which satisfy $(**)$, but not $(*)$, with respect to some involution. Note that in general, a transitive group $(G,X)$ contains several conjugacy classes of fpf involutions $t$, whose characteristic numbers $c(G,X,t)$ will usually be different. For $2d=10$ or $14$, condition $(**)$ is not more general than $(*)$. But in degrees $12,\,16,\,18$ and $20$ there are many transitive groups that have at least one involution satisfying $(**)$, but no involution satisfying $(*)$. Combined with the remarks in \ref{realization} below, this shows that Theorem \ref{suff} yields quite a few new examples. Precise statistics up to degree~20 are provided in the following table: $$\begin{array}{r|rrrr} n & (1) & (2) & (3) & (4) \\ \hline 4 & 5 & 2 & - & - \\ 6 & 11 & 2 & 2 & - \\ 8 & 50 & 7 & 2 & 3 \\ 10 & 27 & 3 & 6 & - \\ 12 & 282 & 5 & 21 & 50 \\ 14 & 44 & 2 & 9 & - \\ 16 & 1954 & 13 & 35 & 120 \\ 18 & 678 & 2 & 83 & 132 \\ 20 & 1020 & 4 & 126 & 197 \end{array}$$ Here row $n$ contains the numbers of (isomorphism classes of) transitive groups $(G,X)$ with $|X|=n$ that \begin{enumerate} \item contain a fpf involution, \item contain a fpf involution and are $2$-transitive, \item satisfy $(*)$ for some fpf involution but are not $2$-transitive, \item satisfy $(**)$ for some fpf involution, but don't satisfy $(*)$ for any fpf involution.
\end{enumerate} \end{lab} \begin{lab}\label{realization} By consulting the Kl\"uners-Malle database for number fields (\cite{dnf}, see also \cite{km}), one can verify that for every transitive group $(G,X)$ of degree $|X|\le16$ and every fpf involution $t\in G$, the triple $(G,X,t)$ is realized by some number field $K/{\mathbb{Q}}$. We are grateful to J\"urgen Kl\"uners for confirming this last assertion to us. A list of Galois realizations of all triples $(G,X,t)$ with $|X|\le16$ that satisfy $(**)$, extracted from \cite{dnf}, is available under \cite{zdata}. \end{lab} \section{Strictly positive forms in Hilbert's sos cones} \begin{lab} We consider Open Problem~5.1 from \cite{sch:rat}. Let $f\in{\mathbb{Q}}[x]={\mathbb{Q}}[x_1,\dots,x_n]$ be a form of degree $\deg(f)=2d$ which is ${\mathbb{R}}$-sos, and assume that $f$ is strictly positive (i.e.\ $f(a)>0$ for $0\ne a\in{\mathbb{R}}^n$). Then, does it follow that $f$ is ${\mathbb{Q}}$-sos? Although we expect a negative answer in general, no argument or example deciding this question is known so far. Here we are looking at the first open cases. When $n\le2$ or $2d=2$, $f$ is clearly ${\mathbb{Q}}$-sos, and the same is true for $(n,2d)=(3,4)$ according to \cite{sch:rat} Theorem 4.1. We therefore consider the cases $(n,2d)=(3,6)$ or $(4,4)$. They may be called the Hilbert cases, alluding to Hilbert's celebrated 1888 theorem \cite{hi}, by which $(3,6)$ and $(4,4)$ are the minimal cases for which there exist nonnegative forms (over ${\mathbb{R}}$) that are not sums of squares. In all that follows, the two cases $(3,6)$ and $(4,4)$ are completely parallel. For simplicity we will focus on the $(3,6)$ case, and will point out at the end how to adapt the results to the $(4,4)$ case. Our main result in the $(3,6)$ case is Theorem \ref{posternsextq} below.
Roughly it says that if there exists a strictly positive form $f$ over~${\mathbb{Q}}$ which is ${\mathbb{R}}$-sos but not ${\mathbb{Q}}$-sos, then $f$ lies in a thin subset of the boundary of the sos cone. \end{lab} \begin{lab} Let $x=(x_1,x_2,x_3)$, and consider the polynomial ring $A={\mathbb{R}}[x]={\mathbb{R}}[x_1,x_2,x_3]$ with the natural grading $A=\bigoplus_{d\ge0}A_d$. We say that a form $f\in A$ is \emph{strictly positive}, denoted $f>0$, if $f(a)>0$ for any $0\ne a\in{\mathbb{R}}^3$. Recall that $A_d$ is a finite-dimensional real vector space and thus carries a unique topology. For even $d\ge0$ let $\Sigma_d\subseteq A_d$ be the set of all sums of squares, a closed convex cone of full dimension (i.e.\ with non-empty interior relative to $A_d$). Let $f\in{\mathbb{Q}}[x]={\mathbb{Q}}[x_1,x_2,x_3]$ be a strictly positive form of degree~$6$ which is ${\mathbb{R}}$-sos, i.e.\ $f\in{\mathbb{Q}}[x]_6\cap\Sigma_6$. If $f$ lies in the interior of $\Sigma_6$ then $f$ is a sum of squares over ${\mathbb{Q}}$ (\cite{hr} Theorem 1.2 or \cite{sch:rat} Lemma 4.6). So we assume that $f$ lies on the boundary $\partial\Sigma_6$. The Zariski closure $\partial^a\Sigma_6$ of $\partial\Sigma_6$ is called the \emph{algebraic boundary} of $\Sigma_6$, and is known to be a union of two irreducible hypersurfaces inside the space $A_6$. Namely, $$\partial^a\Sigma_6\>=\>\Delta\cup V$$ where $\Delta$ is the discriminant hypersurface, consisting of all forms with at least one (complex) singularity, and $V$ is the Zariski closure of the set of all sums of three squares of forms. The degree of $\Delta$ resp.\ $V$ is $75$ resp.\ $83200$. See \cite{bhors} for proofs of these facts (we will not make use of the precise degrees). \end{lab} \begin{lab} For basic notions from convexity we refer to standard texts like \cite{we} or \cite{ro}.
Let $A_6^{\scriptscriptstyle\vee}=\Hom(A_6,{\mathbb{R}})$ be the dual space of the linear space $A_6$, and let $\Sigma_6^*=\{\alpha\in A_6^{\scriptscriptstyle\vee}\colon\alpha(\Sigma_6) \ge0\}$ be the dual cone of $\Sigma_6$. Given $f\in\Sigma_6$, the \emph{normal cone} of $\Sigma_6$ at $f$ is $$N_f\>=\>N_f(\Sigma_6)\>=\>\{\alpha\in\Sigma_6^*\colon \alpha(f)=0\},$$ a closed convex cone contained in $\Sigma_6^*$. For $f\in\partial\Sigma_6$ we have $N_f\ne\{0\}$. Moreover, $N_f$ is then the (closed) convex cone generated by the extreme rays ${\mathbb{R}}_{\scriptscriptstyle+}\alpha$ of $\Sigma_6^*$ satisfying $\alpha(f)=0$, according to the Krein-Milman theorem for closed pointed convex cones. \end{lab} \begin{lab}\label{ualpha} Let $f\in\partial\Sigma_6$ be strictly positive. Let us recall how Blekherman \cite{bl} proves that $f$ is a sum of three squares of forms. Let $\alpha\in N_f$ span an extreme ray. Since $f$ is strictly positive, $\alpha$ cannot be evaluation at a point $u\in{\mathbb{R}}^3$. The symmetric bilinear form $$b_\alpha\colon A_3\times A_3\to{\mathbb{R}},\quad(p,q)\mapsto\alpha(pq)$$ is positive semidefinite. Let $U_\alpha\subseteq A_3$ be its kernel, so $$U_\alpha\>=\>\{p\in A_3\colon pA_3\subseteq\ker(\alpha)\}\>=\> \{p\in A_3\colon\alpha(p^2)=0\}.$$ By \cite{bl}, Corollary 2.3 and Lemma 2.4, the forms in $U_\alpha$ have no common (real or complex) projective zero in ${\mathbb{P}}^2$, so the projective zero set $V(U_\alpha)$ is empty. By assumption, $f$ has at least one sum of squares representation $f=\sum_{i=1}^rp_i^2$ (with $p_i\in A_3$). For any such identity, $\alpha(f)=0$ and $\alpha(p_i^2)\ge0$ force $\alpha(p_i^2)=0$ for all~$i$, so the $p_i$ lie in $U_\alpha$. According to \cite{bl} Theorem 2.7, the linear space $U_\alpha$ has dimension~$3$. Therefore $f$ can be written as a sum of 3 squares. In particular, any strictly positive $f\in\partial\Sigma_6$ lies in the hypersurface~$V$. Moreover, Blekherman proves that $f$ is not a sum of two squares (\cite{bl} Corollary~1.3).
Hence for every sum of squares representation $f=\sum_{i=1}^rp_i^2$ (with $p_i\in A_3$), the linear span of the forms $p_1,\dots,p_r$ is equal to $U_\alpha$. By Lemma \ref{2repsameu} (see Appendix), this implies that $f$ has essentially only one such representation: \end{lab} \begin{cor}\label{unique} Let $f\in\partial\Sigma_6$ be strictly positive. Then $f$ is a sum of three squares, and up to orthogonal equivalence, there is only one sum of squares representation of~$f$. \qed \end{cor} In particular, the Gram spectrahedron of $f$ is reduced to a single point. \begin{lab}\label{gorenstein} We keep assuming that $f\in\partial\Sigma_6$ is strictly positive, and that $\alpha\in N_f$ spans an extreme ray. Let $U_\alpha$ be the kernel of $b_\alpha$, as in \ref{ualpha}, and let $I\subseteq A$ be the (homogeneous) ideal generated by $U_\alpha$. Since $V(I)=\varnothing$ and $\dim(U_\alpha)=3$, the ideal $I$ is a complete intersection. Therefore, according to \cite{egh} Theorem~CB8, the graded ring $A/I$ is a $0$-dimensional Gorenstein ring with socle degree~$6$, see also \cite{bl} Theorem 2.5. In particular, $\ker(\alpha)\subseteq A_6$ is the degree~$6$ part of $I$, i.e.\ $\ker(\alpha)=I_6= U_\alpha A_3$. \end{lab} \begin{cor}\label{normalcone} For every strictly positive form $f$ in $\partial\Sigma_6$, the normal cone $N_f(\Sigma_6)$ has dimension one, i.e.\ it is a single ray. \end{cor} \begin{proof} Let ${\mathbb{R}}_{\scriptscriptstyle+}\alpha$, ${\mathbb{R}}_{\scriptscriptstyle+}\beta$ be two extreme rays contained in $N_f$. It suffices to show that the two rays coincide. By the discussion in \ref{ualpha} we have $U_\alpha=U_\beta$. But this implies $\ker(\alpha)=\ker(\beta)$, so $\alpha$ and $\beta$ are positive scalar multiples of each other. \end{proof} \begin{cor}\label{tangspace} Let $f\in\partial\Sigma_6$ be strictly positive, and assume that $f$ is a nonsingular point of the hypersurface $V$.
Let $\alpha\in A_6^{\scriptscriptstyle\vee}$ span the normal cone $N_f(\Sigma_6)$. Then the kernel of $\alpha$ coincides with the tangent space to $V$ at~$f$. \end{cor} \begin{proof} Let $f=p_1^2+p_2^2+p_3^2$ be the unique sos representation of $f$. Then $U_\alpha=\spn(p_1,p_2,p_3)$ and $\ker(\alpha)=U_\alpha A_3$ (\ref{ualpha}, \ref{unique}). Consider the morphism of algebraic varieties (affine spaces) $\phi\colon A_3\times A_3\times A_3\to V\subseteq A_6$, $(q_1,q_2,q_3)\mapsto q_1^2+q_2^2+q_3^2$, and its tangent map at the triple $(p_1,p_2,p_3)$. The image of this tangent map is $p_1A_3+p_2A_3+p_3A_3=U_\alpha A_3\subseteq A_6$, and this subspace is contained in $T_f(V)$. We conclude $\ker(\alpha)\subseteq T_f(V)$, and equality must hold since both are codimension one subspaces of $A_6$. \end{proof} The following result summarizes most of what we discussed so far: \begin{cor}\label{summary} Let $f\in\partial\Sigma_6$ be strictly positive. Then the following hold: \begin{itemize} \item[(a)] There are $p_1,\,p_2,\,p_3\in A_3$, linearly independent, with $f=p_1^2+p_2^2+p_3^2$, and up to orthogonal equivalence, this is the only sum of squares representation of~$f$. \end{itemize} Let $U=\spn(p_1,p_2,p_3)\subseteq A_3$, let $I\subseteq A$ be the ideal generated by $U$. \begin{itemize} \item[(b)] The normal cone $N_f(\Sigma_6)$ is a ray: $N_f(\Sigma_6)= {\mathbb{R}}_{\scriptscriptstyle+}\alpha$ for some~$\alpha$. \item[(c)] $A/I$ is a Gorenstein graded algebra with socle degree~$6$. \item[(d)] $I_6=\ker(\alpha)=UA_3$. \end{itemize} If in addition, $f$ is a nonsingular point of the hypersurface $V$, then \begin{itemize} \item[(e)] $I_6=\ker(\alpha)$ is the tangent space of $V$ at~$f$. \qed \end{itemize} \end{cor} Much of Corollary \ref{summary} was already known from \cite{bl}. New are (b), (e) and the uniqueness part of~(a). 
\begin{lab}\label{tangspaceratl} The hypersurface $V\subseteq A_6$ is defined by a homogeneous polynomial $F$ (of degree $83200$) in the $\binom{8}{2}=28$ coefficients of a ternary sextic. Since (the complexification of) $V$ is the Zariski closure of all sums of three squares of cubic forms, this hypersurface is defined over ${\mathbb{Q}}$. So the polynomial $F$ can be taken to have coefficients in ${\mathbb{Q}}$. Let $f\in\partial\Sigma_6$ be strictly positive, so $f\in V$, and assume that $f$ has ${\mathbb{Q}}$-coefficients. By Corollary \ref{unique}, $f$ has a unique sos representation over ${\mathbb{R}}$. We are going to show that this representation is defined over ${\mathbb{Q}}$, provided that $f$ is a smooth point of the hypersurface~$V$. Indeed, since $f$ has ${\mathbb{Q}}$-coefficients, the tangent space $T_f(V) \subseteq A_6$ of $V$ at $f$ is the kernel of a linear form with ${\mathbb{Q}}$-coefficients. By Corollary \ref{summary}(e), we conclude that the normal cone $N_f(\Sigma_6)$ is generated by a linear form $\alpha$ with ${\mathbb{Q}}$-coefficients. Hence the $3$-dimensional subspace $U_\alpha=\{p\in A_3\colon A_3p\subseteq\ker(\alpha)\}$ of $A_3$ has a basis consisting of ${\mathbb{Q}}$-polynomials. According to Lemma \ref{basic} in the Appendix, this implies that the unique matrix (or tensor) in $\Gram(f)$ has ${\mathbb{Q}}$-coefficients. Thus, $f$ is a sum of squares over~${\mathbb{Q}}$. Altogether this proves: \end{lab} \begin{thm}\label{posternsextq} Let $f\in\Sigma_6$ be a strictly positive form with coefficients in ${\mathbb{Q}}$. If $f$ fails to be a sum of squares over ${\mathbb{Q}}$, then $f$ lies in the boundary of $\Sigma_6$, and $f$ is a singular point of the hypersurface $V$. \qed \end{thm} In view of this theorem, it would be very interesting to see a characterization of the forms in $V$ that are singular as points of~$V$. Unfortunately we don't know how to approach this question.
A direct computation of the singularities of $V$ seems hopeless due to the complexity of the equation of $V$ (a homogeneous polynomial in 28~variables of degree 83200). \begin{rems} \hfil 1.\ Beware that a strictly positive form $f\in\Sigma_6$ which is a sum of three squares will not in general lie on the boundary of $\Sigma_6$. For example, the symmetric form $f=x_1^6+x_2^6+x_3^6$ is strictly positive, and it is easily seen that $f\in\text{int}(\Sigma_6)$. 2.\ There is another aspect that makes the form $f=x_1^6+x_2^6+x_3^6$ interesting. Indeed, $f$ can be written as a sum of 3 squares in more than one way, for example $$f\>=\>(x_1^3-2x_1x_2^2)^2+(2x_1^2x_2-x_2^3)^2+x_3^6.$$ By Corollary \ref{unique}, this directly implies that $f$ lies in the interior of $\Sigma_6$. Using the arguments from the proof of \ref{tangspace}, we can also conclude that $f$ is a singular point of~$V$. 3.\ According to Blekherman \cite{bl}, examples of strictly positive forms in $\partial\Sigma_6$ can be constructed as follows. Let $p_1,\,p_2\in A_3$ be two cubics which intersect transversely in nine projective ${\mathbb{R}}$-points. For example, we may take $p_1=x_1(x_1^2-x_3^2)$ and $p_2=x_2(x_2^2-x_3^2)$, intersecting in nine points with affine representatives $$\xi_1=(1,1,1),\quad\xi_2=(-1,1,1),\quad\xi_3=(1,-1,1),\quad \xi_4=(1,1,-1),$$ $$\xi_5=(0,1,1),\quad\xi_6=(0,1,-1),\quad\xi_7=(1,0,1),\quad \xi_8=(1,0,-1)$$ and $$\xi_9=(0,0,1).$$ The Cayley-Bacharach relation is the unique (up to scaling) linear relation between the nine values $p(\xi_i)$ ($1\le i\le9$) of a general cubic~$p$. For the representatives chosen here it is $\sum_{i=1}^9u_ip(\xi_i)=0$ where $$(u_1,\dots,u_9)\>=\>(1,1,1,-1,-2,2,-2,2,4).$$ Following \cite{bl} Theorem 6.1 we consider 9-tuples $a=(a_1,\dots,a_9)$ of nonzero real numbers with $a_i<0$ for precisely one index $i$, that satisfy the relation $\sum_{i=1}^9\frac{u_i^2}{a_i}=0$.
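The Cayley-Bacharach relation can be recomputed directly from the listed representatives. The following pure-Python check (our own illustration, using exact rational elimination) recovers it as the one-dimensional kernel of the evaluation matrix; note that the individual signs of the $u_i$ depend on the chosen affine representatives, since cubics have odd degree.

```python
from fractions import Fraction

# the nine affine representatives listed above
pts = [(1, 1, 1), (-1, 1, 1), (1, -1, 1), (1, 1, -1),
       (0, 1, 1), (0, 1, -1), (1, 0, 1), (1, 0, -1), (0, 0, 1)]

# exponents of the 10 monomials spanning the ternary cubics A_3
mons = [(i, j, 3 - i - j) for i in range(4) for j in range(4 - i)]

# evaluation matrix: one row per monomial, one column per point
rows = [[Fraction(x**e[0] * y**e[1] * z**e[2]) for (x, y, z) in pts]
        for e in mons]

def kernel(rows, ncols):
    # basis of the kernel, via exact Gaussian elimination (RREF)
    rows = [r[:] for r in rows]
    pivots, r = [], 0
    for c in range(ncols):
        p = next((i for i in range(r, len(rows)) if rows[i][c]), None)
        if p is None:
            continue
        rows[r], rows[p] = rows[p], rows[r]
        rows[r] = [v / rows[r][c] for v in rows[r]]
        for i in range(len(rows)):
            if i != r and rows[i][c]:
                t = rows[i][c]
                rows[i] = [a - t * b for a, b in zip(rows[i], rows[r])]
        pivots.append(c)
        r += 1
    basis = []
    for fc in (c for c in range(ncols) if c not in pivots):
        v = [Fraction(0)] * ncols
        v[fc] = Fraction(1)
        for pr, pc in enumerate(pivots):
            v[pc] = -rows[pr][fc]
        basis.append(v)
    return basis

ker = kernel(rows, 9)
assert len(ker) == 1                       # the relation is unique up to scaling
u = [c * 4 / ker[0][8] for c in ker[0]]    # normalize so that u_9 = 4
assert [abs(c) for c in u] == [1, 1, 1, 1, 2, 2, 2, 2, 4]

# the tuple a discussed in the text satisfies sum u_i^2 / a_i = 0
a = [1, 1, 1, 1, 4, 4, 4, 4, -2]
assert sum(c * c / Fraction(ai) for c, ai in zip(u, a)) == 0
```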
For any such tuple $a$ the linear form $$\alpha\colon A_6\to{\mathbb{R}},\quad\alpha(f)\>=\>\sum_{i=1}^9a_if(\xi_i)$$ spans an extreme ray of $\Sigma_6^*$ and is not evaluation at a point. In our example, $a=(1,1,1,1,4,4,4,4,-2)$ is such a tuple, giving rise to $\alpha\in\Sigma_6^*$ as above. The kernel $U_\alpha$ of the psd bilinear form $b_\alpha$ has $\dim(U_\alpha)=3$ and is spanned by $p_1,\,p_2$ and $p_3=(3x_1^2+3x_2^2-4x_3^2)x_3$. For any three linearly independent forms $q_1,\,q_2,\,q_3$ in $U_\alpha=\spn(p_1,p_2,p_3)$, the sextic $f=q_1^2+q_2^2+q_3^2$ is strictly positive and lies in $\partial\Sigma_6$. For example, $$f\>=\>x_1^6+x_2^6+7(x_1^4+x_2^4)x_3^2+18x_1^2x_2^2x_3^2 -23(x_1^2+x_2^2)x_3^4+16x_3^6$$ is such a sextic, obtained by taking $q_i=p_i$ ($i=1,2,3$). According to \ref{unique}, $f=p_1^2+p_2^2+p_3^2$ is, up to orthogonal equivalence, the only sum of squares representation of~$f$ over~${\mathbb{R}}$. Instead of nine real points of intersection, the two cubics $p_1$ and $p_2$ may also intersect in seven real points and one pair of complex conjugate points. Then the $a_i$ have to satisfy a slightly different condition, see \cite{bl} Theorem 7.1. \end{rems} \begin{lab} The results and remarks in this section all carry over to the $(4,4)$ case, i.e.\ forms $f(x_1,x_2,x_3,x_4)$ of degree~$4$, as follows. Put $A={\mathbb{R}}[x_1,x_2,x_3,x_4]$, let now $\Sigma_4$ denote the sos cone in $A_4$. The Zariski closure of $\partial\Sigma_4$ is a union $\Delta\cup V$ of two irreducible hypersurfaces, with $\Delta$ the discriminant (of degree~$108$), and $V$ (of degree~$38475$) the Zariski closure of the sums of $4$ squares of quadratic forms \cite{bhors}. For $f\in\partial\Sigma_4$ a strictly positive form, the normal cone $N_f(\Sigma_4)$ is a ray, and $f$ has a unique sos representation, which is of length~$4$. Defining the ideal $I\subseteq A$ similarly to \ref{summary}, $A/I$ is Gorenstein of socle degree~$4$.
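The explicit expansion of $f=p_1^2+p_2^2+p_3^2$ given in the remarks above, as well as the second representation of $x_1^6+x_2^6+x_3^6$ from the first remark, can be checked mechanically with sparse exponent dictionaries (an illustrative sketch, not part of the argument):

```python
def pmul(p, q):
    # multiply two sparse polynomials {exponent triple: coefficient}
    r = {}
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            e = (e1[0] + e2[0], e1[1] + e2[1], e1[2] + e2[2])
            r[e] = r.get(e, 0) + c1 * c2
    return r

def psum(*ps):
    # add sparse polynomials, dropping zero coefficients
    r = {}
    for p in ps:
        for e, c in p.items():
            r[e] = r.get(e, 0) + c
    return {e: c for e, c in r.items() if c}

p1 = {(3, 0, 0): 1, (1, 0, 2): -1}                 # x1*(x1^2 - x3^2)
p2 = {(0, 3, 0): 1, (0, 1, 2): -1}                 # x2*(x2^2 - x3^2)
p3 = {(2, 0, 1): 3, (0, 2, 1): 3, (0, 0, 3): -4}   # (3x1^2+3x2^2-4x3^2)*x3

f = psum(pmul(p1, p1), pmul(p2, p2), pmul(p3, p3))
expected = {(6, 0, 0): 1, (0, 6, 0): 1,
            (4, 0, 2): 7, (0, 4, 2): 7, (2, 2, 2): 18,
            (2, 0, 4): -23, (0, 2, 4): -23, (0, 0, 6): 16}
assert f == expected

# the second representation of x1^6 + x2^6 + x3^6 from remark 1
q1 = {(3, 0, 0): 1, (1, 2, 0): -2}   # x1^3 - 2*x1*x2^2
q2 = {(2, 1, 0): 2, (0, 3, 0): -1}   # 2*x1^2*x2 - x2^3
q3 = {(0, 0, 3): 1}                  # x3^3
assert psum(pmul(q1, q1), pmul(q2, q2), pmul(q3, q3)) == \
    {(6, 0, 0): 1, (0, 6, 0): 1, (0, 0, 6): 1}
```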
The argument in \ref{tangspaceratl} carries over (using $4$ instead of $3$ squares), and so the analogue of Theorem \ref{posternsextq} holds for ${\mathbb{Q}}$-forms in $\Sigma_4$. The proofs resp.\ references are the same as in the $(3,6)$ case. \end{lab} \section{Background on Gram spectrahedra} We give a brief introduction to Gram spectrahedra of forms here, including proofs or references for a few basic facts that we are using in Section~3. See \cite{sch:gram} for a more detailed account of Gram spectrahedra in general. \begin{lab} Let $n\in{\mathbb{N}}$ and $A={\mathbb{R}}[x_1,\dots,x_n]={\mathbb{R}}[x]$, considered with the usual grading $A=\bigoplus_{d\ge0}A_d$. Let $d\ge0$ and $f\in A_{2d}$, let $X=(x_1^d,x_1^{d-1}x_2,\dots,x_n^d)$ be the list of monomials of degree~$d$ in some fixed order, let $N=\binom{n-1+d}{d}$ be the number of these monomials. The \emph{Gram spectrahedron} of $f$ is defined to be $$\Gram(f)\>=\>\{G\in{\mathbb{S}}^N\colon G\succeq0,\ X^tGX=f\},$$ where ${\mathbb{S}}^N$ is the space of real symmetric $N\times N$ matrices and $G\succeq0$ means that $G$ is positive semidefinite (has non-negative eigenvalues). So $\Gram(f)$ is an affine-linear section of the cone ${\mathbb{S}}^N_{\scriptscriptstyle+}$ of psd symmetric matrices, and one easily checks that $\Gram(f)$ is compact. If $f=p_1^2+\cdots+p_r^2$ with $p_i\in A_d$, and if $u_i\in{\mathbb{R}}^N$ is the coefficient (column) vector of $p_i$, then the matrix $G=\sum_{i=1}^ru_iu_i^t$ lies in $\Gram(f)$. Conversely, every point of $\Gram(f)$ arises in this way. More precisely, two sum of squares representations $f=\sum_{i=1}^rp_i^2=\sum_{i=1}^rq_i^2$ (which we can assume to have the same length by possibly adding zero summands to one of them) give the same element of $\Gram(f)$ if and only if they are \emph{orthogonally equivalent}, which means that there is an orthogonal $r\times r$ matrix $(a_{ij})$ such that $q_i=\sum_{j=1}^r a_{ij}p_j$ ($i=1,\dots,r$).
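A toy example of the correspondence $G\mapsto X^tGX$ (the data are ours, not from the text): the binary quartic $f=(x_1^2+x_2^2)^2$ has at least two Gram matrices, coming from the representations $(x_1^2+x_2^2)^2$ and $(x_1^2)^2+2(x_1x_2)^2+(x_2^2)^2$; the two representations are not orthogonally equivalent, since the matrices differ (they even have different ranks).

```python
# monomial vector X = (x1^2, x1*x2, x2^2), stored as exponent pairs
mons = [(2, 0), (1, 1), (0, 2)]

def expand(G):
    # expand the quadratic form X^t G X into a sparse polynomial
    r = {}
    for i, ei in enumerate(mons):
        for j, ej in enumerate(mons):
            if G[i][j]:
                e = (ei[0] + ej[0], ei[1] + ej[1])
                r[e] = r.get(e, 0) + G[i][j]
    return {e: c for e, c in r.items() if c}

f = {(4, 0): 1, (2, 2): 2, (0, 4): 1}     # (x1^2 + x2^2)^2

G1 = [[1, 0, 1], [0, 0, 0], [1, 0, 1]]    # rank 1, from the length-1 representation
G2 = [[1, 0, 0], [0, 2, 0], [0, 0, 1]]    # rank 3, from x1^4 + 2(x1*x2)^2 + x2^4
assert expand(G1) == f and expand(G2) == f
```

Both matrices are visibly psd ($G_1=uu^t$ with $u=(1,0,1)^t$, $G_2$ diagonal with positive entries), so this $f$ has a Gram spectrahedron with more than one point.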
See \cite{clr} for this fact, where Gram spectrahedra were first introduced. In other words, the points of $\Gram(f)$ are in natural bijection with the orthogonal equivalence classes of sum of squares representations of $f$. For our purposes it is more convenient to represent elements of $\Gram(f)$ as symmetric tensors $\sum_{i=1}^rp_i\otimes q_i= \sum_{i=1}^rq_i\otimes p_i$ with $p_i,\,q_i\in A_d$, i.e.\ as elements of ${\mathsf{S}}_2A_d$, the subspace of $A_d\otimes A_d$ of symmetric tensors. Using the multiplication (linear) map $\mu\colon{\mathsf{S}}_2A_d\to A_{2d}$, $\Gram(f)$ consists of all $\theta\in{\mathsf{S}}_2A_d$ with $\mu(\theta)=f$ and $\theta\succeq0$. Here $\theta\succeq0$ means that $\theta$ can be written $\theta=\sum_{i=1}^rp_i\otimes p_i$ with $p_i\in A_d$. See \cite{sch:gram} for this point of view. \end{lab} \begin{lab}\label{dimfaces} Let $\theta\in\Gram(f)$, say $\theta=\sum_{i=1}^rp_i\otimes p_i$. With $\theta$ we associate the linear subspace $U_\theta:= \spn(p_1,\dots,p_r)$ of $A_d$. The supporting face $F$ of $\theta$ in $\Gram(f)$ (i.e.\ the unique face that contains $\theta$ in its relative interior) consists of all $\eta\in\Gram(f)$ with $U_\eta\subseteq U_\theta$. It therefore corresponds to the sos representations of $f$ that use only polynomials from $U_\theta$. For describing the dimension of $F$ we can assume that $p_1,\dots,p_r$ are linearly independent. Then $\dim(F)$ is the number of linear relations between the forms $p_ip_j$ ($1\le i\le j\le r$), i.e. $$\dim(F)+\dim(U_\theta U_\theta)\>=\>\binom{r+1}{2}$$ where $U_\theta U_\theta:=\spn(p_ip_j\colon1\le i\le j\le r)$ (\cite{sch:gram} Proposition 3.6). In particular, $\theta$ is an extreme point of $\Gram(f)$ if and only if the $\binom{r+1}{2}$ forms $p_ip_j$ are linearly independent.
In this case we say that $p_1,\dots,p_r$ are \emph{quadratically independent}. \end{lab} \begin{lem}\label{2repsameu} Assume that $f\in A_{2d}$ has two non-equivalent sos representations $f=\sum_{i=1}^rp_i^2=\sum_{i=1}^rq_i^2$ with $\spn(p_1,\dots,p_r)= \spn(q_1,\dots,q_r)=:U$. Then $f$ has another sos representation $f=\sum_{j=1}^su_j^2$ for which $\spn(u_1,\dots,u_s)$ is a proper subspace of~$U$. \end{lem} \begin{proof} The two given representations represent two different points in the Gram spectrahedron $\Gram(f)$, which both lie in the relative interior of the same face $F$ of $\Gram(f)$. Since $\Gram(f)$ is compact, $F$ has some extreme point $\theta$; since $F$ contains at least two points, $\theta$ does not lie in the relative interior of $F$. If $f=\sum_{j=1}^su_j^2$ is an sos representation that corresponds to $\theta$, then $\spn(u_1,\dots,u_s)=U_\theta$ is a proper subspace of~$U$, for $U_\theta=U$ would place $\theta$ in the relative interior of~$F$. \end{proof} The following lemma is used in the proof of our main result in Section~3: \begin{lem}\label{basic} Let $f\in\Sigma_{2d}$ be a form with ${\mathbb{Q}}$-coefficients, and let $\theta$ be an extreme point of $\Gram(f)$. If the space $U_\theta$ is defined over ${\mathbb{Q}}$, then the sos representation of $f$ corresponding to $\theta$ is (can be) defined over~${\mathbb{Q}}$. \end{lem} That $U_\theta$ is defined over ${\mathbb{Q}}$ means that $U_\theta$ has a linear ${\mathbb{R}}$-basis consisting of polynomials with ${\mathbb{Q}}$-coefficients. \begin{proof} Let $p_1,\dots,p_r$ be a basis of $U_\theta$ consisting of ${\mathbb{Q}}$-polynomials. Since $\theta$ is an extreme point of $\Gram(f)$, the forms $p_i$ are quadratically independent, see \ref{dimfaces}. Hence there is a unique ${\mathbb{R}}$-linear combination $f=\sum_{i,j=1}^r a_{ij}p_ip_j$ with $a_{ij}=a_{ji}\in{\mathbb{R}}$. The matrix $(a_{ij})$ is positive definite, and $a_{ij}\in{\mathbb{Q}}$ by uniqueness of the linear combination. So $f$ is ${\mathbb{Q}}$-sos. \end{proof} \end{document}
\begin{document} \title{\TheTitle} \begin{abstract} For an arbitrary finite family of semi-algebraic/definable functions, we consider the corresponding inequality constraint set and we study qualification conditions for perturbations of this set. In particular we prove that all positive diagonal perturbations, save perhaps a finite number of them, ensure that any point within the feasible set satisfies the Mangasarian-Fromovitz constraint qualification. Using the Milnor-Thom theorem, we provide a bound for the number of singular perturbations when the constraints are polynomial functions. Examples show that the order of magnitude of our exponential bound is relevant. Our perturbation approach provides a simple protocol to build sequences of ``regular'' problems approximating an arbitrary semi-algebraic/definable problem. Applications to sequential quadratic programming methods and sum of squares relaxation are provided. \end{abstract} \section{Introduction} Constraint qualification conditions ensure that normal cones are finitely generated by the gradients of the active constraints. When considering an optimization problem, this fact immediately provides Lagrange/KKT necessary optimality conditions which are at the root of most resolution methods (see e.g., \cite{NW06,Ber16}). Finding settings in which qualification conditions are easy to formulate and easy to verify is thus of fundamental importance. In a convex framework, the power of Slater's condition consists in its extreme simplicity: the resolution of a ``simple'' problem (e.g., finding an interior point), often done directly or through routine computations, guarantees the regularity of the problem. In a nonconvex setting, the question becomes much more delicate but the wish is the same: describing normal cones as gradient-generated cones for deriving KKT conditions (see e.g., \cite{RW98}).
Contrary to what happens for convex functions, the knowledge of the functions at one point does not capture enough information about the global geometry to infer well-posedness everywhere\footnote{Observe though that the local knowledge of a polynomial function implies the knowledge of the function everywhere. But, as far as we know, this fact has never given birth to any simple qualification condition.}. Very smooth and simple problems satisfying all possible natural conditions may nevertheless present a failure of qualification, for which the normal cone is not generated by the gradients of the active constraints, and thus KKT conditions cannot apply. In dimension two a typical failure is a cusp, illustrated in \cref{fig:cusp} for the constraint set \begin{equation} \label{eq:cusp} D = \big\{ (x_1,x_2) \in \mathbb{R}^2 \mid x_1^3 + x_2 \leqslant 0 , \; x_1^3 - x_2 \leqslant 0 \big\} . \end{equation} \begin{figure} \caption{On the left, the constraint set $D$, see \labelcref{eq:cusp}.} \label{fig:cusp} \end{figure} Since in this general setting simple qualification conditions are not available, several researchers have considered the problem under the angle of perturbations. To our knowledge, the first work in this direction was proposed by Spingarn and Rockafellar \cite{RS79}. Given differentiable functions $g_1,\dots,g_m:\mathbb{R}^n\to \mathbb{R}$ and a constraint set $C = \{ x \in \mathbb{R}^n \mid g_1(x) \leqslant 0, \dots, g_m(x) \leqslant 0\}$, they indeed introduced the perturbed constraint sets \[ C_\mu := \left[ g_1 \leqslant \mu_1, \dots, g_m \leqslant \mu_m \right] \quad \text{where $\mu = (\mu_1, \dots, \mu_m) \in \mathbb{R}^m$} \] and studied their properties regarding qualification conditions. In the sequel, a set $C_{\mu}$ for which qualification conditions hold at each feasible point is said to be {\em regular}. Accordingly the corresponding perturbation $\mu$ is called {\em regular}.
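A quick numerical illustration of the cusp example (our own sketch, not from the text): at the origin MFCQ fails for $D$, since $0$ is a convex combination of the two active gradients, while for the diagonally perturbed set it holds at the unique point where both constraints are active.

```python
def grads(x1):
    # gradients of g1 = x1^3 + x2 and g2 = x1^3 - x2
    return (3 * x1**2, 1.0), (3 * x1**2, -1.0)

# at the cusp (0,0) both constraints are active, and the midpoint
# of the two gradients is 0 -> MFCQ fails
(a1, b1), (a2, b2) = grads(0.0)
combo = (0.5 * (a1 + a2), 0.5 * (b1 + b2))
assert combo == (0.0, 0.0)

# after perturbing both constraints by alpha > 0, the only point with
# both constraints active is (alpha**(1/3), 0); there every convex
# combination of the gradients has positive first coordinate -> MFCQ holds
alpha = 1e-3
(a1, b1), (a2, b2) = grads(alpha ** (1 / 3))
for t in [0.0, 0.25, 0.5, 0.75, 1.0]:
    assert t * a1 + (1 - t) * a2 > 0
```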
When $m = 1$, one obviously recovers the usual definition of a regular value of a function (see e.g., Milnor's monograph \cite{Milnor65}), and one guesses that a major role will be played by Sard-type theorems. Recall that Sard's original theorem (see e.g., \cite{Milnor65}) expresses that the regular values of a sufficiently smooth function are generic within $\mathbb{R}^m$. For $m \geqslant 1$, the work on perturbed constraint sets by Spingarn and Rockafellar \cite{RS79} dealt with the genericity of regular values using a quite restrictive notion of qualification condition. Works by Fujiwara \cite{Fuj82}, Scholtes and St\"ohr \cite{SS01} or Nie \cite{Nie14} gave further insights on various other aspects but with the same type of qualification assumptions. When the mappings $g_i$ are semi-algebraic (or definable), the application of the definable nonsmooth Sard theorem of Ioffe \cite{Iof07} yields stronger results since in that case, regularity exactly corresponds to sets satisfying the Mangasarian-Fromovitz constraint qualification everywhere (see \cref{thm:genericity-definable}). These aspects are discussed in detail in \Cref{sec:pertubed-constraint-sets}. Genericity results have been the object of a recent revival in connection with semi-algebraic optimization: see Bolte, Daniilidis and Lewis \cite{BDL11}, Daniilidis and Pang \cite{pang}, Drusvyatskiy, Ioffe and Lewis \cite{DIL16}, Lee and Pham \cite{LP15}, H\`a and Pham \cite{HP17}. An original feature of our work is to exploit the fact that genericity is a relative concept. A property is indeed generic within some given family, but if one considers smaller families, genericity may no longer hold. It is therefore important to identify the smallest possible families in order to strengthen genericity results and to be able to exploit them for improving effective optimization techniques (e.g., algorithms, homotopy methods).
In this regard, we address in \Cref{sec:finite-singularities} the following two questions: \begin{itemize} \em \item How do we perturb to ensure regularity? In other words, how can we build simple problems $(\mathcal{P}_\alpha)_{\alpha \in \mathbb{R}_+}$ which are regular and whose value, $\val (\mathcal{P}_\alpha)$, converges to that of the original problem, $\val ({\mathcal P_0})$? \item Can we go beyond mere genericity and quantify the number of singular (i.e., nonregular) values in the polynomial case? \end{itemize} Our first result, \`a la Morse-Sard, relies on definability assumptions of the data (e.g., semi-algebraicity) and provides one-parameter families of regular constraint sets. This is done by showing that any positive semiline $\mathbb{R}_+ \, v$, with $v \in \mathbb{R}_{++}^m$, bears only finitely many singular perturbations. For instance if we let $\bm{\alpha} := (\alpha, \dots, \alpha)$, the sets $( C_{\bm{\alpha}} )_{\alpha \in \mathbb{R}_+}$ are regular for all positive $\alpha$ small enough, see \cref{fig:cusp} for an illustration. When some of the constraint functions are convex, our approach is considerably simplified: we show indeed that a ``partial Slater's condition'' allows us to restrict the perturbation approach to nonconvex functions. The strength of our results is well conveyed by the following general approximation fact: for any objective $f$ and for $\alpha$ small enough, we are able to build explicit well-posed problems \begin{equation*} \tag{$\mathcal{P}_{\alpha}$} \text{minimize } f(x) \quad \text{s.t.} \enspace x \in C_{\bm{\alpha}} \end{equation*} which satisfy, under mild conditions, $\lim _{\alpha \to 0^+} \val (\mathcal{P}_{\alpha}) = \val (\mathcal{P}_0)$. Our approach opens the way to continuation methods (see \cite{homotopy} and references therein) or to more direct diagonal methods as shown in our last section.
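For the cusp set of the introduction this approximation scheme can be carried out in closed form (an illustrative sketch with the objective $f(x)=-x_1$, which is our choice, not the text's):

```python
def val(alpha):
    # minimize -x1 over C_alpha = [x1^3 + x2 <= alpha, x1^3 - x2 <= alpha]:
    # the two constraints combine to x1^3 <= alpha - |x2| <= alpha,
    # so the optimum is attained at x1 = alpha**(1/3), x2 = 0
    return -alpha ** (1 / 3)

vals = [val(10.0 ** (-k)) for k in range(1, 8)]
# val(P_alpha) increases to val(P_0) = 0 as alpha -> 0+
assert all(v < 0 for v in vals)
assert vals == sorted(vals)
assert abs(vals[-1]) < 1e-2
```

Each perturbed problem is regular (the cusp has been opened up), and the values converge monotonically to the original value $0$.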
A natural question which immediately emerges is whether it is possible to count the number of singular perturbations. When assuming further that the data are polynomial functions whose degree is bounded by $d$, we show by using Milnor-Thom's theorem that the number of singular values for problems of the type $(\mathcal{P}_{\alpha})$ is at most $d (2d-1)^n (2d+1)^m$. Examples show that in general the bound is indeed exponential, even in the quadratic case with only one of the $g_i$ being nonconvex. The worst-case bound described in this work is a rather negative result for semi-algebraic programming, in the sense that it shows that there are instances for which singular values are so clumped and numerous that perturbation techniques are ineffective. That worst-case instances of general semi-algebraic programming are out of reach of modern methods has been well known since the pioneering work~\cite{smale89}. It would be interesting to recast our findings along this perspective. For instance, our results suggest that some constraint sets might have such a complex nature that most local methods are inapplicable in practice, even after perturbation. On the other hand, as suggested by real-life problems, regular instances are numerous in practice. This shows the need to understand further the geometric factors or probabilistic priors on the constraints that could make singular values less numerous or at least favorably distributed. In \Cref{sec:applications}, we provide two theoretical algorithmic illustrations of our results. As a general fact, our diagonal perturbation scheme can be used in conjunction with any algorithm whose behavior relies on constraint qualification assumptions. We illustrate this principle with exact semidefinite programming relaxations in polynomial programming, for which well-behaved constructions were proposed for regular problems \cite{DNS06,DNP07,ABM14}.
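The worst-case bound $d(2d-1)^n(2d+1)^m$ stated above is straightforward to tabulate (helper name ours); even for quadratic constraints it grows exponentially in the number of variables and constraints:

```python
def singular_bound(d, n, m):
    # worst-case bound on the number of singular diagonal perturbations
    # for m polynomial constraints of degree <= d in n variables,
    # as stated in the text
    return d * (2 * d - 1)**n * (2 * d + 1)**m

assert singular_bound(2, 2, 1) == 2 * 3**2 * 5**1   # == 90
# exponential growth in n and m already for quadratics (d = 2)
assert singular_bound(2, 10, 10) == 2 * 3**10 * 5**10
```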
A second application of our general results is given by a class of sequential quadratic programming methods, SQP for short. SQP methods are widespread in practical applications, see e.g., \cite{Fle85,fred,Aus13}. Convergence analysis of such methods usually requires very strong qualification conditions in order to handle regularity and infeasibility issues for minimizing sequences. We show how our perturbation results provide a natural and strong tool for convergence analysis in the framework of semi-algebraic optimization. \section{Regular and singular perturbations of constraint sets} \label{sec:pertubed-constraint-sets} \subsection{Notation and definitions} \subsubsection*{Constraint sets and qualification conditions} Let us consider the general nonlinear optimization problem \begin{equation} \label[prb]{pb:NLP} \tag{NLP} \begin{aligned} \text{minimize} & \enspace f(x) \\ \text{subject to} & \enspace g_1(x) \leqslant 0 , \dots , g_m(x) \leqslant 0 , \\ & \enspace h_1(x) = 0 , \dots , h_r(x) = 0 , \end{aligned} \end{equation} where $f$, $g_1, \dots, g_m, h_1, \dots, h_r$ are differentiable functions from $\mathbb{R}^n$ to $\mathbb{R}$. We denote by \begin{equation*} C \; = \; [ g_1 \leqslant 0, \dots, g_m \leqslant 0 ] \; := \; \{x \in \mathbb{R}^n \mid g_1(x) \leqslant 0, \dots, g_m(x) \leqslant 0 \} \end{equation*} the inequality constraint set, and by \begin{equation*} M \; = \; [ h_1 = 0, \dots, h_r = 0 ] \; := \; \{x \in \mathbb{R}^n \mid h_1(x) = 0, \dots, h_r(x) = 0 \} \end{equation*} the equality constraint set. For $x \in C$, we define the set of active constraints by \[ I(x) \; := \; \{ 1 \leqslant i \leqslant m \mid g_i(x) = 0 \} . \] We next recall a standard regularity condition.
\begin{definition}[Mangasarian-Fromovitz constraint qualification] \label{def:MFCQ} A point $x \in C \cap M$ is said to satisfy the {\em Mangasarian-Fromovitz constraint qualification (MFCQ)} if the gradient vectors $\nabla h_j(x)$, $j=1,\dots,r$, are linearly independent and there exists $y \in \mathbb{R}^n$ such that \begin{equation} \label{eq:MFCQ} \left\{ \begin{aligned} \langle y,\nabla h_j(x)\rangle = 0 , & \quad j = 1,\dots, r , \\ \langle y,\nabla g_i(x)\rangle < 0 , & \quad i \in I(x) . \end{aligned} \right. \end{equation} If there is no equality constraint, this condition is then called {\em Arrow-Hurwicz-Uzawa constraint qualification}. We say that MFCQ holds throughout $C \cap M$ if it is satisfied at every point in $C \cap M$. \end{definition} \begin{remark} By a straightforward application of Hahn-Banach's separation theorem, the existence of a vector $y \in \mathbb{R}^n$ satisfying condition \labelcref{eq:MFCQ} is equivalent to \[ \co \, \{\nabla g_i(x) \mid i \in I(x) \} \; \cap \; \spn \, \{ \nabla h_j(x) \mid 1 \leqslant j \leqslant r \} = \emptyset \] where $\co X$ denotes the convex hull of any subset $X \subset \mathbb{R}^n$, and $\spn X$ its linear span. If there is no equality constraint, this characterization simply reads \[ 0 \notin \co \, \{\nabla g_i(x) \mid i \in I(x) \} . \] \label{rmk:Hahn-Banach} \end{remark} Let us briefly recall that MFCQ guarantees the existence of Lagrange multipliers at minimizers of \cref{pb:NLP}: if a local minimizer $\bar{x}$ of $f$ on $C \cap M$ satisfies MFCQ, then there exist multipliers $\lambda_1,\dots,\lambda_m \in \mathbb{R}_+ := [0,+\infty)$ and $\kappa_1,\dots,\kappa_r \in \mathbb{R}$ such that \begin{equation} \label{eq:KKT} \left\{ \begin{aligned} & \nabla f(\bar{x}) + \sum_{i=1}^m \lambda_i \nabla g_i(\bar{x}) + \sum_{j=1}^r \kappa_j \nabla h_j(\bar{x}) = 0 , \\ & \lambda_i \, g_i(\bar{x}) = 0 , \quad i = 1,\dots,m . \end{aligned} \right.
\end{equation} Any feasible point satisfying these conditions is called a {\em Karush-Kuhn-Tucker (KKT) point}. \begin{remark}[Clarke regularity and MFCQ] \label{rmk:Clarke-regularity} A more geometrical way of formulating the existence of Lagrange multipliers consists in interpreting the gradients of active constraints as generators of a cone normal to the constraint set. In the terminology of modern nonsmooth analysis, assuming that there are only inequality constraints in \cref{pb:NLP}, this amounts to the normal regularity of the set $C = [g_1 \leqslant 0,\dots, g_m \leqslant 0]$. We next explain this fact. Being given a nonempty closed subset $X \subset \mathbb{R}^n$, the {\em Fr\'echet normal cone} to $X$ at point $\bar{x} \in X$ is defined by \[ \hat{N}_X(\bar{x}) := \{ v \in \mathbb{R}^n \mid \langle v,x-\bar{x}\rangle \leqslant o(\|x-\bar{x}\|), \; x \in X \} . \] It is immediate that any local solution to \cref{pb:NLP} satisfies \begin{equation} \label{eq:Fermat} \nabla f(x) + \hat{N}_C (x) \ni 0 \end{equation} which suggests expressing $\hat{N}_C(x)$ with the initial data $g_1,\dots,g_m$. To do so, let us introduce the {\em limiting normal {\rm or} Mordukhovich normal cone\footnote{See the pioneering work \cite{morduk}.}} to $X$ at $\bar{x}$, denoted by $N_X(\bar{x})$ and defined by \[ v \in N_X(\bar{x}) \iff \exists \, x_n \to \bar{x} , \quad \exists \, v_n \to v , \quad v_n \in \hat{N}_X(x_n) . \] The set $X$ is called {\em regular} at $\bar{x}$ if $\hat{N}_X(\bar{x}) = N_X(\bar{x})$. By classical results of nonsmooth analysis, if the Mangasarian-Fromovitz constraint qualification holds throughout $C$, then $C$ is regular at every point in $C$, see \cite[Th.~6.14]{RW98} or \cite[Th.~7.2.6]{BL06}. In addition, we have for all $x \in C$ \[ N_{C}(x) = \Big\{ \sum_{i \in I(x)} \lambda_i \; \nabla g_i(x) \mid \lambda_i \geqslant 0, \; i \in I(x) \Big\} , \] which combined with \Cref{eq:Fermat} yields the claimed result.
\end{remark} \subsubsection*{Perturbations of constraint sets} For $\mu \in \mathbb{R}^m$ and $\nu \in \mathbb{R}^r$, we denote by \begin{align*} C_\mu & := [ g_1 \leqslant \mu_1, \dots, g_m \leqslant \mu_m ] = \{x \in \mathbb{R}^n \mid g_1(x) \leqslant \mu_1, \dots, g_m(x) \leqslant \mu_m \} , \\ M_\nu & := [ h_1 = \nu_1, \dots, h_r = \nu_r ] = \{x \in \mathbb{R}^n \mid h_1(x) = \nu_1, \dots, h_r(x) = \nu_r \} , \end{align*} the perturbed inequality and equality constraint sets of \cref{pb:NLP}, respectively. Also, we denote by \[ \mathcal{A} := \{ (\mu,\nu) \in \mathbb{R}^m \times \mathbb{R}^r \mid C_\mu \cap M_\nu \neq \emptyset \} \] the set of {\em admissible perturbations}. \begin{definition}[Regular/Singular perturbations] We say that $(\mu,\nu) \in \mathcal{A}$ is a {\em regular perturbation} if the Mangasarian-Fromovitz constraint qualification holds throughout $C_\mu \cap M_\nu$, and we denote by $\mathcal{A}_\text{reg}$ the collection of all regular perturbations: \[ \mathcal{A}_\text{reg} := \{ (\mu,\nu) \in \mathcal{A} \mid \text{MFCQ holds at every } x \in C_\mu \cap M_\nu \} . \] In contrast, an admissible perturbation $(\mu,\nu) \in \mathcal{A}$ is {\em singular} if the Mangasarian-Fromovitz constraint qualification is not satisfied at some point of $C_\mu \cap M_\nu$. The subset of singular perturbations is given by \[ \mathcal{A}_\text{sing} := \mathcal{A} \setminus \mathcal{A}_\text{reg} . \] \end{definition} Up to an obvious change of definition, we shall use the same notation when there is no equality constraint. \subsection{Metric regularity and constraint qualification} In this subsection, we recall how the Mangasarian-Fromovitz constraint qualification can be interpreted in terms of metric regularity of some set-valued mapping. To that purpose, we gather below some classical notions in nonsmooth analysis, see \cite{RW98,Mor06,DR14}.
A set-valued mapping $F: \mathbb{R}^p \rightrightarrows \mathbb{R}^q$ is a map sending each point of $\mathbb{R}^p$ to a subset of $\mathbb{R}^q$. We denote by $\graph F := \{ (x,y) \in \mathbb{R}^p \times \mathbb{R}^q \mid y \in F(x) \}$ the {\em graph} of $F$ and by $\dom F := \{ x \in \mathbb{R}^p \mid F(x) \neq \emptyset \}$ its {\em domain}. The set-valued mapping $F: \mathbb{R}^p \rightrightarrows \mathbb{R}^q$ is {\em metrically regular} at $(\bar{x},\bar{y}) \in \graph F$ if the graph of $F$ is locally closed at $(\bar{x},\bar{y})$ and there exist a positive real number $\kappa$ and neighborhoods $\mathcal{U}$ and $\mathcal{V}$ of $\bar{x}$ and $\bar{y}$ respectively, such that \[ \dist(x,F^{-1}(y)) \leqslant \kappa \, \dist(y,F(x)) \] for all $(x,y) \in \mathcal{U} \times \mathcal{V}$. Here, $\dist(z,K)$ refers to the distance of any point $z$ of a space endowed with a norm $\| \cdot \|$ to any subset $K$ of the same space, i.e., $\inf_{k \in K} \|k-z\|$. We now come back to \cref{pb:NLP} and introduce the set-valued mapping $F: \mathbb{R}^n \rightrightarrows \mathbb{R}^{m+r}$ defined by \begin{equation} \label{eq:constraint-mapping} F(x) \quad=\quad \begin{pmatrix} g_1(x) \\ \vdots \\ g_m(x) \\ h_1(x) \\ \vdots \\ h_r(x) \end{pmatrix} \quad+\quad \mathbb{R}^m_+ \times \{0\}^r \end{equation} where $\mathbb{R}_+^m $ is the nonnegative orthant of $\mathbb{R}^m$. Observe that $(\mu,\nu) \in F(x)$ if and only if $x \in C_\mu \cap M_\nu$. Also notice that, by continuity of the constraint functions, $\graph F$ is closed. The following result, due to Robinson, characterizes the points satisfying MFCQ in terms of the mapping $F$. For a thorough discussion and various proofs, we refer the reader to \cite{Mor06}. Other approaches can be found in \cite[Ex.~9.44]{RW98}, \cite[Ex.~4F.3]{DR14} or \cite[Th.~4.1]{DQZ06}, the latter avoiding the use of coderivative calculus.
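The following one-dimensional computation is a toy example that we add for illustration; it shows that metric regularity of the constraint mapping $F$ fails exactly where the gradient of the constraint degenerates.

```latex
% Toy example (added for illustration): metric regularity of the constraint
% mapping F for a single inequality constraint g_1(x) = x^2 (n = m = 1, r = 0).
Take $g_1(x) = x^2$, so that $F(x) = x^2 + \mathbb{R}_+$ and
$F^{-1}(\mu) = [-\sqrt{\mu}, \sqrt{\mu}]$ for $\mu \geqslant 0$
(and $F^{-1}(\mu) = \emptyset$ for $\mu < 0$). At $(\bar{x},\bar{\mu}) = (0,0)$
we have, for $\mu = 0$,
\[
\dist(x, F^{-1}(0)) = |x|
\quad \text{and} \quad
\dist(0, F(x)) = x^2 ,
\]
so no constant $\kappa$ can satisfy $|x| \leqslant \kappa \, x^2$ for all $x$
near $0$: the mapping $F$ is not metrically regular at $(0,0)$. Consistently,
MFCQ fails at the unique feasible point $0 \in C_0 = \{0\}$ since
$\nabla g_1(0) = 0$. By contrast, at $(\bar{x},\bar{\mu}) = (1,1)$ one checks
that $\dist(x, F^{-1}(\mu)) = (|x| - \sqrt{\mu})_+ \leqslant (x^2 - \mu)_+
= \dist(\mu, F(x))$ for $(x,\mu)$ near $(1,1)$, so $F$ is metrically regular
there (with $\kappa = 1$), in accordance with $\nabla g_1(1) = 2 \neq 0$.
```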
\begin{theorem}[Robinson] The Mangasarian-Fromovitz constraint qualification holds at point $x \in C_\mu \cap M_\nu$ if and only if the set-valued mapping $F$ defined in~\labelcref{eq:constraint-mapping} is metrically regular at $(x,(\mu,\nu))$. \label{thm:MFCQ-metric-regularity} \end{theorem} \subsection{Genericity of regular perturbations} Qualification conditions play an important role in the analysis of nonlinear programming and the convergence of optimization algorithms, yet checking these conditions at optimal points is hardly possible. This is why one rather seeks local/global simple geometrical assumptions that automatically warrant these conditions. Sard's theorem provides results in this direction: generic equations are well-posed if the data are smooth enough or well-structured (e.g., analytic). Viewing constraint sets from this angle and following the pioneering work \cite{RS79}, we establish here various genericity results for regular perturbations. \subsubsection*{Smooth constraint functions} The first genericity result we present here concerns the {\em linear independence constraint qualification}, a strong and quite stringent qualification condition which is often considered in the literature when dealing with ``generic'' instances of optimization problems (see e.g., \cite{Nie14,LP16}). This qualification condition requires that the gradients of both the equality constraints and the active inequality constraints are linearly independent. Note that it implies in particular MFCQ. Let us first recall the classical Sard theorem. For a differentiable map $f: \mathbb{R}^p \to \mathbb{R}^q$, a point $x \in \mathbb{R}^p$ is {\em critical} if the differential mapping of $f$ at $x$ is not surjective. A {\em critical value} of $f$ is the image of a critical point; a value that is not critical is said to be {\em regular}. \begin{sard}[see \cite{Milnor65}] \label{thm:Sard} Let $f: \mathbb{R}^p \to \mathbb{R}^q$ be a map of class $\mathcal{C}^k$ with $k > \max(0,p-q)$.
Then the Lebesgue measure of the set of critical values of $f$ is zero. \end{sard} As a consequence of Sard's theorem, we deduce that almost every perturbation of the constraint set of \cref{pb:NLP} is regular when the constraint functions are smooth enough. \begin{theorem}[{compare with \cite[Th.~1]{RS79}}] \label{thm:genericity-smooth} Let $g_1,\dots,g_m,h_1,\dots,h_r$ be $\mathcal{C}^k$ constraint functions from $\mathbb{R}^n$ to $\mathbb{R}$ with $k > \max(0,n-r)$. Then the set of admissible perturbations $(\mu,\nu) \in \mathbb{R}^m \times \mathbb{R}^r$ for which the linear independence constraint qualification fails at some point of the set $C_\mu \cap M_\nu$ has Lebesgue measure zero. In particular, the set $\mathcal{A}_\text{sing}$ of singular perturbations has Lebesgue measure zero. \end{theorem} \subsubsection*{Definable constraint functions} The smoothness assumptions of the above result can be considerably relaxed and replaced by mere definability. The results on definability and tame geometry that we use hereafter are recalled in \cref{sec:tame-geometry}. Ioffe showed a nonsmooth version of Sard's theorem for definable set-valued mappings. In this framework, a vector $\bar{y} \in \mathbb{R}^q$ is a {\em critical value} of a set-valued mapping $F: \mathbb{R}^p \rightrightarrows \mathbb{R}^q$ if there exists a point $\bar{x} \in \mathbb{R}^p$ such that $\bar{y} \in F(\bar{x})$ and $F$ is not metrically regular at $(\bar{x},\bar{y})$. \begin{nonsmooth-sard}[{\cite[Th.~1]{Iof07}}] Let $F:\mathbb{R}^p \rightrightarrows \mathbb{R}^q$ be a definable set-valued mapping with locally closed graph. Then the set of critical values of $F$ is a definable set in $\mathbb{R}^q$ whose dimension is at most $q-1$.
\label{thm:Sard-definable} \end{nonsmooth-sard} Combining this result with \cref{thm:MFCQ-metric-regularity}, we readily get a geometric description of regular perturbations for \cref{pb:NLP} when the constraint functions are definable in the same o-minimal structure. Let us mention that we also use the fact that any definable set $A \subset \mathbb{R}^p$ can be ``stratified'', that is, written as a finite disjoint union of smooth submanifolds of $\mathbb{R}^p$ that fit together in a ``regular'' manner. This implies in particular that the dimension of $A$, i.e., the largest dimension of such submanifolds, is strictly lower than $p$ if and only if the complement of $A$ is dense. \begin{theorem}[Genericity of regular perturbations] \label{thm:genericity-definable} Let $g_1,\dots,g_m,h_1,\dots,h_r: \mathbb{R}^n \to \mathbb{R}$ be constraint functions that are definable in the same o-minimal structure. Then the set $\mathbb{R}eg$ (resp., $\mathcal{A}_\text{sing}$) of regular (resp., singular) perturbations is definable in $\mathbb{R}^{m+r}$, and $\mathcal{A}_\text{sing}$ is a finite union of smooth submanifolds of $\mathbb{R}^{m+r}$ of dimension strictly lower than $m+r$. \end{theorem} \begin{remark} \label{rmk:topology-singularities} Note that, in general, the set $\mathcal{A}_\text{sing}$ of singular perturbations is not closed. Consider for instance the semi-algebraic functions defined on $\mathbb{R}^2$ by \begin{gather*} g_1(x_1, x_2) = \min \left\{ 2 x_1-1, \frac{1}{|x_1|} \right\} - x_2 , \\ h_1(x_1, x_2) = \frac{x_1}{1+(x_1)^2} - x_2 , \quad h_2(x_1, x_2) = \frac{x_1}{1+(x_1)^2} + x_2 . \end{gather*} For every $\nu \in (0,\frac{1}{2})$, the set $[h_1=\nu, h_2=\nu]$ contains two distinct points, namely $(\frac{1 \pm \sqrt{1 - 4 \nu^2}}{2 \nu}, 0)$, whereas $[h_1=0, h_2=0] = \{(0,0)\}$. Let $\mu_\nu = \frac{1 + \sqrt{1 - 4 \nu^2}}{2 \nu}$. 
One easily checks that, for the constraint set $[g_1 \leqslant \mu_\nu^{-1}, h_1 = \nu, h_2 = \nu]$ with $\nu \in (0,\frac{1}{2})$, MFCQ fails at point $(\mu_\nu, 0)$ but is satisfied at point $(\frac{1 - \sqrt{1 - 4 \nu^2}}{2 \nu}, 0)$, where the inequality constraint is not active. Hence $(\mu_\nu^{-1}, \nu, \nu)$ is a singular perturbation for all $\nu \in (0,\frac{1}{2})$. However, when $\nu = 0$ the singularity disappears: only the point $(0,0)$ remains, and MFCQ holds there. In other words, $(0,0,0)$ is regular. \end{remark} \subsection{Continuity properties of perturbations} We investigate below the continuity properties of the perturbed constraint sets and of the value function of \cref{pb:NLP}. Recall beforehand that given any set-valued mapping $F: \mathbb{R}^p \rightrightarrows \mathbb{R}^q$, the {\em outer limit}, $\limsup_{x \to \bar{x}} F(x) \subset \mathbb{R}^q$, and the {\em inner limit}, $\liminf_{x \to \bar{x}} F(x) \subset \mathbb{R}^q$, of $F$ at any point $\bar{x} \in \mathbb{R}^p$ are defined respectively by the following: \begin{align*} y \in \limsup_{x \to \bar{x}} F(x) & \iff \exists \, x_n \to \bar{x} , \quad \exists \, y_n \to y , \quad \forall \, n \in \mathbb{N} , \quad y_n \in F(x_n) , \\ y \in \liminf_{x \to \bar{x}} F(x) & \iff \forall \, x_n \to \bar{x} , \quad \exists \, y_n \to y , \quad \exists \, n_0 \in \mathbb{N} , \quad \forall \, n \geqslant n_0 , \quad y_n \in F(x_n) . \end{align*} Then we can define the notion of (semi)continuity for set-valued mappings. \begin{definition}[Semicontinuity of set-valued mappings] A set-valued mapping $F: \mathbb{R}^p \rightrightarrows \mathbb{R}^q$ is {\em outer semicontinuous} (resp., {\em inner semicontinuous}) at $\bar{x} \in \mathbb{R}^p$ if \[ \limsup_{x \to \bar{x}} F(x) \subset F(\bar{x}) \quad \Big( \text{resp.,} \enspace \liminf_{x \to \bar{x}} F(x) \supset F(\bar{x}) \Big) . \] It is {\em continuous} at $\bar{x}$ if it is both outer and inner semicontinuous.
\end{definition} A straightforward application of these definitions leads to the following elementary lemma. \begin{lemma}[Continuity of perturbed sets] \label{lem:continuity-constraint-set} Let $g_1,\dots,g_m: \mathbb{R}^n \to \mathbb{R}$ be continuous functions. Assume that the constraint set $C_0 = [g_1 \leqslant 0,\dots, g_m \leqslant 0]$ is nonempty. Then the set-valued mapping $\mathbb{R}^m_+ \rightrightarrows \mathbb{R}^n, \; \mu \mapsto C_\mu$ is continuous at $0$. \qed \end{lemma} \begin{remark} It is in general necessary to consider nonnegative perturbations in order to have continuity at $0$. Indeed, for general perturbations, although the inequality constraint set mapping $\mathbb{R}^m \rightrightarrows \mathbb{R}^n, \; \mu \mapsto C_\mu$ is outer semicontinuous at $0$ (this readily follows from the continuity of the constraint functions), it is not inner semicontinuous. Consider for instance the following constraint set, defined for any $\mu \in \mathbb{R}^2$ by \[ C_\mu = \{x \in \mathbb{R} \mid 1-x^2 \leqslant \mu_1, \; (x+1)^2 - 4 \leqslant \mu_2 \} . \] One checks that $C_0 = [-3,-1] \cup \{1\}$. However, for all $\mu_1 < 0$ and $\mu_2 < 0$ small enough, we have $C_\mu = [-1-\sqrt{4+\mu_2}, -\sqrt{1-\mu_1}] \subset [-3, -1]$. Hence $\{1\}$ cannot be in the inner limit of $C_\mu$ as $\mu \to 0$. Precisely, we have $\liminf_{\mu \to 0} C_\mu = [-3,-1]$. As for the equality constraint set mapping $\mathbb{R}^r \rightrightarrows \mathbb{R}^n, \; \nu \mapsto M_\nu$, it is also clearly outer semicontinuous at $0$ but not inner semicontinuous in general, even when restricting to $\mathbb{R}_+^r$. For instance, consider for $\nu \in \mathbb{R}$ the constraint set \[ M_\nu = \{x \in \mathbb{R} \mid 9 x (x^2-1) - 2 \sqrt{3} = \nu \} . \] One checks that $M_0 = \{-1 / \sqrt{3}, \; 2/ \sqrt{3}\}$. However, for every $\nu > 0$, $M_\nu$ contains a unique point, which converges to $2 / \sqrt{3}$ as $\nu$ tends to $0$.
As a consequence, the inner limit of $M_\nu$ at $0$ can only contain this point. Studying the situation when $\nu < 0$, one readily sees that, actually, $\liminf_{\nu \to 0} M_\nu = \{2 / \sqrt{3}\}$. \end{remark} We now turn our attention to the behavior of the value function of perturbed problems and we study the continuity at $0$ of $(\mu,\nu) \mapsto \min \{ f(x) \mid x \in C_\mu \cap M_\nu \}$. As a consequence of the previous observations, continuity cannot occur in general when equality constraints are present, and it is ``necessary'' to consider nonnegative perturbations for the inequalities. The next result is classical; see e.g., \cite[Prop.\ 4.4]{BS00}. In the following, we denote by $\mathbb{R}_{++}^m$ the set of vectors in $\mathbb{R}^m$ with positive entries. \begin{lemma}[Continuity of the value function] \label{lem:continuity-value} Let $f, g_1,\dots,g_m: \mathbb{R}^n \to \mathbb{R}$ be continuous functions. Assume that the constraint set $C_\mu = [g_1 \leqslant \mu_1,\dots, g_m \leqslant \mu_m]$ is nonempty for $\mu = 0$ and bounded for some positive perturbation $\mu' \in \mathbb{R}_{++}^m$. Then the value function $\val: \mathbb{R}_+^m \to \mathbb{R}$ defined by $\displaystyle \val(\mu) = \min_{x \in C_\mu} f(x)$ is continuous at $0$: \[ \min_{x \in C_\mu} f(x) \enspace \xrightarrow[\substack{\mu \to 0 \\ \mu \in \mathbb{R}_+^m}]{} \enspace \min_{x \in C_0} f(x) \enspace . \] \end{lemma} \begin{proof} First, since $C_0 \subset C_\mu$ for all $\mu \in \mathbb{R}_+^m$, we have $\val(0) \geqslant \limsup_{\mu \to 0} \val(\mu)$. Let $(\mu_n)_{n \in \mathbb{N}}$ be any sequence in $\mathbb{R}_+^m$ converging to $0$ and such that $\val(\mu_n)$ converges to some $v \in \mathbb{R} \cup \{-\infty\}$. Since $\mu' \in \mathbb{R}_{++}^m$, we may assume without loss of generality that we have, for all integers $n$, $\mu' - \mu_n \in \mathbb{R}_+^m$, so that $C_{\mu_n} \subset C_{\mu'}$.
Let $x^*_n \in \argmin \{ f(x) \mid x \in C_{\mu_n} \}$, that is, $x^*_n \in C_{\mu_n}$ and $f(x^*_n) = \val(\mu_n)$. Since the sequence $(x^*_n)_{n \in \mathbb{N}}$ lies in the bounded set $C_{\mu'}$, it converges, up to an extraction, to some point $x^*$. Therefore, we have $v = f(x^*)$ with $x^* \in C_0$ by continuity of $f$ and $\mu \mapsto C_\mu$ (\cref{lem:continuity-constraint-set}), so that $v = f(x^*) \geqslant \val(0)$. We deduce that $\val(0) \leqslant \liminf_{\mu \to 0} \val(\mu)$, which concludes the proof. \end{proof} Note that, without the boundedness assumption, the conclusion of \cref{lem:continuity-value} does not hold. Consider for instance the semi-algebraic programming problem \[ \text{minimize} \enspace \frac{1+x^2}{1+x^4} \quad \text{subject to} \enspace x \in \mathbb{R} , \enspace \frac{x^2}{1+x^4} \leqslant \alpha . \] For all scalars $\alpha > 0$, the value of the problem is $\val(\alpha) = 0$ (the infimum not being attained), whereas for $\alpha = 0$ it is $\val(0) = 1$. \section{Finiteness of singular diagonal perturbations} \label{sec:finite-singularities} \subsection{Geometric aspects of regular perturbations} Although \cref{thm:genericity-definable} is a satisfying theoretical result, it does not give any structural information beyond dimension and definability. In particular it is not clear {\em how the perturbations} should be chosen when dealing with concrete optimization problems. The following result shows that, under reasonable assumptions, {\em small positive} and {\em small negative} perturbations $\mu$ of the inequality constraints are always regular, that is, MFCQ is satisfied at every point in $C_{\mu}$. For the sake of simplicity, we have chosen to state this result as well as all the subsequent ones for constraint sets defined only by inequalities.
Nevertheless, they all extend easily to the setting of inequality and equality constraints (with perturbations applying only to the inequalities), see \cref{rmk:regular-perturbations}~\labelcref{it:equality-constraints-i} and \cref{rmk:equality-constraints-ii,rmk:equality-constraints-iii,rmk:equality-constraints-iv,rmk:equality-constraints-v}. \begin{theorem}[Small regular perturbations] \label{thm:regular-local-perturb} Let $g_1,\dots$, $g_m: \mathbb{R}^n \to \mathbb{R}$ be differentiable constraint functions that are definable in the same o-minimal structure. \begin{enumerate} \item {\em (Outer regular perturbations).} If $C_0 = [g_1 \leqslant 0, \dots, g_m \leqslant 0]$ is nonempty, then there exists $\varepsilon_0 > 0$ such that $(0,\varepsilon_0)^m \subset \mathbb{R}eg$. In other words, for all positive perturbations $\mu \in (0,\varepsilon_0)^m$, the Mangasarian-Fromovitz constraint qualification holds throughout $C_\mu = [g_1 \leqslant \mu_1, \dots, g_m \leqslant \mu_m]$. \label{it:outer-perturb} \item {\em (Inner regular perturbations).} If $[g_1 < 0, \dots, g_m < 0]$ is nonempty, then there exists $\varepsilon_1 > 0$ such that $(-\varepsilon_1,0)^m \subset \mathbb{R}eg$. In other words, for all negative perturbations $\mu \in (-\varepsilon_1,0)^m$, the Mangasarian-Fromovitz constraint qualification holds throughout $C_\mu = [g_1 \leqslant \mu_1, \dots, g_m \leqslant \mu_m]$. \label{it:inner-perturb} \end{enumerate} \end{theorem} \begin{proof} We only show \cref{it:outer-perturb}. \Cref{it:inner-perturb} follows from very similar arguments. Let us first notice that the constraint set mapping $\mathbb{R}^m_+ \rightrightarrows \mathbb{R}^n, \; \mu \mapsto C_\mu$ is a definable mapping. For each $\mu$ in $\mathbb{R}^m_{++}$ we consider the subset $S(\mu)$ of $C_\mu$ consisting of the points at which MFCQ is not satisfied. 
Following \cref{rmk:Hahn-Banach}, we have \begin{equation} \label{eq:singular-point-map} S(\mu) = \Big\{ x \in C_\mu \mid 0 \in \co \, \{\nabla g_i(x) \mid i \in I(x) \} \Big\} . \end{equation} This extends to a definable set-valued mapping $S: \mathbb{R}^m \rightrightarrows \mathbb{R}^n, \; \mu \mapsto S(\mu)$ by setting $S(\mu) = \emptyset$ if $\mu \not \in \mathbb{R}^m_{++}$. Towards a contradiction, we assume that $0$ belongs to the closure of $\dom S$. Using the \cref{lem:curve-selection}, we obtain a definable $\mathcal{C}^1$ curve $[0,1) \to \mathbb{R}^m, \; t \mapsto \mu(t)$ such that $\mu(t) \in \dom S$ for all $t > 0$ and $\mu(0) = 0$. The \cref{lem:monotonicity} combined with the fact that $\mu(t) \in \dom S \subset \mathbb{R}^m_{++}$ for $t \in (0,1)$ and $\mu(0) = 0$, ensures the existence of $\varepsilon > 0$ such that \begin{equation} \label{eq:increase-mu} \dot{\mu}_i(t) > 0 , \quad i = 1,\dots,m , \quad \forall \, t \in (0,\varepsilon) . \end{equation} The set-valued mapping $(0,\varepsilon) \rightrightarrows \mathbb{R}^n, \; t \mapsto S(\mu(t))$ is definable and has nonempty values, hence the \cref{lem:definable-choice} yields the existence of a definable curve $x: (0,\varepsilon) \to \mathbb{R}^n$ such that $x(t) \in S(\mu(t))$ for all $t$. Shrinking $\varepsilon$ if necessary (using the \cref{lem:monotonicity}) we can assume that $x(\cdot)$ is $\mathcal{C}^1$. Being given two definable functions $a, b: (0,\varepsilon) \to \mathbb{R}$, we can apply once more \hyperref[lem:monotonicity]{Lemma~\labelcref{lem:monotonicity}} to see that either $a(t) = b(t)$ or $a(t) > b(t)$ or $b(t) > a(t)$ for $t$ sufficiently small. This implies in particular that there exists a positive real $\varepsilon' \leqslant \varepsilon$ and a nonempty subset $I \subset \{1,\ldots, m\}$ such that $I(x(t)) = I$ for all $t \in (0,\varepsilon')$. 
Indeed, recall that $I(x(t)) = \{1 \leqslant i \leqslant m \mid g_i(x(t)) = \mu_i(t) \}$, where each pair of functions $(g_i(x(\cdot)),\mu_i(\cdot))$, $i=1,\dots,m$, is definable. Hence $I(x(t))$ stabilizes for $t > 0$ sufficiently small. Furthermore, for all $t \in (0, \varepsilon)$, $I(x(t))$ is nonempty because otherwise MFCQ would be satisfied at $x(t)$, which would contradict the fact that $x(t) \in S(\mu(t))$. By definition of $S$, for all $t \in (0,\varepsilon')$ there exist coefficients $\lambda_i(t)$ with $i \in I$ such that \[ \lambda_i(t) \geqslant 0 , \enspace \forall i \in I , \quad \text{and} \quad \sum_{i \in I} \lambda_i(t) = 1 , \] and such that \begin{equation} \label{eq:convex-combination} \sum_{i \in I} \lambda_i(t) \, \nabla g_i(x(t)) = 0 . \end{equation} Taking the scalar product of each member of the above equality with $\dot{x}(t)$, one obtains \[ \sum_{i \in I} \lambda_i(t) \, \< \: \dot{x}(t), \nabla g_i(x(t)) \: > = 0 , \] which can also be written as \[ \sum_{i \in I} \lambda_i(t) \, \frac{d (g_i \circ x)}{dt} (t) = 0 . \] Since each inequality constraint $g_i$, $i \in I$, is active for all $t \in (0,\varepsilon')$, one gets \[ \sum_{i \in I} \lambda_i(t) \, \dot{\mu}_i(t) = 0 , \quad t \in (0,\varepsilon') , \] a contradiction: indeed \Cref{eq:increase-mu} and the fact that $I \neq \emptyset$ indicate that the left-hand side of the latter equality is positive for all $t \in (0,\varepsilon')$. For \cref{it:inner-perturb}, it suffices to notice that Slater's condition guarantees that the set $C_\mu$ with $\mu \in (-\infty,0)^m$ is nonempty for $\mu$ in a neighborhood of $0$. The proof then follows arguments similar to those developed for \cref{it:outer-perturb}. \end{proof} We next discuss some aspects of the hypotheses of \cref{thm:regular-local-perturb}.
\begin{remark} \label{rmk:regular-perturbations} \begin{enumerate}[label=(\alph*),leftmargin=0pt,itemindent=0.25in,labelsep=3pt] \item A similar result cannot be derived for joint perturbations $(\mu,\nu)$ of inequality and equality constraint sets. Consider for instance the functions defined on $\mathbb{R}^2$ by $g(x_1,x_2) = x_2$ and $h(x_1,x_2) = x_2-(x_1)^2$. For all perturbations $\mu \in \mathbb{R}$, the constraint set $C_\mu \cap M_\mu = [g \leqslant \mu, h = \mu]$ contains only the point $(0,\mu)$, at which MFCQ does not hold since $\nabla g(0,\mu) = \nabla h(0,\mu) = (0,1).$ See also \cref{rmk:topology-singularities}. \item The conclusion of \cref{thm:regular-local-perturb} still holds for perturbed constraint sets of the form $C_\mu \cap M_0$. For this, one needs to assume that the equality constraint set $M_0 = [h_1 = 0,\dots,h_r = 0]$ is defined by definable differentiable functions $h_1,\dots,h_r: \mathbb{R}^n \to \mathbb{R}$ whose gradient vectors $\nabla h_j(x)$, $j = 1,\dots,r$, are linearly independent for all $x \in M_0$. \begin{small} \noindent $\left[\right.${\em Sketch of proof.} Only a few changes in the proof of the previous theorem are necessary. The first one is the definition of the set-valued map $S$ \cref{eq:singular-point-map}, which sends perturbation vectors $\mu \in \mathbb{R}^m$ to the set of feasible points where constraint qualification conditions are not satisfied. In this new setting, it becomes \[ S(\mu) = \Big\{ x \in C_\mu \cap M_0 \mid \co \, \{\nabla g_i(x) \mid i \in I(x) \} \cap \spn \, \{ \nabla h_j(x) \mid 1 \leqslant j \leqslant r \} \neq \emptyset \Big\} . \] The second change is \Cref{eq:convex-combination} which characterizes the failure of constraint qualification at point $x(t) \in S(\mu(t))$. Since the gradient vectors of the equality constraints are linearly independent for all the feasible points, this failure of constraint qualification must come from the absence of a vector $y$ satisfying \cref{eq:MFCQ}. 
Hence, following \cref{rmk:Hahn-Banach}, the right-hand side of the equality must be replaced by a linear combination of the gradients $\nabla h_j$ at point $x(t)$, that is, \Cref{eq:convex-combination} now reads \[ \sum_{i \in I} \lambda_i(t) \, \nabla g_i(x(t)) = \sum_{j=1}^r \kappa_j(t) \, \nabla h_j(x(t)) \] for some coefficients $\kappa_j(t) \in \mathbb{R}$, $j = 1, \dots, r$. Then the proof proceeds along the same lines. In particular, taking the scalar product of the right-hand side of the previous equality with $\dot{x}(t)$, one obtains \[ \sum_{j=1}^r \kappa_j(t) \, \< \: \dot{x}(t), \nabla h_j(x(t)) \: > = \sum_{j=1}^r \kappa_j(t) \, \frac{d (h_j \circ x)}{dt} (t) = 0 \] since each equality constraint $h_j$, $j=1,\dots,r$, is constant on the curve $x$.$\left.\right]$ \end{small} \label{it:equality-constraints-i} \item Using small perturbation vectors that are neither positive nor negative may fail to restore MFCQ. A simple example is given on $\mathbb{R}^2$ by \[ g_1(x_1,x_2) = (x_1)^2 + (x_2)^2 - 1 \quad \text{and} \quad g_2(x_1,x_2) = (x_1-2)^2 + (x_2)^2 - 1 . \] The sets $[g_1 \leqslant 0]$ and $[g_2 \leqslant 0]$ delineate two tangent discs. Therefore, MFCQ fails at their contact point $(1,0)$. Consider the perturbation path $t \mapsto (\mu_1(t), \mu_2(t)) = (t^2 - 2t, t^2 + 2t)$, which passes through $(0,0)$ with velocity $(-2,2)$. It can be checked that the constraint set $[g_1 \leqslant \mu_1(t), g_2\leqslant \mu_2(t)]$ is not regular at $(1-t,0)$ for all $t \in (-1,1)$. \item Even though definable functions are not the only class of functions for which a theorem similar to \cref{thm:regular-local-perturb} can be derived\footnote{One can think for instance of continuous convex functions.}, the definability assumption in \cref{thm:regular-local-perturb} cannot be replaced by mere smoothness. Many counterexamples can be given, even when $n=1$.
Consider for instance the strictly increasing $\mathcal{C}^{\infty}$ function $g(x) = \int_0^x \exp(-t^{-2}) \, \sin^2(t^{-1}) dt$ with $x \in \mathbb{R}$, and the set $[g \leqslant 0] = (-\infty,0]$. Obviously the set of regular perturbations $\mathbb{R}eg$ does not contain any segment of the form $(0,\varepsilon)$ with $\varepsilon > 0$. \end{enumerate} \end{remark} We now provide a ``partial perturbation version'' of our main result, which can be proved following the lines of \cref{thm:regular-local-perturb}. It relies on the assumption that the set defined by the first $p$ inequalities is regular. \begin{theorem}[Partial constraint qualification] Let $g_1,\dots,g_m: \mathbb{R}^n \to \mathbb{R}$ be differentiable functions that are definable in the same o-minimal structure. Assume that $C_0 = [g_{1} \leqslant 0,\dots, g_{m} \leqslant 0]$ is nonempty and that the Mangasarian-Fromovitz constraint qualification holds throughout $[g_{1} \leqslant 0,\dots, g_{p} \leqslant 0]$ for some positive integer $p < m$. Then there exists $\varepsilon > 0$ such that, for all perturbations $\mu_{p+1}, \dots, \mu_m \in (0,\varepsilon)$, MFCQ holds throughout $[g_1 \leqslant 0, \dots, g_p \leqslant 0, \, g_{p+1} \leqslant \mu_{p+1}, \dots, g_{m} \leqslant \mu_{m}]$. \qed \label{thm:partial-perturb} \end{theorem} \begin{remark} \label{rmk:equality-constraints-ii} Similarly to \cref{rmk:regular-perturbations}~\labelcref{it:equality-constraints-i}, \cref{thm:partial-perturb} also holds in the setting of fixed equality constraints, in addition to partially perturbed inequality constraints. \end{remark} A simple but important corollary is a kind of partial Slater's qualification condition. \begin{corollary}[Partial Slater's condition] Let $g_1, \dots, g_m: \mathbb{R}^n \to \mathbb{R}$ be differentiable convex functions that are definable in the same o-minimal structure. 
Assume that $C_0 \neq \emptyset$ and that $g_i(x_0) < 0$ for some $x_0 \in \mathbb{R}^n$ and all $i \in \{1,\dots,p\}$ where $p < m$. Then there exists $\varepsilon > 0$ such that, for all perturbations $\mu_{p+1}, \dots, \mu_m \in (0,\varepsilon)$, MFCQ holds throughout $[g_1 \leqslant 0, \dots, g_p \leqslant 0, \, g_{p+1} \leqslant \mu_{p+1}, \dots, g_{m} \leqslant \mu_{m}]$. \qed \label{cor:partial-slater} \end{corollary} \begin{remark} \label{rmk:equality-constraints-iii} The above result is provided without equality constraints for the sake of simplicity: the addition of a finite system of affine constraints\footnote{The linear independence assumption of \cref{rmk:regular-perturbations}~\labelcref{it:equality-constraints-i} is not necessary in this case.} is an easy task. \end{remark} To conclude this subsection, let us emphasize that \cref{thm:regular-local-perturb} applies to several frameworks that are widespread in practice, including polynomial, semi-algebraic, real analytic and many other kinds of definable constraints. \subsection{Diagonal perturbations} If the constraint functions are definable in the same o-minimal structure, then so is the set of regular (resp., singular) perturbations, since it can be described by a first-order formula (see also \cref{thm:genericity-definable}). Thus, a direct application of \cref{thm:regular-local-perturb} leads to the finiteness of singular perturbations along any direction, and in particular along the {\em diagonal}, i.e., for perturbations of the form $(\alpha,\dots,\alpha)$ with $\alpha \in \mathbb{R}$. With a minor abuse of notation, for $\alpha \in \mathbb{R}$, we write $C_\alpha := [g_1 \leqslant \alpha,\dots,g_m \leqslant \alpha]$. \begin{corollary} \label{cor:diagonal-perturb} Let $g_1,\dots,g_m: \mathbb{R}^n \to \mathbb{R}$ be differentiable functions that are definable in the same o-minimal structure.
Then, for all except finitely many perturbations $\alpha \in \mathbb{R}$, the Mangasarian-Fromovitz constraint qualification holds throughout $C_\alpha = [g_1 \leqslant \alpha,\dots,g_m \leqslant \alpha]$. The same conclusion holds for \[ [g_1 \leqslant 0,\dots, g_p \leqslant 0, g_{p+1} \leqslant \alpha, \dots, g_{m} \leqslant \alpha] \] if MFCQ holds throughout $[g_{1} \leqslant 0,\dots, g_{p} \leqslant 0]$ for some $p < m$. \end{corollary} \begin{remark}[Equality and inequality constraints] \label{rmk:equality-constraints-iv} Similarly to \cref{rmk:regular-perturbations}~\labelcref{it:equality-constraints-i}, the above result also holds for perturbed constraint sets of the form $C_\alpha \cap M_0$, when $M_0$ is defined by definable functions that are differentiable and whose gradients are linearly independent at every point $x \in M_0$. \end{remark} \begin{remark}[Diagonal perturbations through nonsmooth Sard's theorem] With a slightly stronger regularity assumption, \cref{cor:diagonal-perturb} can be seen as the nonsmooth definable Sard-type theorem \cite[Cor.~9]{BDLS07}. We next explain this observation. When dealing with diagonal perturbations $\alpha \in \mathbb{R}$, the constraint set $C_\alpha$ can be represented as the lower level set of a single real-valued function. Namely, a point $x$ is in $C_\alpha$ if and only if \[ \max_{1 \leqslant i \leqslant m} g_i(x) \leqslant \alpha . \] Let us define $g = \max_{1 \leqslant i \leqslant m} g_i$ and let us assume that the constraint functions $g_1,\dots,g_m$ are $\mathcal{C}^1$. This implies that the basic chain rule of subdifferential calculus applies to the function $g$, see \cite[Th.~10.6]{RW98}.
Thus we have, for all $x \in \mathbb{R}^n$, \begin{equation*} \widehat{\partial} g(x) = \partial g(x) = \co \, \{\nabla g_i(x) \mid 1 \leqslant i \leqslant m, \; g_i(x) = g(x) \} \end{equation*} where, for any function $f: \mathbb{R}^n \to \mathbb{R}$ and any $x \in \mathbb{R}^n$, $\widehat{\partial} f(x)$ and $\partial f(x)$ denote respectively the {\em Fr\'echet subdifferential} and the {\em limiting/Mordukhovich subdifferential} of $f$ at $x$, see \cite{RW98} for their constructions. Now observe that $\alpha \in \mathbb{R}$ is a singular perturbation, that is, MFCQ is not satisfied at some point of $C_\alpha$, if and only if there exists $x$ such that $g(x) = \alpha$ and $0 \in \partial g(x)$. In other words, the singular diagonal perturbations of the inequality constraint set of \cref{pb:NLP} correspond to the {\em critical values} of $g$. Thus, when the functions $g_1,\dots,g_m$ are definable in the same o-minimal structure, so is $g$, and the finiteness of the singular diagonal perturbations stated in \cref{cor:diagonal-perturb} is equivalent to the finiteness of the critical values of the definable function $g$. Hence, with continuously differentiable functions, \cref{cor:diagonal-perturb} can be seen as a Sard-type theorem for definable functions, which was proved in full generality in \cite[Cor.~9]{BDLS07}. These arguments can also be extended when equality constraints with linearly independent gradients are added to the constraint set, see \cref{rmk:regular-perturbations}~\labelcref{it:equality-constraints-i}. \end{remark} \subsection{A bound on the number of singular perturbations for polynomial optimization} We consider here constraint sets defined by real polynomial functions and we bound the number of singular values for the corresponding perturbed sets. To tackle this problem, we evaluate the number of connected components of some adequate real algebraic sets.
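Before entering the counting argument, here is a one-dimensional toy instance that we add for illustration of the quantity being estimated; it uses only the characterization of singular diagonal perturbations recalled above.

```latex
% Toy instance (added for illustration; n = 1, m = 2, d = 2): counting the
% singular diagonal perturbations of C_alpha = [g_1 <= alpha, g_2 <= alpha].
Take $g_1(x) = x^2$ and $g_2(x) = (x-1)^2$ on $\mathbb{R}$. The set
$C_\alpha = [g_1 \leqslant \alpha, \, g_2 \leqslant \alpha]$ is nonempty if and
only if $\alpha \geqslant \frac{1}{4}$, with $C_{1/4} = \{\frac{1}{2}\}$. At
$\alpha = \frac{1}{4}$ both constraints are active at $x = \frac{1}{2}$ and
\[
\tfrac{1}{2} \, g_1'(\tfrac{1}{2}) + \tfrac{1}{2} \, g_2'(\tfrac{1}{2})
= \tfrac{1}{2} \cdot 1 + \tfrac{1}{2} \cdot (-1) = 0 ,
\]
so $0 \in \co \, \{g_1'(\tfrac{1}{2}), g_2'(\tfrac{1}{2})\}$, MFCQ fails and
$\alpha = \frac{1}{4}$ is singular. For every $\alpha > \frac{1}{4}$, one has
$C_\alpha = [1-\sqrt{\alpha}, \sqrt{\alpha}]$, and the unique active constraint
at each endpoint has a nonvanishing derivative there, so MFCQ holds throughout
$C_\alpha$. Hence there is exactly one singular diagonal perturbation, to be
compared with the bound
$d \, (2d-1)^{n} \, (2d+1)^{m} = 2 \cdot 3 \cdot 25 = 150$ of
\cref{thm:Milnor-Thom}.
```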
A key result regarding this evaluation is provided by Milnor-Thom's bound: given any polynomial map $f: \mathbb{R}^p \to \mathbb{R}^q$, the number of connected components of the set of zeros of $f$, $\{x \in \mathbb{R}^p \mid f(x) = 0\}$, is bounded by \begin{equation} \label{eq:Milnor-Thom} d \, (2d-1)^{p-1} \end{equation} where $d$ is the maximal degree of the polynomial functions $f_j$ for $j=1,\dots,q$, see e.g., \cite{BLR91}. \begin{theorem} \label{thm:Milnor-Thom} Let $g_1,\dots,g_m: \mathbb{R}^n \to \mathbb{R}$ be polynomial functions whose degree is bounded by $d$. Let $\{I,J\}$ be a partition of the set of indices $\{1,\dots,m\}$, possibly trivial, such that the Mangasarian-Fromovitz constraint qualification holds throughout $[g_j \leqslant 0,\, j \in J]$. Then, for the perturbed sets $[g_i \leqslant \alpha, g_j \leqslant 0, i \in I, j \in J]$ with $\alpha$ ranging in $\mathbb{R}$, the number of singular perturbations is bounded by \[ d \, (2d-1)^{n} \, (2d+1)^{m} . \] \end{theorem} \begin{proof} Denoting by $|J|$ the cardinality of $J$, we assume that $|J| \in \{0,\dots,m-1\}$ (otherwise $I$ is empty and there is nothing to prove). For $\alpha \in \mathbb{R}$, let \[ C_{I,\alpha} := [g_i \leqslant \alpha, g_j \leqslant 0, \, i \in I, \, j \in J] . \] If $\alpha$ is a singular perturbation, then there exists a point $x \in C_{I,\alpha}$ for which $0 \in \co \, \{ \nabla g_i(x) \mid i \in I(x) \}$. So, there exists a subset of indices $K \subset I(x)$, which we fix, and positive scalars $\lambda_i > 0$, $i \in K$, such that $\sum_{i \in K} \lambda_i \, \nabla g_i(x) = 0$ and $\sum_{i \in K} \lambda_i = 1$. Furthermore, $K \not \subset J$ since MFCQ holds throughout $[g_j \leqslant 0, j \in J]$. Let $L := I(x)$ denote the set of active indices at $x$, which we also fix.
Thus, the sets $K$ and $L$ being fixed, the tuple $(x,\lambda,\alpha) \in \mathbb{R}^n \times \mathbb{R}^K \times \mathbb{R}$ is a solution of the polynomial system \begin{equation} \label{eq:singular-prtb-system} \left\{ \begin{aligned} & \sum_{i \in K} \lambda_i \, \nabla g_i(x) = 0 , \\ & \sum_{i \in K} \lambda_i = 1 , \\ & g_j(x) = \alpha , \quad j \in L \cap I , \\ & g_j(x) = 0 , \quad j \in L \cap J , \\ \end{aligned} \right. \end{equation} and satisfies the following additional constraints: $\lambda \in \mathbb{R}_{++}^K$, $g_\ell(x) < \alpha$ for all $\ell \in I \setminus L$, and $g_\ell(x) < 0$ for all $\ell \in J \setminus L$. The first step of the proof is to show that the number of singular perturbations is bounded above by the number of connected components of the set of solutions of \labelcref{eq:singular-prtb-system} for all possible choices of $K$ and $L$. This is done by constructing an injection from the set of singular perturbations to these connected components. Fix a singular value $\alpha$ and choose a subset $L \subset \{1,\dots,m\}$ with maximal cardinality among all the sets of active constraints $I(x)$ such that MFCQ is not satisfied at $x \in C_{I,\alpha}$. Then pick a subset $K \subset L$ with minimal cardinality among all the subsets $K' \subset L$ such that the system~\labelcref{eq:singular-prtb-system} with $K$ replaced by $K'$ has a solution $(x,\lambda,\alpha) \in \mathbb{R}^n \times \mathbb{R}^{K'} \times \mathbb{R}$ with $x \in C_{I,\alpha}$ and $\lambda \in \mathbb{R}_+^{K'}$. Let $(\bar{x},\bar{\lambda},\alpha)$ be such a solution for the particular choice of $K$ and $L$. Note that $K \not \subset J$ since $[g_j \leqslant 0, j \in J]$ is regular. Also note that $\bar{\lambda} \in \mathbb{R}_{++}^K$ by minimality of $|K|$, and that $g_\ell(\bar{x}) < \alpha$ for all $\ell \in I \setminus L$, and $g_\ell(\bar{x}) < 0$ for all $\ell \in J \setminus L$ by maximality of $|L|$.
Let $Q \subset \mathbb{R}^n \times \mathbb{R}^K \times \mathbb{R}$ be the connected component of the set of solutions of~\labelcref{eq:singular-prtb-system} corresponding to $K$ and $L$, containing the tuple $(\bar{x},\bar{\lambda},\alpha)$. We next prove that \[ Q \subset S(\alpha,K,L) := \{x \in C_{I,\alpha} \mid I(x) = L\} \times \mathbb{R}_{++}^K \times \{ \alpha \} . \] Toward a contradiction, assume that the above inclusion does not hold. There exists therefore a continuous path $(x(\cdot), \lambda(\cdot), \alpha(\cdot))$ from $[0,1]$ to $Q$ such that \[ \begin{cases} (x(0), \lambda(0), \alpha(0)) = (\bar{x},\bar{\lambda},\alpha) , \\ (x(1), \lambda(1), \alpha(1)) \notin S(\alpha,K,L) . \end{cases} \] Let $t = \sup \{s \in [0,1] \mid (x(s), \lambda(s), \alpha(s)) \in S(\alpha,K,L) \}$. By continuity we have $\alpha(t) = \alpha$, $x(t) \in C_{I,\alpha}$ and $\lambda(t) \in \mathbb{R}_+^K$. If either $I(x(t)) \neq L$ or $\lambda(t) \notin \mathbb{R}_{++}^K$, then there is a contradiction with the maximality of $|L|$ (since we already have $L \subset I(x(t))$) or with the minimality of $|K|$. Hence, we have $I(x(t)) = L$ and $\lambda(t) \in \mathbb{R}_{++}^K$. Since in addition $x(t) \in C_{I,\alpha}$ and $\alpha(t) = \alpha$, we have $(x(t), \lambda(t), \alpha(t)) \in S(\alpha,K,L)$. Finally, we have $t < 1$ since $(x(1), \lambda(1), \alpha(1)) \not\in S(\alpha,K,L)$. Using the continuity of the path and \labelcref{eq:singular-prtb-system}, there exists $\varepsilon > 0$ such that for all $s \in \left[ t, t+\varepsilon \right)$, we have $x(s) \in C_{I,\alpha(s)}$ with $I(x(s)) = L$ and $\lambda(s) \in \mathbb{R}_{++}^K$. This implies that $\alpha(s)$ is a singular perturbation for all $s \in \left[ t, t+\varepsilon \right)$. Combining the continuity of $\alpha(\cdot)$ and \cref{cor:diagonal-perturb}, $\alpha(\cdot)$ is constant on $\left[ t, t+\varepsilon \right)$. 
Hence, we have $(x(s), \lambda(s), \alpha(s)) \in S(\alpha,K,L)$ for all $s \in \left[ t, t+\varepsilon \right)$. From the definition of $t$, we obtain $t \geqslant t + \varepsilon$ which is contradictory since $\varepsilon > 0$. Thus, for every singular perturbation $\alpha$, there exist subsets $K \subset L \subset \{1,\dots,m\}$ with $K \cap I \neq \emptyset$ such that the set of solutions of the polynomial system~\labelcref{eq:singular-prtb-system} with this choice of $K$ and $L$ has at least one connected component included in $\mathbb{R}^n \times \mathbb{R}^K \times \{\alpha\}$. Hence the mapping sending every singular perturbation to this connected component is injective. So we have just proved that the number of singular perturbations is upper bounded by the number of connected components of the set of solutions of \labelcref{eq:singular-prtb-system} for all possible choices of $K$ and $L$. We can then deduce from Milnor-Thom's bound \labelcref{eq:Milnor-Thom} an upper bound for the number of singular perturbations $\alpha$ by summation over all possible choices of $K$ and $L$. Denote $p = |I| \in \{1,\dots,m\}$. In the computation below we denote by $\ell_1, \ell_2$ the cardinalities of $L \cap I$ and $L \cap J$, respectively, and by $k_1, k_2$ the cardinalities of $K \cap I$ and $K \cap J$, respectively. Since the system \labelcref{eq:singular-prtb-system} has degree $d$ and $n + k_1 + k_2 + 1$ variables, the number of singular perturbations is bounded by \begin{align*} & \sum_{\substack{ 1 \leqslant \ell_1 \leqslant p \\ 0 \leqslant \ell_2 \leqslant m-p}} \mybinom{p}{\ell_1} \mybinom{m-p}{\ell_2} \sum_{\substack{ 1 \leqslant k_1 \leqslant \ell_1 \\ 0 \leqslant k_2 \leqslant \ell_2}} \mybinom{\ell_1}{k_1} \mybinom{\ell_2}{k_2} \, d \, (2d-1)^{n+k_1+k_2} \\ & \qquad = d \, (2d-1)^n \, (2d+1)^{m-p} \, \big( (2d+1)^p - 2^p \big) , \\ & \qquad = d \, (2d-1)^n \, (2d+1)^m \, \left( 1 - \left( \frac{2}{2d+1} \right)^p \right) .
\end{align*} To conclude, observe that \begin{equation} \label{eq:IJ} \frac{1}{3} \leqslant 1- \left( \frac{2}{2d+1} \right)^p \leqslant 1 \end{equation} for all $d \geqslant 1$ and $1 \leqslant p \leqslant m$. \end{proof} \begin{remark} \label{rmk:equality-constraints-v} \begin{enumerate}[label=(\alph*),leftmargin=0pt,itemindent=0.25in,labelsep=3pt] \item As attested by a forthcoming example, the choice of a partition $(I,J)$ has a very marginal impact on the global bound, which we have neglected in our main estimate \labelcref{eq:Milnor-Thom}\footnote{Our proof shows that it evolves within the interval~$[1/3,1]$, see \labelcref{eq:IJ}.}. \item Let $h_1, \dots, h_r: \mathbb{R}^n \to \mathbb{R}$ be polynomial functions with maximal degree $d$ such that the set $[h_1 = 0, \dots, h_r = 0]$ satisfies MFCQ (i.e., the first regularity assumption in \cref{def:MFCQ}). Then, with a minor adaptation of the above proof, we can show that for the perturbed sets $[g_i \leqslant \alpha, g_j \leqslant 0, i \in I, j \in J] \cap [h_1 = 0, \dots, h_r = 0]$ the number of singular perturbations $\alpha \in \mathbb{R}$ is bounded by \[ d (2d-1)^{n+r} (2d+1)^m . \] Indeed, if $\alpha$ is singular, then there exists a tuple $(x,\lambda,\kappa,\alpha) \in \mathbb{R}^n \times \mathbb{R}_{++}^K \times \mathbb{R}^r \times \mathbb{R}$ that is a solution of a polynomial system similar to \labelcref{eq:singular-prtb-system} with the following changes: \begin{itemize} \item add the $r$ equality constraints $h_j(x) = 0$, $j = 1, \dots, r$; \item replace the right-hand side of the first equality by the linear combination\\ $\sum_{j=1}^r \kappa_j \, \nabla h_j(x)$. \end{itemize} The rest of the proof follows the exact same lines, with a trivial adaptation of the notation. In particular, we do not need to take into account the values of the coefficients $\kappa_j$, $j = 1, \dots, r$, contrary to the $\lambda_i$, $i \in K$.
Using the same notation as in the proof, this new system has degree $d$ and $n+k_1+k_2+r+1$ variables. Whence the bound. \end{enumerate} \end{remark} Milnor-Thom's bound \labelcref{eq:Milnor-Thom}, and a fortiori the bound in \cref{thm:Milnor-Thom}, is not sharp, but one may ask whether it is of the right order of magnitude. The following examples show that this is indeed the case, at least regarding the dependence with respect to the degree $d$ of the polynomials and the dimension $n$ of the base space. They also illustrate the absence of sensitivity of our bound with respect to the choice of the partition $(I,J)$. Indeed, the examples show that even if all but one of the constraints define a regular set, the number of singular perturbations generated by the last constraint is of the right order. In the first example, which is thoroughly explained, the degree is fixed to $d=2$, and the number of singular perturbations is shown to be exponential with respect to $n$. In the second example, the number of singular diagonal perturbations is shown to be highly dependent on the degree $d$. \begin{example} \label{ex:2n-singular} Here, we construct an inequality constraint set in $\mathbb{R}^n$ defined by $n+1$ polynomial functions of degree $2$, $n$ of which are convex. The number of singular perturbations corresponding to a variation of the unique nonconvex constraint is $3^n-1$. Let $a \in \mathbb{R}^n$ be a point in $(-1,1)^n$. Then, for $\alpha \in \mathbb{R}$, define the constraint set $C_{0,\alpha}$ as the set of points $x \in \mathbb{R}^n$ such that \[ \left\{ \begin{aligned} g_0(x) & = 4n - \sum_{i=1}^n (x_i-a_i)^2 \leqslant \alpha , \\ g_i(x) & = (x_i)^2 \leqslant 1 , \quad i = 1,\dots,n . \end{aligned} \right. \] For $\alpha < 4n$, the first inequality defines the complement in $\mathbb{R}^n$ of the open ball centered at point $a$ with radius $\sqrt{4n-\alpha}$, denoted by $B(a,\sqrt{4n-\alpha})$.
As for the $n$ last inequalities, they are convex and define the hypercube $[-1,1]^n$. Observe that for $\alpha \leqslant 0$, $C_{0,\alpha}$ is empty since $[-1,1]^n$ is strictly included in $B(a,\sqrt{4n-\alpha})$, whereas for $\alpha \geqslant 4n$, $C_{0,\alpha} = [-1,1]^n$. We next show that a perturbation $\alpha$ is singular whenever a face of $[-1,1]^n$ and the ball $B(a,\sqrt{4n-\alpha})$ are tangent. First note that the constraint sets $[g_0 \leqslant \alpha]$ and $[g_1 \leqslant 1,\cdots,g_n \leqslant 1]$ both satisfy MFCQ. Hence, the constraint qualification for $C_{0,\alpha}$ may fail only at points where the constraint $g_0$ and at least one of the constraints $g_i$ with $i \in \{1,\cdots,n\}$ are active, that is, at intersection points between the boundary of the hypercube $[-1,1]^n$ and the boundary of the ball $B(a,\sqrt{4n-\alpha})$. Let $z$ be such a point for a given $\alpha$. There exists a nonempty subset of indices $I$ and integers $v_i \in \{\pm 1\}$ for $i \in I$ such that $z_i = v_i$ for all $i \in I$ and $|z_j| < 1$ for all $j \notin I$. Then MFCQ is not satisfied at $z$ if and only if the convex hull of the gradients $\nabla g_0(z)$ and $\nabla g_i(z)$ with $i \in I$ contains $0$, and since $[-1,1]^n$ is qualified, this is equivalent to $-\nabla g_0(z)$ being in the convex cone generated by the gradients $\nabla g_i(z)$, $i \in I$. Since $-\nabla g_0(z) = 2(z-a)$ and $\nabla g_i(z) = 2 v_i e_i$ for every $i \in I$, where $e_i$ denotes the $i$th coordinate vector of $\mathbb{R}^n$, the latter condition holds if and only if $z_j = a_j$ for all $j \notin I$, i.e., if and only if $z$ is the orthogonal projection of $a$ on the face $F = \{ x \in \mathbb{R}^n \mid x_i = v_i, \; i \in I, \; |x_j| \leqslant 1, \; j \notin I\}$. In other words, MFCQ is not satisfied at point $z$ if and only if a face of the hypercube $[-1,1]^n$ and the ball $B(a,\sqrt{4n-\alpha})$ are tangent at $z$.
Now, given $k \in \{0,\ldots,n-1\}$ there are ${n \choose k} 2^{n-k}$ faces of dimension $k$ in the cube $[-1,1]^n$. Thus, by choosing $a \in (-1,1)^n$ adequately so that, for all $\alpha$, $B(a,\sqrt{4n-\alpha})$ is tangent to at most one face of $[-1,1]^n$, we deduce that the total number of singular perturbations is $\sum_{k=0}^{n-1} {n \choose k} 2^{n-k} = 3^n-1$. \Cref{fig:2n-singular} shows a representation of $C_{0,\alpha}$ in $\mathbb{R}^2$ for each singular value $\alpha$. \begin{figure} \caption{Singular perturbations of a constraint set (hatched area) defined by degree~2 polynomials.} \label{fig:2n-singular} \end{figure} \end{example} \begin{remark} The dependence of the number of singular values in the previous exam\-ple, $3^n-1$, with respect to $m$ and $n$ does not appear clearly since $m = n+1$. In this regard, the gap between this number and the bound predicted by \cref{thm:Milnor-Thom}, $2 \times 3^n \times 5^{n+1} = 10 \times 15^n$, questions the relevance of the exponential term in $m$ appearing in \cref{thm:Milnor-Thom}. In order to better understand this dependence, we could think of an example similar to \cref{ex:2n-singular} where the hypercube would be replaced by a polytope with $m$ facets, hence defined by $m$ linear constraints (instead of $2n$ in \cref{ex:2n-singular}). However, the maximum number of vertices of such a polytope, given by the upper bound theorem \cite{McM70}, is of order $O(m^{\lfloor n/2 \rfloor})$ (see~\cite{Sei95}). Hence, such an example could not have a number of singular perturbations exponential with respect to $m$. It then remains an open question to understand the dependence of the maximum number of singular values with respect to $m$ and $n$. \end{remark} \begin{example} We build an example in $\mathbb{R}^n$ with $n+1$ polynomial constraints of degree at most $2d$, and we show that the perturbation of a unique constraint generates at least $d^n$ singular values.
For any even integer $d$, let us consider the polynomial $Q_d = \prod_{k=1}^d (X^2 - k^2)$. Let $H$ be the set of points $x \in \mathbb{R}^n$ such that $g_i(x) = Q_d(x_i) \leqslant 0$, $i = 1,\dots,n$. The set $H$ is a disjoint union of $d^n$ boxes. More precisely, since $d$ is even, $Q_d$ is nonpositive on the intervals $[2k-1, 2k]$, $k=1,\dots,d/2$, and on their symmetrical images with respect to $0$. Then $H$ is the (disjoint) union of the $d^n$ boxes \begin{equation} \label{eq:boxes} H(v,k) := \prod_{i=1}^n v_i \, [2k_i-1, 2k_i] , \quad v \in \{ \pm 1 \}^n , \quad k \in \left\{1, \dots, d/2 \right\}^n . \end{equation} Note that all the boxes \labelcref{eq:boxes} are included in $[-d,d]^n$. Let $a$ be some point in $(-d,d)^n$ and for $\alpha \in \mathbb{R}$, define $C_{0,\alpha}$ as the set of points of $H$ that satisfy in addition $g_0(x) = 4 n d^2 - \|x-a\|^2 \, \leqslant \, \alpha$. \Cref{fig:dn-singular} displays $C_{0,\alpha}$ for $d = 4$. \begin{figure} \caption{Constraint set (hatched area) defined by $n+1$ polynomials of degree at most $2d$ with $d^n$ singular perturbations ($n=2$, $d=4$).} \label{fig:dn-singular} \end{figure} Let us follow the arguments of \cref{ex:2n-singular}. Observe that for each vertex $v \in \{ \pm 1 \}^n$ and for each tuple of indices $k \in \{1,\dots,d/2\}^n$, there exists a unique perturbation $\alpha \in (0, 4 n d^2)$ such that MFCQ does not hold at point $(2 k_i \, v_i)_{1 \leqslant i \leqslant n}$, that is, when the box $H(v,k)$ defined in~\labelcref{eq:boxes} and the sphere centered at $a$ with radius $\sqrt{4 n d^2 - \alpha}$ have a unique contact point (see \cref{fig:dn-singular}). Finally, by choosing $a$ adequately, it is possible to show that all the $d^n$ singular perturbations mentioned above are distinct.
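For the instance of \cref{fig:dn-singular} ($n = 2$, $d = 4$), the $d^n = 16$ candidate values $\alpha = 4nd^2 - \|z - a\|^2$, with $z$ ranging over the outer corners $(2 k_i v_i)_i$, can be enumerated directly. The short Python check below uses the hypothetical generic center $a = (1/10, 1/4)$ and exact rational arithmetic, and confirms that the $16$ resulting perturbations are pairwise distinct.

```python
from fractions import Fraction
from itertools import product

# Check of the example for n = 2, d = 4: the d^n = 16 box corners
# (2 k_i v_i) yield 16 distinct singular perturbations
# alpha = 4 n d^2 - ||z - a||^2 for a generic center a.  Here
# a = (1/10, 1/4) is a hypothetical choice; exact Fraction arithmetic
# rules out spurious ties caused by rounding.
n, d = 2, 4
a = (Fraction(1, 10), Fraction(1, 4))

alphas = set()
for v in product((-1, 1), repeat=n):                    # vertex signs
    for k in product(range(1, d // 2 + 1), repeat=n):   # box indices
        z = tuple(2 * ki * vi for ki, vi in zip(k, v))  # outer corner of H(v,k)
        alphas.add(4 * n * d**2
                   - sum((zi - ai) ** 2 for zi, ai in zip(z, a)))

print(len(alphas))   # 16 = d**n distinct singular perturbations
```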
\end{example} \section{Applications to optimization algorithms} \label{sec:applications} We illustrate here the results of \Cref{sec:finite-singularities} through some classical algorithms for nonlinear optimization. Our approach consists in embedding the original problem within some one-parameter family of optimization problems: \begin{equation} \label[prb]{pb:parametric} \tag{$\mathcal{P}_\prtb$} \begin{aligned} \text{minimize} & \enspace f(x) \\ \text{subject to} & \enspace g_1(x) \leqslant \alpha, \dots, g_m(x) \leqslant \alpha, \end{aligned} \end{equation} where $f, g_1, \dots, g_m:\mathbb{R}^n\to \mathbb{R}$ are differentiable definable functions. A first obvious but important consequence is that any algorithm which is ope\-ratio\-nal under the standard qualification condition can be applied to \cref{pb:parametric} except perhaps for a finite number of parameters $\alpha$. In view of the fact that $\val \mathcal{P}_\prtb$ tends to $\val (\mathcal{P}_0)$ (see \cref{lem:continuity-value}), it provides a natural way of approximating $(\mathcal{P}_0)$. This can be illustrated in a straightforward manner with many types of algorithms, see, e.g., \cite{NW06,GW12,Ber16}, see also \cite{homotopy} for continuation techniques in optimization. Estimating the complexity of such an approach is a matter for future research\footnote{It is likely to be connected to the results from \cite{Fra90} and \cite{FQ12}.}. In this spirit of a direct approximation, we provide an illustration involving SDP relaxations on the KKT ideal which improves a series of results of \cite{DNS06,DNP07,ABM14}. Another family of applications considered below is provided by infeasible SQP methods, which often require strong qualification assumptions. \subsection{Infeasible Sequential Quadratic Programming} We consider the {\em Extended Sequential Quadratic Method}, \hyperref[algo:esqm]{ESQM}\xspace, proposed by Auslender~\cite{Aus13} and based on an $\ell^\infty$ penalty function.
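On a one-dimensional toy problem, the iteration can be sketched in a few lines of Python (a minimal illustration, not Auslender's general implementation; the data $f(x) = x$, $g_1(x) = x^2$, $\alpha = 1$ and the parameter choices are hypothetical but admissible). In dimension one with a single constraint, the subproblem of Step~2 below reduces to minimizing a convex piecewise-quadratic function of $d = y - x_k$, whose minimizer is one of two smooth critical points or the kink of the penalty term.

```python
# Minimal 1-D sketch of the ESQM iteration (illustrative only):
# minimize f(x) = x subject to g(x) = x^2 <= alpha = 1.
# f' is constant and g' is 2-Lipschitz, so lam = 1 and lam_p = 2 are
# admissible choices; beta and delta are hypothetical.
f_grad = lambda x: 1.0
g = lambda x: x * x
g_grad = lambda x: 2.0 * x
alpha, lam, lam_p, beta, delta = 1.0, 1.0, 2.0, 1.0, 1.0

x = 0.0                                    # x_0 lies in C_alpha
for _ in range(50):
    c = lam + beta * lam_p
    # Step 2: minimize phi(d) = f'(x) d + beta * max(0, g(x) + g'(x) d - alpha)
    # + (c/2) d^2 over d = y - x; the minimizer of this convex piecewise
    # quadratic is one of the two smooth critical points or the kink.
    phi = lambda d: (f_grad(x) * d
                     + beta * max(0.0, g(x) + g_grad(x) * d - alpha)
                     + 0.5 * c * d * d)
    cands = [-f_grad(x) / c, -(f_grad(x) + beta * g_grad(x)) / c]
    if g_grad(x) != 0.0:
        cands.append((alpha - g(x)) / g_grad(x))   # kink of the max term
    d = min(cands, key=phi)
    # Step 3: increase beta when the linearized constraint is violated
    # (small tolerance to absorb rounding).
    if g(x) + g_grad(x) * d > alpha + 1e-12:
        beta += delta
    x += d

print(x)   # converges to the KKT point x = -1
```

Since MFCQ holds throughout $C_1 = [-1,1]$, the iterates converge to the KKT point $x = -1$, in accordance with the convergence theorem stated below.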
Other methods could be treated as well, for instance Fletcher's S$\ell^1$QP \cite{Fle85}. We make the following very basic assumptions: \begin{assumption} \label{asm:esqm} \leavevmode \begin{enumerate} \item {\em (Regularity).} The functions $f, g_1, \dots, g_m: \mathbb{R}^n \to \mathbb{R}$ are $\mathcal{C}^2$ with Lipschitz continuous gradients. We denote by $L, L_1,\dots,L_m > 0$ their Lipschitz constants, respectively. \label{it:regularity} \item {\em (Compactness).} The constraint sets $C_\alpha = [g_1 \leqslant \alpha,\dots,g_m \leqslant \alpha]$ are compact and nonempty for all $\alpha \geqslant 0$. \label{it:compact} \item {\em (Boundedness).} $\inf_{x \in \mathbb{R}^n} f(x) > -\infty$. \label{it:boundedness} \end{enumerate} \end{assumption} The general SQP method we consider, \hyperref[algo:esqm]{ESQM}\xspace, is described below. The strength of the following general convergence theorem is to rely merely on semi-alge\-brai\-city/de\-fina\-bility and boun\-ded\-ness assumptions. In particular, it does not require any qualification assumptions whatsoever. Another distinctive feature of this result is that it makes it possible to treat all at once many issues such as nonconvexity, continuum of stationary points, infeasibility, nonlinear constraints or oscillations (see \cite{BP16} for more on the key issues). \begin{algorithm}[htb] \caption{-- Extended Sequential Quadratic Method \cite{Aus13,BP16}} \label{algo:esqm} \begin{algorithmic} \STATE {\bfseries Step 1}: Choose $x_0 \in C_\alpha$, $\beta_0 > 0$, $\delta > 0$, $\lambda \geqslant L$ and $\lambda' \geqslant \max_{i} L_i$, and set $k \leftarrow 0$.
\STATE \label{it:min-pb} {\bfseries Step 2}: Compute $x_{k+1}$, a solution (along with some $s \in \mathbb{R}$) of \vskip-1\baselineskip \begin{align*} \operatorname*{minimize}_{s \in \mathbb{R} , \: y \in \mathbb{R}^n} \enspace & f(x_k) + \<\nabla f(x_k),y-x_k> + \beta_k s + \frac{\lambda + \beta_k \lambda'}{2} \|y-x_k\|^2 \\ \text{s.t.} \enspace & g_i(x_k) + \<\nabla g_i(x_k),y-x_k> \leqslant \alpha + s \enspace , \quad i = 1,\dots,m, \\ & s \geqslant 0. \end{align*} \vskip-0.5\baselineskip \STATE {\bfseries Step 3}: If \enspace $g_i(x_k) + \<\nabla g_i(x_k),x_{k+1}-x_k> \leqslant \alpha$, \enspace $i = 1,\dots,m$, \enspace then \enspace $\beta_{k+1} \leftarrow \beta_k$. \\ \newlength{\myl} \settowidth{\myl}{{\bfseries Step 3}:\ } \hspace{\myl}Else \enspace $\beta_{k+1} \leftarrow \beta_k + \delta$. \STATE {\bfseries Step 4}: $k \leftarrow k+1$, go to step 2. \end{algorithmic} \end{algorithm} \begin{theorem}[Large penalty parameters yield convergence of ESQM] \label{thm:esqm} Assume that \cref{it:regularity,it:compact,it:boundedness} hold (smoothness, compactness, boundedness). For all parameters $\alpha \geqslant 0$, except for a finite number of them, there exists a number $\beta(\alpha)\geqslant 0$ such that {\rm \hyperref[algo:esqm]{ESQM}\xspace} initialized with any $\beta_0 \geqslant \beta(\alpha)$ generates a sequence $(x_k)_{k \in \mathbb{N}}$ that converges to some KKT point of \cref{pb:parametric}. \end{theorem} \begin{proof} Following \cref{cor:diagonal-perturb}, there exists a finite family of parameters $\mathcal{A} \subset \mathbb{R}_+$ such that for all $\alpha \notin \mathcal{A}$, MFCQ holds throughout $C_\alpha$. Let us fix a parameter $\alpha \in \mathbb{R}_+ \setminus \mathcal{A}$. Then there exists a positive real number $\varepsilon$ such that $[\alpha,\alpha+\varepsilon] \subset \mathbb{R}_{+} \setminus \mathcal{A}$.
This implies that for every $x \in C_{\alpha+\varepsilon}$, if $g_j(x) \geqslant \alpha$ for some index $j$, then there exists $y \in \mathbb{R}^n$ such that $\<y, \nabla g_i(x) > < 0$ for all indices $i$ such that $g_i(x) = \max_{1 \leqslant \ell \leqslant m} g_\ell(x)$. This follows from the fact that $x \in C_{\alpha'}$ where $\alpha' = \max_{1 \leqslant i \leqslant m} g_i(x)$ is such that $\alpha \leqslant \alpha' \leqslant \alpha + \varepsilon$, so that MFCQ holds at $x \in C_{\alpha'}$. Let $f_{\min} = \inf_{x \in \mathbb{R}^n} f(x)$ and $g_0$ be the constant function equal to $\alpha$. Set $d_k = x_{k+1} - x_k$. For every $k \in \mathbb{N}$, we have \begin{align*} & \frac{1}{\beta_{k+1}} (f(x_{k+1}) - f_{\min}) + \max_{0 \leqslant i \leqslant m} g_i(x_{k+1}) \\ & \hspace{6em} \leqslant \frac{1}{\beta_k} (f(x_{k+1}) - f_{\min}) + \max_{0 \leqslant i \leqslant m} g_i(x_{k+1}) , \\ & \hspace{6em} \leqslant \frac{1}{\beta_k} (f(x_k) + \< \nabla f(x_k), d_k > - f_{\min}) \\ & \hspace{12em} + \max_{0 \leqslant i \leqslant m} \big( g_i(x_k) + \< \nabla g_i(x_k), d_k > \big) + \frac{\lambda + \beta_k \lambda'}{2 \beta_k} \|d_k\|^2 , \\ & \hspace{6em} \leqslant \frac{1}{\beta_k} (f(x_k) - f_{\min}) + \max_{0 \leqslant i \leqslant m} g_i(x_k) , \end{align*} where the second inequality comes from the Lipschitz continuity of the gradients of the functions involved, and the third inequality follows from the minimization problem in \cref{it:min-pb} of~\hyperref[algo:esqm]{ESQM}\xspace. Now choose $\beta_0 \geqslant (f(x_0)-f_{\min}) / \varepsilon$. By a trivial induction, we deduce that, for every integer $k \in \mathbb{N}$, \[ \max_{0 \leqslant i \leqslant m} g_i(x_k) \leqslant \frac{1}{\beta_0} (f(x_0) - f_{\min}) + \max_{0 \leqslant i \leqslant m} g_i(x_0) \leqslant \alpha + \varepsilon .
\] Hence, all the points $x_k$ generated by \hyperref[algo:esqm]{ESQM}\xspace with the latter choice of $\beta_0$ lie in $C_{\alpha+\varepsilon}$, and so satisfy the following qualification condition (an essential ingredient in \cite{BP16}): if $g_j(x) \geqslant \alpha$ for some index $j$ and some $x$ in $\mathbb{R}^n$, then there exists $y \in \mathbb{R}^n$ such that $\<y, \nabla g_i(x) > < 0$ for all indices $i$ such that $g_i(x) = \max_{1 \leqslant \ell \leqslant m} g_\ell(x)$. The fact that any cluster point of $(x_k)_{k \in \mathbb{N}}$ is a KKT point of \cref{pb:parametric} readily follows from \cite[Th.~2]{BP16} (see also \cite[Th.~3.1]{Aus13}). The convergence of $(x_k)_{k \in \mathbb{N}}$ then follows from \cite[Th.~3]{BP16} and the definability assumptions. \end{proof} \begin{remark}[Stabilization of penalty parameters] \begin{enumerate}[label=(\alph*),leftmargin=0pt,itemindent=0.25in,labelsep=3pt] \item For a fixed $\alpha$, the sequence of penalty parameters $\beta_k$ is constant after a finite number of iterations. This was already an essential result in \cite{Aus13} which still holds here. \item As in \cite{BP16}, rates of convergence are available when the data are in addition real semi-algebraic. \end{enumerate} \end{remark} \subsection{Exact relaxation in polynomial programming} A standard approach for solving \cref{pb:parametric} when data are polynomial relies on hierarchies of semidefinite programming, see \cite{Las01,Las10}. It is known that, generically, these hierarchies are exact, meaning that they converge in a finite number of steps (see \cite{Nie14}), but this behavior cannot be detected a priori. In order to construct SDP hierarchies with guaranteed finite convergence behavior, some authors introduced redundant constraints in the hierarchies.
The work presented in \cite{DNS06} investigates unconstrained problems and the convergence of SDP hierarchies over the variety of critical points, while \cite{DNP07} considers more generally KKT ideals. The recent work \cite{ABM14} extends these results further and proposes a relaxation which is either exact or detects in finitely many steps the absence of ``KKT minimizers'' \cite[Th.~6.3]{ABM14}. A drawback of this method is that it fails whenever optimal solutions of \cref{pb:parametric} do not satisfy KKT conditions\footnote{Abril Bucero and Mourrain gave hints to deal with such a situation, but at the expense of an increasing complexity in the construction of the hierarchies.}. \Cref{cor:diagonal-perturb} shows that this issue is only a concern for finitely many values of the perturbation parameter $\alpha$ in \hyperref[pb:parametric]{$\mathcal{P}_\prtb$} and that the relaxation remains exact outside of this finite set. We point out that the constructions presented in \cite{DNS06,DNP07}, similar in their approach, require much stronger assumptions on the constraint ideal than the one we propose. We now explain these facts; \cref{sec:polynomials} contains the basic notation/definition used below. We first describe the polynomial problem from which the relaxation in \cite{ABM14} is constructed. Let $\alpha \in \mathbb{R}$ be such that $C_\alpha = [g_1 \leqslant \alpha,\dots,g_m \leqslant \alpha]$ is nonempty. The Lagrangian associated with \cref{pb:parametric} is defined for $x \in \mathbb{R}^n$ and $\lambda \in \mathbb{R}^m$ by \[ L^\alpha(x,\lambda) := f(x) + \sum_{i = 1}^m \lambda_i \, ( g_i(x) - \alpha ) . \] Then we introduce the KKT ideal defined on $\mathbb{R}[x,\lambda]$ by \[ I_\textsc{kkt}^\alpha := \left\langle \frac{\partial L^\alpha}{\partial x_1},\dots, \frac{\partial L^\alpha}{\partial x_n}, \lambda_1 \, (g_1-\alpha),\dots,\lambda_m \, (g_m-\alpha) \right\rangle .
\] Let $\{h^\alpha_1,\dots,h^\alpha_r\} \subset \mathbb{R}[x]$ be a generating family of the ideal $I_\textsc{kkt}^\alpha \cap \mathbb{R}[x]$: \[ \langle h^\alpha_1,\dots,h^\alpha_r \rangle = I_\textsc{kkt}^\alpha \cap \mathbb{R}[x] . \] Note that such a family can be obtained by computing a Gr\"obner basis of $I_\textsc{kkt}^\alpha$, see \cite{CLO15}. Adding these redundant constraints to \cref{pb:parametric} yields the following polynomial problem. \begin{equation} \label[prb]{pb:KKT} \tag{$\mathcal{P}_\prtb^\KKT$} \begin{aligned} \text{minimize} & \enspace f(x) \\ \text{subject to} & \enspace g_1(x) \leqslant \alpha, \dots, g_m(x) \leqslant \alpha , \\ & \enspace h^\alpha_1(x) = 0, \dots, h^\alpha_r(x) = 0 . \end{aligned} \end{equation} Observe that any minimizer of \cref{pb:parametric} that is also a KKT point is a minimizer of \cref{pb:KKT}. Hence, if the Mangasarian-Fromovitz constraint qualification holds throughout $C_\alpha$, then solving the former problem boils down to solving the latter. We next introduce the SDP relaxation hierarchies proposed in \cite{ABM14} to solve \labelcref{pb:KKT}. For $k \in \mathbb{N}$, the primal is given by \begin{multline} \label[prb]{pb:primal-relaxation} p^\alpha_k = \inf \big\{ \Lambda(f) \mid \enspace \Lambda \in (\mathbb{R}_{2k}[x])^* , \; \Lambda(1) = 1 , \\ \Lambda(p) \geqslant 0 , \enspace \forall p \in \langle h^\alpha_1,\dots,h^\alpha_r \rangle_{2k} + \mathfrak{P}_k (\alpha-g_1,\dots,\alpha-g_m) \big\} , \end{multline} and the dual problem is \begin{equation} \label[prb]{pb:dual-relaxation} d^\alpha_k = \sup \big\{ \gamma \in \mathbb{R} \mid \enspace f - \gamma \in \langle h^\alpha_1,\dots,h^\alpha_r \rangle_{2k} + \mathfrak{P}_k (\alpha-g_1,\dots,\alpha-g_m) \big\} , \end{equation} where the notation for the truncated ideal $\langle \cdot \rangle_{2k}$ and the truncated preordering $\mathfrak{P}_k$ is detailed in \cref{sec:polynomials}. 
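On a toy instance, the generating family $\{h^\alpha_j\}$ can even be written by hand. Take the hypothetical data $f(x) = x$ and $g_1(x) = x^2$ in one variable, so that $L^\alpha(x,\lambda) = x + \lambda (x^2 - \alpha)$ and $I_\textsc{kkt}^\alpha = \langle 1 + 2\lambda x, \; \lambda (x^2 - \alpha) \rangle$. The polynomial identity $x^2 - \alpha = (x^2 - \alpha)(1 + 2 \lambda x) - 2x \cdot \lambda (x^2 - \alpha)$ shows that $h^\alpha_1 := x^2 - \alpha$ belongs to $I_\textsc{kkt}^\alpha \cap \mathbb{R}[x]$: it is precisely the redundant equality constraint added in \cref{pb:KKT}, forcing the KKT points onto the active boundary. The Python sketch below merely sanity-checks the identity by sampling.

```python
import random

# Sanity check of the hand-computed identity exhibiting x^2 - alpha as an
# element of the KKT ideal I_KKT = <1 + 2*lam*x, lam*(x^2 - alpha)> for the
# toy problem f(x) = x, g_1(x) = x^2 (hypothetical illustrative data):
#   x^2 - alpha = (x^2 - alpha)*(1 + 2*lam*x) - 2*x*(lam*(x^2 - alpha)).
def check_identity(x, lam, alpha):
    p1 = 1 + 2 * lam * x            # dL^alpha/dx
    p2 = lam * (x * x - alpha)      # complementarity generator
    lhs = x * x - alpha
    rhs = (x * x - alpha) * p1 - 2 * x * p2
    return abs(lhs - rhs) < 1e-9

random.seed(0)
assert all(check_identity(random.uniform(-3, 3), random.uniform(-3, 3),
                          random.uniform(0, 3)) for _ in range(1000))
print("identity verified on 1000 random samples")
```

The identity also expands to $x^2 - \alpha$ directly, since the two $\lambda$-terms cancel; sampling is only a safeguard against transcription errors.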
Let us mention however that the dual relaxation hierarchy is based on an SOS representation of nonnegative polynomials which uses a Schm\"udgen-type certificate. But contrary to Schm\"udgen's Positivstellensatz \cite[Cor.~3]{Sch91}, compactness is not required here. A straightforward combination of \cref{cor:diagonal-perturb} and \cite[Th.~6.3]{ABM14} leads to the following. \begin{proposition} Let $f, g_1, \dots, g_m: \mathbb{R}^n \to \mathbb{R}$ be polynomial functions such that $C_0 = [g_1 \leqslant 0, \cdots, g_m \leqslant 0]$ is nonempty. Then, for all parameters $\alpha \geqslant 0$, except for a finite number of them, one of the following assertions holds: \begin{enumerate} \item the relaxations \labelcref{pb:primal-relaxation} and \labelcref{pb:dual-relaxation} of \cref{pb:KKT} are exact and provide the value of \labelcref{pb:parametric}, i.e., $\val \mathcal{P}_\prtb = d^\alpha_k = p^\alpha_k$ for all $k$ large enough\footnote{The result of Abril Bucero and Mourrain is actually more precise and establishes a link between the minimizers of \cref{pb:primal-relaxation} and the ones of \cref{pb:parametric}. We refer the reader to \cite[Th.~6.3]{ABM14} for a comprehensive presentation.}; \item for $k$ large enough, the feasible set of \cref{pb:primal-relaxation} is empty and \labelcref{pb:parametric} has no minimizer. \end{enumerate} \end{proposition} \appendix \section{Reminder on semi-algebraic and tame geometry} \label{sec:tame-geometry} We recall here the basic results of tame geometry that we use in the present work. Some references on this topic are \cite{vdDM96,vdD98,Cos99}. 
\begin{definition}[see {\cite[Def.~1.4]{Cos99}}] An {\em o-minimal} structure on $(\mathbb{R},+,\cdot)$ is a sequence of Boolean algebras $\mathcal{O} = (\mathcal{O}_p)_{p \in \mathbb{N}}$ where each $\mathcal{O}_p$ is a family of subsets of $\mathbb{R}^p$ and such that for each $p \in \mathbb{N}$: \begin{enumerate} \item if $A$ belongs to $\mathcal{O}_p$, then $A \times \mathbb{R}$ and $\mathbb{R} \times A$ belong to $\mathcal{O}_{p+1}$; \item if $\pi: \mathbb{R}^{p+1} \to \mathbb{R}^p$ is the canonical projection onto $\mathbb{R}^p$ then, for any $A \in \mathcal{O}_{p+1}$, the set $\pi(A)$ belongs to $\mathcal{O}_p$; \label{it:algebraic} \item $\mathcal{O}_p$ contains the family of real algebraic subsets of $\mathbb{R}^p$, that is, every set of the form $\{ x \in \mathbb{R}^p \mid g(x) = 0 \}$ where $g: \mathbb{R}^p \to \mathbb{R}$ is a polynomial function; \item the elements of $\mathcal{O}_1$ are exactly the finite unions of points and intervals. \end{enumerate} \end{definition} A subset of $\mathbb{R}^p$ which belongs to an o-minimal structure $\mathcal{O}$ is said to be {\em definable} (in $\mathcal{O}$). A function $f: A \subset \mathbb{R}^p \to \mathbb{R}^q$ or a set-valued mapping $F: A \subset \mathbb{R}^p \rightrightarrows \mathbb{R}^q$ is said to be definable in $\mathcal{O}$ if its graph is definable (in $\mathcal{O}$) as a subset of $\mathbb{R}^p \times \mathbb{R}^q$. \begin{example} The simplest (and smallest) o-minimal structure is given by the class $\mathcal{SA}$ of real {\em semi-algebraic} objects. A set $A \subset \mathbb{R}^p$ is called semi-algebraic if it is of the form $A = \bigcup_{j=1}^l \bigcap_{i=1}^k \{x \in \mathbb{R}^p \mid g_{ij}(x) < 0, \; h_{ij}(x) = 0 \}$ where the functions $g_{ij}, h_{ij}: \mathbb{R}^p \to \mathbb{R}$ are polynomial functions. The fact that $\mathcal{SA}$ is an o-minimal structure relies mainly on the Tarski-Seidenberg principle (see \cite{BR90}) which asserts that \cref{it:algebraic} holds true in this class. 
Other examples like globally subanalytic sets or sets belonging to the $\log$-$\exp$ structure provide a vast field of sets and functions that are of primary importance for optimizers. We will not give proper definitions of these structures in this paper, but the interested reader may consult \cite{vdDM96} or \cite{BDL06,Iof07,BDL09} for optimization oriented subjects. \end{example} In this paper, we shall essentially use the classical results listed hereafter. In the remainder of this subsection, we fix an o-minimal structure $\mathcal{O}$ on $(\mathbb{R},+,\cdot)$. \begin{proposition}[stability results] Let $A \subset \mathbb{R}^p$ and $g: A \to \mathbb{R}^q$ be definable objects. \begin{itemize} \item If $B \subset A$ is a definable set, then $g(B)$ is definable. \item If $C \subset \mathbb{R}^q$ is a definable set, then $g^{-1}(C)$ is definable. \item If $A$ is open and $g$ is differentiable, then its derivative is definable. \end{itemize} \end{proposition} \begin{monotonicity} \label{lem:monotonicity} Let $f: I \subset \mathbb{R} \to \mathbb{R}$ be a definable function and let $k \in \mathbb{N}$. Then there exists a finite partition of $I$ into $p$ disjoint intervals $I_1,\dots,I_p$, such that the restriction of $f$ to each nontrivial interval $I_j$, $j \in \{1,\dots,p\}$, is $\mathcal{C}^k$ and either constant or strictly monotone. \end{monotonicity} \begin{definable-choice} \label{lem:definable-choice} Let $A \subset \mathbb{R}^p \times \mathbb{R}^q$ be a definable set and let $\pi: \mathbb{R}^p \times \mathbb{R}^q \to \mathbb{R}^p$ be the canonical projection onto $\mathbb{R}^p$. Then there exists a definable function $f: \pi(A) \to \mathbb{R}^q$ such that $\graph f \subset A$.
\end{definable-choice} Note that an equivalent formulation of the latter result can be stated in terms of selection: if $F: \mathbb{R}^p \rightrightarrows \mathbb{R}^q$ is a definable set-valued mapping, then there exists a definable function $f: \dom F \to \mathbb{R}^q$ such that $\graph f \subset \graph F$. \begin{curve-selection} \label{lem:curve-selection} Let $A \subset \mathbb{R}^p$ be a definable set, $x$ be an element of $\operatorname{cl}(A)$, the topological closure of $A$, and let $k \in \mathbb{N}$ be a fixed integer. Then there exists a $\mathcal{C}^k$ definable path $\gamma: [0,1) \to \mathbb{R}^p$ such that $\gamma(0) = x$ and $\gamma((0,1)) \subset A$. \end{curve-selection} \section{Relaxation in polynomial programming: definitions and notation} \label{sec:polynomials} By $\mathbb{R}[x]$ we denote the ring of real polynomials in the variable $x = (x_1,\dots,x_n)$. For any $k \in \mathbb{N}$, we denote by $\mathbb{R}_k[x]$ the space of real polynomials whose degree is bounded by $k$, and we denote by $(\mathbb{R}_k[x])^*$ its dual space. A polynomial $p \in \mathbb{R}[x]$ is a sum of squares (SOS) if $p$ can be written as $p = \sum_{i \in I} p_i^2$ for some finite family of polynomials $(p_i)_{i \in I} \subset \mathbb{R}[x]$. Denote by $\Sigma[x]$ the space of SOS polynomials. Given any integer $k \in \mathbb{N}$ and any finite family $\{ p_1,\dots,p_m \} \subset \mathbb{R}[x]$ of polynomials, the {\em $k$-truncated ideal} on $\mathbb{R}[x]$ generated by this family is the subset of $\mathbb{R}[x]$ defined by \[ \langle p_1,\dots,p_m \rangle_k := \bigg\{ \sum_{i = 1}^m q_i \, p_i \mid q_i \in \mathbb{R}[x], \; \deg(q_i \, p_i) \leqslant k, \; i = 1,\dots,m \bigg\} , \] where $\deg(p)$ denotes the degree of any polynomial $p \in \mathbb{R}[x]$. The ideal generated by the family $\{ p_1,\dots,p_m \}$ is denoted and defined in a similar way but with no condition required on the degree of the polynomials.
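For instance, the polynomial $x_1^2 - 2x_1 + 2$ is SOS, since $x_1^2 - 2x_1 + 2 = (x_1 - 1)^2 + 1^2$. On the other hand, a polynomial that is nonnegative on $\mathbb{R}^n$ need not be SOS: a classical counterexample is the Motzkin polynomial $x_1^4 x_2^2 + x_1^2 x_2^4 - 3 x_1^2 x_2^2 + 1$.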
For a set $I \subset \{1,\dots,m\}$, we denote by $p_I \in \mathbb{R}[x]$ the polynomial defined by $p_I := \prod_{i \in I} p_i$, with the convention that $p_\emptyset = 1$. Then we define the {\em $k$-truncated preordering} of $\{ p_1,\dots,p_m \}$ by \[ \mathfrak{P}_k(p_1,\dots,p_m) := \bigg\{ \sum_I q_I \, p_I \mid q_I \in \Sigma[x], \; \deg(q_I \, p_I) \leqslant 2k, \; \forall I \subset \{1,\dots,m\} \bigg\} . \] \section*{Acknowledgments} The authors thank J.~Pang, H.~Frankowska, T.~S.~Pham, and J.-B.~Lasserre for their useful comments. \end{document}
\begin{document} \title[The incompatibility of crossing number and bridge number]{The incompatibility of crossing number and bridge number for knot diagrams} \author{Ryan Blair} \address{Department of Mathematics, California State University, Long Beach, 1250 Bellflower Blvd, Long Beach, CA 90840} \email{[email protected]} \author{Alexandra A. Kjuchukova} \address{Department of Mathematics, University of Wisconsin-Madison, Van Vleck Hall, 480 Lincoln Drive, Madison, WI 53706} \email{[email protected]} \author{Makoto Ozawa} \address{Department of Natural Sciences, Faculty of Arts and Sciences, Komazawa University, 1-23-1 Komazawa, Setagaya-ku, Tokyo, 154-8525, Japan} \email{[email protected]} \thanks{The third author is partially supported by Grant-in-Aid for Scientific Research (C) (No. 26400097 \& 17K05262), The Ministry of Education, Culture, Sports, Science and Technology, Japan} \subjclass[2010]{Primary 57M25; Secondary 57M27} \keywords{knot, knot diagram, crossing number, bridge number, perpendicular bridge number, Wirtinger number} \begin{abstract} We define and compare several natural ways to compute the bridge number of a knot diagram. We study bridge numbers of crossing number minimizing diagrams, as well as the behavior of diagrammatic bridge numbers under the connected sum operation. For each notion of diagrammatic bridge number considered, we find crossing number minimizing knot diagrams which fail to minimize bridge number. Furthermore, we construct a family of minimal crossing diagrams for which the difference between diagrammatic bridge number and the actual bridge number of the knot grows to infinity. \end{abstract} \maketitle \section{Introduction} Let $K$ be a knot-type in $\mathbb{R}^3$ and let $\gamma\in K$ be a smooth embedding of the knot-type $K$. 
Let $p:\mathbb{R}^3\rightarrow \mathbb{R}^2$ be given by $p(x,y,z):=(x,y)$, $h:\mathbb{R}^3\rightarrow \mathbb{R}$ by $h(x,y,z):=z$ and $h_{||}:\mathbb{R}^3\rightarrow \mathbb{R}$ by $h_{||}(x,y,z):=y$. Denote by $\mathcal{C}(K)$ the set of embeddings $\gamma\in K$ which are regular and have minimal crossing number with respect to $p$; namely, these are crossing number minimizing embeddings. Denote by $\mathcal{B}(K)$ the set of embeddings $\gamma\in K$ which are Morse and have the minimal number of maximal points with respect to $h$; namely, these are bridge number minimizing embeddings. We have experimentally verified in \cite{Ksite} that $\mathcal{C}(K)\cap \mathcal{B}(K)\neq\emptyset$ for at least 450,000 prime knots of up to 16 crossings. Is this true for all knots? That is, we ask: \begin{question} Can we extract the bridge number of a knot $K$ from a minimal crossing diagram of $K$? \end{question} In order to study this question, we introduce several diagrammatic notions of bridge number. These are integers associated to knot diagrams with the property that the minimum of the bridge number of $D$ over all diagrams $D$ of a given knot $K$ is equal to the bridge number of $K$. A classical notion of diagrammatic bridge number is that of ``overpass'' bridge number. Call an arc in a knot diagram a {\it bridge} if it includes at least one overcrossing. The overpass bridge number of the diagram is the number of bridges in the diagram. It is well known that this definition of diagrammatic bridge number is not well behaved with respect to minimal crossing number diagrams. For instance, the trefoil knot has bridge number two and crossing number three. However, it is a straightforward exercise to show that every diagram of the trefoil with overpass bridge number two contains at least four crossings. More generally, we show that the minimal overpass bridge number and the minimal crossing number are wholly incompatible.
\begin{theorem}\label{thm:overpass} Let $K$ be a non-trivial knot. Then $K$ does not admit a diagram which realizes both the minimal overpass bridge number and the minimal crossing number of $K$. \end{theorem} Alternatively, the typical diagrammatic depiction of knots obtaining their bridge number suggests defining the ``parallel'' bridge number of a diagram $D$, $b_{||}(D)$, as the minimal number of maxima of $h_{||}|_{\gamma}$ over all Morse embeddings $\gamma$ that project to $D$. We remark that this definition of diagrammatic bridge number is not very effective at calculating the bridge number of a knot. We focus instead on the perpendicular bridge number of a knot diagram $D$, $b_{\perp}(D)$ (Definition~\ref{perp}), and the Wirtinger number of $D$, $\omega(D)$ (Definition~\ref{omega}). The Wirtinger number is defined combinatorially, and the fact that the minimum value of $\omega(D)$ over all diagrams $D$ of a knot $K$ equals the bridge number of $K$ is non-trivial -- see~\cite{BKVV} for a proof. On the plus side, the Wirtinger number is algorithmically computable. This allowed the detection of bridge numbers for nearly half a million knots from minimal-crossing diagrams, and the tabulation of bridge numbers for the majority of these knots for the first time. By contrast, $b_{\perp}(D)$ is defined geometrically, and it follows easily that taking the minimum of $b_{\perp}(D)$ over all diagrams of a knot gives the bridge number, but $b_{\perp}(D)$ can be challenging to compute directly. Additionally, it is a straightforward exercise to show that $b_{\perp}(D)\leq b_{||}(D)$ for all knot diagrams, making $b_{\perp}(D)$ more effective at calculating the bridge number of a knot from one of its diagrams. Our next result relates $\omega(D)$ and $b_{\perp}(D)$. \begin{theorem} \label{omega=perp} For a knot diagram $D$, $\omega(D)=b_{\perp}(D)$.
\end{theorem} In light of this, we will sometimes use $b(D)$ to denote either of these quantities for a knot diagram, that is, $b(D):=\omega(D)=b_{\perp}(D)$. Throughout, $\beta(K)$ denotes the bridge number of $K$. We leverage the above equality to prove the following. \begin{theorem} \label{minmal-gap} For every positive integer $n$, there exists an alternating knot $K$ and a crossing number minimizing diagram $D$ of $K$, with the property that $b(D) - \beta(K) \geq n$. \end{theorem} This theorem shows that not all minimal diagrams have Wirtinger number equal to the bridge number. However, all of the examples constructed in the proof of Theorem \ref{minmal-gap} have the property that $K$ is a composite knot and that there exists an alternative minimal crossing diagram $D'$ such that $\beta(K)=b_{\perp}(D')$. In fact, we conjecture that every knot has a minimal diagram realizing bridge number, that is, \begin{conjecture} For any knot $K$, $\mathcal{C}(K)\cap\mathcal{B}(K)\ne\emptyset$. \end{conjecture} The above conjecture, if true, together with invariance of the Wirtinger number under flypes, which we conjecture holds, would imply: \begin{conjecture} For any minimal diagram $D$ of a prime alternating knot $K$, $b(D) = \beta(K)$. \end{conjecture} \section{Preliminaries} Let $K$, $p$, $h$ and $\gamma$ be as above. Among the many equivalent definitions of bridge number of a knot, we favor the following one. \begin{definition} The \emph{bridge number of $K$}, $\beta(K)$, is the minimal number of maxima of $h|_{\gamma}$ over all $\gamma\in K$ such that $h|_{\gamma}$ is Morse. \end{definition} The first definition of diagrammatic bridge number we consider is the following. Let $K, \gamma, p, h$ be as above. When $p|_{\gamma}$ is regular and $|(p|_{\gamma})^{-1}(x, y)|\leq 2$ for all $(x, y)\in \mathbb{R}^2$, we say $p(\gamma)$ is a {\it knot projection} for $K$. Hence, every knot projection is a finite, $4$-valent graph in the plane.
A {\it knot diagram} of $K$ is a knot projection $p(\gamma)$ together with labels that indicate which strand is the over-strand and which is the under-strand at each double point. Given a diagram $D$ and an embedding $\gamma$ of a knot type $K$, we say that $\gamma$ \emph{presents} $D$ if the following hold: \begin{enumerate} \item $h|_{\gamma}$ is Morse; \item $p(\gamma)$ is a knot projection of $K$; \item $p(\gamma)$ together with crossing labels is equal to $D$. \end{enumerate} \begin{definition}\label{perp} Given a knot diagram $D$ of a knot $K$, define the \emph{perpendicular bridge number of $D$}, $b_{\perp}(D)$, to be the minimal number of maxima of $h|_{\gamma}$ over all $\gamma$ presenting $D$. \end{definition} For any diagram $D$ of a knot $K$, if $\gamma$ presents $D$, by definition $\gamma\in K$, so $b_{\perp}(D)\geq\beta(K)$. Furthermore, since, after an arbitrarily small perturbation that preserves the number of maxima of $h|_{\gamma}$, any $\gamma\in K$ has a diagram, $\beta(K)$ is realized as $b_{\perp}(D)$ for some diagram $D$ of $K$. Therefore, $b_{\perp}(D)$ has the desired properties of a diagrammatic bridge number. Theorem~\ref{omega=perp} proves that the perpendicular bridge number of a diagram $D$ equals its Wirtinger number, $\omega(D)$. The Wirtinger number of a diagram $D$ is calculated algorithmically and is closely related to the problem of finding the minimal number of Wirtinger generators in $D$ which suffice to generate the group of $K$. The Wirtinger number was introduced in \cite{BKVV}, and it follows from the main theorem therein that $\omega(D)$ constitutes a diagrammatic bridge number in the above sense. We recall the definition here. Let $D$ be a knot diagram with $n$ crossings and let $v(D)$ be the set of crossings $v_1$, $v_2$,..., $v_n$ in the plane.
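The Wirtinger coloring procedure recalled in this subsection is simple enough to run by brute force on small examples. The following sketch is purely illustrative and is not the implementation of \cite{BKVV}; in particular, the encoding of each crossing as a triple (over-strand, under-strand, under-strand) and the trefoil data below are our own assumptions.

```python
from itertools import combinations

def colorable(crossings, n_strands, seed):
    """Close a seed set of colored strands under coloring moves: an
    uncolored under-strand at a crossing may be colored once the
    crossing's over-strand and its other under-strand are colored."""
    colored = set(seed)
    changed = True
    while changed:
        changed = False
        for over, u1, u2 in crossings:
            if over in colored:
                if u1 in colored and u2 not in colored:
                    colored.add(u2)
                    changed = True
                elif u2 in colored and u1 not in colored:
                    colored.add(u1)
                    changed = True
    return len(colored) == n_strands

def wirtinger_number(crossings, n_strands):
    """Smallest k such that some k seed strands color the whole diagram
    (the colors themselves never obstruct propagation, so only the seed
    set matters)."""
    for k in range(1, n_strands + 1):
        for seed in combinations(range(n_strands), k):
            if colorable(crossings, n_strands, seed):
                return k
    return n_strands

# Standard 3-crossing trefoil diagram with strands 0, 1, 2; each
# crossing is recorded as (over-strand, under-strand, under-strand).
trefoil = [(2, 0, 1), (0, 1, 2), (1, 2, 0)]
print(wirtinger_number(trefoil, 3))  # 2, the bridge number of the trefoil
```

Enumerating all seed sets is exponential in the number of strands, so this is practical only for very small diagrams; the computations of \cite{BKVV} instead work from the Gauss code of the diagram.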
Since the diagram $D$ is formed by deleting a neighborhood of each under-strand from the knot projection, $D$ consists of $n$ disjoint closed arcs in the plane called \emph{strands}. Denote by $s(D)$ the set of strands $s_1$, $s_2$,..., $s_n$ for the diagram $D$. Two strands $s_i$ and $s_j$ of $D$ are {\it adjacent} if $s_i$ and $s_j$ are the under-strands of some crossing in $D$. In what follows we assume that $n>1$ so, in particular, no strand is adjacent to itself. We call $D$ \textit{k-partially colored} if we have specified a subset $A$ of the strands of $D$ and a function $f: A \to \{1, 2, \dots, k\}$. We refer to this partial coloring by the tuple $(A, f)$. Next, we define an operation that allows us to pass from one partial coloring on a diagram to another. \begin{definition} \label{move} Given $k$-partial colorings $(A_1, f_1)$ and $(A_2, f_2)$ of $D$, we say $(A_2, f_2)$ is the result of a \textit{coloring move} on $(A_1, f_1)$ if the following conditions hold: \begin{enumerate} \item $A_1 \subset A_2$ and $A_2 \setminus A_1 = \{s_j\}$ for some strand $s_j$ in $D$; \item $f_2|_{A_1} = f_1$; \item $s_j$ is adjacent to $s_i$ at some crossing $c \in v(D)$, and $s_i\in A_1$; \item the over-strand $s_k$ at $c$ is an element of $A_1$; \item $f_1(s_i) = f_2(s_j)$. \end{enumerate} \end{definition} We denote the above coloring move by $(A_1, f_1)\to(A_2, f_2)$. Two consecutive coloring moves are illustrated in Figure \ref{coloring.fig}, which we borrowed from~\cite{BKVV}. We say a knot diagram $D$ is \textit{k-colorable} if there exists a $k$-partial coloring $(A_0, f_0)$, where $A_0 = \{s_{i_1}, s_{i_2}, \dots, s_{i_k}\}$ and $f_0(s_{i_j}) = j$, and a sequence of coloring moves which result in coloring the entire diagram. \begin{definition} \label{omega} Let $D$ be a knot diagram. The smallest integer $k$ such that $D$ is $k$-colorable is the {\it Wirtinger number} of $D$, denoted $\omega(D)$.
\end{definition} For a proof that the minimum value of $\omega(D)$ over all diagrams $D$ of a knot $K$ equals the bridge number of $K$ we again refer the reader to~\cite{BKVV}. The Wirtinger number is our main tool for computing bridge numbers of minimal diagrams and comparing these to the bridge numbers of the corresponding knots. \begin{figure} \caption{Two coloring moves on the knot $8_{17}$.} \label{coloring.fig} \end{figure} In Section \ref{main} we prove that $\omega(D)=b_{\perp}(D)$ for any knot diagram $D$. In Section~\ref{applications} we analyze how the Wirtinger number behaves with respect to diagrammatic connected sum, and we use these results to show that, given any knot $K$, one can always find a diagram $D$ for which the difference between $\omega(D)$ and $\beta(K)$ is arbitrarily large. Finally, we extend these results to show that the difference between $\omega(D)$ and $\beta(K)$ can be arbitrarily large even among crossing number minimizing diagrams if composite knots are allowed. We conclude by exhibiting a crossing number minimizing diagram of a {\it prime} knot which also fails to realize the Wirtinger number. \section{Main Theorem}\label{main} Theorem \ref{thm:overpass} follows from the next two lemmas. We recall from \cite{Schu54} the definition of an overpass (resp. underpass) in a knot diagram. For a diagram $D=p(\gamma)$ of a knot type $K$, an {\em overpass} (resp. {\em underpass}) is a subarc $p(\alpha)$, where $\alpha$ is a subarc of $\gamma$ and $p|_\alpha$ is an injection, which contains only over-crossings (resp. under-crossings). Any knot diagram $D$ can be decomposed into an alternating sequence of over- and under-passes $\alpha_1^+, \alpha_1^-,\ldots,\alpha_n^+,\alpha_n^-$. Let us rephrase the definition of overpass bridge number, which we recalled in the introduction, in this language.
\begin{definition} \label{overpass} The {\em overpass bridge number} of a knot diagram $D$ is the minimal $n$ over all decompositions of $D$ into alternating sequences of over/underpasses $\alpha_1^+, \alpha_1^-,\ldots,\alpha_n^+,\alpha_n^-$. \end{definition} It is well known that the {\em overpass bridge number} of a knot type $K$, namely, the minimal overpass bridge number over all diagrams of $K$, is simply the bridge number of $K$. \begin{lemma} If a knot diagram realizes the minimal overpass bridge number of its knot type, then any pair of consecutive over/underpasses intersects in at least one crossing. \end{lemma} \begin{proof} Suppose that there exists a pair of consecutive over/underpasses which has no crossing. Then we can obtain a knot diagram with a smaller overpass bridge number by applying a move as shown in Figure~\ref{move1}. \begin{figure} \caption{Reducing the overpass bridge number.} \label{move1} \end{figure} \end{proof} \begin{lemma} Let $D$ be a crossing number minimizing knot diagram. Then no pair of consecutive over/underpasses in $D$ intersects at a crossing. \end{lemma} \begin{proof} Consider a knot diagram which contains a pair of consecutive over/underpasses intersecting in at least one crossing. Applying a move as shown in Figure \ref{move2} results in a knot diagram for the same knot whose crossing number is smaller, so such a pair cannot occur in a crossing number minimizing diagram. \begin{figure} \caption{Reducing the crossing number.} \label{move2} \end{figure} \end{proof} Theorem \ref{omega=perp} relates the Wirtinger number of a diagram to its perpendicular bridge number. It is proved in~\cite{BKVV} that $\omega(D)$ can be calculated by computer from Gauss code for $D$, while the more geometric $b_{\perp}(D)$ tends to be rather elusive. To prove Theorem \ref{omega=perp}, some additional notation must be established. As above, given a diagram $D$, $s(D)$ denotes the set of strands $s_1$, $s_2$,..., $s_n$ and $v(D)$ the set of crossings $v_1$, $v_2$,..., $v_n$.
Furthermore, if $\gamma$ presents $D$, we denote by $\overline{s}_1$, $\overline{s}_2$,..., $\overline{s}_n$ the subarcs of $p(\gamma)$ obtained by extending each of $s_1$, $s_2$,..., $s_n$ slightly so that the endpoints of the arcs $\overline{s}_1$, $\overline{s}_2$,..., $\overline{s}_n$ are contained in $v(D)$. In particular, if we consider $p(\gamma)$ as a finite, 4-valent graph, then $\overline{s}_i$ is the union of all edges in $p(\gamma)$ whose interiors intersect $s_i$. Additionally, let $\{a_i^+,a_i^-\}:=(p|_{\gamma})^{-1}(v_i)$, where $h(a_i^+)>h(a_i^-)$ for all $v_i\in v(D)$. Finally, observe that $(p|_{\gamma})^{-1}(\overline{s}_i)$ consists of a closed arc of positive length, possibly together with a collection of isolated points in $\gamma$, corresponding to crossings of $D$ where $s_i$ is the over-strand. We denote the closed arc component of $(p|_{\gamma})^{-1}(\overline{s}_i)$ by $\hat{s}_i$. \begin{lemma}\label{critical} Let $D$ be a knot diagram and $\gamma$ an embedding that presents $D$ and minimizes $b_{\perp}(D)$. Then, after an isotopy of $\gamma$ which fixes $p(\gamma)$ pointwise, we can assume the following: for every $s_i\in s(D)$, $\hat{s}_i$ contains at most three critical points of $h|_{\gamma}$; if $\hat{s}_i$ contains exactly three critical points of $h|_{\gamma}$, then two of these critical points are minima and one is a maximum; all critical points of $h|_{\gamma}$ corresponding to minima are contained in the set $\{a_1^-, a_2^-,...,a_n^-\}$. \end{lemma} \begin{proof} Assume $\gamma$ is an embedding that presents $D$ and minimizes $b_{\perp}(D)$. Consider a strand $s_i\in s(D)$. Let $U$ denote the portion of the interior of $p^{-1}(\overline{s}_i)$ that lies above $\hat{s}_i$. By definition of strand, $U$ is disjoint from $\gamma$. 
Hence, there is an ambient isotopy of $\gamma$ supported in an arbitrarily small open neighborhood of $U$ in $\mathbb{R}^3$, after which $\hat{s}_i$ is replaced by an arc with one maximum and at most two minima and $p(\gamma)$ is preserved pointwise. See Figure~\ref{3crit.fig}. This contradicts the minimality assumption for $\gamma$, unless $\hat{s}_i$ contains at most three critical points of $h|_{\gamma}$. Moreover, this isotopy shows that we can assume that if $\hat{s}_i$ contains exactly three critical points of $h|_{\gamma}$, then two of these critical points are minima and one is a maximum. As an intermediate next step, we arrange that the critical points of $h|_{\gamma}$ are contained in the interiors of the $\hat{s}_i$. Indeed, since $(p|_{\gamma})^{-1}(v(D))$ is a discrete set in $\gamma$, after an arbitrarily small ambient isotopy of $\gamma$ that fixes $p(\gamma)$ pointwise and preserves the number of critical points, we can assume that no critical point of $h|_{\gamma}$ is contained in $p^{-1}(v(D))$. \begin{figure} \caption{Isotopy decreasing the number of critical points for $\hat{s}_i$.} \label{3crit.fig} \end{figure} We have already shown that, if $\hat{s}_i$ contains exactly three critical points of $h|_{\gamma}$, then two of these critical points are minima and one is a maximum. To conclude the proof, we need to isotope $\gamma$, preserving $p(\gamma)$ pointwise, so that all critical points of $h|_{\gamma}$ corresponding to minima are contained in the set $\{a_1^-, a_2^-,...,a_n^-\}$. By the above, each arc $\overline{s}_i$ fits into one of four mutually disjoint categories: \begin{enumerate} \item Type I: $\hat{s}_i$ contains no minima of $h|_{\gamma}$. \item Type II: $\hat{s}_i$ contains exactly one minimum and no maximum of $h|_{\gamma}$. \item Type III: $\hat{s}_i$ contains exactly one minimum and exactly one maximum of $h|_{\gamma}$. \item Type IV: $\hat{s}_i$ contains exactly two minima and exactly one maximum of $h|_{\gamma}$.
\end{enumerate} In each of these cases, we let $U$ denote the portion of the interior of $p^{-1}(\overline{s}_i)$ that lies above $\hat{s}_i$. Recall that, by the definition of strand, $U$ is disjoint from $\gamma$. If $\overline{s}_i$ is an arc of Type II, let $v_j$ and $v_k$ be the endpoints of $\overline{s}_i$ and suppose $h(a_j^-)\leq h(a_k^-)$. Let $U'$ be the portion of $U$ below the plane $\{z=h(a_j^-)\}$. Then there is an ambient isotopy of $\gamma$ supported in an arbitrarily small open neighborhood of $U'$ in $\mathbb{R}^3$ after which $\hat{s}_i$ is replaced by an arc with no critical points in its interior, $h|_{\gamma}$ has a minimum at $a_j^-$, and $p(\gamma)$ is preserved pointwise. See Figure~\ref{type2.fig}. \begin{figure} \caption{Isotopy for a strand of Type II.} \label{type2.fig} \end{figure} If $\overline{s}_i$ is an arc of Type III, let $M_i$ be the maximum of $h|_{\gamma}$ in $\hat{s}_i$ and let $m_i$ be the minimum of $h|_{\gamma}$ in $\hat{s}_i$. Let $v_j$ and $v_k$ be the endpoints of $\overline{s}_i$ and suppose $a_j^-$ is closer to $m_i$ than it is to $M_i$, along $\hat{s}_i$. Let $U^*$ be the portion of $U$ below the plane $\{z=h(a_j^-)\}$ and let $U'$ be the component of $U^*$ whose closure in $U$ contains $m_i$ and $a_j^-$. Note that $M_i$ is not contained in the closure of $U'$, as this would imply that there is an ambient isotopy of $\gamma$ that eliminates a minimum and a maximum while preserving $p(\gamma)$, a contradiction to the minimality assumption for $\gamma$. Subsequently, there is an ambient isotopy of $\gamma$ supported in an arbitrarily small open neighborhood of $U'$ in $\mathbb{R}^3$ after which $\hat{s}_i$ is replaced by an arc with a single maximum critical point in its interior, $h|_{\gamma}$ has a minimum at $a_j^-$, and $p(\gamma)$ is preserved pointwise. See Figure~\ref{type3.fig}.
\begin{figure} \caption{Isotopy for a strand of Type III.} \label{type3.fig} \end{figure} If $\overline{s}_i$ is an arc of Type IV, let $M_i$ be the maximum of $h|_{\gamma}$ in $\hat{s}_i$ and let $m^1_i$ and $m^2_i$ be the two minima of $h|_{\gamma}$ in $\hat{s}_i$. Let $v_j$ and $v_k$ be the endpoints of $\overline{s}_i$ and suppose $a_j^-$ is closer to $m^1_i$ than it is to $m^2_i$, along $\hat{s}_i$. Let $U^*$ be the portion of $U$ below the plane $\{z=h(a_j^-)\}$. Note that $h(M_i)>\max\{h(a_j^-),h(a_k^-)\}$, since otherwise there would exist an ambient isotopy of $\gamma$ which eliminates a minimum and a maximum while preserving $p(\gamma)$, a contradiction to the minimality assumption for $\gamma$. Hence, $U^*$ contains a disk component $U'$ that is incident to $a_j^-$. Similarly, the portion of $U$ below the plane $\{z=h(a_k^-)\}$ contains a disk component $U''$ that is incident to $a_k^-$. There is an ambient isotopy of $\gamma$ supported in an arbitrarily small open neighborhood of $U'\cup U''$ in $\mathbb{R}^3$ after which $\hat{s}_i$ is replaced by an arc with a single maximum critical point in its interior, $h|_{\gamma}$ has a minimum at $a_j^-$ and at $a_k^-$, and $p(\gamma)$ is preserved pointwise. See Figure~\ref{type4.fig}. \begin{figure} \caption{Isotopy for a strand of Type IV.} \label{type4.fig} \end{figure} Since the ambient isotopies described above are supported in pairwise disjoint regions of $\mathbb{R}^3$, we can apply them simultaneously to produce an ambient isotopy of $\gamma$ that fixes $p(\gamma)$ pointwise, results in an embedding of $\gamma$ that minimizes $b_{\perp}(D)$, and results in all critical points of $h|_{\gamma}$ corresponding to minima being contained in $\{a_1^-, a_2^-,...,a_n^-\}$. \end{proof} \begin{proof} [Proof of Theorem~\ref{omega=perp}] Given a diagram $D$ of a knot $K$, let $\mu:=\omega(D)$.
As in the proof of the main result in \cite{BKVV}, we can construct an embedding $\gamma$ that presents $D$ such that $h|_{\gamma}$ has $\mu$ maxima. Hence, $b_{\perp}(D)\leq \omega(D)$. It remains to show that $b_{\perp}(D)\geq \omega(D)$. Let $D$ be a diagram of $K$ and let $\gamma$ be an embedding of $K$ that presents $D$ and realizes $b_{\perp}(D)$. Assume that we have isotoped $\gamma$ so that the conclusions of Lemma~\ref{critical} hold. That is, all critical points corresponding to minima of $h|_{\gamma}$ are contained in $\{a_1^-, a_2^-,...,a_n^-\}$, all critical points corresponding to maxima of $h|_{\gamma}$ are contained in the interior of the arcs $\hat{s}_i$ for $i\in\{1,...,n\}$, and each arc $\hat{s}_i$ contains at most one critical point corresponding to a maximum of $h|_{\gamma}$. Moreover, after a further isotopy of $\gamma$, fixing $p(\gamma)$ pointwise as always, we can additionally assume that all critical points of $h|_{\gamma}$ and all points in $(p|_{\gamma})^{-1}(v(D))$ take distinct values under $h$ and that all critical points corresponding to maxima of $h|_{\gamma}$ lie above all of the points $(p|_{\gamma})^{-1}(v(D))$. Define $g:s(D)\rightarrow \mathbb{R}$ by $g(s_i):=\max\{h(t)\mid t\in \hat{s}_i\}$. Note that this value is well-defined since $\hat{s}_i$ is compact, and that $g$ is one-to-one since all critical points of $h|_{\gamma}$ and all points in $(p|_{\gamma})^{-1}(v(D))$ take distinct values under $h$. Order the elements of $s(D)$ by decreasing value under the map $g$, and denote the resulting sequence by $s_{i_1},...,s_{i_n}$. Let $b_{\perp}(D)=m$ and observe that the above assumptions guarantee that $s_{i_1},...,s_{i_m}$ are exactly the strands which contain maxima of $h|_{\gamma}$. Set $A_k:=\{s_{i_1},...,s_{i_{m+k}}\}$ for $0\leq k\leq n-m$ and define $f_0: A_0 \rightarrow \{1,...,m\}$ by $f_0(s_{i_j}):=j$ for $1\leq j\leq m$.
Let $M=\{d_1,...,d_m\}\subset \{a_1^-, a_2^-,...,a_n^-\}$ be the collection of critical points corresponding to minima for $h|_{\gamma}$. The closure of each component of $\gamma\setminus M$ is then a union of (consecutive) arcs $\hat{s}_{j}$ and contains a unique critical point corresponding to a maximum of $h|_{\gamma}$. Therefore, each component of $\gamma\setminus M$ contains the interior of a unique arc in the set $\{\hat{s}_{i_1},...,\hat{s}_{i_m}\}$. Extend $f_0$ to a function $f:s(D)\rightarrow \{1,...,m\}$ by assigning $f(s_j):=f(s_{i_k})=k$, where $\hat{s}_{i_k}\in \{\hat{s}_{i_1},...,\hat{s}_{i_m}\}$ is the unique element of this set with the property that $\operatorname{int}(\hat{s}_j)$ and $\operatorname{int}(\hat{s}_{i_k})$ are contained in the same component of $\gamma\setminus M$. Define $f_k=f|_{A_k}$ for all $1\leq k\leq n-m$. To show that $D$ is $m$-colorable, it remains to be shown that $(A_k, f_k)\to(A_{k+1}, f_{k+1})$ is a valid coloring move for all $0\leq k\leq n-m-1$. Note that $A_{k+1}\setminus A_k=\{s_{i_{m+k+1}}\}$ and let $q:=f(s_{i_{m+k+1}})\in \{1,...,m\}$. By the definition of $f$, we have that $\hat{s}_{i_{m+k+1}}$ is in the same component of $\gamma\setminus M$ as $\hat{s}_{i_{q}}$. But $\hat{s}_{i_{m+k+1}}$ does not contain a maximum of $h|_{\gamma}$, since $m+k+1>m$. Therefore, $s_{i_{m+k+1}}$ is adjacent to a strand $s_{i_r}$ such that $f(s_{i_r})=q$ and $g(s_{i_r})>g(s_{i_{m+k+1}})$. Since $g(s_{i_r})>g(s_{i_{m+k+1}})$, we have that $s_{i_r}\in A_k$. Let $s_{i_l}$ be the strand which passes over the crossing $v_r$ at which $s_{i_{m+k+1}}$ and $s_{i_r}$ are adjacent. Since $\hat{s}_{i_{m+k+1}}$ does not contain a maximum of $h|_{\gamma}$ and $g(s_{i_r})>g(s_{i_{m+k+1}})$, we have $g(s_{i_{m+k+1}})=h(a_r^-)$. Since $h(a_r^+)>h(a_r^-)$ and $a_r^+\in \hat{s}_{i_l}$, it follows that $g(s_{i_l})>g(s_{i_{m+k+1}})$ and, thus, $s_{i_l}\in A_k$.
Hence, conditions (1)-(5) of Definition~\ref{move} are satisfied and $(A_k, f_k)\to(A_{k+1}, f_{k+1})$ is a valid coloring move for all $0\leq k\leq n-m-1$. Therefore, $D$ is $m$-colorable. This implies that $\omega(D)\leq b_{\perp}(D)$ and the theorem follows. \end{proof} In \cite{BKVV} the authors computed the Wirtinger number for crossing number minimizing diagrams of all knots of eleven or fewer crossings and verified that the Wirtinger number for these diagrams realized bridge number. By the same method, the authors calculated the bridge number of all knots with crossing number 12. Since then, the authors have also calculated the bridge number of more than 450,000 knots with 16 or fewer crossings. Moreover, all of these knots were found to have a minimal crossing diagram $D$ such that $\omega(D)=\beta(K)$. These results motivate the search for the class of knots for which the Wirtinger number is realized in a minimal crossing number diagram. We give some negative results in the next section, as well as some ensuing conjectures regarding the compatibility of crossing number and Wirtinger number. \section{Diagrammatic Connected Sum and Examples}\label{applications} We prove that the Wirtinger number of a knot diagram is super-additive with respect to the connected sum operation on knot diagrams, and we use this fact to construct minimal diagrams whose diagrammatic bridge number is arbitrarily large compared to the bridge number of the knot. A diagram $D$ of a knot $K$ is \emph{prime} if every simple closed curve in the plane of projection that is disjoint from the crossings of $D$ and meets the edges of the underlying knot projection transversally in two points bounds a disk in the plane of projection that is disjoint from the crossings of $D$. Otherwise, we say $D$ is composite.
If $D$ is composite, $\alpha$ is a simple closed curve in the plane of projection witnessing that $D$ is composite, and $s\in \alpha\setminus D$, then the triple $(D,\alpha,s)$ induces a pair of diagrams $D_1$ and $D_2$ by surgering $D$ along the arc of $\alpha\setminus D$ that contains $s$. See Figure~\ref{diagramconnectedsum}. In this case, we write $D=D_1\# D_2$ and say $D$ is the \emph{connected sum} of $D_1$ and $D_2$. \begin{figure} \caption{An example of the triple $(D,\alpha,s)$ inducing a pair of diagrams $D_1$ and $D_2$. The red dot is $s$ and specifies the arc along which the diagram is surgered.} \label{diagramconnectedsum} \end{figure} \begin{theorem} \label{add} Given a composite diagram $D:=D_1\# D_2$, the inequality $b_{\perp}(D)\geq b_{\perp}(D_1)+b_{\perp}(D_2)-1$ holds. \end{theorem} \begin{proof} Let $(D,\alpha,s)$ be the triple that gives rise to the decomposition $D=D_1\# D_2$. Let $\gamma\in K$ be such that $\gamma$ presents $D$ and $h|_{\gamma}$ has $b_{\perp}(D)$ maxima. After a small perturbation of $\alpha$ that keeps it transverse to $p(\gamma)$, we can assume that $A=p^{-1}(\alpha)$ is disjoint from the critical points of $h|_{\gamma}$ and that $A\cap \gamma =\{x,y\}$ with $h(x)<h(y)$. Let $\hat{\alpha}$ be a monotone increasing (with respect to the $z$-coordinate) arc embedded in $A$ connecting $x$ to $y$ that is mapped by $p$ homeomorphically onto a subarc of $\alpha$. Surgering $\gamma$ along $\hat{\alpha}$ with framing parallel to the plane of projection results in two knot embeddings $\gamma_1$ and $\gamma_2$ such that $\gamma_1$ presents $D_1$ and $\gamma_2$ presents $D_2$. Independent of the sign of the derivative of $h|_{\gamma}$ at $x$ and $y$, $h|_{\gamma_1\cup \gamma_2}$ has exactly one more maximum than $h|_{\gamma}$. See Figure \ref{4_1}.
\begin{figure} \caption{Surgering $\gamma$ along $\hat{\alpha}$.} \label{4_1} \end{figure} Since the number of maxima of $h|_{\gamma_1\cup \gamma_2}$ is bounded below by $b_{\perp}(D_1)+b_{\perp}(D_2)$, it follows that $b_{\perp}(D)\geq b_{\perp}(D_1)+b_{\perp}(D_2)-1$. \end{proof} \begin{corollary} Given a composite diagram $D:=D_1\# D_2$, the inequality $\omega(D)\geq \omega(D_1)+ \omega(D_2)-1$ holds. \end{corollary} \begin{proof} By Theorem~\ref{omega=perp}, $\omega(D) = b_{\perp}(D)$ for any knot diagram $D$, so the claim follows from Theorem~\ref{add}. \end{proof} \begin{corollary} \label{unknot-gap} Given any integer $n$, there exists a diagram $D_n$ of the unknot such that $b_{\perp}(D_n)=\omega(D_n) > n$. \end{corollary} \begin{proof} Let $D$ denote the Thistlethwaite diagram of the unknot, pictured in Figure~\ref{Thistlethwaite}. By direct computation, we find that $\omega(D) =3$. (This can be done by hand. We show $\omega(D) >2$ by verifying that, if we begin by coloring the over-strand and one under-strand at any crossing, the coloring process terminates before all strands are colored, regardless of the choice of crossing. By contrast, it is not hard to find three strands which, when colored, allow us to extend the coloring to the entire diagram by iterating the coloring move.) Let $D_1=D$ and let $D_n$ be a connected sum of $n$ copies of $D$. By repeated application of Theorem~\ref{add}, we have $\omega(D_n) = b_{\perp}(D_n) \geq 3 + 2(n-1) = 2n+1 > n$. \end{proof} \begin{figure} \caption{The Thistlethwaite unknot, whose Wirtinger number is 3.} \label{Thistlethwaite} \end{figure} This allows us to construct knot diagrams for which the gap between the Wirtinger number and the bridge number of the knot is arbitrarily large. \begin{corollary} \label{anyknot-gap} For every positive integer $m$ and every knot $K$, there exists a diagram $D$ of $K$ such that $b_{\perp}(D)=\omega(D)> \beta(K) + m$.
\end{corollary} \begin{proof} Let $D_1$ be any diagram of $K$, and $D_{n}$ a diagram of the unknot satisfying the conclusion of Corollary~\ref{unknot-gap} for $n=m+1$. Let $D:=D_1\# D_n$. By Theorem~\ref{add}, $D$ is a diagram of $K$ such that $$\omega(D) \geq \omega(D_1) + \omega(D_n) - 1\geq \omega(K) + \omega(D_n) -1 > \omega(K) + n-1 = \omega(K) + m=\beta(K)+m.$$ \end{proof} Note, however, that the diagram constructed in the proof of Corollary~\ref{anyknot-gap} has a very large number of crossings, exceeding the crossing number of $K$ by at least $15n$. A more subtle question is how big the gap between $\omega(D)$ and $\omega(K)$ can be for $D$ a {\it minimal} diagram of the knot $K$. We demonstrate that this gap can be arbitrarily large as well. \begin{proof} [Proof of Theorem~\ref{minmal-gap}] In Figure \ref{example} we give an example of a knot $K$ with a reduced alternating diagram $D$ such that $\omega(K)=\beta(K)=5$ and $\omega(D)=6$. The fact that $\omega(D)=6$ was established by computer computation using the algorithm introduced in \cite{BKVV}. By Schubert's equality for bridge number, $\omega(\#_{i=1}^{n} K)= 5n-(n-1)$. Similarly, by Theorem \ref{add}, $\omega(\#_{i=1}^{n} D)\geq 6n-(n-1)$. Hence, $\omega(\#_{i=1}^{n} D)-\omega(\#_{i=1}^{n} K)\geq n$. \end{proof} \begin{figure} \caption{A minimal crossing diagram $D$ of a composite knot $K$ such that $\omega(D)=6$ and $\beta(K)=5$.} \label{example} \end{figure} Having seen that the gap between the bridge number of a knot $K$ and the diagrammatic bridge number in a minimal diagram of $K$ can be arbitrarily large for composite knots, we naturally ask whether minimal diagrams of {\it prime} knots always realize bridge number. We answer this question in the negative. \begin{theorem} There exists a prime knot $K$ with $\mathcal{C}(K)\setminus\mathcal{B}(K)\ne\emptyset$. \end{theorem} \begin{proof} Let $K_0$ be the knot in Figure~\ref{example2}, and let $D_0$ denote the diagram depicted there.
Using the algorithm described in~\cite{BKVV}, we calculated the Wirtinger number of this diagram and found that $\omega(D_0)=6$. Moreover, it is straightforward to verify that the diagram $D_0$ is adequate and thus crossing-number minimizing by \cite{Th88}. By a somewhat lengthy but standard argument analyzing the possible intersections between an essential meridional annulus and the various essential surfaces in the exterior of $K_0$, one can show that $K_0$ is prime; see for example Lemma 6.0.29 of \cite{BThesis}. It is similarly straightforward to show that $K_0$ is an index-two cable of the trefoil knot. Hence, by \cite{Schu54}, $\beta(K_0)\geq 4$. On the other hand, Figure \ref{example3} illustrates that $\beta(K_0)\leq4$, so $\beta(K_0)=4<\omega(D_0)$. \end{proof} We believe the method used to construct the knot $K_0$ can be generalized to find other minimal diagrams $D_i$ of prime non-alternating knots $K_i$ such that $b_{\perp}(D_i)>\beta(K_i)$ and the gap between $b_{\perp}(D_i)$ and $\beta(K_i)$ grows with the crossing number of $D_i$. It is interesting to note that there exists a minimal crossing diagram for $K_0$, given in Figure \ref{example3}, that realizes bridge number. \begin{figure} \caption{A minimal crossing diagram $D$ of a prime knot $K$ such that $\omega(D)=6$ and $\beta(K)=4$.} \label{example2} \end{figure} \begin{figure} \caption{A minimal crossing diagram $D$ of a prime knot $K$ such that $\omega(D)=\beta(K)=4$.} \label{example3} \end{figure} \end{document}
\begin{document} \title{Nonlinear boundary value problems \\ relative to the one-dimensional heat equation} \keywords{Nonlinear heat flux, Singularities, Radon measures, Marcinkiewicz spaces}{35J65, 35L71} \abstract{We consider the problem of existence of a solution $u$ to $\partial _tu-\partial _{xx} u=0$ in $(0,T)\times\BBR_+ $ subject to the boundary condition $-u_x(t,0)+g(u(t,0))=\mu$ on $(0,T)$, where $\mu$ is a measure on $(0,T)$ and $g$ a continuous nondecreasing function. When $p>1$ we study the set of self-similar solutions of $\partial _tu-\partial _{xx} u=0$ in $\BBR_+\times\BBR_+$ such that $-u_x(t,0)+u^p(t,0)=0$ on $(0,\infty)$. In the end, we present various extensions to a higher-dimensional framework.} \section{Introduction} \setcounter{equation}{0} Let $g:\BBR\mapsto\BBR$ be a continuous nondecreasing function. Set $Q^T_{\BBR_+}=(0,T)\times \BBR_+$ for $0<T\leq\infty$ and $\partial_\ell Q^T_{\BBR_+}=[0,T)\times\{0\}$. The aim of this article is to study the following one-dimensional heat equation with a nonlinear flux on the parabolic boundary \begin{equation}\label{I-1-0}\begin{array} {lll} u_t-u_{xx}=0\qquad&\text{in }\;Q^T_{\BBR_+}\\[1mm] -u_x(.,0)+g(u(.,0))=\mu\qquad&\text{in }\;[0,T)\\[1mm] u(0,.)=\gn\qquad&\text{in }\;\BBR_+, \end{array} \end{equation} where $\gn,\mu$ are Radon measures in $\BBR_+$ and $[0,T)$ respectively. A related problem in $Q^\infty_{\BBR_+}$, for which there exist explicit solutions, is the following: \begin{equation}\label{I-1-2}\begin{array} {lll} u_t-u_{xx}=0\qquad&\text{in }\;Q^\infty_{\BBR_+}\\[0mm] -u_x(t,0)+|u|^{p-1}u(t,0)=0&\text{for all }\;t>0\\[1mm] \displaystyle \lim_{t\to 0}u(t,x)=0&\text{for all }\;x>0, \end{array} \end{equation} where $p>1$.
Problem $(\ref{I-1-2})$ is invariant under the transformation $T_k$ defined for all $k>0$ by \begin{equation}\label{I-1-3}\begin{array} {lll} T_k[u](t,x)=k^{\frac{1}{p-1}}u(k^2t,kx). \end{array} \end{equation} This leads naturally to look for self-similar solutions of the form \begin{equation}\label{I-1-4}\begin{array} {lll} u_s(t,x)=t^{-\frac{1}{2(p-1)}}\gw\left(\frac x{\sqrt t}\right). \end{array} \end{equation} Putting $\eta=\frac x{\sqrt t}$, $\gw$ satisfies \begin{equation}\label{I-1-5}\begin{array} {lll} -\gw''-\myfrac12\eta \gw'-\myfrac1{2(p-1)}\gw=0\qquad&\text{in }\; \BBR_+\\[2mm] -\gw'(0)+|\gw|^{p-1}\gw(0)=0\\[1mm] \displaystyle \lim_{\eta\to \infty}\eta^{\frac{1}{p-1}}\gw(\eta)=0. \end{array} \end{equation} Self-similar solutions of nonlinear diffusion equations such as the porous-medium or fast-diffusion equations were discovered long ago by Kompaneets and Zeldovich, and a thorough study was made by Barenblatt, who reduced the problem to integrable ordinary differential equations with explicit solutions. Concerning the semilinear heat equation, Brezis, Terman and Peletier opened the study of self-similar solutions by proving in \cite{BPT} the existence of a positive strongly singular function satisfying \begin{equation}\label{I-1-6}\begin{array} {lll} u_t-\Gd u+|u|^{p-1}u=0\qquad&\text{in }\;\BBR_+\times\BBR^n, \end{array} \end{equation} and vanishing at $t=0$ on $\BBR^n\setminus\{0\}$. They called it the {\it very singular solution}. Their method of construction is based upon the study of an ordinary differential equation with a phase-space analysis. A new and more flexible method based upon variational analysis was provided in \cite{EK}. Other singular solutions of $(\ref{I-1-6})$ in different configurations, such as boundary singularities, have been studied in \cite{MV1}.
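The interior part of the invariance $(\ref{I-1-3})$, namely that $T_k[u]$ solves the heat equation whenever $u$ does, is easy to check numerically. A minimal sketch, in which the Gaussian test solution and the values $k=3$, $p=7/4$ are illustrative choices:

```python
import math

def u(t, x):
    # An explicit solution of u_t = u_xx on the half line.
    return math.exp(-x * x / (4.0 * t)) / math.sqrt(math.pi * t)

def rescale(k, p, f):
    # The rescaling (I-1-3): T_k[f](t, x) = k^{1/(p-1)} f(k^2 t, k x).
    a = 1.0 / (p - 1.0)
    return lambda t, x: k ** a * f(k * k * t, k * x)

def heat_residual(f, t, x, h=1e-4):
    # Central finite differences for f_t - f_xx.
    ft = (f(t + h, x) - f(t - h, x)) / (2.0 * h)
    fxx = (f(t, x + h) - 2.0 * f(t, x) + f(t, x - h)) / (h * h)
    return ft - fxx

v = rescale(3.0, 1.75, u)
res = heat_residual(v, 0.5, 1.0)   # ~0 up to discretization error
```

The residual vanishes up to finite-difference error, as the chain rule predicts: $\partial_t T_k[u]-\partial_{xx}T_k[u]=k^{\frac{1}{p-1}+2}(u_t-u_{xx})(k^2t,kx)$.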
We set $K(\eta)=e^{\eta^2/4}$ and \begin{equation}\label{I-1-6'}\begin{array} {lll} L^2_K(\BBR_+)=\left\{\phi\in L^1_{loc}(\BBR_+):\myint{\BBR_+}{}\phi^2 K\,dx:=\norm\phi^2_{L^2_K}<\infty\right\}, \end{array} \end{equation} and, for $k\geq1$, \begin{equation}\label{I-1-6''}\begin{array} {lll}\displaystyle H^{k}_K(\BBR_+)=\left\{\phi\in L^2_K(\BBR_+):\sum_{\alpha=0}^k\norm{\phi^{(\alpha)}}^2_{L^2_K}:=\norm\phi^2_{H^{k}_K}<\infty\right\}. \end{array} \end{equation} Let us denote by $\CE$ the subset of $H^{1}_K(\BBR_+)$ of weak solutions of $(\ref{I-1-5})$, that is, the set of functions satisfying \begin{equation}\label{I-1-6'''}\begin{array} {lll}\displaystyle \myint{0}{\infty}\left(\gw'\gz'-\myfrac{1}{2(p-1)}\gw \gz\right) K(\eta)d\eta+\left(|\gw|^{p-1}\gw\gz\right)(0) =0, \end{array} \end{equation} and by $\CE_+$ the subset of nonnegative solutions. The next result gives the structure of $\CE$. \bth{vssth} 1- If $p\geq2$, then $\CE=\{0\}$. \noindent 2- If $1<p\leq \frac{3}{2}$, then $\CE_+=\{0\}$. \noindent 3- If $\frac{3}{2}<p<2$, then $\CE=\{\gw_s,-\gw_s,0\}$ where $\gw_s$ is the unique positive solution of $(\ref{I-1-5})$. Furthermore there exists $c>1$ such that \begin{equation}\label{I-1-7}\begin{array} {lll} c^{-1}\eta^{\frac{1}{p-1}-1}\leq e^{\frac{\eta^2}{4}}\gw_s(\eta)\leq c\eta^{\frac{1}{p-1}-1}\;\text{ for all }\;\eta>0. \end{array} \end{equation} \end{sub} Whenever it exists, the function $u_s$ defined in $(\ref{I-1-4})$ is the limit, as $\ell\to\infty$, of the positive solutions $u_{\ell\gd_0}$ of \begin{equation}\label{I-1-8}\begin{array} {lll} u_t-u_{xx}=0\qquad&\text{in }\;Q^\infty_{\BBR_+}\\[1mm] -u_{x}(.,0)+|u|^{p-1}u(.,0)=\ell\gd_0\qquad&\text{in }\;[0,\infty)\\[1mm] \displaystyle \lim_{t\to 0}u(t,x)=0&\text{for all }\;x\in\BBR_+.
\end{array} \end{equation} When such a function $u_s$ does not exist, the sequence $\{u_{\ell\gd_0}\}$ tends to infinity. This is a characteristic phenomenon of an underlying fractional diffusion associated with the linear equation \begin{equation}\label{I-1-8+}\begin{array} {lll} u_t-u_{xx}=0\qquad&\text{in }\;Q^\infty_{\BBR_+}\\[1mm] -u_x(.,0)=\mu\qquad&\text{in }\;[0,\infty)\\[1mm] u(0,.)=0\qquad&\text{in }\;\BBR_+. \end{array} \end{equation} More generally, we consider problem $(\ref{I-1-0})$. We define the set $\BBX(Q^T_{\BBR_+})$ of test functions by \begin{equation}\label{I-1-9}\begin{array} {lll} \BBX(Q^T_{\BBR_+})=\left\{\gz\in C_c^{1,2}([0,T)\times[0,\infty)):\gz _{x}(t,0)=0\;\text{ for }t\in [0,T)\right\}. \end{array} \end{equation} \bdef{weak} Let $\gn,\mu$ be Radon measures in $\BBR_+$ and $[0,T)$ respectively. A function $u$ defined in $\overline{Q^T_{\BBR_+}}$ and belonging to $L^1_{loc}(\overline{Q^T_{\BBR_+}})\cap L^1(\partial_\ell Q^T_{\BBR_+};dt)$ such that $g(u)\in L^1(\partial_\ell Q^T_{\BBR_+};dt)$ is a weak solution of $(\ref{I-1-0})$ if for every $\gz\in\BBX(Q^T_{\BBR_+})$ there holds \begin{equation}\label{I-1-10}\begin{array} {lll} -\myint{0}{T}\myint{0}{\infty}(\gz_t+\gz_{xx})u\,dxdt+\myint{0}{T}\left(g(u)\gz\right)(t,0)dt =\myint{0}{\infty}\gz(0,x)\, d\gn(x)+ \myint{0}{T}\gz (t,0)\, d\mu(t). \end{array} \end{equation} \end{sub} We denote by $E(t,x)$ the Gaussian kernel in $\BBR_+\times\BBR$.
The solution of \begin{equation}\label{I-1-11}\begin{array} {lll} v_t- v_{xx}=0\qquad&\text{in }\;Q^\infty_{\BBR_+}\\[1mm] -v_{x}(.,0)=\gd_0\qquad&\text{in }\;\overline{\BBR_+}\\[1mm] v(0,.)=0\qquad&\text{in }\;\BBR_+, \end{array} \end{equation} has the explicit expression \begin{equation}\label{I-1-12}\begin{array} {lll} v(t,x)=2E(t,x)=\myfrac{1}{\sqrt{\gp t}}e^{-\frac{x^2}{4t}}. \end{array} \end{equation} If $x,y>0$ and $s<t$ we set $\tilde E(t-s,x,y)=E(t-s,x-y)+E(t-s,x+y)$. When $\gn\in \mathfrak M^b(\BBR_+)$ and $\mu\in \mathfrak M^b(\overline{\BBR_+})$ the solution of \begin{equation}\label{I-1-13}\begin{array} {lll} v_t-v_{xx} =0\qquad&\text{in }\;Q^\infty_{\BBR_+}\\[1mm] -v_{x}(.,0)=\mu\qquad&\text{in }\;\overline{\BBR_+}\\[1mm] v(0,.)=\gn\qquad&\text{in }\;\BBR_+, \end{array} \end{equation} is given by \begin{equation}\label{I-1-14}\begin{array} {lll} v_{\gn,\mu}(t,x)=\myint{0}{\infty}\tilde E(t,x,y)d\gn(y)+2\myint{0}{t}E(t-s,x)\, d\mu(s)\\[3mm] \phantom{v_{\gn,\mu}(t,x)}=\CE_{\BBR_+}[\gn](t,x)+\CE_{\BBR_+\times\{0\}}[\mu](t,x)=\CE_{Q^\infty_{\BBR_+}}[(\gn,\mu)](t,x). \end{array} \end{equation} We prove the following existence and uniqueness result. \bth{exist-uniq} Let $g:\BBR\mapsto\BBR$ be a continuous nondecreasing function such that $g(0)=0$. If $g$ satisfies \begin{equation}\label{I-1-15}\begin{array} {lll} \myint{1}{\infty}(g(s)-g(-s))s^{-3}ds<\infty, \end{array} \end{equation} then for any bounded Borel measures $\gn$ in $\BBR_+$ and $\mu$ in $[0,T)$, there exists a unique weak solution $u:=u_{\gn,\mu}\in L^1({Q^T_{\BBR_+}})$ of $(\ref{I-1-0})$.
Furthermore, the mapping $(\gn,\mu)\mapsto u_{\gn,\mu}$ is nondecreasing. \end{sub} When $g(s)=|s|^{p-1}s$, condition $(\ref{I-1-15})$ is satisfied if \begin{equation}\label{I-1-16}\begin{array} {lll} 0<p<2. \end{array} \end{equation} The above result is still valid, under minor modifications, if $\BBR_+$ is replaced by a bounded interval $I:=(a,b)$, and problem $(\ref{I-1-0})$ by \begin{equation}\label{I-1-17}\begin{array} {lll} u_t-u_{xx}=0\qquad&\text{in }\;Q^T_{I}\\[1mm] u_x(.,b)+g(u(.,b))=\mu_1\qquad&\text{in }\;[0,T)\\[1mm] -u_x(.,a)+g(u(.,a))=\mu_2\qquad&\text{in }\;[0,T) \\[1mm] u(0,.)=\gn\qquad&\text{in }\;(a,b), \end{array} \end{equation} where $\gn,\mu_j$ ($j=1,2$) are Radon measures in $I$ and $(0,T)$ respectively. In the last section we outline the natural extensions of this problem to a multidimensional framework: \begin{equation}\label{I-1-18}\begin{array} {lll} u_t-\Gd u=0\qquad&\text{in }\;Q^T_{\BBR^n_+}\\[1mm] -u_{x_n}+g(u)=\mu\qquad&\text{in }\;\partial_\ell Q^T_{\BBR^n_+}\\[1mm] u(0,.)=\gn\qquad&\text{in }\;\BBR^n_+. \end{array} \end{equation} The construction of solutions with measure data can be generalized, but some difficulties arise in obtaining self-similar solutions. The equation with a source flux \begin{equation}\label{I-1-19}\begin{array} {lll} u_t-\Gd u=0\qquad&\text{in }\;Q^T_{\BBR^n_+}\\[1mm] u_{x_n}+g(u)=0\qquad&\text{in }\;\partial_\ell Q^T_{\BBR^n_+}\\[1mm] u(0,.)=\gn\qquad&\text{in }\;\BBR^n_+, \end{array} \end{equation} has been studied by several authors, in particular Fila, Ishige, Kawakami and Sato \cite{Fil}, \cite{Ish-Ka}, \cite{Ish-Sa}.
Their main concern is the global existence of solutions. \noindent {\it Acknowledgements}. The author is grateful to the reviewer for mentioning reference \cite{FQ}, which pointed out the role of Whittaker's equation in analyzing the blow-up of positive solutions of $(\ref{I-1-19})$ when $g(u)=u^p$ and $n=1$. \section{Self-similar solutions} \subsection{The symmetrization} We define the operator $\CL_K$ in $C^2_0(\BBR)$ by $$\CL_K(\phi)=-K^{-1}(K\phi')'. $$ The operator $\CL_K$ has been thoroughly studied in \cite{EK}. In particular \begin{equation}\label{2-1-3}\inf\left\{\myint{-\infty}{\infty}\phi'^2K(\eta)d\eta:\myint{-\infty}{\infty}\phi^2K(\eta)d\eta=1\right\}=\myfrac{1}{2}. \end{equation} The above infimum is achieved by $\phi_1=(4\gp)^{-\frac 12}K^{-1}$ and $\CL_K$ is an isomorphism from $H^1_K(\BBR)$ onto its dual $(H^1_K(\BBR))'\sim H^{-1}_K(\BBR)$. Finally $\CL_K^{-1}$ is compact from $L^2_K(\BBR)$ into $H^1_K(\BBR)$, which implies that $\CL_K$ is a Fredholm self-adjoint operator with $$\sigma(\CL_K)=\left\{\gl_j=\tfrac{j}{2}:j=1, 2,...\right\},$$ and $$\ker\left(\CL_K-\gl_jI_d\right)=span\left\{\phi_1^{(j-1)}\right\}. $$ If $\phi$ is defined in $\BBR_+$, we denote by $\tilde \phi$ its symmetric (even) extension, given by $\tilde\phi(x)=\phi(-x)$ for $x<0$, and by $\phi^*$ its antisymmetric (odd) extension, given by $\phi^*(x)=-\phi(-x)$ for $x<0$. The operator $\CL_K$ restricted to $\BBR_+$ is denoted by $\CL^+_K$. The operator $\CL^{+,N}_{K}$ with Neumann condition at $x=0$ is again a Fredholm operator. This is also valid for the operator $\CL^{+,D}_{K}$ with Dirichlet condition at $x=0$. Hence, if $\phi$ is an eigenfunction of $\CL^{+,N}_{K}$, then $\tilde\phi$ is an eigenfunction of $\CL_K$ in $L^2_K(\BBR)$. Similarly, if $\phi$ is an eigenfunction of $\CL^{+,D}_{K}$, then $\phi^*$ is an eigenfunction of $\CL_K$ in $L^2_K(\BBR)$. Conversely, any even (resp.
odd) eigenfunction of $\CL_K$ in $L^2_K(\BBR)$ satisfies the Neumann (resp. Dirichlet) boundary condition at $x=0$. Hence its restriction to $L^2_K(\BBR_+)$ is an eigenfunction of $\CL^{+,N}_{K}$ (resp. $\CL^{+,D}_{K}$). Since $\phi_1^{(j)}$ is even (resp. odd) if and only if $j$ is even (resp. odd), we derive \begin{equation}\label{2-1-1} H^{1,0}_K(\BBR_+)=\bigoplus_{\ell=0}^\infty span\left\{\phi_1^{(2\ell+1)}\right\}, \end{equation} and \begin{equation}\label{2-1-2} H^{1}_K(\BBR_+)=\bigoplus_{\ell=0}^\infty span\left\{\phi_1^{(2\ell)}\right\}. \end{equation} Note that $\phi\in H^1_K(\BBR_+)$ such that $\phi_x(0)=0$ (resp. $\phi(0)=0$) implies $\tilde\phi\in H^1_K(\BBR)$ (resp. $\phi^*\in H^1_K(\BBR)$). Furthermore, $\phi_1$ is an eigenfunction of $\CL^+_K$ in $H^1_K(\BBR_+^n)$ with Neumann boundary condition on $\partial\BBR_+^n$, while $\partial_{x_n}\phi_1$ is an eigenfunction of $\CL^+_K$ in $H^1_K(\BBR_+^n)$ with Dirichlet boundary condition on $\partial\BBR_+^n$. We list below two important properties of $H^1_K(\BBR_+)$ valid for any $\gb>0$. They are proved in \cite[Prop. 1.12]{EK} for $H^1_{K^\gb}(\BBR)$, but the proof remains valid for $H^1_{K^\gb}(\BBR_+)$. \begin{equation}\label{imbed}\begin{array}{lll} \!\!\!\!\!\!(i) \qquad\, &\phi\in H^1_{K^\gb}(\BBR_+)\Longrightarrow K^{\frac{\gb}{2}}\phi\in C^{0,\frac{1}{2}}(\BBR_+)\\[1mm] \!\!\!\!\!\!(ii)\qquad &H^1_{K^\gb}(\BBR_+)\hookrightarrow L^2_{K^\gb}(\BBR_+)\qquad\text{is compact.} \end{array} \end{equation} \noindent \subsection{Proof of \rth{vssth}-(i)-(ii)} Assume $p\geq 2$; then $\frac{1}{2(p-1)}\leq \frac 12$. If $\gw$ is a weak solution, then $$ \myint{0}{\infty}\left(\gw'^2-\frac{1}{2(p-1)}\gw^2\right)Kd\eta+|\gw|^{p+1}(0)= 0. $$ If $\frac 12> \frac{1}{2(p-1)}$ we deduce that $\gw=0$. Furthermore, when $\frac 12= \frac{1}{2(p-1)}$, then $$|\gw|^{p+1}(0)= 0. $$ If $\gw$ is nonzero, it is an eigenfunction of $\CL^{+,D}_K$.
Since the first Dirichlet eigenvalue is $1$, this would imply $1=\frac{1}{2(p-1)}\leq \frac 12$, a contradiction. \noindent Assume $1<p\leq \frac3{2}$ and $\gw$ is a nonnegative weak solution. We take $\gz(\eta)=\eta e^{-\frac{\eta^2}{4}}$, which satisfies $\CL_K\gz=\gz$, $\gz(0)=0$ and $\gz'(0)=1$; then $$\begin{array} {lll} \myint{0}{\infty}\left(-\gz''-\myfrac{\eta}{2}\gz'-\myfrac{1}{2(p-1)}\gz\right)\gw K(\eta)d\eta-\gz'(0)\gw(0) =0. \end{array}$$ Since $-\gz''-\myfrac{\eta}{2}\gz'=\gz>0$ in $\BBR_+$ and $\gz'(0)=1$, we derive $\gw\gz=0$ if $1<\frac{1}{2(p-1)}$ and $\gw(0)=0$ if $1=\frac{1}{2(p-1)}$. In either case $\gw(0)=0$, hence $\gw'(0)=0$ by the boundary condition and $\gw\equiv 0$ by the Cauchy-Lipschitz theorem. \hspace{10mm} $\square$ \subsection{Proof of \rth{vssth}-(iii)} We define the following functional on $H^1_K(\BBR_+)$ \begin{equation}\label{2-2-1} J(\phi)=\myfrac{1}{2}\myint{0}{\infty}\left(\phi'^2-\myfrac{1}{2(p-1)}\phi^2\right) Kd\eta+\myfrac{1}{p+1}|\phi(0)|^{p+1}. \end{equation} \blemma{funct} The functional $J$ is weakly lower semicontinuous in $H^1_{K}(\BBR_+)$. It tends to infinity at infinity and achieves negative values. \end{sub} \note{Proof} We write $$J(\psi)=J_1(\psi)-J_2(\psi)=J_1(\psi)-\frac{1}{2(p-1)}\norm\psi_{L^2_K}^2.$$ Clearly $J_1$ is convex and $J_2$ is continuous in the weak topology of $H^1_K(\BBR_+)$, since the imbedding of $H^1_K(\BBR_+)$ into $L^2_K(\BBR_+)$ is compact. Hence $J$ is weakly lower semicontinuous in $H^1_K(\BBR_+)$. \noindent Let $\ge>0$, then $$J(\ge\phi_1)=\left(\myfrac{1}{4}-\myfrac{1}{4(p-1)}\right)\myfrac{\ge^2\sqrt\gp}{2}+\myfrac{\ge^{p+1}}{p+1}. $$ Since $1<p<2$, $\frac{1}{4}-\frac{1}{4(p-1)}<0$. Hence $J(\ge\phi_1)<0$ for $\ge$ small enough, thus $J$ achieves negative values on $H^1_{K}(\BBR_+)$. \noindent If $\psi\in H^1_{K}(\BBR_+)$, it can be written in a unique way in the form $\psi=a\phi_1+\psi_1$ where $a=2\sqrt\gp\psi(0)$ and $\psi_1\in H^{1,0}_K(\BBR_+)$.
Hence, for any $\ge>0$, $$\begin{array} {lll}J(\psi)=\myfrac{1}{2}\myint{0}{\infty}\left(\psi_1'^2-\myfrac{1}{2(p-1)}\psi_1^2\right) Kd\eta +\myfrac{a^2}{2}\myint{0}{\infty}\left(\phi_1'^2-\myfrac{1}{2(p-1)}\phi_1^2\right) Kd\eta \\[4mm] \phantom{J(\psi)=} +a\myint{0}{\infty}\left(\psi_1'\phi_1'-\myfrac{1}{2(p-1)}\psi_1\phi_1\right) Kd\eta +\myfrac{1}{p+1}|a|^{p+1} \\[4mm]\phantom{J(\psi)} \geq \myfrac{2p-3}{4(p-1)}\myint{0}{\infty}\psi_1'^2 Kd\eta-\myfrac{a\ge}{2}\myint{0}{\infty}\left(\psi_1'^2+\myfrac{1}{2(p-1)}\psi_1^2\right) K d\eta\\[4mm]\phantom{J(\psi)=} +\myfrac{a^2(p-2)\sqrt\gp}{4(p-1)}-\myfrac{ap\sqrt\gp}{4(p-1)\ge}+\myfrac{1}{p+1}|a|^{p+1}. \end{array}$$ Note that $\norm{\psi}^2_{H^1_K}\leq 4\left(\norm{\psi'_1}^2_{L^2_K}+a^2\right)$. Since $2p-3>0$, we can take $\ge>0$ small enough so that \begin{equation}\label{2-2-9}\displaystyle \lim_{\norm\psi_{H^1_K}\to\infty}J(\psi)=\infty. \end{equation} \hspace{10mm} $\square$ \noindent{\it End of the proof of \rth{vssth}-(iii)}. By \rlemma{funct} the functional $J$ achieves its minimum in $H^{1}_K(\BBR_+)$ at some $\gw_s\neq 0$, and $\gw_s$ can be assumed to be nonnegative since $J$ is even. By the strong maximum principle $\gw_s>0$, and by the method used in the proof of \cite[Proposition 1]{MoV} it is easy to prove that positive solutions belong to $H^2_K(\BBR_+)$. Assume that $\tilde\gw_s$ is another positive solution; then $$\myint{0}{\infty}\left(\myfrac{(K\gw'_s)'}{\gw_s}-\myfrac{(K\tilde\gw'_s)'}{\tilde\gw_s}\right) (\gw_s^2-\tilde\gw_s^2)d\eta=0.
$$ Integration by parts, easily justified by regularity, yields $$\begin{array} {lll} \myint{0}{\infty}\left(\myfrac{(K\gw'_s)'}{\gw_s}-\myfrac{(K\tilde\gw'_s)'}{\tilde\gw_s}\right) (\gw_s^2-\tilde\gw_s^2)d\eta\\[4mm] \phantom{-----} =\left[K\gw_s'\left(\gw_s-\myfrac{\tilde\gw_s^2}{\gw_s}\right)-K\tilde\gw_s'\left(\myfrac{\gw_s^2}{\tilde\gw_s}-\tilde\gw_s\right)\right]^\infty_0\\[4mm] \phantom{-------}-\myint{0}{\infty}\left(\gw_s-\myfrac{\tilde\gw_s^2}{\gw_s}\right)'K\gw_s'd\eta+ \myint{0}{\infty}\left(\myfrac{\gw_s^2}{\tilde\gw_s}-\tilde\gw_s\right)'K\tilde\gw_s'd\eta\\[4mm] \phantom{-----} =-\left(\gw_s^{p-1}-\tilde\gw_s^{p-1}\right)\left(\gw_s^{2}-\tilde\gw_s^{2}\right)(0) \\[4mm] \phantom{-------} -\myint{0}{\infty}\left(\left(\myfrac{\gw_s'\tilde\gw_s-\gw_s\tilde\gw_s'}{\tilde\gw_s}\right)^2+\left(\myfrac{\gw_s\tilde\gw_s'-\tilde\gw_s\gw_s'}{\gw_s}\right)^2\right)Kd\eta. \end{array} $$ This implies that $\gw_s=\tilde\gw_s$. The proof of $(\ref{I-1-7})$ is similar to the proof of estimate (2.5) in \cite[Theorem 4.1]{MV1}.\hspace{10mm} $\square$ \subsection{The explicit approach} This part adapts to our problem the analysis carried out in \cite{FQ} for the blow-up problem for equation $(\ref{I-1-19})$. Let $\gw$ be a solution of \begin{equation}\label{W-0} \gw''+\myfrac12\eta \gw'+\myfrac1{2(p-1)}\gw=0\qquad\text{in }\; \BBR_+. \end{equation} We set $$r=\frac{\eta^2}{4} \, \text{ and }\; \gw(\eta)=r^{-\frac{1}{4}}e^{-\frac{r}{2}}Z(r). $$ Then $Z$ satisfies the Whittaker equation (with the standard notations) \begin{equation}\label{W-1} Z_{rr}+\left(-\myfrac{1}{4}+\myfrac{k}{r}+\myfrac{1-4\mu^2}{4r^2}\right)Z=0 \end{equation} where $k=\frac{1}{2(p-1)}-\frac 14$ and $\mu=\frac 14$. Notice that the only difference with the expression in \cite[Lemma 3.1]{FQ} is the value of the coefficient $k$.
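Before extracting the explicit solutions, one can probe $(\ref{W-0})$ numerically: generic solutions decay only algebraically, like $\eta^{-\frac{1}{p-1}}$, so the ratio of $|\gw|$ at $\eta=24$ and $\eta=12$ should be close to $2^{-\frac{1}{p-1}}$. A hand-rolled Runge-Kutta sketch, in which the value $p=7/4$, the initial data and the step size are illustrative assumptions:

```python
def integrate_w(p, w0, dw0, h, n_steps, sample_steps):
    # RK4 integration of w'' + (eta/2) w' + w/(2(p-1)) = 0, eq. (W-0),
    # recording w at the requested step indices.
    lam = 1.0 / (2.0 * (p - 1.0))
    def rhs(eta, w, dw):
        return dw, -(eta / 2.0) * dw - lam * w
    w, dw = w0, dw0
    out = {}
    for i in range(n_steps):
        eta = i * h
        k1w, k1d = rhs(eta, w, dw)
        k2w, k2d = rhs(eta + h/2, w + h/2*k1w, dw + h/2*k1d)
        k3w, k3d = rhs(eta + h/2, w + h/2*k2w, dw + h/2*k2d)
        k4w, k4d = rhs(eta + h, w + h*k3w, dw + h*k3d)
        w  += h/6 * (k1w + 2*k2w + 2*k3w + k4w)
        dw += h/6 * (k1d + 2*k2d + 2*k3d + k4d)
        if i + 1 in sample_steps:
            out[i + 1] = w
    return out

p, h = 1.75, 1e-3
s = integrate_w(p, 1.0, 0.0, h, 24000, {12000, 24000})
ratio = abs(s[24000] / s[12000])          # |w(24)/w(12)|
expected = 0.5 ** (1.0 / (p - 1.0))       # 2^(-4/3), about 0.397
```

The fast-decaying solution is exponentially negligible at these values of $\eta$, so generic initial data exhibit the algebraic rate up to the $O(\eta^{-2})$ correction.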
This equation admits two linearly independent solutions $$Z_1(r)=e^{-\frac r2}r^{\frac 12+\mu}U\left(\tfrac{1}{2}+\mu-k,1+2\mu,r\right), $$ and $$Z_2(r)=e^{-\frac r2}r^{\frac 12+\mu}M\left(\tfrac{1}{2}+\mu-k,1+2\mu,r\right). $$ The functions $U$ and $M$ are the confluent hypergeometric (Kummer) functions, which play an important role not only in analysis but also in group theory. They have the following asymptotic expansions as $r\to\infty$ (see e.g. \cite{AS}), $$U\left(\tfrac{1}{2}+\mu-k,1+2\mu,r\right)=r^{k-\mu-\frac 12}\left(1+O(r^{-1})\right)=r^{\frac {1}{2(p-1)}-1}\left(1+O(r^{-1})\right), $$ and $$\begin{array} {lll} M\left(\tfrac{1}{2}+\mu-k,1+2\mu,r\right)=\myfrac{\Gamma(1+2\mu)}{\Gamma(\tfrac{1}{2}+\mu-k)}e^rr^{-(\mu+\frac12+k)}\left(1+O(r^{-1})\right)\\[4mm] \phantom{M\left(\tfrac{1}{2}+\mu-k,1+2\mu,r\right)} =\myfrac{\Gamma(\tfrac 32)}{\Gamma(1-\tfrac{1}{2(p-1)})}e^rr^{-\frac p{2(p-1)}}\left(1+O(r^{-1})\right). \end{array}$$ Then $$Z_1(r)=r^{\frac{1}{2(p-1)}-\frac 14}e^{-\frac{r}{2}}\left(1+O(r^{-1})\right), $$ and $$Z_2(r)=\myfrac{\Gamma(\tfrac 32)}{\Gamma(1-\tfrac{1}{2(p-1)})}r^{\frac 14-\frac{1}{2(p-1)}}e^{\frac r2}\left(1+O(r^{-1})\right).
$$ To these correspond two linearly independent solutions $\gw_1$ and $\gw_2$ of $(\ref{W-0})$ with the following behaviour as $\eta\to\infty$: \begin{equation}\label{W-2}\begin{array}{lll} (i)\qquad&\gw_1(\eta)=c_1\eta^{\frac{1}{p-1}-1}e^{-\frac{\eta^2}{4}}\left(1+O(\eta^{-2})\right),\\[4mm] (ii)\qquad&\gw_2(\eta)=c_2\eta^{-\frac{1}{p-1}}\left(1+O(\eta^{-2})\right). \end{array}\end{equation} Clearly only $\gw_1$ satisfies the decay estimate $\gw(\eta)=o(\eta^{-\frac{1}{p-1}})$ as $\eta\to\infty$. Hence the solution $\gw$ is a multiple of $\gw_1$, and the multiplicative constant $c$ is adjusted in order to fit the condition $\gw'(0)=\gw^p(0)$. \section{Problem with measure data} \subsection{The regular problem} Set $G(r)=\int_0^rg(s)ds$. We consider the functional $J$ in $L^2(\BBR_+)$ with domain $D(J)=H^1(\BBR_+)$ defined by $$ J(u)=\myfrac12\myint{0}{\infty}u_x^2dx+G(u(0)). $$ It is convex and lower semicontinuous in $L^2(\BBR_+)$ and its subdifferential $\partial J$ satisfies $$\myint{0}{\infty}\partial J(u)\gz dx=\myint{0}{\infty}u _x\gz _x dx+g(u(0))\gz(0), $$ for all $\gz\in H^1(\BBR_+)$. Therefore $$\myint{0}{\infty}\partial J(u)\gz dx=-\myint{0}{\infty}u_{xx}\gz dx+(g(u(0))-u_x(0))\gz(0). $$ Hence \begin{equation}\label{II-1-1} \partial J(u)=-u_{xx}\,\;\text{for all }\, u\in D(\partial J)=\{v\in H^1(\BBR_+):v_x(0)=g(v(0))\}. \end{equation} The operator $\partial J$ is maximal monotone, hence it generates a semi-group of contractions. Furthermore, for any $u_0\in L^2(\BBR_+)$ and $F\in L^2(0,T;L^2(\BBR_+))$ there exists a unique strong solution to \begin{equation}\label{II-1-2}\begin{array} {lll} U_t+\partial J(U)=F\qquad\text{a.e. on }\, (0,T)\\ U(0)=u_0. \end{array}\end{equation} \bprop{Brcase} Let $\mu\in H^1(0,T)$ and $\gn\in L^2(\BBR_+)$. Then there exists a unique function $u\in C([0,T];L^2(\BBR_+))$ such that $\sqrt t\,u_{xx}\in L^2((0,T)\times\BBR_+)$ which satisfies $(\ref{II-1-3})$.
The mapping $(\mu,\gn)\mapsto u:=u_{\mu,\gn}$ is non-decreasing and $u$ is a weak solution in the sense that it satisfies $(\ref{I-1-10})$. \end{sub} \note{Proof} Let $\eta\in C^2_0([0,\infty))$ be such that $\eta (0)=0$ and $\eta' (0)=-1$. If $\mu\in H^1(0,T)$, $\gn\in L^2(\BBR_+)$, and $u$ is a solution of \begin{equation}\label{II-1-3}\begin{array} {lll} u_t-u_{xx}=0\qquad&\text{in }\;Q^T_{\BBR_+}\\[1mm] -u_x(.,0)+g(u(.,0))=\mu(t)\qquad&\text{in }\;[0,T)\\[1mm] u(0,.)=\gn\qquad&\text{in }\;\BBR_+, \end{array} \end{equation} then the function $v(t,x)=u(t,x)-\mu(t)\eta(x)$ satisfies \begin{equation}\label{II-1-4}\begin{array} {lll} v_t-v_{xx}=F\qquad&\text{in }\;Q^T_{\BBR_+}\\[1mm] -v_x(.,0)+g(v(.,0))=0\qquad&\text{in }\;[0,T)\\[1mm] v(0,.)=\gn-\mu(0)\eta\qquad&\text{in }\;\BBR_+, \end{array} \end{equation} with $F(t,x)=\mu(t)\eta''(x)-\mu'(t)\eta(x)$. The existence then follows by using \cite[Theorem 3.6]{Br-OMM}. \\ \noindent Next, let $(\tilde \mu,\tilde\gn)\in H^1(0,T)\times L^2(\BBR_+)$ be such that $\tilde \mu\leq \mu$ and $\tilde\gn\leq\gn$, and let $\tilde u=u_{\tilde \mu,\tilde\gn}$; then $$\begin{array} {lll}\myfrac 1{2}\myfrac d{dt}\myint{0}{\infty}(\tilde u-u)^2_+dx+\myint{0}{\infty}\left(\partial_x(\tilde u-u)_+\right)^2dx-\left(\tilde \mu(t)-\mu(t)\right)(\tilde u(t,0)-u(t,0))_+\\[4mm]\phantom{-----------} +\left(g(\tilde u(t,0))-g(u(t,0))\right)(\tilde u(t,0)-u(t,0))_+=0.
\end{array}$$ Then $$\myint{0}{\infty}(\tilde u-u)^2_+dx\lfloor_{t=0}=0\;\Longrightarrow\myint{0}{\infty}(\tilde u-u)^2_+dx=0\qquad\text{ on }\,[0,T].$$ We can also use $(\ref{I-1-14})$ to express the solution of $(\ref{II-1-3})$: $$u(t,x)=\myint{0}{\infty}\tilde E(t,x,y)\gn(y)dy+2\myint{0}{t}E(t-s,x) (\mu(s)-g(u(s,0)))ds. $$ In particular, if $g(0)=0$, then $$|u(t,x)|\leq \myint{0}{\infty}\tilde E(t,x,y)|\gn(y)|dy+2\myint{0}{t}E(t-s,x) |\mu(s)|ds. $$ The identity $(\ref{I-1-10})$ follows since $u$ is a strong solution. \hspace{10mm} $\square$ Next, we prove that the problem is well posed if $\mu\in L^1(0,T)$. \bprop{L^1} Assume $\{\gn_n\}\subset C_c(\BBR_+)$ and $\{\mu_n\}\subset C^1([0,T])$ are Cauchy sequences in $L^1(\BBR_+)$ and $L^1(0,T)$ respectively. Then the sequence $\{u_n\}$ of solutions of \begin{equation}\label{II-1-5}\begin{array} {lll} u_{n\,t}-u_{n\,xx}=0\qquad&\text{in }\;Q^T_{\BBR_+}\\[1mm] -u_{n\,x}(.,0)+g(u_n(.,0))=\mu_n(t)\qquad&\text{in }\;[0,T)\\[1mm] u_n(0,.)=\gn_n\qquad&\text{in }\;\BBR_+, \end{array} \end{equation} converges in $C([0,T];L^1(\BBR_+))$ to a function $u$ which satisfies $(\ref{I-1-10})$. \end{sub} \note{Proof} For $\ge>0$ let $p_\ge$ be an odd $C^1$ function defined on $\BBR$ such that $p_\ge'\geq 0$ and $p_\ge(r)=1$ on $[\ge,\infty)$, and put $j_\ge(r)=\int_0^rp_\ge(s)ds$. Then $$\begin{array} {lll}\myfrac {d}{dt}\myint{0}{\infty}j_\ge(u_n-u_m)dx+\myint{0}{\infty}(u_{n\,x}-u_{m\,x})^2p'_\ge(u_n-u_m)dx\\[4mm]\phantom{--------} +\left(g(u_n(t,0))-g(u_m(t,0))\right)p_\ge(u_n(t,0)-u_m(t,0))\\[4mm] \phantom{-----------}=\left(\mu_n(t)-\mu_m(t)\right)p_\ge(u_n(t,0)-u_m(t,0)).
\end{array}$$ Hence $$\begin{array} {lll}\myint{0}{\infty}j_\ge(u_n-u_m)(t,x)dx+\left(g(u_n(t,0))-g(u_m(t,0))\right)p_\ge(u_n(t,0)-u_m(t,0)) \\[4mm] \phantom{------} \leq\myint{0}{\infty}j_\ge(\gn_n-\gn_m)dx+\left(\mu_n(t)-\mu_m(t)\right)p_\ge(u_n(t,0)-u_m(t,0)). \end{array}$$ Letting $\ge\to 0$, so that $p_\ge\to sign_0$, we obtain for any $t\in [0,T]$, \begin{equation}\label{II-1-6}\begin{array} {lll}\myint{0}{\infty}|u_n-u_m|(t,x)dx+|g(u_n(t,0))-g(u_m(t,0))| \\[4mm] \phantom{-----------} \leq\myint{0}{\infty}|\gn_n-\gn_m|dx+|\mu_n(t)-\mu_m(t)|. \end{array} \end{equation} Therefore $\{u_n\}$ and $\{g(u_n(.,0))\}$ are Cauchy sequences in $C([0,T];L^1(\BBR_+))$ and $C([0,T])$ respectively, with limits $u$ and $g(u(.,0))$, and $u=u_{\gn,\mu}$ satisfies $(\ref{I-1-10})$. If we assume that $(\gn,\tilde\gn)$ and $(\mu,\tilde\mu)$ are couples of elements of $L^1(\BBR_+)$ and $L^1(0,T)$ respectively and if $u=u_{\gn,\mu}$ and $\tilde u=u_{\tilde\gn,\tilde\mu}$, there holds by the above technique, \begin{equation}\label{II-1-7}\begin{array} {lll}\myint{0}{\infty}|u-\tilde u|(t,x)dx+|g(u(t,0))-g(\tilde u(t,0))| \\[4mm] \phantom{------} \leq\myint{0}{\infty}|\gn-\tilde\gn|dx+|\mu(t)-\tilde\mu(t)|\qquad\text{for all }\,t\in [0,T]. \end{array} \end{equation} \hspace{10mm} $\square$ The following lemma is a parabolic version of an inequality due to Brezis.
\blemma{Brtype} Let $\gn \in L^1(\BBR_+)$ and $\mu \in L^1(0,T)$ and $v$ be a function defined in $[0,T)\times\BBR_+$, belonging to $L^1(Q^T_{\BBR_+})\cap L^1(\partial_\ell Q^T_{\BBR_+})$ and satisfying \begin{equation}\label{II-1-8}\begin{array} {lll} -\myint{0}{T}\!\!\myint{0}{\infty}(\gz_t+\gz_{xx})vdxdt=\myint{0}{T}\gz(.,0) \mu dt+\myint{0}{\infty}\gn\gz dx \end{array} \end{equation} for every $\gz\in\BBX(Q^T_{\BBR_+})$. Then for any $\gz\in\BBX(Q^T_{\BBR_+})$, $\gz\geq 0$, there holds \begin{equation}\label{II-1-9}\begin{array} {lll} -\myint{0}{T}\!\!\myint{0}{\infty}(\gz_t+\gz_{xx})|v|dxdt\leq\myint{0}{T}\gz(.,0) sign (v)\mu dt+\myint{0}{\infty}|\gn|\gz dx. \end{array}\end{equation} Similarly \begin{equation}\label{II-1-10}\begin{array} {lll} -\myint{0}{T}\!\!\myint{0}{\infty}(\gz_t+\gz_{xx})v_+dxdt\leq\myint{0}{T}\gz(.,0) sign_+ (v)\mu dt+\myint{0}{\infty}\gn_+\gz dx. \end{array}\end{equation} \end{sub} \note{Proof} Let $p_\ge$ be the approximation of $sign_0$ used in \rprop{L^1} and $\eta_\ge$ be the solution of $$\begin{array}{lll} -\eta_{\ge\,t}-\eta_{\ge\,xx}=p_\ge(v)\qquad&\text{in }Q^T_{\BBR_+}\\[1mm] \phantom{--,} \eta_{\ge\,x}(.,0)=0&\text{in }[0,T]\\[1mm] \phantom{---} \eta_\ge(0,.)=0&\text{in }\BBR_+. \end{array}$$ Then $|\eta_\ge|\leq \eta^*$ where $\eta^*$ satisfies $$\begin{array}{lll} -\eta^*_{t}-\eta^*_{xx}=1\qquad&\text{in }Q^T_{\BBR_+}\\[1mm] \phantom{--} \eta^*_{x}(.,0)=0&\text{in }[0,T]\\[1mm] \phantom{--} \eta^*(0,.)=0&\text{in }\BBR_+. \end{array}$$ Although $\eta_\ge$ does not belong to $\BBX(Q^T_{\BBR_+})$ (it is not in $C^{1,2}([0,T)\times\BBR_+)$), it is an admissible test function and we deduce that there exists a unique solution to $(\ref{II-1-8})$.
Thus $v$ is given by expression $(\ref{I-1-14})$.\par In order to prove $(\ref{II-1-9})$, we can assume that $\mu$ and $\gn$ are smooth, $\gz\in \BBX(Q^T_{\BBR_+})$, $\gz\geq 0$, and set $h_\ge=p_\ge(v)\gz$ and $w_\ge=vp_\ge(v)$; then \begin{equation}\label{II-1-11}\begin{array} {lll}\myint{0}{\infty}h_{\ge\,xx}vdx=\myint{0}{\infty}\left(2p'_\ge(v)v_x\gz_x+p_\ge(v)\gz_{xx}+\gz(p_\ge(v))_{xx}\right)vdx\\[4mm] \phantom{\myint{0}{\infty}} =\myint{0}{\infty}\left(2vp'_\ge(v)v_x\gz_x-w_{\ge\, x}\gz_x-(v\gz)_x(p_\ge(v))_x\right)dx\\[0mm] \phantom{\myint{0}{\infty}------------} -\gz(t,0)v(t,0)p'_\ge(v(t,0))v_x(t,0) \\[0mm] \phantom{\myint{0}{\infty}} =-\myint{0}{\infty}\left(\gz_x(j_\ge(v))_x+\gz p'_\ge(v) v_x^2\right)dx-\gz(t,0)v(t,0)p'_\ge(v(t,0))v_x(t,0) \\[0mm] \phantom{\myint{0}{\infty}} =-\myint{0}{\infty}\left(\gz p'_\ge(v) v_x^2 -j_\ge(v)\gz_{xx}\right)dx-\gz(t,0)v(t,0)p'_\ge(v(t,0))v_x(t,0), \end{array}\end{equation} and \begin{equation}\label{II-1-12}\begin{array} {lll} \myint{0}{T}h_{\ge\,t}vdt=\myint{0}{T}(p_\ge(v)\gz_t+p'_\ge(v)\gz v_t)vdt. \end{array}\end{equation} Since $v$ is smooth $$\begin{array} {lll} 0=\myint{0}{T}\myint{0}{\infty}(v_t-v_{xx})h_\ge dx dt\\ \phantom{0} =-\myint{0}{T}\myint{0}{\infty}(h_{\ge\,t}+h_{\ge\,xx})v dx dt-\myint{0}{\infty}h_\ge(0,x)\gn(x)dx\\ \phantom{0---------} - \myint{0}{T}\left[p_{\ge}(v(t,0))-v(t,0)p'_{\ge}(v(t,0))\right] \gz(t,0)\mu(t)dt. \end{array}$$ Therefore, using $(\ref{II-1-11})$ and $(\ref{II-1-12})$, \begin{equation}\label{II-1-13}\begin{array} {lll} -\myint{0}{T}\myint{0}{\infty}\left(j_\ge(v)\gz_{xx}+vp_{\ge}(v)\gz_t\right)dxdt\\[4mm] \phantom{---------}+\myint{0}{T}\myint{0}{\infty}\left(\gz p_\ge'(v)v_x^2-vp'_\ge(v)v_t\gz\right) dxdt \\[4mm] \phantom{---------} =\myint{0}{\infty}h_\ge(0,x)\gn(x) dx+\myint{0}{T}h_\ge(t,0)\mu(t) dt.
\end{array}\end{equation} Put $\ell_\ge(s)=\int_0^srp'_\ge(r) dr$; since $p'_\ge$ is supported in $[-\ge,\ge]$ with $\int_0^\ge p'_\ge=1$, there holds $|\ell_\ge (s)|\leq \min\{c\,\ge^{-1}s^2,\ge\}$. Since $$\begin{array} {lll} \myint{0}{T}\myint{0}{\infty}\gz v p'_\ge(v)v_t dxdt=-\myint{0}{\infty}\ell_\ge(v(0,x))\gz(0,x)dx-\myint{0}{T}\myint{0}{\infty}\gz_t \ell_\ge(v)dxdt, \end{array}$$ and $\gz$ has compact support, it follows that $$\displaystyle \lim_{\ge\to 0}\myint{0}{T}\myint{0}{\infty}\gz v p'_\ge(v)v_t dxdt=0. $$ Letting $\ge\to 0$ in $(\ref{II-1-13})$, we derive $(\ref{II-1-9})$ for smooth $v$. Using \rprop{L^1} completes the proof of $(\ref{II-1-9})$. The proof of $(\ref{II-1-10})$ is similar.\hspace{10mm} $\square$ \noindent \note{Remark} Inequalities $(\ref{II-1-9})$ and $(\ref{II-1-10})$ remain valid if $\gz(t,x)$ does not vanish for $|x|\geq R$ for some $R$, provided it satisfies \begin{equation}\label{II-1-14}\begin{array} {lll}\displaystyle \lim_{x\to\infty}\sup_{t\in [0,T]}(\gz(t,x)+|\gz_x(t,x)|)=0. \end{array}\end{equation} The proof follows by replacing $\gz(t,x)$ by $\gz(t,x)\eta_n(x)$ where $\eta_n\in C^\infty_c(\BBR_+)$ with $0\leq \eta_n\leq 1$, $\eta_n(x)=1$ on $[0,n]$, $\eta_n(x)=0$ on $[n+1,\infty)$, $|\eta'_n|\leq 2$, $|\eta''_n|\leq 4$. Then $\eta_n\gz\in\BBX(Q_{\BBR_+}^T)$ and the conclusion follows by letting $n\to\infty$. \subsection{Proof of \rth{exist-uniq}} We give first some {\it heat-ball} estimates relative to our problem. For $r>0$, $x\in\BBR_+$ and $t\in\BBR$ we set \begin{equation}\label{III-1-1e} \begin{array}{lll} e(t,x;r)=\left\{(s,y)\in (0,T)\times\BBR_+:s\leq t,\, \tilde E(t-s,x,y)\geq r\right\}.
\end{array} \end{equation} Since $$e(t,x;r)\subset [t-\tfrac{1}{4\gp er^2},t]\times[x-\tfrac{1}{r\sqrt{\gp e}},x+\tfrac{1}{r\sqrt{\gp e}}],$$ there holds \begin{equation}\label{III-1-2e}|e(t,x;r)|\leq \myfrac{1}{2r^3(\gp e)^{\frac32}}, \end{equation} and if \begin{equation}\label{III-1-3e} e^*(t;r)=\left\{s\in (0,T):s\leq t,\, E(t-s,0)\geq r\right\}, \end{equation} then we have \begin{equation}\label{III-1-4e}e^*(t;r)\subset [t-\tfrac{1}{4\gp er^2},t]\Longrightarrow |e^*(t;r)|\leq \myfrac{1}{4r^2\gp e}. \end{equation} If $G$ is a measure space, $\gl$ a positive measure on $G$ and $q>1$, $M^q(G,\gl)$ is the Marcinkiewicz space of measurable functions $f:G\mapsto \BBR$ satisfying for some constant $c>0$ and all measurable sets $E\subset G$, \begin{equation}\label{IIl-1-4} \myint{E}{}|f|d\gl\leq c\left(\gl (E)\right)^{\frac{1}{q'}}, \end{equation} where $q'=\frac{q}{q-1}$, and $$\norm f_{M^q(G,\gl)}=\inf\{c>0\,\text{ s.t. $(\ref{IIl-1-4})$ holds}\}. $$ \blemma{reg1} Assume $\mu$ and $\gn$ are bounded measures on $\partial_\ell Q^T_{\BBR_+}$ and $\BBR_+$ respectively, and let $v_{\gn,\mu}$ be the solution of $(\ref{I-1-13})$ given by $(\ref{I-1-14})$. Then \begin{equation}\label{IIl-1-5} \norm{v_{\gn,\mu}}_{M^3(Q^T_{\BBR_+})}+\norm{v_{\gn,\mu}\lfloor_{\partial Q_{\BBR_+}^T}}_{M^2(\partial Q_{\BBR_+}^T)}\leq c \left(\norm{\mu}_{\mathfrak M(\partial Q_{\BBR_+}^T)}+\norm\gn_{\mathfrak M(Q^T_{\BBR_+})}\right).
\end{equation} \end{sub} \note{Proof} First we consider $v_{0,\mu}$: $$v_{0,\mu}(t,x)=2\myint{0}{t}E(t-s,x)d\mu(s).$$ If $F\subset [0,T]$ is a Borel set, then for any $\gt>0$ $$\begin{array} {lll}\myint{F}{}E(t-s,0)ds=\myint{F\cap \left\{E\leq \gt\right\}}{}E(t-s,0)ds+\myint{F\cap \left\{E> \gt\right\}}{}E(t-s,0)ds\\[4mm] \phantom{\myint{F}{}E(t-s,0)ds}\leq \gt|F|+\myint{\left\{E> \gt\right\}}{}E(t-s,0)ds\\[4mm] \phantom{\myint{F}{}E(t-s,0)ds}= \gt|F|-\myint{\gt}{\infty}\gl\, d|e^*(t;\gl)|\\[4mm] \phantom{\myint{F}{}E(t-s,0)ds}\leq \gt|F|+\myfrac{1}{4\gp e\gt}. \end{array}$$ If we choose $\gt^2=\frac{1}{4\gp e|F|}$, we derive \begin{equation}\label{IIl-1-6} \myint{F}{}E(t-s,0)ds\leq \myfrac{|F|^{\frac 12}}{\sqrt{\gp e}}. \end{equation} If $F\subset (0,T)$ is a Borel set then $$\left|\myint{F}{}v_{0,\mu}(t,0)dt\right|=2\left|\myint{0}{T}\myint{F}{}E(t-s,0)\,dt\,d\mu(s)\right|\leq \myfrac{2|F|^{\frac 12}}{\sqrt{\gp e}}\norm{\mu}_{\mathfrak M(\partial Q_{\BBR_+}^T)}. $$ This proves that \begin{equation}\label{IIl-1-7} \norm{v_{0,\mu}\lfloor_{\partial Q_{\BBR_+}^T}}_{M^2(\partial Q_{\BBR_+}^T)}\leq c\norm{\mu}_{\mathfrak M(\partial Q_{\BBR_+}^T)}. \end{equation} Similarly, if $G\subset [0,T]\times[0,\infty)$ is a Borel set, then \begin{equation}\label{IIl-1-8} \myint{G}{}\tilde E(t-s,x,y)\,dsdy\leq \myfrac{2|G|^{\frac 13}}{\sqrt{\gp e}}, \end{equation} and \begin{equation}\label{IIl-1-9} \norm{v_{0,\mu}}_{M^3( Q_{\BBR_+}^T)}\leq c\norm{\mu}_{\mathfrak M(\partial Q_{\BBR_+}^T)}.
\end{equation} In the same way we prove that \begin{equation}\label{IIl-1-10} \norm{v_{\gn,0}}_{M^3(Q^T_{\BBR_+})}+\norm{v_{\gn,0}\lfloor_{\partial Q_{\BBR_+}^T}}_{M^2(\partial Q_{\BBR_+}^T)}\leq c \norm\gn_{\mathfrak M(Q^T_{\BBR_+})}. \end{equation} This ends the proof.\hspace{10mm} $\square$ \noindent{\it Proof of \rth{exist-uniq}} \noindent{\it Uniqueness.} Assume $u$ and $\tilde u$ are solutions of $(\ref{I-1-0})$, then $w=u-\tilde u$ satisfies \begin{equation}\label{III-1-1}\begin{array} {lll} \phantom{g(u---} \phantom{,,,,-g(\tilde u(.,0))}w_t-w_{xx}=0\qquad&\text{in }\;Q^T_{\BBR_+}\\[1mm] \phantom{;}-w_x(.,0)+g(u(.,0))-g(\tilde u(.,0))=0\qquad&\text{in }\;[0,T)\\[1mm]\phantom{-------,,-g(\tilde u(.,0))} w(0,.)=0\qquad&\text{in }\;\BBR_+. \end{array} \end{equation} Applying $(\ref{II-1-9})$, we obtain $$-\myint{0}{T}\!\!\myint{0}{\infty}(\gz_t+\gz_{xx})|w|dxdt+\myint{0}{T}\left(g(u(t,0))-g(\tilde u(t,0))\right)sign (w(t,0))\gz(t,0) dt\leq 0, $$ for any $\gz\in \BBX(Q^T_{\BBR_+})$ with $\gz\geq 0$. Let $\theta\in C^1_c(Q^T_{\BBR_+})$, $\theta\geq 0$; we take $\gz$ to be the solution of $$\begin{array} {lll}-\gz_t-\gz_{xx}=\theta&\qquad\text{in }(0,T)\times\BBR_+\\ \phantom{-\gz_,} \gz_x(t,0)=0&\qquad\text{in }(0,T)\\ \phantom{-\gz_t} \gz(T,x)=0&\qquad\text{in }(0,\infty). \end{array}$$ Then $\gz$ satisfies $(\ref{II-1-14})$, hence $$\myint{0}{T}\!\!\myint{0}{\infty}\theta|w|dxdt+\myint{0}{T}\left(g(u(t,0))-g(\tilde u(t,0))\right)sign (w(t,0))\gz(t,0) dt\leq 0. $$ Since $g$ is nondecreasing, both terms are nonnegative, and this implies $w=0$. \noindent{\it Existence.} Without loss of generality we can assume that $\mu$ and $\gn$ are nonnegative. Let $\{\gn_n\}\subset C_c(\BBR_+)$ and $\{\mu_n\}\subset C_c([0,T))$ converging to $\gn$ and $\mu$ in the sense of measures and let $u_n$ be the solution of $(\ref{II-1-5})$.
Then from $(\ref{II-1-7})$, \begin{equation}\label{III-1-2}\begin{array} {lll} \myint{0}{T} \myint{0}{\infty}|u_n|dxdt+ \myint{0}{T}|g(u_n(t,0))|dt\leq T\myint{0}{\infty}|\gn_n|dx+ \myint{0}{T} |\mu_n|dt. \end{array} \end{equation} Therefore $u_n$ and $g(u_n(.,0))$ remain bounded respectively in $L^1(Q^T_{\BBR_+})$ and in $L^1(0,T)$. Furthermore, by \rlemma{reg1}, $u_n$ remains bounded in $M^3(Q^T_{\BBR_+})$ and in $M^2(\partial Q^T_{\BBR_+})$. We can also write $u_n$ under the form \begin{equation}\label{III-1-3}\begin{array} {lll} u_n(t,x)=\myint{0}{\infty}\tilde E(t,x,y)\gn_n(y)dy+2\myint{0}{t}E(t-s,x)(\mu_n(s)-g(u_n(s,0)))ds\\[2mm] \phantom{ u_n(t,x)}=A_n(t,x)+B_n(t,x). \end{array} \end{equation} Since we can perform the even reflection through $y=0$, the mapping $$(t,x)\mapsto A_n(t,x):=\myint{0}{\infty}\tilde E(t,x,y)\gn_n(y)dy, $$ is relatively compact in $C^m_{loc}(\overline{Q^T_{\BBR_+}})$ for any $m\in\BBN^*$. Hence we can extract a subsequence $\{A_{n_k}\}$ which converges uniformly on every compact subset of $(0,T]\times[0,\infty)$, hence a.e. on $(0,T]$ for the 1-dimensional Lebesgue measure. Concerning the boundary term $$(t,x)\mapsto B_n(t,x):=2\myint{0}{t}E(t-s,x)(\mu_n(s)-g(u_n(s,0)))ds, $$ it is relatively compact on every compact subset of $[0,T]\times(0,\infty)$. If $x=0$, then $$B_n(t,0)=\myint{0}{t}(\mu_n(s)-g(u_n(s,0)))\myfrac{ds}{\sqrt{\gp(t-s)}}. $$ Since $\norm{\mu_n(.)-g(u_n(.,0))}_{L^1(0,T)}$ is bounded, $t\mapsto B_n(t,0)$ is uniformly integrable on $(0,T)$, hence relatively compact by the Fr\'echet-Kolmogorov Theorem. Therefore there exists a subsequence, still denoted by $\{n_k\}$, such that $B_{n_k}(t,0)$ converges for almost all $t\in (0,T)$.
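(Note that, consistently with the expression of $B_n(t,0)$ above, $E$ is here the Gaussian heat kernel on the line: if $E(t,x)=(4\gp t)^{-\frac12}e^{-\frac{x^2}{4t}}$, then $$2E(t-s,0)=\myfrac{2}{\sqrt{4\gp(t-s)}}=\myfrac{1}{\sqrt{\gp(t-s)}},$$ and the integrability of $s\mapsto (t-s)^{-1/2}$ near $s=t$ is precisely what allows the uniform integrability argument.)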
This implies that the sequence of functions $\{u_{n_k}\}$ defined by $(\ref{III-1-3})$ converges in $\overline {Q^T_{\BBR_+}}$ up to a set $\Theta\cup\Lambda$, where $\Theta\subset Q^T_{\BBR_+}$ is negligible for the 2-dimensional Lebesgue measure and $\Lambda\subset \partial_\ell Q^T_{\BBR_+}$ is negligible for the 1-dimensional Lebesgue measure.\par From \rlemma{reg1}, $(u_{n_k}\lfloor_{Q^T_{\BBR_+}},u_{n_k}\lfloor_{\partial_\ell Q^T_{\BBR_+}})$ converges in $L^1_{loc}(Q^T_{\BBR_+})\times L^1(\partial_\ell Q^T_{\BBR_+})$ and the convergence of each of the components holds also almost everywhere (up to a subsequence). Since $u_{n_k}$ is a weak solution, it satisfies for any $\gz\in\BBX(Q^T_{\BBR_+})$ \begin{equation}\label{III-1-4}\begin{array} {lll} -\myint{0}{T}\!\!\myint{0}{\infty}(\gz_t+\gz_{xx})u_{n_k}dxdt+\myint{0}{T}\left(g(u_{n_k})\gz\right)(t,0)dt\\[4mm] \phantom{---------}=\myint{0}{\infty}\gz \gn_{n_k}(x)dx+ \myint{0}{T}\gz (t,0) \mu_{n_k}(t)dt. \end{array} \end{equation} In order to prove the convergence of $g(u_{n_k}(t,0))$, we use Vitali's convergence theorem and the assumption $(\ref{I-1-15})$. Let $F\subset [0,T]$ be a Borel set.
Using the fact that $0\leq u_{n_k}\leq v_{\gn_{n_k},\mu_{n_k}}$ and the estimate of \rlemma{reg1}, we have for any $\gl>0$, $$\begin{array} {lll} \myint{F}{}|g(u_{n_k}(t,0))|dt\leq \myint{F\cap\{u_{n_k}(t,0)\leq\gl\}}{}|g(u_{n_k}(t,0))|dt\\[4mm] \phantom{---------\myint{F}{}|g(u_{n_k}(t,0))|dt} +\myint{\{u_{n_k}(t,0)>\gl\}}{}|g(u_{n_k}(t,0))|dt\\[4mm] \phantom{\myint{F}{}|g(u_{n_k}(t,0))|dt} \leq g(\gl)|F|-\myint{\gl}{\infty}g(\sigma)\, d|\{t:u_{n_k}(t,0)>\sigma\}|\\[4mm] \phantom{\myint{F}{}|g(u_{n_k}(t,0))|dt} \leq g(\gl)|F|+c\myint{\gl}{\infty}|g(\sigma)|\sigma^{-3}d\sigma, \end{array}$$ where $c$ depends on $\norm\mu_{\mathfrak M(\partial Q_{\BBR_+}^T)}+\norm\gn_{\mathfrak M(Q^T_{\BBR_+})}$. For $\ge>0$ given, we choose $\gl$ large enough so that the integral term above is smaller than $\ge$, and then $|F|$ such that $g(\gl)|F|\leq\ge$. Hence $\{g(u_{n_k}(.,0))\}$ is uniformly integrable. Therefore, up to a subsequence, it converges to $g(u(.,0))$ in $L^1(0,T)$. Clearly $u$ satisfies \begin{equation}\label{III-1-5}\begin{array} {lll} -\myint{0}{T}\!\!\myint{0}{\infty}(\gz_t+\gz_{xx})udxdt+\myint{0}{T}\left(g(u)\gz\right)(t,0)dt\\[4mm] \phantom{---------}=\myint{0}{\infty}\gz\, d\gn+ \myint{0}{T}\gz (t,0)\, d\mu(t), \end{array} \end{equation} which ends the existence proof. \noindent{\it Monotonicity.} If $\gn\geq\tilde\gn$ and $\mu\geq\tilde\mu$, we can choose the approximations such that $\gn_n\geq\tilde\gn_n$ and $\mu_n\geq\tilde\mu_n$. It follows from $(\ref{II-1-10})$ that $u_{\gn_n,\mu_n}\geq u_{\tilde\gn_n,\tilde\mu_n}$.
Choosing the same subsequence $\{n_k\}$, the limits $u$, $\tilde u$ are in the same order. The conclusion follows by uniqueness. \hspace{10mm} $\square$ \subsection{The case $g(u)=|u|^{p-1}u$} Condition $(\ref{I-1-15})$ is satisfied if $p<2$. If this condition holds, there exists a solution $u_{\ell\gd_0}=u_{0,\ell\gd_0}$ and the mapping $\ell\mapsto u_{\ell\gd_0}$ is increasing. \bth{lim-k} (i) If $1<p\leq \frac{3}{2}$, $u_{\ell\gd_0}$ tends to $\infty$ when $\ell\to\infty$. \noindent(ii) If $\frac{3}{2}<p<2$, $u_{\ell\gd_0}$ converges to $U_{\gw_s}$ defined by $$U_{\gw_s}(t,x)=t^{-\frac{1}{2(p-1)}}\gw_s(\tfrac{x}{\sqrt t}),$$ when $\ell\to\infty$. \end{sub} \note{Proof} By uniqueness and using $(\ref{I-1-3})$, there holds \begin{equation}\label{III-3-1}\begin{array} {lll} T_k[u_{\ell\gd_0}]=u_{k^{\frac{2-p}{p-1}}\ell\,\gd_0}, \end{array} \end{equation} for any $k,\ell>0$. Since $\ell\mapsto u_{\ell\gd_0}$ is increasing, its limit $u_\infty$ when $\ell\to\infty$ satisfies \begin{equation}\label{III-3-2}\begin{array} {lll} T_k[u_{\infty}]=u_{\infty}. \end{array} \end{equation} Hence $u_\infty$ is a positive self-similar solution of $(\ref{I-1-2})$, provided it exists. Thus $u_\infty=U_{\gw_s}$ if $\frac{3}{2}<p<2$. If $1<p\leq \frac{3}{2}$, $u_{\ell\gd_0}$ admits no finite limit when $\ell\to\infty$, which ends the proof. \hspace{10mm} $\square$ \noindent\note{Remark} As a consequence of this result, no a priori estimate of Brezis-Friedman type (parabolic Keller-Osserman) exists for a nonnegative function $u\in C^{2,1}(\overline{Q^\infty_{\BBR_+}}\setminus\{(0,0)\})$ solution of \begin{equation}\label{III-3-3}\begin{array} {lll} \phantom{-------,,ii} u_t-u_{xx}=0\qquad&\text{in }\;Q^\infty_{\BBR_+}\\[0mm]\phantom{--} -u_x(.,0)+|u|^{p-1}u(.,0)=0&\text{for all }\;t>0 \\[1mm] \phantom{,,--------,,i}\displaystyle u(0,x)=0&\text{for all }\;x>0, \end{array} \end{equation} when $1<p\leq \frac{3}{2}$.
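\note{Remark} The identity $(\ref{III-3-1})$ can also be checked directly, assuming that $T_k$ denotes, as in $(\ref{I-1-3})$, the scaling transformation $T_k[u](t,x)=k^{\frac{1}{p-1}}u(k^2t,kx)$ (we state this normalization explicitly, since it is the one that fits the computation below). If $v=T_k[u_{\ell\gd_0}]$, the heat equation is preserved, and on the boundary $$-v_x(t,0)+|v|^{p-1}v(t,0)=k^{\frac{1}{p-1}+1}\left(-u_x(k^2t,0)+|u|^{p-1}u(k^2t,0)\right)=0,$$ since $\frac{p}{p-1}=\frac{1}{p-1}+1$, while the initial data transform according to $$v(0,x)=k^{\frac{1}{p-1}}\ell\,\gd_0(kx)=k^{\frac{1}{p-1}-1}\ell\,\gd_0(x)=k^{\frac{2-p}{p-1}}\ell\,\gd_0(x).$$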
When $\frac{3}{2}<p<2$ it is expected that \begin{equation}\label{III-3-4}\begin{array} {lll} u(t,x)\leq \myfrac{c}{(|x|^2+t)^{\frac{1}{2(p-1)}}}. \end{array} \end{equation} The type of phenomenon described in assertion (i) of \rth{lim-k} is characteristic of fractional diffusion. It has already been observed in \cite[Theorem 1.3]{CVW} for the equations \begin{equation}\label{III-3-5}\begin{array} {lll} u_t+(-\Gd)^{\alpha}u+t^{\gb}u^p=0\qquad &\text{ in }\, \BBR_+\times\BBR^N\\ \phantom{-----\,\;\;-} u(0,.)=k\gd_0&\text{ in }\, \BBR^N, \end{array} \end{equation} when $0<\alpha<1$ is small and $p>1$ is close to $1$. \section{Extension and open problems} The natural extension is to replace the one-dimensional domain by a multidimensional one. The main open problem is the question of a priori estimates, as stated in the last remark above. \subsection{Self-similar solutions} Let $\eta=(\eta_1,...,\eta_n)$ be the coordinates in $\BBR^n$ and denote $\BBR_+^n=\{\eta=(\eta_1,...,\eta_n)=(\eta',\eta_n):\eta_n>0\}$. We set $K(\eta)=e^{\frac{|\eta|^2}{4}}$ and $K'(\eta')=e^{\frac{|\eta'|^2}{4}}$. Similarly to Section 2 we define $\CL_K$ in $C_0^2(\BBR^n)$ by \begin{equation}\label{IV-1-1} \begin{array}{lll} \CL_K(\phi)=-K^{-1}div (K\nabla\phi). \end{array} \end{equation} If $\alpha=(\alpha_1,...,\alpha_n)\in\BBN^n$, we set $|\alpha|=\alpha_1+\alpha_2+...+\alpha_n$. We denote by $\phi_1$ the function $K^{-1}$. Then the set of eigenvalues of $\CL_K$ is the set of numbers $\left\{\gl_k=\frac{n+k}{2}:k\in\BBN\right\}$ with corresponding set of eigenspaces $$N_k=span\left\{ D^\alpha\phi_1:|\alpha|=k\right\}.
$$ The operators $\CL_K^{+,N}$ and $\CL_K^{+,D}$ are defined accordingly in $H_K^{1}(\BBR^n_+)$ and $H_K^{1,0}(\BBR^n_+)$ respectively, and $\sigma(\CL_K^{+,N})=\left\{\frac{n+k}{2}:k\in\BBN\right\}$ and $\sigma(\CL_K^{+,D})=\left\{\frac{n+k}{2}:k\in\BBN^*\right\}$. Furthermore \begin{equation}\label{IV-1-2}N_{k,N}=ker\left(\CL_K^{+,N}-\tfrac{n+k}{2}I_d\right)=span\left\{ D^\alpha\phi_1:|\alpha|=k,\alpha_n=2\ell\,,\;\ell\in\BBN\right\}, \end{equation} and \begin{equation}\label{IV-1-3}N_{k,D}=ker\left(\CL_K^{+,D}-\tfrac{n+k}{2}I_d\right)=span\left\{ D^\alpha\phi_1:|\alpha|=k,\alpha_n=2\ell+1\,,\;\ell\in\BBN\right\}. \end{equation} Since $\CL_K^{+,N}$ and $\CL_K^{+,D}$ are Fredholm operators, \begin{equation}\label{IV-1-4} H^1_K(\BBR^n_+)=\bigoplus_{k=0}^\infty N_{k,N}\;\text{ and }\;H^{1,0}_K(\BBR^n_+)=\bigoplus_{k=1}^\infty N_{k,D}. \end{equation} We define the following functional on $H^1_K(\BBR^n_+)$ \begin{equation}\label{IV-1-5} J(\phi)=\myfrac{1}{2}\myint{\BBR^n_+}{}\left(|\nabla\phi|^2-\myfrac{1}{2(p-1)}\phi^2\right) Kd\eta+\myfrac{1}{p+1}\myint{\partial\BBR^n_+}{}|\phi|^{p+1} K'd\eta'.
\end{equation} The critical points of $J$ satisfy \begin{equation}\label{IV-1-6}\begin{array} {lll} \!\!-\Gd\gw-\myfrac12\eta.\nabla\gw-\myfrac1{2(p-1)}\gw=0\qquad&\text{in }\; \BBR^n_+\\[2mm] \phantom{---,-\partial} -\gw _{\eta_n}+|\gw|^{p-1}\gw=0\qquad&\text{on }\; \partial\BBR^n_+.\\[1mm] \end{array} \end{equation} If $\gw$ is a solution of $(\ref{IV-1-6})$, the function \begin{equation}\label{IV-1-7} u_\gw(t,x)=t^{-\frac{1}{2(p-1)}}\gw(\frac x{\sqrt t}) \end{equation} satisfies \begin{equation}\label{IV-1-8}\begin{array} {lll}\phantom{----} u_{\gw\,t}-\Gd u_\gw=0\qquad&\text{in }Q_{\BBR^n_+}^\infty:=(0,\infty)\times\BBR^n_+\\[1mm] -u_{\gw\,x_n}+|u_\gw|^{p-1}u_\gw=0\qquad&\text{on }\partial_\ell Q_{\BBR^n_+}^\infty:=(0,\infty)\times\partial\BBR^n_+. \end{array}\end{equation} Here we have set $\BBR_+^n=\{x=(x_1,...,x_n)=(x',x_n):x_n>0\}$. We denote by $\CE$ the set of solutions of $(\ref{IV-1-6})$ belonging to $H^{1}_K(\BBR^n_+)\cap L^p(\partial\BBR^n_+;d\eta')$ and by $\CE_+$ the subset of positive solutions. As for the case $n=1$ we have the following non-existence result. \bprop{nonex} 1- If $p\geq1+\frac{1}{n}$, then $\CE=\{0\}$. \noindent 2- If $1<p\leq 1+\frac{1}{n+1}$, then $\CE_+=\{0\}$. \end{sub} The proof is similar to the one of \rth{vssth}. Hence the existence is to be found in the range $1+\frac{1}{n+1}<p<1+\frac 1n$. \noindent {\bf Conjecture} {\it Assume $1+\frac{1}{n+1}<p<1+\frac 1n$, then the functional $J$ is bounded from below in $H^{1}_K(\BBR^n_+)\cap L_{K'}^p(\partial\BBR^n_+)$.
Furthermore $J(\phi)$ tends to infinity when $\norm {\phi}_{H^{1}_K(\BBR^n_+)}+\norm {\phi\lfloor_{\partial\BBR^n_+}}_{L^{p+1}_{K'}(\partial\BBR^n_+)}$ tends to infinity.} \subsection{Problem with measure data} The method for proving \rth{exist-uniq} can be adapted to prove the following $n$-dimensional result. \bth{exist-uniq-n} Let $g:\BBR\mapsto\BBR$ be a nondecreasing continuous function such that $g(0)=0$ and \begin{equation}\label{IV-1-10}\begin{array} {lll}\phantom{----} \myint{1}{\infty}(g(s)-g(-s))s^{-\frac{2n+1}{n}}ds<\infty, \end{array}\end{equation} then for any bounded Radon measures $\gn$ in $\BBR^n_+$ and $\mu$ in $(0,T)\times\partial\BBR^n_+$, there exists a unique Borel function $u:=u_{\gn,\mu}$ defined in $\overline{Q_{T}^{\BBR^n_+}}:=[0,T]\times\overline{\BBR^n_+}$ such that $u\in L^1(Q_{T}^{\BBR^n_+})$, $u\lfloor_{(0,T)\times\partial\BBR^n_+}\in L^1((0,T)\times\partial\BBR^n_+)$ and $g(u)\in L^1((0,T)\times\partial\BBR^n_+)$, which is a solution of \begin{equation}\label{IV-1-11}\begin{array} {lll} \phantom{g(u;} \phantom{,--,}u_t-\Gd u=0\qquad&\text{in }\;Q^T_{\BBR^n_+}\\[1mm]\phantom{,,,u} -u_{x_n}+g(u)=\mu\qquad&\text{in }\;\partial_\ell Q^T_{\BBR^n_+}\\[1mm]\phantom{--,,---} u(0,.)=\gn\qquad&\text{in }\;\BBR^n_+, \end{array} \end{equation} in the sense that \begin{equation}\label{IV-1-12}\begin{array} {lll} \myint{}{}\myint{Q^T_{\BBR^n_+}}{}(-\partial_t\gz-\Gd\gz)udxdt+\myint{}{}\myint{\partial_\ell Q^T_{\BBR^n_+}}{}g(u)\gz dx'dt\\[4mm] \phantom{----------------}=\myint{\BBR^n_+}{}\gz d\gn+ \myint{}{}\myint{\partial_\ell Q^T_{\BBR^n_+}}{}\gz d\mu, \end{array} \end{equation} for all $\gz\in C_c^{1,2}(\overline{Q^T_{\BBR^n_+}})$ such that $\gz_{x_n}=0$ on $(0,T)\times\partial\BBR^n_+$ and $\gz(T,.)=0$. Furthermore $(\gn,\mu)\mapsto u_{\gn,\mu}$ is nondecreasing.
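\note{Remark} For the model nonlinearity $g(s)=|s|^{p-1}s$, condition $(\ref{IV-1-10})$ can be evaluated explicitly: $$\myint{1}{\infty}(g(s)-g(-s))s^{-\frac{2n+1}{n}}ds=2\myint{1}{\infty}s^{\,p-\frac{2n+1}{n}}ds<\infty \Longleftrightarrow p-\frac{2n+1}{n}<-1\Longleftrightarrow p<1+\frac 1n,$$ which is the upper bound of the existence range exhibited in the previous subsection; for $n=1$ it recovers the condition $p<2$ of Section 3.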
\end{sub} \begin{thebibliography}{110} \bibitem{AS} M. A{\scriptsize{BRAMOWITZ}}, I. A. S{\scriptsize{TEGUN}}. \textit{Handbook of Mathematical Functions}, National Bureau of Standards, Washington, 1964. \bibitem{BoV} O. B{\scriptsize{OUKARABILA}}, L. V{\scriptsize{\'ERON}}. \textit{Nonlinear boundary value problems relative to harmonic functions}, Nonlinear Analysis, to appear. arXiv:2003.00871. \bibitem{Br-OMM} H. B{\scriptsize{REZIS}}. \textit{Op\'erateurs maximaux monotones et semi-groupes de contractions dans les espaces de Hilbert}, Notas de Matem\`aticas {\bf 5}, North Holland (1971). \bibitem{BrFr} H. B{\scriptsize{REZIS}}, A. F{\scriptsize{RIEDMAN}}. \textit{Nonlinear parabolic equations involving measures as initial conditions}, J. Math. Pures Appl. {\bf 62} (1983), 73--97. \bibitem{BPT} H. B{\scriptsize{REZIS}}, L.A. P{\scriptsize{ELETIER}}, D. T{\scriptsize{ERMAN}}. \textit{A very singular solution of the heat equation with absorption}, Arch. Rational Mech. Anal. {\bf 95} (1986), 185--209. \bibitem{CVW} H. C{\scriptsize{HEN}}, L. V{\scriptsize{\'ERON}}, Y. W{\scriptsize{ANG}}. \textit{Fractional heat equations with subcritical absorption having a measure as initial data}, Nonlinear Anal. {\bf 137} (2016), 306--337. \bibitem{EK} M. E{\scriptsize{SCOBEDO}}, O. K{\scriptsize{AVIAN}}. \textit{Variational problems related to the self-similar solutions of the heat equations}, Nonlinear Anal. {\bf 10} (1987), 1103--1133. \bibitem{Fil} M. F{\scriptsize{ILA}}, K. I{\scriptsize{SHIGE}}, T. K{\scriptsize{AWAKAMI}}. \textit{Existence of positive solutions of a semilinear elliptic equation with a dynamical boundary condition}, Calc. Var. {\bf 54} (2015), 2059--2078. \bibitem{FQ} M. F{\scriptsize{ILA}}, P. Q{\scriptsize{UITTNER}}. \textit{The blow-up rate for the heat equation with a non-linear boundary condition}, Math. Methods in the Appl. Sci. {\bf 14} (1991), 197--205. \bibitem{Ish-Ka} K. I{\scriptsize{SHIGE}}, T. K{\scriptsize{AWAKAMI}}.
\textit{Global solutions of the heat equation with a nonlinear boundary condition}, Calc. Var. {\bf 39} (2010), 429--457. \bibitem{Ish-Sa} K. I{\scriptsize{SHIGE}}, R. S{\scriptsize{ATO}}. \textit{Heat equation with a nonlinear boundary condition and uniformly local $L^r$ spaces}, Disc. Cont. Dyn. Syst. {\bf 36} (2016), 2627--2652. \bibitem{Ke} J. K{\scriptsize{ELLER}}. \textit{On solutions of $\Delta u = f(u)$}, Comm. Pure Appl. Math. \textbf{10} (1957), 503--510. \bibitem{MV1} M. M{\scriptsize{ARCUS}}, L. V{\scriptsize{\'ERON}}. \textit{Semilinear parabolic equations with measure boundary data and isolated singularities}, J. Anal. Mat. {\bf 85} (2001), 245--290. \bibitem{MV2} M. M{\scriptsize{ARCUS}}, L. V{\scriptsize{\'ERON}}. \textit{Isolated boundary singularities of signed solutions of some nonlinear parabolic equations}, Adv. Diff. Equ. {\bf 6} (2001), 1281--1316. \bibitem{MoV} I. M{\scriptsize{OUTOUSSAMY}}, L. V{\scriptsize{\'ERON}}. \textit{Isolated singularities and asymptotic behaviour of the solutions of a semilinear heat equation}, Asymptotic Anal. {\bf 9} (1994), 259--289. \bibitem{Os} R. O{\scriptsize{SSERMAN}}. \textit{On the inequality $\Delta u \geq f(u)$}, Pacific J. Math. \textbf{7} (1957), 1641--1647. \end{thebibliography} \authinfo{ Laurent V\'eron\\ Institut Denis Poisson, CNRS UMR 7013, \\ Universit\'e de Tours, France.\\ \email{[email protected]}} \end{document}
\begin{document} \title{Phase modulation induced by cooperative effects \\ in electromagnetically induced transparency} \author{Robert \surname{Fleischhaker}} \affiliation{Max-Planck-Institut f\"ur Kernphysik, Saupfercheckweg 1, D-69117 Heidelberg, Germany} \author{Tarak N. \surname{Dey}} \affiliation{Max-Planck-Institut f\"ur Kernphysik, Saupfercheckweg 1, D-69117 Heidelberg, Germany} \affiliation{Indian Institute of Technology Guwahati, Guwahati- 781 039, Assam, India} \author{J\"org \surname{Evers}} \affiliation{Max-Planck-Institut f\"ur Kernphysik, Saupfercheckweg 1, D-69117 Heidelberg, Germany} \date{\today} \begin{abstract} We analyze the influence of dipole-dipole interactions in an electromagnetically induced transparency setup for a density at the onset of cooperative effects. To this end, we include mean-field models for the influence of local field corrections and radiation trapping into our calculation. We show both analytically and numerically that the polarization contribution to the local field strongly modulates the phase of a weak pulse. We give an intuitive explanation for this {\em local field induced phase modulation} and demonstrate that it distinctively differs from the nonlinear self-phase modulation a strong pulse experiences in a Kerr medium. \end{abstract} \pacs{42.50.Gy,42.65.Sf,42.65.An} \maketitle \section{\label{secint}Introduction} Electromagnetically induced transparency (EIT) stands out as one of the most useful coherence and interference phenomena (see \cite{eit1,eit2} and references therein). Current research focuses on dilute samples with $N \lambda^3 \ll 1$ ($N$ density, $\lambda$ transition wavelength), in which the atoms essentially act independently. Experimentally, an important reason for the restriction to low densities is detrimental decoherence induced, e.g., by atom collisions. This density restriction applies in particular to hot atomic vapors. 
More promising in this respect are ultracold gases~\cite{coldEIT1,coldEIT2,coldEIT3,coldEIT4,coldEIT5,coldEIT6,coldEIT7,coldEIT8,coldEIT9,coldEIT10,coldEIT11,coldEIT12,coldEIT13,coldEIT14, dense1,dense2,dense3}, in which atomic collisions are much less frequent, leading to greatly improved coherence properties. However, most experiments still operate in the regime of a dilute gas, where cooperative effects do not play a role. But with recent advances in preparation techniques, trapped ultracold gases at densities up to $10^{14} - 10^{15}$ atoms/cm$^3$ with a linear extent of the gas in the $\mu$m range~\cite{dense1,dense2,dense3} have now been reported. With such setups, dense gas light propagation experiments at the onset of cooperative effects seem within reach. From the theoretical side, it is well-known that in the high density regime ($N \lambda^3 \gg 1$), a rigorous treatment of cooperative effects in general gives rise to an infinite hierarchy of polarization correlations~\cite{ddQFT1,ddQFT2,ddQFT3}. Calculations of this type are demanding, such that concrete results for more complex model systems such as EIT have not been found yet. However, at the onset of cooperative effects, the infinite hierarchy can be truncated. A study of optical properties of a dense cold two-level gas found that in leading order of the density, three corrections with respect to a dilute gas occur~\cite{dd2ndorder}. First, the microscopic field $E_{mic}$ driving the individual atom is no longer the externally applied field $E_{ext}$ alone, but rather is corrected by the mean polarization $P$ of the neighboring atoms. This local field correction (LFC) is described by the well-known Lorentz-Lorenz formula~\cite{bornwolf}, \begin{align} \label{lle} E_{mic} = E_{ext} + \frac{1}{3 \epsilon_0} P\,.
\end{align} The second correction in leading order of the density is due to the quantum statistics of atoms, and becomes relevant close to the phase transition to a Bose-Einstein condensate (BEC). In this regime the atomic de Broglie wavelength is of the order of the electronic transition wavelength, such that the medium's refractive and absorptive properties are enhanced by quantum mechanical exchange effects. In contrast, sufficiently above or below the transition point, quantum statistical corrections are small~\cite{dd2ndorder}. The third correction arises due to the leading order of multiple scattering, which can be interpreted as a reabsorption of spontaneously emitted photons before they leave the sample. In general, the reabsorption gives rise to a complicated dynamics because of the many pathways a photon can take between the different atoms in the medium. But to a first approximation, this radiation trapping can be modeled in a mean-field-like approach by suitably chosen additional incoherent pump rates between the ground states and the excited state~\cite{radtrapF1,radtrapF2,radtrap1,radtrap2}. These results suggest that at the onset of cooperative effects, and at parameters away from the phase transition to BEC, the dominant cooperative corrections arise on a mean-field level, such that a macroscopic treatment is meaningful. Similar results were obtained in a recent calculation which, for a two-level system, explicitly compares a microscopic treatment based on a multiple-scattering approach including $n$-particle correlations with a macroscopic treatment based on LFC~\cite{sokolov}. It was found that at densities of a few particles per wavelength cubed ($N\lambda^3\approx 5$), the two methods agree well, while at higher density ($N\lambda^3\approx 124$) deviations occur, which, however, due to the numerical complexity could only be studied for rather small sample sizes.
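As an aside (our illustration, not part of the original analysis), the Lorentz-Lorenz correction (\ref{lle}) is trivial to evaluate numerically; the sketch below applies it for a scalar field and polarization in SI units.

```python
EPS0 = 8.8541878128e-12  # vacuum permittivity (F/m)

def microscopic_field(E_ext, P):
    """Lorentz-Lorenz local field correction, Eq. (lle):
    the field driving an atom is the external field plus P / (3 eps0)."""
    return E_ext + P / (3 * EPS0)
```

For $P = 3\epsilon_0 E_{ext}$ the correction doubles the driving field, indicating how strongly a dense polarized medium can modify the microscopic field.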
So far, LFC alone has been studied in a number of systems~\cite{gmbe}, including self-induced transparency for short and ultrashort pulses~\cite{lfcsoliton,few-cycle}, coherent population trapping~\cite{agarwal-lfc} and lasing without inversion~\cite{lfclwi}. More recently, LFC was studied in the context of atomic gases with a negative refractive index (NRI)~\cite{negindex1,negindex2,negindex3,negindex4,negindex5}. LFC effects have also been exploited to spectroscopically resolve the hyperfine structure of the Rb $D_2$ line in a dense gas~\cite{lfcspec}. But surprisingly, cooperative effects in EIT have received little attention. In~\cite{radtrapF1,radtrapF2}, effects of radiation trapping on EIT were studied. The propagation of two non-adiabatic pulses is considered in~\cite{lfceit}. Some effects such as a modification of the group velocity and a phase modulation are reported, but only numerical results are given without clear interpretation~\cite{note}. The recent theoretical proposals for NRI requiring high density already motivate a closer study of the regime of higher densities in atomic gases. But also several aspects of EIT itself could benefit strongly from such an analysis. For example, the group velocity reduction in EIT and the related spatial compression of the pulse are directly proportional to the density~\cite{eit2}. Also, the time delay bandwidth product, a figure of merit for the overall performance of a slow light system, depends on density~\cite{dbp}. Finally, a higher density could facilitate a more efficient coupling of light and matter, e.g., for applications in quantum information science~\cite{quantinf}. \begin{figure} \caption{\label{fig1} Level scheme of the three-level $\Lambda$-type EIT setup, with probe field $\Omega_{31}$ and control field $\Omega_{32}$.} \end{figure} Therefore, in this paper, we study light propagation in an EIT medium at the onset of cooperative effects, and reveal and interpret the underlying physical mechanisms.
We show that LFC leads to a phase modulation of a probe pulse at densities achieved in current experiments. This phase modulation is distinctively different from nonlinear self-phase modulation, as it leads to a linear frequency chirp across the whole probe pulse, and does not require high probe intensity. Our main aim is a physical understanding of the cooperative effects. For this, we include LFC, but assume parameters away from the crossover point to a Bose-Einstein condensate, such that corrections due to quantum statistics can be neglected. Similarly, our semi-classical treatment of the light fields does not allow us to reveal non-classical properties of the scattered light~\cite{quantum}. We model the onset of multiple scattering via an incoherent pump rate proportional to the excited state population. Furthermore, we take into account a ground state decoherence due to elastic collisions as well as a ground state population transfer due to spin-exchange collisions. In addition to a full numerical solution of the equations of motion (EOM), we derive analytic solutions for the propagation dynamics of a slow light pulse and the medium polarization, which enable us to provide an intuitive explanation of the phase modulation in terms of the energy exchange with neighboring atoms. The paper is organized as follows. In the next section (Sec.~\ref{secmod}) the theoretical model is described. We present the full set of EOM (Sec.~\ref{seceom}), discuss our framework for taking into account radiation trapping (Sec.~\ref{secrad}), and introduce LFC into the EOM (Sec.~\ref{seclfc}). In Sec.~\ref{secres}, we present our results. First, we give an analytic expression for the susceptibility (Sec.~\ref{secsus}). Then, we use this expression to derive an analytic solution for a propagating probe pulse including the LFC-induced phase modulation (Sec.~\ref{secana}).
In Sec.~\ref{secnum}, we compare this analytic solution to the numerical solution of the full set of equations of motion and calculate the expected probe field absorption due to radiation trapping and ground state population transfer for a range of different possible experimental parameters. In Sec.~\ref{secpha}, we explain the physical origin of the phase modulation in terms of the energy exchange of neighboring atomic dipoles and contrast it with the self-phase modulation a strong pulse acquires in a nonlinear medium. Finally, in Sec.~\ref{seccon}, we draw some conclusions. \section{\label{secmod}Theoretical model} \subsection{\label{seceom}Maxwell-Schr\"odinger equations} We start from the Hamiltonian for an EIT level scheme~\cite{eit2} as shown in Fig.~\ref{fig1}. In a suitable interaction picture it is given by \begin{align} H = & -\hbar (\Delta A_{22} + \Delta_{31} A_{33}) \\ \nonumber & - \frac{\hbar}{2} \left(\Omega_{31} A_{31} + \Omega_{32} A_{32} + \textrm{h.c.}\right)\,, \end{align} where $\Delta = \Delta_{31} - \Delta_{32}$ is the two-photon detuning, $A_{ij} = \ket{i}\bra{j}$ are the atomic operators, and $\Omega_{31}$, $\Omega_{32}$, $\Delta_{31}$, $\Delta_{32}$ are the Rabi frequencies and detunings of the probe and control fields, respectively. Note that before the LFC is applied, the Rabi frequencies contain the microscopic fields at the positions of the respective atoms. After performing the LFC, in our numerical calculations, the microscopic electric fields are replaced by the externally applied fields as discussed in Sec.~\ref{seclfc}.
In order to account for the various incoherent processes, we choose a master equation approach~\cite{FiSw2005}, and the EOM for the atomic density operator $\rho$ reads \begin{subequations} \begin{align} \label{eqn:mas} \partial_t \rho = & \, \frac{1}{i \hbar} \left[ H, \rho \right] \\ \nonumber & - \sum_{j=1}^2 \frac{\gamma_{3j}}{2} \left( \com{\rho A_{3j}}{A_{j3}} + \textrm{h.c.} \right) \\ \nonumber & - \sum_{j=1}^2 \frac{R_{3j}}{2} \left( \com{A_{3j}}{\com{A_{j3}}{\rho}} + \textrm{h.c.} \right) \\ \nonumber & - \frac{\gamma_s}{2} \left( \com{A_{21}}{\com{A_{12}}{\rho}} + \textrm{h.c.} \right) \\ \nonumber & - \gamma_{deph} \left(A_{22} \rho A_{11} + \textrm{h.c.} \right)\,. \end{align} Here, $\gamma_{3j}$ ($j \in \{1,2\}$) denote the radiative decay rates, and $R_{3j}$ are incoherent pumping rates between the ground and the excited state. $\gamma_s$ is the ground state population transfer rate, and $\gamma_{deph}$ is the ground state dephasing. The dynamics of the probe and control fields are each governed by a wave equation. In slowly varying envelope approximation (SVEA), these have the form~\cite{scullybook} \begin{align} \label{eqn:wav} \left[\partial_z + \frac{1}{c} \partial_t \right] \mathcal{E}_{3j} = \frac{i k}{\varepsilon_0}\,P_{3j} \,, \quad j \in \{1,2\}\,, \end{align} \end{subequations} where $c$ is the speed of light in vacuum, and $\mathcal{E}_{3j}$ and $P_{3j}$ are the envelope and polarization of the probe ($j=1$) and control ($j=2$) field, respectively. The combination of Eq.~(\ref{eqn:mas}) and Eqs.~(\ref{eqn:wav}) forms the full set of Maxwell-Schr\"odinger EOM that govern the spatio-temporal dynamics of the system. In our model, its solution becomes more challenging because of the additional nonlinear terms introduced due to radiation trapping and LFC. We will discuss both aspects separately and in more detail in the following.
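For reference, a single forward-Euler step of the SVEA wave equation (\ref{eqn:wav}) in the retarded frame can be sketched as follows (our illustration; the field, polarization and step values are placeholders, not parameters from the paper):

```python
EPS0 = 8.8541878128e-12  # vacuum permittivity (F/m)

def svea_step(E, P, dz, k):
    """Advance a field envelope by dz in the co-moving (retarded) frame,
    dE/dz = (i k / eps0) P, cf. Eq. (wav); forward Euler for illustration."""
    return E + 1j * (k / EPS0) * P * dz
```

A polarization in phase quadrature with the field, $P = i\,\epsilon_0\,E/k$ per unit rate, then produces the expected attenuation $dE/dz = -E$.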
\subsection{\label{secrad}Radiation trapping} A higher density of the atoms leads to reabsorption and multiple scattering of spontaneously emitted photons. We model this by incoherent pump rates $R_{31}$ and $R_{32}$ with an intensity proportional to the population of the excited state. The number of incoherent photons inside the sample is also influenced by its geometry. Along the lines of~\cite{radtrap1}, this is taken into account by a pumping rate $r_a$ due to atomic decay and a photon escape rate $r_e$. Altogether, $R_{31}$ and $R_{32}$ are then given by \begin{subequations} \begin{align} R_{31} = \gamma_{31} \frac{r_a}{r_e} \rho_{33}\,, \\ R_{32} = \gamma_{32} \frac{r_a}{r_e} \rho_{33}\,. \end{align} \end{subequations} Obviously, the ratio $r_a/r_e$ should also depend on the atomic density. For high density it will eventually approach unity, such that even a small fraction of excited state population can lead to a significant change of the dynamics. In an EIT system, the excited state population in principle should remain small. Especially under adiabatic conditions, when the system is in the dark state at all times, the destructive quantum interference underlying EIT strongly suppresses excitation events. However, in experiments a dephasing of the ground state coherence as well as spin exchange due to inhomogeneous magnetic fields and atomic collisions can lead to an effective pump rate into the bright state and subsequent excitation. This type of dynamics is modeled by the different incoherent pump rates in the master equation Eq.~(\ref{eqn:mas}). In Sec.~\ref{secnum}, we use the numerical solution of the full set of EOM including this dynamics to calculate the probe field absorption for a range of different values for $r_a/r_e$ as well as $\gamma_s$.
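In a numerical implementation, the trapping model above reduces to evaluating $R_{3j} = \gamma_{3j}\,(r_a/r_e)\,\rho_{33}$ at each time step; a minimal sketch (our notation):

```python
def trapping_rates(gamma31, gamma32, ra_over_re, rho33):
    """Mean-field radiation-trapping pump rates,
    R_3j = gamma_3j * (r_a / r_e) * rho_33."""
    return gamma31 * ra_over_re * rho33, gamma32 * ra_over_re * rho33
```

In a perfect dark state ($\rho_{33} = 0$) the rates vanish, so trapping only becomes relevant once decoherence populates the excited state.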
\subsection{\label{seclfc}Local field corrections} In the master equation (\ref{eqn:mas}) the Rabi frequencies depend on the microscopic probe and control fields, such that we have to replace them by their macroscopic counterparts using Eq.~(\ref{lle}). Since the mean medium polarization $P$ can be expressed in terms of a density matrix element, this leads to nonlinear EOM. Expanding the nonlinear EOM up to linear order in the external probe field leaves us with just two EOM, one for the probe field coherence $\rho_{31}$ and one for the Raman coherence $\rho_{21}$, \begin{subequations} \begin{align} \label{eom:1} \partial_t \rho_{31} = & - \Gamma_{31} \rho_{31} + \frac{i}{2} \Omega_{31} + \frac{i}{2} L \gamma_{31} \rho_{31} + \frac{i}{2} \Omega_{32} \rho_{21}\,,\\ \label{eom:2} \partial_t \rho_{21} = & - \Gamma_{21} \rho_{21} + \frac{i}{2} \Omega_{32}^* \rho_{31}\,. \end{align} \label{eom} \end{subequations} From now on, after performing the LFC, $\Omega_{31}$ and $\Omega_{32}$ denote the respective Rabi frequencies of the external fields, and we have defined \begin{subequations} \begin{align} \Gamma_{31} = & \frac{\gamma}{2} - i \Delta_{31}, \\ \Gamma_{21} = & \gamma_{dec} - i \Delta, \\ \gamma = & \gamma_{31} + \gamma_{32} + \gamma_s, \\ \gamma_{dec} = & \gamma_{deph} + \gamma_s\,. \end{align} \end{subequations} Due to the LFC, a new term arises in Eq.~(\ref{eom:1}) as compared to the low-density case, which is proportional to the dimensionless parameter \begin{align} L = \frac{N \lambda^3}{4 \pi^2}. \end{align} It is a measure of the strength of the LFC, and the prefactors ensure that $L = 1$ corresponds to a density where the LFC-induced frequency shift in a two-level atom is equal to half the natural linewidth. Formally, the new term can be interpreted as a frequency shift in an EIT system as well, and can be absorbed into the probe field detuning $\tilde{\Delta}_{31} = \Delta_{31} + L \gamma_{31}/2$.
However, this frequency shift does not influence the two-photon detuning $\Delta$. \section{\label{secres}Results} \subsection{\label{secsus}Susceptibility with LFC} The EOM~(\ref{eom}) are linear in the probe field, and solving for the steady state, we can derive the susceptibility $\chi$, \begin{align} \label{sus} \chi = \frac{3 L \frac{\gamma_{31}}{2} (\Delta + i \gamma_{dec})}{\frac{\gamma}{2} \gamma_{dec} - \tilde{\Delta}_{31} \Delta + \frac{|\Omega_{32}|^2}{4} - i (\tilde{\Delta}_{31} \gamma_{dec} + \Delta \frac{\gamma}{2})}\,. \end{align} With an analytic expression at hand we can easily pinpoint the effect of LFC. We find that a reshaping of the EIT transparency window takes place~\cite{agarwal-lfc}. To illustrate this, in Fig.~\ref{fig2} we show the real and imaginary parts of $\chi$ for two different sets of parameters. In Fig.~\ref{fig2}(a) we set $L = 10^{-5}$, which results in negligible LFC, and the standard form of the EIT susceptibility is recovered. With $L = 4$ in Fig.~\ref{fig2}(b), a strong reshaping of the transparency window due to LFC is found. In both cases the control field Rabi frequency is $\Omega_{32} = 2 \gamma$ and the control field detuning is $\Delta_{32} = 0$. \begin{figure} \caption{\label{fig2} Real and imaginary parts of the susceptibility $\chi$ for (a) $L = 10^{-5}$ and (b) $L = 4$, with $\Omega_{32} = 2\gamma$ and $\Delta_{32} = 0$.} \end{figure} \subsection{\label{secana}Analytic solution for a light pulse} To analyze the influence of LFC on the propagation dynamics of a light pulse, we expand Eq.~(\ref{sus}) around the center of the transparency window. From \begin{align} k(\omega) = \frac{\omega}{c} \sqrt{1+\chi(\omega)}\,, \end{align} we find the frequency dependent wave number $k(\omega)$ up to second order in the probe field detuning.
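The susceptibility (\ref{sus}) can be evaluated directly; the following Python sketch (our parameter choices, with rates in units of $\gamma$ and $\Delta_{32} = 0$) reproduces its qualitative features: exact transparency at two-photon resonance for vanishing ground state decoherence, and absorption away from it.

```python
def chi(delta31, L, gamma31=0.5, gamma32=0.5, gamma_s=0.0,
        gamma_deph=0.0, omega32=2.0, delta32=0.0):
    """EIT susceptibility with LFC, Eq. (sus); rates in units of gamma.
    Parameter values are illustrative choices, not fixed by the paper."""
    gamma = gamma31 + gamma32 + gamma_s          # total decay rate
    gamma_dec = gamma_deph + gamma_s             # ground state decoherence
    delta = delta31 - delta32                    # two-photon detuning
    dt31 = delta31 + L * gamma31 / 2             # LFC-shifted probe detuning
    num = 3 * L * (gamma31 / 2) * (delta + 1j * gamma_dec)
    den = ((gamma / 2) * gamma_dec - dt31 * delta + abs(omega32) ** 2 / 4
           - 1j * (dt31 * gamma_dec + delta * gamma / 2))
    return num / den
```

Note that the transparency point stays at $\Delta = 0$ even for large $L$, consistent with the observation above that the LFC shift does not influence the two-photon detuning.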
The solution for the positively rotating component of the probe field in Fourier space then follows from \begin{align} \label{solfou} E^{(+)}(z, \omega) = E^{(+)}(0, \omega) \exp[i k(\omega) z], \end{align} where $E^{(+)}(0, \omega)$ is given by the initial condition and \begin{align} \label{wavenumber} k(\omega) \approx k_0 + i \frac{ n_g \gamma_{dec}}{c} + \frac{\Delta_{31}}{v_g} + k_0 (i \beta_1+ \beta_2 ) \Delta_{31}^2 \,. \end{align} We neglected terms suppressed by a factor of $\gamma \gamma_{dec}/\Omega_{32}^2$, since $\gamma \gamma_{dec} \ll \Omega_{32}^2$ is required for low absorption, and of higher order in $\Delta_{31}$. Each term in Eq.~(\ref{wavenumber}) can be clearly interpreted. $k_0$ is the wave number of the undisturbed carrier wave. The second term describes the decay due to the ground state decoherence $\gamma_{dec}$ where \begin{align} n_g = \frac{3 L \gamma_{31} \omega_0}{\Omega_{32}^2} \end{align} is the group index and $\omega_0$ is the probe field transition frequency. The third term leads to the reduced group velocity $v_g = c / (1 + n_g)$. The fourth term is quadratic in $\Delta_{31}$ and thus associated with a change of width in the Fourier transformation of a Gaussian. The imaginary part proportional to \begin{align} \beta_1 = \frac{6 L \gamma \gamma_{31}}{\Omega_{32}^4} \end{align} leads to a broadening of the temporal width due to the finite spectral width of the transparency window. Finally, the real part proportional to \begin{align} \beta_2 = \frac{6 L^2 \gamma_{31}^2}{\Omega_{32}^4} \end{align} results from LFC. Formally, it leads to an imaginary part in the temporal width which corresponds to a phase modulation of the pulse. When we assume a Gaussian pulse shape for the initial condition, the solution in the time domain can be obtained by Fourier transformation. 
Considering only the envelope defined by \begin{align} E(z,t) = \frac{1}{2} \mathcal{E}(z,t) \exp[i(k_0 z - \omega_0 t)] + \textrm{c.c.}\,, \end{align} we find \begin{align} \label{soltim} \mathcal{E}(z,t) = \mathcal{E}_0 \frac{\sigma}{\tilde{\sigma}} \exp\left[-\gamma_{dec} \frac{z}{v_g } - \frac{(t-z/v_g)^2}{2\tilde{\sigma}^2}\right]\,. \end{align} $\mathcal{E}_0$ and $\sigma$ are the initial amplitude and temporal width. The LFC modified width after propagating a distance $z$ is \begin{align} \tilde{\sigma}^2 = \sigma^2 + 2 k_0 z (\beta_1 - i \beta_2). \end{align} The phase modulation can be well approximated by the parabola \begin{align} \label{phlfc} \phi_\textrm{\tiny{LFC}}(t) = \frac{\beta_2 k_0 z}{\sigma^2} \left[1 - \left(t-z/v_g\right)^2/\sigma^2\right]. \end{align} \subsection{\label{secnum}Numeric solution} To compare our analytic solution to results obtained numerically, we choose parameters consistent with recent ultracold gas setups~\cite{dense1,dense2,dense3}. We assume a medium with density $N \approx 10^{14}~\textrm{cm}^{-3}$ and a length of $z \approx 40~\mu$m. This corresponds to $N \lambda^3 \approx 50$ ($\lambda = 795$~nm), and with a control field $\Omega_{32} = 2 \gamma$, a probe pulse with initial width of $\sigma = 20 / \gamma$ propagates a distance of $z = 150 v_g / \gamma$. Besides some broadening due to the finite spectral width of the transparency window, the probe pulse is attenuated directly by the ground state decoherence $\gamma_{dec}$ (see Eq.~(\ref{soltim})). It also suffers absorption because of the reabsorption of incoherent photons. As discussed in Sec.~\ref{secrad}, this significantly depends on the ground state population transfer $\gamma_s$ and the geometry of the sample represented by $r_a/r_e$.
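The quoted numbers can be cross-checked directly from the definitions in the text; a sketch (our unit conversions):

```python
import math

def n_lambda3(N_per_cm3, wavelength_nm):
    """Dimensionless density parameter N * lambda^3 (inputs in cm^-3 and nm)."""
    return N_per_cm3 * (wavelength_nm * 1e-7) ** 3  # nm -> cm before cubing

def lfc_parameter(N_per_cm3, wavelength_nm):
    """LFC strength L = N lambda^3 / (4 pi^2), as defined in Sec. on LFC."""
    return n_lambda3(N_per_cm3, wavelength_nm) / (4 * math.pi ** 2)
```

For $N = 10^{14}$~cm$^{-3}$ and $\lambda = 795$~nm this gives $N\lambda^3 \approx 50$, consistent with the value stated above, and $L \approx 1.3$.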
\begin{figure} \caption{\label{fig3} Relative amplitude of the propagated probe pulse as a function of the ground state population transfer rate $\gamma_s$ and the ratio $r_a/r_e$. The contour lines indicate parameters for which $50\%$, $25\%$ or $1\%$ of the initial light field amplitude leaves the sample.} \end{figure} From results of a recent experiment with Rubidium atoms of density $N\approx 2\times10^{14}~\textrm{cm}^{-3}$ in an anisotropic trap we estimate $\gamma_{dec}$ to be about $100$~kHz~\cite{dense1,dense2,dense3}. From radiation trapping experiments in hot and ultracold gases~\cite{radtrap1,radtrap2}, we further estimate that the ratio of the photon pump and escape rate is about $r_a/r_e = 0.99$. The rate of spin exchange collisions depends on the relative ground state energies and on the fact that spin-exchange collisions become more likely if the overall angular momentum is conserved. From \cite{coldfermions} we find that a typical value in ultracold gases is given by \begin{align} \gamma_s = N \times 10^{-11} \textrm{cm}^3 \textrm{s}^{-1}\,, \end{align} such that for a density of $N \approx 10^{14}~\textrm{cm}^{-3}$ we estimate a value of about $\gamma_s = 1$~kHz. To account for different experimental situations, and to obtain insight into the effect of radiation trapping and ground state decoherence, we use the full set of EOM to numerically propagate a probe pulse and to calculate the probe field absorption for a range of values for $\gamma_s$ as well as $r_a/r_e$. The result is shown in Fig.~\ref{fig3}. The plot shows the amplitude of the propagated pulse relative to the initial pulse. The three contour lines indicate parameters for which a fraction of $50\%$, $25\%$ or $1\%$ of the initial light field amplitude leaves the sample, respectively. For the estimated values of $\gamma_{dec}$, $\gamma_s$, and $r_a/r_e$ which we assume for our further calculations, the probe field $\Omega_{31}$ is reduced to about $50\%$ of its initial value. \begin{figure} \caption{\label{fig4} Analysis of the probe pulse after passing through the medium.} \end{figure} Fig.~\ref{fig4} analyzes the pulse after passing through the medium.
In Fig.~\ref{fig4}(a), we show the propagated pulse together with the LFC induced phase modulation $\phi_\textrm{\tiny{LFC}}(t)$ and the corresponding instantaneous frequency defined by $\omega(t) = \omega_0 - \partial_t \phi_\textrm{\tiny{LFC}}(t)$. The numerical solution is virtually indistinguishable from the analytical one, and we only show the analytical solution. We see that the phase modulation has a negative parabola-like shape, such that the instantaneous frequency is approximately linear over the total extent of the pulse. \subsection{\label{secpha}Physical origin of the phase modulation} To explain the physical origin of the phase modulation, we explicitly calculate the relevant parts of the polarization using the relation \begin{align} P^{(+)}(z, \omega) = \epsilon_0 \chi(\omega) E^{(+)}(z, \omega)\,. \end{align} Considering only the real part of $\chi$ up to quadratic order in $\Delta_{31}$ and Fourier transforming it into the time domain, we can distinguish two contributions, \begin{subequations} \begin{align} &P^{(+)}_0(z, t) = \frac{\epsilon_0 n_g}{\omega_0} \left[i \partial_t \mathcal{E}(z, t)\right] \exp[i(k_0 z - \omega_0 t)] \label{contr:1}\,,\\ &P^{(+)}_\textrm{\tiny{LFC}}(z, t) = \epsilon_0 \beta_2 \left[i^2 \partial^2_t \mathcal{E}(z, t)\right] \exp[i(k_0 z - \omega_0 t)]\,. \label{contr:2} \end{align} \label{contr} \end{subequations} The first contribution stems from the part linear in $\Delta_{31}$ and leads to the change of group velocity. The second contribution is due to the part quadratic in $\Delta_{31}$, which is related to the LFC induced phase modulation. In Fig.~\ref{fig4}(b), we show the pulse together with these two contributions. In the first half of the pulse, $E(t)$ is ahead in phase by $\pi/2$ compared to $P_0(t)$, which indicates that energy is transferred from the pulse to the polarization $P_0(t)$.
In the second half, the pulse is delayed by $\pi/2$, and energy is transferred back from the polarization $P_0(t)$ to the pulse. This energy exchange effectively reduces the group velocity of the pulse. Similarly, we can understand how the interaction of atoms with the collective dipole field of their neighbors proportional to $P_0(t)$ induces an additional polarization $P_\textrm{\tiny{LFC}}(t)$. Before $t = -\sigma$, the polarization component $P_0(t)$ is $\pi/2$ ahead in phase compared to $P_\textrm{\tiny{LFC}}(t)$, whereas at $-\sigma < t < 0$, it is delayed by $\pi/2$, again leading to an energy exchange. The same exchange takes place again for $0 < t < \sigma$ and $t > \sigma$. While the additional polarization $P_\textrm{\tiny{LFC}}(t)$ is induced by exactly the same mechanism as $P_0(t)$, its back action on the probe pulse is different. At $t < -\sigma$ and $t > \sigma$, $P_\textrm{\tiny{LFC}}(t)$ and $E(t)$ have opposite phase. This opposite phase has a dragging effect on $E(t)$, and reduces its phase. In the central part of the pulse ($-\sigma < t < \sigma$), $P_\textrm{\tiny{LFC}}(t)$ is in phase with $E(t)$. This has a pushing effect on $E(t)$, and increases the phase of the pulse. This interpretation agrees with the phase modulation obtained from the calculation as shown in Fig.~\ref{fig4}(a). We thus conclude that energy is exchanged between the atomic dipoles and the field of neighboring dipoles in exactly the same way as between the atomic dipoles and the external field $E(t)$. But the two polarization components act differently on the probe pulse, leading to the group velocity change and the phase modulation, respectively. Finally, we compare the LFC induced phase modulation with nonlinear self-phase modulation (NSM) in a medium with an intensity-dependent index of refraction.
The NSM modulation is \begin{align} \phi_\textrm{\tiny{NSM}}(t) = n_2\, I(t)\, k_0\, z\,, \end{align} where $n_2$ is the intensity dependent index of refraction and $I(t)$ the intensity profile of the pulse. In Fig.~\ref{fig4}(c) we show $\phi_\textrm{\tiny{NSM}}(t)$ together with the corresponding instantaneous frequency for a Gaussian pulse. We see that the front of the pulse experiences a red shift whereas the back experiences a blue shift, with an approximately linear frequency chirp in the center. Comparing the two chirps, \begin{align} \alpha_\textrm{\tiny{NSM}} = & \frac{2 n_2 I_0 k_0 z}{\sigma^2}\,, \\ \alpha_\textrm{\tiny{LFC}} = & \frac{2 \beta_2 k_0 z}{\sigma^4}\,, \end{align} we find that in the LFC case, $n_2 I_0$ is replaced by $\beta_2/\sigma^2$. Thus, the LFC modulation does not require a large intensity and is approximately linear over the total extent of the pulse. Instead, it depends on the strength of the dipole-dipole interaction, which is given by $\beta_2$ in an EIT system and can be influenced by the density and the control field strength $\Omega_{32}$. \section{\label{seccon}Conclusion} In summary, we studied light propagation in an EIT system for densities at the onset of cooperative effects. For this, we amended the standard Maxwell-Schr\"odinger equations for a three-level $\Lambda$-type EIT setup on a mean-field level by local field corrections of the applied fields, and by a model for radiation trapping based on incoherent pump rates between the ground and the excited states. As the main result, we found that the phase of a Gaussian probe pulse is strongly modulated by the polarization contribution to the local field. We gave a physical interpretation for the underlying mechanism leading to the phase modulation and showed that it is distinctively different from the case of nonlinear self-phase modulation, as the resulting frequency chirp is linear across the whole pulse, and does not depend on the pulse intensity.
Based on the numerical solution, we further studied the effects of decoherence and radiation trapping. Within our model, this allows us to estimate upper limits for these incoherent effects such that the pulse survives a sufficient propagation length for the phase modulation to become observable. \end{document}
\begin{document} \author{Luu Hoang Duc\thanks{Max Planck Institute for Mathematics in the Sciences, Inselstr. 22, 04103 Leipzig, Germany, \& Institute of Mathematics, Viet Nam Academy of Science and Technology, Hoang Quoc Viet str. 18, 10307 Ha Noi, Viet Nam {\tt\small [email protected], [email protected]} }, $\;$ Phan Thanh Hong \thanks{Thang Long University, Hanoi, Vietnam {\tt\small [email protected]} }, $\;$ Nguyen Dinh Cong \thanks{Institute of Mathematics, Viet Nam Academy of Science and Technology, Hoang Quoc Viet str. 18, 10307 Ha Noi, Viet Nam {\tt\small [email protected]} } } \date{} \title{Asymptotic stability for stochastic dissipative systems with a H\"older noise} \begin{abstract} We prove the exponential stability of the zero solution of a stochastic differential equation with a H\"older noise, under a strong dissipativity assumption. As a result, we also prove that there exists a random pullback attractor for a stochastic system under a multiplicative fractional Brownian noise. \end{abstract} \textbf{Keywords}: fractional Brownian motion, stochastic differential equations (SDE), Young integral, exponential stability, random attractor. \section{Introduction} In this paper we study the long term asymptotic behavior of the following nonautonomous stochastic differential equation \begin{equation}\label{linfBm} dx(t) = [A(t) x(t) + F(t,x(t))]dt + C(t) x(t) dZ(t),\ x(0) = x_0 \in \ensuremath{\mathbb{R}}^d, \end{equation} where $Z(t)$ is a stationary stochastic process whose trajectories $\omega(t) = Z(t,\omega)$ are almost surely H\"older continuous of index $\nu > \frac{1}{2}$. System \eqref{linfBm} can be solved by the pathwise approach with the help of the Young integral \cite{young}. We will derive sufficient conditions on the coefficient functions $A, F, C$ under which the zero solution is asymptotically or exponentially stable.
Stochastic stability is systematically treated in \cite{khasminskii} and \cite{mao}. For example, the stability problem for a system under a standard Brownian noise, i.e.\ the case in which $Z(t)$ is replaced by the Brownian motion $B(t)$, can be studied using It\^o's formula \[ d \|x(t)\|^2 = \Big(2 \langle x(t), A(t) x(t) \rangle + 2 \langle x(t), F(t,x(t)) \rangle + \|C(t) x(t)\|^2 \Big) dt + 2 \langle x(t), C(t) x(t)\rangle dB(t), \] from which it follows that \begin{eqnarray}\label{Itoest} d E \|x(t)\|^2 = E \Big(2 \langle x(t), A(t) x(t) \rangle + 2 \langle x(t), F(t,x(t)) \rangle + \|C(t) x(t)\|^2 \Big) dt, \end{eqnarray} where $E$ denotes the expectation. Therefore, under conditions of negative definiteness of $A(t)$ and global Lipschitz continuity of $F$ w.r.t.\ $x$ with a small Lipschitz constant, and given $\|C(t)\|$ small enough, the quantity $E \|x(t)\|^2$ decays exponentially to zero, which implies that $\|x(t)\|$ converges exponentially and almost surely to zero due to the Borel-Cantelli lemma (see \cite[p.~255]{Shiryaev}). The situation is however different for equation \eqref{linfBm}, since in general $Z$ is neither a Markov process nor a semimartingale (e.g.\ the fractional Brownian motion $B^H$ \cite{nourdin}), hence the expectation of $\langle x(t), C(t) x(t) \rangle dZ(t)$ does not vanish. Therefore a new approach to study stochastic stability is necessary. Recently, the global dynamics was studied in \cite{ducGANSch18} for a noise assumed to be a fractional Brownian motion which is small in the sense that the H\"older seminorm of its realization is integrable and can be controlled to be small. On the other hand, the local stability is studied in \cite{garrido-atienzaetal} and in \cite{GABSch18}, where the diffusion coefficient $C(t) x(t)$ is replaced by a term $G(x(t))$ which is flat, i.e.\ $G(0) = DG(0) = 0$.
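To make the It\^o estimate concrete, consider the scalar linear case $A(t) \equiv -a$, $F \equiv 0$, $C(t) \equiv c$; then \eqref{Itoest} integrates to $E\|x(t)\|^2 = x_0^2 e^{(-2a + c^2)t}$, so the second moment decays if and only if $c^2 < 2a$. The Python sketch below (our illustration, not from the paper) checks this closed form against a seeded Euler-Maruyama simulation.

```python
import math
import random

def second_moment(t, x0, a, c):
    """Closed-form E x(t)^2 for dx = -a x dt + c x dB(t):
    E x(t)^2 = x0^2 * exp((-2a + c^2) t); decays iff c^2 < 2a."""
    return x0 ** 2 * math.exp((-2 * a + c ** 2) * t)

def euler_maruyama_moment(t, x0, a, c, n_paths=10000, n_steps=100, seed=1):
    """Monte-Carlo estimate of E x(t)^2 via Euler-Maruyama (illustration)."""
    rng = random.Random(seed)
    dt = t / n_steps
    total = 0.0
    for _ in range(n_paths):
        x = x0
        for _ in range(n_steps):
            x += -a * x * dt + c * x * rng.gauss(0.0, math.sqrt(dt))
        total += x * x
    return total / n_paths
```

For a Young noise such as $B^H$ with $H > 1/2$ this moment argument is unavailable, which is precisely the difficulty motivating the pathwise approach of the paper.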
It is also important to note that all the above mentioned references apply fractional calculus (see also \cite{mishura}, \cite{nualart}, \cite{zahle}, \cite{zahle2}) and the semigroup approach to deal with the stability problem. Looking back at the classical theory of ordinary differential equations, we know that there are two fundamental methods to deal with the stability of solutions of an ODE --- the two methods of Lyapunov, which have proved to be powerful tools of the qualitative theory of ODEs, and of stability theory in particular. In the case of the first method, one linearizes the system near an equilibrium, studies the growth rates (Lyapunov exponents) of the solutions and the spectrum of the derived linear system, and then deduces the asymptotic properties of the original nonlinear system near the fixed point. In the case of the second Lyapunov method, one studies the action of the ODE on a specific function (called a Lyapunov function) and then deduces asymptotic properties of the system without the need of solving the ODE explicitly (hence this method is called the method of Lyapunov functions). In this paper we reinvestigate the stability problem using a different method compared to the references mentioned above, namely the approach of the second Lyapunov method: we construct a Lyapunov-type function, which is the norm function, and combine it with the discretization scheme developed in \cite{congduchong17}, \cite{congduchong18} and \cite{ducGANSch18}, but for polar coordinates, using $p$-variation norm estimates. The main difficulty lies in how to use pathwise estimates to deal with the driving noise, which is expected to be technical. We prove in Theorem \ref{stabfSDElin} that for $A$ negative definite and $F$ with a small Lipschitz coefficient, one can choose $C$ small enough in terms of the average $q$-variation norm such that the system is pathwise exponentially stable.
As such, the result gives a significantly better stability criterion than those in \cite{ducGANSch18} and \cite{GAKLBSch2010}, and moreover matches the stability criteria for ordinary differential equations when the noise is diminished (see details in Remark \ref{comparison}). To our knowledge, our method is also the first attempt to study the stability of Young differential equations using Lyapunov-type functions. The result is then applied to study the asymptotic behavior of the stochastic system \begin{equation}\label{fSDE0} dx(t) = [Ax(t) + f(x(t))]dt + Cx(t)dB^H(t),\ t\in \ensuremath{\mathbb{R}},\ x(0)=x_0 \in \ensuremath{\mathbb{R}}^d, \end{equation} where we assume for simplicity that $A,C \in \ensuremath{\mathbb{R}}^{d\times d}$, $f: \ensuremath{\mathbb{R}}^d \to \ensuremath{\mathbb{R}}^d$ such that $f(0) \ne 0$, and $B^H$ is a one-dimensional fractional Brownian motion with Hurst exponent $H \in (1/2,1)$ \cite{mandelbrot}, i.e.\ a family of centered Gaussian processes $B^H = \{B^H(t)\}$, $t\in \ensuremath{\mathbb{R}}$, with continuous sample paths and covariance function \[ R_H(s,t) = \tfrac{1}{2}(t^{2H} + s^{2H} - |t-s|^{2H}), \quad \forall t,s \in \ensuremath{\mathbb{R}}. \] Since no deterministic equilibrium such as the zero solution exists, system \eqref{fSDE0} is expected to possess a random attractor, which is a generalization of the classical attractor concept (see e.g.\ \cite{crauel-flandoli} or \cite{crauelkloeden15} for a survey on random attractor theory). In the stochastic setting with fractional Brownian motions, the existence of the random attractor is investigated in \cite{GAKLBSch2010} under the assumption that the diffusion coefficient is bounded. Here, we will prove in Theorem \ref{attractor} that there exists a global random attractor for system \eqref{fSDE0}, and moreover that the random attractor consists of only one random point.
\section{Preliminaries} \subsection{Young integral} Let $C([a,b],\ensuremath{\mathbb{R}}^d)$ denote the space of all continuous paths $x:\;[a,b] \to \ensuremath{\mathbb{R}}^d$ equipped with the sup norm $\|\cdot\|_{\infty,[a,b]}$ given by $\|x\|_{\infty,[a,b]}=\sup_{t\in [a,b]} \|x(t)\|$, where $\|\cdot\|$ is the Euclidean norm in $\ensuremath{\mathbb{R}}^d$. For $p\geq 1$ and $[a,b] \subset \ensuremath{\mathbb{R}}$, $\mathcal{C}^{p{\rm-var}}([a,b],\ensuremath{\mathbb{R}}^d)\subset C([a,b],\ensuremath{\mathbb{R}}^d)$ denotes the space of all continuous paths $x:[a,b] \to \ensuremath{\mathbb{R}}^d$ of finite $p$-variation, i.e.\ such that \begin{eqnarray} \ensuremath{\left| \! \left| \! \left|} x\ensuremath{\right| \! \right| \! \right|}_{p\rm{-var},[a,b]} :=\left(\sup_{\Pi(a,b)}\sum_{i=1}^n \|x(t_{i+1})-x(t_i)\|^p\right)^{1/p} < \infty, \end{eqnarray} where the supremum is taken over the whole class of finite partitions of $[a,b]$. The space $\mathcal{C}^{p{\rm-var}}([a,b],\ensuremath{\mathbb{R}}^d)$, equipped with the $p$-var norm \begin{eqnarray*} \|x\|_{p\text{-var},[a,b]}&:=& \|x(a)\|+\ensuremath{\left| \! \left| \! \left|} x\ensuremath{\right| \! \right| \! \right|}_{p\rm{-var},[a,b]}, \end{eqnarray*} is a nonseparable Banach space \cite[Theorem 5.25, p.\ 92]{friz}. Also, for each $0<\alpha<1$, we denote by $C^{\alpha\rm{-Hol}}([a,b],\ensuremath{\mathbb{R}}^d)$ the space of H\"older continuous functions with exponent $\alpha$ on $[a,b]$, equipped with the norm \[ \|x\|_{\alpha\rm{-Hol},[a,b]}: = \|x(a)\| + \sup_{a\leq s<t\leq b}\frac{\|x(t)-x(s)\|}{(t-s)^\alpha}. \] Given the simplex $\Delta[a,b]:=\{(s,t)\mid a\leq s\leq t\leq b\}$, a continuous map $\overline{\omega}: \Delta[a,b]\longrightarrow \ensuremath{\mathbb{R}}^+$ is called a {\it control} (see e.g.
\cite{friz}) if it is zero on the diagonal and superadditive, i.e.\\ (i) for all $t\in [a,b]$, $\overline{\omega}_{t,t}=0$;\\ (ii) for all $s\leq t\leq u$ in $[a,b]$, $\overline{\omega}_{s,t}+\overline{\omega}_{t,u}\leq \overline{\omega}_{s,u}$. Now consider $x\in \mathcal{C}^{q{\rm-var}}([a,b],\ensuremath{\mathbb{R}}^{d\times m})$ and $\omega\in \mathcal{C}^{p{\rm -var}}([a,b],\ensuremath{\mathbb{R}}^m)$ with $\frac{1}{p}+\frac{1}{q} > 1$. The Young integral $\int_a^bx(t)d\omega(t)$ can then be defined as \[ \int_a^bx(s)d\omega(s):= \lim \limits_{|\Pi| \to 0} \sum_{[u,v] \in \Pi} x(u)(\omega(v)-\omega(u)), \] where the limit is taken over all finite partitions $\Pi=\{ a=t_0<t_1<\cdots < t_n=b \}$ of $[a,b]$ with mesh $|\Pi| := \displaystyle\max_{[u,v]\in \Pi} |v-u|$ (see \cite[p.\ 264--265]{young}). By construction, this integral is additive and satisfies the so-called Young--Lo\`{e}ve estimate \cite[Theorem 6.8, p.\ 116]{friz} \begin{eqnarray}\label{YL0} \Big\|\int_s^t x(u)d\omega(u)-x(s)[\omega(t)-\omega(s)]\Big\| \leq K \ensuremath{\left| \! \left| \! \left|} x\ensuremath{\right| \! \right| \! \right|}_{q\rm{-var},[s,t]} \ensuremath{\left| \! \left| \! \left|}\omega\ensuremath{\right| \! \right| \! \right|}_{p\rm{-var},[s,t]}, \;\forall \; [s,t]\subset [a,b], \end{eqnarray} where \begin{equation}\label{constK} K:=(1-2^{1-\theta})^{-1},\qquad \theta := \frac{1}{p} + \frac{1}{q} >1. \end{equation} Throughout this paper we assume for simplicity that $m=1$; all the results remain valid for any $m \in \ensuremath{\mathbb{N}}$ with minor modifications.
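When the driver is smooth, the Young integral defined above coincides with the classical Riemann--Stieltjes integral, so the convergence of the left Riemann sums can be observed directly. A minimal Python sketch (illustrative only; the test paths are our own choices):

```python
import math

# Left Riemann-sum approximation of the Young integral \int_a^b x(t) d omega(t)
# on a uniform partition; for smooth omega this converges to the
# Riemann-Stieltjes integral as the mesh goes to zero.

def young_integral(x, omega, a, b, n):
    """Approximate the Young integral by a left Riemann sum with n subintervals."""
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        u = a + i * h
        total += x(u) * (omega(u + h) - omega(u))
    return total

# Example 1: x(t) = t, omega(t) = t^2 on [0,1]; exact value int_0^1 t*2t dt = 2/3.
# Example 2: x(t) = cos(t), omega(t) = sin(t) on [0, pi/2]; exact value pi/4.
```

The error of the left sum is of order of the mesh for smooth integrands, consistent with the limit in the definition above.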
\subsection{Nonlinear Young differential equations} For fixed $1<p<2$, $T>0$ and a continuous path $\omega \in \mathcal{C}^{p{\rm-var}}([0,T],\ensuremath{\mathbb{R}})$, consider the deterministic differential equation in the Young sense \begin{equation}\label{lin1} dx(t) = [A(t) x(t) + F(t,x(t))]dt + C(t)x(t) d\omega(t),\;\; x(0)=x_0, \end{equation} where $0\leq t\leq T$, $x_0\in\ensuremath{\mathbb{R}}^d$, $A\in C([0,T],\ensuremath{\mathbb{R}}^{d\times d})$ and $C \in \mathcal{C}^{q{\rm-var}}([0,T],\ensuremath{\mathbb{R}}^{d\times d})$ with $q$ satisfying $q\geq p$ and $\frac{1}{p} + \frac{1}{q} >1$. Additionally, $F$ is globally Lipschitz continuous w.r.t.\ $x$, i.e.\ there exists $L>0$ such that for all $t\in[0,T]$ and all $x,y\in \ensuremath{\mathbb{R}}^d$: $\|F(t,x)-F(t,y)\|\leq L\|x-y\|$. Under these conditions the system \eqref{lin1} possesses a unique solution in both the forward and backward sense, as studied in \cite{congduchong17,congduchong18}; indeed, the system can be transformed into a classical ordinary differential equation to which the standard existence and uniqueness theorem applies. \begin{theorem}\label{existence} There exists a unique solution to the system \eqref{lin1} in the space $\mathcal{C}^{q{\rm-var}}([0,T],\ensuremath{\mathbb{R}}^{d})$. \end{theorem} \begin{proof} Due to \cite{congduchong18}, there exists a unique solution to the equation \begin{equation}\label{lin1a} dz(t) = A(t) z(t)dt + C(t)z(t) d\omega(t) \end{equation} in the space $\mathcal{C}^{q{\rm-var}}([0,T],\ensuremath{\mathbb{R}}^{d})$. Denote by $\Phi(t,\omega)$ the fundamental solution matrix of \eqref{lin1a}, with $\Phi(0,\omega) = Id$ the identity matrix.
Put $u(t) = \Phi^{-1}(t,\omega)x(t)$; then by the integration by parts formula, $u$ satisfies the equation \begin{eqnarray}\label{u} du(t)&=& \Phi^{-1}(t,\omega) dx(t) + d\Phi^{-1}(t,\omega)x(t)\notag\\ &=&\Phi^{-1}(t,\omega) \Big[\Big(A(t)x(t) +F(t,x(t))\Big)dt + C(t)x(t)d\omega(t)\Big] \notag\\ &&-\Phi^{-1}(t,\omega)\Big(A(t)\Phi(t,\omega) dt + C(t) \Phi(t,\omega)d\omega(t)\Big)\Phi^{-1}(t,\omega)x(t)\notag\\ &=& \Phi^{-1}(t,\omega)F(t,\Phi(t,\omega)u(t))dt=:G(t,u(t))dt. \end{eqnarray} Since $\Phi(\cdot,\omega)$ and $\Phi^{-1}(\cdot,\omega)$ are continuous on $[0,T]$, it is easy to check that $G(t,u)$ satisfies the global Lipschitz condition, which ensures the existence and uniqueness of a global solution to \eqref{u} on $[0,T]$; moreover, $u\in C^{1}([0,T],\ensuremath{\mathbb{R}}^d)$. The one-to-one correspondence between solutions of \eqref{lin1} and solutions of \eqref{u} then proves the existence and uniqueness of the solution of \eqref{lin1}. The same conclusion holds for the backward equation of \eqref{lin1}. \end{proof} \section{Exponential stability of nonlinear Young differential equations} In this section we study the exponential stability of \eqref{lin1}, where $\omega\in \mathcal{C}^{p{\rm-var}}([0,T],\ensuremath{\mathbb{R}})$, $A\in C([0,T],\ensuremath{\mathbb{R}}^{d\times d})$ and $C \in \mathcal{C}^{q{\rm-var}}([0,T],\ensuremath{\mathbb{R}}^{d\times d})$ for any $T>0$. First, we formulate the notion of stability for deterministic Young differential equations (for the classical stability notion see e.g. \cite[p.~17]{hale}, \cite[p.~152]{nemytskii}, or \cite{ducetal06}). \begin{definition}\label{Defstability} (A) Stability: A solution $\mu(\cdot)$ of the deterministic Young differential equation \eqref{lin1} is called stable if for any $\varepsilon >0$ there exists an $r =r(\varepsilon)>0$ such that for any solution $x(\cdot)$ of \eqref{lin1} satisfying $\|x(0)-\mu(0)\| < r$ the following inequality holds \[ \sup_{t\geq 0}\|x(t)-\mu(t)\| < \varepsilon. \] (B) Attractivity: $\mu(\cdot)$ is called attractive if there exists $r >0$ such that for any solution $x(\cdot)$ of \eqref{lin1} satisfying $\|x(0)-\mu(0)\| < r$ we have \[ \lim \limits_{t \to \infty} \|x(t)-\mu(t)\| = 0. \] (C) Asymptotic stability: $\mu(\cdot)$ is called \begin{enum} \item asymptotically stable, if it is stable and attractive. \item exponentially stable, if it is stable and there exists $r>0$ such that for any solution $x(\cdot)$ of \eqref{lin1} satisfying $\|x(0)-\mu(0)\| < r$ we have \[ \varlimsup \limits_{t \to \infty} \frac{1}{t} \log\|x(t)-\mu(t)\| < 0.
\] \end{enum} \end{definition} Below we need several assumptions on $A$, $F$ and $C$.\\ (${\textbf H}_1$) $A$ is negative definite in the sense that there exists a function $h: \ensuremath{\mathbb{R}}^+ \to \ensuremath{\mathbb{R}}^+$ such that \begin{equation}\label{negdefh} \langle x,A(t)x \rangle \leq -h(t) \|x\|^2,\quad\hbox{for all}\quad x\in \ensuremath{\mathbb{R}}^d. \end{equation} (${\textbf H}_2$) $F(t,0) \equiv 0$ for all $t\in \ensuremath{\mathbb{R}}^+$ and $F(t,x)$ is globally Lipschitz continuous w.r.t.\ $x$, i.e.\ there exists a positive continuous function $f: \ensuremath{\mathbb{R}}^+ \to \ensuremath{\mathbb{R}}^+$ such that \begin{equation}\label{lipF} \|F(t,x)-F(t,y)\| \leq f(t) \|x-y\|, \quad \forall x, y \in \ensuremath{\mathbb{R}}^d. \end{equation} (${\textbf H}_3$) The following quantities are finite: \begin{eqnarray} &&\hat{A}:=\varlimsup_{m\to\infty}\left(\frac{1}{m+1}\sum_{k=0}^m \Big(\|A\|_{\infty,\Delta_k} + \|f\|_{\infty,\Delta_k}\Big)^{4p} \right)^{\frac{1}{4p}}< \infty; \label{A-hat}\\ &&\hat{C}:= \varlimsup \limits_{m \to \infty}\left(\frac{1}{m+1} \sum_{k=0}^{m}\|C\|_{q{\rm-var}, \Delta_k}^{2p+2}\right)^{\frac{1}{2p+2}}< \infty; \label{C-hat}\\ &&\Gamma(\omega,2p+2):=\varlimsup_{m\to\infty}\left(\frac{1}{m+1}\sum_{k=0}^m \ensuremath{\left| \! \left| \! \left|} \omega\ensuremath{\right| \! \right| \! \right|}^{2p+2}_{p{\rm- var},\Delta_k}\right)^{\frac{1}{2p+2}} < \infty, \label{omegagrowth} \end{eqnarray} where $\Delta_k:=[k,k+1]$.
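The optimal constant in (${\textbf H}_1$) at a fixed time is the smallest eigenvalue of $-\frac{1}{2}(A(t)+A^T(t))$, the negated symmetric part of $A(t)$. A small Python sketch checks this numerically; the concrete $2\times 2$ matrix below is a hypothetical example of our choosing, not taken from the paper:

```python
import math
import random

def neg_def_constant_2x2(A):
    """Largest h with <x, A x> <= -h ||x||^2 for a 2x2 matrix A,
    i.e. the smallest eigenvalue of -(A + A^T)/2, via the closed form
    for eigenvalues of a 2x2 symmetric matrix."""
    a, b = A[0][0], A[1][1]
    c = 0.5 * (A[0][1] + A[1][0])  # off-diagonal of the symmetric part
    lam_max = 0.5 * (a + b) + math.sqrt((0.5 * (a - b)) ** 2 + c ** 2)
    return -lam_max

# Hypothetical example: non-symmetric but negative definite.
A = [[-2.0, 1.0], [-1.0, -2.0]]
h = neg_def_constant_2x2(A)  # the skew part does not affect the quadratic form

# Spot-check <x, A x> <= -h ||x||^2 on random directions.
random.seed(0)
for _ in range(100):
    x = (random.gauss(0, 1), random.gauss(0, 1))
    quad = x[0] * (A[0][0] * x[0] + A[0][1] * x[1]) \
         + x[1] * (A[1][0] * x[0] + A[1][1] * x[1])
    assert quad <= -h * (x[0] ** 2 + x[1] ** 2) + 1e-9
```

For this matrix the symmetric part is $-2\,\mathrm{Id}$, so the best constant is $h=2$; the skew-symmetric part contributes nothing to the quadratic form.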
\begin{remark}\label{h3test} (i) Since $\langle x,A(t)x\rangle = \frac{1}{2} \langle x,A(t)x\rangle + \frac{1}{2} \langle x, A^T(t)x \rangle =\langle x,B(t)x\rangle$, where $B(t) = \frac{1}{2}[A(t)+A^T(t)]$, and since the smallest eigenvalue $h^*(t)$ of the symmetric matrix $-B(t)$ satisfies $$h^*(t) = \min\{ \langle x,-B(t)x\rangle \mid \ \|x\|=1\},$$ it follows from (${\textbf H}_1$) that $h^*(t)\geq h(t)$ for all $t\in\ensuremath{\mathbb{R}}^+$; hence $h$ can also be replaced by $h^*$ in assumption (${\textbf H}_1$). The reader is referred to \cite{demidovich}, \cite{wazewski} for the stability theory of ordinary differential equations. (ii) While assumptions (${\textbf H}_1$) and (${\textbf H}_2$) are standard, it is important to note that (${\textbf H}_3$) is satisfied in the simplest case of autonomous systems, i.e.\ $A(t) \equiv A$, $C(t) \equiv C$ and $f$ bounded on $\ensuremath{\mathbb{R}}^+$; then $\hat{A} \leq \|A\| + \|f\|_{\infty,\ensuremath{\mathbb{R}}^+}$ and $\hat{C}= \|C\|$. For a nontrivial example, consider $A(t) = A(\Theta_t \eta)$, $f(t) = f(\Theta_t \eta)$, $C(t) = C(\Theta_t \eta)$ depending on a dynamical system $\Theta_t$ on a space of elements $\eta \in C^{q{\rm-var}}$ such that $\Theta$ preserves some probability measure. Then $A(\cdot), C(\cdot)$ are functions of a stationary process, and conditions \eqref{A-hat} and \eqref{C-hat} are equivalent to \begin{eqnarray} \hat{A}= \Big[ E (\|A(\eta)\|_{\infty,[0,1]}+ \|f(\eta)\|_{\infty,[0,1]})^{4p} \Big]^{\frac{1}{4p}}< \infty,\\ \hat{C}= \left(E \|C(\eta)\|^{2p+2}_{q{\rm-var},[0,1]} \right)^{\frac{1}{2p+2}}< \infty. \end{eqnarray} Meanwhile, assumption \eqref{omegagrowth} is satisfied for almost all trajectories $\omega$ of the stationary process $Z(t)$ if \begin{equation}\label{gammaome} \Gamma(\omega,2p+2) = \Big(E (\ensuremath{\left| \! \left| \! \left|} Z(\cdot) \ensuremath{\right| \! \right| \! \right|}_{p {\rm-var},[0,1]}^{2p+2})\Big)^{\frac{1}{2p+2}} < \infty.
\end{equation} (iii) It is easy to check (see \cite{congduchong17} and \cite{congduchong18}) that conditions (${\textbf H}_2$) and (${\textbf H}_3$) ensure the existence and uniqueness of a global solution to \eqref{lin1} on $\ensuremath{\mathbb{R}}^+$. \end{remark} \begin{lemma}\label{lem70} Let $1\leq p\leq q$ be arbitrary with $\frac{1}{p}+\frac{1}{q}>1$. Assume that $\omega \in \mathcal{C}^{p{\rm-var}}([0,T],\ensuremath{\mathbb{R}})$ and $y\in \mathcal{C}^{q{\rm-var}}([0,T],\ensuremath{\mathbb{R}}^d)$ satisfy \begin{equation}\label{condition2} \ensuremath{\left| \! \left| \! \left|} y\ensuremath{\right| \! \right| \! \right|}_{q{\rm-var},[s,t]}\leq b(1 +\ensuremath{\left| \! \left| \! \left|} y\ensuremath{\right| \! \right| \! \right|}_{q{\rm-var},[s,t]}) (t-s+\ensuremath{\left| \! \left| \! \left|} \omega\ensuremath{\right| \! \right| \! \right|}_{p{\rm-var},[s,t]}), \end{equation} for all $[s,t] \subset [0,T]$, where $b\geq 0$ is a constant. Then there exists a constant $C(b)$, independent of $T$, such that the following inequality holds for every $s<t$ in $[0,T]$: \begin{eqnarray}\label{yest} \ensuremath{\left| \! \left| \! \left|} y\ensuremath{\right| \! \right| \! \right|}_{q{\rm-var},[s,t]} &\leq& C(b) \max \Big\{(t-s)^p+\ensuremath{\left| \! \left| \! \left|}\omega\ensuremath{\right| \! \right| \! \right|}^p_{p{\rm-var},[s,t]}, (t-s)+ \ensuremath{\left| \! \left| \! \left|}\omega\ensuremath{\right| \! \right| \! \right|}_{p{\rm-var},[s,t]} \Big\}. \end{eqnarray} \end{lemma} \begin{proof} Set $\overline{\omega}(s,t) = 2^{2p-1}b^p[(t-s)^p+\ensuremath{\left| \! \left| \! \left|} \omega\ensuremath{\right| \! \right| \! \right|}^p_{p\rm{-var},[s,t]}]$; then $\overline{\omega}(s,t)$ is a control on $\Delta[0,T]$ (see \cite{friz}), and due to the inequality $(a+b)^r\leq (a^r+b^r)\max\{1,2^{r-1}\},\;\forall a>0,b>0,r>0$, we have $$\ensuremath{\left| \! \left| \! \left|} y\ensuremath{\right| \! \right| \!
\right|}_{q{\rm-var},[s,t]}\leq \frac{1}{2}(1+\ensuremath{\left| \! \left| \! \left|} y\ensuremath{\right| \! \right| \! \right|}_{q{\rm-var},[s,t]})\overline{\omega}(s,t)^{1/p}.$$ This implies that $$|y(t)-y(s)|\leq \ensuremath{\left| \! \left| \! \left|} y\ensuremath{\right| \! \right| \! \right|}_{q{\rm-var},[s,t]}\leq \overline{\omega}(s,t)^{1/p}$$ for all $s,t\in [0,T]$ such that $\overline{\omega}(s,t)\leq 1$. Due to Proposition 5.10 of \cite{friz}, we have \begin{eqnarray*} \ensuremath{\left| \! \left| \! \left|} y\ensuremath{\right| \! \right| \! \right|}_{q{{\rm -var}},[s,t]}&\leq& 2 \max\{\overline{\omega}(s,t)^{1/p}, \overline{\omega}(s,t)\}\\ &\leq& C(b) \max \Big\{(t-s)^p+\ensuremath{\left| \! \left| \! \left|}\omega\ensuremath{\right| \! \right| \! \right|}^p_{p{\rm-var},[s,t]}, (t-s)+ \ensuremath{\left| \! \left| \! \left|}\omega\ensuremath{\right| \! \right| \! \right|}_{p{\rm-var},[s,t]} \Big\}, \end{eqnarray*} in which $C(b)=2\max\{(4b)^p,4b\}$. \end{proof} Our first main result on the stability of system \eqref{lin1} can be formulated as follows. \begin{theorem}\label{stabfSDElin} Suppose that conditions (${\textbf H}_1$) -- (${\textbf H}_3$) are satisfied, and further that \begin{equation}\label{negdef} \liminf \limits_{t \to \infty} \frac{1}{t} \int_0^t [h(s)-f(s)]ds \geq h_0 >0. \end{equation} Then under the condition \begin{eqnarray}\label{criterion} h_0&>&K\Big(1 +4\hat{G}\Big)\hat{C}\Big[\Gamma(\omega,2) + \Gamma(\omega,4)^{2}+\Gamma(\omega,2p+2)^{p+1}\Big], \end{eqnarray} where $K$ is given by \eqref{constK} and \[ \hat{G}:=\max \Big \{8\hat{A}, 16K\hat{C}, 8^p\hat{A}^p, 16^pK^p \hat{C}^p \Big \}, \] the zero solution of system \eqref{lin1} is exponentially stable. \end{theorem} \begin{proof} Our proof is divided into three steps. In {\bf Step 1}, we use polar coordinates to derive the growth rate of the solution in \eqref{trans3}.
The estimate of the $q$-var seminorm of the angular part $y$ is then derived in \eqref{yqvarest} in {\bf Step 2}, applying Lemma \ref{lem70}. The solution growth rate can finally be estimated in \eqref{rate}, in which each component is bounded in {\bf Step 3} using hypothesis (${\textbf H}_3$); the theorem is then proved under condition \eqref{criterion}. \\ {\bf Step 1:} Put $r(t) := \|x(t)\|$. Since the system \eqref{lin1} possesses a unique solution in both the forward and backward sense, and since $x(t)\equiv 0$ is the unique solution through zero, any solution starting from an initial condition $x(0)\ne 0$ in $\ensuremath{\mathbb{R}}^d$ satisfies $x(t)\ne 0$ for all $t\in \ensuremath{\mathbb{R}}^+$. We can then define $y(t) := \frac{x(t)}{\|x(t)\|}$. Using the integration by parts technique (see, e.g., Z{\"a}hle~\cite{zahle,zahle2}), it is easy to prove that $r(t)$ satisfies the system \allowdisplaybreaks \begin{eqnarray}\label{trans1} dr(t) &=& \Big[\langle y(t), A(t)y(t)\rangle + \langle y(t), \frac{F(t,x(t))}{\|x(t)\|}\rangle \Big] r(t)dt + \langle y(t), C(t)y(t)\rangle r(t) d\omega(t), \end{eqnarray} where \allowdisplaybreaks \begin{eqnarray}\label{eqy} dy(t) &=& \frac{r(t) d x(t) - x(t) d r(t)}{r(t)^2} \nonumber \\ &=& \Big[A(t)y(t) - y(t) \langle y(t), A(t)y(t)\rangle + \frac{F(t,x(t))}{\|x(t)\|} - y(t) \langle y(t), \frac{F(t,x(t))}{\|x(t)\|}\rangle \Big] dt \nonumber \\ &&+ [C(t)y(t) - y(t) \langle y(t), C(t)y(t)\rangle] d\omega(t).
\end{eqnarray} Using integration by parts again, we can prove that \begin{eqnarray}\label{trans2} d\log r(t) &=& \Big[\langle y(t), A(t)y(t)\rangle + \langle y(t), \frac{F(t,x(t))}{\|x(t)\|}\rangle \Big] dt + \langle y(t), C(t)y(t)\rangle d\omega(t), \end{eqnarray} or in integral form \allowdisplaybreaks \begin{eqnarray*} \log r(t) = \log r(0) + \int_0^t \Big[\langle y(s), A(s)y(s)\rangle + \langle y(s), \frac{F(s,x(s))}{\|x(s)\|}\rangle \Big] ds+ \int_0^t \langle y(s), C(s)y(s)\rangle d\omega(s). \end{eqnarray*} Due to \eqref{lipF}, $\Big\|\frac{F(t,x)}{\|x\|}\Big\| \leq f(t)$ for any $x \ne 0$, hence \allowdisplaybreaks \begin{eqnarray}\label{trans3} &&\frac{1}{t}\log r(t) \notag\\ &\leq& \frac{1}{t}\log r(0) + \frac{1}{t} \int_0^t \Big[\langle y(s), A(s)y(s)\rangle + \Big|\langle y(s), \frac{F(s,x(s))}{\|x(s)\|}\rangle \Big| \Big]ds + \frac{1}{t} \Big|\int_0^t \langle y(s), C(s)y(s)\rangle d\omega(s)\Big| \nonumber\\ &\leq& \frac{1}{t}\log r(0) - \frac{1}{t}\int_0^t [h(s)-f(s)]ds+ \frac{1}{t} \Big| \int_0^t \langle y(s), C(s)y(s)\rangle d\omega(s) \Big|. \end{eqnarray} {\bf Step 2:} To estimate the third term on the right-hand side of \eqref{trans3}, we use a discretization scheme. Note that \begin{eqnarray*} \Big| \int_{k}^{k+1} \langle y(s), C(s)y(s) \rangle d\omega(s)\Big| &\leq& \ensuremath{\left| \! \left| \! \left|} \omega \ensuremath{\right| \! \right| \! \right|}_{p{\rm-var},\Delta_k} \left( |\langle y(k),C(k) y(k)\rangle |+ K \ensuremath{\left| \! \left| \! \left|} \langle y, Cy\rangle\ensuremath{\right| \! \right| \! \right|}_{q{\rm-var},\Delta_k}\right)\\ &\leq& \ensuremath{\left| \! \left| \! \left|} \omega \ensuremath{\right| \! \right| \! \right|}_{p{\rm-var},\Delta_k} \left( \|C(k)\|+ K \ensuremath{\left| \! \left| \! \left|} \langle y, Cy\rangle\ensuremath{\right| \! \right| \!
\right|}_{q{\rm-var},\Delta_k}\right), \end{eqnarray*} due to the fact that $\|y(t)\| = 1$, where \begin{eqnarray*} \ensuremath{\left| \! \left| \! \left|}\langle y, Cy \rangle\ensuremath{\right| \! \right| \! \right|}_{q{\rm-var},\Delta_k} &\leq& \|y\|_{\infty,\Delta_k} \ensuremath{\left| \! \left| \! \left|} Cy\ensuremath{\right| \! \right| \! \right|}_{q{\rm-var},\Delta_k} + \ensuremath{\left| \! \left| \! \left|} y\ensuremath{\right| \! \right| \! \right|}_{q{\rm-var},\Delta_k} \|Cy\|_{\infty,\Delta_k}\\ &\leq& \ensuremath{\left| \! \left| \! \left|} C \ensuremath{\right| \! \right| \! \right|}_{q{\rm-var},\Delta_k} \|y\|_{\infty,\Delta_k} + \|C\|_{\infty,\Delta_k} \ensuremath{\left| \! \left| \! \left|} y\ensuremath{\right| \! \right| \! \right|}_{q{\rm-var},\Delta_k} + \ensuremath{\left| \! \left| \! \left|} y\ensuremath{\right| \! \right| \! \right|}_{q{\rm-var},\Delta_k} \|C\|_{\infty,\Delta_k}\|y\|_{\infty,\Delta_k}\\ &\leq& 2 \| C \|_{\infty,\Delta_k} \ensuremath{\left| \! \left| \! \left|} y \ensuremath{\right| \! \right| \! \right|}_{q{\rm-var},\Delta_k} + \ensuremath{\left| \! \left| \! \left|} C \ensuremath{\right| \! \right| \! \right|}_{q{\rm-var},\Delta_k}. \end{eqnarray*} Hence, \begin{eqnarray}\label{qvarest} &&\Big| \int_{k}^{k+1} \langle y(s), C(s)y(s) \rangle d\omega(s) \Big| \notag\\ &\leq& \ensuremath{\left| \! \left| \! \left|} \omega\ensuremath{\right| \! \right| \! \right|}_{p{\rm-var},\Delta_k} \Big(\|C(k)\|+ 2K \|C\|_{\infty,\Delta_k}\ensuremath{\left| \! \left| \! \left|} y\ensuremath{\right| \! \right| \! \right|}_{q{\rm-var},\Delta_k}+ K \ensuremath{\left| \! \left| \! \left|} C\ensuremath{\right| \! \right| \! \right|}_{q{\rm-var},\Delta_k}\Big) \nonumber \\ &\leq&K \|C\|_{q{\rm-var},\Delta_k}\ensuremath{\left| \! \left| \! \left|}\omega\ensuremath{\right| \! \right| \! \right|}_{p{\rm-var},\Delta_k}\Big(1+ 2\ensuremath{\left| \! \left| \! \left|} y\ensuremath{\right| \! \right| \!
\right|}_{q{\rm-var},\Delta_k} \Big). \end{eqnarray} On the other hand, from \eqref{eqy} we derive that $y$ satisfies the equation \begin{eqnarray*} y(t) -y(0) &=&\int_0^t \Big[A(s)y(s) - y(s) \langle y(s), A(s)y(s)\rangle + \frac{F(s,x(s))}{\|x(s)\|} - y(s) \langle y(s), \frac{F(s,x(s))}{\|x(s)\|}\rangle \Big] ds \\ &&+ \int_0^t[C(s)y(s) - y(s) \langle y(s), C(s)y(s)\rangle] d\omega(s)\\ &=: &I(y)(t)+J(y)(t), \quad \forall t\geq 0, \end{eqnarray*} hence for all $0<a \leq b$ \begin{eqnarray*} \ensuremath{\left| \! \left| \! \left|} y\ensuremath{\right| \! \right| \! \right|}_{q{\rm-var},[a,b]}\leq \ensuremath{\left| \! \left| \! \left|} I(y) \ensuremath{\right| \! \right| \! \right|}_{q{\rm-var},[a,b]}+ \ensuremath{\left| \! \left| \! \left|} J(y)\ensuremath{\right| \! \right| \! \right|}_{q{\rm-var},[a,b]}. \end{eqnarray*} Since $\|y(t)\| = 1$, a direct computation shows that for $ 0\leq a<b$, \begin{equation*}\label{Iest} \ensuremath{\left| \! \left| \! \left|} I(y) \ensuremath{\right| \! \right| \! \right|}_{q{\rm-var},[a,b]}\leq(b-a) \left(2\|A\|_{\infty,[a,b]} + 2 \|f\|_{\infty,[a,b]}\right), \end{equation*} and \begin{eqnarray*}\label{Jest} &&\ensuremath{\left| \! \left| \! \left|} J(y) \ensuremath{\right| \! \right| \! \right|}_{q{\rm-var},[a,b]}\\ &\leq & K \ensuremath{\left| \! \left| \! \left|} \omega\ensuremath{\right| \! \right| \! \right|}_{p{\rm-var},[a,b]} \Big( \|Cy\|_{\infty,[a,b]}+ \| y \langle y, Cy\rangle\|_{\infty,[a,b]} + \ensuremath{\left| \! \left| \! \left|} Cy\ensuremath{\right| \! \right| \! \right|}_{q{\rm-var},[a,b]} +\ensuremath{\left| \! \left| \! \left|} y \langle y, Cy\rangle\ensuremath{\right| \! \right| \! \right|}_{q{\rm-var},[a,b]} \Big) \nonumber\\ &\leq & K \ensuremath{\left| \! \left| \! \left|} \omega\ensuremath{\right| \! \right| \! \right|}_{p{\rm-var},[a,b]}\Big(2 \|C\|_{\infty,[a,b]}+2\ensuremath{\left| \! \left| \! \left|} C \ensuremath{\right| \! \right| \!
\right|}_{q{\rm-var},[a,b]}+ 4\|C\|_{\infty,[a,b]}\ensuremath{\left| \! \left| \! \left|} y \ensuremath{\right| \! \right| \! \right|}_{q{\rm-var},[a,b]}\Big) \nonumber\\ &\leq & 4K \|C\|_{q{\rm-var},[a,b]}\ensuremath{\left| \! \left| \! \left|} \omega\ensuremath{\right| \! \right| \! \right|}_{p{\rm-var},[a,b]}(1+\ensuremath{\left| \! \left| \! \left|} y \ensuremath{\right| \! \right| \! \right|}_{q{\rm-var},[a,b]}). \end{eqnarray*} Put $\hat{A}_k:=\|A\|_{\infty,\Delta_k} + \|f\|_{\infty,\Delta_k}$ and $\hat{C}_k:=\|C\|_{q{\rm-var}, \Delta_k}$, $k\in \ensuremath{\mathbb{N}}$. Then for $[a,b]\subset \Delta_k$, \begin{eqnarray*} \ensuremath{\left| \! \left| \! \left|} y\ensuremath{\right| \! \right| \! \right|}_{q{\rm-var},[a,b]} &\leq& \max\{2\hat{A}_k,4K\hat{C}_k\} \Big[(b-a)+\ensuremath{\left| \! \left| \! \left|} \omega\ensuremath{\right| \! \right| \! \right|}_{p{\rm-var},[a,b]} \Big]\Big(1+ \ensuremath{\left| \! \left| \! \left|} y\ensuremath{\right| \! \right| \! \right|}_{q{\rm-var},[a,b]}\Big). \end{eqnarray*} By applying Lemma \ref{lem70} we obtain \begin{eqnarray}\label{yqvarest} \ensuremath{\left| \! \left| \! \left|} y\ensuremath{\right| \! \right| \! \right|}_{q{\rm-var},\Delta_k}&\leq& 2 G_k\max \Big\{ 1+\ensuremath{\left| \! \left| \! \left|} \omega\ensuremath{\right| \! \right| \! \right|}_{p{\rm-var},\Delta_k}, 1 +\ensuremath{\left| \! \left| \! \left|} \omega\ensuremath{\right| \! \right| \! \right|}^p_{p{\rm-var},\Delta_k}\Big\}\notag\\ &\leq& 2 G_k\Big( 1+\ensuremath{\left| \! \left| \! \left|} \omega\ensuremath{\right| \! \right| \! \right|}_{p{\rm-var},\Delta_k}+\ensuremath{\left| \! \left| \! \left|} \omega\ensuremath{\right| \! \right| \! \right|}^p_{p{\rm-var},\Delta_k}\Big), \end{eqnarray} where \[ G_k:= \max\Big\{8\hat{A}_k,16K\hat{C}_k, 8^p\hat{A}_k^p,16^pK^p\hat{C}_k^p\Big\}.
\] For any $t \in [m,m+1]$, \begin{eqnarray}\label{trans4} &&\frac{1}{t} \Big| \int_0^t \langle y(s), C(s)y(s)\rangle d\omega(s) \Big|\notag\\ &\leq& \frac{1}{m} \left(\sum_{k=0}^{m -1}\left | \int_{k}^{k+1} \langle y(s), C(s)y(s) \rangle d\omega(s)\right |+ \left | \int_{m}^{t} \langle y(s), C(s)y(s) \rangle d\omega(s)\right |\right) \notag\\ &&\leq \frac{K}{m} \sum_{k=0}^m \hat{C}_k\ensuremath{\left| \! \left| \! \left|} \omega\ensuremath{\right| \! \right| \! \right|}_{p{\rm-var},\Delta_k} \Big(1+2\ensuremath{\left| \! \left| \! \left|} y\ensuremath{\right| \! \right| \! \right|}_{q{\rm-var},\Delta_k} \Big). \end{eqnarray} Combining \eqref{trans4} with \eqref{trans3} and \eqref{yqvarest}, we get \begin{eqnarray}\label{rate} \varlimsup_{t\to \infty}\frac{1}{t}\log r(t) &\leq& -h_0 + \varlimsup_{m\to \infty} \frac{K}{(m+1)} \sum_{k=0}^m \hat{C}_k\ensuremath{\left| \! \left| \! \left|} \omega\ensuremath{\right| \! \right| \! \right|}_{p{\rm-var},\Delta_k} \notag \\ &&+ \varlimsup_{m\to \infty}\frac{4K}{(m+1)} \sum_{k=0}^m \hat{C}_k G_k \Big(\ensuremath{\left| \! \left| \! \left|} \omega\ensuremath{\right| \! \right| \! \right|}_{p{\rm-var},\Delta_k}+\ensuremath{\left| \! \left| \! \left|} \omega\ensuremath{\right| \! \right| \! \right|}^{2}_{p{\rm-var},\Delta_k}+ \ensuremath{\left| \! \left| \! \left|} \omega\ensuremath{\right| \! \right| \! \right|}_{p{\rm-var},\Delta_k}^{p+1}\Big).\notag\\ \end{eqnarray} {\bf Step 3.} Using the H\"older inequality, the second term in \eqref{rate} can be estimated as follows: \begin{eqnarray*} \varlimsup_{m\to \infty} \frac{K}{(m+1)} \sum_{k=0}^m \hat{C}_k\ensuremath{\left| \! \left| \! \left|} \omega\ensuremath{\right| \! \right| \! \right|}_{p{\rm-var},\Delta_k} &\leq &K\varlimsup_{m\to \infty} \Big(\frac{1}{m+1} \sum_{k=0}^m \hat{C}^2_k\Big)^{\frac{1}{2}} \varlimsup_{m\to \infty} \Big(\frac{1}{m+1} \sum_{k=0}^m \ensuremath{\left| \! \left| \!
\left|}\omega\ensuremath{\right| \! \right| \! \right|}_{p{\rm-var},\Delta_k}^2 \Big)^{\frac{1}{2}} \\ &\leq& K\hat{C} \Gamma(\omega,2). \end{eqnarray*} Similarly, we obtain the estimates for the other terms on the right-hand side of \eqref{rate}, so that \begin{eqnarray*} \varlimsup_{t\to \infty}\frac{1}{t}\log r(t) &\leq& - h_0 +K \hat{C}\Gamma(\omega,2) \\ &&+4K\varlimsup_{m\to \infty} \Big(\frac{1}{m+1} \sum_{k=0}^m \hat{C}^2_k G_k^2\Big)^{\frac{1}{2}}\Big( \Gamma(\omega,2)+ \Gamma(\omega,4)^{2}+\Gamma(\omega,2p+2)^{p+1}\Big), \end{eqnarray*} where all the values of $\Gamma$ are finite due to assumption \eqref{omegagrowth}. To estimate the average of $\hat{C}^2_k G_k^2$, observe that \allowdisplaybreaks \begin{eqnarray*} && \varlimsup_{m\to \infty} \left(\frac{1}{m+1} \sum_{k=0}^m \hat{A}_k^2\hat{C}_k^2\right)^{\frac{1}{2}} \leq \varlimsup_{m\to \infty} \left(\frac{1}{m+1} \sum_{k=0}^m \hat{A}_k^4\right)^{\frac{1}{4}} \varlimsup_{m\to \infty} \left(\frac{1}{m+1} \sum_{k=0}^m \hat{C}_k^4\right)^{\frac{1}{4}}\leq \hat{A}\hat{C};\\ &&\varlimsup_{m\to \infty} \left(\frac{1}{m+1} \sum_{k=0}^m \hat{C}_k^4\right)^{\frac{1}{2}}\leq \hat{C}^2;\\ &&\varlimsup_{m\to \infty} \left(\frac{1}{m+1} \sum_{k=0}^m \hat{A}_k^{2p}\hat{C}_k^2\right)^{\frac{1}{2}} \leq \varlimsup_{m\to \infty} \left(\frac{1}{m+1} \sum_{k=0}^m \hat{A}_k^{4p}\right)^{\frac{1}{4}} \varlimsup_{m\to \infty} \left(\frac{1}{m+1} \sum_{k=0}^m \hat{C}_k^4\right)^{\frac{1}{4}}\leq \hat{A}^p\hat{C};\\ &&\varlimsup_{m\to \infty} \left(\frac{1}{m+1} \sum_{k=0}^m \hat{C}_k^{2(p+1)}\right)^{\frac{1}{2}}= \hat{C}^{p+1}. \end{eqnarray*} Hence \begin{eqnarray*} &&\varlimsup_{m\to \infty} \Big(\frac{1}{m+1} \sum_{k=0}^m \hat{C}^2_k G_k^2\Big)^{\frac{1}{2}} \leq \max \{8\hat{A}\hat{C}, 16K\hat{C}^2, 8^p\hat{A}^p\hat{C}, 16^pK^p \hat{C}^{1+p} \} = \hat{C}\hat{G}.
\end{eqnarray*} As a result, \begin{eqnarray*} \varlimsup_{t\to \infty}\frac{1}{t}\log r(t) &\leq& -h_0 + K(1+ 4\hat{G}) \hat{C} \Big( \Gamma(\omega,2) + \Gamma(\omega,4)^{2}+\Gamma(\omega,2p+2)^{p+1}\Big)<0, \end{eqnarray*} due to \eqref{criterion}, which proves the exponential stability of the zero solution of system \eqref{lin1}. \end{proof} \begin{corollary}\label{Phi} Consider the equation \begin{equation}\label{AutoLinear} dz(t) = Az(t) dt + Cz(t) d\omega(t) \end{equation} in which $A,C\in \ensuremath{\mathbb{R}}^{d\times d}$ and $A$ is negative definite, i.e.\ there exists a constant $h_A>0$ such that \begin{equation}\label{conditions1} \langle x, Ax \rangle \leq - h_A \|x\|^2,\quad \forall x\in \ensuremath{\mathbb{R}}^d. \end{equation} Denote by $\Phi(t,\omega)$ the matrix solution of \eqref{AutoLinear}, $\Phi(0,\omega)=Id$. Then for any given $\delta>0$ \begin{equation}\label{phiest} \|\Phi(t,\omega)\| \leq \exp \Big\{ -h_A t + \delta + \max\{\|C\|, \|C\|^{p}\} \kappa (t,\omega) \Big\},\quad \forall t \in [0,1], \end{equation} where \begin{eqnarray} G &:=& \max\Big\{8 \|A\|,16K \|C\|, 8^p\|A\|^p,16^pK^p \|C\|^p\Big\},\label{GA} \end{eqnarray} and \begin{eqnarray}\label{kappa} \kappa (t,\omega) := \frac{1}{\delta^{p-1}} \ensuremath{\left| \! \left| \! \left|} \omega \ensuremath{\right| \! \right| \! \right|}_{p{\rm -var},[0,t]}^{p}+ 4K G \ensuremath{\left| \! \left| \! \left|} \omega \ensuremath{\right| \! \right| \! \right|}_{p {\rm - var},[0,t]} \Big(t + \ensuremath{\left| \! \left| \! \left|} \omega \ensuremath{\right| \! \right| \! \right|}_{p {\rm - var},[0,t]} + \ensuremath{\left| \! \left| \! \left|} \omega \ensuremath{\right| \! \right| \! \right|}_{p {\rm - var},[0,t]}^p\Big). \end{eqnarray} \end{corollary} \begin{proof} First, it can be seen that \[ \|C\| \ensuremath{\left| \! \left| \! \left|} \omega \ensuremath{\right| \! \right| \!
\right|}_{p{\rm -var},[0,t]} \leq \delta + \frac{1}{\delta^{p-1}} \Big(\|C\| \ensuremath{\left| \! \left| \! \left|} \omega \ensuremath{\right| \! \right| \! \right|}_{p{\rm -var},[0,t]} \Big)^p. \] For any $x_0\in \ensuremath{\mathbb{R}}^d$, it follows from \eqref{yest} that, for any $t\in [0,1]$ and $y(t) = \frac{\Phi(t,\omega)x_0}{\|\Phi(t,\omega)x_0\|}$, \begin{eqnarray*} \log \|\Phi(t,\omega)x_0\| &=& \int_0^t \langle y(s),Ay(s) \rangle ds + \int_0^t \langle y(s), C y(s) \rangle d\omega(s) \notag \\ &\leq& -h_A t + \|C\| \ensuremath{\left| \! \left| \! \left|} \omega \ensuremath{\right| \! \right| \! \right|}_{p {\rm - var},[0,t]} + 2 K \|C\| \ensuremath{\left| \! \left| \! \left|} \omega \ensuremath{\right| \! \right| \! \right|}_{p {\rm - var},[0,t]} \ensuremath{\left| \! \left| \! \left|} y \ensuremath{\right| \! \right| \! \right|}_{q {\rm - var},[0,t]} \notag \\ &\leq& - h_A t +\delta + \frac{1}{\delta^{p-1}} \Big(\|C\| \ensuremath{\left| \! \left| \! \left|} \omega \ensuremath{\right| \! \right| \! \right|}_{p{\rm -var},[0,t]} \Big)^{p} \\ && + 4K \|C\| \ensuremath{\left| \! \left| \! \left|} \omega \ensuremath{\right| \! \right| \! \right|}_{p {\rm - var},[0,t]} G \Big(\max \{t,t^p\} + \ensuremath{\left| \! \left| \! \left|} \omega \ensuremath{\right| \! \right| \! \right|}_{p {\rm - var},[0,t]} + \ensuremath{\left| \! \left| \! \left|} \omega \ensuremath{\right| \! \right| \! \right|}_{p {\rm - var},[0,t]}^p\Big) \notag\\ &\leq& - h_A t + \delta + \frac{1}{\delta^{p-1}} \Big(\|C\| \ensuremath{\left| \! \left| \! \left|} \omega \ensuremath{\right| \! \right| \! \right|}_{p{\rm -var},[0,t]} \Big)^{p} \\ && + 4K G\|C\| \ensuremath{\left| \! \left| \! \left|} \omega \ensuremath{\right| \! \right| \! \right|}_{p {\rm - var},[0,t]} \Big(t + \ensuremath{\left| \! \left| \! \left|} \omega \ensuremath{\right| \! \right| \! \right|}_{p {\rm - var},[0,t]} + \ensuremath{\left| \! \left| \!
\left|} \omega \ensuremath{\right| \! \right| \! \right|}_{p {\rm - var},[0,t]}^p\mathbb{B}ig)\\ &\leq& - h_A t + \delta + \max\{\|C\|^{p}, \|C\|\} \kappa(t,\omega), \end{eqnarray*} which proves \eqref{phiest}. \end{proof} \begin{remark}\label{comparison} (i), In \cite{GAKLBSch2010} and \cite{ducGANSch18} the authors develop the semigroup method to estimate the H\"older norm of $y$ on intervals $\tau_k, \tau_{k+1}$ where $\tau_k$ is a sequence of stopping times \[ \tau_0 = 0, \tau_{k+1} - \tau_k + \ensuremath{\left| \! \left| \! \left|} x \ensuremath{\right| \! \right| \! \right|}_{\beta,[\tau_k,\tau_{k+1}]} = \mu \] for some $\mu \in (0,1)$ and $\beta > \frac{1}{p}$, which leads to the estimate of the exponent \[ - \mathbb{B}ig(h_A - Q e^{h_A}\max\{C_f,\|C\|\} \frac{n}{\tau_n} \mathbb{B}ig), \] where $h_A$ is given in \eqref{conditions1}, $C_f$ is the Lipchitz constant of $f$ and $Q$ is generic constant independent of $A, f, C, \omega$. It is then proved that there exists $\liminf \limits_{n \to \infty} \frac{\tau_n}{n} = \frac{1}{d}$, where $d = d(\mu)$ depends on the moment of the stochastic noise. As such the rate of exponential convergence of the solution to zero can be estimated as \begin{equation}\label{criterion2} -\mathbb{B}ig(h_A - Q e^{h_A}\max\{C_f,\|C\|\} d\mathbb{B}ig). \end{equation} However, it is required from the stopping time analysis (see \cite[Section 4]{ducGANSch18}) that the stochastic noise has to be small in the sense that the moment of H\"older semi-norm $\ensuremath{\left| \! \left| \! \left|} \omega \ensuremath{\right| \! \right| \! \right|}_{\beta,[-1,1]}$ must be controlled as small as possible. On the other hand, when reduced to the case without noise, i.e. $C \equiv 0$, \eqref{criterion2} implies a very rough criterion for exponential stability of the ordinary differential equation \begin{equation}\label{oldcriterion} C_f \leq \frac{1}{Q} h_A e^{-h_A}. 
\end{equation} By contrast, if $A,C$ are constant matrices and $f(t) \equiv C_f$, condition \eqref{criterion} is satisfied if \begin{eqnarray}\label{criterion1} h_A - C_f &>& K \|C\| (1 +4G)\Big\{\Gamma(\omega,2) + \Gamma(\omega,4)^{2}+\Gamma(\omega,2+2p)^{1+p} \Big\}, \end{eqnarray} where $G$ is given by \eqref{GA}. The left- and right-hand sides of criteria \eqref{criterion} and \eqref{criterion1} can therefore be interpreted as, respectively, the decay rate of the drift term and the intensity of the volatility term. In this sense, criteria \eqref{criterion} and \eqref{criterion1} have the same form as the criterion \begin{equation}\label{itocriterion} \liminf \limits_{t \to \infty} \frac{1}{t} \int_0^t [h(s)-f(s)]ds > \limsup \limits_{t \to \infty} \frac{1}{t} \int_0^t \|C(s)\|^2 ds \end{equation} for stochastic systems driven by a standard Brownian motion (see e.g. \cite{mao}). Indeed, using Hypotheses $(\mathbf{H}_1)$--$(\mathbf{H}_2)$ and estimate \eqref{Itoest}, it follows that \begin{eqnarray*} d E \|x(t)\|^2 &=& E \Big(2 \langle x(t), A(t) x(t) \rangle + 2 \langle x(t), F(t,x(t)) \rangle + \|C(t) x(t)\|^2 \Big) dt \\ &\leq& \Big(- h(t) + f(t) + \|C(t)\|^2 \Big) E \|x(t)\|^2 \, dt, \end{eqnarray*} which yields exponential stability under \eqref{itocriterion}.\\ In addition, since $\|C\| (1 +4G)$ is an increasing function of $\|C\|$, criterion \eqref{criterion1} is satisfied when the driving noise $\omega$ is small, in the sense that the quantity in the brackets $\{\dots\}$ is small enough, or when $\|C\|$ is small. Moreover, for ordinary differential equations, criteria \eqref{criterion} and \eqref{criterion1} reduce to $h_A>C_f$, which is the classical criterion and is much better than \eqref{oldcriterion} for dissipative systems.
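The averaged criterion \eqref{itocriterion} can be illustrated numerically in the simplest scalar case. The sketch below is only an informal Monte Carlo check (the helper name \texttt{euler\_maruyama\_second\_moment} and all parameter values are ours, chosen for illustration): for the It\^o equation $dx = a\,x\,dt + c\,x\,dW$ one has $E\,x(t)^2 = x_0^2\, e^{(2a+c^2)t}$, so the mean square decays exactly when the drift decay rate $-a$ exceeds $c^2/2$, mirroring the drift-versus-volatility comparison in \eqref{itocriterion}.

```python
# Informal Monte Carlo sketch (hypothetical helper names, not from the paper):
# for the scalar Ito SDE dx = a*x dt + c*x dW, geometric Brownian motion gives
# E x(t)^2 = x0^2 * exp((2a + c^2) t), so mean-square stability holds
# iff 2a + c^2 < 0 -- the scalar analogue of the averaged criterion above.
import math
import random

def euler_maruyama_second_moment(a, c, x0=1.0, t_end=1.0, dt=2e-3,
                                 n_paths=2000, seed=7):
    """Euler-Maruyama Monte Carlo estimate of E x(t_end)^2."""
    rng = random.Random(seed)
    n_steps = int(round(t_end / dt))
    total = 0.0
    for _ in range(n_paths):
        x = x0
        for _ in range(n_steps):
            # one Euler-Maruyama step with Gaussian increment of variance dt
            x += a * x * dt + c * x * rng.gauss(0.0, math.sqrt(dt))
        total += x * x
    return total / n_paths

a, c = -1.0, 0.5                        # 2a + c^2 = -1.75 < 0: stable
mc = euler_maruyama_second_moment(a, c)
exact = math.exp((2 * a + c ** 2) * 1.0)  # exact second moment at t = 1
```

With the parameters above the estimate agrees with the closed-form second moment up to Monte Carlo and discretization error; reversing the sign of $2a + c^2$ (e.g. $a=-0.1$, $c=1$) makes the estimated second moment grow instead.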
Therefore criteria \eqref{criterion} and \eqref{criterion1} can be viewed as a better generalization of the classical results on exponential stability for dissipative systems. (ii) Regarding system \eqref{AutoLinear}, we can obtain, in some special cases, better estimates than \eqref{phiest}. In particular, if $A$ and $C$ commute, then a direct computation shows that \begin{equation}\label{phiformula} \Phi(t,\omega) = \exp \{ At + C \omega(t)\},\quad \forall t \geq 0. \end{equation} As a result, \begin{equation}\label{phinormest} \|\Phi(t,\omega)\| \leq \|e^{At}\| \|e^{C\omega(t)}\| \leq \|e^{At}\| e^{\|C\||\omega(t)|},\quad \forall t \geq 0. \end{equation} Therefore, under the assumption that \[ \limsup \limits_{t \to \infty} \frac{\omega(t)}{t} = 0 \] (which holds for almost every realization $\omega$ of a fractional Brownian motion), it follows that \[ \limsup \limits_{t \to \infty} \frac{1}{t} \log \|\Phi(t,\omega)\| \leq \limsup \limits_{t \to \infty} \frac{1}{t} \log \|e^{At}\| + \limsup \limits_{t \to \infty} \frac{1}{t} \|C\| |\omega(t)| = \limsup \limits_{t \to \infty} \frac{1}{t} \log \|e^{At}\|. \] In this situation, the exponential stability criterion for system \eqref{AutoLinear} is equivalent to that of the autonomous ordinary differential equation $\dot{z} = A z$, i.e. to all eigenvalues of $A$ having negative real parts. However, since \eqref{phiformula} does not hold in general, we cannot obtain \eqref{phinormest} in general, but only its discrete-time counterpart \eqref{phiest}. (iii) The strong condition \eqref{negdefh} still covers several interesting cases, for instance $A(t) \equiv A$ with all eigenvalues having negative real parts. Then there exists a positive definite matrix $Q$, which solves the matrix equation \[ A^{\rm T} Q^2 + Q^2 A = 2D, \] where $D$ is a symmetric negative definite matrix \cite[Chapter 2 \& Chapter 5]{burton} such that $\langle Dx,x \rangle \leq - \lambda_D \|x\|^2$.
Under the transformation $\tilde{x} = Qx$ the system \[ dx(t) = [A x(t) + F(t,x(t))]dt + C(t)x(t) d\omega(t) \] is transformed into \begin{eqnarray} d \tilde{x}(t) &=& \Big[QAQ^{-1} \tilde{x}(t) + QF(t,Q^{-1}\tilde{x}(t)) \Big]dt + QC(t)Q^{-1}\tilde{x}(t)d\omega(t) \notag\\ &=& \Big[ \tilde{A} \tilde{x}(t) + \tilde{F}(t,\tilde{x}(t))\Big] dt + \tilde{C}(t)\tilde{x}(t) d\omega(t), \end{eqnarray} where $\tilde{F}$ is globally Lipschitz continuous with $f(t)$ in \eqref{lipF} replaced by $\tilde{f}(t) = \|Q\| \|Q^{-1}\| f(t)$; $\hat{A}, \hat{C}$ in \eqref{A-hat} and \eqref{C-hat} are replaced by $\|Q\| \|Q^{-1}\| \hat{A}$, $\|Q\| \|Q^{-1}\| \hat{C}$; and \eqref{negdefh} takes the form \begin{eqnarray*} \langle \tilde{x}, \tilde{A} \tilde{x} \rangle &=& \langle \tilde{x}, \frac{1}{2}[\tilde{A} + \tilde{A}^{\rm T}] \tilde{x}\rangle = \langle\tilde{x}, \frac{1}{2} [QAQ^{-1} + Q^{-1}A^{\rm T} Q] \tilde{x} \rangle \\ &=& \langle Q x,\frac{1}{2} QAQ^{-1}Qx\rangle + \langle Q x, \frac{1}{2} Q^{-1}A^{\rm T} Q^2 x\rangle\\ &=& \langle x,\frac{1}{2} [Q^2 A + A^{\rm T}Q^2]x \rangle \\ &=& \langle x, Dx \rangle \leq - \lambda_D \|x\|^2 \leq - \frac{\lambda_D}{\|Q\|^2} \|\tilde{x}\|^2. \end{eqnarray*} Therefore we are still able to apply Theorem \ref{stabfSDElin} with a small modification of conditions \eqref{negdef} and \eqref{criterion}.
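The Lyapunov equation $A^{\rm T} Q^2 + Q^2 A = 2D$ behind this transformation is easy to solve numerically for $X = Q^2$ by vectorization. The sketch below (pure Python; the helper names \texttt{kron}, \texttt{solve\_linear}, \texttt{lyapunov} and the example matrices are ours, for illustration only) uses the row-major identity $\mathrm{vec}(A^{\rm T}X + XA) = (A^{\rm T}\otimes I + I\otimes A^{\rm T})\,\mathrm{vec}(X)$.

```python
# Hypothetical sketch: solve A^T X + X A = 2D for X = Q^2 (Remark (iii))
# via the vectorized linear system (A^T kron I + I kron A^T) vec(X) = vec(2D),
# using row-major vec. Pure-Python linear algebra, small examples only.

def kron(A, B):
    """Kronecker product of two matrices given as lists of lists."""
    return [[A[i][j] * B[k][l]
             for j in range(len(A[0])) for l in range(len(B[0]))]
            for i in range(len(A)) for k in range(len(B))]

def solve_linear(M, b):
    """Gaussian elimination with partial pivoting for M x = b."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            M[r] = [mr - f * mc for mr, mc in zip(M[r], M[col])]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def lyapunov(A, D):
    """Solve A^T X + X A = 2D; X is symmetric when D is symmetric."""
    n = len(A)
    I = [[float(i == j) for j in range(n)] for i in range(n)]
    AT = [[A[j][i] for j in range(n)] for i in range(n)]
    M = [[u + v for u, v in zip(r1, r2)]
         for r1, r2 in zip(kron(AT, I), kron(I, AT))]
    rhs = [2.0 * D[i][j] for i in range(n) for j in range(n)]
    v = solve_linear(M, rhs)
    return [v[i * n:(i + 1) * n] for i in range(n)]

# Hurwitz (negative-real-part eigenvalues) A and negative definite D:
A = [[-2.0, 1.0], [0.0, -3.0]]
D = [[-1.0, 0.0], [0.0, -1.0]]
X = lyapunov(A, D)   # X = Q^2: symmetric positive definite here
```

For this example $X = \begin{pmatrix} 1/2 & 1/10 \\ 1/10 & 11/30 \end{pmatrix}$ is symmetric positive definite, so $Q = X^{1/2}$ furnishes the transformation of the remark.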
(iv) It is important to note that in the nonautonomous situation, the semigroup used in the method of \cite{ducGANSch18} or \cite{GAKLBSch2010} should be replaced by the two-parameter flow $\Psi(t,s)$ generated by the nonautonomous differential equation $\dot{z} = A(t)z$. As a result, all $p$-variation norm estimates for such a $\Psi$ would be quite complex to present. Our method, however, overcomes this drawback by using Lyapunov-type functions, as seen in the proof of Theorem \ref{stabfSDElin}. \end{remark} \section{Applications: Existence of random attractors} In this section we apply the main result to study the system \begin{equation}\label{fSDE} dx(t) = [Ax(t) + f(x(t))]dt + Cx(t)dB^H(t),\quad x(0)=x_0 \in \mathbb{R}^d, \end{equation} where $B^H$ is a one-dimensional fractional Brownian motion with Hurst index $H>\frac{1}{2}$, $A$ is negative definite and $f: \mathbb{R}^d \to \mathbb{R}^d$ is globally Lipschitz continuous, i.e.
there exist constants $h_A, c_f >0$ such that \begin{equation}\label{conditions} \langle x, Ax \rangle \leq - h_A \|x\|^2,\quad \|f(x) - f(y)\| \leq c_f \|x-y\|, \quad \forall x,y\in \mathbb{R}^d. \end{equation} Given $\frac{1}{2}< \nu < H$ and any time interval $[0,T]$, almost surely the realizations $\omega(\cdot) = B^H(\cdot,\omega)$ belong to the H\"older space $C^{\nu\rm{-Hol}}([0,T],\mathbb{R})$ (see e.g. \cite[Proposition 1.6]{nourdin}); thus system \eqref{fSDE} can be solved in the pathwise sense and admits a unique solution $x(t,\omega,x_0)$, according to Theorem \ref{existence}. Moreover, it is proved, e.g. in \cite{GAKLBSch2010}, that the solution generates a so-called {\it random dynamical system} defined by $\varphi(t,\omega)x_0 := x(t,\omega,x_0)$ on the probability space $(\Omega,\mathcal{F},\mathbb{P})$ equipped with a metric dynamical system $\theta$, i.e. $\theta_{t+s} = \theta_t \circ \theta_s$ for all $t,s \in \mathbb{R}$. Namely, $\varphi: \mathbb{R} \times \Omega \times \mathbb{R}^d \to \mathbb{R}^d$ is a measurable mapping, continuous in $t$ and $x_0$, such that the cocycle property \[ \varphi(t+s,\omega) =\varphi(t,\theta_s \omega) \circ \varphi(s,\omega),\quad \forall t,s \in \mathbb{R}, \] is satisfied \cite{arnold}. It is important to note that, given the probability space $\Omega = \mathcal{C}_0(\mathbb{R},\mathbb{R})$ of continuous functions on $\mathbb{R}$ vanishing at zero, with the Borel sigma-algebra $\mathcal{F}$, the Wiener shift $\theta_t \omega (\cdot) = \omega(t+ \cdot)-\omega(t)$ and the Wiener probability $\mathbb{P}$, it follows from \cite[Theorem 1]{GASch} that one can construct an invariant probability measure $\mathbb{P}^H = B^H \mathbb{P}$ on the subspace $\mathcal{C}^\nu$ such that $B^H \circ \theta = \theta \circ B^H$ and $\theta$ is ergodic.
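The cocycle property can be checked by hand in the scalar case, where the pathwise solution of $dz = a z\, dt + c z\, d\omega$ is explicit, $\varphi(t,\omega)x_0 = x_0 e^{at + c\omega(t)}$. The following short sketch (a deterministic stand-in path \texttt{omega} replaces a sampled fBm realization; all names are ours, for illustration) verifies $\varphi(t+s,\omega) = \varphi(t,\theta_s\omega)\circ\varphi(s,\omega)$ with the Wiener shift numerically.

```python
# Minimal sketch (hypothetical names): for d = 1 the pathwise solution of
# dz = a z dt + c z d(omega) is phi(t, omega) x0 = x0 * exp(a t + c omega(t)).
# We check the cocycle property phi(t+s, w) = phi(t, theta_s w) o phi(s, w),
# where the Wiener shift acts as (theta_s w)(u) = w(s + u) - w(s).
import math

a, c = -1.0, 0.3

def omega(t):
    # deterministic stand-in for one noise realization, with omega(0) = 0
    return math.sin(2.0 * t) + 0.5 * t

def phi(t, w, x0):
    return x0 * math.exp(a * t + c * w(t))

def shift(s, w):
    return lambda u: w(s + u) - w(s)

s, t, x0 = 0.7, 1.9, 2.5
lhs = phi(t + s, omega, x0)                       # phi(t+s, w) x0
rhs = phi(t, shift(s, omega), phi(s, omega, x0))  # composed cocycle
```

The two quantities agree up to floating-point error, since $e^{as+c\omega(s)}\,e^{at+c(\omega(t+s)-\omega(s))} = e^{a(t+s)+c\omega(t+s)}$; the identity is path-independent, which is why a deterministic stand-in path suffices here.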
\\ Following \cite{arnold-schmalfuss}, \cite{crauel-flandoli}, we call a set $\hat{M} = \{M(\omega)\}_{\omega \in \Omega}$ a {\it random set} if $\omega \mapsto d(x|M(\omega))$ is $\mathcal{F}$-measurable for each $x \in \mathbb{R}^d$, where $d(E|F) = \sup\{\inf\{d(x, y)\,|\,y \in F\} \,|\, x \in E\}$ for nonempty subsets $E,F$ of $\mathbb{R}^d$ and $d(x|E) = d(\{x\}|E)$. Let $\varphi$ be a continuous random dynamical system on $\mathbb{R}^d$. A {\it universe} $\mathcal{D}$ is a family of random sets which is closed w.r.t. inclusions (i.e. if $\hat{D}_1 \in \mathcal{D}$ and $\hat{D}_2 \subset \hat{D}_1$ then $\hat{D}_2 \in \mathcal{D}$). In our setting, we define the universe $\mathcal{D}$ to be the family of {\it tempered} random sets $D(\omega)$ (see e.g. \cite[pp. 164, 386]{arnold}), namely $D(\omega)$ is contained in a ball $B(0,\rho(\omega))$ for all $\omega\in\Omega$, where the radius $\rho(\omega) >0$ is a {\it tempered random variable}, i.e. \begin{equation}\label{tempered} \lim \limits_{t \to \pm \infty} \frac{1}{t} \log \rho(\theta_{t}\omega) =0. \end{equation} A random compact invariant set $\mathcal{A} \in \mathcal{D}$ is called a {\it pullback random attractor} in $\mathcal{D}$ if $\mathcal{A}$ attracts any closed random set $\hat{D} \in \mathcal{D}$ in the pullback sense, i.e. \begin{equation}\label{pullback} \lim \limits_{t \to \infty} d(\varphi(t,\theta_{-t}\omega) \hat{D}(\theta_{-t}\omega)| \mathcal{A} (\omega)) = 0. \end{equation} Similarly, $\mathcal{A}$ is called a {\it forward random attractor} in $\mathcal{D}$ if $\mathcal{A}$ attracts any closed random set $\hat{D} \in \mathcal{D}$ in the forward sense, i.e. \begin{equation*}\label{eq3.6} \lim \limits_{t \to \infty} d(\varphi(t,\omega) \hat{D}(\omega)| \mathcal{A}(\theta_t \omega)) = 0.
\end{equation*} The existence of a random pullback attractor follows from the existence of a random pullback absorbing set (see \cite{crauel-flandoli}, \cite{schenk-hoppe01}). A random set $\mathcal{B} \in \mathcal{D}$ is called {\it pullback absorbing} in a universe $\mathcal{D}$ if $\mathcal{B}$ absorbs all sets in $\mathcal{D}$, i.e. for any $\hat{D} \in \mathcal{D}$ there exists a time $t_0 = t_0(\omega,\hat{D})$ such that \begin{equation}\label{absorb} \varphi(t,\theta_{-t}\omega)\hat{D}(\theta_{-t}\omega) \subset \mathcal{B} (\omega), \ \textup{for all}\ t\geq t_0. \end{equation} Given a universe $\mathcal{D}$ and a random compact pullback absorbing set $\mathcal{B} \in \mathcal{D}$, there exists a unique random pullback attractor (which is then a weak attractor) in $\mathcal{D}$, given by \begin{equation}\label{at} \mathcal{A}(\omega) = \cap_{s \geq 0} \overline{\cup_{t\geq s} \varphi(t,\theta_{-t}\omega)\mathcal{B}(\theta_{-t}\omega)}. \end{equation} The reader is referred to the survey \cite{crauelkloeden15} on random attractors.
\begin{lemma} For $\delta> 0$, the function $\kappa$ defined in \eqref{kappa} satisfies:\\ (i) for all $0<s<t<1$, \begin{equation}\label{kappa2} \kappa(t,\omega) \geq \kappa(s,\omega) + \kappa(t-s,\theta_s \omega); \end{equation} (ii) for all $0\leq t'\leq 1$, \begin{equation}\label{kappa3} \kappa(1,\theta_{t'}\omega)\leq 2^{p} [\kappa(1,\omega)+\kappa(1,\theta_{1}\omega)]; \end{equation} (iii) $E\,\kappa(1,\omega)<\infty$. \end{lemma}
\begin{proof} (i) The inequality holds since $|\!|\!| \omega |\!|\!|_{p{\rm -var},[s,t]}^p$, $|\!|\!| \omega |\!|\!|_{p{\rm -var},[s,t]}^2$ and $|\!|\!| \omega |\!|\!|_{p{\rm -var},[s,t]}^{p+1}$ are control functions (see \cite{friz} for details on control functions), while \[ t |\!|\!| \omega |\!|\!|_{p{\rm -var},[0,t]} + s |\!|\!| \omega |\!|\!|_{p{\rm -var},[t,s+t]} \leq (t+s) |\!|\!| \omega |\!|\!|_{p{\rm -var},[0,t+s]}. \] (ii) Due to \cite[Lemma 2.1]{congduchong17}, if $z$ is an arbitrary function of bounded $p$-variation on $[0,2]$, then \[ |\!|\!| z |\!|\!|^p_{p\rm{-var},[0,2]}\leq 2^{p-1}\big(|\!|\!| z |\!|\!|^p_{p\rm{-var},[0,1]} + |\!|\!| z |\!|\!|^p_{p\rm{-var},[1,2]}\big), \] which implies that for all $n\geq 0$ \begin{eqnarray*}\label{pvar} |\!|\!| z |\!|\!|^n_{p\rm{-var},[0,2]} &\leq& 2^{\frac{(p-1)n}{p}}\big(|\!|\!| z |\!|\!|^p_{p\rm{-var},[0,1]} + |\!|\!| z |\!|\!|^p_{p\rm{-var},[1,2]} \big)^{\frac{n}{p}} \notag\\ &\leq& 2^{\max\{p, n\}-1}\left(|\!|\!| z |\!|\!|^n_{p\rm{-var},[0,1]} + |\!|\!| z |\!|\!|^n_{p\rm{-var},[1,2]}\right). \end{eqnarray*} Therefore, taking into account formula \eqref{kappa} defining $\kappa$, we easily derive \eqref{kappa3}. (iii) Recall that in this section we consider equation \eqref{fSDE}, hence $\omega$ is a realization of a fractional Brownian motion $B^H(t,\omega)$. Observe that for $\nu=1/p<H$ and arbitrary $t>0$, $|\!|\!| \omega |\!|\!|_{p{\rm -var},[0,t]}\leq t^{\nu} |\!|\!| \omega |\!|\!|_{\nu{\rm -Hol},[0,t]}$. Fix $q_0 \in \mathbb{N}$ with $q_0\geq \max\{\frac{2}{H-\nu}, 2p+2\}$. Applying \cite[Corollary A2]{friz} with $\alpha=\nu+\frac{1}{q_0}$ and using \cite[Remark 1.2.2, p. 7]{mishura}, we get \begin{eqnarray*} E |\!|\!| B^H(\cdot,\omega) |\!|\!|^{q_0}_{\nu{\rm -Hol},[0,1]} &\leq &\left( \frac{32(\nu+\frac{2}{q_0})}{\nu}\right)^{q_0} \int_0^{1}\int_0^1\frac{E\|B^H(u,\omega)-B^H(v,\omega)\|^{q_0}}{|u-v|^{\nu q_0+2}} \,du\,dv\\ &\leq & \left( \frac{32(\nu+\frac{2}{q_0})}{\nu}\right)^{q_0} \int_0^{1}\int_0^1 \frac{2^{q_0/2}\Gamma(\frac{q_0+1}{2})}{\sqrt{\pi}}|u-v|^{(H-\nu-2/q_0)q_0} \,du\,dv\\ &\leq & \left(32\sqrt{2(q_0+1)}\right)^{q_0} \frac{2}{[(H-\nu)q_0-1](H-\nu)q_0}\\ &\leq & \left(32\sqrt{2(q_0+1)}\right)^{q_0}, \end{eqnarray*} in which $\Gamma$ denotes the Gamma function. This implies \begin{equation}\label{beta1} \left(E |\!|\!| B^H(\cdot,\omega) |\!|\!|^{q_0}_{p{\rm -var},[0,1]}\right)^{\frac{1}{q_0}}\leq 32\sqrt{2(q_0+1)}=: \beta, \end{equation} and since $q_0 \geq 2p+2$ we conclude that \begin{equation}\label{Ekappa} E\,\kappa(1,\omega) \leq \max\Big\{\frac{1}{\delta^{p-1}} ,4K G \Big\} (\beta+\beta^p+\beta^2+ \beta^{p+1})<\infty. \end{equation} \end{proof}
Before stating the main result, we need the following auxiliary results (the technical proofs are provided in the Appendix). \begin{lemma}[Gronwall-type lemma] \label{lem2} Assume that $z(\cdot), \alpha(\cdot): [a,b] \to \mathbb{R}^+$ satisfy \begin{equation}\label{gronwall1} z(t) \leq z_0 + \int_a^t \alpha(s)ds + \int_a^t \eta z(s) ds,\quad \forall t\in [a,b], \end{equation} for some $z_0, \eta>0$.
Then \begin{equation}\label{gronwall2} z(t) \leq z_0 e^{\eta (t-a)} + \int_a^t \alpha(s) e^{\eta(t-s)}ds,\quad \forall t\in [a,b]. \end{equation} \end{lemma} \begin{lemma}\label{temperedlemma} Consider the random variable \begin{equation}\label{xi} \xi(\omega):= 1+ \sum_{k=1}^{\infty} \exp \Big\{\Big(-h+c \frac{1}{k}\sum_{i=0}^{k-1}\kappa(1,\theta_{-i}\omega) \Big)k\Big\}, \end{equation} where $h$, $c$ are given positive numbers and $\kappa$ is defined by \eqref{kappa}. Then there exists $\varepsilon>0$ such that if $c<\varepsilon$, then $\xi(\omega)$ is tempered. \end{lemma} Given the universe $\mathcal{D}$ of tempered random sets with property \eqref{tempered}, our second main result is formulated as follows. \begin{theorem}\label{attractor} Assume that $h_A > c_f$. Then there exists an $\varepsilon >0$ such that, under the condition $\|C\| < \varepsilon$, $\varphi$ possesses a random pullback attractor consisting of a single random point $a(\omega)$ in the universe $\mathcal{D}$ of tempered random sets. Moreover, every tempered random set converges to the random attractor in the pullback sense at an exponential rate. \end{theorem} \begin{proof} We first summarize the steps of the proof. In {\bf Step 1} we prove \eqref{xest}, which yields \eqref{xest1} in the forward direction and \eqref{xdiscrete} in the pullback direction, by choosing $\|C\|<\varepsilon$ such that \eqref{attractorcriterion} is satisfied. As a result, the system admits an absorbing set which is a random ball whose radius is described in \eqref{b}. The existence of the random attractor $\mathcal{A}$ then follows. In {\bf Step 2}, we prove that any two points $a_1,a_2$ in the attractor $\mathcal{A}(\omega)$ can be pulled back from the fiber $\omega$ to the fiber $\theta_{-t^*} \omega$, so that the difference in the fiber $\omega$ of the two solutions starting from the fiber $\theta_{-t^*} \omega$ can be estimated by \eqref{y*}.
Finally, using \eqref{btemper}, we conclude that $a_1(\omega) = a_2(\omega)$ almost surely, which proves that $\mathcal{A}$ is a single random point. \\ {\bf Step 1.} Fix a $\delta>0$ which will be specified later. We first show that there exists an absorbing set for system \eqref{fSDE}. Using \eqref{AutoLinear} and the variation of parameters method as in \eqref{u}, one derives from \eqref{fSDE} the integral equation \begin{equation*} x(t,\omega,x_0) = \Phi(t,\omega)x_0 + \int_0^t \Phi(t-s,\theta_s \omega) f(x(s,\omega,x_0))ds, \end{equation*} where $\Phi$ is defined in Corollary \ref{Phi}. Hence it follows from \eqref{phiest} and \eqref{kappa2} that for any $t \in [0,1]$ \allowdisplaybreaks \begin{eqnarray*} &&\|x(t,\omega,x_0)\| \\ &\leq& \| \Phi(t,\omega)x_0 \|+ \int_0^t \|\Phi(t-s,\theta_s \omega)\| \Big( c_f \|x(s,\omega,x_0)\| + \|f(0)\| \Big)ds \\ &\leq& \exp \Big\{ -h_A t + \delta + \max\{\|C\|, \|C\|^{p} \} \kappa (t,\omega) \Big\}\|x_0\| \\ && + \int_0^t \exp \Big\{ -h_A (t-s) + \delta+\max\{\|C\|, \|C\|^{p} \}\kappa (t-s,\theta_s \omega) \Big\} \Big( c_f \|x(s,\omega,x_0)\| + \|f(0)\| \Big)ds \\ &\leq& \exp \Big\{ -h_A t +\delta + \max\{\|C\|, \|C\|^{p} \}\kappa (t,\omega) \Big\} \|x_0\| \\ && + \int_0^t \exp \Big\{ -h_A (t-s) + \delta+ \max\{\|C\|, \|C\|^{p} \} \big[\kappa (t,\omega) -\kappa(s,\omega)\big] \Big\}\Big( c_f \|x(s,\omega,x_0)\| + \|f(0)\| \Big)ds. \end{eqnarray*} Assign $z(t) := \|x(t,\omega,x_0)\| \exp \Big\{ h_A t - \max\{\|C\|, \|C\|^{p} \} \kappa(t,\omega)\Big\}$; then for any $t \in[0,1]$ \begin{eqnarray*} z(t) &\leq& \|x_0\| e^{\delta} + \|f(0)\| e^{\delta} \int_0^t e^{h_A s - \max\{\|C\|, \|C\|^{p} \} \kappa(s,\omega)}ds + \int_0^t c_f e^{\delta} z(s) ds, \end{eqnarray*} which has the form of \eqref{gronwall1}.
By applying the Gronwall-type Lemma \ref{lem2}, we obtain \begin{eqnarray*} z(t) &\leq& \|x_0\| e^{\delta} \exp \Big \{ c_fe^{\delta}t \Big\} + \|f(0)\|e^{\delta} \int_0^t \exp \Big \{ c_f e^{\delta} (t-s) +h_A s - \max\{\|C\|, \|C\|^{p} \} \kappa(s,\omega)\Big\}ds \end{eqnarray*} for all $t\in [0,1]$. It follows that for any $t \in [0,1]$ \begin{eqnarray*} &&\|x(t,\omega,x_0)\| \\ &\leq& \|x_0\| \exp \Big \{ -h_A t+ \delta + \max\{\|C\|, \|C\|^{p} \} \kappa(t,\omega) +c_fe^{\delta}t \Big \} \notag \\ &&+ \|f(0)\|e^{\delta} \int_0^t \exp \Big \{ c_f e^{\delta} (t-s) - h_A (t-s)+ \max\{\|C\|, \|C\|^{p} \} (\kappa(t,\omega) -\kappa(s,\omega))\Big\}ds \notag\\ &\leq& \|x_0\| \exp \Big \{ -\big(h_A - c_fe^{\delta} \big) t + \delta+ \max\{\|C\|, \|C\|^{p} \} \kappa(t,\omega) \Big \}\notag \\ &&+ \|f(0)\|e^{\delta} \int_0^t \exp \Big \{ -(h_A -c_f e^{\delta}) (t-s) + \max\{\|C\|, \|C\|^{p} \}(\kappa(t,\omega) -\kappa(s,\omega))\Big\}ds. \end{eqnarray*} Since $h_A > c_f$, there exists $\delta >0$ such that \begin{eqnarray}\label{h3} h := h_A - c_fe^{\delta} - \delta> 0.
\end{eqnarray} Then for all $t\in [0,1]$ \allowdisplaybreaks \begin{eqnarray}\label{xest} &&\|x(t,\omega,x_0)\| \notag \\ &\leq& \|x_0\| \exp \Big \{ -(h+\delta) t+ \delta + \max\{\|C\|, \|C\|^{p} \}\kappa(t,\omega) \Big \}\notag \\ &&+ \|f(0)\|e^{\delta} \int_0^t \exp \Big \{ -(h+\delta) (t-s)+ \max\{\|C\|, \|C\|^{p} \}(\kappa(t,\omega) -\kappa(s,\omega))\Big\}ds\notag\\ &\leq& \|x_0\| \exp \Big \{ -(h+\delta) t + \delta + \max\{\|C\|, \|C\|^{p} \} \kappa(1,\omega) \Big \}\notag\\ &&+ \|f(0)\|e^{\delta} \int_0^t \exp \Big \{ -(h+\delta) (t-s) + \max\{\|C\|, \|C\|^{p} \}\kappa(1,\omega)\Big\}ds\notag\\ &\leq& \|x_0\| \exp \Big \{ -(h+\delta) t + \delta+ \max\{\|C\|, \|C\|^{p} \} \kappa(1,\omega) \Big \}+ \frac{\|f(0)\|}{h+\delta}\exp \Big\{\delta+\max\{\|C\|, \\|C\|^{p} \}\kappa(1,\omega) \Big\},\notag \\ \end{eqnarray} since $\kappa$ is an increasing function of $t$. In particular, \begin{eqnarray*} \|x(1,\omega,x_0)\| &\leq& \|x_0\| \exp \Big \{ -h + \max\{\|C\|, \|C\|^{p} \}\kappa(1,\omega) \Big \} \\ &&+ \frac{\|f(0)\|}{h+\delta}\exp \Big\{\delta + \max\{\|C\|, \|C\|^{p} \}\kappa(1,\omega)\Big\} . \end{eqnarray*} Assign \begin{eqnarray*} \alpha(\omega) &:=& \exp \Big\{-h + \max\{\|C\|, \|C\|^{p} \} \kappa(1,\omega) \Big\},\\ \beta(\omega) &:=&\frac{\|f(0)\|}{h+\delta}\exp \Big\{\delta + \max\{\|C\|, \|C\|^{p} \} \kappa(1,\omega)\Big\}.
\end{eqnarray*} By induction one can show that for any $n \geq 1$ \allowdisplaybreaks \begin{eqnarray}\label{xest1} &&\|x(n,\omega,x_0)\| \notag\\ &\leq& \|x(n-1,\omega,x_0)\| \alpha(\theta_{n-1}\omega) + \beta(\theta_{n-1} \omega) \notag\\ &\leq& \ldots \notag\\ &\leq& \|x_0\| \prod_{k=0}^{n-1} \alpha(\theta_{k} \omega) + \sum_{k=0}^{n-1} \beta (\theta_{k} \omega)\prod_{i=k+1}^{n-1} \alpha(\theta_{i } \omega)\notag\\ &\leq& \|x_0\| \exp \Big\{\Big(-h+ \max\{\|C\|, \|C\|^{p} \} \frac{1}{n}\sum_{k=0}^{n-1}\kappa(1,\theta_{k}\omega) \Big)n \Big\}\notag\\ &&+ \sum_{k=0}^{n-1}\frac{\|f(0)\|}{h+\delta} e^{h+\delta} \exp \Big\{\Big(-h + \max\{\|C\|, \|C\|^{p} \} \frac{1}{n-k}\sum_{i=k}^{n-1}\kappa(1,\theta_{i }\omega)\Big)(n-k)\Big\}. \end{eqnarray} Using \eqref{xest} and \eqref{xest1}, we have for $t\in (n, n+1]$ \begin{eqnarray}\label{xest2} &&\|x(t,\omega,x_0)\|\notag\\ &\leq& \|x(n,\omega,x_0) \|\exp\Big\{-(h+\delta)(t-n) +\delta+\max\{\|C\|, \|C\|^{p} \}\kappa(1,\theta_{n}\omega) \Big\}\notag \\ &&+ \frac{\|f(0)\|}{h+\delta}\exp \big\{\delta +\max\{\|C\|, \|C\|^{p} \}\kappa(1,\theta_{n}\omega) \big\}\notag\\ &\leq & \|x_0\|e^{h+\delta}\exp \Big\{ \Big(-h +\max\{\|C\|, \|C\|^{p} \}\frac{1}{n+1}\sum_{k=0}^n\kappa(1,\theta_{k}\omega) \Big)(n+1)\Big\}\notag\\ &&+\sum_{k=0}^n\frac{\|f(0)\|e^{2(h+\delta)}}{h+\delta}\exp\Big\{ \Big(-h+\max\{\|C\|, \|C\|^{p} \}\frac{1}{n-k+1}\sum_{i=k}^n \kappa(1,\theta_{i}\omega)\Big)(n-k+1)\Big\}.\notag \end{eqnarray} By computation using \eqref{kappa3} we obtain \allowdisplaybreaks \begin{eqnarray}\label{xcont1} &&\|x(t,\theta_{-t}\omega,x_0)\| \notag\\ &\leq& \|x_0\|e^{2h+\delta}\exp\Big\{ \Big(-h +2^{p+1}\max\{\|C\|, \|C\|^{p} \} \frac{1}{n+2}\sum_{k=0}^{n+1}\kappa(1,\theta_{-k}\omega) \Big)(n+2)\Big\}\notag\\
&&+\sum_{k=1}^{n+1}\frac{\|f(0)\|e^{3h+2\delta}}{h+\delta}\exp\Big\{ \Big(-h +2^{p+1}\max\{\|C\|, \|C\|^{p} \}\frac{1}{k+1}\sum_{i=0}^k \kappa(1,\theta_{-i}\omega)\Big)(k+1)\Big\}.\notag\\ \end{eqnarray} Then for a fixed random set $\hat{D}(\omega) \in \mathcal{D}$ with the corresponding ball $B(0,\rho(\omega))$ satisfying \eqref{tempered}, and for any random point $x_0(\theta_{-t}\omega) \in \hat{D}(\theta_{-t}\omega)$, we have \allowdisplaybreaks \begin{eqnarray} && \|x(t,\theta_{-t}\omega,x_0(\theta_{-t}\omega))\| \notag\\ &\leq& \|x_0(\theta_{-t}\omega)\| e^{2h+\delta}\exp\Big\{ \Big(-h+2^{p+1}\max\{\|C\|, \|C\|^{p} \}\frac{1}{n+2}\sum_{k=0}^{n+1}\kappa(1,\theta_{-k}\omega) \Big)(n+2)\Big\} \notag\\ &&+ \frac{\|f(0)\|e^{3h+2\delta}}{h+\delta} \sum_{k=1}^{n+1} \exp \Big\{\Big(-h +2^{p+1} \max\{\|C\|, \|C\|^{p} \} \frac{1}{k+1}\sum_{i=0}^{k}\kappa(1,\theta_{-i}\omega)\Big)(k+1)\Big\}\notag\\ &\leq& \rho(\theta_{-t}\omega) e^{2h+\delta} \exp\Big\{ \Big(-h+2^{p+1}\max\{\|C\|, \|C\|^{p} \}\frac{1}{n+2}\sum_{k=0}^{n+1}\kappa(1,\theta_{-k}\omega) \Big)(n+2)\Big\} + b(\omega),\notag \end{eqnarray} where \begin{eqnarray}\label{b} b(\omega):= 1+ \frac{\|f(0)\|e^{3h+2\delta}}{h+\delta} \sum_{k=1}^{\infty} \exp \Big\{\Big(-h+ 2^{p+1}\max\{\|C\|, \|C\|^{p} \} \frac{1}{k}\sum_{i=0}^{k-1}\kappa(1,\theta_{-i}\omega) \Big)k\Big\}. \end{eqnarray} Now we choose $\delta$ small enough such that \eqref{h3} holds, and $C$ satisfying \begin{equation}\label{attractorcriterion} h = h_A - c_fe^{\delta} -\delta > 2^{p+1}\max\{\|C\|, \|C\|^{p} \} \max\Big\{\frac{1}{\delta^{p-1}} ,4K G \Big\} (\beta+\beta^p+\beta^2+ \beta^{p+1}), \end{equation} and set $\lambda := h-2^{p+1}\max\{\|C\|, \|C\|^{p} \} E\, \kappa(1,\cdot)$.
There exists $n_0=n_0(\omega)$ such that $$\exp\Big\{ \Big(-h +2^{p+1}\max\{\|C\|, \|C\|^{p} \} \frac{1}{n+2}\sum_{k=0}^{n+1}\kappa(1,\theta_{-k}\omega) \Big)(n+2)\Big\}\leq e^{\frac{-\lambda (n+2)}{2}}$$ for all $n\geq n_0$, and $\rho(\theta_{-t}\omega) \leq e^{\frac{\lambda t}{4}}$ for all $t\geq n_0$, due to \eqref{tempered}. It follows that \begin{eqnarray}\label{xdiscrete} \|x(t,\theta_{-t}\omega,x_0(\theta_{-t}\omega))\|&\leq& 2 b(\omega) \end{eqnarray} for $n$ large enough, uniformly in random points $x_0(\omega) \in \hat{D}(\omega)$. This proves \eqref{absorb}, and there exists a compact absorbing set $\mathcal{B}(\omega) = \overline{B}(0,2b(\omega))$ for system \eqref{fSDE}. Due to Lemma \ref{temperedlemma}, $b(\omega)$ is tempered when $\|C\|$ is small enough, thus $\mathcal{B} \in \mathcal{D}$; this proves the existence of a random attractor $\mathcal{A}(\omega)$ of the form \eqref{at} for system \eqref{fSDE}. {\bf Step 2.} Assume that there exist two different points $a_1(\omega), a_2(\omega) \in \mathcal{A}(\omega)$. Fix $t^* \in [m, m+1]$, put $\omega^*=\theta_{-t^*}\omega$ and consider the equation \begin{equation}\label{equ.star} dx(t) = [Ax(t) + f(x(t))]dt + Cx(t)d\omega^*(t). \end{equation} Note that \eqref{omegagrowth} holds for $\omega^*$. By the invariance principle there exist two different points $b_1(\omega^*), b_2(\omega^*) \in \mathcal{A}(\omega^*)$ such that \[ a_i(\omega) = x(t^*,\omega^*,b_i), \quad i = 1,2.
\] Put $y(t,\omega^*):= x(t,\omega^*,b_1)- x(t,\omega^*,b_2)$; then $y(t^*,\omega^*) =a_1(\omega)- a_2(\omega)$ and we have \begin{eqnarray*} dy(t,\omega^*)&=& [Ay(t,\omega^*) + F(t,y(t,\omega^*))]dt + Cy(t,\omega^*)d\omega^*(t), \end{eqnarray*} where $F(t,y) = f(y + u(t)) - f(u(t))$ with $u(t)=x(t,\omega^*,b_2)$; $F$ also satisfies the global linear growth condition \eqref{lipF} with coefficient $c_f$, as well as $F(t,0) \equiv 0$.\\ Now, repeating the calculation in Theorem \ref{stabfSDElin} with $\omega$ replaced by $\omega^*$, we obtain \begin{eqnarray*} \frac{1}{t^*}\log \| y(t^*,\omega^*)\|&\leq & \frac{1}{t^*}\log \| y(0,\omega^*)\| - (h_A-c_f) +\frac{K\|C\|}{m}\sum_{k=0}^m |\!|\!| \omega^* |\!|\!|_{p{\rm -var},\Delta_k}\\ &&+\frac{4K\|C\|G}{m} \Big(\sum_{k=0}^m |\!|\!| \omega^* |\!|\!|_{p{\rm -var},\Delta_k} + \sum_{k=0}^m |\!|\!| \omega^* |\!|\!|^{2}_{p{\rm -var},\Delta_k}+\sum_{k=0}^m |\!|\!| \omega^* |\!|\!|^{p+1}_{p{\rm -var},\Delta_k}\Big), \end{eqnarray*} in which $G$ is given in \eqref{GA}. Using the fact that $\mathcal{A} \subset \mathcal{B}$, we have $\|y(0,\omega^*)\| \leq 4 b(\omega^*)$. Now, letting $\mathbb{N} \ni t^* = m \to \infty$ and using \eqref{btemper}, we obtain \begin{eqnarray}\label{y*} &&\varlimsup_{t^*\to\infty}\frac{1}{t^*}\log \| y(t^*,\omega^*)\|\notag \\ &\leq & - (h_A-c_f) +K\|C\|(1+4G) \Big(E |\!|\!| \omega |\!|\!|_{p\rm{-var},[0,1]} +E |\!|\!| \omega |\!|\!|^{2}_{p\rm{-var},[0,1]}+E |\!|\!| \omega |\!|\!|^{p+1}_{p\rm{-var},[0,1]}\Big)\notag\\ &\leq&- (h_A-c_f)+K(1+4G)\|C\| \left( \beta+\beta^{2}+\beta^{p+1}\right), \end{eqnarray} in which $\beta$ is given by \eqref{beta1}.
Hence, there exists $\varepsilon>0$ such that if we choose $\|C\|<\varepsilon$ then $y(t^*,\theta_{-t^*}\omega)$ converges to zero exponentially fast. Since $y(t^*,\theta_{-t^*}\omega) = a_1(\omega) - a_2(\omega)$ for every $t^*$, this forces $a_1(\omega) = a_2(\omega)$, which is a contradiction. This proves that $\mathcal{A}(\omega) \equiv \{a(\omega)\}$ is a single random point. Finally, similar arguments prove that $\|x(t,\theta_{-t}\omega,x_0(\theta_{-t}\omega))-a(\omega)\|$ converges to $0$ as $t\to \infty$ at an exponential rate and uniformly in random points $x_0(\omega)$ in a tempered random set $\hat{D}(\omega) \in \mathcal{D}$, which proves the last conclusion of Theorem \ref{attractor}. \end{proof} \begin{example}[Stochastic SIR model] Following \cite{caraballo2018}, consider a stochastic version of the ``susceptible-infected-recovered'' (SIR) epidemic model \begin{eqnarray}\label{stochSIR} dS_t &=& \Big[q - a S_t + b I_t - \gamma \frac{S_tI_t}{S_t + I_t +R_t}\Big] dt + \sigma_1 S_t dB^H_t \notag\\ dI_t &=& \Big[- (a+b+c) I_t + \gamma \frac{S_tI_t}{S_t+ I_t +R_t} \Big] dt + \sigma_2 I_t dB^H_t \notag\\ dR_t &=& \Big[c I_t - a R_t\Big] dt + \sigma_3 R_t dB^H_t, \end{eqnarray} where $q,a,b,c,\gamma, \sigma_1, \sigma_2, \sigma_3 \geq 0$. System \eqref{stochSIR} can be rewritten in terms of the variable $y = (S,I,R)^{\rm T} \in \ensuremath{\mathbb{R}}^3$ as \begin{eqnarray}\label{stochSIR2} d y_t &=& [A y_t + F(y_t)]dt + Cy_t dB^H_t \notag\\ &=& \left[\left(\begin{matrix} -a & b & 0\\ 0& -a-b-c & 0 \\ 0 & c & -a \end{matrix} \right)y_t + \left(\begin{matrix} q-\gamma \frac{S_tI_t}{S_t+I_t+R_t}\\ \gamma \frac{S_tI_t}{S_t+I_t+R_t} \\ 0 \end{matrix} \right)\right] dt + \left(\begin{matrix} \sigma_1 & 0 & 0\\ 0& \sigma_2 & 0 \\ 0& 0 & \sigma_3 \end{matrix} \right) y_t dB^H_t.
\end{eqnarray} It is easy to check that \[ \|F(y_1) - F(y_2) \| \leq \gamma \Big(|S_1-S_2|+|I_1 - I_2| + |R_1-R_2|\Big) \leq \gamma\sqrt{3} \|y_1 - y_2\|,\quad \forall y_1, y_2 \in \ensuremath{\mathbb{R}}^3_+, \] hence $F$ is globally Lipschitz continuous. The existence, uniqueness and positivity of the solution of \eqref{stochSIR} are investigated in \cite{caraballo2018} using fractional calculus for the Young integral \cite{zahle, zahle2}. \\ To study the asymptotic behavior of system \eqref{stochSIR}, observe from \cite{caraballo2018} that $A$ is diagonalizable and can be written in the form \[ A = P D P^{-1}, \quad D = \left(\begin{matrix} -a & 0 & 0\\ 0& -a & 0 \\ 0 & 0 & -a-b-c \end{matrix} \right), \quad P = \left(\begin{matrix} 1 & 0 & \frac{b}{b+c}\\ 0& 0 & -1 \\ 0 & 1 & \frac{c}{b+c} \end{matrix} \right), \quad P^{-1} = \left( \begin{matrix} 1& \frac{b}{b+c}&0 \\ 0& \frac{c}{b+c} & 1 \\ 0&-1&0 \end{matrix} \right). \] Therefore, setting $x := P^{-1} y$ and applying integration by parts for the Young system, we obtain the equation for $x$: \begin{eqnarray}\label{stochSIR3} dx_t &=& \Big[P^{-1} A P x_t + P^{-1}F(Px_t)\Big] dt + P^{-1}CPx_t dB^H_t \notag\\ &=& [D x_t + F_1(x_t)]dt + P^{-1}CP x_t dB^H_t, \end{eqnarray} which has the form of \eqref{fSDE} with \[ \langle x, Dx \rangle \leq - a \|x\|^2,\quad \|F_1(x_1) - F_1(x_2)\| \leq \gamma \sqrt{3} \|P\| \|P^{-1}\| \|x_1-x_2\| \leq 4\sqrt{3}\gamma \|x_1-x_2\|, \quad \forall x_1,x_2\in \ensuremath{\mathbb{R}}^3. \] We are now in a position to apply Theorem \ref{attractor} provided that condition \eqref{attractorcriterion} is satisfied, i.e.
there exists $\delta>0$ such that $a -4\sqrt{3} \gamma e^\delta - \delta > 0$, and $\sigma_{\rm max}:=\max \{\sigma_1, \sigma_2,\sigma_3\} \geq 0$ is small enough such that \begin{eqnarray} a -4\sqrt{3} \gamma e^\delta - \delta \geq 2^{p+1} \max\{4 \sigma_{\rm max}, (4 \sigma_{\rm max})^{p}\} \max\{\frac{1}{\delta^{p-1}} ,4K G \} (\beta+\beta^p+\beta^2+ \beta^{p+1}). \end{eqnarray} Under this condition, there exists a one-point pullback attractor for the transformed system \eqref{stochSIR3}, and thus for the original system \eqref{stochSIR} after the transformation $y = Px$. \end{example} \section{Appendix} \begin{proof}[Proof of Lemma \ref{lem2}] From \eqref{gronwall1} it follows that \begin{eqnarray*} d \Big(e^{-\eta t} \int_a^t z(s) ds\Big)& = &e^{-\eta t} \Big(-\eta \int_a^t z(s) ds + z(t)\Big)\leq e^{-\eta t} \Big(z_0+ \int_a^t \alpha(s)ds\Big),\quad \forall t\in [a,b]. \end{eqnarray*} As a result \[ \int_a^t z(s) ds \leq \int_a^t e^{\eta (t-s)} \Big(z_0+ \int_a^s \alpha(u)du\Big)ds.
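As a sanity check, the diagonalization $A = PDP^{-1}$ used above can be verified numerically; the parameter values below are hypothetical and chosen only for illustration (the drift part of $A$ does not involve $q$, $\gamma$ or the $\sigma_i$):

```python
import numpy as np

# hypothetical parameter values, for illustration only
a, b, c = 1.0, 2.0, 3.0

A = np.array([[-a,        b, 0.0],
              [0.0, -a-b-c, 0.0],
              [0.0,      c,  -a]])
D = np.diag([-a, -a, -a-b-c])
P = np.array([[1.0, 0.0,  b/(b+c)],
              [0.0, 0.0, -1.0],
              [0.0, 1.0,  c/(b+c)]])
Pinv = np.array([[1.0, b/(b+c), 0.0],
                 [0.0, c/(b+c), 1.0],
                 [0.0,    -1.0, 0.0]])

# Pinv really is the inverse of P, and A = P D P^{-1}
assert np.allclose(P @ Pinv, np.eye(3))
assert np.allclose(A, P @ D @ Pinv)
```

The same check with symbolic $a,b,c$ follows from the column computation $AP = PD$, the columns of $P$ being eigenvectors of $A$ for the eigenvalues $-a$, $-a$ and $-a-b-c$.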
\] Hence, combining with \eqref{gronwall1} and using integration by parts, one gets \begin{eqnarray*} z(t) &\leq& z_0 + \int_a^t \alpha(s)ds + \eta \int_a^t e^{\eta (t-s)} \Big(z_0+ \int_a^s \alpha(u)du\Big)ds \\ &\leq & z_0 e^{\eta (t-a)} + \int_a^t \alpha(s)ds - e^{\eta t} \int_a^t \Big(\int_a^s \alpha(u)du\Big) d (e^{-\eta s}) \\ &\leq & z_0 e^{\eta (t-a)} + \int_a^t e^{\eta(t-s)} \alpha(s)ds, \end{eqnarray*} which proves \eqref{gronwall2}.\\ \end{proof} \begin{proof}[Proof of Lemma \ref{tempered}] Firstly, since the dynamical system $\theta$ is ergodic on $(\Omega,\mathcal{F},\mathbb{P})$, for almost all $\omega \in \Omega$ \begin{eqnarray}\label{lambda} \lim \limits_{k \to \infty} \left(-h+c \frac{1}{k}\sum_{i=0}^{k-1}\kappa(1,\theta_{-i}\omega) \right)&=& -h+c E\ \kappa(1,\cdot) \notag\\ &\leq&-h+c \max\{\frac{1}{\delta^{p-1}} ,4K G \} (\beta+\beta^p+\beta^2+ \beta^{p+1}) \end{eqnarray} due to \eqref{Ekappa}. Set $-\lambda := -h+c E\ \kappa(1,\cdot)$. Take and fix a small positive number $\varepsilon$ such that \begin{equation}\label{criterion3} h >\varepsilon \max\{\frac{1}{\delta^{p-1}} ,4K G \} (\beta+\beta^p+\beta^2+ \beta^{p+1}). \end{equation} Then for any $0<c<\varepsilon$ we have $\lim \limits_{k \to \infty} \left(-h+c \frac{1}{k}\sum_{i=0}^{k-1}\kappa(1,\theta_{-i}\omega) \right)= -\lambda < 0$. Consequently, the series \[ \sum_{k=1}^{\infty} \exp \Big\{\Big(-h+c \frac{1}{k}\sum_{i=0}^{k-1}\kappa(1,\theta_{-i}\omega) \Big)k\Big\} \] converges, i.e.\ $\xi(\omega)$ is finite for almost all $\omega\in \Omega$. Next we prove that $\xi(\omega)$ is tempered if $c$ is small enough. Using \eqref{kappa3}, it suffices to prove that \begin{equation}\label{btemper} \lim_{t\to\pm\infty\atop t\in\mathbb{Z}}\frac{1}{t}\log \Big[\xi(\theta_{t}\omega)\Big] = 0 \end{equation} whenever $c<\varepsilon$.
Indeed, replacing $\omega$ by $\theta_{-m}\omega$ where $m\in\mathbb{Z}^+$ in \eqref{xi} we get \begin{eqnarray*} &&\xi(\theta_{-m}\omega) \\ &=& 1+ \sum_{k=1}^{\infty} \exp \Big\{-h k+c \sum_{i=0}^{k-1}\kappa(1,\theta_{-(i+m)}\omega) \Big\}\\ &=& 1+\sum_{k=1}^{\infty} \exp \Big\{-hk+c \sum_{i=m}^{k+m-1}\kappa(1,\theta_{-i}\omega) \Big\}\\ &= & 1+ \exp\left\{\left(h- c \frac{1}{m}\sum_{i=0}^{m-1}\kappa(1,\theta_{-i}\omega)\right)m \right\} \sum_{k=1}^{\infty} \exp \left\{\left(-h+c\frac{1}{k+m} \sum_{i=0}^{k+m-1}\kappa(1,\theta_{-i}\omega)\right) (k+m)\right\}. \end{eqnarray*} By \eqref{lambda}, for each $N\in \ensuremath{\mathbb{N}}^*$ with $\frac{1}{N}<\lambda$, there exists $n(\omega,N)$ such that for all $n>n(\omega,N)$ $$ -\lambda-\frac{1}{N}\leq -h +c \frac{1}{n}\sum_{k=0}^{n-1}\kappa(1,\theta_{-k}\omega) \leq -\lambda +\frac{1}{N}, $$ and $$ -\lambda-\frac{1}{N}\leq -h+c \frac{1}{n}\sum_{k=0}^{n-1}\kappa(1,\theta_{k}\omega) \leq -\lambda +\frac{1}{N}. $$ Therefore, with $N$ and $\omega$ fixed, if $m>n(\omega,N)$ we have \begin{eqnarray*} 1&\leq& \xi(\theta_{-m}\omega)\leq 1+ e^{(\lambda+\frac{1}{N}) m} \sum_{k=1}^{\infty} \exp\{(-\lambda +1/N)(k+m)\}\leq (D+1) e^{2m/N}, \end{eqnarray*} where $D= \sum_{k=1}^{\infty} \exp\{(-\lambda +\frac{1}{N})k\} <\infty$ and $D$ is independent of $m$. Hence, it follows that \[ 0\leq \varlimsup_{m\to+\infty\atop m\in \mathbb{Z}}\frac{1}{m}\log \Big[\xi(\theta_{-m}\omega)\Big] \leq \lim_{m\to\infty}\frac{1}{m}\log\big[(D+1)e^{2m/N}\big] = \frac{2}{N}, \] for any $N$ large enough, which proves \eqref{btemper} for the case $t\to -\infty$.
Similarly, replacing $\omega$ by $\theta_{m}\omega$ where $m\in\mathbb{Z}^+$ in \eqref{xi} we obtain \allowdisplaybreaks \begin{eqnarray*} \xi(\theta_{m}\omega) &=& 1+ \sum_{k=1}^{\infty} \exp \Big\{-hk+c \sum_{i=0}^{k-1}\kappa(1,\theta_{-i+m}\omega) \Big\}\\ &=& 1+ \sum_{k=1}^{m} \exp \Big\{-hk+c \sum_{i=0}^{k-1}\kappa(1,\theta_{-i+m}\omega) \Big\}+ \sum_{k=m+1}^{\infty} \exp \Big\{-hk+c \sum_{i=0}^{k-1}\kappa(1,\theta_{-i+m}\omega) \Big\}, \end{eqnarray*} in which the second term is \allowdisplaybreaks \begin{eqnarray*} &&\sum_{k=1}^{m} \exp \Big\{-hk+c \sum_{i=0}^{k-1}\kappa(1,\theta_{-i+m}\omega) \Big\}\\ &=& \sum_{k=1}^{m} \exp \Big\{-hk+c \sum_{i=m-k+1}^{m}\kappa(1,\theta_{i}\omega) \Big\}\\ &=&\exp \Big\{-h(m+1)+c \sum_{i=0}^{m}\kappa(1,\theta_{i}\omega) \Big\} \sum_{k=1}^{m} \exp \Big\{hk-c \sum_{i=0}^{k-1}\kappa(1,\theta_{i}\omega) \Big\}\\ &=&\exp \Big\{-h(m+1)+c \sum_{i=0}^{m}\kappa(1,\theta_{i}\omega) \Big\}\\ &&\times \left( \sum_{k=1}^{n(\omega,N)} \exp \Big\{hk -c \sum_{i=0}^{k-1}\kappa(1,\theta_{i}\omega) \Big\}+ \sum_{k=n(\omega,N)+1}^{m} \exp \Big\{hk-c \sum_{i=0}^{k-1}\kappa(1,\theta_{i}\omega) \Big\}\right)\\ &\leq& \exp\{(-\lambda+\frac{1}{N})(m+1)\}\\ &&\times \left( \sum_{k=1}^{n(\omega,N)} \exp \Big\{hk- c \sum_{i=0}^{k-1}\kappa(1,\theta_{i}\omega) \Big\} + \sum_{k=n(\omega,N)+1}^{m} \exp\{(\lambda+\frac{1}{N})k\} \right) \\ &\leq & \exp\{(-\lambda+\frac{1}{N})(m+1)\} \left( \sum_{k=1}^{n(\omega,N)} \exp \Big\{hk- c \sum_{i=0}^{k-1}\kappa(1,\theta_{i}\omega) \Big\}+ \frac{ \exp\{(\lambda+\frac{1}{N})(m+1)\} }{e^{\lambda+\frac{1}{N}}-1}\right) \\ &\leq& e^{\frac{2}{N}(m+1)}D(\omega), \end{eqnarray*} where \[ D(\omega) = \sum_{k=1}^{n(\omega,N)} \exp \Big\{hk-c \sum_{i=0}^{k-1}\kappa(1,\theta_{i}\omega) \Big\} + \frac{1}{e^{\lambda+\frac{1}{N}}-1} \] and $m>n(\omega,N)$.\\ On the other hand, the third term is \allowdisplaybreaks \begin{eqnarray*} &&\sum_{k=m+1}^{\infty} \exp \Big\{-hk+c \sum_{i=0}^{k-1}\kappa(1,\theta_{-i+m}\omega) \Big\}\\ &&=\sum_{k=m+1}^{\infty} \exp \Big\{-hk+ c \sum_{i=0}^{m-1}\kappa(1,\theta_{-i+m}\omega)+ c \sum_{i=m}^{k-1}\kappa(1,\theta_{-i+m}\omega) \Big\}\\ &&= \exp \Big\{-hm+c \sum_{i=1}^{m}\kappa(1,\theta_{i}\omega)\Big\} \sum_{k=1}^{\infty} \exp \Big\{-hk+c \sum_{i=0}^{k-1}\kappa(1,\theta_{-i}\omega) \Big\}\\ &&\leq e^{(-\lambda+\frac{1}{N})m} \times \sum_{k=1}^{\infty} \exp \Big\{-hk+ c \sum_{i=0}^{k-1}\kappa(1,\theta_{-i}\omega) \Big\} \end{eqnarray*} when $m>n(\omega,N)$. To sum up, for $m>n(\omega,N)$ we have \begin{eqnarray*} 1&\leq& \xi(\theta_{m}\omega)\leq 1+ e^{\frac{2}{N}(m+1)}D(\omega) + e^{(-\lambda+\frac{1}{N})m} \xi(\omega)\leq e^{\frac{2}{N}(m+1)} \left( 1+ D(\omega) + \xi(\omega)\right). \end{eqnarray*} Since $D(\omega)$ and $\xi(\omega)$ are independent of $m$, we get $\varlimsup \limits_{m\to+\infty\atop m\in \mathbb{Z}} \frac{1}{m}\log \xi(\theta_m\omega)\leq \frac{2}{N}$ for any $N$ large enough, and we conclude that $\xi(\omega)$ is tempered. \end{proof} \section*{Acknowledgment} This work was partially sponsored by the Max Planck Institute for Mathematics in the Sciences (MIS-Leipzig) and also by the Vietnam National Foundation for Science and Technology Development (NAFOSTED) under grant number FWO.101.2017.01. \end{document}
\begin{document} \articletitle[ Multipartite Greenberger-Horne-Zeilinger paradoxes for continuous variables]{ Multipartite Greenberger-Horne-Zeilinger paradoxes for continuous variables} \chaptitlerunninghead{Multipartite GHZ paradoxes} \author{Serge Massar and Stefano Pironio\\ Service de Physique Th\'eorique, CP 225,\\ Universit\'e Libre de Bruxelles, 1050 Brussels, Belgium \\ } \begin{abstract} We show how to construct Greenberger-Horne-Zeilinger type paradoxes for continuous variable systems. We give two examples corresponding to 3-party and 5-party paradoxes. The paradoxes are revealed by carrying out position and momentum measurements. The structure of the quantum states which lead to these paradoxes is discussed. \end{abstract} \begin{keywords} Continuous variables, non-locality, Greenberger-Horne-Zeilinger paradox \end{keywords} When studying continuous variable systems, described by conjugate variables with commutation relation $[x,p]=i$, it is natural to inquire how non-locality can be revealed in those systems. Experimentally the operations that are easy to carry out on such systems involve linear optics, squeezing and homodyne detection. Using these operations, the states that can be prepared are Gaussian states and the measurements that can be performed are measurements of quadratures. But Gaussian states possess a Wigner function which is positive everywhere and so provide a trivial local-hidden variable model for measurements of $x$ or $p$. To exhibit non-locality in these systems, it is thus necessary to drop some of the requirements imposed by current day experimental techniques. For instance one can invoke more challenging measurements such as photon counting measurements or consider more general states that will necessitate higher order non-linear couplings to be produced.
Using these two approaches it has recently been possible to extend from discrete variables to continuous variable systems the usual non-locality tests: Bell inequalities \cite{Bana, Kuz, Chen}, Hardy's non-locality proof \cite{Hill} and the Greenberger-Horne-Zeilinger paradox \cite{C,MP, ChenGHZ}. Greenberger-Horne-Zeilinger (GHZ) paradoxes \cite{GHZ} as formulated by Mermin \cite{M} are particularly elegant and simple ways of demonstrating the non-locality of quantum systems since the argument can be carried out at the level of operators only. The existence of a generalization of the original GHZ paradox for qubits to continuous variables was first pointed out by Clifton \cite{C} and was studied in more detail in \cite{MP} and \cite{ChenGHZ}. The paradox presented in \cite{ChenGHZ} involves measurements of the parity of the number of photons, while in \cite{C} and \cite{MP}, it is associated with position and momentum variables. It is this last case that we will consider here. We shall summarize the results of \cite{MP} and show that the multipartite multidimensional GHZ paradoxes introduced in \cite{CMP} can easily be generalized to the case of continuous variables by exploiting the noncommutative geometry of the phase space. This idea is closely related to the technique used to embed finite-dimensional quantum error correcting codes in the infinite-dimensional Hilbert space of continuous variable systems \cite{Got}. Let us introduce the dimensionless variables \begin{equation} \tilde x = {x \over {\sqrt{\pi}L}} \quad \mbox{and} \quad \tilde p = {p\ L \over \sqrt{\pi}} \ , \end{equation} where $L$ is an arbitrary length scale. Consider the translation operators in phase space \begin{equation} X^\alpha = \exp ( i \alpha \tilde x) \quad \mbox{and} \quad Y^\beta = \exp ( i \beta \tilde p) \ .
\label{XY} \end{equation} These unitary operators obey the commutation relation \begin{equation}\label{commut} X^\alpha Y^\beta =e^{i\alpha \beta/\pi} Y^\beta X^\alpha \ , \end{equation} which follows from $[\tilde x,\tilde p]=i/\pi$ and the identity $e^Ae^B=e^{[A,B]}e^Be^A$ (valid if $A$ and $B$ commute with their commutator). The continuous variable GHZ paradoxes will be built out of these operators. Let us first consider the case of three spatially separated parties, A, B, C, each of which possesses one part of an entangled system described by the canonical variables $x_A,p_A, x_B, p_B, x_C$ and $p_C$. Consider the operators $X_j^{\pm \pi} $ and $Y_j^{\pm \pi}$ acting on the space of party $j$ ($j=A,B,C$). Since $\alpha\beta=\pm \pi^2$, it follows from (\ref{commut}) that these operators obey the commutation relations $X_j^{\pm \pi} Y_j^{\pm \pi}=-Y_j^{\pm \pi}X_j^{\pm \pi}$. Using these operators, let us construct the following four GHZ operators: \begin{eqnarray} \begin{array}{cccccc} V_1 &=& X_A^\pi & X_B^\pi & X_C^\pi \\ V_2 &=& X_A^{-\pi}& Y_B^{-\pi} &Y_C^{\pi} \\ V_3 &=& Y_A^{\pi} & X_B^{-\pi}& Y_C^{-\pi} \\ V_4 &=& Y_A^{-\pi} & Y_B^{\pi} & X_C^{-\pi} \\ \label{V4} \end{array}\end{eqnarray} These four operators give rise to a GHZ paradox as we now show. First note that the following two properties hold: \begin{enumerate} \item $V_1, V_2, V_3, V_4$ all commute. Thus they can be simultaneously diagonalized (in fact there exists a complete set of common eigenvectors). \item The product $V_1 V_2 V_3 V_4 = -I_{ABC} $ equals minus the identity operator. \end{enumerate} These properties are easily proven using the commutation relations $X_j^{\pm \pi} Y_j^{\pm \pi}=-Y_j^{\pm \pi}X_j^{\pm \pi}$. Any common eigenstate of $V_1, V_2, V_3, V_4$ will give rise to a GHZ paradox. Indeed suppose that the parties measure the hermitian operators $x_j$ or $p_j$, $j=A,B,C$ on this common eigenstate.
The result of the measurement associates a complex number of unit norm to either the $X_j$ or $Y_j$ unitary operators. If one of the combinations of operators that occurs in eq. (\ref{V4}) is measured, a value can be assigned to one of the operators $V_1, V_2, V_3, V_4$. Quantum mechanics imposes that this value is equal to the corresponding eigenvalue. Moreover - due to property 2 - the product of the eigenvalues is $-1$. But this is in contradiction with local hidden variable theories. Indeed in a local hidden variable theory one must assign, prior to the measurement, a complex number of unit norm to all the operators $X_j$ and $Y_j$. Then taking the product of the four c-numbers assigned simultaneously to $V_1, V_2,V_3, V_4$ yields $+1$ instead of $-1$. Remark that all other tests of non-locality for continuous variable systems \cite{Bana,Kuz,Chen,Hill,ChenGHZ} use measurements with a discrete spectrum (such as the parity of the photon number) or involving only a discrete set of outcomes (such as the probability that $x>0$ or $x<0$). In our version of the GHZ paradox for continuous variables this discrete character doesn't seem to appear at first sight. However it turns out that this is also the case, though in a subtle way, because eq. (\ref{V4}) can be viewed as an infinite set of 2 dimensional paradoxes (see \cite{MP} for more details). In \cite{CMP}, GHZ paradoxes for many parties and multi-dimensional systems were constructed. These paradoxes were built using $d$-dimensional unitary operators with commutation relations: \begin{equation}\label{discop} X Y = e^{2\pi i/d} Y X \end{equation} which is a generalization of the anticommutation relation of spin operators for two-dimensional systems.
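The operator algebra behind the three-party argument can be checked numerically in its discrete two-dimensional analogue, substituting $\sigma_x$ for $X_j^{\pm\pi}$ and $\sigma_z$ for $Y_j^{\pm\pi}$ (this substitution is ours, for illustration; in two dimensions $X^\dagger=X$ and $Y^\dagger=Y$, so the signs of the exponents drop out):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)   # stands in for X_j^{+/-pi}
sz = np.array([[1, 0], [0, -1]], dtype=complex)  # stands in for Y_j^{+/-pi}

def kron3(a, b, c):
    """Tensor product over the three parties A, B, C."""
    return np.kron(np.kron(a, b), c)

V1 = kron3(sx, sx, sx)
V2 = kron3(sx, sz, sz)
V3 = kron3(sz, sx, sz)
V4 = kron3(sz, sz, sx)

# property 1: the four GHZ operators commute pairwise
for A in (V1, V2, V3, V4):
    for B in (V1, V2, V3, V4):
        assert np.allclose(A @ B, B @ A)

# property 2: their product is minus the identity on the 8-dimensional space
assert np.allclose(V1 @ V2 @ V3 @ V4, -np.eye(8))
```

Each pair of $V$'s differs at an even number of sites, so the sitewise anticommutations contribute an even number of signs (property 1); the site-$B$ product $\sigma_x\sigma_z\sigma_x\sigma_z=-I$ supplies the overall $-1$ of property 2.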
Using $X^{\alpha}$ and $Y^{\beta}$ and choosing the coefficients $\alpha$ and $\beta$ such that $\alpha\beta=2\pi^2/d$ with $d$ integer, this commutation relation can be realized in a continuous variable system, and so all the paradoxes presented in \cite{CMP} can be rephrased with minor modifications in the context of infinite-dimensional Hilbert space. Let us for instance generalise to continuous variables the paradox for 5 parties each having a 4-dimensional system described in \cite{CMP}. We now consider the operators $X^{\pm q}$, $Y^q$ and $Y^{-3q}$ where $q=\pi/\sqrt{2}$. They obey the commutation relations $X^{\pm q}Y^{q}= e^{\pm i \pi/2}Y^{q}X^{\pm q}$ and $X^{\pm q}Y^{-3q}= e^{\pm i \pi/2}Y^{-3q}X^{\pm q}$. Consider now the six unitary operators \begin{eqnarray}\begin{array}{ccccccc} W_1 &=& X^{q}_A &X^{q}_B& X^{q}_C & X^{q}_D & X^{q}_E \\ W_2 &=& X^{-q}_A &Y^{-3q}_B & Y^{q}_C & Y^{q}_D& Y^{q}_E \\ W_3 &=& Y^{q}_A& X^{-q}_B& Y^{-3q}_C & Y^{q}_D & Y^{q}_E \\ W_4 &=& Y^{q}_A & Y^{q}_B& X^{-q}_C & Y^{-3q}_D& Y^{q}_E \\ W_5 &=& Y^{q}_A & Y^{q}_B & Y^{q}_C & X^{-q}_D & Y^{-3q}_E \\ W_6 &=& Y^{-3q}_A & Y^{q}_B &Y^{q} _C& Y^{q}_D& X^{-q}_E \end{array}\label{VV6} \end{eqnarray} One easily shows that these six unitary operators commute and that their product is minus the identity operator. Furthermore if one assigns a classical value to $x_j$ and to $p_j$ for $j=A,B,C,D,E$, then the product of the operators takes the value $+1$. Hence, using the same argument as in the three party case, we have a contradiction. There is a slight difference between the paradox (\ref{VV6}) and the 4-dimensional paradox described in \cite{CMP}. The origin of this difference is that in a $d$-dimensional Hilbert space, if unitary operators $X,Y$ obey $XY=e^{2\pi i/d}YX$, then $X^d = Y^d = I$ (up to a phase which we set to 1). In the 4-dimensional case, this implies that $X^3 = X^\dagger$ and $Y^3=Y^\dagger$.
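The discrete relation (\ref{discop}) itself is realized by the standard shift and clock matrices; a minimal numerical check for $d=4$ (our own illustration, with $Y$ taken as the inverse clock operator so that the phase appears with the sign of (\ref{discop})):

```python
import numpy as np

d = 4
omega = np.exp(2j * np.pi / d)

# shift operator: X|j> = |j+1 mod d>
X = np.roll(np.eye(d, dtype=complex), 1, axis=0)
# inverse clock operator: Y|j> = omega^{-j} |j>
Y = np.diag(omega ** (-np.arange(d)))

# Weyl commutation relation of eq. (discop): X Y = e^{2 pi i / d} Y X
assert np.allclose(X @ Y, omega * Y @ X)

# finite order: X^d = Y^d = I, whence X^{d-1} = X^dagger etc.
assert np.allclose(np.linalg.matrix_power(X, d), np.eye(d))
assert np.allclose(np.linalg.matrix_power(Y, d), np.eye(d))
```

The last two assertions verify the relation $X^d = Y^d = I$ underlying the remark that $X^3=X^\dagger$ and $Y^3=Y^\dagger$ in the 4-dimensional case.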
In the continuous case these relations no longer hold and the GHZ operators $W_i$ must be slightly modified, i.e. the operators $X^{-q}={X^q}^{\dagger}$ and $Y^{-3q}={Y^{3q}}^{\dagger}$ have to be explicitly introduced in order for the product of the $W_i$'s to give minus the identity. Note that the same remark applies to the previous paradox (\ref{V4}), where in the discrete 2-dimensional version $X^\dagger=X$ and $Y^\dagger=Y$. As we mentioned earlier the GHZ states are not Gaussian states. A detailed analysis of the common eigenstates of $V_1, V_2, V_3, V_4$ is given in \cite{MP}. Let us give an example of such an eigenstate. Define the following coherent superpositions of infinitely squeezed states: \begin{eqnarray} |{\uparrow} \rangle &=& {1 \over \sqrt{2}} \sum_{k=-\infty}^{\infty} \left (|{\tilde x = 2k}\rangle +i |{\tilde x= 2k+1}\rangle \right) \nonumber\\ |{\downarrow} \rangle &=& {1 \over \sqrt{2}} \sum_{k=-\infty}^{\infty} \left( |{\tilde x =2k} \rangle -i |{\tilde x= 2k+1}\rangle \right) \ , \end{eqnarray} where $|\tilde x \rangle=|x=\sqrt{\pi}L\tilde x\rangle$. Then a common eigenstate of the four GHZ operators $V_1,V_2,V_3,V_4$ is $$\left (|\uparrow\rangle_A|\uparrow\rangle_B|\uparrow\rangle_C - |\downarrow\rangle_A|\downarrow\rangle_B|\downarrow\rangle_C\right)/\sqrt{2}\ .$$ However as shown in \cite{MP}, for any choice of the eigenvalues of the operators $V_k$, there is an infinite family of eigenvectors, i.e.\ the eigenspace is infinitely degenerate. In summary we have shown the existence of multidimensional and multipartite GHZ paradoxes for continuous variable systems. These paradoxes involve measurements of position and momentum variables only, but the states which are measured are complex and difficult to construct experimentally. \begin{acknowledgments} We would like to thank N. Cerf for helpful discussions. We acknowledge funding by the European Union under project EQUIP (IST-FET program). S.M.
is a research associate of the Belgian National Research Foundation. \end{acknowledgments} \begin{chapthebibliography}{<widest bib entry>} \bibitem{Bana} K. Banaszek and K. W\'odkiewicz, Phys. Rev. A {\bf 58}, 4345 (1998). \bibitem{Kuz} A. Kuzmich, I. A. Walmsley and L. Mandel, Phys. Rev. Lett. {\bf 85}, 1349 (2000). \bibitem{Chen} Z. Chen, J. Pan, G. Hou and Y. Zhang, Phys. Rev. Lett. {\bf 88} 040406 (2002). \bibitem{Hill} B. Yurke, M. Hillery and D. Stoler, Phys. Rev. A {\bf 60}, 3444 (1999). \bibitem{C} R. Clifton, Phys. Lett. A {\bf 271}, 1 (2000). \bibitem{MP} S. Massar and S. Pironio, Phys. Rev. A {\bf 64} 062108 (2001). \bibitem{ChenGHZ} Z. Chen and Y. Zhang, Phys. Rev. A {\bf 65} 044102 (2001). \bibitem{GHZ} D. M. Greenberger, M. Horne, A. Zeilinger, in {\em Bell's Theorem, Quantum Theory, and Conceptions of the Universe}, M. Kafatos, ed., Kluwer, Dordrecht, The Netherlands (1989), p. 69. \bibitem{M} N. D. Mermin, Phys. Rev. Lett. {\bf 65}, 3373 (1990) and Phys. Today, 43(6), 9 (1990). \bibitem{CMP} N. Cerf, S. Massar, S. Pironio, {\em Greenberger-Horne-Zeilinger paradoxes for many qudits}, quant-ph/0107031. \bibitem{Got} D. Gottesman, A. Kitaev and J. Preskill, Phys. Rev. A {\bf 64} 012310 (2001). \end{chapthebibliography} \end{document}
\begin{document} \title{A Near-Optimal Depth-Hierarchy Theorem for Small-Depth Multilinear Circuits} \thispagestyle{empty} \begin{abstract} We study the size blow-up that is necessary to convert an algebraic circuit of product-depth $\Delta+1$ to one of product-depth $\Delta$ in the multilinear setting. We show that for every positive $\Delta = \Delta(n) = o(\log n/\log \log n),$ there is an explicit multilinear polynomial $P^{(\Delta)}$ on $n$ variables that can be computed by a multilinear formula of product-depth $\Delta+1$ and size $O(n)$, but not by any multilinear circuit of product-depth $\Delta$ and size less than $\exp(n^{\Omega(1/\Delta)})$. This result is tight up to the constant implicit in the double exponent for all $\Delta = o(\log n/\log \log n).$ This strengthens a result of Raz and Yehudayoff (Computational Complexity 2009) who prove a quasipolynomial separation for constant-depth multilinear circuits, and a result of Kayal, Nair and Saha (STACS 2016) who give an exponential separation in the case $\Delta = 1.$ Our separating examples may be viewed as algebraic analogues of variants of the Graph Reachability problem studied by Chen, Oliveira, Servedio and Tan (STOC 2016), who used them to prove lower bounds for constant-depth \emph{Boolean} circuits. \end{abstract} \section{Introduction} This paper deals with a question in the area of \emph{Algebraic Complexity,} which studies the Computational Complexity of any algorithmic task that can be cast as the problem of computing a fixed multivariate polynomial (or polynomials) $f\in \mathbb{F}[x_1,\ldots,x_N]$ on a given $a\in \mathbb{F}^N.$ Many fundamental problems such as the Determinant, Permanent, the Fast Fourier Transform and Matrix Multiplication can be captured in this general paradigm. 
The natural computational model for solving such problems is that of Algebraic Circuits (and their close relatives, Algebraic Formulas and Algebraic Branching Programs), which use the algebraic operations of the polynomial ring $\mathbb{F}[x_1,\ldots,x_N]$ to compute the given polynomial. Our main focus in this paper is on \emph{Small-depth} Algebraic circuits, which are easily defined as follows.\footnote{We actually define algebraic \emph{formulas} here, but the distinction is not too important for our results in this paper.} Recall that any multivariate polynomial $f\in \mathbb{F}[x_1,\ldots,x_N]$ can be written as a linear combination of terms which are products of variables (i.e.\ monomials); we call such an expression a $\Sigma\Pi$ circuit for $f$. In general, such an expression for $f$ could be prohibitively large. More compact representations for $f$ may be obtained if we consider representations as linear combinations of products of linear functions, which we call $\Sigma\Pi\Sigma$ circuits, and subsequent generalizations such as $\Sigma\Pi\Sigma\Pi$ circuits and so on. We consider representations of the form $\Sigma\Pi\cdots$ of depth $d$ for some $d = d(N)$, where $d$ is a slowly growing function of the number of variables $N$. We consider such a representation of a polynomial $f$ as a computation of $f$. The efficiency of such a computation is captured by the following two complexity measures: the \emph{size} of the corresponding expression, which captures the number of operations used in computing the polynomial; and the \emph{product-depth} of the circuit, which is the number of $\Pi$s in the expression for the circuit class (e.g. $1$ for $\Sigma\Pi\Sigma$ circuits, $2$ for $\Sigma\Pi\Sigma\Pi$ circuits etc.)
and which measures in some sense the inductive complexity of the corresponding computation.\footnote{It is also standard in the literature to consider the \emph{depth} of the circuit, which is the number of $\Sigma$ and $\Pi$ terms in the defining expression for the circuit class, but the product-depth is frequently nicer (invariant under simple operations such as linear transformations etc.) and is essentially $\lfloor \text{depth}/2\rfloor$. So we mostly use product-depth. We also state our results in terms of depth.} This paper is motivated by questions in a general body of results in the area that go by the name of \emph{Depth-reduction.} Informally, the question is: what is the worst case blow-up in size required to convert a circuit of product-depth $\Delta$ to one of product-depth $\Delta' < \Delta$? This is an important question, since circuits of smaller depth are frequently easier to analyze and understand, and such results allow us to transfer this understanding to more complex (i.e. higher depth) classes of circuits. Consequently, there have been many results addressing this general question for various models of Algebraic (and also Boolean) computation including Algebraic and Boolean formulas~\cite{brent, spira}, Boolean circuits~\cite{HopcroftPaulValiant}, and Algebraic circuits~\cite{VSBR,AV,Koiran,Tav13,GKKSdepth3}. Recently, Tavenas~\cite{Tav13} and Gupta, Kamath, Kayal and Saptharishi~\cite{GKKSdepth3} (building on~\cite{VSBR,AV,Koiran}) showed that a strong enough exponential lower bound for $\Sigma\Pi\Sigma$ circuits would imply a superpolynomial lower bound for general algebraic circuits. Interesting impossibility results are also known in this direction: for instance, it is known that the result of Tavenas~\cite{Tav13}, which converts a general circuit to a $\Sigma\Pi\Sigma\Pi$ circuit of subexponential size, cannot be improved in the restricted model of \emph{homogeneous}\footnote{I.e. 
a circuit where every intermediate expression computes a homogeneous polynomial.} $\Sigma\Pi\Sigma\Pi$ circuits~\cite{GKKSdepth4,FLMS,KLSS,KShom}. Here, we study a more fine-grained version of the question of depth-reduction. We ask: what is the size blow-up in converting a circuit of product-depth $\Delta+1$ to one of product-depth $\Delta$? A natural strategy to carry out such a depth-reduction for a given $(\Sigma\Pi)^{\Delta+1}\Sigma$\footnote{I.e. a $\Sigma\Pi\cdots \Sigma$ expression with exactly $(\Delta+1)$ many $\Pi$s.} expression $F$ is to take some product terms in the expression and interchange them with the inner sum terms via the distributive law. This creates a blow-up in the size of the expression that is exponential in the number of sum terms. It is not hard to show that by choosing the sum terms carefully, one can limit this blow-up to $\exp(s^{1/\Delta+o(1)})$ for constant $\Delta,$\footnote{In fact, the upper bound is of the form $\exp(s^{1/\Delta}\log s)$, and is $\exp(s^{1/\Delta+o(1)})$ as long as $\Delta = o(\log s/\log \log s).$} where $s$ is the size of the expression of depth $\Delta+1.$ Is this exponential\footnote{Strictly speaking, we get an exponential blow-up only for \emph{constant} $\Delta$. The careful reader should read ``exponential'' as ``exponential in $s^{1/\Delta}$'' for general $\Delta.$} blow-up unavoidable for any $\Delta$? The evidence we do have seems to suggest the answer is yes. For example, in the Boolean setting (where the $\Sigma$ and $\Pi$ are replaced by their Boolean counterparts $\bigvee$ and $\bigwedge$), such an exponential \emph{Depth-hierarchy theorem} is a classical result of H\r{a}stad~\cite{Hastadthesis}, with recent improvements by Rossman, Servedio, Tan and H\r{a}stad~\cite{RST15,Has16,HRSTjournal}. Even in the algebraic setting, there have been partial results in this direction~\cite{nw1997,ry09,KNS16,KumarSap16}.
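The distributive-law blow-up discussed above can be illustrated on a toy case: a product of $t$ two-term sums is a depth-2 expression of size $O(t)$, but pushing the product through the sums yields one monomial per choice of a term from each factor, i.e. $2^t$ monomials (variable names below are ours, for illustration):

```python
from itertools import product

# Depth-2 expression: a product of t sums, each with 2 terms -- size O(t).
t = 10
factors = [(f"x{2*i}", f"x{2*i+1}") for i in range(t)]

# Applying the distributive law expands the product into one monomial per
# choice of a term from each factor.
monomials = [tuple(choice) for choice in product(*factors)]

# The resulting depth-1 (Sigma-Pi) expression has 2^t monomials:
# an exponential blow-up in the number of factors.
assert len(monomials) == 2 ** t
```

Limiting the blow-up to $\exp(s^{1/\Delta+o(1)})$ then amounts to choosing carefully which sum gates to expand at each of the $\Delta$ levels.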
For \emph{homogeneous} circuits such a result was proved for $\Delta=1$ in the work of Nisan and Wigderson~\cite{nw1997}. We study this question in the \emph{multilinear} setting, where the circuits are restricted to computing \emph{multilinear} polynomials at each stage of computation (multilinear polynomials are polynomials where the degree of each variable is at most $1$). This is a fairly natural model of computation for multilinear polynomials, and has been extensively studied in the literature \cite{Raz,Raznc2nc1,ry08,ry09,RSY08,DMPY12,KNS16}. Raz and Yehudayoff~\cite{ry09} considered the problem of separating product-depth $\Delta+1$ circuits from product-depth $\Delta$ circuits in the multilinear setting and proved a \emph{superpolynomial} separation implying a superpolynomial depth-hierarchy theorem. More precisely, they showed that there are circuits of size $s$ and product-depth $\Delta+1$ such that any product-depth $\Delta$ circuit computing the same polynomial must have size at least $s^{(\log s)^{\Omega(1/\Delta)}}.$ While this result shows that some blow-up is unavoidable in the multilinear setting, this is still only a quasipolynomial separation and does not completely resolve our original question. More recently, Kayal, Nair and Saha~\cite{KNS16} resolved the question completely in the case when $\Delta = 1$ by giving an optimal exponential separation between product-depth $2$ and product-depth $1$ multilinear circuits. We extend both these results to prove a strong depth-hierarchy theorem for all small depths. The following is implied by Corollary~\ref{cor:abstract-pdepth}. 
\begin{theorem} For each $\Delta = \Delta(n) = o(\log n/\log \log n),$ there is an explicit multilinear polynomial $P^{(\Delta)}$ on $n$ variables that can be computed by a multilinear formula of product-depth $(\Delta+1)$ and linear size, but not by any multilinear circuit of product-depth $\Delta$ and size less than $\exp(n^{\Omega(1/\Delta)}).$ \end{theorem} We also prove an analogous result for depth instead of product-depth (Corollary~\ref{cor:abstract-depth}). Note that, by the discussion above, these results are tight up to the constant implicit in the $\Omega(1/\Delta)$. \subsection{Related Work} We survey here some of the work on depth-hierarchy theorems in the Algebraic and Boolean settings. As mentioned above, this is a well-known question in the Boolean setting, with a near-optimal separation between different depths due to H\r{a}stad~\cite{Hastadthesis} (building on~\cite{Ajtai,FSS,Yao}); the separating examples in H\r{a}stad's result are the \emph{Sipser functions}, which are computed by linear-sized Boolean circuits of depth $\Delta+1$ but cannot be computed by subexponential-sized depth $\Delta$ circuits. A recent variant of this lower bound was proved by Chen et al.~\cite{COST} for \emph{Skew-Sipser functions}, which in turn was used to prove near-optimal lower bounds for constant-depth Boolean circuits solving variants of the Boolean Graph Reachability problem. Their construction motivates our hard polynomials, as we describe below. In the algebraic setting, we have separations between fixed constant-depth circuits under the restriction of homogeneity. Nisan and Wigderson~\cite{nw1997} show that converting a homogeneous $\Sigma\Pi\Sigma\Pi$ circuit to a homogeneous $\Sigma\Pi\Sigma$ circuit requires an exponential blow-up. A quasipolynomial separation between homogeneous $\Sigma\Pi\Sigma\Pi\Sigma$ and $\Sigma\Pi\Sigma\Pi$ circuits was shown by Kumar and Saptharishi~\cite{KumarSap16}.
However, as far as we know, nothing is known for larger depths. When the algebraic circuits are instead restricted to be multilinear, more is known. Raz and Yehudayoff~\cite{ry09} were the first to study this question, and showed a quasipolynomial depth-hierarchy theorem for all constant product-depths. In the case of product-depth $1$ vs. product-depth $2$, this was strengthened to an exponential separation by Kayal, Nair and Saha~\cite{KNS16}. One can ask if the methods of~\cite{ry09,KNS16} can be used to prove our result. Kayal et al.~\cite{KNS16} prove their result by defining a suitable complexity measure for any polynomial $f\in \mathbb{F}[x_1,\ldots,x_N]$, and show that this measure is small for any polynomial computed by a subexponential-sized $\Sigma\Pi\Sigma$ circuit but large for some polynomial computed by a linear-sized $\Sigma\Pi\Sigma\Pi$ circuit. In particular, this means that this measure needs to be changed to prove lower bounds for larger depth circuits. However, it is not clear how to modify this measure to take into account the product-depth of the circuit class. This issue does not arise for the technique of Raz and Yehudayoff~\cite{ry09}, which can indeed be used to prove exponential lower bounds on the sizes of small-depth circuits for computing certain polynomials. However, the polynomials used to witness the superpolynomial separation cannot give an exponential depth-hierarchy theorem, for the reasons we now explain. The depth-hierarchy theorem of~\cite{ry09} is obtained via an exponential lower bound $s(n,\Delta) \approx \exp(n^{1/\Delta})$ against product-depth $\Delta$ circuits computing polynomials from a certain ``hard'' class $\mc{P}$ of polynomials on $n$ variables. Importantly, these lower bounds are tight in the sense that there also exist product-depth $\Delta$ circuits of size roughly $s(n,\Delta)$ computing polynomials from $\mc{P}$.
Since $s(n,\Delta-1)$ is superpolynomially larger than $s(n,\Delta),$ we obtain a superpolynomial separation between circuits of product-depth $\Delta$ and product-depth $\Delta-1$. However, since we cannot improve on either the upper bound of $s(n,\Delta)$ or the lower bound of $s(n,\Delta-1)$ for this class of polynomials, we cannot hope to improve this separation. There is a striking parallel between this line of work and the setting of Boolean circuits, where a similar quasipolynomial depth-hierarchy theorem can also be obtained by appealing to easier lower bounds for explicit functions such as the ``Parity'' function~\cite{Hastadthesis}. It is known that the depth-$\Delta$ Boolean complexity of the Parity function is $\exp(\Theta(n^{1/(\Delta-1)}))$ and this yields a quasipolynomial depth-hierarchy theorem for constant-depth Boolean circuits as in the work of Raz and Yehudayoff described above. However, H\r{a}stad~\cite{Hastadthesis} was able to improve this to an exponential depth-hierarchy theorem by changing the candidate hard function to the Sipser functions and then proving a lower bound for these functions by a related, but more involved, technique~\cite{Hastadthesis, COST}. \subsection{Proof Outline} As mentioned in the Related Work section above, the polynomials considered by Raz and Yehudayoff~\cite{ry09} cannot be used to prove better than a quasipolynomial depth-hierarchy theorem. This parallels a quasipolynomial depth-hierarchy theorem in the Boolean setting. However, in the Boolean setting, there is a different family of explicit functions that can be used to prove an exponential depth-hierarchy theorem. Our aim is to do something similar in the setting of multilinear small-depth circuits. While the methods for proving Boolean circuit lower bounds do not seem to apply in the algebraic setting, we can take inspiration from them in choosing the candidate hard polynomial.
For this, we look to the recent result of Chen et al.~\cite{COST}, who observe that the Sipser functions (and also their ``skew'' variants) can be interpreted as special cases of the Boolean Graph Reachability problem. This is quite appealing for us, since Graph Reachability has a natural polynomial analogue, the Iterated Matrix Multiplication polynomial, which has been a source of many lower bounds in algebraic circuit complexity~\cite{nw1997,FLMS,KShom,BeraChakrabarti15,KNS16,KST16,CLS}. We therefore choose our lower bound candidate to be a restriction of the Iterated Matrix Multiplication polynomial, which we now describe. \subsubsection{The Hard Polynomials} All the polynomials we consider will be naturally defined in terms of Directed Acyclic graphs (DAGs) with a unique source and sink. Given such a graph $G$ with source $s$ and sink $t$, we define a corresponding polynomial $P_G$ as follows. Label each edge $e$ of $G$ with a distinct variable $x_e$. The polynomial $P_G$ is defined to be the sum, over all paths $\pi$ from $s$ to $t$, of the monomial which is the product of edge labels along that path. (See Figure~\ref{fig:eg1intro} for a simple example.) This polynomial $P_G$ is the algebraic analogue of the Boolean Computational problem of checking $s$-$t$ reachability on subgraphs of $G$. (Informally, we think of each $x_e$ as a Boolean variable that determines if $e$ remains in the subgraph or not. Then the polynomial $P_G$ on the Boolean input corresponding to a subgraph $H$ of $G$ counts the number of $s$-$t$ paths in $H$.) \begin{figure} \caption{A simple example of a graph $G$ (all edges go from left to right). In this case, $P_G$ is $x_1x_2x_3+x_1x_5x_6+x_4x_6.$} \label{fig:eg1intro} \end{figure} When $G$ is a layered graph with $d$ layers and all possible edges between consecutive layers, $P_G$ is known as the Iterated Matrix Multiplication polynomial (for the connection to matrix product, see, e.g.,~\cite{FLMS}). 
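The definition of $P_G$ can be made concrete by brute-force path enumeration. The sketch below (plain Python; the graph is a hypothetical encoding chosen to be consistent with the caption of Figure~\ref{fig:eg1intro}, with intermediate vertices named arbitrarily) recovers exactly the three monomials of that example.

```python
# Path enumeration for P_G: a sketch. The graph below is a hypothetical
# encoding consistent with Figure 1: its three s-t paths carry the
# monomials x1*x2*x3, x1*x5*x6 and x4*x6.
edges = {
    's': [('a', 'x1'), ('c', 'x4')],
    'a': [('b', 'x2'), ('c', 'x5')],
    'b': [('t', 'x3')],
    'c': [('t', 'x6')],
    't': [],
}

def P(graph, u, sink):
    """Return the monomials of P_G as tuples of edge labels,
    one tuple per u-sink path."""
    if u == sink:
        return {()}
    return {(label,) + tail
            for (v, label) in graph[u]
            for tail in P(graph, v, sink)}

print(sorted(P(edges, 's', 't')))
# → [('x1', 'x2', 'x3'), ('x1', 'x5', 'x6'), ('x4', 'x6')]
```

Of course, this enumeration takes time proportional to the number of $s$-$t$ paths, which is exponential for the dense layered graphs considered next; the point of the circuit constructions below is precisely to avoid this blow-up.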
The Iterated Matrix Multiplication polynomial has been studied in many contexts in Algebraic circuit complexity. Unfortunately, it is not useful in our setting, since it does not have a subexponential-size constant-depth multilinear circuit, as was shown recently by some of the authors~\cite{CLS}. In particular, this means that it cannot be used to obtain the claimed separation between depths $\Delta+1$ and $\Delta.$ However, changing $G$ can drastically reduce the complexity of the polynomial $P_G$. To obtain a $G$ such that $P_G$ has an efficient product-depth $\Delta+1$ circuit, we use \emph{series-parallel} graphs, as in the result of Chen et al.~\cite{COST}. Recall that given DAGs $G_1,\ldots,G_k$ with sources $s_1,\ldots,s_k$ and sinks $t_1,\ldots,t_k$, we can construct larger DAGs by composing these graphs in parallel (by identifying all the sources and all the sinks) or in series (by identifying $t_1$ with $s_2$, $t_2$ with $s_3$ and so on until $t_{k-1}$ with $s_k$) to get DAGs $G_{par}$ or $G_{ser}$, respectively. Note that the corresponding polynomials $P_{par}$ and $P_{ser}$ are, respectively, the sum and the product of the polynomials $P_{G_1},\ldots,P_{G_k}$. In particular, if $P_{G_1},\ldots,P_{G_k}$ have efficient circuits of product-depth at most $\Delta$, then $P_{par}$ (resp.\ $P_{ser}$) has an efficient circuit of product-depth at most $\Delta$ (resp.\ $\Delta + 1$). In this way, we can inductively construct polynomials which, by their very definition, have efficient circuits of small depth. In particular, to keep the product-depth of the circuit bounded by $\Delta+1,$ it is sufficient to ensure that the number of series compositions used in the construction of the graph is at most $\Delta+1.$ \subsubsection{The $\Sigma\Pi\Sigma$ lower bound} We motivate our proof with the solution to the simpler problem of separating product-depth $2$ and product-depth $1$ circuits. In fact, we will separate the power of $\Pi\Sigma\Pi$ and $\Sigma\Pi\Sigma$ circuits.
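The correspondence just described, parallel composition giving the sum and series composition giving the product of the constituent polynomials, can be sketched concretely. In the minimal sketch below (plain Python; the representation and names are ours, not from the paper), a multilinear polynomial is a dict mapping monomials, encoded as frozensets of variables, to coefficients.

```python
# Parallel composition = sum; series composition = product.
# A polynomial is a dict: frozenset of variables -> coefficient.
def par(*polys):
    """Parallel composition: the sum of the given polynomials."""
    out = {}
    for p in polys:
        for mon, c in p.items():
            out[mon] = out.get(mon, 0) + c
    return out

def ser(*polys):
    """Series composition: the product of the given polynomials.
    Assumes pairwise disjoint variable sets, as in the graph
    construction, so the product stays multilinear."""
    out = {frozenset(): 1}
    for p in polys:
        out = {m1 | m2: c1 * c2
               for m1, c1 in out.items() for m2, c2 in p.items()}
    return out

p1 = {frozenset({'x1'}): 1, frozenset({'x2'}): 1}   # x1 + x2
p2 = {frozenset({'x3'}): 1, frozenset({'x4'}): 1}   # x3 + x4
print(len(ser(p1, p2)))  # (x1+x2)(x3+x4) has four monomials
```

Note that `ser` multiplies the sizes of its arguments, mirroring the fact that each series composition costs one extra level of product-depth.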
While an exponential separation was already known in this case thanks to the work of Kayal et al.~\cite{KNS16}, we will outline a different proof that extends to larger depths. Given the above discussion, a natural polynomial to witness the separation between product-depth $1$ and product-depth $2$ circuits is a polynomial corresponding to a series-parallel graph obtained with exactly $2$ series compositions. Unfortunately, our proof technique is not able to prove a lower bound for such a graph. However, we are able to prove such a separation with the slightly more complicated graph $G^{(1)}$ in Figure~\ref{fig:g1intro}, which is obtained by composing $m$ copies of the basic graph $H^{(1)}$ in series (see Figure~\ref{fig:h1intro}). Though the graph $G^{(1)}$ is constructed using three series compositions rather than two, the corresponding polynomial $P^{(1)}(X)$ nevertheless has a product-depth $2$ circuit of small size (this is because the polynomial corresponding to $H^{(1)}$ depends only on a constant number of variables and hence has a ``brute-force'' $\Sigma\Pi$ circuit). \begin{figure} \caption{$H^{(1)}$} \label{fig:h1intro} \end{figure} \begin{figure} \caption{$G^{(1)}$} \label{fig:g1intro} \end{figure} We now describe how to prove that any $\Sigma\Pi\Sigma$ circuit for $P^{(1)}$ must have large size (the lower bound we obtain is $2^{\Omega(m)}$). Let $F$ be a $\Sigma\Pi\Sigma$ circuit of size $s$ computing $P^{(1)}$ and assume $s$ is small. We can write $F$ as $T_1+\cdots + T_s$, where each $T_i$ is a product of linear polynomials. The lower bound is via a rank argument first used by Raz~\cite{Raz}. The idea is to associate a rank-based complexity measure with any polynomial $f$, and show that while the polynomial $P^{(1)}$ has rank as large as possible, the rank of $F$ must be small. This rank measure is defined as follows.
We partition the variables $X$ in our polynomial into two sets $Y$ and $Z$ and consider any polynomial $f(X)$ as a polynomial in the variables in $Y$ with coefficients from $\mathbb{F}[Z].$ The rank of the space of coefficients (as vectors over the base field $\mathbb{F}$) is considered a measure of the complexity of $f$. It is easy to come up with a partition of the underlying variable set $X$ into $Y,Z$ so that the complexity of our polynomial $P^{(1)}$ is as large as it can be. Unfortunately, it is also easy to find $\Sigma\Pi\Sigma$ formulas that have maximum rank w.r.t. this partition. Hence, this notion of complexity is not by itself sufficient to prove a lower bound. At this point, we follow an idea of Raz~\cite{Raz} and show something stronger for $P^{(1)}$: we show that its complexity is \emph{robust} in the sense that it is full rank w.r.t. many different partitions. More precisely, we carefully design a large space of \emph{restrictions} $\rho^{(1)}:X\rightarrow Y\cup Z\cup \{0,1\}$ such that for any such restriction $\rho^{(1)},$ the resulting substitution of $P^{(1)}$, which we will call a \emph{restriction of $P^{(1)}$}, continues to have full rank, but the rank of the restriction of $F$ is small under many of them. We define these restrictions now. The definition of the space of restrictions is motivated by the following easily verified observation (also used in many previous results~\cite{Raznc2nc1,ry08,ry09,DMPY12}): any polynomial of the form \begin{equation} \label{eq:fullrkpoly} \prod_{i\in [t]}(y_{i} + z_{i}) \end{equation} is full-rank w.r.t. the natural partition $Y = \{y_1,\ldots,y_t\}$ and $Z=\{z_1,\ldots,z_t\}$. Given this, a possible option is to have the restriction $\rho^{(1)}$ set variables in each copy of $H^{(1)}$ in ways that ensure that after substitution, the polynomial is of the form in (\ref{eq:fullrkpoly}). We choose one among the three different (up to isomorphism) restrictions shown in Figure~\ref{fig:g0set}. 
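The full-rank observation about polynomials of the form~(\ref{eq:fullrkpoly}) can be verified computationally for small $t$. The sketch below (plain Python; the encoding, with rows indexed by subsets of $Y$ and columns by subsets of $Z$, is ours) expands $\prod_{i\in[t]}(y_i+z_i)$, builds its $2^t\times 2^t$ coefficient matrix, and checks that its rank over the rationals is the maximum possible, $2^t$.

```python
from fractions import Fraction
from itertools import product

def coefficient_matrix(t):
    # Expanding prod_i (y_i + z_i): each monomial picks y_i or z_i in
    # factor i, so the row (the chosen y's) determines the column (the rest).
    dim = 2 ** t
    M = [[Fraction(0)] * dim for _ in range(dim)]
    for choice in product([0, 1], repeat=t):   # 0 -> pick y_i, 1 -> pick z_i
        row = sum(1 << i for i, b in enumerate(choice) if b == 0)
        col = sum(1 << i for i, b in enumerate(choice) if b == 1)
        M[row][col] += 1
    return M

def rank(M):
    # Gaussian elimination over the rationals.
    M, rk = [row[:] for row in M], 0
    for col in range(len(M[0])):
        piv = next((i for i in range(rk, len(M)) if M[i][col]), None)
        if piv is None:
            continue
        M[rk], M[piv] = M[piv], M[rk]
        for i in range(len(M)):
            if i != rk and M[i][col]:
                f = M[i][col] / M[rk][col]
                M[i] = [a - f * b for a, b in zip(M[i], M[rk])]
        rk += 1
    return rk

print(rank(coefficient_matrix(3)))  # → 8, i.e. full rank 2^t for t = 3
```

Indeed, with this indexing the matrix is a permutation matrix: the coefficient is $1$ exactly when the column set is the complement of the row set, which makes the full-rank property transparent.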
It can be checked that each such restriction results in a polynomial of the form in (\ref{eq:fullrkpoly}) and hence is full rank. Since $P^{(1)}$ is a product of many disjoint copies of this basic polynomial, it also remains full rank. \begin{figure} \caption{Restrictions applied to each copy of $H^{(1)}$} \label{fig:g0set} \end{figure} Now, to show that $F$ has small rank under some such restriction, we apply a \emph{uniformly random} copy of such a restriction $\rho^{(1)}$ to $F$ and show that with high probability each of its constituent terms $T_1,\ldots, T_s$ is very small in rank. Since rank is subadditive and $s$ is assumed to be small, this implies that $F$ cannot be full rank after the restriction and hence cannot have been computing the polynomial $P^{(1)}$. Fix a term $T_i$ which is a product of linear functions as mentioned above. We argue that it restricts to a small-rank term with high probability as follows. \begin{itemize} \item A standard argument~\cite{Raz} shows that, by multilinearity,\footnote{This essentially means that the linear functions that participate in $T_i$ do not share any variables.} the rank of $T_i$ is small if many of its constituent linear functions have rank smaller than ``full'' after applying $\rho^{(1)}$. Here, a linear polynomial $L(Y',Z')$ ($Y'\subseteq Y$, $Z'\subseteq Z$) is said to be full-rank if its complexity (as defined above) is $2^{(|Y'|+|Z'|)/2}.$ \item A simple argument shows that, upon applying a random restriction $\rho^{(1)}$, any constituent linear function $L$ of $T_i$ becomes rank-deficient (i.e. short of full-rank) with good (constant) probability. This is by a case analysis on the set of variables $X'$ that appear in $L$ and proceeds roughly as follows.
\begin{itemize} \item Say $X'$ is \emph{structured} in that it is the union of the variable sets in some copies of $H^{(1)}.$ In this case, we observe that with good probability, the total number of variables in $L$ after restriction is larger than $2$ and hence the target full-rank is a number larger than $2$. On the other hand, no linear function can have rank more than $2$. \item On the other hand, if $X'$ is not structured in the above sense, then it can be shown that with good probability, the number of restricted variables in $L$ is odd and then it cannot be full-rank, since the target full-rank is $2^{t+1/2}$ for some integer $t$, while the rank of $L$ can be at most $2^t$. \end{itemize} \item Given the above, we can easily argue using a concentration bound\footnote{By multilinearity, the events that the various linear functions have small rank become (roughly) independent.} that with high probability, $T_i$ has \emph{many} rank-deficient linear functions and consequently has small rank. This yields the proof of the $\Sigma\Pi\Sigma$ lower bound. \end{itemize} \subsubsection{Separating $\Delta$ from $\Delta+1$} To prove the lower bound for product-depth $\Delta$ circuits, we proceed by induction on $\Delta.$ Each step in the inductive proof is analogous to a step in the $\Sigma\Pi\Sigma$ lower bound so we will be brief. The graph $H^{(\Delta)}$ is constructed from two copies of $G^{(\Delta-1)}$ by parallel composition and the graph $G^{(\Delta)}$ is constructed from many copies of $H^{(\Delta)}$ by series composition as shown in Figure~\ref{fig:gdelta}. By construction $P^{(\Delta)}$ will have a circuit of product-depth $\Delta+1.$ \begin{figure} \caption{$H^{(\Delta)}$} \label{fig:gdelta} \end{figure} The lower bound is again proved via a random restriction argument. We define the random restriction $\rho^{(\Delta)}$ by restricting each copy of $H^{(\Delta)}$ independently, choosing one of the following options at random.
\begin{itemize} \item set it to a polynomial of the form $(y+z)$ by choosing a random path in each copy of $G^{(\Delta-1)}$ and setting a single variable in each to $y$ or $z$ (other variables in the path are set to $1$ and off-path variables are set to $0$), \item inductively apply the restriction $\rho^{(\Delta-1)}$ to one of the copies of $G^{(\Delta-1)}$ (setting the other one to $0$). \end{itemize} As before, the polynomial $P^{(\Delta)}$ remains full-rank after the restriction with probability $1$, since it always transforms into a polynomial of the form in (\ref{eq:fullrkpoly}) after the restrictions. Let $F$ be a small product-depth $\Delta$ circuit. To argue that it is low-rank w.h.p. after applying $\rho^{(\Delta)}$, we prove a variant of a decomposition lemma from the literature that allows us to write $F = T_1 + \ldots + T_s$ where each $T_i$ is now a product of circuits each of which has small size and product-depth at most $\Delta-1.$ As before, we argue that each $T_i$ is low-rank with high probability. Say $T_i = \prod_{j\in [t]}Q_{i,j}$. For this, it suffices to show that each $Q_{i,j}$ is somewhat short of full-rank with good probability. This does indeed turn out to be true in one of two ways. \begin{itemize} \item If the variable set $X'$ of $Q_{i,j}$ is structured (as defined above), then we can use induction and show that it is low-rank if we happen to apply the restriction $\rho^{(\Delta-1)}$ to the variables of $Q_{i,j}.$ This happens with constant probability. \item On the other hand, if $Q_{i,j}$ is unstructured, then with good probability, $Q_{i,j}$ is left with an odd number of variables, which implies that it is rank-deficient exactly as before.
\end{itemize} At an intuitive level, the above cases correspond to two different ways in which a small circuit can try to compute the hard polynomial $P^{(\Delta)}.$ In the first case, it uses the fact that the polynomial $P^{(\Delta)}$ is a product of many polynomials and tries to compute each of them with a smaller circuit of product-depth $\Delta-1$; in this case, we use the inductive hypothesis to show that this cannot happen. In the second case, the circuit tries to do some non-trivial (i.e. unexpected) computation at the top level and in this case, we argue directly that the circuit fails. This finishes the sketch of an idealized version of the argument. While the above strategy can be carried out as stated, it would yield a sub-optimal but still exponential bound of the form $\exp(n^{\alpha_\Delta})$ for some $\alpha_\Delta > 0$ that depends on $\Delta.$ To obtain the near-optimal lower bound of $\exp(n^{\Omega(1/\Delta)})$, we need to make the following changes to the above idealized proof strategy. \begin{itemize} \item We expand the set of ``structured'' polynomials to allow for polynomials $Q_{i,j}$ (and inductively formulas computed at smaller depths) that compute polynomials over a non-trivial fraction, say $\varepsilon$, of the variables in many different copies, say $M$, of $H^{(\Delta-1)}$. The exact relationship between $\varepsilon$ and $M$ is somewhat delicate (we choose $\varepsilon \geq 1/M^{0.25}$) and must be chosen carefully for the proof to yield an optimal lower bound. \item Given this, we also need to handle (for the general inductive statement) the case that $F$ does not depend on all the variables in $G^{(\Delta)}$ but rather on at least a $\delta$-fraction of the variables in $K$ many copies of $H^{(\Delta)}$ for some suitable $\delta$ and $K$. Given a single term $T_i = \prod_{j}Q_{i,j}$ in $F$ as above, one of the following two cases occurs.
\begin{itemize} \item One of the $Q_{i,j}$s depends on at least a $\delta'$-fraction of the variables in $K'$ many copies of $H^{(\Delta)}$, where $\delta'$ and $K'$ are not much smaller than $\delta$ and $K$ respectively. This corresponds to the structured case above, and here we can actually use the induction hypothesis to show that we are done with high probability. To argue this, we have to use the fact that while $\delta$ has decreased to $\delta'$, the $K'$ many copies of $H^{(\Delta)}$ contain many more copies of $H^{(\Delta-1)}$. Hence the number of copies of $H^{(\Delta-1)}$, say $K''$, and $\delta'$ still have the ``correct'' relationship so that the inductive statement is applicable. \item Otherwise, in many copies of $H^{(\Delta)},$ the $\delta$-fraction of variables on which $F$ depends is further partitioned into parts of relative size $< \delta'$ by the variable sets of the $Q_{i,j}$. In this case, we have to argue that many of the $Q_{i,j}$ are rank-deficient with high probability. This is the most technical part of the proof and is done by a careful re-imagining of the sampling process that defines the random restriction. \end{itemize} \end{itemize} The more general inductive statement (and some additional technicalities that force us to carry around auxiliary sets of $Y$ and $Z$ variables) results in a technically complicated theorem statement (see Theorem~\ref{thm:technical}), which yields the near-optimal depth-hierarchy theorem. \paragraph{Organization.} We start with some Preliminaries in Section~\ref{sec:prelims}. We define the hard polynomials that we use in Section~\ref{sec:polysandrests}. The general inductive statement (Theorem~\ref{thm:technical}) is given in Section~\ref{sec:mainthm}, which also contains the proof of the main depth-hierarchy theorem (Corollary~\ref{cor:abstract-pdepth}) assuming Theorem~\ref{thm:technical}. Finally, we prove Theorem~\ref{thm:technical} in Section~\ref{sec:technical}.
\section{Preliminaries} \label{sec:prelims} \subsection{Polynomials and restrictions} \label{sec:polyrest} Throughout, let $\mathbb{F}$ be an arbitrary field. A polynomial $P\in \mathbb{F}[X]$ is called \emph{multilinear} if the degree of $P$ in each variable $x\in X$ is at most $1$. Let $X,Y,Z$ be disjoint sets of variables. An \emph{$(X,Y,Z)$-restriction} is a function $\rho:X\rightarrow Y\cup Z\cup \{0,1\}.$ The sets $X,Y$ and $Z$ will sometimes be omitted when clear from context. We say that the restriction $\rho$ is \emph{multilinear} if no two variables in $X$ are mapped to the same variable in $Y\cup Z$ by $\rho$. A \emph{random} $(X,Y,Z)$-restriction is simply a random function $\rho:X\rightarrow Y\cup Z\cup \{0,1\}$ (chosen according to some distribution). Let $X,Y,Z$ be as above and say we have, for each $i\in [k]$, an $(X_i,Y_i,Z_i)$-restriction $\rho_i$ where $X_i\subseteq X, Y_i\subseteq Y$ and $Z_i\subseteq Z.$ If $X_1,\ldots,X_k$ form a (pairwise disjoint) partition of $X$, we define their \emph{composition} --- denoted $\rho_1\circ\rho_2\circ\cdots\circ \rho_k$ --- to be the $(X,Y,Z)$-restriction $\rho$ such that $\rho(x)$ agrees with $\rho_i(x)$ for each $x\in X_i.$ Further, if the restrictions $\rho_i$ are multilinear and the sets $Y_i$ ($i\in [k]$) and $Z_j$ ($j\in [k]$) are pairwise disjoint, then $\rho_1\circ\cdots\circ \rho_k$ is also multilinear. Let $\rho$ be an $(X,Y,Z)$-restriction. Given a polynomial $f\in \mathbb{F}[X]$, the restriction $\rho$ yields a natural polynomial in $\mathbb{F}[Y\cup Z]$ obtained from $f$ by substitution; we denote this polynomial $f|_\rho$. Note, moreover, that if $f$ is multilinear and $\rho$ is multilinear, then so is $f|_\rho$. \subsection{Partial derivative matrices and relative rank} \label{sec:pdrelrk} Let $Y$ and $Z$ be two disjoint sets of variables and let $g\in \mathbb{F}[Y\cup Z]$ be a multilinear polynomial.
Define the $2^{|Y|}\times 2^{|Z|}$ matrix ${M}_{(Y,Z)}(g)$ whose rows and columns are labelled by distinct multilinear monomials in $Y$ and $Z$ respectively, and whose $(m_1,m_2)$th entry is the coefficient of the monomial $m_1\cdot m_2$ in $g$. We will use the rank of this matrix as a measure of the complexity of $g$. We define the \emph{relative-rank} of $g$ w.r.t. $(Y,Z)$ (denoted by $\mathrm{relrk}_{(Y,Z)}(g)$) by \[ \mathrm{relrk}_{(Y,Z)}(g) = \frac{\mathrm{rank}(M_{(Y,Z)}(g))}{2^{(|Y|+|Z|)/2}}. \] We note the following properties of relative rank~\cite{ry09}. \begin{proposition} \label{prop:relrk} Let $g,g_1,g_2\in \mathbb{F}[Y\cup Z]$ be multilinear polynomials. \begin{enumerate} \item $\mathrm{relrk}_{(Y,Z)}(g) \leq 1.$ Further, if $|Y|+|Z|$ is odd, $\mathrm{relrk}_{(Y,Z)}(g) \leq 1/\sqrt{2}.$ \item $\mathrm{relrk}_{(Y,Z)}(g_1+g_2)\leq \mathrm{relrk}_{(Y,Z)}(g_1) + \mathrm{relrk}_{(Y,Z)}(g_2).$ \item \label{relrk:mult} If $Y$ is partitioned into $Y_1,Y_2$ and $Z$ into $Z_1,Z_2$ with $g_i\in \mathbb{F}[Y_i\cup Z_i]$ ($i\in [2]$), then $\mathrm{rank}(M_{(Y,Z)}(g_1\cdot g_2)) = \mathrm{rank}(M_{(Y_1,Z_1)}(g_1))\cdot \mathrm{rank}(M_{(Y_2,Z_2)}(g_2)).$ In particular, $\mathrm{relrk}_{(Y,Z)}(g_1\cdot g_2) = \mathrm{relrk}_{(Y_1,Z_1)}(g_1)\cdot \mathrm{relrk}_{(Y_2,Z_2)}(g_2).$ \end{enumerate} \end{proposition} \subsection{Multilinear models of computation} \label{sec:mlmodels} We refer the reader to the standard resources (e.g.~\cite{sy,github}) for basic definitions related to algebraic circuits and formulas. Having said that, we make a few remarks. \begin{itemize} \item All the gates in our formulas and circuits will be allowed to have \emph{unbounded} fan-in. \item The size of a formula or circuit will refer to the number of gates (including input gates) in it, and the depth of the formula or circuit will refer to the maximum number of gates on a path from an input gate to the output gate.
\item Further, the \emph{product-depth} of the formula or circuit (as in \cite{ry08}) will refer to the maximum number of product gates on a path from an input gate to the output gate. Note that if a formula or circuit has depth $\Delta$, we can assume without loss of generality that its product-depth is between $\lfloor \Delta/2\rfloor$ and $\lceil \Delta/2\rceil$ (by collapsing sum and product gates if necessary). \end{itemize} An algebraic formula $F$ (resp.\ circuit $C$) computing a polynomial from $\mathbb{F}[X]$ is said to be \emph{multilinear} if each gate in the formula (resp.\ circuit) computes a multilinear polynomial. Moreover, a formula $F$ is said to be \emph{syntactic multilinear} if for each $\times$-gate $\Phi$ of $F$ with children $\Psi_1,\ldots,\Psi_t$, we have $\mathrm{Supp}(\Psi_i)\cap \mathrm{Supp}(\Psi_j) = \emptyset \text{ for each $i\neq j$},$ where $\mathrm{Supp}(\Phi)$ denotes the set of variables that appear in the subformula rooted at $\Phi.$ Finally, for $\Delta \geq 1$, we say that a multilinear formula (resp.\ circuit) is a $(\Sigma\Pi)^{\Delta}\Sigma$ formula (resp.\ circuit) if the output gate is a sum gate and along any path from an input gate to the output gate, the sum and product gates alternate, with product gates appearing exactly $\Delta$ times and the bottom gate being a sum gate. We can define $(\Sigma\Pi)^{\Delta}, \Sigma\Pi\Sigma, \Sigma\Pi\Sigma\Pi$ formulas and circuits similarly. An \emph{Algebraic Branching Program} ($\mathrm{ABP}$) over the set of variables $X$ and field $\mathbb{F}$ is a layered (i.e., the edges are only between two consecutive layers) directed acyclic graph $G$ with two special vertices called the \emph{source} and the \emph{sink}. The label of an edge is a linear polynomial in $\mathbb{F}[X]$. The weight of a path is the product of the labels of its edges. The polynomial computed by $G$ is the sum of the weights of all the paths from the source to the sink in $G$.
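The sum-over-paths semantics just defined can be evaluated at a point with a single forward pass over the layers, without enumerating paths; this is the usual reason ABPs are an efficient model. The minimal sketch below (plain Python; the encoding of layers, labels and the vertex names `s`, `t` are ours) represents each edge label as a callable computing the value of the corresponding linear polynomial.

```python
def abp_value(layers, point):
    """Evaluate the polynomial computed by a layered ABP at `point`.
    layers[i] maps edges (u, v) between consecutive layers to labels;
    a label is a callable point -> field element."""
    vals = {'s': 1}                  # weight of the empty path at the source
    for layer in layers:
        nxt = {}
        for (u, v), label in layer.items():
            if u in vals:
                nxt[v] = nxt.get(v, 0) + vals[u] * label(point)
        vals = nxt
    return vals['t']

# A two-layer ABP computing x1*x3 + x2*x4 (one monomial per s-t path).
layers = [
    {('s', 'u'): lambda p: p['x1'], ('s', 'v'): lambda p: p['x2']},
    {('u', 't'): lambda p: p['x3'], ('v', 't'): lambda p: p['x4']},
]
print(abp_value(layers, {'x1': 2, 'x2': 3, 'x3': 5, 'x4': 7}))  # → 31
```

The forward pass maintains, for each vertex of the current layer, the sum of the weights of all source-to-vertex paths, so the work is proportional to the number of edges rather than the number of paths.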
A \emph{Multilinear ABP} ($\mathrm{mlABP}$) is an algebraic branching program such that for any path from the source to the sink, the labels of edges on that path are linear polynomials over pairwise-disjoint sets of variables. \paragraph*{Composing ABPs in series and in parallel.} Let $G_1,\ldots,G_k$ be ABPs (on disjoint sets of vertices) with sources $s_1,\ldots,s_k$ and sinks $t_1,\ldots, t_k$. We say that $G$ is obtained by composing $G_1,\ldots,G_k$ \emph{in parallel} if $G$ is obtained by identifying all sources $s_1,\ldots, s_k$ to obtain a single source $s$ and all the sinks $t_1,\ldots,t_k$ to obtain a single sink $t$. Note that the polynomial computed by $G$ is the sum of the polynomials computed by $G_1,\ldots,G_k.$ We say that $G$ is obtained by composing $G_1,\ldots,G_k$ \emph{in series} if $G$ is obtained by identifying $t_i$ with $s_{i+1}$ for each $i\in [k-1]$ to obtain an ABP with source $s_1$ and sink $t_k$. Note that the polynomial computed by $G$ is the product of the polynomials computed by $G_1,\ldots,G_k.$ Further, if $G_1,\ldots,G_k$ are $\mathrm{mlABP}$s over disjoint sets of variables, then $G$ is also an $\mathrm{mlABP}.$ \begin{figure} \caption{Composition of ABPs in series and parallel.} \label{fig:abp-series} \end{figure} \subsection{Some structural lemmas} \label{sec:struct} Given a syntactically multilinear formula $F$ computing a polynomial $P\in \mathbb{F}[X],$ we define a \emph{variable-set labelling} of $F$ to be a labelling function that assigns to each gate $\Phi$ of $F$ a set $\mathrm{Var}s(\Phi)\subseteq X$ with the following properties. \begin{enumerate} \item For any gate $\Phi$ in $F$, $\mathrm{Supp}(\Phi) \subseteq \mathrm{Var}s(\Phi)$. \item If $\Phi$ is a sum gate with children $\Psi_1, \Psi_2, \ldots, \Psi_k$, then $\forall i \in [k]$, $\mathrm{Var}s(\Psi_i) = \mathrm{Var}s(\Phi)$.
\item If $\Phi$ is a product gate, with children $\Psi_1, \Psi_2, \ldots, \Psi_k$, then $\mathrm{Var}s(\Phi) = \cup_{i=1}^k \mathrm{Var}s(\Psi_i)$ and the sets $\mathrm{Var}s(\Psi_i)$ ($i\in [k]$) are pairwise disjoint. \end{enumerate} We call a syntactically multilinear formula $F$ \emph{variable-labelled} if it is equipped with a labelling function as above. For such an $F$, we define $\mathrm{Var}s(F)$ to be $\mathrm{Var}s(\Phi_0)$, where $\Phi_0$ is the output gate of $F$. The following proposition shows that we may always assume that a syntactically multilinear formula $F$ is variable-labelled. \begin{proposition} \label{prop:vars} For each syntactically multilinear formula $F$ computing $P\in \mathbb{F}[X],$ there is a variable-labelled syntactically multilinear formula $F'$ of the same size as $F$ computing $P$ such that $\mathrm{Var}s(\Phi)\subseteq X$ for each gate $\Phi$ of $F'$ and $\mathrm{Var}s(F') = X.$ \end{proposition} \begin{proof} The formula $F'$ is the same as the formula $F$ along with variable-set labels $\mathrm{Var}s(\Phi)$ for each gate $\Phi$ of $F'$. These labels are defined by downward induction on the structure of $F$ as follows. For the output gate $\Phi_0$ of $F$, we define $\mathrm{Var}s(\Phi_0) = X$ (this ensures that $\mathrm{Var}s(F') = X$). If $\Phi$ is a sum gate with children $\Psi_1, \ldots, \Psi_k$ and $\mathrm{Var}s(\Phi) = S \subseteq X$, then for each $1\leq i \leq k$, we define $\mathrm{Var}s(\Psi_i) = S$. If $\Phi$ is a product gate with children $\Psi_1, \ldots, \Psi_k$ and $\mathrm{Var}s(\Phi) = S \subseteq X$, then we define $\mathrm{Var}s(\Psi_i) = \mathrm{Supp}(\Psi_i)$ for $1 \leq i \leq k-1$ and $\mathrm{Var}s(\Psi_k) = S \setminus \left(\cup_{i=1}^{k-1} \mathrm{Var}s(\Psi_i)\right)$. It is easy to check that the above labelling is indeed a valid variable-set labelling.
\end{proof} We will use the following structural result that converts any small-depth multilinear circuit to a small-depth syntactic multilinear formula without a significant blowup in size. \begin{lemma}[Raz and Yehudayoff~\cite{ry09}, Lemma 2.1] \label{lem:RY-nf-ckts} For any multilinear circuit $C$ of product-depth at most $\Delta$ and size at most $s$, there is a syntactic multilinear $(\Sigma\Pi)^{\Delta}\Sigma$ formula $F$ of size at most $(\Delta+1)^2\cdot s^{2\Delta+1}$ computing the same polynomial as $C$. \end{lemma} \subsection{Useful Probabilistic Estimates} \label{sec:misc} We will need the Chernoff bound~\cite{chernoff, hoeffding} for sums of independent Boolean random variables. We use the version from the book of Dubhashi and Panconesi~\cite[Theorem 1.1]{DubhashiPanconesi}. \begin{theorem}[Chernoff bound] \label{thm:Chernoff} Let $W_1,\ldots,W_n$ be independent $\{0,1\}$-valued random variables and let $W = \sum_{i\in [n]} W_i.$ Then we have the following. \begin{enumerate} \item For any $\varepsilon > 0,$ \[ \prob{}{W > (1+\varepsilon) \avg{}{W}} \leq \exp(-\frac{\varepsilon^2}{3}\avg{}{W})\ \text{ and}\ \prob{}{W < (1-\varepsilon) \avg{}{W}} \leq \exp(-\frac{\varepsilon^2}{2}\avg{}{W}). \] \item For any $t > 2e\avg{}{W}$, \[ \prob{}{W > t}\leq 2^{-t}. \] \end{enumerate} \end{theorem} We will need the following simple fact that reduces the problem of proving a large deviation bound to the problem of bounding the probability of a conjunction of events. \begin{proposition} \label{prop:ANDthr} Let $W_1,\ldots,W_n$ be $n$ $\{0,1\}$-valued random variables (not necessarily independent) and let $W = (W_1,\ldots,W_n)$. Assume that for some $p\in [0,1]$ and for any $\beta\in \{0,1\}^n$ we have $\prob{}{W = \beta}\leq p.$ Then, we have $\prob{}{\sum_i W_i \leq r} \leq n^r\cdot p.$ \end{proposition} \begin{proof} For any $\beta \in \{0,1\}^n$, let the weight of $\beta$, denoted $\mathrm{wt}(\beta)$, be $\sum_{i=1}^n\beta(i)$.
It is now easy to see that \begin{align*} \prob{}{\sum_i W_i \leq r} = \sum_{k=0}^r\left(\sum_{\substack{\beta\in\{0,1\}^n\\\mathrm{wt}(\beta)=k}}\prob{}{W=\beta}\right) \leq \sum_{k=0}^r{n\choose k}\cdot p \leq n^r \cdot p. \end{align*} The first inequality comes from the fact that there are ${n\choose k}$ many bit vectors $\beta$ in $\{0,1\}^n$ of weight exactly $k$ and $\prob{}{W=\beta} \leq p$ for each of them, and the last inequality comes from an upper bound of $n^r$ on $\sum_{k=0}^r{n\choose k}$. \end{proof} For a Boolean random variable $W,$ define the \emph{unbias} of $W$ by \[ \mathrm{Unbias}(W) = \min\{\prob{}{W = 0}, \prob{}{W=1}\} \] and \emph{bias} of $W$ by \[ \mathrm{Bias}(W) = \abs{\prob{}{W=0}-\prob{}{W=1}} = \abs{\mathbb{E}[(-1)^{W}]}. \] Note that $\mathrm{Unbias}(W)\in [0,1/2].$ The following fact follows directly from the aforementioned definitions. \begin{proposition} \label{prop:bias-unbias} For a Boolean random variable $W,$ $\mathrm{Bias}(W) + 2\cdot\mathrm{Unbias}(W) = 1$. \end{proposition} The following fact is folklore, but we state it here in the exact form we need and prove it for completeness. \begin{proposition} \label{prop:xorunbias} Let $W_1,\ldots,W_n$ be independent $\{0,1\}$-valued random variables and assume $W = \bigoplus_{j\in [n]} W_j$. Then $\mathrm{Unbias}(W) \geq \min\{(1/2)\cdot \sum_j \mathrm{Unbias}(W_j),1/10\}.$ \end{proposition} \begin{proof} From Proposition~\ref{prop:bias-unbias}, we get that $\mathrm{Unbias}(W) = \frac{1 - \mathrm{Bias}(W)}{2} = \frac{1-\abs{\mathbb{E}[(-1)^W]}}{2}$. We also get the following. \begin{align*} \abs{\mathbb{E}[(-1)^W]} = \abs{\mathbb{E}[(-1)^{\bigoplus_{j\in [n]} W_j}]} = \abs{\mathbb{E}[(-1)^{\sum_{j\in[n]}W_j}]} = \abs{\prod_{j\in[n]}{\mathbb{E}[(-1)^{W_j}]}} = \prod_{j\in[n]}\mathrm{Bias}(W_j). 
\end{align*} Thus, \begin{align*} \mathrm{Unbias}(W) = \frac{1-\prod_{j\in[n]}(1-2\cdot\mathrm{Unbias}(W_j))}{2} &\geq \frac{1-\prod_{j\in[n]}e^{-2\cdot\mathrm{Unbias}(W_{j})}}{2}\\ &=\frac{1 - e^{-2(\sum_{j\in[n]}\mathrm{Unbias}(W_j))}}{2}. \end{align*} If $\sum_{j\in[n]}\mathrm{Unbias}(W_j) \geq \frac{1}{2}$, then $\mathrm{Unbias}(W) \geq \frac{1-e^{-1}}{2}\geq \frac{1}{10}$. Else, $\mathrm{Unbias}(W)\geq \frac{\sum_{j\in[n]}\mathrm{Unbias}(W_j)}{2}.$ The latter follows from the inequality $e^{-x}\leq 1 - \frac{x}{2}$ for all $x\in[0,1]$, applied with $x = 2\sum_{j\in[n]}\mathrm{Unbias}(W_j)$. This proves the proposition. \end{proof} Let $A_1,\ldots,A_N$ be any $N$ independent random variables taking values over any finite set. Let $B_1,\ldots,B_M$ be $M$ \emph{Boolean} random variables with $B_i = f_i(A_j : j\in S_i)$ for some function $f_i$ and some $S_i\subseteq [N].$ We say $B_1,\ldots,B_M$ are read-$k$ for $k\geq 1$ if each $j\in [N]$ belongs to at most $k$ many sets $S_i.$ A result of Janson~\cite{Janson} yields concentration bounds for sums of read-$k$ Boolean random variables. We use the following form of the bound that appears in the result of Gavinsky, Lovett, Saks and Srinivasan~\cite{GLSS}. \begin{theorem}[\cite{GLSS}] \label{thm:GLSS} If $B_1,\ldots, B_M$ are read-$k$ Boolean random variables and $B = \sum_i {B_i}$, then \[ \prob{}{B\leq \frac{1}{2}\avg{}{B}} \leq \exp(-\Omega(\avg{}{B}/k)). \] \end{theorem} \section{The Hard polynomials and their restrictions} \label{sec:polysandrests} \subsection{The Hard polynomials} \label{sec:polynomial} The hard polynomial for circuits of product-depth $\Delta$ will be defined to be the polynomial computed by a suitable ABP. The definition is by induction on the product-depth $\Delta$. We also define some auxiliary variable sets that will be useful later when we restrict these polynomials. Let $m\in \mathbb{N}$ be a growing parameter.
We will define for each $\Delta\geq 1$ an ABP $G^{(\Delta)}$ on a variable set $X^{(\Delta)}$ of size $n_\Delta$, along with some auxiliary variable sets $Y^{(\Delta)}$ and $Z^{(\Delta)}$. The parameter $n_\Delta$ itself is defined inductively as follows. Let $n_0 = 8$ and for each $\Delta\geq 1,$ define \begin{equation} \label{eq:defn_im_i} n_\Delta = 2 \cdot m\cdot n_{\Delta-1}. \end{equation} Observe that $n_{\Delta} = 8\cdot (2m)^{\Delta}$ for $\Delta \geq 1.$ \paragraph{The Variable sets.} For each $\Delta \geq 0,$ we will define three disjoint variable sets $X^{(\Delta)},Y^{(\Delta)}$ and $Z^{(\Delta)}.$ We will have $|X^{(\Delta)}| = n_\Delta$ for each $\Delta.$ Let $X^{(0)}$ be the set $\{x_{i,j}^{(0)}\ |\ i\in [4],j\in [2]\}$, $Y^{(0)} = \{y_1^{(0)},y_2^{(0)}\}$ and $Z^{(0)} = \{z_1^{(0)},z_2^{(0)}\}.$ Note that $|X^{(0)}| = n_0 = 8.$ Given $X^{(\Delta)}, Y^{(\Delta)}$ and $Z^{(\Delta)},$ we now define $X^{(\Delta+1)}, Y^{(\Delta+1)}$ and $Z^{(\Delta + 1)}$ as follows. \paragraph{Clones, segments and half-segments.} For each $i\in [m]$ and $j\in [2]$, let $X^{(\Delta+1)}_{i,j}$, $Y^{(\Delta+1)}_{i,j}$ and $Z^{(\Delta+1)}_{i,j}$ be pairwise disjoint copies of $X^{(\Delta)}, Y^{(\Delta)}$ and $Z^{(\Delta)}$ respectively. Define $X^{(\Delta+1)} = \bigcup_{i,j} X^{(\Delta+1)}_{i,j}$; note that $|X^{(\Delta+1)}| = 2m |X^{(\Delta)}| = n_{\Delta+1}$ as desired. Define $Y^{(\Delta+1)}$ to be $\bigcup_{i} Y^{(\Delta+1)}_{i,1} \cup Y^{(\Delta+1)}_{i,2} \cup \{y^{(\Delta+1)}_i\}$ and $Z^{(\Delta+1)}$ to be $\bigcup_{i} Z^{(\Delta+1)}_{i,1} \cup Z^{(\Delta+1)}_{i,2} \cup \{z^{(\Delta+1)}_i\}$, where $y_i^{(\Delta+1)}$ and $z_i^{(\Delta+1)}$ are fresh variables. 
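As a quick sanity check of these definitions (our own script, not part of the paper; the recurrence $|Y^{(\Delta+1)}| = m\,(2|Y^{(\Delta)}|+1)$ is our reading of the construction above, with $|Z^{(\Delta)}| = |Y^{(\Delta)}|$ by symmetry), the set sizes can be computed and compared against the closed form $n_\Delta = 8\cdot(2m)^{\Delta}$:

```python
# Sanity check (ours) of the variable-set sizes: |X^(D)| follows
# n_0 = 8, n_{D+1} = 2m * n_D, with closed form n_D = 8 * (2m)^D.
# Reading the construction, |Y^(D)| = |Z^(D)| follows y_0 = 2,
# y_{D+1} = m * (2 * y_D + 1): two half-segment copies plus one
# fresh variable per segment, over m segments.

def sizes(m, depth):
    n, y = 8, 2
    for _ in range(depth):
        n, y = 2 * m * n, m * (2 * y + 1)
    return n, y

m = 5
for D in range(6):
    n, _ = sizes(m, D)
    assert n == 8 * (2 * m) ** D  # closed form stated in the text
print(sizes(m, 2))  # (800, 255) for m = 5
```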
By the inductive definition of the sets $X^{(\Delta+1)}$ above, we see that $X^{(\Delta+1)}$ is made up of $2m$ copies of $X^{(\Delta)}$, which we denote by $X^{(\Delta+1)}_{i,j}$ for $i\in [m]$ and $j\in [2].$ Using this fact inductively, we see that for any $t\in \{0,\ldots,\Delta+1\},$ $X^{(\Delta+1)}$ contains $(2m)^t$ copies of $X^{(\Delta+1-t)}$. Each such copy is uniquely labelled by a tuple $\omega = ((i_1,j_1),\ldots,(i_t,j_t))\in ([m]\times [2])^t$; we denote this copy by $X^{(\Delta+1)}_\omega$ and call this an \emph{$X^{(\Delta+1-t)}$-clone}. In a similar way, we see that for each $\omega\in ([m]\times [2])^t$, $Y^{(\Delta+1)}$ and $Z^{(\Delta+1)}$ contain copies of $Y^{(\Delta+1-t)}$ and $Z^{(\Delta+1-t)}$ respectively, which we denote as $Y^{(\Delta+1)}_\omega$ and $Z^{(\Delta+1)}_\omega$ respectively, and call $Y^{(\Delta+1-t)}$-clones and $Z^{(\Delta+1-t)}$-clones respectively. Say $X^{(\Delta')}_\omega$ is an $X^{(\Delta)}$-clone for some $\Delta' \geq \Delta \geq 1.$ Then we refer to $X^{(\Delta')}_{(\omega,(i,j))}$ (which is an $X^{(\Delta-1)}$-clone) as the $(i,j)$th \emph{half-segment} of $X^{(\Delta')}_\omega$. Further, we refer to $X^{(\Delta')}_{(\omega,(i,1))}\cup X^{(\Delta')}_{(\omega,(i,2))}$ as the $i$th \emph{segment} of $X^{(\Delta')}_\omega,$ which we will denote $X^{(\Delta')}_{(\omega,i)}$; note that $X^{(\Delta')}_\omega$ is a union of its $m$ segments. Similarly a $Y^{(\Delta)}$-clone $Y^{(\Delta')}_\omega$ (resp.\ a $Z^{(\Delta)}$-clone $Z^{(\Delta')}_\omega$) is a disjoint union of $m$ segments $Y^{(\Delta')}_{(\omega,i)}$ (resp.\ $Z^{(\Delta')}_{(\omega,i)}$) for $i\in [m]$, the $i$th of which is made up of two half-segments $Y^{(\Delta')}_{(\omega,(i,1))}$ and $Y^{(\Delta')}_{(\omega,(i,2))}$ (resp.\ $Z^{(\Delta')}_{(\omega,(i,1))}$ and $Z^{(\Delta')}_{(\omega,(i,2))}$) and an additional fresh variable that we denote $y^{(\Delta')}_{(\omega,i)}$ (resp.\ $z^{(\Delta')}_{(\omega,i)}$). 
We refer to a set of the form $X^{(\Delta')}_{(\omega,i)}$ as an $X^{(\Delta)}$-segment\footnote{The reader may want to read ``$X^{(\Delta)}$-segment'' as ``a segment of (a clone of) $X^{(\Delta)}$'' and similarly for other segments and half-segments.} and a set of the form $X^{(\Delta')}_{(\omega,(i,j))}$ as an $X^{(\Delta)}$-half-segment (similarly $Y^{(\Delta)}$-segment, $Y^{(\Delta)}$-half-segment, $Z^{(\Delta)}$-segment and $Z^{(\Delta)}$-half-segment). To summarize, given any $\Delta'\geq \Delta\geq 0$, the set $X^{(\Delta')}$ contains $(2m)^{\Delta'-\Delta}$ many $X^{(\Delta)}$-clones which are indexed by elements of the set $([m]\times [2])^{\Delta'-\Delta}$. If $\Delta \geq 1,$ each such $X^{(\Delta)}$-clone contains $m$ many $X^{(\Delta)}$-segments, which are indexed by the set $([m]\times [2])^{\Delta'-\Delta} \times [m];$ and each segment contains two $X^{(\Delta-1)}$-clones, also referred to as $X^{(\Delta)}$-half-segments. Finally, these definitions extend naturally to $Y^{(\Delta')}$- and $Z^{(\Delta')}$-clones. \paragraph{The Hard polynomial.} The hard polynomial for product-depth $\Delta$ is defined by induction on $\Delta.$ Define $G^{(0)}$ to be an ABP as follows. Assume that $G^{(0)}$ has source vertex $s_0$, sink vertex $t_0$ and an intermediate vertex $u_0$. The graph of the ABP consists of two disjoint paths $\pi_1$ and $\pi_2$ of length $2$ each from $s_0$ to $u_0$ and two disjoint paths $\pi_3$ and $\pi_4$ of length $2$ each from $u_0$ to $t_0$ (see Figure~\ref{fig:g0}). The edges of $\pi_i$ are labelled by the distinct variables $x_{i,1}^{(0)}$ and $x_{i,2}^{(0)}$. Note that the set of variables appearing in $G^{(0)}$ is exactly the set $X^{(0)} = \{x_{i,j}^{(0)}\ |\ i\in [4],j\in [2]\}$ defined above, so $G^{(0)}$ computes a polynomial over the variable set $X^{(0)}$. \begin{figure} \caption{The ABP $G^{(0)}$.} \label{fig:g0} \end{figure} Now fix any $\Delta \geq 0$.
Given $G^{(\Delta)}$, an ABP on the variable set $X^{(\Delta)}$, we inductively define the ABP $G^{(\Delta+1)}$ on the variable set $X^{(\Delta+1)}$ as follows. \begin{itemize} \item For each $i\in[m]$ and $j\in[2]$, let $G^{(\Delta+1)}_{i,j}$ be a copy of $G^{(\Delta)}$ on the variable set $X^{(\Delta+1)}_{i,j}$. \item Let $H^{(\Delta+1)}_{i}$ be the ABP obtained by composing $G^{(\Delta+1)}_{i,1}$ and $G^{(\Delta+1)}_{i,2}$ in parallel. This ABP is defined over the variable set $X^{(\Delta+1)}_i.$ \item Finally, let $G^{(\Delta+1)}$ be the ABP obtained by composing $H^{(\Delta+1)}_{1}, \ldots, H^{(\Delta+1)}_{m}$ in series, in that order. \end{itemize} From the definition of $G^{(\Delta)}$ it is easy to observe the following properties. \begin{proposition} \label{prop:graph} For any $\Delta \geq 0$, $G^{(\Delta)}$ has a unique source, say $s_\Delta$, and a unique sink, say $t_\Delta$. Also, each edge in $G^{(\Delta)}$ appears on some source-to-sink path. \end{proposition} We define $P^{(\Delta)}$ to be the multilinear polynomial in ${\mathbb{F}}[X^{(\Delta)}]$ computed by the ABP $G^{(\Delta)}$. We can also note the following properties of the polynomial computed by $G^{(\Delta)}$.
\begin{proposition}[Properties of $P^{(\Delta)}$] \label{prop:poly} \begin{enumerate} \item $P^{(0)}$ is the polynomial $\sum_{i_1 \in \{1,2\}, i_2 \in \{3,4\}} x^{(0)}_{i_1,1} \cdot x^{(0)}_{i_1,2} \cdot x^{(0)}_{i_2,1} \cdot x^{(0)}_{i_2,2}.$ \item For each $\Delta \geq 0$, $$P^{(\Delta+1)}(X^{(\Delta+1)}) = \prod_{i\in [m]} \left(P^{(\Delta)}(X^{(\Delta+1)}_{i,1}) + P^{(\Delta)}(X^{(\Delta+1)}_{i,2})\right). $$ \item The polynomial $P^{(0)}$ is computed by a $\Sigma \Pi$ multilinear formula of size $O(1).$ For each $\Delta \geq 1,$ $P^{(\Delta)}$ can be computed by a syntactic multilinear $(\Pi\Sigma)^{\Delta} \Pi$ formula of size $O(n_\Delta).$ \end{enumerate} \end{proposition} As with the variable sets, we see that for any $\Delta' \geq \Delta\geq 1,$ and for each $\omega\in ([m]\times [2])^{\Delta'-\Delta}$, the ABP $G^{(\Delta')}$ contains a corresponding copy of $G^{(\Delta)}$ on the variable set $X^{(\Delta')}_\omega.$ We denote this copy of $G^{(\Delta)}$ by $G^{(\Delta')}_\omega.$ The ABP $G^{(\Delta')}_\omega$ is obtained by composing in series the ABPs $H^{(\Delta')}_{(\omega,1)},\ldots,H^{(\Delta')}_{(\omega,m)}$ (defined on the segments $X^{(\Delta')}_{(\omega,1)},\ldots,X^{(\Delta')}_{(\omega,m)}$ respectively) in that order. \subsection{Restrictions} \label{sec:restriction} We define a random multilinear\footnote{See Section~\ref{sec:polyrest} for the definition.} $(X^{(\Delta+1)}, Y^{(\Delta+1)},Z^{(\Delta+1)})$-restriction $\rho^{(\Delta+1)}$ by an inductively defined sampling process. \paragraph{Base case.} Let $\rho^{(0)}$ be defined as follows: $\rho^{(0)}(x^{(0)}_{1,1}) = y^{(0)}_1, \rho^{(0)}(x^{(0)}_{2,1}) = z^{(0)}_1$, $\rho^{(0)}(x^{(0)}_{3,1}) = y^{(0)}_2, \rho^{(0)}(x^{(0)}_{4,1}) = z^{(0)}_2$ (see Figure~\ref{fig:base-restriction}). That is, we set the first variable in each of the paths $\pi_1,\pi_2,\pi_3,\pi_4$ to a distinct variable in $Y^{(0)}\cup Z^{(0)}$. Also, we set $\rho^{(0)}(x^{(0)}_{i,2}) = 1$ for each $i \in [4]$.
That is, we set all the remaining variables to the constant $1$. (Note that $\rho^{(0)}$ is in fact a \emph{deterministic} $(X^{(0)},Y^{(0)},Z^{(0)})$-restriction.) It can be checked that $\rho^{(0)}$ is multilinear. \paragraph{Inductive case.} Let us now assume that we have defined the process for sampling the random multilinear $(X^{(\Delta)},Y^{(\Delta)},Z^{(\Delta)})$-restriction $\rho^{(\Delta)}$. We define $\rho^{(\Delta+1)}$ by the following sampling process. For each $i \in [m]$, we sample a random multilinear $(X^{(\Delta+1)}_i,Y^{(\Delta+1)}_i,Z^{(\Delta+1)}_i)$-restriction $\rho^{(\Delta+1)}_i$ independently using the general sampling process described below. We then define $\rho^{(\Delta+1)}$ as a composition of these restrictions, i.e. $\rho^{(\Delta+1)} = \rho^{(\Delta+1)}_1\circ \cdots \circ \rho^{(\Delta+1)}_{m}$ (as defined in Section~\ref{sec:polyrest}). Clearly, $\rho^{(\Delta+1)}$ is multilinear since each $\rho^{(\Delta+1)}_i$ is multilinear. We now give a general sampling process to sample a random multilinear $(X,Y,Z)$-restriction where $X$ is an $X^{(\Delta+1)}$-segment and $Y$ and $Z$ are the corresponding $Y^{(\Delta+1)}$- and $Z^{(\Delta+1)}$-segments respectively. 
For the remainder of this section, let $\Delta'\geq \Delta+1 \geq 1$ be arbitrary and for some $\omega\in ([m]\times [2])^{\Delta'-\Delta-1}$ and $i\in [m]$, let $X$ denote the $i$th segment $X^{(\Delta')}_{(\omega,i)}$ of the $X^{(\Delta+1)}$-clone $X^{(\Delta')}_\omega$ and let $Y = Y^{(\Delta')}_{(\omega,i)}$, $Z = Z^{(\Delta')}_{(\omega,i)}.$ For any $j\in [2]$, let $X_j,Y_j,$ and $Z_j$ denote the $X^{(\Delta)}$-, $Y^{(\Delta)}$-, and $Z^{(\Delta)}$-clones $X^{(\Delta')}_{(\omega,(i,j))}, Y^{(\Delta')}_{(\omega,(i,j))}$ and $Z^{(\Delta')}_{(\omega,(i,j))}$ respectively, and let $G_j$ denote the ABP $G^{(\Delta')}_{(\omega,(i,j))}.$ Let $y,z$ denote the variables $y^{(\Delta')}_{(\omega,i)}$ and $z^{(\Delta')}_{(\omega,i)}$ in the sets $Y$ and $Z$ respectively (recall that we have $Y = Y_1\cup Y_2 \cup \{y\}$ and $Z = Z_1\cup Z_2 \cup \{z\}$). Let $H$ denote the ABP $H^{(\Delta')}_{(\omega,i)}$ (recall that $H$ is the parallel composition of $G_1$ and $G_2$). We now show how to sample, for any such $X,Y,Z$, a random multilinear $(X,Y,Z)$-restriction $\rho$. \paragraph{Sampling Algorithm $\mc{A}$ for $(X,Y,Z)$-restriction $\rho$.} \begin{itemize} \item[$E_1$:] Set all the variables from the set $X_2$ to $0$. For the variables in $X_1$, sample a random $(X_1,Y_1,Z_1)$-restriction $\rho_1$ using the sampling procedure for $\rho^{(\Delta)}$. (Recall that $X_1,Y_1,Z_1$ are $X^{(\Delta)}$-, $Y^{(\Delta)}$-, and $Z^{(\Delta)}$-clones respectively.) Set variables in $X_{1}$ according to $\rho_1$.\label{i:rest1} \item[$E_2$:] \label{i:rest2} This is the same as $E_1$ except that the roles of $X_1$ and $X_2$ are exchanged. Formally, we set all the variables from the set $X_1$ to $0$ and apply a random $(X_2,Y_2,Z_2)$-restriction $\rho_2$, sampled using the sampling procedure for $\rho^{(\Delta)}$, to $X_2$. \item[$E_3$:] Choose variables $x_1$ and $x_2$ independently and uniformly at random from $X_1$ and $X_2$ respectively.
The variable $x_j$ ($j\in [2]$) labels a unique edge, say $e_j$, in the ABP $G_j$; let $\pi_{e_j}$ denote the lexicographically smallest source-to-sink path in $G_j$ containing $e_j$. For any variable $x\in X$ which does not label an edge on either of the paths $\pi_{e_1}$ or $\pi_{e_2}$, set $x$ to the constant $0$. Set $x_1$ to $y$ and $x_2$ to $z$. For any edge $e$ other than $e_1,e_2$ which lies on either $\pi_{e_1}$ or $\pi_{e_2},$ set the variable labelling $e$ to $1$. \end{itemize} \subsection{Properties of $\rho^{(\Delta)}$} \label{sec:rest-props} Fix some $\Delta \geq 1.$ For a given random choice of $\rho^{(\Delta)}$, we use $\tilde{Y}$ and $\tilde{Z}$ to denote $\mathrm{Img}(\rho^{(\Delta)})\cap Y^{(\Delta)}$ and $\mathrm{Img}(\rho^{(\Delta)})\cap Z^{(\Delta)}$ respectively. Note that these are \emph{random} sets. Given any $f\in \mathbb{F}[X^{(\Delta)}],$ the polynomial $f|_{\rho^{(\Delta)}}$ belongs to the set $\mathbb{F}[\tilde{Y}\cup \tilde{Z}].$ By the multilinearity of $\rho^{(\Delta)},$ if $f$ is multilinear, then so is $f|_{\rho^{(\Delta)}}.$ \begin{lemma} \label{lem:fullrank} With probability $1$, $\mathrm{relrk}_{(\tilde{Y},\tilde{Z})}(P^{(\Delta)}|_{\rho^{(\Delta)}}) = 1.$ \end{lemma} \begin{proof} We prove the lemma by induction on $\Delta.$ The base case corresponds to $\Delta = 0,$ in which case $\tilde{Y} = Y^{(0)} = \{y^{(0)}_1,y^{(0)}_2\}$ and $\tilde{Z} = Z^{(0)} = \{z^{(0)}_1,z^{(0)}_2\}$ deterministically. From the definition of $P^{(0)}$ and $\rho^{(0)}$, we know that $P^{(0)}|_{\rho^{(0)}} = (y^{(0)}_1+ z^{(0)}_1)(y^{(0)}_2 + z^{(0)}_2)$, which can easily be seen to have relative rank $1$ w.r.t. the partition $(\tilde{Y},\tilde{Z})$.
\begin{figure} \caption{The effect of $\rho^{(0)}$.} \label{fig:base-restriction} \end{figure} Recall (Proposition~\ref{prop:poly}) that for each $\Delta \geq 1$, $$P^{(\Delta)}(X^{(\Delta)}) = \prod_{i\in [m]} \left(P^{(\Delta-1)}(X^{(\Delta)}_{i,1}) + P^{(\Delta-1)}(X^{(\Delta)}_{i,2})\right). $$ Let $Q^{(\Delta)}_i$ be equal to $P^{(\Delta-1)}(X^{(\Delta)}_{i,1}) + P^{(\Delta-1)}(X^{(\Delta)}_{i,2})$. The random restriction $\rho^{(\Delta)}$ is defined to be the composition $\rho^{(\Delta)}_1\circ \cdots \circ \rho^{(\Delta)}_m$ where each $\rho^{(\Delta)}_i$ is a random multilinear $(X^{(\Delta)}_i,Y^{(\Delta)}_i,Z^{(\Delta)}_i)$-restriction sampled according to the algorithm $\mc{A}$ described in Section~\ref{sec:restriction}. Thus, we have for any fixing of $\rho^{(\Delta)}_i$ ($i\in [m]$), \[ P^{(\Delta)}|_{\rho^{(\Delta)}} = \prod_{i\in [m]} Q^{(\Delta)}_i|_{\rho^{(\Delta)}_i} \] and hence by Proposition~\ref{prop:relrk}, we have \[ \mathrm{relrk}_{(\tilde{Y},\tilde{Z})}(P^{(\Delta)}|_{\rho^{(\Delta)}}) = \prod_{i\in [m]} \mathrm{relrk}_{(\tilde{Y}_i,\tilde{Z}_i)}(Q^{(\Delta)}_i|_{\rho^{(\Delta)}_i}) \] where $\tilde{Y}_i := \mathrm{Img}(\rho^{(\Delta)}_i)\cap Y^{(\Delta)}_i$ and $\tilde{Z}_i := \mathrm{Img}(\rho^{(\Delta)}_i)\cap Z^{(\Delta)}_i$. So it suffices to argue that each term in the above product is $1$ with probability $1$. Fix an $i\in [m]$ and consider $\mathrm{relrk}_{(\tilde{Y}_i,\tilde{Z}_i)}(Q^{(\Delta)}_i|_{\rho^{(\Delta)}_i})$. There are three possibilities for the sampling algorithm $\mc{A}$ in choosing $\rho^{(\Delta)}_i$. Say $\mc{A}$ picks option $E_1$. Then, all variables in $X^{(\Delta)}_i\setminus X^{(\Delta)}_{i,1} = X^{(\Delta)}_{i,2}$ are set to $0$ and $\rho^{(\Delta)}_i$ is simply a copy of $\rho^{(\Delta-1)}$ defined w.r.t. the sets $(X^{(\Delta)}_{i,1},Y^{(\Delta)}_{i,1},Z^{(\Delta)}_{i,1})$.
Thus, by induction $\mathrm{relrk}_{(\tilde{Y}_i,\tilde{Z}_i)}(Q^{(\Delta)}_i|_{\rho^{(\Delta)}_i})=\mathrm{relrk}_{(\tilde{Y}_i,\tilde{Z}_i)}(P^{(\Delta-1)}|_{\rho^{(\Delta)}_i}) = 1$. Similar reasoning works when $\mc{A}$ picks option $E_2.$ In case $\mc{A}$ picks $E_3$, then $Q^{(\Delta)}_i|_{\rho^{(\Delta)}_i} = y_i^{(\Delta)} + z_i^{(\Delta)}$ and $\tilde{Y}_i = \{y_i^{(\Delta)}\}, \tilde{Z}_i = \{z_i^{(\Delta)}\}.$ It is then easily checked that $\mathrm{relrk}_{(\tilde{Y}_i,\tilde{Z}_i)}(Q^{(\Delta)}_i|_{\rho^{(\Delta)}_i})=1$. This completes the induction and proves the lemma. \end{proof} \section{The Main Result} \label{sec:mainthm} The main result of this paper is the following. \begin{theorem} \label{thm:PDeltalbd} Let $m,\Delta\in \mathbb{N}$ be growing parameters with $\Delta = m^{o(1)}.$\footnote{Since $n_\Delta = O(m)^{\Delta}$, this is equivalent to requiring that $\Delta = o(\log n_\Delta/\log \log n_\Delta).$} Assume that the polynomials $P^{(\Delta)}(X^{(\Delta)})$ are as defined in Section~\ref{sec:polynomial}. Then any multilinear circuit $C$ of product-depth $\Delta$ computing $P^{(\Delta)}$ must have size at least $\exp(m^{\Omega(1)}) = \exp(n_{\Delta}^{\Omega(1/\Delta)}).$ \end{theorem} The following corollary is immediate from the lower bound in Theorem~\ref{thm:PDeltalbd} and the formula upper bound in Proposition~\ref{prop:poly}. \begin{corollary} \label{cor:abstract-pdepth} Assume $\Delta = \Delta(n) = o(\log n/\log \log n)$. For all large enough $n\in \mathbb{N}$, there is an explicit multilinear polynomial on $n$ variables that has a multilinear formula of size $O(n)$ and product-depth $\Delta(n)+1$ but no multilinear circuit of size at most $\exp(n^{\Omega(1/\Delta)})$ and product-depth at most $\Delta(n).$ \end{corollary} \begin{proof} We can find $m = m(n)$ so that $\Delta = m^{o(1)}$ and the corresponding $n_\Delta \in [\sqrt{n}, n]$ for large enough $n$.
We now apply Proposition~\ref{prop:poly} and Theorem~\ref{thm:PDeltalbd} to obtain the result. \end{proof} We also have a similar result for depth instead of product-depth. \begin{corollary} \label{cor:abstract-depth} Assume $\Delta = \Delta(n) = o(\log n/\log \log n)$. For all large enough $n\in \mathbb{N}$, there is an explicit multilinear polynomial on $n$ variables that has a multilinear formula of size $O(n)$ and depth $\Delta+1$ but no multilinear circuit of size $\exp(n^{\Omega(1/\Delta)})$ and depth at most $\Delta.$ \end{corollary} \begin{proof} Let $\Delta' = \lfloor (\Delta+1)/2\rfloor.$ Fix $m = m(n)$ so that $\Delta = m^{o(1)}$ and the corresponding $n_{\Delta'} \in [\sqrt{n}, n/2]$ for large enough $n$. We define the explicit polynomial to be either $P^{(\Delta')}$ or the sum of two copies of $P^{(\Delta'-1)}$ depending on whether $\Delta$ is even or odd. Assume that $\Delta$ is even. Then, $\Delta = 2\Delta'.$ In this case, the explicit polynomial is $P^{(\Delta')}$ which has a $(\Pi\Sigma)^{\Delta'}\Pi$ formula of size $O(n_{\Delta'}) = O(n)$ by Proposition~\ref{prop:poly}. Note that a $(\Pi\Sigma)^{\Delta'}\Pi$ formula is of depth $2\Delta'+1 = \Delta+1.$ This gives the upper bound. For the lower bound, we use Theorem~\ref{thm:PDeltalbd}. Any circuit $C$ of size at most $s$ and depth at most $\Delta = 2\Delta'$ can be converted to one of size at most $s$ and product-depth at most $\Delta'$ as follows. If $C$ contains two $\times$-gates $\Psi_1$ and $\Psi_2$ where $\Psi_1$ feeds into $\Psi_2$, we merge $\Psi_1$ and $\Psi_2$. Repeated applications of this procedure yield a circuit of depth at most $2\Delta'$ in which no input-to-output path can contain consecutive $\times$-gates. This circuit must have product-depth at most $\Delta^{\prime}.$ Clearly, this process can only reduce the size of the circuit.
By Theorem~\ref{thm:PDeltalbd}, we see that if $C$ computes $P^{(\Delta')}$ it must have size at least $\exp(n_{\Delta'}^{\Omega(1/\Delta^{\prime})}) = \exp(n^{\Omega(1/\Delta)}).$ Now consider the case when $\Delta$ is odd. Then $\Delta = 2\Delta'-1.$ In this case, we define the explicit polynomial to be $Q = P^{(\Delta'-1)}(X_1) + P^{(\Delta'-1)}(X_2)$ where $X_1$ and $X_2$ are disjoint copies of $X^{(\Delta'-1)}.$ Note that the number of variables in $Q$ is $2n_{\Delta'-1} \leq n_{\Delta^{\prime}}\leq n$ and by Proposition~\ref{prop:poly}, $Q$ has a $\Sigma(\Pi\Sigma)^{\Delta'-1}\Pi$ formula of size $O(n_{\Delta'-1}) = O(n)$. Note that a $\Sigma(\Pi\Sigma)^{\Delta'-1}\Pi$ formula is of depth $2(\Delta'-1)+2 = \Delta+1.$ This gives the upper bound. For the lower bound, we proceed as was done in the case where $\Delta$ is even. As before, we can assume that the most efficient circuit $C$ of depth $\Delta$ for $Q$ has the property that no path in $C$ contains consecutive $\times$-gates. Further, since $Q$ is easily seen to be an irreducible polynomial, we can also assume that the output gate is a $+$-gate. This implies that the product-depth of $C$ is at most $\lfloor \Delta/2\rfloor \leq \Delta'-1.$ Applying Theorem~\ref{thm:PDeltalbd} now yields the lower bound as in the even case. \end{proof} Theorem~\ref{thm:PDeltalbd} is proved by induction on the parameter $\Delta.$ For the purposes of induction, we need to prove a more technical statement from which we can easily infer Theorem~\ref{thm:PDeltalbd}. We give this technical statement below. Recall that for a random multilinear $(X,Y,Z)$-restriction $\rho$, $\tilde{Y} = \mathrm{Img}(\rho)\cap Y$ and $\tilde{Z} = \mathrm{Img}(\rho)\cap Z$. Note that these are \emph{random} sets.
Now, given a multilinear polynomial $f\in \mathbb{F}[X'\cup Y'\cup Z']$ where $X'\subseteq X$ and $Y',Z'$ are disjoint sets of variables, the polynomial $f|_\rho$ is a multilinear polynomial in $\mathbb{F}[\tilde{Y}\cup \tilde{Z}\cup Y'\cup Z'].$ \begin{theorem} \label{thm:technical} Let $m,k,\Delta, M\in \mathbb{N}$ be growing positive integer parameters with $k = m^{0.05}$ and $M \geq m/10.$ Let $\varepsilon > 0$ be such that $\varepsilon \geq 1/M^{0.25}.$ Let $\Delta' \geq \Delta$ be such that $\Delta'\leq m^{0.001}.$ Let $X^{(\Delta')}_{(\omega_1,j_1)},\ldots, X^{(\Delta')}_{(\omega_M,j_M)}$ be arbitrary distinct $X^{(\Delta)}$-segments. Let $X = \bigcup_{i\in [M]} X^{(\Delta')}_{(\omega_i,j_i)}, Y = \bigcup_{i\in [M]} Y^{(\Delta')}_{(\omega_i,j_i)},$ and $Z = \bigcup_{i\in [M]} Z^{(\Delta')}_{(\omega_i,j_i)}$. Assume that $X' = \bigcup_{i\in [M]} X'_i$ where for each $i\in [M]$, $X'_i \subseteq X^{(\Delta')}_{(\omega_i,j_i)}$ satisfies $|X'_i| \geq \varepsilon\cdot |X^{(\Delta')}_{(\omega_i,j_i)}|.$ Let $F$ be any $(\Sigma\Pi)^{\Delta}\Sigma$ syntactically multilinear variable-labelled\footnote{See Section~\ref{sec:struct} for the definition of variable-labelled formulas.} formula with $\mathrm{Var}s(F) = X'\dot\cup Y'\dot\cup Z'$ where $Y'$ and $Z'$ are arbitrary sets of variables that are disjoint from $X'$. Assume that the size of $F$ is $s\leq \exp(k^{0.1}/\Delta^2).$ For each $i\in [M]$, let $\rho_i$ be an \emph{independent} multilinear random restriction obtained by using the sampling algorithm $\mc{A}$ described in Section~\ref{sec:restriction} to sample an $(X^{(\Delta')}_{(\omega_i,j_i)},Y^{(\Delta')}_{(\omega_i,j_i)},Z^{(\Delta')}_{(\omega_i,j_i)})$-restriction. Let $\rho = \rho_1\circ \cdots \circ \rho_M$ be the resulting $(X,Y,Z)$-random restriction. Let $\tilde{Y}' = \rho(X')\cap Y$ and $\tilde{Z}' = \rho(X')\cap Z$.\footnote{ Note that these are \emph{random} sets.
Also note that for each fixing of $\rho$, the restricted formula $F|_\rho$ computes a multilinear polynomial in $\mathbb{F}[\tilde{Y}'\cup \tilde{Z}'\cup Y'\cup Z']$} Then, we have \[ \prob{\rho}{\mathrm{relrk}_{(Y'\cup \tilde{Y}',Z'\cup \tilde{Z}')}(F|_\rho) \geq \exp(-k^{0.1}/\Delta)} \leq \Delta \exp(-k^{0.1}/\Delta). \] \end{theorem} \begin{remark} \label{rem:technical} Note that the $X^{(\Delta)}$-segments that appear in the statement of Theorem~\ref{thm:technical} are $X^{(\Delta)}$-segments contained in $X^{(\Delta')}$ for some $\Delta'\geq \Delta.$ This general form of the theorem is needed when we are proving lower bounds for multilinear formulas of product-depth $\Delta'.$ During the course of the inductive proof, the depth of the circuit reduces iteratively and at some intermediate point in the proof when the depth is down to $\Delta$, we end up in the situation described in Theorem~\ref{thm:technical}. \end{remark} We postpone the proof of the above theorem to Section~\ref{sec:technical}, and deduce Theorem~\ref{thm:PDeltalbd} from it below. \begin{proof}[Proof of Theorem~\ref{thm:PDeltalbd}] Let $C$ be any multilinear circuit of product-depth at most $\Delta$ computing $P^{(\Delta)}.$ Let $s$ denote the size of $C$. By Lemma~\ref{lem:RY-nf-ckts}, there is a $(\Sigma\Pi)^{\Delta}\Sigma$ syntactic multilinear formula $F$ of size $s' = s^{O(\Delta)}$ computing $P^{(\Delta)}.$ We can assume that $F$ does not use any variables outside $X^{(\Delta)}$ since such variables may be safely set to $0$ without affecting the polynomial being computed. Recall that $X^{(\Delta)}$ is the disjoint union of its segments $X^{(\Delta)}_1,\ldots,X^{(\Delta)}_m$. 
The random restriction $\rho^{(\Delta)}$ from Section~\ref{sec:restriction} is defined to be $\rho^{(\Delta)} = \rho_1^{(\Delta)} \circ \cdots\circ \rho_m^{(\Delta)}$ where each $\rho_i^{(\Delta)}$ is an independent $(X^{(\Delta)}_i,Y^{(\Delta)}_i,Z^{(\Delta)}_i)$-restriction sampled using the algorithm $\mc{A}.$ Let $\tilde{Y}$ and $\tilde{Z}$ denote $\mathrm{Img}(\rho^{(\Delta)})\cap (\bigcup_i Y_i^{(\Delta)})$ and $\mathrm{Img}(\rho^{(\Delta)})\cap (\bigcup_i Z_i^{(\Delta)})$ respectively. Consider the restricted polynomials $P' = P^{(\Delta)}|_{\rho}$ and $F' = F|_{\rho}$. By Lemma~\ref{lem:fullrank}, we know that $\mathrm{relrk}_{(\tilde{Y},\tilde{Z})}(P') = 1$ with probability $1$. On the other hand, applying Theorem~\ref{thm:technical} with $M=m$, $\varepsilon = 1$, $\Delta'=\Delta = m^{o(1)}$ and $Y' = Z' = \emptyset,$ we get \[ \prob{\rho}{\mathrm{relrk}_{(\tilde{Y},\tilde{Z})}(F') \geq \exp(-k^{0.1}/\Delta)}\leq \Delta \exp(-k^{0.1}/\Delta) = o(1) \] if $s'\leq \exp(k^{0.1}/\Delta^2)$. The final inequality above follows from the fact that $\Delta \leq m^{o(1)}$ and hence $\Delta \exp(-k^{0.1}/\Delta) = \Delta \exp(-m^{\Omega(1)}/\Delta) = o(1).$ Since the formula $F'$ computes $P'$, we must therefore have $s' > \exp(k^{0.1}/\Delta^2) = \exp(m^{\Omega(1)}/\Delta^2)$ which yields $s = \exp(m^{\Omega(1)}/\Delta^3)$. Since $\Delta = m^{o(1)},$ this means that $s = \exp(m^{\Omega(1)}) = \exp((n_\Delta)^{\Omega(1/\Delta)}).$ \end{proof} \section{Proof of Theorem~\ref{thm:technical}} \label{sec:technical} The proof of Theorem~\ref{thm:technical} will be by induction on the product-depth $\Delta.$ The base case $\Delta = 1$ is handled in Section~\ref{sec:base-technical}. For the induction case $\Delta > 1,$ we proceed as follows. Let $F$ be any formula as in the statement of Theorem~\ref{thm:technical}.
We can write \begin{equation} \label{eq:F-decompose} F = \sum_{i\in [s]} F_i \end{equation} where each $F_i$ is a $\Pi (\Sigma\Pi)^{\Delta - 1}\Sigma$ variable-labelled syntactic multilinear formula of size at most $s$. Further, note (see Section~\ref{sec:struct}) that $\mathrm{Var}s(F_i) = \mathrm{Var}s(F)$ for each $i$ by the property of the variable-labelling $\mathrm{Var}s(\cdot).$ We claim that it suffices to show the following. For any $\Pi (\Sigma\Pi)^{\Delta - 1}\Sigma$ variable-labelled syntactic multilinear formula $F'$ with $\mathrm{Var}s(F') = X'\cup Y'\cup Z'$ of size at most $s\leq \exp(k^{0.1}/\Delta^2),$ we have \begin{equation} \label{eq:relrkPigate} \prob{\rho}{\mathrm{relrk}_{(\tilde{Y}'\cup Y',\tilde{Z}'\cup Z')}(F'|_\rho) \geq \exp(-k^{0.1}/(\Delta-1))}\leq \Delta \exp(-k^{0.1}/(\Delta-1)). \end{equation} Assuming (\ref{eq:relrkPigate}) for now, we proceed as follows. Applying (\ref{eq:relrkPigate}) to the formulas $F_1,\ldots,F_s$ and using a union bound, we have \[ \prob{\rho}{\exists i\in [s],\ \mathrm{relrk}_{(\tilde{Y}'\cup Y',\tilde{Z}'\cup Z')}(F_{i}|_\rho) \geq \exp(-k^{0.1}/(\Delta-1))} \leq \Delta \exp(-k^{0.1}/(\Delta-1))\cdot s \leq \Delta \exp(-k^{0.1}/\Delta) \] where for the last inequality, we have used the fact that $s\leq \exp(k^{0.1}/\Delta^2).$ Note that if $\mathrm{relrk}_{(\tilde{Y}'\cup Y',\tilde{Z}'\cup Z')}(F_i|_\rho) < \exp(-k^{0.1}/(\Delta-1))$ for each $i\in [s]$, the subadditivity of relative rank (Proposition~\ref{prop:relrk} item 2) implies that \[ \mathrm{relrk}_{(\tilde{Y}'\cup Y',\tilde{Z}'\cup Z')}(F|_\rho) \leq s\cdot \exp(-k^{0.1}/(\Delta-1)) \leq \exp(-k^{0.1}/\Delta), \] where for the last inequality we have again used $s\leq \exp(k^{0.1}/\Delta^2).$ We have thus shown that \[ \prob{\rho}{\mathrm{relrk}_{(\tilde{Y}'\cup Y',\tilde{Z}'\cup Z')}(F|_\rho) \geq \exp(-k^{0.1}/\Delta)} \leq \Delta \exp(-k^{0.1}/\Delta) \] which proves the theorem.
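For completeness, both uses of $s\leq \exp(k^{0.1}/\Delta^2)$ above reduce to the same elementary estimate on the exponents, valid for all $\Delta\geq 2$:
\[
\frac{1}{\Delta-1} - \frac{1}{\Delta^2} \;\geq\; \frac{1}{\Delta-1} - \frac{1}{\Delta(\Delta-1)} \;=\; \frac{1}{\Delta-1}\cdot \frac{\Delta-1}{\Delta} \;=\; \frac{1}{\Delta},
\]
so that $s\cdot \exp(-k^{0.1}/(\Delta-1)) \leq \exp\big(k^{0.1}/\Delta^2 - k^{0.1}/(\Delta-1)\big) \leq \exp(-k^{0.1}/\Delta).$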
It remains to prove (\ref{eq:relrkPigate}), which is the main technical part of the proof. Fix an $F'$ as in (\ref{eq:relrkPigate}) and assume that \begin{equation} \label{eq:F'-decompose} F' = \prod_{i\in [t]}F'_i \end{equation} where the $F'_i$ are the constituent $(\Sigma\Pi)^{(\Delta-1)}\Sigma$ formulas of $F'$. Recall that the $\mathrm{Var}s(F'_i)$ ($i\in [t]$) partition $\mathrm{Var}s(F') = X'\dot\cup Y'\dot\cup Z'.$ For $\delta > 0,$ $i\in [t]$ and $j\in [M]$, we say that $F'_i$ is \emph{$\delta$-heavy} in segment $X_j$ if $|\mathrm{Var}s(F'_i) \cap X_j|\geq \delta \cdot |X_j|.$ We say that $X_j$ is \emph{$\delta$-shattered} if there is no $i\in [t]$ such that $F'_i$ is $\delta$-heavy in $X_j.$ The proof of (\ref{eq:relrkPigate}) breaks down into the following three cases. Roughly, in case $1$, we can prove (\ref{eq:relrkPigate}) directly, whereas in cases $2$ and $3$, we appeal to the inductive hypothesis. \begin{itemize} \item {\bf Case 1:} At least $M/2$ segments are $(1/4)$-shattered and further, no $F'_i$ is $(\varepsilon/k)$-heavy in at least $M/k$ segments $X_j$. \item {\bf Case 2:} At least $M/2$ segments are \emph{not} $(1/4)$-shattered. \item {\bf Case 3:} There is some $F'_i$ ($i\in [t]$) that is $(\varepsilon/k)$-heavy in at least $M/k$ many segments $X_j.$ \end{itemize} We show (\ref{eq:relrkPigate}) in each of the above cases in Section~\ref{sec:cases1-4} below. This will conclude the proof of the theorem. \paragraph{Notation.} We now define some notation that will be useful in the remainder of the proof. \begin{itemize} \item For brevity, we use $X_i,Y_i,$ and $Z_i$ ($i\in [M]$) to denote the $X^{(\Delta)}$-segment $X^{(\Delta')}_{(\omega_i,j_i)}$, the $Y^{(\Delta)}$-segment $Y^{(\Delta')}_{(\omega_i,j_i)}$ and the $Z^{(\Delta)}$-segment $Z^{(\Delta')}_{(\omega_i,j_i)}$ respectively. The corresponding half-segments of $X_i,Y_i$ and $Z_i$ are denoted $X_{i,j}, Y_{i,j}$ and $Z_{i,j}$ respectively (for $j\in [2]$).
Further $X^{(\Delta-1)}$-segments of $X_{i,j}$ are denoted $X_{i,j,p}$ ($p\in [m]$) and similarly we also have $Y_{i,j,p}$ and $Z_{i,j,p}.$ \item For each $i\in [M]$, let $y_i,z_i$ denote the variables $y^{(\Delta')}_{(\omega_i,j_i)}$ and $z^{(\Delta')}_{(\omega_i,j_i)}$ in the sets $Y_i$ and $Z_i$ respectively. Recall (see Section~\ref{sec:polynomial}) that we have $Y_i = Y_{i,1}\cup Y_{i,2} \cup \{y_i\}$ and $Z_i = Z_{i,1}\cup Z_{i,2} \cup \{z_i\}$. \item For each $i\in [M]$, let $\tilde{Y}_i$ and $\tilde{Z}_i$ denote the random sets $\rho_i(X_i)\cap Y_i$ and $\rho_i(X_i)\cap Z_i$ respectively. Let $\tilde{Y}$ and $\tilde{Z}$ denote $\bigcup_{i\in [M]} \tilde{Y}_i$ and $\bigcup_{i\in [M]}\tilde{Z}_i$ respectively. Also, let $\tilde{Y}'_i$ and $\tilde{Z}'_i$ denote the random sets $\rho_i(X_i')\cap Y_i$ and $\rho_i(X_i')\cap Z_i$ respectively. \end{itemize} \subsection{The base case of Theorem~\ref{thm:technical}} \label{sec:base-technical} Let $F$ be a $\Sigma \Pi \Sigma$ syntactic multilinear formula over the variable set $X'\cup Y'\cup Z'$. We can write $F = T_1+\cdots + T_s$ where each $T_i$ is a $\Pi\Sigma$ syntactic multilinear formula. We claim that for any $\Pi\Sigma$ syntactic multilinear formula $T$, we have \begin{equation} \label{eq:prodLi} \prob{\rho}{\mathrm{relrk}_{(\tilde{Y}'\cup Y',\tilde{Z}'\cup Z')}(T|_{\rho})\geq \exp(-2k^{0.1})}\leq \exp(-2k^{0.1}).
\end{equation} Assuming (\ref{eq:prodLi}) we are done, since for any fixing of the random restriction $\rho$ (which is a copy of the restriction $\rho^{(1)}$ from Section~\ref{sec:restriction}), we have by Proposition~\ref{prop:relrk} Item 2 that \begin{align*} \mathrm{relrk}_{(\tilde{Y}'\cup Y',\tilde{Z}'\cup Z')}(F|_{\rho}) &\leq \sum_{i=1}^s \mathrm{relrk}_{(\tilde{Y}'\cup Y',\tilde{Z}'\cup Z')}(T_i|_{\rho})\leq s\cdot \max_{i}\left(\mathrm{relrk}_{(\tilde{Y}'\cup Y',\tilde{Z}'\cup Z')}(T_i|_{\rho})\right)\\ &\leq \exp(k^{0.1})\cdot \max_{i}\left(\mathrm{relrk}_{(\tilde{Y}'\cup Y',\tilde{Z}'\cup Z')}(T_i|_{\rho})\right) \end{align*} and hence \begin{align*} \prob{\rho}{\mathrm{relrk}_{(\tilde{Y}'\cup Y',\tilde{Z}'\cup Z')}(F|_{\rho}) \geq \exp(-k^{0.1})} & \leq \prob{\rho}{\exists i\ \mathrm{relrk}_{(\tilde{Y}'\cup Y',\tilde{Z}'\cup Z')}(T_i|_{\rho}) \geq \exp(-2k^{0.1})} \\ & \leq \frac{s}{\exp(2k^{0.1})} \leq \exp(-k^{0.1}) \end{align*} where we have used (\ref{eq:prodLi}) and a union bound for the second inequality. It remains to prove (\ref{eq:prodLi}). To see this, we proceed as follows. Assume that the output $\times$-gate of $T$ is $\Phi$ and let $\Psi_1,\ldots,\Psi_r$ be the $+$-gates feeding into it and let $L_1,\ldots,L_r$ be the linear functions computed by these gates. Recall from Proposition~\ref{prop:vars} that we can assign to each $\Psi_i$ a variable set $\mathrm{Var}s(\Psi_i)\subseteq X^{'}\cup Y'\cup Z'$ such that $\{\mathrm{Var}s(\Psi_i)\ |\ i\in [r]\}$ induces a partition of $X^{'}\cup Y'\cup Z'$ and further $\mathrm{Var}s(\Psi_i)$ contains $\mathrm{Supp}(\Psi_i)$.\footnote{$\mathrm{Supp}(\Psi_i)$ is the set of variables that actually appear in the subformula rooted at $\Psi_i$ (cf. Section~\ref{sec:struct}).} Henceforth, we use $\mathrm{Var}s(L_i)$ to denote $\mathrm{Var}s(\Psi_i)$ for each $i\in [r]$. 
We divide the gates $\Psi_1,\ldots,\Psi_r$ into two classes: those gates such that $\mathrm{Var}s(L_i)\cap X^{'} \neq \emptyset$ and those for which $\mathrm{Var}s(L_i)\cap X^{'} = \emptyset$; let $t$ denote the number of gates of the former kind. Without loss of generality, we can assume that $\Psi_1,\ldots,\Psi_t$ are the gates such that $\mathrm{Var}s(L_i)\cap X^{'} \neq \emptyset$ and $\Psi_{t+1},\ldots,\Psi_r$ the rest. We write the polynomial $T$ as $T = L_1\cdots L_t\cdot Q'$ where $Q' = \prod_{i > t}L_i.$ For each $i\in [t]$ and $j\in [M]$, let $\hat{Y}_{i,j}$ and $\hat{Z}_{i,j}$ denote the random sets $\rho(\mathrm{Var}s(L_i)\cap X'_j)\cap Y$ and $\rho(\mathrm{Var}s(L_i)\cap X'_j)\cap Z$ respectively. We also denote by $\hat{Y}_{i,0}$ and $\hat{Z}_{i,0}$ the (non-random) sets $\mathrm{Var}s(L_i)\cap Y'$ and $\mathrm{Var}s(L_i)\cap Z'.$ Let us define $\hat{Y}_{i} = \bigcup_{j\in [M]\cup \{0\}}\hat{Y}_{i,j}$ and $\hat{Z}_{i} = \bigcup_{j\in [M]\cup \{0\}}\hat{Z}_{i,j}$. Finally, let $\hat{Y}_{t+1}$ and $\hat{Z}_{t+1}$ denote the (non-random) sets $\bigcup_{i > t} \mathrm{Var}s(L_i)\cap Y'$ and $\bigcup_{i > t} \mathrm{Var}s(L_i)\cap Z'.$ Note that the sets $\tilde{Y}'\cup Y'$ can be partitioned as $Y'\cup \bigcup_{j\in [M]} \tilde{Y}'_j$ and also as $\bigcup_{i\in [t+1]}\hat{Y}_i = \left(\bigcup_{i\in [t], j\in [M] \cup \{0\}} \hat{Y}_{i,j}\right)\cup \hat{Y}_{t+1}.$ A similar statement is true for $\tilde{Z'}\cup Z'$ as well. Upon the application of the random restriction $\rho$ (as described in Section~\ref{sec:restriction}), each $L_i$ ($i\in [t]$) restricts to a linear function $L_i|_{\rho} = \tilde{L}_i\in \mathbb{F}[\hat{Y}_i\cup \hat{Z}_i]$, while the polynomial $Q'$ is unaffected by the restriction $\rho$. 
Recall that $\rho$ is the composition of \emph{independent} restrictions $\rho_1,\ldots,\rho_{M}$, where each $\rho_j$ ($j\in[M]$) only affects variables in $X_j$ and is sampled using the algorithm $\mc{A}$ of Section~\ref{sec:restriction}. By Proposition~\ref{prop:relrk} items 3 and 1, we know that for any choice of $\rho$ \begin{align} \mathrm{relrk}_{(Y'\cup \tilde{Y}',Z'\cup \tilde{Z}')}(T|_\rho) &= \left(\prod_{i\in [t]}\mathrm{relrk}_{(\hat{Y}_i, \hat{Z}_i )}(L_i|_{\rho})\right)\cdot \mathrm{relrk}_{(\hat{Y}_{t+1},\hat{Z}_{t+1})}(Q')\notag\\ & \leq \prod_{i\in [t]}\mathrm{relrk}_{(\hat{Y}_i, \hat{Z}_i )}(L_i|_{\rho}).\label{eq:relrkT} \end{align} To bound $\mathrm{relrk}_{(Y'\cup \tilde{Y}',Z'\cup \tilde{Z}')}(T|_\rho),$ we use (\ref{eq:relrkT}) along with a case analysis depending on the value of $t$. Before we do the case analysis, we present two technical lemmas which will be useful. \begin{lemma} \label{lem:size-tildey'} For $\tilde{Y}', \tilde{Z}'$ defined as in the statement of Theorem~\ref{thm:technical}, $\prob{\rho}{|\tilde{Y}'|+ |\tilde{Z}'| \leq M/200} \leq \exp(-\Omega(M)).$ \end{lemma} \begin{proof} For $j\in [M]$, let $v_j$ be a $\{0,1\}$-valued random variable which takes value $1$ if and only if $\tilde{Y}_j' \cup \tilde{Z}_j' \neq \emptyset$. For each $j\in [M]$, $v_j$ depends only on $\rho_j$; in particular, the $v_j$'s are independent random variables. Let $v = \sum_j v_j$. Note that $v \leq |\tilde{Y}'| + |\tilde{Z}'|$. We first claim that for each $j$, $\prob{\rho_j}{\tilde{Y}_j' \cup \tilde{Z}_j' \neq \emptyset}$ is at least $1/48$. To see this, first note that $|X_j'|\geq \varepsilon |X_j| > 0$ (the non-emptiness of $X_j'\subseteq X_j$ is all we will need). Fix any variable $x \in X'_j$.
From the description of the sampling algorithm $\mc{A}$ for the random restriction $\rho$, it easily follows that the probability with which $\rho_j$ sets $x$ to a variable in $Y_j\cup Z_j$ is at least $1/48$.\footnote{W.l.o.g., say $x\in X_{j,1}$. Then $\mc{A}$ sets $x$ to a variable whenever option $E_3$ is chosen (this happens with probability $1/3$) and further $x$ is the variable chosen in $X_{j,1}$ to set to a variable (this happens with probability $1/16$).} Hence, $\avg{\rho_j}{v_j} \geq 1/48$. By linearity of expectation, we get that $\avg{\rho}{v} \geq M/48$. Using the Chernoff bound (Theorem~\ref{thm:Chernoff}) we get that $\prob{\rho}{v \leq M/200} \leq \exp(-\Omega(M))$. As $v \leq |\tilde{Y}'| + |\tilde{Z}'|$, we get that $\prob{\rho}{|\tilde{Y}'|+ |\tilde{Z}'| \leq M/200} \leq \exp(-\Omega(M)).$ \end{proof} \begin{lemma} \label{lem:rank-loss} For each $i \in [t]$, $\prob{\rho}{\mathrm{relrk}_{(\hat{Y}_i, \hat{Z}_i)}(L_i|_\rho) \leq 1/\sqrt{2}} \geq 1/1000$. \end{lemma} \begin{proof} Fix an $L_i$ for $i\in [t]$. Let us consider the smallest $j_0$ such that $X'_{j_0} \cap \mathrm{Var}s(L_i) \neq \emptyset$. We consider two possibilities for the relationship between $X_{j_0}$ and $\mathrm{Var}s(L_i)$: either $\mathrm{Var}s(L_i) \cap X_{j_0} \subsetneq X_{j_0}$ or $X_{j_0} \subseteq \mathrm{Var}s(L_i)$. We fix $\rho_j$ for all $j \in [M]$ such that $j \neq j_0$. Let $\hat{Y}'_i = \cup_{j \neq j_0} \hat{Y}_{i,j}$ and $\hat{Z}'_i = \cup_{j \neq j_0} \hat{Z}_{i,j}$. \paragraph{\bf Case I: $X_{j_0} \subseteq \mathrm{Var}s(L_i)$.} In this case, if the sampling step for $\rho_{j_0}$ chooses option $E_1$ or $E_2$, then from the description of these options in the algorithm $\mc{A}$ we can see that $|\hat{Y}_{i,j_0}| = |\rho(X_{j_0})\cap Y| = 2$ and similarly $|\hat{Z}_{i,j_0}|=2$ as well.
This implies that $|\hat{Y}_i|,|\hat{Z}_i|\geq 2.$ On the other hand, we also claim that $\mathrm{rank}(M_{(\hat{Y}_i, \hat{Z}_i)}(L_i|_{\rho})) \leq 2.$ To see this, note that each $L_i|_{\rho}$ can be written as $L_i'(\hat{Y}_i) + L_i''(\hat{Z}_i)$ and for any polynomial $Q$ that depends only on the variables in either $\hat{Y}_i$ or $\hat{Z}_i$, we have $\mathrm{rank}(M_{(\hat{Y}_i,\hat{Z}_i)}(Q))\leq 1.$ The subadditivity of matrix rank now implies that $\mathrm{rank}(M_{(\hat{Y}_i,\hat{Z}_i)}(L_i|_\rho)) \leq 2.$ In particular, we see that whenever option $E_1$ or $E_2$ is chosen, we have \[ \mathrm{relrk}_{(\hat{Y}_i,\hat{Z}_i)}(L_i|_{\rho}) \leq \frac{2}{2^{\frac{1}{2}(|\hat{Y}_i| + |\hat{Z}_i|)}}\leq \frac{1}{2}. \] Since one of $E_1$ or $E_2$ is chosen with probability $2/3$, we get the desired bound in this case. \paragraph{\bf Case II: $\mathrm{Var}s(L_i) \cap X_{j_0} \subsetneq X_{j_0}$.} As mentioned earlier, we already fixed the restrictions $\rho_j$ for $j \neq j_0$. Thus, the sets $\hat{Y}'_i$ and $ \hat{Z}'_i$ have been fixed. Let $b \in \{0,1\}$ be such that $|\hat{Y}_i'\cup \hat{Z}'_i| \equiv b \pmod{2}$. We will show that \begin{equation} \label{eq:odd-even} \prob{\rho_{j_0}}{|\hat{Y}_{i,j_0} \cup \hat{Z}_{i,j_0}| \equiv (1-b) \pmod 2} \geq 1/1000. \end{equation} Assuming (\ref{eq:odd-even}) for now, we get that with probability at least $1/1000$, $| \hat{Y}_i \cup \hat{Z}_i| = |\hat{Y}_{i,j_0} \cup \hat{Z}_{i,j_0}| + |\hat{Y}_i'\cup \hat{Z}'_i|$ is odd. Therefore using Item 1 of Proposition~\ref{prop:relrk} we are done. This concludes the proof of Lemma~\ref{lem:rank-loss} assuming (\ref{eq:odd-even}). To see (\ref{eq:odd-even}), let us assume that $b=1$. (We only present the details for this case. The case of $b=0$ is similar.) If $b=1$, we need to show that $|\hat{Y}_{i,j_0} \cup \hat{Z}_{i,j_0}| = |\rho_{j_0}(\mathrm{Var}s(L_i)\cap X'_{j_0}) \cap (Y_{j_0}\cup Z_{j_0})|$ is even with probability at least $1/1000$.
The following three possibilities arise: \begin{enumerate} \item[(a)] $\mathrm{Var}s(L_i)\cap X_{j_0,1} \neq \emptyset$ and $\mathrm{Var}s(L_i)\cap X_{j_0,2} \neq \emptyset$, \item[(b)] $X_{j_0,1} \setminus \mathrm{Var}s(L_i) \neq \emptyset$ and $X_{j_0,2} \setminus \mathrm{Var}s(L_i) \neq \emptyset$, or \item[(c)] $\mathrm{Var}s(L_i)\cap X_{j_0} = X_{j_0,1}$ or $\mathrm{Var}s(L_i)\cap X_{j_0} = X_{j_0,2}$. \end{enumerate} In each case we will show that $|\hat{Y}_{i,j_0}\cup \hat{Z}_{i,j_0}|$ is even with probability at least $1/1000$. For (a), if $\rho_{j_0}$ chooses option $E_3$, then with probability at least $1/8$ one of the variables in $\mathrm{Var}s(L_i)\cap X_{j_0,1}$ (resp.\ $\mathrm{Var}s(L_i)\cap X_{j_0,2}$) will be set to $y_{j_0}$ (resp.\ $z_{j_0}$). The option $E_3$ is chosen with probability $1/3$. Therefore, with probability at least $1/(8\cdot 8 \cdot 3)\geq 1/1000$, $|\hat{Y}_{i,j_0} \cup \hat{Z}_{i,j_0}| = 2$ and is thus even. For (b), observe that if $\rho_{j_0}$ chooses option $E_3$, then with probability at least $1/8$ one of the variables in $X_{j_0,1} \setminus \mathrm{Var}s(L_i)$ (resp.\ $X_{j_0,2} \setminus \mathrm{Var}s(L_i)$) will be set to $y_{j_0}$ (resp.\ $z_{j_0}$) and all other variables in $X_{j_0}$ are set to constants. As this implies that all variables in $\mathrm{Var}s(L_i)$ are set to constants, again with probability at least $1/(8\cdot 8 \cdot 3)$, $|\hat{Y}_{i,j_0} \cup \hat{Z}_{i,j_0}|=0$ and is thus even. For (c), let us assume that $\mathrm{Var}s(L_i)\cap X_{j_0} = X_{j_0,1}$. In this case, if $\rho_{j_0}$ chooses option $E_1$, then with probability $1$, $|\hat{Y}_{i,j_0} \cup \hat{Z}_{i,j_0}|=4$. The option $E_1$ is chosen with probability $1/3$. Similarly, if $\mathrm{Var}s(L_i)\cap X_{j_0} = X_{j_0,2}$ then if $\rho_{j_0}$ chooses option $E_2$, we have $|\hat{Y}_{i,j_0} \cup \hat{Z}_{i,j_0}|=4$. Again, option $E_2$ is chosen with probability $1/3$. This finishes the proof of (\ref{eq:odd-even}) in the case that $b=1$. 
For the case $b=0$, we employ a similar case analysis with the following cases. \begin{enumerate} \item[(a)] $\mathrm{Var}s(L_i)\cap X_{j_0,1} \neq \emptyset$ and $X_{j_0,2} \setminus \mathrm{Var}s(L_i) \neq \emptyset$, \item[(b)] $X_{j_0,1} \setminus \mathrm{Var}s(L_i) \neq \emptyset$ and $\mathrm{Var}s(L_i)\cap X_{j_0,2} \neq \emptyset$. \end{enumerate} The proof is similar and left to the reader. \end{proof} We now proceed to the proof of (\ref{eq:prodLi}), which will finish the proof of the base case of Theorem~\ref{thm:technical}. \vspace*{5pt} \noindent Recall that $t$ is the number of $+$-gates $\Psi_i$ such that $\mathrm{Var}s(L_{i})\cap X' \neq \emptyset$; by our relabelling, these are $\Psi_1,\ldots,\Psi_t$. \vspace*{5pt} \noindent \underline{$t\leq M/500$ (Case 1):} This case is quite straightforward. As noted in the proof of Case I of Lemma~\ref{lem:rank-loss}, $\mathrm{rank}(M_{(\hat{Y}_i, \hat{Z}_i)}(L_i|_{\rho})) \leq 2.$ In particular, the above yields \[ \mathrm{relrk}_{(\hat{Y}_i, \hat{Z}_i)}(L_i|_{\rho}) \leq \frac{2}{2^{\frac{1}{2}(|\hat{Y}_i|+ |\hat{Z}_i|)}} \] and hence using (\ref{eq:relrkT}), we have \[ \mathrm{relrk}_{(Y'\cup \tilde{Y}',Z'\cup \tilde{Z}')}(T|_\rho) \leq \frac{2^t}{2^{\frac{1}{2}(|\tilde{Y}'\cup Y'| + |\tilde{Z}'\cup Z'|)}}\leq \frac{2^t}{2^{\frac{1}{2}(|\tilde{Y}'| + |\tilde{Z}'|)}}. \] Using Lemma~\ref{lem:size-tildey'}, we know that $|\tilde{Y}'|+ |\tilde{Z}'| \leq M/200$ with probability at most $\exp(-\Omega(M))$. Therefore, with all but $\exp(-\Omega(M))$ probability, $$\mathrm{relrk}_{(Y'\cup \tilde{Y}',Z'\cup \tilde{Z}')}(T|_\rho) \leq \frac{2^{M/500}}{2^{M/400}} \leq \exp(-2k^{0.1}).$$ Moreover, for our setting of parameters of $M$ and $k$, $\exp(-\Omega(M)) \leq \exp(-2k^{0.1})$.
This proves (\ref{eq:prodLi}) in the case where $t\leq M/500.$ \vspace*{5pt} \noindent \underline{$t > M/500$ (Case 2):} For each $i\in [t]$, we define a Bernoulli random variable $V_i$ as follows: $V_i=1$ if $\mathrm{relrk}_{(\hat{Y}_i, \hat{Z}_i)}(L_i|_{\rho}) \leq 1/\sqrt{2}$ and it is $0$ otherwise. By Lemma~\ref{lem:rank-loss}, we know that $\avg{}{V_i} \geq 1/1000$. Let $V = \sum_{i \in [t]} V_i$. Then by linearity of expectation, $\avg{}{V} \geq t/1000$. Recall that $|X_j| = 16$ for every $j \in [M]$. The multilinearity of the formula $T$ implies that for any $i \neq i'$, $\mathrm{Var}s(L_i) \cap \mathrm{Var}s(L_{i'}) = \emptyset$. Therefore, for each segment $X_j$ the number of $L_i$ such that $\mathrm{Var}s(L_i)\cap X_j\neq \emptyset$ is at most $16$. This implies that the Bernoulli random variables $V_i$ are read-$16$ when viewed as functions of the independent random restrictions $\rho_1,\ldots,\rho_M.$ By the read-$k$ Chernoff bound (Theorem~\ref{thm:GLSS}),\footnote{The use of the read-$k$ Chernoff bound here can be easily circumvented by showing that a constant fraction of the $V_i$ ($i\in [t]$) are in fact completely independent, so that the standard Chernoff bound can be applied to their sum. We leave the details to the interested reader.} we thus obtain \[ \prob{}{ V < t/2000 } \leq \exp(-\Omega(t)) \leq \exp(-\Omega(M))\leq \exp(-2k^{0.1}). \] When $V \geq t/2000$, i.e., when at least $t/2000$ many $L_i$ are such that $\mathrm{relrk}_{(\hat{Y}_i, \hat{Z}_i)}(L_i|_{\rho}) \leq 1/\sqrt{2}$, we have by (\ref{eq:relrkT}), \begin{align*} \mathrm{relrk}_{(Y'\cup \tilde{Y}',Z'\cup \tilde{Z}')}(T|_\rho) &\leq \prod_{i\in [t]}\mathrm{relrk}_{(\hat{Y}_i,\hat{Z}_i)}(L_i|_{\rho}) \leq \frac{1}{(\sqrt{2})^{ t/2000}} \leq \exp(-2k^{0.1}). \end{align*} This proves (\ref{eq:prodLi}) in the case where $t > M/500.$ This completes the proof of the base case.
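As an aside, the rank fact driving the base case (a linear form $L = L'(Y) + L''(Z)$ has $\mathrm{rank}(M_{(Y,Z)}(L)) \leq 2$ by subadditivity of rank) can be checked mechanically. The following self-contained Python sketch is ours, not part of the paper; it assumes the standard coefficient-matrix definition of $M_{(Y,Z)}(f)$, with rows and columns indexed by multilinear monomials over $Y$ and $Z$, and the toy linear form is purely illustrative.

```python
from fractions import Fraction
from itertools import combinations

def powerset(s):
    """All subsets of s, as frozensets, in a fixed order."""
    s = sorted(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def rank(M):
    """Exact rank over the rationals, by Gaussian elimination."""
    M = [row[:] for row in M]
    rk = 0
    for col in range(len(M[0]) if M else 0):
        piv = next((r for r in range(rk, len(M)) if M[r][col] != 0), None)
        if piv is None:
            continue
        M[rk], M[piv] = M[piv], M[rk]
        for r in range(len(M)):
            if r != rk and M[r][col] != 0:
                f = M[r][col] / M[rk][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[rk])]
        rk += 1
    return rk

def pd_matrix(coeffs, Y, Z):
    """Coefficient matrix M_{(Y,Z)}(f): rows (resp. columns) indexed by
    multilinear monomials over Y (resp. Z); the (m_Y, m_Z) entry is the
    coefficient of m_Y * m_Z in f, given as a dict of coefficients."""
    rows, cols = powerset(Y), powerset(Z)
    return [[Fraction(coeffs.get((my, mz), 0)) for mz in cols] for my in rows]

# Toy linear form L = 3 + 2*y1 + y2 + 5*z1 - z2, which splits as L'(Y) + L''(Z).
Y, Z = {"y1", "y2"}, {"z1", "z2"}
L = {(frozenset(), frozenset()): 3,
     (frozenset({"y1"}), frozenset()): 2,
     (frozenset({"y2"}), frozenset()): 1,
     (frozenset(), frozenset({"z1"})): 5,
     (frozenset(), frozenset({"z2"})): -1}
M = pd_matrix(L, Y, Z)
print(rank(M))  # 2
```

On this example the rank is $2$, so $\mathrm{relrk} = 2/2^{(2+2)/2} = 1/2$, matching the bound used in Case I of Lemma~\ref{lem:rank-loss}.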
\subsection{The Inductive Case: Proof of (\ref{eq:relrkPigate}) in Cases 1-3} \label{sec:cases1-4} The following piece of notation will be useful for the inductive case. For each $i\in [t]$ and $j\in [M]$, let $\hat{Y}_{i,j}$ and $\hat{Z}_{i,j}$ denote the random sets $\rho(\mathrm{Var}s(F'_i)\cap X'_j)\cap Y$ and $\rho(\mathrm{Var}s(F'_i)\cap X'_j)\cap Z$ respectively. We also denote by $\hat{Y}_{i,0}$ and $\hat{Z}_{i,0}$ the (non-random) sets $\mathrm{Var}s(F'_i)\cap Y'$ and $\mathrm{Var}s(F'_i)\cap Z'.$ Let us define $\hat{Y}_{i} = \bigcup_{j\in [M]\cup \{0\}}\hat{Y}_{i,j}$ and $\hat{Z}_{i} = \bigcup_{j\in [M]\cup \{0\}}\hat{Z}_{i,j}$. Note that the sets $\tilde{Y}'\cup Y'$ can be partitioned as $Y'\cup \bigcup_{j\in [M]} \tilde{Y}'_j$ and also as $\bigcup_{i\in [t]}\hat{Y}_i = \bigcup_{i\in [t], j\in [M]\cup \{0\}} \hat{Y}_{i,j}.$ A similar statement is true for $\tilde{Z}'\cup Z'$ as well. \subsubsection{Case 1} \label{sec:case1} By renaming the segments if necessary, we assume that $X_1,\ldots,X_{M/2}$ are $(1/4)$-shattered. For each $j \leq M/2$, the restriction $\rho_j$ is sampled according to the algorithm $\mc{A}$ described in Section~\ref{sec:restriction}. We say that $\rho_j$ is \emph{good} if the algorithm $\mc{A}$ decides on option $E_3$ in sampling $\rho_j.$ For each $j$, $\rho_j$ is good with probability $1/3$. Let $\mc{E}_1$ denote the event that the number of good $\rho_j$ is at most $M/8$. Since different $\rho_j$'s are sampled independently, a Chernoff bound (Theorem~\ref{thm:Chernoff} Item 1) tells us that \begin{equation} \label{eq:case1chernoff} \prob{\rho}{\mc{E}_1}\leq \exp(-\Omega(M)). \end{equation} For each $j\in [M/2],$ we condition on whether or not $\rho_j$ is good. By (\ref{eq:case1chernoff}), the probability that $\mc{E}_1$ occurs is at most $\exp(-\Omega(M)).$ Assume that the event $\mc{E}_1$ does not occur. By renaming segments once more, we may assume that $\rho_j$ is good for each $j\in [M/8]$.
We condition on any choice of the restrictions $\rho_j$ for $j > M/8$; this fixes the sets $\tilde{Y}'_j$ and $\tilde{Z}'_j$ defined above for $j > M/8$. We now observe the following from the description of the sampling algorithm $\mc{A}$, specifically option $E_3$ of $\mc{A}$. Conditioned on our choices so far, each $\rho_j$ for $j\in [M/8]$ is now a random $(X_j,\{y_j\},\{z_j\})$-restriction. For clarity, we call this conditioned random restriction $\rho_j'$. We now proceed to analyzing $\mathrm{relrk}_{(\tilde{Y}'\cup Y',\tilde{Z}'\cup Z')}(F'|_\rho)$. Since $\tilde{Y}'\cup Y'$ (resp.\ $\tilde{Z}'\cup Z'$) can be partitioned as $\bigcup_{i\in [t]} \hat{Y}_i$ (resp.\ $\bigcup_{i\in [t]}\hat{Z}_i$), we know (Proposition~\ref{prop:relrk} Item 3) that for any choice of $\rho,$ \begin{equation} \label{eq:case1relrkF'} \mathrm{relrk}_{(\tilde{Y}'\cup Y',\tilde{Z}'\cup Z')}(F'|_\rho) = \prod_{i\in [t]}\mathrm{relrk}_{(\hat{Y}_i,\hat{Z}_i)}(F'_i|_\rho). \end{equation} So we analyze $\mathrm{relrk}_{(\hat{Y}_i,\hat{Z}_i)}(F'_i|_\rho)$ for each $i\in [t]$. To bound this quantity, we recall (Proposition~\ref{prop:relrk} Item 1) that $\mathrm{relrk}_{(\hat{Y}_i,\hat{Z}_i)}(F'_i|_\rho)$ is always at most $1$. Further, if $|\hat{Y}_i\cup \hat{Z}_i|$ is odd, then $\mathrm{relrk}_{(\hat{Y}_i,\hat{Z}_i)}(F'_i|_\rho)\leq 1/\sqrt{2}.$ This motivates what follows. For any $i\in [t]$ and $j\in \{0,\ldots,M\}$, let $\alpha_{i,j}\in \{0,1\}$ be the random variable defined by $\alpha_{i,j} \equiv |\hat{Y}_{i,j}\cup \hat{Z}_{i,j}| \pmod{2}.$ Note that the variables $\alpha_{i,0}$ and $\alpha_{i,j}$ for $j > M/8$ are actually fixed. Define $\tilde{\alpha}_i\in \{0,1\}$ by $\tilde{\alpha}_i = \bigoplus_{j=0}^M \alpha_{i,j}.$ Observe that $\tilde{\alpha}_i = 1$ if and only if $|\hat{Y}_i\cup \hat{Z}_i|$ is odd. 
In particular, by Proposition~\ref{prop:relrk} (Item 1), we have $\mathrm{relrk}_{(\hat{Y}_i,\hat{Z}_i)}(F'_i|_\rho) \leq 1/2^{\tilde{\alpha}_i/2}.$ Let $\mathrm{Odd}_\rho$ be the integer random variable defined by \[ \mathrm{Odd}_\rho = \sum_{i\in [t]} \tilde{\alpha}_i \] where the sum is over $\mathbb{Z}$ (and not modulo $2$). By the discussion above and (\ref{eq:case1relrkF'}), we know that \begin{equation} \label{eq:case1oddrelrk} \mathrm{relrk}_{(\tilde{Y}'\cup Y',\tilde{Z}'\cup Z')}(F'|_\rho) \leq \frac{1}{2^{\mathrm{Odd}_\rho/2}}. \end{equation} We will show below that \begin{equation} \label{eq:case1oddclm} \prob{\rho'_1,\ldots,\rho'_{M/8}}{\mathrm{Odd}_{\rho} \leq 10 k^{0.1}}\leq \exp(-k^{0.1}). \end{equation} The above, along with (\ref{eq:case1oddrelrk}), implies that whenever the event $\mc{E}_1$ does not occur, we have the inequality \[ \prob{\rho}{\mathrm{relrk}_{(\tilde{Y}'\cup Y',\tilde{Z}'\cup Z')}(F'|_\rho) \geq \exp(-k^{0.1})} \leq \exp(-k^{0.1}). \] In particular, using (\ref{eq:case1chernoff}), we see that \[ \prob{\rho}{\mathrm{relrk}_{(\tilde{Y}'\cup Y',\tilde{Z}'\cup Z')}(F'|_\rho) \geq \exp(-k^{0.1})} \leq \exp(-k^{0.1}) + \exp(-\Omega(M)) \leq 2\exp(-k^{0.1}) \] which implies (\ref{eq:relrkPigate}) and hence finishes the analysis of Case $1$. For the last inequality above, we have used the fact that $M \geq m/10\geq k$ for our choice of parameters. We now prove (\ref{eq:case1oddclm}). To do this, we will actually prove a slightly different statement. For any $i\in [t]$, define $\alpha_i = \bigoplus_{j\in [M/8]}\alpha_{i,j}$. Note that $\alpha_i = \tilde{\alpha}_i \oplus \bigoplus_{j\not\in [M/8]}\alpha_{i,j}.$ For any $\beta_1,\ldots,\beta_t\in \{0,1\}$, we will show that \begin{equation} \label{eq:case1alphabeta} \prob{\rho_1',\ldots,\rho'_{M/8}}{\forall i\in [t]\ \alpha_i = \beta_i} \leq \exp(-k^{0.15}).
\end{equation} Assuming the above, we obtain a similar statement for $\tilde{\alpha}_1,\ldots,\tilde{\alpha}_t$, since for any $\beta_1,\ldots,\beta_t\in \{0,1\}$, \begin{align*} \prob{\rho_1',\ldots,\rho'_{M/8}}{\forall i\in [t]\ \tilde{\alpha}_i = \beta_i} &= \prob{\rho_1',\ldots,\rho'_{M/8}}{\forall i\in [t]\ \alpha_i = \beta_i\oplus \bigoplus_{j\not\in [M/8]}\alpha_{i,j}} \leq \exp(-k^{0.15}). \end{align*} (We have used above the fact that $\alpha_{i,j}$ is fixed for any $j\in \{0,\ldots,M\}\setminus [M/8]$.) From the above inequality and using Proposition~\ref{prop:ANDthr}, we get \begin{align*} \prob{\rho'_1,\ldots,\rho'_{M/8}}{\mathrm{Odd}_{\rho} \leq 10 k^{0.1}} &\leq \exp(10k^{0.1}\log t - k^{0.15})\\ &\leq \exp(10k^{0.1} k^{0.02+o(1)} - k^{0.15}) \leq \exp(-k^{0.1}) \end{align*} where for the second inequality, we used the fact that $t\leq n_{\Delta'} = O(m)^{\Delta'}$ and hence $\log t = O(\Delta' \log m )\leq m^{0.001 + o(1)} = k^{0.02 + o(1)}$ as $\Delta' \leq m^{0.001}$ and $m = k^{20}.$ This finishes the proof of (\ref{eq:case1oddclm}) modulo (\ref{eq:case1alphabeta}). The proof of (\ref{eq:case1alphabeta}) is the main technical part of this section. From now on, fix some $\beta_1,\ldots,\beta_t\in \{0,1\}.$ \paragraph{Proof outline of (\ref{eq:case1alphabeta}).} From the description of the sampling algorithm $\mc{A}$ (specifically option $E_3$ of $\mc{A}$), we observe the following. To sample each $\rho_j'$, we choose independent random variables $x_{j,1}$ and $x_{j,2}$ uniformly from $X_{j,1}$ and $X_{j,2}$ respectively, and set $x_{j,1}$ to $y_j$ and $x_{j,2}$ to $z_j$; all other variables in $X_{j}$ are set deterministically to $0$ or $1$ according to some rule $R$ (the exact rule $R$ will not be relevant in this argument). We will view this sampling process iteratively via an algorithm $\mc{A}'$ described formally below.
Informally, at each step, $\mc{A}'$ chooses a suitable $I\subseteq [t]$ and asks for each $j\in [M/8]$ if either of the variables $x_{j,1}$ or $x_{j,2}$ belongs to the set of variables $\bigcup_{i\in I}\mathrm{Var}s(F'_i)\cap X'_j$. If this event does occur for any $j\in [M/8]$, then $\mc{A}'$ reveals the restriction $\rho'_j$ entirely, and otherwise, it does not reveal anything else about $\rho'_j$ in this step. In either case, we are able to entirely deduce the values of $\alpha_{i,j}$ for $i\in I$ from the above information about $\rho_j'$ and hence we can also deduce $\alpha_i = \bigoplus_{j\in [M/8]} \alpha_{i,j}$ for each $i\in I$. We show that with reasonable probability, it holds that $\bigoplus_{i\in I} \alpha_i \neq \bigoplus_{i\in I} \beta_i,$ which in particular implies that there must be an $i$ such that $\alpha_i\neq \beta_i.$ In this case, $\mc{A}'$ outputs SUCCESS. If not, the algorithm continues with another iteration of the same procedure. We will show that with high probability, $\mc{A}'$ can carry out many such iterations. If $\mc{A}'$ either fails to find an $i$ such that $\alpha_i \neq \beta_i$ after many iterations, or cannot carry out sufficiently many iterations, then it outputs FAILURE. We show that $\mc{A}'$ outputs SUCCESS with high probability, which will finish the proof of (\ref{eq:case1alphabeta}). \paragraph{The algorithm $\mc{A}'$.} The algorithm $\mc{A}'$ is a general sampling procedure that has the following input-output behaviour. \begin{itemize} \item {\bf Input}: Sets $S\subseteq [M/8], T\subseteq [t]$, and fixed (i.e., not random) $(X_j,\{y_j\},\{z_j\})$-restrictions $(\rho'_j)_{j\in [M/8]\setminus S}.$ For each $j\in S$, we define $\bar{X}_j = (\bigcup_{i\in T}\mathrm{Var}s(F'_i))\cap X_j.$ \item {\bf Desired Output}: The algorithm samples an independent random restriction $\rho'_j$ for each $j\in S$ as follows.
Independent uniformly random variables $x_{j,1}$ and $x_{j,2}$ are chosen from the sets $\bar{X}_{j,1}:= \bar{X}_j\cap X_{j,1}$ and $\bar{X}_{j,2}:= \bar{X}_j \cap X_{j,2}$ respectively. The variables $x_{j,1}$ and $x_{j,2}$ are set to $y_j$ and $z_j$ respectively, and the remaining variables in $X_j$ are set deterministically to $0$ or $1$ in accordance with the rule $R$ referenced above. Further, the algorithm either outputs SUCCESS or FAILURE, with the guarantee that whenever it outputs SUCCESS, we must have $\alpha_i\neq \beta_i$ for some $i\in [t]$. \end{itemize} Note that the problem of sampling $\rho'_1,\ldots, \rho'_{M/8}$ is equivalent to running the algorithm $\mc{A}'$ with $S = [M/8]$ and $T = [t]$. In this case, $\bar{X}_j = X_j$ for each $j\in [M/8]$. The formal description of the algorithm $\mc{A}'$ follows. \paragraph{Algorithm $\mc{A}'(S,T,(\rho'_j)_{j\in [M/8]\setminus S})$} \begin{enumerate} \item Find $I\subseteq T$ such that the following holds. Let $F'_I := \prod_{i\in I}F'_i$ and $\mathrm{Var}s(F'_I) := \bigcup_{i\in I}\mathrm{Var}s(F'_i).$ \begin{enumerate} \item For all $j\in S$, $|\mathrm{Var}s(F'_I)\cap \bar{X}_j|\leq (1/3)\cdot |\bar{X}_j|.$ \item $|\hat{S}| \leq M/k$, where $\hat{S} := \{j \in S \mid |\mathrm{Var}s(F'_I)\cap \bar{X}_j|> (2\varepsilon/k)\cdot |\bar{X}_j|\}$. \item There is some $j\in S$ such that $|\mathrm{Var}s(F'_I)\cap \bar{X}_j|\geq (\varepsilon/2k)\cdot |\bar{X}_j|.$ \end{enumerate} If there is more than one such $I$, choose the lexicographically least one. If there is no such $I$ or if $S = \emptyset$, output FAILURE. Further, complete the sampling process as follows. For each $j\in S$, the restriction $\rho'_j$ is sampled by choosing variables $x_{j,1}$ and $x_{j,2}$ independently and uniformly from $\bar{X}_{j,1}$ and $\bar{X}_{j,2}$ respectively and setting them to $y_j$ and $z_j$ respectively. Other variables in $X_j$ are set according to the restriction rule $R$.
\item Initialize the set $S' = \emptyset .$ \item For each $j\in S$, define $\delta_{j,1}\in [0,1]$ by \[ \delta_{j,1} = \frac{1}{|\bar{X}_{j,1}|}\cdot |\mathrm{Var}s(F'_I)\cap \bar{X}_{j,1}| \] and $\delta_{j,2}$ similarly. Let $\delta_j = |\mathrm{Var}s(F'_I)\cap \bar{X}_{j}|/|\bar{X}_j| $ and define $\delta = \sum_{j\in S}\delta_j.$ \item If $\delta < 1$, define $S'' = \{j\in S\ |\ \delta_j \geq 1/M^{0.5}\}$. If $\delta \geq 1,$ define $S'' = \hat{S}$ where $\hat{S}$ is as defined in Step 1 above, i.e., $S'' = \{j\in S\ | \delta_j \geq (2\varepsilon/k)\}.$ \item For each $j\in S$, do the following independently. \begin{enumerate} \item Sample $b_{j,1}\in \{0,1\}$ so that $b_{j,1}=1$ w.p. $\delta_{j,1}$ and $0$ otherwise. Similarly, sample $b_{j,2}$ independent of $b_{j,1}$ so that $b_{j,2}=1$ w.p. $\delta_{j,2}.$ \item If either of $b_{j,1}$ or $b_{j,2}$ is $1$, or $j\in S''$, do the following. \begin{enumerate} \item Add $j$ to the set $S'$. \item If $b_{j,1} = 1$, sample $x_{j,1}$ uniformly from $\mathrm{Var}s(F'_I)\cap \bar{X}_{j,1}$ and set $x_{j,1}$ to $y_j$. If $b_{j,1}=0,$ sample $x_{j,1}$ uniformly from $\bar{X}_{j,1}\setminus \mathrm{Var}s(F'_I)$ and set $x_{j,1}$ to $y_j$. \item Sample $x_{j,2}$ similarly from $\bar{X}_{j,2}$ and set $x_{j,2}$ to $z_j$. \item Set all other variables in $X_j$ deterministically in accordance with the rule $R$. (This fixes the $(X_j,\{y_j\},\{z_j\})$-restriction $\rho'_j$.) \end{enumerate} \end{enumerate} \item Compute the Boolean variables $\alpha_{i,j}'\in \{0,1\}$ for each $i\in I$ and $j\in [M/8]$ as follows. 
\begin{enumerate} \item If $j\not\in S$ or $j\in S'$, compute $\alpha_{i,j}'$ from $\rho'_j$ using $\alpha'_{i,j} = |\hat{Y}_{i,j}\cup \hat{Z}_{i,j}| \pmod{2}.$ \item If $j\in S\setminus S'$, then set $\alpha'_{i,j} = 0.$ \end{enumerate} \item For each $i\in I$, let $\alpha'_i = \bigoplus_{j\in [M/8]}\alpha'_{i,j}.$ If $\alpha'_i \neq \beta_i$ for some $i\in I$, sample the remaining $\rho'_j$ ($j\in S\setminus S'$) as in Steps 5(b)(ii) and 5(b)(iii), and output SUCCESS. \item Otherwise, run the algorithm $\mc{A}'$ on inputs $(S\setminus S', T\setminus I, (\rho'_j)_{j\not \in (S\setminus S')}).$ \end{enumerate} \paragraph{Correctness.} Here, we show that $\mc{A}'$, on any input $S$, $T$, and $(\rho'_j)_{j\not\in S}$, samples $(\rho'_j)_{j\in S}$ according to the desired distribution. Moreover, we show that whenever $\mc{A}'$ outputs SUCCESS, it is indeed because the sampled restrictions imply that $\alpha_i \neq \beta_i$ for some $i\in [t]$. We first argue that the sampled distribution is correct. Note that if the algorithm cannot find a suitable $I$ in Step 1, then the restrictions sampled trivially have the correct distribution. So we assume that $\mc{A}'$ does not output FAILURE in Step 1. Now, note that to sample a uniformly random element $a$ from a finite set $A$, we may first fix any set $A'\subseteq A$, sample a random bit $c\in \{0,1\}$ that is $1$ with probability $|A'|/|A|$, and, depending on whether $c$ is $1$ or $0$, sample a uniformly random element of $A'$ or $A\setminus A'$ respectively. This describes how the sampling algorithm $\mc{A}'$ samples a random $x_{j,1}$ from $\bar{X}_{j,1}$ (the case of $\bar{X}_{j,2}$ is similar), where the role of the set $A'$ is taken by $\bar{X}_{j,1}\cap\mathrm{Var}s(F'_I)$ and the bit $b_{j,1}$ plays the role of $c$. If $j\in S'$, then the subsequent sampling takes place in Step 5(b), and for $j\in S\setminus S'$, the subsequent sampling takes place in either Step 7 (in case the algorithm outputs SUCCESS) or in a later iteration.
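The two-step sampling decomposition just described can be verified exactly. The following sketch (plain Python with exact rational arithmetic; the concrete sets are hypothetical) computes the distribution induced by first drawing the bit and then drawing uniformly from the chosen part, and confirms that it is uniform on $A$.

```python
from fractions import Fraction

def two_step_distribution(A, A_prime):
    """Distribution of: draw a bit c = 1 with probability |A'|/|A|;
    if c = 1, pick uniformly from A', else uniformly from A minus A'."""
    p_one = Fraction(len(A_prime), len(A))
    return {a: (p_one / len(A_prime) if a in A_prime
                else (1 - p_one) / (len(A) - len(A_prime)))
            for a in A}

A = set(range(10))
A_prime = {0, 1, 2}
dist = two_step_distribution(A, A_prime)
```

Every element receives probability $1/|A|$ regardless of the choice of $A'$, which is exactly why the algorithm is free to expose the bit $b_{j,1}$ before the rest of the sample.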
We now argue that an output of SUCCESS means that the algorithm has found an $i$ such that $\alpha_i\neq \beta_i.$ This is obvious once we argue that for each $i\in I$, the quantity $\alpha'_i$ computed by the algorithm is equal to the random variable $\alpha_{i}$ defined above. To argue this, it suffices to show that $\alpha'_{i,j}$ (computed in Step 6) equals $\alpha_{i,j}$ for each $i\in I,j\in [M/8]$. This is obvious for $j\not\in S$ or $j\in S'$ from Step 6 and the definition of $\alpha_{i,j}$ above. For $j\in S\setminus S'$, we know that $b_{j,1} = b_{j,2} = 0$ and hence the sampled $\rho'_j$ does not choose any variable from $\mathrm{Var}s(F'_I)$ to be either $x_{j,1}$ or $x_{j,2}$. In particular, this implies that $x_{j,1}\not\in \mathrm{Var}s(F'_i)$ and $x_{j,2}\not\in \mathrm{Var}s(F'_i)$ for any $i\in I$ and hence, $\alpha_{i,j}=0.$ Thus, even in the case that $j\in S\setminus S'$, we have $\alpha_{i,j} = \alpha'_{i,j}.$ This concludes the proof of correctness. \paragraph{Probability of SUCCESS.} We now argue that the algorithm $\mc{A}'$ outputs SUCCESS with high probability if started with the initial input of $S = S_0 := [M/8]$ and $T = T_0 := [t]$ (in which case $\bar{X}_j = X_j$ for each $j$ and the input $(\rho'_j)_{j\not\in S}$ is trivial). This will prove (\ref{eq:case1alphabeta}). Consider a run of the algorithm $\mc{A}'$ on the above inputs. Each such run can produce many successive iterations on different inputs $(S,T,(\rho'_j)_{j\not\in S})$. We say that a single iteration of $\mc{A}'$ is of Type I if the corresponding value of $\delta$ (computed in Step 3) is at least $1$ and of Type II otherwise. Note that this is a \emph{deterministic} function of the current inputs $S$ and $T$. To show that the algorithm $\mc{A}'$ outputs SUCCESS with high probability on $(S_0,T_0)$, we show a more general statement. Call an input $(S,T,(\rho'_j)_{j\not\in S})$ an \emph{$(a,b)$-good} input, where $a,b\in \mathbb{N}$, if the following holds. 
\begin{itemize} \item The input set $S$ satisfies \begin{equation} \label{eq:case1S} |S| \geq \frac{M}{8} - \frac{10\cdot a\cdot M}{k}- 10\cdot b\cdot M^{0.5}, \end{equation} \item and for each $j\in S$, we have \begin{equation} \label{eq:case1Xjbar} |\bar{X}_j| \geq |X_j|\cdot \left(1-\frac{2\varepsilon}{k}\cdot a - \frac{b}{M^{0.5}}\right). \end{equation} \end{itemize} We say an iteration of $\mc{A}'$ is $(a,b)$-good, if its input is $(a,b)$-good. Informally, we expect an iteration of $\mc{A}'$ to be $(a,b)$-good after at most $a$ iterations of Type I and at most $b$ iterations of Type II (see Claim~\ref{clm:case1gooditerns} below). The first observation is that an algorithm cannot output FAILURE on an $(a,b)$-good iteration for small $a,b$. Define $a_0 = k/100$ and $b_0 = M^{0.45}.$ \begin{claim} \label{clm:case1failureiter} The algorithm $\mc{A}'$ does not output FAILURE in any $(a,b)$-good iteration where $a < a_0$ and $b < b_0.$ \end{claim} \begin{proof} We only need to show that $S\neq \emptyset$ and that $\mc{A}'$ is able to find an $I\subseteq T$ with the required properties in Step 1. Observe that in an $(a,b)$-good iteration for $a < a_0$ and $b < b_0,$ we have by (\ref{eq:case1S}) \begin{equation} \label{eq:case1lbdS} |S| \geq \frac{M}{8} - \frac{10\cdot (k/100) \cdot M}{k} - 10\cdot M^{0.45}\cdot M^{0.5} \geq \frac{M}{50} \end{equation} and also for each $j\in S$, by (\ref{eq:case1Xjbar}) \begin{equation} \label{eq:case1lbdXj} |\bar{X}_j|\geq |X_j|\cdot \left(1-\frac{2\varepsilon}{k}\cdot \frac{k}{100} - \frac{M^{0.45}}{M^{0.5}}\right) \geq (1-o(1))|X_j|. 
\end{equation} First consider the case that there is some $i_0\in T$ and some $j_0\in S$ such that $|\mathrm{Var}s(F'_{i_0})\cap X_{j_0}|\geq (\varepsilon/2k)|X_{j_0}|.$ In this case, we claim that the singleton set $I = \{i_0\}$ has the properties required in Step 1 of $\mc{A}'.$ Property (a) follows from the fact that each $X_j$ ($j\in [M/8]$) is $(1/4)$-shattered and hence for each $j\in [M/8]$ \[ |\mathrm{Var}s(F'_{i_0})\cap \bar{X}_j| = |\mathrm{Var}s(F'_{i_0})\cap X_j|\leq |X_j|/4 \leq |\bar{X}_j|/3 \] where the last inequality uses (\ref{eq:case1lbdXj}). Property (c) follows from the choice of $i_0$, which implies \[ |\mathrm{Var}s(F'_{i_0})\cap \bar{X}_{j_0}| = |\mathrm{Var}s(F'_{i_0})\cap X_{j_0}|\geq (\varepsilon/2k)\cdot|X_{j_0}| \geq (\varepsilon/2k)\cdot |\bar{X}_{j_0}|. \] Finally, to see Property (b), we note that if $|\mathrm{Var}s(F'_{i_0})\cap \bar{X}_j| > (2\varepsilon/k)|\bar{X}_j|$ for some $j\in S$, then by (\ref{eq:case1lbdXj}), we also have \[ |\mathrm{Var}s(F'_{i_0})\cap \bar{X}_j| > (2\varepsilon/k)|\bar{X}_j| > (\varepsilon/k)|X_j|. \] In other words, $F'_{i_0}$ is $(\varepsilon/k)$-heavy in $X_j$. However, the assumption in this section (corresponding to Case 1) is that each $F'_i$ is $(\varepsilon/k)$-heavy in at most $M/k$ many $X_j$. In particular, this shows that the number of $j$'s such that $|\mathrm{Var}s(F'_{i_0})\cap \bar{X}_j| > (2\varepsilon/k)|\bar{X}_j|$ is at most $M/k$ and hence Property (b) is indeed satisfied. Thus, the singleton set $I = \{i_0\}$ has all the required properties. From now onwards, we assume that there is no $i_0\in T$ and $j_0\in S$ satisfying the above. Thus, we have for each $i\in T$ and $j\in S$, \begin{equation} \label{eq:case1Xjshatt} |\mathrm{Var}s(F'_i)\cap X_j| < (\varepsilon/2k)|X_j|.
\end{equation} From (\ref{eq:case1Xjshatt}), we know in particular that for each $i\in T$, we have $|\mathrm{Var}s(F'_i)\cap \bar{X}_j| = |\mathrm{Var}s(F'_i)\cap X_j|\leq (\varepsilon/2k)|X_j| \leq (2\varepsilon/k)|\bar{X}_j|.$ Thus, the set $\bar{X}_j$ is partitioned by the sets $(\mathrm{Var}s(F'_i)\cap \bar{X}_j)_{i\in T},$ each of relative size at most $(2\varepsilon/k)$. Now, consider all sets $I\subseteq T$ such that for \emph{some} $j\in S$, we have \begin{equation} \label{eq:case1I0def} |\mathrm{Var}s(F'_I)\cap \bar{X}_j|\geq (\varepsilon/2k)\cdot |\bar{X}_j|. \end{equation} Clearly, since each $\bar{X}_j$ is partitioned by the sets $(\mathrm{Var}s(F'_i)\cap \bar{X}_j)_{i\in T},$ there do exist sets $I$ satisfying (\ref{eq:case1I0def}). That is, any such set $I$ satisfies property (c) mentioned in Step 1 of the algorithm $\mc{A}'$. We fix such an $I$ of the smallest possible size and claim that $I$ also satisfies \begin{equation} \label{eq:case1I0prop} |\mathrm{Var}s(F'_{I})\cap \bar{X}_j|\leq (2\varepsilon/k)\cdot |\bar{X}_j| \end{equation} for each $j\in S$. This will therefore prove that this $I$ satisfies properties (a) and (b) in Step 1 of the algorithm $\mc{A}'$. (In fact for proving that (a) holds, it suffices to prove that $|\mathrm{Var}s(F'_{I})\cap \bar{X}_j|\leq 1/3 \cdot |\bar{X}_j|$. Similarly, for proving (b), it suffices to prove (\ref{eq:case1I0prop}) for all but $M/k$ many $j\in S$. But we get better bounds in this case.) To see this, we argue as follows. If $|I| = 1$, then $F'_I = F'_i$ for some $i\in T$. Then (\ref{eq:case1I0prop}) follows from (\ref{eq:case1Xjshatt}) and (\ref{eq:case1lbdXj}). So we may assume $|I| > 1.$ In this case, as $I$ is the smallest possible set satisfying (\ref{eq:case1I0def}), we must have $|\mathrm{Var}s(F'_{I'})\cap \bar{X}_j|<(\varepsilon/2k)\cdot |\bar{X}_j|$ for each $I'\subsetneq I$ and \emph{each} $j\in S$. 
Thus, we have for any fixed partition of $I$ into two disjoint sets $I'$ and $I''$ and any $j\in S$, \[ |\mathrm{Var}s(F'_{I})\cap \bar{X}_j|\leq |\mathrm{Var}s(F'_{I'})\cap \bar{X}_j| + |\mathrm{Var}s(F'_{I''})\cap \bar{X}_j| \leq (\varepsilon/2k + \varepsilon/2k)\cdot |\bar{X}_j| \leq (\varepsilon/k)|\bar{X}_j| \] proving (\ref{eq:case1I0prop}). This shows that there are suitable sets $I$ satisfying the properties required by Step 1 of the algorithm $\mc{A}'$ and hence $\mc{A}'$ does not output FAILURE on this input. \end{proof} Informally, as mentioned earlier, we expect an iteration of $\mc{A}'$ to be $(a,b)$-good if there have been at most $a$ iterations of Type I and at most $b$ iterations of Type II before this one. Further, in each good iteration, we have a reasonable probability of success. These points are made precise in the following claim. \begin{claim} \label{clm:case1gooditerns} Consider an iteration of $\mc{A}'$ on an $(a,b)$-good input where $a < a_0$ and $b < b_0.$ \begin{itemize} \item If the iteration is of Type I, then the probability that the next iteration is not $(a+1,b)$-good is at most $\exp(-\Omega(k))$. Further, the probability of SUCCESS in the current iteration is at least $\eta_1 = 1/100.$ \item If the iteration is of Type II, then the probability that the input produced for the next iteration is not $(a,b+1)$-good is at most $\exp(-\Omega(k)).$ Further, the probability of SUCCESS in the current iteration is at least $\eta_2 = \varepsilon/100k.$ \end{itemize} \end{claim} \begin{proof} Let $(S,T,(\rho'_j)_{j\not\in S})$ be the input to this iteration of $\mc{A}'$ and $I\subseteq T$ the set chosen in Step 1 of $\mc{A}'.$ The inputs to the next iteration are $(S_1,T_1,(\rho'_j)_{j\not\in S_1})$ where $S_1 = S \setminus S'$, $T_1 = T\setminus I$ and $\rho'_j$ for $j\in S'$ are sampled during the current iteration. 
For each $j\in S_1$, the set corresponding to $\bar{X}_j$ in the next iteration is denoted by $\bar{X}_j'$; formally, $\bar{X}_j' := \bar{X}_j \setminus \mathrm{Var}s(F'_I).$ We start with some preliminary observations. Note that by (\ref{eq:case1lbdXj}), we have $|X_j\setminus \bar{X}_j| = o(|\bar{X}_j|)$ and also as a consequence $|X_{j,\gamma}\setminus \bar{X}_{j,\gamma}| = o (|\bar{X}_j|) = o(|\bar{X}_{j,\gamma}|)$ for any $\gamma\in \{1,2\}.$ In particular, this implies that for any $j\in S$, \begin{align} \delta_{j,1} + \delta_{j,2} &= \frac{|\mathrm{Var}s(F'_I)\cap \bar{X}_{j,1}|}{|\bar{X}_{j,1}|} + \frac{|\mathrm{Var}s(F'_I)\cap \bar{X}_{j,2}|}{|\bar{X}_{j,2}|}\notag\\ &= \frac{|\mathrm{Var}s(F'_I)\cap \bar{X}_{j,1}|}{((1/2)\pm o(1))|\bar{X}_{j}|} + \frac{|\mathrm{Var}s(F'_I)\cap \bar{X}_{j,2}|}{((1/2)\pm o(1))|\bar{X}_{j}|}\notag\\ &= (2\pm o(1)) \frac{|\mathrm{Var}s(F'_I)\cap \bar{X}_{j}|}{|\bar{X}_{j}|} = (2\pm o(1))\cdot \delta_j.\label{eq:case1deltaj} \end{align} Since each variable $b_{j,\gamma}$ ($j\in S$, $\gamma\in \{1,2\}$) is independently set to $1$ with probability $\delta_{j,\gamma}$, we see (using (\ref{eq:case1deltaj})) that for each $j\in S$ \begin{equation} \label{eq:case1jinS'} \prob{}{b_{j,1} = 1 \text{ or } b_{j,2} = 1} \leq \delta_{j,1} + \delta_{j,2} \leq 3 \delta_j. \end{equation} We now proceed to the proof of the statement of the claim. \paragraph{Type I iteration.} To show that $(S_1,T_1,(\rho'_j)_{j\not\in S_1})$ is $(a+1,b)$-good, we need to show the corresponding versions\footnote{I.e., the version with $a$ replaced by $a+1$ and $\bar{X}_j$ replaced by $\bar{X}'_j$.} of (\ref{eq:case1S}) and (\ref{eq:case1Xjbar}). \subparagraph{Proof of (\ref{eq:case1S}).} To show (\ref{eq:case1S}), note that it suffices to show \begin{equation} \label{eq:probS'largeI} \prob{}{|S'|\geq \frac{10M}{k}} \leq \exp(-\Omega(k)). 
\end{equation} Note also that in the case of a Type I iteration, $S' = S''' \cup \hat{S}$ where $S''' = \{j\in S\ |\ b_{j,1} = 1 \text{ or } b_{j,2} = 1\}.$ By our choice of $I$, we know that $|\hat{S}|\leq M/k.$ Below, we will bound $|S'''\setminus \hat{S}|$. Recall that each $b_{j,\gamma}$ ($\gamma\in [2]$) is set to $1$ with probability $\delta_{j,\gamma}.$ Hence, for each $j\in S$, the probability that $j\in S'''$ is at most $\delta_{j,1} + \delta_{j,2},$ which is at most $3\delta_j$ by (\ref{eq:case1jinS'}). Note that for each $j\in S\setminus \hat{S}$, we have $\delta_j \leq (2\varepsilon/k).$ Thus, the expected value of $|S'''\setminus \hat{S}|$ is at most $3\sum_{j\in S\setminus \hat{S}} \delta_j \leq 3\cdot (2\varepsilon/k)\cdot |S| \leq M/k,$ as $|S|\leq M/8$ and $\varepsilon \leq 1.$ Thus, by using our bound on $|\hat{S}|$ and a Chernoff bound (Theorem~\ref{thm:Chernoff}), we obtain \[ \prob{}{|S'| \geq 10 M/k}\leq \prob{}{|S'''\setminus \hat{S}|\geq 9M/k}\leq \exp(-\Omega(M/k)) \leq \exp(-k) \] proving (\ref{eq:probS'largeI}). For the final inequality, we have used the fact that $M \geq m/10 = \Omega(k^{20}).$ \subparagraph{Proof of (\ref{eq:case1Xjbar}).} Fix any $j\in S_1$. Since $\hat{S}\cap S_1=\emptyset$, we have $|\mathrm{Var}s(F'_I)\cap \bar{X}_j|\leq (2\varepsilon/k)|\bar{X}_j|\leq (2\varepsilon/k)\cdot |X_j|$. Thus, the set $\bar{X}'_j = \bar{X}_j \setminus \mathrm{Var}s(F'_I)$ satisfies \[ |\bar{X}'_j| \geq |\bar{X}_j| - (2\varepsilon/k)|X_j| \geq |X_j|\cdot \left(1-\frac{2\varepsilon}{k}\cdot (a+1)-\frac{b}{M^{0.5}}\right) \] thus proving (\ref{eq:case1Xjbar}). This shows that the next iteration is $(a+1,b)$-good except with probability $\exp(-\Omega(k)).$ \subparagraph{Probability of SUCCESS.} The algorithm outputs SUCCESS whenever it finds an $i\in I$ such that $\alpha'_i\neq \beta_i$. 
Since, as argued in the correctness proof above, $\alpha'_i$ equals the random variable $\alpha_{i},$ the algorithm outputs SUCCESS whenever $\alpha_i\neq \beta_i$ for some $i\in I$. Define the Boolean random variable $\alpha_I$ to be $\bigoplus_{i\in I} \alpha_i$; similarly, define $\beta_I$ to be $\bigoplus_{i\in I}\beta_i.$ For there to be some $i\in I$ such that $\alpha_i \neq \beta_i$, it suffices to have $\alpha_I \neq \beta_I$, or equivalently, $\alpha_I \oplus \beta_I = 1.$ We show that this occurs with probability at least $\eta_1$. We can write $\alpha_I = \bigoplus_{j\in S} \alpha_{I,j}\oplus \bigoplus_{j\not\in S}\alpha_{I,j}$, where $\alpha_{I,j}$ is defined to be $\bigoplus_{i\in I}\alpha_{i,j}.$ Note that for $j\in S$, $\alpha_{I,j}$ is $1$ precisely when exactly one among the randomly chosen variables $x_{j,1}$ and $x_{j,2}$ lands in $\mathrm{Var}s(F'_I);$ this happens precisely when $b_{j,1}\oplus b_{j,2} = 1.$ Recall that $b_{j,1}$ and $b_{j,2}$ are independent random variables that are $1$ with probability $\delta_{j,1}$ and $\delta_{j,2}$ respectively. We know that $\delta_{j,\gamma} = |\mathrm{Var}s(F'_I)\cap \bar{X}_{j,\gamma}|/|\bar{X}_{j,\gamma}|$ for each $\gamma\in [2]$. Since $|\mathrm{Var}s(F'_I)\cap \bar{X}_{j,\gamma}| \leq |\mathrm{Var}s(F'_I)\cap \bar{X}_{j}| \leq (1/3)|\bar{X}_j|$ (by the choice of $I$ in Step 1 of $\mc{A}'$) and $|\bar{X}_{j,\gamma}|\geq ((1/2)-o(1))\cdot |\bar{X}_j|$ as noted above, we have $\delta_{j,\gamma}\leq (2/3+o(1)).$ In particular, this implies that $\mathrm{Unbias}(b_{j,\gamma}) = \min\{\delta_{j,\gamma}, 1-\delta_{j, \gamma}\} \geq \delta_{j,\gamma}/3$\footnote{Recall (Section~\ref{sec:misc}) that $\mathrm{Unbias}(b_{j,1}) = \min\{\prob{}{b_{j,1} = 1}, \prob{}{b_{j,1}=0}\}$.}.
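As an aside, the parity bookkeeping used here rests on the elementary identity $\Pr[b_1\oplus\cdots\oplus b_n = 1] = (1-\prod_i(1-2\delta_i))/2$ for independent bits. The snippet below (illustrative only; it does not reproduce the exact statement of Proposition~\ref{prop:xorunbias}) verifies the identity by exhaustive enumeration.

```python
from fractions import Fraction
from itertools import product

def xor_one_prob(deltas):
    """P[b_1 xor ... xor b_n = 1] for independent bits with P[b_i = 1] = deltas[i]."""
    total = Fraction(0)
    for bits in product((0, 1), repeat=len(deltas)):
        pr = Fraction(1)
        for b, d in zip(bits, deltas):
            pr *= d if b == 1 else 1 - d
        if sum(bits) % 2 == 1:
            total += pr
    return total

p = xor_one_prob([Fraction(1, 5), Fraction(1, 3)])
# Closed form: P[xor = 1] = (1 - prod(1 - 2*delta_i)) / 2.
```

In particular, a single unbiased bit ($\delta_i = 1/2$) makes the XOR exactly uniform, which is the extreme case of the unbias accumulation used in the proof.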
Thus, Proposition~\ref{prop:xorunbias} and (\ref{eq:case1deltaj}) imply that for each $j\in S$, \begin{align*} \mathrm{Unbias}(\alpha_{I,j}) = \mathrm{Unbias}(b_{j,1}\oplus b_{j,2}) &\geq \min\{\frac{1}{6}(\delta_{j,1} + \delta_{j,2}),\frac{1}{10}\} \geq\min\{\frac{1}{3}(1-o(1))\delta_j,\frac{1}{10}\}. \end{align*} Since the random variables $\alpha_{I,j}$ are independent and this is a Type I iteration (meaning that $\sum_{j\in S} \delta_j \geq 1$), we see from Proposition~\ref{prop:xorunbias} that $\mathrm{Unbias}(\alpha_I) \geq \min\{(1/6)\sum_{j\in S} (1-o(1))\cdot \delta_j, 1/10\} \geq 1/100.$ This implies that the probability that $\alpha_I \oplus \beta_I = 1$ is at least $\eta_1=1/100$ as claimed. \paragraph{Type II iteration.} The proof is, to a large extent, similar to the proof in the Type I case. So we merely point out the main differences. \subparagraph{Proof of (\ref{eq:case1S}).} To show (the corresponding version of) (\ref{eq:case1S}), note that it suffices to show \begin{equation} \label{eq:probS'largeII} \prob{}{|S'|\geq 10\cdot M^{0.5}} \leq \exp(-\Omega(k)). \end{equation} Note that in this case, $S' = S'' \cup S''' $ where $S''$ is as defined in $\mc{A}'$ and $S''' = \{j\in S\ |\ b_{j,1} = 1 \text{ or } b_{j,2} = 1\}.$ To bound $S''$, we note that since we are dealing with a Type II iteration, we have $\sum_{j\in S} \delta_j < 1.$ In particular, the number of $j\in S$ such that $\delta_j \geq 1/M^{0.5}$ is at most $M^{0.5}$. Hence, we have shown that $|S''|\leq M^{0.5}.$ To bound $S''',$ we note as in the Type I case that the expected size of $|S'''|$ is at most $3\sum_j\delta_j\leq 3.$ In particular, by Theorem~\ref{thm:Chernoff} (Item 2), we have \[ \prob{}{|S'''| \geq M^{0.5}}\leq \exp(-\Omega(M^{0.5})) \leq \exp(-\Omega(k)). \] Thus, we see that with probability at least $1-\exp(-\Omega(k)),$ both $S''$ and $S'''$ have size at most $M^{0.5}$ and in this case, $|S'|\leq |S''| + |S'''| \leq 2 M^{0.5}.$ This proves (\ref{eq:probS'largeII}). 
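The Chernoff bounds in both cases are invoked as black boxes. For intuition, one can compare an exact binomial tail with one standard multiplicative form, $\Pr[X \geq (1+\delta)\mu] \leq \exp(-\delta^2\mu/(2+\delta))$; the constants in Theorem~\ref{thm:Chernoff} may differ, and the parameters below are arbitrary small values rather than the ones in the proof.

```python
import math
from fractions import Fraction

def binomial_tail(n, p, threshold):
    """Exact P[Bin(n, p) >= threshold] via rational arithmetic."""
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(threshold, n + 1))

# Illustrative parameters only (not the ones in the proof).
n, p, t = 100, Fraction(3, 100), 30
mu = n * p                       # expectation: 3
delta = Fraction(t) / mu - 1     # t = (1 + delta) * mu, so delta = 9
chernoff = math.exp(-float(delta**2 * mu / (2 + delta)))
tail = float(binomial_tail(n, p, t))
```

The exact tail is far below the Chernoff estimate here; the proof only needs the (much weaker) exponential decay.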
\subparagraph{Proof of (\ref{eq:case1Xjbar}).} By our choice of $S''$, the set $S_1 = S\setminus S'$ does not contain any $j$ such that $\delta_j \geq 1/M^{0.5}.$ Hence, we see that for all $j\in S_1$, $|\mathrm{Var}s(F'_I)\cap \bar{X}_j|\leq |\bar{X}_j|/M^{0.5}$. Thus, we have \[ |\bar{X}'_j| \geq |\bar{X}_j| - \frac{|\bar{X}_j|}{M^{0.5}} \geq |X_j|\cdot \left(1-\frac{2\varepsilon}{k}\cdot a - \frac{b+1}{M^{0.5}}\right) \] as desired, for each $j\in S_{1}$. \subparagraph{Probability of SUCCESS.} We argue as in the Type I case that $\mathrm{Unbias}(\alpha_I)$ is large. We know by our choice of $I$ that there is at least one $j$ such that $\delta_j \geq \varepsilon/2k.$ Hence, exactly as in the Type I case, we see that $\mathrm{Unbias}(\alpha_I) \geq \min\{(1/6)\sum_j (1-o(1))\cdot \delta_j, 1/10\} \geq \varepsilon/13k.$ In particular, this implies that the probability that $\alpha_I \oplus \beta_I = 1$ is at least $\eta_2 = \varepsilon/100k$, which yields the statement of the claim. \end{proof} We are now ready to upper bound the probability that the algorithm outputs FAILURE on input $(S_0,T_0)$. We prove a more general statement. Assume that $(S,T,(\rho'_j)_{j\not\in S})$ is any $(a,b)$-good input to $\mc{A}'$ where $a \leq a_0$ and $b \leq b_0.$ Then, we claim that \begin{equation} \label{eq:case1indn} \prob{}{\mc{A}'(S,T,(\rho'_j)_{j\not\in S}) \text{ outputs FAILURE}} \leq ((a_0-a) + (b_0-b))\cdot \exp(-\Omega(k)) + (1-\eta_1)^{a_0-a} + (1-\eta_2)^{b_0-b} \end{equation} where $\eta_1,\eta_2$ are as defined in the statement of Claim~\ref{clm:case1gooditerns}. We prove the above by downward induction on $a+b$ where $a\leq a_0$ and $b\leq b_0$. The base case of the induction is defined to be the case when either $a = a_0$ or $b = b_0.$ In this case, the claim is trivial since the right hand side of (\ref{eq:case1indn}) is more than $1$.
Now consider the case when $a < a_0$ and $b < b_0.$ Let us analyze the behaviour of the algorithm $\mc{A}'$ on an $(a,b)$-good input $(S,T,(\rho'_j)_{j\not\in S}).$ Assume that the current iteration is of Type I (the other case is similar). Claim~\ref{clm:case1failureiter} tells us that the algorithm does not output FAILURE in this iteration. By Claim~\ref{clm:case1gooditerns}, the probability that the algorithm produces an input (for the next iteration) that is \emph{not} $(a+1,b)$-good is at most $\exp(-\Omega(k))$; in this case, we give up and assume that the algorithm subsequently will output FAILURE. Finally, by Claim~\ref{clm:case1gooditerns}, the probability that the algorithm does not output SUCCESS in this round is at most $(1-\eta_1)$. In particular, this implies that the probability that the algorithm does not output SUCCESS \emph{and} produces an $(a+1,b)$-good input for the next iteration is also at most $(1-\eta_1)$. However, conditioned on this event, we can use the inductive hypothesis to bound the probability of FAILURE. By the union bound and the induction hypothesis, we get \begin{align*} &\prob{}{\mc{A}'(S,T,(\rho'_j)_{j\not\in S}) \text{ outputs FAILURE}}\\ &\leq\exp(-\Omega(k)) + (1-\eta_1)\cdot (((a_0-(a+1)) + (b_0-b))\cdot \exp(-\Omega(k)) + (1-\eta_1)^{a_0-(a+1)} + (1-\eta_2)^{b_0-b})\\ &\leq \exp(-\Omega(k)) + ((a_0-(a+1)) + (b_0-b))\cdot \exp(-\Omega(k)) + (1-\eta_1)\cdot (1-\eta_1)^{a_0-(a+1)} + (1-\eta_2)^{b_0-b}\\ &\leq ((a_0-a) + (b_0-b))\cdot \exp(-\Omega(k)) + (1-\eta_1)^{a_0-a} + (1-\eta_2)^{b_0-b} \end{align*} which completes the induction. 
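The inductive step can also be checked mechanically: writing $R(a,b)$ for the right-hand side of (\ref{eq:case1indn}) and $E$ for a stand-in value of the $\exp(-\Omega(k))$ term, the Type I step needs $E + (1-\eta_1)\cdot R(a+1,b) \leq R(a,b)$, and the Type II step needs the analogous inequality with $\eta_2$. The sketch below verifies both over a grid of illustrative (hypothetical) parameter values.

```python
from fractions import Fraction

def R(a, b, a0, b0, E, eta1, eta2):
    """Right-hand side of the claimed FAILURE-probability bound."""
    return ((a0 - a) + (b0 - b)) * E + (1 - eta1)**(a0 - a) + (1 - eta2)**(b0 - b)

# Illustrative stand-ins: E plays the role of exp(-Omega(k)).
a0, b0 = 20, 30
E = Fraction(1, 10**6)
eta1, eta2 = Fraction(1, 100), Fraction(1, 1000)

type1_ok = all(
    E + (1 - eta1) * R(a + 1, b, a0, b0, E, eta1, eta2)
    <= R(a, b, a0, b0, E, eta1, eta2)
    for a in range(a0) for b in range(b0 + 1)
)
type2_ok = all(
    E + (1 - eta2) * R(a, b + 1, a0, b0, E, eta1, eta2)
    <= R(a, b, a0, b0, E, eta1, eta2)
    for a in range(a0 + 1) for b in range(b0)
)
```

The inequality in fact holds identically: pulling the factor $(1-\eta)$ inside $R$ can only shrink each term, and the leftover $E$ restores the linear part.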
In particular, since the initial input $(S_0,T_0)$ is $(0,0)$-good, we see that \begin{align*} \prob{}{\mc{A}' \text{ outputs FAILURE on } (S_0,T_0)} &\leq (a_0+b_0)\cdot \exp(-\Omega(k)) + (1-\eta_1)^{a_0} + (1-\eta_2)^{b_0}\\ &\leq (a_0+b_0)\cdot \exp(-\Omega(k)) + \exp(-\Omega(a_0)) + \exp(-\Omega(b_0\varepsilon/k))\\ &\leq M\cdot \exp(-\Omega(k)) + \exp(-\Omega(k)) + \exp(-\Omega(M^{0.45}\varepsilon/k))\\ &\leq \exp(-\Omega(k)) \end{align*} where the second inequality follows from the definition of $\eta_1$ and $\eta_2$, the third from the definition of $a_0$ and $b_0$, and the last follows from the fact that $M \leq m^{\Delta'} \leq \exp(k^{0.02+o(1)}),$\footnote{Note that there are at most $m^{\Delta'}$ many $X^{(\Delta)}$-segments in $G^{(\Delta')}$. } $\varepsilon \geq 1/M^{0.25},$ and $M = \Omega(m) = \Omega(k^{20}).$ This implies (\ref{eq:case1alphabeta}) and hence completes the proof of Case 1. \subsubsection{Case 2} \label{sec:case2} By renaming our segments if necessary, we assume that $X_1, \dots, X_{M/2}$ are the segments that are not $(1/4)$-shattered. That is, for each of these segments $X_j$ ($j\in[M/2]$), there exists an $F_{i_j}'$ ($i_j\in[t]$) such that $\abs{\mathrm{Var}s(F_{i_j}')\cap X_j} \geq (1/4)\abs{X_j}$. By an averaging argument, there exists $b_j\in\{1,2\}$ such that $\abs{\mathrm{Var}s(F_{i_{j}}')\cap X_{j,b_j}} \geq (1/4)\abs{X_{j,b_j}}$. For each $j \leq M/2$, the restriction $\rho_j$ is sampled according to the algorithm $\mc{A}$ described in Section~\ref{sec:restriction}. We first condition on the choice (among the options $E_1,E_2,E_3$) made by the algorithm $\mc{A}$ in sampling each $\rho_j.$ We say that a $\rho_j$ is \emph{good} if the algorithm $\mc{A}$ decides on option $E_{b_j}\in \{E_1,E_2\}$ in sampling $\rho_j.$ For each $j$, $\rho_j$ is good with probability $1/3$. Let $\mc{E}_2$ denote the event that no $\rho_j$ is good for $j\in [M/2]$ and let $\bar{\mc{E}}_2$ be the complement of $\mc{E}_2$.
Clearly, we have \begin{equation} \label{eq:case2chernoff} \prob{\rho}{\mc{E}_2}\leq (2/3)^{M/2} \leq \exp(-\Omega(M)). \end{equation} Assume that $\mc{E}_2$ does not occur. Then, there is a $j\in[M/2]$ such that $\rho_j$ is good. By renaming segments once more, we can assume that $j=1$. Further, by renaming the $F_{i}'$s, we assume that $i_1 =1$; in particular, we have $\abs{\mathrm{Var}s(F_1')\cap X_1}\geq \abs{X_1}/4$. Finally, we can similarly assume that $b_1 = 1$, which implies that $\abs{\mathrm{Var}s(F_1')\cap X_{1,1}}\geq \abs{X_{1,1}}/4.$ We condition on any choice of the restrictions $\rho_j$ for $j > 1$; this fixes the sets $\tilde{Y}'_j$ and $\tilde{Z}'_j$ defined above for $j > 1$. Conditioned on all these events, we see that the random restriction $\rho_1$ sets all the variables in $X_{1,2}$ to $0$ deterministically, and hence can now be considered an $(X_{1,1},Y_{1,1},Z_{1,1})$-restriction, which we will call $\rho_1'$ for clarity. Note that $X_{1,1}, Y_{1,1}$ and $Z_{1,1}$ are $X^{(\Delta-1)}$-, $Y^{(\Delta-1)}$- and $Z^{(\Delta-1)}$-clones respectively. Recall that the $X^{(\Delta-1)}$-clone $X_{1,1}$ is made up of $m$ $X^{(\Delta-1)}$-segments $X_{1,1,1},\ldots,X_{1,1,m}$. We similarly have $Y_{1,1} = \bigcup_{j\in [m]} Y_{1,1,j}$ and $Z_{1,1} = \bigcup_{j\in [m]} Z_{1,1,j}$. The following claim is proved by a standard averaging argument. \begin{claim}\label{clm:popularity} There exist at least $m/8$ many $j\in [m]$ such that $F'_1$ is $1/8$-heavy in $X_{1,1,j}$ i.e., $|\mathrm{Var}s(F'_1)\cap X_{1,1,j}|\geq (1/8)\cdot |X_{1,1,j}|.$ \end{claim} Assuming Claim~\ref{clm:popularity}, we will first show how we can invoke the induction hypothesis to prove the desired result. By renaming if needed, let the $m/8$ segments from Claim~\ref{clm:popularity} be $X_{1,1, 1}, \dots, X_{1,1, m/8}$. 
The restriction $\rho_1'$ can be written as $\rho_{1,1}\circ\cdots\circ\rho_{1,m}$, where each $\rho_{1,\ell}$ ($\ell\in[m]$) is an $(X_{1,1,\ell},Y_{1,1,\ell},Z_{1,1,\ell})$-restriction sampled using the algorithm $\mc{A}$ in Section~\ref{sec:restriction}. We further condition on any choice of restrictions $\rho_{1,\ell}$ for all $\ell > m/8$. This fixes the sets $\tilde{Y}_{1,1,\ell}' := \rho_{1,\ell}(\mathrm{Var}s(F'_1)\cap X_{1,1,\ell})\cap Y_{1,1,\ell}$ and $\tilde{Z}_{1,1,\ell}'$ (defined similarly) for all $\ell > m/8$. Let $F''$ denote the restricted formula thus obtained and $\rho_1''$ the random restriction after this conditioning (i.e. $\rho_1'' = \rho_{1,1}\circ \cdots \circ \rho_{1,m/8}).$ We proceed to analyze $\mathrm{relrk}_{(\tilde{Y}'\cup Y',\tilde{Z}'\cup Z')}(F'|_\rho)$. Since $\tilde{Y}'\cup Y'$ (resp.\ $\tilde{Z}'\cup Z'$) can be partitioned as $\bigcup_{i\in [t]} \hat{Y}_i$ (resp.\ $\bigcup_{i\in [t]}\hat{Z}_i$), we know (using Proposition~\ref{prop:relrk} Item 3) that for any choice of $\rho$ consistent with our choices so far, \begin{equation} \label{eq:case2relrkF'} \mathrm{relrk}_{(\tilde{Y}'\cup Y',\tilde{Z}'\cup Z')}(F'|_\rho) = \prod_{i\in [t]}\mathrm{relrk}_{(\hat{Y}_i,\hat{Z}_i)}(F'_i|_\rho) \leq \mathrm{relrk}_{(\hat{Y}_1,\hat{Z}_1)}(F'_1|_\rho) = \mathrm{relrk}_{(\hat{Y}_1,\hat{Z}_1)}(F''|_{\rho_{1}''}). \end{equation} To bound the latter term, we invoke the induction hypothesis on $F''$ with $M$ replaced by $M'= m/8$ and $\varepsilon$ replaced by $\varepsilon' = 1/8$.
Here $\rho_1''$ is an $(X'', Y'', Z'')$-restriction where \begin{align*} X''&=\cup_{j\in[m/8]}X_{1,1,j}\\ Y''&=\cup_{j\in[m/8]}Y_{1,1,j}\\ Z''&=\cup_{j\in[m/8]}Z_{1,1,j} \end{align*} and $\mathrm{Var}s(F'')=\bar{X}\cup\bar{Y}\cup\bar{Z}$ where \begin{align*} \bar{X} &= \mathrm{Var}s(F_1')\cap X''\\ \bar{Y} &= \left(\cup_{j>1}\tilde{Y}_j'\right)\cup\left(\cup_{\ell>m/8}\tilde{Y}'_{1,1,\ell}\right)\cup \hat{Y}_{1,0}\\ \bar{Z} &= \left(\cup_{j>1}\tilde{Z}_j'\right)\cup\left(\cup_{\ell>m/8}\tilde{Z}'_{1,1,\ell}\right)\cup \hat{Z}_{1,0}. \end{align*} Note that by Claim~\ref{clm:popularity}, it follows that $|\mathrm{Var}s(F'')\cap X_{1,1,j}|\geq \varepsilon'\cdot |X_{1,1,j}|$ for each $j\in [M']$ and hence the induction hypothesis for product-depth $(\Delta -1)$ is applicable. From the induction hypothesis, we see that \begin{align*} \prob{}{\mathrm{relrk}_{(\hat{Y}_1,\hat{Z}_1)}(F''|_{\rho_1''}) \geq \exp(-k^{0.1}/(\Delta-1))} &\leq (\Delta-1) \exp(-k^{0.1}/(\Delta-1)). \end{align*} Using (\ref{eq:case2relrkF'}), we get \begin{align*} \prob{\rho}{\mathrm{relrk}_{(\tilde{Y}'\cup Y',\tilde{Z}'\cup Z')}(F'|_{\rho}) \geq \exp(-k^{0.1}/(\Delta-1))\mid \bar{\mc{E}}_2} &\leq (\Delta-1) \exp(-k^{0.1}/(\Delta-1)). \end{align*} By (\ref{eq:case2chernoff}), we thus have \begin{align*} \prob{}{\mathrm{relrk}_{(\tilde{Y}'\cup Y',\tilde{Z}'\cup Z')}(F'|_{\rho}) \geq \exp(-k^{0.1}/(\Delta-1))}&\leq (\Delta-1) \exp(-k^{0.1}/(\Delta-1)) + \exp(-\Omega(M))\\ &\leq \Delta \exp(-k^{0.1}/(\Delta-1)). \end{align*} The last inequality holds since $M \geq m/10 > k$. This finishes the proof of Case 2 modulo the (standard) proof of Claim~\ref{clm:popularity}, which we give below for completeness. \begin{proof}[Proof of Claim~\ref{clm:popularity}] For $j\in [m]$, let $p_j$ denote $\abs{\mathrm{Var}s(F_{1}')\cap X_{1,1,j}}/|X_{1,1,j}| = (m\abs{\mathrm{Var}s(F_{1}')\cap X_{1,1,j}})/|X_{1,1}|\,$. Since $\abs{\mathrm{Var}s(F_{1}')\cap X_{1,1}}\geq (1/4)\abs{X_{1,1}}$, we get that $\sum_{j=1}^mp_j \geq m/4$.
Assume that the number of $j$ such that $p_j \geq 1/8$ is strictly less than $m/8$. Then, we have \begin{align*} \sum_{j\in [m]} p_j \leq \sum_{j: p_j \geq 1/8} 1 + \sum_{j: p_j < 1/8} (1/8) < (m/8) + (1/8) \sum_{j: p_j < 1/8} 1 \leq m/8 + (1/8)\cdot m = m/4, \end{align*} yielding $\sum_{j\in [m]} p_j < m/4,$ contradicting the lower bound $\sum_{j\in [m]} p_j \geq m/4$ established above. This proves the claim. \end{proof} \subsubsection{Case 3} \label{sec:case3} Recall that in Case 3, there is some $F'_i$ ($i\in [t]$) that is $(\varepsilon/k)$-heavy in at least $M/k$ many segments $X_j.$ We will prove (\ref{eq:relrkPigate}) in this case using induction. W.l.o.g., assume that $i=1$. Since $\tilde{Y}'\cup Y'$ (resp.\ $\tilde{Z}'\cup Z'$) can be partitioned as $\bigcup_{i\in [t]} \hat{Y}_i$ (resp.\ $\bigcup_{i\in [t]}\hat{Z}_i$), we know (Proposition~\ref{prop:relrk} Item 3) that for any choice of $\rho,$ \begin{equation} \label{eq:case4relrkF'} \mathrm{relrk}_{(\tilde{Y}'\cup Y',\tilde{Z}'\cup Z')}(F'|_\rho) = \prod_{i\in [t]}\mathrm{relrk}_{(\hat{Y}_i,\hat{Z}_i)}(F'_i|_\rho) \leq \mathrm{relrk}_{(\hat{Y}_1,\hat{Z}_1)}(F'_1|_\rho). \end{equation} It is therefore sufficient to bound $\mathrm{relrk}_{(\hat{Y}_1,\hat{Z}_1)}(F'_1|_\rho)$. W.l.o.g., we assume that $F'_1$ is $(\varepsilon/k)$-heavy in segments $X_1,\ldots,X_{M/k}$. Each $X_j$ ($j\in [M/k]$) consists of two half-segments, i.e., $X_j = X_{j,1} \cup X_{j,2}$. By averaging, we know that there is some $\gamma\in [2]$ such that $F'_1$ is $(\varepsilon/k)$-heavy in at least half of $X_{1,\gamma},\ldots,X_{M/k,\gamma}$. W.l.o.g.\ we assume that $\gamma = 1$. By renaming the segments, let us assume that $F'_1$ is $(\varepsilon/k)$-heavy in $X_{1,1},\ldots,X_{M/2k,1}$. For each $j \leq M/2k$, the restriction $\rho_j$ is sampled according to the algorithm $\mc{A}$ described in Section~\ref{sec:restriction}.
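Averaging arguments of the kind used in Claim~\ref{clm:popularity} above, and again in the choice of $\gamma$ here, are pure counting facts and can be checked by brute force on small discretized instances. The sketch below (illustrative parameters only) verifies the statement of Claim~\ref{clm:popularity} for $m=4$: whenever the $p_j$ sum to at least $m/4$, at least $m/8$ of them are at least $1/8$.

```python
from fractions import Fraction
from itertools import product

def heavy_count(ps):
    """Number of entries of ps that are at least 1/8."""
    return sum(1 for p in ps if p >= Fraction(1, 8))

# Brute-force check for m = 4 over the grid {0, 1/16, ..., 1}.
m = 4
grid = [Fraction(i, 16) for i in range(17)]
claim_holds = all(
    heavy_count(ps) >= Fraction(m, 8)
    for ps in product(grid, repeat=m)
    if sum(ps) >= Fraction(m, 4)
)
```

The exhaustive check mirrors the contradiction in the proof: if fewer than $m/8$ entries reach $1/8$, the total sum cannot reach $m/4$.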
Let us say that $\rho_j$ is \emph{good} if the algorithm $\mc{A}$ decides on option $E_1$ in sampling $\rho_j.$ For each $j$, $\rho_j$ is good with probability $1/3$. For each $j\in [M/2k],$ we condition on whether or not $\rho_j$ is good. Let $\mc{E}_3$ denote the event that the number of good $\rho_j$ is at most $M/8k$. Since the different $\rho_j$ are sampled independently, a Chernoff bound (Theorem~\ref{thm:Chernoff} Item 1) tells us that \begin{equation} \label{eq:case4chernoff} \prob{\rho}{\mc{E}_3}\leq \exp(-\Omega(M/k)). \end{equation} Assume that the event $\mc{E}_3$ does not occur. By renaming segments once more, we may assume that $\rho_j$ is good for each $j\in [M/8k]$. We condition on any choice of the restrictions $\rho_j$ for $j > M/8k$; this fixes the sets $\tilde{Y}'_j$ and $\tilde{Z}'_j$ for $j > M/8k$. Also, note that since we have conditioned on choosing the option $E_1$ for sampling restrictions $\rho_1,\ldots,\rho_{M/8k}$, all variables in $X_{j,2}$ ($j\in [M/8k]$) are set to $0$ with probability $1$. We will therefore think of $\rho_j$ as an $(X_{j,1},Y_{j,1},Z_{j,1})$-restriction for the rest of the proof. We will now prove our result by considering two subcases. Recall that each half-segment $X_{j,1}$ is an $X^{(\Delta-1)}$-clone and hence a union of $m$ $X^{(\Delta-1)}$-segments $X_{j,1,1},\ldots,X_{j,1,m}.$ We will refer to these as \emph{sub-segments} of $X_{j,1}.$ \paragraph*{Case (3a): There are at least $M/16k$ many $j\in [M/8k]$ such that $F'_1$ is at least $(\varepsilon/2k)$-heavy in at least $m^{0.5}$ many sub-segments of $X_{j,1}$.} Without loss of generality, let us assume that the hypothesis of Case (3a) holds for the half-segments $X_{1,1},\ldots,X_{M/16k,1}.$ That is, $F'_1$ is $(\varepsilon/2k)$-heavy in at least $m^{0.5}$ many sub-segments of $X_{j,1}$ for each $j\in [M/16k]$. 
Here, we will be able to apply induction on $F'_1$ (which is a $\Sigma(\Pi\Sigma)^{(\Delta-1)}$ formula) to obtain an upper bound on its relative rank under the random restriction. We will apply the induction hypothesis with $M$ replaced by $M' = (M/16k)\cdot m^{0.5}$ and $\varepsilon$ replaced by $\varepsilon' = \varepsilon/2k$. We claim that we have $\varepsilon' \geq (1/M')^{0.25}$ and $M'\geq m/10$ (which are needed to apply induction). The latter inequality is trivial as $M'\geq M \geq m/10$ (note that $M' \geq M$ since $m^{0.5}\geq 16k$). For the former, note that we have \[ \frac{1}{(\varepsilon')^4} = \frac{(2k)^4}{\varepsilon^4} \leq (2k)^4 M \leq \frac{M \sqrt{m}}{16k} = M' \] where the first inequality uses the fact that $\varepsilon \geq 1/M^{0.25}$ and the last inequality uses the fact that $k \leq m^{0.05}$ (so that $16k\cdot (2k)^4 = 256k^5 \leq \sqrt{m}$ for $m$ large enough). This shows that $\varepsilon'\geq (1/M')^{0.25}$ as required. We apply induction after some further processing. By renaming if necessary, let us assume that $F_1'$ is $(\varepsilon/2k)$-heavy in the first $m^{0.5}$ sub-segments $X_{j,1,1}, \cdots, X_{j,1,m^{0.5}}$ of $X_{j,1}$ for each $j\in [M/16k]$. Conditioned on what is known about $\rho$ so far, for each $j\in [M/8k]$, the restriction $\rho_j$ can be written as $\rho_j=\rho_{j,1}\circ\cdots\circ\rho_{j,m}$ where each $\rho_{j,\ell}$ for $\ell \in[m]$ is an $(X_{j,1,\ell},Y_{j,1,\ell},Z_{j,1,\ell})$-restriction sampled as per the sampling algorithm $\mc{A}$. We further condition on any choice of the restrictions $\rho_{j_1}$ for all $j_1 > M/16k$ and $\rho_{j_2, \ell}$ for all $j_2\in[M/16k]$ and $\ell > m^{0.5}$. This fixes the sets $\tilde{Y}_{j_1}'$, $\tilde{Z}_{j_1}'$ ($M/16k < j_1 \leq M/8k$) and $\tilde{Y}'_{j_2,1, \ell} := \rho(\mathrm{Var}s(F'_1)\cap X_{j_2,1,\ell})\cap Y_{j_2,1,\ell}$ and $\tilde{Z}'_{j_2,1, \ell} := \rho(\mathrm{Var}s(F'_1)\cap X_{j_2,1,\ell})\cap Z_{j_2,1,\ell}$ ($j_2\in [M/16k], \ell > m^{0.5}$).
Conditioned on our choices so far, the restriction $\rho$ can now be identified with an $(X'', Y'', Z'')$-restriction $\rho''$ where \begin{align*} X''&=\cup_{j\in[M/16k]}\cup_{\ell\in[m^{0.5}]}X_{j,1,\ell}\\ Y''&=\cup_{j\in[M/16k]}\cup_{\ell\in[m^{0.5}]}Y_{j,1,\ell}\\ Z''&=\cup_{j\in[M/16k]}\cup_{\ell\in[m^{0.5}]}Z_{j,1,\ell} \end{align*} and the restricted formula, which we denote by $F_1''$, satisfies $\mathrm{Var}s(F''_1)=\bar{X}\cup\bar{Y}\cup\bar{Z}$ where \begin{align*} \bar{X} &= \mathrm{Var}s(F_1')\cap X''\\ \bar{Y} &= \left(\cup_{j> M/16k}\tilde{Y}_j'\right)\cup\left(\cup_{j\in [M/16k]}\cup_{\ell>m^{0.5}}\tilde{Y}'_{j,1,\ell}\right)\cup \hat{Y}_{1,0}\\ \bar{Z} &= \left(\cup_{j>M/16k}\tilde{Z}_j'\right)\cup\left(\cup_{j \in [M/16k]}\cup_{\ell > m^{0.5}}\tilde{Z}'_{j,1,\ell}\right)\cup \hat{Z}_{1,0}. \end{align*} Based on the discussion above, we can now invoke the induction hypothesis with $M' = (M/16k)m^{0.5}$ and $\varepsilon' = \varepsilon/2k$ on $F''_1$ to obtain \[ \prob{\rho''}{\mathrm{relrk}_{(\hat{Y}_1,\hat{Z}_1)}(F''_1|_{\rho''}) \geq \exp(-k^{0.1}/(\Delta-1))}\leq (\Delta-1)\exp(-k^{0.1}/(\Delta-1)). \] Since the above holds for an arbitrary sequence of fixings after it was determined that $\mc{E}_3$ did not hold, we have thus shown the following. \begin{equation} \label{eq:case4aresult} \prob{\rho}{\mathrm{relrk}_{(\hat{Y}_1,\hat{Z}_1)}(F'_1|_\rho) \geq \exp(-k^{0.1}/(\Delta-1))\ |\ \bar{\mc{E}}_3}\leq (\Delta-1)\exp(-k^{0.1}/(\Delta-1)). \end{equation} We will use this below, after the proof of Case 3(b), to finish the proof in this case.
\paragraph*{Case (3b): There are at least $M/16k$ many $j\in [M/8k]$ such that $F'_1$ is at least $(\varepsilon/2k)$-heavy in fewer than $m^{0.5}$ many sub-segments of $X_{j,1}$.} W.l.o.g., assume that for each $j\in [M/16k]$, there is an $h_j < \sqrt{m}$ such that $F_1'$ is $(\varepsilon/2k)$-heavy only in the sub-segments $X_{j,1,1},\ldots,X_{j,1,h_j}.$ For each $j \in [M/16k]$, let $W_j$ be $X_{j,1} \cap \mathrm{Var}s(F_1')$. By our assumption in Case 3, we know that \begin{equation} \label{eq:lower} |W_j| \geq \frac{\varepsilon}{k} \cdot |X_{j,1}|~~~~\forall j \in [M/16k] \end{equation} Also, we know that for any $\ell \in \{h_j+1, \ldots, m\}$, $|X_{j,1,\ell} \cap W_j| < \frac{\varepsilon}{2k}\cdot |X_{j,1,\ell}|$ for each $j \in [M/16k]$. Therefore, we get that \begin{equation} \label{eq:upper} \sum_{\ell > h_j} |X_{j,1,\ell} \cap W_j| \leq \frac{\varepsilon}{2k}\cdot |X_{j,1}| ~~~~\forall j \in [M/16k] \end{equation} Using Equations~(\ref{eq:lower}) and (\ref{eq:upper}), we get that \begin{align*} \sum_{\ell \in [h_j]} |X_{j,1,\ell} \cap W_j| \geq \frac{\varepsilon}{2k}\cdot |X_{j,1}|~\mbox{ for each } j \in [M/16k]. \end{align*} Thus by averaging and by using the fact that $h_j \leq m^{0.5}$, we get that for every $j \in [M/16k]$ there exists an $\ell_j \in [h_j]$ such that $|X_{j,1,\ell_j} \cap W_j| \geq \frac{\varepsilon}{2k}\cdot \frac{|X_{j,1}|}{m^{0.5}}$. By renaming if necessary, let $\ell_j = 1$ for all $j\in[M/16k]$. Notice that in fact $|X_{j,1}| = m\cdot |X_{j,1,\ell}|$ for any $\ell \in [m]$. Therefore, we get \begin{align} \label{eq:new-e} |X_{j,1,1} \cap \mathrm{Var}s(F'_{1})| = |X_{j,1,1} \cap W_j| & \geq \frac{\varepsilon}{2k}\cdot \frac{|X_{j,1,1}| \cdot m}{m^{0.5}} = \frac{\varepsilon}{2k} \cdot m^{0.5} \cdot |X_{j,1,1}| & \forall j \in [M/16k] \end{align} We will apply induction on the formula $F'_1$ and the sub-segments $X_{j,1,1}$ ($j\in [M/16k]$).
For the induction, the parameter $M$ will thus be replaced by $M' = M/16k$ and $\varepsilon$ by $\varepsilon' = \frac{\varepsilon}{2k} \cdot m^{0.5}$ (we can take this $\varepsilon'$ by (\ref{eq:new-e}) above). To check that induction is possible with these parameters, we need to ensure that $\varepsilon' \geq 1/M'^{0.25}$ and $M'\geq m/10$. The first inequality follows as in Case (3a). Formally, we note that \[ \frac{1}{(\varepsilon')^4} = \frac{(2k)^4}{\varepsilon^4 m^2} \leq \frac{M}{16k} = M' \] where the inequality follows from the fact that $\varepsilon \geq 1/M^{0.25}$ and $k \leq m^{0.05}.$ This shows that $\varepsilon' \geq (1/M')^{0.25}.$ Now we will show that $M'\geq m/10$. We know that $M\geq m/10$, but in this case we will be able to get a better lower bound on the value of $M$, which will then give us the intended lower bound on $M'$. For this, first observe that $\varepsilon' = (\varepsilon/2k)\cdot m^{0.5}$. Note that by (\ref{eq:new-e}) and the fact that $|X_{j,1,1} \cap \mathrm{Var}s(F'_{1})|\leq |X_{j,1,1}|$, we get $$(\varepsilon/2k)\cdot m^{0.5}\leq 1 \Leftrightarrow \frac{1}{\varepsilon} \geq \frac{\sqrt{m}}{2k}.$$ We also know that $\varepsilon \geq 1/M^{0.25}$ and therefore, $M \geq 1/\varepsilon^4 \geq m^2/(2k)^4$. As $M' = M/16k$, we get that $M' \geq \frac{m^2}{16k \cdot (2k)^4 } \geq m$. This gives us (a bound that is slightly better than) the desired bound on $M'$. We now apply induction. As in Case (3a), some processing is needed. Note that conditioned on what is known about $\rho$ so far, for each $j\in [M/8k]$, the restriction $\rho_j$ can be written as $\rho_j=\rho_{j,1}\circ\cdots\circ\rho_{j,m}$ where each $\rho_{j,\ell}$ for $\ell \in[m]$ is an $(X_{j,1,\ell},Y_{j,1,\ell},Z_{j,1,\ell})$-restriction sampled as per the sampling algorithm $\mc{A}$. We further condition on any choice of the restrictions $\rho_{j_1}$ for all $j_1 > M/16k$ and $\rho_{j_2, \ell}$ for all $j_2\in[M/16k]$ and $\ell > 1$.
This fixes the sets $\tilde{Y}_{j_1}'$, $\tilde{Z}_{j_1}'$ ($M/16k < j_1 \leq M/8k$) and $\tilde{Y}'_{j_2,1, \ell} := \rho(\mathrm{Var}s(F'_1)\cap X_{j_2,1,\ell})\cap Y_{j_2,1,\ell}$ and $\tilde{Z}'_{j_2,1, \ell} := \rho(\mathrm{Var}s(F'_1)\cap X_{j_2,1,\ell})\cap Z_{j_2,1,\ell}$ ($j_2\in [M/16k], \ell > 1$). Conditioned on our choices so far, the restriction $\rho$ can be identified with an $(X'', Y'', Z'')$-restriction $\rho''$ where \begin{align*} X''&=\cup_{j\in[M/16k]}X_{j,1,1}\\ Y''&=\cup_{j\in[M/16k]}Y_{j,1,1}\\ Z''&=\cup_{j\in[M/16k]}Z_{j,1,1} \end{align*} and the restricted formula $F_1''$ satisfies $\mathrm{Var}s(F''_1)=\bar{X}\cup\bar{Y}\cup\bar{Z}$ where \begin{align*} \bar{X} &= \mathrm{Var}s(F_1')\cap X''\\ \bar{Y} &= \left(\cup_{j> M/16k}\tilde{Y}_j'\right)\cup\left(\cup_{j\in [M/16k]}\cup_{\ell>1}\tilde{Y}'_{j,1,\ell}\right)\cup \hat{Y}_{1,0}\\ \bar{Z} &= \left(\cup_{j>M/16k}\tilde{Z}_j'\right)\cup\left(\cup_{j \in [M/16k]}\cup_{\ell > 1}\tilde{Z}'_{j,1,\ell}\right)\cup \hat{Z}_{1,0}. \end{align*} Based on the discussion above, we can now invoke the induction hypothesis with $M'$ and $\varepsilon'$ (as defined above) on $F''_1$ to obtain \[ \prob{\rho''}{\mathrm{relrk}_{(\hat{Y}_1,\hat{Z}_1)}(F''_1|_{\rho''}) \geq \exp(-k^{0.1}/(\Delta-1))}\leq (\Delta-1)\exp(-k^{0.1}/(\Delta-1)). \] Hence, we obtain as in Case (3a), \begin{equation} \label{eq:case4bresult} \prob{\rho}{\mathrm{relrk}_{(\hat{Y}_1,\hat{Z}_1)}(F'_1|_\rho) \geq \exp(-k^{0.1}/(\Delta-1))\ |\ \bar{\mc{E}}_3}\leq (\Delta-1)\exp(-k^{0.1}/(\Delta-1)). \end{equation} We now see how to finish the proof in both Cases (3a) and (3b).
Using (\ref{eq:case4aresult}) and (\ref{eq:case4bresult}), we see that in each of the cases (3a) and (3b), \begin{align*} \prob{\rho}{\mathrm{relrk}_{(\hat{Y}_1,\hat{Z}_1)}(F'_1|_{\rho}) \geq \exp(-k^{0.1}/(\Delta - 1))}&\leq (\Delta-1)\exp(-k^{0.1}/(\Delta - 1)) + \prob{\rho}{\mc{E}_{3}} \\ &\leq (\Delta-1)\exp(-k^{0.1}/(\Delta - 1)) + \exp(-\Omega(M/k))\\ &\leq \Delta \exp(-k^{0.1}/(\Delta - 1)) \end{align*} where the second inequality follows from (\ref{eq:case4chernoff}) and the last inequality from the fact that $M = \Omega(m) = \Omega(k^{20}).$ By (\ref{eq:case4relrkF'}), we thus get \[ \prob{\rho}{\mathrm{relrk}_{(\tilde{Y}'\cup Y',\tilde{Z}'\cup Z')}(F'|_{\rho}) \geq \exp(-k^{0.1}/(\Delta - 1))} \leq \Delta \exp(-k^{0.1}/(\Delta - 1)) \] proving inequality (\ref{eq:relrkPigate}) in this case. This completes the proof in Case 3. \end{document}
\begin{document} \pagenumbering{roman} \thispagestyle{empty} \title{On the $ER(2)$-cohomology of some odd-dimensional projective spaces} \author{Romie Banerjee} \date{} \maketitle \begin{abstract} Kitchloo and Wilson have used the homotopy fixed point spectrum $ER(2)$ of the classical complex-oriented Johnson-Wilson spectrum $E(2)$ to deduce certain non-immersion results for real projective spaces. $ER(n)$ is a $2^{n+2}(2^n-1)$-periodic spectrum. The key result to use is the existence of a stable cofibration $\Sigma^{\lambda(n)}ER(n) \rightarrow ER(n) \rightarrow E(n)$ connecting the real Johnson-Wilson spectrum with the classical one. The value of $\lambda(n)$ is $2^{2n+1}-2^{n+2}+1$. We extend Kitchloo-Wilson's results on non-immersions of real projective spaces by computing the second real Johnson-Wilson cohomology $ER(2)$ of the odd-dimensional real projective spaces $RP^{16K+9}$. This enables us to solve certain non-immersion problems of projective spaces using obstructions in $ER(2)$-cohomology. {\bf Keywords:} Johnson-Wilson theory, homotopy fixed points {\bf AMS Subject Classification:} 55N20, 55N22, 55N91 \end{abstract} \section{Introduction} The spectrum $MU$ of complex cobordism comes naturally equipped with an action of $\mathbb{Z}/2$ by complex conjugation. Hu and Kriz in \cite{HK} have used this action to construct genuine $\mathbb{Z}/2$-equivariant spectra $E\mathbb{R}(n)$ from the complex-oriented spectra $E(n)$. Kitchloo and Wilson in \cite{KW1} have used the homotopy fixed point spectrum of this to solve certain non-immersion problems for real projective spaces. The homotopy fixed point spectrum $ER(n)$ is $2^{n+2}(2^n-1)$-periodic compared to the $2(2^n-1)$-periodic $E(n)$. The spectrum $ER(1)$ is $KO_{(2)}$ and $E(1)$ is $KU_{(2)}$.
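For small $n$, the numbers involved can be checked directly from the formula $\lambda(n)=2^{2n+1}-2^{n+2}+1$ and the periodicities $2^{n+2}(2^n-1)$ and $2(2^n-1)$:
\[
\lambda(1) = 2^{3}-2^{3}+1 = 1, \qquad \lambda(2) = 2^{5}-2^{4}+1 = 17,
\]
so $ER(1)=KO_{(2)}$ is $2^{3}(2^1-1)=8$-periodic, as expected, while $ER(2)$ is $2^{4}(2^2-1)=48$-periodic compared to the $2(2^2-1)=6$-periodic $E(2)$.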
Kitchloo and Wilson have demonstrated the existence of a stable cofibration connecting $E(n)$ and $ER(n)$, \begin{equation} \begin{CD} \Sigma^{\lambda(n)} ER(n) @>x>> ER(n) @>>> E(n) \end{CD} \end{equation} where $\lambda(n) = 2^{2n+1}-2^{n+2}+1$. This leads to a Bockstein spectral sequence for $x$-torsion. It is known that $x^{2^{n+1}-1}=0$ so there can be only $2^{n+1}-1$ differentials. For the case of our interest, $n=2$, there are only 7 differentials. From \cite{James} we know that if there is an immersion of $RP^b$ in $\mathbb{R}^c$ then there is an axial map \begin{equation} RP^b \times RP^{2^L-c-2} \rightarrow RP^{2^L-b-2}. \end{equation} For $b=2n$ and $c=2k$, Don Davis shows in \cite{Davis84} that there is no such map when $n=m+\alpha(m)-1$ and $k=2m-\alpha(m)$, where $\alpha(m)$ is the number of ones in the binary expansion of $m$, by finding an obstruction to James's map (2) in $E(2)$-cohomology. Kitchloo and Wilson get new non-immersion results by computing obstructions in $ER(2)$-cohomology. In this paper we extend Kitchloo-Wilson's results by computing the $ER(2)$-cohomology of the odd-dimensional projective space $RP^{16K+9}$. This will give us new non-immersion results. The main results are the following. \begin{theorem} A 2-adic basis of $ER(2)^{8*}(RP^{16K+9},*)$ is given by the elements $$\alpha^ku^j, \,\, (k \geq 0,1\le j \le 8K+4)$$ $$v_2^4\alpha^k u^j, \,\, (k \geq 1, 1\leq j \leq 8K+4)$$ $$v_2^4u^j,\,\,(4 \leq j \leq 8K+4)$$ $$x \alpha^ki_{16K+9}, \,\, xv_2^4\alpha^k i_{16K+9},\,\,(k \geq 0)$$ \end{theorem} \begin{theorem} Let $\alpha(m)$ be the number of ones in the binary expansion of $m$. If $(m,\alpha(m)) \equiv$ (6,2) or (1,0) mod 8, $RP^{2(m+\alpha(m)-1)}$ does not immerse in $\mathbb{R}^{2(2m -\alpha(m))+1}$. \end{theorem} This gives us non-immersion results that are often different from those of \cite{KW1} and \cite{KW2}.
Using Davis's table \cite{Davis}, the first new result is that $RP^{2^{13}-2}$ does not immerse in $\mathbb{R}^{2^{14}-59}$. {\bf Acknowledgements}\, This paper came out of my PhD dissertation at Johns Hopkins. I owe many thanks to my advisor Steve Wilson and to Michael Boardman. \tableofcontents \pagenumbering{arabic} \section{The Bockstein spectral sequence} The results obtained in this section can be found in \cite{KW1}. We reproduce them here for the convenience of the reader. We have the stable cofibration $$\xymatrix{ \Sigma^{\lambda(n)}ER(n) \ar[r]^x &ER(n) \ar[r] &E(n)\\ }$$ where $x\in ER(n)^{-\lambda(n)}$ and $\lambda(n) = 2^{2n+1}-2^{n+2}+1$. The cofibration gives us a long exact sequence \begin{equation} \xymatrix{ ER(n)^*(X) \ar[rr]^x &&ER(n)^*(X) \ar[dl]^{\rho}\\ &E(n)^*(X) \ar[ul]^{\partial}\\ } \end{equation} where $x$ lowers the degree by $\lambda(n)$ and $\partial$ raises the degree by $\lambda(n)+1$. This leads to the Bockstein spectral sequence, which will completely determine $M=ER(n)^*(X)/(x)$ as a subring of $E(n)^*(X)$. We know that $x^{2^{n+1}-1}=0$ so there can be only $2^{n+1}-1$ differentials. We filter $M$, $$ 0= M_0 \subset M_1 \subset M_2 \subset \ldots \subset M_{2^{n+1}-1} =M$$ by submodules $$M_r = \hbox{Ker}\left[x^r:\frac{ER(n)^*(X)}{x} \rightarrow \frac{x^rER(n)^*(X)}{x^{r+1}}\right]$$ so that $M_r/M_{r-1}$ gives the $x^r$-torsion elements of $ER(n)^*(X)$ that are non-zero in $M$. We collect the basic facts about the spectral sequence in the following theorem. $E(n)$ is a complex oriented spectrum with a complex conjugation action. Denote this action by $c$. \begin{theorem} \cite[Theorem 4.2]{KW1} In the Bockstein spectral sequence for $ER(n)^*(X)$ \begin{enumerate} \item The exact couple (3) gives rise to a spectral sequence, $E^r$, of $ER(n)^*$-modules, starting with $$E^1\simeq E(n)^*(X).$$ \item $E^{2^{n+1}}=0$ \item $\hbox{Im}\, d^r \simeq M_r/M_{r-1}$. \item The degree of $d^r$ is $r\lambda(n)+1$.
\item $d^r(ab) = d^r(a)b +c(a)d^r(b)$ \item $d^1(z) = v_n^{-(2^n-1)}(1-c)(z)$ where $c(v_i)=-v_i$. \item If $c(z)=z$ in $E^1$, then $d^1(z)=0$. If $c(z)=z$ in $E^r$ then $d^r(z^2)=0$. \item The following are all vector spaces over $\mathbb{Z}/2$: $$M_j/M_i,\, (j\geq i> 0)\,\, \hbox{and}\, E^r, (r\geq 2).$$ \end{enumerate} \end{theorem} Note that the image of $ER(n)^*(X) \rightarrow E(n)^*(X)$ consists of targets of the differentials, so the differentials are always trivial on it. Also, anything in the image is fixed by the action of $c$. Since $ER(n)^*(-)$ is $2^{n+2}(2^n-1)$-periodic we will consider it as graded over $\mathbb{Z}/(2^{n+2}(2^n-1))$. We have to do the same then for $E(n)^*(-)$. We can do this by setting the unit $v_n^{2^{n+1}}= 1$ in the homotopy of $E(n)$. (This does not lose any information since we can always recover the original by inserting powers of $v_n^{2^{n+1}}$ to make the degrees match.) \subsection{The spectral sequence for $ER(2)^*$} For the Bockstein spectral sequence of Theorem 2.1 to be useful, we need to know the ring $ER(n)^*$. From now on we concentrate on the case $n=2$. This spectral sequence begins with $E^1= E(2)^*$, which is just a free $\mathbb{Z}_{(2)}[v_1]$-module on a basis given by $v_2^i$ for $0\leq i<8$. We are grading mod 48. Since all elements of $E^1$ have even degree and $\deg d^r=17r+1$ is odd for even $r$, $d^r=0$ for $r$ even. As $E^8=0$, we only have $d^1,d^3,d^5,d^7$ to consider. We have the differential $d^1$ acting as follows: $$d^1(v_2^{2s+1})= v_2^{-3}(1-c)v_2^{2s+1}= v_2^{-3}2v_2^{2s+1}= 2v_2^{2s-2}.$$ Similarly $d^1(v_2^{2s})= 0$. However, multiplication by $v_1$ does not behave well with respect to this differential. The problem is that $v_1 \in E(2)^*$ does not lift to $ER(2)^*$. We will need a substitute for $v_1$. We shall use the element $\alpha = v_1^{ER(2)} \in ER(2)^{\lambda(2)-1}$ (see \cite[page 13]{KW1}). The image of $\alpha$ is $v_1v_2^5 \in E(2)^{-32}$.
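The degree bookkeeping here is a quick check. In the mod 48 grading, $\deg v_1 = -2$ and $\deg v_2 = -6$, so
\[
\deg(v_1v_2^5) = -2 + 5\cdot(-6) = -32 \equiv 16 = \lambda(2)-1 \pmod{48},
\]
consistent with $\alpha \in ER(2)^{\lambda(2)-1}$ mapping to $v_1v_2^5$.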
Because $v_2$ is a unit, this is a good substitute for the ordinary $v_1$. Furthermore this is invariant under $c$ because it is in the image of the map from $ER(2)^*$ and is a permanent cycle. Or we could just see this by observing that there are an even number of $v$'s. We rewrite the homotopy of $E(2)$ as $\mathbb{Z}_{(2)}[\alpha,v_2^{\pm 1}]$ but again set $v_2^8=1$. Now back to the computation of $d^1$ on $v_2^{2s+1}$, where the $E^1$-term is a free $\mathbb{Z}_{(2)}[\alpha]$-module on generators $v_2^i,\, 0\leq i<8$. $$d^1(\alpha^k v_2^{2s+1}) = d^1(\alpha^k)v_2^{2s+1} + c(\alpha^k)d^1(v_2^{2s+1}) = 0+\alpha^k 2v_2^{2s-2}.$$ But this really follows from the fact that we have a spectral sequence of $ER(2)^*$-modules. Thus the $d^1$-cycles form a free $\mathbb{Z}_{(2)}[\alpha]$-module generated by $\{ 1, v_2^2, v_2^4, v_2^6 \}$ and the $d^1$-boundaries form the free submodule with basis $\{ \alpha_0,\alpha_1,\alpha_2,\alpha_3 \}$, where $\alpha_i = 2v_2^{2i}$. In particular, $\alpha_0=2$. Thus $E^2 = E^3$ is the free $\mathbb{F}_2[\alpha]$-module with the basis (the images of) $\{ 1, v_2^2,v_2^4,v_2^6 \}$. By \cite{KW1}, $d^3(v_2^2) =\alpha v_2^4$. Since $d^3$ is a derivation, $d^3(v_2^6)=\alpha$, and the only elements of $E^4$ are 1 and $v_2^4$. Since $\deg d^5= 38$ is not a multiple of $8$, while every element of $E^4$ has degree divisible by $8$ mod 48, $d^5=0$. We must have $d^7(v_2^4)=1$ to make $E^8=0$. We can read off $M$ as a $\mathbb{Z}_{(2)}[\alpha]$-submodule of $E^1$. $M_1 = \hbox{Im}d^1$ is the free module on basis $\{ \alpha_0,\alpha_1,\alpha_2,\alpha_3 \}$. $M_3$ is generated as a module by adding the elements $\alpha$ and $w = \alpha v_2^4$, which make $M_3/M_1$ the free $\mathbb{F}_2[\alpha]$-module with basis $\{ \alpha, w \}$. Finally, the only new element of $M_7$ is 1. The rest of the module structure is given by \begin{equation} 2\alpha = \alpha\alpha_0, \quad 2w=\alpha\alpha_2, \quad 2\cdot 1 =\alpha_0, \quad \alpha\cdot 1 = \alpha \end{equation} Further, $M$ is a subring of $E^1$, generated by $\alpha_1, \alpha_2, \alpha_3, \alpha$ and $w$.
The products not already given are \begin{eqnarray} \alpha_s\alpha_t &=& 2\alpha_{s+t} \,\,\,\,(\hbox{taking} \,\,s+t \,\hbox{mod}\, 4)\\ w\alpha_s &=& \alpha\alpha_{s+2} \,\,\,\,(\hbox{taking} \,\,s+2 \,\hbox{mod}\, 4)\\ w^2 &=& \alpha^2 \end{eqnarray} To obtain $ER(2)^*$, we must unfilter $M= ER(2)^*/(x)$ and add the generator $x$. We lift each generator $\alpha_s$ and $w$ to $ER(2)^*$, keeping the same names, with the relations \begin{equation} \alpha_sx=0,\alpha x^3=0,wx^3=0,x^7=0. \end{equation} By sparseness, each $\alpha_s$ and $w$ lifts {\it uniquely} to $ER(2)^*$; further, the module actions (4) and multiplications (5), (6), (7) also lift uniquely and hold in $ER(2)^*$, not merely mod$(x)$. \begin{prop}\cite[Section 5]{KW1} $ER(2)^*$ is graded over $\mathbb{Z}/48$. It is generated as a ring by elements, $$x,w,\alpha,\alpha_1,\alpha_2,\alpha_3$$ of degrees $-17,-8,-32,-12,-24$ and $-36$ respectively, with relations and products as listed above. \end{prop} \subsection{The Bockstein Spectral Sequence for $ER(2)^*(RP^{\infty})$} As always, we have the split short exact sequence $$0 \rightarrow ER(2)^*(X,*) \rightarrow ER(2)^*(X) \rightarrow ER(2)^*(*) \rightarrow 0$$ for any space $X$ with basepoint $*$. Now that we know $ER(2)^*(*) = ER(2)^*$, we will concentrate on $ER(2)^*(X,*)$. Nevertheless, we need to use the action of $ER(2)^*$ on $ER(2)^*(X)$ and hence $ER(2)^*(X,*)$; furthermore, this action extends to actions of the Bockstein spectral sequences. The $E(2)^*$-cohomology of $RP^{\infty}$ can be computed from the Gysin sequence $$\xymatrix{ \ldots E(2)^{k-2}(\mathbb{C}P^{\infty}) \ar[r]^{[2](x_2)} &E(2)^k(\mathbb{C}P^{\infty}) \ar[r] &E(2)^k(\widetilde{RP^{\infty}}) \ar[r] &E(2)^{k-1}(\mathbb{C}P^{\infty}) \ldots\\ }$$ where $E(2)^*(\mathbb{C}P^{\infty}) \simeq E(2)^*[[x_2]]$, $x_2 \in E(2)^{2}(\mathbb{C}P^{\infty})$. From above $E(2)^*(RP^{\infty}) \simeq E(2)^*[[x_2]]/([2](x_2))$. 
Recall that $ER(2)^*(RP^{\infty}) = ER(2)^*[[u]]/([2](u))$ where we have $u \in ER(2)^{1-\lambda(2)}(RP^{\infty})$. We will replace the $x_2$ by the image of $u \in ER(2)^{-16}(RP^{\infty})$, which we also call $u \in E(2)^{-16}(RP^{\infty})$; this element is really $v_2^3x_2$. Likewise we replace the usual $v_1 \in E(2)^{-2}$ with $v_2^5v_1 =\alpha \in E(2)^{-32}$ which comes from $\alpha \in ER(2)^{-32}$. The element $w \in ER(2)^{-8}$ maps to $\alpha v_2^4 = v_2v_1 \in E(2)^{-8}$. These changes are necessary because $x_2$ and $v_1$ are not in the image of $ER(2)$-cohomology. We will describe our groups in terms of a {\it 2-adic basis} in the sense of \cite{KW1}, i.e., a set of elements such that any element in our group can be written as a unique sum of these elements with coefficients 0 or 1 (where the sum is allowed to be a formal power series in $u$). In the ring $E(2)^*(RP^{\infty})$ we have $2u = \alpha u^2 + \ldots$, therefore the 2-adic basis is given by $v_2^i\alpha^ku^j, \,(0\leq i <8, 0\leq k, 1\leq j)$. The original relation $[2](u) = 2u+_F v_1u^2 +_F v_2u^4$ for $E\mathbb{R}(2)$ converts to the relation \begin{equation} [2](u)=2u+_F \alpha u^2 +_F u^4 \end{equation} since $v_1$ is replaced by $v_1(v_2^3)^{-1}=v_1v_2^5=\alpha$ and $v_2$ is replaced by $v_2(v_2^3)^{-3} = v_2^{-8}=1$. Because $2x=0$, $x$ times the relation (9) gives us $0=x(\alpha u^2 +_F u^4)$. Therefore from the point of view of $x^1$-torsion $\alpha u^2$ can be replaced with $u^4+\ldots$. Similarly, if we multiply by $x^3$ and use the relation $x^3\alpha =0$ we end up with $x^3u^4 =0$. Since $u \in E(2)^*(RP^{\infty})$ is in the image from $ER(2)^*(RP^{\infty})$, our differentials commute with multiplication by $u$ and also commute with multiplication by $\alpha$. The $d^1$ differential creates a relation coming from our relation $0=2u +_F \alpha u^2 +_F u^4$ when $2u$ is set to zero. So in $E^2$, we have $\alpha u^2 \equiv u^4+\ldots$.
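The coefficient change behind relation (9) follows a simple pattern: substituting $x_2 = v_2^{-3}u$ and rescaling by $v_2^3$, a term $a\,x_2^k$ in the 2-series contributes $a\,v_2^{3(1-k)}u^k$, so (using $v_2^8=1$)
\[
v_1x_2^2 \longmapsto v_1v_2^{-3}u^2 = v_1v_2^{5}u^2 = \alpha u^2, \qquad v_2x_2^4 \longmapsto v_2\,v_2^{-12}u^4 \cdot v_2^{3} = v_2^{-8}u^4 = u^4.
\]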
The Bockstein spectral sequence goes like this: \begin{theorem}\cite[Theorem 8.1]{KW1} $E^1 =E(2)^*(RP^{\infty},*)$ is represented by $$v_2^i\alpha^ku^j \hspace{3mm} (0\leq i\le 7, \hspace{3mm} 0\leq k, \hspace{3mm} 1\leq j).$$ $$d^1(v_2^{2s-5}\alpha^k u^j)=2v_2^{2s}\alpha^k u^j = v_2^{2s}\alpha ^{k+1}u^{j+1}+ \ldots$$ $E^2=E^3$ is given by: $$v_2^{2s}\alpha^k u, \hspace{3mm} v_2^{2s}u^j \hspace{3mm} (2\leq j, \hspace{3mm} 0\leq s\le 4, \hspace{3mm} 0 \leq k)$$ $$d^3(v_2^{4s-2}\alpha ^k u) = v_2^{4s}\alpha ^{k+1}u,\,\,\,d^3(v_2^{4s-2}u^j) = v_2^{4s}\alpha u^j = v_2^{4s}u^{j+2}+\ldots$$ $E^4=E^5=E^6=E^7$ is given by:$$ v_2^4u^{\{1-3\}}, \hspace{3mm} u^{\{1-3\}}$$ $$d^7(v_2^4u^{\{1-3\}})=u^{\{1-3\}}.$$ The $x^1$-torsion generators are given by:$$\alpha_i\alpha ^ku^j \hspace{3mm} (0\leq i, \hspace{3mm} 0\leq k, \hspace{3mm} 1\leq j)$$ where $\alpha_0=2$. The $x^3$-torsion generators are given by:$$\alpha^{k+1}u,\, \alpha^k wu \,\,(k \geq 0),\, u^j\,\,(j \geq 4),\, wu^j\,\,(j \geq 2).$$ The only $x^7$-torsion generators are $$u^{\{1-3\}}.$$ \end{theorem} In degrees that are multiples of 8 (denoted $8*$), the description of $ER(2)^*(RP^{\infty},*)$ simplifies enormously. As $x$ is the only generator whose degree is not a multiple of 4, and $x^4$ kills everything except powers of $u$, multiplication by powers of $x$ produces no new elements in degree $8*$. \begin{cor}\cite[Theorem 2.1]{KW2} The homomorphism $$ER(2)^{8*}(RP^{\infty},*) \rightarrow E(2)^{8*}(RP^{\infty},*)$$ is injective, and is almost surjective -- the only elements not hit are $v_2^4u^j$ for $1 \leq j\leq 3$. \end{cor} We shall compute the Bockstein spectral sequence associated to $ER(2)$ for the odd-dimensional projective space $RP^{16K+9}$. This gives us nonimmersion results which are often new and differ from those of \cite{KW2}. We will have to introduce some new elements for the computations in the next section. 
The Atiyah-Hirzebruch spectral sequence for $ER(2)^*(RP^2)$ gives elements $x_1$ and $x_2$ in filtration degrees 1 and 2. As an $ER(2)^*$-module, $ER(2)^*(RP^2)$ is generated by elements we will call $z_{-16}$, represented by $xx_1$, and $z_2$, represented by $x_2$. The element $z_{-16}=u \in ER(2)^*(RP^2)$. For the cofibration $S^1 \rightarrow RP^2 \rightarrow S^2$, the long exact sequence $$\xymatrix{ ER(2)^*(S^1) \ar[dr]_{\partial} &&ER(2)^*(RP^2) \ar[ll]_{i^*}\\ &ER(2)^*(S^2,*) \ar[ur]_{\rho^*}\\ }$$ is given by $\partial(i_1) = 2i_2, \rho^*(i_2)=z_2$, and $i^*(u)=xi_1$. We know that $\Sigma^{2n-2}ER(2)^*(RP^2,*) \simeq ER(2)^*(RP^{2n}/RP^{2n-2},*)$. We have elements $z_{2n-18},z_{2n} \in ER(2)^*(RP^{2n}/RP^{2n-2},*)$. These elements map to $v_2^{5n+3}u^n$ and $v_2^{5n}u^n$ respectively in $E(2)^*(RP^{2n}/RP^{2n-2},*)$ by \cite[section 10]{KW1}. \section{$ER(2)^*(RP^{16K+9},*)$} In \cite{KW1} and \cite{KW2}, Kitchloo and Wilson considered all even-dimensional real projective spaces and the odd dimensions $16K+1$. Our aim is to extend the results to more odd-dimensional cases. \begin{prop} The element $u^{8K+5} \in ER(2)^*(RP^{16K+10})$ maps to a non-zero element of $ER(2)^*(RP^{16K+9})$. \end{prop} Note that this cannot happen for a complex oriented cohomology theory. \begin{proof} We use the exact sequence $$\xymatrix{ ER(2)^*(S^{16K+10},*) \ar[r]^{q^*} &ER(2)^*(RP^{16K+10}) \ar[r] &ER(2)^*(RP^{16K+9})\\ }$$ We only have to show that $u^{8K+5}$ is not in the image of $q^*$. Now $ER(2)^*(S^{16K+10},*)$ is the free $ER(2)^*$-module generated by the element $i_{16K+10}$, and $q^*$ is known. The structure of $ER(2)^*(RP^{16K+10})$ is given by \cite[Theorem 13.2]{KW1} and \cite[Theorem 13.3]{KW1}. We give a complete description of $M$, where $$ER(2)^*(RP^{16K+10},*)/xER(2)^*(RP^{16K+10},*) \simeq M \subset E(2)^*(RP^{16K+10},*)$$ as a submodule of $E(2)^*(RP^{16K+10})=\mathbb{Z}_{(2)}[\alpha,v_2^{\pm 1}][u]/(u^{8K+6},[2](u))$.
We describe $M$ by specifying a 2-adic basis. As $d^r=0$ for $r$ even, $M$ is filtered by $0=M_0\subset M_1 =M_2 \subset M_3 = M_4 \subset M_5 = M_6 \subset M_7 = M,$ where $M_r/M_{r-1} \simeq \hbox{Im}d^r$. Elements of $M_r$ not in $M_{r-1}$ lift to $x^r$-torsion elements of $ER(2)^*(RP^{16K+10},*)$. As both $\alpha$ and $u$ come from $ER(2)$-cohomology, we may describe $M$ from section 2 as a filtered $\mathbb{Z}_{(2)}[\alpha,u]$-module. We write $z_t$ for various elements of $ER(2)^*(RP^{16K+10},*)$ and $\bar{z}_t$ for its image in $M$, where $t$ denotes the degree. $\alpha^k u^j(u\alpha_s) \in M_1$ for $k\geq0$, $0\leq j \leq 8K+4$, $0\leq s \leq 3$, where $u\alpha_s = 2v_2^{2s}u = d^1(v_2^{2s+3}u)$. Note that $\alpha_0=2$. $\alpha^k(u\alpha)\in M_3$ for $k \geq 0$, where $u\alpha = d^3(v_2^{-2}u)$. $\alpha^k(uw) \in M_3$ for $k \geq 0$, where $uw = d^3(v_2^2u)= uv_2^4\alpha$. $\alpha^ku^j\beta_0 \in M_3$ for $k\geq 0$, $0\leq j \leq 8K+1$, where $\beta_0= d^3(v_2^{-2}u^2)=\alpha u^2 \equiv u^4 + \ldots$ mod $2=\alpha_0$. $\alpha^k u^j \beta_1 \in M_3$ for $k \geq 0$, $0\leq j \leq 8K+1$, where $\beta_1 = d^3(v_2^2u^2)= w u^2$. $\alpha^k \gamma_0 \in M_3$ for $k \geq 0$, where $\gamma_0 = v_2\alpha u^{8K+5} = d^3(v_2^{-1} u^{8K+5})$. $\alpha^k \gamma_1 \in M_3$ for $k \geq 0$, where $\gamma_1 = v_2^5 \alpha u^{8K+5} = d^3(v_2^3 u^{8K+5})$. $\bar{z}_{16K+10} \in M_5$, where $z_{16K+10} = q^*i_{16K+10}$ is induced from $S^{16K+10}$. Then $\bar{z}_{16K+10} = v_2u^{8K+5} = d^5(v_2^2u^{8K+4})$. Note that $\alpha \bar{z}_{16K+10} = \gamma_0$ and $w \bar{z}_{16K+10} = \gamma_1$. $\bar{z}_{16K-14} \in M_5$, where $\bar{z}_{16K-14} = v_2^5u^{8K+5} = d^5(v_2^6u^{8K+4})$. Note that $\alpha \bar{z}_{16K-14}= \gamma_1$ and $w\bar{z}_{16K-14}=\gamma_0$. $\bar{z}_{16K+4} \in M_7$, where $\bar{z}_{16K+4} = v_2^2u^{8K+5} = d^7(v_2^6 u^{8K+5})$. $u^j \in M_7$, for $1 \leq j \leq 3$, since $d^7(v_2^4u^j)=u^j$.
The action of $\alpha$ on $M$ is clear, except for $\alpha \bar{z}_{16K+4} = -u^{8K+4} \alpha_1$. The relations involving $u$ can also be determined, but are not useful. We really need the corresponding lifted relations in $ER(2)$-cohomology, where they hold only mod$(x)$. Again, we have \begin{equation} u^{8K+6} = x^2z_{16K-14}, \,\, u^{8K+7}=x^4 z_{16K+4}, \,\, u^{8K+8}=0. \end{equation} Now in $ER(2)$-cohomology, $q^*i_{16K+10} = v_2u^{8K+5}$, from \cite[(13.1)]{KW1}. Since the degrees of $\alpha_s, \alpha, w$ and $x$ are $-12s,16,40$ and $-17$ respectively, mod 48, the only elements in $ER(2)^*i_{16K+10}$ of degree $-16(8K+5)$ have the form $\alpha ^k w x^2 i_{16K+10}$. Then we have $q^*(\alpha ^k w x^2 i_{16K+10})=v_2\alpha ^k w x^2 u^{8K+5}$, which lies in the ideal $(x)$ and is not the same as $u^{8K+5}$. \end{proof} \subsection{The Bockstein spectral sequence for $ER(2)^*(RP^{16K+9})$} We compute this cohomology by sandwiching it between the $ER(2)$-cohomology of $RP^{16K+10}$ and $RP^{16K+8}$, which we know, in the commutative diagram of exact sequences \begin{equation} \xymatrix{ ER(2)^*(S^{16K+10},*) \ar[d]^{\rho^*} \ar[r]^= &ER(2)^*(S^{16K+10},*) \ar[d]\\ ER(2)^*(\frac{RP^{16K+10}}{RP^{16K+8}},*) \ar[r] \ar[d]^{i^*} &ER(2)^*(RP^{16K+10},*)\ar[d] \ar[r] &ER(2)^*(RP^{16K+8},*) \ar[d]^=\\ ER(2)^*(S^{16K+9},*) \ar[r] \ar[d]^2 &ER(2)^*(RP^{16K+9},*) \ar[r] &ER(2)^*(RP^{16K+8},*)\\ ER(2)^*(S^{16K+9},*)\\ } \end{equation} The $E^1$-term for the Bockstein spectral sequence is just $E(2)^*(RP^{16K+9},*)$, which decomposes \cite{Davis84} as $$E(2)^*(RP^{16K+8},*)\oplus E(2)^*(S^{16K+9},*)$$ via the maps $$RP^{16K+8} \rightarrow RP^{16K+9} \rightarrow S^{16K+9}$$ (In general, $E(2)^*(RP^{2n})=E(2)^*[u]/(u^{n+1},[2](u))$, as $E(2)$ is complex oriented and $E(2)^*$ has no 2-torsion.)
The even part of the $E^1$-term has the 2-adic basis $$ v_2^i\alpha ^k u^j \hspace{5mm} (0\leq i \le 7, \hspace{5mm} 0 \le k, \hspace{5mm} 0\le j \leq 8K+4) $$ The odd part is a free $\mathbb{Z}_{(2)}$-module with basis $$v_2^i\alpha^k i_{16K+9} \hspace{5mm} (0\leq i \le 7, \hspace{5mm} 0\leq k)$$ Since $d^1$ is of even degree, we can just read off our $d^1$ from \cite[Theorem 13.2]{KW1} for $RP^{16K+8}$ and section 5 of \cite{KW1} for the $S^{16K+9}$ part. $$d^1(v_2^{2s-5}\alpha^ku^j) = 2v_2^{2s}\alpha^ku^j = v_2^{2s}\alpha^{k+1}u^{j+1} +\ldots \hspace{5mm} (j\le 8K+4)$$ $$d^1(v_2^{2s+1}\alpha^k i_{16K+9})=2v_2^{2s-2}\alpha^k i_{16K+9}$$ Thus $E^2$ is given by $$v_2^{2s}\alpha^k u \hspace{5mm} (k \geq 0), \hspace{5mm} v_2^{2s}u^j \hspace{5mm} (1\le j \leq 8K+4), \hspace{5mm}v_2^{2s+1}\alpha^k u^{8K+4} \hspace{5mm} (0\leq k),$$ $$v_2^{2s}\alpha^k i_{16K+9}$$ $d^2$ has odd degree 35. Since we have both odd and even degree elements in the $E^2$-term, $d^2$ might very well be non-trivial. If it is, then by naturality it must have its source in the $RP^{16K+8}$ part and its target in the $S^{16K+9}$ part. Also, the source cannot be anything from the $E^2$-term of the BSS for $RP^{\infty}$, for we know that $d^2$ is trivial there. Therefore the only possible sources are $v_2^{2s+1}\alpha^k u^{8K+4}$, with possible targets $v_2^{2s}\alpha^k i_{16K+9}$. Now, since $v_2^2$ is a unit, if there is a $d^2$, it must be non-zero on $v_2^{-1}u^{8K+4}$, which has degree $6-16(8K+4)$, that is $-10+16K$ mod 48. The degree of the target must be this plus 35. The possible targets have degrees $-12s-32k+16K+9$. The only solutions are $\alpha i_{16K+9}$, $\alpha^4 i_{16K+9}$, etc. If $d^2$ is non-zero, our guess would be $d^2(v_2^{-1}u^{8K+4})=\alpha i_{16K+9}$. Thus we need to show that $x^2\alpha i_{16K+9}=0$ in order for our guess to be correct.
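As a consistency check on this degree count (our own arithmetic, using the degrees recorded in the paper: $|v_2^{-1}|=+6$, $|u|=-16$, $|\alpha|\equiv 16$, $|d^2|=35$, all mod 48):

```latex
\[
\bigl|v_2^{-1}u^{8K+4}\bigr| = 6-16(8K+4) = -58-128K \equiv -10+16K \pmod{48},
\]
so the target of $d^2$ must lie in degree $-10+16K+35 = 25+16K$. A target
$v_2^{2s}\alpha^k i_{16K+9}$ has degree $-12s-32k+16K+9$, and
\[
-12s-32k+9 \equiv 25 \pmod{48} \iff 12s+32k \equiv 32 \pmod{48},
\]
```

which is solved by $(s,k)=(0,1),(0,4),\ldots$, recovering exactly the candidates $\alpha i_{16K+9}$, $\alpha^4 i_{16K+9}$, etc. named in the text.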
The left column of (11) shows that the $ER(2)^*$-module $ER(2)^*(RP^{16K+10}/RP^{16K+8},*)$ is generated by two elements $z_{16K+10} = \rho^* i_{16K+10}$ and $z_{16K-8}$, where $i^*z_{16K-8}=xi_{16K+9}$. We want to show that $x^2\alpha i_{16K+9}=0$ in $ER(2)^*(RP^{16K+9},*)$. This is the image of the same-named element in $ER(2)^*(S^{16K+9},*)$, which lifts to $x\alpha z_{16K-8} \in ER(2)^*({RP^{16K+10} \over RP^{16K+8}})$. This maps to $x\alpha v_2^4u^{8K+5}$ in $ER(2)^*(RP^{16K+10},*)$ from \cite[13.1]{KW1}. Since $2x=0$, we can use the relation $[2](u) = 2u +_F \alpha u^2 +_F u^4 =0$, which modulo 2 gives $\alpha u^2 = u^4 + \ldots$, and we get $$x\alpha v_2^4 u^{8K+5} =xv_2^4u^{8K+7} + \ldots$$ The least power of $u$ which is zero in $ER(2)^*(RP^{16K+10})$ is $8K+8$. We have noted that $u^{8K+7}=x^4z_{16K+4}$, so that we have $x^5v_2^4z_{16K+4}$, which is non-zero and does not lift to $S^{16K+10}$. It follows that $x^2\alpha i_{16K+9} \neq 0$. However, if we multiply the whole calculation by $\alpha^3$, we get $\alpha^3 x^5 v_2^4 z_{16K+4}=0$, as $x^3\alpha=0$. So $x^2\alpha^4 i_{16K+9}=0$, and we conclude that $$d^2(v_2^{2s+1}\alpha^k u^{8K+4}) = v_2^{2s+2}\alpha ^{k+4} i_{16K+9}$$ Then $E^3$ is given by $$v_2^{2s}\alpha ^k u \hspace{3mm} (0\leq k),\,\,\,v_2^{2s}u^j \hspace{3mm} (1\le j \leq 8K+4),\,\,\,v_2^{2s}\alpha^{\{0-3\}}i_{16K+9}.$$ $d^3$ has even degree, so the even and odd parts do not mix under the differential. On both parts we already know the $d^3$ differential: $$d^3(v_2^{\{6,2\}}\alpha^ku)=v_2^{\{0,4\}}\alpha^{k+1}u$$ $$d^3(v_2^{\{6,2\}}u^j)=v_2^{\{0,4\}}\alpha u^j =v_2^{\{0,4\}}u^{j+2} \hspace{5mm} (1\le j\leq 8K+2)$$ $$d^3(v_2^{4s-2}\alpha^{\{0-2\}}i_{16K+9})=v_2^{4s}\alpha^{\{1-3\}}i_{16K+9} $$ $$d^3(v_2^{4s}\alpha^{\{0-3\}}i_{16K+9}) =0$$ Thus $E^4$ is given by $$v_2^{\{0,4\}}u^{\{1-3\}},\hspace{5mm} v_2^{\{6,2\}}u^{\{8K+3,8K+4\}}, \hspace{5mm} v_2^{4s} i_{16K+9}, \hspace{5mm} v_2^{\{6,2\}}\alpha^3 i_{16K+9}. $$ $d^4$ has degree 21, which is odd. So it must go from the $RP^{16K+8}$ part to the $S^{16K+9}$ part.
$d^4$ must be zero on anything in the image from $RP^{\infty}$. So our non-zero differentials have possible sources $v_2^{\{6,2\}}u^{\{8K+3,8K+4\}}$, and possible targets $v_2^{4s}i_{16K+9}$ and $v_2^{\{6,2\}}\alpha^3i_{16K+9}$. Let us compute the degrees (mod 48). $$v_2^6u^{8K+3}:-36-16(8K+3) \equiv -36+16K \longrightarrow -15+16K$$ $$v_2^2u^{8K+3}:-12-16(8K+3) \equiv -12+16K \longrightarrow 9+16K$$ $$v_2^6u^{8K+4}:-36-16(8K+4) \equiv -4+16K \longrightarrow 17+16K$$ $$v_2^2u^{8K+4}:-12-16(8K+4) \equiv 20+16K \longrightarrow 41+16K$$ Comparing with the degrees of the possible targets, we see that the only possible differentials are: $$d^4(v_2^{\{6,2\}}u^{8K+3})=v_2^{\{0,4\}}i_{16K+9}$$ This must be true if $i_{16K+9}$ is $x^4$-torsion. We invoke the commutative diagram (11) used before. Consider $x^3z_{16K-8}$ in $ER(2)^*({RP^{16K+10} \over RP^{16K+8}})$. The following diagram shows its images in the lower left-hand square of the commutative diagram. $$\xymatrix{ x^3z_{16K-8} \ar@{|->}[d] \ar@{|->}[r] &x^3v_2^4u^{8K+5} \ar@{|->}[d]\\ x^4i_{16K+9} \ar@{|->}[r] &x^4i_{16K+9}\\ }$$ The element in the upper right-hand corner is zero since $x^3u^4=0$. This shows that $i_{16K+9}$ is indeed $x^4$-torsion. We obtain our $E^5$-term $$v_2^{\{0,4\}}u^{\{1-3\}},\hspace{3mm} v_2^{\{6,2\}}u^{8K+4},\hspace{3mm} v_2^{\{6,2\}}\alpha^3i_{16K+9}. $$ $d^5$ has even degree. For dimensional reasons the differentials must be zero on the odd part, and \cite[Theorem 13.2]{KW1} determines that it is zero on the even part. $d^6$ has degree 7. Again, it must go from the even part to the odd part by naturality. By sparseness, $d^6 = 0$. Since $d^7$ has even degree, the differential does not mix odd and even degrees. First of all, in the even part we have $$d^7(v_2^4u^{\{1-3\}})=u^{\{1-3\}}$$ Also, by mapping to $RP^{16K+8}$ \cite[page 23]{KW1} we get that $$d^7(v_2^6u^{8K+4})=v_2^2u^{8K+4}$$ We collect our results in the following theorem.
\begin{theorem} The Bockstein spectral sequence for $ER(2)^*(RP^{16K+9},*)$ is as follows: $E^1$ $$v_2^i\alpha^k u^j \hspace{5mm} (0\leq i \le 7, \hspace{5mm} 0\leq k, \hspace{5mm} 0\leq j \leq 8K+4)$$ $$v_2^i \alpha^k i_{16K+9} \hspace{5mm} (0\leq i \leq 7, \hspace{5mm} 0\leq k)$$ $$d^1(v_2^{2s-5}\alpha^ku^j)=2v_2^{2s}\alpha^ku^j = v_2^{2s}\alpha^{k+1}u^{j+1}+\ldots \hspace{5mm} (j\le 8K+3)$$ $$d^1(v_2^{2s+1}\alpha^ki_{16K+9})=2v_2^{2s-2}\alpha^ki_{16K+9}$$ where $v_2^i\alpha^ki_{16K+9}$ generates a free $\mathbb{Z}_{(2)}$-module in $E^1$. $E^2$ $$v_2^{2s}\alpha^k u \hspace{5mm} (k \geq 0), \hspace{5mm} v_2^{2s}u^j \hspace{5mm} (1\le j \leq 8K+4), \hspace{5mm}v_2^{2s+1}\alpha^k u^{8K+4} \hspace{5mm} (0 \leq k),$$ $$v_2^{2s}\alpha^k i_{16K+9}$$ $$d^2(v_2^{2s+1}\alpha^k u^{8K+4}) = v_2^{2s+2}\alpha ^{k+4} i_{16K+9}$$ $E^3$ $$v_2^{2s}\alpha ^k u \hspace{5mm} (0\leq k),\,\,\,v_2^{2s}u^j \hspace{5mm} (1\le j \leq 8K+4),\,\,\,v_2^{2s}\alpha^{\{0-3\}}i_{16K+9}$$ $$d^3(v_2^{\{6,2\}}\alpha^ku)=v_2^{\{0,4\}}\alpha^{k+1}u$$ $$d^3(v_2^{\{6,2\}}u^j)=v_2^{\{0,4\}}\alpha u^j =v_2^{\{0,4\}}u^{j+2} \hspace{5mm} (2\le j\leq 8K+2)$$ $$d^3(v_2^{4s-2}\alpha^{\{0-2\}}i_{16K+9})=v_2^{4s}\alpha^{\{1-3\}}i_{16K+9} $$ $$d^3(v_2^{4s}\alpha^{\{0-3\}}i_{16K+9}) =0$$ $E^4$ $$v_2^{\{0,4\}}u^{\{1-3\}},\hspace{5mm} v_2^{\{6,2\}}u^{\{8K+3,8K+4\}},\hspace{5mm} v_2^{4s} i_{16K+9} $$ $$d^4(v_2^{\{6,2\}}u^{8K+3})=v_2^{\{0,4\}}i_{16K+9}$$ $E^5=E^6=E^7$ $$v_2^{\{0,4\}}u^{\{1-3\}},\hspace{3mm} v_2^{\{6,2\}}u^{8K+4},\hspace{3mm} v_2^{\{6,2\}}\alpha^3i_{16K+9}$$ $$d^7(v_2^4u^{\{1-3\}})=u^{\{1-3\}},\hspace{5mm} d^7(v_2^6u^{8K+4})=v_2^2u^{8K+4}$$ \end{theorem} Next we identify all the elements in degree $8*$.
\begin{theorem} A 2-adic basis of $ER(2)^{8*}(RP^{16K+9},*)$ is given by the elements $$\alpha^ku^j, \,\, (k \geq 0,1\le j \le 8K+4)$$ $$v_2^4\alpha^k u^j, \,\, (k \geq 1, 1\leq j \leq 8K+4)$$ $$v_2^4u^j,\,\,(4 \leq j \leq 8K+4)$$ $$x \alpha^ki_{16K+9}, \,\, xv_2^4\alpha^k i_{16K+9},\,\,(k \geq 0)$$ \end{theorem} \begin{proof} The first classes of elements represent the images of differentials in the spectral sequence that do not involve $i_{16K+9}$. As in \cite{KW2}, multiplication by powers of $x$ leads to no new elements in degree $8*$. Those images involving $i_{16K+9}$ provide $x^2$-, $x^3$-, or $x^4$-torsion, which may be multiplied by $x$. \end{proof} \begin{cor} There is an algebraic map $$ER(2)^{8*}(RP^{16K+9}) \rightarrow E(2)^{8*}(RP^{16K+10})$$ which only misses the elements $v_2^4u^{\{1-3\}}$. \end{cor} \section{Non-Immersions} If $RP^b$ immerses in $\mathbb{R}^c$, James showed \cite{James} that there is an axial map $$m: RP^b \times RP^{2^L-c-2} \rightarrow RP^{2^L-b-2}$$ for large $L$ (meaning a map that is non-trivial on both axes). Specifically, to show that $RP^{2n}$ does not immerse in $\mathbb{R}^{2k+1}$ we need to prove there is no axial map \begin{equation} m: RP^{2n} \times RP^{2^L-2k-3} \rightarrow RP^{2^L-2n-2} \end{equation} Our strategy is to consider the class $u \in ER(2)^*(RP^{2^L-2n-2})$, which satisfies $u^{2^{L-1}-n}=0$ when $n\equiv 0$ or $7$ mod 8 \cite[Theorem 1.6]{KW1}. We shall see that $m^*u$ is known, in principle. If we can show that $(m^*u)^{2^{L-1}-n} \neq 0$, we have a contradiction. Davis \cite{Davis84} used this approach with the complex-oriented cohomology theory $E(2)$, deducing that $RP^{2n}$ does not immerse in $\mathbb{R}^{2k}$ by showing there is no axial map $$m: RP^{2n} \times RP^{2^L-2k-2} \rightarrow RP^{2^L-2n-2}$$ when $n = m+ \alpha(m) -1$ and $k = 2m-\alpha(m)$ for some $m$, where $\alpha(m)$ denotes the number of 1's in the binary expansion of $m$.
We wish to improve this result to show that for certain $n$ and $k$, (12) does not exist. There is an axial map $m: RP^{\infty} \times RP^{\infty} \rightarrow RP^{\infty}$, which is the restriction of the map $\mathbb{C}P^{\infty} \times \mathbb{C}P^{\infty} \rightarrow \mathbb{C}P^{\infty}$ induced by the tensor product of the canonical Real line bundles. Therefore $m^*u = u_1 +_F u_2$, where $u_1,u_2$ and $u$ are the Chern classes for the three copies of $RP^{\infty}$. If $m:RP^b \times RP^c \rightarrow RP^d$ is an axial map, the diagram $$\xymatrix{ RP^b \times RP^c \ar[r]^m \ar[d]^{\subset} &RP^d \ar[d]^{\subset}\\ RP^{\infty} \times RP^{\infty} \ar[r]^m &RP^{\infty} }$$ commutes, as all axial maps $RP^b \times RP^c \rightarrow RP^{\infty}$ are homotopic. It follows that the same formula $m^*u = u_1 +_F u_2$ holds for this $m$. As the formal group law $F$ is a formal power series in $u_1$ and $u_2$ over $\mathbb{Z}_{(2)}[\alpha]$, with $\deg u = -16$ and $\deg \alpha =16$, we are interested only in degrees that are multiples of 16. This simplifies our work, as $ER(2)^{16*}(RP^{\infty}) \rightarrow E(2)^{16*}(RP^{\infty})$ is an isomorphism by \cite{KW1}. We assume $k \equiv 2$ mod 8, so that $2^L-2k-2 = 16K+10$ and we can use Theorem 3.2. Consider the diagram $$\xymatrix{ ER(2)^{16*}(RP^{\infty} \times RP^{\infty}) \ar[r] \ar[d] &E(2)^{16*}(RP^{\infty} \times RP^{\infty}) \ar[d]\\ ER(2)^{16*}(RP^{2n} \times RP^{16K+10}) \ar[r] \ar[d] &E(2)^{16*}(RP^{2n} \times RP^{16K+10}) \ar[d]\\ ER(2)^{16*}(RP^{2n} \times RP^{16K+9}) \ar[r] &E(2)^{16*}(RP^{2n} \times RP^{16K+9})\\ }$$ From Don Davis's work, the image of $(u_1 +_F u_2)^{2^{L-1}-n} \in ER(2)^{16*}(RP^{\infty} \times RP^{\infty})$ in $E(2)^{16*}(RP^{2n} \times RP^{16K+10})$ is non-zero.
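The reduction of $(u_1 +_F u_2)^{2^{L-1}-n}$ carried out below rests on two formulae extracted from the 2-series relation $[2](u) = 2u +_F \alpha u^2 +_F u^4 = 0$ used earlier in the paper; as a sketch of where their leading terms come from (all higher-order corrections are suppressed in the $\ldots$):

```latex
\[
[2](u_1)=0 \;\Longrightarrow\; 2u_1 = -\alpha u_1^2 - u_1^4 + \ldots
            = -\alpha u_1^2 + \ldots,
\]
\[
u_1[2](u_2)-u_2[2](u_1)=0 \;\Longrightarrow\;
\alpha\bigl(u_1u_2^2-u_1^2u_2\bigr) + \bigl(u_1u_2^4-u_1^4u_2\bigr) + \ldots = 0,
\]
```

where in the second line the $2u_1u_2$ terms cancel, so that it rearranges to $\alpha u_1u_2^2 = \alpha u_1^2u_2 + \ldots$.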
We need to show that the image in $ER(2)^{16*}(RP^{2n} \times RP^{16K+9})$ is non-zero. (Note that we cannot use $E(2)$-cohomology for this purpose, as it is complex-oriented, which implies that $u^{8K+5} \in E(2)^*(RP^{16K+10})$ maps to zero in $E(2)^*(RP^{16K+9})$.) The two end terms in $(u_1 +_F u_2)^{2^{L-1}-n}$ are $u_1^{2^{L-1}-n}$ and $u_2^{2^{L-1}-n}$, which are plainly zero; all the other terms have the form $\lambda \alpha^ku_1^i u_2^j$, where $\lambda \in \mathbb{Z}_{(2)}$, $k \geq 0$, $i \geq 1$, $j \geq 1$, and $i+j \geq 2^{L-1}-n$. Following \cite{KW1}, we may use the formulae $$2u_1=-\alpha u_1^2 + \ldots, \,\, \alpha u_1u_2^2 = \alpha u_1^2u_2 + \ldots $$ and induction to reduce $(u_1 +_F u_2)^{2^{L-1}-n}$ to a sum of distinct terms of the forms $\alpha^ku_1^iu_2$ and $u_1^iu_2^j$, with no numerical coefficient. (The first formula comes from $[2](u_1)=0$; the second from $u_1[2](u_2)- u_2[2](u_1)=0$.) Further, again by \cite{KW1}, $u_1^{n+1}=0$ since $n \equiv 0$ or $7$ mod 8. Then $\alpha ^k u_1^i u_2=0$, since we still have $i+1 \geq 2^{L-1}-n$. We do not know (or need to know) exactly which terms are present; all we have to do is show that the monomials $u_1^i u_2^j$ (for $1 \leq i \leq n$ and $1 \leq j \leq 8K+5$) remain linearly independent in $ER(2)^*(RP^{2n} \times RP^{16K+9})$, which we defer to the next section. Meanwhile, let us review the various numerical conditions. We need $n=m+\alpha(m)-1$, $k=2m-\alpha(m)$, $k\equiv 2$ mod 8 and $n \equiv 0$ or $7$ mod 8. So $2m -\alpha(m)\equiv 2$ and $m + \alpha(m) \equiv 0$ or $1$ mod 8. Solving these, we get $(m,\alpha(m)) \equiv (6,2)$ or $(1,0)$ mod 8. \begin{theorem} If $(m,\alpha(m)) \equiv (6,2)$ or $(1,0)$ mod 8, $RP^{2(m+\alpha(m)-1)}$ does not immerse in $\mathbb{R}^{2(2m -\alpha(m))+1}$.
\end{theorem} \subsection{Products with an odd space} As we discussed towards the end of the previous section, in order to complete the proof of Theorem 4.1 we need to show that the monomials $u_1^iu_2^j$ (for $1 \leq i \leq n$ and $1 \leq j \leq 8K+5$) remain linearly independent in $ER(2)^*(RP^{2n} \times RP^{16K+9})$. The argument presented in this section is similar to \cite[section 11]{KW2}. We shall look into the Bockstein spectral sequence for $$ER(2)^*(RP^{2n} \wedge RP^{16K+9},*)$$ where $2n < 16K+9$. The $E^1$-term is the usual $$E(2)^*(RP^{2n} \wedge RP^{16K+9},*)$$ $$\simeq E(2)^*(RP^{2n},*) \otimes E(2)^*(RP^{16K+9},*) \oplus \Sigma^{-16(8K+4)-1}E(2)^*(RP^{2n},*)$$ (from \cite{GW}). Also, we know that $$E(2)^*(RP^{16K+9},*) \cong E(2)^*(RP^{16K+8},*)\oplus E(2)^*(S^{16K+9},*).$$ Since $E(2)^*(S^{16K+9},*)$ is free, it does not affect the Tor term, only the tensor product. So our $E^1$-term is: $$E(2)^*(RP^{2n},*)\otimes E(2)^*(RP^{16K+8},*) \oplus E(2)^*(RP^{2n},*) \otimes E(2)^*(S^{16K+9},*)$$ $$\oplus \Sigma^{16K-17}E(2)^*(RP^{2n},*)$$ A 2-adic basis for this is given by $$v_2^s\alpha^k u_1^iu_2 \hspace{5mm} (0\leq k, \hspace{5mm} 0<i\le n, \hspace{5mm} s <8 )$$ $$v_2^su_1^iu_2^j \hspace{5mm} (0<i \le n, \hspace{5mm} 1 <j \le 8K+4, \hspace{5mm} s<8) $$ by the same reduction as before, and $$v_2^s\alpha^k u_1^i i_{16K+9} \hspace{5mm} (0\leq k, \hspace{5mm} 0 <i \le n, \hspace{5mm} s <8)$$ $$v_2^s\alpha^ku_1^iz_{16K-33} \hspace{5mm} (0\le k, \hspace{5mm} 0\le i < n, \hspace{5mm} s <8).$$ We know that $xi_{16K+9}$ represents $v_2^4u_2^{8K+4}$. So $v_2^4xu_1^ni_{16K+9}$ represents $u_1^nu_2^{8K+5}$. There is no differential on $u_1^ni_{16K+9}$. Also there is no differential on $v_2^4u_1^ni_{16K+9}$. All we have to do is show that $u_1^ni_{16K+9}$ is not in the image of $d^1$. Since $d^1$ has even degree and $u_1^ni_{16K+9}$ has odd degree, we only have to worry about the odd degree elements.
$d^1$ has degree 18, so if it is to hit $u_1^ni_{16K+9}$ it must start at some $\alpha^k u_1^i z_{16K-33}$, because these are the only elements in the correct degree modulo 16. Then we would have $d^1$ non-trivial on $z_{16K-33}$. In the Bockstein spectral sequence for $ER(2)^*(RP^{16M+16} \wedge RP^{16K+10},*)$, $8M+8<8K+5$, we have from \cite[Theorem 19.2]{KW1}, $d^1(z_{16K-1})=0$. From \cite[Theorem 1.2]{GW} we have that $z_{16K-1}$ maps to $u_1z_{16K-33}$ in the spectral sequence for $ER(2)^*(RP^{16M+16} \wedge RP^{16K+10},*)$. Since this passes through the spectral sequence for $ER(2)^*(RP^{16M+16} \wedge RP^{16K+9},*)$, $z_{16K-1}$ maps to $u_1z_{16K-33}$ here as well. So $d^1(u_1z_{16K-33})= u_1d^1(z_{16K-33})=0$. $$\xymatrix{ ER(2)^*(RP^{16M+16} \wedge RP^{16K+10}) \ar[d] \ar[r] &ER(2)^*(RP^{16M+16} \wedge RP^{16K+8}) \\ ER(2)^*(RP^{16M+16} \wedge RP^{16K+9}) \ar[ur]\\ }$$ All elements killed by multiplication by $u_1$ go to zero under the map to $RP^{16K} \wedge RP^{16K+9}$, so our $d^1(z_{16K-33})$ is zero. \begin{theorem} When $n \le 8M < 8M+8 < 8K+5$, in $$ER(2)^{16*}(RP^{2n} \wedge RP^{16K+9})$$ the element $u_1^nu_2^{8K+5}$ is non-zero. \end{theorem} \addcontentsline{toc}{section}{References} {\em Address:}\\ \texttt{A360 School of Mathematics,}\\ \texttt{Tata Institute of Fundamental Research,}\\ \texttt{1 Homi Bhabha Road,}\\ \texttt{Mumbai, India 400005}\\ \texttt{[email protected]} \end{document}
\begin{document} \title{Singularities in Positive Characteristic} \begin{abstract} In this survey paper we give an overview of some aspects of singularities of algebraic varieties over an algebraically closed field of arbitrary characteristic. We review in particular results on the equisingularity of plane curve singularities, the classification of hypersurface singularities and the determinacy of arbitrary singularities. The section on equisingularity has its roots in two important early papers by Antonio Campillo. One emphasis is on the differences between positive and zero characteristic and on open problems. \end{abstract} \noindent{\bf Key words:} {Equisingularity, Puiseux expansion, Hamburger--Noether expansion, classification of singularities, simple singularities, finite determinacy, positive characteristic} \noindent{\bf 2010 Mathematics Subject Classification: }{32S05, 32S15, 14B05, 14B07, 14H20 } \renewcommand{\contentsname}{Table of Contents} \tableofcontents \section{Historical Review}\label{review} Singularity theory means in this paper the study of systems of polynomial or analytic or differentiable equations locally at points where the Jacobian matrix does not have maximal rank. This means that the zero set of the equations is not smooth at these points. The points where this happens are called {\em singularities} of the variety defined by the equations. Singularities have been studied since the beginning of algebraic geometry, but singularity theory emerged as a discipline in its own right only about 50 years ago.\\ Singularity theory started with the fundamental work of {\em Heisuke Hironaka} on the resolution of singularities (1964), {\em Oskar Zariski's} studies in equisingularity (1965--1968), {\em Michael Artin's} paper on isolated rational singularities of surfaces (1966), and the work by {\em Ren\'e Thom, Bernard Malgrange, John Mather,...} on singularities of differentiable mappings.
It culminated in the 1970s and 1980s with the work of {\em John Milnor}, who introduced what is now called the Milnor fibration and the Milnor number (1968), {\em Egbert Brieskorn's} discovery of exotic spheres as neighborhood boundaries of isolated hypersurface singularities (1966) and the connection to Lie groups (1971), {\em Vladimir Arnold's} classification of (simple) singularities (1973), and many others, e.g. {\em Andrei Gabrielov, Sabir Gusein-Zade, Ignacio Luengo, Antonio Campillo, C.T.C. Wall, Jonathan Wahl, L\^e D\~ung Tr\'ang, Bernard Teissier, Dierk Siersma, Joseph Steenbrink, ...}. Besides the work of Artin, this was all in characteristic 0, mostly even for convergent power series over the complex or real numbers.\\ The first to systematically study ``equisingular families'' over a field of positive characteristic was {\em Antonio Campillo} in his thesis, published as a Springer Lecture Notes volume in 1980. \section{Equisingularity}\label{equisingularity} In the 1960's O. Zariski introduced the concept of equisingularity in order to study the resolution of hypersurface singularities by induction on the dimension. His idea can roughly be described as follows: \begin{itemize} \item To resolve the singularities of $X$, consider a generic projection $X\to \mathbb{C}$. \item If the fibres are an ``equisingular'' family, then the resolution of a single fibre should resolve the nearby fibres simultaneously and then also the total space $X$. \item If the fibres are plane curves then ``equisingular'' means that the combinatorics of the resolution process of the fibre singularities is constant. Equivalently: the Puiseux pairs of each branch and the pairwise intersection numbers of the different branches are the same for each fibre, or the topological type of the fibre singularities is constant. \item Zariski's idea works if the fibres are plane curves, but not in general. Nevertheless, equisingularity has become an independent research subject since then.
\end{itemize} \subsection{Hamburger--Noether expansions}\label{subsec.HN} Let me now describe Campillo's early contribution to equisingularity. There are two important papers by Antonio Campillo: \begin{itemize} \item {\em Algebroid Curves in Positive Characteristic} (\cite{Ca80}) \item {\em Hamburger--Noether expansion over rings} (\cite {Ca83}). \end{itemize} The first, Campillo's thesis, appeared as a Springer Lecture Notes volume in 1980 and is now the standard reference in the field. The second paper is less well known but perhaps even more important.\\ For the rest of the paper let \textbf{$K$ denote an algebraically closed field of characteristic $p \geq 0$}, unless otherwise stated.\\ We consider in this section reduced plane curve singularities. Over the complex numbers these are 1-dimensional complex germs $C \subset \mathbb{C}^2$ with an isolated singularity at 0, given by a convergent power series $f \in \mathbb{C}\{x,y\}$ with $C$ the germ of the set of zeros $V(f)$ of $f$. If $K$ is arbitrary, a plane curve singularity is given by a formal power series $f \in K[[x,y]]$ with $C= V(f) = \Spec K[[x,y]]/\langle f\rangle$. $C$ and $f$ are also called {\bf algebroid plane curves}. \\ A reduced and irreducible algebroid plane curve $C$ can be given in two ways: \begin{itemize} \item by an {\em equation} $f =0$, with $f$ irreducible in the ring $K[[x,y]]$ \item by a {\em parametrization} $x=x(t), y=y(t)$ with $\langle f\rangle=\Ker(K[[x,y]] \to K[[t]])$ \end{itemize} \noindent {\bf Case $p=0$} \begin{itemize} \item In this case $C$ has a special parametrization (the {\bf Puiseux expansion}) \end{itemize} $\begin{array}{ll} \ \ \ \ x=t^n & : n=\text{ord}(f)=\text{mult} (C)\\ \ \ \ \ y= c_mt^m+c_{m+1}t^{m+1}+\cdots & : m\geq n, c_i\in K \end{array}$\\ Here ord($f$) is the {\bf order} of $f$, also called the {\bf multiplicity} of $f$, i.e. the lowest degree of a non-vanishing term of the power series $f$. If $f = f_1 \cdot ... \cdot f_r$ is reducible (but reduced) with $f_i$ irreducible, we consider the parametrization of each {\bf branch} $f_i$ of $f$ individually. The Puiseux expansion determines the \textbf{characteristic exponents} of $C$ (equivalently the \textbf{Puiseux pairs} of $C$). \begin{itemize} \item These data are called the \textbf{equisingularity type} ({\bf es--type}) of the irreducible $C$. \item For a reducible curve $C$ the {\bf es--type} is defined by the {\bf es--types of the branches} and the {\bf pairwise intersection multiplicities} of different branches. \item Equivalently, by the {\bf system of multiplicities} of the reduced total transform in the resolution process of $C$ by successively blowing up points. \item Two curves with the same es--type are called \textbf{equisingular}. \end{itemize} For the case $K=\mathbb{C}$ and $f, g\in \mathbb{C}\{x,y\}$ we have the following nice topological interpretation of equisingularity: \begin{itemize} \item $V(f)$ and $V(g)$ are equisingular $\Leftrightarrow$ they are (embedded) \textbf{topologically equivalent}, i.e. there is a homeomorphism of triples $h:(B_\varepsilon, B_\varepsilon\cap V(f), 0)\xrightarrow{\sim}(B_\varepsilon, B_\varepsilon\cap V(g), 0)$, with $B_\varepsilon \subset \mathbb{C}^2$ a small ball around 0 of radius $\varepsilon$ (cf. fig. 1).
\end{itemize} \begin{center} \begin{tikzpicture}[scale=1.0, every node/.style={transform shape}] \begin{scope} \draw[fill=gray, fill opacity=0.2] (0cm,0cm) circle(2cm); \node[above left] at (-1.5cm,1.5cm) {$B_\varepsilon$}; \fill[thick] (0cm,0cm) circle(0.05cm); \draw[thick,domain=0:2,smooth,variable=\y,draw=blue] plot (\y,{sqrt(\y^3)}) node[above right] {$V(f)$}; \draw[thick,domain=0:2,smooth,variable=\y,draw=blue] plot (\y,{-sqrt(\y^3)}); \end{scope} \begin{scope}[shift={(7,0)}] \draw[fill=gray, fill opacity=0.2] (0cm,0cm) circle(2cm); \fill[thick] (0cm,0cm) circle(0.05cm); \draw[thick,domain=0:3,smooth,variable=\y,draw=green!70!black] plot (\y,{1/2*sqrt(\y^3)}) node[above left] {$V(g)$}; \draw[thick,domain=0:3,smooth,variable=\y,draw=green!70!black] plot (\y,{-1/2*sqrt(\y^3)}); \end{scope} \draw[->, very thick] (3,0) -- (4,0) node[above,pos=0.5]{$\cong$} node[below,pos=0.5]{$h$}; \end{tikzpicture} \end{center} \noindent{\bf Case $p>0$}\\ The resolution process of $C$ by successively blowing up points exists as in the case $p=0$. There also exists a parametrization of $C$, but a \textbf{Puiseux expansion does not exist} if $p \mid n$, $n$ the multiplicity of $C$. \begin{itemize} \item We define the \textbf{equisingularity type (es-type)} of $C$ by the \textbf{system of multiplicities} of the resolution, as in characteristic 0. \item Instead of the Puiseux expansion another special parametrization exists and can be computed from any parametrization (or any equation) of $C$, the \textbf{Hamburger--Noether (HN) expansion}.
It is determined by a chain of relations obtained from successive divisions by series of lower order (assume $ord(x) \leq ord(y)$) as follows: $ \begin{array}{lclcr} y & = & a_{01}x \ +\cdots + a_{0h}x^h+x^h z_1, & ord(z_1) < ord(x)\\ x & = & a_{12}z_1^2 + \cdots +a_{1h_1}z_1^{h_1}+z_1^{h_1}z_2 \\ z_1 & = & a_{22}z_2^2 + \cdots +a_{2h_2}z_2^{h_2}+z_2^{h_2}z_3 && ({\bf HN(C)}) \\ & & \vdots & &\\ z_{r-1}& = & a_{r2}z_r^2+\cdots \ \ \in K[[z_r]] \end{array} $ \end{itemize} We do not have Puiseux pairs, but we do have characteristic exponents. Campillo defines the {\bf characteristic exponents of $C$} from HN(C). \\ By substituting backwards, we get from HN(C) a parametrization of $C$: \begin {center} $x=x(z_r), \ \ y = y(z_r) \ \in K[[z_r]]$\\ \end {center} Note that the uniformizing parameter $z_r$ is a rational function of the coordinates $x,y$. Unlike the uniformizing parameter for the Puiseux expansion, it does not involve roots of unity. Moreover, computationally the Hamburger--Noether expansion is preferable to the Puiseux expansion, as it needs the least number of field extensions if one wants to compute the es-type of an algebroid curve defined over a non-algebraically closed field (such as $\mathbb{Q}$). This is implemented in SINGULAR \cite{DGPS16}.\\ For an arbitrary algebraically closed field $K$ Campillo defines the complex model of $C$ as follows: \begin{itemize} \item Let $F:K\to \mathbb{C}$ be any map with $F(a)=0\Leftrightarrow a=0$ (e.g. $F(0)=0, F(a) = 1$ for $a\neq 0$). A \textbf{complex model} $C_\mathbb{C}$ of the curve $C$ is obtained from HN(C) by the HN-expansion \begin {center} HN($C_\mathbb{C}$): replace $a_{ij}$ in HN($C$) by $F(a_{ij}).$ \end {center} \end{itemize} \begin {Theorem} {\bf(Campillo, \cite {Ca80})} \label{th2.1} \begin{enumerate} \item The characteristic exponents of $C_\mathbb{C}$ do not depend on the complex model. \item They are a complete set of invariants of the es-type of $C$.
\item They coincide with the characteristic exponents of $C_\mathbb{C}$ obtained from the Puiseux expansion. \end{enumerate} \end {Theorem} Note that the complex model $C_\mathbb{C}$ of $C$ is defined over the integers if $F$ has integer values (this is important for coding theory and cryptography).\\ We come now to the second paper of Campillo, ``Hamburger--Noether expansion over rings'', mentioned above. For any ring $A$ Campillo defines a \begin{itemize} \item \textbf{HN--expansion HN$_A$ over $A$}. HN$_A$ is similar to HN, but with $a_{ij} \in A$ and certain properties. It may be considered as a family over Spec($A$) of parametrized curves with constant es-type. If $A$ is the field $K$ then HN$_K$ coincides with HN defined above. \end{itemize} If $A$ is a local $K$--algebra with maximal ideal $\mathfrak {m}_A$ and $A/\mathfrak{m}_A = K$, then we may take residue classes of the HN-coefficients $a_{ij}$ modulo $\mathfrak{m}_A$, and thus the HN-expansion over $A$ induces a deformation \[ X=X(z_r),\ \ Y=Y(z_r) \ \in A[[z_r]] \] over Spec$(A)$ of the parametrization \[ x=x(z_r), \ \ y=y(z_r)\in K[[z_r]], \ \text{with } x = X, \ y = Y \text{ mod } \mathfrak{m}_A \] of an irreducible plane algebroid curve $C$. If $A$ is irreducible, the es--types of the curve $C$ parametrized by $x(z_r), y(z_r)$ over $K$ and of the curve defined by the parametrization $X(z_r), Y(z_r)$ over the quotient field Quot$(A)$ coincide. \\ We have the following important theorem, saying that for a fixed equisingularity type there exists some, in a sense ``totally versal'', equisingular family $X \to Y$ of $\mathbb{Z}$--schemes such that for any field $K$ the following holds: any equisingular family of algebroid curves over $K$ can be induced from $X \to Y$. More precisely, Campillo proves:\\ \begin {Theorem} {\bf(Campillo, \cite {Ca83})}\label{th2.2} \begin{enumerate} \item [(1)] Let $C$ be irreducible and $E$ the es--type of $C$.
Then there exists a morphism of $\mathbb{Z}$--schemes $\pi:X\to Y$ with section $\sigma:Y\to X$ s.t. for any algebraically closed field $K$ the base change $\mathbb{Z}\to K$ induces a family $X^K\to Y^K$ with section $\sigma^K$ such that the following holds: \begin{enumerate} \item [(i)] $Y^K$ is a {\em smooth, irreducible} affine algebraic variety over $K.$ \item [(ii)] For any closed point $y\in Y^K$ the induced family \[ \text{Spec}(\widehat{{\mathcal O}}_{X^K, \sigma^K(y)})\to \text{Spec}(\widehat{{\mathcal O}}_{Y^K, y}) \] is a {\bf total es--versal HN--expansion}, i.e.: for any algebroid curve $C'$ with es$(C') =E$ and any local $K$--algebra $A$ s.t. $HN_A$ induces $HN(C')$ mod $\frak{m}_A$, there exists a morphism $\varphi: \widehat{{\mathcal O}}_{Y^K, y} \to A$ such that $HN_A$ is induced from $HN_{\widehat{{\mathcal O}}_{Y^K, y}}$ via $\varphi$. \end{enumerate} \item [(2)] For a reducible curve $C$ the construction is extended to finite sets of HN--expansions over $A$ and the statement of (1) continues to hold. \end{enumerate} \end {Theorem} \subsection{Equisingularity strata}\label{subsec.ES} A Hamburger--Noether expansion over a ring $A$ induces an {\bf equisingular deformation of the parametrization} of an irreducible curve singularity $C$. Such a deformation of the parametrization induces a deformation of the equation as follows: \begin{itemize} \item Let $X=X(z_r), Y=Y(z_r)$ be an HN--expansion over $A$. It is a deformation of the parametrization $x=x(z_r), y= y(z_r)$ of an algebroid curve $C=\text{Spec}(K[[x, y]]/\langle f \rangle)$. By elimination of $z_r$ from $x-X(z_r)$ and $y-Y(z_r)$, we get a power series $F\in A[[x,y]]$ with $f=F \mod \frak{m}_A$. $F$ is a deformation of the curve $C=V(f)$ (in the usual sense) over Spec$(A)$, also called a deformation of the equation. Since it is induced from an equisingular deformation of the parametrization, we call it an equisingular deformation of the equation.
Deformations form a category and the construction is functorial (cf. \cite {GLS07}). \item In this way we get for any algebroid curve $C$ a functor \[ \begin{array}{ll} \chi_{es}: & \text{equisingular deformations of the parametrization of } C \\ & \rightarrow \text{(usual) deformations of the equation of } C.\\ \end{array} \] We call the image of $\chi_{es}$ (a full subcategory) {\bf equisingular deformations of the equation} or just {\bf es--deformations} of $C$ over Spec($A$). \item The construction can be generalized to reducible $C$ and a set of HN-expansions over $A$ of the branches of $C$ (with certain properties). \end{itemize} More generally, any deformation $\Phi: X \to T$, equisingular or not, of the parametrization of a plane curve singularity $C$ induces a deformation of the equation by eliminating the uniformising variable. \\ {\bf Question:} Does the base space $T$ of an arbitrary deformation of $C$ admit a unique maximal subspace over which the deformation is equisingular? In other words, does there exist a unique {\bf equisingularity stratum} of $\Phi$ in $T$? \\ The answer is well known for $K = \mathbb{C}$. Recall that for any $K$ and $f \in K[[x,y]]$ \begin{itemize} \item $ \mu(f):=\dim_K K[[x,y]]/\langle f_x, f_y\rangle$ is the {\bf Milnor number} of $f$. \end{itemize} If $p = \text{char}(K) = 0$, $\mu(f)$ depends only on the ideal $\langle f\rangle$ and not on the chosen generator, and then we also write $\mu(C)$ for $C=\text{Spec } K[[x, y]]/\langle f \rangle$.\\ For $K = \mathbb{C}$ or, more generally, if char($K$) = 0, the equisingularity stratum of any deformation $\Phi: X \to T$ of $C$ exists and is the {\bf $\mu$--constant stratum} of $\Phi$, i.e. the set of points $t \in T$ such that the Milnor number of the fibres $X_t$ is constant along some section $\sigma :T \to X$ of $\Phi$.
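To illustrate the definition of the Milnor number and the role of the characteristic (a standard computation; the choice of the cusp $f=y^2-x^3$ is ours):

```latex
\[
f = y^2 - x^3, \qquad f_x = -3x^2, \qquad f_y = 2y.
\]
If $p \neq 2,3$, then $\langle f_x, f_y\rangle = \langle x^2, y\rangle$ and
\[
\mu(f) = \dim_K K[[x,y]]/\langle x^2, y\rangle = 2,
\]
with monomial basis $\{1, x\}$. If $p = 2$, then $f_y = 0$ and
$K[[x,y]]/\langle x^2\rangle$ is infinite dimensional, so $\mu(f) = \infty$;
similarly $\mu(f) = \infty$ for $p = 3$.
```

This already indicates that the $\mu$--constant stratum is a delicate notion in small positive characteristic.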
If $\Phi: X \to T$ is the semiuniversal deformation of $C$, the $\mu$--constant stratum is denoted by $\Delta_{\mu}$, \begin{itemize} \item $\Delta_{\mu}$ = \{$t \in T$ : $\exists$ a section $\Sigma :T \to X$ of $\Phi$ s.t. $\mu(X_t, \Sigma(t)) = \mu(C)$\}. \end{itemize} The restriction of $\Phi$ to $\Delta_{\mu}$ may be considered as the {\bf semiuniversal es--deformation} of $C$ in the sense that any es--deformation of $C$ over some base space $T$ can be induced by a morphism $\varphi: T \to \Delta_{\mu}$, such that the tangent map of $\varphi$ is unique. Moreover $\Delta_{\mu}$ is known to be smooth (cf., e.g., \cite{GLS07}).\\ If $K$ has positive characteristic, the situation is more complicated. A semiuniversal es--deformation of $C$ exists, but an equisingularity stratum does not always exist. The situation is described in the following theorem and in the next subsection. \begin{Theorem} {\bf (Campillo, Greuel, Lossen; \cite{CGL07})} \label{th2.3} \begin{enumerate} \item [(1)] The functor $\chi_{es}$ defined above is smooth (unobstructed). \item [(2)] The functor $\underline{\Def}^{es}_C$ of (isomorphism classes of) es--deformations of the equation has a semiuniversal deformation with smooth base space $B_C^{es}$. \item [(3)] If $\text{char}(K)=0$ then $B_C^{es}$ coincides with the $\mu$--constant stratum $\Delta_{\mu}$ in the base space of the (usual) semiuniversal deformation of $C$. \item [(4)] In \textbf{good characteristic} (i.e. either $p=0$ or $p>0$ does not divide the multiplicity of any branch of $C$) there exists for any deformation of $C$ over some $T$ a unique maximal equisingularity stratum $T_{es} \subset T$ (generalizing the $\mu$--constant stratum). \end{enumerate} \end{Theorem} \subsection{Pathologies and open problems}\label{subsec.POP} If the characteristic is bad, i.e.
$p >0$ divides the multiplicity of some branch of $C$, we have the following {\bf pathologies}: \begin{enumerate} \item [(1)] There exist deformations of $C$ which are not equisingular but become equisingular after a finite base change. We call these deformations \textbf{weakly equisingular}. \item [(2)] Let $\Phi_C: X_C\to B_C$ denote the semiuniversal deformation of $C$. In general no unique es--stratum in $B_C$ exists. For example, let $p=2$ and $f=x^4+y^6+y^7$. Then the following holds. There exist infinitely many smooth subgerms $B_\alpha\subset B_C$ s.t. \begin{itemize} \item $B_\alpha\cong B_C^{es}$. \item All $B_\alpha$ have the same tangent space. \item The restriction of $\Phi_C$ to $B_\alpha$ is equisingular for any $\alpha$. \item The restriction of $\Phi_C$ to $B_{\alpha_1}\cup B_{\alpha_2}$ is not equisingular if $\alpha_1\neq \alpha_2$. Hence, a unique maximal equisingularity stratum of $\Phi_C$ does not exist. \end{itemize} \item [(3)] Let $B_C^{wes}$ denote the Zariski--closure of the union of all subgerms $B'\subset B_C$ with the property that $\Phi_C|{B'}$ is equisingular. Then $\Phi_C|B_C^{wes}$ is the \textbf{semiuniversal weakly equisingular deformation} of $C$, i.e. it has the versality property for weakly es--deformations (as above for es--deformations). We have \begin{itemize} \item $B_C^{wes}$ is irreducible but in general not smooth, \item $B_C^{wes}$ becomes smooth after a finite (purely inseparable) base change. \end{itemize} \end{enumerate} For a proof of these facts see \cite{CGL07}. Let us mention the following {\bf Open Problem}: \begin{itemize} \item We know from Theorem \ref{th2.3} that ``$p$ good'' is a sufficient condition for $B_C^{es}= B_C^{wes}$ (and hence that $B_C^{es}$ is smooth), but it is not a necessary condition. The problem is to find necessary conditions for $B_C^{es}=B_C^{wes}$ (these do not only depend on $p$).
\end{itemize} We do not yet fully understand the relation between weak and strong equisingularity (even between weak and strong triviality). \section{Classification of Singularities}\label{classification} In this section we consider hypersurface singularities $f \in K[[x]]:=K[[x_1, \ldots, x_n]]$, again with $K$ algebraically closed and char$(K) = p\geq 0$. We recall the classical results for the classification of singularities in characteristic zero and present some more recent results in positive characteristic. The two most important equivalence relations for power series are right equivalence and contact equivalence, and they lead, of course, to different classification results. It turns out that the classification of so-called ``simple singularities'' w.r.t. contact equivalence is rather similar for $p=0$ and for $p > 0$. However, for right equivalence the classification of simple singularities in positive characteristic is surprisingly different from that in characteristic zero.\\ Let $f,g \in K[[x]]$. Recall that \begin{itemize} \item $f$ is \textbf{right equivalent} to $g \ (f\overset{r}{\sim} g): \Leftrightarrow \exists\ \Phi \in \text{Aut}(K[[x]])$ such that $f= \Phi(g)$, i.e. $f$ and $g$ differ by an analytic coordinate change $\Phi$ with $\Phi(x_i)=\Phi_i$ and $f=g(\Phi_1, \cdots, \Phi_n).$ \item $f$ is \textbf{contact equivalent} to $g \ (f \overset{c}{\sim} g): \Leftrightarrow \exists\ \Phi \in \text{Aut}(K[[x]])$ and $u \in K[[x]]^\ast$ such that $f=u\cdot\Phi(g)$, i.e. the analytic $K$--algebras $K[[x]]/\langle f\rangle $ and $K[[x]]/\langle g\rangle$ are isomorphic. \item $f$ is called \textbf{right--simple} (resp.
\textbf{contact--simple}): $\Leftrightarrow \exists$ a finite set $\{g_1, \ldots, g_l\}\subset K[[x]]$ of power series such that for any deformation of $f$, \begin{center} $F_t(x)=F(x,t)=f(x)+\sum\limits^k_{i=1} t_ih_i(x,t), $ \end{center} there exists a neighbourhood $U=U(0)\subset K^k$ such that $\forall\ t\in U\ \exists\ 1\leq j \leq l$ with $F_t\overset{r}{\sim} g_j$ (resp. $F_t \overset{c}{\sim} g_j$); in other words, $f$ has \textbf{no moduli} or $f$ is of \textbf{finite deformation type}. \end{itemize} \subsection{Classification in characteristic zero} The most important classification result for hypersurface singularities in characteristic zero is the following result by V. Arnold: \begin{Theorem} {\bf (Arnold; \cite{AGV85})} Let $f\in \mathbb{C}\{x_1, \ldots, x_n\}.$ Then $f$ is right--simple $\Leftrightarrow f$ is right equivalent to an ADE singularity from the following list: \\ $ \begin{array}{ll} A_k: & x_1^{k+1} + x_2^2 \ \ \ \ \ \ +q,\ \ \ \ \ k\geq 1,\quad q:=x_3^2+\cdots + x_n^2\\ D_k: & x_2(x_1^2+x_2^{k-2}) \ +q, \ \ \ \ \ k\geq 4\\ E_6: & x_1^3+x_2^4 \ \ \ \ \ \ \ \ \ +q\\ E_7: & x_1(x_1^2+x_2^3) \ \ \ \ +q\\ E_8: & x_1^3+x_2^5 \ \ \ \ \ \ \ \ \ +q \end{array} $ \end{Theorem} \noindent Later it was proved that $f \in \mathbb{C}\{x_1, \ldots, x_n\}$ is right--simple $\Leftrightarrow f$ is contact--simple. \\ Arnold's classification has numerous applications. The list of simple or ADE singularities appears in many other contexts of mathematics and is obtained also by classifications using completely different equivalence relations (cf. \cite{Du79}, \cite{Gr92}). One such classification result is the following. \begin{Theorem} {\bf (Buchweitz, Greuel, Schreyer; Kn\"orrer, \cite{BGS87}, \cite{Kn87})} $f \in \mathbb{C}\{x_1, \ldots, x_r\}$ is simple $\Leftrightarrow$ $f$ is of \textbf{finite CM--type} (i.e.
there are only finitely many isomorphism classes of indecomposable maximal Cohen--Macaulay modules over the local ring $\mathbb{C}\{x\}/\langle f\rangle$). \end{Theorem} \subsection{Classification in positive characteristic} The classification of hypersurface singularities in positive characteristic started with the following result. Let $K$ be an algebraically closed field of characteristic $p>0$. \begin{Theorem} {\bf (Greuel, Kr\"oning \cite{GK90})} The following are equivalent for $f \in K[[x_1, \cdots, x_n]]$: \begin{enumerate} \item [(1)] $f$ is contact--simple, \item [(2)] $f$ is of finite $CM$--type, \item [(3)] $f$ is an ADE--singularity, i.e. $f$ is contact equivalent to a power series from Arnold's list, but with a few extra normal forms in small characteristic ($p\leq 5$).\\ E.g. for $p=3$ there exist two normal forms for $E_6$: $E_6^0=x^3+y^4$ and $E_6^1=x^3+x^2y^2+y^5$. \end{enumerate} \end{Theorem} The classification of right--simple singularities in positive characteristic remained open for many years. It turned out that the result differs substantially from the characteristic zero case. For example, in characteristic zero there exist two infinite series $A_k$ and $D_k$ of right--simple singularities, but for any $p > 0$ there are only finitely many right--simple singularities.
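The role of the characteristic is already visible at the level of Jacobian ideals. A minimal sketch (the dict-based polynomial representation and the helper `diff_mod` are illustrative, not from the cited papers): for $E_6^0 = x^3+y^4$ in characteristic $3$ the partial $f_x = 3x^2$ vanishes, so $\langle f_x, f_y\rangle = \langle y^3\rangle$ has infinite codimension in $K[[x,y]]$, whereas in characteristic $5$ both partials survive.

```python
# Represent polynomials in x, y as {(i, j): coeff} dicts, with coefficients
# reduced mod p (hypothetical helper for illustration only).

def diff_mod(poly, var, p):
    """Formal partial derivative of poly w.r.t. var (0 = x, 1 = y), mod p."""
    out = {}
    for (i, j), c in poly.items():
        e = (i, j)[var]
        if e == 0:
            continue
        m = (i - 1, j) if var == 0 else (i, j - 1)
        cc = (c * e) % p
        if cc:
            out[m] = cc
    return out

f = {(3, 0): 1, (0, 4): 1}   # f = x^3 + y^4
print(diff_mod(f, 0, 3))     # {} : f_x = 3x^2 vanishes mod 3
print(diff_mod(f, 1, 3))     # {(0, 3): 1} : f_y = 4y^3 = y^3 mod 3
print(diff_mod(f, 0, 5))     # {(2, 0): 3} : in char 5, f_x = 3x^2 survives
```

In particular $\mu(x^3+y^4)=\infty$ in characteristic $3$, so $E_6^0$ is contact-simple but cannot be finitely right determined there.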
\begin{Theorem} {\bf (Greuel, Nguyen \cite{GN16})} $f \in K[[x_1, \cdots, x_n]]$ is right--simple $\Leftrightarrow$ $f$ is right equivalent to one of the following: \begin{enumerate} \item [(1)] $\mathbf{n=1}: \ \ \ \ \ \ A_k: x^{k+1} \ \ \qquad\qquad 1\leq k < p-1$ \item [(2)] $\mathbf{n=2, p>2}$:\\ \hspace*{2cm} $A_k: x^{k+1}+y^2\ ,\qquad 1\leq k < p-1$ \\ \hspace*{2cm} $D_k: x(y^2+x^{k-2}), \quad 4\leq k<p$\\ \hspace*{2.1cm}$E_6: x^3+y^4, \ \ \quad\qquad p>3$\\ \hspace*{2.1cm}$E_7: x(x^2+y^3), \ \qquad p>3$\\ \hspace*{2.1cm}$E_8: x^3+y^5 \ \ \ \ \quad\qquad p>5$\\ \item [(3)] $\mathbf{n>2, p>2}$\\ \hspace*{2cm} $g(x_1, x_2)+x_3^2+\cdots +x_n^2$ with $g$ from (2)\\ \item [(4)] $\mathbf{n\geq 2, p=2}$\\ \hspace*{2cm} $A_1: x_1x_2+x_3x_4+\cdots +x_{n-1}x_n\ ,\ n$ even \end{enumerate} \end{Theorem} Note that Arnold proved a ``singularity determinator'' and accomplished the complete classification of unimodal and bimodal hypersurface singularities w.r.t. right equivalence in characteristic zero, with tables of normal forms (cf. \cite{AGV85}). Such a singularity determinator and a classification of unimodal and bimodal singularities in positive characteristic were achieved by Nguyen Hong Duc in \cite{Ng17}. \subsection{Pathologies and a conjecture} We comment on some differences between the classification in positive and in zero characteristic and propose a conjecture.\\ For $f \in K[[x_1, \cdots, x_n]]$ the {\bf Milnor number} of $f$ is \[ \mu(f)=\dim_K K[[x]]/\langle f_{x_1}, \ldots, f_{x_n}\rangle. \] If $f\in\frak{m}^2$, $\frak{m} = \langle x_1, \ldots, x_n\rangle$, and $\mu(f) <\infty$, we have seen the following pathologies: \begin{itemize} \item For all $p>0$ there exist only finitely many right--simple singularities; in particular $\mu(f)$ is bounded by a function of $p$. \item For $p=2$ and $n$ odd there exists no right--simple singularity.
\end{itemize} The reason why there are so few right--simple singularities in characteristic $p>0$ can be seen from the following example: the dimension of the group $\text{Aut}(K[[x]])$, and hence of the right orbit of $f$, is too small (due to $(x+y)^p = x^p + y^p$). \\ {\bf Example:} We show that $f=x^p+x^{p+1}$ is not right--simple in characteristic $p$ by showing that if $f_t=x^p+tx^{p+1}$ is right equivalent to $f_{t'}=x^p+t'x^{p+1}$ then $t=t'$. This shows that $f$ can be deformed into infinitely many pairwise inequivalent normal forms. To see this, assume $\Phi(f_t)=f_{t'}$ for $\Phi(x)=u_1x+u_2x^2+\cdots \in\text{Aut}(K[[x]])$. Then $(u_1x+u_2x^2+\cdots)^p+t(u_1x+u_2x^2+\cdots)^p(u_1x+\cdots)=x^p+t'x^{p+1}$ and hence $u_1^p - 1 = (u_1 - 1)^p = 0.$ This implies $u_1=1, \ tu_1^{p+1}=t'$ and $t=t'.$\\ The classification suggests the following conjecture, which is in sharp contrast to the characteristic 0 case:\\ \textbf{Conjecture:} \\ Let char$(K)=p>0$ and let $f_k\in K[[x_1,\ldots,x_n]]$ be a sequence of isolated singularities with finite Milnor numbers $\mu(f_k)$ tending to infinity as $k\to \infty$. Then the right modality of $f_k$ tends to infinity, too. The conjecture was proved by Nguyen H. D. for unimodal and bimodal singularities, where it follows from the classification: he shows in \cite{Ng17} that $\mu(f) \leq 4p$ if the right modality of $f$ is at most $2$. \section{Finite determinacy and tangent image}\label{group actions} A power series is finitely determined (for a given equivalence relation) if it is equivalent to its truncation up to some finite order. For the classification of singularities the property of being finitely determined is indispensable. In this section we give a survey on finite determinacy in characteristic $p \geq 0$, not only for power series but also for ideals and matrices of power series. We consider algebraic group actions and their tangent maps, where new phenomena appear for $p>0$, which lead to interesting open problems.
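Returning to the example above: the Frobenius identity $(x+y)^p = x^p + y^p$ reduces to the fact that $p$ divides every intermediate binomial coefficient $\binom{p}{k}$, $0<k<p$. A quick pure-Python check (illustrative only):

```python
from math import comb

# (x+y)^p = x^p + y^p in characteristic p, because p | C(p, k) for 0 < k < p.
for p in (2, 3, 5, 7, 11):
    inner = [comb(p, k) % p for k in range(1, p)]
    assert all(c == 0 for c in inner), (p, inner)
    print(p, "->", [comb(p, k) for k in range(p + 1)])
```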
\subsection{Finite determinacy for hypersurfaces} A power series $f \in K[[x_1, \cdots, x_n]]$ is called right (resp. contact) $\mathbf{k}$\textbf{--determined} if for all $g\in K[[x]]$ such that $j_k(f)=j_k(g)$ we have $f\overset{r}{\sim} g$ (resp. $f\overset{c}{\sim} g$). $f$ is called {\bf finitely determined} if it is $k$--determined for some $k < \infty$. \begin{itemize} \item Here \[ j_k: K[[x]]\to J^{(k)}:=K[[x]]/\frak{m}^{k+1} \] denotes the canonical projection, called the $k$--jet. Usually we identify $j_k(f)$ with the power series expansion of $f$ up to and including order $k$. \item We use ord$(f)$, the {\bf order} of $f$ (the maximal $k$ with $f \in \frak{m}^k$), and the {\bf Tjurina number} of $f$, \[ \tau(f):=\dim_K K[[x]]/\langle f, f_{x_1}, \ldots, f_{x_n}\rangle. \] \end{itemize} \begin{Theorem} {\bf (Boubakri, Greuel, Markwig, Pham \cite{BGM12}, \cite{GP17})} For $f \in \frak{m}$ the following holds. \begin{enumerate} \item $f$ is finitely right determined $\Leftrightarrow \mu(f)< \infty$. \\ If this holds, then $f$ is right ($2\mu(f)-\text{ord}(f)+2$)--determined. \item $f$ is finitely contact determined $\Leftrightarrow \tau(f)< \infty$. \\ If this holds, then $f$ is contact ($2\tau(f)-\text{ord}(f)+2$)--determined. \end{enumerate} \end{Theorem} If the characteristic is 0 we have better bounds: $f$ is right ($\mu(f)+1$)--determined (resp. contact ($\tau(f)+1$)--determined) if $\mu(f) < \infty$ (which is equivalent to $\tau(f) < \infty$ for $p=0$); see \cite{GLS07} for $K=\mathbb{C}$ and use the Lefschetz principle for arbitrary $K$ with $p=0$. The proof in characteristic $p>0$ is more difficult than for $p=0$, due to a pathology of algebraic group actions in positive characteristic, which we address in Section 4.3. \begin{itemize} \item We can express right (resp. contact) equivalence by actions of algebraic groups.
We have \[ f\overset{r}{\sim} g \ (\text{resp. } f\overset{c}{\sim} g)\Leftrightarrow f\in \text{orbit } G\cdot g \text{ of } g \text{ with groups} \] \[ G={\mathcal R}:=\text{Aut}(K[[x]]) \textbf{ right group } (\text{for } \overset{r}{\sim}), \] \[ \ \ \ G={\mathcal K}:=K[[x]]^\ast\rtimes {\mathcal R} \textbf{ contact group } (\text{for } \overset{c}{\sim}), \] where $G$ acts as $(\Psi=\Phi, f)\mapsto \Phi(f)$ (resp. $(\Psi=(u, \Phi), f)\mapsto u\cdot \Phi(f)$). \item $G$ is not an algebraic group, but the $k$--jet $G^{(k)}$ is algebraic and the induced action on $k$--jets \[ G^{(k)}\times J^{(k)}\to J^{(k)}\ ,\ (\Psi, f)\mapsto j_k(\Psi\cdot f) \] is an algebraic action. If $f$ is finitely determined then, for sufficiently large $k$, $f$ is (right resp. contact) equivalent to $g$ iff $j_k(f)$ is in the $G^{(k)}$--orbit of $j_k(g)$. \end{itemize} The determination of the tangent space $T_f(G^{(k)}f)$ of the orbit of $G^{(k)}$ at $f$ is important, but there is a big difference between $p=0$ and $p >0$. Consider the tangent map $T{o_k}$ of the orbit map \[ o_k : G^{(k)} \to G^{(k)} f \subset K[[x]],\ \Psi\mapsto j_k(\Psi\cdot f) \] and denote the image of $T{o_k} : T_e(G^{(k)}) \to T_f(G^{(k)} f) $ by $\widetilde{T}_f(G^{(k)}f)$. We have \[ \widetilde{T}_f(G^{(k)}f) \subset T_f(G^{(k)}f) \] with equality if $p=0$, but strict inclusion may happen if $p>0$, as we shall see below. $\widetilde{T}_f(G^{(k)}f)$ and $T_f(G^{(k)}f)$ are inverse systems and we define the inverse limits \[ T_f(Gf):=\mathop {\lim }\limits_{\mathop {\longleftarrow} \limits_{k\ge 0} } T_{f}(G^{(k)}f)\subset K[[x]], \textbf{ tangent space}, \text{ and} \] \[ \widetilde T_f(Gf):=\mathop {\lim }\limits_{\mathop {\longleftarrow} \limits_{k\ge 0} } \widetilde T_{f}(G^{(k)}f)\subset K[[x]], \textbf{ tangent image} \] to the orbit $Gf$ of $G$.
\\ The tangent images for $G = {\mathcal R}$ and $G ={\mathcal K}$ can be easily identified: \[ \widetilde{T}_f({\mathcal R} f)=\frak{m}\langle \frac{\partial f}{\partial x_1}, \ldots, \frac{\partial f}{\partial x_n}\rangle, \] \[ \widetilde{T}_f({\mathcal K} f)=\langle f\rangle +\frak{m} \langle \frac{\partial f}{\partial x_1}, \ldots, \frac{\partial f}{\partial x_n}\rangle. \] If char$(K)=0$ then the orbit map $o_k$ is separable, which implies $T_f(Gf)=\widetilde{T}_f(Gf)$. Moreover, in any characteristic we have: \begin{itemize} \item If the tangent space and the tangent image to $Gf$ coincide (e.g. if char$(K)=0$), then $f$ is finitely determined if and only if \[ \text{dim}_K K[[x]]/\widetilde{T}_f(Gf) < \infty. \] \end{itemize} \subsection{Finite determinacy for ideals and matrices} We generalize the results of the previous section to ideals and matrices. Consider matrices \[ A = [a_{ij}] \in M_{r,s}: = \text{Mat}(r, s, K[[x_1, \cdots, x_n]]) \text{ with } r \geq s, \] and the group \[ G = \text{GL}(r, K[[x]])\times \text{GL}(s, K[[x]])\rtimes \text{Aut}(K[[x]]) \] acting on $M_{r,s}$ in the obvious way: \[ (U, V, \Phi, A)\mapsto U\cdot \Phi(A)\cdot V=U\cdot [a_{ij}(\Phi(x))] \cdot V. \] If $r=s=1$ and $A=[f]$ then $GA = {\mathcal K} f$, and the considerations of this section generalize contact equivalence for power series.\\ $A$ is called $G$ {\bf $k$--determined} if for all $B \in M_{r,s}$ with $j_k(A)=j_k(B)$ we have $B \in G \cdot A$.
$A$ is {\bf finitely} $G$--{\bf determined} if $A$ is $G$ $k$--determined for some $k < \infty$.\\ As in the case of one power series we have: \begin{itemize} \item the induced action of $G$ on jet--spaces gives algebraic group actions, \item the tangent image to the orbit of $G$ is contained in the tangent space, \[ \widetilde{T}_A(GA)\subset T_A(GA), \text{ with equality if } p= \text{char}(K)=0, \] \item in any characteristic $\widetilde{T}_A(GA)$ can be computed in terms of the entries of $A$ and their partials, but $T_A (GA)$ in general cannot if $p>0$. \end{itemize} \begin{Theorem} {\bf (Greuel, Pham \cite{GP16})} \begin{enumerate} \item [(1)] If $\dim_K(M_{r,s}/\widetilde{T}_A(GA))<\infty$, then $A$ is finitely $G$--determined (in particular, the orbit $GA$ contains a matrix with polynomial entries). \item [(2)] If $A$ is finitely $G$--determined, then $\dim_K(M_{r,s}/T_A(GA))<\infty$. \end{enumerate} \end{Theorem} In general we do not know whether $\dim_K(M_{r,s}/\widetilde{T}_A(GA))<\infty$ is necessary for finite $G$--determinacy of $A$ for $p>0$, except for the case of 1--column matrices: \begin{Theorem} {\bf (Greuel, Pham \cite{GP17})} Let $p\geq 0$. If $A\in M_{r, 1}$, then $A$ is finitely $G$--determined $\Leftrightarrow \dim_K M_{r,1}/\widetilde{T}_A(GA)<\infty$. \end{Theorem} Two ideals $I, J \subset K[[x_1, \ldots, x_n]]$ are called {\bf contact equivalent} $\Leftrightarrow K[[x]]/I\cong K[[x]]/J$. Since $G$--equivalence for 1--column matrices is the same as contact equivalence for the ideals generated by the entries of the matrices, we have \begin{Corollary} Let $I=\langle f_1, \ldots, f_r\rangle \subset \frak{m}$ be an ideal with $r$ the minimal number of generators of $I$ and let $I_r$ be the ideal generated by the $r\times r$--minors of the Jacobian matrix $[\frac{\partial f_i}{\partial x_j}]$.
\begin{enumerate} \item [(1)] $r\geq n$: $I$ is finitely contact determined \\ \hspace*{1.5cm} $\Leftrightarrow \dim_K(K[[x]]/I)<\infty$, \item [(2)] $r\leq n$: $I$ is finitely contact determined\\ \hspace*{1.5cm} $\Leftrightarrow \dim_K(K[[x]]/(I+I_r))<\infty$. \end{enumerate} \end{Corollary} \begin{Theorem} {\bf (Greuel, Pham \cite{GP17})} For an ideal $I=\langle f_1, \ldots, f_r\rangle \subset \frak{m} K[[x_1, \ldots, x_n]]$ with $r$ the minimal number of generators of $I$, $r\leq n$, the following are equivalent in any characteristic: \begin{enumerate} \item [(1)] $I$ is finitely contact determined, \item [(2)] the Tjurina number $\tau(I):=\dim_K K[[x]]^r/(I\cdot K[[x]]^r+\langle\frac{\partial f}{\partial x_1}, \cdots, \frac{\partial f}{\partial x_n}\rangle)<\infty$, with $\frac{\partial f}{\partial x_i} =( \frac{\partial f_1}{\partial x_i}, \cdots,\frac{\partial f_r}{\partial x_i}) \in K[[x]]^r$, \item [(3)] $I$ is an isolated complete intersection singularity. \end{enumerate} \end{Theorem} For the proof of (1) $\Rightarrow$ (2) we need a result about Fitting ideals, which is of independent interest. \begin{Proposition} {\bf (Greuel, Pham \cite{GP17})} Let $A \in M_{r,s}$ be finitely $G$--determined and let $I_t\subset K[[x_1, \cdots, x_n]]$ be the Fitting ideal generated by the $t\times t$ minors of $A$. Then $I_t$ has maximal height, i.e. \[ ht(I_t)=\min\{s, (r-t+1)(s-t+1)\}, \ t = 1, \cdots, n. \] \end{Proposition} \subsection{Pathology and a problem} We show that $\widetilde{T}_f(Gf)\subsetneqq T_f(Gf)$ may happen in positive characteristic: \begin{Example} Let $ G={\mathcal K}\ ,\ f=x^3+y^4\ ,\ \text{char}(K)=3$.
We compute (using SINGULAR, see \cite{GP17a}): \begin{itemize} \item $f$ is contact 5--determined, \item $\dim_K\widetilde{T}_f({\mathcal K}^{(5)}f)=11$, \item $\dim_K T_f({\mathcal K}^{(5)}f)=12$. \end{itemize} \end{Example} For the computation of $\widetilde{T}_f$ we use the formula from Section 4.1, but since the tangent space $T_f$ has no description in terms of $f$ and $\frac{\partial f}{\partial x_i}$ if char$(K) >0$, we compute the stabilizer of $f$ and its dimension with the help of Gr\"obner bases.\\ \textbf{Problem:} Does finite determinacy of $A \in M_{r,s}$ always imply finite codimension of $\widetilde{T}_A(GA)$ if $p>0$? We may conjecture that this is not the case for arbitrary $r,s,n$. \addcontentsline{toc}{section}{References} Fachbereich Mathematik, Universit\"at Kaiserslautern, Erwin-Schr\"odinger Str., 67663 Kaiserslautern, Germany E-mail address: [email protected] \end{document}
\begin{document} \title{Deterministic Team Problems with Signaling Incentive} \begin{abstract} This paper considers linear quadratic team decision problems where the players in the team affect each other's information structure through their decisions. Whereas the stochastic version of the problem is well known to be complex, with nonlinear optimal solutions that are hard to find, the deterministic counterpart is shown to be tractable. We show that under a mild assumption, where the weighting matrix on the controller is chosen large enough, linear decisions are optimal and can be found efficiently by solving a semi-definite program. \end{abstract} \begin{keywords} Team Decision Theory, Game Theory, Convex Optimization. \end{keywords} \section*{Notation} \begin{tabular}{ll} $\mathbb{S}^n$ & The set of $n\times n$ symmetric matrices.\\ $\mathbb{S}^n_{+}$& The set of $n\times n$ symmetric positive\\ & semidefinite matrices.\\ $\mathbb{S}^n_{++}$& The set of $n\times n$ symmetric positive\\ & definite matrices.\\ $\mathcal{C}$& The set of functions $\mu:\mathbb{R}^p\rightarrow \mathbb{R}^m$ with\\ & $\mu(y)=(\mu_1(y_1), \mu_2(y_2), ..., \mu_N(y_N))$,\\ & $\mu_i:\mathbb{R}^{p_i}\rightarrow \mathbb{R}^{m_i}$, $\sum_{i} m_i=m$, $\sum_{i} p_i=p$.\\ $\mathbb{K}$ & $\{K\in \mathbb{R}^{m\times p}\,|\, K=K_1\oplus \cdots \oplus K_N, K_i\in \mathbb{R}^{m_i\times p_i}\}$\\ $A^{\dagger}$ & Denotes the pseudo-inverse of the\\ & square matrix $A$. \\ $A_{\perp}$ & Denotes the matrix with minimal number\\ & of columns spanning the nullspace of $A$.\\ $A_i$& The $i$th block row of the matrix $A$.\\ $A_{ij}$& The block element of $A$ in position $(i, j)$.\\ $\succeq$ & $A\succeq B$ $\Longleftrightarrow$ $A-B\in \mathbb{S}^n_{+}$.\\ $\succ$ & $A\succ B$ $\Longleftrightarrow$ $A-B\in \mathbb{S}^n_{++}$.\\ $\mathbf{Tr}$& $\mathbf{Tr}[A]$ is the trace of the matrix $A$.\\ $\mathcal{N}(m,X)$& The set of Gaussian variables with\\ & mean $m$ and covariance $X$.
\end{tabular} \section{Introduction} The team problem is an optimization problem where a number of decision makers (or players) make up a team, optimizing a common cost function with respect to some uncertainty representing \textit{nature}. Each member of the team has limited information about the global state of nature. Furthermore, the team members could have different pieces of information, which makes the problem different from the one considered in classical optimization, where there is only one decision function that has access to the entire information available about the state of nature. Team problems turned out to possess properties considerably different from standard optimization, even for specific problem structures such as the optimization of a quadratic cost in the state of nature and the decisions of the team members. In stochastic linear quadratic decision theory, it was believed for a while that certainty equivalence holds between estimation and optimal decision with complete information, even for team problems. The certainty-equivalence principle can be briefly explained as follows. First assume that every team member has access to the information about the entire state of nature, and find the corresponding optimal decision for each member. Then, each member makes an estimate of the state of nature, which is in turn combined with the optimal decision obtained from the full information assumption. It turns out that this strategy does \textit{not} yield an optimal solution (see \cite{radner}). A general solution to static stochastic quadratic team problems was presented by Radner \cite{radner}. Radner's result gave hope that some related problems of dynamic nature could be solved using similar arguments. But in 1968, Witsenhausen \cite{witsenhausen:1968} showed in his well-known paper that finding the optimal decision can be complex if the decision makers affect each other's information.
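As a toy instance of Radner's static setting (the numbers and the two-player setup below are illustrative, not from \cite{radner}): let $x\sim\mathcal{N}(0,1)$, let player $i$ observe $y_i = x+v_i$ with independent noises $v_i\sim\mathcal{N}(0,s_i)$, and let the team minimize $\mathbf{E}(u_1+u_2-x)^2$. Within the linear class $u_i=k_iy_i$ (which is optimal here by Radner's theorem), the gains solve the person-by-person stationarity conditions $\mathbf{E}\,y_i(k_1y_1+k_2y_2-x)=0$:

```python
# Toy static quadratic team (illustrative sketch, not from the cited papers):
# x ~ N(0,1); player i observes y_i = x + v_i with independent v_i ~ N(0, s_i);
# the team minimizes E (u_1 + u_2 - x)^2 over u_i = k_i y_i.
# Person-by-person optimality gives the 2x2 linear system
#   (1 + s1) k1 +        k2 = 1
#          k1 + (1 + s2) k2 = 1
s1, s2 = 0.5, 2.0
det = (1 + s1) * (1 + s2) - 1.0
k1 = ((1 + s2) - 1.0) / det          # Cramer's rule
k2 = ((1 + s1) - 1.0) / det

def cost(a, b):
    """E (a*y1 + b*y2 - x)^2, expanded via the second moments of (x, y1, y2)."""
    return a * a * (1 + s1) + b * b * (1 + s2) + 2 * a * b - 2 * a - 2 * b + 1

# No unilateral deviation by either player improves the team cost:
for eps in (0.1, -0.1):
    assert cost(k1 + eps, k2) >= cost(k1, k2)
    assert cost(k1, k2 + eps) >= cost(k1, k2)
print(k1, k2, cost(k1, k2))
```

The stationarity conditions use only second moments, which is why a linear solution exists; the failure of this approach in Witsenhausen's problem (next) comes from the decisions entering the measurements.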
Witsenhausen considered a dynamic decision problem over two time steps to illustrate that difficulty. The dynamic problem can actually be written as a static team problem: \begin{equation*} \begin{aligned} \text{minimize } & \mathbf{E}\hspace{1mm}\left\{k_0u_0^2+(x+u_0-u_1)^2\right\}\\ \text{subject to } & u_0=\mu_0(x),\hspace{2mm} u_1=\mu_1(x+u_0+w), \end{aligned} \end{equation*} where $x$ and $w$ are Gaussian with zero mean and variance $X$ and $W$, respectively. Here, we have two decision makers, one corresponding to $u_0$, and the other to $u_1$. Witsenhausen showed that the optimal decisions $\mu_0$ and $\mu_1$ are not linear because of the \textit{signaling}/\textit{coding incentive} of $u_0$. Decision maker $u_1$ measures $x+u_0+w$, and hence, its measurement is affected by $u_0$. Decision maker $u_0$ tries to \textit{encode} information about $x$ in its decision, which makes the optimal strategy complex. The problem above is actually an information-theoretic problem. To see this, consider the slightly modified problem \begin{equation*} \begin{aligned} \text{minimize } & \mathbf{E}\hspace{1mm}(x-u_1)^2\\ \text{subject to } & u_0=\mu_0(x), \hspace{2mm} \mathbf{E}\hspace{1mm}u_0^2 \leq 1, \hspace{2mm} u_1=\mu_1(u_0+w). \end{aligned} \end{equation*} The modification made is that we removed $u_0$ from the objective function, and instead added a constraint $\mathbf{E}\hspace{1mm}u_0^2 \leq 1$ to make sure that it has a limited variance (of course we could set an arbitrary power limitation on the variance). The modified problem is exactly the Gaussian channel coding/decoding problem (see Figure \ref{teamchannel})! The optimal solution to Witsenhausen's counterexample is still unknown. Even if we restrict the optimization problem to the set of linear decisions, there is still no known polynomial-time algorithm to find optimal solutions. Another interesting counterexample was recently given in \cite{lipsa:2008}.
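Restricting Witsenhausen's problem to linear strategies $u_0=(a-1)x$ (so $x+u_0=ax$) and taking $u_1$ to be the MMSE-linear estimate of $ax$ from $ax+w$ gives the closed-form cost $J(a)=k_0(a-1)^2X + a^2XW/(a^2X+W)$. A quick numerical scan over this one-parameter family (parameter values illustrative):

```python
# Witsenhausen's counterexample restricted to linear strategies:
# u0 = (a-1)x, and u1 = MMSE-linear estimate of a*x given a*x + w.
# Standard closed-form cost; (k0, X, W) below are illustrative values:
#   J(a) = k0*(a-1)^2*X + (a^2*X*W) / (a^2*X + W)
k0, X, W = 0.04, 25.0, 1.0

def J(a):
    return k0 * (a - 1) ** 2 * X + (a ** 2 * X * W) / (a ** 2 * X + W)

grid = [i / 10000.0 for i in range(-20000, 20001)]
costs = [J(a) for a in grid]
best = min(costs)
a_star = grid[costs.index(best)]

# Even among linear strategies, "u0 = 0" (a = 1) is not optimal:
assert best < J(1.0)
print(a_star, best)
```

Witsenhausen's point is that, for suitable $(k_0, X, W)$, nonlinear (quantization-type) choices of $\mu_0$ beat every strategy in this linear family, which is why the optimal decisions are not linear.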
\begin{figure} \begin{center} \psfig{file=team_channel.eps,width=0.8\hsize} \end{center} \caption{Coding-decoding diagram over a Gaussian channel.} \label{teamchannel} \end{figure} In this paper, we consider the problem of distributed decision making with information constraints in a linear quadratic setting. For instance, information constraints appear naturally when making decisions over networks. These problems can be formulated as team problems. Early results considered static team theory in stochastic settings \cite{marschak:1955}, \cite{radner}, \cite{ho:chu}. In \cite{didinsky:basar:1992}, the team problem with two team members was solved. The solution cannot be easily extended to more than two players since it uses the fact that the two members have common information, a property that does not necessarily hold for more than two players. \cite{didinsky:basar:1992} uses the result to consider the two-player problem with one-step delayed measurement sharing with the neighbors, which is a special case of the partially nested information structure, where there is no signaling incentive. Also, a nonlinear team problem with two team members was considered in \cite{bernhard:99}, where one of the team members is assumed to have full information whereas the other member has only access to partial information about the state of the world. Related team problems with exponential cost criterion were considered in \cite{krainak:82}. Optimizing team problems with respect to \textit{affine} decisions in a minimax quadratic cost was shown to be equivalent to stochastic team problems with exponential cost, see \cite{fan:1994}. The connection is not clear when the optimization is carried out over nonlinear decision functions. In \cite{gattami:bob:rantzer}, a general solution was given for an arbitrary number of team members, where linear decisions were shown to be optimal and can be found by solving a linear matrix inequality.
In the deterministic version of Witsenhausen's counterexample, that is, minimizing the quadratic cost with respect to the worst-case scenario of the state $x$ (instead of the assumption that $x$ is Gaussian), linear decisions were shown to be optimal in \cite{rotkowitz:2006}. We will show that for static linear quadratic minimax team problems, where the players in the team affect each other's information structure through their decisions, linear decisions are optimal in general, and can be found by solving a linear matrix inequality. \section{Main Results} \label{minimaxteam} The deterministic problem considered is a quadratic game between a team of players and nature. Each player has limited information that could be different from that of the other players in the team. This game is formulated as a minimax problem, where the team is the minimizer and nature is the maximizer. We show that if there is a solution to the static minimax team problem, then linear decisions are optimal, and we show how to find a linear optimal solution by solving a linear matrix inequality.
\section{Deterministic Team Problems with Signaling Incentive} Consider the following team decision problem \begin{equation} \label{minimax_signaling} \begin{aligned} \inf_{\mu} \sup_{v\in \mathbb{R}^p, 0\neq w\in \mathbb{R}^q}\hspace{1mm} & \frac{L(w,u)}{\|w\|^2+\|v\|^2}\\ \text{subject to }\hspace{1mm} & y_i = \sum_{j=1}^ND_{ij}u_j+E_iw + v_i\\ & u_i = \mu_i(y_i)\\ & \text{for } i=1,..., N, \end{aligned} \end{equation} where $u_i\in \mathbb{R}^{m_i}$ and $E_i\in \mathbb{R}^{p_i\times q}$, for $i=1, ..., N$, and $L(w,u)$ is a quadratic cost given by $$L(w,u)= \left[ \begin{matrix} w\\ u \end{matrix} \right] ^T \left[ \begin{matrix} Q_{ww} & Q_{wu}\\ Q_{uw} & Q_{uu} \end{matrix} \right] \left[ \begin{matrix} w\\ u \end{matrix} \right] , $$ with $Q_{uu}\in \mathbb{S}^m_{++}$, $m=m_1+\cdots + m_N$, and $$ \left[ \begin{matrix} Q_{ww} &Q_{wu}\\ Q_{uw} & Q_{uu} \end{matrix} \right] \in \mathbb{S}^{q+m}_+. $$ The players $u_1$,..., $u_N$ make up a \textit{team}, which plays against \textit{nature}, represented by the vector $w$, using $\mu\in \mathcal{C}$. This problem is more complicated than the static team decision problem studied in \cite{gattami:bob:rantzer}, since it has the same flavour as the Witsenhausen counterexample presented in the introduction. We see that the measurement $y_i$ of decision maker $i$ could be affected by the other decision makers through the terms $D_{ij}u_j$, $j=1, ..., N$. Note that we have the equality $y = Du+Ew+v$, which is equivalent to $v = y-Du-Ew$.
Using this change of variables, the team problem (\ref{minimax_signaling}) is equivalent to \begin{equation} \label{minimax_signaling2} \begin{aligned} \inf_{\mu\in\mathcal{C}} \sup_{y\in \mathbb{R}^p, 0\neq w\in \mathbb{R}^q}\hspace{1mm} & \frac{L(w,\mu(y))}{\|y-D\mu(y)-Ew\|^2+\|w\|^2} \end{aligned} \end{equation} \begin{ass} \label{ass2} $$ \gamma^\star \leq \bar{\gamma} := \inf_{Du \neq 0}\hspace{1mm} \frac{u^TQ_{uu}u}{u^TD^TDu} . $$ \end{ass} \begin{thm} \label{signaling_team} Let $\gamma^\star$ be the value of the game (\ref{minimax_signaling}) and suppose that Assumption \ref{ass2} holds. Then the following statements hold: \begin{itemize} \item[($i$)] There exist linear decisions $\mu_i(y_i)=K_iy_i$, $i=1, ..., N$, for which the value $\gamma^\star$ is achieved. \item[($ii$)] If $\gamma^\star<\bar{\gamma}$, then for any $\gamma\in [\gamma^\star~,~\bar{\gamma})$, a linear decision $Ky$ with $K\in \mathbb{K}$ that achieves $\gamma$ is obtained by solving the linear matrix inequality {\small \begin{equation*} \label{team_computation} \begin{aligned} \text{find }\hspace {2mm} & K\\ \text{subject to }\hspace {2mm} & K=\text{diag}(K_1, ..., K_N), \\ & \left[\begin{matrix} Q_{xx}(\gamma) + Q_{xu}(\gamma)KC+C^TK^TQ_{ux}(\gamma) & C^TK^T\\ KC & -Q_{uu}^{-1}(\gamma) \end{matrix} \right] \preceq 0 , \end{aligned} \end{equation*} where $ C = \left[ \begin{matrix} 0 & I \end{matrix} \right] \in \mathbb{R}^{p\times(q+p)} $ and \begin{equation*} \begin{aligned} \left[ \begin{matrix} Q_{xx}(\gamma) & Q_{xu}(\gamma)\\ Q_{ux}(\gamma) & Q_{uu}(\gamma) \end{matrix} \right] &= \left[ \begin{matrix} Q_{ww} & 0 &Q_{wu}\\ 0 & 0 & 0\\ Q_{uw} & 0 & Q_{uu} \end{matrix} \right] - \gamma \left[ \begin{matrix} I+E^TE & - E^T & E^TD\\ - E & I & - D\\ D^TE & - D^T & D^TD \end{matrix} \right] . \end{aligned} \end{equation*} }
\end{itemize} \end{thm} \begin{proof} ($i$) Note that $$y = Du + Ew + v \Longleftrightarrow v = y - Du - Ew,$$ so that $$\frac{L(w,u)}{\|v\|^2+\|w\|^2} = \frac{L(w,u)}{\|y - Du - Ew\|^2+\|w\|^2}.$$ Now introduce $x\in \mathbb{R}^{n}$, $n=p+q$, such that $$ x = \left[ \begin{matrix} w\\ y \end{matrix} \right] , $$ and \begin{equation} \label{QR} \begin{aligned} Q &= \left[ \begin{matrix} Q_{ww} & 0 & Q_{wu}\\ 0 & 0 & 0\\ Q_{uw} & 0 & Q_{uu} \end{matrix} \right] , \\ R &= \left[ \begin{matrix} I+E^TE & - E^T & E^TD\\ - E & I & - D\\ D^TE & - D^T & D^TD \end{matrix} \right]. \end{aligned} \end{equation} Then, \begin{equation*} \begin{aligned} J(x,u) &:= \left[ \begin{matrix} x\\ u \end{matrix} \right]^T Q \left[ \begin{matrix} x\\ u \end{matrix} \right] = L(w,u), \\ F(x,u) &:= \left[ \begin{matrix} x\\ u \end{matrix} \right]^T R \left[ \begin{matrix} x\\ u \end{matrix} \right] = \|y - Du - Ew\|^2+\|w\|^2, \end{aligned} \end{equation*} and $ y = Cx. $ Hence, we have that $$ \frac{L(w,u)}{\|v\|^2+\|w\|^2} = \frac{L(w,u)}{\|y - Du - Ew\|^2+\|w\|^2} = \frac{J(x,u)}{F(x,u)}. $$ Then, for any $\gamma\in (\gamma^\star~,~\bar{\gamma})$, there exists a decision function $\mu\in\mathcal{C}$ such that $$ J(x,\mu(Cx)) - \gamma F(x,\mu(Cx)) = \left[ \begin{matrix} x\\ \mu(Cx) \end{matrix} \right] ^T \left[ \begin{matrix} Q_{xx}(\gamma) & Q_{xu}(\gamma)\\ Q_{ux}(\gamma) & Q_{uu}(\gamma) \end{matrix} \right] \left[ \begin{matrix} x\\ \mu(Cx) \end{matrix} \right] \leq 0 $$ for all $x$. Under Assumption \ref{ass2}, we have that $$ Q_{uu}(\gamma) = Q_{uu} -\gamma D^TD\succ 0 $$ for any $\gamma\in (\gamma^\star~,~\bar{\gamma})$. Thus, we can apply Theorem 1 in \cite{gattami:bob:rantzer}, which implies that there must exist linear decisions that achieve any $\gamma\in (\gamma^\star~,~\bar{\gamma})$.
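As a quick sanity check of the quadratic-form identity for $F(x,u)$ (a numerical sketch, not part of the paper's argument), the scalar case $p=q=m=1$ below verifies that $z^TRz$ with $z=(w,y,u)$ reproduces $\|y-Du-Ew\|^2+\|w\|^2$; note that matching the $\|w\|^2$ term requires an identity term in the $(w,w)$ block of $R$, and the $(w,u)$ block equals $E^TD$.

```python
# Sanity check (scalar case q = p = m = 1) that the quadratic form
# z^T R z with z = (w, y, u) reproduces F(x,u) = |y - D*u - E*w|^2 + |w|^2.
E, D = 2.0, 3.0  # arbitrary scalar problem data

def R_matrix(E, D):
    # R built block-by-block as in (QR); the extra identity in the
    # (w, w) block accounts for the ||w||^2 term of F
    return [[1 + E * E, -E,  E * D],
            [-E,         1, -D    ],
            [ E * D,    -D,  D * D]]

def quad_form(M, z):
    return sum(M[i][j] * z[i] * z[j] for i in range(3) for j in range(3))

def F_direct(w, y, u):
    return (y - D * u - E * w) ** 2 + w ** 2

for (w, y, u) in [(1.0, 0.5, -2.0), (0.3, -1.1, 0.7), (0.0, 4.0, 1.0)]:
    assert abs(quad_form(R_matrix(E, D), [w, y, u]) - F_direct(w, y, u)) < 1e-9
```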
By compactness, there must exist linear decisions that achieve $\gamma^\star$.\\ ($ii$) Let $\mu(Cx) = KCx$ for $K\in \mathbb{K}$. Then $$ \left[ \begin{matrix} x\\ KCx \end{matrix} \right] ^T \left[ \begin{matrix} Q_{xx}(\gamma) & Q_{xu}(\gamma)\\ Q_{ux}(\gamma) & Q_{uu}(\gamma) \end{matrix} \right] \left[ \begin{matrix} x\\ KCx \end{matrix} \right] \leq 0, ~ ~ \forall x $$ $$ \Updownarrow $$ $$ \left[ \begin{matrix} I\\ KC \end{matrix} \right] ^T \left[ \begin{matrix} Q_{xx}(\gamma) & Q_{xu}(\gamma)\\ Q_{ux}(\gamma) & Q_{uu}(\gamma) \end{matrix} \right] \left[ \begin{matrix} I\\ KC \end{matrix} \right] \preceq 0 $$ $$ \Updownarrow $$ $$ Q_{xx}(\gamma)+ Q_{xu}(\gamma)KC+C^TK^TQ_{ux}(\gamma) + C^TK^T Q_{uu}(\gamma) KC\preceq 0 $$ $$ \Updownarrow $$ $$ \left[\begin{matrix} Q_{xx}(\gamma) + Q_{xu}(\gamma)KC+C^TK^TQ_{ux}(\gamma) & C^TK^T\\ KC & -Q_{uu}^{-1}(\gamma) \end{matrix} \right] \preceq 0, $$ where the last equivalence is a Schur complement using $Q_{uu}(\gamma)\succ 0$, and the proof is complete. \end{proof} \section{Linear Quadratic Control with Arbitrary Information Constraints} Consider the dynamic team decision problem \begin{equation} \label{lq} \begin{aligned} \inf_{\mu} \sup_{w, v\neq 0}\hspace{2mm} & \frac{\sum_{k=1}^{M}\left[ \begin{matrix} x(k)\\ u(k) \end{matrix} \right]^T \left[ \begin{matrix} Q_{xx} & Q_{xu}\\ Q_{ux} & Q_{uu} \end{matrix} \right] \left[ \begin{matrix} x(k)\\ u(k) \end{matrix} \right]}{\sum_{k=1}^M \|w(k)\|^2+\|v(k)\|^2}\\ \text{subject to } & x(k+1) = Ax(k)+Bu(k)+w(k)\\ & y_i(k) = C_ix(k)+v_i(k)\\ & u_i(k) = [\mu_k]_{i}(y_i(k)), \quad i=1,..., N. \end{aligned} \end{equation} Assuming $x(0)=0$, we can write $x(t)$ and $y_i(t)$ as {\small \begin{equation*} \label{expansion} \begin{aligned} x(t) &= \sum_{k=1}^{t}A^{k-1}Bu(t-k)+\sum_{k=1}^{t}A^{k-1}w(t-k),\\ y_i(t) &= \sum_{k=1}^{t}C_iA^{k-1}Bu(t-k)+\sum_{k=1}^{t}C_iA^{k-1}w(t-k)+v_i(t).
\end{aligned} \end{equation*}} It is easy to see that the optimal control problem above is equivalent to a static team problem of the form (\ref{minimax_signaling}). Thus, linear controllers are optimal under Assumption \ref{ass2}. \begin{ex} Consider the deterministic version of the Witsenhausen counterexample presented in the introduction: \begin{equation*} \begin{aligned} \inf_{\mu_1,\mu_2} \hspace{1mm} & \gamma \\ \text{s. t. } &\frac{ {k^2}\mu_1^2(y_1) +({x_1}-\mu_2(y_2))^2}{x_0^2+w^2} \leq \gamma\\ &{y_1 = x_0}\\ &{x_1} = {x_0}+\mu_1(y_1)\\ &y_2 = x_1+w = {x_0}+\mu_1(y_1)+w \end{aligned} \end{equation*} Substitute $x_0=y_1$, $x_1 = y_1 + \mu_1(y_1)$ and $ w^2 = ({x_0}+\mu_1(y_1)-y_2)^2 $ in the inequality $$ {k^2}\mu_1^2(y_1) +({x_1}-\mu_2(y_2))^2 \leq \gamma ({x_0^2}+w^2). $$ Then, we get the equivalent problem \begin{equation*} \begin{aligned} \inf_{\mu_1, \mu_2} \hspace{1mm} & \gamma \\ \text{s. t. } & {k^2}\mu_1^2(y_1)+(y_1+\mu_1(y_1)-\mu_2(y_2))^2 \leq \gamma (y_1^2+(y_1+\mu_1(y_1)-y_2)^2) \end{aligned} \end{equation*} Expanding and collecting terms gives the equivalent inequality $$ \left[ \begin{matrix} y_1\\ y_2\\ \mu_1(y_1)\\ \mu_2(y_2) \end{matrix} \right]^T \left[ \begin{matrix} 1-2\gamma & \gamma & 1-\gamma & -1\\ \gamma & -\gamma & \gamma & 0\\ 1-\gamma & \gamma & {1+k^2 - \gamma } & {-1}\\ -1 & 0 & {-1} & {1} \end{matrix} \right] \left[ \begin{matrix} y_1\\ y_2\\ \mu_1(y_1)\\ \mu_2(y_2) \end{matrix} \right] \leq 0 . $$ For $k^2=0.1$, we can search over $\gamma <\bar{\gamma} = k^2 = 0.1$, and we can use Theorem \ref{signaling_team} to deduce that linear decisions are optimal; they can be computed by bisecting over $\gamma$ and solving a linear matrix inequality feasibility problem at each step.
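As a numerical sanity check (an illustrative sketch, not part of the original derivation), one can substitute linear decisions $\mu_1(y_1)=K_1y_1$, $\mu_2(y_2)=K_2y_2$ into the quadratic form above and verify negative semidefiniteness of the resulting $2\times 2$ form in $(y_1,y_2)$. With $k^2=1$, the gains $K_1=-0.3856$, $K_2=0.3840$ reported in this example certify any $\gamma$ above the optimal value $\gamma^\star\approx 0.3820$, e.g.\ $\gamma=0.39$:

```python
# Check that the linear decisions mu1(y1) = K1*y1, mu2(y2) = K2*y2
# reported for k^2 = 1 make z^T M(gamma) z <= 0 for all (y1, y2)
# when gamma = 0.39 > gamma* ~= 0.3820.
k2, g = 1.0, 0.39
K1, K2 = -0.3856, 0.3840

# the 4x4 quadratic form in (y1, y2, u1, u2) from the example
M = [[1 - 2 * g,  g, 1 - g,       -1],
     [g,         -g, g,            0],
     [1 - g,      g, 1 + k2 - g,  -1],
     [-1,         0, -1,           1]]

# substitute u1 = K1*y1, u2 = K2*y2: N = T^T M T, T = [[1,0],[0,1],[K1,0],[0,K2]]
T = [[1, 0], [0, 1], [K1, 0], [0, K2]]
N = [[sum(T[i][r] * M[i][j] * T[j][c] for i in range(4) for j in range(4))
      for c in range(2)] for r in range(2)]

# 2x2 negative semidefiniteness: diagonal entries <= 0 and determinant >= 0
det = N[0][0] * N[1][1] - N[0][1] * N[1][0]
assert N[0][0] <= 0 and N[1][1] <= 0 and det >= 0
```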
We find that $${\gamma^\star \approx 0.0901},$$ $${\mu_1(y_1)=-0.9001 y_1},$$ $${\mu_2(y_2)=-0.0896 y_2}.$$ For $k^2 = 1$, we iterate with respect to $\gamma < 1$, and we find optimal linear decisions given by \begin{equation*} \begin{aligned} {\mu_1(y_1)} & {=} {-0.3856 y_1}\\ {\mu_2(y_2)} & {=} {0.3840 y_2}, \end{aligned} \end{equation*} which achieve $$ \gamma^\star \approx 0.3820. $$ \end{ex} \begin{ex} Consider the deterministic counterpart of the multi-stage finite-horizon stochastic control problem that was considered in \cite{lipsa:2008}: $$ \inf_{\mu_k:\mathbb{R}\rightarrow \mathbb{R}} \sup_{x_0, v_0, ..., v_{m-1}\in \mathbb{R}} \frac{(x_m-x_0)^2+ \sum_{k=0}^{m-2} \mu^2_k(y_k)}{x_0^2+v_0^2+\cdots+v_{m-1}^2} $$ subject to the dynamics \begin{align} x_{k+1} &= \mu_k(y_k) \nonumber\\ y_k &= x_k+v_k \nonumber . \end{align} It is easy to check that $\bar{\gamma}=1$ and $Q_{uu} -\gamma D^T D\succ 0$ for $\gamma<\bar{\gamma}$ (compare with Assumption \ref{ass2}). Thus, linear decisions are optimal. This is in contrast to the stochastic version, where linear decisions are not optimal for $m>2$. \end{ex} \section{Conclusions} We have considered the static team problem in deterministic linear quadratic settings where the team members may affect each other's information. We have shown that decisions that are linear in the observations are optimal and can be found by solving a linear matrix inequality. For future work, it would be interesting to consider the case where the measurements are given by $y = Du +Ew + Fv$, for an arbitrary matrix $F$. \end{document}
\begin{document} \title{Shortest paths in arbitrary plane domains} \author{L. C. Hoehn \and L. G. Oversteegen \and E. D. Tymchatyn} \date{November 30, 2018} \address[L.\ C.\ Hoehn]{Nipissing University, Department of Computer Science \& Mathematics, 100 College Drive, Box 5002, North Bay, Ontario, Canada, P1B 8L7} \email{[email protected]} \address[L.\ G.\ Oversteegen]{University of Alabama at Birmingham, Department of Mathematics, Birmingham, AL 35294, USA} \email{[email protected]} \address[E.\ D.\ Tymchatyn]{University of Saskatchewan, Department of Mathematics and Statistics, 106 Wiggins road, Saskatoon, Canada, S7N 5E6} \email{[email protected]} \thanks{The first named author was partially supported by NSERC grant RGPIN 435518} \thanks{The second named author was partially supported by NSF-DMS-1807558} \thanks{The third named author was partially supported by NSERC grant OGP-0005616} \subjclass[2010]{Primary 54F50; Secondary 51M25} \keywords{path, length, shortest, plane domain, homotopy, analytic covering map} \begin{abstract} Let $\Omega$ be a connected open set in the plane and $\gamma: [0,1] \to \overline{\Omega}$ a path such that $\gamma((0,1)) \subset \Omega$. We show that the path $\gamma$ can be ``pulled tight'' to a unique shortest path which is homotopic to $\gamma$, via a homotopy $h$ with endpoints fixed whose intermediate paths $h_t$, for $t \in [0,1)$, satisfy $h_t((0,1)) \subset \Omega$. We prove this result even in the case when there is no path of finite Euclidean length homotopic to $\gamma$ under such a homotopy. For this purpose, we offer three other natural, equivalent notions of a ``shortest'' path. This work generalizes previous results for simply connected domains with simple closed curve boundaries. 
\end{abstract} \maketitle \section{Introduction} \label{sec:introduction} Bourgin and Renz \cite{BourginRenz1989} proved that given a simply connected plane domain $\Omega$ with simple closed curve boundary, and given any two points $p,q \in \overline{\Omega}$, there exists a unique \emph{shortest} path in $\overline{\Omega}$ which is the uniform limit of paths which (except possibly for their endpoints $p$ and $q$) are contained in $\Omega$. If there is a rectifiable such curve (i.e.\ one with finite Euclidean length), then by shortest is meant the one with the smallest Euclidean length. If not, then by shortest is meant \emph{locally shortest}, which means that every subpath not containing the endpoints $p$ and $q$ is of finite Euclidean length and is shortest among all such paths joining the endpoints of the subpath. A related result for $1$-dimensional locally connected continua is proved in \cite{CannonConnerZastrow2002}. In \cref{thm:main1}, we extend the result of Bourgin and Renz to paths in multiply connected domains with arbitrary boundaries. Along the way, we characterize the concept of a shortest path in up to four different ways, outlined in the subsections below and stated in \cref{thm:main2}. To state these concepts and our results precisely, we must fix some terminology and notation regarding paths and homotopies. For notational convenience, we identify the plane $\mathbb{R}^2$ with the complex numbers $\mathbb{C}$. Fix a connected open set $\Omega \subset \mathbb{C}$ and points $p,q \in \overline{\Omega}$. We consider paths whose range is contained in $\Omega$, except that the endpoints of the path may belong to $\partial \Omega$. For brevity, we use the abbreviation ``e.p.e.''\ to mean ``except possibly at endpoints''. Formally, a \emph{path in $\Omega$ (e.p.e.)\ joining $p$ and $q$} is a continuous function $\gamma: [0,1] \to \mathbb{C}$ such that $\gamma(0) = p$, $\gamma(1) = q$, and $\gamma(s) \in \Omega$ for all $s \in (0,1)$. 
When $p,q \in \partial \Omega$, the existence of such a path is equivalent to the statement that $p$ and $q$ are \emph{accessible} from $\Omega$. We consider homotopies $h: [0,1] \times [0,1] \to \mathbb{C}$ such that for each $t \in [0,1)$, the path $h_t: [0,1] \to \mathbb{C}$ defined by $h_t(s) = h(s,t)$ is a path in $\Omega$ (e.p.e.)\ joining $p$ and $q$. Given a path $\gamma$ in $\Omega$ (e.p.e.)\ joining $p$ and $q$, let $\overline{[\gamma]}$ denote the set of all paths homotopic to $\gamma$ under such homotopies. Note that paths in $\overline{[\gamma]}$ may meet the boundary $\partial \Omega$ in more than just the endpoints. Let $[\gamma]$ denote the set of all paths in $\Omega$ (e.p.e.)\ which belong to $\overline{[\gamma]}$. Note that in spite of the notation, $\overline{[\gamma]}$ is not equal to the topological closure of $[\gamma]$ (e.g.\ in the function space). The main result of this paper is that if $\gamma$ is a path in $\Omega$ (e.p.e.)\ joining $p$ and $q$, then $\overline{[\gamma]}$ contains a unique \emph{shortest} path. By ``shortest'', we mean any one of the equivalent notions introduced next and collected in \cref{thm:main2} below. First, if there is a path in $\overline{[\gamma]}$ of finite Euclidean length, then we may consider the path in $\overline{[\gamma]}$ of smallest Euclidean length. Second, we adapt Bourgin \& Renz's condition of locally shortest as follows: \begin{defn*} \label{defn:locally shortest} Let $\gamma$ be a path in $\Omega$ (e.p.e.)\ joining $p$ and $q$, and let $\lambda \in \overline{[\gamma]}$. \begin{itemize} \item Given $0 < s_1 < s_2 < 1$, we define the class $\overline{[\lambda^\gamma_{[s_1,s_2]}]}$ as follows. Let $h: [0,1] \times [0,1] \to \overline{\Omega}$ be a homotopy such that $h_0 = \gamma$, $h_1 = \lambda$, and for each $t \in [0,1)$, $h_t$ is a path in $\Omega$ (e.p.e.)\ joining $p$ and $q$. 
We use the homotopy $h$ to ``pull off'' the path $\lambda {\upharpoonright}_{[s_1,s_2]}$ (e.p.e.)\ from $\partial \Omega$ as follows. Define the path $\lambda^\gamma_{[s_1,s_2]}: [s_1,s_2] \to \overline{\Omega}$ by \[ \lambda^\gamma_{[s_1,s_2]}(s) = h(s, 1 + (s-s_1)(s-s_2)) .\] See \cref{fig:locally shortest}. Though this path is defined in terms of the homotopy $h$, the class $\overline{[\lambda^\gamma_{[s_1,s_2]}]}$ does not depend on the choice of $h$ (see \cref{cor:locally shortest well-defined} below). \item A path $\lambda \in \overline{[\gamma]}$ is called \emph{locally shortest} if for any $0 < s_1 < s_2 < 1$, the path $\lambda {\upharpoonright}_{[s_1,s_2]}$ has finite Euclidean length, and this length is smallest among all paths in $\overline{[\lambda^\gamma_{[s_1,s_2]}]}$. \end{itemize} \end{defn*} \begin{figure} \caption{An illustration of the paths $\gamma$, $\lambda$, and $\lambda^\gamma_{[s_1,s_2]}$.} \label{fig:locally shortest} \end{figure} Two further notions of ``shortest path'' are given in the next subsections. \subsection{Efficient paths} \label{sec:efficient} As above, fix a connected open set $\Omega \subset \mathbb{C}$ and points $p,q \in \overline{\Omega}$. \begin{defn*} \label{defn:efficient} Let $\gamma$ be a path in $\Omega$ (e.p.e.)\ joining $p$ and $q$. A path $\lambda \in \overline{[\gamma]}$ is called \emph{efficient} (in $\overline{[\gamma]}$) if the following property holds: \begin{quote} Given any $s_1,s_2 \in [0,1]$ with $s_1 < s_2$, let $\lambda'$ be the path defined by $\lambda'(s) = \lambda(s)$ for $s \notin [s_1,s_2]$, and $\lambda' {\upharpoonright}_{[s_1,s_2]}$ parameterizes the straight line segment $\overline{\lambda(s_1) \lambda(s_2)}$ or the constant path if $\lambda(s_1) = \lambda(s_2)$. If $\lambda' \in \overline{[\gamma]}$, then $\lambda = \lambda'$ (up to reparameterization). \end{quote} \end{defn*} The following is the first main result of this paper. The proof is in \cref{sec:proof main1}.
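A side remark on the time argument in the definition of $\lambda^\gamma_{[s_1,s_2]}$: for $s\in[s_1,s_2]$ the product $(s-s_1)(s-s_2)$ is nonpositive, so with the convention $1+(s-s_1)(s-s_2)$ the homotopy parameter stays in $[1-((s_2-s_1)/2)^2,1]\subset[0,1]$ and equals $1$ exactly at $s_1$ and $s_2$, as the ``pulling off'' requires. A small numerical check (an illustrative sketch only, with arbitrary sample values of $s_1,s_2$):

```python
# The reparameterized time t(s) = 1 + (s - s1)*(s - s2): on [s1, s2]
# the product is <= 0, so t(s) lies in [1 - ((s2 - s1)/2)**2, 1],
# with t(s) = 1 exactly at the endpoints s1 and s2.
s1, s2 = 0.2, 0.7  # arbitrary sample parameters with 0 < s1 < s2 < 1

def t(s):
    return 1 + (s - s1) * (s - s2)

samples = [s1 + (s2 - s1) * i / 100 for i in range(101)]
assert t(s1) == 1.0 and t(s2) == 1.0
assert all(0.0 <= t(s) <= 1.0 for s in samples)
# the minimum is attained at the midpoint of [s1, s2]
assert abs(min(t(s) for s in samples) - (1 - ((s2 - s1) / 2) ** 2)) < 1e-9
```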
\begin{thm} \label{thm:main1} Let $\Omega \subset \mathbb{C}$ be a connected open set, let $p,q \in \overline{\Omega}$, and let $\gamma$ be a path in $\Omega$ (except possibly at endpoints) joining $p$ and $q$. Then $\overline{[\gamma]}$ contains a unique (up to parameterization) efficient path. \end{thm} \subsection{Alternative notion of path length} \label{sec:len} Instead of the standard Euclidean path length, which can only distinguish between two paths if at least one of them is rectifiable, we can use an alternative notion of path length for which all paths have finite length, and which has other useful properties and many features in common with Euclidean length. A notion of length with such properties was first introduced in \cite{Morse1936} and further developed in \cite{Silverman1969}. A similar notion is given in \cite{CannonConnerZastrow2002}, and the authors of the present paper modified and extended that notion in \cite{HOT2018-1}. Let $\mathsf{len}$ refer to either the path length function introduced in \cite{HOT2018-1} or in \cite{Morse1936}. The essential properties of $\mathsf{len}$ are: \begin{enumerate} \item $\mathsf{len}(\gamma)$ is finite (in fact $\mathsf{len}(\gamma) < 1$) for any path $\gamma$; \item $\mathsf{len}(\gamma) = 0$ if and only if $\gamma$ is constant; \item For any $p,q \in \mathbb{C}$, the straight line segment $\overline{pq}$ has smallest $\mathsf{len}$ length among all paths from $p$ to $q$. Moreover, if $\gamma$ is a path from $p$ to $q$ which deviates from the straight line segment $\overline{pq}$, or which is not monotone, then $\mathsf{len}(\gamma) > \mathsf{len}(\overline{pq})$; \item If $\Phi: \mathbb{C} \to \mathbb{C}$ is an isometry, then $\mathsf{len}(\Phi \circ \gamma) = \mathsf{len}(\gamma)$; \item Given $0 \leq c_1 < c_2 \leq 1$, $\mathsf{len}(\gamma {\upharpoonright}_{[c_1,c_2]}) \leq \mathsf{len}(\gamma)$.
This inequality is strict unless $\gamma$ is constant outside of $[c_1,c_2]$; \item Given $c \in (0,1)$, $\mathsf{len}(\gamma) \leq \mathsf{len}(\gamma {\upharpoonright}_{[0,c]}) + \mathsf{len}(\gamma {\upharpoonright}_{[c,1]})$. This inequality is strict unless one of the subpaths is constant; \item $\mathsf{len}$ is a continuous function from the space of paths (with the uniform metric $d_{\mathrm{sup}}$) to $\mathbb{R}$. \end{enumerate} Our second main result is the following. The proof is in \cref{sec:proof main2}. \begin{thm} \label{thm:main2} Let $\Omega \subset \mathbb{C}$ be a connected open set, let $p,q \in \overline{\Omega}$, and let $\gamma$ be a path in $\Omega$ (except possibly at endpoints) joining $p$ and $q$. Then for a path $\lambda \in \overline{[\gamma]}$, the following are equivalent: \begin{enumerate} \item $\lambda$ is locally shortest; \item $\lambda$ is efficient; and \item $\lambda$ has smallest $\mathsf{len}$ length among all paths in $\overline{[\gamma]}$. \end{enumerate} Moreover, if $\overline{[\gamma]}$ contains a rectifiable path, then in addition to the above we have \begin{enumerate} \setcounter{enumi}{3} \item $\lambda$ has smallest Euclidean length among all paths in $\overline{[\gamma]}$. \end{enumerate} \end{thm} \subsection*{Acknowledgements} The authors would like to thank Professor Alexandre Eremenko for educating them on the history of analytic covering maps and the referee of an earlier version of this paper for several helpful comments which helped to clarify some of the arguments. \section{Preliminaries on bounded analytic covering maps} \label{sec:covering maps} Our arguments in Sections \ref{sec:proof main1} and \ref{sec:proof main2} make heavy use of the theory of complex analytic covering maps. In this section, we collect the results we will use later. Denote $\mathbb{D} = \{z \in \mathbb{C}: |z| < 1\}$.
It is a standard classical result (see e.g.\ \cite{Ahlfors1973}) that for any connected open set $\Omega \subset \mathbb{C}$ whose complement contains at least two points, and for any $z_0 \in \Omega$, there is a complex analytic covering map $\varphi: \mathbb{D} \to \Omega$ such that $\varphi(0) = z_0$. Moreover, this covering map $\varphi$ is uniquely determined by the argument of $\varphi'(0)$. Many of the results below hold only for analytic covering maps $\varphi: \mathbb{D} \to \Omega$ to bounded sets $\Omega$. For the remainder of this subsection, let $\Omega \subset \mathbb{C}$ be a bounded connected open set, and let $\varphi: \mathbb{D} \to \Omega$ be an analytic covering map. We first state three classic results about the boundary behavior of $\varphi$. All of the subsequent results in this section will be derived from these and from standard covering map theory. \begin{thm}[Fatou \cite{Fatou06}, see e.g.\ {\cite[p.22]{Conway1995}}] \label{thm:Fatou} The radial limits $\lim_{r \to 1^-} \varphi(r\alpha)$ exist for all points $\alpha \in \partial \mathbb{D}$ except possibly for a set of linear measure zero in $\partial \mathbb{D}$. \end{thm} It can easily be seen that if $\alpha \in \partial \mathbb{D}$ is such that $\lim_{r \to 1^-} \varphi(r\alpha)$ exists, then the limit belongs to $\partial \Omega$. \begin{thm}[Riesz \cite{Riesz16, Riesz23}, see e.g.\ {\cite[p.22]{Conway1995}}] \label{thm:Riesz} For each $z \in \partial \Omega$, the set of points $\alpha \in \partial \mathbb{D}$ for which $\lim_{r\to 1^-} \varphi(r\alpha) = z$ has linear measure zero in $\partial \mathbb{D}$. \end{thm} \begin{thm}[Lindel\"{o}f \cite{Lindelof1915}, see e.g.\ {\cite[p.23]{Conway1995}}] \label{thm:Lindelof} Let $\widehat{\gamma}: [0,1] \to \overline{\mathbb{D}}$ be a path such that $\widehat{\gamma}([0,1)) \subset \mathbb{D}$ and $\widehat{\gamma}(1) = \alpha \in \partial \mathbb{D}$. Suppose that $\lim_{t \to 1^-} \varphi \circ \widehat{\gamma}(t)$ exists. 
Then the radial limit $\lim_{r\to 1^-} \varphi(r\alpha)$ exists and is equal to $\lim_{t \to 1^-} \varphi \circ \widehat{\gamma}(t)$. \end{thm} We define the \emph{extended covering map} \[ \overline{\varphi}: \mathbb{D} \cup \{\alpha \in \partial \mathbb{D}: \lim_{r \to 1^-} \varphi(r\alpha) \textrm{ exists}\} \to \overline{\Omega} \] by \[ \overline{\varphi}(\widehat{z}) = \begin{cases} \varphi(\widehat{z}) & \textrm{if } \widehat{z} \in \mathbb{D} \\ \lim_{r \to 1^-} \varphi(r\alpha) & \textrm{if } \widehat{z} = \alpha \in \partial \mathbb{D} . \end{cases} \] By \cref{thm:Fatou}, this function is defined on $\mathbb{D}$ plus a full measure subset of $\partial \mathbb{D}$. Note that this function $\overline{\varphi}$ is not necessarily continuous at points where it is defined in $\partial \mathbb{D}$. It is, however, continuous by definition along each radial segment from the center $0$ of $\mathbb{D}$ to any point $\alpha \in \partial \mathbb{D}$ where it is defined. In fact, more is true: if $\overline{\varphi}$ is defined at $\alpha$, then its restriction to any \emph{Stolz angle} at $\alpha$ is continuous (see e.g.\ \cite[p.23]{Conway1995}); however, we will not need this concept in this paper. As a general convention, we will put hats on symbols, as in $\widehat{z}$, $\widehat{A}$, or $\widehat{\gamma}$, to refer to points, subsets, or paths in $\overline{\mathbb{D}}$, and use symbols without hats to refer to points, subsets, or paths in $\overline{\Omega} \subset \mathbb{C}$, with the understanding that if both $z$ and $\widehat{z}$ appear in an argument, they are related by $z = \overline{\varphi}(\widehat{z})$ (and likewise for sets and paths). 
If $z = \overline{\varphi}(\widehat{z})$ for points $z \in \overline{\Omega}$ and $\widehat{z} \in \overline{\mathbb{D}}$ (respectively, $A = \overline{\varphi}(\widehat{A})$ for sets $A \subset \overline{\Omega}$ and $\widehat{A} \subset \overline{\mathbb{D}}$), we say $\widehat{z}$ is a \emph{lift} of $z$ (respectively, $\widehat{A}$ is a \emph{lift} of $A$). Similarly, if $\gamma = \overline{\varphi} \circ \widehat{\gamma}$ for paths $\gamma$ in $\overline{\Omega}$ and $\widehat{\gamma}$ in $\overline{\mathbb{D}}$, we say $\widehat{\gamma}$ is a \emph{lift} of $\gamma$. The next result about lifts of paths is very similar to classical results for covering maps. Since our extended map $\overline{\varphi}$ is not necessarily continuous at points in $\partial \mathbb{D}$, it is not a simple consequence of basic covering map theory. It is proved in \cite{HOT2018-2}, and we also include a proof here to convey the flavor of working with the extended map $\overline{\varphi}$. \begin{thm} \label{thm:lift} Let $p,q \in \overline{\Omega}$, let $z \in \Omega$, and let $\widehat{z} \in \mathbb{D}$ such that $\varphi(\widehat{z}) = z$. \begin{enumerate} \item Let $\gamma$ be a path in $\Omega$ (e.p.e.)\ joining $p$ and $q$, and suppose $\gamma(s_0) = z$ for some $s_0 \in [0,1]$. Then there exists a unique path $\widehat{\gamma}$ in $\mathbb{D}$ (e.p.e.)\ such that $\overline{\varphi} \circ \widehat{\gamma} = \gamma$ and $\widehat{\gamma}(s_0) = \widehat{z}$. \item Let $h: [0,1] \times [0,1] \to \mathbb{C}$ be a homotopy such that for each $t \in [0,1]$ the path $h_t(s) = h(s,t)$ is a path in $\Omega$ (e.p.e.)\ joining $p$ and $q$, and suppose $h(s_0,0) = z$ for some $s_0 \in [0,1]$. 
Then there exists a unique homotopy $\widehat{h}: [0,1] \times [0,1] \to \overline{\mathbb{D}}$ such that $\widehat{h}(s,t) \in \mathbb{D}$ and $\varphi \circ \widehat{h}(s,t) = h(s,t)$ for each $(s,t) \in (0,1) \times [0,1]$, $\widehat{h}(\{0\} \times [0,1])$ is a single point $\widehat{p}$ in $\overline{\mathbb{D}}$ with $\overline{\varphi}(\widehat{p}) = p$, $\widehat{h}(\{1\} \times [0,1])$ is a single point $\widehat{q}$ in $\overline{\mathbb{D}}$ with $\overline{\varphi}(\widehat{q}) = q$, and $\widehat{h}(s_0,0) = \widehat{z}$. \end{enumerate} \end{thm} \begin{proof} Observe that (1) follows from (2), by using the constant homotopy $h(s,t) = \gamma(s)$ for all $t \in [0,1]$. Thus it suffices to prove (2). Since $\varphi$ is a covering map and $h((0,1) \times [0,1]) \subset \Omega$, it follows from standard covering space theory that there is a unique homotopy $\widehat{h}: (0,1) \times [0,1] \to \mathbb{D}$ such that $\varphi \circ \widehat{h}(s,t) = h(s,t)$ for each $(s,t) \in (0,1) \times [0,1]$, and $\widehat{h}(s_0,0) = \widehat{z}$. It remains to prove that there are points $\widehat{p},\widehat{q} \in \overline{\mathbb{D}}$ such that defining $\widehat{h}(\{0\} \times [0,1]) = \{\widehat{p}\}$ and $\widehat{h}(\{1\} \times [0,1]) = \{\widehat{q}\}$ makes $\widehat{h}$ into a continuous function from $[0,1] \times [0,1]$ to $\overline{\mathbb{D}}$. This is immediate if $p$ and $q$ belong to $\Omega$, so we assume $p,q \in \partial \Omega$. Observe that the set $\widehat{h}((0,\frac{1}{2}] \times [0,1])$ compactifies on a continuum $K \subset \partial \mathbb{D}$. We need to prove that $K$ is a single point $\{\widehat{p}\}$. Suppose for a contradiction that $K$ contains more than one point. Then there exists by \cref{thm:Fatou} a set $E$ of positive measure in the interior of $K$ so that for each $\alpha \in E$, the radial limit $\lim_{r \to 1^-} \varphi(r\alpha)$ exists.
Since the set $\widehat{h}((0,\frac{1}{2}] \times [0,1])$ compactifies on $K$ we can choose, for each $\alpha \in E$, a sequence $(s_n,t_n)$ in $(0,\frac{1}{2}] \times [0,1]$ such that $s_n \to 0$ and $\widehat{h}(s_n,t_n) = r_n \alpha$, with $r_n \to 1$. It follows that the radial limit $\lim_{r \to 1^-} \varphi(r\alpha) = p$ for each $\alpha \in E$, a contradiction with \cref{thm:Riesz}. Thus $K$ is a single point $\{\widehat{p}\}$, and so we can continuously extend $\widehat{h}$ to $\{0\} \times [0,1]$ by defining $\widehat{h}(\{0\} \times [0,1]) = \{\widehat{p}\}$. By \cref{thm:Lindelof} it follows that $\overline{\varphi}(\widehat{p}) = p$. Likewise, by considering $\widehat{h}([\frac{1}{2},1) \times [0,1])$ we obtain by the same argument a point $\widehat{q} \in \overline{\mathbb{D}}$ such that $\widehat{h}$ extends continuously to $\{1\} \times [0,1]$ by defining $\widehat{h}(\{1\} \times [0,1]) = \{\widehat{q}\}$, and $\overline{\varphi}(\widehat{q}) = q$. \end{proof} \cref{thm:lift}(2) implies that if $p,q \in \overline{\Omega}$, $\gamma$ is a path in $\Omega$ (e.p.e.)\ joining $p$ and $q$, $\widehat{\gamma}$ is a lift of $\gamma$ with endpoints $\widehat{p},\widehat{q} \in \overline{\mathbb{D}}$, and $\lambda \in [\gamma]$, then there exists a lift $\widehat{\lambda}$ of $\lambda$ with the same endpoints $\widehat{p},\widehat{q}$ as $\widehat{\gamma}$. We will prove a stronger statement in \cref{thm:class closure lift} below. The next result follows immediately from \cref{thm:lift}(1). \begin{cor} \label{cor:line lift} Let $L$ be an open arc in $\Omega$ whose closure is an arc with distinct endpoints in $\partial \Omega$. Let $\widehat{L}$ be a component of $\varphi^{-1}(L)$. Then $\widehat{L}$ is an open arc in $\mathbb{D}$ whose closure in $\overline{\mathbb{D}}$ is an arc with distinct endpoints in $\partial \mathbb{D}$. 
\end{cor} We next prove a result about the existence of small ``crosscuts'' in $\mathbb{D}$ straddling any point $\alpha \in \partial \mathbb{D}$ for which $\overline{\varphi}(\alpha) \in \partial \Omega$ is not isolated in $\partial \Omega$. For one-to-one analytic maps $\varphi: \mathbb{D} \to \mathbb{C}$, this result is standard; see e.g.\ \cite{Pommerenke1992}. We were unable to find a reference for the case of a bounded analytic covering map $\varphi$, so we include a proof for completeness. \begin{thm} \label{thm:crosscut} Let $\alpha \in \partial \mathbb{D}$ be such that the radial limit $\lim_{r \to 1^-} \varphi(r\alpha)$ exists, and let $p \in \partial \Omega$ be the limit. \begin{enumerate} \item If $p$ is not isolated in $\partial \Omega$, then for any sufficiently small simple closed curve $S$ in $\mathbb{C}$ containing $p$ in its interior, there is a component $\widehat{S}$ of $\varphi^{-1}(S)$ whose closure is an arc separating $\alpha$ from the center $0$ of $\mathbb{D}$ in $\overline{\mathbb{D}}$. \item If $p$ is isolated in $\partial \Omega$, then for any sufficiently small simple closed curve $S$ in $\Omega$ containing $p$ in its interior, there is a component $\widehat{S}$ of $\varphi^{-1}(S)$ whose closure is a circle whose intersection with $\partial \mathbb{D}$ is $\{\alpha\}$. \end{enumerate} Moreover, in both cases, the diameter of $\widehat{S}$ can be made arbitrarily small by choosing $S$ sufficiently small. \end{thm} \begin{proof} Let $S$ be small enough so that $\varphi(0)$ is not in the closed topological disk bounded by $S$. Let $r_0 < 1$ be close enough to $1$ so that $\varphi(r\alpha)$ is in the interior of $S$ for all $r \in [r_0,1)$. The closed (in $\mathbb{D}$) set $\varphi^{-1}(S)$ must separate $r_0 \alpha$ from $0$ in $\mathbb{D}$, since otherwise there would be a path from $0$ to $r_0 \alpha$ which would project to a path in $\Omega$ from $\varphi(0)$ to $\varphi(r_0 \alpha)$ without intersecting $S$, a contradiction. 
Therefore, there must be a component $\widehat{S}$ of $\varphi^{-1}(S)$ which separates $r_0 \alpha$ from $0$ in $\mathbb{D}$ (see e.g.\ \cite[p.438 \S 57 III Theorem 1]{Kuratowski1968}). For (1), suppose that $p$ is not isolated in $\partial \Omega$. Assume first that $S \cap \partial \Omega \neq \emptyset$. Let $C$ be the component of $S \cap \Omega$ containing $\varphi(\widehat{S})$. This $C$ is a path in $\Omega$ (e.p.e.)\ joining two points (not necessarily distinct) $a,b \in \partial \Omega$ with $a \neq p \neq b$. By \cref{thm:lift} and the fact that $C$ is an open arc in $\Omega$, we have that each component of $\varphi^{-1}(C)$ is an arc in $\mathbb{D}$ joining points $\widehat{a},\widehat{b} \in \partial \mathbb{D}$ at which the radial limits exist and are equal to $a$ and $b$, respectively; in particular, we have $\widehat{a} \neq \alpha \neq \widehat{b}$. It follows that the closure of $\widehat{S}$ is an arc with endpoints distinct from $\alpha$, which separates $\alpha$ from $0$ in $\overline{\mathbb{D}}$, as desired. Now choose $\varepsilon_1 > 0$ small enough so that $\varphi(0)$ is not in the closed disk $\overline{B}(p,\varepsilon_1)$ and such that $\partial B(p,\varepsilon_1) \cap \partial \Omega \neq \emptyset$. Assume that $S \subset B(p,\varepsilon_1)$. If $S \cap \partial \Omega \neq \emptyset$, then we are done by the previous paragraph; hence, suppose that $S \cap \partial \Omega = \emptyset$. Choose $\varepsilon_0 > 0$ small enough so that $B(p,\varepsilon_0)$ is contained in the interior of $S$, and such that $\partial B(p,\varepsilon_0) \cap \partial \Omega \neq \emptyset$. By the previous paragraph, there are components $\widehat{S}_0$ and $\widehat{S}_1$ of $\varphi^{-1}(\partial B(p,\varepsilon_0))$ and $\varphi^{-1}(\partial B(p,\varepsilon_1))$, respectively, which are arcs separating $\alpha$ from $0$. 
As above, there is a component $\widehat{S}$ of $\varphi^{-1}(S)$ which separates a tail end of the radial segment at $\alpha$ from $0$ in $\mathbb{D}$, and which separates $\widehat{S}_0$ from $\widehat{S}_1$ in $\mathbb{D}$ (i.e.\ lies between $\widehat{S}_0$ and $\widehat{S}_1$). This implies that the endpoints of the closure of $\widehat{S}$ are distinct, hence the closure of $\widehat{S}$ is an arc, as desired. For (2), let $S$ be small enough so that $p$ is the only point of $\partial \Omega$ in the closed topological disk bounded by $S$. As above, we obtain a component $\widehat{S}$ of $\varphi^{-1}(S)$ which separates a tail end of the radial segment at $\alpha$ from $0$ in $\mathbb{D}$. Since $\varphi$ is a covering map, this $\widehat{S}$ must be an open arc in $\mathbb{D}$ whose two ends accumulate on continua $K_1$ and $K_2$ in $\partial \mathbb{D}$, respectively. We first argue that $K_1$ and $K_2$ are single points. Suppose $\beta_1,\beta_2 \in K_1$ with $\beta_1 \neq \beta_2$. Then by \cref{thm:Fatou} there exists $\beta \in K_1$ between $\beta_1$ and $\beta_2$ where the radial limit $\lim_{r \to 1^-} \varphi(r\beta)$ exists. However, this radial segment meets $\widehat{S}$ arbitrarily close to $\beta$, hence $\lim_{r \to 1^-} \varphi(r\beta) \notin \partial \Omega$, a contradiction. Thus $K_1$ is a single point $\{\widehat{a}\}$. Similarly, $K_2$ is a single point $\{\widehat{b}\}$. If $\widehat{a} \neq \alpha$, by \cref{thm:Fatou} and \cref{thm:Riesz} we can find $\beta \in \partial \mathbb{D}$ between $\widehat{a}$ and $\alpha$ such that the radial limit $\lim_{r \to 1^-} \varphi(r\beta)$ exists and is different from $p$. The path $\overline{\varphi}(r\beta)$, $0 \leq r \leq 1$, is homotopic to one which does not enter the interior of $S$. By \cref{thm:lift}, we can lift this homotopy to obtain a path from $0$ to $\beta$ in $\mathbb{D}$ which does not meet $\widehat{S}$. But this is a contradiction since $\widehat{S}$ separates $\beta$ from $0$.
Therefore $\widehat{a} = \alpha$, and likewise $\widehat{b} = \alpha$. Thus the closure of $\widehat{S}$ is a circle meeting $\partial \mathbb{D}$ at $\alpha$ only, as desired. For the moreover part, we argue as in the proof of \cref{thm:lift} that if these components $\widehat{S}$ did not converge to $0$ in diameter as the diameter of $S$ is shrunk towards $0$, then they would accumulate on a non-degenerate continuum $K \subset \partial \mathbb{D}$, and we would obtain a contradiction by \cref{thm:Fatou} and \cref{thm:Riesz}. The details are left to the reader. \end{proof} \begin{lem} \label{lem:large lifts} Let $S$ be a straight line or round circle in $\mathbb{C}$, and let $\varepsilon > 0$. Then there are only finitely many lifts of components of $S \cap \Omega$ with diameter at least $\varepsilon$. \end{lem} \begin{proof} Suppose the claim is false, so that there exists $\varepsilon > 0$ and infinitely many lifts $\widehat{S}_n$, $n = 1,2,\ldots$, of components of $S \cap \Omega$ such that the diameter of $\widehat{S}_n$ is at least $\varepsilon$ for each $n$. These lifts accumulate on a non-degenerate continuum $K \subset \overline{\mathbb{D}}$. If $K \cap \mathbb{D} \neq \emptyset$, then let $\widehat{z} \in K \cap \mathbb{D}$ and let $\widehat{V}$ be a neighborhood of $\widehat{z}$ which maps one-to-one under $\varphi$ to a small round disk $V \subset \Omega$. Then $\widehat{V}$ meets infinitely many of the lifts $\widehat{S}_n$. On the other hand, because $V$ is a round disk contained in $\Omega$ and $S$ is a round circle or straight line, it follows that $V$ can only meet one component of $S \cap \Omega$. This is a contradiction since $\varphi$ is one-to-one on $\widehat{V}$. Suppose then that $K \subset \partial \mathbb{D}$. Then there exists by \cref{thm:Fatou} a set $E$ of positive measure in the interior of $K$ so that for each $\alpha \in E$, the radial limit $\lim_{r \to 1^-} \varphi(r\alpha)$ exists. 
If there is a single component $S'$ of $S \cap \Omega$ such that $\varphi(\widehat{S}_n) = S'$ for infinitely many $n$, then it is clear that the radial limit of $\varphi$ at each $\alpha \in E$ must belong to $\overline{S'} \cap \partial \Omega$, which contains at most two points. But this contradicts \cref{thm:Riesz}. Therefore we may assume that the components $\varphi(\widehat{S}_n)$ are all distinct, which means their diameters must converge to $0$. By passing to a subsequence if necessary, we may assume that the components $\varphi(\widehat{S}_n)$ converge to a single point $a \in S \cap \partial \Omega$. Then it is clear that the radial limit of $\varphi$ at each $\alpha \in E$ must equal $a$, again a contradiction by \cref{thm:Riesz}. \end{proof} Recall that the function $\overline{\varphi}$ is not necessarily continuous at points $\alpha \in \partial \mathbb{D}$ where it is defined. However, the next result shows that the restriction of $\overline{\varphi}$ to the region in between two lifted paths with the same endpoints is continuous. Given a continuum $X$ in $\mathbb{C}$, the \emph{topological hull} of $X$, denoted $\operatorname{hull}(X)$, is the smallest simply connected continuum in $\mathbb{C}$ containing $X$. Equivalently, $\operatorname{hull}(X)$ is equal to $\mathbb{C} \smallsetminus U$, where $U$ is the unbounded component of $\mathbb{C} \smallsetminus X$. \begin{lem} \label{lem:hull continuous} Suppose $\widehat{\lambda}: [0,1] \to \overline{\mathbb{D}}$ is a path such that $\overline{\varphi} \circ \widehat{\lambda}$ is a path in $\overline{\Omega}$ (i.e.\ is continuous). Let $X$ be the union of $\widehat{\lambda}([0,1])$ with the two radial segments from the center $0$ of $\mathbb{D}$ to $\widehat{p} = \widehat{\lambda}(0)$ and to $\widehat{q} = \widehat{\lambda}(1)$, and let $\Delta = \operatorname{hull}(X)$. Then $\Delta$ is simply connected and locally connected, and the restriction $\overline{\varphi} {\upharpoonright}_\Delta$ is continuous on $\Delta$.
\end{lem} \begin{proof} Note that $\Delta$ is simply connected by definition, and it is straightforward to see that the topological hull of any locally connected continuum is locally connected. Since $\varphi$ is continuous and $\overline{\varphi} = \varphi$ in $\mathbb{D}$, it remains to prove that the restriction of $\overline{\varphi}$ to $\Delta$ is continuous at each point of $\Delta \cap \partial \mathbb{D}$. Let $\alpha \in \Delta \cap \partial \mathbb{D}$, and suppose for a contradiction that the restriction of $\overline{\varphi}$ to $\Delta$ is not continuous at $\alpha$. Then there exists $\varepsilon > 0$ and a sequence of points $\langle \widehat{w}_n \rangle_{n=1}^\infty$ in $\Delta$ such that $\widehat{w}_n \to \alpha$, but $|\overline{\varphi}(\alpha) - \varphi(\widehat{w}_n)| \geq \varepsilon$ for all $n$. Note that since $\Delta \cap \partial \mathbb{D} \subset \widehat{\lambda}([0,1])$ and $\overline{\varphi} \circ \widehat{\lambda}$ is continuous, we have that the restriction of $\overline{\varphi}$ to $\Delta \cap \partial \mathbb{D}$ is continuous. Hence, we may assume that $\widehat{w}_n \in \Delta \cap \mathbb{D}$ for each $n$. Moreover, since the restriction of $\overline{\varphi}$ to the radial segment from $0$ to $\alpha$ is continuous, we may also assume that no $\widehat{w}_n$ belongs to this segment. For each $n$, there is a lift $\widehat{C}_n$ of a component of $\Omega \cap \partial B(\overline{\varphi}(\alpha), \varepsilon)$ which separates $\widehat{w}_n$ from $\alpha$ in $\overline{\mathbb{D}}$, where $B(\overline{\varphi}(\alpha), \varepsilon)$ is the open disk centered at $\overline{\varphi}(\alpha)$ of radius $\varepsilon$.
By \cref{thm:crosscut}, this lift $\widehat{C}_n$ is either an arc with endpoints in $\partial \mathbb{D}$, or, in the case that $\overline{\varphi}(\alpha)$ is the only point of $\partial \Omega$ in the closed disk $\overline{B}(\overline{\varphi}(\alpha), \varepsilon)$, a circle with one point on $\partial \mathbb{D}$. By passing to a subsequence if necessary, we may assume that all of the $\widehat{C}_n$ are distinct, hence they are pairwise disjoint. According to \cref{lem:large lifts}, the diameters of the lifts $\widehat{C}_n$ converge to $0$ as $n \to \infty$. Let $X$ be as in the statement of the lemma, and for each $n$ choose a point $\widehat{z}_n \in X \cap \widehat{C}_n$. Then $\widehat{z}_n \to \alpha$. Suppose first that there is a subsequence $\widehat{z}_{n_k}$ of $\widehat{z}_n$ such that for each $k$, $\widehat{z}_{n_k}$ belongs to the radial segment from $0$ to $\widehat{p}$. It follows that $\alpha = \widehat{p}$. Since $\overline{\varphi}$ is continuous on this radial segment, we have $\varphi(\widehat{z}_{n_k}) \to \overline{\varphi}(\alpha)$ as $k \to \infty$. But $\varphi(\widehat{z}_n) \in \partial B(\overline{\varphi}(\alpha), \varepsilon)$ for each $n$, so this is a contradiction. Likewise, we encounter a contradiction if infinitely many of the points $\widehat{z}_n$ belong to the radial segment from $0$ to $\widehat{q}$. Thus we may assume that all of the points $\widehat{z}_n$ belong to the set $\widehat{\lambda}([0,1])$. Let $s_n \in [0,1]$ be such that $\widehat{z}_n = \widehat{\lambda}(s_n)$. By passing to a subsequence if necessary, we may assume that $s_n \to s_\infty$; since $\widehat{\lambda}$ is continuous and $\widehat{z}_n \to \alpha$, we have $\widehat{\lambda}(s_\infty) = \alpha$. Since $\overline{\varphi} \circ \widehat{\lambda}$ is continuous, it follows that $\overline{\varphi}(\widehat{z}_n) \to \overline{\varphi}(\alpha)$ as $n \to \infty$. But again this is a contradiction because $\varphi(\widehat{z}_n) \in \partial B(\overline{\varphi}(\alpha), \varepsilon)$ for each $n$.
\end{proof} In the next result, we characterize paths in $\overline{[\gamma]}$ in terms of lifts. \begin{thm} \label{thm:class closure lift} Let $\gamma$ be a path in $\Omega$ (e.p.e.)\ joining $p$ and $q$, and let $\widehat{\gamma}$ be a lift of $\gamma$ with endpoints $\widehat{p},\widehat{q} \in \overline{\mathbb{D}}$. If $\lambda \in \overline{[\gamma]}$, then there exists a lift $\widehat{\lambda}$ of $\lambda$ (to $\overline{\mathbb{D}}$) with the same endpoints $\widehat{p},\widehat{q}$. Conversely, if $\lambda: [0,1] \to \overline{\Omega}$ is a path joining $p$ and $q$ which has a lift $\widehat{\lambda}: [0,1] \to \overline{\mathbb{D}}$ with the same endpoints $\widehat{p},\widehat{q}$, then $\lambda \in \overline{[\gamma]}$. \end{thm} \begin{proof} Let $\lambda \in \overline{[\gamma]}$, and let $h$ be a homotopy such that $h_0 = \gamma$, $h_1 = \lambda$, and $h_t$ is a path in $\Omega$ (e.p.e.)\ joining $p$ and $q$ for each $t \in [0,1)$. For each $s \in [0,1]$, consider the path $t \mapsto h_t(s)$. Apply \cref{thm:lift} to obtain a lift $\widehat{h}_t(s)$ such that $\widehat{h}_0(s) = \widehat{\gamma}(s)$. Define $\widehat{\lambda}: [0,1] \to \overline{\mathbb{D}}$ by $\widehat{\lambda}(s) = \widehat{h}_1(s)$. By \cref{thm:Lindelof}, we have $\overline{\varphi} \circ \widehat{\lambda}(s) = \lambda(s)$ for all $s \in [0,1]$, and $\widehat{\lambda}(0) = \widehat{p}$ and $\widehat{\lambda}(1) = \widehat{q}$. It remains to prove that $\widehat{\lambda}$ is continuous. Let $s \in [0,1]$. If $\widehat{\lambda}(s) \in \mathbb{D}$, then $\widehat{\lambda}$ is continuous at $s$ by standard covering space theory. Suppose for a contradiction that $\widehat{\lambda}(s) \in \partial \mathbb{D}$ and $\widehat{\lambda}$ is not continuous at $s$. We proceed with an argument similar to the one given for \cref{thm:lift}. 
The sets $\widehat{h}([s-\frac{1}{n},s+\frac{1}{n}] \times [1-\frac{1}{n},1))$, $n=1,2,\ldots$, accumulate on a non-degenerate continuum $K \subset \partial \mathbb{D}$, which contains, by \cref{thm:Fatou}, a set $E$ of positive measure such that the radial limit $\lim_{r \to 1^-} \varphi(r\alpha)$ exists for each $\alpha \in E$. We can choose, for each $\alpha \in E$, a sequence $(s_n,t_n)$ converging to $(s,1)$ such that $\widehat{h}(s_n,t_n) = r_n \alpha$, with $r_n \to 1$. It follows that the radial limit $\lim_{r \to 1^-} \varphi(r\alpha)$ is equal to \[ \lim_{n \to \infty} \varphi(r_n \alpha) = \lim_{n \to \infty} \varphi \circ \widehat{h}(s_n,t_n) = \lim_{n \to \infty} h(s_n,t_n) = \lambda(s), \] contradicting \cref{thm:Riesz}. Therefore, $\widehat{\lambda}$ is a continuous lift of $\lambda$. Conversely, suppose $\lambda: [0,1] \to \overline{\Omega}$ is a path joining $p$ and $q$ which has a lift $\widehat{\lambda}: [0,1] \to \overline{\mathbb{D}}$ with the same endpoints $\widehat{p},\widehat{q}$ as $\widehat{\gamma}$. Let $\widehat{c}$ be the path in $\mathbb{D}$ (e.p.e.)\ such that $\widehat{c}(0) = \widehat{p}$, $\widehat{c}(\frac{1}{2}) = 0$, $\widehat{c}(1) = \widehat{q}$, and $\widehat{c}$ linearly parameterizes the straight segments in between these points. Let $c = \overline{\varphi} \circ \widehat{c}$. Let $X = \widehat{\lambda}([0,1]) \cup \widehat{c}([0,1])$, and let $\Delta = \operatorname{hull}(X)$. Since $\Delta$ is simply connected, it follows that there is a homotopy $\widehat{h}$ between $\widehat{c}$ and $\widehat{\lambda}$ within $\Delta$, such that $\widehat{h}_0 = \widehat{c}$, $\widehat{h}_1 = \widehat{\lambda}$, and $\widehat{h}_t$ is a path in $\mathbb{D}$ (e.p.e.)\ joining $\widehat{p}$ and $\widehat{q}$ for all $t \in [0,1)$.
Since $\overline{\varphi}$ is continuous on $\Delta$ by \cref{lem:hull continuous}, the composition $\overline{\varphi} \circ \widehat{h}$ is a homotopy between $c$ and $\lambda$ which establishes that $\lambda \in \overline{[c]}$. By the same reasoning, we can show that $\gamma \in [c]$. Therefore, $\lambda \in \overline{[\gamma]}$. \end{proof} We now have the machinery in place to conclude that the class $\overline{[\lambda^\gamma_{[s_1,s_2]}]}$ described in the definition of a locally shortest path is well-defined. \begin{cor} \label{cor:locally shortest well-defined} Let $\gamma$ be a path in $\Omega$ (e.p.e.)\ joining $p$ and $q$, and let $\widehat{\gamma}$ be a lift of $\gamma$ with endpoints $\widehat{p},\widehat{q} \in \overline{\mathbb{D}}$. Let $\lambda \in \overline{[\gamma]}$, and let $\widehat{\lambda}$ be a lift of $\lambda$ (to $\overline{\mathbb{D}}$) with the same endpoints $\widehat{p},\widehat{q}$. Let $0 < s_1 < s_2 < 1$. A path $\rho$ belongs to $\overline{[\lambda^\gamma_{[s_1,s_2]}]}$ if and only if there is a lift $\widehat{\rho}$ of $\rho$ (to $\overline{\mathbb{D}}$) with endpoints $\widehat{\lambda}(s_1),\widehat{\lambda}(s_2)$. In particular, the definition of the class $\overline{[\lambda^\gamma_{[s_1,s_2]}]}$ is independent of the choice of homotopy $h$ between $\gamma$ and $\lambda$. \end{cor} \section{Proof of \cref{thm:main1}} \label{sec:proof main1} Let $\Omega \subset \mathbb{C}$ be a connected open set, $p,q \in \overline{\Omega}$, and let $\gamma$ be a path in $\Omega$ (e.p.e.)\ joining $p$ and $q$. We may assume that either $p \neq q$, or $\gamma$ is a non-trivial loop, so that $\overline{[\gamma]}$ contains no constant path, since otherwise the constant path is obviously the unique efficient path. Let $D_\Omega$ be a large round disk in $\mathbb{C}$ which contains the entire path $\gamma([0,1])$. We may assume, without loss of generality, that $\Omega$ is contained in $D_\Omega$, hence in particular is a bounded subset of $\mathbb{C}$.
Indeed, any efficient path in $\overline{[\gamma]}$ with respect to $\Omega$ also belongs to $\overline{[\gamma]}$ with respect to $\Omega \cap D_\Omega$, and vice versa. The same goes for the other notions of shortest path used in this paper. Hence, we assume $\Omega \subset D_\Omega$ for the remainder of this paper. Let $\varphi: \mathbb{D} \to \Omega$ be an analytic covering map, and let $\overline{\varphi}$ denote the extension of $\varphi$ to those points in $\partial \mathbb{D}$ where the radial limit is defined, as in \cref{sec:covering maps}. Choose any lift $\widehat{\gamma}: [0,1] \to \overline{\mathbb{D}}$ of $\gamma$ under $\overline{\varphi}$ (see \cref{thm:lift}(1)). So $\widehat{\gamma}$ is a path in $\mathbb{D}$ (e.p.e.)\ and $\overline{\varphi} \circ \widehat{\gamma} = \gamma$. Let $\widehat{p} = \widehat{\gamma}(0) \in \overline{\mathbb{D}}$ and $\widehat{q} = \widehat{\gamma}(1) \in \overline{\mathbb{D}}$. Since $\overline{[\gamma]}$ does not contain a constant path, we have $\widehat{p} \neq \widehat{q}$ (cf.\ \cref{thm:class closure lift}). In \cref{sec:sequence of approximations} and \cref{sec:convergence} below, we will establish the existence of an efficient path in $\overline{[\gamma]}$ via a recursive construction in which we repeatedly replace subpaths by straight line segments, when doing so does not change the homotopy class. We begin with some preliminary results in \cref{sec:lifts of lines}. \subsection{Lifts of lines} \label{sec:lifts of lines} Throughout this paper, when we use the word \emph{line} we mean a straight line in $\mathbb{C}$. In this subsection, we consider lines $L$ in $\mathbb{C}$ which intersect $\Omega$, and lifts of closures of components of $L \cap \Omega$ under $\overline{\varphi}$. By abuse of terminology, any such lift will be called a \emph{lift of $L$}.
For a given line $L$, the set $L \cap \Omega$ has at most countably many components, and each of these components has at most countably many lifts, each of which is, by \cref{cor:line lift}, an arc in $\overline{\mathbb{D}}$ whose (distinct) endpoints are in $\partial \mathbb{D}$, and which is otherwise contained in $\mathbb{D}$. Observe that if $\widehat{L}_1$ and $\widehat{L}_2$ are distinct lifts of lines, then $\widehat{L}_1 \cap \widehat{L}_2$ contains at most one point; moreover, if $\widehat{L}_1 \cap \widehat{L}_2 = \{\widehat{z}\}$ for some $\widehat{z} \in \mathbb{D}$, then $\widehat{L}_1$ and $\widehat{L}_2$ cross transversally at $\widehat{z}$. Our construction of an efficient path later in this section is based on the following reformulation of the definition of an efficient path. \begin{prop} \label{prop:efficient characterization} Let $\lambda \in \overline{[\gamma]}$, and let $\widehat{\lambda}$ be a lift of $\lambda$ (under $\overline{\varphi}$) joining $\widehat{p}$ to $\widehat{q}$. Then $\lambda$ is an efficient path in $\overline{[\gamma]}$ if and only if for any lift $\widehat{L}$ of a line intersecting $\Omega$, the set $\widehat{\lambda}^{-1}(\widehat{L})$ is connected (possibly empty). \end{prop} We next make a detailed study of lifts of lines, focusing on whether or not they separate $\widehat{p}$ from $\widehat{q}$ in $\overline{\mathbb{D}}$. \begin{defn*} Let $L$ be a line which intersects $\Omega$ and which does not contain $p$ or $q$, and let $\widehat{L}$ be a lift of $L$. \begin{itemize} \item We call $\widehat{L}$ a \emph{separating lift} if it separates $\widehat{p}$ from $\widehat{q}$ in $\overline{\mathbb{D}}$. The component of $\mathbb{D} \smallsetminus \widehat{L}$ whose closure contains $\widehat{p}$ is called the \emph{$\widehat{p}$-side} of $\widehat{L}$, and the component of $\mathbb{D} \smallsetminus \widehat{L}$ whose closure contains $\widehat{q}$ is called the \emph{$\widehat{q}$-side} of $\widehat{L}$.
\item We call $\widehat{L}$ a \emph{non-separating lift} if it does not separate $\widehat{p}$ from $\widehat{q}$ in $\overline{\mathbb{D}}$. The component of $\mathbb{D} \smallsetminus \widehat{L}$ whose closure does not contain $\widehat{p},\widehat{q}$ is called the \emph{shadow} of $\widehat{L}$, denoted $\mathrm{Sh}(\widehat{L})$. \end{itemize} \end{defn*} Given two lines $L_1,L_2$ which intersect $\Omega$, the \emph{distance} between $L_1$ and $L_2$ is the Hausdorff distance between $L_1 \cap \overline{D_\Omega}$ and $L_2 \cap \overline{D_\Omega}$ (recall that $D_\Omega$ is a fixed large disk in $\mathbb{C}$ containing $\Omega$); that is, the infimum of all $\delta > 0$ such that each point of $L_1 \cap \overline{D_\Omega}$ is within $\delta$ of a point in $L_2 \cap \overline{D_\Omega}$, and vice versa. \begin{defn*} \begin{itemize} \item Let $\widehat{V} \subset \mathbb{D}$ be an open set which maps one-to-one under $\varphi$ to an open set $V \subset \Omega$, and let $L$ be a line which intersects $V$. Let $\varepsilon > 0$, and assume $\varepsilon$ is small enough so that every line $L'$ which is within distance $\varepsilon$ of $L$ must also intersect $V$. The family of all lifts $\widehat{L}'$, of such lines $L'$, which intersect $\widehat{V}$ will be called a \emph{basic open set of lifts of lines}, and denoted $\widehat{\mathcal{N}}(\widehat{L},\widehat{V},\varepsilon)$. \item Let $\widehat{z} \in \mathbb{D}$. We say $\widehat{z}$ is a \emph{stable point} if there exists a basic open set of lifts of lines $\widehat{\mathcal{N}}(\widehat{L},\widehat{V},\varepsilon)$, each element of which is non-separating and contains $\widehat{z}$ in its shadow. 
\end{itemize} \end{defn*} \begin{lem} \label{lem:two lifts} A point $\widehat{z} \in \mathbb{D}$ is a stable point if and only if there are two intersecting non-separating lifts $\widehat{L}_1,\widehat{L}_2$ of distinct lines $L_1,L_2$ such that $\widehat{z}$ is in the shadow of both $\widehat{L}_1$ and $\widehat{L}_2$. Furthermore, whenever $\widehat{L}_1,\widehat{L}_2$ are intersecting non-separating lifts of distinct lines, the point of intersection of $\widehat{L}_1$ and $\widehat{L}_2$ is also a stable point. \end{lem} \begin{proof} That any stable point has this property is immediate, since any basic open set of lifts of lines clearly contains pairs of distinct intersecting lifts. Conversely, suppose $\widehat{L}_1,\widehat{L}_2$ are intersecting non-separating lifts of distinct lines $L_1 \supset \varphi(\widehat{L}_1), L_2 \supset \varphi(\widehat{L}_2)$, and let $\widehat{z}$ be any point in the shadow of both $\widehat{L}_1$ and $\widehat{L}_2$. Let $\widehat{w}$ be the point of intersection of $\widehat{L}_1$ and $\widehat{L}_2$, and let $\widehat{V}$ be a neighborhood of $\widehat{w}$ which maps one-to-one under $\varphi$ to a round disk $V$ centered at $w = \varphi(\widehat{w})$. \begin{figure} \caption{The configuration of lines described in the proof of \cref{lem:two lifts}.} \label{fig:two lifts} \end{figure} Consider a line $L$ not containing $w$ which intersects $L_1 \cap V$ and $L_2 \cap V$, and such that the lift $\widehat{L}$ of $L$ which intersects $\widehat{V}$ does not contain any point in the intersection of the shadows of $\widehat{L}_1$ and of $\widehat{L}_2$. Let $\varepsilon > 0$ be small enough so that any line $L'$ within distance $\varepsilon$ of $L$ has these same properties.
Let $\widehat{L}'$ be an arbitrary element of the basic open set of lifts of lines $\widehat{\mathcal{N}}(\widehat{L},\widehat{V},\varepsilon)$; that is, $\widehat{L}'$ is the lift of a line $L'$ within distance $\varepsilon$ of $L$ such that $\widehat{L}' \cap \widehat{V} \neq \emptyset$. Since $\widehat{L}'$ intersects $\widehat{L}_1$ and $\widehat{L}_2$ inside $\widehat{V}$, it cannot cross them again, and so one endpoint of $\widehat{L}'$ is in the closure of $\mathrm{Sh}(\widehat{L}_1) \smallsetminus \mathrm{Sh}(\widehat{L}_2)$ and the other is in the closure of $\mathrm{Sh}(\widehat{L}_2) \smallsetminus \mathrm{Sh}(\widehat{L}_1)$ (see \cref{fig:two lifts}). It follows that $\widehat{L}'$ is a non-separating lift and $\mathrm{Sh}(\widehat{L}')$ contains $\mathrm{Sh}(\widehat{L}_1) \cap \mathrm{Sh}(\widehat{L}_2)$ (in particular, it contains the point $\widehat{z}$), as well as the point $\widehat{w}$. Thus $\widehat{z}$ and $\widehat{w}$ are both stable points. \end{proof} \begin{lem} \label{lem:stable generic} The set of stable points is a dense open subset of $\mathbb{D}$. Moreover, except for a countable set of lines, every line $L$ has the property that for each lift $\widehat{L}$ of $L$, the set of stable points in $\widehat{L}$ is a dense open subset of $\widehat{L}$. \end{lem} \begin{proof} It follows immediately from \cref{lem:two lifts} that the set of stable points is open in $\mathbb{D}$. Now let $\widehat{z} \in \mathbb{D}$, and let $\widehat{V}$ be a neighborhood of $\widehat{z}$. Suppose $\widehat{z}$ is not a stable point. By shrinking $\widehat{V}$, we may assume that $\widehat{V}$ maps one-to-one under $\varphi$ to a disk $V$ centered at $z = \varphi(\widehat{z})$. For a given $\theta \in \mathbb{R}$, let $L(z,\theta)$ denote the straight line through $z$ making angle $\theta$ with the positive real axis. Let $\widehat{L}(\widehat{z},\theta)$ be the lift containing $\widehat{z}$ of the component of $L(z,\theta) \cap \Omega$ containing $z$.
Denote the endpoints of $\widehat{L}(\widehat{z},\theta)$ by $e^+(\widehat{z},\theta)$ and $e^-(\widehat{z},\theta)$, where $e^+(\widehat{z},\theta)$ corresponds to following the line $L(z,\theta)$ to $\partial \Omega$ in the direction $\theta$ from $z$, and $e^-(\widehat{z},\theta)$ corresponds to following the line $L(z,\theta)$ to $\partial \Omega$ in the direction $\theta + \pi$ from $z$. As $\theta$ increases, the line $L(z,\theta)$ revolves about the point $z$. Correspondingly, the lift $\widehat{L}(\widehat{z},\theta)$ ``revolves'' about $\widehat{z}$. The endpoints of $\widehat{L}(\widehat{z},\theta)$ move monotonically in the circle $\partial \mathbb{D}$, but not necessarily continuously, as $\theta$ increases. Since $\widehat{z}$ is not stable, by \cref{lem:two lifts} there can be at most one non-separating lift of a line which contains $\widehat{z}$. It follows that if we consider the line $L(z,\theta)$ and increase $\theta$ to revolve the line about $z$, at some moment the endpoint $e^+(\widehat{z},\theta)$ must cross or ``jump over'' $\widehat{p}$, and at that same moment $e^-(\widehat{z},\theta)$ must cross or ``jump over'' $\widehat{q}$. That is, there exists $\theta_0$ such that for all $\alpha,\beta$ sufficiently close to $\theta_0$ with $\alpha < \theta_0 < \beta$, we have that $\widehat{p}$ is on the ``left'' side of the arc $\widehat{L}(\widehat{z},\alpha)$ (thinking of this arc as oriented from $e^-(\widehat{z},\alpha)$ to $e^+(\widehat{z},\alpha)$) and $\widehat{q}$ is on the ``right'' side, and $\widehat{p}$ is on the ``right'' side of $\widehat{L}(\widehat{z},\beta)$ and $\widehat{q}$ is on the ``left'' side (see \cref{fig:stable dense}). Let $\widehat{w}$ be any point in $\widehat{V} \smallsetminus \widehat{L}(\widehat{z},\theta_0)$ and let $w = \varphi(\widehat{w})$. 
Let $\alpha,\beta$ be as above and sufficiently close to $\theta_0$ so that the lines $L(w,\alpha)$ and $L(w,\beta)$ do not intersect either of the lines $L(z,\alpha)$ or $L(z,\beta)$ inside $\overline{D_\Omega}$ (recall that $D_\Omega$ is a fixed large disk in $\mathbb{C}$ containing $\Omega$). This means that the lifts $\widehat{L}(\widehat{w},\alpha)$ and $\widehat{L}(\widehat{w},\beta)$ do not cross either of the lifts $\widehat{L}(\widehat{z},\alpha)$ or $\widehat{L}(\widehat{z},\beta)$. Because of the locations of $\widehat{p}$ and $\widehat{q}$ with respect to the lifts $\widehat{L}(\widehat{z},\alpha)$ and $\widehat{L}(\widehat{z},\beta)$, it follows that $\widehat{L}(\widehat{w},\alpha)$ and $\widehat{L}(\widehat{w},\beta)$ are both non-separating lifts (see \cref{fig:stable dense}). Hence, by \cref{lem:two lifts}, $\widehat{w}$ is a stable point. Thus, the set of stable points is dense in $\mathbb{D}$. \begin{figure} \caption{The configuration of lines described in the proof of \cref{lem:stable generic}.} \label{fig:stable dense} \end{figure} For the second statement, we may apply the above argument at each $\widehat{z} \in \mathbb{D}$ to obtain a countable cover $\{\widehat{V}_i: i = 1,2,\ldots\}$ of $\mathbb{D}$ by open sets with the property that for each $i$ there is at most one lift of a line, $\widehat{L}_i$, such that every point of $\widehat{V}_i \smallsetminus \widehat{L}_i$ is stable. Consider the countable family of lines $\{\varphi(\widehat{L}_i): i = 1,2,\ldots\}$. Let $L$ be any line not in this family, let $\widehat{L}$ be any lift of $L$, and let $\widehat{z} \in \widehat{L}$. Choose $i$ so that $\widehat{z} \in \widehat{V}_i$. If $\widehat{z}$ is not stable, it must be the (unique) point of intersection of $\widehat{L}$ and $\widehat{L}_i$, hence every other point in $\widehat{L} \cap \widehat{V}_i$ is stable. Therefore the set of stable points in $\widehat{L}$ is a dense open subset of $\widehat{L}$.
\end{proof} We remark that since in the proof of \cref{lem:stable generic} the point $\widehat{w}$ could be chosen on either side of $\widehat{L}(\widehat{z},\theta_0)$, it follows from \cref{prop:efficient characterization} and the proof of \cref{lem:stable generic} that if $\widehat{\gamma}_0$ is a lift of an efficient path in $\overline{[\gamma]}$ with endpoints $\widehat{p}$ and $\widehat{q}$, then $\widehat{\gamma}_0([0,1]) \cap \mathbb{D}$ coincides with the set of non-stable points. This observation will not be used in the proof of \cref{thm:main1}. \subsection{Sequence of approximations of the efficient path} \label{sec:sequence of approximations} Let $\mathcal{L}$ be a countable family of distinct straight lines which intersect $\Omega$ and do not contain $p$ or $q$, which is dense in the sense of Hausdorff distance, and such that for each lift $\widehat{L}$ of a line $L \in \mathcal{L}$ the set of stable points in $\widehat{L}$ is a dense open subset of $\widehat{L}$ (this is possible by \cref{lem:stable generic}). Let $\widehat{\mathcal{L}}$ denote the (countable) set of all lifts of lines in $\mathcal{L}$. We enumerate the elements of this set: $\widehat{\mathcal{L}} = {\langle \widehat{L}_i \rangle}_{i=1}^\infty$. We construct a sequence of paths $\gamma_i$, $i \geq 1$, by recursion. To begin, let $\gamma_1 = \gamma$ and $\widehat{\gamma}_1 = \widehat{\gamma}$. Having defined $\gamma_i$ and its lift $\widehat{\gamma}_i$, we define $\gamma_{i+1}$ and $\widehat{\gamma}_{i+1}$ as follows: \begin{list}{\textbullet}{\leftmargin=2em \itemindent=0em} \item If $\widehat{\gamma}_i^{-1}(\widehat{L}_i)$ has cardinality $\leq 1$, then put $\gamma_{i+1} = \gamma_i$ and $\widehat{\gamma}_{i+1} = \widehat{\gamma}_i$. Otherwise, let $s_1$ and $s_2$ be the smallest and largest (respectively) $s \in [0,1]$ such that $\widehat{\gamma}_i(s) \in \widehat{L}_i$.
Let $\widehat{\gamma}_{i+1}$ be the path in $\mathbb{D}$ (e.p.e.)\ defined by $\widehat{\gamma}_{i+1}(s) = \widehat{\gamma}_i(s)$ for $s \notin [s_1,s_2]$, and $\widehat{\gamma}_{i+1} {\upharpoonright}_{[s_1,s_2]}$ parameterizes the subarc of $\widehat{L}_i$ with endpoints $\widehat{\gamma}_i(s_1)$ and $\widehat{\gamma}_i(s_2)$ (or $\widehat{\gamma}_{i+1} {\upharpoonright}_{[s_1,s_2]}$ is constantly equal to $\widehat{w}$ if $\widehat{\gamma}_i(s_1) = \widehat{\gamma}_i(s_2) = \widehat{w}$). Let $\gamma_{i+1} = \overline{\varphi} \circ \widehat{\gamma}_{i+1}$. \end{list} \begin{lem} \label{lem:connected intersection} Let $\widehat{L}$ be a lift of a line $L$. If $i$ is such that $\widehat{\gamma}_i^{-1}(\widehat{L}) \subset [0,1]$ is connected (respectively, empty), then $\widehat{\gamma}_j^{-1}(\widehat{L})$ is connected (respectively, empty) for all $j \geq i$. In particular, for all $j > i \geq 1$, $\widehat{\gamma}_j^{-1}(\widehat{L}_i)$ is connected (or empty). \end{lem} \begin{proof} Let $\widehat{L}$ be a lift of a line $L$, and fix $i$. We will argue that if $\widehat{\gamma}_i^{-1}(\widehat{L}) \subset [0,1]$ is connected (respectively, empty), then $\widehat{\gamma}_{i+1}^{-1}(\widehat{L})$ is connected (respectively, empty). The claim then follows by induction. To this end, given the definition of $\widehat{\gamma}_{i+1}$, clearly we may assume $\widehat{L} \neq \widehat{L}_i$, so that either $\widehat{L} \cap \widehat{L}_i \cap \mathbb{D} = \emptyset$ or $\widehat{L}$ and $\widehat{L}_i$ cross transversally in $\mathbb{D}$. Assume first that $\widehat{\gamma}_i([0,1]) \cap \widehat{L} = \emptyset$. This means in particular that $\widehat{p},\widehat{q} \notin \widehat{L}$, $\widehat{L}$ is a non-separating lift, and $\widehat{\gamma}_i([0,1])$ is disjoint from the shadow of $\widehat{L}$. If $\widehat{L}_i$ does not meet $\widehat{L}$ in $\mathbb{D}$, then clearly $\widehat{\gamma}_{i+1}([0,1]) \cap \widehat{L} = \emptyset$ as well. 
If $\widehat{L}_i$ meets $\widehat{L}$ in $\mathbb{D}$, then these two lifts cross transversally. If $\widehat{\gamma}_{i+1}([0,1])$ meets $\widehat{L}$, we must have that $\widehat{\gamma}_{i+1}([s_1,s_2]) \cap \widehat{L} \neq \emptyset$ where $s_1 = \min\{s \in [0,1]: \widehat{\gamma}_i(s) \in \widehat{L}_i\}$ and $s_2 = \max\{s \in [0,1]: \widehat{\gamma}_i(s) \in \widehat{L}_i\}$, since $\widehat{\gamma}_{i+1} = \widehat{\gamma}_i$ outside of $[s_1,s_2]$. Since $\widehat{\gamma}_{i+1}([s_1,s_2])$ parameterizes a subarc of $\widehat{L}_i$, which crosses $\widehat{L}$ transversally, it follows that either $\widehat{\gamma}_{i+1}(s_1)$ or $\widehat{\gamma}_{i+1}(s_2)$ is in the shadow of $\widehat{L}$. But $\widehat{\gamma}_{i+1}(s_1) = \widehat{\gamma}_i(s_1)$ and $\widehat{\gamma}_{i+1}(s_2) = \widehat{\gamma}_i(s_2)$, so this contradicts the assumption that $\widehat{\gamma}_i([0,1]) \cap \widehat{L} = \emptyset$. Thus $\widehat{\gamma}_{i+1}([0,1]) \cap \widehat{L} = \emptyset$, as desired. Now assume that $\widehat{\gamma}_i([0,1]) \cap \widehat{L} \neq \emptyset$, and $\widehat{\gamma}_i^{-1}(\widehat{L})$ is an interval $[a,b]$ (here we allow the possibility that $a = b$, which means $[a,b] = \{a\}$). Suppose $\widehat{\gamma}_{i+1}$ is obtained from $\widehat{\gamma}_i$ by replacing the subpath of $\widehat{\gamma}_i$ from $s_1$ to $s_2$ ($0 < s_1 < s_2 < 1$) with a segment of $\widehat{L}_i$. 
We leave it to the reader to confirm the following claims: \begin{itemize} \item If $s_2 \leq a$, then $\widehat{\gamma}_{i+1}^{-1}(\widehat{L}) = [a,b]$; \item If $s_1 \leq a \leq s_2 \leq b$, then $\widehat{\gamma}_{i+1}^{-1}(\widehat{L}) = [s_2,b]$; \item If $s_1 \leq a \leq b \leq s_2$, then $\widehat{\gamma}_{i+1}^{-1}(\widehat{L})$ is either empty (if the section $\widehat{\gamma}_{i+1}([s_1,s_2])$ of $\widehat{L}_i$ does not meet $\widehat{L}$), or consists of the single point where the section $\widehat{\gamma}_{i+1}([s_1,s_2])$ of $\widehat{L}_i$ crosses $\widehat{L}$; \item It is not possible that $a \leq s_1 < s_2 \leq b$, since $\widehat{L} \neq \widehat{L}_i$ (unless $\widehat{\gamma}_i$ is constant on $[s_1,s_2]$, in which case $\widehat{\gamma}_{i+1} = \widehat{\gamma}_i$); \item If $a \leq s_1 \leq b \leq s_2$, then $\widehat{\gamma}_{i+1}^{-1}(\widehat{L}) = [a,s_1]$; and \item If $b \leq s_1$, then $\widehat{\gamma}_{i+1}^{-1}(\widehat{L}) = [a,b]$. \end{itemize} In all cases, $\widehat{\gamma}_{i+1}^{-1}(\widehat{L})$ is connected (possibly empty), as desired. \end{proof} \subsection{Convergence to the efficient path} \label{sec:convergence} For each $n = 1,2,\ldots$, we choose sets $\mathcal{G}_n$ of lines in $\mathcal{L}$ with the following properties: \begin{itemize} \item $\mathcal{G}_n$ is a finite subset of $\mathcal{L}$ for each $n$; \item $\mathcal{G}_n \subset \mathcal{G}_{n+1}$ for each $n$; \item each component of $\Omega \smallsetminus \bigcup \mathcal{G}_n$ has diameter less than $\frac{1}{n}$; and \item if $G_1,G_2 \in \mathcal{G}_n$ are distinct lines and $\widehat{G}_1$ and $\widehat{G}_2$ are separating lifts of $G_1$ and $G_2$, respectively, then $\widehat{G}_1$ and $\widehat{G}_2$ are either disjoint or meet in a stable point. \end{itemize} Let $\widehat{\mathcal{G}}_n$ be the set of all lifts of lines in $\mathcal{G}_n$. 
By \cref{lem:large lifts} only finitely many of the elements of $\widehat{\mathcal{G}}_n$ intersect the original lifted path $\widehat{\gamma}_1$, and among them are those that are separating lifts. We denote the set of all separating lifts in $\widehat{\mathcal{G}}_n$ by $\widehat{\mathcal{G}}_n^s$. We define an order $<$ on $\widehat{\mathcal{G}}_n^s$ as follows. Let $\widehat{G}_1,\widehat{G}_2 \in \widehat{\mathcal{G}}_n^s$ be distinct separating lifts. First suppose $\widehat{G}_1 \cap \widehat{G}_2 = \emptyset$. Then $\widehat{G}_1 < \widehat{G}_2$ if $\widehat{G}_2$ is on the $\widehat{q}$-side of $\widehat{G}_1$; otherwise $\widehat{G}_2 < \widehat{G}_1$. Now suppose $\widehat{G}_1 \cap \widehat{G}_2 = \{\widehat{z}\}$ for some stable point $\widehat{z} \in \mathbb{D}$. Since $\widehat{z}$ is stable, it follows from density of the family of lines $\mathcal{L}$ that there exists $\widehat{W} \in \widehat{\mathcal{L}}$ such that $\widehat{W}$ is non-separating and contains $\widehat{z}$ in its shadow. Then $\widehat{G}_1 < \widehat{G}_2$ if $\widehat{G}_2 \smallsetminus \overline{\mathrm{Sh}(\widehat{W})}$ is on the $\widehat{q}$-side of $\widehat{G}_1$; otherwise $\widehat{G}_2 < \widehat{G}_1$. It is straightforward to see that this relation $<$ is a well-defined linear order on $\widehat{\mathcal{G}}_n^s$. For each $n = 1,2,\ldots$, choose a finite set $\widehat{\mathcal{W}}_n$ of lifts of lines from $\widehat{\mathcal{L}}$ such that for each pair of distinct intersecting separating lifts $\widehat{G}_1,\widehat{G}_2 \in \widehat{\mathcal{G}}_n^s$ there is a lift $\widehat{W} \in \widehat{\mathcal{W}}_n$ witnessing that $\widehat{G}_1 < \widehat{G}_2$ or that $\widehat{G}_2 < \widehat{G}_1$ (i.e.\ such that $\mathrm{Sh}(\widehat{W})$ contains $\widehat{G}_1 \cap \widehat{G}_2$). 
Let $i(n)$ be large enough so that $\widehat{\mathcal{W}}_n \subset \{\widehat{L}_i: i = 1,\ldots,i(n)-1\}$, and also all of the (finitely many) elements of $\widehat{\mathcal{G}}_n$ which intersect the original lifted path $\widehat{\gamma}_1$ are contained in $\{\widehat{L}_i: i = 1,\ldots,i(n)-1\}$. Take an arbitrary $n$, and let $\widehat{G}_1,\ldots,\widehat{G}_m$ enumerate the elements of $\widehat{\mathcal{G}}_n^s$, listed in $<$ order. For any $i \geq i(n)$, by \cref{lem:connected intersection} the lifted path $\widehat{\gamma}_i$ does not cross any of the non-separating lifts in $\widehat{\mathcal{G}}_n \cup \widehat{\mathcal{W}}_n$, and has connected intersection with all the lifts that it meets, in particular with the elements of $\widehat{\mathcal{G}}_n^s$. Thus there are $m+1$ components $\widehat{R}_1,\ldots,\widehat{R}_{m+1}$ of $\mathbb{D} \smallsetminus \bigcup (\widehat{\mathcal{G}}_n \cup \widehat{\mathcal{W}}_n)$ such that for any $i \geq i(n)$, the lifted path $\widehat{\gamma}_i$ starts at $\widehat{p}$ then runs in $\widehat{R}_1$, until it crosses $\widehat{G}_1$ and enters $\widehat{R}_2$, then crosses $\widehat{G}_2$ and enters $\widehat{R}_3$, and so on, until it crosses $\widehat{G}_m$ to enter $\widehat{R}_{m+1}$ and ends at $\widehat{q}$ (see \cref{fig:regions}). \begin{figure} \caption{An illustration depicting a possible configuration of the lifts of lines $\widehat{G}_j$ and the regions $\widehat{R}_j$.} \label{fig:regions} \end{figure} For each of these regions $\widehat{R}_j$, $\varphi(\widehat{R}_j)$ is contained in a component of $\Omega \smallsetminus \bigcup \mathcal{G}_n$, hence has diameter less than $\frac{1}{n}$.
Therefore, provided we parameterize the path $\gamma_i$ so that if $\widehat{\gamma}_{i(n)}(t) \in \widehat{R}_j$ then $\widehat{\gamma}_i(t) \in \widehat{R}_j$ for all $i \geq i(n)$, we have that $d_{\mathrm{sup}}(\gamma_{i_1},\gamma_{i_2}) < \frac{1}{n}$ for all $i_1,i_2 \geq i(n)$; in fact there is a homotopy between $\gamma_{i_1}$ and $\gamma_{i_2}$ which moves no point more than $\frac{1}{n}$, obtained by moving $\widehat{\gamma}_{i_1}(t)$ to $\widehat{\gamma}_{i_2}(t)$ within the closure of a region $\widehat{R}_j$ which contains $\widehat{\gamma}_{i(n)}(t)$. We remark that according to \cref{prop:efficient characterization}, if $\lambda \in \overline{[\gamma]}$ is any efficient path, it must follow this same pattern traversing through the (closures of the) regions $\widehat{R}_1,\ldots,\widehat{R}_{m+1}$ in the same monotone order. Therefore, any two such efficient paths, if parameterized appropriately, are within $\frac{1}{n}$ of each other. Since $n$ is arbitrary, this establishes that there can be at most one efficient path in $\overline{[\gamma]}$ (up to reparameterization). It remains to confirm that our construction above yields an efficient path in $\overline{[\gamma]}$. Since $n$ was arbitrary, we now have that $\langle \gamma_i \rangle_{i=1}^\infty$ is a Cauchy sequence of paths, hence it converges to a path $\gamma_\infty$ from $p$ to $q$. Moreover, by putting together the (smaller and smaller) homotopies between the paths in this sequence, we have that $\gamma_\infty \in \overline{[\gamma]}$. \begin{lem} \label{lem:reduced} $\gamma_\infty$ is an efficient path in $\overline{[\gamma]}$. 
\end{lem} \begin{proof} This follows from \cref{prop:efficient characterization}: if $L$ is any line intersecting $\Omega$ which has a lift $\widehat{L}$ such that $\widehat{\gamma}_\infty([0,1]) \cap \widehat{L}$ is not connected, then by density of the family $\mathcal{L}$ it is easy to see there must be a line $L' \in \mathcal{L}$ close to $L$, with a lift $\widehat{L}'$ close to $\widehat{L}$, such that $\widehat{\gamma}_\infty([0,1]) \cap \widehat{L}'$ is also not connected. But if $\widehat{L}' = \widehat{L}_i$ in the enumeration of $\widehat{\mathcal{L}}$, then for all $j > i$, $\widehat{\gamma}_j([0,1]) \cap \widehat{L}'$ is connected according to \cref{lem:connected intersection}. Since $\widehat{\gamma}_i \to \widehat{\gamma}_\infty$, the same must be true for $\widehat{\gamma}_\infty$, a contradiction. \end{proof} This completes the proof of \cref{thm:main1}. \section{Proof of \cref{thm:main2}} \label{sec:proof main2} Having established the existence of a unique efficient path $\lambda$ in $\overline{[\gamma]}$, we now prove \cref{thm:main2} using \cref{thm:main1} and its proof. In particular, we observe that in the construction of the efficient path in the proof of \cref{thm:main1}, we started with a path in $[\gamma]$ (such as $\gamma$ itself) and constructed a sequence of paths converging to the efficient path, where each path in the sequence is obtained from the previous one by replacing a subpath with a straight segment. Such a replacement will always decrease the $\mathsf{len}$ length of the path and also, when finite, the Euclidean length. It is clear that any path in $\overline{[\gamma]}$ of smallest $\mathsf{len}$ length (or Euclidean length if finite) must be efficient, because replacing a subpath with a straight segment decreases the length. Thus we have (3) $\Rightarrow$ (2) and (4) $\Rightarrow$ (2). 
For (2) $\Rightarrow$ (3), suppose for a contradiction that there is a path $\rho \in \overline{[\gamma]}$ with $\mathsf{len}$ length strictly smaller than that of the efficient path $\lambda \in \overline{[\gamma]}$. By continuity of the function $\mathsf{len}$, it follows that there exists $\rho' \in [\gamma]$ whose $\mathsf{len}$ length is also strictly smaller than that of $\lambda$. If we follow the construction in the proof of \cref{thm:main1} starting with $\rho'$ instead of $\gamma$, we would then obtain an efficient path in $\overline{[\gamma]}$ of $\mathsf{len}$ length strictly smaller than that of $\lambda$, contradicting the uniqueness of the efficient path. For (2) $\Rightarrow$ (4), suppose for a contradiction that there is a path $\rho \in \overline{[\gamma]}$ with finite Euclidean length $E$ which is strictly smaller than the Euclidean length of the efficient path $\lambda \in \overline{[\gamma]}$. This means there exist $\varepsilon > 0$ and values $0 = s_0 < s_1 < \cdots < s_N = 1$ such that $\sum_{i=1}^N |\lambda(s_i) - \lambda(s_{i-1})| > E + \varepsilon$. Let $\widehat{\gamma}$ be a lift of $\gamma$ with endpoints $\widehat{p},\widehat{q} \in \overline{\mathbb{D}}$, and let $\widehat{\lambda}$ and $\widehat{\rho}$ be lifts of $\lambda$ and $\rho$ with the same endpoints $\widehat{p},\widehat{q}$. We will replace sections of the path $\rho$ with straight line segments to obtain a new path $\rho' \in \overline{[\gamma]}$ which goes within $\frac{\varepsilon}{2N}$ of each of the points $\lambda(s_i)$, in order. To obtain $\rho'$, we proceed as follows. For each $i = 1,\ldots,N-1$, consider the point $\widehat{\lambda}(s_i) \in \overline{\mathbb{D}}$.
\begin{itemize} \item If $\widehat{\lambda}(s_i) \in \mathbb{D}$, by \cref{lem:stable generic} we can choose two stable points $\widehat{w}_1,\widehat{w}_2$ in a small neighborhood $\widehat{V}$ of $\widehat{\lambda}(s_i)$ which projects one-to-one under $\varphi$ to a small disk centered at $\lambda(s_i)$ of radius less than $\frac{\varepsilon}{2N}$, so that $\widehat{w}_1$ and $\widehat{w}_2$ are on opposite sides of $\widehat{\lambda}([0,1]) \cap \widehat{V}$. Let $\widehat{L}_1$ and $\widehat{L}_2$ be non-separating lifts of lines such that $\widehat{w}_1 \in \mathrm{Sh}(\widehat{L}_1)$ and $\widehat{w}_2 \in \mathrm{Sh}(\widehat{L}_2)$. By \cref{prop:efficient characterization}, $\widehat{\lambda}$ does not cross $\widehat{L}_1$ or $\widehat{L}_2$. It follows that if we replace the section of $\widehat{\rho}$ between its first and last intersections with $\widehat{L}_1$ with an arc in $\widehat{L}_1$, and likewise for $\widehat{L}_2$, then the resultant path must meet $\widehat{V}$. \item If $\widehat{\lambda}(s_i) \in \partial \mathbb{D}$, choose an arc $\widehat{A}$ such that $\overline{\varphi}(\widehat{A})$ has diameter less than $\frac{\varepsilon}{2N}$, with endpoints $\widehat{\lambda}(s_i)$ and a stable point $\widehat{w} \in \mathbb{D}$. Let $\widehat{L}$ be a lift of a line such that $\widehat{w} \in \mathrm{Sh}(\widehat{L})$. By \cref{prop:efficient characterization}, $\widehat{\lambda}$ does not cross $\widehat{L}$. It follows that if we replace the section of $\widehat{\rho}$ between its first and last intersections with $\widehat{L}$ with an arc in $\widehat{L}$, then the resultant path must meet $\widehat{A}$. \end{itemize} After making the finitely many replacements of subpaths of $\widehat{\rho}$ with segments of lifts of lines as described above, the resultant path $\widehat{\rho}'$ is such that $\rho' = \overline{\varphi} \circ \widehat{\rho}'$ is continuous, and $\rho' \in \overline{[\gamma]}$ by \cref{thm:class closure lift}.
Clearly the Euclidean length of $\rho'$ is not greater than $E$. On the other hand, by construction, there are values $0 = s_0' < s_1' < \cdots < s_N' = 1$ such that $|\rho'(s_i') - \lambda(s_i)| < \frac{\varepsilon}{2N}$ for each $i = 0,\ldots,N$. It follows that the Euclidean length of $\rho'$ is at least \begin{align*} \sum_{i=1}^N |\rho'(s_i') - \rho'(s_{i-1}')| &> \sum_{i=1}^N \left( |\lambda(s_i) - \lambda(s_{i-1})| - 2 \cdot \frac{\varepsilon}{2N} \right) \\ &= \left( \sum_{i=1}^N |\lambda(s_i) - \lambda(s_{i-1})| \right) - \varepsilon \\ &> (E + \varepsilon) - \varepsilon = E , \end{align*} a contradiction. Therefore, $\lambda$ has smallest Euclidean length among all paths in $\overline{[\gamma]}$. The proof that (1) $\Rightarrow$ (2) is straightforward and left to the reader. Finally, for (2) $\Rightarrow$ (1), let $0 < s_1 < s_2 < 1$ and consider the path $\lambda {\upharpoonright}_{[s_1,s_2]}$. We claim that there exist $s_1',s_2'$ such that $0 < s_1' \leq s_1 < s_2 \leq s_2' < 1$ and $\widehat{\lambda}(s_1')$ and $\widehat{\lambda}(s_2')$ can be joined by a path in $\overline{\mathbb{D}}$ which projects under $\overline{\varphi}$ to a path of finite Euclidean length. We rely on the fact that any two points $\widehat{w}_1,\widehat{w}_2 \in \mathbb{D}$ can be joined by a path in $\mathbb{D}$ which projects to a path of finite Euclidean length (e.g.\ a piecewise linear path). If $\widehat{\lambda}([0,s_1]) \cap \mathbb{D} \neq \emptyset$, then let $0 < s_1' \leq s_1$ be such that $\widehat{\lambda}(s_1') \in \mathbb{D}$, and let $\widehat{w}_1 = \widehat{\lambda}(s_1')$. If $\widehat{\lambda}([0,s_1]) \subset \partial \mathbb{D}$, we choose $s_1'$ and $\widehat{w}_1$ as follows. 
Note that in this case $\lambda(s_1)$ is not isolated in $\partial \Omega$, so we can apply \cref{thm:crosscut} to obtain a component $\widehat{C}_1$ of $\varphi^{-1}(\partial B(\lambda(s_1),\varepsilon))$, for some sufficiently small $\varepsilon > 0$, whose closure is an arc which separates $\widehat{\lambda}(s_1)$ from $\widehat{p}$ in $\overline{\mathbb{D}}$. In particular, there exists $0 < s_1' < s_1$ such that $\widehat{\lambda}(s_1')$ is in the closure of $\widehat{C}_1$. Let $\widehat{w}_1$ be any point in $\widehat{C}_1$. Thus, we have established that there exists $0 < s_1' \leq s_1$ and $\widehat{w}_1 \in \mathbb{D}$ such that either $\widehat{\lambda}(s_1') = \widehat{w}_1$ or $\widehat{\lambda}(s_1')$ can be joined to $\widehat{w}_1$ by a path in $\overline{\mathbb{D}}$ which projects to a circular arc. By the same reasoning, there exists $s_2 \leq s_2' < 1$ and $\widehat{w}_2 \in \mathbb{D}$ such that either $\widehat{\lambda}(s_2') = \widehat{w}_2$ or $\widehat{\lambda}(s_2')$ can be joined to $\widehat{w}_2$ by a path in $\overline{\mathbb{D}}$ which projects to a circular arc. It follows that $\widehat{\lambda}(s_1')$ can be joined to $\widehat{\lambda}(s_2')$ by a path in $\overline{\mathbb{D}}$ which projects to a path of finite Euclidean length. Therefore, by \cref{cor:locally shortest well-defined}, there exists a path of finite Euclidean length in $\overline{[\lambda^\gamma_{[s_1',s_2']}]}$. Since $\lambda$ is efficient, it follows from \cref{prop:efficient characterization} that $\lambda {\upharpoonright}_{[s_1',s_2']}$ is the efficient path in $\overline{[\lambda^\gamma_{[s_1',s_2']}]}$, hence has smallest Euclidean length in this class according to the implication (2) $\Rightarrow$ (4). In particular, $\lambda {\upharpoonright}_{[s_1',s_2']}$ has finite Euclidean length in light of the previous paragraph. This in turn implies that the subpath $\lambda {\upharpoonright}_{[s_1,s_2]}$ has finite Euclidean length. 
Again by \cref{prop:efficient characterization}, we have that $\lambda {\upharpoonright}_{[s_1,s_2]}$ is the efficient path in $\overline{[\lambda^\gamma_{[s_1,s_2]}]}$, hence has smallest Euclidean length in this class according to the implication (2) $\Rightarrow$ (4), as required. Therefore, $\lambda$ is locally shortest. \end{document}
\begin{document} \global\long\def\boldsymbol{\alpha}{\boldsymbol{\alpha}} \global\long\def\mathbf{K}{\mathbf{K}} \global\long\def\mathbf{C}{\mathbf{C}} \global\long\def\mathbf{I}{\mathbf{I}} \newcommand{{\,\mathrm{d}}}{{\,\mathrm{d}}} \global\long\def\mathbf{x}{\mathbf{x}} \global\long\def\mathbf{X}{\mathbf{X}} \global\long\def\arg\!\min{\arg\!\min} \global\long\def\arg\!\max{\arg\!\max} \global\long\def\boldsymbol{\beta}{\boldsymbol{\beta}} \global\long\def\mathbf{x^{\prime}}{\mathbf{x^{\prime}}} \title{\LARGE \textbf{ Flexible Expectile Regression in Reproducing Kernel Hilbert Spaces}} \author{Yi Yang \thanks{Department of Mathematics and Statistics, McGill University}, Teng Zhang \thanks{Princeton University. Yi and Teng are joint first authors.}, Hui Zou \thanks{Corresponding author, School of Statistics, University of Minnesota ([email protected])}} \date{\today} \maketitle \begin{abstract} The expectile, first introduced by Newey and Powell (1987) in the econometrics literature, has recently become increasingly popular in risk management and capital allocation for financial institutions due to its desirable properties such as coherence and elicitability. The current standard tool for expectile regression analysis is the multiple linear expectile regression proposed by Newey and Powell in 1987. The growing applications of expectile regression motivate us to develop a much more flexible nonparametric multiple expectile regression in a reproducing kernel Hilbert space. The resulting estimator, called KERE, has multiple advantages over the classical multiple linear expectile regression: it incorporates non-linearity, non-additivity and complex interactions in the final estimator. The kernel learning theory of KERE is established. We develop an efficient algorithm inspired by the majorization-minimization principle for solving the entire solution path of KERE. It is shown that the algorithm converges at least at a linear rate.
Extensive simulations are conducted to show the very competitive finite-sample performance of KERE. We further demonstrate the application of KERE by using personal computer price data. \end{abstract} \noindent {\bf Keywords:} Asymmetric least squares; Expectile regression; Reproducing kernel Hilbert space; MM principle. \section{Introduction} The \emph{expectile}, introduced by \citet{Asymmetric_Powell}, is becoming an increasingly popular tool in risk management and capital allocation for financial institutions. Let $Y$ be a random variable; the $\omega$-expectile of $Y$, denoted by $f_{\omega}$, is defined by \begin{equation} \omega=\frac{E\{|Y-f_{\omega}|I_{Y\leq f_{\omega}}\}}{E\{|Y-f_{\omega}|\}}, \qquad\omega\in(0,1).\label{eq:def1} \end{equation} In financial applications, the expectile has been widely used as a tool for efficient estimation of the expected shortfall (ES) through a one-to-one mapping between the two \citep{taylor2008estimating,hamidi2014dynamic,xie2014varying}. More recently, many researchers have started to advocate the use of the expectile as a favorable alternative to the two other commonly used risk measures -- Value at Risk (VaR) and ES -- due to its desirable properties such as \emph{coherence} and \emph{elicitability} \citep{kuan2009assessing, gneiting2011making,ziegel2014coherence}. VaR has been criticized mainly for two drawbacks: first, it does not reflect the magnitude of the extreme losses for the underlying risk, as it is only determined by the probability of such losses; second, VaR is not a coherent risk measure because it lacks the \emph{sub-additivity} property \citep{emmer2013best,embrechts2014academic}. Hence the risk of a merged portfolio could exceed the sum of the risks of its components, which contradicts the notion that risk can be reduced by diversification \citep{artzner1999coherent}. Unlike VaR, ES is coherent, and it takes into account the magnitude of the losses beyond the VaR.
However, a major problem with ES is that it cannot be reliably backtested, in the sense that competing forecasts of ES cannot be properly evaluated through comparison with realized observations. \citet{gneiting2011making} attributed this weakness to the fact that ES lacks \emph{elicitability}. \citet{ziegel2014coherence} further showed that the expectile is the only risk measure that is both coherent and elicitable. In applications we often need to estimate the conditional expectile of the response variable given a set of covariates. This is called expectile regression. Statisticians and econometricians pioneered the study of expectile regression. Theoretical properties of the multiple linear expectile regression were first studied in \citet{Asymmetric_Powell} and \citet{Asymmetric_Efron}. \citet{Asymmetric_Tong} studied a non-parametric estimator of conditional expectiles based on local linear polynomials with a one-dimensional covariate, and established the asymptotic properties of the estimator. A semiparametric expectile regression model relying on penalized splines was proposed by \citet{sobotka2012geoadditive}. \citet{yang2015nonparametric} adopted the gradient tree boosting algorithm for expectile regression. In this paper, we propose a flexible nonparametric expectile regression estimator constructed in a reproducing kernel Hilbert space (RKHS) \citep{wahba}. Our contributions in this article are twofold. First, we extend the parametric expectile model to a fully nonparametric multiple regression setting and develop the corresponding kernel learning theory. Second, we propose an efficient algorithm that adopts the majorization-minimization principle for computing the entire solution path of the kernel expectile regression. We provide a numerical convergence analysis for the algorithm. Moreover, we provide an accompanying R package that allows other researchers and practitioners to use the kernel expectile regression. The rest of the paper is organized as follows.
In Section 2 we present the kernel expectile regression and develop an asymptotic learning theory. Section 3 derives the fast algorithm for solving the solution paths of the kernel expectile regression. The numerical convergence of the algorithm is examined. In Section 4 we use simulation models to show the high prediction accuracy of the kernel expectile regression. We analyze the personal computer price data in Section 5. The technical proofs are relegated to an appendix. \section{Kernel Expectile Regression} \subsection{Methodology} \citet{Asymmetric_Powell} showed that the $\omega$-expectile $f_{\omega}$ of $Y$ has an equivalent definition given by \begin{equation} f_{\omega}=\underset{f}{\arg\min}E\{\phi_{\omega}(Y-f) \}, \end{equation} where \begin{eqnarray}\label{eqdef1} \phi_{\omega}(t)=\begin{cases} (1-\omega)t^{2} & t\leq 0,\\ \omega t^{2} & t>0. \end{cases} \end{eqnarray} Consequently, \citet{Asymmetric_Powell} showed that the $\omega$-expectile $f_{\omega}$ of $Y$ given the set of covariates $X=\mathbf{x}$, denoted by $f_{\omega}(\mathbf{x})$, can be defined as \begin{equation}\label{eqdef2} f_{\omega}(\mathbf{x})=\underset{f}{\arg\min}E\{\phi_{\omega}(Y-f)\mid X=\mathbf{x}\}. \end{equation} \citet{Asymmetric_Powell} developed the multiple linear expectile regression based on \eqref{eqdef2}. Given $n$ random observations $(\mathbf{x}_{1},y_{1}),\cdots,(\mathbf{x}_{n},y_{n})$ with $\mathbf{x}_{i}\in\mathbb{R}^{p}$ and $y_{i}\in\mathbb{R}$, \citet{Asymmetric_Powell} proposed the following formulation: \begin{equation} (\hat{\boldsymbol{\beta}},\hat{\beta}_0)=\underset{(\boldsymbol{\beta},\beta_0)}{\arg\min}\frac{1}{n}\sum_{i=1}^{n}\phi_{\omega}(y_{i}-\mathbf{x}_i^{\intercal}\boldsymbol{\beta}-\beta_0).\label{eq:optlinear} \end{equation} Then the estimated conditional $\omega$-expectile is $\mathbf{x}_i^{\intercal}\hat{\boldsymbol{\beta}}+\hat{\beta}_0.$ \citet{Asymmetric_Efron} proposed an efficient algorithm for computing \eqref{eq:optlinear}.
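As a quick numerical illustration of \eqref{eq:def1} and the loss $\phi_{\omega}$ in \eqref{eqdef1}, the following sketch (in Python; the paper's own software is an R package, and all function names here are ours) computes a sample $\omega$-expectile via the weighted-mean fixed point implied by the first-order condition of $\min_f \sum_i \phi_{\omega}(y_i - f)$:

```python
import numpy as np

def phi(t, omega):
    # Asymmetric squared loss phi_omega(t): (1 - omega) t^2 for t <= 0, omega t^2 for t > 0
    return np.where(t > 0, omega, 1 - omega) * t ** 2

def sample_expectile(y, omega, tol=1e-12, max_iter=1000):
    # First-order condition of min_f sum_i phi_omega(y_i - f): f is a weighted
    # mean of the sample, with weight omega on observations above f and
    # 1 - omega on those below; iterating this weighted mean converges to the
    # sample omega-expectile.
    f = float(np.mean(y))
    for _ in range(max_iter):
        w = np.where(y > f, omega, 1 - omega)
        f_new = float(np.sum(w * y) / np.sum(w))
        if abs(f_new - f) < tol:
            break
        f = f_new
    return f
```

For $\omega = 0.5$ the iteration returns the sample mean, and at convergence the ratio in \eqref{eq:def1}, evaluated empirically, equals $\omega$.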
The linear expectile estimator can be too restrictive in many real applications. Researchers have also considered more flexible expectile regression estimators. For example, \citet{Asymmetric_Tong} studied a local linear-polynomial expectile estimator with a one-dimensional covariate. However, the local fitting approach is not suitable when the dimension of the explanatory variables is more than five. This limitation of local smoothing motivated \citet{yang2015nonparametric} to develop a nonparametric expectile regression estimator based on the gradient tree boosting algorithm. The tree-boosted expectile regression minimizes the empirical expectile loss: \begin{equation} \underset{f\in{\cal F}}{\min}\frac{1}{n}\sum_{i=1}^{n}\phi_{\omega}(y_{i}-f(\mathbf{x}_{i})),\label{eq:opt} \end{equation} where each candidate function $f\in \mathcal F$ is assumed to be an ensemble of regression trees. In this article, we consider another nonparametric approach to multiple expectile regression. To motivate our method, let us first look at the special case $\omega=0.5$. It is easy to see from \eqref{eqdef1} and \eqref{eqdef2} that if $\omega=0.5$, expectile regression reduces to ordinary conditional mean regression. A host of flexible regression methods have been well studied for conditional mean regression, such as generalized additive models, regression trees, boosted regression trees, and function estimation in a reproducing kernel Hilbert space (RKHS). \citet{friedman2009elements} provided excellent introductions to all these methods. In particular, mean regression in an RKHS has a long history and a rich success record \citep{wahba}. So in the present work we propose the kernel expectile regression in an RKHS. Denote by $\mathbb{H}_{K}$ the Hilbert space generated by a positive definite kernel $K$.
By Mercer's theorem, the kernel $K$ has an eigen-expansion $K(\mathbf{x},\mathbf{x^{\prime}})=\sum_{i=1}^{\infty}\nu_{i}\varphi_{i}(\mathbf{x})\varphi_{i}(\mathbf{x^{\prime}})$ with $\nu_{i}\geq0$ and $\sum_{i=1}^{\infty}\nu_{i}^{2}<\infty$. The function $f$ in $\mathbb{H}_{K}$ can be expressed as an expansion of these eigen-functions, $f(\mathbf{x})=\sum_{i=1}^{\infty}c_{i}\varphi_{i}(\mathbf{x})$, with the kernel-induced squared norm $\|f\|_{\mathbb{H}_{K}}^{2}\equiv\sum_{i=1}^{\infty}c_{i}^{2}/\nu_{i}<\infty.$ Some of the most widely used kernel functions are \begin{itemize} \item the Gaussian RBF kernel $K(\mathbf{x},\mathbf{x^{\prime}})=\exp\left(\frac{-\|\mathbf{x}-\mathbf{x^{\prime}}\|^{2}}{\sigma^{2}}\right),$ \item the sigmoidal kernel $K(\mathbf{x},\mathbf{x^{\prime}})=\tanh(\kappa\left\langle \mathbf{x},\mathbf{x^{\prime}}\right\rangle +\theta),$ \item the polynomial kernel $K(\mathbf{x},\mathbf{x^{\prime}})=(\left\langle \mathbf{x},\mathbf{x^{\prime}}\right\rangle +\theta)^d.$ \end{itemize} Other kernels can be found in \citet{smola1998connection} and \citet{friedman2009elements}. Given $n$ observations $\{(\mathbf{x}_{i},y_{i})\}_{i=1}^n$, the kernel expectile regression estimator (KERE) is defined as \begin{equation} (\hat{f}_{n}(\mathbf{x}),\hat{\alpha}_{0})=\arg\min_{f\in\mathbb{H}_{K},\alpha_{0}\in\mathbb{R}}\sum_{i=1}^{n}\phi_{\omega}(y_{i}-\alpha_{0}-f(\mathbf{x}_{i}))+\lambda\|f\|_{\mathbb{H}_{K}}^{2},\label{eq:setup1} \end{equation} where $\mathbf{x}_{i}\in\mathbb{R}^{p}$, $\alpha_{0}\in\mathbb{R}$. The estimated conditional $\omega$-expectile is $\hat{\alpha}_0+\hat{f}_{n}(\mathbf{x}).$ Sometimes, one can absorb the intercept term into the nonparametric function $f$. We keep the intercept term in order to make a direct comparison to the multiple linear expectile regression.
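For concreteness, here is a minimal sketch (ours, in Python) of the Gram matrix of the Gaussian RBF kernel listed above, the matrix on which the finite-dimensional representation of KERE operates; note that $K(\mathbf{x},\mathbf{x})=1$ for this kernel, so the constant $M=\sup_{\mathbf{x}}K(\mathbf{x},\mathbf{x})^{1/2}$ appearing in the learning theory equals $1$:

```python
import numpy as np

def rbf_gram(X, Xp, sigma=1.0):
    # Gaussian RBF kernel K(x, x') = exp(-||x - x'||^2 / sigma^2),
    # evaluated on all pairs of rows of X (n x p) and Xp (m x p).
    sq = (np.sum(X ** 2, axis=1)[:, None]
          + np.sum(Xp ** 2, axis=1)[None, :]
          - 2.0 * X @ Xp.T)
    # clip tiny negative values caused by floating-point cancellation
    return np.exp(-np.maximum(sq, 0.0) / sigma ** 2)
```

The resulting Gram matrix is symmetric positive semi-definite with unit diagonal, as required of a valid kernel.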
Although \eqref{eq:setup1} is often an optimization problem in an infinite-dimensional space, depending on the choice of the kernel, the representer theorem \citep{wahba} ensures that the solution to \eqref{eq:setup1} always lies in a finite-dimensional subspace spanned by kernel functions on observational data, i.e., \begin{equation} f(\mathbf{x})=\sum_{i=1}^{n}\alpha_{i}K(\mathbf{x}_{i},\mathbf{x}),\label{eq:sol} \end{equation} for some $\{\alpha_{i}\}_{i=1}^{n}\subset\mathbb{R}$. By \eqref{eq:sol} and the reproducing property of RKHS \citep{wahba} we have \begin{equation} \|f\|_{\mathbb{H}_{K}}^{2}=\sum_{i=1}^{n}\sum_{j=1}^{n}\alpha_{i}\alpha_{j}K(\mathbf{x}_{i},\mathbf{x}_{j}).\label{eq:sol2} \end{equation} Based on \eqref{eq:sol} and \eqref{eq:sol2} we can rewrite the minimization problem \eqref{eq:setup1} in a finite-dimensional space \begin{equation} \{\hat{\alpha}_{i}\}_{i=0}^{n}=\arg\min_{\{\alpha_{i}\}_{i=0}^{n}}\sum_{i=1}^{n}\phi_{\omega}\left(y_{i}-\alpha_{0}-\sum_{j=1}^{n}\alpha_{j}K(\mathbf{x}_{i},\mathbf{x}_{j})\right)+\lambda\sum_{i=1}^{n}\sum_{j=1}^{n}\alpha_{i}\alpha_{j}K(\mathbf{x}_{i},\mathbf{x}_{j}).\label{eq:setup2} \end{equation} The corresponding KERE estimator is $ \hat{\alpha}_0+\sum_{i=1}^{n} \hat \alpha_{i}K(\mathbf{x}_{i},\mathbf{x})$. The computation of KERE is based on \eqref{eq:setup2} and we use both \eqref{eq:setup1} and \eqref{eq:setup2} for the theoretical analysis of KERE. \subsection{Kernel learning theory} In this section we develop a kernel learning theory for KERE. We first discuss the criterion for evaluating an estimator in the context of expectile regression. Given the loss function $\phi_{\omega}$, the risk is $ {\cal R}(f,\alpha_{0})=E_{(\mathbf{x},y)}\phi_{\omega}(y-\alpha_{0}-f(\mathbf{x})). 
$ It is argued that ${\cal R}(f,\alpha_{0})$ is a more appropriate evaluation measure in practice than the squared error risk defined as $E_{\mathbf{x}}\|f(\mathbf{x})+\alpha_0-f^*_{\omega}(\mathbf{x}) \|^2$, where $f^*_{\omega}$ is the true conditional expectile of $Y$ given $X=\mathbf{x}$. The reason is simple: let $(\hat f, \hat \alpha_0)$ be any estimator based on the training data. By the law of large numbers we see that \[ {\cal R}(\hat f,\hat \alpha_{0})=E_{\{y_j,\mathbf{x}_j\}^m_{j=1}}\frac{1}{m}\sum^m_{j=1}\phi_{\omega}(y_j-\hat \alpha_{0}-\hat f(\mathbf{x}_j)), \] and \[ {\cal R}(\hat f,\hat \alpha_{0})=\lim_{m \rightarrow \infty}\frac{1}{m}\sum^m_{j=1}\phi_{\omega}(y_j-\hat \alpha_{0}-\hat f(\mathbf{x}_j)), \] where $\{(\mathbf{x}_j,y_j)\}^m_{j=1}$ is another independent test sample. Thus, one can use techniques such as cross-validation to estimate ${\cal R}(f,\alpha_{0})$. Additionally, the squared error risk depends on the function $f^*_{\omega}(\mathbf{x})$, which is usually unknown. Thus, we prefer ${\cal R}(\hat f,\hat \alpha_{0})$ over the squared error risk. Of course, if we assume a classical regression model (when $\omega=0.5$) such as $y=f(\mathbf{x})+\textrm{error}$, where the error is independent of $\mathbf{x}$ with mean zero and constant variance, then ${\cal R}(\hat f,\hat \alpha_{0})$ just equals the squared error risk plus a constant. Unfortunately, such equivalence breaks down for other values of $\omega$ and more general models. After choosing the risk function, the goal is to minimize the risk. Since typically the estimation is done in a function space, the minimization is carried out in the chosen function space. In our case, the function space is the RKHS generated by a kernel function $K$. Thus, the ideal risk is defined as \[ {\cal R}^*_{f, \alpha_{0}}=\inf_{f\in\mathbb{H}_K,\alpha_0\in\mathbb{R}}{\cal R}(f, \alpha_{0}).
\] Consider the kernel expectile regression estimator $(\hat{f},\hat{\alpha}_{0})$ defined in \eqref{eq:setup1} based on a training sample $D_{n}=\{(\mathbf{x}_{i},y_{i})\}_{i=1}^n$ drawn i.i.d.\ from an unknown distribution. The observed risk of KERE is \[ {\cal R}(\hat{f},\hat\alpha_{0})=E_{(\mathbf{x},y)}\phi_{\omega}(y-\hat\alpha_{0}-\hat{f}(\mathbf{x})). \] It is desirable to show that ${\cal R}(\hat{f},\hat\alpha_{0})$ approaches the ideal risk ${\cal R}^*_{f, \alpha_{0}}$. It is important to note that ${\cal R}(\hat{f},\hat\alpha_{0})$ is a random quantity that depends on the training sample $D_{n}$, so it is not a deterministic risk function in the usual sense. We can, however, consider its expectation, which we call the \emph{expected observed risk}: \begin{equation} \text{Expected observed risk:} \quad E_{D_n}{\cal R}(\hat{f},\hat{\alpha}_{0})=E_{D_{n}}\big\{E_{(\mathbf{x},y)}\phi_{\omega}(y-\hat{\alpha}_{0}-\hat{f}(\mathbf{x}))\big\}.\label{eq:expobrisk} \end{equation} Our goal is to show that ${\cal R}(\hat{f},\hat\alpha_{0})$ converges to ${\cal R}^*_{f, \alpha_{0}}$. We achieve this by showing that the expected observed risk converges to the ideal risk, i.e., $\lim_{n \rightarrow \infty } E_{D_n}{\cal R}(\hat{f},\hat{\alpha}_{0})={\cal R}^*_{f, \alpha_{0}}$. By definition, we always have ${\cal R}(\hat{f},\hat\alpha_{0}) \ge {\cal R}^*_{f, \alpha_{0}}$. Then by Markov's inequality, for any $\varepsilon>0$, $$ P\Big({\cal R}(\hat{f},\hat{\alpha}_{0}) - {\cal R}^*_{f, \alpha_{0}} >\varepsilon \Big) \le \frac{E_{D_n}{\cal R}(\hat{f},\hat{\alpha}_{0})- {\cal R}^*_{f, \alpha_{0}} }{\varepsilon} \rightarrow 0. $$ The rigorous statement of our result is as follows: \begin{thm}\label{thm:asymptotic} Let $M=\sup_{\mathbf{x}}K(\mathbf{x},\mathbf{x})^{1/2}$. Assume $M<\infty$ and $E y^2<D<\infty$ for some constant $D$.
If $\lambda$ is chosen such that $\lambda/n^{2/3}\rightarrow\infty$ and $\lambda/n\rightarrow 0$ as $n\rightarrow\infty$, then we have \[ E_{D_n}{\cal R}(\hat{f},\hat{\alpha}_{0})\rightarrow {\cal R}^*_{f, \alpha_{0}} \quad\text{as $n\rightarrow\infty$}, \] and hence \[ {\cal R}(\hat{f},\hat{\alpha}_{0}) - {\cal R}^*_{f, \alpha_{0}} \rightarrow 0 \ \textrm{in probability}. \] \end{thm} The Gaussian kernel is perhaps the most popular kernel for nonlinear learning. For the Gaussian kernel $K(\mathbf{x},\mathbf{x^{\prime}})=\exp(-\|\mathbf{x}-\mathbf{x^{\prime}}\|^2/c)$, we have $M=1$. For any radial kernel of the form $K(\mathbf{x},\mathbf{x^{\prime}})=h(\|\mathbf{x}-\mathbf{x^{\prime}}\|)$, where $h$ is a smooth decreasing function, we see that $M=h(0)^{\frac{1}{2}}$, which is finite as long as $h(0)<\infty$. \section{Algorithm\label{sec:algorithm}} \subsection{Derivation} The majorization-minimization (MM) algorithm is a very successful technique for solving a wide range of statistical problems \citep{lange2000optimization,hunter2004tutorial,MM08,zhou2010mm,lange2014mm}. In this section, we develop an algorithm inspired by the MM principle for solving the optimization problem \eqref{eq:setup2}. Note that the loss function $\phi_{\omega}$ in \eqref{eq:setup2} is not twice differentiable everywhere. We therefore adopt the MM principle and find the minimizer by iteratively minimizing a surrogate function that majorizes the objective function in \eqref{eq:setup2}.
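Before turning to the algorithm, the quadratic majorization of the expectile loss that underlies the MM construction can be checked numerically. The sketch below assumes the asymmetric squared loss $\phi_{\omega}(t)=|\omega-I(t<0)|t^{2}$, which is consistent with the Lipschitz constant $L=2\max(1-\omega,\omega)$ used in Lemma \ref{lem:lips} below:

```python
import numpy as np

def phi(t, w):
    # Asymmetric squared (expectile) loss: |w - 1(t < 0)| * t^2 (assumed form).
    return np.where(t < 0, 1.0 - w, w) * t ** 2

def dphi(t, w):
    # Its derivative, Lipschitz with constant L = 2 * max(w, 1 - w).
    return 2.0 * np.where(t < 0, 1.0 - w, w) * t

w = 0.8
L = 2.0 * max(w, 1.0 - w)
a = np.linspace(-3.0, 3.0, 601)
b = 0.7
# Quadratic upper bound: phi(a) <= phi(b) + phi'(b)(a - b) + (L/2)(a - b)^2.
upper = phi(b, w) + dphi(b, w) * (a - b) + 0.5 * L * (a - b) ** 2
assert np.all(phi(a, w) <= upper + 1e-12)
```

The bound is tight exactly on the branch of the loss containing $b$, which is what makes the MM surrogate efficient.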
To further simplify the notation we write $\boldsymbol{\alpha}=(\alpha_{0},\alpha_{1},\alpha_{2},\cdots,\alpha_{n})^{\intercal}$, and \[ \mathbf{K}_{i}=\left(1,K(\mathbf{x}_{i},\mathbf{x}_{1}),\ldots,K(\mathbf{x}_{i},\mathbf{x}_{n})\right), \qquad\mathbf{K}=\left(\begin{array}{ccc} K(\mathbf{x}_{1},\mathbf{x}_{1}) & \cdots & K(\mathbf{x}_{1},\mathbf{x}_{n})\\ \vdots & \ddots & \vdots\\ K(\mathbf{x}_{n},\mathbf{x}_{1}) & \cdots & K(\mathbf{x}_{n},\mathbf{x}_{n}) \end{array}\right), \] \[ \mathbf{K}_{0}=\left(\begin{array}{cc} 0 & \mathbf{0}_{n\times1}^{\intercal}\\ \mathbf{0}_{n\times1} & \mathbf{K} \end{array}\right). \] Then \eqref{eq:setup2} simplifies to the minimization problem \begin{equation} \widehat{\boldsymbol{\alpha}}=\arg\!\min_{\boldsymbol{\alpha}}F_{\omega, \lambda}(\boldsymbol{\alpha}),\label{eq:obj} \end{equation} \begin{equation} F_{\omega, \lambda}(\boldsymbol{\alpha})=\sum_{i=1}^{n}\phi_{\omega}\left(y_{i}-\mathbf{K}_{i}\boldsymbol{\alpha}\right)+\lambda\boldsymbol{\alpha}^{\intercal}\mathbf{K}_{0}\boldsymbol{\alpha},\label{eq:floss} \end{equation} where $\omega$ is given and determines the level of the conditional expectile. We also assume for now that $\lambda$ is given; an efficient algorithm for computing the solution along a sequence of $\lambda$ values is studied in Section \ref{sec:implementation}. Our approach is to minimize \eqref{eq:obj} by iteratively updating $\boldsymbol{\alpha}$ using the minimizer of a majorization function of $F_{\omega, \lambda}(\boldsymbol{\alpha})$.
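As a quick numerical companion to this notation, the following sketch (the Gaussian kernel and the asymmetric squared loss $\phi_{\omega}(t)=|\omega-I(t<0)|t^{2}$ are illustrative assumptions) builds $\mathbf{K}$, $\mathbf{K}_{0}$ and the rows $\mathbf{K}_{i}$, and evaluates the objective $F_{\omega,\lambda}(\boldsymbol{\alpha})$ of \eqref{eq:floss}:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
X = rng.uniform(-1.0, 1.0, size=(n, 1))
y = np.sin(3.0 * X[:, 0]) + 0.1 * rng.standard_normal(n)

# Gaussian kernel matrix K and the bordered matrix K_0.
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
K = np.exp(-sq / 0.5)
K0 = np.zeros((n + 1, n + 1))
K0[1:, 1:] = K
# Row i is K_i = (1, K(x_i, x_1), ..., K(x_i, x_n)).
Krows = np.hstack([np.ones((n, 1)), K])

def F(alpha, w, lam):
    # Objective of eq:floss: empirical loss plus the RKHS penalty.
    r = y - Krows @ alpha
    loss = np.where(r < 0, 1.0 - w, w) * r ** 2
    return loss.sum() + lam * alpha @ K0 @ alpha

alpha0 = np.zeros(n + 1)
# At the zero vector the penalty vanishes, so F reduces to the empirical loss.
assert np.isclose(F(alpha0, 0.5, 1.0), 0.5 * np.sum(y ** 2))
```

At the zero initializer the penalty term vanishes, so the objective equals the raw empirical loss of predicting zero everywhere.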
Specifically, at the $k$-th step of the algorithm ($k=0,1,2,\ldots$), given the current value $\boldsymbol{\alpha}^{(k)}$, we find a majorization function $Q(\boldsymbol{\alpha}\mid\boldsymbol{\alpha}^{(k)})$ of $F_{\omega, \lambda}(\boldsymbol{\alpha})$ at $\boldsymbol{\alpha}^{(k)}$ that satisfies \begin{alignat}{1} Q(\boldsymbol{\alpha}\mid\boldsymbol{\alpha}^{(k)}) > & F_{\omega, \lambda}(\boldsymbol{\alpha})\quad\mathrm{when\ }\boldsymbol{\alpha} \neq \boldsymbol{\alpha}^{(k)},\label{eq:mathdef1}\\ Q(\boldsymbol{\alpha}\mid\boldsymbol{\alpha}^{(k)}) = & F_{\omega, \lambda}(\boldsymbol{\alpha}) \quad\mathrm{when\ }\boldsymbol{\alpha} = \boldsymbol{\alpha}^{(k)}.\label{eq:mathdef2} \end{alignat} Then we update $\boldsymbol{\alpha}$ by minimizing $Q(\boldsymbol{\alpha}\mid\boldsymbol{\alpha}^{(k)})$ rather than the actual objective function $F_{\omega, \lambda}(\boldsymbol{\alpha})$: \begin{equation} \boldsymbol{\alpha}^{(k+1)}=\arg\!\min_{\boldsymbol{\alpha}}Q(\boldsymbol{\alpha}\mid\boldsymbol{\alpha}^{(k)}).\label{eq:majobj} \end{equation} To construct the majorization function $Q(\boldsymbol{\alpha}\mid\boldsymbol{\alpha}^{(k)})$ at the $k$-th iteration, we use the following lemma: \begin{lem} \label{lem:lips}The expectile loss $\phi_{\omega}$ has a Lipschitz continuous derivative $\phi^{\prime}_{\omega}$, i.e., \begin{equation} |\phi^{\prime}_{\omega}(a)-\phi^{\prime}_{\omega}(b)|\leq L|a-b|\qquad\forall a,b\in\mathbb{R},\label{eq:lips} \end{equation} where $L=2\max(1-\omega,\omega)$. This further implies that $\phi_{\omega}$ has a quadratic upper bound \begin{equation} \phi_{\omega}(a)\le\phi_{\omega}(b)+\phi^{\prime}_{\omega}(b)(a-b)+\frac{L}{2}|a-b|^{2}\qquad\forall a,b\in\mathbb{R}.\label{eq:upper_bound} \end{equation} Equality holds only when $a=b$.
\end{lem} Write the current ``residual'' as $r_{i}^{(k)}=y_{i}-\mathbf{K}_{i}\boldsymbol{\alpha}^{(k)}$, so that in \eqref{eq:obj} we have $y_{i}-\mathbf{K}_{i}\boldsymbol{\alpha}=r_{i}^{(k)}-\mathbf{K}_{i}(\boldsymbol{\alpha}-\boldsymbol{\alpha}^{(k)})$. By Lemma \ref{lem:lips}, we obtain \[ |\phi_{\omega}^{\prime}(r_{i}^{(k)}-\mathbf{K}_{i}(\boldsymbol{\alpha}-\boldsymbol{\alpha}^{(k)}))-\phi_{\omega}^{\prime}(r_{i}^{(k)})|\leq 2\max(1-\omega,\omega)|\mathbf{K}_{i}(\boldsymbol{\alpha}-\boldsymbol{\alpha}^{(k)})|, \] and the quadratic upper bound \[ \phi_{\omega}(r_{i}^{(k)}-\mathbf{K}_{i}(\boldsymbol{\alpha}-\boldsymbol{\alpha}^{(k)}))\leq q_{i}(\boldsymbol{\alpha}\mid\boldsymbol{\alpha}^{(k)}), \] where \[ q_{i}(\boldsymbol{\alpha}\mid\boldsymbol{\alpha}^{(k)})=\phi_{\omega}(r_{i}^{(k)})-\phi_{\omega}^{\prime}(r_{i}^{(k)})\mathbf{K}_{i}(\boldsymbol{\alpha}-\boldsymbol{\alpha}^{(k)})+\max(1-\omega,\omega)(\boldsymbol{\alpha}-\boldsymbol{\alpha}^{(k)})^{\intercal}\mathbf{K}_{i}\mathbf{K}_{i}^{\intercal}(\boldsymbol{\alpha}-\boldsymbol{\alpha}^{(k)}).
\] Therefore the majorization function of $F_{\omega, \lambda}(\boldsymbol{\alpha})$ can be written as \begin{equation} Q(\boldsymbol{\alpha}\mid\boldsymbol{\alpha}^{(k)})=\sum_{i=1}^{n}q_{i}(\boldsymbol{\alpha}\mid\boldsymbol{\alpha}^{(k)})+\lambda\boldsymbol{\alpha}^{\intercal}\mathbf{K}_{0}\boldsymbol{\alpha},\label{eq:qbound} \end{equation} which can equivalently be written as \begin{equation} Q(\boldsymbol{\alpha}\mid\boldsymbol{\alpha}^{(k)})=F_{\omega, \lambda}(\boldsymbol{\alpha}^{(k)})+\nabla F_{\omega, \lambda}(\boldsymbol{\alpha}^{(k)})(\boldsymbol{\alpha}-\boldsymbol{\alpha}^{(k)})+(\boldsymbol{\alpha}-\boldsymbol{\alpha}^{(k)})^{\intercal}\mathbf{K}_{u}(\boldsymbol{\alpha}-\boldsymbol{\alpha}^{(k)}),\label{eq:alter_qbound} \end{equation} where \begin{align} \mathbf{K}_{u} & =\lambda\mathbf{K}_{0}+\max(1-\omega,\omega)\sum_{i=1}^{n}\mathbf{K}_{i}\mathbf{K}_{i}^{\intercal}\label{eq:ku}\\ & =\max(1-\omega,\omega)\left(\begin{array}{cc} n & \mathbf{1}^{\intercal}\mathbf{K}\\ \mathbf{K}\mathbf{1} & \mathbf{KK}+\frac{\lambda}{\max(1-\omega,\omega)}\mathbf{K} \end{array}\right), \end{align} and $\mathbf{1}$ is an $n\times1$ vector of all ones. Our algorithm updates $\boldsymbol{\alpha}$ using the minimizer of the quadratic majorization function \eqref{eq:alter_qbound}: \begin{equation} \boldsymbol{\alpha}^{(k+1)}=\arg\!\min_{\boldsymbol{\alpha}}Q(\boldsymbol{\alpha}\mid\boldsymbol{\alpha}^{(k)})=\boldsymbol{\alpha}^{(k)}+\mathbf{K}_{u}^{-1}\left(-\lambda\mathbf{K}_{0}\boldsymbol{\alpha}^{(k)}+\frac{1}{2}\sum_{i=1}^{n}\phi_{\omega}^{\prime}(r_{i}^{(k)})\mathbf{K}_{i}\right).\label{eq:update} \end{equation} The complete procedure for solving \eqref{eq:obj} is described in Algorithm \ref{alg:main}. \begin{algorithm} \caption{The algorithm for the minimization of \eqref{eq:obj}.
\label{alg:main}} \begin{itemize} \item Let $\{y_{i}\}_{1}^{n}$ be the observations of the response, $\{K(\mathbf{x}_{i},\mathbf{x}_{j})\}_{i,j=1}^{n}$ be the kernel evaluated at all pairs of observations, and $\boldsymbol{\alpha}:=(\alpha_{0},\alpha_{1},\alpha_{2},\ldots,\alpha_{n})$. \item Initialize $\boldsymbol{\alpha}^{(0)}$ and $k=0$. \item Iterate steps 1--3 until convergence:\end{itemize} \begin{enumerate} \item Calculate the residuals $r_{i}^{(k)}=y_{i}-\mathbf{K}_{i}\boldsymbol{\alpha}^{(k)}$ for all $1\leq i\leq n$. \item Obtain $\boldsymbol{\alpha}^{(k+1)}$ by: \[ \boldsymbol{\alpha}^{(k+1)}=\boldsymbol{\alpha}^{(k)}+\mathbf{K}_{u}^{-1}\left(-\lambda\mathbf{K}_{0}\boldsymbol{\alpha}^{(k)}+\frac{1}{2}\sum_{i=1}^{n}\phi_{\omega}^{\prime}(r_{i}^{(k)})\mathbf{K}_{i}\right), \] where \[ \mathbf{K}_{u}=\max(1-\omega,\omega)\left(\begin{array}{cc} n & \mathbf{1}^{\intercal}\mathbf{K}\\ \mathbf{K}\mathbf{1} & \mathbf{KK}+\frac{\lambda}{\max(1-\omega,\omega)}\mathbf{K} \end{array}\right). \] \item $k:=k+1$. \end{enumerate} \end{algorithm} \subsection{Convergence analysis}\label{sec:convergence_analysis} We now provide the convergence analysis of Algorithm \ref{alg:main}. Lemma \ref{lem:convergence} below shows that the sequence $(\boldsymbol{\alpha}^{(k)})$ produced by the algorithm converges to the unique global minimum $\widehat{\boldsymbol{\alpha}}$ of the optimization problem. \begin{lem}\label{lem:convergence} If we update $\boldsymbol{\alpha}^{(k+1)}$ by using \textup{\eqref{eq:update}}, then the following results hold: \begin{enumerate} \item Descent property of the objective function: $F_{\omega, \lambda}(\boldsymbol{\alpha}^{(k+1)})\leq F_{\omega, \lambda}(\boldsymbol{\alpha}^{(k)})$ for all $k$. \item Convergence of $\boldsymbol{\alpha}$: assume that $\sum_{i=1}^{n}\mathbf{K}_{i}\mathbf{K}_{i}^{\intercal}$ is a positive definite matrix; then $\lim_{k\rightarrow\infty}\|\boldsymbol{\alpha}^{(k+1)}-\boldsymbol{\alpha}^{(k)}\|=0$.
\item The sequence $(\boldsymbol{\alpha}^{(k)})$ converges to $\widehat{\boldsymbol{\alpha}}$, the unique global minimum of \eqref{eq:obj}. \end{enumerate} \end{lem} \begin{thm} \label{thm:iteration} Denote by $\widehat{\boldsymbol{\alpha}}$ the unique minimizer of \eqref{eq:obj} and define \begin{equation} \Lambda_{k}=\frac{Q(\widehat{\boldsymbol{\alpha}}\mid\boldsymbol{\alpha}^{(k)})-F_{\omega, \lambda}(\widehat{\boldsymbol{\alpha}})}{(\widehat{\boldsymbol{\alpha}}-\boldsymbol{\alpha}^{(k)})^{\intercal}\mathbf{K}_{u}(\widehat{\boldsymbol{\alpha}}-\boldsymbol{\alpha}^{(k)})}.\label{eq:LambdaK} \end{equation} Note that when $\Lambda_{k}=0$, we are in the trivial case $\boldsymbol{\alpha}^{(j)}=\widehat{\boldsymbol{\alpha}}$ for all $j>k$. We define \[ \Gamma=1-\gamma_{\min}(\mathbf{K}_{u}^{-1}\mathbf{K}_{l}), \] where \[ \mathbf{K}_{l}=\lambda\mathbf{K}_{0}+\min(1-\omega,\omega)\sum_{i=1}^{n}\mathbf{K}_{i}\mathbf{K}_{i}^{\intercal}. \] Assume that $\sum_{i=1}^{n}\mathbf{K}_{i}\mathbf{K}_{i}^{\intercal}$ is a positive definite matrix. Then the following results hold: \begin{enumerate} \item \textup{$F_{\omega, \lambda}(\boldsymbol{\alpha}^{(k+1)})-F_{\omega, \lambda}(\widehat{\boldsymbol{\alpha}})\leq\Lambda_{k}\left(F_{\omega, \lambda}(\boldsymbol{\alpha}^{(k)})-F_{\omega, \lambda}(\widehat{\boldsymbol{\alpha}})\right).$} \item The sequence $(F_{\omega, \lambda}(\boldsymbol{\alpha}^{(k)}))$ has a linear convergence rate no greater than $\Gamma$, and $0\leq\Lambda_{k}\leq\Gamma<1$. \item The sequence $(\boldsymbol{\alpha}^{(k)})$ has a linear convergence rate no greater than $\sqrt{\Gamma\gamma_{\max}(\mathbf{K}_{u})/\gamma_{\min}(\mathbf{K}_{l})}$, i.e., \[ \|\boldsymbol{\alpha}^{(k+1)}-\widehat{\boldsymbol{\alpha}}\|\leq \sqrt{\Gamma\frac{\gamma_{\max}(\mathbf{K}_{u})}{\gamma_{\min}(\mathbf{K}_{l})}}\|\boldsymbol{\alpha}^{(k)}-\widehat{\boldsymbol{\alpha}}\|. \] \end{enumerate} \end{thm} Theorem~\ref{thm:iteration} says that the convergence rate of Algorithm \ref{alg:main} is at least linear.
In our numerical experiments, we have found that Algorithm \ref{alg:main} converges very fast: the convergence criterion is usually met within 15 iterations. \subsection{Implementation}\label{sec:implementation} We now discuss some techniques used in our implementation to further improve the computational speed of the algorithm. In practice, expectile models are computed by applying Algorithm \ref{alg:main} to a descending sequence of $\lambda$ values. To create a sequence $\{\lambda_{m}\}_{m=1}^{M}$, we place $M-2$ points uniformly (on the log-scale) between the starting point $\lambda_{\max}$ and the ending point $\lambda_{\min}$, so that the $\lambda$ sequence has length $M$. The default value of $M$ is 100, hence $\lambda_{1}=\lambda_{\max}$ and $\lambda_{100}=\lambda_{\min}$. We adopt the warm-start trick to compute the solution paths along the $\lambda$ values: suppose that we have already obtained the solution $\widehat{\boldsymbol{\alpha}}_{\lambda_{m}}$ at $\lambda_{m}$; then $\widehat{\boldsymbol{\alpha}}_{\lambda_{m}}$ is used as the initial value for computing the solution at $\lambda_{m+1}$ in Algorithm \ref{alg:main}. Another computational trick exploits the fact that in Algorithm \ref{alg:main} the inverse of $\mathbf{K}_{u}$ does not have to be re-computed from scratch for each $\lambda$: there is an easy way to update $\mathbf{K}^{-1}_{u}$ along $\lambda_{1},\lambda_{2},\ldots$.
Because $\mathbf{K}_{u}$ can be partitioned into a $2\times2$ block matrix, by Theorem 8.5.11 of \citet{harville2008matrix}, $\mathbf{K}^{-1}_{u}$ can be expressed as \begin{align} \mathbf{K}_{u}^{-1}(\lambda) & =\frac{1}{\max(1-\omega,\omega)}\left(\begin{array}{cc} n & \mathbf{1}^{\intercal}\mathbf{K}\\ \mathbf{K}\mathbf{1} & \mathbf{KK}+\frac{\lambda}{\max(1-\omega,\omega)}\mathbf{K} \end{array}\right)^{-1}\nonumber \\ & =\frac{1}{\max(1-\omega,\omega)}\left[\left(\begin{array}{cc} \frac{1}{n} & \mathbf{0}_{1\times n}\\ \mathbf{0}_{n\times1} & \mathbf{0}_{n\times n} \end{array}\right)+\left(\begin{array}{c} -\frac{1}{n}\mathbf{1}^{\intercal}\mathbf{K}\\ \mathbf{I}_{n} \end{array}\right)\mathbf{Q}_{\lambda}^{-1}(-\frac{1}{n}\mathbf{K}\mathbf{1},\mathbf{I}_{n})\right],\label{eq:Ku_partition} \end{align} where \[ \mathbf{Q}_{\lambda}^{-1}=\left[\left(\mathbf{KK}+\frac{\lambda}{\max(1-\omega,\omega)}\mathbf{K}\right)-\frac{1}{n}\mathbf{K}\mathbf{1}\mathbf{1}^{\intercal}\mathbf{K}\right]^{-1}. \] In \eqref{eq:Ku_partition} only $\mathbf{Q}_{\lambda}^{-1}$ changes with $\lambda$; therefore computing $\mathbf{K}_{u}^{-1}$ for a different $\lambda$ only requires updating $\mathbf{Q}_{\lambda}^{-1}$. Observe that $\mathbf{Q}_{\lambda}^{-1}$ is the inverse of the sum of two matrices $\mathbf{A}_{\lambda}$ and $\mathbf{B}$: \[ \mathbf{A}_{\lambda}=\mathbf{KK}+\frac{\lambda}{\max(1-\omega,\omega)}\mathbf{K},\qquad\mathbf{B}=-\frac{1}{n}\mathbf{K}\mathbf{1}\mathbf{1}^{\intercal}\mathbf{K}.
\] By the Sherman\textendash{}Morrison formula \citep{1950}, \begin{equation} \mathbf{Q}_{\lambda}^{-1} =\left[\mathbf{A}_{\lambda}+\mathbf{B}\right]^{-1}=\mathbf{A}_{\lambda}^{-1}-\frac{1}{1+g}\mathbf{A}_{\lambda}^{-1}\mathbf{B}\mathbf{A_{\lambda}}^{-1}, \label{eq:qa_relation} \end{equation} where $g=\mathrm{trace}(\mathbf{B}\mathbf{A}_{\lambda}^{-1})$. Thus, to obtain $\mathbf{Q}_{\lambda}^{-1}$ for a different $\lambda$, one only needs $\mathbf{A}_{\lambda}^{-1}$, which can be efficiently computed using the eigen-decomposition $\mathbf{K}=\mathbf{U}\mathbf{D}\mathbf{U}^{\intercal}$: \begin{equation} \mathbf{A}_{\lambda}^{-1}=\left(\mathbf{KK}+\frac{\lambda}{\max(1-\omega,\omega)}\mathbf{K}\right)^{-1}=\mathbf{U}\left(\mathbf{D}^{2}+\frac{\lambda}{\max(1-\omega,\omega)}\mathbf{D}\right)^{-1}\mathbf{U}^{\intercal}.\label{eq:a_inv_update} \end{equation} Equation \eqref{eq:a_inv_update} implies that the computation of $\mathbf{K}_{u}^{-1}(\lambda)$ depends only on $\lambda$, $\mathbf{D}$, $\mathbf{U}$ and $\omega$. Since $\mathbf{D}$, $\mathbf{U}$ and $\omega$ stay unchanged across the sequence, we only need to compute them once; to obtain $\mathbf{K}_{u}^{-1}(\lambda)$ for a different $\lambda$, we simply plug the new $\lambda$ into \eqref{eq:a_inv_update}. The following is the implementation for computing KERE for a sequence of $\lambda$ values using Algorithm \ref{alg:main}: \begin{itemize} \item Calculate $\mathbf{U}$ and $\mathbf{D}$ from $\mathbf{K}=\mathbf{U}\mathbf{D}\mathbf{U}^{\intercal}$. \item Initialize $\ensuremath{\widehat{\boldsymbol{\alpha}}_{\lambda_{0}}=[0,0,\ldots,0]}$. \item \textbf{for} $m=1,2,\ldots,M$, repeat steps 1--3: \begin{enumerate} \item Initialize $\boldsymbol{\alpha}_{\lambda_{m}}^{(0)}=\widehat{\boldsymbol{\alpha}}_{\lambda_{m-1}}$. \item Compute $\mathbf{K}_{u}^{-1}(\lambda_{m})$ using \eqref{eq:Ku_partition}, \eqref{eq:qa_relation} and \eqref{eq:a_inv_update}.
\item Call Algorithm \ref{alg:main} to compute $\widehat{\boldsymbol{\alpha}}_{\lambda_{m}}$. \end{enumerate} \end{itemize} Our algorithm has been implemented in an official R package \texttt{KERE}, which is publicly available from the Comprehensive R Archive Network at \url{http://cran.r-project.org/web/packages/KERE/index.html}. \section{Simulation} In this section, we conduct extensive simulations to demonstrate the excellent finite-sample performance of KERE. We investigate how the performance of KERE is affected by various model and error distribution settings, training sample sizes and other characteristics. Although many kernels are available, throughout this section we use the commonly recommended \citep{friedman2009elements} Gaussian radial basis function (RBF) kernel $K(\mathbf{x}_{i},\mathbf{x}_{j})=e^{\frac{-\|\mathbf{x}_{i}-\mathbf{x}_{j}\|^{2}}{\sigma^{2}}}$. We select the best pair of kernel bandwidth $\sigma^{2}$ and regularization parameter $\lambda$ by two-dimensional five-fold cross-validation. All computations were done on an Intel Core i7-3770 processor at 3.40GHz. \subsection*{Simulation I: single covariate case} The model used for this simulation is defined as \begin{equation} y_{i}=\sin(0.7x_{i})+\frac{x_{i}^{2}}{20}+\frac{|x_{i}|+1}{5}\epsilon_{i},\label{eq:sim1} \end{equation} which is heteroscedastic, as the error scale depends on the single covariate $x\sim U[-8,8]$. We used a single covariate so that the estimator can be visualized easily. We used two different error distributions: a Laplace distribution and the mixed normal distribution \[ \epsilon_{i}\sim0.5N(0,\frac{1}{4})+0.5N(1,\frac{1}{16}). \] We generated $n=400$ training observations from \eqref{eq:sim1}, on which five expectile models with levels $\omega=\{0.05,0.2,0.5,0.8,0.95\}$ were fitted.
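For concreteness, a sketch of the data-generating step and of the numerical computation of an expectile of the error distribution is given below; reading the second parameter of each normal component as a variance is an assumption, and the expectile is obtained by solving the first-order condition $\omega E(\epsilon-b)_{+}=(1-\omega)E(b-\epsilon)_{+}$ by bisection on a Monte-Carlo sample:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_mixture(size):
    # 0.5 N(0, 1/4) + 0.5 N(1, 1/16), second parameters read as variances.
    comp = rng.random(size) < 0.5
    return np.where(comp, rng.normal(0.0, 0.5, size), rng.normal(1.0, 0.25, size))

# Training data from the heteroscedastic model (eq:sim1).
n = 400
x = rng.uniform(-8.0, 8.0, n)
y = np.sin(0.7 * x) + x ** 2 / 20.0 + (np.abs(x) + 1.0) / 5.0 * sample_mixture(n)

def expectile(sample, w, tol=1e-10):
    # b_w solves w*E(e - b)_+ = (1 - w)*E(b - e)_+; the gap is decreasing in b.
    lo, hi = sample.min(), sample.max()
    while hi - lo > tol:
        b = 0.5 * (lo + hi)
        gap = (w * np.clip(sample - b, 0.0, None).mean()
               - (1.0 - w) * np.clip(b - sample, 0.0, None).mean())
        lo, hi = (b, hi) if gap > 0 else (lo, b)
    return 0.5 * (lo + hi)

eps = sample_mixture(200_000)
# The 0.5-expectile is the mean, here 0.5 * 0 + 0.5 * 1 = 0.5.
assert abs(expectile(eps, 0.5) - eps.mean()) < 1e-6
```

The same routine gives the quantities $b_{\omega}(\epsilon)$ that enter the true expectile curves used to compute the prediction errors below.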
We generated an additional $n^{\prime}=2000$ test observations for evaluating the mean absolute deviation (MAD) of the final estimate. Denoting the true expectile function by $f_{\omega}$ and the predicted expectile by $\hat{f}_{\omega}$, the mean absolute deviation is defined as \[ \mathrm{MAD}(\omega)=\frac{1}{n^{\prime}}\sum_{i=1}^{n^{\prime}}|f_{\omega}(\mathbf{x}_{i})-\hat{f}_{\omega}(\mathbf{x}_{i})|. \] The true expectile $f_{\omega}$ is equal to $\sin(0.7x)+\frac{x^{2}}{20}+\frac{|x|+1}{5}b_{\omega}(\epsilon)$, where $b_{\omega}(\epsilon)$ is the $\omega$-expectile of $\epsilon$, i.e., the theoretical minimizer of $E\phi_{\omega}(\epsilon-b)$. The simulations were repeated 100 times under the above settings. We recorded the MADs for the different expectile levels in Table \ref{tab:simulation_MAD}. We find that the accuracy of the expectile prediction with mixed normal errors is generally better than that with Laplace errors. For the symmetric Laplace case, the prediction MADs are also symmetric around $\omega=0.5$, while for the skewed mixed-normal case the MADs are skewed. To show that KERE works as expected, in Figure~\ref{fig:simulation1} we also compare the theoretical and predicted expectile curves based on KERE with $\omega=\{0.05,0.2,0.5,0.8,0.95\}$. We can see that the corresponding theoretical and predicted curves are very close; theoretically, the two sets of curves should coincide as $n\rightarrow\infty$. \begin{figure} \caption{Theoretical expectiles and empirical expectiles for a single-covariate heteroscedastic model with mixed normal error.
The model is fitted on five expectile levels $\omega=\{0.05,0.2,0.5,0.8,0.95\}$.\label{fig:simulation1}} \end{figure} \begin{table} \ra{1.1} \begin{centering} \begin{tabular}{cccccc} \toprule $\omega$ & 0.05 & 0.2 & 0.5 & 0.8 & 0.95\tabularnewline \midrule Mixture & 0.236 (0.003) & 0.138 (0.003) & 0.376 (0.002) & 0.610 (0.002) & 0.788 (0.002)\tabularnewline Laplace & 2.346 (0.013) & 1.037 (0.007) & 0.179 (0.005) & 1.033 (0.006) & 2.333 (0.027)\tabularnewline \bottomrule \end{tabular} \par\end{centering} \caption{The averaged MADs and the corresponding standard errors of expectile regression predictions for single covariate heteroscedastic models with mixed normal and Laplace error. The models are fitted on five expectile levels $\omega=\{0.05,0.2,0.5,0.8,0.95\}$. The results are based on 300 independent runs. \label{tab:simulation_MAD}} \end{table} \subsection*{Simulation II: multiple covariate case} In this part we illustrate that KERE works very well for target functions that are non-additive and/or have complex interactions. We generated data $\{\mathbf{x}_{i},y_{i}\}_{i=1}^{n}$ according to \[ y_{i}=f_{1}(\mathbf{x}_{i})+|f_{2}(\mathbf{x}_{i})|\epsilon_{i}, \] where the predictors $\mathbf{x}_{i}$ were generated from a joint normal distribution $N(0,\mathbf{I}_{p})$ with $p=10$. For the error term $\epsilon_{i}$ we consider three types of distributions: \begin{enumerate} \item Normal distribution $\epsilon_{i}\sim N(0,1)$. \item Student's $t$-distribution with four degrees of freedom $\epsilon_{i}\sim t_{4}$. \item Mixed normal distribution $\epsilon_{i}\sim0.9N(0,1)+0.1N(1,4)$. \end{enumerate} We now describe the construction of $f_1$ and $f_2$.
In the homoscedastic model, we let $f_{2}(\mathbf{x}_{i})=1$, and $f_1$ is generated by the ``random function generator'' model \citep{Friedman00greedyfunction}, according to \[ f(\mathbf{x})=\sum_{l=1}^{20}a_{l}g_{l}(\mathbf{x}_{l}), \] where the $\{a_{l}\}_{l=1}^{20}$ are sampled from the uniform distribution $a_{l}\sim U[-1,1]$, and $\mathbf{x}_{l}$ is a random subset of the $p$-dimensional predictor $\mathbf{x}$ of size $p_{l}=\min(\lfloor1.5+r\rfloor,p)$, where $r$ is sampled from the exponential distribution $r\sim Exp(0.5)$. The function $g_{l}(\mathbf{x}_{l})$ is a $p_{l}$-dimensional Gaussian function: \[ g_{l}(\mathbf{x}_{l})=\exp\Big[-\frac{1}{2}(\mathbf{x}_{l}-\boldsymbol{\mu}_{l})^{\intercal}\mathbf{V}_{l}(\mathbf{x}_{l}-\boldsymbol{\mu}_{l})\Big], \] where $\boldsymbol{\mu}_{l}$ follows the distribution $N(0,\mathbf{I}_{p_{l}})$. The $p_{l}\times p_{l}$ covariance matrix $\mathbf{V}_{l}$ is defined by $\mathbf{V}_{l}=\mathbf{U}_{l}\mathbf{D}_{l}\mathbf{U}_{l}^{\intercal}$, where $\mathbf{U}_{l}$ is a random orthogonal matrix, and $\mathbf{D}_{l}=\mathrm{diag}(d_{1l},d_{2l},\cdots,d_{p_{l}l})$ with $\sqrt{d_{jl}}\sim U[0.1,2]$. In the heteroscedastic model, $f_{1}$ is the same as in the homoscedastic model and $f_{2}$ is independently generated by the ``random function generator'' model. We generated $n=300$ observations as the training set, on which the estimated expectile functions $\hat{f}_{\omega}$ were computed at seven levels: \[ \omega\in\{0.05,0.1,0.25,0.5,0.75,0.9,0.95\}. \] An additional test set with $n^{\prime}=1200$ observations was generated for evaluating the MADs between the fitted expectile $\hat{f}_{\omega}$ and the true expectile $f_{\omega}$. Note that the expectile function $f_{\omega}(\mathbf{x})$ equals $f_{1}(\mathbf{x})+b_{\omega}(\epsilon)$ in the homoscedastic model and $f_{1}(\mathbf{x})+|f_{2}(\mathbf{x})|b_{\omega}(\epsilon)$ in the heteroscedastic model, where $b_{\omega}(\epsilon)$ is the $\omega$-expectile of the error distribution.
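A sketch of the ``random function generator'' described above is given below; reading $Exp(0.5)$ as an exponential with rate $0.5$ (mean $2$) is an assumption:

```python
import numpy as np

def random_function_generator(p=10, n_terms=20, rng=None):
    # f(x) = sum_l a_l g_l(x_l), each g_l a Gaussian bump on a random
    # coordinate subset x_l of the p-dimensional input x.
    rng = np.random.default_rng() if rng is None else rng
    terms = []
    for _ in range(n_terms):
        a = rng.uniform(-1.0, 1.0)
        r = rng.exponential(scale=2.0)               # Exp(0.5) read as rate 0.5
        p_l = min(int(np.floor(1.5 + r)), p)
        idx = rng.choice(p, size=p_l, replace=False)
        mu = rng.standard_normal(p_l)
        U, _ = np.linalg.qr(rng.standard_normal((p_l, p_l)))  # random orthogonal
        d = rng.uniform(0.1, 2.0, p_l) ** 2          # sqrt(d_jl) ~ U[0.1, 2]
        V = U @ np.diag(d) @ U.T
        terms.append((a, idx, mu, V))
    def f(x):
        return sum(a * np.exp(-0.5 * (x[idx] - mu) @ V @ (x[idx] - mu))
                   for a, idx, mu, V in terms)
    return f

f1 = random_function_generator(rng=np.random.default_rng(0))
vals = np.array([f1(x) for x in np.random.default_rng(1).standard_normal((50, 10))])
# Each bump is bounded by 1 and |a_l| <= 1, so |f| <= 20.
assert np.all(np.isfinite(vals)) and np.all(np.abs(vals) <= 20.0)
```

In the heteroscedastic setting, a second independent draw from the same generator plays the role of $f_{2}$.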
Under the above settings, we repeated the simulations 300 times and recorded the MAD and timing each time. In Figures \ref{fig:sim1} and \ref{fig:sim2} we show box-plots of the empirical distributions of the MADs, and in Table \ref{tab:homo_heter_mad} we report the average MADs and the corresponding standard errors. We see that KERE delivers accurate expectile predictions in all cases, although the prediction error is relatively more volatile in the heteroscedastic case, as expected: in the mean regression case ($\omega=0.5$), the averaged MADs in the homoscedastic and heteroscedastic models are very close, but the difference grows as $\omega$ moves away from $0.5$. We also observe that the prediction MADs for the symmetric distributions, normal and $t_{4}$, appear to be symmetric around the conditional mean $\omega=0.5$, while the prediction MADs in the skewed mixed-normal cases are asymmetric. The total computation times for conducting two-dimensional, five-fold cross-validation and fitting the final model with the chosen parameters $(\sigma^{2},\lambda)$ for conditional expectiles are reported in Table \ref{tab:homo_heter_time}. We find that the algorithm can efficiently solve all models in under 20 seconds, regardless of the choice of error distribution.
\begin{figure} \caption{Homoscedastic models with error distribution (a) normal, (b) $t_{4}$ and (c) mixed normal.\label{fig:sim1}} \end{figure} \begin{figure} \caption{Heteroscedastic models with error distribution (a) normal, (b) $t_{4}$ and (c) mixed normal.\label{fig:sim2}} \end{figure} \begin{table} \ra{1.1} \begin{centering} \begin{tabular}{lrrrrrrr} \toprule & \multicolumn{3}{c}{Homoscedastic model} & \phantom{} & \multicolumn{3}{c}{Heteroscedastic model}\tabularnewline \cmidrule{2-4} \cmidrule{6-8} $\omega$ & Normal & $t_{4}$ & Mixture & & Normal & $t_{4}$ & Mixture\tabularnewline \midrule 0.05 & 0.4068 & 0.4916 & 0.4183 & & 0.6009 & 0.8035 & 0.6142\tabularnewline & (0.0039) & (0.0061) & (0.0046) & & (0.0079) & (0.0103) & (0.0066)\tabularnewline 0.1 & 0.3975 & 0.4529 & 0.4019 & & 0.5067 & 0.6315 & 0.5052\tabularnewline & (0.0037) & (0.0051) & (0.0037) & & (0.0056) & (0.0094) & (0.0054)\tabularnewline 0.25 & 0.3717 & 0.4145 & 0.3886 & & 0.4065 & 0.4648 & 0.4173\tabularnewline & (0.0031) & (0.0042) & (0.0038) & & (0.0042) & (0.0061) & (0.0047)\tabularnewline 0.5 & 0.3750 & 0.4069 & 0.3851 & & 0.3712 & 0.4038 & 0.3886\tabularnewline & (0.0032) & (0.0038) & (0.0032) & & (0.0042) & (0.0049) & (0.0045)\tabularnewline 0.75 & 0.3782 & 0.4261 & 0.4102 & & 0.4185 & 0.4702 & 0.4635\tabularnewline & (0.0033) & (0.0042) & (0.0036) & & (0.0046) & (0.0064) & (0.0057)\tabularnewline 0.9 & 0.3932 & 0.4553 & 0.4356 & & 0.4968 & 0.6226 & 0.6203\tabularnewline & (0.0038) & (0.0050) & (0.0045) & & (0.0058) & (0.0081) & (0.0076)\tabularnewline 0.95 & 0.4040 & 0.4925 & 0.4628 & & 0.5938 & 0.8078 & 0.7631\tabularnewline & (0.0046) & (0.0062) & (0.0054) & & (0.0066) & (0.0128) & (0.0102)\tabularnewline \bottomrule \end{tabular} \par\end{centering} \caption{The averaged MADs and the corresponding standard errors for fitting homoscedastic and heteroscedastic models based on 300 independent runs.
The expectile levels are $\omega\in\{0.05,0.1,0.25,0.5,0.75,0.9,0.95\}$.\label{tab:homo_heter_mad}} \end{table} \begin{table} \ra{1.1} \begin{centering} \begin{tabular}{lrrrrrrr} \toprule & \multicolumn{3}{c}{Homoscedastic model} & \phantom{} & \multicolumn{3}{c}{Heteroscedastic model}\tabularnewline \cmidrule{2-4} \cmidrule{6-8} $\omega$ & Normal & $t_{4}$ & Mixture & & Normal & $t_{4}$ & Mixture\tabularnewline \midrule 0.05 & 19.04 & 21.47 & 17.10 & & 16.90 & 17.60 & 17.95\tabularnewline 0.1 & 14.25 & 16.89 & 13.91 & & 14.38 & 14.60 & 15.21\tabularnewline 0.25 & 11.67 & 15.25 & 13.59 & & 12.30 & 12.49 & 12.36\tabularnewline 0.5 & 10.54 & 14.09 & 12.18 & & 10.92 & 11.13 & 11.01\tabularnewline 0.75 & 8.24 & 15.33 & 10.47 & & 12.48 & 12.48 & 12.38\tabularnewline 0.9 & 10.08 & 14.39 & 12.46 & & 14.67 & 15.25 & 14.52\tabularnewline 0.95 & 12.16 & 19.90 & 15.17 & & 17.34 & 17.75 & 16.61\tabularnewline \bottomrule \end{tabular} \par\end{centering} \caption{The averaged computation times (in seconds) for fitting homoscedastic and heteroscedastic models based on 300 independent runs. The expectile levels are $\omega\in\{0.05,0.1,0.25,0.5,0.75,0.9,0.95\}$.\label{tab:homo_heter_time}} \end{table} We next study how sample size affects predictive performance and computational time. We fit expectile models with $\omega\in\{$0.1, 0.5, $0.9\}$ using various sizes of training sets $n\in\{$250, 500, 750, $1000\}$ and evaluate the prediction accuracy of the estimate using an independent test set of size $n^{\prime}=2000$. We then report the averaged MADs and the corresponding averaged timings in Table \ref{tab:sample_size_table}. Since the results are very close for different model settings, only the result from the heteroscedastic model with mixed-normal error is presented. 
We find that the sample size strongly affects predictive performance and timing: larger samples give models with higher predictive accuracy at the expense of computational cost, with the timings at least quadrupling as the sample size doubles. \begin{table} \ra{1.1} \begin{centering} \begin{tabular}{ccccccccccc} \toprule & & \multicolumn{4}{c}{Error} & & \multicolumn{4}{c}{Timing}\tabularnewline \cmidrule{3-6} \cmidrule{8-11} $n$ & & 250 & 500 & 750 & 1000 & & 250 & 500 & 750 & 1000\tabularnewline \midrule $\omega=0.1$ & & 0.4824 & 0.4084 & 0.4047 & 0.3887 & & 8.739 & 56.188 & 168.636 & 382.897\tabularnewline $\omega=0.5$ & & 0.3329 & 0.2977 & 0.2732 & 0.2544 & & 6.028 & 43.802 & 159.398 & 329.646\tabularnewline $\omega=0.9$ & & 0.6341 & 0.5861 & 0.5563 & 0.5059 & & 9.167 & 56.533 & 173.359 & 386.345\tabularnewline \bottomrule \end{tabular} \par\end{centering} \caption{The averaged MADs and the corresponding averaged computation times (in seconds) are reported. The size of the training set varies from 250 to 1000. The size of the test data set is 2000. All models are fitted on three expectile levels: (a) $\omega=0.1$, (b) $\omega=0.5$ and (c) $\omega=0.9$. \label{tab:sample_size_table}} \end{table} \section{Real data application} In this section we illustrate KERE by applying it to the Personal Computer Price Data studied in \citet{stengos2006intertemporal}. The data, collected from PC Magazine from January 1993 to November 1995, contain 6,259 observations, each of which consists of the advertised price and features of a personal computer sold in the United States. There are 9 main price determinants of PCs, summarized in Table \ref{tab:housing_data}. The price and the continuous variables except the time trend are on the logarithmic scale. We consider a hedonic analysis, in which the price of a product is considered to be a function of the implicit prices of its various components; see \citet{triplett1989price}.
The intertemporal effect of the implicit PC-component prices is captured by incorporating the time trend as one of the explanatory variables. The presence of non-linearity and of interactions of the components with the time trend in the data, shown by \citet{stengos2006intertemporal}, suggests that linear expectile regression may lead to a misspecified model. Since there is no general theory prescribing a particular functional form for PC prices, we use KERE to capture the nonlinear effects and higher-order interactions of the characteristics on price and thus avoid severe model misspecification. We randomly sampled $1/10$ of the observations for training and tuning, using two-dimensional five-fold cross-validation to select an optimal $(\sigma^{2},\lambda)$ pair, and used the remaining $9/10$ of the observations as the test set for calculating the prediction error defined by \[ \mathrm{prediction\ error}=\frac{1}{n^{\prime}}\sum_{i=1}^{n^{\prime}}\phi_{\omega}(y_{i}-\hat{f}_{\omega}(\mathbf{x}_{i})). \] For comparison, we also computed the prediction errors of the linear expectile regression models under the same setting. All prediction errors are computed for seven expectile levels $\omega\in\{0.05,$ 0.1, 0.25, 0.5, 0.75, 0.9, $0.95\}$. We repeated this process 100 times and report the averaged prediction errors and the corresponding standard errors in Table \ref{tab:real}. We also show box-plots of the empirical distributions of the prediction errors in Figure \ref{fig:real}. We see that at all expectile levels KERE outperforms the linear expectile model in terms of both the prediction error and the corresponding standard error. This shows that KERE offers much more flexible and accurate predictions than the linear model by guarding against model misspecification bias. 
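The prediction-error criterion above is straightforward to compute once test-set predictions are available. The following sketch evaluates it over the seven expectile levels; here `f_hat` is just a hypothetical vector of predictions on synthetic data, not output of KERE.

```python
import numpy as np

def phi(r, w):
    """Expectile loss phi_w(r) = |w - I(r < 0)| * r^2."""
    return np.abs(w - (r < 0).astype(float)) * r ** 2

def prediction_error(y_test, f_hat, w):
    """Average expectile loss over the test set, as in the criterion above."""
    return float(np.mean(phi(y_test - f_hat, w)))

rng = np.random.default_rng(1)
y_test = rng.normal(loc=1.0, size=1000)
f_hat = np.full_like(y_test, 1.0)  # hypothetical test-set predictions
for w in (0.05, 0.1, 0.25, 0.5, 0.75, 0.9, 0.95):
    print(w, prediction_error(y_test, f_hat, w))
```

A quick sanity check: at $\omega=0.5$ the criterion reduces to half the test mean squared error, since $|\omega-I(r<0)|\equiv 1/2$.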
\begin{table} \ra{1.1} \begin{centering} \begin{tabular}{lll} \toprule {\small{}{ID } } & {\small{}{Variable } } & {\small{}{Explanation}}\tabularnewline \midrule {\small{}{1 } } & SPEED & clock speed in MHz\tabularnewline {\small{}{2 } } & HD & size of hard drive in MB\tabularnewline {\small{}{3 } } & RAM & size of RAM in MB\tabularnewline {\small{}{4 } } & SCREEN & size of screen in inches\tabularnewline {\small{}{5 } } & CD & whether a CD-ROM is present\tabularnewline {\small{}{6 } } & PREMIUM & whether the manufacturer was a ``premium'' firm (IBM, COMPAQ)\tabularnewline {\small{}{7 } } & MULTI & whether a multimedia kit (speakers, sound card) is included\tabularnewline {\small{}{8 } } & ADS & number of 486 price listings for each month\tabularnewline {\small{}{9 } } & TREND & time trend indicating the month, from Jan.\ 1993 to Nov.\ 1995\tabularnewline \bottomrule \end{tabular} \par\end{centering} \caption{Explanatory variables in the Personal Computer Price Data \citep{stengos2006intertemporal}. \label{tab:housing_data}} \end{table} \begin{table} \ra{1.1} \begin{centering} \begin{tabular}{rrrrrrrr} \toprule \multicolumn{8}{c}{Personal Computer Price Data }\tabularnewline \midrule $\omega$ & 0.05 & 0.1 & 0.25 & 0.5 & 0.75 & 0.9 & 0.95\tabularnewline \midrule Linear & 5.727 & 3.396 & 5.722 & 7.078 & 6.032 & 3.814 & 2.517\tabularnewline & (0.013) & (0.010) & (0.015) & (0.017) & (0.015) & (0.014) & (0.012)\tabularnewline KERE & 3.970 & 2.523 & 3.952 & 4.749 & 4.094 & 2.684 & 1.868\tabularnewline & (0.013) & (0.010) & (0.015) & (0.017) & (0.015) & (0.014) & (0.012)\tabularnewline \bottomrule \end{tabular} \par\end{centering} \caption{The averaged prediction errors and the corresponding standard errors for the Personal Computer Price Data based on 100 independent runs. The expectile levels are $\omega\in\{0.05,0.1,0.25,0.5,0.75,0.9,0.95\}$. 
The numbers in this table are of the order of $10^{-3}$.\label{tab:real}} \end{table} \begin{figure} \caption{Prediction error distributions for the Personal Computer Price Data using the linear expectile model and KERE. Box-plots show the prediction errors based on 100 independent runs for the expectile levels $\omega\in\{0.05,0.1,0.25,0.5,0.75,0.9,0.95\}$.\label{fig:real}} \end{figure} \section*{Appendix: Technical Proofs} \subsection{Some technical lemmas for Theorem~\ref{thm:asymptotic}} We first present some technical lemmas, together with their proofs, which will be used to prove Theorem~\ref{thm:asymptotic}. \begin{lem}\label{lemma:setup3} Let $\phi_{\omega}^*$ be the convex conjugate of $\phi_{\omega}$, \[ \phi_{\omega}^*(t)=\begin{cases} \frac{1}{4(1-\omega)}t^2 &\text{if $t \leq 0$,}\\ \frac{1}{4\omega}t^2&\text{if $t > 0$.} \end{cases} \] The solution to \eqref{eq:setup2} can alternatively be obtained by solving the optimization problem \begin{equation} \min_{\{\alpha_{i}\}_{i=1}^{n}} g(\alpha_1,\alpha_2,\ldots, \alpha_n), \quad \text{subject to} \quad \sum_{i=1}^n\alpha_i=0,\label{eq:setup3} \end{equation} where $g$ is defined by \begin{equation} g(\alpha_1,\alpha_2,\ldots, \alpha_n)=-\sum_{i=1}^n y_i \alpha_i +\frac{1}{2}\sum_{i,j=1}^n\alpha_{i}\alpha_{j}K(\mathbf{x}_{i},\mathbf{x}_{j}) + 2 \lambda \sum_{i=1}^n\phi_{\omega}^*(\alpha_i).\label{eq:gdef} \end{equation} \end{lem} \begin{proof} Let $\boldsymbol{\alpha}=(\alpha_1,\alpha_2,\ldots,\alpha_n)^{\intercal}$. Since both objective functions in \eqref{eq:setup2} and \eqref{eq:setup3} are convex, we only need to show that they share a common stationary point. Define \[ G_{\omega}(\boldsymbol{\alpha}) = \phi_{\omega}(\alpha_1)+\phi_{\omega}(\alpha_2)+\cdots+\phi_{\omega}(\alpha_n), \] \[ \nabla G_{\omega}(\boldsymbol{\alpha}) = (\phi^{\prime}_{\omega}(\alpha_1),\phi^{\prime}_{\omega}(\alpha_2),\ldots,\phi^{\prime}_{\omega}(\alpha_n))^{\intercal}. 
\] Setting the gradient of \eqref{eq:setup2} with respect to $\boldsymbol{\alpha}$ to zero, we find that a stationary point of \eqref{eq:setup2} satisfies \[ -\mathbf{K}\left[\begin{array}{c} \phi_{\omega}^{\prime}\big(y_{1}-\alpha_{0}-\sum_{j=1}^{n}K(\mathbf{x}_{1},\mathbf{x}_{j})\alpha_{j}\big)\\ \phi_{\omega}^{\prime}\big(y_{2}-\alpha_{0}-\sum_{j=1}^{n}K(\mathbf{x}_{2},\mathbf{x}_{j})\alpha_{j}\big)\\ \vdots\\ \phi_{\omega}^{\prime}\big(y_{n}-\alpha_{0}-\sum_{j=1}^{n}K(\mathbf{x}_{n},\mathbf{x}_{j})\alpha_{j}\big) \end{array}\right]+2\lambda\mathbf{K}\boldsymbol{\alpha}=\mathbf{0}, \] which can be reduced to \begin{equation}\label{eq:setup2_1} -\phi_{\omega}^{\prime}\big(y_i-\alpha_0-\sum_{j=1}^n K(\mathbf{x}_i,\mathbf{x}_j)\alpha_j\big)+2\lambda\alpha_i=0,\quad\text{for $1\leq i \leq n$}, \end{equation} and setting the derivative of \eqref{eq:setup2} with respect to $\alpha_0$ to zero, we obtain \begin{equation}\label{eq:setup2_2} \sum_{i=1}^n \phi_{\omega}'\big(y_i-\alpha_0-\sum_{j=1}^n K(\mathbf{x}_i,\mathbf{x}_j)\alpha_j\big) =0. \end{equation} Combining \eqref{eq:setup2_1} and \eqref{eq:setup2_2}, we see that \eqref{eq:setup2_2} simplifies to \begin{equation}\label{eq:setup2_3} \sum_{i=1}^n \alpha_i =0. \end{equation} In comparison, the Lagrange function of \eqref{eq:setup3} is \begin{equation} g(\alpha_1,\alpha_2,\ldots, \alpha_n)+\nu\sum_{i=1}^n\alpha_i. \label{eq:lagrangeform} \end{equation} The first-order conditions of \eqref{eq:lagrangeform} are \begin{equation}\label{eq:setup3_1} -y_i+\nu+\sum_{j=1}^n K(\mathbf{x}_i,\mathbf{x}_j)\alpha_j +2 \lambda \phi_{\omega}^{*\,'}(\alpha_i)=0,\quad \text{for $1\leq i \leq n$}, \end{equation} and \begin{equation}\label{eq:setup2_4} \sum_{i=1}^n \alpha_i =0. 
\end{equation} Note that $2 \lambda \phi_{\omega}^{*\,'}(\alpha_i)=\phi_{\omega}^{*\,'}(2 \lambda \alpha_i)$ and that $\phi_{\omega}^{*\,'}$ is the inverse function of $\phi_{\omega}'$. Letting $\nu=\alpha_0$, we see that \eqref{eq:setup2_1} and \eqref{eq:setup3_1} are equivalent. Therefore, \eqref{eq:setup2} and \eqref{eq:setup3} share a common stationary point and hence a common minimizer. \end{proof} \begin{lem}\label{lemma:norm_control} \[ \sum_{j=1}^{n}\alpha_{j}K(\mathbf{x}_{i},\mathbf{x}_{j})\leq \sqrt{K(\mathbf{x}_i,\mathbf{x}_i)}\cdot \sqrt{\sum_{i=1}^{n}\sum_{j=1}^{n}\alpha_{i}\alpha_{j}K(\mathbf{x}_{i},\mathbf{x}_{j})}. \] \end{lem} \begin{proof} Let $\mathbf{C}=\mathbf{K}^{1/2}$; then, by the Cauchy--Schwarz inequality, \begin{align*}&\sum_{j=1}^{n}\alpha_{j}K(\mathbf{x}_{i},\mathbf{x}_{j}) =({\alpha}_1, {\alpha}_2, \ldots, {\alpha}_{n})\mathbf{C} (\mathbf{C}_{i,1},\mathbf{C}_{i,2},\ldots, \mathbf{C}_{i,n})^T \\\leq &\|({\alpha}_1, {\alpha}_2, \ldots, {\alpha}_{n})\mathbf{C}\|\cdot \|(\mathbf{C}_{i,1},\mathbf{C}_{i,2},\ldots, \mathbf{C}_{i,n})\| = \sqrt{\sum_{i=1}^{n}\sum_{j=1}^{n}\alpha_{i}\alpha_{j}K(\mathbf{x}_{i},\mathbf{x}_{j})}\cdot \sqrt{K(\mathbf{x}_i,\mathbf{x}_i)}. \end{align*} \end{proof} \begin{lem} \label{lemma:bound_objective} For the function $g$ defined in \eqref{eq:gdef}, we have \begin{align*} &\frac{1}{2}\sum_{i,j=1}^{n}(\alpha_{i}-\hat{\alpha}_i)(\alpha_{j}-\hat{\alpha}_j)K(\mathbf{x}_{i},\mathbf{x}_{j}) +\frac{\lambda}{2\max(1-\omega,\omega)}\sum_{i=1}^{n}(\alpha_{i}-\hat{\alpha}_i)^2 \\\leq & g(\alpha_1,\alpha_2,\ldots,\alpha_{n}) - g(\hat{\alpha}_1,\hat{\alpha}_2,\ldots,\hat{\alpha}_{n}) \\\leq & \frac{1}{2}\sum_{i,j=1}^{n}(\alpha_{i}-\hat{\alpha}_i)(\alpha_{j}-\hat{\alpha}_j)K(\mathbf{x}_{i},\mathbf{x}_{j}) +\frac{\lambda}{2\min(1-\omega,\omega)}\sum_{i=1}^{n}(\alpha_{i}-\hat{\alpha}_i)^2. 
\end{align*} \end{lem} \begin{proof} The Hessian of $g$ is bounded above by $\mathbf{K}+\frac{\lambda}{\min(1-\omega,\omega)}\mathbf{I}$ and below by $\mathbf{K}+\frac{\lambda}{\max(1-\omega,\omega)}\mathbf{I}$ in the positive semidefinite order, where $\mathbf{K}\in\mathbb{R}^{n\times n}$. Let $\boldsymbol{\alpha}=(\alpha_1,\alpha_2,\ldots,\alpha_{n})^{\intercal}$; then \begin{eqnarray} g(\boldsymbol{\alpha})-g(\widehat{\boldsymbol{\alpha}})\leq g^{\prime}(\widehat{\boldsymbol{\alpha}})^{\intercal}(\boldsymbol{\alpha}-\widehat{\boldsymbol{\alpha}})+\frac{1}{2}(\boldsymbol{\alpha}-\widehat{\boldsymbol{\alpha}})^{\intercal}\Big(\mathbf{K}+\frac{\lambda}{\min(1-\omega,\omega)}\mathbf{I}\Big)(\boldsymbol{\alpha}-\widehat{\boldsymbol{\alpha}}),\\ g(\boldsymbol{\alpha})-g(\widehat{\boldsymbol{\alpha}})\geq g^{\prime}(\widehat{\boldsymbol{\alpha}})^{\intercal}(\boldsymbol{\alpha}-\widehat{\boldsymbol{\alpha}})+\frac{1}{2}(\boldsymbol{\alpha}-\widehat{\boldsymbol{\alpha}})^{\intercal}\Big(\mathbf{K}+\frac{\lambda}{\max(1-\omega,\omega)}\mathbf{I}\Big)(\boldsymbol{\alpha}-\widehat{\boldsymbol{\alpha}}). \end{eqnarray} Since $\widehat{\boldsymbol{\alpha}}$ minimizes $g$, we have $g^{\prime}(\widehat{\boldsymbol{\alpha}})=\mathbf{0}$, and the two inequalities above yield the claimed bounds. \end{proof} The next lemma establishes the basis for the so-called leave-one-out analysis \citep{jaakkola1999probabilistic,joachims2000estimating,forster2002relative, ZhangTong2003}. The basic idea is that the expected observed risk is equivalent to the expected leave-one-out error. Let $D_{n+1}=\{(\mathbf{x}_{i},y_{i})\}_{i=1}^{n+1}$ be a random sample of size $n+1$, and let $D^{[i]}_{n+1}$ be the subset of $D_{n+1}$ with the $i$-th observation removed, i.e. 
\[ D^{[i]}_{n+1}=\{(\mathbf{x}_1,y_1),\ldots,(\mathbf{x}_{i-1},y_{i-1}),(\mathbf{x}_{i+1},y_{i+1}),\ldots,(\mathbf{x}_{n+1},y_{n+1})\}. \] Let $(\hat{f}^{[i]},\hat{\alpha}_0^{[i]})$ be the estimator trained on $D^{[i]}_{n+1}$. The leave-one-out error is defined as the averaged prediction error on each observation $(\mathbf{x}_i,y_i)$ using the estimator $(\hat{f}^{[i]},\hat{\alpha}_0^{[i]})$ computed from $D^{[i]}_{n+1}$, from which $(\mathbf{x}_i,y_i)$ is excluded: \[ \text{Leave-one-out error:}\quad\frac{1}{n+1}\sum^{n+1}_{i=1}\phi_{\omega}(y_i-\hat{\alpha}_{0}^{[i]}-\hat{f}^{[i]}(\mathbf{x}_i)). \] \begin{lem}\label{lemma:leaveone} Let $(\hat{f}_{(n)},\hat{\alpha}_{0\,(n)})$ be the KERE estimator trained from $D_{n}$. The expected observed risk $E_{D_{n}}E_{(\mathbf{x},y)}\phi_{\omega}(y-\hat{\alpha}_{0\,(n)}-\hat{f}_{(n)}(\mathbf{x}))$ is equivalent to the expected leave-one-out error on $D_{n+1}$: \begin{equation} E_{D_{n}}\big\{E_{(\mathbf{x},y)}\phi_{\omega}(y-\hat{\alpha}_{0\,(n)}-\hat{f}_{(n)}(\mathbf{x}))\big\} = E_{D_{n+1}}\Big(\frac{1}{n+1}\sum^{n+1}_{i=1}\phi_{\omega}(y_i-\hat{\alpha}_{0}^{[i]}-\hat{f}^{[i]}(\mathbf{x}_i))\Big),\label{eq:leave_one_out} \end{equation} where $\hat{\alpha}_{0}^{[i]}$ and $\hat{f}^{[i]}$ are KERE trained from $D^{[i]}_{n+1}$. \end{lem} \begin{proof} \begin{eqnarray*} E_{D_{n+1}}\Big(\frac{1}{n+1}\sum_{i=1}^{n+1}\phi_{\omega}(y_{i}-\hat{\alpha}_{0}^{[i]}-\hat{f}^{[i]}(\mathbf{x}_{i}))\Big) & = & \frac{1}{n+1}\sum_{i=1}^{n+1}E_{D_{n+1}}\phi_{\omega}(y_{i}-\hat{\alpha}_{0}^{[i]}-\hat{f}^{[i]}(\mathbf{x}_{i}))\\ & = & \frac{1}{n+1}\sum_{i=1}^{n+1}E_{D_{n+1}^{[i]}}\big\{E_{(\mathbf{x}_{i},y_{i})}\phi_{\omega}(y_{i}-\hat{\alpha}_{0}^{[i]}-\hat{f}^{[i]}(\mathbf{x}_{i}))\big\}\\ & = & \frac{1}{n+1}\sum_{i=1}^{n+1}E_{D_{n}}\big\{E_{(\mathbf{x},y)}\phi_{\omega}(y-\hat{\alpha}_{0\,(n)}-\hat{f}_{(n)}(\mathbf{x}))\big\}\\ & = & E_{D_{n}}\big\{E_{(\mathbf{x},y)}\phi_{\omega}(y-\hat{\alpha}_{0\,(n)}-\hat{f}_{(n)}(\mathbf{x}))\big\}. 
\end{eqnarray*} \end{proof} In the following lemma, we give an upper bound on $|\hat{\alpha}_{i}|$ for $1\leq i\leq n$. \begin{lem}\label{lemma:alphaibound} Assume $M=\sup_{\mathbf{x}}K(\mathbf{x},\mathbf{x})^{1/2}<\infty$. Denote as $(\hat{f}_{(n)},\hat{\alpha}_{0\,(n)})$ the KERE estimator in \eqref{eq:setup1} trained on $n$ samples $D_{n}=\{(\mathbf{x}_{i},y_{i})\}_{i=1}^{n}$. The estimates $\hat{\alpha}_{i\,(n)}$ for $1\leq i\leq n$ are defined by $\hat{f}_{(n)}(\cdot)=\sum_{i=1}^{n}\hat{\alpha}_{i\,(n)}K(\mathbf{x}_{i},\cdot)$. Denote $\|Y_n\|_{2}=\sqrt{\sum_{i=1}^{n}y_{i}^{2}}$, $\frac{\|Y_{n}\|_1}{n}=\frac{1}{n}\sum_{i=1}^{n}|y_{i}|$, $q_{1}=\frac{\max(1-\omega,\omega)}{\min(1-\omega,\omega)}$, $q_{2}=\max(1-\omega,\omega)$. We claim that \begin{align}\label{eq:est_alphai} |\hat{\alpha}_{i\,(n)}|& \leq \frac{ q_{2} }{\lambda}\Big(q_{1} \frac{\|Y_{n}\|_1}{n}+ M(q_{1}+1)\sqrt{\frac{ q_{2}}{\lambda}}\|Y_n\|_{2}+|y_{i}|\Big),\quad \text{for\ } 1\leq i\leq n. \end{align} \end{lem} \begin{proof} Let $g$ be defined as in \eqref{eq:gdef}. Since \[ g(\hat{\alpha}_{1\,(n)}, \hat{\alpha}_{2\,(n)}, \ldots, \hat{\alpha}_{n\,(n)}) \leq g(0, 0, \ldots, 0) = 0, \] we have \begin{eqnarray*} \frac{1}{2}\sum_{i,j=1}^{n}\hat{\alpha}_{i\,(n)}\hat{\alpha}_{j\,(n)}K(\mathbf{x}_{i},\mathbf{x}_{j}) & \leq & \sum_{i=1}^{n}y_{i}\hat{\alpha}_{i\,(n)}-2\lambda\sum_{i=1}^{n}\phi_{\omega}^{*}(\hat{\alpha}_{i\,(n)})\\ & \leq & -\frac{\lambda}{2 q_{2} }\sum_{i=1}^{n}\Big(\hat{\alpha}_{i\,(n)}-\frac{ q_{2} }{\lambda}y_i\Big)^2 + \frac{ q_{2} }{2\lambda}\sum_{i=1}^{n}y_{i}^{2}\\ & \leq & \frac{ q_{2} }{2\lambda}\sum_{i=1}^{n}y_{i}^{2}. 
\end{eqnarray*} Applying Lemma~\ref{lemma:norm_control}, we have \begin{equation}\label{eq:est_Kalpha} |\hat{f}_{(n)}(\mathbf{x}_i)|=\Big|\sum_{j=1}^{n}\hat{\alpha}_{j\,(n)}K(\mathbf{x}_{i},\mathbf{x}_{j})\Big| \leq M\sqrt{\frac{ q_{2} \,\sum_{i=1}^{n}y_{i}^{2}}{\lambda}} = M \sqrt{\frac{ q_{2}}{\lambda}}\|Y_n\|_{2}.\end{equation} By the definition in \eqref{eq:setup2}, $\hat{\alpha}_{0\,(n)}$ is given by $ \arg\!\min_{\alpha_0}\sum_{i=1}^{n}\phi_{\omega}\big(y_{i}-\alpha_{0}-\hat{f}_{(n)}(\mathbf{x}_i)\big).$ By the first-order condition, \[ \sum_{i=1}^{n}2\big|\omega-I(y_i-\hat{\alpha}_{0\,(n)}-\hat{f}_{(n)}(\mathbf{x}_i)<0)\big|(y_i-\hat{\alpha}_{0\,(n)}-\hat{f}_{(n)}(\mathbf{x}_i))=0. \] Letting $c_i = \big|\omega-I(y_i-\hat{\alpha}_{0\,(n)}-\hat{f}_{(n)}(\mathbf{x}_i)<0)\big|$, we have $\min(1-\omega,\omega)\leq c_i\leq\max(1-\omega,\omega)$, hence \begin{eqnarray*} \Big|\Big(\sum_{i=1}^{n}c_{i}\Big)\hat{\alpha}_{0\,(n)}\Big| & = & \Big|\sum_{i=1}^{n}c_{i}(y_{i}-\hat{f}_{(n)}(\mathbf{x}_{i}))\Big|\leq\sum_{i=1}^{n}c_{i}(\big|y_{i}\big|+\big|\hat{f}_{(n)}(\mathbf{x}_{i})\big|)\\ & \leq & q_{2} \Big(\sum_{i=1}^n |y_i|+nM \sqrt{\frac{ q_{2}}{\lambda}}\|Y_n\|_{2}\Big), \end{eqnarray*} and we have \begin{equation}\label{eq:est_alpha0} |\hat{\alpha}_{0\,(n)}|\leq q_{1} \Big(\frac{\|Y_{n}\|_1}{n}+ M \sqrt{\frac{ q_{2}}{\lambda}}\|Y_n\|_{2}\Big). \end{equation} Combining \eqref{eq:setup2_1} and \eqref{eq:est_alpha0}, we conclude \eqref{eq:est_alphai}. \end{proof} \subsection{Proof of Theorem~\ref{thm:asymptotic}} \begin{proof} Consider $n+1$ training samples $D_{n+1}=\{(\mathbf{x}_1,y_1),\ldots,(\mathbf{x}_{n+1},y_{n+1})\}$. Denote as $(\hat{f}^{[i]},\hat{\alpha}_0^{[i]})$ the KERE estimator trained from $D^{[i]}_{n+1}$, the subset of $D_{n+1}$ with the $i$-th observation removed, i.e., \[ D^{[i]}_{n+1}=\{(\mathbf{x}_1,y_1),\ldots,(\mathbf{x}_{i-1},y_{i-1}),(\mathbf{x}_{i+1},y_{i+1}),\ldots,(\mathbf{x}_{n+1},y_{n+1})\}. 
\] Denote as $(\hat{f}_{(n+1)},\hat{\alpha}_{0\,(n+1)})$ the KERE estimator trained from the $n+1$ samples $D_{n+1}$. The estimates $\hat{\alpha}_{i}$ for $1\leq i\leq n+1$ are defined by $\hat{f}_{(n+1)}(\cdot)=\sum_{i=1}^{n+1}\hat{\alpha}_{i}K(\mathbf{x}_{i},\cdot)$. In what follows, we denote $\|Y_{n+1}\|_{2}=\sqrt{\sum_{i=1}^{n+1}y_{i}^{2}}$, $\frac{\|Y_{n+1}\|_1}{n+1}=\frac{1}{n+1}\sum_{i=1}^{n+1}|y_{i}|$, $q_{1}=\frac{\max(1-\omega,\omega)}{\min(1-\omega,\omega)}$, $q_{2}=\max(1-\omega,\omega)$, $q_{3}=\min(1-\omega,\omega)$. \paragraph{Part I} We first show that the leave-one-out estimate is sufficiently close to the estimate fitted using all the training data. Without loss of generality, we consider only the case where the $(n+1)$-th data point is removed; the same results apply to the other leave-one-out cases. We show that $ |\hat{f}^{[n+1]}(\mathbf{x}_{i})+\hat{\alpha}_0^{[n+1]}-\hat{f}_{(n+1)}(\mathbf{x}_{i})-\hat{\alpha}_{0\,(n+1)}|\leq C^{[n+1]}_{2}, $ where the expression of $ C^{[n+1]}_{2} $ will be derived below. We first study the upper bound for $|\hat{f}^{[n+1]}(\mathbf{x}_i)-\hat{f}_{(n+1)}(\mathbf{x}_i)|$. By the definitions of $g$ in \eqref{eq:gdef} and $(\hat{\alpha}_{1}^{[n+1]},\hat{\alpha}_{2}^{[n+1]},\ldots,\hat{\alpha}_{n}^{[n+1]})$, we have \begin{eqnarray*} & & g\big(\hat{\alpha}_{1}^{[n+1]},\hat{\alpha}_{2}^{[n+1]},\ldots,\hat{\alpha}_{n}^{[n+1]},0\big)\\ & = & g\big(\hat{\alpha}_{1}^{[n+1]},\hat{\alpha}_{2}^{[n+1]},\ldots,\hat{\alpha}_{n}^{[n+1]}\big)\\ & \leq & g\big(\hat{\alpha}_{1}+\frac{1}{n}\hat{\alpha}_{n+1},\hat{\alpha}_{2}+\frac{1}{n}\hat{\alpha}_{n+1},\ldots,\hat{\alpha}_{n}+\frac{1}{n}\hat{\alpha}_{n+1}\big)\\ & = & g\big(\hat{\alpha}_{1}+\frac{1}{n}\hat{\alpha}_{n+1},\hat{\alpha}_{2}+\frac{1}{n}\hat{\alpha}_{n+1},\ldots,\hat{\alpha}_{n}+\frac{1}{n}\hat{\alpha}_{n+1},0\big). 
\end{eqnarray*} That is, \begin{align*} &g\big(\hat{\alpha}_{1}^{[n+1]},\hat{\alpha}_{2}^{[n+1]},\ldots,\hat{\alpha}_{n}^{[n+1]},0\big) -g\big(\hat{\alpha}_{1},\hat{\alpha}_{2},\ldots,\hat{\alpha}_{n+1}\big)\\ &\leq g\big(\hat{\alpha}_{1}+\frac{1}{n}\hat{\alpha}_{n+1},\hat{\alpha}_{2}+\frac{1}{n}\hat{\alpha}_{n+1},\ldots,\hat{\alpha}_{n}+\frac{1}{n}\hat{\alpha}_{n+1},0\big) -g\big(\hat{\alpha}_{1},\hat{\alpha}_{2},\ldots,\hat{\alpha}_{n+1}\big). \end{align*} For simplicity, set $\hat{\alpha}_{n+1}^{[n+1]}=0$. Applying Lemma \ref{lemma:bound_objective} to both the LHS and the RHS of the above inequality, we have \begin{align*} & \sum_{i,j=1}^{n+1}(\hat{\alpha}_{i}^{[n+1]}-\hat{\alpha}_{i})(\hat{\alpha}_{j}^{[n+1]}-\hat{\alpha}_{j})K(\mathbf{x}_{i},\mathbf{x}_{j}) +\frac{\lambda}{2 q_{2} }\sum_{i=1}^{n+1}(\hat{\alpha}_i^{[n+1]}-\hat{\alpha}_i)^2\\ \leq & \hat{\alpha}_{n+1}^{2}\Big[\Big(\frac{1}{n},\ldots,\frac{1}{n},-1\Big)\mathbf{K}\Big(\frac{1}{n},\ldots,\frac{1}{n},-1\Big)^{T}+\frac{\lambda(n+1)}{2n q_{3} }\Big], \end{align*} where $\mathbf{K}\in\mathbb{R}^{(n+1)\times(n+1)}$ is defined by $\mathbf{K}_{i,j}=K(\mathbf{x}_i,\mathbf{x}_j)$. Since $|K(\mathbf{x}_i,\mathbf{x}_j)|\leq M^{2}$ for any $1\leq i,j\leq n+1$, we have \[ \begin{array}{ll} &\Big(\frac{1}{n},\ldots, \frac{1}{n}, -1 \Big) \mathbf{K} \Big(\frac{1}{n},\ldots, \frac{1}{n}, -1 \Big)^T\\=&\frac{1}{n^2}\sum_{i,j=1}^n \mathbf{K}_{i,j}-\frac{1}{n}\sum_{i=1}^n\mathbf{K}_{i,n+1} -\frac{1}{n}\sum_{j=1}^n\mathbf{K}_{n+1,j}+\mathbf{K}_{n+1,n+1} \\\leq& M^{2}+M^{2}+M^{2}+M^{2} = 4M^{2}. 
\end{array} \] Combining this with the bound for $|\hat{\alpha}_{n+1}|$ from Lemma \ref{lemma:alphaibound} (note that here $\hat{\alpha}_{n+1}$ is trained on $n+1$ samples), we have \begin{equation}\label{eq:firstineq} \sum_{i,j=1}^{n+1}(\hat{\alpha}_i^{[n+1]}-\hat{\alpha}_i)(\hat{\alpha}_j^{[n+1]}-\hat{\alpha}_j)K(\mathbf{x}_{i},\mathbf{x}_{j}) \leq C^{[n+1]}_{1}, \end{equation} where \begin{align} C^{[n+1]}_{1} &= \Bigg(4M^{2}+ \frac{\lambda(n+1)}{2n q_{3} }\Bigg)\Bigg(\frac{ q_{2} }{\lambda} C^{[n+1]}_{0} \Bigg)^2, \label{eq:c0} \end{align} and \begin{align}\label{eq:truec0} C^{[n+1]}_{0} &= q_{1} \frac{\|Y_{n+1}\|_1}{n+1}+ M(q_{1}+1)\sqrt{\frac{ q_{2}}{\lambda}}\|Y_{n+1}\|_{2}+|y_{n+1}|. \end{align} Combining \eqref{eq:firstineq} with Lemma~\ref{lemma:norm_control}, we have that for $1\leq i\leq n+1$, \begin{equation}\label{eq:est_diff_alphaK} |\hat{f}^{[n+1]}(\mathbf{x}_i)-\hat{f}_{(n+1)}(\mathbf{x}_i)|= \Big|\sum_{j=1}^{n+1}(\hat{\alpha}_j^{[n+1]}-\hat{\alpha}_j)K(\mathbf{x}_{i},\mathbf{x}_{j})\Big|\leq \sqrt{C^{[n+1]}_{1}}M. \end{equation} Next, we bound $|\hat{\alpha}_0^{[n+1]}-\hat{\alpha}_{0\,(n+1)}|$. Since $\hat{\alpha}_{0\,(n+1)}$ and $\hat{\alpha}_0^{[n+1]}$ are the minimizers of \[\text{$\sum_{i=1}^{n+1}\phi_{\omega}\left(y_{i}-\alpha_{0}-\hat{f}_{(n+1)}(\mathbf{x}_i)\right)$ and $\sum_{i=1}^{n}\phi_{\omega}\left(y_{i}-\alpha_{0}-\hat{f}^{[n+1]}(\mathbf{x}_i)\right)$},\] we have \begin{equation}\label{eq:a0firstorder} \frac{{\,\mathrm{d}} }{{\,\mathrm{d}} \alpha_0}\sum_{i=1}^{n+1}\phi_{\omega}\left(y_{i}-\alpha_{0}-\hat{f}_{(n+1)}(\mathbf{x}_i)\right)\Big|_{\alpha_0=\hat{\alpha}_{0\,(n+1)}}=0, \end{equation} and \begin{equation}\label{eq:a0firstorder2} \frac{{\,\mathrm{d}} }{{\,\mathrm{d}} \alpha_0}\sum_{i=1}^{n}\phi_{\omega}\left(y_{i}-\alpha_{0}-\hat{f}^{[n+1]}(\mathbf{x}_i)\right)\Big|_{\alpha_0=\hat{\alpha}_0^{[n+1]}} =0. 
\end{equation} By the Lipschitz continuity of $\phi'_{\omega}$ we have \[ \begin{array}{ll} & \Bigg|\sum_{i=1}^{n+1}\phi'_{\omega}\left(y_{i}-\hat{\alpha}_{0\,(n+1)}-\hat{f}^{[n+1]}(\mathbf{x}_{i})\right)-\sum_{i=1}^{n+1}\phi'_{\omega}\left(y_{i}-\hat{\alpha}_{0\,(n+1)}-\hat{f}_{(n+1)}(\mathbf{x}_{i})\right)\Bigg|\\ \leq & 2(n+1) q_{2} \max_{1\leq i\leq n+1}|\hat{f}^{[n+1]}(\mathbf{x}_{i})-\hat{f}_{(n+1)}(\mathbf{x}_{i})|, \end{array} \] and by applying \eqref{eq:est_diff_alphaK} and \eqref{eq:a0firstorder} we have the upper bound \[ \Bigg|\sum_{i=1}^{n+1}\phi'_{\omega}\left(y_{i}-\hat{\alpha}_{0\,(n+1)}-\hat{f}^{[n+1]}(\mathbf{x}_{i})\right)\Bigg| \leq {2(n+1) q_{2} }\, \sqrt{C^{[n+1]}_{1}}M. \] Similarly, by \eqref{eq:est_Kalpha}, \eqref{eq:est_alpha0}, and \eqref{eq:a0firstorder2} we have \begin{equation} \begin{array}{ll} & \Bigg|\sum_{i=1}^{n}\phi'_{\omega}\left(y_{i}-\hat{\alpha}_{0\,(n+1)}-\hat{f}^{[n+1]}(\mathbf{x}_{i})\right)\Bigg|\\ = & \Bigg|\sum_{i=1}^{n+1}\phi'_{\omega}\left(y_{i}-\hat{\alpha}_{0\,(n+1)}-\hat{f}^{[n+1]}(\mathbf{x}_{i})\right)-\sum_{i=1}^{n+1}\phi'_{\omega}\left(y_{i}-\hat{\alpha}_{0\,(n+1)}-\hat{f}_{(n+1)}(\mathbf{x}_{i})\right)\\ & -\phi'_{\omega}\left(y_{n+1}-\hat{\alpha}_{0\,(n+1)}-\hat{f}^{[n+1]}(\mathbf{x}_{n+1})\right)\Bigg|\\ \leq & \Bigg|\sum_{i=1}^{n+1}\phi'_{\omega}\left(y_{i}-\hat{\alpha}_{0\,(n+1)}-\hat{f}^{[n+1]}(\mathbf{x}_{i})\right)-\sum_{i=1}^{n+1}\phi'_{\omega}\left(y_{i}-\hat{\alpha}_{0\,(n+1)}-\hat{f}_{(n+1)}(\mathbf{x}_{i})\right)\Bigg|\\ & +\Bigg|\phi'_{\omega}\left(y_{n+1}-\hat{\alpha}_{0\,(n+1)}-\hat{f}^{[n+1]}(\mathbf{x}_{n+1})\right)\Bigg|\\ \leq & 2(n+1) q_{2} \sqrt{C^{[n+1]}_{1}}M+2 q_{2} \big(|y_{n+1}|+ |\hat{\alpha}_{0\,(n+1)}| + |\hat{f}^{[n+1]}(\mathbf{x}_{n+1})| \big)\\ \leq & 2(n+1) q_{2} \sqrt{C^{[n+1]}_{1}}M+2 q_{2} \big(|y_{n+1}|+q_{1} \frac{\|Y_{n+1}\|_1}{n+1}+ Mq_{1}\sqrt{\frac{ q_{2}}{\lambda}}\|Y_{n+1}\|_{2}+M\sqrt{\frac{ q_{2}}{\lambda}}\|Y_{n}\|_{2}\big)\\ \leq & 2(n+1) q_{2} \sqrt{C^{[n+1]}_{1}}M+2 q_{2} C^{[n+1]}_{0}, 
\end{array}\label{eq:interbound} \end{equation} where the second-to-last inequality follows from \eqref{eq:est_Kalpha} (applied with sample size $n$) and \eqref{eq:est_alpha0} (applied with sample size $n+1$). Using \eqref{eq:a0firstorder2} we have \begin{eqnarray*} & & 2n q_{3} \big|\hat{\alpha}_{0}^{[n+1]}-\hat{\alpha}_{0\,(n+1)}\big|\\ & \leq & \Big|\sum_{i=1}^{n}\phi_{\omega}^{\prime}\left(y_{i}-\hat{\alpha}_{0}^{[n+1]}-\hat{f}^{[n+1]}(\mathbf{x}_{i})\right)-\sum_{i=1}^{n}\phi_{\omega}^{\prime}\left(y_{i}-\hat{\alpha}_{0\,(n+1)}-\hat{f}^{[n+1]}(\mathbf{x}_{i})\right)\Big|\\ & = & \Big|\sum_{i=1}^{n}\phi_{\omega}^{\prime}\left(y_{i}-\hat{\alpha}_{0\,(n+1)}-\hat{f}^{[n+1]}(\mathbf{x}_{i})\right)\Big|. \end{eqnarray*} By \eqref{eq:interbound}, we have \begin{equation}\label{eq:est_diff_alpha0} |\hat{\alpha}_{0}^{[n+1]}-\hat{\alpha}_{0\,(n+1)}|\leq q_{1} \Big((1+\frac{1}{n})\sqrt{C^{[n+1]}_{1}}M+\frac{1}{n}C^{[n+1]}_{0}\Big). \end{equation} Finally, combining \eqref{eq:est_diff_alphaK} and \eqref{eq:est_diff_alpha0} we have \begin{equation}\label{eq:diff_function} |\hat{f}^{[n+1]}(\mathbf{x}_{i})+\hat{\alpha}_0^{[n+1]}-\hat{f}_{(n+1)}(\mathbf{x}_{i})-\hat{\alpha}_{0\,(n+1)}|\leq C^{[n+1]}_{2}, \end{equation} where \begin{equation}\label{eq:c222} C^{[n+1]}_{2} = q_{1} \Big((1+\frac{1}{n})\sqrt{C^{[n+1]}_{1}}M+\frac{1}{n}C^{[n+1]}_{0}\Big)+ \sqrt{C^{[n+1]}_{1}}M. \end{equation} \paragraph{Part II} We now use \eqref{eq:diff_function} to derive a bound for $\phi_{\omega}(y_{n+1}-\hat{\alpha}_{0}^{[n+1]}-\hat{f}^{[n+1]}(\mathbf{x}_{n+1}))$. Let $t = \hat{f}_{(n+1)}(\mathbf{x}_{i})+\hat{\alpha}_{0\,(n+1)}-\hat{f}^{[n+1]}(\mathbf{x}_{i})-\hat{\alpha}_0^{[n+1]}$ and $t^{\prime}=y_{i}-\hat{\alpha}_{0\,(n+1)}-\hat{f}_{(n+1)}(\mathbf{x}_{i})$. 
We claim that \begin{equation}\phi_\omega(t+t^{\prime})-\phi_\omega(t^{\prime})\leq q_{2} (|2tt^{\prime}|+|t^2|).\label{eq:phi_diff} \end{equation} When $t+t^{\prime}$ and $t^{\prime}$ are both positive or both negative, \eqref{eq:phi_diff} follows from $(t+t^{\prime})^2-t^{\prime 2}=2tt^{\prime}+t^2$. When $t+t^{\prime}$ and $t^{\prime}$ have different signs, we have $|t|=|t+t^{\prime}|+|t^{\prime}|$, and hence $|t+t^{\prime}|\leq|t|$ and $|t^{\prime}|\leq|t|$. Then \eqref{eq:phi_diff} is proved by $\phi_\omega(t+t^{\prime})-\phi_\omega(t^{\prime})\leq\max(\phi_\omega(t+t^{\prime}),\phi_\omega(t^{\prime}))\leq q_{2} \max((t+t^{\prime})^2,t^{\prime 2})\leq q_{2} t^2 \leq q_{2}(|2tt^{\prime}|+|t^2|)$. Hence by \eqref{eq:diff_function}, \eqref{eq:phi_diff} and the upper bound on $|y_{n+1}-\hat{f}_{(n+1)}(\mathbf{x}_{n+1})-\hat{\alpha}_{0\,(n+1)}|$, we have \begin{align}\label{eq:c22} &\phi_{\omega}(y_{n+1}-\hat{\alpha}_{0}^{[n+1]}-\hat{f}^{[n+1]}(\mathbf{x}_{n+1})) \leq \phi_{\omega}(y_{n+1}-\hat{\alpha}_{0\,(n+1)}-\hat{f}_{(n+1)}(\mathbf{x}_{n+1}))+C^{[n+1]}_{3}, \end{align} where \begin{align} C^{[n+1]}_{3}&={ q_{2} } (2C^{[n+1]}_{0}C^{[n+1]}_{2}+(C^{[n+1]}_{2})^{2}).\label{eq:c2} \end{align} Note that \eqref{eq:c22} and \eqref{eq:c2} hold analogously for each $i$, $1 \le i \le n+1$: \begin{align} &\phi_{\omega}(y_{i}-\hat{\alpha}_{0}^{[i]}-\hat{f}^{[i]}(\mathbf{x}_{i})) \leq \phi_{\omega}(y_{i}-\hat{\alpha}_{0\,(n+1)}-\hat{f}_{(n+1)}(\mathbf{x}_{i}))+C^{[i]}_{3}. \end{align} Hence by \eqref{eq:c0}, \eqref{eq:truec0}, \eqref{eq:c222} and \eqref{eq:c22} we have \begin{eqnarray}\label{eq:c2results} E_{D_{n+1}}\Big(\phi_{\omega}(y_{i}-\hat{\alpha}_{0}^{[i]}-\hat{f}^{[i]}(\mathbf{x}_{i}))\Big)\leq E_{D_{n+1}}\Big(\phi_{\omega}(y_{i}-\hat{\alpha}_{0\,(n+1)}-\hat{f}_{(n+1)}(\mathbf{x}_{i}))\Big)+E_{D_{n+1}}C_{3}^{[i]}. 
\end{eqnarray} and \begin{eqnarray} &&\frac{1}{n+1}E_{D_{n+1}}\Big(\sum_{i=1}^{n+1}\phi_{\omega}(y_{i}-\hat{\alpha}_{0}^{[i]}-\hat{f}^{[i]}(\mathbf{x}_{i}))\Big)\nonumber\\ & \leq & \frac{1}{n+1}E_{D_{n+1}}\Big(\sum_{i=1}^{n+1}\phi_{\omega}(y_{i}-\hat{\alpha}_{0\,(n+1)}-\hat{f}_{(n+1)}(\mathbf{x}_{i}))\Big) +\frac{1}{n+1}E_{D_{n+1}}\sum_{i=1}^{n+1}C_{3}^{[i]}. \label{eq:ed1} \end{eqnarray} On the other hand, let $(f^*_\varepsilon, \alpha_{0\,\varepsilon}^*)\in\mathbb{H}_K\times\mathbb{R}$ satisfy ${\cal R}(f^*_\varepsilon, \alpha_{0\,\varepsilon}^*)\leq \inf_{f\in\mathbb{H}_K,\alpha_0\in\mathbb{R}}{\cal R}(f, \alpha_{0})+\varepsilon$. From the definition of $\hat{\alpha}_{0\,(n+1)}$ and $\hat{f}_{(n+1)}$ we have \begin{eqnarray} &&\frac{1}{n+1}\Big(\sum_{i=1}^{n+1}\phi_{\omega}(y_{i}-\hat{\alpha}_{0\,(n+1)}-\hat{f}_{(n+1)}(\mathbf{x}_{i}))\Big)+\frac{\lambda}{n+1}\|\hat{f}_{(n+1)}\|_{\mathbb{H}_{K}}^{2} \nonumber\\ & \leq & \frac{1}{n+1}\Big(\sum_{i=1}^{n+1}\phi_{\omega}(y_{i}-\alpha_{0\,\varepsilon}^{*}-f_{\varepsilon}^{*}(\mathbf{x}_{i}))\Big) +\frac{\lambda}{n+1}\|f_{\varepsilon}^{*}\|_{\mathbb{H}_{K}}^{2}. 
\label{eq:ed2} \end{eqnarray} By Lemma \ref{lemma:leaveone}, \eqref{eq:ed1} and \eqref{eq:ed2}, we get \begin{eqnarray} &&E_{D_{n}}\big\{ E_{(\mathbf{x},y)}\phi_{\omega}(y-\hat{\alpha}_{0\,(n)}-\hat{f}_{(n)}(\mathbf{x}))\big\} \nonumber \\ & = & \frac{1}{n+1}E_{D_{n+1}}\Big(\sum_{i=1}^{n+1}\phi_{\omega}(y_{i}-\hat{\alpha}_{0}^{[i]}-\hat{f}^{[i]}(\mathbf{x}_{i}))\Big)\nonumber \\ & \leq & E_{D_{n}}\big\{ E_{(\mathbf{x},y)}\phi_{\omega}(y-\alpha_{0\,\varepsilon}^{*}-f_{\varepsilon}^{*}(\mathbf{x}))\big\}+\frac{\lambda}{n+1}\|f_{\varepsilon}^{*}\|_{\mathbb{H}_{K}}^{2}+\frac{1}{n+1}E_{D_{n+1}}\sum_{i=1}^{n+1}C_{3}^{[i]} \nonumber \\ & \le & \inf_{f\in\mathbb{H}_K,\alpha_0\in\mathbb{R}}{\cal R}(f, \alpha_{0})+\varepsilon+\frac{\lambda}{n+1}\|f_{\varepsilon}^{*}\|_{\mathbb{H}_{K}}^{2}+\frac{1}{n+1}E_{D_{n+1}}\sum_{i=1}^{n+1}C_{3}^{[i]}.\label{eq:secondfinal} \end{eqnarray} Because $\lambda/n\rightarrow 0$, there exists $N_{\varepsilon}$ such that when $n>N_{\varepsilon}$, $\frac{\lambda}{n+1}\|f_{\varepsilon}^{*}\|_{\mathbb{H}_{K}}^{2} \le \varepsilon$. In what follows, we show that there exists $N'_{\varepsilon}$ such that when $n>N'_{\varepsilon}$, $\frac{1}{n+1}E_{D_{n+1}}\sum_{i=1}^{n+1}C_{3}^{[i]} \le \varepsilon$. Thus, when $n>\max(N_{\varepsilon},N'_{\varepsilon})$ we have $$ E_{D_{n}}\big\{ E_{(\mathbf{x},y)}\phi_{\omega}(y-\hat{\alpha}_{0\,(n)}-\hat{f}_{(n)}(\mathbf{x}))\big\} \le \inf_{f\in\mathbb{H}_K,\alpha_0\in\mathbb{R}}{\cal R}(f, \alpha_{0})+3\varepsilon. $$ Since this holds for any $\varepsilon>0$, Theorem~\ref{thm:asymptotic} will then be proved. Now we only need to show that $\frac{1}{n+1}E_{D_{n+1}}\sum_{i=1}^{n+1}C_{3}^{[i]}\rightarrow 0$ as $n\rightarrow\infty$. In fact we can show $\frac{1}{n+1}E_{D_{n+1}}\sum_{i=1}^{n+1}C_{3}^{[i]} \le \frac{C}{\sqrt{\lambda}}D\left(\frac{1+n}{\lambda}+1\right)\rightarrow 0$ as $n\rightarrow\infty$. 
In the following analysis, $C$ denotes a generic constant that does not depend on $n$ and whose value may change from one expression to another. Let $V_{i}=q_{1} \frac{\|Y_{n+1}\|_1}{n+1}+ M(q_{1}+1)\sqrt{\frac{ q_{2}}{\lambda}}\|Y_{n+1}\|_{2}+|y_{i}|$; then, for $n$ large enough, $4M^2<\frac{\lambda(n+1)}{2n q_{3} }$, and we have the upper bound \[ C_{1}^{[i]}<(C\lambda)\Big(\frac{C}{\lambda}V_{i}\Big)^2=C\frac{V_{i}^{2}}{\lambda}, \] and since $n>\sqrt{\lambda}$ asymptotically, we have \[ C_{2}^{[i]} < C\Big(C\sqrt{C_{1}^{[i]}}+\frac{V_{i}}{n}\Big)+C\sqrt{C_{1}^{[i]}}<C\frac{V_{i}}{\sqrt{\lambda}}+C\frac{V_{i}}{n}<C\frac{V_{i}}{\sqrt{\lambda}}. \] Then \begin{equation}\label{eq:EC2} C_{3}^{[i]} < CV_{i}C_{2}^{[i]}+CC_{2}^{[i]\,2} < C V_{i} \frac{V_{i}}{\sqrt{\lambda}}+ C \frac{V_{i}^{2}}{{\lambda}}<C \frac{V_{i}^{2}}{\sqrt{\lambda}}. \end{equation} We can bound $V_{i}$ as follows: \begin{eqnarray*} V_{i} & = & q_{1} \frac{\|Y_{n+1}\|_1}{n+1}+ M(q_{1}+1)\sqrt{\frac{ q_{2}}{\lambda}}\|Y_{n+1}\|_{2}+|y_{i}|\\ & < & q_{1}{\frac{\|Y_{n+1}\|_{2}}{\sqrt{n+1}}}+M(q_{1}+1)\sqrt{\frac{ q_{2}}{\lambda}}\|Y_{n+1}\|_{2}+|y_{i}|\\ & < & C\sqrt{\frac{\|Y_{n+1}\|^2_{2}}{\lambda}}+C|y_{i}|. \end{eqnarray*} Then we have \begin{eqnarray}\label{eq:EC22} E_{D_{n+1}}V_{i}^{2} & < & 2C^2 E_{D_{n+1}}\Big[{\frac{\|Y_{n+1}\|^2_{2}}{\lambda}}+y^2_{i}\Big]. \end{eqnarray} Combining this with \eqref{eq:EC2} and using the assumption $ E{y}^2_i<D$, we have \begin{eqnarray*} \frac{1}{n+1}E_{D_{n+1}}\sum_{i=1}^{n+1}C_{3}^{[i]} & \le & \frac{C}{\sqrt{\lambda}}\frac{1}{1+n}\left(\frac{1+n}{\lambda}E\|Y_{n+1}\|_{2}^{2}+E\|Y_{n+1}\|_{2}^{2}\right)\\ & \le & \frac{C}{\sqrt{\lambda}}\frac{E\|Y_{n+1}\|_{2}^{2}}{1+n}\left(\frac{1+n}{\lambda}+1\right)\leq\frac{C}{\sqrt{\lambda}}D\left(\frac{1+n}{\lambda}+1\right). \end{eqnarray*} So when $\lambda/n^{2/3} \rightarrow \infty$ we have $\frac{1}{n+1}E_{D_{n+1}}\sum_{i=1}^{n+1}C_{3}^{[i]} \rightarrow 0$. This completes the proof of Theorem~\ref{thm:asymptotic}. 
\end{proof} \subsection{Proof of Lemma~\ref{lem:lips}} \begin{proof} We observe that the difference of the first derivatives of the function $\phi_\omega$ satisfies \[ |\phi_{\omega}'(a)-\phi_{\omega}'(b)|= \begin{cases} 2(1-\omega)|a-b| & \mathrm{if}\quad(a\leq0, b\leq 0),\\ 2\omega|a-b| & \mathrm{if}\quad(a>0, b>0),\\ 2|(1-\omega)a-\omega b| & \mathrm{if}\quad(a\leq0, b>0),\\ 2|\omega a-(1-\omega) b| & \mathrm{if}\quad(a>0, b\leq 0). \end{cases} \] Therefore we have \begin{equation}\label{Lips1} \vert \phi^{'}_{\omega}(a)-\phi^{'}_{\omega}(b)\vert \le L|a-b| \quad \forall a,b, \end{equation} where $L=2\max(1-\omega,\omega)$. By the Lipschitz continuity of $\phi^{\prime}_{\omega}$ and the Cauchy-Schwarz inequality, \begin{equation} (\phi^{\prime}_{\omega}(a)-\phi^{\prime}_{\omega}(b))(a-b)\leq L|a-b|^{2}\qquad\forall a,b\in\mathbb{R}.\label{eq:monotonicity} \end{equation} If we let $\varphi_{\omega}(a)=(L/2)a^{2}-\phi_{\omega}(a)$, then \eqref{eq:monotonicity} implies the monotonicity of the gradient $\varphi_{\omega}^{\prime}(a)=La-\phi^{\prime}_{\omega}(a)$. Therefore $\varphi_{\omega}$ is a convex function, and by the first-order condition for convexity of $\varphi_{\omega}$, \[ \varphi_{\omega}(a)\geq\varphi_{\omega}(b)+\varphi_{\omega}^{\prime}(b)(a-b)\qquad\forall a,b\in\mathbb{R}, \] which is equivalent to \eqref{eq:upper_bound}. \end{proof} \subsection{Proof of Lemma~\ref{lem:convergence}} \begin{proof} 1. By the definition of the majorization function and the fact that $\boldsymbol{\alpha}^{(k+1)}$ is the minimizer in \eqref{eq:majobj}, \[ F_{\omega, \lambda}(\boldsymbol{\alpha}^{(k+1)})\leq Q(\boldsymbol{\alpha}^{(k+1)}\mid\boldsymbol{\alpha}^{(k)})\leq Q(\boldsymbol{\alpha}^{(k)}\mid\boldsymbol{\alpha}^{(k)})=F_{\omega, \lambda}(\boldsymbol{\alpha}^{(k)}). \] 2.
Based on \eqref{eq:alter_qbound} and the fact that $Q$ is continuous, bounded below and strictly convex, we have \begin{equation} \mathbf{0}=\nabla Q(\boldsymbol{\alpha}^{(k+1)}\mid\boldsymbol{\alpha}^{(k)})=\nabla F_{\omega, \lambda}(\boldsymbol{\alpha}^{(k)})+2\mathbf{K}_{u}(\boldsymbol{\alpha}^{(k+1)}-\boldsymbol{\alpha}^{(k)}).\label{eq:trick} \end{equation} Hence \begin{align*} F_{\omega, \lambda}(\boldsymbol{\alpha}^{(k+1)}) & \leq Q(\boldsymbol{\alpha}^{(k+1)}\mid\boldsymbol{\alpha}^{(k)})\\ & =F_{\omega, \lambda}(\boldsymbol{\alpha}^{(k)})+\nabla F_{\omega, \lambda}(\boldsymbol{\alpha}^{(k)})(\boldsymbol{\alpha}^{(k+1)}-\boldsymbol{\alpha}^{(k)})+(\boldsymbol{\alpha}^{(k+1)}-\boldsymbol{\alpha}^{(k)})^{\intercal}\mathbf{K}_{u}(\boldsymbol{\alpha}^{(k+1)}-\boldsymbol{\alpha}^{(k)})\\ & =F_{\omega, \lambda}(\boldsymbol{\alpha}^{(k)})-(\boldsymbol{\alpha}^{(k+1)}-\boldsymbol{\alpha}^{(k)})^{\intercal}\mathbf{K}_{u}(\boldsymbol{\alpha}^{(k+1)}-\boldsymbol{\alpha}^{(k)}). \end{align*} By \eqref{eq:ku} and the assumption that $\sum_{i=1}^{n}\mathbf{K}_{i}\mathbf{K}_{i}^{\intercal}$ is positive definite, we see that $\mathbf{K}_{u}$ is also positive definite. Let $\gamma_{\min}(\mathbf{K}_{u})$ be the smallest eigenvalue of $\mathbf{K}_{u}$; then \begin{equation} 0\leq\gamma_{\min}(\mathbf{K}_{u})\|\boldsymbol{\alpha}^{(k+1)}-\boldsymbol{\alpha}^{(k)}\|^{2}\leq(\boldsymbol{\alpha}^{(k+1)}-\boldsymbol{\alpha}^{(k)})^{\intercal}\mathbf{K}_{u}(\boldsymbol{\alpha}^{(k+1)}-\boldsymbol{\alpha}^{(k)})\leq F_{\omega, \lambda}(\boldsymbol{\alpha}^{(k)})-F_{\omega, \lambda}(\boldsymbol{\alpha}^{(k+1)}).\label{eq:sandwitch} \end{equation} Since $F_{\omega, \lambda}$ is bounded below and, as shown in part 1, the sequence $(F_{\omega, \lambda}(\boldsymbol{\alpha}^{(k)}))$ is monotonically decreasing, this sequence converges, so $F_{\omega, \lambda}(\boldsymbol{\alpha}^{(k)})-F_{\omega, \lambda}(\boldsymbol{\alpha}^{(k+1)})$ converges to zero as $k\rightarrow\infty$. From \eqref{eq:sandwitch} we then see that $\lim_{k\rightarrow\infty}\|\boldsymbol{\alpha}^{(k+1)}-\boldsymbol{\alpha}^{(k)}\|=0$.
3. Now we show that the sequence $(\boldsymbol{\alpha}^{(k)})$ converges to the unique global minimum of \eqref{eq:obj}. As shown in part 1, the sequence $(F_{\omega, \lambda}(\boldsymbol{\alpha}^{(k)}))$ is monotonically decreasing, hence bounded above. Since $\lim_{\|\boldsymbol{\alpha}\|\rightarrow\infty}F_{\omega, \lambda}(\boldsymbol{\alpha})=\infty$, the boundedness of $(F_{\omega, \lambda}(\boldsymbol{\alpha}^{(k)}))$ implies that $(\boldsymbol{\alpha}^{(k)})$ must also be bounded. We next show that the limit of any convergent subsequence of $(\boldsymbol{\alpha}^{(k)})$ is a stationary point of $F_{\omega, \lambda}$. Let $(\boldsymbol{\alpha}^{(k_{i})})$ be a convergent subsequence of $(\boldsymbol{\alpha}^{(k)})$ with $\lim_{i\rightarrow\infty}\boldsymbol{\alpha}^{(k_{i})}=\widehat{\boldsymbol{\alpha}}$; then by \eqref{eq:trick} \[ \mathbf{0}=\nabla Q(\boldsymbol{\alpha}^{(k_{i}+1)}\mid\boldsymbol{\alpha}^{(k_{i})})=\nabla F_{\omega, \lambda}(\boldsymbol{\alpha}^{(k_{i})})+2\mathbf{K}_{u}(\boldsymbol{\alpha}^{(k_{i}+1)}-\boldsymbol{\alpha}^{(k_{i})}). \] Taking limits on both sides and using $\lim_{k\rightarrow\infty}\|\boldsymbol{\alpha}^{(k+1)}-\boldsymbol{\alpha}^{(k)}\|=0$ from part 2, we obtain \begin{align*} \mathbf{0} & =\lim_{i\rightarrow\infty}\nabla Q(\boldsymbol{\alpha}^{(k_{i}+1)}\mid\boldsymbol{\alpha}^{(k_{i})})=\nabla Q(\lim_{i\rightarrow\infty}\boldsymbol{\alpha}^{(k_{i}+1)}\mid\lim_{i\rightarrow\infty}\boldsymbol{\alpha}^{(k_{i})})\\ & =\nabla F_{\omega, \lambda}(\widehat{\boldsymbol{\alpha}})+2\mathbf{K}_{u}(\widehat{\boldsymbol{\alpha}}-\widehat{\boldsymbol{\alpha}})=\nabla F_{\omega, \lambda}(\widehat{\boldsymbol{\alpha}}), \end{align*} so $\widehat{\boldsymbol{\alpha}}$ is a stationary point of $F_{\omega, \lambda}$. Then by the strict convexity of $F_{\omega, \lambda}$, we have that $\widehat{\boldsymbol{\alpha}}$ is the unique global minimum of \eqref{eq:obj}. \end{proof} \subsection{Proof of Theorem~\ref{thm:iteration}} \begin{proof} 1.
By \eqref{eq:mathdef1} and \eqref{eq:majobj}, \begin{equation} F_{\omega, \lambda}(\boldsymbol{\alpha}^{(k+1)})\leq Q(\boldsymbol{\alpha}^{(k+1)}\mid\boldsymbol{\alpha}^{(k)})\leq Q(\Lambda_{k}\boldsymbol{\alpha}^{(k)}+(1-\Lambda_{k})\widehat{\boldsymbol{\alpha}}\mid\boldsymbol{\alpha}^{(k)}).\label{eq:in1} \end{equation} Using \eqref{eq:LambdaK} we can show that \begin{align} & Q(\Lambda_{k}\boldsymbol{\alpha}^{(k)}+(1-\Lambda_{k})\widehat{\boldsymbol{\alpha}}\mid\boldsymbol{\alpha}^{(k)})\nonumber \\ = & F_{\omega, \lambda}(\boldsymbol{\alpha}^{(k)})+(1-\Lambda_{k})\nabla F_{\omega, \lambda}(\boldsymbol{\alpha}^{(k)})(\widehat{\boldsymbol{\alpha}}-\boldsymbol{\alpha}^{(k)})+(1-\Lambda_{k})^{2}(\widehat{\boldsymbol{\alpha}}-\boldsymbol{\alpha}^{(k)})^{\intercal}\mathbf{K}_{u}(\widehat{\boldsymbol{\alpha}}-\boldsymbol{\alpha}^{(k)})\nonumber \\ = & \Lambda_{k}F_{\omega, \lambda}(\boldsymbol{\alpha}^{(k)})+(1-\Lambda_{k})\left[Q(\widehat{\boldsymbol{\alpha}}\mid\boldsymbol{\alpha}^{(k)})-\Lambda_{k}(\widehat{\boldsymbol{\alpha}}-\boldsymbol{\alpha}^{(k)})^{\intercal}\mathbf{K}_{u}(\widehat{\boldsymbol{\alpha}}-\boldsymbol{\alpha}^{(k)})\right]\nonumber \\ = & \Lambda_{k}F_{\omega, \lambda}(\boldsymbol{\alpha}^{(k)})+(1-\Lambda_{k})F_{\omega, \lambda}(\widehat{\boldsymbol{\alpha}}).\label{eq:in2} \end{align} Then the statement can be proved by substituting \eqref{eq:in2} into \eqref{eq:in1}. 2. 
We obtain a lower bound for $F_{\omega, \lambda}(\widehat{\boldsymbol{\alpha}})$, \begin{equation} F_{\omega, \lambda}(\widehat{\boldsymbol{\alpha}})\geq F_{\omega, \lambda}(\boldsymbol{\alpha}^{(k)})+\nabla F_{\omega, \lambda}(\boldsymbol{\alpha}^{(k)})(\widehat{\boldsymbol{\alpha}}-\boldsymbol{\alpha}^{(k)})+(\widehat{\boldsymbol{\alpha}}-\boldsymbol{\alpha}^{(k)})^{\intercal}\mathbf{K}_{l}(\widehat{\boldsymbol{\alpha}}-\boldsymbol{\alpha}^{(k)}),\label{eq:lbound} \end{equation} and the majorization $Q(\widehat{\boldsymbol{\alpha}}\mid\boldsymbol{\alpha}^{(k)})$, \begin{equation} Q(\widehat{\boldsymbol{\alpha}}\mid\boldsymbol{\alpha}^{(k)})=F_{\omega, \lambda}(\boldsymbol{\alpha}^{(k)})+\nabla F_{\omega, \lambda}(\boldsymbol{\alpha}^{(k)})(\widehat{\boldsymbol{\alpha}}-\boldsymbol{\alpha}^{(k)})+(\widehat{\boldsymbol{\alpha}}-\boldsymbol{\alpha}^{(k)})^{\intercal}\mathbf{K}_{u}(\widehat{\boldsymbol{\alpha}}-\boldsymbol{\alpha}^{(k)}).\label{eq:major} \end{equation} Subtracting \eqref{eq:lbound} from \eqref{eq:major} and dividing by $(\widehat{\boldsymbol{\alpha}}-\boldsymbol{\alpha}^{(k)})^{\intercal}\mathbf{K}_{u}(\widehat{\boldsymbol{\alpha}}-\boldsymbol{\alpha}^{(k)})$, we have \begin{align} \Lambda_{k} & =\frac{Q(\widehat{\boldsymbol{\alpha}}\mid\boldsymbol{\alpha}^{(k)})-F_{\omega, \lambda}(\widehat{\boldsymbol{\alpha}})}{(\widehat{\boldsymbol{\alpha}}-\boldsymbol{\alpha}^{(k)})^{\intercal}\mathbf{K}_{u}(\widehat{\boldsymbol{\alpha}}-\boldsymbol{\alpha}^{(k)})}\nonumber \\ & \leq1-\frac{(\widehat{\boldsymbol{\alpha}}-\boldsymbol{\alpha}^{(k)})^{\intercal}\mathbf{K}_{l}(\widehat{\boldsymbol{\alpha}}-\boldsymbol{\alpha}^{(k)})}{(\widehat{\boldsymbol{\alpha}}-\boldsymbol{\alpha}^{(k)})^{\intercal}\mathbf{K}_{u}(\widehat{\boldsymbol{\alpha}}-\boldsymbol{\alpha}^{(k)})}\nonumber \\ & \leq1-\gamma_{\min}(\mathbf{K}_{u}^{-1}\mathbf{K}_{l}).\label{eq:fineq} \end{align} Both $\mathbf{K}_{u}$ and $\mathbf{K}_{l}$ are positive definite by the assumption that
$\sum_{i=1}^{n}\mathbf{K}_{i}\mathbf{K}_{i}^{\intercal}$ is positive definite, and since \[ \mathbf{K}_{u}^{-1}\mathbf{K}_{l}=\mathbf{K}_{u}^{-\frac{1}{2}}\mathbf{K}_{u}^{-\frac{1}{2}}\mathbf{K}_{l}\mathbf{K}_{u}^{-\frac{1}{2}}\mathbf{K}_{u}^{\frac{1}{2}}, \] the matrix $\mathbf{K}_{u}^{-1}\mathbf{K}_{l}$ is similar to the matrix $\mathbf{K}_{u}^{-\frac{1}{2}}\mathbf{K}_{l}\mathbf{K}_{u}^{-\frac{1}{2}}$, which is positive definite. Hence \[ \Gamma=1-\gamma_{\min}(\mathbf{K}_{u}^{-1}\mathbf{K}_{l})=1-\gamma_{\min}(\mathbf{K}_{u}^{-\frac{1}{2}}\mathbf{K}_{l}\mathbf{K}_{u}^{-\frac{1}{2}})<1. \] By \eqref{eq:mathdef1} and \eqref{eq:fineq} we showed that $0\leq\Lambda_{k}\leq\Gamma<1$. 3. Since $\nabla F_{\omega, \lambda}(\widehat{\boldsymbol{\alpha}})=\mathbf{0}$, using the Taylor expansion on $F_{\omega, \lambda}(\boldsymbol{\alpha}^{(k)})$ at $\widehat{\boldsymbol{\alpha}}$, we have \[ F_{\omega, \lambda}(\boldsymbol{\alpha}^{(k)})-F_{\omega, \lambda}(\widehat{\boldsymbol{\alpha}})\geq(\boldsymbol{\alpha}^{(k)}-\widehat{\boldsymbol{\alpha}})^{\intercal}\mathbf{K}_{l}(\boldsymbol{\alpha}^{(k)}-\widehat{\boldsymbol{\alpha}})\geq\gamma_{\min}(\mathbf{K}_{l})\|\boldsymbol{\alpha}^{(k)}-\widehat{\boldsymbol{\alpha}}\|^{2}, \] \[ F_{\omega, \lambda}(\boldsymbol{\alpha}^{(k)})-F_{\omega, \lambda}(\widehat{\boldsymbol{\alpha}})\leq(\boldsymbol{\alpha}^{(k)}-\widehat{\boldsymbol{\alpha}})^{\intercal}\mathbf{K}_{u}(\boldsymbol{\alpha}^{(k)}-\widehat{\boldsymbol{\alpha}})\leq\gamma_{\max}(\mathbf{K}_{u})\|\boldsymbol{\alpha}^{(k)}-\widehat{\boldsymbol{\alpha}}\|^{2}. 
\] Therefore, by parts 1 and 2, \[ \|\boldsymbol{\alpha}^{(k+1)}-\widehat{\boldsymbol{\alpha}}\|^{2}\leq\frac{F_{\omega, \lambda}(\boldsymbol{\alpha}^{(k+1)})-F_{\omega, \lambda}(\widehat{\boldsymbol{\alpha}})}{\gamma_{\min}(\mathbf{K}_{l})}\leq\frac{\Gamma(F_{\omega, \lambda}(\boldsymbol{\alpha}^{(k)})-F_{\omega, \lambda}(\widehat{\boldsymbol{\alpha}}))}{\gamma_{\min}(\mathbf{K}_{l})}\leq\Gamma\frac{\gamma_{\max}(\mathbf{K}_{u})}{\gamma_{\min}(\mathbf{K}_{l})}\|\boldsymbol{\alpha}^{(k)}-\widehat{\boldsymbol{\alpha}}\|^{2}. \] \end{proof} \end{document}
\begin{document} \title{Nonclassicality criteria: Quasiprobability distributions and correlation functions} \author{Moorad Alexanian} \email[]{[email protected]} \affiliation{Department of Physics and Physical Oceanography\\ University of North Carolina Wilmington\\ Wilmington, NC 28403-5606\\} \date{\today} \begin{abstract} We use the exact calculation of the quantum mechanical, temporal characteristic function $\chi(\eta)$ and the degree of second-order coherence $g^{(2)}(\tau)$ for a single-mode, degenerate parametric amplifier for a system in the Gaussian state, viz., a displaced-squeezed thermal state, to study the different criteria for nonclassicality. In particular, we contrast criteria that involve only one-time functions of the dynamical system, for instance, the quasiprobability distribution $P(\beta)$ of the Glauber-Sudarshan coherent or P-representation of the density of state and the Mandel $Q_{M}(\tau)$ parameter, versus the criteria associated with the two-time correlation function $g^{(2)}(\tau)$. \end{abstract} \pacs{42.50.Ct, 42.50.Pq, 42.50.Ar, 42.50.Xa} \maketitle \section{Introduction} The field of quantum computation and quantum information, as applied to quantum computers, quantum cryptography, and quantum teleportation, was originally based on the manipulation of quantum information in the form of discrete quantities like qubits, qutrits, and higher-dimensional qudits. Nowadays the emphasis has shifted to processing quantum information by the use of continuous-variable quantum information carriers. In this regard, use is now made of any combination of Gaussian states, Gaussian operations, and Gaussian measurements \cite{WPP12, ARL14}. The interest in Gaussian states is both theoretical and experimental since simple analytical tools are available and, on the experimental side, optical components effecting Gaussian processes are readily available in the laboratory \cite{WPP12}.
Quantum optical systems give rise to interesting nonclassical behavior such as photon antibunching and sub-Poissonian photon statistics owing to the discreteness or photon nature of the radiation field \cite{GSA13}. These nonclassical features can also be quantified with the aid of the temporal second-order quantum mechanical correlation function $g^{(2)}(\tau)$ and experimentally studied using a Hanbury Brown--Twiss intensity interferometer modified for homodyne detection \cite{GSSRL07}. Physical realizations and measurements of the second-order coherence function $g^{(2)}(\tau)$ of light have been studied earlier via a degenerate parametric amplifier (DPA) \cite{KHM93,LO02,GSSRL07}. The early work on parametric amplification \cite{MGa67, MGb67} has led to a wealth of research, for instance, in sub-Poissonian statistics and squeezed light \cite{AZ92}, squeezing in the output of a cavity field \cite{BLPa90, BLPb90}, quantum noise, measurement, and amplification \cite{CDG10}, and photon antibunching \cite{HP82}. The need to formulate measurable conditions to discern the classical or nonclassical behavior of a dynamical system is important, and so several criteria exist for nonclassicality. In particular, one criterion uses the Glauber-Sudarshan P function: the existence or nonexistence of a genuine quasiprobability distribution $P(\beta)$ characterizes whether the system has a classical counterpart or not \cite{RSA15}. The existing criteria for nonclassicality actually complement each other: the nonclassicality criteria derived from the one-time function $P(\beta)$ are complemented by the nonclassicality criteria involving the two-time coherence function $g^{(2)}(\tau)$. Note that the nonclassicality information provided by $g^{(2)}(\tau)$ cannot be obtained solely from $P(\beta)$.
In a recent work \cite{MA16}, a detailed study was made of the temporal development of the second-order coherence function for Gaussian states---displaced-squeezed thermal states---the dynamics being governed by a Hamiltonian for degenerate parametric amplification. The Gaussian state is generated dynamically from an initial thermal state and the system subsequently evolves in time; the usual assumption of statistically stationary fields is not made \cite{SZ97}. In the present work, we compare the differing criteria for nonclassicality. In Section II, we consider the general Hamiltonian of the degenerate parametric amplifier. In Section III, we find an exact expression for the characteristic function and introduce the Glauber-Sudarshan coherent state or P-representation of the density matrix. In Section IV, we obtain, via a two-dimensional Fourier transform, the quasiprobability distribution $P(\beta)$ from the exact expression of the characteristic function $\chi(\eta)$ and obtain from $P(\beta)$ the necessary and sufficient condition for nonclassicality. In Section V, we present the known nonclassicality criteria for the coherence function $g^{(2)}(\tau)$. In Section VI, we study numerical examples to elucidate how all the different criteria for nonclassicality complement each other. Finally, Section VII summarizes our results. \section{Degenerate parametric amplification} The Hamiltonian for degenerate parametric amplification, in the interaction picture, is \begin{equation} \hat{H} = c \hat{a}^{\dag 2} + c^* \hat{a}^2 + b\hat{a} + b^* \hat{a}^\dag.
\end{equation} The system is initially in a thermal state $\hat{\rho}_{0}$ and, after a preparation time $t$, the system temporally develops into a Gaussian state, and so \cite{MA16} \begin{equation} \hat{\rho}_{G}=\exp{(-i\hat{H}t/\hbar)}\hat{\rho}_{0} \exp{(i\hat{H}t/\hbar)} \end{equation} \[ = \hat{D}(\alpha) \hat{S}(\xi)\hat{\rho}_{0} \hat{S}(-\xi) \hat{D}(-\alpha), \] with the displacement $\hat{D}(\alpha)= \exp{(\alpha \hat{a}^{\dag} -\alpha^* \hat{a})}$ and the squeezing $\hat{S}(\xi)= \exp\big{(}-\frac{\xi}{2} \hat{a}^{\dag 2} + \frac{\xi^*}{2} \hat{a}^{2} \big{ )}$ operators, where $\hat{a}$ ($\hat{a}^{\dag})$ is the photon annihilation (creation) operator, $\xi = r \exp{(i\theta)}$, and $\alpha= |\alpha|\exp{(i\varphi)}$. The thermal state is given by \begin{equation} \hat{\rho}_{0} = \exp{(- \hbar \omega\hat{n}/k_{B}T)}/ \textup{Tr}[\exp{(- \hbar \omega \hat{n}/k_{B}T)}], \end{equation} with $\hat{n}= \hat{a}^\dag \hat{a}$ and $\bar{n}= \textup{Tr}[\hat{\rho}_{0} \hat{n}]$. The parameters $c$ and $b$ in the degenerate parametric Hamiltonian (1) are determined \cite{MA16} by the parameters $\alpha$ and $\xi$ of the Gaussian density of state (2) via \begin{equation} tc = -i\frac{\hbar}{2} r\exp(i\theta) \end{equation} and \begin{equation} tb= -i\frac{\hbar}{2}\Big{(} \alpha \exp{(-i\theta)} + \alpha^* \coth (r/2)\Big{)} r, \end{equation} where $t$ is the time that it takes for the system governed by the Hamiltonian (1) to generate the Gaussian density of state $\hat{\rho}_{G}$ from the initial thermal density of state $\hat{\rho}_{0}$.
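For concreteness, the mapping (4)-(5) from the Gaussian-state parameters $(\alpha,\xi)$ to the Hamiltonian parameters $(c,b)$ can be sketched as follows (illustrative only; we set $\hbar=1$, and the function name is ours, not from the paper):

```python
import cmath
import math

# Sketch (hbar = 1): Hamiltonian parameters c and b of Eq. (1) from the
# Gaussian-state parameters alpha and xi = r e^{i theta}, via Eqs. (4)-(5),
# for a given preparation time t.
def hamiltonian_parameters(alpha, r, theta, t, hbar=1.0):
    coth = 1.0 / math.tanh(r / 2)
    c = -1j * (hbar / 2) * r * cmath.exp(1j * theta) / t
    b = -1j * (hbar / 2) * (alpha * cmath.exp(-1j * theta)
                            + alpha.conjugate() * coth) * r / t
    return c, b

c, b = hamiltonian_parameters(alpha=2.0 + 0.0j, r=0.1, theta=0.0, t=1.0)
# For real alpha and theta = 0, both c and b are purely (negative) imaginary.
assert abs(c.real) < 1e-12 and c.imag < 0
assert abs(b.real) < 1e-12 and b.imag < 0
```

Rescaling $t$ simply rescales $c$ and $b$, as the products $tc$ and $tb$ are what is fixed by (4)-(5).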
The quantum mechanical second-order degree of coherence is given by \cite{MA16} \begin{equation} g^{(2)}(\tau) = \frac{\langle \hat{a}^{\dag}(0) \hat{a}^{\dag}(\tau) \hat{a}(\tau) \hat{a}(0)\rangle }{\langle \hat{a}^{\dag}(0) \hat{a}(0)\rangle \langle \hat{a}^{\dag}(\tau)\hat{a}(\tau) \rangle}, \end{equation} where all the expectation values are traces with the Gaussian density operator, viz., a displaced-squeezed thermal state. Accordingly, the system is initially in the thermal state $\hat{\rho}_{0}$. After time $t$, the system evolves to the Gaussian state $\hat{\rho}_{G}$ and a photon is annihilated at time $t$; the system then develops in time and after a time $\tau$ another photon is annihilated \cite{MA16}. Therefore, two photons are annihilated with a time separation $\tau$ when the system is in the Gaussian density state $\hat{\rho}_{G}$. It is important to remark that we do not suppose statistically stationary fields \cite{SZ97}. Therefore, owing to the $\tau$ dependence of the number of photons in the cavity in the denominator of Equation (6), the system approaches a finite limit as $\tau\rightarrow \infty$ without supposing any sort of dissipative processes \cite{MA16}. The coherence function $g^{(2)}(\tau)$ is a function of $\Omega \tau=(r/t)\tau$, $\alpha$, $\xi$, and the average number of photons $\bar{n}$ in the initial thermal state (3), where the preparation time $t$ is the time that it takes the system to dynamically generate the Gaussian density $\hat{\rho}_{G}$ given by (2) from the initial thermal state $\hat{\rho}_{0}$ given by (3). Note that the limit $r\rightarrow 0$ is a combined limit whereby $\Omega =r/t$ also approaches zero, resulting in a correlation function with a power-law decay as $\tau/t \rightarrow \infty$ rather than an exponential decay as $\tau/t \rightarrow \infty$, as is the case in the presence of squeezing when $r>0$ \cite{MA16}.
\section{Characteristic function} The calculation of the correlation function (6) deals with the product of two-time operators. However, a complete statistical description of a field involves the expectation values of arbitrary functions of the operators $\hat{a}$ and $\hat{a}^\dag$. A characteristic function contains all the necessary information to reconstruct the density matrix for the state of the field. Now \cite{MA16} \begin{equation} \hat{\rho}(t+\tau) =\exp\big{(}-i\hat{H}(t+\tau)\big{)} \hat{\rho}_{0}\exp \big{(}i\hat{H}(t+\tau)\big{)} \end{equation} \[ = \exp(-i\hat{H}\tau) \hat{\rho}_{G}\exp(i\hat{H}\tau). \] Accordingly, for any operator function $\mathcal{\hat{O}}(\hat{a},\hat{a}^\dag)$, one has that \begin{equation} \textup{Tr}[\hat{\rho}(t+\tau) \mathcal{\hat{O}}(\hat{a},\hat{a}^\dag)] = \textup{Tr}[\hat{\rho}_{G} \mathcal{\hat{O}}\big{(}\hat{a}(\tau),\hat{a}^\dag (\tau)\big{)}] \end{equation} \[ \equiv \langle \mathcal{\hat{O}}\big{(}\hat{a}(\tau),\hat{a}^\dag (\tau)\big{)} \rangle . \] One obtains for the characteristic function \[ \chi(\eta) = \textup{Tr}[\hat{\rho}(t+\tau)\exp{(\eta \hat{a}^\dag}-\eta^*\hat{a})]\exp{(|\eta|^2/2)} \] \[ =\textup{Tr}[\hat{\rho}(t+\tau)\exp{(\eta \hat{a}^\dag}) \exp{(-\eta^*\hat{a})}] \] \begin{equation} =\exp{(|\eta|^2/2)} \exp{\big{(}\eta A^*(\tau)- \eta^* A(\tau)\big{)}}\cdot \end{equation} \[ \cdot \exp{\big{(}-(\bar{n}+1/2)|\xi(\tau)|^2\big{)}}, \] where \[ A(\tau) =\alpha \Bigg{(}\cosh(\Omega\tau)+\frac{1}{2}\coth(r/2) \sinh (\Omega \tau) \] \begin{equation} -\frac{1}{2} (\cosh(\Omega \tau)-1)+\exp[i(\theta -2 \varphi)]\Big{[} -\frac{1}{2}\sinh(\Omega\tau) \end{equation} \[ -\frac{1}{2}\coth(r/2)\big{(}\cosh(\Omega\tau)-1\big{)}\Big{]} \Bigg{)} \] and \begin{equation} \xi(\tau)= \eta\cosh(\Omega \tau +r) +\eta^* \exp(i\theta) \sinh(\Omega \tau +r).
\end{equation} The expectation value $\textup{Tr} [\hat{\rho}(t+\tau)\hat{a}^{\dag m}\hat{a}^n]$ can be calculated by differentiating the characteristic function $\chi(\eta)$ with respect to $\eta$ and $\eta^*$ as independent variables, viz., $\textup{Tr} [\hat{\rho}(t+\tau)\hat{a}^{\dag m}\hat{a}^n]= (\partial/\partial \eta)^m (-\partial/\partial \eta^*)^n \chi(\eta)\Big{|}_{\eta=0}$. Accordingly, knowledge of the characteristic function alone can determine only one-time properties of the dynamical system. Define \begin{equation} |\xi(\tau)|^2 = \eta^2 T^*(\tau) +\eta^{*2} T(\tau) +\eta \eta^* S(\tau), \end{equation} with \begin{equation} T(\tau)= \frac{1}{2} \exp{(i \theta)} \sinh [2(\Omega \tau + r)] \end{equation} and \begin{equation} S(\tau)= \cosh[2(\Omega \tau +r)]. \end{equation} In the Glauber-Sudarshan coherent state or P-representation of the density operator $\hat{\rho}$ one has that \cite{GSA13} \begin{equation} \hat{\rho} =\int \textup{d}^2\beta \hspace{0.05in} P(\beta)|\beta\rangle \langle\beta|, \end{equation} where $|\beta\rangle$ is a coherent state and nonclassicality occurs when $P(\beta)$ takes on negative values or becomes more singular than a Dirac delta function. One has the normalization condition $\int P(\beta) \textup{d}^2\beta=1$; however, $P(\beta)$ would not describe probabilities, even if positive, of mutually exclusive states since coherent states are not orthogonal. In fact, coherent states are overcomplete. The quasiprobability distribution $P(\beta)$ is related to the characteristic function $\chi(\eta)$ via the two-dimensional Fourier transform \begin{equation} P(\beta)= \frac{1}{\pi^2} \int \textup{d}^2\eta \hspace{0.05in}\chi(\eta) \exp{(-\beta^*\eta +\beta \eta^*)}.
\end{equation} The characteristic function $\chi(\eta)$ is a well-behaved function whereas the integral (16) is not always well-behaved; for instance, if $\chi(\eta)$ diverges as $|\eta|\rightarrow \infty$, then $P(\beta)$ can only be expressed in terms of generalized functions. Nonetheless, $P(\beta)$ can still be used to calculate moments of products of $\hat{a}$ and $\hat{a}^\dag$. It is important to remark that knowledge of $P(\beta)$, without further knowledge of the dynamics governing the system, can only be used to calculate equal-time properties of the system and does not allow us to calculate, for instance, correlation functions, in particular, the quantum mechanical, second-order degree of coherence $g^{(2)}(\tau)$. The determination of the latter requires, in addition to $P(\beta)$, the temporal behavior $\hat{a}(\tau)$. \section{P-representation} The integral (16) can be carried out for the characteristic function (9), and so \begin{equation} P(\beta)=\frac{2}{\pi} \frac{1}{\sqrt{4a^2b^2-c^2}} e^{-(a^2f^2+b^2d^2+cfd)/(4a^2b^2-c^2)}, \end{equation} where \[ a^2= -\frac{1}{2} +(\bar{n}+1/2)\big{(}T(\tau) +T^*(\tau) +S(\tau)\big{)}, \] \[ b^2= -\frac{1}{2} -(\bar{n}+1/2)\big{(}T(\tau) +T^*(\tau) -S(\tau)\big{)}, \] \begin{equation} c=-2 i(\bar{n}+1/2)\big{(}T^*(\tau) -T(\tau)\big{)}, \end{equation} \[ d= i(A(\tau) -A^*(\tau)-\beta+ \beta^*), \] \[ f= A(\tau)+A^*(\tau) -\beta-\beta^*. \] The existence of a real-valued function $P(\beta)$ requires that $(4a^2b^2-c^2)\geq 0$, which, with the aid of (18), gives \begin{equation} 1\leq (2\bar{n} +1) e^{-2(\Omega \tau+r)}, \end{equation} where the equality holds when $\bar{n}=0$ and $r=0$ at $\tau=0$. Note that criterion (19) does not depend on the coherent amplitude $\alpha$, which appears via $A(\tau)$.
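The equivalence between the nonnegativity of $4a^2b^2-c^2$ and criterion (19) can be verified numerically from the definitions (13), (14) and (18); the following sketch (our own check, not part of the derivation) also confirms that the sign does not depend on the squeezing phase $\theta$:

```python
import math
import cmath

# Sketch: check that the sign of 4 a^2 b^2 - c^2 from (18), with T and S
# taken from (13)-(14) and u = Omega*tau + r, matches criterion (19):
# 1 <= (2 nbar + 1) exp(-2 u).
def discriminant(nbar, u, theta):
    nu = nbar + 0.5
    T = 0.5 * cmath.exp(1j * theta) * math.sinh(2 * u)
    S = math.cosh(2 * u)
    a2 = -0.5 + nu * (2 * T.real + S)
    b2 = -0.5 - nu * (2 * T.real - S)
    c = -2j * nu * (T.conjugate() - T)   # purely real
    return (4 * a2 * b2 - c ** 2).real

for nbar in (0.0, 0.1, 1.0, 5.0):
    for u in (0.0, 0.05, 0.3, 1.0):
        for theta in (0.0, 1.0, 2.5):
            classical = (2 * nbar + 1) * math.exp(-2 * u) >= 1
            assert (discriminant(nbar, u, theta) >= -1e-12) == classical
```

Analytically, $4a^2b^2-c^2=4\big[(\bar{n}+1/2)^2-(\bar{n}+1/2)\cosh(2u)+1/4\big]$, whose roots in $\bar{n}+1/2$ are $e^{\pm 2u}/2$, which is why the phase $\theta$ drops out.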
Also, if inequality (19) is initially satisfied at $\tau=0$, then as time goes on the inequality will eventually be violated, since the squeezing continues indefinitely; thus, no matter the value of $\bar{n}$, as $\tau$ increases the dynamics will always lead to nonclassical states. The existence of $P(\beta)$ also requires that it vanish as $|\beta|\rightarrow \infty$. The bilinear form $(a^2f^2+b^2d^2+cfd)$ in the exponential in (17) can be diagonalized in the variables $\Re{(A(\tau)-\beta)}$ and $\Im{(A(\tau)-\beta)}$, resulting in the eigenvalues $2\big{(}-1+(2\bar{n} +1)e^{2(\Omega \tau+r)}\big{)}$ and $2\big{(}-1+(2\bar{n} +1)e^{-2(\Omega \tau+r)}\big{)}$, which must be nonnegative; this requirement gives rise to the same condition (19) for the existence of a genuine probability distribution $P(\beta)$. Two simple examples follow directly from (16). For the displaced vacuum state for $\tau\geq 0$, one obtains, since $\Omega =r/t =0$, that $P(\beta)=\delta(\beta-\alpha)$, the coherent state. Similarly, for $\bar{n} > 0$ one obtains for the displaced thermal state that $P(\beta)= (1/(\pi \bar{n}))\exp {(-|\beta-\alpha|^2/\bar{n})}$, which becomes the previous example in the vacuum limit $\bar{n}\rightarrow 0$. The necessary and sufficient condition for nonclassicality is then \begin{equation} (2\bar{n} +1) e^{-2(\Omega \tau+r)} <1, \end{equation} which is based only on knowledge of $P(\beta)$. Note that (20) is independent of the value of the coherent parameter $\alpha$. \section{Nonclassicality criteria} As indicated above, mere knowledge of $P(\beta)$ does not allow the calculation of the quantum mechanical correlation functions; additional knowledge of the dynamics of the system is necessary, for instance, $\hat{a}(\tau)$ for $\tau\geq 0$.
Nonclassical light can be characterized differently, for instance, with the aid of the quantum degree of second-order coherence $g^{(2)}(\tau)$ by the nonclassical inequalities \begin{equation} g^{(2)}(0)< 1 \hspace{0.3in} \textup{and} \hspace{0.3in} g^{(2)}(0) < g^{(2)}(\tau), \end{equation} where the first inequality represents sub-Poissonian statistics, or photon-number squeezing, while the second gives rise to antibunched light. Hence a measurement of $g^{(2)}(\tau)$ can be used to determine the nonclassicality of the field. The two nonclassical effects often occur together but each can occur in the absence of the other. Similarly, one can derive the nonclassical inequality \cite{RC88} \begin{equation} |g^{(2)}(0)-1| < |g^{(2)}(\tau)-1|, \end{equation} that is, $g^{(2)}(\tau)$ can be farther away from unity than it was initially at $\tau=0$. Accordingly, in the determination of the nonclassicality of the field, situations may arise where some of the observable nonclassical characteristics, such as squeezing and sub-Poissonian statistics, are lost while $P(\beta)$ still remains nonclassical; that is, inequality (20) holds true while some of the inequalities in (21) and (22) are violated. These situations do arise since the nonclassicality condition (20) is independent of the value of the coherent amplitude $\alpha$ whereas the nonclassicality conditions (21) and (22) do depend on the value of $\alpha$. Another sufficient condition for nonclassicality is determined by the Mandel $Q_{M}(\tau)$ parameter related to the photon-number variance \cite{GSA13, SZ97} \begin{equation} Q_{M}(\tau)= \frac{\Delta n^2(\tau) -\langle \hat{n}(\tau)\rangle}{\langle \hat{n}(\tau)\rangle}, \end{equation} where $-1 \leq Q_{M}(\tau) <0$ implies that $P(\beta)$ assumes negative values and thus the field must be nonclassical with sub-Poissonian statistics.
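As a quick illustration of the sign convention in (23) (using textbook photon-number distributions rather than the DPA dynamics of the present paper), the Mandel parameter can be computed directly from a photon-number distribution $p(n)$:

```python
import math

# Illustrative sketch (textbook photon statistics, not this paper's DPA
# dynamics): the Mandel Q parameter of Eq. (23) from a distribution p(n).
def mandel_q(p):
    mean = sum(n * pn for n, pn in enumerate(p))
    second = sum(n * n * pn for n, pn in enumerate(p))
    var = second - mean ** 2
    return (var - mean) / mean

N, nbar = 100, 2.0
poisson = [math.exp(-nbar) * nbar ** n / math.factorial(n) for n in range(N)]
thermal = [(nbar / (nbar + 1)) ** n / (nbar + 1) for n in range(N)]
fock = [1.0 if n == 3 else 0.0 for n in range(N)]   # three-photon Fock state

assert abs(mandel_q(poisson)) < 1e-9          # coherent light: Q = 0 (Poissonian)
assert abs(mandel_q(thermal) - nbar) < 1e-6   # thermal light: Q = nbar > 0
assert mandel_q(fock) == -1.0                 # Fock state: Q = -1 (sub-Poissonian)
```

Only the sub-Poissonian case $-1\leq Q_M<0$ certifies nonclassicality; a positive $Q_M$, as for thermal light, is inconclusive.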
Condition $Q_{M}(0) <0$ is equivalent to the first condition in Equation (21) since $Q_{M}(0) = \langle \bar{n}(0)\rangle[g^{(2)}(0)-1]$. It is important to remark that the latter equality holds only at $\tau=0$, when both $Q_{M}(0)$ and $g^{(2)}(0)$ represent one-time functions. The correlation function $g^{(2)}(\tau)$ is a two-time function for $\tau >0$ whereas $Q_{M}(\tau)$ is a one-time function for $\tau \geq 0$. Note that if the Mandel $Q_{M}(\tau)$ parameter is positive, then no conclusion can be drawn on the nonclassical nature of the radiation field. The evaluation of $Q_{M}(\tau)$ requires knowledge of the characteristic function $\chi(\eta)$ or the quasiprobability distribution $P(\beta)$, together with successive derivatives thereof. Such knowledge involves only one-time functions, whereas the correlation function $g^{(2)}(\tau)$ is a two-time function; thus the nonclassicality determined by the differing criteria complement each other. \section{Numerical comparisons} Owing to the equivalence of the nonclassical conditions given by the first of Equation (21) and the Mandel condition $Q_{M}(0)<0$, we need only study numerically the relation of the nonclassical inequalities (21) and (22) for the coherence function $g^{(2)}(\tau)$ and compare them to the nonclassical condition (20) for the quasiprobability distribution $P(\beta)$. It is important to remark that the nonclassicality criteria (21) and (22) for $g^{(2)}(\tau)$ depend strongly on the value of the coherent amplitude $\alpha$ whereas the nonclassicality criterion (20) for $P(\beta)$ is actually independent of the value of $\alpha$. The coherence function $g^{(2)}(\tau)$ is rather sensitive to the value of $\alpha$.
This will allow us to determine if the system can exhibit quantum behavior even though the known nonclassicality conditions given by both Equations (21) and (22) for the coherence function $g^{(2)}(\tau)$ are violated or, conversely, if the system exhibits nonclassical behavior even though the nonclassicality criterion (20) for $P(\beta)$ is violated. It is interesting that Equation (20) for the nonclassicality of $P(\beta)$ is independent of the coherent parameter $\alpha$, since the eigenvalues of the quadratic form $(a^2f^2+b^2d^2+cfd)$ in the exponential in (17) are independent of $\alpha$, while the coherence function $g^{(2)}(\tau)$ is rather sensitive to the value of $\alpha$. The validity of any one of the inequalities in Equations (21) and (22) is sufficient but none of them is actually necessary for nonclassicality. On the other hand, the nonclassicality criterion (20) for the one-time function $P(\beta)$ may not determine the nonclassicality of the two-time correlation function $g^{(2)}(\tau)$, and conversely. Therefore, condition (20) cannot be a necessary and sufficient condition for nonclassicality in this broader sense: when it is violated, implying that the system is in a classical state according to $P(\beta)$, the two-time correlation function may nonetheless exhibit nonclassical behavior. The numerical results for $g^{(2)}(\tau)$, as given by Figures 5 and 6, attest to this conclusion: condition (20) for $P(\beta)$ gives classical behavior for $\Omega \tau \leq 0.4493$, since $(2\bar{n}+1)e^{-2(\Omega \tau +r)}= 2.4562 e^{-2\Omega \tau}\geq 1$ for $\Omega\tau\leq0.4493$, whereas both Figures 5 and 6 indicate nonclassical behavior for $0<\Omega \tau<0.5605$. To minimize intensity fluctuations, it is always optimal to squeeze the amplitude quadrature, that is, to choose $\theta= 2\varphi$, which we impose on all our numerical work.
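The numerical thresholds quoted here and in the discussion of the figures follow directly from criterion (20); a short check (illustrative only):

```python
import math

# Check of the thresholds quoted in the text: classicality of P(beta)
# according to (20) holds while (2 nbar + 1) exp(-2 (Omega tau + r)) >= 1.
def classical_boundary(nbar, r):
    """Value of Omega*tau at which (2 nbar + 1) e^{-2(Omega tau + r)} = 1."""
    return 0.5 * math.log(2 * nbar + 1) - r

# nbar = 1.0, r = 0.1 (Figures 5 and 6): boundary at Omega tau = 0.4493,
# with prefactor (2 nbar + 1) e^{-2r} = 2.4562.
assert abs(classical_boundary(1.0, 0.1) - 0.4493) < 1e-4
assert abs(3 * math.exp(-0.2) - 2.4562) < 1e-4
# nbar = 0.1, r = 0.1 (Figures 1 and 2): (2 nbar + 1) e^{-2r} = 0.9825 < 1,
# so (20) signals nonclassicality for all Omega tau >= 0.
assert 1.2 * math.exp(-0.2) < 1
assert abs(1.2 * math.exp(-0.2) - 0.9825) < 1e-4
```

The boundary $\Omega\tau=\frac{1}{2}\ln(2\bar{n}+1)-r$ is negative whenever $(2\bar{n}+1)e^{-2r}<1$, in which case (20) holds for all $\tau\geq 0$.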
Figures 1 and 2 show the strictly classical features of the correlation function $g^{(2)}(\tau)$ for $\bar{n}=0.1$, $r=0.1$, and $|\alpha|=0$, since $g^{(2)}(\tau)$ violates the nonclassical inequalities given by Equations (21) and (22). Note, however, that the nonclassical inequality (20) is satisfied for $\Omega \tau\geq 0$ since $(2\bar{n} +1)e^{-2r}=0.9825<1$. Accordingly, the quasiprobability distribution $P(\beta)$ does not exist, since $P(\beta)$ does not vanish as $|\beta|\rightarrow \infty$; nonetheless, the correlation $g^{(2)}(\tau)$ exhibits classical behavior. Thus the nonclassical nature of the radiation field, according to the $P(\beta)$ criterion, does not imply that the correlation $g^{(2)}(\tau)$ must behave nonclassically. In order to show the strong dependence of the coherence function $g^{(2)}(\tau)$ on the coherent parameter $\alpha$, we show in Figure 3 the behavior of $g^{(2)}(\tau)$ for the same values $\bar{n}=0.1$ and $r=0.1$ as in Figure 1, but with $|\alpha| =2$. In Figure 3, both nonclassical inequalities in (21) are satisfied. In Figure 4, we plot the variable associated with inequality (22), which shows classicality for $0\leq \Omega\tau\leq 2.5793$ and nonclassicality for $\Omega\tau> 2.5793$. Thus the nonclassical nature of the radiation field, according to the $P(\beta)$ criterion (20), can also give rise to mixed classical/nonclassical behavior in the correlation $g^{(2)}(\tau)$. The condition $\lim _{\tau\rightarrow \infty} (g^{(2)}(\tau)-g^{(2)}(0))=0$ gives the critical value of $|\alpha|$, for given $\bar{n}$ and $r$, at which the inequality sign of the second inequality in (21) changes direction, that is, a critical point from classicality to nonclassicality. For instance, for the cases in Figures 1 and 3, $\bar{n}=0.1$, $r=0.1$, the critical value is $|\alpha_{c}|= 0.45397$. That is, $g^{(2)}(\infty)> g^{(2)}(0)$ for $|\alpha|>0.45397$ and $g^{(2)}(\infty)<g^{(2)}(0)$ for $|\alpha|<0.45397$.
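The quoted value $0.9825$ can be reproduced directly (a numerical sketch of our own; since $e^{-2\Omega\tau}\le 1$, the coefficient being below one already at $\tau=0$ means criterion (20) flags nonclassicality at every delay for these parameters).

```python
import math

nbar, r = 0.1, 0.1          # parameters of Figures 1-4
coeff = (2 * nbar + 1) * math.exp(-2 * r)
print(round(coeff, 4))      # 0.9825, already < 1 at tau = 0

# (2*nbar+1)*exp(-2*(Omega*tau + r)) = coeff*exp(-2*Omega*tau) < 1 for all tau >= 0
assert all(coeff * math.exp(-2 * t) < 1 for t in (0.0, 0.5, 1.0, 5.0))
```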
Figures 5 and 6 show the mixed classical/nonclassical nature of both $g^{(2)}(\tau)$ and $(|g^{(2)}(0)-1|-|g^{(2)}(\tau)-1|)$ for $\bar{n}=1.0$, $r=0.1$, and $|\alpha|=0$. In view of inequalities (21) and (22), both functions indicate nonclassical behavior for $0<\Omega \tau < 0.5605$ and classical behavior for $\Omega \tau \geq 0.5605$. The nonclassicality criterion (20) for the quasiprobability distribution $P(\beta)$ indicates classical behavior for $0\leq\Omega \tau\leq 0.4493$ and nonclassical behavior for $\Omega \tau> 0.4493$. Therefore, studies of the temporal second-order quantum mechanical correlation function $g^{(2)}(\tau)$, for instance using a Hanbury Brown-Twiss intensity interferometer modified for homodyne detection \cite{GSSRL07}, will show the nonclassical nature of the correlation, contrary to what the nonclassicality criterion (20) would indicate. One must recall that the difference between criterion (20) and criteria (21) and (22) is that the former is based on one-time measurements of the system, whereas the latter involve two-time measurements. Finally, Figure 7 shows the Mandel parameter $Q_{M}(\tau)$ for $\bar{n} =0.1$, $r=0.1$, and $|\alpha|=2$. The system exhibits nonclassical behavior for $0\leq \Omega \tau <1.7704$ and classical behavior for $\Omega \tau \geq 1.7704$. The field is photon-number squeezed and exhibits sub-Poissonian statistics since $-1\leq Q_{M}(\tau) < 0$. Notice from Figures 3, 4, and 7 that nonclassical effects often occur together, but each can occur in the absence of the others.
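The one-time ingredients entering $g^{(2)}(\tau)$, namely $n(\tau)$ and $s(\tau)$ from the Appendix, admit two quick sanity checks (our own numerical sketch; for the second check we assume that the $\alpha$-dependent contributions $u$, $v$, and $|A(\tau)|^2$ vanish at $\alpha=0$, consistent with the appendix expressions for $u(\tau)$ and $v(\tau)$, which are proportional to $\alpha$).

```python
import math

# n(tau) and s(tau) as given in the Appendix
def n_tau(tau, nbar, r, Omega=1.0):
    return (nbar + 0.5) * math.cosh(Omega * tau + 2 * r) - 0.5 * math.cosh(Omega * tau)

def s_tau(tau, nbar, r, Omega=1.0):
    return (nbar + 0.5) * math.sinh(Omega * tau + 2 * r) - 0.5 * math.sinh(Omega * tau)

# Check 1: n(0) = (nbar + 1/2)cosh(2r) - 1/2 coincides with the familiar
# photon number nbar*cosh(2r) + sinh(r)^2 of a squeezed thermal state.
for nbar in (0.1, 1.0):
    for r in (0.0, 0.1, 0.5):
        lhs = n_tau(0.0, nbar, r)
        rhs = nbar * math.cosh(2 * r) + math.sinh(r) ** 2
        assert abs(lhs - rhs) < 1e-12

# Check 2: with alpha = 0 (so u = v = 0) and r = 0, tau = 0, the coherence
# formula reduces to the thermal value g2(0) = 1 + (n^2 + s^2)/n^2 = 2.
nbar, r = 0.4, 0.0
g2_0 = 1 + (n_tau(0, nbar, r) ** 2 + s_tau(0, nbar, r) ** 2) / n_tau(0, nbar, r) ** 2
assert abs(g2_0 - 2.0) < 1e-12
```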
\begin{figure} \caption{Temporal second-order correlation function $g^{(2)}(\tau)$ for $\bar{n}=0.1$, $r=0.1$, and $|\alpha|=0$.} \label{fig:fig1} \end{figure} \begin{figure} \caption{Plot of $(|g^{(2)}(0)-1|-|g^{(2)}(\tau)-1|)$ for $\bar{n}=0.1$, $r=0.1$, and $|\alpha|=0$.} \label{fig:fig2} \end{figure} \begin{figure} \caption{Temporal second-order correlation function $g^{(2)}(\tau)$ for $\bar{n}=0.1$, $r=0.1$, and $|\alpha|=2$.} \label{fig:fig3} \end{figure} \begin{figure} \caption{Plot of $(|g^{(2)}(0)-1|-|g^{(2)}(\tau)-1|)$ for $\bar{n}=0.1$, $r=0.1$, and $|\alpha|=2$.} \label{fig:fig4} \end{figure} \begin{figure} \caption{Temporal second-order correlation function $g^{(2)}(\tau)$ for $\bar{n}=1.0$, $r=0.1$, and $|\alpha|=0$.} \label{fig:fig5} \end{figure} \begin{figure} \caption{Plot of $(|g^{(2)}(0)-1|-|g^{(2)}(\tau)-1|)$ for $\bar{n}=1.0$, $r=0.1$, and $|\alpha|=0$.} \label{fig:fig6} \end{figure} \begin{figure} \caption{Plot of the Mandel parameter $Q_{M}(\tau)$ for $\bar{n}=0.1$, $r=0.1$, and $|\alpha|=2$.} \label{fig:fig7} \end{figure} \section{Summary and discussions} We calculate the one-time quasiprobability distribution $P(\beta)$ and the two-time, second-order coherence function $g^{(2)}(\tau)$ for the Gaussian states (2), viz., displaced-squeezed thermal states, where the dynamics is governed solely by the general, degenerate parametric amplification Hamiltonian (1). We use these exact results to analyze the different characterizations of nonclassicality. We find from our numerical studies that satisfying any of the conditions for the coherence function $g^{(2)}(\tau)$ given in Equations (21) and (22) is sufficient for nonclassicality; however, violation of both conditions (21) and (22) does not ensure strictly classical behavior. We find examples whereby the nonclassicality condition (20) for $P(\beta)$ is satisfied while the coherence function $g^{(2)}(\tau)$ satisfies all the known classical conditions and, conversely, whereby the nonclassicality condition (20) is violated, that is, the quasiprobability distribution $P(\beta)$ exists, while the coherence function $g^{(2)}(\tau)$ nonetheless exhibits nonclassical behavior.
Therefore, it does not seem possible to find a single set of necessary and sufficient conditions, based on the state of the system and measurements of observables of the system, which would unequivocally establish the classical or nonclassical nature of the radiation field. \appendix* \section{Second-order coherence} The degree of second-order temporal coherence is \cite{MA16} \begin{equation} g^{(2)}(\tau)= 1 + \frac{n^2(\tau)+ s^2(\tau) +u(\tau) n(\tau) -v(\tau)s(\tau)}{\langle \hat{a}^{\dag}(0)\hat{a}(0)\rangle \langle \hat{a}^{\dag}(\tau)\hat{a}(\tau) \rangle}, \end{equation} where \begin{equation} n(\tau)= (\bar{n}+ 1/2)\cosh \big( \Omega\tau + 2r\big) -(1/2)\cosh (\Omega\tau), \end{equation} \begin{equation} s(\tau)= (\bar{n}+ 1/2)\sinh \big(\Omega\tau + 2r\big) -(1/2)\sinh (\Omega\tau), \end{equation} \begin{equation} u(\tau) = \alpha A^*(\tau) + \alpha^* A(\tau), \end{equation} and \begin{equation} v(\tau)=\alpha A(\tau)\exp{(-i\theta)}+\alpha^* A^*(\tau)\exp{(i\theta)}, \end{equation} where $A(\tau)$ is defined by Equation (10). The time development of the photon number is given by \[ \textup{Tr}[\hat{\rho}(t+\tau)\hat{a}^\dag \hat{a}] = \langle \hat{a}^\dag(\tau) \hat{a}(\tau)\rangle = \langle\hat{n}(\tau)\rangle \] \begin{equation} =(\bar{n}+1/2)\cosh[2(\Omega\tau+r)] + |A(\tau)|^2 -\frac{1}{2}. \end{equation} Equation (10) is the correct expression for $A(\tau)$, and not that given in Ref. \cite{MA16}: in Equation (13) of that reference, the purely imaginary number $i$ should not be present. Similarly, there is no $i$ in the square brackets of Equation (A2) in Ref. \cite{MA16}. \newpage \end{document}
\begin{document} \title{Adiabatic approximation for the motion of Ginzburg-Landau vortex filaments} \author{Jingxuan Zhang} \maketitle \abstract{ In this paper, we consider the concentration property of solutions to the dispersive Ginzburg-Landau (or Gross-Pitaevskii) equation in three dimensions. On a spatial domain, it has long been conjectured that such a solution concentrates near some curve evolving according to the binormal curvature flow, and conversely, that a curve moving this way can be realized in a suitable sense by some solution to the dispersive Ginzburg-Landau equation. Some partial results are known under rather strong symmetry assumptions. Our main theorems here provide affirmative answers to both conjectures under a certain small curvature assumption. The results are valid for small but fixed material parameter in the equation, in contrast to the general practice of taking this parameter to its zero limit. The advantage is that we retain a precise description of the vortex filament structure. The results hold on a long but finite time interval, depending on the curvature assumption.} \section{Introduction} Consider the dispersive Ginzburg-Landau (or Gross-Pitaevskii) equation on a spatial domain $\Omega\subset \mathbb{R}^3$, \begin{equation} i\pd{\psi}{t} =-\Delta \psi+\frac{1}{\epsilon^2}(\abs{\psi}^2-1)\psi. \label{1.1} \end{equation} This equation has its origin in condensed matter physics. The complex scalar field $\psi:\Omega\to\mathbb{C}$ describes the Bose-Einstein condensate in superfluidity or, ignoring the effect of the magnetic field, the electronic condensate in superconductivity. Here $\epsilon>0$ is a material parameter measuring the characteristic length scale of $\abs{\psi}$. Realistically, the coherence length $\epsilon$ is very small compared to the size of the domain $\Omega$.
Thus in what follows we take \begin{equation} \label{Om} \Omega:=\Set{(x,z):x\in\omega\subset\mathbb{R}^2, z\in I:=[0,1]}, \end{equation} where $\omega$ is a bounded domain in $\mathbb{R}^2$ whose size is large compared to the scale $\epsilon$. The choice $I=[0,1]$ can be replaced by any interval $[a,b]$ with $b-a\gg\epsilon$, which amounts to a change of coordinate along the $z$-direction. The reason for taking a bounded domain is to circumvent certain undesirable decay properties that are not essential to our argument (see Sec. \ref{sec:2.1} for details). In addition, for technical reasons we also assume that $\omega$ is star-shaped around the origin and that the boundary $\partial\omega$ is smooth. On this cylindrical domain $\Omega$, we impose the boundary conditions $\abs{\psi}\to 1 $ as $x$ approaches $\partial\omega$ horizontally, and $\psi(x,0)=\psi(x,1)$ for every $x\in \omega$. Note that through the transform $\psi=e^{i\epsilon^{-2}t}u$, equation \eqref{1.1} becomes the cubic defocusing subcritical nonlinear Schr\"odinger equation. Global well-posedness for such equations has been known since the eighties \cite{MR533218}. Using standard blow-up arguments, one can show that for every initial configuration $u_0\in H^1(\Omega)$ satisfying the boundary conditions above, there exists a unique global solution $u_t\in H^1(\Omega)$ to \eqref{1.1} generated by $u_0$. Therefore in this paper we will mainly work with the Sobolev spaces $H^k,\,k\ge1$. \subsection{The geometric structure of \eqref{1.1}} \label{sec:1.1} The dispersive Ginzburg-Landau equation \eqref{1.1} can be viewed as a Hamiltonian system as follows. Let $X\subset L^2(\Omega,\mathbb{C})$ be a suitable configuration space for \eqref{1.1}.
If we identify the space $L^2(\Omega,\mathbb{C})=L^2(\Omega,\mathbb{R})\times L^2(\Omega,\mathbb{R})$ through $u\mapsto (\Re u,\Im u)$, then we can regard $X$ as a real vector space, equipped with the inner product $\inn{(\Re u,\Im u)}{(\Re v,\Im v)}=\int(\Re u\Re v+\Im u\Im v).$ Under this identification, the operator $J:\psi\mapsto -i\psi$ can be represented by the symplectic matrix \begin{equation} \label{4.0} J=\begin{pmatrix}0&1\\-1&0\end{pmatrix}. \end{equation} Thus we say $J$ is a symplectic operator, in the sense that it induces a symplectic form $\tau=\inn{J\cdot}{\cdot}$ on $X$, satisfying $\tau({w},{w'})=-\tau({w'},{w})$ for every $w,w'\in X$. Using the operator $J$, we can write \eqref{1.1} as the Hamiltonian system \begin{equation} \label{1.2} \pd{\psi}{t}=JE'(\psi),\quad E(\psi)=E_\Omega^\epsilon(\psi):=\int_\Omega \frac{1}{2}\abs{\nabla\psi}^2+\frac{1}{4\epsilon^2}\del{\abs{\psi}^2-1}^2. \end{equation} Here $E$ is the Ginzburg-Landau energy, measuring the difference of Helmholtz free energy between a transition phase $\psi$ and the normal phase. $E'(\psi)$ denotes the $L^2$-gradient of the energy functional w.r.t. the real inner product given above. This gradient $E'(\psi)$ is given explicitly as the r.h.s. of \eqref{1.1}. It is well-known by now that as the material parameter $\epsilon\to0$ in \eqref{1.1}, the energy $E_\Omega^\epsilon$ of a finite-energy configuration $\psi$ concentrates near some lower dimensional manifold $\gamma$. This concentration phenomenon can be made precise in measure theoretic terms. For instance, see \cite{MR1491752,MR1864830,MR2195132}.
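As an elementary sanity check (our own sketch, not taken from the paper), one can verify numerically that the matrix in \eqref{4.0} acts on $(\Re u,\Im u)$ as multiplication by $-i$ and that the induced form $\tau=\inn{J\cdot}{\cdot}$ is antisymmetric.

```python
import random

# J acts on (Re u, Im u); multiplication by -i sends a + bi to b - ai.
def J(w):
    a, b = w
    return (b, -a)

def inner(w, wp):
    return w[0] * wp[0] + w[1] * wp[1]

def tau_form(w, wp):
    return inner(J(w), wp)

random.seed(0)
for _ in range(100):
    w = (random.uniform(-1, 1), random.uniform(-1, 1))
    wp = (random.uniform(-1, 1), random.uniform(-1, 1))
    # antisymmetry of the symplectic form
    assert abs(tau_form(w, wp) + tau_form(wp, w)) < 1e-12
    # J represents multiplication by -i: J(J(w)) = -w, i.e. (-i)^2 = -1
    assert J(J(w)) == (-w[0], -w[1])
```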
This leads one to expect, at least at the heuristic level, that as $\epsilon\to0$, \eqref{1.2} reduces, in an appropriate manner, to a Hamiltonian system in the space of curves, \begin{equation} \label{1.3} \pd{\gamma}{t}=\mathcal{J} L'(\gamma), \end{equation} where $\gamma=\gamma_t$ is a $C^1$ path of curves in $\Omega$, and $\mathcal{J}$ is a suitable (symplectic) operator. \subsection{Outline of the main results} Suppose $\gamma$ in \eqref{1.3} is parametrized by arclength, $L(\gamma)=\int\abs{\partial_s\gamma}^2$, $L'(\gamma)=-\partial_{ss}\gamma$, and $\mathcal{J}:=-\partial_s\gamma\times$. Then \eqref{1.3} becomes the binormal curvature flow \eqref{BNF}. The purpose of this paper is to formulate rigorously the connection between the dispersive Ginzburg-Landau equation \eqref{1.1} and the binormal curvature flow. To this end, we first make precise the notion of concentration set in \lemref{lem2.2} for a class of low energy configurations. Then our first main result, \thmref{thm3.1}, states that if $u_t$ is a solution to \eqref{1.1} generated by $u_0$, and $u_0$ has concentration set $\gamma_0$, then the flow $u_t$ has concentration set given in the leading order by $\gamma_t$, the flow generated by $\gamma_0$ under the binormal curvature flow \eqref{BNF}. As a corollary, we derive the second main result, \thmref{thm3.2}, which states the converse: if $\gamma_t$ is the flow generated by $\gamma_0$, then we can find some solution $u_t$ to \eqref{1.1} such that the concentration set associated to $u_t$ is given in the leading order by $\gamma_t$. Both main results hold on some long but possibly finite time interval depending on the initial configuration.
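To make the binormal law $\partial_t\gamma=\partial_s\gamma\times\partial_{ss}\gamma$ concrete, here is a small numerical sketch of our own: for a circle of radius $R$ parametrized by arclength, the binormal velocity is the constant vector $(0,0,1/R)$, i.e. the circle translates rigidly along its axis at a speed equal to its curvature.

```python
import math

def gamma(s, R):
    """Circle of radius R in the xy-plane, parametrized by arclength."""
    return (R * math.cos(s / R), R * math.sin(s / R), 0.0)

def binormal_velocity(s, R, h=1e-4):
    """gamma_s x gamma_ss via central finite differences."""
    def d1(f, s):
        return tuple((a - b) / (2 * h) for a, b in zip(f(s + h, R), f(s - h, R)))
    def d2(f, s):
        return tuple((a - 2 * b + c) / h**2
                     for a, b, c in zip(f(s + h, R), f(s, R), f(s - h, R)))
    t, k = d1(gamma, s), d2(gamma, s)
    return (t[1] * k[2] - t[2] * k[1],
            t[2] * k[0] - t[0] * k[2],
            t[0] * k[1] - t[1] * k[0])

# For a circle of radius R = 2, the binormal velocity is (0, 0, 1/2).
v = binormal_velocity(0.7, 2.0)
assert abs(v[0]) < 1e-5 and abs(v[1]) < 1e-5 and abs(v[2] - 0.5) < 1e-5
```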
The precise statements of the main results of this paper are as follows: \begin{theorem} \label{thm0} \begin{enumerate} \item For any $\epsilon>0$, there are $\delta_1,\,\delta_2\ll\epsilon$ such that the following holds: Let $M$ be the manifold of approximate vortex filaments as in \defnref{M}. Let $u_0\in X^1+\psi_0$ be an initial configuration in the energy space ($X^k$ and $\psi_0$ are defined in Sec. \ref{sec:2.2}), such that $\dist_{X^2}(u_0, M)<\delta_2$. Let $u_t$ be the flow generated by $u_0$ under \eqref{1.1}. Then there is some $T>0$ independent of $\epsilon$ and $\delta_1,\,\delta_2$, such that for $\epsilon t\le T$, there exists a flow of moduli $\sigma_t$ associated to $u_t$ as in \lemref{lem2.2}, with \begin{equation} \norm{u_t-f(\sigma_t)}_{X^2}=o(\sqrt{\delta_1}), \end{equation} where $f:\Sigma_{\delta_1}\to M$ is the parametrization defined in \eqref{2.1}. Moreover, for $\epsilon t\le T$, the flow of moduli $\sigma_t$ evolves according to \begin{equation} \partial_t\sigma=\mathcal{J}_\sigma^{-1}d_\sigma E(f(\sigma))+o_{\norm{\cdot}_{Y^0}}(\delta_1), \end{equation} where the operator $\mathcal{J}_\sigma$ is defined in \eqref{2.3}. \item For any $\beta>0$, there exist $\delta,\epsilon_0>0$ such that the following holds: Let $\epsilon<\epsilon_0$ in \eqref{1.1}. Let $\sigma_0=(\lambda_0,\gamma_0)\in\Sigma_\delta$ be given, where $\Sigma_\delta$ is defined in \eqref{Si}. Let $\vec\gamma_t$ be the flow generated by $(\gamma_0(z),z)$ under the binormal curvature flow \eqref{BNF}.
Then there exist a solution $u_t$ to \eqref{1.1} and some $T>0$ independent of $\epsilon$ and $\delta$, such that for all $\epsilon t\le T$, $X\in C^1_c(\mathbb{R}^3,\mathbb{R}^3)$ and $\phi\in C^1_c(\mathbb{R}^3,\mathbb{R})$ with $\norm{X}_{C^1},\,\norm{\phi}_{C^1}=O(\delta^{-1/4})$, we have \begin{align} &\abs{\int_\Omega X\times Ju-\pi\int_{\vec\gamma_t}X}\le \beta,\\ &\abs{\int_\Omega \frac{e(u)}{\abs{\log\epsilon}}\phi-\pi\int_{\vec\gamma_t}\phi\,d \mathcal{H}^1}\le \beta , \end{align} where for $\psi=\psi^1+i\psi^2$, $$ e(\psi)= \frac{1}{2}\abs{\nabla\psi}^2+\frac{1}{4\epsilon^2}(\abs{\psi}^2-1)^2,\quad J\psi=\nabla\psi^1\times \nabla\psi^2.$$ \end{enumerate} \end{theorem} \begin{remark} In the formulation of \thmref{thm0}, we do not adopt the convention of taking $\epsilon\to0$. Instead, for \textit{small but fixed} $\epsilon$, the main assumption is that the concentration sets $\gamma_t,\,t\ge0$ have uniformly small curvature. (See the definitions of the various spaces in \thmref{thm0} in Sec. \ref{sec:2.2}.) This allows us to retain a precise description of the vortex structure of the evolving configurations. \end{remark} We obtain the results in \thmref{thm0} by developing an adiabatic approximation scheme for \eqref{1.1}. This allows us to decompose a flow $u_t$ solving \eqref{1.1} into a slowly evolving main part $v_t$ and a uniformly small remainder $w_t:=u_t-v_t$. The flow $v_t$ consists of low energy configurations whose motion is explicitly governed by their concentration sets $\gamma_t$. The small curvature assumption on $\gamma_t$ is used here to ensure that $v_t$ has low energy, which is essential for the validity of the adiabatic approximation. (If the curvature is large at a point on the concentration curve, then the speed of the vortex filament at this point is large, and consequently the adiabatic approximation breaks down.) The main technical ingredients are: 1.
to show that the concentration sets of $v_t$ indeed evolve according to the binormal curvature flow \eqref{BNF} to leading order, \textit{assuming the remainder $w_t$ is small}, and 2. to obtain an a priori estimate on the remainder $w_t$. The second part is analogous to proving dynamical stability of solitons, except that the configurations $v_t$ need not be stationary. \subsection{Historical remark} Let us briefly comment on the existing literature on related problems. If the evolution \eqref{1.1} is instead the \textit{energy dissipative} dynamics (so that \eqref{1.1} becomes a cubic nonlinear heat equation), a rigorous result that formulates the kind of connection in which we are interested is known \cite{MR2195132}. This is one example of a vast literature in the $\epsilon\to0$ scheme. See the excellent review \cite{MR3729052} of works along this line. Note that our method is completely different from the measure theoretic schemes for the $\epsilon\to0$ limit. One can also ask similar questions about the hyperbolic and gauged analogues of \eqref{1.1}. The method we develop in the present paper is applicable to the latter cases, which we will treat elsewhere. The problem we study here is motivated by a conjecture of R.L. Jerrard \cite[Conj. 7.1]{MR3729052}. The method we use here is inspired by a series of papers by I.M. Sigal and several co-authors \cite{MR1644389, MR1644393, MR2189216,MR2465296, MR3803553, MR3824945}. Other results along this line to which we have referred include \cite{MR1257242,MR1309545,MR2097576,MR2525632,MR2576380,MR2805465,MR3397388,MR3645729,MR3619875,MR4107807}. Most of these papers consider planar problems (possibly after some symmetry reduction), or higher dimensional problems with concentration sets given by finite collections of points. In this regard, the geometry of our problem is more involved.
\subsection{Arrangement} The structure of this paper is as follows: In Section 2, we construct the manifold $M$ consisting of what we call approximate vortex filaments. These are low energy configurations with explicit concentration sets. We find a path of appropriate approximate filaments $v_t$ as the ``adiabatic part'' of a given solution $u_t$ to \eqref{1.1}, provided $u_t$ stays uniformly close to $M$. In Section 3, we show that the path $v_t$ as above is governed by the binormal curvature flow \eqref{BNF}. We then prove the two main theorems by finding an a priori estimate for $w_t:=u_t-v_t$, valid on a long but finite interval. The proofs rely on the adiabaticity of the approximate filaments, as well as the energy conservation for \eqref{1.1}. In Section 4, we discuss the properties of certain linearized operators involved in the preceding analysis. In the Appendix, we recall some basic concepts of Fr\'echet derivatives that we use repeatedly, and collect various technical estimates for operators used earlier. \subsection*{Notations} Throughout the paper, when no confusion arises, we shall drop the time dependence $t$ in subscripts. A domain $\Omega\subset\mathbb{R}^d$ is an open connected subset. The symbols $L^p(\Omega),\,H^s(\Omega),\, 0<p\le \infty,\,s\in \mathbb{R}$ denote respectively the Lebesgue space of order $p$ and the Sobolev space of order $s$, consisting of functions from a domain $\Omega$ into $\mathbb{C}$. \section{Approximate vortex filaments} In this section, we construct the manifold of approximate (vortex) filaments and discuss their key properties. \subsection{The planar 1-vortex}\label{sec:2.1} On the entire plane $\mathbb{R}^2$, it is well-known that \eqref{1.1} has smooth, stationary, radially symmetric solutions $\psi^{(n)}:\mathbb{R}^2\to\mathbb{C}$, where $n\in\mathbb{Z}$ labels the winding number $$\deg\psi\vert_{\abs{x}=R}=n$$ for large $R\gg0$.
The characteristic feature of $\psi^{(n)}$ is that they concentrate near the origin $r=0$. Among these vortices, the simple ones with $\abs{n}=1$ are stable, and the higher order ones with $\abs{n}>1$ are unstable \cite{MR1479248}. For this reason, in what follows we will only be concerned with the $1$-vortex. Write $\psi^{(1)}=\varphi(r)e^{i\theta}$ in polar coordinates. Then $\varphi\in C^\infty$, $\varphi'>0$ for $r>0$, and the following asymptotics hold \cite[Sec. 3.1]{MR1763040}: \begin{equation} \label{2.0} \varphi\sim 1-\frac{\epsilon^2}{2r^2}\quad(r\to\infty),\quad \varphi\sim \frac{r}{\epsilon}-\frac{r^3}{8\epsilon^3}\quad (r\to0). \end{equation} The planar stationary equation $$-\Delta \psi+\frac{1}{\epsilon^2}\del{\abs{\psi}^2-1}\psi=0, \quad \psi:\mathbb{R}^2\to\mathbb{C}$$ is called the (ungauged) Ginzburg-Landau equation. It has rotation, translation, and global gauge symmetries. Among these, the latter two classes are broken by $\psi^{(1)}$. Consequently, if $L^{(1)}_x:H^2(\omega)\to L^2(\omega)$ is the linearized operator at $\psi^{(1)}$, then the vectors $$\pd{\psi^{(1)}}{x_j},\,i\psi^{(1)},\quad j=1,2$$ are in the kernel of $L^{(1)}_x$. We call these vectors \textit{the symmetry zero modes}. Note that these modes are not in $L^2(\mathbb{R}^2)$. On a bounded star-shaped domain $\omega\subset \mathbb{R}^2$ around the origin, the following estimates are well-known \cite[Chap. III, X]{MR1269538}: \begin{align} &E_\omega(\psi^{(1)})\le\pi\abs{\log\epsilon}+C(\omega)\label{2.0.1},\\ &\norm{\psi^{(1)}}_{L^\infty(\omega)}\le1\label{2.0.4},\\ &\norm{\nabla\psi^{(1)}}_{L^\infty(\omega)}\le\frac{C(\omega)}{\epsilon}\label{2.0.2},\\ &\norm{\nabla\psi^{(1)}}_{L^2(\omega)}\le C(\omega)\abs{\log\epsilon}^{1/2}\label{2.0.5},\\ &\frac{1}{\epsilon^2}\int_\omega \del{\abs{\psi^{(1)}}^2-1}^2\le C(\omega)\label{2.0.3}. \end{align} The finite energy property \eqref{2.0.1} fails if $\omega$ is unbounded.
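The small-$r$ expansion in \eqref{2.0} can be checked against the radial equation directly (our own sketch; we assume the standard radial form $\varphi''+\varphi'/r-\varphi/r^2+\epsilon^{-2}(1-\varphi^2)\varphi=0$ for a degree-one vortex, which is not written out in the text above). The $O(r)$ terms cancel exactly, leaving a residual of order $r^3$.

```python
def phi(r, eps):
    """Small-r expansion of the degree-one vortex profile, cf. (2.0)."""
    return r / eps - r**3 / (8 * eps**3)

def dphi(r, eps):
    return 1 / eps - 3 * r**2 / (8 * eps**3)

def d2phi(r, eps):
    return -3 * r / (4 * eps**3)

def residual(r, eps):
    """Assumed radial Ginzburg-Landau operator applied to the expansion:
    phi'' + phi'/r - phi/r^2 + (1 - phi^2) phi / eps^2."""
    p = phi(r, eps)
    return d2phi(r, eps) + dphi(r, eps) / r - p / r**2 + (1 - p**2) * p / eps**2

# The O(r) terms cancel; the residual is O(r^3) as r -> 0.
eps = 0.5
for r in (1e-2, 1e-3, 1e-4):
    assert abs(residual(r, eps)) < 2.0 * r**3 / eps**5
```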
Consequently, if $\omega$ is unbounded (say $\omega=\mathbb{R}^2$), then in general even the simple planar vortex $\psi^{(1)}$ has infinite energy. Moreover, the translation zero modes are not in $L^2$. One can study the problem with $\omega=\mathbb{R}^2$ either by using some kind of renormalized energy \cite{MR1479248}, or by posing the problem in a weighted Sobolev space \cite{MR1763040}. Though it would incur significantly more involved estimates, we believe the arguments below can extend to the non-compact case using one of these methods. \subsection{Construction of approximate vortex filaments}\label{sec:2.2} To construct the manifold of approximate vortex filaments, we first define some configuration spaces. Put \begin{align} \label{X} &X^s:=\Set{\psi\in H^s(\Omega,\mathbb{C}):\psi\vert_{\partial\omega\times I}=0,\;\psi(x,0)=\psi(x,1)\text{ for every } x\in \omega},\\ &Y^k:=\mathbb{R}\times C^k(I,\mathbb{R}^2)\quad (s\in\mathbb{R},\, k\in \mathbb{N}).\label{Y} \end{align} We write elements of $Y^k$ as $\sigma=(\lambda,\gamma)$. $X^s,\,Y^k$ are real Hilbert spaces with the inner products given respectively by \begin{align} \label{2.00X} &\inn{\psi}{\psi'}_X=\int_\Omega \Re(\overline{\psi}\psi')\quad (\psi,\psi'\in X^s),\\ &\inn{\sigma}{\sigma'}_Y=\int_I \gamma\cdot \gamma'+\lambda\lambda'\quad(\sigma,\sigma'\in Y^k).\label{2.00Y} \end{align} The norm on $Y^k$ is $\norm{(\lambda,\gamma)}_{Y^k}:=\norm{\gamma}_{C^k}+\abs{\lambda}$. The Ginzburg-Landau energy $E$ defined in \eqref{1.2} is smooth on $1+X^k$ with $k\ge1$, which we call \textit{the energy space}. Write $$C^k_\text{per}:=\Set{\gamma\in C^k(I,\omega):\gamma(0)=\gamma(1)}\quad (k\in \mathbb{N}).$$ Here we require a periodic boundary condition for $\gamma$, so as to match the periodic boundary condition along the $z$-axis for \eqref{1.1}.
Indeed, we impose such boundary conditions so that the arguments below can be naturally extended to an infinitely long cylindrical domain with $I=\mathbb{R}$. In general, one can take $I=[0,a]$ for any $a\gg\epsilon$, and impose other appropriate boundary conditions. Notice that if one varies the vertical boundary condition for \eqref{1.1}, then the definition of $C^k_\text{per}$ must be changed accordingly. Define \begin{align} &\Sigma:=\mathbb{R}\times \Set{\gamma \in C^2_\text{per}:z\mapsto (\gamma(z),z)\text{ is an embedding of $I$ into $\Omega$}},\\ &\Sigma_\delta:=\Set{\sigma\in \Sigma:\norm{\sigma}_{Y^2}<\delta}.\label{Si} \end{align} We view $\Sigma$ as a manifold. Each fibre $T_\sigma\Sigma$ can be trivialized as a subspace of $Y^0$. We view $\Sigma_\delta$ as an open submanifold of $\Sigma$. \begin{definition}[Manifold of approximate vortex filaments] \label{M} For $\gamma\in C^k_\text{per}$ and a function $\psi:\Omega\to \mathbb{C}$, let $\psi_\gamma(x,z):=\psi^{(1)}(x-\gamma(z))$, where $\psi^{(1)}$ is the simple planar vortex of Sec. \ref{sec:2.1}. Define the map \begin{equation} \label{2.1} \fullfunction{f}{\Sigma_\delta}{X^0+\psi_0}{\sigma=(\lambda,\gamma)}{e^{i\lambda}\psi_\gamma(x,z)}, \end{equation} where $\psi_0:\Omega\to \mathbb{C}$ is the lift of the planar vortex $\psi^{(1)}$ to $X^0$. The map $f$ is $C^1$, as we show in the Appendix. $f$ parametrizes a submanifold $$M:=f(\Sigma_\delta)\subset X^0+\psi_0.$$ The tangent space to $M$ at $f(\sigma)$ is $T_{f(\sigma)}M=df(\sigma)(T_\sigma \Sigma_\delta)$, where $df(\sigma)$ denotes the Fr\'echet derivative of $f$ at $\sigma$, given explicitly in the Appendix. We call the elements of $M$ the \textit{approximate vortex filaments}. \end{definition} \begin{remark} The construction of $M$ is motivated by the symmetries broken by $\psi^{(1)}$.
We use the term \emph{approximate (vortex) filaments} because the configurations in $M$ concentrate, by construction, near curves around $\Set{0}\times I$. We can trivialize $T_{f(\sigma)}M $ as a subspace of $X^0$. The spaces $M,\,\Sigma$ are Riemannian manifolds w.r.t. the inner products given in \eqref{2.00X}-\eqref{2.00Y}. \end{remark} \subsection{Properties of approximate filaments} In what follows, we always assume the material parameter $\epsilon\ll1$ in \eqref{1.1}. A key observation is that since $u:=\abs{\psi^{(1)}}$ (resp. $v:=\abs{\nabla\psi^{(1)}}$) is strictly increasing (resp. decreasing) sufficiently far away from $r=0$, using the asymptotics in \eqref{2.0}, we have control over the oscillations of $u$ and $v$: \begin{equation} \label{4.1} \abs{u(r)-u(s)}\le C\frac{\epsilon^2}{R^2},\quad \abs{v(r)-v(s)}\le C\frac{\epsilon}{R}\quad(r>s\ge R\gg0), \end{equation} where $C$ is independent of $\epsilon$. Let $\alpha>0$ be given such that the planar domain $\omega$ contains the ball of radius $1+\epsilon^\alpha$. Then it is not hard to see that for $\sigma\in\Sigma_{\epsilon^\alpha}$, $$\begin{aligned} \norm{f(\sigma)}_{X^0}^2&=\int_\Omega\abs{\psi_\gamma}^2\\ &=\int_I\int_\omega \abs{\psi_\gamma}^2(x,z)\,dxdz\\ &\le\int_I\del{\int_\omega \abs{\psi^{(1)}}^2(x)\,dx+\int_\omega \del{\abs{\psi_\gamma}^2(x,z)-\abs{\psi^{(1)}}^2(x)}\,dx}\,dz\\ &\le\int_I\del{\int_\omega \abs{\psi^{(1)}}^2(x)\,dx+\norm{\sigma}_{Y^0}\diam(\omega)\sup_{r>s\ge1}\abs{u(r)-u(s)}^2}\,dz\\ &\le\norm{\psi^{(1)}}_{L^2(\omega)}^2+C(\omega)\epsilon^{4+\alpha}. \end{aligned}$$ Therefore we have \begin{equation} \label{4.2} \norm{\psi_\gamma}_{X^0}=\norm{\psi^{(1)}}_{L^2(\omega)}+O(\epsilon^{2+\alpha/2}).\end{equation} Similarly, one can show using \eqref{4.1} that \begin{equation} \label{4.3} \norm{\nabla_x\psi_\gamma}_{X^0}=\norm{\nabla_x\psi^{(1)}}_{L^2(\omega)}+O(\epsilon^{1+\alpha/2}).
\end{equation} In what follows, for a $C^1$-curve $\gamma:I\to \mathbb{R}^d$, we write $\gamma_z=\partial_z\gamma$, etc. (not to be confused with the subscript $t$ in time-parametrized families). \begin{lemma}[approximate critical point] \label{lem2.1} Let $\alpha>0$. If $\sigma\in \Sigma_{\epsilon^\alpha}$, then $\norm{E'(f(\sigma))}_{X^0}\le C\epsilon^{\alpha}\abs{\log\epsilon}^{1/2}$, where $C$ is independent of $\sigma$. \end{lemma} \begin{proof} Using the fact that $\psi^{(1)}$ is a stationary solution to \eqref{1.1}, one can compute $$E'(f(\sigma))=e^{i\lambda}\del{\nabla_x \psi_\gamma \cdot \gamma_{zz}-\nabla^2_x\psi_\gamma\gamma_z\cdot \gamma_z}.$$ For $\norm{\gamma}_{C^2}\ll1$, the leading order term in this expression is $\nabla_x \psi_\gamma \cdot \gamma_{zz}$. One can estimate this as $$\begin{aligned} \norm{\nabla_x \psi_\gamma \cdot \gamma_{zz}}_{X^0}&\le2\norm{\nabla_x\psi_\gamma}_{X^0}\norm{\gamma_{zz}}_{C^0}\\ &\le 2 \del{\norm{\nabla_x\psi^{(1)}}_{L^2(\omega)}+C(\omega)\epsilon^{1+\alpha/2}}\norm{\gamma}_{C^2}\\&\le C(\omega)\epsilon^{\alpha}\abs{\log\epsilon}^{1/2}, \end{aligned}$$ where in the last step one uses $\sigma\in\Sigma_{\epsilon^\alpha}$ and the estimates \eqref{2.0.5}, \eqref{4.3}. \end{proof} To simplify notation, write $g_\sigma:Y^0\to X^0$ for the action of $df(\sigma):T_\sigma\Sigma_\delta\to T_{f(\sigma)}M$ on each fibre. Let $g_\sigma^*$ be the adjoint of $g_\sigma$ w.r.t. the inner products defined in \eqref{2.00X}-\eqref{2.00Y}. In the Appendix, we calculate these operators explicitly in \eqref{A.1}-\eqref{A.2}. \begin{remark} Note here that 1. $g_\sigma$ is injective, and therefore $f$ is an immersion; 2.
Using the regularity of $\psi^{(1)}$ and Sobolev embedding, we have $$g_\sigma:Y^0\to X^s,\quad g_\sigma^*:X^r\to Y^0\quad (s\in\mathbb{R},\,r\ge2).$$ \end{remark} Let $J: X^0\to X^0$ be the symplectic operator sending $\psi$ to $-i\psi$. We show in the Appendix that the map \begin{equation} \label{2.3} \fullfunction{\mathcal{J}_\sigma}{Y^0}{Y^0}{\xi}{g_\sigma^*J^{-1}g_\sigma\xi} \end{equation} defines a symplectic operator, in the sense that $\mathcal{J}_\sigma$ induces a symplectic form on the tangent bundle $T\Sigma$. Moreover, $\mathcal{J}_\sigma$ is invertible and satisfies the uniform estimate $\norm{\mathcal{J}_\sigma}_{Y^k\to Y^k}\le C(\Omega)\abs{\log\epsilon}^{-2}$ for any $k\in\mathbb{N}$. Geometrically, notice that $\mathcal{J}_\sigma$ is the pullback of $J$ by the parametrization $f$. Recall that in Sec. \ref{sec:1.1} we have defined the bilinear map induced by $J$ as \begin{equation} \label{tau} \tau:(u,v)\mapsto \inn{Ju}{v}_X\quad (u,v\in X^s). \end{equation} This $\tau$ is a non-degenerate symplectic form on $X^s$. Thus, through the immersion $f$, the manifold of approximate filaments $M$ also inherits a symplectic structure, with a non-degenerate symplectic form induced by $\mathcal{J}_\sigma$. \subsection{The adiabatic decomposition} \label{sec:2.4} In this section we derive a key result that is essential for the development in Section 3. The point is that on a tubular neighbourhood \emph{of definite volume} around the manifold $M$ of approximate filaments, one can define a nonlinear projection onto $\Sigma$. In this way we can make precise the notion of concentration set for low energy configurations that are uniformly close to the approximate filaments. Recall that the symplectic form $\tau$ is given in \eqref{tau}. \begin{lemma}[concentration set] \label{lem2.2} Let $M:=f(\Sigma_{\epsilon^\alpha})$ be the manifold of approximate vortex filaments.
Then there is $\delta>0$ such that for every $u\in X^1+\psi_0$ with $\dist_{X^2}(u,M)<\delta$, there exists a unique $\sigma\in \Sigma_{\epsilon^\alpha}$ such that \begin{equation} \label{2.5} \tau(u-f(\sigma),\phi)=0 \quad (\phi\in T_{f(\sigma)}M). \end{equation} \end{lemma} \begin{remark} \label{concentrationset} We call $\sigma=(\lambda,\gamma)$ associated to $u$ the \textit{moduli} of the latter. This terminology is taken from, e.g., \cite{MR1257242,MR1309545} in a similar context. We call the curve $\gamma$ \textit{the concentration set} of $u$. The real number $\lambda$ is the global gauge parameter and is not physically meaningful. One can view \eqref{2.5} as an orthogonality condition w.r.t. the symplectic form \eqref{tau}, and the association $u\mapsto \sigma$ is optimal in this sense. \end{remark} \begin{proof} 1. First, for $\sigma\in\Sigma_{\epsilon^\alpha}$, define the linear projection $Q_\sigma$ by \begin{equation} \label{2.4} \fullfunction{Q_\sigma}{X^0}{ X^0}{\phi}{g_\sigma \mathcal{J}_\sigma^{-1}g_\sigma^*J^{-1}\phi}. \end{equation} This map $Q_\sigma$ is the skewed (i.e. $Q_\sigma^*=J^*Q_\sigma J$) projection onto the tangent space $T_{f(\sigma)}M$. In the definition of $Q_\sigma$, each factor is uniformly bounded, as we show in the Appendix, \eqref{A.1}-\eqref{A.2} and \eqref{A.6}. So we get the uniform estimate $\norm{Q_\sigma}_{X^0\to X^0} \le C.$ One can check that $Q_\sigma\phi=\phi$ for every $\phi\in T_{f(\sigma)}M$ by writing $\phi=g_\sigma\xi$ for some $\xi\in Y^0$, since $Q_\sigma g_\sigma\xi = g_\sigma\mathcal{J}_\sigma^{-1}(g_\sigma^*J^{-1}g_\sigma)\xi=g_\sigma(\mathcal{J}_\sigma^{-1}\mathcal{J}_\sigma)\xi=g_\sigma\xi=\phi$. Next, we find the concentration set $\sigma$ using the Implicit Function Theorem.
Consider the map $$\fullfunction{F}{X^2\times \Sigma_{\epsilon^\alpha}}{Y^0\times \mathbb{R}}{(\phi,\sigma)}{g_\sigma^*J^{-1}(\phi-f(\sigma))}.$$ Condition \eqref{2.5} is satisfied if $Q_\sigma (u-f(\sigma))=0$. To see this, one uses the property $Q_\sigma^*=-JQ_\sigma J$, which implies $\tau(Q_\sigma\phi,\phi')=\tau(\phi,Q_\sigma\phi')$. Thus, if $F(\phi,\sigma)=0$, then \eqref{2.5} is satisfied. By construction, we have the following expression for the partial Fr\'echet derivative: $$\partial_\sigma F\vert_{(f(\sigma),\sigma)}=-\mathcal{J}_\sigma.$$ It is invertible since $\mathcal{J}_\sigma$ is invertible; see the Appendix. The equation $F(\phi,\sigma)=0$ has the trivial solution $(f(\sigma),\sigma)$. It follows that for any fixed $\sigma\in\Sigma_{\epsilon^\alpha}$, there is $\delta=\delta(\sigma,\epsilon)>0$ and a map $S_\sigma:B_\delta(f(\sigma))\to \Sigma_{\epsilon^\alpha}$ such that $F(\phi,S_\sigma(\phi))=0$ for $\phi\in B_\delta(f(\sigma))$. 2. It remains to show that in fact $\delta$ can be made independent of $\sigma$. This is important because we must retain a definite volume for the projection neighbourhood, so that later on a flow can fluctuate within this neighbourhood.
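The uniformity argument in the next step rests on a standard perturbation fact for bounded operators, which we record here in generic form (a sketch; the constants are not matched to the precise thresholds used below): if $A_0$ is invertible on $Y^0$ and the perturbation $V$ satisfies $\norm{V}_{Y^0\to Y^0}\norm{A_0^{-1}}_{Y^0\to Y^0}\le\tfrac12$, then $A_0+V$ is invertible, with
\begin{equation*}
(A_0+V)^{-1}=\sum_{n=0}^{\infty}A_0^{-1}\del{-VA_0^{-1}}^{n},\qquad
\norm{(A_0+V)^{-1}}_{Y^0\to Y^0}\le\frac{\norm{A_0^{-1}}_{Y^0\to Y^0}}{1-\norm{V}_{Y^0\to Y^0}\norm{A_0^{-1}}_{Y^0\to Y^0}}\le 2\norm{A_0^{-1}}_{Y^0\to Y^0}.
\end{equation*}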
Write $$A_\phi:=\partial_\sigma F\vert_{(\phi+f(\sigma),\sigma)},\quad V_\phi:=A_\phi-A_0.$$ Then $$A_0=-\mathcal{J}_\sigma,\quad V_\phi=(d_\sigma g_\sigma^*)\del{\cdot}\vert_{J^{-1}\phi}.$$ The size of $\delta$ is determined by the condition that for every $\phi\in B_\delta(f(\sigma))$, \begin{align} &\label{2.4.00} A_\phi\text{ is invertible},\\ &\label{2.4.0}\norm{A_\phi^{-1}}_{Y^0\to Y^0}\le \frac{1}{4\norm{\mathcal{J}_\sigma^{-1}}_{Y^0\to Y^0}},\\ &\label{2.4.1} \norm{F(\phi+f(\sigma),\sigma)}_{Y^0}\le \frac{\delta_0}{4\norm{\mathcal{J}_\sigma^{-1}}_{Y^0\to Y^0}}, \end{align} where $\delta_0$ is chosen such that for every $\xi\in B_{\delta_0}(\sigma)$ and $\phi\in B_\delta(f(\sigma))$, \begin{align} &\norm{R(\phi,\xi)}_{Y^0}\le \frac{\delta_0}{4\norm{\mathcal{J}_\sigma^{-1}}_{Y^0\to Y^0}},\label{2.4.2}\\ &R(\phi,\xi):=F(\phi+f(\sigma),\sigma+\xi)-F(\phi+f(\sigma),\sigma)-\partial_\sigma F(\phi+f(\sigma),\sigma)\xi,\notag\\ &\norm{\mathcal{J}_{\sigma+\xi}-\mathcal{J}_\sigma}_{Y^0\to Y^0}\le\frac{1}{4\norm{\mathcal{J}_\sigma^{-1}}_{Y^0\to Y^0}}.\label{2.4.3} \end{align} See for instance \cite[Sec. 2]{MR1336591}. Note that the r.h.s. of \eqref{2.4.0}-\eqref{2.4.3} are independent of $\sigma$ by the uniform estimate for $\mathcal{J}_\sigma^{-1}$. Conditions \eqref{2.4.2}-\eqref{2.4.3} are satisfied for some $\delta_0=C(\Omega)\epsilon$. To get \eqref{2.4.2}, one uses \eqref{A.6} and the fact that $\norm{R(\phi,\xi)}_{Y^0}=o(\norm{\xi}_{Y^0})$, since $R$ is the super-linear remainder of the expansion of $F$ in $\sigma$. To get \eqref{2.4.3}, one uses the continuity of the map $\sigma \mapsto \mathcal{J}_\sigma \in L(Y^0,Y^0).$ The claim now is that \eqref{2.4.00}-\eqref{2.4.1} are satisfied for $\delta=O(\epsilon^3)$.
Indeed, plugging $\delta_0=C\epsilon$ into \eqref{2.4.1} and using the uniform estimates for $g_\sigma^*$ and $\mathcal{J}_\sigma^{-1}$, one sees that \eqref{2.4.1} is satisfied so long as $\delta=O(\epsilon\abs{\log\epsilon}^{3/2}).$ By elementary perturbation theory, since $A_0$ is invertible, and the partial Fr\'echet derivative $A_\phi$ is continuous in $\phi$ as a map from $X^2$ to $L(Y^0, Y^0)$, it follows that condition \eqref{2.4.00} is satisfied provided $\norm{V_\phi}_{Y^0\to Y^0}<\norm{A_0^{-1}}_{Y^0\to Y^0}^{-1}\le C\abs{\log\epsilon}^2.$ By the uniform estimate on $d_\sigma g_\sigma^*$, we can arrange this with $\delta=O(\epsilon^3)$. Lastly, referring to the Neumann series for the inverse $$A_\phi^{-1}=\sum_{n=0}^\infty A_0^{-1}\del{-V_\phi A_0^{-1}}^n,$$ one can see that $A_\phi^{-1}$ is also continuous in $\phi$, and $\norm{A_\phi^{-1}}_{Y^0\to Y^0}=O(\abs{\log\epsilon}^{-2})$ so long as $\norm{V_\phi}_{Y^0\to Y^0}=o(\abs{\log\epsilon}^2)$. The latter holds with $\delta=O(\epsilon^3)$. This completes the proof. \end{proof} \lemref{lem2.2} gives a unique decomposition of every configuration $u$ sufficiently close to $M$ as $u=v+w$, where $v$ is in $M$ and $\tau(w,\phi)=0$ for all $\phi\in T_vM$. This orthogonality ensures that the decomposition we find is optimal. In turn, $v$ is characterized by the moduli $\sigma=(\lambda,\gamma)$, which are, through the projection lemma above, functions of $u$. We call $v$ the adiabatic part of $u$. In the remaining sections, we use \lemref{lem2.2} to decompose an entire flow $u_t$ starting near $M$ under \eqref{1.1} into an adiabatic flow $v_t$, of which we have explicit information, and a uniformly small remainder. Then we check that the concentration set $\gamma_t$ associated to $u_t$ indeed evolves according to the binormal curvature flow.
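We record, for later reference, the one-line computation behind \lemref{lem2.2} (left implicit in the proof, and stated here modulo the bookkeeping of the gauge component) showing that the zero set of the map $F$ encodes the orthogonality condition \eqref{2.5}. Using $J^{-1}=-J$ (since $J^2=-1$), the definition \eqref{tau} of $\tau$, and writing $w=u-f(\sigma)$, for any tangent vector $\phi=g_\sigma\xi\in T_{f(\sigma)}M$ we have
\begin{equation*}
\inn{g_\sigma^*J^{-1}w}{\xi}_Y=\inn{J^{-1}w}{g_\sigma\xi}_X=-\inn{Jw}{g_\sigma\xi}_X=-\tau(w,\phi),
\end{equation*}
so $F(u,\sigma)=0$ holds exactly when $\tau(u-f(\sigma),\phi)=0$ for every $\phi\in T_{f(\sigma)}M$.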
\section{Effective dynamics} \subsection{The connection of \eqref{1.1} to the binormal curvature flow} The following lemma translates the r.h.s. of \eqref{1.1} restricted to $M$ into an expression on the tangent bundle $T\Sigma_{\epsilon^\alpha}$. Afterwards, we show that this defines an evolution on $C^k_\text{per}$ that agrees with the binormal curvature flow to leading order. In this subsection, the operator $J$ denotes multiplication by the standard symplectic matrix \eqref{4.0}. \begin{lemma} \label{lem3.1} Let $\alpha>2$. If $\sigma\in \Sigma_{\epsilon^\alpha}$, then \begin{equation} \label{3.2} \mathcal{J}_\sigma^{-1}d_\sigma E(f(\sigma))=\del{o(\epsilon^\alpha),\,J\gamma_{zz}+o_{\norm{\cdot}_{C^0}}(\epsilon^\alpha)}. \end{equation} \end{lemma} \begin{proof} Recall that $\sigma=(\lambda,\gamma)$ consists of a $U(1)$-gauge parameter $\lambda$ and a concentration curve $\gamma$. The $\lambda$-component is not physically relevant, and we are interested in the $\gamma$-component. For $\sigma\in Y^k$, we write $\sigma=:([\sigma]_\lambda,[\sigma]_\gamma)$. The partial Fr\'echet derivative of the energy $E(f(\sigma))$ w.r.t. $\gamma$ is given by \begin{equation} \label{3.2.1} \partial_\gamma E(f(\sigma))=\int_x\inn{\nabla_x \psi_\gamma \cdot \gamma_{zz}+\nabla^2_x\psi_\gamma\gamma_z\cdot \gamma_z}{\nabla_x\psi_\gamma}, \end{equation} where we write $\gamma_z=\partial_z\gamma$, etc. Using this and the expression for $\mathcal{J}_\sigma$ given in \eqref{A.3}, we find that the first term on the r.h.s.
of \eqref{3.2.1} satisfies $$[\mathcal{J}_\sigma^{-1}\int_x\inn{\nabla_x \psi_\gamma \cdot \gamma_{zz}}{\nabla_x\psi_\gamma}]_\gamma=J\gamma_{zz}+o(\epsilon^\alpha).$$ It then suffices to show that the second term satisfies \begin{equation} \label{3.3} \norm{[\mathcal{J}_\sigma^{-1}\int_x\inn{\nabla^2_x\psi_\gamma\gamma_z\cdot \gamma_z}{\nabla_x\psi_\gamma}]_\gamma}_{C^0}=o(\epsilon^\alpha), \end{equation} i.e. that this term can be absorbed into the remainder. First, we use \eqref{2.0.5} and \eqref{4.3} to get \begin{equation}\label{3.3.0} \begin{aligned} \norm{\int_x\inn{\nabla^2_x\psi_\gamma\gamma_z\cdot \gamma_z}{\nabla_x\psi_\gamma}}_{C^0}&\le C(\Omega)\norm{\nabla_x\psi_\gamma}_{X^0}\norm{\gamma_z}_{C^0}^2\norm{\nabla^2_x\psi_\gamma}_{L^\infty(\Omega)}\\&\le C(\Omega)\epsilon^{2\alpha}\abs{\log\epsilon}^{1/2} \norm{\nabla^2_x\psi_\gamma}_{L^\infty(\Omega)}. \end{aligned} \end{equation} Next, note that the function $\psi_\gamma$ satisfies the linear second order equation $$\Delta\psi_\gamma+\nabla_x^2\psi_\gamma\gamma_z\cdot\gamma_z+\nabla_x\psi_\gamma\cdot\gamma_z+\frac{1}{\epsilon^2}(1-\abs{\psi_\gamma}^2)\psi_\gamma=0\quad \text{on }\Omega.$$ This equation is elliptic for $\norm{\gamma_z}_{C^0}\ll1$, so the Schauder estimate implies, for all $\epsilon<\epsilon_0\ll1$ and $0<\mu<1$, \begin{equation} \label{3.3.00} \begin{aligned} \norm{\psi_\gamma}_{C^{2,\mu}(\Omega)}&\le C(\Omega, \epsilon_0)\del{\norm{\psi_\gamma}_{L^\infty(\Omega)}+\norm{\frac{1}{\epsilon^2}(1-\abs{\psi_\gamma}^2)}_{C^{0,\mu}(\Omega)}}\\&\le C(\Omega, \epsilon_0)\epsilon^{-2(1+\mu)}. \end{aligned} \end{equation} Here we have used the estimates on the uniform norm $\norm{\psi_\gamma}_{L^\infty(\Omega)}\le1$ and the H\"older seminorm $\sbr{\psi_\gamma}_{\mu,\Omega}\le C(\Omega)\epsilon^{-\mu}.$ These follow from \eqref{2.0.2} and \eqref{4.2}.
Plugging \eqref{3.3.00} into \eqref{3.3.0}, we see that \begin{equation}\label{3.3.000} \norm{\int_x\inn{\nabla^2_x\psi_\gamma\gamma_z\cdot \gamma_z}{\nabla_x\psi_\gamma}}_{C^0}\le C(\Omega, \epsilon_0)\abs{\log\epsilon}^{1/2} \epsilon^{2(\alpha-1-\mu)}. \end{equation} In the Appendix we show $\norm{\mathcal{J}_\sigma^{-1}}_{Y^k\to Y^k}\le C(\omega)\abs{\log\epsilon}^{-2}$. This, the assumption $\alpha>2$, and the above estimate together show that if we choose $\mu<\alpha/2-1$, then \eqref{3.3} holds. The proof is complete. \end{proof} We now discuss the geometric meaning of the r.h.s. of \eqref{3.2}. A family of curves $\vec{\gamma}_t\in C^2(I,\Omega)$ satisfies the \emph{binormal curvature flow} if, parametrized by arclength, the curves satisfy \begin{equation} \label{BNF} \partial_t\vec{\gamma}=\vec{\gamma}_s\times\vec{\gamma}_{ss}\quad \del{\abs{\vec{\gamma}_s}\equiv 1}. \end{equation} Here we write derivatives w.r.t. the arclength parameter as $\vec{\gamma}_s\equiv \partial_s\vec{\gamma}$, etc. Now reparametrize $\vec{\gamma}_t$ as $\vec{\gamma}_t(z)=(\gamma_t(z),z)$ with $\gamma\in C^2(I,\omega)$. Then \eqref{BNF} becomes \begin{equation} \label{3.3.1} \partial_t\vec{\gamma}=\vec{\gamma}_zz_s\times(\vec{\gamma}_{zz}z_s^2+\vec{\gamma}_z z_{ss}). \end{equation} The arclength parameter $s$ is given in terms of the new parameter $z$ by $$s(z)=\int_0^z\abs{\vec{\gamma}_z}=\int_0^z\sqrt{1+\abs{\gamma_z}^2}.$$ Differentiating this expression, for $\norm{\gamma}_{C^2}=O(\delta)$, we get $$z_s=\frac{1}{\sqrt{1+\abs{\gamma_z}^2}}=1+O(\delta^2),\quad z_{ss}=-\frac{\gamma_z\cdot \gamma_{zz}}{\del{1+\abs{\gamma_z}^2}^2}=O(\delta^2).$$ Geometrically, $z$ and $s$ are close because the curve parametrized by $\vec{\gamma}$ with $\norm{\gamma}_{C^2}\ll1$ is approximately a vertical straight line, in which case $z=s$.
Thus, to leading order, \eqref{3.3.1} reads \begin{equation} \label{3.4} \partial_t\vec\gamma=\vec\gamma_z\times \vec\gamma_{zz}+O(\delta^2)=(J\gamma_{zz},\,\gamma_z^1\gamma_{zz}^2-\gamma_z^2\gamma_{zz}^1)+O(\delta^2). \end{equation} On the r.h.s. of \eqref{3.4}, the third component $\gamma_z^1\gamma_{zz}^2-\gamma_z^2\gamma_{zz}^1$ is also $O(\delta^2)$. In conclusion, combining this with \lemref{lem3.1}, we can say $$\text{$\partial_t\sigma=(\partial_t\lambda,\partial_t\gamma)=\mathcal{J}_\sigma^{-1}d_\sigma E(f(\sigma))+o(\delta)\implies\vec{\gamma}_t$ solves \eqref{BNF} up to $o(\delta)$}.$$ In the next subsection, we show that this equation for $\sigma$ is indeed the effective dynamics of vortex filaments. \subsection{Proof of the first main theorem} Suppose $u_t$ is a solution to \eqref{1.1} such that $\dist(u_t, M)<\delta\ll1$ for $t\le T$. Using \lemref{lem2.2}, we can then define an adiabatic flow $v_t=f(\sigma_t)\in M$ consisting of the approximate filaments associated to $u_t$. At time $t\le T$, the filament $v_t$ is characterized by the moduli $\sigma_t$, and the curve $\gamma_t=[\sigma_t]_\gamma$ defines the concentration set of $u_t$ (see Remark \ref{concentrationset}). In what follows, we show that the velocity $\partial_t\sigma$ governing the motion of the adiabatic flow is given by $\mathcal{J}_\sigma^{-1}d_\sigma E(f(\sigma))$ uniformly up to leading order. We then find an a priori estimate for the remainder, so that as long as $u_0$ is close to $M$, the full flow $u_t$ remains uniformly close to $M$ up to a large time. In this sense one can view $M$ as an invariant manifold for \eqref{1.1}. \begin{theorem}[effective dynamics] \label{thm3.1} For any $\epsilon>0$, there are $\delta_1,\,\delta_2\ll\epsilon$ such that the following holds: Let $\Sigma_{\delta_1}\to M$ be the manifold of approximate vortex filaments as in Section 2.
Let $u_0$ be an initial configuration such that $\dist_{X^2}(u_0, M)<\delta_2$. Then there is some $T>0$ independent of $\epsilon$ and $\delta_1,\,\delta_2$, such that for all $\epsilon t\le T$, there exist moduli $\sigma_t$ associated to $u_t$ as in \lemref{lem2.2}, and \begin{equation} \label{3.4.1} \norm{u_t-f(\sigma_t)}_{X^2}=o(\sqrt{\delta_1}). \end{equation} Moreover, for all $\epsilon t\le T$, the moduli $\sigma_t$ evolve according to \begin{equation} \label{3.4.2} \partial_t\sigma=\mathcal{J}_\sigma^{-1}d_\sigma E(f(\sigma))+o_{\norm{\cdot}_{Y^0}}(\delta_1). \end{equation} \end{theorem} \begin{proof} 1. To begin with, choose $\delta_2\ll \sqrt{\delta_1}$. We show that $w_0:=u_0-f(\sigma_0)=u_0-v_0$ satisfies $\norm{w_0}_{X^2}\ll \sqrt{\delta_1}$. Suppose the $X^2$-closest approximate vortex filament to $u_0$ is $v_*=f(\sigma_*)\in M$. Then $w_0=(u_0-v_*)+(v_*-v_0)$. The first difference on the r.h.s. has size $\delta_2$. The second difference can be bounded by $\norm{v_*-v_0}_{X^2}=\norm{ f(\sigma_*)-f(\sigma_0)}_{X^2}\le C \norm{\sigma_*-\sigma_0}_{Y^2}\le C \delta_2.$ It follows that $\norm{w_0}_{X^2}\le C \delta_2$, and by the choice of $\delta_2\ll\sqrt{\delta_1}$, that $\norm{w_0}_{X^2}\ll \sqrt{\delta_1}$. Therefore, by the continuity of the evolution \eqref{1.1}, \eqref{3.4.1} holds at least locally. 2.
So long as the decomposition $u_t=v_t+w_t$ in \lemref{lem2.2} is valid, we can expand \eqref{1.2} as \begin{equation} \label{3.5} \partial_tv+\partial_tw=J(E'(v)+L_\sigma w+N_\sigma(w)), \end{equation} where $J:\psi\mapsto -i\psi$, \begin{equation} \label{3.6} L_\sigma\phi:=-\Delta \phi+\frac{1}{\epsilon^2}(\abs{\psi_\gamma}^2-1)\phi+\frac{2e^{i\lambda}\cos \lambda }{\epsilon^2}\psi_\gamma\inn{\psi_\gamma}{\phi} \end{equation} is the linearized operator at $f(\sigma)\equiv v$, and $$N_\sigma(\phi):=E'(\psi_\gamma+\phi)-E'(\psi_\gamma)-L_\sigma\phi$$ is the nonlinearity. For the moduli $\sigma=\sigma_t$ associated to $u_t$ given in \lemref{lem2.2}, let $Q_\sigma:X^0\to X^0$ be the fibrewise projection onto $T_{f(\sigma)}M$, given in \eqref{2.4}. Applying $Q_\sigma$ to both sides of \eqref{3.5}, we have \begin{equation} \label{3.7} \partial_tv-Q_\sigma JE'(v)=Q_\sigma(JL_\sigma w -\partial_tw+JN_\sigma(w)). \end{equation} Consider the identity $$\mathcal{J}_\sigma^{-1}g_\sigma^*J^{-1}(\partial_tv-Q_\sigma JE'(v))=\partial_t\sigma-\mathcal{J}_\sigma^{-1}d_\sigma E(f(\sigma)).$$ To verify this, one uses two facts that follow readily from the chain rule: $$\partial_tv=g_\sigma\partial_t\sigma,\quad g_\sigma^*E'(f(\sigma))=d_\sigma E(f(\sigma)).$$ Thus, by the uniform estimates on $g_\sigma^*$ and $\mathcal{J}_\sigma^{-1}$ in \eqref{A.2}, \eqref{A.6}, we have \begin{equation} \label{3.8} \norm{\partial_t\sigma-\mathcal{J}_\sigma^{-1}d_\sigma E(f(\sigma))}_{Y^0}\le C\abs{\log\epsilon}^{-3/2}\norm{\partial_tv-Q_\sigma JE'(v)}_{X^2}, \end{equation} for some $C$ independent of $\sigma$. This shows that the claim \eqref{3.4.2} would follow if we had uniform control over \eqref{3.7}. 3. We now derive the a priori estimate for the remainder $w=w_t$, the fluctuation of $u_t$ around the adiabatic part $v_t$.
Consider the r.h.s. of \eqref{3.7}. Its three terms can be bounded respectively as follows: \begin{align} &\norm{Q_\sigma JL_\sigma w}_{X^0}\le C\abs{\log\epsilon}^{-1}\delta_1^{1/2}\norm{w}_{X^2},\label{3.9}\\ &\norm{Q_\sigma\partial_tw}_{X^0}\le C\abs{\log\epsilon}^{-1}\norm{\partial_t\sigma}_{Y^0}\norm{w}_{X^1},\label{3.10}\\ &\norm{Q_\sigma JN_\sigma(w)}_{X^0}\le C\abs{\log\epsilon}^{-1}\epsilon^{-2}\norm{w}_{X^2}^2.\label{3.11} \end{align} Here $C$ is independent of $\sigma$. In all three inequalities we use the uniform bound $\norm{Q_\sigma}_{X^0\to X^0}\le C(\Omega)\abs{\log\epsilon}^{-1}$, which follows from its definition \eqref{2.4} and the estimates for each of its factors. First, we show \eqref{3.9} using the identity \begin{equation}\label{3.200} \abs{\inn{Q_\sigma JL_\sigma w}{w'}}= \abs{\inn{w}{L_\sigma Q_\sigma Jw'}} \quad (w,w'\in X^2). \end{equation} To get \eqref{3.200}, one uses the relation $Q_\sigma J=JQ_\sigma^*$, which follows from the definition of $Q_\sigma$ (see also the first step in the proof of \lemref{lem2.2}). Plugging $w'=Q_\sigma JL_\sigma w$ into \eqref{3.200} and using estimate \eqref{4.1.2}, we find $$\norm{Q_\sigma JL_\sigma w}_{X^0}\le C\delta_1^{1/2}\norm{w}_{X^2}.$$ This gives \eqref{3.9}. Next, differentiating $Q_\sigma w=0$ w.r.t. $t$, we have \begin{equation}\label{3.10.1} 0=\partial_t(Q_\sigma w)=(\partial_t Q_\sigma)w+Q_\sigma\partial_tw=(d_\sigma Q_\sigma\partial_t\sigma)w+Q_\sigma\partial_tw. \end{equation} Here $d_\sigma Q_\sigma$ is an operator from $Y^0$ to the space of linear operators $L(X^1, X^0)$. Since $Q_\sigma$ is the projection onto $T_{f(\sigma)}M$, and $\sigma$ is slowly varying, we get the uniform estimate $\norm{d_\sigma Q_\sigma}_{Y^0\to L(X^1, X^0)}\le C.$ Plugging this into \eqref{3.10.1} gives \eqref{3.10}.
Lastly, we have the following explicit expression for the nonlinearity: $$N_\sigma(w)=\frac{1}{\epsilon^2}(2w\inn{ v}{w}+\abs{w}^2(v+w)).$$ Using \eqref{2.0.4} we can bound this as in \eqref{3.11}. Plugging \eqref{3.9}-\eqref{3.11} into \eqref{3.7}, and then using \eqref{3.8}, one can see that \eqref{3.4.2} follows if one can bound $\norm{w}_{X^2}$ uniformly in time by a sufficiently small number compared to $\epsilon$. It then suffices to show \eqref{3.4.1} with $\delta_1\ll\epsilon$. 4. Consider the expansion \begin{equation} \label{3.12} E(v+w)=E(v)+\inn{E'(v)}{w}+\frac{1}{2}\inn{L_\sigma w}{w}+R_\sigma(w), \end{equation} where $R_\sigma(w)$ is the super-quadratic remainder defined by this expression. We want to apply the coercivity of $L_\sigma$ shown in \lemref{lem4.2} to $w$, so that we can rearrange \eqref{3.12} to get \begin{equation} \label{3.12.1} \norm{w}_{X^2}^2\le C_\epsilon(E(v+w)-E(v)-\inn{E'(v)}{w}-R_\sigma(w)). \end{equation} The constant $C_\epsilon$ will eventually be absorbed by the choice of $\delta_1$. This requires the uniform estimate \begin{equation} \label{claim2}\norm{w}_{X^2}\le C_\epsilon \norm{w}_{X^0}. \end{equation} For this, one uses the fact that $w$ satisfies the elliptic equation $$\Delta w+\frac{1}{\epsilon^2}(1-\abs{v}^2)w=-i\partial_t u+\nabla_xv\cdot \gamma_{zz}-\nabla_x^2v\,\gamma_z\cdot\gamma_z+\frac{1}{\epsilon^2}(\abs{u}^2-\abs{v}^2)u.$$ The r.h.s. of this equation is $O(\delta_1)$. Thus standard elliptic regularity gives $\norm{w}_{X^2}\le C(\Omega,\epsilon)\norm{w}_{X^0}$, with $C(\Omega,\epsilon)=O(\epsilon^{-2})$. Since $u=v+w$ solves \eqref{1.1}, the energy $E(u)$ is conserved for all $t$.
Consequently, $$E(v+w)=E(v_0+w_0)=E(v_0)+\inn{E'(v_0)}{w_0}+\frac{1}{2}\inn{L_\sigma w_0}{w_0}+R_\sigma(w_0).$$ Plugging this into \eqref{3.12.1} and using \eqref{claim2}, one gets \begin{equation} \label{3.12.2} \begin{aligned} \norm{w}_{X^2}^2&\le C_\epsilon\alpha^{-1}(E(v_0)-E(v)\\&+\inn{E'(v_0)}{w_0}-\inn{E'(v)}{w}+\frac{1}{2}\inn{L_\sigma w_0}{w_0}-R_\sigma(w_0)+R_\sigma(w)). \end{aligned} \end{equation} Here $\alpha=\alpha(\epsilon)=O(\abs{\log\epsilon}^{-1})$ is as in \lemref{lem4.2}. Similarly to the nonlinear estimate on $N_\sigma(w)$, it follows from the regularity of $E$ on the energy space that $R_\sigma(w)\le C\epsilon^{-2}\norm{w}_{X^2}^3$, where $C$ is independent of $\sigma$. Combining this with \lemref{lem4.1}, we can bound \begin{equation} \label{3.101} \begin{aligned} &\inn{E'(v_0)}{w_0}-\inn{E'(v)}{w}+\frac{1}{2}\inn{L_\sigma w_0}{w_0}-R_\sigma(w_0)+R_\sigma(w)\\ \le& C\Bigl(\norm{E'(v_0)}_{X^0}\norm{w_0}_{X^0}+\epsilon^{-1}\norm{w_0}_{X^1}^2+\epsilon^{-2}\norm{w_0}_{X^2}^3\\&+ \norm{E'(v)}_{X^0}\norm{w}_{X^0}+\epsilon^{-2}\norm{w}_{X^2}^3\Bigr). \end{aligned} \end{equation} Recall that in Step 1 we showed $\norm{w_0}_{X^2}\le C \dist_{X^2}(u_0,M)=C \delta_2$. So \eqref{3.101} becomes \begin{equation} \label{3.102} \begin{aligned} &\inn{E'(v_0)}{w_0}-\inn{E'(v)}{w}+\frac{1}{2}\inn{L_\sigma w_0}{w_0}-R_\sigma(w_0)+R_\sigma(w)\\ \le& C\del{\norm{E'(v_0)}_{X^0}\delta_2+\epsilon^{-1}\delta_2^2+\epsilon^{-2}\delta_2^3+\norm{E'(v)}_{X^0}\norm{w}_{X^0}+\epsilon^{-2}\norm{w}_{X^2}^3}. \end{aligned} \end{equation} This gives control over the last five terms on the r.h.s. of \eqref{3.12.2}. 5. We now exploit energy conservation to control the first two terms on the r.h.s. of \eqref{3.12.2}, as this difference is the energy fluctuation of the approximate filaments.
Differentiating the energy $E(t)=E(f(\sigma_t))$ and using \eqref{3.7}, we have \begin{equation} \label{3.13} \begin{aligned} \od{E}{t}&=\inn{E'(v)}{\partial_tv}\\ &=\inn{E'(v)}{Q_\sigma JE'(v)}+\inn{E'(v)}{Q_\sigma (JL_\sigma w-\partial_tw)}+\inn{E'(v)}{Q_\sigma JN_\sigma(w)}. \end{aligned} \end{equation} We now bound the three inner products in turn. Using the relation $Q_\sigma J=JQ_\sigma^*$, the projection property $Q_\sigma^2=Q_\sigma$, and the skew-symmetry of $J$, we find $$\begin{aligned} \inn{E'(v)}{Q_\sigma JE'(v)}&=\inn{E'(v)}{Q_\sigma^2 JE'(v)}\\ &=\inn{Q_\sigma ^*E'(v)}{Q_\sigma JE'(v)}\\ &=\inn{(J^{-1}J)Q_\sigma^* E'(v)}{JQ_\sigma E'(v)}\\ &=-\inn{JQ_\sigma J E'(v)}{Q_\sigma J E'(v)}=0. \end{aligned}$$ Thus the first term in \eqref{3.13} vanishes. Using \eqref{3.9}-\eqref{3.10}, the second inner product can be bounded as $$\abs{\inn{E'(v)}{Q_\sigma(JL_\sigma w-\partial_tw)}}\le\norm{E'(v)}_{X^0} C(\delta_1^{1/2}+\norm{\partial_t\sigma}_{Y^0})\norm{w}_{X^2}.$$ By \eqref{3.2} and \eqref{3.7}-\eqref{3.11}, so long as $\norm{w}_{X^1}<1/2$ we have $$\norm{\partial_t\sigma}_{Y^0}\le C(\delta_1+\delta_1^{1/2}\norm{w}_{X^2}+\epsilon^{-2}\norm{w}_{X^2}^2).$$ Plugging this back into the previous estimate, we have \begin{equation} \label{3.14} \begin{aligned} &\abs{\inn{E'(v)}{Q_\sigma(JL_\sigma w-\partial_tw)}}\\\le&\norm{E'(v)}_{X^0}(C_1\delta_1^{1/2}+C_2(\delta_1+\delta_1^{1/2}\norm{w}_{X^2}+\epsilon^{-2}\norm{w}_{X^2}^2))\norm{w}_{X^2}. \end{aligned} \end{equation} Lastly, by the nonlinear estimate \eqref{3.11}, the third inner product in \eqref{3.13} can be bounded as \begin{equation} \label{3.15} \abs{\inn{E'(v)}{Q_\sigma JN_\sigma(w)}}\le C\epsilon^{-2}\norm{E'(v)}_{X^0}\norm{w}_{X^2}^2. \end{equation} 6.
Combining \eqref{3.13}-\eqref{3.15} and integrating from $0$ to $t$, we have \begin{equation} \label{3.16} \begin{aligned} &\abs{E(v_t)-E(v_0)}\\\le& t\norm{E'(v)}_{X^0}(C_1\delta_1^{1/2}+C_2(\delta_1+\delta_1^{1/2}M(t)+\epsilon^{-2}M(t)+\epsilon^{-2}M(t)^2))M(t), \end{aligned} \end{equation} where $M(t):=\sup_{t'\le t}\norm{w(t')}_{X^2}$. Plugging \eqref{3.102} and \eqref{3.16} into \eqref{3.12.2}, we have \begin{equation} \label{3.17} \begin{aligned} M(t)&\le \abs{\log\epsilon}C_\epsilon \norm{E'(v)}_{X^0} t\del{C_1\delta_1^{1/2}+C_2(\delta_1+\delta_1^{1/2}M(t)+\epsilon^{-2}M(t)+\epsilon^{-2}M(t)^2)}\\ &+C_3\del{ \norm{E'(v)}_{X^0}(1+\epsilon^{-2}M(t)^2)+\norm{E'(v_0)}_{X^0}\delta_2+\epsilon^{-1}\delta_2^2+\epsilon^{-2}\delta_2^3}. \end{aligned} \end{equation} If we now choose $\delta_2=\min(\abs{\log\epsilon}^{-1/2},\epsilon\delta_1^{1/2})$ and the manifold of approximate filaments $\Sigma_{\delta_1}\to M$ to be sufficiently small, say with \begin{equation}\label{3.18.1}\delta_1=O(\abs{\log\epsilon}^{-2}C_\epsilon^{-1}\epsilon^{1+\mu}) \end{equation} for some $\mu>0$, then \eqref{3.17} and \lemref{lem2.1} imply that for all $t\le T=C \epsilon^{-1}$, we have $M(t)\le C\epsilon^\mu\sqrt{\delta_1}$, as claimed in \eqref{3.4.1}. (Actually, if we further shrink $\delta_1$, i.e. the class of initial configurations, we can ensure \eqref{3.4.1} on a longer time interval.) Plugging \eqref{3.4.1} into \eqref{3.7}-\eqref{3.11}, we have $$\norm{\partial_t\sigma-\mathcal{J}_\sigma^{-1}d_\sigma E(f(\sigma))}_{Y^0}\le C\abs{\log\epsilon}^{-5/2}\epsilon^{-2}\delta_1^2.$$ Shrinking $\delta_1$ further, say with $\mu>1$ in \eqref{3.18.1}, we can ensure that this expression is still $o(\delta_1)$. The proof is complete. \end{proof} \subsection{Relation of \thmref{thm3.1} to Jerrard's conjecture} The following theorem refers to \cite[Conjecture 7.1]{MR3729052}.
For small but fixed $\epsilon$, \thmref{thm3.2} below, in particular \eqref{3.18}-\eqref{3.19}, provides an affirmative answer for a certain class of initial conditions with uniformly small curvature. \begin{theorem}\label{thm3.2} For any $\beta>0$, there exist $\delta,\epsilon_0>0$ such that the following holds: Let $\epsilon<\epsilon_0$ in \eqref{1.1}. Let $\sigma_0\in\Sigma_\delta$ be given, and $\gamma_0=[\sigma_0]_\gamma$. Let $\vec\gamma_t$ be the flow generated by $(\gamma_0(z),z)$ under \eqref{BNF}. Then there exist a solution $u_t$ to \eqref{1.1} and some $T>0$ independent of $\epsilon$ and $\delta$, such that for all $\epsilon t\le T$, $X\in C^1_c(\mathbb{R}^3,\mathbb{R}^3)$ and $\phi\in C^1_c(\mathbb{R}^3,\mathbb{R})$ with $\norm{X}_{C^1},\,\norm{\phi}_{C^1}=O(\delta^{-1/4})$, we have \begin{align} &\abs{\int_\Omega X\times Ju-\pi\int_{\vec\gamma_t}X}\le \beta,\label{3.18}\\ &\abs{\int_\Omega \frac{e(u)}{\abs{\log\epsilon}}\phi-\pi\int_{\vec\gamma_t}\phi\,d \mathcal{H}^1}\le \beta, \label{3.19} \end{align} where for $\psi=\psi^1+i\psi^2$, $$ e(\psi)= \frac{1}{2}\abs{\nabla\psi}^2+\frac{1}{4\epsilon^2}(\abs{\psi}^2-1)^2,\quad J\psi=\nabla\psi^1\times \nabla\psi^2.$$ \end{theorem} \begin{proof} Let $u_t$ be the flow generated by $u_0:=f(\sigma_0)$ under \eqref{1.1}. Then for sufficiently small $\delta>0$, \thmref{thm3.1} applies with $\delta_1=\delta,\,\delta_2=0$. Let $\tilde{\sigma}_t$ be the flow generated by $\sigma_0$ under the effective dynamics \eqref{3.4.2}. It follows that $u_t=v_t+O_{\norm{\cdot}_{X^2}}(\sqrt{\delta})$, where $v_t=f(\tilde{\sigma}_t)$.
Given the explicit construction, one can check, using classical concentration properties of the planar vortex $\psi^{(1)}$ (for instance \cite{MR1491752}), that for sufficiently small $\epsilon_0=\epsilon_0(\beta)>0$ and all $0<\epsilon\le\epsilon_0$, the flow $v_t$ satisfies \eqref{3.18}-\eqref{3.19}, with $\tilde{\sigma}_t$ in place of $\sigma_t$. For example, we can compute using \eqref{4.3} and the assumption $\norm{\phi}_{C^0}=O(\delta^{-1/4})$ that $$\begin{aligned} \int_\Omega \frac{e(v_t)}{\abs{\log\epsilon}}\phi&=\int_{\vec{\tilde{\gamma}}_t}\del{\int_\omega \frac{e(\psi_{\tilde{\sigma}_t})}{\abs{\log\epsilon}}\phi\,dx }\,d\mathcal{H}^1\\ &\le\int_{\vec{\tilde{\gamma}}_t}\del{\int_\omega \frac{e(\psi^{(1)})}{\abs{\log\epsilon}}\phi\,dx }\,d\mathcal{H}^1+C\frac{\epsilon^2\delta}{\abs{\log\epsilon}}\norm{\phi}_{C^0}\\ &\le \pi\int_{\vec{\tilde{\gamma}}_t}\phi\,d\mathcal{H}^1+C\del{\beta+ \frac{\epsilon^2\delta^{3/4}}{\abs{\log\epsilon}}}. \end{aligned}$$ Here $\tilde{\gamma}=[\tilde{\sigma}]_\gamma$ and $\vec{\tilde{\gamma}}(z)=(\tilde{\gamma}(z),z)$. Using \eqref{3.3.000} with $\alpha=3$ and $\mu=1/4$, we can get the uniform estimate $\tilde{\sigma}_t=\sigma_t+O(\delta^{3/2})$.
If $\norm{X}_{C^1}=O(\delta^{-1/4})$, then by the mean value theorem, $$\abs{\int_{\vec{\gamma}} X-\int_{\vec{\tilde{\gamma}} }X}\le \norm{\gamma-\tilde{\gamma}}_{L^{\infty}(I)}\norm{X}_{C^1}=O(\delta^{5/4}).$$ Similarly one can show $$\abs{\int_{\vec{\gamma}} \phi\,d\mathcal{H}^1-\int_{\vec{\tilde{\gamma} }}\phi\,d\mathcal{H}^1}=O(\delta^{5/4}).$$ It follows that for all sufficiently small $\epsilon_0$ and all $\epsilon<\epsilon_0$, \begin{align} &\abs{\int_\Omega X\times Jv-\pi\int_{\vec\gamma_t}X}\le C\del{\beta+ \frac{\epsilon^2\delta^{3/4}}{\abs{\log\epsilon}}+\delta^{5/4}},\label{3.20}\\ &\abs{\int_\Omega \frac{e(v)}{\abs{\log\epsilon}}\phi-\pi\int_{\vec\gamma_t}\phi\,d \mathcal{H}^1}\le C\del{\beta+ \frac{\epsilon^2\delta^{3/4}}{\abs{\log\epsilon}}+\delta^{5/4}}.\label{3.21} \end{align} On the other hand, using \eqref{3.4.1} and the continuity of $e(\psi),\,J\psi$ on $X^1$, we have $$\abs{\int_\Omega X\times Ju-\int_\Omega X\times Jv}\le C\norm{X}_{C^0(\Omega)}\norm{u-v}_{X^2}\le C\delta^{1/4},$$ and similarly $$\abs{\int_\Omega\frac{e(u)}{\abs{\log\epsilon}}\phi\, d \mathcal{H}^1-\int_\Omega\frac{e(v)}{\abs{\log\epsilon}}\phi\,d \mathcal{H}^1}\le C\abs{\log\epsilon}^{-1}\delta^{1/4}.$$ Plugging these into \eqref{3.20}-\eqref{3.21}, we conclude that $u_t$ satisfies \eqref{3.18}-\eqref{3.19} for sufficiently small $\delta=\delta(\beta, \epsilon_0)>0$. \end{proof} \section{Properties of the linearized operators} In this section we collect various estimates for the linearized operator $L_\sigma=E''(f(\sigma))$ defined in \eqref{3.6}. Recall that so far we have always suppressed the dependence of the various functions on the material parameter $\epsilon\ll1$. Throughout this section we assume $\norm{\sigma}_{Y^2}<\delta \ll\epsilon$. Unless otherwise specified, inner products are as in \eqref{2.00X}.
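Before stating the estimates, we record a consistency check for the formula \eqref{3.6} (a formal computation, not used in the proofs; it assumes the Ginzburg--Landau form of the energy with density $e(\psi)$ as in \thmref{thm3.2}). Writing $E(\psi)=\int_\Omega e(\psi)$, one has
\begin{align*}
E'(\psi)&=-\Delta\psi+\frac{1}{\epsilon^{2}}(\abs{\psi}^{2}-1)\psi,\\
E'(\psi+\phi)-E'(\psi)&=-\Delta\phi+\frac{1}{\epsilon^{2}}(\abs{\psi}^{2}-1)\phi+\frac{2}{\epsilon^{2}}\psi\inn{\psi}{\phi}+O(\abs{\phi}^{2}),
\end{align*}
using $\abs{\psi+\phi}^{2}=\abs{\psi}^{2}+2\inn{\psi}{\phi}+\abs{\phi}^{2}$. The linear part matches $L_\sigma$ in \eqref{3.6} at $\lambda=0$ (where $e^{i\lambda}\cos\lambda=1$), and likewise the planar operator \eqref{4.1.4}.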
\begin{lemma}[uniform bound of $L_\sigma$] \label{lem4.1} There exists $0<C<\infty$ independent of $\sigma$ and $\epsilon$ such that \begin{align} \label{4.1.1} &\inn{L_\sigma\phi}{\phi}\le C\epsilon^{-1}\norm{\phi}_{X^0}^2\quad (\phi\in X^1),\\ \label{4.1.2} &\norm{L_\sigma Q_\sigma\phi}_{X^0}\le C\delta^{1/2}\norm{\phi}_{X^2} \quad (\phi \in X^2). \end{align} Here $Q_\sigma$ is the projection onto the tangent space $T_{f(\sigma)}M$ defined in \eqref{2.4}. \end{lemma} \begin{proof} We write the Schr\"odinger operator $L_\sigma$ defined in \eqref{3.6} as $L_\sigma=-\Delta+V$. By Poincar\'e's inequality, the kinetic part of \eqref{4.1.1} is bounded by $\inn{-\Delta\phi}{\phi}\le C \norm{\phi}_{X^0}$. To bound the potential part of \eqref{4.1.1}, consider $$ \begin{aligned} \norm{V(\phi)}_{X^0}&=\epsilon^{-2}\norm{(\abs{\psi_\gamma}^2-1)\phi+2e^{i\lambda} \cos \lambda\,\psi_\gamma\Re(\overline{\psi_\gamma}\phi)}_{X^0}\\ &\le \epsilon^{-2}\del{\norm{\abs{\psi_\gamma}^2-1}_{X^0}+2\norm{\psi_\gamma^2}_{X^0}}\norm{\phi}_{X^0}\\ &\le\epsilon^{-2}\del{\norm{\abs{\psi_\gamma}^2-1}_{X^0}+\diam(\omega)^2\norm{\psi_\gamma}_{L^\infty(\Omega)}}\norm{\phi}_{X^0}\\ &\le (C(\omega)\epsilon^{-1}+\diam(\omega)^2)\norm{\phi}_{X^0}. \end{aligned} $$ Recall that $\omega\subset \mathbb{R}^2$ is the cross section of the domain $\Omega$. In the last step we used \eqref{2.0.4}, \eqref{2.0.3}, and \eqref{4.2}. This implies $\inn{V(\phi)}{\phi}\le C\epsilon^{-1} \norm{\phi}_{X^0}^2$, and therefore \eqref{4.1.1} follows. Next, we show \eqref{4.1.2}. Since $Q_\sigma$ is the projection onto the tangent space $T_{f(\sigma)}M$, using the formula \eqref{A.1} for the trivialization, every $\phi\in\ran Q_\sigma$ can be written as \begin{equation}\label{phi} \phi=e^{i\lambda}\del{i\mu\psi_\gamma-\nabla_x\psi_\gamma\cdot \xi} \end{equation} for some $\xi \in C^0_\text{per}$ and $\mu\in\mathbb{R}$.
Using this representation, the assumption $\lambda<\delta$ (which implies $e^{i\lambda}=1+iO(\delta)$), together with the formula \eqref{3.6} for $L_\sigma$ (in which only the last term is not complex-linear), we can write \begin{equation} \label{4.1.3} L_\sigma Q_\sigma\phi=L_\sigma(i\mu\psi_\gamma -\nabla_x\psi_\gamma\cdot \xi)+O(\delta\epsilon^{-2}\phi). \end{equation} For fixed $\gamma \in C^2_\text{per}$, let $L_{z,x}:H^2(\omega)\to L^2(\omega)$ be the planar linearized operator at $\psi_{\gamma,z}:=f(\sigma)(\cdot,z)$, given explicitly as \begin{equation} \label{4.1.4} L_{z,x}\phi:=-\Delta_x \phi+\frac{1}{\epsilon^2}(\abs{\psi_{\gamma,z}}^2-1)\phi+\frac{2}{\epsilon^2}\psi_{\gamma,z}\inn{\psi_{\gamma,z}}{\phi}\quad (\phi:\omega\subset \mathbb{R}^2\to \mathbb{C}). \end{equation} Consider the inner product $\inn{L_\sigma Q_\sigma \phi}{\phi'}$ for $\phi,\,\phi'\in X^2$. Using \eqref{4.1.3}-\eqref{4.1.4}, we can split this into three parts, \begin{equation} \label{4.1.5} \begin{aligned} \inn{L_\sigma Q_\sigma \phi}{\phi'}&=\int_I\del{\int_\omega \inn{L_{z,x}(i\mu\psi_\gamma(x,z) -\nabla_x\psi_\gamma(x,z)\cdot \xi(z))}{\phi'(x,z)}\,dx}\,dz\\ &+ \inn{\partial_{zz} (i\mu\psi_\gamma -\nabla_x\psi_\gamma\cdot \xi)}{\phi'}+O(\delta\epsilon^{-2})\norm{\phi}_{X^2}\norm{\phi'}_{X^2}. \end{aligned} \end{equation} The operator $L_{z,x}$ is obtained by translating the planar linearized operator $L^{(1)}_x$ at the simple vortex $\psi^{(1)}$ by $x\mapsto x-\gamma(z)$. Consequently, for each fixed $z$, the three vectors $\partial_{x_j}\psi_\gamma(x,z),\,j=1,2$, and $i\psi_\gamma(x,z)$ are in the kernel of $L_{z,x}$ (see the discussion in Sec. \ref{sec:2.1} about symmetry zero modes). Because of this fact, the first term in the r.h.s. of \eqref{4.1.5} vanishes.
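For completeness, we sketch the standard argument behind these zero modes, assuming that $\psi^{(1)}$ solves the planar Ginzburg--Landau equation $-\Delta_x\psi^{(1)}+\epsilon^{-2}(\abs{\psi^{(1)}}^2-1)\psi^{(1)}=0$. Differentiating this equation in $x_j$ and using $\partial_{x_j}\abs{\psi^{(1)}}^2=2\Re(\overline{\psi^{(1)}}\,\partial_{x_j}\psi^{(1)})$ yields $$-\Delta_x\partial_{x_j}\psi^{(1)}+\frac{1}{\epsilon^2}(\abs{\psi^{(1)}}^2-1)\partial_{x_j}\psi^{(1)}+\frac{2}{\epsilon^2}\psi^{(1)}\Re\del{\overline{\psi^{(1)}}\,\partial_{x_j}\psi^{(1)}}=0,$$ that is, $L^{(1)}_x\,\partial_{x_j}\psi^{(1)}=0$. Similarly, differentiating the phase family $s\mapsto e^{is}\psi^{(1)}$ at $s=0$ gives $L^{(1)}_x(i\psi^{(1)})=0$, since $\Re(\overline{\psi^{(1)}}\,i\psi^{(1)})=0$. The kernel vectors of $L_{z,x}$ are then obtained by the translation $x\mapsto x-\gamma(z)$.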
To estimate the second term, we compute \begin{equation} \label{4.1.6} \partial_{zz} (i\mu\psi_\gamma -\nabla_x\psi_\gamma\cdot \xi)=-i\mu\nabla_x\psi_\gamma\cdot \gamma_{zz}+\nabla_x^2\psi_\gamma\gamma_{zz}\cdot \xi+O\Big(\norm{\gamma}_{C^2}^2\sum_{i,j=1,2}\partial^2_{x_ix_j}\phi\Big). \end{equation} By the assumption $\norm{\gamma}_{C^2}=O(\delta)$, \eqref{4.1.6} implies that the second term in the r.h.s. of \eqref{4.1.5} is $O(\delta)\norm{\phi}_{X^2}\norm{\phi'}_{X^2}.$ Lastly, setting $\phi'=L_\sigma Q_\sigma \phi$ in \eqref{4.1.5}, we obtain \eqref{4.1.2} so long as $\delta= O(\epsilon^4)$. \end{proof} \begin{remark} Estimate \eqref{4.1.2} shows that elements of $\ran Q_\sigma=T_{f(\sigma)}M$ are \textit{approximate zero modes} of $L_\sigma$. If one further shrinks $\delta=O(\epsilon^\alpha)$ with $\alpha>2$, then the power of $\delta$ in the r.h.s. of \eqref{4.1.2} can be improved to $(\alpha-2)/\alpha$. \end{remark} \begin{lemma}[coercivity] \label{lem4.2} There exists $\alpha=O(\abs{\log\epsilon}^{-1})>0$ independent of $\sigma$ such that \begin{equation} \label{4.6} \inn{L_\sigma\eta}{\eta}\ge\alpha\norm{\eta}_{X^0}^2\quad \del{\eta\in \ker Q_\sigma }. \end{equation} \end{lemma} \begin{proof} In this proof we fix $\sigma$ and drop the dependence on $\sigma$ in subscripts. All estimates are independent of $\sigma$. 1. Recall that $L^{(1)}_x$ is the planar linearized operator at the simple planar vortex $\psi^{(1)}$ as in Sec. \ref{sec:2.1}. The classical stability result for planar vortices states that there is $\beta>0$ such that for every $\eta\in L^2(\omega)$ orthogonal to the symmetry zero modes $G:=i\psi^{(1)},\,T_j:=\pd{\psi^{(1)}}{x_j}$, \begin{equation} \label{4.7} \inn{L^{(1)}_x\eta(\cdot)}{\eta(\cdot)}_{L^2(\omega)}\ge\beta \norm{\eta}_{L^2(\omega)}^2. \end{equation} This $\beta$ measures the spectral gap at $0$. See \cite[Secs.
7-8]{MR1479248} for a discussion in the same setting. Moreover, in \cite{MR1764706} it is shown that $\beta=O(\abs{\log\epsilon}^{-1})$. Let $L_0:X^k\to X^{k-2}$ be the linearized operator at the lift of $\psi^{(1)}$ to $X^k$. Integrating \eqref{4.7} along the $z$-direction, and using periodicity to drop the term $\inn{\partial_{zz}\eta}{\eta}\ge0$, we find \begin{equation} \label{4.7.1} \text{$\inn{L_0\eta}{\eta}\ge\beta\norm{\eta}_{X^0}^2$ if $\eta(\cdot, z)$ is orthogonal to $T_{j}$ and $G$.} \end{equation} The orthogonality condition for \eqref{4.7.1} holds trivially if $\eta$ is orthogonal to the lifts of $T_j$ and $G$ to $X^k$. 2. Put $\bar{Q}=1- Q$. Then we can rewrite the l.h.s. of \eqref{4.6} as \begin{equation} \label{4.8} \inn{L\eta}{\eta}=\inn{LQ\eta}{\eta}+\inn{L\bar Q\eta}{\eta}. \end{equation} The first term is $O(\delta^{1/2})\norm{\eta}_{X^2}^2$ by the approximate zero mode property \eqref{4.1.2}. The second term further splits as $\inn{L\bar Q\eta}{\bar Q\eta }+\inn{L\bar Q\eta}{Q\eta}=\inn{L\bar Q\eta}{\bar Q\eta }+\inn{\bar Q\eta}{LQ\eta}=\inn{L\bar Q\eta}{\bar Q\eta }+O(\delta^{1/2})\norm{\eta}_{X^2}^2$, again by \eqref{4.1.2}. Thus it suffices to control $\inn{L\bar Q\eta}{\bar Q\eta }$. Write $\bar{\eta}=\bar{Q}\eta$. We claim \begin{equation} \label{4.9} \inn{L\bar\eta}{\bar\eta }\ge \frac{\beta}{4}\norm{\eta}_{X^0}^2, \end{equation} which, so long as $\delta \ll \beta$, implies \eqref{4.6} with $\alpha=\beta/8$. 3. Choose a partition of unity $\chi_1,\,\chi_2$ on $\Omega$ such that $\chi_j\ge0,\,\chi_1^2+\chi_2^2=1$ and $\supp \chi_1\subset\Set{(x,z):\abs{x-\gamma(z)}<\rho}$ for $\gamma=\sbr{\sigma}_\gamma$ and some $\rho$ with $\delta<\rho<\diam(\omega)$ to be determined. These cut-off functions $\chi_1,\chi_2$ separate $\Omega$ into an inner region, where the modulus of the vortex filament $\psi_\gamma$ is small, and an outer region, where $\abs{\psi_\gamma}$ is close to $1$. We use the localization formula \cite[Eqn.
(1.1)]{MR676004}, $$L=\sum \chi_j L \chi_j-\sum \abs{\nabla \chi_j}^2.$$ If we choose $\abs{\nabla \chi_j}\le \rho^{-1}$, then this formula allows us to bound the l.h.s. of \eqref{4.9} from below as \begin{equation} \label{4.10} \inn{L\bar\eta}{\bar\eta }\ge \inn{L\chi_1\bar\eta}{\chi_1\bar\eta}+ \inn{L\chi_2\bar\eta}{\chi_2\bar\eta}-C_1\rho^{-2}\norm{\eta}_{X^0}^2. \end{equation} Since $\chi_2$ is supported away from the vortex filament, using the far-off asymptotics in \eqref{2.0}, the second term in the r.h.s. can be bounded from below as $\inn{L\chi_2\bar\eta}{\chi_2\bar\eta}\ge (1-C_2\epsilon^2\rho^{-2})\norm{\chi_2\bar\eta}_{X^0}^2$. Write $L=L_0+V$, where $L_0$ is the linearized operator at the lift of $\psi^{(1)}$, and $V$ is defined by this splitting. Then by the asymptotics \eqref{2.0}, we have $\norm{V}_{L^\infty}\le C_2\delta\epsilon^{-1}$, where $C_2$ is independent of $\rho$. This implies \begin{equation} \label{4.11} \inn{L\bar\eta}{\bar\eta }\ge \inn{L_0\chi_1\bar\eta}{\chi_1\bar\eta}+ (1-C_2\epsilon^2\rho^{-2})\norm{\chi_2\bar\eta}_{X^0}^2 -(C_1\rho^{-2}+C_2\delta\epsilon^{-1})\norm{\eta}_{X^0}^2. \end{equation} Recall that $Q$ is the projection onto the approximate zero modes satisfying \eqref{4.1.2}. This, the fact that $\bar\eta\in \ker Q_\sigma$, and the lower bound \eqref{4.7.1} together give \begin{equation} \label{4.12} \inn{L_0\chi_1\bar\eta}{\chi_1\bar\eta}\ge (\beta-C_3(\delta^{1/2}+\rho^{-1})) \norm{\chi_1\bar\eta}^2_{X^0}. \end{equation} Plugging \eqref{4.12} into \eqref{4.11}, and choosing $\rho=2C_3\beta^{-1}$, we find \begin{equation} \label{4.13} \inn{L\bar\eta}{\bar\eta }\ge \del{\min\del{\frac{\beta}{2}-C_3\delta^{1/2},1- 4C_2C_3^2\epsilon^2\beta^2}-(4C_1C_3^2\beta^2+C_2\delta\epsilon^{-1})}\norm{\eta}_{X^0}^2. \end{equation} Since $\beta=O(\abs{\log\epsilon}^{-1})$, for $\delta=\epsilon^{1+s},\,s>0$ and all $\epsilon\ll1$, we get the claim \eqref{4.9} from \eqref{4.13}.
\end{proof} \section*{Acknowledgments} The Author is supported by Danish National Research Foundation grant CPH-GEOTOP-DNRF151. \section*{Declarations} \begin{itemize} \item Competing interests: The Author has no conflicts of interest to declare that are relevant to the content of this article. \end{itemize} \appendix \section{Properties of the Fr\'echet derivatives} In this section we consider the Fr\'echet derivatives of the immersion $f$ defined in \eqref{2.1}. \subsection{Basic variational calculus} First, we recall some basic elements of variational calculus that we use repeatedly. For details, see for instance \cite[Appendix C]{MR2431434}. \emph{Fr\'echet derivative}. Let $X,\,Y$ be two Banach spaces and let $U$ be an open set in $X$. For a map $g:U\subset X\to Y$ and a vector $u\in U$, the Fr\'echet derivative $dg(u)$ is a linear map from $X$ to $Y$ such that $g(u+v)-g(u)-dg(u)v=o(\norm{v}_X)$ for every $v\in X$. If $dg(u)$ exists at $u$, then it is unique. If $dg(u)$ exists for every $u\in U$, and the map $u\mapsto dg(u)$ is continuous from $U$ to the space of linear operators $L(X,Y)$, then we say that $g$ is $C^1$ on $U$. In this case, $dg(u)$ is uniquely given by $$v\mapsto \od{g(u+tv)}{t}\vert_{t=0}\quad (v\in X).$$ Iteratively, we can define higher-order derivatives this way. \emph{Gradient and Hessian}. If $X$ is a Hilbert space over the scalar field $Y$, then by Riesz representation, we can identify $dg(u)$ with an element of $X$, denoted $g'(u)$. The vector $g'(u)$ is called the $X$-gradient of $g$. Similarly, we denote by $g''(u)$ the second-order Fr\'echet derivative $d^2g(u)$. If $g$ is $C^2$, then $g''$ can be identified with a symmetric linear operator uniquely determined by the relation $$\inn{g''(u)v}{w}=\md{g(u+tv+sw)}{2}{t}{}{s}{}\vert_{s=t=0}\quad (v,w\in X).$$ \emph{Expansion}. Let $X$ be a Hilbert space over the scalar field $Y$. Suppose $g$ is $C^2$ on $U\subset X$.
Define a scalar function $\phi(t):=g(v+tw)$ for vectors $v,w$ such that $v+tw\in U$ for every $0\le t\le 1$. Then the elementary Taylor expansion of $\phi$ at $t=1$ gives $$g(v+w)=g(v)+\inn{g'(v)}{w}+\frac{1}{2}\inn{g''(v)w}{w}+o(\norm{w}^2).$$ Here we have used the definitions of $g'$ and $g''$ from the last paragraph. \emph{Composition}. Let $\Omega\subset \mathbb{R}^d$ be a bounded domain with smooth boundary. Fix $r>d/2,\,f\in C^{r+1}(\mathbb{R}^n)$. For $u:\Omega\to \mathbb{R}^n$, define a map $g:u\mapsto f\circ u$. Then $g:H^r(\Omega)\to H^r(\Omega)$, and it is $C^1$. The Fr\'echet derivative is given by $v\mapsto (\nabla f\circ u)\cdot v$. \subsection{Various uniform estimates} In this section we assume $\epsilon\ll1$ in \eqref{1.1}. For two complex numbers, we use the real inner product $\inn{u}{v}=\Re (\bar{u}v)$. Fix some $\alpha>0$. Using the definition of the immersion $f$ in Section \ref{sec:2.2}, for $\sigma=(\lambda,\gamma)\in \Sigma_{\epsilon^\alpha}$ and $(\mu,\xi)\in Y^k$, we compute the Fr\'echet derivative of $f$ as \begin{equation} \label{A.1} df(\sigma)(\mu,\xi)=e^{i\lambda}(i\mu\psi_\gamma-\nabla_x\psi_\gamma\cdot \xi). \end{equation} This is uniformly bounded in $\sigma$ as an operator from $Y^0$ to $X^0$, since $$\begin{aligned} \norm{df(\sigma)(\mu,\xi)}_{X^0}&\le \abs{\mu}\norm{\psi_\gamma}_{X^0}+\norm{\nabla_x\psi_\gamma\cdot\xi}_{X^0}\\ &\le \abs{\mu}\del{\norm{\psi^{(1)}}_{L^2(\omega)}+O(\epsilon^{2+\alpha/2})}\\ &+\del{\norm{\nabla\psi^{(1)}}_{L^2(\omega)}+O(\epsilon^{1+\alpha/2})}\norm{\xi}_{C^0}\\&\le C(\Omega)\abs{\log\epsilon}^{1/2}\norm{(\mu,\xi)}_{Y^0}. \end{aligned}$$ Here we have used \eqref{2.0.5} and \eqref{4.3}. Using \eqref{A.1} and \eqref{3.3.00}, one can get a uniform estimate for $d_\sigma df(\sigma):Y^0\to L(Y^0, X^0).$ Write an element of $Y^k$ as $\sigma=([\sigma]_\lambda,[\sigma]_\gamma)$.
The adjoint operator $df(\sigma)^*$ is determined by the relation $$\begin{aligned} &\inn{ df(\sigma)(\mu,\xi)}{\phi}\\ &=\int_x\int_z\inn{e^{i\lambda}(i\mu\psi_\gamma-\nabla_x\psi_\gamma\cdot \xi)}{\phi}\\&=\mu[df(\sigma)^*\phi]_\lambda +\int_z\xi\cdot [df(\sigma)^*\phi]_\gamma\quad (\phi\in X^0). \end{aligned}$$ Here and in the remainder of this section, it is understood that the various integrals are taken over $(x,z)\in\omega\times I=\Omega$. By Fubini's theorem and the identity $\inn{v\cdot w}{ u}=\inn{u}{v}\cdot w$ for a real vector $w$, the above relation implies \begin{equation} \label{A.2} df(\sigma)^*\phi=\del{\int_x\int_z\inn{\phi}{ie^{i\lambda}\psi_\gamma},-\int_x\inn{{e^{i\lambda}\nabla_x\psi_\gamma(x,\cdot)}}{\phi(x,\cdot)}}. \end{equation} This adjoint operator is also uniformly bounded in $\sigma$, with $\norm{df(\sigma)^*}_{X^0\to Y^0} \le C\abs{\log\epsilon}^{1/2}.$ Moreover, by Sobolev embedding, $df(\sigma)^*$ maps $X^2$ into $Y^0$. Using \eqref{A.2} and \eqref{3.3.00}, one can get a uniform estimate for $d_\sigma df(\sigma)^*:Y^0\to L(X^0, Y^0).$ The operator $\mathcal{J}_\sigma$ is defined in \eqref{2.3}. It induces a symplectic form w.r.t. the inner product \eqref{2.00Y} on the tangent bundle $T\Sigma$, since $$\inn{\mathcal{J}_\sigma\chi}{\chi}=\inn{g_\sigma^*J^{-1}g_\sigma\chi}{\chi}=\inn{J^{-1}g_\sigma\chi}{g_\sigma\chi}=0\quad (\chi\in Y^0).$$ Using \eqref{A.1}-\eqref{A.2}, we can compute $\mathcal{J}_\sigma$ explicitly as \begin{equation} \label{A.3} \begin{split} &[\mathcal{J}_\sigma(\mu,\xi)]_\lambda =-\int_x\int_z\inn{\nabla_x\psi_\gamma\cdot\xi}{\psi_\gamma},\\ &[\mathcal{J}_\sigma(\mu,\xi)]_\gamma =\mu\int_x \inn{\nabla_x\psi_\gamma(x,\cdot)}{\psi_\gamma(x,\cdot)}+\int_x \inn{\nabla_x\psi_\gamma(x,\cdot)\cdot J\xi(\cdot)}{\nabla_x\psi_\gamma(x,\cdot)}.
\end{split} \end{equation} Here we have used the identity that, for any complex-valued $C^1$ function $\phi$ and vector field $\chi$ in $\mathbb{R}^2$, by the Cauchy-Riemann equation, $$-i\nabla\phi\cdot \chi=\nabla{\phi}\cdot J\chi.$$ One can also write $\mathcal{J}_\sigma=\mathcal{J}_\sigma(\mu,\xi(z))$ as the operator of multiplication by the matrix $(B_{ij})$, where \begin{equation} \label{A.4} \begin{aligned} &B_{ii}=0\quad(i=1,2,3),\\ &B_{12}=-\int_x\int_z\inn{\pd{\psi_\gamma}{x_1}(x,z)}{\psi_\gamma(x,z)},\\ &B_{13}=-\int_x\int_z\inn{\pd{\psi_\gamma}{x_2}(x,z)}{\psi_\gamma(x,z)},\\ &B_{21}=\int_x\inn{\pd{\psi_\gamma}{x_1}(x,z)}{\psi_\gamma(x,z)},\\ &B_{23}=\int_x\inn{\pd{\psi_\gamma}{x_1}(x,z)}{\pd{\psi_\gamma}{x_1}(x,z)},\\ &B_{31}=\int_x\inn{\pd{\psi_\gamma}{x_2}(x,z)}{\psi_\gamma(x,z)},\\ &B_{32}=-\int_x\inn{\pd{\psi_\gamma}{x_2}(x,z)}{\pd{\psi_\gamma}{x_2}(x,z)}. \end{aligned} \end{equation} Here we have used the Cauchy-Riemann equation to eliminate certain cross terms of the form $\inn{\pd{\psi_\gamma}{x_1}}{\pd{\psi_\gamma}{x_2}}$. Using \eqref{A.4}, the fact that $\norm{\nabla\psi^{(1)}}_{L^2(\omega)}\sim C(\omega)\abs{\log\epsilon}^{1/2}$ (see \eqref{2.0.5} and \cite[Chap. V.1]{MR1269538}), and the asymptotics \eqref{2.0} for $\psi^{(1)}$, one can check that $\mathcal{J}_\sigma$ is invertible and satisfies the uniform estimates \begin{align} &\norm{\mathcal{J}_\sigma}_{Y^k\to Y^k}\le C(\omega)\abs{\log\epsilon}^2\label{A.5},\\ &\norm{\mathcal{J}_\sigma^{-1}}_{Y^k\to Y^k}\le C(\omega)\abs{\log\epsilon}^{-2}\label{A.6} \end{align} for every $k\in\mathbb{N}$. \bibliography{bibfile} \end{document}
\begin{document} \title{Hypermonogenic solutions and plane waves of the Dirac operator in $\mathbb{R}^p\times\mathbb{R}^q$} \author{Al\'{i} Guzm\'{a}n Ad\'{a}n, Heikki Orelma, Franciscus Sommen} \maketitle \begin{abstract} In this paper we first define hypermonogenic solutions of the Dirac operator in $\mathbb{R}^p\times\mathbb{R}^q$ and study some basic properties, e.g., obtaining a Cauchy integral formula in the unit hemisphere. Hypermonogenic solutions form a natural function class in classical Clifford analysis. After that, we define the corresponding hypermonogenic plane wave solutions and deduce explicit methods to compute these functions. \end{abstract} \textbf{Mathematics Subject Classification (2000)}. Primary 30G35; Secondary 30A05\\ \\ \textbf{Keywords}. Hypermonogenic solution, Cauchy's formula, plane wave \section{Introduction} Clifford analysis is nowadays a well-established generalization of classical complex analysis to higher dimensions. It is a function theory for solutions of the Dirac operator $\partial_{\m{x}}=\sum_{j=1}^me_j\partial_{x_j}$, where $\{ e_1,...,e_m\}$ generates the Clifford algebra $\mathbb{R}_m$ subject to the defining relations $e_ie_j+e_je_i=-2\delta_{ij}$. One of the key features of the Dirac operator is its $SO(m)$ rotation invariance. The effect of this feature may be seen in every part of the theory. This also inspired the search for more general operators, acting on Clifford algebra valued functions but with a ``weaker'' symmetry (for example, a subgroup of $SO(m)$). One example is the so-called modified Dirac operator defined by Heinz Leutwiler and Sirkka-Liisa Eriksson. Their idea is to add an extra generator $e$ and then generate the Clifford algebra $\mathbb{R}_{m+1}$ by the defining relations $e_ie_j+e_je_i=-2\delta_{ij}$, $e^2=-1$ and $ee_j=-e_je$.
They obtained the operator, called the modified Dirac operator, \[ M=e\partial_r+\partial_{\m{y}}+\frac{p-1}{r}e\cdot, \] where $e\cdot$ denotes the interior multiplication and $p$ is a natural number. This operator is invariant only under rotations with respect to the $e$-axis, no longer under the whole rotation group. In this paper we deduce a correspondence between the set of null solutions of the modified Dirac operator $M$ and the solutions of the biaxial Dirac operator $\partial_{\m{x}}+\partial_{\m{y}}$. In particular, for $f=A+eB$ we obtain that \[ \partial_{\m{x}}(A+eB)+\partial_{\m{y}}(A+eB)=e\partial_r(A+eB)+\partial_{\m{y}}(A+eB)+\frac{p-1}{r}e\cdot (A+eB), \] where $A$ and $B$ are radial functions with respect to $\m{x}$ taking values in $\mathbb{R}_m$ and $e=\frac{\m{x}}{r}$, where $r=|\m{x}|$. We will call these functions hypermonogenic solutions of the Dirac operator; they are special biaxial monogenic functions. \\ \\ In Section 3 we study hypermonogenic functions as special biaxial solutions of the Dirac equation and we present the corresponding Cauchy-Kovalevskaya extension formula. In the next section we obtain a Cauchy formula for hypermonogenic functions in the upper half-ball, resulting from Cauchy's formula for biaxial monogenics in the unit ball. After that, we construct a family of plane wave hypermonogenic solutions by applying Funk-Hecke's formula to an integral of plane wave monogenic functions. Examples are given in the last section, in particular using the monogenic Fourier kernel $\exp(\langle \m{x},\m{t}\rangle +i\langle \m{y},\m{s}\rangle)(\m{t}+i\m{s})$. This example is expressed in terms of Bessel functions and it can also be obtained by applying the Cauchy-Kovalevskaya extension formula. This is the starting point for Fourier analysis and Radon transforms for hypermonogenic solutions.
\section{Preliminaries} Let us now recall some standard facts from the theory of Clifford algebras and analysis. We denote by $\mathbb{R}_m$ the real Clifford algebra generated by the symbols $\{e_1,...,e_m\}$ satisfying the defining relations $ e_ie_j+e_je_i=-2\delta_{ij}, $ for $i,j=1,...,m$, where $\delta_{ij}$ is the well-known Kronecker symbol. We may consider the Euclidean vector space $\mathbb{R}^m$ as a naturally embedded subspace of $\mathbb{R}_m$ via the embedding $(x_1,...,x_m)\mapsto \m{x}:=\sum_{j=1}^m x_je_j$. Similarly, we define the complex Clifford algebra by $\mathbb{C}_m=\mathbb{R}_m\otimes \mathbb{C}$ and consider $\mathbb{C}^m$ as its subspace. If we define an increasing $k$-list by $A=(i_1,...,i_k)$ with $1\le i_1<...<i_k\le m$ and put $|A|=k$, we may write the basis elements of a Clifford algebra as $e_A:=e_{i_1}...e_{i_k}$. Then an element $a\in \mathbb{R}_m$ (or $a\in \mathbb{C}_m$) may be expressed as a sum $a=\sum_{k=0}^m\sum_{|A|=k} a_Ae_A$, where $a_A\in\mathbb{R}$ (or $a_A\in\mathbb{C}$). In $\mathbb{R}^m$ we define the Euclidean inner product by $\langle \m{x},\m{y}\rangle=\sum_{j=1}^mx_jy_j$ and the Hermitian inner product in $\mathbb{C}^m$ by $\langle \m{x},\m{y}\rangle=\sum_{j=1}^mx_jy^c_j$, where $y^c_j$ is the complex conjugate of ${y}_j$. An inner product induces the norm $|\m{x}|=\langle \m{x},\m{x}\rangle^{\frac{1}{2}}$. If $\m{x},\m{y}\in \mathbb{R}_m$ are vectors, their product may be represented as \[ \m{x}\m{y}=\frac{1}{2}(\m{x}\m{y}+\m{y}\m{x})+\frac{1}{2}(\m{x}\m{y}-\m{y}\m{x}). \] It is easy to see that the first term is a real number, with $\langle \m{x},\m{y}\rangle=-\frac{1}{2}(\m{x}\m{y}+\m{y}\m{x})$. The last term is a linear combination of basis elements $e_ie_j$, where $i<j$, and is denoted by $\m{x}\wedge\m{y}=\frac{1}{2}(\m{x}\m{y}-\m{y}\m{x})$. These operators may be generalized as follows.
In the Clifford algebra $\mathbb{R}_m$ we may define the subspaces of $k$-multivectors as $ \mathbb{R}_m^{(k)}=\{a_k= \sum_{|A|=k}x_Ae_A : x_A\in\mathbb{R}\}. $ Then we may split the product of a vector and a $k$-multivector as \[ \m{x}a_k=\m{x}\cdot a_k+\m{x}\wedge a_k, \] where the interior and exterior multiplications are \begin{align*} \m{x}\cdot a_k&=\frac{1}{2}(\m{x} a_k-(-1)^ka_k\m{x}),\\ \m{x}\wedge a_k&=\frac{1}{2}(\m{x} a_k+(-1)^ka_k\m{x}). \end{align*} Since an arbitrary element $a\in\mathbb{R}_m$ admits the multivector decomposition $ a=\sum_{k=0}^m a_k, $ where $a_k\in \mathbb{R}_m^{(k)}$, we may define the multiplications as \begin{align*} \m{x}\cdot a&=\sum_{k=1}^m\m{x}\cdot a_k,\\ \m{x}\wedge a&=\sum_{k=0}^{m-1}\m{x}\wedge a_k. \end{align*} Let us now consider functions $f:\Omega\to\mathbb{R}_m$, where $\Omega$ is an open subset of $\mathbb{R}^m$. We say that these functions, which are of the form \[ f=\sum_{k=0}^m\sum_{|A|=k}f_Ae_A, \] are differentiable, integrable, etc. if their component functions $f_A$ have these properties. The fundamental differential operator in Clifford analysis is the so-called Dirac operator, defined by \[ \partial_{\m{x}}=e_1\partial_{x_1}+\cdots+e_m\partial_{x_m}, \] and its square equals $-\Delta_{\m{x}}$, where $\Delta_{\m{x}}$ is the Laplacian, that is, \[ \Delta_{\m{x}}=-\partial_{\m{x}}^2=\partial_{x_1}^2+...+\partial_{x_m}^2. \] A differentiable Clifford algebra valued function $f$, defined on some open set $\Omega\subset\mathbb{R}^m$, is called left (resp. right) monogenic if $\partial_{\m{x}}f=0$ (resp. $f\partial_{\m{x}}=0$) on $\Omega$. Monogenic functions have many nice function-theoretic properties, see \cite{BDS, DSS}. One fundamental feature is the Cauchy integral formula. In this paper we need it in the following special case.
Consider monogenic functions $f:\Omega\to\mathbb{R}_m$, where $\Omega\subset\mathbb{R}^m $ contains the unit ball $B_1=\{ \m{x}\in\mathbb{R}^m : |\m{x}|<1\}$. The surface of the unit ball is the unit sphere $S^{m-1}=\{ \m{x}\in\mathbb{R}^m : |\m{x}|=1\}$. \begin{thm}[Cauchy formula on the unit ball, \cite{BDS}] Let $f:\Omega\to\mathbb{R}_m$ be a monogenic function and $B_1\subset\Omega$. If $\m{x}\in B_1$, then \[ f(\m{x})=\frac{1}{\lambda_{m-1}}\int_{S^{m-1}} \frac{\m{x}-\m{\eta}}{|\m{x}-\m{\eta}|^m}d\sigma(\m{\eta})f(\m{\eta}), \] where $\lambda_{m-1}$ is the measure of $S^{m-1}$ and the vectorial surface element $d\sigma(\m{\eta})=\m{\eta}dS(\m{\eta})$ is the product of the outward pointing unit normal vector $\m{\eta}$ on the sphere and the scalar surface element $dS(\m{\eta})$. \end{thm} On the unit sphere we will also need the following integral formula. \begin{thm}[Funk-Hecke, \cite{H}] Let $\m{\xi},\m{\eta}\in S^{m-1}$ and suppose $\psi(t)$ is continuous for $-1\le t\le 1$. Then for every spherical harmonic $H_k$ of degree $k$, \[ \int_{S^{m-1}} \psi(\langle\m{\xi},\m{\eta}\rangle)H_k(\m{\eta})dS(\m{\eta})= \lambda_{m-1}\frac{k!}{(m-2)_k} H_k(\m{\xi})\int_{-1}^1 \psi (t)C^{\frac{m}{2}-1}_k(t)(1-t^2)^{\frac{m-3}{2}}dt, \] where $C_k^{\frac{m}{2}-1}$ is a Gegenbauer polynomial and $(a)_k$ is the Pochhammer symbol. \end{thm} \section{Hypermonogenic solutions in biaxial domains} At the beginning of the 1990s Heinz Leutwiler, together with Sirkka-Liisa Eriksson, defined the so-called hypermonogenic functions (see e.g. \cite{ELI,L}).
They are functions $f:\Omega\subset \mathbb{R}^{q+1}_+\to \mathbb{R}_{q+1}$ which are defined on an open subset of the upper half-space $\mathbb{R}^{q+1}_+=\{ (r,\m{y})\in\mathbb{R}\times \mathbb{R}^{q} : r>0\}$, take values in the Clifford algebra $\mathbb{R}_{q+1}$ associated to the quadratic form $(re+\m{y})^2=-r^2-|\m{y}|^2$, and belong to the kernel of the so-called modified Dirac operator \[ Mf:=e\partial_rf+\partial_{\m{y}}f+\frac{p-1}{r}e\cdot f, \] where $e\cdot f$ denotes the interior multiplication operator and $p$ is a natural number. \\ \\ The theory of hypermonogenic functions, or briefly, hypermonogenic function theory, is related to a hyperbolic upper half-space model. The theory has its own advantages; e.g., paravector power functions may be included in the kernel of the operator. Integral formulas and a wide class of special solutions have also been studied. The reader may find more information and recent research results, for example, in \cite{ELI, EOS, EOV,L}. \\ \\ Let us now show how these hypermonogenic functions may be seen as a special case of monogenic functions. Our fundamental idea is to consider $r$ as the norm of some extra $p$-vector variable $\m{x}$, that is, $r=|\m{x}|$. Considering only radial functions with respect to $\m{x}$, we have $\partial_{\m{x}}B=\frac{\m{x}}{r}\partial_rB$ and we may denote $e=\frac{\m{x}}{r}$. Now $\partial_{\m{x}}e=\frac{1-p}{r}$ and we obtain for radial functions $\partial_{\m{x}}(eB)=-\partial_rB-\frac{p-1}{r}B=e\partial_r(eB)+\frac{p-1}{r}e\cdot (eB)$.
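The identity $\partial_{\m{x}}e=\frac{1-p}{r}$ used above follows by a direct computation, which we include for the reader's convenience: since $\partial_{\m{x}}\m{x}=\sum_{j=1}^pe_j^2=-p$ and $\partial_{\m{x}}r^{-1}=-\frac{\m{x}}{r^3}$, the Leibniz rule gives \[ \partial_{\m{x}}\Big(\frac{\m{x}}{r}\Big)=\big(\partial_{\m{x}}r^{-1}\big)\m{x}+\frac{1}{r}\,\partial_{\m{x}}\m{x}=-\frac{\m{x}^2}{r^3}-\frac{p}{r}=\frac{1}{r}-\frac{p}{r}=\frac{1-p}{r}, \] where in the last step we used $\m{x}^2=-r^2$.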
If we now consider functions $A$ and $B$, which are radial with respect to $\m{x}$ and take their values in a Clifford algebra $\mathbb{R}_{q}$ associated to the quadratic form $\m{y}^2=-|\m{y}|^2$, we have that $\partial_{\m{x}}(A+eB)+\partial_{\m{y}}(A+eB)=e\partial_r(A+eB)+\partial_{\m{y}}(A+eB)+\frac{p-1}{r}e\cdot (A+eB)$. This motivates us to make the following definition. \begin{defn}[Hypermonogenic solutions] Let $f:\Omega\subset \mathbb{R}^p\times\mathbb{R}^q\to \mathbb{C}_{p+q}$ be a differentiable function. Hypermonogenic solutions of the Dirac operator are functions of the form \[ f(\m{x},\m{y})=A(|\m{x}|,\m{y})+\frac{\m{x}}{|\m{x}|}B(|\m{x}|,\m{y}), \] defined on a biaxial domain $\Omega$ in $\mathbb{R}^p\times\mathbb{R}^q$ and satisfying $(\partial_{\m{x}}+\partial_{\m{y}})f=0$. \end{defn} Let us now find a generalized Cauchy-Riemann system for hypermonogenic solutions. \begin{prop} A function $f(\m{x},\m{y})=A(|\m{x}|,\m{y})+\frac{\m{x}}{|\m{x}|}B(|\m{x}|,\m{y})$ is a hypermonogenic solution if and only if \begin{align*} \partial_{\m{y}}A-\partial_{|\m{x}|}B-\frac{p-1}{|\m{x}|}B&=0,\\ \partial_{\m{y}}B-\partial_{|\m{x}|}A&=0. \end{align*} \end{prop} Proof. Since $\partial_{\m{x}}\big(\frac{\m{x}}{|\m{x}|}\big)=\frac{1-p}{|\m{x}|}$ we obtain \begin{align*} \partial_{\m{x}}\Big(\frac{\m{x}}{|\m{x}|}B(|\m{x}|,\m{y})\Big) &=\partial_{\m{x}}\big(\frac{\m{x}}{|\m{x}|}\big)B(|\m{x}|,\m{y})+\sum_{j=1}^{p}e_j\frac{\m{x}}{|\m{x}|}\partial_{x_j}B(|\m{x}|,\m{y})\\ &=\frac{1-p}{|\m{x}|}B(|\m{x}|,\m{y})+\frac{\m{x}^2}{|\m{x}|^2}\partial_{|\m{x}|}B(|\m{x}|,\m{y}).
\end{align*} The system is now easy to obtain using this formula.$\square$\\ \\ At the heart of this paper are functions with the series representation \begin{align}\label{CHoj} f(\m{x},\m{y})&=\sum_{j=0}^\infty \m{x}^j f_j(\m{y}). \end{align} All monogenic functions of this form\footnote{One may verify that in this case $A=\sum_{j=0}^\infty(-1)^{j}|\m{x}|^{2j}f_{2j}(\m{y})$ and $ B=\sum_{j=0}^\infty(-1)^{j}|\m{x}|^{2j+1}f_{2j+1}(\m{y})$.} are hypermonogenic solutions. For example, in the second part of the paper, solutions of this type are crucial when we consider the so-called plane wave solutions. We can also find hypermonogenic solutions of this form as follows.\\ \\ The Vekua-type system in the preceding proposition allows us to construct hypermonogenic solutions using the Cauchy-Kovalevskaya extension in higher codimensions, see \cite{DSS}. Hypermonogenic solutions are determined by the restriction $A(0,\m{y})$ to the surface $\m{x}=\m{0}$; that is, starting from a given function $A(\m{y})$ we may establish its Cauchy-Kovalevskaya extension \[ A(|\m{x}|,\m{y})+\frac{\m{x}}{|\m{x}|}B(|\m{x}|,\m{y}). \] Let us now take an open set $\widetilde{\Omega}\subset \mathbb{R}^p\times\mathbb{R}^q$ which is $SO(p)$ invariant, and denote $\Omega=\widetilde{\Omega}\cap \mathbb{R}^q$, assuming this to be non-empty. In a neighbourhood of this set, we may construct a hypermonogenic function, when $f_{0}(\m{y})$ is given, as follows. \begin{thm}[Generalized Cauchy-Kovalevskaya extension, \cite{DSS}]\label{Absolut} Let $f_{0}(\m{y})$ be an analytic function in $\Omega$. Then there exists a unique sequence $\{f_j(\m{y})\}_{j=1}^\infty$ of analytic functions such that the series \[ f(\m{x},\m{y})=\sum_{j=0}^\infty \m{x}^j f_j(\m{y}) \] is convergent in a neighbourhood $U\subset \mathbb{R}^p\times \mathbb{R}^q$ of $\Omega$ and its sum $f$ is a hypermonogenic solution in $U$.
The function $f_{0}(\m{y})$ is determined by the relation \[ f_{0}(\m{y})=f(\m{0},\m{y}). \] Furthermore, the sum $f$ is formally given by the expression \begin{align*} f(\m{x},\m{y})=\Gamma(p/2)\Big(\frac{|\m{x}|}{2}\sqrt{\Delta_{\m{y}}}\Big)^{-p/2}\Big(\frac{|\m{x}|}{2}\sqrt{\Delta_{\m{y}}}J_{\frac{p}{2}-1}(|\m{x}|\sqrt{\Delta_{\m{y}}})+\frac{\m{x}\partial_{\m{y}}}{2}J_{\frac{p}{2}}(|\m{x}|\sqrt{\Delta_{\m{y}}})\Big)f_{0}(\m{y}), \end{align*} where $J$ is a Bessel function and $\sqrt{\Delta_{\m{y}}}$ is the formal square root of the Laplacian. \end{thm} The Cauchy-Kovalevskaya problem in the biaxial framework was also studied in the paper \cite{DS}. The aim of that paper was to find a hypermonogenic solution of the form \[ f(\m{x},\m{y})=\sum_{j=0}^\infty c_j\m{x}^j F_j(\m{y})W(\m{y}), \] where the $F_j(\m{y})$ are weight functions and $W(\m{y})$ is a given initial function, whereby $F_0(\m{y})=1$, $c_0=1$ and $\partial_{\m{x}}(c_j\m{x}^j)=c_{j-1}\m{x}^{j-1}$. Three explicit examples of hypermonogenic solutions were computed by choosing the initial function to be a Gaussian function, a Clifford-Bessel function of biaxial type and Kummer's function, respectively. \section{Cauchy's integral formula for hypermonogenic solutions in the unit ball} In this section we derive a Cauchy formula for hypermonogenic solutions using the Cauchy formula for monogenic functions. We restrict our study to the case where the domain is the unit ball in $\mathbb{R}^{p+q}$. We assume that the hypermonogenic solution $f$ is defined on a set which contains the unit ball.
The corresponding unit sphere $S^{p+q-1}\subset\mathbb{R}^{p+q}$ may be expressed using polar coordinates as a product \begin{equation}\label{Nahui} S^{p+q-1}=S^{p-1}\times S^{q-1}\times [0,\frac{\pi}{2}], \end{equation} and then, in these coordinates, a point $\m{\eta}\in S^{p+q-1}$ may be expressed as \begin{align}\label{Wasa} \m{\eta}=\cos(\theta)\m{\omega}+\sin(\theta)\m{\nu} \end{align} with $\m{\omega}\in S^{p-1}\subset\mathbb{R}^p$ and $\m{\nu}\in S^{q-1}\subset\mathbb{R}^q$, where $\theta$ is the angle between the vectors $\m{\eta}$ and $\m{\omega}$ with $0\le \theta\le\frac{\pi}{2}$. We point out that $\m{y}\perp \m{\omega}$, $\m{x}\perp \m{\nu}$ and $\m{\omega}\perp \m{\nu}$. If $|\m{x}+\m{y}|<1$, the Cauchy formula for a function $f$ over the unit ball is \[ f(\m{x},\m{y})=\frac{1}{\lambda_{p+q-1}}\int_{S^{p+q-1}}\frac{\m{x}+\m{y}-\m{\eta}}{|\m{x}+\m{y}-\m{\eta}|^{p+q}} d\sigma(\m{\eta})f(\m{\eta}), \] where $d\sigma(\m{\eta})=\m{\eta}dS(\m{\eta})$. Pulling back the scalar volume element $dS(\m{\eta})$ from the sphere $S^{p+q-1}$ into the preceding decomposition (\ref{Nahui}) via the polar coordinates (\ref{Wasa}), we obtain \[ dS(\m{\eta})=\cos^{p-1}(\theta)dS(\m{\omega})\sin^{q-1}(\theta)dS(\m{\nu})d\theta. \] Writing the Cauchy integral in the preceding coordinates, we have \begin{align*} &f(\m{x},\m{y})=\\ &\frac{1}{\lambda_{p+q-1}}\int_{0}^{\frac{\pi}{2}}\int_{S^{q-1}}\int_{S^{p-1}}\frac{\m{x}+\m{y}-\cos(\theta)\m{\omega}-\sin(\theta)\m{\nu}}{|\m{x}+\m{y}-\cos(\theta)\m{\omega}-\sin(\theta)\m{\nu}|^{p+q}} (\cos(\theta)\m{\omega}+\sin(\theta)\m{\nu})f(\m{\eta})dS(\m{\omega})d\Gamma(\theta,\m{\nu}), \end{align*} where $d\Gamma(\theta,\m{\nu})=\cos^{p-1}(\theta)\sin^{q-1}(\theta)dS(\m{\nu})d\theta$.
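As a quick numerical sanity check (a sketch, not part of the paper), the factorization of the surface measure implies $\lambda_{p+q-1}=\lambda_{p-1}\lambda_{q-1}\int_0^{\pi/2}\cos^{p-1}(\theta)\sin^{q-1}(\theta)\,d\theta$, where $\lambda_{n-1}$ is understood here as the area $2\pi^{n/2}/\Gamma(n/2)$ of $S^{n-1}$ (an assumption about the normalization), and the dimensions $p=3$, $q=2$ are illustrative:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def sphere_area(n):
    # surface area lambda_{n-1} of the unit sphere S^{n-1} in R^n
    return 2 * np.pi**(n / 2) / gamma(n / 2)

p, q = 3, 2  # illustrative dimensions (assumption)

# integral of cos^{p-1}(theta) sin^{q-1}(theta) over [0, pi/2]
angular, _ = quad(lambda t: np.cos(t)**(p - 1) * np.sin(t)**(q - 1), 0, np.pi / 2)

# the factorized measure must reproduce the total area of S^{p+q-1}
assert abs(sphere_area(p) * sphere_area(q) * angular - sphere_area(p + q)) < 1e-10
```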
\\ \\ In the preceding case, a hypermonogenic solution \[ f(\m{x},\m{y})=A(|\m{x}|,\m{y})+\frac{\m{x}}{|\m{x}|}B(|\m{x}|,\m{y}) \] takes the form \[ f(\m{\eta})=A(\cos(\theta),\sin(\theta)\m{\nu})+ \m{\omega}B(\cos(\theta),\sin(\theta)\m{\nu}) \] on the sphere. We see that the functions $A$ and $B$ do not depend on the variable $\m{\omega}$, so we may simplify the innermost integral. We prove the following two lemmas. \begin{lem}We may write \begin{align*} &\int_{S^{p-1}}\frac{\m{x}+\m{y}-\cos(\theta)\m{\omega}-\sin(\theta)\m{\nu}}{|\m{x}+\m{y}-\cos(\theta)\m{\omega}-\sin(\theta)\m{\nu}|^{p+q}} (\cos(\theta)\m{\omega}+\sin(\theta)\m{\nu})f(\m{\eta})dS(\m{\omega})\\ &=\big(A+(\m{x}+\m{y})(\sin(\theta)\m{\nu}A-\cos(\theta)B)\big)I(\m{x}+\m{y};\theta,\m{\nu}), \end{align*} where \[ I(\m{x}+\m{y};\theta,\m{\nu})=\int_{S^{p-1}}\frac{dS(\m{\omega})}{|\m{x}+\m{y}-\cos(\theta)\m{\omega}-\sin(\theta)\m{\nu}|^{p+q}}. \] \end{lem} Proof. Let us first compute \begin{align*} &(\m{x}+\m{y}-\cos(\theta)\m{\omega}-\sin(\theta)\m{\nu})(\cos(\theta)\m{\omega}+\sin(\theta)\m{\nu})\\ &=(\m{x}+\m{y}) \big(\cos(\theta)\m{\omega}+\sin(\theta)\m{\nu}\big)+1 \end{align*} and then \begin{align*} &(\m{x}+\m{y}-\cos(\theta)\m{\omega}-\sin(\theta)\m{\nu})(\cos(\theta)\m{\omega}+\sin(\theta)\m{\nu})f(\m{\eta})\\ &=\big((\m{x}+\m{y}) (\cos(\theta)\m{\omega}+\sin(\theta)\m{\nu})+1\big)A\\ &+\big((\m{x}+\m{y}) (-\cos(\theta)+\sin(\theta)\m{\nu}\m{\omega})+\m{\omega}\big)B\\ &=A+(\m{x}+\m{y})(\sin(\theta)\m{\nu}A-\cos(\theta)B)\\ &+(\m{x}+\m{y}) \big(\cos(\theta)\m{\omega}A+ \sin(\theta)\m{\nu}\m{\omega}B\big) +\m{\omega}B.
\end{align*} Then we obtain \begin{align*} &\int_{S^{p-1}}\frac{\m{x}+\m{y}-\cos(\theta)\m{\omega}-\sin(\theta)\m{\nu}}{|\m{x}+\m{y}-\cos(\theta)\m{\omega}-\sin(\theta)\m{\nu}|^{p+q}} (\cos(\theta)\m{\omega}+\sin(\theta)\m{\nu})f(\m{\eta})dS(\m{\omega})\\ &=\int_{S^{p-1}}\frac{A+(\m{x}+\m{y})(\sin(\theta)\m{\nu}A-\cos(\theta)B)}{|\m{x}+\m{y}-\cos(\theta)\m{\omega}-\sin(\theta)\m{\nu}|^{p+q}} dS(\m{\omega})\\ &+\underbrace{\int_{S^{p-1}}\frac{(\m{x}+\m{y}) \big(\cos(\theta)\m{\omega}A+ \sin(\theta)\m{\nu}\m{\omega}B\big) +\m{\omega}B}{|\m{x}+\m{y}-\cos(\theta)\m{\omega}-\sin(\theta)\m{\nu}|^{p+q}} dS(\m{\omega})}_{=0}, \end{align*} where the last integral is zero because the integrand is an odd function with respect to $\m{\omega}$ (see \cite{F}). This proves the lemma.$\square$ \begin{lem} The function $I(\m{x}+\m{y};\theta,\m{\nu})$ in the preceding lemma may be written in the form \[ I(|\m{x}|,\m{y};\theta,\m{\nu}) = \frac{\lambda_{p-1}}{\tau^{\frac{p+q}{2}}}\frac{2^{p-2}\Gamma^2(\frac{p-1}{2})}{\Gamma(p-1)}\Big(\frac{\tau}{\tau+2r\cos(\theta)}\Big)^{\frac{p+q}{2}} {}_2F_1\Big(\frac{p+q}{2},\frac{p-1}{2},p-1; \frac{4r\cos(\theta)}{\tau+2r\cos(\theta)}\Big), \] where $\tau=|\m{x}|^2+\cos^2(\theta)+|\m{y}-\sin(\theta)\m{\nu}|^2$, $r=|\m{x}|$ and ${}_2F_1$ is a hypergeometric function. \end{lem} Proof. Since $\m{x}-\cos(\theta)\m{\omega}\perp \m{y}-\sin(\theta)\m{\nu}$, we have \begin{align*} |\m{x}+\m{y}-\cos(\theta)\m{\omega}-\sin(\theta)\m{\nu}|^2&=|\m{x}-\cos(\theta)\m{\omega}|^2+|\m{y}-\sin(\theta)\m{\nu}|^2\\ &=\tau-2\cos(\theta)\langle \m{x},\m{\omega}\rangle, \end{align*} where $\tau=|\m{x}|^2+\cos^2(\theta)+|\m{y}-\sin(\theta)\m{\nu}|^2$. We denote $r=|\m{x}|$ and $\m{\xi}=\frac{\m{x}}{r}\in S^{p-1}$.
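Before continuing, the hypergeometric closed form can be tested numerically against the one-dimensional integral produced by the Funk-Hecke reduction used in the proof (a sketch; the values of $p$, $q$, $r$, $\theta$, $\tau$ are illustrative assumptions satisfying $|2r\cos(\theta)/\tau|<1$):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, hyp2f1

p, q = 3, 2                     # illustrative dimensions (assumption)
r, theta, tau = 0.3, 0.7, 1.5   # illustrative values with |2 r cos(theta)/tau| < 1
c = 2 * r * np.cos(theta)

# one-dimensional integral produced by the Funk-Hecke reduction
lhs, _ = quad(lambda t: (1 - t**2)**((p - 3) / 2)
              * (1 - (c / tau) * t)**(-(p + q) / 2), -1, 1)

# hypergeometric closed form obtained from Euler's integral representation
z = 2 * c / (tau + c)           # equals 4 r cos(theta) / (tau + 2 r cos(theta))
rhs = (2**(p - 2) * gamma((p - 1) / 2)**2 / gamma(p - 1)
       * (tau / (tau + c))**((p + q) / 2)
       * hyp2f1((p + q) / 2, (p - 1) / 2, p - 1, z))

assert abs(lhs - rhs) < 1e-8
```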
Then we obtain, using the Funk-Hecke theorem, that \begin{align*} I(\m{x}+\m{y};\theta,\m{\nu})&=\int_{S^{p-1}}\frac{dS(\m{\omega})}{\left(\tau-2r\cos(\theta)\langle \m{\xi},\m{\omega}\rangle\right)^{\frac{p+q}{2}}}\\ &= \lambda_{p-1} \int_{-1}^1 \frac{(1-t^2)^{\frac{p-3}{2}}}{(\tau-2r\cos(\theta)t)^{\frac{p+q}{2}}}dt\\ &= \frac{\lambda_{p-1}}{\tau^{\frac{p+q}{2}}} \int_{-1}^1 \frac{(1-t^2)^{\frac{p-3}{2}}}{(1-\frac{2r\cos(\theta)}{\tau}t)^{\frac{p+q}{2}}}dt. \end{align*} Recall Euler's formula for hypergeometric functions (see e.g. \cite{A}) \[ {}_2F_1(a,b,2b; z)=\frac{\Gamma(2b)}{\Gamma^2(b)}\int_0^1s^{b-1}(1-s)^{b-1}(1-zs)^{-a}ds, \] which converges for $|z|<1$ and $b>0$. Changing variables via $t= 2s-1$, we obtain the integral \begin{align*} {}_2F_1(a,b,2b; z)&=\frac{\Gamma(2b)}{2\Gamma^2(b)}\int_{-1}^1\Big(\frac{1+t}{2}\Big)^{b-1}\Big(\frac{1-t}{2}\Big)^{b-1}\Big(1-z\frac{t+1}{2}\Big)^{-a}dt\\ &=\frac{\Gamma(2b)}{2^{2b-1}(1-\frac{z}{2})^a\Gamma^2(b)}\int_{-1}^1(1-t^2)^{b-1}\Big(1-\frac{z}{2(1-\frac{z}{2})}t\Big)^{-a}dt.
\end{align*} If we take $a=\frac{p+q}{2}$, $b-1=\frac{p-3}{2}$ and $\frac{z}{2(1-\frac{z}{2})}=\frac{2r\cos(\theta)}{\tau}$, we obtain \[b=\frac{p-1}{2}, \hspace{1cm} z=\frac{4r\cos(\theta)}{\tau+2r\cos(\theta)}.\] Hence, \begin{multline*} \int_{-1}^1(1-t^2)^{\frac{p-3}{2}}\Big(1-\frac{2r\cos(\theta)}{\tau}t\Big)^{-\frac{p+q}{2}}dt \\ = \frac{2^{p-2}\Gamma^2(\frac{p-1}{2})}{\Gamma(p-1)}\Big(\frac{\tau}{\tau+2r\cos(\theta)}\Big)^{\frac{p+q}{2}} {}_2F_1\Big(\frac{p+q}{2},\frac{p-1}{2},p-1; \frac{4r\cos(\theta)}{\tau+2r\cos(\theta)}\Big). \end{multline*} Then we have \begin{align*} I(\m{x}+\m{y};\theta,\m{\nu}) &= \frac{\lambda_{p-1}}{\tau^{\frac{p+q}{2}}}\frac{2^{p-2}\Gamma^2(\frac{p-1}{2})}{\Gamma(p-1)}\Big(\frac{\tau}{\tau+2r\cos(\theta)}\Big)^{\frac{p+q}{2}} {}_2F_1\Big(\frac{p+q}{2},\frac{p-1}{2},p-1; \frac{4r\cos(\theta)}{\tau+2r\cos(\theta)}\Big), \end{align*} which depends only on $|\m{x}|$ and $\m{y}$.$\square$\\ \\ Using the preceding lemmas, we obtain the following Cauchy formula: \begin{align*} &f(\m{x},\m{y})=\\ &\frac{1}{\lambda_{p+q-1}}\int_{0}^{\frac{\pi}{2}}\int_{S^{q-1}} \big(A+(\m{x}+\m{y})(\sin(\theta)\m{\nu}A-\cos(\theta)B)\big)I(|\m{x}|,\m{y};\theta,\m{\nu}) d\Gamma(\theta,\m{\nu})\\ &=\frac{1}{\lambda_{p+q-1}}\int_{0}^{\frac{\pi}{2}}\int_{S^{q-1}}I(|\m{x}|,\m{y};\theta,\m{\nu}) (1+(\m{x}+\m{y})\sin(\theta)\m{\nu})A(\cos(\theta),\sin(\theta)\m{\nu}) d\Gamma(\theta,\m{\nu})\\ &-\frac{1}{\lambda_{p+q-1}}\int_{0}^{\frac{\pi}{2}}\int_{S^{q-1}}I(|\m{x}|,\m{y};\theta,\m{\nu}) \cos(\theta)B(\cos(\theta),\sin(\theta)\m{\nu}) d\Gamma(\theta,\m{\nu}). \end{align*} Since
$f(\m{x},\m{y})=A(|\m{x}|,\m{y})+\frac{\m{x}}{|\m{x}|}B(|\m{x}|,\m{y})$, we obtain Cauchy formulas for the $A$ and $B$ parts of the function separately. \begin{thm} In the unit ball \[ A(|\m{x}|,\m{y})=\frac{1}{\lambda_{p+q-1}}\int_{0}^{\frac{\pi}{2}}\int_{S^{q-1}}I(|\m{x}|,\m{y};\theta,\m{\nu}) \big(A+\m{y}(\sin(\theta)\m{\nu}A-\cos(\theta)B)\big) d\Gamma(\theta,\m{\nu}) \] and \[ B(|\m{x}|,\m{y})=\frac{|\m{x}|}{\lambda_{p+q-1}}\int_{0}^{\frac{\pi}{2}}\int_{S^{q-1}} I(|\m{x}|,\m{y};\theta,\m{\nu})\sin(\theta)\m{\nu}A\,d\Gamma(\theta,\m{\nu}), \] where $d\Gamma(\theta,\m{\nu})=\cos^{p-1}(\theta)\sin^{q-1}(\theta)dS(\m{\nu})d\theta$. \end{thm} \section{Hypermonogenic plane wave solutions} In this section our aim is to give a definition of hypermonogenic plane wave solutions. In general, plane wave solutions in Clifford analysis are solutions of the Dirac operator depending only on inner products of the variable with the unit normal vector of the wave's direction of propagation. Monogenic plane waves are always complex valued functions of the form \[ f(\langle \m{x},\m{s}\rangle)\m{s}, \] where $\m{s}^2=\m{0}$ and $f(z)$ is a holomorphic function. To find the right definition of hypermonogenic plane waves, we consider, as an example, a biaxial monogenic plane wave \[ h(\m{x},\m{y})=\langle \m{x}+\m{y},\m{\tau}+\m{\sigma}\rangle^k(\m{\tau}+\m{\sigma}), \] where $\m{\tau}\in\mathbb{C}^p$ and $\m{\sigma}\in\mathbb{C}^q$. This function is monogenic if $\m{\tau}+\m{\sigma}$ is a null vector, i.e., \[ (\m{\tau}+\m{\sigma})^2=\m{\tau}^2+\m{\sigma}^2=0. \] We may take $\m{\tau}=\m{t}$ and $\m{\sigma}=i\m{s}$, where $\m{t}\in S^{p-1}$ and $\m{s}\in S^{q-1}$.
Then we have \[ h(\m{x},\m{y})=(\langle \m{x},\m{t}\rangle+i\langle\m{y},\m{s}\rangle)^k(\m{t}+i\m{s}). \] Since hypermonogenic solutions are radial with respect to $\m{x}$, we may radialize the function $h$ by integrating it over $\m{t}\in S^{p-1}$, i.e., we define the function \[ g(\m{x},\m{y})=\int_{S^{p-1}} (\langle \m{x},\m{t}\rangle+i\langle\m{y},\m{s}\rangle)^k(\m{t}+i\m{s})dS(\m{t}). \] We now want to see the proper form of this integral for a definition. We start from the following lemma. \begin{lem} There exist functions $A$ and $B$ such that \[ g(\m{x},\m{y})=A+iB\m{s}. \] \end{lem} Proof. We compute \begin{align*} g(\m{x},\m{y})&=\int_{S^{p-1}} (\langle \m{x},\m{t}\rangle+i\langle\m{y},\m{s}\rangle)^k(\m{t}+i\m{s})dS(\m{t})\\ &=\int_{S^{p-1}} (\langle \m{x},\m{t}\rangle+i\langle\m{y},\m{s}\rangle)^k\m{t}dS(\m{t})\\ &+i\int_{S^{p-1}} (\langle \m{x},\m{t}\rangle+i\langle\m{y},\m{s}\rangle)^k dS(\m{t})\m{s}. \end{align*} \begin{flushright} $\square$ \end{flushright} We obtain the following explicit expressions for these functions. \begin{prop} The function $A$ is of the form \[ A=\lambda_{p-1}\sum_{j=0}^{[\frac{k-1}{2}]} \m{x}^{2j+1} a_j(\langle\m{y},\m{s}\rangle), \] where \[ a_j(\langle\m{y},\m{s}\rangle)=(-1)^j {k \choose 2j+1}\frac{\Gamma(\frac{p-1}{2})\Gamma(j+\frac{3}{2})}{\Gamma(\frac{p}{2}+j+1)}(i\langle\m{y},\m{s}\rangle)^{k-2j-1}. \] \end{prop} Proof.
Using the binomial theorem we have \begin{align*} A&=\int_{S^{p-1}} (\langle \m{x},\m{t}\rangle+i\langle\m{y},\m{s}\rangle)^k\m{t}dS(\m{t})\\ &=\int_{S^{p-1}} \sum_{j=0}^k {k \choose j} \langle \m{x},\m{t}\rangle^j(i\langle\m{y},\m{s}\rangle)^{k-j}\m{t}dS(\m{t})\\ &=\sum_{j=0}^k {k \choose j} (i\langle\m{y},\m{s}\rangle)^{k-j} \int_{S^{p-1}} \langle \m{x},\m{t}\rangle^j\m{t}dS(\m{t}). \end{align*} We recall that for odd integrands the above integral is zero (see \cite{F}), i.e., \[ \int_{S^{p-1}} \langle \m{x},\m{t}\rangle^{2j}\m{t}dS(\m{t})=0. \] For even integrands we use the Funk-Hecke formula with $H_1(\m{x})=\m{x}$, $\psi(u)=u^{2j+1}$ and $\frac{1!}{(p-2)_1} C^{\frac{p}{2}-1}_1(u)=u$. Hence, \begin{align*} \int_{S^{p-1}} \langle \m{x},\m{t}\rangle^{2j+1}\m{t}dS(\m{t})&=|\m{x}|^{2j+1}\int_{S^{p-1}} \langle \frac{\m{x}}{|\m{x}|},\m{t}\rangle^{2j+1}\m{t}dS(\m{t})\\ &=|\m{x}|^{2j+1} \frac{\m{x}}{|\m{x}|} \lambda_{p-1}\int_{-1}^{1} u^{2j+1} u(1-u^2)^{\frac{p-3}{2}}du. \end{align*} Then by routine integration we obtain \[ \int_{-1}^{1} u^{2j+1} u(1-u^2)^{\frac{p-3}{2}}du=\frac{\Gamma(\frac{p-1}{2})\Gamma(j+\frac{3}{2})}{\Gamma(\frac{p}{2}+j+1)}, \] that is, \[ \int_{S^{p-1}} \langle \m{x},\m{t}\rangle^{2j+1}\m{t}dS(\m{t})=(-1)^j\m{x}^{2j+1} \lambda_{p-1}\frac{\Gamma(\frac{p-1}{2})\Gamma(j+\frac{3}{2})}{\Gamma(\frac{p}{2}+j+1)}. \] Then \begin{align*} A&=\sum_{j=0}^{[\frac{k-1}{2}]} {k \choose 2j+1}\int_{S^{p-1}} \langle \m{x},\m{t}\rangle^{2j+1}\m{t}dS(\m{t})(i\langle\m{y},\m{s}\rangle)^{k-2j-1}\\ &=\sum_{j=0}^{[\frac{k-1}{2}]} \m{x}^{2j+1} {k \choose 2j+1}(-1)^j \lambda_{p-1}\frac{\Gamma(\frac{p-1}{2})\Gamma(j+\frac{3}{2})}{\Gamma(\frac{p}{2}+j+1)}(i\langle\m{y},\m{s}\rangle)^{k-2j-1}.
\end{align*} \begin{flushright} $\square$ \end{flushright} \begin{prop} The function $B$ is of the form \[ B=\lambda_{p-1}\sum_{j=0}^{[\frac{k}{2}]} \m{x}^{2j}b_j(\langle\m{y},\m{s}\rangle), \] where \[ b_j(\langle\m{y},\m{s}\rangle)=(-1)^j{k \choose 2j}\frac{\Gamma(\frac{p-1}{2})\Gamma(j+\frac{1}{2})}{\Gamma(\frac{p}{2}+j)}(i\langle\m{y},\m{s}\rangle)^{k-2j}. \] \end{prop} Proof. As in the preceding proposition, we compute \begin{align*} B&=\int_{S^{p-1}} (\langle \m{x},\m{t}\rangle+i\langle\m{y},\m{s}\rangle)^k dS(\m{t})\\ &=\sum_{j=0}^k {k \choose j}\int_{S^{p-1}} \langle \m{x},\m{t}\rangle^jdS(\m{t})(i\langle\m{y},\m{s}\rangle)^{k-j}, \end{align*} and since $\m{t}\mapsto\langle \m{x},\m{t}\rangle^{2j+1}$ is an odd function we obtain \[ \int_{S^{p-1}} \langle \m{x},\m{t}\rangle^{2j+1}dS(\m{t})=0. \] Using the Funk-Hecke formula, where $ H_0(\m{x})=1$ and $\frac{0!}{(p-2)_0}C^{\frac{p}{2}-1}_0(u)=1 $, we compute \begin{align*} &\int_{S^{p-1}} \langle \m{x},\m{t}\rangle^{2j}dS(\m{t})\\ &=|\m{x}|^{2j}\int_{S^{p-1}} \langle \frac{\m{x}}{|\m{x}|},\m{t}\rangle^{2j}dS(\m{t})\\ &=|\m{x}|^{2j}\lambda_{p-1}\int_{-1}^{1} u^{2j} (1-u^2)^{\frac{p-3}{2}}du\\ &=|\m{x}|^{2j}\lambda_{p-1}\frac{\Gamma(\frac{p-1}{2})\Gamma(j+\frac{1}{2})}{\Gamma(\frac{p}{2}+j)}.
\end{align*} Substituting these into the original formula, we have \begin{align*} B&=\int_{S^{p-1}} (\langle \m{x},\m{t}\rangle+i\langle\m{y},\m{s}\rangle)^k dS(\m{t})\\ &=\sum_{j=0}^{[\frac{k}{2}]} {k \choose 2j}|\m{x}|^{2j}\lambda_{p-1}\frac{\Gamma(\frac{p-1}{2})\Gamma(j+\frac{1}{2})}{\Gamma(\frac{p}{2}+j)}(i\langle\m{y},\m{s}\rangle)^{k-2j}\\ &=\sum_{j=0}^{[\frac{k}{2}]} \m{x}^{2j}(-1)^j\lambda_{p-1}{k \choose 2j}\frac{\Gamma(\frac{p-1}{2})\Gamma(j+\frac{1}{2})}{\Gamma(\frac{p}{2}+j)}(i\langle\m{y},\m{s}\rangle)^{k-2j}. \end{align*} $\square$\\ \\ We see that we may write the monogenic function $g$ in the form \[ g(\m{x},\m{y})=\lambda_{p-1} \sum_{j=0}^{[\frac{k-1}{2}]} \m{x}^{2j+1} a_j(\langle\m{y},\m{s}\rangle)+\lambda_{p-1}\sum_{j=0}^{[\frac{k}{2}]} \m{x}^{2j}ib_j(\langle\m{y},\m{s}\rangle)\m{s}. \] This motivates us to define hypermonogenic plane waves as monogenic functions of the form \begin{align}\label{HPW} F(\m{x},\langle\m{y},\m{s}\rangle)=\sum_{j=0}^\infty \m{x}^j (C_j(\langle\m{y},\m{s}\rangle)+\m{s}D_j(\langle\m{y},\m{s}\rangle)), \end{align} where the functions $C_j$ and $D_j$ are complex valued differentiable functions of one complex variable. This definition is a good starting point for the construction of hypermonogenic plane waves and we will exploit it later. Developing this series expansion in terms of powers of $|\m{x}|$, we see that such functions are always of the following form.
\begin{defn}[Hypermonogenic plane waves (HPW)] Hypermonogenic plane waves are functions of the form \[ F(\m{x},\langle\m{y},\m{s}\rangle)=A(|\m{x}|,\langle\m{y},\m{s}\rangle)+B(|\m{x}|,\langle\m{y},\m{s}\rangle)\frac{\m{x}}{|\m{x}|}+C(|\m{x}|,\langle\m{y},\m{s}\rangle)\m{s}+D(|\m{x}|,\langle\m{y},\m{s}\rangle)\frac{\m{x}}{|\m{x}|}\m{s}, \] where the coefficient functions $A$, $B$, $C$ and $D$ are differentiable complex valued functions, $\m{s}\in S^{q-1}$ and $(\partial_{\m{x}}+\partial_{\m{y}})F=0$. \end{defn} It is obvious that hypermonogenic plane wave solutions are hypermonogenic solutions.\\ \\ Next we derive a system for the coefficient functions of a HPW of the form (\ref{HPW}), and for that we recall \[ \partial_{\m{x}} \m{x}^j=\begin{cases} -j\m{x}^{j-1}, & \text{ for $j$ even},\\ -(j+p-1)\m{x}^{j-1}, & \text{ for $j$ odd},\\ \end{cases} \] where we may briefly denote \[ \partial_{\m{x}} \m{x}^j=\beta_j\m{x}^{j-1}. \] \begin{prop}\label{P5} A function $F(\m{x},\langle\m{y},\m{s}\rangle)$ of the form (\ref{HPW}) is a hypermonogenic plane wave if and only if \begin{align*} \beta_{j+1}C_{j+1}(t) -(-1)^j D'_j(t)&=0,\\ \beta_{j+1}D_{j+1}(t) +(-1)^j C'_j(t)&=0, \end{align*} where $t=\langle\m{y},\m{s}\rangle$, and it is determined by its initial functions $C_0(t)$ and $D_0(t)$. \end{prop} Proof. First we have \[ \partial_{\m{y}}C_j(\langle\m{y},\m{s}\rangle)=\m{s}C'_j(\langle\m{y},\m{s}\rangle).
\] Since $\partial_{\m{y}}\m{x}^j=(-1)^{j}\m{x}^j\partial_{\m{y}}$, we obtain \begin{align*} (\partial_{\m{x}}+\partial_{\m{y}})F(\m{x},\langle\m{y},\m{s}\rangle) &=\sum_{j=1}^\infty \beta_j\m{x}^{j-1}(C_j(\langle\m{y},\m{s}\rangle)+\m{s}D_j(\langle\m{y},\m{s}\rangle))\\ &+\sum_{j=0}^\infty \m{x}^j (-1)^j(\m{s}C'_j(\langle\m{y},\m{s}\rangle)+D'_j(\langle\m{y},\m{s}\rangle)\m{s}^2)\\ &=\sum_{j=0}^\infty \m{x}^{j}(\beta_{j+1}C_{j+1}(\langle\m{y},\m{s}\rangle)-(-1)^jD'_j(\langle\m{y},\m{s}\rangle))\\ &+\sum_{j=0}^\infty \m{x}^j (\beta_{j+1}D_{j+1}(\langle\m{y},\m{s}\rangle)+ (-1)^jC'_j(\langle\m{y},\m{s}\rangle))\m{s}, \end{align*} completing the proof.$\square$ \section{Two methods to find hypermonogenic plane waves} In this section we describe two methods for constructing hypermonogenic plane wave solutions. \textbf{The first method} uses series of the form (\ref{HPW}), and we illustrate it with the following example. \begin{prop}\label{JuriS} If we look for a HPW of the form \[ F(\m{x},\langle\m{y},\m{s}\rangle)=\sum_{j=0}^\infty \m{x}^j (c_j +\m{s}d_j)e^{\langle\m{y},\m{s}\rangle} \] with the initial conditions $c_0=1$ and $d_0=0$, we obtain the solution \[ F(\m{x},\langle\m{y},\m{s}\rangle)=\frac{2^{\frac{p}{2}-1}\Gamma(\frac{p}{2})}{|\m{x}|^{\frac{p}{2}-1}}\Big(J_{\frac{p}{2}-1}(|\m{x}|) +J_{\frac{p}{2}}(|\m{x}|) \frac{\m{x}}{|\m{x}|}\m{s}\Big)e^{\langle\m{y},\m{s}\rangle}, \] where the $J_{\nu}$ are Bessel functions. \end{prop} Proof.
This particular case can be written in the form (\ref{HPW}) by taking \[C_j(t)=c_je^{t}, \;\;D_j(t)=d_je^{t}, \;\;\;\;\mbox{ which implies }\;\;\;\; C'_j(t)=c_je^{t}, \;\;D'_j(t)=d_je^{t}.\] The system given in Proposition \ref{P5} now becomes purely algebraic, \[ \begin{cases} \beta_{j+1}c_{j+1} -(-1)^j d_j=0,\\ \beta_{j+1}d_{j+1} +(-1)^j c_j=0, \end{cases} \;\;\;\;\mbox{ or equivalently }\;\;\;\; \begin{cases} c_{j+1} =\frac{(-1)^j}{\beta_{j+1}} d_j,\\d_{j+1} =-\frac{(-1)^j}{\beta_{j+1}} c_j. \end{cases}\] Since $d_0=0$, we compute that \[ 0=d_0=c_1=d_2=c_3=\ldots, \] i.e., the even $d_j$'s and the odd $c_j$'s are zero. Starting from $c_0=1$ we get \begin{align*} d_{1} &=-\frac{(-1)^0}{\beta_{1}} c_0=\frac{1}{p}, \\ c_{2} &=\frac{(-1)^1}{\beta_{2}} d_1= \frac{1}{2\cdot p},\\ d_{3} &=-\frac{(-1)^2}{\beta_{3}} c_2=\frac{1}{2\cdot p (p+2)},\\ c_{4} &=\frac{(-1)^3}{\beta_{4}} d_3=\frac{1}{2\cdot 4\cdot p (p+2)},\\ d_{5} &=-\frac{(-1)^4}{\beta_{5}} c_4=\frac{1}{2\cdot 4\cdot p (p+2)(p+4)},\\ c_{6}& =\frac{(-1)^5}{\beta_{6}} d_5=\frac{1}{2\cdot 4\cdot 6\cdot p (p+2)(p+4)}. \end{align*} Then we easily infer that when $j$ is even \[ c_j=\frac{1}{2\cdot 4\cdots j\cdot p(p+2)\cdots (p+j-2)} \] and when $j$ is odd \[ d_j=\frac{1}{2\cdot 4\cdots (j-1)\cdot p\cdot (p+2)\cdots (p+j-1)}. \] \underline{If $j$ is even}, we observe that $2^\frac{j}{2} (j/2)!=2\cdot 4\cdots j$ and we may write \[ c_j=\frac{1}{2^\frac{j}{2} (j/2)!\, p(2+p)\cdots (j-2+p)}, \] where in addition $(j/2)!=\Gamma(j/2+1)$. In the product $p(2+p)\cdots (j-2+p)$ there are $j/2$ factors.
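This pattern can be confirmed numerically by iterating the algebraic recursion and comparing against the Gamma-function closed forms $c_j=\Gamma(p/2)/(2^j\Gamma(j/2+1)\Gamma(j/2+p/2))$ for even $j$ and $d_j=\Gamma(p/2)/(2^j\Gamma(\frac{j+1}{2})\Gamma(\frac{j+1}{2}+p/2))$ for odd $j$ that the computation below leads to (a sketch; the dimension $p=3$ is an illustrative assumption):

```python
from math import gamma

p = 3  # illustrative dimension (assumption)

def beta(j):
    # beta_j from the formula for the action of partial_x on x^j
    return -j if j % 2 == 0 else -(j + p - 1)

# iterate c_{j+1} = (-1)^j d_j / beta_{j+1},  d_{j+1} = -(-1)^j c_j / beta_{j+1}
c, d = [1.0], [0.0]
for j in range(12):
    c.append((-1)**j / beta(j + 1) * d[j])
    d.append(-((-1)**j) / beta(j + 1) * c[j])

# compare with the Gamma-function closed forms
for j in range(0, 13, 2):
    assert abs(c[j] - gamma(p / 2) / (2**j * gamma(j / 2 + 1) * gamma(j / 2 + p / 2))) < 1e-12
    assert abs(d[j]) < 1e-15  # even d_j vanish
for j in range(1, 13, 2):
    assert abs(d[j] - gamma(p / 2) / (2**j * gamma((j + 1) / 2) * gamma((j + 1) / 2 + p / 2))) < 1e-12
    assert abs(c[j]) < 1e-15  # odd c_j vanish
```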
Then we have \begin{align*} &p(2+p)(4+p)\cdots (j-2+p)\\ &=2^{j/2}\Big(0+\frac{p}{2}\Big)\Big(1+\frac{p}{2}\Big)\Big(2+\frac{p}{2}\Big)\cdots \Big(\frac{j}{2}-1+\frac{p}{2}\Big)\\ &=2^{j/2}\prod_{k=1}^{j/2} \Big( k-1 +\frac{p}{2}\Big)=2^{j/2}\frac{\Gamma(j/2+p/2)}{\Gamma(p/2)}, \end{align*} where in the last line we use a classical identity for the Gamma function. Hence we obtain \begin{align*} c_j&= \frac{\Gamma(p/2)}{2^j \Gamma(j/2+1)\Gamma(j/2+p/2)}. \end{align*} \underline{When $j$ is odd}, we observe that $2^{\frac{j-1}{2}} (\frac{j-1}{2})!=2\cdot 4\cdots (j-1)$. Then we have \[ d_j=\frac{1}{2^{\frac{j-1}{2}} (\frac{j-1}{2})!\cdot p\cdot (2+p)\cdots (j-1+p)}, \] where $(\frac{j-1}{2})!=\Gamma(\frac{j+1}{2})$. In the product $p(2+p)(4+p)\cdots (j-1+p)$ there are $\frac{j+1}{2}$ factors. Then we have \begin{align*} &p(2+p)(4+p)\cdots (j-1+p)\\ &=2^{\frac{j+1}{2}}\Big(0+\frac{p}{2}\Big)\Big(1+\frac{p}{2}\Big)\Big(2+\frac{p}{2}\Big)\cdots \Big(\frac{j-1}{2}+\frac{p}{2}\Big)\\ &=2^{\frac{j+1}{2}}\prod_{k=1}^{\frac{j+1}{2}}\Big(k-1+\frac{p}{2}\Big)=2^{\frac{j+1}{2}}\frac{\Gamma((j+1)/2+p/2)}{\Gamma(p/2)}.
\end{align*} As a consequence, \[ d_j=\frac{\Gamma(p/2)}{2^{j} \Gamma(\frac{j+1}{2})\Gamma((j+1)/2+p/2)}. \] In this way we obtain the solution \begin{align*} F(\m{x},\langle\m{y},\m{s}\rangle) &=\Gamma(\frac{p}{2})\sum_{j=0}^\infty \left( \frac{\m{x}^{2j}}{2^{2j} \Gamma(j+1)\Gamma(j+\frac{p}{2})} + \frac{\m{x}^{2j+1}}{2^{2j+1} \Gamma(j+1)\Gamma(j+1+\frac{p}{2})}\m{s}\right)e^{\langle\m{y},\m{s}\rangle}\\ &=\Gamma(\frac{p}{2})\sum_{j=0}^\infty \left( \frac{(-1)^j |\m{x}|^{2j}}{2^{2j} \Gamma(j+1)\Gamma(j+\frac{p}{2})} +\frac{(-1)^j |\m{x}|^{2j}}{2^{2j+1} \Gamma(j+1)\Gamma(j+1+\frac{p}{2})}\m{x}\m{s}\right)e^{\langle\m{y},\m{s}\rangle}. \end{align*} Since $\Gamma(j+1)=j!$, we obtain \begin{align} F(\m{x},\langle\m{y},\m{s}\rangle) &=\Gamma(\frac{p}{2})\left(\sum_{j=0}^\infty \frac{(-1)^j |\m{x}|^{2j}}{2^{2j} j!\Gamma(\frac{p}{2}+j)} +\frac{1}{2}\sum_{j=0}^\infty \frac{(-1)^j |\m{x}|^{2j}}{2^{2j} j!\Gamma(\frac{p}{2}+j+1)}\m{x}\m{s}\right)e^{\langle\m{y},\m{s}\rangle}.\label{UNTERMENSCH} \end{align} The sums are now precisely those appearing in the definition of a Bessel function, \begin{align}\label{MashinaVremeni} J_{\nu}(z)=\frac{z^\nu}{2^\nu}\sum_{j=0}^\infty\frac{(-1)^j z^{2j}}{j! 2^{2j}\Gamma(\nu+j+1)},
\end{align} and we obtain the corresponding solution \begin{align*} F(\m{x},\langle\m{y},\m{s}\rangle) &=\Gamma(\frac{p}{2})\left(\frac{2^{\frac{p}{2}-1}}{|\m{x}|^{\frac{p}{2}-1}}J_{\frac{p}{2}-1}(|\m{x}|) +\frac{1}{2}\frac{2^{\frac{p}{2}}}{|\m{x}|^{\frac{p}{2}}}J_{\frac{p}{2}}(|\m{x}|) \m{x}\m{s}\right)e^{\langle\m{y},\m{s}\rangle}\\ &=\frac{2^{\frac{p}{2}-1}\Gamma(\frac{p}{2})}{|\m{x}|^{\frac{p}{2}-1}}\left(J_{\frac{p}{2}-1}(|\m{x}|) +J_{\frac{p}{2}}(|\m{x}|) \frac{\m{x}}{|\m{x}|}\m{s}\right)e^{\langle\m{y},\m{s}\rangle}. \end{align*} \begin{flushright} $\square$ \end{flushright} The preceding proof is a direct application of the fundamental system given in Proposition \ref{P5}. The same solution may be found using the generalized Cauchy-Kovalevskaya extension. \begin{prop} The plane wave solution found in Proposition \ref{JuriS} may be obtained using the generalized Cauchy-Kovalevskaya extension with the initial function $e^{\langle\m{y},\m{s}\rangle}$. \end{prop} Proof. Using the definition of a Bessel function (\ref{MashinaVremeni}), we find the formulas \begin{align*} \Big(\frac{|\m{x}|}{2}\sqrt{\Delta_{\m{y}}}\Big)^{-p/2}\Big(\frac{|\m{x}|}{2}\sqrt{\Delta_{\m{y}}}J_{\frac{p}{2}-1}(|\m{x}|\sqrt{\Delta_{\m{y}}})\Big)&=\sum_{j=0}^\infty \frac{(-1)^j |\m{x}|^{2j} \Delta_{\m{y}}^j}{j! 2^{2j}\Gamma(\frac{p}{2}+j)},\\ \Big(\frac{|\m{x}|}{2}\sqrt{\Delta_{\m{y}}}\Big)^{-p/2}J_{\frac{p}{2}}(|\m{x}|\sqrt{\Delta_{\m{y}}})&=\sum_{j=0}^\infty \frac{(-1)^j |\m{x}|^{2j} \Delta_{\m{y}}^j}{j!
2^{2j}\Gamma(\frac{p}{2}+j+1)}, \end{align*} which we need in the formula given in Theorem \ref{Absolut}. Since $\partial_{\m{y}}e^{\langle\m{y},\m{s}\rangle}=e^{\langle\m{y},\m{s}\rangle}\m{s}$, we compute \[ \Delta_{\m{y}}e^{\langle\m{y},\m{s}\rangle}=-\partial_{\m{y}}^2e^{\langle\m{y},\m{s}\rangle}=-\partial_{\m{y}}e^{\langle\m{y},\m{s}\rangle}\m{s}= -e^{\langle\m{y},\m{s}\rangle}\m{s}^2=e^{\langle\m{y},\m{s}\rangle}, \] and then $\Delta^j_{\m{y}}e^{\langle\m{y},\m{s}\rangle}=e^{\langle\m{y},\m{s}\rangle}$ for all $j=0,1,2,\ldots$. Using the generalized Cauchy-Kovalevskaya formula we obtain \begin{align*} f(\m{x},\m{y})&=\Gamma(\frac{p}{2})\Big(\frac{|\m{x}|}{2}\sqrt{\Delta_{\m{y}}}\Big)^{-p/2}\Big(\frac{|\m{x}|}{2}\sqrt{\Delta_{\m{y}}}J_{\frac{p}{2}-1}(|\m{x}|\sqrt{\Delta_{\m{y}}})+\frac{\m{x}\partial_{\m{y}}}{2}J_{\frac{p}{2}}(|\m{x}|\sqrt{\Delta_{\m{y}}})\Big)e^{\langle\m{y},\m{s}\rangle}\\ &=\Gamma(\frac{p}{2})\Big(\sum_{j=0}^\infty \frac{(-1)^j |\m{x}|^{2j} }{j! 2^{2j}\Gamma(\frac{p}{2}+j)} +\frac{1}{2}\sum_{j=0}^\infty \frac{(-1)^j |\m{x}|^{2j}}{j! 2^{2j}\Gamma(\frac{p}{2}+j+1)}\m{x}\m{s}\Big)e^{\langle\m{y},\m{s}\rangle}, \end{align*} which coincides with the function obtained in (\ref{UNTERMENSCH}).$\square$\\ \\ \textbf{The second method} to construct hypermonogenic plane wave solutions is based on the Funk-Hecke formula, as at the beginning of this section. We start from a monogenic plane wave, for example \begin{align}\label{Teroesa} e^{\langle \m{x},\m{t}\rangle +i\langle \m{y},\m{s}\rangle}(\m{t}+i\m{s}).
\end{align} Since a hypermonogenic plane wave must be a radial function with respect to the variable $\m{x}$, we need to integrate over the sphere $\m{t}\in S^{p-1}$, and then we have the function \[ G(\m{x},\langle \m{y},\m{s}\rangle)=\int_{S^{p-1}}e^{\langle \m{x},\m{t}\rangle +i\langle \m{y},\m{s}\rangle}(\m{t}+i\m{s})dS(\m{t}). \] Now it suffices to compute the integral using the Funk-Hecke formula, which we do in the next proposition. As a result we obtain a hypermonogenic plane wave solution. This example demonstrates the general idea of the method: instead of the function (\ref{Teroesa}), we may start from any monogenic plane wave and radialize it by integrating with respect to $\m{t}$. This integral may be computed using the Funk-Hecke formula, and the resulting function will be a hypermonogenic plane wave. \begin{prop} The preceding function $G(\m{x},\langle \m{y},\m{s}\rangle)$ is a hypermonogenic plane wave and can be written as \[ G(\m{x},\langle \m{y},\m{s}\rangle)=\frac{\sqrt{\pi}\lambda_{p-1}2^\frac{p-2}{2}\Gamma(\frac{p-1}{2})}{|\m{x}|^{\frac{p-2}{2}}} \Big(iI_\frac{p-2}{2}(|\m{x}|)\m{s}+ \frac{\m{x}}{|\m{x}|} I_{\frac{p}{2}}(|\m{x}|) \Big)e^{i\langle \m{y},\m{s}\rangle}, \] where $I$ is a modified Bessel function. \end{prop} Proof. We observe that we may write $G$ in the form \[ G(\m{x},\langle \m{y},\m{s}\rangle)=A(\m{x},\langle \m{y},\m{s}\rangle)+iB(\m{x},\langle \m{y},\m{s}\rangle)\m{s}, \] where \[ A=e^{i\langle \m{y},\m{s}\rangle}\int_{S^{p-1}}e^{\langle \m{x},\m{t}\rangle} \m{t}dS(\m{t}) \] and \[ B=e^{i\langle \m{y},\m{s}\rangle}\int_{S^{p-1}}e^{\langle \m{x},\m{t}\rangle} dS(\m{t}). \] Let us now compute the integrals. We define $\m{x}=r\m{v}$, where $\m{v}\in S^{p-1}$.
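The two one-dimensional integrals that the Funk-Hecke reduction produces for $A$ and $B$, namely $\int_{-1}^1 e^{ru}(1-u^2)^{\frac{p-3}{2}}du$ and $\int_{-1}^1 e^{ru}u(1-u^2)^{\frac{p-3}{2}}du$, admit modified-Bessel closed forms, which can be verified numerically (a sketch; $p$ and $r$ are illustrative assumptions):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import iv, gamma

p, r = 4, 1.3          # illustrative dimension and radius (assumption)
nu = (p - 2) / 2

# integral representation: int e^{ru}(1-u^2)^{(p-3)/2} du = sqrt(pi) Gamma((p-1)/2) (2/r)^nu I_nu(r)
lhs, _ = quad(lambda u: np.exp(r * u) * (1 - u**2)**((p - 3) / 2), -1, 1)
rhs = np.sqrt(np.pi) * gamma((p - 1) / 2) * (2 / r)**nu * iv(nu, r)
assert abs(lhs - rhs) < 1e-8

# the weighted integral picks up I_{p/2} via d/dz (z^{-nu} I_nu(z)) = z^{-nu} I_{nu+1}(z)
lhs2, _ = quad(lambda u: np.exp(r * u) * u * (1 - u**2)**((p - 3) / 2), -1, 1)
rhs2 = np.sqrt(np.pi) * gamma((p - 1) / 2) * (2 / r)**nu * iv(p / 2, r)
assert abs(lhs2 - rhs2) < 1e-8
```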
The Funk-Hecke formula gives us \begin{align*} &\int_{S^{p-1}}e^{r\langle \m{v},\m{t}\rangle} \m{t}dS(\m{t})=\lambda_{p-1}\m{v}\int_{-1}^{1}e^{ru}u(1-u^2)^{\frac{p-3}{2}}du \end{align*} and \begin{align*} &\int_{S^{p-1}}e^{\langle \m{x},\m{t}\rangle} dS(\m{t}) =\lambda_{p-1}\int_{-1}^{1}e^{ru}(1-u^2)^{\frac{p-3}{2}}du. \end{align*} We recall the integral representation of a modified Bessel function ($\text{Re}(\nu)>-\frac{1}{2}$) \[ I_\nu(z)=\frac{1}{\Gamma(\nu+\frac{1}{2})\sqrt{\pi}} \big(\frac{z}{2}\big)^\nu\int_{-1}^1 e^{-zu}(1-u^2)^{\nu-\frac{1}{2}}du. \] If we put $\nu=\frac{p-2}{2}$ and $r=-z$, we obtain \begin{align*} &I_\frac{p-2}{2}(-r)=(-1)^\frac{p-2}{2}\frac{1}{\Gamma(\frac{p-1}{2})\sqrt{\pi}} \big(\frac{r}{2}\big)^\frac{p-2}{2}\int_{-1}^1 e^{ru}(1-u^2)^{\frac{p-3}{2}}du\\ \Leftrightarrow\ & \int_{-1}^1 e^{ru}(1-u^2)^{\frac{p-3}{2}}du=(-1)^\frac{p-2}{2} \Gamma(\frac{p-1}{2})\sqrt{\pi} \big(\frac{2}{r}\big)^\frac{p-2}{2} I_\frac{p-2}{2}(-r). \end{align*} In order to get rid of the minus sign, we may use the following well-known identity, \[ I_\nu(-z)=(-z)^\nu z^{-\nu}I_\nu(z), \] which gives us \[ I_\frac{p-2}{2}(-r)=(-1)^\frac{p-2}{2}I_\frac{p-2}{2}(r), \] and then \begin{align*} \int_{-1}^1 e^{ru}(1-u^2)^{\frac{p-3}{2}}du= \Gamma(\frac{p-1}{2})\sqrt{\pi} \big(\frac{2}{r}\big)^\frac{p-2}{2} I_\frac{p-2}{2}(r). \end{align*} Substituting $r=|\m{x}|$, we obtain \[ B=\lambda_{p-1}\Gamma(\frac{p-1}{2})\sqrt{\pi}e^{i\langle \m{y},\m{s}\rangle}\big(\frac{2}{|\m{x}|}\big)^{\frac{p-2}{2}}I_\frac{p-2}{2}(|\m{x}|).
\] Consider now the function \[ \varphi(r)=\int_{-1}^{1}e^{ru}(1-u^2)^{\frac{p-3}{2}}du. \] Since the integrand and its derivative with respect to $r$ are continuous, we may compute \[ \varphi'(r)=\int_{-1}^{1}e^{ru}u(1-u^2)^{\frac{p-3}{2}}du. \] Let us recall the following differentiation formula for modified Bessel functions: \[ \frac{d }{dz}(z^{-\nu}I_\nu(z))=z^{-\nu}I_{\nu+1}(z). \] Using these, we have \begin{align*} \int_{-1}^1 e^{ru}u(1-u^2)^{\frac{p-3}{2}}du&= 2^\frac{p-2}{2}\Gamma(\frac{p-1}{2})\sqrt{\pi}\, \frac{d}{dr}\Big(r^{-\frac{p-2}{2}} I_\frac{p-2}{2}(r)\Big)\\ &=\Gamma(\frac{p-1}{2})\sqrt{\pi} \big(\frac{2}{r}\big)^\frac{p-2}{2} I_{\frac{p}{2}}(r). \end{align*} Substituting $r=|\m{x}|$ and $\m{v}=\frac{\m{x}}{|\m{x}|}$ we obtain \begin{align*} A&=\lambda_{p-1}e^{i\langle \m{y},\m{s}\rangle}\m{v}\int_{-1}^{1}e^{ru}u(1-u^2)^{\frac{p-3}{2}}du\\ &=\lambda_{p-1}e^{i\langle \m{y},\m{s}\rangle}\frac{\m{x}}{|\m{x}|}\Gamma(\frac{p-1}{2})\sqrt{\pi} \big(\frac{2}{|\m{x}|}\big)^\frac{p-2}{2} I_{\frac{p}{2}}(|\m{x}|)\\ &=\sqrt{\pi}\,\lambda_{p-1}2^\frac{p-2}{2}\Gamma(\frac{p-1}{2}) \frac{e^{i\langle \m{y},\m{s}\rangle} }{|\m{x}|^\frac{p}{2} } I_{\frac{p}{2}}(|\m{x}|)\m{x}. \end{align*} \begin{flushright} $\square$ \end{flushright} \textbf{Conclusions} In this paper we define a new subclass of the class of monogenic functions in the biaxial case. This function class is called the class of hypermonogenic solutions. After the definition, we provide methods for finding hypermonogenic solutions; this shows that the class is rich enough to be interesting to study. After that, we deduce the Cauchy formula for the upper hemisphere.
One interesting topic for future studies would be to find Cauchy formulas for more general manifolds. In the last part, we present a proper definition for hypermonogenic plane wave solutions and deduce two explicit methods to compute them. \\ \\ \textbf{Acknowledgements} The second author would like to thank all the nice people in Krijgslaan for their hospitality and patience. \begin{thebibliography}{99} \bibitem{A} Andrews G., Askey R., and Roy R., \textit{Special Functions}, Encyclopedia of Mathematics and its Applications 71, Cambridge University Press, Cambridge, 1999. \bibitem{BDS} Brackx, F., Delanghe, R., and Sommen, F., {\it{Clifford Analysis}}, Research Notes in Mathematics, 76, Pitman, Boston, MA, 1982. \bibitem{DSS} Delanghe, R., Sommen, F., and Sou\v{c}ek, V., {\it{Clifford Algebra and Spinor-valued Functions}}, Mathematics and its Applications, 53, Kluwer Academic Publishers Group, Dordrecht, 1992. \bibitem{DS} De Schepper, N., Sommen, F., \textit{Cauchy-Kowalevski extensions and monogenic plane waves in Clifford analysis}, Adv. Appl. Clifford Algebr. 22 (2012), no. 3, 625--647. \bibitem{ELI} Eriksson S.-L., Leutwiler H., {\it{Introduction to hyperbolic function theory}}, Clifford Algebras and Inverse Problems (Tampere 2008), Tampere Univ. of Tech. Institute of Math. Research Report No. 90 (2009), pp. 1--28. \bibitem{EOS} Eriksson, S.-L., Orelma, H., Sommen, F., \textit{Vekua systems in hyperbolic harmonic analysis}, Complex Anal. Oper. Theory 10 (2016), no. 2, 251--265. \bibitem{EOV} Eriksson S.-L., Orelma H., Vieira N., \textit{Hypermonogenic Functions of Two Vector Variables}, to appear. \bibitem{F} Folland G. B., \textit{How to Integrate a Polynomial over a Sphere}, The American Mathematical Monthly, Vol. 108, No. 5 (May, 2001), pp. 446--448. \bibitem{L} Leutwiler, H., \textit{Modified Clifford analysis}, Complex Variables Theory Appl. 17 (1992), no. 3-4, 153--171.
\bibitem{H} M\"{u}ller C., \textit{Spherical harmonics}, Lecture Notes in Mathematics, 17 Springer-Verlag, Berlin-New York 1966 iv+45 pp. \end{thebibliography} \textbf{Al\'{i} Guzm\'{a}n Ad\'{a}n}\\ Clifford Research Group \\ Department of Mathematical Analysis \\ Ghent University\\ Krijgslaan 281\\ 9000 Gent, Belgium\\ e-mail: {\tt [email protected]}\\ \\ \textbf{Heikki Orelma}\\ Laboratory of Civil Engineering\\ Tampere University of Technology\\ Korkeakoulunkatu 10, \\ 33720 Tampere, Finland\\ e-mail: {\tt [email protected]}\\ \\ \textbf{Franciscus Sommen}\\ Clifford Research Group\\ Department of Mathematical Analysis\\ Ghent University\\ Krijgslaan 281\\ 9000 Gent, Belgium\\ e-mail: {\tt [email protected]} \end{document}
\begin{document} \title{Precise exponential decay for solutions of semilinear elliptic equations and its effect on the structure of the solution set for a real analytic nonlinearity} \author{Nils Ackermann\thanks{Supported by CONACYT grant \ensuremath{\clubsuit\clubsuit\clubsuit}{} and PAPIIT grant IN104315 (Mexico)}\and Norman Dancer} \date{} \maketitle \begin{abstract} We are concerned with the properties of weak solutions of the stationary Schrödinger equation $-\Delta u + Vu = f(u)$, $u\in H^1(\mathbb{R}^N)\cap L^\infty(\mathbb{R}^N)$, where $V$ is Hölder continuous and $\inf V>0$. Assuming $f$ to be continuous and bounded near $0$ by a power function with exponent larger than $1$ we provide precise decay estimates at infinity for solutions in terms of Green's function of the Schrödinger operator. In some cases this improves known theorems on the decay of solutions. If $f$ is also real analytic on $(0,\infty)$ we obtain that the set of positive solutions is locally path connected. For a periodic potential $V$ this implies that the standard variational functional has discrete critical values in the low energy range and that a compact isolated set of positive solutions exists, under additional assumptions. \end{abstract} \section{Introduction} \label{sec:introduction} We are interested in the properties of weak solutions of \begin{equation} \label{eq:12}\tag{P} -\Delta u + Vu = f(u),\qquad u\in H^1(\mathbb{R}^N)\cap L^\infty(\mathbb{R}^N), \end{equation} where $f$ is continuous, $f(u)\le C\abs{u}^q$ near $0$, for some $q>1$, $V$ is Hölder continuous, bounded, and $\mu_0:= \inf V>0$. In the first part of this work we consider exponential decay of solutions of \eqref{eq:12}. We say that a function $u$ \emph{decays exponentially at infinity with exponent $\nu>0$ if $\limsup_{\abs{x}\to\infty}\mathrm{e}^{\nu\abs{x}}u(x)<\infty$}. 
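To make this notion concrete, we mention a standard explicit example (a classical fact, recorded here only for orientation): on $\mathbb{R}^3$, Green's function of $-\Delta+\mu_0$ with a constant $\mu_0>0$ is the Yukawa kernel,
\begin{equation*}
G(x,0)=\frac{\mathrm{e}^{-\sqrt{\mu_0}\,\abs{x}}}{4\pi\abs{x}},
\end{equation*}
which decays exponentially at infinity with exponent $\sqrt{\mu_0}$ in the above sense. Note, however, that the definition records only the exponential rate and is blind to the additional polynomial factor $\abs{x}^{-1}$; it is precisely this finer information that a comparison with Green's function retains.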
One of the most thorough studies of this question is an article by Rabier and Stuart~\cite{MR2001f:35139}, where general quasilinear equations are considered. We give a more precise description of the decay of such solutions $u$ in terms of Green's function $G$ of the Schrödinger operator $-\Delta+V$. Setting $H(x):= G(x,0)$ we show that $u$ is bounded above by a multiple of $H$ near infinity. In particular, $u$ decays as fast as $H$. In some cases this improves the estimates obtained in \cite{MR2001f:35139}. To illustrate this, suppose for a moment that $V$ is a positive constant $\mu_0$. Since $H$ decays exponentially at infinity with exponent $\sqrt{\mu_0}$, our result yields the same for every solution of \eqref{eq:12}, while \cite{MR2001f:35139} only yields exponential decay at infinity with exponent $\nu$ for every $\nu\in(0,\sqrt{\mu_0})$. Their method could be extended to yield the same result only if $f(u)/u\le0$ near $0$. On the other hand, if $u$ is a \emph{positive} solution of \eqref{eq:12} then we obtain that $u$ is bounded below by a multiple of $H$, that is to say, the decay of $u$ and that of $H$ are \emph{comparable}. We are not aware of a similar result in the literature. These comparison results are a consequence of \emph{a priori} exponential decay of every solution of \eqref{eq:12}, of the behavior of $f$ near $0$ and of a deep result of Ancona \cite{MR1482989} about the comparison of Green's functions for positive Schrödinger operators whose potentials only differ by a function that decays sufficiently fast at infinity. In the second part of our paper we assume in addition that $f$ is a real analytic function, either on all of $\mathbb{R}$ or solely on $(0,\infty)$. Throughout the text, analyticity always means \emph{real analyticity}. We have used analyticity before to obtain results on the path connectivity of bifurcation branches and solution sets \cite{MR1962054,MR0322615,MR0375019}.
Set $F(u):=\int_0^uf$ and introduce the variational functional \begin{equation*} J(u):=\frac12\int_{\mathbb{R}^N}(\abs{\nabla u}^2+Vu^2)-\int_{\mathbb{R}^N}F(u) \end{equation*} for weakly differentiable functions $u\colon\mathbb{R}^N\to\mathbb{R}$ such that the integrals are well defined. If $K$ is the set of solutions of \eqref{eq:12} and $K_+$ the set of \emph{positive} solutions of \eqref{eq:12} then we show that the analyticity of $f$ implies local path connectedness of $K$ in the first case and of $K_+$ in the second case. Moreover, it follows that $J$ is locally constant on $K$, respectively $K_+$. We achieve this by working in spaces of continuous functions with norms weighted at infinity by powers of $H$. As a consequence, the set $K_+$ lies in the interior of the positive cone of a related weighted space. This allows us to transfer the analyticity from $f$ to the set $K_+$ in the case where $f$ is only analytic in $(0,\infty)$. The local path connectedness of an analytic set follows from a classical triangulation theorem \cite{MR0173265,MR0159346}. In the last part we apply these results to a special case of \eqref{eq:12}, where we assume $V$ to be periodic in the coordinates. Set $c_0:=\inf J(K)>0$, the ground state energy. Under additional growth assumptions on $f$ we obtain that $J(K)$, respectively $J(K_+)$, has no accumulation point in the so-called \emph{low energy range} $[c_0,2c_0)$. If in addition $V$ is reflection symmetric and $f$ satisfies an Ambrosetti-Rabinowitz-like condition, an earlier separation theorem of ours \cite{MR2488693} yields, together with the aforementioned conclusion, the existence of a compact set $\Lambda$ of positive solutions at the ground state energy that is isolated in the set of solutions $K$. The latter result is of interest when one considers the existence of so-called \emph{multibump solutions}, which are nonlinear superpositions of translates of solutions in the case of a periodic potential $V$.
It is to be expected that such a set $\Lambda$ can be used as a base for nonlinear superposition. This would yield a much weaker condition than that imposed in the seminal article \cite{MR93k:35087} and its follow-up works, where the existence of a \emph{single} isolated solution was required. The present article is structured as follows: In section~\ref{sec:comp-solut} we study the exact decay of solutions at infinity in terms of Green's function of the Schrödinger operator. Section~\ref{sec:real-analyticity} is devoted to the consequences of analyticity of $f$. And last but not least, Section~\ref{sec:appl-peri-potent} treats the consequences for the solution set of \eqref{eq:12} if the potential $V$ is periodic. \subsection{Notation} \label{sec:notation} For a metric space $(X,d)$, $r>0$, and $x\in X$ we denote \begin{align*} B_r(x;X)&:=\{y\in X\mid d(x,y)< r\},\\ \overline{B}_r(x;X)&:=\{y\in X\mid d(x,y)\le r\},\\ S_r(x;X)&:=\{y\in X\mid d(x,y)= r\}. \end{align*} We also set $B_rX:= B_r(0;X)$ if $X$ is a normed space and use analogous notation for the closed ball and the sphere. If $X$ is clear from context we may omit it in the notation. For $k\in\mathbb{N}_0$ denote by $C^k_\mathrm{b}(\mathbb{R}^N)$ the space of real valued functions of class $C^k$ on $\mathbb{R}^N$ such that all derivatives up to order $k$ are bounded. We set $C_\mathrm{b}(\mathbb{R}^N):= C^0_\mathrm{b}(\mathbb{R}^N)$. \section{Exact Decay of Solutions} \label{sec:comp-solut} This section is concerned with comparing the decay of a solution to \eqref{eq:12} with Green's function of the Schrödinger operator $T:= -\Delta+V$. We show that if the nonlinearity $f$ is well behaved at $0$ then a solution decays at least as fast as Green's function. If in addition the solution is positive then it decays at most as fast as Green's function. Suppose that $N\in\mathbb{N}$. 
The principal regularity and positivity requirements for the potential we use are contained in the following condition: \begin{enumerate}[label=\textup{(V\arabic*)},series=vcond] \item \label{item:1} $V\colon\mathbb{R}^N\to\mathbb{R}$ is Hölder continuous and bounded, and $\mu_0:=\inf V>0$. \end{enumerate} We will need to know \emph{a priori} that weak solutions of \eqref{eq:12} and related problems decay exponentially at infinity. For easier reference we include a pertinent result here, even though this fact is in principle well known. \begin{lemma} \th\label{lem:a-priori-decay} Assume \ref{item:1}. Suppose that $f\in C(\mathbb{R})$ satisfies $f(u)=o(u)$ as $u\to0$ and that $v\in L^\infty(\mathbb{R}^N)$ decays exponentially at infinity. If either $u\in H^1(\mathbb{R}^N)\cap L^\infty(\mathbb{R}^N)$ is a weak solution of $-\Delta u+ Vu=f(u)$ or $u\in H^1(\mathbb{R}^N)$ is a weak solution of $-\Delta u+ Vu=v$ then $u$ is continuous and decays exponentially at infinity. \end{lemma} \begin{proof} In the first case we may alter $f$ outside of the range of $u$ in any way we like. Therefore \cite[Lemma~5.3]{MR2151860} applies and yields, together with standard regularity theory and bootstrap arguments using \emph{a priori} estimates (e.g., \cite{MR737190}, Theorem~9.11 and Lemma~9.16), that $u$ is continuous and decays exponentially at infinity. For the second case suppose that $\abs{v(x)}\le C_1\mathrm{e}^{-C_2\abs{x}}$ for all $x\in\mathbb{R}^N$, with constants $C_1,C_2>0$. For $r>0$ denote \begin{equation*} Q(r):=\int_{\mathbb{R}^N\backslash \overline{B}_r}(\abs{\nabla u}^2+Vu^2). \end{equation*} We claim that $Q(r)$ decays exponentially at infinity. Suppose by contradiction that this is not the case. Then \begin{equation}\label{eq:13} \inf_{r\ge0}\mathrm{e}^{C_2r}Q(r)>0. \end{equation} For $r\ge0$ define the cutoff function $\zeta_r$ as in the proof of \cite[Lemma~5.3]{MR2151860} and set $u_r(x):= \zeta(\abs{x}-r)u(x)$. Let $\delta:=\mu_0$.
It follows from Hölder's inequality and \eqref{eq:13} that \begin{multline*} \xabs{\int_{\mathbb{R}^N}\nabla u\nabla u_r+Vuu_r} =\xabs{\int_{\mathbb{R}^N}vu_r} \le C_1\int_{\mathbb{R}^N\backslash \overline{B}_r}\mathrm{e}^{-C_2\abs{x}}\abs{u(x)}\,\rmd x\\ \le C \sqrt{Q(r)}\mathrm{e}^{-C_2r} \le C Q(r)\mathrm{e}^{-C_2r/2} \le \frac{\delta}{2} Q(r) \end{multline*} for $r$ large enough. This replaces Equation~5.3 of \cite{MR2151860}. As in that proof it follows that \begin{equation*} \frac{Q(r+1)}{Q(r)}\le \frac{1+\delta}{1+2\delta}<1 \end{equation*} for large $r$, so $Q(r)$ decays exponentially at infinity. Again using standard regularity estimates we obtain that $u$ is continuous and decays exponentially at infinity. \end{proof} By \cite[Theorem~4.3.3(iii)]{MR1326606} the operator $T$ is subcritical, according to the definition in Sect.~4.3 \emph{loc.\ cit.} Hence $T$ possesses a Green's function $G(x,y)$, i.e., a function that satisfies \begin{equation*} TG(x,y)=\delta(x-y). \end{equation*} Moreover, $G$ is positive. Denote $H(x):=G(x,0)$ for $x\neq0$. We collect some properties of $H$ needed later on: \begin{lemma}\th\label{lem:properties-h} The function $H\colon\mathbb{R}^N\backslash\{0\}\to\mathbb{R}$ satisfies: \begin{enumerate}[label=\textup{(\alph*)}] \item \label{item:5} $TH\equiv0$; \item \label{item:6} $H\in C^2(\mathbb{R}^N\backslash\{0\})$; \item \label{item:9} $H>0$; \item \label{item:7} $\liminf_{x\to0}H(x)>0$; \item \label{item:8} $\limsup_{\abs{x}\to\infty} \mathrm{e}^{\sqrt{\mu_0}\, \abs{x}}H(x)<\infty$. \end{enumerate} \end{lemma} \begin{proof} \ref{item:5} and \ref{item:6} are proved in \cite[Theorem~4.2.5(iii)]{MR1326606}, \ref{item:9} is a consequence of $G>0$, and \ref{item:7} is given by \cite[Theorem~4.2.8]{MR1326606}. In order to prove \ref{item:8}, consider the function $\psi\colon\mathbb{R}^N\to\mathbb{R}$ given by $\psi(x):=\mathrm{e}^{-\sqrt{\mu_0}\,\abs{x}}$. Then $\psi$ is a supersolution for $T$ on $\mathbb{R}^N\backslash\{0\}$. 
Take $\alpha>0$ large enough such that $\alpha\psi\ge H$ on $S_1$. Denote Green's function for $T$ on $B_k$ with Dirichlet boundary conditions by $\widetilde{G}_k$, for $k\in\mathbb{N}$, and set $\widetilde{H}_k:=\widetilde{G}_k(\cdot,0)$. Then $T\widetilde{H}_k\equiv0$ on $B_k\backslash\{0\}$ and $\lim_{\abs{x}\to k}\widetilde{H}_k(x)=0$, by \cite[Theorem~7.3.2]{MR1326606}. Moreover, \cite[Theorem~4.3.7]{MR1326606} implies that $\widetilde{H}_k(x)\to H(x)$ as $k\to\infty$, and $(\widetilde{H}_k)$ is an increasing sequence. It follows that $\widetilde{H}_k\le\alpha\psi$ on $S_1$ and hence, by the maximum principle, that $\widetilde{H}_k\le\alpha\psi$ in $\overline{B}_k\backslash B_1$ for all $k$. Therefore, $H\le\alpha\psi$ on $\mathbb{R}^N\backslash B_1$ and the claim follows. \end{proof} We now state the main result of this section: \begin{theorem} \th\label{thm:decay-at-infty} \begin{enumerate}[label=\textup{(\alph*)}] \item \label{item:12} Suppose that $w\in L^\infty(\mathbb{R}^N)$ satisfies \begin{equation*} \abs{w(x)}\le C_1\mathrm{e}^{-C_2\abs{x}} \end{equation*} for $x\in\mathbb{R}^N$, with some fixed $C_1,C_2>0$. If $u\in H^1(\mathbb{R}^N)$ is a weak solution of \begin{equation*} -\Delta u+(V-w)u=0 \end{equation*} then there exists, for every $\delta>0$, some $R_0>0$, depending only on $\delta$, $N$, $\inf V$, $\norm{V}_{\infty}$, $C_1$ and $C_2$, such that for every $R\ge R_0$ \begin{equation}\label{eq:18} \limsup_{\abs{x}\to\infty}\frac{\abs{u(x)}}{H(x)} \le(1+\delta)^2 \max_{x\in S_R}\frac{\abs{u(x)}}{H(x)}. \end{equation} In particular, \begin{equation}\label{eq:14} \limsup_{\abs{x}\to\infty} \mathrm{e}^{\sqrt{\mu_0}\,\abs{x}}\abs{u(x)}<\infty. 
\end{equation} \item \label{item:13} If in addition to the hypotheses of \ref{item:12} $u$ is positive then there exists, for every $\delta>0$, some $R_0>0$, depending only on $\delta$, $N$, $\inf V$, $\norm{V}_{\infty}$, $C_1$ and $C_2$, such that for every $R\ge R_0$ \begin{equation}\label{eq:19} \liminf_{\abs{x}\to\infty}\frac{u(x)}{H(x)} \ge(1+\delta)^{-2} \min_{x\in S_R}\frac{u(x)}{H(x)}. \end{equation} \item \label{item:14} If $v\in L^\infty(\mathbb{R}^N)$ satisfies that $v/H$ decays exponentially at $\infty$ and if $u\in H^1(\mathbb{R}^N)$ is a weak solution of \begin{equation*} -\Delta u+ Vu=v \end{equation*} then there exist continuous functions $u_1$ and $u_2$ such that $u=u_1-u_2$, $Tu_1=v^+$, $Tu_2=v^-$, and such that for each $i=1,2$ either $u_i\equiv0$, or $u_i>0$ and \begin{equation} \label{eq:21} 0 <\liminf_{\abs{x}\to\infty}\frac{u_i(x)}{H(x)} \le \limsup_{\abs{x}\to\infty}\frac{u_i(x)}{H(x)} <\infty. \end{equation} In particular, \begin{equation*} \limsup_{\abs{x}\to\infty}\frac{\abs{u(x)}}{H(x)} <\infty. \end{equation*} \end{enumerate} \end{theorem} \begin{proof} \noindent\textbf{\ref{item:12}} Standard \emph{a priori} estimates, as mentioned in the proof of \th\ref{lem:a-priori-decay}, yield that $u\in L^\infty(\mathbb{R}^N)$. Hence also $wu$ has exponential decay at infinity and \th\ref{lem:a-priori-decay} yields in particular that \begin{equation} \label{eq:23} u(x)\to0\qquad\text{as }\abs{x}\to\infty. \end{equation} We take $R>1$ large enough such that \begin{equation*} \sup\abs{w}\le \varepsilon_0:=\frac{\mu_0}{2} \end{equation*} in $\mathbb{R}^N\backslash \overline{B}_{R-1}$ and define $\eta\colon\mathbb{R}^N\to[0,1]$ by \begin{equation*} \eta(x):= \begin{cases} 0,&\qquad \abs{x}\le R-1,\\ \abs{x}-R+1,&\qquad R-1\le\abs{x}\le R,\\ 1,&\qquad \abs{x}\ge R. \end{cases} \end{equation*} Then $\inf (V-\eta w)\ge\varepsilon_0>0$. Hence also $T_1:= -\Delta+(V-\eta w)$ is subcritical on $\mathbb{R}^N$ and possesses a positive Green's function $G_1$. 
Since we are not assuming $w$ to be locally Hölder continuous, here we refer to \cite{MR890161} and \cite{MR874676} for the existence of the positive Green's function. Set $H_1(x):=G_1(x,0)$ for $x\neq0$. In the notation of \cite{MR1482989} use our $\varepsilon_0$ and set $r_0:=1/4$, $c_0:=1$, and $p:=2N$. Note that the bottom of the spectrum of $T$ and $T_1$ as operators in $L^2$ with domain $H^2$ is greater than or equal to $\varepsilon_0$. Denote \begin{equation*} \widetilde{C}:=\sup\biglr\{\}{\,\norm{v}_{L^N(\overline{B}_{r_0})}\bigm| v\in L^\infty(\overline{B}_{r_0}),\ \norm{v}_{L^\infty(\overline{B}_{r_0})}=1\,} \end{equation*} and set $\theta:=1+\widetilde{C}(C_1+\norm{V}_\infty)$. Define the decreasing function \begin{equation*} \Psi_R(s):= \begin{cases} C_1\mathrm{e}^{-C_2(R-1)}&\qquad 0\le s\le R\\ C_1\mathrm{e}^{-C_2(s-1)}&\qquad s\ge R \end{cases} \end{equation*} so $\norm{\eta w} _{L^\infty(\overline{B}_{r_0}(y))} \le \Psi_R(\abs{y})$ for $y\in\mathbb{R}^N$. Using these constants, the function $\Psi_R$ and the fact that \begin{equation*} \lim_{R\to\infty}\int_0^\infty\Psi_R=0, \end{equation*} \cite[Theorem~1]{MR1482989} yields \begin{equation} \label{eq:2} \frac1{1+\delta} H(x)\le H_1(x)\le (1+\delta)H(x) \end{equation} for $\abs{x}\ge r_0$ if $R$ is chosen large enough, only depending on $\delta$, $N$, $\inf(V)$, $\norm{V}_\infty$, $C_1$ and $C_2$. The function $H_1$ is continuous in $\mathbb{R}^N\backslash\{0\}$ and satisfies $T_1H_1\equiv0$ in $\mathbb{R}^N\backslash\{0\}$ in the weak sense. Moreover, $T_1u\equiv 0$ on $\mathbb{R}^N\backslash \overline{B}_R$ in the weak sense. Set \begin{equation*} C_3:=(1+\delta)^2 \max_{x\in S_R}\frac{\abs{u(x)}}{H(x)} \end{equation*} Then we have by \eqref{eq:2} \begin{equation*} \abs{u} \le\frac{C_3}{(1+\delta)^2}H \le \frac{C_3}{1+\delta}H_1 \qquad\text{on } S_R. \end{equation*} Note that $H_1(x)\to0$ as $\abs{x}\to\infty$, by \th\ref{lem:properties-h}\ref{item:8} and \eqref{eq:2}. 
Hence \eqref{eq:23}, the maximum principle for weak supersolutions \cite[Theorem~8.1]{MR737190} and again \eqref{eq:2} yield \begin{equation*} \abs{u}\le\frac{C_3}{1+\delta}H_1\le C_3 H \qquad\text{on }\mathbb{R}^N\backslash B_R, \end{equation*} that is, \eqref{eq:18}. Together with \th\ref{lem:properties-h}\ref{item:8} we obtain \eqref{eq:14}. \noindent\textbf{\ref{item:13}} Define \begin{equation*} C_4:=(1+\delta)^2 \max_{x\in S_R}\frac{H(x)}{u(x)}. \end{equation*} Then \eqref{eq:2} implies that \begin{equation*} H_1\le(1+\delta)H\le\frac{C_4}{1+\delta}u \qquad\text{on }S_R. \end{equation*} The maximum principle yields \begin{equation*} H_1\le\frac{C_4}{1+\delta}u \qquad\text{on }\mathbb{R}^N\backslash B_R, \end{equation*} so \eqref{eq:2} implies \eqref{eq:19}. \noindent\textbf{\ref{item:14}} The operator $T\colon H^2(\mathbb{R}^N)\to L^2(\mathbb{R}^N)$ has a bounded inverse by \ref{item:1}. Denote $v^+:=\max\{0,v\}$ and set $v^-:= v^+-v$. Define $u_1:= T^{-1}v^+\in H^1(\mathbb{R}^N)$ and $u_2:= T^{-1}v^-\in H^1(\mathbb{R}^N)$. Again we find by \th\ref{lem:a-priori-decay} that \begin{equation*} u_i(x)\to0\qquad\text{as }\abs{x}\to\infty,\ i=1,2. \end{equation*} If $u_1$ is not the zero function then it is positive, by the strong maximum principle. Using \begin{equation*} \left.\begin{aligned} Tu_1&\ge 0\\ TH&=0 \end{aligned}\quad\right\} \qquad\text{in }\mathbb{R}^N\backslash\{0\} \end{equation*} the maximum principle yields \begin{equation*} 0 <\liminf_{\abs{x}\to\infty}\frac{u_1(x)}{H(x)}. \end{equation*} Hence also $v^+/u_1$ decays exponentially at infinity, and \ref{item:12} implies that \begin{equation*} \limsup_{\abs{x}\to\infty}\frac{u_1(x)}{H(x)} <\infty. \end{equation*} This yields \eqref{eq:21} for $i=1$. The case $i=2$ follows analogously. \end{proof} For the semilinear problem \eqref{eq:12} we obtain: \begin{corollary}\th\label{cor:compare-solutions} Assume \ref{item:1}.
Suppose that $f\colon\mathbb{R}\to\mathbb{R}$ is continuous and that there are $C,M>0$ and $q>1$ such that $\abs{f(u)}\le C\abs{u}^q$ for $\abs{u}\le M$. If $u$ is a weak solution of \eqref{eq:12} then $u$ has the properties claimed in \th\ref{thm:decay-at-infty}, \ref{item:12} and \ref{item:14}. If in addition $u$ is positive, then $u$ has the property claimed in \th\ref{thm:decay-at-infty}\ref{item:13}. \end{corollary} \begin{proof} By our hypotheses on $f$ \th\ref{lem:a-priori-decay} implies exponential decay of $u$ at infinity. Hence also $w:= f(u)/u$ decays exponentially at infinity. Since $u$ is a solution of $-\Delta u+(V-w)u=0$ \th\ref{thm:decay-at-infty}\ref{item:12} applies. Therefore also $f(u)/H$ has exponential decay at infinity. These facts yield the claims. \end{proof} \section{Real Analyticity} \label{sec:real-analyticity} Using the precise decay results of the previous section we construct a weighted space $Y$ of continuous functions that contains all solutions of \eqref{eq:12} and is such that the positive solutions are contained in the interior of the positive cone of $Y$. Assuming analyticity of the nonlinearity (on $(0,\infty)$) with appropriate growth bounds we obtain a setting where the (positive) solution set is locally a finite dimensional analytic set and hence locally path connected. Denote $2^*:= \infty$ if $N=1$ or $2$, $2^*:= 2N/(N-2)$ if $N\ge3$ and consider the following conditions on $f$: \begin{enumerate}[label=\textup{(F\arabic*)},series=fcond] \item \label{item:2} $f\in C^1(\mathbb{R})$, $f(0)=f'(0)=0$; \item \label{item:4} $f$ is analytic in $\mathbb{R}$ and for every $M>0$ there are numbers $a_k\in\mathbb{R}$ ($k\in\mathbb{N}_0$) such that \begin{equation*} \limsup_{k\to\infty}\frac{a_k}{k!}<\infty \end{equation*} and \begin{equation*} \abs{f^{(k)}(u)}\le a_k\abs{u}^{\max\{0,2-k\}} \end{equation*} for $\abs{u}\le M$ and $k\in\mathbb{N}_0$. 
\item \label{item:3} $f$ is analytic in $\mathbb{R}^+$ and for every $M>0$ there are numbers $p\in(1,2^*-1)$ and $a_k\in\mathbb{R}$ ($k\in\mathbb{N}_0$) such that \begin{equation*} \limsup_{k\to\infty}\frac{a_k}{k!}<\infty \end{equation*} and \begin{equation*} \abs{f^{(k)}(u)}\le a_k\abs{u}^{p-k} \end{equation*} for $u\in(0,M]$ and $k\in\mathbb{N}_0$; in this case we are only interested in positive solutions of \eqref{eq:12} and may take $f$ to be odd, for notational convenience. \item \label{item:20} There are $C>0$ and $\tilde q\in(1,2^*-1)$ such that $\abs{f(u)}\le C(1+\abs{u}^{\tilde q})$ for all $u\in\mathbb{R}$. \end{enumerate} To give a trivial example of a function satisfying these conditions, take $p$ as in condition \ref{item:3}. Then $f(u):=\abs{u}^{p-1}u$ satisfies conditions \ref{item:2}, \ref{item:3} and \ref{item:20}. If either \ref{item:4} or \ref{item:3} holds true, then there is $q>1$ such that for every $M>0$ there are $a_0,a_1\in\mathbb{R}$ such that \begin{equation} \label{eq:4} \abs{f(u)}\le a_0\abs{u}^q\quad\text{and}\quad \abs{f'(u)}\le a_1\abs{u}^{q-1}\qquad \text{if }\abs{u}\le M. \end{equation} To see this take $q:=2$ if \ref{item:4} holds true, take $q:= p$ if \ref{item:3} holds true, and use the respective numbers $a_0$ and $a_1$ given for $M$ by these hypotheses. Denote by $K$ the set of non-zero solutions of \eqref{eq:12} and set $K_+:=\{u\in K\mid u\ge 0\}$. Denote by $\mathcal{F}$ the superposition operator induced by $f$. Then every $u\in K$ satisfies $Tu=\mathcal{F}(u)$. Our goal is to produce a Banach space $Y$ such that \begin{equation*} \begin{aligned} \Gamma\colon Y&\to Y\\ u&\mapsto u- T^{-1}\mathcal{F}(u) \end{aligned} \end{equation*} is well defined and such that $K\subseteq Y$ is the zero set of $\Gamma$. Moreover, we need $\Gamma$ to be a Fredholm map, analytic in a neighborhood of $K$ if \ref{item:4} holds true, and analytic in a neighborhood of $K_+$ if \ref{item:3} holds true. 
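Before constructing $Y$, it may help to record the elementary mechanism behind this setup (a standard functional-analytic remark, not specific to this paper): for $u\in Y$,
\begin{equation*}
u \text{ solves } \eqref{eq:12}
\quad\Longleftrightarrow\quad Tu=\mathcal{F}(u)
\quad\Longleftrightarrow\quad \Gamma(u)=u-T^{-1}\mathcal{F}(u)=0,
\end{equation*}
and whenever $T^{-1}\mathcal{F}$ is continuously differentiable with compact derivative, $D\Gamma(u)=\mathrm{Id}-D(T^{-1}\mathcal{F})(u)$ is a compact perturbation of the identity and hence, by Riesz theory, a Fredholm operator of index $0$.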
In the latter case, because $f$ is not analytic at $0$, we need that $K_+$ belongs to the interior of the positive cone of $Y$. Consider the function $H$ defined in Section~\ref{sec:comp-solut}. Pick a number $b_0\in(0,\infty)$ such that $b_0\le\liminf_{x\to0}H(x)$. By \th\ref{lem:properties-h} the function $\varphi\colon\mathbb{R}^N\to\mathbb{R}$ defined by \begin{equation*} \varphi(x):=\min\{b_0,H(x)\} \end{equation*} is continuous, positive, and has the same decay at infinity as $H$. Define the spaces \begin{equation*} X_\alpha:=\bigglr\{\}{\,u\in C(\mathbb{R}^N)\biggm| \norm{u}_{X_\alpha} := \sup_{x\in\mathbb{R}^N}\xabs{\frac{u(x)}{\varphi(x)^\alpha}}<\infty\,} \end{equation*} for $\alpha>0$. Together with its weighted norm $\norm{\,\cdot\,}_{X_\alpha}$, $X_\alpha$ is a Banach space. Set \begin{equation*} Y:=X_1\cap C^1_{\mathrm{b}}(\mathbb{R}^N)\cap H^1(\mathbb{R}^N) \end{equation*} and $\norm{\cdot}_Y:=\norm{\cdot}_{X_1} + \norm{\cdot}_{C^1_{\mathrm{b}}} + \norm{\cdot}_{H^1}$. By \eqref{eq:4} and \th\ref{cor:compare-solutions}, $K\subseteq Y$. We prove the basic properties of the space $Y$ and related mapping properties of the maps $T$ and $\mathcal{F}$: \begin{lemma} \th\label{lem:properties-y} Suppose that \ref{item:1}, \ref{item:2} and one of \ref{item:4} or \ref{item:3} are satisfied. Then the following hold true: \begin{enumerate}[label=\textup{(\alph*)}] \item \label{item:15} $T^{-1}\colon X_\alpha\to Y$ is well defined and continuous if $\alpha>1$. \item \label{item:16} Let $q$ be given by \eqref{eq:4}. If $\alpha<\min\{2,q\}$ then $\mathcal{F}(Y)\subseteq X_\alpha$, and $\mathcal{F}\colon Y\to X_\alpha$ is completely continuous, i.e., it is continuous and maps bounded sets into relatively compact sets. Moreover, it is continuously differentiable in $Y$. \item \label{item:23} The set $K_+$ is contained in the interior of the positive cone of $Y$.
\item \label{item:17} If \ref{item:20} is satisfied then on $K$ the $H^1$-topology and the $Y$-topology coincide. \end{enumerate} \end{lemma} \begin{proof} \textbf{\ref{item:15}:} For any $s\ge2$ the linear mapping $T^{-1}\colon L^s(\mathbb{R}^N)\to W^{2,s}(\mathbb{R}^N)$ is well defined and continuous because of \ref{item:1}. If $v\in X_\alpha\subseteq L^s(\mathbb{R}^N)$ then by the definition of $X_\alpha$ and by \th\ref{lem:properties-h}\ref{item:8} the function $v/H$ decays exponentially at infinity. For $u:= T^{-1}v$ it follows from \th\ref{thm:decay-at-infty}\ref{item:14} that $u\in X_1$. Therefore \begin{equation*} \begin{tikzpicture}[commutative diagrams/every diagram] \matrix[matrix of math nodes, name=m, commutative diagrams/every cell] { X_1 & L^s \\ X_\alpha & L^s \\}; \path[commutative diagrams/.cd, every arrow, every label] (m-1-1) edge[commutative diagrams/hook] (m-1-2) (m-2-1) edge[commutative diagrams/hook] (m-2-2) edge node {$T^{-1}$} (m-1-1) (m-2-2) edge node[swap] {$T^{-1}$} (m-1-2); \end{tikzpicture} \end{equation*} is a commuting diagram of linear maps between Banach spaces, where the inclusions and the map $T^{-1}\colon L^s\to L^s$ are continuous. By the closed graph theorem also $T^{-1}\colon X_\alpha\to X_1$ is continuous. Moreover, if $s>N$ we have continuous maps \begin{equation*} X_\alpha\hookrightarrow L^s\xrightarrow{T^{-1}} W^{2,s}\hookrightarrow C^1_{\mathrm{b}} \end{equation*} so $T^{-1}\colon X_\alpha \to C^1_{\mathrm{b}}$ is continuous. Similarly, \begin{equation*} X_\alpha\hookrightarrow L^2\xrightarrow{T^{-1}} H^2\hookrightarrow H^1 \end{equation*} and therefore $T^{-1}\colon X_\alpha\to H^1$ is continuous. All in all we have proved \ref{item:15}. \textbf{\ref{item:16}:} Note that $\mathcal{F}(u)\in X_q \subseteq X_\alpha$ if $u\in X_1$, by \eqref{eq:4}.
To see the continuous differentiability of $\mathcal{F}$ in $Y$, note that $f'$ is locally Hölder continuous in $\mathbb{R}$ with exponent $\beta:=\min\{1,q-1\}$ (Lipschitz continuous if $\beta=1$), as a consequence of \ref{item:4} or \ref{item:3}. In what follows we repeatedly pick arbitrary $u,v,w\in X_1$ and $C>0$ such that $\abs{f'(s)-f'(t)}\le C\abs{s-t}^\beta$ for all $s,t\in \mathbb{R}$ with $\abs{s},\abs{t}\le \norm{u}_\infty+ \norm{v}_\infty$. Define $\mathcal{F}_1$ to be the superposition operator induced by $f'$. First we show that $\mathcal{F}_1(u)\in\mathcal{L}(X_1, X_\alpha)$ as a multiplication operator and that $\mathcal{F}_1\colon X_1\to \mathcal{L}(X_1,X_\alpha)$ is continuous. Pick $a_1$ in \eqref{eq:4} for $M:=\norm{u}_\infty$. Then we find \begin{equation*} \norm{\mathcal{F}_1(u)w}_{X_\alpha} \le a_1\norm{\varphi^{q-\alpha}}_\infty\norm{u}_{X_1}^{q-1}\norm{w}_{X_1} \end{equation*} with $\norm{\varphi^{q-\alpha}}_\infty<\infty$ since $\alpha<q$. Hence $\mathcal{F}_1(u)\in\mathcal{L}(X_1,X_\alpha)$. Similarly, \begin{equation*} \norm{(\mathcal{F}_1(u)-\mathcal{F}_1(v))w}_{X_\alpha} \le C\norm{\varphi^{\beta+1-\alpha}}_\infty\norm{u-v}_{X_1}^\beta\norm{w}_{X_1} \end{equation*} with $\norm{\varphi^{\beta+1-\alpha}}_\infty<\infty$ since $\alpha<\beta+1$. Hence \begin{equation*} \norm{\mathcal{F}_1(u)-\mathcal{F}_1(v)}_{\mathcal{L}(X_1,X_\alpha)} \le C\norm{\varphi^{\beta+1-\alpha}}_\infty\norm{u-v}_{X_1}^\beta \end{equation*} and $\mathcal{F}_1$ is Hölder continuous. For any $x\in\mathbb{R}^N$ and $t\in\mathbb{R}\backslash\{0\}$ there is $\theta_{x,t}\in(-\abs{t},\abs{t})$ such that \begin{multline*} \xabs{\frac{f(u(x)+tv(x))-f(u(x))}{t}-f'(u(x))v(x)} =\xabs{f'(u(x)+\theta_{x,t}v(x))-f'(u(x))}\abs{v(x)}\\ \le C\abs{\theta_{x,t}v(x)}^\beta\abs{v(x)} \le C\abs{v(x)}^{\beta+1}\abs{t}^\beta.
\end{multline*} It follows that \begin{equation*} \xnorm{\frac{\mathcal{F}(u+tv)-\mathcal{F}(u)}{t}-\mathcal{F}_1(u)v}_{X_\alpha} \le C\norm{\varphi^{\beta+1-\alpha}}_\infty\norm{v}_{X_1}^{\beta+1}\abs{t}^\beta \end{equation*} and hence that $\mathcal{F}$ is Gâteaux differentiable in $u$ with derivative $\mathcal{F}_1(u)$. Since $\mathcal{F}_1$ is continuous, $\mathcal{F}$ is continuously Fréchet differentiable as a map $X_1\to X_\alpha$, and thus $Y\hookrightarrow X_1$ implies continuous differentiability of $\mathcal{F}\colon Y\to X_\alpha$. Suppose now that $(u_n)\subseteq Y$ is bounded in $Y$ and hence bounded in $X_1$ and $C^1_{\mathrm{b}}(\mathbb{R}^N)$. Passing to a subsequence we can suppose by the Arzelà-Ascoli theorem that $(u_n)$ converges locally uniformly in $\mathbb{R}^N$ to some $u\in C_\mathrm{b}(\mathbb{R}^N)$. Since $f$ is uniformly continuous on compact intervals, $\mathcal{F}(u_n)$ converges to $\mathcal{F}(u)$ locally uniformly in $\mathbb{R}^N$. There is $C>0$ such that \begin{equation}\label{eq:15} u,u_n\le C\varphi\qquad\text{in }\mathbb{R}^N, \text{ for all }n\in\mathbb{N}. \end{equation} For any $\varepsilon>0$, \eqref{eq:4} and \eqref{eq:15} imply that there are constants $R>0$ and $C>0$ and an index $n_0\in\mathbb{N}$ such that for all $n\ge n_0$ it holds true that \begin{align*} \frac{\abs{\mathcal{F}(u_n)}}{\varphi^\alpha} \le C\varphi^{q-\alpha} &\le\frac{\varepsilon}{3}\qquad\text{in }\mathbb{R}^N\backslash B_R,\\ \frac{\abs{\mathcal{F}(u)}}{\varphi^\alpha} \le C\varphi^{q-\alpha} &\le\frac{\varepsilon}{3}\qquad\text{in }\mathbb{R}^N\backslash B_R,\\ \shortintertext{and} \frac{\abs{\mathcal{F}(u_n)-\mathcal{F}(u)}}{\varphi^\alpha} \le C\norm{\mathcal{F}(u_n)-\mathcal{F}(u)}_\infty &\le\frac{\varepsilon}{3}\qquad\text{in }\overline{B}_R.
\end{align*} It follows for $n\ge n_0$ that \begin{equation*} \norm{\mathcal{F}(u_n)-\mathcal{F}(u)}_{X_\alpha} \le\sup_{\overline{B}_R}\frac{\abs{\mathcal{F}(u_n)-\mathcal{F}(u)}}{\varphi^\alpha} +\sup_{\mathbb{R}^N\backslash B_R}\frac{\abs{\mathcal{F}(u_n)}+\abs{\mathcal{F}(u)}}{\varphi^\alpha} \le\varepsilon \end{equation*} and hence $\mathcal{F}(u_n)\to\mathcal{F}(u)$ in $X_\alpha$. This proves that $\mathcal{F}$ maps bounded sets in $Y$ into relatively compact sets in $X_\alpha$. Since $\mathcal{F}$ is differentiable, it is completely continuous. \textbf{\ref{item:23}:} Fix $u\in K_+$. By \th\ref{cor:compare-solutions} there is $C_1>0$ such that $C_1\varphi\le u$ in $\mathbb{R}^N$. For any $v\in Y$ such that $\norm{u-v}_Y\le C_1/2$ it follows that $\norm{u-v}_{X_1}\le C_1/2$ and hence \begin{equation*} v\ge u-\abs{u-v}\ge\frac{C_1\varphi}{2}>0 \end{equation*} in $\mathbb{R}^N$. Therefore, $u$ lies in the interior of the positive cone of $Y$. \textbf{\ref{item:17}:} It suffices to prove that on $K$ the $H^1$-topology is finer than the $Y$-topology. Therefore, assume that $u_n\to u$ in $K$ with respect to the $H^1$-topology and suppose by contradiction that $u_n\not\to u$ in $Y$. Passing to a subsequence we can assume that there is $\delta>0$ such that \begin{equation} \label{eq:7} \norm{u_n-u}_Y\ge\delta\qquad\text{for all }n\in\mathbb{N}. \end{equation} By \ref{item:20} and standard elliptic regularity estimates, $(u_n)$ is bounded in $C^1_\mathrm{b}(\mathbb{R}^N)$. Moreover, the proof of \cite[Prop.~5.2]{MR2151860} yields, together with regularity estimates, that the functions $u_n$ have a uniform pointwise exponential decay as $\abs{x}\to\infty$. In view of \eqref{eq:4} we obtain $C_1,C_2>0$ such that \begin{equation*} \frac{f(u_n(x))}{u_n(x)}\le C_1\mathrm{e}^{-C_2\abs{x}} \qquad\text{for } x\in\mathbb{R}^N,\ n\in\mathbb{N}. \end{equation*} By \th\ref{thm:decay-at-infty}\ref{item:12} $(u_n)$ also remains bounded in $X_1$ and hence in $Y$. 
Pick some $\alpha\in(1,\min\{2,q\})$. By \ref{item:15} and \ref{item:16}, and passing to a subsequence, $(T^{-1}\mathcal{F}(u_n))$ converges in $Y$. Since $u_n= T^{-1}\mathcal{F}(u_n)$, $u_n\to v$ in $Y$, for some $v\in Y$, and $v=u$ since $Y\hookrightarrow H^1$ and $u_n\to u$ in $H^1$. Hence $u_n\to u$ in $Y$ for this subsequence, contradicting \eqref{eq:7} and thus finishing the proof of \ref{item:17}. \end{proof} If \ref{item:2} is satisfied then $J$, as defined in the introduction, is well defined on $Y$. The main result of this section is the following \begin{theorem} \th\label{thm:analyticity} Assume that \ref{item:1} and \ref{item:2} hold true. \begin{enumerate}[label=\textup{(\alph*)}] \item \label{item:10} If \ref{item:4} is satisfied then $K$ is $Y$-locally path connected, and $J$ is $Y$-locally constant on $K$. \item \label{item:11} If \ref{item:3} is satisfied then $K_+$ is $Y$-locally path connected, and $J$ is $Y$-locally constant on $K_+$. \end{enumerate} \end{theorem} \begin{proof} We prove the two statements in parallel. Fix $\alpha\in(1,\min\{2,q\})$, where $q$ is taken from \eqref{eq:4}. For \ref{item:10} fix $u\in K$, and for \ref{item:11} fix $u\in K_+$. Set $M:=\norm{u}_\infty$ and let the numbers $a_k$ be given by \ref{item:4} or \ref{item:3}, respectively. Denote by $\mathcal{L}^k(X_1,X_\alpha)$ the Banach space of $k$-linear bounded maps from $X_1$ into $X_\alpha$, for $k\in\mathbb{N}_0$ (for $k=0$ we set $\mathcal{L}^k(X_1,X_\alpha):= X_\alpha$). For $k=0$ and $k=1$ we already know that $f^{(k)}(u)$ generates an element of $\mathcal{L}^k(X_1,X_\alpha)$ by multiplication by \th\ref{lem:properties-y}\ref{item:16}. 
We claim that \begin{gather} \label{eq:11} \parbox{.8\linewidth}{$f^{(k)}(u)$ generates an element $A_k$ of $\mathcal{L}^k(X_1,X_\alpha)$ by multiplication, for every $k\in\mathbb{N}_0$,}\\ \label{eq:26} r_1:=\xlr(){\limsup_{k\to\infty}\norm{A_k}_{\mathcal{L}^k(X_1,X_\alpha)}^{1/k}}^{-1} >0,\\ \shortintertext{and} \exists r_2\in(0,r_1]\ \forall h\in B_{r_2}X_1\ \forall x\in\mathbb{R}^N\colon f(u(x)+h(x))=\sum_{k=0}^\infty\frac{f^{(k)}(u(x))}{k!}h(x)^k.\label{eq:8} \end{gather} To prove the claims in case \ref{item:10}, denote by $r_0$ the convergence radius of the power series $\sum_0^\infty\frac{a_k}{k!}z^k$. Consider $k\in\mathbb{N}$, $k\ge2$. Taking into account that $\alpha<2$ we obtain from \ref{item:4} that \begin{equation*} \xnorm{\frac{f^{(k)}(u)}{k!}h^k}_{X_\alpha} \le\frac{a_k}{k!} \sup_{x\in\mathbb{R}^N} \xabs{\frac{h(x)^k}{\varphi(x)^\alpha}} \le \frac{a_k}{k!}\norm{\varphi}_\infty^{k-\alpha} \norm{h}_{X_1}^k. \end{equation*} Hence \eqref{eq:11} is true, with \begin{equation*} \norm{A_k}_{\mathcal{L}^k(X_1,X_\alpha)} \le \frac{a_k}{k!}\norm{\varphi}_\infty^{k-\alpha}. \end{equation*} Again by \ref{item:4}, \eqref{eq:26} is satisfied, and \begin{equation*} r_2:=\frac{r_0}{\norm{\varphi}_\infty}\le r_1. \end{equation*} Suppose now that $h\in B_{r_2}X_1$ and $x\in\mathbb{R}^N$. Then $u(x)\in[-M,M]$ and hence by \ref{item:4} \begin{equation*} \xlr(){\limsup_{k\to\infty}\fracwithdelims\lvert\rvert{f^{(k)}(u(x))}{k!}}^{-1} \ge r_0. \end{equation*} Moreover, $\abs{h(x)}< r_2\varphi(x)\le r_0$. Since $f$ is analytic, \eqref{eq:8} follows. To prove the claims in case \ref{item:11}, denote again by $r_0$ the convergence radius of the power series $\sum_0^\infty\frac{a_k}{k!}z^k$. By \th\ref{cor:compare-solutions} there are $C_1,C_2>0$ such that \begin{equation} \label{eq:6} C_1\varphi\le u\le C_2\varphi. \end{equation} Suppose first that $k\in\mathbb{N}$, $2\le k\le q$.
Taking into account that $\alpha<q$ we obtain from \ref{item:3} \begin{equation*} \xnorm{\frac{f^{(k)}(u)}{k!}h^k}_{X_\alpha} \le\frac{a_k}{k!} \sup_{x\in\mathbb{R}^N} \abs{u(x)}^{q-k} \xabs{\frac{h(x)^k}{\varphi(x)^\alpha}} \le \frac{a_k}{k!}\norm{\varphi^{q-\alpha}}_\infty C_2^{q-k}\norm{h}_{X_1}^k. \end{equation*} If $k>q$ then we find \begin{equation*} \xnorm{\frac{f^{(k)}(u)}{k!}h^k}_{X_\alpha} \le\frac{a_k}{k!} \sup_{x\in\mathbb{R}^N} \abs{u(x)}^{q-k} \xabs{\frac{h(x)^k}{\varphi(x)^\alpha}} \le \frac{a_k}{k!}\norm{\varphi^{q-\alpha}}_\infty C_1^{q-k}\norm{h}_{X_1}^k. \end{equation*} Hence \eqref{eq:11} is true, with \begin{equation*} \norm{A_k}_{\mathcal{L}^k(X_1,X_\alpha)}\le \frac{a_k}{k!}\norm{\varphi^{q-\alpha}}_\infty C_1^{q-k} \end{equation*} for $k>q$. Again by \ref{item:3}, \eqref{eq:26} is satisfied, and \begin{equation*} r_2:= C_1\min\xlr\{\}{r_0,\frac12}\le r_1. \end{equation*} Suppose now that $h\in B_{r_2}X_1$ and $x\in\mathbb{R}^N$. Then $u(x)\in(0,M]$ and hence by \ref{item:3} \begin{equation*} \xlr(){\limsup_{k\to\infty}\fracwithdelims\lvert\rvert{f^{(k)}(u(x))}{k!}}^{-1} \ge r_0u(x). \end{equation*} Moreover, $\abs{h(x)}< r_2\varphi(x)\le C_1r_0\varphi(x)\le r_0u(x)$, by \eqref{eq:6}, and hence $u(x)+h(x)\ge (C_1-r_2)\varphi(x)>0$. Since $f$ is analytic in $(0,\infty)$, \eqref{eq:8} follows. For any $h\in X_1$ such that $\norm{h}_{X_1}<r_2$ we obtain from \eqref{eq:26} and $r_2\le r_1$ that \begin{equation} \label{eq:16} \sum_{k=0}^\infty A_k[h^k]\qquad\text{converges in $X_\alpha$.} \end{equation} Note that $X_\alpha$ embeds continuously in $C_\mathrm{b}(\mathbb{R}^N)$ and that therefore the evaluation $E_x$ at a point $x\in\mathbb{R}^N$ is a bounded linear operator on $X_\alpha$.
Hence for every $x\in\mathbb{R}^N$ \begin{align*} \mathcal{F}(u+h)(x) &=f(u(x)+h(x))\\ &=\sum_{k=0}^\infty\frac{f^{(k)}(u(x))}{k!}h(x)^k&&\text{by \eqref{eq:8}}\\ &=\sum_{k=0}^\infty E_x\xlr[]{A_k[h^k]}&&\text{by \eqref{eq:11}}\\ &=E_x\xlr[]{\sum_{k=0}^\infty A_k[h^k]}&&\text{by \eqref{eq:16} and }E_x\in\mathcal{L}(X_\alpha,\mathbb{R}) \end{align*} and therefore \begin{equation} \mathcal{F}(u+h)=\sum_{k=0}^\infty A_k[h^k], \qquad \text{for all } h\in B_{r_2}X_1. \end{equation} By \cite[Theorem~6.2]{MR0062947} the map $\mathcal{F}$ is analytic in $B_{r_2}X_1$. Since $u\in K_{(+)}$ was arbitrary, $\mathcal{F}\colon X_1\to X_\alpha$ is analytic in a neighborhood of $K_{(+)}$. And since $Y\hookrightarrow X_1$ and bounded linear operators are analytic, also $\mathcal{F}\colon Y\to X_\alpha$ is analytic in a neighborhood of $K_{(+)}$, c.f.~\cite[Theorem~7.3]{MR0313811}. From the results above we conclude that $\Gamma\colon Y\to Y$ is analytic in a neighborhood of $K_{(+)}$. Moreover, by \th\ref{lem:properties-y}\ref{item:15} $\Gamma$ is continuously differentiable in $Y$, and by \th\ref{lem:properties-y}\ref{item:16} and \cite[Proposition~8.2]{MR787404}, for every $v\in Y$ the operator $\mathcal{F}'(v)\in\mathcal{L}(Y,X_\alpha)$ is compact. Hence for every $v\in Y$ the operator $\Gamma'(v)$ is of the form identity minus compact and thus a Fredholm operator of index $0$. In short, one calls the map $\Gamma$ a Fredholm map of index $0$. Recall that $K=\Gamma^{-1}(0)$. By \th\ref{lem:properties-y}\ref{item:23} $K_+$ is the set of zeros of $\Gamma$ in an open neighborhood of $u$. In any case, the implicit function theorem shows that there are an open neighborhood $U$ of $u$ in $Y$ and a $C^\infty$-manifold $M\subseteq Y$ of finite dimension $\dim\mathcal{N}(\Gamma'(u))$ such that $K\cap U\subseteq M$. 
In fact, by \cite[Theorem~7.5]{MR0313811} (see also Corollary~7.3 \emph{loc.\ cit.}), $M$ is the graph of an analytic map defined on a neighborhood of $u$ in $u+\mathcal{N}(\Gamma'(u))$. Moreover, $K\cap U$ is the set of zeros of the restriction of the finite dimensional analytic map $P\Gamma$ to $M$. Here $P\in\mathcal{L}(Y)$ denotes the projection with kernel $\mathcal{R}(\Gamma'(u))$ and range $\mathcal{N}(\Gamma'(u))$. Therefore, \cite[Theorem~2]{MR0173265} applies and yields a triangulation of $K\cap U$ by homeomorphic images of simplices whose interiors are mapped analytically (see also \cite[Satz~4]{MR0159346}). This implies that $K_{(+)}$ is locally path connected by piecewise continuously differentiable arcs. As in the proof of \th\ref{lem:properties-y}, it can be shown that the map $Y\to\mathbb{R}$, $u\mapsto \int F(u)$ is continuously differentiable. Hence also $J$ is continuously differentiable in $Y$ and therefore locally constant on $K_{(+)}$. \end{proof} \section{Applications to Periodic Potentials} \label{sec:appl-peri-potent} Returning to our main motivation, we consider the variational setting in $H^1(\mathbb{R}^N)$. Assuming \ref{item:1}, \ref{item:2}, and \ref{item:20}, the functional $J$ is of class $C^1$ on $H^1(\mathbb{R}^N)$, and solutions of \eqref{eq:12} are in correspondence with critical points of $J$. Setting $c_0:=\inf J(K)$, it is easy to see that $c_0>0$ if $K\neq\varnothing$. To inspect the behavior of $J$ on $K$ we will need the following boundedness condition: \begin{enumerate}[fcond] \item \label{item:21} Every sequence $(u_n)\subseteq K$ such that $\limsup_{n\to\infty} J(u_n)<2c_0$ is bounded. \end{enumerate} It is satisfied, for example, under the classical Ambrosetti-Rabinowitz condition. Alternatively, one could use a set of conditions as in \cite{MR2557725}. For our purpose we also consider the periodicity condition \begin{enumerate}[vcond] \item \label{item:18} $V$ is $1$-periodic in all coordinates.
\end{enumerate} By concentration-compactness arguments, $c_0$ is attained if \ref{item:1}, \ref{item:18}, \ref{item:2}, \ref{item:20} and \ref{item:21} hold true and if $K\neq\varnothing$. The local path connectedness of the set of (positive) solutions of \eqref{eq:12} when $f$ is analytic constrains the possible critical levels of $J$: \begin{theorem} \th\label{thm:discrete-energy-levels} Assume \ref{item:1}, \ref{item:18}, \ref{item:2}, \ref{item:20} and \ref{item:21}. \begin{enumerate}[label=\textup{(\alph*)}] \item \label{item:19} If \ref{item:4} is satisfied, then $J(K)$ has no accumulation point in $[c_0,2c_0)$. \item \label{item:22} If \ref{item:3} is satisfied, then $J(K_+)$ has no accumulation point in $[c_0,2c_0)$. \end{enumerate} \end{theorem} \begin{proof} We only prove \ref{item:19} since the other claim is proved analogously. Assume by contradiction that $J(K)$ contains an accumulation point $c\in[c_0,2c_0)$. We work entirely in the $H^1$-topology, which coincides with the $Y$-topology on $K$ by \th\ref{lem:properties-y}\ref{item:17}. There is a sequence $(u_n)\subseteq K$ such that $J(u_n)\neq c$ and $J(u_n)\to c$. A standard argument using the splitting lemma \cite[Proposition~2.5]{MR2151860} yields, after passing to a subsequence, a translated sequence $(v_n)\subseteq K$ and $v\in K$ such that $v_n\to v$, $J(v_n)=J(u_n)\neq c$ and $J(v)=c$. Since $J$ is locally constant on $K$ by \th\ref{thm:analyticity}\ref{item:10}, we reach a contradiction. \end{proof} We now combine this property with the separation property obtained in \cite{MR2488693} to show the existence of compact isolated sets of solutions. For any $c\in\mathbb{R}$ denote \begin{equation*} K_+^c:=\{u\in K_+\mid J(u)\le c\}.
\end{equation*} The result reads: \begin{corollary} \th\label{cor:isolated-set} In the situation of \th\ref{thm:discrete-energy-levels}\ref{item:22}, assume in addition that $V$ is of class $C^{1,1}$, that $V$ is even in every coordinate $x^i$, and that there is $\theta>2$ such that \begin{equation*} f'(u)u^2\ge(\theta-1)f(u)u\qquad\text{for }u\in\mathbb{R}\backslash\{0\}. \end{equation*} Suppose that for every $u\in K_+^{c_0}$ that is even in $x^i$ for some $i\in\{1,2,\dots,N\}$ it holds true that \begin{equation*} \int_{\mathbb{R}^N}u^2\partial_i^2 V\le0. \end{equation*} (Here we use the weak second derivative of $V$. It exists because $V'$ is Lipschitz continuous.) Then $K_+\neq\varnothing$ and there exists a compact subset $\Lambda$ of $K_+^{c_0}$ that is isolated in $K$, i.e., that satisfies $\dist(\Lambda,K\backslash\Lambda)>0$ in the $H^1$-metric. \end{corollary} \begin{proof} By \cite[Theorem~1.1]{MR2488693} there is a compact subset $\Lambda$ of $K_+^{c_0}$ such that \begin{equation*} K_+^{c_0}=\mathbb{Z}^N\star\Lambda \qquad\text{and}\qquad \Lambda\cap\xlr(){\mathbb{Z}^N\backslash\{0\}}\star\Lambda=\varnothing. \end{equation*} Here $\star$ denotes the action of $\mathbb{Z}^N$ on functions on $\mathbb{R}^N$ by translation: $a\star u:= u(\cdot\,-a)$. It follows easily that \begin{equation} \label{eq:17} \dist(\Lambda,K_+^{c_0}\backslash\Lambda)>0. \end{equation} We claim that $\dist(\Lambda,K\backslash\Lambda)>0$. Recall that the topologies of the space $Y$ from Section~\ref{sec:real-analyticity} and the $H^1$-topology coincide on $K$ and that $K_+$ is contained in the interior of the positive cone of $Y$, by \th\ref{lem:properties-y}\ref{item:17} and \ref{item:23}. Hence $\dist(\Lambda,K\backslash K_+)>0$. It remains to show that $\dist(\Lambda,K_+\backslash\Lambda)>0$. Assume by contradiction that this is not the case. Since $\Lambda$ is compact, there then exist a sequence $(u_n)\subseteq K_+\backslash\Lambda$ and $u\in\Lambda$ such that $u_n\to u$.
Since $c_0$ is not an accumulation point of $J(K_+)$ by \th\ref{thm:discrete-energy-levels}\ref{item:22}, $(u_n)\subseteq K_+^{c_0}$. But this contradicts \eqref{eq:17}, proving the claim. \end{proof} Note that in \cite{MR2488693} we show how to construct concrete examples that satisfy the conditions of \th\ref{cor:isolated-set}. \subsubsection*{Contact information:} \begin{sloppypar} \begin{description} \item[Nils Ackermann:] Instituto de Matem\'{a}ticas, Universidad Nacional Aut\'{o}noma de M\'{e}xico, Circuito Exterior, C.U., 04510 M\'{e}xico D.F., Mexico \item[Norman Dancer:] School of Mathematics and Statistics, University of Sydney, NSW 2006, Australia \end{description} \end{sloppypar} \end{document}
\begin{document} \newcommand\A{\mathbb{A}} \newcommand\G{\mathbb{G}} \newcommand\N{\mathbb{N}} \newcommand\T{\mathbb{T}} \newcommand\sO{\mathcal{O}} \newcommand\sE{{\mathcal{E}}} \newcommand\tE{{\mathbb{E}}} \newcommand\sF{{\mathcal{F}}} \newcommand\sG{{\mathcal{G}}} \newcommand\GL{{\mathrm{GL}}} \newcommand\HH{{\mathrm H}} \newcommand\mM{{\mathrm M}} \newcommand\fS{\mathfrak{S}} \newcommand\fP{\mathfrak{P}} \newcommand\fQ{\mathfrak{Q}} \newcommand\sQ{{\mathcal{Q}}} \newcommand\sP{{\mathbb{P}}} \newcommand\gP{\mathfrak{P}} \newcommand\Gal{{\mathrm {Gal}}} \newcommand\SL{{\mathrm {SL}}} \newcommand\Hom{{\mathrm {Hom}}} \newcommand{\legendre}[2] {\left(\frac{#1}{#2}\right)} \newcommand\iso{{\> \simeq \>}} \newtheorem{thm}{Theorem} \newtheorem{theorem}[thm]{Theorem} \newtheorem{cor}[thm]{Corollary} \newtheorem{conj}[thm]{Conjecture} \newtheorem{prop}[thm]{Proposition} \newtheorem{lemma}[thm]{Lemma} \theoremstyle{definition} \newtheorem{definition}[thm]{Definition} \theoremstyle{remark} \newtheorem{remark}[thm]{Remark} \newtheorem{example}[thm]{Example} \newtheorem{claim}[thm]{Claim} \newtheorem{lem}[thm]{Lemma} \theoremstyle{definition} \newtheorem{dfn}{Definition} \setlength{\abovedisplayskip}{2pt} \setlength{\belowdisplayskip}{2pt} \theoremstyle{remark} \newtheorem*{fact}{Fact} \makeatletter \def\imod#1{\allowbreak\mkern10mu({\operator@font mod}\,\,#1)} \makeatother \title{The Eisenstein cycles as modular symbols} \author{Debargha Banerjee} \address{INDIAN INSTITUTE OF SCIENCE EDUCATION AND RESEARCH, PUNE, INDIA} \author{Lo\"ic Merel} \address{Univ Paris Diderot, Sorbonne Paris Cit\'e, Institut de Math\'ematiques de Jussieu-Paris Rive Gauche, UMR 7586, CNRS, Sorbonne Universit\'es, UPMC Univ Paris 06, F-75013, Paris,
France} \begin{abstract} For any odd integer $N$, we explicitly write down the {\it Eisenstein cycles} in the first homology group of modular curves of level $N$ as linear combinations of Manin symbols. These cycles are, by definition, those over which the integral of every holomorphic differential form vanishes. Our result can be seen as an explicit version of the Manin-Drinfeld Theorem. Our method is to characterize such Eisenstein cycles as eigenvectors for the Hecke operators. We make crucial use of expressions of the Hecke action on modular symbols, and of auxiliary level $2$ structures. \end{abstract} \subjclass[2010]{Primary: 11F67, Secondary: 11F11, 11F20, 11F30} \keywords{Eisenstein series, Modular symbols, Special values of $L$-functions} \maketitle \section{Introduction} \label{Intro} Let $N$ be a positive integer. Consider the principal congruence subgroup $\Gamma(N)$, which acts on the upper half-plane $\mathbb{H}$ by homographies and thus defines the modular curve $Y(N)=\Gamma(N) \backslash \mathbb{H}$, compactified as $X(N)=Y(N) \cup \partial_N$, where $\partial_N=\Gamma(N) \backslash{\bf P}^{1}({\bf Q})$ is the set of cusps. The Manin-Drinfeld theorem \cite{MR0318157, MR0314846} asserts that the class of a divisor of degree zero supported on $\partial_N$ is torsion in the Jacobian $J(N)$ of the modular curve $X(N)$. The order of such a divisor in $J(N)$ can be made explicit by the use of Siegel units \cite{MR648603}. We call {\it Eisenstein cycles} the elements $e\in \HH_1(X(N), \partial_N, {\bf R})$ such that $\int_{e}\omega=0$ for all $\omega\in \HH^0(X(N), \Omega^1)$. They constitute a real vector space $E(N)$. The Manin-Drinfeld theorem can be reformulated by saying that $E(N)$ admits a ${\bf Q}$-rational structure in $\HH_1(X(N), \partial_N, {\bf Q})$. Our aim is to determine explicitly a basis for $E(N)$ in the following sense. Let $\overline{\SL_2({\bf Z}/N{\bf Z})}=\SL_2({\bf Z}/N{\bf Z})/\{\pm 1\}=\pm\Gamma(N)\backslash\SL_2({\bf Z})$.
Let $$\xi : \overline{\SL_2({\bf Z}/N{\bf Z})} \rightarrow \HH_1(X(N),\partial_N,{\bf Z})$$ be the map that takes the class of a matrix $g \in \mathrm{SL}_2({\bf Z})$ to the class in $\HH_1(X(N),\partial_N, {\bf Z})$ of the image in $X(N)$ of the geodesic in $\mathbb{H} \cup \sP^1({\bf Q})$ joining $g.0$ and $g.\infty$. It is surjective \cite{MR0314846}. Hence we call $\xi(g)$ a {\it Manin generator}. The Manin generators satisfy the {\it Manin relations}: for all $g \in \Gamma(N) \backslash \SL_2({\bf Z})$, $\xi(g)+\xi(g S)=0$ and $\xi(g)+\xi(g U)+\xi(g U^2)=0$, where $T=\left(\begin{smallmatrix} 1 & 1\\ 0 & 1\\ \end{smallmatrix}\right)$, $S=\left(\begin{smallmatrix} 0 & -1\\ 1 & 0\\ \end{smallmatrix}\right)$ and $U=ST=\left(\begin{smallmatrix} 0 & -1\\ 1 & 1\\ \end{smallmatrix}\right)$. Hence we wish to express the Eisenstein cycles as rational combinations of Manin generators. Because of the Manin relations, such a problem does not admit a unique solution. To eliminate this ambiguity, we introduce an auxiliary $\Gamma(2)$-structure. Thus we exhibit a distinguished solution to our problem. The auxiliary level $2$-structure is reminiscent of the usual need to rigidify moduli problems by adding a level structure, but we are not able to make an explicit connection. At any rate, our reliance on an auxiliary level $2$-structure limits our ability to treat the cases where $N$ is even. This is why we limit ourselves to the cases where the level $N$ is odd in the present article, except for a brief discussion in Section 8, where we fully explain the role of the $\Gamma(2)$-structure. Before stating our theorem, we need to explain to what extent our result does not seem to follow from the existing literature.
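The Manin relations above mirror the matrix identities $S^2=-{\rm Id}$ and $U^3=-{\rm Id}$ in $\SL_2({\bf Z})$, with $U=ST$. The following sketch is our own sanity check of these identities in exact integer arithmetic; it is not part of the paper's argument:

```python
# Check S^2 = -Id and U^3 = -Id in SL_2(Z), where U = S*T.
T = ((1, 1), (0, 1))
S = ((0, -1), (1, 0))

def mul(A, B):
    # Product of 2x2 integer matrices stored as tuples of rows.
    return tuple(
        tuple(sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2))
        for i in range(2)
    )

U = mul(S, T)
minus_id = ((-1, 0), (0, -1))
assert U == ((0, -1), (1, 1))         # matches the matrix U displayed in the text
assert mul(S, S) == minus_id          # S^2 = -Id, behind the relation xi(g) + xi(gS) = 0
assert mul(U, mul(U, U)) == minus_id  # U^3 = -Id, behind the three-term relation
```

Since $\pm{\rm Id}$ acts trivially, $S$ and $U$ have orders $2$ and $3$ in $\overline{\SL_2({\bf Z}/N{\bf Z})}$, which is the group-theoretic source of the two relations.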
We have a series of group isomorphisms: \[ \HH_1(X(N), \partial_N, {\bf Z}) \simeq \Hom (\HH_1(Y(N),{\bf Z}),{\bf Z})\simeq \Hom (\Gamma(N),{\bf Z}), \] where the first map comes from the intersection pairing and the inverse of the second map associates to $\gamma\in \Gamma(N)$ the class of the image in $Y(N)$ of a path from $z_{0}$ to $\gamma z_{0}$ in $\mathbb{H}$ (where $z_{0}$ is any point in $\mathbb{H}$). Hence finding the desired Eisenstein cycles amounts to making explicit certain group homomorphisms $\Gamma(N)\rightarrow {\bf Z}$. In \cite{MR553997}, Mazur accomplished such a program when ${\Gamma(N)}$ is replaced by the congruence subgroup $\Gamma_{0}(N)$, for $N$ an odd prime. He calls the corresponding group homomorphism $\Gamma_{0}(N)\rightarrow {\bf Z}$ the {\it Dedekind-Rademacher homomorphism}; it is obtained as a period homomorphism for Eisenstein series. The method has been extended by Stevens (\cite{MR670070}) to the more general modular curves $X(N)$ (without any restriction on the parity of $N$). However, the group homomorphisms $\Gamma(N)\rightarrow {\bf R}$ exhibited by Stevens do not directly enable one to write the Eisenstein cycles as linear combinations of Manin symbols. Nevertheless, what Mazur called the Dedekind-Rademacher homomorphism was used by one of us \cite{MR1405312} to find the desired expression of the (unique up to scalar in that case) Eisenstein cycle in the case where ${\Gamma(N)}$ is replaced by the congruence subgroup $\Gamma_{0}(N)$, for $N$ an odd prime. Already in that work, the introduction of an auxiliary $\Gamma(2)$-structure played a key role. By a similar method, one of us \cite{debargha}, and together with Krishnamoorthy \cite{Pacific}, treated the modular curves $X_{0}(p^{2})$ and $X_{0}(pq)$ respectively, for $p$ and $q$ odd prime numbers, at the cost of significant additional technical difficulties. Our method in the current paper does not rely on Dedekind-Rademacher type homomorphisms and yields more general results.
We propose a formula for the Eisenstein elements and verify that they are eigenvectors for the Hecke operators. Our calculations to that extent depend crucially on the formulas for Hecke operators obtained by one of us in \cite{MR1316830}. For $P= \left(\begin{smallmatrix} {\bar x}\\ {\bar y}\\ \end{smallmatrix}\right)\in ({\bf Z}/N{\bf Z})^2$, choose representatives $x$, $y \in {\bf Z}$ of $\bar x$ and $\bar y$ respectively with $x-y$ odd. Define: \[ F(P)= -\frac{1}{4}\left[\frac{\cos (\frac{\pi x}{N})+\cos (\frac{\pi y}{N})}{\cos (\frac{\pi x}{N})- \cos (\frac{\pi y}{N})}\right] \in {\bf R}. \] From the above expression, it is easy to see that for all $P \in ({\bf Z}/N {\bf Z})^2$: $F(P)=F(-P)$ and $F(P)+F(S P)=0$. Let $\overline{F}$ be the function on $({\bf Z}/N {\bf Z})^2 / \{\pm 1\}$ obtained from $F$ by passing to the quotient. Let \[ \sE_P=\sum_{{\gamma} \in \overline{\SL_2({\bf Z}/N{\bf Z})}} {\bar F}(\gamma^{-1} P)\xi(\gamma), \] so that $\sE_P = \sE_{-P}$. We define the retraction $R$ as the map $\HH_1(X(N), \partial_N, {\bf R})\rightarrow \HH_1(X(N), {\bf R})$ characterized by $\int_{R(c)}\omega=\int_{c}\omega$ for all $\omega\in \HH^0(X(N), \Omega^1)$. The kernel of $R$ coincides with $E(N)$. \begin{theorem} \label{Eisensteinclass} For $P \in ({\bf Z}/N{\bf Z})^2$, the modular symbols $\sE_P$ satisfy the following properties: 1) Suppose $l$ is an odd prime number and $l \equiv 1 \pmod N$. Let $T_l$ be the Hecke operator at the prime $l$. The Eisenstein cycles $\sE_P$ satisfy the equality: \[ T_l(\sE_P) = (l + 1)\sE_P . \] 2) The classes of $\sE_P$ lie in the kernel of $R$ and hence they are Eisenstein cycles. \end{theorem} Part 2) of Theorem~\ref{Eisensteinclass} follows easily from part 1). They are proved in Proposition \ref{Heckeq} and Proposition \ref{kernel} respectively. It remains to prove that the classes $\sE_{P}$ span the space $E(N)$ of Eisenstein cycles when $P$ runs through the elements of order $N$ of $({\bf Z}/N{\bf Z})^2$.
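The two symmetries of $F$ noted above can be confirmed numerically. The sketch below is our own illustration (not part of the paper); it implements the representative choice with $x-y$ odd from the definition of $F$ and checks $F(P)=F(-P)$ and $F(P)+F(SP)=0$ for $N=5$:

```python
import math

def F(P, N):
    # Evaluate F at P in (Z/NZ)^2 for odd N, using representatives
    # x, y of P with x - y odd, as in the definition in the text.
    x, y = P[0] % N, P[1] % N
    if (x - y) % 2 == 0:
        x += N  # N is odd, so replacing x by x + N makes x - y odd
    cx, cy = math.cos(math.pi * x / N), math.cos(math.pi * y / N)
    return -0.25 * (cx + cy) / (cx - cy)

N = 5
for x in range(N):
    for y in range(N):
        P, SP = (x, y), (-y, x)  # S = [[0,-1],[1,0]] acts by (x, y) -> (-y, x)
        assert math.isclose(F(P, N), F((-x, -y), N), abs_tol=1e-12)
        assert math.isclose(F(P, N) + F(SP, N), 0.0, abs_tol=1e-12)
```

The denominator never vanishes here: with $x-y$ odd one cannot have $x\equiv\pm y \pmod{2N}$, and the check also exercises the independence of $F$ from the chosen representatives, since $(-x,-y)$ and $(-y,x)$ are reduced modulo $N$ before representatives are picked.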
Let $\phi(N)$ be the order of $({\bf Z}/N{\bf Z})^*$. Consider the group $D^+_N$ of even Dirichlet characters modulo $N$. Let \[ L(\chi)=\frac{1}{2}\sum_{\mu\in({\bf Z}/N{\bf Z})^{*}}\frac{\chi(\mu)}{1-\cos(\frac{\pi\mu^{0}}{N})}, \] where $\mu^{0}$ is an odd representative of $\mu$ in ${\bf Z}$. It is essential for the next theorem that $L(\chi)$ is a non-zero algebraic number, as it can be expressed as the algebraic part of the value of a Dirichlet $L$-function. \begin{thm} \label{Retraction} For every Manin symbol $\xi(\gamma) \in \HH_1(X(N), \partial_N, {\bf Z})$, we have the following retraction formula: \[ R(\xi(\gamma))=\xi(\gamma)-\sum_{\alpha \in ({\bf Z}/N{\bf Z})^*} \sum_{\chi \in D_N^{+}} \frac{\chi(\alpha)}{2N\phi(N)L(\chi)}(\sE_{\alpha \gamma P_{\infty}}-\sE_{\alpha \gamma P_{0}}). \] \end{thm} Define the boundary map $\delta$ : $\HH_1(X(N), \partial_N, {\bf Z})\rightarrow {\bf Z}[\partial_N]$ as follows. For $r,s\in \sP^1({\bf Q})$, the image by $\delta$ of the geodesic of the upper half-plane joining the cusp $r$ to the cusp $s$ is $[\pm\Gamma(N)r]- [\pm\Gamma(N)s]\in {\bf Z}[\partial_{N}]$. For an element $x\in\HH_1(X(N), \partial_N, {\bf R})$, $\delta(x)=0$ implies that $R(x)=x$. Hence to prove Theorem \ref{Retraction}, it is enough to show that \[ \delta(\xi(\gamma))=\delta(\sum_{\alpha \in ({\bf Z}/N{\bf Z})^*} \sum_{\chi \in D_N^{+}} \frac{\chi(\alpha)}{ 2N \phi(N)L(\chi)}(\sE_{\alpha \gamma P_{\infty}}-\sE_{\alpha \gamma P_{0}})). \] We prove this statement in Proposition~\ref{done}. From the above theorem, we can easily conclude that: \begin{cor} The kernel of $R$ is spanned by the classes of $\sE_P$ for $P \in ({\bf Z}/N {\bf Z})^2$ with $P$ of order $N$. \end{cor} We can offer a straightforward application of Theorem \ref{Retraction}. It enables one to write $L(f,1)$, for $f$ a cuspidal modular form of weight $2$ for $\Gamma(N)$, explicitly as a linear combination of periods of $f$.
Indeed, one has \[ \int_{0}^{\infty}f(z)dz=\int_{R(\xi({\rm Id}))}\omega_{f}, \] where $\omega_{f}$ is the pullback to $X(N)$ of the differential form $f(z)dz$. We envisage as well applications similar to those of \cite{MR1405312} for the structure of Hecke algebras completed at Eisenstein primes. There is a possibility that the computation carried out in this paper is related to the Eisenstein classes considered in \cite[Chapter 8]{Venkatesh}. This computation should be useful to answer the questions raised in that book. Finally, we have not tried to put our result in perspective with the Ramanujan sums considered in \cite{MR983619} by Murty and Ramakrishnan. \section{Modular symbols and retraction maps} \label{MSretract} We have a long exact sequence of relative homology: \[ 0 \rightarrow \HH_1(X(N),{\bf Z}) \rightarrow \HH_1(X(N), \partial_N,{\bf Z}) \xrightarrow {\delta} {\bf Z}[\partial_N]\rightarrow {\bf Z} \rightarrow 0. \] The first non-trivial map is the canonical injection. For $r,s\in \sP^1({\bf Q})$, {\it the modular symbol} $\{r,s\}$ is the class in $\HH_1(X(N),\partial_N,{\bf Z})$ of a continuous path in $\mathbb{H} \cup \sP^1({\bf Q})$ joining $r$ to $s$. By definition, its image by the boundary map $\delta$ is $[\Gamma(N)r]- [\Gamma(N)s]\in {\bf Z}[\partial_{N}]$. The last non-trivial map of the long exact sequence is the sum of the coefficients. Let $\xi : \mathrm{SL}_2({\bf Z}) \rightarrow \HH_1(X(N),\partial_N,{\bf Z})$ be the map that takes the matrix $g \in \mathrm{SL}_2({\bf Z})$ to the modular symbol $\{g.0,g.\infty\}$. The map $\xi$ is surjective (Manin \cite{MR0314846}). We note that the class $\xi(\gamma)=[\gamma]$ depends only on the class of $\gamma$ in $\overline{\SL_2({\bf Z}/N{\bf Z})}:=\SL_2({\bf Z}/N{\bf Z})/{\pm Id}$. We have an isomorphism of real vector spaces $\HH_1(X(N),{\bf Z})\otimes {\bf R} = \Hom_{{\bf C}}(\HH^0(X(N),\Omega^1),{\bf C})$ given by \[ c \rightarrow \{\omega \rightarrow \int_c \omega\}.
\] Consider the group homomorphism $\HH_1(X(N),\partial_N,{\bf Z}) \rightarrow \Hom_{{\bf C}}(\HH^0(X(N),\Omega^1),{\bf C})$ given by \[ c \rightarrow \{\omega \rightarrow \int_c \omega\}. \] The above homomorphism defines a retraction map $R:$ \[ \HH_1(X(N), \partial_N, {\bf Z}) \rightarrow \HH_1(X(N), {\bf R}). \] The Manin-Drinfeld theorem is equivalent \cite{MR1363488} to the fact that the image of $R$ is contained in $\HH_1(X(N), {\bf Q}) \subset \HH_1(X(N), {\bf R})$, {\it i.e.} $R$ splits the long exact sequence after extending the scalars to ${\bf Q}$. \section{Hecke operators acting on modular symbols} \label{Hecke} In this section, we recall the action of the Hecke operators on the space of modular symbols \cite{MR1322319, MR1136594}. Suppose $m$ is a positive integer congruent to $1$ modulo $N$. Let $A_m$ be the set of matrices in $M_2({\bf Z})$ of determinant $m$ and $A_{m,N}$ be the set of matrices in $A_m$ which are congruent to the identity modulo $N$. The congruence subgroup $\Gamma(N)$ acts on the right on $A_{m,N}$. Let $R$ be a system of representatives of $\Gamma(N)\backslash A_{m,N}$. When $m$ is congruent to $1$ modulo $N$, the Hecke correspondence $T_m$ on $X(N)$ is defined by \[ \Gamma(N)z \rightarrow \sum_{r \in R} \Gamma(N) rz. \] This action does not depend on the choice of $R$. It fixes the set of cusps of $X(N)$ pointwise. Thus, by transport of structure, the Hecke correspondence $T_m$ defines an endomorphism $T_m$ on $\HH_1(X(N),\partial_N, {\bf Z})$ and we have, for $\alpha$, $\beta\in\sP^1({\bf Q})$, $T_{m}(\{\alpha,\beta\})=\sum_{r \in R} \{r\alpha,r\beta\}$. \begin{definition} [Condition $(C_m)$] An element $\sum_M u_M M \in {\bf Z}[A_m]$ satisfies condition $(C_m)$ \cite{MR1322319} if the following equality holds in ${\bf C}[\sP^1({\bf Q})]$: \[ \sum_{M \in K} u_M (M(\infty)-M(0))=(\infty)-(0) \] for all classes $K \in M_2({\bf Z})_m \slash \SL_2({\bf Z})$.
\end{definition} Let $\sum_M u_M M \in {\bf Z}[A_m]$ satisfy the condition $C_m$. The action of the Hecke operator $T_m$ on the Manin symbol $\xi(g)$, for $g \in \overline{\SL_2({\bf Z}/N{\bf Z})}$, is given by the following formula \cite{MR1322319}: \[ T_m (\xi(g)) =\sum_M u_M \xi(gM), \] which is meaningful since the elements of $A_m$ have determinant $m$, which is congruent to $1$ modulo $N$. Similar formulas hold when $m$ is any integer ({\it i.e.} not necessarily congruent to $1$ modulo $N$), but one then needs to make a choice in the definition of $T_{m}$. We turn to a particular family of elements of ${\bf Z}[A_m]$ satisfying the condition $C_{m}$, provided $m$ is an odd integer, which is our assumption for the rest of this section. Consider the following two sets of matrices: \[ U_m = \left\{ \left(\begin{smallmatrix} x & -y\\ y' & x'\\ \end{smallmatrix}\right) \in M_2({\bf Z}) : xx' + yy' = m,\ x,x' \text{ odd},\ y,y' \text{ even},\ x > |y|,\ x' > |y'|\right\} \] and \[ V_m = \left\{ \left(\begin{smallmatrix} x & -y\\ y' & x'\\ \end{smallmatrix}\right) \in M_2({\bf Z}) : xx' + yy' = m,\ x,x' \text{ odd},\ y,y' \text{ even},\ x > |y|,\ x' > |y'|\right\}. \] For all $m$, we define \[ \theta_m=\sum_{M \in U_m} [M]-\sum_{M \in V_m} [M]=\sum_{M} u_M^{\theta}\, [M]. \] The element $\theta_m$ satisfies the condition $(C_m)$ \cite{MR1322319}. Taken together, these elements satisfy the properties of Hecke operators in ${\bf Z}[M_2({\bf Z})]$: $\theta_{m}\theta_{m'}=\theta_{mm'}$ whenever $m$ and $m'$ are odd and coprime, and the usual recursion formula when $m$ is a power of a prime number \cite{MR1316830}. Let us denote by $\overline{M_2({\bf Z})}$ the set of matrices in $M_2({\bf Z})$ modulo multiplication by $\pm 1$. For $M=\left(\begin{smallmatrix} a & b\\ c & d\\ \end{smallmatrix}\right)$, let $\widetilde{M}=\left(\begin{smallmatrix} d & -b\\ -c & a\\ \end{smallmatrix}\right)$.
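To make the classes modulo $\SL_2({\bf Z})$ concrete, let us recall the classical Hermite normal form (a standard fact, not specific to our setting): every matrix of determinant $m>0$ in $M_2({\bf Z})$ can be written as $\gamma\left(\begin{smallmatrix} a & b\\ 0 & d\\ \end{smallmatrix}\right)$ with $\gamma \in \SL_2({\bf Z})$, $ad=m$, $a,d>0$ and $0 \le b < d$. For $m=l$ prime, one obtains in this way exactly $l+1$ classes, represented by the matrices $\left(\begin{smallmatrix} 1 & b\\ 0 & l\\ \end{smallmatrix}\right)$ with $0 \le b < l$, together with $\left(\begin{smallmatrix} l & 0\\ 0 & 1\\ \end{smallmatrix}\right)$; transposing gives the analogous statement for the quotient on the other side.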
Let $H=\left(\begin{smallmatrix} 1 & 1\\ -1 & 1\\ \end{smallmatrix}\right)$ and $c=\left(\begin{smallmatrix} -1 & 0\\ 0 & 1\\ \end{smallmatrix}\right)$. \begin{prop}\label{Hecke} [\cite{MR1316830}, Propositions 5 and 6] We have the following relations in ${\bf Z}[\overline{M_2({\bf Z})}]$: \begin{itemize} \item $\theta_m c=c \theta_m$, \item $\theta_m H=H \widetilde{\theta_m}+ ([1]+[S])\sum_{M \in V_m} [MH-H\widetilde{M}]$, \end{itemize} where \[ \widetilde{\theta_m}=\sum_M u_M^{\theta} [\widetilde{M}]. \] \end{prop} We will not use the explicit form of the Diophantine sets $U_m$ and $V_m$, but we will make use of the condition $C_m$ and of the properties of $\theta_m$ stated in the proposition. The latter property is essential, but we only need to remember that $\theta_m H-H \widetilde{\theta_m}$ is a left multiple of $[1]+[S]$. It will be useful to note that the support of $\theta_{m}$ is contained in the set of matrices congruent to the identity modulo $2$. \section{Another expression of the function $F$} Let $\overline{B_1}:{\bf R} \rightarrow {\bf R}$ be the first Bernoulli function: it is periodic with period $1$, defined by $\overline{B_1}(x) = x -\frac{1}{2}$ for $x \in (0, 1)$ and by $\overline{B_1}(0) = 0$. Let $e(x)=e^{2i\pi x}$. We still denote by $\overline{B_1}$ and $e$ the functions defined on the quotient ${\bf R} \slash {\bf Z}$. Recall that $H=\left(\begin{smallmatrix} 1 & 1\\ -1 & 1\\ \end{smallmatrix}\right)$. For $P \in ({\bf Z}/N{\bf Z})^2$, we denote by $P^0$ the unique element of $({\bf Z}/2N{\bf Z})^2$ which is congruent to $P$ modulo $N$ and to $(1,1)$ modulo $2$. \begin{prop} \label{Alternative} For $P \in ({\bf Z}/N{\bf Z})^2$, we have \[ F (P ) =\sum_{s=(s_1,s_2) \in ({\bf Z}/2N{\bf Z})^2} e(\frac{s(HP)^0}{2N}) \overline{B_1}(\frac{s_1}{2N}) \overline{B_1}(\frac{s_2}{2N}).
\] \end{prop} \begin{proof} Let $(x,y) \in {\bf Z}^2$ be a representative of $P$ such that $x-y$ is odd, and set $(u,v)=(HP)^0=(x+y,x-y)+2N {\bf Z}^2 \in ({\bf Z} \slash 2 N {\bf Z})^2$. Consider the sum $\sum_{k=0}^{2N-1} e(-\frac{ku}{2N})\overline{B_1}(\frac{k}{2N})$. We have \begin{eqnarray*} \sum_{k=0}^{2N-1} e(-\frac{ku}{2N})\overline{B_1}(\frac{k}{2N}) &= & \sum_{k=1}^{2N-1} e(-\frac{ku}{2N})[\frac{k}{2N}-\frac{1}{2}]\\ &= & \frac{1}{2N} \sum_{k=0}^{2N-1} e(-\frac{ku}{2N}) k-\frac{1}{2}\sum_{k=1}^{2N-1} e(-\frac{ku}{2N})\\ &= & \frac{1}{e(-\frac{u}{2N})-1}+\frac{1}{2}\\ &= & \frac{e(-\frac{u}{2N})+1}{2(e(-\frac{u}{2N})-1)}\\ &= &-\frac{i}{2} \cot(-\frac{ \pi u}{2N}). \end{eqnarray*} The last equality follows from $1-e(t)=-2i \sin(\pi t) e(\frac{t}{2})$ and $1+e(t)=2 \cos(\pi t) e(\frac{t}{2})$ for all $t \in {\bf R}$. We deduce the following equality: \begin{eqnarray*} \sum_{k_1=0}^{2N-1} \sum_{k_2=0}^{2N-1} e (- \frac{k_1u+k_2 v}{2N}) \overline{B_1}(\frac{k_1}{2N})\overline{B_1}(\frac{k_2}{2N}) &= & \frac{e(-\frac{u}{2N})+1}{2(e(-\frac{u}{2N})-1)}\frac{e(-\frac{v}{2N})+1}{2(e(-\frac{v}{2N})-1)}\\ &= & \frac{1}{4} [\frac{e(-\frac{u}{2N})+1}{e(-\frac{u}{2N})-1}][\frac{e(-\frac{v}{2N})+1}{e(-\frac{v}{2N})-1}]\\ &= &- \frac{1}{4} \cot ( \frac{\pi(x+y)}{2N}) \cot( \frac{\pi(x-y)}{2N}) \\ &= &-\frac{1}{4}[ \frac{\cos (\frac{\pi x}{N})+ \cos (\frac{\pi y}{N})}{ \cos (\frac{\pi y}{N})- \cos (\frac{\pi x}{N}) }]. \\ \end{eqnarray*} \end{proof} \section{The element $\theta_m$ and the function $F$} We will prove a main result in this section. For $(x,y) \in {\bf R}^2$, set $f(x,y) := \overline{B}_1 (x)\overline{B}_1 (y)$. For $x \in {\bf R}$, recall the Fourier expansion of $\overline{B_1}$: \[ \overline{B_1}(x)=-\frac{1}{2 \pi i}\sum_{n \neq 0} \frac{e(nx)}{n}.
\] Hence, we have \[ f(x,y)= \overline{B_1}(x)\overline{B_1}(y)=\frac{1}{(2 \pi i)^2} \sum_{n \neq 0} \sum_{m \neq 0} \frac{1}{n m} e(nx+my). \] \begin{prop} \label{Heckeaction} Let $l$ be a prime number. Suppose that $\sum_L u_L [L] \in {\bf Z}[A_l]$ satisfies the condition $C_l$ and that $\sum_L u_L [Lc] = \sum_L u_L [cL]$ in ${\bf Z}[\overline{M_2({\bf Z})}]$. We then have \[ \sum_L u_L f(sL)=lf(s)+f(ls) \] for all $s \in {\bf Z}^2$. \end{prop} \begin{proof} Consider the functions \[ f_g : (x, y) \rightarrow -4 \pi^2 \sum_L u_L f ((x, y)L) \] and \[ f_d :(x, y) \rightarrow -4 \pi^2 (l f (x, y)+f (lx, ly)). \] They are periodic functions with period $1$ in both variables. To prove $f_g=f_d$, it is enough to prove the equality of Fourier coefficients \[ c_{n,m} (f_g )=c_{n,m} (f_d ) \] for all $(n, m) \in {\bf Z}^2$. Now, \[ f_d(x,y)=-4 \pi^2 (l f (x, y)+f (lx, ly))=\sum_{n \neq 0} \sum_{m \neq 0} \frac{l}{n m} e(nx+my)+\sum_{n \neq 0} \sum_{m \neq 0} \frac{1}{n m} e(nlx+mly). \] We first consider the coefficients $c_{n,m}(f_d)$. We note that $c_{n,0}(f_d) = c_{0,m} (f_d) = 0$. Moreover we have \begin{equation*} c_{n,m} (f_d) = \begin{cases} \frac{l}{nm} & \text{if $(n, m) \notin (l{\bf Z})^2$ and $nm \neq 0$,} \\ \frac{l}{nm} + \frac{l^2}{nm} & \text{if $(n, m) \in (l{\bf Z})^2$ and $nm \neq 0$.} \\ \end{cases} \end{equation*} We now examine the coefficients $c_{n,m}(f_g)$. We note that $\sum_L u_L Lc = c \sum_L u_L L$ and that $f_g$ is an odd function in both coordinates. We deduce that $c_{n,0}(f_g ) = c_{0,m}(f_g)=0$. Suppose now that $nm \neq 0$ and let $L=\left(\begin{smallmatrix} a & b\\ c & d\\ \end{smallmatrix}\right) \in A_l$. We have \[ -4\pi^{2}f((x,y)L)= \sum_{(n,m) \in {\bf Z}^2, nm \neq 0} \frac{e((ax+cy)n+(bx+dy)m)}{nm} =\sum_{(n,m) \in {\bf Z}^2, nm \neq 0} \frac{e((an+bm)x+(cn+dm)y)}{nm}.
\] For $(n', m')=(an+bm, cn+dm)$, which runs over ${\bf Z}^2 L^t$, we then have \[ -4\pi^{2}f((x,y)L)= \sum_{(n',m') \in {\bf Z}^2 L^t,\ (dn'-bm')(-cn'+am')\neq 0} \frac{l^2e(xn'+ym')}{(dn'-bm')(-cn'+am')}. \] We deduce that \[ c_{n,m}(f_g)=\sum_{L} \frac{u_Ll^2}{(dn-bm)(-cn+am)}, \] with the summation over all matrices $L=\left(\begin{smallmatrix} a & b\\ c & d\\ \end{smallmatrix}\right)$ in $A_l$ such that $(n,m) \in {\bf Z}^2 L^t$ and $(dn-bm)(-cn+am) \neq 0$. We have the following relation for all $\left(\begin{smallmatrix} a & b\\ c & d\\ \end{smallmatrix}\right) \in A_l$: \[ \frac{1}{(dn-bm)(-cn+am)}=\frac{1}{ad-bc}(\frac{1}{m(n-\frac{b}{d}m)}-\frac{1}{m(n-\frac{a}{c}m)}) =\frac{1}{l}(\frac{1}{m(n-L(0)m)}-\frac{1}{m(n-L (\infty)m)}). \] We deduce that \[ c_{n,m}(f_g)=\sum_{\alpha \in A_l \slash \SL_2({\bf Z})} \sum_L l u_L(\frac{1}{m(n-L(0)m)}-\frac{1}{m(n-L (\infty)m)}), \] with the second summation over all matrices $L$ in $A_l$ such that $L \in \alpha$, $(n,m) \in {\bf Z}^2 L^t$ and $(dn-bm)(-cn+am) \neq 0$. The subgroup ${\bf Z}^2 L^t$ depends only on the class of $L$ in $A_l \slash \SL_2({\bf Z})$, since ${\bf Z}^2\gamma^{t}={\bf Z}^2$ for $\gamma \in \SL_2({\bf Z})$. Now, suppose that ${\bf Z}^2 L^t={\bf Z}^2 \alpha^t$. We note that $dn-bm=0$ if and only if $L(0)=\frac{n}{m}$, and $-cn+am=0$ if and only if $L(\infty)=\frac{n}{m}$. We deduce that the condition $C_l$ gives the following equality: \[ c_{n,m}(f_g)=\sum_{\alpha \in A_l \slash \SL_2({\bf Z}),\ (n,m)\in{\bf Z}^2\alpha^t} \frac{l}{nm}. \] The classes in $A_l/\SL_2({\bf Z})$ are in bijection with the subgroups of index $l$ of ${\bf Z}^2$. If $\alpha$ and $\alpha'$ are two distinct elements of $A_l/ \SL_2({\bf Z})$, we have \[ {\bf Z}^2 \alpha^t \cap {\bf Z}^2 (\alpha')^t=l{\bf Z}^2. \] Also, we have $|A_l \slash \SL_2({\bf Z})|=l+1$. Every element of ${\bf Z}^2 -l{\bf Z}^2$ belongs to a unique subgroup of index $l$ in ${\bf Z}^2$, hence to ${\bf Z}^2 \alpha^t$ for a unique class $\alpha \in A_l \slash \SL_2({\bf Z})$. We now calculate $c_{n,m}(f_g)$.
We have \[ c_{n,m}(f_g)=\frac{l}{nm} \] if $(n,m) \notin l {\bf Z}^2$ and \[ c_{n,m}(f_g)=\sum_{\alpha \in A_l \slash \SL_2({\bf Z})} \frac{l}{nm}= (l+1)\frac{l}{nm} \] if $(n,m) \in l {\bf Z}^2$. These coefficients agree with those of $f_d$, since $\frac{l}{nm}+\frac{l^2}{nm}=(l+1)\frac{l}{nm}$. This proves the proposition. \end{proof} \begin{remark} Let $m$ be a positive integer. Suppose that $\sum_M u_M M \in {\bf Z}[A_m]$ satisfies the condition $(C_m)$ and that $\sum_M u_M Mc=c \sum_M u_M M$. For $s \in {\bf Z}^2$, by imitating the above method one can prove that \[ \sum_M u_M f(sM)=\sum_{d \mid m} \frac{m}{d} f(ds). \] We note that the condition $\sum_M u_M [Mc]=\sum_M u_M [cM]$ is not essential. In fact, if $\sum_M u_M [M]$ satisfies the condition $(C_m)$ but does not necessarily commute with $c$, then the element \[ \frac{1}{2} \sum_M u_M ([M] +[cMc]) \in {\bf Z}[ \frac{1}{2}] [M_2({\bf Z})] \] satisfies the condition $(C_m)$ and commutes with $c$. \end{remark} For $P=\left(\begin{smallmatrix} x\\ y \\ \end{smallmatrix}\right) \in ({\bf Z}/2N{\bf Z})^2$, consider the function \[ \widehat{f}(P)= \sum_{s \in ({\bf Z}/2N {\bf Z})^2} e(\frac{ s P}{2N}) f(\frac{s}{2N}). \] Observe that $\widehat{f}(P)+\widehat{f}(S P)=0$, since \[ \overline{B}_1(-\frac{s}{2N})=-\overline{B}_1(\frac{s}{2N}). \] \begin{prop} \label{ccommute} Let $l$ be an odd prime number and let $\sum_L u_L [L] \in {\bf Z}[A_l]$ satisfy the condition $C_l$ and be such that $\sum_L u_L [L c]=\sum_L u_L [cL]$ in ${\bf Z} [M_2({\bf Z})]$. For $P \in ({\bf Z}/N{\bf Z})^2$, we have \[ \sum_L u_L \widehat{f}(\tilde{L} P)=l \widehat{f}(lP)+\widehat{f}(P). \] \end{prop} \begin{proof} For $P \in ({\bf Z}/N{\bf Z})^2$, we have \[ \sum_L u_L \widehat{f}( \widetilde{L}P)=\sum_L u_L \sum_{s \in ({\bf Z}/2N{\bf Z})^2} e(\frac{s \widetilde{L}P}{2N}) f(\frac{s}{2N}). \] By abuse of notation, we consider $f$ as a function on $({\bf Z}/2N{\bf Z})^2$ defined by passing to the quotient. Since $N$ is coprime to $l$, we may make the change of variable $s=tL$ on $({\bf Z}/2N{\bf Z})^2$.
By using Prop.~\ref{Heckeaction} and the relation $L \tilde{L}=\left(\begin{smallmatrix} l & 0\\ 0 & l\\ \end{smallmatrix}\right)$, we have the following equalities: \begin{eqnarray*} \sum_L u_L \widehat{f} (\tilde{L}P) &= & \sum_L u_L \sum_{s \in ({\bf Z}/2N{\bf Z})^2} e(\frac{s \widetilde{L} P}{2N})f(\frac{s}{2N})\\ &= & \sum_L u_L \sum_{t \in ({\bf Z}/2N{\bf Z})^2} e(\frac{t L\widetilde{L} P}{2N})f(\frac{t L}{2N})\\ &= & \sum_{t \in ({\bf Z}/2N{\bf Z})^2} e(\frac{l t P}{2N}) \sum_L u_L f(\frac{t L}{2N})\\ &= & \sum_{t \in ({\bf Z}/2N{\bf Z})^2} e(\frac{t l P}{2N}) (l f(\frac{t}{2N})+f(\frac{lt}{2N}))\\ &= & l \widehat{f}(lP)+\widehat{f}(P). \end{eqnarray*} For the last equality, we used the change of variable $u=lt$ in the second term. \end{proof} Theorem~\ref{Eisensteinclass} follows directly from the following proposition. \begin{prop} Suppose $l$ is an odd prime number and $P \in ({\bf Z}/N{\bf Z})^2$. Then, we have \[ \sum_L u_L^{\theta} F(L P)= F(P)+l F (lP). \] \end{prop} \begin{proof} From Proposition~\ref{Alternative}, we have: \[ F(P)=\widehat{f} ((HP)^0). \] Thus \[ \sum_L u_L^{\theta} F(L P)=\sum_L u_L^{\theta} \widehat{f} ((HLP)^0). \] We use the formula of Proposition~\ref{Hecke}: \[ \theta_l H=H \widetilde{\theta_l}+ ([1]+[S])\sum_{M \in V_l} [MH-H\widetilde{M}]=H \widetilde{\theta_l}+ ([1]+[S])\sum_{M}a_{M}[M], \] where $M$ runs over matrices congruent to the identity modulo $2$. Thus we obtain: \begin{eqnarray*} \sum_L u_L^{\theta} F(LP) &= & \sum_L u_L^{\theta} \widehat{f} ((H L P)^0) \\ &= & \sum_L u_L^{\theta} \widehat{f} ((\widetilde{L} HP)^0)+\sum_{M}a_{M} (\widehat{f} ((MP)^0)+\widehat{f} ((SMP)^0)). \end{eqnarray*} We use the fact that $(SMP)^{0}=S(MP)^{0}$ and the anti-invariance of $\widehat{f}$ under $S$. Hence the last term vanishes.
We can pursue the calculation using the property of $\widehat{f}$ established in the previous proposition and the fact that, for any $P \in ({\bf Z}/N{\bf Z})^2$ and any matrix $L$ congruent to the identity modulo $2$, we have $L P^0=(LP)^0$: \begin{eqnarray*} \sum_L u_L^{\theta} F(LP) &= &\sum_L u_L^{\theta} \widehat{f}((\tilde{L}H P)^0)\\ &= & \sum_L u_L^{\theta} \widehat{f} (\tilde{L}(H P)^0)\\ &= & \widehat{f} ((H P)^0)+l \widehat{f} (l(H P)^0)\\ &= & F(P)+lF(l P). \end{eqnarray*} \end{proof} \section{Eisenstein eigenvectors} We now prove Theorem~\ref{Eisensteinclass}. \begin{prop} \label{Heckeq} Let $P \in ({\bf Z} /N{\bf Z})^2$ and let $l$ be an odd prime number congruent to $1$ modulo $N$. On $\HH_1(X(N), \partial_N, {\bf Z})$, we have \[ T_l(\sE_P)= (l+1) \sE_P. \] \end{prop} \begin{proof} We use the propositions of the previous section, in particular the relation \[ \sum_L u_L^{\theta} F(LP)=F(P)+l F(lP). \] Since $l \equiv 1 \pmod N$, the reductions modulo $N$ of the matrices in the support of $\theta_l$ have determinant $1$, and $F(lP)=F(P)$. We thus have \[ \sum_L u_L^{\theta} F(LP)=F(P)+lF(P)=(1+l)F(P). \] We deduce the equality \[ T_l(\sE_P)=\sum_L u_L^{\theta} \sum_{\gamma \in \overline{\SL_2({\bf Z}/N{\bf Z})}} \overline{F}(\gamma^{-1}P)[\gamma L] \] \[ =\sum_{\gamma \in \overline{\SL_2({\bf Z}/N{\bf Z})}}\sum_L u_L^{\theta} \overline{F}(L\gamma^{-1}P)[\gamma ] =(l+1)\sum_{\gamma \in \overline{\SL_2({\bf Z}/N{\bf Z})}}\overline{F}(\gamma^{-1}P)[\gamma] =(l+1)\sE_P . \] In the second step, we used the change of variable $\gamma'=\gamma L$. \end{proof} We now prove the second part of Theorem~\ref{Eisensteinclass}. \begin{prop} \label{kernel} The classes $\sE_P$ lie in the kernel of the map $R$. \end{prop} \begin{proof} Recall that $T_l$ also denotes the Hecke operator on $\HH^0(X(N), \Omega^1)$ deduced from the correspondence $T_l$ of degree $l+1$.
Since the operator $T_l -(1+l)$ is surjective on $\HH^0(X(N), \Omega^1)$ \cite{MR0318157}, any $\tilde{\omega} \in \HH^0(X(N), \Omega^1)$ can be written in the form $\tilde{\omega}=(T_l-(1+l)) \omega$. We then have \[ \int_{\sE_P}\tilde{\omega}=\int_{\sE_P}(T_l-(1+l)) \omega=\int_{\sE_P}T_l \omega-(l+1)\int_{\sE_P} \omega \] \[ =\int_{T_l\sE_P}\omega-(l+1)\int_{\sE_P} \omega=(l+1)\int_{\sE_P}\omega-(l+1)\int_{\sE_P} \omega=0. \] We deduce that $\sE_P$ lies in the kernel of $R$. This completes the proof of Theorem~\ref{Eisensteinclass}. \end{proof} \section{Finer Eisenstein eigenvectors in mixed homology groups} This section is not strictly necessary for the main results of the article, but we hope it will enlighten the reader about our constructions. Let $M$ be an even integer. Consider the morphism of Riemann surfaces $\pi$ : $X(M)\rightarrow X(2)$. Recall that the modular curve $X(2)$ has three cusps: the classes $\Gamma(2)0$, $\Gamma(2)1$ and $\Gamma(2)\infty$ of $0$, $1$ and $\infty$ in $\sP^{1}({\bf Q})$. Consider the following partition of the cusps of $X(M)$ into $P_{+}$ and $P_{-}$, where $P_{+}=\pi^{-1}(\{\Gamma(2)0,\Gamma(2)\infty\})$ and $P_{-}=\pi^{-1}(\{\Gamma(2)1\})$. Thus we can consider the mixed homology groups $\HH_{0}=\HH_{1}(X(M)-P_{+},P_{-};{\bf Z})$ and $\HH^{0}=\HH_{1}(X(M)-P_{-},P_{+};{\bf Z})$. The intersection pairing $\bullet$ provides a ${\bf Z}$-valued perfect pairing between the latter two groups. More precisely, consider the map $\xi^{0}$ : ${\bf Z}[\pm\Gamma(M)\backslash \Gamma(2)]\rightarrow \HH^{0}$ that associates to $\gamma\in \Gamma(2)$ the class in $\HH^{0}$ of the image in $X(M)$ of the geodesic path in $\mathbb{H}$ from $\gamma0$ to $\gamma\infty$. It has a counterpart $\xi_{0}$ : ${\bf Z}[\pm\Gamma(M)\backslash \Gamma(2)]\rightarrow \HH_{0}$ that associates to $\gamma\in \Gamma(2)$ the class in $\HH_{0}$ of the image in $X(M)$ of the geodesic path in $\mathbb{H}$ from $\gamma(-1)$ to $\gamma1$.
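Let us illustrate the partition. The cusps $\Gamma(M)0$ and $\Gamma(M)\infty$ of $X(M)$ belong to $P_{+}$, since $\pi$ sends them to $\Gamma(2)0$ and $\Gamma(2)\infty$ respectively, while the cusp $\Gamma(M)1$ belongs to $P_{-}$. Accordingly, the geodesic path from $0$ to $\infty$ has its endpoints in $P_{+}$ and defines a class in $\HH^{0}$ (it is $\xi^{0}(1)$), whereas the geodesic path from $-1$ to $1$ has its endpoints in $P_{-}$ and defines a class in $\HH_{0}$ (it is $\xi_{0}(1)$).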
In \cite{MR1405312}, one of us proved that both $\xi^{0}$ and $\xi_{0}$ are group isomorphisms. Moreover one has $\xi^{0}([x])\bullet\xi_{0}([y])=0$ if $x\ne y$ and $\xi^{0}([x])\bullet\xi_{0}([y])=1$ if $x=y$ ($x$, $y\in \pm\Gamma(M)\backslash \Gamma(2)$). For $m$ an odd integer, the Hecke correspondence $T_{m}$ leaves stable the sets of cusps $P_{+}$ and $P_{-}$. Hence it defines endomorphisms of the groups $\HH_{0}$ and $\HH^{0}$. More precisely, when $m$ is congruent to $1$ modulo $M$, the Hecke action is given as follows on the bases of $\HH_{0}$ and $\HH^{0}$ \cite{MR1316830}: \[ T_{m}\xi^{0}(\gamma)=\xi^{0}(\gamma\theta_{m}) \] and \[ T_{m}\xi_{0}(\gamma)=\xi_{0}(\gamma\widetilde{\theta_{m}}). \] Similar formulas hold when $m$ is any odd positive integer \cite{MR1316830}. Suppose now that $M=2N$. One has canonical identifications $\pm\Gamma(M)\backslash\Gamma(2)\simeq\pm\Gamma(N)\backslash\SL_2({\bf Z}) \simeq\overline{\SL_2({\bf Z}/N{\bf Z})}$. On the other hand, we have a canonical map $\lambda$ obtained by composing the maps: \[ \HH^{0}=\HH_{1}(X(M)-P_{-},P_{+};{\bf Z})\rightarrow\HH_{1}(X(M),P_{+};{\bf Z})\rightarrow\HH_{1}(X(M),\partial_{M};{\bf Z})\rightarrow \HH_{1}(X(N),\partial_{N};{\bf Z}). \] The last map is deduced from the obvious degeneracy map $X(M)\rightarrow X(N)$. The map $\lambda$ has the following two properties. For $\gamma\in \Gamma(2)$, one has $\lambda(\xi^{0}(\gamma))=\xi(\gamma)$; therefore $\lambda$ is surjective. It respects the Hecke operators, and therefore the image by $\lambda$ of an Eisenstein class is an Eisenstein class.
Our main innovation in the current paper is to lift the Eisenstein classes of $\HH_{1}(X(N),\partial_{N};{\bf Z})$ to certain Eisenstein classes of $\HH^{0}=\HH_{1}(X(M)-P_{-},P_{+};{\bf Z})$, which admits a canonical basis (via $\xi^{0}$); the latter classes therefore admit a canonical expression in terms of Manin symbols, which via $\lambda$ gives the distinguished expressions for Eisenstein classes obtained in this article. Evidently, our method has difficulties producing Eisenstein classes of even level. Nevertheless, some nontrivial things can be said in that context. Hence the classes \[ \sE^{0}_P=\sum_{{\gamma} \in \overline{\SL_2({\bf Z}/N{\bf Z})}} {\bar F}(\gamma^{-1} P)\xi^{0}(\gamma) = \sE_{-P} \] are Eisenstein classes of $\HH^{0}=\HH_{1}(X(M)-P_{-},P_{+};{\bf Z})$, which can be proved in the same way as Theorem~\ref{Eisensteinclass}. Via the canonical map $\HH^{0}=\HH_{1}(X(M)-P_{-},P_{+};{\bf Z})\rightarrow \HH_{1}(X(M),\partial_{M};{\bf Z})$ one obtains Eisenstein classes in $\HH_{1}(X(M),\partial_{M};{\bf Z})$. Denote by $\sE'_{P}$ the image of $\sE^{0}_P$. Thus we are already beyond the scope of Theorem~\ref{Eisensteinclass}. However, the Hecke operator $U_{2}$ acts on $\HH_{1}(X(M),\partial_{M};{\bf Z})$. It is unclear to us whether $E(M)$ is spanned, as a ${\bf Z}[U_{2}]$-module, by the classes $\sE'_{P}$. \section{Boundaries of Eisenstein classes} Let $C_N$ be the set of points of $({\bf Z}/N {\bf Z})^2$ of order $N$, modulo multiplication by $\{\pm 1\}$. Denote by $\phi$ the map that associates to the representative $[\frac{u}{v}] \in \sP^1({\bf Q})$ of a class in $\Gamma(N )\backslash \sP^1({\bf Q})$ the class of $(u,v)$ in $C_N$. We identify the set $\partial_N$ with $C_N$ using the map $\phi:\Gamma(N) \backslash\sP^1({\bf Q}) \rightarrow C_N$ \cite{MR2112196}.
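For instance, under this identification the cusp $\Gamma(N)\infty$, with representative $\infty=[\frac{1}{0}]$, corresponds to the class of $(1,0)$ in $C_N$, and the cusp $\Gamma(N)0$, with representative $[\frac{0}{1}]$, corresponds to the class of $(0,1)$.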
For an element $P$ of order $N$ in $({\bf Z}/N{\bf Z})^2$, denote by $(P)$ the cusp corresponding to the class $\overline{P}$ under the above bijection. We calculate the boundary $\delta(\sE_P) \in {\bf Z}[C_N]$ of the Eisenstein class $\sE_P$. \begin{thm} \label{boundary} One has \[ \delta(\sE_P)=2N[\sum_{\mu \in ({\bf Z}/N{\bf Z})^*} (F(\mu P_{\infty}) +\frac{1}{4})(\mu P)]-\frac{N}{2} [\sum_{Q \in C_{N}} (Q)]. \] \end{thm} \begin{proof} Since $S(P_{\infty})=-P_0$, we deduce that the boundary of $[\gamma]$ is equal to \[ \delta([\gamma])=(\gamma \infty)-(\gamma 0)=(\gamma P_{\infty})-(\gamma P_0) \] in ${\bf Z}[C_N]$. We deduce that \[ \delta(\sE_P)=\sum_{\gamma \in \overline{\SL_2({\bf Z}\slash N{\bf Z})}} \overline{F}(\gamma^{-1}P)((\gamma P_{\infty})-(\gamma P_0)) =\sum_{\gamma \in \overline{\SL_2({\bf Z}\slash N{\bf Z})}} (\overline{F}(\gamma^{-1}P)-\overline{F}(S \gamma^{-1}P))(\gamma P_{\infty}). \] We note that $F(S P)=-F(P)$, hence the above sum reduces to \[ \delta(\sE_P)=2\sum_{\gamma \in \overline{\SL_2({\bf Z}\slash N{\bf Z})}} \overline{F}(\gamma^{-1}P)(\gamma P_{\infty}). \] We calculate the coefficient of $(Q)$ in the above expression using the alternative description of $F(P)$ given in Prop.~\ref{Alternative}. The coefficient of $(Q)$ is: \[ 2 \sum_{\gamma P_{\infty}=Q} \sum_{s=(s_1,s_2) \in ({\bf Z}/2N{\bf Z})^2} e(\frac{s(H \gamma^{-1}P)^0}{2N}) \overline{B_1}(\frac{s_1}{2N})\overline{B_1}(\frac{s_2}{2N}). \] Let $\gamma_0 \in \overline{\SL_2({\bf Z}/N{\bf Z})}$ be such that $\gamma_0 P_{\infty} = Q$ (such an element $\gamma_0$ always exists). Consider the matrix $T_{\alpha} =\left(\begin{smallmatrix} 1 & \alpha\\ 0 & 1\\ \end{smallmatrix}\right)$ with $\alpha \in {\bf Z}/N{\bf Z}$. We have \[ \{\gamma|\gamma \in \SL_2({\bf Z}/N{\bf Z}), \gamma P_{\infty}=Q\}=\{\gamma_0 T_{\alpha}| \alpha \in ({\bf Z}/N{\bf Z})\}.
\] Hence, the coefficient of $(Q)$ is equal to \[ 2 \sum_{s=(s_1,s_2) \in ({\bf Z}/2N{\bf Z})^2} \sum_{\alpha \in ({\bf Z}/N{\bf Z})} e(\frac{s(HT_{\alpha} \gamma_0^{-1}P)^0}{2N}) \overline{B_1}(\frac{s_1}{2N})\overline{B_1}(\frac{s_2}{2N}):=T(P, Q). \] We will now prove that: \begin{equation*} T(P, Q) = \begin{cases} 2NF(\mu P_{\infty}) & \text{if $Q=\mu P$,} \\ - \frac{N}{2}& \text{if $Q \neq \mu P$ for all $\mu\in ({\bf Z}/N{\bf Z})^*$}. \\ \end{cases} \end{equation*} We examine the quantity \[ \sum_{\alpha \in ({\bf Z}/N{\bf Z})} e(\frac{s(HT_{\alpha} \gamma_0^{-1}P)^0}{2N}). \] Let us first assume that $P= \mu Q$ with $\mu \in ({\bf Z}/N{\bf Z})^*$. We then have $T_{\alpha} \gamma_0^{-1}P = \mu P_{\infty}$ and \[ T(P,Q)=2 N \sum_{s=(s_1,s_2) \in ({\bf Z}/2 N{\bf Z})^2} e(\frac{s(H \mu P_{\infty})^0}{2N}) \overline{B_1}(\frac{s_1}{2N})\overline{B_1}(\frac{s_2}{2N})=2NF(\mu P_{\infty}). \] Assume now that $P \neq \mu Q$ for all $\mu \in ({\bf Z}/N{\bf Z})^*$. Writing $\gamma_0^{-1} P =\begin{pmatrix}u\\v\end{pmatrix}$, we have $v \not\equiv 0 \pmod N$ since $Q\neq \mu P$. Writing $s=(s_1,s_2)$, it is easy to see that: \begin{eqnarray*} L&=& s(HT_{\alpha} \gamma_0^{-1}P)^0\\ &=&(s_1,s_2) \begin{pmatrix}(u+(\alpha+1)v)^0\\ (-u-(\alpha-1)v)^0\end{pmatrix}\\ &=&s_1 (u +(\alpha + 1)v)^0 + s_2 (-u -(\alpha-1)v)^0. \end{eqnarray*} From the above expression, we have: $L \equiv (s_1 - s_2)u + \alpha(s_1 -s_2)v +(s_2 + s_1)v \pmod N$ and $L \equiv s_1-s_2 \pmod 2$. Hence, we conclude that: \[ e(\frac{L}{2N})=e(\frac{L}{2})e(\frac{tL}{N})=e(\frac{s_1-s_2}{2})e(\frac{t[(s_1 - s_2)u + \alpha(s_1 -s_2)v +(s_2 + s_1)v ]}{N}) \] with $t=\frac{1-N}{2}$; indeed $\frac{1}{2}+\frac{t}{N}=\frac{N+2t}{2N}=\frac{1}{2N}$. Note that $t$ is invertible modulo $N$, since $2t \equiv 1 \pmod N$. Hence, if $(s_1 -s_2)v \not\equiv 0$ modulo $N$, we deduce that \[ \sum_{\alpha \in {\bf Z}/N{\bf Z}} e(\frac{s(HT_{\alpha} \gamma_0^{-1}P)^0}{2N})=0. \] We now calculate $T(P,Q)$ by restricting the sum to the terms that satisfy the equality $(s_1 - s_2)v \equiv 0 \pmod {N}$.
Let us assume $s_1 v \equiv s_2 v \pmod {N}$; then \[ L \equiv (s_1 - s_2)u +(s_2 + s_1)v \equiv s_1(u+v) -s_2(u-v)\pmod {N}. \] Let $d={\rm gcd}(v,N)$ and $d'=N/d$. By assumption, one has $d<N$ and consequently $d'>1$. Observe that $s_1 v \equiv s_2 v \pmod {N}$ is equivalent to $s_1 \equiv s_2 \pmod {d'}$, and that $u$ is well defined modulo $d$. One has \begin{eqnarray*} T(P,Q) & = & 2 N \sum_{s_1 v \equiv s_2v \pmod {N}} e(\frac{t[s_1(u+v)-s_2(u-v)]}{N}) \overline{B_1}(\frac{s_1}{2N})\overline{B_1}(\frac{s_2}{2N})e(\frac{s_1-s_2}{2})\\ & = & 2 N \sum_{s_1 \in ({\bf Z}/2N{\bf Z})} e(\frac{ts_1(u+v)}{N}) \overline{B_1}(\frac{s_1}{2N}) \sum_{s_2 \equiv s_1 \pmod {d'}} e(-\frac{t s_2(u-v)}{N})\overline{B_1} (\frac{s_2}{2N})e(\frac{s_1-s_2}{2}).\\ \end{eqnarray*} Set $s_{2}=s_{1}+kd'$, where $k$ is an integer modulo $2d$, and set $\alpha=e(2tv/N)$ (a primitive $d'$-th root of unity) and $\beta=-e(-tu/d)$ (a primitive $2d$-th root of unity). The calculation becomes ($s_{1}$ runs through the integers modulo $2N$ and $k$ runs through the integers modulo $2d$) \begin{eqnarray*} T(P,Q) &=& 2 N \sum_{s_1} e(\frac{ts_1 (u+v)}{N}) \overline{B_1}(\frac{s_1}{2N}) \sum_{k} e(-\frac{t(u-v)[s_1+kd']}{N}) \overline{B_1} (\frac{s_1+k d'}{2N})e(-\frac{kd'}{2})\\ &=& 2 N \sum_{s_1=0}^{2N-1} \alpha^{s_{1}} \overline{B_1}(\frac{s_1}{2N}) \sum_{k=0}^{2d-1}\beta^{k}\overline{B_1} (\frac{s_1}{2N}+\frac{k}{2d}).\\ \end{eqnarray*} Set $s_{1}=ad'+b$, with $b$ the class modulo $2N$ of an integer in $[0, d')$ and $a$ well defined modulo $2d$.
Hence we get ($a$ and $k$ run through the integers modulo $2d$ and $b$ runs through the integers modulo $d'$) \begin{eqnarray*} T(P,Q) &=& 2 N \sum_{a=0}^{2d-1} \sum_{b=0}^{d'-1}\alpha^{ad'+b} \overline{B_1}(\frac{b}{2N}+\frac{a}{2d}) \sum_{k}\beta^{k}\overline{B_1} (\frac{b}{2N}+\frac{a}{2d}+\frac{k}{2d})\\ &=& 2 N \sum_{a=0}^{2d-1} \sum_{b=0}^{d'-1}\alpha^{b} \overline{B_1}(\frac{b}{2N}+\frac{a}{2d}) \sum_{k=0}^{2d-1}\beta^{k}\overline{B_1} (\frac{b}{2N}+\frac{a}{2d}+\frac{k}{2d}).\\ \end{eqnarray*} We need a lemma. \begin{lemma} \label{twistedB} Let $f(x)=\sum_{k}\beta^{k}\overline{B_1} (x+\frac{k}{2d})$, for $x\in {{\bf R}}$. Then for $l$ an integer, $0\le l<2d$, and $x\in(l/2d,(l+1)/2d)$, we have \begin{eqnarray*} f(x)= \sum_{k}\beta^{k}\overline{B_1} (x+\frac{k}{2d})=\frac{\beta^{-l}}{(\beta-1)}, \end{eqnarray*} and \begin{eqnarray*} f( \frac{l}{2d})= \sum_{k}\beta^{k}\overline{B_1} (\frac{l}{2d}+\frac{k}{2d})=\frac{\beta^{-l}(1+\beta)}{2(\beta-1)}. \end{eqnarray*} \end{lemma} \begin{proof} Indeed, as a linear combination of functions which are $1$-periodic and affine of slope $1$ on the interval $(l/2d,(l+1)/2d)$, $f$ is $1$-periodic and affine of slope $\sum_{k=0}^{2d-1}\beta^{k}=0$ on $(l/2d,(l+1)/2d)$. Hence $f$ is constant on $(l/2d,(l+1)/2d)$. The relation $f(x+1/2d)=\beta^{-1}f(x)$ follows from the $1$-periodicity of $\overline{B_1}$. The first formula of the lemma follows from the particular case $x\in (0,1/2d)$, where $f(x)=\sum_{k}\beta^{k}k/2d=1/(\beta-1)$. The second formula is obtained by remarking that the value of $f$ at a point of discontinuity $y$ is obtained by averaging its values in the right and left neighborhoods of $y$ (a property that $f$ inherits from $\overline{B_1}$ by linearity). (For instance, for $d=1$ one has $\beta=-1$, and indeed $f(x)=\overline{B_1}(x)-\overline{B_1}(x+\frac{1}{2})=-\frac{1}{2}$ for $x\in(0,\frac{1}{2})$, in agreement with $\frac{1}{\beta-1}=-\frac{1}{2}$.) \end{proof} We can now complete the calculation of $T(P,Q)$ using Lemma \ref{twistedB} and the relation $\sum_{b}\alpha^{b}=0$, which holds since $\alpha$ is a primitive $d'$-th root of unity and $d'>1$.
We obtain thus (here again $a$ runs through the integers modulo $2d$ and $b$ runs through the integers modulo $d'$, and we apply Lemma~\ref{twistedB} both to $\beta$ and to $\beta^{-1}$): \begin{eqnarray*} T(P,Q) &=& 2 N \sum_{a} \sum_{b=1}^{d'-1}\alpha^{b} \overline{B_1}(\frac{b}{2N}+\frac{a}{2d}) \frac{\beta^{-a}}{\beta-1}+2 N \sum_{a}\overline{B_1} (\frac{a}{2d})\frac{\beta^{-a}(1+\beta)}{2(\beta-1)}\\ &=& 2 N \sum_{b=1}^{d'-1}\frac{\alpha^{b}}{(\beta-1)(\beta^{-1}-1)}+2 N \frac{(1+\beta)}{2(\beta-1)} \frac{1+\beta^{-1}}{2(\beta^{-1}-1)}\\ &=& -\frac{2N}{(\beta-1)(\beta^{-1}-1)}+2 N \frac{(1+\beta)}{2(\beta-1)} \frac{1+\beta^{-1}}{2(\beta^{-1}-1)}\\ &=& - \frac{2N}{(\beta-1)(\beta^{-1}-1)}[1- \frac{(1+\beta)(1+\beta^{-1})}{4}]\\ &=&-\frac{N}{2}. \end{eqnarray*} \end{proof} \section{The retraction formula} We now prove Theorem~\ref{Retraction}. Consider the following modular symbol: \[ \sE(\gamma)=\frac{1}{2N \phi(N)}\sum_{\alpha \in ({\bf Z}/N{\bf Z})^*} \sum_{\chi \in D_N^{+}} \frac{{\chi}(\alpha)}{L(\chi)}(\sE_{\alpha \gamma P_{\infty}}-\sE_{\alpha \gamma P_{0}}) \] (we will see below that $L(\chi)\ne0$). Theorem~\ref{Retraction} follows from the next proposition. \begin{prop} \label{done} We have: \[ \delta(\xi(\gamma))=\delta(\sE(\gamma)). \] \end{prop} \begin{proof} It is easy to see that: \[ \delta(\xi(\gamma))=(\gamma P_{\infty})-(\gamma P_0). \] It is enough to prove that \[ (\gamma P_{\infty})=\frac{1}{2N \phi(N)}\sum_{\alpha \in ({\bf Z}/N{\bf Z})^*} \sum_{\chi \in D_N^{+}} \frac{{\chi}(\alpha)}{L(\chi)} \delta(\sE_{\alpha \gamma P_{\infty}}) + R, \] where $R\in{\bf C}[C_{N}]$ is a quantity independent of $\gamma$. We use Theorem~\ref{boundary} and proceed using a Fourier inversion formula in ${\bf C} [C_{N}]$. For $\chi\in D_{N}^{+}$ (the group of even Dirichlet characters modulo $N$), denote by $e_{\chi}=(1/\phi(N))\sum_{\beta\in ({\bf Z}/N{\bf Z})^*}\chi(\beta)[\beta]$ the corresponding idempotent of the group ring ${\bf C}[({\bf Z}/N{\bf Z})^*]$. This group ring acts on ${\bf C}[C_{N}]$.
Recall that we have introduced \[ L(\chi)=\sum_{\beta}\chi(\beta)(F(\beta P_{\infty})+1/4)=\frac{1}{2}\sum_{\mu\in({\bf Z}/N{\bf Z})^{*}}\frac{\chi(\mu)}{1-\cos(\frac{\pi\mu^{0}}{N})}. \] Set $U=\sum_{Q \in C_{N}} (Q)$. We have, using Theorem~\ref{boundary}: \begin{eqnarray*} \sum_{\beta\in ({\bf Z}/N{\bf Z})^*}\chi(\beta)\delta(\sE_{\beta \gamma P_{\infty}}) &=& 2 N \sum_{\beta\in ({\bf Z}/N{\bf Z})^*}\sum_{\mu\in ({\bf Z}/N{\bf Z})^*} \chi(\beta)(F(\mu P_{\infty})+\frac{1}{4})(\mu\beta \gamma P_{\infty})-\frac{N}{2}\sum_{\beta\in({\bf Z}/N{\bf Z})^*}\chi(\beta)U \\ &=& 2N\phi(N)L(\chi)e_{\chi}(\gamma P_{\infty})-\frac{N}{2}\delta_{\chi,1}\phi(N)U. \end{eqnarray*} Hence we get \[ e_{\chi}(\gamma P_{\infty})=\frac{1}{2N\phi(N)}\sum_{\beta\in ({\bf Z}/N{\bf Z})^*}\frac{\chi(\beta)}{L(\chi)} \delta(\sE_{\beta \gamma P_{\infty}})+\frac{\delta_{\chi,1}}{4L(\chi)}U. \] As $\sum_{\chi\in D_{N}^{+}}e_{\chi}=\frac{1}{2}([1]+[-1])$, which acts as the identity on ${\bf C}[C_{N}]$ since the elements of $C_{N}$ are defined modulo $\pm1$, we can now sum over $\chi\in D_{N}^{+}$ to obtain \[ (\gamma P_{\infty})=\frac{1}{2N\phi(N)}\sum_{\alpha \in ({\bf Z}/N{\bf Z})^*} \sum_{\chi \in D_N^{+}} \frac{{\chi}(\alpha)}{L(\chi)} \delta(\sE_{\alpha \gamma P_{\infty}}) +\frac{1}{4L(1)}U. \] The term $\frac{1}{4L(1)}U$ is indeed independent of $\gamma$; in particular, we get the same term for $(\gamma P_0)$. Hence we have proved the proposition. \end{proof} \begin{cor} The map ${\bf C}[C_{N}]\rightarrow E(N)$ which to $\overline{P}$ associates $\sE_{P}$ is surjective and its kernel is one dimensional, generated by $\sum_{\overline{P}\in C_{N}}[P]$. \end{cor} \begin{proof} Consider the isomorphism of complex vector spaces $E(N)\rightarrow {\bf C}[\partial_{N}]^{0}$ which to $\sE$ associates $\delta(\sE)$, where ${\bf C}[\partial_{N}]^{0}$ is formed by the elements of degree $0$ of ${\bf C}[\partial_{N}]$. Moreover the map $\delta$ : $ \HH_1(X(N), \partial_N,{\bf C}) \rightarrow{\bf C}[\partial_{N}]^{0}$ is surjective. Hence, by Proposition~\ref{done}, the map $\overline{P}\mapsto \sE_{P}$ is surjective.
A computation of dimensions tells us that its kernel is of dimension $1$. Moreover, since $\overline{P}\mapsto \sE_{P}$ is a morphism of $\overline{\SL_2({\bf Z}/N{\bf Z})}$-modules, its kernel is a one dimensional $\overline{\SL_2({\bf Z}/N{\bf Z})}$-module. Since $\overline{\SL_2({\bf Z}/N{\bf Z})}$ acts transitively on $C_{N}$, all coefficients of an element of the kernel are equal. Hence, the kernel is generated by $\sum_{\overline{P}\in C_{N}}[P]$. \end{proof} It remains to express $L(\chi)$ in terms of values of Dirichlet $L$-series (and to check that it is nonzero). Denote by $\chi_{2}$ the Dirichlet character modulo $2N$ which coincides with $\chi$ on odd integers. Let $L(\chi_{2},2)=\sum_{n=1}^{\infty}\chi_{2}(n)n^{-2}$ (which is nonzero). \begin{prop} \label{special} One has \[ L(\chi)= \frac{2N^{2}}{\pi^{2}}L(\chi_{2},2). \] \end{prop} \begin{proof} Let us first remark that, for $x\in{\bf R}$, \[ \frac{2}{1-\cos(\pi x)}=\frac{1}{\pi^{2}}\sum_{n\in{\bf Z}}\frac{1}{(x/2+n)^{2}}=\frac{1}{\pi^{2}}(\sum_{n=1}^{\infty}\frac{1}{(x/2+n)^{2}}+\sum_{n=1}^{\infty}\frac{1}{(x/2+1-n)^{2}})=\frac{1}{\pi^{2}}(\zeta(2,x/2)+\zeta(2,1-x/2)), \] where $\zeta(s,r)=\sum_{n=0}^{\infty}1/(n+r)^{s}$ is the Hurwitz zeta function. Therefore, we have \begin{eqnarray*} L(\chi) &=&\frac{1}{2}\sum_{\mu\in ({\bf Z}/N{\bf Z})^{*}}\frac{\chi(\mu)}{1-\cos(\pi\mu^{0}/N)}\\ &=&\frac{1}{4\pi^{2}}\sum_{\mu\in ({\bf Z}/N{\bf Z})^{*}}\chi(\mu)(\zeta(2,\frac{\mu^{0}}{2N})+\zeta(2,1-\frac{\mu^{0}}{2N}))\\ &=&\frac{1}{2\pi^{2}}\sum_{\mu\in ({\bf Z}/N{\bf Z})^{*}}\chi(\mu)\zeta(2,\frac{\mu^{0}}{2N})\\ &=&\frac{1}{2\pi^{2}}\sum_{\alpha=0}^{2N-1}\chi_{2}(\alpha)\zeta(2,\frac{\alpha}{2N})\\ &=&\frac{2N^{2}}{\pi^{2}}L(\chi_{2},2).\\ \end{eqnarray*} The third equality uses $\chi(-\mu)=\chi(\mu)$ and $(-\mu)^{0}=2N-\mu^{0}$. The last equality results from the classical relation between Dirichlet $L$-series and the Hurwitz zeta function [\cite{MR2312338}, Prop. 10.2.5, p. 168]: \[ L(s, \chi)=\frac{1}{m^s} \sum_{r=1}^{m} \chi(r) \zeta(s, \frac{r}{m}), \] where $m$ is the modulus of $\chi$.
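As a sanity check, consider the trivial character $\chi=1$ modulo $N$: the character $\chi_{2}$ is then the trivial character modulo $2N$, so that $L(\chi_{2},2)=\zeta(2)\prod_{p \mid 2N}(1-p^{-2})$ and \[ L(1)=\frac{2N^{2}}{\pi^{2}}\cdot\frac{\pi^{2}}{6}\prod_{p \mid 2N}(1-p^{-2})=\frac{N^{2}}{3}\prod_{p \mid 2N}(1-p^{-2}), \] which is visibly nonzero.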
\end{proof} Given that the cuspidal subgroup of $X(N)$ can be identified with $R(\HH_1(X(N), \partial_N, {\bf Z}))/\HH_1(X(N), {\bf Z})$, we recover the classical fact that the order of this cuspidal subgroup is related to the algebraic part of values at $2$ of Dirichlet $L$-series. \end{document}
\begin{document} \title{Refuting a Proposed Axiom for Defining the Exact Rotating Wave Approximation} \author{Daniel Zeuch$^1$ and David P.~DiVincenzo$^{1,2}$} \affiliation{$^1$Peter Gr\"unberg Institut, Theoretical Nanoelectronics, Forschungszentrum J\"ulich, D-52425 J\"ulich, Germany \\ $^2$Institute for Quantum Information, RWTH Aachen University, 52062 Aachen, Germany} \date{\today} \begin{abstract} For a linearly driven quantum two-level system, or qubit, sets of stroboscopic points along the cycloidal-like trajectory in the rotating frame can be approximated using the exact rotating wave approximation introduced in arXiv:1807.02858. That work introduces an effective Hamiltonian series $\mathcal{H}_{\text{eff}}$ generating smoothed qubit trajectories; this series has been obtained using a combination of a Magnus expansion and a Taylor series, a Magnus-Taylor expansion. Since, however, this Hamiltonian series is not guaranteed to converge for arbitrary pulse shapes, the same work hypothesizes an axiomatic definition of the effective Hamiltonian. The first two of the proposed axioms define $\mathcal{H}_{\text{eff}}$ to (i) be analytic and (ii) generate a stroboscopic time evolution. In this work we probe a third axiom---motivated by the smoothed trajectories mentioned above---namely, (iii) a variational principle stating that the integral of the Hamiltonian's positive eigenvalue taken over the full pulse duration is minimized by this $\mathcal{H}_{\text{eff}}$. We numerically refute the validity of this third axiom via a variational minimization of the said integral. \end{abstract} \maketitle \tableofcontents \newpage \section{Introduction} \label{introduction} Consider a quantum two-level system, or qubit, which is coupled to a linearly-polarized drive treated classically.
This problem, which was considered by Bloch and Siegert \cite{bloch40}, is of current interest due to its applicability to the field of quantum information processing \cite{nielsen10}. In this setting, single-qubit gates need to be carried out with high precision by shaped pulses. Optimal pulse shapes, which correspond to specific envelope functions, are often required to satisfy multiple constraints \cite{motzoi09} and therefore need to be synthesized by way of numerical search. Such a search can be streamlined by the ability to predict the driven qubit's time evolution using a high-precision approximation that is easy to integrate numerically. Given the resonance frequency of the qubit, $\omega_0$, and the drive frequency, $\omega$, the Hamiltonian of the driven-qubit system reads\footnote{While adding a term proportional to the identity to this Hamiltonian does not change the dynamics of the system, we consider a traceless Hamiltonian for simplicity in our analysis. \label{foot:traceless}} ($\hbar = 1$) \begin{eqnarray} \mathcal H_{\text{lab}}(t) &=& \frac{\omega_0}2 \sigma_z + \frac{H_1(t)}{2} \cos(\omega t + \phi) \sigma_x \label{Hlab0}\\ &=& \frac{\omega}2 \sigma_z + \frac{H_1(t)}{2} \cos(\omega t) \sigma_x, \qquad \qquad (\omega_0 = \omega, \phi = 0). \label{Hlab} \end{eqnarray} Here $H_1(t)$ is the time-dependent drive amplitude, and $\sigma_x$, $\sigma_y$, $\sigma_z$ are the Pauli matrices. For the Hamiltonian (\ref{Hlab0}) we assume a constant phase offset, $\phi$, and a small detuning, $\Delta = \omega_0 - \omega$, with $\Delta \ll \omega$. In the present study we often consider the Hamiltonian (\ref{Hlab}), which corresponds to the special case of resonant driving, $\Delta \equiv \omega_0 - \omega = 0$, and zero phase offset, $\phi=0$. The relative magnitudes of the amplitude $H_1(t)$ and the qubit frequency $\omega$, the two central parameters in the above Hamiltonian, can be used to define different parameter regimes.
Here, we focus on the regime of relatively weak to strong driving in which $|H_1(t)| \lesssim \omega$, for which it is useful to transform the above Hamiltonians from the laboratory frame of reference to a rotating frame associated with the drive. This latter frame rotates about the $z$ axis with the drive frequency $\omega$ and is defined by the standard transformation $\mathcal H_{\text{rot}} = \tilde U^{\dagger} \mathcal H_{\text{lab}} \tilde U - i \tilde U^\dagger \frac{\partial}{\partial t}\tilde U$ \cite{messiah1964quantum} with the unitary operator $\tilde U(t) = e^{-i \omega t \sigma_z/2}$, \begin{eqnarray} \mathcal H_{\text{rot}}(t) &\stackrel{(\ref{Hlab0})}{=}& \frac{H_1(t)}4 ( \cos(\phi)\sigma_x + \cos(2\omega t + \phi) \sigma_x + \sin(\phi)\sigma_y - \sin(2\omega t + \phi)\sigma_y) + \frac\Delta2 \sigma_z \label{Hrot0}\\ &\stackrel{(\ref{Hlab})}{=}& \frac{H_1(t)}4 ( \sigma_x + \cos(2\omega t) \sigma_x - \sin(2\omega t)\sigma_y), \quad \qquad (\Delta=0, \ \phi=0). \label{Hrot} \end{eqnarray} While the drive in the lab frame has a period of $2\pi/\omega$, note that the drive period in the rotating frame is \begin{equation} t_c = \pi/\omega. \label{tc} \end{equation} Since the rotating-frame Hamiltonian $\mathcal H_{\text{rot}}(t)$ does not commute with itself at arbitrary times, it is a nontrivial problem to compute its time evolution analytically. Further note that the non-commuting terms in the rotating-frame Hamiltonian $\mathcal H_{\text{rot}}(t)$ vary on the time scale $1/\omega$. This time scale is assumed to be small compared to the inverse Rabi frequency ($\sim 1/\max(|H_1(t)|)$), since for most realistic pulses the amplitude fulfills $|H_1(t)| \leq 0.1 \omega$. This fast time dependence implies that $\mathcal H_{\text{rot}}(t)$ cannot be integrated very easily. This problem is often circumvented by using the rotating wave approximation (RWA) \cite{cohen98}.
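The frame transformation above can be verified directly; the following sketch (our addition, with arbitrary sample parameters) evaluates $\tilde U^\dagger \mathcal H_{\text{lab}} \tilde U - i \tilde U^\dagger \partial_t \tilde U$ numerically for the resonant case and compares it with the right-hand side of Eq.~(\ref{Hrot}).

```python
# Illustrative check (sample values H1, w, t are arbitrary) that the
# transformation H_rot = U^dag H_lab U - i U^dag (dU/dt), with
# U(t) = exp(-i w t sz/2), maps the resonant lab-frame Hamiltonian (Hlab)
# to the rotating-frame Hamiltonian (Hrot).
import cmath, math

def mmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def dag(A):
    return [[A[j][i].conjugate() for j in range(2)] for i in range(2)]

sx = [[0, 1], [1, 0]]
sy = [[0, -1j], [1j, 0]]
sz = [[1, 0], [0, -1]]

H1, w, t = 0.8, 5.0, 0.37

# Resonant lab-frame Hamiltonian, zero phase offset.
H_lab = [[w / 2 * sz[i][j] + H1 / 2 * math.cos(w * t) * sx[i][j]
          for j in range(2)] for i in range(2)]

U = [[cmath.exp(-1j * w * t / 2), 0], [0, cmath.exp(1j * w * t / 2)]]
# For this U one has -i U^dag (dU/dt) = -(w/2) sz exactly.
H_rot_num = [[mmul(mmul(dag(U), H_lab), U)[i][j] - w / 2 * sz[i][j]
              for j in range(2)] for i in range(2)]

# Right-hand side of Eq. (Hrot).
H_rot_formula = [[H1 / 4 * ((1 + math.cos(2 * w * t)) * sx[i][j]
                            - math.sin(2 * w * t) * sy[i][j])
                  for j in range(2)] for i in range(2)]

err_frame = max(abs(H_rot_num[i][j] - H_rot_formula[i][j])
                for i in range(2) for j in range(2))
assert err_frame < 1e-12
```

The counter-rotating terms at frequency $2\omega$ appear automatically in the off-diagonal entries, as the transformation requires.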
The Hamiltonian in the RWA is obtained by taking the rotating-frame Hamiltonian and neglecting the oscillatory terms. The usefulness of the RWA is that the Hamiltonian in this approximation varies relatively slowly in time, rendering it easy to integrate. However, the RWA only gives relatively accurate results for very weak drives with $|H_1(t)| \ll \omega$. The driven-qubit problem described above has been studied using Floquet's theorem (see, e.g., Refs.~\cite{shirley65, aravind84, peskin93, drese99, mananga11, novicenko17, schmidt18}), the dressed-state formalism \cite{cohen73} and the Magnus expansion (see, e.g., work on nuclear magnetic resonance \cite{haeberlen68, evans68, waugh68}, or more recent studies \cite{casas01, blanes09, mananga11, bukov2015universal} in which Floquet theory and the Magnus expansion have been combined). More recently, it has been shown that the time-dependent Schroedinger equation can be solved using the path-sum method \cite{giscard15}, and its applicability to the driven-qubit problem with constant drive amplitudes has been demonstrated in Ref.~\cite{giscard2019general}. For a more complete literature review see Ref.~\cite{zeuch18}. A recent development in the subject of periodically-driven quantum systems is the introduction of the exact rotating wave approximation \cite{zeuch18}, which can be used to accurately predict the time evolution even for strong drives with $|H_1(t)| \lesssim \omega$. This theory is based on a novel method for time-dependent perturbation theory called the Magnus-Taylor expansion \cite{zeuch18}. The time evolution in the exact RWA is generated by an \textit{effective Hamiltonian}, which, when compared to the exact Hamiltonian, varies only slowly in time and can therefore be integrated with similar ease as the RWA Hamiltonian. We note that Ref.~\cite{varvelis19} applies this effective Hamiltonian to the problem of designing quantum gates for singlet-triplet spin qubits \cite{cerfontaine14}.
Here we are concerned with the definition of the effective Hamiltonian, denoted $\mathcal{H}_\text{eff}$. In its original derivation \cite{zeuch18}, $\mathcal{H}_\text{eff}$ is formulated as a series whose convergence, however, is not always guaranteed. In Ref.~\cite{zeuch18} it has therefore also been surmised that the effective Hamiltonian can alternatively be defined via an axiomatic definition motivated by qualitative features of the stroboscopic time evolution. In the present paper we propose, study and give numerical evidence against a particular variant of such an axiomatic definition. \subsection{Exact Rotating Wave Approximation} \label{exactRWA} The problem of finding the time evolution for the driven qubit is captured by the Schroedinger equation, \begin{eqnarray} i \partial_t |\psi(t)\rangle = \mathcal{H}(t) |\psi(t)\rangle, \label{schroedinger} \end{eqnarray} where for our problem the exact Hamiltonian is given by one of the rotating-frame Hamiltonians (\ref{Hrot0}) or (\ref{Hrot}). The solution to the Schroedinger equation can be formally expressed via the time evolution operator for initial and final times $t_i$ and $t$, respectively, \begin{equation} U(t, t_i) = \mathcal{T}e^{-i\int_{t_i}^{t} \text{d} \tau \mathcal H(\tau)} = e^{-i \overline{\mathcal H} (t-t_i)}. \label{Ugeneric} \end{equation} Here the unitary operator $U$ is first written in the usual form featuring the time ordering operator $\mathcal{T}$. We also express the time evolution operator as a true exponential function using the Magnus expansion \cite{magnus1954, ernst87, waugh07}, in which the quantity $\overline {\mathcal H}$, also referred to as a Magnus Hamiltonian, can be understood as a type of Hamiltonian average on the interval $[t_i, t]$.
This average is usually given as a series of integral terms of commutators of the Hamiltonian with itself at different times, and the first three terms of this series are given explicitly in Appendix \ref{magnus_appendix}. A formal solution to the Schroedinger equation (\ref{schroedinger}) then reads \begin{eqnarray} |\psi(t)\rangle = U(t, t_i)|\psi(t_i)\rangle. \label{solution} \end{eqnarray} Note that even for a constant envelope function $H_1(t) \equiv H_1$ the computation of the time evolution operator (\ref{Ugeneric}) is nontrivial because the full rotating-frame Hamiltonian $\mathcal{H}_\text{rot}(t)$ does not commute with itself at different times $t$ and $t'$, $[\mathcal{H}_\text{rot}(t), \mathcal{H}_\text{rot}(t')] \neq 0$. In contrast, the RWA Hamiltonian, \begin{eqnarray} \mathcal{H}_{\text{RWA}}(t) &\stackrel{(\ref{Hrot0})}{=}& \frac{H_1(t)}4 ( \cos(\phi)\sigma_x + \sin(\phi)\sigma_y) + \frac\Delta2 \sigma_z \\ &\stackrel{(\ref{Hrot})}{=}& \frac{H_1(t)}4 \sigma_x, \quad \qquad \qquad\qquad\qquad (\Delta=0, \ \phi=0), \label{HRWA} \end{eqnarray} does commute with itself at different times for either case of zero detuning $\Delta=0$ or a constant field amplitude $H_1(t)$. [As noted above, the RWA Hamiltonian is obtained by neglecting the oscillating terms in the rotating-frame Hamiltonian given above.] In either case the computation of the time evolution operator (\ref{Ugeneric}) for this approximation simplifies greatly since the time ordering operator $\mathcal T$ can be neglected. For the simplest case of a constant amplitude the RWA Hamiltonian itself is a constant, and we have \begin{eqnarray} \qquad \qquad \qquad U_{\text{RWA}}(t, t_i) = e^{-i\int_{t_i}^{t} \text{d} \tau \mathcal{H}_{\text{RWA}}} = e^{-i \mathcal{H}_{\text{RWA}} (t-t_i)}, \qquad \qquad (H_1(t) \equiv H_1).
\label{URWA} \end{eqnarray} As noted above, when the ratio $|H_1(t)|/\omega$ is appreciable, the usage of the RWA is not justified for many applications requiring high-precision predictions of the qubit's time evolution. Perhaps the most famous correction to the RWA is the Bloch-Siegert shift \cite{bloch40}, which is proportional to $H_1(t)^2/\omega$ at lowest order in $1/\omega$. A Hamiltonian beyond the RWA may then be written as follows, \begin{eqnarray} \mathcal{H}_{\text{RWA}, \text{improved}}(t) = \frac{H_1(t)}4 \sigma_x - \frac{H_1(t)^2} {32\omega} \sigma_z. \label{BS} \end{eqnarray} As has been pointed out recently \cite{zeuch18}, while this Hamiltonian (\ref{BS}) is a systematic improvement for constant drive envelopes, this is not the case for arbitrary envelopes. This is because $\mathcal{H}_{\text{RWA}, \text{improved}}$ does not capture a correction term proportional to $\dot H_1/\omega$, which is of importance since it is of the same order in $1/\omega$ as the Bloch-Siegert shift. In the exact RWA, this term is part of an \emph{effective Hamiltonian} that has been derived in Ref.~\cite{zeuch18} using the Magnus-Taylor expansion mentioned above. Reference \cite{zeuch18} introduces an effective Hamiltonian as a series expansion in $1/\omega$, \begin{eqnarray} \mathcal{H}_{\text{eff}}(t; \beta_0) = \sum_{k=0}^{\infty} h_{k}(t; \beta_0) (1/\omega)^k. \label{HeffSeries} \end{eqnarray} The operator function $h_{k=0}$ corresponds to the case of the RWA, i.e., $h_{k=0}(t; \beta_0)=\mathcal{H}_{\text{RWA}}(t)$. For a given $k>0$, the operator $h_{k}$ can be computed using the recursion relation given by Eq.~(59) in Ref.~\cite{zeuch18}. The first five terms of the series (\ref{HeffSeries}) for the special-case rotating-frame Hamiltonian (\ref{Hrot}) are given explicitly in Appendix \ref{effH_appendix}.
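The commutation properties invoked above (the exact rotating-frame Hamiltonian fails to commute with itself at different times, while the resonant RWA Hamiltonian always commutes) can be checked in a few lines. The Bloch-vector representation and the sample times below are our illustrative choices.

```python
# Quick check (arbitrary sample times and parameters): for Bloch vectors
# a, b one has [a.sigma, b.sigma] = 2i (a x b).sigma, so two traceless
# Hamiltonians commute iff their Bloch vectors are parallel.
import math

def hrot_vec(t, H1=1.0, w=20.0):
    # Bloch vector of the exact rotating-frame Hamiltonian (Hrot)
    return (H1 / 4 * (1 + math.cos(2 * w * t)),
            -H1 / 4 * math.sin(2 * w * t), 0.0)

def hrwa_vec(t, H1=1.0):
    # Bloch vector of the resonant RWA Hamiltonian (HRWA): always along x
    return (H1 / 4, 0.0, 0.0)

def cross_norm(a, b):
    cx = a[1] * b[2] - a[2] * b[1]
    cy = a[2] * b[0] - a[0] * b[2]
    cz = a[0] * b[1] - a[1] * b[0]
    return math.sqrt(cx * cx + cy * cy + cz * cz)

t1, t2 = 0.02, 0.05
noncomm_rot = cross_norm(hrot_vec(t1), hrot_vec(t2))  # nonzero
noncomm_rwa = cross_norm(hrwa_vec(t1), hrwa_vec(t2))  # exactly zero

assert noncomm_rot > 1e-3
assert noncomm_rwa == 0.0
```

This is why time ordering can be dropped for the RWA but not for the exact rotating-frame Hamiltonian.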
This Hamiltonian series constitutes a set of correction terms to the usual RWA Hamiltonian given above, and it can be obtained up to arbitrary order in $1/\omega$. Assuming this series converges, the effective Hamiltonian results in a stroboscopic time evolution, that is, it generates effective qubit trajectories that agree with the exact trajectory at periodic points in time. Note that this effective Hamiltonian depends not only on time $t$ but also on a gauge parameter, $\beta_0$, whose role is explained further below. To give an example, the effective Hamiltonian for the system described by the rotating-frame Hamiltonian (\ref{Hrot}) is \begin{eqnarray} \mathcal{H}_\text{eff}(t; \beta_0) &=& \frac{H_1}{4}\sigma _x + \frac{H_1^2}{32 \omega}(1-2 \cos\beta_0)\sigma_z + \frac{\dot H_1}{8 \omega}(\sin\beta_0\sigma_x+\cos\beta_0\sigma_y) + \mathcal O(1/\omega^2) \label{Heff_main} \\ &\stackrel{(\beta_0=0)}{=}&\frac{H_1}{4}\sigma _x - \frac{H_1^2}{32 \omega}\sigma_z + \frac{\dot H_1}{8 \omega}\sigma_y + \mathcal O(1/\omega^2), \end{eqnarray} which is here given only up to first order in $1/\omega$. This Hamiltonian gives a systematic prediction of the time evolution of the driven qubit for time-dependent drive envelopes $H_1(t)$. Note that for constant $H_1(t) = H_1$ this effective Hamiltonian for $\beta_0=0$ reduces to the improved Hamiltonian (\ref{BS}), which includes the Bloch-Siegert shift. Figure \ref{trajectories} illustrates the general behavior of the exact RWA by means of various qubit Bloch-sphere trajectories for a $\pi$-pulse in the RWA with a Gaussian envelope function $H_1(t)$, which fulfills $\int_{0}^{t_\text{gate}} \text{d}\tau\, H_1(\tau) = 2\pi$ with the pulse duration $t_\text{gate}$. For this choice, the time evolution operator in the RWA, given in Eq.~(\ref{URWA}), results in a \textsc{not} gate, $U_{\text{RWA}}(t_\text{gate}) \propto \sigma_x$.
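For a constant envelope the stroboscopic property and the benefit of the Bloch-Siegert correction can be probed numerically. The following sketch (our addition; the parameters $H_1$, $\omega$, the step count and the tolerances are illustrative) propagates the exact Hamiltonian (\ref{Hrot}) over one drive period $t_c = \pi/\omega$ and compares the result with the propagator of the first-order effective Hamiltonian at $\beta_0 = 0$ and with the plain RWA propagator.

```python
# Sketch (assumed constant envelope H1, resonant drive, beta_0 = 0):
# compare the exact propagator over one drive period t_c with
# exp(-i H_eff t_c), H_eff = (H1/4) sx - (H1^2/(32 w)) sz being the
# first-order truncation of (Heff_main), and with the RWA propagator.
import math

def expm_bloch(hx, hy, hz, t):
    """exp(-i t (h . sigma)) for a real Bloch vector h."""
    r = math.sqrt(hx * hx + hy * hy + hz * hz)
    if r == 0.0:
        return [[1.0, 0.0], [0.0, 1.0]]
    c, s = math.cos(r * t), math.sin(r * t)
    nx, ny, nz = hx / r, hy / r, hz / r
    return [[c - 1j * s * nz, -1j * s * nx - s * ny],
            [-1j * s * nx + s * ny, c + 1j * s * nz]]

def mmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

H1, w = 1.0, 20.0
tc = math.pi / w
steps = 4000
dt = tc / steps

# Exact propagator: product of short-time exponentials (midpoint rule).
U_exact = [[1.0, 0.0], [0.0, 1.0]]
for n in range(steps):
    t = (n + 0.5) * dt
    hx = H1 / 4 * (1 + math.cos(2 * w * t))
    hy = -H1 / 4 * math.sin(2 * w * t)
    U_exact = mmul(expm_bloch(hx, hy, 0.0, dt), U_exact)

U_eff = expm_bloch(H1 / 4, 0.0, -H1 ** 2 / (32 * w), tc)
U_rwa = expm_bloch(H1 / 4, 0.0, 0.0, tc)

def dist(A, B):
    return max(abs(A[i][j] - B[i][j]) for i in range(2) for j in range(2))

err_eff, err_rwa = dist(U_exact, U_eff), dist(U_exact, U_rwa)
# The Bloch-Siegert correction should beat the plain RWA by a clear margin
# at the stroboscopic time t_c (residual error is of higher order in 1/w).
assert err_eff < err_rwa / 3
```

The remaining mismatch of `U_eff` stems from the neglected $\mathcal O(1/\omega^2)$ terms of the series (\ref{HeffSeries}).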
Shown are various solutions $|\psi(t)\rangle$ with $t\in[0, t_\text{gate}]$ to the Schroedinger equation (\ref{schroedinger}) with initial condition $|\psi(t=0)\rangle = |0\rangle$, that is, at initial time $t_i = 0$ the qubit is initialized to the north pole of the Bloch sphere. Each shown trajectory corresponds to a certain Hamiltonian as detailed below. The exact qubit trajectory $|\psi_{\text{exact}}(t) \rangle = U_{\text{exact}}(t, 0)|0\rangle$ [cf.~the solution (\ref{solution}) to the Schroedinger equation], shown in red, is generated by the exact Hamiltonian (\ref{Hrot}). Its time evolution operator of the generic form (\ref{Ugeneric}) is given by \begin{equation} U_{\text{exact}}(t, t_i) = \mathcal{T}e^{-i\int_{t_i}^{t} \text{d} \tau \mathcal H_\text{rot}(\tau)}. \label{Uexact} \end{equation} The corresponding trajectory follows cycloidal-like motions known as Bloch-Siegert oscillations, which are due to the terms in the exact Hamiltonian (\ref{Hrot}) that oscillate at twice the drive frequency $\omega$. In contrast, the RWA trajectory, generated only by the operator $\sigma_x$ [see the RWA Hamiltonian (\ref{HRWA})], is a simple $x$-axis rotation. As becomes clear from the trajectories shown on the left-hand side (LHS) of Fig.~\ref{trajectories}, this RWA trajectory, shown in green in the figure, makes significant errors in predicting the exact trajectory for the chosen, relatively strong drive ($H_1(t)/\omega \lesssim 0.1$). The LHS of Fig.~\ref{trajectories} also shows a blue qubit trajectory due to the effective Hamiltonian. As described above, the effective and exact trajectories agree at stroboscopic points indicated with bullets in the figure. The time difference between these points is the period of the drive (\ref{tc}), or $t_c = \pi/\omega$, which is equal to the period of the Bloch-Siegert oscillations. \begin{figure} \caption{Various qubit trajectories in the rotating frame compared and contrasted to one another.
The initial state of the qubit is $|\psi(t=0)\rangle = |0\rangle$. The amplitude and width of the Gaussian envelope function (give function explicitly) are chosen such that the RWA trajectory, shown in green on the left-hand side (LHS), (i) corresponds to a $\pi$-pulse and (ii) is visibly inaccurate. As opposed to the simple RWA path, the exact trajectory (red) describes a cycloidal-like path. The LHS also shows the trajectory (blue) corresponding to the effective Hamiltonian of the exact RWA. On the right-hand side we show how three different \emph{exact RWA} trajectories, corresponding to different values of the gauge parameter $\beta_0$, compare with the exact trajectory.} \label{trajectories} \end{figure} As noted above, the effective Hamiltonian $\mathcal{H}_\text{eff}=\mathcal{H}_\text{eff}(t; \beta_0)$ depends on a gauge parameter denoted $\beta_0$. This gauge parameter enables one to choose different sets of stroboscopic points at which the effective and exact trajectories agree; these sets are given by \begin{eqnarray} \{t_0, t_0 \pm t_c, t_0 \pm 2t_c, \ldots\}. \label{sets} \end{eqnarray} Here the constant time offset $t_0$ is chosen in $[0, t_c)$, where $t_c$, given in Eq.~(\ref{tc}), denotes the period of the drive in the rotating frame. We denote the intervals \begin{eqnarray} [t_0 + n t_c, t_0 + (n+1) t_c) \label{MagnusInterval} \end{eqnarray} as \emph{Magnus intervals}. The offset $t_0$ is then related to the gauge parameter $\beta_0$ via the drive period $t_c = \pi/\omega$, that is, \begin{eqnarray} \beta_0 = 2\pi t_0/t_c = 2\omega t_0, \qquad \qquad \beta_0 \in [0, 2\pi). \label{beta0} \end{eqnarray} Reference \cite{zeuch18} refers in this context to a gauge degree of freedom because both the starting and end points for a drive pulse are left unchanged\footnote{For this statement to be exact, one needs to implement so-called kick operators \cite{zeuch18}.} when varying $\beta_0$.
This behavior is exemplified on the right-hand side of Fig.~\ref{trajectories}, where three different effective qubit trajectories for $\beta_0 = 0$, $\pi/2$ and $\pi$ are shown in different shades of blue. In this plot, points of agreement are again indicated by bullets. With reference to the generic time evolution described by Eq.~(\ref{Ugeneric}), the effective evolution operator from an initial time $t_i = t_0 + m t_c$ for some integer $m$ is written in an extended notation, \begin{equation} U_{\beta_0}(t, t_i = t_0 + m t_c) = \mathcal{T} e^{-i\int_{t_0 + m t_c}^{t} \text{d} \tau \mathcal H_{\text{eff}}(\tau; \beta_0)}, \label{Ueff} \end{equation} in which the dependence on the gauge parameter $\beta_0$ is given explicitly. Note that the choice of this gauge parameter $\beta_0$ determines the initial time $t_i$ up to an integer multiple of the drive period $t_c$, since the time $t_0$ is related to $\beta_0$ via Eq.~(\ref{beta0}). The stroboscopic time evolution means that this time evolution operator $U_{\beta_0}(t, t_i)$ agrees with the exact time evolution, which is described by Eq.~(\ref{Ugeneric}) with $\mathcal{H} = \mathcal{H}_\text{rot}$ and the same initial time $t_i$, at the final times (\ref{sets}). It is a premise of the derivation in Ref.~\cite{zeuch18} that the effective Hamiltonian depends on time solely through the envelope function $H_1(t)$ and its derivatives, or \begin{eqnarray} \mathcal{H}_\text{eff}(t) = \mathcal{H}_\text{eff}(H_1(t), \dot H_1(t), \ddot H_1(t), \ldots). \label{HEff_dependence} \end{eqnarray} For the special case of constant $H_1(t) \equiv H_1$ it then follows that the effective Hamiltonian is time-independent, $\mathcal{H}_\text{eff}(t; \beta_0) = \mathcal{H}_\text{eff}(\beta_0)$.
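The role of the gauge parameter can likewise be probed numerically. The sketch below (our addition; constant envelope, illustrative parameters and tolerance) checks that the first-order effective Hamiltonian of Eq.~(\ref{Heff_main}), with $\dot H_1 = 0$, reproduces the exact propagator over one Magnus interval $[t_0, t_0 + t_c)$ for several values of $\beta_0$.

```python
# Sketch (assumed constant envelope H1, resonant drive; parameters are
# illustrative): for beta_0 in {0, pi/2, pi}, propagate the exact
# Hamiltonian (Hrot) from t0 = beta_0/(2 w) to t0 + t_c and compare with
# exp(-i H_eff(beta_0) t_c), where H_eff(beta_0) is (Heff_main) with
# dot-H1 = 0, i.e. (H1/4) sx + (H1^2/(32 w)) (1 - 2 cos(beta_0)) sz.
import math

def expm_bloch(hx, hy, hz, t):
    r = math.sqrt(hx * hx + hy * hy + hz * hz)
    if r == 0.0:
        return [[1.0, 0.0], [0.0, 1.0]]
    c, s = math.cos(r * t), math.sin(r * t)
    nx, ny, nz = hx / r, hy / r, hz / r
    return [[c - 1j * s * nz, -1j * s * nx - s * ny],
            [-1j * s * nx + s * ny, c + 1j * s * nz]]

def mmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

H1, w = 1.0, 20.0
tc = math.pi / w
steps = 4000
dt = tc / steps

errors = {}
for beta0 in (0.0, math.pi / 2, math.pi):
    t0 = beta0 / (2 * w)
    U_exact = [[1.0, 0.0], [0.0, 1.0]]
    for n in range(steps):
        t = t0 + (n + 0.5) * dt
        hx = H1 / 4 * (1 + math.cos(2 * w * t))
        hy = -H1 / 4 * math.sin(2 * w * t)
        U_exact = mmul(expm_bloch(hx, hy, 0.0, dt), U_exact)
    hz_eff = H1 ** 2 / (32 * w) * (1 - 2 * math.cos(beta0))
    U_eff = expm_bloch(H1 / 4, 0.0, hz_eff, tc)
    errors[beta0] = max(abs(U_exact[i][j] - U_eff[i][j])
                        for i in range(2) for j in range(2))

# Agreement up to the neglected O(1/w^2) corrections for every gauge choice.
assert all(e < 1e-4 for e in errors.values())
```

Shifting $t_0$ changes the $\sigma_z$ coefficient exactly as the factor $(1 - 2\cos\beta_0)$ in Eq.~(\ref{Heff_main}) prescribes.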
Since time ordering can then be ignored, the effective time evolution operator (\ref{Ueff}) greatly simplifies to \begin{eqnarray} \qquad \qquad \qquad U_{\beta_0}(t, t_i) = e^{-i\int_{t_i}^{t} \text{d} \tau \mathcal{H}_{\text{eff}}(\beta_0)} = e^{-i \mathcal{H}_{\text{eff}}(\beta_0) (t-t_i)}, \qquad \qquad (H_1(t) \equiv H_1). \label{Ueff_const} \end{eqnarray} For constant amplitudes the time evolution in the exact RWA is thus computed as easily as in the regular RWA, cf.~Eq.~(\ref{URWA}). Reference \cite{zeuch18} presents a derivation of the effective Hamiltonian series. This is done by employing a new method for time-dependent perturbation theory that combines the Magnus expansion \cite{magnus1954, ernst87, waugh07} of the time evolution operator with a Taylor expansion of the envelope $H_1(t)$---this method has been called the Magnus-Taylor expansion \cite{zeuch18}. As a consequence, the effective Hamiltonian is a function of not only $H_1(t)$ but also all its temporal derivatives, as also highlighted in Eq.~(\ref{HEff_dependence}). A direct consequence of this fact is that a crucial condition for an analytic effective Hamiltonian is an analytic envelope function $H_1(t)$. For non-analytic $H_1(t)$ the theory of the exact RWA requires the application of kick operators \cite{zeuch18}. Here we will not need to employ such kick operators, because ignoring them only leads to minor numerical corrections that are smaller than the error due to other numerical imperfections. Related to the nontrivial subject of the convergence of the Magnus expansion, a caveat of the effective Hamiltonian of Ref.~\cite{zeuch18} is that the circumstances under which the approximation series for the effective Hamiltonian converges have not yet been established. In Ref.~\cite{zeuch18} it was, however, hypothesized that there may be an alternate, \emph{axiomatic} definition of the effective Hamiltonian, which may hold even if the series obtained by the proposed calculation method does not converge.
Two axioms in that definition would be that 1) the effective Hamiltonian is an analytic function of time, and 2) its propagator agrees with the exact propagator at periodic points in time. Since these two axioms are fulfilled not only by the effective Hamiltonian but also by the exact Hamiltonian, there needs to be at least a third axiom that sets these two Hamiltonians apart. The present manuscript is a compilation of research notes composed while probing such a third axiom. This axiom can be motivated by the exact and effective trajectories shown in Fig.~\ref{trajectories}, which illustrates the notion that individual effective trajectories are significantly shorter and smoother than the exact trajectory. This observation is the basis for the proposal of a third axiom, which suggests that the positive eigenvalue of the effective Hamiltonian is, when averaged over the entire qubit path, smaller than that of the exact Hamiltonian --- and thus perhaps also smaller than that of any other Hamiltonian generating a stroboscopic time evolution. Referring to Fig.~\ref{trajectories}, an effective qubit trajectory is generally visibly \emph{shorter} than its exact counterpart. Since the length of a trajectory is related to the eigenvalue of the generating Hamiltonian, our first proposal discussed in this manuscript is that the integral of the positive eigenvalue of the effective Hamiltonian over the entire pulse duration is smaller than the same integral for any other Hamiltonian satisfying Axioms 1 and 2. Furthermore, Fig.~\ref{trajectories} suggests that the effective trajectories are in general significantly \emph{smoother} than their exact counterparts. Since the smoothness is related to the change of the Hamiltonian as a function of time, our second proposal is that the integral of the positive eigenvalue of the time derivative of the effective Hamiltonian (again taken over the entire pulse duration) is smaller than the same integral for any other Hamiltonian satisfying Axioms 1 and 2.
It is the purpose of this study to propose and investigate such a third axiom that may determine the effective Hamiltonian. Our investigation is partially analytic, though the main part consists of the implementation of a numerical minimization. The goal of this study has been to either prove the axiomatic definition analytically, or to find numerical evidence for or against it. Since an analytic solution has escaped our notice for most integrals that appear in this study, we have tried to minimize the respective functionals numerically. Indeed, we find that the integrals that we proposed to be minimized by the effective Hamiltonian are actually minimized by a different Hamiltonian, which also results in a stroboscopic time evolution. This numerically-obtained finding refutes the third axiom. \subsection{Structure of the Remainder of This Manuscript} \label{structure} These research notes are structured as follows. In Sec.~\ref{prelims} we first present the proposed \emph{axiomatic} definition of the effective Hamiltonian, which is the subject of this study. We then explain the basic method for scrutinizing this axiomatic definition. In Sec.~\ref{minimizeEigenvalue} we compute algebraic formulas for the main integrand considered in this work, both for constant driving envelopes as well as for time-dependent envelopes. In Sec.~\ref{Results_of_Integrals} we present our numerical analysis, and we conclude in Sec.~\ref{conclusions}. A set of Python scripts and Mathematica notebooks used for the variational minimization described below can be found on a GitHub repository, see \href{https://github.com/zeuch/exactRWA.git}{this link}. A set of Mathematica files that can be used for computing the effective Hamiltonians can be found in the same repository.
\section{Preliminaries} \label{prelims} Starting with the proposed axiomatic definition described in Sec.~\ref{axioms}, we go on to propose two particular integrands for the functional of the third proposed axiom in Sec.~\ref{integrands}. We then introduce the details of our variational minimization in Sec.~\ref{minimization}, and mention the role of kick operators in Sec.~\ref{kicks}. In Sec.~\ref{coding} we give a short overview of the GitHub repository containing the Mathematica notebooks and Python scripts used for this work. \subsection{Axiomatic Definition of the Effective Hamiltonian} \label{axioms} The effective Hamiltonian generates the time evolution in the exact rotating wave approximation as described above in Sec.~\ref{exactRWA}. As noted in the Introduction, the Hamiltonian series (\ref{HeffSeries}) is not always guaranteed to converge. Because of this convergence problem, we propose an axiomatic definition of the effective Hamiltonian. For this we assume that the exact Hamiltonian, given in Eq.~(\ref{Hrot}) for a linearly driven qubit in the rotating frame, is itself an analytic function of time. Note that for the Hamiltonian (\ref{Hrot}) this is the case if the envelope function $H_1(t)$ is analytic. It follows from the derivation of the effective Hamiltonian \cite{zeuch18} that $\mathcal{H}_\text{eff}$ is analytic in time. This is because $\mathcal{H}_\text{eff}(t; \beta_0)$ depends only on constants and on $H_1(t)$ and its derivatives (see Appendix \ref{effH_appendix} for example terms). Furthermore, the effective and exact qubit trajectories coincide at equally-spaced points in time. This is the basis for the first two axioms, \begin{axiom} The effective Hamiltonian $\mathcal{H}_{\text{eff}}(t; \beta_0)$ is analytic in both of its arguments, that is, in time $t$ and the gauge parameter $\beta_0$.
\label{axiom1} \end{axiom} \begin{axiom} The time evolution due to the effective Hamiltonian $\mathcal{H}_{\text{eff}}(t; \beta_0)$ agrees with the exact time evolution at the times (\ref{sets}), i.e., $t_0$, $t_0 + t_c$, $t_0 + 2t_c$, \ldots with $t_0 = \beta_0/2\omega$. That is, the time evolution operators for the exact time evolution, Eq.~(\ref{Ugeneric}), and for the effective time evolution, Eq.~(\ref{Ueff}), must be equal at these times (\ref{sets}), \begin{equation} U_{\beta_0}(t_0 + n t_c, t_i) = \mathcal{T}e^{-i\int_{t_i}^{t_0 + n t_c} \text{d} \tau \mathcal{H}_{\text{eff}}(\tau; \beta_0)} \stackrel{!}{=} \mathcal{T}e^{-i\int_{t_i}^{t_0 + n t_c} \text{d} \tau \mathcal{H}_{\text{rot}}(\tau)} \qquad \forall n\in\mathbb{N}. \label{goal0} \end{equation} Here we have $t_i = t_0 + m t_c$ for an integer $m$ [recall that $\beta_0 = 2\pi t_0/t_c$ as per Eq.~(\ref{beta0})]. \label{axiom2} \end{axiom} Note that Axioms \ref{axiom1} and \ref{axiom2} are not only satisfied by the effective Hamiltonian $\mathcal{H}_{\text{eff}}$ but, of course, also by the exact Hamiltonian $\mathcal{H}_{\text{rot}}$. In fact, these axioms are satisfied by an infinite number of Hamiltonians, so that more axioms are needed to define the effective Hamiltonian of the exact rotating wave approximation. It has been surmised in Ref.~\cite{zeuch18} that a third axiom may be enough to distinguish the effective Hamiltonian from all other Hamiltonians that fulfill these two axioms. Since the convergence of the series in Eq.~(\ref{HeffSeries}) is in general not guaranteed, we want the third axiom to be independent of this series definition. As motivated in the Introduction, the fact that the effective time evolution is in general significantly smoother than the exact time evolution implies that the effective trajectory traverses shorter paths for a given gauge parameter $\beta_0$.
Shorter paths translate to smaller (positive) eigenvalues when averaged over the qubit trajectory and gauge parameter. Hence our proposed third axiom can be formulated as follows, \begin{axiom} There exists a functional of the form \begin{eqnarray} Q[\mathcal{H}(t; \beta_0)] &=& \int_{\beta_0 = 0}^{2\pi} \int_{\tau} f(\mathcal{H}(\tau; \beta_0)) \text{d} \tau \text{d} \beta_0, \label{axiom3} \end{eqnarray} where the integral over time $\tau$ extends over the duration of the pulse, with a certain integrand $f(\mathcal{H}(t; \beta_0))$, such that $Q$ is minimized by the effective Hamiltonian $\mathcal{H}_{\text{eff}}$ given in Eq.~(\ref{HeffSeries}). \label{axiom3_axiom} \end{axiom} In the present study we scrutinize this third proposed axiom for two particular integrands introduced in the subsequent section. \subsection{Proposed Integrands for the Third Axiom} \label{integrands} The main integrand $f$ considered by us is the positive eigenvalue of the Hamiltonian, $f_\text{I}(\mathcal{H}(t; \beta_0)) = \text{eig}_+(\mathcal{H}(t; \beta_0))$. We also consider the positive eigenvalue of the derivative of the Hamiltonian, $f_\text{II}(\mathcal{H}(t; \beta_0)) = \text{eig}_+(\dot{\mathcal{H}}(t; \beta_0))$. In order to introduce integrands for the above functional (\ref{axiom3}), we begin by introducing a generic, traceless Hamiltonian $\mathcal{H}$ that is assumed to satisfy Axioms 1 and 2 given above, \begin{eqnarray} \mathcal{H}(t; \beta_0) = {\bf h}(t, \beta_0)\cdot \sigma = \lambda_+(t, \beta_0) \hat h(t, \beta_0) \cdot \sigma. \label{HGeneric} \end{eqnarray} Here we parameterize the Hamiltonian using a three-dimensional vector ${\bf h} = \lambda_+ \hat h$ with positive eigenvalue $\lambda_+ = \lambda_+(t, \beta_0)$ and unit vector $\hat h = \hat h(t, \beta_0)$. This Hamiltonian $\mathcal{H}$, given in Eq.~(\ref{HGeneric}), generates a stroboscopic time evolution as defined in Axiom \ref{axiom2}.
Based on such a Hamiltonian, the first integrand proposed by us, denoted $f_{\text{I}}$, is a measure for the size of the Hamiltonian given by its operator $2$-norm, or equivalently its positive eigenvalue, \begin{eqnarray} f_{\text{I}}(\mathcal{H}(t; \beta_0)) = \lVert \mathcal{H}(t; \beta_0) \rVert_2 = \text{eig}_+(\mathcal{H}(t; \beta_0)) \equiv \lambda_+(t, \beta_0). \label{fIII_def} \end{eqnarray} The second integrand is a measure for the size of the time derivative of the Hamiltonian given by its operator $2$-norm, which similarly is equal to the positive eigenvalue of $\dot{\mathcal{H}}(t; \beta_0) = (\partial/\partial t)\mathcal{H}(t; \beta_0)$, \begin{eqnarray} f_{\text{II}}(\mathcal{H}(t; \beta_0)) = \lVert \dot{\mathcal{H}}(t; \beta_0) \rVert_2 = \text{eig}_+(\dot{\mathcal{H}}(t; \beta_0)). \label{fII_def} \end{eqnarray} The integrands $f_\text{I}$ and $f_\text{II}$ are analyzed analytically in Secs.~\ref{minimizeEigenvalue} and \ref{directional}, respectively. \subsection{Variational Minimization} \label{minimization} In this work we test the validity of Axiom \ref{axiom3_axiom} by numerically minimizing the functional (\ref{axiom3}). To probe whether or not our effective Hamiltonian $\mathcal{H}_\text{eff}$ constitutes a local minimum of this functional, we compute the integral for variational Hamiltonians $\mathcal{H}_\text{var}$ in the vicinity of $\mathcal{H}_\text{eff}$, i.e. \begin{equation} \mathcal{H}_\text{var}(t; \beta_0) = \mathcal{H}_\text{eff}(t; \beta_0) + \delta \mathcal{H}(t; \beta_0) \label{vary_H} \end{equation} with $\lVert \delta \mathcal{H}\rVert \ll \lVert \mathcal{H}_\text{eff}\rVert$ for some operator norm $\lVert \cdot \rVert$. When choosing such variational Hamiltonians $\mathcal{H}_\text{var}$, we need to ensure that Axioms \ref{axiom1} and \ref{axiom2} are satisfied.
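The identity behind both integrands, namely that the positive eigenvalue of a traceless Hamiltonian ${\bf h}\cdot\sigma$ equals the Euclidean norm of the vector ${\bf h}$ (i.e., the operator $2$-norm), can be checked in a few lines. The following is a standalone illustrative sketch, not part of the paper's code:

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def eig_plus(h):
    """Positive eigenvalue of H = h . sigma for a real 3-vector h."""
    H = h[0] * sx + h[1] * sy + h[2] * sz
    return np.linalg.eigvalsh(H).max()

h = np.array([0.3, -1.2, 0.4])
# eig_+(h . sigma) coincides with the Euclidean norm |h| (the operator 2-norm)
print(eig_plus(h), np.linalg.norm(h))
```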
While analyticity (as required by Axiom \ref{axiom1}) is straightforward to build into this variational Hamiltonian, satisfying the requirement of stroboscopic time evolution stated in Axiom \ref{axiom2} is less trivial. This is because the two effective time evolution operators $U_{\beta_0}(t, t_i) = \mathcal{T}e^{-i\int_{t_i}^{t} \text{d} \tau \mathcal{H}_\text{eff}(\tau; \beta_0)}$ for some fixed initial time $t_i$ [cf.~Eq.~(\ref{Ueff})] and \begin{equation} U_{\beta_0}^{\text{var}}(t, t_i) = \mathcal{T}e^{-i\int_{t_i}^{t} \text{d} \tau \mathcal{H}_\text{var}(\tau; \beta_0)} \label{Uvar} \end{equation} with $\mathcal{H}_\text{var} = \mathcal{H}_\text{eff} + \delta \mathcal{H}$ as in Eq.~(\ref{vary_H}) cannot be related to one another straightforwardly for a time-dependent effective Hamiltonian. This fact makes it difficult to find Hamiltonians $\delta \mathcal{H}$ such that $\mathcal{H}_\text{var}$ results in the same stroboscopic time evolution (\ref{goal0}) as $\mathcal{H}_\text{eff}$. The most natural variation of the Hamiltonian as given in Eq.~(\ref{vary_H}) can be implemented by enforcing the condition of stroboscopic time evolution via Lagrange multipliers. For this, the functional (\ref{axiom3}) would be rewritten as \begin{eqnarray} Q[\mathcal{H}(t; \beta_0)] &=& \int_{\beta_0 = 0}^{2\pi} \int_{\tau} f(\mathcal{H}(\tau; \beta_0)) \text{d} \beta_0 \text{d} \tau \nonumber \\ && \quad + \int_{\tau} \int_{\beta_0 = 0}^{2\pi} \mu(\mathcal{H}(\tau; \beta_0)) h(\tau, \beta_0) \lVert U_{\beta_0}(\tau, t_i) - U_{\text{exact}}(\tau, t_i) \rVert \text{d} \beta_0 \text{d} \tau, \label{axiom3_lagrangified} \end{eqnarray} where again the $\tau$ integral extends over the entire duration of the pulse.
In the second integral, we have introduced a Lagrange multiplier $\mu(\mathcal{H}(t; \beta_0))$, and we use a function $h(t, \beta_0)$ that limits the two-dimensional integral to the stroboscopic points (\ref{sets}), $\{t_0, t_0 \pm t_c, t_0 \pm 2 t_c, \ldots \}$ with $t_0 = \beta_0/(2\omega)$ and $t_c = \pi/\omega$. For this let $h(t, \beta_0) = \sum_{n = -\infty}^\infty \delta(t - \beta_0/(2\omega) + n t_c)$; this ensures that for all $t$ and $\beta_0$ at which the effective and exact trajectories do \emph{not} need to coincide, the integrand of the second integral in Eq.~(\ref{axiom3_lagrangified}) is automatically zero. The exact time evolution operator $U_{\text{exact}}$ is given by Eq.~(\ref{Uexact}), so the factor $h(\tau, \beta_0)\lVert U_{\beta_0}(\tau, t_i) - U_{\text{exact}}(\tau, t_i) \rVert$ constitutes the constraint of equal stroboscopic time evolution as stated in Axiom \ref{axiom2}. Below, however, we build Axiom \ref{axiom2} directly into the time evolution. The basic idea is to vary the time evolution operator rather than the Hamiltonian. We begin by writing the time evolution operator as a parameterization of a rotation around the Bloch sphere [referring to the last equality in Eq.~(\ref{Ugeneric})], \begin{eqnarray} U_{\beta_0}(t, t_i) = e^{-i \overline{\mathcal{H}} t} = e^{-i {\bf n}(t, \beta_0) \cdot \sigma} = e^{-i \alpha(t, \beta_0) \hat n(t, \beta_0) \cdot \sigma}. \label{n_def} \end{eqnarray} That is, we parameterize the operator of the Magnus expansion as $\overline{\mathcal H}\, t = {\bf n} = \alpha \hat n$, where generally both $\alpha = \alpha(t, \beta_0)$ and $\hat n = \hat n(t, \beta_0)$ depend on both the time $t$ and the gauge parameter $\beta_0$. We note that the initial time $t_i$ is considered fixed, so we suppress the dependence on this initial time when writing the vector ${\bf n}(t, \beta_0)$.
The next step is to introduce the variation \begin{eqnarray} {\bf n}_\text{var}(t, \beta_0) = {\bf n}(t, \beta_0) + \delta {\bf n}(t, \beta_0), \label{vary_n} \end{eqnarray} which replaces the direct variation of the Hamiltonian as given in Eq.~(\ref{vary_H}). To ensure that the resulting variational time evolution operator (\ref{Uvar}), which at this point reads \begin{equation} U_{\beta_0}^{\text{var}}(t, t_i) = e^{-i {\bf n}_\text{var} \cdot \sigma}, \label{Uvar1} \end{equation} satisfies Axiom \ref{axiom2}, we choose functions $\delta {\bf n}(t, \beta_0)$ which are zero at the times $\{t_0, t_0 \pm t_c, t_0 \pm 2t_c, \ldots\}$ [cf.~Eq.~(\ref{sets})], i.e., \begin{equation} \delta {\bf n}(t = t_0 + n t_c; \beta_0) \equiv 0 \qquad \qquad \forall n \in \mathbb Z. \end{equation} Our explicit set of functions $\delta {\bf n}$ is given below in Sec.~\ref{trialFxns}. The integral of interest is then that given in Eq.~(\ref{axiom3}), in which we parametrize the Hamiltonian using the vector ${\bf n}$, $\mathcal{H}(t; \beta_0) = \mathcal{H}({\bf n}(t, \beta_0))$. Our goal is thus to minimize the functional \begin{eqnarray} Q[{\bf n}(t, \beta_0)] &=& \int_{\beta_0 = 0}^{2\pi} \int_{\tau} f( \mathcal{H}({\bf n}(\tau, \beta_0)) ) \text{d} \tau \text{d} \beta_0. \label{axiom3n} \end{eqnarray} In Sec.~\ref{GaussianEnvelope} we present the envelope function used for the minimization. In Sec.~\ref{symmetry} we then examine the symmetries of the various Hamiltonians; this symmetry consideration allows us to consider a restricted set of trial functions, which reduces the complexity of the numerical minimization.
\subsubsection{Gaussian Envelope} \label{GaussianEnvelope} \begin{figure} \caption{Actual rotation, similar to the rotations shown in Fig.~\ref{trajectories}.} \label{actual_rotation} \end{figure} Our minimization method is based on a Gaussian envelope function of the form \begin{eqnarray} H_1(t) &=& A e^{-\tfrac{t^2}{2\sigma^2}}, \qquad \qquad t\in [0, t_\text{gate}]. \label{H1Gaussian} \end{eqnarray} It is parameterized by an amplitude $A$ and a width $\sigma$. A convenient parameter range for $A$ and $\sigma$ is one for which the effective Hamiltonian series (\ref{HeffSeries}) converges quickly. This allows us to compute the effective time evolution in the exact rotating wave approximation to high precision while computing the effective Hamiltonian only up to moderate order in $1/\omega$. To see which parameters lead to fast convergence, we take from the effective Hamiltonians given in Appendix \ref{effH_appendix} that we require $A/\omega$ to be small, which corresponds to weak driving. This ensures that terms proportional to $H_1^{n+1}/\omega^{n}$ become negligible quickly. Furthermore, we want terms proportional to the $n$th derivative $\dersub{H_1}{n}$ to fall off quickly with increasing $n$, which means that $\sigma \omega \sim \sigma/t_c$ (recall that $t_c = \pi/\omega$) should be large. At the same time, we want the fraction $\sigma/t_c$ to not be too large, so that the Gaussian function falls off sufficiently quickly. This leads to a relatively short pulse, in the sense that it covers only a small number of Magnus intervals (\ref{MagnusInterval}) of duration $t_c$. This envelope function can be plotted for various parameters $A$ and $\sigma$ in the available Mathematica notebook \texttt{exactRWA/programs/numerics/mathematica/Gaussian\_envelope.nb}. Based on this, we have chosen the parameter values $A = 0.002$ and $\sigma = 2t_c = 4\pi$ with $\omega = 1/2$ (and thus $t_c = \pi/\omega = 2\pi$).
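The envelope and the two convergence diagnostics discussed above are easy to reproduce. The following standalone Python sketch (not part of the repository's code) uses the parameter values quoted in the text:

```python
import math

# Pulse parameters chosen in the text (all in natural units)
omega = 1 / 2            # rotating-frame frequency
t_c = math.pi / omega    # Magnus-interval duration, here 2*pi
A = 0.002                # Gaussian amplitude
sigma = 2 * t_c          # Gaussian width, here 4*pi

def H1(t):
    """Gaussian drive envelope H_1(t) = A exp(-t^2 / (2 sigma^2))."""
    return A * math.exp(-t**2 / (2 * sigma**2))

# Convergence diagnostics discussed in the text:
# weak driving requires A/omega << 1, and sigma/t_c should be
# moderately large so that derivatives of H_1 are suppressed.
print(A / omega)      # 0.004 -> weak driving
print(sigma / t_c)    # 2.0   -> pulse covers only a few Magnus intervals
print(H1(0))          # peak amplitude A
```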
An exemplary time evolution due to this envelope function is shown in Fig.~\ref{actual_rotation}. Based on these parameters, our effective Hamiltonian has been truncated only at order $1/\omega^5$, but given that $A/\omega$ is rather small, the only terms that are appreciable in our calculation are those proportional to $A^3$. This is an important feature for approximating the effective time evolution for the driven qubit via the Magnus expansion, which is discussed below. \subsubsection{Analytic approximation for effective time evolution} \label{Ueff_analytic} We approximate the effective time evolution for the Gaussian pulse using the Magnus expansion. For this we consider the first three terms given explicitly in Appendix \ref{magnus_appendix}. Furthermore, as noted in the previous section, we only consider terms up to order $A^3$, which we are allowed to do since the ratio $A/\omega$ is very small. Here we approximate the time evolution analytically for the RWA Hamiltonian (\ref{HRWA}), which constitutes the lowest-order term in the effective Hamiltonian series (\ref{HeffSeries}), \begin{eqnarray} \mathcal{H}_{\text{RWA}} = \frac{H_1(t)}4 \sigma_x \stackrel{(\ref{H1Gaussian})}{=} \frac{A e^{-\tfrac{t^2}{2\sigma^2}}}4 \sigma_x. \end{eqnarray} To lowest order in the Magnus expansion of the time evolution operator, which is equivalent to ignoring time ordering completely, we then have \begin{eqnarray} \overline{\mathcal{H}}^{(0)} &=& \frac1{t} \int_{\tau = \beta_0/(2\omega)}^{\beta_0/(2\omega) + t} \mathcal{H}_{\text{RWA}}(\tau; \beta_0) \text{d} \tau + \mathcal O(1/\omega) \nonumber \\ &=& \frac{A}{4t} \sqrt{\frac{\pi }{2}} \sigma \left(\text{erf}\left(\frac{t+\frac{\beta _0}{2\omega}}{\sqrt{2} \sigma }\right) - \text{erf}\left(\frac{\beta _0}{2 \sqrt{2} \sigma \omega}\right)\right) \sigma_x + \mathcal O(1/\omega).
\label{ME_zero_RWA} \end{eqnarray} The full formulae for the analytic Magnus expansion used in our numerical minimization up to order $A^3$ can be found in the Mathematica notebook \texttt{MagnusExpansion\_results.nb}.\footnote{ \texttt{https://github.com/zeuch/exactRWA/programs/numerics/}} \subsubsection{Symmetry} \label{symmetry} We use the symmetry of the envelope function to streamline the numerical minimization. The envelope function of our choice, i.e., the Gaussian envelope discussed in the previous Sec.~\ref{GaussianEnvelope}, is an even function with respect to time reversal, $t \rightarrow -t$. To see the implications for the symmetry of the Hamiltonian, first consider the \emph{generic} rotating-frame Hamiltonian (\ref{Hrot0}) for a symmetric envelope function $H_1(t)$ such as the Gaussian function described in the previous section, which fulfills $H_1(-t) = H_1(t)$. Given this even symmetry, it is easy to see that the $\sigma_x$- and $\sigma_z$-components of this Hamiltonian are symmetric with respect to time reversal, while the $\sigma_y$-component is anti-symmetric. When comparing this symmetry property of the exact Hamiltonian to that of the effective Hamiltonian $\mathcal{H}_{\text{eff}}(t; \beta_0)$, we need to take into account that the latter depends not only on the time $t$ but also on the gauge parameter $\beta_0 = 2\omega t_0$ [cf.~Eq.~(\ref{beta0})]. Here $t_0$ defines the stroboscopic set of times (\ref{sets}), given by $\{t_0, t_0 \pm t_c, t_0 \pm 2t_c, \ldots\}$, at which the exact and effective time evolutions agree. Note that time reversal maps the set (\ref{sets}) to $\{-t_0, -t_0 \pm t_c, -t_0 \pm 2t_c, \ldots\}$. Given the proportionality relation between $t_0$ and $\beta_0$, the full symmetry operation for time reversal in the case of the effective Hamiltonian $\mathcal{H}_\text{eff}(t; \beta_0)$ is thus \begin{eqnarray} S: \qquad (t, \beta_0) \ \rightarrow \ (-t, - \beta_0).
\label{symmetryOp} \end{eqnarray} We find that, as above for the exact Hamiltonian, under this symmetry operation $S$ the $\sigma_x$- and $\sigma_z$-components of $\mathcal{H}_\text{eff}$ are symmetric and the $\sigma_y$-component is anti-symmetric. The exact and effective Hamiltonians thus have the same symmetry with respect to the operation $S$. The Hamiltonian that minimizes the integral in Eq.~(\ref{axiom3}) need not have the same symmetry properties as the exact and effective Hamiltonians. However, we restrict our search to Hamiltonians with the symmetry described in the previous paragraph, since this allows us to reduce the size of the search space of variational Hamiltonians. Let us now determine the $S$-symmetry properties of the variational parameters $\delta {\bf n}(t; \beta_0)$, introduced above in Eq.~(\ref{vary_n}), under the assumption that the variational Hamiltonian fulfills the same symmetry as the exact and effective Hamiltonians. To do this, consider a generic time evolution operator of the form (\ref{Ugeneric}) for $t_i = \beta_0$ and $t_f = \beta_0 + t$, or $U(t_f, t_i) = \mathcal{T}\exp(-i\int_{\beta_0}^{\beta_0 + t} \text{d} \tau \mathcal H(\tau)) = e^{-i \overline{\mathcal H} t}$. For simplicity, we focus on the lowest-order term of the Magnus expansion (\ref{MagnusSeries}) given in Appendix \ref{magnus_appendix}, $\overline{\mathcal H} \cong \overline{\mathcal H}^{(0)}$ [see also Eq.~(\ref{ME_zero_RWA})].
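The zeroth-order Magnus term (\ref{ME_zero_RWA}) quoted above can be sanity-checked numerically. The following standalone Python sketch (not part of the paper's code; the parameter values $\beta_0$ and $t$ are illustrative) compares the error-function expression for the Gaussian envelope against direct numerical integration of $H_1(\tau)/4$:

```python
import math

# Illustrative parameters (the text uses A = 0.002, omega = 1/2, sigma = 4*pi)
A, omega, sigma = 0.002, 0.5, 4 * math.pi
beta0, t = 1.3, 5.0
t0 = beta0 / (2 * omega)  # lower integration limit

def H1(tau):
    return A * math.exp(-tau**2 / (2 * sigma**2))

# Closed form: (A/(4t)) sqrt(pi/2) sigma [erf((t + t0)/(sqrt(2) sigma)) - erf(t0/(sqrt(2) sigma))]
closed = (A / (4 * t)) * math.sqrt(math.pi / 2) * sigma * (
    math.erf((t + t0) / (math.sqrt(2) * sigma))
    - math.erf(t0 / (math.sqrt(2) * sigma))
)

# Direct numerical integration of (1/t) * int_{t0}^{t0+t} H1(tau)/4 dtau (trapezoid rule)
N = 200000
h = t / N
numeric = sum(H1(t0 + k * h) / 4 for k in range(N + 1)) - (H1(t0) + H1(t0 + t)) / 8
numeric *= h / t

print(closed, numeric)
```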
We first note the following identity for the expression $\int_{\beta_0}^{\beta_0 + t} a(\tau, \beta_0) \text{d}\tau$ with an arbitrary function $a(t, \beta_0)$ under the above symmetry operation $S$, \begin{eqnarray} \int_{\beta_0}^{\beta_0 + t} a(\tau, \beta_0) \text{d}\tau &\stackrel{S}{\rightarrow}& \int_{-\beta_0}^{-\beta_0 - t} a(\tau, -\beta_0) \text{d}\tau \nonumber \\ &\stackrel{\tau\rightarrow -\tau}=& \int_{\beta_0}^{\beta_0 + t} a(-\tau, -\beta_0) \text{d}(-\tau) \nonumber \\ &=& - \int_{\beta_0}^{\beta_0 + t} a(-\tau, -\beta_0) \text{d}\tau. \label{symmetryIntegral0} \end{eqnarray} We now use this identity to find the $S$-symmetry of the $\bf n$ vector defined in Eq.~(\ref{n_def}) for the lowest-order Magnus expansion, ${\bf n}(t, \beta_0)\cdot \sigma = \overline {\mathcal{H}}\, t \cong \overline {\mathcal{H}}^{(0)} t$, \begin{eqnarray} {\bf n}(t, \beta_0)\cdot \sigma = \int_{\beta_0}^{\beta_0 + t} \mathcal{H}(\tau) \text{d} \tau &\stackrel{S,\ (\ref{symmetryIntegral0})}{\longrightarrow}& - \int_{\beta_0}^{\beta_0 + t} \mathcal{H}(-\tau) \text{d} \tau. \label{symmetryIntegral} \end{eqnarray} We therefore consider the symmetry of the exact Hamiltonian (\ref{Hrot}) under time reversal, \begin{eqnarray} \mathcal H_{\text{rot}}(-t) &=& \frac{H_1(-t)}4 \left( \sigma_x + \cos(-2\omega t) \sigma_x - \sin(-2\omega t)\sigma_y \right) \nonumber \\ &=& \frac{H_1(t)}4 \left( \sigma_x + \cos(2\omega t) \sigma_x - \sin(2\omega t)(-\sigma_y)\right), \label{symmetryH} \end{eqnarray} where we have used the fact that $H_1(t)$ is assumed to be symmetric. Combining this result with Eq.~(\ref{symmetryIntegral}) we find that $n_x$ is odd and $n_y$ is even under the symmetry operation $S$ defined in Eq.~(\ref{symmetryOp}). Using the more complete exact Hamiltonian (\ref{Hrot0}), one can similarly find that $n_z$ is odd under this symmetry.
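The parities of $n_x$ and $n_y$ under $S$ can be verified numerically for the lowest-order Magnus term. The standalone sketch below (illustrative, not the paper's code) integrates the $\sigma_x$- and $\sigma_y$-components of the rotating-frame Hamiltonian for a Gaussian envelope, then evaluates the $S$-transformed integral of Eq.~(\ref{symmetryIntegral0}):

```python
import math

A, omega, sigma = 0.002, 0.5, 4 * math.pi

def H1(t):
    return A * math.exp(-t**2 / (2 * sigma**2))

# sigma_x and sigma_y components of the rotating-frame Hamiltonian (even/odd in t)
def hx(t):
    return H1(t) / 4 * (1 + math.cos(2 * omega * t))

def hy(t):
    return -H1(t) / 4 * math.sin(2 * omega * t)

def integrate(f, a, b, n=20000):
    """Simple trapezoid rule on [a, b] (works for b < a as well)."""
    h = (b - a) / n
    s = (f(a) + f(b)) / 2 + sum(f(a + k * h) for k in range(1, n))
    return s * h

beta0, t = 0.7, 3.0
# n components before and after the symmetry operation S: (t, beta0) -> (-t, -beta0)
nx, nx_S = integrate(hx, beta0, beta0 + t), integrate(hx, -beta0, -beta0 - t)
ny, ny_S = integrate(hy, beta0, beta0 + t), integrate(hy, -beta0, -beta0 - t)
print(nx_S / nx, ny_S / ny)  # expect approximately -1 (odd) and +1 (even)
```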
Recall that the effective and exact Hamiltonians have the same $S$-symmetry properties, because of which it suffices to consider one of the two Hamiltonians. \subsubsection{Trial Functions for Time Evolution Operator} \label{trialFxns} In our numerical variation we follow the convention \begin{eqnarray} \omega = 1/2, \end{eqnarray} for which we have $t_c = \pi/\omega = 2\pi$. Hence the Magnus intervals (\ref{MagnusInterval}) have a duration of $2\pi$, \begin{eqnarray} [t_0 + n t_c, t_0 + (n+1) t_c) = [t_0 + 2\pi n, t_0 + 2\pi (n+1)). \label{MagnusInterval_convention} \end{eqnarray} The central condition to be satisfied by the variational functions ${\bf n_{\text{var}}} = {\bf n} + \delta {\bf n}$ [cf.~Eq.~(\ref{vary_n})] is that of stroboscopic time evolution as defined in Axiom \ref{axiom2}. When varying this vector, this simply means that at the times (\ref{sets}) the variational vector ${\bf n_{\text{var}}}(t; \beta_0)$ must coincide with the effective vector ${\bf n}(t; \beta_0)$, or \begin{eqnarray} \delta {\bf n}(t = t_0 + n t_c; \beta_0) = 0 \qquad\qquad \forall n \in \mathbb{Z}. \label{zero} \end{eqnarray} We write the variational functions as a product \begin{equation} \delta {\bf n}(t; \beta_0) = \left[\sin^2[(t-\beta_0)/2+\eta\phi] - \sin^2(\eta\phi)\right] e^{- (1+c) t^2/(2 \sigma^2)} {\bf g}(t; \beta_0) = f_{\text{outer}}(t; \beta_0) {\bf g}(t; \beta_0) \label{trial_fxns} \end{equation} of a vector ${\bf g}(t; \beta_0)$ specified below, and an ``outer factor'' $f_{\text{outer}}$, whose oscillatory part $\sin^2[( t-\beta_0)/2 + \eta\phi] - \sin^2(\eta\phi)$ ensures that the condition (\ref{zero}) is fulfilled for arbitrary analytic ${\bf g}(t; \beta_0)$. Furthermore, for a fixed value of $\beta_0$ this outer factor is generically nonzero throughout each Magnus interval (\ref{MagnusInterval_convention}), and the symmetry of this factor close to the boundary of the Magnus intervals can be varied by the variational parameter $\phi\in[0, 2\pi]$.
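That the outer factor vanishes at exactly the stroboscopic times (\ref{sets}) is easy to confirm numerically. Here is a minimal standalone sketch (illustrative values for $\phi$ and $c$; a small $\eta$ is used for readability, whereas the text uses $\eta \sim 10^6$):

```python
import math

omega = 0.5
t_c = math.pi / omega          # 2*pi
eta, phi, c, sigma = 10.0, 1.234, 0.1, 4 * math.pi

def f_outer(t, beta0):
    """Outer factor of the trial functions: oscillatory part times Gaussian."""
    osc = math.sin((t - beta0) / 2 + eta * phi) ** 2 - math.sin(eta * phi) ** 2
    return osc * math.exp(-(1 + c) * t**2 / (2 * sigma**2))

beta0 = 0.9
t0 = beta0 / (2 * omega)       # with omega = 1/2 this equals beta0
# f_outer vanishes at every stroboscopic time t0 + n*t_c ...
strobe = [f_outer(t0 + n * t_c, beta0) for n in range(-2, 3)]
# ... but is generically nonzero in between
mid = f_outer(t0 + 0.5 * t_c, beta0)
print(strobe, mid)
```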
We note that we have also introduced a further variational parameter $c\in\mathbb{R}$. The factor $\eta \sim 10^6$, which is kept fixed during a numerical minimization, compensates for the small incremental changes of the variational parameters relative to the range $\phi \in [0, 2\pi]$. In principle, the ``starting vector'' $\textbf{n}_0(t; \beta_0)$ is either that of the effective or the exact Hamiltonian, \begin{eqnarray} \textbf{n}_0(t; \beta_0) = \begin{cases} \textbf{n}_{\text{eff}}(t; \beta_0) \qquad \qquad \text{or}\\ \textbf{n}_{\text{exact}}(t; \beta_0). \end{cases} \end{eqnarray} In our numerical experiments we choose the effective vector $\textbf{n}_{\text{eff}}$, since the effective Hamiltonian is our candidate for the minimum. The ansatz for the function ${\bf g}(t; \beta_0) = (g_x, g_y, g_z)$ can suitably be written as a Fourier series. First, we use the periodicity in $\beta_0$ to write \begin{eqnarray} {\bf g}(t; \beta_0) = \sum_{m=0}^M {\bf A}_m(t) \cos(m \beta_0) + {\bf B}_m(t) \sin(m \beta_0), \label{ansatz1} \end{eqnarray} where ${\bf A}_m = (A_{x,m}, A_{y,m}, A_{z,m})$ and ${\bf B}_m = (B_{x,m}, B_{y,m}, B_{z,m})$. Of course, Eq.~(\ref{ansatz1}) is periodic in $\beta_0$ with period $2\pi$. For the temporal dependence we need to make sure that there is \emph{no} explicit periodicity of duration $2\pi$. For each component $i\in\{x, y, z\}$, we thus write \begin{eqnarray} A_{i, m}(t) &=& \sum_{n=0}^{N} a_{i, m, n} \cos(nt/L) + a'_{i, m, n} \sin((n+1)t/L), \label{FourierTA} \\ B_{i, m}(t) &=& \sum_{n=0}^{N} b_{i, m, n} \cos(nt/L) + b'_{i, m, n} \sin((n+1)t/L). \label{FourierTB} \end{eqnarray} The individual terms of these Fourier series have a period of $T_n = 2\pi/(n/L) = 2\pi L/n$, which should be different from the duration of a Magnus interval (otherwise different Magnus intervals could not be treated differently).
The value for $L$ that we found useful is $L = 5\sigma$ (where $\sigma$ is the width of the Gaussian envelope). The total number of parameters $\{a_{i, m, n}, a'_{i, m, n}, b_{i, m, n}, b'_{i, m, n}\}$ is then equal to $4\times3\times M \times N$ (the factor of $3$ accounting for $i=x, y, z$). \subsection{Kick Operators} \label{kicks} The kick operator formalism in relation to the exact rotating wave approximation was introduced in Sec.~3 of Ref.~\cite{zeuch18}. These operators are needed for drive envelope functions $H_1(t)$ that are not entirely smooth, in particular for envelopes for which some derivative behaves like a $\delta$-function. In this work we ignore the corrections due to kick operators, since they are so small that they do not affect our numerical results. \subsection{Coding} \label{coding} The github page \texttt{https://github.com/zeuch/exactRWA.git} mentioned in the Introduction contains various Mathematica notebooks and Python scripts that have been used for the numerical minimization. For example, as noted in Sec.~\ref{Ueff_analytic}, the explicit results of the Magnus expansion up to fifth order in $1/\omega$ for the case of a Gaussian envelope function can be found in \texttt{exactRWA/programs/numerics/MagnusExpansion\_results.nb}. The Python scripts used for our minimization can be found in \texttt{exactRWA/programs/numerics/python}. \section{Positive Eigenvalue of the Hamiltonian} \label{minimizeEigenvalue} Here we analyse the integrand $f_\text{I}(\mathcal{H}) = \text{eig}_+(\mathcal{H})$, i.e., the Hamiltonian's positive eigenvalue. As noted in the Introduction, the Hamiltonian's eigenvalue is related to the length of the traversed trajectory on the Bloch sphere. When comparing the lengths of the exact and effective trajectories (cf.~Fig.~\ref{trajectories}), the latter are significantly shorter---suggesting that the integral of the Hamiltonian's positive eigenvalue over the total pulse duration may be minimized by the effective Hamiltonian.
As described in Sec.~\ref{minimization}, we introduce the vector ${\bf n} = \alpha \hat n$ for the parameterization of the time evolution operator, $U = e^{-i {\bf n} \cdot \sigma}$ [as given in Eq.~(\ref{n_def})]. We use the Magnus expansion to compute this vector ${\bf n}(t, \beta_0) = \alpha(t, \beta_0) \hat n(t, \beta_0)$, where $\alpha(t, \beta_0)$ is a scalar and $\hat n(t, \beta_0)$ is a three-dimensional unit vector. In order to carry out the variational minimization of the functional $Q[{\bf n}]$ as given in Eq.~(\ref{axiom3n}) for the integrand $f_\text{I}$, we need to relate this integrand to the vector ${\bf n}$. To do this, we first write down the Schroedinger equation for the time evolution operator, \begin{eqnarray} \frac{\partial U(t, t_i)}{\partial t} = -i \mathcal{H}(t; \beta_0) U(t, t_i), \label{schroedingerU} \end{eqnarray} which allows us to rewrite the positive eigenvalue of the Hamiltonian as follows, \begin{eqnarray} f_\text{I} &=& \lambda_+(t, \beta_0) = \text{eig} _+\left(\mathcal{H}\right) = \text{eig} _+\left[\left(i \frac{\partial U}{\partial t}\right) U^{\dagger}\right]. \label{f1_of_U} \end{eqnarray} In principle, one could thus use this equation to compute the integrand $f_\text{I}$ for a given ${\bf n}$ by first determining the time evolution operator. However, since we directly vary this ${\bf n}$ vector in our numerical minimization, it is advantageous to have a direct algebraic relation between $f_\text{I}$ and the ${\bf n}$ vector, which we derive below. We note that one needs to take some care when computing the temporal derivative $\frac{\partial U}{\partial t}$, which is required to evaluate Eq.~(\ref{f1_of_U}), if the time evolution operator is parameterized as $U = \exp(f(t))$ with an operator $f(t)=-i{\bf n}\cdot \sigma$ [as opposed to the usual representation using the time ordering operator].
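This subtlety can be made concrete with two-by-two matrices: when $[f(t), \dot f(t)] \neq 0$, the naive chain rule $\partial_t e^{f} = \dot f\, e^{f}$ fails. The following standalone sketch (not from the paper's code; the choice of $f(t)$ is illustrative) compares a finite-difference derivative against the naive chain rule, using the SU(2) identity $e^{-i {\bf v}\cdot\sigma} = \cos|{\bf v}| - i \sin|{\bf v}|\, \hat v \cdot \sigma$ for the matrix exponential:

```python
import numpy as np

# Pauli matrices and identity
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def expi(v):
    """exp(-i v . sigma) via cos|v| - i sin|v| (v_hat . sigma)."""
    a = np.linalg.norm(v)
    vh = v / a
    return np.cos(a) * I2 - 1j * np.sin(a) * (vh[0] * sx + vh[1] * sy + vh[2] * sz)

# U(t) = exp(f(t)) with f(t) = -i (t, 0, t^2) . sigma, so fdot = -i (sx + 2t sz)
# and [f, fdot] != 0 since the rotation axis changes with time.
def U(t):
    return expi(np.array([t, 0.0, t**2]))

t, eps = 0.7, 1e-6
dU = (U(t + eps) - U(t - eps)) / (2 * eps)     # finite-difference derivative
naive = -1j * (sx + 2 * t * sz) @ U(t)         # naive chain rule fdot * exp(f)
print(np.linalg.norm(dU - naive))              # clearly nonzero
```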
This calculation is nontrivial if the operator $f(t)$ does not commute with its derivative, i.e., if $[f(t), \dot f(t)] \neq 0$; one way to see this is by noting that \begin{eqnarray} \partial_t \exp(f) &=& \partial_t (1 + f + \frac{f^2}{2!} + \frac{f^3}{3!} + \ldots) = \dot f + \frac{\dot f f + f \dot f}{2!} + \frac{\dot f f^2 + f \dot f f + f^2 \dot f}{3!} + \ldots \label{nontrivial} \end{eqnarray} cannot be regrouped straightforwardly into the form $\dot f \exp(f)$. The calculation of this derivative is carried out starting from a relatively generic function $f(t)$ in Appendix \ref{Udot}. As discussed below in Sec.~\ref{t-dependent}, in the case of an SU(2) operator $f(t)$, this derivative can be obtained rather straightforwardly. In the remainder of this section we determine the integrand $f_\text{I}$ as a function of the ${\bf n}$ vector by evaluating Eq.~(\ref{f1_of_U}). This is done for the two different cases of a constant drive amplitude $H_1(t) = H_1$ [Sec.~\ref{t-independent}] and a time-dependent $H_1(t)$ [Sec.~\ref{t-dependent}]. In the former case, Eq.~(\ref{f1_of_U}) can be simplified straightforwardly, and even the functional (\ref{axiom3n}) can be evaluated completely analytically. In the latter case, the result for $f_\text{I}$ is significantly less trivial, which is why we minimize Eq.~(\ref{axiom3n}) numerically. \subsection{Constant Drive} \label{t-independent} Consider the case of a constant envelope function. As explained in the Introduction [cf.~the discussion leading to Eq.~(\ref{Ueff_const})], the effective Hamiltonian for this case is itself a constant, or \begin{equation} \qquad \qquad \qquad \qquad \mathcal{H}_\text{eff}(t; \beta_0) = \mathcal{H}_\text{eff}(\beta_0), \qquad \qquad (H_1(t) \equiv H_1).
\end{equation} The simplified time evolution operator (\ref{Ueff_const}) is given by \begin{eqnarray} U_{\beta_0}(t, t_i) = e^{-i \mathcal{H}_{\text{eff}}(\beta_0) (t-t_i)}, \label{Ueff_constant} \end{eqnarray} so that for the parametrization (\ref{n_def}), $U = e^{-i {\bf n} \cdot \sigma}$, we find that the vector ${\bf n} = \alpha {\hat n}$ factors into a time-independent unit vector ${\hat n}(t, \beta_0) = {\hat n}(\beta_0)$ and a factor \begin{equation} \alpha(t, \beta_0) = c(\beta_0) (t-t_i), \label{alpha_constant} \end{equation} which depends linearly on time. Let us now analyse the $\beta_0$-dependence of $\alpha(t, \beta_0)$, which is contained in the factor $c(\beta_0)$. Given that the Hamiltonian is a sum of Pauli matrices [recall that, as noted in the Introduction (cf.~Footnote \ref{foot:traceless}), we consider traceless Hamiltonians], we note for a Hermitian matrix $\mathcal{H}_{\text{eff}} = {\bf n}\cdot \sigma = n_1\sigma_x + n_2\sigma_y + n_3\sigma_z$ with real parameters $n_1$, $n_2$ and $n_3$, \begin{eqnarray} \text{eig}_+(\mathcal{H}_{\text{eff}}) = \text{eig}_+(n_1\sigma_x + n_2\sigma_y + n_3\sigma_z) = \sqrt{n_1^2+n_2^2+n_3^2} = \lVert \mathcal{H}_{\text{eff}} \rVert_2, \label{eigenvalues} \end{eqnarray} with the 2-norm $\lVert \cdot \rVert_2$. Equation (\ref{alpha_constant}) thus becomes \begin{eqnarray} \alpha(t, \beta_0) = \lVert \mathcal{H}_{\text{eff}}(\beta_0) \rVert_2 (t-t_i). \label{alpha_constant_prime} \end{eqnarray} It follows from the periodicity of the exact Hamiltonian for a constant envelope that all effective $\beta_0$-trajectories are traversed with one and the same angular rotation velocity. Because of this, the norm of the Hamiltonian cannot depend on $\beta_0$, and we have \begin{eqnarray} c(\beta_0) = \lVert \mathcal{H}_{\text{eff}}(\beta_0) \rVert_2 \equiv \tilde c = \text{constant} > 0.
\label{c} \end{eqnarray} Now consider the functional of interest (\ref{axiom3n}), $Q = \int_{\beta_0} \int_{\tau} f_\text{I} \text{d} \tau \text{d} \beta_0$. For this we need to compute the integrand (\ref{f1_of_U}). To do this, we note that the derivative of the time evolution operator $U$ as parameterized in Eq.~(\ref{Ueff_constant}) is simple to compute. This is because the difficulty described above in Eq.~(\ref{nontrivial}) does not occur, since the Hamiltonian is time-independent. We have\footnote{Since the time evolution operator is, in this case, defined without the time ordering operator, this result also follows directly from the Schroedinger equation (\ref{schroedingerU}).} \begin{eqnarray} \partial_t U_{\beta_0}(t, t_i) = -i\mathcal{H}_{\text{eff}}(\beta_0)U_{\beta_0}(t, t_i). \label{U_dot} \end{eqnarray} This implies for the integrand $f_\text{I}$ taken for the effective time evolution, \begin{eqnarray} f_\text{I}( \mathcal{H}_{\text{eff}}({\bf n}(\tau, \beta_0)) ) &=& \text{eig}_+\left[\left(i \frac{\partial U_{\beta_0}}{\partial t}\right) U_{\beta_0}^{\dagger}\right] \nonumber \\ &\stackrel{(\ref{U_dot})}{=}& \text{eig}_+\left[ \mathcal{H}_{\text{eff}}U_{\beta_0} U_{\beta_0}^{\dagger}\right] \nonumber \\ &\stackrel{(\ref{eigenvalues})}=& \lVert \mathcal{H}_{\text{eff}} \rVert_2 \nonumber \\ &\stackrel{(\ref{c})}=& |\tilde c| \label{f_I-useful_now} \\ &\stackrel{(\ref{alpha_constant_prime})}=& |\dot \alpha(t, \beta_0)|. \label{f_I-useful_later} \end{eqnarray} The functional $Q$ can thus be evaluated via \begin{eqnarray} Q[{\bf n}(t, \beta_0)] &\stackrel{(\ref{f_I-useful_now})}=& \int_{\beta_0 = 0}^{2\pi} \int_{\tau=t_i}^{t_i+n t_c} \tilde c \, \text{d} \tau \text{d} \beta_0 \nonumber \\ &=& n t_c \int_{\beta_0 = 0}^{2\pi} \tilde c \, \text{d} \beta_0 = 2\pi n t_c \tilde c.
\label{axiom3constant} \end{eqnarray} This result means that there is (probably) a degeneracy in the functional $Q$ for a constant drive envelope, with our effective Hamiltonian among the degenerate Hamiltonians. Computing the same integral $Q$ numerically for the exact time evolution results in a larger value, $Q_\text{exact} > Q_\text{eff}$, which shows that this degeneracy does not include the exact Hamiltonian. \subsection{Generic (Analytic) Drive} \label{t-dependent} \subsubsection{Derivative of Time Evolution Operator} As noted above, in Appendix \ref{Udot} we present a technique for computing the time derivative of a time evolution operator of the form $e^{-i f(t)}$, where $f(t)$ is an operator that does not commute with its derivative. Thanks to a hint given to us by Alwin van Steensel, we can use a simpler way of computing the derivative of the time evolution operator (\ref{n_def}), $U_{\beta_0}(t, t_i) = e^{-i {\bf n}(t, \beta_0) \cdot \sigma}$. To do this, we again first write ${\bf n} = \alpha \hat n$, so that \begin{eqnarray} \partial_t U(t) &=& \partial_t [\cos(\alpha) - i \hat n \cdot \sigma \sin\alpha] \nonumber\\ &=& -\dot \alpha \sin\alpha - i \dot {\hat n} \cdot \sigma \sin\alpha - i \dot \alpha\, \hat n \cdot \sigma \cos\alpha \nonumber\\ &=& - i \dot {\hat n} \cdot \sigma \sin\alpha - i \dot \alpha\, \hat n \cdot \sigma [ \cos\alpha - i \hat n \cdot \sigma \sin\alpha ] \nonumber\\ &=& - i( \dot {\hat n} \cdot \sigma \sin\alpha + \dot \alpha \ \hat n \cdot \sigma \underbrace{e^{-i\alpha \hat n \cdot \sigma }}_{=U(t)}).
\end{eqnarray} Using this, our term in the integrand (\ref{f1_of_U}) can be calculated as follows, \begin{eqnarray} i(\partial_t U) U^{\dagger} &=& \dot {\hat n} \cdot \sigma \sin\alpha \underbrace{[\cos\alpha + i \hat n \cdot \sigma \sin\alpha]}_{ = e^{i\alpha \hat n\cdot \sigma} = U^\dagger} + \dot \alpha \ \hat n \cdot \sigma UU^{\dagger} \nonumber\\ &\stackrel{(\ref{sigmaCross})}{=}& \dot {\hat n} \cdot \sigma \sin\alpha \cos\alpha + |\dot {\hat n}|(\hat n \times \frac{\dot {\hat n}}{|\dot {\hat n}|}) \cdot \sigma \sin^2\alpha + \dot \alpha\, \hat n \cdot \sigma \nonumber\\ &=& (1/2) \sin (2\alpha) \dot{\hat n}\cdot \sigma + (1/2) [1-\cos (2\alpha)] (\hat n \times \dot {\hat n}) \cdot \sigma + \dot \alpha\, \hat n \cdot \sigma \label{AlwinUseful}\\ &=& (1/2)|\dot {\hat n}| \sin (2\alpha) \frac{\dot {\hat n}}{|\dot {\hat n}|}\cdot \sigma + (1/2) |\dot {\hat n}| [1-\cos (2\alpha)] (\hat n \times \frac{\dot {\hat n}}{|\dot {\hat n}|}) \cdot \sigma + \dot \alpha\, \hat n \cdot \sigma , \label{revisited} \end{eqnarray} where we used \begin{eqnarray} (a\cdot \sigma)(b\cdot \sigma) &=& a\cdot b + i (a\times b) \cdot \sigma, \label{sigmaCross} \\ \sin(x)\cos(x) &=& \frac12\sin(2x), \\ \sin^2(x) &=& \frac12 (1-\cos(2x)). \end{eqnarray} We have also used the following simplifying step in computing the eigenvalue, \begin{eqnarray} \sin^2(\alpha) + (\cos(\alpha) - 1)^2 = \sin^2(\alpha) + \cos^2(\alpha) -2 \cos(\alpha) + 1 = 2(1-\cos\alpha). \end{eqnarray} The integrand $f_{\text{I}}$ can be computed directly from Eq.~(\ref{revisited}).
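The decomposition of $i(\partial_t U)U^{\dagger}$ into the components $\frac12\sin(2\alpha)\,\dot{\hat n} + \frac12[1-\cos(2\alpha)]\,(\hat n \times \dot{\hat n}) + \dot\alpha\,\hat n$ can be verified numerically. The following standalone sketch (not the paper's code; the choice $\alpha(t) = 0.3\,t^2$ with an axis rotating in the $x$-$y$ plane is purely illustrative) compares a finite-difference derivative of $U$ against this closed form:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [sx, sy, sz]

def dot_sigma(v):
    """v . sigma for a real 3-vector v."""
    return sum(v[k] * paulis[k] for k in range(3))

# Illustrative choice: alpha(t) = 0.3 t^2, n_hat(t) rotating in the x-y plane
def alpha(t):
    return 0.3 * t**2

def nhat(t):
    return np.array([np.cos(t), np.sin(t), 0.0])

def U(t):
    a, n = alpha(t), nhat(t)
    return np.cos(a) * np.eye(2) - 1j * np.sin(a) * dot_sigma(n)

t, eps = 0.8, 1e-6
dU = (U(t + eps) - U(t - eps)) / (2 * eps)
H = 1j * dU @ U(t).conj().T               # i (dU/dt) U^dagger

# Closed-form vector: (1/2) sin(2a) ndot + (1/2)(1 - cos(2a)) n x ndot + adot n
a, n = alpha(t), nhat(t)
ndot = np.array([-np.sin(t), np.cos(t), 0.0])   # d/dt of nhat (a unit vector here)
adot = 0.6 * t
vec = 0.5 * np.sin(2 * a) * ndot + 0.5 * (1 - np.cos(2 * a)) * np.cross(n, ndot) + adot * n
print(np.linalg.norm(H - dot_sigma(vec)))        # approximately zero
```

The positive eigenvalue of the resulting matrix then reproduces the closed form $\sqrt{\dot\alpha^2 + \frac12 |\dot{\hat n}|^2 [1-\cos(2\alpha)]}$ of Eq.~(\ref{Qf_I-Final}).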
Denoting $\hat d_1 = \hat n$, $\hat d_2 = \hat{\dot{\hat n}}$ and $\hat d_3 = \hat n \times \hat{\dot{\hat n}}$, we can write \begin{eqnarray} \mathcal{H} &=& d_1 \hat d_1\cdot \sigma + d_2 \hat d_2\cdot \sigma + d_3 (\hat d_1 \times \hat d_2)\cdot\sigma \\ \Rightarrow f_{\text{I}} &=& \text{eig}_+ (\mathcal{H}) = \sqrt{d_1^2 + d_2^2 + d_3^2}. \label{posEigV} \end{eqnarray} Using this, we find the result \begin{eqnarray} f_\text{I}( \mathcal{H}_{\text{eff}}({\bf n}) ) &=& \text{eig}_+\left[\left(i \frac{\partial U_{\beta_0}}{\partial t}\right) U_{\beta_0}^{\dagger}\right] \stackrel{(\ref{revisited})}{=} \sqrt{\dot\alpha^2 + (1/2)|\dot{\hat n}|^2 [1 - \cos (2\alpha)]}. \label{Qf_I-Final} \end{eqnarray} Note that in the case of a constant drive envelope the vector $\hat n$ (the rotation axis of the effective evolution) is constant. Since this implies $\dot{\hat n} = 0$, the result (\ref{Qf_I-Final}) then reduces to the integrand for constant drive amplitudes computed above in Eq.~(\ref{f_I-useful_later}). \section{Minimize the Variation of the Hamiltonian's Directional Vector} \label{directional} Here we follow the idea that the effective Hamiltonian has no fast-oscillating terms (no terms that oscillate with frequency $\omega$). This is, of course, a striking difference between the exact Hamiltonian [see, e.g., Eq.~(\ref{Hrot})] and the effective Hamiltonian [see, e.g., Eq.~(\ref{Heff_main})]. In symbols, we want to minimize the variation of the vector $\hat h$, which is a measure for the noncommutativity of the Hamiltonian (\ref{HGeneric}). As usual, the minimization (\ref{axiom3n}) is over both time $t$ and gauge parameter $\beta_0$ on the domain of an entire pulse.
When writing the Hamiltonian as in Eq.~(\ref{HGeneric}), \begin{eqnarray} \mathcal{H}(t; \beta_0) = {\bf h}(t,\beta_0) \cdot \sigma, \qquad \text{ where }{\bf h} = (h_x, h_y, h_z)^T, \label{HVia_h} \end{eqnarray} we define the integrand $f_{\text{II}}$ as \begin{eqnarray} f_{\text{II}}(t, \beta_0) = |\partial_t h| = \sqrt{(\partial_t h_x)^2 + (\partial_t h_y)^2 + (\partial_t h_z)^2}. \label{II--fxnOfH} \end{eqnarray} This can also be viewed as the matrix norm (more precisely, the 2-norm) of the time derivative of the Hamiltonian, \begin{eqnarray} f_{\text{II}} = \lVert \dot{\bf h}(t,\beta_0) \cdot \sigma\rVert_2 = \lVert \dot{\mathcal{H}}\rVert_2 \equiv \text{eig}_+(\partial_t \mathcal{H}), \end{eqnarray} which is also equal to the positive eigenvalue of the derivative of the Hamiltonian (a traceless Hermitian matrix). For our minimization, we want to compute the integrand $f_{\text{II}}$ as a function of the $\bf n$ vector. To do so, we again use the Schroedinger equation for the time evolution operator given in Eq.~(\ref{schroedingerU}), \begin{eqnarray} f_{\text{II}} &=& \text{eig}_+(\partial_t \mathcal{H}) \nonumber \\ &\stackrel{(\ref{schroedingerU})}{=}& \text{eig}_+\left\{\partial_t \left[\left(i \frac{\partial U}{\partial t}\right) U^{\dagger}\right] \right\}. \label{easyII} \end{eqnarray} \subsection{Constant Drive Envelope} For constant drive envelopes, recall that we have computed above in Sec.~\ref{t-independent} the positive eigenvalue of the Hamiltonian as a function of the ${\bf n}$ vector. In that calculation we noted that the time evolution operator for constant $H_1(t) \equiv H_1$ can be written as in Eq.~(\ref{Ueff_constant}), or $U_{\beta_0}(t, t_i) = e^{-i\mathcal{H}_{\text{eff}}(\beta_0) (t-t_i)}$.
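The identity used above, that the spectral 2-norm of a traceless Hermitian $2\times 2$ matrix ${\bf h}\cdot\sigma$ equals both its positive eigenvalue and the Euclidean length $|{\bf h}|$, is quick to confirm numerically. A standalone check with an arbitrary test vector (variable names are ours):

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def h_dot_sigma(h):
    return h[0] * sx + h[1] * sy + h[2] * sz

h = np.array([0.3, -1.2, 0.7])            # arbitrary coefficient vector
H = h_dot_sigma(h)

two_norm = np.linalg.norm(H, 2)           # spectral (2-) norm
eig_plus = np.max(np.linalg.eigvalsh(H))  # positive eigenvalue
err = max(abs(two_norm - np.linalg.norm(h)),
          abs(eig_plus - np.linalg.norm(h)))
```

Since the eigenvalues of ${\bf h}\cdot\sigma$ are $\pm|{\bf h}|$, both quantities coincide with $|{\bf h}|$ to machine precision.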
As noted above in Eq.~(\ref{U_dot}), since the operator inside the exponential commutes with itself at arbitrary different times, we can simply write \begin{eqnarray} f_{\text{II}} = \text{eig}_+ \left\{\partial_t\left[i(\partial_t U_{\beta_0})U^\dagger_{\beta_0}\right] \right\} = \text{eig}_+ \{ \partial_t \mathcal{H}_{\text{eff}} \}. \end{eqnarray} That is, for the effective Hamiltonian $\mathcal{H}_{\text{eff}}$, which for constant drive envelopes is independent of time $t$ so that $\partial_t \mathcal{H}_{\text{eff}} = 0$, the integrand vanishes, \begin{eqnarray} f_{\text{II}} = 0 \qquad\qquad\qquad (\mathcal{H}_{\text{eff}}, H_1(t) = H_1). \end{eqnarray} In this case the integral in Eq.~(\ref{axiom3n}) is also zero, which suggests that here the effective Hamiltonian does indeed satisfy Axiom \ref{axiom3_axiom}. \subsection{Generic (Analytic) Drive} \label{explicit_formulaII} We now use the intermediate step (\ref{revisited}) in the computation of the integrand $f_\text{I} = \text{eig}_+(\mathcal{H})$ given above to simplify the Hamiltonian $\mathcal{H} = {\bf h}\cdot \sigma$ appearing in Eq.~(\ref{easyII}), \begin{eqnarray} \mathcal{H} &=& i(\partial_t U) U^{\dagger} \nonumber \\ &\stackrel{(\ref{revisited})}{=}&(1/2) \sin (2\alpha) \dot{\hat n}\cdot \sigma - (1/2) [1-\cos (2\alpha)] (\hat n \times \dot{\hat n}) \cdot \sigma + \dot\alpha \hat n \cdot \sigma \label{incomplete} \\ &=& \left( (1/2) \sin (2\alpha) \dot{\hat n} - (1/2) [1-\cos (2\alpha)] (\hat n \times \dot{\hat n}) + \dot\alpha \hat n \right) \cdot \sigma. \label{H_of_n_ThanksAlwin} \end{eqnarray} Above in Sec.~\ref{minimizeEigenvalue}, we have used the fact that this Hamiltonian is written as a sum of three perpendicular vectors $\hat d_1 = \hat n$, $\hat d_2 = \hat{\dot{\hat n}}$ and $\hat d_3 = \hat n \times \hat{\dot{\hat n}}$, which has allowed us to use Eq.~(\ref{posEigV}) to find an
algebraic expression for $f_{\text{I}}$. In order to obtain a similar expression for $f_{\text{II}}$, we now compute the temporal derivative of the Hamiltonian $\mathcal{H}$ as given in Eq.~(\ref{incomplete}), \begin{eqnarray} \partial_t \mathcal{H} &=& \partial_t \left[i(\partial_t U) U^{\dagger} \right] \nonumber \\ &=& (1/2) \sin (2\alpha) \ddot{\hat n}\cdot \sigma + \dot\alpha \cos (2\alpha) \dot{\hat n}\cdot \sigma - (1/2) [1-\cos (2\alpha)] (\hat n \times \ddot{\hat n}) \cdot \sigma \nonumber \\ && \qquad \qquad \qquad \qquad - \dot\alpha \sin(2\alpha) (\hat n \times \dot{\hat n}) \cdot \sigma + \dot\alpha \dot{\hat n} \cdot \sigma + \ddot\alpha \hat n \cdot \sigma \nonumber \\ &=& (1/2) \sin(2\alpha) \ddot{\hat n}\cdot \sigma - (1/2) [1-\cos (2\alpha)] (\hat n \times \ddot{\hat n}) \cdot \sigma + \dot\alpha [\cos (2\alpha)+1] \dot{\hat n}\cdot \sigma \nonumber \\ && \qquad \qquad \qquad \qquad - \dot\alpha \sin(2\alpha) (\hat n \times \dot{\hat n}) \cdot \sigma + \ddot\alpha \hat n \cdot \sigma \label{revisited2}\\ &=& (1/2) \sin(2\alpha) \ddot{\hat n}\cdot \sigma - (1/2) [1-\cos (2\alpha)] (\hat n \times \ddot{\hat n}) \cdot \sigma + \dot\alpha n_v [\cos (2\alpha)+1] \hat n_v \cdot \sigma \nonumber \\ && \qquad \qquad \qquad \qquad - \dot\alpha n_v \sin(2\alpha) \hat n_\perp \cdot \sigma + \ddot\alpha \hat n \cdot \sigma. \label{revisitedFinal} \end{eqnarray} In the last line we used $\dot{\hat n} = n_v \hat n_v$, and $(\hat n \times \dot{\hat n}) = n_v (\hat n \times {\hat n}_v) = n_v \hat n_\perp$ [$\hat n_\perp = \hat n \times \hat n_v$, see also below in Eq.~(\ref{theVectors})].
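The frame relations used in the last line rest on the fact that a unit vector is orthogonal to its own derivative. This can be illustrated with a finite-difference sketch; the test vector $\hat n(t)$ below is an arbitrary smooth choice of ours, not our actual ${\bf n}$:

```python
import numpy as np

def nhat(t):
    # arbitrary smooth unit vector for testing the frame relations
    v = np.array([np.cos(t), np.sin(2.0 * t), 0.5 + t * t])
    return v / np.linalg.norm(v)

t, h = 0.4, 1e-6
n = nhat(t)
ndot = (nhat(t + h) - nhat(t - h)) / (2.0 * h)  # central difference
n_v = np.linalg.norm(ndot)
n_perp = np.cross(n, ndot / n_v)                # hat n_perp = n x n_v

# dot(nhat) is orthogonal to nhat ...
perp_err = abs(np.dot(n, ndot))
# ... so |n x ndot| equals n_v = |dot(nhat)|
norm_err = abs(np.linalg.norm(np.cross(n, ndot)) - n_v)
```

Both residuals are at the level of the finite-difference truncation error, consistent with $\dot{\hat n} \perp \hat n$ and $\hat n \times \dot{\hat n} = n_v \hat n_\perp$.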
Just to be clear: by Eq.~(\ref{easyII}), the integrand $f_{\text{II}}$ is the positive eigenvalue of Eq.~(\ref{revisited2}), that is, $f_{\text{II}} = \text{eig}_+\left\{\partial_t \left[i(\partial_t U) U^{\dagger} \right]\right\}$. Since a unit vector $\hat n$ is perpendicular to its first derivative, \begin{eqnarray} \dot{\hat n} \perp \hat n, \end{eqnarray} we can write the derivative of $\dot{\hat n} \equiv \vec n_v$ (the velocity vector of $\hat n$) as a sum of terms that are parallel and orthogonal to $\hat n_v$. This is what we did in Appendix \ref{unit_vectors_elegant}. We need only write this velocity vector as $\vec n_v = |n_v| \hat n_v$. The three orthogonal vectors are then \begin{eqnarray} \hat {\tilde x} \equiv \hat n, \qquad \hat {\tilde y} \equiv \hat n_v \equiv \frac{1}{|\dot{\hat n}|}\dot{\hat n}, \qquad \hat {\tilde z} \equiv \hat n_\perp = \hat n \times \hat n_v. \label{theVectors} \end{eqnarray} Since these vectors $\{\hat {\tilde x}, \hat {\tilde y}, \hat {\tilde z}\}$ form a right-handed coordinate system it follows, for instance, that \begin{eqnarray} \hat {\tilde x} \times \hat {\tilde z} = - \hat {\tilde y}. \end{eqnarray} We can now compute an algebraic expression for this integrand using Eq.~(\ref{revisitedFinal}) and the results computed in Appendix \ref{unit_vectors_elegant}. There we found that the second derivative of the unit vector $\hat n$ is given in Eq.~(\ref{ddotnHatApp}) as \begin{eqnarray} \ddot{\hat n} =\dot n_v \hat n_v + n_v \left[(\vec n_a \cdot \hat n) \ \hat n + (\vec n_a \cdot \hat n_\perp) \ \hat n_{\perp} \right], \label{ddotnHat} \end{eqnarray} where \begin{eqnarray} \hat n_{\perp} &=& \hat n \times \hat n_v, \\ \vec n_a &=& \dot{\hat n}_v.
\end{eqnarray} Note that since $\vec n_a$ is the derivative of a unit vector (${\hat n}_v$), its dimension is [1/time] even though it plays the role of an acceleration vector. \subsubsection{Closed-form Expression for $f_{\text{II}}$} The expression for the exact result of $f_{\text{II}}$ is computed in the Mathematica notebook \texttt{integrand2.nb}, which can be found in the github repository [cf.~Sec.~\ref{coding}] in the folder \texttt{/exactRWA/programs/variational\_minimization}. Combining Eqs.~(\ref{revisitedFinal}) and (\ref{ddotnHat}), with $f_{\text{II}} = \text{eig}_+(\partial_t \mathcal{H})$ we have \begin{eqnarray} f_{\text{II}}^2 = \sin^2(\alpha) [n_v(\vec n_a\cdot \hat n_{\perp} - 2\dot\alpha) \cos\alpha - \dot n_v \sin \alpha]^2 + [\ddot\alpha + n_v \vec n_a \cdot \hat n \cos \alpha \sin\alpha ]^2 \nonumber \\ + \frac14 [n_v (\vec n_a \cdot \hat n_\perp + 2\dot\alpha) - n_v (\vec n_a\cdot \hat n_\perp - 2\dot\alpha)\cos(2\alpha) + \dot n_v \sin(2\alpha) ]^2. \end{eqnarray} For coding this integrand, it is convenient to rewrite the derivative of the Hamiltonian (\ref{H_of_n_ThanksAlwin}), which is given in Eq.~(\ref{revisited2}). Focusing on its defining vector, $\dot{\mathcal{H}} = \dot{\bf h}\cdot \boldsymbol{\sigma}$, we write \begin{eqnarray} \dot{\bf h} = a \ddot{\hat n} - b (\hat n \times \ddot{\hat n}) + c \hat n_v - d \hat n_\perp + \ddot\alpha \hat n, \label{hDot} \end{eqnarray} where \begin{eqnarray} a &=& (1/2) \sin(2\alpha), \qquad b = (1/2) [1-\cos (2\alpha)], \qquad c = \dot\alpha n_v [\cos (2\alpha)+1], \nonumber \\ d &=& \dot\alpha n_v \sin(2\alpha), \qquad \text{and } n_v \stackrel{(\ref{theVectors})}{=} |\dot{\hat n}|.
\end{eqnarray} Given the following orthogonality relations: $\ddot{\hat n} \perp (\hat n \times \ddot{\hat n})$, $\hat n \perp (\hat n \times \ddot{\hat n})$, and $\{\hat n, \dot{\hat n}, \hat n_\perp\}$ pairwise orthogonal, we find \begin{eqnarray} \dot{\bf h} \cdot \dot{\bf h}&=& a^2 \ddot{\hat n}\cdot\ddot{\hat n} + b^2 (\hat n \times \ddot{\hat n})\cdot(\hat n \times \ddot{\hat n}) + c^2 + d^2 + \ddot\alpha^2 \nonumber \\ && + 2 (ac \ddot{\hat n}\cdot\hat n_v - ad \ddot{\hat n}\cdot \hat n_\perp + a\ddot\alpha \ddot{\hat n}\cdot\hat n - bc (\hat n \times \ddot{\hat n})\cdot\hat n_v + bd(\hat n \times \ddot{\hat n})\cdot \hat n_\perp) \nonumber \\ &=& a^2 |\ddot{\hat n}|^2 + b^2 |\hat n \times \ddot{\hat n}|^2 + c^2 + d^2 + \ddot\alpha^2 \nonumber \\ && + 2 (ac \ddot{\hat n}\cdot\hat n_v - ad \ddot{\hat n}\cdot \hat n_\perp + a\ddot\alpha \ddot{\hat n}\cdot\hat n - bc (\hat n \times \ddot{\hat n})\cdot\hat n_v + bd(\hat n \times \ddot{\hat n})\cdot \hat n_\perp). \label{hDotSquared} \end{eqnarray} \section{Results of Integrals} \label{Results_of_Integrals} In Secs.~\ref{integrand1} and \ref{integrand1_full} we give results for the integrals that were found by our minimization. Sections \ref{errorSources} and \ref{errorAnalysis} present our error analysis, and, most importantly, Sec.~\ref{caseAgainstI} gives the main result of this study: the comparison of the integral results for the effective and variational Hamiltonians in relation to the maximal error. This comparison gives a strong argument that our proposed set of axioms is incorrect. We have run an extensive numerical minimization of the functional (\ref{axiom3n}).
In this numerical part of our work, we implemented the gradient descent algorithm called the Broyden–Fletcher–Goldfarb–Shanno algorithm\footnote{\url{https://en.wikipedia.org/wiki/Broyden\%E2\%80\%93Fletcher\%E2\%80\%93Goldfarb\%E2\%80\%93Shanno\_algorithm}}, in which the gradient is approximated. We have compared the performance with the Nelder-Mead algorithm\footnote{\url{https://en.wikipedia.org/wiki/Nelder\%E2\%80\%93Mead\_method}}, which gave similar results. Some integral results for $Q$ and uncertainties for the quantities $\alpha$ and $\bf n$, which are obtained numerically, are documented in Table \ref{integral_results}. \begin{table}[t] \begin{tabular}{|c||c|}\hline integrand & ${\bf n}_{\text{eff}}$ $\mathcal{O}\left(A^3\right)$ \\ \hline\hline $f_{\text{I}}$(*) & 0.099116584(3) \\ \hline $f_{\text{II}}$ & 0.00633432(1) \\\hline $\delta \alpha$ & $1.84\times10^{-8}$ \\ $\delta n$ & $(1.84\times10^{-8}, 1.74\times10^{-8}, 1.09\times10^{-8})$ \\ $\delta \hat n$, $|\delta \hat n|$ & $(7.64\times10^{-8}, 1.49\times10^{-6}, 9.61\times10^{-7})$, $1.50\times10^{-6}$ \\ \hline $\delta Q$ & $3.4(1)\times 10^{-6}$ [very conservative (constant maximum errors)] \\ $\delta Q$ & $2.90 \times 10^{-6}$ [quadrature (constant maximum errors)] \\\hline \end{tabular} \caption{Specific results of integrals $Q$ for amplitude $A = 0.002$. The number in parentheses next to the integrand denotes the upper bound in the number of trial functions $M$ [we use $M = N$, compare Eqs.~(\ref{ansatz1}) and (\ref{FourierTA})]. The number in [square brackets] denotes the numerical run that yields the results. The star (*) indicates that we have confirmed that $f_{\text{I}}$ and $f_{\text{I}}^{\text{simplified}}$ yield the same results to the given accuracy. The quantities $\delta n$, $\delta \alpha$, $\delta \hat n$ and $|\delta \hat n|$ are absolute (not relative) errors.
} \label{integral_results} \end{table} \subsection{Integrand 1} \label{integrand1} We improved the ${\bf n}$ vector to higher order, namely it now includes terms proportional to $A^2$---below we denote it ${\bf n}_{A^2}$, and the old vector is called ${\bf n}_A$. Note that the new vector ${\bf n}_{A^2}$ also goes to higher order in $1/\omega$. We wanted to find out if this improved ${\bf n}$ vector yields the same lower variational minimum for $f_{\text{I}}$. To do this, we first checked whether we could repeat the old calculation with ${\bf n}_A$. The results for this $f_{\text{I}}$ integral are recorded in ``workMac440.txt'' (the original calculation was done in ``workMac44.txt''). That is, in ``440'' we redid the calculation with ${\bf n}_A$ just to be sure we can still do it with our current code. Note that in recalculating we only set $n_z \rightarrow 0$, that is, $n_x$ and $n_y$ are considered up to high order in $1/\omega$ for this computation. Now, for ${\bf n}_{A^2}$ we have found the same lower integral in ``workMac10.txt'' (with $M = N = 2$).
\begin{table} \begin{tabular}{|c||c|c|} \hline integrand ($M$) & integral $Q_\text{var}$ / $\Delta Q_\text{var}$ & integral $\Delta Q_\text{ex}$ \\\hline $f_{\text{I}}$ & 0.0991166116(5) [00] & 3e-6 \\ \hline $f_{\text{I}}$(1) & -6.77e-7 [1011] & \\ $f_{\text{I}}$(2) & -1.99e-6 [1013] & \\ $f_{\text{I}}$(3) & -2.00e-6 [1014/1015] & \\ \hline $f_{\text{II}}$ & 0.00633429(51) [000] & \\ $f_{\text{II}}$(1) & 0.00631970(40) [204] & \\ $f_{\text{II}}$(2) & 0.00632767(44) [206] & \\ \hline $f_{\text{II}}$ & 0.0063346 (13) [000] & \\ $f_{\text{II}}$(1) & 0.0063198(12) [204] & \\ $f_{\text{II}}$(2) & 0.0063279(12) [206] & \\ \hline $f_{\text{II}}^2$ & 0.00111881851(1) [000] & \\ $f_{\text{II}}^2$(1)[*] & 0.00111868926(1) [260] & \\ $f_{\text{II}}^2$(2) & []& \\ \hline \end{tabular} \caption{Integral results for the full Gaussian pulse, which is taken in the interval $[-t_{\text{gate}}, t_{\text{gate}}]$. For the computation of these numbers we use ${\bf n}_{\text{var}}$ $\mathcal{O}\left(A^3\right)$. [*] Zero initial guess yielded no improvement.} \label{integral_resultsFull} \end{table} \subsection{Integrals Over Full Pulse} \label{integrand1_full} Using the same gate duration as for the half Gaussian pulse, $t_\text{gate} = 12\sigma$ with $\sigma = 2\times t_c = 2\times(2\pi)$, we now integrate from $t_0 = -t_\text{gate}$ to $t_\text{gate}$ over the full Gaussian pulse. Accordingly, the values of the new integrals, which are shown in Table \ref{integral_resultsFull}, should be a bit more than twice the old ones [see Table \ref{integral_results}].
\subsection{Sources of Errors} \label{errorSources} Here we list five possible sources of error: \begin{enumerate} \item Computation of the integral [this numerical uncertainty is given in parentheses, for example in Table \ref{integral_results}] \item Errors due to the computation of the propagator [numerical values given in the bottom section of Table \ref{integral_results}] \item Errors due to the numerical derivatives \item Approximations through a simplistic choice of integrand \item ``Boundary conditions'' \end{enumerate} The first four items listed here are under control (see Sec.~\ref{alreadyExplored} below). The fifth item is made small by considering a pulse shape that goes to zero very smoothly. \subsubsection{The Items Already Explored} \label{alreadyExplored} (1) Done. (2) See the discussion below in Sec.~\ref{errorAnalysis}. (3) We compute derivatives using a difference quotient. For example, the first derivative of $\alpha$ is given by \begin{eqnarray} \dot\alpha(t) \approx \frac{\alpha(t+h) - \alpha(t-h)}{2h}. \label{derivativeNumerical} \end{eqnarray} For the standard parameters [in particular, $A \sim 0.002$, $\sigma = 4\pi$, $t_c = 2\pi$], we have used $h = 10^{-3}$ and $h = 10^{-4}$ with the same results for the integral. In the case of the second derivative of $\alpha$ we have obtained bad results for the choice of $h = 10^{-5}$. (4) We have compared the integral results for the respective ``simple'' integrands to those of the ``full'' integrands, and they do not pose a problem for the calculation [assuming the numerical derivatives are computed accordingly, e.g., $10^{-4} \leq h \leq 10^{-3}$ in Eq.~(\ref{derivativeNumerical})]. \subsection{Error Propagation} \label{errorAnalysis} Recall that for a function $f(a, b)$ with uncertainties $\Delta a$ and $\Delta b$ we find an uncertainty $\Delta f$ to be \begin{eqnarray} \Delta f = \left |\frac{\partial f(a, b)}{\partial a} \right| \Delta a + \left |\frac{\partial f(a, b)}{\partial b} \right| \Delta b. \label{errorPropagation} \end{eqnarray} In our case, the function $f$ is either the integrand $f_{\text{I}} = \lambda_+$ or $f_{\text{II}} = \lVert \dot{\mathcal{H}}(t) \rVert$, which are given in Eqs.~(\ref{Qf_I-Final}) and (\ref{easyII}), respectively. [Note that we have found a closed-form expression for $f_{\text{II}}$ in Sec.~\ref{explicit_formulaII}.] That is, \begin{eqnarray} f_{\text{I}}(\alpha, \dot\alpha, \dot{\hat n}) = \sqrt{\dot\alpha^2 + (1/2)|\dot{\hat n}|^2 [1 - \cos (2\alpha)]}. \label{fIIIErrorAnalysis} \end{eqnarray} Recall that we have another, simplified way of computing this integrand, \begin{eqnarray} f_{\text{I}}^{\text{simplified}} = | \dot{\bf n} |. \label{fIIISimErrorAnalysis} \end{eqnarray} We have compared the integrals $Q^{\text{simplified}} = \int f^{\text{simplified}}_{\text{I}}$ and $Q = \int f_{\text{I}}$, and they both yield the numbers shown in Table \ref{integral_results} up to the given accuracy. In the case of $f_{\text{II}}$ we (currently) only use the simplified integrand, which is given by \begin{eqnarray} f_{\text{II}}^{\text{simplified}} = | \ddot{\bf n} |^2. \label{fIIErrorAnalysis} \end{eqnarray} It follows that we deal with an integrand ($f_{\text{I}}$ or $f_{\text{I}}^{\text{simplified}}$ or $f_{\text{II}}^{\text{simplified}}$) of the form \begin{eqnarray} f_{\text{I}}(\alpha, \dot\alpha, \dot{\hat n}) \pm \Delta f_{\text{I}}. \end{eqnarray} Let us now determine $\Delta f_{\text{I}}$.
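The propagation rule (\ref{errorPropagation}) applied to $f_{\text{I}}$ can be prototyped with central-difference partial derivatives and compared against the closed-form expression for $\Delta f_{\text{I}}$ derived in this section. The parameter values and uncertainties below are toy numbers of ours, not the measured ones:

```python
import numpy as np

def f_I(alpha, alpha_dot, ndot_norm):
    # integrand f_I(alpha, dot alpha, |dot nhat|)
    return np.sqrt(alpha_dot**2
                   + 0.5 * ndot_norm**2 * (1.0 - np.cos(2.0 * alpha)))

def propagate(f, params, deltas, h=1e-6):
    # Delta f = sum_i |df/dp_i| * Delta p_i, partials via central differences
    total = 0.0
    for i, dp in enumerate(deltas):
        up = list(params); up[i] += h
        dn = list(params); dn[i] -= h
        total += abs((f(*up) - f(*dn)) / (2.0 * h)) * dp
    return total

params = (0.3, 0.02, 0.05)     # toy values for alpha, dot alpha, |dot nhat|
deltas = (1e-10, 1e-10, 1e-8)  # toy absolute uncertainties
df_numeric = propagate(f_I, params, deltas)

# closed-form Delta f_I for comparison
a, ad, nd = params
f0 = f_I(a, ad, nd)
df_analytic = (2*ad*deltas[1] + (1 - np.cos(2*a))*nd*deltas[2]
               + nd**2 * abs(np.sin(2*a)) * deltas[0]) / (2.0 * f0)
```

The two estimates agree to the accuracy of the finite-difference partials.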
\subsubsection{What We Know} We assume that we know the error in the vector ${\bf n}$, denoted $\Delta n$, i.e., the true value of $n(t, \beta_0) = |{\bf n}(t, \beta_0)|$ lies somewhere within the interval \begin{eqnarray} [n(t, \beta_0) - \Delta n, n(t, \beta_0) + \Delta n]. \label{nDelta} \end{eqnarray} Given $\alpha = |{\bf n}|$ and $\hat n = {\bf n}/\alpha$, we further define error quantities \begin{eqnarray} \Delta \alpha = \alpha - \alpha_0, \text{ and } \Delta \hat n = \hat n - \hat n_0, \label{DANH} \end{eqnarray} where we take $\alpha$ and $\hat n$ to correspond to the effective time evolution, while $\alpha_0$ and $\hat n_0$ correspond to the exact time evolution. To obtain a numerical estimate of the quantities (\ref{DANH}), let us define the time evolution operators for the effective and the exact trajectories: \begin{eqnarray} U_{\text{eff}}(\alpha, \hat n) &=& e^{-i\alpha \hat n \cdot \sigma} = I \cos(\alpha) -i \hat n\cdot \sigma \sin(\alpha), \label{UEff} \\ U_{\text{ex}}(\alpha, \hat n) &=& \mathcal T e^{-i\int d\tau H_{\text{ex}}(\tau)} \equiv e^{-i\alpha_{\text{ex}} \hat n_{\text{ex}} \cdot \sigma}. \end{eqnarray} Note that we leave the dependence on time $t$ and the gauge parameter $\beta_0$ implicit, e.g., $\alpha = \alpha(t, \beta_0)$. We also define the difference between these two operators, \begin{eqnarray} \Delta U = U_{\text{eff}} - U_{\text{ex}}. \end{eqnarray} Note that from the difference $\Delta U$ we could determine the error quantity $\Delta \alpha$ [cf.~(\ref{DANH})] by writing \begin{eqnarray} \Delta_c &\equiv& \frac12 \text{tr} \Delta U = \cos\alpha - \cos\alpha_{\text{ex}} = \cos(\alpha_{\text{ex}} + \Delta \alpha) - \cos\alpha_{\text{ex}}, \end{eqnarray} and then solving for $\Delta \alpha$.
\subsubsection{Simple Way} Recall, however, that we already have functions for the ``effective'' quantities $\alpha$ and $\hat n$ [search for `def alpha' and `def nHat' in fromMathematica.py], so in order to find the desired errors we merely need to write functions for $\alpha_{\text{ex}}$ and $\hat n_{\text{ex}}$ based on $U_{\text{ex}}$. Note that the function for $U_{\text{ex}}$ can be obtained from the exact time evolution psi(t, beta0, ...) [search for `def psi' in fromMathematica.py] by choosing the input parameter \emph{out} = `thetaPhi'. As described above, we can also easily obtain the equivalent exact quantities. For this, note that Eq.~(\ref{UEff}) implies that \begin{eqnarray} \cos(\alpha_{\text{ex}}) &=& \frac12 \text{tr} \ U_{\text{ex}}, \\ \Rightarrow -i n_j \sin(\alpha_{\text{ex}}) &=& \frac12 \text{tr} \ U_{\text{ex}} \sigma_j \qquad \ \qquad (j = x, y, z) \\ \Rightarrow \qquad \qquad n_j &=& \frac{i}{2\sin(\alpha_{\text{ex}})} \text{tr} \ U_{\text{ex}} \sigma_j. \end{eqnarray} These quantities are computed in fromMathematica.py (search for `def alphaNHatExact'). \subsubsection{Now Compute the Error} We assume an estimate of $\Delta n_x$, $\Delta n_y$ and $\Delta n_z$ is given. We can then compute the error in $\alpha$, which is the length of the $n$-vector, \begin{eqnarray} \alpha &=& |{\bf n}| = \sqrt{n_x^2 + n_y^2 + n_z^2}, \\ \Rightarrow \frac{\partial \alpha}{\partial n_i} &=& \frac{n_i}\alpha \qquad (\text{for } i = x, y, z), \label{alphaDerivative}\\ \stackrel{(\ref{errorPropagation})}{\Rightarrow} \Delta \alpha &=& \sum_{i} \frac{n_i}{\alpha} \Delta n_i.
\label{DeltaAlpha} \end{eqnarray} Recall that we also have \begin{eqnarray} \hat n &=& \frac{\bf n}{n} = \frac{\bf n}{\alpha},\\ \frac{\partial }{\partial \alpha} \frac 1{\alpha} &=& -\frac 1{\alpha^2} \\ \stackrel{(\ref{errorPropagation})}{\Rightarrow} \Delta \hat n & = & \frac{\Delta \bf n}{\alpha} + \frac{\Delta \alpha}{\alpha^2} {\bf n}, \end{eqnarray} where $\Delta \alpha$ is given above in Eq.~(\ref{DeltaAlpha}). \subsubsection{Time Derivatives} It is, of course, possible that some error, say $\Delta \alpha$ or $\Delta {\bf n}$, builds up over a Magnus interval $t_c$, so that we have \begin{eqnarray} \Delta \dot{\bf n} = \frac{\Delta {\bf n}}{t_c}, \\ \Delta \dot\alpha = \frac{\Delta \alpha}{t_c}, \label{DeltaA}\\ \Delta \dot{\hat n} = \frac{\Delta {\hat n}}{t_c}. \label{DeltaNH} \end{eqnarray} From Eq.~(\ref{fIIIErrorAnalysis}) we see that the latter two equations, i.e., (\ref{DeltaA}) and (\ref{DeltaNH}), are required, as well as the uncertainty in $\alpha$ itself, i.e., Eq.~(\ref{DeltaAlpha}). \subsubsection{First Results} Using the function ``deltaAlphaNHat'' (see fromMathematica.py) we computed the errors of the $n$-vector for various approximations of the propagator [for this we used dbl\_test.py]. The results are noted in the lowest rows of Table \ref{integral_results}. For the most accurate treatment, in which we started with the effective Hamiltonian $\mathcal{H}_{\text{eff}}$ of order $1/\omega^5$, and in which we kept terms $\propto A^3$ in the Taylor expansion, we found $|\Delta \hat n| \lesssim 10^{-8}$, $\Delta \alpha \lesssim 10^{-10}$, and, perhaps most importantly, \begin{eqnarray} \Delta |{\bf n}| \equiv \Delta n \lesssim 1.4 \times 10^{-9}. \end{eqnarray} These are absolute (not relative) numbers.
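The trace inversion used for `def alphaNHatExact' above can be round-tripped in a few lines. This standalone sketch (independent of fromMathematica.py) builds $U = e^{-i\alpha \hat n\cdot\sigma}$ in closed form and recovers $\alpha$ and $\hat n$ from the traces:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = (sx, sy, sz)

def make_U(alpha, nhat):
    # U = exp(-i alpha nhat.sigma) = cos(alpha) I - i sin(alpha) nhat.sigma
    ns = nhat[0]*sx + nhat[1]*sy + nhat[2]*sz
    return np.cos(alpha) * np.eye(2) - 1j * np.sin(alpha) * ns

def alpha_nhat_from_U(U):
    # cos(alpha) = (1/2) tr U ;  n_j = i tr(U sigma_j) / (2 sin alpha)
    alpha = np.arccos(np.real(np.trace(U)) / 2.0)
    n = np.array([np.real(1j * np.trace(U @ s)) / (2.0 * np.sin(alpha))
                  for s in paulis])
    return alpha, n

alpha_in = 1.1
n_in = np.array([0.2, -0.4, 0.5])
n_in /= np.linalg.norm(n_in)
alpha_out, n_out = alpha_nhat_from_U(make_U(alpha_in, n_in))
```

For $\alpha \in (0, \pi)$ the inversion is unambiguous; at $\sin\alpha = 0$ the axis $\hat n$ is undefined, which the real code has to guard against.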
With the three quantities $\Delta \alpha$, $\Delta \dot\alpha$ and $\Delta \dot{\hat n}$ in hand we can thus compute the error for $f_{\text{I}}$ given in Eq.~(\ref{fIIIErrorAnalysis}), \begin{eqnarray} \Delta f_{\text{I}} &=& \sum_{p = \alpha, \dot\alpha, |\dot{\hat n}|} \frac{\partial f_{\text{I}}}{\partial p} \times \Delta p \\ &=& \frac{1}{2 f_{\text{I}}} \left[ 2\dot\alpha \times \Delta \dot\alpha + (1/2)[1 - \cos (2\alpha)] (2|\dot{\hat n}|) \times \Delta |\dot{\hat n}| + (1/2)|\dot{\hat n}|^2 |2\sin(2\alpha)| \times \Delta \alpha \right] \nonumber \\ &=& \frac{1}{2 f_{\text{I}}} \left[ 2\dot\alpha \times \Delta \dot\alpha + [1 - \cos (2\alpha)] |\dot{\hat n}| \times \Delta |\dot{\hat n}| + |\dot{\hat n}|^2 \ |\sin(2\alpha)| \times \Delta \alpha \right]. \end{eqnarray} The integral over this quantity [computed using ``dbl\_plotting.py''] yields the following uncertainty of the integral $Q$, \begin{eqnarray} \Delta Q_{\text{I}} = 7.2(1)\times 10^{-6}\text{(*)}. \end{eqnarray} (*) This number is not up to date [July 11th, 2019]. Use the simplified result below in Eq.~(\ref{QIIISim}) for now. In the case of the ``simplified'' integrand $f_{\text{I}}^{\text{simplified}}$ given in Eq.~(\ref{fIIISimErrorAnalysis}) we need to use $\Delta \dot n \approx \Delta n/t_c$ [with the absolute uncertainty $\Delta n \lesssim 1.4\times 10^{-9}$, see Table \ref{integral_results}], so we can compute \begin{eqnarray} \Delta f_{\text{I}}^{\text{simplified}} &=& \frac{\partial f^{\text{simplified}}_{\text{I}}}{\partial |\dot n|} \times \Delta |\dot{\bf n}| = \Delta \dot n.
\end{eqnarray} Similar to above, the integral over this quantity (computed using ``dbl\_plotting.py'') yields an uncertainty of the integral $Q^{\text{simplified}}$ that is given by \begin{eqnarray} \Delta Q_{\text{I}}^{\text{simplified}} = 1.06 \times 10^{-7} \pm \mathcal O(10^{-9}). \label{QIIISim} \end{eqnarray} In the case of the ``simplified'' integrand $f_{\text{II}}^{\text{simplified}}$ given in Eq.~(\ref{fIIErrorAnalysis}) we have to use $\Delta \ddot n = \Delta n/t_c^2$ (we assume) in order to compute \begin{eqnarray} \Delta f_{\text{II}}^{\text{simplified}} &=& \frac{\partial f^{\text{simplified}}_{\text{II}}}{\partial |\ddot n|} \times \Delta |\ddot{\bf n}| = \Delta \ddot n. \end{eqnarray} From Eq.~(\ref{QIIISim}) we find directly that since $\Delta \ddot n = \Delta n/t_c^2 = \Delta \dot n/t_c$, we have \begin{eqnarray} \Delta Q_{\text{II}}^{\text{simplified}} = (1.1/2\pi) \times 10^{-7} \pm \mathcal O(10^{-9}) = 1.75 \times 10^{-8}. \label{QIISim} \end{eqnarray} The bigger problem is that we believe we cannot use the simplified integrand for $f_{\text{II}}$. \subsubsection{The Case Against Integrand I} \label{caseAgainstI} From Table \ref{integral_results} we learn the following. The variational approach yields a set of variational parameters that result in a smaller integral $Q$. The corresponding trajectory, when plotted for a given value of $\beta_0$, basically takes a slight shortcut compared to ``our'' original effective trajectory.
\begin{table} \begin{tabular}{|c|c|}\hline integrand & integral result $Q_i$ with $i = 0, \text{var}$ \\\hline\hline digits & 0.123456789(00) [00] \\ $\lambda_{+, 0}$ & 0.039826282(13) [00] \\ $\lambda_{+, \text{var}}$ & 0.039825489(30) [10] \\\hline digits & 0.12345678901(00) [000] \\ $f_{\text{II}, 0}$ & 0.00304297287(50) [000]\\ $f_{\text{II}, \text{var}}$ & 0.00304289075(60) [212]\\\hline \end{tabular} \caption{Most informative integral results taken from Table \ref{integral_results}.} \label{integral_results_informative} \end{table} We have confirmed with reasonable certainty that the integral result $Q_{\text{var}}$ is indeed smaller than $Q_0$, the integral for the original Hamiltonian. Consulting Table \ref{integral_results_informative}, the difference is \begin{eqnarray} \Delta Q = Q_0 - Q_{\text{var}} = 7.8\times 10^{-7}. \label{caseIII} \end{eqnarray} Here we have taken into account the inaccuracies due to the numerical integration. Now, the uncertainty computed in Eq.~(\ref{QIIISim}) is only $1.06 \times 10^{-7}$, which strongly suggests that the $\Delta Q$ we found is significant. \section{Conclusions} \label{conclusions} The main statement of Axiom \ref{axiom3_axiom} is that the effective Hamiltonian results in the lowest integral value of the functional $Q = \int_{\beta_0} \int_{\tau} f \text{d} \tau \text{d} \beta_0$. Here, an integrand $f(t, \beta_0)$, dependent on time $t$ and the gauge parameter $\beta_0$, is integrated over both of its arguments for a full single-qubit drive pulse. In these notes we have analytically and numerically analyzed this axiom for the integrand $f = f_\text{I} = \text{eig}_+(\mathcal{H})$, i.e., the positive eigenvalue of the Hamiltonian, introduced in Sec.~\ref{integrands}. In Sec.~\ref{caseAgainstI} we have shown that the integral $Q$ for the effective Hamiltonian is significantly larger than that of the ``best'' variational Hamiltonian.
This implies that our proposed axiom is violated for the integrand $f_\text{I}$. Given the prominent qualitative differences between the effective and exact qubit trajectories (cf.~Fig.~\ref{trajectories}), we expect that some other definition for the effective Hamiltonian of the exact rotating wave approximation---beyond infinite series with unclear convergence behavior---may be found. \section{Magnus Expansion} \label{magnus_appendix} The Magnus expansion \cite{magnus1954, ernst87, waugh07} is a method for time-dependent perturbation theory. The basic idea is to write the time evolution operator, which generally requires the time ordering operator, as a true exponential function of an operator that is determined perturbatively. Consider the stroboscopic time evolution for the set of times $\{t_0, t_0 \pm t_c, \ldots \}$, as given in Eq.~(\ref{sets}), with the drive period $t_c=\pi/\omega$ and the time offset $t_0\in[0, t_c)$. The stroboscopic time evolution operator parallel to that in Eq.~(\ref{goal0}) can then be written as \begin{equation} U_{t_0}(t_0 + n t_c, t_0) = e^{-i \overline{\mathcal{H}} n t_c}, \label{UM2} \end{equation} for integers $n$. The Magnus expansion $\overline{\mathcal{H}}$ can be written as a series, \begin{equation} \overline{\mathcal{H}} = \sum_{k=0}^\infty \overline{\mathcal{H}}^{(k)}.
\label{MagnusSeries} \end{equation} The three lowest-order terms $\overline{\mathcal{H}}^{(k)}$ with $k = 0$, 1 and 2 read \begin{eqnarray} \overline{\mathcal{H}}^{(0)} &=& \frac1{n t_c} \int_{t_0}^{t_0 + nt_c} \text{d} \tau\, \mathcal{H}(\tau), \label{Hb0}\\ \overline{\mathcal{H}}^{(1)} &=& \frac{-i}{2 nt_c} \int_{t_0}^{t_0 + nt_c}\text{d} \tau' \int_{t_0}^{\tau'} \text{d} \tau\, [\mathcal{H}(\tau'),\mathcal{H}(\tau)], \label{Hb1}\\ \overline{\mathcal{H}}^{(2)} &=& -\frac{1}{6 nt_c} \int_{t_0}^{t_0 + nt_c}\text{d} \tau'' \int_{t_0}^{\tau''} \text{d} \tau' \int_{t_0}^{\tau'} \text{d} \tau \Big\{[\mathcal{H}(\tau''),[\mathcal{H}(\tau'),\mathcal{H}(\tau)]] + [[\mathcal{H}(\tau''),\mathcal{H}(\tau')],\mathcal{H}(\tau)]\Big\}. \quad \label{Hb2} \end{eqnarray} Terms of higher order may be determined recursively, see, e.g., Refs.~\cite{blanes09,blanes10}. \section{Various supporting calculations} \label{support} \subsection{Derivatives of trial functions} \label{explicit_derivatives} To write the first and second derivatives of the vector $n_\text{var}$ given in Eq.~(\ref{trial_fxns}), we defined the outer factor $f_{\text{outer}} = [\sin^2(\theta_t)-\sin^2(\eta\phi)] e^{a t^2}$, where $\theta_t = \omega (t-\beta_0)+\eta\phi$ (we argued above that $\Omega = \omega = 1/2$) and $a=-(1+c)/(2 \sigma^2)$.
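This outer factor and its first and second derivatives, given next, are easy to cross-check by central finite differences. The following Python sketch does so; all parameter values (for $\eta$, $\phi$, $\beta_0$, $c$, $\sigma$) are hypothetical choices made only for the check.

```python
import numpy as np

# hypothetical parameter values; only omega = 1/2 is taken from the text
omega, eta, phi, beta0, c, sigma = 0.5, 1.0, 0.3, 0.2, 0.1, 1.5
a = -(1 + c) / (2 * sigma**2)

def theta(t):
    return omega * (t - beta0) + eta * phi

def f_outer(t):
    # the outer factor [sin^2(theta_t) - sin^2(eta*phi)] * exp(a t^2)
    return (np.sin(theta(t))**2 - np.sin(eta * phi)**2) * np.exp(a * t**2)

def f_outer_dot(t):
    # first derivative: omega sin(2 theta) e^{a t^2} + 2 a t f_outer
    return omega * np.sin(2 * theta(t)) * np.exp(a * t**2) + 2 * a * t * f_outer(t)

def f_outer_ddot(t):
    # second derivative: 2[omega^2 cos(2 theta) + a omega t sin(2 theta)] e^{a t^2}
    #                    + 2 a f_outer + 2 a t f_outer_dot
    return (2 * (omega**2 * np.cos(2 * theta(t))
                 + a * omega * t * np.sin(2 * theta(t))) * np.exp(a * t**2)
            + 2 * a * f_outer(t) + 2 * a * t * f_outer_dot(t))

# central finite differences at a few sample times
h = 1e-5
for t in (-1.0, 0.4, 2.3):
    fd1 = (f_outer(t + h) - f_outer(t - h)) / (2 * h)
    fd2 = (f_outer(t + h) - 2 * f_outer(t) + f_outer(t - h)) / h**2
    assert abs(fd1 - f_outer_dot(t)) < 1e-7
    assert abs(fd2 - f_outer_ddot(t)) < 1e-4
print("analytic derivatives agree with finite differences")
```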
The derivatives of this outer factor are then given by \begin{eqnarray} \dot f_{\text{outer}} &=& 2\omega\sin(\theta_t)\cos(\theta_t)e^{a t^2} + 2at \, f_{\text{outer}} \nonumber \\ &=& \omega\sin(2\theta_t)e^{a t^2} + (2at)f_{\text{outer}}, \label{outerPrime} \\ \ddot f_{\text{outer}} &=& 2[\omega^2(\underbrace{\cos^2\theta_t - \sin^2\theta_t}_{\cos2\theta_t}) + 2a\omega t\underbrace{\sin\theta_t \cos\theta_t}_{\tfrac12 \sin2\theta_t}]e^{a t^2}\nonumber \\ && + 2a f_{\text{outer}} + 2at \dot f_{\text{outer}} \nonumber \\ &=& 2[\omega^2\cos2\theta_t + a\omega t \sin2\theta_t] e^{a t^2} + 2a f_{\text{outer}} + 2at \dot f_{\text{outer}}. \label{outerPrimePrime} \end{eqnarray} To avoid mistakes as much as possible, these equations were checked analytically using Mathematica, and their Python code has been checked numerically. Numerical checks can be done using \texttt{wrongFactor} and \texttt{wrongFactor2} in \texttt{trial}, and plotting $f_{\text{I}}$ or $f_{\text{II}}$ in plotting.py (with \texttt{derivative1FDQ} = \texttt{True}). \subsection{Time derivative of operator exponential} \label{Udot} As discussed in the beginning of Sec.~\ref{minimizeEigenvalue}, the derivative of a function $\exp(f(t))$ for an operator $f(t)$ is not trivial if $[f(t), \dot f(t)] \neq 0$. Evangelos Varvelis pointed me to Ref.~\cite{blanes09}, in which Eqs.~(33)-(35) contain various forms of the derivative of the exponential of the time evolution operator. Here, we choose Eq.~(35) from Ref.~\cite{blanes09}, \begin{eqnarray} \partial_t U = \partial_t e^{\Omega(t)} = \int_0^1 e^{s \Omega(t)} (\partial_t \Omega(t)) e^{(1-s)\Omega(t)}\,\text{d} s \label{derivative} \end{eqnarray} with the Magnus expansion $\Omega(t)$.
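This derivative formula, quoted from Ref.~\cite{blanes09}, can be verified numerically in a few lines. The Python sketch below uses an arbitrary, hypothetical operator path $\Omega(t)$ with $[\Omega(t),\dot\Omega(t)]\neq 0$ and compares a centered finite difference of the matrix exponential against the $s$-integral, discretized by the midpoint rule.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
A = 0.5 * rng.normal(size=(2, 2))
B = 0.5 * rng.normal(size=(2, 2))

# hypothetical non-commuting operator path Omega(t) and its exact derivative
Omega = lambda t: -1j * (A + t * B + t**2 * (A @ B + B @ A))
dOmega = lambda t: -1j * (B + 2 * t * (A @ B + B @ A))

t, h = 0.4, 1e-6
# left side: finite-difference derivative of exp(Omega(t))
lhs = (expm(Omega(t + h)) - expm(Omega(t - h))) / (2 * h)

# right side: the integral formula, midpoint rule on s in [0, 1]
n = 2000
s_grid = (np.arange(n) + 0.5) / n
rhs = sum(expm(s * Omega(t)) @ dOmega(t) @ expm((1 - s) * Omega(t))
          for s in s_grid) / n

print(np.linalg.norm(lhs - rhs))  # small residual set by the two discretizations
```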
Using \begin{eqnarray} \bar{\mathcal{H}}(t) = i \Omega(t) = {\bf n}(t)\cdot \sigma = \alpha(t) \hat n(t)\cdot \sigma, \label{Hbar} \end{eqnarray} the computation based on Eq.~(\ref{derivative}) reads \begin{eqnarray} Q &\stackrel{(*)}{=}& \int \int \text{eig}_+\left[ \int_0^1 e^{i s \bar{\mathcal{H}}(t)} (\partial_t \bar{\mathcal{H}}) e^{i(1-s) \bar{\mathcal{H}}(t)} e^{- i\bar{\mathcal{H}}(t)} \text{d} s \right] \text{d} t \text{d} \beta_0 \nonumber\\ &=& \int \int \text{eig}_+\left[ \int_0^1 e^{i s \bar{\mathcal{H}}(t)} (\partial_t \bar{\mathcal{H}}) e^{-is \bar{\mathcal{H}}(t)} \text{d} s \right] \text{d} t \text{d} \beta_0. \end{eqnarray} Substituting for $\bar{\mathcal{H}}$ as given in Eq.~(\ref{Hbar}), \begin{eqnarray} Q &=& \int \int \text{eig}_+\left[ \int_0^1 e^{is \bar{\mathcal{H}}(t)}(\partial_t \alpha \hat n \cdot \sigma ) e^{-is \bar{\mathcal{H}}(t)} \text{d} s \right] \text{d} t \text{d} \beta_0 \nonumber\\ &=& \int \int \text{eig}_+\left[ \dot \alpha \hat n\cdot \sigma + \alpha \underbrace{\int_0^1 e^{is \bar{\mathcal{H}}(t)}(\dot{\hat n} \cdot \sigma) e^{-is \bar{\mathcal{H}}(t)} \text{d} s}_{\equiv I} \right] \text{d} t \text{d} \beta_0. \label{alphaTimesNHat} \label{minimize} \end{eqnarray} \subsection{Algebra of Unit Vectors} \label{unitVectors} \subsubsection{No Approximation} \label{unit_vectors_elegant} To properly compute the second derivative of a unit vector, I believe we can do the following. First define the velocity vector $\vec n_v$, which points along the direction of the velocity, \begin{eqnarray} \vec n_v &=& \partial_t {\hat n}. \label{def_nv} \end{eqnarray} Using $\vec n_v \equiv n_v \hat n_v$ and $\partial_t \hat n = |\dot{\hat n}| \hat n_v$, it is clear that \begin{eqnarray} \partial_t {\hat n} &\equiv& n_v \hat n_v, \qquad \qquad n_v = |\dot{\hat n}|.
\label{nHatD} \end{eqnarray} Then take another derivative, \begin{eqnarray} \partial_t^2 \hat n &\stackrel{(\ref{def_nv})}{=}& \partial_t \vec n_v \\ &\stackrel{(\ref{nHatD})}{=}& \dot n_v \hat n_v + n_v \dot{\hat n}_v. \label{nHatDD} \end{eqnarray} The computation of $\dot{\hat n}_v$ looks as follows [we have tried to gain some insights using the (second part within the) Mathematica notebook 'integrand4.nb' in the github repository [cf.~Sec.~\ref{coding}] in the folder \texttt{/exactRWA/programs/variational\_minimization}]. First define a slightly unintuitive acceleration vector, which is the temporal derivative of the \emph{unit vector} ${\hat n}_v$, i.e., \begin{eqnarray} \vec n_a = \dot{\hat n}_v = a \hat n + b \hat n_{\perp} \label{nVHatDot} \end{eqnarray} where $\vec n_a = n_a \hat n_a$. We define $\hat n_{\perp} = \hat n \times \hat n_v$, which guarantees $\hat n_{\perp} \perp \hat n$ and $\hat n_{\perp} \perp \hat n_v$. We wish to express the integrand $f_\text{II}$ in terms of the three unit vectors \begin{eqnarray} \left\{\hat n, \hat n_v, \hat n_{\perp} \right \}, \label{system} \end{eqnarray} which are mutually perpendicular to one another. We then find \begin{eqnarray} a \stackrel{(\ref{nVHatDot})}{=} \vec n_a \cdot \hat n, \label{littlea} \end{eqnarray} and together with $n_a^2 = |\vec n_a|^2 \stackrel{(\ref{nVHatDot})}{=} a^2 + b^2$ we have \begin{eqnarray} b = \pm \sqrt{n_a^2 - a^2} = \pm \sqrt{n_a^2 - (\vec n_a \cdot \hat n)^2}. \label{littleb_Bad} \end{eqnarray} However, now we don't know the sign of $b$ so it's probably better to compute $b$ the same way as we computed $a$ in Eq.~(\ref{littlea}), \begin{eqnarray} b \stackrel{(\ref{nVHatDot})}{=} \vec n_a \cdot \hat n_\perp.
\label{littleb} \end{eqnarray} To be clear, the norm of the vector $\vec n_a$ is $n_a \stackrel{(\ref{nVHatDot})}{=} |\dot{\hat n}_v|$, and combining Eqs.~(\ref{nVHatDot}), (\ref{littlea}) and (\ref{littleb}) we obtain \begin{eqnarray} \vec n_a = (\vec n_a \cdot \hat n)\ \hat n + (\vec n_a \cdot \hat n_\perp) \ \hat n_{\perp}. \label{vec_a} \end{eqnarray} We can thus write the second derivative of the vector $\hat n$ as follows, \begin{eqnarray} \partial_t^2 \hat n &\stackrel{(\ref{nHatDD}), (\ref{nVHatDot})}{=}& \dot n_v \hat n_v + n_v \vec n_a \label{bad} \\ &\stackrel{(\ref{vec_a})}{=}& \dot n_v \hat n_v + n_v \left[(\vec n_a \cdot \hat n) \ \hat n + (\vec n_a \cdot \hat n_\perp) \ \hat n_{\perp} \right]. \label{ddotnHatApp} \end{eqnarray} Note that the reason why Eq.~(\ref{ddotnHatApp}) should be more desirable than Eq.~(\ref{bad}) is that it is expressed using the unit vectors given in Eq.~(\ref{system}). \subsection{Eigenvalue of (Traceless) 2 by 2 Matrix} The eigenvalues $\pm \lambda$ of a traceless $2\times 2$ matrix $M_2$ can be computed quite easily using the determinant. This becomes clear (recall that the determinant is basis-independent) as follows, \begin{eqnarray} \det M_2 = \det \text{diag} (\lambda, -\lambda) = -\lambda^2, \label{det} \end{eqnarray} because of which we have \begin{eqnarray} \text{eig}_+ M_2 = \sqrt{\lambda^2} \stackrel{(\ref{det})}{=} \sqrt{-\det M_2}.
\label{eigenvalueDet} \end{eqnarray} \section{Explicit formulas for effective Hamiltonian} \label{effH_appendix} In this appendix we give the explicit formulas for the Hamiltonian series (\ref{HeffSeries}) up to order $1/\omega^5$, \begin{eqnarray} \mathcal{H}_{\text{eff}}(t; \beta_0) &=& \sum_{k=0}^{\infty} h_{k}(t; \beta_0) (1/\omega)^k \nonumber \\ &=& \mathcal{H}_0(t; \beta_0) + \mathcal{H}_1(t; \beta_0) + \mathcal{H}_2(t; \beta_0) + \mathcal{H}_3(t; \beta_0) + \mathcal{H}_4(t; \beta_0) \nonumber \\ && + \mathcal{H}_5(t; \beta_0) + \mathcal{O}(1/\omega^6) \label{HEffective} \end{eqnarray} with $\mathcal{H}_i = h_i/\omega^i$ for $i = 0, 1, \ldots, 5$. For readability, below all dependencies on time and the gauge parameter $\beta_0$ of the Hamiltonian or the envelope functions are kept implicit. These formulas have been obtained assuming the on-resonant rotating-frame Hamiltonian (\ref{Hrot}), and have been determined following the recurrence procedure of Ref.~\cite{zeuch18}. The lowest-order Hamiltonian is simply given by the Hamiltonian of the standard rotating wave approximation, $\mathcal{H}_0 = \mathcal{H}_\text{RWA} = (H_1/4)\sigma_x$, as also given in Eq.~(\ref{HRWA}).
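To illustrate the $1/\omega$ hierarchy of this series, the constant-envelope coefficients listed at the end of this appendix can be evaluated numerically. The Python sketch below transcribes those formulas for $H_1(t)=H_1$ and prints the Frobenius norms of the terms; the parameter values ($H_1 = 1$, $\omega = 10$, $\beta_0 = 0.7$) are hypothetical.

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def h_eff_terms(H1, w, b):
    """Terms H_0 ... H_5 of the effective-Hamiltonian series for a
    constant drive envelope, transcribed from this appendix."""
    c, s = np.cos, np.sin
    return [
        (H1 / 4) * sx,
        H1**2 / (32 * w) * (1 - 2 * c(b)) * sz,
        H1**3 / (256 * w**2) * ((2 * c(b) - c(2 * b) - 2) * sx
                                + (2 * s(b) + s(2 * b)) * sy),
        H1**4 / (2048 * w**3) * (-2 * c(b) - 3 * c(2 * b) + 1) * sz,
        H1**5 / (16384 * w**4) * ((5 * c(b) - c(2 * b) - c(3 * b) - 9) * sx
                                  + (5 * s(b) + 4 * s(2 * b) + s(3 * b)) * sy),
        H1**6 / (786432 * w**5) * (18 * c(b) - 60 * c(2 * b)
                                   - 10 * c(3 * b) - 9) * sz,
    ]

norms = [np.linalg.norm(T) for T in h_eff_terms(H1=1.0, w=10.0, b=0.7)]
print(norms)  # each term is much smaller than the previous one
```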
\subsection{Time-dependent drive envelope} \label{HEff_genericH1} For a generic envelope $H_1(t)$, the lowest three corrections are \begin{eqnarray} \mathcal H_1 &=& \frac{H_1^2}{32 \omega}(1-2 \cos\beta_0)\sigma_z+ \frac{\dot H_1}{8 \omega} [\sin\beta_0\sigma_x+\cos\beta_0\sigma_y], \label{backslashH1}\\ \mathcal H_2 &=& \frac{H_1^3}{256 \omega ^2}[(-2 + 2 \cos\beta_0-\cos(2 \beta_0))\sigma_x+(2 \sin\beta_0+\sin(2 \beta_0))\sigma_y] \nonumber \\ && + \frac{3 H_1 \dot H_1}{32 \omega ^2} \sin\beta_0\sigma_z+ \frac{\ddot H_1}{16 \omega ^2}[\cos\beta_0\sigma_x-\sin\beta_0\sigma_y], \end{eqnarray} together with \begin{eqnarray} \mathcal{H}_3 &=& \frac{H_1^4}{2048 \omega ^3}(1-2 \cos(\beta_0)-3 \cos(2 \beta_0))\sigma_z \nonumber \\ && + \frac{H_1^2 \dot H_1}{1024 \omega ^3}[(9 \sin(2 \beta_0)-12 \sin(\beta_0))\sigma_x+(36 \cos(\beta_0)+9 \cos(2 \beta_0)-8)\sigma_y] \nonumber\\ && +\frac{\dot H_1^2}{128 \omega ^3}(6 \cos(\beta_0)+1)\sigma_z + \frac{H_1 \ddot H_1}{64 \omega ^3}(4 \cos(\beta_0)-1)\sigma_z \nonumber \\ && - \frac{\dddot H_1}{32 \omega ^3}[\sin(\beta_0)\sigma_x+\cos(\beta_0)\sigma_y].
\end{eqnarray} The corrections of order $1/\omega^4$, \begin{eqnarray} \mathcal{H}_4 &=& \frac{H_1^5}{16384 \omega ^4}[(5 \cos(\beta_0)-\cos(2 \beta_0)-\cos(3 \beta_0)-9)\sigma_x+(5 \sin(\beta_0)+4 \sin(2 \beta_0)+\sin(3 \beta_0))\sigma_y] \nonumber \\ && + \frac{45 H_1^3 \dot H_1}{8192 \omega ^4}(2 \sin(\beta_0)+\sin(2 \beta_0))\sigma_z \nonumber \\ && + \frac{5 H_1 \dot H_1^2}{2048 \omega ^4}[(3 \cos(2 \beta_0)-4 \cos(\beta_0))\sigma_x - (20 \sin(\beta_0)+3 \sin(2 \beta_0))\sigma_y] \nonumber \\ && +\frac{5 H_1^2 \ddot H_1}{4096 \omega ^4}[(-8 \cos(\beta_0)+5 \cos(2 \beta_0)+8)\sigma_x-(24 \sin(\beta_0)+5 \sin(2 \beta_0))\sigma_y] \nonumber \\ && -\frac{5 \dot H_1 \ddot H_1 \sin(\beta_0) \sigma_z}{64 \omega ^4} - \frac{5 H_1 \dddot H_1 \sin(\beta_0)}{128 \omega ^4}\sigma_z+ \frac{H_1^{(4)}}{64 \omega ^4}[-\cos(\beta_0)\sigma_x+\sin(\beta_0)\sigma_y], \end{eqnarray} and of order $1/\omega^5$, \begin{eqnarray} \mathcal{H}_5 &=& \frac{H_1^6}{786432 \omega ^5}(18 \cos(\beta_0)-60 \cos(2 \beta_0)-10 \cos(3 \beta_0)-9)\sigma_z \nonumber \\ && + \frac{H_1^4 \dot H_1}{196608 \omega ^5}[(-285 \sin(\beta_0)+150 \sin(2 \beta_0)+55 \sin(3 \beta_0))\sigma_x \nonumber \\ && +(825 \cos(\beta_0)+330 \cos(2 \beta_0)+55 \cos(3 \beta_0)-297)\sigma_y] \nonumber \\ && + \frac{H_1^2 \dot H_1^2}{32768 \omega ^5}(1000 \cos(\beta_0)+285 \cos(2 \beta_0)-104)\sigma_z\nonumber \\ && + \frac{\dot H_1^3}{8192 \omega ^5}[(40 \sin(\beta_0)-15 \sin(2 \beta_0))\sigma_x+(-200 \cos(\beta_0)-15 \cos(2 \beta_0)-24)\sigma_y] \nonumber \\ && +\frac{3 H_1^3 \ddot H_1}{16384 \omega^5}(65 \cos(\beta_0)+25 \cos(2 \beta_0)-16)\sigma_z\nonumber \\ && + \frac{H_1 \dot H_1 \ddot H_1}{8192 \omega ^5}[(160 \sin(\beta_0)-95 \sin(2 \beta_0))\sigma_x+(-800 \cos(\beta_0)-95 \cos(2 \beta_0)+72)\sigma_y]\nonumber \\ && + \frac{\ddot H_1^2}{512 \omega ^5}(1-20 \cos(\beta_0))\sigma_z+ \frac{H_1^2 \dddot H_1}{16384 \omega ^5}[(80 \sin(\beta_0)-65 \sin(2 \beta_0))\sigma_x \nonumber \\ && +(-400 \cos(\beta_0)-65 \cos(2 \beta_0)+64)\sigma_y]+ \frac{\dot H_1 \dddot H_1}{256 \omega ^5}(-15 \cos(\beta_0)-1)\sigma_z \nonumber \\ && + \frac{H_1 H_1^{(4)}}{256 \omega ^5}(1-6 \cos(\beta_0))\sigma_z+ \frac{H_1^{(5)}}{128 \omega ^5}[\sin(\beta_0)\sigma_x+\cos(\beta_0)\sigma_y], \end{eqnarray} are also used in our calculation. \subsection{Constant drive envelope} \label{HEff_constantH1} For a constant amplitude $H_1(t) = H_1$ the formulas above simplify as follows, \begin{eqnarray} \mathcal{H}_1 &=& \frac{H_1^2}{32 \omega}(1-2 \cos(\beta_0))\sigma_z, \\ \mathcal{H}_2 &=& \frac{H_1^3}{256 \omega ^2}[(2 \cos(\beta_0)-\cos(2 \beta_0)-2)\sigma_x+(2 \sin(\beta_0)+\sin(2 \beta_0))\sigma_y], \\ \mathcal{H}_3 &=& \frac{H_1^4}{2048 \omega ^3}(-2 \cos(\beta_0)-3 \cos(2 \beta_0)+1)\sigma_z, \\ \mathcal{H}_4 &=& \frac{H_1^5}{16384 \omega ^4}[(5 \cos(\beta_0)-\cos(2 \beta_0)-\cos(3 \beta_0)-9)\sigma_x + \nonumber \\ && \qquad \qquad \qquad \qquad \qquad (5 \sin(\beta_0)+4 \sin(2 \beta_0)+\sin(3 \beta_0))\sigma_y], \\ \mathcal{H}_5 &=& \frac{H_1^6}{786432 \omega ^5}(18 \cos(\beta_0)-60 \cos(2 \beta_0)-10 \cos(3 \beta_0)-9)\sigma_z.
\end{eqnarray} We furthermore give the Hamiltonian coefficients $\mathcal{H}_6$ and $\mathcal{H}_7$ for constant driving when considering the next two orders [not explicitly shown in Eq.~(\ref{HEffective})], \begin{eqnarray} \mathcal{H}_6 &=& \frac{H_1^7}{37748736 \omega ^6}[(252 \cos(\beta_0)+84 \cos(2\beta_0)-120 \cos(3 \beta_0)-15 \cos(4 \beta_0)-1224)\sigma_x \nonumber \\ && \quad \qquad \qquad \qquad \qquad + (252 \sin(\beta_0)+336 \sin(2 \beta_0)+160 \sin(3 \beta_0)+15 \sin(4 \beta_0))\sigma_y], \\ \mathcal{H}_7 &=& \frac{H_1^8}{1811939328 \omega ^7}(10152 \cos(\beta_0)-4368 \cos(2 \beta_0)-1540 \cos(3 \beta_0)-105 \cos(4 \beta_0)-5076)\sigma_z. \nonumber\\ \end{eqnarray} \end{document}
\begin{document} \title{Farey neighbors and hyperbolic Lorenz knots} \author{Paulo Gomes\thanks{\'Area Departamental de Matem\'atica, Instituto Superior de Engenharia de Lisboa, e-mail: [email protected]}, Nuno Franco\thanks{CIMA-UE and Departamento de Matem\'atica, Universidade de \'Evora, e-mail: [email protected]} and Lu\'is Silva\thanks{CIMA-UE and \'Area Departamental de Matem\'atica, Instituto Superior de Engenharia de Lisboa, e-mail: [email protected]}} \maketitle \begin{abstract} Based on the symbolic dynamics of Lorenz maps, we prove that, provided a conjecture due to Morton is true, the Lorenz knots associated to orbits of points in the renormalization intervals of Lorenz maps with reducible kneading invariant of type $(X,Y)*S$, where the sequences $X$ and $Y$ are Farey neighbors satisfying certain conditions, are hyperbolic. \end{abstract} \section{Introduction} \label{sec:intro} \subsection*{Lorenz knots} \label{sec:lorknots} \par \emph{Lorenz knots} are the closed (periodic) orbits in the Lorenz system \cite{Lorenz63} \begin{align} \label{eq:lorsys} x' &= -10 x +10 y \nonumber \\ y' &= 28 x -y -xz \\ z' &= -\frac{8}{3} z +xy \nonumber \end{align} while \emph{Lorenz links} are finite collections of (possibly linked) Lorenz knots. The systematic study of Lorenz knots and links was made possible by the introduction of the \emph{Lorenz template} or knot-holder by Williams in \cite{Williams77} and \cite{Williams79}. It is a branched 2-manifold equipped with an expanding semi-flow, represented in Fig. \ref{fig:lortemp}. It was first conjectured by Guckenheimer and Williams and later proved through the work of Tucker \cite{Tucker02} that every knot and link in the Lorenz system can be projected into the Lorenz template. Birman and Williams made use of this result to investigate Lorenz knots and links \cite{Birman83}. For a review on Lorenz knots and links, see also \cite{Birman11}.
A $T(p,q)$ torus knot is (isotopic to) a curve on the surface of an unknotted torus $T^2$ that intersects a meridian $p$ times and a longitude $q$ times. Birman and Williams \cite{Birman83} proved that every torus knot is a Lorenz knot. A satellite knot is defined as follows: take a nontrivial knot $C$ (companion) and a nontrivial knot $P$ (pattern) contained in a solid unknotted torus $T$ and not contained in a $3$-ball in $T$. A satellite knot is the image of $P$ under a homeomorphism that takes the core of $T$ onto $C$. A knot is hyperbolic if its complement in $S^3$ is a hyperbolic $3$-manifold. Thurston \cite{Thurston82} proved that a knot is hyperbolic \emph{iff} it is neither a satellite knot nor a torus knot. One of the goals in the study of Lorenz knots has been their classification into \emph{hyperbolic} and \emph{non-hyperbolic}, possibly further distinguishing torus knots from satellites. Birman and Kofman \cite{Birman09} listed hyperbolic Lorenz knots taken from a list of the simplest hyperbolic knots. In a previous article \cite{Gomes14}, we generated and tested for hyperbolicity, using the program \emph{SnapPy}, families of Lorenz knots that are a generalization of some of those that appear in this list, which led us to conjecture that the families tested are hyperbolic \cite{Gomes13}. Morton has conjectured \cite{Elrifai88},\cite{Dehornoy11} that all Lorenz satellite knots are cablings (satellites where the pattern is a torus knot) on Lorenz knots. In \cite{PhysicaD}, based on the work of El-Rifai \cite{Elrifai88}, we derived an algorithm to obtain Lorenz satellite braids, together with the corresponding words from symbolic dynamics. The first-return map induced by the semi-flow on the \emph{branch line} (the horizontal line in Fig. \ref{fig:lortemp}) is called a \emph{Lorenz map}.
If the branch line is mapped onto $[-1,1]$, then the Lorenz map $f$ becomes a one-dimensional map from $[-1,1] \setminus \{0\}$ onto $[-1,1]$, with one discontinuity at $0$ and strictly increasing in each of the subintervals $[-1,0[$ and $]0,1]$. Periodic orbits in the flow correspond to periodic orbits of the Lorenz maps, so the symbolic dynamics of the Lorenz maps provides a codification of the Lorenz knots. In \cite{DCDS}, using this codification, an operation over Lorenz knots directly related to the renormalization of Lorenz maps was introduced. In this work we prove that some families of knots, generated from torus knots through this operation, are hyperbolic. \begin{figure} \caption{The Lorenz template} \label{fig:lortemp} \end{figure} \subsection*{Lorenz braids} \label{sec:lorbraids} If the Lorenz template is cut open along the dotted lines in Fig. \ref{fig:lortemp}, then each knot and link on the template can be obtained as the closure of an open braid on the cut-open template, which will be called the \emph{Lorenz braid} associated to the knot or link (\cite{Birman83}). These \emph{Lorenz braids} are simple positive braids (our definition of positive crossing follows Birman and is therefore opposed to a usual convention in knot theory). Each Lorenz braid is composed of $n=p+q$ strings, where each of the $p$ left or $L$ strings crosses over at least one (possibly all) of the $q$ right strings, with no crossings between strings within each subset. These sets can be subdivided into subsets $LL$, $LR$, $RL$ and $RR$ according to the position of the startpoints and endpoints of each string. An example of a Lorenz braid is shown in Fig. \ref{fig:lorbraid}, where we adopt the convention of drawing the overcrossing ($L$) strings as thicker lines than the undercrossing ($R$) strings. This convention will be used in other braid diagrams.
Each Lorenz braid $\beta$ is a simple braid, i.e., a braid such that all its crossings are positive and any two strings cross each other at most once, so it has an associated permutation $\pi$. This permutation has only one cycle \emph{iff} it is associated to a knot, and has $k$ cycles if it is associated to a link with $k$ components (knots). \begin{figure} \caption{A Lorenz braid} \label{fig:lorbraid} \end{figure} \subsection*{Symbolic dynamics for the Lorenz map} \label{sec:symbdynlor} Let $f^j=f \circ f^{j-1}$ be the $j$-th iterate of the Lorenz map $f$ and $f^0$ be the identity map. We define the \emph{itinerary} of a point $x$ under $f$ as the symbolic sequence $(i_f(x))_j$, $j=0,1,\ldots$ where $$(i_f(x))_j=\left\{\begin{array}{lll} L & \mathrm{ if } & f^j(x)<0\\ 0 & \mathrm{ if } & f^j(x)=0\\ R & \mathrm{ if } & f^j(x)>0. \end{array} \right.$$ The itinerary of a point in $[-1,1]\setminus \{0\}$ under the Lorenz map can either be an infinite word in the symbols $L,R$ or a finite word in $L,R$ terminated by a single symbol $0$ (because $f$ is undefined at $x=0$). The \emph{length} $|X|$ of a finite word $X = X_0 \ldots X_{n-1}0$ is $n$, so it can be written as $X = X_0 \ldots X_{|X|-1}0$. A word $X$ is periodic if $X=(X_0 \dots X_{p-1})^{\infty}$ for some $p>1$. If $p$ is the least integer for which this holds, then $p$ is the (least) period of $X$. The space $\Sigma$ of all finite and infinite words can be ordered in the lexicographic order induced by $L < 0 < R$: given $X, Y \in \Sigma$, let $k$ be the first index such that $X_k \neq Y_k$. Then $X<Y$ if $X_k < Y_k$ and $Y < X$ otherwise. The \emph{shift map} $s:\Sigma\setminus\{0\} \to \Sigma$ is defined as usual by $s(X_0X_1 \ldots)=X_1 \ldots$ (it just deletes the first symbol). From the definition above, an infinite word $X$ is periodic \emph{iff} there is $p>1$ such that $s^p(X)=X$.
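These definitions are straightforward to implement. The Python sketch below computes itineraries for the full (surjective) Lorenz map $f(x)=2x+1$ for $x<0$ and $f(x)=2x-1$ for $x>0$, a standard concrete choice used here only as an illustration; exact rational arithmetic avoids floating-point drift on periodic orbits.

```python
from fractions import Fraction

def lorenz_map(x):
    # full (surjective) symmetric Lorenz map: 2x + 1 for x < 0, 2x - 1 for x > 0
    return 2 * x + 1 if x < 0 else 2 * x - 1

def itinerary(x, steps):
    """First `steps` symbols of the itinerary of x; the word is terminated
    by '0' if the orbit hits the discontinuity."""
    word = ''
    for _ in range(steps):
        if x == 0:
            return word + '0'
        word += 'L' if x < 0 else 'R'
        x = lorenz_map(x)
    return word

x = Fraction(-3, 5)        # lies on a periodic orbit of period 4
print(itinerary(x, 8))     # LLRRLLRR, i.e. the periodic word (LLRR)^infinity
```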
The sequence $W,s(W),\ldots,s^{p-1}(W)$ will also be called the \emph{orbit} of $W$ and a word in the orbit of $W$ will be generally called a shift of $W$. A (finite or infinite) word $X$ is called \emph{L-maximal} if $X_0=L$ and for $k>0$, $X_k = L \Rightarrow s^k(X)\leq X$, and \emph{R-minimal} if $X_0=R$ and for $k>0$, $X_k = R \Rightarrow s^k(X) \geq X$. An infinite periodic word $(X_0 \ldots X_{n-1})^{\infty}$ with least period $n$ is L-maximal (resp. R-minimal) if and only if the finite word $X_0 \ldots X_{n-1}0$ is L-maximal (resp. R-minimal). Therefore, there exists a bijective correspondence between the set of $L$-maximal (resp. $R$-minimal) finite words and the cyclic permutation classes of periodic words. For a finite word $W$, $n_L(W)$ and $n_R(W)$ will denote respectively the number of $L$ and $R$ symbols in $W$, and $n=n_L+n_R$ the length of $W$. Analogously, if $W$ is periodic with least period $n$ then we define $n_L(W)=n_L(W')$ and $n_R(W)=n_R(W')$, where $W'$ is the $L$-maximal finite word corresponding to $W$. Each periodic word is associated to a Lorenz braid (whose closure is a Lorenz knot), which can be obtained through the following procedure: given a periodic word $W$ with least period $n$, order the successive shifts $s(W),s^2(W),\ldots,s^n(W)=W$ lexicographically and associate them to startpoints and endpoints in the associated Lorenz braid, with points corresponding to words starting with $L$ lying on the left half and points corresponding to words starting with $R$ on the right half. Each string in the braid connects the startpoint corresponding to $s^k(W)$ to the endpoint corresponding to $s^{k+1}(W)$. Fig. \ref{fig:lorbraid-word} exemplifies this procedure for $W=(LRRLR)^{\infty}$.
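The procedure just described is easy to automate. A Python sketch, applied to the word $W=(LRRLR)^{\infty}$ of the example (represented by one period of the word):

```python
def braid_permutation(w):
    """Lorenz braid permutation for the periodic word (w)^infinity.

    `w` is one period, e.g. "LRRLR" (assumed aperiodic). Positions are
    indexed by the lexicographic order (L < R) of the shifts s^k(W); the
    string starting at the position of s^k(W) ends at that of s^{k+1}(W).
    """
    n = len(w)
    shifts = [w[k:] + w[:k] for k in range(n)]           # s^k(W), k = 0..n-1
    order = sorted(range(n), key=lambda k: shifts[k])    # 'L' < 'R' in ASCII
    pos = {k: i for i, k in enumerate(order)}            # shift index -> position
    return {pos[k]: pos[(k + 1) % n] for k in range(n)}

pi = braid_permutation("LRRLR")
print(pi)  # {1: 4, 4: 2, 2: 0, 0: 3, 3: 1}: the single cycle 0->3->1->4->2->0
```

For $W=(LRRLR)^{\infty}$ the resulting permutation consists of a single cycle, consistent with the fact that $W$ represents a knot rather than a multi-component link.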
\begin{figure} \caption{Lorenz braid corresponding to $W=(LRRLR)^{\infty}$} \label{fig:lorbraid-word} \end{figure} Each periodic orbit of the flow has a unique corresponding orbit in the Lorenz map, which in turn corresponds to the cyclic permutation class of one periodic word in the symbols $L,R$. Sometimes we will refer to the knot represented by an $L$-maximal or an $R$-minimal word, meaning the knot corresponding to the associated periodic word. The \emph{crossing number} of a knot $K$ is the smallest number of crossings in any diagram of $K$. The \emph{braid index} is the smallest number of strings among braids whose closure is $K$. A \emph{syllable} of a word is a subword of type $L^aR^b$ with maximal length. The \emph{trip number} $t$ of a periodic word $W$ with least period $n$ is the smallest number of syllables of all its subwords with length $n$. The trip number of a Lorenz link is the sum of the trip numbers of its components. Franks and Williams \cite{Franks87}, followed by Waddington \cite{Waddington96}, proved that the braid index of a Lorenz knot is equal to its trip number. This result had previously been conjectured by Birman and Williams \cite{Birman83}. \section{Syllable permutations of torus knots words} \label{sec:torsylperm} It was proved in \cite{Birman83} that all torus knots are Lorenz knots. The torus knot $T(p,q)$ is the closure of a Lorenz braid in $n=p+q$ strings, with $p$ left or $L$ strings that cross over $q$ right or $R$ strings, such that each $L$ string crosses over all the $R$ strings. This Lorenz braid of a torus knot thus has the maximum number of crossings ($pq$) for a Lorenz braid with $p$ $L$ strings and $q$ $R$ strings. Since $T(q,p)=T(p,q)$ we will only consider torus knots $T(p,q)$ with $p<q$. The structure of the Lorenz braid of a torus knot $T(p,q)$ ($p<q$) is sketched in Fig. \ref{fig:torbraid}, where only the first and last $L$ strings and some of the $R$ strings are drawn.
The remaining $L$ ($R$) strings are parallel to the $L$ ($R$) strings shown. \begin{figure} \caption{Lorenz braid of $T(p,q) (p<q)$} \label{fig:torbraid} \end{figure} Lorenz knots corresponding to orbits in the Lorenz template which are represented by evenly distributed words in the alphabet $\{L,R\}$ are torus knots \cite{Birman83}. On the other hand, given a torus knot $T(p,q)$ there is an evenly distributed word with $n_L=p$, $n_R=q$, that represents it. There is thus a bijection between torus knots and cyclic permutation classes of evenly distributed words. We will call the $L$-maximal word that represents $T(p,q)$ the \emph{standard word} $W(p,q)$ for $T(p,q)$. \begin{defin} We define $P(p,q)$ as the set of $L$-maximal words resulting from permutations of syllables of the standard word $W(p,q)$ for $T(p,q)$. \end{defin} In \cite{PhysicaD} we proved the following results: \begin{prop}\label{theor:unique} For each word $W$ in $P(p,q)$, $4<p<q$, distinct from $W(p,q)$, there is at most one torus knot $T(p,q'),\ q'<q$, with the same braid index and genus as the closure of the braid corresponding to $W$. \end{prop} \begin{prop}\label{prop:notorus1} For each odd integer $p>4$ and every integer $k>0$, the sets $P(p,q)$, for $q=kp+2$ and $q=(k+1)p-2$, contain no words corresponding to braids whose closure is a torus knot, besides $W(p,q)$. \end{prop} \begin{prop}\label{prop:notorus2} If $p>4$ is even and not a multiple of $3$, then for any integer $k$ the sets $P(p,kp+3)$ and $P(p,(k+1)p-3)$ contain no words corresponding to braids whose closure is a torus knot. Also, if $p<12$ and $p$ is even, then $P(p,q)$ contains no words other than $W(p,q)$ corresponding to torus knots. \end{prop} In \cite{PhysicaD} we derived an algorithm, based on the work of El-Rifai \cite{Elrifai99}, to obtain Lorenz satellite braids, together with their corresponding aperiodic words, and proved that none of the words in the sets $P(p,q)$ can be obtained through this procedure.
We thus conclude the following result. \begin{prop}\label{satellite} If Morton's conjecture is true then the Lorenz knots corresponding to syllable permutations of standard torus words, that is, the knots corresponding to words in the sets $P(p,q)$, are not satellites. \end{prop} So we can conclude from Propositions \ref{prop:notorus1} and \ref{prop:notorus2} that, if Morton's conjecture is true, then the words in the sets $P(p,kp+2)$, $P(p,(k+1)p-2)$ for $p$ odd and $P(p,kp+3)$, $P(p,(k+1)p-3)$ for $p$ even and not a multiple of $3$, distinct from the standard torus word $W(p,q)$, correspond to hyperbolic Lorenz knots. Moreover, we have recently performed an extensive computational test \cite{Gomes14}, in which we computed the volumes of all knot complements corresponding to words in the (non-empty) sets $P(p,q)$ with $5\leq p \leq 19$ and $6 \leq q \leq 100$. We found all of them to be hyperbolic, with the expected exception of the torus knots $T(p,q)$ corresponding to the standard words $W(p,q)$. \section{Farey pairs} In \cite{DCDS}, an operation over Lorenz links directly related to the renormalization of Lorenz maps was introduced. Generically, Lorenz maps are one-dimensional maps, $g:[-1,1]\rightarrow [-1,1]$, with a single discontinuity at $0$, increasing in both continuity intervals and such that $g(\pm 1)=\pm 1$. In particular, the first-return map from the original Lorenz template is a Lorenz map in this sense, with the particularity of being surjective in both continuity intervals. If the critical orbits are finite, then Lorenz maps generate sub-Lorenz templates, see \cite{GHS} and \cite{ChSF}. These templates are contained in the Lorenz template, so all knots in them are Lorenz knots.
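As a concrete illustration, the critical orbits of a simple Lorenz map can be followed numerically. The Python sketch below uses a piecewise-linear Lorenz map with slope $3/2$ (a hypothetical example chosen only for the illustration) and computes the itineraries of its two critical values $g(0^-)$ and $g(0^+)$, each prefixed by the side from which the limit is taken.

```python
def itin(g, x, steps):
    """First `steps` itinerary symbols of x under the Lorenz map g."""
    word = ''
    for _ in range(steps):
        word += 'L' if x < 0 else 'R'
        x = g(x)
    return word

# piecewise-linear Lorenz map with hypothetical slope 3/2; g(-1) = -1, g(1) = 1
s = 1.5
g = lambda x: s * x + (s - 1) if x < 0 else s * x - (s - 1)

# critical values g(0^-) = s - 1 and g(0^+) = 1 - s
X = 'L' + itin(g, s - 1, 8)
Y = 'R' + itin(g, 1 - s, 8)
print(X, Y)  # the two words are mirror images, as the map is odd-symmetric
```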
The combinatorics of a Lorenz map $g$, as well as the corresponding sub-Lorenz template, are completely determined by its kneading invariant, i.e., by the pair $(X,Y)$, where $X=Li_g(\lim_{x\rightarrow 0^-}g(x))$ and $Y=Ri_g(\lim_{x\rightarrow 0^+}g(x))$ are the critical itineraries. We say that a pair $(X,Y)\in \Sigma \times \Sigma$ is admissible if it is the kneading invariant of some Lorenz map $g$. The following result was proved in \cite{SSR}. \begin{prop} A pair $(X,Y)\in \Sigma ^2$ is admissible if and only if the following conditions are verified: \begin{enumerate} \item For any $Z\in \lbrace X,Y \rbrace$, if $Z_i=L$ then $\sigma^i(Z)\leq X$. \item For any $Z\in \lbrace X,Y \rbrace$, if $Z_i=R$ then $\sigma^i(Z)\geq Y$. \item The previous inequalities are strict if any of the words involved is finite. \end{enumerate} \end{prop} For an admissible pair of finite words $(X,Y)\in \Sigma \times \Sigma$ and a finite word $S\in \Sigma$, we define the $*$-product $$(X,Y)*S=\overline{S_0}\ldots \overline{S_{|S|-1}}0,$$ where $$\overline{S_j}=\left\lbrace \begin{array}{ll} X_0\ldots X_{|X|-1} & \text{ if } S_j =L \\ Y_0\ldots Y_{|Y|-1} & \text{ if } S_j =R \end{array} \right. $$ Words of type $(X,Y)*S$ are the itineraries of points in the renormalization intervals of renormalizable Lorenz maps, and the structure of their corresponding Lorenz knots was studied in \cite{DCDS}, as a geometric construction depending on the Lorenz link defined by the pair $(X,Y)$ and on the Lorenz knot defined by $S$. On the other hand, by \cite{TW}, the evenly distributed words in $\Sigma$ are exactly those that cannot be written as $(X,Y)*S$ for some admissible pair $(X,Y)$, so torus knots correspond to words that are irreducible with respect to the $*$-product, and the hyperbolic and satellite Lorenz knots are generated under the geometric construction derived from the $*$-product.
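The $*$-product is straightforward to implement on finite words. In the Python sketch below, finite words are strings over $\{L,R\}$ ended by the terminal symbol $0$; the pair $X=LR0$, $Y=RLL0$ and the word $S=LRR0$ are an illustrative choice (here $Y$ happens to be the $R$-minimal version $m(LRL0)$ of $LRL0$, cf. the construction in the next section).

```python
def star(X, Y, S):
    """*-product (X, Y) * S for finite words (strings ending in '0')."""
    blocks = {'L': X[:-1], 'R': Y[:-1]}    # drop the terminal '0' of X and Y
    return ''.join(blocks[c] for c in S[:-1]) + '0'

Z = star('LR0', 'RLL0', 'LRR0')
print(Z)  # LRRLLRLL0
```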
The $L$-maximal and $R$-minimal evenly distributed words can be generated recursively in the Symbolic Farey trees constructed below. First we define the $L$-maximal symbolic Farey tree, $\mathcal{F}^-=\cup_{i=0}^{\infty}\mathcal{F}^-_i$, where $\mathcal{F}_0^-=\left\lbrace L0 \right\rbrace$ and, for all $n$, $$\begin{array}{c} \mathcal{F}^ -_{n+1}= \mathcal{F}^-_n\cup\lbrace LR^{n+1}0\rbrace \cup\\ \left\lbrace Y_0\ldots Y_{|Y|-1}X_0\ldots X_{|X|-1}0 : X <Y \text{ are consecutive words in }\mathcal{F}^-_n \right\rbrace. \end{array} $$ Now we say that two $L$-maximal words $X<Y$ are Farey neighbors if there is some $n$ such that they are consecutive words in $\mathcal{F}^-_n$. So we have $\mathcal{F}^-$: $$\begin{array}{ccccccccc} L0 & & & & & & & & \\ & & & & LR0 & & & & \\ & &LRL0& & & &LRR0& & \\ &LRLL0& &LRLRL0& &LRRLR0& &LRRR0 & \\ & \:\:\: \vdots& &\:\:\: \vdots & &\:\:\: \vdots & & \:\:\: \vdots & \end{array} $$ We define analogously the $R$-minimal symbolic Farey tree, $\mathcal{F}^+=\cup_{i=0}^{\infty}\mathcal{F}^+_i$, where $\mathcal{F}_0^+=\left\lbrace R0 \right\rbrace$ and, for all $n$, $$\begin{array}{c} \mathcal{F}^ +_{n+1}= \mathcal{F}^+_n\cup\lbrace RL^{n+1}0\rbrace \cup\\ \left\lbrace X_0\ldots X_{|X|-1}Y_0\ldots Y_{|Y|-1}0 : X <Y \text{ are consecutive words in }\mathcal{F}^+_n \right\rbrace. \end{array} $$ Finally we have $\mathcal{F}^+$: $$\begin{array}{ccccccccc} & & & & & & & &R0 \\ & & & & RL0 & & & & \\ & &RLL0& & & &RLR0& & \\ &RLLL0& &RLLRL0& &RLRRL0& &RLRR0 & \\ &\:\:\: \vdots & &\:\:\: \vdots & & \:\:\: \vdots& & \:\:\: \vdots & \end{array} $$ For a word $X\in\mathcal{F}^-$, we may define its $R$-minimal version $m(X)=\min\lbrace X_j\ldots X_{|X|-1}X_0\ldots X_{j-1}0 : X_j=R\rbrace $. It is immediate to observe that each word $Y\in\mathcal{F}^+\setminus\mathcal{F}^+_0$ is obtained as $m(X)$, where $X\in\mathcal{F}^-\setminus\mathcal{F}^-_0$ is the word occupying the same position in $\mathcal{F}^-$.
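The recursive construction of $\mathcal{F}^-$ can be reproduced in a few lines of Python, using the lexicographic order induced by $L < 0 < R$:

```python
def farey_minus(depth):
    """Levels F^-_0 ... F^-_depth of the L-maximal symbolic Farey tree,
    returned as one lexicographically sorted list (order L < 0 < R)."""
    RANK = {'L': 0, '0': 1, 'R': 2}
    key = lambda w: [RANK[c] for c in w]
    level = ['L0']                                  # F^-_0
    for n in range(depth):
        new = ['L' + 'R' * (n + 1) + '0']           # the word L R^{n+1} 0
        for X, Y in zip(level, level[1:]):          # consecutive words X < Y
            new.append(Y[:-1] + X[:-1] + '0')       # Y_0...Y_{|Y|-1} X_0...X_{|X|-1} 0
        level = sorted(level + new, key=key)
    return level

print(farey_minus(2))  # ['L0', 'LRL0', 'LR0', 'LRR0']
```

The depth-3 level reproduces the row $LRLL0$, $LRLRL0$, $LRRLR0$, $LRRR0$ of the tree displayed above.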
\begin{defin} A Farey pair is a pair $(X,Y)$ where $Y=m(S)$, $X,S\in \mathcal{F}^-$, and $X$ and $S$ are Farey neighbours with $S<X$. \end{defin} From the point of view of the genealogy of Lorenz words, see \cite{SSR}, words of type $(X,Y)*S$, where $(X,Y)$ is a Farey pair, are the ``first'' reducible words. In the rest of this paper, we will prove that, if Morton's conjecture is true, then those words never correspond to satellite Lorenz knots and that, for some specific families of Farey pairs $(X,Y)$, they all correspond to hyperbolic knots. However, the computational tests performed in \cite{Gomes14} lead us to conjecture that all words of this type correspond to hyperbolic knots. \begin{lem}\label{farneighadmiss} If $(X,Y)$ is a Farey pair, then $(X,Y)$ is admissible. \end{lem} \begin{proof} The proof follows immediately from the construction of the symbolic Farey trees. \end{proof} \begin{thm}\label{fareyneighstarprod} Let $(X,Y)$ be a Farey pair such that $t(X) > 1$ and $t(Y) > 1$. Let $p_1 = \min \{n_L(X),n_R(X)\}$, $q_1 = \max \{n_L(X),n_R(X)\}$, $p_2 = \min \{n_L(Y),n_R(Y)\}$, $q_2 = \max \{n_L(Y),n_R(Y)\}$, with $q_1=kp_1+r_1\ (0<r_1<p_1)$ and $q_2=kp_2+r_2\ (0<r_2<p_2)$. The Lorenz knots associated to $X$ and $Y$ are respectively the torus knots $T(p_1,q_1)$ and $T(p_2,q_2)$. Then, for any finite aperiodic word $S\in\Sigma$, $Z=(X,Y)*S$ is a nontrivial syllable permutation of the standard evenly distributed word associated to the torus knot $T(p,q)$, where $p=n_L(S)p_1 + n_R(S)p_2$, $q=n_L(S)q_1 + n_R(S)q_2=kp+r$ and $r=n_L(S)r_1 + n_R(S)r_2$, with $1<r<p-1$. \end{thm} \begin{proof} $Z=(\overline{S}_0\overline{S}_1 \dots \overline{S}_{|S|-1})^{\infty}$ where $$\overline{S}_i= \begin{cases} X_0 X_1 \dots X_{p_1+q_1-1} &\text{if } S_i=L\\ Y_0 Y_1 \dots Y_{p_2+q_2-1} &\text{if } S_i=R \end{cases}.$$ First assume $n_R(X)<n_L(X)$, $n_R(Y)<n_L(Y)$. $X$ and $Y$ must have the form $X = LRL^k \dots RL^k0$, $Y=RL^{k+1} \dots RL^k0$.
In $Z$, pairs of consecutive subwords are therefore of one of the following types: \begin{itemize} \item $XX=LRL^k \dots RL^k LRL^k \dots RL^k = LRL^k \dots RL^{k+1}RL^k \dots RL^k$ \item $XY=LRL^k \dots RL^k RL^{k+1} \dots RL^k$ \item $YX=RL^{k+1} \dots RL^k LRL^k \dots RL^k=RL^{k+1} \dots RL^{k+1}RL^k \dots RL^k$ \item $YY=RL^{k+1} \dots RL^k RL^{k+1} \dots RL^k$ \end{itemize} Therefore, any shift of $Z$ starting with an $R$ has syllables of only two types, $RL^k$ and $RL^{k+1}$, and must therefore be a syllable permutation of the torus knot $T(p,q)$, where $p=n_R(Z)=n_L(S) p_1+n_R(S) p_2$ and $q=n_L(Z)=n_L(S) q_1+n_R(S) q_2$. Since $Z$ is reducible, $Z$ is not in the Farey tree and the syllable permutation must be nontrivial. Finally, $q = n_L(S) (kp_1+r_1) + n_R(S) (kp_2+r_2) = kp+r$, where $r=n_L(S) r_1 + n_R(S)r_2$. Since $0<r_1<p_1$ and $0<r_2<p_2$ and $r_1,r_2$ are integers, $1<r<p-1$. Now assume $n_L(X)<n_R(X)$, $n_L(Y)<n_R(Y)$. Since $X=LR^{k+1} \dots LR^k0$ and $Y= RLR^k \dots LR^k0$, pairs of consecutive subwords in $Z$ will now be of one of the following types: \begin{itemize} \item $XX=LR^{k+1} \dots LR^k LR^{k+1} \dots LR^k$ \item $XY=LR^{k+1} \dots LR^k RLR^k \dots LR^k = LR^{k+1} \dots LR^{k+1} LR^k \dots LR^k$ \item $YX=RLR^k \dots LR^k LR^{k+1} \dots LR^k$ \item $YY=RLR^k \dots LR^k RLR^k \dots LR^k = RLR^k \dots LR^{k+1} LR^k \dots LR^k$ \end{itemize} In this case, any shift of $Z$ starting with an $L$ has syllables of only two types, $LR^k$ and $LR^{k+1}$, and is therefore a syllable permutation of the torus knot $T(p,q)$, where $p=n_L(Z)=n_L(S) p_1+n_R(S) p_2$ and $q=n_R(Z)=n_L(S) q_1+n_R(S) q_2$. Again, $Z$ is reducible and therefore the permutation is nontrivial. Using the same argument as in the previous case we conclude that $1<r<p-1$.
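The two-syllable-type claim can be checked mechanically on a concrete instance. The helper below is ours and purely illustrative: it rotates a word to an $R$-starting shift and splits it into syllables $RL^a$. For the body of $Z=(X,Y)*S$ with $X=L(RL)^30$, $Y=RL^2RL0$ and $S=LR0$ (so $k=1$), only the syllables $RL$ and $RL^2$ occur, and the number of syllables gives the trip number $p=5$:

```python
import re

def syllables(word):
    """Split an L/R word into syllables of the form R L^a, reading from
    its first R-starting rotation (the word must contain an R)."""
    j = word.index("R")
    rot = word[j:] + word[:j]
    return re.findall(r"RL*", rot)

# Body of Z = (X,Y)*S for X = L(RL)^3 0, Y = RL^2 RL 0, S = LR0:
z = "LRLRLRL" + "RLLRL"
syl = syllables(z)
# Only the syllable types RL and RL^2 occur; the trip number is p = 5.
```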
Now, since for every Farey pair $(X,Y)$ satisfying $t(X)>1,\ t(Y)>1$ we have either $Y < X < LR0$ or $LR0 < Y < X$, and therefore either $n_L(X)<n_R(X)$, $n_L(Y)<n_R(Y)$ or $n_R(X)<n_L(X)$, $n_R(Y)<n_L(Y)$, the result is proved for any Farey pair. \end{proof} So, since all torus knots have a corresponding word in the symbolic Farey trees, from Proposition \ref{satellite}, we conclude that, for Farey pairs $(X,Y)$, if $(X,Y)*S$ generates ``new'' knots, then they are hyperbolic. Although extensive computational tests indicate that all knots corresponding to pairs $(X,Y)*S$ for any Farey pair $(X,Y)$ are indeed hyperbolic, the results from \cite{PhysicaD} only allow us to demonstrate this for the following families. \begin{corollary}\label{hyperbolicstarprod} The Lorenz knots associated to the following families, all of which are of the type defined in Theorem \ref{fareyneighstarprod}, are hyperbolic: \begin{enumerate} \item $\left( L(RL^k)^{n+1}0, RL^{k+1}(RL^k)^{n-1}0 \right) \ast (LR)^{\infty}$, $k>0,\ n>1$ \item $\left( LRL^k (RL^{k+1})^{n-2} RL^k0, (RL^{k+1})^n RL^k0 \right) \ast (LR)^{\infty}$, $k>0,\ n>1$ \item $\left( L(RL^k)^n0, RL^{k+1} (RL^k)^{n-2} RL^{k+1} (RL^k)^{n-1}0 \right) \ast (LR)^{\infty}$, $k>0,\ n >1 \text{ odd}$ \item $\left( L(RL^k)^n RL^{k+1} (RL^k)^n0, RL^{k+1} (RL^k)^{n-1}0 \right) \ast (LR)^{\infty}$, $k>0,\ n>1 \text{ odd}$ \item $\left( L(RL^k)^{n+1}0, RL^{k+1}(RL^k)^{n-1}0 \right) \ast (LRL)^{\infty}$, $k>0,\ n\text{ even },\ n>1$ \item $\left( L(RL^k)^{n+1}0, RL^{k+1}(RL^k)^{n-1}0 \right) \ast (LRR)^{\infty}$, $k>0,\ n\text{ odd },\ n>1$ \item $\left( LRL^k (RL^{k+1})^{n-2} RL^k0, (RL^{k+1})^n RL^k (RL^{k+1})^{n-1} RL^k0 \right) \ast (LR)^{\infty}$,\newline $k>0,\ n>1 \text{ odd}$ \item $\left( LRL^k (RL^{k+1})^{n-2} RL^k (RL^{k+1})^{n-2} RL^k0, (RL^{k+1})^{n-1} RL^k0 \right) \ast (LR)^{\infty}$,\newline $k>0,\ n>1 \text{ odd}$ \item $\left( LRL^k (RL^{k+1})^{n-2} RL^k0, (RL^{k+1})^n RL^k0 \right) \ast 
(LRL)^{\infty}$, $k>0,\ n >1 \text{ odd }$ \item $\left( LRL^k (RL^{k+1})^{n-2} RL^k0, (RL^{k+1})^n RL^k0 \right) \ast (LRR)^{\infty}$, $k>0,\ n > 1 \text{ even }$ \end{enumerate} \end{corollary} \begin{proof} From Lemma \ref{farneighadmiss}, to prove that each of the pairs $(X,Y)$ above is admissible it is sufficient to show that they are Farey pairs. We then use Theorem \ref{fareyneighstarprod} to prove that each product $(X,Y) \ast S$ is a syllable permutation of the standard $T(p,q)$ word, with $p,q$ satisfying either $q=kp+2$ or $q=(k+1)p-2$ with $p>4$ odd, or $q=kp+3$ or $q=(k+1)p-3$ with $p$ even and not a multiple of $3$. \begin{enumerate} \item For this family, $(X,Y) = \left( L(RL^k)^{n+1}0, RL^{k+1}(RL^k)^{n-1}0 \right)$, so $Y = m(L(RL^k)^n0)$ and $L(RL^k)^n0<X$. We start by remarking that $L0,LR0$ are Farey neighbours in level 1 of the maximal Farey tree. Therefore $L0,LRL0$ are also Farey neighbours. Since, for $k>1$, $LRL^k0=LRL^{k-1}L0$ is the concatenation of $LRL^{k-1}0$ and $L0$, the words $LRL^k0$ and $LRL^{k-1}0$ are Farey neighbours for all $k>0$. Finally, $L(RL^k)^{n+1}0=\left(LRL^{k-1}\right)^nLRL^k0$, therefore $L(RL^k)^{n+1}0, L(RL^k)^n0$ are Farey neighbours and $\big( L(RL^k)^{n+1}0,\allowbreak RL^{k+1}(RL^k)^{n-1}0 \big)$ is a Farey pair. Since $p_1=n_R(X)=n+1>2$, $p_2 = n_R(Y)=n>1$ and $n_L(S)=n_R(S)=1$, the trip number $p=p_1+p_2=2n+1$ is odd. We have $q_1=n_L(X)=(n+1)k+1=kp_1+1$ and $q_2=k+1+(n-1)k=kn+1=kp_2+1$, so $r=r_1+r_2=2$ and $(X,Y) \ast S$ is a syllable permutation of the standard word corresponding to $T(p,q)=T(p,kp+2)$, $p>4$ odd, and therefore corresponds to a hyperbolic knot. \item For this family, $(X,Y) = \big( LRL^k (RL^{k+1})^{n-2} RL^k0, (RL^{k+1})^n RL^k0 \big)$, so $Y = m(LRL^k (RL^{k+1})^{n-1} RL^k0)$ and $LRL^k (RL^{k+1})^{n-1} RL^k0<X$. From the previous case, $LRL^k0$ and $LRL^{k-1}0$ are Farey neighbours.
Since $LRL^kRL^k0=LRL^{k-1} LRL^k0$, $LRL^k RL^k0$ and $LRL^k0$ are also Farey neighbours and therefore $LRL^k RL^{k+1} RL^k0 = LRL^k RL^k LRL^k 0$ and $LRL^k0$ are also Farey neighbours. For $n>2$, $LRL^k (RL^{k+1})^{n-1} RL^k0 = LRL^k (RL^{k+1})^{n-2} RL^k\, LRL^k0$, so $LRL^k (RL^{k+1})^{n-2} RL^k0$ and $LRL^k (RL^{k+1})^{n-1} RL^k0$ are Farey neighbours and $(X,Y)$ is therefore a Farey pair. For this family $p_1=n$, $p_2=n+1$, so $p=2n+1>4$ is odd, while $r_1=p_1-1$, $r_2=p_2-1$ and therefore $r=p-2$. $(X,Y) \ast S$ is thus a syllable permutation of the standard word for $T(p,(k+1)p-2)$. \item In this case $(X,Y) = \left( L(RL^k)^n0, RL^{k+1} (RL^k)^{n-2} RL^{k+1} (RL^k)^{n-1}0 \right)$, so $Y =\allowbreak m( L(RL^k)^n L(RL^k)^{n-1}0)$ and $ L(RL^k)^n L(RL^k)^{n-1}0<X $. $L(RL^k)^n0$ and $L(RL^k)^{n-1}0$ are Farey neighbours since $L(RL^k)^n0=(LRL^{k-1})^{n-1}LRL^k0$ and $LRL^{k-1}0,LRL^k0$ are Farey neighbours as seen in 1. Therefore, $X$ and $L(RL^k)^n L(RL^k)^{n-1}0$ are also Farey neighbours and $(X,Y)$ is Farey. The trip number $p=n + 1 + n-2 +1 +n-1 =3n-1$ is even (since $n$ is odd) and not divisible by 3. Since $r_1=1$ and $r_2=2$, $r=3$ and $(X,Y) \ast S$ is a syllable permutation of the Farey word for $T(p,kp+3)$, $p$ even, $p>4$. \item In family 4, $(X,Y) = \left( L(RL^k)^n RL^{k+1} (RL^k)^n0, RL^{k+1} (RL^k)^{n-1}0 \right)$ and $Y=m(L(RL^k)^n0)$, so $X =L(RL^k)^{n+1} L(RL^k)^n0$ and from the previous proof $X$ and $L(RL^k)^n0$ are Farey neighbours and $(X,Y)$ is a Farey pair. The trip number $p=2n+1 + n = 3n+1$ is again even (since $n$ is odd) and not divisible by 3. Since $r_1=2$ and $r_2=1$, $r=3$ and $(X,Y) \ast S$ is a syllable permutation of the Farey word for $T(p,kp+3)$, $p$ even, $p>4$. \item $(X,Y)$ is identical to the pair in family 1 and therefore a Farey pair. For the trip number we have $p=n_L(S)p_1 + n_R(S)p_2=2(n+1) + n = 3n+2$, even and not divisible by $3$; $r=2r_1+r_2=3$. $(X,Y) \ast S$ is again a syllable permutation of the Farey word for $T(p,kp+3)$, $p$ even, $p>4$.
\item $(X,Y)$ is Farey (it is again identical to the pair in family 1). The trip number is given by $p=n_L(S)p_1 + n_R(S)p_2=n+1 + 2n =3n+1$ (even since $n$ is odd) while $r=r_1+2r_2=3$ and therefore $(X,Y) \ast S$ is once more a syllable permutation of the Farey word for $T(p,kp+3)$, $p$ even, $p>4$. \item Here $(X,Y) = \left( LRL^k (RL^{k+1})^{n-2} RL^k0, (RL^{k+1})^n RL^k (RL^{k+1})^{n-1} RL^k0 \right)$. We have $Y = m(LRL^k (RL^{k+1})^{n-1} RL^k (RL^{k+1})^{n-1} RL^k0) = m(XU)$ with $U= LRL^k (RL^{k+1})^{n-1} RL^k0$. Since $U= LRL^k (RL^{k+1})^{n-2} RL^k\, LRL^k0\allowbreak =LRL^{k-1}(LRL^k)^{n}0$ and $LRL^{k-1}0,LRL^k0$ are Farey neighbours as seen above, $X,U$ and therefore $X,XU$ are also Farey neighbours. Therefore, $(X,Y)$ is a Farey pair. For this family $p=n + 2n+1=3n+1$ is even (since $n$ is odd) and not divisible by 3, while $r=(p_1-1)+(p_2-2)=p-3$. Therefore, $(X,Y)*S$ is a syllable permutation of the Farey word corresponding to $T(p,(k+1)p-3)$. \item $X=LRL^k (RL^{k+1})^{n-2} RL^k (RL^{k+1})^{n-2} RL^k0$, $Y=m(LRL^k (RL^{k+1})^{n-2} RL^k0)$ so $X=VLRL^k (RL^{k+1})^{n-2} RL^k0$ with $V=LRL^k(RL^{k+1})^{n-3}RL^k$. Since $V=LRL^{k-1}(LRL^k)^{n-2}$, we can write $LRL^k (RL^{k+1})^{n-2} RL^k0=V\ LRL^k0$, so $X,LRL^k (RL^{k+1})^{n-2} RL^k0$ are Farey neighbours and $(X,Y)$ is a Farey pair. In this case $p=2n-1 + n= 3n -1$ is even (since $n$ is odd) and not a multiple of $3$. We have $r=(p_1-2)+(p_2-1)=p-3$. $(X,Y)*S$ is thus a syllable permutation of the Farey word corresponding to $T(p,(k+1)p-3)$. \item $(X,Y)$ is the same as in family 2, so $p_1=n$, $p_2=n+1$ and $p=2p_1+p_2=3n+1$ is even for $n$ odd, and not divisible by $3$. $r=2r_1+r_2=2(p_1-1)+p_2-1=p-3$, so $(X,Y)*S$ is a syllable permutation of the Farey word corresponding to $T(p,(k+1)p-3)$. \item $(X,Y)$ is again the same as in family 2, so $p_1=n$, $p_2=n+1$ and $p=p_1+2p_2= 3n+2$ is even (since $n$ is even) and not a multiple of $3$.
$r=r_1+2r_2=p_1-1 + 2(p_2-1)=p-3$, so $(X,Y)*S$ is a syllable permutation of the Farey word corresponding to $T(p,(k+1)p-3)$. \end{enumerate} \end{proof} \begin{remark}\label{exchangeLR} To these families we can add those obtained by exchanging $L$ and $R$ in all words. More precisely, let $\hat{X}$, $\hat{Y}$ and $\hat{S}$ be the words obtained by exchanging $L$ and $R$ in $X$, $Y$ and $S$, respectively. Then, due to the symmetry in the Farey tree, if $(X,Y)$ is a Farey pair and $(X,Y) * S$ corresponds to a hyperbolic knot, $(\hat{Y},\hat{X})$ is a Farey pair and $(\hat{Y},\hat{X}) * \hat{S}$ is an $R$-minimal word corresponding to the same hyperbolic Lorenz knot. \end{remark} \begin{remark} From the list of hyperbolic Lorenz knots presented by Birman and Kofman in \cite{Birman09}, eight can be represented by syllable permutations of torus knot words. All the words representing these eight knots belong to one of the families defined in Corollary \ref{hyperbolicstarprod}. \end{remark} \begin{remark} The families defined in Corollary \ref{hyperbolicstarprod} and Remark \ref{exchangeLR} contain all the words of type $(X,Y)*S$, up to shifting, that belong to one of the sets $P(p,kp+2)$, $P(p,(k+1)p-2)$, $P(p,kp+3)$ or $P(p,(k+1)p-3)$ of syllable permutations defined in Propositions \ref{prop:notorus1} and \ref{prop:notorus2}. \end{remark} \end{document}
\begin{document} \singlespacing \title{The probability of casting a pivotal vote in an Instant Runoff Voting election} \begin{abstract} I derive the probability that a vote cast in an Instant Runoff Voting election will change the election winner. I show that there can be two types of pivotal event: direct pivotality, in which a voter causes a candidate to win by ranking them, and indirect pivotality, in which a voter causes one candidate to win by ranking some other candidate. This suggests a reason that voters should be allowed to rank at most four candidates. I identify all pivotal events in terms of the ballots that a voter expects to be cast, and then I compute those probabilities in a common framework for voting games. I provide pseudocode, and work through an example of calculating pivotal probabilities. I then compare the probability of casting a pivotal vote in Instant Runoff Voting to single-vote plurality, and show that the incentives to vote strategically are similar in these two systems. \end{abstract} \section{Introduction} Instant Runoff Voting (abbreviated IRV, sometimes called ``Ranked Choice Voting'' or ``RCV'') is a ruleset for turning votes into winners and losers that has recently become a popular electoral system for large democratic elections. It was adopted in some Australian legislative elections more than a century ago, but was used in very few large elections until the last few years, when a wave of electoral reform, particularly in the United States, brought this system into broader use. IRV is now used for elections in Alaska up to the federal level, in congressional elections in Maine, and in the municipal elections of the country's largest city. While IRV is being rapidly implemented in important elections, there has been limited opportunity to study how voters behave in this type of election. A particular concern is how prevalent strategic voting would be in IRV.
I address this question by extending wasted vote models to estimate the probability that a ballot cast in an IRV election will change the election result. An IRV election works as follows: in an IRV race with $\kappa$ candidates, voters are allowed to rank some number of those candidates. The election administrator counts the number of times that each candidate was ranked first, and the candidate with the fewest first-place votes is eliminated. Then, each ballot whose top-ranked remaining candidate was just eliminated is transferred: the highest-ranked candidate on that ballot who has not been eliminated has one vote added to their vote total. Again, whichever candidate has the fewest votes is eliminated. This procedure is repeated until $\kappa-1$ candidates have been eliminated, and the remaining candidate wins the election. It is not well understood how people vote in such a system. One popular argument, commonly advanced by supporters of switching from traditional Single-Member District Plurality (SMDP) systems to IRV, is that the incentives and opportunities for strategic voting are different in the two types of systems. Strategic voters consult the expected votes of others in order to choose an optimal vote, a process that is sometimes seen as distorting the preferences of the electorate. Some proponents of reform argue that a virtue of IRV is that it holds less opportunity for strategic voting than SMDP, with \nl{\citet[ch. 5]{gehl20}} writing that ``with RCV, citizens can vote their actual preferences'', which ``liberates citizens to vote for the candidates they actually favor, instead of the duopoly candidate they are told they should support for strategic reasons''. On the other hand, there are good reasons to expect that IRV actually introduces new types of strategic opportunities. If voters have more slots on their ballots, then they have more opportunities to express their true preferences.
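To make the count just described concrete, here is a minimal sketch (function and variable names are mine, not from the paper; elimination ties are broken deterministically by candidate name purely so the example is reproducible) that tallies ranked ballots and eliminates candidates one at a time:

```python
from collections import Counter

def irv_winner(ballots):
    """IRV count as described above: ballots are lists of candidates in
    preference order.  Repeatedly drop the candidate with the fewest
    current votes, transferring each ballot to its highest-ranked
    remaining candidate, until one candidate remains."""
    remaining = {c for b in ballots for c in b}
    while len(remaining) > 1:
        totals = Counter({c: 0 for c in remaining})
        for b in ballots:
            for c in b:                      # highest-ranked remaining
                if c in remaining:
                    totals[c] += 1
                    break
        # Drop the candidate with the fewest votes (name breaks ties here).
        remaining.remove(min(totals, key=lambda c: (totals[c], c)))
    return remaining.pop()
```

For example, with four A-B ballots, three B-A ballots, and two C-B ballots, C is eliminated first, both C ballots transfer to B, and B then beats A five to four.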
At the same time, a longer ballot length might add tools to the strategic voter's toolbox: instead of strategizing about one vote in SMDP, under IRV they can strategize about several. Similarly, if strategic voting is viewed from the classical perspective of how probable it is that an individual voter can determine the election winner by creating or breaking a tie, then the structure of IRV suggests that IRV might provide more opportunities than SMDP. SMDP has just one opportunity for a tie to occur, between the first-place candidate and some other candidate, but the iterated eliminations in IRV suggest that perhaps there are multiple opportunities for candidates to have equal vote totals, and hence more opportunities for one ballot to change the result. These are claims about the relative size of the probability that a voter casts a pivotal vote in SMDP compared to IRV, but comparing those quantities requires answering a simple question: what is the probability that a vote cast in an IRV election will change the outcome of that election? This question of pivotal probabilities was first posed about SMDP nearly half a century ago, and various frameworks for estimating the quantity have been explored in detail since then \nl{\citep{riker68,cox94,mebane19,eggers20,vasselai21b}}. Pivotal probabilities have also been extended to proportional systems, and even the alternative ranked-choice system of Borda count \nl{\citep{baltz22,cox96}}. And indeed, the set of all pivotal outcomes in a three-candidate IRV race, together with a model of the probability of each outcome arising in a realistic election, has been identified \nl{\citep{bouton13,eggers21b}}. However, no attempt has yet been made to derive the pivotal probability of a ballot cast in an IRV race with any number of candidates and any ballot length.\footnote{By ``ballot length'' I mean the number of candidates that each voter is allowed to rank to determine the single winner of an IRV election. 
I will also use ``ballot'' to mean just the race for the office under consideration, recognizing that a ballot may have multiple offices on it, but lacking a simple term for ``just the part of a ballot that corresponds to the IRV race under consideration''.} Generalizing the three-candidate case is not a straightforward extension because that case could be exhaustively enumerated, whereas the general case requires deriving an expression for the probability of a pivotal event (an opportunity for a ballot to be pivotal) occurring as a function of the ballots that are expected to be cast. I derive the probability that a ranking of candidates cast in an IRV election will change the outcome of that election. I show that pivotal events in IRV can be split into two cases: with four candidates or fewer, the only way for a voter to make a difference in the outcome of the election is to rank a candidate on their ballot and thereby cause that candidate to win the election. This situation can arise with any number of candidates. However, if there are five candidates or more, there is also another possibility: a voter can rank one candidate on their ballot and thereby cause a different candidate to win who would not have won otherwise. The fact that this is only possible in races with five or more candidates provides a motivation for only allowing voters to rank at most four candidates in IRV. I then show how to estimate pivotal probabilities in IRV using pseudocode and examples. I demonstrate that the probability of casting a pivotal vote in IRV is very similar to that in SMDP, which means that voters do not have a higher incentive to behave strategically in one system than the other. \section{Pivotal probabilities in Instant Runoff Voting} I now derive the probability that a ballot cast in an Instant Runoff Voting (IRV) election will change the outcome of the election.
Let us take the perspective of a voter who has only three pieces of information: they know how many candidates are running, they know how many candidates voters are allowed to rank on their ballots, and they know the preference orderings of every other voter in the electorate (as if, for example, they had consulted a poll of the whole electorate). The quantity of interest is the vote total that each candidate will receive. So, I will proceed by deriving expressions for the vote total that a candidate is expected to have after a certain sequence $S$ of candidates has been eliminated, then phrasing the events in which a voter has pivotal opportunities, and finally stating the probability of those events. First I establish some notation. In the following derivations I will denote an ordered slice of a list $\lambda$ from index $a$ up to index $b$ (inclusive) by $\lambda_{a:b}$, indexing from 1, where such a slice can be empty if no elements satisfy the stated requirements. Let $C$ be the set of all $\kappa$ candidates contesting the election, in arbitrary order. Denote by $v_c$ the expected vote total of candidate $c$, and use $\mu_{a}^{b}|S_{1:n}$ to represent the number of voters expected to rank candidate $a$ in ballot position $b$, given that they assigned every higher ballot position to some candidate in the set $S_{1:n}$, which I use to denote the first $n$ elements of the sequence of previously dropped candidates $S$. Also denote the last candidate dropped, $S_{\kappa - 1}$, simply by $S_{-1}$.
\subsection{Expected vote totals} After $d$ candidates have been dropped, candidate $c$ has the following vote total: \noindent \hspace{1in} \[v_{c} = \overset{c \ \text{first}}{\underset{\text{vacuous condition}}{\overbrace{\mu_{c}^1}\underbrace{|S_{1:0}}}} + \overbrace{\mu_{c}^{2}|S_{1:1}}^{\substack{c \ \text{second}, \\ \text{any dropped} \\ \text{candidate first}}} + \cdots + \underbrace{\mu_{c}^{d+1}|S_{1:d}}_{\substack{c \ \text{ranked in} \ d + 1, \\ \text{all candidates ranked} \\ 1 \ \text{to} \ d \ \text{dropped}}} \] \noindent \hspace{1in} \[v_{c} = \sum_{q = 0}^{d} \mu_{c}^{\,q+1} | S_{1:q}\] \noindent In this sum, ballots are not double-counted: each ballot contributes to at most one term. Now, denote the probability that candidate $c$ has $k$ more votes than some candidate $j$ by \noindent \hspace{1in} \[\mathbb{P}(v_c - v_j = k)\] \noindent I will proceed by considering two mutually exclusive ways that a ballot could be pivotal. On the one hand, it could be pivotal by causing a candidate who is ranked on the ballot to win the election. I will call this \textbf{direct pivotality}. On the other hand, placing a candidate in a certain position on a ballot can change the winner of the election without causing that specific candidate to win, which I will call \textbf{indirect pivotality}. The idea of indirect pivotality is closely related to the well-known fact that IRV violates the monotonicity criterion, but to my knowledge I am the first to note that together direct and indirect pivotality partition the set of all pivotal events \nl{\citep{lepelley96,ornstein14,smith73}}.\footnote{In the social choice literature on runoff systems, pivotal events are often deliberately not studied: it is common to assume for simplicity that ties \textit{do not occur}. See for example \nl{\citet[p. 136]{lepelley96}}.} Let us first consider only direct pivotality, and later return to indirect pivotality.
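The expected vote totals above can be checked against a concrete profile of ballots. The helper below (a hypothetical name of mine, not the paper's) computes $v_c$ by crediting each ballot to its highest-ranked candidate not yet dropped, which is the quantity the sum of $\mu$-terms is meant to tally; in particular each ballot contributes to exactly one candidate, so nothing is double-counted:

```python
def totals_after_drops(ballots, dropped):
    """Vote total v_c for every remaining candidate after the candidates
    in `dropped` have been eliminated: each ballot counts for its
    highest-ranked candidate not in `dropped` (ballots ranking only
    dropped candidates are exhausted and count for no one)."""
    totals = {}
    for b in ballots:
        for c in b:
            if c not in dropped:
                totals[c] = totals.get(c, 0) + 1
                break
    return totals
```

With ballots $\{ABC, BA, CBA, CA\}$ and $C$ dropped, for example, both $C$-first ballots transfer, leaving $A$ and $B$ with two votes each.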
\subsection{Direct pivotality} \noindent The probability that placing some candidate $c$ in a particular position $i$ on the ballot will cause that candidate to win is the probability that, after every candidate has been eliminated except two, $c$ is among those two remaining candidates, is ranked on the ballot above the other remaining candidate, and is either a) one vote short of winning (breaking a first-place tie), or b) two votes short of winning and would win a tie-breaker (making a first-place tie). The probability that $c$ and some other candidate have the same vote totals after some number of eliminations is: \noindent \hspace{1in} \[\mathbb{P}(v_c - v_{S_{-1}} = 0)\] \noindent In order to reach a pivotal contest against candidate $S_{-1}$, candidate $c$ must exceed the vote total of every other candidate at the time at which they are dropped. But this is only a pivotal event if $c$ also is not part of the set of dropped candidates $S$. Let us use the word ``round'' to denote each act of comparing the vote totals of candidates, so that $S_1$ is dropped in the first round, $S_2$ is dropped in the second round, and so on. To begin the derivation I take the following two assumptions. For any candidates $A,B,$ and $C$, in any pair of rounds $r$ and $r'$ such that $r \neq r'$, and with $v_a^b$ denoting the vote total of candidate $a$ in round $b$, \begin{enumerate} \item \[\big[v_{A}^r > v_{B}^r\big] \perp \big[v_{A}^r > v_{B}^{r'}\big] \perp \big[v_{A}^{r'} > v_{C}^{r'}\big]\] \item \[\big[v_A^r > v_B^r \big] \perp \big[v_A^r > v_C^r\big]\] \end{enumerate} Assumption 1 is motivated by the observation that we must consider the probability that a candidate $c$ under consideration has more votes than some candidate in round 1, and also that $c$ has more votes than some other candidate in round 2, and similarly in every round before the last. 
It is reasonable to suppose that these probabilities have some dependence on each other, but to assume any particular dependence would be an especially strong assumption. To proceed, I will instead assume the independence of the probabilities of any pair of relative vote totals having a certain relative size in a certain round. The reason that I take this assumption rather than assuming some dependence is that the pairwise independence of vote totals is related to the independence of the share of the population that is of each type of supporter, which characterizes the Poisson voting games framework that I will later use to derive explicit probabilities \nl{\citep{myerson98}}. Assumption 2 is characteristic of that framework: that any two pairwise comparisons of vote totals are independent, even conditional on any set of candidates having been dropped and any relative ordering of other vote totals. \nl{\citet{vasselai21b}} identifies this as a common assumption in Poisson pivot probabilities, and it may arise in other models of pivotal probabilities. I will retain that assumption throughout the derivation in the interest of specificity, but to remove it, one would only need to rephrase the products of probabilities as a joint probability of all the relative orderings being true. 
With these two assumptions, the probability $p_d|S$ of a directly pivotal contest involving candidate $c$ (for now just considering the case in which a tie can be broken), given a specific sequence $S$ in which the other candidates are dropped, is as follows: \noindent \hspace{1in} \[p_d|S = \mathbb{P}(v_c = v_{S_{-1}}) \cdot \bigg[\overset{\substack{\text{Probability $c$} \\ \text{not eliminated 1st}}}{\overbrace{\big(\mathbb{P}(\mu_{c}^{1} > \mu_{S_1}^1)\big)}} \underset{\substack{\text{Probability $c$} \\ \text{not eliminated 2nd}}}{\underbrace{\big(\mathbb{P}(\mu_{c}^2|S_1 > \mu_{S_2}^2|S_1)\big)}} \cdots \overset{\substack{\text{Probability $c$ not} \\ \text{eliminated 2nd-last}}}{\overbrace{\big(\mathbb{P}(\mu_c^{\kappa-2}|S_{1:\kappa-3} > \mu_{S_{\kappa-2}}^{\kappa-2}|S_{1:\kappa-3})\big)}} \bigg]\] \noindent \hspace{1in} \[p_d|S = \mathbb{P}(\mu_c^{\kappa-1}|S_{1:-1} = \mu_{S_{-1}}^{\kappa-1}|S_{1:-1}) \cdot \prod_{h = 1}^{\kappa - 2} \mathbb{P} \big(\mu_c^h|S_{1:h-1} > \mu_{S_h}^h|S_{1:h-1}\big) \] \noindent This equation is conditional on the candidates being dropped in the order of some list $S$, but the probability that some $S$ is indeed the order in which candidates are dropped is itself a chain of probabilities, each of which can be expressed in terms of relative vote totals: it is the probability that candidate $S_1$ has fewer initial votes than any other candidate, and that $S_2$ has the fewest votes once $S_1$ has been dropped, and so on.\footnote{Notice that, even though this is stated as a sequence of drops, it is not just a sequence of probabilities where the next drop follows by transitivity; that is, one cannot just find the probability that $S_1$ has fewer votes than $S_2$, then that $S_2$ has fewer votes than $S_3$, and conclude therefore that initially $S_3$ must have had more votes than $S_1$.
When these comparisons are made, \textit{different sets of candidates have been dropped}, so the vote totals of every candidate need to be compared in every round.} The probability $p_d(S)$ that $S$ is the list of candidates dropped, and that at the end of that chain $c$ is involved in a directly pivotal contest, is as follows: \begin{align*} p_d(S) = & \mathbb{P}\big(v_{S_1}^1 < v_{S_2}^{1}\big) \mathbb{P}\big(v_{S_1}^1 < v_{S_3}^{1}\big) \cdots \mathbb{P}\big(v_{S_{1}}^1 < v_{S_{-1}}^{1}\big)\mathbb{P}\big(v_{S_1}^1 < v_{c}^{1}\big) \cdot \\ & \mathbb{P}\big(v_{S_2}^2 < v_{S_3}^{2}\big) \cdots \mathbb{P}\big(v_{S_{2}}^2 < v_{S_{-1}}^2\big)\mathbb{P}\big(v_{S_2}^2 < v_{c}^2\big) \cdot \\ & \vdots \\ & \mathbb{P}\big(v_{S_{\kappa-2}}^{\kappa-2} < v_{S_{-1}}^{\kappa-2}\big)\mathbb{P}\big(v_{S_{\kappa-2}}^{\kappa-2} < v_{c}^{\kappa-2}\big) \cdot \\ & \mathbb{P}\big(v_{S_{-1}}^{\kappa-1} = v_{c}^{\kappa-1}\big) \end{align*} \noindent \hspace{1in} \[p_d(S) = \bigg[\prod_{\ell = 1}^{\kappa - 2} \prod_{r=\ell+1}^{\kappa-1} \mathbb{P} \big(v_{S_\ell}^\ell < v_{S_r}^\ell \big) \bigg]\bigg[\prod_{\ell=1}^{\kappa-2} \mathbb{P}\big(v_{S_\ell}^{\ell} < v_c^\ell\big) \bigg] \mathbb{P} \big(v_{S_{\kappa-1}}^{\kappa-1} = v_c^{\kappa-1}\big) \] \noindent To combine these two products into one, define $A \equiv [S_1, S_2, \cdots, S_{\kappa - 1}, c]$, that is, the ordered list of all dropped candidates in the order in which they are dropped, followed by candidate $c$. One need only change the index so that the final index of $A$ is included when comparing the vote totals of every candidate up to (but not including) the second-last.
So, \noindent \hspace{1in} \[p_d(A) = \bigg[\overset{\substack{\text{Probability the candidates} \\ \text{are dropped in order $A$}}}{\overbrace{\prod_{\ell = 1}^{\kappa - 2} \prod_{r=\ell+1}^{\kappa} \mathbb{P} \big(v_{A_\ell}^\ell < v_{A_r}^\ell \big)}} \bigg] \overset{\substack{\text{Probability that $c$ is} \\ \text{in a first-place tie} \\ \text{given the drop order $A$}}}{\overbrace{\mathbb{P} \big(v_{A_{\kappa-1}}^{\kappa-1} = v_c^{\kappa-1}\big)}} \] \noindent \hspace{1in} \[p_d(A) = \bigg[\prod_{\ell = 1}^{\kappa - 2} \prod_{r=\ell+1}^{\kappa} \mathbb{P} \bigg(\sum_{q=0}^{\ell-1} \mu_{A_{\ell}}^{q+1} | A_{1:q} < \sum_{q=0}^{\ell-1} \mu_{A_{r}}^{q+1} | A_{1:q} \bigg) \bigg] \mathbb{P} \bigg(\sum_{q=0}^{\kappa-2} \mu_{A_{\kappa-1}}^{q+1} | A_{1:q} = \sum_{q=0}^{\kappa-2} \mu_{c}^{q+1} | A_{1:q}\bigg) \] \noindent I have only addressed the probability that each candidate in the drop order has \textit{fewer} votes than the candidates after it. However, it could also be the case that the dropped candidate has the same number of votes as one of the remaining candidates, but loses a tie-breaker against it. While it is not feasible to write out the probability of ties \textit{within} the drop order, that probability must be computed to obtain the correct probability of the drop order occurring. 
The event in which $S$ is the ordered list of candidates dropped is mutually exclusive with the event in which candidates are dropped in any other sequence, so one can sum those probabilities, and the overall probability that ranking candidate $c$ first will be pivotal is \begin{align*} p_d = & \sum_{\substack{S \in \\ \text{Sym}(C \setminus c)}} \bigg\{ \bigg[\prod_{\ell = 1}^{\kappa - 2} \prod_{r=\ell+1}^{\kappa} \mathbb{P} \bigg(\sum_{q=0}^{\ell-1} \mu_{A_{\ell}}^{q+1} | A_{1:q} < \sum_{q=0}^{\ell-1} \mu_{A_{r}}^{q+1} | A_{1:q} \bigg) \bigg] \\ & \mathbb{P} \bigg(\sum_{q=0}^{\kappa-2} \mu_{A_{\kappa-1}}^{q+1} | A_{1:q} = \sum_{q=0}^{\kappa-2} \mu_{c}^{q+1} | A_{1:q}\bigg) \bigg\} \end{align*} \noindent where $\text{Sym}(C \setminus c)$ is the symmetric group of the set of candidates $C$ without $c$, since the set of all permutations of $C \setminus c$ is exactly the set of orderings in which all candidates other than $c$ can be dropped. Finally, one can go beyond the first ballot position by imposing a simple restriction: the only modification needed to represent the pivotal chances of a candidate ranked in some ballot position beyond the first is that the only sequences that can be included are those sequences of previously dropped candidates that include all of the candidates that are ranked above $c$ on the ballot. So for some ballot position $i$, one can simply restrict the set of losers to include the candidates who are above position $i$ on the length $L$ ballot $\beta$. For simplicity I will use $i$ to denote a position on a ballot, while also using $\mu_i$ and $C \setminus i$ to mean the vote total of the candidate listed in position $i$ and the set of candidates without the candidate listed in position $i$ respectively. 
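In code, the outer sum over $\text{Sym}(C \setminus c)$ is just an enumeration of permutations; a minimal sketch, with the hypothetical callable `chain_prob` standing in for the bracketed products above (and ignoring, for simplicity, the restriction to sequences compatible with the ballot prefix):

```python
from itertools import permutations

def p_direct_total(candidates, c, chain_prob):
    """Sum a user-supplied chain probability over every order in which the
    other candidates can be dropped, i.e. over Sym(C \\ c), with c
    appended to each ordering as in the definition of A."""
    others = [x for x in candidates if x != c]
    return sum(chain_prob(list(S) + [c]) for S in permutations(others))
```

For $\kappa$ candidates this sum has $(\kappa - 1)!$ terms, one per possible drop order of the candidates other than $c$.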
This yields the following expression for $p_{\text{direct}}(\beta)$, the direct pivotal probability of the full ballot $\beta$: \begin{align*} p_{\text{direct}}(\beta) = & \sum_{i=1}^{L} \bigg(\sum_{\substack{S \in \\ \text{Sym}(C \setminus i) \\ \beta_{1:i-1} \subset S}} \bigg\{ \bigg[\prod_{\ell = 1}^{\kappa - 2} \prod_{r=\ell+1}^{\kappa} \mathbb{P} \bigg(\sum_{q=0}^{\ell-1} \mu_{A_{\ell}}^{q+1} | A_{1:q} < \sum_{q=0}^{\ell-1} \mu_{A_{r}}^{q+1} | A_{1:q} \bigg) \bigg] \\ & \mathbb{P} \bigg(\sum_{q=0}^{\kappa-2} \mu_{A_{\kappa-1}}^{q+1} | A_{1:q} = \sum_{q=0}^{\kappa-2} \mu_{i}^{q+1} | A_{1:q}\bigg) \bigg\}\bigg) \end{align*} Of course, a voter is pivotal not only if they \textit{break} a first-place tie but also if they \textit{create} one. These are mutually exclusive events (and I continue to assume that each is independent of the other vote totals), so one can sum the probability of each of them. However, if a voter creates a first-place tie, there needs to be a tie-breaker. Let us assume that the tie-breaker is fair, so that the joint probability of creating a first-place tie \textit{and} consequently being pivotal is half the probability of creating a first-place tie on its own.
So, now accounting for creating a first-place tie and not just for breaking one: \begin{align*} p_{\text{direct}}(\beta) = & \sum_{i=1}^{L} \bigg(\sum_{\substack{S \in \\ \text{Sym}(C \setminus i) \\ \beta_{1:i-1} \subset S}} \bigg\{ \bigg[\prod_{\ell = 1}^{\kappa - 2} \prod_{r=\ell+1}^{\kappa} \mathbb{P} \bigg(\sum_{q=0}^{\ell-1} \mu_{A_{\ell}}^{q+1} | A_{1:q} < \sum_{q=0}^{\ell-1} \mu_{A_{r}}^{q+1} | A_{1:q} \bigg) \bigg] \\ & \bigg[\mathbb{P} \bigg(\sum_{q=0}^{\kappa-2} \mu_{A_{\kappa-1}}^{q+1} | A_{1:q} = \sum_{q=0}^{\kappa-2} \mu_{i}^{q+1} | A_{1:q}\bigg) + \\ & \frac{1}{2} \cdot \mathbb{P} \bigg(\sum_{q=0}^{\kappa-2} \mu_{A_{\kappa-1}}^{q+1} | A_{1:q} = 1 + \sum_{q=0}^{\kappa-2} \mu_{i}^{q+1} | A_{1:q}\bigg) \bigg] \bigg\}\bigg) \end{align*} This completes the derivation of the probability of casting a directly pivotal ballot. Next I derive the probability of casting an indirectly pivotal ballot. \subsection{Indirect pivotality} Start by considering the indirectly pivotal events that can occur because a voter ranks some candidate $c$ at position $i$ in ballot $\beta$. Fix on the ordered list $A$, by which I continue to denote the sequence in which candidates are dropped, followed by the election winner (so $A$ is any permutation of the list of all candidates, that is, any element of the symmetric group of the set of candidates). In an indirectly pivotal event, candidate $c$ does not win the election, which means that $c$ must appear before the final position in $A$ (and it will be possible to strengthen that claim, since $c$ must be sufficiently early in $A$ that a vote for $c$ changes the winner without $c$ becoming the winner). Now the indirectly pivotal events can be modeled as a reordering of $A$.
First notice that a vote for $c$ cannot change the sequence of candidates that are dropped before $c$ is dropped, because increasing the vote total of $c$ by 1 will not change the candidate dropped in a round where another candidate already had a lower vote total than $c$ did. Instead, the vote for $c$ is indirectly pivotal only if one of the candidates that would have been dropped after $c$ would receive either the same number of (first-place or redistributed) votes as $c$, so that the extra vote breaks a tie in whoever would have been dropped, or one more vote than $c$, so that the extra vote creates a tie that $c$ then wins. This means that it is useful to think of the list $A$ as containing two parts: the part before $c$, which is not changed by including $c$ on a ballot, and the part after $c$, which can be changed by ranking $c$. Because any given candidate drop sequence is mutually exclusive with the candidates being dropped in any other order, I will sum the probability of indirect pivotality over all possible lists $A$. Within a list $A$, I consider the probability of transforming $A$ into an alternative list $A'$, which represents transforming a given drop sequence into an event in which the ranking of $c$ in the ballot $\beta$ was indirectly pivotal. Under what conditions does $A'$ represent an indirectly pivotal event with respect to $A$? Let $y$ represent the index of $c$ in $A$. Then $A'$ must satisfy the following two conditions: \begin{itemize} \item $A$ and $A'$ are the same up to the pivotal event: for some $y$, $A_{1:y-1} = A_{1:y-1}'$ \item The winner in $A$ is not the winner in $A'$: $A_{-1} \neq A_{-1}'$ \end{itemize} Note also that since $c$ occupies position $y$ before the final position in $A$, the winner $A_{-1}$ is some candidate other than $c$; and for the event to be indirectly rather than directly pivotal, $c$ cannot be the winner in $A'$ either.
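These conditions can be written as a small predicate; a minimal sketch, assuming candidates are represented as strings and drop sequences as Python lists (the names below are illustrative only):

```python
def is_indirect_alternative(A, A_prime, c):
    """Could A_prime represent an indirectly pivotal reordering of the
    default drop sequence A with respect to candidate c?"""
    y = A.index(c)                         # c is dropped at round y+1 in A
    same_prefix = A[:y] == A_prime[:y]     # identical up to the pivotal round
    winner_changes = A[-1] != A_prime[-1]  # the election winner differs
    c_not_winner = A_prime[-1] != c        # otherwise it is directly pivotal
    return same_prefix and winner_changes and c_not_winner
```

For example, with $A = [a, c, x, y, z]$ the reordering $[a, x, c, z, y]$ qualifies, while $A$ itself does not (same winner), and $[a, x, y, z, c]$ does not ($c$ wins, which would be direct pivotality).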
For it to specifically be the vote for $c$ that was indirectly pivotal, a candidate after $c$ must have switched places with $c$. But notice that it is not necessarily the case that $A_{y+1}$ needs to switch places with $c$. $A_{y+1}$ is the candidate that would have been dropped after $c$ had the voter not ranked $c$ in ballot position $i$, but that is not necessarily the same as the candidate that $c$ would have been involved in a tie or near-tie with; redistributed votes from $c$ might have boosted their closest competitor above the candidate $A_{y+1}$, so that the candidate that $c$ would have tied with would then have been dropped later in the sequence $A$. However, for a given $A'$ there is only one candidate that $c$ could have tied with: because I do not consider ties between more than two candidates, whichever candidate $c$ tied with is the candidate that is dropped in that round instead of $c$, so the tie must have been between $c$ and whichever candidate is listed in $A'_y$. Now, to aggregate this up to the full indirect pivotal probability of ranking $c$, one can separate out the probabilities as follows. Fixing on some $A_{1:y}$ to be the sequence in which candidates are dropped prior to candidate $c$, one can list the sequences $A'$ which satisfy the requirements for an indirectly pivotal event. This approach makes it possible to derive an expression for indirect pivotality. First consider just one list $A$, and seek the probability that listing some candidate $c$ in position $i$ on a ballot $\beta$ will cause a candidate other than $c$ to win the election. A necessary event is that there is a tie between $c$ and one of the candidates which remain in the contest by the time that $c$ is dropped. 
Because I assume that the probability of more than two candidates being tied is negligible, these events are mutually exclusive, so the probability of $c$ being involved in any tie with another candidate is the sum of the probability of $c$ being in a tie or a near-tie with each remaining candidate $t$: \begin{align*} p_{\text{tie}} = \sum_{t \in A_{y+1:\kappa}} \bigg[ \mathbb{P} \big(v_c^y = v_t^y\big) + \frac{1}{2} \cdot \mathbb{P} \big(v_c^y = v_t^y - 1\big) \bigg] \end{align*} To obtain the probability of switching from $A$ to $A'$, it is necessary to know not just the probability that there was a tie to create or break, but also that the end of $A'$ will be the specific sequence that follows creating or breaking that tie involving $c$. By the independence of vote totals, that is the probability that every candidate $d$ in $A'_{y+1:\kappa}$ defeats every candidate prior to it. Note that, as in the case of direct pivotality, one cannot appeal to transitivity to order this sequence, because the number of votes may change as a candidate is dropped in between each comparison, so this too requires comparing each candidate in $A'$ to every candidate that has not yet been dropped. For brevity denote $G \equiv A'_{y+1:\kappa}$, that is, $G$ is the alternate ending in the hypothetical pivotal event, and recall that the candidate that $c$ is in a last-place tie with must be $A'_{y}$. For brevity call $t \equiv A'_y$.
Then, the joint probability is \begin{align*} p_{\text{tie}}(A'|A) = \overset{\substack{\text{Probability that $G$ is the} \\ \text{sequence after a tie with $c$}}}{\overbrace{\prod_{d = 1}^{|G|} \bigg[ \prod_{h = 1}^{d-1} \mathbb{P} \big( v_{G_{d}} > v_{G_{h}} \big) \bigg]}} \cdot \overset{\text{Probability that $c$ ties another candidate}}{\overbrace{\bigg[ \mathbb{P} \big(v_c^y = v_t^y\big) + \frac{1}{2} \cdot \mathbb{P}\big(v_c^y = v_t^y - 1\big) \bigg]}} \end{align*} This is the probability that there is a tie to be made or broken by ranking $c$, times the probability that the sequence that the tie will bounce to is a specific alternative sequence $A'$: the conditional probability of indirect pivotality through $A'$ given $A$. But there is not just one alternative $A'$ that is of concern; instead every $A'$ that could correspond to $A$ is of interest. Let $\mathbf{A}$ denote the set of all $A'$ that, for a given $A$, fulfill the two necessary conditions, and note that any two lists $A'$ represent mutually exclusive events. Then the probability of any indirectly pivotal event arising from the ranking of $c$ in position $i$, given that the drop sequence would otherwise have followed $A$, is: \begin{align*} p_{\text{indirect}}|A = \sum_{A' \in \mathbf{A}} \bigg\{ \prod_{d = 1}^{|G|} \bigg[ \prod_{h = 1}^{d-1} \mathbb{P} \big(v_{G_d} > v_{G_{h}} \big) \bigg] \cdot \bigg[ \mathbb{P}\big(v_c^y = v_t^y\big) + \frac{1}{2} \cdot \mathbb{P}\big(v_c^y = v_t^y - 1\big) \bigg] \bigg\} \end{align*} So, to obtain the probability of $A'$ actually arising after voting for $c$, it is now necessary to know the probability of the chain $A$ happening.
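The set $\mathbf{A}$ can be enumerated by fixing the prefix of $A$ before $c$, permuting the remaining candidates, and filtering on the conditions above; a minimal sketch (ignoring, as in the text, ties among more than two candidates):

```python
from itertools import permutations

def alternatives(A, c):
    """Enumerate the alternative drop sequences A' that could represent an
    indirectly pivotal event relative to the default sequence A."""
    y = A.index(c)                     # round at which c is dropped in A
    prefix, rest = A[:y], A[y:]
    out = []
    for tail in permutations(rest):
        Ap = prefix + list(tail)
        if Ap[y] == c:                 # c must survive the pivotal round
            continue
        if Ap[-1] == A[-1] or Ap[-1] == c:  # winner must change, not to c
            continue
        out.append(Ap)
    return out
```

Each returned $A'$ yields $t = A'_y$ and the alternate ending $G = A'_{y+1:\kappa}$.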
I therefore multiply by the probability of $A$ occurring, which is simply the probability that candidates' vote totals follow the relative ordering that corresponds to the drop sequence $A$: \begin{align*} p_{\neg \text{d}}(A) = & \overset{\text{Probability of $A$ occurring}}{\overbrace{\prod_{\ell = 2}^{\kappa} \bigg[ \prod_{h = 1}^{\ell - 1} \mathbb{P} \big(v_{A_\ell} > v_{A_{h}} \big) \bigg]}} \ \cdot \\ & \underset{\text{Probability of any $A'$ arising from $A$, with $c$ tied for being dropped at some point}}{\underbrace{\sum_{A' \in \mathbf{A}} \bigg\{ \prod_{d = 1}^{|G|} \bigg[ \prod_{h = 1}^{d-1} \mathbb{P} \big(v_{G_d} > v_{G_{h}} \big) \bigg] \cdot \bigg[ \mathbb{P} \big(v_c^y = v_t^y\big) + \frac{1}{2} \cdot \mathbb{P} \big(v_c^y = v_t^y - 1\big) \bigg] \bigg\}}} \end{align*} What remains is to sum over the possible sequences $A$, and also conduct the calculation for every ballot location. The one important detail in the sum over different values of $A$ is that the only sequences that should be included are those which involve dropping every candidate listed before $i$ on the ballot, before the candidate in location $i$ is involved in a pivotal contest. The result is the indirect pivotal probability of the ballot in terms of the candidates' vote totals: \begin{align*} p_{\text{indirect}}(\beta) = \sum_{i = 1}^{L} \bigg( \sum_{\substack{A \in \text{Sym}(C) \\ \beta_{1:i} \subset A_{1:y}}} & \bigg[ \prod_{\ell = 2}^{\kappa} \bigg[ \prod_{h = 1}^{\ell - 1} \mathbb{P} \big(v_{A_\ell} > v_{A_{h}} \big) \bigg] \\ & \cdot \sum_{A' \in \mathbf{A}} \bigg\{ \prod_{d = 1}^{|G|} \bigg[ \prod_{h = 1}^{d-1} \mathbb{P} \big(v_{G_d} > v_{G_{h}} \big) \bigg] \\ & \cdot \bigg[ \mathbb{P} \big(v_c^y = v_t^y\big) + \frac{1}{2} \cdot \mathbb{P} \big(v_c^y = v_t^y - 1\big) \bigg] \bigg\} \bigg] \bigg) \end{align*} Finally, one can substitute the equation for candidates' expected vote totals in terms of the current reported ballots in the population.
That yields the full expression for the indirect pivotal probability of the ballot $\beta$: \begin{align*} p_{\text{indirect}}(\beta) = \sum_{i = 1}^{L} \bigg( \sum_{\substack{A \in \text{Sym}(C) \\ \beta_{1:i} \subset A_{1:y}}} & \bigg[ \prod_{\ell = 2}^{\kappa} \bigg[ \prod_{h = 1}^{\ell - 1} \mathbb{P} \bigg(\sum_{q=0}^{h-1} \mu_{A_{\ell}}^{q+1} \big \vert A_{1:q} \ > \sum_{q = 0}^{h-1} \mu_{A_h}^{q+1} \big \vert A_{1:q} \bigg) \bigg] \\ & \cdot \sum_{A' \in \mathbf{A}} \bigg\{ \prod_{d = 1}^{|G|} \bigg[ \prod_{h = 1}^{d-1} \mathbb{P} \bigg(\sum_{q=0}^{y+d} \mu_{G_d}^{q+1} \big \vert A_{1:q} \ > \sum_{q=0}^{y+d} \mu_{G_h}^{q+1} \big \vert A_{1:q} \bigg) \bigg] \\ & \cdot \bigg[ \mathbb{P} \bigg(\sum_{q=0}^{y} \mu_{c}^{q+1} \big \vert A_{1:q} \ = \sum_{q=0}^{y} \mu_{t}^{q+1} \big \vert A_{1:q}\bigg) \\ & + \frac{1}{2} \cdot \mathbb{P} \bigg(\sum_{q=0}^{y} \mu_{c}^{q+1} \big \vert A_{1:q} \ = \sum_{q=0}^{y} \mu_{t}^{q+1} \big \vert A_{1:q} - 1 \bigg) \bigg] \bigg\} \bigg] \bigg) \end{align*} where $y$ is the index in the list $A$ of the candidate ranked at ballot position $i$, and $\mathbf{A}$ is the set of all ordered lists $A'$ formed by appending to the prefix $A_{1:y-1}$ a permutation of $C \setminus A_{1:y-1}$ (so that $t = A'_y$ and $G = A'_{y+1:\kappa}$) and satisfying the conditions for $A'$ to represent an indirectly pivotal contest corresponding to the default sequence $A$. \subsection{Full pivotal probability} I have now derived both the direct and the indirect pivotal probability of a given ballot. Because these are the only two ways to be pivotal, and they are mutually exclusive, the full pivotal probability is simply their sum, $p = p_{\text{direct}} + p_{\text{indirect}}$. The final question is not just the probability that the ballot cast will be pivotal, but actually the expected utility of casting a particular ballot.
In direct pivotality this is $u(c) - u(S_{-1})$, while in indirect pivotality it is $u(A'_{-1}) - u(A_{-1})$. Also, since I have phrased both direct and indirect pivotality as sums over the positions $i$ in a single ballot $\beta$, the equation can be simplified by collapsing that redundant sum and computing both indirect and direct pivotality at once within a particular ballot position. So, given a set $C$ of $\kappa$ candidates contesting the election, and with the distribution of intended ballots commonly known, the expected utility of some ballot $\beta$ is: \begin{align*} u(\beta) = & \sum_{i=1}^{L} \bigg(\sum_{\substack{S \in \\ \text{Sym}(C \setminus i) \\ \beta_{1:i-1} \subset S}} \bigg\{ \bigg[\prod_{\ell = 1}^{\kappa - 2} \prod_{r=\ell+1}^{\kappa} \mathbb{P} \bigg(\sum_{q=0}^{\ell-1} \mu_{A_{\ell}}^{q+1} | A_{1:q} < \sum_{q=0}^{\ell-1} \mu_{A_{r}}^{q+1} | A_{1:q} \bigg) \bigg] \\ & \bigg[\mathbb{P} \bigg(\sum_{q=0}^{\kappa-2} \mu_{A_{\kappa-1}}^{q+1} | A_{1:q} = \sum_{q=0}^{\kappa-2} \mu_{i}^{q+1} | A_{1:q}\bigg) + \\ & \frac{1}{2} \cdot \mathbb{P} \bigg(\sum_{q=0}^{\kappa-2} \mu_{A_{\kappa-1}}^{q+1} | A_{1:q} = 1 + \sum_{q=0}^{\kappa-2} \mu_{i}^{q+1} | A_{1:q}\bigg) \bigg] \bigg\} \cdot \bigg[u(i) - u(S_{-1})\bigg] + \\ & \sum_{\substack{A \in \text{Sym}(C) \\ \beta_{1:i} \subset A_{1:y}}} \bigg[ \prod_{\ell = 2}^{\kappa} \bigg[ \prod_{h = 1}^{\ell - 1} \mathbb{P} \bigg(\sum_{q=0}^{h-1} \mu_{A_{\ell}}^{q+1} \big \vert A_{1:q} \ > \sum_{q = 0}^{h-1} \mu_{A_h}^{q+1} \big \vert A_{1:q} \bigg) \bigg] \\ & \cdot \sum_{A' \in \bf{A}} \bigg\{ \prod_{d = 1}^{|G|} \bigg[ \prod_{h = 1}^{d-1} \mathbb{P} \bigg(\sum_{q=0}^{y+d} \mu_{G_d}^{q+1} \big \vert A_{1:q} \ > \sum_{q=0}^{y+d} \mu_{G_h}^{q+1} \big \vert A_{1:q} \bigg) \bigg] \\ & \cdot \bigg[ \mathbb{P} \bigg(\sum_{q=0}^{y} \mu_{c}^{q+1} \big \vert A_{1:q} \ = \sum_{q=0}^{y} \mu_{t}^{q+1} \big \vert A_{1:q}\bigg) \\ & + \frac{1}{2} \cdot \mathbb{P} \bigg(\sum_{q=0}^{y} \mu_{c}^{q+1} \big \vert A_{1:q} \ = 
\sum_{q=0}^{y} \mu_{t}^{q+1} \big \vert A_{1:q} - 1 \bigg) \bigg] \bigg\} \bigg] \\ & \cdot \bigg[u(A'_{-1}) - u(A_{-1})\bigg] \bigg) \end{align*} where $\text{Sym}(C)$ is the symmetric group of the set $C$, $L$ is the length of the ballot $\beta$ such that $L \leq \kappa$, $\mu_{a}^{b}|S$ denotes the number of voters expected to rank candidate $a$ in any of the ballot positions 1 through $b$ conditional on assigning any higher ballot position to candidates in the set of previously dropped candidates $S$, $y$ is the index in the list $A$ of the candidate ranked at position $i$, $t$ is the candidate in $A'_y$, and $\mathbf{A}$ is the set of all ordered lists $A'$ formed by appending to the prefix $A_{1:y-1}$ a permutation of $C \setminus A_{1:y-1}$ (so that $t = A'_y$ and $G = A'_{y+1:\kappa}$) and satisfying the conditions for an indirectly pivotal contest. \subsection{Modeling the probabilities} There are many ways to model expected vote totals, and previous work on strategic voting in IRV has used Dirichlet beliefs in an iterated polling framework \nl{\citep{eggers21b}}. I have phrased the derivations in this section in terms of the probability that one vote total equals or exceeds another, so that others can estimate the probabilities of those events using whichever model of voting games they prefer. However, I have motivated several assumptions as being especially well-supported by one prominent framework: Poisson voting games \nl{\citep{myerson98}}. A way to estimate the relative size of candidate vote totals in the Poisson voting games setting was presented for single-vote elections by \nl{\citet{mebane19}} and with Assumption 1 in this paper their result can be immediately extended to IRV. If the number of voters is drawn from a Poisson distribution, then the number of voters holding each possible preference ordering also follows a Poisson distribution with known parameter, and the difference between two such independent Poisson counts follows the Skellam distribution.
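The Skellam probabilities themselves are easy to evaluate numerically. In the sketch below, `skellam_pmf(w, mu1, mu2)` stands for $\mathcal{S}(w, \mu_1, \mu_2) = \mathbb{P}(X - Y = w)$ with independent $X \sim \text{Poisson}(\mu_1)$ and $Y \sim \text{Poisson}(\mu_2)$, computed by truncating the defining convolution (`scipy.stats.skellam` provides the same quantity):

```python
import math

def poisson_pmf(k, mu):
    # evaluated in log space so that large k does not overflow k!
    return math.exp(-mu + k * math.log(mu) - math.lgamma(k + 1))

def skellam_pmf(w, mu1, mu2, terms=200):
    """P(X - Y = w) for independent X ~ Pois(mu1), Y ~ Pois(mu2),
    truncating the defining convolution after `terms` summands."""
    return sum(poisson_pmf(k + w, mu1) * poisson_pmf(k, mu2)
               for k in range(terms) if k + w >= 0)

def p_strictly_less(mu_a, mu_b, terms=200):
    """P(v_a < v_b) as a summed Skellam upper tail."""
    return sum(skellam_pmf(w, mu_b, mu_a, terms) for w in range(1, terms))
```

Under this convention, $\mathbb{P}(v_a < v_b)$ is the summed upper tail of $\mathcal{S}(\cdot, \mu_b, \mu_a)$, a tie has probability $\mathcal{S}(0, \mu_b, \mu_a)$, and a near-tie shifts the first argument by one.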
So the expected utility of a ballot can be computed as follows: \begin{align*} u(\beta) = & \sum_{i=1}^{L} \bigg(\sum_{\substack{S \in \\ \text{Sym}(C \setminus i) \\ \beta_{1:i-1} \subset S}} \bigg\{ \bigg[\prod_{\ell = 1}^{\kappa - 2} \prod_{r=\ell+1}^{\kappa} \sum_{w=1}^{\infty} \mathcal{S} \bigg(w, \sum_{q=0}^{\ell-1} \mu_{A_{r}}^{q+1} | A_{1:q}, \sum_{q=0}^{\ell-1} \mu_{A_{\ell}}^{q+1} | A_{1:q}\bigg) \bigg] \\ & \bigg[\mathcal{S} \bigg(0, \sum_{q=0}^{\kappa-2} \mu_{A_{\kappa-1}}^{q+1} | A_{1:q}, \sum_{q=0}^{\kappa-2} \mu_{i}^{q+1} | A_{1:q}\bigg) + \\ & \frac{1}{2} \cdot \mathcal{S} \bigg(1, \sum_{q=0}^{\kappa-2} \mu_{A_{\kappa-1}}^{q+1} | A_{1:q}, \sum_{q=0}^{\kappa-2} \mu_{i}^{q+1} | A_{1:q}\bigg) \bigg] \bigg\} \cdot \bigg[u(i) - u(S_{-1})\bigg] + \\ & \sum_{\substack{A \in \text{Sym}(C) \\ \beta_{1:i} \subset A_{1:y}}} \bigg[ \prod_{\ell = 2}^{\kappa} \bigg[ \prod_{h = 1}^{\ell - 1} \sum_{w=1}^{\infty} \mathcal{S} \bigg(w, \sum_{q=0}^{h-1} \mu_{A_{\ell}}^{q+1} \big \vert A_{1:q}, \ \sum_{q = 0}^{h-1} \mu_{A_h}^{q+1} \big \vert A_{1:q} \bigg) \bigg] \\ & \cdot \sum_{A' \in \mathbf{A}} \bigg\{ \prod_{d = 1}^{|G|} \bigg[ \prod_{h = 1}^{d-1} \sum_{w=1}^{\infty} \mathcal{S} \bigg(w, \sum_{q=0}^{y+d} \mu_{G_d}^{q+1} \big \vert A_{1:q}, \ \sum_{q=0}^{y+d} \mu_{G_h}^{q+1} \big \vert A_{1:q} \bigg) \bigg] \\ & \cdot \bigg[ \mathcal{S} \bigg(0, \sum_{q=0}^{y} \mu_{c}^{q+1} \big \vert A_{1:q}, \ \sum_{q=0}^{y} \mu_{t}^{q+1} \big \vert A_{1:q}\bigg) \\ & + \frac{1}{2} \cdot \mathcal{S} \bigg(-1, \sum_{q=0}^{y} \mu_{c}^{q+1} \big \vert A_{1:q}, \ \sum_{q=0}^{y} \mu_{t}^{q+1} \big \vert A_{1:q} \bigg) \bigg] \bigg\} \bigg] \\ & \cdot \bigg[u(A'_{-1}) - u(A_{-1})\bigg] \bigg) \end{align*} \subsection{Equal utility ballots} The goal of computing the expected utility of every ballot is for voters in a voting game to be able to select the optimal ballot. But is there always just one ballot that has the greatest expected utility?
Nothing guarantees that $u(\beta_1) \neq u(\beta_2)$ for any ballots $\beta_1, \beta_2$ and any number of voters with an arbitrary preference distribution. There are several ways that voters may find the same expected utility for multiple different ballots. I will informally outline three ways. First, if a voter has a non-strict preference ordering, then they will expect the same utility from any pair of ballots which are equal up to the relative ordering of any set of candidates to which they attach equal sincere utility. Second, even if voters have strict preference orderings, the utilities they expect from casting two ballots may still be equal. For example, in a 4-way race between the set of candidates $\{A,B,C,D\}$, the ballots $\beta_1 = [A, B, D]$ and $\beta_2 = [A, C, D]$ have the same expected utility if and only if $p_{B,C} \cdot u(B,C) + 2p_{B,D} \cdot u(B,D) = -p_{C,B} \cdot u(C,B) + p_{D,C} \cdot u(D,C)$, that is, if the two ways that the second position can be pivotal balance each other out, where $p_{x,y}$ denotes the probability of being pivotal in deciding the election for candidate $x$ over candidate $y$, and $u(x,y)$ represents the utility obtained from the victory of $x$ minus the utility obtained from the victory of $y$. Because we can freely set the sincere utility obtained from the victory of every candidate, it is straightforward to produce a numerical example that satisfies this equality. Third, suppose that one candidate is expected to not receive any votes. We assign zero probability to the event that such a candidate's vote total will exceed the vote total of a candidate who is expected to get a positive number of votes \nl{\citep{mebane19}}. Then any two ballots will have equal expected utility if they are equal up to the location of the candidate who is expected to receive zero votes. For this reason, a complete specification of optimal vote choice in IRV requires some means of breaking ties.
One option is to pick a framework for assigning probabilities to pivotal events which guarantees that ties will not arise (though it is not obvious what such a framework would be), and to assume that all voters have strict preference orderings. Barring that, a tie-breaking rule is needed: it should select a unique ballot whenever two distinct ballots have equal expected utility, and a reasonable second condition would be that a voter should break such ties in proportion to their sincere utilities. \section{Pseudocode} In the following pseudocode I leave the counting of votes implicit: each vote total should be read as the total relevant to the comparison at hand. I also continue to not explicitly represent the probability of ties arising within drop sequences, though these must be computed. The following algorithms are run on the set of all possible ballots, which we have assumed to be $\mathcal{P}_L(\text{Sym}(C))$, that is, the set of all length-$L$ prefixes of permutations of the set of candidates $C$. \FloatBarrier \begin{algorithm} \caption{Direct pivotality} \begin{algorithmic}[1] \For {voter in voters} \For {$\beta$ in allBallots} \State ballotDirPivot $\gets 0$ \For {$i$ in $[1:L]$} \For {$S$ in $\text{Sym}(C \setminus i)$} \State $A \gets S$ with the candidate in position $i$ appended \If {$\beta[1:i-1]$ in $S$} \State candDropProbs $\gets 1$ \For {$\ell$ in $[1:\kappa-2]$} \For {$r$ in $[\ell+1:\kappa]$} \State chainProb $\gets \sum_{w=1}^{\infty} \mathcal{S} (w, v_{A_r}, v_{A_\ell})$ \State candDropProbs $\gets$ candDropProbs $\cdot$ chainProb \EndFor \EndFor \State breakTieProb $\gets \mathcal{S}(0, v_i, v_{S_{-1}})$ \State makeTieProb $\gets \mathcal{S}(-1, v_i, v_{S_{-1}})$ \State pivotProb $\gets$ (candDropProbs)(breakTieProb + $\frac{1}{2} \cdot$ makeTieProb) \State ballotDirPivot $\gets$ ballotDirPivot + pivotProb \EndIf \EndFor \EndFor \State allBallotPivots[$\beta$] $\gets$ ballotDirPivot \EndFor \EndFor \end{algorithmic} \end{algorithm} \FloatBarrier \FloatBarrier \begin{algorithm}
\caption{Indirect pivotality} \begin{algorithmic}[1] \For {voter in voters} \For {$\beta$ in allBallots} \State ballotIndirPivot $\gets 0$ \For {$i$ in $[1:L]$} \For {$A$ in $\text{Sym}(C)$} \State $y \gets$ index in $A$ of the candidate in ballot position $i$ \If {$\beta[1:i]$ in $A_{1:y}$} \State $\mathbf{A}$ $\gets$ set of all sequences $A'$ satisfying the two requirements for $A$ \State baseChainProb $\gets$ 1 \For {$\ell$ in $[2:\kappa]$} \For {$h$ in $[1:\ell-1]$} \State baseChainProb $\gets$ baseChainProb $\cdot \sum_{w=1}^{\infty} \mathcal{S}(w, v_{A_\ell}, v_{A_h})$ \EndFor \EndFor \For {$A'$ in $\mathbf{A}$} \State $G \gets A'_{y+1:\kappa}$ \State altProb $\gets$ 1 \For {$d$ in $[1:\text{length}(G)]$} \For {$h$ in $[1:d-1]$} \State altProb $\gets$ altProb $\cdot \sum_{w=1}^{\infty} \mathcal{S}(w, v_{G_d}, v_{G_h})$ \EndFor \EndFor \State $t \gets A'_y$ \State breakProb $\gets \mathcal{S}(0, v_i, v_t)$ \State makeProb $\gets \mathcal{S}(-1, v_i, v_t)$ \State pivotProb $\gets$ baseChainProb $\cdot$ altProb $\cdot$ (breakProb + $\frac{1}{2} \cdot$ makeProb) \State ballotIndirPivot $\gets$ ballotIndirPivot + pivotProb \EndFor \EndIf \EndFor \EndFor \State allBallotIndirPivots[$\beta$] $\gets$ ballotIndirPivot \EndFor \EndFor \end{algorithmic} \end{algorithm} \FloatBarrier The pivotal probability of each contest can be multiplied by the utility the voter would obtain from that result, and then the ballot with the largest expected utility selected, with the caveat that a tie-breaking rule might also be necessary as discussed above. \section{Some properties of pivotal events in IRV} First we establish a sort of history independence in IRV that will be useful for the main result in this section, which is that indirect pivotality can only occur in races between five or more candidates. The following kind of history independence in IRV has been observed many times before, and though the following proof is original, the theorem should be considered a folk theorem. \begin{thm}\label{thm:folk} Let $v_c|S$ be the vote total of candidate $c$ after some ordered list $S$ of candidates has been eliminated.
For any permutation $S'$ of $S$, $v_c|S = v_c|S'$. \end{thm} \begin{proof} Consider the vote total of candidate $c$ after $j$ candidates have been eliminated in the order $S = [s_1, s_2, \cdots, s_j]$, and let $v_c^j|S$ represent the number of ballots that have been distributed to candidate $c$. $v_c^j|S$ is the number of ballots of the form $\beta = [\mathbf{x}, c, \cdots]$, where $\mathbf{x} \subseteq \{s_1, s_2, \cdots, s_j\}$, since a ballot is counted as a vote for $c$ if and only if $c$ is listed on the ballot and every higher-ranked candidate has been dropped. But if $\mathbf{x} \subseteq S$, then for any permutation $S'$ of $S$, $\mathbf{x} \subseteq S'$ also. So $v_c^j|S = v_c^j|S'$. \end{proof} \begin{thm}\label{thm:5way} An indirectly pivotal event cannot occur if four or fewer candidates have not yet been dropped. \end{thm} \begin{proof} We will exhaust every case. Suppose only 2 candidates, $c$ and $a$, have not yet been eliminated. Then a vote for $c$ can only be pivotal if it causes $c$ to defeat $a$. But then $c$ wins the election and this vote was directly pivotal, not indirectly pivotal. Suppose 3 candidates have not been eliminated: $c, a$, and $b$. Suppose that without a pivotal vote for $c$, $a$ and $b$ will defeat $c$ and $a$ will then fall to $b$, but that with a pivotal vote $c$ would beat $a$. We can label the candidates in this way without loss of generality: we can freely assume that $c$ is the first dropped (in the absence of a pivotal vote), since otherwise the case involving $\kappa$ candidates reduces to the case involving $\kappa-1$ candidates, which for $\kappa=3$ we have already solved. Now consider a pivotal vote for $c$. Then $c$ defeats $a$, and there are two cases. If $b$ defeats $c$, then the vote was not pivotal in changing the election winner. If $c$ defeats $b$, then the vote was directly pivotal, not indirectly pivotal.
Suppose 4 candidates have not been eliminated: $c, a, b$, and $d$, and without loss of generality assume that without a pivotal vote for $c$ they would be dropped in the order $c, a, b, d$. Suppose $c$ receives an extra vote and instead $a$ is dropped first. Then there are two cases. If $c$ is dropped next, then by Theorem \ref{thm:folk} the contest between the remaining candidates proceeds as in the default sequence, so $d$ will defeat $b$ and the vote was not pivotal. If instead $b$ is dropped next, then there are the same two cases as in the 3-candidate scenario: either $d$ wins, in which case the vote was not pivotal, or $c$ wins, in which case it was not indirectly pivotal. This rules out the possibility of an indirectly pivotal event occurring when fewer than 5 candidates remain. \end{proof} \begin{cor}\label{cor:five} Indirectly pivotal events can only occur in races between five or more candidates. \end{cor} Corollary \ref{cor:five} contributes to an ongoing conversation on the length of IRV ballots. In some jurisdictions IRV ballots may include every candidate who contests the race, while the Alaskan system reduces the candidate pool to four before the IRV round, and \nl{\citet{gehl20}} advocate a version of the Alaskan system in which five candidates are ranked instead of four. Corollary \ref{cor:five} leaves open the possibility that in the ``Final-Five'' system preferred by \nl{\citet{gehl20}} the pathological situation that I call indirect pivotality, in which a voter increases the vote total of one candidate and thereby causes a \textit{different} candidate to win, can occur; the example in the next section shows that it does. In contrast, the Alaskan system is the largest IRV race in which indirect pivotality can never occur. If indirect pivotality is normatively undesirable, then Final-Four voting should be preferred over Final-Five voting.
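Theorem \ref{thm:folk} is also easy to check numerically: redistribute ballots one elimination at a time and confirm that the resulting tallies depend only on the set of eliminated candidates, not on their order. A small self-contained sketch with made-up ballots:

```python
from itertools import permutations

def tally_after_drops(ballots, drop_order):
    """Eliminate candidates one at a time in drop_order, transferring each
    ballot to its highest-ranked surviving candidate, and return tallies."""
    holder = [b[0] for b in ballots]   # candidate currently holding each ballot
    dropped = set()
    for cand in drop_order:
        dropped.add(cand)
        for idx, b in enumerate(ballots):
            if holder[idx] == cand:    # transfer only ballots held by cand
                holder[idx] = next((x for x in b if x not in dropped), None)
    counts = {}
    for h in holder:
        if h is not None:              # exhausted ballots count for no one
            counts[h] = counts.get(h, 0) + 1
    return counts

# tallies after dropping {a, c} agree for both elimination orders
ballots = [['a', 'b', 'c'], ['b', 'a'], ['c', 'b', 'a'], ['c', 'a']]
assert tally_after_drops(ballots, ['a', 'c']) == tally_after_drops(ballots, ['c', 'a'])
```

The assertion at the end is exactly the statement of the theorem for this toy profile: the stepwise redistribution reaches the same tallies for every permutation of the dropped set.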
In the following section we will provide a numerical example that establishes that indirect pivotality is indeed possible in the five-candidate case (and is therefore possible in races among more than five candidates, since we can always assign ballots that will cause the additional candidates to be dropped first without reallocations, for example by assigning them zero votes). \section{Example computations} \subsection{A minimal example} Consider a five-way IRV race using real candidates from the Alaskan 2022 contest for a seat in the US House. This requires taking a certain liberty with the Alaskan system, in which only four candidates are actually included in the runoff, but by Theorem \ref{thm:5way} we can only construct an example of indirect pivotality if we include at least 5 candidates. Imagine that the candidates in an IRV election are Al Gross, Santa Claus,\footnote{This candidate did actually contest Alaska's house seat \nl{\citep{timm22}}.} Mark Begich, Mary Peltola, and Sarah Palin, and for simplicity consider ballots of length 3.\footnote{This is only in the interest of exposition; similar examples with ballots of length 4 or 5 are straightforward to construct.} Suppose that the following ballot types are cast, with the following frequency: \\ \begin{center} \begin{tabular}{l|l} Number & Ballot \\ \hline 2 & [Gross, Palin, X] \\ 2 & [Claus, Gross, Palin] \\ 3 & [Begich, Gross, Palin] \\ 6 & [Palin, Peltola, X] \\ 12 & [Peltola, X, X] \\ \end{tabular} \end{center} where ``X'' represents any legal vote. We will take the perspective of an elector who expects these ballots to be cast, and is deciding whether to abstain or cast a ballot. For simplicity, just consider the case in which Al Gross loses the initial tie-breaker.
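The elimination tables below can also be checked by direct simulation. The sketch that follows treats each ``X'' as an exhausted ballot (which changes some displayed totals but not the winners) and encodes the assumed tie-breaks, Gross losing and Peltola winning, as a hypothetical elimination-priority list:

```python
def irv_winner(ballots, elim_priority):
    """Run IRV on ballots (lists of candidates, highest rank first).
    When candidates tie for fewest votes, the one appearing earliest in
    elim_priority is eliminated (a stand-in for the tie-breakers
    assumed in the text)."""
    remaining = {c for b in ballots for c in b}
    while len(remaining) > 1:
        counts = {c: 0 for c in remaining}
        for b in ballots:
            for c in b:                 # ballot counts for its highest
                if c in remaining:      # surviving preference, if any
                    counts[c] += 1
                    break
        low = min(counts.values())
        loser = next(c for c in elim_priority
                     if c in remaining and counts[c] == low)
        remaining.discard(loser)
    return remaining.pop()

profile = (2 * [['Gross', 'Palin']]
           + 2 * [['Claus', 'Gross', 'Palin']]
           + 3 * [['Begich', 'Gross', 'Palin']]
           + 6 * [['Palin', 'Peltola']]
           + 12 * [['Peltola']])
prio = ['Gross', 'Claus', 'Begich', 'Palin', 'Peltola']
```

Running `irv_winner(profile, prio)` returns Palin; appending one [Peltola] ballot yields Peltola (direct pivotality), and appending one [Gross] ballot also yields Peltola (indirect pivotality), reproducing the three scenarios traced below.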
\\ \noindent \textbf{Default outcome}: If the elector abstains, then the sequence of eliminations is as follows: \\ \begin{center} \begin{tabular}{l | l | l | l | l | l} & Gross & Claus & Begich & Palin & Peltola \\ \hline initial & 2 & 2 & 3 & 6 & 12 \\ round 1 & - & 2 & 3 & 8 & 12 \\ round 2 & - & - & 3 & 10 & 12 \\ round 3 & - & - & - & 13 & 12 \\ round 4 & - & - & - & 25 & - \\ \\ \end{tabular} \end{center} So by default the winner will be Sarah Palin. \\ \noindent \textbf{Direct pivotality}: Suppose that Mary Peltola would win a tiebreaker, and the elector casts any ballot of the form [Peltola, X, X]. Then \\ \begin{center} \begin{tabular}{l | l | l | l | l | l} & Gross & Claus & Begich & Palin & Peltola \\ \hline initial & 2 & 2 & 3 & 6 & 13 \\ round 1 & - & 2 & 3 & 8 & 13 \\ round 2 & - & - & 3 & 10 & 13 \\ round 3 & - & - & - & 13 & 13 \\ round 4 & - & - & - & - & 26 \\ \\ \end{tabular} \end{center} Here the voter caused Mary Peltola to win by voting for her, so they were directly pivotal. \\ \noindent \textbf{Indirect pivotality}: Suppose instead the elector casts any ballot of the form [Gross, X, X]. Then the sequence of eliminations is as follows. \\ \begin{center} \begin{tabular}{l | l | l | l | l | l} & Gross & Claus & Begich & Palin & Peltola \\ \hline initial & 3 & 2 & 3 & 6 & 12 \\ round 1 & 5 & - & 3 & 6 & 12 \\ round 2 & 8 & - & - & 6 & 12 \\ round 3 & 8 & - & - & - & 18 \\ round 4 & - & - & - & - & 26 \\ \\ \end{tabular} \end{center} This is an example of indirect pivotality: by casting a vote for Al Gross, the voter caused Mary Peltola to win instead of Sarah Palin. The computation of this example is also included in the supplementary code. \subsection{Calculating the pivot probabilities} Using the numbers in the previous section, we now perform a full example computation of pivotal probability in IRV. 
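Before turning to probabilities, the three elimination sequences above can be replayed mechanically. The following is a minimal sketch (not the supplementary code mentioned above); it treats an exhausted ``X'' entry as a ballot that simply stops counting, which alters some running totals relative to the tables but none of the winners, and it encodes the two tie-breaking assumptions (Gross loses the initial tie-breaker, Peltola wins a final-round tie) as a fixed elimination-priority list.

```python
def irv_winner(ballots, tiebreak):
    """Run instant-runoff elimination.

    ballots: list of (count, ranking) pairs, where ranking is a tuple of
    candidates in preference order (a ballot stops counting once exhausted).
    tiebreak: among tied last-place candidates, the one appearing earliest
    in this list is eliminated first.
    """
    remaining = set(tiebreak)
    while len(remaining) > 1:
        tally = {c: 0 for c in remaining}
        for count, ranking in ballots:
            for choice in ranking:
                if choice in remaining:
                    tally[choice] += count
                    break
        low = min(tally.values())
        loser = min((c for c in remaining if tally[c] == low),
                    key=tiebreak.index)
        remaining.remove(loser)
    return remaining.pop()

base = [
    (2, ("Gross", "Palin")),
    (2, ("Claus", "Gross", "Palin")),
    (3, ("Begich", "Gross", "Palin")),
    (6, ("Palin", "Peltola")),
    (12, ("Peltola",)),
]
# Gross loses the initial tie-breaker; Peltola wins a final-round tie.
order = ["Gross", "Claus", "Begich", "Palin", "Peltola"]

print(irv_winner(base, order))                        # Palin   (default)
print(irv_winner(base + [(1, ("Peltola",))], order))  # Peltola (directly pivotal)
print(irv_winner(base + [(1, ("Gross",))], order))    # Peltola (indirectly pivotal)
```

The third call reproduces the indirectly pivotal event: adding one Gross ballot changes the winner from Palin to Peltola.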
To perform the full pivotal probability calculations, the voter cannot consider just one way of being directly pivotal and one way of being indirectly pivotal; however, the number of possible drop orders in a 5-candidate election is too large for us to write out the entire calculation explicitly. So instead, we compute only the probability of the drop sequences that we have identified as examples, and we fully work through only the first ballot positions. \\ \noindent \textbf{Direct pivotality}: In this example, $\text{Sym}(C \setminus \{i\})$ is the group of all permutations of the set $\{$Gross, Claus, Begich, Palin$\}$. We will focus on the drop order provoked in this example, namely $S = [\text{Gross}, \text{Claus}, \text{Begich}, \text{Palin}]$, but a voter who computes direct pivotality would need to consider every other permutation as well. Denote by $p_{\text{direct}}(S)$ the probability that the ballot under consideration will be directly pivotal through the particular chain $S$ under consideration.
Then, \begin{align*} \hspace{-0.75in} p_{\text{direct}}(S) = \ & \mathbb{P} \bigg(\mu_{A_1}^1|A_{1:0} < \mu_{A_2}^1|A_{1:0}\bigg) \cdot \mathbb{P} \bigg(\mu_{A_1}^1|A_{1:0} < \mu_{A_3}^1|A_{1:0}\bigg) \cdot \mathbb{P} \bigg(\mu_{A_1}^1|A_{1:0} < \mu_{A_4}^1|A_{1:0}\bigg) \cdot \\ & \mathbb{P} \bigg(\mu_{A_1}^1|A_{1:0} < \mu_{A_5}^1|A_{1:0} \bigg) \cdot \\ & \mathbb{P}\bigg(\big[\mu_{A_2}^1|A_{1:0} + \mu_{A_2}^2|A_{1:1} \big] < \big[\mu_{A_3}^1|A_{1:0} + \mu_{A_3}^2|A_{1:1}\big]\bigg) \cdot \\ & \mathbb{P}\bigg(\big[\mu_{A_2}^1|A_{1:0} + \mu_{A_2}^2|A_{1:1} \big] < \big[\mu_{A_4}^1|A_{1:0} + \mu_{A_4}^2|A_{1:1}\big]\bigg) \cdot \\ & \mathbb{P}\bigg(\big[\mu_{A_2}^1|A_{1:0} + \mu_{A_2}^2|A_{1:1} \big] < \big[\mu_{A_5}^1|A_{1:0} + \mu_{A_5}^2|A_{1:1}\big]\bigg) \cdot \\ & \mathbb{P}\bigg(\big[\mu_{A_3}^1|A_{1:0} + \mu_{A_3}^2|A_{1:1} + \mu_{A_3}^3|A_{1:2}\big] < \big[\mu_{A_4}^1|A_{1:0} + \mu_{A_4}^2|A_{1:1} + \mu_{A_4}^3|A_{1:2}\big]\bigg) \cdot \\ & \mathbb{P}\bigg(\big[\mu_{A_3}^1|A_{1:0} + \mu_{A_3}^2|A_{1:1} + \mu_{A_3}^3|A_{1:2}\big] < \big[\mu_{A_5}^1|A_{1:0} + \mu_{A_5}^2|A_{1:1} + \mu_{A_5}^3|A_{1:2}\big]\bigg) \cdot \\ & \Bigg(\mathbb{P}\bigg(\big[\mu_{A_4}^1|A_{1:0} + \mu_{A_4}^2|A_{1:1} + \mu_{A_4}^3|A_{1:2} + \mu_{A_4}^4|A_{1:3} \big] = \\ & \big[\mu_{A_5}^1|A_{1:0} + \mu_{A_5}^2|A_{1:1} + \mu_{A_5}^3|A_{1:2} + \mu_{A_5}^4|A_{1:3} \big]\bigg) + \\ & \frac{1}{2} \cdot \mathbb{P}\bigg(\big[\mu_{A_4}^1|A_{1:0} + \mu_{A_4}^2|A_{1:1} + \mu_{A_4}^3|A_{1:2} + \mu_{A_4}^4|A_{1:3} \big] = 1 + \\ & \big[\mu_{A_5}^1|A_{1:0} + \mu_{A_5}^2|A_{1:1} + \mu_{A_5}^3|A_{1:2} + \mu_{A_5}^4|A_{1:3} \big]\bigg)\Bigg) \end{align*} \begin{align*} \hspace{-0.75in} p_{\text{direct}}(S) = \ & \mathbb{P} \bigg(\mu_{\text{Gross}}^1 < \mu_{\text{Claus}}^1\bigg) \cdot \mathbb{P} \bigg(\mu_{\text{Gross}}^1 < \mu_{\text{Begich}}^1\bigg) \cdot \mathbb{P} \bigg(\mu_{\text{Gross}}^1 < \mu_{\text{Palin}}^1\bigg) \cdot \mathbb{P} \bigg(\mu_{\text{Gross}}^1 < \mu_{\text{Peltola}}^1 \bigg) \cdot \\ &
\mathbb{P}\bigg(\big[\mu_{\text{Claus}}^1 + \mu_{\text{Claus}}^2|\{\text{Gross}\} \big] < \big[\mu_{\text{Begich}}^1 + \mu_{\text{Begich}}^2|\{\text{Gross}\}\big]\bigg) \cdot \\ & \mathbb{P}\bigg(\big[\mu_{\text{Claus}}^1 + \mu_{\text{Claus}}^2|\{\text{Gross}\} \big] < \big[\mu_{\text{Palin}}^1 + \mu_{\text{Palin}}^2|\{\text{Gross}\}\big]\bigg) \cdot \\ & \mathbb{P}\bigg(\big[\mu_{\text{Claus}}^1 + \mu_{\text{Claus}}^2|\{\text{Gross}\} \big] < \big[\mu_{\text{Peltola}}^1 + \mu_{\text{Peltola}}^2|\{\text{Gross}\}\big]\bigg) \cdot \\ & \mathbb{P}\bigg(\big[\mu_{\text{Begich}}^1 + \mu_{\text{Begich}}^2|\{\text{Gross}\} + \mu_{\text{Begich}}^3|\{\text{Gross},\text{Claus}\}\big] < \\ & \big[\mu_{\text{Palin}}^1 + \mu_{\text{Palin}}^2|\{\text{Gross}\} + \mu_{\text{Palin}}^3|\{\text{Gross},\text{Claus}\}\big]\bigg) \cdot \\ & \mathbb{P}\bigg(\big[\mu_{\text{Begich}}^1 + \mu_{\text{Begich}}^2|\{\text{Gross}\} + \mu_{\text{Begich}}^3|\{\text{Gross},\text{Claus}\}\big] < \\ & \big[\mu_{\text{Peltola}}^1 + \mu_{\text{Peltola}}^2|\{\text{Gross}\} + \mu_{\text{Peltola}}^3|\{\text{Gross},\text{Claus}\}\big]\bigg) \cdot \\ & \Bigg(\mathbb{P}\bigg(\big[\mu_{\text{Palin}}^1 + \mu_{\text{Palin}}^2|\{\text{Gross}\} + \mu_{\text{Palin}}^3|\{\text{Gross},\text{Claus}\} + \mu_{\text{Palin}}^4|\{\text{Gross},\text{Claus},\text{Begich}\} \big] = \\ & \big[\mu_{\text{Peltola}}^1 + \mu_{\text{Peltola}}^2|\{\text{Gross}\} + \mu_{\text{Peltola}}^3|\{\text{Gross},\text{Claus}\} + \mu_{\text{Peltola}}^4|\{\text{Gross},\text{Claus},\text{Begich}\} \big]\bigg) + \\ & \frac{1}{2} \cdot \mathbb{P}\bigg(\big[\mu_{\text{Palin}}^1 + \mu_{\text{Palin}}^2|\{\text{Gross}\} + \mu_{\text{Palin}}^3|\{\text{Gross},\text{Claus}\} + \mu_{\text{Palin}}^4|\{\text{Gross},\text{Claus},\text{Begich}\} \big] = 1 + \\ & \big[\mu_{\text{Peltola}}^1 + \mu_{\text{Peltola}}^2|\{\text{Gross}\} + \mu_{\text{Peltola}}^3|\{\text{Gross},\text{Claus}\} + \mu_{\text{Peltola}}^4|\{\text{Gross},\text{Claus},\text{Begich}\} \big]\bigg)\Bigg) \\ \end{align*}
\noindent Now we can substitute in the actual numbers. Write $\mathcal{S}(w, a, b)$ for the Skellam probability mass function at $w$, i.e.\ the probability that two independent Poisson random variables with means $a$ and $b$ differ by exactly $w$, and write $\mathcal{S}c(a, b) \equiv \sum_{w=1}^{\infty} \mathcal{S}(w, a, b)$ for the probability that a Poisson random variable with mean $a$ strictly exceeds an independent Poisson random variable with mean $b$. Then, \begin{align*} p_{\text{direct}}(S) = \ & \mathcal{S}c(2, 2) \cdot \mathcal{S}c(3, 2) \cdot \mathcal{S}c(6, 2) \cdot \mathcal{S}c(12, 2) \cdot \\ & \mathcal{S}c(3 + 0, 2 + 0) \cdot \mathcal{S}c(6 + 2, 2 + 0) \cdot \mathcal{S}c(12 + 0, 2 + 0) \cdot \\ & \mathcal{S}c(6 + 2 + 2, 3 + 0 + 0) \cdot \\ & \mathcal{S}c(12 + 0 + 0, 3 + 0 + 0) \cdot \\ & (\mathcal{S}(0, 6 + 2 + 2 + 0, 12 + 0 + 0 + 0) + \\ & \frac{1}{2} \cdot \mathcal{S}(0, 6 + 2 + 2 + 0, 1 + 12 + 0 + 0 + 0)) \end{align*} \noindent \hspace{1in} \[p_{\text{direct}}(S) \approx 0.023\] \noindent \textbf{Indirect pivotality}: We will calculate just the indirect pivotal probability of moving from the default drop sequence $A = [\text{Gross}, \text{Claus}, \text{Begich}, \text{Peltola}, \text{Palin}]$ to the alternative drop sequence $A' = [\text{Claus}, \text{Begich}, \text{Palin}, \text{Gross}, \text{Peltola}]$. The indirectly pivotal event is boosting Gross above Claus, so we will only consider the probability of a last-place tie between those two candidates. Denote by $p_{\neg d}(E)$ the indirect pivotal probability through this specific event.
Then, \begin{align*} \hspace{-1in} p_{\neg d}(E) = & \mathbb{P} \bigg( \mu^1_{A_2}|A_{1:0} > \mu^1_{A_1}|A_{1:0} \bigg) \cdot \mathbb{P} \bigg(\mu^1_{A_3}|A_{1:0} > \mu^1_{A_1}|A_{1:0} \bigg) \cdot \\ & \mathbb{P} \bigg( \big[\mu^1_{A_3}|A_{1:0} + \mu^2_{A_3}|A_{1:1}\big] > \big[\mu^1_{A_2}|A_{1:0} + \mu^2_{A_2}|A_{1:1}\big]\bigg) \cdot \\ & \mathbb{P} \bigg( \mu^1_{A_4}|A_{1:0} > \mu^1_{A_1}|A_{1:0}\bigg) \cdot \mathbb{P} \bigg( \big[\mu^1_{A_4}|A_{1:0} + \mu^2_{A_4}|A_{1:1}\big] > \big[\mu^1_{A_2}|A_{1:0} + \mu^2_{A_2}|A_{1:1}\big]\bigg) \cdot \\ & \mathbb{P} \bigg( \big[\mu^1_{A_4}|A_{1:0} + \mu^2_{A_4}|A_{1:1} + \mu^3_{A_4}|A_{1:2}\big] > \big[\mu^1_{A_3}|A_{1:0} + \mu^2_{A_3}|A_{1:1} + \mu^3_{A_3}|A_{1:2}\big] \bigg) \cdot \\ & \mathbb{P} \bigg( \mu^1_{A_5}|A_{1:0} > \mu^1_{A_1}|A_{1:0}\bigg) \cdot \mathbb{P} \bigg( \big[\mu^1_{A_5}|A_{1:0} + \mu^2_{A_5}|A_{1:1}\big] > \big[\mu^1_{A_2}|A_{1:0} + \mu^2_{A_2}|A_{1:1}\big]\bigg) \cdot \\ & \mathbb{P} \bigg( \big[\mu^1_{A_5}|A_{1:0} + \mu^2_{A_5}|A_{1:1} + \mu^3_{A_5}|A_{1:2}\big] > \big[\mu^1_{A_3}|A_{1:0} + \mu^2_{A_3}|A_{1:1} + \mu^3_{A_3}|A_{1:2}\big] \bigg) \cdot \\ & \mathbb{P} \bigg( \big[\mu^1_{A_5}|A_{1:0} + \mu^2_{A_5}|A_{1:1} + \mu^3_{A_5}|A_{1:2} + \mu^4_{A_5}|A_{1:3}\big] > \\ & \big[\mu^1_{A_4}|A_{1:0} + \mu^2_{A_4}|A_{1:1} + \mu^3_{A_4}|A_{1:2} + \mu^4_{A_4}|A_{1:3}\big] \bigg) \cdot \\ & \mathbb{P} \bigg( \mu^1_{G_2}|G_{1:0} > \mu^1_{G_1}|G_{1:0} \bigg) \cdot \mathbb{P} \bigg(\mu^1_{G_3}|G_{1:0} > \mu^1_{G_1}|G_{1:0} \bigg) \cdot \\ & \mathbb{P} \bigg( \big[\mu^1_{G_3}|G_{1:0} + \mu^2_{G_3}|G_{1:1}\big] > \big[\mu^1_{G_2}|G_{1:0} + \mu^2_{G_2}|G_{1:1}\big]\bigg) \cdot \\ & \mathbb{P} \bigg( \mu^1_{G_4}|G_{1:0} > \mu^1_{G_1}|G_{1:0}\bigg) \cdot \mathbb{P} \bigg( \big[\mu^1_{G_4}|G_{1:0} + \mu^2_{G_4}|G_{1:1}\big] > \big[\mu^1_{G_2}|G_{1:0} + \mu^2_{G_2}|G_{1:1}\big]\bigg) \cdot \\ & \mathbb{P} \bigg( \big[\mu^1_{G_4}|G_{1:0} + \mu^2_{G_4}|G_{1:1} + \mu^3_{G_4}|G_{1:2}\big] > \big[\mu^1_{G_3}|G_{1:0} 
+ \mu^2_{G_3}|G_{1:1} + \mu^3_{G_3}|G_{1:2}\big] \bigg) \cdot \\ & \mathbb{P} \bigg( \mu^1_{G_5}|G_{1:0} > \mu^1_{G_1}|G_{1:0}\bigg) \cdot \mathbb{P} \bigg( \big[\mu^1_{G_5}|G_{1:0} + \mu^2_{G_5}|G_{1:1}\big] > \big[\mu^1_{G_2}|G_{1:0} + \mu^2_{G_2}|G_{1:1}\big]\bigg) \cdot \\ & \mathbb{P} \bigg( \big[\mu^1_{G_5}|G_{1:0} + \mu^2_{G_5}|G_{1:1} + \mu^3_{G_5}|G_{1:2}\big] > \big[\mu^1_{G_3}|G_{1:0} + \mu^2_{G_3}|G_{1:1} + \mu^3_{G_3}|G_{1:2}\big] \bigg) \cdot \\ & \mathbb{P} \bigg( \big[\mu^1_{G_5}|G_{1:0} + \mu^2_{G_5}|G_{1:1} + \mu^3_{G_5}|G_{1:2} + \mu^4_{G_5}|G_{1:3}\big] > \\ & \big[\mu^1_{G_4}|G_{1:0} + \mu^2_{G_4}|G_{1:1} + \mu^3_{G_4}|G_{1:2} + \mu^4_{G_4}|G_{1:3}\big] \bigg) \cdot \\ & \bigg(\mathbb{P} \big(\mu^1_{\text{Gross}} = \mu^1_{\text{Claus}}\big) + \frac{1}{2} \cdot \mathbb{P} \big( \mu^1_{\text{Gross}} = \mu^1_{\text{Claus}} - 1 \big)\bigg) \end{align*} \begin{align*} \hspace{-1in} p_{\neg d}(E) = & \bigg(\mathbb{P} \big(\mu^1_{\text{Gross}} = \mu^1_{\text{Claus}}\big) + \frac{1}{2} \cdot \mathbb{P} \big( \mu^1_{\text{Gross}} = \mu^1_{\text{Claus}} - 1\big)\bigg) \cdot \mathbb{P} \bigg( \mu^1_{\text{Claus}} > \mu^1_{\text{Gross}} \bigg) \cdot \mathbb{P} \bigg(\mu^1_{\text{Begich}} > \mu^1_{\text{Gross}} \bigg) \cdot \\ & \mathbb{P} \bigg( \big[\mu^1_{\text{Begich}} + \mu^2_{\text{Begich}}|\text{Gross}\big] > \big[\mu^1_{\text{Claus}} + \mu^2_{\text{Claus}}|\text{Gross}\big]\bigg) \cdot \\ & \mathbb{P} \bigg( \mu^1_{\text{Palin}} > \mu^1_{\text{Gross}}\bigg) \cdot \mathbb{P} \bigg( \big[\mu^1_{\text{Palin}} + \mu^2_{\text{Palin}}|\text{Gross}\big] > \big[\mu^1_{\text{Claus}} + \mu^2_{\text{Claus}}|\text{Gross}\big]\bigg) \cdot \\ & \mathbb{P} \bigg( \big[\mu^1_{\text{Palin}} + \mu^2_{\text{Palin}}|\text{Gross} + \mu^3_{\text{Palin}}|\{\text{Gross},\text{Claus}\}\big] > \\ & \big[\mu^1_{\text{Begich}} + \mu^2_{\text{Begich}}|\text{Gross} + \mu^3_{\text{Begich}}|\{\text{Gross},\text{Claus}\}\big] \bigg) \cdot \\ & \mathbb{P} \bigg( \mu^1_{\text{Peltola}} >
\mu^1_{\text{Gross}}\bigg) \cdot \mathbb{P} \bigg( \big[\mu^1_{\text{Peltola}} + \mu^2_{\text{Peltola}}|\text{Gross}\big] > \big[\mu^1_{\text{Claus}} + \mu^2_{\text{Claus}}|\text{Gross}\big]\bigg) \cdot \\ & \mathbb{P} \bigg( \big[\mu^1_{\text{Peltola}} + \mu^2_{\text{Peltola}}|\text{Gross} + \mu^3_{\text{Peltola}}|\{\text{Gross},\text{Claus}\}\big] > \\ & \big[\mu^1_{\text{Begich}} + \mu^2_{\text{Begich}}|\text{Gross} + \mu^3_{\text{Begich}}|\{\text{Gross},\text{Claus}\}\big] \bigg) \cdot \\ & \mathbb{P} \bigg(\big[\mu^1_{\text{Palin}} + \mu^2_{\text{Palin}}|\text{Gross} + \mu^3_{\text{Palin}}|\{\text{Gross},\text{Claus}\} + \mu^4_{\text{Palin}}|\{\text{Gross},\text{Claus},\text{Begich}\}\big] > \\ & \big[\mu^1_{\text{Peltola}} + \mu^2_{\text{Peltola}}|\text{Gross} + \mu^3_{\text{Peltola}}|\{\text{Gross},\text{Claus}\} + \mu^4_{\text{Peltola}}|\{\text{Gross},\text{Claus},\text{Begich}\}\big] \bigg) \cdot \\ & \mathbb{P} \bigg( \mu^1_{\text{Begich}} > \mu^1_{\text{Claus}} \bigg) \cdot \mathbb{P} \bigg(\mu^1_{\text{Palin}} > \mu^1_{\text{Claus}} \bigg) \cdot \mathbb{P} \bigg( \big[\mu^1_{\text{Palin}} + \mu^2_{\text{Palin}}|\text{Claus}\big] > \big[\mu^1_{\text{Begich}} + \mu^2_{\text{Begich}}|\text{Claus}\big]\bigg) \cdot \\ & \mathbb{P} \bigg( \mu^1_{\text{Gross}} > \mu^1_{\text{Claus}}\bigg) \cdot \mathbb{P} \bigg( \big[\mu^1_{\text{Gross}} + \mu^2_{\text{Gross}}|\text{Claus}\big] > \big[\mu^1_{\text{Begich}} + \mu^2_{\text{Begich}}|\text{Claus}\big]\bigg) \cdot \\ & \mathbb{P} \bigg( \big[\mu^1_{\text{Gross}} + \mu^2_{\text{Gross}}|\text{Claus} + \mu^3_{\text{Gross}}|\{\text{Claus},\text{Begich}\}\big] > \\ & \big[\mu^1_{\text{Palin}} + \mu^2_{\text{Palin}}|\text{Claus} + \mu^3_{\text{Palin}}|\{\text{Claus},\text{Begich}\}\big] \bigg) \cdot \\ & \mathbb{P} \bigg( \mu^1_{\text{Peltola}} > \mu^1_{\text{Claus}}\bigg) \cdot \mathbb{P} \bigg( \big[\mu^1_{\text{Peltola}} + \mu^2_{\text{Peltola}}|\text{Claus}\big] > \big[\mu^1_{\text{Begich}} + 
\mu^2_{\text{Begich}}|\text{Claus}\big]\bigg) \cdot \\ & \mathbb{P} \bigg( \big[\mu^1_{\text{Peltola}} + \mu^2_{\text{Peltola}}|\text{Claus} + \mu^3_{\text{Peltola}}|\{\text{Claus},\text{Begich}\}\big] > \\ & \big[\mu^1_{\text{Palin}} + \mu^2_{\text{Palin}}|\text{Claus} + \mu^3_{\text{Palin}}|\{\text{Claus},\text{Begich}\}\big] \bigg) \cdot \\ & \mathbb{P} \bigg( \big[\mu^1_{\text{Peltola}} + \mu^2_{\text{Peltola}}|\text{Claus} + \mu^3_{\text{Peltola}}|\{\text{Claus},\text{Begich}\} + \mu^4_{\text{Peltola}}|\{\text{Claus},\text{Begich},\text{Palin}\}\big] > \\ & \big[\mu^1_{\text{Gross}} + \mu^2_{\text{Gross}}|\text{Claus} + \mu^3_{\text{Gross}}|\{\text{Claus},\text{Begich}\} + \mu^4_{\text{Gross}}|\{\text{Claus},\text{Begich},\text{Palin}\}\big] \bigg) \\ \end{align*} \begin{align*} p_{\neg d}(E) = \ & \mathcal{S}c(2,2) \cdot \mathcal{S}c(3,2) \cdot \mathcal{S}c(3 + 0, 2 + 0) \cdot \mathcal{S}c(6, 2) \cdot \mathcal{S}c(6 + 2, 2 + 0) \cdot \mathcal{S}c(6 + 2 + 2, 3 + 0 + 0) \cdot \\ & \mathcal{S}c(12, 2) \cdot \mathcal{S}c(12 + 0, 2 + 0) \cdot \mathcal{S}c(12 + 0 + 0, 3 + 0 + 0) \cdot \\ & \mathcal{S}c(6 + 2 + 2 + 3, 12 + 0 + 0 + 0) \cdot \mathcal{S}c(3, 2) \cdot \mathcal{S}c(6, 2) \cdot \mathcal{S}c(6 + 0, 3 + 0) \cdot \mathcal{S}c(2, 2) \cdot \\ & \mathcal{S}c(2 + 2, 3 + 0) \cdot \mathcal{S}c(2 + 2 + 3, 6 + 0 + 0) \cdot \mathcal{S}c(12,2) \cdot \mathcal{S}c(12 + 0, 3 + 0) \cdot \\ & \mathcal{S}c(12 + 0 + 0, 6 + 0 + 0) \cdot \mathcal{S}c(12 + 0 + 0 + 6, 2 + 2 + 3 + 0) \cdot \\ & (\mathcal{S}(0, 2, 2) + \frac{1}{2} \cdot \mathcal{S}(0, 2, 2 - 1)) \end{align*} \noindent \hspace{1in} \[p_{\neg d}(E) \approx 0.003\] The probability of this particular example of indirect pivotality was found to be an order of magnitude smaller than the probability of the direct pivotality example, consistent with the fact that indirect pivotality requires a much more specific situation to occur. 
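The Skellam quantities used in these products can be evaluated with a few lines of standard-library Python. This is an illustrative sketch, not the paper's supplementary code; the helper names `pois`, `skellam0`, `sc`, and the truncation bound `nmax` are my own.

```python
from math import exp, factorial

def pois(k, mu):
    """Poisson pmf with mean mu."""
    return exp(-mu) * mu ** k / factorial(k)

def skellam0(a, b, nmax=60):
    """S(0, a, b): probability that two independent Poisson counts
    with means a and b are exactly equal (a last-place tie)."""
    return sum(pois(k, a) * pois(k, b) for k in range(nmax))

def sc(a, b, nmax=60):
    """Sc(a, b): probability that a Poisson(a) count strictly exceeds
    an independent Poisson(b) count, truncating both tails at nmax."""
    return sum(pois(x, a) * pois(y, b)
               for x in range(1, nmax) for y in range(x))

# For example, the probability of a tie between two candidates who each
# expect 2 first-place votes, and of Begich (3) beating Gross (2):
print(skellam0(2, 2))
print(sc(3, 2))
```

The full products for $p_{\text{direct}}(S)$ and $p_{\neg d}(E)$ are then straightforward to assemble from `sc` and `skellam0` by substituting the vote totals from the tables.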
\section{Simulated pivot probabilities in IRV and SMDP} I conclude by implementing the IRV pivotal probability algorithm in Python and simulating the magnitude of pivotal probabilities for identical election setups in IRV and SMDP. The motivation is the simple observation that strategic considerations could prompt voters to more frequently submit ballots which tend to produce ties in one system than in the other system. Now that we can estimate pivotal probabilities in both systems, we can numerically test the claim that there is more incentive to vote strategically in one system than in the other. Figure \ref{fig:piv} shows the results of 50 runs of a model, where each run consists of one IRV contest and one SMDP contest with identical parameters, re-run for each number of candidates. Every candidate has the same probability of appearing in any position on any ballot, so that each candidate has a similar number of first-place votes, a similar number of second-place votes, and so on. The electorate is fixed at 1,000 voters, and the number of candidates is either 3, 4, or 5, with ballot lengths equal to the number of candidates. To find the total pivotal probability for a given election setup, I sum the pivotal probability of every ballot. \FloatBarrier \begin{figure} \caption{Simulated pivotal probabilities under IRV and SMDP, over 50 runs for each number of candidates.}\label{fig:piv} \end{figure} \FloatBarrier Comparing the pivotal probabilities under IRV and SMDP in Figure \ref{fig:piv} shows that the two systems have very similar pivotal probabilities.
The chances of being pivotal in IRV may be slightly higher than in SMDP when there are only 3 candidates, but when there are 4 or 5 candidates neither is clearly larger than the other.\footnote{I caution against reading substantively into the appearance that pivotal probabilities in IRV fall more quickly as the number of candidates rises than pivotal probabilities in SMDP, since this result may be attributable to the varying quality of the independence assumption on the probability of the relative sizes of candidates' vote counts.} The results do not support widespread claims that there are either stronger or weaker incentives for voters to cast strategic votes under IRV compared to SMDP. \section{Conclusion} This article has extended the classic calculus of voting to IRV, by deriving the probability that a ballot cast in an IRV election will determine the election outcome. I have shown how to implement that pivotal probability calculation using both pseudocode and examples. This core theoretical contribution led to two substantive implications. First, the fact that indirectly pivotal events can only occur in IRV races with five candidates or more provides a motivation to favour ballots with at most four candidates. In an electorate of dozens with realistic preferences, I showed that the probability of just one example of an indirectly pivotal event was about 0.3\%, so if hundreds of IRV elections are held in small electorates, it should be expected that there will regularly be some voter who caused one candidate to win because they chose to vote for an entirely different candidate. This is a reason to prefer systems like the one in Alaska where only four candidates are allowed to compete in the IRV stage.
Second, by putting numbers to the probability of causing a pivotal event in IRV I was able to test the widespread expectations that, compared to classic SMDP voting, IRV either provides more opportunities for strategic voting, or else contains lower incentives for strategic voting. I showed that neither of these stories is clearly correct. The probability that some ballot in an IRV election will be pivotal is very similar to the probability that some ballot in an SMDP election will be pivotal. This suggests that the opportunities and incentives for strategic behaviour in IRV are about the same as they are in SMDP. \end{document}
\begin{document} \tikzset{ ->, >=stealth, node distance=3cm, every state/.style={thick, fill=gray!10}, initial text=$\textrm{root}$, } \title{An asymptotically tight lower bound for superpatterns with small alphabets} \begin{abstract} A permutation $\sigma \in S_n$ is a $k$-superpattern (or $k$-universal) if it contains each $\tau \in S_k$ as a pattern. This notion of ``superpatterns'' can be generalized to words on smaller alphabets, and several questions about superpatterns on small alphabets have recently been raised in the survey of Engen and Vatter. One of these questions concerned the length of the shortest $k$-superpattern on $[k+1]$. A construction by Miller gave an upper bound of $(k^2+k)/2$, which we show is optimal up to lower-order terms. This implies a weaker version of a conjecture by Eriksson, Eriksson, Linusson and Wastlund. Our results also refute a 40-year-old conjecture of Gupta. \end{abstract} \section{Introduction} Given permutations $\tau \in S_k, \sigma \in S_n$, we say $\sigma$ \textit{contains} $\tau$ as a \textit{pattern} if there exist indices $1\le i_1< \dots<i_k\le n$ such that $\sigma(i_j) < \sigma(i_{j'})$ if and only if $\tau(j) < \tau(j')$ for all choices $j,j'$ (e.g., $312$ is contained in $2\underline{514}3$ as a pattern; we may choose $i_1,i_2,i_3 = 2,3,4$). We say that $\sigma\in S_n$ is a \textit{$k$-superpattern} if it contains each $\tau \in S_k$ as a pattern. Naturally, this leads us to consider the ``superpattern problem''. \begin{prob} For $k \ge 1$, let $f(k)$ be the minimum $n$ such that there exists $\sigma\in S_n$ which is a $k$-superpattern. What is the asymptotic growth of $f(k)$? \end{prob}\noindent In 1999, Arratia \cite{arratia} showed that $(1/e^2-o(1))k^2 \le f(k) \le k^2$, hence $f(k)$ is well-defined. There have been several competing conjectures about the asymptotic growth of $f(k)$.
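For small cases, the definitions above are easy to check by brute force. The sketch below is illustrative only (the helper names are mine, not from the paper); it tests pattern containment by trying every choice of indices.

```python
from itertools import combinations, permutations

def contains_pattern(sigma, tau):
    """Does the word sigma contain the permutation tau as a pattern?"""
    k = len(tau)
    for idxs in combinations(range(len(sigma)), k):
        vals = [sigma[i] for i in idxs]
        # The chosen values must realize exactly the order relations of tau.
        if all((vals[j] < vals[jp]) == (tau[j] < tau[jp])
               for j in range(k) for jp in range(k)):
            return True
    return False

def is_superpattern(sigma, k):
    """Does sigma contain every permutation in S_k as a pattern?"""
    return all(contains_pattern(sigma, tau)
               for tau in permutations(range(1, k + 1)))

# 25143 contains 312: take indices 2, 3, 4, giving the values 5, 1, 4.
print(contains_pattern([2, 5, 1, 4, 3], (3, 1, 2)))   # True
# The word 1,2,3 repeated 3 times (on the alphabet [3]) is a 3-superpattern.
print(is_superpattern([1, 2, 3] * 3, 3))              # True
```

The same checker works for words with repeated letters, since equal values can never realize the strict order relations of a permutation pattern.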
The conjecture relevant to this paper is that of Eriksson, Eriksson, Linusson and Wastlund, which claimed $f(k) = (1/2\pm o(1))k^2$ \cite{eriksson}. As some evidence towards this conjecture, Miller showed that there exist $k$-superpatterns of length $(k^2+k)/2$ (i.e., $f(k) \le (k^2+k)/2$) \cite{miller}. Later, Engen and Vatter improved this to show $f(k) \le (k^2+1)/2$ \cite{engen}. However, in forthcoming work \cite{hunter}, the author will show that $f(k)\le \frac{15}{32}k^2+O(k)$, refuting the claim that the constant $1/2$ is tight. In light of this, one is left to wonder whether a revised version of the conjecture from \cite{eriksson} holds true. We answer this in the affirmative by considering a ``stricter regime'' of the superpattern problem which has received attention recently (see \cite{chroman,engen}). The regime in question concerns ``alphabet size''. Instead of having $\sigma$ be a permutation, what if it was a word (i.e., sequence) on the alphabet $[r]:=\{1,\dots,r\}$? For $\sigma\in [r]^n$ and $\tau \in S_k$, we say $\sigma$ contains $\tau$ as a pattern under the same condition as before (i.e., if there are indices $1\le i_1<\dots<i_k\le n$ such that $\sigma(i_j) < \sigma(i_{j'})$ if and only if $\tau(j)<\tau(j')$). As before, we say $\sigma \in [r]^n$ is a $k$-superpattern if it contains every $\tau \in S_k$ as a pattern. We define $f(k;r)$ to be the minimum $n$ such that there is a $\sigma\in [r]^n$ which is a $k$-superpattern. One could revise the conjecture of \cite{eriksson} by claiming that, in regimes with ``small'' alphabets, the shortest $k$-superpatterns have a length of $(1/2\pm o(1))k^2$. In this paper, we prove the revised conjecture for the regime where $r=r_k = (1+o(1))k$. The lower bound is given by our main result. \begin{thm}\label{opt}For every $\epsilon > 0$, there exists $\delta > 0$ so that the following holds for sufficiently large $k$. For\footnote{Throughout the paper, we omit floor functions when there is no risk of confusion.} $r_k = (1+\delta)k$ and $n < (1/2-\epsilon)k^2$, no word $\sigma \in [r_k]^n$ is a $k$-superpattern.
\end{thm}\noindent Hence, with Miller's construction (which uses the alphabet $[k+1]$, and thus shows $f(k;k+1)\le (k^2+k)/2$), we have asymptotically sharp bounds on the length of the shortest superpatterns in this regime. \begin{cor}\label{asy}Suppose $r_k = (1+o(1))k$ and also $r_k > k$ for all $k$. Then \[f(k;r_k) = \left(\frac{1}{2}\pm o(1)\right)k^2.\] \end{cor} In Section~\ref{outline} we go over past lower bounds of $f$, and outline a proof of Theorem~\ref{opt}. In Section~\ref{notation} we go over notation. In Section~\ref{reduction} we go over a reduction which shows that Theorem~\ref{opt} follows from a more technical Theorem~\ref{goodwalks}, which we state later. We came across two proofs of Theorem~\ref{goodwalks}; we include both, but provide different levels of detail. In Section~\ref{det} we prove Theorem~\ref{goodwalks} by a simple coupling argument. In Section~\ref{alt} we sketch a second proof which uses the differential method. We believe our second proof is more likely to find applications in future research; however, the first proof is more natural and was easier to present in full detail. In Section~\ref{conclusion} we discuss some open problems and go over some results about the lower-order terms of our bounds. One part which may be of particular interest is Section~\ref{implications}, where we refute a conjecture made by Gupta in 1981 \cite{gupta} about the length of ``bi-directional circular superpatterns''. \subsection{Past lower bounds and an outline of our proof}\label{outline} We mention two trivial lower bounds for the length of superpatterns. Any $\sigma \in S_n$ contains at most $\binom{n}{k}$ permutations $\tau \in S_k$ as a pattern, since $\binom{n}{k}$ counts the number of choices of indices $1\le i_1<\dots<i_k\le n$. This implies that $\binom{f(k)}{k}\ge k!$ must hold, which gives the bound $f(k) \ge (1/e^2-o(1))k^2$.
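The counting bound can be made concrete for tiny cases. Below is an illustrative brute-force count (helper name mine) of the distinct length-$k$ patterns contained in a word: this count can never exceed $\binom{n}{k}$, and it must reach $k!$ for a $k$-superpattern.

```python
from itertools import combinations
from math import comb, factorial

def patterns_contained(sigma, k):
    """Return the set of length-k permutation patterns contained in sigma,
    each encoded as the tuple of relative ranks of the chosen values."""
    found = set()
    for idxs in combinations(range(len(sigma)), k):
        vals = [sigma[i] for i in idxs]
        if len(set(vals)) < k:
            continue  # equal letters cannot realize a strict pattern
        order = sorted(vals)
        found.add(tuple(order.index(v) + 1 for v in vals))
    return found

sigma = [2, 5, 3, 1, 4]  # a shortest 3-superpattern
pats = patterns_contained(sigma, 3)
print(len(pats), comb(len(sigma), 3), factorial(3))  # 6 10 6
```

Here $|S_3| = 6 \le \binom{5}{3} = 10$, illustrating why $\binom{f(k)}{k} \ge k!$ must hold; note that $\binom{4}{3} = 4 < 6$, so no word of length 4 can be a 3-superpattern.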
Meanwhile, if $r_k = (1+o(1))k$, then one can get $f(k;r_k) \ge (1/e-o(1))k^2$ by a convexity argument (more specifically, one shows that any $\sigma \in [k]^n$ contains at most $(n/k)^k$ patterns of length $k$, and then one uses Remark~\ref{nfromk}, which we mention shortly). In 1976, Kleitman and Kwiatowski \cite{kleitman} used inductive methods to show that $f(k;k) \ge (1-o(1))k^2$, which is asymptotically tight (indeed, to see $f(k;k)\le k^2$ one can consider $1,\dots,k$ repeated $k$ times). But it was only in 2020 that Chroman, Kwan, and Singhal \cite{chroman} proved non-trivial lower bounds for superpatterns on alphabets larger than $[k]$. Their methodology was based on ``encoding'' patterns in a more efficient manner. They show that typical choices of indices $1\le i_1<\dots<i_k\le n$ have many ``large gaps'' (choices $j$ where $i_{j+2}-i_j > Ck$ for a certain $C>0$), and that this property is particularly redundant (loosely, they create equivalence classes for choices of indices with many large gaps, and show that each equivalence class contains many choices of indices, yet few distinct patterns). This was used to show $f(k) \ge (1.000076/e^2)k^2$ for large $k$, and $f(k;(1+e^{-1000})k) \ge ((1+e^{-600})/e)k^2$ for large $k$. In proving Theorem~\ref{opt}, we take a rather different approach from either of the previous papers which established non-trivial lower bounds (namely \cite{chroman,kleitman}). We actually reformulate the problem in terms of random walks on deterministic finite automata (DFAs). To get there, we need a definition and an observation. \begin{defn} For positive integers $k,n$, we let $F(k,n)$ be the maximum number of patterns $\tau \in S_k$ that a $\sigma \in [k]^n$ can contain. \end{defn} \begin{rmk}\label{nfromk} For any $\sigma \in [r]^n$, we have that $\sigma$ contains at most $\binom{r}{k}F(k,n)$ patterns $\tau \in S_k$.
Consequently, if $r$ is such that\[\binom{r}{k}F(k,n) < k!\] then there is no $\sigma \in [r]^n$ which is a $k$-superpattern (i.e., $f(k;r)>n$). \end{rmk}\noindent To confirm this remark, it suffices to verify the first sentence in Remark~\ref{nfromk}, which can be briefly justified as follows. Note that for each of the $\binom{r}{k}$ subsets $Y\subset [r]$ with $|Y|=k$, there are at most $F(k,n)$ permutations $\tau\in S_k$ which are a pattern of $\sigma|_{\sigma^{-1}(Y)}$. Conversely, if $\tau\in S_k$ is a pattern of $\sigma$, it is contained as a pattern of $\sigma|_{\sigma^{-1}(Y)}$ for some set $Y\subset [r]$ with $|Y| =k$. Thus for fixed $\epsilon > 0$, we want to show that when $n < (1/2-\epsilon)k^2$ and $k$ is large, $F(k,n)$ will be ``extremely small''. We are able to show this by considering random walks on certain DFAs. What we specifically prove about DFAs is a bit technical, so we defer the rigorous statement to Section~\ref{thereduction}. Essentially, it implies an exponentially small upper bound for $F(k,n)/k!$ when $n < (1/2-\Omega(1))k^2$. \begin{repthm}{goodwalks}[Informal statement]There exists a function $G(k,n,N)$ (which is defined in terms of a family of DFAs) such that \[F(k,(1/2-\epsilon)k^2) \le G(k,(1/2-\epsilon)k^2,k^2) .\]For fixed $\epsilon >0$, we will have that \[G(k,(1/2-\epsilon)k^2,k^2) \quad \textrm{gets ``very small'' as }k\to \infty. \] \end{repthm}\noindent Here, the notion of ``very small'' is such that Theorem~\ref{opt} will follow from an application of Remark~\ref{nfromk}. Intuitively, one may expect our results to hold true by considering the following argument sketch. The rest of our paper will be dedicated to rigorously grounding this sketch. Consider any $\sigma \in [k]^n$. Let $t$ be sampled from $[k]$ uniformly at random.
For any $i_0 \in [n]$, we'll have that $\mathbb{E}[\inf\{i>i_0: \sigma(i) = t\}-i_0]\ge (k+1)/2$, which is minimized when $\sigma(i_0+1),\dots,\sigma(i_0+k)$ is a permutation (we use the convention $\infty-i_0 = \infty$ so that the quantity $\inf\{i>i_0: \sigma(i) = t\}-i_0$ is always well-defined). Thus, if $t_1,\dots,t_k$ are i.i.d. and sample $[k]$ uniformly at random, and we set $i_j = \inf\{i>i_{j-1}:\sigma(i) =t_j\}$ for each $j\in [k]$, then it should be exponentially likely (in terms of $k$) that $i_k-i_0 > (1/2-\epsilon)k^2$ (this essentially is due to a Chernoff bound). This quantity $i_k-i_0$ essentially tells us how long $\sigma$ needs to be so that we can ``embed'' $t_1,\dots,t_k$ into $\sigma$. In Section~\ref{reduction}, we go over our reduction from pattern containment to this deterministic embedding process, and then show how to use DFAs to track the quantity $i_k-i_0$. We conclude Section~\ref{reduction} by precisely stating Theorem~\ref{goodwalks} and showing how it implies Theorem~\ref{opt}. We then prove Theorem~\ref{goodwalks} in Section~\ref{det}. In our argument sketch above, we show that $i_k-i_0 < (1/2-\epsilon)k^2$ is exponentially unlikely when $t_1,\dots,t_k$ are sampled uniformly at random. What we do in Section~\ref{det} is show that the probability continues to be small when we condition on $t_1,\dots,t_k$ being a permutation. This is done by choosing $\alpha >0$ sufficiently small relative to $\epsilon$, and considering the behavior of substrings of length $\alpha k$. Here, if we choose the letters of our substring uniformly at random, the probability that our substring contains no repeated letters (i.e., it could be a substring of a permutation) is much larger than the probability that $i_{\alpha k+j}-i_j < (1/2-\epsilon)\alpha k^2$. \subsection{Notation}\label{notation} For positive integers $n$ we let $[n]:= \{1,\dots,n\}$. We let $[\infty]:= \{1,2,3,\dots\} \cup\{\infty\}$.
We use some standard asymptotic notation, detailed below. Let $f=f(k),g=g(k)$ be functions. We say $f = O(g)$ if there exists $C> 0$ such that $f\le Cg$ for sufficiently large $k$; similarly, we say $f = \Omega(g)$ if there is $c> 0$ so that $f\ge cg$ for all large $k$. We use $o(1)$ to denote a non-negative\footnote{This is slightly non-standard; in most contexts $o(1)$ is allowed to be negative. We primarily use this convention to make the paper easier to read. We never implicitly make use of this convention in any of our proofs.} quantity that tends to zero as $k\to \infty$. Following \cite{keevash}, for a function $h= h(k)$, we say $h = f \pm g$ to mean $f-g\le h \le f+g$. We remind the reader of the Kleene star operator. Given an alphabet (i.e., a set) $\Sigma$, we let $\Sigma^*$ denote the set of finite words on the alphabet $\Sigma$ (so $\Sigma^* = \bigcup_{n=0}^\infty \Sigma^n$). For our purposes, a DFA is a $3$-tuple $D = (V,\delta,\textrm{root}(D))$, where $V$ is the set (of ``states'') of $D$, $\delta:V\times \Sigma \to V;(v,t)\mapsto \delta(v,t)$ is a transition function defined on some alphabet $\Sigma$, and $\textrm{root}(D) \in V$ is the ``root'' of $D$. For the purposes of this paper, one may think of each DFA $D$ as being a rooted (not necessarily simple) directed graph, with its transition function, $\delta$, being a convenient way to describe walks on said graph. Given a word $w \in [k]^*$ and $v\in V$ we define a walk in $D$, $\w{v}{w}$, as follows. Let $L$ be the number of letters in $w$, so $w = w_1,\dots,w_L$. We set $\w{v}{w} = v_0,\dots,v_L$, where $v_0 = v$ and for $j \in [L]$, $v_j = \d{v_{j-1}}{w_j}$. Let $D$ be a DFA with a set of states $V$, and suppose we have defined $\delta:V \times[k]\to V;(v,w)\mapsto \d{v}{w}$. We shall extend the function $\delta$ to the domain $V\times[k]^*$. Consider $w \in [k]^*$. If $w$ has length zero, then set $\d{v}{w} = v$.
Otherwise, proceeding inductively, writing $w = w_1,\dots,w_L$, we can set $\d{v}{w} = \d{\d{v}{w_1}}{w_2,\dots,w_L}$. \subsubsection{Cost} Now we shall go over how we define a ``cost function''. We will start with an initial function $c:V\times [k]\to [\infty]$, and then extend it, similar to how we extended the transition function $\delta$. The end result will be a way to assign cost to walks that behaves additively; for those familiar with weighted graphs and the travelling salesman problem, we will effectively be translating the concept of weighted walks into the language of DFAs. Let $D$ be a DFA with a set of states $V$, and suppose we have defined $\cost:V \times[k]\to [\infty]$. We shall extend this to the domain $V\times[k]^*$. Given $v\in V$ and $w \in [k]^*$, we let $v_0,\dots,v_{|w|} = \w{v}{w}$, and set\[\c{v}{w} = \sum_{j \in [|w|]}\c{v_{j-1}}{w(j)}.\]In English, we initialize with net cost zero and do a walk according to $w$ that starts at state $v$, and let $\c{v}{w}$ be our net cost at the end of the walk. When doing the $j$-th step of our walk, we read the letter $w(j)$ while at state $v_{j-1}$ and increment our net cost by $\c{v_{j-1}}{w(j)}$ (if we think of $v_{j-1}$ as being a toll booth, this is the cost of taking the $w(j)$-th route out of $v_{j-1}$). A weighted DFA is simply a 2-tuple $(D,\cost)$ where $D$ is a DFA and $\cost$ is a cost function defined on $V$, the set of states of $D$. Given a weighted DFA $X =(D,\cost)$, we call $D$ the \textit{underlying DFA} of $X$. Also, for a weighted DFA $X = (D,\cost)$, we will identify $X$ with $D$, so if we say something like ``let $V$ be the set of states of $X$'' we mean ``let $V$ be the set of states of $D$''. When talking about two DFAs $A,B$, we respectively denote the transition function of $A$ and the transition function of $B$ by $\delta_A$ and $\delta_B$. We similarly denote their walk functions by $\walk_A$ and $\walk_B$.
In the same fashion, given two weighted DFAs $A,B$, each with their own cost function, we'll respectively denote them by $\cost_A$ and $\cost_B$. This allows us to compare cost functions when $A,B$ have a common set of states $V$. Thus, if we say $\c[A]{v}{t} \ge \c[B]{v}{t}$, this means that if we wanted to read the letter $t$ while at the state $v$, the associated cost of doing this in $A$ is at least as much as doing this in $B$. We now introduce the concept of making a weighted DFA ``cheaper''. For a weighted DFA $X = (D,\cost)$ we say that $Y = (D',\cost')$ is a \textit{cheapening} of $X$ if $D = D'$ (i.e., they have the same underlying DFA) and for each $(v,t) \in V\times [k]$ we have that $\cost(v,t) \ge \cost'(v,t)$ (here $V$ is the set of states of $D$ and $[k]$ is the alphabet of letters which $D$ reads). The implication of this definition is that a cheapening has a more relaxed cost function, which assigns no greater cost to any input (just like what would happen if one decreased the weights of some edges in an instance of the travelling salesman problem). \begin{rmk}\label{cheap}If $B$ is a cheapening of $A$, then for $(v,w) \in V\times [k]^*$ we have that $\c[B]{v}{w}\le \c[A]{v}{w}$. \begin{proof} Consider any $v \in V$ and $w\in [k]^*$. As $A,B$ have the same underlying DFA, we'll have that $\w[A]{v}{w} = v_0,\dots,v_{|w|} = \w[B]{v}{w}$. Hence, \[\c[A]{v}{w}- \c[B]{v}{w} = \sum_{j\in [|w|]} \c[A]{v_{j-1}}{w_j}-\c[B]{v_{j-1}}{w_j} \ge 0\](because $B$ is ``cheaper'' than $A$,\footnote{i.e., $\c[A]{v}{t} \ge \c[B]{v}{t}$ for all $(v,t) \in V\times [k]$} each summand is non-negative). It follows that $\c[A]{v}{w}\ge \c[B]{v}{w}$ as desired. \end{proof} \end{rmk} Finally, here are some meta-notational conventions we will use. The symbol $\sigma$ will refer to a word we want to be a superpattern. The symbol $\tau$ will be an element of $S_k$ for which we wish to check whether $\tau$ is a pattern of $\sigma$.
We use $i$ to denote an index of $\sigma$, $j$ to denote an index of $\tau$, and $t$ to denote an image of $\tau$ (i.e., it would make sense to write ``with $\tau = t_1,\dots,t_k$'' or ``suppose $\tau(j) = t$''). \section{Reduction}\label{reduction} In this section, we will properly state Theorem~\ref{goodwalks} (which shall be proven in Section~\ref{det}), and prove that it implies Theorem~\ref{opt}. First, in Section~\ref{gstrat}, we formalize a ``greedy strategy'' for embedding $\tau$ into $\sigma$, and show that, for $\sigma \in [k]^n$ and $\tau \in S_k$, $\tau$ is a pattern of $\sigma$ if and only if the greedy strategy works. Then in Section~\ref{gdfa}, we will introduce a way to associate $\sigma \in [k]^n$ with a weighted DFA that will simulate this greedy embedding. Next in Section~\ref{kdfa}, we introduce a family of weighted DFAs, called $k$-DFAs, and show they generalize the weighted DFAs from Section~\ref{gdfa}. Lastly, in Section~\ref{thereduction} we state Theorem~\ref{goodwalks} in terms of $k$-DFAs, and prove Theorem~\ref{opt} assuming this result. \subsection{Greedy Strategy}\label{gstrat} Let $\sigma\in [k]^n$ and $\tau \in S_k$. Since $\sigma$ uses the alphabet $[k]$, and $\tau$ uses every element of that alphabet, we have that $\tau$ is a pattern of $\sigma$ if and only if there are indices $1\le i_1< \dots <i_k \le n$ with $\sigma(i_j) = \tau(j)$ for each $j\in[k]$. Now, if such a choice/embedding of indices exists, then so will the ``greedy embedding'' of $\tau$ where we take $i_1 = \min\{i:i \in \sigma^{-1}(\tau(1))\}$ and iteratively for $j \in [k]\setminus \{1\}$ take $i_j = \min\{i>i_{j-1}: i\in \sigma^{-1}(\tau(j))\}$.\footnote{Indeed, suppose $i'_1<\dots < i'_k$ is one such embedding. We claim that the $i_j$ defined according to the greedy embedding will exist for all $j \in [k]$. First, we have that $\sigma(i'_1) =\tau(1)$, thus $i_1$ exists and we'll have $i_1 \le i'_1$.
Then, inductively, for any $j \in [k-1]$, assuming $i_j$ exists and $i_j\le i'_j$: since $\sigma(i'_{j+1}) =\tau(j+1)$ and $i'_{j+1}> i'_j \ge i_j$, the index $i_{j+1}$ exists and $i_{j+1} \le i'_{j+1}$. Hence we can construct $i_j$ for all $j\in [k]$ as required.} Conversely, if we can construct $i_1,\dots,i_k$ according to the greedy embedding, it is clear that we'll have $i_1\ge 1$ and $i_k \le n$, which will imply $\sigma$ contains $\tau$ as a pattern. Hence, $\tau$ being a pattern of $\sigma$ is equivalent to being able to greedily embed $\tau$ into $\sigma$. \subsection{Greedy DFA}\label{gdfa} Given $\sigma \in [k]^n$, we shall create a weighted DFA $A_\sigma$ on $n+1$ states such that for $\tau \in S_k$, $\tau$ can be greedily embedded into $\sigma$ if and only if $c_\tau\le n$, where $c_\tau$ is the ``cost'' of the walk which $\tau$ induces in $A_\sigma$. We start by letting the states of $A_\sigma$ be $V=\{0\}\cup [n]$, with $0$ being the root. We will now define the transition function $\delta$ and the associated cost function $\cost$ on the domain $V\times[k]$. See Figure~\ref{fig:gdfa} for an example. For $v \in V$ and $t \in [k]$, we let $u=u(v,t) = \inf\{i \in \sigma^{-1}(t): i>v\}$. If $u < \infty$, then $u\in [n] \subset V$, thus we define $\d{v}{t} = u$ and $\c{v}{t} = u-v$. Otherwise, if $u = \infty$, we let $\d{v}{t} = v$ and $\c{v}{t} = \infty$.
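The construction of $A_\sigma$ can be sketched in a few lines of Python (our own encoding; helper names such as \texttt{build\_A} and \texttt{walk\_cost} are not from the paper). The additive walk cost then detects greedy embeddability, as illustrated on $\sigma = a,b,c,b$ from Figure~\ref{fig:gdfa}, encoded over $[3]$ with $a,b,c \mapsto 1,2,3$.

```python
# Sketch of A_sigma: states {0,...,n}; reading letter t at state v jumps
# to the next occurrence of t after position v, with cost equal to the
# jump length, or self-loops with infinite cost if t never occurs again.
import math

def build_A(sigma, k):
    n = len(sigma)
    delta, cost = {}, {}
    for v in range(n + 1):
        for t in range(1, k + 1):
            nxt = next((i for i in range(v + 1, n + 1) if sigma[i - 1] == t), None)
            if nxt is None:
                delta[(v, t)], cost[(v, t)] = v, math.inf  # failure self-loop
            else:
                delta[(v, t)], cost[(v, t)] = nxt, nxt - v
    return delta, cost

def walk_cost(delta, cost, v, word):
    # additive cost of the walk induced by `word` starting at state v
    total = 0
    for t in word:
        total += cost[(v, t)]
        v = delta[(v, t)]
    return total

# sigma = a,b,c,b from Figure 1, encoded as 1,2,3,2 over [3]; n = 4.
sigma = [1, 2, 3, 2]
delta, cost = build_A(sigma, k=3)
assert walk_cost(delta, cost, 0, [1, 3, 2]) == 4         # 1,3,2 embeds greedily
assert walk_cost(delta, cost, 0, [2, 1, 3]) == math.inf  # no 1 after the first 2
```

A finite cost at most $n$ corresponds exactly to a successful greedy embedding, anticipating the lemma proved below.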
\begin{figure}[ht] \centering\begin{tikzpicture} \node[state, initial] (q0) {$0$}; \node[state, right of=q0] (q1) {$1$}; \node[state, right of=q1] (q2) {$2$}; \node[state, right of=q2] (q3) {$3$}; \node[state, right of=q3] (q4) {$4$}; \draw (q0) edge[above] node{$a$ (1)} (q1) (q0) edge[bend left, above] node{$b$ (2)} (q2) (q0) edge[bend right, below] node{$c$ (3)} (q3) (q1) edge[above] node{$b$ (1)} (q2) (q1) edge[bend left, above] node{$c$ (2)} (q3) (q2) edge[bend right, below] node{$b$ (2)} (q4) (q2) edge[above] node{$c$ (1)} (q3) (q3) edge[above] node{$b$ (1)} (q4); \end{tikzpicture} \caption{A sketch of $A_\sigma$ where $\sigma = a,b,c,b$ (here we use the alphabet $\{a,b,c\}$ rather than $[3]$ for clarity). The labels of the edges are of the form ``$x$ ($y$)'' where $x \in \{a,b,c\}$ is the letter being read and $y$ is the cost of the step. All omitted edges are self-loops with cost $\infty$.} \label{fig:gdfa} \end{figure} As we went over in Section~\ref{notation}, we can extend $\delta,\cost$ to functions on the domain $V\times [k]^*$ by considering finite walks. We can also now define the walk function $\walk$ for $A_\sigma$. Now, given $v\in V, w \in [k]^*$ we consider $v_0,\dots,v_{|w|} = \w{v}{w}$. If $v_j = v_{j-1}$ for some $j \in [|w|]$, we say there was a failure. It is easy to see that if there is a failure, then $\c{v}{w} = \infty$, and otherwise we will have $\c{v}{w} = v_{|w|}-v_0$ (by induction). We can now express pattern containment of permutations in terms of walks along $A_\sigma$. This is morally because $\w{0}{\tau}$ will mimic the greedy embedding of $\tau$, and has infinite cost if and only if the greedy embedding fails. \begin{lem}For $\sigma \in [k]^n, \tau \in S_k$, we have that $\tau$ is a pattern of $\sigma$ if and only if $\c[A_\sigma]{0}{\tau}\le n$. \begin{proof}Let $\cost,\walk$ be the cost and walk functions of $A_\sigma$. Consider any $w \in [k]^*$.
We shall show that $w$ has a greedy embedding into $\sigma$ if and only if $\c{0}{w} \le n$. By Section~\ref{gstrat}, the result will follow, since $\tau$ will be a pattern of $\sigma$ if and only if it has a greedy embedding into $\sigma$. By design/definition, we see that if $w\in [k]^*$ has a greedy embedding $i_1,\dots,i_{|w|}$ into $\sigma$, then $\w{0}{w} = 0,i_1,i_2,\dots,i_{|w|}$. Since $i_1 \ge 1 > 0$, and $i_j< i_{j+1}$ for $j \in [|w|-1]$, we get $\c{0}{w} = i_{|w|}\le n$ (because the walk does not have a failure). Meanwhile, we have that if $\c{0}{w} \le n$, then $0 = v_0 <v_1<\dots<v_{|w|}\le n$ with $v_0,\dots,v_{|w|}= \w{0}{w}$, making $v_1,\dots,v_{|w|}$ a greedy embedding of $w$ into $\sigma$. \end{proof} \end{lem} \subsection{The family of \texorpdfstring{$k$}{k}-DFAs}\label{kdfa} We will now define a family of weighted DFAs that will generalize the weighted DFAs $A_\sigma$ created in the last subsection. Let $D$ be a DFA with a set of states $V$ and a cost function $\cost:V\times[k]^*\to [\infty]$. We say $D$ is a \textit{$k$-DFA} if for each $v \in V$, there is $\pi_v \in S_k$ such that $\pi_{v}(t) = \c{v}{t}$ for each $t\in [k]$. Now we will show how $k$-DFAs ``generalize'' the family of $A_\sigma$ from Section~\ref{gdfa}. Recall that given two weighted DFAs $X,Y$, we say $X$ is a cheapening of $Y$ if they both have the same underlying DFA, and we have $\c[X]{v}{t} \le \c[Y]{v}{t}$ for all $(v,t) \in V\times [k]$. \begin{lem}\label{kcheap} Let $k$ be a positive integer. For any $\sigma \in [k]^n$, there exists a $k$-DFA $B_\sigma$ which is a cheapening of $A_\sigma$. \begin{proof} Let $V$ be the set of states for $A_\sigma$ and let $\cost$ be the cost function for $A_\sigma$ restricted to $V\times [k]$. We shall take $B_\sigma$ to have the same underlying DFA as $A_\sigma$, and need to define some cost function $\cost_*$ for $B_\sigma$. It suffices to define $\c[*]{v}{t}$ for all $(v,t) \in V\times [k]$.
For each $v\in V$, we wish to find a permutation $\pi_v \in S_k$ such that $\pi_{v}(t)\le \c{v}{t}$ for all $t\in [k]$. We will then set $\c[*]{v}{t} = \pi_v(t)$ for all $(v,t)\in V\times[k]$. If we can do this, then it is clear that $B_\sigma$ will be a $k$-DFA (by definition) and that it will be a cheapening of $A_\sigma$ (by our choices of $\pi_v$). We now fix some $v\in V$, and find $\pi_v$. By construction of $A_\sigma$, we have that $\cost_v$ is injective on finite values. Indeed, for $t\in [k]$, we have $\c{v}{t} = c < \infty \implies \sigma(v+c) = t$, thus if $t,t'\in [k]$ have the same finite cost $c$ (starting at $v$) we have that $t = \sigma(v+c) = t'$. Letting $T = \{t\in [k]: \c{v}{t} \le k\}$, we have that $\cost_v|_T$ is an injection into $[k]$ and $t\in [k]\setminus T $ will imply $\c{v}{t} > k$. Thus, it works to let $\pi_v = \pi \in S_k$ for any $\pi$ where $\pi|_T = \cost_v|_T$ (such $\pi$ will exist as $\cost_v|_T$ is an injection into $[k]$; for $t \in [k]\setminus T$ we then have $\pi(t) \le k < \c{v}{t}$, as needed). \end{proof} \end{lem} Recalling Remark~\ref{cheap}, as $B_\sigma$ is a cheapening of $A_\sigma$, we have that $\c[B_\sigma]{v}{w}\le \c[A_\sigma]{v}{w}$ for all $(v,w) \in V\times [k]^*$.
Hence, for any $\sigma \in [k]^n$, we get \[ \{\tau \in S_k: \c[A_\sigma]{0}{\tau}\le n \} \subseteq \{\tau \in S_k: \c[B_\sigma]{0}{\tau}\le n \}\] where the set on the RHS is defined with respect to $B_\sigma$, which is a $k$-DFA. \subsection{The Reduction}\label{thereduction} We define $G(k,n,N)$ to be the maximum, over $k$-DFAs $D$ on $N$ states, of the number of ``permutational walks'' $w\in S_k$ with $\c{\textrm{root}(D)}{w} \le n$. Observe that\[F(k,n) \le G(k,n,n+1)\le G(k,n,k^2)\]when $n\le k^2/2$ (here the first inequality follows by our previous work, and the second follows by the monotonicity of $G$ in the third variable). We can now make our original statement of Theorem~\ref*{goodwalks} precise.\begin{thm}[Formal statement]\label{goodwalks} Fix $\epsilon^*>0$. Then there exists $c>0$ such that for sufficiently large $k$, \[G(k,(1/2-\epsilon^*)k^2,k^2) \le \exp(-ck)k!.\]\end{thm} By Remark~\ref{nfromk}, we see Theorem~\ref{opt} will follow. \begin{proof}[Proof of Theorem~\ref{opt} given Theorem~\ref{goodwalks}] Fix $\epsilon>0$. We take $\epsilon^* = \epsilon$. We may apply Theorem~\ref{goodwalks} (together with the bound $F(k,n) \le G(k,n,k^2)$ from above) to get $c>0$ such that \[F(k,(1/2-\epsilon)k^2) \le \exp(-ck)k!\]for all sufficiently large $k$. One can easily verify that there exists $\delta_0 >0$ such that $2\delta +\delta \log(\delta^{-1})\le c$ for all $\delta \in (0,\delta_0]$. We take $\delta =\min\{1,\delta_0\}$. Letting $r_k = (1+\delta)k$, a standard bound gives \[\binom{r_k}{k} \le (e(1+\delta)\delta^{-1})^{\delta k} <\exp((2+\log(\delta^{-1}))\delta k) \le \exp(ck).\]Thus by Remark~\ref{nfromk}, we get that $f(k;r_k) >(1/2-\epsilon)k^2$ for sufficiently large $k$.
\end{proof} \section{A coupling argument}\label{det} \subsection{Machinery}\label{machinery} In this subsection, we will fix some variables. We let $k$ be a (fixed) positive integer. We let $D$ be a (fixed) $k$-DFA with state set $V$; we respectively denote the transition, walk, and cost functions of $D$ by $\delta,\walk,$ and $\cost$. We will say $w \in [k]^*$ is a \textit{permutational word} if $w(j) = w(j') \implies j=j'$ (i.e., if $w$ is injective). Note that permutational words will always use the alphabet $[k]$. Also, for $w = w_1,\dots,w_L$, and $E \subset [L]$, we write $w|_E$ to denote the word $w_{e_1},w_{e_2},\dots,w_{e_{|E|}}$, where $e_1< e_2<\dots<e_{|E|}$ are the elements of $E$ in increasing order. We will make use of the following fact several times. \begin{rmk}\label{uniform} Suppose $w$ is sampled uniformly from permutational words of length $L$. For any $ E\subset [L]$, we have that $w|_E$ will sample permutational words of length $|E|$ uniformly at random. \end{rmk}\noindent This remark follows by symmetry. We will be concerned with bounding the following quantity, $P$. \begin{defn}For $v \in V,\epsilon >0,L$, we define \[P(v,L,\epsilon) = \mathbb{P}(\c{v}{w} < (1/2-\epsilon)kL)\]where $w$ is a permutational word of length $L$ chosen uniformly at random. \end{defn} \noindent For convenience, for $v \in V, w\in [k]^*,\epsilon>0$, we say $w$ is \textit{$(v,\epsilon)$-bad} if $\c{v}{w} < (1/2-\epsilon)k|w|$ (so $P(v,L,\epsilon)$ is the probability that a uniform random permutational word of length $L$ is $(v,\epsilon)$-bad). Otherwise we say $w$ is $(v,\epsilon)$-good. Note that $P$ and this concept of ``goodness'' are defined with respect to $D$. We now move on to proving some necessary lemmas. \begin{lem}For any $v\in V$, $\epsilon> 0$, and $L=L_0+L_1+\dots +L_M$,\[P(v,L,\epsilon) \le P(v,L_0,\epsilon) + \sum_{u \in V} \sum_{m\in [M]}P(u,L_m,\epsilon).\] \begin{proof} Set $I_0 = [L_0]$, and similarly for $m \in [M]$ set $I_m = [L_0+\dots+L_m]\setminus [L_0+\dots +L_{m-1}]$. Observe that $I_0,\dots,I_M$ partition $[L]$.
Also, for each $m \in \{0\}\cup[M]$ it is clear that $|I_m| = L_m$. Consider a word $w\in [k]^L$ of length $L$. For each $m\in \{0\}\cup[M]$, let $w^m = w|_{I_m}$. Observe that for each $v \in V$, we can choose $u_1,\dots,u_M \in V$ so that \[\c{v}{w} = \c{v}{w^0}+\sum_{m\in[M]}\c{u_m}{w^m}\] (indeed, we can start by taking $u_1 = \d{v}{w^0}$, and then for $m\in [M-1]$ take $u_{m+1} = \d{u_m}{w^m}$). This is because $w$ is the sequential concatenation of $w^0,w^1,\dots,w^M$. Now suppose $w \in [k]^L$ is a $(v,\epsilon)$-bad word. It follows (essentially by pigeonhole) that there must exist some $m\in \{0\}\cup [M]$ where the event $E_m(w)$ is true, where \begin{itemize} \item $E_0(w)$ is the event that $w^0$ is $(v,\epsilon)$-bad \item and for $m\in [M]$, $E_m(w)$ is the event that $w^m$ is $(u_m,\epsilon)$-bad. \end{itemize} Let $w$ be sampled from permutational words of length $L$ uniformly at random. As above, for each $m\in \{0\}\cup[M]$ we define $w^m = w|_{I_m}$. Now, recalling Remark~\ref{uniform}, we will have that each $w^m$ will be sampled uniformly at random from permutational words of length $L_m$. Immediately, we see that the probability of the event $E_0(w)$ being true is exactly $P(v,L_0,\epsilon)$, by definition. We now consider each $m\in [M]$. As $w^m$ is a uniform random permutational word of length $L_m$, we'll get \[\mathbb{P}(w^m \textrm{ is } (u,\epsilon)\textrm{-bad for some }u) \le \sum_{u \in V} P(u,L_m,\epsilon)\] by a union bound. Hence, as the event $E_m(w)$ is contained in the event on the LHS, the probability of $E_m(w)$ occurring is upper-bounded by the RHS. So by a union bound we observe \[\mathbb{P}(w\textrm{ is $(v,\epsilon)$-bad}) \le \sum_{m\in \{0\}\cup[ M]} \mathbb{P}(E_m(w)),\]which gives the desired result due to the bounds given in the preceding paragraph.
\end{proof} \end{lem} Writing $P(L,\epsilon):= \max_{v \in V}\{P(v,L,\epsilon)\}$, we immediately get \begin{cor}\label{doubling} For $\epsilon>0$ and positive integers $L,M$,\[P(ML,\epsilon) \le M|V| P(L,\epsilon).\] \end{cor} Next, we observe \begin{lem}\label{forL} For $ \epsilon> 0, L$, \[P(L,\epsilon) \le \frac{k^L(k-L)!}{k!}\exp(-\frac{\epsilon^2}{4} L).\] \begin{proof} Let $w$ be a uniform random word of length $L$ over $[k]$. For each $v \in V$, we have that \begin{align*} P(v,L,\epsilon) &= \mathbb{P}(w \textrm{ is $(v,\epsilon)$-bad}|w\textrm{ is permutational})\\ &\le \frac{\mathbb{P}(w \textrm{ is $(v,\epsilon)$-bad})}{\mathbb{P}(w\textrm{ is permutational})}\\ \end{align*}by the definition of conditional probability. Immediately, we note that $\mathbb{P}(w\textrm{ is permutational}) = \frac{k!}{(k-L)! k^L}$, which justifies the first term in our lemma. Meanwhile, by a Chernoff bound \cite[Theorem~6.ii]{goemans}, we have that $\mathbb{P}(w \textrm{ is }(v,\epsilon)\textrm{-bad})\le \exp(-\frac{\epsilon^2}{4} L)$ as $\c{v}{w}$ is the sum of $L$ i.i.d. samples from the uniform distribution on $[k]$ (this is true by definition of $D$ being a $k$-DFA). This justifies the second term in our lemma. Hence, $P(v,L,\epsilon) \le \frac{k^L(k-L)!}{k!}\exp(-\frac{\epsilon^2}{4}L)$. As $v\in V$ was arbitrary, the same bound applies to $P(L,\epsilon)$, giving the result. \end{proof} \end{lem} \subsection{Proof of Theorem~\ref{goodwalks}} We require a standard bound for the birthday problem: \begin{rmk}\label{birth} There exists $\alpha_0>0$ such that for $\alpha \in (0,\alpha_0)$, if we take $L = \alpha k$, then we have that \[\frac{k^L(k-L)!}{k!} \le \exp((\alpha^2/2+\alpha^3/4)k).\] \end{rmk}\noindent This follows from \cite[Slide~11]{maji}. We can now prove Theorem~\ref{goodwalks} by choosing $L$ appropriately.
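Before the proof, the bound in Remark~\ref{birth} can be sanity-checked numerically (our own illustration with one concrete choice of $k$ and $\alpha$; the remark itself is asymptotic, so this is not a proof):

```python
# Compare log(k^L (k-L)!/k!) against the claimed bound (alpha^2/2 + alpha^3/4) k
# for one concrete (k, alpha).  Working with logarithms avoids huge factorials.
import math

k, alpha = 1000, 0.1
L = int(alpha * k)  # L = alpha k

# log of k^L (k-L)!/k! = L log k - log(k (k-1) ... (k-L+1))
log_ratio = L * math.log(k) - sum(math.log(k - i) for i in range(L))
assert log_ratio <= (alpha**2 / 2 + alpha**3 / 4) * k
```

Heuristically, $-\sum_{i<L}\log(1-i/k) \approx L^2/(2k) + L^3/(6k^2) + \dots$, which for $L = \alpha k$ is $\approx (\alpha^2/2 + \alpha^3/6)k$, consistent with the remark for small $\alpha$.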
\begin{proof}[Proof of Theorem~\ref{goodwalks}] Fix $\epsilon^* > 0$ and set $\epsilon = 2\epsilon^*/3$ so that \[(1/2-\epsilon)(1-\epsilon)>(1/2-\epsilon^*).\tag{$\dagger$}\label{epchoice}\]Without loss of generality, we may assume $\epsilon<\alpha_0$ where $\alpha_0$ is the constant from Remark~\ref{birth}. Let $D$ be any $k$-DFA with $k^2$ states. We define $P(\cdot,\cdot)$ and $(\cdot,\cdot)$-bad with respect to $D$ as we did in Section~\ref{machinery}. Now, we will take $L = \lfloor \alpha k\rfloor $ for some $\alpha\in (0,\epsilon)$ which we determine later. We shall bound $P(L,\epsilon)$ by directly applying Lemma~\ref{forL}. When $0<\alpha<\epsilon<\alpha_0$, the conclusion of Remark~\ref{birth} holds. Hence, plugging $L$ into Lemma~\ref{forL} gives \[P(L,\epsilon) \le \exp\left((\alpha^2/2+\alpha^3/4)k -\frac{\epsilon^2}{4}L\right) \le \exp\left((\alpha^2/2+\alpha^3/4-\frac{\epsilon^2}{4}\alpha)k+1\right) \] (here the $+1$ accounts for $L$ being the floor of $\alpha k$). Taking $\alpha = \sqrt{\epsilon^2/2+1}-1 \in (0,\epsilon)$,\footnote{It should be clear that defining $\alpha$ in this way ensures $\alpha>0$; by checking derivatives one can confirm that $\epsilon> 0 \implies \alpha< \epsilon$. Hence $\alpha \in (0,\epsilon)$ as desired.} we get \[P(L,\epsilon)\le \exp(1-c_0k),\textrm{ with }c_0=\frac{\epsilon^2(\sqrt{\epsilon^2/2 +1}-1)}{8}.\] Next, we set $M = \lfloor k/L\rfloor$. Because $L \ge 1$, we will have $M \le k$, and by assumption $D$ has at most $k^2$ states. By Corollary~\ref{doubling}, \[P(ML,\epsilon)\le k^3\exp(1-c_0k).\] For later use, we remark that \[k-\epsilon k< k-\alpha k \le k-L < ML \tag{$\ddagger$}\label{floor}.\]The above follows from properties of the floor function and the fact that $\alpha < \epsilon$. Now, let $w \in S_k$ be sampled uniformly at random. By Remark~\ref{uniform}, $w':=w|_{[ML]}$ samples permutational words of length $ML$ uniformly at random.
Trivially, \[\c{\textrm{root}(D)}{w'}\le \c{\textrm{root}(D)}{w}\]as $w'$ is a prefix of $w$. So, assuming $w'$ is $(\textrm{root}(D),\epsilon)$-good, we get \begin{align*} \c{\textrm{root}(D)}{w} &\ge \c{\textrm{root}(D)}{w'}\\ &\ge (1/2-\epsilon)kML \\ &>(1/2-\epsilon^*)k^2.\\ \end{align*} (The last line quickly follows from \ref{epchoice} and \ref{floor}.) Thus, by our bound on $P(ML,\epsilon)$ from above, \[\mathbb{P}(\c{\textrm{root}(D)}{w}\le (1/2-\epsilon^*)k^2) \le P(ML,\epsilon) \le k^3\exp(1-c_0k).\]As $D$ was arbitrary, this holds for all $k$-DFAs on $k^2$ states, thus \[G(k,(1/2-\epsilon^*)k^2,k^2) \le k^3\exp(1-c_0k)k!.\]We conclude by fixing some choice of $c \in (0,c_0)$. By basic asymptotics, it follows that for sufficiently large $k$, we have \[G(k,(1/2-\epsilon^*)k^2,k^2) \le \exp(-ck)k!.\] \end{proof} \section{An alternate approach}\label{alt} In this section, we sketch another way to get bounds on $G$. Here, we break the cost of each walk into two parts, which we bound separately. Fix a $k$-DFA $D$. Suppose we sample $\tau \in S_k$ uniformly at random. We write $\tau = t_1,\dots,t_k$ and $v_0,\dots,v_k = \w[D]{\textrm{root}(D)}{\tau}$. For each $j \in [k]$, let $C_j =\c[D]{v_{j-1}}{t_j}$. By definition of cost, \[\c[D]{\textrm{root}(D)}{\tau} = \sum_{j=1}^k C_j \label{eq:cost}\tag{$*$}.\] Now, given $t_1,\dots, t_{j-1}$, there exists $S_j \subset [k]$ with $|S_j| = k-j+1$ such that $C_j$ samples $S_j$ uniformly at random (in particular, $t_1,\dots,t_{j-1}$ determine $v_{j-1}$, thus we get $S_j = \{\c[D]{v_{j-1}}{t}:t \in [k]\setminus \{t_1,\dots,t_{j-1}\}\}$). Let $X_j$ be such that $C_j$ is the $X_j$-th smallest element of $S_j$.
Since $C_j$ samples $S_j$ uniformly, it follows that $X_j$ samples $[k-j+1]$ uniformly at random. We remark without proof that $X_1,\dots,X_k$ are independent. Next, we define $Y_j = C_j-X_j$, and observe that $Y_j$ is always non-negative. By \ref{eq:cost}, we get \[\c[D]{\textrm{root}(D)}{\tau} = \sum_{j=1}^k X_j +Y_j.\] We shall now consider $\sum_{j=1}^k X_j$ and $\sum_{j=1}^k Y_j$ individually. The first sum is not very complicated and does not depend on our choice of $D$. It suffices to apply Hoeffding's inequality. \begin{lem}\label{Xbound} For any $\epsilon > 0$ and sufficiently large $k$, \[\mathbb{P}(\sum_{j=1}^k X_j \le (1/4-\epsilon)k^2) < \exp( -6\epsilon^2 k).\] \begin{proof} By linearity, \[\mathbb{E}\left[\sum_{j=1}^k X_j\right] = \sum_{j=1}^{k} \frac{k-j+2}{2} = \frac{1}{4}(k^2+3k) > k^2/4.\]Meanwhile, for each $j$ the support of $X_j$ is contained in the interval $[1,k-j+1]$. We have that \[\sum_{j=1}^k (k-j+1-1)^2 = \sum_{j=1}^{k-1} j^2 = \frac{1}{6}(k-1)k(2k-1) < k^3/3.\] Thus, since the deviation below the mean is at least $\epsilon k^2$, applying a standard Hoeffding bound, we get \begin{align*} \mathbb{P}(\sum_{j=1}^k X_j \le (1/4-\epsilon)k^2) &< \exp\left(-\frac{2 (\epsilon k^2)^2}{k^3/3}\right)\\ &= \exp(-6\epsilon^2 k).\\ \end{align*} \end{proof} \end{lem} Next, we want to control the sum over $Y_j$. We first note \begin{align*} Y_j &= \sum_{t=1}^{C_j} I(\c[D]{v_{j-1}}{t}=\c[D]{v_{j-1}}{t_{j'}}\textrm{ for some }j' \in [j-1])\\ &\ge \min_{v \in V} \left\{\sum_{t=1}^{X_j} I(\c[D]{v}{t} =\c[D]{v}{t_{j'}} \textrm{ for some }j'\in [j-1] )\right\}. \end{align*} Thus, for $v \in V, j\in [k],x \in [k-j+1]$, we define \[T_{v,j,x} := \sum_{t=1}^x I(\c[D]{v}{t} = \c[D]{v}{t_{j'}} \textrm{ for some }j'\in [j-1])\] \[\textrm{ and } T_{j,x} := \min_{v \in V}\{T_{v,j,x}\}.\] We will next need two concentration results. These will allow us to bound $\sum_{j=1}^k Y_j$ in a manner reminiscent of Riemann sums. \begin{prp}\label{con1} Fix $\epsilon^* > 0$ and a positive integer $M$.
There exists $c = c_{\ref{con1}}(\epsilon^*,M) > 0$ such that for each $m_1,m_2 \in [M-1]$, \[\mathbb{P}(|\{m_1k/M< j\le (m_1+1)k/M: X_j/(k-j+1)>\frac{m_2}{M}\}| < (1-\epsilon^*)\left(1-\frac{m_2}{M}\right)k/M)\le \exp(-ck)\] when $k$ is sufficiently large. We may in particular take $c_{\ref{con1}}(\epsilon^*,M) =\frac{1}{2}\left(\frac{\epsilon^*}{M}\right)^2$. \end{prp} \begin{prp}\label{con2} Fix $\epsilon^* > 0$ and a positive integer $M$. There exists $c = c_{\ref{con2}}(\epsilon^*,M) > 0$ such that for each $m_1,m_2 \in [M-1]$, \[\mathbb{P}(T_{j,m_2k/M} < (1-\epsilon^*)\left(\frac{m_2}{M}(j-1)\right) \textrm{ for some }\frac{m_1}{M}k<j\le \frac{m_1+1}{M}k)\le \exp(-ck)\] when $k$ is sufficiently large. We may in particular take any $c_{\ref{con2}}(\epsilon^*,M) <\frac{1}{2}\left(\frac{\epsilon^*}{M}\right)^2$. \end{prp}\noindent The first result follows immediately from a Chernoff bound, since the size of the set behaves exactly like a binomial random variable. To prove the second result it suffices to control $T_{v,j,m_2k/M}$ and then take a union bound over all $v,j$. To control $T_{v,j,m_2k/M}$, one can couple it with a binomial random variable $B$ with success probability slightly less than $m_2/M$ so that $\mathbb{P}(B> T_{v,j,m_2k/M})$ is exponentially small, and then apply a Chernoff bound. We leave the details as an exercise for the reader. We note that Proposition~\ref{con2} is the only result whose proof makes use of the number of states in $D$ not being too large. In Section~\ref{bestdfa}, we give an example of a $k$-DFA with $2^k$ states such that $\sum_{j=1}^k Y_j = 0$ always holds; thus limiting the growth of the number of states is necessary. We now go over how to bound $\sum_{j=1}^kY_j$. \begin{lem}\label{Ybound}Fix $\epsilon >0$.
There exists $c > 0$ such that for sufficiently large $k$, \[\mathbb{P}(\sum_{j=1}^kY_j < (1/4-\epsilon)k^2)<\exp(-ck).\] \begin{proof}[Proof of Lemma~\ref{Ybound} given Proposition~\ref{con1} and Proposition~\ref{con2}] Fix $\epsilon^*> 0$ and a positive integer $M$. Now assume that the events of Proposition~\ref{con1} and Proposition~\ref{con2} for the given $\epsilon^*$ and $M$ do not hold for any $m_1,m_2 \in [M-1]$. For $m_1 \in [M-1]$, let $E_{m_1} = \{j: m_1k/M< j\le (m_1+1)k/M\}$. For $m_2 \in [M-1]$, let $F_{m_2} =\{ j: X_j/(k-j+1)>\frac{m_2}{M}\}$. We then have \begin{align*} \sum_{j\in E_{m_1}} Y_j &\ge \sum_{j \in E_{m_1}} T_{j,X_j}\\ &\ge \sum_{m_2 \in [M-1]}\sum_{j \in E_{m_1}\cap F_{m_2}} (1-\epsilon^*)\frac{ (j-1)}{M}\\ &\ge \frac{(1-\epsilon^*)}{M}\sum_{m_2 \in [M-1]}|E_{m_1}\cap F_{m_2}|k\frac{m_1}{M}\\ &\ge \frac{(1-\epsilon^*)^2}{M^2}k^2 \sum_{m_2 \in [M-1]}\left(1-\frac{m_2}{M}\right)\frac{m_1}{M} \end{align*} (the second inequality uses the failure of the event in Proposition~\ref{con2}, together with a telescoping argument; the last uses the failure of the event in Proposition~\ref{con1}). Hence, \begin{align*} \sum_{j=1}^k Y_j &\ge \frac{(1-\epsilon^*)^2}{M^2}k^2\sum_{m_1 \in [M-1]} \sum_{m_2\in [M-1]} \left(1-\frac{m_2}{M}\right)\frac{m_1}{M}\\ &= \frac{(1-\epsilon^*)^2}{M^2}k^2\left(\frac{M-1}{2}\right)^2\\ &\ge (1-\epsilon^*)^2(1-1/M)^2 \frac{1}{4}k^2 \end{align*}\noindent where the second line follows by separating the double sum into the product of two sums (both of which equal $(M-1)/2$). Thus, if $\epsilon^*,M$ are such that $(1-\epsilon^*)^2(1-1/M)^2 \ge 1-4\epsilon$, the right-hand side is at least $(1/4-\epsilon)k^2$. Hence, the probability that $\sum_{j=1}^kY_j <(1/4-\epsilon)k^2$ is at most the probability that there exist $m_1,m_2\in [M-1]$ such that the event from Proposition~\ref{con1} or Proposition~\ref{con2} holds with respect to the specified $\epsilon^*,M$.
By a union bound, this is at most \[(M-1)^2(\exp(-c_{\ref{con1}}(\epsilon^*,M)k)+\exp(-c_{\ref{con2}}(\epsilon^*,M)k)) \le \exp(-ck) \textrm{ for sufficiently large }k\] for any $c <\min\{c_{\ref{con1}}(\epsilon^*,M),c_{\ref{con2}}(\epsilon^*,M)\}$. \end{proof} \end{lem} It is clear that combining Lemma~\ref{Xbound} and Lemma~\ref{Ybound} gives another proof of Theorem~\ref{goodwalks}. \section{Conclusions}\label{conclusion} \subsection{Lower order terms for \texorpdfstring{$f(k;k+1)$}{f(k;k+1)}}\label{loworder} From Corollary~\ref{asy}, we know that $f(k;k+1) = (1/2\pm o(1))k^2$, meaning Miller's construction is optimal up to lower order terms. However, the statement of Theorem~\ref{opt} does not immediately yield any explicit function for this $o(1)$-term. We briefly mention an explicit function our methods yield. To prove $f(k;k+1) > n$, it suffices to show $kG(k,n,k^2) < k!$ (by Remark~\ref{nfromk}). The following comes from inspecting the proof of Theorem~\ref{goodwalks}, and observing $c_0 > \epsilon^4/33$ for sufficiently small $\epsilon$ ($33$ may be replaced with any constant greater than $32$). \begin{rmk}For all sufficiently small $\epsilon>0$, \[\epsilon^4 > \frac{33+132\log(k)}{k} \implies f(k;k+1)>(1/2-3\epsilon/2)k^2.\] \end{rmk}\noindent Analyzing the work from Section~\ref{alt} should give a similar bound, with $33+132\log(k)$ replaced by some other function of the same shape. Thus, we can say \begin{cor} For all $k$, \[\frac{k^2}{2}-k^{7/4+o(1)}\le f(k;k+1)\le \frac{k^2+k}{2}.\] \end{cor}\noindent It is interesting to note that the best known lower bound for $f(k;k)$ is of the form $k^2-k^{7/4+o(1)}$ \cite{kleitman}. This lower bound for $f(k;k)$ was proved in 1976 and has remained unimproved for 45 years. It would be interesting to see if the lower-order error in the lower bound for $f(k;k)$ or $f(k;k+1)$ can be improved. As we will demonstrate in Section~\ref{dstrat}, there is a limit to how well we can bound $f(k;k+1)$ by our methods.
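To see numerically why the remark's hypothesis yields a $k^{7/4+o(1)}$ error term (our own illustration, not from the text): the smallest admissible $\epsilon$ is of order $(\log(k)/k)^{1/4}$, so the correction $(3\epsilon/2)k^2$ is of order $k^{7/4}(\log k)^{1/4}$.

```python
# Illustration (ours): take eps just above the threshold in the remark and
# check that the resulting loss in the lower bound is k^{7/4+o(1)}.
import math

k = 10 ** 6
threshold = (33 + 132 * math.log(k)) / k
eps = 1.001 * threshold ** 0.25       # just above the admissible threshold
assert eps ** 4 > threshold
loss = 1.5 * eps * k ** 2             # the (3*eps/2) * k^2 correction term
assert k ** 1.75 < loss < k ** 1.95   # i.e. k^{7/4 + o(1)}
```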
In particular, for large $k$ we have $G(k,k^2/2-k^{3/2},k+1) = \Omega(k!)$. In fact, a more careful calculation would give that $kG(k,k^2/2-h(k)k^{3/2},k+1) \ge k!$ with $h(k)$ being some slowly growing function which is roughly $|\Phi^{-1}(C/\sqrt{k})|$ for a certain absolute constant $C>0$ (here $\Phi$ is the cdf of the standard normal distribution). \subsection{Other Problems on \texorpdfstring{$k$}{k}-DFAs}\label{otherkdfa}We believe understanding the cost of permutational walks on $k$-DFAs might be of independent interest. We provide some useful constructions and pose a few problems for future work. \subsubsection{Upper bound on \texorpdfstring{$G(k,n,N)$}{G(k,n,N)} independent of \texorpdfstring{$N$}{N}}\label{bestdfa} We note that there is an ``optimally cheap'' $k$-DFA for reading permutations, by which we mean: there is a $k$-DFA $A$ such that for any other $k$-DFA $B$, there exists a bijection $\phi:S_k \to S_k$ such that for $\tau \in S_k$ we have $\c[A]{\textrm{root}(A)}{\tau} \le \c[B]{\textrm{root}(B)}{\phi(\tau)}$. It follows that for any $k$-DFA $B$, \[|\{\tau \in S_k:\c[B]{\textrm{root}(B)}{\tau} \le n\}|\le |\{\tau \in S_k:\c[A]{\textrm{root}(A)}{\tau} \le n\}|.\] Thus the right-hand side is exactly $\max_{N}\{G(k,n,N)\}$. We sketch one construction of $A$. For the set of states, $V$, we use all subsets of $[k]$ (with the empty set being the root). For $v \in V$ and $t \in [k]$, we set $\d{v}{t} = v\cup \{t\}$. For the cost, we impose, for each $v\in V$, that $t \in v \iff \c{v}{t}> k-|v|$. Essentially, the DFA remembers which letters have been read thus far, and assigns the highest costs to these letters (since when reading a permutation, we never read a letter twice). To see optimality, it suffices to show that we always have $\sum_{j=1}^k Y_j = 0$ (here we use the terminology from Section~\ref{alt}). This follows immediately from how the cost is defined.
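The subset construction just described can be sketched concretely as follows (our code; the definition only constrains which letters receive the top costs, so assigning each unread letter its rank among the unread letters is one admissible choice of the cost bijection):

```python
# Sketch (ours) of the 'optimally cheap' DFA on the 2^[k] states: unread
# letters at state v get costs 1,...,k-|v| (here: in increasing order),
# read letters get the costs above k-|v|.
from itertools import permutations

def cost_of(tau, k):
    v, total = frozenset(), 0            # root = empty set of read letters
    for t in tau:
        unread = sorted(set(range(1, k + 1)) - v)
        total += unread.index(t) + 1     # c(v, t) <= k - |v| for unread t
        v = v | {t}                      # delta(v, t) = v union {t}
    return total

k = 4
costs = sorted(cost_of(tau, k) for tau in permutations(range(1, k + 1)))
# Every Y_j = 0 here, so the cost is sum_j X_j with X_j the rank variables:
# the minimum is k (all ranks 1) and the maximum is k(k+1)/2.
assert costs[0] == k and costs[-1] == k * (k + 1) // 2
```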
If we have walked to a vertex $v$, then the letters read while walking to $v$ are exactly the elements of $v$, and these have greater cost at $v$ than any letter which is not an element of $v$ (and thus none of the summands $Y_j$ can be non-zero). \subsubsection{\texorpdfstring{$k$}{k}-DFAs with many low-cost permutations}\label{dstrat} It would be interesting to better understand how fast $n_k$ must grow when \[G(k,n_k,k^2) = \Omega(k!).\] Repeating the analysis from Section~\ref{loworder}, we get that $ n_k\ge k^2/2 - k^{7/4+o(1)}$ must hold. We will describe a construction (provided by Zachary Chase in personal communication) of a $k$-DFA $D$ on $k+1$ states such that for ``many'' $\tau \in S_k$, $\c[D]{\textrm{root}(D)}{\tau} \le k^2/2 - k^{3/2}$. This shows that it is possible to have $n_k \le k^2/2 - \Omega(k^{3/2})$. We first partition $[k]$ into two sets $A,B$ as evenly as possible, so that $|A| \le |B| \le |A|+1$. Our set of states will be $V:=\{-|A|,1-|A|,\dots,|B|\}$ with root $0$. For $t\in A$, we let $\d[D]{v}{t} = v-1$ if $v\neq -|A|$, and for $t\in B$ we let $\d[D]{v}{t} = v+1$ if $v\neq |B|$ (otherwise we let $\delta$ be constant, though this will not matter when reading permutations). With $v_0,\dots,v_L = \w[D]{0}{w_1,\dots,w_L}$, we observe that $v_j = |B\cap \{w_1,\dots,w_j\}|-|A\cap \{w_1,\dots,w_j\}|$, unless the walk gets stuck at one of the endpoint states, which requires some letter to appear more than once among $w_1,\dots,w_j$. Whenever $w$ is a permutation, the second case cannot happen, so $v_0,\dots,v_k := \w[D]{0}{w}$ satisfies \[v_j = |B\cap \{w_1,\dots,w_j\}|-|A\cap \{w_1,\dots,w_j\}|\] whenever $w\in S_k$. For our cost function, we will assign the elements of $A$ lower costs when we are in a negative state and do the opposite otherwise. For simplicity, we consider the case where $k = 2m$, $A = [m]$, $B = [2m]\setminus [m]$.
Then for $v \in V,t\in [k]$, we let \[\c[D]{v}{t}=\begin{cases}t &\textrm{if }v<0\\ t+m&\textrm{if $v\ge 0$ and }t\in A\\ t-m&\textrm{if $v\ge 0$ and }t\in B\\\end{cases}.\] We now analyze the cost of reading permutations in $D$. We may write $\c[D]{v}{t} =mq(v,t)+r(t)$, where $q(v,t) \in \{0,1\}$ and $r(t)\in [m]$ (it is easily verified that $r(t)$ does not depend on $v$). Thus, for $\tau \in S_k$, if $\w[D]{0}{\tau} = v_0,\dots,v_k$, then \[\c[D]{0}{\tau} = \sum_{t \in [k]}r(t) + m\sum_{j\in [k]} q(v_{j-1},\tau(j)).\] Noting $\sum_{t \in [k]}r(t) =\frac{k^2}{4}-\frac{k}{2}\le k^2/4$, it remains to control the second term. Now, we claim (without proof) that if $\tau \in S_k$ is chosen uniformly at random, there is a coupling with $X_1,\dots,X_k$ (where the $X_i$ are i.i.d.\ Bernoulli variables with $\mathbb{P}(X_i=1) = 1/2$) so that $X_j = 0 \implies q(v_{j-1},\tau(j)) = 0$. By the Berry--Esseen theorem, one can see that \[\mathbb{P}(\sum_{j=1}^k X_j \le k/2-2\sqrt{k})\to \Phi(-4)>0\] (where $\Phi$ is the cdf of the standard normal distribution). As $X_j \ge q(v_{j-1},\tau(j))$ for each $j$, it follows that for large $k$, \[\mathbb{P}\left(\sum_{j\in [k]}q(v_{j-1},\tau(j)) \le k/2-2\sqrt{k}\right) \ge \Phi(-4)/2\] \[\implies G(k,k^2/2-k^{3/2},k+1) \ge \frac{\Phi(-4)}{2} k!.\] \subsection{Refuting a conjecture of Gupta}\label{implications} Lastly, we demonstrate how our result contradicts a conjecture of Gupta \cite{gupta} (see also the second item in the final section of \cite{engen}). This conjecture is concerned with ``bi-directional circular pattern containment''. Given a word $w\in [r]^n$, we say $\tau \in S_k$ is a \textit{circular pattern} of $w$ if there exists $i \in [n]$ such that $\tau$ is a pattern of \[w(i),w(i+1),\dots,w(n),w(1),w(2),\dots,w(i-1).\] We say $\tau\in S_k$ is a \textit{bi-directional circular pattern} (BCP) of $w\in [r]^n$ if $\tau$ is a circular pattern of $w$ and/or of $w$'s reversal, $w(n),w(n-1),\dots,w(2),w(1)$.
Gupta conjectured that for each $k$, there is $\sigma \in [k]^n$ with $n \le \frac{3}{8}k^2 +\frac{1}{2}$ such that each $\tau \in S_k$ is a BCP of $\sigma$. By the definition of BCPs, this would mean that there exist $2n$ words $w_1,\dots,w_{2n}\in [k]^n$ such that for any $\tau \in S_k$, there exists $i\in [2n]$ such that $\tau$ is a pattern of $w_i$. This would imply that $k! \le 2nF(k,n) \le k^2F(k,n)$. Hence, by our bounds on $F(k,n)$ we get a contradiction for large $k$. In fact, essentially repeating the analysis from Section~\ref{loworder}, we can show that if $\sigma\in [k]^n$ contains each $\tau \in S_k$ as a BCP, then $n \ge \frac{k^2}{2}-k^{7/4+o(1)}$. In 2012, Lecouturier and Zmiaikou proved that there exists $\sigma \in [k]^{k^2/2 +O(k)}$ which contains each $\tau \in S_k$ as a circular pattern (and hence as a BCP), so our bound is tight up to lower-order terms \cite{lecouturier}. \subsection{A 0-1 phenomenon} In \cite[Section~6]{chroman}, it was asked how large $n_k$ must be for there to exist $\sigma \in [k]^{n_k}$ which contains almost all patterns in $S_k$ (i.e., for which sequences $n_k$ do we have $F(k,n_k) = (1-o(1))k!$). Again, the analysis of Section~\ref{loworder} shows that $n_k \ge k^2/2 - k^{7/4 + o(1)}$ is necessary for $F(k,n_k) = \Omega(k!)$ to hold. Meanwhile, if we consider the word $w_k^m$ obtained by concatenating $m$ copies of $1,2,\dots,k$, then $w_k^m$ contains all $\tau \in S_k$ with at least $k-m$ ascents (the number of ascents of a permutation $\tau \in S_k$ is the number of $j \in [k-1]$ such that $\tau(j)<\tau(j+1)$). By reversing a permutation $\tau \in S_k$ with $a$ ascents, one gets a permutation with $k-a-1$ ascents. Thus, with $m = \lceil k/2\rceil$, the word $w_k^{m}$ contains at least half of the $\tau \in S_k$ as patterns (so $n_k = (k^2+k)/2$ satisfies $F(k,n_k) \ge k!/2$). Finally, using standard martingale concentration results (see e.g.
\cite[Proposition~2.3]{alon}), if $m = k/2 +C\sqrt{k}$, then $w_k^m$ contains $(1-2\exp(-\Omega(C^2)))k!$ patterns; thus $n_k = k^2/2 + \omega(k^{3/2})$ suffices for $F(k,n_k) = (1-o(1))k!$. \subsection{Open Problems} To recap Sections~\ref{loworder} and \ref{otherkdfa}, we find the following problems concerning lower-order terms interesting. \begin{prob}\label{p2} Is there $c_1<7/4$ such that \[k^2-O(k^{c_1}) \le f(k;k)?\] It is known that $c_1$ must be taken to be $\ge 1$. \end{prob} \begin{prob}\label{p3} Is there $c_2<7/4$ such that \[\frac{k^2+k}{2}-O(k^{c_2}) \le f(k;k+1)?\] It is possible that no error term is needed, and $(k^2+k)/2 = f(k;k+1)$ simply holds. \end{prob} \begin{prob}\label{p4} Is there $c_3<7/4$ such that \[ G\left(k,\frac{k^2}{2}-\Omega(k^{c_3}),k^2\right) = o(k!) ?\] Due to Section~\ref{dstrat}, it is clear that $c_3$ must be taken so that $c_3> 3/2$ (but potentially we can take $c_3$ to be any value $>3/2$). \end{prob} It would also be interesting to extend the conclusion of Corollary~\ref{asy} to alphabets with linearly many extra letters. Specifically, we pose the following problem. \begin{prob}\label{linear} Does there exist $\delta > 0$ such that $f(k;(1+\delta)k) \ge (1/2-o(1))k^2$? \end{prob}\noindent This would require a significant new idea. In particular, we think a proof would use some ``redundancy result'' to replace Remark~\ref{nfromk}. We further remark that the stronger statement, which claims $f(k;Ck)\ge (1/2-o(1))k^2$ for every $C>1$, could quite possibly be true. However, our methods fail to prove even that $f(k;1.0001k)\ge (1/4-o(1))k^2$, so this currently seems out of reach. While we believe Problems~1--5 have affirmative answers, we are uncertain whether this stronger statement holds true. Our (lack of) understanding of more efficient superpatterns on small alphabets will be further discussed in \cite{hunter}. \end{document}
\begin{document} \newcommand\RR{\mathbb{R}} \newcommand{\la}{\lambda} \def\RN{\mathbb{R}^n} \newcommand{\norm}[1]{\left\Vert#1\right\Vert} \newcommand{\abs}[1]{\left\vert#1\right\vert} \newcommand{\set}[1]{\left\{#1\right\}} \newcommand{\R}{\mathbb{R}} \newcommand{\supp}{\operatorname{supp}} \newcommand{\card}{\operatorname{card}} \renewcommand{\L}{\mathcal{L}} \renewcommand{\P}{\mathcal{P}} \newcommand{\T}{\mathcal{T}} \newcommand{\A}{\mathbb{A}} \newcommand{\K}{\mathcal{K}} \renewcommand{\S}{\mathcal{S}} \newcommand{\blue}[1]{\textcolor{blue}{#1}} \newcommand{\red}[1]{\textcolor{red}{#1}} \newcommand{\I}{\operatorname{I}} \newcommand\wrt{\,{\rm d}} \def\SL{\sqrt {L}} \newcommand{\mar}[1]{{\marginpar{\sffamily{\scriptsize #1}}}} \newcommand{\li}[1]{{\mar{LY:#1}}} \newcommand{\el}[1]{{\mar{EM:#1}}} \newcommand{\as}[1]{{\mar{AS:#1}}} \newcommand\CC{\mathbb{C}} \newcommand\NN{\mathbb{N}} \newcommand\ZZ{\mathbb{Z}} \renewcommand\Re{\operatorname{Re}} \renewcommand\Im{\operatorname{Im}} \newcommand{\mc}{\mathcal} \newcommand\D{\mathcal{D}} \newcommand{\al}{\alpha} \newcommand{\comment}[1]{\vskip.3cm \fbox{ \color{red} \parbox{0.93\linewidth}{\footnotesize #1}} \vskip.3cm} \newcommand{\disappear}[1]{} \newtheorem{thm}{Theorem}[section] \newtheorem{prop}[thm]{Proposition} \newtheorem{cor}[thm]{Corollary} \newtheorem{lem}[thm]{Lemma} \newtheorem{lemma}[thm]{Lemma} \newtheorem{exams}[thm]{Examples} \theoremstyle{definition} \newtheorem{defn}[thm]{Definition} \newtheorem{rem}[thm]{Remark} \numberwithin{equation}{section} \newcommand{\chg}[1]{{\color{red}{#1}}} \newcommand{\note}[1]{{\color{green}{#1}}} \newcommand{\later}[1]{{\color{blue}{#1}}} \title[ ]{Bounds on the maximal Bochner-Riesz means \\[2pt] for elliptic operators} \author{Peng Chen}
\author{Sanghyuk Lee} \author{Adam Sikora} \author{Lixin Yan} \address{Peng Chen, Department of Mathematics, Sun Yat-sen University, Guangzhou, 510275, P.R. China} \email{[email protected]} \address{Sanghyuk Lee, School of Mathematical Sciences, Seoul National University, Seoul 151-742, Republic of Korea} \email{[email protected]} \address {Adam Sikora, Department of Mathematics, Macquarie University, NSW 2109, Australia} \email{[email protected]} \address{Lixin Yan, Department of Mathematics, Sun Yat-sen University, Guangzhou, 510275, P.R. China} \email{[email protected]} \date{\today} \subjclass[2000]{42B15, 42B25, 47F05.} \keywords{Maximal Bochner-Riesz means, non-negative self-adjoint operators, finite speed propagation property, elliptic type estimates, restriction type conditions.} \begin{abstract} We investigate the $L^p$ boundedness of the maximal Bochner-Riesz means for self-adjoint operators of elliptic type. Assuming the finite speed of propagation property for the associated wave operator, we establish from restriction type estimates the sharp $L^p$ boundedness of the maximal Bochner-Riesz means for such elliptic operators. As applications, we obtain sharp $L^p$ maximal bounds for Schr\"odinger operators on asymptotically conic manifolds, the harmonic oscillator and its perturbations, and elliptic operators on compact manifolds. \end{abstract} \maketitle \section{Introduction } \setcounter{equation}{0} Convergence of the Bochner-Riesz means and boundedness of the associated maximal operators on Lebesgue $L^p$ spaces are among the most classical problems in harmonic analysis. The study of the Bochner-Riesz means can be seen as an attempt to justify the Fourier inversion formula. We begin by recalling the Bochner-Riesz means on $\RN$, which are defined, for $\alpha\ge 0$ and $R>0$, by \begin{eqnarray}\label{ee0} \widehat{S^{\alpha}_Rf}(\xi) =\left(1-{|\xi|^2\over R^2}\right)_+^{\alpha} \widehat{f}(\xi), \quad \forall{\xi \in \RN}.
\end{eqnarray} Here $(x)_+=\max\{0,x\}$ for $x\in \mathbb R$ and $\widehat{f}\,$ denotes the Fourier transform of $f$. The associated maximal function, called the maximal Bochner-Riesz operator, is given by \begin{eqnarray}\label{ee00} S_{\ast}^{\alpha} f(x)= \sup_{R>0} |S^{\alpha}_Rf(x) |. \end{eqnarray} The problem of characterizing the optimal range of $\alpha$ for which $S^\alpha_R$ (and $S^\alpha_\ast$) is bounded on $L^p(\RN)$ is known as the Bochner-Riesz (and maximal Bochner-Riesz) conjecture. It has been conjectured that, for $1\le p \le \infty $ and $p\neq 2$, $S_{R}^\alpha $ is bounded on $L^p(\mathbb R^n)$ if and only if \begin{equation} \label{sharp} \alpha> \alpha(p)=\max \left\{ n \left|{1\over p}-{1\over 2} \right|-{1\over 2}, 0 \right\}. \end{equation} We refer the reader to \cite{DY}, Stein's monograph \cite[Chapter IX]{St2} and Tao \cite{Ta3} for historical background and more on the Bochner-Riesz conjecture. It was shown by Herz that for a given $p$ the above condition on $\alpha$ is necessary, see \cite{He}. Carleson and Sj\"olin \cite{CS} proved the conjecture when $n=2$. Afterwards substantial progress has been made \cite{tvv, Lee, BoGu, ghi}, but the conjecture still remains open for $n\ge 3$. Concerning the $L^p$ boundedness of $S^\alpha_\ast$, for $p\geq 2$ it is natural to expect that $S_{\ast}^\alpha$ is bounded on $L^p$ on the same range where $S_{R}^\alpha$ is bounded; see e.g. \cite{Lee, LRS1}. This was shown to be true by Carbery \cite{Ca} when $n=2$. In dimensions greater than two, partial results are known. Christ \cite{C1} showed that $S_{\ast}^{\alpha} $ is bounded on $L^p$ if $p\geq {2(n+1)/(n-1)}$ and $\alpha> \alpha(p)$; the range of $p$ was extended by the second named author to $p>{2(n+2)/n}$ in \cite{Lee}, and see \cite{Lee2} for the most recent progress.
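The defining relation \eqref{ee0} is a Fourier multiplier, which can be checked numerically (our own illustration, with assumed grid parameters): on a periodic grid, applying the multiplier $(1-|\xi|^2/R^2)_+^{\alpha}$ to a single mode simply rescales it.

```python
# Numerical illustration (ours): S_R^alpha acts on the Fourier side by the
# multiplier (1 - |xi|^2/R^2)_+^alpha, so cos(3x) is rescaled by
# (1 - 9/R^2)^alpha when 3 < R.
import numpy as np

n, R, alpha = 256, 5.0, 1.0
x = 2 * np.pi * np.arange(n) / n
f = np.cos(3 * x)

xi = np.fft.fftfreq(n, d=1.0 / n)              # integer frequencies
mult = np.clip(1 - (xi / R) ** 2, 0, None) ** alpha
SRf = np.fft.ifft(mult * np.fft.fft(f)).real

expected = (1 - 9 / R ** 2) ** alpha * np.cos(3 * x)
assert np.allclose(SRf, expected)
```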
In this paper we focus on the case $p\ge 2$, but it should be mentioned that, for $p<2$, the range of $\alpha$ for which $S_{\ast}^{\alpha}$ is bounded on $L^p$ is different from that for $S_R^{\alpha}$. Tao \cite{Ta1} showed that the additional restriction $\alpha\geq (2n-1)/(2p) - n/2$ is necessary. Besides, when $n=2$ he obtained an estimate improving the classical result \cite{Ta2}. \subsubsection*{Bochner-Riesz means for elliptic operators} Since the Bochner-Riesz means are radial Fourier multipliers, they can be defined in terms of the spectral resolution of the standard Laplace operator $\Delta=\sum_{i=1}^n\partial_{x_i}^2$. This point of view naturally allows us to extend the Bochner-Riesz means and the maximal Bochner-Riesz operator to an arbitrary non-negative self-adjoint operator. For this purpose suppose that $(X,d,\mu) $ is a metric measure space with a distance $d$ and a measure $\mu$, and that $L$ is a non-negative self-adjoint operator acting on the space $L^2(X)$. Such an operator admits a spectral resolution \begin{eqnarray*} L=\int_0^{\infty} \lambda\, dE_L(\lambda). \end{eqnarray*} Now, the Bochner-Riesz mean of order $\alpha\geq 0$ can be defined by \begin{equation}\label{e1.22} S^{\alpha}_R(L) f(x)= \left( \int_0^{R^2} \left(1-\frac{\lambda}{R^2}\right)^{\alpha}dE_L(\lambda) f \right)(x), \ \ \ \ x\in X, \end{equation} and the associated maximal operator is given by \begin{eqnarray}\label{e1.2} S_{\ast}^{\alpha}(L) f(x)= \sup_{R>0} |S^{\alpha}_R(L) f(x) |. \end{eqnarray} If we set $L=-\Delta$, the operators $S^{\alpha}_R(-\Delta)$ and $S_{\ast}^{\alpha}(-\Delta)$ coincide with the classical $S_R^{\alpha}$ and $S_{\ast}^{\alpha}$, respectively. In this paper we aim to investigate the $L^p$-boundedness of the maximal Bochner-Riesz means associated with a certain class of self-adjoint operators.
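The spectral definition \eqref{e1.22} can be made concrete in finite dimensions (our sketch, using a discrete Dirichlet Laplacian as an assumed stand-in for $L$): each spectral projection is weighted by $(1-\lambda/R^2)_+^{\alpha}$, and $S_R^{\alpha}(L)f \to f$ as $R\to\infty$.

```python
# Sketch (ours): the Bochner-Riesz mean of a finite-dimensional self-adjoint
# L, computed through its spectral resolution (eigendecomposition).
import numpy as np

n = 8
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # non-negative, self-adjoint
lam, U = np.linalg.eigh(L)                            # spectral resolution of L

def bochner_riesz(f, R, alpha):
    # weights (1 - lambda/R^2)_+^alpha on each eigenspace
    weights = np.clip(1 - lam / R ** 2, 0, None) ** alpha
    return U @ (weights * (U.T @ f))

f = np.sin(np.linspace(0, 1, n))
err = np.linalg.norm(bochner_riesz(f, R=1e4, alpha=1.0) - f)
assert err < 1e-6          # S_R^alpha(L) f -> f as R -> infinity
```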
\subsubsection*{Restriction estimates} The celebrated Stein-Tomas restriction estimate for the sphere played an important role in the development of the Bochner-Riesz problem (see \cite{St2}). This estimate can be reformulated in terms of the spectral decomposition of the standard Laplace operator. Indeed, for $\lambda>0$ let $R_\lambda$ be the restriction operator given by $R_\lambda(f)(\omega) =\hat{f}({\lambda} \omega),$ where $\omega\in {\bf S}^{n-1}$ (the unit sphere). Then $$ d E_{\sqrt{-\Delta}}(\lambda) =(2\pi)^{-n} \lambda^{n-1}R_\lambda^*R_\lambda. $$ Thus, putting $L=-\Delta$, the { Stein-Tomas theorem} (\cite[p. 386]{St2}) is equivalent to the estimate \begin{equation} \label{e1.4} \|dE_{\sqrt L}(\lambda)\|_{p\to p'}\le C\ \lambda^{{n}(\frac{1}{p}-\frac{1}{p'})-1}, \ \ \lambda>0, \end{equation} for $1\le p\le 2(n+1)/(n+3)$. In \cite{GHS} Guillarmou, Hassell and the third named author showed that the estimate \eqref{e1.4} remains valid for Schr\"odinger type operators on asymptotically conic manifolds. It is easy to check that \eqref{e1.4} is equivalent to the following estimate: \begin{equation*}\label{rp} \tag{${\rm R_p}$} \big\|F(\!\sqrt{L}\,) \big\|_{p\to 2} \leq C R^{n\left({\frac 1p}-{\frac 12}\right)} \big\| \delta_R F \big\|_{2} \end{equation*} for any $R>0$ and all Borel functions $F$ supported in $ [0,R],$ where the dilation $\delta_RF$ is defined by {$\delta_RF(x)=F(Rx)$} (see \cite[Proposition I.4]{COSY}). The observation that restriction estimates are related to the sharp $L^p$-boundedness (the boundedness of $S^\alpha_R$ on $L^p$ for $\alpha$ satisfying \eqref{sharp}) of the Bochner-Riesz means goes back as far as Stein \cite{fe1} (see also \cite{St2}). The argument in \cite{fe1} and the Stein-Tomas restriction estimate give the sharp $L^p$ estimates for $S_R^\alpha(-\Delta)$ for $p$ satisfying $\max(p,p')\ge 2(n+1)/(n-1)$.
Likewise, it is natural to ask whether there is a similar connection between \eqref{rp} and the sharp $L^p$ bound for $S^{\alpha}_R(L)$ when $L$ is a general elliptic operator. This question was explored in \cite{COSY}. In fact, it was shown in \cite[Corollary I.6]{COSY} that if the operator $L$ satisfies the finite speed of propagation property and the condition \eqref{rp}, then the Bochner-Riesz means are bounded on $L^p(X)$ spaces for $p$ in the range where \eqref{rp} holds, provided ${\alpha}>\max(0, n|1/p-1/2|-1/2)$. Our first result is the maximal-operator analogue of the aforementioned result in \cite{COSY}. \noindent{\bf Theorem A.} {\it Let $B(x,r)=\{y\in X: d(x,y)<r\}$ and $V(x,r)= \mu\big( B(x,r)\big)$. Suppose that \begin{equation} \label{eq1.1} C^{-1}r^n \leq V(x, r)\leq C r^n \end{equation} holds for all $x\in X$, and $L$ satisfies the finite speed of propagation property (see Definition \ref{FSP}) and the condition ${\rm (R_{p_0})}$ for some $1\leq p_0 <2$. Then the operator $S_{\ast}^{\alpha}(L)$ is bounded on $L^p(X)$ whenever \begin{eqnarray}\label{eww} 2\leq p< p_0', \ \ \ {\rm and }\ \ \ \alpha> \alpha(p_0)= \max\left\{ n\left({1\over p_0}-{\frac 1 2}\right)- {\frac 1 2}, \, 0 \right\}. \end{eqnarray} As a consequence, if $f\in L^p(X)$, then for $p$ and $\alpha$ satisfying \eqref{eww}, $$ \lim\limits_{R\to \infty}S_{R}^{\alpha}(L)f(x)=f(x), \ \ \ a.e. $$ } Later, we will see that the condition \eqref{eq1.1} can be replaced by the doubling condition \eqref{eq2.2}. \subsubsection*{Cluster estimates} It is not difficult to see that the condition \eqref{rp} implies that the point spectrum of $L$ is empty. Indeed, one has, for $0\leq a< R$, $ \|1\!\!1_{\{a \} }(\sqrt{L}\,) \|_{p\to 2} \leq C R^{n({1\over p}-{1\over 2})} \|1\!\!1_{\{a \} }(R\cdot)\|_{2} =0, $ and thus $1\!\!1_{\{a\} }(\sqrt{L}\,)=0$. Since $\sigma(L)\subseteq [0, \infty)$, it is clear that the point spectrum of $L$ is empty.
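The last observation can be illustrated in finite dimensions (our sketch, with an assumed diagonal matrix standing in for $L$): any matrix has eigenvalues, so the indicator of an eigenvalue of $\sqrt{L}$ produces a nonzero operator while the right-hand side of \eqref{rp} vanishes.

```python
# Illustration (ours): for a matrix L with eigenvalue a^2, the indicator
# F = 1_{{a}} gives F(sqrt(L)) != 0, while F vanishes off a single point,
# so the L^2 norm of delta_R F is 0 and no estimate like (R_p) can hold.
import numpy as np

L = np.diag([1.0, 4.0, 9.0])
a = 2.0                                  # a^2 = 4 is an eigenvalue of L
F = lambda s: (s == a).astype(float)     # F = indicator of the point {a}
F_of_sqrtL = np.diag(F(np.sqrt(np.diag(L))))
lhs = np.linalg.norm(F_of_sqrtL, 2)      # operator norm of F(sqrt(L))
assert abs(lhs - 1.0) < 1e-12            # left-hand side is nonzero
# ...whereas delta_R F is supported on the single point {a/R}, a Lebesgue
# null set, so its L^2 norm (the right-hand side of (R_p)) is 0.
```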
In particular, \eqref{rp} does not hold for elliptic operators on compact manifolds or for the harmonic oscillator. In order to treat these cases as well, we modify the estimate \eqref{rp} as follows: for a fixed natural number $\kappa$ and for all $N\in \NN$ and all even Borel functions $F$ supported in $[-N, N]$, $$ \big\|F(\!\sqrt{L}\,) \big\|_{p\to 2} \leq CN^{n({\frac 1p}-{\frac 12})}\| \delta_N F \|_{N^\kappa,\, 2}, \leqno{\rm (SC^{\kappa}_{p})} $$ where \begin{equation} \label{norm-def} \|F\|_{N,2}:=\left({1\over 2N}\sum_{\ell=1-N}^{N} \sup_{\lambda\in [{\ell-1\over N}, {\ell\over N})} |F(\lambda)|^2\right)^{1/2} \end{equation} for $F$ with $\operatorname{supp} F \subset [-1, 1]$. The norm $\|F\|_{N,2}$ already appeared in \cite{CowS, DOS} in the study of spectral multipliers; see also \cite{COSY}. As shown in \cite[Proposition I.14]{COSY}, the condition ${\rm (SC^{1}_{p})}$ is equivalent to the following $(p, p')$ spectral cluster estimate ${\rm (S_p)}$ introduced by Sogge (see \cite{Sog1, Sog3, Sog4}): for all $\lambda \geq 0,$ $$ \big\|E_{\sqrt{L}}[\lambda, \lambda+1)\big\|_{p\to p'} \leq C (1+\lambda)^{n({1\over p}-{1\over p'})-1}. \leqno{\rm (S_p)} $$ In this context we shall prove the following result. \noindent{\bf Theorem B.} {\it Suppose that the condition \begin{equation}\label{eq2a} \mu(X)<\infty \quad \mbox{and} \quad C^{-1} \min(r^n, 1) \le V(x, r)\leq C \min(r^n, 1) \end{equation} is valid for all $x\in X$ and $r>0$. Suppose moreover that the operator $L$ satisfies the finite speed of propagation property (see Definition \ref{FSP}) and the condition ${\rm (SC^{1}_{p_0})}$. Then the operator $S_{\ast}^{\alpha}(L)$ is bounded on $L^p(X)$ whenever \eqref{eww} is satisfied. As a consequence, if $f\in L^p(X)$, then for $p$ and $\alpha$ satisfying \eqref{eww}, $$ \lim\limits_{R\to \infty}S_{R}^{\alpha}(L)f(x)=f(x), \ \ \ a.e.
$$ } We now consider the case $\mu(X)=\infty$ with the property \eqref{eq1.1}. Motivated by the harmonic oscillator $L=-\Delta+|x|^2$, we obtain the following variant of Theorem B. \noindent{\bf Theorem C.} {\it Suppose that condition \eqref{eq1.1} holds, and that the operator $L$ satisfies the finite speed of propagation property and the condition ${\rm (SC^{\kappa}_{p_0})}$ for some $1\leq p_0<2$ and some positive integer $\kappa$. In addition, we assume that there exists $\nu \ge 0$ such that \begin{eqnarray}\label{e1.500} \|(1+L)^{-\gamma/2}\|_{{p'_0}\to 2}\leq C, \ \ \gamma=n(\kappa-1)(1/p_0-1/2)+\kappa\nu. \end{eqnarray} Then the operator $S_{\ast}^{\alpha}(L)$ is bounded on $L^p(X)$ whenever \begin{eqnarray}\label{ewww2} 2\leq p< p_0', \ \ \ {\rm and }\ \ \ \alpha> \nu+ \max\left\{ n\left({\frac {1}{p_0}}-{\frac 12}\right)- {\frac 12}, \, 0 \right\}. \end{eqnarray} As a consequence, if $f\in L^p(X)$, then for $p$ and $\alpha$ satisfying \eqref{ewww2}, $$ \lim\limits_{R\to \infty}S_{R}^{\alpha}(L)f(x)=f(x), \ \ \ a.e. $$ } We shall show that in dimension $n\geq 2$, \eqref{e1.500} holds with $\kappa=2$ and every $\nu>0$ for the harmonic oscillator $L=-\Delta +|x|^2$ and for $L=-\Delta +V(x)$ with the potential $V$ satisfying \eqref{eq111.01} below. The restriction estimates ${\rm (SC^{2}_{p})}$ for those operators were obtained by Karadzhov \cite{Kar}, Thangavelu \cite{Th4}, and Koch and Tataru \cite{KoT}. Combining these estimates with Theorem C, we obtain the sharp $L^p$ bounds for the associated maximal Bochner-Riesz operators; see Section 6.3. In order to prove Theorems A, B, and C, we make use of the square function which has been utilized to control maximal Bochner-Riesz operators (see \cite{St, Ca, C1, Lee}). The square function estimates in Proposition \ref{prop4.1} and Proposition \ref{prop5.1} also have other applications.
In particular, those estimates can be used to deduce smoothing properties for the Schr\"odinger and wave equations, as well as spectral multiplier theorems of H\"ormander-Mihlin type; see \cite{LRS1, LRS2} for such implications when $L=-\Delta$. However, unlike the classical case $L=-\Delta$, for general elliptic operators we do not have the typical properties of Fourier multipliers, such as translation and scaling invariance. Also, the associated heat kernels are not necessarily smooth. This requires refining the classical argument in various respects. In particular, we will use a new variant of the Calder\'on--Zygmund technique for square functions; see for example \cite{Au, AM}. Roughly speaking, we show that the estimate \eqref{rp} (equivalently \eqref{e1.4}), or its variant, implies the $L^p$ boundedness of the maximal Bochner-Riesz operators under the finite speed of propagation property. The main advantage of this approach is that we can handle a large class of elliptic operators. Since the restriction type estimates are now better understood, it is possible to extend part of this argument to the general setting of homogeneous spaces, and also to include operators such as the harmonic oscillator or operators acting on compact manifolds. The Bochner-Riesz means for various classes of self-adjoint operators have been extensively studied (see \cite{COSY, DOS, Heb, Ho2, Kar, Ma, Se, SYY2, Sog1, Sog4, Th1, Th3, Th4} and references therein). However, as far as the authors are aware, there is no result that proves, on the range of $p$ up to that of the restriction type estimate, the sharp $L^p$ boundedness of the maximal Bochner-Riesz operator for operators other than the standard Laplacian and Fourier multipliers (see \cite{BoGu, Ca, CS, C1, F2, Lee, Lee2, LRS1, LRS2, Se2, SW}). \emph{Organization of the paper.} In Section 2 we provide some prerequisites, which we need later, mostly on the restriction type estimate and the finite speed of propagation property.
In Section 3 we consider maximal bounds under less restrictive assumptions, which include more general elliptic operators, though they do not give the sharp bounds. The proof of Theorem~A will be given in Section 4, and the proofs of Theorems~B and~C in Section 5. In Section \ref{sec6} we discuss some applications of Theorems A, B, and C, which include the harmonic oscillator and its perturbations, Schr\"odinger operators on asymptotically conic manifolds, elliptic operators on compact manifolds, and the radial part of the standard Laplace operator.

{\bf List of notation.} \\
$ \bullet$ $(X,d,\mu) $ denotes a metric measure space with a distance $d$ and a measure $\mu$. \\
$ \bullet$ $L$ is a non-negative self-adjoint operator acting on the space $L^2(X).$ \\
$ \bullet$ For $x\in X$ and $r>0$, $B(x,r)=\{y\in X: d(x,y)<r\}$ and $V(x,r)= \mu\big( B(x,r)\big).$ \\
$ \bullet$ $\delta_RF$ is defined by $\delta_RF(x)=F(Rx)$ for $R>0$ and a Borel function $F$ supported on $ [0,R].$ \\
$\bullet$ $[t]$ denotes the integer part of $t$ for any positive real number $t$. \\
$\bullet$ $\NN$ is the set of positive integers. \\
$\bullet$ For $p\in [1,\infty]$, $p'={p}/{(p-1)}$. \\
$\bullet$ For $1\le p\le\infty$ and $f\in L^p(X,{\rm d}\mu)$, $\|f\|_p=\|f\|_{L^p(X,{\rm d}\mu)}.$ \\
$\bullet$ $\langle \cdot,\cdot \rangle$ denotes the scalar product of $L^2(X, {\rm d}\mu)$. \\
$\bullet$ For $1\le p, \, q\le+\infty$, $\|T\|_{p\to q} $ denotes the operator norm of $T$ from $ L^p(X, {\rm d}\mu)$ to $L^q(X, {\rm d}\mu)$. \\
$\bullet$ If $T$ is given by $Tf(x)=\int K(x,y) f(y) d\mu(y)$, we denote by $K_T$ the kernel of $T$. \\
$\bullet$ Given a subset $E\subseteq X$, we denote by $\mathlarger{\chi}_E$ the characteristic function of $E$. \\
$\bullet$ For $1\leq r <\infty$, $\mathfrak M_r$ denotes the uncentered $r$-th maximal operator over balls in $X$, that is
\begin{equation*}
\mathfrak M_rf(x)=\sup_{x\in B} \left({1\over \mu(B) }\int_{B} |f(y)|^rd\mu(y)\right)^{1/r}.
\end{equation*}
For simplicity we denote by $\mathfrak M$ the Hardy-Littlewood maximal function $\mathfrak M_1$.

\section{Preliminaries}\label{sec2}
\setcounter{equation}{0}
We say that $(X, d, \mu)$ satisfies the doubling property (see Chapter 3, \cite{CW}) if there exists a constant $C>0$ such that
\begin{eqnarray}
V(x,2r)\leq C V(x, r)\quad \forall\,r>0,\,x\in X.
\label{eq2.1}
\end{eqnarray}
If this is the case, there exist $C, n$ such that for all $\lambda\geq 1$ and $x\in X$
\begin{equation}
V(x, \lambda r)\leq C\lambda^n V(x,r).
\label{eq2.2}
\end{equation}
In the Euclidean space with Lebesgue measure, $n$ corresponds to the dimension of the space. Observe that if $X$ satisfies (\ref{eq2.1}) and has finite measure, then it has finite diameter. Therefore, if $\mu(X)$ is finite, we may assume that $X=B(x_0, 1)$ for some $x_0\in X$.

\noindent{\bf 2.1. Finite speed of propagation property and elliptic type estimates.} \ To formulate the finite speed of propagation property for the wave equation corresponding to an operator $L$, we set
\begin{equation*}
\D_r=\{ (x,\, y)\in X\times X: {d}(x,\, y) \le r \}.
\end{equation*}
Given an operator $T$ from $L^p(X)$ to $L^q(X)$, we write
\begin{equation}\label{eq2.4}
\operatorname{supp} K_{T} \subseteq \D_r
\end{equation}
if $\langle T f_1, f_2 \rangle = 0$ whenever $f_1 \in L^p(B(x_1,r_1))$, $f_2 \in L^{q'}( B(x_2,r_2))$ with $r_1+r_2+r < {d}(x_1, x_2)$. Note that if $T$ is an integral operator with a kernel $K_T$, then (\ref{eq2.4}) coincides with the standard meaning of $\operatorname{supp} K_{T} \subseteq \D_r$, that is, $K_T(x, \, y)=0$ for all $(x, \, y) \notin \D_r$.
\begin{defn}\label{FSP}
Given a non-negative self-adjoint operator $L$ on $L^2({X})$, we say that $L$ satisfies the finite speed of propagation property if
\begin{equation*}\label{FS} \tag{FS}
\operatorname{supp} K_{\cos(t\sqrt{L}\,)} \subseteq \D_t, \quad \forall t> 0\,.
\end{equation*}
\end{defn}
Property \eqref{FS} holds for most second order self-adjoint operators and is equivalent to the celebrated Davies-Gaffney estimates; see for example \cite{CouS} and \cite{S2}.
\begin{lemma}\label{le2.2}
Assume that $L$ satisfies the property {\rm (FS)}, and that $F$ is an even bounded Borel function with Fourier transform $\hat{F} \in L^1(\RR)$ such that $\operatorname{supp} \hat{F} \subseteq [-r, r]$. Then
$$ \operatorname{supp} K_{F(\sqrt{L}\,)} \subseteq \D_r. $$
\end{lemma}
\begin{proof}
If $F$ is an even function, then by the Fourier inversion formula,
$$ F(\sqrt{L}\,) =\frac{1}{2\pi}\int_{-\infty}^{+\infty} \hat{F}(t) \cos(t\sqrt{L}\,) \;dt. $$
But $\operatorname{supp}\hat{F} \subseteq [-r,r]$, and the lemma then follows from {\rm (FS)}.
\end{proof}
Since our discussion covers general elliptic operators, we need some related estimates which are slightly more technical. We start by defining the multiplication operator. For any function $W:X\rightarrow \mathbb{R}$, we define $M_{W}$ by
$$(M_{W}f)(x)=W(x)f(x).$$
In what follows, we shall identify the operator $M_W$ with the function $W$. This means that, if $T$ is a linear operator, we shall denote by $W_1T$, $TW_2$, $W_1TW_2$ the operators $M_{W_1}T$, $TM_{W_2}$, $M_{W_1}TM_{W_2}$, respectively. We can now formulate the weighted $L^p-L^2$ estimates (Sobolev type conditions). Firstly we consider
\begin{equation*}\label{EVp} \tag{${\rm EV_{p,2} }$}
\sup_{t>0} \| e^{-t^2L}\, {V_{{t}}^{1/p-1/2}} \|_{p \to 2} < +\infty,
\end{equation*}
where $V_t(x)=V(x,t)$ and $1 \le p< 2$. A detailed and systematic discussion of the condition \eqref{EVp} can be found in \cite{BCS}.
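To fix ideas, it may be helpful to record what \eqref{EVp} amounts to in the model Euclidean case; the following elementary computation is purely illustrative and is not used in the sequel. For $L=-\Delta$ on $X=\mathbb R^n$ with Lebesgue measure one has $V_t(x)=c_n t^n$, so \eqref{EVp} reduces to the classical heat semigroup bound
$$ \big\|e^{t^2\Delta}\big\|_{p\to 2}\leq C\, t^{-n\left(\frac 1p-\frac 12\right)}, \qquad t>0, $$
which follows from Young's convolution inequality: $e^{t^2\Delta}f=p_{t^2}\ast f$ with $p_{t^2}(x)=(4\pi t^2)^{-n/2}e^{-|x|^2/(4t^2)}$, and $\|p_{t^2}\|_{s}= C_s\, t^{-n\left(1-\frac 1s\right)}=C_s\, t^{-n\left(\frac 1p-\frac 12\right)}$, where the exponent $s$ is determined by $1+\frac 12=\frac 1s+\frac 1p$.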
The following condition was introduced in \cite{COSY}:
\begin{equation*}\label{Gp} \tag{${\rm G_{p,2} }$}
\big\|e^{-t^2L}\mathlarger{\chi}_{B(x, s)}\big\|_{p\to 2} \leq CV(x, s)^{{1\over 2}-{1\over p}} \left({s\over {t}}\right)^{n({1\over p}-{1\over 2})}
\end{equation*}
for all $x\in X$ and $s\geq t>0$.
\begin{lemma}\label{le2.0}
Let $1\leq p< 2$. Suppose that $L$ satisfies the property \eqref{FS}. Then the following are equivalent:

(i) \eqref{EVp} holds.

(ii) \eqref{Gp} holds.

(iii) For every $N>n(1/p-1/2)$ there exists $C$ such that
\begin{eqnarray*}
\big\| (I+t \sqrt{L}\,)^{-N} V_t^{{\frac1p}-{\frac12}} \big\|_{p \to 2} \leq C.
\end{eqnarray*}

(iv) For all $x\in X$ and $r\geq t >0$ we have
\begin{equation*}
\big\|(I+t\sqrt{L}\,)^{-N}\mathlarger{\chi}_{B(x, r)}\big\|_{p\to 2} \leq CV(x, r)^{{\frac 12}-{\frac 1p}} \left({\frac rt}\right)^{n\left({\frac 1p}-{\frac 12}\right)}.
\end{equation*}
\end{lemma}
\begin{proof}
The equivalence of the conditions $(ii)$ and $(iv)$ was verified in \cite[Proposition I.3]{COSY}. A similar argument shows that the conditions $(i)$ and $(iii)$ are also equivalent. Thus it is enough to show the equivalence of $(iii)$ and $(iv)$. First we prove that $(iii)$ implies $(iv)$. Note that by the doubling condition, for all $y \in B(x,r)$ one has $V(x,r)\sim V(y,r)$.
Hence for all $x\in X$ and $r\geq t >0$,
\begin{align*}
\big\|(I+t\sqrt{L}\,)^{-N}\mathlarger{\chi}_{B(x, r)}\big\|_{p\to 2} &\le C\big\|(I+t\sqrt{L}\,)^{-N}\mathlarger{\chi}_{B(x, r)} V_r^{{\frac 1p}-{\frac 12}} V(x, r)^{{\frac 12}-{\frac 1p}} \big\|_{p\to 2} \\
&\le C \big\|(I+t\sqrt{L}\,)^{-N}\mathlarger{\chi}_{B(x, r)} V_t^{{\frac 1p}-{\frac 12}} \big\|_{p\to 2} V(x, r)^{{\frac 12}-{\frac 1p}} \left({\frac rt}\right)^{n\left({\frac 1p}-{\frac 12}\right)} \\
& \le C \big\|(I+t\sqrt{L}\,)^{-N} V_t^{{\frac 1p}-{\frac 12}} \big\|_{p\to 2} V(x, r)^{{\frac 12}-{\frac 1p}} \left({\frac rt}\right)^{n\left({\frac 1p}-{\frac 12}\right)} .
\end{align*}
By the assumption $(iii)$ it follows that
\begin{align*}
\big\|(I+t\sqrt{L}\,)^{-N}\mathlarger{\chi}_{B(x, r)}\big\|_{p\to 2} \le C V(x, r)^{{\frac 12}-{\frac 1p}} \left({\frac rt}\right)^{n\left({\frac 1p}-{\frac 12}\right)},
\end{align*}
which is $(iv)$. We now show that $(iv)$ implies $(iii)$. Let us recall the well known identity, for $a>0$,
$$ C_a\int_0^\infty\left(1-\frac{x^2}{s}\right)^a_+e^{-s/4}s^a \,ds=e^{-x^2/4} $$
with some suitable $C_a>0$. Taking the Fourier transform of both sides of the above equality yields
$$ \int_0^\infty F_a(\sqrt s\lambda) s^{a+\frac{1}{2}}e^{-s/4}ds=e^{-\lambda^2}, $$
where $F_a$ is the Fourier transform of the function $t \mapsto (1-{t^2})^a_+$ multiplied by an appropriate constant. Hence, by spectral theory,
$$ \int_0^\infty F_a(\sqrt{stL}) s^{a+\frac{1}{2}}e^{-s/4}ds=e^{-tL}.
$$
Using this and Minkowski's inequality gives
\begin{eqnarray*}
&&\quad \| e^{-tL}\, {V_{{\sqrt{t}}}^{{\frac 1p}-{\frac 12}}} \|_{p \to 2} \le \int_0^\infty \| F_a(\sqrt{tsL}) {V_{{\sqrt{t}}}^{{\frac 1p}-{\frac 12}}} \|_{p \to 2} s^{a+\frac{1}{2}}e^{-s/4}ds \\
&& \le C\int_0^\infty \| F_a(\sqrt{tsL}) {V_{{\sqrt{st}}}^{{\frac 1p}-{\frac 12}}} \|_{p \to 2} \Big(\sqrt{s}+\frac{1}{\sqrt{s}}\Big)^{n\left({\frac 1p}-{\frac 12}\right)}s^{a+\frac{1}{2}}e^{-s/4}ds,
\end{eqnarray*}
where the doubling property \eqref{eq2.2} was used to compare $V_{\sqrt t}$ with $V_{\sqrt{st}}$. Hence, with $a$ large enough,
\begin{equation}\label{eqq}
\sup_{t>0} \| e^{-tL}\, {V_{{\sqrt{t}}}^{{\frac 1p}-{\frac 12}}} \|_{p \to 2} \le C'\sup_{t>0}\| F_a(\sqrt{tL}) {V_{{\sqrt{t}}}^{{\frac 1p}-{\frac 12}}} \|_{p \to 2}.
\end{equation}
We note that $F_a$ satisfies the assumptions of Lemma \ref{le2.2}. Thus $ \operatorname{supp} \, K_{F_a(r\sqrt {L}\,)} \subseteq \D_r$ for all $r>0$. Hence, by \cite[Lemma 4.1.2]{BCS},
\begin{equation} \label{eqq1}
\|F_a(r\sqrt {L}){V_{{r}}^{{\frac 1p}-{\frac 12}}}\|_{p \to 2} \le C \sup_{x\in X} \| F_a(r\sqrt {L}){V_{{r}}^{{\frac 1p}-{\frac 12}}}\mathlarger{\chi}_{B(x, r) }\|_{p\to 2}.
\end{equation}
Observe that
\begin{eqnarray*}
\| F_a(r\sqrt {L}){V_{r}^{{\frac 1p}-{\frac 12}}}\mathlarger{\chi}_{B(x, r) }\|_{p\to 2} &\le& \| F_a(r\sqrt {L}) (1+r\sqrt {L})^{N} (1+r\sqrt {L})^{-N} {V_{{r}}^{{\frac 1p}-{\frac 12}}}\mathlarger{\chi}_{B(x, r) }\|_{p\to 2}\\
&\le& \| F_a(r\sqrt {L}) (1+r\sqrt {L})^{N}\|_{2\to 2} \|(1+r\sqrt {L})^{-N} {V_{{r}}^{{\frac 1p}-{\frac 12}}}\mathlarger{\chi}_{B(x, r) }\|_{p\to 2}\\
&\le& C \|(1+r\sqrt {L})^{-N} {V_{{r}}^{{\frac 1p}-{\frac 12}}}\mathlarger{\chi}_{B(x, r) }\|_{p\to 2}.
\end{eqnarray*}
From this and $(iv)$ with $r=t$, we get
\begin{eqnarray*}
\| F_a(r\sqrt {L}){V_{r}^{{\frac 1p}-{\frac 12}}}\mathlarger{\chi}_{B(x, r) }\|_{p\to 2} &\le& C {V(x,r)^{{\frac 1p}-{\frac 12}}} \|(1+r\sqrt {L})^{-N} \mathlarger{\chi}_{B(x, r) }\|_{p\to 2} \le C.
\end{eqnarray*}
Combining this with \eqref{eqq} and \eqref{eqq1} shows \eqref{EVp}, which is equivalent to $(iii)$.
\end{proof}

Recall that $L$ is a non-negative self-adjoint operator on $L^2(X)$ and that the semigroup $e^{-tL}$, generated by $-L$ on $L^2(X)$, has a kernel $p_t(x,y)$ which satisfies the following Gaussian upper bound:
\begin{equation*}\label{ge} \tag{GE}
\big|p_t(x,y)\big|\leq {C\over V(x,\sqrt{t})} \exp\left(-c { d^2(x,y)\over t } \right)
\end{equation*}
for all $t>0$ and $x,y\in X,$ where $C$ and $ c$ are positive constants. The estimate \eqref{ge} follows from \eqref{FS} and $({\rm EV_{1,2}})$. Indeed, $({\rm EV_{1,2}})$ is equivalent to the standard Gaussian heat kernel estimate, which is valid for a broad class of second order elliptic operators; see e.g. \cite{BCS}. It is not difficult to see that, for $1\leq p<2$, both the conditions \eqref{FS} and \eqref{EVp} follow from the Gaussian estimate \eqref{ge}. But the converse is not true in general: for some $1<p<2$, there are operators which fail to satisfy \eqref{ge} while \eqref{FS} and \eqref{EVp} hold for them. Examples of such operators are provided by Schr\"odinger operators with inverse-square potentials, see \cite{CouS}, and second order elliptic operators with rough lower order terms, see \cite{LSV}.

\noindent{\bf 2.2. Stein-Tomas restriction type condition.} Let $1\leq p< 2$ and $2\leq q\leq\infty$. Following \cite{COSY}, we say that $L$ satisfies the Stein-Tomas restriction type condition if for any $R>0$ and all Borel functions $F$ supported in $ [0,R],$
\begin{equation*}\label{st} \tag{${\rm ST^q_{p, 2}}$}
\big\|F(\sqrt{L}\,)\mathlarger{\chi}_{B(x, r)} \big\|_{p\to 2} \leq CV(x, r)^{{1\over 2}-{1\over p}} \big( Rr \big)^{n({1\over p}-{1\over 2})}\big\| \delta_R F \big\|_{q}
\end{equation*}
for all $x\in X$ and all $r\geq 1/R$. To motivate this definition we state the following two lemmas.
\begin{lemma}\label{le2.3}
Assume that $C^{-1}r^n \leq V(x, r)\leq Cr^n$ for all $x\in X$ and $r>0$. Then ${\rm (ST^2_{p, 2})}$ is equivalent to ${\rm (R_p)}$.
\end{lemma}
\begin{lemma}\label{le2.4}
Assume that a metric measure space $(X,d,\mu)$ satisfies the doubling condition \eqref{eq2.2}. Then ${\rm (ST^\infty_{p, 2})}$ is equivalent to \eqref{EVp} or any other condition listed in Lemma~\ref{le2.0}.
\end{lemma}
For the proofs of these lemmas, and for more on the condition \eqref{st}, we refer the reader to \cite{COSY}, especially \cite[Proposition I.3]{COSY} and \cite[Proposition I.4]{COSY}.

The following result for spectral multipliers of non-negative self-adjoint operators was one of the main results obtained in \cite[Theorem I.16, Corollary I.6]{COSY}. Fix a non-trivial auxiliary function $\eta \in C_c^\infty(0,\infty)$.
\begin{prop} \label{prop2.4}
Assume that $L$ satisfies the property {\rm (FS)} and the condition ${\rm (ST^{q}_{p, 2})}$ for some $p,q$ satisfying $1\leq p<2$ and $2\leq q\leq \infty$.
\begin{itemize}
\item[(i)] For any bounded Borel function $F$ such that $\sup_{t>0}\|\eta\, \delta_tF\|_{W^{\beta, q}}<\infty $ for some $\beta>\max\{n(1/p-1/2),1/q\}$, the operator $F(\sqrt{L}\,)$ is bounded on $L^r(X)$ for all $p<r<p'$. In addition,
\begin{eqnarray*}
\|F(\sqrt{L}\,) \|_{r\to r}\leq C_\beta\Big(\sup_{t>0}\|\eta\, \delta_tF\|_{W^{\beta, q}} + |F(0)|\Big).
\end{eqnarray*}
\item[(ii)] For all ${\alpha}> n(1/p-1/2)-1/q$ we have the uniform bound, for $R>0$,
\begin{eqnarray*}
\Big\|\Big(I-{L\over R^2}\Big)_+^{\alpha}\Big\|_{p\to p}\leq C.
\end{eqnarray*}
\end{itemize}
\end{prop}

Finally, we state a standard weighted inequality for the Littlewood-Paley square function, which we shall use in what follows. For its proof, we refer the reader to \cite{CD, DSY} for $p=1$, and to \cite{AM} for general $1\leq p<2$ on the Euclidean space $\mathbb R^n$. The estimate remains valid on spaces of homogeneous type.
\begin{prop}\label{prop2.5}
Assume that $L$ satisfies the property {\rm (FS)} and the condition \eqref{EVp} for some $1\leq p<2$. Let $\psi $ be a function in $ {\mathscr S} ({\mathbb{R}})$ such that $\psi(0)=0$, and let the quadratic functional be defined by
\begin{eqnarray*}
{\mathcal G}_L(f)(x)=\Big( \sum_{j\in{\mathbb Z}} |\psi(2^j\sqrt{L}\,)f(x) |^2\Big)^{1/2}
\end{eqnarray*}
for $f\in L^2(X)$. Then for any $w\in A_1$ (i.e., any Muckenhoupt $A_1$ weight), ${\mathcal G}_L$ is bounded on $L^{r}(w, X)$ for all $p<r<p'.$
\end{prop}

\section{ Plancherel estimate and maximal Bochner-Riesz operator }
\setcounter{equation}{0}
In this section we discuss the case $p=1$ of the condition \eqref{st}. In Corollary \ref{cor3.3} and Proposition \ref{prop3.2} below we state a version of Theorem A which deals with the case $p=1$; in this case the proofs are significantly simpler. We also describe some other observations which will be useful for the results in full generality.
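Before proceeding, it may be instructive to see why the case $q=2$ of this condition deserves the name Plancherel estimate; the following computation in the model Euclidean case is only illustrative. For $L=-\Delta$ on $\mathbb R^n$ and $F$ supported in $[0,R]$, Plancherel's theorem gives, for $g\in L^1(B(x,r))$,
$$ \big\|F(\sqrt{-\Delta}\,)g\big\|_{2}^{2} =(2\pi)^{-n}\int_{|\xi|\le R}|F(|\xi|)|^{2}\,|\hat g(\xi)|^{2}\,d\xi \le C\|g\|_{1}^{2}\int_{0}^{R}|F(\lambda)|^{2}\lambda^{n-1}\,d\lambda \le C\|g\|_{1}^{2}\, R^{n}\,\|\delta_{R}F\|_{2}^{2}, $$
using $|\hat g(\xi)|\le \|g\|_1$ and $\int_0^R|F(\lambda)|^2\lambda^{n-1}\,d\lambda\le R^{n-1}\cdot R\,\|\delta_RF\|_2^2$. Since $V(x,r)\simeq r^n$, this is precisely \eqref{st} with $p=1$ and $q=2$, namely $\|F(\sqrt{-\Delta}\,)\mathlarger{\chi}_{B(x,r)}\|_{1\to 2}\le C V(x,r)^{-1/2}(Rr)^{n/2}\|\delta_RF\|_2$.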
Following \cite{DOS}, we will call the estimate ${\rm (ST^q_{1, 2})}$ the Plancherel estimate. Assume that $(X, d, \mu)$ satisfies the doubling condition \eqref{eq2.2}. We start with the following lemma.
\begin{lemma}\label{le3.0}
Let $L$ satisfy the Gaussian bound ${\rm (GE)}$ and let $m$ be a bounded Borel function such that $\operatorname{supp} m\subseteq [-2, 2]$. If $\|m\|_{W_{s}^{2}}<\infty$ for some $s>n +1/2$, then for all $x\in X,$
$$ \sup_{t>0}\left| m (t\sqrt{L}\,)f(x)\right| \leq C\|m\|_{W_{ s}^{2}} {\mathfrak M}(f)(x). $$
As a consequence, if $\alpha >n$, then $S_{\ast}^{ \alpha}(L)$ is a bounded operator on $L^p(X)$ for all $ 1<p<\infty.$
\end{lemma}
\begin{proof}
Let $H(t):=m(\sqrt{t}) e^{t}$. By the Fourier inversion formula $ H (t) = \frac{1}{2\pi}\int_{-\infty}^{+\infty}\widehat{H }(\tau) e^{it\tau}d\tau, $ we have
\begin{eqnarray}\label{e3.02}
m(t\sqrt{L}\,) =H(t^2L)e^{-t^2L} = \frac{1}{2\pi}\int_{-\infty}^{+\infty}\hat{H}(\tau)e^{- t^2(1-i\tau)L}d\tau.
\end{eqnarray}
Let $z:=t^2(1-i\tau)$ and $\theta=\arg z$. From ${\rm (GE)}$, it is well known (see \cite[Theorem 7.2]{Ou}) that there exist positive constants $C, c$ such that for all $z\in{\mathbb C}^+$ and a.e. $x,y\in X,$
\begin{eqnarray}\label{e3.03}
\big|p_z(x,y)\big| &\leq& {C\left(\cos\theta\right)^{-n}\over \sqrt{ V\left(x, \sqrt{|z|\over \cos\theta} \right) V\left(y, \sqrt{|z|\over \cos\theta} \right) }} \exp\left(-c { d^2(x,y)\over |z| } \cos\theta\right).
\end{eqnarray}
By the doubling properties of the space $X$, a standard argument gives
\begin{eqnarray*}
\big|e^{-zL} f(x)\big| &\leq& C (1+\tau^{2})^{n/2}\int_X {1\over V\left(x, \sqrt{|z|\over \cos\theta} \right)} \exp\Big(-{ {d(x,y)^2\over 2c\, |z| } \cos\theta}\Big) |f(y)|d\mu(y)\nonumber\\
&\leq& C(1+\tau^{2})^{n/2}{ \mathfrak M} f(x).
\end{eqnarray*}
Then, from this and \eqref{e3.02} it follows that, for any $\varepsilon\in(0,1)$,
\begin{eqnarray}\label{e3.04}
|m(t\sqrt{L}\,)f(x)| &=&\bigg|\frac{1}{2\pi}\int_{\mathbb{R}}e^{- zL}f(x)\hat{H}(\tau)d\tau\bigg| \le C {\mathfrak M}(f)(x)\int_{\mathbb{R}}|\hat{H}(\tau)|(1+\tau^{2})^{n/2}d\tau \nonumber\\
&\leq&C\|H\|_{W_{ (2n+1)/2+\varepsilon}^{2} } {\mathfrak M}(f)(x) \le C\|m\|_{W_{ (2n+1)/2+\varepsilon}^{2} } {\mathfrak M}(f)(x),
\end{eqnarray}
where in the last step we used that $\|H\|_{W_{ (2n+1)/2+\varepsilon}^{2} } \leq C\|m\|_{W_{ (2n+1)/2+\varepsilon}^{2} }$, which holds since $ \operatorname{supp} m\subset[-2,2]$. This gives the desired inequality.

Finally, we notice that $(1-t^2)_+^{\alpha}\in W^2_{s} $ if and only if $ \alpha>s- 1/2$. From \eqref{e3.04} and the $L^p$-boundedness of the Hardy-Littlewood maximal operator $\mathfrak M$, we see that for $\alpha >n$, $S_{\ast}^{ \alpha}(L)$ is a bounded operator on $L^p(X)$ for $ 1<p<\infty.$
\end{proof}
In Lemma~\ref{le3.0} the order of $\alpha$ for which $S_{\ast}^{\alpha}(L)$ is bounded on $L^p(X)$ for $ 1<p<\infty$ is relatively large. This is mainly because the maximal bound is obtained through a pointwise estimate. The bound can be improved by making use of spectral theory. To this end, let us first recall that the Mellin transform of a function $F \colon \, {\mathbb R}\to {\mathbb C}$ is defined by
\begin{eqnarray}\label{e3.2}
{\mathlarger{ \mathfrak m}}_F(u)=\frac{1}{2\pi}\int_0^\infty F(\lambda)\lambda^{-1-iu}d\lambda,\quad u\in{\mathbb R},
\end{eqnarray}
and the inverse transform is given by the formula
\begin{eqnarray}\label{e3.3}
F(\lambda)=\int_{\mathbb R} {\mathlarger{ \mathfrak m}}_F(u)\lambda^{iu}du,\quad \lambda\in[0,\infty).
\end{eqnarray}
\begin{lemma}\label{le3.1}
Suppose that $L$ satisfies the property ${\rm (FS)}$ and the condition \eqref{EVp} for some $1\leq p< 2$. Let $p<r<p'$ and $s>n|{1/p}-{1/2}|.$ Suppose also that $F \colon \, {\mathbb R}\to {\mathbb C}$ is a bounded Borel function such that
$$ \int_{\mathbb R}|{\mathlarger{ \mathfrak m}}_F(u)|(1+|u|)^{s}du =C_{F,s} < \infty. $$
Then the maximal operator
\begin{eqnarray}\label{e3.4}
F^*(L)f(x)=\sup_{t>0}|F(tL)f(x)|
\end{eqnarray}
is bounded on $L^r(X)$ with $ \|F^*(L)\|_{r \to r} \le C C_{F,s}. $ In particular, if $\operatorname{supp} F\subseteq [-2, 2]$ and $\|F\|_{W_{s}^{2}}<\infty$ for some $s>n|{1/ p}-{1/2}| +1/2$, then $F^*(L)$ is bounded on $L^r(X)$ for $ p<r<p'$. As a consequence, if $\alpha >n|{1/ p}-{1/2}|$, then $S_{\ast}^{\alpha}(L)$ is bounded on $L^r(X)$ for $p<r<p'$.
\end{lemma}
\begin{proof}
By \eqref{e3.2} and \eqref{e3.3} it follows that
\begin{eqnarray*}
F(tL)&=&\int_0^{\infty}F(t\lambda) d E_L(\lambda) =\int_0^{\infty} \int_{\mathbb R} {\mathlarger{ \mathfrak m}}_F(u)(t\lambda)^{iu}du \, d E_L(\lambda)\\
&=&\int_{\mathbb R}\int_0^{\infty} {\mathlarger{ \mathfrak m}}_F(u)(t\lambda)^{iu} d E_L(\lambda) du= \int_{\mathbb R} {\mathlarger{ \mathfrak m}}_F(u) t^{iu}L^{iu}du.
\end{eqnarray*}
Hence $ F^*(L)f(x)=\sup_{t>0}|F(tL)f(x)| \le \int_{\mathbb R} |{\mathlarger{ \mathfrak m}}_F(u)| |L^{iu}f(x)| du, $ and we get
\begin{eqnarray}\label{e3.5}
\|F^*(L)f\|_{r}\leq C \|f\|_r \int_{\mathbb R} |{\mathlarger{ \mathfrak m}}_F(u)| \|L^{iu}\|_{r\to r} du.
\end{eqnarray}
From Proposition~\ref{prop2.4}, the imaginary power $L^{iu}$ of $L$ is bounded on $L^r(X)$ for $p<r<p'$, with the bound $ \|L^{iu}\|_{ r \to r} \le C (1+|u|)^{s} $ for any $s>n|{1/p}-{1/2}|$. This, together with \eqref{e3.5}, gives
\begin{eqnarray}\label{e3.7}
\|F^*(L)f\|_{r}\leq C \|f\|_r \int_{\mathbb R} |{\mathlarger{ \mathfrak m}}_F(u)| (1+|u|)^{s} du.
\end{eqnarray}
Set $F(t)=(1-t^2)_+^{\alpha}.$ Substituting $\lambda=e^\nu$ in \eqref{e3.2}, we notice that ${\mathlarger{ \mathfrak m}}_F$ is the Fourier transform of $G(\nu)=F(e^{\nu})$. Since $(1-t^2)_+^{\alpha}$ is compactly supported in $[-1, 1]$, we get
\begin{eqnarray}\label{e3.8}
\int_{\mathbb R} |{\mathlarger{ \mathfrak m}}_F(u)| (1+|u|)^{s} du\leq C \|G\|_{W^2_{s+1/2+\varepsilon} }\leq C\|F\|_{W^2_{s+1/2+\varepsilon} }
\end{eqnarray}
for any $\varepsilon>0$. On the other hand, $(1-t^2)_+^{\alpha}\in W^2_{s+1/2+\varepsilon} $ if and only if $ \alpha>s+\varepsilon$. From this, we conclude that if $\alpha>n|{1/p}-{1/2}|$, then $S_{\ast}^{\alpha}(L)$ is a bounded operator on $L^r(X)$ for $ p<r<p'.$
\end{proof}
As a consequence of Lemma~\ref{le3.1}, we have the following corollary, which gives an essentially sharp $L^2$ maximal bound for the Bochner-Riesz means.
\begin{cor}\label{cor22}
Suppose that $L$ satisfies the property ${\rm (FS)}$ and the condition \eqref{EVp} for some $1\leq p< 2$. If $\alpha >0$, then $S_{\ast}^{ \alpha}(L)$ is bounded on $L^2(X)$.
\end{cor}
In Lemma \ref{le3.0} we obtained a pointwise estimate for the maximal function under the Gaussian bound ${\rm (GE)}$ only. In what follows we additionally impose the condition ${\rm (ST^{q}_{1, 2})}$. This significantly relaxes the regularity assumption on $F$ and allows us to essentially recover the sharp maximal bounds for the Bochner-Riesz means for $p=\infty$; see Corollary~\ref{cor3.3}. The proof of Proposition \ref{prop3.2} was inspired by an argument of Thangavelu \cite[Theorem 4.2]{Th1}.
\begin{prop}\label{prop3.2}
Let $q\in [2, \infty]$. Let $L$ satisfy the Gaussian bound ${\rm (GE)}$ and let $F$ be a Borel function such that $\operatorname{supp} F\subseteq [1/4, 1]$ and $\|F\|_{W_{s}^{q} }<\infty$ for some $s>n/2$.
Suppose that the condition ${\rm (ST^{q}_{1, 2})}$ holds. Then, for each $2\leq r<\infty$,
\begin{equation*}
F^{\ast} (L)f(x)\leq C \|F\|_{W^q_{s} } \mathfrak M_r f(x).
\end{equation*}
Hence, $F^*(L)$ is a bounded operator on $L^p(X)$ for all $ 2<p<\infty$.
\end{prop}
\begin{proof}
Let $r'\in (1, 2]$ be such that $1/r+ 1/r'=1$ and fix $R>0$. Consider a partition of $X$ into the dyadic annuli $A_k=\{ y: 2^{k-1}R^{-1}< d(x,y)\leq 2^kR^{-1} \}$, for $k\in\mathbb N$. For a given $f$ we set
\[ f_0(y)=f(y)\chi_{\{y: \, d(x,y)\leq R^{-1}\}}, \ \ f_k(y)=f(y)\chi_{A_k}(y), \ \ k\in\mathbb N. \]
Then, note that $ \left|F\big({L/R^2}\big) f(x)\right|\leq \sum_{k=0}^{\infty} |F(L/R^2) f_k(x)| $. By H\"older's inequality, for $f\in L^2(X)\cap L^p(X)$,
\begin{eqnarray*}
\left|F\big({L/R^2}\big) f(x)\right| \leq \left(\int_{ d(x,y)\leq R^{-1}} |K_{F(L/R^2)} (x,y)|^{r'} d\mu(y)\right)^{1/r'} \|f_0\|_r+ \sum_{k=1}^{\infty} \left(\int_{A_k} |K_{F(L/R^2)} (x,y)|^{r'} d\mu(y)\right)^{1/r'} \|f_k\|_r.
\end{eqnarray*}
We also note that, by H\"older's inequality again,
\begin{eqnarray*}
\left(\int_{A_k} |K_{F(L/R^2)} (x,y)|^{r'} d\mu(y)\right)^{1/r'} &\leq& V(x, 2^{k} R^{-1})^{{1\over r'}-{1\over 2}}\left(\int_{A_k} |K_{F(L/R^2)} (x,y)|^2 d\mu(y)\right)^{1/2}
\end{eqnarray*}
and $\|f_k\|_r\leq C V(x, 2^{k} R^{-1})^{1/r} \mathfrak M_r f(x)$. Combining all these inequalities gives
\begin{eqnarray} \label{e3.16}
\left|F\big({L/R^2}\big) f(x)\right|\le C\mathfrak M_r f(x) \sum_{k=0}^{\infty} 2^{-ks}V(x, 2^{k} R^{-1})^{1/2}\, \mathfrak I (R),
\end{eqnarray}
where
\[ \mathfrak I (R)= \left(\int_{X} |K_{F(L/R^2)} (x,y)|^2 (1+Rd(x,y))^{2s}d\mu(y)\right)^{1/2}.\]
Since $L$ satisfies the condition ${\rm (ST^{q}_{1, 2})}$ for some $q\in [2, \infty]$, by \cite[Lemma 4.3]{DOS} we have
\begin{eqnarray*}
\int_X \big|K_{F(L/R^2)} (x,y)\big|^2 \big(1+Rd(x,y)\big)^{2s} d\mu(y)&\leq& C V(x, R^{-1})^{-1} \|F\|^2_{W^q_{{s } +\varepsilon}}, \ \ \ \forall \varepsilon>0.
\end{eqnarray*}
Hence, this and \eqref{e3.16} yield
\begin{eqnarray*}
|F(L/R^2)f(x)| &\leq& C\|F\|_{W^q_{{s } +\varepsilon}} \sum_{k=0}^{\infty} 2^{-ks} \left({V(x, 2^{k} R^{-1})\over V(x, R^{-1})}\right)^{1/2} \mathfrak M_r f(x).
\end{eqnarray*}
Since $s> n/2$, we get
\begin{eqnarray*}
|F(L/R^2)f(x)| \leq C\|F\|_{W^q_{{s } +\varepsilon}} \sum_{k=0}^{\infty} 2^{({n\over 2}-s)k} \mathfrak M_r f(x)\leq C \|F\|_{W^q_{{s } +\varepsilon}} \mathfrak M_r f(x)\,.
\end{eqnarray*}
From this and the $L^p$-boundedness of the maximal operator ${\mathfrak M}_r$ for $p>r$, we obtain $ \| F^{\ast}(L) \|_{p\to p}\leq C$ for $p\in (2,\infty)$. This completes the proof.
\end{proof}
We conclude this section with the following result, which covers a special case of Theorem A; in fact, it yields the case $p_0=1$ of Theorem A if we take $q=2$ below.
\begin{cor}\label{cor3.3}
Let $L$ satisfy the Gaussian bound ${\rm (GE)}$. Suppose that the condition ${\rm (ST^{q}_{1, 2})}$ holds for some $q\in [2, \infty]$. If $\alpha >n/2-1/q$, then for each $2\leq r<\infty$,
\begin{equation*}
S_{\ast}^{ \alpha }(L)f(x)\leq C \mathfrak M_r f(x).
\end{equation*}
As a consequence, $S_{\ast}^{\alpha }(L)$ is a bounded operator on $L^p(X)$ for all $2\leq p\leq\infty$.
\end{cor}
\begin{proof}
Let $ S^{\alpha }(t) = (1-t^2)^{\alpha }_+$. We set
$$ S^{\alpha }(t) =S^{\alpha }(t) \phi(t^2) + S^{\alpha }(t)(1-\phi(t^2))=:S^{\alpha, 1 }(t^2) +S^{\alpha, 2 }(t^2), $$
where $\phi\in C^{\infty}(\mathbb R)$ is supported in $ \{ \xi: |\xi| \geq 1/4 \}$ and $\phi =1$ for all $|\xi|\geq 1/2$. Define the maximal Bochner-Riesz operators $S^{\alpha, i }_{ \ast} (L)$, $i=1, 2$, by
$$ S^{\alpha, i }_{ \ast} (L) f(x) = \sup_{R>0} | S^{\alpha, i } \big({L/R^2}\big)f(x)|, \ \ \ i=1,2. $$
Note that by Lemma~\ref{le3.0}, $ S^{\alpha, 2 }_{\ast} (L) f(x) \leq C \mathfrak M f(x).
$
For the operator $S^{\alpha, 1 }_{\ast} (L)$, we choose $n/2<s< \alpha +1/q$, and notice that $S^{ \alpha, 1 }\in W^q_{s+\varepsilon}$ if and only if $ s+\varepsilon<\alpha +1/q$. Taking $\varepsilon$ small enough, we apply Proposition~\ref{prop3.2} to obtain that $ S^{\alpha, 1}_{\ast} (L)f(x)\leq C \|S^{ \alpha, 1 } \|_{W^q_{{s } +\varepsilon}} \mathfrak M_r f(x) \leq C \mathfrak M_r f(x)$ for all $2\leq r<\infty. $ Hence $S^{\alpha }_{\ast} (L)$ is bounded on $L^p(X)$ for all $p>2$. This, together with Corollary~\ref{cor22}, finishes the proof of Corollary~\ref{cor3.3}.
\end{proof}

\section{Spectral restriction estimate and maximal bound}\label{sec4}
\setcounter{equation}{0}
The aim of this section is to prove Theorem A. In fact, we shall establish a slightly more general result which remains valid for spaces of homogeneous type. To this end, we assume that $(X, d, \mu)$ satisfies the doubling condition \eqref{eq2.2}. We will prove the following result, which yields Theorem A as the special case $q=2$ under the uniform volume estimate \eqref{eq1.1}.
\begin{thm}\label{th1.1}
Suppose that $(X, d, \mu)$ satisfies the doubling condition \eqref{eq2.2}. Suppose that $L$ satisfies the property ${\rm (FS)}$ and the condition ${\rm (ST^{q}_{ p_0, 2})}$ for some $1\leq p_0 <2$ and $2\leq q\leq \infty$. Then the operator $S_{\ast}^{\alpha}(L)$ is bounded on $L^p(X)$ whenever
\begin{eqnarray}\label{e4.0}
2\leq p< p_0' \ \ \ {\rm and }\ \ \ \alpha> \max\left\{ n\left({1\over p_0}-{1\over 2}\right)- {1\over q}, \, 0 \right\}.
\end{eqnarray}
As a consequence, if $f\in L^p(X)$, then for $p$ and $\alpha$ in the range \eqref{e4.0},
$$ \lim\limits_{R\to \infty}S_{R}^{\alpha}(L)f(x)=f(x), \ \ \ a.e. $$
\end{thm}
In order to prove Theorem~\ref{th1.1} we use the classical approach which makes use of the square function to control the maximal operator (see \cite{Ca, C1, Lee}).
Here we should mention that we may assume that
\begin{equation} \label{nontrivial}
n\left({1\over p_0}-{1\over 2}\right)- {1\over q}\ge 0.
\end{equation}
Otherwise, by \cite[Corollary I.7]{COSY} it follows that $L=0,$ and Theorem \ref{th1.1} trivially holds. We assume the condition \eqref{nontrivial} for the rest of this section.

\noindent{\bf 4.1. Reduction to a square function estimate.}
Let us recall the well known identity, for $\alpha>\rho>-1,$
\begin{eqnarray*}
\left(1-{|m|^2\over R^2}\right)^{\alpha }=C_{\alpha, \, \rho} R^{-2\alpha }\int_{|m|}^R (R^2-t^2)^{\alpha-\rho-1}t^{2\rho+1} \left(1-{|m|^2\over t^2}\right)^{ \rho}dt
\end{eqnarray*}
with $C_{\alpha, \, \rho}=2\Gamma(\alpha+1)/\Gamma(\rho+1)\Gamma(\alpha-\rho)$.
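For the reader's convenience, we indicate how the argument of \cite[p.278--279]{SW} proceeds from this identity; the following is only a sketch of the standard computation. By the spectral theorem, since $(1-\lambda^2/t^2)_+^{\rho}$ vanishes for $t<\lambda$, the identity gives
$$ S^{\alpha}_R(L) =C_{\alpha, \, \rho}\, R^{-2\alpha}\int_{0}^R (R^2-t^2)^{\alpha-\rho-1}t^{2\rho+1}\, S_t^{ \rho}(L)\,dt. $$
By the Cauchy--Schwarz inequality,
$$ |S^{\alpha }_R(L)f(x)| \leq C_{\alpha,\, \rho}\, R^{-2\alpha} \left( \int_0^R (R^2-t^2)^{2(\alpha-\rho-1)}t^{2(2\rho+1)}\,dt\right)^{1/2} \left( \int_0^R |S_t^{ \rho}(L)f(x)|^2\,dt\right)^{1/2}, $$
and the substitution $t=Rs$ shows that the first integral equals $R^{4\alpha-1}\int_0^1 (1-s^2)^{2(\alpha-\rho-1)}s^{4\rho+2}\,ds$, which is finite precisely when $\alpha>\rho+1/2$ and $\rho>-3/4$. Hence $R^{-2\alpha}$ times the first factor is bounded by $CR^{-1/2}$, and taking the supremum over $R>0$ yields the maximal bound below.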
By the spectral theory, we use an argument in \cite[p.278--279]{SW} to obtain
\begin{eqnarray}\label{e4.2}
S_{\ast}^{\alpha}(L)f(x)\leq C'_{\alpha,\, \rho} \sup_{0<R<\infty} \left( {1\over R}\int_0^R |S_t^{\rho}(L) f(x)|^2dt\right)^{1/2}
\end{eqnarray}
provided that $\rho>-1/2$ and $\alpha >\rho+1/2$. By dyadic decomposition, we write $x^{\rho}_+=\sum_{k\in{\mathbb Z}} 2^{-k\rho} \phi(2^{k}x)$ for some $\phi\in C_0^{\infty}(1/4, 1/2)$. Thus
\begin{eqnarray}\label{e4.3}
(1-|\xi|^2)_+^{\rho}=: \phi_0^{\rho} (\xi)+\sum_{k=1}^{\infty} 2^{-k\rho} \phi_k^{\rho} (\xi),
\end{eqnarray}
where $\phi_k^{\rho}=\phi(2^k(1-|\xi|^2))$, $k\geq 1$, and
\begin{eqnarray*}
{\rm supp}\ \phi_0^{\rho}&\subseteq& \{ |\xi|\leq {3\over 4}\}, \nonumber\\[4pt]
{\rm supp}\ \phi_k^{\rho}&\subseteq& \{ 1-2^{-k}\leq |\xi|\leq 1-2^{-k-2}\}.
\end{eqnarray*}
By \eqref{e4.2}, for $\alpha>\rho+1/2$,
\begin{eqnarray}\label{e4.5}
\|S_{\ast}^{\alpha}(L)f\|_p &\leq& C \left\| \left(\sup_{0<R<\infty} {1\over R}\int_0^R \Big|\phi_0^{\rho} \left({\sqrt{L}\over t}\right) f(x)\Big|^2dt\right)^{1/2}\right\|_p \\
&+& C\sum_{k=1}^{\infty} 2^{-k\rho}\left\|\left(\int_0^{\infty} \Big|\phi_k^{\rho} \left({\sqrt{L}\over t}\right) f(x)\Big|^2{dt\over t}\right)^{1/2} \right\|_p. \nonumber
\end{eqnarray}
By Lemma~\ref{le3.1}, for the first term we have
\begin{eqnarray}\label{e4.6}
\left\|\sup_{0<t<\infty} \left|\phi_0^{\rho} \left(t{\sqrt{L} }\right) f(x)\right| \right\|_p \leq C\|f\|_p, \ \ \ \ p_0<p<p'_0.
\end{eqnarray}
Now, in order to prove Theorem~\ref{th1.1}, by \eqref{e4.5} it is sufficient to show the following.
\begin{prop}\label{prop4.1}
Let $\phi$ be a fixed $C^{\infty}$ function supported in $[-1/2, 1/2]$ with $|\phi|\leq 1$. For every $0<\delta\leq 1,$ define
\begin{eqnarray}\label{e4.7}
T_{\delta}f(x)=\left( \int_0^{\infty} \Big|\phi\left(\delta^{-1}\left(1-{ {L}\over t^2} \right) \right)f(x)\Big|^2 {dt\over t}\right)^{1/2}.
\end{eqnarray}
Suppose that $L$ satisfies the property {\rm (FS)} and the condition ${\rm (ST^{q}_{p_0, 2})}$ for some $1\leq p_0<2$ and $ 2\leq q\leq \infty.$ Then for all $2\leq p<p'_0 $ and $0<\delta\leq 1$,
\begin{eqnarray}\label{e4.8}
\|T_{\delta}f\|_{p} &\leq& C(p)\delta^{{1\over 2}+{1\over q}+n({1\over 2}- {1\over p_0})} \|f\|_{p}.
\end{eqnarray}
\end{prop}

Before we start the proof of Proposition~\ref{prop4.1}, we show that Theorem~\ref{th1.1} is a straightforward consequence of it.
\begin{proof}[Proof of Theorem ~\ref{th1.1}]
Substituting \eqref{e4.6} and \eqref{e4.8} with $\delta=2^{-k}$ back into \eqref{e4.5} yields that, for a small enough $\varepsilon>0$,
\begin{eqnarray*}
\|S_{\ast}^{\alpha}(L)f\|_p&\leq& C\|f\|_p+ C \sum_{k=1}^{\infty} 2^{-k(\alpha-{1\over 2}-\varepsilon)}2^{-k({1\over 2}+{1\over q}+n({1\over 2}-{ 1\over p_0}))} \|f\|_p \leq C\|f\|_p
\end{eqnarray*}
provided that $2\leq p<p'_0$ and $ \alpha>n\left({1/p_0}-{1/2}\right)-{1/q}. $ This proves Theorem~\ref{th1.1}.
\end{proof}

In order to prove Proposition~\ref{prop4.1}, let us first verify \eqref{e4.8} for $p=2$. Recall that $\phi$ is a fixed $C^{\infty}$ function supported in $[-1/2, 1/2]$ with $|\phi|\leq 1$. It follows from the spectral theory \cite{Yo} that, for any $f\in L^2(X)$,
\begin{eqnarray}\label{einter}
\|T_\delta f\|_{2} &=&\left\{\int_0^{\infty}\Big\langle\,\phi^2\left(\delta^{-1}\left(1-{ {L}\over t^2} \right)\right)f, f\Big\rangle {dt\over t}\right\}^{1/2}\nonumber =\left\{\int_0^{\infty}\phi^2\left(\delta^{-1}\left(1- t^2 \right)\right) {dt\over t}\right\}^{1/2}\|f\|_{2}\nonumber \\
&\leq & C \delta^{\frac{1}{2}}\|f\|_{2}.
\end{eqnarray}
Since $\delta\in (0, 1]$ and we assume the condition \eqref{nontrivial}, the estimate \eqref{e4.8} for $p=2$ follows from \eqref{einter}. To prove Proposition~\ref{prop4.1} for $2<p<p_0'$, we make use of a weighted inequality which reduces the desired estimate to a weighted $L^2$ estimate.
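The factor $\delta^{1/2}$ in \eqref{einter} simply reflects the size of the support of the integrand: since $\operatorname{supp}\phi\subseteq[-1/2,1/2]$ and $|\phi|\leq 1$, $\phi(\delta^{-1}(1-t^2))\not=0$ forces $|1-t^2|\le \delta/2$, and hence
$$ \int_0^{\infty}\phi^2\left(\delta^{-1}\left(1- t^2 \right)\right) {dt\over t} \le \int_{\sqrt{1-\delta/2}}^{\sqrt{1+\delta/2}} {dt\over t} = {1\over 2}\log{1+\delta/2\over 1-\delta/2}\le C\delta $$
for $0<\delta\leq 1$.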
See \cite{Ca, C1, LRS1}.

\noindent{\bf 4.2. Weighted inequality for the square function.} Let $r_0$ be the number such that $1/r_0 = 2/p_0 -1.$

\begin{lemma} \label{le4.1}
Suppose that $L$ satisfies the property {\rm (FS)} and the condition ${\rm (ST^{q}_{p_0, 2})}$ for some $1\leq p_0<2$ and $2\leq q\leq \infty$. For any weight $w\geq 0$ and $0<\delta\le 1$,
\begin{eqnarray}\label{e4.10}
\int_X |T_\delta f(x)|^2 w(x) d\mu(x) \leq C\delta^{1+{2\over q}+n({ 1- {{2}\over p_0} })} \int_X |f(x)|^2 \mathfrak M_{r_0}w(x)d\mu(x).
\end{eqnarray}
\end{lemma}

Before proving the lemma, we show how it concludes the proof of Proposition~\ref{prop4.1}. For every $2< p<p_0',$ we take $w\in L^{r}$ with $\|w\|_r\leq 1$, where $1/r + 2/p=1$. Since $r_0< r$, we have
\begin{align*}
&\int_X| T_{\delta}f(x)|^2 w(x)d\mu(x) \leq C\delta^{ 1+{2\over q}+n( 1-{2\over p_0}) } \int_X |f(x)|^2 \mathfrak M_{r_0} w(x)d\mu(x)\nonumber\\
&\leq C\delta^{ 1+{2\over q}+n( 1-{2\over p_0}) } \| f \|_{p }^2\|\mathfrak M_{r_0} w\|_r\nonumber \leq C\delta^{ 1+{2\over q}+n( 1-{2\over p_0}) } \| f \|_{p }^2.
\end{align*}
Hence
$$ \|T_{\delta}f\|^2_p \leq C\delta^{ 1+{2\over q}+n( 1-{2\over p_0}) } \| f \|^2_{p } $$
for some constant $C>0$ independent of $f$ and $\delta.$ This finishes the proof of Proposition~\ref{prop4.1}.

\begin{proof}[Proof of Lemma \ref{le4.1}]
We start with the Littlewood--Paley decomposition associated to the operator $L$. Fix a function $\varphi\in C^{\infty} $ supported in $\{ 1\leq |s|\leq 3\}$ such that $\sum_{k=-\infty}^{\infty}\varphi(2^ks)=1$ on ${\mathbb R}\backslash \{0\}$. Let
\begin{eqnarray}\label{e4.9}
\varphi_k(\sqrt{L}\,)f= \varphi(2^{-k}\sqrt{L}\,) f, \ \ \ \ k\in{\mathbb Z}.
\end{eqnarray}
By the spectral theory we have that, for any $f\in L^2(X),$
\begin{eqnarray}\label{e44}
\sum_k \varphi_k(\sqrt{L}\,)f=f\,.
\end{eqnarray}
By \eqref{e44} we have that, for $f\in L^2(X)\cap L^p(X)$,
\begin{eqnarray} \label{e4.11}
| T_{\delta}f(x)|^2 &\leq& 5\sum_k \int_0^{\infty} \Big|\phi\left(\delta^{-1}\left(1-{ {L}\over t^2} \right) \right)\varphi_k(\sqrt{L}\,)f(x)\Big|^2 {dt\over t} \\
&=& 5\sum_k \int_{2^{k-1}}^{2^{k+2}} \Big|\phi\left(\delta^{-1}\left(1-{ {L}\over t^2} \right) \right)\varphi_k(\sqrt{L}\,)f(x)\Big|^2 {dt\over t}. \nonumber
\end{eqnarray}
For given $0<\delta\leq 1$, we set $j_0=-[\log_2\delta]-1$. Fix an even function $\eta\in C_0^{\infty}$, identically one on $\{|s| \leq 1 \}$ and supported on $\{|s| \leq 2 \}$. Let us set
\begin{equation} \label{zeta}
\zeta_{j_0}(s)=\eta(2^{-j_0} s), \ \ \ \zeta_j(s)=\eta(2^{-j} s)-\eta(2^{-j+1} s), \ j> j_0
\end{equation}
so that
\begin{equation} \label{id}
1\equiv \sum_{j\geq j_0 }\zeta_j(s), \ \ \ \ \forall s>0.
\end{equation}
Then we set $\phi_{\delta}(s)={ \phi\left(\delta^{-1}\left(1-|s|^2 \right) \right)}$ and, for $ j\geq j_0$,
\begin{eqnarray}\label{e4.12}
\phi_{\delta,j}(s)={1\over 2\pi} \int_{-\infty}^{\infty}\zeta_j(u) {\widehat{ \phi_{\delta} }}(u) \cos(s u) du.
\end{eqnarray}
Note that $\zeta_j$ is a dilate of a fixed smooth compactly supported function, supported away from $0$ when $j>j_0$; hence
\begin{eqnarray}\label{e4.13}
| \phi_{\delta,j}(s)|\leq \left\{ \begin{array}{ll}
C_N 2^{(j_0-j)N} , & |s|\in [1/4, 8];\\[8pt]
C_N 2^{j-j_0} (1+ 2^j |s-1|)^{-N}, & {\rm otherwise}
\end{array} \right.
\end{eqnarray}
for any $N$ and all $j\geq j_0$ (see \cite[page 18]{C1}). By the Fourier inversion formula,
\begin{eqnarray}\label{e4.14}
\phi\left(\delta^{-1}\left(1-{s}^2 \right)\right)= \sum_{j\geq j_0}\phi_{\delta,j}(s), \ \ \ \ s>0.
\end{eqnarray}
Set
\[d_j=2^{j+1}/t.\]
By Lemma~\ref{le2.2},
\begin{eqnarray}\label{e4.15}
\operatorname{supp} K_{\phi_{\delta,j}(\sqrt{L}/t)}\subseteq \D_{d_j} =\left\{(x,y)\in X\times X: \ d(x,y)\leq 2^{j+1}/t\right\}.
\end{eqnarray}
From \eqref{e4.11}, \eqref{e4.14} and Minkowski's inequality, it follows that for every function $w\geq 0,$
\begin{eqnarray}\label{e4.16}
\int_X | T_{\delta}f(x)|^2 w(x) d\mu(x) &\leq& C\sum_k\left[ \sum_{j\geq j_0} \left(\int_{2^{k-1}}^{2^{k+2}} \left\langle \Big|\phi_{\delta,j}\left({\sqrt{L}\over t} \right) \varphi_k(\sqrt{L}\,)f\Big|^2, \ w\right\rangle {dt\over t} \right)^{1/2}\right]^{2}.
\end{eqnarray}
For given $k\in{\mathbb Z}$ and $j\geq j_0$, set $\rho=2^{j-k+2}>0$. Following the argument in \cite{GHS}, we can choose a sequence $(x_m) \in X$ such that $d(x_m,x_\ell)> \rho/10$ for $m\neq \ell$ and $\sup_{x\in X}\inf_m d(x,x_m) \le \rho/10$. Such a sequence exists because $X$ is separable. Let $B_m=B(x_m, 3\rho)$ and define $\widetilde{B_m}$ by the formula
$$\widetilde{B_m}=\bar{B}\left(x_m,\frac{\rho}{10}\right)\setminus \bigcup_{\ell<m}\bar{B}\left(x_\ell,\frac{\rho}{10}\right),$$
where $\bar{B}\left(x, \rho\right)=\{y\in X \colon d(x,y) \le \rho\}$. Note that for $m\neq \ell$, $B(x_m, \frac{\rho}{20}) \cap B(x_\ell, \frac{\rho}{20})=\emptyset$. Hence, by the doubling condition \eqref{eq2.2},
\begin{equation}\label{kk}
K=\sup_m\#\{\ell:\;d(x_m,x_\ell)\le 2\rho\} \le \sup_x {V(x, (2+\frac{1}{20})\rho)\over V(x, \frac{\rho}{20})}< C (41)^n.
\end{equation}
It is not difficult to see that
$$ \D_{\rho} \subset \bigcup_{\{\ell, m:\, d(x_\ell, x_m)< 2 \rho\}} \widetilde{B}_\ell\times \widetilde{B}_m \subset \D_{4\rho}. $$
Recall that $1/r_0 +2/p'_0=1$ and $d_j=2^{j+1}/t$.
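Let us also record the short computation behind \eqref{kk}: if $d(x_m,x_\ell)\le 2\rho$, then the pairwise disjoint balls $B(x_\ell, \frac{\rho}{20})$ are all contained in $B(x_m, (2+\frac{1}{20})\rho)$, so that
$$ \#\big\{\ell:\; d(x_m,x_\ell)\le 2\rho\big\}\; \inf_{\ell} V\Big(x_\ell, {\rho\over 20}\Big) \leq V\Big(x_m, \big(2+\tfrac{1}{20}\big)\rho\Big), $$
while, since $(2+\frac{1}{20})\rho = 41\cdot\frac{\rho}{20}$, the doubling condition \eqref{eq2.2} bounds the same-centre volume ratio by $C(41)^n$; comparing the centres $x_\ell$ and $x_m$ costs only a further doubling constant.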
It follows from \eqref{e4.15} that, for every $j,k$ and any test function $w\geq 0,$
\begin{eqnarray*}
\left\langle \Big|\phi_{\delta,j}\left({\sqrt{L}\over t} \right) \varphi_k(\sqrt{L}\,)f\Big|^2, \ w\right\rangle &=& \left\langle \Big| \sum_{\ell, \, m: \, d(x_\ell, x_m)<2 d_j} \mathlarger{\chi}_{\widetilde{B}_\ell} \phi_{\delta,j}\left({\sqrt{L}\over t}\right) \mathlarger{\chi}_{\widetilde{B}_m} \varphi_k(\sqrt{L}\,) f \Big|^2, \, w \right \rangle. \nonumber
\end{eqnarray*}
Using \eqref{kk} and the Cauchy--Schwarz inequality, we have
\begin{eqnarray*}
\left\langle \Big|\phi_{\delta,j}\left({\sqrt{L}\over t} \right) \varphi_k(\sqrt{L}\,)f\Big|^2, \ w\right\rangle &=& \sum_\ell\left\langle \Big| \sum_{m: \, d(x_\ell, x_m)<2d_j} \mathlarger{\chi}_{\widetilde{B}_\ell} \phi_{\delta,j}\left({\sqrt{L}\over t}\right) \mathlarger{\chi}_{\widetilde{B}_m} \varphi_k(\sqrt{L}\,) f \Big|^2, \, w \right \rangle \nonumber\\
&\leq & K \sum_{\ell } \sum_{ m:\, d(x_\ell, x_m)<2 d_j} \left\langle \big| \mathlarger{\chi}_{\widetilde{B}_\ell} \phi_{\delta,j}\left({\sqrt{L}\over t}\right) \mathlarger{\chi}_{\widetilde{B}_m} \varphi_k(\sqrt{L}\,) f \big|^2, \, w \right\rangle.
\end{eqnarray*}
By H\"older's inequality it follows that
\begin{eqnarray}\label{e4.17}
\qquad \left\langle \Big|\phi_{\delta,j}\left({\sqrt{L}\over t} \right) \varphi_k(\sqrt{L}\,)f\Big|^2, \ w\right\rangle \leq K^2 \sum_{m } \|\mathlarger{\chi}_{B_m} w\|_{{r_0}} \left\|\mathlarger{\chi}_{B_m} \phi_{\delta,j}\left({\sqrt{L}\over t}\right) \mathlarger{\chi}_{\widetilde{B}_m} \varphi_k(\sqrt{L}\,) f \right\|^2_{{p'_0}}.
\end{eqnarray}
Since $ \phi_{\delta,j}$ is not compactly supported, we choose an even function $\theta\in C_0(-4, 4) $ such that $\theta(s)=1$ for $s\in (-2, 2)$.
Set
\begin{equation}\label{psi}
{ \psi}_{0, \delta}(s)=\theta(\delta^{-1}(1-s)) \ \ \ {\rm and}\ \ { \psi}_{\ell, \delta}(s)=\theta(2^{-\ell}\delta^{-1}(1-s)) - \theta(2^{-\ell+1}\delta^{-1}(1-s))
\end{equation}
for all $\ell\geq 1$, so that $1=\sum_{\ell=0}^{\infty} { \psi}_{\ell, \delta}(s)$ and hence $ \phi_{\delta,j}(s)= \sum_{\ell=0}^{\infty} \big ( { \psi}_{\ell, \delta}\phi_{\delta,j}\big)(s) $ for all $s>0. $ From this, we apply \eqref{e4.17} to write
\begin{eqnarray}\label{e4.18}\hspace{0.5cm}
&&\hspace{-1.2cm} \left(\int_{2^{k-1}}^{2^{k+2}} \left\langle \Big| \phi_{\delta,j}\left({\sqrt{L}\over t} \right) \varphi_k(\sqrt{L}\,) f\Big|^2, \ w\right\rangle {dt\over t} \right)^{1/2}\nonumber\\
&\leq & \sum_{\ell=0}^{ [-{\rm log_2{\delta}}]} \left( \sum_{m } \|\mathlarger{\chi}_{B_m}w\|_{{r_0}} \int_{2^{k-1}}^{2^{k+2}} \left\|\mathlarger{\chi}_{B_m} \left ( { \psi}_{\ell, \delta}\phi_{\delta,j}\right)\left({\sqrt{L}\over t}\right) \mathlarger{\chi}_{\widetilde{B}_m} \varphi_k(\sqrt{L}\,) f \right\|^2_{{p'_0}} {dt\over t}\right)^{1/2} \nonumber\\
&\quad\qquad +& \sum_{\ell= [-{\rm log_2{\delta}}]+1}^{\infty} \left( \sum_{m } \|\mathlarger{\chi}_{B_m} w\|_{{r_0}} \int_{2^{k-1}}^{2^{k+2}} \left\|\mathlarger{\chi}_{B_m} \left ( { \psi}_{\ell, \delta}\phi_{\delta,j}\right)\left({\sqrt{L}\over t}\right) \mathlarger{\chi}_{\widetilde{B}_m} \varphi_k(\sqrt{L}\,) f \right\|^2_{{p'_0}} {dt\over t}\right)^{1/2}\nonumber\\[6pt]
&=& I(j,k) + I\!I(j,k).
\end{eqnarray}
As will be seen later, the first term $I(j,k)$ is the main one.

\noindent {\bf Estimate for $I(j,k)$.}
For $k\in{\mathbb Z}$ and $\lambda=0, 1, \ldots, \lambda_0=[{8/\delta}] +1$, we set
\begin{equation}\label{ijk1}
I_\lambda=\left[2^{k-1} + \lambda 2^{k-1}\delta, \, 2^{k-1} + (\lambda+1)2^{k-1}\delta\right],
\end{equation}
so that $[2^{k-1}, 2^{k+2}]\subseteq \cup_{\lambda=0}^{\lambda_0} I_\lambda$.
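The covering claim is immediate: the intervals $I_\lambda$, $0\le\lambda\le\lambda_0$, tile $\left[2^{k-1},\, 2^{k-1}\big(1+(\lambda_0+1)\delta\big)\right]$, and since $\lambda_0=[8/\delta]+1$ implies $(\lambda_0+1)\delta\geq 8$, this interval contains $[2^{k-1},\, 9\cdot 2^{k-1}]\supseteq [2^{k-1}, 2^{k+2}]$.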
Define
\begin{eqnarray}\label{ijk2}
\eta_\lambda(s) = \eta\left( \lambda+{ 2^{k-1} -s\over 2^{k-1}\delta}\right),
\end{eqnarray}
where $\eta\in C_0^{\infty}(-1, 1)$ and $\sum_{\lambda\in {\mathbb Z}} \eta(\cdot-\lambda)=1$. Observe that, for every $t\in I_\lambda$, $ {\psi}_{\ell, \delta} \left({s/t}\right)\eta_{\lambda'} (s)\not=0$ is possible only when $\lambda-2^{\ell+6}\leq \lambda'\leq \lambda+2^{\ell+6}.$ Hence, for $t\in I_\lambda$,
\begin{eqnarray} \label{eqn4}
\left ( { \psi}_{\ell, \delta}\phi_{\delta,j}\right)\left({\sqrt{L}\over t}\right) &=& \sum_{\lambda'=\lambda-2^{\ell+6}}^{\lambda+2^{\ell+6}} \left ({ \psi}_{\ell, \delta}\phi_{\delta,j}\right)\left({\sqrt{L}\over t}\right) \eta_{\lambda'} (\sqrt{L}\,),
\end{eqnarray}
so
\begin{eqnarray*}
I(j,k)\leq \sum_{\ell=0}^{ [-{\rm log_2{\delta}}]} \left[ \sum_{m } \|\mathlarger{\chi}_{B_m} w\|_{{r_0}} \sum_{\lambda} \int_{I_\lambda} \left(\sum_{\lambda'=\lambda-2^{\ell+6}}^{\lambda+2^{\ell+6}} \left\|\mathlarger{\chi}_{B_m} \left ({ \psi}_{\ell, \delta}\phi_{\delta,j}\right)\left({\sqrt{L}\over t}\right) \eta_{\lambda'} (\sqrt{L}\,)\big[\mathlarger{\chi}_{\widetilde{B}_m} \varphi_k(\sqrt{L}\,) f\big] \right\|_{{p'_0}}\right)^2 {dt\over t}\right]^{1/ 2}.
\end{eqnarray*}
Note that
$$ \operatorname{supp} { \psi}_{\ell, \delta} \subseteq (1-2^{\ell+2}\delta, 1+2^{\ell+2}\delta).$$
Moreover, if $\ell\geq 1$, then ${ \psi}_{\ell, \delta}(s)=0$ for $s\in (1-2^{\ell}\delta, 1+2^{\ell}\delta)$.
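Both properties follow directly from \eqref{psi}: as $\theta$ is supported in $(-4,4)$, ${ \psi}_{\ell, \delta}(s)\not=0$ requires $2^{-\ell}\delta^{-1}|1-s|<4$, that is $|1-s|<2^{\ell+2}\delta$; and for $\ell\geq 1$, if $|1-s|< 2^{\ell}\delta$, then both arguments $2^{-\ell}\delta^{-1}(1-s)$ and $2^{-\ell+1}\delta^{-1}(1-s)$ lie in $(-2,2)$, where $\theta\equiv 1$, so the two terms in \eqref{psi} cancel.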
By the Stein--Tomas restriction type condition ${\rm (ST^{q}_{p_0, 2})}$, we have, for $0\leq \ell\leq [-{\rm log_2{\delta}}]$,
\begin{eqnarray} \label{e4.19}
\hspace{1cm} && \left\|\mathlarger{\chi}_{B_m} \left (\psi_{\ell, \delta}\phi_{\delta,j}\right)\left({\sqrt{L}\over t}\right)\right\|_{2\to {p'_0} }= \left\| \left ({ \psi}_{\ell, \delta}\phi_{\delta,j}\right) \left({\sqrt{L}\over t}\right) \mathlarger{\chi}_{B_m} \right\|_{p_0\to 2} \\
&\leq& C \left(2^j(1+2^{\ell+2}\delta)\right)^{n ({1\over p_0}-{1\over 2})}\mu(B_m)^{{1\over 2}-{1\over p_0}} \|({ \psi}_{\ell, \delta}\phi_{\delta,j})\big((1+2^{\ell+2}\delta) \cdot\big)\|_q\nonumber \\
&\leq& C 2^{jn ({1\over p_0}-{1\over 2})}\mu(B_m)^{{1\over 2}-{1\over p_0}} \|({ \psi}_{\ell, \delta}\phi_{\delta,j})\big((1+2^{\ell+2}\delta) \cdot\big)\|_q. \nonumber
\end{eqnarray}
From the definition of the function ${ \psi}_{\ell, \delta}$, it follows by \eqref{e4.13} that, for any $N<\infty$,
\begin{equation} \label{psidel}
\|{ \psi}_{\ell, \delta}\phi_{\delta,j}\big((1+2^{\ell+2}\delta) \cdot\big)\|_q \le C
\begin{cases}
\delta^{1\over q} 2^{(j_0-j)N}, &\ell=0, \\[6pt]
\delta^{1\over q} 2^{\ell\over q} 2^{j-j_0}\big(2^{j+\ell} \delta\big)^{-N-1}, &1\leq \ell\leq [-{\rm log_2{\delta}}], \\[6pt]
\delta^{1\over q} 2^{(j_0-j)N} 2^{- \ell N }, & 0\leq \ell\leq [-{\rm log_2{\delta}}].
\end{cases}
\end{equation}
Consequently,
\begin{eqnarray} \label{e4.20}
&&\hspace{-1.8cm} \left\|\mathlarger{\chi}_{B_m} \left ({ \psi}_{\ell, \delta}\phi_{\delta,j}\right)\left({\sqrt{L}\over t}\right) \eta_{\lambda'} (\sqrt{L}\,)\big[\mathlarger{\chi}_{\widetilde{B}_m} \varphi_k(\sqrt{L}\,) f\big] \right\|_{{p'_0}}\\
&\leq& C\delta^{1\over q} 2^{(j_0-j)N} 2^{jn ({1\over p_0}-{1\over 2})} 2^{- \ell N} \mu(B_m)^{{1\over 2}-{1\over p_0}} \big\| \eta_{\lambda'} (\sqrt{L}\,)\big[\mathlarger{\chi}_{\widetilde{B}_m} \varphi_k(\sqrt{L}\,) f\big] \big\|_{2}.\nonumber
\end{eqnarray}
On the other hand,
\begin{align*}
\sum_{\lambda} &\int_{I_\lambda} \left(\sum_{\lambda'=\lambda-2^{\ell+6}}^{\lambda+2^{\ell+6}} \big\| \eta_{\lambda'} (\sqrt{L}\,)\big[\mathlarger{\chi}_{\widetilde{B}_m} \varphi_k(\sqrt{L}\,) f\big] \big\|_{2}\right)^2 {dt\over t} \leq C2^{\ell}\left({2^k\delta\over 2^{k}}\right) \sum_{\lambda} \sum_{\lambda'=\lambda-2^{\ell+6}}^{\lambda+2^{\ell+6}} \big\| \eta_{\lambda'} (\sqrt{L}\,)\big[\mathlarger{\chi}_{\widetilde{B}_m} \varphi_k(\sqrt{L}\,) f\big] \big\|^2_{2} \nonumber \\
&\qquad\qquad\qquad\leq C2^{2\ell} \delta \sum_{\lambda' } \big\| \eta_{\lambda'} (\sqrt{L}\,)\big[\mathlarger{\chi}_{\widetilde{B}_m} \varphi_k(\sqrt{L}\,) f\big] \big\|^2_{2} \leq C2^{2\ell} \delta \big\| \mathlarger{\chi}_{\widetilde{B}_m} \varphi_k(\sqrt{L}\,) f \big\|^2_{2}.
\end{align*}
This, together with estimates \eqref{e4.19} and \eqref{e4.20}, the fact that $1/r_0 +2/p'_0=1$ and
$$ \|\mathlarger{\chi}_{B_m} w\|_{{r_0}} \leq C\mu(B_m)^{ {2\over p_0}-1} \inf_{x\in B_m } \mathfrak M_{r_0} w(x), $$
shows that
\begin{eqnarray*}
I(j,k) &\leq & C\delta^{{1\over q}+{1\over 2}} 2^{(j_0-j)N} 2^{jn ({1\over p_0}-{1\over 2})} \sum_{\ell} 2^{- \ell (N-1)} \left(\sum_{m } \mu(B_m)^{{1 }-{2\over p_0}} \|\mathlarger{\chi}_{B_m} w\|_{{r_0}} \int_{\widetilde{B}_m} \big| \varphi_k(\sqrt{L}\,) f |^2 d\mu(x)\right)^{1/2} \nonumber\\
&\leq &C\delta^{{1\over q}+{1\over 2}} 2^{(j_0-j)N} 2^{jn ({1\over p_0}-{1\over 2})} \left(\sum_m \int_{{\widetilde{B}_m}} |\mathlarger{\chi}_{\widetilde{B}_m} \varphi_k(\sqrt{L}\,) f(x)|^2 \mathfrak M_{r_0} w(x)d\mu(x) \right)^{1/2} \nonumber\\
&\leq &C\delta^{{1\over q}+{1\over 2}} 2^{(j_0-j)N} 2^{jn ({1\over p_0}-{1\over 2})} \left( \int_X |\varphi_k (\sqrt{L}\,)f(x)|^2 \mathfrak M_{r_0} w(x)d\mu(x)\right)^{1/2}.
\end{eqnarray*}

\noindent {\bf Estimate for $I\!I(j,k)$.}
Next we bound the term $ I\!I(j,k).$ For a compactly supported function, the $L^q$ norm is majorized by the supremum norm, so it follows from ${\rm (ST^{q}_{p_0, 2})}$ that
\begin{align*}
\left\|\mathlarger{\chi}_{B_m} \left (\psi_{\ell, \delta}\phi_{\delta,j}\right)\left({\sqrt{L}\over t}\right)\right\|_{2\to {p'_0} } & =\left\| \left ({ \psi}_{\ell, \delta}\phi_{\delta,j}\right) \left({\sqrt{L}\over t}\right) \mathlarger{\chi}_{B_m} \right\|_{p_0\to 2}\nonumber\\
&\leq C \big(2^j(1+2^{\ell+2}\delta)\big)^{n ({1\over p_0}-{1\over 2})}\mu(B_m)^{{1\over 2}-{1\over p_0}} \|({ \psi}_{\ell, \delta}\phi_{\delta,j})\big((1+2^{\ell+2}\delta) \cdot\big)\|_{\infty}.
\end{align*}
From the definition of the function ${ \psi}_{\ell, \delta}$, it follows by \eqref{e4.13} that, for $\ell\geq [-{\rm log_2{\delta}}] +1$,
\begin{eqnarray*}
\|{ \psi}_{\ell, \delta}\phi_{\delta,j}\big((1+2^{\ell+2}\delta) \cdot\big)\|_{\infty} &\leq& C_N 2^{j-j_0}\big(2^{j+\ell} \delta\big)^{-N}
\end{eqnarray*}
for any $N<\infty.$ Therefore,
\begin{eqnarray}\label{eee}
I\!I(j,k) &\leq &C \sum_{\ell= [-{\rm log_2{\delta}}]+1}^{\infty} 2^{j-j_0}\big(2^{j+\ell} \delta\big)^{n ({1\over p_0}-{1\over 2})-N} \left(\sum_m \mu(B_m)^{{1 }-{2\over p_0}} \|\mathlarger{\chi}_{B_m} w\|_{{r_0}} \| \mathlarger{\chi}_{\widetilde{B}_m} \varphi_k(\sqrt{L}\,) f \|^2_{2}\right)^{1/2}\nonumber\\
&\leq &C \delta 2^{j[n ({1\over p_0}-{1\over 2})-N+1]} \left(\int_X |\varphi_k (\sqrt{L}\,)f(x)|^2 \mathfrak M_{r_0} w(x)d\mu(x)\right)^{1/2}.
\end{eqnarray}
Collecting the estimates of the terms $I(j,k)$ and $ I\!I(j,k)$, together with \eqref{e4.16} and \eqref{e4.18}, we arrive at the conclusion that
\begin{eqnarray*}
\int_X| T_{\delta}f(x)|^2 w(x)d\mu(x) &\leq& C \delta \left( \sum_{j\geq j_0 } \big( \delta^{1\over q} 2^{(j_0-j)N} +2^{-j(N-1)}\big) 2^{nj ({1\over p_0}-{1\over 2 })}\right)^{2} \sum_k \int_X |\varphi_k (\sqrt{L}\,)f(x)|^2 \mathfrak M_{r_0} w(x)d\mu(x)\\
&\leq&C\delta^{1+{2\over q}} \delta^{n(1-{2\over p_0})} \int_{X} \sum_k| \varphi_k (\sqrt{L}\,)f(x)|^2 \mathfrak M_{r_0} w(x)d\mu(x)\nonumber\\
&\leq&C\delta^{1+{2\over q}+n(1-{2\over p_0})} \int_{X} | f(x)|^2 \mathfrak M_{r_0} w(x)d\mu(x)
\end{eqnarray*}
whenever $N> n({1/p_0}-{1/2 }) +1$. The last inequality follows by Proposition~\ref{prop2.5}, the weighted inequality for the square function, since $\mathfrak M_{r_0}w$ is an $A_1$ weight. This proves Lemma~\ref{le4.1} and completes the proof of Theorem~\ref{th1.1}.
\end{proof}

\section{Spectral cluster estimate and maximal bound }
\setcounter{equation}{0}

Throughout this section, we assume that $(X, d, \mu)$ is a metric measure space satisfying condition \eqref{eq1.1} or \eqref{eq2a}. Let $1\leq p< 2$ and $2\leq q\leq\infty$.
Following \cite{COSY}, we say that $L$ satisfies the Sogge spectral cluster condition ${\rm (SC^{q, \kappa}_{p})}$ if, for a fixed natural number $\kappa$, for all $N\in \NN$ and all even Borel functions $F$ such that\, $\operatorname{supp} F\subseteq [-N, N]$,
$$ \big\|F(\sqrt{L}\,) \big\|_{p\to 2} \leq CN^{n(\frac{1}{p}-\frac{1}{2})}\| F (N \cdot) \|_{N^\kappa,\, q}, \leqno{\rm (SC^{q, \kappa}_{p})} $$
where
$$ \|F\|_{N,q}=\left({1\over 2N}\sum_{\ell=1-N}^{N} \sup_{\lambda\in [{\ell-1\over N}, {\ell\over N})} |F(\lambda)|^q\right)^{1/q} $$
for $F$ supported in $[-1, 1]$. For $q=\infty$, we put $\|F\|_{N, \infty}=\|F\|_{{\infty}}$ (see also \cite{CowS, DOS}). Theorems~B and C stated in the Introduction are special cases of the following statement with $q=2$.

\begin{thm}\label{th1.2}
Suppose that $L$ satisfies the property ${\rm (FS)}$ and the condition ${\rm (SC^{q,\kappa}_{p_0})}$ for some $1\leq p_0<2$, $2\leq q\leq \infty $ and some $\kappa\in \mathbb N$. In addition, we assume that there exists $\nu \ge 0$ such that \eqref{e1.500} holds. Then the operator $S_{\ast}^{\alpha}(L)$ is bounded on $L^p(X)$ whenever
\begin{eqnarray}\label{eqwww}
2\leq p< p_0', \ \ \ {\rm and }\ \ \ \alpha> \nu+ \max\left\{n\left({\frac {1}{p_0}}-{\frac 12}\right)- {\frac 1q}, \, 0 \right\}.
\end{eqnarray}
As a consequence, if $f\in L^p(X)$, then for $p$ and $\alpha$ satisfying \eqref{eqwww},
$$ \lim\limits_{R\to \infty}S_{R}^{\alpha}(L)f(x)=f(x), \ \ \ a.e. $$
\end{thm}

\begin{rem}
Note that if $(X,d,\mu)$ satisfies \eqref{eq2a}, then by H\"older's inequality and $(1+L)^{0}=Id$, the condition \eqref{e1.500} holds with $\gamma=0$.
\end{rem}

\begin{rem}
Taking into account the condition \eqref{st}, one could consider the following estimate introduced in \cite{COSY}:
$$ \big\|F(\sqrt{L}\,)\mathlarger{\chi}_{B(x, r)} \big\|_{p\to 2} \leq CV(x,r)^{{1\over s}-{1\over p}}(Nr)^{n({1\over p}-{1\over s})}\| F (N \cdot) \|_{N^\kappa,\, q}.
$$
However, one can easily check that, under assumption \eqref{eq1.1} or \eqref{eq2a}, the above condition is equivalent to ${\rm (SC^{q, \kappa}_{p})}$, so here we discuss only the latter.
\end{rem}

\begin{rem} \label{rem5.4}
Note that condition ${\rm (SC^{q,\kappa}_{p_0})}$ is weaker than ${\rm (ST^{q}_{p_0, 2})}$, and we need the a priori estimate \eqref{e1.500} in Theorem~\ref{th1.2}. Recall that in \cite[Theorem I.10]{COSY}, one can obtain $L^p$ bounds for Bochner-Riesz means under the assumption ${\rm (AB_{p_0}^{q,\kappa})}$ instead of estimate \eqref{e1.500}. Following \cite{COSY}, we say that $L$ satisfies the condition ${\rm (AB_{p_0}^{q,\kappa})}$ if for each $\varepsilon>0$, there exists a constant $C_\varepsilon>0$ such that for all $N\in \mathbb{N}$ and even Borel functions $H$ with $\operatorname{supp} H\subseteq [-N, N]$,
$$ \big\|H(\sqrt{L}\,) \big\|_{p_0\to p_0} \leq C_\varepsilon N^{\kappa n({1\over p_0}-{1\over 2})+\varepsilon}\| H (N \cdot) \|_{N^\kappa,\, q} \leqno{{\rm (AB_{p_0}^{q,\kappa})}} $$
(see also \cite[Theorem 3.6]{CowS} and \cite[Theorem 3.2]{DOS} for related results). Once \eqref{e1.500} is proved for some $p_0\in [1, 2 )$ and all $\nu>0$, it is not difficult to check that ${\rm (SC^{q,\kappa}_{p_0})}$ implies ${\rm (AB_{p_0}^{q,\kappa})}$. Indeed, we apply \eqref{e1.500} and ${\rm (SC^{q,\kappa}_{p_0})}$ to obtain
\begin{eqnarray*}
\big\|H(\sqrt{L}\,) \big\|_{p_0\to p_0}&\leq& \big\|H(\sqrt{L}\,)(1+L)^{{n\over 2}({1\over p_0}-{1\over 2})(\kappa-1)+\varepsilon} \big\|_{p_0\to 2} \big\| (1+L)^{-{n\over 2}({1\over p_0}-{1\over 2})(\kappa-1)-\varepsilon} \big\|_{2\to p_0}\\
&\leq& C_\varepsilon N^{n({1\over p_0}-{1\over 2})}\big\| H (N \cdot)(1+N^2\cdot)^{{n\over 2}({1\over p_0}-{1\over 2})(\kappa-1)+\varepsilon} \big\|_{N^\kappa,\, q}\\
&\leq& C_\varepsilon N^{\kappa n({1\over p_0}-{1\over 2})+\varepsilon} \| H(N \cdot) \|_{N^\kappa,\, q}.
\end{eqnarray*}
This verifies the condition ${\rm (AB_{p_0}^{q,\kappa})}$.
\end{rem}

From Remark~\ref{rem5.4}, it is easy to see that the same argument as in \cite[Theorem I.10, Corollary I.7]{COSY} gives the following.

\begin{prop} \label{prop5555}
Under the same assumptions as in Theorem \ref{th1.2}, we have the uniform bound
$$ \big\|\big(I-{L\over R^2}\big)_+^{\alpha}\big\|_{p_0\to p_0}\leq C $$
for ${\alpha}> \nu+ n(1/p_0-1/2)-1/q$ and $R>0$. As a consequence, if ${1/q}> n({1/p_0}-{1/2}) +\nu$ for some $q\geq 2$ and $1\leq p_0<2$, then $L=0.$
\end{prop}

This shows that we may assume the condition
\begin{equation}\label{nontrivial2}
{1\over q}+n\Big({1\over 2}- {1\over p_0}\Big)-\nu\le 0,
\end{equation}
since otherwise $L=0$ and Theorem \ref{th1.2} is trivially true. We assume the condition \eqref{nontrivial2} for the rest of this section. As in the proof of Theorem~\ref{th1.1}, Theorem~\ref{th1.2} is a consequence of the following.

\begin{prop}\label{prop5.1}
Suppose the operator $L$ satisfies the property {\rm (FS)} and the condition ${\rm (SC^{q,\kappa}_{p_0})}$ for some $p_0$ such that $1\leq p_0<2$, $2\leq q\leq \infty$ and some positive integer $\kappa$. In addition, assume that there exists $\nu \ge 0$ such that \eqref{e1.500} holds. Let $\phi$ be a fixed $C^{\infty}$ function supported in $[-1/2, 1/2]$ with $|\phi|\leq 1$, and recall that, for every $0<\delta\le 1$, $T_\delta$ is defined by \eqref{e4.7}. Then for all $2\leq p<p'_0 $ and $0<\delta\leq 1$,
\begin{eqnarray}\label{e5.00}
\|T_{\delta}f\|_{p} &\leq& C(p)\delta^{{1\over 2}+{1\over q}+n({1\over 2}- {1\over p_0})-\nu} \|f\|_{p}.
\end{eqnarray}
\end{prop}

The estimate \eqref{e5.00} for $p=2$ follows from \eqref{einter} and the condition \eqref{nontrivial2}. To show \eqref{e5.00} for $2< p<p'_0$, for $0<\delta\leq 1$ we write
\begin{eqnarray}\label{e5.2}
T_\delta f(x)=\left( \int_0^{\infty} \Big|\phi\left(\delta^{-1}\left(1-{ {L}\over t^2} \right) \right)f(x)\Big|^2 {dt\over t}\right)^{1/2}\leq T_\delta^{(1)}f(x)+T_\delta^{(2)}f(x)+T_\delta^{(3)}f(x),
\end{eqnarray}
where
\begin{eqnarray*}
T_\delta^{(1)}f(x):&=&\left( \int_0^{1} \Big|\phi\left(\delta^{-1}\left(1-{ {L}\over t^2} \right) \right)f(x)\Big|^2 {dt\over t}\right)^{1/2},\nonumber\\
T_\delta^{(2)}f(x):&=&\left( \int_1^{1/\sqrt[\kappa]\delta} \Big|\phi\left(\delta^{-1}\left(1-{ {L}\over t^2} \right) \right)f(x)\Big|^2 {dt\over t}\right)^{1/2},\nonumber\\
T_\delta^{(3)}f(x):&=&\left( \int_{1/\sqrt[\kappa]\delta}^\infty \Big|\phi\left(\delta^{-1}\left(1-{ {L}\over t^2} \right) \right)f(x)\Big|^2 {dt\over t}\right)^{1/2}.
\end{eqnarray*}
It is clear that, to prove Proposition~\ref{prop5.1}, it is sufficient to show the following Lemmas~\ref{le5.1} and \ref{le5.3}.

\begin{lemma} \label{le5.1}
Suppose the operator $L$ satisfies the property {\rm (FS)} and condition ${\rm (SC^{q,\kappa}_{p_0})}$ for some $p_0$ such that $1\leq p_0<2$, $2\leq q\leq \infty $ and some $\kappa\in {\mathbb N}^+$. In addition, we assume that \eqref{e1.500} holds for some $\nu\geq0$.
Then for all $2\leq p\leq p'_0 $ and $0<\delta\le 1$, we have
\begin{eqnarray} \label{t1}
\|T_\delta^{(1)}f\|_{{p}}\leq C\delta^{1/2}\|f\|_{{p}}
\end{eqnarray}
and
\begin{eqnarray} \label{t2}
\|T_\delta^{(2)}f\|_{{p}}\leq C\delta^{{1\over 2}+{1\over q}+n({1\over 2}- {1\over p_0})-\nu} \|f\|_{{p}}.
\end{eqnarray}
\end{lemma}

\begin{lemma} \label{le5.3}
Suppose the operator $L$ satisfies the property {\rm (FS)} and the condition ${\rm (SC^{q,\kappa}_{p_0})}$ for some $p_0$ such that $1\leq p_0<2$, $2\leq q\leq \infty $ and some $\kappa\in {\mathbb N}^+$. Then for all $2\leq p<p'_0 $ and $0<\delta\leq 1$,
\begin{eqnarray*}
\|T_{\delta}^{(3)}f\|_{p} &\leq& C(p)\delta^{{1\over 2}+{1\over q}+n({1\over 2}- {1\over p_0})} \|f\|_{p}.
\end{eqnarray*}
\end{lemma}

\noindent{\bf 5.1. Proof of Lemma~\ref{le5.1}.} By \eqref{einter} and interpolation, the proof reduces to showing \eqref{t1} and \eqref{t2} for $p=p'_0$. By \eqref{e44}, we have that, for $f\in L^2(X)\cap L^p(X)$,
\begin{eqnarray} \label{e5.4}
| T^{(1)}_{\delta}f(x)|^2 &\leq& 5\sum_{k\leq 0} \int_{2^{k-1}}^{2^{k+2}} \Big|\phi\left(\delta^{-1}\left(1-{ {L}\over t^2} \right) \right)\varphi_k(\sqrt{L}\,)f(x)\Big|^2 {dt\over t}.
\end{eqnarray}
Write
$$ \Phi_{t,\delta}(\sqrt{L}\,):=\phi\left(\delta^{-1}\left(1-{ {L}\over t^2} \right)\right). $$
As in Section \ref{sec4}, for $k\in{\mathbb Z}$ and $\lambda=0, 1, \ldots, \lambda_0=[{8/\delta}] +1$, let $I_\lambda$ and $\eta_\lambda$ be defined by \eqref{ijk1} and \eqref{ijk2}, respectively.
Observe that, for every $t\in I_\lambda$, if $ \Phi_{t,\delta}(s)\eta_{\lambda'} (s)\not=0$, then $\lambda-\lambda\delta-3\leq \lambda'\leq \lambda+\lambda\delta+3.$ Hence, we see that, for every $t\in I_\lambda$,
\begin{eqnarray} \label{k-de}
\Phi_{t,\delta}(\sqrt{L}\,) \varphi_k(\sqrt{L}\,) &=& \sum_{\lambda'=\lambda-10}^{\lambda+10} \Phi_{t,\delta}(\sqrt{L}\,) \varphi_k(\sqrt{L}\,) \eta_{\lambda'} (\sqrt{L}\,),
\end{eqnarray}
and thus
\begin{eqnarray} \label{k-int}
\int_{2^{k-1}}^{2^{k+2}} \Big|\Phi_{t,\delta}(\sqrt{L}\,) \varphi_k(\sqrt{L}\,)f\Big|^2 {dt\over t} &=& \sum_\lambda\int_{I_\lambda} \Big|\sum_{\lambda'=\lambda-10}^{\lambda+10} \Phi_{t,\delta}(\sqrt{L}\,) \varphi_k(\sqrt{L}\,) \eta_{\lambda'} (\sqrt{L}\,)f\Big|^2 {dt\over t}\\
&\leq& C\sum_\lambda\sum_{\lambda'=\lambda-10}^{\lambda+10}\int_{I_\lambda} \Big| \Phi_{t,\delta}(\sqrt{L}\,) \varphi_k(\sqrt{L}\,) \eta_{\lambda'} (\sqrt{L}\,)f\Big|^2 {dt\over t}. \nonumber
\end{eqnarray}
By Minkowski's inequality,
\begin{eqnarray*}
\|T_\delta^{(1)}f\|_{{p_0'}}&\leq& C\left\|\left(\sum_{k\leq 0}\int_{2^{k-1}}^{2^{k+2}} \Big|\Phi_{t,\delta}(\sqrt{L}\,) \varphi_k(\sqrt{L}\,)f\Big|^2 {dt\over t}\right)^{1/2}\right\|_{p_0'}\\
&\leq& C\left\|\left(\sum_{k\leq 0}\sum_\lambda\sum_{\lambda'=\lambda-10}^{\lambda+10}\int_{I_\lambda} \Big|\Phi_{t,\delta}(\sqrt{L}\,) \eta_{\lambda'} (\sqrt{L}\,)\varphi_k(\sqrt{L}\,)f\Big|^2 {dt\over t}\right)^{1/2}\right\|_{p_0'}\\
&\leq& C\left(\sum_{k\leq 0}\sum_\lambda\sum_{\lambda'=\lambda-10}^{\lambda+10}\int_{I_\lambda} \left\|\Phi_{t,\delta}(\sqrt{L}\,) \eta_{\lambda'} (\sqrt{L}\,)\varphi_k(\sqrt{L}\,)f\right\|_{p_0'}^2 {dt\over t}\right)^{1/2}.
\end{eqnarray*}
Note that $t\leq 1$, and by ${\rm (SC^{q,\kappa}_{p_0})}$,
\begin{eqnarray*}
\|\Phi_{t,\delta}(\sqrt{L}\,)(1+L)^{\gamma/2}\|_{2\to p_0'}&=& \|\Phi_{t,\delta}(\sqrt{L}\,)(1+L)^{\gamma/2}\|_{p_0\to 2}\\
&\leq& C2^{n({1\over p_0}-{1\over 2})}\|\Phi_{t,\delta}(2\cdot)\|_{2^{\kappa},q}\leq C.
\end{eqnarray*}
Hence $\big\| \Phi_{t,\delta}(\sqrt{L}\,) \varphi_k(\sqrt{L}\,) \eta_{\lambda'} (\sqrt{L}\,)f\big\|_{p_0'} \le C \big\| \eta_{\lambda'} (\sqrt{L}\,)\varphi_k(\sqrt{L}\,)(1+L)^{-\gamma/2}f\big\|_{2} $. From this it is easy to see that
\begin{eqnarray*}
\|T_\delta^{(1)}f\|_{{p_0'}} &\leq& C\left(\sum_{k\leq 0}\sum_\lambda\sum_{\lambda'=\lambda-10}^{\lambda+10} \left\| \eta_{\lambda'} (\sqrt{L}\,)\varphi_k(\sqrt{L}\,)(1+L)^{-\gamma/2}f\right\|_{2}^2 \int_{I_\lambda} {dt\over t}\right)^{1/2}\\
&\leq& C\left(\delta\sum_{k\leq 0}\sum_{\lambda'} \left\| \eta_{\lambda'} (\sqrt{L}\,)\varphi_k(\sqrt{L}\,)(1+L)^{-\gamma/2}f\right\|_{2}^2 \right)^{1/2}\\
&\leq& C\delta^{1\over 2}\|(1+L)^{-\gamma/2}f\|_{2}\\
&\leq& C\delta^{1\over 2}\|f\|_{{p_0'}},
\end{eqnarray*}
where for the last inequality we use \eqref{e1.500}. Thus we get \eqref{t1}.

We now show \eqref{t2} for $p=p'_0$. By \eqref{e44}, we have that, for $f\in L^2(X)\cap L^p(X)$,
\begin{eqnarray} \label{e5.5}
| T^{(2)}_{\delta}f(x)|^2 &\leq & C\sum_{0<k\leq 1-{\rm log_2 \sqrt[\kappa]{\delta}}} \int_{2^{k-1}}^{2^{k+2}} \Big|\phi\left(\delta^{-1}\left(1-{ {L}\over t^2} \right) \right)\varphi_k(\sqrt{L}\,)f(x)\Big|^2 {dt\over t}.
\end{eqnarray}
Again, for $k\in{\mathbb Z}$, $t\in [2^{k-1}, 2^{k+2}]$ and $\lambda=0, 1, \ldots, \lambda_0=[{8/\delta}] +1$, we consider the interval $I_\lambda$ and the function $\eta_\lambda$ given by \eqref{ijk1} and \eqref{ijk2}, respectively.
Observe that for every $t\in I_\lambda$, if $\Phi_{t,\delta}(s)\eta_{\lambda'}(s)\not=0$, then $\lambda-\lambda\delta-3\leq \lambda'\leq \lambda+\lambda\delta+3.$ Hence, as before, it follows that, for every $t\in I_\lambda$, \eqref{k-de} holds and we have \eqref{k-int}. Putting this in \eqref{e5.5} and applying Minkowski's inequality (twice) gives \begin{eqnarray*} \|T_\delta^{(2)}f\|_{{p_0'}} &\leq& C\left\|\left(\sum_{0<k\leq 1-\log_2 \sqrt[\kappa]{\delta}}\sum_\lambda\sum_{\lambda'=\lambda-10}^{\lambda+10}\int_{I_\lambda} \Big|\Phi_{t,\delta}(\sqrt{L}\,) \eta_{\lambda'} (\sqrt{L}\,)\varphi_k(\sqrt{L}\,)f\Big|^2 {dt\over t}\right)^{1/2}\right\|_{p_0'}\\ &\leq& C\left(\sum_{0<k\leq 1-\log_2 \sqrt[\kappa]{\delta}}\sum_\lambda\sum_{\lambda'=\lambda-10}^{\lambda+10}\int_{I_\lambda} \left\|\Phi_{t,\delta}(\sqrt{L}\,) \eta_{\lambda'} (\sqrt{L}\,)\varphi_k(\sqrt{L}\,)f\right\|_{p_0'}^2 {dt\over t}\right)^{1/2}. \end{eqnarray*} We claim that \begin{equation} \label{l-estimate} \|\Phi_{t,\delta}(\sqrt{L}\,)(1+L)^{\gamma/2}\|_{2\to p_0'} \leq C\delta^{{1\over q}-n({1\over p_0}-{1\over 2})-\nu}. \end{equation} Assuming this for the moment, we complete the proof. From \eqref{e5.5} and \eqref{l-estimate} we have \begin{eqnarray*} \|T_\delta^{(2)}f\|_{{p_0'}}&\leq& C\delta^{{1\over q}-n({1\over p_0}-{1\over 2})-\nu} \left(\sum_{0<k\leq 1-\log_2 \sqrt[\kappa]{\delta}}\sum_\lambda\sum_{\lambda'=\lambda-10}^{\lambda+10}\int_{I_\lambda} \left\| \eta_{\lambda'} (\sqrt{L}\,)\varphi_k(\sqrt{L}\,)(1+L)^{-\gamma/2}f\right\|_{2}^2 {dt\over t}\right)^{1/2}.
\end{eqnarray*} Thus, it is easy to see that \begin{eqnarray*} \|T_\delta^{(2)}f\|_{{p_0'}} &\leq& C\delta^{{1\over q}-n({1\over p_0}-{1\over 2})-\nu}\left(\delta\sum_{0<k\leq 1-\log_2 \sqrt[\kappa]{\delta}}\sum_{\lambda'} \left\| \eta_{\lambda'} (\sqrt{L}\,)\varphi_k(\sqrt{L}\,)(1+L)^{-\gamma/2}f\right\|_{2}^2 \right)^{1/2}\\ &\leq& C\delta^{{1\over q}-n({1\over p_0}-{1\over 2})-\nu}\delta^{1/2}\|(1+L)^{-\gamma/2}f\|_{2}\\ &\leq& C\delta^{{1\over 2}+{1\over q}-n({1\over p_0}-{1\over 2})-\nu}\|f\|_{{p_0'}}. \end{eqnarray*} For the last inequality we use \eqref{e1.500}. This gives the desired estimate. It remains to show \eqref{l-estimate}. Let $N=8[t]+1$. Note that $\operatorname{supp} \Phi_{t,\delta}\subset [-N,N]$. From ${\rm (SC^{q,\kappa}_{p_0})}$, \begin{eqnarray*} \|\Phi_{t,\delta}(\sqrt{L}\,)(1+L)^{\gamma/2}\|_{2\to p_0'}&=& \|\Phi_{t,\delta}(\sqrt{L}\,)(1+L)^{\gamma/2}\|_{p_0\to 2}\\ &\leq& CN^{n({1\over p_0}- {1\over 2})}\|\Phi_{t,\delta}(Nu)(1+N^2u^2)^{\gamma/2}\|_{N^\kappa,q}. \end{eqnarray*} We estimate $\|\Phi_{t,\delta}(Nu)(1+N^2u^2)^{\gamma/2}\|_{N^\kappa,q}$. Set $H(\lambda)=\Phi_{t,\delta}(\lambda)(1+\lambda^2)^{\gamma/2}$. Let $\xi\in C_c^\infty$ be an even function such that $\operatorname{supp} \xi\subset [-1,1]$, $\hat{\xi}(0)=1$ and $\hat{\xi}^{(k)}(0)=0$ for all $1\leq k\leq [\beta]+2$. Write $\xi_{N}=N\xi(Nu)$. Then \begin{eqnarray*} \|\Phi_{t,\delta}(Nu)(1+N^2u^2)^{\gamma/2}\|_{N^\kappa,q}\leq \|\big(H- \xi_{N^{\kappa-1}}*H\big)(Nu)\|_{N^\kappa,q} + \|\big(\xi_{N^{\kappa-1}}*H\big)(Nu)\|_{N^\kappa,q}. \end{eqnarray*} To estimate the first term on the right-hand side, we make use of the following fact (for its proof, see \cite[(3.29)]{CowS} or \cite[Proposition 4.6]{DOS}): if $\operatorname{supp} G\subset [-1,1]$, then \begin{eqnarray}\label{eq5.6} \|G-\xi_N\ast G\|_{N,q}\leq CN^{-\beta}\|G\|_{W^{\beta,q}} \end{eqnarray} for all $\beta>1/q$ and any $N\in \NN$.
Note that $\big(H- \xi_{N^{\kappa-1}}*H\big)(Nu)=H(Nu)-\big(\xi_{N^\kappa}*(H(N\cdot))\big)(u)$. Hence, for $\beta>n(1/2-1/p_0')$, we get \begin{align} \label{first} \big\|\big(H- \xi_{N^{\kappa-1}}*H\big)(Nu)\big\|_{N^\kappa,q}\leq CN^{-\beta\kappa}\|H(N\cdot)\|_{W^{\beta,q}}&= CN^{-\beta\kappa}\Big\|\Phi_{t,\delta}(Nu)(1+N^2u^2)^{\gamma/2}\Big\|_{W^{\beta,q}}\\ &\leq CN^{-\beta\kappa+\gamma}\delta^{{1\over q}-\beta}. \nonumber \end{align} For the second one, note that \begin{eqnarray*} \|(\xi_{N^{\kappa-1}}*H)(N\cdot)\|_{N^\kappa,q} &=&\left(\frac{1}{N^\kappa}\sum_{i=1-N^\kappa}^{N^\kappa} \sup_{\lambda\in [\frac{i-1}{N^\kappa},\frac{i}{N^\kappa})}|(\xi*H(\cdot/N^{\kappa-1}))(N^\kappa\lambda)|^{q}\right)^{1/q} \\ &\leq&\left(\frac{1}{N^\kappa}\sum_{i=1-N^\kappa}^{N^\kappa} \sup_{\lambda\in [i-1,i)}|(\xi*H(\cdot/N^{\kappa-1}))(\lambda)|^{q}\right)^{1/q}. \end{eqnarray*} Using the estimate $ |\xi*h(\lambda)|^q\leq C\|\xi\|^q_{q'}\int_{\lambda-1}^{\lambda+1}|h(u)|^qdu, $ we obtain \begin{eqnarray*} \|(\xi_{N^{\kappa-1}}*H)(N\cdot)\|_{N^\kappa,q} &\leq&C\left(\frac{1}{N^\kappa}\sum_{i=1-N^\kappa}^{N^\kappa} \sup_{\lambda\in [i-1,i)}\int_{\lambda-1}^{\lambda+1}|H(u/N^{\kappa-1})|^qdu\right)^{1/q} \\ &\leq&C\left(\frac{1}{N^\kappa}\sum_{i=1-N^\kappa}^{N^\kappa} \int_{i-2}^{i+1}|H(u/N^{\kappa-1})|^qdu\right)^{1/q}\\ &\leq&CN^{-{\kappa\over q}}\|H(\cdot/N^{\kappa-1})\|_q\\ &\leq& CN^{-{\kappa\over q}}(t^\kappa\delta)^{1\over q}t^{\gamma}\leq C\delta^{1\over q}t^{\gamma}. \end{eqnarray*} Combining \eqref{first} and the above, and noting that $1\leq t\leq 1/\sqrt[\kappa]\delta$, we have \begin{eqnarray*} \|\Phi_{t,\delta}(\sqrt{L}\,)(1+L)^{\gamma/2}\|_{2\to p_0'}&\leq& CN^{n({1\over p_0} -{1\over 2})}(N^{-\beta\kappa+\gamma}\delta^{{1\over q}-\beta}+t^{\gamma}\delta^{1\over q})\\ &\leq& C\delta^{{1\over q}-n({1\over p_0}-{1\over 2})-\nu}. \end{eqnarray*} Here, we use the relation $\gamma=n(\kappa-1)(1/p_0-1/2)+\kappa\nu$.
This gives \eqref{l-estimate}, and completes the proof of \eqref{t2}. {}$\Box$ \noindent{\bf 5.2. Proof of Lemma~\ref{le5.3}.}\ As in Proposition~\ref{prop4.1}, the proof of Lemma~\ref{le5.3} reduces to showing the following lemma. \begin{lemma} \label{le5.5} For any $w\geq 0$ and $0<\delta\le 1$, \begin{eqnarray*} \int_X |T_\delta^{(3)} f(x)|^2 w(x) d\mu(x) \leq C\delta^{1+{2\over q}+n({ 2\over p'_0} -1)} \int_X |f(x)|^2 \mathfrak M_{r_0}w(x)d\mu(x), \end{eqnarray*} where $1/r_0 +2/p'_0=1.$ \end{lemma} \begin{proof} We prove Lemma~\ref{le5.5} by modifying the proof of Lemma~\ref{le4.1}. By \eqref{e44}, we have that, for $f\in L^2(X)\cap L^p(X)$, \begin{eqnarray} \label{e5.8} | T_{\delta}^{(3)}f(x)|^2 &\leq & C\sum_{k>1-\log_2 \sqrt[\kappa]{\delta}} \int_{2^{k-1}}^{2^{k+2}} \Big|\phi\left(\delta^{-1}\left(1-{ {L}\over t^2} \right)\right)\varphi_k(\sqrt{L}\,)f(x)\Big|^2 {dt\over t}. \end{eqnarray} For given $0<\delta\leq 1$, let $\delta\in [2^{-j_0-1}, 2^{-j_0})$ for some $j_0\in{\mathbb Z}$. As in the proof of Lemma~\ref{le4.1}, we fix a cutoff function $\eta\in C_0^{\infty}$, identically one on $\{|s| \leq 1 \}$ and supported on $\{|s| \leq 2 \}$. For $j\ge j_0$ we define $\zeta_j$ by \eqref{zeta} so that \eqref{id} holds. Then let $ \phi_{\delta,j}$ be defined by \eqref{e4.12} so that \eqref{e4.14} holds. From \eqref{e5.8} and \eqref{e4.14}, it follows that for every function $w\geq 0,$ \begin{eqnarray}\label{e5.11}\hspace{0.8cm} \int_X | T_{\delta}^{(3)}f(x)|^2 w(x) d\mu(x) &\leq& C\sum_{k>1-\log_2 \sqrt[\kappa]{\delta}}\left[ \sum_{j\geq j_0} \left(\int_{2^{k-1}}^{2^{k+2}} \left\langle \Big|\phi_{\delta,j}\left({\sqrt{L}\over t} \right) \varphi_k(\sqrt{L}\,)f\Big|^2, \ w\right\rangle {dt\over t} \right)^{1/2}\right]^{2}. \end{eqnarray} For $\ell\ge 0$ let ${ \psi}_{\ell, \delta}$ be defined by \eqref{psi}.
Then $1=\sum_{\ell=0}^{\infty} { \psi}_{\ell, \delta}(s)$, and hence $ \phi_{\delta,j}(s)= \sum_{\ell=0}^{\infty} \big ( { \psi}_{\ell, \delta}\phi_{\delta,j}\big)(s) $ for all $s>0. $ Arguing as in \eqref{e4.18}, we get \begin{eqnarray}\label{e5.12}\hspace{0.5cm} &&\hspace{-1.2cm} \left(\int_{2^{k-1}}^{2^{k+2}} \left\langle \Big| \phi_{\delta,j}\left({\sqrt{L}\over t} \right) \varphi_k(\sqrt{L}\,) f\Big|^2, \ w\right\rangle {dt\over t} \right)^{1/2}\nonumber\\ &\leq & \sum_{\ell=0}^{ [-\log_2{\delta}]} \left( \sum_{m } \|\mathlarger{\chi}_{B_m}w\|_{{r_0}} \int_{2^{k-1}}^{2^{k+2}} \left\|\mathlarger{\chi}_{B_m} \left ( { \psi}_{\ell, \delta}\phi_{\delta,j}\right)\left({\sqrt{L}\over t}\right) \mathlarger{\chi}_{\widetilde{B}_m} \varphi_k(\sqrt{L}\,) f \right\|^2_{{p'_0}} {dt\over t}\right)^{1/2} \nonumber\\ &+& \sum_{\ell= [-\log_2{\delta}]+1}^{\infty} \left( \sum_{m } \|\mathlarger{\chi}_{B_m} w\|_{{r_0}} \int_{2^{k-1}}^{2^{k+2}} \left\|\mathlarger{\chi}_{B_m} \left ( { \psi}_{\ell, \delta}\phi_{\delta,j}\right)\left({\sqrt{L}\over t}\right) \mathlarger{\chi}_{\widetilde{B}_m} \varphi_k(\sqrt{L}\,) f \right\|^2_{{p'_0}} {dt\over t}\right)^{1/2}\nonumber\\ &=& I(j,k) + I\!I(j,k). \end{eqnarray} As in Section \ref{sec4}, the first term $I(j,k)$ is the major one. We handle $I\!I(j,k)$ first. {\bf Estimates for $I\!I(j,k)$}. Note that $\|F\|_{N,2} \le \|F\|_{N,\infty}=\|F\|_{\infty}$, so for a fixed $b>0$ the condition ${\rm (SC^{q, \kappa}_{p_0})}$ implies $ {\rm (ST^{\infty}_{p_0, 2})}$ for all functions $F$ with $\operatorname{supp} F \subset (b,R)$. Hence we can repeat the argument used in the proof of \eqref{eee} to show that, for any $N<\infty$, $$ I\!I(j,k) \leq C \delta 2^{j[n ({1\over p_0}-{1\over 2})-N+1]} \left(\int_X |\varphi_k (\sqrt{L}\,)f(x)|^2 \mathfrak M_{r_0} w(x)d\mu(x)\right)^{1/2}. $$ {\bf Estimates for $I(j,k)$}.
As before (see Section \ref{sec4}), for $k\in{\mathbb Z}$, $t\in [2^{k-1}, 2^{k+2}]$ and $\lambda=0, 1, \cdots, \lambda_0=[{8/\delta}] +1$, we consider the interval $I_\lambda$ and the function $\eta_\lambda$ given by \eqref{ijk1} and \eqref{ijk2}, respectively. For $t\in I_\lambda$, if $ {\psi}_{\ell, \delta} \left({s/t}\right)\eta_{\lambda'} (s)\not=0$, then $\lambda-2^{\ell+6}\leq \lambda'\leq \lambda+2^{\ell+6}$. Thus, for $t\in I_\lambda$, we have \eqref{eqn4}. Using this we get \begin{eqnarray*} I(j,k)\leq \sum_{\ell=0}^{ [-\log_2{\delta}]} \left[ \sum_{m } \|\mathlarger{\chi}_{B_m} w\|_{{r_0}} \sum_{\lambda} \int_{I_\lambda} \left(\sum_{\lambda'=\lambda-2^{\ell+6}}^{\lambda+2^{\ell+6}} \left\|\mathlarger{\chi}_{B_m} \left ({ \psi}_{\ell, \delta}\phi_{\delta,j}\right)\left({\sqrt{L}\over t}\right) \eta_{\lambda'} (\sqrt{L}\,)\big[\mathlarger{\chi}_{\widetilde{B}_m} \varphi_k(\sqrt{L}\,) f\big] \right\|_{{p'_0}}\right)^2 {dt\over t}\right]^{1/ 2}. \end{eqnarray*} Note that $ \operatorname{supp} { \psi}_{\ell, \delta} \subseteq (1-2^{\ell+2}\delta, 1+2^{\ell+2}\delta).$ Moreover, if $\ell\geq 1$, then ${ \psi}_{\ell, \delta}(s)=0$ for $s\in (1-2^{\ell}\delta, 1+2^{\ell}\delta)$, and so $\operatorname{supp} \left (\psi_{\ell, \delta}\phi_{\delta,j}\right)\left({\cdot/t}\right)\subset [t(1-2^{\ell+2}\delta), t(1+2^{\ell+2}\delta)]$. Let $R=[t(1+2^{\ell+2}\delta)]+1$.
By the condition ${\rm (SC^{q,\kappa}_{p_0})}$, we have that, for $0\leq \ell\leq [-\log_2{\delta}]$, \begin{eqnarray} \label{e5.13} \hspace{1cm} \left\|\mathlarger{\chi}_{B_m} \left (\psi_{\ell, \delta}\phi_{\delta,j}\right)\left({\sqrt{L}\over t}\right)\right\|_{2\to {p'_0} }&=& \left\| \left ({ \psi}_{\ell, \delta}\phi_{\delta,j}\right) \left({\sqrt{L}\over t}\right) \mathlarger{\chi}_{B_m} \right\|_{p_0\to 2}\nonumber\\ &\leq& C \left(2^j(1+2^{\ell+2}\delta)\right)^{n ({1\over p_0}-{1\over 2})}\mu(B_m)^{{1\over 2}-{1\over p_0}} \|({ \psi}_{\ell, \delta}\phi_{\delta,j})\big(R \cdot/t\big)\|_{R^\kappa,q}\nonumber\\ &\leq& C 2^{jn ({1\over p_0}-{1\over 2})}\mu(B_m)^{{1\over 2}-{1\over p_0}} \|({ \psi}_{\ell, \delta}\phi_{\delta,j})\big(R \cdot/t\big)\|_{R^\kappa,q}. \end{eqnarray} We note that $$ \operatorname{supp} \,({ \psi}_{\ell, \delta}\phi_{\delta,j})\big(R \cdot/t\big)\subset \left[\frac{t(1-2^{\ell+2}\delta)}{R},\, \frac{t(1+2^{\ell+2}\delta)}{R}\right]. $$ This, in combination with the fact that $R^\kappa\delta\geq 1$, gives \begin{eqnarray} \label{e5.14} \|({ \psi}_{\ell, \delta}\phi_{\delta,j})\big((1+2^{\ell+2}\delta) \cdot\big)\|_{R^\kappa,q} \leq \|{ \psi}_{\ell, \delta}\phi_{\delta,j}\|_\infty \big\|\chi_{[\frac{t(1-2^{\ell+2}\delta)}{R},\, \frac{t(1+2^{\ell+2}\delta)}{R}]}\big\|_{R^\kappa,q} \le C \|{ \psi}_{\ell, \delta}\phi_{\delta,j}\|_\infty \left(\frac{2^{\ell+3}t\delta}{R}\right)^{1/q}.\nonumber \end{eqnarray} From this and \eqref{psidel} with $q=\infty$ we see that \begin{eqnarray*} \|({ \psi}_{\ell, \delta}\phi_{\delta,j})\big((1+2^{\ell+2}\delta) \cdot\big)\|_{R^\kappa,q} &\leq &C_N 2^{(j_0-j)N} 2^{- \ell N } (2^\ell\delta)^{1\over q}.
\end{eqnarray*} Thus \eqref{e5.13} and the above inequality yield \begin{eqnarray} \label{e5.15} &&\hspace{-1.8cm} \left\|\mathlarger{\chi}_{B_m} \left ({ \psi}_{\ell, \delta}\phi_{\delta,j}\right)\left({\sqrt{L}\over t}\right) \eta_{\lambda'} (\sqrt{L}\,)\big[\mathlarger{\chi}_{\widetilde{B}_m} \varphi_k(\sqrt{L}\,) f\big] \right\|_{{p'_0}}\\ &\leq& C\delta^{1\over q} 2^{(j_0-j)N} 2^{jn ({1\over p_0}-{1\over 2})} 2^{- \ell( N-{1\over q})} \mu(B_m)^{{1\over 2}-{1\over p_0}} \big\| \eta_{\lambda'} (\sqrt{L}\,)\big[\mathlarger{\chi}_{\widetilde{B}_m} \varphi_k(\sqrt{L}\,) f\big] \big\|_{2}. \nonumber \end{eqnarray} Once \eqref{e5.15} is obtained, we may repeat the lines of argument in the proof of Lemma~\ref{le4.1} to get \begin{eqnarray*} I(j,k) &\leq & C\delta^{{1\over q}+{1\over 2}} 2^{(j_0-j)N} 2^{jn ({1\over p_0}-{1\over 2})} \left(\int_X |\varphi_k (\sqrt{L}\,)f(x)|^2 \mathfrak M_{r_0} w(x)d\mu(x)\right)^{1/2}. \end{eqnarray*} Finally, combining the estimates for $I(j,k)$ and $ I\!I(j,k)$, together with \eqref{e5.11} and \eqref{e5.12}, we get \begin{eqnarray*} \int_X| T_{\delta}^{(3)}f(x)|^2 w(x)d\mu(x) &\leq& C\delta^{1+{2\over q}+n(1-{2\over p_0})} \int_{X} | f(x)|^2 \mathfrak M_{r_0} w(x)d\mu(x) \end{eqnarray*} whenever $N> n({1/p_0}-{1/2 }) +1$. This completes the proof of Lemma~\ref{le5.5}. \end{proof} \section{Applications}\label{sec6} \setcounter{equation}{0} As applications of our theorems we discuss several examples of important elliptic operators. Our results, Theorems~\ref{th1.1} and~\ref{th1.2}, have applications to all the examples discussed in \cite{DOS} and \cite{COSY}. These include elliptic operators on compact manifolds, the harmonic oscillator, radial Schr\"odinger operators with inverse square potentials and Schr\"odinger operators on asymptotically conic manifolds. \noindent{\bf 6.1.
Laplace-Beltrami operator on compact manifolds.}\ Let $\Delta_g$ be the Laplace-Beltrami operator on a compact smooth Riemannian manifold $(M,g)$ of dimension $n$. It was shown by Sogge that the condition ${\rm(S_{p})}$ holds with $L=-\Delta_g$ in the standard range of the Stein-Tomas restriction theorem, that is to say, for $1\leq p\leq 2(n+1)/(n+3)$; see \cite{Sog1, Sog3}. Hence we can apply Theorem~\ref{th1.2} and obtain the following. \begin{cor}\label{prop6.1} Suppose that $\Delta_g$ is the Laplace-Beltrami operator on a compact smooth Riemannian manifold $(M,g)$ of dimension $n$. Then the operator $S_{\ast}^{\alpha}(-\Delta_g)$ is bounded on $L^p(M)$ whenever \begin{eqnarray}\label{e6.1} p \ge {2(n+1)\over n-1}, \ \ \ {\rm and}\ \ \alpha> \max\left\{ n\Big|{1\over p}-{\frac 1 2}\Big|- {\frac 1 2}, \, 0 \right\}. \end{eqnarray} \end{cor} The corollary can be extended to the Laplace-Beltrami operator on a certain class of compact manifolds with boundary if one combines Theorem \ref{th1.2} and the results of Sogge \cite{Sog4}. As far as we are aware, Corollary \ref{prop6.1}, especially in view of its generality, has not appeared in the literature before. However, we should mention that in \cite{MSS} Mockenhaupt, Seeger and Sogge showed that the sharp maximal Bochner-Riesz bound for $p\ge 2$ holds when $(M, g)$ is a compact Riemannian manifold of dimension $2$ satisfying a periodicity assumption on the geodesic flow. \noindent{\bf 6.2.
Schr\"odinger operator on asymptotically conic manifolds.}\ Scattering manifolds or asymptotically conic manifolds are defined as interiors of a compact manifold with boundary $M$, where the metric $g$ is smooth on the interior $M^\circ$ and has the form \[g=\frac{dx^2}{x^4}+\frac{h(x)}{x^2} \] in a collar neighbourhood near $\partial M$; here $x$ is a smooth boundary defining function for $M$ and $h(x)$ a smooth one-parameter family of metrics on $\partial M$. The function $r:=1/x$ near $x=0$ can be thought of as a radial coordinate near infinity, and the metric there is asymptotic to the exact metric cone $((0,\infty)_r \times \partial M, dr^2+r^2h(0))$. The restriction estimate \eqref{e1.4} and Bochner-Riesz summability results for a class of Laplace type operators on asymptotically conic manifolds were obtained in \cite{GHS}. Our approach allows us to complement these results with the following statement concerning the maximal Bochner-Riesz operator. \begin{cor}\label{prop8.1} Let $(M,g)$ be an asymptotically conic nontrapping manifold of dimension $n \geq 3$, and let $x$ be a smooth boundary defining function of $\partial M$. Let $L:= - \Delta_g+V$ be a Schr\"odinger operator with $V\in x^3C^\infty(M)$, and assume that $L$ has no $L^2$-eigenvalues and that $0$ is not a resonance. Then the operator $S_{\ast}^{\alpha}(L)$ is bounded on $L^p(M)$ whenever \begin{eqnarray*} p \ge {2(n+1)\over n-1}, \ \ \ {\rm and}\ \ \alpha> \max\left\{ n\left|{1\over p}-{\frac 1 2}\right|- {\frac 1 2}, \, 0 \right\}. \end{eqnarray*} \end{cor} \begin{proof} Corollary~\ref{prop8.1} follows from the restriction estimates \eqref{e1.4} established in \cite[Theorem 1.2]{GHS} and Theorem~A. \end{proof} Corollary~\ref{prop8.1} includes a class of operators which are 0-th order perturbations of the Laplacian on nontrapping asymptotically conic manifolds. In particular, our results cover the following settings: Schr\"odinger operators, i.e.
$-\Delta + V$ on $\RR^n$, where $V$ is smooth and decays sufficiently fast at infinity; the Laplacian with respect to metric perturbations of the flat metric on $\RR^n$, again decaying sufficiently fast at infinity; and the Laplacian on asymptotically conic manifolds, see \cite{GHS}. \noindent{\bf 6.3. The harmonic oscillator.}\ In this section we focus on Schr\"odinger operators such as the harmonic oscillator $-\Delta + |x|^2$ on $L^2(\RN)$ for $n\ge 2$. Bochner-Riesz summability for the harmonic oscillator was studied, and sharp results were obtained, by Karadzhov \cite{Kar} and Thangavelu \cite{Th3, Th4}. Here we establish the corresponding result for the maximal Bochner-Riesz operator. In fact, we consider the more general class of Schr\"odinger operators $ L= - \Delta + V(x)$ with a positive potential $V$ satisfying the following condition: \begin{equation}\label{eq111.01} V(x) \sim |x|^2, \quad |\nabla V(x)| \sim |x|, \quad |\partial_x^2 V(x)| \lesssim 1. \end{equation} Clearly this class includes the harmonic oscillator. A restriction type result for this class of operators was established by Koch and Tataru in \cite[Theorem 4]{KoT}, which states that, for $\lambda \ge 0$ and $1\leq p\leq 2n/(n+2)$, \begin{eqnarray*} \|E_L[\lambda^2,\lambda^2+1)\|_{p\to 2} \leq C(1+\lambda)^{n({1\over p}-{1\over 2})-1}. \end{eqnarray*} It is not difficult to show that the above estimate is equivalent to the condition ${\rm (SC^{2,\kappa}_{p})}$ for $\kappa=2$ and $1\leq p\leq 2n/(n+2)$, see \cite{COSY}. As a consequence of Theorem~\ref{th1.2} we establish the boundedness of the associated maximal Bochner-Riesz operator. \begin{cor}\label{cor6.3} Let $L= - \Delta + V(x)$ with a positive potential $V(x)$ satisfying \eqref{eq111.01}. Then the operator $S_{\ast}^{\alpha}(L)$ is bounded on $L^p(\RN)$ whenever \begin{eqnarray}\label{e6.3} p \ge {2n\over n-2}\ \ \ {\rm and }\ \ \ \alpha> \max\left\{n\Big|{1\over p}-{\frac 1 2}\Big|- {\frac 1 2}, \, 0 \right\}.
\end{eqnarray} \end{cor} \begin{proof} As we just mentioned, the condition ${\rm (SC^{2,\kappa}_{p'})}$ for $\kappa=2$ and $1\leq p'\leq \frac{2n}{n+2}$ follows from \cite[Theorem 4]{KoT} and \cite[Theorem I\!I\!I.9]{COSY}. Hence by Theorem~C it is enough to show that if $V(x)\sim |x|^2$ is a positive potential and $L= - \Delta + V$, then \begin{eqnarray}\label{e77.111} \|(1+L)^{-\gamma/2}\|_{2 \to p'}\leq C, \,\, \gamma=n(1/p-1/2)+2\nu \end{eqnarray} for $1\leq p'\leq \frac{2n}{n+2}$ and all $\nu>0$. The proof of \eqref{e77.111} for $p=1$ is given in \cite[Lemma 7.9]{DOS}. We give a brief proof for completeness. Fix a positive number $\nu$. To prove \eqref{e77.111}, we put $M=M_{\sqrt{1+V}}$. Then we note that $$ \|(1+L)^{1/2}f\|_2^2=\langle(1+L)f,f\rangle \geq \langle M^2f,f\rangle=\|Mf\|_2^2. $$ By the L\"owner-Heinz inequality, for any quadratic forms $B_1\geq B_2\geq 0$ we have $B_1^\alpha\geq B_2^\alpha$ for $0\leq \alpha\leq 1$. Hence, $$\langle(1+L)^\alpha f,f\rangle \geq \langle M^{2\alpha}f,f\rangle.$$ Thus, for $\alpha\in [0,1]$, \begin{eqnarray}\label{e6.10a} \|M^\alpha(1+L)^{-\alpha/2}\|_{2\to 2}\leq C . \end{eqnarray} For $\alpha=1$ the operator $M^\alpha(1+L)^{-\alpha/2}$ is of first-order Riesz transform type, and a standard argument yields, for any $q\in (1,2]$, \begin{eqnarray}\label{e6.10} \|M(1+L)^{-1/2}\|_{q\to q}\leq C, \end{eqnarray} see \cite[Theorem 11]{S2}. Then by H\"older's inequality, for any $q_1\geq q_2\geq 1$ with $s=(1/q_2-1/q_1)^{-1}$, \begin{eqnarray}\label{e6.11} \|M^{-\alpha}\|_{q_1\to q_2}\leq C \left(\int_{{\mathbb R}^n} (1+V(x))^{-s\alpha/2}dx\right)^{1/(s\alpha)}. \end{eqnarray} Recall that $\gamma=n(1/p-1/2)+2\nu$. Write \begin{eqnarray}\label{ppp} (1+L)^{-\gamma/2}=\big(M^{-1}M(1+L)^{-1/2}\big)^{[\gamma]} M^{[\gamma]-\gamma} M^{\gamma-[\gamma]}(1+L)^{([\gamma]-\gamma)/2}.
\end{eqnarray} Since $V(x)\sim |x|^2$, we choose $s=(n+\varepsilon)/\alpha$ in \eqref{e6.11} with $\varepsilon= 2\nu/({1/p'}-{1/2})>0. $ Define $p_0 $ by ${1/p_0}= {(\gamma-[\gamma])/(n+\varepsilon)} +{1/2} $, and for each $0\leq i\leq [\gamma]-1$ define $p_{i+1}$ by putting $1/p_{i+1}- 1/p_i=1/(n+\varepsilon)$, so that $p_{[\gamma]}=p'$. Now multiple composition of the operators from \eqref{e6.10a}, \eqref{e6.10} and \eqref{e6.11}, in combination with \eqref{ppp}, yields \begin{eqnarray*} \|(1+L)^{-\gamma/2}\|_{2 \to p'} \leq \|M^{\gamma-[\gamma]}(1+L)^{([\gamma]-\gamma)/2}\|_{2\to 2} \|M^{[\gamma]-\gamma}\|_{2\to p_0} \prod_{i=0}^{[\gamma]-1}\|M^{-1}M(1+L)^{-1/2}\|_{p_i\to p_{i+1}} \leq C. \end{eqnarray*} This finishes the proof of \eqref{e77.111}, and completes the proof of Corollary~\ref{cor6.3}. \end{proof} \noindent{\bf 6.4. Operators $\Delta_n+ {c\over r^{2}}$ acting on $L^2((0,\infty), r^{n-1}dr)$.}\ In this section we consider a class of Schr\"odinger operators on $L^2((0,\infty), r^{n-1} dr)$. These operators generate semigroups, but the classical Gaussian upper bound for the heat kernel fails. Fix $n > 2$ and $ c> -{(n-2)^2/4} $ and consider the space $L^2((0,\infty), r^{n-1}dr)$. For $f,g \in C_c^\infty(0,\infty)$ we define the quadratic form \begin{equation} Q_{n,c}^{(0,\infty)} (f,g)=\int^{\infty}_{0}f'(r)g'(r)r^{n-1} dr +\int_0^\infty \frac{c}{r^2} f(r)g(r) r^{n-1}dr. \label{eq9.1} \end{equation} Using the Friedrichs extension one can define the operator $L_{n,c}= \Delta_n+c/{r^2}$ as the unique self-adjoint operator corresponding to $ Q_{n,c}^{(0,\infty)} $, acting on $L^2((0,\infty), r^{n-1}dr)$. In the sequel we write $L$ instead of $L_{n,c}$; the operator is formally given by $$ Lf=(\Delta_n+\frac{c}{r^2} ) f=-\frac{d^2}{dr^2}f-\frac{n-1}{r}\frac{d}{dr}f +\frac{c}{r^2}f.
$$ The classical Hardy inequality \begin{equation}\label{e6.99} - \Delta\geq {(n-2)^2\over 4}|x|^{-2} \end{equation} shows that for all $c > -{(n-2)^2/4}$, the self-adjoint operator $L$ is non-negative. Such operators can be seen as radial Schr\"odinger operators with inverse-square potentials. It follows from Theorem 3.3 of \cite{CouS} that $L$ satisfies the property (FS). Now for $ -{(n-2)^2/4}<c<0,$ we set $p_c^{\ast}=n/\sigma$, where $\sigma=(n-2)/2-\sqrt{(n-2)^2/4+c}$. Note that $2<{2n\over n-2}<p_c^{\ast}$. Liskevich, Sobol and Vogt \cite{LSV} proved that, for $t>0$ and $p\in ((p_c^{\ast}){'}, p_c^{\ast})$, $$ \|e^{-tL}\|_{p\to p}\leq C. $$ They also proved that the range $((p_c^{\ast}){'}, p_c^{\ast})$ of $p$ is optimal in the sense that, if $p\not\in ((p_c^{\ast}){'}, p_c^{\ast})$, then the semigroup does not act on $L^p((0,\infty), r^{n-1}dr)$ (see also \cite{CouS}). \begin{cor}\label{prop8.5} Suppose that $n > 2$ and $ -{(n-2)^2/4}<c $. Set \begin{eqnarray*} p_c^{\ast}= \left\{ \begin{array}{ll} {n\over \sigma}, \ \ \ c<0;\\[6pt] \infty, \ \ \ \ c\geq 0, \end{array} \right. \end{eqnarray*} where $\sigma=(n-2)/2-\sqrt{(n-2)^2/4+c}$. Then the operator $S_{\ast}^{\alpha}(L)$ is bounded on $L^p((0,\infty), r^{n-1}dr)$ whenever \begin{eqnarray}\label{e6.9} {2n\over n-1} <p< p_c^{\ast} \ \ \ {\rm and }\ \ \ \alpha> \max\left\{ n\left|{1\over p}-{\frac 1 2}\right|- {\frac 1 2}, \, 0 \right\}. \end{eqnarray} \end{cor} \begin{proof} It was shown in \cite[Proposition I\!I\!I.10]{COSY} that the condition ${\rm (ST^{2}_{p, 2})}$ for the operators $\Delta_n+c/{r^2}$ holds for $p\in \big((p_c^{\ast}){'}, {2n\over n+1}\big)$ when $c<0$, and for $p\in \big[1, {2n\over n+1}\big)$ when $c\ge 0$. Now the corollary follows from Theorem \ref{th1.1}. \end{proof} \begin{rem} In the proof of Corollary \ref{prop8.5} one has to use the condition ${\rm (ST^{2}_{p, 2})}$ because the condition \eqref{rp} is no longer valid in this setting.
\end{rem} Finally we mention that our approach can also be applied to a class of sub-Laplacians on Heisenberg $H$-type groups considered in \cite{LW}, to the class of inverse square potentials considered in \cite{BPST}, and to a class of Schr\"odinger type operators investigated in \cite{RS}. \noindent {\bf Acknowledgements:} P. Chen was supported by NNSF of China 11501583, Guangdong Natural Science Foundation 2016A030313351 and the Fundamental Research Funds for the Central Universities 161gpy45. S. Lee was partially supported by NRF (Republic of Korea) grant No. 2015R1A2A2A05000956. A. Sikora was partly supported by Australian Research Council Discovery Grant DP160100941. L.X. Yan was supported by the NNSF of China, Grants No.~11471338 and 11521101, and Guangdong Special Support Program. \end{document}
\begin{document} \date{\today} \author{Godelle Eddy} \title{A note on Renner monoids} \maketitle \begin{abstract} We extend the result obtained in~\cite{God} to every Renner monoid: we provide Renner monoids with a monoid presentation and we introduce a length function which extends the Coxeter length function and which behaves nicely. \end{abstract} \section*{Introduction} The notion of a \emph{Weyl group} is crucial in Linear Algebraic Group Theory~\cite{Hum}. The seminal example occurs when one considers the \emph{algebraic group}~$GL_n(K)$. In that case, the associated Weyl group is isomorphic to the group of monomial matrices, that is, to the permutation group~$S_n$. Weyl groups are special examples of \emph{finite Coxeter groups}. Hence, they possess a group presentation of a particular type, and an associated length function. It turns out that this presentation and this length function are deeply related to the geometry of the associated algebraic group. Linear Algebraic Monoid Theory, mainly developed by Putcha, Renner and Solomon, has deep connections with Algebraic Group Theory. In particular, the \emph{Renner monoid}~\cite{Ren} plays the role that the Weyl group does in Linear Algebraic Group Theory. As far as I know, in the case of Renner monoids, there is no known theory that plays the role of Coxeter Group Theory. It is therefore natural to look for such a theory, and, in particular, to address the question of monoid presentations for Renner monoids. In~\cite{God}, we considered the particular case of the \emph{rook monoid} defined by Solomon~\cite{Sol4}. We obtained a presentation of this monoid and introduced a length function that is nicely related to the Hecke algebra of the rook monoid. Our objective here is to consider the general case. We obtain a presentation of every Renner monoid and introduce a length function. In the case of the rook monoid, we recover the results obtained in~\cite{God}.
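As a small concrete check on the rook monoid mentioned above (this sketch is ours, not part of the original development): $R_n$ consists of the $n\times n$ $\{0,1\}$-matrices with at most one nonzero entry in each row and each column, so counting such matrices by the number $k$ of nonzero entries gives $|R_n|=\sum_{k=0}^n \binom{n}{k}^2 k!$, and for small $n$ one can verify this by brute-force enumeration.

```python
from itertools import product
from math import comb, factorial

def rook_count(n):
    """Brute force: count n x n 0-1 matrices with at most one 1
    in each row and in each column (the rook monoid R_n)."""
    count = 0
    for entries in product((0, 1), repeat=n * n):
        rows = [entries[i * n:(i + 1) * n] for i in range(n)]
        row_ok = all(sum(r) <= 1 for r in rows)
        col_ok = all(sum(rows[i][j] for i in range(n)) <= 1 for j in range(n))
        if row_ok and col_ok:
            count += 1
    return count

def rook_formula(n):
    """Closed form: choose k rows, k columns, and a bijection between them."""
    return sum(comb(n, k) ** 2 * factorial(k) for k in range(n + 1))

print([rook_formula(n) for n in (1, 2, 3)])  # -> [2, 7, 34]
assert all(rook_count(n) == rook_formula(n) for n in (1, 2, 3))
```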
Our length function is not the classical length function on Renner monoids~\cite{Ren}. We remark that the former shares with the latter several nice geometrical and combinatorial properties. We believe that this new length function is nicely related to the Hecke algebra, as in the special case of the rook monoid. Let us postpone to the next sections some definitions and notations, and state here our main results. Consider the Renner monoid~$R(M)$ of a linear algebraic monoid~$M$. Denote by $W$ the unit group of $R(M)$ and consider its associated Coxeter system~$(W,S)$. Denote by $\Lambda$ a \emph{cross section lattice} of the monoid~$E(R(M))$ of idempotent elements of $R(M)$, and by $\Lambdao$ the set of elements of~$\Lambda$ that are distinct from the unity. Denote finally by $\lambda$ the associated \emph{type map} of $R(M)$; roughly speaking, this is a map that describes the action of $W$ on $E(R(M))$. \begin{The} The Renner monoid~$R(M)$ admits the monoid presentation whose generating set is~$S\cup \Lambdao$ and whose defining relations are:\\ \begin{tabular}{lll} (COX1)&$s^2 = 1$,&$s\in S$;\\ (COX2)&$|s,t\rangle^m = |t,s\rangle^m$,&$(\{s,t\},m)\in \mathcal{E}(\Gamma)$;\\ (TYM1)&$se = es$,& $e\in \Lambdao$, $s\in \lambda^\star(e)$;\\ (TYM2)&$se = es = e$,& $e\in \Lambdao$, $s\in \lambda_\star(e)$;\\ (TYM)&$e\underline{w}f = e\wedge_wf$,& $e,f\in \Lambdao$, $w\in G^\uparrow(e)\cap D^\uparrow(f)$. \end{tabular} \label{thepreserennmonidintro} \end{The} We define the length~$\ell$ on~$R(M)$ in the following way: if~$s$ lies in~$S$, we set~$\ell(s) = 1$; if $e$ lies in $\Lambda$ we set~$\ell(e) = 0$. Then, we extend $\ell$ by additivity to the free monoid of words on~$S\cup \Lambdao$. If $w$ lies in~$R(M)$, its length~$\ell(w)$ is the minimal length of its word representatives on~$S\cup \Lambdao$. In Section~2, we investigate the properties of this length function. 
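To make the additive convention above concrete, here is a minimal sketch of ours (the generator names `s1`, `s2`, `e` are hypothetical): the length of a word over $S\cup \Lambdao$ simply counts its letters from $S$, and the length of an element of $R(M)$ is the minimum of this quantity over all of its word representatives (the minimisation over representatives, which requires the monoid relations, is not implemented here).

```python
def word_length(word, S):
    """Additive length of a word over S U Lambda^o: letters from S count 1,
    idempotent letters count 0."""
    return sum(1 for letter in word if letter in S)

S = {"s1", "s2"}   # Coxeter generators (hypothetical names)
Lambda0 = {"e"}    # an idempotent of the cross-section lattice (hypothetical)

print(word_length(["s1", "e", "s2", "s1"], S))  # -> 3
print(word_length(["e"], S))                    # -> 0
```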
In particular we prove that it is nicely related to the classical \emph{normal form} defined on~$R(M)$, and we also prove the following. \begin{Prop}\label{propintro} Let $T$ be a maximal torus of the unit group of~$M$. Fix a Borel subgroup~$B$ that contains $T$. Let~$w$ lie in~$R(M)$ and~$s$ lie in~$S$. Then, $$B s B w B = \left\{\begin{array}{ll}BwB&\textrm{if } \ell(sw) = \ell(w);\\BswB&\textrm{if } \ell(sw) = \ell(w)+1;\\BswB\cup BwB&\textrm{if } \ell(sw) = \ell(w)-1.\end{array}\right.$$\end{Prop} This article is organized as follows. In Section~1, we first recall the background on Algebraic Monoid Theory and on Coxeter Theory. Then, we prove Theorem~\ref{thepreserennmonidintro}. In Section~2, we consider several examples of Renner monoids and deduce explicit presentations from Theorem~\ref{thepreserennmonidintro}. In Section~3, we focus on the length function. \section{Presentation for Renner monoids} Our objective in the present section is to associate a monoid presentation to every Renner monoid. The statement of our result and its proof require some properties of algebraic monoid theory and of Coxeter group theory. In Section~\ref{sousectionrappelamt}, we introduce Renner monoids and state the results we need about algebraic monoids. In Section~\ref{sousectionrappelcgt} we recall the definition of Coxeter groups and some of their well-known properties. Using these two preliminary sections, we prove~Theorem~\ref{thepreserennmonidintro} in Section~\ref{sousectionpresrenmnoid}. This provides a monoid presentation for every Renner monoid. We fix an algebraically closed field~$\mathbb{K}$. We denote by~$M_n$ the set of all~$n\times n$ matrices over~$\mathbb{K}$, and by~$GL_n$ the set of all invertible matrices in~$M_n$. We refer to~\cite{Put,Ren,Sol1} for the general theory and proofs involving linear algebraic monoids and Renner monoids; we refer to~\cite{Hum} for an introduction to linear algebraic groups.
If $X$ is a subset of $M_n$, we denote by $\overline{X}$ its closure for the Zariski topology. \subsection{Algebraic Monoid Theory} \label{sousectionrappelamt} We introduce here the basic definitions and notation on Algebraic Monoid Theory that we shall need in the sequel. \subsubsection{Regular monoids and reductive groups} \begin{definition}[Algebraic monoid] An \emph{algebraic monoid} is a submonoid of~$M_n$, for some positive integer~$n$, that is closed for the Zariski topology. An algebraic monoid is \emph{irreducible} if it is irreducible as a variety.\end{definition} It is very easy to construct algebraic monoids. Indeed, the Zariski closure~$M = \overline{G}$ of any submonoid~$G$ of $M_n$ is an algebraic monoid. The main example occurs when one takes for~$G$ an algebraic subgroup of~$GL_n$. It turns out that in this case, the group~$G$ is the unit group of~$M$. Conversely, if $M$ is an algebraic monoid, then its unit group~$G(M)$ is an algebraic group. The monoid~$M_n$ is the seminal example of an algebraic monoid, and its unit group~$GL_n$ is the seminal example of an algebraic group. One of the main differences between an algebraic group and an algebraic monoid is that the latter may have nontrivial idempotent elements. In the sequel we denote by~$E(M)$ the set of idempotent elements of a monoid~$M$. We recall that $M$ is \emph{regular} if $M = E(M)G(M) = G(M)E(M)$, and that $M$ \emph{has a zero element} if there exists an element~$0$ such that~$0\times m = m\times 0 = 0$ for every~$m$ in~$M$. The next result, which is the starting point of the theory, was obtained independently by Putcha and Renner in 1982. \begin{The} Let $M$ be an irreducible algebraic monoid with a zero element. Then $M$ is regular if and only if $G(M)$ is reductive.
\end{The} The order~$\leq$ on~$E(M)$, defined by~$e\leq f$ if~$ef = fe = e$, provides a natural connection between the Borel subgroups of $G(M)$ and the idempotent elements of~$M$: \begin{The} \label{theoemoppborel}Let $M$ be a regular irreducible algebraic monoid with a zero element. Let~$\Gamma = (e_1,\ldots, e_k)$ be a maximal increasing sequence of distinct elements of~$E(M)$.\\ (i) The centralizer~$Z_{G(M)}(\Gamma)$ of~$\Gamma$ in~$G(M)$ is a maximal torus of~the reductive group~$G(M)$.\\(ii) Set $$B^+(\Gamma) = \{b\in G(M)\mid \forall e\in \Gamma, be = ebe\},$$ $$B^-(\Gamma) = \{b\in G(M)\mid \forall e\in \Gamma, eb = ebe\}.$$ Then, $B^-(\Gamma)$ and $B^+(\Gamma)$ are two opposite Borel subgroups that contain~$Z_{G(M)}(\Gamma)$. \end{The} \subsubsection{Renner monoid} \begin{definition}[Renner monoid] Let $M$ be a regular irreducible algebraic monoid with a zero element. If $T$ is a maximal torus of $G(M)$, then we denote its normalizer by $N_{G(M)}(T)$. The \emph{Renner monoid}~$R(M)$ of~$M$ is the monoid~$\overline{N_{G(M)}(T)}/T$. \end{definition} It is clear that $R(M)$ does not depend on the choice of the maximal torus of $G(M)$. \begin{Exe} Consider $M = M_n$, and choose the maximal torus~$\mathbb{T}$ of diagonal matrices. The Renner monoid is isomorphic to the monoid of matrices with at most one nonzero entry, equal to~$1$, in each row and each column. This monoid is called the rook monoid~$R_n$~\cite{Sol2}. Its unit group is the group of monomial matrices, which is isomorphic to the symmetric group~$S_n$.\end{Exe} It is almost immediate from the definition that we have \begin{Prop} Let $M$ be a regular irreducible algebraic monoid with a zero element, and fix a maximal torus~$T$ of $G(M)$. The Renner monoid~$R(M)$ is a finite factorisable inverse monoid. In particular, the set $E(R(M))$ is a commutative monoid and a lattice for the partial order $\leq$ defined by $e\leq f$ when $ef = e$.
Furthermore, there is a canonical order-preserving isomorphism of monoids between $E(R(M))$ and $E(\overline{T})$. \label{propRgenparEetW} \end{Prop} \subsection{Coxeter Group Theory} \label{sousectionrappelcgt} Here we recall some well-known facts about Coxeter groups. We refer to~\cite{Bou} for the general theory and proofs. \begin{definition} Let $\Gamma$ be a finite simple labelled graph whose labels are positive integers greater than or equal to~$3$. We denote by $S$ the vertex set of $\Gamma$. We denote by $\mathcal{E}(\Gamma)$ the set of pairs $(\{s,t\},m)$ such that either $\{s,t\}$ is an edge of $\Gamma$ labelled by~$m$ or $\{s,t\}$ is not an edge of $\Gamma$ and $m=2$. When $(\{s,t\},m)$ belongs to~$\mathcal{E}(\Gamma)$, we denote by $|s,t\rangle^m$ the word~$sts\cdots$ of length~$m$. The \emph{Coxeter group}~$W(\Gamma)$ associated with~$\Gamma$ is defined by the following group presentation $$\left\langle S \left| \begin{array}{ll}s^2 = 1&s\in S\\ |s,t\rangle^m = |t,s\rangle^m &(\{s,t\},m)\in \mathcal{E}(\Gamma) \end{array}\right.\right\rangle$$ In this case, one says that $(W,S)$ is a \emph{Coxeter system}.\end{definition} \begin{Prop} Let $M$ be a regular irreducible algebraic monoid with a zero element, and denote by $G$ its unit group. Fix a Borel subgroup~$B$ of~$G$ and a maximal torus~$T$ included in~$B$. Then\\(i) the \emph{Weyl group}~$W = N_G(T)/T$ of $G$ is a finite Coxeter group.\\(ii) The unit group of~$R(M)$ is the Weyl group~$W$. \label{propgather2} \end{Prop} \begin{Rem}\label{remgenrset} Gathering the results of Propositions~\ref{propRgenparEetW} and~\ref{propgather2}, we get $$R(M) = E(\overline{T})\cdot W = W\cdot E(\overline{T}).$$ \end{Rem} \begin{definition} Let~$(W,S)$ be a \emph{Coxeter system}. Let $w$ belong to $W$. The \emph{length}~$\ell(w)$ of $w$ is the minimal integer $k$ such that $w$ has a word representative of length~$k$ on the alphabet~$S$. Such a word is called a \emph{reduced word representative} of $w$.
\end{definition} In the sequel, we use the following classical result~\cite{Bou}. \begin{Prop} Let $(W,S)$ be a \emph{Coxeter system} and $I,J$ be subsets of $S$. Let $W_I$ and $W_J$ be the subgroups of $W$ generated by $I$ and $J$ respectively.\\(i) The pairs $(W_I,I)$ and $(W_J,J)$ are Coxeter systems.\\(ii) For every element $w$ which belongs to $W$ there exists a unique element~$\hat{w}$ of minimal length in the double-class $W_JwW_I$. Furthermore, there exist $w_1$ in $W_I$ and $w_2$ in $W_J$ such that $w = w_2\hat{w}w_1$ with $\ell(w) = \ell(w_1)+\ell(\hat{w})+\ell(w_2)$. \label{proppopa} \end{Prop} Note that~$(ii)$ holds when $I$ or $J$ is empty. \subsection{Cross section} Our objective here is to prove Theorem~\ref{thepreserennmonidintro}. We first need to make precise the notation used in this theorem. \label{sousectionpresrenmnoid} Throughout this section, we assume that $M$ is a regular irreducible algebraic monoid with a zero element. We denote by~$G$ the unit group of $M$. We fix a maximal torus~$T$ of $G$ and we denote by $W$ the Weyl group~$N_G(T)/T$ of~$G$. We denote by $S$ the standard generating set associated with the canonical Coxeter structure of the Weyl group~$W$. \subsubsection{The Cross Section Lattice} To describe the generating set of our presentation, we need to introduce the \emph{cross section lattice}, which is related to Green's relations. The latter are classical tools in semigroup theory. Let us recall the definition of the relation~$\mathcal{J}$. The~$\mathcal{J}$-class of an element~$a$ in~$M$ is the double coset~$MaM$. The set~$\mathcal{U}(M)$ of~$\mathcal{J}$-classes carries a natural partial order~$\leq$ defined by~$MaM \leq MbM$ if~$MaM\subseteq MbM$. It turns out that the map~$e\mapsto MeM$ from~$E(M)$ to~$\mathcal{U}(M)$ induces a one-to-one correspondence between the set of~$W$-orbits on~$E(\overline{T})$ and the set~$\mathcal{U}(M)$.
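This correspondence can be made concrete in the rook monoid of the example above: in $R_n$, the two-sided ideal generated by a rook matrix depends only on its rank, so the $\mathcal{J}$-classes are indexed by the ranks $0,\ldots,n$. The following sketch (plain Python, an illustration outside the formal development; all helper names are ours) verifies this for $n = 2$:

```python
from itertools import product

def rook_matrices(n):
    # all n x n 0/1 matrices with at most one 1 in each row and each column
    mats = []
    for bits in product([0, 1], repeat=n * n):
        m = tuple(tuple(bits[i * n + j] for j in range(n)) for i in range(n))
        rows_ok = all(sum(row) <= 1 for row in m)
        cols_ok = all(sum(m[i][j] for i in range(n)) <= 1 for j in range(n))
        if rows_ok and cols_ok:
            mats.append(m)
    return mats

def mul(a, b):
    n = len(a)
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n))
                 for i in range(n))

def rank(m):
    return sum(map(sum, m))  # number of 1s = rank for a rook matrix

R = rook_matrices(2)
assert len(R) == 7  # |R_2| = 1 + 4 + 2
# two-sided ideal of a: all products x a y with x, y in the monoid
ideals = {a: frozenset(mul(mul(x, a), y) for x in R for y in R) for a in R}
# equal ideals exactly when equal ranks: the J-classes are the rank classes
assert all((ideals[a] == ideals[b]) == (rank(a) == rank(b)) for a in R for b in R)
```

Here the ideal of $a$ is the set of all rook matrices of rank at most the rank of $a$, so two elements generate the same ideal exactly when their ranks agree.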
The existence of this one-to-one correspondence leads to the following definition: \begin{definition}[cross section lattice] A subset~$\Lambda$ of~$E(\overline{T})$ is a \emph{cross section lattice} if it is a transversal of~$E(\overline{T})$ for the action of~$W$ such that the bijection~$e\mapsto MeM$ from~$\Lambda$ onto~$\mathcal{U}(M)$ is order preserving. \end{definition} It is not immediately clear that such a cross section lattice exists. It does, however, and \begin{The}\cite[Theorem 9.10]{Put} For every Borel subgroup~$B$ of $G$ that contains $T$, we set $$\Lambda(B) = \{e\in E(\overline{T})\mid \forall b\in B,\ be = ebe \}.$$ The map~$B\mapsto \Lambda(B)$ is a bijection between the set of Borel subgroups of~$G$ that contain~$T$ and the set of cross section lattices of~$E(\overline{T})$. \end{The} \begin{Exe}\label{exemplecrossselat} Consider $M = M_n$. Consider the Borel subgroup~$\mathbb{B}$ of invertible upper triangular matrices and the maximal torus~$\mathbb{T}$ of invertible diagonal matrices. Denote by~$e_i$ the diagonal matrix~$\left(\begin{array}{cc}Id_i&0\\0&0\end{array}\right)$ of rank~$i$. Then, the set~$\Lambda(\mathbb{B})$ is~$\{e_0,\ldots, e_n\}$. One has $e_i\leq e_{i+1}$ for every index~$i$. \end{Exe} \begin{Rem} \label{remgammainclamb}(i) Let $\Gamma$ be a maximal chain of idempotent elements of $\overline{T}$ and consider the Borel subgroup~$B^+(\Gamma)$ defined in Theorem~\ref{theoemoppborel}. It follows from the definitions that we have~$\Gamma\subseteq \Lambda(B^+(\Gamma))$.\\(ii)\cite[Def.~9.1]{Put} A cross section lattice is a sublattice of $E(\overline{T})$. \end{Rem} \subsubsection{Normal form and type map} In order to state the defining relations of our presentation, we turn now to the notion of a \emph{type map}. We fix a Borel subgroup~$B$ of $G$ that contains~$T$. We write $\Lambda$ for $\Lambda(B)$. We consider~$\Lambda$ as a sublattice of $E(R(M))$ ({\it cf.} Proposition~\ref{propRgenparEetW}).
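In the setting of Example~\ref{exemplecrossselat}, the defining condition $be = ebe$ of $\Lambda(\mathbb{B})$ can be tested numerically. The sketch below (plain Python, illustration only; random integer upper triangular matrices stand in for elements of $\mathbb{B}$) checks that the idempotents $e_i$ satisfy the condition, while another diagonal idempotent of the same rank fails it:

```python
import random

def mul(a, b):
    n = len(a)
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n))
                 for i in range(n))

def diag_idem(n, ones):
    # diagonal 0/1 idempotent with 1s at the (0-indexed) positions in `ones`
    return tuple(tuple(1 if i == j and i in ones else 0 for j in range(n))
                 for i in range(n))

random.seed(0)
n = 4
# random upper triangular matrices with nonzero diagonal, standing in for elements of B
uppers = [tuple(tuple(random.randint(1, 5) if i <= j else 0 for j in range(n))
                for i in range(n)) for _ in range(50)]

# each e_i = diag(Id_i, 0) satisfies b e = e b e for every upper triangular b
for i in range(n + 1):
    e = diag_idem(n, set(range(i)))
    assert all(mul(b, e) == mul(mul(e, b), e) for b in uppers)

# a diagonal idempotent of rank 3 outside Lambda(B), namely diag(0, 1, 1, 1), fails
f = diag_idem(n, {1, 2, 3})
assert any(mul(b, f) != mul(mul(f, b), f) for b in uppers)
```

The failing idempotent lies in the same $W$-orbit as $e_3$, illustrating that $\Lambda(\mathbb{B})$ picks out one distinguished representative per orbit.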
\begin{Not}\cite{Ren} Let~$e$ belong to~$\Lambda$.\\(i) We set $$\lambda(e) = \{s\in S\mid se = es\}.$$ The map $\lambda : e\mapsto \lambda(e)$ is called the \emph{type map} of the reductive monoid~$M$.\\ (ii) We set $\lambda_\star(e) = \bigcap_{f\leq e}\lambda(f)$ and $\lambda^\star(e) = \bigcap_{f\geq e}\lambda(f)$.\\ (iii) We set~$\sn(e) = \{w\in W\mid we = ew \}$, $\sn_\star(e) = \{w\in \sn(e) \mid we = e\}$ and $\sn^\star(e) = \{w\in \sn(e) \mid we \neq e\}$. \end{Not} \begin{Prop}\cite[Lemma 7.15]{Ren} Let~$e$ belong to~$\Lambda$. Then\\ (i) $\lambda_\star(e) = \{s\in S\mid se = es = e\}$ and $\lambda^\star(e) = \{s\in S\mid se = es \neq e\}$.\\ (ii) The sets~$\sn(e)$, $\sn_\star(e)$ and $\sn^\star(e)$ are the \emph{standard parabolic subgroups} of~$W$ generated by the sets $\lambda(e)$, $\lambda_\star(e)$ and $\lambda^\star(e)$, respectively. Furthermore, $\sn(e)$ is the direct product of $\sn_\star(e)$ and $\sn^\star(e)$.\label{propopo} \end{Prop} \begin{Not}\cite{Ren} By Propositions~\ref{proppopa} and~\ref{propopo}, for every~$w$ in~$W$ and every~$e,f$ in~$\Lambda$, each of the sets~$w\sn(e)$, $\sn(e) w$, $w\sn_\star(e)$, $\sn_\star(e) w$ and $\sn(e) w \sn(f)$ has a unique element of minimal length. We denote by $D(e),G(e),D_\star(e)$ and $G_\star(e)$ the sets of elements~$w$ of~$W$ that are of minimal length in their classes $w\sn(e)$, $\sn(e) w$, $w\sn_\star(e)$ and $\sn_\star(e) w$, respectively. Note that the set of elements~$w$ of~$W$ that are of minimal length in their double class $\sn(e) w\sn(f)$ is $G(e)\cap D(f)$. \end{Not} \subsubsection{Properties of the cross section lattice} As in previous sections, we fix a Borel subgroup~$B$ of $G$ that contains $T$, and denote by $\Lambda$ the associated cross section lattice contained in~$E(R(M))$. We use the notation~$\mathcal{E}(\Gamma)$ of Section~\ref{sousectionrappelcgt}. We set $\Lambdao = \Lambda-\{1\}$. To make the statement of Proposition~\ref{thepreserennmonid} clear, we need a preliminary result.
\begin{Lem} Let $e_1,e_2$ lie in $E(\overline{T})$ such that $e_1\leq e_2$. There exist $f_1,f_2$ in $\Lambda$ with $f_1\leq f_2$ and $w$ in $W$ such that $wf_1w^{-1} = e_1$ and $wf_2w^{-1} = e_2$. \label{lemtechnorder} \end{Lem} \begin{proof} Let $\Gamma$ be a maximal chain of $E(R(M))$ that contains $e_1$ and $e_2$. The Borel subgroup $B^+(\Gamma)$ contains the maximal torus~$T$. Therefore, there exists~$w$ in $W$ such that $w^{-1}B^+(\Gamma) w = B$. This implies that $w^{-1}\Lambda(B^+(\Gamma)) w = \Lambda$. We conclude using Remark~\ref{remgammainclamb}$(i)$. \end{proof} \begin{Lem} \label{lem:cok} Let $h,e$ belong to $\Lambda$ such that $h \leq e$. Then, $W(h)\cap G(e)\subseteq W_\star(h)$ and $W(h)\cap D(e)\subseteq W_\star(h)$. \end{Lem} \begin{proof} Let $w$ lie in~$W(h)\cap G(e)$. We can write $w = w_1w_2 = w_2w_1$ where $w_1$ lies in $W_\star(h)$ and $w_2$ lies in $W^\star(h)$. Since $h\leq e$, we have $\lambda^\star(h)\subseteq\lambda^\star(e)$ and $W^\star(h)\subseteq W^\star(e)$. Since $w$ is assumed to belong to $G(e)$, this implies~$w_2 = 1$. The proof of the second inclusion is similar. \end{proof} \begin{Prop} Let $e,f$ lie in $\Lambdao$ and $w$ lie in~$G(e)\cap D(f)$. There exists $h$ in $\Lambdao$ with $h\leq e\wedge f$ such that $w$ belongs to $W_\star(h)$ and $ewf = hw = h$. \label{propconslong} \end{Prop} To prove the above proposition, we are going to use the existence of a normal decomposition in~$R(M)$: \begin{Prop}[\cite{Ren} Section 8.6] \label{fnrenner} For every~$w$ in~$R(M)$ there exists a unique triple~$(w_1,e,w_2)$ with~$e\in \Lambda$,~$w_1\in D_\star(e)$ and~$w_2\in G(e)$ such that~$w = w_1ew_2$. \end{Prop} Following~\cite{Ren}, we call the triple~$(w_1,e,w_2)$ the \emph{normal decomposition} of $w$. \begin{proof}[Proof of Proposition~\ref{propconslong}] Consider the normal decomposition~$(w_1,h,w_2)$ of $ewf$. Then, $w_1$ belongs to~$D_\star(h)$ and $w_2$ belongs to~$G(h)$.
The element $ewfw^{-1}$ is equal to $w_1hw_2w^{-1}$ and belongs to $E(R(M))$. Since $w_1$ lies in~$D_\star(h)$, this implies that $w_3 = w_2w^{-1}w_1$ lies in $W_\star(h)$, and that $e\geq w_1hw_1^{-1}$. By Lemma~\ref{lemtechnorder}, there exists $w_4$ in $W$ and $e_1,h_1$ in $\Lambdao$, with $e_1\geq h_1$, such that $w_4e_1w_4^{-1} = e$ and $w_4h_1w_4^{-1} = w_1hw_1^{-1}$. Since $\Lambda$ is a cross section for the action of $W$, we have~$e_1 = e$ and~$h_1 = h$. In particular, $w_4$ belongs to~$W(e)$. Since $w_1$ belongs to $D_\star(h)$, we deduce that there exists $r$ in $W_\star(h)$ such that $w_4 = w_1r$ with $\ell(w_4) = \ell(w_1)+\ell(r)$. Then, $w_1$ lies in $W(e)$. Now, write $w_2 = w''_2w'_2$ where $w''_2$ lies in $W^\star(h)$ and $w'_2$ belongs to $G_\star(h)$. One has $ewf = w_1w''_2hw'_2$, and $w_1w''_2$ lies in~$D(h)$. By symmetry, we get that $w'_2$ belongs to $W(f)$. Hence, $w^{-1}_1w{w'}^{-1}_2$ is equal to ${w}_3^{-1}w''_2$ and belongs to $W(h)$. By hypothesis, $w$ lies in $G(e)\cap D(f)$. Then we must have $\ell({w}_3^{-1}w''_2) = \ell(w^{-1}_1)+\ell({w'}^{-1}_2)+\ell(w)$. Since ${w}_3^{-1}w''_2$ belongs to $W(h)$, it follows that $w_1$ and $w'_2$ belong to $W(h)$ too. This implies $w_1 = w'_2 = 1$ and $w = w_3^{-1}w''_2$. Therefore, $ewf = hw''_2 = hw = wh$. Finally,~$w$ belongs to $W_\star(h)$ by Lemma~\ref{lem:cok}. \end{proof} \subsubsection{A presentation for~$R(M)$} \begin{Not} (i) For each $w$ in $W$, we fix a reduced word representative~$\underline{w}$.\\ (ii) We denote by $e\wedge_w f$ the unique letter in~$\Lambda$ that represents the element $h$ in Proposition~\ref{propconslong}. \end{Not} Note that for~$s$ in~$S$, one has~$\underline{s} = s$. We recall that $\Lambda$ is a sublattice of $E(\overline{T})$ for the order~$\leq$ defined by $e\leq f$ if $ef = fe = e$.
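Before stating the presentation, one can check by machine that $S\cup\Lambdao$ generates the Renner monoid in the rook case: by Remark~\ref{remgenrset}, closing $\{s_1,s_2,e_0,e_1,e_2\}$ under multiplication must produce all $\sum_{k=0}^{3}\binom{3}{k}^2k! = 34$ elements of the rook monoid $R_3$. A sketch (plain Python, illustration only; helper names are ours):

```python
n = 3

def mul(a, b):
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n))
                 for i in range(n))

def perm_matrix(p):
    # permutation matrix of p, where p[j] is the (0-indexed) image of j
    return tuple(tuple(1 if i == p[j] else 0 for j in range(n)) for i in range(n))

def transposition(i):
    # the generator s_{i+1} of the text, swapping positions i and i+1
    p = list(range(n))
    p[i], p[i + 1] = p[i + 1], p[i]
    return perm_matrix(tuple(p))

def e(k):
    # the idempotent e_k = diag(Id_k, 0)
    return tuple(tuple(1 if i == j and i < k else 0 for j in range(n)) for i in range(n))

gens = [transposition(0), transposition(1), e(0), e(1), e(2)]
identity = perm_matrix(tuple(range(n)))
monoid, frontier = {identity}, {identity}
while frontier:  # saturate under right multiplication by the generators
    frontier = {mul(a, g) for a in frontier for g in gens} - monoid
    monoid |= frontier

assert len(monoid) == 34  # sum_k C(3,k)^2 k! = 1 + 9 + 18 + 6
```

The saturation loop is a breadth-first closure: every element of the generated submonoid is reached as a finite product of generators.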
We are now ready to state a monoid presentation for $R(M)$: \begin{Prop} The Renner monoid has the following monoid presentation, whose generating set is~$S\cup \Lambdao$ and whose defining relations are:\\ \begin{tabular}{lll} (COX1)&$s^2 = 1$,&$s\in S$;\\ (COX2)&$|s,t\rangle^m = |t,s\rangle^m$,&$(\{s,t\},m)\in \mathcal{E}(\Gamma)$;\\ (TYM1)&$se = es$,&$e\in \Lambdao$, $s\in \lambda^\star(e)$;\\ (TYM2)&$se = es = e$,&$e\in \Lambdao$, $s\in \lambda_\star(e)$;\\ (TYM3)&$e\underline{w}f = e\wedge_wf$,&$e,f\in \Lambdao$, $w\in G(e)\cap D(f)$. \\ \end{tabular} \label{thepreserennmonid} \end{Prop} Note that when $e\leq f$ and $w = 1$, Relation~(TYM3) becomes $ef = fe = e$. More generally, one has~$e\wedge_1f = e\wedge f$. \begin{proof} By Remark~\ref{remgenrset}, the submonoids~$E(R(M))$ and $W$ generate the monoid $R(M)$. As $S$ is a generating set for $W$, it follows from the definition of $\Lambda$ that the set~$S\cup \Lambdao$ generates $R(M)$ as a monoid. Clearly, Relations~(COX1) and~(COX2) hold in~$W$, and Relations~(TYM1) and~(TYM2) hold in $R(M)$. Relations~(TYM3) hold in~$R(M)$ by Proposition~\ref{propconslong}. It remains to prove that we obtain a presentation of the monoid~$R(M)$. Let~$w$ belong to~$R(M)$ with $(w_1,e,w_2)$ as normal decomposition. Consider any word~$\omega$ on the alphabet~$S\cup \Lambdao$ that represents~$w$. We claim that starting from~$\omega$, one can obtain the word $\underline{w_1}e\underline{w_2}$ using the relations of the above presentation only. This is almost obvious by induction on the number~$j$ of letters of the word~$\omega$ that belong to~$\Lambdao$. The property holds for~$j = 0$ (in this case~$w = w_1$ and~$e = w_2 = 1$) because~(COX1) and~(COX2) are the defining relations of the presentation of~$W$. The case $j =1$ is also clear, applying Relations~(COX1),~(COX2),~(TYM1) and~(TYM2). Now, for $j\geq 2$, the case $j$ can be reduced to the case $j-1$ using Relations~(TYM3) (and the other relations).
\end{proof} The presentation in Proposition~\ref{thepreserennmonid} is not minimal; some relations can be removed in order to obtain the presentation stated in Theorem~\ref{thepreserennmonidintro}. Let us introduce a notation used in this theorem: \begin{Not} If $e$ lies in $\Lambda$, we denote by $G^\uparrow(e)$ the set $G(e)\cap \left(\bigcap_{f > e}W(f)\right)$. Similarly, we denote by $D^\uparrow(e)$ the set $D(e)\cap \left(\bigcap_{f > e}W(f)\right)$. \end{Not} \begin{Rem}(i) $\left(\bigcap_{f > e}\lambda(f)\right)\cap\lambda_\star(e) = \emptyset$ by Proposition~\ref{propopo}.\\ (ii) $$G^\uparrow(e) = G(e)\cap W_{\cap_{f > e}\lambda(f)}\textrm{ and }D^\uparrow(e) = D(e)\cap W_{\cap_{f > e}\lambda(f)}.$$ \end{Rem} The reader may note that for $e < f$ one has $G^\uparrow(e)\cap D^\uparrow(f) = \{1\}$. \begin{proof}[Proof of Theorem \ref{thepreserennmonidintro}] We need to prove that every relation $e\underline{w}f = e\wedge_wf$ of type~(TYM3) in Proposition~\ref{thepreserennmonid} can be deduced from Relations~(RBI) of Theorem~\ref{thepreserennmonidintro}, using the other common defining relations of type~(COX1), (COX2), (TYM1) and (TYM2). We prove this by induction on the length of $w$. If $\ell(w) = 0$ then $w$ is equal to $1$ and therefore belongs to $G^\uparrow(e)\cap D^\uparrow(f)$. Assume $\ell(w)\geq 1$ and that $w$ does not belong to~$G^\uparrow(e)\cap D^\uparrow(f)$. Assume furthermore that~$w$ does not lie in~$G^\uparrow(e)$ (the other case is similar). Choose~$e_1$ in $\Lambdao$ such that $e_1 > e$ and $w$ does not lie in $W(e_1)$. Then, applying Relations~(RBI), we can transform the word~$e\underline{w}f$ into the word~$ee_1\underline{w}f$. Using Relations~(COX2), we can transform the word~$\underline{w}$ into a word~$\underline{w_1}\,\underline{w_2}$ where $w_1$ belongs to $W(e_1)$ and $w_2$ belongs to $G^\uparrow(e_1)$.
Then, applying Relations~(COX2) and~(TYM1), we can transform the word~$ee_1\underline{w}f$ into the word~$e\underline{w_1}e_1\underline{w_2}f$. By hypothesis on~$w$, we have~$w_2\neq 1$ and, therefore, $\ell(w_1)< \ell(w)$. Assume $w_2$ belongs to $D^\uparrow(f)$. We can apply Relation~(RBI) to transform~$e\underline{w_1}e_1\underline{w_2}f$ into $e\underline{w_1}(e_1\wedge_{w_2}f)$. Using Relations~(COX2), we can transform $\underline{w_1}$ into a word $\underline{w'_1}\,\underline{w''_1}\,\underline{w'''_1}$ with $w'''_1$ in $W_\star(e_1\wedge_{w_2}f)$, $w''_1$ in $W^\star(e_1\wedge_{w_2}f)$ and $w'_1$ in $D(e_1\wedge_{w_2}f)$. Then~$e\underline{w_1}(e_1\wedge_{w_2}f)$ can be transformed into~$e\underline{w'_1}(e_1\wedge_{w_2}f)\underline{w''_1}$. Since $\ell(w'_1)\leq\ell(w_1)<\ell(w)$, we can apply an induction argument to transform the word~$e\underline{w'_1}(e_1\wedge_{w_2}f)\underline{w''_1}$ into the word $(e\wedge_{w'_1}(e_1\wedge_{w_2}f))\,\underline{w''_1}$. Now, by the uniqueness of the normal decomposition, $w''_1$ has to belong to $W_\star(e\wedge_{w'_1}(e_1\wedge_{w_2}f))$. Therefore, we can transform~$(e\wedge_{w'_1}(e_1\wedge_{w_2}f))\,\underline{w''_1}$ into $e\wedge_{w'_1}(e_1\wedge_{w_2}f)$ using Relations~(TYM2). Note that the letters~$e\wedge_{w'_1}(e_1\wedge_{w_2}f)$ and $e\wedge_{w}f$ have to be equal, as they represent the same element in~$\Lambda$. Assume finally that $w_2$ does not belong to $D^\uparrow(f)$. By similar arguments we can, applying Relations~(COX2) and~(TYM1), transform the word~$e\underline{w_1}e_1\underline{w_2}f$ into a word~$e\underline{w_1}e_1\underline{w_{3}}f_1\underline{w_{4}}f$ where $f_1 > f$ in~$\Lambdao$ and~$w_2 = w_{3}w_{4}$ with $w_{3}$ in $D^\uparrow(f_1)$ and $w_{4}$ in $W(f_1)$. At this stage, we are in position to apply Relation~(RBI). Thus, we can transform the word~$e\underline{w_1}e_1\underline{w_2}f$ into~$e\underline{w_1}\,(e_1\wedge_{w_3}f_1)\,\underline{w_{4}}f$.
Since we have~$\ell(w_1)+\ell(w_4) < \ell(w)$, we can proceed as in the first case to conclude. \end{proof} \section{Some particular Renner monoids} Here we focus on some special Renner monoids considered in~\cite{LiRe,Li1,Li2}. In each case, we provide an explicit monoid presentation using the general presentation obtained in Section~1. \subsection{The rook monoid} Consider $M = M_n$ and choose~$\mathbb{B}$ as Borel subgroup (see Example~\ref{exemplecrossselat}). In this case, the Weyl group is the symmetric group $S_n$. Its generating set $S$ is $\{s_1,\cdots, s_{n-1}\}$ where $s_i$ is the transposition matrix corresponding to~$(i,i+1)$. The cross section lattice~$\Lambda = \{e_0,\cdots,e_{n-1},e_n\}$ is linear (we have $e_j\leq e_{j+1}$ for every $j$). For every~$j$ one has $\lambda_\star(e_j) =\{s_i\mid j+1\leq i\}$ and $\lambda^\star(e_j) = \{s_i\mid i\leq j-1\}$. In particular,~$G^\uparrow(e_i)\cap D^\uparrow(e_i) = \{1,s_i\}$, and for $i\neq j$ we have~$G^\uparrow(e_i)\cap D^\uparrow(e_j) = \{1\}$. \begin{figure} \caption{Coxeter graph and Hasse diagram for $M_n$.} \label{fig:hassecrlAn} \end{figure} Therefore, we recover the monoid presentation of the rook monoid~$R(M)$ stated in~\cite{God}: the generating set is~$\{s_1,\ldots,s_{n-1},e_0,\ldots,e_{n-1}\}$ and the defining relations are $$ \begin{array}{rcll} s_i^2&=&1, &1\leq i \leq n-1;\\s_is_j&=&s_js_i,&1\leq i,j \leq n-1\textrm{ and }|i-j|\geq 2;\\s_is_{i+1}s_i&=&s_{i+1}s_is_{i+1}, &1\leq i \leq n-2;\\e_js_i&=&s_ie_j,&1\leq i<j \leq n-1;\\e_js_i&=&s_ie_j = e_j,&0\leq j<i\leq n-1;\\e_ie_j&=&e_je_i = e_{\min(i,j)},&0\leq i,j \leq n-1;\\e_is_ie_i&=&e_{i-1}, &1\leq i \leq n-1.\end{array}$$ \subsection{The Symplectic Algebraic Monoid} Let\label{simplectmonoid} $n$ be a positive even integer and $Sp_n$ be the \emph{Symplectic Algebraic Group} \cite[page 52]{Hum}: let $\ell$ lie in $\mathbb{N}$, and consider the matrix~$J_{\ell}$ in $M_{\ell}$ whose entries are equal to~$1$ on the antidiagonal and to~$0$ elsewhere.
Let $J = \left(\begin{array}{cc}0&J_{\ell}\\-J_{\ell}&0\end{array}\right)$ in $M_n$, where $n = 2\ell$. Then,~$Sp_n$ is equal to~$\{A\in M_n\mid A^tJA = J\}$, where $A^t$ is the transpose matrix of $A$. We set $M = \overline{\mathbb{K}^{\scriptscriptstyle \times}Sp_n}$. This monoid is a regular monoid with a zero element whose unit group is the reductive group~$\mathbb{K}^{\scriptscriptstyle \times}Sp_n$. It is called the \emph{Symplectic Algebraic Monoid} \cite{LiRe}. Let~$\mathbb{B}$ be the Borel subgroup of $GL_n$ as defined in Example~\ref{exemplecrossselat}, and set $B = \mathbb{K}^{\scriptscriptstyle \times}(\mathbb{B}\cap Sp_n)$. This is a Borel subgroup of the unit group of $M$. It is shown in~\cite{LiRe} that the cross section lattice~$\Lambda$ of $M$ is $\{e_0,e_1,\cdots ,e_{\ell}, e_n\}$ where the elements $e_i$ correspond to the matrices of $M_n$ defined in Example~\ref{exemplecrossselat}. In particular, the cross section lattice~$\Lambda$ is linear. In this case, the Weyl group is a Coxeter group of type~$B_\ell$. In other words, the group~$W$ is isomorphic to the subgroup of $S_{n}$ generated by the permutation matrices $s_1,\cdots,s_\ell$ corresponding to $(1,2)(n-1,n)$, $(2,3)(n-2,n-1)$, $\cdots$, $(\ell-1,\ell) (\ell+1,\ell+2)$, and~$(\ell,\ell+1)$, respectively. One has $\lambda_\star(e_i) = \{s_{i+1},\cdots, s_{\ell}\}$ and $\lambda^\star(e_i) = \{s_1,\cdots, s_{i-1}\}$. Therefore, $G^\uparrow(e_\ell)\cap D^\uparrow(e_\ell) = \{1,s_\ell, s_\ell s_{\ell-1}s_\ell\}$, and for $i$ in~$\{1,\cdots, \ell-1\}$ one has $G^\uparrow(e_i)\cap D^\uparrow(e_i) = \{1,s_i\}$. A direct calculation proves that $e_is_ie_i = e_{i-1}$ for every $i$, and that $e_\ell s_\ell s_{\ell-1}s_\ell e_\ell = e_{\ell-2}$.
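The direct calculation mentioned above is easy to automate. The sketch below (plain Python, illustration only, with $\ell = 3$ and $n = 6$) checks that $e_is_ie_i = e_{i-1}$ for $1\leq i\leq \ell$ and that $e_\ell\, s_\ell s_{\ell-1}s_\ell\, e_\ell = e_{\ell-2}$, using the permutation matrices described in the text:

```python
l, n = 3, 6

def mul(a, b):
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n))
                 for i in range(n))

def perm_matrix(*pairs):
    # product of disjoint transpositions, with 1-indexed entries as in the text
    p = list(range(n))
    for a, b in pairs:
        p[a - 1], p[b - 1] = p[b - 1], p[a - 1]
    return tuple(tuple(1 if i == p[j] else 0 for j in range(n)) for i in range(n))

def e(k):
    # the idempotent e_k = diag(Id_k, 0)
    return tuple(tuple(1 if i == j and i < k else 0 for j in range(n)) for i in range(n))

# s_i = (i, i+1)(n-i, n-i+1) for i < l, and s_l = (l, l+1)
s = {i: perm_matrix((i, i + 1), (n - i, n - i + 1)) for i in range(1, l)}
s[l] = perm_matrix((l, l + 1))

for i in range(1, l + 1):
    assert mul(mul(e(i), s[i]), e(i)) == e(i - 1)   # e_i s_i e_i = e_{i-1}

w = mul(mul(s[l], s[l - 1]), s[l])
assert mul(mul(e(l), w), e(l)) == e(l - 2)          # e_l s_l s_{l-1} s_l e_l = e_{l-2}
```

In matrix terms, $e_iAe_i$ retains only the top-left $i\times i$ block of $A$, which makes both identities immediate to test.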
\begin{figure} \caption{Coxeter graph and Hasse diagram for $Sp_n$.} \label{fig:hassecrlBn} \end{figure} Hence, a monoid presentation of~$R(M)$ is given by the generating set~$\{s_1,\ldots,s_{\ell},e_0,\ldots,e_{\ell}\}$ and the defining relations $$ \begin{array}{rcll} s_i^2&=&1, &1\leq i \leq \ell;\\s_is_j&=&s_js_i,&1\leq i,j \leq \ell\textrm{ and }|i-j|\geq 2;\\s_is_{i+1}s_i&=&s_{i+1}s_is_{i+1}, &1\leq i \leq \ell-2;\\s_\ell s_{\ell-1}s_\ell s_{\ell-1}&=&s_{\ell-1}s_\ell s_{\ell-1} s_\ell;\\e_js_i&=&s_ie_j,&1\leq i<j \leq \ell;\\e_js_i&=&s_ie_j = e_j,&0\leq j<i\leq \ell;\\e_ie_j&=&e_je_i = e_{\min(i,j)},&0\leq i,j \leq \ell;\\e_is_ie_i&=&e_{i-1}, &1\leq i \leq \ell;\\e_\ell \, s_{\ell}s_{\ell-1} s_\ell\, e_\ell&=&e_{\ell-2} .\end{array}$$ \subsection{The Special Orthogonal Algebraic Monoid} Let $n$ be a positive integer and $J_n$ be defined as in Section~\ref{simplectmonoid}. The \emph{Special Orthogonal Group}~${\bf SO}_n$ is defined as ${\bf SO}_n = \{A\in SL_n\mid A^tJ_nA = J_n\}$. The group $\mathbb{K}^{\scriptscriptstyle \times}\, {\bf SO}_n$ is a connected reductive group. Following~\cite{Li1,Li2}, we define the \emph{Special Orthogonal Algebraic Monoid} to be the Zariski closure~$M = \overline{\mathbb{K}^{\scriptscriptstyle \times}{\bf SO}_n}$ of $\mathbb{K}^{\scriptscriptstyle \times}{\bf SO}_n$. This is an algebraic monoid~\cite{Li1,Li2}, and $B = \mathbb{B} \cap M$ is a Borel subgroup of its unit group. In this case, the cross section lattice depends on the parity of $n$. Furthermore, the Weyl group is a Coxeter group whose type depends on the parity of $n$ too. \begin{figure} \caption{Coxeter graph and Hasse diagram for ${\bf SO}_n$.} \label{fig:hassecrlDn} \end{figure} Assume $n = 2\ell$ is even. In this case, $W$ is a Coxeter group of type~$D_\ell$.
The standard generating set of $W$ is $\{s_1,\cdots, s_{\ell}\}$ where for $1\leq i\leq \ell-1$, the element~$s_i$ is the permutation matrix associated with~$(i,i+1)(n-i,n-i+1)$, and $s_\ell$ is the permutation matrix associated with~$(\ell-1,\ell+1)(\ell,\ell+2)$. It is shown in~\cite{Li2} that the cross section lattice~$\Lambda$ is equal to~$\{e_0,e_1,\cdots,e_{\ell},f_\ell,e_n\}$. The elements $e_i$ correspond to the matrices of $M_n$ defined in Example~\ref{exemplecrossselat}; the element~$f_{\ell}$ is the diagonal matrix~$e_{\ell+1}+e_{\ell-1}-e_\ell$. The Hasse diagram of $\Lambda$ is as represented in Figure~\ref{fig:hassecrlDn}. For~$j$ in $\{0,\ldots,\ell-2\}$ one has $\lambda_\star(e_j) =\{s_i\mid j+1\leq i\}$ and $\lambda^\star(e_j) = \{s_i\mid i\leq j-1\}$. Furthermore, one can verify that $$\lambda_\star(e_{\ell-1}) = \lambda_\star(f_{\ell}) = \lambda_\star(e_{\ell})= \emptyset,$$ $$\lambda^\star(e_{\ell-1}) = \{s_i\mid i\leq \ell-2\},$$ $$\lambda^\star(e_{\ell})= \{s_i\mid i\leq \ell-1\},$$ $$\lambda^\star(f_{\ell})= \{s_i\mid i\neq\ell-1\}.$$ Therefore, for $i$ in~$\{1,\cdots, \ell-2\}$ one has $G^\uparrow(e_i)\cap D^\uparrow(e_i) = \{1,s_i\}$.
Furthermore, $$G^\uparrow(e_{\ell-1})\cap D^\uparrow(e_{\ell-1}) = \{1\} \textrm{ and }$$ $$G^\uparrow(f_\ell)\cap D^\uparrow(f_\ell) = \{1,s_{\ell-1}\}\ \ \ ;\ \ \ G^\uparrow(e_\ell)\cap D^\uparrow(e_\ell) = \{1,s_\ell\};$$ $$G^\uparrow(e_\ell)\cap D^\uparrow(f_\ell) = \{1, s_\ell s_{\ell-2}s_{\ell-1}\}\ \ \ ;\ \ \ G^\uparrow(f_\ell)\cap D^\uparrow(e_\ell) = \{1, s_{\ell-1} s_{\ell-2} s_\ell\}.$$ The monoid~$R(M)$ has a presentation with~$\{s_1,\ldots,s_{\ell},e_0,\ldots,e_{\ell},f_\ell\}$ for generating set and $$ \begin{array}{rcll} s_i^2&=&1, &1\leq i \leq \ell;\\s_is_j&=&s_js_i,&1\leq i,j \leq \ell\textrm{ and }|i-j|\geq 2;\\s_\ell s_{\ell-1}&=&s_{\ell-1}s_\ell;\\s_is_{i+1}s_i&=&s_{i+1}s_is_{i+1}, &1\leq i \leq \ell-2;\\s_\ell s_{\ell-2}s_\ell &=&s_{\ell-2}s_\ell s_{\ell-2};\\e_js_i&=&s_ie_j,&1\leq i<j \leq \ell;\\e_js_i&=&s_ie_j = e_j,&0\leq j<i\leq \ell;\\e_ie_j&=&e_je_i = e_{\min(i,j)},&0\leq i,j \leq \ell;\\f_\ell e_\ell&=&e_\ell f_\ell = e_{\ell-1};\\e_is_ie_i&=&e_{i-1}, &1\leq i \leq \ell-1;\\e_\ell s_\ell e_\ell &=&f_\ell s_{\ell-1} f_\ell = e_{\ell-2};\\ e_\ell \, s_{\ell}s_{\ell-2} s_{\ell-1}\, f_\ell&=&f_\ell \, s_{\ell-1}s_{\ell-2} s_{\ell}\, e_\ell = e_{\ell-3} .\end{array}$$ for defining relations. Assume $n = 2\ell+1$ is odd. In that case, $W$ is a Coxeter group of type~$B_\ell$. It is shown in~\cite{Li1} that the cross section lattice is linear as in the case of the Symplectic Algebraic Monoid. It turns out that the Renner monoid of~${\bf SO}_{2\ell+1}$ is isomorphic to the Renner monoid of the Symplectic Algebraic Monoid~$\overline{\mathbb{K}^{\scriptscriptstyle \times} Sp_{2\ell}}$, and that we obtain the same presentation as in the latter case. \subsection{More examples: Adjoint Representations} Let~$G$ be a simple algebraic group, and denote by~$\mathfrak{g}$ its Lie algebra. Let $M$ be the algebraic monoid~$\overline{\mathbb{K}^{\scriptscriptstyle \times} Ad(G)}$ in $End(\mathfrak{g})$. The cross section lattice of~$M$ and the type map of~$M$ have been calculated for each Dynkin diagram (see~\cite[Sec.~7.4]{Ren}).
Therefore, one can deduce a monoid presentation for each of the associated Renner monoids. \subsection{More examples: $\mathcal{J}$-irreducible algebraic monoids} In~\cite{RePu}, Renner and Putcha consider among regular irreducible algebraic monoids those that are \emph{$\mathcal{J}$-irreducible}, that is, those whose cross section lattices have a unique minimal non-zero element. It is easy to see that the $\mathcal{J}$-irreducibility property is related to the existence of irreducible rational representations \cite[Prop.~4.2]{RePu}. Renner and Putcha determined the cross section lattices of those~$\mathcal{J}$-irreducible monoids that arise from special kinds of dominant weights~\cite[Fig.~2,3]{RePu}. Using~\cite[Theorem~4.13]{RePu}, one can deduce the associated type maps and therefore a monoid presentation of each corresponding Renner monoid. \section{A length function on $R(M)$} In this section, we extend the length function defined in~\cite{God} to any Renner monoid. Throughout this section, we assume that $M$ is a regular irreducible algebraic monoid with a zero element. We denote by~$G$ the unit group of $M$. We fix a maximal torus~$T$ of $G$ and we denote by $W$ the Weyl group~$N_G(T)/T$ of~$G$. We denote by $S$ the standard generating set associated with the canonical Coxeter structure of the Weyl group~$W$. We fix a Borel subgroup~$B$ of $G$ that contains $T$, and we denote by~$\Lambda$ the associated cross section lattice contained in~$E(R(M))$. As before, we set $\Lambdao = \Lambda-\{1\}$. \subsection{Minimal word representatives} The definition of the length function on $W$ and of a reduced word has been recalled in Section~\ref{sousectionrappelcgt}. If $X$ is a set, we denote by $X^*$ the set of finite words on $X$. \begin{definition}\label{deflenfun} (i) We set~$\ell(s) = 1$ for~$s$ in~$S$ and~$\ell(e) = 0$ for~$e$ in~$\Lambda$. Let~$x_1,\ldots, x_k$ be in~$S\cup\Lambdao$ and consider the word~$\omega = x_1\cdots x_k$.
Then, the \emph{length} of the word~$\omega$ is the integer~$\ell(\omega)$ defined by~$\ell(\omega) = \sum_{i = 1}^k\ell(x_i)$.\\ (ii) The \emph{length} of an element~$w$ which belongs to $R(M)$ is the integer~$\ell(w)$ defined by $$\ell(w) = \min\left\{\ell(\omega), \omega \in (S\cup\Lambdao)^*\mid \omega \textrm{ is a word representative of }w \right\}.$$ \end{definition} The following properties are direct consequences of the definition. \begin{Prop} \label{propproplenght}Let~$w$ belong to~$R(M)$.\\ (i) The length function~$\ell$ on~$R(M)$ extends the length function~$\ell$ defined on~$W$.\\(ii) If~$\ell(w) = 0$ then~$w$ lies in~$\Lambda$.\\(iii) If~$s$ lies in~$S$ then~$|\ell(sw)-\ell(w)|\leq 1$.\\(iv) If~$w'$ belongs to~$R(M)$, then~$\ell(ww')\leq \ell(w)+\ell(w')$.\end{Prop} \begin{proof} Points~$(i)$ and~$(ii)$ are clear (the letters of every representative word of an element in~$W$ are in~$S$). If~$w$ lies in~$R(M)$ and~$s$ lies in~$S$, then~$\ell (sw)\leq \ell(w)+1$. Since~$w = s^2w = s (sw)$, the inequality~$\ell(w)\leq \ell(sw)+1$ holds too. Point~$(iii)$ follows, and Point~$(iv)$ is a direct consequence of~$(iii)$. \end{proof} \begin{Prop}\label{propproplenght2} Let~$w$ belong to~$R(M)$.\\(i) If $(w_1,e,w_2)$ is the normal decomposition of $w$, then $\ell(w) = \ell(w_1)+\ell(w_2)$.\\(ii) If $\omega_1,\omega_2$ are two representative words of $w$ on $S\cup \Lambdao$ such that the equalities~$\ell(w) = \ell(\omega_1) = \ell(\omega_2)$ hold, then using the defining relations of the presentation of $R(M)$ in Proposition~\ref{thepreserennmonid}, we can transform $\omega_1$ into $\omega_2$ without increasing the length. \end{Prop} \begin{proof} (i) Let $\omega$ be a representative word of~$w$ on $S\cup \Lambdao$ such that $\ell(w) = \ell(\omega)$. It is clear that we can repeat the argument of the proof of Proposition~\ref{thepreserennmonid}, and that none of the transformations used there increases the length.
Therefore~$\ell(\omega)\geq \ell(w_1ew_2) = \ell(w_1)+\ell(w_2)\geq \ell(w)$.\\ (ii) is a direct consequence of the proof of $(i)$. \end{proof} \begin{Cor} Let $w$ lie in~$R(M)$ and let $e$ belong to~$\Lambdao$. Denote by $(w_1,f,w_2)$ the normal decomposition of $w$.\\ (i) One has~$\ell(we)\leq \ell(w)$ and~$\ell(ew)\leq \ell(w)$.\\ (ii) $\ell(we) = \ell(w)$ if and only if the normal decomposition of $we$ is $(w_1,e\land f, w_2)$. Furthermore, in this case, $w_2$ lies in $W^\star(e)$. \end{Cor} \begin{proof} (i) is a direct consequence of the definition of the length and of Proposition~\ref{propproplenght2}(i): $\ell(we) = \ell(w_1fw_2e)\leq \ell(w_1)+0+\ell(w_2)+0 = \ell(w)$. The same arguments prove that~$\ell(ew)\leq \ell(w)$.\\(ii) Decompose~$w_2$ as a product~$w'_2w''_2w'''_2$ where $w'''_2$ lies in $W_\star(f)$, $w''_2$ lies in $W^\star(f)$, $w'_2$ lies in $D(f)$ and $\ell(w_2) = \ell(w'_2)+\ell(w''_2)+ \ell(w'''_2)$. Then, $we = w_1fw_2e = w_1fw'_2ew''_2 = w_1(f\land_{w'_2}e)w''_2$. In particular, $\ell(we)\leq \ell(w_1)+\ell(w''_2)$. Assume $\ell(we) = \ell(w)$. We must have $w'_2 = w'''_2 = 1$. The element~$w''_2$, that is $w_2$, must belong to $D^\star(f\land_1 e) = D^\star(f\land e)$, and the element~$w_1$ must belong to $G^\star(f\land e)$. In particular, $w_2$ lies in $W^\star(e)$. Furthermore, $w_2$ lies in $D(f\land e)$ since $\lambda^\star(f\land e)\subseteq \lambda(f)$ by Proposition~\ref{propopo}(i). Conversely, if the normal decomposition of $we$ is $(w_1,e\land f, w_2)$, then $\ell(we) = \ell(w_1)+\ell(w_2) = \ell(w)$. \end{proof} \subsection{Geometrical formula} In Proposition~\ref{propgeominterpr} below we provide a geometrical formula for the length function~$\ell$ defined in the previous section. This formula naturally extends the geometrical definition of the length function on a Coxeter group. Another length function on Renner monoids has already been defined and investigated~\cite{Sol5,PePuRe,Ren}.
This length function has nice properties, which are similar to the ones in Propositions~\ref{propproplenght}, \ref{propproplenght2}(i) and~\ref{propgeominterpr}. This alternative length function was first introduced by Solomon~\cite{Sol5} in the special case of rook monoids in order to verify a combinatorial formula that generalizes the Rodrigues formula~\cite{Rod}. That is why we call this length function the \emph{Solomon length function} in the sequel. We proved in~\cite{God} that our length function for the rook monoid verifies the same combinatorial formula. We also proved in~\cite{God} that in the case of the rook monoid, our presentation of~$R(M)$ and our length function are related to the Hecke algebra. We believe this is still true in the general case. \begin{Lem} \label{lemtechimpo} Let~$w$ belong to~$R(M)$ and denote by~$(w_1,e,w_2)$ its normal decomposition. Let~$s$ be in~$S$.\\(i) We have one of the two following cases:\\\indent (a) there exists~$t$ in~$\lambda_\star (e)$ such that~$sw_1 = w_1t$. In this case, one has~$sw = w$;\\\indent (b) the element~$sw_1$ lies in~$D_\star(e)$ and~$(sw_1,e,w_2)$ is the normal decomposition of~$sw$.\\(ii) Denote by~$\tilde{l}$ the Solomon length function on~$R(M)$. Then, $$\ell(sw) - \ell(w) = \tilde{l}(sw) -\tilde{l}(w).$$ \end{Lem} \begin{proof}~$(i)$ If~$sw_1$ lies in~$D_\star(e)$, then by Theorem~\ref{fnrenner}, the triple~$(sw_1,e,w_2)$ is the normal decomposition of~$sw$. Assume now that~$sw_1$ does not belong to~$D_\star(e)$. In that case,~$e$ cannot be equal to~$1$. Since~$w_1$ belongs to~$D_\star(e)$, by the exchange lemma, there exists~$t$ in~$\lambda_\star(e)$ such that~$sw_1 = w_1t$.
Therefore,~$sw = sw_1ew_2 = w_1tew_2 = w_1ew_2= w$.\\ $(ii)$ The Solomon length~$\tilde{l}(w)$ of an element~$w$ in~$R(M)$ can be defined by the formula~$\tilde{l}(w) = \ell(w_1)-\ell(w_2)+\tilde{\ell}_e$ where~$(w_1,e,w_2)$ is the normal decomposition of~$w$ and~$\tilde{\ell}_e$ is a constant that depends on~$e$ only \cite[Definition~4.1]{PePuRe}. Therefore the result is a direct consequence of~$(i)$. \end{proof} As a direct consequence of Lemma~\ref{lemtechimpo}$(ii)$ and~\cite[Theorem~8.18]{Ren} we get Proposition~\ref{propintro}. \begin{Prop}\label{propgeominterpr} Let~$w$ belong to~$R(M)$, and~$(w_1,e,w_2)$ be its normal decomposition. Then $$\ell(w) = \dim(B w_1e B) - \dim(B ew_2 B).$$\end{Prop} When~$w$ lies in~$W$, one has~$e = w_2 = 1$, and we recover the well-known formula~$\ell(w) = \dim(BwB) - \dim(B)$. \begin{proof} By~\cite[Section~4]{PePuRe}, for every normal decomposition~$(v_1,f,v_2)$ we have the equality~$\dim(B v_1fv_2 B) = \ell(v_1)-\ell(v_2)+k_f$, where~$k_f$ is a constant that depends on~$f$ only. Therefore, $$\dim(B w_1e B) - \dim(B ew_2 B) = \ell(w_1)+k_e - (-\ell(w_2)+k_e) = \ell(w).$$ \end{proof} \end{document}
\begin{document} \title{On fractional calculus with general analytic kernels} \date{} \author[1,2]{Arran Fernandez\thanks{Corresponding author. Email: \texttt{[email protected]}}} \author[2]{Mehmet Ali \"Ozarslan\thanks{Email: \texttt{[email protected]}}} \author[3,4]{Dumitru Baleanu\thanks{Email: \texttt{[email protected]}}} \affil[1]{{\small Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Wilberforce Road, CB3 0WA, United Kingdom}} \affil[2]{{\small Department of Mathematics, Faculty of Arts and Sciences, Eastern Mediterranean University, Famagusta, Northern Cyprus, via Mersin-10, Turkey}} \affil[3]{{\small Department of Mathematics, Cankaya University, 06530 Balgat, Ankara, Turkey}} \affil[4]{{\small Institute of Space Sciences, Magurele-Bucharest, Romania}} \maketitle \begin{abstract} Many possible definitions have been proposed for fractional derivatives and integrals, starting from the classical Riemann--Liouville formula and generalising it by replacing the power function kernel with other kernel functions. We demonstrate, under some assumptions, how all of these modifications can be considered as special cases of a single, unifying, model of fractional calculus. We provide a fundamental connection with classical fractional calculus by writing these general fractional operators in terms of the original Riemann--Liouville fractional integral operator. We also consider inversion properties of the new operators, prove analogues of the Leibniz and chain rules in this model of fractional calculus, and solve some fractional differential equations using the new operators. \end{abstract} \section{Background} \label{sec:intro} In fractional calculus, we seek to extend the basic calculus operators of differentiation and integration, generalising the order of these operators beyond the integers to the real line or the complex plane.
The question of how to define, for example, the $\frac{1}{2}$th derivative of a function is one that has intrigued mathematicians and scientists for hundreds of years \cite{miller-ross,oldham-spanier}. Even today there is no single unique answer to this fundamental question; rather, many different definitions of fractional calculus have been proposed, starting from various viewpoints, and each one has its own advantages and disadvantages \cite{samko-kilbas-marichev,hristov2}. One of the most natural and popular models of fractional calculus is the \textbf{Riemann--Liouville} one \cite{miller-ross,oldham-spanier}. Here, the $\alpha$th fractional integral of a function $f$, with constant of integration $a$, is defined by \begin{equation} \label{RLdef:int} \prescript{RL}{}I_{a+}^{\alpha}f(t)\coloneqq\frac{1}{\Gamma(\alpha)}\int_a^t(t-\tau)^{\alpha-1}f(\tau)\,\mathrm{d}\tau,\quad\mathrm{Re}(\alpha)>0, \end{equation} while the $\alpha$th fractional derivative of $f$, again depending on a constant $a$, is defined by \begin{equation} \label{RLdef:deriv} \prescript{RL}{}D_{a+}^{\alpha}f(t)\coloneqq\frac{\mathrm{d}^m}{\mathrm{d}t^m}\Big(\prescript{RL}{}I_{a+}^{m-\alpha}f(t)\Big),\quad\mathrm{Re}(\alpha)\geq0,m\coloneqq\lfloor\mathrm{Re}(\alpha)\rfloor+1. \end{equation} The term ``differintegration'' is used to cover both integration and differentiation, which are now distinguished only by the sign of the real part of the order. The Riemann--Liouville model has found applications in many areas of science -- see for example \cite{bagley,hilfer,kilbas-srivastava-trujillo,magin,petras,podlubny} and the references therein. The definition \eqref{RLdef:deriv} of Riemann--Liouville fractional derivatives can be modified by interchanging the operations of differentiation and fractional integration. This gives rise to the \textbf{Caputo} model \cite{caputo}.
Here, fractional integrals are taken in the Riemann--Liouville sense \eqref{RLdef:int}, while the $\alpha$th fractional derivative of a function $f$, with constant of differintegration $a$, is defined by \begin{equation} \label{CAPdef:deriv} \prescript{C}{}D_{a+}^{\alpha}f(t)\coloneqq\prescript{RL}{}I_{a+}^{m-\alpha}\Big(\frac{\mathrm{d}^m}{\mathrm{d}t^m}f(t)\Big),\quad\mathrm{Re}(\alpha)\geq0,m\coloneqq\lfloor\mathrm{Re}(\alpha)\rfloor+1. \end{equation} The Caputo definition is often preferred for modelling initial value problems, although analyticity in the order of differintegration is lost \cite{podlubny,samko-kilbas-marichev}. One of the simplest and most efficient models of fractional calculus is due to Liouville \cite{liouville}. It was originally obtained for a particular set of functions, but it can encapsulate both Riemann--Liouville and Caputo derivatives as special cases. In recent years, researchers have proposed new fractional models based on replacing the power function kernel in \eqref{RLdef:int} by a different, singular or non-singular, kernel function. The motivation behind such proposals relates to the various real data corresponding to different complex systems requiring different kernels. For example, the \textbf{Atangana--Baleanu} (or AB) fractional model \cite{atangana-baleanu}, proposed in 2016, is based on replacing the power function kernel of \eqref{RLdef:int} by a non-singular function known to have strong connections with fractional calculus \cite{haubold-mathai-saxena,mathai-haubold}, namely the Mittag-Leffler function.
In this model, there are two ways of defining the $\alpha$th fractional derivative of a function $f$ with constant of differintegration $a$; these are referred to as the AB derivatives of Riemann--Liouville and Caputo type respectively, by comparison with \eqref{RLdef:deriv} and \eqref{CAPdef:deriv}: \begin{alignat}{2} \label{ABRdef:deriv} \prescript{ABR}{}D_{a+}^{\alpha}f(t)&\coloneqq\frac{B(\alpha)}{1-\alpha}\frac{\mathrm{d}}{\mathrm{d}t}\int_a^tE_{\alpha}\left(\frac{-\alpha}{1-\alpha}(t-\tau)^{\alpha}\right)f(\tau)\,\mathrm{d}\tau,&&\quad0<\alpha<1, \\ \label{ABCdef:deriv} \prescript{ABC}{}D_{a+}^{\alpha}f(t)&\coloneqq\frac{B(\alpha)}{1-\alpha}\int_a^tE_{\alpha}\left(\frac{-\alpha}{1-\alpha}(t-\tau)^{\alpha}\right)f'(\tau)\,\mathrm{d}\tau,&&\quad0<\alpha<1, \end{alignat} where the function $B$ satisfies $B(0)=B(1)=1$ and is often \cite{baleanu-fernandez} taken to be real and positive. Another recently proposed model of fractional calculus, called the \textbf{generalised proportional fractional} (or GPF) model, is based on the following fractional integral operator with two parameters: \begin{equation} \label{GPFdef:int} \prescript{GPF}{}I_{a+}^{\alpha,\rho}f(t)\coloneqq\frac{1}{\rho^{\alpha}\Gamma(\alpha)}\int_a^t\exp\left(\frac{\rho-1}{\rho}(t-\tau)\right)(t-\tau)^{\alpha-1}f(\tau)\,\mathrm{d}\tau,\quad0<\rho\leq1,\mathrm{Re}(\alpha)>0. \end{equation} This operator and its derivative were analysed in detail in \cite{jarad-abdeljawad-alzabut}. The above formulae can be viewed as special cases of the \textbf{Prabhakar} fractional model, which was introduced in 1971 \cite{prabhakar} for solving an integral equation, and later \cite{kilbas-saigo-saxena} interpreted as a fractional differintegral.
Here, the fractional integral of a function $f$ with constant of differintegration $a$, with parameters $\alpha,\beta,\omega,\rho$ determining the order, is defined by \begin{equation} \label{PRABdef:int} \prescript{P}{\beta,\omega}I_{a+}^{\alpha,\rho}f(t)\coloneqq\int_a^t(t-\tau)^{\alpha-1}E_{\beta,\alpha}^{\rho}\left[\omega(t-\tau)^{\beta}\right]f(\tau)\,\mathrm{d}\tau,\quad\mathrm{Re}(\alpha)>0,\mathrm{Re}(\beta)>0, \end{equation} while the fractional derivative of $f$ with the same parameters is defined similarly to \eqref{RLdef:deriv}, namely by \begin{equation} \label{PRABdef:deriv} \prescript{P}{\beta,\omega}D_{a+}^{\alpha,\rho}f(t)\coloneqq\frac{\mathrm{d}^m}{\mathrm{d}t^m}\left(\prescript{P}{\beta,\omega}I_{a+}^{m-\alpha,-\rho}f(t)\right),\quad\mathrm{Re}(\alpha)>0,\mathrm{Re}(\beta)>0,m\coloneqq\lfloor\mathrm{Re}(\alpha)\rfloor+1. \end{equation} The function $E_{\beta,\alpha}^{\rho}$ appearing in \eqref{PRABdef:int} is a generalisation of the Mittag-Leffler function defined by \begin{equation} \label{GMLdef} E_{\beta,\alpha}^{\rho}(x)\coloneqq\sum_{n=0}^{\infty}\frac{\Gamma(\rho+n)}{\Gamma(\rho)\Gamma(\beta n+\alpha)n!}x^n. \end{equation} Although it is older than some of the other models mentioned above, the Prabhakar formula went mostly unnoticed until recent years. But now it has begun to attract attention, and applications have been discovered, for example, in viscoelasticity \cite{giusti-colombaro} and stochastic processes \cite{gorenflo-kilbas-mainardi-rogosin}. The above covers only a few of the many definitions which have been proposed for fractional derivatives and integrals. There are other definitions dating back a hundred years or more \cite{samko-kilbas-marichev}, and more recently various generalisations of some of the above models have been proposed \cite{fernandez-baleanu,garra-gorenflo-polito-tomovski,ozarslan-ustaoglu1,ozarslan-ustaoglu2,srivastava-tomovski}.
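All of the operators above act in closed form on power functions; in particular, for the Riemann--Liouville integral \eqref{RLdef:int} one has the standard identity $\prescript{RL}{}I_{0+}^{\alpha}\,t^{\mu}=\frac{\Gamma(\mu+1)}{\Gamma(\mu+1+\alpha)}\,t^{\mu+\alpha}$. The following sketch (our illustration, not part of the original text) checks this identity numerically; the singularity-removing substitution $u=w^{1/\alpha}$ and the quadrature routine are our own choices.

```python
from math import gamma

def simpson(g, a, b, n=2000):
    # composite Simpson rule with n (even) subintervals
    h = (b - a) / n
    return (g(a) + g(b)
            + 4*sum(g(a + (2*k - 1)*h) for k in range(1, n//2 + 1))
            + 2*sum(g(a + 2*k*h) for k in range(1, n//2))) * h / 3

def rl_integral(f, t, alpha):
    # Riemann--Liouville integral I^alpha_{0+} f(t)
    #   = 1/Gamma(alpha) * int_0^t u^(alpha-1) f(t-u) du;
    # the substitution u = w**(1/alpha) removes the endpoint singularity,
    # giving 1/(alpha*Gamma(alpha)) * int_0^{t^alpha} f(t - w**(1/alpha)) dw
    g = lambda w: f(t - w**(1.0/alpha))
    return simpson(g, 0.0, t**alpha) / (alpha * gamma(alpha))

# closed form: I^alpha t^mu = Gamma(mu+1)/Gamma(mu+1+alpha) * t^(mu+alpha)
alpha, mu, t = 0.5, 1.0, 2.0
numeric = rl_integral(lambda x: x**mu, t, alpha)
closed = gamma(mu + 1)/gamma(mu + 1 + alpha) * t**(mu + alpha)
assert abs(numeric - closed) < 1e-6
```

For $\alpha=\tfrac12$ the substituted integrand is a polynomial in $w$, so the composite Simpson rule reproduces the closed form essentially to machine precision.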
The wealth of different definitions available for fractional derivatives and integrals has caused some researchers to propose criteria to determine whether a particular operator should be called a fractional derivative or not. But despite several attempts \cite{ross,ortigueira-machado,tarasov,caputo-fabrizio2}, so far there is no universally accepted set of criteria for this. \begin{question} Is it possible to define a \textit{general} class of fractional-calculus operators, which contains the already-existing operators as particular cases? This is a natural question to ask from a mathematical point of view: generalisation is always a key concept in mathematics. Armed with a general formalism which includes all the specific fractional-calculus operators, we can then attempt to prove results and establish a theory just for this general model, rather than proving similar results many times in many different models. A similar question has also been proposed by researchers working in applied sciences, e.g. engineers, from the point of view of real-world applications. They wish, in principle, to have a simple and efficient structure for fractional calculus -- if possible, just one single model, as in the classical case -- which can be used to model many different real-life processes. In our opinion, this fundamental question is still an open problem: currently, experimental data in several different contexts corresponds to several separate models of fractional calculus, but if these can be unified in a single framework, then that generalised framework is all we need. In any case, real data should be taken as the top criterion in validating a given fractional model. In this way, \textit{a priori} an equal chance is given to all fractional models but, finally, only one model is chosen as being the most efficient for a specific set of real data.
We are sure that by improving the numerical schemes it will be possible to see sharp differences between the diverse proposed fractional models, and to sharpen the criteria for what a fractional operator means. We believe that research in the field of fractional calculus is interdisciplinary and soon we will have improved fractional models in many fields of science and engineering. One direction of research in this area has been to add more indices and parameters, as we see for example with the multi-parameter Prabhakar model mentioned above. However, although these complex operators are very nice mathematically, the laws of nature are always simple. Therefore, in order to satisfy the concerns of both pure mathematics and applications, we have to find a compromise between the complex forms of generalised fractional operators and the simplicity of the laws of nature. \end{question} In the current work, we consider a general framework of operators, which includes many of the proposed models of fractional calculus mentioned above, and whose relevance to fractional calculus can be justified in a clear and objective manner. The extreme generality of our approach enables us to consider many types of fractional operators as special cases, but we are still able to adapt some of the standard tools of fractional calculus to our general operators. Previous researchers such as \cite{atanackovic-pilipovic-zorica,kochubei} have noticed that such a general framework exists, but they did not analyse it in enough detail to grasp its true power. Here, inspired by some existing results on the AB \cite{baleanu-fernandez} and Prabhakar \cite{giusti,fernandez-baleanu-srivastava} models, we demonstrate how our general model can be expressed solely in terms of the classical Riemann--Liouville operators \eqref{RLdef:int} by means of a series formula.
This confirms that our definition forms part of the field of fractional calculus, as well as enabling us to prove several theorems about it which are analogous to basic theorems in classical calculus. The organisation of this paper is as follows. In Section 2.1 we define a new general class of fractional integral operators and establish many of their fundamental properties. In Section 2.2 we consider how to define fractional derivatives in this general model. In Section \ref{sec:transODE} we prove some results on transforms and consider differential equations in the new model. In Section \ref{sec:prodchain} we establish Leibniz and chain rules in the new model. In Section \ref{sec:CauchyVolterra} we solve a general Cauchy problem in the new model. In Section \ref{sec:conclusions} we conclude the paper. \section{Definition and basic properties} \label{sec:main} \subsection{Fractional integrals} We propose the following definition for a new general class of fractional operators. \begin{definition} \label{E:defn} Let $[a,b]$ be a real interval, $\alpha$ and $\beta$ be complex parameters with non-negative real parts, and $R$ be a positive number satisfying $R>(b-a)^{\mathrm{Re}(\beta)}$. Let $A$ be a complex function analytic on the disc $D(0,R)$ and defined on this disc by the locally uniformly convergent power series \begin{equation} \label{A:entire} A(x)=\sum_{n=0}^{\infty}a_nx^n, \end{equation} where the coefficients $a_n=a_n(\alpha,\beta)$ are permitted to depend on $\alpha$ and $\beta$ if desired. We define the following fractional integral operator, acting on a function $f:[a,b]\rightarrow\mathbb{R}$ with properties to be determined later (for example, $f\in L^1[a,b]$ -- see Theorem \ref{L1:thm} below): \begin{align} \label{EIdef} \prescript{A}{}I_{a+}^{\alpha,\beta}f(t)\coloneqq \int_a^t(t-\tau)^{\alpha-1}A\left((t-\tau)^{\beta}\right)f(\tau)\,\mathrm{d}\tau.
\end{align} \end{definition} The formula \eqref{EIdef} is an extreme generalisation of the assortment of fractional models we considered above. In order for this new definition to be useful, we shall need to impose at least some further conditions on the function $A$. But before proceeding to such considerations, let us state formally how the existing fractional models can be seen as special cases of this new general model. \begin{remark} \label{E:cases} It has been demonstrated \cite{giusti,fernandez-baleanu-srivastava} that the Prabhakar kernel is general enough to include several other kernel functions of fractional calculus, including the AB one, as special cases. We now demonstrate that all of the fractional models mentioned above can be viewed as special cases of our new generalised model. For appropriate functions $f$ and parameters $a$, $\alpha$, $\beta$, etc., we have the following correspondences. The classical integer-order iterated integral is a special case given by: \begin{equation} \label{Eclassic:eqn} I^n_{a+}f(t)=\frac{1}{(n-1)!A(1)}\prescript{A}{}I_{a+}^{n,0}f(t), \end{equation} for an arbitrary choice of function $A$. Similarly, the RL integral \eqref{RLdef:int} is a special case given by: \begin{equation} \label{ERL:eqn1} \prescript{RL}{}I^{\alpha}_{a+}f(t)=\frac{1}{\Gamma(\alpha)A(1)}\prescript{A}{}I_{a+}^{\alpha,0}f(t), \end{equation} for an arbitrary choice of function $A$. However, it is often more useful to choose a specific function $A$, and the following choice seems most natural: \begin{align} \label{ERL:fns} A(x)&=\frac{1}{\Gamma(\alpha)}, \\ \label{ERL:eqn} \prescript{RL}{}I^{\alpha}_{a+}f(t)&=\prescript{A}{}I_{a+}^{\alpha,0}f(t). \end{align} An alternative expression for the RL integral is as follows: \begin{align} \label{ERL2:fns} A(x)&=\frac{1}{\Gamma(\alpha)}x, \\ \label{ERL2:eqn} \prescript{RL}{}I^{\alpha}_{a+}f(t)&=\prescript{A}{}I_{a+}^{1,\alpha-1}f(t). \end{align} But this seems less natural than the previous correspondence.
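The correspondences \eqref{ERL:eqn} and \eqref{ERL2:eqn} are easy to check numerically: both choices of $A$ must reproduce $\prescript{RL}{}I_{0+}^{\alpha}\,t=t^{1+\alpha}/\Gamma(2+\alpha)$. The sketch below (ours, not part of the original text) evaluates \eqref{EIdef} by composite Simpson quadrature; the smoothing substitution $u=s^{2}$ is our own choice and is adequate only for the mild endpoint behaviour of these particular parameters.

```python
from math import gamma

def simpson(g, a, b, n=2000):
    # composite Simpson rule with n (even) subintervals
    h = (b - a) / n
    return (g(a) + g(b)
            + 4*sum(g(a + (2*k - 1)*h) for k in range(1, n//2 + 1))
            + 2*sum(g(a + 2*k*h) for k in range(1, n//2))) * h / 3

def gen_integral(A, alpha, beta, f, t):
    # ^A I_{0+}^{alpha,beta} f(t) = int_0^t u^(alpha-1) A(u^beta) f(t-u) du,
    # computed after the substitution u = s^2, which softens the mild
    # endpoint behaviour at u = 0 for the parameter choices used below
    g = lambda s: 2*s * (s*s)**(alpha - 1.0) * A((s*s)**beta) * f(t - s*s)
    return simpson(g, 0.0, t**0.5)

alpha, t = 1.5, 2.0
f = lambda x: x                                  # test function f(t) = t
closed = t**(1.0 + alpha) / gamma(2.0 + alpha)   # RL integral of f

v1 = gen_integral(lambda x: 1.0/gamma(alpha), alpha, 0.0, f, t)      # (ERL:eqn)
v2 = gen_integral(lambda x: x/gamma(alpha), 1.0, alpha - 1.0, f, t)  # (ERL2:eqn)
assert abs(v1 - closed) < 1e-9 and abs(v2 - closed) < 1e-9
```

Both representations integrate the same kernel $(t-\tau)^{\alpha-1}/\Gamma(\alpha)$, so they agree with each other and with the closed form up to quadrature error.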
The ABR and ABC derivatives \eqref{ABRdef:deriv}--\eqref{ABCdef:deriv} are special cases given by: \begin{align} \label{EAB:fns} A(x)&=\frac{B(\alpha)}{1-\alpha}E_{\alpha}\left(\frac{-\alpha}{1-\alpha}x\right), \\ \label{EABR:eqn} \prescript{ABR}{}D_{a+}^{\alpha}f(t)&=\frac{\mathrm{d}}{\mathrm{d}t}\prescript{A}{}I_{a+}^{1,\alpha}f(t), \\ \label{EABC:eqn} \prescript{ABC}{}D_{a+}^{\alpha}f(t)&=\prescript{A}{}I_{a+}^{1,\alpha}f'(t). \end{align} The GPF integral \eqref{GPFdef:int} is a special case given by: \begin{align} \label{EGPF:fns} A(x)&=\frac{1}{\rho^{\alpha}\Gamma(\alpha)}\exp\left(\frac{\rho-1}{\rho}x\right), \\ \label{EGPF:eqn} \prescript{GPF}{}I_{a+}^{\alpha,\rho}f(t)&=\prescript{A}{}I_{a+}^{\alpha,1}f(t). \end{align} The Prabhakar integral \eqref{PRABdef:int} is a special case given by: \begin{align} \label{EPRAB:fns} A(x)&=E^{\rho}_{\beta,\alpha}(\omega x), \\ \label{EPRAB:eqn} \prescript{P}{\beta,\omega}I_{a+}^{\alpha,\rho}f(t)&=\prescript{A}{}I_{a+}^{\alpha,\beta}f(t). \end{align} We note that the multiplier functions which are used outside of the integral in the AB and GPF definitions can now be absorbed into the coefficients of the function $A$. This gives a cleaner expression for the fractional integral formula: in the definition \eqref{EIdef}, there is no need for any multiplier function outside of the integral. It is also important to note that our formulation is not general enough to cover \textit{all} models of fractional calculus which have ever been proposed. The field of fractional calculus is very broad and covers a wide variety of different types of operators. \end{remark} \begin{remark} There are some existing papers in the literature which propose definitions of fractional operators involving very general kernel functions -- see for example Kochubei \cite{kochubei}, Agrawal \cite{agrawal}, and Zhao and Luo \cite{zhao-luo}. Let us now compare these with our Definition \ref{E:defn}. 
In each of the previous definitions, a very general kernel function was used, yielding operators of the following essential form: \begin{equation} \label{previous} \int_a^tk(t-\tau,\alpha)f(\tau)\,\mathrm{d}\tau, \end{equation} where $a$ is a constant of integration, $\alpha$ is a parameter, and $k$ is a general kernel function satisfying some very mild conditions. The authors of the previous papers studied such operators in various ways: formulating and solving differential equations \cite{kochubei}, analysing variational problems \cite{agrawal}, and modelling heat conduction processes \cite{zhao-luo}. However, they were unable to prove in these models certain other properties which might be expected of fractional differintegrals, such as a Leibniz rule or composition properties of the operators. This is because their formulation is more general than ours: they expanded the scope of the definition so far that it no longer has a clear connection to fractional calculus. The operation defined by \eqref{previous} is essentially a convolution with the general function $k$. For certain choices of $k$, such a convolution operation does reduce to the well-known fractional differentiation and integration. But in general it is hard to describe the ``fractionality'' of the operators, or to see why the parameter $\alpha$ would represent the order of differentiation or integration. In our work, we have chosen the level of generality to be slightly lower in the definitions. Instead of taking a completely general kernel function $k$, we choose our kernel function to be a general analytic function of a fractional power. The explicit involvement of fractional power functions allows us to easily define an order for our fractional differintegrals. 
The assumption of analyticity (as we shall see below) enables us to prove identities which connect our operators firmly back to the classical fractional calculus, which in turn enables us to prove many important results analogous to those in the classical models. Although the previously defined operators such as \eqref{previous} are known to be useful in their own right, we believe that our slightly less general operators are more clearly part of fractional calculus. \end{remark} \begin{definition} \label{AGamma:defn} For any analytic function $A$ as in Definition \ref{E:defn}, we define $A_{\Gamma}$ to be the transformed function \begin{equation} \label{AGamma:eqn} A_{\Gamma}(x)=\sum_{n=0}^{\infty}a_n\Gamma(\beta n+\alpha)x^n. \end{equation} The relationship between the pair of functions $A$ and $A_{\Gamma}$ is vital to the understanding of our generalised operators. Although the operator $\prescript{A}{}I_{a+}^{\alpha,\beta}$ itself is most easily defined using the function $A$, many of the results concerning it are more elegant when written in terms of $A_{\Gamma}$ instead of $A$: for example, see Theorem \ref{Eseries:thm} and Theorem \ref{Laplace:thm}. By the ratio test and Stirling's approximation, the series \eqref{AGamma:eqn} for $A_{\Gamma}$ has radius of convergence given by \[\lim_{n\rightarrow\infty}\left|\frac{a_{n}}{a_{n+1}}(\beta n+\beta+\alpha)^{-\beta}\right|.\] Also by the ratio test, the radius of convergence of the series \eqref{A:entire} for $A$ is $\lim_{n\rightarrow\infty}\left|\frac{a_{n}}{a_{n+1}}\right|\geq R$. Thus we see that if the series for $A_{\Gamma}$ converges, then so does the series for $A$, but not vice versa. \end{definition} \begin{theorem} \label{L1:thm} With all notation as in Definition \ref{E:defn}, we have a well-defined bounded operator \[\prescript{A}{}I_{a+}^{\alpha,\beta}:L^1[a,b]\rightarrow L^1[a,b]\] for any fixed $\alpha$ and $\beta$ with $\mathrm{Re}(\alpha),\mathrm{Re}(\beta)\geq0$. 
\end{theorem} \begin{proof} First we prove that for any function $f\in L^1[a,b]$, the resulting function $\prescript{A}{}I_{a+}^{\alpha,\beta}f$ is also in $L^1[a,b]$. For this it will suffice to show that the definite absolute integral \[\int_a^b\left|\prescript{A}{}I_{a+}^{\alpha,\beta}f(t)\right|\,\mathrm{d}t\leq\int_a^b\int_a^t\left|(t-\tau)^{\alpha-1}A\left((t-\tau)^{\beta}\right)f(\tau)\right|\,\mathrm{d}\tau\,\mathrm{d}t\] is finite. By Fubini's theorem, this is equivalent to showing that \[\int_a^b\int_\tau^b\left|(t-\tau)^{\alpha-1}A\left((t-\tau)^{\beta}\right)f(\tau)\right|\,\mathrm{d}t\,\mathrm{d}\tau\] is finite. The latter double integral can be rearranged to \[\int_a^b|f(\tau)|\int_0^{b-\tau}\left|u^{\alpha-1}A(u^{\beta})\right|\,\mathrm{d}u\,\mathrm{d}\tau\leq\int_a^b|f(\tau)|\,\mathrm{d}\tau\int_0^{b-a}\left|u^{\alpha-1}A(u^{\beta})\right|\,\mathrm{d}u.\] Because $A$ is analytic on $D(0,R)$, the function $A(u^{\beta})$ is bounded on the finite interval $[0,b-a]$. And $f$ is an $L^1$ function, so the double integral is bounded as required. We have also now shown that \[\left\|\prescript{A}{}I_{a+}^{\alpha,\beta}f\right\|_1\leq\|f\|_1\int_0^{b-a}\left|u^{\alpha-1}A(u^{\beta})\right|\,\mathrm{d}u.\] This proves that $\prescript{A}{}I_{a+}^{\alpha,\beta}$ is a bounded operator on $L^1[a,b]$, with operator norm at most $\frac{(b-a)^{\mathrm{Re}(\alpha)}}{\mathrm{Re}(\alpha)}M$, where \[M\coloneqq\sup_{|x|<(b-a)^{\beta}}|A(x)|.\] Note that we have used the assumptions $\mathrm{Re}(\alpha),\mathrm{Re}(\beta)\geq0$ in order to know that the final integral is well-behaved near $u=0$.
\end{proof} \begin{theorem}[Series formula] \label{Eseries:thm} With all notation as in Definition \ref{E:defn}, for any function $f\in L^1[a,b]$, we have the following locally uniformly convergent series for $\prescript{A}{}I_{a+}^{\alpha,\beta}f$ as a function on $[a,b]$: \begin{equation} \label{Eseries:eqn} \prescript{A}{}I_{a+}^{\alpha,\beta}f(t)=\sum_{n=0}^{\infty}a_n\Gamma(\beta n+\alpha)\prescript{RL}{}I_{a+}^{\alpha+n\beta}f(t). \end{equation} Alternatively, this identity can be written more concisely in terms of the transformed function $A_{\Gamma}$ introduced in Definition \ref{AGamma:defn}: \begin{equation} \label{Eseries:AGamma} \prescript{A}{}I_{a+}^{\alpha,\beta}f(t)=A_{\Gamma}\left(\prescript{RL}{}I_{a+}^{\beta}\right)\prescript{RL}{}I_{a+}^{\alpha}f(t). \end{equation} \end{theorem} \begin{proof} We substitute the definition \eqref{A:entire} into the original formula \eqref{EIdef}: \begin{align*} \prescript{A}{}I_{a+}^{\alpha,\beta}f(t)&=\int_a^t(t-\tau)^{\alpha-1}\sum_{n=0}^{\infty}\left[a_n(t-\tau)^{\beta n}\right]f(\tau)\,\mathrm{d}\tau \\ &=\int_a^t\sum_{n=0}^{\infty}a_n(t-\tau)^{\beta n+\alpha-1}f(\tau)\,\mathrm{d}\tau. \end{align*} The series here is locally uniformly convergent, since $0\leq\left|(t-\tau)^{\beta}\right|\leq(b-a)^{\mathrm{Re}(\beta)}<R$ and the series \eqref{A:entire} for $A$ is assumed to be locally uniformly convergent on $D(0,R)\subset\mathbb{C}$. So the order of integration and summation can be swapped, to get: \begin{align*} \prescript{A}{}I_{a+}^{\alpha,\beta}f(t)&=\sum_{n=0}^{\infty}\int_a^ta_n(t-\tau)^{\beta n+\alpha-1}f(\tau)\,\mathrm{d}\tau \\ &=\sum_{n=0}^{\infty}a_n\Gamma(\beta n+\alpha)\left[\frac{1}{\Gamma(\beta n+\alpha)}\int_a^t(t-\tau)^{\beta n+\alpha-1}f(\tau)\,\mathrm{d}\tau\right]. \end{align*} By the definition \eqref{RLdef:int} of Riemann--Liouville fractional integrals, the expression in square brackets is precisely the $(\beta n+\alpha)$th RL integral of $f(t)$. And the result follows. 
\end{proof} The above two theorems are fundamental for the study of these general operators. Theorem \ref{L1:thm} establishes a domain of definition for the operator, namely the space of all $L^1$ functions on the interval. This is the same domain of definition that works for the RL \cite{samko-kilbas-marichev}, AB \cite{baleanu-fernandez}, and Prabhakar \cite{prabhakar} operators. Theorem \ref{Eseries:thm} establishes a way of expressing the general operators in terms of only the classical Riemann--Liouville fractional integrals, following the method used in \cite{baleanu-fernandez,fernandez-baleanu-srivastava}. This is the reason for our assumption that $A$ is analytic, i.e. has a convergent power series. It is immensely significant because, as well as cementing the position of these operators as part of fractional calculus, it also provides us with short-cuts to many useful theorems concerning them. Many well-known results on RL integrals can now be quickly extended to a much more general scenario. \begin{theorem} With all notation as in Definition \ref{E:defn}, we have a well-defined operator \[\prescript{A}{}I_{a+}^{\alpha,\beta}:C[a,b]\rightarrow C(a,b)\] for any fixed $\alpha$ and $\beta$ with $\mathrm{Re}(\alpha),\mathrm{Re}(\beta)\geq0$. \end{theorem} \begin{proof} First we note that, by Theorem \ref{L1:thm}, the operator $\prescript{A}{}I_{a+}^{\alpha,\beta}$ is well-defined on $C[a,b]$ since this function space is a subset of $L^1[a,b]$. If $f$ is continuous on the open interval $(a,b)$, then so is its Riemann--Liouville integral $\prescript{RL}{}I_{a+}^{\nu}f$ for any $\nu$ with positive real part \cite{samko-kilbas-marichev}. And the series \eqref{Eseries:eqn} is locally uniformly convergent. So we have an expression for $\prescript{A}{}I_{a+}^{\alpha,\beta}f(t)$ as a locally uniformly convergent series of functions in $C(a,b)$, which must itself be in $C(a,b)$ as required.
\end{proof} \begin{theorem} \label{RL:E:thm} Let $a$, $b$, $A$ be as in Definition \ref{E:defn}. For any $f\in L^1[a,b]$ and $\alpha,\beta,\gamma\in\mathbb{C}$ with non-negative real parts, the composition of Riemann--Liouville fractional integrals with our generalised operators is given by: \begin{align} \label{RL:E:eqn} \prescript{RL}{}I_{a+}^{\gamma}\circ\prescript{A}{}I_{a+}^{\alpha,\beta}f(t)=\prescript{A}{}I_{a+}^{\alpha,\beta}\circ\prescript{RL}{}I_{a+}^{\gamma}f(t)&=\prescript{A}{}I_{a+}^{\alpha+\gamma,\beta}f(t) \\ \nonumber &=A_{\Gamma}\left(\prescript{RL}{}I_{a+}^{\beta}\right)\prescript{RL}{}I_{a+}^{\alpha+\gamma}f(t). \end{align} \end{theorem} \begin{proof} This follows directly from the expressions \eqref{Eseries:eqn} and \eqref{Eseries:AGamma} for $\prescript{A}{}I_{a+}^{\alpha,\beta}f(t)$, using the semigroup property of Riemann--Liouville fractional integrals. \end{proof} Theorem \ref{RL:E:thm} is interesting because it may provide us with a way of solving fractional integro-differential equations that involve both Riemann--Liouville operators and our new generalised operators. We investigate such problems in more detail in Section \ref{sec:CauchyVolterra} below. \begin{theorem} \label{commutativity:thm} Let $a$, $b$, $A$ be as in Definition \ref{E:defn}. The set \[\left\{\prescript{A}{}I_{a+}^{\alpha,\beta}:\alpha,\beta\in\mathbb{C},\mathrm{Re}(\alpha)\geq0,\mathrm{Re}(\beta)\geq0\right\}\] forms a commutative family of operators on the function space $L^1[a,b]$. \end{theorem} \begin{proof} This follows directly from Theorem \ref{Eseries:thm} and the commutativity of Riemann--Liouville fractional integrals. 
Explicitly, we have: \begin{align*} \prescript{A}{}I_{a+}^{\alpha_1,\beta_1}&\circ\prescript{A}{}I_{a+}^{\alpha_2,\beta_2}f(t) \\ &=\sum_{n=0}^{\infty}a_n\Gamma(\beta_1 n+\alpha_1)\prescript{RL}{}I_{a+}^{\alpha_1+n\beta_1}\left[\sum_{m=0}^{\infty}a_m\Gamma(\beta_2 m+\alpha_2)\prescript{RL}{}I_{a+}^{\alpha_2+m\beta_2}f(t)\right] \\ &=\sum_{m,n}a_n\Gamma(\beta_1 n+\alpha_1)a_m\Gamma(\beta_2 m+\alpha_2)\prescript{RL}{}I_{a+}^{\alpha_1+n\beta_1}\circ\prescript{RL}{}I_{a+}^{\alpha_2+m\beta_2}f(t) \\ &=\sum_{m,n}a_m\Gamma(\beta_2 m+\alpha_2)a_n\Gamma(\beta_1 n+\alpha_1)\prescript{RL}{}I_{a+}^{\alpha_2+m\beta_2}\circ\prescript{RL}{}I_{a+}^{\alpha_1+n\beta_1}f(t) \\ &=\prescript{A}{}I_{a+}^{\alpha_2,\beta_2}\circ\prescript{A}{}I_{a+}^{\alpha_1,\beta_1}f(t). \end{align*} \end{proof} \begin{theorem}[Semigroup property in one parameter] \label{semigroup:thm} Let $a$, $b$, $A$ be as in Definition \ref{E:defn}, and fix $\alpha_1,\alpha_2,\beta\in\mathbb{C}$ with non-negative real parts. The semigroup property \[\prescript{A}{}I_{a+}^{\alpha_1,\beta}\circ\prescript{A}{}I_{a+}^{\alpha_2,\beta}f(t)=\prescript{A}{}I_{a+}^{\alpha_1+\alpha_2,\beta}f(t)\] is uniformly valid (regardless of $\alpha_1$, $\alpha_2$, $\beta$, and $f$) if and only if the following condition is satisfied for all non-negative integers $k$: \begin{align} \label{semigroup:condn} \sum_{m+n=k}a_n(\alpha_1,\beta)a_m(\alpha_2,\beta)B(\beta n+\alpha_1,\beta m+\alpha_2)=a_k(\alpha_1+\alpha_2,\beta). 
\end{align} \end{theorem} \begin{proof} We saw in the proof of Theorem \ref{commutativity:thm} that \begin{align*} \prescript{A}{}I_{a+}^{\alpha_1,\beta}\circ\prescript{A}{}I_{a+}^{\alpha_2,\beta}f(t)&=\sum_{m,n}a_n\Gamma(\beta n+\alpha_1)a_m\Gamma(\beta m+\alpha_2)\prescript{RL}{}I_{a+}^{\alpha_1+n\beta}\circ\prescript{RL}{}I_{a+}^{\alpha_2+m\beta}f(t) \\ &=\sum_{m,n}a_na_m\Gamma(\beta n+\alpha_1)\Gamma(\beta m+\alpha_2)\prescript{RL}{}I_{a+}^{\alpha_1+\alpha_2+(n+m)\beta}f(t) \\ &=\sum_{k=0}^{\infty}\left[\sum_{m+n=k}a_na_m\Gamma(\beta n+\alpha_1)\Gamma(\beta m+\alpha_2)\right]\prescript{RL}{}I_{a+}^{\alpha_1+\alpha_2+k\beta}f(t). \end{align*} Meanwhile, the series formula \eqref{Eseries:eqn} yields \[\prescript{A}{}I_{a+}^{\alpha_1+\alpha_2,\beta}f(t)=\sum_{k=0}^{\infty}a_k\Gamma(\beta k+\alpha_1+\alpha_2)\prescript{RL}{}I_{a+}^{\alpha_1+\alpha_2+k\beta}f(t).\] Clearly these two expressions are always equal if and only if \begin{equation} \label{semigroup:condn2} \sum_{m+n=k}a_n(\alpha_1,\beta)a_m(\alpha_2,\beta)\Gamma(\beta n+\alpha_1)\Gamma(\beta m+\alpha_2)=a_k(\alpha_1+\alpha_2,\beta)\Gamma(\beta k+\alpha_1+\alpha_2) \end{equation} for all $k=0,1,2,\dots$, which is equivalent to \eqref{semigroup:condn} by definition of the beta function. \end{proof} \begin{remark} Let us examine the condition \eqref{semigroup:condn2} with increasing values of $k$. For $k=0$, the equation becomes \[a_0(\alpha_1,\beta)a_0(\alpha_2,\beta)\Gamma(\alpha_1)\Gamma(\alpha_2)=a_0(\alpha_1+\alpha_2,\beta)\Gamma(\alpha_1+\alpha_2).\] Therefore we need $a_0(\alpha,\beta)=\frac{1}{\Gamma(\alpha)}$ for the equation to be uniformly valid. 
For $k=1$, the equation becomes \[a_0(\alpha_1,\beta)a_1(\alpha_2,\beta)\Gamma(\alpha_1)\Gamma(\beta+\alpha_2)+a_1(\alpha_1,\beta)a_0(\alpha_2,\beta)\Gamma(\beta+\alpha_1)\Gamma(\alpha_2)=a_1(\alpha_1+\alpha_2,\beta)\Gamma(\beta+\alpha_1+\alpha_2).\] After substituting the known expression for $a_0$, this becomes \[a_1(\alpha_2,\beta)\Gamma(\beta+\alpha_2)+a_1(\alpha_1,\beta)\Gamma(\beta+\alpha_1)=a_1(\alpha_1+\alpha_2,\beta)\Gamma(\beta+\alpha_1+\alpha_2).\] There are many possibilities for $a_1$ which would satisfy this identity. Note that the trivial solution $a_1=0$, followed by setting $a_2=0$, $a_3=0$, etc. to ensure that \eqref{semigroup:condn2} remains valid for all $k$, would yield precisely the Riemann--Liouville fractional model as specified by \eqref{ERL:fns}. \end{remark} \begin{theorem} \label{semigroup2:thm} Let $a$, $b$, $A$ be as in Definition \ref{E:defn}, and fix $\alpha_1,\alpha_2,\beta_1,\beta_2\in\mathbb{C}$ with non-negative real parts. The semigroup property \[\prescript{A}{}I_{a+}^{\alpha_1,\beta_1}\circ\prescript{A}{}I_{a+}^{\alpha_2,\beta_2}f(t)=\prescript{A}{}I_{a+}^{\alpha_1+\alpha_2,\beta_1+\beta_2}f(t)\] cannot be uniformly valid for arbitrary $\alpha_1$, $\alpha_2$, $\beta_1$, $\beta_2$, and $f$. \end{theorem} \begin{proof} Again we can use the composition formula found in the proof of Theorem \ref{commutativity:thm}: \begin{align*} \prescript{A}{}I_{a+}^{\alpha_1,\beta_1}\circ\prescript{A}{}I_{a+}^{\alpha_2,\beta_2}f(t)&=\sum_{m,n}a_n\Gamma(\beta_1 n+\alpha_1)a_m\Gamma(\beta_2 m+\alpha_2)\prescript{RL}{}I_{a+}^{\alpha_1+\alpha_2+ n\beta_1+m\beta_2}f(t) \\ \begin{split} &=\sum_{k=0}^{\infty}a_ka_k\Gamma(\beta_1 k+\alpha_1)\Gamma(\beta_2 k+\alpha_2)\prescript{RL}{}I_{a+}^{\alpha_1+\alpha_2+ k(\beta_1+\beta_2)}f(t) \\ &\hspace{2cm}+\sum_{m\neq n}a_n\Gamma(\beta_1 n+\alpha_1)a_m\Gamma(\beta_2 m+\alpha_2)\prescript{RL}{}I_{a+}^{\alpha_1+\alpha_2+ n\beta_1+m\beta_2}f(t). 
\end{split} \end{align*} Meanwhile, the series formula \eqref{Eseries:eqn} yields \[\prescript{A}{}I_{a+}^{\alpha_1+\alpha_2,\beta_1+\beta_2}f(t)=\sum_{k=0}^{\infty}a_k\Gamma((\beta_1+\beta_2)k+\alpha_1+\alpha_2)\prescript{RL}{}I_{a+}^{\alpha_1+\alpha_2+ k(\beta_1+\beta_2)}f(t).\] For uniform equality, we need to have both \[a_k(\alpha_1,\beta_1)a_k(\alpha_2,\beta_2)\Gamma(\beta_1 k+\alpha_1)\Gamma(\beta_2 k+\alpha_2)=a_k(\alpha_1+\alpha_2,\beta_1+\beta_2)\Gamma((\beta_1+\beta_2)k+\alpha_1+\alpha_2)\] for all $k\in\mathbb{Z}^+_0$ and also \[a_n(\alpha_1,\beta_1)\Gamma(\beta_1 n+\alpha_1)a_m(\alpha_2,\beta_2)\Gamma(\beta_2 m+\alpha_2)=0\] for all distinct $m,n\in\mathbb{Z}^+_0$. But the latter condition implies that at most one of the coefficients $a_n$ can be non-zero, so that $A$ reduces to a single monomial and $\prescript{A}{}I_{a+}^{\alpha,\beta}$ is merely a rescaled Riemann--Liouville integral; this makes the whole problem trivial. \end{proof} \begin{remark} The Prabhakar operator does have a semigroup property in two parameters \cite{prabhakar}, but these two parameters are -- in the notation of \eqref{EPRAB:fns} -- $\alpha$ and $\rho$, not $\alpha$ and $\beta$. So the result of Theorem \ref{semigroup2:thm} does not contradict this property of Prabhakar operators. \end{remark} \subsection{Fractional derivatives} In Definition \ref{E:defn} we saw a way to define fractional integrals with general analytic kernel functions. But several of the familiar special cases of this formula, such as the AB fractional derivatives \eqref{EABR:eqn}--\eqref{EABC:eqn}, required taking derivatives as well as applying the integral operator \eqref{EIdef}. This is natural, because defining fractional derivatives in terms of classical derivatives and fractional integrals has been a well-established practice starting from Riemann--Liouville \eqref{RLdef:deriv}. So it now makes sense to ask: given the fractional integral operator $\prescript{A}{}I_{a+}^{\alpha,\beta}$, how might we define a corresponding fractional differential operator?
Clearly we will have operators of both Riemann--Liouville and Caputo type, according to whether we apply the derivative inside or outside the integration. We guess that we should consider operators \begin{align} \label{ERdef} \prescript{A}{RL}D_{a+}^{\alpha,\beta}f(t)&=\frac{\mathrm{d}^m}{\mathrm{dt^m}}\left(\prescript{A}{}I_{a+}^{\alpha',\beta'}f(t)\right), \\ \label{ECdef} \prescript{A}{C}D_{a+}^{\alpha,\beta}f(t)&=\prescript{A}{}I_{a+}^{\alpha',\beta'}\left(\frac{\mathrm{d}^m}{\mathrm{dt^m}}f(t)\right), \end{align} where the natural number $m$ and the orders $\alpha'$, $\beta'$ depend on $\alpha$ and $\beta$. The question, then, is how these inter-variable dependences are defined. For the RL integral \eqref{ERL:eqn} and the Prabhakar integral \eqref{EPRAB:eqn}, we would use $\beta'=\beta$ and $m+\alpha'=\alpha$. For the AB derivatives \eqref{EABR:eqn} and \eqref{EABC:eqn}, we would use $\alpha'=\alpha$ and $m+\beta'=\beta$. What happens in the general case? We note the following series formula as a corollary of Theorem \ref{Eseries:thm}. \begin{corollary} For any $m\in\mathbb{N}$, with all notation as in Theorem \ref{Eseries:thm}, we have \begin{align} \label{Eseries:deriv} \frac{\mathrm{d}^m}{\mathrm{dt^m}}\left(\prescript{A}{}I_{a+}^{\alpha,\beta}f(t)\right)&=\sum_{n=0}^{\infty}a_n\Gamma(\beta n+\alpha)\prescript{RL}{}I_{a+}^{\alpha+n\beta-m}f(t) \\ \label{Eseries:deriv:AGamma}&=A_{\Gamma}\left(\prescript{RL}{}I_{a+}^{\beta}\right)\prescript{RL}{}I_{a+}^{\alpha-m}f(t). \end{align} \end{corollary} \begin{proof} This follows from \eqref{Eseries:eqn}, using the fact that any classical derivative of a Riemann--Liouville differintegral is another RL differintegral of the appropriate order \cite{kilbas-srivastava-trujillo,miller-ross}. \end{proof} Unfortunately, except in a few special cases, we are not able to treat the operator \eqref{Eseries:deriv} as an inverse of the integral operator \eqref{EIdef} with the same function $A$. 
To see why, consider the result of Theorems \ref{semigroup:thm} and \ref{semigroup2:thm}. We know that a semigroup property in two parameters is impossible, and a semigroup property in one parameter would preserve $\beta$. Therefore, if we want a statement of the form \[\frac{\mathrm{d}^m}{\mathrm{dt^m}}\circ\prescript{A}{}I_{a+}^{\alpha,\beta}\circ\prescript{A}{}I_{a+}^{\alpha',\beta'}f(t)=f(t)\] to be true, then we would need $\beta'=\beta$ and the above equation would reduce to \[\frac{\mathrm{d}^m}{\mathrm{dt^m}}\circ\prescript{A}{}I_{a+}^{\alpha+\alpha',\beta}f(t)=f(t).\] But this requires $\beta=0$, so that the operator on the left-hand side is trivial. And if $\beta=\beta'=0$, then the operators essentially reduce to the Riemann--Liouville fractional differintegrals. The reason why we \textit{are} able to get a left inverse in the form of \eqref{Eseries:deriv} for the Prabhakar operator \cite{kilbas-saigo-saxena} is that the function $A$ is changed slightly between the fractional integral \eqref{PRABdef:int} and the fractional derivative \eqref{PRABdef:deriv}: namely, by negating the extra parameter $\rho$ as well as changing the parameter $\alpha$. In general, there are two possible approaches which may be used in our new framework to construct a system of differintegral operators with well-defined derivatives, integrals, and an inversion relation: \begin{enumerate} \item \textbf{Approach 1.} We can define the fractional \textbf{integral} by an expression of the form \eqref{EIdef}. Then to define the fractional \textbf{derivative}, we need to find a different analytic function $\bar{A}$ which `complements' the original choice of $A$, in the sense that \[\prescript{\bar{A}}{RL}D_{a+}^{\alpha,\beta}\circ\prescript{A}{}I_{a+}^{\alpha,\beta}f=f.\] In order to find an appropriate $\bar{A}$, we can consider the series formula \eqref{Eseries:AGamma} and invert the function $A_{\Gamma}$. 
\item \textbf{Approach 2.} Alternatively, we can define the fractional \textbf{derivative} by an expression of the form \eqref{ERdef} or \eqref{ECdef}. Then to define the fractional \textbf{integral}, we need to find an inverse for the fractional differential operator thus defined. This was the approach used to construct the AB model \cite{atangana-baleanu}. \end{enumerate} In order to illustrate the general discussion above, we consider specific implementations of these ideas, which may be used to recover some of the fractional models we already know. Let us consider the Approach 1 described above, and try to find a condition on $\bar{A}$ in terms of $A$. We use the notation $\mathcal{I}=\prescript{A}{}I_{a+}^{\alpha,\beta}$ for some fixed $\alpha$, $\beta$, and analytic function $A(x)=\sum_{n=0}^{\infty}a_nx^n$. In other words, we define \[\mathcal{I}f(t)=\int_a^t(t-\tau)^{\alpha-1}A\left((t-\tau)^{\beta}\right)f(\tau)\,\mathrm{d}\tau.\] We also use the notation $\mathcal{D}=\prescript{\bar{A}}{RL}D_{a+}^{\alpha,\beta}$, as defined in \eqref{ERdef}, for some fixed $\alpha'$, $\beta'$, $m$, and analytic function $\bar{A}(x)=\sum_{n=0}^{\infty}\bar{a_n}x^n$. 
In other words, we define \[\mathcal{D}f(t)=\frac{\mathrm{d}^m}{\mathrm{dt^m}}\left(\int_a^t(t-\tau)^{\alpha'-1}\bar{A}\left((t-\tau)^{\beta'}\right)f(\tau)\,\mathrm{d}\tau\right).\] Let us check the conditions for $\mathcal{D}$ to be a left inverse of $\mathcal{I}$, using the expression \eqref{Eseries:AGamma} for these operators as series in terms of $A_{\Gamma}$: \begin{align*} \mathcal{D}\circ\mathcal{I}f(t)=f(t)&\Leftrightarrow\left[\bar{A}_{\Gamma}\left(\prescript{RL}{}I_{a+}^{\beta'}\right)\prescript{RL}{}I_{a+}^{\alpha'-m}\right]\circ\left[A_{\Gamma}\left(\prescript{RL}{}I_{a+}^{\beta}\right)\prescript{RL}{}I_{a+}^{\alpha}\right]f(t)=f(t) \\ &\Leftrightarrow\bar{A}_{\Gamma}\left(\prescript{RL}{}I_{a+}^{\beta'}\right)A_{\Gamma}\left(\prescript{RL}{}I_{a+}^{\beta}\right)\prescript{RL}{}I_{a+}^{\alpha+\alpha'-m}f(t)=f(t). \end{align*} So the following conditions will suffice to give us a well-defined left inverse operator to \eqref{EIdef}: \begin{equation} \label{inverse:condns} \alpha'=m-\alpha,\quad\quad\beta'=\beta,\quad\quad\bar{A}_{\Gamma}\cdot A_{\Gamma}=1. \end{equation} The above discussion can be formalised into the following definition for generalised fractional differentiation operators. \begin{definition} \label{Eder:defn} Let $a$, $b$, $\alpha$, $\beta$, and $A$ be as in Definition \ref{E:defn}. Using Approach 1 as discussed above, we can define the following fractional differential operators, of both Riemann--Liouville and Caputo type, acting on a function $f:[a,b]\rightarrow\mathbb{R}$ with sufficient differentiability properties. 
\begin{align} \label{Eder:defnRL} \prescript{A}{RL}D_{a+}^{\alpha,\beta}f(t)&=\frac{\mathrm{d}^m}{\mathrm{dt^m}}\left(\prescript{\bar{A}}{}I_{a+}^{m-\alpha,\beta}f(t)\right), \\ \label{Eder:defnC} \prescript{A}{C}D_{a+}^{\alpha,\beta}f(t)&=\prescript{\bar{A}}{}I_{a+}^{m-\alpha,\beta}\left(\frac{\mathrm{d}^m}{\mathrm{dt^m}}f(t)\right), \end{align} where the function $\bar{A}$ used on the right-hand side is defined such that $A_{\Gamma}(x)\cdot\bar{A}_{\Gamma}(x)=1$. (Here the $\Gamma$-transformed functions are as defined in Definition \ref{AGamma:defn}.) \end{definition} \begin{example} Let us consider the Prabhakar model. Here the fractional integral is defined using the function $A(x)=E^{\rho}_{\beta,\alpha}(\omega x)$, and the fractional derivative is defined by an expression of the form \eqref{ERdef} using $\alpha'=m-\alpha$, $\beta'=\beta$, and the function $\bar{A}(x)=E^{-\rho}_{\beta,m-\alpha}(\omega x)$. In order to verify that \eqref{inverse:condns} is valid, we just need to check the final part. Here we have: \begin{align*} A(x)&=E^{\rho}_{\beta,\alpha}(\omega x)=\sum_{n=0}^{\infty}\frac{\Gamma(\rho+n)\omega^n}{\Gamma(\rho)\Gamma(\beta n+\alpha)n!}x^n; \\ A_{\Gamma}(x)&=\sum_{n=0}^{\infty}\frac{\Gamma(\rho+n)\omega^n}{\Gamma(\rho)n!}x^n; \\ \bar{A}(x)&=E^{-\rho}_{\beta,m-\alpha}(\omega x)=\sum_{n=0}^{\infty}\frac{\Gamma(-\rho+n)\omega^n}{\Gamma(-\rho)\Gamma(\beta n+m-\alpha)n!}x^n; \\ \bar{A}_{\Gamma}(x)&=\sum_{n=0}^{\infty}\frac{\Gamma(-\rho+n)\omega^n}{\Gamma(-\rho)n!}x^n; \\ \bar{A}_{\Gamma}\cdot A_{\Gamma}(x)&=\sum_{n_1=0}^{\infty}\sum_{n_2=0}^{\infty}\frac{\Gamma(-\rho+n_1)\Gamma(\rho+n_2)\omega^{n_1+n_2}}{\Gamma(-\rho)\Gamma(\rho)n_1!n_2!}x^{n_1+n_2}=1, \end{align*} where in the last line we have used some basic properties of gamma functions \cite{fernandez-baleanu-srivastava} to get the desired result. 
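This cancellation can also be confirmed numerically. The following sketch (an illustrative check of ours, not part of the formal argument; the sample values $\rho=0.7$ and $\omega=1.3$ are assumptions) forms the Cauchy product of the power-series coefficients of $\bar{A}_{\Gamma}$ and $A_{\Gamma}$, using SciPy's \texttt{poch} for the Pochhammer symbol $\Gamma(\rho+n)/\Gamma(\rho)$:

```python
# Illustrative check (not part of the formal argument) that the
# Gamma-transformed Prabhakar kernels satisfy bar(A_Gamma) * A_Gamma = 1,
# by forming the Cauchy product of their power-series coefficients.
# The values rho = 0.7, omega = 1.3 are arbitrary sample choices.
import math
from scipy.special import poch   # Pochhammer symbol: poch(r, n) = Gamma(r+n)/Gamma(r)

rho, omega = 0.7, 1.3

def a_gamma(n):
    # coefficient of x^n in A_Gamma(x) = (1 - omega*x)^(-rho)
    return poch(rho, n) * omega**n / math.factorial(n)

def a_gamma_bar(n):
    # coefficient of x^n in bar(A_Gamma)(x) = (1 - omega*x)^(rho)
    return poch(-rho, n) * omega**n / math.factorial(n)

def product_coeff(k):
    # coefficient of x^k in the product bar(A_Gamma)(x) * A_Gamma(x)
    return sum(a_gamma_bar(j) * a_gamma(k - j) for j in range(k + 1))

coeffs = [product_coeff(k) for k in range(10)]
# expected: coeffs[0] = 1 and all higher coefficients vanish (to rounding error)
```

Every coefficient of $x^k$ with $k\geq1$ vanishes to rounding error, consistent with $\bar{A}_{\Gamma}(x)A_{\Gamma}(x)=(1-\omega x)^{\rho}(1-\omega x)^{-\rho}=1$.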
\end{example} Thus Approach 1, which was outlined above and formalised in Definition \ref{Eder:defn}, is a valid method for inverting our generalised fractional integral operators, which indeed yields the Prabhakar fractional derivative when we start from the Prabhakar fractional integral. We now demonstrate how Approach 2 may be used to derive the AB fractional integral when starting from the AB fractional derivative. \begin{example} In the second approach described above, we start from an operator of the form \begin{equation} \label{ex:AB:D1} \mathcal{D}f(t)=\frac{\mathrm{d}^m}{\mathrm{d}t^m}\left(\prescript{A}{}I_{a+}^{\alpha,\beta}f(t)\right)=A_{\Gamma}\left(\prescript{RL}{}I_{a+}^{\beta}\right)\prescript{RL}{}I_{a+}^{\alpha-m}f(t). \end{equation} To find the inverse of this operator, it would help to know the reciprocal of the function $A_{\Gamma}$. As a basic example, let us consider the case where $A_{\Gamma}(x)=(1-x)^{-1}$. This yields the following choices of function: \begin{align*} A_{\Gamma}(x)&=(1-x)^{-1}=\sum_{n=0}^{\infty}x^n; \\ A(x)&=\sum_{n=0}^{\infty}\frac{x^n}{\Gamma(\beta n+\alpha)}=E_{\beta,\alpha}(x). \end{align*} Setting $\alpha=m=1$ for simplicity, we find that \begin{align} \nonumber \mathcal{D}f(t)&=\frac{\mathrm{d}}{\mathrm{d}t}\left(\int_a^t(t-\tau)^{\alpha-1}A\left((t-\tau)^{\beta}\right)f(\tau)\,\mathrm{d}\tau\right) \\ \label{ex:AB:D2} &=\frac{\mathrm{d}}{\mathrm{d}t}\left(\int_a^tE_{\beta,\alpha}\left((t-\tau)^{\beta}\right)f(\tau)\,\mathrm{d}\tau\right). \end{align} For the inverse operator, we can see from \eqref{ex:AB:D1} that it should be given by \begin{equation} \label{ex:AB:I} \mathcal{I}f(t)=\left(1-\prescript{RL}{}I_{a+}^{\beta}\right)f(t). \end{equation} Now the formulae \eqref{ex:AB:D2} and \eqref{ex:AB:I}, up to some multiplicative constants, are precisely the definitions for the AB fractional derivative of Riemann--Liouville type and the AB fractional integral.
Thus we have re-derived the AB model of fractional calculus, including the inversion properties of AB differintegrals, as a special case of our more general methodology. \end{example} \begin{remark} The above discussion illustrates the difference in structure between the AB and Prabhakar models of fractional calculus. Although the fractional derivatives look similar in both, the inversion relation between fractional integrals and derivatives is quite different. \end{remark} \section{Transforms and differential equations} \label{sec:transODE} Fourier and Laplace transforms for our generalised operators could be computed from the definition \eqref{EIdef} using the convolution property. However, to get a usable expression by this means, we would need to know how to transform the function $A$ directly. It is more straightforward to find formulae for the transformed functions using the series formula of Theorem \ref{Eseries:thm}. \begin{theorem} \label{Laplace:thm} Let $a=0$, $b>0$, and $\alpha$, $\beta$, $A$ be as in Definition \ref{E:defn}, and let $f\in L^2[a,b]$ with Laplace transform $\hat{f}$. The function $\prescript{A}{}I_{0+}^{\alpha,\beta}f(t)$ has a Laplace transform given by the following formula: \begin{equation} \label{Laplace:eqn} \widehat{\prescript{A}{}I_{0+}^{\alpha,\beta}f}(s)=s^{-\alpha}A_{\Gamma}(s^{-\beta})\hat{f}(s), \end{equation} where the function $A_{\Gamma}$ is given in Definition \ref{AGamma:defn}. \end{theorem} \begin{proof} We start from the series formula \eqref{Eseries:eqn}, recalling the uniform convergence of the series there: \begin{align*} \widehat{\prescript{A}{}I_{0+}^{\alpha,\beta}f}(s)=\sum_{n=0}^{\infty}a_n\Gamma(\beta n+\alpha)\widehat{\prescript{RL}{}I_{0+}^{\alpha+n\beta}f}(s). 
\end{align*} The Laplace transforms of Riemann--Liouville integrals are well-known \cite{miller-ross,samko-kilbas-marichev}, and so we get: \begin{align*} \widehat{\prescript{A}{}I_{0+}^{\alpha,\beta}f}(s)&=\sum_{n=0}^{\infty}a_n\Gamma(\beta n+\alpha)s^{-\alpha-n\beta}\hat{f}(s) \\ &=s^{-\alpha}\hat{f}(s)\sum_{n=0}^{\infty}a_n\Gamma(\beta n+\alpha)s^{-n\beta}, \end{align*} as required. \end{proof} \begin{theorem} \label{Fourier:thm} Let $a=-\infty$, $b\in\mathbb{R}$, and $\alpha$, $\beta$, $A$ be as in Definition \ref{E:defn}, and let $f\in L^2[a,b]$ with Fourier transform $\tilde{f}$. The function $\prescript{A}{}I_{+}^{\alpha,\beta}f(t)$ has a Fourier transform given by the following formula: \begin{equation} \label{Fourier:eqn} \widetilde{\prescript{A}{}I_{+}^{\alpha,\beta}f}(k)=k^{-\alpha}e^{i\alpha\pi/2}A_{\Gamma}(k^{-\beta}e^{i\beta\pi/2})\tilde{f}(k), \end{equation} where the function $A_{\Gamma}$ is given in Definition \ref{AGamma:defn}. \end{theorem} \begin{proof} We start from the series formula \eqref{Eseries:eqn}, recalling the uniform convergence of the series there: \begin{align*} \widetilde{\prescript{A}{}I_{+}^{\alpha,\beta}f}(k)=\sum_{n=0}^{\infty}a_n\Gamma(\beta n+\alpha)\widetilde{\prescript{RL}{}I_{+}^{\alpha+n\beta}f}(k). \end{align*} The Fourier transforms of Riemann--Liouville integrals are well-known \cite{samko-kilbas-marichev}, and so we get: \begin{align*} \widetilde{\prescript{A}{}I_{+}^{\alpha,\beta}f}(k)&=\sum_{n=0}^{\infty}a_n\Gamma(\beta n+\alpha)(-ik)^{-\alpha-n\beta}\tilde{f}(k) \\ &=(-ik)^{-\alpha}\tilde{f}(k)\sum_{n=0}^{\infty}a_n\Gamma(\beta n+\alpha)(-ik)^{-n\beta}, \end{align*} as required, since $(-ik)^{-\nu}=k^{-\nu}e^{i\nu\pi/2}$ for $k>0$. \end{proof} Given the above two theorems, we can now attempt to solve some differintegral equations within the framework of the generalised operators. The following result demonstrates a basic example of how this would work.
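Theorem \ref{Laplace:thm} can be checked numerically in a concrete case. The sketch below (ours, for illustration only; the orders $\alpha=0.6$, $\beta=0.4$ and the point $s=2$ are arbitrary sample choices) takes $a=0$, $A(x)=e^x$ (so that $a_n=1/n!$) and $f\equiv1$; by the series formula \eqref{Eseries:eqn}, $\prescript{A}{}I_{0+}^{\alpha,\beta}f(t)=\sum_n t^{\alpha+n\beta}/(n!\,(\alpha+n\beta))$, whose Laplace transform is computed by direct quadrature and compared with the right-hand side of \eqref{Laplace:eqn}:

```python
# Illustrative numerical check (not from the formal development) of the
# Laplace-transform formula, with a = 0, A(x) = e^x (so a_n = 1/n!) and f(t) = 1.
# The series formula gives  I f(t) = sum_n t^(alpha+n*beta) / (n! * (alpha+n*beta)),
# and the theorem claims its transform equals s^(-alpha) * A_Gamma(s^(-beta)) / s.
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

alpha, beta, s, N = 0.6, 0.4, 2.0, 60   # sample orders, Laplace variable, truncation
n = np.arange(N)

def If(t):
    # generalised fractional integral of f = 1, via the (truncated) series formula
    return np.sum(t**(alpha + n*beta) / (gamma(n + 1) * (alpha + n*beta)))

# left-hand side: Laplace transform of If, computed by direct quadrature
lhs, _ = quad(lambda t: np.exp(-s*t) * If(t), 0, np.inf)

# right-hand side: s^(-alpha) * A_Gamma(s^(-beta)) * fhat(s), with fhat(s) = 1/s
A_Gamma = np.sum(gamma(beta*n + alpha) * s**(-beta*n) / gamma(n + 1))
rhs = s**(-alpha) * A_Gamma / s
```

The two values agree to quadrature accuracy, as \eqref{Laplace:eqn} predicts.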
\begin{theorem} Let $a=0$, $b>0$, and $\alpha$, $\beta$, $A$ be as in Definition \ref{E:defn}, and let $c\in\mathbb{R}\setminus\{0\}$ and $g\in L^2[a,b]$. The ordinary fractional integral equation \begin{equation} \label{ODE1} \prescript{A}{}I_{0+}^{\alpha,\beta}f(t)+cf(t)=g(t),\quad\quad f(0)=\frac{g(0)}{c}, \end{equation} has a unique solution $f\in L^2[a,b]$. \end{theorem} \begin{proof} We apply Laplace transforms to the equation \eqref{ODE1} and use the result of Theorem \ref{Laplace:thm}: \begin{align*} \hat{g}(s)&=\widehat{\prescript{A}{}I_{0+}^{\alpha,\beta}f}(s)+c\hat{f}(s) \\ &=s^{-\alpha}A_{\Gamma}(s^{-\beta})\hat{f}(s)+c\hat{f}(s). \end{align*} So we have an explicit expression for the Laplace transform of $f$, namely: \[\hat{f}(s)=\frac{\hat{g}(s)}{s^{-\alpha}A_{\Gamma}(s^{-\beta})+c}.\] This has a unique inverse Laplace transform, giving a unique solution function $f$. We assume the initial condition which is necessary for the equation itself to be consistent at $t=0$. \end{proof} \section{Leibniz rule and chain rule} \label{sec:prodchain} Two fundamental results in any first course on calculus are the Leibniz rule (or product rule) and the chain rule. Analogues of these results in fractional calculus are well known in the Riemann--Liouville model \cite{miller-ross,podlubny,osler1} and have also been developed in other contexts such as the AB and Prabhakar models \cite{baleanu-fernandez,fernandez-baleanu-srivastava}. It turns out that our proposed model, despite its generality, is still sufficiently close to the classical fractional models that we can prove such results in this context too.
\begin{theorem}[Generalised Leibniz rule] \label{Leibniz:thm} If $f\in C[a,b]$ and $g\in C^{\infty}[a,b]$, then for any $\alpha,\beta\in\mathbb{C}$ with non-negative real parts, we have: \begin{equation} \label{Leibniz:eqn} \prescript{A}{}I^{\alpha,\beta}_{a+}\big(f(t)g(t)\big)=\sum_{m=0}^{\infty}\frac{\mathrm{d}^mg}{\mathrm{d}t^m}(t)\sum_{n=0}^{\infty}a_n\Gamma(\beta n+\alpha)\binom{-\alpha-n\beta}{m}\prescript{RL}{}D^{-\alpha-n\beta-m}_{a+}f(t). \end{equation} \end{theorem} \begin{proof} Our starting point is the following Leibniz rule analogue for Riemann--Liouville differintegrals, which is proved in \cite[Eq. (2.199)]{podlubny}: \begin{equation} \label{Leibniz:RL} \prescript{RL}{}D^{\alpha}_{a+}\big(f(t)g(t)\big)=\sum_{m=0}^N\binom{\alpha}{m}\prescript{RL}{}D^{\alpha-m}_{a+}f(t)\prescript{RL}{}D^{m}g(t)-R_{N,\alpha}(t), \end{equation} where $N\geq\mathrm{Re}(\alpha)+1$ and the remainder term is defined by \[R_{N,\alpha}(t)\coloneqq\frac{1}{N!\Gamma(-\alpha)}\int_a^t(t-\tau)^{-\alpha-1}f(\tau)\int_{\tau}^t\prescript{RL}{}D^{N+1}g(u)(t-u)^N\,\mathrm{d}u\,\mathrm{d}\tau.\] In \cite{podlubny} it is shown that \[\lim_{N\rightarrow\infty}R_{N,\alpha}(t)=0,\] and related results are established in \cite{baleanu-fernandez,fernandez-baleanu-srivastava} to prove the convergence of the relevant infinite series. 
Here, we put together the Leibniz rule \eqref{Leibniz:RL} with the series formula \eqref{Eseries:eqn} for the generalised operators: \begin{align*} \prescript{A}{}I^{\alpha,\beta}_{a+}\big(f(t)g(t)\big)&=\sum_{n=0}^{\infty}a_n\Gamma(\beta n+\alpha)\prescript{RL}{}D_{a+}^{-\alpha-n\beta}\big(f(t)g(t)\big) \\ &=\sum_{n=0}^{\infty}a_n\Gamma(\beta n+\alpha)\left[\sum_{m=0}^N\binom{-\alpha-n\beta}{m}\prescript{RL}{}D^{-\alpha-n\beta-m}_{a+}f(t)\prescript{RL}{}D^{m}g(t)-R_{N,-\alpha-n\beta}(t)\right] \\ \begin{split} &=\sum_{n=0}^{\infty}\sum_{m=0}^Na_n\Gamma(\beta n+\alpha)\binom{-\alpha-n\beta}{m}\prescript{RL}{}D^{-\alpha-n\beta-m}_{a+}f(t)\prescript{RL}{}D^{m}g(t) \\ &\hspace{7cm}-\sum_{n=0}^{\infty}a_n\Gamma(\beta n+\alpha)R_{N,-\alpha-n\beta}(t). \end{split} \end{align*} We know by Theorem \ref{Eseries:thm} that the summation over $n$ is locally uniformly convergent. So the summations over $m$ and $n$ can be swapped, and it remains to prove that \[\lim_{N\rightarrow\infty}\sum_{n=0}^{\infty}a_n\Gamma(\beta n+\alpha)R_{N,-\alpha-n\beta}(t)=0.\] By a change of variables in the double integral as in \cite[Eq. (2.201)]{podlubny}, we find the following expression for $R$, which was also written in \cite{fernandez-baleanu-srivastava}: \[R_{N,-\alpha-n\beta}(t)=\frac{(-1)^N(t-a)^{N+\alpha+n\beta+1}}{N!\Gamma(\alpha+n\beta)}\int_0^1\int_0^1f\big(a+p(t-a)\big)\prescript{RL}{}D^{N+1}g\big(a+(p+q-pq)(t-a)\big)\,\mathrm{d}p\,\mathrm{d}q.\] We insert this formula into the series for which we need to find the limit, and note that the gamma functions cancel precisely with each other, so that the sum over $n$ collapses to the kernel function $A$: \begin{multline*} \sum_{n=0}^{\infty}a_n\Gamma(\beta n+\alpha)R_{N,-\alpha-n\beta}(t) \\ =\frac{(-1)^N}{N!}(t-a)^{N+\alpha+1}A\left((t-a)^{\beta}\right)\int_0^1\int_0^1f\big(a+p(t-a)\big)\prescript{RL}{}D^{N+1}g\big(a+(p+q-pq)(t-a)\big)\,\mathrm{d}p\,\mathrm{d}q. \end{multline*} And this expression tends to zero as $N\rightarrow\infty$, by the same argument as in \cite{podlubny}.
Thus, the proof is complete. \end{proof} \begin{example} To illustrate our results, we apply the above Theorem \ref{Leibniz:thm} to the function $te^{kt}$. Taking $f(t)=e^{kt}$, $g(t)=t$, and $a=-\infty$ (with $\mathrm{Re}(k)>0$ so that the integrals converge) in the general identity \eqref{Leibniz:eqn} yields the following: \begin{align*} \prescript{A}{}I^{\alpha,\beta}_{a+}\big(te^{kt}\big)&=\sum_{m=0}^{1}\frac{\mathrm{d}^m}{\mathrm{d}t^m}(t)\sum_{n=0}^{\infty}a_n\Gamma(\beta n+\alpha)\binom{-\alpha-n\beta}{m}\prescript{RL}{}D^{-\alpha-n\beta-m}_{+}(e^{kt}) \\ &=\sum_{m=0}^{1}t^{1-m}\sum_{n=0}^{\infty}a_n\Gamma(\alpha+n\beta)\binom{-\alpha-n\beta}{m}k^{-\alpha-n\beta-m}e^{kt} \\ &=t\sum_{n=0}^{\infty}a_n\Gamma(\alpha+n\beta)k^{-\alpha-n\beta}e^{kt}+\sum_{n=0}^{\infty}a_n\Gamma(\alpha+n\beta)(-\alpha-n\beta)k^{-\alpha-n\beta-1}e^{kt} \\ &=\sum_{n=0}^{\infty}a_n\Gamma(\alpha+n\beta)k^{-\alpha-n\beta}e^{kt}\big[t-(\alpha+n\beta)/k\big], \end{align*} where we have used the fact that $\prescript{RL}{}D^{\mu}_{+}(e^{kt})=k^{\mu}e^{kt}$ for $\mathrm{Re}(k)>0$. Thus, we have an explicit series expression for the fractional differintegral of the function $te^{kt}$, for any $k\in\mathbb{C}$ with positive real part and in any model of fractional calculus which fits into the general framework we have established. \end{example} \begin{theorem}[Generalised chain rule] \label{chain:thm} If $f,g\in C^{\infty}[a,b]$ and $\alpha,\beta\in\mathbb{C}$ with non-negative real parts, then \begin{equation} \label{chain:eqn} \prescript{A}{}I^{\alpha,\beta}_{a+}\big(f(g(t))\big)=\sum_{m=0}^{\infty}\sum_{n=0}^{\infty}\frac{a_n(-1)^mt^{\alpha+n\beta+m}}{m!(\alpha+n\beta+m)}\sum_{r=1}^m\frac{\mathrm{d}^rf(g(t))}{\mathrm{d}g(t)^r}\sum_{(r_1,\dots,r_m)}\Bigg[\prod_{j=1}^m\tfrac{j}{r_j!(j!)^{r_j}}\Big(\tfrac{\mathrm{d}^jg(t)}{\mathrm{d}t^j}\Big)^{r_j}\Bigg], \end{equation} where for any fixed $m$ and $r$ with $1\leq r\leq m$, the summation over $(r_1,\dots,r_m)$ is over all such $m$-tuples which satisfy $\sum_jr_j=r$ and $\sum_jjr_j=m$. \end{theorem} \begin{proof} We use the result of the previous theorem.
Substituting $\mathbbm{1}(t)=1$ instead of $f(t)$ and $f(g(t))$ instead of $g(t)$ in \eqref{Leibniz:eqn}, we find: \begin{align*} \prescript{A}{}I^{\alpha,\beta}_{a+}\big(f(g(t))\big)&=\sum_{m=0}^{\infty}\frac{\mathrm{d}^m}{\mathrm{d}t^m}f(g(t))\sum_{n=0}^{\infty}a_n\Gamma(\beta n+\alpha)\binom{-\alpha-n\beta}{m}\prescript{RL}{}D^{-\alpha-n\beta-m}_{a+}\mathbbm{1}(t) \\ &=\sum_{m=0}^{\infty}\frac{\mathrm{d}^m}{\mathrm{d}t^m}f(g(t))\sum_{n=0}^{\infty}a_n\Gamma(\beta n+\alpha)\binom{-\alpha-n\beta}{m}\frac{t^{\alpha+n\beta+m}}{\Gamma(\beta n+\alpha+m+1)} \\ &=\sum_{m=0}^{\infty}\frac{\mathrm{d}^m}{\mathrm{d}t^m}f(g(t))\sum_{n=0}^{\infty}a_n\Gamma(\beta n+\alpha)\frac{\Gamma(1-\alpha-n\beta)}{m!\Gamma(1-\alpha-n\beta-m)}\cdot\frac{t^{\alpha+n\beta+m}}{\Gamma(\beta n+\alpha+m+1)} \\ &=\sum_{m=0}^{\infty}\frac{\mathrm{d}^m}{\mathrm{d}t^m}f(g(t))\sum_{n=0}^{\infty}\frac{a_n\sin(\pi(\alpha+n\beta+m))t^{\alpha+n\beta+m}}{m!(\alpha+n\beta+m)\sin(\pi(\alpha+n\beta))} \\ &=\sum_{m=0}^{\infty}\frac{\mathrm{d}^m}{\mathrm{d}t^m}f(g(t))\sum_{n=0}^{\infty}\frac{a_n(-1)^mt^{\alpha+n\beta+m}}{m!(\alpha+n\beta+m)}, \end{align*} where we have used the reflection formula $\Gamma(z)\Gamma(1-z)=\frac{\pi}{\sin\pi z}$ for the gamma function. We note that once again all the gamma functions cancel out precisely with each other. 
Now the classical Fa\`a di Bruno formula can be applied to the function $\frac{\mathrm{d}^m}{\mathrm{d}t^m}f(g(t))$: we know that \begin{equation} \label{FaadiBruno} \frac{\mathrm{d}^m}{\mathrm{d}t^m}f(g(t))=\sum_{r=1}^m\frac{\mathrm{d}^rf(g(t))}{\mathrm{d}g(t)^r}\sum_{(r_1,\dots,r_m)}\Bigg[\prod_{j=1}^m\tfrac{j}{r_j!(j!)^{r_j}}\Big(\tfrac{\mathrm{d}^jg(t)}{\mathrm{d}t^j}\Big)^{r_j}\Bigg] \end{equation} where the summation over $(r_1,\dots,r_m)$ is over the set \[\Big\{(r_1,\dots,r_m)\in\left(\mathbb{Z}^+_0\right)^m:\sum_jr_j=r,\sum_jjr_j=m\Big\}.\] Now the result follows by substituting \eqref{FaadiBruno} into our expression for $\prescript{A}{}I^{\alpha,\beta}_{a+}\big(f(g(t))\big)$. \end{proof} Theorems \ref{Leibniz:thm} and \ref{chain:thm} can be used to compute the application of our generalised operators to a wide range of functions which can be generated from elementary ones (e.g. power and exponential functions) by multiplication and composition. \section{The solution of a Cauchy problem using Volterra integral equations} \label{sec:CauchyVolterra} In this section, we shall consider a generalised ordinary differintegral equation of the following form: \begin{equation} \label{Cauchy:eqn} \prescript{RL}{}D_{a+}^{\gamma}u(t)=\prescript{A}{}I_{a+}^{\alpha,\beta}f(t,u(t)), \end{equation} with some appropriate initial conditions to be specified later. Note that the expression $f(t,u(t))$ is a function of $t$, and therefore our generalised operator can be applied in \eqref{Cauchy:eqn} with respect to $t$. We first state the following equivalence between the Cauchy problem defined by the differintegral equation \eqref{Cauchy:eqn} and a Volterra integral equation, the latter of which we shall then proceed to solve. \begin{lemma} \label{equiv} Let $a,b,A$ be as in Definition \ref{E:defn}, and fix $\alpha,\beta,\gamma\in\mathbb{C}$ with non-negative real parts. Define $n=\lceil\mathrm{Re}(\gamma)\rceil$ and let $C_1,\dots,C_n$ be complex constants.
Assume the functions $u:[a,b]\rightarrow\mathbb{R}$ and $f:[a,b]\times\mathbb{R}\rightarrow\mathbb{R}$ are such that $u(t)$ and $f(t,u(t))$ are both in $L^1[a,b]$. Then solving the Cauchy-type problem \begin{align} \label{equiv:Cauchy1} \prescript{RL}{}D_{a+}^{\gamma}u(t)=\prescript{A}{}I_{a+}^{\alpha,\beta}f(t,u(t)),\quad\quad t\in[a,b]; \\ \label{equiv:Cauchy2} \lim_{t\rightarrow a^+}\left(\prescript{RL}{}D_{a+}^{\gamma-k}u(t)\right)=C_k,\quad\quad k=1,2,\dots,n \end{align} for $u\in L^1[a,b]$ is precisely equivalent to solving the Volterra integral equation \begin{equation} \label{equiv:Volterra} u(t)=\sum_{k=1}^n\frac{C_k(t-a)^{\gamma-k}}{\Gamma(\gamma-k+1)}+\prescript{A}{}I_{a+}^{\alpha+\gamma,\beta}f(t,u(t)),\quad\quad t\in[a,b]. \end{equation} \end{lemma} \begin{proof} The underlying fact here is the result of Theorem 1 in \cite{kilbas-bonilla-trujillo}, which was also used to solve a similar but more specific problem in \cite[Lemma 4]{kilbas-saigo-saxena2}. We first use our Theorem \ref{L1:thm} to note that, since $f(t,u(t))$ is an $L^1$ function by assumption, so too is the right-hand side of equation \eqref{equiv:Cauchy1}. Then by \cite[Theorem 1]{kilbas-bonilla-trujillo}, the solution of the Cauchy problem given by \eqref{equiv:Cauchy1} and \eqref{equiv:Cauchy2} is precisely equivalent to the solution of the Volterra equation \[u(t)=\sum_{k=1}^n\frac{C_k(t-a)^{\gamma-k}}{\Gamma(\gamma-k+1)}+\frac{1}{\Gamma(\gamma)}\int_a^t(t-\tau)^{\gamma-1}\prescript{A}{}I_{a+}^{\alpha,\beta}f(\tau,u(\tau))\,\mathrm{d}\tau.\] By our Theorem \ref{RL:E:thm}, the result follows. \end{proof} Using this equivalence with a Volterra integral equation, we can now prove that the original Cauchy problem has a unique solution, as follows.
\begin{theorem} \label{Volterra:thm} With all notation as in Lemma \ref{equiv}, and assuming that the function $f$ satisfies the following Lipschitz condition in the second variable: \[|f(x,y_1)-f(x,y_2)|<C|y_1-y_2|,\] the Volterra integral equation \eqref{equiv:Volterra} has a unique solution $u\in L^1[a,b]$. \end{theorem} \begin{proof} We first consider the restriction of \eqref{equiv:Volterra} to an interval $[a,t_1]\subset[a,b]$, where $t_1\in(a,b)$ is chosen close enough to $a$ such that \begin{equation} \label{t1:defn} C(t_1-a)^{\alpha}\sup_{|x|<(t_1-a)^{\beta}}|A(x)|<1. \end{equation} Defining \begin{equation} \label{y0:defn} u_0(t)\coloneqq\sum_{k=1}^n\frac{C_k(t-a)^{\gamma-k}}{\Gamma(\gamma-k+1)}, \end{equation} we can rewrite the Volterra equation \eqref{equiv:Volterra} as \[u(t)=Tu(t),\] where the function operator $T$ is defined by \begin{equation} \label{T:defn} Tu(t)=u_0(t)+\prescript{A}{}I_{a+}^{\alpha+\gamma,\beta}f(t,u(t)). \end{equation} We aim to apply the contraction mapping theorem to the operator $T$ acting on the complete metric space $L^1[a,t_1]$. In order for this theorem to be applicable, the proofs of the following two statements will be required. \begin{enumerate} \item If $u\in L^1[a,t_1]$, then $Tu\in L^1[a,t_1]$. \item For any $u_1,u_2\in L^1[a,t_1]$, we have \[\|Tu_1-Tu_2\|_1\leq r\|u_1-u_2\|_1,\] where $r\in(0,1)$ is constant and $\|\cdot\|_1$ denotes the $L^1$ norm on $L^1[a,t_1]$. \end{enumerate} \textbf{Proof of statement 1.} We have assumed the function $f$ is such that $f(t,u(t))$ is an $L^1$ function for any $L^1$ function $u$. Therefore, by Theorem \ref{L1:thm}, the right-hand term in \eqref{T:defn} is also $L^1$. And clearly $u_0$ is an $L^1$ function, so the result follows.
\textbf{Proof of statement 2.} By the definition \eqref{T:defn}, we have \[Tu_1-Tu_2=\prescript{A}{}I_{a+}^{\alpha+\gamma,\beta}f(t,u_1(t))-\prescript{A}{}I_{a+}^{\alpha+\gamma,\beta}f(t,u_2(t)).\] Thus we have the following inequalities for the norm in $L^1[a,t_1]$: \begin{align*} \left\|Tu_1-Tu_2\right\|_1&=\left\|\prescript{A}{}I_{a+}^{\alpha+\gamma,\beta}\left(f(t,u_1(t))-f(t,u_2(t))\right)\right\|_1 \\ &\leq\left[(t_1-a)^{\alpha}\sup_{|x|<(t_1-a)^{\beta}}|A(x)|\right]\left\|f(t,u_1(t))-f(t,u_2(t))\right\|_1 \\ &\leq\left[(t_1-a)^{\alpha}\sup_{|x|<(t_1-a)^{\beta}}|A(x)|\right]C\left\|u_1-u_2\right\|_1, \end{align*} where in the second line we used the proof of Theorem \ref{L1:thm} above, and in the third line we used the assumed Lipschitz condition on $f$. And the constant \begin{equation} \label{r:defn} r\coloneqq C(t_1-a)^{\alpha}\sup_{|x|<(t_1-a)^{\beta}}|A(x)| \end{equation} is strictly between 0 and 1 by assumption, so the result follows. By the contraction mapping theorem, we can now say that the Volterra equation \eqref{equiv:Volterra} has a unique solution $u^*\in L^1[a,t_1]$ defined on the interval $[a,t_1]$. Now \eqref{equiv:Volterra} can be rewritten as \begin{equation} \label{Volterra2} u(t)=\sum_{k=1}^n\frac{C_k(t-a)^{\gamma-k}}{\Gamma(\gamma-k+1)}+\int_a^{t_1}(t-\tau)^{\alpha+\gamma-1}A\left((t-\tau)^{\beta}\right)f(\tau,u^*(\tau))\,\mathrm{d}\tau+\prescript{A}{}I_{t_1+}^{\alpha+\gamma,\beta}f(t,u(t)), \end{equation} or equivalently as \[u(t)=T_1u(t),\] where the function operator $T_1$ is defined by \[T_1u(t)=u_{01}(t)+\prescript{A}{}I_{t_1+}^{\alpha+\gamma,\beta}f(t,u(t))\] and the function $u_{01}$ is defined by \[u_{01}(t)\coloneqq\sum_{k=1}^n\frac{C_k(t-a)^{\gamma-k}}{\Gamma(\gamma-k+1)}+\int_a^{t_1}(t-\tau)^{\alpha+\gamma-1}A\left((t-\tau)^{\beta}\right)f(\tau,u^*(\tau))\,\mathrm{d}\tau.\] Note that $u_{01}$ is a fixed function, not depending on $u$, since we have already proved that $u=u^*$ is uniquely determined on $[a,t_1]$.
Thus we can use exactly the same approach as before to prove that the Volterra equation \eqref{Volterra2} has a unique solution $u^*\in L^1[t_1,t_2]$ defined on the interval $[t_1,t_2]$, where $t_2$ is defined (analogously to \eqref{t1:defn}) by requiring the inequality \begin{equation} \label{t2:defn} C(t_2-t_1)^{\alpha}\sup_{|x|<(t_2-t_1)^{\beta}}|A(x)|<1. \end{equation} Note that, by comparison of \eqref{t1:defn} and \eqref{t2:defn}, we can define $t_2-t_1=t_1-a$ provided that this yields a value $t_2$ which is still in the interval $[a,b]$. This argument can be extended indefinitely: each time we find a unique $L^1$ solution on $[t_{i-1},t_i]$, we can then define $t_{i+1}$ using an inequality analogous to \eqref{t2:defn} and find a unique $L^1$ solution on $[t_i,t_{i+1}]$ using the same argument. Since the difference $t_i-t_{i-1}$ can be taken as constant, the process must eventually end when the end of the interval $[a,b]$ is reached. Putting all of the $L^1[t_{i-1},t_i]$ solutions together yields a piecewise-defined function $u$ on $[a,b]$ which is the unique solution in $L^1[a,b]$ of the original problem. \end{proof} \begin{corollary} \label{Cauchy:thm} Let $a,b,A$ be as in Definition \ref{E:defn}, and fix $\alpha,\beta,\gamma\in\mathbb{C}$ with non-negative real parts. Define $n=\lceil\gamma\rceil$ and let $C_1,\dots,C_n$ be complex constants. Assume the functions $u:[a,b]\rightarrow\mathbb{R}$ and $f:[a,b]\times\mathbb{R}\rightarrow\mathbb{R}$ are such that $u(t)$ and $f(t,u(t))$ are both in $L^1[a,b]$, and that $f$ satisfies the following Lipschitz condition in the second variable: \[|f(x,y_1)-f(x,y_2)|<C|y_1-y_2|.\] Then the Cauchy problem defined by \eqref{equiv:Cauchy1} and \eqref{equiv:Cauchy2} has a unique solution $u\in L^1[a,b]$. \end{corollary} \begin{proof} By Lemma \ref{equiv}, solving the Cauchy problem \eqref{equiv:Cauchy1}--\eqref{equiv:Cauchy2} is equivalent to solving the Volterra integral equation \eqref{equiv:Volterra}.
By Theorem \ref{Volterra:thm}, this Volterra equation has a unique solution $u\in L^1[a,b]$. \end{proof} \section{Operators with respect to functions} A well-known extension of the usual calculus is given by differentiating or integrating a function $f(t)$ with respect to another function $g(t)$ instead of with respect to $t$. For integrals, this is called Riemann--Stieltjes integration. The same concept has been extended to fractional derivatives and integrals \cite{osler1,oldham-spanier,kilbas-srivastava-trujillo,samko-kilbas-marichev}, where it is often known as $\psi$-fractional calculus, due to the notation $\psi(t)$ being used instead of $g(t)$. Detailed studies of this idea and its extensions have been made in recent years by authors including \cite{almeida,sousa-oliveira,almeida-malinowska-monteiro}, but the essential definition is as follows for $\psi$-fractional integration and differentiation in the Riemann--Liouville model: \begin{alignat}{2} \label{psi:RLint} \prescript{RL}{\psi(t)}I^{\alpha}_{a+}f(t)&\coloneqq\frac{1}{\Gamma(\alpha)}\int_a^t\psi'(\tau)\big(\psi(t)-\psi(\tau)\big)^{\alpha-1}f(\tau)\,\mathrm{d}\tau,\quad\quad&&\mathrm{Re}(\alpha)>0; \\ \label{psi:RLder} \prescript{RL}{\psi(t)}D^{\alpha}_{a+}f(t)&\coloneqq\left(\frac{1}{\psi'(t)}\cdot\frac{\mathrm{d}}{\mathrm{d}t}\right)^m\left(\prescript{RL}{\psi(t)}I^{m-\alpha}_{a+}f(t)\right),\quad\quad&&\mathrm{Re}(\alpha)\geq0,m\coloneqq\lfloor\mathrm{Re}(\alpha)\rfloor+1. \end{alignat} We note that \eqref{psi:RLint}, like \eqref{EIdef}, involves multiplying the function $f(t)$ by an expression containing an arbitrary function ($\psi$ versus $A$). But this is the only similarity between the two types of operators: \eqref{EIdef} is a convolution-type operator with very different structure and behaviour from \eqref{psi:RLint}. 
The definitions \eqref{psi:RLint}--\eqref{psi:RLder} have been extended to $\psi$-fractional differentiation of Caputo \cite{almeida,almeida-malinowska-monteiro} and Hilfer \cite{sousa-oliveira} type, as well as to other models of fractional calculus such as Atangana--Baleanu and Prabhakar \cite{fernandez-baleanu-icfda}. In a similarly natural way, it is possible to extend the definition to our new generalised model of fractional calculus. \begin{definition} \label{psi:E:defn} Let $[a,b]$ be a real interval, and let $\alpha$, $\beta$, $A$ be as in Definition \ref{E:defn}. For any functions $f$ and $\psi$ defined on $[a,b]$ such that $f$ is an $L^1$ function and $\psi$ is both monotonic and $C^1$, we define the generalised fractional integral of $f(t)$ with respect to $\psi(t)$ as follows: \begin{equation} \label{psi:Eint} \prescript{A}{\psi(t)}I^{\alpha,\beta}_{a+}f(t)\coloneqq\int_a^t\psi'(\tau)\left(\psi(t)-\psi(\tau)\right)^{\alpha-1}A\left(\left(\psi(t)-\psi(\tau)\right)^\beta\right)f(\tau)\,\mathrm{d}\tau. \end{equation} \end{definition} Definition \ref{psi:E:defn} provides a natural extension and combination of two different ways of generalising fractional calculus -- namely, differintegration with respect to functions and differintegration using generalised kernel functions. Additionally, this new formalism enables us to consider as special cases several other classical models of fractional calculus, such as the Hadamard and Erdelyi--Kober fractional differintegrals.
\begin{example} Using $\psi(t)=\log t$ in Definition \ref{psi:E:defn} enables us to recover a generalised version of the \textbf{Hadamard} model of fractional calculus: \begin{align*} \prescript{A}{\log(t)}I^{\alpha,\beta}_{a+}f(t)&=\int_a^t\frac{1}{\tau}\left(\log(t)-\log(\tau)\right)^{\alpha-1}A\left(\left(\log(t)-\log(\tau)\right)^\beta\right)f(\tau)\,\mathrm{d}\tau \\ &=\int_a^t\frac{1}{\tau}\left(\log\left(\tfrac{t}{\tau}\right)\right)^{\alpha-1}A\left(\left(\log\left(\tfrac{t}{\tau}\right)\right)^\beta\right)f(\tau)\,\mathrm{d}\tau. \end{align*} If we now set $A(x)=\frac{1}{\Gamma(\alpha)}$ and $\beta=0$ as in \eqref{ERL:fns}, then we recover from this the standard Hadamard fractional integral: \[\prescript{H}{}I^{\alpha}_{a+}f(t)=\frac{1}{\Gamma(\alpha)}\int_a^t\frac{1}{\tau}\left(\log\left(\tfrac{t}{\tau}\right)\right)^{\alpha-1}f(\tau)\,\mathrm{d}\tau.\] \end{example} \begin{example} Using $\psi(t)=t^{\rho+1}$ in Definition \ref{psi:E:defn} enables us to recover a generalised version of the \textbf{Katugampola} model of fractional calculus: \begin{align*} \prescript{A}{t^{\rho+1}}I^{\alpha,\beta}_{a+}f(t)&=\int_a^t(\rho+1)\tau^{\rho}\left(t^{\rho+1}-\tau^{\rho+1}\right)^{\alpha-1}A\left(\left(t^{\rho+1}-\tau^{\rho+1}\right)^\beta\right)f(\tau)\,\mathrm{d}\tau \\ &=(\rho+1)\int_a^t\left(t^{\rho+1}-\tau^{\rho+1}\right)^{\alpha-1}A\left(\left(t^{\rho+1}-\tau^{\rho+1}\right)^\beta\right)\tau^{\rho}f(\tau)\,\mathrm{d}\tau. 
\end{align*} If we now set $A(x)=\frac{(\rho+1)^{-\alpha}}{\Gamma(\alpha)}$ and $\beta=0$ as in \eqref{ERL:fns}, then we recover from this the standard Katugampola fractional integral introduced in \cite{katugampola}: \[\prescript{K}{}I^{\alpha}_{a+}f(t)=\frac{(\rho+1)^{1-\alpha}}{\Gamma(\alpha)}\int_a^t\left(t^{\rho+1}-\tau^{\rho+1}\right)^{\alpha-1}\tau^{\rho}f(\tau)\,\mathrm{d}\tau.\] \end{example} \begin{example} Using $\psi(t)=t^{\sigma}$ and replacing $f(t)$ by $t^{\sigma\eta}f(t)$ in Definition \ref{psi:E:defn} enables us to recover one possible generalisation of the \textbf{Erdelyi--Kober} model of fractional calculus: \begin{align*} \prescript{A}{t^{\sigma}}I^{\alpha,\beta}_{a+}\left(t^{\sigma\eta}f(t)\right)&=\int_a^t\sigma\tau^{\sigma-1}\left(t^{\sigma}-\tau^{\sigma}\right)^{\alpha-1}A\left(\left(t^{\sigma}-\tau^{\sigma}\right)^\beta\right)\tau^{\sigma\eta}f(\tau)\,\mathrm{d}\tau \\ &=\sigma\int_a^t\left(t^{\sigma}-\tau^{\sigma}\right)^{\alpha-1}A\left(\left(t^{\sigma}-\tau^{\sigma}\right)^\beta\right)\tau^{\sigma\eta+\sigma-1}f(\tau)\,\mathrm{d}\tau. \end{align*} If we now set $A(x)=\frac{1}{\Gamma(\alpha)}$ and $\beta=0$ as in \eqref{ERL:fns}, and also multiply by $t^{-\sigma(\alpha+\eta)}$, then we recover the Erdelyi--Kober fractional integral as defined in \cite[\S18]{samko-kilbas-marichev}: \[t^{-\sigma(\alpha+\eta)}\left[\prescript{\frac{1}{\Gamma(\alpha)}}{t^{\sigma}}I^{\alpha,\beta}_{a+}\left(t^{\sigma\eta}f(t)\right)\right]=\frac{\sigma t^{-\sigma(\alpha+\eta)}}{\Gamma(\alpha)}\int_a^t\left(t^{\sigma}-\tau^{\sigma}\right)^{\alpha-1}\tau^{\sigma\eta+\sigma-1}f(\tau)\,\mathrm{d}\tau.\] \end{example} The above examples demonstrate that our model with general analytic kernels can be extended to cover even more of the classical models of fractional calculus: not only the Riemann--Liouville model and related formulae with different kernels, but also the Hadamard and Katugampola models and their generalisations. 
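As a concrete numerical check of the Hadamard special case, note that for $f\equiv 1$ and $a=1$ the Hadamard integral has the closed form $\prescript{H}{}I^{\alpha}_{1+}\mathbbm{1}(t)=(\log t)^{\alpha}/\Gamma(\alpha+1)$, obtained from the substitution $v=\log(t/\tau)$. Taking $\alpha=\tfrac32$ (so the kernel is continuous and simple quadrature suffices), a short Python sketch confirms this:

```python
import math

def hadamard_integral_of_one(t, alpha=1.5, n=100_000):
    # (1/Gamma(alpha)) * int_1^t (log(t/tau))^(alpha-1) * f(tau)/tau dtau with f = 1,
    # evaluated by the midpoint rule (the integrand is continuous since alpha > 1).
    h = (t - 1.0) / n
    acc = 0.0
    for i in range(n):
        tau = 1.0 + (i + 0.5) * h
        acc += math.log(t / tau) ** (alpha - 1.0) / tau * h
    return acc / math.gamma(alpha)

t = math.e
numeric = hadamard_integral_of_one(t)
closed_form = math.log(t) ** 1.5 / math.gamma(2.5)  # (log t)^alpha / Gamma(alpha+1)
print(numeric, closed_form)  # both ≈ 0.7523
```

The quadrature agrees with the closed form, as expected from the reduction $A(x)=\frac{1}{\Gamma(\alpha)}$, $\beta=0$ described above.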
Although this is not the main concern of the current paper, many of the results we have already proved above for the generalised operator $\prescript{A}{}I_{a+}^{\alpha,\beta}f(t)$ with respect to the variable $t$ can also be proved in a very similar way for the operator $\prescript{A}{\psi(t)}I_{a+}^{\alpha,\beta}f(t)$ with respect to a function $\psi(t)$. For example, we have the following generalised Leibniz rule as an extension of Theorem \ref{Leibniz:thm}. \begin{theorem} \label{psi:Leibniz:thm} If $f\in C[a,b]$ and $g\in C^{\infty}[a,b]$, and if $\psi\in C^1[a,b]$ is monotonic, then for any $\alpha,\beta\in\mathbb{C}$ with non-negative real parts, we have: \begin{equation} \label{psi:Leibniz:eqn} \prescript{A}{\psi(t)}I^{\alpha,\beta}_{a+}\big(f(t)g(t)\big)=\sum_{m=0}^{\infty}\left(\frac{1}{\psi'(t)}\cdot\frac{\mathrm{d}}{\mathrm{d}t}\right)^m\big(g(t)\big)\sum_{n=0}^{\infty}a_n\Gamma(\beta n+\alpha)\binom{-\alpha-n\beta}{m}\prescript{RL}{\psi(t)}I^{\alpha+n\beta+m}_{a+}f(t). \end{equation} \end{theorem} \begin{proof} The proof is exactly analogous to that of Theorem \ref{Leibniz:thm}, with all derivatives and integrals replaced by their $\psi$-differintegral equivalents. \end{proof} A fruitful future direction of research might be to examine how much of the usual fractional calculus can be extended to the operators with general analytic kernels. Here we have indicated one direction of generalisation -- combining the ideas of differintegration with respect to functions and the new class of kernel functions -- but other directions may also be possible in the future. \section{Conclusions} \label{sec:conclusions} In this work, we have introduced a new general framework for fractional calculus, which incorporates many existing definitions of fractional integrals and derivatives as special cases. We started by defining an integral operator with a general analytic kernel, and proved that it can be written as an infinite series of Riemann--Liouville integrals.
Series appear naturally in fractional calculus, and this result is an indication of the fundamental role of Riemann--Liouville in fractional calculus. After proving some fundamental properties of our generalised operator, we considered how it might be used to define fractional derivatives as well as integrals. We demonstrated how the Leibniz rule and chain rule can be extended to the new operators. Using the method of Fourier and Laplace transforms, we analysed and solved some simple ordinary differential equations in the new general framework. Using the contraction mapping theorem and a Volterra integral equation, we also proved existence and uniqueness for Cauchy problems in a more general class of differential equations. Finally, we indicated a new direction of research for the future by proposing a definition of differintegration with respect to functions in the generalised model. We believe that this new model can now be used as a way of proving various useful results in a more general context. For example, theorems such as the fractional product rule, chain rule, and Taylor's theorem have been proved in some recently suggested models using series \cite{baleanu-fernandez,fernandez-baleanu-srivastava,fernandez-baleanu2}, and similar methods may now be applicable to a much broader class of fractional operators. In the future, theorems like these and many others could potentially be proved in the generalised fractional calculus which we have introduced here. The advantage of such general results is that they advance the field as a whole, not just in one model or another of fractional calculus but increasing knowledge about many different models simultaneously. Generalisation is one of the most powerful tools in the mathematician's arsenal, and by generalising many fractional models in a single framework, we have opened the possibility for a unified approach to solve many problems in the future. 
\section*{Declaration of interests} The authors declare that they have no competing interests. \end{document}
\begin{document} \title{Trapped Ion Quantum Computer Research at Los Alamos} \author{D. F. V. James, M. S. Gulley, M. H. Holzscheiter, R. J. Hughes, P. G. Kwiat, S. K. Lamoreaux, C. G. Peterson, V. D. Sandberg, M. M. Schauer,\\ C. M. Simmons, D. Tupa, P. Z. Wang, A. G. White.} \institute{ Los Alamos National Laboratory, Los Alamos, NM 87545, USA } \maketitle \begin{abstract} We briefly review the development and theory of an experiment to investigate quantum computation with trapped calcium ions. The ion trap, laser and ion requirements are determined, and the parameters required for simple quantum logic operations are described. \\(LAUR 98-314) \end{abstract} \section{Introduction} In the last 15 years various authors have considered the generalization of information theory concepts to allow the representation of information by quantum systems. The introduction into computation of {\em quantum mechanical} concepts, in particular the superposition principle, opened up the possibility of new capabilities, such as quantum cryptography \cite{HughesCrypto}, that have no classical counterparts. One of the most interesting of these new ideas is quantum computation, first proposed by Benioff \cite{Benioff}. Feynman \cite{Feynman} suggested that quantum computation might be more powerful than classical computation, a notion which gained further credence through the work of Deutsch \cite{Deutsch}. However, until quite recently quantum computation was an essentially academic endeavor because there were no quantum algorithms that exploited this power to solve useful computational problems, and because no realistic technology capable of performing quantum computations had been envisioned.
This changed in 1994 when Shor discovered quantum algorithms for efficient solution of integer factorization and the discrete logarithm problem \cite{Shor,EkertJozsa}, two problems that are at the heart of the security of much of modern public key cryptography \cite{Hughes}. Later that same year Cirac and Zoller proposed that quantum computational hardware could be realized using known techniques in the laser manipulation of trapped ions \cite{CZ}. Since then interest in quantum computation has grown dramatically, and remarkable progress has been made: a single quantum logic gate has been demonstrated with trapped ions \cite{NISTgate}; quantum error correction schemes have been invented \cite{MannyRay,Preskill}; several alternative technological proposals have been made \cite{Kimble,Havel2,Ziolo,Privman,Bocko,DiVincenzo2} and quantum algorithms for solving new problems have been discovered \cite{Grover,Terhal,Boneh,Kitaev}. In this paper we will review our development of an experiment to investigate the potential of quantum computation using trapped calcium ions \cite{latiqce}. The three essential requirements for quantum computational hardware are: (1) the ability to isolate a set of two-level quantum systems from the environment for long enough to maintain coherence throughout the computation, while at the same time being able to interact with the systems strongly enough to manipulate them into an arbitrary quantum state; (2) a mechanism for performing quantum logic operations: in other words a ``quantum bus channel'' connecting the various two-level systems in a quantum mechanical manner; and (3) a method for reading out the quantum state of the system at the end of the calculation. 
\begin{figure}[!ht] \begin{center} \epsfxsize=8cm \epsfbox{fig1.eps} \end{center} \caption{A schematic illustration of an idealized laser-ion interaction system; ${\bf k}_{L}$ is the wavevector of the single addressing laser.} \label{figone} \end{figure} All three of these requirements are in principle met by the cold trapped ion quantum computer. In this scheme each qubit consists of two internal levels of an ion trapped in a linear configuration. In order to perform the required logic gates, a third atomic state known as the auxiliary level is required. The quantum bus channel is realized using the phonon modes of the ions' collective oscillations. These quantum systems may be manipulated using precisely controlled laser pulses. Two distinct types of laser pulse are required: ``V'' type pulses, which only interact with the internal states of individual ions, and ``U'' type pulses which interact with both the internal states and the external vibrational degrees of freedom of the ions. These interactions can be realized using Rabi flipping induced by either a single laser or Raman (two laser) scheme (Fig.2). Readout is performed by using quantum jumps. This scheme was originally proposed by Cirac and Zoller in 1994 \cite{CZ}, and was used to demonstrate a CNOT gate shortly afterwards \cite{NISTgate}. \begin{figure}[!ht] \begin{center} \epsfxsize=7.5cm \epsfbox{fig2.eps} \end{center} \caption{A schematic illustration of (a) single laser and (b) Raman qubit addressing and control techniques.} \label{figtwo} \end{figure} As we can only give the briefest of descriptions of the principles of quantum computation using cold trapped ions, the reader is recommended to peruse the more detailed descriptions which can be found elsewhere \cite{Steane,dfvj,NISTrev,latiqce}. In this paper we intend to focus on the experimental issues involved in building a trapped ion quantum computer.
\section {Choice of Ion} There are three requirements which the species of ion chosen for the qubits of an ion trap quantum computer must satisfy: 1. If we use the single laser scheme, the ions must have a level that is sufficiently long-lived to allow some computation to take place; this level can also be used for sideband cooling. 2. The ions must have a suitable dipole-allowed transition for Doppler cooling, quantum jump readout and for Raman transitions (if we choose to use two sub-levels of the ground state to form the qubit). 3. These transitions must be at wavelengths compatible with current laser technology. \begin{figure}[!ht] \begin{center} \epsfxsize=8.4cm \epsfbox{fig3.eps} \end{center} \caption{The lowest energy levels of $^{40}Ca^{+}$ ions, with transition wavelengths and lifetimes listed.} \label{figthree} \end{figure} Various ions used in atomic frequency standards work satisfy the first requirement. Of these ions, $Ca^{+}$ offers the advantages of transitions that can be accessed with titanium-sapphire or diode lasers and a reasonably long-lived metastable state. The relevant energy levels of the $A=40$ isotope are shown in fig.3. The dipole-allowed transition from the $4\,^{2}S_{1/2}$ ground state to the $4\,^{2}P_{1/2}$ level with a wavelength of 397 nm can be used for Doppler cooling and quantum jump readout; the 732 nm electric quadrupole transition from the $4\,^{2}S_{1/2}$ ground state to the $3\,^{2}D_{3/2}$ metastable level (lifetime $\approx 1.08$ s) is suitable for sideband cooling. In the single laser computation scheme, the qubits and auxiliary level can be chosen as the electronic states \begin{eqnarray} \ket{0} &=&\ket{4\,^{2}S_{1/2},\,M_{j}=1/2}, \nonumber \\ \ket{1} &=&\ket{3\,^{2}D_{5/2},\,M_{j}=3/2}, \nonumber \\ \ket{aux}&=&\ket{3\,^{2}D_{5/2},\,M_{j}=-1/2}. \nonumber \end{eqnarray} This ion can also be used for Raman type qubits, with the two Zeeman sublevels of the $4\,^{2}S_{1/2}$ ground state forming the two qubit states $\ket{0}$ and $\ket{1}$, with one of the sublevels of the $4\,^{2}P_{1/2}$ level being the upper level $\ket{2}$. A magnetic field of 200 Gauss should be sufficient to split these two levels so that they can be resolved by the lasers. The pump and Stokes beams would be formed by splitting a 397 nm laser into two, and shifting the frequency of one with respect to the other by means of an acousto-optic or electro-optic modulator. This arrangement has a great advantage in that any fluctuations in the phase of the original 397 nm laser will be passed on to both the pump and Stokes beams, and will therefore be canceled out, because the dynamics is only sensitive to the difference between the pump and Stokes phases. One problem in realizing the Raman scheme in $Ca^{+}$ is the absence of a third level in the ground state that can act as the auxiliary state $\ket{aux}$ required for execution of quantum gates. This difficulty could be removed by using the alternative scheme for quantum logic recently proposed by Monroe {\it et al.} \cite{NISTsimp}; alternatively, one could use an isotope of $Ca^{+}$ which has non-zero nuclear spin, thereby giving several more sublevels in the ground state due to the hyperfine interaction; other possibilities that have been suggested for an auxiliary state with $^{40}Ca^{+}$ in the Raman scheme are to use a state of a phonon mode other than the CM mode \cite{Steaneaux} or one of the sublevels of the $3\,^{2}D$ doublet \cite{Blattaux}. \section {The Radio Frequency Ion Trap} Radio-frequency (RF) quadrupole traps, also named ``Paul traps'' after their inventor, have been used for many years to confine electrically charged particles \cite{Paul} (for an introduction to the theory of ion traps, see refs. \cite{NISTtrap,Ghosh}).
The classic design of such a Paul trap has a ring electrode with endcap electrodes above and below, with the ions confined to the enclosed volume. A single ion can be located precisely at the center of the trap where the amplitude of the RF field is zero. But when several ions are placed into this trapping field, their Coulomb repulsion forces them apart and into regions where they are subjected to heating by the RF field. For this reason in our experiment ions are confined in a linear RF quadrupole trap \cite{latiqce}. Radial confinement is achieved by a quadrupole RF field provided by four 1 mm diameter rods in a rectangular arrangement. Axial confinement is provided by DC voltages applied to conical endcaps at either end of the RF structure; the endcap separation is 10 mm. The design of the trap used in these experiments is shown diagrammatically in Fig.4. \begin{figure}[!ht] \begin{center} \epsfxsize=8cm \epsfbox{fig4.eps} \end{center} \caption{Side view diagram of the linear RF trap used to confine $Ca^{+}$ ions in these experiments. The endcap separation is 10 mm and the gap between the RF rods is 1.7 mm.} \label{figfour} \end{figure} The main concerns for the design are to provide sufficient radial confinement to assure that the ions form a string on the trap axis after Doppler cooling; to minimize the coupling between the radial and axial degrees of freedom by producing radial oscillation frequencies significantly greater than the axial oscillation frequencies; to produce high enough axial frequencies to allow the use of sideband cooling \cite{NISTcool89}; and to provide sufficient spatial separation to allow individual ions to be addressed with laser beams. \section {Laser Systems} The relevant optical transitions for $Ca^{+}$ ions are shown in Fig.5. There are four different optical processes employed in the quantum computer; each places specific demands on the laser system.
\begin{figure}[!ht] \begin{center} \epsfxsize=8cm \epsfbox{fig5.eps} \end{center} \caption{Different transitions between the levels of $Ca^{+}$ ions required for (a) Doppler cooling, (b) Resolved sideband cooling and (c) quantum logic operations and readout. The single laser addressing technique has been assumed.} \label{figfive} \end{figure} The first stage is to cool a small number of ions to their Doppler limit in the ion trap, as shown in Fig.5a. This requires a beam at 397 nm, the $4\,^{2}S_{1/2} - 4\,^{2}P_{1/2}$ resonant transition. Tuning the laser to the red of the transition causes the ions to be slowed by the optical molasses technique \cite{StenholmCool}. In this procedure, a laser beam with a frequency slightly less than that of the resonant transition of an ion is used to reduce its kinetic energy. Owing to the Doppler shift of the photon frequency, ions preferentially absorb photons that oppose their motion, whereas they re-emit photons in all directions, resulting in a net reduction in momentum along the direction of the laser beam. With carefully selected trap parameters, many cycles of absorption and re-emission will bring the system to the Lamb-Dicke regime, leaving the ions in a string-of-pearls geometry. We have recently found ion crystals of up to five $Ca^{+}$ ions. In order to Doppler cool the ions, the demands on the power and linewidth of the 397 nm laser are modest. The saturation intensity of $Ca^{+}$ ions is $\sim 10\; {\rm mW/cm}^{2}$, and the laser linewidth must be less than $\sim 10\; {\rm MHz}$. An optogalvanic signal obtained with a hollow cathode lamp suffices to set the frequency. We use a Titanium:Sapphire (Ti:Sapphire) laser (Coherent CR 899-21) with an internal frequency doubling crystal to produce the 397 nm light.
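To give a sense of scale for this first cooling stage, the Doppler limit temperature is $T_{D}=\hbar\Gamma/2k_{B}$. The short estimate below assumes a natural linewidth of $\Gamma\approx 2\pi\times 22$ MHz for the 397 nm $4\,^{2}S_{1/2}-4\,^{2}P_{1/2}$ transition (a representative value; the linewidth is not quoted in the text):

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant, J s
kB = 1.380649e-23       # Boltzmann constant, J/K

# Assumed natural linewidth of the 397 nm S1/2 - P1/2 transition in Ca+,
# roughly 2*pi x 22 MHz (this number is not given in the text).
Gamma = 2 * math.pi * 22e6  # rad/s

T_doppler = hbar * Gamma / (2 * kB)  # Doppler cooling limit, in kelvin
print(f"Doppler limit: {T_doppler * 1e3:.2f} mK")  # ~0.53 mK
```

A sub-millikelvin temperature of this order is what makes the subsequent sideband-cooling stage feasible.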
During the Doppler cooling, the ions may decay from the $4\,^{2}P_{1/2}$ state to the $3\,^{2}D_{3/2}$ state, whose lifetime is $\sim 1 {\rm sec}.$ To empty this metastable state, we use a second Ti:Sapphire laser at 866 nm. Once the string of ions is Doppler cooled to the Lamb-Dicke regime, the second stage of optical cooling, sideband cooling, will be used to reduce the collective motion of the string of ions to its lowest vibrational level \cite{WinelandItano}, illustrated in Fig.5b. In this regime, a narrow optical transition, such as the 732 nm $4\,^{2}S_{1/2} - 3\,^{2}D_{3/2}$ dipole forbidden transition, develops sidebands above and below the central frequency by the vibrational frequencies of the ions. The sidebands closest to the unperturbed frequency correspond to the CM vibrational motion. If $\omega_{0}$ is the optical transition frequency and $\omega_x$ the frequency of the CM vibrational motion, the phonon number is increased by one, unchanged, or decreased by one if an ion absorbs a photon of frequency $\omega_{0}+\omega_x$, $\omega_{0}$ or $\omega_{0}-\omega_x$, respectively. Thus, sideband cooling is accomplished by optically cooling the string of ions with a laser tuned to $\omega_{0}-\omega_x$. The need to resolve the sidebands of the transition implies a much more stringent requirement for the laser linewidth; it must be well below the CM mode vibrational frequency of $\sim (2\pi)\times 200\; {\rm kHz}$. The laser power must also be greater in order to pump the forbidden transition. We plan to use a Ti:Sapphire laser locked to a reference cavity to meet the required linewidth and power. At first glance it would seem that, with a metastable level with a lifetime of 1s, no more than 1 phonon per second could be removed from a trapped ion. A second laser at 866 nm is used to couple the $4\,^{2}P_{1/2}$ state to the $3\,^{2}D_{3/2}$ state to reduce the effective lifetime of the D state and allow faster cooling times. 
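The Lamb--Dicke parameter for the CM mode, $\eta=k\sqrt{\hbar/2m\omega_x}$, can be estimated directly from the numbers quoted above (the 732 nm sideband-cooling transition, $\omega_x\approx(2\pi)\times200$ kHz, and the $^{40}Ca^{+}$ mass); the sketch below is an order-of-magnitude check that a cooled ion is indeed in the Lamb--Dicke regime:

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J s
amu = 1.66053906660e-27  # atomic mass unit, kg
m = 40 * amu             # mass of a 40Ca+ ion

omega_x = 2 * math.pi * 200e3  # CM axial frequency quoted in the text, rad/s
k = 2 * math.pi / 732e-9       # wavevector of the 732 nm cooling transition, 1/m

x0 = math.sqrt(hbar / (2 * m * omega_x))  # ground-state wavepacket size
eta = k * x0                              # Lamb-Dicke parameter

print(f"x0 = {x0*1e9:.1f} nm, eta = {eta:.3f}")  # x0 ≈ 25 nm, eta ≈ 0.216
```

Since $\eta\ll1$, the carrier and motional sidebands are well separated in strength as well as in frequency, consistent with the resolved-sideband picture above.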
The transitions required for realization of quantum logic gates and for readout, discussed in detail in sections 5.2 and 5.3, are shown in Fig.5c. These can be performed with the same lasers used in the Doppler and sideband cooling procedures. There are two other considerations concerning the laser systems for quantum computation which should be mentioned. To reduce the total complexity of the completed system, we are developing diode lasers and a frequency doubling cavity to handle the Doppler cooling and quantum jump readout. Also, complex quantum computations would require that the laser on the $4\,^{2}S_{1/2} - 3\,^{2}D_{5/2}$ computation transition have a coherence time as long as the computation time. This may necessitate using qubits bridged by Raman transitions as discussed above, which eliminates the errors caused by the phase drift of the laser. \section{Qubit Addressing Optics} In order for the $Ca^{+}$ ion qubits to be useful for actual calculations, it will be necessary to address the ions in a very controlled fashion. Our optical system for qubit addressing is shown schematically in Fig.6. \begin{figure}[!ht] \begin{center} \epsfxsize=12cm \epsfbox{fig6.eps} \end{center} \caption{Illustration of the laser beam control optics system.} \label{figsix} \end{figure} There are two aspects to be considered in the design of such a system: the precise interactions with a single ion; and an arrangement for switching between different ions in the string. In addition to the obvious constraints on laser frequency and polarization, the primary consideration for making exact $\pi$- or $2\pi$-pulses is control of the area (over time) of the driving light field pulse. The first step toward this is to stabilize the intensity of the laser, as can be done to better than $0.1\%$ using a standard ``noise-eater''.
Such a device typically consists of an electro-optical polarization rotator located between two polarizers; the output of a fast detector monitoring part of the transmitted beam is used in a feedback circuit to adjust the degree of polarization rotation, and thus the intensity of the transmitted light. Switching the light beam on and off can be performed with a similar (or even the same) device. Because such switches can possess rise/fall times on the scale of nanoseconds, it should be possible to readily control the area under the pulse to within $\sim 0.1\%$, simply by accurately determining the width of the pulse. A more elaborate scheme would involve an integrating detector, which would monitor the actual integrated energy under the pulse, shutting the pulse off when the desired value is obtained. Once the controls for addressing a single ion are decided, the means for switching between ions must be considered. Any system for achieving this must be fast, reproducible, display very precise aiming and low ``crosstalk'' (i.e. overlap of the focal spot onto more than one ion), and be as simple as possible. In particular, it is desirable to be able to switch between different ions in the string in a time short compared to the time required to complete a given $\pi$-pulse on one ion. This tends to rule out any sort of mechanical scanning system. Acousto-optic deflectors, which are often used for similar purposes, may be made fast enough, but introduce unwanted frequency shifts on the deflected beams. As a tentative solution, we propose to use an electro-optic beam deflector, basically a prism whose index of refraction, and consequently whose deflection angle, is varied slightly by applying a high voltage across the material; typical switching times for these devices are 10 nanoseconds, adequate for our purposes. One such device produces a maximum deflection of $\pm$9 mrad, for a $\pm$3000V input.
The associated maximum number of resolvable spots (using the Rayleigh criterion) is of order 100, implying that $\sim$ 20 ions could be comfortably resolved with negligible crosstalk. After the inter-ion spacing has been determined, i.e., by the trap frequencies, the crosstalk specification determines the maximum spot size of the addressing beam. For example, for an ion spacing of 20 $\mu$m, any spot size (defined here as the $1/e^2$ diameter) less than 21.6 $\mu$m will yield a crosstalk of less than 0.1$\%$, assuming a purely Gaussian intensity distribution (a good approximation if the light is delivered from a single-mode optical fiber, or through an appropriate spatial filter). In practice, scattering and other experimental realities will increase this size, so that it is prudent to aim for a somewhat smaller spot size, e.g. 10 $\mu$m. One consideration when such small spot sizes are required is the effect of lens aberrations, especially since the spot must remain small regardless of which ion it is deflected to. Employing standard ray-trace methods, we have found that the blurring effects of aberrations can be reduced if a doublet/meniscus lens combination is used (assuming an input beam size of 3 mm, and an effective focal length of 30 mm). A further complication is that, in order to add or remove phonons from the system, the addressing beams must have a component along the longitudinal axis of the trap. The addressing optics must accommodate a tilted line of focus, otherwise the intensity at each ion would be markedly different, and the crosstalk for the outermost ions would become unacceptable. According to ray-trace calculations, adding a simple wedge (of $\sim 2^{\circ}$) solves the problem and this has been confirmed by measurements using a mock system. Depending on the exact level scheme being considered, it may be necessary to vary the polarization of the light.
Because the electro-optic deflector requires a specific linear polarization, any polarization-control elements should be placed after the deflector. The final result is a highly directional, tightly-focused beam with controllable polarization and intensity. \section{Imaging System} In order to determine the ions' locations and to read out the result of the quantum computations, an imaging system is required. Our current imaging system consists of two lenses, one of which is mounted inside the vacuum chamber, and a video camera coupled to a dual-stage micro-channel plate (MCP) image intensifier. The first lens, with focal length 15 mm, collects the light emitted from the central trap region with a solid angle of approximately 0.25 sr. The image is relayed through a 110 mm f/2 commercial camera lens to the front plate of the MCP. This set-up produces a magnification of 7.5 at the input of the MCP. The input of the 110 mm lens is fitted with a 400 nm narrow-band filter to reduce background from the IR laser and from light emanating from the hot calcium oven and the electron gun filament. The dual plate intensifier is operated at maximum gain for the highest possible sensitivity. This allows us to read out the camera at the normal video rate of 30 frames ${\rm s}^{-1}$ into a data acquisition computer. Averaging and integration of the signal over a given time period can then be undertaken by software. We find this arrangement extremely useful in enabling us to observe, in real time, changes of the cloud size or the intensity of the fluorescence with changes of external parameters like trapping potential, laser frequency, laser amplitude, etc. The spatial resolution of the system is limited by the active diameter of individual channels of the MCP, approximately 12 $\mu$m. Since the gain is run at its maximum value, crosstalk between adjacent channels in the transition between the first and second stages is to be expected.
This results in the requirement that two incoming photons can only be resolved when they are separated at the input of the MCP by at least two channels, i.e. by 36 $\mu$m in our case. With the magnification of the optical system of 7.5, this translates into a minimum resolvable ion separation of about 5 $\mu$m, which is well below the separation of ions in the axial well of about 25 $\mu$m expected in our experiment. \section{Summary} It is our contention that currently the ion trap proposal for realizing a practical quantum computer offers the best chance of long term success. This is in no way intended to trivialize research into the other proposals: in any of these schemes technological advances may at some stage lead to a breakthrough. In particular, Nuclear Magnetic Resonance does seem to be a relatively straightforward way in which to achieve systems containing a few qubits. However, of the technologies which have so far been used to demonstrate experimental logic gates, ion traps seem to present the fewest technological problems for scaling up to 10's or even 100's of qubits. In this paper we have described in some detail the experiment we are currently developing to investigate the feasibility of cold trapped ion quantum computation. We should emphasize that our intentions are at the moment exploratory: we have chosen an ion on the basis of current laser technology, rather than on the basis of which ion will give the best performance for the quantum computer. Other species of ion may well give better performance: in particular, Beryllium ions do have the potential for a significantly lower error rate due to spontaneous emission, although it is also true that lighter ions may be more susceptible to heating. Other variations, such as the use of Raman transitions in place of single laser transitions, or the use of standing wave lasers, need to be investigated. Our choice of Calcium will allow us to explore these issues.
Furthermore, calculations suggest that it should be possible to trap 20 or more Calcium ions in a linear configuration and manipulate their quantum states by lasers on short enough time scales that many quantum logic operations may be performed before coherence is lost. Only by experiment can the theoretical estimates of performance be confirmed \cite{HJKLP,HJKLPtwo}. Until all of the sources of experimental error in real devices are thoroughly investigated, it will be impossible to determine what ion and addressing scheme enables one to build the best quantum computer or, indeed, whether it is possible to build a useful quantum computer with cold trapped ions at all. \section*{Acknowledgments} This research was funded by the National Security Agency. \begin{thebibliography}{99} \bibitem{HughesCrypto} R. J. Hughes et al., Contemp. Phys. {\bf 36} (1995) 149-163. \bibitem{Benioff} P. A. Benioff, Int. J. Theor. Phys. {\bf 21} (1982) 177-201. \bibitem{Feynman} R. P. Feynman, Foundations of Physics {\bf 16} (1986) 507-531. \bibitem{Deutsch} D. Deutsch, Proc. R. Soc. Lond. {\bf A 425} (1989) 73-90. \bibitem{Shor} P. W. Shor, {\em Proceedings of the 35th Annual Symposium on the Foundations of Computer Science}, S. Goldwasser ed., IEEE Computer Society Press, Los Alamitos CA, 1994. \bibitem{EkertJozsa} A. Ekert and R. Jozsa, Rev. Mod. Phys. {\bf 68} (1996) 733-753. \bibitem{Hughes} R. J. Hughes, ``Cryptography, Quantum Computation and Trapped Ions'', submitted to Phil. Trans. Roy. Soc. (London), 1997; quant-ph/9712054. \bibitem{CZ} J. I. Cirac and P. Zoller, Phys. Rev. Lett. {\bf 74} (1995) 4094-4097. \bibitem{NISTgate} C. Monroe et al., Phys. Rev. Lett. {\bf 75} (1995) 4714-4717. \bibitem{MannyRay} E. Knill, R. Laflamme and W. Zurek, ``Accuracy threshold for quantum computation'', submitted to {\em Science} (1997). \bibitem{Preskill} J. Preskill, ``Reliable quantum computers'', preprint (1997); quant-ph/9705031. \bibitem{Steane} A. M.
Steane, Applied Physics {\bf B 64} (1997) 623-642. \bibitem{dfvj} D. F. V. James, ``Quantum dynamics of cold trapped ions, with application to quantum computation'', Applied Physics B, in the press (1998); quant-ph/9702053. \bibitem{NISTrev} D. J. Wineland et al., ``Experimental issues in coherent quantum-state manipulation of trapped atomic ions'', to be submitted to Rev. Mod. Phys. (1997). \bibitem{latiqce} R. J. Hughes et al., ``The Los Alamos Trapped Ion Quantum Computer Experiment'', Fortschritte der Physik, in the press (1998); quant-ph/9708050. \bibitem{Grover} L. K. Grover, {\em Proceedings of the 28th Annual ACM Symposium on the Theory of Computing}, ACM Press, New York, 1996, p. 212. \bibitem{Terhal} B. M. Terhal and J. A. Smolin, ``Superfast quantum algorithms for coin weighing and binary search problems'', preprint (1997); quant-ph/9705041. \bibitem{Boneh} D. Boneh and R. Lipton, ``Quantum cryptanalysis of hidden linear functions'', Proc. CRYPTO'95 (Springer, New York, 1995). \bibitem{Kitaev} A. Kitaev, ``Quantum measurements and the Abelian stabilizer problem'', preprint (1995); quant-ph/9511026. \bibitem{PGCZ} T. Pellizzari et al., Phys. Rev. Lett. {\bf 75} (1995) 3788-3791. \bibitem{Kimble} Q. A. Turchette et al., Phys. Rev. Lett. {\bf 75} (1995) 4710-4713. \bibitem{Havel2} D. G. Cory, A. F. Fahmy and T. F. Havel, Proc. Natl. Acad. Sci. USA {\bf 94} (1997) 1634-1639. \bibitem{Ziolo} J. R. Friedman et al., Phys. Rev. Lett. {\bf 76} (1996) 3830-3833. \bibitem{Privman} V. Privman, I. D. Vagner and G. Kventsel, ``Quantum computation in quantum-Hall systems'', preprint (1997); quant-ph/9707017. \bibitem{Bocko} M. F. Bocko, A. M. Herr and M. J. Feldman, ``Prospects for quantum coherent computation using superconducting electronics'', preprint (1997). \bibitem{DiVincenzo2} D. Loss and D. P. DiVincenzo, ``Quantum computation with quantum dots'', preprint (1997); cond-mat/9701055. \bibitem{NISTsimp} C. Monroe et al., Phys. Rev. {\bf A 55} (1997) R2489.
\bibitem{Steaneaux} A. M. Steane, private communication, 1996. \bibitem{Blattaux} R. Blatt, private communication, 1997. \bibitem{Paul} W. Paul and H. Steinwedel, Z. Naturforsch. {\bf A 8} (1953) 448. \bibitem{NISTtrap} M. G. Raizen et al., Phys. Rev. {\bf A 45} (1992) 6493-6501. \bibitem{Ghosh} P. K. Ghosh, {\em Ion Traps}, Clarendon Press, 1995. \bibitem{NISTcool89} F. Diedrich et al., Phys. Rev. Lett. {\bf 62} (1989) 403-407. \bibitem{StenholmCool} S. Stenholm, Rev. Mod. Phys. {\bf 58} (1986) 699-739. \bibitem{WinelandItano} D. J. Wineland and W. M. Itano, Phys. Rev. {\bf A 20} (1979) 1521-1540. \bibitem{HJKLP} R. J. Hughes, D. F. V. James, E. H. Knill, R. Laflamme and A. G. Petschek, Phys. Rev. Lett. {\bf 77} (1996) 3240-3243. \bibitem{HJKLPtwo} D. F. V. James, R. J. Hughes, E. H. Knill, R. Laflamme and A. G. Petschek, {\em Photonic Quantum Computing}, S. P. Hotaling, A. R. Pirich eds, {\em Proceedings of SPIE} {\bf 3076} (1997) 42-50. \end{thebibliography} \end{document}
\begin{document} \subjclass{20E06, 20E36, 20E08} \begin{abstract} We study the minimally displaced set of irreducible automorphisms of a free group. Our main result is the co-compactness of the minimally displaced set of an irreducible automorphism $\phi$ with exponential growth, under the action of the centraliser $C(\phi)$. As a corollary, we get that the same holds for the action of $\langle \phi \rangle$ on $Min(\phi)$. Finally, we prove that the minimally displaced set of an irreducible automorphism of growth rate one consists of a single point. \end{abstract} \maketitle \tableofcontents \section{Introduction} In this paper, we study the minimally displaced set of an irreducible automorphism of a free group $F_N$. Our goal is to produce an essentially elementary proof that an irreducible automorphism acts co-compactly on its minimally displaced set (equivalently, the train track points) in Culler-Vogtmann space. We note that stronger results were produced by \cite{Handel-Mosher}, who showed that the action on the {\em axis bundle} is co-compact, from which one easily deduces the result. However, their result only holds for non-geometric iwip automorphisms, and we believe our argument to be significantly simpler. Irreducible automorphisms of $F_N$ play a central role in the study of the outer automorphism group $Out(F_N)$, as they can be studied using the powerful train track machinery introduced by Bestvina and Handel in \cite{BH-TrainTracks}. Also, they are generic elements of $Out(F_N)$ in the sense of random walks (see \cite{Rivin}). Culler-Vogtmann Outer space $CV_N$ provides a classical way to understand automorphisms of free groups, by studying the action of $Out(F_N)$ on $CV_N$. More recently, the so-called Lipschitz metric on $CV_N$ has been studied extensively and seems to have interesting applications (for example, see \cite{Kfir-Bestvina}, \cite{BestvinaBers}, \cite{FM12}, \cite{QingRafi}).
In particular, given an automorphism $\phi$, we can define the displacement of $\phi$ in $CV_N$, as $\lambda_{\phi} = \inf\{ \Lambda(X,(X)\phi) : X \in CV_N\}$ (where we denote by $\Lambda$ the asymmetric Lipschitz metric). The set of minimally displaced points (or simply Min-Set) $Min(\phi)$ has interesting properties. For instance, the first two authors studied the Min-Set in \cite{FM18II} (in the context of general deformation spaces), where they proved that it is connected. As an application they gave solutions to some decision problems for irreducible automorphisms. For an automorphism $\phi$, the centraliser $C(\phi)$ preserves the Min-Set $Min(\phi)$. Our main result is the following: \begin{restatable*}{thm}{centralcocompact} Let $\phi$ be an irreducible automorphism of $F_N$ with $\lambda_{\phi} > 1$. The quotient space $Min(\phi) / C(\phi)$ is compact. \end{restatable*} As we have already mentioned, Handel and Mosher prove the previous theorem (under some extra hypotheses) in \cite{Handel-Mosher}. It is worth mentioning that some of the main ingredients of their proof are generalised by Bestvina, Guirardel and Horbez in the context of deformation spaces of free products (see \cite{BGH}). We then collect known results together, showing that the centraliser of an irreducible automorphism with $\lambda_{\phi} > 1$ is virtually cyclic, and deduce the following. \begin{restatable*}{thm}{autococompact} \label{autococom} Let $\phi$ be an irreducible automorphism of $F_N$ with $\lambda_{\phi} > 1$. Then $Min(\phi) / \langle \phi \rangle$ is compact. \end{restatable*} \begin{rem*} Note that centralisers for iwip automorphisms are well known to be virtually cyclic, but the more general statement is also true since irreducible automorphisms of infinite order which are not iwip are geometric, and their centralisers are also geometric, hence virtually cyclic. We collect these observations in more detail in the proof of Theorem~\ref{autococom}.
\end{rem*} In order to have a complete picture for irreducible automorphisms of a free group, we study irreducible automorphisms of growth rate one (all of these have finite order). In this case, we have the following: \begin{restatable*}{thm}{lambdaonefix} Let $\phi$ be an irreducible automorphism of $F_N$ with $\lambda_{\phi} = 1$. There is a single point $T \in CV_N$ so that $Min(\phi) = Fix(\phi) = \{T\}$. \end{restatable*} As an application, we get the following corollary: \begin{restatable*}{cor}{centrallambdaonefinite} Let $\phi$ be an irreducible automorphism of $F_N$ with $\lambda_{\phi} = 1$. There is some $T \in CV_N$ so that $C(\phi)$ fixes $T$. In particular, $C(\phi)$ is finite. \end{restatable*} \section{Preliminaries} \subsection{Culler-Vogtmann space} For the rest of the paper, we will denote by $F_N$ the free group on $N$ generators, for some $N \geq 2$. Firstly, we will describe the construction of the Culler-Vogtmann space, which is denoted by $CV_N$ and is a space on which $Out(F_N)$ acts nicely. Let us fix a free basis $x_1,\ldots,x_N$ of $F_N$. We denote by $R_N$ the rose with $N$ petals, where we identify each $x_i$ with a single petal of $R_N$ (the petals are still denoted by $x_1,\ldots,x_N$). \begin{defn} A marked metric graph of rank $N$ is a triple $(T, h, \ell_T)$ so that: \begin{itemize} \item $T$ is a graph (without valence one or two vertices) with fundamental group isomorphic to $F_N$. \item $h: R_N \rightarrow T$ is a homotopy equivalence, which is called the marking. \item $\ell_T : E(T) \rightarrow (0,1)$ is a metric on $T$, which assigns a positive number to each edge of $T$, with the property $\sum_e \ell_T(e) = 1$. \end{itemize} \end{defn} We are now in a position to define Culler-Vogtmann space: \begin{defn} $CV_N$ is the space of equivalence classes, under $\sim$, of marked metric graphs of rank $N$.
The equivalence relation $\sim$ is given by $(T, h, \ell_T) \sim (S,h',\ell_S)$ if and only if there is an isometry $g:T \rightarrow S$ so that $gh$ is homotopic to $h'$. \end{defn} In order to simplify the notation, whenever $\ell_T, h$ are clear from the context, we simply write $T$ instead of the triple $(T,h,\ell_T)$. \begin{rem*} Equivalently, one may take universal covers to get a different formulation of Culler-Vogtmann space; as the space of free, minimal, simplicial $F_N$ trees of volume 1, up to equivariant isometry. \end{rem*} \textbf{Action of $Out(F_N)$}: Let $\phi \in Out(F_N)$ and $(T,h,\ell_T)$ be a marked metric graph. We define $((T,h,\ell_T))\phi = (T, h', \ell_T)$ where $h' = h\phi$ (here we still denote by $\phi$ the natural representative of $\phi$ as a homotopy equivalence from $R_N$ to $R_N$). \textbf{Simplicial Structure.} If we fix a pair $(T,h)$ of a topological marked graph (without metric) and we consider all the different possible metrics on $E(T)$ (i.e. assignments of a positive number to each edge so that the volume of $T$ is one), we obtain an (open) simplex in $CV_N$ of dimension $|E(T)| - 1$. However, if we allow some edges to have length $0$, then it is not always true that the new graph will have rank $N$, so the resulting simplex is not necessarily in $CV_N$ (note that some faces could be in $CV_N$). Therefore, $CV_N$ is not a simplicial complex, but it can be described as a union of open simplices. Note that it follows immediately from the definitions that $Out(F_N)$ acts simplicially on $CV_N$. In the following theorem, we list some more properties, which are proved in \cite{cv}. \begin{thm*}[See {\cite{cv}}] \noindent \begin{enumerate}[{i)}] \item $CV_N$ is contractible. \item The maximum dimension of a simplex is $3N-4$. In other words, the maximum number of edges in a marked metric graph of rank $N$ is bounded above by $3N-3$.
\item $Out(F_N)$ acts on $CV_N$ with finite point stabilisers and the quotient space consists of finitely many open simplices. \end{enumerate} \end{thm*} \textbf{Translation length.} Given a marked metric graph $T$, we would like to define the translation length of the conjugacy class of a non-trivial group element $a$ with respect to $T$. It is natural to define the length of $a$ as the sum of the lengths of the edges that $a$ crosses when it is realised as a reduced loop in $T$. We define the translation length of the conjugacy class of $a \in F_N$ as $\ell_T([a]) = \inf \{ len_T(a') : [a'] = [a] \}$. It is easy to see that any group element is freely homotopic to an embedded loop which realises the minimum. \begin{rem*} This translation length is the same as the translation length - in the sense of the minimum distance moved by the group element - of the corresponding action of the group element on the universal cover, a tree. \end{rem*} \textbf{Thick part.} We can now define the thick part of $CV_N$, which will be essential in our arguments: \begin{defn} Let $\epsilon > 0$. We define the thick part of $CV_N$, denoted by $CV_N(\epsilon)$, as the subspace of $CV_N$ consisting of all the points $T \in CV_N$ with the property that every non-trivial conjugacy class $\alpha$ of $F_N$ has translation length at least $\epsilon$, i.e. $\ell_T(\alpha) \geq \epsilon$. \end{defn} \textbf{Centre of a simplex.} We will need the notion of a special point of a simplex, the centre, so we give the following definition: \begin{defn} Let $\Delta$ be a simplex of $CV_N$. The point of $\Delta$ where all the edges have the same length is called the centre of $\Delta$ and we will denote it by $X_{\Delta}$. \end{defn} \begin{rem}\label{Centres} By the definition of the action of $Out(F_N)$ on $CV_N$, for any automorphism $\psi$, $X_{\psi(\Delta)} = \psi(X_{\Delta})$, i.e.
the centre of $\Delta$ is sent to the centre of $\psi(\Delta)$. \end{rem} \subsection{Stretching factor and Automorphisms} We define a natural notion of distance on $CV_N$, which has been studied in \cite{FM11}. \begin{defn} Let $T,S \in CV_N$. We define the (right) stretching factor as: \begin{equation*} \Lambda(T,S) = \sup \{\frac{\ell_S([a])}{\ell_T([a])} : 1 \neq a \in F_N \} \end{equation*} \end{defn} In the following proposition, we state some main properties of the stretching factor: \begin{prop}[See {\cite{FM11}}] \noindent \begin{enumerate}[{i)}] \item For any two points $T,S \in CV_N$, $\Lambda(T,S) \geq 1$, with equality if and only if $T = S$. \item For any $T,S,Q \in CV_N$, the (non-symmetric) multiplicative triangle inequality for $\Lambda$ holds. In other words, $\Lambda(T,S) \leq \Lambda(T,Q) \Lambda(Q,S)$. \item For any $\psi \in Out(F_N)$ and for any $T,S \in CV_N$, $\Lambda(T,S) = \Lambda((T)\psi, (S)\psi)$. \end{enumerate} \end{prop} Note that $\Lambda$ is not symmetric, i.e. there are points $T,S$ of $CV_N$ so that $\Lambda(T,S) \neq \Lambda(S,T)$. In fact, it is not even quasi-symmetric, in general. However, it is proved in \cite{Kfir-Bestvina} that for any $\epsilon > 0$, its restriction to the $\epsilon$-thick part, $CV_N(\epsilon)$, induces a quasi-symmetric function. More specifically, we have the following: \begin{prop}[{Theorem 24, \cite{Kfir-Bestvina}}]\label{quasi-symmetry} For every $\epsilon > 0$ and for every $T,S \in CV_N(\epsilon)$, there is a uniform constant $K$ (depending only on $\epsilon$ and $N$) so that $\Lambda(T,S) \leq \Lambda(S,T)^K$. \end{prop} One could consider the function $d_R(T,S) = \ln \Lambda(T,S)$, which behaves as an asymmetric distance, or even the symmetrised version $d(T,S) = d_R(T,S) + d_R(S,T)$, which is a metric on $CV_N$.
We choose to work with $\Lambda$, as it is related more naturally to the displacement of an automorphism, which is defined in the next subsection. However, as we will work only with some fixed $\epsilon$-thick part of $CV_N$, it follows by the quasi-symmetry of $\Lambda$ that we could use exactly the same arguments with $d$ instead.\\ \textbf{Balls in Outer Space.} The asymmetry implies that there are three types of balls that we can define, which are different, in general. \begin{enumerate} \item The symmetric closed ball with centre $T \in CV_N$ and radius $r > 0$: \begin{equation*} B(T,r) = \{X \in CV_N : \Lambda(X,T) \Lambda(T,X) \leq r \} \end{equation*} \item The in-going ball with centre $T \in CV_N$ and radius $r > 0$: \begin{equation*} B_{in}(T,r) = \{X \in CV_N : \Lambda(X,T) \leq r \} \end{equation*} \item The out-going ball with centre $T \in CV_N$ and radius $r > 0$: \begin{equation*} B_{out}(T,r) = \{X \in CV_N : \Lambda(T,X) \leq r \} \end{equation*} \end{enumerate} \begin{prop}\label{Compactness of Balls} Let $T \in CV_N$ and $r > 0$. \begin{enumerate}[{i)}] \item The symmetric ball $B(T,r)$ is compact. \item The in-going ball $B_{in}(T,r)$ is compact. \end{enumerate} \end{prop} \begin{proof} \noindent \begin{enumerate}[{i)}] \item The statement is proved in \cite{FM11}, Theorem 4.12. \item Let $T,r$ be as before. Firstly, note that there is some $C > 0$ (the injectivity radius of $T$) for which $\ell_T([g]) \geq C$ for every non-trivial $g \in F_N$. Therefore, for every non-trivial $g \in F_N$ and for every $X \in B_{in}(T,r)$, it holds that: \begin{equation*} \frac{\ell_T([g])}{\ell_X([g])} \leq \Lambda(X,T) \leq r \Rightarrow \ell_X([g]) \geq \frac{\ell_T([g])}{r} \geq \frac{C}{r} \end{equation*} As a consequence, it follows that $B_{in}(T,r) \subset CV_N(C/r)$.
By quasi-symmetry of $\Lambda$ when it is restricted to the thick part (\ref{quasi-symmetry}), we have that there is some $M$ so that $$B_{in}(T,r) \subseteq B(T,r^{2M}).$$ The result now follows by i), as $B_{in}(T,r)$ is a closed subset of the compact set $B(T,r^{2M})$ and therefore compact. \end{enumerate} \end{proof} Note that it is not difficult to find an out-going ball $B_{out}(T,r)$, for some $T \in CV_N$, $r > 0$, which is not compact. It is worth mentioning that $B_{out}$ is weakly convex, while $B_{in}$ is not, in general (as is proved in \cite{QingRafi}). In \cite{FM11}, it is proved that the supremum in the definition of the stretching factor $\Lambda(T,S)$ is attained, as there is a hyperbolic element that realises it. Moreover, such a hyperbolic element can be chosen from a finite list of candidates of $T$. \begin{defn} Let $T \in CV_N$. A hyperbolic element $a$ of $F_N$ is called a candidate with respect to $T$, if the realisation of $a$ as a loop in $T$ is either: \begin{itemize} \item an embedded circle \item a figure eight \item a barbell \end{itemize} The set of candidates with respect to $T$ is called the set of candidates and it is denoted by $Cand(T)$. \end{defn} \begin{prop}[{Prop.~3.15, \cite{FM11}}]\label{Candidates} For any $T,S \in CV_N$, there is a candidate $a \in Cand(T)$ so that: \begin{equation*} \Lambda(T,S) = \frac{\ell_S([a])}{\ell_T([a])} \end{equation*} \end{prop} \textbf{Displacement.} Now let us fix an automorphism $\phi \in Out(F_N)$. The displacement of $\phi$ is $\lambda_{\phi} = \inf \{\Lambda(X, \phi(X)) : X \in CV_N\}$. \begin{defn} Let $\phi$ be an outer automorphism of $F_N$. Then we define the Min-Set of $\phi$ as $Min(\phi) = \{T \in CV_N : \Lambda(T,\phi(T)) = \lambda_{\phi} \}$. \end{defn} Note that $Min(\phi)$ could be empty, in general. However, it is non-empty when $\phi$ is irreducible (see \cite{FM15} for more details).
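To illustrate Proposition \ref{Candidates} on a concrete simplex (a toy computation, not taken from \cite{FM11}): on the simplex of the rank-2 rose, let $T$ give both petals $x_1, x_2$ length $1/2$, and let $S$ give them lengths $1/3$ and $2/3$. The candidates are the petals and the figure eights $x_1 x_2^{\pm 1}$, so

```latex
\[
\Lambda(T,S)=\max\Big\{\tfrac{1/3}{1/2},\,\tfrac{2/3}{1/2},\,\tfrac{1}{1}\Big\}=\tfrac{4}{3},
\qquad
\Lambda(S,T)=\max\Big\{\tfrac{1/2}{1/3},\,\tfrac{1/2}{2/3},\,\tfrac{1}{1}\Big\}=\tfrac{3}{2},
\]
```

realised by the petals $x_2$ and $x_1$ respectively; note that this already exhibits the asymmetry of $\Lambda$.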
\begin{defn} Let $\phi$ be an automorphism of $F_N$. Then $\phi$ is called reducible if there is a free product decomposition $F_N = A_1 \ast \ldots \ast A_k \ast B$ (where $B$ could be trivial) so that every $A_i$, $i=1,\ldots,k$, is a proper free factor and the conjugacy classes of the $A_i$'s are permuted by $\phi$. Otherwise, $\phi$ is called irreducible. \end{defn} We note that by [Th.~8.19, \cite{FM15}], $Min(\phi)$ coincides with the set of train track points of $\phi$, when $\phi$ is irreducible. That is, the points of $CV_N$ that admit a (not necessarily simplicial) train track representative of $\phi$. \subsection{Properties of the Thick Part} In this subsection, we list some properties of the thick part that we will need in the following section. \begin{prop}\label{Thickness1} Let $\phi$ be an irreducible automorphism of $F_N$. Then there is a positive number $\epsilon_1$ (depending only on $N$ and on $\phi$) so that $Min(\phi) \subset CV_N(\epsilon_1)$. One could take $\epsilon_1 = 1/((3N-3)\mu^{3N-2})$ for any $\mu > \lambda_{\phi}$. \end{prop} \begin{proof} For a proof see Proposition 10 of \cite{BestvinaBers}. \end{proof} \begin{lem}\label{Thickness2} Let $\Delta$ be a simplex of $CV_N$ and $T,S$ be two points of $\Delta$. If we further suppose that $T \in CV_N(\epsilon)$, then $\Lambda(T,S) \leq \frac{2}{\epsilon}$. \end{lem} \begin{proof} The proof is an immediate corollary of the candidates theorem (\ref{Candidates}). Firstly, note that candidates depend only on the simplex, and so for any $T,S \in \Delta$, $Cand(T) = Cand(S) = \mathcal{C}$. The result now follows by the remark that for every $g \in \mathcal{C}$ we have $\ell_S([g]) \leq 2$, while for every $1 \neq g$, $\ell_T([g]) \geq \epsilon$ as $T \in CV_N(\epsilon)$.
\end{proof} \begin{rem}\label{Thickness of centre} Let $\Delta$ be any simplex of $CV_N$. Then there is some uniform positive number $\epsilon_2$ (depending only on $N$) so that the centre of $\Delta$ is $\epsilon_2$-thick. One could take $\epsilon_2 = 1 / (3N-3)$. Therefore, it is easy to see that $\epsilon_1 < \epsilon_2$, and so the centre of any simplex is $\epsilon_1$-thick. \end{rem} \begin{proof} The number of edges in any graph is bounded above by $3N-3$. Therefore the length of each edge with respect to the centre of a simplex is bounded below by $1/(3N-3)$. The same number gives us an obvious lower bound for the translation length of any hyperbolic element. \end{proof} \subsection{Connectivity of the Min-set} In this subsection, we state some results from \cite{FM18I} and \cite{FM18II} that we will need in the following section. \begin{defn} Let $T,S \in CV_N$. A \textit{simplicial path} between $T$ and $S$ is given by: \begin{enumerate} \item a finite sequence of points $T = X_0, X_1,\ldots, X_k = S$, such that for every $i = 1,\ldots, k$, there is a simplex $\Delta_i$ such that the simplices $\Delta_{X_{i-1}}$ and $\Delta_{X_{i}}$ are both faces of $\Delta_i$; \item the Euclidean segments $\overline{X_{i-1} X_i}$. \end{enumerate} The simplicial path is then the concatenation of these Euclidean segments. \end{defn} The following results were proved in \cite{FM18I} and \cite{FM18II}, in the more general context of deformation spaces of free products. \begin{thm} \label{Connectivity} Let $\phi$ be an automorphism of $F_N$. \begin{enumerate} \item Let $\Delta$ be a simplex of $CV_N$ and let $X,Y \in Min(\phi) \cap cl(\Delta)$, where $cl(\Delta)$ is the closure of $\Delta$ in $CV_N$. Then the Euclidean segment $\overline{XY}$ is contained in $Min(\phi)$.
\item If $\phi$ is an irreducible automorphism of $F_N$, then $Min(\phi)$ is connected by simplicial paths in $CV_N$; that is, for every $T,S \in Min(\phi)$ there is a simplicial path between $T$ and $S$ which is entirely contained in $Min(\phi)$. \end{enumerate} \end{thm} \begin{proof} \noindent \begin{enumerate} \item This is an immediate consequence of the quasi-convexity of the displacement function; see Lemma 6.2 in \cite{FM18I}. \item This follows from the Main Theorem of \cite{FM18II} (see Theorem 5.3), which implies that the set of minimally displaced points for $\phi$, as a subset of the free splitting complex (i.e. the simplicial bordification of $CV_N$) of $F_N$, is connected by simplicial paths, combined with the fact that the Min-Set of an irreducible automorphism does not enter some thin part of $CV_N$ (see the previous subsection). It is easy to see now that, by connectivity, it is not possible to have minimally displaced points in the boundary without entering any thin part. As a consequence, any minimally displaced point of the free splitting complex must be a point of $CV_N$, and the result follows. \end{enumerate} \end{proof} \section{Results} \subsection{Exponential growth} Firstly, we concentrate on the case where $\phi$ is irreducible with exponential growth. In this case, we have the following theorem: \centralcocompact \begin{proof} We will prove that there is a compact set $\mathcal{K}$ of $CV_N$ so that $Min(\phi) \subseteq \mathcal{K} C(\phi)$, and the theorem follows, as $Min(\phi)$ is closed. As in-going balls are compact (by \ref{Compactness of Balls}), the compact set $\mathcal{K}$ can be chosen to be a closed in-going ball. Let us fix a point $T \in Min(\phi)$. It is then sufficient to prove that there is some positive radius $L$ so that for any $X \in Min(\phi)$, there is some element $\alpha$ of $C(\phi)$ which satisfies $\Lambda(X, (T)\alpha) \leq L$. We will argue by contradiction.
Let us suppose that there is a sequence of points $X_m \in Min(\phi)$, $m=1,2,3,\ldots$, so that $\Lambda(X_m, (T)\alpha) \geq m$ for every $m$ and every $\alpha \in C(\phi)$. Note that there are finitely many $Out(F_N)$-orbits of open simplices in $CV_N$. Therefore, up to taking a subsequence of $X_m$, we can suppose that there is an (open) simplex $\Delta$ and a sequence of (differences of markings) $\psi_m \in Out(F_N)$ so that $X_m \in (\Delta) \psi_m$. Firstly, we apply Proposition \ref{Thickness1} to $\phi$ and we get a constant $\epsilon$ (depending on $N$ and on $\phi$) so that $Min(\phi) \subset CV_N(\epsilon)$. Also, by Remark \ref{Thickness of centre}, the centre of any simplex is $\epsilon$-thick. In particular, the centre $X_{\Delta}$ of $\Delta$ belongs to $CV_N (\epsilon)$ and $(X_m)\psi_m^{-1} \in \Delta$, so by \ref{Thickness2} there is a constant $M = M(\epsilon)$ (which does not depend on $\Delta$ or $m$) which satisfies $\Lambda(X_{\Delta}, (X_m)\psi_m ^{-1} ) \leq M$. It follows that $\Lambda(X_{m},(X_{\Delta})\psi_m ) \leq M$ for every $m$. In addition, by assumption, $X_m \in Min(\phi)$, which is equivalent to $\Lambda(X_m,(X_m)\phi) = \lambda_{\phi} = \lambda$. Therefore, as an easy application of the multiplicative triangle inequality for $\Lambda$ and the fact that $Out(F_N)$ acts on $CV_N$ by isometries with respect to $\Lambda$, the previous relations imply that: $$ \begin{array}{rcl} \Lambda((X_{\Delta})\psi_m, (X_{\Delta} ) \psi_m \phi) & \leq & \Lambda((X_{\Delta})\psi_m, X_m)\, \Lambda(X_m, (X_m)\phi)\, \Lambda ((X_m)\phi, (X_{\Delta}) \psi_m \phi ) \\ \\ & = & \Lambda(X_{\Delta}, (X_m)\psi_m ^{-1})\, \Lambda(X_m, (X_m)\phi)\, \Lambda (X_m, (X_{\Delta})\psi_m ) \\ \\ & \leq & M^2 \lambda. \end{array} $$ The previous inequality is equivalent to $\Lambda( X_{\Delta}, (X_{\Delta})\psi_m \phi \psi_m^{-1} ) \leq M^2 \lambda$ for every $m$.
Note that by Remark \ref{Centres}, $(X_{\Delta})\psi_m \phi \psi_m^{-1}$ are the centres of the corresponding simplices $(\Delta)\psi_m \phi \psi_m^{-1}$ for every $m$. As $CV_N$ is locally finite, there are only finitely many simplices whose centres are within bounded distance from $X_{\Delta}$. Therefore, infinitely many of the simplices $(\Delta)\psi_n \phi \psi_n^{-1}$, $n=1,2,\ldots$, must be the same, which means that after possibly taking a subsequence of $\psi_n$, we have that $(X_{\Delta}) \psi_n \phi \psi_n^{-1} = (X_{\Delta})\psi_m \phi \psi_m^{-1}$ for every $n,m$. As a consequence, the automorphisms $(\psi_n \phi \psi_n^{-1})(\psi_1 \phi^{-1} \psi_1^{-1})$ fix $X_{\Delta}$ for every $n$. On the other hand, the stabiliser of any point of $CV_N$ is finite, and so infinitely many of these automorphisms are forced to be the same; in other words, after taking a subsequence, we can suppose that $\psi_m \phi \psi_m^{-1} = \psi_n \phi \psi_n^{-1}$ for every $n,m$. This is equivalent to $\psi_m^{-1} \psi_n \in C(\phi)$ for every $n,m$, and in particular, by fixing $m=1$, we get that for every $n$ there is some $\alpha_n \in C(\phi)$ so that $\psi_n = \psi_1 \alpha_n$. As a consequence, the $\Lambda$-distance from $(X_{\Delta})\psi_n$ to $(T)\alpha_n$ does not depend on $n$, as $$\Lambda( (X_{\Delta})\psi_n, (T)\alpha_n) = \Lambda( (X_{\Delta}) \psi_1 \alpha_n , (T) \alpha_n) = \Lambda( (X_{\Delta})\psi_1 , T) = C.$$ Note that, as we have already seen, $\Lambda(X_n, (X_{\Delta})\psi_n) \leq M$ for every $n$. Therefore, by applying the triangle inequality again, it follows that $\Lambda( X_n, (T)\alpha_n) \leq M C$ for every $n$.
This contradicts our assumption that $\Lambda( X_n, (T)\alpha_n) \geq n$, as neither $M$ nor $C$ depends on $n$ (note that here, by an abuse of notation, we still denote by $X_n$ a subsequence $X_{k_n}$ of the original $X_n$, but the inequality still holds because $k_n \geq n$ for every $n$). \end{proof} \begin{rem} In the previous proof we used irreducibility only in order to ensure that there is some uniform $\epsilon$ so that $Min(\phi) \subset CV_N (\epsilon)$, which follows from Proposition \ref{Thickness1}. Therefore, we could replace the assumption of irreducibility with this weaker condition. \end{rem} For the centralisers of irreducible automorphisms with exponential growth, the following theorem holds: \autococompact \begin{proof} In light of the previous theorem, it is sufficient to prove that the centraliser of an irreducible automorphism of exponential growth rate is virtually cyclic. In the case when $\phi$ is irreducible with irreducible powers (iwip), this is well known by the main result of \cite{BFH-Laminations0}. Note that if $\phi$ is atoroidal and irreducible with $\lambda_{\phi} > 1$, it follows from \cite{Kapovich} that $\phi$ is iwip. As a consequence, we suppose for the rest of the proof that $\phi$ is toroidal and irreducible. In this case, $\phi$ is a geometric automorphism, i.e. it is induced by a pseudo-Anosov automorphism $f$ of a surface $\Sigma$ with $p\geq1$ punctures which acts transitively on the boundary components (this was a folk theorem until recently, when the details appeared in an appendix of \cite{Mut}). Note that $\phi$ is iwip exactly when $p=1$. It is well known (see \cite{McCarthy}) that the centraliser $C_{MCG(\Sigma)}(f)$ of the pseudo-Anosov $f$ in $MCG(\Sigma)$ is virtually cyclic. Therefore, it is enough to prove that the centraliser $C(\phi) = C_{Out(F_n)}(\phi)$ of $\phi$ in $Out(F_n)$ is isomorphic to $C_{MCG(\Sigma)} (f)$.
Let us denote by $c_1,\ldots,c_p$ the elements corresponding to the peripheral curves (a simple curve around each puncture). We also denote by $Out^*(F_n)$ the subgroup of automorphisms that preserve the set of conjugacy classes of simple peripheral curves (which are the $c_i$'s and their inverses). By the Dehn--Nielsen--Baer Theorem for surfaces with punctures (see Theorem 8.8 of \cite{FarbMarg}), the natural map from $MCG(\Sigma)$ to $Out^*(F_n)$ is an isomorphism. In other words, an automorphism $\psi$ of $Out(F_n)$ is induced by an element of $MCG(\Sigma)$ exactly when it preserves the set of conjugacy classes of the peripheral curves, or equivalently $\psi \in Out^*(F_n)$. We will now show that any element of $C(\phi)$ is induced by an element of $C_{MCG(\Sigma)}(f)$, and the proof follows, as any element of $C_{MCG(\Sigma)} (f)$ induces an element of $C(\phi)$. We can assume without loss of generality, after re-numbering if needed, that $[\phi(c_i)] = [c_{i+1}]$ (mod $p$). If $\psi \in C(\phi)$, then for every $i$, $[\phi \psi (c_i)] = [\psi \phi (c_i) ]= [\psi (c_{i+1})]$ (mod $p$). It follows that $f$ preserves the closed curves $[\psi(c_i)]$, and since $f$ is pseudo-Anosov, we get that any such curve must be a peripheral curve, up to orientation (proper powers can be discounted as $\psi$ is an automorphism). Therefore, $\psi$ must preserve the set of (conjugacy classes of) the peripheral curves, and so by the previous criterion it is induced by an element of $MCG(\Sigma)$. \end{proof} As an immediate application of the previous two theorems, we get the following statement, which seems to be known to the experts but does not appear explicitly in the literature: \begin{thm} Let $\phi$ be an irreducible automorphism of $F_N$ with $\lambda_{\phi} > 1$. Then $Min(\phi) / \langle\phi\rangle$ is compact.
\end{thm} \subsection{Growth rate one} In this subsection, we cover the case of irreducible automorphisms of growth rate one (it is known that they have finite order). This class has been studied by Dicks and Ventura in \cite{Dicks-Ventura}, where they give an explicit description of any such automorphism. \begin{nt}\label{Types of autos} We describe below two types of topological graphs and a graph map for each case: \begin{itemize} \item For the first type, let $p$ be an odd prime. In this case, we define a graph $X_p$ which has rank $p-1$. This graph consists of two vertices and $p$ edges which connect them. In order to describe the graph map, it is more convenient to identify the set of vertices $\{v_0,v_1\}$ with the set $\{0, 1\}$ and the edge set $\{e_1,\ldots,e_p\}$ with $\mathbb{Z}_p$, while all edges are oriented so that their initial vertex is $v_0$ and their terminal vertex is $v_1$. We denote by $\alpha_p$ the graph map which fixes the vertices and sends $e_i$ to $e_{i+1}$ (mod $p$). \item For the second type, let $p,q$ be two primes with $p<q$ (here we do not assume that $p$ is odd). We define a graph $X_{pq}$ of rank $pq-p-q+1$ which consists of $p+q$ vertices and $pq$ edges. More specifically, the set of vertices consists of two distinct subsets $\{v_1,\ldots,v_p\}$, $\{w_1,\ldots,w_q\}$, which can be naturally identified with $\mathbb{Z}_p \sqcup \mathbb{Z}_q$. Also, for every $i \in \mathbb{Z}_p$ and every $j \in \mathbb{Z}_q$, there is a unique edge $e_{i,j}$ with initial vertex $v_i$ and terminal vertex $w_j$, and so we can naturally identify the set of edges with $\mathbb{Z}_{p} \times \mathbb{Z}_q$. We denote by $\alpha_{pq}$ the map which sends $e_{i,j}$ to $e_{i+1,j+1}$ (mod $p$ and mod $q$, respectively). \end{itemize} \end{nt} Note that $X_p$, for an odd prime $p$, can be seen as an element of $CV_{p-1}$ by assigning length $1/(p-1)$ to each edge and by considering a marking $R_{p-1} \rightarrow X_p$.
Similarly, $X_{pq}$ can be seen as an element of $CV_{pq-p-q+1}$. This is true even when $p=2$, as in this case $X_{2q} = X_q$ (as elements of $CV_{q-1}$). We consider this case separately, as the maps $\alpha_q$ and $\alpha_{2q}$ are different ($\alpha_{2q}$ inverts the orientation of each edge, while $\alpha_q$ preserves it). We can now state the main result of \cite{Dicks-Ventura} (it is proved in Proposition 3.6; even though it is stated there in a different form, the following formulation is evident from the proof): \begin{prop}\label{Classification} If $\phi$ is an irreducible automorphism of $F_N$ with $\lambda_{\phi} = 1$, then it can be represented by either $\alpha_p$ for some odd prime $p$, or $\alpha_{pq}$ for some primes $p<q$, as in Notation \ref*{Types of autos}. \end{prop} We need the following lemma: \begin{lem}\label{graph auto} Let $\phi$ be an irreducible automorphism of $F_N$ with $\lambda_{\phi} = 1$. Let $S$ be a point of $Min(\phi)$ and $f : S \to S$ be an optimal representative of $\phi$. Then $f$ is an isometry on $S$, and hence a graph automorphism. Moreover, $f$ is the unique optimal map representing $\phi$ on $S$. \end{lem} \begin{proof} Note that as $\lambda_{\phi}=1$, $Min(\phi)$ is simply the set of fixed points of $\phi$ in $CV_N$. Hence for any loop $\gamma$, the lengths of $\gamma$ and $f(\gamma)$ are the same (as loops) in $S$, and from there it follows easily that $f$ is an isometry. We can lift this isometry to the universal cover and invoke \cite{CM} to conclude that this isometry is unique (up to a covering translation), and hence all optimal maps are the same on $Min(\phi)$. Alternatively, notice that two graph automorphisms give rise to the same action on the associated simplex if and only if the graph maps are the same. \end{proof} \begin{rem*} Note that this uniqueness statement is definitely false when $\lambda_{\phi} > 1$; see \cite{FM18I}, Example 3.14.
\end{rem*} We can now prove our result for the finite order irreducible automorphisms of $F_N$. \lambdaonefix Note that the existence of such a point $T$ is proved in \cite{Dicks-Ventura}; the content of this Theorem is the uniqueness of such a point. \begin{proof} As noted above, the fact that $\lambda_{\phi} = 1$ implies that $Min(\phi) = Fix(\phi)$. By applying Proposition \ref{Classification}, we get that $\phi$ can be represented by either $\alpha_p$ or $\alpha_{pq}$ (for some primes $p,q$) as an isometry of $X_p$ or $X_{pq}$ (as in Notation \ref*{Types of autos}), respectively. For the rest of the proof we denote the graph map by $\alpha$ and the graph by $T$. In particular, if we write $\Delta_T = \Delta$, then $\phi$ fixes the centre of $\Delta$ (i.e. the point where every edge of $T$ is assigned the same length) as an element of $CV_N$. We will prove that $T$ (with the metric given as above) is the unique fixed point of $\phi$ in $CV_N$. By the second assertion of Theorem \ref{Connectivity}, $Fix(\phi) = Min(\phi)$ is connected by simplicial paths in $CV_N$. Now consider some other point $S \in CV_N$ fixed by $\phi$. If we connect $S$ and $T$ by a (simplicial) path in $Fix(\phi)$, we will be able to produce a point $T' \in Fix(\phi)$ such that either $\Delta'$ is a face of $\Delta$ or vice versa (where $\Delta'$ is the simplex defined by $T'$). However, it is clear that $\phi$ cannot fix any other point of $\Delta$; this is because $\alpha$ acts as a cyclic permutation of the edges of $T$, and hence the only metric structure that can be preserved assigns the same length to every edge. Therefore, $\Delta$ must be a face of $\Delta'$. We aim to show that $\Delta$ must be equal to $\Delta'$, which will prove our result (since then the only possibility left is that $S=T$). Since $\Delta$ is a face of $\Delta'$, there exists a forest $F$ whose components are collapsed to produce $T$ from $T'$ (as graphs, absent the metric structure).
However, the connectivity of $Fix(\phi)$ allows us to connect $T$ to $T'$, as metric graphs, via a Euclidean segment in the closure of $\Delta'$ (see the first assertion of \ref{Connectivity}). Thus, without loss of generality, we may assume that the optimal map representing $\phi$ on $T'$ (call it $\alpha'$) leaves $F$ invariant, since $\alpha'$ is an isometry (by Lemma \ref{graph auto}), and so if the volume of $F$ is sufficiently small, $F$ must be sent to itself. Our goal is to show that, under the assumption that $T'$ has no valence one or two vertices, each component of $F$ is a vertex; hence $T=T'$. Now if we ignore the metric structures, we get that on collapsing $F$, $\alpha'$ induces a graph map on the quotient, which is $T$ (as a graph). Since this must also represent $\phi$, we deduce that this induced map is equal to $\alpha$. (Alternatively, collapse $F$ and assign the same length to the surviving edges. The map $\alpha'$ induces an isometry of this graph which is equal to $\alpha$, by \ref{graph auto}.) Let $G$ denote the cyclic group generated by $\alpha$, acting on $T$. By the comments above, $G$ has an action on $T'$ so that collapsing the components of $F$ gives rise to the original action on $T$. For the remainder of the proof, the edges of $F$ will be called \textit{black}, the edges of the complement of $F$ will be called \textit{white}, and vertices that are incident to both black and white edges will be called \textit{mixed}. Accordingly, vertices have {\em black} valence and {\em white} valence, respectively. Now consider a component $C$ of $F$. Note that we can think of $C$ as a {\em vertex} of $T$. Let $v_1, \ldots, v_k$ be the leaves of $C$; that is, those vertices of $C$ with black valence equal to $1$. Since $T'$ can admit no valence one vertices (if we count both black and white edges), each $v_i$ is mixed, incident to both black and white edges.
Let $\partial C$ denote the boundary of $C$ in $T'$; that is, the edges (necessarily white) of $T'$ connecting some $v_i$ to a vertex not in $C$. Since a vertex stabiliser of $T$ in $G$ acts freely and transitively on the edges incident to it, we deduce that $Stab(C)$ (the set-wise stabiliser) acts freely and transitively on $\partial C$. Note that the transitive action on $\partial C$ implies that $Stab(C)$ must also act transitively on the leaves $v_1, \ldots, v_k$. Now $Stab(C)$ has prime order (since vertex stabilisers in $T$ have prime order), so using the orbit-stabiliser theorem, we deduce that either $k=1$ or $Stab(C)$ acts freely and transitively on $v_1, \ldots, v_k$. In the former case, $C$ consists of a single vertex. In the latter case we get that $k = | \partial C| = |Stab(C)|$. This implies that the white valence of each $v_i$ is equal to $1$. But then the valence of each $v_i$ is exactly $2$, leading to our desired contradiction. Note that this argument does not quite work in the cases where $T$ has a vertex of valence $2$, namely in the second case of Notation \ref{Types of autos} when $p=2$. Here we get vertices of valence $2$ since the graph map $\alpha$ acts as a cyclic permutation on the edges along with an inversion, and we subdivide at the midpoints of the edges which are fixed. But in this case, the valence $2$ vertices are a notational convenience, and we can omit them from $T$ and reach the same conclusion with the same argument. \end{proof} The following corollary is now immediate: \centrallambdaonefinite \begin{proof} It follows immediately from the previous Theorem, by noting that $Min(\phi)$ is $C(\phi)$-invariant. \end{proof} In fact, since the graph $T$ in the previous corollary is a graph as in Notation \ref*{Types of autos}, we can get a much more precise description of the centraliser in each case.
\begin{cor} If $\phi$ is an irreducible automorphism of $F_N$ of growth rate one, then $C(\phi)$ fixes a point $X$, where $X$ is as in Notation \ref{Types of autos}. As a consequence, $C(\phi) = \langle\phi\rangle \times \langle\sigma\rangle$, where $\sigma$ is the order two automorphism of $F_N$ that is induced by the graph map of $X$ sending every edge to its inverse. \end{cor} \begin{thebibliography}{99} \bibitem{Kfir-Bestvina} Y. Algom-Kfir and M. Bestvina, \emph{Asymmetry of outer space}, Geom. Dedicata \textbf{156} (2012), 81--92. \bibitem{BestvinaBers} M. Bestvina, \emph{A {B}ers-like proof of the existence of train tracks for free group automorphisms}, Fund. Math. \textbf{214} (2011), no.~1, 1--12. \bibitem{BH-TrainTracks} M. Bestvina and M. Handel, \emph{Train tracks and automorphisms of free groups}, Ann. of Math. \textbf{135} (1992), no.~1, 1--51. \bibitem{BFH-Laminations0} M. Bestvina, M. Feighn and M. Handel, \emph{Laminations, trees, and irreducible automorphisms of free groups}, Geom. Funct. Anal. \textbf{7} (1997), no.~2, 215--244. \bibitem{BGH} M. Bestvina, V. Guirardel and C. Horbez, \emph{Boundary amenability of $Out(F_N)$}, preprint, arXiv:1705.07017 (2017). \bibitem{cv} M. Culler and K. Vogtmann, \emph{Moduli of graphs and automorphisms of free groups}, Invent. Math. \textbf{84} (1986), no.~1, 91--119. \MR{830040} \bibitem{CM} M. Culler and J.~W. Morgan, \emph{Group actions on ${\bf R}$-trees}, Proc. London Math. Soc. (3) \textbf{55} (1987), no.~3, 571--604. \bibitem{Dicks-Ventura} W. Dicks and E. Ventura, \emph{Irreducible automorphisms of growth rate one}, J. Pure Appl. Algebra \textbf{88} (1993), no.~1--3, 51--62. \bibitem{FarbMarg} B. Farb and D. Margalit, \emph{A primer on mapping class groups}, Princeton University Press, 2012. \bibitem{FM11} S. Francaviglia and A.
Martino, \emph{Metric properties of outer space}, Publ. Mat. \textbf{55} (2011), no.~2, 433--473. \MR{2839451} \bibitem{FM12} \bysame, \emph{The isometry group of outer space}, Adv. Math. \textbf{231} (2012), no.~3-4, 1940--1973. \MR{2964629} \bibitem{FM15} S. Francaviglia and A. Martino, \emph{Stretching factors, metrics and train tracks for free products}, Illinois J. Math. \textbf{59} (2015), no.~4, 859--899. \bibitem{FM18I} \bysame, \emph{Displacements of automorphisms of free groups I: Displacement functions, minpoints and train tracks}, preprint, arXiv:1807.02781. \bibitem{FM18II} \bysame, \emph{Displacements of automorphisms of free groups II: Connectedness of level sets}, preprint, arXiv:1807.02782. \bibitem{Handel-Mosher} M. Handel and L. Mosher, \emph{Axes in outer space}, Mem. Amer. Math. Soc., American Mathematical Society, Providence, RI, 2012. \bibitem{Kapovich} I. Kapovich, \emph{Algorithmic detectability of iwip automorphisms}, Bull. Lond. Math. Soc. \textbf{46} (2014), no.~2, 279--290. \bibitem{McCarthy} J. McCarthy, \emph{Normalizers and centralizers of pseudo-Anosov mapping classes}, preprint, June 8 (1994). \bibitem{Mut} J.~P. Mutanguha, \emph{Irreducibility of a free group endomorphism is a mapping torus invariant}, preprint, arXiv:1910.04285 (2019). \bibitem{QingRafi} Y. Qing and K. Rafi, \emph{Convexity of balls in the outer space}, preprint, arXiv:1708.04921 (2017). \bibitem{Rivin} I. Rivin, \emph{Walks on groups, counting reducible matrices, polynomials, and surface and free group automorphisms}, Duke Math. J. \textbf{142} (2008), no.~2, 353--379. \end{thebibliography} \end{document}
\begin{document} \title{A formula for the derivative of the $p$-adic $L$-function of the symmetric square of a finite slope modular form} Let $f$ be a modular form of weight $k$ and Nebentypus $\psi$. By generalizing a construction of \cite{DD}, we construct a $p$-adic $L$-function interpolating the special values of the $L$-function $L(s,\mathrm{Sym}^2(f)\otimes \xi)$, where $\xi$ is a Dirichlet character.\\ When $s=k-1$ and $\xi=\psi^{-1}$, this $p$-adic $L$-function vanishes due to the presence of a so-called trivial zero. We give a formula for the derivative at $s=k-1$ of this $p$-adic $L$-function when the form $f$ is Steinberg at $p$.\\ If the weight of $f$ is even, the conductor is even and squarefree, and the Nebentypus is trivial, this formula implies a conjecture of Benois. \tableofcontents \section{Introduction} The aim of this paper is to prove a conjecture of Benois on trivial zeros in the particular case of the symmetric square representation of a modular form whose associated automorphic representation at $p$ is Steinberg.\\ We begin by recalling the statement of Benois' conjecture. Let $G_{\mathbb{Q}}$ be the absolute Galois group of $\mathbb{Q}$. We fix an odd prime number $p$ and two embeddings \begin{align*} \overline{\mathbb{Q}} \hookrightarrow \mathbb{C}_p, \; \; \overline{\mathbb{Q}} \hookrightarrow \mathbb{C}, \end{align*} and we let $G_{\mathbb{Q}_p}$ be the absolute Galois group of $\mathbb{Q}_p$. Let \begin{align*} V : G_{\mathbb{Q}} \rightarrow \mathrm{GL}_n(\overline{\mathbb{Q}}_p) \end{align*} be a continuous, irreducible, $p$-adic Galois representation of $G_{\mathbb{Q}}$. We suppose that $V$ is the $p$-adic realization of a pure motive $M_{/\mathbb{Q}}$ of weight $0$. We can then associate with $M$ a complex $L$-function $L(s,M)$. Let $M^*=\mathrm{Hom}(M, -)$ be the dual motive of $M$.
Conjecturally, if $M$ is not trivial, $L(s,M)$ is a holomorphic function on the whole complex plane satisfying a functional equation \begin{align*} L(s,M)\Gamma(s,M)= \varepsilon(s,M)L(1-s,M^*)\Gamma(1-s,M^*), \end{align*} where $\Gamma(s,M)$ denotes a product of Gamma functions and $\varepsilon(s,M)=\zeta N^s$, for $N$ a positive integer and $\zeta$ a root of unity. We say that $M$ is {\it critical} at $s=0$ if neither $\Gamma(s,M)$ nor $\Gamma(1-s,M^*)$ has a pole at $s=0$. In this case the complex value $L(0,M)$ is not forced to be $0$ by the functional equation; we shall suppose, moreover, that $L(0,M)$ is not zero. Similarly, we say that $M$ is critical at an integer $n$ if $s=0$ is critical for $M(n)$.\\ Deligne \cite{Del} has defined a non-zero complex number $\Omega(M)$ (defined only modulo multiplication by a non-zero algebraic number), depending only on the Betti and de Rham realizations of $M$, such that conjecturally \begin{align*} \frac{L(0,M)}{\Omega(M)} \in \overline{\mathbb{Q}}. \end{align*} We now suppose that all the above conjectures are true for $M$ and all its twists $M \otimes \varepsilon$, where $\varepsilon$ ranges among the finite-order characters of $1+ p\mathbb{Z}_p$. We suppose moreover that $V$ is a semi-stable representation of $G_{\mathbb{Q}_p}$. Let $\mathbf{D}_{\mathrm{st}}(V)$ be the semistable module associated with $V$; it is a filtered $(\phi,N)$-module, i.e. it is endowed with a filtration and with the action of two operators: a Frobenius $\phi$ and a monodromy operator $N$. We say that a filtered $(\phi,N)$-submodule $D$ of $\mathbf{D}_{\mathrm{st}}(V)$ is regular if \begin{align*} \mathbf{D}_{\mathrm{st}}(V) = \mathrm{Fil}^0(\mathbf{D}_{\mathrm{st}}(V)) \bigoplus D.
\end{align*} To these data Perrin-Riou \cite{PR} associates a $p$-adic $L$-function $L_p(s,V,D)$ which is supposed to {\it interpolate} the special values $\frac{L(0,M\otimes \varepsilon)}{\Omega(M)}$, for $\varepsilon$ as above. In particular, it should satisfy \begin{align*} L_p(0,V,D)= E_p(V,D)\frac{L(0,M)}{\Omega(M)}, \end{align*} where $E_p(V,D)$ denotes a finite product of Euler-type factors, corresponding to a subset of the eigenvalues of $\phi$ on $D$ and on the dual regular submodule $D^*$ of $\mathbf{D}_{\mathrm{st}}(V^*)$ (see \cite[\S 0.1]{BenEZ}).\\ It may happen that some of these Euler factors vanish. In this case, we say that we are in the presence of a {\it trivial zero}. When trivial zeros appear, we would like to be able to retrieve information about the special value $\frac{L(0,M)}{\Omega(M)}$ from the $p$-adic derivative of $L_p(s,V,D)$.\\ Under certain suitable hypotheses (denoted by $\mathbf{C1}-\mathbf{C4}$ in \cite[\S 0.2]{BenLinv2}) on the representation $V$, Benois states the following conjecture: \begin{conj}[Trivial zeros conjecture]\label{MainCoOC} Let $e$ be the number of Euler-type factors of $E_p(V,D)$ which vanish. Then \begin{align*} \lim_{s \rightarrow 0} \frac{L_p(s,V,D)}{s^e e!} = \mathcal{L}(V^*,D^*) E^*(V,D) \frac{L(0,M)}{\Omega(M)}. \end{align*} Here $\mathcal{L}(V^*,D^*)$ is a non-zero number defined in terms of the cohomology of the $(\phi,\Gamma)$-module associated with $V$. \end{conj} We remark that the conjectures of Bloch and Kato tell us that the aforementioned hypotheses $\mathbf{C1}-\mathbf{C4}$ in \cite{BenLinv2} are a consequence of all the assumptions we have made about $M$. In the case where $V$ is ordinary, this conjecture has already been stated by Greenberg in \cite{TTT}. In this situation, the $\mathcal{L}$-invariant can be calculated in terms of the Galois cohomology of $V$.
Conjecturally, the $\mathcal{L}$-invariant is non-zero, but even in the cases when $\mathcal{L}(V,D)$ has been calculated it is hard to say whether it vanishes or not.\\ We now describe the Galois representation for which we will prove the above-mentioned conjecture.\\ Let $f$ be a modular eigenform of weight $k \geq 2$, level $N$ and Nebentypus $\psi$. Let $K_0$ be the number field generated by the Fourier coefficients of $f$. For each prime $\lambda$ of $K_0$, the existence of a continuous Galois representation associated with $f$ is well known: \begin{align*} \rho_{f,\lambda} : G_{\mathbb{Q}} \rightarrow \mathrm{GL}_2(K_{0,\lambda}). \end{align*} Let $\mathfrak{p}$ be the prime above $p$ in $K_0$ induced by the above embedding $\overline{\mathbb{Q}} \hookrightarrow \mathbb{C}_p$; we shall write $\rho_{f}:=\rho_{f,\mathfrak{p}}$.\\ The adjoint action of $\mathrm{GL}_2$ on the Lie algebra of $\mathrm{SL}_2$ induces a three-dimensional representation of $\mathrm{GL}_2$ which we shall denote by $\mathrm{Ad}$. We shall denote by $\mathrm{Ad}(\rho_{f})$ the $3$-dimensional Galois representation obtained by composing $\mathrm{Ad}$ and $\rho_f$.\\ The $L$-function $L(s,\mathrm{Ad}(\rho_f))$ has been studied in \cite{GJ}; unless $f$ has complex multiplication, $L(s,\mathrm{Ad}(\rho_f))$ satisfies the conjectured functional equation, and the Euler factors at primes dividing the conductor of $f$ are well known. For each even $s=2-k, \ldots, 0$, we have that $L(s,\mathrm{Ad}(\rho_f))$ is critical \`a la Deligne, and the algebraicity of the special values has been shown in \cite{Stu}.\\ If $p \nmid N$, we choose a $p$-stabilization $\tilde{f}$ of $f$, i.e. a form of level $Np$ such that $f$ and $\tilde{f}$ have the same Hecke eigenvalues outside $p$ and $U_p \tilde{f} = \lambda_p \tilde{f}$, where $\lambda_p$ is one of the roots of the Hecke polynomial of $f$ at $p$. \\ From now on, we shall suppose that $f$ is of level $Np$ and primitive at $N$.
We point out that the choice of a $p$-stabilization of $f$ induces a regular submodule $D$ of $\mathbf{D}_{\mathrm{st}}(\mathrm{Ad}(\rho_f))$. So, from now on, we shall drop the dependence on $D$ in the notation for the $p$-adic $L$-function.\\ Following the work of many people \cite{Sc,H6,DD}, the existence of a $p$-adic $L$-function associated to $\mathrm{Ad}(\rho_f)$ when $f$ is ordinary (i.e.\ $v_p(\lambda_p) =0$) or when $2v_p(\lambda_p) < k-2$ is known.\\ In what follows, we shall not work directly with $\mathrm{Ad}(\rho_f)$ but with $\mathrm{Sym}^2(\rho_f) =\mathrm{Ad}(\rho_f) \otimes \mathrm{det}(\rho_f)$. For each prime $l$, let us denote by $\alpha_l$ and $\beta_l$ the roots of the Hecke polynomial at $l$ associated to $f$. We define \begin{align*} D_l(X):=(1-\alpha_l^2X)(1-\alpha_l \beta_l X)(1-\beta_l^2 X). \end{align*} For each Dirichlet character $\xi$ we define \begin{align*} \mathcal{L}(s,\mathrm{Sym}^2(f),\xi):=(1-\psi^2\xi^2(2)2^{2k-2-2s})\prod_{l}{D_l(\xi(l)l^{-s})}^{-1}. \end{align*} This $L$-function differs from $L(s,\mathrm{Sym}^2(\rho_f)\otimes \xi)$ by a finite number of Euler factors at primes dividing $N$ and by the Euler factor at $2$. The advantage of dealing with this {\it imprimitive} $L$-function is that it admits an integral expression (see Section \ref{primLfun}) as the Petersson product of $f$ with a certain product of two half-integral weight forms. The presence of the Euler factor at $2$ in the above definition is due to the fact that forms of half-integral weight are defined only for levels divisible by $4$. This forces us to consider $f$ as a form of level divisible by $4$, thus losing one Euler factor at $2$ if $\psi\xi(2)\neq 0$.\\ Let us suppose that $\lambda_p\neq 0$; then we know that $f$ can be {\it interpolated} in a ``Coleman family''. Indeed, let us denote by $\mathcal{W}$ the weight space.
It is a rigid analytic variety over $\mathbb{Q}_p$ such that $\mathcal{W}(\mathbb{C}_p)=\mathrm{Hom}_{\mathrm{cont}}(\mathbb{Z}_p^{\times},\mathbb{C}_p^{\times})$. In \cite{CM}, Coleman and Mazur constructed a rigid-analytic curve $\mathcal{C}$ which is locally finite above $\mathcal{W}$ and whose points are in bijection with overconvergent eigenforms.\\ If $f$ is a classical form of non-critical weight (i.e.\ if $v_p(\lambda_p) < k-1$), then there exists a unique irreducible component of $\mathcal{C}$ to which $f$ belongs. We fix a neighbourhood $\mathcal{C}_F$ of $f$ in this irreducible component; it gives rise to an analytic function $F(\kappa)$ which we shall call a family of eigenforms. Let us denote by $\lambda_p(\kappa)$ the $U_p$-eigenvalue of $F(\kappa)$. We know that $v_p(\lambda_p(\kappa))$ is constant on $\mathcal{C}_F$. For any $k$ in $\mathbb{Z}_p$, let us denote by $[k]$ the weight corresponding to $ z \mapsto z^k$. Then for all $\kappa'$ above $[k']$ such that $v_p(\lambda_p(\kappa')) < k'-1$ we know that $F(\kappa')$ is classical.\\ Let us fix an even Dirichlet character $\xi$. We fix a generator $u$ of $1+p\mathbb{Z}_p$ and we shall denote by $\left\langle z \right\rangle$ the projection of $z$ in $\mathbb{Z}_p^{\times}$ to $1+ p\mathbb{Z}_p$. We prove the following theorem in Section \ref{padicL}: \begin{theo}\label{Tintro} We have a function $L_p(\kappa,\kappa')$ on $\mathcal{C}_{F} \times \mathcal{W}$, meromorphic in the first variable and of logarithmic growth $h=[2 v_p(\lambda_p)]+2$ in the second variable (i.e.\ $L_p(\kappa,[s])/\prod_{i=0}^h \log_p(u^{s-i}-1)$ is holomorphic on the open unit ball).
For any point $(\kappa, \varepsilon(\left\langle z \right\rangle)z^s)$ such that $\kappa$ is above $[k]$, $\varepsilon$ is a finite-order character of $1+p\mathbb{Z}_p$ and $s$ is an integer such that $1\leq s \leq k-1$, we have the following interpolation formula \begin{align*} L_p(\kappa,\varepsilon(\left\langle z \right\rangle)z^s) = C_{\kappa,\kappa'} E_1(\kappa,\kappa')E_2(\kappa,\kappa') \frac{\mathcal{L}(s,\mathrm{Sym}^2(F(\kappa)),\xi^{-1}\varepsilon^{-1}\omega^{1-s})}{\pi^{s}S(F(\kappa))W'(F(\kappa))\left\langle F^{\circ}(\kappa),F^{\circ}(\kappa)\right\rangle}. \end{align*} \end{theo} Here $E_1(\kappa,\kappa')$ and $E_2(\kappa,\kappa')$ are two Euler-type factors at $p$. We refer to Section \ref{padicL} for the notations. Here we want to point out that this theorem fits perfectly within the framework of $p$-adic $L$-functions for motives and their $p$-adic deformations \cite{CPR,Gpvar,PR}.\\ Our first remark is that such a two-variable $p$-adic $L$-function has been constructed in \cite{H6} in the ordinary case and in \cite{Kim} in the non-ordinary case. Its construction is quite classical: first, one constructs a measure interpolating $p$-adically the half-integral weight forms appearing in the integral expression of the $L$-function, and then one applies a $p$-adic version of the Petersson product.\\ Unless $s=1$, the half-integral weight forms involved are not holomorphic but, in Shimura's terminology, {\it nearly holomorphic}. It is well known that nearly holomorphic forms can be seen as $p$-adic modular forms (see \cite[\S 5.7]{K2}).\\ In the ordinary case, we have Hida's ordinary projector, which is defined on the whole space of $p$-adic modular forms and which allows us to project $p$-adic forms onto a finite-dimensional vector space on which a $p$-adic Petersson product can be defined.
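Hida's ordinary projector is $e = \lim_n U_p^{n!}$. A minimal numerical sketch (a toy two-dimensional model of our own, not part of the paper) shows how these powers converge $p$-adically to the idempotent cutting out the unit-eigenvalue part:

```python
# Toy model of Hida's ordinary projector e = lim_n U_p^{n!}: on a
# 2-dimensional space where U_p has eigenvalues 2 (a 3-adic unit) and
# 3 = p (non-unit), the powers U_p^{n!} converge 3-adically to the
# projector onto the unit eigenspace.
import math

p, N = 3, 10
mod = p**N  # we work modulo p^N

def mat_mul(A, B):
    return [[sum(A[i][t] * B[t][j] for t in range(2)) % mod
             for j in range(2)] for i in range(2)]

def mat_pow(A, n):
    R = [[1, 0], [0, 1]]
    while n:
        if n & 1:
            R = mat_mul(R, A)
        A = mat_mul(A, A)
        n >>= 1
    return R

U = [[2, 1], [0, 3]]               # eigenvalues 2 (unit) and 3 (slope 1)
E = mat_pow(U, math.factorial(27))  # n = 27 already suffices mod 3^10

# E is idempotent mod p^N and kills the non-unit eigenspace
assert mat_mul(E, E) == E
assert E[0][0] == 1 and E[1][1] == 0
```

In the non-ordinary case no such limit exists on the whole space, which is exactly the difficulty addressed in the next paragraphs.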
\\ If $f$ is not ordinary, the situation is more complicated: $f$ is annihilated by the ordinary projector, and there exists no other projector which could play the role of Hida's projector. The solution is to consider, instead of the whole space of $p$-adic forms, the smaller subspace of {\it overconvergent} ones.\\ On this space $U_p$ acts as a completely continuous operator, and elementary $p$-adic functional analysis allows us to define, for any given $\alpha \in \mathbb{Q}_{\geq 0}$, a projector onto the finite-dimensional subspace of forms whose slopes with respect to $U_p$ are smaller than $\alpha$. It is then easy to construct a $p$-adic analogue of the Petersson product as in \cite{Pan}.\\ The problem in our situation is that nearly holomorphic forms are not overconvergent. Kim's idea is to construct a space of {\it nearly holomorphic and overconvergent forms} which projects, via a $p$-adic analogue of the holomorphic projector for nearly holomorphic forms, onto the space of overconvergent forms. Unfortunately, some of his constructions and proofs are only sketched, and we prefer to give a new proof of this result using the recent work of Urban.\\ In \cite{UrbNholo}, an algebraic theory of nearly holomorphic forms has been developed; it allows the author to construct a space of {\it nearly overconvergent} forms in which all classical nearly holomorphic forms appear and where one can define an {\it overconvergent} projector onto the subspace of overconvergent forms.
This is enough to construct, as sketched above, the $p$-adic $L$-function.\\ We expect that the theory of nearly overconvergent forms will be very useful for the construction of $p$-adic $L$-functions; as examples, we can give the generalization of the work of Niklas \cite{Niklas} on values of $p$-adic $L$-functions at non-critical integers to finite slope families, or the upcoming work of Eischen, Harris, Li and Skinner on $p$-adic $L$-functions for unitary groups.\\ A second remark is that for all weights such that $k > h$ we obtain, by specializing the weight variable, the $p$-adic $L$-functions constructed in \cite{DD}. These authors construct several distributions $\mu_i$, for $i=1,\ldots, k-1$, satisfying Kummer congruences, and the $p$-adic $L$-function is defined via the Mellin transform. The $\mu_i$ define an $h$-admissible measure $\mu$ in the sense of Amice-V\'elu; in this case the Mellin transform is uniquely determined once one knows $\mu_i$ for $i=1,\ldots,h$.\\ If $k \leq h$, then the number of special values is not enough to determine uniquely an analytic one-variable function. Nevertheless, as in Pollack-Stevens \cite{PolSt}, we can construct a well-defined one-variable $p$-adic $L$-function for eigenforms such that $k \leq h$ (see Section \ref{padicL}).\\ Let $\kappa_0$ be a point of $\mathcal{C}_F$ above $[k_0]$, and let $f:=F(\kappa_0)$. We shall write \begin{align*} L_p(s,\mathrm{Sym}^2(f),\xi):=L_p(\kappa_0,[s]). \end{align*} We now deal with the trivial zeros of this $p$-adic $L$-function. Let $\kappa$ be above $[k]$ and suppose that $F(\kappa)$ has trivial Nebentypus at $p$; then either $E_1(\kappa,\kappa')$ or $E_2(\kappa,\kappa')$ vanishes when $\kappa'(u)=u^{k-1}$. The main theorem of the paper is: \begin{theo}\label{MainThOC} Let $f$ be a modular form of trivial Nebentypus, weight $k_0$ and conductor $Np$, $N$ squarefree, even and prime to $p$.
Then Conjecture \ref{MainCoOC} (up to the non-vanishing of the $\mathcal{L}$-invariant) is true for $L_p(s,\mathrm{Sym}^2(f),\omega^{2-k_0})$. \end{theo} In this case, the form $f$ is Steinberg at $p$ and the trivial zero comes from $E_1$. The proof of this theorem is the natural generalization of the (unpublished) proof of Greenberg and Tilouine in the ordinary case (which has already been generalized to the Hilbert setting in \cite{RosCR,RosH}).\\ We remark that in the forthcoming paper \cite{RosBonn} we remove the hypothesis that $N$ is even and allow $p=2$ when $k_0=2$, using a different method.\\ When we fix $\kappa_0'(u)=u^{k_0}$, we see that $E_1(\kappa,\kappa'_0)$ is an analytic function of $\kappa$. We can then find a factorization $L_p(\kappa,\kappa'_0)=E_1(\kappa,\kappa'_0)L^*_p(\kappa)$, where $L_p^*(\kappa)$ is an {\it improved} $p$-adic $L$-function in the sense of \cite{SSS} (see Section \ref{padicL} for the exact meaning). The construction of the improved $p$-adic $L$-function is similar to that of \cite{HT}; we replace the convolution of two measures by the product of a modular form with an Eisenstein measure. We note that the two-variable $p$-adic $L$-function vanishes on the line $\kappa=[k]$ and $\kappa'=[k-1]$ (the {\it line of trivial zeros}), and we are left to follow the method of \cite{SSS}.\\ The hypotheses on the conductor ensure that $\mathcal{L}(s,\mathrm{Sym}^2(f))$ coincides with $L(s-k+1,\mathrm{Ad}(\rho_f))$. The same method proves Conjecture \ref{MainCoOC} for $\mathrm{Sym}^2(f)\otimes \xi$ for many $\xi$, and $f$ not necessarily of even weight. We refer to Section \ref{Benconj} for a list of such $\xi$. \\ Recently, Dasgupta \cite{Das} has shown Conjecture \ref{MainCoOC} for all weights in the ordinary case. He uses the strategy outlined in \cite{Citro}. \paragraph{Acknowledgements} This paper is part of the author's PhD thesis and we would like to thank J. Tilouine for his constant guidance and attention.
We would like to thank \'E. Urban for sharing his ideas on nearly overconvergent forms with the author, and also for inviting him to Columbia University.\\ We would also like to thank D.~Benois, R.~Brasca, P.~Chojecki, A.~Dabrowski, M.~Dimitrov, R.~Greenberg, F.~Lemma, A.~Sehanobish, S.~Shah, D.~Turchetti, S.~Wang, and J.~Welliaveetil for useful conversations.\\ We thank the anonymous referee for useful comments and suggestions.\\ Part of this work was written during a stay at the Hausdorff Institute during the {\it Arithmetic and Geometry} program, and a stay at Columbia University. The author would like to thank these institutions for their support and for providing optimal working conditions. \section{Nearly holomorphic modular forms} The aim of this section is to recall the theory of nearly holomorphic modular forms from the analytic and geometric points of view, and to construct their $p$-adic analogues, the {\it nearly overconvergent} modular forms. We shall use them in Section \ref{padicLfunc} to construct a two-variable $p$-adic $L$-function for the symmetric square, generalizing the construction of \cite{Pan}. We want to remark that, contrary to the situation of \cite{Pan}, the theory of nearly overconvergent forms is {\it necessary} for the construction of the two-variable $p$-adic $L$-function.\\ We will also construct an eigenvariety parameterizing finite slope nearly overconvergent eigenforms. The main reference is \cite{UrbNholo}; we are very grateful to Urban for sharing this paper, from its earliest versions, with the author. We point out that there is nothing really new in this section; however, in an attempt to make the paper self-contained, we shall give a proof of all the statements which we shall need in the rest of the paper. We will also emphasize certain aspects of the theory we find particularly interesting. For all the unproven propositions we refer to the aforementioned paper.
\subsection{The analytic definition}\label{analytic} Nearly holomorphic forms for $\mbox{GL}_2$ have been introduced and extensively studied by Shimura. His definition is of analytic nature, but he succeeded in proving several algebraicity results. Later, Harris \cite{Har1,Har2} studied them in terms of coherent sheaves on Shimura varieties.\\ Let $\Gamma$ be a congruence subgroup of $\mbox{GL}_2(\mathbb{Z})$ and $k$ a positive integer. Let $\mathcal{H}$ be the complex upper-half plane; we let $\mbox{GL}_2(\mathbb{Q})^+$ act on the space of $\mathcal{C}^{\infty}$ functions $f:\mathcal{H} \rightarrow \mathbb{C}$ in the usual way \begin{align*} f |_{k}\gamma (z)= {\mathrm{det}(\gamma)}^{k/2}{(cz+d)}^{-k}f(\gamma(z)) \end{align*} where $\gamma= \left( \begin{array}{cc} a & b \\ c & d \end{array} \right) $ and $\gamma(z)=\frac{az+b}{cz+d}$. We now give the precise definition of a nearly holomorphic form. \begin{defin} Let $r\geq 0$ be an integer. Let $f:\mathcal{H} \rightarrow \mathbb{C}$ be a $\mathcal{C}^{\infty}$-function; we say that $f$ is a nearly holomorphic form for $\Gamma$ of weight $k$ and degree $r$ if \begin{itemize} \item[i)] for all $\gamma$ in $\Gamma$, we have $f|_k\gamma = f$, \item[ii)] there are holomorphic $f_i(z)$ with $f_r(z) \neq 0$ such that \begin{align*} f(z)= & \sum_{i=0}^{r} \frac{1}{y^i}f_i(z), \end{align*} for $y=\mathrm{Im}(z)$, \item[iii)] $f$ is finite at the cusps of $\Gamma$. \end{itemize} \end{defin} Let us denote by $\mathcal{N}_k^r(\Gamma,\mathbb{C})$ the space of nearly holomorphic forms of weight $k$ and degree at most $r$ for $\Gamma$. When $r=0$, we will write $\mathcal{M}_k(\Gamma,\mathbb{C})$.\\ A simple calculation, as in the case of holomorphic modular forms, tells us that $k \geq 2r$.
\\ Finally, let us notice that we can substitute condition ${\it ii)}$ by \begin{align*} \varepsilon^{r+1}(f)= 0 \end{align*} for $\varepsilon$ the differential operator $-4 y^2 \frac{\partial}{\partial \overline{z}}$. If $f$ belongs to $\mathcal{N}_k^r(\Gamma,\mathbb{C})$, then $\varepsilon(f)$ belongs to $\mathcal{N}_{k-2}^{r-1}(\Gamma,\mathbb{C})$. \\ We warn the reader that except for $i=r$, the $f_i$'s are not modular forms. \\ Let us denote by $\delta_{k}$ the Maa\ss{}-Shimura differential operator $$ \begin{array}{cccc} \delta_k : & \mathcal{N}_k^r(\Gamma,\mathbb{C}) & \rightarrow & \mathcal{N}_{k+2}^{r+1}(\Gamma,\mathbb{C})\\ & f & \mapsto & \frac{1}{2 \pi i}\left( \frac{\partial }{\partial z} + \frac{k}{2 y i} \right) f. \end{array} $$ For any integer $s$, we define $$ \begin{array}{cccc} \delta_k^{s} : & \mathcal{N}_k^r(\Gamma,\mathbb{C}) & \rightarrow & \mathcal{N}_{k+2s}^{r+s}(\Gamma,\mathbb{C})\\ & f & \mapsto & \delta_{k+2s-2} \circ \cdots \circ \delta_{k} f. \end{array} $$ Let us denote by $E_2(z)$ the nearly holomorphic form \begin{align*} E_2(z) = -\frac{1}{24} +\sum_{n=1}^{\infty} \sigma_1(n)q^{n} + \frac{1}{8 \pi y}, \:\: \left( \mbox{ where }\forall n \geq 1 \;\; \sigma_1(n)= \sum_{d \mid n,\, d >0} d\right). \end{align*} It belongs to $\mathcal{N}_2^1(\mbox{SL}_2(\mathbb{Z}),\mathbb{C})$. It is immediate to see that for any $\Gamma$ and any form $f \neq 0$ in $\mathcal{N}_2^1(\Gamma,\mathbb{C})$, there exists no nearly holomorphic form $g$ such that $\delta_0 g (z)=f(z)$. This is an exception, as the following proposition, due to Shimura \cite[Lemma 8.2]{ShH3}, shows. \begin{prop}\label{sumMS} Let $f$ be in $\mathcal{N}_k^r(\Gamma,\mathbb{C})$ and suppose that $(k,r)\neq (2,1)$.
If $k\neq 2r$, then there exists a sequence $(g_i(z))$, $i=0,\ldots,r$, where $g_i$ is in $\mathcal{M}_{k-2i}(\Gamma,\mathbb{C})$, such that \begin{align*} f(z) & = \sum_{i=0}^{r} \delta_{k-2i}^i g_i(z), \end{align*} while if $k=2r$ there exist a sequence $(g_i(z))$, $i=0,\ldots,r-1$, where $g_i$ is in $\mathcal{M}_{k-2i}(\Gamma,\mathbb{C})$, and $c$ in $\mathbb{C}^{\times}$ such that \begin{align*} f(z) & = \sum_{i=0}^{r-1} \delta_{k-2i}^i g_i(z) + c\delta_{2}^{r-1}E_2(z). \end{align*} Moreover, such a decomposition is unique. \end{prop} The importance of such a decomposition lies in the fact that the various $\delta_{k-2i}^i g_i(z)$ are nearly holomorphic forms. This decomposition will be very useful for the study of the Hecke action on the space $\mathcal{N}_k^r(\Gamma,\mathbb{C})$.\\ We can define, as in the case of holomorphic modular forms, the Hecke operators as double coset operators. For every positive integer $l$, we decompose \begin{align*} \Gamma \left( \begin{array}{cc} 1 & 0 \\ 0 & l \end{array} \right)\Gamma = \bigcup_i \Gamma \alpha_i . \end{align*} We define $f(z)|_k T_l = l^{\frac{k}{2}-1}\sum_{i} f(z)|_k \alpha_i $. We have the following relations \begin{align*} l \delta_{k}(f(z)|_k T_l) = & (\delta_{k} f ) |_{k+2} T_l, \\ \varepsilon (f|_k T_l) = & l (\varepsilon f)|_{k-2} T_l. \end{align*} \begin{lemma}\label{deltaT_l} Let $f(z)=\sum_{i=0}^{r} \delta_{k-2i}^i g_i(z) $ in $\mathcal{N}_k^{r}(\Gamma)$ be an eigenform for $T_l$ of eigenvalue $\lambda_f(l)$; then $g_i$ is an eigenform for $T_l$ of eigenvalue $l ^{-i}\lambda_f(l)$. \end{lemma} \begin{proof} It is an immediate consequence of the uniqueness of the decomposition in the previous proposition and of the relation between $\delta_k$ and $T_l$. \end{proof} Following Urban, we give an alternative construction of nearly holomorphic forms as sections of certain coherent sheaves.
Such a description will allow us to define a notion of nearly holomorphic forms over any ring $R$. \\ Let $Y=Y(\Gamma)$ be the open modular curve of level $\Gamma$ defined over $\mathrm{Spec}\left(\mathbb{Z} \right)$, and let $\mathbf{E}$ be the universal elliptic curve over $Y$. Let us consider a minimal compactification $X=X(\Gamma)$ of $Y$ and the Kuga-Sato compactification $\overline{\mathbf{E}}$ of $\mathbf{E}$. Let us denote by $\mathbf{p}$ the projection of $\overline{\mathbf{E}}$ to $X$ and by $\omega$ the sheaf of invariant differentials over $X$, i.e.\ $\omega=\mathbf{p}_{*} \Omega^1_{\overline{\mathbf{E}}/X}(\log (\overline{\mathbf{E}} / \mathbf{E}))$. \\ We define \begin{align*} \mathcal{H}_{\mathrm{dR}}^{1} = R^{1}\mathbf{p}_{*} \Omega^{\bullet}_{\overline{\mathbf{E}}/X}(\log (\overline{\mathbf{E}} / \mathbf{E})); \end{align*} it is the algebraic de Rham cohomology. Let us denote by $\pi: \mathcal{H} \rightarrow \mathcal{H}/\Gamma $ the quotient map; we have over the $\mathcal{C}^{\infty}$-topos of $\mathcal{H}$ the splitting \begin{align*} \pi^{*}\mathcal{H}_{\mathrm{dR}}^{1} \cong \pi^*\omega \oplus \pi^*\overline{\omega} \cong \pi^*\omega \oplus{\pi^*\omega}^{\vee} . \end{align*} Let us denote by $\pi^{*}\mathbf{E}$ the fiber product of $\mathcal{H}$ and $\mathbf{E}$ above $Y$. The fiber above $z \in \mathcal{H}$ is the elliptic curve $\mathbb{C}/(\mathbb{Z}+z\mathbb{Z})$. If we denote by $ \tau$ a coordinate on $\mathbb{C}$, the first isomorphism is given in the basis $\textup{d} \tau$, $\textup{d} \overline{\tau}$, while the second isomorphism is induced by Poincar\'e duality. Let us define \begin{align*} \mathcal{H}_k^r = \omega^{k-r}\otimes \mathrm{Sym}^r(\mathcal{H}_{\mathrm{dR}}^1). \end{align*} The above splitting induces \begin{align*} {\mathcal{H}_k^r} \cong \omega^{k} \oplus \omega^{k-2} \oplus \cdots \oplus \omega^{k-2r}.
\end{align*} We have the Gau\ss{}-Manin connection \begin{align*} \nabla : \mathrm{Sym}^k(\mathcal{H}_{\mathrm{dR}}^1) \rightarrow \mathrm{Sym}^k(\mathcal{H}_{\mathrm{dR}}^1) \otimes \Omega^1_{X/\mathbb{Z}[N^{-1}]}(\log(\mathrm{Cusp})). \end{align*} Recall the descending Hodge filtration on $\mathrm{Sym}^k\left(\mathcal{H}_{\mathrm{dR}}^1\right)$ given by \begin{align*} \mathrm{Fil}^{k-r}\left(\mathrm{Sym}^k\left(\mathcal{H}_{\mathrm{dR}}^1\right)\right) = \mathcal{H}_k^r. \end{align*} In particular, we have \begin{align*} 0 \rightarrow \omega^k \rightarrow \mathcal{H}_k^r \stackrel{ \tilde{\varepsilon}}{\rightarrow} \mathcal{H}_{k-2}^{r-1} \rightarrow 0. \end{align*} By definition, $\nabla$ satisfies Griffiths transversality: \begin{align*} \nabla\mathrm{Fil}^{k-r}\left(\mathrm{Sym}^k\left(\mathcal{H}_{\mathrm{dR}}^1\right)\right) \subset \mathrm{Fil}^{k-r-1}\left(\mathrm{Sym}^k\left(\mathcal{H}_{\mathrm{dR}}^1\right)\right)\otimes \Omega^1_{X/\mathbb{Z}[N^{-1}]}(\log(\mathrm{Cusp})) . \end{align*} Recall the Kodaira-Spencer isomorphism $\Omega^1_{X/\mathbb{Z}[N^{-1}]}(\log(\mathrm{Cusp})) \cong \omega^{\otimes 2}$; the map $\nabla$ then induces a differential operator \begin{align*} \tilde{\delta}_k : \mathcal{H}_k^r \rightarrow \mathcal{H}_{k+2}^{r+1}. \end{align*} We have the following proposition \cite[Proposition 2.2.3]{UrbNholo} \begin{prop} We have a natural isomorphism $H^{0}\left(X,\mathcal{H}_k^r \right) \cong \mathcal{N}_k^r(\Gamma,\mathbb{C})$. Once Hecke correspondences are defined on $(X,\mathcal{H}^r_k)$, the above isomorphism is Hecke-equivariant. \end{prop} \begin{proof} Let us denote $R_1 \mathbf{p}_{*}\mathbb{Z} = \mathcal{H}om(R^1 \mathbf{p}_{*}\mathbb{Z}, \mathbb{Z})$.
Via Poincar\'e duality we can identify \begin{align*} \pi^{*}\mathcal{H}_{\mathrm{dR}}^{1} = \mathcal{H}om(R_1 \mathbf{p}_{*}\mathbb{Z}, \mathcal{O}_{\mathcal{H}}), \end{align*} where $\mathcal{O}_{\mathcal{H}}$ denotes the sheaf of holomorphic functions on $\mathcal{H}$. For all $z \in \mathcal{H}$, we have \begin{align*} \pi^{*}{(R_1 \mathbf{p}_{*}\mathbb{Z})}_{z} = H_1(\mathbb{C}/(\mathbb{Z}+z\mathbb{Z}),\mathbb{Z}) = \mathbb{Z}+z\mathbb{Z}. \end{align*} Let us denote by $\alpha$ resp. $\beta$ the linear form in $\mathrm{Hom}(R_1 \mathbf{p}_{*}\mathbb{Z}, \mathcal{O}_{\mathcal{H}})$ which at the stalk at $z$ sends $a+bz$ to $a $ resp. $b$. This is the dual basis of the homology basis $\gamma_1$, $\gamma_2$. Let $\eta \in H^{0}(X,\mathcal{H}_k^r)$; we can write \begin{align*} \pi^* \eta = \sum_{i=0}^r f_i(z){\textup{d} \tau}^{{\otimes}^{k-i}}{\beta}^{{\otimes}^{i}}. \end{align*} We remark that we have $\beta = \frac{\textup{d}\tau - \textup{d}\overline{\tau}}{2iy}$. Using the Hodge decomposition of ${\mathcal{H}_k^r}_{/\mathcal{C}^{\infty}}$ given above, we project $ \pi^* \eta$ onto $\omega^{{\otimes}^k}$ and we obtain \begin{align*} f(z) = \sum_{i=0}^r \frac{f_i(z)}{(2 i y)^i}. \end{align*} It is easy to check that $f(z)$ belongs to $\mathcal{N}_k^r(\Gamma,\mathbb{C})$ and that such a map is bijective. \end{proof} \begin{rem} This proposition allows us to identify $\tilde{\varepsilon}$ with the differential operator $\varepsilon$ and $\tilde{\delta}_k$ with the Maa\ss{}-Shimura operator $\delta_k$. \end{rem} We now give another description of the sheaf $\mathcal{H}_k^r$; for any ring $R$ we shall denote by $R[X]_r$ the group of polynomials with coefficients in $R$ of degree at most $r$. Let us denote by $B$ the standard Borel of upper triangular matrices of $\mbox{SL}_2$.
We have a left action of $B(R)$ over $\mathbb{A}^1(R) \subset \mathbb{P}^1(R)$ via the usual fractional linear transformations \begin{align*} {\left( \begin{array}{cc} a & b \\ 0 & a^{-1}\end{array} \right)}.X = & \frac{aX+b}{a^{-1}}. \end{align*} We then define a right action of weight $k \geq 0$ of $B(R)$ on $R[X]_r$ as \begin{align*} P(X)\vert_k \left( \begin{array}{cc} a & b \\ 0 & a^{-1} \end{array} \right)= & a^k P\left(a^{-2}X+ba^{-1}\right). \end{align*} If we see $P(X)$ as a function on $\mathbb{A}^1(R)$, then \begin{align*} P(X) \vert_k \gamma = a^k P\left(\left( \begin{array}{cc} a^{-1} & b \\ 0 & a \end{array} \right). X \right). \end{align*} We will denote by $R[X]_r(k)$ the group $R[X]_r$ endowed with this action of the Borel. We now use this representation of $B$ to give another description of $\mathcal{H}_k^r$.\\ We can define a $B$-torsor $\mathcal{T}$ over $Y_{\mathrm{Zar}}$ which consists of isomorphisms $\psi_U: \mathcal{H}^1_{\mathrm{dR}/U} \cong \mathcal{O}(U) \oplus \mathcal{O}(U) $ inducing $\mathcal{O}(U) \cong\omega_{/U} $ on the first component and $\mathcal{O}(U)\cong \omega^{\vee}_{/U}$ on the quotient, for $U$ a Zariski open of $Y$. That is, $\mathcal{T}$ is the set of trivializations of $\mathcal{H}^{1}_{\mathrm{dR}}$ which preserve the line spanned by a fixed invariant differential $\omega$ and the Poincar\'e pairing. We have a right action of $B$ on such a trivialization given by \begin{align*} (\omega,\omega' ) \left( \begin{array}{cc} a & b \\ 0 & a^{-1} \end{array} \right) = (a \omega, a^{-1}\omega' +b \omega). \end{align*} We could define similarly an action of the Borel of $\mbox{GL}_2(R)$, but this would not respect the Poincar\'e pairing.\\ We then define the product $\mathcal{T} \times ^B R[X]_r(k)$, consisting of couples $(t,P(X))$ modulo the relation $(t \gamma, P(X))\sim (t, P(X)\vert_k \gamma^{-1})$, for $\gamma$ in $B$. It is isomorphic to $\mathcal{H}_k^r$ as an $R$-sheaf over $Y$.
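As a quick symbolic sanity check (our own hypothetical sketch, not from the paper), one can verify that on the monomial $X^j$ the diagonal torus acts through the character $a^{k-2j}$, matching the graded pieces $\omega^{k} \oplus \omega^{k-2} \oplus \cdots \oplus \omega^{k-2r}$ of $\mathcal{H}_k^r$ described earlier:

```python
# Symbolic check of the weight-k action P(X)|_k [[a,b],[0,1/a]]
#   = a^k P(a^{-2} X + b/a):
# restricted to the torus (b = 0), the monomial X^j scales by a^{k-2j},
# recovering the grading omega^k + omega^{k-2} + ... + omega^{k-2r}.
import sympy as sp

X, a, b = sp.symbols('X a b', nonzero=True)
k, r = 6, 3  # sample weight and degree

def act(P):
    # weight-k right action of the Borel element [[a, b], [0, 1/a]]
    return sp.expand(a**k * P.subs(X, X / a**2 + b / a))

for j in range(r + 1):
    torus = act(X**j).subs(b, 0)          # restrict to the torus
    assert sp.simplify(torus - a**(k - 2*j) * X**j) == 0
```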
In fact, a nearly holomorphic modular form can be seen as a function \begin{align*} f : \mathcal{T} \rightarrow R[X]_r(k) \end{align*} which is $B$-equivariant. That is, $f$ associates to an element $(E,\mu,\omega, \omega')$ in $\mathcal{T}$ ($\mu$ denotes a level structure) an element $f(E,\mu,\omega, \omega')(X)$ in $R[X]_r$ such that \begin{align*} f(E,\mu, a\omega, a^{-1}\omega' + b \omega) = & a^{-k}f(E,\mu,\omega, \omega')(a^2 X -ba). \end{align*} We are now ready to introduce a {\it polynomial} $q$-expansion principle for nearly holomorphic forms. Let us set $A=\mathbb{Z}\left[ \frac{1}{N}\right]$; let $\mathrm{Tate}(q)$ be the Tate curve over $A[[q]]$, $\omega_{\mathrm{can}}$ the canonical differential and $\mu_{\mathrm{can}}$ the canonical $N$-level structure. \\ We can construct as before (taking $\mathrm{Tate}(q)$ and $A[[q]]$ in place of $\overline{\mathbf{E}}$ and $X$) the Gau\ss{}-Manin connection $\nabla$, followed by the contraction associated to the vector field $ q \frac{\textup{d}}{\textup{d}q}$ \begin{align*} \nabla\left( q \frac{\textup{d}}{\textup{d}q}\right): \mathcal{H}_{\mathrm{dR}}^{1}(\mathrm{Tate}(q)_{/ A((q))}) \rightarrow \mathcal{H}_{\mathrm{dR}}^{1}(\mathrm{Tate}(q)_{/A((q))}). \end{align*} We set $u_{\mathrm{can}} := \nabla\left( q \frac{\textup{d}}{\textup{d}q} \right)(\omega_{\mathrm{can}})$. We remark that $(\omega_{\mathrm{can}},u_{\mathrm{can}})$ is a basis of $\mathcal{H}_{\mathrm{dR}}^{1}(\mathrm{Tate}(q)_{/A((q))})$ and that $u_{\mathrm{can}}$ is horizontal for the Gau\ss{}-Manin connection (moreover $u_{\mathrm{can}}$ is a basis for the {\it unit root subspace}, defined by Dwork, which we will describe later). For any $A$-algebra $R$ and $f$ in $\mathcal{N}_k^r(\Gamma,R)$, we say that \begin{align*} f(q,X) := f(\mathrm{Tate}(q),\mu_{\mathrm{can}},\omega_{\mathrm{can}},u_{\mathrm{can}})(X) \in R[[q]][X] \end{align*} is the {\it polynomial} $q$-expansion of $f$.
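To make the polynomial $q$-expansion concrete, here is a small numerical sketch (hypothetical helper code of our own, not from the paper) for $E_2$, whose polynomial $q$-expansion is the classical $-\frac{1}{24} + \sum_n \sigma_1(n)q^n - \frac{X}{2}$; differentiating in $X$ kills the $q$-expansion down to the constant $-\frac{1}{2}$, one degree lower.

```python
# Polynomial q-expansion of E_2, truncated in q: the coefficient of
# q^n X^0 is sigma_1(n), and the X-coefficient is the constant -1/2.
from fractions import Fraction

def sigma1(n):
    # sum of the positive divisors of n
    return sum(d for d in range(1, n + 1) if n % d == 0)

def E2_qX(prec):
    # encode f(q, X) as {i: list of q-coefficients of X^i}
    f0 = [Fraction(-1, 24)] + [Fraction(sigma1(n)) for n in range(1, prec)]
    f1 = [Fraction(-1, 2)] + [Fraction(0)] * (prec - 1)
    return {0: f0, 1: f1}

def d_dX(f):
    # formal d/dX on polynomial q-expansions (lowers the degree by one)
    return {i - 1: [i * c for c in coeffs] for i, coeffs in f.items() if i > 0}

E2 = E2_qX(7)
assert [E2[0][n] for n in range(1, 7)] == [1, 3, 4, 7, 6, 12]
assert d_dX(E2)[0][0] * 2 == -1   # 2 E_2 differentiates to the constant -1
```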
If we take a form $f$ in $\mathcal{N}_k^r(\Gamma,\mathbb{C})$ written in the form $\sum_{i=0}^r f_i(z)\frac{1}{(-4 \pi y)^i}$, we obtain \begin{align*} f(q,X)= \sum_{i=0}^r f_i(q) X^i \;\;\:\: \left(\mbox{i.e. } X ``=" -\frac{1}{4 \pi y} \right). \end{align*} For example, we have \begin{align*} E_2(q,X) = -\frac{1}{24} +\sum_{n=1}^{\infty} \sigma_1(n)q^{n} - \frac{X}{2}. \end{align*} We have the following proposition \cite[Proposition 2.3]{UrbNholo} \begin{prop} Let $f$ be in $\mathcal{N}_k^r(\Gamma,R)$ and let $\varepsilon (f)$ be in $\mathcal{N}_{k-2}^{r-1}(\Gamma,R)$. Then for all $(E,\mu,\omega, \omega')$ in $\mathcal{T}$ we have \begin{align*} \varepsilon(f) (E,\mu,\omega, \omega') = & \frac{\textup{d}}{\textup{d}X} f (E,\mu,\omega, \omega')(X). \end{align*} \end{prop} Note that if $r!$ is not invertible in $R$, then $\varepsilon$ is {\it not} surjective. We have that $E_2$ is defined over $\mathbb{Z}_p$ for $p\geq 5$. As $\varepsilon \left( 2 E_2(q,X) \right)=-1$, we have that $- 2 E_2(q,X)$ gives a section of the map $\varepsilon : \mathcal{H}^1_{\mathrm{dR}} \rightarrow \omega$. \begin{rem} There is also a representation-theoretic interpretation of $y \delta_k$ in terms of Lie operators and the representation theory of the Lie algebra of $\mbox{SL}_2(\mathbb{R})$. More details can be found in \cite[\S 2.1, 2.2]{Bump}. \end{rem} \subsection{Nearly overconvergent forms}\label{overconvergen} In this section we give the notion of nearly overconvergent modular forms \`a la Urban. Let $N$ be a positive integer and $p$ a prime number coprime to $N$. Let $X$ be $X(\Gamma)$ for $\Gamma=\Gamma_1(N)\cap \Gamma_0(p)$ and let $X_{\mathrm{rig}}$ be the generic fiber of the associated formal scheme over $\mathbb{Z}_p$. Let $A$ be a lifting of the Hasse invariant in characteristic $0$. If $p\geq 5$, we can take $A=E_{p-1}$, the Eisenstein series of weight $p-1$.
For all $v$ in ${\mathbb{Q}}$ such that $v \in [0, \frac{p}{p+1}]$, we define $X_N(v)$ as the set of $x$ in $X(\Gamma_1(N))_{\mathrm{rig}}$ such that $|A(x)|\geq p^{-v}$. The assumption that $ v \leq \frac{p}{p+1}$ is necessary to ensure the existence of the canonical subgroup of level $p$. Consequently, $X_N(v)$ can be seen as an affinoid of $X_{\mathrm{rig}}$ via the map \begin{align*} u : (E,\mu_N) \mapsto (E,\mu_N, C), \end{align*} where $C$ is the canonical subgroup. Let us define $X(v):= u (X_N(v))$. We define $X_{\mathrm{ord}}$ as the ordinary multiplicative locus of $X_{\mathrm{rig}}$, i.e.\ $X_{\mathrm{ord}}=X(0)$. For all $v$ as above, $X(v)$ is a rigid variety and a strict neighborhood of $X_{\mathrm{ord}}$.\\ We remark that the set of $x$ in $X_{\mathrm{rig}}$ such that $|A(x)|\geq p^{-v}$ consists of two disjoint connected components, isomorphic via the Fricke involution, and that $X(v)$ is the connected component containing $\infty$. We define, following Katz \cite{Katz}, the space of $p$-adic modular forms of weight $k$ as \begin{align*} \mathcal{M}_k^{p-\mathrm{adic}}(N)= & H^{0}(X_{\mathrm{ord}},\omega^{{\otimes}^k}). \end{align*} We say that a $p$-adic modular form $f$ is {\it overconvergent} if $f$ can be extended to a strict neighborhood of $X_{\mathrm{ord}}$, that is, if there exists $v > 0$ such that $f$ belongs to $H^{0}(X(v),\omega^{{\otimes}^k})$. Let us define the space of overconvergent modular forms \begin{align}\label{defover} \mathcal{M}_k^{\dagger}(N)= & \varinjlim_{v >0} H^{0}(X(v),\omega^{{\otimes}^k}). \end{align} In the same way, we define the set of {\it nearly overconvergent} modular forms as \begin{align*} \mathcal{N}_k^{r,\dagger}(N)= \varinjlim_{v >0} H^{0}(X(v),\mathcal{H}_k^r). \end{align*} The sheaf $\mathcal{H}_k^r$ is locally free, as $E_2(q,X)$ gives a splitting of $\mathcal{H}^1_{\mathrm{dR}}$, and we can consequently find an isomorphism $H^{0}(X(v),\mathcal{H}_k^r) \cong \mathcal{O}(X(v))^M$.
For $v' < v$, these isomorphisms are compatible with the restriction maps $X(v) \rightarrow X(v')$. The supremum norm on $X(v)$ induces a norm on each $H^{0}(X(v),\mathcal{H}_k^r)$ which makes this space a Banach module over $\mathbb{Q}_p$. Moreover, this allows us to define an integral structure on $H^{0}(X(v),\mathcal{H}_k^r)$. For every $\mathbb{Z}_p$-algebra $R$, we shall denote by $\mathcal{M}_k^{\dagger}(N,R)$, $\mathcal{N}_k^{r,\dagger}(N,R)$ the global sections of the previous sheaves when they are seen as sheaves over $X(v)_{/R}$.\\ We have a correspondence $$ \xymatrix { & C_p\ar[dl]_{p_1} \ar[dr]^{p_2} \\ X(v) & & X(v) .} $$ On the non-compactified modular curve, over $\mathbb{Q}_p$, $C_p$ is the rigid curve classifying quadruplets $(E,\mu_N, C, H)$ with $|A(E)|\geq p^{-v}$, $\mu_N$ a $\Gamma_1(N)$-structure, $C$ the canonical subgroup and $H$ a subgroup of $E[p]$ which intersects $C$ trivially. The projections are explicitly given by \begin{align*} p_1 (E,\mu_N, C, H) = & (E,\mu_N, C),\\ p_2 (E,\mu_N, C, H) = & (E/H,\mathrm{Im}(\mu_N), E[p]/H). \end{align*} We remark that the theory of canonical subgroups ensures that if $v\leq \frac{1}{p+1}$ then $E[p]/H$ is the canonical subgroup of $E/H$ (and the image of $C$ modulo $H$, of course). The map $p_2$ induces an isomorphism $C_p \cong X\left(\frac{v}{p}\right)$.\\ We define the operator $U_p$ on $H^{0}(X(v),\mathcal{H}_k^r)$ as the following composition \begin{align*} H^{0}(X(v),\mathcal{H}_k^r) \rightarrow H^{0}(X(v),p_2^*\mathcal{H}_k^r) \stackrel{p^{-1}\mathrm{Trace}(p_1)}{\rightarrow} H^{0}(X(v),\mathcal{H}_k^r). \end{align*} The fact that $p_2$ is an isomorphism implies the well-known property that {\it $U_p$ improves overconvergence}.\\ We can construct correspondences as in \cite[\S 4]{Pil} to define operators $T_l$ for $l\nmid Np$ and $U_l$ for $l \mid N$.\\ Let $A$ be a Banach ring, and let $U:M_1 \rightarrow M_2$ be a continuous morphism of $A$-Banach modules.
We set \begin{align*} |U |= \sup_{m \neq 0} \frac{|U(m)|}{|m|}. \end{align*} This norm induces a topology on the module of continuous morphisms of $A$-Banach modules. We say that an operator $U$ is {\it of finite rank} if it is a continuous morphism of $A$-Banach modules such that its image is of finite rank over $A$. We say that $U$ is {\it completely continuous} if it is a limit of finite rank operators. Completely continuous operators admit a Fredholm determinant \cite[Proposition 7]{SerreCC}.\\ We give to $H^{0}(X(v),\omega^{{\otimes}^k})$ the structure of a Banach space for the norm induced by the supremum norm on $X(v)$; the transition maps in (\ref{defover}) are completely continuous and we complete $M_k^{\dagger}(N)$ for this norm. It is known that $U_p$ acts as a completely continuous operator on this completion; its Fredholm determinant is independent of $v$, for $v$ big enough \cite[Theorem B]{Col}. Similarly, we have that $U_p$ is completely continuous on $\mathcal{N}_k^{r,\dagger}(N)$. Indeed, $U_p$ is the composition of the restriction to $X\left(\frac{v}{p}\right)$ and a trace map.\\ On $q$-expansions, $U_p$ amounts to \begin{align*} \sum_{i=0}^r \sum_n a_n^{(i)}q^n X^i \mapsto \sum_{i=0}^r \sum_n a_{pn}^{(i)}q^n p^i X^i. \end{align*} We now recall that we have on ${\mathcal{H}^1_{\mathrm{dR}}}_{ / X_{\mathrm{ord}}}$ a splitting $\omega \oplus U$. Here $U$ is a Frobenius-stable line on which the Frobenius is invertible. Some authors call this splitting the {\it unit root splitting}. It induces ${\mathcal{H}_k^r}_{/X_{\mathrm{ord}}}=\omega^{{\otimes}^k}\oplus\cdots \oplus U^{{\otimes}^r}\otimes \omega^{{\otimes}^{k-r}}$.
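The $q$-expansion formula for $U_p$ above already bounds the possible slopes in terms of the degree; we spell out this standard computation, which is invoked in the classicality criterion below. Suppose $f=\sum_{i=0}^{r} \sum_{n} a_n^{(i)} q^n X^i$ has exact degree $r$ and satisfies $U_p f = \lambda f$. Comparing the coefficients of $q^n X^r$ on both sides gives \begin{align*} \lambda\, a_n^{(r)} = p^{r} a_{pn}^{(r)} \quad \text{for all } n, \qquad \text{hence} \qquad a_{p^m n}^{(r)} = \left(\lambda p^{-r}\right)^{m} a_n^{(r)}. \end{align*} If $v_p(\lambda) < r$, the right-hand side would have valuation tending to $-\infty$ as $m$ grows, contradicting the boundedness of the coefficients of $f$; therefore $v_p(\lambda) \geq r$.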
We have then \cite[Proposition 3.2.4]{UrbNholo} \begin{prop}\label{ordproj} The morphism $$ \begin{array}{ccccc} H^{0}(X(v),\mathcal{H}_k^r) & \rightarrow & H^{0}(X_{\mathrm{ord}},\mathcal{H}_k^r) & \rightarrow & H^{0}(X_{\mathrm{ord}},\omega^{{\otimes}^k}) \\ f(X) & \mapsto & f(X)_{{|}_{X_{\mathrm{ord}}}} & \mapsto & f(0) \end{array}$$ is injective and commutes with $q$-expansion. \end{prop} Note that the injectivity of the composition is a remarkable result. A consequence is that every nearly overconvergent form has a unique degree $r$ \cite[Corollary 3.2.5]{UrbNholo}.\\ We remark that we have two differential maps $$\begin{array}{cccc} \varepsilon :& \mathcal{N}_k^{r,\dagger}(\Gamma) & \rightarrow & \mathcal{N}_{k-2}^{r-1,\dagger}(\Gamma),\\ \delta_k :& \mathcal{N}_k^{r,\dagger}(\Gamma)& \rightarrow & \mathcal{N}_{k+2}^{r+1,\dagger}(\Gamma). \end{array} $$ Both of them are induced by functoriality from the maps defined in Section \ref{analytic} at the level of sheaves. We want to mention that Cameron in his PhD thesis \cite[Definition 4.3.6]{Cameron} gives an analogue of the Maa\ss{}-Shimura differential operator for rigid analytic modular forms on the Cerednik-Drinfeld $p$-adic upper half plane. It would be interesting to compare his definition with this one.\\ The above-mentioned splitting allows us to define a map $$ \begin{array}{ccccc} \Theta : \mathcal{M}_k^{\dagger}(N) & \stackrel{\delta_k}{\rightarrow} & \mathcal{N}_{k+2}^{1,\dagger}(N) & \rightarrow & \mathcal{M}_{k+2}^{p-\mathrm{adic}}(N) \end{array}$$ which at the level of $q$-expansions is $q\frac{\textup{d}}{\textup{d} q}$. We have the following application of Proposition \ref{ordproj}. \begin{coro} Let $f$ be an overconvergent form of weight different from $0$; then $\Theta f$ is not overconvergent.
\end{coro} We have the following proposition \cite[Lemma 3.3.4]{UrbNholo}. \begin{prop}\label{sumMSover} Let $(k,r) $ be different from $(2,1)$ and $f$ in $\mathcal{N}_k^{r,\dagger}(N,R)$. If $k\neq 2r$, then there exist $g_i$, $i=0,\ldots,r$, in $M^{\dagger}_{k-2i}(N,R)$ such that \begin{align*} f & = \sum_{i=0}^{r} \delta_{k-2i}g_i, \end{align*} while if $k=2r$ there exist a sequence $(g_i)$, $i=0,\ldots,r-1$, with each $g_i$ in $M^{\dagger}_{k-2i}(N,R)$, and $c$ in $R$ such that \begin{align*} f & = \sum_{i=0}^{r-1} \delta_{k-2i}g_i + c\delta_{2}^{r-1}E_2. \end{align*} Moreover, such a decomposition is unique. \end{prop} We conclude with a sufficient condition for a nearly overconvergent modular form to be classical. \begin{prop} Let $k$ be a classical weight and $f$ in $\mathcal{N}_k^{r,\dagger}(N)$ an eigenform for $U_p$ of slope $\alpha$. Then $r\leq \alpha$. If $\alpha < k-1 +r$, then $f$ is classical. \end{prop} \begin{proof} The first part is a trivial consequence of the above formula for $U_p$ acting on $q$-expansions. For the second part, the hypotheses of Proposition \ref{sumMSover} are satisfied. We apply $\varepsilon^r$ to $f$ to see that $g_r$ is of degree $0$, slope $\alpha - r$ and weight $k-2r$. It is then known that $g_r$ is classical. We conclude by induction on the degree. \end{proof} Let $\alpha \in \mathbb{Q}_{\geq 0}$ and let $r \geq 0$ be an integer such that $r\leq \alpha$. We say that a positive integer $k$ is a {\it non-critical weight} with respect to $\alpha$ and $r$ if $\alpha < k-1 +r$.\\ \begin{rem} In particular, if $\alpha=0$ then $r=0$. This should convince the reader that the ordinary projector is a $p$-adic analogue of the holomorphic projector. \end{rem} \subsection{Families}\label{families} In this subsection we construct families of nearly overconvergent forms.
We start by recalling the construction of families of overconvergent modular forms as done in Andreatta-Iovita-Stevens \cite{AIS} and Pilloni \cite{Pil}.\\ The authors of the first paper use $p$-adic Hodge theory to construct their families, while Pilloni's approach is more in the spirit of Hida's theory. In our exposition we will follow the article \cite{Pil}.\\ Let us denote by $\mathcal{W}$ the weight space. It is a rigid analytic variety over $\mathbb{Q}_p$ such that $\mathcal{W}(\mathbb{C}_p)=\mathrm{Hom}_{\mathrm{cont}}(\mathbb{Z}_p^{\times},\mathbb{C}_p^{\times})$. For every integer $k$, we denote the continuous homomorphism $z \mapsto z^{k}$ by $[k]$.\\ Let $\Delta=\mu_{p-1}$ if $p >2$ (resp. $\Delta=\mu_2$ if $p=2$) and let $B(1,1^{-})$ be the open unit ball centered at $1$. It is known that $\mathcal{W}$ is an analytic space isomorphic to $\Delta \times B(1,1^{-})$; let us denote by $\mathcal{A}(\mathcal{W})$ the ring of analytic functions on $\mathcal{W}$. For $t$ in $(0,\infty)$, we define \begin{align*} \mathcal{W}(t) := & \left\{ (\zeta, z) \in \mathcal{W}(\mathbb{C}_p) \,|\, |z-1| \leq p^{-t} \right\}. \end{align*} Recall that $\Delta$ is the (cyclic) torsion subgroup of $\mathbb{Z}_p^{\times}$. We define $\kappa$, the universal weight, as $$ \begin{array}{cccc} \kappa : & \mathbb{Z}_p^{\times} & \rightarrow & {(\mathbb{Z}_p[\Delta][[S]])}^{\times} \\ & (\zeta, z) & \mapsto & \tilde{\zeta} (1+S)^{\frac{\log_p(z)}{\log_p(u)}}, \end{array} $$ where $\tilde{\zeta}$ is the image of $\zeta$ via the tautological character $\Delta \rightarrow {(\mathbb{Z}_p[\Delta])}^{\times}$ and $u$ is a topological generator of $1+p\mathbb{Z}_p$. We can see $\kappa$ as a local coordinate on the open unit ball $\lgr 1 \rgr \times B(1,1^{-})$.\\ For any weight $\kappa_0$, both of the aforementioned papers construct an invertible sheaf $\omega^{\kappa_0}$ over $X(v)$ whose sections correspond to overconvergent forms of weight $\kappa_0$.
This construction can be globalized over $\mathcal{W}(t)$ into a coherent sheaf $\omega^{\kappa}$ over $X(v)\times \mathcal{W}(t)$ (for suitable $v$ and $t$) such that the corresponding sections give rise to families of holomorphic modular forms.\\ We describe Pilloni's construction in more detail. Let $n,v$ be such that $0 \leq v < {\frac{1}{p^{n-2}(p+1)}}$; there exists then a {\it canonical subgroup} $H_n$ of level $n$ over $X(v)$. It is possible to define a rigid variety $F^{\times}_n(v)$ above $X(v) $ whose $\mathbb{C}_p$-points are triplets $(x,y;\omega)$ where $x$ is an element of $X(v)$ corresponding to an elliptic curve $E_x$, $y$ is a generator of $H_n^D$ (the Cartier dual of $H_n$) and $\omega$ is an element of $e^{\ast}\Omega_{E_x/\mathbb{C}_p}$ (for $e$ the unit section $X(v)\rightarrow E_x$) whose restriction to $e^{\ast}\Omega_{H_n/\mathbb{C}_p}$ is the image of $y$ via the Hodge-Tate map \cite[\S 3.3]{Pil}. Locally, $F_n^{\times}(v)$ is a trivial fibration of $X(v)$ in $p^{n-1}(p-1)$ balls.\\ On $F_n^{\times}(v)$ we have an action of ${(\mathbb{Z}/p^{n})}^{\times}$, which induces an action of $\mathbb{Z}_p^{\times}$. For each $t$, there exist $v$ and $n$ satisfying the above condition such that any $\kappa_0$ in $\mathcal{W}(t)$ acts on $F_n^{\times}(v)$. Let us denote by $\pi_n(v)$ the projection from $F_n^{\times}(v)$ to $X(v)$; $\omega^{\kappa_0}$ is by definition the $\kappa_0$-eigenspace of $\left({\pi_n(v)}_*\mathcal{O}_{F_n^{\times}(v)}\right)$ (which we shall denote by $\left({\pi_n(v)}_*\mathcal{O}_{F_n^{\times}(v)}\right) \left\langle \kappa_0 \right\rangle$) for the action of $\mathbb{Z}_p^{\times}$.
If $k$ is a positive integer, then $\omega^{[k]} = \omega^{\otimes k}$, for $\omega$ the sheaf defined in Section \ref{analytic}.\\ A family of overconvergent modular forms is then an element of \begin{align*}\mathcal{M}(N,\mathcal{A}(\mathcal{W}(t))) := & \varinjlim_v H^0\left(X(v) \times \mathcal{W}(t), \omega^{\kappa} \right),\\ \omega^{\kappa} = & \left({\pi_n(v)}_*(\mathcal{O}_{F^{\times}_n(v)}\hat{\otimes} \mathcal{O}_{\mathcal{W}}) \right)\left\langle \kappa \right\rangle. \end{align*} The construction commutes with base change in the sense that for weights $\kappa_0 \in \mathcal{W}(t)(K)$ we have \begin{align*} \omega^{\kappa}\otimes_{\kappa_0}K = \omega^{\kappa_0}_{/K}. \end{align*} The operator $U_p$ defined in the previous section is completely continuous on $\mathcal{M}(N,\mathcal{A}(\mathcal{W}(t)))$. Let $Q_0(\kappa,T)$ be its Fredholm determinant; it is independent of $v$ and belongs to $\mathbb{Z}_p[[\kappa]][[T]] $ \cite[Theorem 4.3.1]{CM}.\\ This definition includes the families of overconvergent modular forms \`a la Coleman. Let $\zeta^*(\kappa)$ be the $p$-adic $\zeta$-function; we set \begin{align}\label{GEis} \tilde{E}(\kappa)= \frac{\zeta^*(\kappa)}{2} + \sum_{n}\sigma^*_n(\kappa) q^n, \end{align} where $\sigma^*_n(\kappa) = \sum_{1 \leq d | n, (d,p)=1} \kappa(d)d^{-1}$, and \begin{align*} E(\kappa) = \frac{2}{\zeta^*(\kappa)} \tilde{E}(\kappa). \end{align*} It is known that the zeros of $E(\kappa)$ are {\it far enough} from the ordinary locus \cite[B1]{Col}; in particular, there exists $v$ such that $ E(\kappa)$ is invertible on $X(v)\times \mathcal{W}(t)$. In \cite[B4]{Col} a family of modular forms $F(\kappa)$ is defined as an element of $\mathcal{A}(\mathcal{W}(t))[[q]]$ such that for all $\kappa \in \mathcal{W}(t)$, we have $\frac{F(\kappa)}{E(\kappa)}$ in $H^0(X(v)\times \mathcal{W}(t), \mathcal{O}_{X(v) \times \mathcal{W}})$.
The fact that $E(\kappa)$ is invertible induces an isomorphism \begin{align*} H^0(X(v)\times \mathcal{W}(t), \mathcal{O}_{X(v) \times \mathcal{W}}) \stackrel{ \times E(\kappa)}{\longrightarrow} H^0\left(X(v) \times \mathcal{W}(t), \omega^{\kappa} \right). \end{align*} Let us define the following coherent sheaf \begin{align*} \mathcal{H}_{\kappa}^{r}= & \omega^{\kappa[-r]} \otimes \mathrm{Sym}^r(\mathcal{H}_{\mathrm{dR}}^1); \end{align*} we then define, for every affinoid $\mathcal{U} \subset \mathcal{W}(t)$, the family of nearly overconvergent forms of degree $r$ \begin{align*} \mathcal{N}^{r}(N,\mathcal{A}(\mathcal{U}))= & \varinjlim_{v} H^{0}(X(v)\times \mathcal{U},\mathcal{H}_{\kappa}^{r}\hat{\otimes} \mathcal{O}_{\mathcal{U}}). \end{align*} We remark that we can choose $v$ small enough such that $H^{0}(X(v)\times \mathcal{U},\omega^{[-r]} \otimes \mathrm{Sym}^r(\mathcal{H}_{\mathrm{dR}}^1)\hat{ \otimes} \mathcal{O}(\mathcal{U}))$ is isomorphic via multiplication by $E(\kappa)$ to $H^{0}(X(v)\times \mathcal{U},\mathcal{H}_{\kappa}^{r}\otimes \mathcal{O}(\mathcal{U}))$. We shall call the elements of the former space families of nearly overconvergent forms {\it \`a la Coleman}. \\ We can define $\mathcal{N}^{\infty}(N,\mathcal{A}(\mathcal{U}))$ as the completion of $\cup_r\mathcal{N}^{r}(N,\mathcal{A}(\mathcal{U}))$ with respect to the Fr\'echet topology. For the interested reader, let us mention that there exist forms in $\mathcal{N}^{\infty}(N,\mathcal{A}(\mathcal{U}))$ whose polynomial $q$-expansion is no longer a polynomial in $X$ but a genuine formal power series.
Indeed, we can trivialize ${\mathcal{H}_{\kappa}^{r}}_{/X(v)\times \mathcal{W}(t)}$ as $\oplus_{i=0}^{r} \omega^{\kappa[-2i]}$ and take a sequence $f_r=(f_{r,0},\ldots,f_{r,r})$, with $f_r$ in $\mathcal{N}^{r}(N,\mathcal{A}(\mathcal{U}))$, such that $f_{r,i}=f_{r+1,i}$ and $f_{r,r}$ becomes smaller and smaller for the norm induced by $X(v)$.\\ There is a sheaf-theoretic interpretation of $\mathcal{N}^{\infty}(N,\mathcal{A}(\mathcal{U}))$. Let $\mathcal{A}n(\mathbb{Z}_p)$ be the ring of analytic functions on $\mathbb{Z}_p$ with values in $\mathcal{A}(\mathcal{U})$; we can define the vector bundle in Fr\'echet spaces \begin{align*} \mathcal{H}_{\kappa}^{\infty} = \mathcal{T} \times^{B}\mathcal{A}n(\mathbb{Z}_p). \end{align*} \begin{rem} Since in the rest of the paper we will work with nearly overconvergent forms of bounded slope, there is no particular interest in taking $r=\infty$: as we have already mentioned, the degree gives a lower bound on the slopes which can appear. However, we think that the case $r=\infty$ could have some interesting applications, both geometric and representation-theoretic. \end{rem} We can see that $U_p$ acts completely continuously on $\mathcal{N}_{\kappa}^{r}(N,\mathcal{A}(\mathcal{W}(t)))$ using \cite[Proposition A5.2]{Col}, as it is defined via the correspondence $C_p$. We have on ${\mathcal{N}^{r}(N,\mathcal{A}(\mathcal{W}(t)))}$ an action of the Hecke algebra $\mathbb{T}^r(N,\mathcal{A}(\mathcal{W}(t)))$ generated by the Hecke operators $T_l$, for $l$ coprime to $Np$, and $U_l$ for $l$ dividing $Np$. We will denote by $Q_r(\kappa,T)$ the Fredholm determinant of $U_p$ on $\mathcal{N}_{\kappa}^{r}(\Gamma,\mathcal{A}(\mathcal{W}(t)))$.
To lighten the notation, we will sometimes write $Q_r(T)$ for $Q_r(\kappa,T)$ if there is no possibility of confusion.\\ \begin{lemma} For $v$ small enough and $t$ big enough (see \cite[\S 5.1]{Pil}), $H^{0}(X(v)\times \mathcal{W}(t),\mathcal{H}_{\kappa}^{r}\otimes \mathcal{O}_{\mathcal{W}})$ is a direct factor of a potentially orthonormalizable $\mathcal{A}(\mathcal{W}(t))$-module (see \cite[page 7]{Buz} for the definition of potentially orthonormalizable). \end{lemma} \begin{proof} The proof is exactly the same as that of \cite[Corollary 5.2]{Pil}, so we only sketch it. Let \begin{align*} M:= H^{0}(X(v)\times \mathcal{W}(t),\mathcal{H}_{\kappa}^{r}\otimes \mathcal{O}_{\mathcal{W}}), \qquad A:= \mathcal{A}(\mathcal{W}(t)). \end{align*} Let us denote by $B$ the function ring of $X(v)$ and by $B'$ the function ring of ${(H_n^{D})}^{\times}$ above $X(v)$. We know that $B'$ is an \'etale $B$-algebra of Galois group ${(\mathbb{Z}/p^n\mathbb{Z})}^{\times}$. As $M$ is a direct summand of $M'=M\otimes_B B'$, it will be enough to show that the latter is potentially orthonormalizable. Let ${(\mathcal{U}_i)}_{i=1,\ldots,I} \rightarrow {(H_n^{D})}^{\times}$ be a finite cover by open sets such that for all $i$, $F_n^{\times}(v)\times_{X(v)}\mathcal{U}_i$ is a disjoint union of $p^{n-1}(p-1)$ copies of $\mathcal{U}_i$. The augmented \v{C}ech complex associated to this cover, \begin{align*} 0 \rightarrow M' \rightarrow M_{1} \rightarrow \cdots \rightarrow M_I \rightarrow 0, \end{align*} is exact. Let $k \geq 1$ be an integer and $\underline{i}$ be a subset of $\left\{1,2,\ldots, I \right\}$ of cardinality $k$. By construction $M_k$ is a sum of modules of the type $M' \hat{\otimes}_{B'} B_{\underline{i}}$ for $B_{\underline{i}}={\hat{\otimes}_{j \in \underline{i}}}\mathcal{O}(\mathcal{U}_{j})$, where the tensor product is taken over $B'$.
By the choice of $\mathcal{U}_i$, each one of these modules is free of rank $r+1$ over $A \hat{\otimes} B_{\underline{i}}$. As $B_{\underline{i}}$ is potentially orthonormalizable over $\mathbb{Q}_p$, we know that $A \hat{\otimes} B_{\underline{i}}$ is potentially orthonormalizable over $A$. We can conclude by \cite[Lemma 5.1]{Pil}.\\ \end{proof} We can thus apply Buzzard's eigenvariety machinery \cite[Construction 5.7]{Buz}. This means that to the data \begin{align*} (\mathcal{A}(\mathcal{W}(t)),{\mathcal{N}^{r}(N,\mathcal{A}(\mathcal{W}(t)))},\mathbb{T}^r(N,\mathcal{A}(\mathcal{W}(t))),U_p) \end{align*} we can associate a one-dimensional rigid-analytic variety $\mathcal{C}^r(t)$. Let us denote by $Z$ the zero locus of $Q_r(T)$ on $\mathcal{W}(t) \times \mathbb{A}^1_{\mathrm{An}}$ (see \cite[\S 3.4]{UrbNholo}). The rigid-analytic variety $\mathcal{C}^r(t)$ is characterized by the following properties. \begin{itemize} \item We have a finite map $\mathcal{C}^r(t) \rightarrow Z$. \item There is a cover of $Z$ by affinoids $Y_i$ such that $X_i = Y_i \times_Z \mathcal{W}(t)$ is an open affinoid of $\mathcal{W}(t)$ and $Y_i \rightarrow X_i$ is finite. \item Above $X_i$ we can write $Q_r(T)=R_r(T)S_r(T)$ with $R_r(T)$ a polynomial in $T$ whose constant term is $1$ and $S_r(T)$ a power series in $T$ coprime to $R_r(T)$. \item Let $R^*_r(T)=T^{\mathrm{deg}(R_r(T))} R_r(T^{-1})$. Above $X_i$ we have a $U_p$-invariant decomposition \begin{align*} {\mathcal{N}^{r}(N,\mathcal{A}(X_i))}={\mathcal{N}^{r}(N,\mathcal{A}(X_i))}^{*} \bigoplus {\mathcal{N}^{r}(N,\mathcal{A}(X_i))}', \end{align*} such that $R^*_r(U_p)$ acts invertibly on ${\mathcal{N}^{r}(N,\mathcal{A}(X_i))}^{'}$ and as $0$ on ${\mathcal{N}^{r}(N,\mathcal{A}(X_i))}^*$. Moreover, the rank of ${\mathcal{N}^{r}(N,\mathcal{A}(X_i))}^{*}$ over $\mathcal{A}(X_i)$ is $\mathrm{deg}(R_r(T))$. \item There exists a coherent sheaf $\widetilde{\mathcal{N}^{r}(N,\mathcal{A}(\mathcal{W}(t)))}$ above $\mathcal{C}^r(t)$.
\item To each $K$-point $x$ of $\mathcal{C}^r(t) \times_Z Y_i$ above $\kappa(x) \in \mathcal{W}(t)$ corresponds a system of Hecke eigenvalues for $\mathbb{T}_{\kappa}^r(N,K)$ on $\mathcal{N}_{\kappa(x)}^{r,\dagger}(Np,K)$ such that the $U_p$-eigenvalue is a zero of $R^*_r(T)$ (in particular, it is not zero). \item For each $K$-point $x$ as above, the fiber $\widetilde{\mathcal{N}^{r}(N,\mathcal{A}(\mathcal{W}(t)))}_x$ is the generalized eigenspace in $\mathcal{N}_{\kappa(x)}^{r,\dagger}(Np,K)$ for the system of eigenvalues associated to $x$. \end{itemize} Taking the limit as $t$ goes to $0$, we obtain the eigencurve $\mathcal{C}^r \rightarrow \mathcal{W}$. When $r=0$ this is the Coleman-Mazur eigencurve, which we shall denote by $\mathcal{C}$.\\ For a Banach module $M$, a completely continuous operator $U$ and $\alpha \in \mathbb{Q}_{\geq 0}$, we define ${M}^{\leq \alpha}$ (resp. ${M}^{> \alpha}$) as the subspace which contains all the generalized eigenspaces for eigenvalues of $U$ of valuation less than or equal to $\alpha$ (resp. strictly bigger than $\alpha$). The above discussion then gives us the following proposition, which is essentially all we need in what follows. \begin{prop}\label{lessalpha} For all $\alpha \in \mathbb{Q}_{> 0}$ we have a direct sum decomposition \begin{align*} {\mathcal{N}^{r}(N,\mathcal{A}(\mathcal{U}))}={\mathcal{N}^{r}(N,\mathcal{A}(\mathcal{U}))}^{\leq \alpha} \bigoplus {\mathcal{N}^{r}(N,\mathcal{A}(\mathcal{U}))}^{> \alpha}, \end{align*} where ${\mathcal{N}^{r}(N,\mathcal{A}(\mathcal{U}))}^{\leq \alpha}$ is a finite-dimensional, free Banach module over $\mathcal{A}(\mathcal{U})$. Moreover, the projector to ${\mathcal{N}^{r}(N,\mathcal{A}(\mathcal{U}))}^{\leq \alpha}$ is given by a formal series in $U_p$ which we shall denote by $\mathrm{Pr}^{\leq \alpha}$.
\end{prop} \begin{rem}\label{remv} As ${\mathcal{N}^{r}(N,\mathcal{A}(\mathcal{U}))}^{\leq \alpha}$ is of finite rank and $\mathcal{A}(\mathcal{U})$ is noetherian, there exists $v$ such that ${\mathcal{N}^{r}(N,\mathcal{A}(\mathcal{U}))}^{\leq \alpha} ={H^{0}(X(v)\times \mathcal{U},\mathcal{H}_{\kappa}^{r}\hat{\otimes} \mathcal{O}_{\mathcal{U}})}^{\leq \alpha} $. \end{rem} If we want to consider forms with Nebentypus $\psi$ whose $p$-part is non-trivial, we need to apply the above construction to an affinoid $\mathcal{U}$ of $\mathcal{W}$ where $\psi$ is constant. This is because finite-order characters do not define Tate functions on $\mathcal{W}$.\\ It is well known that on a finite-dimensional vector space over a complete field all norms are equivalent. In particular, the overconvergent norm on ${\mathcal{N}^{r}(N,\mathcal{A}(\mathcal{U}))}^{\leq \alpha}$ is equivalent to the sup-norm on the coefficients of the $q$-expansion. We call the latter the $q$-expansion norm; the unit ball for this norm defines a natural integral structure on ${\mathcal{N}^{r}(N,\mathcal{A}(\mathcal{U}))}^{\leq \alpha}$, which coincides with the one defined in the previous subsection.\\ We now give a useful lemma. \begin{lemma}\label{qexpandproj} Let $f$ be a nearly overconvergent form in $\mathcal{N}_k^{r,\dagger}(N)$ and let $f^{\leq \alpha}$ be its projection to ${\mathcal{N}_k^{r,\dagger}(N)}^{\leq \alpha}$. If $f(q,X) \in p^n\mathbb{Z}_p[[q]][X]$, then $f^{\leq \alpha}(q,X) \in p^n\mathbb{Z}_p[[q]][X]$. \end{lemma} \begin{proof} Let $f$ be as in the statement of the lemma; then we have $U_p f(q,X)\in p^n\mathbb{Z}_p[[q]][X]$. As $f^{\leq \alpha} = \mathrm{Pr}^{\leq \alpha}f$, we conclude. \end{proof} Let $\mathcal{A}^{0}(\mathcal{U})$ be the unit ball in $\mathcal{A}(\mathcal{U})$. We have the following proposition which, roughly speaking, guarantees that a limit, for the $q$-expansion norm, of nearly overconvergent forms of bounded slope is nearly overconvergent.
\begin{prop}\label{naivefam} Let $F(\kappa) = \sum_{i=0}^r F_i(\kappa) X^i$, with $F_i(\kappa)$ in $\mathcal{A}^{0}(\mathcal{U})[[q]]$. Suppose that for a dense set $\left\{ \kappa_i \right\} $ of $\overline{\mathbb{Q}}_p$-points of $\mathcal{U}$ we have $F(\kappa_i) \in {\mathcal{N}_{\kappa_i}^{r}(N,\overline{\mathbb{Q}}_p)}^{\leq \alpha}$. Then \begin{align*} F(\kappa) \in {\mathcal{N}^{r}(N,\mathcal{A}(\mathcal{U}))}^{\leq \alpha}. \end{align*} \end{prop} \begin{proof} It is enough to show that for every $\kappa_0$ in $\mathcal{U}$, $F(\kappa_0)$ is nearly overconvergent (the radius of overconvergence can be chosen independently of $\kappa_0$ by Remark \ref{remv}).\\ We follow the proof of \cite[Corollary 4.8]{Til}. We have for all $\kappa$ in $\mathcal{U}$ the Eisenstein series $E(\kappa)$. It is known that $E(\kappa)$ has no zeros on $X(v)$ for $v > 0$ small enough. \\ We will write $f\equiv 0 \bmod p^n $ for $f(q,X) \in p^n\mathcal{A}^{0}(\mathcal{U})[[q]][X]$. \\ Let $\kappa_0$ be an $L$-point of $\mathcal{U}$, and fix a sequence of points $\kappa_i$, $i > 0$, of $\mathcal{U}$ such that $\kappa_i$ converges to $\kappa_0$. In particular, $F(\kappa_i)$ converges to $F(\kappa_0)$ for the $q$-expansion topology. Let us consider the nearly overconvergent modular forms $G(\kappa_i):=\frac{F(\kappa_i)E(\kappa_0)}{E(\kappa_i)}$ of weight $\kappa_0$; we want to show that ${G(\kappa_i)}^{\leq \alpha}$ converges to $F(\kappa_0)$ in the $q$-expansion topology. This will prove that $F(\kappa_0)$ is nearly overconvergent because, as already said, in the space of nearly overconvergent forms of slope bounded by $\alpha$ all the norms are equivalent.\\ If $\vert \kappa_i - \kappa_0 \vert < p^{-n}$, we have $E(\kappa_i)\equiv E(\kappa_0) \bmod p^n$, hence ${E(\kappa_i)}^{-1}\equiv {E(\kappa_0)}^{-1} \bmod p^n$; consequently, it is clear that $G(\kappa_i) \equiv F(\kappa_i) \bmod p^n$.
We apply Lemma \ref{qexpandproj} to the forms $G(\kappa_i)-F(\kappa_i)$ to see that ${G(\kappa_i)}^{\leq \alpha}$ is a sequence of nearly overconvergent forms of weight $\kappa_0$ and bounded slope which converges to $F(\kappa_0)$ for the $q$-expansion topology.\\ \end{proof} \begin{rem} In \cite[\S 1]{Pan} the author defines {\it rigid analytic nearly holomorphic modular forms} as elements of $\mathcal{A}(\mathcal{U})[[q]][X]$ which at classical points give classical nearly holomorphic forms. It would be interesting to compare his definition with the one given here, in particular to find necessary and sufficient conditions detecting when the specialization at a non-classical weight of a rigid analytic nearly holomorphic modular form is nearly overconvergent. \end{rem} One application of the above proposition is that it allows us to define a Maa\ss{}-Shimura operator of weight $\kappa$ as follows. Let us define \begin{align*} \log(\kappa) = \frac{\log_p(\kappa(u^r))}{\log_p(u^r)} \end{align*} for $u$ any topological generator of $1+p\mathbb{Z}_p$ and $r$ any integer big enough; note that $\log([k])=k$. \\ For any open affinoid $\mathcal{U}$ of $\mathcal{W}$ and $\kappa_0$ in $\mathcal{W}(\mathbb{C}_p)$, we define the $\kappa_0$-translate $\mathcal{U}\kappa_0$ of $\mathcal{U}$ as the composition of $\mathcal{U} \rightarrow \mathcal{W}$ with $\mathcal{W} \stackrel{ \times \kappa_0 }{\rightarrow} \mathcal{W}$.\\ \begin{prop}\label{MSfam} We have an $\mathcal{A}(\mathcal{U})$-linear operator $$ \begin{array}{ccccc} \delta_{\kappa} : & {\mathcal{N}^{r}(N,\mathcal{A}(\mathcal{U}))}^{\leq \alpha} & \rightarrow & {\mathcal{N}^{r+1}(N,\mathcal{A}(\mathcal{U}[2]))}^{\leq \alpha+1} \\ & \sum_{i=0}^r F_i(\kappa) X^i & \mapsto & \sum_{i=0}^r \left(\Theta F_i(\kappa) X^i + (\log(\kappa) -i) F_i(\kappa)X^{i+1}\right) \end{array} $$ \end{prop} Note that $\delta_{\kappa}$ is not $\mathcal{A}(X(v))$-linear.
\begin{proof} It is an application of Proposition \ref{naivefam} and of the fact that at classical $\overline{\mathbb{Q}}_p$-points of $\mathcal{U}$ above $[k]$ we have $[k+2](\delta_{\kappa})=\delta_{k}[k]$. \end{proof} We point out that there are other possible constructions of the Maa\ss{}-Shimura operator on nearly overconvergent forms which are defined on the whole space ${\mathcal{N}^{r}(N,\mathcal{A}(\mathcal{U}))}$ and not only on the finite slope part. In \cite{HX}, the authors construct an overconvergent Gau\ss{}-Manin connection \begin{align*} \mathcal{H}_{\kappa}^{r} \rightarrow \mathcal{H}_{\kappa[2]}^{r+1} \end{align*} using the existence of the canonical splitting of ${\mathcal{H}^1_{\mathrm{dR}}}_{/X(v)}$ given by $E_2$ (which exists because $X(v)$ is affinoid, \cite[Appendix 1]{Katz}).\\ Let $r\geq 0$ be an integer; we define \begin{align*} \log^{[r]}(\kappa)= \prod_{j=0}^{r-1}{\log(\kappa[-2r +j])}. \end{align*} Let us denote by $\mathcal{K}(\mathcal{U})$ the total fraction field of $\mathcal{A}(\mathcal{U})$; we define \begin{align*} {\mathcal{N}^{r}(N,\mathcal{K}(\mathcal{U}))}^{\leq \alpha} = & {\mathcal{N}^{r}(N,\mathcal{A}(\mathcal{U}))}^{\leq \alpha}\otimes_{\mathcal{A}(\mathcal{U})}\mathcal{K}(\mathcal{U}). \end{align*} \begin{prop}\label{kappaMassS} Let $F(\kappa) $ be in ${\mathcal{N}_{\kappa}^{r}(N,\mathcal{K}(\mathcal{U}))}^{\leq \alpha}$; then \begin{align*} F(\kappa)= \sum_{i=0}^r \frac{\delta_{\kappa[-2i]}^i G_i(\kappa)}{\log^{[i]}(\kappa)} \end{align*} for a unique sequence $(G_i(\kappa))$, $i=0,\ldots,r$, with $G_i(\kappa)$ in $\mathcal{M}(N,\mathcal{K}(\mathcal{U}[-2i]))$. \end{prop} \begin{proof} The proposition is clear if $r=0$. For $r \geq 1$, we proceed by induction; writing \begin{align*} F(\kappa)= & \sum_{i=0}^r F_i(\kappa) X^i, \end{align*} we have $\varepsilon^r F(\kappa) = r! F_r(\kappa)$, so $F_r(\kappa)$ is a family of overconvergent forms.
\\ We set $G_r(\kappa) := F_r(\kappa)$ and we see easily that \begin{align*} F(\kappa) - \frac{\delta_{\kappa[-2r]}^r G_r(\kappa)}{\log^{[r]}(\kappa)} \end{align*} has degree $r-1$; by induction there exist $G_i(\kappa)$ as in the statement.\\ For uniqueness, suppose \begin{align*} \sum_{i=0}^r \frac{\delta_{\kappa[-2i]}^i G_i(\kappa)}{\log^{[i]}(\kappa)} = 0; \end{align*} by applying $\varepsilon^r$ we obtain $G_r(\kappa)=0$, and uniqueness follows by induction. \end{proof} We have the following corollaries. \begin{coro} We have an isomorphism of Hecke modules \begin{align*} \bigoplus_{i=0}^r \delta_{\kappa[-2i]}^i {\mathcal{M}(N,\mathcal{K}(\mathcal{U}[-2i]))}^{\leq \alpha -i} \cong {\mathcal{N}^r(N,\mathcal{K}(\mathcal{U}))}^{\leq \alpha}, \end{align*} and consequently the characteristic series of $U_p$ is given by $Q_r(\kappa,T)= \prod_{i=0}^r Q_0(\kappa[-2i],p^iT)$. \end{coro} \begin{coro}\label{holoproj} We define a projector \begin{align*} H : {\mathcal{N}^r\left(N,\mathcal{A}(\mathcal{U})\right)}^{\leq \alpha} \rightarrow {\mathcal{M}\left(N,\mathcal{A}(\mathcal{U})\left[\frac{1}{\prod_{j=0}^{2r}{\log(\kappa[-j])}}\right]\right)}^{\leq \alpha} \end{align*} by sending $F(\kappa)$ to $G_0(\kappa)$. It is called the {\it overconvergent projection}. \end{coro} \begin{proof} We use the same notation as in the proof of Proposition \ref{kappaMassS}. If $F(\kappa)(X)$ has $q$-expansion in $\mathcal{A}(\mathcal{U})[[q]][X]$, we see by induction on the degree that the only possible poles of $G_0(\kappa)$ are the zeros of $\prod_{j=0}^{2r}\log(\kappa[-j])$. \end{proof} It is clear that this projector is a $p$-adic version of the classical holomorphic projector.\\ We remark that it is not possible to improve Proposition \ref{kappaMassS} to allow holomorphic coefficients, as shown by the following example. Let us write $E_2^{\mathrm{cr}}(z)$ for the critical $p$-stabilization $E_2(z) - E_2(pz) $.
Its polynomial $q$-expansion is \begin{align*} E_2^{\mathrm{cr}}(q,X)=\frac{p-1}{2p}X +\sum_{m \geq 0} \sum_{\substack{n \geq 1 \\ (n,p)=1}} p^{m}\sigma_1(n) q^{n p^m}. \end{align*} Recall the Eisenstein family $\tilde{E}(\kappa)$ defined in (\ref{GEis}). We have \begin{align*} E_2^{\mathrm{cr}}(q,X) = & \delta_{\kappa}\tilde{E}(\kappa)|_{\kappa = \mathbf{1} }, \end{align*} as the residue at $\kappa = \mathbf{1}$ of $\zeta^*(\kappa)$ is $\frac{p-1}{p}$. The fact that the overconvergent projector has denominators in the weight variable was already known to Hida \cite[Lemma 5.1]{H1}.\\ We now give the following proposition. \begin{prop}\label{MSfamilies} Let $F(\kappa)$ be an element of $\mathcal{N}^{r}(N,\mathcal{A}(\mathcal{U}))$ and suppose that $F(\kappa)$ is an eigenform for the whole Hecke algebra and of finite slope for $U_p$. Then $F(\kappa)=\delta_{\kappa}^r G(\kappa)$, for $G(\kappa) \in \mathcal{M}(N,(\mathcal{K}(\mathcal{U}[-2r])))$ a family of overconvergent eigenforms. \end{prop} \begin{proof} Let $\lambda_F(n)$ be the Hecke eigenvalue of $T_n$; we have from Proposition \ref{kappaMassS} that \begin{align*} F(\kappa)(X) = \sum_{i=0}^{r}\delta_{\kappa[-2i]}^i G_i(\kappa) \end{align*} with $G_i(\kappa)$ overconvergent. Moreover, we know from Proposition \ref{deltaT_l} that $G_i(\kappa)=a_0(G_i) \sum_{n=1}^{\infty} n^{-i} \lambda_F(n) q^n $. We have then \begin{align*} G_i(\kappa)=\frac{a_0(G_i)}{a_0(G_r)}\Theta^{r-i} G_r(\kappa). \end{align*} By restricting to the ordinary locus and projecting to $\omega^{\kappa}$ by $X \mapsto 0$ we find \begin{align*} F(\kappa)(0)= & \left(\sum_{i=0}^r \frac{a_0(G_i)}{a_0(G_r)}\right) \Theta^r G_r(\kappa). \end{align*} This is the same $q$-expansion as that of ${\left(\sum_{i=0}^r \frac{a_0(G_i)}{a_0(G_r)}\right) }\delta_{\kappa[-2r]}^r G_r(\kappa)$; hence we can conclude by Proposition \ref{ordproj}.
\end{proof} For any $\alpha < \infty$ and for $i=0,\ldots,r$ we define a map $s_i:\mathcal{C}^{\leq \alpha} \rightarrow {\mathcal{C}^i}^{\leq \alpha + i}$ induced by \begin{align*} \delta_{\kappa}^i : {\mathcal{M}(N,\mathcal{A}(\mathcal{U}))}^{\leq \alpha} \rightarrow {\mathcal{N}^i(N,\mathcal{A}(\mathcal{U}[2i]))}^{\leq \alpha +i}. \end{align*} The interest of the above proposition lies in the fact that it tells us that $\mathcal{C}^r $ minus a finite set of points (such as $E_2^{\mathrm{cr}}$) can be covered by the images of the $s_i$. The images of these maps are not disjoint; it may indeed happen that two families of different degrees meet.\\ \begin{exam} Let $k\geq 2$ be an integer; we have that $\delta_{[1-k]}^{k}=\Theta^k$ (see Formula (\ref{deltatheta})). It is well known that $\Theta^k$ preserves overconvergence \cite[Proposition 4.3]{ColCO}. Let $F(\kappa)$ be a family of overconvergent forms of finite slope; then the specialization at $\kappa=[1-k]$ of the nearly overconvergent family $\delta_{\kappa}^{k}F(\kappa)$ is overconvergent and consequently belongs to an overconvergent family. \end{exam} From the polynomial $q$-expansion principle for the degree of near holomorphicity \cite[Corollary 3.2.5]{UrbNholo}, we see that intersections between families of different degrees may happen only when the coefficients of the higher terms in $X$ of $\delta_{\kappa}^i$ vanish. It is clear from Formula (\ref{deltatheta}) that this can happen only at the points $[1-k]$, for $i \geq k\geq 2$. Note that these points lie above the poles of the overconvergent projection $H$. This is not a coincidence; in fact $H(\delta_{\kappa}^{k}F(\kappa))=0$ for all $ \kappa \in \mathcal{W}$ for which $H$ is defined. If we could extend $H$ over the whole of $\mathcal{W}$, we would then have $H\left(\delta_{[1-k]}^{k}F([1-k])\right)=0$, but we have just seen that $\delta_{[1-k]}^{k}F([1-k])$ is already overconvergent.
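One can check the identity $\delta_{[1-k]}^{k}=\Theta^{k}$ directly in the simplest case $k=1$ from the explicit formula of Proposition \ref{MSfam}: for a degree-zero family $F(\kappa)$ of finite slope we have \begin{align*} \delta_{\kappa} F(\kappa) = \Theta F(\kappa) + \log(\kappa)\, F(\kappa)\, X, \end{align*} and $\log([0])=0$, so that the specialization at $\kappa = [0]$ is \begin{align*} \delta_{[0]} F([0]) = \Theta F([0]), \end{align*} whose coefficient of $X$ vanishes: the specialization is of degree zero, hence overconvergent.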
\section{Half-integral weight modular forms and symmetric square $L$-function} In this section we first recall the definition and some examples of half-integral weight modular forms. Then we use them to give an integral expression of $\mathcal{L}(s,\mathrm{Sym}^2(f),\xi)$. We conclude the section by studying the Euler factor by which $\mathcal{L}(s,\mathrm{Sym}^2(f),\xi)$ and $L(s,\mathrm{Sym}^2(f),\xi)$ differ. \subsection{Half-integral weight modular forms}\label{Halfint} We recall the definition of half-integral weight modular forms. We define a holomorphic function on $\mathcal{H}$ \begin{align*} \theta(z)=\sum_{n \in \mathbb{Z}} q^{n^2}, \;\;\; q=e^{2 \pi i z}. \end{align*} Note that this theta series has no relation to the operator $\Theta$ of the previous section. We hope that this will cause no confusion.\\ We define a factor of automorphy \begin{align*} h(\gamma,z)= \frac{\theta(\gamma(z))}{\theta(z)},\;\; \gamma \in \Gamma_0(4), z \in \mathcal{H}. \end{align*} It satisfies \begin{align*} {h(\gamma,z)}^2 = \sigma_{-1}(d)(c+zd). \end{align*} Let $k\geq 0$ be an integer and $\Gamma$ a congruence subgroup of $\mbox{SL}_2(\mathbb{Z})$. We define the space of half-integral weight nearly holomorphic modular forms $\mathcal{N}^r_{k+\frac{1}{2}}(\Gamma,\mathbb{C})$ as the set of $\mathcal{C}^{\infty}$-functions \begin{align*} f : \mathcal{H} \rightarrow \mathbb{C} \end{align*} such that \begin{itemize} \item $f|_{k+\frac{1}{2}}\gamma(z):= f(\gamma(z)){h(\gamma,z)}^{-1} {(c+zd)}^{-k}= f(z)$ for all $\gamma$ in $\Gamma$, \item $f$ has a finite limit at all cusps of $\Gamma$, \item there exist holomorphic functions $f_i(z)$ such that \begin{align*} f(z) = \sum_{i=0}^r f_i(z)\frac{1}{{(4 \pi y)}^i}, \; \; \; y = \mathrm{Im}(z).
\end{align*} \end{itemize} When $r=0$, one simply writes $\mathcal{M}_{k+\frac{1}{2}}(\Gamma,\mathbb{C})$ for the space of holomorphic forms of weight $k + \frac{1}{2}$.\\ As $\Gamma$ is a congruence subgroup, there exists $N$ such that each $f_i(z)$ as above admits a Fourier expansion of the form \begin{align*} f_i(z) = \sum_{n=0}^{\infty} a_n(f_i)q^{\frac{n}{N}}. \end{align*} This allows us to embed $\mathcal{N}_{k+\frac{1}{2}}^r(\Gamma,\mathbb{C})$ into $\mathbb{C}[[q]][q^{\frac{1}{N}},X]$. For any $\mathbb{C}$-algebra $A$ containing the $N$-th roots of unity, we define \begin{align*} \mathcal{N}_{k+\frac{1}{2}}^r(\Gamma,A) = & \mathcal{N}^r_{k+\frac{1}{2}}(\Gamma,\mathbb{C}) \cap A[[q]][q^{\frac{1}{N}},X]. \end{align*} For a geometric definition, see \cite[Proposition 8.7]{DT}. In the following, we will drop the variable $z$ from $f$.\\ Let us consider a non-trivial Dirichlet character $\xi$ of level $N$ and let $\beta$ be $0$, resp. $1$, if $\xi$ is even, resp. odd. We define \begin{align*} \theta(\xi)=\sum_{n=1}^{\infty} n^{\beta}\xi(n)q^{n^2} \in \mathcal{M}_{\beta+\frac{1}{2}}(\Gamma_1(4N^2),\xi,\mathbb{Z}[\zeta_N]). \end{align*} Another example of half-integral weight forms is given by Eisenstein series; we recall their definition.
Let $k>0$ be an integer and $\chi$ be a Dirichlet character modulo $Dp^r$ ($D$ a positive integer prime to $p$) such that $\chi(-1)={(-1)}^{k-1}$. We set \begin{align*} \frac{E^*_{k-1/2}(z,s;\chi)}{L(2s+2k-2,\chi^2)} & = \sum_{\gamma \in \Gamma_{\infty}\setminus \Gamma_0(Lp^r)} \chi \sigma_{Dp^r}{\sigma_{-1}}^{k-1}(\gamma) {h(\gamma,z)}^{-2k+1}{|h(\gamma,z)|}^{-2s},\\ E_{k-1/2}(z,m;\chi) & = C_{m,k} \left \{{(2 y)}^{\frac{-m}{2}} E^*_{k-1/2}(z,-m;\chi) \right \}|_{k-1/2} \tau_{Dp^r},\\ C_{m,k} & = {(2\pi)}^{\frac{m-2k+1}{2}}{(Dp^r)}^{\frac{2k-1-2m}{4}}\Gamma\left(\frac{2k-1-m}{2}\right), \end{align*} where $\tau_{Dp^{r}}$ is the Atkin-Lehner involution for half-integral weight modular forms normalized as in \cite[\S2 h4]{H6} and $\sigma_n$ is the quadratic character corresponding via class field theory to the quadratic extension $\mathbb{Q}(\sqrt{n})/\mathbb{Q}$. \\ If we set $E_{k-1/2}(\chi)=E_{k-1/2}(z,3-2k;\chi)$, then $E_{k-1/2}(\chi)$ is a holomorphic modular form of half-integral weight $k-1/2$, level $Dp^r$ and nebentypus $\chi$. Let us denote by $\mu$ the M\"{o}bius function. The Fourier expansion of $E_{k-1/2}(\chi)$ is given by \begin{align*} L_{Dp}(3-2k,\chi^2) + \sum_{n=1}^{\infty} q^n L_{Dp}\left(2-k,\chi\chi_n\right) \sum_{\tiny{\begin{array}{c} t_1^2t_2^2 |n, \\ (t_1t_2,Dp)=1, \\ t_1>0, t_2>0 \end{array}}} \mu(t_1)\chi(t_1t_2^2)\chi_n(t_1)t_2{(t_1t_2^2)}^{k-2}, \end{align*} where $L_{Dp}(s,\chi) = \prod_{q\mid Dp}(1-\chi_0(q)q^{-s})L(s,\chi_0) $, for $\chi_0$ the primitive character associated to $\chi$.\\ Let $s$ be an odd integer, $1 \leq s \leq k-1$. We have the following key formula for the compatibility with the Maa\ss{}-Shimura operators defined in Section \ref{analytic}: \begin{align*} \delta_{k-s+\frac{1}{2}}^{\frac{s+1}{2}-1} E_{k-s+\frac{1}{2}}(\chi) = E_{k -\frac{1}{2}}(z,2k-s-2;\chi).
\end{align*} In particular $E_{k -\frac{1}{2}}(z,2k-s-2; \chi) \in \mathcal{N}^{\frac{s+1}{2}-1}_{k-\frac{1}{2}}(\Gamma_1(Dp^r),\chi,\overline{\mathbb{Q}})$.\\ If $g_1$, resp. $g_2$, denotes a form in $\mathcal{N}^{r_1}_{k_1+\frac{1}{2}}(\Gamma_1(N),\psi_1,A)$, resp. $\mathcal{N}^{r_2}_{k_2-\frac{1}{2}}(\Gamma_1(N),\psi_2,A)$, then $g_1g_2$ belongs to $\mathcal{N}^{r_1+r_2}_{k_1+k_2}(\Gamma_1(N),\psi_1\psi_2\sigma_{-1},A)$. \subsection{An integral formula} In this subsection we use the half-integral weight forms defined above to express $\mathcal{L}(s,\mathrm{Sym}^2(f),\xi)$ as the Petersson product of $f$ with the product of two half-integral weight forms. Let $f$ be a cusp form of integral weight $k$ and Nebentypus $\psi_1$, and $g$ a modular form of half-integral weight $l/2$ and Nebentypus $\psi_2$. Let $N$ be the least common multiple of the levels of $f$ and $g$ and suppose $k > l/2$. We define the Rankin product of $f$ and $g$ $$D(s,f,g)=L_N(2s-2k-l+3,{(\psi\xi)}^2)\sum_n\frac{a(n,f)a(n,g)}{n^{s/2}}.$$ The Eisenstein series introduced above allow us to give an integral formulation of this Rankin product \cite[Lemma 4.5]{H6}. \begin{lemma}\label{RankPet} Let $f$, $g$ and $D(s,f,g)$ be as above. Let $f^c=\overline{f(-\overline{z})}$. We have the equality \begin{align*} &{(4\pi)}^{-s/2}\Gamma(s/2)D(s,f,g) = \left\langle f^c,gE^*_{k-l/2}(z,s+2-2k;\psi_1\psi_2\sigma_{-N})y^{(s/2)+1-k} \right\rangle_N, \\ &= {(-i)}^k \left\langle f^c|_k\tau_{N}, g|_{l/2}\tau_{N}\left(E^*_{k-l/2}(z,s+2-2k;\psi_1\psi_2\sigma_{-N})y^{(s/2)+1-k}\right)|_{k-l/2}\tau_N \right\rangle_N.
\end{align*} \end{lemma} Here $\left\langle f, g \right\rangle$ denotes the complex Petersson product \begin{align*} \left\langle f, g \right\rangle = \int_{X(\Gamma)} \overline{f(z)} g(z) y^{k-2} \textup{d}x\textup{d}y, \end{align*} which is defined for any pair $(f,g)$ in ${\mathcal{N}_k^r(N,\mathbb{C})}^2$ such that at least one of $f$ and $g$ is cuspidal.\\ If we take for $g$ a theta series $\theta(\xi)$ as defined above, we then have \begin{align*} D(\beta+s,f,\theta(\xi))= \mathcal{L}(s,\mathrm{Sym}^2(f),\xi), \end{align*} for $\mathcal{L}(s,\mathrm{Sym}^2(f),\xi)$ the imprimitive $L$-function defined in the introduction. The interest of writing $\mathcal{L}(s,\mathrm{Sym}^2(f),\xi)$ as a Petersson product lies in the fact that such a product, properly normalized, is algebraic. This allowed Sturm to prove Deligne's conjecture for the symmetric square \cite{Stu} and it is at the basis of our construction of $p$-adic $L$-functions for the symmetric square. We conclude with the following relation, which can easily be deduced from \cite[(5.1)]{H6} and which is fundamental for the proof of Theorem \ref{MainThOC}. \begin{lemma}\label{thetanonprim} Let $f$ be a Hecke eigenform of level divisible by $p$ and let $\xi$ be a character defined modulo $Cp$ of conductor $C$. Let us denote by $\xi'$ the primitive character associated to $\xi$; then \begin{align*} D(s,f,\theta(\xi))= (1 - \lambda_p^2 p^{1-s}) D(s,f,\theta(\xi')). \end{align*} \end{lemma} \subsection{The $L$-function for the symmetric square}\label{primLfun} Let $f$ be a modular form of weight $k$ and Nebentypus $\psi$, and let $\pi(f)$ be the automorphic representation of $\mbox{GL}_2(\mathbb{A})$ generated by $f$. Let us denote by $\lambda_q$ the associated set of Hecke eigenvalues.
In \cite{GJ}, the authors construct an automorphic representation of ${\mbox{GL}_3}(\mathbb{A})$ denoted $\hat{\pi}(f)$ and usually called the base change to ${\mbox{GL}_3}$ of $\pi({f})$. It is standard to associate to $\hat{\pi}({f})$ a complex $L$-function $\Lambda(s,\hat{\pi}(f))$ which satisfies a nice functional equation and coincides with $\mathcal{L}(s,\mathrm{Sym}^2(f),\psi^{-1})$ up to some Euler factors. The problem is that some of these Euler factors could vanish at critical integers. We briefly recall the $L$-factors at primes of bad reduction of $\Lambda(s,\hat{\pi}(f))$ in order to determine in Section \ref{Benconj} whether the $L$-value we interpolate vanishes or not. We shall also use them in Appendix \ref{FE} to generalize the results of \cite{DD,H6}. For a more detailed exposition, we refer to \cite[\S 4.2]{RosH}. Fix an adelic Hecke character $\tilde{\xi}$ of $\mathbb{A}_{\mathbb{Q}}$ and denote by $\xi$ the corresponding (primitive) Dirichlet character. For any place $v$ of $\mathbb{Q}$, we set \begin{align*} L_v(s,\hat{\pi}(f),\xi) = & \frac{L_v(s,{\pi}{({f})}_v \otimes \tilde{\xi}_v \times \check{\pi}{({f})}_v) }{L_v(s, \tilde{\xi}_v) }, \end{align*} where $\check{\phantom{ }}$ denotes the contragredient and ${\pi}{(f)}_v \times \check{\pi}{(f)}_v$ is a representation of $\mbox{GL}_2(\mathbb{Q}_v)\times\mbox{GL}_2(\mathbb{Q}_v)$. \\ The completed $L$-function \begin{align*} \Lambda(s,\hat{\pi}({f}),\xi) = & \prod_v L_v(s,\hat{\pi}(f),\xi) \end{align*} is holomorphic over $\mathbb{C}$ except in a few cases, corresponding to forms with complex multiplication by $\xi$ \cite[Theorem 9.3]{GJ}.\\ Let $\pi = \pi (f)$, let $q$ be a place where $\pi$ ramifies, and let $\pi_{q}$ be its local component at $q$.
By twisting by a character of $\mathbb{Q}_{q}^{\times}$, we may assume that $\pi_{q}$ has minimal conductor among its twists; this does not change the $L$-factor of $\hat{\pi}_q$. Let $\psi'$ be the Nebentypus of the minimal form associated with $f$.\\ We distinguish the following four cases: \begin{itemize} \item[(i)] $\pi_{q}$ is a principal series $\pi(\eta,\nu)$, with both $\eta$ and $\nu$ unramified, \item[(ii)] $\pi_{q}$ is a principal series $\pi(\eta,\nu)$ with $\eta$ unramified and $\nu$ ramified, \item[(iii)] $\pi_{q}$ is a special representation $\sigma(\eta,\nu)$ with $\eta$, $\nu$ unramified and $\eta\nu^{-1} = |\phantom{e}|_{q}$, \item[(iv)] $\pi_{q}$ is supercuspidal. \end{itemize} We partition the set of primes dividing the conductor of $f$ as $\Sigma_1,\ldots,\Sigma_4$ according to these cases. When $\pi_{q}$ is a ramified principal series we have $\eta(q)=\lambda_q q^{\frac{1-k}{2}}$ and $\nu=\eta^{-1}\tilde{\psi'}_q$, where $\tilde{\psi'}$ is the adelic character corresponding to $\psi'$. In case (i), if $ \tilde{\xi}_q $ is unramified, the Euler factor ${L_q(s,\hat{\pi}(f),\xi)}^{-1}$ is \begin{align*} (1-\tilde{\xi}_q\nu^{-1}\eta(q){q}^{-s})(1-\tilde{\xi}_q(q){q}^{-s})(1-\tilde{\xi}_q\nu\eta^{-1}(q){q}^{-s}), \end{align*} and $1$ otherwise. In case (ii), ${L_q(s,\hat{\pi}(f),\xi)}^{-1}$ equals \begin{align*} (1-\tilde{\xi}_q {\tilde{\psi'}_q}^{-1}(q)\lambda^2_q q^{1 -k-s})(1-\tilde{\xi}_q(q){q}^{-s})(1-\tilde{\xi}_q\tilde{\psi'}_q(q)\lambda^{-2}_q q^{{k-1}-s}). \end{align*} In case (iii), if $\tilde{\xi}_{q}$ is unramified we have $(1-\tilde{\xi}_q(q){q}^{-s-1})$, and $1$ otherwise. The supercuspidal factors are slightly more complicated and depend on the ramification of $\tilde{\xi}_{q}$.
They are classified by \cite[Lemma 1.6]{Sc}; we recall them briefly. Let $q$ be a prime such that $\pi_{q}$ is supercuspidal. If $\tilde{\xi}_{q}^2$ is unramified, let $\lambda_1$ and $\lambda_2$ be the two ramified characters such that $\tilde{\xi}_{q}\lambda_i$ is unramified. We consider the following disjoint subsets of $\Sigma_4$: \begin{align*} \Sigma_4^0 &=\left\{q \in \Sigma_4 : \tilde{\xi}_{q} \mbox{ is unramified and } \pi_{q}\cong\pi_{q}\otimes \tilde{\xi}_q \right \},\\ \Sigma_4^1 &=\left\{q \in \Sigma_4 : \tilde{\xi}_{q}^2 \mbox{ is unramified and } \pi_{q}\cong\pi_{q}\otimes\lambda_i \mbox{ for } i=1,2 \right \},\\ \Sigma_4^2 &=\left\{q \in \Sigma_4 : \tilde{\xi}_{q}^2 \mbox{ is unramified and } \pi_{q}\not\cong\pi_{q}\otimes\lambda_1 \mbox{ and }\pi_{q}\cong\pi_{q}\otimes\lambda_2 \right \}, \\ \Sigma_4^3 &=\left\{q \in \Sigma_4 : \tilde{\xi}_{q}^2 \mbox{ is unramified and } \pi_{q}\not\cong\pi_{q}\otimes\lambda_2 \mbox{ and }\pi_{q}\cong\pi_{q}\otimes\lambda_1 \right \}. \end{align*} If $q$ is in $\Sigma_4$ but not in $\Sigma_4^i$, for $i=0,\ldots, 3$, then $L_q(s,\hat{\pi}(f),\xi)=1$. If $q$ is in $\Sigma_4^0$, then $$ {L_q(s,\hat{\pi}(f),\xi)}^{-1}=1+\tilde{\xi}_q(q){q}^{-s}$$ and if $q$ is in $\Sigma_4^i$, for $i=1,2,3$, then $${L_q(s,\hat{\pi}(f),\xi)}^{-1}=\prod_{j \,\mbox{ s.t. }\, \pi_{q}\cong\pi_{q}\otimes\lambda_j} (1-\tilde{\xi}_{q}\lambda_j(q){q}^{-s}).$$ We now briefly explain these four cases. Suppose $q \neq 2$: then there exists a quadratic extension $F/\mathbb{Q}_q$ and a Galois character $\mu$ such that the local Galois representation $r_q(\pi_q)$ associated with $\pi_q$ is the induction from $F$ to $\mathbb{Q}_q$ of $\mu$.
The explicit matrix for $\mathrm{Sym}^2(r_q(\pi_q))$ can be found in \cite[(4)]{HarCM} and involves $\mu^2$; these local $L$-factors are then compatible with the ones predicted by Deligne \cite[\S 1.1]{Del}.\\ If $v=\infty$, the $L$-factor depends only on the parity of the character by which we twist. Let $\kappa=0,1$ according to the parity of $\xi_{\infty}\psi_{\infty}$; from \cite[Lemma 1.1]{Sc} we have $L_{\infty}(s-k+1,\hat{\pi}(f),\xi\psi)=\Gamma_{\mathbb{R}}(s-k+2 -\kappa)\Gamma_{\mathbb{C}}(s)$, where the real and complex $\Gamma$-functions are \begin{align*} \Gamma_{\mathbb{R}}(s) = & \pi^{-s/2}\Gamma(s/2), \\ \Gamma_{\mathbb{C}}(s) = & 2{(2\pi)}^{-s}\Gamma(s). \end{align*} We define \begin{align*}\mathcal{E}_{N}(s,f,\xi)=& \frac{ \prod_{q |N } (1-\xi(q)\lambda_q^2{q}^{-s}){L_{q}(s-k+1,\hat{\pi}(f),\xi\psi)}} {(1-\psi^2\xi^2(2)2^{2k-2-2s})}. \end{align*} Note that $\lambda_q=0$ if $\pi$ is not minimal at $q$ or if $\pi_{q}$ is a supercuspidal representation. We then multiply $\mathcal{L} (s,f,\xi)$, the imprimitive $L$-function, by $\mathcal{E}_{N}(s,f,\xi)$ to get \begin{align*} L(s,\mathrm{Sym}^2(f),\xi):= & L(s-k+1,\hat{\pi}(f)\otimes \xi\psi) \\ = & \mathcal{L} (s,f,\xi)\mathcal{E}_{N}(s,f,\xi). \end{align*} We can now state the functional equation \begin{align*} \Lambda(s,\hat{\pi}(f),\xi) & = \varepsilon(s,\hat{\pi}(f),\xi)\Lambda(1-s,\hat{\pi}(f),\xi^{-1}),\\ \Lambda(s,\mathrm{Sym}^2(f),\xi) & =\varepsilon(s-k+1,\hat{\pi}(f),\xi\psi)\Lambda(2k+1-s,\mathrm{Sym}^2(f^c),\xi^{-1}), \end{align*} where $\varepsilon(s,\hat{\pi}(f),\xi)$ is as in \cite[Theorem 1.3.2]{DD}.
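It is convenient to record the classical relation between the two archimedean $\Gamma$-factors introduced above (a standard identity, stated here only for convenience): by Legendre's duplication formula $\Gamma\left(\frac{s}{2}\right)\Gamma\left(\frac{s+1}{2}\right)=2^{1-s}\sqrt{\pi}\,\Gamma(s)$, we have
\begin{align*}
\Gamma_{\mathbb{R}}(s)\Gamma_{\mathbb{R}}(s+1) = \pi^{-s-\frac{1}{2}}\,\Gamma\left(\frac{s}{2}\right)\Gamma\left(\frac{s+1}{2}\right) = 2^{1-s}\pi^{-s}\Gamma(s) = \Gamma_{\mathbb{C}}(s).
\end{align*}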
\section{$p$-adic measures and $p$-adic $L$-functions}\label{padicLfunc} The aim of this section is to construct the $p$-adic $L$-functions described in the introduction. We first review the notion of $h$-admissible distribution and generalize this notion to distributions with values in nearly overconvergent forms; we then produce two such distributions. We shall use these distributions to construct the $p$-adic $L$-functions for the symmetric square. \subsection{Admissibility condition} We now give the definition of the admissibility condition for measures with values in the space of nearly overconvergent modular forms. We will follow the approach of \cite[\S 3]{Pan}. Let us denote by $A$ a $\mathbb{Q}_p$-Banach algebra, by $M$ a Banach module over $A$ and by $Z_D$ the $p$-adic space ${(\mathbb{Z}/ Dp\mathbb{Z})}^{\times} \times (1 +p\mathbb{Z}_p)$. Let $h$ be an integer; we define $\mathcal{C}^h(Z_D,A)$ as the space of locally polynomial functions on $Z_D$ of degree strictly less than $h$ in the variable $z_p \in 1+p\mathbb{Z}_p$. Let us define $\mathcal{C}^h_n(Z_D,A)$ as the space of functions from $Z_D$ to $A$ which are polynomial of degree strictly less than $h$ when restricted to balls of radius $p^{-n}$. It is a compact Banach space and we have \begin{align*} \mathcal{C}^h(Z_D,A) = \varinjlim_n \mathcal{C}^h_n(Z_D,A). \end{align*} If $h \leq h'$, we have an isometric immersion of $\mathcal{C}^h_n(Z_D,A)$ into $\mathcal{C}^{h'}_n(Z_D,A)$. \\ \begin{defin} Let $\mu$ be an $M$-valued distribution on $Z_D$, i.e. an $A$-linear continuous map \begin{align*} \mu : \mathcal{C}^1(Z_D,A) \rightarrow M.
\end{align*} We say that $\mu$ is an $h$-admissible measure if $\mu$ can be extended to a continuous morphism (which we shall denote by the same letter) $\mu : \mathbb{C}c^h(Z_D,A) \rightarrow M$ such that for all $n$ positive integer, any $a \in {(\mathbb{Z}/ Dp^n\mathbb{Z})}^{\times}$ and $h'=0,\ldots, h-1 $ we have \begin{align*} \left|\int_{a+(Dp^n)}{(z_p-a)}^{h'} \textup{d}\mu \right| = o(p^{-n(h'-h)}). \end{align*} \end{defin} Let us denote by $\mathbf{1}_U$ the characteristic function of a open set $U$ of $Z_D$, we shall sometimes write $\int_{U} \textup{d}\mu$ for $\mu(\mathbf{1}_U)$.\\ The definition of $h$-admissible measure for $A=\mathcal{O}_{\mathbb{C}_p}$ is due to Amice and V\'elu.\\ There are many different (equivalent) definitions of a $h$-admissible measure; we refer to \cite[\S II ]{ColAnp} for a detailed exposition of them.\\ The following proposition will be very usefull in the following (\cite[Proposition II.3.3]{ColAnp}); \begin{prop}\label{hadm'} Let $\mu$ be a $h$-admissible measure, let $\tilde{h} \geq h$ be a positive integer, then $\mu$ satisfies \begin{align*} \left\vert \int_{a + Dp^n} {(z_p -a_p)}^{h'} \textup{d}\mu \right\vert =o(p^{-n(h'-\tilde{h})}) \end{align*} for any $n \in \mathbb{N}$, $h' \in \mathbb{N}$ and $a \in {(\mathbb{Z}/ Dp^n\mathbb{Z})}^{\times}$. \end{prop} It is known that any $h$-admissible measure is uniquely determined by the values $\int_{Z_D}\chi(z)\varepsilon(z_p){z_p}^{h'} \textup{d}\mu$, for all integers $h'$ in $[0,\ldots,h-1]$, all $\chi$ in $\widehat{{(\mathbb{Z}/ Dp\mathbb{Z})}^{\times}} $ and all finite-order characters $\varepsilon$ of $1+p\mathbb{Z}_p$.\\ Let us fix now $\mathcal{U}$, an affinoid subset of $\mathcal{W}$. 
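To get a feeling for the growth condition, note the following elementary observation (ours, and not needed in the sequel): if $\mu$ is a bounded measure, say $\left|\int f \textup{d}\mu\right| \leq M \left|f\right|$ for all $f$, then, since $|z_p - a| \leq p^{-n}$ on $a+(Dp^n)$,
\begin{align*}
\left|\int_{a+(Dp^n)}{(z_p-a)}^{h'} \textup{d}\mu \right| \leq M p^{-nh'} = M p^{-nh}\, p^{-n(h'-h)} = o(p^{-n(h'-h)})
\end{align*}
for every $h \geq 1$, so bounded measures are $h$-admissible for all $h \geq 1$.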
The following proposition about the behavior of $U_p$ on $\mathcal{N}^r(Np^n,\mathcal{A}(\mathcal{U}))$ can be proven exactly as \cite[Proposition 1.6]{Pan}. \begin{prop} Let $n\geq 1$ be an integer; then $U_p^n$ sends $\mathcal{N}^r(Np^{n+1},\mathcal{A}(\mathcal{U}))$ into $\mathcal{N}^r(Np,\mathcal{A}(\mathcal{U}))$. In particular, the map $$ \begin{array}{cccc} \mathrm{Pr}^{\leq \alpha, p^{\infty}} : & \bigcup_{n=0}^{\infty} \mathcal{N}^r(Np^{n+1},\mathcal{A}(\mathcal{U})) & \rightarrow & \bigcup_{n=0}^{\infty} \mathcal{N}^r(Np^{n+1},\mathcal{A}(\mathcal{U})) \\ & G(\kappa) & \mapsto & U_p^{-n} \mathrm{Pr}^{\leq \alpha} U_p^n G(\kappa) \end{array} $$ is well-defined and induces an equality ${\mathcal{N}^r(Np^{n+1},\mathcal{A}(\mathcal{U}))}^{\leq \alpha} ={\mathcal{N}^r(Np,\mathcal{A}(\mathcal{U}))}^{\leq \alpha}$. \end{prop} We remark that the trick of using $U_p$ to lower the level from $\Gamma_0(p^n)$ to $\Gamma_0(p^{n-1})$ was already known to Shimura and is a fundamental tool in the study of families of $p$-adic modular forms. We conclude the section with the following theorem, which is exactly \cite[Theorem 3.4]{Pan} in the nearly overconvergent context. Let $\mathcal{U}$ be an open affinoid of $\mathcal{W}$; we let $A=\mathcal{A}(\mathcal{U})$ and $M=\mathcal{N}^r(\Gamma,\mathcal{A}(\mathcal{U}))$. \begin{theo}\label{Theoadm} Let $\alpha$ be a positive rational number and let $\mu_{s}$, $s=0,1,\ldots$, be a set of distributions on $\mathcal{C}^1(Z_D,M)$. Suppose there exists a positive integer $h_1$ such that the following two conditions are satisfied: \begin{align*} \mu_{s}(a+(Dp^n)) \in {\mathcal{N}^{r}\left(Dp^{h_1n},\mathcal{A}(\mathcal{U})\right)}, \\ \left\vert U_p^{h_1 n}\sum_{i=0}^s{ s \choose i } (-a_p)^{s-i}\mu_i(a+(Dp^n))\right\vert_p < C p^{-ns}.
\end{align*} Let $h$ be such that $h > h_1 \alpha +1$; then there exists an $h$-admissible measure $\mu$ such that \begin{align*} \int_{a+(Dp^n)}(z_p-a_p)^{s}\textup{d}\mu = U_p^{-h_1 n} \mathrm{Pr}^{\leq \alpha}(U_p^{h_1 n}\mu_{s}(a+(Dp^n))). \end{align*} \end{theo} \subsection{Nearly overconvergent measures}\label{nomeasures} In this subsection we define two measures with values in the space of nearly overconvergent forms. We begin by studying the behavior of the Maa\ss{}-Shimura operator modulo $p^n$. We have from \cite[(6.6)]{H1bis} the following expression: \begin{align}\label{deltatheta} \delta_k^s = \sum_{j=0}^s { s \choose j} \frac{\Gamma(k+s)}{\Gamma(k+s-j)} {\Theta}^{s-j} X^j, \end{align} for $\Theta = q\frac{\textup{d}}{\textup{d}q}$ as in Section \ref{overconvergen}. We now give an elementary lemma. \begin{lemma} For all integers $s$ we have \begin{align*} \mathrm{v}_p(s) & \leq \mathrm{v}_p \left( {s \choose j} (k+s-1)\cdots (k+s -j)\right) \end{align*} for all $1 \leq j \leq s$. \end{lemma} \begin{proof} Simply notice that the valuation of $(k+s-1)\cdots (k+s -j)$ is at least that of $j!$ and that $s \mid (s! / (s-j)!)$. \end{proof} The following two propositions are almost straightforward. \begin{prop}\label{MSmodp_0} Let $k$, $k'$ be two integers with $k \equiv k' \bmod p^n(p-1)$, and let $f_k$ and $f_{k'}$ be two algebraic nearly holomorphic modular forms such that $f_k \equiv f_{k'} \bmod p^m$. Then $\delta_k f_k \equiv \delta_{k'}f_{k'} \bmod p^{\mathrm{min}(n,m)}$. \end{prop} \begin{proof} Direct computation from the formula in Proposition \ref{MSfam}. \end{proof} \begin{prop}\label{MSmodp} Let $k$, $k'$ be two integers with $k \equiv k' \bmod p^n(p-1)$, and let $f_k$ and $f_{k'}$ be two algebraic nearly holomorphic modular forms of the same degree such that $f_k \equiv f_{k'} \bmod p^n$. Let $s$, $s'$ be two positive integers with $s'=s+s_0 p^n(p-1)$. Then $(\delta_k^s f_k)|\iota_p \equiv \delta_{k'}^{s'} f_{k'} \bmod p^{n}$.
\end{prop} \begin{proof} Iterating the above proposition we get $\delta_k^s f_k \equiv \delta_{k'}^{s} f_{k'} \bmod p^{n}$. But $s'-s \equiv 0 \bmod p^n$, so by the above lemma and (\ref{deltatheta}) we have $\delta_{k+2s}^{s'-s}\delta_k^s f_k\equiv {\Theta}^{s'-s}\delta_{k'}^{s} f_{k'} $. We conclude as ${\Theta}^{s'-s}\equiv \iota_p \bmod p^n$. \end{proof} Before constructing the aforementioned measures, we recall the existence of the Kubota-Leopoldt $p$-adic $L$-function. \begin{prop}\label{zetames} Let $\chi$ be a primitive character modulo $Cp^r$, with $C$ and $p$ coprime and $r\geq 0$. Then for any $b\geq 2$ coprime to $p$, there exists a measure $\zeta_{\chi,b}$ such that for every finite-order character $\varepsilon$ of $Z_D$ and any integer $m\geq 1$ we have $$\int_{Z_D} \varepsilon(z)z_p^{m-1}\textup{d}\zeta_{\chi,b}(z)=(1-\varepsilon'\chi'(b)b^{m})L_{Dp}(1-m,\chi\varepsilon), $$ where $\chi'$ denotes the prime-to-$p$ part of $\chi$. \end{prop} To such a measure and to each character $\varepsilon$ modulo $Np^r$, we can associate by $p$-adic Mellin transform a formal series \begin{align*} G(S,\varepsilon,\chi,b) = \int_{Z_D} \varepsilon(z) {(1+S)}^{z_p} \textup{d}\zeta_{\chi,b}(z) \end{align*} in $\mathcal{O}_K[[S]]$, where $K$ is a finite extension of $\mathbb{Q}_p$. We have a natural map from $\mathcal{O}_K[[S]]$ to $\mathcal{A}(\mathcal{W})$ induced by $S\mapsto (\kappa \mapsto \kappa(u)-1)$. We shall denote by $L_p(\kappa,\varepsilon,\chi,b)$ the image of $G(S,\varepsilon,\chi,b)$ by this map.\\ We define an element of $\mathcal{A}(\mathcal{W})[[q]]$ \begin{align*} \mathcal{E}_{\kappa}(\varepsilon) & = \sum_{n=1, (n,p)=1}^{\infty} L_p(\kappa[-2],\varepsilon, \sigma_n,b) q^n \sum_{\tiny{\begin{array}{c} t_1^2t_2^2 |n, \\ (t_1t_2,Dp)=1, \\ t_1>0, t_2>0 \end{array}}}t_1^{-2}t_2^{-3} \mu(t_1)\varepsilon(t_1t_2^2)\sigma_n(t_1) \kappa({t_1t_2^2}).
\end{align*} If $\kappa=[k]$, we then have $[k](\mathcal{E}_{\kappa}(\varepsilon))=(1-\varepsilon'(b)b^{k-1})E_{k -\frac{1}{2}}(\varepsilon\omega^{-k})|\iota_p$, where $\iota_p$ is the trivial character modulo $p$.\\ We fix two even Dirichlet characters: $\xi$, primitive modulo $Cp^{\delta}$ ($\delta=0,1$), and $\psi$, defined modulo $pN$. Fix also a positive slope $\alpha$ and an integer $D$ which is a square and divisible by $4$, $C^2$ and $N$.\\ Let $h$ be an integer, $h > 2{\alpha}+1$. For $s=0, 1,\ldots$ we now define distributions $\mu_{s}$ on $\mathbb{Z}^{\times}_p$ with values in ${\mathcal{N}^{r}(D,\mathcal{A}(\mathcal{W}))}^{\leq \alpha}$. For any finite-order character $\varepsilon$ of conductor $p^n$ we set \begin{align*} \mu_{s}(\varepsilon) = \mathrm{Pr}^{\leq{\alpha}}U_p^{2n-1}\left(\theta(\varepsilon\xi\omega^{s})|\left[\frac{D}{4C^2}\right]\delta_{\kappa\left[-s-\frac{1}{2}\right] }^{\frac{s-\beta_s}{2}} \mathcal{E}_{\kappa[-s]}(\psi\xi \varepsilon \sigma_{-1})\right) \end{align*} with $\beta_s =0$, $1$ such that $s\equiv \beta_s \bmod 2$. The projector $\mathrm{Pr}^{\leq{\alpha}}$ is {\it a priori} defined only on $\mathcal{N}^{r}(D,\mathcal{A}(\mathcal{U}))$ but, thanks to Proposition \ref{lessalpha}, it makes perfect sense to apply it to a formal polynomial $q$-expansion, as it is a formal power series in $U_p$ and we know how $U_p$ acts on a polynomial $q$-expansion.\\ Define $t_0 \in \mathbb{Q}$ to be the smallest rational number such that $z_p^{\log(\kappa)}$ converges for all $z_p$ in $1+p\mathbb{Z}_p$ and $\kappa$ in $\mathcal{W}(t_0)$. \begin{prop}\label{GlueDist} The distributions $\mu_s$ defined above define an $h$-admissible measure $\mu$ with values in ${\mathcal{N}^{r}(D,\mathcal{A}(\mathcal{W}(t_0) \times \mathcal{W}))}^{\leq \alpha}$. \end{prop} \begin{proof} We have to check that the two conditions of Theorem \ref{Theoadm} are verified.
The calculations are similar to those of \cite[Theorem 2.7.6, 2.7.7]{DD} or, more precisely, to those made in \cite[\S 3.5.6]{Gorsse} and \cite[\S 4.6.8]{CP}, which study the growth condition in detail. We have the discrete Fourier expansion \begin{align*} \mathbf{1}_{a_p + p^n\mathbb{Z}_p} (x) = \frac{1}{p^{n-1}}\sum_{\varepsilon} \varepsilon(a_p^{-1}x). \end{align*} By integration, together with the fact that each $\mu_{s}(\varepsilon)$ belongs to $\mathcal{N}^{r}(Dp^{2r},\mathcal{A}(\mathcal{U}))$, we obtain (i).\\ For the estimate (ii), we have to show that for all $n\geq 0$, $0 \leq s \leq h-1 $, \begin{align*} \left| U_p^{2n}\sum_{i=0}^s{ s \choose i } (-a_p)^{s-i}\mu_i(a+(Lp^n))\right|_p < C p^{-ns}, \end{align*} where the norm $|\phantom{e}|_p$ is the $q$-expansion norm defined in Section \ref{families}. Let us write \begin{align*} \sum_{i=0}^s{ s \choose i } (-a_p)^{s-i}\mu_i(a+(Lp^n)) = \sum_{j=0}^s \sum_{n=0}^{\infty} b_n^j(\kappa) X^j q^n. \end{align*} We must therefore bound the norm of $b_n^j=b_n^j(\kappa)$ on $\mathcal{W}(t_0)$. Using (\ref{deltatheta}), for $\beta_i=0,1$, $\beta_i \equiv i \bmod 2$, we expand \begin{align*} & \mu_i(a+(Lp^n)) = \theta(a+(Lp^n)) \times \\ &\times \sum_{j=0}^{\frac{i-\beta_i}{2}} { \frac{i-\beta_i}{2} \choose j} \log(\kappa[-\frac{i+1+\beta_i}{2}-1])\cdots \log(\kappa[-\frac{i+1+\beta_i}{2}-j]) {\Theta}^{\frac{i-\beta_i}{2}-j} \mathcal{E}_{\kappa\left[-i\right]}(a+(Lp^n))X^j. \end{align*} Note that $\mathcal{E}_{\kappa} = \sum_n \nu'_n q^n$, where the $\nu'_n$ are measures.
Hence we have \begin{align*} b_n^j=\sum_{i=0}^s { s \choose i} {(-a_p)}^{s-i} a(i,n) { \frac{i-\beta_i}{2} \choose j} \log(\kappa[-\frac{i+1+\beta_i}{2}-1])\cdots \log(\kappa[-\frac{i+1+\beta_i}{2}-j]), \end{align*} where $a(i,n)=\int z_p^i \textup{d}\nu_n $, for $\nu_n$ a measure, namely a linear combination of Kubota-Leopoldt $p$-adic $L$-functions and Dirac deltas. For $\beta=0$, $1$ and for $j\geq 1$ we shall write: \begin{align*} D_{\kappa,\beta}^j = \left(z_p^{\log(\kappa^{2})-2 -\beta} \frac{\partial}{\partial z_p} \cdots z_p^{-1}\frac{\partial}{\partial z_p} \cdot z_p^{-1}\frac{\partial}{\partial z_p}z_p^{1+\beta+2j -2\log(\kappa)} \right), \end{align*} where we have applied $\frac{\partial}{\partial z_p}$ $j$ times and multiplied $j-1$ times by $z_p^{-1}$. We note that we have for any positive integer $i$: \begin{align*} D_{\kappa,\beta}^j (z_p^i) = \log(\kappa^{-2}[i+1+\beta+2])\cdots \log(\kappa^{-2}[i+1+\beta+2j]) z_p^i. \end{align*} Similarly, \begin{align*} \mathfrak{D}_{\kappa,\beta}^j & = \left(z_p^{2j+\beta -1} \frac{\partial}{\partial z_p} \cdots z_p^{-1}\frac{\partial}{\partial z_p} \cdot z_p^{-1}\frac{\partial}{\partial z_p}z_p^{-\beta} \right),\\ \mathfrak{D}_{\kappa,\beta}^j (z_p^i) & = (i-\beta)(i-\beta -2)\cdots (i-\beta-2j+2) z_p^i. \end{align*} Summing up, \begin{align*} & \sum_{i=0}^s { s \choose i} {(-a_p)}^{s-i} { \frac{i-\beta}{2} \choose j} \log(\kappa[-\frac{i+1+\beta}{2}-1])\cdots \log(\kappa[-\frac{i+1+\beta}{2}-j])z_p^i = \\ & = \frac{2^{2j}}{j!}{(-1)}^{-j}\mathfrak{D}_{\kappa,\beta}^j D_{\kappa,\beta}^j \left({(z_p-a_p)}^s\right). \end{align*} We have that $|\frac{\partial}{\partial z_p}{(z_p-a_p)}^s\mathbf{1}_{a+(Lp^n)}|_p=p^{-n(s-1)}$.
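For instance (an elementary check of the second formula above, with $j=1$): $\mathfrak{D}_{\kappa,\beta}^{1} = z_p^{1+\beta}\frac{\partial}{\partial z_p}z_p^{-\beta}$, so
\begin{align*}
\mathfrak{D}_{\kappa,\beta}^{1}(z_p^{i}) = z_p^{1+\beta}\frac{\partial}{\partial z_p}\left(z_p^{i-\beta}\right) = (i-\beta)\, z_p^{1+\beta} z_p^{i-\beta-1} = (i-\beta)\, z_p^{i},
\end{align*}
in agreement with the general formula $\mathfrak{D}_{\kappa,\beta}^j (z_p^i) = (i-\beta)(i-\beta -2)\cdots (i-\beta-2j+2) z_p^i$.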
The $p$-adic logarithm $\log(\kappa)$ is not bounded on $\mathcal{W}$, but the maximum modulus principle \cite[\S 3.8.1, Proposition 7]{BGR} ensures that there exists $C_{t_0} \in \mathbb{R}$ such that $\vert \log(\kappa)\vert_{\mathcal{W}(t_0)} < C_{t_0}$. As $\nu_n$ is a measure, we have \begin{align*} \left\vert\int_{a+(Dp^n)} \mathfrak{D}_{\kappa,\beta}^j D_{\kappa,\beta}^j \left({(z_p-a_p)}^s\right) \textup{d}\nu_n\right\vert_p = o(p^{-n(s-2j)}). \end{align*} As we have \begin{align*} \sum_{i=0, i\equiv 0 \bmod 2 }^s{ s \choose i } (-a_p)^{s-i}z_p^{i}= &\frac{1}{2}({(z_p-a_p)}^s + {(-z_p-a_p)}^s),\\ \sum_{i=0, i\equiv 1\bmod 2 }^s { s \choose i } (-a_p)^{s-i}z_p^{i}= &\frac{1}{2}({(z_p-a_p)}^s - {(-z_p-a_p)}^s), \end{align*} we deduce the same estimate on $\vert b_n^j \vert_p$ because \begin{align*} \frac{j!}{2^{2j}}{(-1)}^{-j} b_n^j = & \int_{a+(Dp^n)} \mathfrak{D}_{\kappa,0}^j D_{\kappa,0}^j \left(\frac{1}{2}({(z_p-a_p)}^s + {(-z_p-a_p)}^s)\right) \textup{d}\nu_n + \\ & + \int_{a+(Dp^n)} \mathfrak{D}_{\kappa,1}^j D_{\kappa,1}^j \left(\frac{1}{2}({(z_p-a_p)}^s - {(-z_p-a_p)}^s)\right) \textup{d}\nu_n. \end{align*} Recall that $U_p^{2n}X^j=p^{2nj}X^j$. Then for all $s\geq 0$, $n \geq 0$ we have the growth condition of Theorem \ref{Theoadm}. This assures us that these distributions define a unique $h$-admissible measure $\mu$ with values in $\mathcal{A}(\mathcal{W}(t_0) \times \mathcal{W}(t))[[q]]$. We can see using Proposition \ref{MSmodp} that $\mu_{s}(\varepsilon)$ satisfies the hypothesis of Proposition \ref{naivefam}, and hence $\mu_{s}(\varepsilon)$ belongs to ${\mathcal{N}^{r}(L,\mathcal{A}(\mathcal{W}(t_0)))}^{\leq \alpha}$. The Mellin transform \begin{align*} \kappa' \mapsto \int_{1+p\mathbb{Z}_p} {\kappa'(u)}^{z}\textup{d}\mu(z) \end{align*} gives us the desired two-variable family.
\end{proof} Note that if $\alpha=0$, we do not need to introduce the differential operators $\mathfrak{D}_{\kappa,\beta}^j$ and $D_{\kappa,\beta}^j$, and the above families are defined over the whole of $\mathcal{W} \times \mathcal{W}$ (see also the construction in \cite[\S 4.3]{UrbNholo}).\\ We then define an improved one-variable family $\theta.E(\kappa)=\theta.E(b,\xi', \psi')$. We call this measure improved because it will allow us to construct a one-variable $p$-adic $L$-function which does not present a trivial zero. Fix a weight $k_0$ and, to define $\theta.E(\kappa)$, suppose that $\xi=\xi'\omega^{2-k_0}$, with $\xi'$ a character of conductor $C$ such that $\xi'(-1)={(-1)}^{k_0}$. We define \begin{align*} \mathrm{Pr}^{\leq \alpha} {\left(\theta(\xi') |\left[ \frac{D}{4C^2} \right] \delta_{\kappa\left[- k_0 - \frac{3}{2} \right]}^{\frac{k_0 -\beta}{2}-1}\tilde{\mathcal{E}}_{[\kappa]}(\sigma_{-1}\psi'\xi')\right)}, \end{align*} where \begin{align*} \tilde{\mathcal{E}}_{[\kappa]}(\chi') & = (1-\chi'(b)\kappa(b)b^{-k_0+1}) L_p(\kappa^2[-4-2k_0],{\chi'}^2, \mathbf{1},b) + \\ & (1-{(\chi')}^2(b)\kappa(b^2)b^{-2k_0 -4}) \sum_{n=1}^{\infty} L_p(\kappa[-k_0],\chi, \sigma_n,b) q^n \\ & \times \sum_{\tiny{\begin{array}{c} t_1^2 t_2^2 |n, \\ (t_1 t_2,Dp)=1, \\ t_1>0, t_2>0 \end{array}}} t_1^{-2} t_2^{-3} \mu(t_1)\chi(t_1 t_2^2)\sigma_n(t_1) \kappa(t_1 t_2^2). \end{align*} Let $F(\kappa)$ be a family of overconvergent eigenforms with coefficients in $\mathcal{U}$. We define a linear form $l_{F}$ on ${\mathcal{M}(N,\mathcal{K}(\mathcal{U}))}^{\leq \alpha}$ as in \cite[Proposition 6.7]{Pan}. Note that the evaluation formula also holds for weights which are, in Panchishkin's notation, {\it critical}, i.e. when $\alpha = (k_0-2)/2$, because at such a point $\mathcal{C}$ is \'etale above $\mathcal{W}$. In \cite{Pan} this case is excluded because a trivial zero appears in his interpolation formula.
Such a trivial zero is studied in \cite{SteTZ}, where Conjecture \ref{MainCoOC} for $\rho_f(k_0/2)$ is proven.\\ We can define linear forms for nearly overconvergent families, in a way similar to \cite[\S 4.2]{UrbNholo} but without the restriction $N=1$. For this, let ${\mathbb{T}^r(N,\mathcal{K}(\mathcal{U}))}^{(Np)}$ be the sub-algebra of $\mathrm{End}_{\mathcal{K}(\mathcal{U})}({\mathcal{N}^{r}(N,\mathcal{K}(\mathcal{U}))}^{\leq \alpha})$ generated by the Hecke operators outside $Np$. It is a commutative and semisimple algebra; hence we can diagonalize ${\mathcal{N}^{r}(N,\mathcal{K}(\mathcal{U}))}^{\leq \alpha}$ for the action of this Hecke algebra. For $F$ an eigenform for ${\mathbb{T}^r(N,\mathcal{K}(\mathcal{U}))}^{(Np)}$, we have a linear form $l_{F}^r$ corresponding to the projection of an element of ${\mathcal{N}^{r}(N,\mathcal{K}(\mathcal{U}))}^{\leq \alpha}$ to the $\mathcal{K}(\mathcal{U})$-line spanned by $F$. \\ We say that a family $F(\kappa)$ is {\it primitive} if it is a family of eigenforms and all its specializations at non-critical weights are the Maa\ss{}-Shimura derivative of a primitive form. This implies that the system of eigenvalues for ${\mathbb{T}^r(N,\mathcal{K}(\mathcal{U}))}^{(Np)}$ corresponding to $F(\kappa)$ appears with multiplicity one in ${\mathcal{N}^{r}(N,\mathcal{K}(\mathcal{U}))}^{\leq \alpha}$. We can see $l_{F}^r$ as a $p$-adic analogue of the normalized Petersson product; more precisely, we have the following proposition. We recall that $\tau_N$ is the Atkin-Lehner involution of level $N$ normalized as in \cite[h4]{H6}. When the level is clear from the context, we shall simply write $\tau$. \\ \begin{prop} Let $F(\kappa)$ be an overconvergent family of primitive eigenforms of finite slope $\alpha$, degree $r$ and conductor $N$.
Let $k$ be a classical non-critical weight; then for all $G(\kappa)$ in ${\mathcal{N}^{r}(N,\mathcal{K}(\mathcal{U}))}^{\leq \alpha}$ we have \begin{align*} l_{F}^r (G(k)) = \frac{\left\langle F(k)^{c}|\tau, G(k) \right\rangle}{\left\langle F(k)^{c}|\tau, F(k) \right\rangle}. \end{align*} \end{prop} \begin{proof} Let $f$ be an element of $\mathcal{N}^r_k(N,\mathbb{C})$; the linear form \begin{align*} g \mapsto \frac{\left\langle f^{c}|\tau, g \right\rangle}{\left\langle f^{c}|\tau, f \right\rangle} \end{align*} is Hecke equivariant and takes the value $1$ on $f$. It is the unique one with these two properties.\\ For any pair of forms $g_1$ and $g_2$ of weights $k-2r_1$ and $k-2r_2$, $r_1 \neq r_2$, using Proposition \ref{sumMS} and Lemma \ref{deltaT_l}, we see that $\delta_{k-2r_1}^{r_1}g_1$ and $\delta_{k-2r_2}^{r_2}g_2$ are automatically orthogonal for the Petersson product normalized as above.\\ Then, as $k > 2r$, we have for any $f$ in $\mathcal{N}_k^r(N,\mathbb{C})$ a linear form \begin{align*} g \mapsto \frac{\left\langle f^{c}|\tau, g \right\rangle}{\left\langle f^{c}|\tau, f \right\rangle} \end{align*} which is Hecke equivariant and takes the value $1$ on $f$. Moreover, if $f$ is defined over $\overline{\mathbb{Q}}$ then both linear forms are defined over $\overline{\mathbb{Q}}$.\\ Let $l_{F(k)}^r$ be the specialization of $l_{F}^r$ at weight $k$. As we have $l_{F(k)}^r (F(k))=1$, we deduce that $l_{F(k)}^r$ must coincide, after extending scalars if necessary, with the previous one and we are done. \end{proof} In particular, we deduce from the above proof the $p$-adic analogue of the theorem which says that holomorphic forms are orthogonal to Maa\ss{}-Shimura derivatives.\\ We have the following lemma. \begin{lemma} Let $H$ be the overconvergent projector of Corollary \ref{holoproj} and $F(\kappa)$ a family of overconvergent primitive eigenforms; then \begin{align*} l_{F} \circ H = l_{F}^r.
\end{align*} \end{lemma} \begin{proof} Let us write $G(\kappa) =\sum_{i=0}^r \delta_{\kappa[-2i]}^i G_i(\kappa)$. The above proposition tells us $l^r_F(G(\kappa)) = l^r_F(G_0(\kappa))$. By definition, $l_{F}^r = l_{F}$ when restricted to ${\mathcal{M}(N,\mathcal{K}(\mathcal{U}))}^{\leq \alpha}$ and we are done. \end{proof} We remark that $l_{F}$ is defined over $\mathcal{K}(\mathcal{U})$ but not over $\mathcal{A}(\mathcal{U})$.\\ The linear form $l_F$ defines a splitting of $\mathcal{K}(\mathcal{U})$-algebras \begin{align*} \mathbb{T}^r(N,\mathcal{A}(\mathcal{W}(t)))^{\leq \alpha}\otimes \mathcal{K}(\mathcal{U}) = \mathcal{K}(\mathcal{U}) \times C \end{align*} and consequently an idempotent $1_F \in \mathbb{T}^r(N,\mathcal{A}(\mathcal{W}(t)))^{\leq \alpha}\otimes \mathcal{K}(\mathcal{U}) $. It is possible to find an element $H_F(\kappa) \in \mathcal{A}^{\circ}(\mathcal{U})$ such that $H_F(\kappa) 1_F$ belongs to $ \mathbb{T}^r(N,\mathcal{A}(\mathcal{W}(t)))^{\leq \alpha}$. We can then say that $l_F$ is not holomorphic, in the sense that it is not defined for certain $\kappa$ in $\mathcal{U}$. We hope that the above lemma helps the reader to understand why the overconvergent projectors cannot be defined for all weights. \\ \begin{rem} We will see in Section \ref{KimBel} some possible relations between the poles of $l_{F}$ and another $p$-adic $L$-function for the symmetric square. \end{rem} \subsection{The two $p$-adic $L$-functions}\label{padicL} We shall now construct the two-variable $p$-adic $L$-function $L_p(\kappa,\kappa')$ of Theorem \ref{Tintro} and, in the case where $\xi=\xi'\omega^{2-k_0}$, with $\xi'$ a character of conductor $C$ such that $\xi'(-1)={(-1)}^{k_0}$, an {\it improved} $p$-adic $L$-function $L^{*}_p(\kappa)$.
Following the terminology of Greenberg-Stevens, we call this $p$-adic $L$-function improved because it has no trivial zero and because at $\kappa_0$ it is a non-zero multiple of the value $\mathcal{L}(k_0-1,\mathrm{Sym}^2(F(\kappa)),{\xi'}^{-1})$. \\ These two $p$-adic $L$-functions are related by the key Corollary \ref{CoroImp}. Allowing a cyclotomic variable forces us to use theta series of level divisible by $p$ even when the conductor of the character is not divisible by $p$; Lemma \ref{thetanonprim} tells us that the trivial zero for $f$ as in Theorem \ref{MainThOC} comes precisely from this fact. The construction of the one-variable $p$-adic $L$-function is done in the spirit of \cite{HT}, using the measure $\theta.E(\kappa)$, which is not a convolution of two measures but the product of a measure by a constant theta series whose level is not divisible by $p$. We warn the reader that the proof of Theorem \ref{T1OC} below is very technical and is not necessary for what follows.\\ Before constructing the $p$-adic $L$-functions, we introduce the generalization to nearly overconvergent forms of the {\it twisted} trace operator defined in \cite[\S 1 VI]{H1bis}. It will allow us to simplify certain calculations we will perform later.\\ Fix two prime-to-$p$ integers $D$ and $N$, with $N|D$. We define for classical $k,r$ \begin{align*} \begin{array}{ccccc} T_{D/N,k} : & {\mathcal{N}_k^{r}(Dp,A)} & \rightarrow & {\mathcal{N}_k^{r}(Dp,A)}\\ & f & \mapsto &{(D/N)}^{k/2} \sum_{[\gamma] \in \Gamma(N)/ \Gamma(N,D/N)} f |_k \left(\begin{array}{cc} 1 & 0 \\ 0 & D/N \end{array} \right) |_k \gamma \end{array}. \end{align*} As $D$ is prime to $p$, it is clear that $T_{D/N,k}$ commutes with $U_p$. It extends uniquely to a linear map $$\begin{array}{ccccc} T_{D/N} : & {\mathcal{N}^{\infty}(Dp,\mathcal{A}(\mathcal{U}))} & \rightarrow & {\mathcal{N}^{\infty}(Np,\mathcal{A}(\mathcal{U}))} \end{array} $$ which in weight $k$ specializes to $T_{D/N,k}$.
In particular, it preserves the slope decomposition. \\ Let us fix a $p$-stabilized eigenform $f$ of weight $k$ as in the introduction such that $k -1 > v_p(\lambda_p)$. Let $\mathcal{C}_F$ be a neighbourhood of $f$ in $\mathcal{C}$ contained in a unique irreducible component of $\mathcal{C}$. It corresponds by duality to a family of overconvergent modular forms $F(\kappa)$. The slope of $U_p$ on $\mathcal{C}_F$ is constant; let us denote this slope by $\alpha$. We shall denote by $\lambda_p(\kappa)$ the eigenvalue of $U_p$ on $\mathcal{C}_F$.\\ Let $u$ be a generator of $1+p\mathbb{Z}_p$ such that $u=b \omega^{-1}(b)$, where $b$ is the positive integer we have chosen in Proposition \ref{zetames}. Let us define \begin{align*} \Delta(\kappa,\kappa') = & \left(1-\psi'\xi'(b)\frac{\kappa(u)}{b\kappa'(u)}\right), \\ \Delta_0(\kappa) = & (1-\xi'\psi'(b)b^{-k_0+1}\kappa(u))(1-\xi'\psi'(b)b^{-2k_0-4}{\kappa(u)}^2). \end{align*} The two $p$-adic $L$-functions that we define are \begin{align*} L_p(\kappa,\kappa')= & D^{-1}{\Delta(\kappa,\kappa')}^{-1} l_{F}(T_{D/N} \theta \ast E(\kappa,\kappa')) \in \mathcal{K}(\mathcal{C}_F \times \mathcal{W}), \\ L_p^*(\kappa)= & D^{-1}{\Delta_0(\kappa)}^{-1} l_{F}( T_{D/N} \theta.E(\kappa)) \in \mathcal{K}(\mathcal{C}_F). \end{align*} We say that a point $(\kappa, \kappa')$ of $\mathcal{A}(\mathcal{C}_F \times \mathcal{W})$ is classical if $\kappa$ is a non-critical weight and $\kappa'(z)=\varepsilon(\left\langle z \right\rangle) z^s$, for $\varepsilon$ a finite-order character of $1+p\mathbb{Z}_p$ and $s$ an integer such that $ 1 \leq s+1 \leq k-1$. This ensures that $s+1$ is a critical integer \`a la Deligne for $\mathrm{Sym}^2(f)\otimes \omega^{-s}$. We define certain numbers which will appear in the following interpolation formulae. Suppose that $(\kappa,\kappa')$ is classical in the above sense and let $n$ be such that $\varepsilon$ factors through $1+p^n\mathbb{Z}_p$. Let $n_0=n$ resp.
$n_0=0$ if $\varepsilon$ is not trivial resp. is trivial. For a Dirichlet character $\eta$, we denote by $\eta_0$ the associated primitive character. Let us set \begin{align*} E_1(\kappa,\kappa')= & \lambda_p(\kappa)^{-2n_0}(1-{(\xi\varepsilon\omega^{s})}_0(p)\lambda_p(\kappa)^{-2}p^{s}); \end{align*} if $F(\kappa)$ is primitive at $p$ we define $E_2(\kappa,\kappa')=1$, otherwise \begin{align*} E_2(\kappa,\kappa')= & (1-{(\xi^{-1}\varepsilon^{-1}\omega^{-s}\psi)}_0(p)p^{k-2-s}) \times \\ & (1-{(\xi^{-1}\varepsilon^{-1}\omega^{-s}\psi^{2})}_0(p)\lambda_p(\kappa)^{-2} p^{2k-3-s}). \end{align*} We shall denote by $F^{\circ}(\kappa)$ the primitive form associated to $F(\kappa)$. We shall write $W'(F(\kappa))$ for the prime-to-$p$ part of the root number of $F^{\circ}(\kappa)$. If $F(\kappa)$ is not primitive at $p$ we set \begin{align*} S(F(\kappa)) = (-1)^k \left( 1 - \frac{\psi_0(p)p^{k-1}}{\lambda_p(\kappa)^{2}} \right)\left( 1 - \frac{\psi_0(p)p^{k-2}}{\lambda_p(\kappa)^{2}} \right), \end{align*} and $S(F(\kappa)) = (-1)^k $ otherwise. Let $D$ be a positive integer divisible by $4C^2$ and $N$; we shall write $D=4C^{2}D'$. Let $\beta=0$, $1$ be such that $s \equiv \beta \bmod 2$; we set \begin{align*} C_{\kappa,\kappa'} & = s ! G(\xi\varepsilon\omega^s) C(\xi\varepsilon\omega^s)^{s} N^{-k/2} {D'}^{\frac{s-\beta}{2}} 2^{-2s -k -\frac{1}{2}},\\ C_{\kappa} & = C_{\kappa,[k_0-2]}, \end{align*} where $C(\chi)$ denotes the conductor of $\chi$. \begin{theo}\label{T1OC} \begin{itemize} \item[i)] The function $L_p(\kappa,\kappa')$ is defined on $\mathcal{C}_{F} \times \mathcal{W}$; it is meromorphic in the first variable and of logarithmic growth $h=[2 \alpha]+2$ in the second variable (i.e., as a function of $s$, $L_p(\kappa,[s])/\prod_{i=0}^h \log_p(u^{s-i}-1)$ is holomorphic on the open unit ball).
For all classical points $(\kappa, \kappa')$, we have the following interpolation formula \begin{align*} L_p(\kappa,\kappa') = C_{\kappa,\kappa'} E_1(\kappa,\kappa')E_2(\kappa,\kappa') \frac{\mathcal{L}(s+1,\mathrm{Sym}^2(F(\kappa)),\xi^{-1}\varepsilon^{-1}\omega^{-s})}{\pi^{s+1}S(F(\kappa))W'(F(\kappa))\left\langle F^{\circ}(\kappa),F^{\circ}(\kappa)\right\rangle}. \end{align*} \item[ii)] The function $L_p^*(\kappa)$ is meromorphic on $\mathcal{C}_{F}$. For $ k \geq k_0 -1$, we have the following interpolation formula \begin{align*} L^*_p(\kappa) = C_{\kappa} E_2(\kappa,[k_0-2]) \frac{\mathcal{L}(k_0-1,\mathrm{Sym}^2(F(\kappa)),{\xi'}^{-1})}{\pi^{k_0-1}S(F(\kappa))W'(F(\kappa))\left\langle F^{\circ}(\kappa),F^{\circ}(\kappa)\right\rangle}. \end{align*} \end{itemize} \end{theo} If $\alpha=0$, using the direct estimate in \cite[Theorem 2.7.6]{DD}, we see that we can take $h=1$, and the first part of this theorem is \cite[Theorem]{H6}.\\ The poles on $\mathcal{C}_{F}$ of these two functions come from the poles of the overconvergent projection and of $l_{F}$; if $(\kappa,\kappa')$ corresponds to a couple of points which are classical, then locally around this point no poles appear. \\ Let us fix a point $\kappa$ of $\mathcal{C}_F$ above $[k]$ and let $f$ be the corresponding form. If $k > 2 \alpha+2$ then, by specializing the first variable at $\kappa$, we recover the one-variable $p$-adic $L$-function of the symmetric square of $f$ constructed in \cite{DD} (up to some Euler factors). If instead $k \leq 2 \alpha+2$, the method of \cite{DD} cannot give a well-defined one-variable $p$-adic $L$-function because, as we said in the introduction, the Mellin transform of an $h$-admissible measure $\mu$ is well-defined only if the first $h+1$ moments are specified. But in this situation the number of critical integers is $k-1$ and consequently we do not have enough moments.
What we have to do is to choose the extra moments $\int \varepsilon(u)u^{s} \textup{d}\mu$ for all finite-order characters $\varepsilon$ of $1+p\mathbb{Z}_p$ and $s=k-1,\ldots,h$. We proceed as in \cite{PolSt}; the two-variable $p$-adic $L$-function $L_p(\kappa,\kappa')$ is well defined for all $(\kappa,\kappa')$, so we decide that \begin{align*} L_p(s,\mathrm{Sym}^2(f),\xi):=L_p(\kappa,[s]). \end{align*} This amounts to saying that the extra moments for $\mu$ are \begin{align*} \int \varepsilon(u)u^{s} \textup{d}\mu = L_p(\kappa,\varepsilon (\left\langle z \right\rangle) z^s). \end{align*} To justify such a choice, we remark that if a classical point $\kappa''$ of $\mathcal{C}_F$ is sufficiently close to $\kappa$, then $\kappa''$ is above $[k'']$ with $k'' > h+1$. In this case $L_p(\kappa'',\varepsilon (\left\langle z \right\rangle) z^s)$ interpolates the special values $L(s,\mathrm{Sym}^2(f'') \otimes \xi)$ which are critical \`a la Deligne. We are then choosing the extra moments by $p$-adic interpolation along the weight variable. Fix $f$ as in Theorem \ref{MainThOC} and let $\kappa_0$ in $\mathcal{U}$ be such that $F(\kappa_0)=f$. We have the following important corollary. \begin{coro}\label{CoroImp} We have the following factorization of locally analytic functions around $\kappa_0$ in $\mathcal{C}_F$: \begin{align*} L_p(\kappa,[k_0-1])= (1 - \xi'(p)\lambda_p(\kappa)^{-2}p^{k_0-2})L_p^*(\kappa). \end{align*} \end{coro} We recall that this corollary is the key to the proof of Theorem \ref{MainThOC}.\\ The rest of the section will be devoted to the proof of Theorem \ref{T1OC}. \begin{proof}[Proof of Theorem \ref{T1OC}] Let $(\kappa,\kappa')$ be a classical point as in the statement of the theorem. In particular $\kappa(u)=u^k$ and $\kappa'(u)=\varepsilon(u)u^s$, with $0 \leq s \leq k-2$. \\ We point out that all the calculations we need have already been performed in \cite{Pan,H6}.
\\ If $\varepsilon$ is not trivial at $p$, we shall write $p^n$ for the conductor of $\varepsilon$. If $\varepsilon$ is trivial, then we let $n=1$. Let $\beta=0,1$, $\beta \equiv s \bmod 2 $.\\ We have \begin{align*} L_p(\kappa,\kappa') = & D^{-1} \frac{\left\langle F(\kappa)^c|\tau_{Np}, T_{D/N,k} U_p^{-2 n+1} \mathrm{Pr}^{\leq \alpha} U_p^{2 n-1} g \right\rangle}{\left\langle F(\kappa)^c | \tau, F(\kappa) \right\rangle}, \\ g = & \theta(\varepsilon\xi\omega^{s})|[D/4C^2] E_{k-\frac{2\beta+1}{2}}(2k-s-\beta-3,\xi\psi\sigma_{-1}\omega^{-s}\varepsilon)|\iota_p. \end{align*} We have as in \cite[(7.11)]{Pan} \begin{align*} \left\langle F(\kappa)^c|\tau_{Np}, U_p^{-2 n +1} \mathrm{Pr}^{\leq \alpha} U_p^{2 n-1} g \right\rangle = \lambda_p(\kappa)^{1-2n}p^{(2n-1)(k-1)} \left\langle F(\kappa)^c|\tau_{Np}|[p^{2n-1}], g \right\rangle, \end{align*} where $f|[p^{2n-1}](z)=f(p^{2n-1}z)$. We recall the well-known formulae \cite[page 79]{H1bis} \begin{align*} \left\langle f|[p^{2n}] , T_{D/N,k} g \right\rangle = & {(D/N)}^k \left\langle f|[(p^{2n}D)/N], g \right\rangle, \\ \tau_{Np}|[(p^{2n-1}D)/N] = &{\left(\frac{p^{2n-1}D}{N}\right)}^{-k/2}\tau_{Dp^{2n}} ,\\ \frac{\left\langle F(\kappa)^{c}|\tau_{Np}, F(\kappa) \right\rangle}{\left\langle F(\kappa)^{\circ}, F(\kappa)^{\circ} \right\rangle} = &(-1)^k W'(F(\kappa)) p^{(2-k)/2} \lambda_p(\kappa) \times \\ & \left( 1 - \frac{\psi(p)p^{k-1}}{\lambda_p(\kappa)^{2}} \right)\left( 1 - \frac{\psi(p)p^{k-2}}{\lambda_p(\kappa)^{2}} \right).
\end{align*} Combining these with Lemma \ref{RankPet}, we have \begin{align*} L_p(\kappa,\kappa') = & D^{-1} i^k 2^{\frac{s+1+\beta}{2}+1-k} {(4\pi)}^{-\frac{s+1+\beta}{2}}{(2\pi)}^{-\frac{s+2-\beta}{2}} \Gamma\left(\frac{s+1+\beta}{2} \right)\Gamma\left(\frac{s+2-\beta}{2} \right)\times \\ & \lambda_p(\kappa)^{1 -2n}p^{(2n-1)\left(\frac{k}{2}-1\right)} {(D/N)}^{k/2} {(Dp^{2n})}^{\frac{2s-2k +5}{4}} \times \\ & \frac{D(\beta + s+1, f, \theta(\xi\omega^{s}\varepsilon)|[D/(4C^2)]|\tau_{Dp^{2n}}) }{\left\langle F(\kappa)^{c}|\tau, F(\kappa) \right\rangle}. \end{align*} Let $\eta=\xi\omega^{s}\varepsilon$. We recall the transformation formula for theta series (see \cite[(5.1 c)]{H6}) \begin{align*} \theta(\eta)|\tau_{4C^2p^{2n}}= \left\{ \begin{array}{cc} (-i)^{\beta} {(Cp^{n})}^{-1/2} G(\eta) \theta(\eta^{-1}) & \mbox{ if } \eta \mbox{ is primitive } \bmod p, \\ \begin{array}{c} -(-i)^{\beta}{(Cp)}^{-1/2} G(\eta) \eta_0(p) \times \\ \times (\theta(\eta_0^{-1}) - p^{\beta+1}\eta_0^{-1}(p)\theta(\eta_0^{-1})|[p^2]) \end{array} & \mbox{ if not. } \end{array}\right. \end{align*} We have the following relations for weight $\frac{2\beta +1}{2}$: \begin{align*} \tau_{Dp^n}= \tau_{D}[p^n]p^{n{\frac{2\beta+1}{4}}}, \;\;\:\: \left[D'\right]|_{\frac{2\beta +1}{2}}\tau_{D} = \tau_{4C^2} {(D')}^{-\frac{2\beta +1}{4}}; \end{align*} when $\eta$ is trivial modulo $p$ we obtain \begin{align*} D(\beta + s, f, \theta(\eta)|\tau_{4C^2p^{2n}}) = & -(-i)^{\beta}{(Cp)}^{-1/2} (1 -\lambda_p(\kappa)^2 p^{1-s} \eta_0^{-1}(p)) \times \\ & \times G(\eta) \eta_0(p) D(\beta +s, f, \theta(\eta_0^{-1})). \end{align*} We recall the well-known duplication formula \begin{align*} \Gamma(z)\Gamma\left(z+\frac{1}{2} \right) = 2^{1-2z}\pi^{1/2}\Gamma(2z). \end{align*} Summing up, we let $\delta=0$ resp. $\delta=1$ if $\eta_0$ has conductor divisible resp. not divisible by $p$.
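To see how the duplication formula enters (a computation we make explicit; it accounts for the factor $s!$ in $C_{\kappa,\kappa'}$): since $\beta\in\{0,1\}$ and $\beta\equiv s\bmod 2$, the two Gamma factors appearing above are $\Gamma(\frac{s+1}{2})$ and $\Gamma(\frac{s+2}{2})$ in some order, so taking $z=\frac{s+1}{2}$ gives

```latex
\Gamma\left(\frac{s+1+\beta}{2}\right)\Gamma\left(\frac{s+2-\beta}{2}\right)
 = \Gamma\left(\frac{s+1}{2}\right)\Gamma\left(\frac{s+2}{2}\right)
 = 2^{-s}\,\pi^{1/2}\,\Gamma(s+1)
 = 2^{-s}\,\pi^{1/2}\,s!.
```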
We obtain \begin{align*} L_p(\kappa,\kappa') = & D^{\frac{2s-2\beta}{4}} N^{-k/2} C^{\beta} 2^{-s -\frac{1}{2} -k} 2^{1-s -1} \times \\ & \times {(-1)}^{\delta} p^{sn}\lambda_p(\kappa)^{-2n}(1 -\lambda_p(\kappa)^2 p^{-s} \eta_0^{-1}(p))\eta_0(p) \times \\ & \times E_2(\kappa,\kappa') \frac{\mathcal{L}(s+1,\mathrm{Sym}^2(F(\kappa)),\xi^{-1}\varepsilon^{-1}\omega^{-s})}{\pi^{s+1}S(F(\kappa))W'(F(\kappa))\left\langle F^{\circ}(\kappa),F^{\circ}(\kappa)\right\rangle}. \end{align*} We now evaluate the second $p$-adic $L$-function; we have \begin{align*} L_p^*(\kappa) = & \frac{\left\langle F(\kappa)^c|\tau_{Np}, T_{D/N,k} \mathrm{Pr}^{\leq \alpha} g \right\rangle}{\left\langle F(\kappa)^c | \tau, F(\kappa) \right\rangle}, \\ g = & \theta(\xi') |\left[ \frac{D}{4C^2} \right] \delta_{k- k_0 - \frac{3}{2}}^{\frac{k_0 -\beta}{2}-1} E_{k- k_0 -1}(\sigma_{-1}\psi'\xi'). \end{align*} The relation \begin{align*} \theta(\xi') |\left[ \frac{D}{4C^2} \right]| \tau_{Dp}={\left(\frac{4C^2}{D}\right)}^{\frac{2\beta +1}{4}} \theta(\xi')| \tau_{4C^2}[p]p^{\frac{2\beta+1}{4}} \end{align*} gives us \begin{align*} D\left(\beta + k_0 -1, f, \theta(\xi')|\left[ \frac{D}{4C^2} \right]| \tau_{Dp}\right) = & p^{\frac{2\beta+1}{4}}\lambda_p(\kappa)p^{-\frac{k_0-1}{2}}{\left(\frac{4C^2}{D}\right)}^{\frac{2\beta +1}{4}}(-i)^{\beta} \\ & \times {C}^{-1/2} G(\eta) D(\beta + k_0-1, f, \theta({\xi'}^{-1})). \end{align*} \end{proof} We now give a proposition on the behaviour of $L_p(\kappa,\kappa')$ along $\Delta(\kappa,\kappa')$. We say that $F(\kappa)$ has complex multiplication by a quadratic imaginary field $K$ if $F(\kappa)|T_l =0 $ for almost all $l$ inert in $K$. In particular, if $F(\kappa)$ has complex multiplication it is ordinary; indeed, all the non-critical specializations are classical CM forms, and a finite slope CM form of weight $k$ can only have slope $0$, $(k-1)/2$ or $k-1$. As the slope of $F(\kappa)$ is constant along the family, it must be zero.
We can prove, exactly as in \cite[Proposition 5.2]{H6}, the following proposition. \begin{prop}\label{noCM} Unless $\psi\xi\omega^{-1}$ is quadratic imaginary and $F(\kappa)$ has complex multiplication by the field corresponding to $\psi\xi\omega^{-1}$, $H(\kappa)L_p(\kappa,\kappa')$ is holomorphic at all points $(\kappa,\kappa')$ on the closed subspace defined by $\Delta(\kappa,\kappa')=0$, except possibly for a finite number of points of type $([k],[k-1])$ corresponding to CM forms. \end{prop} Suppose that the family $F(\kappa)$ specializes to a critical CM form. It is known that it is in the image of the operator $\Theta^{k-1}$ and therefore should correspond to a zero of $H_F(\kappa)$. Moreover, such a point on $\mathcal{C}_F$ is ramified above the weight space \cite[Proposition 1]{BelCM}. \section{The proof of Benois' conjecture}\label{Benconj} In this section we shall prove a more general version of Theorem \ref{MainThOC}. Once one knows Corollary \ref{CoroImp}, all that is left to do is to reproduce {\it mutatis mutandis} the method of Greenberg-Stevens. We remark that we have a shift $s \mapsto s-1$ between the $p$-adic $L$-function of the previous section and the one of the introduction.\\ From the interpolation formula given in Theorem \ref{T1OC}, we see that we have a trivial zero when $s=k_0-2$ and $(\omega^{2-k_0}\xi)_0(p)=1$. Let us denote by $\mathcal{L}(f)$ the $\mathcal{L}$-invariant of $\mathrm{Sym}^2(f)\otimes \xi( k_0 -1)$ as defined in \cite{BenLinv}. We have the following theorem. \begin{theo}\label{T2} Fix $f$ in $\mathcal{M}_{k_0}(Np,\psi)$ and suppose that $f$ is Steinberg at $p$. Let $\xi=\xi'\omega^{k_0-2}$ be a character such that $\xi(-1)={(-1)}^{k_0}$ and $\xi'(p)=1$. Then \begin{align*} \lim_{s \to k_0 -1} \frac{L_p(s,\mathrm{Sym}^2(f), \xi)}{s-k_0+1} = \mathcal{L}(f) \frac{\mathcal{L}(k_0-1,\mathrm{Sym}^2(f),\xi^{-1})}{\pi^{k_0-1} W'(f)S(f)\Omega(f)}.
\end{align*} \end{theo} We shall leave the proof of this theorem for the end of the section. We now give the proof of the main theorem of the paper.\\ \begin{proof}[Proof of Theorem \ref{MainThOC}] We let $\xi'=\mathbf{1}$. Assuming Theorem \ref{T2}, what we are left to show is that $\mathcal{L}(s,\mathrm{Sym}^2(f))$ coincides with the completed $L$-function $L(s,\mathrm{Sym}^2(f))$.\\ As we said in Section \ref{primLfun}, the two $L$-functions differ only by some Euler factors at primes dividing $2N$.\\ As $2 \mid N$, we have $(1-\psi^2(2))=1$ because $\psi(2)=0$.\\ We have seen in Section \ref{primLfun} that when $\pi(f)_q$ is a Steinberg representation, the Euler factors at $q$ of $\mathcal{L}(s,\mathrm{Sym}^2(f))$ and $L(s,\mathrm{Sym}^2(f))$ are the same.\\ As the form $f$ has trivial Nebentypus and squarefree conductor, we have that $\pi(f)_q$ is Steinberg for all $q \mid N$ and we are done. \end{proof} More precisely, Theorem \ref{T2} implies Conjecture \ref{MainCoOC} whenever the factor ${\mathcal{E}_N(k-1,f,\xi)}^{-1}$ is non-zero, by the same reasoning as above. This is true if, for example, the character $\xi$ is very ramified modulo $2N$.\\ If we choose $\xi=\psi^{-1}$, we are then considering the $L$-function for the representation $\mathrm{Ad}(\rho_{f})$. In this case the conditions for ${\mathcal{E}_N(k-1,f,\xi)}^{-1}$ to be non-zero are quite restrictive. For example, $2$ must divide the level of $f$. If moreover the weight is odd, then there exists at least one prime $q$ for which, in the notation of Section \ref{primLfun}, $\pi_q$ is a ramified principal series. From the explicit description of the Euler factors at $q$ given in Section \ref{primLfun}, we see that ${L_q(0,\hat{\pi}(f))}^{-1}$ is always zero. \\ The $\mathcal{L}$-invariant for the adjoint representation has been calculated in the ordinary case in \cite{HLinv}.
This approach has been generalized to calculate Benois' $\mathcal{L}$-invariant in \cite{BenLinv2,MokLinv}. These results can be subsumed as follows. \begin{theo}\label{L-inv} Let $F(\kappa)$ be a family of overconvergent eigenforms such that $F(\kappa_0)=f$ and let $\lambda_p(\kappa)$ be its $U_p$-eigenvalue. We have \begin{align*} \mathcal{L}(f) = & -2 \frac{\textup{d} \log \lambda_p(\kappa)}{\textup{d}\kappa}\vert_{\kappa=\kappa_0}. \end{align*} \end{theo} We remark that it is very hard to determine whether $\mathcal{L}(f)$ is non-zero, even though the above theorem tells us that this is true except possibly for a finite number of points.\\ If we suppose $k_0=2$, we are considering an ordinary form, and in this case $\rho_f\vert_{\mathbb{Q}_p}$ is an extension of $\mathbb{Q}_p$ by $\mathbb{Q}_p(1)$. The $\mathcal{L}$-invariant can be described via Kummer theory. Let us denote by $q_f$ the universal norm associated to the extension $\rho_f\vert_{\mathbb{Q}_p}$; we then have $ \mathcal{L}(f)= \frac{\log_p(q_f)}{\mathrm{ord}_p(q_f)}$. Let $A_f$ be the abelian variety associated to $f$; in \cite[\S 3]{SSS} the authors give a description of $q_f$ in terms of the $p$-adic uniformization of $A_f$. When $A_f$ is an elliptic curve, $q_f$ is Tate's uniformizer, and a theorem of transcendental number theory \cite{Saint} tells us that $\log_p(q_f)\neq 0$. \begin{proof}[Proof of Theorem \ref{T2}] Let $\kappa_0$ be the point on $\mathcal{C}$ corresponding to $f$. As the weight of $f$ is not critical, we have that $w:\mathcal{C} \rightarrow \mathcal{W}$ is \'etale at $\kappa_0$. We have $w(\kappa_0)=[k_0]$. Let us write $t_0 = (z \mapsto \omega^{-k_0}(z)z^{k_0} )$; $t_0$ is a local uniformizer in $\mathcal{O}_{\mathcal{W},[k_0-1]}$. As the map $w$ is \'etale at $\kappa_0$, $t_0$ is a local uniformizer for $\mathcal{O}_{\mathcal{C},\kappa_0}$. Let us write $A=\mathcal{O}_{\mathcal{C},\kappa_0}/(T_0^2)$, for $T_0=\kappa - t_0$.
We have an isomorphism between the tangent spaces; this induces an isomorphism on derivations \begin{align*} \mathrm{Der}_{K}(\mathcal{O}_{\mathcal{W},[k_0-1]},\mathbb{C}_p) \cong \mathrm{Der}_{K}(\mathcal{O}_{\mathcal{C},\kappa_0},\mathbb{C}_p). \end{align*} The isomorphism is made explicit by fixing a common basis $\frac{\partial }{\partial T_0}$.\\ We take the local parameter at $[k_0-2]$ in $\mathcal{W}$ to be $t_1=(z \mapsto \omega^{-k_0+2}(z)z^{k_0-2} )$.\\ Let $\xi$ be as in the hypotheses of the theorem and let $L_p(\kappa,\kappa')$ be the $p$-adic $L$-function constructed in Theorem \ref{T1OC}. We can see, locally at $(\kappa_0,[k_0-2])$, the two-variable $p$-adic $L$-function $L_p(\kappa,\kappa')$ as a function $L_p(t_0,t_1)$ of the two local parameters $(t_0,t_1)$. Let us define $t_0(k)=(z \mapsto \omega^{-k_0}(z)z^{k} )$ resp. $t_1(s)=(z \mapsto \omega^{-k_0+2}(z)z^{s} )$ for $k$ resp. $s$ $p$-adically close to $k_0$ resp. $k_0 -2$. Consequently, we set $L_p(k,s)=L_p(t_0(k),t_1(s))$; this is a locally analytic function around $(k_0,k_0-2)$.\\ We have $ \frac{\partial }{\partial k}=\log_p(u) \frac{\partial }{\partial \log T_0}$. The interpolation formula of Theorem \ref{T1OC} {\it i)} tells us that locally $L_p(k,k-2)\equiv 0$. Differentiating this identity with respect to $k$, we obtain \begin{align*} \left. \frac{\partial L_p(k,s)}{\partial s } \right\vert_{s=k_0-1,k=k_0} =- \left.\frac{\partial L_p(k,s)}{\partial k } \right\vert_{s=k_0-1,k=k_0}. \end{align*} Using Corollary \ref{CoroImp} and Theorem \ref{L-inv} we see that \begin{align*} L_p(k,k_0-1) =\mathcal{L}(f) L_p^*(k_0) + O({(k-k_0)}^2) \end{align*} and we can conclude thanks to the second interpolation formula of Theorem \ref{T1OC}.
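To spell out the first-order expansion used here (a sketch under the hypotheses of the theorem; the precise sign and the normalization between $\kappa$ and $k$ depend on the conventions above): as $f$ is Steinberg at $p$ we have $\lambda_p(\kappa_0)^2=p^{k_0-2}$, and $\xi'(p)=1$ by hypothesis, so the Euler factor of Corollary \ref{CoroImp} vanishes at $\kappa_0$ to exact first order:

```latex
1-\xi'(p)\lambda_p(\kappa)^{-2}p^{k_0-2}
 = 2\,\frac{\textup{d}\log\lambda_p(\kappa)}{\textup{d}\kappa}\Big\vert_{\kappa=\kappa_0}
   (\kappa-\kappa_0) + O\bigl({(\kappa-\kappa_0)}^2\bigr)
 = -\mathcal{L}(f)\,(\kappa-\kappa_0) + O\bigl({(\kappa-\kappa_0)}^2\bigr),
```

the last equality by Theorem \ref{L-inv}; this is how the $\mathcal{L}$-invariant enters the derivative of $L_p(\kappa,[k_0-1])$ at $\kappa_0$.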
\end{proof} \section{Relation with other symmetric square $p$-adic $L$-functions}\label{KimBel} As we have already said, the $p$-adic $L$-function of the previous section $L_p(\kappa,\kappa')$ has some poles on $\mathcal{C}_F$ coming from the {\it ``denominator''} $H_F(\kappa)$ of $l_F^r$. In this section we will see how we can modify it to obtain a holomorphic function, using a one-variable $p$-adic $L$-function for the symmetric square constructed by Kim and, more recently, Bella\"iche. The modification we will perform to $L_p(\kappa,\kappa')$ will also change the interpolation formula of Theorem \ref{T1OC}, replacing the automorphic period (the Petersson norm of $f$) with a motivic one. We shall explain at the end of the section why, in the ordinary case, this change is important for the Greenberg-Iwasawa-Coates-Schmidt Main Conjecture \cite[Conjecture 1.3.4]{Urb}.\\ We want to point out that all we will say in this section is conditional on Kim's thesis \cite{Kim}, which has not been published yet.\\ In the ordinary setting, the overconvergent projector and the ordinary projector coincide and it is known in many cases, thanks to Hida \cite[Theorem 0.1]{H5}, that $H_F(\kappa)$ interpolates, up to a $p$-adic unit, the special value \begin{align*} (k-1)! W'(F({\kappa})) E^*_3(\kappa)\frac{L(k,\mathrm{Sym^2}(F({\kappa})),\psi^{-1})}{\pi^{k+1}\Omega^+\Omega^-}, \end{align*} where $W'(F({\kappa}))$ is the root number of $F^{\circ}(\kappa)$ and $E_3^*(\kappa)=1$ if $F({\kappa})$ is primitive at $p$ and \begin{align*} \left( 1 - \frac{\psi(p)p^{k-1}}{\lambda_p(\kappa)^{2}} \right)\left( 1 - \frac{\psi(p)p^{k-2}}{\lambda_p(\kappa)^{2}} \right) \end{align*} otherwise (note that, up to a sign, it coincides with $S(F(\kappa))$).
Here $\Omega^+=\Omega^+(F(\kappa))$ and $\Omega^-=\Omega^-(F(\kappa))$ are two complex periods defined via the Eichler-Shimura isomorphism.\\ Kim in his thesis \cite{Kim} and, recently, Bella\"iche have generalized Hida's construction to obtain a one-variable $p$-adic $L$-function for the symmetric square. The aim of this section is to compare the $p$-adic $L$-function of Section \ref{padicL} with theirs.\\ Kim's idea is very beautiful and at the same time quite simple; we sketch it now. His construction relies on two key ingredients. The first one is the formula, due to Shimura, \begin{align}\label{PetSym} \left\langle f^{\circ}, f^{\circ} \right\rangle =(k-1)! \frac{L_N(k,\mathrm{Sym}^2(f),\psi^{-1})}{N^2 2^{2k}\pi^{k+1}}. \end{align} The second one is the sheaf over the eigencurve $\mathcal{C}$ of distribution-valued modular symbols, sometimes called {\it overconvergent modular symbols}, constructed by Stevens \cite{PolSt,BelCri}. It is a sheaf interpolating the sheaves $\mathrm{Sym}^{k-2}(\mathbb{Z}_p^2)$ appearing in the classical Eichler-Shimura isomorphism. When modular forms are seen as sections of this sheaf, the Petersson product is induced by the natural pairing on $\mathrm{Sym}^{k-2}(\mathbb{Z}_p^2)$. Kim's idea is to interpolate these pairings as $k$ varies over the weight space to construct a pairing on the space of locally analytic functions on $\mathbb{Z}_p$. This induces a (non-perfect) pairing $\left\langle \phantom{e}, \phantom{e} \right\rangle_{\kappa} $ on the sheaf of overconvergent modular symbols. For a family $F(\kappa)$ we can define two modular symbols $\Phi^{\pm}(F(\kappa))$. Kim defines \begin{align*} L_p^{KB}(\kappa)= \left\langle \Phi^{+}(F(\kappa)), \Phi^{-}(F(\kappa)) \right\rangle_{\kappa}.
\end{align*} This $p$-adic $L$-function satisfies the property that its zero locus contains the ramification points of the map $\mathcal{C}\rightarrow \mathcal{W}$ \cite[Theorem 1.3.3]{Kim}, and for every classical non-critical point $\kappa$ of weight $k$ we have the following interpolation formula \cite[Theorem 3.3.9]{Kim} \begin{align*} L_p^{KB}(\kappa)= E^*_3(\kappa)W'(F(\kappa))\frac{ (k-1)! L_N(k,\mathrm{Sym}^2(F(\kappa)),\psi^{-1})}{N^2 2^{2k}\pi^{k+1}\Omega^+\Omega^-}. \end{align*} The period $\Omega^+\Omega^-$ is the one predicted by Deligne's conjecture for the symmetric square motive and it is probably a better choice than the Petersson norm of $f$, for at least two reasons. The first one is that, as we have seen in (\ref{PetSym}), the Petersson norm of $f$ essentially coincides with $L(k,\mathrm{Sym}^2,\psi^{-1})$, and such a choice of period is not particularly enlightening when one is interested in Bloch-Kato style conjectures. \\ The second reason is related to the Main Conjecture. Under certain hypotheses, this conjecture is proven in \cite{Urb} for the $p$-adic $L$-function with motivic period. In fact, in \cite[\S 1.3.2]{Urb} the author is forced to make a change of periods from the $p$-adic $L$-function of \cite{CS,H6,DD} to obtain equality of $\mu$-invariants.\\ It seems reasonable to the author that in many cases, away from the zeros of the overconvergent projection, we could choose $H_F(\kappa)=L_p^{KB}(\kappa)$; in any case, we can define a function \begin{align*} \tilde{L}_p(\kappa,\kappa'):=L_p^{KB}(\kappa)L_p(\kappa,\kappa') \end{align*} which is locally holomorphic in $\kappa$ and at classical points interpolates, up to an explicit algebraic number which we do not write down, the special values $\frac{ \mathcal{L}(s,\mathrm{Sym}^2(F(\kappa)),\varepsilon^{-1}\omega^{s-1})}{ \pi^{s}\Omega^+\Omega^-}$.
\appendix \section{Functional equation and holomorphy}\label{FE} The aim of this appendix is to show that we can divide the two-variable $p$-adic $L$-function constructed in Section \ref{padicL} by suitable two-variable functions to obtain a holomorphic $p$-adic $L$-function interpolating the special values of the primitive $L$-function, as defined in Section \ref{primLfun}.\\ The method of proof follows closely the one used in \cite{DD} and \cite{H6}. We first construct another two-variable $p$-adic $L$-function, interpolating the other set of critical values. The construction of this two-variable $p$-adic $L$-function has its own interest: the missing Euler factors do not vanish, and if one could prove a formula for the derivative of this function, one would obtain a proof of Conjecture \ref{MainCoOC} without hypotheses on the conductor.\\ We will show that, after dividing by suitable functions, this $p$-adic $L$-function and the one of Section \ref{padicL} satisfy a functional equation. We shall conclude by showing that the poles of these two functions are distinct.\\ We start by recalling the Fourier expansion of some Eisenstein series from \cite[Proposition 3.3.10]{RosTh}: \begin{align*} E_{k-\frac{1}{2}}(z,0;\chi) = & L_{Dp}(2k-3,\chi^2) + \sum_{n=1}^{\infty} q^n L_{Dp}\left(k-1,\chi\sigma_n\right) \times \\ & \times \left(\sum_{\tiny{\begin{array}{c} t_1^2t_2^2 |n, \\ (t_1t_2,Dp)=1, \\ t_1>0, t_2>0 \end{array}}} \mu(t_1)\chi(t_1t_2^2)\sigma_n(t_1)t_2{(t_1t_2^2)}^{1-k}\right). \end{align*} We have for $0 \leq s \leq k/2$ \begin{align*} \delta_{k+\frac{1}{2}}^{s}E_{k+\frac{1}{2}}(z,0;\chi) = E_{k + 2s + \frac{1}{2}}(z,2s;\chi). \end{align*} We also recall the following well-known lemma. \begin{lemma}\label{zetames+} Let $\chi$ be an even primitive character modulo $Cp^r$, with $C$ and $p$ coprime and $r\geq 0$.
Then for any $b\geq 2$ coprime with $p$, there exists a measure $\zeta^+_{\chi,b}$ such that for every finite-order character $\varepsilon$ of $Z_D$ and any integer $m\geq 1$ we have \begin{align*} \int_{Z_D} \varepsilon(z)z_p^{m-1}\textup{d}\zeta^+_{\chi,b}(z)=(1-\varepsilon'\chi'(b)b^{m})(1-(\varepsilon\chi)_0(p)p^{m-1})\times \\ \times \frac{G((\varepsilon\chi)_p)}{p^{(1-m) c_p}}\frac{L_{D}(m,\chi^{-1}\varepsilon^{-1})}{\Omega(m)}, \end{align*} where $\chi'$ denotes the prime-to-$p$ part of $\chi$, $\chi_p$ the $p$-part of $\chi$, and $c_p$ the $p$-part of the conductor of $\chi\varepsilon$. If we let $a \in \{0,1\}$ be such that $\varepsilon\chi(-1)={(-1)}^a$, we have \begin{align*} {\Omega(m)}^{-1}= {\Omega(m,\varepsilon\chi)}^{-1} = i^a \pi^{1/2-m}\frac{\Gamma(\frac{m+a}{2})}{\Gamma(\frac{1-m+a}{2})}. \end{align*} \end{lemma} As before, we can associate to this measure a formal series \begin{align*} G^+(S,\xi,\chi,b) = \int_{Z_D} \xi(z) {(1+S)}^{z_p} \textup{d}\zeta^+_{\chi,b}(z). \end{align*} We shall denote by $L^+_p(\kappa,\xi,\chi,b)$ the image of $G^+(S,\xi,\chi,b)$ under the map $S\mapsto (\kappa \mapsto \kappa(u)-1)$.\\ We define an element of $\mathcal{A}(\mathcal{W})[[q]]$ \begin{align*} \mathcal{E}^+_{\kappa}(\chi) & = \sum_{n=1, (n,p)=1}^{\infty} L^+_p(\kappa[-2],\chi, \sigma_n,b) q^n \sum_{\tiny{\begin{array}{c} t_1^2t_2^2 |n, \\ (t_1t_2,Dp)=1, \\ t_1>0, t_2>0 \end{array}}}t_1^{2}t_2^{3} \mu(t_1)\chi(t_1t_2^2)\sigma_n(t_1) \kappa^{-1}({t_1t_2^2}). \end{align*} If $\kappa=[k]$, we then have \begin{align*} [k](\mathcal{E}^+_{\kappa}(\chi))=&\frac{G((\chi)_p)}{p^{(2-k) c_p}\Omega(k-1)}(1-\chi'(b)b^{k-1})E_{k -\frac{1}{2}}(z,0,\omega^k \chi^{-1})|\nu_k \\ \nu_k(n)=&\frac{(1-\omega^{\frac{p-1}{2}}(n)(\chi\omega^k)_0(p)p^{k-3})}{(1-\omega^{\frac{p-1}{2}}(n)(\chi^{-1}\omega^{-k})_0(p)p^{2-k})}, \end{align*} where the twist by $\nu_k$ is defined as in \cite[h5]{H6}.
We fix two even Dirichlet characters as in Section \ref{padicL}: $\xi$ is primitive modulo $Cp^{\delta}$ ($\delta=0,1$) and $\psi$ is defined modulo $Np$. Fix also a positive slope $\alpha$ and a positive integer $D$ which is a square and divisible by $C^2$, $4$ and $N$. Let us denote by $C_0$ the conductor of the prime-to-$p$ part of $\xi\psi^{-2}$ and let us write $D=4C_0^2D_0'$.\\ For $s=0,1,\ldots$ we now define distributions $\mu^+_{s}$ on $\mathbb{Z}^{\times}_p$ with values in ${\mathcal{N}^{r}(D,\mathcal{A}(\mathcal{W}))}^{\leq \alpha}$. For any $\varepsilon$ of conductor $p^n$ we set \begin{align*} \mu^+_{s}(\varepsilon) = \mathrm{Pr}^{\leq{\alpha}}U_p^{2n-1}\left(\theta(\psi^2\xi^{-1} \varepsilon^{-1}\omega^{-s})|\left[\frac{D}{4C_0^2}\right]\delta_{\kappa\left[-s-\frac{1}{2}\right] }^{\frac{s-\beta}{2}} \mathcal{E}^+_{\kappa[-s]}(\psi\xi^{-1} \varepsilon^{-1} \sigma_{-1})\right) \end{align*} with $\beta \in \{0,1\}$ such that $s\equiv \beta \bmod 2$. \begin{prop} The distributions $\mu^+_s$ define an $h$-admissible measure $\mu^+$ with values in ${\mathcal{N}^{r}(D,\mathcal{A}(\mathcal{W}(t_0) \times \mathcal{W}))}^{\leq \alpha}$ (for $t_0$ as before Proposition \ref{GlueDist}). \end{prop} We take the Mellin transform \begin{align*} \kappa' \mapsto \int_{1+p\mathbb{Z}_p} {\kappa'(u)}^{z}\textup{d}\mu^+(z) \end{align*} to obtain an element $\theta\ast E^+(\kappa,\kappa')$ of ${\mathcal{N}^{r}(D,\mathcal{A}(\mathcal{W}(t_0) \times \mathcal{W}(t)))}^{\leq \alpha}$.\\ Let $F$ be a family of finite slope eigenforms. We refer to Section \ref{padicL} for all the unexplained notation and terminology. We set \begin{align*} \Delta(\kappa,\kappa') = & \left(1-\psi'{\xi'}^{-1}(b)\frac{\kappa(u)}{b\kappa'(u)}\right).
\end{align*} We define a new $p$-adic $L$-function \begin{align*} L^+_p(\kappa,\kappa')= & D^{-1}{\Delta(\kappa,\kappa')}^{-1} l_{F}(T_{D/N} \theta \ast E^+(\kappa,\kappa')) \in \mathcal{K}(\mathcal{C}_F \times \mathcal{W}). \end{align*} Let us define \begin{align*} E^+_1(\kappa,\kappa')= & \lambda_p(\kappa)^{-2n}(1-(\xi^{-1}\varepsilon^{-1}\omega^{-s}\psi^{2})_0(p)\lambda_p(\kappa)^{-2} p^{2k-3-s}); \end{align*} when $F(\kappa)$ is primitive at $p$ we define $E_2^+(\kappa,\kappa')=1$, otherwise \begin{align*} E^+_2(\kappa,\kappa')= & (1-\xi^{-1}\varepsilon^{-1}\omega^{-s}\psi(p)p^{k-2-s}) (1-{(\xi\varepsilon\omega^{s})}_0(p)\lambda_p(\kappa)^{-2}p^{s}). \end{align*} Let $\beta \in \{0,1\}$ be such that $s \equiv \beta \bmod 2$; we set \begin{align*} C^+_{\kappa,\kappa'} = & (2k-3-s)! p^{n(3k-2s-5)}G(\psi \xi^{-1}\varepsilon^{-1}\omega^{-s})G(\psi^2 \xi^{-1}\varepsilon^{-1}\omega^{-s}) \times \\ & \times C_0^{2k-s-1} N^{-k/2} {D_0'}^{k-1-\frac{s+\beta+1}{2}} 2^{2s+5-5k+\frac{1}{2}}. \end{align*} \begin{theo}\label{T3} $L^+_p(\kappa,\kappa')$ is a function on $\mathcal{C}_{F} \times \mathcal{W}$, meromorphic in the first variable and of logarithmic growth $h=[2 \alpha]+2$ in the second variable. For all classical points $(\kappa, \kappa')$, we have the following interpolation formula \begin{align*} L^+_p(\kappa,\kappa') = C^+_{\kappa,\kappa'} \frac{E^+_1(\kappa,\kappa')E^+_2(\kappa,\kappa')\mathcal{L}(2k-2-s,\mathrm{Sym}^2(F(\kappa)),\xi\varepsilon\omega^s)}{\Omega(k-s-1)\pi^{2k-s}S(F(\kappa))W'(F(\kappa))\left\langle F^{\circ}(\kappa),F^{\circ}(\kappa)\right\rangle}. \end{align*} \end{theo} \begin{proof} The calculations are essentially the same as in Theorem \ref{T1OC}; the only real difference is the presence of the twist by $\nu_k$. We can deal with it as we did in \cite[Theorem 3.11.2]{RosTh}, so we shall only sketch the calculations.
We first remark the following: let $\chi$ be any character modulo $p^r$; then it is immediate to see the identity of $q$-expansions \begin{align*} U_{p^r}\left( \sum_n \chi(n)a_n q^n \sum_m a_m q^m \right) = \chi(-1)U_{p^r}\left( \sum_n a_n q^n \sum_m \chi(m)a_m q^m \right). \end{align*} We can write \begin{align*} \frac{1}{(1-\omega^{\frac{p-1}{2}}(n)(\chi)_0(p)p^{k-2})}= 1 + \omega^{\frac{p-1}{2}}(n)(\chi^{-1}\omega^{k})_0(p)p^{k-2} + \dots . \end{align*} We apply this to \begin{align*} \theta(\psi^2\xi^{-1} \varepsilon^{-1}\omega^{-s})|\left[\frac{D}{4C_0^2}\right] [k]\delta_{\kappa\left[-s-\frac{1}{2}\right] }^{\frac{s-\beta}{2}} \mathcal{E}^+_{\kappa[-s]}(\psi\xi^{-1} \varepsilon^{-1} \sigma_{-1}) \end{align*} and we see that we can move the twist $\nu_k$ to $\theta(\psi^2\xi^{-1} \varepsilon^{-1}\omega^{-s})|\left[\frac{D}{4C^2}\right]$, and we conclude by noticing that \begin{align*} \nu_k\left(\frac{D}{4C^2}n^2\right)=\frac{(1-(\psi\xi^{-1} \varepsilon^{-1} \sigma_{-1})_0(p)p^{k-s-2})}{(1-(\psi^{-1}\xi \varepsilon \sigma_{-1})_0(p)p^{s+1-k})} \end{align*} is independent of $n$.
We have \begin{align*} L^+_p(\kappa,\kappa') = & i^k C_{s-\beta,k-\beta} G(\eta_0^{-1})D^{-1}{(D/N)}^{k/2} {D_0'}^{-\frac{2\beta+1}{4}}(-i)^{\beta}{(C_0p^n)}^{-1/2} \frac{G(\psi\xi^{-1}\varepsilon^{-1}\omega^{-s})}{p^{n(1-k+s)} \Omega(k-s-1)} \times \\ & \times p^{-(2n-1)\frac{k}{2}} (1 -\lambda_p(\kappa)^2 p^{s-2k-2} \eta_0^{-1}(p)) \lambda_p(\kappa)^{1-2n}p^{(2n-1)(k-1)} \frac{\left\langle F(\kappa)^c, g \right\rangle}{\left\langle F(\kappa)^c | \tau, F(\kappa) \right\rangle}, \\ g = & 2^{\frac{\beta-s}{2}} \theta(\psi^{-2}\xi\varepsilon\omega^{s}) E^*_{k-\frac{2\beta+1}{2}}(\beta-s,\psi^{-1}\xi\varepsilon\omega^{s}\sigma_{-1})y^{\frac{\beta-s}{2}},\\ C_{s-\beta,k-\beta} = & {(2\pi)}^{\frac{s-2k+1+\beta}{2}} {(Dp^{2n})}^{\frac{2k -2s-1}{4}}\Gamma\left(\frac{2k-s-1-\beta}{2} \right). \end{align*} We recall the well-known duplication formula \begin{align*} \Gamma(z)\Gamma\left(z+\frac{1}{2} \right) = 2^{1-2z}\pi^{1/2}\Gamma(2z) \end{align*} which we apply for $z=\frac{2k-s-2}{2}$. Summing up, we obtain \begin{align*} L_p^+(\kappa,\kappa') = & (2k-3-s)! 2^{2s+5-5k+\frac{1}{2}} N^{-k/2}C_0^{2k-s-1}{D'_0}^{k-1-\frac{s+\beta+1}{2}} \times\\ & \times G(\psi\xi^{-1}\varepsilon^{-1}\omega^{-s})G(\psi^2\xi^{-1}\varepsilon^{-1}\omega^{-s}) p^{n(3k-2s-5)} \lambda_p(\kappa)^{-2n} \times\\ & \times \frac{(1 -\eta_0^{-1}(p)\lambda_p(\kappa)^2 p^{s-2k-2}) E^+_2(\kappa,\kappa') \mathcal{L}(2k-2-s,\mathrm{Sym}^2(F(\kappa)),\psi^2\xi\varepsilon\omega^{s})}{\pi^{2k-2-s}S(F(\kappa))W'(F(\kappa))\left\langle F^{\circ}(\kappa),F^{\circ}(\kappa)\right\rangle \Omega(k-s-1)}. \end{align*} \end{proof} To interpolate the primitive $L$-function, we have to divide $L_p(\kappa,\kappa')$ by some functions which interpolate the extra factors given in Section \ref{primLfun}. Let $F(\kappa)$ be as above and let us denote by $\left\{\lambda_n(\kappa) \right\}$ the corresponding system of Hecke eigenvalues.
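For the reader's convenience, we record the specialization of the duplication formula used in the proof above: taking $z=\frac{2k-s-2}{2}$ gives \begin{align*} \Gamma\left(\frac{2k-s-2}{2}\right)\Gamma\left(\frac{2k-s-1}{2}\right) = 2^{3+s-2k}\,\pi^{1/2}\,\Gamma(2k-s-2). \end{align*}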
For any Dirichlet character $\chi$ of prime-to-$p$ conductor, let us denote by $F_{\chi}(\kappa)$ the primitive family of eigenforms associated to the system of Hecke eigenvalues $\left\{\lambda_n(\kappa)\chi(n) \right\}$. Let $q$ be a prime number and $f$ a classical modular form; we say that $f$ is minimal at $q$ if the local representation $\pi(f)_q$ has minimal conductor among its twists. Let $\chi$ be a Dirichlet character such that $F_{\chi}(\kappa)$ is minimal everywhere for every non-critical point $\kappa$. As the Hecke algebra $\mathbb{T}^r(N,\mathcal{K}(\mathcal{U}))$ is generated by a finite number of Hecke operators, if $K$ is big enough to contain the values of $\chi$, then the Hecke eigenvalues of $F(\kappa)$ and $F_{\chi}(\kappa)$ all belong to $\mathcal{A}(\mathcal{U})$. We shall denote by $\lambda^{\circ}_q(\kappa)$ the Hecke eigenvalue corresponding to the family which is minimal at $q$, and by $\alpha_q^{\circ}(\kappa)$ and $\beta_q^{\circ}(\kappa)$ the two roots of the corresponding Hecke polynomial; enlarging $\mathcal{A}(\mathcal{U})$ if necessary, we can suppose that both of them belong to $\mathcal{A}(\mathcal{U})$.\\ For each prime $q$, let us write $l_q=\frac{\log_p(q)}{\log_p(u)}$.
Recall the partition of the primes dividing the level of $F$ into {\it i), \ldots, iv)} given in Section \ref{primLfun}; we define \begin{align*} E_{q}(\kappa,\kappa') = & {\left(1-\xi^{-1}_0(q)q^{-1}\alpha^{\circ}_q(\kappa)^2 \kappa'(u^{-l_q})\right)}^{-1} \times \\ & \times {\left(1-(\psi\xi^{-1})_0(q)q^{-2}\frac{\kappa(u^{l_q})}{\kappa'(u^{l_q})}\right)}^{-1} {\left(1-\xi_0^{-1}(q)q^{-1}\beta^{\circ}_q(\kappa)^{2}{\kappa'(u^{-l_q})}\right)}^{-1} \;\: \left(\mbox{if } q \mbox{ in case } i\right),\\ E_{q}(\kappa,\kappa') = & {\left(1-(\psi\xi^{-1})_0(q)q^{-2}\frac{\kappa(u^{l_q})}{\kappa'(u^{l_q})}\right)}^{-1} {\left(1-(\psi^2\xi^{-1})_0(q)q^{-1}\lambda^{\circ}_q(\kappa)^{-2}{\kappa'(u^{l_q})}\right)}^{-1} \;\: \left(\mbox{if } q \mbox{ in case } ii\right),\\ E_{q}(\kappa,\kappa') = & \prod_{j \mbox{ s.t. } \pi_q \cong \pi_q \otimes \lambda_j} {\left(1-(\psi\lambda_j\xi^{-1})_0(q)q^{-2}\frac{\kappa(u^{l_q})}{\kappa'(u^{l_q})}\right)}^{-1} \;\: \left(\mbox{if } q \mbox{ in case } iv\right) \end{align*} and we set \begin{align*} A(\kappa,\kappa')=& {\left(1-\psi^{-2}\xi^{2}(2)2^{2} \frac{\kappa'(u^{2 l_2})}{\kappa(u^{2 l_2})}\right)}\prod_{q}{E_{q}(\kappa,\kappa')}^{-1}.
\end{align*} We also define \begin{align*} E^+_{q}(\kappa,\kappa') = & {\left(1-(\psi^{-2}\xi)_0(q)q^{2}\alpha^{\circ}_q(\kappa)^2\frac{\kappa'(u^{l_q})}{\kappa(u^{2l_q})}\right)}^{-1}\times \\ & \times {\left(1-(\psi^{-1}\xi)_0(q)q \frac{\kappa'(u^{l_q})}{\kappa(u^{l_q})}\right)}^{-1} {\left(1-(\psi^{-2}\xi)_0(q)q^{2}\beta^{\circ}_q(\kappa)^{2}\frac{\kappa'(u^{l_q})}{\kappa(u^{2l_q})}\right)}^{-1}\;\: \left(\mbox{if } q \mbox{ in case } i\right),\\ E^+_{q}(\kappa,\kappa') = & {\left(1-(\psi^{-1}\xi)_0(q)q\frac{\kappa'(u^{l_q})}{\kappa(u^{l_q})}\right)}^{-1} {\left(1-(\psi^{-1}\xi)_0(q)q^{-1}\lambda^{\circ}_q(\kappa)^{-2}\kappa(u^{l_q}){\kappa'(u^{l_q})}\right)}^{-1}\;\: \left(\mbox{if } q \mbox{ in case } ii\right),\\ E^+_{q}(\kappa,\kappa') = & \prod_{j \mbox{ s.t. } \pi_q \cong \pi_q \otimes \lambda_j} {\left(1-(\psi^{-1}\lambda_j\xi)_0(q)q\frac{\kappa'(u^{l_q})}{\kappa(u^{l_q})}\right)}^{-1} \;\: \left(\mbox{if } q \mbox{ in case } iv\right) \end{align*} and we set \begin{align*} B(\kappa,\kappa')=& {\left(1-\psi^2\xi^{-2}(2)2^{-4} \frac{\kappa(u^{2 l_2})}{\kappa'(u^{2 l_2})}\right)}\prod_{ q }{E^+_{q}(\kappa,\kappa')}^{-1} .
\end{align*} \begin{prop}\label{padicFE} We have the following equality of meromorphic functions on $\mathcal{C}_F \times \mathcal{W}$: \begin{align*} L_p(\kappa,\kappa'){A(\kappa,\kappa')}^{-1}= & \mathbf{\varepsilon}(\kappa,\kappa') L^+_p(\kappa,\kappa'){B(\kappa,\kappa')}^{-1}, \end{align*} where $\mathbf{\varepsilon}(\kappa,\kappa')$ is the unique Iwasawa function such that \begin{align*} \mathbf{\varepsilon}(u^k,u^s)= \frac{G({\chi'}^{-1}){G({\xi'}^{-1}\psi')}^2 {G({\xi'}^{-1}{\psi'}^2)}^2}{G(\xi')}{D'_0}^{\frac{s+1+\beta}{2}+1 - k}{D'}^{\frac{s-\beta}{2}} C'(\pi\otimes\xi)^{s-k+1} 2^{4k-4s-6} , \end{align*} for $C'(\pi\otimes\xi)$ the conductor outside $p$ of $\hat{\pi}\otimes \psi \xi^{-1}$. \end{prop} \begin{proof} The explicit epsilon factor of the functional equation stated in Section \ref{primLfun} can be found in \cite[Theorem 1.3.2]{DD}.\\ Recall from {\it loc. cit.} that \begin{align*} \frac{L_{\infty}(s+1)}{L_{\infty}(2k-2-s)} &= \frac{s!{(2 \pi)}^{-s-1}\pi^{-\frac{s+1}{2}} \Gamma\left(\frac{s-k+2+a}{2}\right)}{(2k-3-s)!{(2 \pi)}^{-2k+2+s}\pi^{-\frac{2k-s-2}{2}} \Gamma\left(\frac{k-1-s+a}{2}\right)}. \end{align*} We have, on all classical points, \begin{align*} \frac{L_p(\kappa,\kappa'){B(\kappa,\kappa')}}{L^+_p(\kappa,\kappa'){A(\kappa,\kappa')}}& = \frac{\pi^{2k-s-2}s!
G(\xi\varepsilon\omega^s) C(\xi\varepsilon\omega^{-s})^{s} {D'}^{\frac{s-\beta}{2}} 2^{-2s -k -\frac{1}{2}} 2^{-(2s+5-5k+\frac{1}{2})} p^{ns}\Omega(k-s-1)} {\pi^{s+1}(2k-3-s)!G(\psi\xi^{-1}\varepsilon^{-1}\omega^{-s})G(\psi^2\xi^{-1}\varepsilon^{-1}\omega^{-s}) C_0^{2k-s-1}{D'_0}^{k-\beta-\frac{2s-3}{4}} p^{n(2k-s-3)} } \times \\ & \times \frac{L(s+1,\mathrm{Sym}^2(f),\xi^{-1}\varepsilon^{-1}\omega^{-s})}{L(2k-2-s,\mathrm{Sym}^2(f),\psi^2\xi\varepsilon\omega^{s})} \\ & = \frac{p^{n(3(s+1)+3k+2)}G(\xi\varepsilon\omega^s) 2^{{4k-4s-6}} {C(\xi\varepsilon\omega^{-s})}^{s} {D'}^{\frac{s-\beta}{2}}} {G(\psi\xi^{-1}\varepsilon^{-1}\omega^{-s})G(\psi^2\xi^{-1}\varepsilon^{-1}\omega^{-s})C_0^{2k-s-1}{D'_0}^{k-\beta-\frac{2s-3}{4}}} \mathbf{\varepsilon}(s-k+2,\hat{\pi}(f),\xi^{-1}\varepsilon^{-1}\omega^{-s}\psi). \end{align*} To conclude we use \cite[Lemma 1.4]{Sc} and the relations \begin{align*} p^n = G(\tilde{\psi})G(\tilde{\psi}^{-1}), \:\:\:\: G(\psi_1\psi_2)=\psi_1(C_2)\psi_2(C_1)G(\psi_1)G(\psi_2), \end{align*} for $\tilde{\psi}$ a character of conductor $p^n$ and $\psi_i$ a character of conductor $C_i$, with $(C_1,C_2)=1$. \end{proof} \begin{prop}\label{disjpole} The elements $ A(\kappa,\kappa')$ and $ B(\kappa,\kappa')$ are mutually coprime in $\mathcal{A}(\mathcal{U}\times \mathcal{W}) $. \end{prop} \begin{proof} We follow closely the proof of \cite[\S 3.1]{DD}.\\ Throughout the proof of this proposition we shall identify $\mathcal{A}(\mathcal{U}\times \mathcal{W})$ with $\mathcal{A}(\mathcal{U})[[T]]$ and we will see $\mathcal{A}(\mathcal{U})$ as an $\mathcal{O}[[S]]$-algebra via $S \mapsto (\kappa \mapsto \kappa(u)-1)$. \\ Consider one of the factors of $A(\kappa,\kappa')$ in which neither $\lambda_q(\kappa)$, nor $\alpha^{\circ}_q(\kappa)$, nor $\beta^{\circ}_q(\kappa)$ appears.
Then such a factor belongs to $\mathcal{O}[[S,T]]$ and a prime factor of it is of the form $(1+T)-z(1+S)$, with $z \in \mu_{p^{\infty}}$.\\ A prime divisor of the excluded factors of $A(\kappa,\kappa')$ is $(1+T) - j(\kappa)$, with $j(\kappa)$ in $\mathcal{A}(\mathcal{U})$.\\ Similarly, a prime factor of $B(\kappa,\kappa')$ is $(1+T)-z'(1+S)$, with $z' \in u^{-1}\mu_{p^{\infty}}$, or $(1+T) - j'(\kappa)$. \\ If a prime element divides both, we must have $z(1+S) = j'(\kappa)$ or $z'(1+S) = j(\kappa) $. We deal with the first case. Suppose that this prime element divides $ {\left(1-(\psi\xi^{-1})_0(q)q^{-2}\frac{\kappa(u^{l_q})}{\kappa'(u^{l_q})}\right)}$ and ${\left(1-(\psi^{-2}\xi)_0(q'){q'}^{2}\alpha^{\circ}_{q'}(\kappa)^2\frac{\kappa'(u^{l_{q'}})}{\kappa(u^{2l_{q'}})}\right)}$. Specializing at any classical point $(\kappa,\kappa')$ we obtain $q^{k-s-2}=\zeta {q'}^{s-2k +2}\alpha_{q'}^{\circ}(\kappa)^{2}$, for $\zeta$ a root of unity. Noticing that $|\alpha_{q'}^{\circ}(\kappa)^{2}|_{\mathbb{C}}={q'}^{k-1}$, we obtain $|q^{k-s-2}|_{\mathbb{C}}=|{q'}^{s-k+1}|_{\mathbb{C}}$, a contradiction. All the other cases are analogous. \end{proof} We can then state the main theorem of the appendix. We exclude the case where $\psi\xi\omega^{-1}$ is quadratic imaginary and $F(\kappa)$ has complex multiplication by the corresponding quadratic field, because this case has already been treated in \cite{H6}. Recall the ``denominator'' $H_F(\kappa)$ of $l^r_{F}$ defined at the end of Section \ref{nomeasures}.
\begin{theo} We have a two-variable $p$-adic $L$-function $H_F(\kappa)\Lambda_p(\kappa,\kappa')$ on $\mathcal{C}_{F} \times \mathcal{W}$, holomorphic in the first variable and of logarithmic growth $h=[2 \alpha]+2$ in the second variable, such that for all classical points $(\kappa, \kappa')$ we have the following interpolation formula \begin{align*} \Lambda_p(\kappa,\kappa') = C_{\kappa,\kappa'} E_1(\kappa,\kappa')E_2(\kappa,\kappa') \frac{L(s+1,\mathrm{Sym}^2(F(\kappa)),\xi^{-1}\varepsilon^{-1}\omega^{-s})}{\pi^{s+1}S(F(\kappa))W'(F(\kappa))\left\langle F^{\circ}(\kappa),F^{\circ}(\kappa)\right\rangle}. \end{align*} \end{theo} \begin{proof} We set \begin{align*} \Lambda_p(\kappa,\kappa') :=L_p(\kappa,\kappa') {A(\kappa,\kappa')}^{-1}. \end{align*} We begin by showing that $\Lambda_p(\kappa,\kappa')$ is holomorphic. We know from the definition of $L_p(\kappa,\kappa')$ and Proposition \ref{noCM} that all the poles of $L_p(\kappa,\kappa')$ are controlled by $H_F(\kappa)$, that is, $H_F(\kappa)L_p(\kappa,\kappa')$ is holomorphic in $\kappa$.\\ Moreover, ${A(\kappa,\kappa')}^{-1}$ introduces no extra poles; indeed, because of the functional equation of Proposition \ref{padicFE}, a zero of $A(\kappa,\kappa')$ would induce a pole of $H_F(\kappa)L^+_p(\kappa,\kappa') {B(\kappa,\kappa')}^{-1}$. But the only poles of the latter could be the zeros of ${B(\kappa,\kappa')}$. Proposition \ref{disjpole} tells us that the zeros of $A(\kappa,\kappa')$ and $B(\kappa,\kappa')$ are disjoint and we are done.\\ To conclude, we have to show the interpolation formula at the zeros of $A(\kappa,\kappa')$; for this, it is enough to combine Proposition \ref{padicFE} and Theorem \ref{T3}. \end{proof} \Addresses \end{document}
\begin{document} \title{ An explicit high-order single-stage single-step positivity-preserving finite difference WENO method for the compressible Euler equations } \titlerunning{Positivity-preserving Picard integral formulated finite difference WENO} \author{David C. Seal \and Qi Tang \and Zhengfu Xu \and \\ Andrew J. Christlieb} \institute{ David C. Seal \at Department of Mathematics \\ U.S. Naval Academy \\ 121 Blake Road \\ Annapolis, MD 21402 USA \\ Tel.: +1(410) 293-6784 \\ \email{[email protected]} \and Qi Tang \at Department of Mathematical Sciences \\ Rensselaer Polytechnic Institute \\ Troy, NY 12180, USA \and Zhengfu Xu \at Department of Mathematical Sciences \\ Michigan Technological University \\ Houghton, MI 49931, USA \\ \and Andrew J. Christlieb \at Department of Mathematics and Department of Electrical and Computer Engineering \\ Michigan State University \\ East Lansing, MI 48824, USA \\ } \date{Received: date / Accepted: date} \maketitle \begin{abstract} In this work we construct a high-order, single-stage, single-step positivity-preserving method for the compressible Euler equations. Space is discretized with the finite difference weighted essentially non-oscillatory (WENO) method. Time is discretized through a Lax-Wendroff procedure that is constructed from the Picard integral formulation (PIF) of the partial differential equation. The method can be viewed as a modified flux approach, where a linear combination of a low- and high-order flux defines the numerical flux used for a single-step update. The coefficients of the linear combination are constructed by solving a simple optimization problem at each time step. The high-order flux itself is constructed through the use of Taylor series and the Cauchy-Kowalewski procedure that incorporates higher-order terms. Numerical results in one and two dimensions are presented.
\keywords{ hyperbolic conservation laws \and Lax-Wendroff \and weighted essentially non-oscillatory \and positivity-preserving } \end{abstract} \section{Introduction} \label{sec:introduction} The objective of this work is to define a single-stage, single-step finite difference WENO method that is provably positivity-preserving for the compressible Euler equations. These equations describe the evolution of density $\rho$, momentum $\rho \vec{u}$ and energy $\mathcal{E}$ of an ideal gas through \begin{equation} \label{eqn:euler-eqns-1d} \left( \begin{array}{c} \rho \\ \rho \vec{u} \\ \mathcal{E} \end{array} \right)_{, t} + \nabla_{\bf x} \cdot \left( \begin{array}{c} \rho \vec{u} \\ \rho \vec{u} \otimes \vec{u} + p {\bf I}\\ ( \mathcal{E} + p ) \vec{u} \end{array} \right) = 0. \end{equation} The energy $\mathcal{E}$ is related to the primitive variables $\rho$, $\vec{u}$ and pressure $p$ by the equation of state that we take to be $\mathcal{E} = \frac{p}{ \gamma-1 } + \frac{1}{2}\rho \norm{\vec{u}}^2$. The ratio of specific heats is the constant $\gamma$. Numerical difficulties for solving this system include the following: \begin{itemize} \item Low (1st and 2nd) order methods generally suffer from an inordinate amount of numerical diffusion. However, they are oftentimes more robust, and in some cases they have provable convergence to the correct entropy solution. Historically, 2nd-order schemes \cite{LxW60,Sod78,Harten78,Roe81} have been called ``high-resolution'' methods when compared to their 1st-order counterparts \cite{VonRict50,CoIsRe52,Lax54,God57}. \item High-order methods \cite{HaEnOsCh87,ShuOsher87,LiuOsherChan94} provide greater accuracy and resolution for much less overall computational effort. However, they are oftentimes less robust, and do not necessarily have provable convergence to the correct entropy solution. \end{itemize} In this work, we define a high-order conservative finite difference method based upon the Picard integral formulation of the PDE.
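To make the equation of state concrete, here is a minimal sketch (in Python; the function name and the default $\gamma=1.4$ are our own illustrative choices, not part of the scheme) of recovering the pressure from the conserved variables, which is precisely the quantity whose positivity the method is designed to protect.

```python
def pressure(rho, mom, E, gamma=1.4):
    """Recover the pressure from the 1D conserved variables (rho, rho*u, E)
    of an ideal gas, inverting E = p/(gamma - 1) + 0.5*rho*u**2."""
    u = mom / rho                      # velocity from momentum
    return (gamma - 1.0) * (E - 0.5 * rho * u * u)

# Round trip on an arbitrary state (rho = 1, u = 0.75, p = 2.5):
rho, u, p = 1.0, 0.75, 2.5
E = p / (1.4 - 1.0) + 0.5 * rho * u * u
print(pressure(rho, rho * u, E))       # recovers p (up to rounding)
```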
We make a further modification to the fluxes and define a numerical scheme that has the following properties: \begin{itemize} \item High-order accuracy in space ($5^{th}$-order) and time ($3^{rd}$-order). Our method can be extended to arbitrary order in space or time. \item A robust scheme that stems from provable positivity preservation of the pressure and density. Numerical results indicate that high-order accuracy is retained with our positivity-preserving limiter turned on. \end{itemize} Our scheme is the first single-stage, single-step numerical method that simultaneously attains high-order accuracy, with \emph{provable} positivity preservation. When compared to other positivity-preserving schemes, our method has the following advantages: \begin{itemize} \item In order to retain positivity, we only solve one simple optimization problem per time step. Unlike positivity-``preserving'' methods that use Runge-Kutta discretizations \cite{XiQiXu14,ChLiTaXu14}, positivity of our solution is guaranteed during the entire simulation because we do not have internal stages where the solution can go negative. \item Compared to other positivity-preserving schemes \cite{ZhangShu11-survey,ZhangShu12}, the addition of the positivity-preserving limiter introduces none of the additional time step restrictions that are often required to retain positivity. \end{itemize} In addition, our method is amenable to adaptive mesh refinement (AMR) technology. At present, we aim to lay the necessary foundation that would be required to do so. An in-depth investigation of this property is the subject of a future study. \subsection{An overview of the proposed method} The Euler equations define a system of hyperbolic conservation laws.
In 1D, such an equation is given by \begin{equation} \label{eqn:1dcons} q_{,t} + f(q)_{\!,\,x} = 0, \end{equation} where $q(t,x):\mathbb{R}^+\times \mathbb{R} \to \mathbb{R}^m$ is the unknown vector of $m$ conserved quantities and $f:\mathbb{R}^m \to \mathbb{R}^m$ is a prescribed flux function. The conserved variables for the 1D Euler equations are $q = \left( \rho, \rho u^1, \mathcal{E} \right)^T$. A typical finite difference solver for \eqref{eqn:1dcons} discretizes space with a uniform grid of $m_x$ equidistant points in $\Omega = [a,b]$, \begin{equation} x_i = a + \Par{i-\frac{1}{2}}\Delta x, \qquad \Delta x = \frac{b-a}{m_x}, \qquad i\in\{1,\dots,m_x\}, \end{equation} and seeks a pointwise approximation $q_i^n \approx q(t^{n},x_i)$ at the discrete time levels $t^n$. In a conservative finite difference WENO method, the update of the unknowns is typically defined by \begin{equation} \label{eqn:1dpif} q^{n+1}_i = q^n_i - \frac{\Delta t}{\Delta x}\left( {F}^n_{i+\frac{1}{2}}-{F}^n_{i-\frac{1}{2}} \right), \end{equation} where the numerical flux ${F}^n_{i\pm\frac{1}{2}}$ is constructed from a linear combination of the WENO reconstruction procedure applied to stage values from a Runge-Kutta solver. In this work, we propose the following procedure: \begin{enumerate} \item Construct a high-order approximation to the \emph{time-averaged fluxes} \cite{HaEnOsCh87,SeGuCh14} \begin{equation} \label{eqn:picard1d-b} {F}^n_i := \frac{1}{\Delta t} \int_{t^n}^{t^{n+1}} f( q(t, x_i ) )\, dt \end{equation} at each grid point $x_i$. Here, we consider the Taylor discretization of Eqn. \eqref{eqn:picard1d-b} for conservative finite difference methods. \item Construct a {\bf high-order in space and time} numerical flux $\hat{F}^n_{i-\frac{1}{2}}$ based upon applying the WENO reconstruction procedure to the time-averaged fluxes \cite{SeGuCh14}.
\item Replace the flux constructed in Step 2 with \begin{align} \label{eqn:limited-flux} \tilde{F}^n_{i-\frac{1}{2}} := \theta^n_{i-\frac{1}{2}}(\hat{F}^n_{i-\frac{1}{2}} - \hat{f}^n_{i-\frac{1}{2}}) + \hat{f}^n_{i-\frac{1}{2}}, \end{align} where $\theta^n_{i-\frac{1}{2}} \in [0,1]$ is found by solving a single optimization problem, and $\hat{f}^n_{i-\frac{1}{2}}$ is a low-order flux that guarantees positivity of the solution. This procedure is described in \S\ref{sec:1d}. \item Insert the result of Step 3 into Eqn. \eqref{eqn:1dpif}, and update the solution. \end{enumerate} Steps 1 and 2 have already been proposed in \cite{SeGuCh14}, where high-order accuracy is obtained through a flux modification that incorporates the high-order temporal discretization. A review of this procedure is presented in \S\ref{sec:review}. Step 3 can be thought of as a further flux modification, where an automatic switch adjusts between the high-order, non positivity-preserving scheme and a low-order, positivity-preserving scheme. The original idea is attributed to Harten and Zwas \cite{HaZw72}, but has since been extended to high-order WENO schemes \cite{XiQiXu14}. The details of this procedure are presented in \S\ref{sec:1d} and \S\ref{sec:md}. \section{Background} The compressible Euler equations have been an object of study ever since the infancy of numerical methods \cite{VonRict50,CoIsRe52,Lax54,God57,LxW60}. In recent years, high-order methods have attracted considerable interest because of their ability to obtain higher accuracy on certain problems at a computational cost equivalent to that of a low-order method. Among many choices of high-order schemes are the classical essentially non-oscillatory (ENO) method \cite{HaEnOsCh87}, the extensions to finite difference (FD) and finite volume (FV) WENO methods \cite{LiuOsherChan94,JiangShu96,Shu09}, and the discontinuous Galerkin (DG) method \cite{colish89}.
These methods all seek to simultaneously obtain two properties: retain high-order accuracy in smooth regions, and capture shocks without introducing spurious oscillations near discontinuities of the solution. One added difficulty with high-order schemes is the necessity of defining and selecting a high-order time integrator. Runge-Kutta methods applied in a method of lines (MOL) approach are the most widely used discretization for high-order schemes. These methods all treat space and time as separate entities. Over the past decade, there has been a resurgence of interest in high-order single-stage, single-step methods for hyperbolic conservation laws, including the compressible Euler equations. These methods are typically based upon a Taylor temporal discretization that uses the Cauchy-Kowalewski procedure to exchange temporal for spatial derivatives. Lax and Wendroff performed this very procedure in 1960 \cite{LxW60}, and this technique has since been called the Lax-Wendroff procedure within the numerical analysis community. Methods for defining second- and higher-order single-step versions of Godunov's method were investigated in the 1980s \cite{Ro87,BeCoTr89,Me90}. The original high-order ENO method of Harten et al. \cite{HaEnOsCh87} used Taylor series for its temporal discretization, although most of the attention these methods have received is for their emphasis on the spatial discretization. In 2001, the preliminary definitions for the so-called Arbitrary DERivative (ADER) methods \cite{toro2001,ToroTitarev02,TitarevToro02-ader} were put in place. Additionally, various FD WENO methods with Lax-Wendroff time discretizations have been constructed and tested on the Euler equations \cite{QiuShu03,JiangShuZhang13,SeGuCh14}. Recent ADER methods have been defined by Balsara and collaborators for hydrodynamics and magnetohydrodynamics \cite{balsara09,BaDiMeDuHuXu13}, and have later been extended to an adaptive mesh refinement (AMR) setting \cite{balsara13}.
Other recent work in single-stage, single-step methods for the Euler equations includes Lax-Wendroff time stepping coupled with DG \cite{QiuDumbserShu05,DuMu06,GaDuHiMuCl11}, and high-order Lagrangian schemes \cite{LiuChengShu09}. The present work is an extension of the Taylor discretization of the Picard integral formulation that uses finite differences for its spatial discretization \cite{SeGuCh14}, which falls into this single-stage, single-step class of methods. Defining high-order numerical schemes that retain positivity of the solution for hydrodynamics (or magnetohydrodynamics) simulations is genuinely a non-trivial task. This has been an ongoing subject of study even for low-order and so-called ``high-resolution'' schemes \cite{EinfMuRoeSj91,EstVil95,EsVi96,PeShu96,TangXu99,Dubroca99,TangXu00,Gallice03}. All methods that are second or higher order share the same disadvantage: without care, they may violate the natural weak stability condition that the density and pressure remain positive, which is necessary to ensure physical meaningfulness of the solution and hyperbolicity of the mathematical problem. Among the early works on positivity, Perthame and Shu propose a general reconstruction approach to obtain a high-order positivity-preserving finite volume scheme from a low-order scheme \cite{PeShu96}. In addition, they prove that the explicit Lax-Friedrichs scheme is positivity-preserving with a CFL number up to $0.5$. Later on, a more general result extends the positivity-preserving property to CFL numbers up to $1$ for both explicit and implicit Lax-Friedrichs methods \cite{TangXu00}. With those building blocks, a positivity-preserving limiter is proposed for DG schemes \cite{ZhangShu10,ZhangShu11-euler,ZhangXiaShu12,KoEk13} and FD and FV WENO schemes \cite{ZhangShu11-survey,ZhangShu12} for the Euler equations. In \cite{hu2013}, a flux cut-off limiter is also applied to FD WENO schemes to retain positivity.
In addition to gas dynamics, plasma physics is another area where retaining positivity of numerical solutions is critical, and it has therefore seen recent attention in the literature \cite{RoSe11,GuChHi14}. For example, collision operators for Vlasov equations require a positive distribution function in order to avoid creating artificial singularities. Our method is based upon a parameterized flux limiter that can be dated back to at least 1972 with the work of Harten and Zwas \cite{HaZw72}. There, the authors propose a second-order shock-capturing \emph{self adjusting hybrid scheme} through a simple linear combination of low- and high-order fluxes that is identical to Eqn. \eqref{eqn:limited-flux}. The original idea is to combine a ``high-order'' flux with a first-order flux such that the result has better accuracy in smooth regions and produces a smooth profile around shock regions. A similar approach called \emph{flux-corrected transport} is proposed by Boris and Book \cite{Boris197338,Book1975248,Boris1976397,BoBoZa81}, where the purpose of limiting the high-order flux is to control overshoots and undershoots around shock regions. Sod performs an extensive review of these and other classical finite difference methods in his classic paper \cite{Sod78}. Xu and Lian recently extend this work to WENO methods that maintain the maximum-principle-preserving property for scalar hyperbolic conservation laws with the so-called ``parametrized flux limiter'' \cite{xu2013,liang2014parametrized}. Later on, these limiters are applied to FD WENO schemes on rectangular meshes \cite{XiQiXu13} and FV WENO schemes on triangular meshes \cite{ChLiTaXu14} for the Euler equations. These limiters are also applied to magnetohydrodynamics within a constrained transport framework \cite{tang14}.
The basic idea for all of these methods is the same: modify the high-order, non positivity-preserving numerical flux by taking a linear combination of a low- and high-order flux in order to retain positivity of the solution. The modification is carefully designed so that high-order accuracy is retained. The purpose of this work is to define a single-stage, single-step finite difference WENO method that is provably positivity-preserving for the compressible Euler equations. Of the various finite difference schemes constructed from the Picard integral formulation \cite{SeGuCh14}, we begin with the Taylor discretization, and then apply recently developed flux limiters \cite{XiQiXu13,ChLiTaXu14} in order to retain positivity of the solution. One advantage of the chosen limiter is that positivity is preserved without introducing additional time step restrictions; however, our primary contribution is that the present method is the first scheme that is simultaneously high-order, single-stage, single-step, and provably positivity-preserving. The outline of this paper is as follows. In \S\ref{sec:review}, we briefly review the high-order finite difference WENO method that is based upon the Picard integral formulation of the PDE with a Taylor temporal discretization \cite{SeGuCh14}. In \S\ref{sec:1d} and \S\ref{sec:md}, we present the positivity-preserving limiter for PIF-WENO schemes applied to the compressible Euler system in single and multiple dimensions. Numerical examples of the positivity-preserving PIF-WENO scheme applied to problems with low density and low pressure are provided in \S\ref{sec:numerical-results}. Finally, conclusions and future work are given in \S\ref{sec:conclusions}.
\section{A single-stage single-step finite difference WENO method} \label{sec:review} The numerical method that is the subject of this work is based upon the Taylor discretization of the Picard integral formulation of Euler's equations, which is one of the many methods developed in \cite{SeGuCh14}. Our focus is on the finite difference WENO method based on a Taylor discretization of the time-averaged fluxes because it easily lends itself to the positivity-preserving limiters that are presented in \S\ref{sec:1d}. In this section, we review the minimal details presented in \cite{SeGuCh14} that are necessary to reproduce the present work. In addition, this section serves to set the notation that is used in upcoming sections. In two dimensions, a hyperbolic conservation law is defined by a flux function with two components, \begin{align} \label{eqn:conslaw2} q_{,t} + f(q)_{,x} + g(q)_{,y} = 0, \end{align} where $q(t,x,y):\R^+\times \R^2 \to \R^m$ is the vector of conserved variables, and $f,g :\R^m \to \R^m$ are the two components of the flux function. The Euler equations are an example of a set of equations from this class of problems. Formal integration of \eqref{eqn:conslaw2} in time over $t \in [t^n, t^{n+1}]$ defines the 2D \emph{Picard integral formulation} \cite{SeGuCh14} as \begin{subequations} \begin{equation}\label{eqn:picard2d-a} q^{n+1} = q^n - \Delta t \left( F^n( x, y ) \right)_{,x} - \Delta t \left( G^n( x, y) \right)_{,y}, \end{equation} where the \emph{time-averaged fluxes} are defined as \begin{equation}\label{eqn:picard2d-b} {F}^n( x, y) := \frac{1}{\Delta t} \int_{t^n}^{t^{n+1}} f(q(t,x,y))\, dt, \quad {G}^n( x, y) := \frac{1}{\Delta t} \int_{t^n}^{t^{n+1}} g(q(t,x,y))\, dt.
\end{equation} \end{subequations} The basic idea of the Picard integral formulation of WENO (PIF-WENO) \cite{SeGuCh14} is to first approximate the time-averaged fluxes \eqref{eqn:picard1d-b} at each grid point using some temporal discretization, and then approximate spatial derivatives in \eqref{eqn:picard2d-a} by applying WENO reconstruction to the resulting time-averaged fluxes. In this work, we approximate Eqn. \eqref{eqn:picard2d-a} with the finite difference WENO method, and we use a third-order Taylor discretization for \eqref{eqn:picard2d-b}. We remark that the positivity-preserving limiter proposed in \S \ref{sec:1d} can be generally applied to any form of the Picard integral formulation, including Runge-Kutta time discretizations. Given a domain $\Omega = [a_x,b_x] \times [a_y, b_y]$, a finite difference approximation seeks pointwise approximations $q^n_{i,j} \approx q\left( t^n, x_i, y_j \right)$ to hold at each \begin{subequations} \begin{align} x_i &= a_x + \Par{i-\frac{1}{2}}\Delta x, \qquad &\Delta x = \frac{b_x-a_x}{m_x}, \qquad &i\in\{1,\dots,m_x\}, && \quad \\ y_j &= a_y + \Par{j-\frac{1}{2}}\Delta y, \qquad &\Delta y = \frac{b_y-a_y}{m_y}, \qquad &j\in\{1,\dots,m_y\},& \end{align} \end{subequations} for discrete values of $t = t^n$. The 2D PIF-WENO scheme \cite{SeGuCh14} solves Eqn. \eqref{eqn:conslaw2} with a conservative form \begin{align} \label{eqn:pif-2d} q^{n+1}_{i,j} = q^n_{i,j} - \lambda_x\left( \hat{F}^n_{i+\frac{1}{2},j}-\hat{F}^n_{i-\frac{1}{2},j} \right) - \lambda_y\left( \hat{G}^n_{i,j+\frac{1}{2}}-\hat{G}^n_{i,j-\frac{1}{2}} \right), \end{align} where $\lambda_x = \Delta t/\Delta x$, $\lambda_y = \Delta t/\Delta y$, and $\hat{F}^n_{i\pm \frac{1}{2}, j}$ and $\hat{G}^n_{i, j\pm \frac{1}{2}}$ are high-order fluxes obtained by applying the classical WENO reconstruction to the time-averaged fluxes in place of a typical ``frozen-in-time'' approximation to the fluxes.
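To illustrate the structure of the conservative update in Eqn. \eqref{eqn:pif-2d}, the following minimal Python sketch (hypothetical function and variable names, not the implementation used in this work) applies interface fluxes to one conserved component. On a periodic grid the flux differences telescope, so the update conserves the discrete total of $q$ regardless of how the fluxes were built.

```python
import numpy as np

def pif_update_2d(q, Fhat, Ghat, dt, dx, dy):
    """Conservative update of Eqn. (pif-2d) for one conserved component.

    q    : (mx, my) values at the grid points
    Fhat : (mx + 1, my) x-interface fluxes, Fhat[i, j] ~ F_{i-1/2, j}
    Ghat : (mx, my + 1) y-interface fluxes, Ghat[i, j] ~ G_{i, j-1/2}
    """
    lam_x, lam_y = dt / dx, dt / dy
    return (q
            - lam_x * (Fhat[1:, :] - Fhat[:-1, :])
            - lam_y * (Ghat[:, 1:] - Ghat[:, :-1]))
```

Any choice of interface fluxes, including the limited fluxes constructed below, plugs into this same update; this separation is what lets the positivity limiter act on the fluxes alone.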
This requires a total of two steps: construct a time-averaged flux, followed by performing a WENO reconstruction on the resulting modified fluxes. We first define numerical time-averaged fluxes at each grid point $(x_i, y_j)$ through Taylor expansions. After taking temporal derivatives of $f$ and $g$, we integrate the resulting Taylor polynomials over $[t^n,t^{n+1}]$ to yield \begin{subequations} \begin{align} \label{eqn:2D_system.TI-F} F_T^n(x,y) &:= f( q(t^n,x,y) ) + \frac{\Delta t}{2!} \frac{df}{dt} ( q(t^n,x,y) ) + \frac{\Delta t^2}{3!} \frac{d^2f}{dt^2}( q(t^n,x,y) ), \\ \label{eqn:2D_system.TI-G} G_T^n(x,y) &:= g( q(t^n,x,y) ) + \frac{\Delta t}{2!} \frac{dg}{dt} ( q(t^n,x,y) ) + \frac{\Delta t^2}{3!} \frac{d^2g}{dt^2}( q(t^n,x,y) ). \end{align} \end{subequations} The temporal derivatives that appear can be found via the Cauchy-Kowalewski procedure. For example, the first two time derivatives of the first component of the flux function are given by \begin{subequations} \begin{equation} \label{eqn:time-derivs-2df-a} \frac{df}{dt} = -\pd{f}{q} \cdot \left( f_{\!,\,x} + g_{\!,\,y} \right), \end{equation} and \begin{equation} \label{eqn:time-derivs-2df-b} \frac{d^2f}{dt^2} = \pdn{2}{f}{q} \cdot \bigl( f_{,x} + g_{,y}\,, f_{,x} + g_{,y} \bigr) + \pd{f}{q} \cdot \left( -f_{,x} - g_{,y} \right)_{,t}. \end{equation} \end{subequations} The last time derivative can be further simplified to \begin{align} \label{eqn:temporal-derivs-2dfpg} -\left( f_{,x} + g_{,y} \right)_{,t} = & \pdn{2}{f}{q} \cdot \left( q_{,x}, f_{,x} + g_{,y} \right) + \pd{f}{q} \cdot \left( f_{,xx} + g_{,xy} \right) + \\ \nonumber & \pdn{2}{g}{q} \cdot \left( q_{,y}, f_{,x} + g_{,y} \right) + \pd{g}{q} \cdot \left( f_{,xy} + g_{,yy} \right). \end{align} Temporal derivatives of $g$ have a similar structure, and can be found in \cite{SeGuCh14}.
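The Taylor time-averaged flux is easiest to see in a scalar 1D setting. The sketch below (hypothetical names; a stand-in for the Euler system, not the implementation of this work) builds the time-averaged flux for Burgers' equation, $f(q) = q^2/2$, truncated after the first Cauchy-Kowalewski correction $\frac{df}{dt} = -f'(q)\, f_{,x}$ for brevity, with periodic boundaries and a standard fourth-order centered stencil for the spatial derivative.

```python
import numpy as np

def taylor_time_averaged_flux_burgers(q, dt, dx):
    """Second-order truncation of the time-averaged flux
    F^n = (1/dt) int f(q) dt for 1D Burgers, f(q) = q**2 / 2."""
    def ddx(u):
        # 5-point, 4th-order centered first derivative (periodic).
        return (np.roll(u, 2) - 8.0 * np.roll(u, 1)
                + 8.0 * np.roll(u, -1) - np.roll(u, -2)) / (12.0 * dx)
    f = 0.5 * q**2
    # Cauchy-Kowalewski: df/dt = f'(q) q_t = -q * f_x for Burgers.
    dfdt = -q * ddx(f)
    return f + 0.5 * dt * dfdt
```

For a constant state the correction vanishes and the time-averaged flux coincides with the frozen-in-time flux, as expected; the full scheme carries this expansion one term further for third-order accuracy in time.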
We approximate each $\partial_{x}, \partial_{xx}$ and $\partial_{y}, \partial_{yy}$ in \eqref{eqn:time-derivs-2df-a} and \eqref{eqn:time-derivs-2df-b} by applying the 5-point finite difference formulae \begin{subequations} \label{eqn:u-deriv} \begin{align} \label{eqn:ux} u_{i,\, x} &:= \frac{1}{12 \Delta x} \left( u_{i-2} - 8 u_{i-1} + 8 u_{i+1} - u_{i+2} \right) = u_{\, ,\, x}( x_i ) + \mathcal{O}\left( \Delta x^4 \right) \\ \label{eqn:uxx} u_{i,\, xx} &:= \frac{1}{12 \Delta x^2} \left( - u_{i-2} + 16 u_{i-1} - 30 u_{i} + 16 u_{i+1} - u_{i+2} \right) = u_{\, ,\, xx}( x_i ) + \mathcal{O}\left( \Delta x^4 \right) \end{align} \end{subequations} in each direction. In order to retain a compact stencil, we compute the cross derivatives $\partial_{xy}$ with a second-order approximation \begin{equation} u_{ij,\, xy} := \frac{1}{4 \Delta x \Delta y } \left( u_{i+1,\, j+1} - u_{i-1,\, j+1} - u_{i+1,\, j-1} + u_{i-1,\, j-1} \right), \end{equation} which is sufficient to retain third-order accuracy in time. After defining these higher derivatives, we define numerical fluxes by $F^n_{i,j} := F^n_T( x_i, y_j )$ and $G^n_{i,j} := G^n_T( x_i, y_j )$. We then apply WENO reconstruction in a dimension-by-dimension fashion to each component of the flux to construct interface values $F^n_{i\pm \frac{1}{2}, j}$ and $G^n_{i, j\pm \frac{1}{2}}$. The complete description of this process can be found in \cite{SeGuCh14}. \begin{rmk} \label{rmk:pif} The Picard integral formulation sets up a discretization for the fluxes, and not the conserved variables. \end{rmk} The significance of this remark is that further flux modifications can be incorporated into the Picard integral formulation. Previous finite difference WENO methods with Lax-Wendroff type time discretizations (e.g.
\cite{QiuShu03,JiangShuZhang13}) rely on Taylor expansions of the \emph{conserved variables}, and not the fluxes; the Taylor discretization of the Picard integral formulation computes Taylor expansions of the fluxes, and not the conserved variables. In \cite{QiuShu03}, conservation of mass comes from the fact that higher derivatives of the conserved variables are computed with a central stencil. In our scheme, we directly discretize the fluxes, and are automatically mass conservative because we insert the result into the WENO reconstruction procedure. Because ours is an operation on the fluxes, we have the opportunity to consider further flux modifications. In this work, we further modify the fluxes to obtain a provably positivity-preserving method for the Euler equations, which we now describe. \section{The positivity-preserving method: the 1D case} \label{sec:1d} We begin with the 1D formulation of the proposed positivity-preserving scheme. Recall that the update for the vector of conserved variables is given by Eqn. \eqref{eqn:1dpif} for the 1D conservation law defined in \eqref{eqn:1dcons}. We consider a numerical flux $\hat{F}^n_{i-\frac{1}{2}}$ that is high-order accurate in time (and space), constructed from the 1D Taylor discretization of the Picard integral formulation (PIF) of FD-WENO \cite{SeGuCh14}, and we consider a low-order flux $\hat{f}^n_{i-\frac{1}{2}}$ that is constructed from the Lax-Friedrichs scheme (which is provably positivity-preserving \cite{PeShu96}). Both fluxes are constructed by looking at the solution $q^n$ at time level $t^n$. We propose modifying the high-order flux by \begin{equation} \label{eqn:flux} \tilde{F}^n_{i-\frac{1}{2}} := \theta^n_{i-\frac{1}{2}}(\hat{F}^n_{i-\frac{1}{2}} - \hat{f}^n_{i-\frac{1}{2}}) + \hat{f}^n_{i-\frac{1}{2}}, \end{equation} where a simple optimization problem is solved for the \emph{limiting parameter} $\theta^n_{i-\frac{1}{2}} \in [0,1]$ at each time step.
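The convex blend of Eqn. \eqref{eqn:flux} and its low-order building block can be sketched in a few lines (hypothetical names; the global Lax-Friedrichs flux shown here is one standard realization of the low-order positivity-preserving flux):

```python
import numpy as np

def lax_friedrichs_flux(qL, qR, fL, fR, alpha):
    """Low-order (global) Lax-Friedrichs interface flux; alpha is an
    upper bound on the fastest wave speed over the domain."""
    return 0.5 * (fL + fR) - 0.5 * alpha * (qR - qL)

def limited_flux(theta, F_high, f_low):
    """Eqn. (flux): theta = 0 recovers the positivity-preserving
    low-order flux, theta = 1 recovers the high-order PIF-WENO flux."""
    return theta * (F_high - f_low) + f_low
```

The blend is linear in $\theta$, which is what makes the optimization problem for the limiting parameter tractable in the construction that follows.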
We observe that if $\theta^n_{i-\frac{1}{2}} = 0$, the scheme reduces to the first-order Lax-Friedrichs scheme, which is positivity-preserving, and therefore it is always possible to find a value that retains positive density and pressure. If $\theta^n_{i-\frac{1}{2}} = 1$, the scheme reduces to the high-order scheme, but does not guarantee positivity of the numerical solution. In order to retain high-order accuracy, we would like to choose $\theta^n_{i-\frac{1}{2}}$ as close to $1$ as possible without violating positivity of the density and pressure. The positivity-preserving algorithm we outline in this section follows a two-step procedure: i) guarantee positivity of the density, and then ii) guarantee positivity of the pressure. The details of this procedure are spelled out in the following subsections. \subsection{Step 1: Maintain positivity of the density} This discussion focuses on the first component of the modified flux \begin{equation} \tilde{f}^{n,\rho}_{i+\frac{1}{2}} := \theta^n_{i+\frac{1}{2}}( \hat{F}^{n,\rho}_{i+\frac{1}{2}} - \hat{f}^{n,\rho}_{i+\frac{1}{2}}) + \hat{f}^{n,\rho}_{i+\frac{1}{2}}, \end{equation} where $\hat{f}^{n,\rho}$ is the first component of the low-order flux $\hat{f}^n$, and $\hat{F}^{n,\rho}$ is the first component of the high-order flux $\hat{F}^n$. In this step, we assume that the density is positive at the current time, ${\rho}^{n}_i > 0$. We further define the low-order update for the density as \begin{equation*} \hat{\rho}^{n+1}_i := {\rho}_i^n - \lambda \left( \hat{f}^{n,\rho}_{i+\frac{1}{2}} - \hat{f}^{n,\rho}_{i-\frac{1}{2}} \right), \end{equation*} and define a numerical lower bound for the high-order updated density $\rho^{n+1}$ as $\varepsilon^{n+1}_\rho := \min \left( \min_i \left(\hat{\rho}^{n+1}_i \right), \varepsilon_0 \right)$. The use of $\varepsilon_0 > 0$ guarantees finite wave speeds, because the sound speed $c := \sqrt{\gamma p / \rho}$ goes to infinity as $\rho \to 0$.
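The lower bound $\varepsilon^{n+1}_\rho$ can be computed directly from the low-order update, as in this minimal sketch (hypothetical names; a 1D component with precomputed low-order interface fluxes is assumed):

```python
import numpy as np

def density_lower_bound(rho, f_low_rho, dt, dx, eps0=1e-13):
    """Compute eps_rho^{n+1} = min(min_i rho_hat_i, eps0), where rho_hat
    is the provably positive low-order density update.  f_low_rho holds
    the m_x + 1 interface fluxes, f_low_rho[i] ~ fhat^{n,rho}_{i-1/2}."""
    lam = dt / dx
    rho_hat = rho - lam * (f_low_rho[1:] - f_low_rho[:-1])
    return min(rho_hat.min(), eps0), rho_hat
```

Since the low-order update keeps $\hat{\rho}^{n+1}_i > 0$ and $\varepsilon_0 > 0$, the returned bound is strictly positive, matching the observation above.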
In our simulations, we take $\varepsilon_0 = 10^{-13}$, which is consistent with recent high-order positivity-preserving work \cite{ZhangShu10}. Thanks to the positivity of the low-order flux \cite{PeShu96}, we observe that $\varepsilon^{n+1}_\rho > 0$. After the low- and high-order fluxes have been computed, the update for the density at a single grid point $x_i$ only depends on the two values $\theta^n_{i\pm \frac{1}{2}}$ through \begin{equation*} {\rho}_i^{n+1} \left(\theta^n_{i-\frac{1}{2}},\, \theta^n_{i+\frac{1}{2}} \right) = {\rho}_i^n - \lambda \left( \tilde{f}^{n,\rho}_{i+\frac{1}{2}} - \tilde{f}^{n,\rho}_{i-\frac{1}{2}} \right), \quad \lambda = \frac{\Delta t}{\Delta x}, \quad i \in \left\{ 1, 2, \dots, m_x \right\}. \end{equation*} The density, like each of the conserved variables, is a \emph{linear function} of the variable $\left( \theta^n_{i-\frac{1}{2}}, \theta^n_{i+\frac{1}{2}} \right) \in [0,1]^2$. To preserve the positivity of the density $\rho^{n+1}$, we want to guarantee $\rho^{n+1}_i \geq \varepsilon^{n+1}_\rho$. Therefore, we seek bounds $\Lambda^\rho_{\pm\frac{1}{2}, I_i}$ such that whenever \begin{equation*} \left(\theta^n_{i-\frac{1}{2}}, \theta^n_{i+\frac{1}{2}} \right) \in \left[0,\, \Lambda^\rho_{-\frac{1}{2}, I_i} \right] \times \left[0,\, \Lambda^\rho_{+\frac{1}{2}, I_i} \right] \subseteq [0, 1]^2, \end{equation*} we have \begin{align} \label{liu3} {\rho}_i^{n+1} \left(\theta^n_{i-\frac{1}{2}},\theta^n_{i+\frac{1}{2}} \right) = {\rho}_i^n - \lambda \left( \tilde{f}^{n,\rho}_{i+\frac{1}{2}} - \tilde{f}^{n,\rho}_{i-\frac{1}{2}} \right) \geq \varepsilon^{n+1}_\rho. \end{align} The purpose of defining such a set is that in Step 2 of \S\ref{subsec:step2-1d}, we will further limit the fluxes to maintain positivity of the pressure. We insert the definition of $\hat{\rho}^{n+1}_i$ into Eqn.
\eqref{liu3} to see \begin{equation} \label{inequ1} \hat{\rho}^{n+1}_i - \lambda\left[ \theta^n_{i+\frac{1}{2}} \left( \hat{F}^{n,\rho}_{i+\frac{1}{2}} - \hat{f}^{n,\rho}_{i+\frac{1}{2}} \right) - \theta^n_{i-\frac{1}{2}} \left( \hat{F}^{n,\rho}_{i-\frac{1}{2}} - \hat{f}^{n,\rho}_{i-\frac{1}{2}} \right) \right] \geq \varepsilon^{n+1}_\rho, \end{equation} which is equivalent to \begin{align} \label{inequ2} \theta^n_{i-\frac{1}{2}} \Delta{f}_{i-\frac{1}{2}} - \theta^n_{i+\frac{1}{2}} \Delta{f}_{i+\frac{1}{2}} \geq \varepsilon^{n+1}_\rho - \hat{\rho}^{n+1}_i, \end{align} where $\Delta f_{i-\frac{1}{2}} := \lambda ( \hat{F}^{n,\rho}_{i-\frac{1}{2}} - \hat{f}^{n,\rho}_{i-\frac{1}{2}})$ measures the deviation between the high- and low-order fluxes. Note that the right hand side satisfies $\varepsilon^{n+1}_\rho - \hat{\rho}^{n+1}_i \le 0$, and therefore at least one solution of Eqn. \eqref{inequ2} exists (namely $\theta^n_{i-\frac{1}{2}} = \theta^n_{i+\frac{1}{2}} = 0$). We determine the bounds $\Lambda^\rho_{\pm\frac{1}{2}, I_i}$ through a case-by-case discussion based on the signs of $\Delta f_{i-\frac{1}{2}}$ and $\Delta f_{i+\frac{1}{2}}$. This analysis has already been performed for single \cite{xu2013} and multidimensional \cite{liang2014parametrized} scalar problems. There are a total of four cases of Eqn. \eqref{inequ2} to consider: \begin{itemize} \item \underline{Case 1.} If $\Delta f_{i-\frac{1}{2}} \ge 0$ and $\Delta f_{i+\frac{1}{2}} \le 0$, then we set \begin{equation*} \left( \Lambda^\rho_{-\frac{1}{2}, I_i}, \Lambda^\rho_{+\frac{1}{2}, I_i} \right) := (1,1). \end{equation*} \item \underline{Case 2.} If $\Delta f_{i-\frac{1}{2}} \ge 0$ and $\Delta f_{i+\frac{1}{2}} > 0$, then we define \begin{equation*} \left( \Lambda^\rho_{-\frac{1}{2}, I_i}, \Lambda^\rho_{+\frac{1}{2}, I_i} \right) := \left(1,\min \left(1, \frac{\varepsilon^{n+1}_\rho - \hat{\rho}^{n+1}_i}{- \Delta f_{i+\frac{1}{2}}}\right)\right).
\end{equation*} \item \underline{Case 3.} If $\Delta f_{i-\frac{1}{2}} < 0$ and $\Delta f_{i+\frac{1}{2}} \le 0$, then we set \begin{equation*} \left( \Lambda^\rho_{-\frac{1}{2}, I_i}, \Lambda^\rho_{+\frac{1}{2}, I_i} \right) := \left(\min \left(1, \frac{\varepsilon^{n+1}_\rho - \hat{\rho}^{n+1}_i}{ \Delta f_{i-\frac{1}{2}}}\right),1 \right). \end{equation*} \item \underline{Case 4.} If $\Delta f_{i-\frac{1}{2}} < 0$ and $\Delta f_{i+\frac{1}{2}} >0$, there are two sub-cases to consider. \begin{itemize} \item \underline{Case 4a.} If the inequality $\eqref{inequ2}$ is satisfied with $(\theta^n_{i-\frac{1}{2}}, \theta^n_{i+\frac{1}{2}}) = (1, 1)$ then we set \begin{equation*} \left(\Lambda^\rho_{-\frac{1}{2}, I_i}, \Lambda^\rho_{+\frac{1}{2}, I_i} \right) := \left(1, 1 \right). \end{equation*} \item \underline{Case 4b.} Otherwise, we choose \begin{equation*} \left(\Lambda^\rho_{-\frac{1}{2}, I_i}, \Lambda^\rho_{+\frac{1}{2}, I_i} \right) := \left(\frac{\varepsilon^{n+1}_\rho - \hat{\rho}^{n+1}_i}{\Delta f_{i-\frac{1}{2}} - \Delta f_{i+\frac{1}{2}}},\frac{ \varepsilon^{n+1}_\rho - \hat{\rho}^{n+1}_i}{ \Delta f_{i-\frac{1}{2}} - \Delta f_{i+\frac{1}{2}}} \right). \end{equation*} \end{itemize} \end{itemize} After considering each of the above cases at each grid element $x_i$, we define the following set \begin{equation} S^\rho_i := \left[ 0, \Lambda^\rho_{-\frac{1}{2}, I_i} \right] \times \left[ 0, \Lambda^\rho_{+\frac{1}{2}, I_i} \right]. \end{equation} {The obtained set has the property that $\rho^{n+1}_i(\theta^n_{i-\frac{1}{2}},\theta^n_{i+\frac{1}{2}}) \geq \varepsilon_\rho^{n+1}$ for any $ (\theta^n_{i-\frac{1}{2}}, \theta^n_{i+\frac{1}{2}}) \in S^\rho_i$.} \subsection{Step 2: Maintain positivity of the pressure} \label{subsec:step2-1d} The second step focuses on the pressure $p^{n+1}_i$. We begin with the following Lemma, which has already been used in the past \cite{ZhangShu10,XiQiXu14}.
\begin{lem} \label{lem:concave-press} The pressure function satisfies \begin{equation*} p\left( q^n_i \left( \alpha \overrightarrow{\theta}^1 + (1-\alpha) \overrightarrow{\theta}^2 \right) \right) \geq \alpha p\left( q^n_i \left( \overrightarrow{\theta}^1 \right) \right) + (1- \alpha) p\left( q^n_i \left( \overrightarrow{\theta}^2 \right) \right) \end{equation*} for any $\alpha \in [0,1]$ and $\overrightarrow{\theta}^1, \overrightarrow{\theta}^2 \in S^\rho_i$. \end{lem} \begin{proof} Provided $\rho > 0$, the pressure function \begin{equation*} p(q) := (\gamma - 1) \left( {\mathcal{E}} -\frac{ \| \rho {\bf u }\|^2}{2 \rho} \right) \end{equation*} is concave with respect to the conserved variables $q = (\rho, \rho {\bf u}, \mathcal{E})$ \cite{TangXu99,ZhangShu10,XiQiXu14}. By definition of the limiting parameter, each of the conserved variables is a \emph{linear function} of $\overrightarrow{\theta}$, which means \begin{equation*} q^n_i \left( \alpha \overrightarrow{\theta}^1 + (1-\alpha) \overrightarrow{\theta}^2 \right) = \alpha q^n_i\left( \overrightarrow{\theta}^1\right) + (1-\alpha) q^n_i\left(\overrightarrow{\theta}^2 \right). \end{equation*} Together, and as a result of the construction in Step 1, these imply \begin{align*} p\left(q^n_i\left( \alpha \overrightarrow{\theta}^1 + (1-\alpha) \overrightarrow{\theta}^2 \right) \right) & = p\left(\alpha q^n_i\left(\overrightarrow{\theta}^1\right) + (1-\alpha) q^n_i\left(\overrightarrow{\theta}^2 \right)\right), \\ & \geq \alpha p\left( q^n_i\left( \overrightarrow{\theta}^1 \right) \right) + (1- \alpha) p\left( q^n_i\left( \overrightarrow{\theta}^2 \right) \right). \end{align*} \qed \end{proof} We define $p_i\left( \overrightarrow{\theta} \right) := p\left( q^n_i \left( \overrightarrow{\theta} \right) \right)$ for any $\overrightarrow{\theta} \in [0,1]^2$ in order to simplify the notation for the ensuing discussion.
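The concavity used in Lemma \ref{lem:concave-press} is easy to probe numerically; the sketch below (illustrative 1D states with $\gamma = 1.4$, not data from this work) checks that the pressure of a convex combination of admissible states dominates the convex combination of their pressures.

```python
import numpy as np

def pressure(q, gamma=1.4):
    """p(q) = (gamma - 1)(E - |rho u|^2 / (2 rho)) in 1D,
    with q = (rho, rho*u, E) and rho > 0."""
    rho, mom, E = q
    return (gamma - 1.0) * (E - 0.5 * mom**2 / rho)

# Two admissible states (positive density and pressure) and a convex mix.
q1 = np.array([1.0, 0.5, 2.0])
q2 = np.array([2.0, -1.0, 3.0])
alpha = 0.3
q_mix = alpha * q1 + (1.0 - alpha) * q2
# Concavity: p(alpha q1 + (1-alpha) q2) >= alpha p(q1) + (1-alpha) p(q2).
assert pressure(q_mix) >= alpha * pressure(q1) + (1.0 - alpha) * pressure(q2)
```

Because the conserved variables are linear in $\overrightarrow{\theta}$, this same inequality transfers directly to $p_i(\overrightarrow{\theta})$, which is what makes the admissible set $S^p_i$ convex.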
If we use $\hat{p}^{n+1}$ to denote the low-order pressure obtained from the flux $\hat{f}^n$, we can similarly define a numerical lower bound for the pressure as $\varepsilon_p^{n+1} := \min\left(\min_i\left(\hat{p}^{n+1}_i\right),\varepsilon_0\right)$. Next, we consider the following subset \begin{equation} S^p_i := \left\{ (\theta^n_{i-\frac{1}{2}}, \theta^n_{i+\frac{1}{2}}) \in S^\rho_i : p_i\left( \theta^n_{i-\frac{1}{2}},\theta^n_{i+\frac{1}{2}} \right) \ge \varepsilon_p^{n+1} \right\} \subseteq S^\rho_i, \end{equation} and we observe that $S^p_i$ is convex, thanks to the result of Lemma \ref{lem:concave-press}. We do not attempt to find the entire boundary of $S^p_i$ because that would be computationally intractable. Instead, we define a single rectangle $R^{\rho,p}_i$ inside of $S^p_i$ that defines bounds on the limiting parameters. To do this, we consider finitely many points on the boundary of $S^p_i$. To begin, consider the four vertices of $S_i^\rho$ denoted by $ A^{k_1,k_2} := (k_1 \Lambda^\rho_{-\frac{1}{2},I_i}, k_2 \Lambda^\rho_{+\frac{1}{2},I_i}), $ with $k_1$, $k_2$ being 0 or 1. For each $(k_1, k_2)$, we define $B^{{k_1,k_2}}$ based on two cases: \begin{itemize} \item \underline{Case 1.} If $p_i(A^{k_1,k_2}) \geq \varepsilon_p^{n+1}$, we put $B^{{k_1,k_2}} := A^{k_1,k_2}$. The origin always falls into this case. \item \underline{Case 2.} Otherwise, we solve the quadratic equation $p_i(rA^{k_1,k_2})=\varepsilon_p^{n+1}$ for the unknown variable $r \in [0,1]$, and define $B^{{k_1,k_2}} := r A^{k_1,k_2}$. \end{itemize} After checking each vertex of $S^\rho_i$, we define \begin{align} R_i^{\rho,p} := \left[ 0, \Lambda_{-\frac{1}{2},I_i} \right] \times \left[ 0, \Lambda_{+\frac{1}{2},I_i} \right] \subseteq S^p_i \subseteq S^\rho_i, \end{align} where \begin{align} \Lambda_{-\frac{1}{2},I_i} := \min_{\substack{k_2 = 0, 1 }}\left( B^{1,k_2} \right), \quad \Lambda_{+\frac{1}{2},I_i} := \min_{\substack{k_1 = 0, 1 }}\left( B^{k_1,1} \right).
\end{align} After performing this two-step process at each grid cell $x_i$, the end result of this construction is the following theorem. \begin{theorem} The numerical flux in Eqn. \eqref{eqn:flux} preserves positivity of the solution for any \begin{equation} \label{eqn:theta-admissable-interval} \theta^n_{i-\frac{1}{2}} \in \left[0,\, \min\left( \Lambda_{+\frac{1}{2},I_{i-1}},\Lambda_{-\frac{1}{2}, I_{i}} \right) \right]. \end{equation} \end{theorem} Although we could in principle choose any value in the interval defined in Eqn. \eqref{eqn:theta-admissable-interval} (e.g. $\theta^n_{i-\frac{1}{2}}=0$), in order to retain high-order accuracy, we choose the largest possible value that we can prove retains positivity. That is, we define \begin{align} \label{eqn:theta} \theta^n_{i-\frac{1}{2}} := \min\left( \Lambda_{+\frac{1}{2},I_{i-1}},\Lambda_{-\frac{1}{2}, I_{i}} \right) \end{align} at each cell interface. This finishes the discussion of the 1D scheme. We reiterate that this entire process relies on flux modifications, which the Picard integral formulation was designed to accept, as pointed out in Rmk. \ref{rmk:pif}. \begin{rmk} The positivity of the solution is guaranteed for the \underline{entire simulation}. \end{rmk} One consequence of being a single-stage, single-step method is that we do not have stages where the density or pressure can become negative, whereas multistage Runge-Kutta methods typically introduce either additional computational cost or artificial sound speeds in order to retain positivity. For example, in \cite{ZhangShu10,ZhangShu11-survey,ZhangShu12} the limiter is applied after each stage of the Runge-Kutta method. This introduces additional computational complexity (i.e.\ there are multiple applications of the limiter per time step) as well as further constraints on the time step selection because the limiter relies on positivity of the forward Euler method.
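The final selection in Eqn. \eqref{eqn:theta} couples the admissible boxes of the two cells sharing each interface. A minimal sketch (hypothetical names; periodic indexing of the left neighbor for brevity):

```python
import numpy as np

def interface_theta(lam_minus, lam_plus):
    """Eqn. (theta): theta_{i-1/2} = min(Lam_{+1/2, I_{i-1}}, Lam_{-1/2, I_i}),
    the largest limiting parameter consistent with both neighboring cells.
    lam_minus[i] and lam_plus[i] are the per-cell bounds for cell i;
    np.roll fetches the left neighbor's upper bound periodically."""
    return np.minimum(np.roll(lam_plus, 1), lam_minus)
```

When no cell requires limiting, every bound equals one and the pure high-order flux is recovered everywhere; a single restrictive cell only affects its two interfaces.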
The modifications made in \cite{XiQiXu14,ChLiTaXu14} are part of an effort to decrease the computational complexity by applying the limiter only once per time step. However, this comes at the expense of potentially introducing negative pressure and density. In order to compensate for this, the authors of \cite{XiQiXu14,ChLiTaXu14} indicate that they artificially define the sound speed as $c=\sqrt{\Gamma |p|/|\rho|}$ for each stage of the Runge-Kutta method. This is necessary to implement the characteristic decomposition required for the high-order WENO reconstruction, and although it does not affect the refinement study in \cite{XiQiXu14,ChLiTaXu14}, this treatment may lead to a potential numerical instability in some extreme cases. A similar issue can be found for the ideal MHD equations \cite{tang14}.
\section{The positivity-preserving method: the 2D case}
\label{sec:md}
In this section, we apply the positivity-preserving limiter to the two-dimensional case. Extensions to the general multi-D case follow from what is provided here. Recall that our single-stage, single-step update is given by Eqn. \eqref{eqn:pif-2d}. Similar to the 1D case, we use $\hat{f}^n_{i-1/2,j}$ and $\hat{g}^n_{i,j-1/2}$ to denote the low-order positivity-preserving fluxes, and our numerical method uses modified fluxes through
\begin{subequations}
\label{eqn:2dfluxes}
\begin{align}
\tilde{F}^n_{i-1/2,j} & := \theta^n_{i-1/2,j}(\hat{F}^n_{i-1/2,j} - \hat{f}^n_{i-1/2,j}) + \hat{f}^n_{i-1/2,j}, \\
\tilde{G}^n_{i,j-1/2} & := \theta^n_{i,j-1/2}(\hat{G}^n_{i,j-1/2} - \hat{g}^n_{i,j-1/2}) + \hat{g}^n_{i,j-1/2}.
\end{align}
\end{subequations}
Identical to the one-dimensional case, the positivity-preserving limiting procedure consists of two steps.
If we again use $\hat{\rho}^{n+1}$ and $\hat{p}^{n+1}$ to denote the low-order density and pressure solved by the fluxes $\hat{f}^n$ and $\hat{g}^n$, we can similarly define the 2D numerical lower bounds for density and pressure as $\varepsilon_\rho^{n+1} := \min\left( \min_{i,j}\left( \hat{\rho}^{n+1}_{i,j}\right), \varepsilon_0 \right)$ and $\varepsilon_p^{n+1} := \min\left( \min_{i,j}\left( \hat{p}^{n+1}_{i,j}\right), \varepsilon_0 \right)$.
\subsection{Step 1: Maintain positivity of the density}
Our first step is to find local bounds $\Lambda^\rho_{L,I_{i,j}}$, $\Lambda^\rho_{R,I_{i,j}}$, $\Lambda^\rho_{U,I_{i,j}}$ and $\Lambda^\rho_{D,I_{i,j}}$, such that for any $(\theta^n_{i-1/2,j},\theta^n_{i+1/2,j},\theta^n_{i,j-1/2},\theta^n_{i,j+1/2}) \in S_{i,j}^\rho$, we have
\begin{equation}
\label{inequ3}
\rho^{n+1}_{i,j} = \rho^n_{i,j} - \lambda_x \left( \tilde{f}^{n,\rho}_{i+1/2,j}-\tilde{f}^{n,\rho}_{i-1/2,j} \right) - \lambda_y \left( \tilde{g}^{n,\rho}_{i,j+1/2}-\tilde{g}^{n,\rho}_{i,j-1/2} \right) \ge \varepsilon^{n+1}_\rho,
\end{equation}
where
\begin{align}
S_{i,j}^\rho := \left[0, \Lambda^\rho_{L,I_{i,j}}\right] \times \left[0, \Lambda^\rho_{R,I_{i,j}}\right] \times \left[0, \Lambda^\rho_{D,I_{i,j}}\right] \times \left[0, \Lambda^\rho_{U,I_{i,j}}\right].
\end{align}
Again, we have used the notation $g^\rho$ to refer to the first component of the flux function $g$. We define the low-order update as
\begin{equation*}
\hat{\rho}^{n+1}_{i,j} := {\rho}_{i,j}^n - \lambda_x \left(\hat{f}^{n,\rho}_{i+1/2,j}-\hat{f}^{n,\rho}_{i-1/2,j} \right) - \lambda_y \left(\hat{g}^{n,\rho}_{i,j+1/2}-\hat{g}^{n,\rho}_{i,j-1/2} \right),
\end{equation*}
and observe that it satisfies $\hat{\rho}^{n+1}_{i,j} \geq \varepsilon^{n+1}_\rho > 0$ for all $i,j$, provided the density is positive at time $t^n$. Similar to Eqn.
\eqref{inequ2}, we rewrite Eqn. \eqref{inequ3} as
\begin{equation}
\label{inequ4}
\theta^n_{i-1/2,j} \Delta f_{i-1/2,j} - \theta^n_{i+1/2,j} \Delta f_{i+1/2,j} + \theta^n_{i,j-1/2} \Delta g_{i,j-1/2} - \theta^n_{i,j+1/2} \Delta g_{i,j+1/2} \ge \varepsilon_\rho^{n+1} - \hat{\rho}^{n+1}_{i,j},
\end{equation}
where we have defined the deviations between the high- and low-order fluxes as
\begin{align}
\begin{cases}
\Delta f_{i-1/2,j} := \lambda_x (\hat{F}^{n,\rho}_{i-1/2,j} - \hat{f}^{n,\rho}_{i-1/2,j}), \\
\Delta f_{i+1/2,j} := \lambda_x (\hat{F}^{n,\rho}_{i+1/2,j} - \hat{f}^{n,\rho}_{i+1/2,j}), \\
\Delta g_{i,j-1/2} := \lambda_y (\hat{G}^{n,\rho}_{i,j-1/2} - \hat{g}^{n,\rho}_{i,j-1/2}), \\
\Delta g_{i,j+1/2} := \lambda_y (\hat{G}^{n,\rho}_{i,j+1/2} - \hat{g}^{n,\rho}_{i,j+1/2}).
\end{cases}
\end{align}
Similar to the 1D case, we solve \eqref{inequ4} based on the signs of $\Delta f_{i\pm 1/2,j}$ and $\Delta g_{i,j\pm 1/2}$ at each node $(x_i, y_j)$. The basic idea requires a total of two steps:
\begin{enumerate}
\item Identify the negative values among the four numbers
\begin{equation}
\left\{ \Delta f_{i-1/2,j},\, -\Delta f_{i+1/2,j},\, \Delta g_{i,j-1/2},\, -\Delta g_{i,j+1/2} \right\}.
\end{equation}
\item Corresponding to the collective negative values, we define upper bounds on the limiting parameters by solving Eqn. \eqref{inequ4} for each value of $\theta$ after neglecting any positive values found.
For example, if $\Delta f_{i-1/2,j}, -\Delta f_{i+1/2,j} < 0$ and $\Delta g_{i,j-1/2}, -\Delta g_{i,j+1/2} \ge 0$, then we define
\begin{align}
\begin{cases}
\Lambda^\rho_{L,I_{i,j}} := \Lambda^\rho_{R,I_{i,j}} := \min \left( \frac{\varepsilon_\rho^{n+1} - \hat{\rho}^{n+1}_{i,j}} { \Delta f_{i-1/2,j}-\Delta f_{i+1/2,j}} , 1\right), \\
\Lambda^\rho_{D,I_{i,j}} := \Lambda^\rho_{U,I_{i,j}} := 1.
\end{cases}
\end{align}
Likewise, if $-\Delta f_{i+1/2,j}, \Delta g_{i,j-1/2} < 0$ and $\Delta f_{i-1/2,j}, -\Delta g_{i,j+1/2} \ge 0$, then we define
\begin{align}
\begin{cases}
\Lambda^\rho_{R,I_{i,j}} := \Lambda^\rho_{D,I_{i,j}} := \min \left( \frac{\varepsilon_\rho^{n+1} - \hat{\rho}^{n+1}_{i,j}} { -\Delta f_{i+1/2,j}+\Delta g_{i,j-1/2}} , 1\right), \\
\Lambda^\rho_{L,I_{i,j}} := \Lambda^\rho_{U,I_{i,j}} := 1.
\end{cases}
\end{align}
There are a total of 16 cases. Each follows similarly, and the details are omitted for brevity.
\end{enumerate}
\subsection{Step 2: Maintain positivity of the pressure}
Using the same construction as in \S\ref{subsec:step2-1d}, we identify a rectangle $R_{i,j}^{\rho,p} \subseteq S_{i,j}^\rho$ on which the pressure satisfies $p_{i,j}(\theta^n_{i-1/2,j},\theta^n_{i+1/2,j},\theta^n_{i,j-1/2},\theta^n_{i,j+1/2}) \geq \varepsilon_p^{n+1}$. Again, we consider the vertices of the region that was computed in the first step. In 2D, we define them as
\[
A^{k_1,k_2,k_3,k_4} := (k_1 \Lambda^\rho_{L,I_{ij}}, k_2 \Lambda^\rho_{R,I_{ij}}, k_3 \Lambda^\rho_{D,I_{ij}}, k_4 \Lambda^\rho_{U,I_{ij}}), \quad k_1, k_2, k_3, k_4 \in \{0,1 \}.
\]
We rescale each vertex in an identical manner to the 1D case presented in subsection \ref{subsec:step2-1d}. There are two cases:
\begin{itemize}
\item \underline{Case 1.} If $p_{i,j}(A^{k_1,k_2,k_3,k_4}) \geq \varepsilon_p^{n+1}$, we define the vertex $B^{k_1,k_2,k_3,k_4} := A^{k_1,k_2,k_3,k_4}$.
\item \underline{Case 2.} Otherwise, we solve the quadratic equation $p_{i,j}(rA^{k_1,k_2,k_3,k_4}) = \varepsilon_p^{n+1}$ for the unknown $r \in [0,1]$ and put $B^{k_1,k_2,k_3,k_4} := r A^{k_1,k_2,k_3,k_4}$.
\end{itemize}
In the final step, we identify a rectangular box inside $S_{i,j}^p$ through
\begin{align}
R_{i,j}^{\rho,p} := [0, \Lambda_{L,I_{i,j}}] \times [0,\Lambda_{R,I_{i,j}}] \times [0, \Lambda_{D,I_{i,j}}] \times [0,\Lambda_{U,I_{i,j}}],
\end{align}
where
\begin{equation}
\begin{aligned}
\Lambda_{L,I_{i,j}} := \min_{k_{2,3,4} \in \{0, 1\}}\left( B^{1,k_2,k_3,k_4} \right), \quad \Lambda_{R,I_{i,j}} := \min_{k_{1,3,4} \in \{0, 1\}}\left( B^{k_1,1,k_3,k_4} \right), \\
\Lambda_{D,I_{i,j}} := \min_{k_{1,2,4} \in \{0, 1\}}\left( B^{k_1,k_2,1,k_4} \right), \quad \Lambda_{U,I_{i,j}} := \min_{k_{1,2,3} \in \{0, 1\}}\left( B^{k_1,k_2,k_3,1} \right).
\end{aligned}
\end{equation}
After repeating this procedure for each node $(x_i, y_j)$, we finish by defining the scaling parameters as
\begin{equation}
\theta^n_{i-1/2,j} := \min(\Lambda_{R,I_{i-1,j}},\Lambda_{L, I_{i,j}}), \quad \theta^n_{i,j-1/2} := \min(\Lambda_{U,I_{i,j-1}},\Lambda_{D, I_{i,j}}),
\end{equation}
and insert the result into Eqn.~\eqref{eqn:2dfluxes} to define our modified fluxes. This finishes the discussion of the 2D scheme, and a similar positivity-preserving theorem follows as in Thm.~\ref{eqn:theta-admissable-interval}.
\section{Numerical results}
\label{sec:numerical-results}
In this section, we perform numerical simulations with our proposed positivity-preserving method on the 1D and 2D compressible Euler equations.
\subsection{Implementation details}
The parameters we use for our WENO reconstructions include a power parameter $p=2$, a regularization parameter $\varepsilon = 10^{-6}$, and a gas constant of $\Gamma = 1.4$.
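One practical remark: the quadratic solve used in Case 2 of the pressure limiting step (in 1D and 2D alike) reduces to a small scalar computation, because the conserved variables are affine in $r$ along the ray $rA$. The following sketch, written for the 1D conserved variables, is our own illustration (the helper name and the choice to clear the denominator by working with $\rho\,(p - \varepsilon)$, which is quadratic in $r$, are ours, not notation from the scheme):

```python
import numpy as np

GAMMA = 1.4  # gas constant used throughout our simulations

def rescale_r(U_hat, dU, eps):
    """Largest r in [0, 1] such that p(U_hat + s*dU) >= eps for all s in [0, r].

    U_hat = (rho0, m0, E0) is the low-order state (assumed to satisfy p >= eps),
    dU = (drho, dm, dE) is the direction toward the vertex A.  The function
    phi(r) = rho(r) * (p(r) - eps) is a quadratic polynomial in r with
    phi(0) >= 0, so the first crossing is its smallest positive root.
    """
    rho0, m0, E0 = U_hat
    drho, dm, dE = dU
    a = (GAMMA - 1.0) * (dE * drho - 0.5 * dm**2)
    b = (GAMMA - 1.0) * (E0 * drho + dE * rho0 - m0 * dm) - eps * drho
    c = (GAMMA - 1.0) * (E0 * rho0 - 0.5 * m0**2) - eps * rho0
    # smallest root of a r^2 + b r + c = 0 in (0, 1]; if none, the whole ray is safe
    roots = [r for r in np.roots([a, b, c]) if np.isreal(r) and 0.0 < r.real <= 1.0]
    return min(r.real for r in roots) if roots else 1.0
```

For instance, a state with unit density, zero momentum and a purely energetic perturbation driving the pressure linearly toward zero is rescaled to stop just above the floor $\varepsilon$.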
In addition, we follow common practice and use a global (as opposed to a local) value for $\alpha$ in the Lax-Friedrichs flux splitting in all of our simulations. This introduces additional numerical dissipation that helps to prevent unphysical oscillations in this high-order scheme. Contrary to what typically happens with first-order finite volume schemes, the additional numerical dissipation introduced by this choice does not introduce an exorbitant amount of artificial diffusion. In every simulation save one, the CFL number is $0.35$. All of our numerical results can be found in the open source software package FINESS \cite{FINESS}.
\subsection{Accuracy test}
To test the accuracy of our method, we use the smooth vortex problem with low density and low pressure \cite{XiQiXu14,ZhangShu12}. Initially, we have a mean flow
\begin{align}
(\rho, u^1, u^2, u^3, p) = (1, 1, 1, 0, 1),
\end{align}
with perturbations on the velocities $u^1$, $u^2$ and the temperature $T = {p}/{\rho}$ given by
\begin{equation*}
(\delta u^1, \delta u^2) = \frac{\varepsilon}{2\pi}e^{0.5(1-r^2)}(-{y},{x}), \quad \delta T = - \frac{(\Gamma-1)\varepsilon^2}{8\Gamma\pi^2}e^{1-r^2}, \quad r^2 := {x}^2 + {y}^2.
\end{equation*}
The initial pressure and density are determined by keeping the entropy $S = p/{\rho^\Gamma}$ constant. The domain is $(x,y) \in [-5,5]\times[-5,5]$ with periodic boundary conditions on all sides. The vortex strength $\varepsilon$ is set to $10.0828$ so that the lowest density and lowest pressure in the center of the vortex are $7.8 \times 10^{-15}$ and $1.7 \times 10^{-20}$, respectively. A convergence study is presented in Table \ref{tab:2dhd}. The $L_1$-errors and $L_\infty$-errors of the density are computed at a final time of $t = 0.01$. We observe the fifth-order accuracy of the proposed scheme, which is comparable with the results demonstrated in \cite{XiQiXu14,ZhangShu12}.
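The initialization just described can be sketched as follows (our own illustrative code; the function name is ours, and positivity of the computed fields near the vortex core is exactly the marginal property the test exercises):

```python
import numpy as np

GAMMA = 1.4

def vortex_init(x, y, eps=10.0828):
    """Isentropic vortex with low density/pressure at the core.

    Perturbs the velocities and the temperature T = p/rho of the mean flow
    (rho, u1, u2, p) = (1, 1, 1, 1); density and pressure are recovered from
    constant entropy S = p / rho^Gamma = 1, i.e. rho = T^(1/(Gamma-1)) and
    p = T^(Gamma/(Gamma-1)).
    """
    r2 = x**2 + y**2
    du1 = -eps / (2.0 * np.pi) * np.exp(0.5 * (1.0 - r2)) * y
    du2 = eps / (2.0 * np.pi) * np.exp(0.5 * (1.0 - r2)) * x
    dT = -(GAMMA - 1.0) * eps**2 / (8.0 * GAMMA * np.pi**2) * np.exp(1.0 - r2)
    T = 1.0 + dT
    rho = T**(1.0 / (GAMMA - 1.0))
    p = T**(GAMMA / (GAMMA - 1.0))
    return rho, 1.0 + du1, 1.0 + du2, p
```

With the stated vortex strength, the temperature at the origin is barely positive, so the density and pressure minima sit many orders of magnitude below the far-field values while remaining strictly positive.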
In \cite{ZhangShu12}, the authors took $\Delta t = \Delta x^{\frac{5}{3}}$ in order to make the spatial error dominate the numerical error. We find this treatment is not necessary to observe high-order spatial accuracy because of the short final time. In our table, we only present the results with the constant CFL number of $0.35$ that has been chosen for this, and all other simulations save one. Without the addition of the positivity limiter, negative density and negative pressure appear in the center of the vortex with the Taylor formulation of the PIF-WENO scheme.
\begin{table}
\begin{center}
\begin{Large}
\caption{Accuracy test of the 2D smooth vortex. We show the $L_{1}$-errors and $L_{\infty}$-errors of the density at time $t = 0.01$. The solutions converge at fifth-order accuracy. \label{tab:2dhd}}
\end{Large}
\begin{tabular}{|c || c c c c |}
\hline
{\normalsize{Mesh}} & {\normalsize $L_{1}$-Error} & {\normalsize Order} & {\normalsize $L_{\infty}$-Error} & {\normalsize Order} \\
\hline \hline
{\normalsize $80 \times 80$} & 2.970e-06 & - & 2.494e-04 & - \\
{\normalsize $160 \times 160$} & 1.627e-07 & 4.190 & 2.442e-05 & 3.353 \\
{\normalsize $320 \times 320$} & 7.384e-09 & 4.462 & 1.390e-06 & 4.135 \\
{\normalsize $640 \times 640$} & 2.428e-10 & 4.927 & 4.718e-08 & 4.881 \\
\hline
\end{tabular}
\end{center}
\end{table}
\subsection{1D Sedov blast wave problem}
The first 1D problem we consider is the Sedov blast problem, originally from the book by Sedov \cite{se59}. The problem describes an intense explosion in a gas, where the disturbed air is separated from the undisturbed air by a shock wave. Initially, we deposit a quantity of energy $\mathcal{E}=3200000$ into the center cell of the domain, which has length $\Delta x$, and the energy in every other cell is set to $10^{-12}$. The other quantities are initialized with constant values of $\rho = 1$ and $u^1 = 0$.
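This initialization can be sketched as follows (an illustrative helper of our own; the grid size and domain length are arbitrary choices for the sketch, not values fixed by the problem description):

```python
import numpy as np

def init_sedov_1d(N=200, L=2.0, E_blast=3.2e6):
    """Initial conserved variables for the 1D Sedov blast problem.

    All blast energy is deposited in a single center cell of width dx = L/N;
    every other cell carries the floor energy 1e-12.  Density is 1 and the
    velocity (hence momentum) is 0 everywhere.
    """
    dx = L / N
    x = -L / 2.0 + (np.arange(N) + 0.5) * dx  # cell centers
    rho = np.ones(N)
    mom = np.zeros(N)
    E = np.full(N, 1e-12)
    E[N // 2] = E_blast                       # the center cell
    return x, rho, mom, E
```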
The numerical results are displayed in Fig.~\ref{1dsed}, where we see that the shock wave is captured when the proposed limiter is used. In Fig.~\ref{1dsed}, we use the exact solution given in Sedov's book \cite{se59} as the reference solution. Our results are in agreement with other recent work \cite{ZhangShu10,ZhangShu12,XiQiXu14}.
\subsection{1D double rarefaction problem}
The second 1D problem we consider is the double rarefaction problem. It is a Riemann problem with the initial condition $\rho_L = \rho_R = 7$, $u^1_L = -1$, $u^1_R = 1$ and $p_L = p_R = 0.2$. The exact solution consists of two rarefaction waves traveling in opposite directions, which results in the creation of a vacuum in the center of the domain. Only with the proposed limiter are we able to solve this low density and low pressure problem with the high-order finite difference WENO method. For this problem only, we find it necessary to reduce the CFL number from $0.35$ to $0.15$ in order to avoid introducing spurious oscillations near the top of the rarefactions. The numerical results are presented in Fig.~\ref{1dst}, where we use the same resolution of $\Delta x = 1/200$ as the references \cite{ZhangShu10,ZhangShu12,XiQiXu14}. Our results are in agreement with the reference solution, here a highly resolved numerical solution with $\Delta x = 1/1000$. Other Riemann problems have been investigated, and our method gives results similar to those found elsewhere in the literature (e.g. \cite{EsVi96,TangXu99}).
\subsection{2D Sedov blast wave problem}
We also consider a 2D version of the Sedov blast wave problem. In the 2D case, the problem has an exact self-similar solution, and we expect the numerical result to have a similar structure. In the simulation, we only compute one quadrant of the whole domain, where we choose the computational domain to be $(x,y) \in [0, 1.1] \times [0, 1.1]$.
Similar to the 1D case, we deposit a quantity of energy $\mathcal{E}=0.244816$ into the lower left corner cell, and set the energy in every other cell to $10^{-12}$. The other initial values are identical to the 1D case. We apply solid wall boundary conditions along the bottom ($y=0$) and left ($x=0$) boundaries, so that the setup is equivalent to computing the whole domain $[-1.1, 1.1] \times [-1.1, 1.1]$ with $\mathcal{E}=0.979264$. The density is presented in Fig.~\ref{2dsed}, from which we can see that the result has a nice self-similar structure. Additionally, we observe that the density cut at $y=0$ agrees with the exact solution.
\subsection{2D Shock diffraction problem}
The second 2D problem we consider is the shock diffraction problem. The computational domain is $[0, 1]\times [6, 11] \cup [1, 13]\times [0, 11]$. A shock wave of Mach number $5.09$ is initially located at $\{x = 0.5, 6 \leq y \leq 11\}$. As time evolves, the wave moves into undisturbed air with a density of $\rho = 1.4$ and a pressure of $p = 1$. We use an inflow boundary condition at $\{x = 0, 6 \leq y \leq 11\}$, and outflow boundary conditions at $\{x = 13, 0 \leq y \leq 11 \}$, $\{ 1 \leq x \leq 13, y = 0 \}$ and $\{0 \leq x \leq 13, y = 11 \}$. For the other parts of the boundary, where $\{ 0 \leq x \leq 1, y = 6 \}$ and $\{ x = 1, 0 \leq y \leq 6 \}$, solid wall boundary conditions are applied. As the shock passes the corner, negative density and negative pressure are observed without the addition of the positivity limiter to the Taylor PIF-WENO scheme. This issue is resolved with the proposed modifications to the older scheme. In Fig.~\ref{2dsd}, we present results for the density and pressure at time $t = 2.3$.
\begin{figure}
\caption{1D Sedov blast wave problem. These panels show plots at time $t = 0.001$ of (a) the density, (b) the pressure and (c) the velocity $u^1$. The solid lines are the exact solutions.
The solution was obtained on a mesh with $\Delta x = 1/200$ and a CFL of 0.35.\label{1dsed}}
\end{figure}
\begin{figure}
\caption{1D double rarefaction problem. These panels show plots at time $t = 0.6$ of (a) the density, (b) the pressure and (c) $u^1$. The solid lines are the exact solutions. The solution is obtained on a mesh with $\Delta x = 1/200$ and a smaller CFL of $0.15$, which helps to reduce unphysical oscillations.\label{1dst}}
\end{figure}
\begin{figure}
\caption{2D Sedov blast wave problem. These panels show plots at time $t = 1$ of (a) the density, and (b) a slice of the density along $y = 0$. The solid line in (b) is the exact solution. The solution is obtained on a $160 \times 160$ mesh with a CFL number of $0.35$.\label{2dsed}}
\end{figure}
\begin{figure}
\caption{2D shock diffraction problem. These panels show plots at time $t = 2.3$ of (a) the density, and (b) the pressure. A total of 20 equally spaced contour lines from $\rho = 0.0662$ to $7.07$ are plotted in (a), and 40 equally spaced contour lines from $p = 0.091$ to $37$ are plotted in (b). The solution is computed on a mesh with $\Delta x =\Delta y = 1/30$ and a CFL number of $0.35$.\label{2dsd}}
\end{figure}
\section{Conclusions and future work}
\label{sec:conclusions}
In this paper we propose a high-order, single-stage, single-step, positivity-preserving method for the compressible Euler equations. The base scheme is the Taylor discretization of the Picard integral formulation of the PDE, where a single finite difference WENO reconstruction is applied once per time step. A positivity-preserving limiter is introduced in such a way that the positivity of the solution is preserved for the \emph{entire simulation}, which adds a degree of robustness to our scheme. In addition, we have no excessive CFL restriction for retaining positivity, which makes our new method more efficient than recent positivity-preserving methods.
We demonstrate the effectiveness and efficiency of the positivity-preserving scheme on one- and two-dimensional problems with low density and pressure. High-order accuracy is retained after the introduction of our positivity-preserving limiter on a test problem that has near-zero pressure and density. Future work includes the construction of positivity-preserving multiderivative methods \cite{Seal2014}, applying the proposed method to other systems such as magnetohydrodynamics, as well as incorporating our method into an AMR framework.
\end{document}
\begin{document}
\title{\vspace*{-1cm}Global Axisymmetric Euler Flows with Rotation}
\author{Yan Guo}
\address{Brown University, Providence, RI, USA}
\email{yan\[email protected]}
\author{Benoit Pausader}
\address{Brown University, Providence, RI, USA}
\email{benoit\[email protected]}
\author{Klaus Widmayer}
\address{University of Zurich, Zurich, Switzerland \& University of Vienna, Vienna, Austria}
\email{[email protected]}
\subjclass[2010]{35Q31, 35B40, 76B03, 76U05}
\begin{abstract}
We construct a class of \emph{global, dynamical} solutions to the $3d$ Euler equations near the stationary state given by uniform ``rigid body'' rotation. These solutions are axisymmetric, of Sobolev regularity, have non-vanishing swirl and scatter linearly, thanks to the dispersive effect induced by the rotation. To establish this, we introduce a framework that builds on the symmetries of the problem and precisely captures the anisotropic, dispersive mechanism due to rotation. This enables a fine analysis of the geometry of nonlinear interactions and allows us to propagate sharp decay bounds, which is crucial for the construction of global Euler flows.
\end{abstract}
\setcounter{tocdepth}{1}
\maketitle
\vspace*{-.75cm}
\tableofcontents
\vspace*{-.75cm}
\section{Introduction}
While global regularity of solutions to the incompressible $3d$ Euler equations for $\bm{U}:\mathbb{R}\times\mathbb{R}^3\to\mathbb{R}^3$,
\begin{equation}\label{eq:E}
\partial_t\bm{U}+\bm{U}\cdot\nabla \bm{U}+\nabla P=0,\qquad{\mathrm{div }}(\bm{U})=0,
\end{equation}
remains an outstanding open problem, there are several examples of stationary states (see e.g.\ \cite{Cho2020,CLV2019,FB1973,Gav2019} for some nontrivial ones). A particularly simple yet relevant one is given by \emph{uniform rotation} around a fixed axis.
In Cartesian coordinates with $\vec{e}_3$ along the axis of rotation, these ``rigid motions'' are given by $\bm{U}_{rot}=(-x_2,x_1,0)$ (with pressure $P_{rot}=(x_1^2+x_2^2)/2$). Working with solutions that are \emph{axisymmetric} (i.e.\ invariant with respect to rotation about $\vec{e}_3$) and writing $\bm{U}=\bm{U}_{rot}+\bm{u}$, one sees that $\bm{U}$ solves \eqref{eq:E} iff the velocity field $\bm{u}:\mathbb{R}\times\mathbb{R}^3\to\mathbb{R}^3$ satisfies the \emph{Euler-Coriolis} equations
\begin{equation}\label{eq:EC}
\begin{cases}
&\partial_t\bm{u}+\bm{u}\cdot\nabla \bm{u}+\vec{e}_3\times \bm{u}+\nabla p=0,\\
&{\mathrm{div }}(\bm{u})=0.
\end{cases}
\end{equation}
As an alternative viewpoint, \eqref{eq:EC} are the incompressible $3d$ Euler equations written in a uniformly rotating frame of reference, where the Coriolis force is given as $\vec{e}_3\times\bm{u}$. The scalar pressure $p:\mathbb{R}\times\mathbb{R}^3\to\mathbb{R}$ serves to maintain the incompressibility condition ${\mathrm{div }}(\bm{u})=0$ and can be recovered from $\bm{u}$ by solving the elliptic equation $\Delta p=-(\partial_1\bm{u}_2-\partial_2\bm{u}_1)-{\mathrm{div }}(\bm{u}\cdot\nabla\bm{u})$.
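As an elementary sanity check (implicit in the statement above, spelled out here for the reader's convenience), one verifies directly that $(\bm{U}_{rot},P_{rot})$ is a stationary solution of \eqref{eq:E}:

```latex
\bm{U}_{rot}\cdot\nabla \bm{U}_{rot}
  = \big( -x_2\,\partial_1 + x_1\,\partial_2 \big)\,(-x_2,\; x_1,\; 0)
  = (-x_1,\; -x_2,\; 0)
  = -\nabla P_{rot},
```

so that $\partial_t\bm{U}_{rot}+\bm{U}_{rot}\cdot\nabla\bm{U}_{rot}+\nabla P_{rot}=0$, while ${\mathrm{div }}(\bm{U}_{rot})=\partial_1(-x_2)+\partial_2(x_1)=0$.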
Our main result shows that sufficiently small and smooth initial data $\bm{u}_0$ that are axisymmetric lead to global, unique solutions to \eqref{eq:EC}:
\begin{theorem}\label{MainThm}
There exist $N_0>0$ and a norm $Z$, finite for Schwartz data, and $\varepsilon_0>0$ such that if $\bm{u}_0\in H^3(\mathbb{R}^3)$ is \emph{axisymmetric} and satisfies
\begin{equation}\label{eq:id-main}
\norm{\bm{u}_0}_{Z}+\norm{\bm{u}_0}_{H^{2N_0}}\le\varepsilon<\varepsilon_0,
\end{equation}
then there exists a unique global solution $\bm{u}\in C(\mathbb{R}:H^{2N_0})$ of \eqref{eq:EC} with initial data $\bm{u}_0$, and thus also a global solution $\bm{U}$ of \eqref{eq:E} with initial data $\bm{U}_0=\bm{U}_{rot}+\bm{u}_0$. Moreover, $\bm{u}(t)$ decays over time at the optimal rate
\begin{equation}
\norm{\bm{u}(t)}_{L^\infty}\lesssim \varepsilon \ip{t}^{-1}
\end{equation}
and scatters linearly in $L^2$: There exists $\bm{u}_0^\infty$ such that the solution $\bm{u}_{lin}(t)$ of the linearization of \eqref{eq:EC} with initial data $\bm{u}_0^\infty$,
\begin{equation}\label{eq:lin-EC}
\partial_t\bm{u}_{lin}+\vec{e}_3\times \bm{u}_{lin}+\nabla p=0,\quad {\mathrm{div }}(\bm{u}_{lin})=0,\quad \bm{u}_{lin}(0)=\bm{u}_0^\infty,
\end{equation}
satisfies
\begin{equation}
\norm{\bm{u}(t)-\bm{u}_{lin}(t)}_{L^2}\to 0,\qquad t\to\infty.
\end{equation}
\end{theorem}
We comment on a few points of immediate relevance:
\begin{enumerate}[wide,itemsep=3pt]
\item A more precise version of Theorem \ref{MainThm} is given below in Theorem \ref{thm:EC} of Section \ref{ssec:mainresult}.
In particular, the $Z$ norm in the above statement is given explicitly as a sum of $B$ and $X$ norms -- defined in \eqref{eq:defBnorm} resp.\ \eqref{eq:defXnorm} after the introduction of appropriate technical tools -- plus regularity in terms of a scaling vector field. With this, the scattering statement can be refined and holds in a stronger topology than $L^2$ -- see Corollary \ref{cor:scatter}.
\item We may view Theorem \ref{MainThm} as a \emph{global stability} result (in the class of axisymmetric perturbations satisfying \eqref{eq:id-main}) for uniformly rotating solutions $\bm{U}_{rot}=r\vec{e}_\theta$ in cylindrical coordinates $(r,\theta,z)$ to the incompressible $3d$ Euler equations \eqref{eq:E}. From this perspective, our result connects with the study of stability of infinite energy solutions to the $2d$ Euler equations, such as shear flows \cite{BM15,IJ2019,IJ20,MaZhaShears2020} or stratified configurations \cite{BBZD2021}, even though the stability mechanism (``phase mixing'') in these settings is different. However, to the best of our knowledge there are no such results for the Euler equations in $3d$. We point out that the particular rotating solution $\bm{U}_{rot}$ is but one example of a family of stationary states of the $3d$ Euler equations, given by $\bm{U}_f=f(r)\vec{e}_\theta$ with $f:\mathbb{R}^+\to\mathbb{R}$. The $3d$ Euler dynamics near $\bm{U}_f$ can be described as $\bm{U}=\bm{U}_f+\bm{u}$, where $\bm{u}$ satisfies
\begin{equation*}
\partial_t\bm{u}+\bm{u}\cdot\nabla\bm{u}+\frac{f(r)}{r}\vec{e}_z\times\bm{u}+\frac{f(r)}{r}\partial_\theta\bm{u}+r\partial_r\left(\frac{f(r)}{r}\right)(\bm{u}\cdot\vec{e}_r)\vec{e}_\theta+\nabla p=0,\quad {\mathrm{div }}(\bm{u})=0.
\end{equation*}
(For $f(r)=r$ and under axisymmetry this reduces to \eqref{eq:EC}.) Our result thus initiates the study of the stability of these equilibria.
\item Apart from smallness, localization and axisymmetry assumptions, no restrictions are put on the initial data in Theorem \ref{MainThm}. Classical theory thus only predicts the existence of \emph{local} solutions for a time span of order $\varepsilon^{-1}$. In contrast, the \emph{global} solutions we construct can (and in general do -- see Remark \ref{rem:a-vs-swirl}) have \emph{non-vanishing swirl}. (We recall that without swirl, solutions exist globally under relatively mild assumptions, see e.g.\ \cite[Section 4.3]{MB2002}.) In this context, the crucial role of \emph{axisymmetry} is to suppress a $2d$ Euler-type dynamic in \eqref{eq:EC}. Without axial symmetry, it is unclear whether a similar stability result can hold: the $3d$ Euler equations are notoriously unstable, and there are reasons to believe that even for small initial data the aforementioned $2d$ Euler dynamic (with its potential for extremely fast norm growth) would play an important role -- for more on this we refer the reader to the discussion in Section \ref{sec:role-axisymm}.
\item It is remarkable that a uniform rotation keeps the solutions of Theorem \ref{MainThm} {\it globally regular} in the absence of dissipation. Without rotation, even axisymmetric initial data may lead to finite time blow-up, as conjectured in \cite{Hou2021,Hou2014,Luo2014} and recently established in \cite{elgindiBU,EGM2019} for $C^{1,\alpha}$ solutions. For related equations, one can produce finite time blow-up even in the presence of rotation, e.g.\ in the inviscid primitive equations \cite{ILT2020}. At the heart of this result is a dispersive effect due to rotation.
This is a linear mechanism that on $\mathbb{R}^3$ leads to amplitude decay of solutions of the linearization \eqref{eq:lin-EC} of the Euler-Coriolis system. The anisotropy of the problem is reflected in the dispersion relation, which is degenerate and yields a critical decay rate of at most $t^{-1}$ (see Corollary \ref{cor:decay}). In particular, our nonlinear solutions decay at the same rate as linear solutions.
\item The influence and importance of rotational effects in fluids has been documented in various contexts, in particular in the geophysical fluids literature (see e.g.\ \cite{GS2007,mcwilliamsGFD,GFD}, or \cite{b-plane,Gal2008,globalbeta} for the $\beta$-plane model). In the setting of fast rotation, the (inverse) speed of rotation introduces a parameter of smallness that can be used to prolong the time of existence of solutions. For Euler-Coriolis \eqref{eq:EC}, this has been done in \cite{AF2018,CDGG2002a,Dut2005,JW2020,KohE,MR3488136,WC2018} via Strichartz estimates associated to the linear semigroup, based on work in the viscous setting \cite{CDGG2006,GR2009}. Such results do not require axisymmetry and apply for sufficiently smooth initial data without size restrictions. By rescaling\footnote{Note that if $\bm{u}$ solves \eqref{eq:EC} on a time interval $[0,T]$, then for $\omega>0$ we have that $\bm{u}_\omega(t,x):= \omega\bm{u}(\omega t,x)$ solves \eqref{eq:EC} with $\vec{e}_3$ replaced by $\omega\vec{e}_3$ on the time interval $[0,\omega^{-1}T]$, so that the speed of rotation and the size of the initial data can be related.}, these results amount to a logarithmic improvement of the time scale of existence in Sobolev spaces, with a slightly stronger improvement available in Besov spaces \cite{AF2018,WC2018}.
\item This article expands on the line of work initiated in \cite{rotE}: we \emph{globally} control the evolution of small, axisymmetric initial data and find their asymptotic behavior. We develop a framework that tracks various important anisotropic parameters and -- crucially -- introduce an angular Littlewood-Paley decomposition to propagate fractional type regularity in certain angular derivatives on the Fourier side. This is coupled with a novel, refined analysis of the linear effect due to rotation, which allows us to obtain sharp decay rates with weak control of the unknowns, and a precise understanding of the geometry of nonlinear interactions. We refer the reader to Section \ref{sec:method} for a more detailed description of our ``method of partial symmetries''.
\item While the techniques and ideas of this article are developed with a precise adaptation to the geometry of the Euler-Coriolis system, we believe they can be of much wider use, for instance for stratified systems (such as the Boussinesq equations of \cite{3dBouss} or \cite{Cha2020}), plasmas with magnetic background fields (e.g.\ in the Euler-Poisson or Euler-Maxwell equations \cite{GIP2016,GP2011}), or in the broader context of dynamo theory in the MHD equations (see e.g.\ \cite[Section 7.9]{fitzpatrick2014plasma}). Moreover, they may open directions towards new results or improved thresholds also in the viscous setting \cite{CDGG2006,KohNS}.
\end{enumerate}
\medskip
We next give an overview of the methodology this article proposes, and of how these ideas are used to overcome the challenges posed by the anisotropy, quasilinear nature and critical decay rate of \eqref{eq:EC}.
\subsection{The method of partial symmetries}\label{sec:method}
Underlying our approach are classical techniques for small data/global regularity problems in nonlinear dispersive equations, such as vector fields \cite{Klainer85} and normal forms \cite{ShatahKG85}, as unified in the spacetime resonance approach \cite{GermSTR,GMSGWW3d,GNT1} and further developed in \cite{BG2014,Den2018,DIPau,DIPP,CapWW,GIP2016,IP2013,IoPau1,IP2019,IoPu2,G2dWW,KP2011} (see also \cite{Ifrim2017,Ifrim2019a}). To initiate such an analysis, we observe that the linearization of \eqref{eq:EC} is a dispersive equation, with dispersion relation given by
\begin{equation}
\Lambda(\xi)=\xi_3/\abs{\xi},\qquad \xi\in\mathbb{R}^3.
\end{equation}
This is anisotropic and degenerate, and leads to $L^\infty$ decay at the critical rate $t^{-1}$, which is also sharp -- see Proposition \ref{prop:decay} resp.\ Corollary \ref{cor:decay} and the discussion thereafter. This anisotropy is also manifest in the full, nonlinear problem \eqref{eq:EC}, which exhibits fewer symmetries and conservation laws than the $3d$ Euler equations without rotation \eqref{eq:E}. In our setting, we only have two unbounded commuting vector fields: the rotation $\Omega$ about the axis $\vec{e}_3$, and the scaling $S$ (see Section \ref{sec:structure}). To obtain regularity in all directions, we complement them with a third vector field $\Upsilon$, corresponding to a derivative along the polar angle in spherical coordinates on the Fourier side. This choice ensures that $\Upsilon$ commutes with both $\Omega$ and $S$, but it {\it does not commute with the equation}.
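Concretely, these (non-)commutation properties can be read off in Fourier-side spherical coordinates $(\rho,\theta,\phi)$, in which $\Lambda(\xi)=\xi_3/\abs{\xi}=\cos\phi$ with $\phi$ the polar angle; the following sketch identifies $\Upsilon$ with the polar derivative $\partial_\phi$ up to normalization (an identification made here only for illustration):

```latex
% S, Omega, Upsilon as coordinate derivatives on the Fourier side:
S=\rho\partial_\rho,\qquad \Omega=\partial_\theta,\qquad \Upsilon\sim\partial_\phi
\quad\Longrightarrow\quad [S,\Omega]=[S,\Upsilon]=[\Omega,\Upsilon]=0,
% while, since \Lambda=\cos\phi is zero-homogeneous and axisymmetric,
S\Lambda=0,\qquad \Omega\Lambda=0,\qquad \Upsilon\Lambda\sim-\sin\phi\not\equiv0,
% so S and \Omega commute with the linear semigroup e^{it\Lambda},
% whereas \Upsilon does not.
```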
Our overall strategy leans on a general approach to quasilinear dispersive problems and establishes a bootstrapping scheme as follows:
\begin{enumerate}[leftmargin=*]
\item \emph{Choice of unknowns and formulation as a dispersive problem (Section \ref{sec:structure}).} We parameterize the fluid velocity by two real scalar unknowns $U_{\pm}$ which diagonalize the linear system and commute with the geometric structure (Hodge decomposition and vector fields). Normalizing them properly then reveals a ``null type'' structure in the case of axisymmetric solutions (Lemma \ref{lem:ECmult}).
\item \emph{Linear decay analysis (Section \ref{sec:lin_decay}).} The key point here is to identify a \emph{weak criterion} for \emph{sharp decay}, which will allow us to retain \emph{optimal} pointwise decay even though the highest order energies increase slowly over time. This criterion largely determines the norm we will propagate in the bootstrap; it incorporates localized control of vector fields and of angular derivatives in the direction $\Upsilon$ via a $B$ and an $X$ norm, respectively.
\item \emph{Nonlinear Analysis 1: energy and refined estimates for vector fields (Section \ref{sec:Bnorm}).} Thanks to the commutation of $S$ with the equation, energy estimates for (arbitrary) powers of $S$ on the unknowns $U_\pm$ follow directly from the decay at rate $t^{-1}$. We then upgrade these $L^2$ bounds for many vector fields to refined, uniform bounds for fewer vector fields on the profiles $\mathcal{U}_\pm$ of $U_\pm$ in a norm $B$. This norm is designed as a relaxation of the requirement that the Fourier transform of the profiles $\mathcal{U}_\pm$ be in $L^\infty$.
\item \emph{Nonlinear Analysis 2: propagation of regularity in $\Upsilon$ (Section \ref{sec:Xnorm}).} This is the most delicate part of the argument, and the design of the $X$ norm to capture the angular regularity in $\Upsilon$ plays a key role: roughly speaking, while stronger norms give easier access to decay, they are also harder to bound along the nonlinear evolution. In the balance struck here, the $X$ norm corresponds to a fractional, angular regularity of the Fourier transforms of the profiles $\mathcal{U}_\pm$.
\end{enumerate}
We highlight some key aspects of our novel approach:
\begin{itemize}
\item \emph{Anisotropic localizations:} To precisely capture the degeneracy of the dispersion and to be able to quantify the size of nonlinear interactions, it is important to track both the horizontal and the vertical components of the interacting frequencies. New analytical challenges include the control of singularities due to the anisotropic degeneracy (see e.g.\ Proposition \ref{prop:decay} or Lemma \ref{lem:vfsizes-mini}). We thus work with Littlewood-Paley decompositions (with associated parameters $p,q\in\mathbb{Z}^-$) relative to the horizontal component $\abs{\xi_\textnormal{h}}/\abs{\xi}$ and the vertical component $\abs{\xi_3}/\abs{\xi}$ of a vector $\xi=(\xi_1,\xi_2,\xi_3)\in\mathbb{R}^3$, where $\xi_\textnormal{h}=(\xi_1,\xi_2)$.
\item \emph{Angular Littlewood-Paley decomposition:} A crucial new ingredient is the introduction of an ``angular'' Littlewood-Paley decomposition quantifying angular regularity (see Section \ref{ssec:angLP}). Since our solutions are axisymmetric, this amounts to defining and controlling fractional powers $\Upsilon^{1+\beta}$, for $0<\beta\ll 1$.
This is fundamental for our analysis in that it enables us to pinpoint a \emph{weak criterion} for \emph{sharp decay} that moreover can be controlled \emph{globally}.\footnote{While sharp decay would also follow from control of a higher power of $\Upsilon$ such as $\Upsilon^2$, the resulting terms seem to resist uniform-in-time bounds and are thus very hard to manage.}
\item \emph{Emphasis on natural derivatives:} We view the vector fields $S,\Omega$ generated by the symmetries as the \emph{natural derivatives} of this problem, and our approach is tailored to rely on them to the largest extent possible. In particular, we develop a framework of integration by parts along these vector fields (Section \ref{sec:vfibp}). The precise quantification of this technique is achieved by combining information from the anisotropic localizations and the new angular Littlewood-Paley decomposition. Furthermore, a remarkable interplay with the ``phases'' of the nonlinear interactions reveals a natural dichotomy on which we can base our nonlinear analysis. Compared to traditional spacetime resonance analysis, one may view this as a {\it qualified} version of the absence of spacetime resonances, relying only on the natural derivatives coming from the symmetries.
\end{itemize}
\medskip
In what follows, we describe some of our arguments in more detail.
\subsubsection*{Linear Decay}
We collect the control necessary for decay in a norm $D$ in \eqref{eq:defDnorm}, which combines the aforementioned $B$ and $X$ norms (associated with localized control of vector fields and of angular derivatives in the direction $\Upsilon$, respectively). In particular, it guarantees $L^\infty$ control of the Fourier transform.
This enables a stationary phase argument \emph{adapted to the vector fields}, and yields (in Proposition \ref{prop:decay}) a novel, anisotropic dispersive decay result: we split the action of the linear semigroup of \eqref{eq:EC} on a function into two well-localized pieces (related to the angular regularity we have), which decay in $L^\infty$ resp.\ $L^2$. In addition, away from the sets of degeneracy of $\Lambda$, these pieces decay at a faster rate. To quantify this accurately, our anisotropic setup makes use of the horizontal and vertical projections $P_p$, $P_{p,q}$ and the associated parameters $p,q\in\mathbb{Z}^-$. In combination with the localization information and a null structure of the nonlinear interactions, this provides a key advantage over some traditional dispersive estimates.
\subsubsection*{Choice of Norms}\label{it:normchoice}
Our norms are modeled on $L^2$ to exploit the Hilbertian structure, and they play complementary roles. The $B$-norm \eqref{eq:defBnorm} weights the projections $P_{p,q}$ \emph{negatively} in $p,q$. For functions localized at unit frequencies, this provides normal $L^2$ control of $\hat{f}$ for frequencies where dispersion yields the full $t^{-3/2}$ decay (i.e.\ when $p,q\ge -1$), but strengthens to scale as $L^\infty$ control of $\hat{f}$ where the decay degenerates to the nonintegrable rate $t^{-1}$. It is primarily used to control the contribution of the region where $q<-10$. The $X$-norm \eqref{eq:defXnorm} gives strong control of $1+\beta$ angular derivatives in $\Upsilon$, quantified via the angular Littlewood-Paley decomposition $R_\ell$ of Section \ref{ssec:angLP}, $\ell\in\mathbb{Z}^+$. Weighting \emph{positively} in $p$, we obtain a control that degenerates to scale as the $L^\infty$ norm of $\hat{f}$ for vertical frequencies.
This is used chiefly to control the region where $p<-10$. In addition to the weighting in terms of the anisotropic localization, our norms also include factors that help overcome the derivative loss due to the quasilinear nature of the equations.
\subsubsection*{Nonlinear Analysis}
With a suitable choice of two scalar unknowns $U_+$ and $U_-$ (Section \ref{sec:structure}), the quasilinear structure of \eqref{eq:EC} reveals a ``null type'' structure (Lemma \ref{lem:ECmult}) that will be important for the estimates to come. Conjugating by the linear evolution, we can reformulate \eqref{eq:EC} in terms of bilinear Duhamel formulas for two scalar profiles $\mathcal{U}_+,\mathcal{U}_-$ -- see Section \ref{ssec:profiles}. The nonlinear analysis can then be reduced to suitable bilinear estimates for the profiles in the $B$ and $X$ norms relevant for the decay. For the resulting oscillatory integrals of the form \eqref{eq:def_Qm}, we have versions of the classical tools of normal forms and integration by parts at our disposal. Here our anisotropic framework invokes the horizontal and vertical parameters $p,p_j$ and $q,q_j$, $j=1,2$ -- corresponding to the interacting and output frequencies -- which are adapted to capture (inter alia) the size of the nonlinear ``phase'' functions $\Phi$ and of their vector field derivatives $V\Phi$ (Lemma \ref{lem:vfsizes-mini}). It is valuable to observe that a gap in the values of either the horizontal or the vertical parameters immediately yields a robust lower bound for $S\Phi$ or $\Omega\Phi$, expressed again in terms of the parameters $p,p_j,q,q_j$, with an additional singularity in $p_j$ due to the anisotropy; see \eqref{eq:def_sigma} and \eqref{eq:vflobound_0}. Moreover, we have the striking fact that if $\Phi$ is (relatively) small, then $V\Phi$ will be (relatively) large for some vector field $V\in\{S,\Omega\}$ (see Proposition \ref{prop:phasevssigma}).
To take full advantage of this dichotomy, it is important to establish \emph{sharp criteria} for when integration by parts along vector fields is beneficial (Section \ref{sec:vfibp}). Here the Littlewood-Paley decomposition $R_\ell$ in the angular direction $\Upsilon$ plays a vital role and quantifies the effect on ``cross terms'' via the associated parameters $\ell_j$, $j=1,2$ (see also Lemma \ref{lem:VFcross}). In bilinear estimates, the resulting framework for \emph{iterated} integration by parts \emph{along vector fields} then allows us, roughly speaking, to force parameters $\ell,\ell_j$ at the cost of $p,p_j$ and $q,q_j$, $j=1,2$. As it is not viable to localize in all parameters at once (see also Remark \ref{rem:comm-loc}), we first decompose our profiles with respect to $R_{\ell_j}P_{p_j}$, and only later include the full $P_{p_j,q_j}$, $j=1,2$. In practice, we will then be able to first enforce that $p,p_j$ are all comparable (no ``gap in $p$'', as we call it), then that there are no size discrepancies in $q,q_j$ (no ``gap in $q$''), and either work with normal forms or use our new decay estimates for the linear semigroup (Proposition \ref{prop:decay}). The simplest version of these arguments appears in Section \ref{sec:dtfbds}, and gives improved decay at almost the optimal rate $t^{-\frac{3}{2}}$ for the $L^2$ norm of the time derivatives of the profiles $\mathcal{U}_\pm$. This is a demonstration of the flexibility and power of our approach, which in this instance overcomes the criticality of the sharp $t^{-1}$ decay with relative ease. Here, when there are no gaps in $p$ nor in $q$ (and integration by parts is thus not feasible), normal forms are not available due to the time derivative. However, with our novel decay analysis and its well-localized contributions (Proposition \ref{prop:decay}), we can gain additional decay in a straightforward $L^2\times L^\infty$ estimate.
Including normal form arguments and a refined study of the delicate contributions of terms with localization in $q,q_j$, we can then show the $B$ norm bounds \eqref{eq:btstrap-concl1.1-B} -- see Section \ref{sec:Bnorm}. Finally, the control of the $X$ norm in Section \ref{sec:Xnorm} is the most challenging aspect of this article, and requires a more subtle splitting of cases and an adapted version of iterated integration by parts along vector fields (as presented in Section \ref{ssec:D3ibp}).
\subsection{Plan of the Article}
After the necessary background in Section \ref{sec:structure}, in Section \ref{sec:mainresult} we introduce the functional framework (including the angular Littlewood-Paley decomposition) and present our main result in detail, with an overview of its proof. This is followed by the linear dispersive analysis that gives the decay estimate (Section \ref{sec:lin_decay}). The formalism for repeated integration by parts along the vector fields is subsequently developed in Section \ref{sec:vfibp}, and first used in Section \ref{sec:dtfbds} to establish some useful bounds for the time derivative of our unknowns in $L^2$. In Section \ref{sec:Bnorm} we recall the straightforward $L^2$-based energy estimates and prove the claimed $B$ norm bounds, while those for the $X$ norm are given in Section \ref{sec:Xnorm}. Appendix \ref{apdx} includes the proof of basic properties of the angular Littlewood-Paley decomposition (Appendix \ref{apdx:angLP}) and collects some useful lemmata that are used throughout the text (Appendices \ref{apdx:set_gain}--\ref{sec:symbols}).
\section{Structure of the equations}\label{sec:structure}
In this section we present our choice of dispersive unknowns and investigate the nonlinear structure of the equations \eqref{eq:EC} in these variables. Parts of this material have already been developed in our previous work \cite[Section 2]{rotE}, but we include all necessary details for the convenience of the reader.
\subsection{Symmetries and vector fields}
The equations \eqref{eq:EC} exhibit the two symmetries of \emph{scaling} and \emph{rotation},
\begin{equation}
\bm{u}_\lambda(t,x)=\lambda \bm{u}(t,\lambda^{-1}x),\quad \lambda>0,\qquad \bm{u}_\Theta(t,x)=\Theta^\intercal \bm{u}(t,\Theta x),\quad \Theta\in O(3).
\end{equation}
These are generated by the vector fields $S$ resp.\ $\Omega$, which act on vector fields $\bm{v}$ and functions $f$ as
\begin{equation}\label{ScalingVF}
S\bm{v}=\sum_{j=1}^3x^j\partial_j\bm{v}-\bm{v},\qquad Sf=x\cdot\nabla f
\end{equation}
resp.\footnote{In terms of the rotations $\Omega_{ab}$ of Section \ref{ssec:angLP} we have that $\Omega=\Omega_{12}$.}
\begin{equation}\label{RotationVF}
\Omega \bm{v}=(x^1\partial_2-x^2\partial_1)\bm{v}-\bm{v}_\textnormal{h}^\perp,\qquad \Omega f=(x^1\partial_2-x^2\partial_1)f.
\end{equation}
In both cases, we observe that the vector field $V\in\{S,\Omega\}$ commutes with the Hodge decomposition and leads to the linearized equation
\begin{equation}\label{eq:IER-VF}
\partial_t V \bm{u}+V \bm{u}\cdot\nabla \bm{u}+\bm{u}\cdot\nabla V \bm{u}+\vec{e}_3\times V \bm{u}+\nabla p_V=0,\quad {\mathrm{div }}\, V \bm{u}=0.
\end{equation}
In particular, the nonlinear flow of \eqref{eq:EC} preserves \emph{axisymmetry}, i.e.\ the invariance under the action of $\Omega$, that is, under rotations about the $\vec{e}_3$ axis. We note that both $S$ and $\Omega$ are natural in the sense that they correspond to flat derivatives in spherical coordinates $(\rho,\theta,\phi)$:
\begin{equation*}
\Omega=\partial_\theta,\qquad S=\rho\partial_\rho.
\end{equation*}
In particular, they commute, and they both behave well under the Fourier transform: we have
\begin{equation}
\widehat{Sf}=-3\hat{f}-S\hat{f}=S^*\hat{f},\qquad \widehat{\Omega f}=\Omega\hat{f}.
\end{equation}
In practice, we will thus be able to work equivalently with $\mathcal{F}^{-1}(V\hat{f})$ or $V f$, $V\in\{\Omega,S\}$ (since they differ by at most a multiple of $f$), and will henceforth ignore this distinction.
\subsection{Choice of unknowns and nonlinearity in axisymmetry}\label{ssec:unknownsetc}
To motivate our choice of variables, we first observe that the linear part of \eqref{eq:EC},
\begin{equation}
\partial_t \bm{u}+\vec{e}_3\times \bm{u}+\nabla p=0, \quad {\mathrm{div }}\; \bm{u}=0,
\end{equation}
is \emph{dispersive}. Here $\Delta p={\mathrm{curl }}_\textnormal{h} \bm{u}:=\partial_{x^1}u^2-\partial_{x^2}u^1$, so using the divergence condition one sees directly that the linear system is equivalent to
\begin{equation}\label{eq:linIER}
\begin{aligned}
\partial_t{\mathrm{curl }}_\textnormal{h} \bm{u}-\partial_3\bm{u}^3&=0,\qquad \partial_t\bm{u}^3+\partial_3\Delta^{-1}{\mathrm{curl }}_\textnormal{h} \bm{u}=0.
\end{aligned}
\end{equation}
The dispersion relations $\pm i\Lambda(\xi)$ satisfy $(i\Lambda)^2=-\xi_3^2/\abs{\xi}^2$: indeed, combining the two equations in \eqref{eq:linIER} gives $\partial_t^2\,{\mathrm{curl }}_\textnormal{h} \bm{u}=-\partial_3^2\Delta^{-1}{\mathrm{curl }}_\textnormal{h} \bm{u}$, and the Fourier symbol of $-\partial_3^2\Delta^{-1}$ is $-\xi_3^2/\abs{\xi}^2$. We choose
\begin{equation}
\Lambda(\xi)=\frac{\xi_3}{\abs{\xi}}.
\end{equation}
We also use this notation to denote the associated differential operators, e.g.\ the real operator $i\Lambda=\partial_3\vert\nabla\vert^{-1}$.
\subsubsection{Scalar unknowns}
Due to the incompressibility condition, $\bm{u}$ has two scalar degrees of freedom.
To exploit this we will work with the (scalar) variables
\begin{equation}\label{eq:a-c-def}
A:=\abs{\nabla_\textnormal{h}}^{-1}{\mathrm{curl }}_\textnormal{h} \bm{u},\qquad C:=\abs{\nabla} \abs{\nabla_\textnormal{h}}^{-1}\bm{u}^3,\qquad\nabla_\textnormal{h}=(\partial_{x^1},\partial_{x^2},0),
\end{equation}
which are chosen such that the normalization \eqref{MainPropU} holds. Here $\bm{u}$ can be recovered from $(A,C)$ as
\begin{equation}\label{Formu0}
\bm{u}=\bm{u}_A+\bm{u}_C,
\end{equation}
where\footnote{We use the convention that repeated latin indices are summed over $1$--$2$ and repeated greek indices are summed over $1$--$3$.}
\begin{equation}\label{eq:Formu}
\begin{split}
\bm{u}_A&:=-\nabla_\textnormal{h}^\perp\abs{\nabla_\textnormal{h}}^{-1}A,\qquad \bm{u}_A^j=\in^{jk}\vert\nabla_\textnormal{h}\vert^{-1}\partial_k A,\\
\bm{u}_C&:= i\Lambda\nabla_\textnormal{h}\abs{\nabla_\textnormal{h}}^{-1}C+\sqrt{1-\Lambda^2}C\;\vec{e}_3,\qquad \bm{u}_C^j=i\Lambda\vert\nabla_\textnormal{h}\vert^{-1}\partial_j C,\quad \bm{u}_C^3=\sqrt{1-\Lambda^2}C,
\end{split}
\end{equation}
and for any vector field $V\in\{S,\Omega\}$ and any Fourier multiplier $m:\mathbb{R}^3\to\mathbb{R}$,
\begin{equation}\label{MainPropU}
\begin{aligned}
V\bm{u}^\alpha&=\bm{u}_{VA}^\alpha+\bm{u}_{VC}^\alpha,\qquad V\in\{S,\Omega\},\\
\norm{m\bm{u}}_{L^2}^2&=\norm{mA}_{L^2}^2+\norm{mC}_{L^2}^2=\norm{m\bm{u}_A}_{L^2}^2+\norm{m\bm{u}_C}_{L^2}^2.
\end{aligned}
\end{equation}
Using that
\begin{equation}
\Delta p=\vert\nabla_\textnormal{h}\vert A-{\mathrm{div }}\left[\bm{u}\cdot\nabla \bm{u}\right]=\vert\nabla_\textnormal{h}\vert A-\partial_\alpha\partial_\beta\left[\bm{u}^\alpha \bm{u}^\beta\right],
\end{equation}
we obtain that \eqref{eq:EC} is equivalent to
\begin{equation}\label{eq:EC2}
\begin{aligned}
\partial_tA-i\Lambda C&=-\vert\nabla_\textnormal{h}\vert^{-1}\partial_j\partial_n\in^{jk}\left[\bm{u}^n\bm{u}^k\right]-i\Lambda\vert\nabla\vert \in^{jk}\vert\nabla_\textnormal{h}\vert^{-1}\partial_j\left[\bm{u}^3\bm{u}^k\right],\\
\partial_tC-i\Lambda A&=-i\Lambda\vert\nabla\vert\sqrt{1-\Lambda^2}\left[\vert\nabla_\textnormal{h}\vert^{-2}\partial_j\partial_k\left[\bm{u}^j\bm{u}^k\right]+\bm{u}^3\bm{u}^3\right]-\vert\nabla\vert \vert\nabla_\textnormal{h}\vert^{-1}\partial_j(1-2\Lambda^2)\left[\bm{u}^j\bm{u}^3\right].
\end{aligned}
\end{equation}
Here the structure of the nonlinearity is apparent as a quasilinear, quadratic form in $A,C$ without singularities at low frequency.
\begin{remark}\label{rem:a-vs-swirl}
In the classical formulation of axisymmetric flows as $\bm{u}=\bm{u}_\theta \vec{e}_\theta+\bm{u}_r \vec{e}_r+\bm{u}_z \vec{e}_3$, where $(\vec{e}_\theta,\vec{e}_r,\vec{e}_3)$ are the basis vectors of a cylindrical coordinate system, one has that ${\mathrm{curl }}_\textnormal{h}\bm{u}=r^{-1}\partial_r(r\bm{u}_\theta)$. Our unknown $A$ is thus closely linked to the swirl $\bm{u}_\theta$ of $\bm{u}$: it satisfies $\abs{\nabla_\textnormal{h}}A=r^{-1}\partial_r(r\bm{u}_\theta)$.
In general $A$ will not vanish for the solutions we construct, and neither will their swirl.
\end{remark}
\subsubsection{On the role of axisymmetry}\label{sec:role-axisymm}
A particular family of solutions to \eqref{eq:EC} is given by a $3d$ system of $2d$ Euler equations: $\bm{u}(t,x_1,x_2,x_3)=(v_\textnormal{h}(t,x_1,x_2),w(t,x_1,x_2))$ satisfies \eqref{eq:EC} provided that $v_\textnormal{h}:\mathbb{R}\times\mathbb{R}^2\to\mathbb{R}^2$ and $w:\mathbb{R}\times\mathbb{R}^2\to\mathbb{R}$ solve
\begin{equation}
\partial_t v_\textnormal{h}+v_\textnormal{h}\cdot\nabla v_\textnormal{h} +(-v_2,v_1)^\intercal+\nabla q=0, \quad \partial_t w+v_\textnormal{h}\cdot\nabla w=0,\quad {\mathrm{div }}(v_\textnormal{h})=0.
\end{equation}
Since in $2d$ the rotation term $(-v_2,v_1)^\intercal$ is a gradient, it can be absorbed into the pressure, and thus $\bm{u}$ as above is a solution to the Euler-Coriolis system if $v_\textnormal{h}$ satisfies the $2d$ Euler equations, with $w$ passively advected by $v_\textnormal{h}$. While such solutions have infinite energy and are thus excluded from our functional setting on $\mathbb{R}^3$, they have been shown in \cite{BMN1997,Gre1997} to be of leading order on a (generic) torus $\mathbb{T}^3$ with sufficiently fast rotation.
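The gradient structure of the rotation term invoked above is the standard stream-function observation; writing $\psi_s$ for a stream function (an auxiliary symbol introduced only for this sketch), divergence-freeness gives $v_\textnormal{h}=\nabla^\perp\psi_s=(-\partial_2\psi_s,\partial_1\psi_s)$, and hence

```latex
(-v_2,v_1)^\intercal=(-\partial_1\psi_s,-\partial_2\psi_s)^\intercal=-\nabla\psi_s,
```

so that the rotation term is absorbed by the modified pressure $q-\psi_s$.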
In the setting of $\mathbb{R}^3$ one also encounters the $2d$ Euler equations through a \emph{resonant subsystem}: substituting $\bm{u}$ in terms of $A,C$ as in \eqref{eq:Formu}, one sees that \eqref{eq:EC2} is of the form
\begin{equation}\label{eq:EC-fullstructure}
\begin{aligned}
\partial_tA-i\Lambda C&=Q_{null}^A(A,C)+Q^A_\Omega(A,C)+Q^A_E(A,C),\\
\partial_tC-i\Lambda A&= Q_{null}^C(A,C)+Q^C_\Omega(A,C)+Q^C_E(A,C),
\end{aligned}
\end{equation}
where for $W\in\{A,C\}$ the quadratic terms
\begin{itemize}
\item $Q_{null}^W(A,C)$ contain a favorable null type structure (discussed in detail in Section \ref{sec:axisymm} below),
\item $Q^W_\Omega(A,C)$ contain a rotational product structure,
\item $Q^W_E(A,C)$ contain the $2d$ Euler equations in the following sense: near $\Lambda=0$ their contribution to $A$ is
\begin{equation}
\begin{aligned}
\partial_t\widetilde{A}+(\nabla_\textnormal{h}^\perp\vert\nabla_\textnormal{h}\vert^{-2} \widetilde{A})\cdot \nabla_\textnormal{h}\widetilde{A}=l.o.t.,\qquad \widetilde{A}:=\abs{\nabla_\textnormal{h}}A={\mathrm{curl }}_\textnormal{h}(\bm{u}),
\end{aligned}
\end{equation}
in which one recognizes the $2d$ Euler equations in vorticity formulation for $\widetilde{A}$, while $C$ is passively transported by $A$.
\end{itemize}
In terms of the nonlinear structure, the crucial observation for our purposes is that $Q^W_E$ \emph{vanishes on axisymmetric functions}, so that in our setting we do not have to contend with possible fast norm growth due to $2d$ Euler-type nonlinear interactions in \eqref{eq:EC-fullstructure}. Moreover, it turns out that $Q^W_\Omega$ also vanishes under axisymmetry, but this is less important for our analysis.
\begin{remark}
\begin{enumerate}[wide]
\item The assumption of axisymmetry brings some further simplifications (see e.g.\ Lemma \ref{lem:VFcross}), but those are less vital for our arguments.
\item Although all our functions (including the localizations) are axisymmetric \emph{in their arguments} and we have that
\begin{equation}
\Omega f=0\quad\text{if } f\text{ is axisymmetric,}
\end{equation}
the vector field $\Omega$ still plays an important role, since it does not vanish on expressions of several arguments, such as the phase functions $\Phi$ (see e.g.\ Lemma \ref{lem:vfsizes-mini}).
\end{enumerate}
\end{remark}
\subsubsection{The equations in axisymmetry}\label{sec:axisymm}
In order to properly describe the structure of the nonlinearity in \eqref{eq:EC2} for axisymmetric solutions, we introduce the following collection of zero-homogeneous symbols:
\begin{equation}\label{eq:barE}
\bar{E}:=\left\{\Lambda(\zeta),\sqrt{1-\Lambda^2(\zeta)},\frac{\xi_{\textnormal{h}}\cdot\theta_{\textnormal{h}}}{\abs{\xi_{\textnormal{h}}}\abs{\theta_{\textnormal{h}}}},\frac{\xi_{\textnormal{h}}^\perp\cdot\theta_{\textnormal{h}}}{\abs{\xi_{\textnormal{h}}}\abs{\theta_{\textnormal{h}}}}\,:\,\zeta\in\{\xi,\xi-\eta,\eta\},\theta\in\{\xi-\eta,\eta\}\right\}.
\end{equation}
With the standard notation
\begin{equation*}
\mathcal{F}(Q_{\mathfrak{n}}(f,g))(\xi)=\int_\eta \mathfrak{n}(\xi,\eta)\hat{f}(\xi-\eta)\hat{g}(\eta)\,d\eta
\end{equation*}
for quadratic expressions with multiplier $\mathfrak{n}$, we have the following result (see also \cite[Lemma 2.1]{rotE}):
\begin{lemma}\label{lem:ECmult}
Let $\bm{u}$ be an axisymmetric solution to \eqref{eq:EC} on a time interval $[0,T]$, so that $A,C$ as defined in \eqref{eq:a-c-def} are axisymmetric functions that solve \eqref{eq:EC2}. Then the \emph{dispersive unknowns}
\begin{equation}
U_+:=A+C,\quad U_-:=A-C,
\end{equation}
satisfy the equations
\begin{equation}\label{eq:EC_disp}
\begin{aligned}
(\partial_t-i\Lambda)U_+&=Q_{\mathfrak{n}^{++}_+}(U_+,U_+)+Q_{\mathfrak{n}^{+-}_+}(U_+,U_-)+Q_{\mathfrak{n}^{--}_+}(U_-,U_-),\\
(\partial_t+i\Lambda)U_-&=Q_{\mathfrak{n}^{++}_-}(U_+,U_+)+Q_{\mathfrak{n}^{+-}_-}(U_+,U_-)+Q_{\mathfrak{n}^{--}_-}(U_-,U_-),
\end{aligned}
\end{equation}
with multipliers satisfying
\begin{equation}
\begin{aligned}
\mathfrak{n}^{\mu\nu}_\kappa(\xi,\eta)&=\abs{\xi}\bar{\mathfrak{n}}^{\mu\nu}_\kappa(\xi,\eta),\qquad\kappa,\mu,\nu\in\{+,-\},\\
\bar{\mathfrak{n}}^{\mu\nu}_\kappa&\in\textnormal{span}_\mathbb{R}\left\{\Lambda(\zeta_1)\sqrt{1-\Lambda^2(\zeta_2)}\cdot \bar{\mathfrak{n}}(\xi,\eta)\,:\,\zeta_1,\zeta_2\in\{\xi,\xi-\eta,\eta\},\ \bar{\mathfrak{n}}\in\bar{E}\right\}.
\end{aligned}
\end{equation}
\end{lemma}
In words: in the axisymmetric case, in the dispersive variables $U_\pm$, the symbols of the quadratic, quasilinear nonlinearity of \eqref{eq:EC_disp} contain a derivative $\abs{\nabla}$ and factors $\Lambda(\zeta_1)\sqrt{1-\Lambda^2(\zeta_2)}$ for some $\zeta_1,\zeta_2\in\{\xi,\xi-\eta,\eta\}$. We shall make frequent use of this null type structure in our nonlinear estimates -- a quantified version of it may be found in Lemma \ref{lem:ECmult_bds} below.
\begin{proof}[Proof of Lemma \ref{lem:ECmult}]
This has been established in our prior work \cite[Lemma 2.1]{rotE}.
\end{proof}
\subsubsection{Profiles and bilinear expressions}\label{ssec:profiles}
Introducing the \emph{profiles} $\mathcal{U}_\pm$ of the dispersive unknowns $U_\pm$ as
\begin{equation}\label{eq:def_profiles}
\mathcal{U}_+(t):=e^{-it\Lambda}U_+(t),\qquad \mathcal{U}_-(t):=e^{it\Lambda}U_-(t),
\end{equation}
we can express \eqref{eq:EC_disp} in terms of $\mathcal{U}_\pm$ and see that the bilinear terms are of the form
\begin{equation}\label{eq:def_Qm}
\mathcal{Q}_\mathfrak{m}(\mathcal{U}_\mu,\mathcal{U}_\nu)(s):=\mathcal{F}^{-1}\left(\int_{\mathbb{R}^3}e^{\pm is\Phi_{\mu\nu}(\xi,\eta)}\mathfrak{m}(\xi,\eta)\widehat{\mathcal{U}_\mu}(s,\xi-\eta)\widehat{\mathcal{U}_\nu}(s,\eta)\,d\eta\right),\qquad \mu,\nu\in\{-,+\},
\end{equation}
for a phase function
\begin{equation}\label{eq:def_phi}
\Phi_{\mu\nu}(\xi,\eta):=\Lambda(\xi)+\mu\Lambda(\xi-\eta)+\nu\Lambda(\eta),\qquad \mu,\nu\in\{-,+\},
\end{equation}
and
$\mathfrak{m}$ one of the multipliers $\mathfrak{n}_\kappa^{\mu\nu}$ of Lemma \ref{lem:ECmult}. By Duhamel's formula we thus have from \eqref{eq:EC_disp} that
\begin{equation}\label{eq:EC_disp_Duham}
\begin{aligned}
\mathcal{U}_+(t)&=\mathcal{U}_+(0)+\int_0^t \left[\mathcal{Q}_{\mathfrak{n}^{++}_+}(\mathcal{U}_+,\mathcal{U}_+)+\mathcal{Q}_{\mathfrak{n}^{+-}_+}(\mathcal{U}_+,\mathcal{U}_-)+\mathcal{Q}_{\mathfrak{n}^{--}_+}(\mathcal{U}_-,\mathcal{U}_-)\right](s)\, ds,\\
\mathcal{U}_-(t)&=\mathcal{U}_-(0)+\int_0^t \left[\mathcal{Q}_{\mathfrak{n}^{++}_-}(\mathcal{U}_+,\mathcal{U}_+)+\mathcal{Q}_{\mathfrak{n}^{+-}_-}(\mathcal{U}_+,\mathcal{U}_-)+\mathcal{Q}_{\mathfrak{n}^{--}_-}(\mathcal{U}_-,\mathcal{U}_-)\right](s)\, ds.
\end{aligned}
\end{equation}
Defining for a multiplier $\mathfrak{n}$ the bilinear expression
\begin{equation}
B_\mathfrak{n}(f,g)(t):= \int_0^t \mathcal{Q}_\mathfrak{n}(f,g)(s)\, ds,
\end{equation}
we may thus write \eqref{eq:EC_disp_Duham} compactly as
\begin{equation}\label{eq:EC-Duhamel}
\mathcal{U}_\pm(t)=\mathcal{U}_\pm(0)+\sum_{\mu,\nu\in\{+,-\}}B_{\mathfrak{n}_\pm^{\mu\nu}}(\mathcal{U}_\mu,\mathcal{U}_\nu)(t).
\end{equation}
We will use this expression as the basis for our bootstrap arguments.
\section{Functional framework and main result}\label{sec:mainresult}
We begin with a discussion of some necessary background in Sections \ref{ssec:loc} and \ref{ssec:angLP}, in order to make our statement in Theorem \ref{MainThm} more precise -- see Section \ref{ssec:mainresult}.
\subsection{Localizations}\label{ssec:loc}
Let $\psi\in C^\infty(\mathbb{R},[0,1])$ be a radial, non-increasing bump function supported in $[-\frac{8}{5},\frac{8}{5}]$ with $\psi|_{[-\frac{4}{5},\frac{4}{5}]}\equiv1$, and set $\varphi(x):=\psi(x)-\psi(2x)$. For $a\in\mathbb{Z}$ and $b,c\in\mathbb{Z}^-$ we use the notations
\begin{equation}\label{eq:loc_def2}
\varphi_{a,b}(\zeta):=\varphi(2^{-a}\abs{\zeta})\varphi(2^{-b}\sqrt{1-\Lambda^2(\zeta)}), \qquad \varphi_{a,b,c}(\zeta):=\varphi_{a,b}(\zeta)\varphi(2^{-c}\Lambda(\zeta)),
\end{equation}
and will generically denote by $\widetilde{\varphi}$ a function with similar support properties as $\varphi$, and analogously for $\widetilde{\varphi}_{a,b}$ and $\widetilde{\varphi}_{a,b,c}$. We define the associated Littlewood-Paley projections $P_{k_j,p_j}$ and $P_{k_j,p_j,q_j}$ as
\begin{equation}
\mathcal{F}(P_{k_j,p_j}f)(\xi)=\varphi_{k_j,p_j}(\xi)\hat{f}(\xi),\qquad \mathcal{F}(P_{k_j,p_j,q_j}f)(\xi)=\varphi_{k_j,p_j,q_j}(\xi)\hat{f}(\xi),
\end{equation}
and remark that these projections are bounded on $L^r$, $1\le r\le \infty$. We note that $p,q$ are \emph{not independent} parameters -- on the support of $P_{k,p,q}$ there holds that $2^{2p+q}\simeq\min\{2^{2p},2^q\}$ and $2^{2p}+2^q\simeq 1$. In particular, there is a discrepancy between $p$ and $q$, in that the natural comparison of scales is between $2p$ and $q$ (rather than $p$ and $q$).
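For concreteness, the dyadic pieces $\varphi(2^{-j}\cdot)$ built from such a $\psi$ telescope to a partition of unity, $\sum_{j\in\mathbb{Z}}\varphi(2^{-j}x)=1$ for $x\neq 0$. This can be checked numerically with an explicit bump of the stated form; the concrete construction of $\psi$ from $e^{-1/t}$ cutoffs below is our own choice for illustration and is not taken from the paper:

```python
# Illustrative numerical check of the Littlewood-Paley partition of unity.
import math

def f(t):
    # smooth building block: 0 for t <= 0, strictly positive for t > 0
    return math.exp(-1.0 / t) if t > 0 else 0.0

def psi(x):
    # smooth, non-increasing in |x|, psi = 1 on [-4/5, 4/5], supp in [-8/5, 8/5]
    a, b = 4.0 / 5.0, 8.0 / 5.0
    t = abs(x)
    return f(b - t) / (f(b - t) + f(t - a))

def phi(x):
    return psi(x) - psi(2.0 * x)

def lp_sum(x, J=30):
    # telescoping: sum_{|j| <= J} phi(2^{-j} x) = psi(2^{-J} x) - psi(2^{J+1} x)
    return sum(phi(2.0 ** (-j) * x) for j in range(-J, J + 1))

for x in (0.01, 0.5, 3.0, 100.0):
    assert abs(lp_sum(x) - 1.0) < 1e-12
```

The same telescoping is what makes the frequency projections $P_k$ sum to the identity over $k\in\mathbb{Z}$.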
To collect the above localizations we will make use of the notation
\begin{equation}\label{eq:loc_def3}
\chi_{\mathrm{h}}(\xi,\eta):=\varphi_{k,p}(\xi)\varphi_{k_1,p_1}(\xi-\eta)\varphi_{k_2,p_2}(\eta),\qquad \chi(\xi,\eta):=\varphi_{k,p,q}(\xi)\varphi_{k_1,p_1,q_1}(\xi-\eta)\varphi_{k_2,p_2,q_2}(\eta),
\end{equation}
and write
\begin{equation}
w_{\max}:=\max\{w,w_1,w_2\},\quad w_{\min}:=\min\{w,w_1,w_2\},\qquad w\in\{k,p,q\}.
\end{equation}
\subsection{Angular Littlewood-Paley decomposition}\label{ssec:angLP}
We now introduce angular \blue{regularity} localizations via associated Littlewood-Paley type projectors. \blue{Due to axial symmetry, these can be constructed based on the spectral decomposition\footnote{\blue{That this controls regularity in $\Upsilon$ can be seen from \eqref{eq:R-Bernstein} and \eqref{eq:UpsOmegas} below.}} of the Laplacian on $\mathbb{S}^2$, $\Delta_{\mathbb{S}^2}=\Upsilon^2$.}
Let $N=(0,0,1)\in\mathbb{R}^3$ denote the north pole of the standard 2-sphere $\mathbb{S}^2$ and let $\mathcal{Z}_n(P)=\mathfrak{Z}_n(\langle P,N\rangle)$ denote the $n$-th zonal spherical harmonic, given explicitly via the Legendre polynomial $L_n$ by
\begin{equation}
\mathfrak{Z}_n(x)=\frac{2n+1}{4\pi}L_n(x),\qquad L_n(z)=\frac{1}{2^n(n!)}\frac{d^n}{dz^n}[(z^2-1)^n].
\end{equation}
Using this, for $\ell\in\mathbb{Z}$ we define the ``angular Littlewood-Paley projectors'' $\bar{R}_{\leq \ell}, \bar{R}_\ell$ by
\begin{equation}
\begin{split}
\left(\bar{R}_{\le \ell}f\right)(x)&=\sum_{n\ge 0}\psi(2^{-\ell}n)\int_{\mathbb{S}^2}f(\vert x\vert\vartheta)\mathfrak{Z}_n(\langle \vartheta,\frac{x}{\vert x\vert}\rangle)d\nu_{\mathbb{S}^2}(\vartheta),\\
\left(\bar{R}_\ell f\right)(x)&=\sum_{n\ge 0}\varphi(2^{-\ell}n)\int_{\mathbb{S}^2}f(\vert x\vert\vartheta)\mathfrak{Z}_n(\langle \vartheta,\frac{x}{\vert x\vert}\rangle)d\nu_{\mathbb{S}^2}(\vartheta),
\end{split}
\end{equation}
where $d\nu_{\mathbb{S}^2}$ denotes the standard measure on the sphere (so that $\nu_{\mathbb{S}^2}(\mathbb{S}^2)=4\pi$). These operators are bounded on $L^2$ and self-adjoint; their key properties parallel those of standard Littlewood-Paley projectors:
\begin{proposition}\label{prop:LPOmega}
For any $\ell\in\mathbb{Z}$, the angular Littlewood-Paley projectors $\bar{R}_\ell$ satisfy:
\begin{enumerate}[label=(\roman*),wide]
\item\label{it:LPang-comm} $\bar{R}_\ell$ commutes with regular Littlewood-Paley projectors, both in space and in frequency. Besides, $\bar{R}_\ell$ commutes with the vector fields $\Omega_{ab}=x_a\partial_{x_b}-x_b\partial_{x_a}$ ($a,b\in\{1,2,3\}$), with $S$, and with the Fourier transform:
\begin{equation*}
\begin{split}
[\Omega_{ab},\bar{R}_\ell]=[S,\bar{R}_\ell]=[\bar{R}_\ell,P_{k}]=[\bar{R}_\ell,\mathcal{F}]=0.
\end{split}
\end{equation*}
\item\label{it:LPang-orth} The $\bar{R}_\ell$ constitute an almost orthogonal partition of unity in the sense that
\begin{equation*}
\begin{split}
f=\sum_{\ell\ge0}\bar{R}_\ell f,&\qquad \bar{R}_\ell\bar{R}_{\ell'}=0\quad\hbox{whenever}\quad \vert \ell-\ell'\vert\ge4,\qquad \Vert f\Vert_{L^2}^2\simeq\sum_{\ell\ge 0}\Vert \bar{R}_\ell f\Vert_{L^2}^2,\\
\bar{R}_\ell\left[\bar{R}_{\ell_1}f\cdot \bar{R}_{\ell_2}g\right]&=0\quad\hbox{whenever}\quad\max\{\ell,\ell_1,\ell_2\}\ge\textnormal{med}\{\ell,\ell_1,\ell_2\}+4.
\end{split}
\end{equation*}
\item\label{it:LPang-bd} $\bar{R}_\ell$ and $\bar{R}_{\le \ell}$ are bounded on $L^r$, $1\le r\le\infty$:
\begin{equation}
\Vert \bar{R}_\ell f\Vert_{L^r}+\Vert \bar{R}_{\le \ell}f\Vert_{L^r}\lesssim \Vert f\Vert_{L^r}.
\end{equation}
\item\label{it:LPang-Bern} We have a Bernstein property: there holds that
\begin{equation}\label{eq:R-Bernstein}
\begin{split}
\sum_{1\leq a<b\leq 3}\Vert \Omega_{ab}\bar{R}_\ell f\Vert_{L^r}\simeq 2^\ell\Vert \bar{R}_\ell f\Vert_{L^r},\qquad 1\le r\le\infty.
\end{split}
\end{equation}
\end{enumerate}
\end{proposition}
We refer the reader to Appendix \ref{apdx:angLP} for the proof of this proposition. It is important to understand the interplay between the $\bar{R}_\ell$ and $P_{k,p}$ localizations. By direct computation we have that
\begin{equation}\label{eq:comm-angLP-p}
\norm{[\Omega_{j3},P_{k,p}]}_{L^r\to L^r}\lesssim2^{-p}, \qquad j=1,2,\quad 1\leq r\leq \infty.
\end{equation}
In particular, for the localization $P_{k,p}\bar{R}_\ell$ (in both horizontal frequency $p$ and ``angular frequency'' $\ell$) this shows that we should not go below the scale $\ell<-p$, since there the projections do not commute (up to lower order terms). In practice we will thus work with projectors that incorporate this ``uncertainty principle'' $\ell+p\geq 0$: for $p\in\mathbb{Z}^{-}$, $\ell\in\mathbb{Z}^{+}$, we introduce the operators
\begin{equation}
\begin{aligned}
R^{(p)}_\ell:=
\begin{cases}
0,&p+\ell< 0,\\
\bar{R}_{\le\ell},&p+\ell= 0,\\
\bar{R}_{\ell},&p+\ell> 0.
\end{cases}
\end{aligned}
\end{equation}
\subsubsection*{Convention:}
For simplicity of notation we shall henceforth \emph{drop the superscript $(p)$} on $R_\ell$, i.e.\
\begin{equation}
R_\ell\equiv R_\ell^{(p)},
\end{equation}
since the corresponding localization in $p$ will always be clear from the context. Clearly, key features of Proposition \ref{prop:LPOmega} transfer to $R_\ell$: for example, we have the decomposition
\begin{equation}
P_{k}f=\sum_{\ell+p\geq 0}P_{k,p}R_{\ell}^{(p)}f\equiv \sum_{\ell+p\geq 0}P_{k,p}R_{\ell}f.
\end{equation}
\begin{remark}\label{rem:comm-loc}
One checks that
\begin{equation}\label{eq:comm-angLP-q}
\norm{[\Omega_{j3},P_{k,p,q}]}_{L^r\to L^r}\lesssim 2^{-p-q}, \qquad j=1,2,\quad 1\leq r\leq \infty.
\end{equation}
Since $q$ plays a similar role as $2p$ in terms of scales, it does not seem advantageous to localize in $p,\ell$ and additionally in $q$ all at once. Rather, we will typically first work with localizations only in $p$ and $\ell$, and introduce localizations in $q$ only once the other parameters are under control.
\end{remark}
\subsection{Main Result}\label{ssec:mainresult}
With the notations $k^+:=\max\{0,k\}$, $k^-:=\min\{k,0\}$ and for $\beta>0$ to be chosen, we introduce \blue{now our key norms, both weighted and $L^2$ based, so as to allow for a Fourier analysis based approach:}
\begin{align}
\norm{f}_B&:=\sup_{k\in\mathbb{Z},\, p,q\in\mathbb{Z}^-}2^{3k^+-\frac{1}{2}k^{-}}2^{-p-\frac{q}{2}}\norm{P_{k,p,q}f}_{L^2},\label{eq:defBnorm}\\
\norm{f}_X&:=\sup_{\substack{k\in\mathbb{Z},\,\ell\in\mathbb{Z}^+\!\!,\,p\in\mathbb{Z}^-\\\ell+p\geq 0}}2^{3k^+} 2^{(1+\beta)\ell}2^{\beta p}\norm{ P_{k,p}R_\ell f}_{L^2}.\label{eq:defXnorm}
\end{align}
\blue{As discussed in the introduction on page \pageref{it:normchoice}, these norms play complementary roles. Through appropriate weighting of the anisotropic resp.\ angular Littlewood-Paley projectors, the $B$ norm captures anisotropic localization and scales like the Fourier transform in $L^\infty$,\footnote{\blue{Such a scaling may also be motivated by the fact that the stationary phase arguments that yield linear decay can only be optimal if one controls the Fourier transform in $L^\infty$. This control is indeed given by a \emph{combination} of $B$ and $X$ norms including some vector fields $S$, as we show in Lemma \ref{lem:ControlLinfty} -- we refer to its proof for a further demonstration of the different roles in terms of angular regularity of the two norms in \eqref{eq:defBnorm}, \eqref{eq:defXnorm}.}} while the $X$ norm accounts for angular derivatives in $\Upsilon$. The additional weights in terms of the frequency size $k$ are designed to capture the sharp decay at the linear level, and also allow us to overcome the derivative loss inherent in the nonlinearity.
In particular the large power of $k^+$ ensures that we have
\begin{equation*}
\Vert \nabla R_\ell f\Vert_{L^\infty}\lesssim 2^{-\ell}\Vert f\Vert_X,\qquad\Vert\nabla P_{k,p,q}f\Vert_{L^\infty}\lesssim 2^{p+q/2}\Vert f\Vert_B.
\end{equation*}}
In detail, our main result from Theorem \ref{MainThm} can then be stated as the following global existence result for the Euler-Coriolis system \eqref{eq:EC}:
\begin{theorem}\label{thm:EC}
Let $N\geq 5$. There exist $M,N_0\in\mathbb{N}$, $\beta>0$ with $N_0\gg M\gg \beta^{-1}+N$, and $\varepsilon_\ast>0$ such that if $U_{\pm,0}$ satisfy
\begin{equation}\label{eq:id}
\begin{aligned}
\norm{U_{\pm,0}}_{H^{2N_0}\cap \dot{H}^{-1}}+\norm{S^aU_{\pm,0}}_{L^2\cap \dot{H}^{-1}}&\le\varepsilon_0,\qquad 0\le a\le M,\\
\norm{S^bU_{\pm,0}}_{B}+\norm{S^bU_{\pm,0}}_{X}&\le\varepsilon_0,\qquad 0\le b\le N,
\end{aligned}
\end{equation}
for some $0<\varepsilon_0<\varepsilon_\ast$, then there exists a unique global solution $U_\pm\in C(\mathbb{R},\mathbb{R}^3)$ to \eqref{eq:EC_disp}. Moreover, $U_\pm(t)$ decay and have (at most) slowly growing energy:
\begin{equation}
\norm{U_\pm(t)}_{L^\infty}\lesssim \varepsilon_0 \ip{t}^{-1},\qquad \norm{U_\pm(t)}_{H^{2N_0}}\lesssim \varepsilon_0 \ip{t}^{C\varepsilon_0}
\end{equation}
for some $C>0$, and in fact $U_\pm(t)$ scatters linearly.
\end{theorem}
\begin{remark}
In order to keep the essence of the arguments as clear as possible, we have not striven to optimize the number of vector fields and derivatives in the above result. As our arguments show, a choice of $\beta=10^{-2}$, $N_0=O(10^{7})$ and such that $N_0\gg M\gg \beta^{-1}$ works.
\end{remark}
Theorem \ref{thm:EC} is established through a bootstrap argument, \blue{which we discuss next}. We will show the following result, which implies all of Theorem \ref{thm:EC} except for the scattering statement:
\begin{proposition}\label{prop:btstrap}
Let $U_\pm\in C([0,T],\mathbb{R}^3)$ be solutions to \eqref{eq:EC_disp} for some $T>0$ with profiles $\mathcal{U}_{\pm}:=e^{\mp it\Lambda}U_{\pm}$, and initial data satisfying \eqref{eq:id}. If for $t\in [0,T]$ there holds that
\begin{equation}\label{eq:btstrap-assump}
\begin{aligned}
\norm{S^b\mathcal{U}_\pm(t)}_{B}+\norm{S^b\mathcal{U}_\pm(t)}_{X}&\leq \varepsilon_1,\qquad 0\le b\le N,
\end{aligned}
\end{equation}
for some $0<\varepsilon_1\le\sqrt{\varepsilon_0}$, then in fact we have the improved bounds
\begin{align}
\norm{S^b\mathcal{U}_\pm(t)}_{B}+\norm{S^b\mathcal{U}_\pm(t)}_{X}&\lesssim\varepsilon_0+\varepsilon_1^2,\qquad 0\le b\le N, \label{eq:btstrap-concl1}
\end{align}
and for some $C>0$ there holds that
\begin{equation}\label{eq:btstrap-concl2}
\norm{\mathcal{U}_\pm(t)}_{H^{2N_0}\cap H^{-1}}+\norm{S^a\mathcal{U}_\pm(t)}_{L^2\cap H^{-1}}\lesssim\varepsilon_0\ip{t}^{C\varepsilon_1},\qquad 0\le a\le M.
\end{equation}
\end{proposition}
Finally, the linear scattering in $L^2$ is a direct consequence of the fast decay of $\partial_t\mathcal{U}_\pm(t)$, and in fact more is true:
\begin{corollary}\label{cor:scatter}
Let $U_\pm(t)$ be global solutions to \eqref{eq:EC_disp}, constructed via the bootstrap in Proposition \ref{prop:btstrap}, that in particular satisfy the bounds \eqref{eq:btstrap-concl1} and \eqref{eq:btstrap-concl2}.
There exist $\mathcal{U}_{\pm}^\infty\in B\cap X$ such that
\begin{equation}\label{eq:scatter}
\norm{e^{\mp it\Lambda}U_{\pm}(t)-\mathcal{U}_{\pm}^{\infty}}_{B\cap X}\to 0,\qquad t\to +\infty.
\end{equation}
\end{corollary}
\begin{proof}
By Lemma \ref{lem:dtfinL2} we have that $\norm{\partial_t \mathcal{U}_\pm(t)}_{L^2}\lesssim \ip{t}^{-\frac{5}{4}}$, and hence $\mathcal{U}_\pm(t)$ are $L^2$ Cauchy sequences (in time) and converge to $\mathcal{U}_{\pm}^\infty$ as $t\to\infty$. Similarly, by Proposition \ref{prop:Bnorm} resp.\ Propositions \ref{prop:Xnorm1} and \ref{prop:Xnorm2} we have that $\mathcal{U}_\pm(t)$ are Cauchy sequences in the $B$ resp.\ $X$ norm.
\end{proof}
\blue{We outline next the strategy of proof of Proposition \ref{prop:btstrap}. In particular, we show how control of the nonlinearity can be obtained through a reduction to several bilinear estimates, which are at the heart of the rest of this article.}
\begin{proof}[Proof of Proposition \ref{prop:btstrap}]
We note that under the assumptions \eqref{eq:btstrap-assump} it follows from the linear decay estimates in Proposition \ref{prop:decay} that
\begin{equation}\label{eq:decay_cons}
\norm{S^aU_\pm(t)}_{L^\infty}\lesssim \varepsilon_1 \ip{t}^{-1},\quad 0\leq a\leq N-3,
\end{equation}
and thus the slow growth of the energy and vector field norms \eqref{eq:btstrap-concl2} follows from a standard energy estimate for the system \eqref{eq:EC} -- see Corollary \ref{cor:energy}. We note further that by interpolation we also have bounds for up to $N$ vector fields in $H^{N_0}$: with Lemma \ref{lem:interpol} and \eqref{eq:btstrap-concl2} there holds for $b\leq N$ that
\begin{equation}\label{eq:interpol_energy}
\norm{S^b U_\pm(t)}_{H^{N_0}}\lesssim \varepsilon_0 \ip{t}^{C\varepsilon_1}.
\end{equation}
The key point is thus to establish \eqref{eq:btstrap-concl1}. We proceed as follows:
\subsubsection*{Reduction}
From the Duhamel formula \eqref{eq:EC-Duhamel} for the profiles $\mathcal{U}_\pm$ we have that
\begin{equation}
\norm{S^b\mathcal{U}_\pm(t)}_{B\cap X}\leq \norm{S^b\mathcal{U}_\pm(0)}_{B\cap X}+\!\!\!\sum_{\mu,\nu\in\{+,-\}}\norm{S^bB_{\mathfrak{n}_\pm^{\mu\nu}}(\mathcal{U}_\mu,\mathcal{U}_\nu)}_{B\cap X},
\end{equation}
hence to prove \eqref{eq:btstrap-concl1} it suffices to show that under the bootstrap assumptions \eqref{eq:btstrap-assump}, for any multiplier $\mathfrak{m}=\mathfrak{n}_\pm^{\mu,\nu}$ as in Lemma \ref{lem:ECmult} there holds that
\begin{equation}
\norm{S^bB_{\mathfrak{m}}(\mathcal{U}_\mu,\mathcal{U}_\nu)}_B+\norm{S^bB_{\mathfrak{m}}(\mathcal{U}_\mu,\mathcal{U}_\nu)}_X\lesssim \varepsilon_1^2,\qquad 0\leq b\leq N.
\end{equation}
Since $S$ generates a symmetry of the equation, its application to a bilinear term $B_\mathfrak{m}$ yields a favorable structure: with $S_\xi\Lambda(\xi-\eta)=-S_\eta\Lambda(\xi-\eta)$ and $S\Lambda=0$ there holds that $(S_\xi+S_\eta)\Phi_{\mu\nu}=0$, and one computes directly\footnote{Note that $S_\xi+S_\eta$ vanishes on the elements of $\bar{E}$ from \eqref{eq:barE}, and $S_\xi\abs{\xi}=\abs{\xi}$ (alternatively, see \cite[Lemma A.6]{rotE}).} that $(S_\xi+S_\eta)\mathfrak{m}=\mathfrak{m}$, so that from integration by parts we deduce that
\begin{equation}
\begin{aligned}
S_\xi\mathcal{F}\left(\mathcal{Q}_\mathfrak{m}(\mathcal{U}_\mu,\mathcal{U}_\nu)\right)(\xi)&=S_\xi\int_{\mathbb{R}^3}e^{\pm is\Phi_{\mu\nu}(\xi,\eta)}\mathfrak{m}(\xi,\eta)\widehat{\mathcal{U}_\mu}(s,\xi-\eta)\widehat{\mathcal{U}_\nu}(s,\eta)d\eta\\
&=\int_{\mathbb{R}^3}\left((S_\xi+S_\eta)e^{\pm is\Phi_{\mu\nu}(\xi,\eta)}\right)\mathfrak{m}(\xi,\eta)\widehat{\mathcal{U}_\mu}(s,\xi-\eta)\widehat{\mathcal{U}_\nu}(s,\eta)d\eta\\
&\qquad+ \int_{\mathbb{R}^3}e^{\pm is\Phi_{\mu\nu}(\xi,\eta)}(S_\xi+S_\eta)\left(\mathfrak{m}(\xi,\eta)\widehat{\mathcal{U}_\mu}(s,\xi-\eta)\widehat{\mathcal{U}_\nu}(s,\eta)\right)d\eta\\
&\qquad +3\int_{\mathbb{R}^3}e^{\pm is\Phi_{\mu\nu}(\xi,\eta)}\mathfrak{m}(\xi,\eta)\widehat{\mathcal{U}_\mu}(s,\xi-\eta)\widehat{\mathcal{U}_\nu}(s,\eta)d\eta\\
&=\mathcal{F}\left(-2\,\mathcal{Q}_\mathfrak{m}(\mathcal{U}_\mu,\mathcal{U}_\nu)+\mathcal{Q}_\mathfrak{m}(S\mathcal{U}_\mu,\mathcal{U}_\nu)+\mathcal{Q}_\mathfrak{m}(\mathcal{U}_\mu,S\mathcal{U}_\nu)\right)(\xi).
\end{aligned}
\end{equation}
It thus suffices to show that for $\mu,\nu\in\{+,-\}$ and $b_1,b_2\geq 0$ there holds that
\begin{equation}\label{eq:btstrap-concl1.1}
\norm{B_{\mathfrak{m}}(S^{b_1}\mathcal{U}_\mu,S^{b_2}\mathcal{U}_\nu)}_B+\norm{B_{\mathfrak{m}}(S^{b_1}\mathcal{U}_\mu,S^{b_2}\mathcal{U}_\nu)}_X\lesssim \varepsilon_1^2,\qquad b_1+b_2\leq N.
\end{equation}
\subsubsection*{Bilinear estimates}
To prove \eqref{eq:btstrap-concl1.1} it is convenient to localize the time variable. For $t\in[0,T]$ we choose a decomposition of the indicator function $\mathbf{1}_{[0,t]}$ by functions $\tau_0,\ldots,\tau_{L+1}:\mathbb{R} \to [0,1]$, $\abs{L-\log_2(2+t)}\leq 2$, satisfying
\begin{equation}
\begin{aligned}
& \mathrm{supp} \,\tau_0 \subseteq [0,2], \quad \mathrm{supp} \,\tau_{L+1}\subseteq [t-2,t], \quad \mathrm{supp}\,\tau_m\subseteq [2^{m-1},2^{m+1}] \quad \text{for} \quad m \in \{1,\dots,L\}, \\
& \sum_{m=0}^{L+1}\tau_m(s) = \mathbf{1}_{[0,t]}(s), \quad \tau_m\in C^1(\mathbb{R}) \quad \text{and} \quad \int_0^t|\tau'_m(s)|\,ds\lesssim 1 \quad \text{for} \quad m\in \{1,\ldots,L\}.
\end{aligned}
\end{equation}
We can then decompose
\begin{equation}
B_\mathfrak{m}(F,G)=\sum_m B_\mathfrak{m}^{(m)}(F,G),\qquad B_\mathfrak{m}^{(m)}(F,G):=\int_0^t\tau_m(s)\,\mathcal{Q}_\mathfrak{m}(F,G)(s)ds.
\end{equation}
For simplicity of the expressions we will not carry the superscript $(m)$, and instead generically write $\mathcal{B}_\mathfrak{m}$ for any of the time localized bilinear expressions $B_\mathfrak{m}^{(m)}$ above. After establishing the relevant background and methodology in Sections \ref{sec:lin_decay}--\ref{sec:dtfbds}, we prove \eqref{eq:btstrap-concl1.1} by establishing for some $\delta>0$ the stronger bounds
\begin{equation}\label{eq:btstrap-concl1.1-B}
\norm{\mathcal{B}_{\mathfrak{m}}(S^{b_1}\mathcal{U}_\mu,S^{b_2}\mathcal{U}_\nu)}_B\lesssim 2^{-\delta^3 m}\varepsilon_1^2,\qquad b_1+b_2\leq N,
\end{equation}
in Proposition \ref{prop:Bnorm}, and
\begin{equation}\label{eq:btstrap-concl1.1-X}
\norm{\mathcal{B}_{\mathfrak{m}}(S^{b_1}\mathcal{U}_\mu,S^{b_2}\mathcal{U}_\nu)}_X\lesssim 2^{-\delta^3 m}\varepsilon_1^2,\qquad b_1+b_2\leq N,
\end{equation}
in Propositions \ref{prop:Xnorm1} and \ref{prop:Xnorm2}. Explicitly we will choose
\begin{equation}\label{eq:def_delta}
\delta:=2M^{-\frac{1}{2}},
\end{equation}
and another relevant parameter of smallness will be
\begin{equation}\label{eq:def_nu}
\delta_0:=2N_0^{-1}.
\end{equation}
For technical reasons, it will be useful in the proofs to have the following hierarchy between $\delta_0$ and $\delta$, related to the sizes of $N_0, M$ in Theorem \ref{thm:EC}:
\begin{equation}\label{eq:nuvsdelta}
10\delta_0<\delta^2, \quad\textnormal{which is ensured by requiring}\quad N_0> 20 M.
\end{equation}
\end{proof}
\section{Linear decay}\label{sec:lin_decay}
Introducing the ``decay norm''
\begin{equation}\label{eq:defDnorm}
\norm{f}_D:=\sup_{0\leq a\leq 3} \left( \norm{S^a f}_B+\norm{S^a f}_X \right),
\end{equation}
we have the following decay result:
\begin{proposition}\label{prop:decay}
Let $f$ be axisymmetric and $t>0$. We can split
\begin{equation}
P_{k,p,q}e^{it\Lambda}f=I_{k,p,q}(f)+I\!I_{k,p,q}(f),
\end{equation}
where for any $0<\beta'<\beta$
\begin{equation}\label{eq:I-IIdecay}
\begin{aligned}
\norm{I_{k,p,q}(f)}_{L^\infty}&\lesssim 2^{\frac{3}{2}k-3k^+}\cdot \min\{2^{2p+q},2^{-p-\frac{q}{2}}t^{-\frac{3}{2}}\}\norm{f}_D,\\
\norm{I\!I_{k,p,q}(f)}_{L^2}&\lesssim 2^{-3k^+}t^{-1-\beta'}2^{(-1-2\beta')p}\cdot\mathbf{1}_{2^{2p+q}\gtrsim t^{-1}}\norm{f}_D.
\end{aligned}
\end{equation}
\end{proposition}
The proof gives a slightly finer decomposition and makes crucial use of the fact that the $D$ norm of a function bounds its Fourier transform in $L^\infty$ -- see Lemma \ref{lem:ControlLinfty}. We remark that the ideas and techniques underlying Proposition \ref{prop:decay} also apply in a general (i.e.\ non-axisymmetric) setting, where upon inclusion of sufficiently many powers of the rotation vector field $\Omega$ in the $D$ norm an analogous result can be established.
\begin{remark}\label{rem:decay-sum}
We note that the corresponding $L^\infty$ bound for $I\!I_{k,p,q}(f)$ reads
\begin{equation}\label{eq:IIdecayLinfty}
\begin{aligned}
\norm{I\!I_{k,p,q}(f)}_{L^\infty}&\lesssim 2^{\frac{3}{2}k-3k^+}t^{-1-\beta'}2^{-2\beta'p}\mathbf{1}_{2^{2p+q}\gtrsim t^{-1}} \norm{f}_D,
\end{aligned}
\end{equation}
and that we have summability in $p,q$:
\begin{equation}
\sum_{2^{2p+q}\gtrsim t^{-1}}\norm{I\!I_{k,p,q}(f)}_{L^2}\lesssim t^{-\frac{1}{2}}2^{-3k^+}\norm{f}_D, \qquad \sum_{2^{2p+q}\gtrsim t^{-1}}\norm{I\!I_{k,p,q}(f)}_{L^\infty}\lesssim 2^{\frac{3}{2}k-3k^+}t^{-1}\norm{f}_D.
\end{equation}
\end{remark}
Together with the above bound for $I_{k,p,q}(f)$, we thus conclude the following:
\begin{corollary}\label{cor:decay}
There holds that
\begin{equation}\label{eq:cor_decay}
\norm{P_k e^{it\Lambda}f}_{L^\infty}\lesssim \ip{t}^{-1}2^{\frac{3}{2}k-3k^+}\norm{f}_D.
\end{equation}
In particular, under the bootstrap assumptions \eqref{eq:btstrap-assump} there holds for a solution $\bm{u}$ to \eqref{eq:EC} that
\begin{equation}
\norm{\bm{u}(t)}_{L^\infty}+\norm{\nabla_x\bm{u}(t)}_{L^\infty}\lesssim \varepsilon_1\ip{t}^{-1}.
\end{equation}
\end{corollary}
We highlight that the decay rate $t^{-1}$ in \eqref{eq:cor_decay} is optimal: for radial $f\in L^2\cap C^0(\mathbb{R}^3)$ there holds that $e^{it\Lambda}f(0)=\frac{\sin t}{t}f(0)$. After a brief review of some geometric background in Section \ref{sec:TS}, we give the proof of these results in Section \ref{sec:decay_proof}.
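The optimality claim rests on the elementary computation that for radial $f$ the angular average of $e^{it\Lambda(\xi)}$ over a sphere equals $\frac{1}{2}\int_{-1}^{1}e^{it\Lambda}\,d\Lambda=\frac{\sin t}{t}$, which gives $e^{it\Lambda}f(0)=\frac{\sin t}{t}f(0)$. A small standalone numerical check of this identity (our own illustration, not part of the paper's source):

```python
# Check: (1/2) * integral_{-1}^{1} e^{i t L} dL = sin(t)/t, via composite Simpson.
import cmath
import math

def angular_average(t, n=20000):
    # n must be even for Simpson's rule
    h = 2.0 / n
    total = 0.0 + 0.0j
    for k in range(n + 1):
        L = -1.0 + k * h
        w = 1.0 if k in (0, n) else (4.0 if k % 2 == 1 else 2.0)
        total += w * cmath.exp(1j * t * L)
    return total * h / 3.0 / 2.0

for t in (0.5, 1.0, 5.0, 20.0):
    assert abs(angular_average(t) - math.sin(t) / t) < 1e-8
```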
\subsection{Spanning the tangent space}\label{sec:TS}
The vector fields $S,\Omega$ are related to spherical coordinates as follows: for $(\rho,\theta,\phi)\in[0,\infty)\times [0,2\pi)\times[0,\pi]$ we let
\begin{equation}
\xi=(\rho\cos\theta\sin\phi,\rho\sin\theta\sin\phi,\rho\cos\phi)=(\rho\cos\theta\sqrt{1-\Lambda^2},\rho\sin\theta\sqrt{1-\Lambda^2},\rho\Lambda),
\end{equation}
with
\begin{equation}
\Lambda=\cos\phi,\quad \sqrt{1-\Lambda^2}=\sin\phi,
\end{equation}
and have that
\begin{equation}
\begin{split}
\partial_\rho\xi&=\rho^{-1}\xi,\qquad \partial_\theta\xi=\xi_{\mathrm{h}}^\perp,\\
\partial_\phi\xi&=(\rho\cos\theta\cos\phi,\rho\sin\theta\cos\phi,-\rho\sin\phi)=\frac{\Lambda}{\sqrt{1-\Lambda^2}}\xi_{\mathrm{h}}-\sqrt{1-\Lambda^2}\rho\,\vec{e}_3,\\
\partial_\Lambda\xi&=-\frac{\Lambda}{\sqrt{1-\Lambda^2}}(\rho\cos\theta,\rho\sin\theta,0)+(0,0,\rho),
\end{split}
\end{equation}
so that $S$ is the radial scaling vector field and $\Omega$ the azimuthal angular derivative, i.e.\
\begin{equation}\label{eq:vfs_coords}
S_\xi=\xi\cdot\nabla_\xi=\rho\partial_\rho,\qquad \Omega_\xi=\xi_{\mathrm{h}}^\perp\cdot\nabla_\xi=\partial_\theta.
\end{equation}
To complement these to a full set of vector fields\footnote{\blue{While other choices of complementing vector field are possible, $\Upsilon$ seems to play a particularly favorable role with respect to the linear and nonlinear structure.
In the context of cylindrical symmetry, (a $0$-homogeneous version of) the vertical derivative $\partial_{\xi_3}$ would be another natural choice, but this leads to a degenerate coordinate system near the vertical axis (where $S=\xi_3\partial_{\xi_3}$) and complicates the nonlinear analysis.}} that spans the tangent space at a point in $\mathbb{R}^3$, we define the polar angular derivative by
\begin{equation}\label{eq:ups}
\begin{aligned}
\Upsilon_\xi:=\partial_\phi=-\sqrt{1-\Lambda^2}\partial_\Lambda&=\frac{\Lambda}{\sqrt{1-\Lambda^2}}(\xi_1\partial_{\xi_1}+\xi_2\partial_{\xi_2})-\vert\xi\vert\sqrt{1-\Lambda^2}\partial_{\xi_3}=\frac{1}{\sqrt{1-\Lambda^2}}\left[\Lambda S_\xi-\abs{\xi}\partial_{\xi_3}\right].
\end{aligned}
\end{equation}
In terms of the rotation vector fields $\Omega^x_{ab}=x_a\partial_{x_b}-x_b\partial_{x_a}$ introduced in the context of the angular Littlewood-Paley decomposition (Section \ref{ssec:angLP}), this can also be expressed as
\begin{equation}\label{eq:UpsOmegas}
\Upsilon_\xi=-\frac{\xi_1}{\abs{\xi_{\mathrm{h}}}}\Omega^\xi_{13}-\frac{\xi_2}{\abs{\xi_{\mathrm{h}}}}\Omega^\xi_{23}.
\end{equation}
\begin{figure}[h]
\centering
\includegraphics[width=0.4\textwidth]{3dcoords.pdf}
\caption{The vector fields $S,\Omega,\Upsilon$.}
\label{fig:3dcoords}
\end{figure}
\subsection{Proof of Proposition \ref{prop:decay}}\label{sec:decay_proof}
By scaling and rotation symmetry, we may assume that $k=0$ and ${\bf x}=(x,0,z)$ for some $x\ge0$. If $t2^{2p+q}\le C$, we simply use a crude integration to get
\begin{equation*}
\begin{split}
\vert I_{0,p,q}\vert\lesssim 2^{2p+q}\norm{f}_B.
\end{split}
\end{equation*}
Henceforth we will assume that $2^{-2p-q}\leq C^{-1}t$. We have that
\begin{equation}
P_{k,p,q}e^{it\Lambda}f({\bf x})=\int_{\mathbb{R}^3}e^{i\left[t\Lambda(\xi)+\langle {\bf x},\xi\rangle\right]}\widehat{P_{k,p,q}f}(\xi_1,\xi_2,\xi_3)d\xi.
\end{equation}
In spherical coordinates $\xi\mapsto(\rho,\Lambda,\theta)$, with
\begin{equation*}
d\xi=\rho^2\sin\phi\, d\theta d\phi d\rho=\rho^2d\theta d\Lambda d\rho,
\end{equation*}
upon integration in $\theta$ we thus need to consider the integral
\begin{equation*}
\begin{split}
I(x,z,t)&:=\int_{\mathbb{R}_+\times[-1,1]}e^{i\left[t\Lambda+\rho\Lambda z\right]}\varphi(2^{-p}\rho\sqrt{1-\Lambda^2})\varphi(2^{-q}\rho\Lambda)\cdot J_0(\rho\sqrt{1-\Lambda^2}x)\cdot \widehat{f}\,\rho^2\varphi(\rho)d\rho d\Lambda,
\end{split}
\end{equation*}
where $J_0$ denotes the Bessel function of order $0$.
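The $\theta$-integration producing $J_0$ uses the classical integral representation $J_0(a)=\frac{1}{2\pi}\int_0^{2\pi}e^{ia\cos\theta}\,d\theta$. A standalone numerical check (our own illustration; $J_0$ is evaluated here through its power series, which converges quickly for moderate $a$):

```python
# Check: (1/2 pi) * integral_0^{2 pi} e^{i a cos(theta)} d(theta) = J_0(a).
import cmath
import math

def j0_series(a, terms=40):
    # J_0(a) = sum_{m >= 0} (-1)^m (a/2)^{2m} / (m!)^2
    s, term = 0.0, 1.0
    for m in range(terms):
        s += term
        term *= -(a / 2.0) ** 2 / ((m + 1.0) ** 2)
    return s

def angular_integral(a, n=4000):
    # trapezoid rule; spectrally accurate for periodic integrands
    h = 2.0 * math.pi / n
    total = sum(cmath.exp(1j * a * math.cos(k * h)) for k in range(n))
    return total * h / (2.0 * math.pi)

for a in (0.0, 0.7, 2.0, 5.0):
    assert abs(angular_integral(a) - j0_series(a)) < 1e-10
```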
By standard results on Bessel functions (see e.g.\ \cite[page 338]{Ste1993}), this reduces to studying
\begin{equation*}
\begin{split}
I^\pm(x,z,t)&:=\int_{\mathbb{R}_+\times[-1,1]}e^{i\Psi}\varphi(2^{-p}\rho\sqrt{1-\Lambda^2})\varphi(2^{-q}\rho\Lambda)\cdot H_\pm(\rho\sqrt{1-\Lambda^2}x)\cdot \widehat{f}\,\rho^2\varphi(\rho)d\rho d\Lambda,\\
\Psi&:=t\Lambda+\rho\left[\Lambda z\pm\sqrt{1-\Lambda^2}x\right],
\end{split}
\end{equation*}
where
\begin{equation}\label{eq:dxBessel}
\left\vert\left(\frac{d}{dx}\right)^aH_\pm(x)\right\vert\lesssim \langle x\rangle^{-\frac{1}{2}-a}.
\end{equation}
We focus on the case with sign $+$; the other estimate is similar. We can compute the gradient
\begin{equation}\label{SPgradients}
\begin{split}
\partial_\Lambda\Psi&=t+\rho\left[z-\frac{\Lambda}{\sqrt{1-\Lambda^2}} x\right],\qquad\partial_\rho\Psi=\Lambda z+\sqrt{1-\Lambda^2}x,\\
\partial_\Lambda^2\Psi&=-\frac{\rho x}{\left[1-\Lambda^2\right]^\frac{3}{2}},\qquad\partial_\Lambda\partial_\rho\Psi=z-\frac{\Lambda}{\sqrt{1-\Lambda^2}} x,\qquad\partial_\rho^2\Psi=0.
\end{split}
\end{equation}

\medskip
For fixed $p,q$ and $0<\kappa<(\beta-\beta')/20$, we let $\ell_0$ be the greatest integer such that $2^{\ell_0}\le 2^p t\cdot(2^{2p+q}t)^{-\kappa}$, and we decompose
\begin{equation*}
\begin{split}
f=R_{\le \ell_0}f+(Id-R_{\leq\ell_0})f,
\end{split}
\end{equation*}
with $I^+=I^+_{\leq\ell_0}+I^+_{>\ell_0}$ accordingly. On the one hand, we see that
\begin{equation*}
\begin{split}
\Vert I^+_{>\ell_0}\Vert_{L^2}&\lesssim \sum_{\ell\ge\ell_0}2^{-(1+\beta)\ell}2^{-\beta p} \norm{P_{0,p}R_\ell f}_{X}\lesssim 2^{-p}t^{-1}\cdot (2^{2p+q}t)^{-\beta+\kappa+\beta\kappa}\cdot 2^{q\beta}\norm{f}_X,
\end{split}
\end{equation*}
which yields the $L^2$ contribution to \eqref{eq:I-IIdecay}. From now on, together with \eqref{eq:R-Bernstein} we can thus assume that $f=R_{\le \ell_0}f$ satisfies for all $a\ge0$ and $0\le b\le 2$ that
\begin{equation}\label{FSignalProperty}
\begin{split}
\Vert S^b\partial_\lambda^a \widehat{f}\Vert_{L^\infty}&\lesssim_a t^a\cdot(2^{2p+q}t)^{-a\kappa}\Vert S^b\widehat{f}\Vert_{L^\infty}\lesssim_a t^a\cdot(2^{2p+q}t)^{-a\kappa}\norm{f}_D.
\end{split}
\end{equation}
We will bound the remaining terms in $L^\infty$, and distinguish cases as follows:

\medskip
{\bf Case 1}:
\begin{equation*}
\begin{split}
0\le x\le C^{-1}t2^{p+q},\qquad\hbox{ and }\qquad \vert z\vert\le C^{-1} t2^{2p}.
\end{split}
\end{equation*}
In these conditions, there holds that
\begin{equation}\label{eq:dlambdaPsi}
\begin{split}
\vert\partial_\lambda\Psi\vert\ge t/2,\qquad \abs{\partial_\lambda^2\Psi}\leq C^{-1}t2^{q-2p},\qquad \abs{\partial_\lambda^3\Psi}\leq C^{-1}t2^{2q-4p}.
\end{split}
\end{equation}
Using that (with $h=\varphi(2^{-p}\rho\sqrt{1-\lambda^2})\varphi(2^{-q}\rho\lambda)H_+(\rho\sqrt{1-\lambda^2}x)\rho^2\varphi(\rho)$)
\begin{equation*}
-i\iint_{\mathbb{R}_+\times[-1,1]}e^{i\Psi}h\,\partial_\lambda^n\widehat{f} d\rho d\lambda=\iint_{\mathbb{R}_+\times[-1,1]}e^{i\Psi}\frac{h}{\partial_\lambda\Psi}\partial_\lambda^{n+1}\widehat{f} d\rho d\lambda+\iint_{\mathbb{R}_+\times[-1,1]}e^{i\Psi}\partial_\lambda\left(\frac{h}{\partial_\lambda\Psi}\right)\partial_\lambda^n\widehat{f} d\rho d\lambda,
\end{equation*}
we integrate by parts at most $N$ times in $\lambda$, with $N\kappa\ge2$, stopping earlier if a second derivative does not hit $\widehat{f}$. Note that the boundary terms vanish since we assume $2^{2p}t\gtrsim 1$.
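As an editorial sanity check (not part of the argument), the explicit derivatives of the phase $\Psi=t\lambda+\rho\left[\lambda z+\sqrt{1-\lambda^2}x\right]$ recorded in \eqref{SPgradients} can be confirmed symbolically; the snippet below assumes SymPy is available:

```python
import sympy as sp

t, rho, lam, x, z = sp.symbols('t rho lam x z', real=True)

# Phase with sign +, as in the text
Psi = t*lam + rho*(lam*z + sp.sqrt(1 - lam**2)*x)

# pairs (computed derivative, formula claimed in (SPgradients))
claims = [
    (sp.diff(Psi, lam),      t + rho*(z - lam*x/sp.sqrt(1 - lam**2))),
    (sp.diff(Psi, rho),      lam*z + sp.sqrt(1 - lam**2)*x),
    (sp.diff(Psi, lam, 2),   -rho*x/(1 - lam**2)**sp.Rational(3, 2)),
    (sp.diff(Psi, lam, rho), z - lam*x/sp.sqrt(1 - lam**2)),
    (sp.diff(Psi, rho, 2),   sp.Integer(0)),
]
for computed, claimed in claims:
    assert sp.simplify(computed - claimed) == 0
```

In particular $\partial_\rho^2\Psi=0$, which is what makes the repeated integration by parts in $\rho$ in Cases 2 and 3 below straightforward.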
Once this is done, we have several types of terms:
\begin{enumerate}[label=(\roman*),wide]
\item if all derivatives hit $\widehat{f}$, a crude estimate using \eqref{FSignalProperty} gives
\begin{equation*}
\begin{split}
\left\vert \iint_{\mathbb{R}_+\times[-1,1]}e^{i\Psi}\frac{h}{(\partial_\lambda\Psi)^N}\partial_\lambda^N\widehat{f} d\rho d\lambda\right\vert &\lesssim 2^{2p+q}t^{-N}\cdot t^N\cdot(2^{2p+q}t)^{-N\kappa}\norm{f}_D\lesssim t^{-1}\cdot (2^{2p+q}t)^{-1}\norm{f}_D.
\end{split}
\end{equation*}
\item if all but one derivative hits $\widehat{f}$, we have a similar estimate since by \eqref{eq:dlambdaPsi} there holds that
\begin{equation*}
\vert \partial_\lambda h/\partial_\lambda \Psi\vert+\vert \partial_\lambda^2\Psi/(\partial_\lambda\Psi)^2\vert\lesssim 1.
\end{equation*}
\item if two derivatives do not hit $\widehat{f}$, using \eqref{eq:dlambdaPsi} we compute that
\begin{equation*}
\begin{split}
\frac{\vert\partial_\lambda^3\Psi\vert}{\vert\partial_\lambda\Psi\vert^3}+\frac{\vert\partial^2_\lambda\Psi\vert^2}{\vert\partial_\lambda\Psi\vert^4}+\frac{\vert\partial_\lambda^2\Psi\vert}{\vert\partial_\lambda\Psi\vert^2}\frac{\vert\partial_\lambda h\vert}{\vert\partial_\lambda\Psi\vert}+\frac{\vert\partial_\lambda^2h\vert}{\vert\partial_\lambda\Psi\vert^2}\lesssim (2^{2p+q}t)^{-2}
\end{split}
\end{equation*}
and therefore a crude estimate gives a similar bound.
\end{enumerate}

\medskip
{\bf Case 2}:\footnote{Cases 2 and 3 have already been treated similarly in \cite[Proof of Proposition 4.1]{rotE}.}
\begin{equation*}
\begin{split}
x\ge C^{-1}t2^{p+q}\,\hbox{ and }\,\vert z\vert\le C^{-2}t2^{2p},\qquad\hbox{ or }\qquad \vert x\vert\le C^{-2}t2^{p+q}\,\hbox{ and }\,\vert z\vert\ge C^{-1}t2^{2p}.
\end{split}
\end{equation*}
Here we have that
\begin{equation*}
\vert\partial_\rho\Psi\vert\gtrsim t2^{2p+q},
\end{equation*}
and we can integrate by parts twice with respect to $\rho$ to obtain after crude integration that
\begin{equation}
\abs{I^+(x,z,t)}\lesssim (t2^{2p+q})^{-2}\cdot 2^{2p+q}\norm{(1,\partial_\rho,\partial_\rho^2)\widehat{f}}_{L^\infty}\lesssim t^{-2}2^{-2p-q}\norm{(1,S,S^2)f}_D
\end{equation}
upon using Lemma \ref{lem:ControlLinfty}.
\medskip
{\bf Case 3}:
\begin{equation*}
\begin{split}
x\ge C^{-2}t2^{p+q}\,\hbox{ and }\,\vert z\vert\ge C^{-2}t2^{2p}.
\end{split}
\end{equation*}
In these conditions, there holds that
\begin{equation*}
\begin{split}
2^q\vert \partial_\rho\Psi\vert+\vert\partial_\rho\partial_\lambda\Psi\vert\gtrsim t,
\end{split}
\end{equation*}
which follows from the identities
\begin{equation*}
\begin{split}
\lambda\partial_\lambda\partial_\rho\Psi-\partial_\rho\Psi&=-\frac{1}{\sqrt{1-\lambda^2}}x,\qquad\partial_\lambda\partial_\rho\Psi+\frac{\lambda}{1-\lambda^2}\partial_\rho\Psi=\frac{z}{1-\lambda^2},
\end{split}
\end{equation*}
using the first if $p\leq -10$ and the second otherwise. We now decompose
\begin{equation*}
\begin{split}
I=\sum_{n\ge0}I_n,\qquad I_n(x,z,t)&:=\int_{\mathbb{R}_+\times[-1,1]}e^{i\Psi}\overline\varphi_{p,q}\cdot H_+(\rho\sqrt{1-\lambda^2}\vert x\vert)\cdot \varphi(2^{-n}\partial_\rho\Psi)\widehat{f}\rho^2\varphi(\rho)d\rho d\lambda,\\
\overline\varphi_{p,q}&:=\varphi(2^{-p}\rho\sqrt{1-\lambda^2})\varphi(2^{-q}\rho\lambda).
\end{split}
\end{equation*}
On the support of $I_0$ we have that $\vert\partial_\lambda\partial_\rho\Psi\vert\gtrsim t$, and thus with $g(\rho,\lambda)=\partial_\rho\Psi$
\begin{equation*}
\begin{split}
\vert I_0\vert&\lesssim \Vert \widehat{f}\Vert_{L^\infty}\iint \vert H_+(\rho\sqrt{1-\lambda^2}x)\vert\cdot \varphi(g)\cdot\rho^2\varphi(\rho)d\rho d\lambda\lesssim t^{-1}\cdot (2^{2p+q}t)^{-\frac{1}{2}}\Vert \widehat{f}\Vert_{L^\infty}.
\end{split}
\end{equation*}
For $n\ge 1$, we integrate by parts twice in $\rho$ and find that
\begin{equation*}
\begin{split}
I_n(x,z,t)&=\int_{\mathbb{R}_+\times[-1,1]}e^{i\Psi}\frac{1}{(\partial_\rho\Psi)^2}\varphi(2^{-n}\partial_\rho\Psi)\partial_\rho^2\left(\overline\varphi_{p,q}\cdot H_+(\rho\sqrt{1-\lambda^2}x)\cdot\widehat{f}\rho^2\varphi(\rho)\right)d\rho d\lambda,
\end{split}
\end{equation*}
and hence we deduce
\begin{equation*}
\begin{split}
\vert I_n\vert&\lesssim 2^{-2n}\int_{\mathbb{R}_+\times[-1,1]}\vert \widetilde{H}(\rho\sqrt{1-\lambda^2} x)\vert \cdot \varphi(2^{-n}\partial_\rho\Psi)\widetilde{\overline\varphi}_{p,q}F\widetilde{\varphi}(\rho)d\rho d\lambda,\\
\widetilde{\overline\varphi}_{p,q}&:=\widetilde{\varphi}(2^{-p}\rho\sqrt{1-\lambda^2})\widetilde{\varphi}(2^{-q}\rho\lambda),\qquad \widetilde{H}(x):=\vert H_+(x)\vert+\langle x\rangle\vert \frac{dH_+}{dx}(x)\vert+\langle x\rangle^2\vert\frac{d^2H_+}{dx^2}(x)\vert,\\
F&:=\vert \widehat{f}\vert+\vert \partial_\rho\widehat{f}\vert+\vert
\partial_\rho^2\widehat{f}\vert,
\end{split}
\end{equation*}
so that
\begin{equation*}
\begin{split}
\sum_{n\geq 1}\vert I_n\vert&\lesssim (t2^{2p+q})^{-\frac{1}{2}}\sum_{q+n\le \ln(t)}2^{-2n}\Vert F\Vert_{L^\infty}\int_{\mathbb{R}_+\times[-1,1]} \varphi(2^{-n}g)\varphi(t^{-1}\partial_\lambda g)\widetilde{\overline\varphi}\widetilde{\varphi}(\rho)d\rho d\lambda\\
&\quad+(t2^{2p+q})^{-\frac{1}{2}}\sum_{q+n\ge \ln(t)}2^{-2n}\Vert F\Vert_{L^\infty}\int_{\mathbb{R}_+\times[-1,1]} \widetilde{\overline\varphi}\widetilde{\varphi}(\rho)d\rho d\lambda\\
&\lesssim (t2^{2p+q})^{-\frac{1}{2}}\Vert F\Vert_{L^\infty}\left(\sum_{q+n\le \ln(t)}2^{-n} t^{-1}+\sum_{q+n\ge \ln(t)}2^{-2n}2^{2p+q}\right).
\end{split}
\end{equation*}
Summing and using Lemma \ref{lem:ControlLinfty} finishes the proof.

\section{Integration by parts along vector fields}\label{sec:vfibp}
In this section we develop the formalism for repeated integration by parts along vector fields. To do this systematically, we first address (Section \ref{ssec:vfphase}) some important analytic aspects of the vector fields and how they relate to the bilinear structure of the equations \eqref{eq:EC_disp}. Then we introduce some multiplier classes related to the nonlinearity of \eqref{eq:EC_disp} and study their behavior under the vector fields (Section \ref{ssec:multipliers}). Subsequently we prove bounds for repeated integration by parts along vector fields in Section \ref{ssec:vfibp}.

\subsection{Vector fields and the phase}\label{ssec:vfphase}
We discuss here some aspects related to the interaction of the vector fields $V\in\{S,\Omega\}$ and the phase functions $\Phi_{\mu\nu}$ as in \eqref{eq:def_phi}.
We use subscripts to denote the Fourier variable in which a vector field acts, so that
\begin{equation}
\Omega_\eta=\eta_\textnormal{h}^\perp\cdot\nabla_{\eta_\textnormal{h}},\quad S_\eta=\eta\cdot\nabla_{\eta}.
\end{equation}
We begin by recalling that by construction there holds that
\begin{equation}
S_\zeta\Lambda(\zeta)=\Omega_\zeta\Lambda(\zeta)=0,
\end{equation}
and thus
\begin{equation}
V_\eta\Phi_{\mu\nu}(\xi,\eta)=\mu\,V_\eta\Lambda(\xi-\eta),\qquad V\in\{S,\Omega\},\quad\mu,\nu\in\{-,+\}.
\end{equation}
To simplify the notation we will henceforth assume that $\mu=+$ and simply write $\Phi$ for any of the phase functions $\Phi_{\mu\nu}$, when the precise sign combination in \eqref{eq:def_phi} is inconsequential. The quantity
\begin{equation}\label{eq:def_sigma}
\bar\sigma\equiv\bar\sigma(\xi,\eta):=\xi_3\eta_\textnormal{h}-\eta_3\xi_\textnormal{h}=-(\xi\times\eta)_\textnormal{h}^\perp
\end{equation}
will play an important role in our analysis. We note that
\begin{equation}\label{eq:def_sigma2}
\bar\sigma(\xi,\eta)=\bar\sigma(\xi-\eta,\eta)=-\bar\sigma(\xi,\xi-\eta),
\end{equation}
and $\bar\sigma$ combines horizontal and vertical components of our frequencies, over which we will have precise control (see e.g.\ \eqref{eq:loc_def3}). Moreover, it turns out that $\bar\sigma$ controls the size of vector fields acting on the phase.
A direct computation yields:
\begin{lemma}\label{lem:vfsizes-mini}
There holds that
\begin{equation}\label{eq:vf_sigma_0}
S_\eta\Phi=\bar\sigma(\xi,\eta)\cdot\frac{\xi_\textnormal{h}-\eta_\textnormal{h}}{\abs{\xi-\eta}^3},\quad \Omega_\eta\Phi=-\bar\sigma(\xi,\eta)\cdot\frac{(\xi_\textnormal{h}-\eta_\textnormal{h})^\perp}{\abs{\xi-\eta}^3},
\end{equation}
and hence
\begin{equation}\label{eq:vflobound_0}
\abs{S_\eta\Phi}+\abs{\Omega_\eta\Phi}\sim \frac{\abs{\xi_\textnormal{h}-\eta_\textnormal{h}}}{\abs{\xi-\eta}}\abs{\xi-\eta}^{-2}\abs{\bar\sigma(\xi,\eta)}.
\end{equation}
\end{lemma}
\begin{proof}
See \cite[Lemma 6.1]{rotE}.
\end{proof}
We will make frequent use of this lemma when integrating by parts along vector fields (see Section \ref{ssec:vfibp}). Another crucial observation is contained in the following proposition: it shows that either we have a lower bound for $\bar\sigma$ (and by \eqref{eq:vflobound_0} thus also for $V_\eta\Phi$), or the phase is relatively large. More precisely, we have shown in \cite[Proposition 6.2]{rotE} that:
\begin{proposition}\label{prop:phasevssigma}
Assume that $\abs{\Phi}\leq 2^{q_{\max}-10}$. Then in fact $2^{p_{\max}}\sim 1$, and $\abs{\bar\sigma}\gtrsim 2^{q_{\max}}2^{k_{\max}+k_{\min}}$.
\end{proposition}
In practice, this implies that \emph{either we can integrate by parts along a vector field $V\in\{S,\Omega\}$ or perform a normal form.} This may also be viewed as a qualified (and quantified) statement of absence of spacetime resonances. Remarkably, it only makes use of the easily accessible derivatives given by the symmetries, rather than the full gradient.
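Both \eqref{eq:def_sigma2} and the explicit formulas \eqref{eq:vf_sigma_0} admit a quick numerical verification. The following editorial check (with NumPy) assumes the convention $v^\perp=(-v_2,v_1)$ for the horizontal rotation, and takes the $\eta$-derivatives of $\Phi$ through the part $\Lambda(\xi-\eta)=(\xi_3-\eta_3)/\abs{\xi-\eta}$ only, i.e.\ with $\mu=+$:

```python
import numpy as np

rng = np.random.default_rng(0)

def perp(v):              # v^perp = (-v_2, v_1), rotation by +90 degrees
    return np.array([-v[1], v[0]])

def sigma_bar(xi, eta):   # sigma_bar(xi, eta) = xi_3 eta_h - eta_3 xi_h
    return xi[2]*eta[:2] - eta[2]*xi[:2]

def grad_Lambda(v):       # gradient of Lambda(v) = v_3/|v|
    r = np.linalg.norm(v)
    g = -v[2]*v/r**3
    g[2] += 1.0/r
    return g

for _ in range(100):
    xi, eta = rng.normal(size=3), rng.normal(size=3)
    th = xi - eta
    sb = sigma_bar(xi, eta)
    # the identities (eq:def_sigma2) and sigma_bar = -(xi x eta)_h^perp
    assert np.allclose(sb, sigma_bar(th, eta))
    assert np.allclose(sb, -sigma_bar(xi, th))
    assert np.allclose(sb, -perp(np.cross(xi, eta)[:2]))
    # (eq:vf_sigma_0), with the eta-derivatives falling on Lambda(xi - eta)
    g = grad_Lambda(th)
    S_Phi  = -eta @ g                 # S_eta     = eta . grad_eta
    Om_Phi = -perp(eta[:2]) @ g[:2]   # Omega_eta = eta_h^perp . grad_{eta_h}
    r3 = np.linalg.norm(th)**3
    assert np.allclose(S_Phi,  sb @ th[:2] / r3)
    assert np.allclose(Om_Phi, -sb @ perp(th[:2]) / r3)
```

This is only an illustration of the algebra behind Lemma \ref{lem:vfsizes-mini}, not a substitute for the proof in \cite{rotE}.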
\subsection{Some multiplier mechanics}\label{ssec:multipliers}
Let us consider the following set of ``elementary'' multipliers
\begin{equation}
\begin{aligned}
E&:=\left\{\Lambda(\zeta),\sqrt{1-\Lambda^2(\zeta)},\frac{\zeta_{\textnormal{h}}\cdot\zeta_{1,\textnormal{h}}}{\abs{\zeta_{\textnormal{h}}}\abs{\zeta_{1,\textnormal{h}}}},\frac{\zeta_{\textnormal{h}}^\perp\cdot\zeta_{1,\textnormal{h}}}{\abs{\zeta_{\textnormal{h}}}\abs{\zeta_{1,\textnormal{h}}}},\frac{\zeta_2\cdot\zeta_3}{\abs{\zeta_2}\abs{\zeta_3}}\,:\,\zeta,\zeta_1\in\{\xi,\xi-\eta,\eta\},\ \zeta_2,\zeta_3\in\{\xi-\eta,\eta\}\right\}.
\end{aligned}
\end{equation}
We note that for $e\in E$ there holds that $\abs{e}\leq 1$, and $E$ is an enlarged version of $\bar{E}$ in \eqref{eq:barE} that includes the \emph{horizontal} ``angles'' between all frequencies. As we will see, up to products and homogeneity this yields a class of multipliers that is closed under the action of the vector fields $V\in\{S,\Omega\}$, and allows us to express not only multipliers but also dot products with $\bar\sigma$ (as is needed for $V_\eta\Phi$) in terms of building blocks from $E$. To track the orders of multipliers we encounter, we define the following collections of products of elementary multipliers
\begin{equation}
\begin{aligned}
E_0\equiv E^{0}_{0}&:=\textnormal{span}_{\mathbb{R}}\left\{\prod_{i=1}^N e_i\,:\, e_i\in E,\ N\in\mathbb{N}\right\},\\
E^{a}_{b}&:=\textnormal{span}_{\mathbb{R}}\left\{\abs{\xi-\eta}^{-a}\abs{\eta}^{a}\abs{\xi_\textnormal{h}-\eta_\textnormal{h}}^{-b}\abs{\eta_\textnormal{h}}^{b}\cdot e\,:\,e\in E_0\right\}, \quad a,b\in\mathbb{Z}.
\end{aligned}
\end{equation}
Furthermore, for $n\in \mathbb{N}$ we let
\begin{equation}
E(n):=\bigcup_{\substack{a+b\leq n\\a,b\geq 0}} E^{a}_{b},\quad E(-n):=\bigcup_{\substack{a+b\geq -n\\a,b\leq 0}} E^{a}_{b},
\end{equation}
which includes all multipliers up to a certain order of homogeneity. We remark that $\Phi\in E_0$. From Lemma \ref{lem:ECmult} it follows that the multipliers of the nonlinearity of Euler-Coriolis in dispersive formulation \eqref{eq:EC_disp} are elements of $E_0$ that satisfy certain bounds:
\begin{lemma}\label{lem:ECmult_bds}
Let $m$ be a multiplier of the nonlinearity of \eqref{eq:EC_disp}. Then there exists $e\in E_0$ such that
\begin{equation}
m=\abs{\xi}\cdot e.
\end{equation}
Moreover, we have the bounds
\begin{equation}\label{BoundsOnMAddedBenoit}
\abs{m}\cdot\chi_\textnormal{h}\lesssim 2^{k+p_{\max}},\qquad \abs{m}\cdot\chi\lesssim 2^{k+p_{\max}+q_{\max}}.
\end{equation}
\end{lemma}
As a consequence, it will be important to understand the effect of vector fields on the above classes of multipliers, allowing us to keep track of their orders (e.g.\ when integrating by parts). This is the goal of the following lemma.
\begin{lemma}\label{lem:vfE}
If $e\in E_0$, then $V_\eta e\in E(1)$ and $V_{\xi-\eta} e\in E(-1)$, and thus
\begin{equation}\label{eq:Ve1}
\begin{aligned}
\abs{V_\eta e}\cdot\chi_\textnormal{h}(\xi,\eta)&\lesssim 1+2^{k_2-k_1}(1+2^{p_2-p_1}),\\
\abs{V_{\xi-\eta} e}\cdot\chi_\textnormal{h}(\xi,\eta)&\lesssim 1+2^{k_1-k_2}(1+2^{p_1-p_2}).
\end{aligned}
\end{equation}
More generally, if $e\in E^{a}_{b}$ then
\begin{equation}\label{eq:Ve2}
\begin{aligned}
V_{\eta}e\in E^{a}_{b}\cup E^{a+1}_{b} \cup E^{a}_{b+1}:\quad \abs{V_\eta e}\cdot\chi_\textnormal{h}(\xi,\eta)&\lesssim [1+2^{k_2-k_1}(1+2^{p_2-p_1})]\norm{e\chi_\textnormal{h}}_{L^\infty},\\
V_{\xi-\eta}e\in E^{a}_{b}\cup E^{a-1}_{b} \cup E^{a}_{b-1}:\quad \abs{V_{\xi-\eta} e}\cdot\chi_\textnormal{h}(\xi,\eta)&\lesssim [1+2^{k_1-k_2}(1+2^{p_1-p_2})] \norm{e\chi_\textnormal{h}}_{L^\infty}.
\end{aligned}
\end{equation}
\end{lemma}
\begin{proof}
By symmetry it suffices to show the above claims for $V_\eta$, and \eqref{eq:Ve2} follows with analogous computations as for \eqref{eq:Ve1}. To establish \eqref{eq:Ve1} we recall that $V_\eta\Lambda(\eta)=0$, and we note that $V_\eta\Lambda(\xi-\eta)=V_\eta\Phi$, so that by \eqref{eq:vf_sigma_0} of Lemma \ref{lem:vfsizes-mini} there holds
\begin{equation}
V_\eta(\sqrt{1-\Lambda^2(\xi-\eta)})=-\frac{\Lambda(\xi-\eta)}{\sqrt{1-\Lambda^2(\xi-\eta)}}V_\eta\Lambda(\xi-\eta)=-\frac{\Lambda(\xi-\eta)}{\abs{\xi-\eta}^2}\,\bar\sigma(\xi,\eta)\cdot
\begin{cases}
\frac{\xi_\textnormal{h}-\eta_\textnormal{h}}{\abs{\xi_\textnormal{h}-\eta_\textnormal{h}}},&V=S,\\
-\frac{(\xi_\textnormal{h}-\eta_\textnormal{h})^\perp}{\abs{\xi_\textnormal{h}-\eta_\textnormal{h}}},&V=\Omega,
\end{cases}
\end{equation}
and we note that $\bar\sigma(\xi,\eta)=\bar\sigma(\xi-\eta,\eta)$. Together with some straightforward, but slightly tedious computations for the ``angles'' (see \cite[Appendix A.2]{rotE}) this implies the claim.
\end{proof}
As a consequence, we can establish bounds for vector field quotients in cases where we can integrate by parts, i.e.\ when we have a suitable lower bound for $\bar\sigma$:
\begin{lemma}\label{lem:vfQ}
Assume that $\abs{\bar\sigma}\cdot \chi\gtrsim 2^{k_{\max}+k_{\min}}2^{p_{\max}+q_{\max}}$. Then for $V,V'\in\{S,\Omega\}$ and $n,m\geq 0$ there holds if $\abs{V_\eta\Phi}\gtrsim\abs{\Omega_\eta\Phi}+\abs{S_\eta\Phi}$
\begin{equation}
\abs{\frac{(V'_{\xi-\eta})^{n} V_\eta^{m+1}\Phi}{V_\eta\Phi}}\cdot\chi\lesssim [1+2^{k_1-k_2}(1+2^{p_1-p_2})]^n\cdot[1+2^{k_2-k_1}(1+2^{p_2-p_1})]^m.
\end{equation}
\end{lemma}
\begin{proof}
A direct computation gives that
\begin{equation}\label{eq:2ndvfPhi}
\begin{aligned}
&S_\eta^2\Phi=S_\eta\Phi\left[3\frac{\eta\cdot(\xi-\eta)}{\abs{\xi-\eta}^2}+2\right]-\frac{\bar\sigma\cdot\xi_\textnormal{h}}{\abs{\xi-\eta}^3},\\
&\Omega_\eta^2\Phi=3\Omega_\eta\Phi\frac{\epsilon^{ab}\eta_a\xi_b}{\abs{\xi-\eta}^2}-\Lambda(\xi-\eta)\frac{\eta_\textnormal{h}\cdot\xi_\textnormal{h}}{\abs{\xi-\eta}^2},\\
&S_\eta\Omega_\eta\Phi=\Omega_\eta S_\eta\Phi=S_\eta\Phi\frac{\eta_\textnormal{h}^\perp\cdot\xi_\textnormal{h}}{\abs{\xi-\eta}^2}+\Omega_\eta\Phi\left[1+2\frac{\eta\cdot(\xi-\eta)}{\abs{\xi-\eta}^2}\right].
\end{aligned}
\end{equation}
With Lemmas \ref{lem:vfsizes-mini} and \ref{lem:vfE}, by induction we thus see that for $m\in\mathbb{N}$ there exist $e_1,e_2\in E(m)$ and $(e_3^\vartheta)_{\vartheta\in E'}\subset E(m-1)$, where $E'=\left\{\Lambda(\zeta')\frac{\zeta_\textnormal{h}}{\abs{\zeta}},\Lambda(\zeta')\frac{\zeta_\textnormal{h}^\perp}{\abs{\zeta}}:\,\zeta,\zeta' \in\{\xi-\eta,\eta\}\right\}$, such that
\begin{equation}\label{eq:hivfphase}
V_\eta^{m+1}\Phi=\Omega_\eta\Phi\cdot e_1+S_\eta\Phi\cdot e_2+\frac{\abs{\eta}}{\abs{\xi-\eta}}\sum_{\vartheta\in E'}\left(\frac{\xi_\textnormal{h}}{\abs{\xi-\eta}}\cdot\vartheta\right)e_3^\vartheta.
\end{equation}
Together with the bounds in Lemmas \ref{lem:vfsizes-mini} and \ref{lem:vfE} this proves the claim when $n=0$, since
\begin{equation}
\frac{\abs{\Omega_\eta\Phi}+\abs{S_\eta\Phi}}{\abs{V_\eta\Phi}}\lesssim 1,
\end{equation}
and since for $\vartheta\in E'$ there holds that
\begin{equation}
\begin{aligned}
\frac{\abs{\eta}}{\abs{\xi-\eta}}\frac{\abs{\xi_\textnormal{h}}}{\abs{\xi-\eta}}\abs{\vartheta}\cdot\abs{V_\eta\Phi}^{-1}&\lesssim 2^{k_2-2k_1}2^{k+p}(2^{q_1}+2^{q_2})(2^{p_1}+2^{p_2})\cdot 2^{2k_1-p_1}2^{-k_{\max}-k_{\min}}2^{-p_{\max}-q_{\max}}\\
&\lesssim (1+2^{p_2-p_1})(1+2^{k_2-k_1}).
\end{aligned}
\end{equation}
When $n>0$ the claim follows analogously: we compute that
\begin{align}
\Omega_{\xi-\eta}\Omega_\eta\Phi&=\frac{\xi_3}{\abs{\xi-\eta}}\frac{\abs{\xi_\textnormal{h}-\eta_\textnormal{h}}^2}{\abs{\xi-\eta}^2}-S_\eta\Phi,\qquad \Omega_{\xi-\eta}S_\eta\Phi=-S_{\xi-\eta}\Omega_\eta\Phi=\Omega_\eta\Phi, \qquad S_{\xi-\eta}S_\eta\Phi=-S_\eta\Phi,
\end{align}
so that with \eqref{eq:hivfphase} there holds that
\begin{equation}\label{eq:mixhivfphase}
(V_{\xi-\eta})^n V_\eta^{m+1}\Phi=\Omega_\eta\Phi\cdot \bar{e}_1+S_\eta\Phi\cdot \bar{e}_2+\frac{\abs{\eta}}{\abs{\xi-\eta}}\sum_{\vartheta\in E'}\left(\frac{\xi_\textnormal{h}}{\abs{\xi-\eta}}\cdot\vartheta\right)\bar{e}_3^\vartheta+\frac{\xi_3}{\abs{\xi-\eta}}\frac{\abs{\xi_\textnormal{h}-\eta_\textnormal{h}}^2}{\abs{\xi-\eta}^2}\bar{e}_4,
\end{equation}
where $\bar{e}_1,\bar{e}_2,\bar{e}_4\in E(m)\cup E(-n)$ and $(\bar{e}_3^\vartheta)_{\vartheta\in E'}\subset E(m-1)\cup E(-n)$. To conclude it suffices to note that
\begin{equation}
\frac{\abs{\xi_3}}{\abs{\xi-\eta}}\frac{\abs{\xi_\textnormal{h}-\eta_\textnormal{h}}^2}{\abs{\xi-\eta}^2}\cdot\abs{V_\eta\Phi}^{-1}\lesssim 2^{k+q}2^{2p_1-k_1}\cdot 2^{2k_1-p_1}2^{-k_{\max}-k_{\min}}2^{-p_{\max}-q_{\max}}\lesssim 1+2^{k_1-k_2}.
\end{equation}
\end{proof}

\subsection{Integration by parts in bilinear expressions}\label{ssec:vfibp}
Consider now a typical bilinear term $\mathcal{Q}_m(f_1,f_2)$ as in \eqref{eq:def_Qm}:
\begin{equation*}
\mathcal{Q}_m(f_1,f_2)(s):=\mathcal{F}^{-1}\left(\int_{\mathbb{R}^3}e^{is\Phi(\xi,\eta)}m(\xi,\eta)\widehat{f_1}(s,\xi-\eta)\widehat{f_2}(s,\eta)d\eta\right),
\end{equation*}
with multiplier in our standard multiplier classes, i.e.\ $m=\abs{\xi}\cdot e$ for some $e\in E^{a}_{b}$.
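Identities like the first line of \eqref{eq:2ndvfPhi} are easy to get wrong; as an editorial sanity check (again with the $\eta$-derivatives of $\Phi$ entering only through $\Lambda(\xi-\eta)=(\xi_3-\eta_3)/\abs{\xi-\eta}$, i.e.\ $\mu=+$, an assumption of this snippet), the formula for $S_\eta^2\Phi$ can be confirmed numerically from the exact gradient and Hessian of $\Lambda$:

```python
import numpy as np

rng = np.random.default_rng(1)
e3 = np.array([0.0, 0.0, 1.0])

def grad_Lambda(v):   # gradient of Lambda(v) = v_3/|v|
    r = np.linalg.norm(v)
    return e3/r - v[2]*v/r**3

def hess_Lambda(v):   # Hessian of Lambda(v) = v_3/|v|
    r = np.linalg.norm(v)
    return (3*v[2]*np.outer(v, v)/r**5
            - (v[2]*np.eye(3) + np.outer(e3, v) + np.outer(v, e3))/r**3)

for _ in range(100):
    xi, eta = rng.normal(size=3), rng.normal(size=3)
    th = xi - eta                            # theta = xi - eta
    sb = xi[2]*eta[:2] - eta[2]*xi[:2]       # sigma_bar(xi, eta)
    r = np.linalg.norm(th)
    g, H = grad_Lambda(th), hess_Lambda(th)
    S1 = -eta @ g                            # S_eta Phi = S_eta Lambda(xi - eta)
    S2 = S1 + eta @ H @ eta                  # S_eta^2 Phi, by the chain rule
    # first identity of (eq:2ndvfPhi)
    rhs = S1*(3*(eta @ th)/r**2 + 2) - (sb @ xi[:2])/r**3
    assert np.isclose(S2, rhs, rtol=1e-6, atol=1e-6)
```

This illustrates the bookkeeping only; the full statement, including the mixed $V_{\xi-\eta}$ relations, is proved in \cite{rotE}.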
Our strategy for integration by parts will be to get bounds for $\abs{\bar\sigma}$ via localizations -- firstly in $p,p_j$ and $\ell,\ell_j$, or with more refinement also in $q,q_j$, $j=1,2$ -- from which control of the size of the vector fields applied to the phase follows by \eqref{eq:vflobound_0} in Lemma \ref{lem:vfsizes-mini}. Together with the corresponding quantified control on the inputs, this informs us when integration by parts can be carried out advantageously. We thus decompose
\begin{equation}
\mathcal{Q}_m(f_1,f_2)=\sum_{\substack{k_j,p_j,\ell_j,\\j=1,2}}\mathcal{Q}_m(P_{k_1,p_1}R_{\ell_1}f_1,P_{k_2,p_2}R_{\ell_2}f_2),
\end{equation}
and using our notation \eqref{eq:loc_def3} for the localizations we have that
\begin{equation}
\mathcal{Q}_m(P_{k_1,p_1}R_{\ell_1}f_1,P_{k_2,p_2}R_{\ell_2}f_2)=\mathcal{Q}_{m\cdot \chi_\textnormal{h}}(R_{\ell_1}f_1,R_{\ell_2}f_2).
\end{equation}

\subsubsection{Formalism}
We begin by recalling from \cite[Lemma 6.4]{rotE} that we can resolve the action of a vector field in a variable $\zeta\in\{\xi-\eta,\eta\}$ on a function of $\xi-\zeta$ (as will frequently arise when integrating by parts along vector fields in the bilinear expressions \eqref{eq:EC_disp_Duham}) as follows:
\begin{lemma}\label{lem:VFcross}
Let $\Gamma_{V,\eta}^S,\Gamma_{V,\eta}^\Upsilon\in E^{1}_{0}$ be defined by
\begin{equation}
\begin{aligned}
\Gamma_{S,\eta}^S&=-\frac{\abs{\eta}}{\abs{\xi-\eta}}\left[\omega_c\sqrt{1-\Lambda^2(\eta)}\sqrt{1-\Lambda^2(\xi-\eta)}+\Lambda(\xi-\eta)\Lambda(\eta)\right],\quad
\Gamma_{\Omega,\eta}^S=\frac{\abs{\eta}}{\abs{\xi-\eta}}\omega_s\cdot\sqrt{1-\Lambda^2(\eta)}\sqrt{1-\Lambda^2(\xi-\eta)},\\
\Gamma_{S,\eta}^\Upsilon&=-\frac{\abs{\eta}}{\abs{\xi-\eta}}\left[\omega_c\sqrt{1-\Lambda^2(\eta)}\Lambda(\xi-\eta)-\Lambda(\eta)\sqrt{1-\Lambda^2(\xi-\eta)}\right],\quad \Gamma_{\Omega,\eta}^\Upsilon=\frac{\abs{\eta}}{\abs{\xi-\eta}}\omega_s\cdot\sqrt{1-\Lambda^2(\eta)}\Lambda(\xi-\eta),
\end{aligned}
\end{equation}
where
\begin{equation}
\omega_c:=\frac{\eta_\textnormal{h}\cdot(\xi_\textnormal{h}-\eta_\textnormal{h})}{\abs{\eta_\textnormal{h}}\abs{\xi_\textnormal{h}-\eta_\textnormal{h}}},\quad \omega_s:=\frac{\eta_\textnormal{h}\cdot(\xi_\textnormal{h}-\eta_\textnormal{h})^\perp}{\abs{\eta_\textnormal{h}}\abs{\xi_\textnormal{h}-\eta_\textnormal{h}}}.
\end{equation}
Then on \emph{axisymmetric functions} there holds that
\begin{equation}
\begin{aligned}
S_\eta &=\Gamma_{S,\eta}^S\cdot S_{\xi-\eta}+\Gamma_{S,\eta}^\Upsilon\cdot \Upsilon_{\xi-\eta},\qquad \Omega_\eta =\Gamma_{\Omega,\eta}^S\cdot S_{\xi-\eta}+\Gamma_{\Omega,\eta}^\Upsilon\cdot \Upsilon_{\xi-\eta}.
\end{aligned}
\end{equation}
The symmetric statement holds with the roles of $\eta$ and $\xi-\eta$ exchanged and $\Gamma_{V,\xi-\eta}^W\in E^{-1}_{0}$ for $W\in\{S,\Upsilon\}$.
\end{lemma}
\begin{proof}
See \cite[Lemma 6.4]{rotE}.
\end{proof}
To systematically treat several integrations by parts, we introduce the following notations.
For $\zeta\in\{\eta,\xi-\eta\}$, we consider the following three types of operators, as they naturally arise in integration by parts (according to where the vector fields ``land''):
\begin{equation}\label{eq:adjoints}
\begin{aligned}
\mathcal{L}_{V,\zeta}^{\textnormal{Id},S}&:=\frac{1}{V_\zeta\Phi},\\
\mathcal{L}_{V,\zeta}^{W,\textnormal{Id}}&:=\frac{1}{V_\zeta\Phi}\Gamma^W_{V,\zeta},\qquad W\in\{S,\Upsilon\},\\
\mathcal{L}_{V,\zeta}^{\textnormal{Id},\textnormal{Id}}&:=V_\zeta\left(\frac{1}{V_\zeta\Phi}\;\cdot\right)+\frac{1}{V_\zeta\Phi}c_V,\qquad c_S=3,\,\, c_\Omega=0.
\end{aligned}
\end{equation}
The first one corresponds to $V_\zeta$ hitting the input of variable $\zeta$, the second to a ``cross term'' with $W\in\{S,\Upsilon\}$, and the last to $V_\zeta$ acting on the multiplier itself. Letting further
\begin{equation}
\mathcal{I}:=\{(\textnormal{Id},S),(S,\textnormal{Id}),(\Upsilon,\textnormal{Id}),(\textnormal{Id},\textnormal{Id})\},
\end{equation}
we can write an integration by parts in e.g.\ $V_\eta$ compactly as
\begin{equation}
\mathcal{Q}_{m}(F,G)=is^{-1}\sum_{(W,Z)\in\mathcal{I}}\mathcal{Q}_{\mathcal{L}_{V,\eta}^{W,Z}(m)}(WF,ZG),
\end{equation}
and in $V'_{\xi-\eta}$ as
\begin{equation}
\mathcal{Q}_{m}(F,G)=is^{-1}\sum_{(W,Z)\in\mathcal{I}}\mathcal{Q}_{\mathcal{L}_{V',\xi-\eta}^{W,Z}(m)}(ZF,WG),
\end{equation}
and analogously for several consecutive integrations by parts.\footnote{For example, integrating once along $V_\eta$, then $V'_{\xi-\eta}$, then $V_{\eta}$, gives
\begin{equation}
\mathcal{Q}_m(F,G)=s^{-3}\sum_{(W_i,Z_i)\in\mathcal{I},\;1\leq i\leq 3} \mathcal{Q}_{\mathcal{L}_{V,\eta}^{W_3,Z_3}\mathcal{L}_{V',\xi-\eta}^{W_2,Z_2}\mathcal{L}_{V,\eta}^{W_1,Z_1}(m)}(W_3Z_2W_1F,Z_3W_2Z_1G).
\end{equation}
We note that in such an expression, only the $W_i$ may equal $\Upsilon$, and we have $Z_i\in\{S,\textnormal{Id}\}$.}

\subsubsection{Bounds}\label{ssec:vfibp_bds}
The following lemma gives bounds for iterated integration by parts along vector fields (when this is possible).
\begin{lemma}\label{lem:new_ibp}
We have the following bounds for repeated integration by parts:
\begin{enumerate}[wide]
\item\label{it:ibp_p} Assume the localization parameters are such that $\abs{\bar\sigma}\cdot\chi_\textnormal{h}\gtrsim L_1\gtrsim 2^{k_{\max}+k_{\min}}2^{p_{\max}}$. Then we have for any $N\in\mathbb{N}$ that
\begin{equation}
\begin{aligned}
\norm{\mathcal{F}\left(\mathcal{Q}_{m\chi_\textnormal{h}}(R_{\ell_1}f_1,R_{\ell_2}f_2)\right)}_{L^\infty}&\lesssim \norm{m\chi_\textnormal{h}}_{L^\infty}\cdot \left(s^{-1}\cdot 2^{-p_1+2k_1}L_1^{-1}\cdot [1+2^{k_2-k_1+\ell_1}]\right)^N \\
&\qquad \cdot\norm{P_{k_1,p_1}R_{\ell_1}(1,S)^N f_1}_{L^2}\norm{P_{k_2,p_2}R_{\ell_2}(1,S)^Nf_2}_{L^2}.
\end{aligned}
\end{equation}
\item\label{it:ibp_pq} Assume the localization parameters are such that $\abs{\bar\sigma}\cdot\chi\gtrsim L_2\gtrsim 2^{k_{\max}+k_{\min}}2^{p_{\max}+q_{\max}}$. Then we have for any $N\in\mathbb{N}$ that
\begin{equation}
\begin{aligned}
\norm{\mathcal{F}\left(\mathcal{Q}_{m\chi}(R_{\ell_1}f_1,R_{\ell_2}f_2)\right)}_{L^\infty}&\lesssim \norm{m\chi}_{L^\infty}\cdot \left(s^{-1}\cdot 2^{-p_1+2k_1}L_2^{-1}\cdot [1+2^{k_2-k_1}(2^{q_2-q_1}+2^{\ell_1})]\right)^N \\
&\qquad \cdot\norm{P_{k_1,p_1,q_1}R_{\ell_1}(1,S)^N f_1}_{L^2}\norm{P_{k_2,p_2,q_2}R_{\ell_2}(1,S)^Nf_2}_{L^2}.
\end{aligned} \end{equation} \end{enumerate} \medskip These claims hold symmetrically if the variables $\eta$, $\xi-\eta$ are exchanged. \end{lemma} We note that the precise estimates are slightly stronger, and in fact show that the loss of $2^{\ell_1}$ is accompanied by a gain of $c_{p}:=2^{p_1}+2^{p_2}$ resp.\ $c_{pq}:=2^{p_1+q_2}+2^{p_2+q_1}$. \begin{proof} \eqref{it:ibp_p} For simplicity of notation let us write $F=R_{\ell_1}f_1$, $G=R_{\ell_2}f_2$. By \eqref{eq:vflobound_0} in Lemma \ref{lem:vfsizes-mini} we may partition \begin{equation}\label{eq:vfloc_not} 1=(1-\chi_{V_\eta})+\chi_{V_\eta},\qquad \chi_{V_\eta}:=(1-\psi)(2^{-p_1+2k_1}L_1^{-1}\cdot V_\eta\Phi), \end{equation} where $V,V'\in\{S,\Omega\}$ are such that \begin{equation} \abs{V_\eta\Phi}\cdot\chi_\textnormal{h}\chi_{V_\eta}+\abs{V'_\eta\Phi}\cdot\chi_\textnormal{h}(1-\chi_{V_\eta})\gtrsim 2^{p_1-2k_1}L_1. \end{equation} We then have that \begin{equation}\label{eq:bilin-ibp-split} \mathcal{Q}_{m \chi_\textnormal{h}}(F,G)=\mathcal{Q}_{m \chi_\textnormal{h}\chi_{V_\eta}}(F,G)+\mathcal{Q}_{m \chi_\textnormal{h}(1-\chi_{V_\eta})}(F,G), \end{equation} and can integrate by parts in $V_\eta$ resp.\ $V'_\eta$ in the first resp.\ second term. We discuss in detail the first term on the right-hand side of \eqref{eq:bilin-ibp-split}, the second being almost identical. We begin with the proof of \eqref{it:ibp_p} for $N=1$: upon integration by parts in $V_\eta$ we have \begin{equation} \mathcal{Q}_{m \chi_\textnormal{h}\chi_{V_\eta}}(F,G)=is^{-1}\sum_{(W,Z)\in\mathcal{I}}\mathcal{Q}_{\mathcal{L}_{V,\eta}^{W,Z}(m\chi_\textnormal{h}\chi_{V_\eta})}(WF,ZG).
\end{equation} It suffices to estimate the four arising terms separately: \begin{itemize} \item $(W,Z)=(\textnormal{Id},S)$: Then we have that \begin{equation} \norm{\mathcal{F}\left(\mathcal{Q}_{\frac{m\chi_\textnormal{h}\chi_{V_\eta}}{V_\eta\Phi}}(F,SG)\right)}_{L^\infty}\lesssim \norm{m\chi_\textnormal{h}}_{L^\infty}2^{-p_1+2k_1}L_1^{-1}\cdot\norm{P_{k_1,p_1}F}_{L^2}\norm{P_{k_2,p_2}SG}_{L^2}. \end{equation} \item $(W,Z)=(S,\textnormal{Id})$: Here we have that \begin{equation} \abs{\mathcal{L}_{V,\eta}^{S,\textnormal{Id}}(m\chi_\textnormal{h}\chi_{V_\eta})}=\abs{\frac{1}{V_\eta\Phi}\Gamma^S_{V,\eta}\cdot m\chi_\textnormal{h}\chi_{V_\eta}}\lesssim \norm{m\chi_\textnormal{h}}_{L^\infty}\cdot 2^{k_2-k_1}\cdot 2^{-p_1+2k_1}L_1^{-1}, \end{equation} so that \begin{equation} \norm{\mathcal{F}\left(\mathcal{Q}_{\mathcal{L}_{V,\eta}^{S,\textnormal{Id}}(m\chi_\textnormal{h}\chi_{V_\eta})}(SF,G)\right)}_{L^\infty}\lesssim \norm{m\chi_\textnormal{h}}_{L^\infty} \cdot 2^{k_2-k_1}\cdot 2^{-p_1+2k_1}L_1^{-1}\cdot \norm{P_{k_1,p_1}SF}_{L^2}\norm{P_{k_2,p_2}G}_{L^2}. \end{equation} \item $(W,Z)=(\Upsilon,\textnormal{Id})$: Similarly we have \begin{equation} \abs{\mathcal{L}_{V,\eta}^{\Upsilon,\textnormal{Id}}(m\chi_\textnormal{h}\chi_{V_\eta})}=\abs{\frac{1}{V_\eta\Phi}\Gamma^\Upsilon_{V,\eta}\cdot m\chi_\textnormal{h}\chi_{V_\eta}}\lesssim \norm{m\chi_\textnormal{h}}_{L^\infty}\cdot (2^{p_1}+2^{p_2})\cdot 2^{k_2-k_1}\cdot 2^{-p_1+2k_1}L_1^{-1}, \end{equation} so that \begin{equation} \norm{\mathcal{F}\left(\mathcal{Q}_{\mathcal{L}_{V,\eta}^{\Upsilon,\textnormal{Id}}(m\chi_\textnormal{h}\chi_{V_\eta})}(\Upsilon F,G)\right)}_{L^\infty}\lesssim \norm{m\chi_\textnormal{h}}_{L^\infty} \cdot
(1+2^{p_2-p_1})2^{k_2+k_1}\cdot L_1^{-1}\cdot 2^{\ell_1} \cdot \norm{P_{k_1,p_1}F}_{L^2}\norm{P_{k_2,p_2}G}_{L^2}, \end{equation} having used that by \eqref{eq:UpsOmegas} and \eqref{eq:R-Bernstein} there holds \begin{equation} \norm{\Upsilon R_{\ell_1}f_1}_{L^2}\lesssim 2^{\ell_1}\norm{R_{\ell_1}f_1}_{L^2}. \end{equation} \item $(W,Z)=(\textnormal{Id},\textnormal{Id})$: Here we have by Lemma \ref{lem:vfQ} (and by direct computation on the localizations $\chi_\textnormal{h}$ and $\chi_{V_\eta}$) that \begin{equation} \abs{\mathcal{L}_{V,\eta}^{\textnormal{Id},\textnormal{Id}}(m\chi_\textnormal{h}\chi_{V_\eta})}\lesssim \norm{m\chi_\textnormal{h}}_{L^\infty}[1+2^{k_2-k_1}(1+2^{p_2-p_1})]\cdot 2^{-p_1+2k_1}L_1^{-1}, \end{equation} so that \begin{equation} \norm{\mathcal{F}\left(\mathcal{Q}_{\mathcal{L}_{V,\eta}^{\textnormal{Id},\textnormal{Id}}(m\chi_\textnormal{h}\chi_{V_\eta})}(F,G)\right)}_{L^\infty}\lesssim \norm{m\chi_\textnormal{h}}_{L^\infty}\cdot 2^{-p_1+2k_1}L_1^{-1}\cdot [1+2^{k_2-k_1}(1+2^{p_2-p_1})]\cdot \norm{P_{k_1,p_1}F}_{L^2}\norm{P_{k_2,p_2}G}_{L^2}. \end{equation} \end{itemize} Since $\ell_j+p_j\geq 0$, by iteration and Lemma \ref{lem:vfQ} we obtain the claim \eqref{it:ibp_p} for general $N\in\mathbb{N}$. The proof of \eqref{it:ibp_pq} is similar. The only difference arises from the case where the vector fields land on the localization functions $\chi$. Here we observe that \begin{equation} \abs{V_\eta(\varphi_{k_1,p_1,q_1}(\xi-\eta))}\lesssim \left(1+2^{k_2-k_1}(1+2^{p_2-p_1}+2^{q_2-q_1})\right)\cdot \widetilde\varphi_{k_1,p_1,q_1}(\xi-\eta). \end{equation} Hence \eqref{it:ibp_pq} is proved.
\end{proof} \subsubsection{A ``vertical'' variant}\label{ssec:D3ibp} When no localizations in $\Lambda$ are involved, a zero-homogeneous version of the vertical derivative can also be useful for iterated integrations by parts: We let \begin{equation} D_3^\eta:=\abs{\eta}\partial_{\eta_3}=\Lambda(\eta)S_\eta-\sqrt{1-\Lambda^2(\eta)}\Upsilon_\eta, \end{equation} and note that \begin{equation}\label{eq:D3basics} D_3^\eta\left(\Lambda(\eta)\right)=1-\Lambda^2(\eta),\qquad D_3^\eta\left(\sqrt{1-\Lambda^2(\eta)}\right)=-\Lambda(\eta)\sqrt{1-\Lambda^2(\eta)}, \end{equation} as well as \begin{equation} D_3^\eta\left(\Lambda(\xi-\eta)\right)=-\frac{\abs{\eta}}{\abs{\xi-\eta}}(1-\Lambda^2(\xi-\eta)),\qquad D_3^\eta\left(\sqrt{1-\Lambda^2(\xi-\eta)}\right)=\frac{\abs{\eta}}{\abs{\xi-\eta}}\Lambda(\xi-\eta)\sqrt{1-\Lambda^2(\xi-\eta)}.
\end{equation} Thus \begin{equation} \begin{aligned} D_3^\eta \varphi(2^{-p_2}\sqrt{1-\Lambda^2}(\eta))&=-2^{-p_2}\Lambda(\eta)\sqrt{1-\Lambda^2(\eta)}\varphi'(2^{-p_2}\sqrt{1-\Lambda^2}(\eta))\\ &= -\Lambda(\eta)\cdot \widetilde{\varphi}(2^{-p_2}\sqrt{1-\Lambda^2}(\eta)),\\ D_3^\eta \varphi(2^{-p_1}\sqrt{1-\Lambda^2}(\xi-\eta))&=2^{-p_1}\frac{\abs{\eta}}{\abs{\xi-\eta}}\Lambda(\xi-\eta)\sqrt{1-\Lambda^2(\xi-\eta)}\varphi'(2^{-p_1}\sqrt{1-\Lambda^2}(\xi-\eta))\\ &= \Lambda(\xi-\eta)\cdot\frac{\abs{\eta}}{\abs{\xi-\eta}} \widetilde{\varphi}(2^{-p_1}\sqrt{1-\Lambda^2}(\xi-\eta)). \end{aligned} \end{equation} Together with $D_3^\eta\abs{\eta}=\abs{\eta}\Lambda(\eta)$, $D_3^\eta\abs{\xi-\eta}=-\abs{\eta}\Lambda(\xi-\eta)$ and the fact that $D_3^\eta=-\frac{\abs{\eta}}{\abs{\xi-\eta}}D_3^{\xi-\eta}$ we thus have that \begin{equation} \begin{aligned} D_3^\eta \,\mathcal{F}\left(P_{k_2,p_2}R_{\ell_2} G\right)(\eta)&\sim \Lambda(\eta) \mathcal{F}\left(P_{k_2,p_2}R_{\ell_2} (1,S)G\right)(\eta)+ 2^{\ell_2+p_2}\mathcal{F}\left(P_{k_2,p_2}R_{\ell_2} G\right)(\eta),\\ D_3^\eta\, \mathcal{F}\left(P_{k_1,p_1}R_{\ell_1} F\right)(\xi-\eta)&\sim 2^{k_2-k_1}\left[ \Lambda(\xi-\eta) \mathcal{F}\left(P_{k_1,p_1}R_{\ell_1} (1,S)F\right)(\xi-\eta)+ 2^{\ell_1+p_1}\mathcal{F}\left(P_{k_1,p_1}R_{\ell_1} F\right)(\xi-\eta)\right].
\end{aligned} \end{equation} To make use of this in an iterated integration by parts we also need to control $D_3^\eta\Phi$. From \eqref{eq:D3basics} we see iteratively that for any $M\in\mathbb{N}$ there holds \begin{equation}\label{eq:D3basics2} \abs{(D_3^\eta)^M\Lambda(\eta)}\lesssim 1-\Lambda^2(\eta),\qquad \abs{(D_3^\eta)^M\Lambda(\xi-\eta)}\lesssim \frac{\abs{\eta}^M}{\abs{\xi-\eta}^M}(1-\Lambda^2(\xi-\eta)). \end{equation} \subsubsection*{Example} In the particular case where $2^{k_2}\sim 2^{k_1}$ and $0\gg p_2\gg p_1$, we have that $\abs{D_3^\eta\Phi}\sim 2^{2p_2}$, and with \eqref{eq:D3basics2} we see analogously as in Section \ref{ssec:vfibp_bds} that repeated integration by parts along $D_3^\eta$ in a term $\mathcal{Q}_{\mathfrak{m}}(P_{k_1,p_1}R_{\ell_1} F,P_{k_2,p_2}R_{\ell_2} G)$ is beneficial if \begin{equation}\label{eq:D3ibpcond} 2^{-2p_2}\left(1+2^{\ell_1+p_1}+2^{\ell_2+p_2}\right)<s^{1-\delta}. \end{equation} \subsubsection{A preliminary lemma to organize cases} Since we have a multitude of parameters that govern the losses and gains when integrating by parts as in Lemma \ref{lem:new_ibp}, it is useful to get an overview of the natural restrictions. To guide the organization of cases later on we will make use of the following result: \begin{lemma}\label{lem:gapp-cases} Assume that $p\leq \min\{p_1,p_2\}-10$. Then on the support of $\chi_\textnormal{h}$ there holds that $p+k<p_1+k_1-4$, and thus $p_2+k_2-2\leq p_1+k_1\leq p_2+k_2+2$. Moreover, one of the following options holds: \begin{enumerate} \item\label{it:gapp-op1} $\abs{k_1-k_2}\leq 4$, and thus also $\abs{p_1-p_2}\leq 6$; \item\label{it:gapp-op2} $k_2<k_1-4$; then $\abs{k-k_1}\leq 2$ and $p_1\leq p_2-2$, so that $p\leq p_1-10\leq p_2-12$;
\item $k_1<k_2-4$; then $\abs{k-k_2}\leq 2$ and $p_2\leq p_1-2$, so that $p\leq p_2-10\leq p_1-12$. \end{enumerate} \end{lemma} \begin{remark}\label{lem:gapp-casesRem} We comment on a few points: \begin{enumerate} \item The analogous result applies with the roles of $p,p_i$ permuted. \item The analogous results hold in the variables $q,q_i$ on the support of $\chi$ in case of a gap $q_{\min}\leq q_{\max}-10$. \end{enumerate} \end{remark} \blue{\textbf{Notation.} Since the constants involved here and in many similar case-by-case analyses later on are independent of the other important parameters in our proofs, we will use the slightly less formal $\ll$, $\sim$, etc. Since the decisive scales are usually given in terms of parameters in dyadic decompositions, to unburden the notation we will use the same symbols to denote both multiplicative bounds (resp.\ equivalences) at the level of the dyadic scales $2^{n}$, and additive bounds at the level of the parameter $n\in\mathbb{Z}$, where the distinction is clear from the context. For example, we will refer to the assumption of Lemma \ref{lem:gapp-cases} as $p\ll \min\{p_1,p_2\}$, and will take $p\sim 0$ as equivalent to $2^p\sim 1$, namely that there exists $C\in\mathbb{N}$ such that $-C<p<C$.
} \medskip \begin{figure*}[h] \centering \begin{subfigure}[t]{0.5\textwidth} \centering \includegraphics[height=4cm]{vectors1.pdf} \caption*{Case \eqref{it:gapp-op1}} \end{subfigure} ~ \begin{subfigure}[t]{0.5\textwidth} \centering \includegraphics[height=7cm]{vectors2.pdf} \caption*{Case \eqref{it:gapp-op2}} \end{subfigure} \caption{Exemplary illustration of the scenarios of Lemma \ref{lem:gapp-cases} in Cartesian coordinates.}\label{fig:gapp} \end{figure*} For the proof it is convenient to visualize the triangle of frequencies $\xi,\xi-\eta,\eta$ -- see also Figure \ref{fig:gapp} for an illustration. \begin{proof}[Proof of Lemma \ref{lem:gapp-cases}] Consider $(\xi,\eta)\in\textnormal{supp}(\chi_\textnormal{h})$. Let $p\leq \min\{p_1,p_2\}-10$, and assume for the sake of contradiction that $ p+k\geq p_1+k_1-4$. Then from $\eta_\textnormal{h}=\xi_\textnormal{h}-(\xi_\textnormal{h}-\eta_\textnormal{h})$ we have that $p_2+k_2\leq p+k+6$, and it also follows that $k_1\leq p-p_1+k+4\leq k-6$, and hence $k-2\leq k_2\leq k+2$ since $\eta=\xi-(\xi-\eta)$. But then we arrive at the contradiction that $p\geq p_2+k_2-k-6\geq p_2-8$. Hence we conclude that $p+k<p_1+k_1-4$, and thus $p_1+k_1\in[p_2+k_2-2,p_2+k_2+2]$. Moreover, if $\abs{k_1-k_2}\leq 4$, then it follows that $\abs{p_1-p_2}\leq 6$. Finally, if $k_2< k_1-4$, then $\abs{k-k_1}\leq 2$ and thus $p_1-p_2\leq k_2-k_1+2\leq -2$, so that $p\leq p_1-10\leq p_2-12$. The third statement is the symmetric version upon exchanging the roles of $\xi-\eta$ and $\eta$. \end{proof} \subsection{Remark on normal forms}\label{ssec:nfs} In the bilinear expressions we encounter we will also perform normal forms.
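Let us briefly recall the standard mechanism behind such normal forms (recorded here only for orientation): away from the zero set of the phase one may trade a factor of $\Phi^{-1}$ for a time derivative via the elementary identity
\begin{equation}
e^{is\Phi}=\frac{1}{i\Phi}\,\partial_s\big(e^{is\Phi}\big),\qquad \Phi\neq 0,
\end{equation}
so that an integration by parts in $s$ in a Duhamel-type expression converts a bilinear term with multiplier $\mathfrak{m}$ into a boundary term with multiplier $\Phi^{-1}\mathfrak{m}$ plus bulk terms in which the time derivative falls on one of the profiles. This is exactly the structure of the three terms in the bound \eqref{mdecompNFNRNorms} below.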
For a parameter $\lambda>0$ to be chosen we decompose the multiplier into ``resonant'' and ``non-resonant'' parts \begin{equation}\label{eq:mdecompNF} \mathfrak{m}(\xi,\eta)=\psi(\lambda^{-1}\Phi)\mathfrak{m}(\xi,\eta)+(1-\psi(\lambda^{-1}\Phi))\mathfrak{m}(\xi,\eta)=:\mathfrak{m}^{res}(\xi,\eta)+\mathfrak{m}^{nr}(\xi,\eta), \end{equation} and correspondingly have that \begin{equation} \mathcal{B}_\mathfrak{m}(F_1,F_2)=\mathcal{B}_{\mathfrak{m}^{res}}(F_1,F_2)+\mathcal{B}_{\mathfrak{m}^{nr}}(F_1,F_2). \end{equation} A direct integration by parts in time yields that \begin{equation}\label{mdecompNFNRNorms} \norm{P_{k,p}\mathcal{B}_{\mathfrak{m}^{nr}}(F_1,F_2)}_{L^2}\lesssim \norm{P_{k,p}\mathcal{Q}_{\Phi^{-1}\mathfrak{m}^{nr}}(F_1,F_2)}_{L^2}+\norm{P_{k,p}\mathcal{B}_{\Phi^{-1}\mathfrak{m}^{nr}}(\partial_tF_1,F_2)}_{L^2}+\norm{P_{k,p}\mathcal{B}_{\Phi^{-1}\mathfrak{m}^{nr}}(F_1,\partial_tF_2)}_{L^2}. \end{equation} \begin{lemma}\label{lem:nfs} Let $\lambda>0$ and $G_j=P_{k_j,p_j}G_j$. We have the following bounds: \begin{enumerate} \item\label{it:NF-bd1} The non-resonant part satisfies \begin{equation}\label{eq:NF-bd1} \norm{P_{k,p}\mathcal{Q}_{\Phi^{-1}\mathfrak{m}^{nr}}(G_1,G_2)}_{L^2}\lesssim 2^{k+p_{\max}}\lambda^{-1}\cdot \abs{\mathcal{S}} \norm{G_1}_{L^2}\norm{G_2}_{L^2}.
\end{equation} \item\label{it:NF-bd2} If we can choose $\lambda>0$ such that $\abs{\Phi\chi_\textnormal{h}}\geq\lambda\gtrsim 1$, then we have that $\mathfrak{m}^{res}=0$ and thus $\mathfrak{m}=\mathfrak{m}^{nr}$, and in addition to \eqref{eq:NF-bd1} there holds the alternative bound \begin{equation}\label{eq:NF-bd2} \norm{P_{k,p}\mathcal{Q}_{\Phi^{-1}\mathfrak{m}^{nr}}(G_1,G_2)}_{L^2}\lesssim 2^{k+p_{\max}}\cdot \min\{\norm{e^{it\Lambda} G_1}_{L^\infty}\norm{G_2}_{L^2},\norm{G_1}_{L^2}\norm{e^{it\Lambda}G_2}_{L^\infty}\}. \end{equation} \item\label{it:NF-bd34} If there holds that $\abs{\partial_{\eta_3}\Phi\,\chi_\textnormal{h}}\gtrsim L>0$, then we also have the following set size gains: \begin{equation}\label{eq:NF-bd3} \norm{P_{k,p}\mathcal{Q}_{\mathfrak{m}^{res}}(G_1,G_2)}_{L^2}\lesssim 2^{k+p_{\max}}\lambda^{\frac{1}{2}} L^{-\frac{1}{2}}\cdot \min\{2^{k_1+p_1},2^{k_2+p_2}\}\norm{G_1}_{L^2}\norm{G_2}_{L^2}, \end{equation} and \begin{equation}\label{eq:NF-bd4} \norm{P_{k,p}\mathcal{Q}_{\Phi^{-1}\mathfrak{m}^{nr}}(G_1,G_2)}_{L^2}\lesssim \abs{\log\lambda}2^{k+p_{\max}}\lambda^{-\frac{1}{2}} L^{-\frac{1}{2}}\cdot \min\{2^{k_1+p_1},2^{k_2+p_2}\}\norm{G_1}_{L^2}\norm{G_2}_{L^2}. \end{equation} \item The analogous bounds hold when additional localizations in $q, q_j$, $j=1,2$, are considered. \end{enumerate} \end{lemma} \begin{proof} The first claim \eqref{it:NF-bd1} follows from Lemma \ref{lem:set_gain} and the fact that $\abs{\Phi^{-1}\mathfrak{m}^{nr}\chi_\textnormal{h}}\lesssim 2^{k+p_{\max}}\lambda^{-1}$.
For \eqref{it:NF-bd2} it suffices to notice that under these assumptions, by Lemma \ref{lem:phasesymb_bd} there holds that $\norm{\Phi^{-1}\mathfrak{m}^{nr}}_{\widetilde{\mathcal{W}}_\textnormal{h}}\lesssim 2^{k+p_{\max}}$. Finally, \eqref{eq:NF-bd3} follows with the improved set size gain of Lemma \ref{lem:set_gain2}. To obtain \eqref{eq:NF-bd4}, we further decompose \begin{equation} \mathfrak{m}^{nr}(\xi,\eta)=\sum_{r\geq 1}\mathfrak{m}_r(\xi,\eta),\qquad \mathfrak{m}_r(\xi,\eta)=\varphi(2^{-r}\lambda^{-1}\Phi(\xi,\eta))\mathfrak{m}^{nr}(\xi,\eta), \end{equation} and correspondingly $\mathcal{Q}_{\Phi^{-1}\mathfrak{m}^{nr}}(G_1,G_2)=\sum_{r\geq 1}\mathcal{Q}_{\Phi^{-1}\mathfrak{m}_r}(G_1,G_2)$. For these we invoke again Lemma \ref{lem:set_gain2} and note that since $\abs{\Phi}\lesssim 1$ there are at most $\abs{\log\lambda}$ terms. \end{proof} \section{Bounds for $\partial_t S^Nf$ in $L^2$}\label{sec:dtfbds} We have the following estimates for the time derivative of the dispersive unknowns. We note that since these are bilinear expressions in three dimensions, without further removal of resonant parts this is the fastest decay (up to minor losses) one can hope for. \begin{lemma}\label{lem:dtfinL2} Let $f$ be a dispersive unknown of the Euler-Coriolis system, and assume the bootstrap assumptions \eqref{eq:btstrap-assump}. Then there exists $0<\gamma\ll \beta$ such that for $m\in\mathbb{N}$ and $t\in [2^m,2^{m+1})\cap [0,T]$ there holds that \begin{equation} \norm{\partial_t P_k S^bf(t)}_{L^2}\lesssim 2^{\frac{k}{2}-k^+}\cdot 2^{-\frac{3}{2}m+\gamma m}\cdot \varepsilon_1^2, \qquad 0\leq b\leq N.
\end{equation} \end{lemma} \begin{proof} We know that $\partial_t P_k S^bf$, $0\leq b\leq N$, is a sum of terms of the form $\sum_{b_1+b_2\leq N} P_k \mathcal{Q}_\mathfrak{m}(S^{b_1}F_1,S^{b_2}F_2)$ with $\mathfrak{m}$ a multiplier as in Lemma \ref{lem:ECmult} and $F_j\in\{\mathcal{U}_+,\mathcal{U}_-\}$ dispersive unknowns, $j=1,2$, so it suffices to bound such expressions in $L^2$. Localizing the inputs in frequency we have that \begin{equation} \norm{P_k \mathcal{Q}_\mathfrak{m}(S^{b_1}F_1,S^{b_2}F_2)}_{L^2}\lesssim \sum_{k_1,k_2\in\mathbb{Z}}\norm{P_k\mathcal{Q}_\mathfrak{m}(P_{k_1}S^{b_1}F_1,P_{k_2}S^{b_2}F_2)}_{L^2}. \end{equation} By the energy estimates \eqref{eq:interpol_energy} and direct bounds we have that \begin{equation} \norm{P_k\mathcal{Q}_\mathfrak{m}(P_{k_1}S^{b_1}F_1,P_{k_2}S^{b_2}F_2)}_{L^2}\lesssim 2^{k+\frac{3}{2}k_{\min}}\cdot 2^{-N_0(k_1^++k_2^+)}\cdot\norm{S^{b_1}F_1}_{H^{N_0}}\norm{S^{b_2}F_2}_{H^{N_0}}, \end{equation} so the claim follows provided that $k_{\max}\geq 2N_0^{-1} m$ or $k_{\min}\leq -2m$. With $\delta_0= 2N_0^{-1}$ we will thus assume that $-2m< k,k_1,k_2<\delta_0 m$. Localizing further in $p,p_i$ and $\ell_i$, $i=1,2$, and writing $f_i=P_{k_i,p_i}R_{\ell_i}S^{b_i}F_i$ for simplicity of notation, we can further assume that $p,p_i\geq -2m$ and $\ell_i\leq 2m$, since \begin{equation} \begin{aligned} \norm{P_{k,p}\mathcal{Q}_\mathfrak{m}(f_1,f_2)}_{L^2}\lesssim 2^{k+\frac{3}{2}k_{\max}+p_{\min}} &\min\{2^{p_1}\norm{f_1}_{B},2^{-\ell_1}\norm{f_1}_X\} \cdot \min\{2^{p_2}\norm{f_2}_B,2^{-\ell_2}\norm{f_2}_X\}.
\end{aligned} \end{equation} Assuming without loss of generality that $k_2\leq k_1$ (so that $2^{k_{\max}+k_{\min}}\sim 2^{k+k_2}$), it thus suffices to show that \begin{equation}\label{eq:dtfclaim} \norm{P_{k,p}\mathcal{Q}_\mathfrak{m}(f_1,f_2)}_{L^2}\lesssim 2^{-\frac{3}{2}m+\frac{\gamma}{2} m}\cdot\varepsilon_1^2. \end{equation} The basic strategy for this is to either repeatedly integrate by parts as in Lemma \ref{lem:new_ibp}, or to use the $X$ and $B$ norm bounds. For this it is useful to distinguish cases based on the localizations and whether there are size gaps, first in $p$, $p_i$ (Case 1), then in $q$, $q_i$ (Case 2), since such gaps give lower bounds for $\abs{\bar\sigma}$ and thus for $V_\zeta\Phi$, $\zeta\in\{\eta,\xi-\eta\}$, as per Lemma \ref{lem:vfsizes-mini}. At the end (Case 3) this leaves us with the setting where these localizations are comparable. \subsubsection*{Case 1: Gap in $p$} Here we assume that $p_{\min}\ll p_{\max}$. By Lemma \ref{lem:vfsizes-mini} we have $\abs{\bar\sigma}\gtrsim 2^{p_{\max}}2^{k+k_2}$, and we may choose $V\in\{S,\Omega\}$ such that \begin{equation} \abs{V_{\xi-\eta}\Phi}\sim 2^{p_2-2k_2}2^{p_{\max}+k+k_2}=2^{p_2+p_{\max}}2^{k-k_2}, \end{equation} where we used that by convention $k_2\leq k_1$, so that $k_{\min}\in\{k,k_2\}$. \subsubsection*{Case 1.1: $k_2=k_{\min}$} Here we have that $2^k\sim 2^{k_1}$.
Then from Lemma \ref{lem:new_ibp}\eqref{it:ibp_p} we see that repeated integration by parts along $V_{\xi-\eta}$ gives for $K\in\mathbb{N}$ that \begin{equation} \begin{aligned} \norm{P_{k,p}\mathcal{Q}_\mathfrak{m}(f_1,f_2)}_{L^2}&\lesssim 2^{k+\frac{3}{2}k_2}\left(2^{-m}2^{-p_2-p_{\max}}\cdot 2^{k_2-k}\cdot2^{k_1-k_2}[1+2^{\ell_2}]\right)^K\cdot \norm{(1,S)^Kf_1}_{L^2}\norm{(1,S)^Kf_2}_{L^2}\\ &\lesssim 2^{k+\frac{3}{2}k_2}\left(2^{-m}2^{-p_2-p_{\max}}2^{\ell_2}\right)^K\cdot\varepsilon_1^2. \end{aligned} \end{equation} Choosing $K =O(M)\gg 1$ yields the claim, provided that for $\delta=2K^{-1}=O(M^{-1})$ we have (since $\ell_2+p_2\geq 0$) \begin{equation} -p_{\max}+2\ell_2\leq (1-\delta)m. \end{equation} If on the other hand $\ell_2>(1-\delta)\frac{m}{2}+\frac{p_{\max}}{2}$, using an $L^\infty\times L^2$ estimate and \eqref{BoundsOnMAddedBenoit}, as well as Corollary \ref{cor:extrapol_decay} with $\kappa\ll\beta$ if $f_1$ has more than $N-3$ vector fields, we have that \begin{equation}\label{eq:pgap_bd} \begin{aligned} \norm{P_{k,p}\mathcal{Q}_\mathfrak{m}(f_1,f_2)}_{L^2}&\lesssim 2^{k+p_{\max}} \norm{e^{it\Lambda}f_1}_{L^\infty}\cdot 2^{-\ell_2}\norm{f_2}_{X}\lesssim 2^{k+p_{\max}}2^{-m+\kappa m}\cdot 2^{-\frac{m}{2}+\delta\frac{m}{2}-\frac{p_{\max}}{2}}\varepsilon_1^2\\ &\lesssim 2^k2^{-\frac{3}{2}m+(\frac{\delta}{2}+\kappa) m}\cdot \varepsilon_1^2. \end{aligned} \end{equation} \subsubsection*{Case 1.2: $k=k_{\min}$} We now aim to show \eqref{eq:dtfclaim} in the case $k=k_{\min}$, where in particular $2^{k_1}\sim 2^{k_2}$. This can be done as in Case 1.1 above, with the difference that now there may be a loss in $k$.
This, however, is recovered directly by the multiplier $\mathfrak{m}$, so we can proceed in close parallel to Case 1.1. By Lemma \ref{lem:new_ibp}, integration by parts is feasible provided that \begin{equation} 2^{-m}\cdot 2^{-p_2-p_{\max}-k+k_2}\cdot (1+2^{p_1-p_2}+2^{\ell_2})<2^{-\delta m}, \end{equation} which can be guaranteed by requiring that $-p_{\max}+2\ell_2-k+k_2\leq (1-\delta)m$. If on the other hand $-p_{\max}+2\ell_2-k+k_2> (1-\delta)m$, then as in \eqref{eq:pgap_bd} we have \begin{equation}\label{eq:pgap_bd-2} \begin{aligned} \norm{P_{k,p}\mathcal{Q}_\mathfrak{m}(f_1,f_2)}_{L^2}&\lesssim 2^{k+p_{\max}} \norm{e^{it\Lambda}f_1}_{L^\infty}\cdot 2^{-\ell_2}\norm{f_2}_{X}\lesssim 2^{k+p_{\max}}2^{-m+\kappa m}2^{-\frac{1-\delta}{2}m-\frac{k}{2}+\frac{k_2}{2}}\cdot \varepsilon_1^2\\ &\lesssim 2^{\frac{k+p_{\max}}{2}+\frac{k_2}{2}}2^{-\frac{3}{2}m+(\kappa+\frac{\delta}{2}) m}\varepsilon_1^2 \lesssim 2^{\frac{k}{2}}2^{-\frac{3}{2}m+(\frac{\delta}{2}+\kappa+\delta_0) m}\varepsilon_1^2. \end{aligned} \end{equation} \medskip We may henceforth assume that $2^{p_{\max}}\sim 2^{p_{\min}}$. \subsubsection*{Case 2: Gap in $q$} Now we localize further in $q,q_i$, and write $g_i=P_{k_i,p_i,q_i}R_{\ell_i}f_i$, $i=1,2$.
Then by $B$ norm estimates and the set size bound $\abs{\mathcal{S}}\lesssim 2^{\frac{3}{2}k_{\max}}2^{p+\frac{q}{2}}$ we can assume that $q,q_i\geq -4m$, since if $q_{\min}<-4m$ there holds \begin{equation} \begin{aligned} \norm{P_{k,p,q}\mathcal{Q}_\mathfrak{m}(g_1,g_2)}_{L^2}&\lesssim 2^{\frac{5}{2}k+p+\frac{q}{2}}\cdot 2^{p_1+\frac{q_1}{2}}\norm{g_1}_B 2^{p_2+\frac{q_2}{2}}\norm{g_2}_B\lesssim 2^{\frac{5}{2}k}2^{-2m}\varepsilon_1^2. \end{aligned} \end{equation} Assuming now that $q_{\min}\ll q_{\max}$, we have that $2^{p_{\max}}\sim 1$, and thus by the previous case that $2^{p_{\max}}\sim 2^{p_{\min}}\sim 1$. Analogously to before we choose $V\in\{S,\Omega\}$ such that \begin{equation} \abs{V_{\xi-\eta}\Phi}\sim 2^{-2k_2}2^{q_{\max}+k+k_2}=2^{q_{\max}}2^{k-k_2}. \end{equation} \subsubsection*{Case 2.1: $k_2=k_{\min}$} By Lemma \ref{lem:new_ibp}\eqref{it:ibp_pq}, repeated integration by parts along $V_{\xi-\eta}$ gives \begin{equation} \begin{aligned} \norm{P_{k,p,q}\mathcal{Q}_\mathfrak{m}(g_1,g_2)}_{L^2}&\lesssim 2^{k+\frac{3}{2}k_2}\left(2^{-m}\cdot 2^{-q_{\max}}2^{k_2-k}\cdot 2^{k-k_2}[1+2^{q_1-q_2}+2^{\ell_2}]\right)^K\cdot \norm{(1,S)^Kg_1}_{L^2}\norm{(1,S)^Kg_2}_{L^2}\\ &\lesssim 2^{k+\frac{3}{2}k_2}\left(2^{-m}2^{-q_{\max}}\max\{2^{q_1-q_2},2^{\ell_2}\}\right)^K\cdot\varepsilon_1^2, \end{aligned} \end{equation} and the claim follows provided that \begin{equation} 2^{-m}2^{-q_{\max}}\max\{2^{q_1-q_2},2^{\ell_2}\}<2^{-\delta m}. \end{equation} Case 2.1a: In case $\ell_2\leq q_{1}-q_2$ this is satisfied if $q_2\geq (-1+\delta)m$.
If on the other hand $q_2<(-1+\delta)m$, then by an $L^\infty\times L^2$ norm estimate and \eqref{BoundsOnMAddedBenoit} we have \begin{equation} \begin{aligned} \norm{P_{k,p,q}\mathcal{Q}_\mathfrak{m}(g_1,g_2)}_{L^2}&\lesssim 2^{k+q_{\max}}\cdot \norm{e^{it\Lambda}g_1}_{L^\infty}\cdot 2^{\frac{q_2}{2}}\norm{g_2}_{B}\lesssim 2^{k}\cdot 2^{-m+\kappa m}\cdot 2^{-\frac{1-\delta}{2}m}\varepsilon_1^2\lesssim 2^k 2^{-\frac{3}{2}m+(\kappa+\frac{\delta}{2})m}\cdot \varepsilon_1^2. \end{aligned} \end{equation} Case 2.1b: In case $\ell_2> q_{1}-q_2$ we can repeatedly integrate by parts if $\ell_2-q_{\max}\leq (1-\delta)m$. Otherwise we use an $L^\infty\times L^2$ estimate to get that \begin{equation} \begin{aligned} \norm{P_{k,p,q}\mathcal{Q}_\mathfrak{m}(g_1,g_2)}_{L^2}&\lesssim 2^{k+q_{\max}} \norm{e^{it\Lambda}g_1}_{L^\infty}\cdot 2^{-\ell_2}\norm{g_2}_{X}\lesssim 2^k2^{-2m+(\kappa+\delta) m}\cdot \varepsilon_1^2. \end{aligned} \end{equation} \subsubsection*{Case 2.2: $k=k_{\min}$ and $\vert k_1-k_2\vert\le 10$} By Lemma \ref{lem:new_ibp}\eqref{it:ibp_pq}, repeated integration by parts along $V_{\xi-\eta}$ is feasible if \begin{equation} 2^{-m}2^{-q_{\max}}2^{k_2-k}\max\{2^{q_1-q_2},2^{\ell_2}\}<2^{-\delta m}. \end{equation} If this condition is violated we distinguish cases as above in Cases 2.1a resp.
2.1b: either $-q_2+k_2-k>(1-\delta) m$, and then \begin{equation} \begin{aligned} \norm{P_{k,p,q}\mathcal{Q}_\mathfrak{m}(g_1,g_2)}_{L^2}&\lesssim 2^{k+q_{\max}}\cdot \norm{e^{it\Lambda}g_1}_{L^\infty}\cdot 2^{\frac{q_2}{2}}\norm{g_2}_{B}\lesssim 2^{\frac{k+k_2}{2}}\cdot 2^{-m+\kappa m}\cdot 2^{-\frac{1-\delta}{2}m}\varepsilon_1^2\\ &\lesssim 2^{\frac{k}{2}}2^{-\frac{3}{2}m+(\kappa+\frac{\delta+\delta_0}{2}) m}\cdot \varepsilon_1^2, \end{aligned} \end{equation} or $\ell_2-q_{\max}+k_2-k>(1-\delta) m$, and then \begin{equation} \begin{aligned} \norm{P_{k,p,q}\mathcal{Q}_\mathfrak{m}(g_1,g_2)}_{L^2}&\lesssim 2^{k+q_{\max}} \norm{e^{it\Lambda}g_1}_{L^\infty}\cdot 2^{-\frac{\ell_2}{2}}\norm{g_2}_{X}\lesssim 2^{\frac{k+k_2}{2}}\cdot 2^{-m+\kappa m}\cdot 2^{-\frac{1-\delta}{2}m}\cdot \varepsilon_1^2\\ &\lesssim 2^{\frac{k}{2}} 2^{-\frac{3}{2}m+(\kappa+\frac{\delta+\delta_0}{2}) m}\cdot \varepsilon_1^2. \end{aligned} \end{equation} \medskip Thus the only scenario we are left with is the following: \subsubsection*{Case 3: No gaps} In this case we have that $2^p\sim 2^{p_i}$ and $2^q\sim 2^{q_i}$, $i=1,2$. As before we can also assume that $p,q\ge-4m$.
Then we are done by a direct $L^2\times L^\infty$ estimate: Assuming without loss of generality that $g_1$ has fewer vector fields than $g_2$ (i.e.\ $b_1\leq b_2$ in our original notation), we have by Proposition \ref{prop:decay} that $e^{it\Lambda} P_{k_1,p_1,q_1}R_{\ell_1}g_1=I_{1,1}+I_{1,2}$ with \begin{equation} \norm{I_{1,1}}_{L^\infty}\lesssim \varepsilon_1\cdot 2^{-\frac{3}{2}\vert k_1\vert} 2^{-p-\frac{q}{2}}t^{-\frac{3}{2}},\quad \norm{I_{1,2}}_{L^2}\lesssim \varepsilon_1 \cdot 2^{-\frac{3}{2}\vert k_1\vert}(t2^p)^{-1}\mathfrak{1}_{2^{2p+q}\gtrsim t^{-1}}, \end{equation} so that using \eqref{BoundsOnMAddedBenoit}, \begin{equation} \begin{aligned} \norm{P_{k,p,q}\mathcal{Q}_\mathfrak{m}(g_1,g_2)}_{L^2}&\lesssim 2^{p+q+k}[\norm{I_{1,1}}_{L^\infty}2^{p+\frac{q}{2}}\norm{g_2}_{B}+\norm{I_{1,2}}_{L^2}\norm{e^{it\Lambda}g_2}_{L^\infty}]\lesssim 2^{k}\cdot 2^{-\frac{3}{2}m}\cdot \varepsilon_1^2, \end{aligned} \end{equation} where we have used the dispersive decay (at rate at least $t^{-1/2}$) of $e^{it\Lambda}g_2$, which in case $g_2$ has more than $N-3$ vector fields follows by interpolation (see Lemma \ref{lem:interpol} resp.\ Corollary \ref{cor:extrapol_decay}). \end{proof} \section{Energy estimates and $B$ norm bounds}\label{sec:Bnorm} It is classical to obtain energy estimates for \eqref{eq:EC}.
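Schematically (and suppressing the precise structure of \eqref{eq:EC}), the mechanism is the usual one: the Coriolis and pressure terms are skew with respect to the $L^2$ pairing for divergence-free fields, so a standard commutator estimate for the transport term yields
\begin{equation}
\frac{d}{dt}\Vert \bm{u}(t)\Vert_{H^n}^2\lesssim \left(\Vert \bm{u}(t)\Vert_{L^\infty}+\Vert \nabla_x\bm{u}(t)\Vert_{L^\infty}\right)\Vert \bm{u}(t)\Vert_{H^n}^2,
\end{equation}
which upon integration in time gives the first bound of the following proposition, the weight $\frac{ds}{1+s}$ simply accounting for the factor $(1+s)$ in the definition of $\alpha(s)$.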
As we showed in \cite[Proposition 5.1]{rotE}, both derivatives and vector fields can be controlled in $L^2$ as follows:
\begin{proposition}[Proposition 5.1 in \cite{rotE}]\label{prop:EnergyIncrement}
Assuming that $\bm{u}$ solves \eqref{eq:EC} on $0\le t\le T$, for $n\in\mathbb{N}$ there holds that
\begin{equation}\label{EnergyIncrementEq}
\begin{split}
\Vert \bm{u}(t)\Vert_{H^{n}}^2-\Vert \bm{u}(0)\Vert_{H^{n}}^2&\lesssim \int_{s=0}^t\alpha(s)\cdot \Vert \bm{u}(s)\Vert_{H^n}^2\cdot \frac{ds}{1+s},\\
\Vert S^n\bm{u}(t)\Vert_{L^2}^2-\Vert S^n\bm{u}(0)\Vert_{L^2}^2&\lesssim \int_{s=0}^t\alpha(s)\cdot \left(\Vert \bm{u}(s)\Vert_{H^n}^2+\sum_{b=0}^n\Vert S^b\bm{u}(s)\Vert_{L^2}^2\right)\cdot \frac{ds}{1+s},\\
\Vert \vert\nabla\vert^{-1}S^n\bm{u}(t)\Vert_{L^2}^2-\Vert \vert\nabla\vert^{-1}S^n\bm{u}(0)\Vert_{L^2}^2&\lesssim \int_{s=0}^t \alpha(s)\cdot \sum_{b=0}^n\Vert S^b\bm{u}(s)\Vert_{L^2}^2\cdot \frac{ds}{1+s},
\end{split}
\end{equation}
where
\begin{equation}
\begin{split}
\alpha(s)&=(1+s)\left[\Vert \bm{u}(s)\Vert_{L^\infty}+\Vert \nabla_x \bm{u}(s)\Vert_{L^\infty}\right].
\end{split}
\end{equation}
\end{proposition}

As a consequence, with the decay bounds of Corollary \ref{cor:decay} we obtain:
\begin{corollary}\label{cor:energy}
Under the bootstrap assumptions \eqref{eq:btstrap-assump} there holds that
\begin{equation}\label{eq:btstrap-concl2'}
\norm{U_\pm(t)}_{H^{2N_0}\cap H^{-1}}+\norm{S^aU_\pm(t)}_{L^2\cap H^{-1}}\lesssim\varepsilon_0\ip{t}^{C\varepsilon_1},\qquad 0\le a\le M.
\end{equation}
\end{corollary}
\begin{proof}
It suffices to note that under the assumptions \eqref{eq:btstrap-assump}, we can use Proposition \ref{prop:decay} to get a constant $C>0$ such that
\begin{equation}
\alpha(s)\leq C\varepsilon_1.
\end{equation}
The claim then follows from Proposition \ref{prop:EnergyIncrement} via Gr\"onwall's inequality, since $\int_0^t\frac{ds}{1+s}=\log(1+t)$.
\end{proof}

The main goal of this section is then to upgrade this $L^2$ information on many vector fields to stronger $B$ norm bounds of fewer vector fields on the solution profiles. After the reduction to bilinear bounds as in the proof of Proposition \ref{prop:btstrap}, this is done by establishing the following claim (see also \eqref{eq:btstrap-concl1.1-B}):
\begin{proposition}\label{prop:Bnorm}
Assume the bootstrap assumptions \eqref{eq:btstrap-assump} of Proposition \ref{prop:btstrap}. Then for $\delta=2M^{-1/2}>0$ and with $F_j=S^{b_j}\mathcal{U}_{\mu_j}$, $0\leq b_1+b_2\leq N$, $\mu_j\in\{+,-\}$, $j=1,2$, there holds that
\begin{equation}\label{eq:B-claim0}
\norm{\mathcal{B}_\mathfrak{m}(F_1,F_2)}_B\lesssim 2^{-\delta^3 m}\varepsilon_1^2.
\end{equation}
\end{proposition}
We recall again that here $\mathfrak{m}$ is one of the multipliers of the Euler-Coriolis system in the dispersive formulation \eqref{eq:EC_disp} (see Lemma \ref{lem:ECmult}), for which we have the bounds of Lemma \ref{lem:ECmult_bds}. The remainder of this section now gives the proof of Proposition \ref{prop:Bnorm}.
\begin{proof}[Proof of Proposition \ref{prop:Bnorm}]
In most cases, we will be able to prove the stronger bound
\begin{equation}\label{StrongerThanBNorm}
2^k2^{4k^+} \norm{\mathcal{F}\left[P_k\mathcal{B}_\mathfrak{m}(F_1,F_2)\right]}_{L^\infty}\lesssim 2^{-\delta^3 m}\varepsilon_1^2.
\end{equation}

\subsubsection{Some simple cases}\label{ssec:Bnorm-reduction}
From the energy bounds \eqref{eq:interpol_energy} and with $\abs{\mathfrak{m}}\lesssim 2^k$ and $\abs{\mathcal{S}}\leq 2^{p+\frac{q}{2}+\frac{3}{2}k}$, we deduce that since $F_i=\sum_{k_i}P_{k_i}F_i$ there holds that
\begin{equation}
\begin{aligned}
2^k2^{4k^+}\norm{\mathcal{F}\left[P_{k}\mathcal{B}_\mathfrak{m}(F_1,F_2)\right]}_{L^\infty}\lesssim 2^m\cdot 2^{k+4k^+}&\sum_{k_1,k_2}\min\{2^{-N_0k_1}\norm{F_1}_{H^{N_0}},2^{k_1}\norm{F_1}_{\dot{H}^{-1}}\} \\
&\qquad \cdot \min\{2^{-N_0k_2}\norm{F_2}_{H^{N_0}},2^{k_2}\norm{F_2}_{\dot{H}^{-1}}\},
\end{aligned}
\end{equation}
and \eqref{StrongerThanBNorm} follows if $\min\{k,k_1,k_2\}\leq -2m$ or $\max\{k,k_1,k_2\}\geq \delta_0 m$, where $\delta_0=2N_0^{-1}$.
Localizing in $p_j, \ell_j$ with $\ell_j\geq -p_j$, $j=1,2$, we have that with
\begin{equation}
f_j=P_{k_j,p_j}R_{\ell_j}F_j, \qquad j=1,2,
\end{equation}
there holds that
\begin{equation}
2^{k+4k^+}\norm{\mathcal{F}\left[P_{k}\mathcal{B}_\mathfrak{m}(f_1,f_2)\right]}_{L^\infty}\lesssim 2^{(1+2\delta_0)m}\cdot 2^{-\ell_1}\norm{f_1}_X\cdot 2^{-\ell_2}\norm{f_2}_X\lesssim 2^{(1+2\delta_0)m-\ell_1-\ell_2}\varepsilon_1^2,
\end{equation}
and this gives \eqref{StrongerThanBNorm} if $\min\{\ell_1,\ell_2\}\ge 2m$, so that to prove \eqref{eq:B-claim0} it suffices to show that for
\begin{equation}\label{AssumptionBnormAddedBenoit}
-2m\leq k,k_j\leq \delta_0 m,\qquad -2m\leq p_j\leq 0,\qquad -p_j\leq \ell_j\leq 2m,\quad j=1,2,
\end{equation}
we have
\begin{equation}\label{eq:B-claim}
\sup_{k,p,q}2^{-\frac{1}{2}k^{-}}2^{-p-\frac{q}{2}}\norm{P_{k,p,q}\mathcal{B}_\mathfrak{m}(f_1,f_2)}_{L^2}\lesssim 2^{-\delta m}\varepsilon_1^2.
\end{equation}
The rest of this section establishes \eqref{eq:B-claim}, by first treating the case of a gap in $p$ (i.e.\ $p_{\min}\ll p_{\max}$) with $2^{p_{\max}}\sim 1$ (Section \ref{ssec:B-p-gap}), secondly that of $p_{\max}\ll 0$ (Section \ref{ssec:B-pmax}), thirdly that of a gap in $q$ (Section \ref{ssec:B-q-gap}), and finally the case of no gaps (Section \ref{ssec:B-nogaps}).

\subsection{Gap in $p$, with $p_{\max}\sim 0$.}\label{ssec:B-p-gap}
We show \eqref{eq:B-claim} when \eqref{AssumptionBnormAddedBenoit} holds and in addition $p_{\min}\ll p_{\max}\sim 0$. We further subdivide according to whether the output $p$ or one of the inputs $p_i$ is small, and use Lemma \ref{lem:gapp-cases} to organize these cases. Without loss of generality we will assume that $p_1\leq p_2$, so that we have two main cases to consider.
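Before organizing the cases, let us recall schematically where the integration by parts conditions used below come from (this is only a heuristic sketch; the precise statement is Lemma \ref{lem:new_ibp}). With $V$ one of the adapted vector fields $V_\eta$, $V_{\xi-\eta}$, one exploits the oscillation of the phase through the identity
\begin{equation*}
e^{is\Phi}=\frac{1}{is\,V\Phi}\,V\left(e^{is\Phi}\right),
\end{equation*}
so that each integration by parts gains a factor of order $(s\abs{V\Phi})^{-1}\sim (2^m\abs{V\Phi})^{-1}$, at the price of losses from differentiating the inputs and the symbol. Repeating this a large number of times yields an acceptable contribution whenever the combined loss per step is at most $2^{(1-\delta)m}$, which is the shape of the conditions on $V_\eta$ and $V_{\xi-\eta}$ appearing below.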
Noting that $\abs{\bar\sigma}\sim 2^{p_{\max}}2^{k_{\min}+k_{\max}}$ and using that $\ell_i+p_i\geq 0$ (and thus $p_2-p_1\leq -\ell_1$, $p_1-p_2\leq -\ell_2$), repeated integration by parts is feasible if (see Lemma \ref{lem:new_ibp})
\begin{equation}
\begin{aligned}
V_\eta:\qquad &2^{-p_1}2^{2k_1-k_{\min}-k_{\max}}(1+2^{k_2-k_1}2^{\ell_1})\leq 2^{(1-\delta)m},\\
V_{\xi-\eta}:\qquad &2^{-p_2}2^{2k_2-k_{\min}-k_{\max}}(1+2^{k_1-k_2}2^{\ell_2})\leq 2^{(1-\delta)m}.
\end{aligned}
\end{equation}

\subsubsection{Case 1: $p\ll p_1,p_2$}
By Lemma \ref{lem:gapp-cases} we have three scenarios to consider:
\subsubsection*{Subcase 1.1: $2^{k_1}\sim 2^{k_2}$.}
Here we have $2^{p_1}\sim 2^{p_2}\sim 1$. Using Lemma \ref{lem:new_ibp}, iterated integration by parts in $V_\eta$ or $V_{\xi-\eta}$ gives the result if $\min\{\ell_1,\ell_2\}<(1-\delta)m+k-k_1$. Else we have the bound
\begin{equation}
\begin{aligned}
2^{k+4k^+}\norm{\mathcal{F}\left[P_{k,p}\mathcal{Q}_\mathfrak{m}(f_1,f_2)\right]}_{L^\infty}&\lesssim 2^{2k-2k_2^+}\cdot 2^{-\ell_1-\ell_2}\norm{f_1}_X\norm{f_2}_{X}\\
&\lesssim 2^{-2(1-\delta)m}\norm{f_1}_X\norm{f_2}_{X}\ll 2^{-(1+\frac{\beta}{2})m}\varepsilon_1^2.
\end{aligned}
\end{equation}
\subsubsection*{Subcase 1.2: $2^{k_2}\ll 2^{k_1}\sim 2^k$}
Then we have that $2^{p_1-p_2}\sim 2^{k_2-k_1}\ll 1$, so that $p\ll p_1\ll p_2\sim 0$. Using Lemma \ref{lem:new_ibp}, iterated integration by parts in $V_{\xi-\eta}$ gives the claim if $\ell_2<(1-\delta)m$.
Else, when $\ell_2>(1-\delta)m$, there holds that
\begin{equation}
\begin{aligned}
2^{k+4k^+}\norm{\mathcal{F}\left[P_{k,p}\mathcal{Q}_\mathfrak{m}(f_1,f_2)\right]}_{L^\infty}&\lesssim 2^{2k_1+4k_1^+}\cdot \norm{f_1}_{L^2} 2^{-(1+\beta)\ell_2}\norm{f_2}_{X} \ll 2^{-(1+\frac{\beta}{2})m}\varepsilon_1^2.
\end{aligned}
\end{equation}
\subsubsection*{Subcase 1.3: $2^{k_1}\ll 2^{k_2}\sim 2^k$}
This leads to $p_2\ll p_1$, which is excluded.

\subsubsection{Case 2: $p_1\ll p,p_2$}
By Lemma \ref{lem:gapp-cases} we have three scenarios to consider:
\subsubsection*{Subcase 2.1: $2^{k}\sim 2^{k_2}$}
Then $2^{p}\sim 2^{p_2}\sim 1$. Iterated integration by parts in $V_\eta$ (with $\abs{\bar\sigma}\gtrsim 2^{k_1+k}$) gives the claim if $\ell_1-p_1\leq (1-\delta)m$, whereas iterated integration by parts in $V_{\xi-\eta}$ suffices if
\begin{equation}
2^{k_2-k_1}+2^{\ell_2}\leq 2^{(1-\delta)m}.
\end{equation}
We may thus assume that $\ell_1-p_1> (1-\delta)m$ and that $\max\{k_2-k_1,\ell_2\}>(1-\delta)m$, but this suffices in view of the crude bound
\begin{equation}
\begin{aligned}
2^{3k^+-\frac{1}{2}k^-}2^{-\frac{q}{2}}\norm{P_{k,p,q}\mathcal{Q}_\mathfrak{m}(f_1,f_2)}_{L^2} &\lesssim 2^{-\frac{q}{2}}\abs{\mathcal{S}}\cdot 2^{k-\frac{1}{2}k^-+3k_2^+}2^{-\ell_1-\ell_2}\norm{f_1}_{X} \norm{f_2}_{X}\\
&\lesssim 2^{\frac{k+k^+}{2}+k_1+p_1}2^{-(1-\delta)m-p_1-\ell_2}\norm{f_1}_{X}\norm{f_2}_{X}\\
&\lesssim 2^{\frac{3}{2}k+\frac{1}{2}k^+}\cdot 2^{k_1-k_2-\ell_2}\cdot 2^{-(1-\delta)m}\norm{f_1}_{X}\norm{f_2}_{X}.
\end{aligned}
\end{equation}
\subsubsection*{Subcase 2.2: $2^{k}\ll 2^{k_2}\sim 2^{k_1}$}
Then $2^{p_2-p}\sim 2^{k-k_2}\ll 1$, and thus $p_1\ll p_2\ll p\sim 0$, and we only need to recover $q$.
Repeated integration by parts in $V_{\xi-\eta}$ (where now $\abs{\bar\sigma}\sim 2^{k_2+k}$) gives the claim if
\begin{equation}
-p_2+k_2-k+\ell_2\leq (1-\delta)m.
\end{equation}
In the opposite case we use that, since $\beta\leq\frac{1}{3}$, with $2^{\beta(k_2-k)}\sim 2^{-\beta p_2}$ and $\abs{\mathcal{S}}\lesssim 2^{\frac{k+q}{2}}2^{k_1+p_1}$, we can bound
\begin{equation}
\begin{aligned}
2^{3k^+-\frac{1}{2}k^-} 2^{-\frac{q}{2}}\norm{P_{k,p,q}\mathcal{Q}_\mathfrak{m}(f_1,f_2)}_{L^2}&\lesssim 2^{-\frac{q}{2}}\abs{\mathcal{S}}\cdot 2^{k-\frac{1}{2}k^-}\cdot 2^{p_1+\frac{k_1}{2}}\norm{f_1}_{B}\cdot 2^{-(1+\beta)\ell_2-\beta p_2-3k_2^+}\norm{f_2}_{X}\\
&\lesssim 2^{k+\frac{1}{2}k^+}2^{\frac{3}{2}k_1+2p_1}\norm{f_1}_{B}2^{-(1+\beta)(1-\delta)m}2^{-(1+2\beta)p_2}2^{(1+\beta)(k_2-k)}2^{-3k_2^+}\norm{f_2}_{X}\\
&\lesssim 2^{-(1+\beta)(1-\delta)m}2^{p_1-3\beta p_2}\norm{f_1}_{B}\norm{f_2}_{X}\\
&\lesssim 2^{-(1+\frac{\beta}{2})m}\varepsilon_1^2.
\end{aligned}
\end{equation}
\subsubsection*{Subcase 2.3: $2^{k_2}\ll 2^{k}\sim 2^{k_1}$}
Then $2^{p-p_2}\sim 2^{k_2-k}\ll 1$, and thus $p_1\ll p\ll p_2\sim 0$. This is as in Subcase 1.2: if $\ell_2\leq (1-\delta)m$, then repeated integration by parts in $V_{\xi-\eta}$ gives the claim, whereas for $\ell_2>(1-\delta)m$ we have that
\begin{equation}
\begin{aligned}
2^{k+4k^+}\norm{\mathcal{F}\left[P_{k,p}\mathcal{Q}_\mathfrak{m}(f_1,f_2)\right]}_{L^\infty}&\lesssim 2^{2k_1+4k_1^+}\cdot \norm{f_1}_{L^2} 2^{-(1+\beta)\ell_2}\norm{f_2}_{X} \ll 2^{-(1+\frac{\beta}{2})m}\varepsilon_1^2.
\end{aligned}
\end{equation}

\subsection{Case $p_{\min}\ll p_{\max}\ll 0$.}\label{ssec:B-pmax}
Here we have that $\abs{\Phi}\gtrsim 1$. We can use a normal form as in \eqref{eq:mdecompNF}--\eqref{mdecompNFNRNorms} with $\lambda=\frac{1}{10}$, so that $\mathfrak{m}^{res}=0$, and we see that
\begin{equation}
\norm{P_{k,p}\mathcal{B}_{\mathfrak{m}}(f_1,f_2)}_{L^2}\lesssim \norm{P_{k,p}\mathcal{Q}_{\mathfrak{m}\cdot\Phi^{-1}}(f_1,f_2)}_{L^2}+\norm{P_{k,p}\mathcal{B}_{\mathfrak{m}\cdot\Phi^{-1}}(\partial_t f_1,f_2)}_{L^2}+\norm{P_{k,p}\mathcal{B}_{\mathfrak{m}\cdot\Phi^{-1}}( f_1,\partial_tf_2)}_{L^2}.
\end{equation}
Using Lemma \ref{lem:dtfinL2}, the second term can be bounded as
\begin{equation}
2^{k+4k^+}\norm{\mathcal{F}\left[P_{k,p}\mathcal{B}_{\mathfrak{m}\cdot\Phi^{-1}}(\partial_t f_1,f_2)\right]}_{L^\infty}\lesssim 2^m\cdot 2^{2k+4k^+} 2^{p_{\max}}\norm{\partial_t f_1}_{L^2}\norm{f_2}_{L^2}\lesssim 2^{-\frac{1}{4}m}\varepsilon_1^2,
\end{equation}
and similarly for the third one. It thus remains to control the boundary term. If $\min\{p_1,p_2\}=p_1\lesssim p$, this follows from Proposition \ref{prop:decay}, Lemma \ref{lem:phasesymb_bd}, Corollary \ref{cor:extrapol_decay} and the multiplier bounds \eqref{BoundsOnMAddedBenoit} as follows:
\begin{equation*}
\begin{split}
2^{3k^+-\frac{1}{2}k^-}2^{-p}\norm{P_{k,p}\mathcal{Q}_{\mathfrak{m}\cdot\Phi^{-1}}(f_1,f_2)}_{L^2}&\lesssim 2^{-p}2^{\frac{k+k^+}{2}+3k^+}2^{p_{\max}}2^{p_1}\norm{f_1}_{B}\cdot \Vert e^{it\mathbf{\Lambda}}f_2\Vert_{L^\infty}\ll 2^{-\delta m}\varepsilon_1^2.
\end{split}
\end{equation*}
If $\min\{p_1,p_2\}=p_2\lesssim p$, the situation is similar.
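The normal form used here can be sketched as follows. Schematically, suppressing the frequency variables and the precise localizations (this is only a heuristic; the rigorous decomposition is \eqref{eq:mdecompNF}--\eqref{mdecompNFNRNorms}), $\mathcal{B}_\mathfrak{m}$ is a time integral of a bilinear expression against the oscillating phase $e^{is\Phi}$, and since $\abs{\Phi}\gtrsim 1$ in the present case one may integrate by parts in time:
\begin{equation*}
\int_0^t e^{is\Phi}\,\mathfrak{m}\,\widehat{f_1}(s)\,\widehat{f_2}(s)\,ds=\left[\frac{e^{is\Phi}}{i\Phi}\,\mathfrak{m}\,\widehat{f_1}(s)\,\widehat{f_2}(s)\right]_{s=0}^{t}-\int_0^t \frac{e^{is\Phi}}{i\Phi}\,\mathfrak{m}\,\partial_s\left(\widehat{f_1}(s)\,\widehat{f_2}(s)\right)ds.
\end{equation*}
The boundary term corresponds to $\mathcal{Q}_{\mathfrak{m}\cdot\Phi^{-1}}$ and the bulk terms to $\mathcal{B}_{\mathfrak{m}\cdot\Phi^{-1}}(\partial_t f_1,f_2)$ and $\mathcal{B}_{\mathfrak{m}\cdot\Phi^{-1}}(f_1,\partial_t f_2)$, which is how the three terms estimated above arise; the gain comes from $\abs{\Phi}^{-1}\lesssim 1$ together with the quadratic smallness of $\partial_t f_j$ from Lemma \ref{lem:dtfinL2}.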
We note that if $p_{\max}\leq -\delta m$, then we have that
\begin{equation*}
\begin{split}
2^{k+4k^+}\norm{\mathcal{F}\left[P_{k,p}\mathcal{Q}_{\mathfrak{m}\cdot\Phi^{-1}}(f_1,f_2)\right]}_{L^\infty}&\lesssim 2^{2k+4k^+}2^{p_{\max}}\norm{f_1}_{L^2}\norm{f_2}_{L^2}\lesssim 2^{3k^+}2^{p_1+p_2+p_{\max}}\Vert f_1\Vert_B\Vert f_2\Vert_B\\
&\lesssim 2^{-2\delta m}\varepsilon_1^2,
\end{split}
\end{equation*}
so we may assume that $p_{\max}>-\delta m$. Then with $\abs{\bar\sigma}\gtrsim 2^{-\delta m}2^{k_{\max}+k_{\min}}$ and the fact that
\begin{equation}
s^{-1}\abs{\frac{1}{V_\eta\Phi}V_\eta(\Phi^{-1})}\lesssim s^{-1}\abs{\Phi}^{-2}\lesssim s^{-1},
\end{equation}
we are done by integration by parts as in the case of a gap in $p$, Section \ref{ssec:B-p-gap}.

After the estimates in Sections \ref{ssec:B-p-gap} and \ref{ssec:B-pmax}, we may assume that all $p$'s are comparable:
\begin{equation*}
p_{\max}\le p_{\min}+100.
\end{equation*}

\subsection{Gap in $q$.}\label{ssec:B-q-gap}
We additionally localize in $q_i$, writing $g_i=P_{k_i,p_i,q_i}R_{\ell_i}f_i$, $i=1,2$, and can assume by $B$ norm bounds that $q_i\geq -3m$. For this case we now assume that $q_{\min}\ll q_{\max}$ and thus (by the previous case) $2^{p_{\min}}\sim 2^{p_{\max}}\sim 1$ (and thus also $\ell_i\geq 0$).
Noting that $\abs{\bar\sigma}\sim 2^{q_{\max}}2^{k_{\min}+k_{\max}}$ and using Lemma \ref{lem:new_ibp}, repeated integration by parts is feasible if
\begin{equation}\label{CondIBPAddedGapqBNorm}
\begin{aligned}
V_\eta:\qquad &2^{-q_{\max}}2^{2k_1-k_{\min}-k_{\max}}(1+2^{k_2-k_1}(2^{q_2-q_1}+2^{\ell_1}))\leq 2^{(1-\delta)m},\\
V_{\xi-\eta}:\qquad &2^{-q_{\max}}2^{2k_2-k_{\min}-k_{\max}}(1+2^{k_1-k_2}(2^{q_1-q_2}+2^{\ell_2}))\leq 2^{(1-\delta)m}.
\end{aligned}
\end{equation}
We have two main cases to consider:

\subsubsection{Case 3: $q\ll q_1,q_2$}
By Lemma \ref{lem:gapp-cases} and Remark \ref{lem:gapp-casesRem}, we have three scenarios to consider:
\subsubsection*{Subcase 3.1: $2^{k_1}\sim 2^{k_2}$}
Then also $2^{q_1}\sim 2^{q_2}$. Using \eqref{CondIBPAddedGapqBNorm}, we see that repeated integration by parts gives the claim if
\begin{equation}
-q_{\max}+k_1-k+\min\{\ell_1,\ell_2\}\leq (1-\delta)m.
\end{equation}
Otherwise, using a crude bound and \eqref{BoundsOnMAddedBenoit}, we have that
\begin{equation}
\begin{aligned}
2^{k+4k^+}\norm{\mathcal{F}\left[P_{k,p,q}\mathcal{Q}_\mathfrak{m}(g_1,g_2)\right]}_{L^\infty}&\lesssim 2^{2k+k^+-3k_2^+}2^{q_{\max}}\cdot 2^{\frac{q_1}{2}}\norm{g_1}_{B} 2^{-(1+\beta)\ell_2}\norm{g_2}_{X}\\
&\lesssim 2^{\delta m}\cdot 2^{(\frac{1}{2}-\beta)q_{\max}}2^{-(1+\beta)(1-\delta)m}\norm{g_1}_{B}\norm{g_2}_{X},
\end{aligned}
\end{equation}
which is an acceptable contribution.
\subsubsection*{Subcase 3.2: $2^{k_2}\ll 2^{k_1}\sim 2^k$}
Then we have $2^{q_1-q_2}\sim 2^{k_2-k_1}\ll 1$, and thus $q\ll q_1\ll q_2$.
From \eqref{CondIBPAddedGapqBNorm}, we see that repeated integration by parts in $V_{\xi-\eta}$ gives the claim provided that
\begin{equation}
-q_{2}+\ell_2\leq (1-\delta)m.
\end{equation}
Otherwise we can conclude just as in Subcase 3.1.
\subsubsection*{Subcase 3.3: $2^{k_1}\ll 2^{k_2}$}
This is symmetric to Subcase 3.2.

\subsubsection{Case 4: $\min\{q_1,q_2\}\ll q$}
Without loss of generality, we may assume that $q_1\le q_2$. By Lemma \ref{lem:gapp-cases} we have three scenarios to consider:
\subsubsection*{Subcase 4.1: $2^{k}\sim 2^{k_2}$}
Then also $2^{q}\sim 2^{q_2}$. Inspecting \eqref{CondIBPAddedGapqBNorm}, repeated integration by parts gives the claim if
\begin{equation}
\begin{aligned}
V_\eta:\quad &\max\{-q_1,\ell_1-q_{\max}\}\leq (1-\delta)m,\qquad V_{\xi-\eta}:\quad \max\{k_2-k_1,\ell_2\}\leq (1-\delta)m+q_{\max}.
\end{aligned}
\end{equation}
In the opposite case, if $2^{q_1}\lesssim 2^{-(1-\delta)m}$ we can use Lemma \ref{lem:set_gain} and \eqref{BoundsOnMAddedBenoit} to bound
\begin{equation}
\begin{aligned}
2^{3k^+-\frac{1}{2}k^-}2^{-\frac{q}{2}}\norm{P_{k,p,q}\mathcal{Q}_\mathfrak{m}(g_1,g_2)}_{L^2}&\lesssim 2^{-\frac{q_{\max}}{2}}\abs{\mathcal{S}}\cdot 2^{k-\frac{1}{2}k^-+q_{\max}}\cdot 2^{\frac{q_1}{2}}\norm{g_1}_{B} 2^{-(1+\beta)\ell_2}\norm{g_2}_{X}\\
&\lesssim 2^{\frac{q_{\max}}{2}}2^{q_1}2^{\frac{k+k^+}{2}+\frac{3}{2}k_1}2^{-(1+\beta)\ell_2}\varepsilon_1^2\\
&\lesssim 2^{\frac{q_{\max}}{2}}2^{2k_1+\frac{1}{2}k^+}2^{-(1-\delta)m}2^{\frac{k-k_1}{2}}2^{-(1+\beta)\ell_2}\varepsilon_1^2,
\end{aligned}
\end{equation}
which suffices since $\max\{\ell_2,k-k_1\}>(1-\delta)m+q_{\max}$.
If on the other hand $\ell_1-q_{\max}>(1-\delta)m$, then a crude estimate gives
\begin{equation}
\begin{aligned}
2^{k+4k^+}\norm{\mathcal{F}\left[P_{k,p,q}\mathcal{Q}_\mathfrak{m}(g_1,g_2)\right]}_{L^\infty}&\lesssim 2^{2k+2k^++q_{\max}}\cdot 2^{-(1+\beta)\ell_1}\norm{g_1}_{X} 2^{\frac{q_2}{2}}\norm{g_2}_{B}\\
&\lesssim 2^{3k^+}2^{-(1+\beta)(\ell_1-q_{\max})}\varepsilon_1^2\lesssim 2^{-(1+\delta)m}\varepsilon_1^2,
\end{aligned}
\end{equation}
which is an acceptable contribution.
\subsubsection*{Subcase 4.2: $2^{k}\ll 2^{k_2}\sim 2^{k_1}$}
Then also $2^{q_2-q}\sim 2^{k-k_2}\ll 1$, so that $q_1\ll q_2\ll q$. Using \eqref{CondIBPAddedGapqBNorm}, repeated integration by parts then gives the claim if
\begin{equation}
\begin{aligned}
V_{\xi-\eta}:\qquad &-q+k_2-k+\ell_2\leq (1-\delta)m.
\end{aligned}
\end{equation}
Otherwise we get the acceptable contribution
\begin{equation}
\begin{aligned}
2^{k+4k^+}\norm{\mathcal{F}\left[P_{k,p,q}\mathcal{Q}_\mathfrak{m}(g_1,g_2)\right]}_{L^\infty}&\lesssim 2^{2(k-k_1^+)+q}\cdot 2^{\frac{k_1}{2}+\frac{q_1}{2}}\norm{g_1}_{B} 2^{-(1+\beta)\ell_2}\norm{g_2}_{X}\\
&\lesssim 2^{2k_2^+}2^{-(1+\beta)(\ell_2+k_2-k-q)}\varepsilon_1^2.
\end{aligned}
\end{equation}
\subsubsection*{Subcase 4.3: $2^{k_2}\ll 2^{k}\sim 2^{k_1}$}
Then also $2^{q-q_2}\sim 2^{k-k_2}\ll 1$, so that $q_1\ll q\ll q_2$. From \eqref{CondIBPAddedGapqBNorm}, repeated integration by parts then gives the claim if
\begin{equation}
\begin{aligned}
V_{\xi-\eta}:\quad \ell_2-q_2\leq (1-\delta)m.
\end{aligned}
\end{equation}
Otherwise, we get an acceptable contribution as in Subcase 4.2:
\begin{equation}
\begin{aligned}
2^{k+4k^+}\norm{\mathcal{F}\left[P_{k,p,q}\mathcal{Q}_\mathfrak{m}(g_1,g_2)\right]}_{L^\infty}&\lesssim 2^{2k+k^++q_{2}}\cdot 2^{\frac{k_1}{2}+\frac{q_1}{2}}\norm{g_1}_{B} 2^{-(1+\beta)\ell_2}\norm{g_2}_{X}\\
&\lesssim 2^{4k_1^+}2^{-(1+\beta)(\ell_2-q_2)}\varepsilon_1^2\le 2^{-(1+\beta/2)m}\varepsilon_1^2.
\end{aligned}
\end{equation}

\subsection{No gaps.}\label{ssec:B-nogaps}
Assume now that $2^{p_{\min}}\sim 2^{p_{\max}}$ and $2^{q_{\min}}\sim 2^{q_{\max}}$. Assuming further w.l.o.g.\ that $g_1$ has at most $\frac{N}{2}$ copies of $S$, by the decay estimate in Proposition \ref{prop:decay} we then have that
\begin{equation}
e^{it\mathbf{\Lambda}}g_1=I+I\!I
\end{equation}
with
\begin{equation}
\norm{I}_{L^\infty}\lesssim 2^{-p-\frac{q}{2}}t^{-\frac{3}{2}}2^{\frac{3}{2}k_1-3k_1^+}\varepsilon_1,\qquad \norm{I\!I}_{L^2}\lesssim 2^{-3k^+_1}t^{-\frac{1}{2}}\varepsilon_1.
\end{equation}
By Corollary \ref{cor:extrapol_decay} we further have that
\begin{equation}
\norm{e^{it\mathbf{\Lambda}} g_2}_{L^\infty}\lesssim 2^{-\frac{3}{2}k_2^+}2^{-\frac{2}{3}m}\varepsilon_1,
\end{equation}
and using \eqref{BoundsOnMAddedBenoit} and an $L^\infty\times L^2$ estimate, we find that
\begin{equation}
\begin{aligned}
2^{3k^+-\frac{1}{2}k^-}2^{-p-\frac{q}{2}}\norm{P_{k,p,q}\mathcal{Q}_\mathfrak{m}(g_1,g_2)}_{L^2}&\lesssim 2^{3k^+}2^{-p-\frac{q}{2}}\cdot 2^{p+q+\frac{k+k^+}{2}} \left(\norm{I}_{L^\infty}2^{p+\frac{q}{2}}\norm{g_2}_B+\norm{I\!I}_{L^2}\norm{e^{it\mathbf{\Lambda}} g_2}_{L^\infty}\right)\\
&\lesssim 2^{\frac{q}{2}+3k^++\frac{k+k^+}{2}}\left(2^{-\frac{3}{2}m}+2^{-\frac{m}{2}}2^{-\frac{2}{3}m}\right)\varepsilon_1^2\\
&\lesssim 2^{-\frac{9}{8}m}\varepsilon_1^2.
\end{aligned}
\end{equation}
\end{proof}

\section{$X$ norm bounds}\label{sec:Xnorm}
In this section we finally prove the $X$ norm bounds for the quadratic expressions \eqref{eq:btstrap-concl1.1-X}. This is done first for the case of ``large'' $\ell$ in Section \ref{ssec:Xnorm1}, then for ``small'' $\ell$ in Section \ref{ssec:Xnorm2}.

\subsection{$X$ norm bounds for $\ell>(1+\delta)m$}\label{ssec:Xnorm1}
The goal here is to show that if $\ell$ is sufficiently large, then we have the $X$ norm bounds claimed in the bootstrap conclusion \eqref{eq:btstrap-concl1}. More precisely, we will show:
\begin{proposition}\label{prop:Xnorm1}
Let $0<\delta=2M^{-\frac{1}{2}}\ll \beta$, assume the bootstrap assumptions \eqref{eq:btstrap-assump} of Proposition \ref{prop:btstrap}, and let $F_j=S^{b_j}\mathcal{U}_{\mu_j}$, $0\leq b_1+b_2\leq N$, $\mu_j\in\{+,-\}$, $j=1,2$.
Then there holds that
\begin{equation}\label{eq:X-claim1}
\sup_{k,\,\ell+p\geq 0,\,\ell>(1+\delta)m}2^{3k^+}2^{(1+\beta)\ell}2^{\beta p}\norm{P_{k,p}R_\ell\mathcal{B}_\mathfrak{m}(F_1,F_2)}_{L^2}\lesssim 2^{-\delta^2 m}\varepsilon_1^2.
\end{equation}
\end{proposition}
We give next the proof of Proposition \ref{prop:Xnorm1}. As one sees below, here the choice of $\delta^2=O(M^{-1})$ will be convenient for repeated integrations by parts, where $M\in\mathbb{N}$ is the number of vector fields we propagate.
\begin{proof}[Proof of Proposition \ref{prop:Xnorm1}]
Assuming that $\ell>(1+\delta)m$, we split our arguments into two cases: If $\ell+p\leq \delta m$ (Section \ref{sec:Xnorm1-1}), ``only'' a gain of $2^{(1+\delta)m}$ is needed, and relatively simple arguments suffice. If on the other hand $\ell+p>\delta m$ (Section \ref{sec:Xnorm1-2}), we can make use of a ``finite speed of propagation'' feature\footnote{This refers to the fact that localization on scales greater than $t$ and the linear flow almost commute, similar to the terminology in e.g.\ \cite[Section 7.2]{DIPau} or \cite[Section 3B]{globalbeta}.} of the equations via the Bernstein property \eqref{eq:R-Bernstein} of the angular Littlewood-Paley decomposition. We remark that in the arguments that follow, no localizations in $q,q_j$ are used.

\subsubsection{Case $\ell+p\leq \delta m$}\label{sec:Xnorm1-1}
Here we have that $p\leq -\ell+\delta m<-m$. We begin with some direct observations to treat a few simple cases, akin to Section \ref{ssec:Bnorm-reduction}.
Noting that
\begin{equation}
\begin{aligned}
2^{3k^+}2^{(1+\beta)\ell}2^{\beta p}\norm{P_{k,p}R_\ell\mathcal{B}_\mathfrak{m}(P_{k_1}F_1,P_{k_2}F_2)}_{L^2}&\lesssim 2^{3k^++\frac{3}{2}k}2^{(1+\beta)(\ell+p)}2^m\min\{2^{-N_0k_1}\norm{F_1}_{H^{N_0}},2^{k_1}\norm{F_1}_{\dot{H}^{-1}}\}\\
&\qquad\cdot \min\{2^{-N_0k_2}\norm{F_2}_{H^{N_0}},2^{k_2}\norm{F_2}_{\dot{H}^{-1}}\}\\
&\lesssim 2^{3k^++\frac{3}{2}k} 2^{m+(1+\beta)\delta m}\min\{2^{-N_0k_1},2^{k_1}\}\min\{2^{-N_0k_2},2^{k_2}\}\cdot\varepsilon_1^2,
\end{aligned}
\end{equation}
we see that it suffices to prove that if $\ell>(1+\delta)m$, then with $\delta_0=2N_0^{-1}\ll\delta^2$ we have
\begin{equation}
2^{(1+\beta)\ell}2^{\beta p}\norm{P_{k,p}R_\ell\mathcal{B}_\mathfrak{m}(P_{k_1}F_1,P_{k_2}F_2)}_{L^2}\lesssim 2^{-2\delta^2 m}\varepsilon_1^2,\qquad -2m\leq k,k_j\leq \delta_0 m,\quad j=1,2.
\end{equation}
As in Section \ref{ssec:Bnorm-reduction}, we can now further reduce cases by considering localizations in $p_j, \ell_j$, $j=1,2$, and see that to show the claim it suffices to establish that for $f_j=P_{k_j,p_j}R_{\ell_j}F_j$, $j=1,2$, when
\begin{equation}
-2m\leq k,k_j\leq \delta_0 m,\qquad -2m\leq p_j\leq 0,\qquad -p_j\leq \ell_j\leq 2m,\quad j=1,2,
\end{equation}
there holds that
\begin{equation}
2^{(1+\beta)\ell}2^{\beta p}\norm{P_{k,p}R_\ell\mathcal{B}_\mathfrak{m}(f_1,f_2)}_{L^2}\lesssim 2^{-\delta m}\varepsilon_1^2.
\end{equation}
This is done in the following Case a and Case b.
\subsubsection*{Case a: $2^{p_1}+2^{p_2}\ll 1$.}
In this case there holds that $\abs{\Phi}\gtrsim 1$, and a normal form (as in \eqref{eq:mdecompNF}--\eqref{mdecompNFNRNorms} with $\lambda=\frac{1}{10}$, so that $\mathfrak{m}^{res}=0$) gives
\begin{equation}\label{EZNF821a}
\norm{P_{k,p}\mathcal{B}_{\mathfrak{m}}(f_1,f_2)}_{L^2}\lesssim \norm{P_{k,p}\mathcal{Q}_{\mathfrak{m}\cdot\Phi^{-1}}(f_1,f_2)}_{L^2}+\norm{P_{k,p}\mathcal{B}_{\mathfrak{m}\cdot\Phi^{-1}}(\partial_t f_1,f_2)}_{L^2}+\norm{P_{k,p}\mathcal{B}_{\mathfrak{m}\cdot\Phi^{-1}}( f_1,\partial_tf_2)}_{L^2}.
\end{equation}
A crude estimate using Lemma \ref{lem:dtfinL2} gives
\begin{equation*}
\begin{split}
2^{(1+\beta)\ell}2^{\beta p}\norm{P_{k,p}R_\ell\mathcal{B}_{\mathfrak{m}\cdot\Phi^{-1}}(\partial_t f_1,f_2)}_{L^2}&\lesssim 2^{\frac{3}{2}k}2^{(1+\beta)(\ell+p)}\norm{\mathcal{F}\left[P_{k,p}R_\ell\mathcal{B}_{\mathfrak{m}\cdot\Phi^{-1}}(\partial_t f_1,f_2)\right]}_{L^\infty}\\
&\lesssim 2^m\cdot 2^{(1+\beta)(\ell+p)}2^{\frac{5}{2}k}\norm{\partial_t f_1}_{L^2}\norm{f_2}_{L^2}\\
&\lesssim 2^{m}\cdot 2^{-\frac{3}{2}m+\gamma m+(1+\beta)\delta m+3\delta_0 m}\varepsilon_1^2,
\end{split}
\end{equation*}
and symmetrically for the term with $\partial_t f_2$. Assume now w.l.o.g.\ that $p_2\leq p_1$.
For the boundary term, a direct $L^2\times L^\infty$ estimate using Corollary \ref{cor:extrapol_decay} then gives the claim if $2^{p}\gtrsim 2^{p_2}$, since
\begin{equation*}
2^{(1+\beta)\ell}2^{\beta p}\norm{P_{k,p}R_\ell\mathcal{Q}_{\mathfrak{m}\cdot\Phi^{-1}}(f_1,f_2)}_{L^2}\lesssim 2^{k}2^{(1+\beta)\delta m}2^{-p}\cdot \norm{e^{it\mathbf{\Lambda}} f_1}_{L^\infty} 2^{p_2}\norm{f_2}_{B}\lesssim 2^{-\frac{m}{2}}\varepsilon_1^2.
\end{equation*}
We thus assume that $p\ll p_2\leq p_1$. If $p_1\leq -2\delta m$, using \eqref{BoundsOnMAddedBenoit}, we are done since
\begin{equation*}
\begin{split}
2^{(1+\beta)\ell+\beta p}\norm{\mathcal{Q}_{\mathfrak{m}\cdot\Phi^{-1}}(f_1,f_2)}_{L^2}&\lesssim 2^{(1+\beta)(\ell+ p)}2^{\frac{3}{2}k}\norm{\mathcal{F}\left[\mathcal{Q}_{\mathfrak{m}\cdot\Phi^{-1}}(f_1,f_2)\right]}_{L^\infty}\\
&\lesssim 2^{\frac{5}{2}k}2^{(1+\beta)\delta m}2^{3p_{\max}}\norm{f_1}_B\norm{f_2}_B.
\end{split}
\end{equation*}
Else we have $p_1\geq -2\delta m$, and thus $\abs{\bar\sigma}\gtrsim 2^{p_1+k_1+k}$, and using Lemma \ref{lem:new_ibp} we can repeatedly integrate by parts in $V_\eta$ if
\begin{equation*}
2^{-2p_1+k_1-k}(1+2^{k_2-k_1}2^{\ell_1})<2^{(1-\delta)m}.
\end{equation*}
If this inequality is reversed, we use crude bounds: In case $k=k_{\min}$ we can assume that $-2p_1+\ell_1+k_1-k>(1-\delta)m$, and then, using \eqref{BoundsOnMAddedBenoit},
\begin{equation*}
\begin{split}
2^{(1+\beta)\ell+\beta p}\norm{P_{k,p}R_\ell\mathcal{Q}_{\mathfrak{m}\cdot\Phi^{-1}}(f_1,f_2)}_{L^2}&\lesssim 2^{\frac{3}{2}k}2^{(1+\beta)(\ell+ p)}\norm{\mathcal{F}\left[P_{k,p}R_\ell\mathcal{Q}_{\mathfrak{m}\cdot\Phi^{-1}}(f_1,f_2)\right]}_{L^\infty}\\
&\lesssim 2^{(1+\beta)\delta m}\cdot 2^{\frac{5k}{2}+p_1}\cdot 2^{-\ell_1}\norm{f_1}_X\cdot 2^{p_2}\norm{f_2}_{B}\\
&\lesssim 2^{-m/3}\varepsilon_1^2,
\end{split}
\end{equation*}
whereas if $k_{\min}\in\{k_1,k_2\}$ we can assume that $-2p_1+\ell_1>(1-\delta)m$, and the conclusion follows just as above.
\subsubsection*{Case b: $2^{p_1}\sim 1$}
Then $\abs{\bar\sigma}\sim 2^{k_1+k}$, and thus by Lemma \ref{lem:new_ibp}\eqref{it:ibp_p} repeated integration by parts in $V_\eta$ gives the claim if
\begin{equation*}
2^{k_1-k}(1+2^{k_2-k_1}2^{\ell_1})<2^{(1-\delta) m},
\end{equation*}
whereas, in the opposite case, a crude estimate using \eqref{BoundsOnMAddedBenoit} suffices:
\begin{equation*}
\begin{split}
2^{(1+\beta)\ell}2^{\beta p}\norm{P_{k,p}R_\ell \mathcal{B}_\mathfrak{m}(f_1,f_2)}_{L^2}&\lesssim 2^{(1+\beta)(\ell+p)}2^{\frac{3}{2}k}\norm{\mathcal{F}\left[P_{k,p}R_\ell \mathcal{B}_\mathfrak{m}(f_1,f_2)\right]}_{L^\infty}\\
&\lesssim 2^{(1+2\delta)m}\cdot 2^{\frac{5}{2}k}\cdot 2^{-3k_1^+}2^{-(1+\beta)\ell_1} \norm{f_1}_{X}\norm{f_2}_{L^2}\\
&\lesssim 2^{(1+3\delta)m}\cdot \min\{1,2^{(\frac{3}{2}-\beta)(k-k_1)}\}\cdot \min\{1,2^{-(1+\beta)(\ell_1-k+k_2)}\} \varepsilon_1^2.
\end{split}
\end{equation*}

\subsubsection{Case $\ell+p>\delta m$}\label{sec:Xnorm1-2}
By analogous reductions as at the beginning of Section \ref{sec:Xnorm1-1}, it suffices to prove that, with $f_j=P_{k_j,p_j}R_{\ell_j}F_j$, $j=1,2$, and when
\begin{equation}\label{eq:Xnorm1-restrict1}
-2\ell\leq k,k_j\leq \delta_0 \ell,\qquad -2\ell\leq p_j\leq 0,\qquad -p_j\leq \ell_j\leq 2\ell,\quad j=1,2,
\end{equation}
(with $\delta_0:=2N_0^{-1}\ll \delta^2$ as above) there holds that
\begin{equation}
2^{(1+\beta)\ell}2^{\beta p}\norm{P_{k,p}R_\ell\mathcal{B}_\mathfrak{m}(f_1,f_2)}_{L^2}\lesssim 2^{-\delta^2 \ell}2^{-\delta^2m}\varepsilon_1^2.
\end{equation}
(Note here that the reductions are naturally given in terms of the large parameter $\ell>m$.) A crude bound using \eqref{BoundsOnMAddedBenoit} gives that
\begin{equation}\label{SuffCondAddedBenoitAug}
2^{(1+\beta)\ell}2^{\beta p}\norm{P_{k,p}R_\ell\mathcal{B}_\mathfrak{m}(f_1,f_2)}_{L^2}\lesssim 2^m2^{(1+\beta)(\ell-\ell_1-\ell_2)}2^{\beta(p-p_1-p_2)}2^{k+k_{\max}+\frac{k_{\min}}{2}+p_{\min}}\Vert f_1\Vert_{X}\Vert f_2\Vert_X.
\end{equation}
When
\begin{equation*}
2^{m+p}+2^{k-k_j}2^{m+p_j}+2^{k-k_j+\ell_j}\le 2^{(1-\delta^2)\ell},\quad j=1,2,
\end{equation*}
we want to use the Bernstein property: As in \eqref{BernsteinConsequence}, we can rewrite
\begin{equation*}
\begin{split}
R^{(n)}_\ell\mathcal{Q}_\mathfrak{m}(f_1,f_2)&=\sum_{a=1}^2 2^{-2\ell}R^{(n+1)}_\ell\Omega_{a3}^2\mathcal{Q}_\mathfrak{m}(f_1,f_2),
\end{split}
\end{equation*}
for $R^{(0)}_\ell=R_\ell$ and where $R^{(n+1)}_\ell$ is bounded for all $n\ge1$.
We find that
\begin{equation*}
\begin{split}
\Omega_{a3}^\xi\mathcal{Q}_{\mathfrak{m}^{(j)}}(f_1,f_2) =&\mathcal{Q}_{\mathfrak{m}^{(j+1)}_1}(f_1,f_2)-\mathcal{Q}_{\mathfrak{m}^{(j+1)}_2}(Sf_1,f_2)+\mathcal{Q}_{\mathfrak{m}^{(j+1)}_3}(\Omega_{a3}f_1,f_2)
\end{split}
\end{equation*}
where for some coefficient matrices $A^r_{\alpha,\gamma}$, $r=2,3$,
\begin{equation*}
\begin{split}
\mathfrak{m}^{(j+1)}_1(\xi,\eta)&=\left[is(\Omega_{a3}^\xi\Phi(\xi,\eta))-3\right]\mathfrak{m}^{(j)}(\xi,\eta)+\Omega_{a3}^\xi\mathfrak{m}^{(j)}(\xi,\eta),\\
\mathfrak{m}^{(j+1)}_2(\xi,\eta)&=A_{2}^{\alpha,\gamma}\xi_\alpha(\xi-\eta)_\gamma\abs{\xi-\eta}^{-2}\mathfrak{m}^{(j)}(\xi,\eta),\\
\mathfrak{m}^{(j+1)}_3(\xi,\eta)&=A_{3}^{\alpha,\gamma}\xi_\alpha(\xi-\eta)_\gamma\abs{\xi-\eta}^{-2}\mathfrak{m}^{(j)}(\xi,\eta),
\end{split}
\end{equation*}
and we see by induction that
\begin{equation*}
\begin{split}
\Vert \mathfrak{m}^{(j+1)}_1\Vert_{\widetilde{W}}&\lesssim 2^{k+p_{\max}}\cdot \left[2^{-p}+2^{m}(2^p+2^{k-k_1+p_1})+2^{k-k_1-p_1}\right]^j,\\
\Vert \mathfrak{m}^{(j+1)}_2\Vert_{\widetilde{W}}+\Vert \mathfrak{m}^{(j+1)}_3\Vert_{\widetilde{W}}&\lesssim \Vert \mathfrak{m}^{(j)}\Vert_{\widetilde{W}}\cdot 2^{k-k_1}.
\end{split}
\end{equation*}
Iterating this at most $K$ times, and stopping once a term involves three $S$ derivatives, a crude estimate together with \eqref{eq:R-Bernstein} for $f_1$ yields
\begin{equation*}
\begin{split}
\Vert R_\ell\mathcal{Q}_\mathfrak{m}(f_1,f_2)\Vert_{L^2}&\lesssim 2^{\frac{3}{2}k_{\min}}2^{k+p_{\max}}\cdot \sum_{a=0}^22^{-K\ell}\left[2^{-p}+2^{m}(2^p+2^{k-k_1+p_1})+2^{k-k_1-p_1}+2^{k-k_1+\ell_1}\right]^K\Vert S^af_1\Vert_{L^2}\Vert f_2\Vert_{L^2}\\
&\quad+2^{\frac{3}{2}k_{\min}}2^{k+p_{\max}}\cdot2^{-3\ell}\Vert S^3f_1\Vert_{L^2}\Vert f_2\Vert_{L^2},
\end{split}
\end{equation*}
which gives an acceptable contribution. Using this for $\xi-\eta$ and for $\eta$, and supposing wlog that $k_2\le k_1$, we obtain an acceptable contribution whenever
\begin{equation*}
\begin{split}
\ell_1\le (1-\delta^2)\ell\qquad\hbox{ or }\qquad k-k_2+\max\{m+p_2,\ell_2\}\le (1-\delta^2)\ell.
\end{split}
\end{equation*}
If $\ell_2\ge m+p_2$, this and \eqref{SuffCondAddedBenoitAug} cover all the cases. In the opposite case, it remains to consider the case when
\begin{equation}\label{SuffCondLargeL}
\begin{split}
k_2\le k_1\sim k,\qquad \ell_1\ge(1-\delta^2)\ell,\qquad (1+\delta)m\le \ell \le (1+2\delta^2)(m+p_2+k-k_2),
\end{split}
\end{equation}
which in particular implies that $p_2+k-k_2\ge\delta m/2$.

\medskip
Assume now that \eqref{SuffCondLargeL} holds and, in addition,
\begin{equation*}
\begin{split}
2^{-p_2-p_{\max}}2^{k_2-k}+2^{-p_2-p_{\max}+\ell_2}\le 2^{(1-\delta^2)m}.
\end{split}
\end{equation*}
In this case, if $p_{\min}\ll p_{\max}$ we can integrate by parts along the vector field $V_{\xi-\eta}$ using Lemma \ref{lem:new_ibp}; otherwise an $L^2\times L^\infty$ estimate (using $\ell_1\geq (1-\delta^2)\ell$), together with Corollary \ref{cor:decay} and $p_2+k-k_2\geq\frac{\delta m}{2}$, gives an acceptable contribution.

\medskip
If \eqref{SuffCondLargeL} holds and
\begin{equation*}
-p_2-p_{\max}+k_2-k\geq (1-\delta^2)m-100,
\end{equation*}
a crude estimate using \eqref{BoundsOnMAddedBenoit} gives that
\begin{equation*}
\begin{split}
2^{3k^+}2^{(1+\beta)\ell}2^{\beta p}\Vert P_{k,p}R_\ell\mathcal{B}_\mathfrak{m}(f_1,f_2)\Vert_{L^2}&\lesssim 2^m2^{4k^+}2^{p_{\max}}2^{(1+\beta)\ell}2^{\beta p}\cdot 2^{k+\frac{k_2}{2}}2^{p_{\min}}\cdot\Vert f_1\Vert_{L^2}\Vert f_2\Vert_{L^2}\\
&\lesssim 2^m2^{2k^+}2^{(1+\beta)(\ell-\ell_1)}2^{\beta (p-p_1)}2^{p_{\min}+p_{\max}+p_2}2^{\frac{k_2^-}{2}}\Vert f_1\Vert_X\Vert f_2\Vert_{B}\\
&\lesssim 2^m2^{2k^+}2^{2\delta^2\ell}2^{(1-\beta)p_{\min}+p_{\max}+p_2}2^{k_2^-}\varepsilon_1^2,
\end{split}
\end{equation*}
which gives an acceptable contribution.
Finally, if \eqref{SuffCondLargeL} holds and
\begin{equation*}
\begin{split}
-p_2-p_{\max}+\ell_2\geq (1-\delta^2)m-100,
\end{split}
\end{equation*}
we estimate $f_2$ in the $X$ norm instead to get
\begin{equation*}
\begin{split}
2^{3k^+}2^{(1+\beta)\ell}2^{\beta p}\Vert P_{k,p}R_\ell\mathcal{B}_\mathfrak{m}(f_1,f_2)\Vert_{L^2}&\lesssim 2^m2^{4k^+}2^{p_{\max}}2^{(1+\beta)\ell}2^{\beta p}\cdot 2^{k_2+\frac{k_1}{2}+\frac{p_1+p_2}{2}}\cdot\Vert f_1\Vert_{L^2}\Vert f_2\Vert_{L^2}\\
&\lesssim 2^m2^{2k^+}2^{(1+\beta)(\ell-\ell_1-\ell_2)}2^{\beta (p-p_1-p_2)}2^{\frac{p_1+p_2}{2}}2^{k_2^-}\Vert f_1\Vert_X\Vert f_2\Vert_{X}\\
&\lesssim 2^m2^{2k^+}2^{2\delta^2\ell}2^{-(1+\beta)(1-\delta^2)m}2^{(\frac{1}{2}-\beta)p_1}2^{-(\frac{1}{2}+\beta)p_2}2^{k_2^-}\varepsilon_1^2,
\end{split}
\end{equation*}
and since $p_2+k-k_2\ge\delta m/2$, this also leads to an acceptable contribution. This covers all cases.
\end{proof}

\subsection{$X$ norm bounds for $\ell\leq (1+\delta)m$}\label{ssec:Xnorm2}
Next we prove the main bounds for the propagation of the $X$ norm. By Proposition \ref{prop:Xnorm1} it suffices to consider the case where $\ell<(1+\delta)m$. We will show the following:
\begin{proposition}\label{prop:Xnorm2}
Assume the bootstrap assumptions \eqref{eq:btstrap-assump} of Proposition \ref{prop:btstrap}, and let $\delta=2M^{-\frac{1}{2}}>0$.
Then for $F_j=S^{b_j}\mathcal{U}_{\mu_j}$, $0\leq b_1+b_2\leq N$, $\mu_j\in\{+,-\}$, $j=1,2$, there holds that
\begin{equation}\label{eq:X-claim2}
\sup_{k,\,\ell+p\geq 0,\,\ell\leq (1+\delta)m}2^{3k^+}2^{(1+\beta)\ell}2^{\beta p}\norm{P_{k,p}R_\ell\mathcal{B}_\mathfrak{m}(F_1,F_2)}_{L^2}\lesssim 2^{-\delta^2 m}\varepsilon_1^2.
\end{equation}
\end{proposition}

The remainder of this section is devoted to the proof of Proposition \ref{prop:Xnorm2}. After a standard reduction to ``atomic'' estimates with localized versions of the inputs, we will make ample use of integration by parts along vector fields and of normal forms. To this end, we note that by our choice of $\delta$ we can repeatedly integrate by parts at least $M=O(\delta^{-2})\gg O(\delta^{-1})$ times. We proceed in a similar fashion as in the proof of the $B$ norm bounds in Section \ref{sec:Bnorm}, but the estimates are more delicate since we always require a gain of $(2+\beta+)$ powers of the time variable. We use the possibility to integrate by parts along vector fields to push $\ell_i\sim (1-\delta)m$ up to losses in adjacent parameters $k_j,p_j$; then we use a normal form to gain a copy of $m$ at the cost of adjacent parameters.
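For orientation, we recall that the normal form rests on the elementary identity $e^{is\Phi}=(i\Phi)^{-1}\partial_s e^{is\Phi}$ on the support of the nonresonant part of the multiplier: schematically (up to constants, and suppressing the time cut-offs),
\begin{equation*}
\mathcal{B}_{\mathfrak{m}^{nr}}(f_1,f_2)=\Big[\mathcal{Q}_{(i\Phi)^{-1}\mathfrak{m}^{nr}}(f_1,f_2)\Big]_{s=t_1}^{s=t_2}-\int_{t_1}^{t_2}\Big(\mathcal{Q}_{(i\Phi)^{-1}\mathfrak{m}^{nr}}(\partial_s f_1,f_2)+\mathcal{Q}_{(i\Phi)^{-1}\mathfrak{m}^{nr}}(f_1,\partial_s f_2)\Big)\,ds,
\end{equation*}
which produces the three types of terms estimated in \eqref{mdecompNFNRNorms}: a boundary term with multiplier $\Phi^{-1}\mathfrak{m}^{nr}$, and two bulk terms in which one input carries a time derivative and hence, by Lemma \ref{lem:dtfinL2}, extra decay compensating the additional factor $2^m$ from the time integration.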
\begin{proof}[Proof of Proposition \ref{prop:Xnorm2}]
We begin with a reduction: Note that if $\ell\leq (1+\delta)m$ then $(1+\beta)\ell\leq (1+\beta+2\delta)m$, and thus by energy estimates, further localizations in $p_j, \ell_j$, $j=1,2$, and $B$ resp.\ $X$ norm bounds, it suffices to prove that (again with $\delta_0=2N_0^{-1}$) for
\begin{equation}
f_j=P_{k_j,p_j}R_{\ell_j}F_j,\quad -2m\leq k,k_j\leq \delta_0 m,\qquad -2m\leq p_j\leq 0,\qquad -p_j\leq \ell_j\leq 2m,\quad j=1,2,
\end{equation}
we have that
\begin{equation}\label{eq:X-claim2'}
\sup_{k,\,\ell+p\geq 0,\,\ell\leq (1+\delta)m}2^{(1+\beta+2\delta)m}2^{\beta p}\norm{P_{k,p}R_\ell\mathcal{B}_\mathfrak{m}(f_1,f_2)}_{L^2}\lesssim 2^{-\delta m}\varepsilon_1^2.
\end{equation}
This is the bound we shall prove in the rest of this section. Similarly to the $B$ norm bounds, we do this first in the setting of a gap in $p$ with $p_{\max}\sim 0$ (Section \ref{ssec:X-p-gap}), secondly when $p_{\max}\ll 0$ (Section \ref{ssec:X-pmax}), then for the case of a gap in $q$ (Section \ref{ssec:X-q-gap}), and finally for the case of no gaps (Section \ref{ssec:X-nogaps}).

\subsubsection{Gap in $p$, with $p_{\max}\sim 0$.}\label{ssec:X-p-gap}
We consider here the case where $p_{\min}\ll p_{\max}\sim 0$. We further subdivide according to whether the output $p$ or one of the inputs $p_i$ is small, and use Lemma \ref{lem:gapp-cases} to organize these cases. Wlog we assume that $p_1\leq p_2$, so that we have two main cases to consider.
Noting that $\abs{\bar\sigma}\sim 2^{p_{\max}}2^{k_{\min}+k_{\max}}$ and using that $\ell_i+p_i\geq 0$, by Lemma \ref{lem:new_ibp}\eqref{it:ibp_p} repeated integration by parts is feasible if
\begin{equation}\label{IBPVF82}
\begin{aligned}
V_\eta:\qquad &2^{-p_1}2^{2k_1-k_{\min}-k_{\max}}(1+2^{k_2-k_1}2^{\ell_1})\leq 2^{(1-\delta)m},\\
V_{\xi-\eta}:\qquad &2^{-p_2}2^{2k_2-k_{\min}-k_{\max}}(1+2^{k_1-k_2}2^{\ell_2})\leq 2^{(1-\delta)m}.
\end{aligned}
\end{equation}

\subsubsection*{Case 1: $p\ll p_1,p_2$}\label{ssec:X-p-gap-p}
By Lemma \ref{lem:gapp-cases} we have three scenarios to consider:

\subsubsection*{Subcase 1.1: $2^{k_1}\sim 2^{k_2}$.}
Here we have $2^{p_1}\sim 2^{p_2}\sim 1$. As for \eqref{IBPVF82}, after repeated integration by parts ($O(\delta^{-1})\ll M$ times) we can assume that $\ell_i>(1-\delta)m+k-k_1$, $i=1,2$. Then a direct $X$ norm bound gives the claim: we have that
\begin{equation*}
\begin{aligned}
\norm{P_{k,p}R_\ell \mathcal{Q}_\mathfrak{m}(f_1,f_2)}_{L^2}&\lesssim 2^k\abs{\mathcal{S}} 2^{-(1+\beta)(\ell_1+\ell_2)}\norm{f_1}_X\norm{f_2}_X\\
&\lesssim 2^{(\frac{5}{2}-2-2\beta)k}2^{2(1+\beta)k_1}\cdot 2^{-2(1+\beta)(1-\delta)m}\norm{f_1}_X\norm{f_2}_X,
\end{aligned}
\end{equation*}
and hence
\begin{equation*}
\begin{split}
2^{3k^+}2^{(1+\beta)\ell}2^{\beta p}\norm{P_{k,p}R_\ell\mathcal{B}_\mathfrak{m}(f_1,f_2)}_{L^2} &\lesssim 2^{(1+\beta+2\delta)m}\norm{P_{k,p}R_\ell\mathcal{B}_\mathfrak{m}(f_1,f_2)}_{L^2}\\
&\lesssim 2^{(2+\beta+2\delta)m}\norm{P_{k,p}R_\ell \mathcal{Q}_\mathfrak{m}(f_1,f_2)}_{L^2}\lesssim 2^{-(\beta-2\delta(2+\beta)-\frac{5}{2}\delta_0)m}\varepsilon_1^2,
\end{split}
\end{equation*}
which suffices since $\beta\gg \delta$.

\subsubsection*{Subcase 1.2: $2^{k_2}\ll 2^{k_1}\sim 2^k$}
Then we have that $2^{p_1-p_2}\sim 2^{k_2-k_1}\ll 1$, so that $p\ll p_1\ll p_2\sim 0$. As in \eqref{IBPVF82}, by iterated integration by parts we can assume that $\ell_2>(1-\delta)m$ and $\ell_1>p_1+(1-\delta)m$, which suffices for a direct $X$ norm bound provided that $\beta\leq \frac{1}{4}$:
\begin{equation}
\begin{aligned}
\norm{P_{k,p}R_\ell \mathcal{Q}_\mathfrak{m}(f_1,f_2)}_{L^2}&\lesssim 2^k\abs{\mathcal{S}} 2^{-(1+\beta)(\ell_1+\ell_2)}2^{-\beta p_1}\norm{f_1}_X\norm{f_2}_X\\
&\lesssim 2^{k}\cdot 2^{\frac{3}{2}k_2}\cdot 2^{-(1+2\beta)p_1}\cdot 2^{-2(1+\beta)(1-\delta)m}\varepsilon_1^2\\
&\lesssim 2^{\frac{5}{2}k_1} 2^{(\frac{1}{2}-2\beta)p_1}\cdot 2^{-2(1+\beta)(1-\delta)m}\varepsilon_1^2,
\end{aligned}
\end{equation}
using that $2^{k_2}\sim 2^{k_1+p_1}$. This leads to an acceptable contribution.

\subsubsection*{Subcase 1.3: $2^{k_1}\ll 2^{k_2}\sim 2^k$}
This would imply $p_2\ll p_1$, which is excluded by assumption.

\subsubsection*{Case 2: $p_1\ll p,p_2$}\label{ssec:X-p-gap-p_1}
By Lemma \ref{lem:gapp-cases} we have three scenarios to consider:

\subsubsection*{Subcase 2.1: $2^{k}\sim 2^{k_2}$}
Then $2^{p}\sim 2^{p_2}\sim 1$. Using \eqref{IBPVF82}, we can assume that $\ell_1-p_1\geq (1-\delta)m$ and $\max\{k_2-k_1,\ell_2\}\geq (1-\delta)m$.
\begin{enumerate}[label=(\alph*),wide]
\item $k_2-k_1>(1-\delta)m$.
Here, since $\abs{\mathcal{S}}\lesssim 2^{p_1}2^{\frac{3}{2}k_1}$, there holds
\begin{equation*}
\begin{split}
\norm{P_{k,p}R_\ell \mathcal{Q}_\mathfrak{m}(f_1,f_2)}_{L^2}&\lesssim 2^k\abs{\mathcal{S}} 2^{-\ell_1}\norm{f_1}_X\norm{f_2}_{L^2}\lesssim 2^{k+\frac{3}{2}k_1-N_0k_2^+}2^{-(1-\delta)m}\varepsilon_1^2,
\end{split}
\end{equation*}
which is more than enough thanks to the smallness of $k_1-k_2$.
\item $\ell_2>(1-\delta)m$. Here we will further split cases towards a normal form. Assume first that
\begin{equation}\label{p1NotTooSmall82}
\begin{split}
2p_1-k_1\ge -k-100.
\end{split}
\end{equation}
In this case, a crude estimate gives
\begin{equation*}
\begin{split}
\norm{P_{k,p}R_\ell \mathcal{Q}_\mathfrak{m}(f_1,f_2)}_{L^2}&\lesssim 2^k\abs{\mathcal{S}}\cdot \norm{f_1}_{L^2}\norm{f_2}_{L^2}\\
&\lesssim 2^{k+\frac{3}{2}k_1+p_1} \cdot 2^{-(1+2\beta)p_1}2^{-2(1+\beta)(1-\delta)m}\norm{f_1}_X\norm{f_2}_X\\
&\lesssim 2^{(1+\beta)k}2^{(\frac{3}{2}-\beta)k_1}2^{\beta (k_1-k-2p_1)}2^{-2(1+\beta)(1-\delta)m}\varepsilon_1^2,
\end{split}
\end{equation*}
which gives an acceptable contribution. We now assume that \eqref{p1NotTooSmall82} does not hold and do a normal form away from the resonant set (see also Section \ref{ssec:nfs}). For $\lambda=2^{-200\delta m}$, we decompose as in \eqref{eq:mdecompNF}
\begin{equation}\label{822DecompositionResNRes}
\begin{split}
\mathfrak{m}(\xi,\eta)=\psi(\lambda^{-1}\Phi)\mathfrak{m}(\xi,\eta)+(1-\psi(\lambda^{-1}\Phi))\mathfrak{m}(\xi,\eta)=\mathfrak{m}^{res}(\xi,\eta)+\mathfrak{m}^{nr}(\xi,\eta).
\end{split}
\end{equation}
On the support of $\mathfrak{m}^{res}$, using (the contrapositive of) \eqref{p1NotTooSmall82}, we observe that
\begin{equation*}
\begin{split}
\vert\partial_{\eta_3}\Phi(\xi,\eta)\vert\gtrsim 2^{-k},
\end{split}
\end{equation*}
and using Lemma \ref{lem:nfs}\eqref{it:NF-bd34} we find that
\begin{equation}\label{eq:gap-p-2.1}
\begin{aligned}
\norm{P_{k,p}R_\ell \mathcal{B}_{\mathfrak{m}^{res}}(f_1,f_2)}_{L^2}&\lesssim 2^m\cdot 2^k2^{p_1+k_1}\lambda^{1/2}2^{\frac{k_2}{2}}\cdot 2^{-\ell_1}2^{-(1+\beta)\ell_2}\norm{f_1}_{X}\norm{f_2}_{X}\\
&\lesssim 2^{k+k_1+\frac{k_2}{2}}\cdot \lambda^{1/2}\cdot 2^{m-(2+\beta)(1-\delta)m}\varepsilon_1^2,
\end{aligned}
\end{equation}
and again we obtain an acceptable contribution. On the support of $\mathfrak{m}^{nr}$, the phase is large and we can perform a normal form and see as in \eqref{mdecompNFNRNorms} that
\begin{equation}\label{eq:gap-p-nfsplit}
\begin{aligned}
\norm{P_{k,p}R_\ell \mathcal{B}_{\mathfrak{m}^{nr}}(f_1,f_2)}_{L^2}&\lesssim \norm{P_{k,p}R_\ell \mathcal{Q}_{\Phi^{-1}\mathfrak{m}^{nr}}(f_1,f_2)}_{L^2} + \norm{P_{k,p}R_\ell \mathcal{B}_{\Phi^{-1}\mathfrak{m}^{nr}}(\partial_t f_1,f_2)}_{L^2}\\
&\qquad +\norm{P_{k,p}R_\ell \mathcal{B}_{\Phi^{-1}\mathfrak{m}^{nr}}(f_1,\partial_t f_2)}_{L^2},
\end{aligned}
\end{equation}
and using crude estimates and Lemma \ref{lem:nfs}\eqref{it:NF-bd1}, we see that
\begin{equation*}
\begin{aligned}
\norm{P_{k,p}R_\ell \mathcal{Q}_{\Phi^{-1}\mathfrak{m}^{nr}}(f_1,f_2)}_{L^2}&\lesssim 2^k2^{p_1+\frac{3}{2}k_1}\lambda^{-1}\cdot 2^{-\ell_1}2^{-(1+\beta)\ell_2}\norm{f_1}_{X}\norm{f_2}_{X}\\
&\lesssim 2^{k+\frac{3}{2}k_1}\cdot 2^{200\delta m-(2+\beta)(1-\delta)m}\varepsilon_1^2,
\end{aligned}
\end{equation*}
which suffices since $\beta\gg \delta$. Similarly, using Lemma \ref{lem:dtfinL2}, we obtain that
\begin{equation}\label{eq:gap-p-2.1end}
\begin{aligned}
\norm{P_{k,p}R_\ell \mathcal{B}_{\Phi^{-1}\mathfrak{m}^{nr}}(f_1,\partial_t f_2)}_{L^2}&\lesssim 2^m\cdot 2^k2^{p_1+\frac{3}{2}k_1}\lambda^{-1} 2^{-\ell_1}\norm{f_1}_{X}\norm{\partial_t f_2}_{L^2}\\
&\lesssim 2^{k+k_1+\frac{k_2}{2}}\cdot 2^{\gamma m+300\delta m-\frac{3}{2}m}\varepsilon_1^2,
\end{aligned}
\end{equation}
and similarly for the term with $\partial_t f_1$.
\end{enumerate}

\subsubsection*{Subcase 2.2: $2^{k}\ll 2^{k_2}\sim 2^{k_1}$}
Then $2^{p_2-p}\sim 2^{k-k_2}\ll 1$, and thus $p_1\ll p_2\ll p\sim 0$. After repeated integration by parts we may assume that
\begin{equation}\label{Suff822}
\ell_i\ge\max\{p_i-k_1+k+(1-\delta)m,-p_i\},\qquad i\in\{1,2\}.
\end{equation}
This is sufficient if
\begin{equation*}
\begin{split}
p_{1}\ge -10\delta m.
\end{split}
\end{equation*}
We first localize the analysis to the resonant set by decomposing $\mathfrak{m}(\xi,\eta)=\mathfrak{m}^{res}(\xi,\eta)+\mathfrak{m}^{nr}(\xi,\eta)$ as in \eqref{eq:mdecompNF} with $\lambda=2^{-100}(2^q+2^{2p_2})$.
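The choice of $\lambda$ is guided by the size of the phase: on the support of $\mathfrak{m}^{nr}$ we have $\abs{\Phi}\gtrsim 2^q+2^{2p_2}$, so that, schematically (in the relevant multiplier norm),
\begin{equation*}
\Vert\Phi^{-1}\mathfrak{m}^{nr}\Vert\lesssim (2^{q}+2^{2p_2})^{-1}\Vert\mathfrak{m}\Vert,
\end{equation*}
which is precisely the loss appearing in the normal-form estimates that follow, while on the support of $\mathfrak{m}^{res}$ the phase is at most comparable to the expected resonance, $\abs{\Phi}\lesssim 2^q+2^{2p_2}$.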
For the nonresonant terms, we can do a normal form as in \eqref{mdecompNFNRNorms}, and with Lemma \ref{lem:nfs}\eqref{it:NF-bd1} a crude estimate gives
\begin{equation*}
\begin{aligned}
\norm{P_{k,p,q}R_\ell \mathcal{Q}_{\Phi^{-1}\mathfrak{m}^{nr}}(f_1,f_2)}_{L^2}&\lesssim \abs{\mathcal{S}}\cdot 2^k\cdot (2^{q}+2^{2p_2})^{-1}\cdot \Vert f_1\Vert_{L^2}\Vert f_2\Vert_{L^2}\\
&\lesssim 2^{k_1+\frac{3}{2}k}2^{p_1+\frac{q}{2}}\cdot (2^q+2^{2p_2})^{-1}\cdot 2^{-\ell_1-(1+\beta)\ell_2-\beta p_2}\Vert f_1\Vert_{X}\Vert f_2\Vert_X\\
&\lesssim 2^{-(1+\beta+2\delta)m}\varepsilon_1^2\cdot 2^{p_2+\frac{q}{2}}(2^q+2^{2p_2})^{-1}\cdot 2^{(\beta+3\delta)m}2^{2k_1}2^{\frac{1}{2}k}2^{-(1+\beta)(\ell_2+p_2)}.
\end{aligned}
\end{equation*}
If
\begin{equation*}
\begin{split}
\max\{k_1-k,\ell_2+p_2\}\ge 5\beta m,
\end{split}
\end{equation*}
this gives an acceptable contribution; else, using that $2^{p_2}\sim 2^{k-k_1}$, we obtain a contradiction with \eqref{Suff822}. In addition, another use of Lemma \ref{lem:dtfinL2} and Lemma \ref{lem:nfs}\eqref{it:NF-bd1} gives
\begin{equation*}
\begin{aligned}
\norm{P_{k,p,q}R_\ell \mathcal{B}_{\Phi^{-1}\mathfrak{m}^{nr}}(f_1,\partial_t f_2)}_{L^2}&\lesssim 2^m\cdot 2^k(2^{q}+2^{2p_2})^{-1}\cdot \abs{\mathcal{S}}\cdot \norm{f_1}_{L^2}\norm{\partial_t f_2}_{L^2}\\
&\lesssim 2^m\cdot (2^{q}+2^{2p_2})^{-1}2^{\frac{q}{2}}2^{\frac{3}{2}k+k_1}\cdot 2^{p_1-\ell_1}2^{-\frac{3}{2}m+\gamma m}\cdot \varepsilon_1^3.
\end{aligned}
\end{equation*}
Now, using that $(2^{q}+2^{2p_2})^{-1}2^{\frac{q}{2}}2^{k-k_1}\le 2^{-p_2}2^{k-k_1}\lesssim 1$, we see that if $p_1\le -m/2$, then
\begin{equation*}
\begin{aligned}
\sum_q\norm{P_{k,p,q}R_\ell \mathcal{B}_{\Phi^{-1}\mathfrak{m}^{nr}}(f_1,\partial_t f_2)}_{L^2} &\lesssim 2^m\cdot 2^{\frac{1}{2}k+2k_1}\cdot 2^{2p_1}2^{-\frac{3}{2}m+\gamma m}\cdot \varepsilon_1^3,
\end{aligned}
\end{equation*}
which gives an acceptable contribution, while if $-m/2\le p_1\le p_2$, we see that
\begin{equation*}
\begin{aligned}
\sum_q\norm{P_{k,p,q}R_\ell \mathcal{B}_{\Phi^{-1}\mathfrak{m}^{nr}}(f_1,\partial_t f_2)}_{L^2} &\lesssim 2^m\cdot 2^{-p_2}\cdot 2^{\frac{1}{2}k+2k_1}\cdot 2^{k-k_1+p_1-\ell_1}2^{-\frac{3}{2}m+\gamma m}\cdot \varepsilon_1^3\\
&\lesssim 2^{-(\frac{3}{2}-\gamma-\delta)m}\cdot 2^{-\frac{1}{2}p_2}\cdot 2^{\frac{5}{2}k_1}\cdot \varepsilon_1^3,
\end{aligned}
\end{equation*}
which is acceptable. The term involving $\partial_tf_1$ is treated similarly.

\medskip
We now turn to the resonant term. First, we observe that, on the support of $\mathfrak{m}^{res}$,
\begin{equation*}
\begin{split}
\vert \Lambda(\xi-\eta)\vert+\vert\Lambda(\eta)\vert\ge 3/2,\qquad \vert \Lambda(\xi-\eta)\vert-\vert\Lambda(\eta)\vert=\frac{\sqrt{1-\Lambda^2(\eta)}^2-\sqrt{1-\Lambda^2(\xi-\eta)}^2}{\vert\Lambda(\xi-\eta)\vert+\vert\Lambda(\eta)\vert}\ge 2^{-8}2^{2p_2},
\end{split}
\end{equation*}
so that smallness of $\vert\Phi\vert$ implies that $2^q\sim 2^{2p_2}\sim 2^{2(k-k_1)}$, but we will need to restrict the support further.
\medskip
We first observe that since $\vert k_1-k_2\vert\le 10$ and $p_1\ll p_2$, we have, on the support of $\mathfrak{m}^{res}$,
\begin{equation*}
\begin{split}
\vert \partial_{\eta_3}\Phi(\xi,\eta)\vert\gtrsim 2^{2p_2-k_2},
\end{split}
\end{equation*}
and we can use the analysis in Section \ref{ssec:D3ibp} to obtain an acceptable contribution unless we have
\begin{equation*}
\begin{split}
\max\{\ell_1+p_1-2p_2,\ell_2-p_2\}\ge (1-\delta)m,
\end{split}
\end{equation*}
which improves upon \eqref{Suff822} in that it does not incur $k$ losses. If the first term is largest, a crude estimate gives that
\begin{equation*}
\begin{aligned}
\norm{P_{k,p,q}R_\ell \mathcal{B}_{\mathfrak{m}^{res}}(f_1,f_2)}_{L^2}&\lesssim 2^m\cdot 2^k\abs{\mathcal{S}} \cdot\norm{f_1}_{L^2}\norm{f_2}_{L^2}\\
&\lesssim 2^m\cdot 2^{\frac{5}{2}k}2^{\frac{q}{2}}2^{-(1+\beta)\ell_1-\beta p_1}2^{-(1+\beta)\ell_2-\beta p_2}\Vert f_1\Vert_{X}\Vert f_2\Vert_X\\
&\lesssim 2^{-(1+2\beta-3\delta)m}\cdot 2^{p_1-p_2}\cdot 2^{-(1+4\beta)p_2}2^{(\frac{3}{2}-\beta)k} 2^{(2+\beta)k_1}\varepsilon_1^2,
\end{aligned}
\end{equation*}
and since $2^{k-k_1}\lesssim 2^{p_2}$, we obtain an acceptable contribution. Thus from now on, we may assume that
\begin{equation*}
\begin{split}
\ell_1-p_1-k+k_1\ge (1-\delta)m,\qquad \ell_2-p_2\ge(1-\delta)m,\qquad 2^{\frac{q}{2}}\sim 2^{p_2}\sim 2^{k-k_1}.
\end{split}
\end{equation*}

\medskip
In this case, a crude estimate gives that
\begin{equation*}
\begin{aligned}
\norm{P_{k,p,q}R_\ell \mathcal{B}_{\mathfrak{m}^{res}}(f_1,f_2)}_{L^2}&\lesssim 2^m\cdot 2^k\abs{\mathcal{S}} \cdot\norm{f_1}_{L^2}\norm{f_2}_{L^2}\\
&\lesssim 2^m\cdot 2^{\frac{3}{2}k+k_1}2^{p_1+\frac{q}{2}}2^{-\ell_1}2^{-\beta (\ell_1+p_1)}2^{-(1+\beta)\ell_2}2^{-\beta p_2}\Vert f_1\Vert_{X}\Vert f_2\Vert_X\\
&\lesssim 2^{-(1+\beta-3\delta)m}\cdot 2^{-\beta(\ell_1+p_1)}\cdot 2^{(\frac{1}{2}-\beta)k}2^{-2\beta p_2}\cdot 2^{(2+\beta)k_1}\varepsilon_1^2,
\end{aligned}
\end{equation*}
and this leads to an acceptable contribution whenever
\begin{equation}\label{822ParametersNotSoSmall}
p_2\le -40\delta m.
\end{equation}
In the opposite case, we do another normal form, choosing a smaller phase restriction $\lambda=2^{-300\delta m}$. Thus we set
\begin{equation*}
\begin{split}
\mathfrak{m}^{res}(\xi,\eta)&:=\psi(\lambda^{-1}\Phi(\xi,\eta))\mathfrak{m}^{res}(\xi,\eta)+(1-\psi(\lambda^{-1}\Phi(\xi,\eta)))\mathfrak{m}^{res}(\xi,\eta)=\mathfrak{m}^{rr}(\xi,\eta)+\mathfrak{m}^{nrr}(\xi,\eta).
\end{split}
\end{equation*}
On the support of $\mathfrak{m}^{rr}$, we have that
\begin{equation*}
\begin{split}
\vert\partial_{\eta_3}\Phi(\xi,\eta)\vert\gtrsim 2^{2p_2-k_2}\gtrsim 2^{-100\delta m},
\end{split}
\end{equation*}
and using Lemma \ref{lem:set_gain2}, as in Lemma \ref{lem:nfs}\eqref{it:NF-bd34} we see that
\begin{equation*}
\begin{split}
\norm{P_{k,p,q}R_\ell \mathcal{B}_{\mathfrak{m}^{rr}}(f_1,f_2)}_{L^2}&\lesssim 2^{m}\cdot 2^{k+k_1}(2^{100\delta m}\lambda)^{\frac{1}{2}}2^{p_1}\cdot \Vert f_1\Vert_{L^2}\Vert f_2\Vert_{L^2}\\
&\lesssim 2^{m}\cdot 2^{k_1}2^{-100\delta m}\cdot 2^{p_1+k-\ell_1}\cdot 2^{-(1+\beta)(\ell_2-p_2)}2^{-(1+2\beta) p_2}\Vert f_1\Vert_X\Vert f_2\Vert_X\\
&\lesssim 2^{-(1+\beta-3\delta+100\delta)m}\cdot 2^{2k_1}\cdot 2^{-(1+2\beta)p_2}\varepsilon_1^2,
\end{split}
\end{equation*}
which gives an acceptable contribution using \eqref{822ParametersNotSoSmall}. Independently, we treat the nonresonant term via a normal form as in \eqref{mdecompNFNRNorms}. First, a crude estimate using Lemma \ref{lem:nfs}\eqref{it:NF-bd1} gives that
\begin{equation*}
\begin{aligned}
\norm{P_{k,p,q}R_\ell \mathcal{Q}_{\Phi^{-1}\mathfrak{m}^{nrr}}(f_1,f_2)}_{L^2} &\lesssim 2^k\lambda^{-1}\cdot 2^{\frac{1}{2}k+k_1}2^{p_1+\frac{q}{2}}\cdot \Vert f_1\Vert_{L^2}\Vert f_2\Vert_{L^2}\\
&\lesssim 2^{\frac{1}{2}k}2^{500\delta m}\cdot 2^{p_1+k-\ell_1}2^{p_2-\ell_2}\Vert f_1\Vert_X\Vert f_2\Vert_X\\
&\lesssim 2^{-(2-600\delta)m}2^{\frac{1}{2}k+2k_1}\varepsilon_1^2,
\end{aligned}
\end{equation*}
which is again acceptable.
In addition, Lemma \ref{lem:dtfinL2} and Lemma \ref{lem:nfs}\eqref{it:NF-bd1} give
\begin{equation*}
\begin{aligned}
\norm{P_{k,p,q}R_\ell \mathcal{B}_{\Phi^{-1}\mathfrak{m}^{nrr}}(f_1,\partial_t f_2)}_{L^2}&\lesssim 2^m\cdot 2^k\lambda^{-1}\cdot 2^{\frac{1}{2}k+k_1}2^{p_1+\frac{q}{2}}\cdot \Vert f_1\Vert_{L^2}\Vert \partial_tf_2\Vert_{L^2}\\
&\lesssim 2^{\frac{1}{2}k}\cdot 2^{p_1+k-\ell_1}\cdot 2^{-(\frac{1}{2}-\gamma-500\delta)m}\Vert f_1\Vert_X\varepsilon_1^2\\
&\lesssim 2^{-(\frac{3}{2}-\gamma-500\delta)m}2^{\frac{1}{2}k+k_1}\varepsilon_1^3,
\end{aligned}
\end{equation*}
and once again the term involving $\partial_tf_1$ is easier.

\subsubsection*{Subcase 2.3: $2^{k_2}\ll 2^{k}\sim 2^{k_1}$}
Then $2^{p-p_2}\sim 2^{k_2-k}\ll 1$, and thus $p_1\ll p\ll p_2\sim 0$. Using Lemma \ref{lem:new_ibp}, repeated integration by parts gives the result unless
\begin{equation}\label{SuffCond823}
\ell_2\geq (1-\delta)m\qquad\hbox{ and }\qquad -p_1+\ell_1\geq (1-\delta)m.
\end{equation}
A crude estimate gives that
\begin{equation*}
\begin{split}
\norm{P_{k,p}R_\ell \mathcal{B}_{\mathfrak{m}}(f_1,f_2)}_{L^2}&\lesssim 2^m\cdot 2^k\cdot 2^{k_1+p_1}2^{\frac{1}{2}k_2}\cdot 2^{-\ell_1}2^{-(1+\beta)\ell_2}\norm{f_1}_{X}\norm{f_2}_{X}\\
&\lesssim 2^{-(1+\beta-3\delta)m}\cdot 2^{2k+\frac{1}{2}k_2}\varepsilon_1^2,
\end{split}
\end{equation*}
and this gives an acceptable contribution unless $k_2\geq -10\delta m$. In this case, we split again into resonant and nonresonant regions with $\lambda=2^{-200\delta m}$ as in \eqref{eq:mdecompNF}.
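Heuristically, the scale $\lambda$ balances the two regimes: the resonant piece gains a factor $\lambda^{1/2}$ from the smallness of the set $\{\vert\Phi\vert\lesssim\lambda\}$ (via Lemma \ref{lem:nfs}\eqref{it:NF-bd34}), while the normal form on the nonresonant piece loses a factor $\lambda^{-1}$ from the multiplier $\Phi^{-1}\mathfrak{m}^{nr}$; with the choice $\lambda=2^{-200\delta m}$,
\begin{equation*}
\lambda^{1/2}=2^{-100\delta m},\qquad \lambda^{-1}=2^{200\delta m},
\end{equation*}
so both the gain and the loss are of size $2^{O(\delta m)}$ and can be absorbed by the remaining powers of $2^{-m}$, since $\beta\gg\delta$.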
On the support of the resonant term, we see that (since $p_1\ll p_2$, $k_2\ll k_1$)
\begin{equation*}
\begin{split}
\vert\partial_{\eta_3}\Phi(\xi,\eta)\vert\gtrsim 2^{-25\delta m},
\end{split}
\end{equation*}
and using crude estimates and Lemma \ref{lem:nfs}\eqref{it:NF-bd34}, we see that
\begin{equation*}
\begin{aligned}
\norm{P_{k,p}R_\ell \mathcal{B}_{\mathfrak{m}^{res}}(f_1,f_2)}_{L^2}&\lesssim 2^m\cdot 2^k\cdot 2^{k_1+p_1}(2^{25\delta m}\lambda)^\frac{1}{2}\cdot 2^{-\ell_1}2^{-(1+\beta)\ell_2}\norm{f_1}_{X}\norm{f_2}_{X}\\
&\lesssim 2^{-(2+\beta+50\delta)m}2^{2k}\varepsilon_1^2.
\end{aligned}
\end{equation*}
For the nonresonant term, we use a normal form as in \eqref{mdecompNFNRNorms}. Using crude estimates and Lemma \ref{lem:nfs}\eqref{it:NF-bd1}, we see that
\begin{equation*}
\begin{aligned}
\norm{P_{k,p}R_\ell \mathcal{Q}_{\Phi^{-1}\mathfrak{m}^{nr}}(f_1,f_2)}_{L^2}&\lesssim 2^k2^{p_1+\frac{3}{2}k_1}\lambda^{-1}\cdot 2^{-\ell_1}2^{-(1+\beta)\ell_2}\norm{f_1}_{X}\norm{f_2}_{X}\\
&\lesssim 2^{\frac{5}{2}k}\cdot 2^{200\delta m-(2+\beta)(1-\delta)m}\varepsilon_1^2,
\end{aligned}
\end{equation*}
which suffices since $\beta\gg \delta$. Similarly, using Lemma \ref{lem:dtfinL2}, we obtain that
\begin{equation}
\begin{aligned}
\norm{P_{k,p}R_\ell \mathcal{B}_{\Phi^{-1}\mathfrak{m}^{nr}}(f_1,\partial_t f_2)}_{L^2}&\lesssim 2^m\cdot 2^k2^{p_1+\frac{3}{2}k_1}\lambda^{-1} 2^{-\ell_1}\norm{f_1}_{X}\norm{\partial_t f_2}_{L^2}\\
&\lesssim 2^{\frac{5}{2}k}\cdot 2^{\gamma m+300\delta m-\frac{3}{2}m}\varepsilon_1^2,
\end{aligned}
\end{equation}
and similarly for the symmetric case; once again, we obtain an acceptable contribution.
\medskip

\subsubsection{Case $p_{\max}\ll 0$}\label{ssec:X-pmax}
In case $2^{p_{\max}}\ll 1$ we have that $\abs{\Phi}\gtrsim 1$, and we can do a normal form as in \eqref{eq:mdecompNF}--\eqref{mdecompNFNRNorms} with $\lambda=\frac{1}{10}$ so that $\mathfrak{m}^{res}=0$. Using Lemma \ref{lem:interpol}, we have that $\norm{e^{it\Lambda}f_i}_{L^\infty}\lesssim 2^{-\frac{2}{3}m}\varepsilon_1$, $i=1,2$, and thus by Lemma \ref{lem:nfs}\eqref{it:NF-bd2}
\begin{equation}
\norm{P_{k,p}R_\ell \mathcal{B}_{\mathfrak{m}\cdot\Phi^{-1}}(\partial_t f_1,f_2)}_{L^2}\lesssim 2^m\cdot 2^k\norm{\partial_t f_1}_{L^2}\norm{e^{it\Lambda} f_2}_{L^\infty}\lesssim 2^{k+\frac{3}{2}k_2}2^{m-\frac{3}{2}m+\gamma m-\frac{2}{3}m}\varepsilon_1^2
\end{equation}
and symmetrically for $\mathcal{B}_{\mathfrak{m}\cdot\Phi^{-1}}(f_1,\partial_tf_2)$. The boundary term requires a bit more care: assuming w.l.o.g.\ that $p_2\leq p_1$, we distinguish two cases:
\begin{itemize}[wide]
\item If $f_1$ has fewer vector fields than $f_2$, by Proposition \ref{prop:decay} (and again Lemma \ref{lem:nfs}\eqref{it:NF-bd2}) there holds that
\begin{equation}
\norm{P_{k,p}\mathcal{Q}_{\mathfrak{m}\cdot\Phi^{-1}}(f_1,f_2)}_{L^2}\lesssim 2^{k} \norm{e^{it\Lambda}f_1^{(1)}}_{L^\infty}2^{p_2}\norm{f_2}_{B}+2^k\norm{e^{it\Lambda}f_1^{(2)}}_{L^2}\norm{e^{it\Lambda} f_2}_{L^\infty}\lesssim 2^{-\frac{3}{2}m}\varepsilon_1^2,
\end{equation}
and analogously if $2^{p_2}\gtrsim 2^{p_1}$.
\item If $f_1$ has more vector fields than $f_2$ and $p_2\ll p_1$, we note that since $\abs{\bar\sigma}\sim 2^{p_{\max}}2^{k_{\max}+k_{\min}}$, repeated integration by parts in $V_\eta$ gives the claim if
\begin{equation}
2^{-p_1-p_{\max}}2^{2k_1-k_{\max}-k_{\min}}(1+2^{k_2-k_1}2^{\ell_1})<2^{(1-\delta) m}.
\end{equation}
Otherwise we are done by a standard $L^2\times L^\infty$ estimate, using the localization information. The most difficult term is when $k=k_{\min}$, where we can assume that $-p_1-p_{\max}+k_1-k+\ell_1>(1-\delta)m$ and obtain
\begin{equation}
\begin{aligned}
\norm{P_{k,p}R_\ell \mathcal{Q}_{\mathfrak{m}\cdot\Phi^{-1}}(f_1,f_2)}_{L^2}&\lesssim 2^{k+p_{\max}}\norm{f_1}_{L^2}\norm{e^{it\Lambda}f_2}_{L^\infty}\lesssim 2^{k+p_{\max}}2^{-\frac{\ell_1+p_1}{2}}\norm{f_1}_X^{\frac{1}{2}}\norm{f_1}_B^{\frac{1}{2}}\norm{e^{it\Lambda}f_2}_{L^\infty}\\
&\lesssim 2^{\frac{k+p_{\max}}{2}+\frac{k_1}{2}}\cdot 2^{-\frac{1-\delta}{2}m-m}\cdot \varepsilon_1^2,
\end{aligned}
\end{equation}
an acceptable contribution.
\end{itemize}

\subsubsection{Gap in $q$}\label{ssec:X-q-gap}
We additionally localize in $q_i$ and write $g_i=P_{k_i,p_i,q_i}R_{\ell_i}f_i$, $i=1,2$.
A crude estimate using \eqref{BoundsOnMAddedBenoit} gives that
\begin{equation*}
\begin{split}
\norm{P_{k,p,q}R_\ell \mathcal{B}_{\mathfrak{m}}(g_1,g_2)}_{L^2}&\lesssim 2^m\cdot 2^{k+q_{\max}}\cdot 2^{\frac{3}{2}k_{\max}+\frac{q_{\min}}{2}}\cdot\Vert g_1\Vert_{L^2}\Vert g_2\Vert_{L^2}\\
&\lesssim 2^m\cdot 2^{\frac{5}{2}k_{\max}}2^{\frac{q_{\min}+q_1+q_2}{2}+q_{\max}}\cdot\Vert g_1\Vert_B\Vert g_2\Vert_B,
\end{split}
\end{equation*}
and we obtain acceptable contributions unless
\begin{equation}\label{CrudeBoundq1}
q_{\min}\ge-10 m,\qquad q_{\max}\ge -6m/7,
\end{equation}
and in particular, we have at most $O(m^3)$ choices for $\{q,q_1,q_2\}$. In this section, we assume that $q_{\min}\ll q_{\max}$ and (by the previous case) $2^{p_{\min}}\sim 2^{p_{\max}}\sim 1$. Using Lemma \ref{lem:new_ibp} and noting that $\abs{\bar\sigma}\sim 2^{q_{\max}}2^{k_{\min}+k_{\max}}$, repeated integration by parts allows us to deal with the case when
\begin{equation}\label{ConsLem57Gapq}
\begin{aligned}
V_\eta:\qquad &2^{-q_{\max}}2^{2k_1-k_{\min}-k_{\max}}(1+2^{k_2-k_1}(2^{q_2-q_1}+2^{\ell_1}))\leq 2^{(1-\delta)m},\\
V_{\xi-\eta}:\qquad &2^{-q_{\max}}2^{2k_2-k_{\min}-k_{\max}}(1+2^{k_1-k_2}(2^{q_1-q_2}+2^{\ell_2}))\leq 2^{(1-\delta)m}.
\end{aligned}
\end{equation}
W.l.o.g.\ we assume that $q_1\leq q_2$, so that we have two main cases to consider:

\subsubsection*{Case 3: $q\ll q_1,q_2$}
By Lemma \ref{lem:gapp-cases} we have two scenarios to consider:

\subsubsection*{Subcase 3.1: $2^{k_1}\sim 2^{k_2}$}
Then also $2^{q_1}\sim 2^{q_2}$.
Using \eqref{ConsLem57Gapq}, we see that we can assume $-q_{1}+k_1-k+\min\{\ell_1,\ell_2\}>(1-\delta)m$. We now want to use the precised decay estimate. Assuming w.l.o.g.\ that $g_2$ has fewer vector fields than $g_1$, we recall that by Proposition \ref{prop:decay} we have
\begin{equation}
e^{it\Lambda}g_2=e^{it\Lambda}g_2^{(1)}+e^{it\Lambda}g_2^{(2)},
\end{equation}
with
\begin{equation}
\norm{e^{it\Lambda}g_2^{(1)}}_{L^\infty}\lesssim 2^{\frac{3}{2}k_2-\frac{q_2}{2}}t^{-\frac{3}{2}}\varepsilon_1,\qquad \norm{e^{it\Lambda}g_2^{(2)}}_{L^2}\lesssim t^{-1-\beta'}\mathfrak{1}_{\{q_2\gtrsim -m\}}\varepsilon_1.
\end{equation}
Using a simple $L^\infty\times L^2$ bound with \eqref{BoundsOnMAddedBenoit}, we get
\begin{equation}
\begin{aligned}
\norm{P_{k,p,q}R_\ell \mathcal{Q}_{\mathfrak{m}}(g_1,g_2^{(1)})}_{L^2}&\lesssim 2^{k+q_2}\norm{g_1}_{L^2}\norm{e^{it\Lambda}g_2^{(1)}}_{L^\infty}\\
&\lesssim 2^{-\frac{3}{2}m}2^{k+\frac{q_1}{2}}\min\{2^{q_1},2^{-(1+\beta)\ell_1}\}\varepsilon_1^2\lesssim 2^{-\frac{9}{4}m}\varepsilon_1^2
\end{aligned}
\end{equation}
and using a crude estimate with Lemma \ref{lem:set_gain},
\begin{equation}
\begin{aligned}
\norm{P_{k,p,q}R_\ell \mathcal{Q}_{\mathfrak{m}}(g_1,g_2^{(2)})}_{L^2}&\lesssim 2^{k+q_2}\abs{\mathcal{S}}\cdot\norm{g_1}_{L^2}\norm{e^{it\Lambda}g_2^{(2)}}_{L^2} \\
&\lesssim 2^{2k+\frac{k_2}{2}+\frac{3}{2}q_2}2^{-(1+\beta)\ell_1}\norm{g_1}_X\cdot 2^{-(1+\beta')m} \cdot\varepsilon_1\\
&\lesssim 2^{(1-\beta)k+(\frac{3}{2}+\beta)k_1}2^{(\frac{1}{2}-\beta)q_2}\cdot
2^{-(2+\beta'+\beta-3\delta)m}\varepsilon_1^2,
\end{aligned}
\end{equation}
which are acceptable contributions.

\subsubsection*{Subcase 3.2: $2^{k_2}\ll 2^{k_1}$}
Then we have $2^{q_1-q_2}\sim 2^{k_2-k_1}\ll 1$, and thus $q\ll q_1\ll q_2$. Using \eqref{ConsLem57Gapq}, we can assume that
\begin{equation}\label{IBPSubcase832}
-q_2+\ell_2\geq (1-\delta)m-100\qquad\hbox{ and }\qquad \max\{-q_1,\ell_1-q_2\}\ge (1-\delta) m-100.
\end{equation}
We also observe that
\begin{equation}\label{eq:sc3.2-philowbd}
\begin{split}
\vert \Phi(\xi,\eta)\vert\ge 2^{q_2-10}.
\end{split}
\end{equation}

\medskip
Assume first that $q_1\ge (1+2\beta)q_2$, so that from \eqref{IBPSubcase832}, we see that $\ell_i\gtrsim (1-\delta)m+q_2$ for $i=1,2$. We can use the precised dispersive decay from Proposition \ref{prop:decay}. The worst case is when $g_2$ has more than $N-3$ vector fields. In this case, we split
\begin{equation*}
\begin{split}
e^{it\Lambda}g_1=e^{it\Lambda}g_1^{I}+e^{it\Lambda}g_1^{II},\qquad\Vert e^{it\Lambda}g_1^{I}\Vert_{L^\infty}\lesssim \varepsilon_1 2^{-\frac{3}{2}m-\frac{q_1}{2}},\qquad \Vert e^{it\Lambda}g_1^{II}\Vert_{L^2}\lesssim \varepsilon_1 2^{-(1+\beta')m}
\end{split}
\end{equation*}
and we use a crude estimate:
\begin{equation*}
\begin{split}
\norm{P_{k,p,q}R_\ell \mathcal{B}_{\mathfrak{m}}(g_1^{I},g_2)}_{L^2}&\lesssim 2^m\cdot 2^{k+q_2}\cdot\Vert e^{it\Lambda}g_1^I\Vert_{L^\infty}\Vert g_2\Vert_{L^2}\\
&\lesssim 2^{k-\frac{1}{2}m}2^{-\frac{q_1}{2}}2^{q_2-(1+\beta)\ell_2}
\cdot\Vert g_2\Vert_{X}\varepsilon_1\\
&\lesssim 2^{-(1+\beta+2\delta)m}\varepsilon_1^2\cdot 2^{-(\frac{1}{2}-4\delta) m}2^{k}2^{-\frac{1}{2}q_1-\beta q_2}
\end{split}
\end{equation*}
and this is enough using \eqref{CrudeBoundq1} since $-q_1\le-(1+2\beta)q_2$ and $q_2\ge-6m/7$. Similarly, a crude estimate gives
\begin{equation*}
\begin{split}
\norm{P_{k,p,q}R_\ell \mathcal{B}_{\mathfrak{m}}(g_1^{II},g_2)}_{L^2}&\lesssim 2^m\cdot 2^{k+q_2}\cdot\abs{\mathcal{S}}\cdot \Vert e^{it\Lambda}g_1^{II}\Vert_{L^2}\Vert g_2\Vert_{L^2}\\
&\lesssim 2^m\cdot 2^{k+\frac{3}{2}k_2+\frac{3}{2}q_2}\cdot 2^{-(1+\beta')m}2^{-(1+\beta)\ell_2}\cdot\varepsilon_1\Vert g_2\Vert_X\\
&\lesssim 2^{-(1+\beta'+\beta-3\delta)m}\cdot 2^{(\frac{1}{2}-\beta)q_2}\cdot 2^{\frac{5}{2}k}\cdot\varepsilon_1^2
\end{split}
\end{equation*}
and this is acceptable.

\medskip
We can now assume that $q_1\le (1+2\beta)q_2$, so that $2^{k_2}\lesssim 2^{k_1}2^{2\beta q_2}$. We can do a normal form as in \eqref{eq:mdecompNF}--\eqref{mdecompNFNRNorms} with $\lambda=2^{q_2-20}$, so that $\mathfrak{m}=\mathfrak{m}^{nr}$ by \eqref{eq:sc3.2-philowbd}.
On the one hand, a crude estimate using \eqref{BoundsOnMAddedBenoit} gives that
\begin{equation}
\begin{aligned}
\norm{P_{k,p,q}R_\ell \mathcal{Q}_{\mathfrak{m} \Phi^{-1}}(g_1,g_2)}_{L^2}&\lesssim 2^{k}\cdot 2^{\frac{3}{2}k_2+\frac{q_2}{2}}\norm{g_1}_{L^2}\norm{g_2}_{L^2}\\
&\lesssim 2^k2^{\frac{3}{2}k_2+\frac{q_2}{2}}\cdot \min\{2^{\frac{q_1}{2}},2^{-(1+\beta)\ell_1}\}2^{-(1+\beta)\ell_2}\cdot \left[\norm{g_1}_B+\Vert g_1\Vert_X\right]\norm{g_2}_X\\
&\lesssim 2^{-(1+\beta-2\delta)m}\cdot 2^{\frac{3}{2}k_2-(\frac{1}{2}+\beta)q_2}\cdot2^{(1-\beta)\frac{q_1}{2}}2^{-\beta(1+\beta)\ell_1}\cdot 2^{k}\varepsilon_1^2,
\end{aligned}
\end{equation}
which gives an acceptable contribution. Similarly,
\begin{equation}
\begin{aligned}
\norm{P_{k,p,q}R_\ell \mathcal{B}_{\mathfrak{m} \Phi^{-1}}(\partial_tg_1,g_2)}_{L^2}&\lesssim 2^m\cdot 2^{k}\cdot 2^{\frac{3}{2}k_2+\frac{q_2}{2}}\cdot\norm{\partial_t g_1}_{L^2}\norm{g_2}_{L^2}\\
&\lesssim 2^{-(\frac{1}{2}-\gamma)m}\cdot 2^{\frac{3}{2}(q_1-q_2)+\frac{q_2}{2}}2^{-(1+\beta)\ell_2}2^{\frac{5}{2}k}\cdot\varepsilon_1^2\Vert g_2\Vert_{X}\\
&\lesssim 2^{-(\frac{3}{2}-\gamma-2\delta)m}2^{-\frac{1}{2}q_2}2^{\frac{5}{2}k}\cdot\varepsilon_1^3
\end{aligned}
\end{equation}
which is enough since $q_2\ge-6m/7$ and $\beta\le 1/10$.
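To spell out the exponent count in the last bound (a routine check): since $q_2\ge -6m/7$, we have $2^{-\frac{1}{2}q_2}\le 2^{\frac{3}{7}m}$, and hence
\begin{equation*}
2^{-(\frac{3}{2}-\gamma-2\delta)m}2^{-\frac{1}{2}q_2}\le 2^{-(\frac{3}{2}-\frac{3}{7}-\gamma-2\delta)m}=2^{-(\frac{15}{14}-\gamma-2\delta)m},
\end{equation*}
with $\frac{15}{14}>1$, so the contribution is acceptable provided the margin $\frac{1}{14}$ dominates $\gamma+2\delta$ and the small loss allowed by $\beta\le 1/10$, consistent with the smallness assumptions on the parameters.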
The other case is simpler:
\begin{equation}
\begin{aligned}
\norm{P_{k,p,q}R_\ell \mathcal{B}_{\mathfrak{m} \Phi^{-1}}(g_1,\partial_tg_2)}_{L^2}&\lesssim 2^m\cdot 2^{k}\cdot 2^{\frac{3}{2}k_2+\frac{q_2}{2}}\cdot\norm{g_1}_{L^2}\norm{\partial_tg_2}_{L^2}\\
&\lesssim 2^{-(\frac{1}{2}-\gamma)m}\cdot 2^{(q_1-q_2)+\frac{q_1}{2}}\min\{2^{\frac{q_1}{2}},2^{-(1+\beta)\ell_1}\}2^{\frac{5}{2}k}\cdot\varepsilon_1^2\Vert g_1\Vert_{X}
\end{aligned}
\end{equation}
and if $q_1\le-(1-\delta)m$, we obtain an acceptable contribution, while if $\ell_1\ge (1-\delta)m+q_2-300$, we have the same numerology as in the term above. In all cases, we have an acceptable contribution.

\subsubsection*{Case 4: $q_1\ll q,q_2$}
By Lemma \ref{lem:gapp-cases} we have three scenarios to consider:

\subsubsection*{Subcase 4.1: $2^{k}\sim 2^{k_2}$}
Then also $2^{q}\sim 2^{q_2}$. Here repeated integration by parts gives the claim if
\begin{equation}\label{82341}
\begin{aligned}
V_\eta:\qquad &2^{-q_1}+2^{\ell_1-q_2}\leq 2^{(1-\delta)m},\\
V_{\xi-\eta}:\qquad &2^{-q_{2}}(2^{k_2-k_1}+2^{\ell_2})\leq 2^{(1-\delta)m}.
\end{aligned}
\end{equation}
This leads to the following cases to be distinguished:
\begin{enumerate}[label=(\alph*),wide]
\item Assume first that
\begin{equation}\label{823Case41}
k_2-k_1-q_2\ge(1-\delta)m-200.
\end{equation}
In this case, we can use \eqref{BoundsOnMAddedBenoit} with crude estimates as in Lemma \ref{lem:set_gain} to get
\begin{equation}
\begin{aligned}
\norm{P_{k,p,q}R_\ell \mathcal{B}_{\mathfrak{m}}(g_1,g_2)}_{L^2}&\lesssim 2^m\cdot 2^{k+q_2}\cdot 2^{\frac{3}{2}k_1+\frac{q_1}{2}}\cdot \Vert g_1\Vert_{L^2}\Vert g_2\Vert_{L^2}\\
&\lesssim 2^m\cdot 2^{\frac{3}{2}(q_2+k_1-k)}2^{\frac{5}{2}k}2^{\frac{q_1}{2}}\min\{2^{\frac{q_1}{2}},2^{-(1+\beta)\ell_1}\}\cdot\left[\Vert g_1\Vert_B+\Vert g_1\Vert_X\right]\Vert g_2\Vert_B\\
&\lesssim 2^{-(\frac{1}{2}-3\delta)m}2^{\frac{5}{2}k}2^{\frac{q_1}{2}}\min\{2^{\frac{q_1}{2}},2^{-(1+\beta)\ell_1}\}\cdot\varepsilon_1^2.
\end{aligned}
\end{equation}
If $q_1\leq -(1-\delta)m-100$, we can use the first estimate, while if $\ell_1\ge q_2+(1-\delta)m-100$, we can use the second term in the minimum since $q_2\ge -6m/7$ from \eqref{CrudeBoundq1}. In view of \eqref{82341}, this covers all cases when \eqref{823Case41} holds.
\item From \eqref{823Case41}, we can now assume that
\begin{equation}\label{823Case42}
\begin{split}
&\qquad \ell_2-q_2\ge(1-\delta)m-100\qquad\hbox{ and }\\
& \hbox{either }\quad q_2-k_2\ge q_1-k_1+10\quad\hbox{ or }\quad\ell_1-q_2\ge(1-\delta)m-10.
\end{split}
\end{equation}
Assume first that
\begin{equation*}
\begin{split}
q_2-k_2\ge q_1-k_1+10;
\end{split}
\end{equation*}
in this case, we have that, on the support of integration,
\begin{equation}\label{LargeHorGrad841}
\begin{split}
\vert\nabla_{\eta_h}\Phi(\xi,\eta)\vert\gtrsim 2^{q_2-k_2}.
\end{split}
\end{equation}
We will proceed as in Lemma \ref{lem:nfs}\eqref{it:NF-bd34}, and decompose, for $\lambda>0$ to be determined,
\begin{equation*}
\begin{split}
\mathfrak{m}(\xi,\eta)&=\mathfrak{m}^{res}(\xi,\eta)+\sum_{r\ge1}\mathfrak{m}_r(\xi,\eta),\\
\mathfrak{m}^{res}(\xi,\eta)&=\psi(\lambda^{-1}\Phi(\xi,\eta))\mathfrak{m}(\xi,\eta),\qquad \mathfrak{m}_r(\xi,\eta)=\varphi(2^{-r}\lambda^{-1}\Phi(\xi,\eta))\mathfrak{m}(\xi,\eta).
\end{split}
\end{equation*}
We can treat the resonant term using \eqref{LargeHorGrad841}, Lemma \ref{lem:set_gain2} and \eqref{BoundsOnMAddedBenoit}:
\begin{equation}\label{83241Res}
\begin{aligned}
\norm{P_{k,p,q}R_\ell \mathcal{B}_{\mathfrak{m}^{res}}(g_1,g_2)}_{L^2} &\lesssim 2^m\cdot 2^{k+q_2}\cdot 2^{\frac{k_1+q_1}{2}}\cdot (2^{k-q_2}\lambda)^{\frac{1}{2}}\cdot \Vert g_1\Vert_{L^2}\Vert g_2\Vert_{L^2}\\
&\lesssim 2^m\cdot 2^{\frac{3k+k_1}{2}}\cdot \lambda^{\frac{1}{2}}\cdot\min\{2^{q_1},2^{-(1+\beta)\ell_1+\frac{q_1}{2}}\}2^{-(1+\beta)\ell_2+\frac{q_2}{2}}\cdot\left[\Vert g_1\Vert_B+\Vert g_1\Vert_X\right]\cdot\Vert g_2\Vert_X\\
&\lesssim 2^{-(\beta-3\delta)m}\cdot 2^{\frac{3k+k_1}{2}}2^{-(\frac{1}{2}+\beta)q_2}\cdot \lambda^{\frac{1}{2}}\cdot\min\{2^{q_1},2^{-(1+\beta)\ell_1+\frac{q_1}{2}}\}\cdot\varepsilon_1^2.
\end{aligned}
\end{equation}
On the other hand, for the nonresonant terms, $r\ge 1$, we use a normal form transformation as in \eqref{mdecompNFNRNorms} and we estimate with a crude estimate, using \eqref{LargeHorGrad841} and Lemma \ref{lem:set_gain2} (see also Lemma \ref{lem:nfs}\eqref{it:NF-bd34}):
\begin{equation}\label{83241NRes1}
\begin{aligned}
\norm{P_{k,p,q}R_\ell \mathcal{Q}_{\mathfrak{m}_r \Phi^{-1}}(g_1,g_2)}_{L^2}&\lesssim 2^{k+q_2}\cdot 2^{\frac{k_1+q_1}{2}}\cdot 2^{-r}\lambda^{-1}(2^{k-q_2}2^{r}\lambda)^{\frac{1}{2}}\cdot \Vert g_1\Vert_{L^2}\Vert g_2\Vert_{L^2}\\
&\lesssim 2^{\frac{3k+k_1}{2}}\cdot 2^{-\frac{r}{2}}\lambda^{-\frac{1}{2}}\cdot\min\{2^{q_1},2^{-(1+\beta)\ell_1+\frac{q_1}{2}}\}2^{-(1+\beta)\ell_2+\frac{q_2}{2}}\cdot\left[\Vert g_1\Vert_X+\Vert g_1\Vert_B\right]\cdot\Vert g_2\Vert_X\\
&\lesssim 2^{-(1+\beta-2\delta)m}2^{\frac{3k+k_1}{2}}\cdot 2^{-(\frac{1}{2}+\beta)q_2}2^{-\frac{r}{2}}\lambda^{-\frac{1}{2}}\cdot\min\{2^{q_1},2^{-(1+\beta)\ell_1+\frac{q_1}{2}}\}\cdot\varepsilon_1^2,
\end{aligned}
\end{equation}
and using Lemma \ref{lem:dtfinL2} as well,
\begin{equation}\label{83241NRes2}
\begin{aligned}
\norm{P_{k,p,q}R_\ell \mathcal{B}_{\mathfrak{m}_r \Phi^{-1}}(\partial_tg_1,g_2)}_{L^2}&\lesssim 2^m\cdot 2^{k+q_2}2^{\frac{k_2}{2}} 2^{\frac{k_1+q_1}{2}}\cdot 2^{-r}\lambda^{-1}(2^{k-q_2}2^{r}\lambda)^{\frac{1}{2}}\cdot \Vert \partial_tg_1\Vert_{L^2}\Vert g_2\Vert_{L^2}\\
&\lesssim 2^m\cdot2^{\frac{4k+k_1}{2}+\frac{q_1+q_2}{2}}\cdot 2^{-\frac{r}{2}}\lambda^{-\frac{1}{2}}\cdot 2^{-(1+\beta)\ell_2}\cdot 2^{-(\frac{3}{2}-\gamma)m}\varepsilon_1^3,
\end{aligned}
\end{equation}
and
\begin{equation}\label{83241NRes3}
\begin{aligned}
\norm{P_{k,p,q}R_\ell \mathcal{B}_{\mathfrak{m}_r \Phi^{-1}}(g_1,\partial_tg_2)}_{L^2} &\lesssim 2^m\cdot 2^{k+q_2}\cdot 2^{\frac{k_1+q_1}{2}}\cdot 2^{-r}\lambda^{-1}(2^{k-q_2}2^r\lambda)^{\frac{1}{2}}\cdot \Vert g_1\Vert_{L^2}\Vert \partial_tg_2\Vert_{L^2}\\
&\lesssim 2^{-(\frac{1}{2}-\gamma)m}\cdot2^{\frac{3k+k_1+q_2}{2}}\cdot 2^{-\frac{r}{2}}\lambda^{-\frac{1}{2}}\cdot\min\{2^{q_1},2^{-(1+\beta)\ell_1+\frac{q_1}{2}}\}\cdot\left[\Vert g_1\Vert_X+\Vert g_1\Vert_B\right]\cdot\varepsilon_1^2.
\end{aligned}
\end{equation}
Inspecting \eqref{83241Res}, \eqref{83241NRes1}, \eqref{83241NRes2} and \eqref{83241NRes3}, we obtain an acceptable contribution when
\begin{equation*}
q_1\le -(1-3\beta)m-2\beta q_2\qquad\hbox{ and }\qquad \lambda=2^{(1+6\beta)q_2-6\beta m},
\end{equation*}
since $q_2\ge -6m/7$ from \eqref{CrudeBoundq1} and $\beta\leq 1/100$. Finally, when
\begin{equation*}
\begin{split}
\ell_i\ge q_2+(1-\delta)m\qquad \hbox{ and }\qquad q_1\ge -(1-3\beta)m-2\beta q_2,
\end{split}
\end{equation*}
we use the precised dispersive decay from Proposition \ref{prop:decay}.
The worst case is when $g_2$ has too many vector fields, in which case we decompose
\begin{equation*}
\begin{split}
e^{it\Lambda}g_1&=e^{it\Lambda}g_1^I+e^{it\Lambda}g_1^{II}
\end{split}
\end{equation*}
and we compute that
\begin{equation*}
\begin{aligned}
\norm{P_{k,p,q}R_\ell \mathcal{B}_{\mathfrak{m}}(g_1^{I},g_2)}_{L^2} &\lesssim 2^m\cdot 2^{k+q_2}\cdot \Vert e^{it\Lambda}g_1^I\Vert_{L^\infty} \Vert g_2\Vert_{L^2}\\
&\lesssim 2^{-(\frac{3}{2}-2\delta)m}2^{\frac{3}{2}k_1-\frac{1}{2}q_1-\beta q_2}\Vert g_1\Vert_D\Vert g_2\Vert_X \lesssim 2^{-(1+5\beta/4)m}\varepsilon_1^2,
\end{aligned}
\end{equation*}
and
\begin{equation*}
\begin{aligned}
\norm{P_{k,p,q}R_\ell \mathcal{B}_{\mathfrak{m}}(g_1^{II},g_2)}_{L^2} &\lesssim 2^m\cdot 2^{k+q_2}\cdot \abs{\mathcal{S}}\cdot \Vert e^{it\Lambda}g_1^{II}\Vert_{L^2} \Vert g_2\Vert_{L^2}\\
&\lesssim 2^{-(1+\beta'+\beta -2\delta)m}2^{(\frac{1}{2}-\beta) q_2}2^{\frac{5}{2}k}\cdot \Vert g_1\Vert_D\Vert g_2\Vert_X \lesssim 2^{-(1+\beta+10\delta)m}\varepsilon_1^2,
\end{aligned}
\end{equation*}
which is acceptable.
\end{enumerate}

\subsubsection*{Subcase 4.2: $2^{k}\ll 2^{k_2}\sim 2^{k_1}$}
Then also $2^{q_2-q}\sim 2^{k-k_2}\ll 1$, so that $q_1\ll q_2\ll q$.
Using Lemma \ref{lem:new_ibp}, repeated integration by parts then gives the claim if
\begin{equation}\label{8342Suff}
\begin{aligned}
(V_\eta)\quad \max\{-q_1,\ell_1-q_2\}&\le(1-\delta)m+k-k_2,\quad\hbox{ or }\quad (V_{\xi-\eta})\quad \ell_2-q_2\le (1-\delta)m+k-k_2.
\end{aligned}
\end{equation}
In addition, we have that
\begin{equation}
\abs{\Phi}\gtrsim 2^{q_{\max}},
\end{equation}
so that, using Lemma \ref{lem:phasesymb_bd}, we see that
\begin{equation}\label{8342BoundedNF}
\Vert \mathfrak{m} \Phi^{-1}\Vert_{\widetilde{\mathcal{W}}}\lesssim 2^k,
\end{equation}
and we can do a normal form as in \eqref{mdecompNFNRNorms}. The most difficult term is the boundary term. First, a crude estimate using Lemma \ref{lem:set_gain} gives
\begin{equation*}
\begin{aligned}
\norm{P_{k,p,q}R_\ell \mathcal{Q}_{\mathfrak{m}\Phi^{-1}}(g_1,g_2)}_{L^2}&\lesssim 2^k\abs{\mathcal{S}}\cdot \norm{g_1}_{L^2}\norm{g_2}_{L^2}\\
&\lesssim 2^{2k+\frac{k_1}{2}+\frac{q_1}{2}}\cdot 2^{\frac{q_1+q_2}{2}}\cdot\Vert g_1\Vert_B\Vert g_2\Vert_{B}\\
&\lesssim 2^{\frac{5}{2}k_1}2^{q_1+\frac{q_2}{2}}\cdot\varepsilon_1^2,
\end{aligned}
\end{equation*}
which is acceptable if $q_2+q_1/2\le -5/4m$.
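To verify this last step: in the present case $q_1\le q_2$, so that
\begin{equation*}
q_1+\tfrac{q_2}{2}-\big(q_2+\tfrac{q_1}{2}\big)=\tfrac{1}{2}(q_1-q_2)\le 0,
\end{equation*}
and hence, when $q_2+q_1/2\le -5/4m$, the factor $2^{q_1+\frac{q_2}{2}}$ is itself bounded by $2^{-\frac{5}{4}m}$, which yields an acceptable contribution.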
Independently, if
\begin{equation*}
q_1\le -(1-\delta)m-k+k_2,\qquad q_2\ge -5m/6,
\end{equation*}
a crude estimate using Lemma \ref{lem:set_gain} gives
\begin{equation}
\begin{aligned}
\norm{P_{k,p,q}R_\ell \mathcal{Q}_{\mathfrak{m}\Phi^{-1}}(g_1,g_2)}_{L^2}&\lesssim 2^k\abs{\mathcal{S}}\cdot \norm{g_1}_{L^2}\norm{g_2}_{L^2}\\
&\lesssim 2^{2k+\frac{k_1}{2}+\frac{q_1}{2}}\cdot 2^{\frac{q_1}{2}}2^{-(1+\beta)\ell_2}\cdot\Vert g_1\Vert_B\Vert g_2\Vert_{X}\\
&\lesssim 2^{-(1+\beta-2\delta)m}2^{q_1-q_2-\beta q_2}\cdot 2^{\frac{5}{2}k_1}2^{(1-\beta)(k-k_1)}\varepsilon_1^2,
\end{aligned}
\end{equation}
and this gives an acceptable contribution. Finally, if
\begin{equation*}
\begin{split}
\ell_i\ge (1-\delta)m+k-k_2+q_2,\,\, i\in\{1,2\},\qquad q_2+\frac{1}{2}q_1\ge-5/4m,
\end{split}
\end{equation*}
we can use the precised dispersion inequality from Proposition \ref{prop:decay}.
The most difficult case is when $g_2$ has too many vector fields, in which case we decompose
\begin{equation*}
\begin{split}
g_1=g_1^I+g_1^{II},\qquad\Vert e^{it\Lambda}g_1^I\Vert_{L^\infty}&\lesssim 2^{-\frac{3}{2}m-\frac{q_1}{2}+\frac{3}{2}k_1}\Vert g_1\Vert_D,\qquad \Vert g_1^{II}\Vert_{L^2}\lesssim 2^{-(1+\beta')m}\Vert g_1\Vert_D
\end{split}
\end{equation*}
and we compute using \eqref{8342BoundedNF}
\begin{equation*}
\begin{aligned}
\norm{P_{k,p,q}R_\ell \mathcal{Q}_{\mathfrak{m}\Phi^{-1}}(g_1^I,g_2)}_{L^2}&\lesssim 2^k\cdot \norm{e^{it\Lambda}g_1^I}_{L^\infty}\norm{g_2}_{L^2}\\
&\lesssim 2^{k+\frac{3}{2}k_1}2^{-\frac{3}{2}m-\frac{q_1}{2}-\ell_2}\cdot \Vert g_1\Vert_D\Vert g_2\Vert_X\\
&\lesssim 2^{\frac{5}{2}k_1}2^{-(\frac{5}{2}-\delta)m-\frac{q_1}{2}-q_2}\cdot\varepsilon_1^2
\end{aligned}
\end{equation*}
and, using a crude estimate from Lemma \ref{lem:set_gain},
\begin{equation*}
\begin{aligned}
\norm{P_{k,p,q}R_\ell \mathcal{Q}_{\mathfrak{m}\Phi^{-1}}(g_1^{II},g_2)}_{L^2}&\lesssim 2^k\cdot \abs{\mathcal{S}}\cdot\norm{g_1^{II}}_{L^2}\norm{g_2}_{L^2}\\
&\lesssim 2^{\frac{5}{2}k+\frac{q}{2}}2^{-(1+\beta')m-(1+\beta)\ell_2}\cdot \Vert g_1\Vert_D\Vert g_2\Vert_X\\
&\lesssim 2^{\frac{5}{2}k_1}2^{-(2+\beta+\beta'-2\delta)m-(\frac{1}{2}+\beta)q_2}\cdot\varepsilon_1^2,
\end{aligned}
\end{equation*}
which is acceptable since $q_2\ge(2/3)(k_2+k_1/2)\ge-5/6m$.
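For the last estimate, the exponent check is as follows: since $q_2\ge -5m/6$, we have $2^{-(\frac{1}{2}+\beta)q_2}\le 2^{(\frac{5}{12}+\frac{5}{6}\beta)m}$, so that
\begin{equation*}
2^{-(2+\beta+\beta'-2\delta)m-(\frac{1}{2}+\beta)q_2}\le 2^{-(\frac{19}{12}+\frac{1}{6}\beta+\beta'-2\delta)m},
\end{equation*}
which decays much faster than $2^{-m}$.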
The terms with time derivatives are easier to control using Lemma \ref{lem:dtfinL2}:
\begin{equation*}
\begin{aligned}
\norm{P_{k,p,q}R_\ell \mathcal{B}_{\Phi^{-1}\mathfrak{m}}(\partial_tg_1,g_2)}_{L^2}&\lesssim 2^m\cdot 2^k\abs{\mathcal{S}}\cdot \norm{\partial_tg_1}_{L^2}\norm{g_2}_{L^2}\\
&\lesssim 2^{-(\frac{1}{2}-\gamma)m}2^{2k+\frac{k_1}{2}}\min\{2^{q_2},2^{-(1+\beta)\ell_2+\frac{q_2}{2}}\}\varepsilon_1^2\left[\Vert g_2\Vert_B+\Vert g_2\Vert_X\right].
\end{aligned}
\end{equation*}
If $q_2\le -3/4m$, the first term in the $\min$ gives an acceptable contribution; otherwise the second term gives an acceptable contribution using \eqref{8342Suff}. Similarly,
\begin{equation*}
\begin{aligned}
\norm{P_{k,p,q}R_\ell \mathcal{B}_{\Phi^{-1}\mathfrak{m}}(g_1,\partial_tg_2)}_{L^2}&\lesssim 2^m\cdot 2^k\abs{\mathcal{S}}\cdot \norm{g_1}_{L^2}\norm{\partial_tg_2}_{L^2}\\
&\lesssim 2^{-(\frac{1}{2}-\gamma)m}2^{2k+\frac{k_1}{2}}\min\{2^{q_1},2^{-(1+\beta)\ell_1+\frac{q_2}{2}}\}\varepsilon_1^2\left[\Vert g_1\Vert_B+\Vert g_1\Vert_X\right],
\end{aligned}
\end{equation*}
and we can conclude similarly.

\subsubsection*{Subcase 4.3: $2^{k_2}\ll 2^{k}\sim 2^{k_1}$}
Then also $2^{q-q_2}\sim 2^{k_2-k}\ll 1$, so that $q_1\ll q\ll q_2$.
Using Lemma \ref{lem:new_ibp}, repeated integration by parts gives the claim if
\begin{equation}
\begin{aligned}
(V_\eta)\quad\max\{-q_1,\ell_1-q_2\}\le (1-\delta)m,\qquad\hbox{ or }\qquad ( V_{\xi-\eta})\quad\ell_2-q_2\leq (1-\delta)m,
\end{aligned}
\end{equation}
and we can proceed as for Subcase 4.2, since once again
\begin{equation*}
\vert\Phi(\xi,\eta)\vert\gtrsim 2^{q_{\max}},
\end{equation*}
so that \eqref{8342BoundedNF} holds, and since we do not need to keep track of the $k$ contributions.

\medskip

\subsubsection{No gaps}\label{ssec:X-nogaps}
It remains (see \eqref{CrudeBoundq1}) to consider the case $p_{\min}\ge -10$ and $-6m/7\le q_{\max}\le q_{\min}+10$. We use the dichotomy of Proposition \ref{prop:phasevssigma}. We decompose $\mathfrak{m}=\mathfrak{m}^{res}+\mathfrak{m}^{nr}$ as in \eqref{eq:mdecompNF} with $\lambda=2^{-100}2^{q}$.

\subsubsection*{The nonresonant case $\mathfrak{m}^{nr}$}
On the support of the nonresonant set, we use a normal form transformation as in \eqref{mdecompNFNRNorms}. Lemma \ref{lem:phasesymb_bd} gives
\begin{equation}\label{GoodBoundMultiplierNoGapQ}
\abs{\mathfrak{m}^{nr}\Phi^{-1}}\lesssim \norm{\mathfrak{m}^{nr}\Phi^{-1}}_{\widetilde{\mathcal{W}}}\lesssim 2^k.
\end{equation}
For the boundary term, we may assume that $g_1$ has fewer vector fields than $g_2$, and we use the precised dispersion estimate from Proposition \ref{prop:decay} to decompose
\begin{equation}\label{PrecisedDec}
\begin{split}
g_1=g_1^I+g_1^{II},\qquad\Vert e^{it\Lambda}g_1^I\Vert_{L^\infty}&\lesssim 2^{-\frac{3}{2}m}2^{-\frac{q}{2}}\Vert g_1\Vert_D,\qquad \Vert g_1^{II}\Vert_{L^2}\lesssim 2^{-(1+\beta')m}\Vert g_1\Vert_D,
\end{split}
\end{equation}
and using \eqref{GoodBoundMultiplierNoGapQ} we compute that
\begin{equation*}
\begin{aligned}
\norm{P_{k,p,q}\mathcal{Q}_{\mathfrak{m}^{nr}\Phi^{-1}}(g_1^I,g_2)}_{L^2}&\lesssim 2^k\cdot \Vert e^{it\Lambda}g_1^I\Vert_{L^\infty}\Vert g_2\Vert_{L^2}\\
&\lesssim 2^k\cdot 2^{-\frac{3}{2}m}\Vert g_1\Vert_{D}\Vert g_2\Vert_{B}\lesssim 2^k\cdot 2^{-\frac{3}{2}m}\varepsilon_1^2,
\end{aligned}
\end{equation*}
while for the other term, we use Corollary \ref{cor:extrapol_decay} as well to get
\begin{equation*}
\begin{aligned}
\norm{P_{k,p,q}\mathcal{Q}_{\mathfrak{m}^{nr}\Phi^{-1}}(g_1^{II},g_2)}_{L^2}&\lesssim 2^k\cdot \Vert g_1^{II}\Vert_{L^2}\Vert e^{it\Lambda}g_2\Vert_{L^\infty}\\
&\lesssim 2^k\cdot 2^{-(1+\frac{2}{3}+\beta')m}\Vert g_1\Vert_{D}\,\varepsilon_1.
\end{aligned}
\end{equation*}
For the terms with the time derivatives, we proceed similarly, using \eqref{GoodBoundMultiplierNoGapQ}, Lemma \ref{lem:dtfinL2} and Corollary \ref{cor:extrapol_decay}:
\begin{equation*}
\begin{aligned}
\norm{P_{k,p,q}\mathcal{B}_{\mathfrak{m}^{nr}\Phi^{-1}}(g_1,\partial_tg_2)}_{L^2}&\lesssim 2^m2^k\cdot \Vert e^{it\Lambda}g_1\Vert_{L^\infty}\Vert \partial_tg_2\Vert_{L^2}\\
&\lesssim 2^k\cdot 2^{-(\frac{1}{2}+\frac{2}{3}-\gamma)m}\varepsilon_1^3,
\end{aligned}
\end{equation*}
and similarly for the symmetric term.

\subsubsection*{The resonant term $\mathfrak{m}^{res}$}
A crude estimate using Lemma \ref{lem:set_gain} gives
\begin{equation*}
\begin{aligned}
\norm{P_{k,p,q}\mathcal{B}_{\mathfrak{m}^{res}}(g_1,g_2)}_{L^2}&\lesssim 2^m2^{q+k}\cdot\abs{\mathcal{S}}\cdot \Vert g_1\Vert_{L^2}\Vert g_2\Vert_{L^2}\\
&\lesssim 2^m2^{k+\frac{3}{2}k_{\min}+\frac{3}{2}q}\cdot\min\{2^{k_1},2^{\frac{q}{2}}\}\cdot\min\{2^{k_2},2^{\frac{q}{2}}\}\cdot\left[\Vert g_1\Vert_{H^{-1}}+\Vert g_1\Vert_B\right]\cdot\left[\Vert g_2\Vert_{H^{-1}}+\Vert g_2\Vert_B\right]\\
&\lesssim 2^m2^{k+\frac{3}{2}k_{\min}+2q}\cdot\min\{2^\frac{q}{2},2^{k_1},2^{k_2}\}\varepsilon_1^2.
\end{aligned}
\end{equation*}
This gives an acceptable contribution when
\begin{equation}\label{824ResCase1}
k_{\min}+q\le -(1-\beta)m.
\end{equation}
We see from Proposition \ref{prop:phasevssigma} that $\abs{\bar\sigma}\gtrsim 2^{q+k_{\max}+k_{\min}}$, and we can proceed as above in the case of gaps (without needing to worry about the losses in the $p$'s and $q$'s).
Observing that
\begin{equation*}
(sV_\eta\Phi)^{-1}\cdot V_\eta(\psi(2^{-q}\Phi))=(2^{100}s2^{q})^{-1}\cdot\psi'(\lambda^{-1}\Phi),
\end{equation*}
we can use Lemma \ref{lem:new_ibp} to control the terms when
\begin{equation*}
\begin{split}
\max\{2k_i,k_1+k_2+\ell_i\}\le (1-\delta)m+q+k_{\max}+k_{\min},\qquad i\in\{1,2\}.
\end{split}
\end{equation*}
Using the conclusion from \eqref{824ResCase1}, it suffices to consider the case
\begin{equation*}
\begin{split}
k_1+k_2+\ell_i\ge (1-\delta)m+q+k_{\max}+k_{\min},\qquad i\in\{1,2\},
\end{split}
\end{equation*}
(else $k_{\min}+q\lesssim -(1-\beta)m$). To conclude, we want to use the precised dispersion from Proposition \ref{prop:decay}. Assuming that $g_1$ has fewer vector fields, we decompose as in \eqref{PrecisedDec} and compute, using Lemma \ref{lem:ECmult_bds},
\begin{equation*}
\begin{aligned}
\norm{P_{k,p,q}\mathcal{B}_{\mathfrak{m}^{res}}(g_1^I,g_2)}_{L^2}&\lesssim 2^m2^{q+k}\cdot \Vert e^{it\Lambda}g_1\Vert_{L^\infty}\Vert g_2\Vert_{L^2}\\
&\lesssim 2^{-\frac{1}{2}m}2^{\frac{q}{2}+k+\frac{3}{2}k_1-\ell_2}\Vert g_1\Vert_D\Vert g_2\Vert_X\\
&\lesssim 2^{-(\frac{3}{2}-\delta)m}2^{-\frac{q}{2}+k+\frac{5}{2}k_1-k_{\max}-k_{\min}+k_2}\varepsilon_1^2
\end{aligned}
\end{equation*}
and this gives an acceptable contribution since $q\ge -6m/7$.
Similarly, using Lemma \ref{lem:set_gain},
\begin{equation*}
\begin{aligned}
\norm{P_{k,p,q}\mathcal{B}_{\mathfrak{m}^{res}}(g_1^{II},g_2)}_{L^2}&\lesssim 2^m2^{q+k}\cdot\abs{\mathcal{S}}\cdot \Vert g_1\Vert_{L^2}\Vert g_2\Vert_{L^2}\\
&\lesssim 2^{-\beta' m}2^{\frac{3}{2}q+k+\frac{3}{2}k_{\min}-(1+\beta)\ell_2}\Vert g_1\Vert_D\Vert g_2\Vert_X\\
&\lesssim 2^{-(1+\beta+\beta'-2\delta)m}2^{\frac{5}{2}k_{\max}^+}\varepsilon_1^2,
\end{aligned}
\end{equation*}
which gives an acceptable contribution. \end{proof}

\subsection*{Acknowledgments}
Y.\ Guo's research is supported in part by NSF grant DMS-2106650. B.\ Pausader is supported in part by NSF grant DMS-2154162, a Simons fellowship, and benefited from the hospitality of the CY-Advanced Studies fellow program. K.\ Widmayer gratefully acknowledges support of the SNSF through grant PCEFP2\_203059. We thank the referees for their careful reading and the many valuable suggestions.

\appendix
\section{Auxiliary results}\label{apdx}
\subsection{Proof of Proposition \ref{prop:LPOmega}}\label{apdx:angLP}
Here we give the proof of the properties of the angular Littlewood-Paley decomposition introduced in Section \ref{ssec:angLP}. Denoting by $P_n^{(a,b)}$ the Jacobi polynomials, we begin by recalling that with
\begin{equation}
P^{(0,0)}_n(z)=L_n(z)=\frac{1}{2^n n!}\frac{d^n}{dz^n}[(z^2-1)^n]
\end{equation}
there holds that (see \cite[Section 4.5]{Sze1975})
\begin{equation}\label{DerJacobi}
\begin{split}
\frac{d}{dz}P^{(a,a)}_n=\frac{n+2a+1}{2}P^{(a+1,a+1)}_{n-1},\qquad a\in\mathbb{N}.
\end{split}
\end{equation}
Moreover, we have the following asymptotics (see \cite[Section 8.21]{Sze1975}):
\begin{lemma}\label{LemZonal0} Fix $0<c<\pi$, then
\begin{equation}\label{SogForm}
P^{(a,a)}_n(\cos\theta)=\begin{cases}2^a\sqrt{\frac{2}{\pi}}\frac{1}{\sqrt{n}(\sin\theta)^{a+\frac{1}{2}}}\left(\cos((n+a+\tfrac{1}{2})\theta-\tfrac{(2a+1)\pi}{4})+\frac{1}{n\sin\theta}O(1)\right)&\quad\text{if}\quad c/n\le\theta\le\pi-c/n,\\ O(n^a)&\quad\text{else.} \end{cases}
\end{equation}
\end{lemma}
These estimates are relevant in view of the following fact about angular regularity (see \cite[Section 2.8.4]{Atkinson2012}):
\begin{lemma}\label{LemZonal1} Let $\Pi_n$ denote the $L^2$-projector onto the $n$-th eigenspace of the spherical Laplacian $\Delta_{\mathbb{S}^2}$ associated to the eigenvalue $n(n+1)$. Then for any $P\in\mathbb{S}^2$ and $f\in L^2(\mathbb{S}^2)$ there holds that
\begin{equation}\label{ZonConv}
(\Pi_n f)(P)=\int_{\mathbb{S}^2}f(Q)\mathfrak{Z}_n(\langle P,Q\rangle)d\nu_{\mathbb{S}^2}(Q),\qquad \mathfrak{Z}_n:=\frac{2n+1}{4\pi}P_n^{(0,0)}.
\end{equation}
\end{lemma}
We are now in a position to give the proof of Proposition \ref{prop:LPOmega}.
\begin{proof}[Proof of Proposition \ref{prop:LPOmega}] We start with the proof of \ref{it:LPang-comm}. It is straightforward to see that for any $\ell\in\mathbb{Z}$ there holds that $[\bar{R}_\ell,S]=0$. The commutation with $\Omega_{ab}$ follows from the identity
\begin{equation*}
\begin{split}
(\Omega^x_{ab}+\Omega^\vartheta_{ab})\langle x,\vartheta\rangle=0.
\end{split}
\end{equation*}
It thus suffices to prove that $\bar{R}_\ell$ commutes with the Fourier transform.
After writing down the explicit formula, we see that it suffices to check that
\begin{equation*}
\begin{split}
\int_{\mathbb{S}^2}e^{i\vert x\vert\vert\xi\vert\langle \alpha,\frac{\xi}{\vert\xi\vert}\rangle}\mathfrak{Z}_n(\langle \alpha,\beta\rangle)d\nu_{\mathbb{S}^2}(\alpha)=\int_{\mathbb{S}^2}e^{i\vert x\vert\vert\xi\vert\langle \alpha,\beta\rangle}\mathfrak{Z}_n(\langle \alpha,\frac{\xi}{\vert\xi\vert}\rangle)d\nu_{\mathbb{S}^2}(\alpha).
\end{split}
\end{equation*}
Let $\rho_\gamma$ denote a rotation that sends $N$ to $\gamma\in\mathbb{S}^2$. After a change of variable, we observe that
\begin{equation}\label{FRk}
\begin{split}
\int_{\mathbb{S}^2}e^{i\lambda \langle \alpha,\gamma\rangle}\mathfrak{Z}_n(\langle \alpha,\beta\rangle)d\nu_{\mathbb{S}^2}(\alpha)&=\int_{\mathbb{S}^2}e^{i\lambda\langle N,\alpha\rangle}\mathfrak{Z}_n(\langle \beta,\rho_\gamma \alpha\rangle)d\nu_{\mathbb{S}^2}(\alpha)=\Pi_n[e^{i\lambda\langle N,\cdot\rangle}](\rho_{\gamma}^{-1}\beta).
\end{split}
\end{equation}
It remains to observe that $e^{i\lambda\langle N,\cdot\rangle}$ is a zonal function, and that $\Pi_n$ respects zonal functions: this follows by direct inspection, or from the fact that $\Delta_{\mathbb{S}^2}$ and the $z$-angular momentum $\Omega_{12}$ commute. Hence $\Pi_n[e^{i\lambda\langle N,\cdot\rangle}]$ only depends on the distance to the north pole.
But
\begin{equation*}
\langle \rho_{\gamma}^{-1}\beta,N\rangle=\langle\beta,\gamma\rangle=\langle \rho_{\beta}^{-1}\gamma,N\rangle
\end{equation*}
and therefore the last term in \eqref{FRk} is symmetric in $\gamma,\beta$.

\medskip
The first and third affirmations in \ref{it:LPang-orth} follow from the same properties on $L^2(\mathbb{S}^2)$. For the second statement, using the reproducing property \eqref{ZonConv}, we compute that
\begin{equation*}
\begin{split}
\left(\bar{R}_\ell \bar{R}_{\ell'}f\right)(x)&=\sum_{n,n'\ge0}\varphi(2^{-\ell}n)\varphi(2^{-\ell'}n')\int_{\mathbb{S}^2}f(\vert x\vert\vartheta)\left(\int_{\mathbb{S}^2}\mathfrak{Z}_{n'}(\langle\frac{x}{\vert x\vert},\alpha\rangle)\mathfrak{Z}_n(\langle\alpha,\vartheta\rangle)d\nu_{\mathbb{S}^2}(\alpha)\right) d\nu_{\mathbb{S}^2}(\vartheta)\\
&=\sum_{n'\ge0}\varphi(2^{-\ell}n')\varphi(2^{-\ell'}n')\int_{\mathbb{S}^2}f(\vert x\vert\vartheta)\mathfrak{Z}_{n'}(\langle\frac{x}{\vert x\vert},\vartheta\rangle)d\nu_{\mathbb{S}^2}(\vartheta).
\end{split}
\end{equation*}
This shows that $\bar{R}_\ell \bar{R}_{\ell'}=0$ whenever $\abs{\ell-\ell'}\ge4$. The last statement in \ref{it:LPang-orth} follows by duality from the fact that
\begin{equation*}
\int_{\mathbb{S}^2}\Pi_{n_1}[f]\cdot \Pi_{n_2}[g]\cdot\Pi_{n_3}[h]d\nu_{\mathbb{S}^2}=0
\end{equation*}
whenever $\max\{n_1,n_2,n_3\}> \text{med}\{n_1,n_2,n_3\}+\min\{n_1,n_2,n_3\}$. (This can also be seen from the fact that spherical harmonics of degree $n$ are restrictions to $\mathbb{S}^2$ of homogeneous harmonic polynomials of degree $n$.)
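In the zonal case the vanishing of such triple products reduces to the classical fact that $\int_{-1}^{1}P_{n_1}P_{n_2}P_{n_3}\,dx=0$ whenever one degree exceeds the sum of the other two (the product of the two lower-degree polynomials has degree at most their sum). This can be sanity-checked exactly; the snippet below is illustrative only (not part of the proof) and uses pure-Python rational arithmetic:

```python
from fractions import Fraction

def legendre(n):
    """Coefficient list (exact rationals) of the Legendre polynomial P_n."""
    p0, p1 = [Fraction(1)], [Fraction(0), Fraction(1)]
    if n == 0:
        return p0
    for k in range(1, n):
        # Bonnet recurrence: (k+1) P_{k+1} = (2k+1) x P_k - k P_{k-1}
        xp1 = [Fraction(0)] + p1
        pad = p0 + [Fraction(0)] * (len(xp1) - len(p0))
        p2 = [(Fraction(2 * k + 1) * a - Fraction(k) * b) / (k + 1)
              for a, b in zip(xp1, pad)]
        p0, p1 = p1, p2
    return p1

def polymul(p, q):
    out = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def integrate(p):
    # exact integral over [-1, 1]; odd monomials vanish
    return sum(c * Fraction(2, k + 1) for k, c in enumerate(p) if k % 2 == 0)

def triple(a, b, c):
    """Exact value of the triple product integral of P_a, P_b, P_c."""
    return integrate(polymul(polymul(legendre(a), legendre(b)), legendre(c)))

assert triple(3, 1, 1) == 0   # 3 > 1 + 1: vanishes
assert triple(4, 2, 1) == 0   # 4 > 2 + 1: vanishes
assert triple(2, 1, 1) != 0   # 2 = 1 + 1: no vanishing forced
```

Running the asserts confirms the degree threshold exactly, with no floating-point tolerance involved.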
\medskip
For \ref{it:LPang-bd} we let
\begin{equation*}
K_\ell(\omega,\vartheta)=\sum_{n\ge 0}\varphi(2^{-\ell}n)\mathfrak{Z}_n(\langle \omega,\vartheta\rangle),
\end{equation*}
and we claim that
\begin{equation*}
\sup_\omega\Vert K_\ell(\omega,\vartheta)\Vert_{L^1(\mathbb{S}^2_\vartheta)}+\sup_\vartheta\Vert K_\ell(\omega,\vartheta)\Vert_{L^1(\mathbb{S}^2_\omega)}\lesssim 1.
\end{equation*}
This essentially follows from \eqref{SogForm}: with \eqref{DerJacobi} we have that
\begin{equation}
(2n+1)P_n^{(0,0)}(x)=\frac{d}{dx}\left(P_{n+1}^{(0,0)}(x)-P_{n-1}^{(0,0)}(x)\right)=\frac{1}{2}\left((n+2)P_{n}^{(1,1)}(x)-nP_{n-2}^{(1,1)}(x)\right),
\end{equation}
and thus
\begin{equation*}
\begin{split}
\sum_{n\geq 0}\varphi(2^{-\ell}n)\frac{2n+1}{4\pi}P_n^{(0,0)}(x)=\frac{1}{8\pi}\sum_{n\geq 0}P_n^{(1,1)}(x)\cdot D_n,\qquad D_n:=(n+2)\left[\varphi(2^{-\ell}n)-\varphi(2^{-\ell}(n+2))\right].
\end{split}
\end{equation*}
In view of \eqref{SogForm}, let
\begin{equation*}
\begin{split}
C_n(\theta)&=\sum_{0\le j\le n-1}\cos\left(\left(j+\frac{3}{2}\right)\theta-\frac{3\pi}{4}\right)=\Re\left(e^{-i\frac{3\pi}{4}}e^{i\frac{3\theta}{2}}\frac{1-e^{in\theta}}{1-e^{i\theta}}\right)=\frac{\sin\frac{n\theta}{2}}{\sin\frac{\theta}{2}}\cos\left(\left(\frac{n}{2}+1\right)\theta-\frac{3\pi}{4}\right),
\end{split}
\end{equation*}
then
\begin{equation*}
\begin{split}
I_\ell(\theta):=\sin(\theta)^{-3/2}\sum_{n\ge0}\frac{1}{\sqrt{n}}\cos((n+\tfrac{3}{2})\theta-\tfrac{3\pi}{4})\cdot D_n&=\sin(\theta)^{-3/2}\cdot \sum_{n\ge0}\frac{D_n}{\sqrt{n}}\cdot\left[C_{n+1}(\theta)-C_n(\theta)\right]\\
&=\sin(\theta)^{-3/2}\cdot\sum_{n\ge0}C_n(\theta)\left[\frac{D_{n-1}}{\sqrt{n-1}}-\frac{D_n}{\sqrt{n}}\right],
\end{split}
\end{equation*}
so that, since $\abs{\frac{D_{n+1}}{\sqrt{n+1}}-\frac{D_{n}}{\sqrt{n}}}\lesssim 2^{-3\ell/2}$ and $\abs{C_n(\theta)}\lesssim \abs{\sin(\frac{\theta}{2})}^{-1}$, we have
\begin{equation*}
\begin{split}
\int_{c2^{-\ell}\le \theta\le\pi-c2^{-\ell}}\vert I_\ell(\theta)\vert\cdot\sin(\theta) d\theta\lesssim 1.
\end{split}
\end{equation*}
Similarly,
\begin{equation*}
\begin{split}
I\!I_\ell(\theta):=\sin(\theta)^{-5/2}\sum_{n\ge0}\frac{1}{n^{3/2}}\cdot D_n,\qquad \int_{c2^{-\ell}\le \theta\le\pi-c2^{-\ell}}\vert I\!I_\ell(\theta)\vert\cdot\sin(\theta) d\theta\lesssim 1,
\end{split}
\end{equation*}
and again by \eqref{SogForm}
\begin{equation*}
\sum_{n\geq 0}\varphi(2^{-\ell}n)\int_{0\le \theta\le c2^{-\ell}}\abs{P_n^{(1,1)}(\cos(\theta))}\cdot\sin(\theta) d\theta\lesssim 1.
\end{equation*}
This shows that the kernel of $\bar{R}_\ell$ is integrable.
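The geometric-series evaluation of $C_n(\theta)$ can be checked numerically against the closed form $\frac{\sin(n\theta/2)}{\sin(\theta/2)}\cos((\frac{n}{2}+1)\theta-\frac{3\pi}{4})$; the following stdlib-only snippet (illustrative, not part of the proof) compares the direct sum with the closed form:

```python
import math

def C_direct(n, theta):
    # direct sum: C_n(theta) = sum_{j=0}^{n-1} cos((j + 3/2) theta - 3 pi / 4)
    return sum(math.cos((j + 1.5) * theta - 3 * math.pi / 4) for j in range(n))

def C_closed(n, theta):
    # closed form: sin(n theta / 2) / sin(theta / 2)
    #              * cos((n/2 + 1) theta - 3 pi / 4)
    return (math.sin(n * theta / 2) / math.sin(theta / 2)
            * math.cos((n / 2 + 1) * theta - 3 * math.pi / 4))

for n in (1, 5, 20):
    for theta in (0.3, 1.0, 2.5):
        assert abs(C_direct(n, theta) - C_closed(n, theta)) < 1e-8
```

The agreement holds for all $0<\theta<\pi$; the asserts above only sample a few values.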
A similar proof works for $\bar{R}_{\le \ell}$.

\medskip
We now turn to \ref{it:LPang-Bern}. Starting from
\begin{equation*}
\Delta_{\mathbb{S}^2}=\sum_{j<l}\Omega_{jl}\Omega_{jl},
\end{equation*}
we obtain the self-reproducing formula
\begin{equation*}
\begin{split}
\mathcal{Z}_n=\frac{1}{n(n+1)}\sum_{a<b}\Omega_{ab}\Omega_{ab}\mathcal{Z}_n,\qquad \mathcal{Z}_n(P)=\mathfrak{Z}_n(\langle P,N\rangle),
\end{split}
\end{equation*}
and therefore
\begin{equation}\label{BernsteinConsequence}
\begin{split}
\bar{R}_\ell f&=2^{-2\ell}\sum_{a<b}\Omega_{ab}\Omega_{ab}\widetilde{R}_\ell f,\\
\widetilde{R}_\ell f&:= \sum_{n\ge 0}\varphi(2^{-\ell}n)\frac{2^{2\ell}}{n(n+1)}\int_{\mathbb{S}^2}f(\vert x\vert\vartheta)\mathfrak{Z}_n(\langle \vartheta,\frac{x}{\vert x\vert}\rangle)d\nu_{\mathbb{S}^2}(\vartheta),
\end{split}
\end{equation}
where $\widetilde{R}_\ell$ obeys similar properties as $\bar{R}_\ell$. It suffices now to show that
\begin{equation}
\begin{split}
\Vert \Omega_{ab}\bar{R}_\ell f\Vert_{L^r}\lesssim 2^\ell\Vert f\Vert_{L^r},\qquad 1\le r\le\infty,
\end{split}
\end{equation}
and similarly for $\widetilde{R}_\ell$. We provide the details for $\bar{R}_\ell$; the case of $\widetilde{R}_\ell$ is similar.
Once again we consider the kernel of $\Omega_{ab} \bar{R}_\ell$:
\begin{equation*}
\begin{split}
(\Omega_{ab} \bar{R}_\ell f)(x)&=2^\ell\int_{\mathbb{S}^2}f(\vert x\vert\vartheta)\cdot \mathcal{K}_\ell(\tfrac{x}{\vert x\vert},\vartheta)d\nu_{\mathbb{S}^2}(\vartheta),\qquad \mathcal{K}_\ell(\tfrac{x}{\vert x\vert},\vartheta):=2^{-\ell}\sum_{n\ge0}\varphi(2^{-\ell}n)\Omega_{ab}\left[\mathfrak{Z}_n(\langle \tfrac{x}{\vert x\vert},\vartheta\rangle)\right],
\end{split}
\end{equation*}
and claim that
\begin{equation*}
\sup_\omega\Vert \mathcal{K}_\ell(\omega,\vartheta)\Vert_{L^1(\mathbb{S}^2_\vartheta)}+\sup_\vartheta\Vert \mathcal{K}_\ell(\omega,\vartheta)\Vert_{L^1(\mathbb{S}^2_\omega)}\lesssim 1.
\end{equation*}
Indeed, we compute that
\begin{equation*}
\begin{split}
\mathcal{K}_\ell(\omega,\vartheta)&=2^{-\ell}\sum_{n\ge0}\varphi(2^{-\ell}n)\mathfrak{Z}_n'(\langle \omega,\vartheta\rangle)\cdot\Omega^\omega_{ab}(\langle\omega,\vartheta\rangle),\qquad \vert \Omega^\omega_{ab}(\langle\omega,\vartheta\rangle)\vert \lesssim \sqrt{1-\langle \omega,\vartheta\rangle^2},
\end{split}
\end{equation*}
and the rest follows in a similar way from the boundedness of $\bar{R}_\ell$ by using \eqref{DerJacobi} and \eqref{SogForm}. \end{proof}

\subsection{Set size gain}\label{apdx:set_gain}
The idea here is that in the bilinear estimates we can always gain the smallest of \emph{both $p,p_j$ and $q,q_j$}, since they correspond to different directions.
\begin{lemma}\label{lem:set_gain} Consider a typical bilinear expression $\mathcal{Q}_\mathfrak{m}$ with localizations and a multiplier $\mathfrak{m}$, i.e.\
\begin{equation}
\begin{aligned}
&\widehat{\mathcal{Q}_\mathfrak{m}(f,g)}(\xi):=\int_\eta e^{\pm is\Phi}\chi(\xi,\eta) \mathfrak{m}(\xi,\eta)\hat{f}(\xi-\eta)\hat{g}(\eta) d\eta,\\
&\chi(\xi,\eta)=\varphi_{k,p,q}(\xi)\varphi_{k_1,p_1,q_1}(\xi-\eta)\varphi_{k_2,p_2,q_2}(\eta).
\end{aligned}
\end{equation}
Then with
\begin{equation}
\abs{\mathcal{S}}:=\min\{2^{p+k},2^{p_1+k_1},2^{p_2+k_2}\}\cdot\min\{2^{\frac{q+k}{2}},2^{\frac{q_1+k_1}{2}},2^{\frac{q_2+k_2}{2}}\}
\end{equation}
we have that
\begin{equation}
\norm{\mathcal{Q}_\mathfrak{m}(f,g)}_{L^2}\lesssim \abs{\mathcal{S}}\cdot\norm{\mathfrak{m}}_{L^\infty_{\xi,\eta}} \norm{P_{k_1,p_1,q_1}f}_{L^2}\norm{P_{k_2,p_2,q_2}g}_{L^2}.
\end{equation}
\end{lemma}
\begin{proof} To begin, let us assume that $p+k<p_1+k_1$ and $q+k>q_1+k_1$ (the ``symmetric cases'' of $p+k<p_1+k_1$ with $q+k<q_1+k_1$ and reverse are direct).
Then, for any $h\in L^2$ we find that
\begin{equation*}
\begin{split}
\abs{\ip{\mathcal{Q}_\mathfrak{m}(f,g),h}}&\lesssim \iint_{\mathbb{R}^3}\abs{\mathfrak{m}(\xi,\eta)}\varphi(2^{-k-p}\xi_{\mathrm{h}})\vert\widehat{f}(\xi-\eta)\vert\varphi(2^{-k_1-q_1}(\xi_3-\eta_3))\vert\widehat{g}(\eta)h(\xi)\vert d\xi d\eta\\
&\lesssim \Vert \mathfrak{m}\Vert_{L^\infty}\Vert \widehat{h}(\xi)\widehat{f}(\xi-\eta)\Vert_{L^2_{\xi,\eta}}\Vert \varphi(2^{-k-p}\xi_{\mathrm{h}})\varphi(2^{-k_1-q_1}(\xi_3-\eta_3))\widehat{g}(\eta)\Vert_{L^2_{\xi,\eta}}\\
&\lesssim 2^{k+\frac{k_1}{2}}2^{p+\frac{q_1}{2}}\Vert \mathfrak{m}\Vert_{L^\infty}\Vert f\Vert_{L^2}\Vert g\Vert_{L^2}\Vert h\Vert_{L^2}.
\end{split}
\end{equation*}
The claim then follows upon changing variables $\eta\leftrightarrow\xi-\eta$. \end{proof}

\begin{lemma}\label{lem:set_gain2} With notation as in Lemma \ref{lem:set_gain}, consider
\begin{equation}
\begin{aligned}
&\widehat{\bar{\mathcal{Q}}_\mathfrak{m}(f,g)}(\xi):=\int_\eta e^{\pm is\Phi}\chi(\xi,\eta) \varphi(\lambda^{-1}\Phi)\mathfrak{m}(\xi,\eta)\hat{f}(\xi-\eta)\hat{g}(\eta) d\eta,\qquad \lambda>0.
\end{aligned}
\end{equation}
\begin{enumerate}
\item\label{it:vertgain} Assume that on the support of $\chi$ we have $\abs{\partial_{\eta_3}\Phi}\gtrsim L>0$. Then we have that
\begin{equation}
\norm{\bar{\mathcal{Q}}_\mathfrak{m}(f,g)}_{L^2}\lesssim \min\{2^{k_1+p_1},2^{k_2+p_2}\}\cdot (\lambda L^{-1})^{\frac{1}{2}}\cdot\norm{\mathfrak{m}}_{L^\infty_{\xi,\eta}} \norm{P_{k_1,p_1}f}_{L^2}\norm{P_{k_2,p_2}g}_{L^2}.
\end{equation}
\item\label{it:horgain} Assume that on the support of $\chi$ we have $\abs{\nabla_{\eta_{\mathrm{h}}}\Phi}\gtrsim L>0$. Then we have that
\begin{equation}
\norm{\bar{\mathcal{Q}}_\mathfrak{m}(f,g)}_{L^2}\lesssim \min\{2^{k_1+q_1},2^{k_2+q_2}\}^{\frac{1}{2}}\cdot 2^{\frac{k_{2}+p_{2}}{2}}\cdot (\lambda L^{-1})^{\frac{1}{2}}\cdot\norm{\mathfrak{m}}_{L^\infty_{\xi,\eta}} \norm{P_{k_1,p_1,q_1}f}_{L^2}\norm{P_{k_2,p_2,q_2}g}_{L^2}.
\end{equation}
(Analogous statements hold if $\abs{\partial_{\xi_3}\Phi}\gtrsim L>0$ resp.\ $\abs{\nabla_{\xi_{\mathrm{h}}}\Phi}\gtrsim L>0$.)
\end{enumerate}
\end{lemma}
\begin{proof} It suffices to prove \ref{it:vertgain}; part \ref{it:horgain} is similar. Assume without loss of generality that $2^{k_2+p_2}\lesssim 2^{k_1+p_1}$ (else exchange the roles of $\hat{f}$ and $h$ below). We have for any $h\in L^2$ that
\begin{equation}
\begin{aligned}
\abs{\ip{\bar{\mathcal{Q}}_\mathfrak{m}(f,g),h}}&\lesssim \iint_{\mathbb{R}^3}\abs{\mathfrak{m}(\xi,\eta)}\chi(\xi,\eta) \varphi(\lambda^{-1}\Phi)\vert\widehat{f}(\xi-\eta)\widehat{g}(\eta)\vert \vert h(\xi)\vert d\xi d\eta\\
&\lesssim \norm{\mathfrak{m}}_{L^\infty_{\xi,\eta}}\cdot \norm{\widehat{f}(\xi-\eta)\widehat{g}(\eta)}_{L^2_{\xi,\eta}}\cdot\norm{\chi(\xi,\eta) \varphi(\lambda^{-1}\Phi)h(\xi)}_{L^2_{\xi,\eta}},
\end{aligned}
\end{equation}
and the claim follows since
\begin{equation}
\norm{\chi(\xi,\eta) \varphi(\lambda^{-1}\Phi)h(\xi)}_{L^2_{\xi,\eta}}\lesssim \lambda^{\frac{1}{2}}\cdot L^{-\frac{1}{2}} 2^{p_2+k_2}\norm{h}_{L^2},
\end{equation}
where we have used that
\begin{equation}
\sup_\xi \int \chi(\xi,\eta) \varphi(\lambda^{-1}\Phi)d\eta\lesssim \lambda L^{-1}\cdot 2^{2p_2+2k_2}
\end{equation}
by changing variables
(for fixed $\xi$) $\eta\mapsto\zeta:=(\eta_1,\eta_2,\Phi(\xi,\eta))$ with Jacobian $\abs{\det\frac{\partial\eta}{\partial\zeta}}=\abs{\partial_{\eta_3}\Phi}^{-1}\lesssim L^{-1}$. \end{proof}

\subsection{Control of the Fourier transform in $L^\infty$}
We record here that our decay norm $D$ in \eqref{eq:defDnorm} also controls the Fourier transform in $L^\infty$:
\begin{lemma}\label{lem:ControlLinfty} Assume that $f$ is axisymmetric. Then there holds that
\begin{equation*}
\begin{split}
\Vert \widehat{P_{k,p,q}f}\Vert_{L^\infty} &\lesssim 2^{-\frac{3k}{2}}\left[\norm{P_k f}_B+\norm{SP_kf}_B\right]+2^{-\frac{3k}{2}}\left[\norm{P_k f}_X+\norm{SP_k f}_X\right]\lesssim 2^{-\frac{3k}{2}}\norm{f}_D.
\end{split}
\end{equation*}
\end{lemma}
\begin{proof} We recall the notation
\begin{equation}
\varphi_{k,p,q}(\xi)=\varphi(2^{-k}\abs{\xi})\varphi(2^{-p}\sqrt{1-\Lambda^2(\xi)})\varphi(2^{-q}\Lambda(\xi)),
\end{equation}
and assume that $f=P_{k,p,q}f$. Switching to spherical coordinates $(\rho,\theta,\phi)\in\mathbb{R}_+\times[0,2\pi]\times [0,\pi]$ and using that $f$ is axisymmetric (and thus independent of $\theta$), we have that for any $(\rho_0,\phi_0)$ on the support of $\varphi_{k,p,q}\widehat{f}$ there holds
\begin{equation*}
\begin{split}
\varphi_{k,p,q}\widehat{f}(\rho,\phi)&=\varphi_{k,p,q}\widehat{f}(\rho_0,\phi_0)+\int_{\rho_0}^{\rho}\partial_{\rho}(\varphi_{k,p,q}\widehat{f})(s,\phi_0)ds+\int_{\phi_0}^\phi\partial_\phi(\varphi_{k,p,q}\widehat{f})(\rho_0,\alpha)d\alpha\\
&\qquad+\int_{\phi_0}^\phi\int_{\rho_0}^{\rho}\partial_{\rho}\partial_\phi(\varphi_{k,p,q}\widehat{f})(s,\alpha)dsd\alpha.
\end{split}
\end{equation*}
On the one hand, for any choice of $(\rho_0,\phi_0)$ we have that for $\widetilde{\varphi}_{k,p,q}$ with similar support properties as $\varphi_{k,p,q}$ there holds
\begin{equation}
\vert\partial_{\rho}\partial_\phi(\varphi_{k,p,q}\widehat{f})(s,\alpha)\vert \lesssim\widetilde{\varphi}_{k,p,q}\left[2^{-k}2^{-p-q}\vert\widehat{f}(s,\alpha)\vert+2^{-p-q}\vert\partial_{\rho}\widehat{f}(s,\alpha)\vert+2^{-k}\vert\partial_\phi\widehat{f}(s,\alpha)\vert+\vert\partial_{\rho}\partial_\phi\widehat{f}(s,\alpha)\vert\right].
\end{equation}
Now we note that for any $g$ there holds that
\begin{equation}
\begin{aligned}
\abs{\int_{\phi_0}^\phi\int_{\rho_0}^{\rho}\widetilde{\varphi}_{k,p,q}\widehat{g}(s,\alpha)dsd\alpha}^2 &\lesssim \iint_{\substack{\rho\in [2^k,2^{k+1}],\\\sin\phi\in[2^{p-2},2^{p+2}],\\\cos\phi\in[2^{q-2},2^{q+2}]}} d\rho d\phi\cdot 2^{-2k-p}\iint_{\substack{\rho\in [2^k,2^{k+1}],\\\sin\phi\in[2^{p-2},2^{p+2}],\\\cos\phi\in[2^{q-2},2^{q+2}]}}\vert\widehat{g}\vert^2\rho^2\sin\phi d\rho d\phi\\
&\lesssim 2^{-k}2^{q} \norm{g}_{L^2}^2.
\end{aligned}
\end{equation}
Recalling that $S=\rho\partial_{\rho}$ and $\partial_\phi=\Upsilon$, it thus follows that
\begin{equation}
\begin{aligned}
\abs{\int_{\phi_0}^\phi\int_{\rho_0}^{\rho}\partial_{\rho}\partial_\phi(\varphi_{k,p,q}\widehat{f})(s,\alpha)dsd\alpha}&\lesssim 2^{-\frac{3}{2}k}\Big[2^{-p-\frac{q}{2}}\norm{f}_{L^2}+2^{-p-\frac{q}{2}}\norm{Sf}_{L^2}+2^{\frac{q}{2}}\norm{\Upsilon f}_{L^2}+2^{\frac{q}{2}}\norm{\Upsilon Sf}_{L^2}\Big].
\end{aligned}
\end{equation}
We can now average over $\rho_0$ and $\phi_0$ to obtain similarly that
\begin{equation*}
\begin{split}
\left\vert \int_{\rho_0}^{\rho}\partial_{\rho}(\varphi_{k,p,q}\widehat{f})(s,\phi_0)ds\right\vert&\lesssim 2^{-k-p}\left\vert \iint_{s,\phi}\widetilde{\varphi}_{k,p,q}\left[\vert\widehat{f}\vert+\vert\rho\partial_{\rho}\widehat{f}\vert\right](s,\phi)dsd\phi\right\vert\\
& \lesssim 2^{-\frac{3}{2}k-p+\frac{q}{2}}\left[\Vert f\Vert_{L^2}+\norm{S\hat{f}}_{L^2}\right],
\end{split}
\end{equation*}
and that
\begin{equation*}
\begin{split}
\abs{\int_{\phi_0}^\phi\partial_\phi(\varphi_{k,p,q}\widehat{f})(\rho_0,\alpha)d\alpha}&\lesssim 2^{-k} \iint_{s,\phi}\widetilde{\varphi}_{k,p,q}\left[2^{-p-q}\vert \widehat{f}\vert+\vert\partial_\phi\widehat{f}\vert\right](s,\phi)dsd\phi\\
&\lesssim 2^{-\frac{3}{2}k}\left[2^{-p-\frac{q}{2}}\sum_{\vert p-p'\vert+\vert q-q'\vert\le 4}\Vert \widehat{P_{k,p',q'}f}\Vert_{L^2}+\norm{P_{k,p',q'}\Upsilon f}_{L^2}\right],\\
\vert \widehat{f}(\rho_0,\phi_0)\vert &\lesssim 2^{-(k+p)}\iint_{s,\phi}\vert \widehat{f}(s,\phi)\vert dsd\phi\lesssim 2^{-3k/2-p}\sum_{\vert p-p'\vert+\vert q-q'\vert\le 4}\Vert \widehat{P_{k,p',q'}f}\Vert_{L^2}.
\end{split}
\end{equation*}
To conclude the proof it suffices to note that
\begin{equation}
\norm{P_{k,p,q}\Upsilon f}_{L^2}\lesssim \sum_{\ell+p\geq 0}2^{\ell}\norm{P_{k,p}R^{(p)}_{\ell}f}_{L^2}\lesssim \sum_{\ell+p\geq 0}2^{-\beta(\ell+p)}\norm{f}_X\lesssim \norm{f}_X.
\end{equation}
\end{proof}

\subsection{An interpolation}
Here we record two interpolation inequalities that allow us to gain some $X$ norm or decay even for ``many'' vector fields. We introduce the notation
\begin{equation*}
\Vert S^{\le a}f\Vert_{L^r}:=\sup_{0\le \alpha\le a}\Vert S^\alpha f\Vert_{L^r}.
\end{equation*}
Using this, we can prove the following interpolation result:
\begin{lemma}\label{lem:interpol} Let $r\ge 1$, $a,b\ge0$, $K\ge1$ be integers. There holds that
\begin{equation}\label{eq:vf-interpol}
\begin{split}
\Vert S^{\le a+b}f\Vert_{L^{2r}}&\lesssim_b \Vert S^{\le a}f\Vert_{L^{2r}}^\frac{1}{2}\Vert S^{\le a+2b}f\Vert_{L^{2r}}^\frac{1}{2},\\
\Vert S^{\le b}f\Vert_{L^{2r}}&\lesssim_{K,r,b} \Vert f\Vert_{L^{2r}}^{1-\frac{1}{K}}\Vert S^{\le Kb}f\Vert_{L^{2r}}^\frac{1}{K},
\end{split}
\end{equation}
uniformly in $a\ge0$ and $f$. \end{lemma}
\begin{proof} It suffices to assume that $a=0$. The first estimate follows from the second one for $r=1$ and $K=2$. Given $f\ne0$ and $C>\frac{1}{2}\log(2r+1)$, we claim that $g(n):=\log(\Vert S^{\le n}f\Vert_{L^{2r}})+Cn^2$ is a discrete convex function. This follows by integration by parts, since
\begin{equation*}
\begin{split}
\Vert S^{n+1}f\Vert_{L^{2r}}^{2r}&=\int \rho \partial_{\rho}(S^nf)\cdot (S^{n+1}f)^{2r-1}\cdot\rho^2d\rho\\
&=-3\int S^nf\cdot (S^{n+1}f)^{2r-1}\cdot\rho^2d\rho-(2r-1)\int S^nf\cdot (S^{n+1}f)^{2r-2}\cdot S^{n+2}f\cdot\rho^2d\rho\\
&\le \Vert S^nf\Vert_{L^{2r}}\cdot\Vert S^{n+1}f\Vert_{L^{2r}}^{2r-2}\cdot \left[3\Vert S^{n+1}f\Vert_{L^{2r}}+(2r-1)\Vert S^{n+2}f\Vert_{L^{2r}}\right].
\end{split}
\end{equation*}
Since $\Vert S^{n+1}f\Vert_{L^{2r}}> 0$, we can divide and we deduce that
\begin{equation*}
\begin{split}
\Vert S^{\le n+1}f\Vert_{L^{2r}}^{2}&\le 2(r+1)\cdot\Vert S^{\le n}f\Vert_{L^{2r}}\Vert S^{\le n+2}f\Vert_{L^{2r}}.
\end{split}
\end{equation*}
The claimed inequality follows by convexity\footnote{We say that the sequence $a_n=\norm{S^{\leq n}f}_{L^{2r}}$ is convex if and only if its piecewise linear interpolation is convex.}. \end{proof}

To apply this when $f=P_kf$ is a dispersive unknown, it suffices to note that $[S,e^{it\Lambda}]=0$, so that with \eqref{eq:vf-interpol} we have
\begin{equation}\label{eq:interpol_example}
\begin{aligned}
\norm{e^{it\Lambda}S^N f}_{L^{2r}}&\lesssim \norm{e^{it\Lambda}S^{N-3}f}_{L^{2r}}^{1-\frac{1}{K}}\norm{S^{\leq N+3(K-1)}f}_{L^{2r}}^{\frac{1}{K}}\\
&\lesssim \norm{e^{it\Lambda}S^{N-3}f}_{L^{\infty}}^{(1-\frac{1}{r})(1-\frac{1}{K})}\norm{S^{N-3}f}_{L^2}^{\frac{1}{r}(1-\frac{1}{K})}\cdot \norm{S^{\leq N+3(K-1)}f}_{L^{2}}^{\frac{1}{K}}2^{k\frac{3(r-1)}{2r}\frac{1}{K}}.
\end{aligned}
\end{equation}
Let us record that this gives us some decay also for the maximal number of vector fields (in the $X$ or $B$ norms) on our unknowns.
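As an illustrative sanity check (not part of the argument), the inequality $\Vert S^{\le n+1}f\Vert_{L^{2r}}^2\le 2(r+1)\Vert S^{\le n}f\Vert_{L^{2r}}\Vert S^{\le n+2}f\Vert_{L^{2r}}$ can be verified numerically for the radial test function $f(\rho)=e^{-\rho}$, for which $S^jf$ is an explicit polynomial times $e^{-\rho}$. The helper names below are ours, and the $L^{2r}(\rho^2\,d\rho)$ norms are computed by crude quadrature:

```python
import math

def apply_S(p):
    """S = rho d/drho acting on p(rho) e^{-rho}: returns coefficients of q
    with S(p e^{-rho}) = q(rho) e^{-rho}, i.e. q = rho p' - rho p."""
    dp = [k * c for k, c in enumerate(p)][1:]   # coefficients of p'
    rho_dp = [0.0] + dp                         # rho * p'
    rho_p = [0.0] + p                           # rho * p
    n = max(len(rho_dp), len(rho_p))
    rho_dp += [0.0] * (n - len(rho_dp))
    rho_p += [0.0] * (n - len(rho_p))
    return [a - b for a, b in zip(rho_dp, rho_p)]

def norm_L2r(p, r, R=60.0, N=60000):
    """Riemann sum for the L^{2r}(rho^2 drho) norm of p(rho) e^{-rho}."""
    h = R / N
    total = 0.0
    for i in range(1, N + 1):
        rho = i * h
        val = sum(c * rho**k for k, c in enumerate(p)) * math.exp(-rho)
        total += abs(val)**(2 * r) * rho**2 * h
    return total**(1.0 / (2 * r))

def check(r, nmax=3):
    # a_n = max_{j <= n} ||S^j f||_{L^{2r}} for f(rho) = exp(-rho)
    polys = [[1.0]]
    for _ in range(nmax + 2):
        polys.append(apply_S(polys[-1]))
    norms = [norm_L2r(p, r) for p in polys]
    a = [max(norms[: j + 1]) for j in range(len(norms))]
    return all(a[n + 1]**2 <= 2 * (r + 1) * a[n] * a[n + 2]
               for n in range(nmax))

assert check(1) and check(2)
```

For $r=1$ the exact values are available ($\Vert f\Vert_{L^2}^2=1/4$, $\Vert Sf\Vert_{L^2}^2=3/4$, $\Vert S^2f\Vert_{L^2}^2=21/8$), and the inequality holds with considerable slack, so the crude quadrature is harmless.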
\begin{corollary}\label{cor:extrapol_decay} Under the bootstrap assumptions \eqref{eq:btstrap-assump}, if $f$ is a dispersive unknown of \eqref{eq:EC_disp} and the number of vector fields $M>0$ in \eqref{eq:id} is sufficiently large, then for some $0<\kappa\ll\beta$ there holds
\begin{equation}
\norm{P_{k}e^{it\Lambda} S^b f}_{L^\infty}\lesssim 2^{\frac{3k}{2}-3k^+}t^{-1+\kappa}\varepsilon_1,\qquad 0\leq b\leq N.
\end{equation}
\end{corollary}
\begin{proof} For $b\leq N-3$ the faster decay rate $t^{-1}$ follows from Proposition \ref{prop:decay}, whereas for $N-2\leq b\leq N$ this follows from \eqref{eq:interpol_example} upon choosing $K\gg\kappa^{-1}$ and $r\gg 1$ sufficiently large. \end{proof}

\subsection{Symbol bounds}\label{sec:symbols}
In this section we give the relevant symbol estimates for the multipliers we need. We recall the notations \eqref{eq:loc_def3}, and for a multiplier $m\in L^1_{loc}(\mathbb{R}^3\times\mathbb{R}^3)$ we let
\begin{equation}
\begin{aligned}
\norm{m}_{\widetilde{\mathcal{W}}_{\mathrm{h}}}&:=\sup_{k,q,\,k_i,q_i,i=1,2}\norm{\mathcal{F}(\chi_{\mathrm{h}} m)}_{L^1(\mathbb{R}^3\times\mathbb{R}^3)},\\
\norm{m}_{\widetilde{\mathcal{W}}}&:=\sup_{k,p,q,\,k_i,p_i,q_i,i=1,2}\norm{\mathcal{F}(\chi m)}_{L^1(\mathbb{R}^3\times\mathbb{R}^3)}.
\end{aligned}
\end{equation}
We then have H\"older's inequality
\begin{equation}\label{ProdRule2}
\norm{\mathcal{Q}_{m\chi_{\mathrm{h}}}(f,g)}_{L^r}\lesssim \norm{m}_{\widetilde{\mathcal{W}}_{\mathrm{h}}}\norm{f}_{L^p}\norm{g}_{L^q},\quad \norm{\mathcal{Q}_{m\chi}(f,g)}_{L^r}\lesssim \norm{m}_{\widetilde{\mathcal{W}}}\norm{f}_{L^p}\norm{g}_{L^q},\qquad \frac{1}{r}=\frac{1}{p}+\frac{1}{q},
\end{equation}
and the algebra property
\begin{equation}\label{eq:alg_prop}
\norm{m_1\cdot m_2}_{\widetilde{\mathcal{W}}}\lesssim \norm{m_1}_{\widetilde{\mathcal{W}}}\norm{m_2}_{\widetilde{\mathcal{W}}}.
\end{equation}
We have the following symbol bounds:
\begin{lemma}\label{lem:phasesymb_bd} Let $\psi_>:=1-\psi$. Then
\begin{equation}\label{eq:phasesymb_bd}
\begin{aligned}
\norm{\Phi^{-1}\psi_>(2^{-q_{\max}}\Phi)}_{\widetilde{\mathcal{W}}}\lesssim 2^{-q_{\max}} \qquad\text{and} \qquad\norm{\Phi^{-1}\psi_>(\Phi)}_{\widetilde{\mathcal{W}}_{\mathrm{h}}}\lesssim 1.
\end{aligned}
\end{equation}
\end{lemma}
\begin{proof} The first inequality was established in \cite[Lemma A.15]{rotE}, while the second one follows from a direct adaptation of that proof. \end{proof}

\bibliographystyle{siam}
\bibliography{refs}
\end{document}
\begin{document} \begin{center}\textbf{\large{CURVE 2017\\ Project description}} \end{center} \section{Overview} The major theme of the conference is the study of representation varieties and geometric structures with the help of effective computational tools. The conference will explore abstract concepts (character varieties, bounded cohomology, cluster algebras) via concrete experimentation (coordinates, explicit computations, quiver mutations, and explicit constructions of geometric structures). It is the second conference organized by the CURVE project, \url{curve.unhyperbolic.org}. The first conference, CURVE 2015, is described in Section~\ref{CURVE2015}. \section{Scientific context} The pioneering work of Fock and Goncharov~\cite{FockGoncharov} on higher Teichm\"uller spaces made it clear that the character variety $X_G(M)$ of conjugacy classes of representations of the fundamental group of a manifold $M$ into a Lie group $G$ can be studied via configuration spaces of tuples of flags in $G$. These configuration spaces have very interesting structures, and are naturally endowed with a system of explicit coordinates. These coordinates have been used to create a database of representation varieties of 3-manifolds~\cite{CURVE,censusfkr} and to construct new geometric structures~\cite{DerauxFalbel,FalbelWang}. Explicit formulas for invariants (volume, Chern-Simons invariant, Dehn invariant) have been derived in terms of the coordinates~\cite{GaroufalidisThurstonZickert,ZickertEnhancedPtolemy}, and volume bounds have been derived via bounded cohomology~\cite{BBIn}. Computer software (the Ptolemy module in SnapPy~\cite{SnapPy}) has been created to manipulate the coordinates and perform explicit computations. The conference will .... Add another paragraph. The conference is unique because ... Identify software challenges. Invite programmers. Matthias Goerner, Benjamin Burton, Stephen Gilles.
The philosophy of the conference is to put equal emphasis on abstraction and concrete experiment. The conference will foster an environment where theoretical advances are inspired by experiment, a method which has historically been important in all areas of science. This means that graduate students and even undergraduate students can perform research without fully understanding the theory. Say why a second CURVE conference is important, and how it differs from the first. \begin{comment} Special coordinates arising from triangulations became ubiquitous in computations in the last decade. This area of research has gained a lot of momentum recently due to the following major advances: \begin{itemize} \item The development of special coordinates on higher Teichm\"{u}ller spaces of a triangulated surface (Fock and Goncharov~\cite{FockGoncharov}). \item The generalization of these coordinates to 3-manifolds (independently by Bergeron-Falbel-Guilloux~\cite{BergeronFalbelGuilloux}, Garoufalidis-Thurston-Zickert~\cite{GaroufalidisThurstonZickert}, and Dimofte-Gabella-Goncharov~\cite{DimofteGabellaGoncharov}). The coordinates allow for \emph{exact} computations of representation varieties, and give simple formulas for classical and quantum invariants. \item The discovery of new geometric structures (spherical CR structures and flag structures) on the figure 8 knot and other simple 3-manifolds (Deraux-Falbel~\cite{DerauxFalbel} and Falbel-Wang~\cite{FalbelWang}). \item The development of the \emph{Ptolemy module}, software for computing with the coordinates (written by Matthias Goerner). The Ptolemy module is part of the computer software SnapPy~\cite{SnapPy}. \item The development of a census of boundary-unipotent representations, available at \url{curve.unhyperbolic.org} (see also Falbel-Koseleff-Rouillier~\cite{censusfkr}).
\end{itemize} The possibility of doing exact computations has opened up new interactions between low dimensional topology and computer algebra, two topics which have previously had little communication. The conference follows the first conference CURVE 2015 \emph{(Computing and Understanding Representation Varieties Efficiently)}, \url{curve.unhyperbolic.org}, which was held at the Institut de Math\'ematiques de Jussieu in Paris, France. A major goal of the conference is to make available to the community the recent developments in the field. Introductory and advanced minicourses will be held each day of the conference. \end{comment} \section{Details of the conference} \textbf{Title:} CURVE 2017 \\ \textbf{Time and place:} August 7-11, 2017, at the University of Maryland, College Park \\ \textbf{Organizers:} Martin Deraux, Elisha Falbel, Antonin Guilloux, Pierre-Vincent Koseleff, Fabrice Rouillier, Christian Zickert \\ \textbf{Conference website:} \url{http://sgt.imj-prg.fr/www/curve2017/} \\ \textbf{Main themes:} Character varieties, explicit coordinates, geometric structures, quiver mutations and cluster algebras, bounded cohomology, effective computation.
\\ \textbf{Speakers:} The following people have agreed (some tentatively) to speak:\\ \hspace{-0.7cm}\begin{minipage}[t]{0.48\textwidth} \begin{itemize} \item Michelle Bucher (Gen\`{e}ve) \item Tudor Dimofte (Davis) \item Stavros Garoufalidis (Georgia Tech) \item Alexander Goncharov (Yale) \item Alessandra Iozzi (Z\"urich) \item Dylan Thurston (Indiana) \end{itemize} \end{minipage}\hspace{0.3cm} \begin{minipage}[t]{0.5\textwidth} \begin{itemize} \item Nathan Dunfield (UIUC) \item Matthias Goerner (Pixar) \item Antonin Guilloux (Jussieu) \item Walter Neumann (Columbia) \item Morwen Thistlethwaite (Tennessee) \end{itemize} \end{minipage} \subsection{Structure of the talks}\label{structure} The talks are divided into research talks and minicourses.\\ \noindent\textbf{Research talks:} We expect the research talks to focus on bounded cohomology (Iozzi, Bucher), geometric structures (TBA), advanced properties of the coordinates (Goncharov), and on the relationship to quantum topology (Garoufalidis, Dimofte). \noindent\textbf{Minicourses:} The conference will feature two minicourses: one on cluster algebras (Thurston) and one on bounded cohomology (Bucher). The minicourses will be introductory and are aimed primarily at graduate students and postdocs. \subsection{Plans for recruiting participants} The organizers plan to invite as many students and postdocs as funding permits. The previous conference, CURVE 2015, had over 30 young graduate students and postdocs, 12 of whom were based in the US. The PI is currently supervising 3 graduate students, who will all participate. All senior participants will be asked to advertise the conference to their students. The previous conference had two female speakers, and approximately 15\% of the participants were women. There were 17 countries represented. To ensure broader participation in CURVE 2017 we will add gender and ethnicity to the registration page. In this way we will be able to track this data for future events.
\subsection{Plans for disseminating results} All theoretical results as well as software, databases and computations will be made publicly available at \url{curve.unhyperbolic.org}. \section{Broader impacts} The CURVE database of representations has already led to many interesting observations, including new geometric structures, interesting relationships among volumes of representations (many examples discovered by undergraduate students~\cite{GillesHuston}), representation varieties that have interesting algebraic geometry features such as not being flat or reduced over $\mathbb Z$ (concrete examples make these concepts a lot easier to grasp than reading the definitions in a textbook), and examples of non-rational curves of boundary-unipotent representations. These observations (as well as those yet to be discovered) will undoubtedly inspire research by professional mathematicians as well as graduate students and undergraduate students. Ask users of software what features are most needed. Identify most significant software challenges. Will also facilitate communication between CURVE members who don't often have the opportunity to meet face to face. Higher dimensions. Regina support. \section{Results from prior NSF funding} \subsection{Conference grants}\label{CURVE2015} The conference CURVE 2015 received a supplement of \$25,000 from the NSF supported GEAR network. The remainder of the budget (total of \$68,151.25) was funded by ANR (The French National Research Agency). The ANR grant has now expired. The NSF funding was used to support US based participants of the conference who did not have other funding sources. \\ \noindent\textbf{Intellectual merit:} The conference took place at the Institut de Math\'ematiques de Jussieu in Paris, June 22-26, 2015. There were over 100 participants including over 30 graduate students and postdocs. There were 20 speakers including leading experts such as Bonahon, Dunfield, Fock, Goncharov, Kapovich and many others.
\\ \textbf{Broader impacts:} Papers on CURVE themes coauthored by CURVE 2015 participants appearing since the CURVE 2015 conference include~\cite{Acosta,FalbelGuilloux,FalbelDuality,CharFig8,FGLieGroups}. The conference inspired several new collaborations (for example a collaboration between Antonin Guilloux, Inkang Kim and Giulia Sarfatti on quaternionic deformation of representations). One graduate student spoke at the conference. \subsection{PI grants} The PI was most recently supported by \begin{center} \emph{Representations of 3-manifold groups}\\ DMS-1309088, 08/15-2013 to 07/23-2017, \$157,992.00 \end{center} \noindent\textbf{Intellectual merit:} The research funded by this grant led to six publications (Duke~\cite{GaroufalidisThurstonZickert}, Quantum Topol.~\cite{GaroufalidisZickert}, Crelle~\cite{ZickertAlgK}, Math.~Z.~\cite{ZickertEnhancedPtolemy}, AGT~\cite{GaroufalidisGoernerZickert,PtolemyField}), and two preprints (\!\cite{InvariantPtolemy,FGLieGroups}). The PI presented this research in 8 conference talks, 11 department seminars, and 1 three lecture minicourse. \\ \textbf{Broader impacts:} The PI supervised three graduate students (ongoing), conducted one REU project (one student is now a grad student of the PI), participated in six community outreach events, gave two presentations at a local elementary school, and one minicourse for high school teachers. \end{document}
\begin{document} \title{Efficient Estimation of Regularization Parameters via Downsampling and the Singular Value Expansion} \author{Rosemary A. Renaut$^1$, Michael Horst$^2$, Yang Wang$^3$, Douglas Cochran$^4$ and Jakob Hansen$^5$} \date{\today} \address{$^1$School of Mathematical and Statistical Sciences, Arizona State University, P.O. Box 871804, Tempe, AZ 85287-1804, \\ $^2$ Department of Mathematics, The Ohio State University, 100 Math Tower, 231 West 18th Avenue, Columbus, OH 43210-1174, \\ $^3$ Department of Mathematics, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong,\\ $^4$ School of Electrical, Computer and Energy Engineering, Ira A. Fulton Schools of Engineering, Arizona State University, P.O. Box 875706, Tempe, AZ 85287-5706,\\ $^5$David Rittenhouse Laboratory, University of Pennsylvania, 209 S. 33rd St., Philadelphia, PA 19104,} \eads{\mailto{[email protected]}, \mailto{[email protected]}, \mailto{[email protected]}, \mailto{[email protected]}, \mailto{[email protected]}} \begin{abstract} The solution, $\bo x$, of the linear system of equations $A\bo x\approx \bo b$ arising from the discretization of an ill-posed integral equation with a square integrable kernel $H(s,t)$ is considered. The Tikhonov regularized solution $\bo x(\lambda)$ is found as the minimizer of $J(\bo x)= \|A \bo x -\bo b\|_2^2 + \lambda^2 \|L \bo x\|_2^2$. $\bo x(\lambda)$ depends on the regularization parameter $\lambda$, which trades off the data fidelity against the smoothing norm determined by $L$. Here we consider the case where $L$ is diagonal and invertible, and employ the Galerkin method to provide the relationship between the singular value expansion and the singular value decomposition for square integrable kernels. The resulting approximation of the integral equation permits examination of the properties of the regularized solution $\bo x(\lambda)$ independent of the sample size of the data.
We prove that an estimate of the regularization parameter can be obtained by consistently downsampling the data and the system matrix, leading to solutions of coarse- to fine-grained resolution. Hence, the estimate of $\lambda$ for a large problem may be found by downsampling to a smaller problem, or to a set of smaller problems, effectively moving the costly estimate of the regularization parameter to the coarse representation of the problem. Moreover, the full singular value decomposition for the fine-scale system is replaced by a number of dominant terms, determined from the coarse-resolution system, again reducing the computational cost. Numerical results illustrate the theory and demonstrate the practicality of the approach for regularization parameter estimation using generalized cross validation, unbiased predictive risk estimation and the discrepancy principle, applied to both the system of equations and the augmented system of equations. \end{abstract} \noindent\textbf{Keywords:} Singular value expansion, Singular value decomposition, Ill-posed inverse problem, Tikhonov regularization, Regularization parameter estimation \ams{65F22, 45B05} \maketitle \section{Introduction} We consider numerical solutions of the Fredholm integral equation of the first kind \begin{align} \int_{\Omega_t} H(s,t)f(t) \, dt = g(s), \quad s \in \Omega_s, \label{eq:fred} \end{align} for real functions $f$ and $g$ defined on domains $\Omega_t$ and $\Omega_s$, respectively. The source function $f(t)$ is unknown, $H(s,t)$ is a known square-integrable kernel, i.e. $\|H\|=(\int_{\Omega_s}\int_{\Omega_t} H^2(s,t) \, dt \, ds )^{1/2} < \infty$, and the square integrable $g(s)$ is approximated from sampled data. For simplicity we assume that $H$ is non-degenerate. The theoretical analysis for the existence of solutions of the integral equation is well studied, e.g. \cite{baker,smithies}, and uses the singular value expansion (SVE) for the kernel $H$.
For the inner product defined by \begin{align*} \lra{f,\psi} =: \int_{\Omega_t} {f(t)}\psi(t) \, dt, \quad \|f\|= \lra{f,f} ^{1/2}, \end{align*} the SVE for $H$ is given by the mean convergent expansion \begin{align} \label{eq:sve} H(s,t) = \sum_{i=1}^\infty\mu_iu_i(s) {v}_i(t). \end{align} The sets of orthonormal functions $\{u_i(s)\}$ and $\{v_i(t)\}$ are complete and comprise the left and right singular functions of the kernel, respectively, \begin{align*} \lra{H^*(s,t),u_i(s)} &= \int_{\Omega_s} H(s,t) u_i(s) \, ds = \mu_iv_i(t),\\ \lra{H(s,t),v_i(t)} &= \int_{\Omega_t} H(s,t)v_i(t)\, dt = \mu_iu_i(s).\end{align*} The $\mu_i$, ordered such that $\mu_1\ge\mu_2\ge\ldots>0$, where positivity follows from the assumption of non-degeneracy, are the singular values of $H$. By completeness of the singular functions and the square integrability of the kernel \begin{align*} \norm{H}^2 &= \sum\limits_{i=1}^\infty \mu_i^2<\infty. \end{align*} Applying the SVE in \eqref{eq:fred} yields the mean convergent expansion for $f$ \begin{align}\label{contsoln} f(t) = \sum_{i=1}^\infty \frac{\lra{u_i(s),g}}{\mu_i} v_i(t) =:\sum_{i=1}^\infty \frac{\hat{g}_i}{\mu_i} v_i(t), \end{align} which exists and is square integrable if and only if \begin{align*} \sum_{i=1}^{\infty} \left(\frac{\hat{g}_i}{\mu_i}\right)^2 < \infty, \end{align*} cf. \cite{smithies}. It is immediate that for square integrability of $f$ the coefficients $\hat{g}_i$ must eventually decay faster than the singular values. This requirement is known as the {\em continuous Picard condition} \cite{Hansen}. If $g$ is error contaminated such that ultimately the $\hat{g}_i$ roughly stagnate at the error level, the Picard condition will be violated. In particular, the determination of $f$ from $g$ is an ill-posed problem: even if a unique solution exists, $f$ will be sensitive to errors in $g$. Additional constraint, or stabilization, conditions are needed in order to estimate a suitable square integrable solution, e.g.
by truncating the solution and/or regularizing by filtering, also known as Tikhonov regularization. Applying the Galerkin method for the numerical solution of the integral equation provides a discretization for which the singular value decomposition (SVD) of the underlying system matrix is closely related to the SVE of the kernel \cite{hansensvepaper}. The properties of the discrete solution are closely related to those of the continuous solution, and hence regularization of the discrete solution effectively regularizes the continuous solution for a sufficiently fine resolution. The regularization of the discrete system is the topic of this study. Vogel \cite[Chapter 7]{vogel:02} analyzed methods for selecting the regularization parameter, denoted by $\lambda$, under the assumption of the existence of a singular system for the partially discrete operator mapping from the infinite dimensional to the discrete system of size $n$. With the assumption of discrete data and stochastic noise, the convergence of the error in the solution as an approximation to the true infinite dimensional solution, and the convergence of the predictive error for the solution, defined as how well the given solution predicts the true data, were examined. The results employ the singular system to express the discrete solution, and hence also provide estimates for the error, the predictive error, and the residual in terms of this system. The analysis explicitly includes the noise terms in the data. Assuming algebraic decay of the singular values and the coefficients of the true solution, at given algebraic rates, and assuming that the orthogonal projection of the true solution onto the null space of the partially discrete operator vanishes as $n$ increases, one obtains asymptotic rates of convergence for the expected error norms, and predictive errors, for the truncated singular value decomposition (TSVD) solution to the discrete problem, i.e. regularizing by truncation, as well as for Tikhonov regularization. This formulation permits comparison of the regularization methods, in terms of the convergence of the estimated $\lambda$ with $n$ and the assumed rate parameters. In contrast, here we exploit the SVE-SVD relationship, with the assumption of square integrability of $H(s,t)$, $f(t)$ and $g(s)$, to derive convergence results for $\lambda$ with $n$, but without assuming specific rates of convergence of $\lambda$. Instead of the TSVD regularization we consider the filtered TSVD solution, dependent on an effective numerical rank of the system. As in Vogel \cite{vogel:02}, our results also consider the impact of noise in the data. Although the connection between the regularized solutions of the continuous and discrete approximations is well-accepted, it appears that these convergence results have not been extensively exploited in practice in the context of efficient estimation of $\lambda$. Here we use the potentially nested set of coarse to fine discretizations of \eqref{eq:fred} by consistently defined approximations to obtain $\lambda$ for a fine resolution solution, without applying estimation of $\lambda$ on the fine scale. This approach provides a cost-effective mechanism for estimating $\lambda$ in the context of any Tikhonov, or iterated Tikhonov \cite{SB,WoRo:07,Zhd}, regularization scheme for the solution of \eqref{eq:fred}, including in the presence of noise, assuming uniform sampling of the data. The number of significant components in the SVD basis for the fine scale problem is determined from the coarse scale SVD basis, removing the high cost of forming the complete SVD for the high resolution system. Then, the time-consuming part of estimating $\lambda$ is carried out only on the coarse scale representation of the problem. An outline of the paper now follows. In order to use the approximate singular value expansion \cite{hansensvepaper} we apply the Galerkin method for solving \eqref{eq:fred}.
This is briefly reviewed in section~\ref{sec:Galerkin}, and leads to the discretization formulae for the regularized discrete solution dependent on $\lambda$ and on the sampling level $n$. Techniques for estimating $\lambda$ are reviewed in section~\ref{sec:regest}. Many of these techniques require that some information about the noise contamination of the measurements is available, which leads to weighting, or left preconditioning, of the system matrix, as compared to right preconditioning due to the invertible regularization operator $L$. We prove that the resulting left and right preconditioned kernel is still square integrable; section~\ref{mod kernel}. The convergence of $\lambda$ across scales is established in section~\ref{sec:lambdaconv}, based on theoretical results for the numerical rank in section~\ref{theory rank}. Practical implementation is described in section~\ref{sec:practical}, including the downsampling in section~\ref{sec:downsample}, estimating the numerical rank in section~\ref{sec:numericalrank} and the algorithm for parameter estimation in the presence of noise in section~\ref{sec:regparam}. Numerical illustrations verifying the theoretical convergence results are provided in section~\ref{simulations}. In section~\ref{results} simulations for a smooth and a piecewise constant source $f(t)$, and an example with a slowly decaying spectrum, demonstrate the practicality of the technique. Future work and conclusions are discussed in section~\ref{conclusions}. \subsection{Notation} We first review the notation that is adopted throughout the paper. All variables in boldface, $\bo {x}$, $\bo {b}$, etc., refer to vectors, with scalar entries, e.g. $x_i$, distinguished from columns of a matrix $U$ given by $\bo u_i$. The proofs require the use of multiple resolutions for discretizing the functions. Any variable with a superscript $^{(n)}$ relates to that variable for a discretization with $n$ points, or an expansion with $n$ terms.
In general $\lambda$ is a regularization parameter, $A$ is a system matrix derived from the kernel $H$, the unknown source function is $f(t)$, samples of the function $g(s)$ are assumed, and for any function $f$, $\hat{f}_i$ indicates the $i^{th}$ Galerkin coefficient of the function. For an arbitrary vector $\bo y$ we denote $\bo y \sim \mathcal{N}(\bo y_0, C)$ to indicate that $\bo y$ is a random vector following a multivariate normal distribution with expected value $E(\bo y) = \bo y_0$ and covariance matrix $C$. $L^2(\Omega)$ denotes the linear space of square integrable functions on $\Omega$. The weighted norm is given by $\| \bo x\|_W^2=\bo x^T W \bo x$. For scalar $k$ the power $\bo x^k$ indicates the componentwise power for each component of the vector $\bo x$. We also reserve $\zeta^2$ for the variance of white noise data. \section{Background Material}\label{sec:Galerkin} \subsection{Approximating the Singular Value Expansion} Suppose $\{\phi_j(t)\}_{j=1}^\infty$ and $\{\psi_i(s)\}_{i=1}^\infty$ are orthonormal bases (ONB) for $L^2\paren{\Omega_t}$ and $L^2\paren{\Omega_s}$, respectively, such that \begin{align}\label{expansion} f(t) = \sum_{i=1}^\infty \lra{\phi_i(t), f(t)} \phi_i(t) \quad \mathrm{and} \quad g(s) = \sum_{i=1}^\infty \lra{\psi_i(s), g(s)} \psi_i(s). \end{align} The Galerkin method as a general discretization scheme for computing eigensystems was introduced in \cite[Section 3.8]{baker}, and extended as described in Algorithm~\ref{MM} for computing the SVE in \cite{hansensvepaper}. \begin{algorithm} \caption{Galerkin Method for Approximating the SVE\label{MM} \cite{hansensvepaper}} \begin{algorithmic}[1] \Require ONB $\{\phi_j(t)\}_{j=1}^n$ and $\{\psi_i(s)\}_{i=1}^n$, and kernel function $H(s,t)$. \State Calculate the kernel matrix $A^{(n)}$ with entries $(a^{(n)}_{ij})$ \begin{align}\label{kernel matrix} a^{(n)}_{ij} =: \lra{ \psi_i(s), \lra{H(s, t), \phi_j(t)}} = \int_{\Omega_s}\int_{\Omega_t} \psi_i(s) H(s,t) \phi_j(t) \, dt \,ds,\quad i,j =1:n.
\end{align} \State Compute the SVD, $\sub{A} =\sub{U} \sub{\Sigma} (\sub{V})^T$, where $\sub{U}$ and $\sub{V}$ are orthogonal, \begin{align*} \sub{U}= (u^{(n)}_{ij}), \,\,\sub{V}=(v^{(n)}_{ij}),\,\, \sub{\Sigma} =\mathrm{diag}(\sub{\sigma_1}, \dots, \sub{\sigma_n}), \,\, \sub{\sigma_1} \ge \dots \ge \sub{\sigma_n} >0. \end{align*} \State Define \begin{align}\label{singvec} \sub{\tilde{u}_i}(s) = \sum_{j=1}^n\sub{u_{ji}} \psi_j(s), \quad \sub{\tilde{v}_i}(t) = \sum_{j=1}^n \sub{v_{ji}}\phi_j(t), \quad i=1:n. \end{align} \end{algorithmic} \end{algorithm} We give without proof the key results, cf. \cite[Theorems 1, 2, 4, 5]{hansensvepaper}, which relate the SVE and SVD singular systems. In particular, the singular values $\mu_i$, and singular functions $v_i(t)$, $u_i(s)$ for \eqref{eq:sve} are approximated by $\sub{\sigma_i}$, $\sub{\tilde{v}_i}(t) $ and $\sub{\tilde{u}_i}(s) $, respectively, \cite{hansensvepaper}. \begin{theorem}[SVE-SVD, \cite{hansensvepaper}]\label{SVE-SVD} Suppose \begin{align}\label{defDelta} (\sub{\Delta})^2=:\|H\|^2 - \|\sub{A}\|_F^2.\end{align} Then the following hold for all $i$ and $n$, independent of the convergence of $\sub{\Delta}$ to $0$: \begin{enumerate} \item{\label{thm1}} ${\sub{\sigma_i}} \le {\sigma^{(n+1)}_i} \le \mu_i$. \item{\label{thm2}} $0 \le \mu_i - {\sub{\sigma_i}} \le {\sub{\Delta}}$. \item ${\sub{\sigma_i}} \le \mu_i \le \sqrt{{(\sub{\sigma_i})^2}+{(\sub{\Delta})^2}}$. \item{\label{thm4}} If $\mu_i\ne\mu_{i+1}$, then $0 \le \max\{\|{u_i-{\sub{\tilde u_i}}}\|, \|{v_i-{\sub{\tilde v_i}}}\|\} \le \sqrt{\frac{2 \, {\sub{\Delta}}}{\mu_i-\mu_{i+1}}}$. 
\end{enumerate} \end{theorem} Taking the inner product in \eqref{eq:fred} defines the discrete system of equations \begin{align}\label{discrete system} \sub{\bo b} = \sub{A} \sub{\bo x}, \quad \text{with} \quad \sub{b_i}=\lra{\sub{\psi_i}(s), g(s)}, \quad \text{and}\quad \sub{x_i}=\lra{\sub{\phi_i}(t), f(t)}, \end{align} with the truncated approximation to \eqref{expansion} \begin{align*} f(t) \approx \sub{f}(t)=: \sum_{i=1}^n \sub{x_i} \phi_i(t) \quad \mathrm{and} \quad g(s) \approx \sub{g}(s)=: \sum_{i=1}^n \sub{b_i} \psi_i(s). \end{align*} Solving \eqref{discrete system} for $\sub{\bo x}$ using the SVD for $\sub{A}$ yields the approximation \cite[(26)]{hansensvepaper} for \eqref{contsoln} \begin{align*} \sub{f}(t)&= \sum_{j=1}^n\left(\sum_{i=1}^n \frac{(\sub{\bo u_i})^T \sub{\bo b}}{\sub{\sigma_i}} \sub{ v_{ji}}\right) \phi_j(t) \\ &= \sum_{i=1}^n \frac{(\sub{\bo u_i})^T \sub{\bo b}}{\sub{\sigma_i}} \left(\sum_{j=1}^n\sub{ v_{ji}} \phi_j(t) \right) = \sum_{i=1}^n \frac{(\sub{\bo u_i})^T \sub{\bo b}}{\sub{\sigma_i}}\sub{\tilde{v}_i}(t)\\ &= \sum_{i=1}^n \frac{\sub{\beta_i}}{\sub{\sigma_i}} \sub{\tilde{v}_i}(t). \end{align*} Here \begin{align} \label{betacoeff} \sub{\beta_i}=(\sub{\bo u_i})^T \sub{\bo b} = \lra{\sub{\tilde{u}_i}(s), g(s)} \approx \lra{u_i(s), g(s)} = \hat{g}_i,\end{align} which follows from Theorem~\ref{SVE-SVD} and the definitions in \eqref{singvec}. Therefore, regularizing through the introduction of filter factors $\sub{q_i}$, $i=1:n$, \begin{align}\label{solnfilter} \sub{f_{\text{Reg}}}(t)=: \sum_{i=1}^n \sub{q_i} \frac{\sub{\beta_i}}{\sub{\sigma_i}} \sub{\tilde{v}_i}(t), \end{align} yields an approximate regularized solution of the continuous function \cite{hansensvepaper}. Practically, we suppose $\sub{q_i}=\sub{q}(\sub{\lambda}, \sub{\sigma_i})$ for regularization parameter $\sub{\lambda}$ and desire to find a suitable choice for $\sub{\lambda}$ such that solution $\sub{f_{\text{Reg}}}(t)$ is square integrable. 
Equivalently, from \eqref{solnfilter}, square integrability translates to the requirement that for the vector $\bo k$ with entries $k_i=\sub{q_i} \sub{\beta_i}/\sub{\sigma_i}$, $i=1\dots n$, $\|\bo k\|_2^2<\infty$. \subsection{Regularization parameter estimation}\label{sec:regest} Without loss of generality we drop superscripts on the variables, and suppose for the moment that the system matrix $A$ in \eqref{discrete system} is of size $n\times n$, and that $g(s)$ is sampled at $n$ points. We consider the solution of the Tikhonov regularized problem \begin{align}\label{regform} \bo{ x}_{\text{Reg}}(\lambda)= \argmin_{\bo x} \{ \|\bo R(\lambda)\|_2^2\}=: \argmin_{\bo x} \left\{\|{ A}\bo{x}-\bo{ b}\|_2^2+\lambda^2\|{\bo{x}}\|_2^2\right\}, \end{align} where here $\bo R$ defines the residual for the approximate augmented system, $[A; \lambda I] \bo{x} \approx [\bo{b};\bo{0}]$. Consistent with the topic of this paper, we assume that the SVD is used to write the filtered solution \begin{align}\label{regsoln} \bo{ {x}}_{\text{Reg}}(\lambda)= \sum_{i=1}^n \frac{\sigma_i^2}{\lambda^2+\sigma_i^2}\frac{\bo u_i^T \bo b}{\sigma_i} \bo v_i = \sum_{i=1}^n q(\lambda,\sigma_i)\frac{\beta_i}{\sigma_i} \bo v_i, \quad q(\lambda,\sigma)=\frac{\sigma^2}{\lambda^2+\sigma^2 }, \end{align} and we make the observation that $\| \bo{ {x}}_{\text{Reg}}(\lambda)\|_2^2= \|\bo k\|_2^2$. In the subsequent discussion of the methods for finding the regularization parameter we also use the SVD, in line with the overall theme of this paper, which focuses on the use of the SVD rather than on other techniques for finding the solution. Many methods exist for determining a regularization parameter $\lambda$ which will yield an acceptable solution with respect to some measure defining \textit{acceptable}.
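As a purely illustrative sketch (not the paper's implementation; the matrix and data below are hypothetical), the filtered solution \eqref{regsoln} can be evaluated directly from the SVD in a few lines of Python:

```python
import numpy as np

def tikhonov_filtered_solution(A, b, lam):
    """Filtered SVD solution sum_i q(lam, sigma_i) (u_i^T b / sigma_i) v_i,
    with Tikhonov filter factors q(lam, sigma) = sigma^2 / (lam^2 + sigma^2)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b                      # coefficients beta_i = u_i^T b
    q = s**2 / (lam**2 + s**2)          # Tikhonov filter factors
    return Vt.T @ (q * beta / s)

# Hypothetical small system: the SVD form agrees with solving the
# normal equations (A^T A + lam^2 I) x = A^T b directly.
A = np.array([[2.0, 1.0], [1.0, 3.0], [0.0, 1.0]])
b = np.array([1.0, 2.0, 0.5])
lam = 0.5
x_svd = tikhonov_filtered_solution(A, b, lam)
x_ne = np.linalg.solve(A.T @ A + lam**2 * np.eye(2), A.T @ b)
```

Note that `np.sum((q * beta / s)**2)` evaluated inside the function is exactly $\|\bo k\|_2^2$, consistent with the observation $\| \bo x_{\text{Reg}}(\lambda)\|_2^2= \|\bo k\|_2^2$ above.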
Amongst others, these include the Morozov discrepancy principle (MDP) \cite{morozov}, the unbiased predictive risk estimator (UPRE) \cite{vogel:02}, generalized cross validation (GCV) \cite{golub1979generalized} and the $\chi^2$ principle applied for the augmented system (ADP) \cite{mere:09}. The MDP, UPRE and ADP all assume prior statistical information on the noise in the measurements $\bo b$. For simplicity in the presentation of the relevant functionals we assume that $\bo b_{\text{obs}}=\bo b + \bo e$, where the noise is white with variance $\zeta^2$, $\bo e \sim \mathcal{N}(\bo 0, \zeta^2 I_n)$. Then the parameter $\lambda$ is found as the minimizer of a nonlinear functional for both the UPRE and GCV, but as the root of a monotonically increasing function for both the MDP and ADP. Defining the residual $\bo r(\lambda) = A \bo x(\lambda)- \bo b$ and the influence matrix $A(\lambda) = A(A^TA+\lambda^2I)^{-1}A^T$, the relevant functionals are given by \begin{align}\label{MDP} \textrm{MDP}:\quad &D(\lambda) = \|\bo r(\lambda)\|_2^2= \sum_{i=1}^n \left(1-q(\lambda,\sigma_i)\right)^2 \beta_i^2 =\zeta^2 \tau, \quad 0<\tau<n, \\ \label{ADP} \textrm{ADP}:\quad &C(\lambda)= \|\bo R(\lambda)\|_2^2 = \sum_{i=1}^n \left(1-q(\lambda,\sigma_i)\right)\beta_i^2 =\zeta^2 n, \\ \label{upre} \textrm{UPRE}:\quad &U(\lambda)=\|\bo r(\lambda)\|_2^2 +2 \zeta^2 \mathrm{trace}(A(\lambda))= \sum_{i=1}^n \left(1-q(\lambda,\sigma_i)\right)^2 \beta_i^2 + 2 \zeta^2 \sum_{i=1}^n q(\lambda,\sigma_i),\\ \label{gcv} \textrm{GCV}:\quad &G(\lambda)=\frac{n^2 \|\bo r(\lambda)\|_2^2}{ \left(\mathrm{trace}(I- A(\lambda))\right)^2}=\frac{n^2 \sum_{i=1}^n \left(1-q(\lambda,\sigma_i)\right)^2 \beta_i^2}{\left( n-\sum_{i=1}^n q(\lambda,\sigma_i)\right)^2}. \end{align} Here, the introduction of the weight $n^2$ in the numerator of $G$ is non-standard but convenient, and does not impact the location of the minimum.
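For illustration only (the singular values and coefficients below are hypothetical, and a simple grid search stands in for a proper 1-D minimizer or root finder), all four functionals can be evaluated from the SVD quantities $\sigma_i$ and $\beta_i = \bo u_i^T \bo b$ alone:

```python
import numpy as np

def selection_criteria(s, beta, n, zeta2, lam):
    """Evaluate D (MDP), C (ADP), U (UPRE) and G (GCV) at one lambda,
    using only the singular values s and coefficients beta_i = u_i^T b."""
    q = s**2 / (lam**2 + s**2)                 # filter factors
    resid2 = np.sum(((1.0 - q) * beta)**2)     # ||r(lam)||_2^2
    D = resid2                                 # MDP: solve D = zeta2 * tau
    C = np.sum((1.0 - q) * beta**2)            # ADP: solve C = zeta2 * n
    Uf = resid2 + 2.0 * zeta2 * np.sum(q)      # UPRE: minimize over lambda
    G = n**2 * resid2 / (n - np.sum(q))**2     # GCV: minimize over lambda
    return D, C, Uf, G

# Hypothetical example: choose lambda as the GCV minimizer over a log grid.
rng = np.random.default_rng(0)
s = np.logspace(0, -6, 20)                     # decaying singular values
beta = s + 1e-4 * rng.standard_normal(20)      # "signal" plus a noise level
lams = np.logspace(-8, 1, 200)
G = np.array([selection_criteria(s, beta, 20, 1e-8, l)[3] for l in lams])
lam_gcv = lams[int(np.argmin(G))]
```

For the MDP and ADP one would instead locate the root of $D(\lambda)=\zeta^2\tau$ or $C(\lambda)=\zeta^2 n$, e.g. by bisection, exploiting the monotonicity of both functionals in $\lambda$.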
For \eqref{MDP} we note that the specific derivation depends on the $\chi^2$ distribution of the residual, which, for a square invertible system, has theoretically $0$ degrees of freedom. Heuristically, it is standard to use $\tau=n$, for white noise with variance $1$, so that the solution fits the data on average to within one standard deviation \cite{ABT}. On the other hand, the ADP functional also arises from a statistical analysis, but as stated here is under the assumption that $\bo x_0=0$. Then $C(\lambda)/\zeta^2 \sim \chi^2(n)$, i.e. the weighted functional follows a $\chi^2$ distribution with $n$ degrees of freedom, so that $E(C(\lambda))=n\zeta^2$. For completeness we also comment on the more general problem, e.g. \cite{Hansen,Regtools,vogel:02}, \begin{align}\label{genregsoln} \bo x_{\text{Reg}}(\lambda, L, W, \bo x_0)=\argmin_{\bo x} \{J(\bo x)\}=\argmin_{\bo x} \left\{\norm{A\bo{x}-\bo{b}}_W^2+\lambda^2\norm{L(\bo{x}-\bo x_0)}_2^2\right\}. \end{align} When $L$ is invertible we may solve with respect to the transformation $\bo y = L \bo{x}$ for the system with matrix $AL^{-1}$, hence mapping the basis for the solution. Often, however, $L$ is chosen as a noninvertible smoothing operator, say approximating a derivative operator. This case, which requires extensions of the generalized singular value decomposition (GSVD) instead of the SVD, is not considered here. In \eqref{genregsoln} $\bo x_0$ constrains the solution to be close to $\bo x_0$. For the ADP it is assumed that $E(\bo x) = \bo x_0$, for the solution $\bo x$ considered as a stochastic variable. Assuming $W$ is positive definite, with $\tilde{A}=W^{1/2}A$ and $\tilde{\bo b}=W^{1/2}(\bo b - A \bo x_0)$, \begin{align*} \bo{ \tilde{x}}_{\text{Reg}}(\lambda)= \argmin_{\bo x} \left\{\|{\tilde{A}\bo{x}-\bo{\tilde{b}}}\|_2^2+\lambda^2\|{\bo{x}}\|_2^2\right\} + \bo x_0. \end{align*} When $W = C_{\bo b}^{-1}$, the inverse covariance matrix of the noise in $\bo b$, the noise is whitened and \eqref{MDP}-\eqref{upre} apply with $\zeta^2=1$.
We assume from here on that $L$ is diagonal and invertible, and only consider \eqref{regform} in the remaining discussion concerning the numerical implementation. We note that we still need to assess whether this left and right preconditioning of the original system matrix $A$, by $W^{1/2}$ on the left and $L^{-1}$ on the right, respectively, has an impact on the square integrability of the modified kernel; we discuss this in section~\ref{mod kernel}. Practically, it is useful to truncate the expansion for $\bo{ {x}}_{\text{Reg}}(\lambda)$ at the numerical rank of the matrix $A$, dependent on the machine precision, effectively removing from \eqref{regsoln} terms dependent on the smallest singular values, those which are not resolved due to the numerical precision of the software environment. This is equivalent to taking the filter factors $q(\lambda, \sigma_i)=0$ for small $\sigma_i$. Typically, these coefficients would be filtered out by the appropriate choice of $\lambda$, but for the purposes of our analysis it is helpful to determine the $i$ for which the coefficient becomes zero with respect to the machine precision, and introduce the concept of numerical rank. \begin{definition}[Numerical Rank]\label{numericalrank} We define the numerical rank with respect to a given precision $\varepsilon$ to be $p=\max\{ i : \sigma_i > \varepsilon\}$. \end{definition} Because of the ordering of the singular values, and defining $q(\lambda,\sigma_i)=0$ for $i>p$ so that the last $n-p$ terms are removed, \eqref{regsoln} is replaced by the truncated solution \begin{align*} \bo{ {x}}_{\text{TSVDReg}}(\lambda)= \sum_{i=1}^p q(\lambda,\sigma_i)\frac{\bo u_i^T \bo b}{\sigma_i} \bo v_i, \end{align*} e.g. \cite{Hansen,RHM:10}. Consequently, functionals \eqref{MDP}-\eqref{gcv} are modified. For example, in \eqref{ADP} we obtain \begin{align*} C(\lambda)= \sum_{i=1}^p \left(1-q(\lambda,\sigma_i)\right)\beta_i^2 + \sum_{i=p+1}^n \beta_i^2 .
\end{align*} In this case the analysis still gives $C(\lambda)/\zeta^2\sim\chi^2(n)$ \cite{RHM:10}, but now $E(\sum_{i=p+1}^n \beta_i^2) =\zeta^2 (n-p)$ for large enough $n-p$ \cite{mere:09}. For the UPRE functional the truncation simply introduces constant terms which can thus be ignored in the minimization. We obtain the truncated expressions \begin{align}\label{TMDP} D_{\text{T}}(\lambda) &= \sum_{i=1}^p \left(1-q(\lambda,\sigma_i)\right)^2 \beta_i^2 =\tau \zeta^2 , \quad 0<\tau<p, \\ \label{TADP} C_{\text{T}}(\lambda)&= \sum_{i=1}^p \left(1-q(\lambda,\sigma_i)\right)\beta_i^2 =\zeta^2 p, \\ \label{Tupre} U_{\text{T}}(\lambda)&=\sum_{i=1}^p \left(1-q(\lambda,\sigma_i)\right)^2 \beta_i^2 + 2 \zeta^2 \sum_{i=1}^p q(\lambda,\sigma_i), \\ \label{Tgcv} G_{\text{T}}(\lambda)&=\frac{n^2 \left(\sum_{i=1}^p \left(1-q(\lambda,\sigma_i)\right)^2 \beta_i^2 +\sum_{i=p+1}^n \beta_i^2\right)}{\left(n-p+ \sum_{i=1}^p q(\lambda,\sigma_i)\right)^2}. \end{align} Observe that the GCV apparently needs the coefficients $\beta_i$ for all $i$, regardless of the choice of $p$. On the other hand, for orthogonal $U$ and given $\bo b$ of length $n$ we note \begin{align*} \|\bo b\|_2^2 =\|U^T\bo b\|_2^2 = \sum_{i=1}^p \beta^2_i + \sum_{i=p+1}^n \beta_i^2\quad \text{yields} \quad \sum_{i=p+1}^n \beta_i^2 = \|\bo b\|_2^2 - \sum_{i=1}^p \beta^2_i. \end{align*} Thus indeed the GCV can be evaluated without a complete SVD for the system of size $n$. Regarding the choice of these particular methods, and the exclusion of other regularization parameter selection techniques, we note that there are numerous techniques that could be applied. We chose not to discuss the well-known L-curve, e.g. \cite{HansenLC}. There the techniques for analysis are somewhat different, requiring the analysis of the curvature of the L-curve. Vogel \cite{vogel:02} did consider the L-curve, however with less positive results.
In particular, he found that the L-curve either becomes flat with increasing $n$, or gives a value that does not lead to mean square convergence of the error. Thus here we have chosen to consider the particular selection methods above and to ignore for now the L-curve. To apply regularization parameter selection techniques practically in the context of the SVE-SVD relation, we first examine the determination of the numerical rank $p$ for the set of system matrices $\{\sub{A}\}$ with increasing $n$ and the square integrability of the kernel $H(s,t)$ under the variable mappings that correspond to the left and right preconditioning of the matrix $A$. \section{Theoretical Results}\label{theoryresults} \subsection{Square integrability of the weighted kernel}\label{mod kernel} The theoretical justification for using an estimate of the regularization parameter $\lambda$ obtained from a downsampled set of data, and an appropriately downsampled kernel matrix, relies primarily on the results for the SVE-SVD relationship discussed in Theorem~\ref{SVE-SVD}. But as noted for \eqref{genregsoln} it is important to discuss the impact of the left and right preconditioning of the matrix $A$ which results from replacing $A$ by $A=W^{1/2} A L^{-1}$ in \eqref{regform}. For diagonal matrices $W$ and $L$ it is immediate that premultiplication by $W^{1/2}$ amounts to a row scaling and postmultiplication by $L^{-1}$ to a column scaling. Consistent with the practical data we suppose that $W \approx C^{-1}$ for a symmetric positive definite (SPD) matrix $C$. Then $C^{-1/2}$ is a sampling of a function $c(s)\ne 0$, by $C$ SPD, and $L$ the sampling of a function $\ell(t)\ne 0$, by the invertibility of $L$. The kernel is replaced by a weighted rational kernel $\tilde{H}(s,t)= H(s,t)/(c(s)\ell(t))$.
\begin{theorem}\label{thm:weighted kernel} For bounded functions $c(s)>c_0>0$ and $|\ell(t)|>\ell_0> 0$ defined on $\Omega_s$ and $\Omega_t$ respectively, the square integrability of the weighted kernel $\tilde{H}(s,t)=H(s,t)/(c(s)\ell(t))$ defined on $\Omega_s\times\Omega_t$ follows from the square integrability of $H$ on the same domain. \end{theorem} \begin{proof} The proof is immediate from \begin{align*} \|\tilde{H}\| = \left\| \frac{1}{c(s)} H(s,t) \frac{1}{\ell(t)}\right\| \le \frac{1}{c_0 \ell_0} \|H\| <\infty. \end{align*} \end{proof} We have thus determined that we may use the relation between the SVE and the SVD for the left and right preconditioned kernel, $\tilde{H}$. Furthermore, the data $g(s)$ and source function $f(t)$ are mapped accordingly without impacting the analysis, i.e. we have the mapped source $\tilde{f}(t) = \ell(t) f(t)$ and mapped data $\tilde{g}(s)=g(s)/c(s)$. \subsection{Convergence of the SVD to the SVE and Numerical Rank}\label{theory rank} Although Theorem~\ref{SVE-SVD} summarizes the primary results used in our analysis, some additional results are useful for further analysis with respect to the numerical rank. First we focus on $\sub{\Delta}$ defined in \eqref{defDelta}, which provides an estimate of the error in the estimation of $H$ using $\sub{A}$. If $\lim\limits_{n\to\infty}{\sub{\Delta}}=0$, then the matrix $\sub{A}$ effectively becomes independent of its discretization in providing an accurate representation of the integral with kernel $H$. \begin{prop}\label{prop1}Suppose the ONB $\{\phi_j\}_{j=1}^\infty$ and $\{\psi_i\}_{i=1}^\infty$ in Algorithm~\ref{MM} are complete, then $\lim\limits_{n\to\infty} \left(\sub{\Delta}\right)^2 = 0$. \end{prop} \begin{proof} $H(s,t)$ is defined on $\Omega_s\times\Omega_t$, i.e. $H\in L^2(\Omega_s\times\Omega_t)$. Let $\{\phi_j(t)\}$ be an ONB for $L^2(\Omega_t)$ and $\{\psi_i(s)\}$ be an ONB for $L^2(\Omega_s)$.
Then $\{\phi_j(t)\psi_i(s) \, \big| \, i,j\in\mathbb{Z}^+\}$ is an ONB for $L^2(\Omega_s\times\Omega_t)$. Setting, \begin{align*} H(s,t) &= \sum\limits_{i=1}^\infty\sum\limits_{j=1}^\infty a_{ij}\phi_j(t)\psi_i(s) \quad \text{yields}\\ \lra{\lra{H,\psi_i},\phi_j} &= \dis{\int_{\Omega_t}\int_{\Omega_s} \psi_i(s)H(s,t)\phi_j(t) \, ds \, dt }= a_{ij}. \end{align*} Immediately, $\norm{H(s,t)}^2=\sum\limits_{i=1}^\infty\sum\limits_{j=1}^\infty\abs{a_{ij}}^2$, and defining $\sub{A}=[a_{ij}]_{i,j=1}^n$ yields \begin{align*}\left(\sub{\Delta}\right)^2 =\|H\|^2 -\|\sub{A}\|_F^2&= \sum\limits_{i=1}^\infty\sum\limits_{j=1}^\infty\abs{a_{ij}}^2 - \sum\limits_{i=1}^n\sum\limits_{j=1}^n\abs{a_{ij}}^2 = \sum\limits_{\max(i,j)>n} \abs{a_{ij}}^2. \end{align*} But now by square integrability $\norm{H}^2<\infty$, so the expression is a convergent series. Thus, by Cauchy's criterion the tail end of the sum converges to zero with $n$. \end{proof} Practically we suppose the discrete system in \eqref{discrete system} is constructed using different basis functions for each choice of $n$, and that $\sub{a_{ij}}$ are calculated using a quadrature rule. Then Proposition~\ref{prop1} may not immediately apply due to quadrature error. But, for the theory, we assume $\left(\sub{\Delta}\right)^2\xrightarrow{n \to \infty} 0$. We also assume that all continuous singular values are distinct, as required for Theorem~\ref{SVE-SVD} statement~\ref{thm4}. Analogues of this result hold for the case without distinct singular values \cite{hansensvepaper}. In relating the numerical rank $\sub{p}$ across resolutions it is helpful to establish a few basic results, the first two of which effectively appear in \cite{hansensvepaper}. \begin{lemma} \label{lemma1} If $\left(\sub{\Delta}\right)^2\xrightarrow{n \to \infty} 0$, then $\lim\limits_{n\to\infty}{\sub{\sigma_i}}=\mu_i$ for all $i$. 
\end{lemma} \begin{proof} By Theorem~\ref{SVE-SVD} statements~\ref{thm1} and \ref{thm2}, it is immediate that $\lim\limits_{n\to\infty}{\sub{\sigma_i}}=\mu_i$ for all $i$. \end{proof} \begin{lemma} \label{lemma3} If $\left(\sub{\Delta}\right)^2\xrightarrow{n \to \infty} 0$ then $\lim\limits_{n\to\infty} {\sub{\beta_i}} =\lra{u_i, g} = \hat{g}_i$ for all $i$. \end{lemma} \begin{proof} By Theorem~\ref{SVE-SVD} statement~\ref{thm4}, $\lim\limits_{n\to\infty}{\sub{\tilde{u}_i}}=u_i$. Using \eqref{betacoeff} and recalling $\sub{\beta_i} =(\sub{\bo u_i})^T\bo b$, yields $\sub{\beta_i}=\lra{\sub{\tilde{u}_i}, g}$. Thus $ \lim\limits_{n\to\infty} {\sub{\beta_i}} = \lim\limits_{n\to\infty}\lra{\sub{\tilde{u}_i}, g}=\lra{u_i, g} = \hat{g}_i$. \end{proof} \begin{lemma} \label{lemma4} For all $i$ and $n$, $\sub{\sigma_i} \le \mu_i \le {\sub{\Delta}} + {\sub{\sigma_{i-1}}}$ and $\mu_{i+1}-{\sub{\Delta}} \le {\sub{\sigma_i}} \le \mu_i$. \end{lemma} \begin{proof} From Theorem~\ref{SVE-SVD} statement~\ref{thm2}, and using the ordering of $\{\sub{\sigma_i}\}_{i=1}^n$, $\mu_i-{\sub{\sigma_i}}\le {\sub{\Delta}}$, which implies $\mu_i \le {\sub{\Delta}} + {\sub{\sigma_i}} \le {\sub{\Delta}} + {\sub{\sigma_{i-1}}}$. By Theorem~\ref{SVE-SVD} statement~\ref{thm1} this gives $\sub{\sigma_i} \le \mu_i \le {\sub{\Delta}} + {\sub{\sigma_{i-1}}}$. Reindexing and subtracting $\sub{\Delta}$ also provides $\mu_{i+1}-{\sub{\Delta}} \le {\sub{\sigma_i}} \le \mu_i$. \end{proof} To relate the convergence between continuous and discrete spectra, we introduce the parameter $\varepsilon$ which arises in Definition~\ref{numericalrank} of the numerical rank and depends on the machine precision. \begin{theorem}[Numerical Rank] \label{lemma2} Let us assume that $\left(\sub{\Delta}\right)^2\xrightarrow{n \to \infty} 0$ and that all continuous singular values $\mu_i$ are distinct.
Let $P\in\mathbb{Z}^+$ be such that $\mu_P>\varepsilon$ and $\mu_{P+1}\le \varepsilon$, for small positive $\varepsilon$; then $\lim\limits_{n\to\infty} {\sub{p}} = P =: p^*$. Moreover, there exists $n^*\in\mathbb{Z}^+\text{ such that } {\sub{p}}=p^* $ for all $ n\ge n^*$. \end{theorem} \begin{proof} First note that because $\norm{H}^2 = \sum\limits_{i=1}^\infty\mu_i^2<\infty$, $\mu_i\to0$, and $P$ exists. From Theorem~\ref{SVE-SVD} statement~\ref{thm1}, $\sub{\sigma_{p^*}}\le{\sigma^{(n+1)}_{p^*}}\le\mu_{p^*}$. Additionally, for all $ i>p^*, \, {\sub{\sigma_i}}\le\mu_{p^*+1}\le\varepsilon$. Thus $\sub{p}\le p^*$. From Lemma~\ref{lemma4}, $\varepsilon <\mu_{p^*} \le {\sub{\Delta}}+ \sub{\sigma_{p^*-1}} \le {\sub{\Delta}}+ \sub{\sigma_{i}}$, for any $i\le p^*-1$. Thus as $\sub{\Delta}\to0$, $\sub{\sigma_i}> \varepsilon$, for all $i\le p^*-1$, yielding $\sub{p}\ge p^*-1$, i.e. $p^*-1\le \sub{p}\le p^*$. Suppose $\sub{p} = p^*-1$; then $\sub{\sigma_{p^*}}\le\varepsilon <\sub{\sigma_{p^*-1}}$. But by Theorem~\ref{SVE-SVD} statement~\ref{thm1}, $0\le \mu_{p^*} -\sub{\sigma_{p^*}} \le \sub{\Delta}$, and as $\sub{\Delta}\to0$, $\sub{\sigma_{p^*}}\to\mu_{p^*}>\varepsilon$, a contradiction for $n$ sufficiently large. Hence $\lim\limits_{n\to\infty} {\sub{p}} = p^*$, and there exists $ n^*\in\mathbb{Z}^+\text{ such that } {\sub{p}}=p^*$ for all $n\ge n^*$. \end{proof} \subsection{Convergence of $\sub{\lambda}$}\label{sec:lambdaconv} We define the regularization parameter $\sub{\lambda}$ to be the estimate of the regularization parameter with resolution $n$, and $\lambda^*$ to be the estimate for regularizing the continuous solution \eqref{contsoln}. Now, to assist in the analysis we introduce some notation for functionals that occur repeatedly in the formulation.
Specifically, the regularization functionals \eqref{TMDP}-\eqref{Tgcv} are expressible in terms of the common multivariable function \begin{align*} \eta(\lambda, p, k, \bo a, \bo z) &= \sum_{i=1}^{p} z^2_i \paren{\frac{\lambda^2}{a_i^2+\lambda^2}}^k =\sum_{i=1}^{p}{z^2_i} \paren{1-{q}(\lambda, a_i)}^k\\ &=\left({\bo z^2}\right)^T ({{\bf1}} - {\bo w} (\lambda, {\bo a}))^k \end{align*} where we have defined the vectors $\bo z$, ${\bf1}$, $\bo w$ and $\bo a$ $\in \mathbb{R}^p$, with $w_i = q(\lambda, a_i)$ and ${\bf1}_i=1$, and the square and $k$th power of a vector are understood elementwise. With the appropriate identification of the terms in $\eta$ we obtain \begin{align*} \sub{D_{\text{T}}}(\lambda) &=\eta(\lambda,\sub{p}, 2, \sub{\boldsymbol{\sigma}}(1:\sub{p}), \sub{\boldsymbol{\beta}}(1:\sub{p})) \\ \sub{C_{\text{T}}}(\lambda)&= \eta(\lambda,\sub{p}, 1, \sub{\boldsymbol{\sigma}}(1:\sub{p}), \sub{\boldsymbol{\beta}}(1:\sub{p})) \\ \sub{U_{\text{T}}}(\lambda)&=\eta(\lambda,\sub{p}, 2, \sub{\boldsymbol{\sigma}}(1:\sub{p}), \sub{\boldsymbol{\beta}}(1:\sub{p})) + 2 \zeta^2 {\bf1}^T\bo w(\lambda, \sub{\boldsymbol{\sigma}}(1:\sub{p}))\\ \label{etagcv} \sub{G_{\text{T}}}(\lambda)&= \frac{ n^2 \left(\eta(\lambda,\sub{p}, 2, \sub{\boldsymbol{\sigma}}(1:\sub{p}), \sub{\boldsymbol{\beta}}(1:\sub{p})) +\|\boldsymbol{\beta}(\sub{p}+1:n)\|_2^2\right)}{\left(n-\sub{p}+ {\bf1}^T\bo w(\lambda, \sub{\boldsymbol{\sigma}}(1:\sub{p})) \right)^2}. \end{align*} Equivalent continuous functionals are obtained for effective continuous rank $p^*$ as in Theorem~\ref{lemma2}, by defining $z_i=\hat{g}_i$, and $a_i=\mu_i$. For example, the limiting GCV is \begin{align} \label{contgcv} G^*(\lambda)&=\lim_{n \to \infty}\left\{ \frac{n^2 \left(\eta(\lambda,p^*,2,{\boldsymbol \mu}(1:p^*), \hat{\bo g}(1:p^*) ) +\|\boldsymbol{\beta}({p}^*+1:n)\|_2^2\right)}{\left(n-p^*+ {\bf1}^T\bo w(\lambda, {\boldsymbol \mu}(1:p^*))\right)^2} \right\}.
\end{align} To relate the continuous and discrete functionals, note immediately the continuity of $\bo w$ with respect to $\bo a$ and $\lambda$, and hence of $\eta$ with respect to $\bo a$, $\bo z$ and $\lambda$. Moreover, using Lemma~\ref{lemma1} and Theorem~\ref{SVE-SVD} statement~\ref{thm1} for $\mu_i$, and Lemma~\ref{lemma3} for $\hat{g}_i$, we introduce \begin{align}\label{expsigma} \subs{\sigma}&=\mu_i+{\subs{\varepsilon}}, \quad \subs{\varepsilon}\le0, \quad \lim\limits_{n\to\infty}{\subs{\varepsilon}}=0\\ \label{expbeta} \sub{\beta_i}&=\hat{g}_i+{\subs{\delta}}, \quad \lim\limits_{n\to\infty}{\subs{\delta}}=0. \end{align} \begin{lemma}[Convergence of $\eta$]\label{lemmaetaconv} Suppose $n>n^*$ such that Theorem~\ref{lemma2} holds, and in \eqref{expsigma} and \eqref{expbeta} $|\subs{\varepsilon}|< \subs{\sigma}$, and $|\subs{\delta}|< |\hat{g}_i|$, respectively. Then, \begin{align} \nonumber \lim_{n\to\infty} \eta(\lambda,{\sub{p}}, k, \sub{\boldsymbol{\sigma}}(1:\sub{p}), \sub{\boldsymbol{\beta}}(1:\sub{p})) &= \eta(\lambda,p^*,k, {\boldsymbol \mu}(1:p^*), \hat{\bo g}(1:p^*)) \\ \lim_{n \to \infty} {\bf1}^T\bo w(\lambda, \sub{\boldsymbol{\sigma}}(1:\sub{p})) &={\bf1}^T \bo w(\lambda, {\boldsymbol \mu}(1:p^*))\nonumber\\ \lim_{n\to\infty} \frac{n^2}{\left(n-\sub{p}+ {\bf1}^T\bo w(\lambda, \sub{\boldsymbol{\sigma}}(1:\sub{p}))\right)^2} &=1, \quad p^*\ll n. \nonumber \end{align} \end{lemma} \begin{proof} The proof is immediate by the continuity of $\eta$ and $\bo w$. \end{proof} For ease of notation we introduce $\bar{\bo z}$ to be the vector $\bo z$ truncated to length ${p}^*$. \begin{theorem}[Convergence of $\sub{\lambda}$]\label{regconv} Suppose $n>n^*$ such that Theorem~\ref{lemma2} holds, and in \eqref{expsigma} and \eqref{expbeta} $|\subs{\varepsilon}|< \subs{\sigma}$, and $|\subs{\delta}|< |\hat{g}_i|$, respectively.
Assume $\sub{\lambda}$ and $\lambda^*$ are given by one of the following cases: \begin{enumerate} \item $\sub{\lambda}$ solves $\sub{D_{\text{T}}}(\lambda) = \zeta^2 \tau$ and $\lambda^*$ solves $D^*(\lambda) =\eta(\lambda,p^*,2, \bar{{\boldsymbol \mu}}, \bar{\hat{\bo g}}) = \zeta^2 \tau$. \item $\sub{\lambda}$ solves $\sub{C_{\text{T}}}(\lambda) = \zeta^2 \sub{p}$ and $\lambda^*$ solves $C^*(\lambda) =\eta(\lambda,p^*,1, \bar{{\boldsymbol \mu}}, \bar{\hat{\bo g}}) = \zeta^2 p^*$. \item $\sub{\lambda} = \argmin_{\lambda} \sub{U_{\text{T}}}(\lambda)$ and $\lambda^* = \argmin_{\lambda}U^*(\lambda) =\argmin_{\lambda} \{\eta(\lambda,p^*,2, \bar{{\boldsymbol \mu}}, \bar{\hat{\bo g}}) +2 \zeta^2 {\bf1}^T\bo w(\lambda, \bar{{\boldsymbol \mu}}) \}$. \item $\sub{\lambda} = \argmin_{\lambda} \sub{G_{\text{T}}}(\lambda)$ and $\lambda^* = \argmin_{\lambda}G^*(\lambda)$, $G^*(\lambda)$ as defined in \eqref{contgcv}. \end{enumerate} Then, in each case, $\lim\limits_{n\to\infty}{\sub{\lambda}}=\lambda^*$. \end{theorem} \begin{proof} The result follows by Lemma~\ref{lemmaetaconv} immediately for the MDP, ADP and UPRE functionals. For the GCV we note in addition that $\lim_{i \to \infty} (\sub{\beta_i})^2=0$. Therefore $\lim_{n\to \infty} \|\boldsymbol{\beta}(p^*+1:n)\|_2^2$ is bounded and independent of $\lambda$. \end{proof} \section{Practical Implementation}\label{sec:practical} Our interest, as noted, is the solution of the integral equation \eqref{eq:fred} rather than the generation of an approximation to the SVE for the kernel $H(s,t)$. We carefully describe the stages of the algorithm which lead to the determination of the solution of the large scale problem, using the regularization parameter estimated using only the coarse resolution system of equations. In the following we generally assume that the data $g(s)$ is provided at a discrete set of points, $\{\subl{s_\imath}\}_{\imath=1}^N $, cf. 
\cite{hansensvepaper}, and is contaminated by a noise vector $\bo e$, where $\bo e \sim \mathcal{N}(0, C)$ for diagonal matrix $C=\mathrm{diag}(\zeta_1^2, \zeta_2^2, \dots, \zeta_N^2)$. We also assume that the approximation of $f(t)$ is required at $N$ points $\{\subl{t_\jmath}\}_{\jmath=1}^N $. \subsection{The downsampled system}\label{sec:downsample} To apply the algorithm with respect to different resolutions we first have to identify the sampling, i.e. we pick the coarse level resolution $n<N$ such that it is possible to find a sampling $\{\sub{s_i}\}_{i=1}^n $, with $\sub{s_i}=\subl{s_\imath}$ for some $\imath \in \iota$ for an index set $\iota$. For example, if $n=N/2$ then we may take every second sampled point at the fine resolution for the downsampled data, so that $\iota=\{1, 3, \dots, N-1\}$. The samples are ordered $\subl{s_1}\le \sub{s_1}< \sub{s_2} \dots <\sub{s_n}\le \subl{s_N}$ and yield the sampling vector with entries $\subs{g} = g(\subs{s})$. We now use the indicator functions, normalized to length $1$, given by \begin{align}\label{indicator} \chi_k(x) = \left\{\begin{array}{ll} \frac{1}{\sqrt{ dx_k}} & ~~~x \in \Omega_k =[x_k-\frac{dx_k}{2}, x_{k}+\frac{dx_k}{2}]\\ 0 &~~~\textrm{otherwise}\end{array} \right. . \end{align} Then, defining the step size $\sub{d s_i}=\sub{s_{i+1}}-\sub{s_i}$, with the equivalent definitions in $t$, and such that the sample points are at the midpoints of each non-overlapping interval, \begin{align}\label{ONB} \sub{\psi_i}(s) = \left\{\begin{array}{ll} \frac{1}{\sqrt{\sub{d s_i}}} & s \in \Omega_{\sub{s_i}} =[\sub{s_i}-\frac{\sub{d s_i}}{2}, \sub{s_i}+\frac{\sub{d s_i}}{2}]\\ 0 & \textrm{otherwise}\end{array} \right.. \end{align} Thus, in \eqref{discrete system}, with the assumption that the integral over $\Omega_{\sub{s_i}}$ uses the midpoint rule, we obtain the approximation \begin{align}\subs{\hat{g}} = \langle{g(s), \psi_i(s)}\rangle \approx g(s^{(n)}_i) \sqrt{\sub{d s_i}}=:\sub{b_i}.
\label{coeffgnonoise}\end{align} The coefficients of $f(t)$ are defined similarly, for indicator basis functions $\sub{\phi_j}(t)$, as in \eqref{indicator}, so that \begin{align} \subj{\hat{f}} &= \langle{f(t), \phi_j(t)}\rangle \approx f(t^{(n)}_j) \sqrt{\sub{d t_j}}=:\sub{x_j}, \nonumber \\ \text{where} \quad \sub{\phi_j}(t) &= \left\{\begin{array}{ll} \frac{1}{\sqrt{\sub{d t_j}}} & t \in \Omega_{\sub{t_j}} =[\sub{t_j}-\frac{\sub{d t_j}}{2}, \sub{t_j}+\frac{\sub{d t_j}}{2}]\\ 0 & \textrm{otherwise}\end{array} \right.. \label{ONB2} \end{align} Then, again assuming the midpoint rule, the approximation to the kernel matrix is given by \begin{align}\nonumber \int_{\Omega_{{s}}}\int_{\Omega_{{t}}}\sub{\psi_i}(s) H(s,t) \sub{\phi_j}(t) dt ds&=\frac{1}{\sqrt{\sub{d s_i} \sub{d t_j}}}\int_{\Omega_{\sub{s_i}}}\int_{\Omega_{\sub{t_j}}}H(s,t) dt ds\\ &\approx {\sqrt{\sub{d s_i} \sub{d t_j}}} H(\subs{s}, \sub{t_j}) =: \sub{a_{ij}}. \label{Anmatrix}\end{align} For the resolution-based algorithm $\subl{A}$ is required and can also be calculated using \eqref{Anmatrix}. Thus $\sub{A}$ can be obtained by sampling and scaling $\subl{A}$, i.e. by extracting rows and columns with the correct scaling, \begin{align} \sub{a_{ij}} =\sqrt{\sub{d s_i} \sub{d t_j}} H(\subs{s}, \sub{t_j}) =\frac{\sqrt{\sub{d s_i} \sub{d t_j}}}{\sqrt{\subl{d s_\imath} \subl{d t_\jmath}}} \subl{a_{\imath\jmath}}. \label{scaleA} \end{align} We note that the impact of the quadrature error in the calculation of these elements, which tends to $0$ with $n$, is ignored in the analysis.
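The extraction and scaling in \eqref{scaleA} can be sketched as follows (the function name is ours; uniform spacing within each resolution and a common index set for rows and columns are assumed):

```python
import numpy as np

def downsample_kernel_matrix(AN, iota, ds_N, dt_N, ds_n, dt_n):
    """Form the coarse matrix A^(n) from the fine matrix A^(N) by
    extracting the rows and columns indexed by `iota` and rescaling by
    sqrt(ds_n dt_n)/sqrt(ds_N dt_N), cf. a_ij = sqrt(ds_i dt_j) H(s_i, t_j)."""
    scale = np.sqrt(ds_n * dt_n) / np.sqrt(ds_N * dt_N)
    return scale * AN[np.ix_(iota, iota)]
```

For $n=N/2$ on uniform grids the scale factor reduces to $2$, consistent with $\sub{A}=\alpha\subl{A}$ depending only on the ratio of the number of points.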
We have demonstrated, therefore, that the computational cost for determining the matrix for the coarse resolution, $\sub{A}$, is negligible compared to the cost for determining the kernel matrix $\subl{A}$, which would be required independent of any coarse-fine resolution arguments.\footnote{If the kernel integral is calculated exactly over the given interval it is still possible to obtain $\sub{A}$ from $\subl{A}$ by summing the relevant terms from $\subl{A}$, but the scaling factor is the inverse of that in \eqref{scaleA}.} Further, in obtaining the sampling $\{\sub{s_i}\} $, it is appropriate to define a sampling interval $\ell$ such that $\imath = 1 : \ell:N$, yielding non-overlapping intervals. When the sampling is completely uniform, with all of $\subl{d s_\imath}$, $\subl{d t_\imath}$, $\sub{d s_i}$, and $\sub{d t_i}$ independent of the indices $\imath$ and $i$, $\sub{A} =\alpha \subl{A}$, where $\alpha$ depends only on the ratios between the number of points at each resolution. Given matrices $\sub{A}$ and $\subl{A}$, the goal is now to determine the numerical rank $\subl{p}$ and regularization parameter $\subl{\lambda}$ using only the SVD of $\sub{A}$. Then, using the first $\subl{p}$ terms of the SVD for $\subl{A}$, calculated, for example, using \textrm{svds}$(\subl{A}, \subl{p})$ in Matlab, and thus not requiring the full SVD for $\subl{A}$, approximations at the original fine resolution are given by \begin{align}\nonumber \subl{f}(t_k) &\approx \sum_{\jmath=1}^N \left(\sum_{\imath=1}^\subl{p} q(\subl{\lambda},\subl{\sigma_\imath})\frac{(\subl{\bo u_\imath})^T \subl{\bo b}}{\subl{\sigma_\imath}} \subl{ v_{\jmath\imath}}\right)\phi_\jmath(t_k) \\ &=\frac{1}{\sqrt{\subl{d t_k}}}\sum_{\imath=1}^\subl{p} q(\subl{\lambda},\subl{\sigma_\imath})\frac{(\subl{\bo u_\imath})^T \subl{\bo b}}{\subl{\sigma_\imath}} \subl{ v_{k\imath}}=: \subl{\bo f_k}.
\label{tsvdregsolnN}\end{align} \subsection{Determination of the numerical rank}\label{sec:numericalrank} Theorem~\ref{lemma2} suggests that the numerical rank estimation introduces an additional significant parameter $\varepsilon$ for the determination of $\sub{p}$. First we note that this parameter is not the machine precision for floating point arithmetic $\varepsilon_{\mathrm{float}}$, e.g. $2.2204\times 10^{-16}$ in Matlab 2014b. While $\varepsilon$ does depend on $\varepsilon_{\mathrm{float}}$, it also depends on the numerical spectrum of $\subl{A}$ and is easily estimated using the singular values for $\sub{A}$. In particular, $\varepsilon$ is only relevant in determining the effective numerical rank of the problem, so as to determine the truncation of the SVD. We illustrate this for the numerical examples in section~\ref{simulations}, demonstrating the convergence of the singular values and the consequent estimation of $\sub{p}$. \subsection{Determination of the regularization parameter}\label{sec:regparam} It remains to consider more carefully the estimation of the regularization parameter given the provided data. Suppose, for now, that the measured data has been whitened via weighting of the sample vector, yielding $\tilde{\bo g} := \zeta W^{1/2}\bo g$, where $W$ is the inverse covariance matrix for the noise vector $\bo e$, and $\zeta^2$ is the mean of the variance in each measurement. Likewise, then, the kernel matrix $\subl{A}$ is replaced by $\tilde{\subl{A}}=\zeta W^{1/2} \subl{A}$. We maintain the parameter $\zeta$ in the analysis just to emphasize its usage in the formulae in section~\ref{sec:regest} for estimating the regularization parameter.
The weighted noise vector $\tilde{\bo e} = \zeta W^{1/2} {\bo e} \sim \mathcal{N}(0, \zeta^2I)$ and the estimation formulae apply as given in \eqref{TMDP}, \eqref{TADP}, and \eqref{Tupre}, noting again by Theorem~\ref{thm:weighted kernel} that the scaling does not impact the convergence of the weighted singular values and coefficients. But now, again applying \eqref{coeffgnonoise}, the actual right hand side data are obtained by inner products with the weighted data \begin{align*} (\sub{\hat{\tilde{g}}_{\mathrm{obs}}}) _i=\langle{\tilde{g}_{\mathrm{obs}}(s), \psi_i(s)}\rangle &= \langle{\tilde{g}(s)+\tilde{e}(s), \psi_i(s)} \rangle=\subs{ \tilde{b}}+ \subs{\hat{\tilde{e}}} \quad \text{for which}\\ \subs{\hat{\tilde{e}}} &= \tilde{e}(s_i) \sqrt{\sub{ds}}, \quad\subs{ \hat{\tilde{e}}} \sim \mathcal{N}(0, \left(\zeta^{(n)}\right)^2 ), \quad \left(\zeta^{(n)}\right)^2 =\sub{ds}\zeta^2. \end{align*} Hence the noise level now depends on $n$ and we cannot immediately apply the convergence results, Theorem~\ref{regconv}, for \eqref{TMDP}-\eqref{Tupre}, because the variance of the noise in each system depends on $n$. We recall again that the GCV estimator is independent of $\zeta^2$, so no further discussion is needed for it. It is immediate by \eqref{regform} that if we introduce a scaling of the kernel matrix and the right hand side data by a constant $\mu$, then \begin{align*} \bo{ x}_{\text{Reg}}(\tilde{\lambda},\mu)= \argmin_{\bo x} \left\{{\mu^2}\|{ A}\bo{x}-\bo{ b}\|_2^2+\tilde{\lambda}^2\|{\bo{x}}\|_2^2\right\} =\bo{ x}_{\text{Reg}}({\lambda}), \quad \lambda=\frac{\tilde{\lambda}}{\mu}.
\end{align*} On the other hand, suppose that we find the regularization parameter $\sub{\tilde{\lambda}}$ using variance $\left(\zeta^{(m)}\right)^2$ instead of $\left(\zeta^{(n)}\right)^2$; then this corresponds to scaling the data by $\mu=\zeta^{(m)}/\zeta^{(n)}$, yielding a solution with regularization parameter \begin{align}\label{scaling} {\lambda^{(n)}}={\frac{\zeta^{(n)}}{\zeta^{(m)}}} \tilde{\lambda}^{(n)}. \end{align} We note that this result also follows by considering the expansion for the solution \eqref{regsoln}, and by the uniqueness of the SVD \cite{GoLo:96}, at least with respect to the singular values and the singular vectors up to their signs. Thus to apply the convergence results for \eqref{TMDP}-\eqref{Tupre} we calculate $\sub{\tilde{\lambda}}$ using $\left(\zeta^{(n)}\right)^2$, yielding $\subl{\lambda}=\left(\zeta^{(N)}/\zeta^{(n)} \right)\subl{\tilde{\lambda}}=\left(\zeta^{(N)}/\zeta^{(n)} \right)\sub{\tilde{\lambda}}$ by the convergence of the functionals for constant variance. Then we may use $\subl{\lambda}$, $\subl{\sigma_i}$ and $\subl{\beta_i}$ to find the solution using \eqref{tsvdregsolnN}. For clarification the steps are described in Algorithm~\ref{alg}. \begin{algorithm}[H]\caption{Galerkin Method to obtain regularized solution of \eqref{eq:fred} \label{alg}} \begin{algorithmic}[1] \Require whitened data $\{g(s_\imath)\}_{\imath=1}^N$, whitened kernel function $H(s,t)$ and precision $\varepsilon$. \State Pick the coarse level resolution $n<N$ such that it is possible to find a sampling $\{g(\sub{s_i})\}_{i=1}^n$, with $\sub{s_i}=\subl{s_\imath}$, $\imath \in \iota$, for some index set $\iota$, with ordering $\subl{s_1}\le \sub{s_1}< \sub{s_2} \dots <\sub{s_n}\le \subl{s_N}$. \State Choose ONB $\{\sub{\phi_j}(t)\}_{j=1}^n$, $\{\sub{\psi_i}(s)\}_{i=1}^n$, $\{\subl{\phi_\jmath}(t)\}_{\jmath=1}^N$ and $\{\subl{\psi_\imath}(s)\}_{\imath=1}^N$, e.g. using \eqref{ONB} and \eqref{ONB2}.
\State Calculate the right hand side vector $\sub{\bo b}$ using \eqref{coeffgnonoise}. \State Calculate $\subl{A}$ with entries $(\subl{a_{ij}})$ using \eqref{Anmatrix}. \State Calculate matrix $\sub{A}$ by sampling from $\subl{A}$ using \eqref{scaleA}. \State Compute the SVD, $\sub{A} =\sub{U} \sub{\Sigma} (\sub{V})^T$. Estimate $\sub{p}$ using $\varepsilon$. \State Compute the $\subl{p}=\sub{p}$ dominant terms of the SVD of $\subl{A}$. \State Find $\sub{\tilde{\lambda}}$ using regularization parameter estimation by one of MDP, ADP, UPRE or GCV, using $\left(\zeta^{(n)}\right)^2$. \State Relate $\sub{\tilde{\lambda}}$ to $\subl{\lambda}$, via $\subl{\lambda}=\sub{\tilde{\lambda}}\sqrt{\zeta^2 \subl{d s}}/\zeta^{(n)} = \sub{\tilde{\lambda}}\sqrt{\subl{d s}/\sub{ds}}$. \Ensure the regularized solution \eqref{tsvdregsolnN} for resolution $N$ using $\subl{\boldsymbol{\beta}}$ and $\subl{\boldsymbol{\sigma}}$. \end{algorithmic} \end{algorithm} Some advantages of the coarse to fine resolution argument are apparent. For example, suppose that $\subl{{\lambda}}$ is found directly by the UPRE. When $N$ is large and $\subl{ds}$ is small, the noise in the coefficients goes to zero, forcing $\left(\zeta^{(N)}\right)^2\to 0$, so that in the minimization the residual function dominates the filter terms. For the ADP and MDP the right hand side also tends to zero as $\subl{ds} \to 0$, forcing the filter terms identically to $1$, i.e. to no filtering, which is consistent with the noise in the expansion coefficients going to $0$. Noise due to the data sampling is effectively ignored at the high resolution, but is accounted for at the lower resolution.
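The rescaling in \eqref{scaling}, as used in the final step of Algorithm~\ref{alg}, reduces to a ratio of step sizes since $\left(\zeta^{(n)}\right)^2=\sub{ds}\,\zeta^2$. A minimal sketch (the naming is ours):

```python
import math

def rescale_lambda(lam_coarse, ds_coarse, ds_fine):
    """Map the regularization parameter estimated at the coarse resolution
    to the fine resolution:
        lam_N = (zeta_N / zeta_n) * lam_n = sqrt(ds_N / ds_n) * lam_n,
    using (zeta_n)^2 = zeta^2 * ds_n and (zeta_N)^2 = zeta^2 * ds_N."""
    return math.sqrt(ds_fine / ds_coarse) * lam_coarse
```

Since the fine step size is the smaller of the two, the fine-resolution parameter is reduced relative to the coarse estimate, consistent with the lower coefficient noise at resolution $N$.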
\section{Experimental Validation of the Theoretical Results}\label{simulations} We illustrate the theoretical discussions in section~\ref{theoryresults} with the problem \texttt{gravity} from the Regularization toolbox \cite{Regtools}, for which \begin{align} H(s,t) &= \frac{d}{\paren{d^2+(s-t)^2}^{3/2}},\quad (s,t) \in [0,1] \times [0,1], \quad d>0 \label{eq:gravK}\\ f(t) &= \sin\paren{\pi t}+0.5\sin\paren{2\pi t}.\label{eq:gravf} \end{align} This problem has the advantage that we can explicitly determine $(\Delta^{(n)})^2$, and the dependence on the parameter $d$ introduces ill-posedness increasing with $d$. \subsection{Illustration of the SVE-SVD relation} To investigate the convergence $(\Delta^{(n)})^2 \rightarrow 0$ we use \begin{align*} \norm{H}_2^2 = \int_0^1\int_0^1 \frac{d^2}{\paren{d^2+(s-t)^2}^3} \,dt\,ds = \frac{3\arctan\paren{\frac{1}{d}}+\frac{d}{d^2+1}}{4d^3}. \end{align*} Thus $H$ is square integrable. For $d=0.25$, $\norm{H}_2^2 \approx 67.404$ and for $d=0.5$, $\norm{H}_2^2 \approx 7.443$. Moreover, $\sub{a_{ij}}$ in \eqref{kernel matrix} is also available exactly. For $s_{i+1}-s_i=ds=dt =t_{j+1}-t_j$, \begin{align*} \sub{a_{ij}}&=\frac{1}{\sqrt{ds dt}}\int_{s_i}^{s_{i+1}}\int_{t_j}^{t_{j+1}} \frac{d}{\paren{d^2+(s-t)^2}^{3/2}} \,dt\,ds \\ &=\frac{1}{dt} \frac{\left( \sqrt{(s_{i+1}-t_j)^2+d^2} + \sqrt{(s_{i}-t_{j+1})^2+d^2} -2 \sqrt{(s_i-t_j)^2+d^2}\right)}{d}. \end{align*} In Figure~\ref{fig:delta} we show the convergence of $\lvert \paren{{\Delta^{(n)}}}^2 \rvert$ with $n$ for problem sizes $n=100, \dots, 1000$, with both $d=0.25$ and $d=0.5$. It is worth noting that calculating the entries with the midpoint rule still yields convergence of the estimate, but due to quadrature error the numerical calculation of $(\sub{\Delta})^2$ using $\|H\|^2 - \|\sub{A}\|_F^2$ is negative. $\|\sub{A}\|_F^2$ still converges to $\|H\|^2$, but from above rather than from below.
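The exact cell entries above are straightforward to assemble; the following sketch (our code, assuming uniform $ds=dt=1/n$ with cells anchored at their left endpoints) builds $\sub{A}$ for the \texttt{gravity} kernel together with the closed form for $\norm{H}_2^2$, so that $\|\sub{A}\|_F^2$ can be checked against it:

```python
import numpy as np

def gravity_matrix_exact(n, d):
    """Exact Galerkin entries a_ij for H(s,t) = d/(d^2+(s-t)^2)^{3/2}
    on [0,1]x[0,1] with normalized indicator basis functions on a
    uniform grid, from the closed-form double integral over each cell."""
    h = 1.0 / n
    s = np.arange(n) * h                       # left endpoints s_i = t_i
    S, T = np.meshgrid(s, s, indexing="ij")
    r = lambda a, b: np.sqrt((a - b)**2 + d**2)
    return (r(S + h, T) + r(S, T + h) - 2.0 * r(S, T)) / (h * d)

def normH2(d):
    """Closed form for ||H||_2^2 on the unit square."""
    return (3.0 * np.arctan(1.0 / d) + d / (d**2 + 1.0)) / (4.0 * d**3)
```

With these exact entries $\|\sub{A}\|_F^2$ approaches $\norm{H}_2^2$ from below, so $(\sub{\Delta})^2\ge0$, in contrast to the midpoint-rule entries.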
For the exact calculation, shown in Figure~\ref{fig:deltaexact}, $\|\sub{A}\|_F^2$ converges from below, as given by the theory. The convergence of the singular values, leading to the effective numerical rank, Theorem~\ref{lemma2}, is illustrated in Figures~\ref{fig:pstara}-\ref{fig:pstarb}. The vertical line indicates the rank calculation for $\varepsilon=10^{-15}$. Here, consistent with the simulations, $\sub{A}$ is calculated using the midpoint quadrature rule. The singular values decay exponentially up to the numerical precision of the Matlab implementation, $\varepsilon=10^{-16}$. For problem size $n=50$ and $d=0.25$, numerical precision does not impact the singular values. We note that in these, and subsequent, figures the markers and colors are consistently determined by resolution $n$. \begin{figure} \caption{Convergence of $(\Delta^{(n)})^2$ with $n$ (\ref{fig:delta} using the midpoint rule, \ref{fig:deltaexact} using the exact entries), and convergence of the singular values and effective numerical rank (\ref{fig:pstara}, \ref{fig:pstarb}).\label{fig:delta}\label{fig:deltaexact}\label{fig:pstara}\label{fig:pstarb}\label{fig:pstar}} \end{figure} \subsection{Illustrating convergence of the functionals}\label{functionals} We illustrate in Figure~\ref{fig:regfunch1} the convergence of the regularization functionals with truncation at $\sub{p}$ for the \texttt{gravity} problem with $d=0.25$ and $d=0.5$, with discretizations using $n=50$, $100$, $200$, $500$, $1000$, $1500$ and $3000$ points as in Figure~\ref{fig:pstar}. The truncation parameter $\sub{p}$ is determined for $\varepsilon=10^{-15}$ in each case. To better identify the location of the root for the MDP and ADP functionals, we plot $\lvert D(\lambda)-p^* \zeta^2\rvert$ and $\lvert C(\lambda)-p^* \zeta^2\rvert$, i.e. assuming safety parameter $\tau=1$ for the MDP. In the calculations the coefficients $\sub{\beta_i}$ and $\sub{\sigma_i}$ are weighted by the noise in the measurements of $g(s)$, i.e. corresponding to whitening the system by the inverse square root of the covariance matrix of the measured noise in the data.
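For reference, common forms of the UPRE and GCV functionals for Tikhonov filter factors $\phi_i=\sigma_i^2/(\sigma_i^2+\lambda^2)$, restricted to the retained spectral terms, can be sketched as follows. This is our reading of the standard definitions, not necessarily the exact variants in \eqref{TMDP}; here `beta` holds the coefficients $\beta_i=u_i^Tb$:

```python
import numpy as np

def upre(lam, sigma, beta, zeta2):
    # unbiased predictive risk: residual term plus 2*zeta^2 * trace term
    phi = sigma**2 / (sigma**2 + lam**2)
    return float(np.sum(((1 - phi) * beta)**2) + 2 * zeta2 * np.sum(phi))

def gcv(lam, sigma, beta):
    # residual norm over squared effective degrees of freedom;
    # independent of the noise variance, as noted in the text
    phi = sigma**2 / (sigma**2 + lam**2)
    return float(np.sum(((1 - phi) * beta)**2) / np.sum(1 - phi)**2)
```

Both are evaluated on a grid of $\lambda$ and minimized; the flatness for small $\lambda$ discussed below corresponds to $\phi_i\rightarrow 1$ for all retained terms, so that only the residual term survives.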
We pick a constant value of $\zeta^2$ to verify the convergence of the functionals for constant $\zeta^2$, as discussed in section~\ref{theoryresults}. Here, for the MDP, ADP and UPRE, $\zeta^2=ds^{(50)}$, i.e. the noise at the coarsest resolution of the problem. The UPRE and GCV functionals converge independently of $n$, while the MDP and ADP are clearly more sensitive to the correct choice of $\zeta^2$ with $n$. On the other hand, the UPRE and GCV functionals become increasingly flat with decreasing $\lambda$, suggesting that the determination of the minimum will be difficult. One can see that the MDP suggests a larger regularization parameter, with this choice of $\tau$, and will likely oversmooth as compared to UPRE and GCV, while the ADP parameter may lead to undersmoothing. \begin{figure} \caption{The MDP, ADP, UPRE, and GCV functionals \eqref{TMDP} for fixed $\zeta^2=ds^{(50)}$.\label{fig:regfunch1}} \end{figure} Figure~\ref{fig:regfunchi} illustrates the regularization functionals, calculated as for Figure~\ref{fig:regfunch1} but with $\zeta^2=ds^{(n)}$ for each $n$, i.e. at the correct variance in each case. Now one more clearly sees the movement of the MDP and ADP curves to the left, corresponding to $\sub{\lambda} \rightarrow 0$ with $n$, indicative of $\zeta^2=ds^{(n)}$ also converging to $0$ with $n$. On the other hand, the plots for the UPRE and the GCV show that the residual term dominates in each case, so that the curves exhibit the same convergence shown in Figure~\ref{fig:regfunch1}, further emphasizing the potential difficulty of minimizing the functionals at low noise levels. Still, the UPRE in Figure~\ref{fig:regfunch1} indicates greater evidence of a minimum in the given range. It should be noted that the plots for the GCV are independent of the choice of the variance. The vertical lines in each case indicate the location of the regularization parameter with $n$, calculated using the scaling argument in \eqref{scaling}, but imposing a fixed variance, e.g.
$\left(\zeta^{(50)}\right)^2$ for all $n$ to find $\sub{\tilde{\lambda}}$. In this case $\subl{\lambda}$ decreases with $n$, since it is calculated for decreasing variance $\left(\zeta^{(n)}\right)^2$ with $n$, as $\sub{d s} \rightarrow 0$. For the GCV the vertical lines indicate the regularization parameters chosen for each $n$, rather than from $n=50$ scaled to larger $n$. The locations of $\sub{\lambda}$ for the MDP give quite good estimates for the locations of the minima of the relevant functions, but the ADP estimates tend to be larger than would be suggested by the minima, and hence there may be less under-smoothing when calculated from the case with $n=50$. The difficulty of estimating a good minimum for the GCV is evident. One should expect that $\sub{\lambda}$ moves to the left with increasing $n$, but this characteristic is not always observed; indeed, no monotonicity in $\sub{\lambda}$ is found. With the GCV the terms using $\sub{\beta_i}$ for $i>p^*$ converge to $0$ because these coefficients represent the less dominant spectral coefficients in the expansion for $g(s)$; the dominant energy is maintained in the first $p^*$ terms, independent of $n$. Additionally, this means that with $\zeta^2\rightarrow 0$ the UPRE and GCV functionals both minimize the residual, only scaled by a different constant term, so that finding $\sub{\lambda}$ directly with $\zeta^2=ds^{(n)}$ effectively minimizes the residual and may lead to under-smoothing in the solution. We note that determining the correct tolerance for finding the minimum of the ADP and MDP, formulated as minimization of the distance from the right hand side, is a limiting factor of both methods. \begin{figure} \caption{The MDP, ADP, UPRE, and GCV functionals \eqref{TMDP} with $\zeta^2=ds^{(n)}$ for each $n$.\label{fig:regfunchi}} \end{figure} \section{Numerical Experiments}\label{results} We present a selection of results using Algorithm~\ref{alg} to demonstrate its use for problems with differing characteristics.
First we look further at problem \texttt{gravity} with two different levels of conditioning, as determined by $d=0.25$ and $d=0.50$. We also show the results using \texttt{gravity} for a discontinuous source. Note that for \texttt{gravity} the spectrum decays very quickly. In contrast, problem \texttt{deriv2}, also from \cite{Regtools}, has a very slowly decaying spectrum. Finally, we illustrate simulations for high noise and \texttt{deriv2}. The presented results are illustrative of our experiments with other samples, noise levels and problems, and are given to verify the approach. \subsection{Problem \texttt{gravity}}\label{results_1} We consider a problem of size $N=3000$ for the discretization of \eqref{eq:gravK}-\eqref{eq:gravf}. Regularization parameters to use for $N=3000$ are obtained by downsampling the data, as described in Algorithm~\ref{alg}, to problems of size $n=50$, $100$, $200$, $500$, $1000$ and $1500$, representing sampling the data at intervals $\ell=60$, $30$, $15$, $6$, $3$ and $2$. For comparison the solutions without any downsampling, i.e. using $N=3000$ and $\ell=1$, are also provided. Noisy data are obtained by forming $g_\mathrm{obs}(s_i) = g(s_i)+\nu \max_j(|g(s_j)|) e(s_i)$, for $\nu=0.001$ and $\nu = 0.1$, representing low and high noise. For $d=0.25$ and $d=0.5$ respectively, $\max_j |g(s_j)| = 6.7542$ and $2.1895$. Errors $e(s_i)$ are drawn from a standard normal distribution. The resulting data samples are illustrated in Figure~\ref{fig:noisy}. \begin{figure} \caption{Illustrative noisy data with $.1\%$ and $10\%$ noise for problem \texttt{gravity}.\label{fig:noisy}} \end{figure} We note that the estimate for $p^*$, which is approximated by $\sub{p}$, may not be stable initially for small $n$, thus potentially leading to different estimates of the regularization parameter to use for $\subl{\lambda}$ when estimated using different values for $n$, i.e. different samplings.
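The data generation and downsampling just described can be sketched in a few lines (names are ours; `ell` denotes the sampling interval $\ell$):

```python
import numpy as np

def add_noise(g, nu, rng):
    # g_obs(s_i) = g(s_i) + nu * max_j |g(s_j)| * e(s_i), with e ~ N(0,1)
    return g + nu * np.max(np.abs(g)) * rng.standard_normal(g.shape)

def downsample(g, ell):
    # sample the data at interval ell, e.g. ell = 60 maps N = 3000 to n = 50
    return g[::ell]
```

For example, with $\nu=0.001$ the perturbation is $0.1\%$ of the maximal data value, matching the low-noise setting above.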
Although one may theoretically choose to determine $p^*$ for any given $\varepsilon$, here we present results corresponding to $\varepsilon=10^{-15}$, which, as can be seen from Figure~\ref{fig:pstar}, is effectively the point at which the singular values for $i>p^*$ are contaminated by numerical noise. In all cases the regularization parameter at resolution $n$ is calculated by each of the regularization parameter estimation methods, MDP, ADP, UPRE and GCV, using $\zeta^2=ds^{(n)}$. The estimate with the GCV is independent of the given $\zeta^2$. Given $\sub{\lambda}$, the fine solution using $3000$ points is calculated using \eqref{scaling} and the dominant $p^*=\sub{p}$ components of the SVD for the matrix $\subl{A}$. We first illustrate in Figure~\ref{fig:soln} the solutions for a single arbitrarily chosen noise vector. All solutions are calculated using $3000$ points, but for clarity in the plots the solutions are plotted using just $50$ points. Any solution which has an amplitude greater than $3$ is also omitted from the plots, in order that the plots are not cluttered by the highly oscillatory behavior of the severely unstable solutions. Solutions with less severe instability are also evident as moderate oscillations around the true solutions. Note that the actual amplitude of the exact solution is less than $1.5$. In each case the individual legends indicate which solutions are plotted. \begin{figure} \caption{Solutions of problem \texttt{gravity}.\label{fig:soln}} \end{figure} The most immediate observation is that no one single method is perfect for all $n$, but this is not at all surprising; methods for estimating regularization parameters are not foolproof and it would be deceptive to indicate otherwise. The success of any given method depends on the specific given right hand side data. An example of this is shown in the case with $.1\%$ noise and $d=0.50$, for which GCV generates acceptable solutions for all $n$ except $n=200$, which is not shown.
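The fine-scale solution from the dominant $p^*$ SVD components has the familiar filtered form $\sum_{i\le p^*} \frac{\sigma_i}{\sigma_i^2+\lambda^2}(u_i^Tb)\,v_i$. A minimal sketch, assuming this standard truncated Tikhonov form for \eqref{tsvdregsolnN} (names are ours):

```python
import numpy as np

def truncated_tikhonov(A, b, lam, p):
    """Regularized solution using only the p dominant SVD terms of A,
    with Tikhonov filter factors sigma_i/(sigma_i^2 + lam^2)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    coeff = s[:p] / (s[:p]**2 + lam**2) * (U[:, :p].T @ b)
    return Vt[:p].T @ coeff
```

Setting $\lambda=0$ recovers the plain TSVD solution with truncation at $p$.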
Further, the solutions obtained by the ADP for the cases with low noise are less stable for $N=3000$, which illustrates that the estimate for $\lambda^{(N)}$ obtained from $n<N$ may yield more stable solutions. Overall, these results confirm that GCV is less reliable, as already suggested from the discussion of the functionals in section~\ref{functionals}. On the other hand, in most cases, except for high noise and $d=0.5$, the regularization parameter estimated at the coarsest level, with $n=50$ giving $\lambda^{(50)}$ and then used to obtain the solution for size $N=3000$, is remarkably robust. \begin{table} \caption{Relative errors, mean(standard deviation) over $25$ samples for $d=0.25$ and $0.10\%$ noise. Solutions calculated for resolution with $3000$ points using the regularization parameter calculated using $n=50$, $100$, $200$, $500$, $1000$, $1500$ and $3000$ points, using $\zeta^2=ds^{(n)}$ in the estimation of $\sub{\lambda}$. The minimum average relative error by each method for $n<3000$ is in bold face. \label{Errord100noise1}} \begin{tabular}{|*{5}{c|}} \hline $n$& ADP & MDP &UPRE &GCV \\ \hline $50 $&$7.4813(37.523)$&$7.2614(37.566)$&$0.0752(0.207)$&$1.0691(5.442)$\\ $100 $&$0.0992(0.033)$&$\mathbf{0.0104(0.002)}$&$0.0156(0.006)$&$0.0369(0.091)$\\ $200 $&$0.0606(0.020)$&$0.0129(0.001)$&$0.0109(0.003)$&$0.2521(1.048)$\\ $500 $&$0.0354(0.010)$&$0.0169(0.001)$&$\mathbf{0.0097(0.002)}$&$0.0166(0.017)$\\ $1000 $&$0.0242(0.006)$&$0.0208(0.001)$&$0.0108(0.002)$&$\mathbf{0.0147(0.018)}$\\ $1500 $&$\mathbf{0.0196(0.005)}$&$0.0234(0.000)$&$0.0119(0.002)$&$0.0310(0.061)$\\ $3000 $&$0.0141(0.004)$&$0.0289(0.000)$&$0.0142(0.001)$&$0.0364(0.074)$\\ \hline \end{tabular}\end{table} \begin{table} \caption{Relative errors, mean(standard deviation) over $25$ samples for $d=0.25$ and $10.00\%$ noise.
Solutions calculated for resolution with $3000$ points using the regularization parameter calculated using $n=50$, $100$, $200$, $500$, $1000$, $1500$ and $3000$ points, using $\zeta^2=ds^{(n)}$. The minimum average relative error by each method for $n<3000$ is in bold face. \label{Errord100noise100}} \begin{tabular}{|*{5}{c|}} \hline $n$& ADP & MDP &UPRE &GCV \\ \hline $50 $&$8.5622(28.062)$&$7.0155(25.881)$&$4.7672(20.958)$&$0.1437(0.281)$\\ $100 $&$0.0981(0.038)$&$\mathbf{0.0512(0.010)}$&$0.1040(0.039)$&$0.1615(0.369)$\\ $200 $&$0.0608(0.019)$&$0.0642(0.007)$&$0.0714(0.026)$&$0.3365(1.004)$\\ $500 $&$\mathbf{0.0511(0.010)}$&$0.0978(0.004)$&$\mathbf{0.0522(0.014)}$&$0.3274(1.385)$\\ $1000 $&$0.0606(0.008)$&$0.1376(0.003)$&$0.0525(0.010)$&$\mathbf{0.1082(0.169)}$\\ $1500 $&$0.0705(0.006)$&$0.1689(0.003)$&$0.0579(0.008)$&$0.3233(0.838)$\\ $3000 $&$0.0949(0.004)$&$0.2429(0.003)$&$0.0742(0.006)$&$0.0939(0.073)$\\ \hline \end{tabular}\end{table} \begin{table} \caption{Relative errors, mean(standard deviation) over $25$ samples for $d=0.50$ and $0.10\%$ noise. Solutions calculated for resolution with $3000$ points using the regularization parameter calculated using $n=50$, $100$, $200$, $500$, $1000$, $1500$ and $3000$ points, using $\zeta^2=ds^{(n)}$. The minimum average relative error by each method for $n<3000$ is in bold face.
\label{Errord200noise1}} \begin{tabular}{|*{5}{c|}} \hline $n$& ADP & MDP &UPRE &GCV \\ \hline $50 $&$0.2758(0.211)$&$0.0376(0.071)$&$1.5614(6.356)$&$\mathbf{0.0790(0.221)}$\\ $100 $&$0.1046(0.051)$&$\mathbf{0.0147(0.004)}$&$0.0234(0.016)$&$0.1837(0.593)$\\ $200 $&$0.0628(0.030)$&$0.0199(0.002)$&$0.0145(0.006)$&$1.3135(3.823)$\\ $500 $&$0.0351(0.017)$&$0.0282(0.001)$&$\mathbf{0.0131(0.004)}$&$0.9100(3.662)$\\ $1000 $&$0.0237(0.011)$&$0.0352(0.001)$&$0.0154(0.003)$&$0.2765(0.844)$\\ $1500 $&$\mathbf{0.0194(0.009)}$&$0.0395(0.001)$&$0.0176(0.003)$&$1.0593(3.822)$\\ $3000 $&$0.0148(0.006)$&$0.0487(0.001)$&$0.0226(0.002)$&$1.1728(5.080)$\\ \hline \end{tabular}\end{table} \begin{table} \caption{Relative errors, mean(standard deviation) over $25$ samples for $d=0.50$ and $10.00\%$ noise. Solutions calculated for resolution with $3000$ points using the regularization parameter calculated using $n=50$, $100$, $200$, $500$, $1000$, $1500$ and $3000$ points, using $\zeta^2=ds^{(n)}$. The minimum average relative error by each method for $n<3000$ is in bold face.
\label{Errord200noise100}} \begin{tabular}{|*{5}{c|}} \hline $n$& ADP & MDP &UPRE &GCV \\ \hline $50 $&$4.4299(12.051)$&$2.7538(7.493)$&$3.4666(9.399)$&$0.8003(2.611)$\\ $100 $&$0.0974(0.051)$&$\mathbf{0.1212(0.017)}$&$0.1056(0.055)$&$0.4049(1.295)$\\ $200 $&$\mathbf{0.0843(0.032)}$&$0.1623(0.011)$&$\mathbf{0.0845(0.039)}$&$\mathbf{0.2221(0.500)}$\\ $500 $&$0.1077(0.020)$&$0.2142(0.006)$&$0.0965(0.024)$&$0.6397(1.657)$\\ $1000 $&$0.1397(0.014)$&$0.2555(0.005)$&$0.1222(0.017)$&$0.5330(1.379)$\\ $1500 $&$0.1591(0.011)$&$0.2809(0.004)$&$0.1399(0.014)$&$2.0134(6.776)$\\ $3000 $&$0.1932(0.007)$&$0.3295(0.003)$&$0.1723(0.011)$&$2.5886(7.040)$\\ \hline \end{tabular}\end{table} To further examine the performance of regularization parameter estimation based on the approximate singular expansion, the experiment illustrated in Figure~\ref{fig:noisy} was repeated for $25$ distinct representations of the noise vectors. The approach for calculating the regularization parameter and solution uses Algorithm~\ref{alg} for each noise vector. The relative error was calculated with respect to the known true solution, and both the mean and standard deviation of the errors over all $25$ realizations of the noise were obtained; these are reported in Tables~\ref{Errord100noise1}, \ref{Errord100noise100}, \ref{Errord200noise1}, and \ref{Errord200noise100}, for $d=0.25$ and noise $.1\%$ and $10\%$, and then $d=0.5$ and the two noise levels, respectively. The results validate the expectation from examination of the functionals. Overall the GCV is generally less robust. In all cases the estimates of $\subl{\lambda}$ can be more robust when obtained from subsampling.
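The error statistics in the tables are straightforward to reproduce for any solver; a sketch of the experiment loop (the callable `solve` and all names are ours):

```python
import numpy as np

def relative_error_stats(solve, f_true, g, nu, n_samples=25, seed=0):
    """Mean and standard deviation of ||f_hat - f_true|| / ||f_true||
    over repeated noise realizations, mirroring the 25-sample tables."""
    rng = np.random.default_rng(seed)
    errs = []
    for _ in range(n_samples):
        g_obs = g + nu * np.max(np.abs(g)) * rng.standard_normal(g.shape)
        f_hat = solve(g_obs)
        errs.append(np.linalg.norm(f_hat - f_true) / np.linalg.norm(f_true))
    return float(np.mean(errs)), float(np.std(errs))
```

Here `solve` would wrap the full coarse-to-fine procedure of Algorithm~\ref{alg} for a chosen parameter selection method.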
Examination of the obtained regularization parameters, not shown here, also confirms that the GCV results show greater variability, even though it should be noted that we use an expensive approach, sampling across $1000$ choices of $\lambda$ in the given range and then seeking the minimum around the minimal value found using the Matlab function \texttt{fminbnd}. \subsection{Piecewise constant solution} It is well-known that regularization parameter estimation is more challenging for non-smooth solutions, namely for those for which the exact spectral coefficients do not decay quickly to $0$. The contamination of spectral coefficients associated with higher frequencies in the basis, i.e. the vectors $\bo v_i$ for larger $i$, limits the ability to accurately resolve discontinuities in the solutions, and alternative regularizing norms are required, e.g. total variation or iterative regularization \cite{SB,vogel:02,WoRo:07,Zhd}. At the heart of such techniques, however, are standard Tikhonov regularizers. Thus the ability to obtain reasonable estimates of the solution, even in the presence of sharp gradients and/or discontinuities, is still relevant as a component of a more general regularization approach. Indeed, effective regularizers that can be obtained efficiently are even more significant within the context of an iteratively refined Tikhonov solution for obtaining an approximate $L_1$ regularizer. We therefore examine Algorithm~\ref{alg} for the piecewise constant function indicated in Figure~\ref{data1}, replacing source \eqref{eq:gravf} for the \texttt{gravity} problem with $d=0.25$. Two noise levels and regularization parameters $\sub{\lambda}$ found using $n=50$, $100$, $200$, $500$ and $1000$ points, with $\varepsilon=10^{-15}$, are used to obtain the solutions with $1000$ points. The presented solutions in Figure~\ref{discontdata} are consistent with expectations from the results presented in section~\ref{results_1}.
While we would not expect to resolve the discontinuities with a smoothing regularizer, the results with $10\%$ noise are sufficiently encouraging to support future study of the downsampling techniques in the context of iterative edge enhancing regularizers. \begin{figure} \caption{The solutions for the noisy data illustrated in Figure~\ref{data1}.\label{data1}\label{discontdata}} \end{figure} \subsection{Slowly Decaying Spectrum} Problem \texttt{gravity} has a quickly decaying spectrum, as shown in Figure~\ref{fig:pstar}. We now consider a problem with a very slowly decaying spectrum (see Figure~\ref{fig:pstarderiv}), example \texttt{deriv2} from \cite{Regtools}, for which the kernel is also square integrable with $\|H(s,t)\|^2=1/90$ (see Figure~\ref{fig:deltaderiv}); here $H(s,t)$ is defined on $[0,1]$ for both variables, with $H(s,t)=s(t-1)$ for $s<t$ and $H(s,t)=t(s-1)$ otherwise. For this example we look at problem sizes $750$, $1000$, $1200$, $1500$, $2000$, $3000$ and $6000$. It is evident from Figure~\ref{fig:pstarderiv} that we cannot completely capture the spectrum for $N=6000$ by using smaller $n$, although as $n$ increases the spectral values closely follow the spectral components for $N=6000$ for almost all terms obtained. Picking a numerical rank is now relevant, and will exclude terms from the $N=6000$ expansion. We illustrate solutions obtained for $10\%$ and $25\%$ noise, as shown in Figures~\ref{fig:pstardata}-\ref{fig:pstardata1}, for exact source $f(t)=t$ for $t<0.5$ and $f(t)=1-t$ otherwise. The solutions obtained using UPRE and GCV for different numerical ranks, $\varepsilon$ decreasing from $10^{-5}$ to $10^{-8}$, corresponding to $\subl{p}$ approximately $100$, $320$, $1020$ and $3750$, are given in Figures~\ref{deriv2soln1}-\ref{deriv2soln2} for the two noise levels.
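The \texttt{deriv2} kernel and its norm are easy to check directly; the quadrature sketch below reproduces $\|H\|^2=1/90$ (function names are ours):

```python
import numpy as np

def deriv2_kernel(s, t):
    # Green's function kernel: H(s,t) = s(t-1) for s < t, t(s-1) otherwise
    return np.where(s < t, s * (t - 1), t * (s - 1))

def deriv2_norm_sq(n=400):
    # midpoint-rule approximation of the squared L2 norm on [0,1]^2
    x = (np.arange(n) + 0.5) / n
    S, T = np.meshgrid(x, x, indexing="ij")
    return float(np.sum(deriv2_kernel(S, T)**2) / n**2)
```

The slow singular value decay of this kernel, in contrast to \texttt{gravity}, is what makes the choice of numerical rank consequential here.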
Although the spectrum decays slowly, the Picard plots, Figures~\ref{picardderiv}-\ref{picardderiv1}, show that noise enters the solution quickly for small indices, thus demonstrating that it is sufficient to use low rank when using a single parameter estimation technique. Should one use a multi-parameter regularization, one may be able to account for different windows in the spectrum, as presented in recent literature \cite{ChEaOl:11,LSY,meadmulti}. \begin{figure} \caption{In \ref{fig:deltaderiv}, the norm convergence for problem \texttt{deriv2}; in \ref{fig:pstarderiv}, the singular values; in \ref{fig:pstardata}-\ref{fig:pstardata1}, the noisy data; in \ref{picardderiv}-\ref{picardderiv1}, the Picard plots.\label{fig:deltaderiv}\label{fig:pstarderiv}\label{fig:pstardata}\label{fig:pstardata1}\label{picardderiv}\label{picardderiv1}} \end{figure} \begin{figure} \caption{Solutions for problem \texttt{deriv2}.\label{deriv2soln1}} \end{figure} \begin{figure} \caption{Solutions for problem \texttt{deriv2}.\label{deriv2soln2}} \end{figure} \section{Conclusions and Future Work}\label{conclusions} We have verified that the theoretical relationship between the continuous SVE for a square integrable kernel and the SVD of the discretization of an integral equation using the Galerkin method can be exploited in the context of efficient regularization parameter selection in solving an ill-posed inverse problem. Analysis of the regularization techniques demonstrates convergence of the regularization parameter with increasing resolution for the discretization of the integral equation using the Galerkin approach. By finding the regularization parameter for a coarse representation of the system, the cost of finding the regularization parameter is negligible as compared to the solution of a fine scale problem. Moreover, exploiting the numerical rank, which is approximately preserved across resolutions for a sufficiently sampled high resolution system, mitigates the need to find the singular value decomposition for the high resolution system.
Effectively, the solution of the fine scale problem is found by projection to a coarser scale space for the solution, on which the dominant singular properties of the high resolution system are preserved. This provides a valid alternative to applying a Krylov iterative technique for the solution of the system of equations, which also uses projection to a smaller space that effectively preserves the singular system properties from the fine scale, first presented in \cite{PaigeSau2,PaigeSau3} and by now extensively studied in, amongst others, \cite{ChNaOl:08,GaNoRu,HaJe,HPS,JeHa,ReSgYe,RHM:10}. Numerical results verify both the theoretical developments and the application of the technique for the solution of ill-posed integral equations, for kernels with different conditioning, and for solutions which are smooth or piecewise constant. Although we acknowledge that Tikhonov regularization is not the method of choice for the determination of solutions which are non-smooth, we note that most techniques that pose the regularization with a more relevant norm, such as the $L_1$-norm \cite{SB,vogel:02,WoRo:07,Zhd}, still embed within the solution technique the need to solve using an $L_2$ norm, albeit with an operator $L$, e.g. \eqref{genregsoln}. By judicious choice of boundary conditions, even when $L$ approximates a derivative, it is possible to approximate the derivative with an invertible operator $L$ \cite{DoRe,Lothar:2010}. Thus the application of the techniques in this paper to edge enhancing regularization is a topic for future research. While Vogel \cite{vogel:02} previously provided an analysis of the convergence of regularization parameter selection techniques, including for the GCV, UPRE and MDP, he did not exploit the numerical rank to reduce the overall computational cost in the context of the Galerkin approximation for the integral equation. Moreover, the extension of these techniques to higher dimensions was not addressed.
For separable and square integrable kernels, the tensor product SVD may be used to further reduce the computational cost, with different rank estimations in each dimension of the kernel. These ideas are also relevant in the context of spatially invariant kernels which admit convolutional representations of the integral equation and, dependent on the boundary conditions, can be solved using Fourier or cosine transforms \cite{vogel:02,hansenbook,HNO}. Again these are topics of future research, with application to practically relevant large scale problems, for which an algorithmic approach to assessing sufficient convergence of the solution with increasing resolution should also be developed. \ack Rosemary Renaut acknowledges the support of AFOSR grant 025717: ``Development and Analysis of Non-Classical Numerical Approximation Methods", and NSF grant DMS 1216559: ``Novel Numerical Approximation Techniques for Non-Standard Sampling Regimes". \end{document}
\begin{document} \title{\bf \Large Copositive Tensor Detection and Its Applications in Physics and Hypergraphs} \author{Haibin Chen\thanks{School of Management Science, Qufu Normal University, Rizhao, Shandong, P.R. China. Email: [email protected]. This author's work was supported by the National Natural Science Foundation of China (Grant No. 11601261), and Natural Science Foundation of Shandong Province (Grant No. ZR2016AQ12). } \quad Zheng-Hai Huang \thanks{Department of Mathematics, School of Science, Tianjin University, Tianjin 300072, P.R. China. Email: [email protected]. This author's work was supported by the National Natural Science Foundation of China (Grant No. 11431002).} \quad Liqun Qi \thanks{Department of Applied Mathematics, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong. Email: [email protected]. This author's work was supported by the Hong Kong Research Grant Council (Grant No. PolyU 501212, 501913, 15302114 and 15300715).}} \date{} \maketitle \begin{abstract} Copositivity of tensors plays an important role in vacuum stability of a general scalar potential, polynomial optimization, the tensor complementarity problem and the tensor generalized eigenvalue complementarity problem. In this paper, we propose a new algorithm for testing copositivity of high order tensors, and then present applications of the algorithm in physics and hypergraphs. For this purpose, we first give several new conditions for copositivity of tensors based on the representative matrix of a simplex. Then a new algorithm is proposed with the help of a proper convex subcone of the copositive tensor cone, which is defined via the copositivity of Z-tensors. Furthermore, by considering a sum-of-squares program problem, we define two new subsets of the copositive tensor cone and discuss their convexity.
As an application of the proposed algorithm, we prove that the coclique number of a uniform hypergraph is equivalent to an optimization problem over the completely positive tensor cone, which implies that the proposed algorithm can be applied to compute an upper bound of the coclique number of a uniform hypergraph. We then study another application of the proposed algorithm in particle physics, testing the copositivity of some potential fields. Finally, various numerical examples are given to show the performance of the algorithm. \noindent{\bf Keywords:} Symmetric tensor; Strictly copositive; Positive semi-definiteness; Simplicial partition; Particle physics; Hypergraphs \noindent{\bf AMS Subject Classification(2010):} 65H17, 15A18, 90C30. \end{abstract} \section{Introduction} Copositivity of high order tensors has received a growing amount of interest in vacuum stability of a general scalar potential \cite{KK16}, the tensor complementarity problem \cite{CQW, SQ15, SQ,BHW16,WHB16}, the tensor generalized eigenvalue complementarity problem \cite{Ling16} and polynomial optimization problems \cite{Pena14,song2016}. A symmetric tensor is called copositive if it generates a multivariate form taking nonnegative values over the nonnegative orthant \cite{qlq2013}. In the literature, copositive tensors constitute a large class of tensors that contains nonnegative tensors and several kinds of structured tensors in the even order symmetric case, such as $M$-tensors, diagonally dominant tensors and so on \cite{CCLQ, chen14,Ding15,Kannan15,LCQL,LCL,LWZ,Qi14,qi14,Zhang12}. Recently, Kannike \cite{KK16} studied the vacuum stability of a general scalar potential of a few fields. With the help of copositive tensors and their relationship to orbit space variables, Kannike showed how to find positivity conditions for more complicated potentials.
Then, he discussed the vacuum stability conditions of the general potential of two real scalars, without and with the Higgs boson included in the potential \cite{KK16}. Furthermore, explicit vacuum stability conditions for the two Higgs doublet model were given, and a short overview of positivity conditions for tensors of quadratic couplings was established via tensor eigenvalues. In \cite{Pena14}, Pena et al. provided a general characterization of polynomial optimization problems that can be formulated as a conic program over the cone of completely positive tensors. It is known that the cone of completely positive tensors has a natural associated dual cone of copositive tensors \cite{QXX14}. In light of this relationship, any completely positive program stated in \cite{Pena14} has a natural dual conic program over the cone of copositive tensors. As a consequence of this characterization, it follows that recent related results for quadratic problems can be further strengthened and generalized to higher order polynomial optimization problems. For completely positive tensors and their applications, see also \cite{K2015, LQ, QXX14}. In \cite{song2016}, Song and Qi introduced the concepts of Pareto $H$-eigenvalue (Pareto $Z$-eigenvalue) for symmetric tensors and proved that the minimum Pareto $H$-eigenvalue (Pareto $Z$-eigenvalue) is equal to the optimal value of a polynomial optimization problem. It is proved that a symmetric tensor $\mathcal{A}$ is strictly copositive if and only if every Pareto $H$-eigenvalue ($Z$-eigenvalue) of $\mathcal{A}$ is positive, and $\mathcal{A}$ is copositive if and only if every Pareto $H$-eigenvalue ($Z$-eigenvalue) of $\mathcal{A}$ is nonnegative \cite{song2016}. Unfortunately, it is NP-hard to compute the minimum Pareto $H$-eigenvalue or Pareto $Z$-eigenvalue of a general symmetric tensor.
On the other hand, Che, Qi and Wei \cite{CQW} showed that the tensor complementarity problem with a strictly copositive tensor has a nonempty and compact solution set. Song and Qi \cite{SQ15} proved that a real symmetric tensor is (strictly) semi-positive if and only if it is (strictly) copositive. Song and Qi \cite{SQ15,SQ} obtained several results for the tensor complementarity problem with a (strictly) semi-positive tensor. Huang and Qi \cite{HQ2016} formulated an $n$-person noncooperative game as a tensor complementarity problem with the involved tensor being nonnegative. Thus, copositive tensors play an important role in the tensor complementarity problem. Besides, Ling et al. \cite{Ling16} gave an affirmative result that the tensor generalized eigenvalue complementarity problem is solvable and has at least one solution under the assumption that the related tensor is strictly copositive. Thus, a challenging problem is how to check the copositivity of a given symmetric tensor efficiently. Several sufficient conditions or necessary and sufficient conditions for copositive tensors have been presented in \cite{qlq2013,Song15}. However, it is hard to verify numerically whether a tensor is copositive or not from these conditions. Actually, the problem of judging whether a symmetric tensor is copositive or not is NP-complete, even in the matrix case \cite{Dickinson14,Murty87}. Very recently, Chen et al. \cite{chen16} gave some theoretical studies of various conditions for (strictly) copositive tensors; based on some of these theoretical findings, several new criteria for copositive tensors were proposed, based on the representation of the multivariate form in barycentric coordinates with respect to the standard simplex and simplicial partitions. It is verified that, as the partition gets finer and finer, the concerned conditions eventually capture all strictly copositive tensors.
It should be pointed out that the algorithm investigated in \cite{chen16} can be viewed as an extension of some branch-and-bound type algorithms for testing copositivity of symmetric matrices \cite{Bundfuss08,Sponsel12,Xu11}. In this paper, with the help of the sum-of-squares polynomial technique, an alternative numerical algorithm for testing copositivity of tensors is proposed, which is established via a class of structured tensors and the choice of a suitable convex subcone of the copositive tensor cone. It is proved that the coclique number of a uniform hypergraph can be computed via an optimization problem over the completely positive tensor cone, which is the dual cone of the copositive tensor cone. Using this, the proposed algorithm can be applied to compute an upper bound of the coclique number of a uniform hypergraph. Furthermore, the proposed algorithm is applied to test the copositivity of some potential fields in particle physics. The rest of this paper is organized as follows. In Section 2, we recall some notions and basic facts about tensors and the corresponding homogeneous polynomials. In Section 3, based on the corresponding matrix of a simplex, several new criteria for (strictly) copositive tensors based on simplicial subdivision are presented. In Section 4, we propose the main numerical detection algorithm for copositive tensors based on a subcone of the copositive tensor cone. The relationship between the iteration number of the algorithm and the number of all sub-simplices is established. Furthermore, different candidates for the subcone are discussed. An upper bound for the coclique number of a uniform hypergraph is given in Section 5. In Section 6, some numerical results are reported to verify the performance of the algorithms. In Section 7, an example from particle physics on vacuum stability is presented, and the copositivity of its coupling tensors is tested by the proposed algorithm. Some final remarks are given in Section 8. 
Before moving on, we briefly mention the notation that will be used in the sequel. Let $\mathbb{R}^n$ be the $n$-dimensional real Euclidean space, and let the set of all nonnegative vectors in $\mathbb{R}^n$ be denoted by $\mathbb{R}^n_+$. The set of all positive integers is denoted by $\mathbb{N}$. Let $m, n\in \mathbb{N}$. Denote $[n]=\{1,2,\cdots,n\}$. Vectors are denoted by bold lowercase letters, e.g. ${\bf x},~ {\bf y},\cdots$, matrices are denoted by capital letters, e.g. $A, B, \cdots$, and tensors are written as calligraphic capitals such as $\mathcal{A}, \mathcal{T}, \cdots.$ The $i$-th unit coordinate vector in $\mathbb{R}^n$ is denoted by ${\bf e_i}$. The all-one tensor and the all-one vector are denoted by $\mathcal{E}$ and ${\bf e}$, respectively. If the symbol $|\cdot|$ is applied to a tensor $\mathcal{A}=(a_{i_1 \cdots i_m})_{1\leq i_j\leq n}$, $j\in [m]$, it denotes the tensor $|\mathcal{A}|=(|a_{i_1 \cdots i_m}|)_{1\leq i_j\leq n}$, $j\in [m]$. If $\mathcal{B}=(b_{i_1 \cdots i_m})_{1\leq i_j\leq n}$, $j\in [m]$, is another tensor, then $\mathcal{A}\leq \mathcal{B}$ means $a_{i_1 \cdots i_m} \leq b_{i_1 \cdots i_m}$ for all $i_1,\cdots,i_m \in [n]$. \setcounter{equation}{0} \section{Preliminaries} A real $m$-th order $n$-dimensional tensor $\mathcal{A}=(a_{i_1i_2\cdots i_m})$ is a multi-array of real entries $a_{i_1i_2\cdots i_m}$, where $i_j \in [n]$ for $j\in [m]$. In this paper, we always assume that $m\geq 3$ and $n\geq 2$. A tensor is said to be nonnegative if all its entries are nonnegative. If the entries $a_{i_1i_2\cdots i_m}$ are invariant under any permutation of their indices, then the tensor $\mathcal{A}$ is called a symmetric tensor. In this paper, we always consider real symmetric tensors. The identity tensor $\mathcal{I}$ with order $m$ and dimension $n$ is given by $\mathcal{I}_{i_1\cdots i_m}=1$ if $i_1=\cdots=i_m$ and $\mathcal{I}_{i_1\cdots i_m}=0$ otherwise. 
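To fix these conventions, the following small sketch (ours, not part of the paper) spells out the identity tensor and the symmetry requirement; the dict-keyed storage of a tensor is an assumption made only for illustration:

```python
# Illustrative sketch (not from the paper): an m-th order n-dimensional
# tensor stored as a dict keyed by index tuples, the identity tensor I,
# and a brute-force symmetry check.
from itertools import product, permutations

def identity_tensor(m, n):
    """I_{i1...im} = 1 if i1 = ... = im, and 0 otherwise."""
    return {idx: (1.0 if len(set(idx)) == 1 else 0.0)
            for idx in product(range(n), repeat=m)}

def is_symmetric(T, m, n):
    """Entries must be invariant under any permutation of the indices."""
    return all(T.get(idx, 0.0) == T.get(tuple(p), 0.0)
               for idx in product(range(n), repeat=m)
               for p in permutations(idx))

I = identity_tensor(3, 2)
assert is_symmetric(I, 3, 2)
assert I[(0, 0, 0)] == 1.0 and I[(0, 1, 1)] == 0.0
```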
The all-one tensor $\mathcal{E}$ (all-one vector ${\bf e}$) is a tensor (vector) with all entries equal to one. We denote $$ \mathbb{S}_{m,n}:=\{\mathcal{A}: \mathcal{A} \mbox{ is an } m\mbox{-th~order } n\mbox{-dimensional} \mbox{ symmetric tensor}\}. $$ Clearly, $\mathbb{S}_{m,n}$ is a vector space under the addition and scalar multiplication defined as below: for any $t \in \mathbb{R}$, $\mathcal{A}=(a_{i_1 \cdots i_m})_{1 \le i_1,\cdots,i_m \le n}$ and $\mathcal{B}=(b_{i_1 \cdots i_m})_{1 \le i_1,\cdots,i_m \le n},$ \[ \mathcal{A}+\mathcal{B}=(a_{i_1 \cdots i_m}+b_{i_1 \cdots i_m})_{1 \le i_1,\cdots,i_m \le n}\quad \mbox{\rm and }\quad t \mathcal{A}=(ta_{i_1 \cdots i_m})_{1 \le i_1,\cdots,i_m \le n}. \] In addition, there are some more tensor cones that will be used in the following analysis, such as the copositive tensor cone ($\mathbb{COP}_{m,n}$), the completely positive tensor cone ($\mathbb{CP}_{m,n}$), the nonnegative tensor cone ($\mathbb{N}^+_{m,n}$) and the positive semi-definite tensor cone ($\mathbb{PSD}$). For any $\mathcal{A}, \mathcal{B} \in \mathbb{S}_{m,n}$, we define the inner product by $\langle \mathcal{A},\mathcal{B}\rangle:=\sum_{i_1,\cdots,i_m=1}^{n}a_{i_1 \cdots i_m}b_{i_1 \cdots i_m}$, and the corresponding norm by $$ \|\mathcal{A}\|=\left(\langle \mathcal{A},\mathcal{A}\rangle\right)^{1/2}=\left(\sum_{i_1,\cdots,i_m=1}^{n}(a_{i_1 \cdots i_m})^2\right)^{1/2}. $$ For any ${\bf x}\in \mathbb{R}^n$, we use $x_i$ to denote its $i$-th component, and $\|{\bf x}\|_m$ to denote the $m$-norm of ${\bf x}$. For $m$ vectors ${\bf x},{\bf y}, \cdots, {\bf z}\in \mathbb{R}^n$, we use ${\bf x}\circ {\bf y}\circ \cdots \circ {\bf z}$ to denote the $m$-th order $n$-dimensional rank one tensor with \[ ({\bf x}\circ {\bf y}\circ \cdots \circ {\bf z})_{i_1 i_2\cdots i_m}=x_{i_1}y_{i_2}\cdots z_{i_m}, \ \forall \, i_1,\cdots,i_m \in [n]. 
\] The inner product of a symmetric tensor and a rank one tensor is given by $$ \langle \mathcal{A}, {\bf x}\circ {\bf y}\circ \cdots \circ {\bf z}\rangle:=\sum_{i_1,\cdots,i_m=1}^{n}a_{i_1 \cdots i_m}x_{i_1}y_{i_2}\cdots z_{i_m}. $$ For $m\in \mathbb{N}$ and $k\in [m]$, we denote $$ \mathcal{A}{\bf x}^k{\bf y}^{m-k}=\langle\mathcal{A}, \underbrace{{\bf x}\circ \cdots \circ {\bf x}}_k\circ\underbrace{{\bf y}\circ \cdots \circ {\bf y}}_{m-k}\rangle\quad \mbox{\rm and}\quad \mathcal{A}{\bf x}^m=\langle\mathcal{A}, \underbrace{{\bf x}\circ \cdots \circ {\bf x}}_m\rangle, $$ so that \begin{equation}\label{e21} \mathcal{A}{\bf x}^k{\bf y}^{m-k}=\sum_{i_1,\cdots,i_m=1}^{n}a_{i_1 \cdots i_m}x_{i_1}\cdots x_{i_k}y_{i_{k+1}}\cdots y_{i_m} \quad \mbox{\rm and}\quad \mathcal{A}{\bf x}^m=\sum_{i_1,\cdots,i_m=1}^{n}a_{i_1 \cdots i_m}x_{i_1}\cdots x_{i_m}. \end{equation} For any $\mathcal{A}=(a_{i_1i_2\cdots i_m})\in \mathbb{S}_{m,n}$ and ${\bf x}\in \mathbb{R}^n$, we have $\mathcal{A}{\bf x}^{m-1}\in \mathbb{R}^n$ with $$ (\mathcal{A}{\bf x}^{m-1})_i=\sum_{i_2,i_3,\cdots,i_m\in [n]}a_{ii_2\cdots i_m}x_{i_2}\cdots x_{i_m},~~\forall \, i\in [n]. $$ It is known that an $m$-th order $n$-dimensional symmetric tensor uniquely defines an $m$-th degree homogeneous polynomial $f_{\mathcal{A}}({\bf x})$ on $\mathbb{R}^n$: for all ${\bf x}=(x_1,\cdots,x_n)^T \in \mathbb{R}^n$, $$ f_{\mathcal{A}}({\bf x})= \mathcal{A}{\bf x}^m=\sum_{i_1,i_2,\cdots, i_m\in [n]}a_{i_1i_2\cdots i_m}x_{i_1}x_{i_2}\cdots x_{i_m}; $$ and conversely, any $m$-th degree homogeneous polynomial function $f({\bf x})$ on $\mathbb{R}^n$ also corresponds uniquely to a symmetric tensor. Furthermore, an even order symmetric tensor $\mathcal{A}$ is called positive semi-definite (positive definite) if $f_{\mathcal{A}}({\bf x}) \geq 0$ ($f_{\mathcal{A}}({\bf x})> 0$) for all ${\bf x}\in \mathbb{R}^n$ (${\bf x}\in \mathbb{R}^n \backslash \{{\bf 0}\}$). 
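As a quick illustration (ours, not the paper's), the homogeneous polynomial $f_{\mathcal{A}}({\bf x})=\mathcal{A}{\bf x}^m$ can be evaluated by brute force over all index tuples; again the dict-based tensor representation is an assumption for illustration only:

```python
# Sketch: evaluating the homogeneous polynomial f_A(x) = A x^m, with the
# tensor stored as a dict keyed by index tuples (an illustrative
# representation, not one fixed by the paper).
from itertools import product

def txm(A, x, m):
    """A x^m = sum over all (i1,...,im) of a_{i1...im} * x_{i1}...x_{im}."""
    n = len(x)
    total = 0.0
    for idx in product(range(n), repeat=m):
        a = A.get(idx, 0.0)
        if a:
            for i in idx:
                a *= x[i]
            total += a
    return total

# The order-3, dimension-2 all-one tensor E satisfies E x^3 = (x_1 + x_2)^3.
E = {idx: 1.0 for idx in product(range(2), repeat=3)}
assert abs(txm(E, [1.0, 2.0], 3) - 27.0) < 1e-12
```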
To end this section, we introduce the notion of tensor product, which will be used in the following analysis. \begin{definition}\label{def21}{\bf \cite{Shao13}} Let $\mathcal{A}$ ($\mathcal{B}$) be an order $m\geq2$ (an order $k\geq1$) dimension $n$ tensor. The product $\mathcal{A}\mathcal{B}$ is the following tensor $\mathcal{C}$ of order $(m-1)(k-1)+1$ with entries: $$c_{i\alpha_1\alpha_2\cdots\alpha_{m-1}}=\sum_{i_2,\cdots,i_m\in [n]}a_{ii_2\cdots i_m}b_{i_2\alpha_1}\cdots b_{i_m\alpha_{m-1}},$$ where $i\in [n], \alpha_1,\alpha_2,\cdots,\alpha_{m-1} \in [n]^{k-1}$. \end{definition} Here, when the tensor $\mathcal{B}$ reduces to a first order tensor, i.e., a vector ${\bf x}$ of $\mathbb{R}^n$, the product $\mathcal{A}{\bf x}$ coincides with the vector $\mathcal{A}{\bf x}^{m-1}$ defined above. \setcounter{equation}{0} \section{Several conditions of copositive tensors} In this section, we will give several new sufficient conditions or necessary conditions with the help of a proper subcone of the cone of copositive tensors. As a simplex $S$ is determined by its vertices, it can also be represented by a matrix $V_S$ whose columns are these vertices. $V_S$ is nonsingular and unique up to a permutation of its columns. In the following analysis, we always assume that $\mathbb{M}$ is a subcone of the copositive tensor cone $\mathbb{COP}_{m,n}$. Before we move on, we first recall the definition of copositive tensors and some notions about simplices. \begin{definition}\label{def31} {\bf \cite{qlq2013}} Let $\mathcal{A}\in \mathbb{S}_{m,n}$ be given. If $\mathcal{A}{\bf x}^m\geq 0\;(\mathcal{A}{\bf x}^m>0)$ for any ${\bf x}\in \mathbb{R}^n_+\;({\bf x}\in \mathbb{R}^n_+\backslash \{{\bf0}\})$, then $\mathcal{A}$ is called a copositive (strictly copositive) tensor. \end{definition} The standard simplex with vertices ${\bf e}_1, {\bf e}_2,\cdots,{\bf e}_n$ is denoted by $S_0=\{{\bf x}\in \mathbb{R}^n_+~|~\|{\bf x}\|_1=1 \}$. Let $S,S_1,S_2,\cdots,S_r$ be finite simplices in $\mathbb{R}^n$. 
The set $\tilde{S}=\{S_1,S_2,\cdots,S_r\}$ is called a simplicial partition of $S$ if it satisfies that $$ S=\bigcup_{i=1}^rS_i\quad \mbox{\rm and}\quad \mbox{\rm int} S_i\bigcap \mbox{\rm int} S_j=\emptyset \;\, \mbox{\rm for any}\;\, i,j\in [r]\;\, \mbox{\rm with}\; i\neq j, $$ where $\mbox{\rm int}S_i$ denotes the interior of $S_i$ for any $i\in [r]$. Let $d(\widetilde{S})$ denote the maximum diameter of a simplex in $\widetilde{S}$, which is given by $$ d(\widetilde{S})=\max_{k\in [r]}\max_{i,j\in [n]}\|{\bf u}^k_i-{\bf u}^k_j\|_2, $$ where ${\bf u}^k_1,{\bf u}^k_2,\cdots,{\bf u}^k_n$ are the vertices of $S_k$. \begin{theorem}\label{them31} Suppose $\mathcal{A}\in \mathbb{S}_{m,n}$. Let $S_1=conv\{ {\bf u}_1,{\bf u}_2,\cdots,{\bf u}_n\}$ be a simplex, where ${\bf u}_i\in \mathbb{R}^n, i\in [n]$. Let $V=({\bf u}_1,{\bf u}_2,\cdots,{\bf u}_n)$ be the square matrix corresponding to $S_1$. If $V^T\mathcal{A}V\in \mathbb{M}$, then $\mathcal{A}{\bf x}^m\geq 0$ for all ${\bf x}\in S_1$.\end{theorem}\vskip 3pt \noindent\it Proof. \hspace{1mm}\rm It is apparent that $V^T\mathcal{A}V$ is an $m$-th order $n$-dimensional tensor with entries $$ \begin{aligned} (V^T\mathcal{A}V)_{i_1i_2\cdots i_m}=&\sum_{j_1,j_2,\cdots,j_m\in [n]}(V^T)_{i_1j_1}a_{j_1j_2\cdots j_m}V_{j_2i_2} \cdots V_{j_mi_m} \\ =&\sum_{j_1,j_2,\cdots,j_m\in [n]}V_{j_1i_1}a_{j_1j_2\cdots j_m}V_{j_2i_2} \cdots V_{j_mi_m}\\ =&\sum_{j_1,j_2,\cdots,j_m\in [n]}a_{j_1j_2\cdots j_m} ({\bf u}_{i_1})_{j_1}({\bf u}_{i_2})_{j_2}\cdots ({\bf u}_{i_m})_{j_m}\\ =&\langle\mathcal{A}, {\bf u}_{i_1} \circ{\bf u}_{i_2}\circ \cdots \circ {\bf u}_{i_m} \rangle, \end{aligned} $$ for all $i_1, i_2, \cdots, i_m\in [n]$. Any ${\bf x}\in S_1$ can be written as $$ {\bf x}=x_1 {\bf u}_1+x_2 {\bf u}_2+\cdots+x_n {\bf u}_n,~~\sum_{i=1}^n x_i=1,~~x_i\geq 0,~\forall\, i\in [n]. 
$$ Thus, $$ \begin{aligned} \mathcal{A}{\bf x}^m=&\langle\mathcal{A}, (x_1 {\bf u}_1+x_2 {\bf u}_2+\cdots+x_n {\bf u}_n)^m\rangle \\ =&\sum_{i_1,i_2,\cdots,i_m\in [n]}x_{i_1}x_{i_2}\cdots x_{i_m}\langle\mathcal{A}, {\bf u}_{i_1} \circ{\bf u}_{i_2}\circ \cdots \circ {\bf u}_{i_m} \rangle \\ =&\sum_{i_1,i_2,\cdots,i_m\in [n]}(V^T\mathcal{A}V)_{i_1i_2\cdots i_m}x_{i_1}x_{i_2}\cdots x_{i_m} \\ =&(V^T\mathcal{A}V)\bar{{\bf x}}^m \\ \geq & 0, \end{aligned} $$ since $\bar{{\bf x}}^T=(x_1,x_2,\cdots,x_n)\in \mathbb{R}^n_+$ and $V^T\mathcal{A}V\in \mathbb{M}$ is copositive, and hence, the desired result follows. {$\Box$}\vskip 5pt \begin{corollary}\label{corol31} Let $\mathcal{A}\in \mathbb{S}_{m,n}$ be given. Suppose $\widetilde{S}=\{S_1,S_2,\cdots,S_r\}$ is a simplicial partition of the simplex $S_0=\{{\bf x}\in \mathbb{R}^n_+~|~\|{\bf x}\|_1=1 \}$; and the vertices of the simplex $S_k$ are denoted by ${\bf u}^k_1,{\bf u}^k_2,\cdots,{\bf u}^k_n$ for any $k\in [r]$. Let $V_{S_k}=({\bf u}^k_1, {\bf u}^k_2, \cdots, {\bf u}^k_n)$ be the matrix corresponding to the simplex $S_k$ for any $k\in [r]$. Then $\mathcal{A}$ is copositive if $V_{S_k}^T\mathcal{A}V_{S_k}\in \mathbb{M}$ for all $k\in [r]$. \end{corollary} In fact, Corollary \ref{corol31} is a generalization of the sufficient condition proposed in \cite{chen16}. In order to give a necessary condition for strictly copositive tensors, we cite a useful result below. \begin{lemma}\label{lema31}{\bf \cite{chen16}} Let $\mathcal{A}\in \mathbb{S}_{m,n}$ be a strictly copositive tensor. 
Then, there exists $\varepsilon>0$ such that for all finite simplicial partitions $\widetilde{S}=\{S_1,S_2,\cdots,S_r\}$ of $S_0$ with $d(\widetilde{S})<\varepsilon$, it follows that $$ \langle \mathcal{A},~{\bf u}^k_{i_1}\circ {\bf u}^k_{i_2}\circ \cdots\circ {\bf u}^k_{i_m} \rangle > 0 $$ for all $k\in [r], i_j\in [n],j\in [m]$, where ${\bf u}^k_1, {\bf u}^k_2,\cdots, {\bf u}^k_n$ are vertices of the simplex $S_k$. \end{lemma} \begin{theorem}\label{them32}Let $\mathcal{A}\in \mathbb{S}_{m,n}$ be a strictly copositive tensor. Suppose $\mathbb{M}\supseteq \mathbb{N}^+_{m,n}$. Then, there exists $\varepsilon>0$ such that for all finite simplicial partitions $\widetilde{S}=\{S_1,S_2,\cdots,S_r\}$ of $S_0$ with $d(\widetilde{S})<\varepsilon$, it follows that $V_{S_k}^T\mathcal{A}V_{S_k}\in \mathbb{M}$ for all $k\in [r]$, where $V_{S_k}=({\bf u}^k_1, {\bf u}^k_2,\cdots, {\bf u}^k_n)\in \mathbb{R}^{n\times n}$ and ${\bf u}^k_1, {\bf u}^k_2,\cdots, {\bf u}^k_n$ are vertices of the simplex $S_k$. \end{theorem}\vskip 3pt \noindent\it Proof. \hspace{1mm}\rm As in the proof of Theorem \ref{them31}, together with Lemma \ref{lema31}, we obtain that $$ \begin{aligned} (V_{S_k}^T\mathcal{A}V_{S_k})_{i_1i_2\cdots i_m}=&\sum_{j_1,j_2,\cdots,j_m\in [n]}(V_{S_k}^T)_{i_1j_1}a_{j_1j_2\cdots j_m}(V_{S_k})_{j_2i_2} \cdots (V_{S_k})_{j_mi_m} \\ =&\sum_{j_1,j_2,\cdots,j_m\in [n]}(V_{S_k})_{j_1i_1}a_{j_1j_2\cdots j_m}(V_{S_k})_{j_2i_2} \cdots (V_{S_k})_{j_mi_m}\\ =&\sum_{j_1,j_2,\cdots,j_m\in [n]}a_{j_1j_2\cdots j_m} ({\bf u}^k_{i_1})_{j_1}({\bf u}^k_{i_2})_{j_2}\cdots ({\bf u}^k_{i_m})_{j_m}\\ =&\langle\mathcal{A}, {\bf u}^k_{i_1} \circ{\bf u}^k_{i_2}\circ \cdots \circ {\bf u}^k_{i_m} \rangle \\ >& 0, \end{aligned} $$ provided $\varepsilon>0$ is small enough, that is, $V_{S_k}^T\mathcal{A}V_{S_k}>0$ entrywise. Thus $V_{S_k}^T\mathcal{A}V_{S_k}\in \mathbb{N}^+_{m,n}\subseteq\mathbb{M}$ for all $k\in [r]$, and the desired results hold. 
{$\Box$}\vskip 5pt \begin{theorem}\label{them33} Suppose $\mathcal{A}\in \mathbb{S}_{m,n}$ is copositive. Let $S=conv\{{\bf u}_1,{\bf u}_2,\cdots,{\bf u}_n\}$ be a simplex with $\mathcal{A}{\bf u}_i^m>0$ for all $i\in [n]$. Let $V=({\bf u}_1,{\bf u}_2,\cdots,{\bf u}_n)\in \mathbb{R}^{n\times n}$. If there exists $\tilde{{\bf x}}\in S\backslash \{{\bf u}_1,{\bf u}_2,\cdots,{\bf u}_n\}$ such that $\mathcal{A}\tilde{{\bf x}}^m=0$, then $V^T\mathcal{A}V$ is not strictly copositive. \end{theorem}\vskip 3pt \noindent\it Proof. \hspace{1mm}\rm We will prove the conclusion by contradiction. Assume $V^T\mathcal{A}V$ is strictly copositive. Then, for any ${\bf x}\in \mathbb{R}_+^n \setminus \{{\bf 0}\}$, we have $$ \begin{aligned} (V^T\mathcal{A}V){\bf x}^m=&\sum_{i_1,i_2,\cdots,i_m\in [n]}(V^T\mathcal{A}V)_{i_1i_2\cdots i_m}x_{i_1}x_{i_2}\cdots x_{i_m} \\ =& \sum_{i_1,i_2,\cdots,i_m\in [n]}x_{i_1}x_{i_2}\cdots x_{i_m}\langle\mathcal{A}, {\bf u}_{i_1} \circ{\bf u}_{i_2}\circ \cdots \circ {\bf u}_{i_m} \rangle \\ > & 0. 
\end{aligned} $$ Since $\tilde{{\bf x}}\neq {\bf 0}$ and ${\bf u}_1,{\bf u}_2,\cdots,{\bf u}_n$ constitute a basis of $\mathbb{R}^n$, by writing $\tilde{{\bf x}}=\tilde{x}_1{\bf u}_1+\tilde{x}_2{\bf u}_2+\cdots+\tilde{x}_n{\bf u}_n$, it follows that $$ \begin{aligned} 0=&\mathcal{A}\tilde{{\bf x}}^m \\ =&\langle \mathcal{A}, (\tilde{x}_1{\bf u}_1+\tilde{x}_2{\bf u}_2+\cdots+\tilde{x}_n{\bf u}_n)^m\rangle \\ =& \sum_{i_1,i_2,\cdots,i_m\in [n]}\tilde{x}_{i_1}\tilde{x}_{i_2}\cdots \tilde{x}_{i_m}\langle\mathcal{A}, {\bf u}_{i_1} \circ{\bf u}_{i_2}\circ \cdots \circ {\bf u}_{i_m} \rangle \\ =&\sum_{i_1,i_2,\cdots,i_m\in [n]}(V^T\mathcal{A}V)_{i_1i_2\cdots i_m}\tilde{x}_{i_1}\tilde{x}_{i_2}\cdots \tilde{x}_{i_m} \\ =&(V^T\mathcal{A}V)\bar{\tilde{{\bf x}}}^m, \end{aligned} $$ where $\bar{\tilde{{\bf x}}}=(\tilde{x}_1,\tilde{x}_2,\cdots,\tilde{x}_n)^T\in \mathbb{R}^n_+\setminus \{{\bf 0}\}$ since $\tilde{{\bf x}}\in S$. This contradicts the assumption that $V^T\mathcal{A}V$ is strictly copositive, and hence, the desired result holds. {$\Box$}\vskip 5pt \setcounter{equation}{0} \section{Algorithms} The results of the preceding section naturally yield an algorithm to test whether a tensor is copositive or not. Similar to the algorithm given in \cite{chen16}, we will present an algorithm that starts with the standard simplex in $\mathbb{R}^n_+$ and checks whether there is a vertex ${\bf v}$ with $\mathcal{A}{\bf v}^m < 0$, or whether the copositivity criterion of Theorem \ref{them31} is satisfied. First, we list the algorithm proposed in \cite{chen16} below, and then discuss the relationship between the iteration number and the number of all sub-simplices when the algorithm stops in finitely many iterations. 
\begin{tabular}{@{}l@{}} \hline \multicolumn{1}{c}{\bf Algorithm 1} \\ \hline {\bf Input:} $\mathcal{A}\in \mathbb{S}_{m,n}$ \\ \qquad Set $\widetilde{S}:=\{S_1\}$, where $S_1=conv\{{\bf e}_1,{\bf e}_2,\cdots,{\bf e}_n\}$ is the standard simplex \\ \qquad Set $k:=1$ \\ \qquad while $k\neq 0$ do \\ \qquad\qquad set $S:=S_k=conv\{{\bf u}_1,{\bf u}_2,\cdots,{\bf u}_n\}\in \widetilde{S}$ \\ \qquad\qquad if there exists $i\in [n]$ such that $\mathcal{A}{\bf u}^m_i<0$, then \\ \qquad\qquad\qquad return ``$\mathcal{A}$ is not copositive'' \\ \qquad\qquad else if $\langle \mathcal{A}, {\bf u}_{i_1}\circ {\bf u}_{i_2}\circ \cdots \circ {\bf u}_{i_m}\rangle \geq 0$ for all $i_1,i_2,\cdots,i_m\in [n]$, then \\ \qquad\qquad\qquad set $\widetilde{S}:=\widetilde{S} \backslash \{S_k\}$ and $k:=k-1$ \\ \qquad\qquad else \\ \qquad\qquad\qquad set \\ \qquad\qquad\qquad\qquad $S_k:=conv\{{\bf u}_1,\cdots,{\bf u}_{p-1},{\bf v},{\bf u}_{p+1},\cdots,{\bf u}_n\}$; \\ \qquad\qquad\qquad\qquad $S_{k+1}:=conv\{{\bf u}_1,\cdots,{\bf u}_{q-1},{\bf v},{\bf u}_{q+1},\cdots,{\bf u}_n\}$, \\ \qquad\qquad\qquad\qquad where ${\bf v}=\frac{{\bf u_p}+{\bf u_q}}{2},\, [p,q]=arg\max_{i,j\in [n]}\|{\bf u_i}-{\bf u_j}\|_2$ and $p<q$.\\ \qquad\qquad\qquad set $\widetilde{S}:=\widetilde{S}\backslash \{S\}\bigcup \{S_k, S_{k+1}\}$ and $k:=k+1$ \\ \qquad\qquad end if \\ \qquad end while \\ \qquad return ``$\mathcal{A}$ is copositive.'' \\ {\bf Output:} ``$\mathcal{A}$ is copositive'' or ``$\mathcal{A}$ is not copositive''.\\ \hline \end{tabular} For the standard simplex $S=conv\{{\bf e}_1,{\bf e}_2,\cdots,{\bf e}_n\}$ and its simplicial partition $\tilde{S}=\{S_1,S_2,\cdots,S_r\}$, any simplex $S_i, i\in [r]$ is called a sub-simplex of $S$. 
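The listing above can be transcribed almost line by line. The following pure-Python sketch (ours; tensors are dicts keyed by index tuples, no floating-point tolerances, and a hard iteration budget) is meant only to make the control flow concrete:

```python
# Sketch of Algorithm 1 (following chen16) in pure Python for small
# symmetric tensors; no attention is paid to efficiency.
from itertools import product

def inner(A, vecs, n):
    """<A, v_1 o v_2 o ... o v_m> for a list of m vectors of length n."""
    s = 0.0
    for idx in product(range(n), repeat=len(vecs)):
        a = A.get(idx, 0.0)
        if a:
            for pos, i in enumerate(idx):
                a *= vecs[pos][i]
            s += a
    return s

def check_copositive(A, m, n, max_iter=10000):
    """True / False, or None if the iteration budget is exhausted."""
    # start with the standard simplex conv{e_1, ..., e_n}
    stack = [[tuple(1.0 if j == i else 0.0 for j in range(n))
              for i in range(n)]]
    for _ in range(max_iter):
        if not stack:
            return True
        U = stack[-1]
        if any(inner(A, [u] * m, n) < 0 for u in U):
            return False                       # some vertex has A u^m < 0
        if all(inner(A, [U[i] for i in t], n) >= 0
               for t in product(range(n), repeat=m)):
            stack.pop()                        # criterion of Theorem 3.1
            continue
        # otherwise bisect the longest edge (u_p, u_q)
        p, q = max(((i, j) for i in range(n) for j in range(i + 1, n)),
                   key=lambda e: sum((U[e[0]][c] - U[e[1]][c]) ** 2
                                     for c in range(n)))
        v = tuple((U[p][c] + U[q][c]) / 2 for c in range(n))
        stack[-1:] = [U[:p] + [v] + U[p + 1:], U[:q] + [v] + U[q + 1:]]
    return None

# The identity tensor is copositive; a negative diagonal entry rules
# copositivity out immediately.
assert check_copositive({(i, i, i): 1.0 for i in range(2)}, 3, 2) is True
assert check_copositive({(0, 0, 0): -1.0}, 3, 2) is False
```

Algorithm 2 below is obtained by replacing the entrywise nonnegativity test with the membership test $V^T\mathcal{A}V\in \mathbb{M}$; everything else is unchanged.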
\begin{proposition} For a given tensor $\mathcal{A}\in \mathbb{S}_{m,n}$, if Algorithm 1 stops at the $k$-th iteration, $k\geq 2$, then the number of sub-simplices that need to be checked during the whole running process is $d=\frac{k+1}{2}$. Furthermore, if $\mathcal{A}$ is nonnegative or $\mathcal{A}$ has a negative diagonal entry, then $k=1$. \end{proposition} \noindent\it Proof. \hspace{1mm}\rm By assumption, the algorithm stops at the $k$-th iteration. Suppose the original standard simplex is bisected $t$ times from beginning to end. Since each bisection increases the number of simplices by one, and each of the resulting $d$ simplices is removed in one separate iteration, we have $$d=t+1~~ \mbox{\rm and}~~ t+t+1=k,$$ which implies that $d=\frac{k+1}{2}$. When $\mathcal{A}$ is nonnegative or $\mathcal{A}$ has a negative diagonal entry, it means that $$ \langle \mathcal{A},~{\bf e_{i_1}}\circ {\bf e_{i_2}}\circ \cdots \circ {\bf e_{i_m}} \rangle=a_{i_1i_2\cdots i_m} \geq 0, ~\forall~ i_1,i_2,\cdots,i_m \in [n], $$ or $\mathcal{A}{\bf e}_i^m<0$ for some $i\in [n]$. Thus, the algorithm stops in one iteration, i.e., $k=1$, and hence, the desired result holds. {$\Box$}\vskip 5pt We now list the main algorithm, which is related to a convex cone $\mathbb{M}$ that is a subcone of the copositive tensor cone. Then, different choices for the subcone $\mathbb{M}$ are discussed in detail. 
\begin{tabular}{@{}l@{}} \hline \multicolumn{1}{c}{\bf Algorithm 2} \\ \hline {\bf Input:} $\mathcal{A}\in \mathbb{S}_{m,n}$ \\ \qquad Set $\widetilde{S}:=\{S_1\}$, where $S_1=conv\{{\bf e}_1,{\bf e}_2,\cdots,{\bf e}_n\}$ is the standard simplex \\ \qquad Set $k:=1$ \\ \qquad while $k\neq 0$ do \\ \qquad\qquad set $S:=S_k=conv\{{\bf u}_1,{\bf u}_2,\cdots,{\bf u}_n\}\in \widetilde{S}$ \\ \qquad\qquad let $V=({\bf u}_1,{\bf u}_2,\cdots,{\bf u}_n)$ be the square matrix corresponding to $S$ \\ \qquad\qquad if there exists $i\in [n]$ such that $\mathcal{A}{\bf u}^m_i<0$, then \\ \qquad\qquad\qquad return ``$\mathcal{A}$ is not copositive'' \\ \qquad\qquad else if $V^T\mathcal{A}V\in \mathbb{M}$, then set $\widetilde{S}:=\widetilde{S} \backslash \{S_k\}$ and $k:=k-1$ \\ \qquad\qquad else \\ \qquad\qquad\qquad set \\ \qquad\qquad\qquad\qquad $S_k:=conv\{{\bf u}_1,\cdots,{\bf u}_{p-1},{\bf v},{\bf u}_{p+1},\cdots,{\bf u}_n\}$; \\ \qquad\qquad\qquad\qquad $S_{k+1}:=conv\{{\bf u}_1,\cdots,{\bf u}_{q-1},{\bf v},{\bf u}_{q+1},\cdots,{\bf u}_n\}$, \\ \qquad\qquad\qquad\qquad where ${\bf v}=\frac{{\bf u_p}+{\bf u_q}}{2},\, [p,q]=arg\max_{i,j\in [n]}\|{\bf u_i}-{\bf u_j}\|_2$ and $p<q$.\\ \qquad\qquad\qquad set $\widetilde{S}:=\widetilde{S}\backslash \{S\}\bigcup \{S_k, S_{k+1}\}$ and $k:=k+1$ \\ \qquad\qquad end if \\ \qquad end while \\ \qquad return ``$\mathcal{A}$ is copositive.'' \\ {\bf Output:} ``$\mathcal{A}$ is copositive'' or ``$\mathcal{A}$ is not copositive''.\\ \hline \end{tabular} \begin{Remark} By the analysis above, whether or not the algorithm terminates depends on the input tensor $\mathcal{A}$. {\bf (i)} If the input tensor $\mathcal{A}$ is not copositive, then the algorithm terminates. In this case, it does not matter which set $\mathbb{M}$ is used. 
{\bf (ii)} If the input tensor $\mathcal{A}$ is strictly copositive and $\mathbb{M}\supseteq \mathbb{N}_{m, n}^+$, then the algorithm terminates in finitely many iterations. {\bf (iii)} If the input tensor $\mathcal{A}$ is copositive but not strictly copositive, then the algorithm may or may not terminate. \end{Remark} An important issue which influences the number of iterations and the runtime of Algorithm 2 is the choice of the set $\mathbb{M}$. The desirable properties of the set $\mathbb{M}$ can be summarized as follows: for any given symmetric tensor $\mathcal{A}$, we can easily check whether $\mathcal{A}\in \mathbb{M}$; and $\mathbb{M}$ is a subcone of the copositive tensor cone that is as large as possible. \subsection{The choice $\mathbb{M}=\mathbb{N}^+_{m,n}$} The first choice that comes to mind is $\mathbb{M} = \mathbb{N}^+_{m,n}$. However, this is not always desirable. Checking whether a symmetric tensor belongs to $\mathbb{N}^+_{m,n}$ does not take much effort, but the nonnegative tensor cone is a rather poor approximation of the copositive tensor cone. So each iteration of the algorithm is cheap, but the number of iterations may tend to be large. On the other hand, it should be noted that, in Algorithm 1, the inequalities $$\langle \mathcal{A}, {\bf u}_{i_1}\circ {\bf u}_{i_2}\circ \cdots \circ {\bf u}_{i_m}\rangle \geq 0,~\forall~i_1,i_2,\cdots,i_m\in [n],$$ mean exactly that $V^T\mathcal{A}V\in \mathbb{N}^+_{m,n}$, so Algorithm 1 corresponds to the choice $\mathbb{M}=\mathbb{N}^+_{m,n}$; for a larger subcone $\mathbb{M}$, the condition $V^T\mathcal{A}V\in \mathbb{M}$ is weaker and need not imply these inequalities. \subsection{An alternative choice: $\mathbb{M}$ related to $Z$-tensors} In order to choose a good approximation of the copositive tensor cone, we first recall the matrix case. It is known that $\mathbb{PSD}+\mathbb{N}^+_{m,n}$ is a good approximation of the copositive matrix cone \cite{Sponsel12,Tanaka16}. 
The problem of testing whether or not a given matrix belongs to $\mathbb{PSD}+\mathbb{N}^+_{m,n}$ can be solved via the following doubly nonnegative program $$ \begin{aligned} {\bf Minimize}~~&~\langle A, X\rangle \\ {\bf subject~to}~&~\langle I_n, X\rangle=1,~X\in \mathbb{PSD}\cap\mathbb{N}^+_{m,n}, \end{aligned} $$ which can be expressed as a semidefinite program. Thus, the set $\mathbb{PSD}+\mathbb{N}^+_{m,n}$ is a rather large and tractable convex subcone of $\mathbb{COP}_{m,n}$. However, solving the doubly nonnegative problem is computationally expensive \cite{Sponsel12,Yoshi10} and does not make for a practical implementation. To overcome this drawback, more easily tractable subcones of the copositive matrix cone were proposed in \cite{Sponsel12}, such as $$ \mathbb{H}=\{A\in \mathbb{S}_n~|~A-N(A)\in \mathbb{PSD}\}, $$ where $N(A)$ is the square matrix such that $$ N(A)_{ij}=\left\{ \begin{array}{cll} A_{ij}~&~~A_{ij}>0~{\bf and}~i\neq j, \\ 0~~&~~{\bf otherwise}. \end{array} \right. $$ Here, $A-N(A)$ is a $Z$-matrix. Stimulated by this method and by the notion of $Z$-tensors \cite{Zhang12}, we now extend this subcone to the higher order tensor case. It is known that checking the positive semi-definiteness of a general symmetric tensor is NP-hard \cite{Lim2013}. But for some symmetric tensors with special structure, it may not be NP-hard. A polynomial time algorithm for checking the positive semi-definiteness of $Z$-tensors was established in \cite{CLQ2016}. Now, for a given tensor $\mathcal{A}=(a_{i_1i_2\cdots i_m})$ with order $m$ and dimension $n$, let $$ N(\mathcal{A})_{i_1i_2\cdots i_m}=\left\{ \begin{array}{cll} a_{i_1i_2\cdots i_m}~ & ~~a_{i_1i_2\cdots i_m}>0~~{\bf and}~~\delta_{i_1i_2\cdots i_m}=0, \\ 0~~ & ~~{\bf otherwise}, \end{array} \right. $$ where $\delta_{i_1i_2\cdots i_m}=1$ if and only if $i_1=i_2=\cdots=i_m$, and otherwise $\delta_{i_1i_2\cdots i_m}=0$. 
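To make the operator $N(\cdot)$ concrete, here is a small sketch (ours, not the paper's): $N(\mathcal{A})$ keeps exactly the positive off-diagonal entries of $\mathcal{A}$, so $\mathcal{A}-N(\mathcal{A})$ has no positive off-diagonal entries, i.e. it is a $Z$-tensor. Tensors are again dict-keyed, and the toy tensor below has only a few entries set:

```python
# Sketch (ours): N(.) keeps the positive off-diagonal entries of A, so
# that A - N(A) is a Z-tensor.  Tensors are dicts keyed by index tuples.
from itertools import product

def N(A):
    """Positive off-diagonal part: entries with a > 0 and delta = 0."""
    return {idx: a for idx, a in A.items()
            if a > 0 and len(set(idx)) > 1}

def is_Z_tensor(A, m, n):
    """Every off-diagonal entry is nonpositive."""
    return all(A.get(idx, 0.0) <= 0
               for idx in product(range(n), repeat=m)
               if len(set(idx)) > 1)

A = {(0, 0, 0): 2.0, (0, 0, 1): 1.5, (0, 1, 1): -0.5, (1, 1, 1): 3.0}
A_minus_N = {idx: A.get(idx, 0.0) - N(A).get(idx, 0.0)
             for idx in product(range(2), repeat=3)}
assert N(A) == {(0, 0, 1): 1.5}
assert is_Z_tensor(A_minus_N, 3, 2)
```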
Next, we consider a new subcone of the copositive tensor cone, distinguishing two cases. \noindent{\bf (I)} When $m$ is even, we define the set $\mathbb{H}_1$ such that $$ \mathbb{H}_1=\{\mathcal{A}\in \mathbb{S}_{m,n}~|~\mathcal{A}-N(\mathcal{A})\in \mathbb{PSD}\}. $$ Here $\mathcal{A}-N(\mathcal{A})$ is an even order symmetric $Z$-tensor. For any ${\bf x}\in \mathbb{R}^n$, let $f_1({\bf x})=(\mathcal{A}-N(\mathcal{A})){\bf x}^m$, and let the minimum $H$-eigenvalue of $\mathcal{A}-N(\mathcal{A})$ be denoted by $\lambda_{{\rm min}}(\mathcal{A}-N(\mathcal{A}))$. Since an even order symmetric tensor is positive semi-definite if and only if its minimum $H$-eigenvalue is nonnegative \cite{Qi05}, by Theorem 5.1 in \cite{CLQ2016}, we know that $\mathcal{A}\in \mathbb{H}_1$ if and only if \begin{equation}\label{e41} \lambda_{{\rm min}}(\mathcal{A}-N(\mathcal{A}))= \max_{\mu, r \in \mathbb{R}}\{\mu: f_1({\bf x})-r (\|{\bf x}\|_m^m-1)-\mu \in \Sigma^2_m[{\bf x}]\}\geq 0, \end{equation} where $\Sigma_m^2[{\bf x}]$ is the set of all SOS polynomials of degree at most $m$. The sum-of-squares problem (\ref{e41}) can be equivalently rewritten as a semi-definite programming problem (SDP), and so can be solved efficiently. Indeed, this conversion can be done by using the commonly used Matlab toolbox YALMIP \cite{Yalmip1,Yalmip2}. The corresponding code using {\rm YALMIP} is as follows: \begin{verbatim} sdpvar x1 x2 ... xn r mu f=f1(x); g = [(x1^m+x2^m+...+xn^m)-1]; F = [sos(f1-mu-r*g)]; solvesos(F,-mu,[],[r;mu]). \end{verbatim} {\bf (II)} When $m$ is odd, it is well known that there are no nontrivial odd order positive semi-definite tensors. Thus, we instead consider the subcone $\mathbb{H}_2$ of $\mathbb{COP}_{m,n}$ given by $$ \mathbb{H}_2=\{\mathcal{A}\in \mathbb{S}_{m,n}~|~\mathcal{A}-N(\mathcal{A})\in \mathbb{COP}_{m,n}\}. 
$$ Now, let $\bar{\mathcal{A}}$ be the symmetric tensor with order $2m$ and dimension $n$ whose entries are determined by \begin{equation}\label{e42} f_2({\bf x})=\bar{\mathcal{A}}{\bf x}^{2m}=\sum_{i_1,i_2,\cdots,i_m\in [n]}(a_{i_1i_2\cdots i_m}-N(\mathcal{A})_{i_1i_2\cdots i_m})x_{i_1}^2x_{i_2}^2\cdots x_{i_m}^2,~~\forall~{\bf x}\in \mathbb{R}^n. \end{equation} Since $\mathcal{A}-N(\mathcal{A})$ is a $Z$-tensor, it is not difficult to check that $\bar{\mathcal{A}}$ is a $Z$-tensor. By (\ref{e42}), we know that $$ \mathcal{A}-N(\mathcal{A})\in \mathbb{COP}_{m,n}~\Leftrightarrow~\bar{\mathcal{A}} \in \mathbb{PSD}, $$ which, together with (\ref{e41}), implies that \begin{equation}\label{e43} \mathcal{A}-N(\mathcal{A})\in \mathbb{COP}_{m,n}~\Leftrightarrow~\max_{\mu, r \in \mathbb{R}}\{\mu: f_2({\bf x})-r (\|{\bf x}\|_{2m}^{2m}-1)-\mu \in \Sigma^2_{2m}[{\bf x}]\}\geq 0. \end{equation} Thus, for a given odd order symmetric tensor $\mathcal{A}$, the Matlab code using {\rm YALMIP} to check whether $\mathcal{A}\in \mathbb{H}_2$ is listed below: \begin{verbatim} sdpvar x1 x2 ... xn r mu f=f2(x); g = [(x1^2m+x2^2m+...+xn^2m)-1]; F = [sos(f2-mu-r*g)]; solvesos(F,-mu,[],[r;mu]). \end{verbatim} To end this section, we show the convexity of $\mathbb{H}_1$ and $\mathbb{H}_2$. Before that, we first cite a useful lemma. \begin{lemma}\label{lema41}{\bf \cite{Qi14}} Suppose $\mathcal{C}$ is a nonnegative tensor and $\mathcal{B}$ is a tensor, both with order $m$ and dimension $n$. If $|\mathcal{B}|\leq \mathcal{C}$, then $\rho(\mathcal{B}) \leq \rho(\mathcal{C})$, where $\rho(\mathcal{B})$ and $\rho(\mathcal{C})$ are the spectral radii of $\mathcal{B}$ and $\mathcal{C}$, respectively. \end{lemma} \begin{lemma}\label{lema42} Let $m\in \mathbb{N}$ be an even number. Suppose $\mathcal{A}$, $\mathcal{B}$ are symmetric $Z$-tensors with order $m$ and dimension $n$. 
If $\mathcal{A}\leq \mathcal{B}$ and $\mathcal{A}$ is positive semi-definite, then $\mathcal{B}$ is positive semi-definite. \end{lemma} \noindent\it Proof. \hspace{1mm}\rm Since $\mathcal{A}, \mathcal{B}$ are $Z$-tensors, we can find $t\in \mathbb{R}, t>0 $ such that $$ \mathcal{A}=t\mathcal{I}-\mathcal{A}',~~\mathcal{B}=t\mathcal{I}-\mathcal{B}', $$ where $\mathcal{A}', \mathcal{B}'$ are nonnegative tensors. It is easy to see that $\mathcal{A}' \geq \mathcal{B}'$ since $\mathcal{A} \leq \mathcal{B}$. By Lemma \ref{lema41} and Corollary 3 in \cite{Qi05}, it follows that $\rho(\mathcal{A}') \geq \rho(\mathcal{B}')$ and $$ \lambda_{min}(\mathcal{B})=t-\rho(\mathcal{B}')\geq t-\rho(\mathcal{A}')=\lambda_{min}(\mathcal{A}). $$ Here, $\lambda_{min}(\mathcal{A}), \lambda_{min}(\mathcal{B})$ denote the minimum $H$-eigenvalues of $\mathcal{A}$ and $\mathcal{B}$ respectively. Since $\mathcal{A}$ is positive semi-definite, we obtain that $$ \lambda_{min}(\mathcal{B})\geq \lambda_{min}(\mathcal{A})\geq 0, $$ which implies that $\mathcal{B}$ is positive semi-definite, and hence, the desired results hold. {$\Box$}\vskip 5pt \begin{theorem}\label{them41} Let $m\in \mathbb{N}$ be even. Then $\mathbb{H}_1$ is convex and it satisfies $\mathbb{N}^+_{m,n}\subseteq \mathbb{H}_1\subseteq \mathbb{COP}_{m,n}$. \end{theorem}\vskip 3pt \noindent\it Proof. \hspace{1mm}\rm The second statement is obvious from the definitions, so we only need to prove the convexity of $\mathbb{H}_1$. Since $\mathbb{H}_1$ is clearly closed under multiplication by nonnegative scalars, it suffices to show that $\mathcal{A}+\mathcal{B}\in \mathbb{H}_1$ for any $\mathcal{A}, \mathcal{B}\in \mathbb{H}_1$. Suppose $\mathcal{A}, \mathcal{B}\in \mathbb{H}_1$; by the definition of $\mathbb{H}_1$, we have that \begin{equation}\label{e44} \mathcal{A}-N(\mathcal{A})\in \mathbb{PSD},~~\mathcal{B}-N(\mathcal{B})\in \mathbb{PSD}. 
\end{equation} By the fact that $N(\mathcal{A+B})\leq N(\mathcal{A})+N(\mathcal{B})$, we obtain $$ \mathcal{A+B}-N(\mathcal{A+B})\geq \mathcal{A+B}-N(\mathcal{A})-N(\mathcal{B}). $$ By (\ref{e44}) and Lemma \ref{lema42}, we know that $\mathcal{A+B}-N(\mathcal{A+B})\in \mathbb{PSD}$, which implies that $\mathcal{A+B}\in \mathbb{H}_1$. So the desired result holds. {$\Box$}\vskip 5pt For any $Z$-tensor $\eta \mathcal{I}-\mathcal{B}$, it follows from Theorem 3.12 of \cite{Zhang12} that $\eta \mathcal{I}-\mathcal{B}$ is copositive if and only if $\eta\geq \rho(\mathcal{B})$. Similarly to the proof of Theorem \ref{them41}, one can show that the following conclusion holds; the proof is omitted. \begin{theorem}\label{them42} Let $m\in \mathbb{N}$ be odd. Then $\mathbb{H}_2$ is convex and satisfies $\mathbb{N}^+_{m,n}\subseteq \mathbb{H}_2\subseteq \mathbb{COP}_{m,n}$. \end{theorem}\vskip 3pt By the analysis above, in Algorithm 2 we can choose $\mathbb{M}=\mathbb{H}_1$ in the even-order case and $\mathbb{M}=\mathbb{H}_2$ in the odd-order case. \setcounter{equation}{0} \section{An upper bound for the coclique number of a uniform hypergraph} In this section, we show that computing the coclique number of a uniform hypergraph can be reformulated as a linear program over the cone of completely positive tensors. By the duality between the copositive tensor cone and the completely positive tensor cone, we present an upper bound for the coclique number, which can be computed by the previous algorithm. We first recall some notions of hypergraphs \cite{Cooper12, Qi14}. A hypergraph means an undirected simple $m$-uniform hypergraph $G=(V,E)$ with vertex set $V=\{1,2,\cdots,n\}$ and edge set $E = \{e_1,e_2,\cdots, e_k\}$ with $e_p \subseteq V$ for $p\in [k]$. By $m$-uniformity, we mean that for every edge $e\in E$, the cardinality $|e|$ of $e$ is equal to $m$.
A 2-uniform hypergraph is typically called a graph. Throughout this paper, we focus on $m\geq 3$ and $n \geq m$. Moreover, since the trivial hypergraph (i.e., $E=\emptyset$) is of less interest, we consider only hypergraphs having at least one edge (i.e., nontrivial ones) in this section. \begin{definition}\label{def51}{\bf (Coclique number of a hypergraph)} A coclique of an $m$-uniform hypergraph $G$ is a set of vertices no $m$-vertex subset of which is an edge of $G$; the largest cardinality of a coclique of $G$ is called the coclique number of $G$, denoted by $\omega(G)$. \end{definition} By Definition \ref{def51}, we easily obtain the following result. \begin{proposition}\label{prop51} Suppose $G=(V,E)$ is a nontrivial $m$-uniform hypergraph with $|V|=n$. Then the coclique number $\omega(G)$ of $G$ satisfies $m-1\leq \omega(G) \leq n-1$. \end{proposition} The following definition of the adjacency tensor was proposed by Cooper and Dutle \cite{Cooper12} and is important in the subsequent analysis. \begin{definition}\label{def52}{\bf (Adjacency tensor of a hypergraph)} Let $G=(V,E)$ be an $m$-uniform hypergraph with $V=\{1,2,\cdots,n\}$. The adjacency tensor of $G$ is the $m$-th order $n$-dimensional tensor $\mathcal{A}$ with $$ a_{i_1i_2\cdots i_m}=\left\{ \begin{array}{cl} \frac{1}{(m-1)!} & \{i_1,i_2,\cdots,i_m\}\in E, \\ 0 & {\rm otherwise}. \end{array} \right. $$ \end{definition} \begin{theorem}\label{them51} Let $G=(V,E)$ be an $m$-uniform hypergraph. Suppose $|V|=n$ and $G$ is nontrivial. Let $\omega(G)$ denote the coclique number of $G$.
Then $\omega(G)^{m-1}$ is equal to the optimal value of the following problem: $$ \begin{array}{clll} {\rm (P)}~~~& \max & \langle \mathcal{X}, \mathcal{E}\rangle \\ & {\rm s.t.} & \mathcal{X}_{i_1i_2\cdots i_m}=0,~~\{i_1,i_2,\cdots,i_m\}\in E, \\ & & \langle \mathcal{X}, \mathcal{I}\rangle=1, \\ & & \mathcal{X}\in \mathbb{CP}_{m,n}, \end{array} $$ where $\mathcal{E}$ is the all-one tensor of order $m$ and dimension $n$. \end{theorem}\vskip 3pt \noindent\it Proof. \hspace{1mm}\rm Since $\mathbb{CP}_{m,n}$ is a convex cone \cite{QXX14}, the feasible set of problem (P) is convex. So its optimal value is attained at an extreme point, i.e., there is ${\bf x}^*\in \mathbb{R}^n_+$ such that $f^*=\langle ({\bf x}^*)^m, \mathcal{E} \rangle$, where $f^*$ is the optimal value of (P). The constraints of (P) imply that $\|{\bf x}^*\|_m=1$ and that the support set $S^*$ of ${\bf x}^*$ is a coclique of $G$. By the optimality conditions of (P), all nonzero entries of ${\bf x}^*$ are equal, which means that for any $i\in [n]$, $$ {\bf x}^*_i= \left\{ \begin{array}{cl} \frac{1}{\sqrt[m]{|S^*|}} & i\in S^*, \\ 0 & {\rm otherwise}. \end{array} \right. $$ Thus, the optimal value is $f^*=\langle ({\bf x}^*)^m, \mathcal{E} \rangle=({\bf e}^T{\bf x}^*)^m=|S^*|^{m-1}$, which implies that $S^*$ must be a maximum coclique, and the desired result holds. {$\Box$}\vskip 5pt \begin{theorem}\label{them52} Let $G$ and $\omega(G)$ be defined as in Theorem \ref{them51}, and let $\mathcal{A}$ be the adjacency tensor of the hypergraph $G$. Then it holds that $$ \omega(G)^{m-1}\leq \min_{\lambda \in \mathbb{N}}\{\lambda~|~\lambda(\mathcal{A}+\mathcal{I})-\mathcal{E}\in \mathbb{COP}_{m,n}\}. $$ \end{theorem}\vskip 3pt \noindent\it Proof.
\hspace{1mm}\rm Since $\mathcal{X}\in \mathbb{CP}_{m,n}$ is nonnegative, by the proof of Theorem \ref{them51} we know that problem (P) is equivalent to $$ \begin{array}{clll} \max & \langle \mathcal{X}, \mathcal{E}\rangle \\ {\rm s.t.} & \langle\mathcal{X},\mathcal{A}\rangle=0, \\ & \langle \mathcal{X}, \mathcal{I}\rangle=1, \\ & \mathcal{X}\in \mathbb{CP}_{m,n}, \end{array} $$ which can be relaxed to the problem $$ \begin{array}{clll} {\rm (P')} & \max & \langle \mathcal{X}, \mathcal{E}\rangle \\ & {\rm s.t.} & \langle\mathcal{X},\mathcal{A}+\mathcal{I}\rangle=1, \\ & & \mathcal{X}\in \mathbb{CP}_{m,n}. \end{array} $$ The dual problem of ${\rm (P')}$ is $$ \min_{\lambda \in \mathbb{N}}\{\lambda~|~\lambda(\mathcal{A}+\mathcal{I})-\mathcal{E}\in \mathbb{COP}_{m,n}\}. $$ From the well-known weak duality theorem, we have $$ \begin{aligned} \omega(G)^{m-1}\leq & \max \{\langle \mathcal{X}, \mathcal{E}\rangle ~|~\langle\mathcal{X},\mathcal{A}+\mathcal{I}\rangle=1, \mathcal{X}\in \mathbb{CP}_{m,n} \} \\ \leq & \min_{\lambda \in \mathbb{N}}\{\lambda~|~\lambda(\mathcal{A}+\mathcal{I})-\mathcal{E}\in \mathbb{COP}_{m,n}\}, \end{aligned} $$ which implies the desired result. {$\Box$}\vskip 5pt By Proposition \ref{prop51} and Theorem \ref{them52}, we can run finitely many iterations of Algorithm 2 to obtain an upper bound for the coclique number of a given uniform hypergraph. For example, for an $m$-uniform hypergraph $G=(V,E)$ with $V=[n]$, if there is $k\in [n]$ such that $$ k^{m-1}(\mathcal{A}+\mathcal{I})-\mathcal{E}\in \mathbb{COP}_{m,n},~~(k-1)^{m-1}(\mathcal{A}+\mathcal{I})-\mathcal{E}\notin \mathbb{COP}_{m,n}, $$ then we know that the coclique number of $G$ satisfies $\omega(G)\leq k$.
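The scan over $k$ can be sketched numerically. The helper below (our code; the function names are ours) builds the adjacency tensor of Definition 5.2 and evaluates $f({\bf x})=\lambda(\mathcal{A}+\mathcal{I}){\bf x}^m-\mathcal{E}{\bf x}^m$ at a nonnegative point: a negative value certifies $\lambda(\mathcal{A}+\mathcal{I})-\mathcal{E}\notin \mathbb{COP}_{m,n}$, so that value of $\lambda$ can be discarded. A full copositivity certificate for the converse direction still requires Algorithm 2.

```python
import numpy as np
from itertools import permutations
from math import factorial

def adjacency_tensor(n, m, edges):
    """Adjacency tensor of an m-uniform hypergraph (Definition 5.2):
    entry 1/(m-1)! at every permutation of the indices of each edge."""
    A = np.zeros((n,) * m)
    for e in edges:
        for p in permutations(e):
            A[p] = 1.0 / factorial(m - 1)
    return A

def poly_value(T, x):
    """Homogeneous form T x^m = sum_{i1..im} T[i1,..,im] x_{i1}...x_{im}."""
    y = T
    for _ in range(T.ndim):
        y = y @ x          # contract one index with x per step
    return float(y)

def relaxed_value(A, lam, x):
    """f(x) = lam * ((A + I) x^m) - E x^m, the form in Theorem 5.2."""
    m = A.ndim
    I_xm = float(np.sum(x ** m))      # identity tensor: I x^m = sum_i x_i^m
    E_xm = float(np.sum(x) ** m)      # all-one tensor:  E x^m = (sum_i x_i)^m
    return lam * (poly_value(A, x) + I_xm) - E_xm
```

For the single-edge $3$-uniform hypergraph used later in this paper, `relaxed_value(A, 4, np.ones(3))` returns $-3$, a non-copositivity witness for $\lambda=4$.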
\setcounter{equation}{0} \section{Numerical results} In this section, we report some preliminary numerical results for Algorithm 2, where the subcone $\mathbb{M}$ is chosen according to Section 4.2; we use YALMIP \cite{Yalmip1,Yalmip2} and SeDuMi \cite{sturm99} to solve the resulting semidefinite programs. All experiments were run in Matlab 2014a on an HP Z800 workstation with an Intel(R) Xeon(R) CPU X5680 @ 3.33GHz and 48 GB of RAM. The experiments are divided into the following two parts. {\bf Part 1 (Copositivity detection)}. In this part, we implement Algorithm 5.2 to detect whether a tensor is copositive or not, using several examples tested in \cite{chen16}. \begin{example}\label{example1} We test the tensor $\mathcal{A}$ of the following form: \begin{eqnarray}\label{E-exam1-1} \mathcal{A}=\eta \mathcal{I}-\mathcal{B}, \end{eqnarray} (i) Suppose that $\mathcal{A}\in S_{3,3}$ (or $\mathcal{A}\in S_{4,4}$) is given by (\ref{E-exam1-1}), where $\mathcal{B}\in S_{3,3}$ (or $\mathcal{B}\in S_{4,4}$) is a tensor of all ones and $\eta$ is specified in the table of our numerical results.\\ (ii) Suppose that $\mathcal{A}\in S_{m,n}$ is given by (\ref{E-exam1-1}), where $\mathcal{B}\in S_{m,n}$ is randomly generated with all its elements in the interval $(0,1)$. \end{example} The numerical results for the tensor $\mathcal{A}$ defined in Example \ref{example1}(i) are given in Table 1, where ``$\rho$" denotes the spectral radius of the tensor $\mathcal{B}$, ``IT" denotes the number of iterations, ``CPU(s)" denotes the CPU time in seconds, and ``Result" denotes the output, in which ``No" means that the tested tensor is not copositive and ``Yes" means that it is copositive.
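The ``$\rho$" values can be reproduced with a higher-order power method for nonnegative tensors, which is also how the spectral radii for Table 2 are computed. The sketch below is ours and assumes an irreducible nonnegative input; for the all-one tensor of order $m$ and dimension $n$ it returns $\rho=n^{m-1}$, i.e., $9$ for $(m,n)=(3,3)$ and $64$ for $(4,4)$, matching Table 1.

```python
import numpy as np

def tensor_contract(B, x):
    """(B x^{m-1})_i = sum_{i2..im} B[i, i2, .., im] * x_{i2} ... x_{im}."""
    y = B
    for _ in range(B.ndim - 1):
        y = y @ x
    return y

def spectral_radius(B, iters=500, tol=1e-12):
    """NQZ-type power iteration for the spectral radius of a nonnegative,
    irreducible tensor B (a sketch, not the authors' implementation)."""
    m, n = B.ndim, B.shape[0]
    x = np.full(n, 1.0 / n)
    lam = 0.0
    for _ in range(iters):
        y = tensor_contract(B, x)
        lam_new = float(np.max(y / x ** (m - 1)))   # Collatz upper bound
        x = y ** (1.0 / (m - 1))                    # rescale the iterate
        x /= x.sum()
        if abs(lam_new - lam) < tol:
            break
        lam = lam_new
    return lam
```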
\begin{table}[ht] \caption{The numerical results of the problem in Example \ref{example1}(i)} \begin{center} \begin{tabular}[c] {| c | c | c | c | c | c | c|} \hline $m$ & $n$ &$\rho$ & $\eta$ & IT & CPU(s) & Result\\ \hline & & & 1 & 2 &3.37 & No \\ \cline{4-7} & & & 8.99 & 20 &81.2 & No \\ \cline{4-7} 3 & 3 & 9 & 9 & $>100$ & & \\ \cline{4-7} & & & 9.01 & 1 &0.92 & Yes \\ \cline{4-7} & & & 19 & 1 &0.967 & Yes \\ \hline & & & 10 & 8 &32.6 & No \\ \cline{4-7} 4 & 4 & 64 & 64 & $>100$ & & \\ \cline{4-7} & & & 74 & 1 &1.33 & Yes \\ \hline \end{tabular} \end{center} \end{table} The numerical results for the tensor $\mathcal{A}$ defined in Example \ref{example1}(ii) are given in Table 2, where the spectral radius $\rho$ of each tensor $\mathcal{B}$ is computed by the higher-order power method. In our experiments, for the same $m$ and $n$, we randomly generate every tested problem 10 times. In Table 2, ``MinIT" and ``MaxIT" denote the minimal and maximal numbers of iterations over the ten runs of every tested problem, respectively, ``MinCPU(s)" and ``MaxCPU(s)" denote the smallest and largest CPU times in seconds over the ten runs, respectively, ``Nyes" denotes the number of runs whose output is that the tested tensor is copositive, and ``Nno" denotes the number of runs whose output is that the tested tensor is not copositive.
\begin{table}[ht] \caption{The numerical results of the problem in Example \ref{example1}(ii)} \begin{center} \begin{tabular}[c] {| c | c | c | c | c | c | c| c | c |} \hline $m$ & $n$ & $\eta$ & MinIT & MaxIT &MinCPU(s) &MaxCPU(s) & Nyes & Nno\\ \hline 3 & 3 & $\rho-1$ & 8 & 8 &18.7825 &30.9506 & & 10 \\ \cline{3-9} & & $\rho+1$ & 1 & 1 &0.98 &2.96 & 10 & \\ \cline{3-9} & & $\rho+10$ & 1 & 1 &1.02 &3.27 & 10 & \\ \hline 3 & 4 & $\rho-1$ & 16 & 16 &70.9337 &82.2125 & & 10 \\ \cline{3-9} & & $\rho+1$ & 1 & 1 &1.95 &3.8844 & 10 & \\ \cline{3-9} & & $\rho+10$ & 1 & 1 &2.184 &4.1808 & 10 & \\ \hline 4 & 3 & $\rho-1$ & 16 & 22 &72.5249 &106.3147 & & 10 \\ \cline{3-9} & & $\rho+1$ & 1 & 1 &1.5912 &3.2916 & 10 & \\ \cline{3-9} & & $\rho+10$ & 1 & 1 &1.7472 &3.354 & 10 & \\ \hline 4 & 4 & $\rho-1$ & 11 & 11 &47.2683 &52.3695 & & 10 \\ \cline{3-9} & & $\rho+1$ & 1 & 1 &1.7628 &3.8688 & 10 & \\ \cline{3-9} & & $\rho+10$ & 1 & 1 &1.8252 &3.978 & 10 & \\ \hline 6 & 3 & $\rho-1$ & 17 & 27 &97.6254 &157.3114 & & 10 \\ \cline{3-9} & & $\rho+1$ & 1 & 1 &2.574 &2.9016 & 10 & \\ \cline{3-9} & & $\rho+10$ & 1 & 1 &2.5116 &2.6676 & 10 & \\ \hline \end{tabular} \end{center} \end{table} From Tables 1 and 2, it is easy to see that the tested tensors are classified correctly within a small number of iterations. In particular, for every strictly copositive tensor we tested, a single iteration reaches the right conclusion. \begin{example}\label{example2} We test the following three tensors: (i) Suppose that $\mathcal{A}\in S_{6,3}$ is given by \begin{eqnarray*} \left\{\begin{array}{ll} \sum_{i_1i_2i_3i_4i_5i_6\in S_{\pi(111122)}}a_{i_1i_2i_3i_4i_5i_6}=1,\\ \sum_{i_1i_2i_3i_4i_5i_6\in S_{\pi(112222)}}a_{i_1i_2i_3i_4i_5i_6}=1,\\ a_{333333}=1,\\ \sum_{i_1i_2i_3i_4i_5i_6\in S_{\pi(112233)}}a_{i_1i_2i_3i_4i_5i_6}=-3; \end{array}\right.
\end{eqnarray*} (ii) Suppose that $\mathcal{A}\in S_{6,3}$ is given by \begin{eqnarray*} \left\{\begin{array}{ll} a_{111111}=1,\;\; a_{222222}=1,\;\; a_{333333}=1,\\ \sum_{i_1i_2i_3i_4i_5i_6\in S_{\pi(111122)}}a_{i_1i_2i_3i_4i_5i_6}=-1,\\ \sum_{i_1i_2i_3i_4i_5i_6\in S_{\pi(112222)}}a_{i_1i_2i_3i_4i_5i_6}=-1,\\ \sum_{i_1i_2i_3i_4i_5i_6\in S_{\pi(111133)}}a_{i_1i_2i_3i_4i_5i_6}=-1,\\ \sum_{i_1i_2i_3i_4i_5i_6\in S_{\pi(113333)}}a_{i_1i_2i_3i_4i_5i_6}=-1,\\ \sum_{i_1i_2i_3i_4i_5i_6\in S_{\pi(222233)}}a_{i_1i_2i_3i_4i_5i_6}=-1,\\ \sum_{i_1i_2i_3i_4i_5i_6\in S_{\pi(223333)}}a_{i_1i_2i_3i_4i_5i_6}=-1,\\ \sum_{i_1i_2i_3i_4i_5i_6\in S_{\pi(112233)}}a_{i_1i_2i_3i_4i_5i_6}=3; \end{array}\right. \end{eqnarray*} (iii) Suppose that $\mathcal{A}\in S_{6,3}$ is given by \begin{eqnarray*} \left\{\begin{array}{ll} \sum_{i_1i_2i_3i_4i_5i_6\in S_{\pi(111122)}}a_{i_1i_2i_3i_4i_5i_6}=1,\\ \sum_{i_1i_2i_3i_4i_5i_6\in S_{\pi(222233)}}a_{i_1i_2i_3i_4i_5i_6}=1,\\ \sum_{i_1i_2i_3i_4i_5i_6\in S_{\pi(333311)}}a_{i_1i_2i_3i_4i_5i_6}=1,\\ \sum_{i_1i_2i_3i_4i_5i_6\in S_{\pi(112233)}}a_{i_1i_2i_3i_4i_5i_6}=-3. \end{array}\right. \end{eqnarray*} \end{example} The polynomials corresponding to the above tensors are the famous Motzkin, Robinson and Choi-Lam polynomials, respectively. It is easy to see that all three tensors are copositive, but not strictly copositive. We use Algorithm 5.2 to test the tensors $\mathcal{A}+\sigma \mathcal{E}$ with $\sigma>0$, and the numerical results are listed in Table 3.
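For instance, the tensor in part (i) encodes the Motzkin form $\mathcal{A}{\bf x}^6=x_1^4x_2^2+x_1^2x_2^4+x_3^6-3x_1^2x_2^2x_3^2$ through the permutation-class sums above. A direct numerical check (ours) confirms the claim that it is copositive but not strictly copositive: the form is nonnegative on sampled nonnegative points (by the AM-GM inequality) yet vanishes at ${\bf x}=(1,1,1)^T$.

```python
import numpy as np

def motzkin(x1, x2, x3):
    """The form A x^6 of the tensor in part (i): the Motzkin polynomial."""
    return x1**4 * x2**2 + x1**2 * x2**4 + x3**6 - 3 * x1**2 * x2**2 * x3**2

# vanishes at (1,1,1): copositive, but not strictly copositive
assert motzkin(1.0, 1.0, 1.0) == 0.0

# nonnegative at random nonnegative points (AM-GM guarantees this)
rng = np.random.default_rng(0)
assert all(motzkin(*p) >= -1e-12 for p in rng.random((1000, 3)))
```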
\begin{table}[ht] \caption{The numerical results of the problem in Example \ref{example2}} \begin{center} \begin{tabular}[c] {| c | c | c | c |} \hline & Example \ref{example2}(i) & Example \ref{example2}(ii) & Example \ref{example2}(iii) \\ \cline{2-4} $\sigma$ & IT/CPU(s) & IT/CPU(s) & IT/CPU(s) \\ \hline 0.01 & 3/14.3 & 11/60 & 5/27.4 \\ \hline 0.001 & 19/92.8 & 27/152 & 17/99.5 \\ \hline 0.0001 & 55/291 & 67/379 & 35/209 \\ \hline \end{tabular} \end{center} \end{table} All tensors tested in Examples \ref{example1} and \ref{example2} were also tested in \cite{chen16}. Comparing the numerical results shown in Tables 1-3 with those given in \cite{chen16}, choosing $\mathbb{M}= \mathbb{H}$ requires the fewest iterations, but each iteration is so costly that the overall runtime is in most cases still higher than for $\mathbb{M}=\mathbb{N}^+_{m,n}$. {\bf Part 2 (Illustration of Theorem 5.2)}. As stated in the last section, for an $m$-uniform hypergraph $G=(V,E)$ with $V=[n]$, if there is $k\in [n]$ such that $$ k^{m-1}(\mathcal{A}+\mathcal{I})-\mathcal{E}\in \mathbb{COP}_{m,n}~~{\rm and}~~(k-1)^{m-1}(\mathcal{A}+\mathcal{I})-\mathcal{E}\notin \mathbb{COP}_{m,n}, $$ then we know that the coclique number of $G$ satisfies $\omega(G)\leq k$. In this way, we can bound the coclique number of an $m$-uniform hypergraph. Conversely, if the coclique number of an $m$-uniform hypergraph is known, we can check the main result obtained in Section 5. In this part, we illustrate Theorem 5.2 with two examples. \begin{example}\label{example-1} Let $V=\{1,2,3\}$ and let $E$ be a set of subsets of $V$. Let $G=(V,E)$ be a $3$-uniform hypergraph. If $V_0=\{1\}$, $V_1=\{2,3\}$ and $E=\left\{\{1,2,3\}\right\}$, then $G$ is a hyper-star.
\end{example} The adjacency tensor of $G$ is as follows: \begin{eqnarray*} A(:,:,1) = \left(\begin{array}{ccc} 0 & 0 & 0\\ 0 & 0 & \frac{1}{2}\\ 0 & \frac{1}{2} & 0 \end{array}\right), A(:,:,2) = \left(\begin{array}{ccc} 0 & 0 & \frac{1}{2}\\ 0 & 0 & 0\\ \frac{1}{2} & 0 & 0 \end{array}\right), A(:,:,3) = \left(\begin{array}{ccc} 0 & \frac{1}{2} & 0\\ \frac{1}{2} & 0 & 0\\ 0 & 0 & 0 \end{array}\right). \end{eqnarray*} By Algorithm 2, we obtain that $4(\mathcal{A}+\mathcal{I})-\mathcal{E}\notin \mathbb{COP}_{m,n}$ (this can also be seen from $f({\bf x})=4(\mathcal{A}+\mathcal{I}){\bf x}^3-\mathcal{E}{\bf x}^3=-3<0$ at ${\bf x}=(1,1,1)^T\in \mathbb{R}^3$). So, from the monotonicity of $\lambda(\mathcal{A}+\mathcal{I})-\mathcal{E}$ in $\lambda$, we know that $\min_{\lambda \in \mathbb{N}}\{\lambda~|~\lambda(\mathcal{A}+\mathcal{I})-\mathcal{E}\in \mathbb{COP}_{m,n}\}>4$. Since $\omega(G)=n-1=2$, we have $$ \omega(G)^{m-1}\leq \min_{\lambda \in \mathbb{N}}\{\lambda~|~\lambda(\mathcal{A}+\mathcal{I})-\mathcal{E}\in \mathbb{COP}_{m,n}\}, $$ which verifies Theorem 5.2. \begin{example}\label{example-2} Let $V=\{1,2,3,4\}$ and let $E$ be a set of subsets of $V$. Let $G=(V,E)$ be a $4$-uniform hypergraph. If $V_0=\{1\}$, $V_1=\{2,3,4\}$ and $E=\left\{\{1,2,3,4\}\right\}$, then $G$ is a hyper-star.
\end{example} The coefficients of the adjacency tensor $\mathcal A$ of $G$ are as follows: \begin{eqnarray*} &&A(:,:,1,1) = \left(\begin{array}{cccc} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right), A(:,:,2,1) = \left(\begin{array}{cccc} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & \frac{1}{6} \\ 0 & 0 & \frac{1}{6} & 0 \end{array}\right), A(:,:,3,1) = \left(\begin{array}{cccc} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & \frac{1}{6} \\ 0 & 0 & 0 & 0 \\ 0 & \frac{1}{6} & 0 & 0 \end{array}\right),\\ &&A(:,:,4,1) = \left(\begin{array}{cccc} 0 & 0 & 0 & 0 \\ 0 & 0 & \frac{1}{6} & 0\\ 0 & \frac{1}{6} & 0 & 0\\ 0 & 0 & 0 & 0 \end{array}\right), A(:,:,1,2) = \left(\begin{array}{cccc} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & \frac{1}{6} \\ 0 & 0 & \frac{1}{6} & 0 \end{array}\right), A(:,:,2,2) = \left(\begin{array}{cccc} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right),\\ &&A(:,:,3,2) = \left(\begin{array}{cccc} 0 & 0 & 0 & \frac{1}{6} \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ \frac{1}{6} & 0 & 0 & 0 \end{array}\right), A(:,:,4,2) = \left(\begin{array}{cccc} 0 & 0 & \frac{1}{6} & 0 \\ 0 & 0 & 0 & 0 \\ \frac{1}{6} & 0 & 0 & 0\\ 0 & 0 & 0 & 0 \end{array}\right), A(:,:,1,3) = \left(\begin{array}{cccc} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & \frac{1}{6} \\ 0 & 0 & 0 & 0 \\ 0 &\frac{1}{6} & 0 & 0 \end{array}\right),\\ &&A(:,:,2,3) = \left(\begin{array}{cccc} 0 & 0 & 0 & \frac{1}{6} \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ \frac{1}{6} & 0 & 0 & 0 \end{array}\right), A(:,:,3,3) = \left(\begin{array}{cccc} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right), A(:,:,4,3) = \left(\begin{array}{cccc} 0 & \frac{1}{6} & 0 & 0\\ \frac{1}{6} & 0 & 0 & 0\\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right),\\ &&A(:,:,1,4) = \left(\begin{array}{cccc} 0 & 0 & 0 & 0 \\ 0 & 0 & \frac{1}{6} & 0\\ 0 & \frac{1}{6} & 0 & 0\\ 0 & 0 & 0 & 0 \end{array}\right), A(:,:,2,4) = \left(\begin{array}{cccc} 0 & 0 & \frac{1}{6} & 0 \\ 0 & 0 & 0 & 0 \\ \frac{1}{6} & 0 & 0 & 0\\ 0 & 0 & 0 & 0 \end{array}\right), A(:,:,3,4) = \left(\begin{array}{cccc} 0 & \frac{1}{6} & 0 & 0\\ \frac{1}{6} & 0 & 0 & 0\\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right),\\ &&A(:,:,4,4) = \left(\begin{array}{cccc} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right). \end{eqnarray*} By running Algorithm 2, we obtain that $8(\mathcal{A}+\mathcal{I})-\mathcal{E} \notin \mathbb{COP}_{m,n}$; in fact, evaluating at ${\bf x}=(1,1,1,1)^T$ gives $\lambda(\mathcal{A}+\mathcal{I}){\bf x}^4-\mathcal{E}{\bf x}^4=8\lambda-256<0$ for every $\lambda\leq 31$, so $$\min_{\lambda \in \mathbb{N}}\{\lambda~|~\lambda(\mathcal{A}+\mathcal{I})-\mathcal{E}\in \mathbb{COP}_{m,n}\}\geq 27.$$ Since $\omega(G)=3$, it holds that $$ \omega(G)^{m-1}\leq \min_{\lambda \in \mathbb{N}}\{\lambda~|~\lambda(\mathcal{A}+\mathcal{I})-\mathcal{E}\in \mathbb{COP}_{m,n}\}, $$ and hence Theorem 5.2 is verified. \setcounter{equation}{0} \section{Checking vacuum stability for $\mathbb{Z}_3$ scalar dark matter} Kannike \cite{KK16} studied the vacuum stability of a general scalar potential of a few fields and gave explicit vacuum stability conditions for more complicated potentials. One important physical example in \cite{KK16} is scalar dark matter stable under a $\mathbb{Z}_3$ discrete group.
The most general scalar quartic potential of the Standard Model (SM) Higgs $H_1$, an inert doublet $H_2$ and a complex singlet $S$ which is symmetric under a $\mathbb{Z}_3$ group is \begin{equation}\label{e71} \begin{aligned} V(h_1,h_2,S)=& \lambda_1|H_1|^4+\lambda_2|H_2|^4+\lambda_3|H_1|^2|H_2|^2+\lambda_4(H_1^{\dag}H_2)(H_2^{\dag}H_1)+\lambda_S|S|^4 +\lambda_{S1}|S|^2|H_1|^2\\ &+\lambda_{S2}|S|^2|H_2|^2+\frac{1}{2}(\lambda_{S12}S^2H_1^{\dag}H_2+\lambda_{S12}^*{S^{\dag}}^2H_2^{\dag}H_1) \\ =&\lambda_1h_1^4+\lambda_2h_2^4+\lambda_3h_1^2h_2^2+\lambda_4\rho^2h_1^2h_2^2+\lambda_Ss^4+\lambda_{S1}s^2h_1^2 +\lambda_{S2}s^2h_2^2-|\lambda_{S12}|\rho s^2h_1h_2 \\ \equiv& \lambda_Ss^4+M^2(h_1,h_2)s^2+V(h_1,h_2), \end{aligned} \end{equation} where $M^2(h_1,h_2):=\lambda_{S1}h_1^2 +\lambda_{S2}h_2^2-|\lambda_{S12}|\rho h_1h_2$ and $V(h_1,h_2):=V(h_1,h_2,0)$. Here, in the physical setting, the variables $h_1, h_2$ and $s$ are nonnegative since they are magnitudes of scalar fields, so the coupling tensor $\mathcal V$ of coefficients of (\ref{e71}) has to be copositive. This has to hold for all values of the extra parameter $\rho$, which ranges from 0 to 1, so the potential has to be minimized or scanned over it. Now, we give the explicit form of the coupling tensor of (\ref{e71}) as ${\mathcal V}=(V_{i_1i_2i_3i_4})$, an order-4, dimension-3 real symmetric tensor: $$ V_{1111}=\lambda_1,~~V_{2222}=\lambda_2,~~V_{3333}=\lambda_S, $$ $$ V_{1122}=\frac{1}{6}(\lambda_3+\lambda_4\rho^2),~~ V_{1133}=\frac{1}{6}\lambda_{S1},~~ V_{2233}=\frac{1}{6}\lambda_{S2},~~ V_{1233}=-\frac{1}{12}|\lambda_{S12}|\rho, $$ and $V_{i_1i_2i_3i_4}=0$ otherwise. Then, by Algorithm 2, we take a series of explicit coefficients and check the vacuum stability of the potential (\ref{e71}). As for the $\lambda$'s in the entries of $\mathcal V$, in particle physics all calculated quantities are expanded in series of $\lambda_i/(4 \pi)$.
Due to the perturbativity requirement of these series, the absolute values of the $\lambda$ coefficients must be no larger than $4 \pi$. On the other hand, for the coupling tensor to be copositive, the diagonal entries have to be nonnegative. Hence, we can take from the beginning that $0 \leq V_{1111}, V_{2222}, V_{3333} \leq 4 \pi$. Then, because the remaining entries of $\mathcal V$ are a $\lambda$ parameter times some coefficient, their lower and upper bounds change accordingly. So $-2 \times 4 \pi/6 \leq V_{1122} \leq 2 \times 4 \pi/6$ (with an extra factor $2$ because it is the sum of two $\lambda$'s), $-4 \pi/6 \leq V_{1133} \leq 4 \pi/6$, $-4 \pi/6 \leq V_{2233} \leq 4 \pi/6$, and $-4 \pi/12 \leq V_{1233} \leq 0$. When $\rho\neq 0$, Kannike \cite{KK16} obtained that the conditions for the potential (\ref{e71}), symmetric under a $\mathbb{Z}_3$ group, to be bounded from below are \begin{eqnarray}\label{e72} \left\{\begin{array}{l} \lambda_S>0,\\ V(h_1,h_2)>0,\\ 0<h_1^2<1, 0<h_2^2<1, 0<s^2<1,\;\mbox{\rm and}\; 0<\rho^2<1\quad \Longrightarrow \quad V_{\min}>0, \end{array}\right.
\end{eqnarray} where \begin{eqnarray}\label{e73} \begin{array}{rcl} \rho&=&\left(|\lambda_{S12}|s^2\right)/\left(2\lambda_4h_1h_2\right),\\ h_1^2&=&\frac 12\left\{(2\lambda_2-\lambda_3)(4\lambda_S\lambda_4-|\lambda_{S12}|^2)+2\lambda_4\left[ (\lambda_3+\lambda_{S1})\lambda_{S2}-2\lambda_2\lambda_{S1}-\lambda_{S2}^2\right]\right\}/t,\\ h_2^2&=&\frac 12\left\{(2\lambda_1-\lambda_3)(4\lambda_S\lambda_4-|\lambda_{S12}|^2)+2\lambda_4\left[ (\lambda_3+\lambda_{S2})\lambda_{S1}-2\lambda_1\lambda_{S2}-\lambda_{S1}^2\right]\right\}/t,\\ s^2&=& \lambda_4\left(4\lambda_1\lambda_2-\lambda_3^2-2\lambda_1\lambda_{S2}-2\lambda_2\lambda_{S1}+\lambda_3(\lambda_{S1}+\lambda_{S2})\right)/t,\\ V_{\min}&=& \frac{1}{4}\left[(4\lambda_1\lambda_2-\lambda_3^2)(4\lambda_S\lambda_4-|\lambda_{S12}|^2)-4\lambda_4(\lambda_1\lambda_{S2}^2 +\lambda_2\lambda_{S1}^2-\lambda_3\lambda_{S1}\lambda_{S2})\right]/t \end{array} \end{eqnarray} with \begin{eqnarray*} \begin{array}{rcl} t&:=&(\lambda_1+\lambda_2-\lambda_3)\times (4\lambda_S\lambda_4-|\lambda_{S12}|^2)\\ &&+\lambda_4\left[4\lambda_1\lambda_2-\lambda_3^2-4\lambda_1\lambda_{S2}-4\lambda_2\lambda_{S1} +2\lambda_3(\lambda_{S1}+\lambda_{S2})-(\lambda_{S1}-\lambda_{S2})^2\right]. \end{array} \end{eqnarray*} The third condition in (\ref{e72}) is replaced by $V_{\rho=0}>0$ when $\rho=0$, and by $V_{\rho=1}>0$ when $\rho=1$. Now, we implement Algorithm 2 to test the copositivity of the tensor defined by the potential (\ref{e71}); the numerical results are listed in Table 4, where the values of $h_1^2, h_2^2, s^2, \rho$ and $V_{\min}$ are computed by (\ref{e73}), ``IT" denotes the number of iterations, ``CPU(s)" denotes the CPU time in seconds, ``Yes" denotes the output result that the tested tensor is copositive, and ``No" denotes the output result that the tested tensor is not copositive.
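The quantities in (\ref{e73}) are straightforward to evaluate numerically. The sketch below (our code; the variable names mirror the $\lambda$'s) reproduces the first row of Table 4, where all couplings equal $\pi$: $h_1^2=h_2^2=0.25$, $s^2=0.5$, $\rho=1$ and $V_{\min}=5\pi/8\approx 1.96$.

```python
from math import pi, sqrt

def stationary_point(l1, l2, lS, l3, l4, lS1, lS2, lS12):
    """Evaluate (7.3): stationary values h1^2, h2^2, s^2, rho and V_min."""
    a = abs(lS12)
    t = (l1 + l2 - l3) * (4 * lS * l4 - a**2) \
        + l4 * (4 * l1 * l2 - l3**2 - 4 * l1 * lS2 - 4 * l2 * lS1
                + 2 * l3 * (lS1 + lS2) - (lS1 - lS2) ** 2)
    h1sq = 0.5 * ((2 * l2 - l3) * (4 * lS * l4 - a**2)
                  + 2 * l4 * ((l3 + lS1) * lS2 - 2 * l2 * lS1 - lS2**2)) / t
    h2sq = 0.5 * ((2 * l1 - l3) * (4 * lS * l4 - a**2)
                  + 2 * l4 * ((l3 + lS2) * lS1 - 2 * l1 * lS2 - lS1**2)) / t
    ssq = l4 * (4 * l1 * l2 - l3**2 - 2 * l1 * lS2 - 2 * l2 * lS1
                + l3 * (lS1 + lS2)) / t
    rho = a * ssq / (2 * l4 * sqrt(h1sq) * sqrt(h2sq))
    vmin = 0.25 * ((4 * l1 * l2 - l3**2) * (4 * lS * l4 - a**2)
                   - 4 * l4 * (l1 * lS2**2 + l2 * lS1**2 - l3 * lS1 * lS2)) / t
    return h1sq, h2sq, ssq, rho, vmin
```

For example, `stationary_point(pi, pi, pi, pi, pi, pi, pi, pi)` returns `(0.25, 0.25, 0.5, 1.0, 1.963...)`, matching the first row of Table 4. (The `sqrt` calls assume $h_1^2,h_2^2>0$; rows of Table 4 with negative $h_i^2$ correspond to the complex $\rho$ values reported there.)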
\begin{table} \caption{The numerical results for the stability of the potential (\ref{e71})} \begin{center} \begin{tabular}[c] {| c | c | c | c | c | c | c| c | c | c | c | c| c | c | c | c |} \hline $\lambda_1$ & $\lambda_2$ & $\lambda_S$ & $\lambda_3$ & $\lambda_4$ & $\lambda_{S1}$ & $\lambda_{S2}$ & $\lambda_{S12}$ & $h_1^2$ & $h_2^2$ & $s^2$ & $\rho$ & $V_{\min}$ & IT & CPU(s) & Result \\ \hline $\pi$ & $\pi$ & $\pi$ & $\pi$ & $\pi$ &$\pi$ &$\pi$ & $\pi$ & 0.25 & 0.25 & 0.5 & 1 & 1.96 & 3 & 0.09 & Yes \\ \hline $\pi$ & $\pi$ & $\pi$ & $\pi$ & $\pi$ &-$\pi$ &-$\pi$ & 0 & 0.27 & 0.27 & 0.45 & 0 & 0.57 & 19 & 0.37 & Yes \\ \hline $\pi$/4& $\pi$/4 & $\pi$/4 & $\pi$ & $\pi$ &0 &0 & 4$\pi$ & 0.56 & 0.56 &-0.11 & 0.4 & 1.31 & 9 & 0.17 & No \\ \hline $\pi$ & $\pi$ & $\pi$/2 & $\pi$ & $\pi$ &0 &0 & 4$\pi$ & 0.64 & 0.64 &-0.27 & 0.86 & 3.00 & 5 & 0.08 & No \\ \hline $\pi$ & $\pi$ & -$\pi$ & $\pi$ & $\pi$ &$\pi$ &$\pi$ & 4$\pi$ & 0.52 & 0.52 &-0.05 & 0.18 & 2.39 & 1 & 0.02 & No \\ \hline $-\pi$ & $\pi$ & $\pi$ & $\pi$ & $\pi$ &$\pi$ &$\pi$ & 4$\pi$ &-0.64 & 1.91 &-0.27 &0.49i & 4.57 & 1 & 0.02 & No \\ \hline $\pi$ & $-\pi$ & $\pi$ & $\pi$ & $\pi$ &$\pi$ &$\pi$ & 4$\pi$ & 1.91 &-0.64 &-0.27 &0.49i & 4.57 & 1 & 0.02 & No \\ \hline $\pi$ & $\pi$ & -$\pi$ & $\pi$ & $\pi$ &$\pi$ &$\pi$ & 3$\pi$ & 0.54 & 0.54 &-0.07 & 0.2 & 2.41 & 1 & 0.02 & No \\ \hline $-\pi$ & $\pi$ & $\pi$ & $\pi$ & $\pi$ &$\pi$ &$\pi$ & 3$\pi$ &-0.88 & 2.63 &-0.75 &0.74i & 5.69 & 1 & 0.02 & No \\ \hline $\pi$ & $-\pi$ & $\pi$ & $\pi$ & $\pi$ &$\pi$ &$\pi$ & 3$\pi$ & 2.63 &-0.88 &-0.75 &0.74i & 5.69 & 1 & 0.02 & No \\ \hline $\pi$/4& $\pi$/4 & $\pi$/4 & $\pi$/4 & $\pi$/4 &$\pi$/2 &$\pi$/2 & 4$\pi$ & 0.50 &0.50 &0.004 & 0.06 & 0.59 & 3 & 0.05 & No \\ \hline $\pi$ & $\pi$ & $\pi$ & $\pi$ & $\pi$ &$\pi$ &$\pi$ & $\pi$/2& 0.32 & 0.32 & 0.36 & 0.29 & 2.07 & 3 & 0.06 & Yes \\ \hline 2$\pi$ & $\pi$ & 2$\pi$ & $\pi$ & $\pi$ &$\pi$ &$\pi$ & $\pi$/2& 0.20 & 0.59 & 0.21 & 0.15 & 2.51 & 3 & 0.06 & Yes \\ \hline $\pi$ & $\pi$ & 2$\pi$ & $\pi$ & $\pi$ &$\pi$ & 0 & $\pi$/2& 0.24 & 0.50 & 0.26 & 0.19 & 1.95 & 7 & 0.13 & Yes \\ \hline $\pi$ & $\pi$ & 2$\pi$ & $\pi$ & $\pi$ & 0 &$\pi$ & $\pi$/2& 0.50 & 0.24 & 0.26 & 0.19 & 1.95 & 7 & 0.14 & Yes \\ \hline $2\pi$ & $\pi$ & 2$\pi$ & $\pi$ & $\pi$ & 0 &$\pi$ & $\pi$/2& 0.25 & 0.49 & 0.26 & 0.18 & 2.34 & 7 & 0.14 & Yes \\ \hline $\pi$ & $2\pi$ & 2$\pi$ & $\pi$ & $\pi$ & 0 & $\pi$ & $\pi$/2& 0.60 & 0.10 & 0.31 & 0.32 & 2.02 & 7 & 0.14 & Yes \\ \hline $2\pi$ & $2\pi$ & $\pi$ & $\pi$ & $\pi$ & 0 & $\pi$ & $\pi$/2& 0.29 & 0.08 & 0.62 & 0.99 & 1.97 & 7 & 0.14 & Yes \\ \hline $\pi$ & $\pi$ & $2\pi$ & 0 & $\pi$ & $\pi$ & 0 & $2\pi$ & 0.29 & 0.43 & 0.29 & 0.82 & 1.35 & 11 & 0.20 & Yes \\ \hline $\pi$/4& $\pi$/4 & $\pi$ & 0 & $\pi$ & $\pi$ & 0 & $2\pi$ & 0.29 & 0.57 & 0.14 & 0.35 & 0.45 & 7 & 0.14 & Yes \\ \hline $\pi$ & $\pi$ & $\pi$ & 0 & $\pi$ & -$\pi$ & 0 & $\pi$/2& 0.40 & 0.19 & 0.41 & 0.38 & 0.60 & 19 & 0.36 & Yes \\ \hline $\pi$ & $\pi$ & $\pi$ & 0 & $\pi$ & -$\pi$ & -$\pi$ & $2\pi$ & 0.17 & 0.17 & 0.67 & 4 &-0.52 & 17 & 0.31 & No \\ \hline $\pi$ & $\pi$ & $\pi$ & $\pi$ & $\pi$ & -$\pi$ & -$\pi$ & $2\pi$ & 0.14 & 0.14 & 0.71 & 5 &-0.45 & 17 & 0.31 & No \\ \hline \end{tabular} \end{center} \end{table} It should be noted that it is easy to check that the parameters satisfy conditions (\ref{e72}) whenever the tested result is ``Yes", and do not satisfy conditions (\ref{e72}) whenever the tested result is ``No". We also tested other cases, and the computational behaviour is similar. This indicates that our algorithm is efficient and applicable to such physical problems.
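For parameter sets where the algorithm answers ``Yes", one can also cross-check copositivity by sampling the quartic form $\mathcal{V}{\bf x}^4$ on nonnegative points (our check, a necessary condition only, not a certificate). For the first row of Table 4 (all couplings equal to $\pi$, $\rho=1$) the form reduces to $\pi(h_1^4+h_2^4+s^4+2h_1^2h_2^2+h_1^2s^2+h_2^2s^2-h_1h_2s^2)$, which sampling confirms to be nonnegative:

```python
import numpy as np

def V_form(h1, h2, s, l1, l2, lS, l3, l4, lS1, lS2, lS12, rho):
    """Quartic potential (7.1) at nonnegative field magnitudes (h1, h2, s)."""
    return (l1 * h1**4 + l2 * h2**4 + (l3 + l4 * rho**2) * h1**2 * h2**2
            + lS * s**4 + lS1 * s**2 * h1**2 + lS2 * s**2 * h2**2
            - abs(lS12) * rho * s**2 * h1 * h2)

pi = np.pi
rng = np.random.default_rng(1)
vals = [V_form(*rng.random(3), pi, pi, pi, pi, pi, pi, pi, pi, 1.0)
        for _ in range(2000)]
assert min(vals) >= 0.0   # consistent with the "Yes" verdict in Table 4
```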
\setcounter{equation}{0} \section{Conclusions} In this paper, an alternative form of a previously proposed algorithm for testing the copositivity of high-order tensors is given, and applications of the proposed algorithm to testing the copositivity of the coupling tensor in a vacuum stability model in particle physics and to computing the coclique number of a uniform hypergraph are presented. Several new conditions for copositivity of tensors, based on the representative matrix of a simplex, are proved. We see that the choice of the set $\mathbb{M}$ is crucial for the performance of this algorithm, and we observe that verifying copositivity of a tensor is much harder than verifying non-copositivity. However, some interesting questions still need to be studied in the future: {\bf 1}. Are there better choices for the set $\mathbb{M}$ in Algorithm 2? {\bf 2}. How can the proposed method be adapted so that it also works for tensors that are copositive but not strictly copositive? {\bf Acknowledgment} We are thankful to Kristjan Kannike for discussions on the vacuum stability model. \begin{thebibliography}{99} \bibitem{BHW16} X.L. Bai, Z.H. Huang, Y. Wang, {\it Global uniqueness and solvability for tensor complementarity problems}, Journal of Optimization Theory and Applications 170 (2016) 72-84. \bibitem{Bundfuss08} S. Bundfuss, M. D\"{u}r, {\it Algorithmic copositivity detection by simplicial partition}, Linear Algebra and its Applications 428(7) (2008) 1511-1523. \bibitem{CQW} M. Che, L. Qi and Y. Wei, {\it Positive definite tensors to nonlinear complementarity problems}, Journal of Optimization Theory and Applications 168 (2016) 475-487. \bibitem{CCLQ} H. Chen, Y. Chen, G. Li, L. Qi, {\it Finding the maximum eigenvalue of a class of tensors with applications in copositivity test and hypergraphs}, arXiv preprint arXiv:1511.02328v2, 2016. \bibitem{chen16} H. Chen, Z.H. Huang, L. Qi, {\it Copositivity detection of tensors: theory and algorithm}, arXiv preprint arXiv:1603.01823, 2016.
\bibitem{CLQ2016} H. Chen, G. Li and L. Qi, {\it SOS Tensor Decomposition: Theory and Applications}, Communications in Mathematical Sciences 8 (2016) 2073-2100. \bibitem{chen14} H. Chen, L. Qi, {\it Positive definiteness and semi-definiteness of even order symmetric Cauchy tensors}, Journal of Industrial and Management Optimization 11 (2015) 1263-1274. \bibitem{Cooper12} J. Cooper, A. Dutle, {\it Spectra of uniform hypergraphs}, Linear Algebra Appl. 436 (2012) 3268-3292. \bibitem{Dickinson14} P.J.C. Dickinson, L. Gijben, {\it On the computational complexity of membership problems for the completely positive cone and its dual}, Computational Optimization and Applications 57 (2014) 403-415. \bibitem{Ding15} W. Ding, Z. Luo, L. Qi, {\it $P$-Tensors, $P _0 $-Tensors, and Tensor Complementarity Problem}, arXiv preprint arXiv:1507.06731 (2015). \bibitem{Lim2013} C.J. Hillar, L.H. Lim, {\it Most tensor problems are NP-hard}, Journal of the ACM (JACM) 60 (2013) 45. \bibitem{HQ2016} Z.H. Huang, L. Qi, {\it Formulating an n-person noncooperative game as a tensor complementarity problem}, Computational Optimization and Applications DOI: 10.1007/s10589-016-9872-7 (2016). \bibitem{Kannan15} M.R. Kannan, N. Shaked-Monderer, A. Berman, {\it Some properties of strong $H$-tensors and general $H$-tensors}, Linear Algebra and its Applications 476 (2015) 42-55. \bibitem{KK16} K. Kannike, {\it Vacuum Stability of a General Scalar Potential of a Few Fields}, European Physical Journal C 76 (2016) 324. \bibitem{K2015} T. Kolda, {\it Numerical optimization for symmetric tensor decomposition}, Mathematical Programming Ser.B, 151 (2015) 225-248. \bibitem{LCL} C. Li, Y. Li, {\it Double B tensors and quasi-double B tensors}, Linear Algebra and its Applications 466 (2015) 343-356. \bibitem{LCQL} C. Li, L. Qi, Y. Li, {\it MB-tensors and MB$_0$-tensors}, Linear Algebra and its Applications 484 (2015) 141-153. \bibitem{LWZ} C. Li, F. Wang, J. Zhao, Y. Zhu, Y. 
Li, {\it Criterions for the positive definiteness of real supersymmetric tensors}, Journal of Computational and Applied Mathematics 255 (2014) 1-14. \bibitem{Ling16} C. Ling, H. He, L. Qi, {\it On the cone eigenvalue complementarity problem for higher-order tensors}, Computational Optimization and Applications 63(1) (2016) 143-168. \bibitem{Yalmip1} J. L\"{o}fberg, {\it YALMIP : A Toolbox for Modeling and Optimization in MATLAB}, In Proceedings of the CACSD Conference, Taipei, Taiwan, 2004. \bibitem{Yalmip2} J. L\"{o}fberg, {\it Pre- and post-processing sums-of-squares programs in practice}, IEEE Tran. on Auto. Cont. 54 (2009) 1007-1011. \bibitem{LQ} Z. Luo, L. Qi, {\it Completely positive tensors: Properties, easily checkable subclasses and tractable relaxations}, SIAM Journal on Matrix Analysis and Applications 37 (2016) 1675-1698. \bibitem{Murty87} K.G. Murty, S.N. Kabadi, {\it Some NP-complete problems in quadratic and nonlinear programming}, Mathematical Programming 39(2) (1987) 117-129. \bibitem{Pena14} J. Pena, J.C. Vera, L.F. Zuluaga, {\it Completely positive reformulations for polynomial optimization}, Mathematical Programming 151(2) (2014) 405-431. \bibitem{Qi05} L. Qi, {\it Eigenvalue of a real supersymmetric tensor}, J. Symb. Comput. 40 (2005) 1302-1324. \bibitem{qlq2013} L. Qi, {\it Symmetric nonnegative tensors and copositive tensors}, Linear Algebra and its Applications 439 (2013) 228-238. \bibitem{Qi14} L. Qi, {\it H$^+$-eigenvalues of Laplacian and signless Laplacian tensors}, Communications in Mathematical Sciences 12 (2014) 1045-1064. \bibitem{qi14} L. Qi, Y. Song, {\it An even order symmetric B tensor is positive definite}, Linear Algebra and its Applications 457 (2014) 303-312. \bibitem{QXX14} L. Qi, C. Xu, Y. Xu, {\it Nonnegative tensor factorization, completely positive tensors, and a hierarchical elimination algorithm}, SIAM Journal on Matrix Analysis and Applications 35(4) (2014) 1227-1241. \bibitem{Shao13} J.Y. 
Shao, {\it A general product of tensors with applications}, Linear Algebra and its Applications 439 (2013) 2350-2366. \bibitem{Song15} Y. Song, L. Qi, {\it Necessary and sufficient conditions for copositive tensors}, Linear and Multilinear Algebra 63(1) (2015) 120-131. \bibitem{SQ} Y. Song, L. Qi, {\it On strictly semi-positive tensors}, arXiv:1509.01327 (2015). \bibitem{SQ15} Y. Song, L. Qi, {\it Tensor complementarity problem and semi-positive tensors}, Journal of Optimization Theory and Applications 169 (2016) 1069-1078. \bibitem{song2016} Y. Song, L. Qi, {\it Eigenvalue analysis of constrained minimization problem for homogeneous polynomials}, Journal of Global Optimization 64 (2016) 563-575. \bibitem{Sponsel12} J. Sponsel, S. Bundfuss, M. D$\ddot{u}$r, {\it An improved algorithm to test copositivity}, Journal of Global Optimization 52(3) (2012) 537-551. \bibitem{sturm99} J.F. Sturm, {\it Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones}, Optimization Methods and Software 11/12 (1999) 625-653. \bibitem{Tanaka16} A. Tanaka, A. Yoshise, {\it Tractable Subcones and LP-based Algorithms for Testing Copositivity}, arXiv preprint arXiv:1601.06878 (2016). \bibitem{WHB16} Y. Wang, Z.H. Huang, X.L. Bai, {\it Exceptionally regular tensors and complementarity problems}, Optimization Methods and Software 31(4) (2016) 815-828. \bibitem{Xu11} J. Xu, Y. Yao, {\it An algorithm for determining copositive matrices}, Linear Algebra and its Applications 435(11) (2011) 2784-2792. \bibitem{Yoshi10} A. Yoshise, Y. Matsukawa, {\it On optimization over the doubly nonnegative cone}, Computer-Aided Control System Design (CACSD), 2010 IEEE International Symposium on. IEEE, (2010) 13-18. \bibitem{Zhang12} L. Zhang, L. Qi, G. Zhou, {\it M-tensors and some applications}, SIAM Journal on Matrix Analysis and Applications 35 (2014) 437-452. \end{thebibliography} \end{document}
\begin{document} \title[Variance bounds for Gaussian FPP]{Variance bounds for Gaussian first passage percolation} \author{Vivek Dewan$^*$} \email{[email protected]} \address{$^*$Institut Fourier, Universit\'{e} Grenoble Alpes} \begin{abstract} Recently, many results have been established drawing a parallel between Bernoulli percolation and models given by levels of smooth Gaussian fields with unbounded, strongly decaying correlation (see e.g.\ \cite{beffara2016v}, \cite{rivera2017critical}, \cite{muirhead2018sharp}). In a previous work with D. Gayet \cite{dewangayet}, we started to extend these analogies by adapting the first basic results of classical first passage percolation (first established in \cite{kesten1986aspects}, \cite{cox}) to this new framework: positivity of the time constant and the ball-shape theorem. In the present paper, we give proofs, inspired by Kesten \cite{10.1214/aoap/1177005426}, of further basic properties of the new FPP model: an upper bound on the variance of the FPP pseudometric of the order of the Euclidean distance times a logarithmic factor, and a constant lower bound. Our results notably apply to the Bargmann-Fock field. \end{abstract} \date{\today} \thanks{} \keywords{first passage percolation, Gaussian fields} \subjclass[2010]{60G60 (primary); 60F99 (secondary)} \maketitle \tableofcontents \section{Introduction} \subsection{The models} \paragraph{\bf{Classical FPP}} The classical model of first passage percolation (FPP) was introduced by Hammersley and Welsh in 1965~\cite{Hw}. In its most basic form, it consists in assigning an i.i.d.\ $\Ber(p)$ random variable (seen as a \emph{time}) to every edge in the graph $(\ZZ^d,\mathcal{E})$, where the edges in $\mathcal{E}$ are all edges between pairs of vertices which differ by $\pm1$ in one coordinate. The pseudometric $T$ is then defined as the smallest total weight of an edge path between two vertices, i.e.\ the smallest number of $1$-weight edges along such a path.
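The pseudometric just defined is a shortest-path problem over Bernoulli edge weights, so it can be sketched with a standard Dijkstra search; this is an illustration of ours, not part of the paper, and the grid size, helper names and parameter values below are arbitrary choices:

```python
import heapq
import random

def grid_edges(n):
    """All nearest-neighbour edges of the n x n patch of Z^2."""
    for x in range(n):
        for y in range(n):
            if x + 1 < n:
                yield frozenset({(x, y), (x + 1, y)})
            if y + 1 < n:
                yield frozenset({(x, y), (x, y + 1)})

def passage_time(n, weights, start, end):
    """T(start, end): smallest total edge weight of a path, via Dijkstra."""
    dist = {start: 0}
    heap = [(0, start)]
    while heap:
        d, (x, y) = heapq.heappop(heap)
        if (x, y) == end:
            return d
        if d > dist[(x, y)]:
            continue  # stale heap entry
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nxt[0] < n and 0 <= nxt[1] < n:
                nd = d + weights[frozenset({(x, y), nxt})]
                if nd < dist.get(nxt, float("inf")):
                    dist[nxt] = nd
                    heapq.heappush(heap, (nd, nxt))

# i.i.d. Ber(p) times on every edge, as in the model above
rng, n, p = random.Random(0), 20, 0.3
weights = {e: int(rng.random() < p) for e in grid_edges(n)}
T = passage_time(n, weights, (0, 0), (n - 1, n - 1))
```

With unit weights on every edge, `passage_time` reduces to the graph distance, which gives a quick sanity check; averaging $T(0,nx)/n$ over many samples computed this way is the empirical analogue of the time constant $\mu_p(x)$ discussed next.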
One important quantity in the study of this model is the family of \emph{time constants}: the deterministic limits of $T(0,nx)/n$ for any given $x$, denoted $\mu_p(x)$. It is known that this quantity undergoes a phase transition similar to that of Bernoulli percolation: $\mu_p$ is positive if and only if $1-p$, the probability of a zero-weight edge, is smaller than $p_c(d)$, the critical parameter for Bernoulli percolation, which depends on the dimension $d$. Subsequently, Kesten \cite{10.1214/aoap/1177005426} established results controlling the fluctuations of the quantity $T(0,nx)$, namely that its variance is bounded below by a constant and above by a linear function of $n$. In later works, Benjamini, Kalai and Schramm \cite{BKS} (and, with relaxed conditions on the law, Benaïm and Rossignol \cite{10.1214/07-AIHP124}) obtained a logarithmic improvement of the upper bound, and Newman and Piza \cite{newmanpiza} obtained one for the lower bound. \paragraph{\bf{Gaussian FPP}} Recall that a Gaussian field is a random function $\RR^d\to \RR$ such that for any finite set of points $(x_1,...,x_k)$, $(f(x_1),...,f(x_k))$ is a Gaussian vector. A Gaussian field is fully determined by its \emph{covariance kernel} $$ \kappa(x,y):=\cov(f(x),f(y)). $$ In recent years, there has been increased interest in a percolation model based on such random maps: \emph{Gaussian percolation}. It is \emph{a priori} widely different from classical percolation. It pertains to the large-scale behaviour of excursion sets of smooth Gaussian fields, i.e.\ sets of the form $$ \mathcal{E}_\ell :=\{x\text{ }|\text{ } f(x)\geq -\ell\}. $$ A phase transition similar to that of Bernoulli percolation was established in the planar case, with the parameter $\ell$ of the threshold level playing the role of $p$ (the first properties in \cite{beffara2016v}, the full phase transition for the Bargmann-Fock field in \cite{rivera2017critical}, and finally for planar fields with polynomial decay in \cite{muirhead2018sharp}).
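To make the excursion-set picture concrete, here is a small numerical sketch of our own (not taken from the paper): white noise smoothed by a discrete moving average stands in for a smooth Gaussian field, and the nesting of the sets $\mathcal{E}_\ell$ in the level $\ell$ is checked directly; the grid size and window width are arbitrary choices.

```python
import numpy as np

def smoothed_noise(n, k, seed=0):
    """Crude stand-in for a smooth Gaussian field: a discrete moving
    average of white noise over a k x k window (cf. the moving-average
    representation f = q * W used later), normalised to unit variance."""
    w = np.random.default_rng(seed).standard_normal((n + k, n + k))
    f = sum(w[i:i + n, j:j + n] for i in range(k) for j in range(k))
    return f / k  # each entry sums k^2 i.i.d. N(0,1) values

def excursion_set(f, level):
    """Discretised E_ell = {x : f(x) >= -ell}."""
    return f >= -level

f = smoothed_noise(64, 5)
E0, E1 = excursion_set(f, 0.0), excursion_set(f, 1.0)
assert np.all(E0 <= E1)  # excursion sets are nested in the level ell
```

By symmetry of the field, $\mathcal{E}_0$ covers about half of the grid, and raising $\ell$ sweeps monotonically from the empty set towards the whole plane; this monotone one-parameter family is what plays the role of $p$ above.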
The higher-dimensional phase transition has been studied in recent papers: a sharp phase transition has been established first for fields with bounded correlation \cite{dewan2021upper}, then for fields with fast enough polynomial decorrelation \cite{severo2021sharp}. The \emph{critical level} $\ell_c$ (a priori depending on the field $f$) is defined as \begin{equation}\label{ellcdef} \ell_c:=\sup\{\ell \text{ such that }\PP[\mathcal{E}_\ell\text{ has an unbounded connected component}]=0\}. \end{equation} In a recent paper \cite{dewangayet}, D. Gayet and the author started to investigate an FPP model in the context of Gaussian fields, with a pseudometric naturally defined from excursion sets. We established a time constant result: in this model and under several natural assumptions on the field, the time constant $\mu(x)$ is positive if and only if the level $\ell$ considered is positive, i.e.\ ``most of the space'' has full time cost. In the present paper, we establish both upper and lower bounds on the variance of $T(0,x)$, in the framework of Gaussian fields with exponential decay of correlations, with ideas directly adapted from Kesten's \cite{10.1214/aoap/1177005426}. \subsection{Previous results} We present a general definition of our pseudometric. \begin{defn}\label{pseudometric} Let $\psi$ be a measurable function $\RR\to \RR$ such that \begin{enumerate} \item $\psi\geq 0$ \item $\psi$ is non-decreasing \item \label{sublinear} There exists a constant $C_\psi >0$ such that for any $x\geq 1$, $\psi(x)\leq C_\psi x$. \item $\psi(x)>0 \Leftrightarrow x>0.$ \end{enumerate} Let $f$ be an almost-surely continuous Gaussian field over $\RR^d$, let $A$ and $B$ be two compact subsets of $\RR^d$ and let $\ell\in\RR$. The associated pseudometric is then: \begin{equation} T(A,B):=\inf\limits_{\substack{\gamma\text{ piecewise affine}\\ \text{path from $A$ to $B$}}} \quad \int\limits_\gamma \psi(f+\ell).
\end{equation} \end{defn} \begin{rem} The following two values of $\psi$ yield very natural pseudometrics. \begin{itemize} \item The first one can be seen as a Gaussian equivalent of classical FPP with Bernoulli edge weights: \begin{equation*} \psi=\un_{\RR_+^*}. \end{equation*} \item The second one can be pictured as a metric given by the graph of the Gaussian field, with a ``flat sea'' leveling off the low values: \begin{equation*} \psi(x)=\max(x,0). \end{equation*} \end{itemize} \end{rem} For technical reasons, given any pair of sets $A,B$ both contained in a third set $\mathcal{S}$, we define the \emph{restricted pseudometric} as \begin{equation}\label{restrtime} T^\mathcal{S}(A,B):=\inf\limits_{\substack{\gamma\text{ piecewise affine path}\\ \text{from $A$ to $B$ contained in $\mathcal{S}$}}} \quad \int\limits_\gamma \psi(f+\ell). \end{equation} Notice that for any pair of Gaussian fields $f,g$ such that $f\leq g$, for any bounded sets $A,B$ and for any $\ell$, $T(A,B)$ is at least as large when evaluated with respect to $g$ as when evaluated with respect to $f$. For any $x\in\RR^d$, we call the \emph{time constant} associated with $x$ the real number $\mu(x)$ such that \begin{equation}\label{convergences} \lim\limits_{n \to +\infty} \frac{1}n T(0,nx) = \mu(x)\ \text{ almost surely and in $L^1$}, \end{equation} provided it exists. The main result of the paper \cite{dewangayet} concerning Gaussian FPP was the following (see the next section for the statement of the assumptions). \begin{thm}(\cite[Theorems 2.5 and 2.7]{dewangayet})\label{posBF} Let $f$ be a Gaussian field over $\RR^d$ satisfying Assumption \ref{a:basic} for some $\alpha$-sub-exponential function $F$ with $\alpha>1$. Let $\ell\in\RR$. Let $T$ be an associated pseudometric given by~(\ref{pseudometric}).
Then, \begin{enumerate} \item the associated family of time constants $(\mu_\ell)_{\ell\in \RR}$ given by (\ref{convergences}) is well defined, and the $\mu_\ell$ are either all zero or all non-zero (but finite), in which case each $\mu_\ell$ is a norm. \item If $B_t$ is the ball of radius $t$ for the pseudometric $T$ and $\mathcal{B}_M$ is the ball of radius $M$ for the sup norm:\begin{enumerate} \item If $\mu_\ell =0$ then for any positive $M$, $$ \PP \Bigl[ \mathcal{B}_M \subset \frac{1}t B_t \text{ for all $t$ large enough} \Bigr] =1.$$ \item If $\mu_\ell$ is a norm then there exists a convex compact subset $K$ of $\RR^d$ with non-empty interior such that, for any positive $\epsilon$, \begin{equation}\label{inclusions} \PP \Bigl[(1-\epsilon)K\subset\frac{1}tB_t\subset(1+\epsilon)K\text{ for all $t$ large enough}\Bigr]=1. \end{equation} \end{enumerate} \item \label{postimecst} Assume further that $f$ satisfies Assumption \ref{a:pos} (positivity of correlations). Then, $$ \ell>-\ell_c\Rightarrow \mu_\ell >0. $$ \item \label{postimecst2} Assume further that $f$ is planar. Then, $$ \mu_\ell >0 \Leftrightarrow \ell>0.$$ \end{enumerate} \end{thm} \begin{rem}\label{higher} \begin{enumerate} \item Items (\ref{postimecst}) and (\ref{postimecst2}) can be interpreted intuitively as saying that what determines the positivity of the time constant is whether the instantaneous set percolates: the condition $\ell>-\ell_c$ exactly means that the instantaneous set is below the percolation threshold (item (\ref{postimecst}) is only an implication, but it is expected to be an equivalence). \item In the original work, these results were established in a framework more general than that of Gaussian fields, which only relied on assumptions of decorrelation and decay of one-arm probabilities. Notably, they were also valid for an FPP model given by random Voronoi tilings.
\end{enumerate} \end{rem} This result, as well as our new bounds (Theorems \ref{main2} and \ref{main}), applies to the \emph{Bargmann-Fock field}, which appears in random complex and real algebraic geometry (see \cite{beffara2016v}). It is given by the correlation kernel: $$\kappa(x,y)=\exp\left(-\frac{1}{2}\|x-y\|^2\right). $$ Equivalently, we can explicitly write it as the following random field $f$: \begin{equation}\label{BF} f(x)=\exp\left(-\frac{1}{2}|x|^2\right)\sum\limits_{i,j\in\NN} a_{i,j}\frac{x_1^ix_2^j}{\sqrt{i!j!}}, \end{equation} where the $a_{i,j}$'s are i.i.d.\ centered Gaussian variables of variance $1$. \paragraph{\bf{Acknowledgements}} The author is grateful to Damien Gayet for his many corrections and enlightening discussions. We also thank Stephen Muirhead for valuable insights and comments on an earlier version of this work. We also warmly thank the referee who carefully read and helpfully commented on an earlier version of this paper. \section{Main results} \subsection{Definitions and Assumptions} We first define the \emph{finitely correlated} counterparts of a given Gaussian field, which we will be using repeatedly in what follows. \begin{defn}\label{corrrange} We say that a Gaussian field $f$ over $\RR^d$ has \emph{correlation range $r$} for some $r>0$ if its covariance kernel $\kappa$ verifies $\kappa(x,y)=0$ for all $x,y$ such that $\|x-y\|_2 \geq r$. \end{defn} \begin{defn}\label{d:cutoff} Fix some smooth function $\varphi$ on $\RR$ such that \begin{itemize} \item $\varphi\geq 0,$ \item $\Supp(\varphi)\subseteq [-1,1]$, \item $\varphi=1$ on $[-1/2,1/2]$. \end{itemize} We then define, for any Gaussian field $f=q\star W$ and $r>0$, the counterpart of $f$ with correlation range $r$ to be: $$ f_r:=q_r\star W, $$ where $$ q_r(x)=\varphi(\|x\|_2/r)\,q(x). $$ \end{defn} \paragraph{\bf{Notations}}\label{notations} In the rest of the paper, the function $\psi$ (see Definition \ref{pseudometric}) giving the pseudometric is considered to be fixed.
The notations $\PP$, $\EE$, $\Var$ and $\cov$ denote probability, expectation, variance and covariance, respectively. Any random variable or event involving the pseudometric $T$ is to be interpreted in the sense of Definition \ref{pseudometric}, the field $f$ and the level $\ell$ always being fixed beforehand. For any $r>0$, the index $r$ signifies that we switch to the law where we take the field $f_r$ instead of $f$. For example, if $E$ is an event, we write $$ \PP_r[E] $$ to signify the probability of $E$ for the field $f_r$. When more than one Gaussian field is considered, we may also put the name of the field as the index (e.g.\ we write $\PP_f$ for the probability of an event for the field $f$). If $E$ is a set or an event, the notations $\un_E$ and $\un\{E\}$ both designate the indicator function of $E$. Now some definitions relating to correlation decay. \begin{defn}\label{subpoly} A function $F$ defined on $\RR$ is said to be $\alpha$\emph{-sub-polynomial} for some $\alpha>0$ if $$ x^\alpha F(x) \longrightarrow 0 \quad\text{as } x\to+\infty. $$ A function $F$ defined on $\RR$ is said to be $\alpha$\emph{-sub-exponential} for some $\alpha>0$ if $$ e^{x^\alpha} F(x) \longrightarrow 0 \quad\text{as } x\to+\infty. $$ \end{defn} \begin{defn} For any number $r$, we let $A_r$ be the annulus of inner radius $1$ and outer radius $r$ centered at $0$ for the sup norm. We call $S_r$ the boundary of the ball of radius $r$, and we write $T(A_r)$ for $T(S_1,S_r)$. \end{defn} The following is the key condition used in all of our results. While it does not seem very natural in its statement, it can be seen as a quantitative form of the positivity of the time constant. \begin{Condition}\label{macrotime} A Gaussian field on $\RR^d$ along with a pseudometric of the form given by Definition \ref{pseudometric} satisfies the \emph{macroscopic annulus times condition} for some level $\ell\in\RR$ if there exist a constant $a>0$ and a $\max(2d,4)$-sub-polynomial function $G$ such that for any $N\geq 1$, $$ \PP (T(A_N)\leq aN)\leq G(N).
$$ \end{Condition} As we will see thanks to Proposition \ref{numberballs}, in the cases where we have positivity of the time constant, as established in \cite{dewangayet}, Condition \ref{macrotime} is always verified. Here are the main assumptions on the Gaussian field $f$ we will use for all of our results. \begin{asp}(Basic assumptions) \label{a:basic} \begin{enumerate}[(a)] \item \label{i:stat} The field $f$ is centered, stationary and ergodic. \item \label{i:whitenoise} The field $f$ has a \emph{spatial-moving-average representation} $f=q\star W$ where $q\in L^2(\RR^d)$ and $W$ is the white-noise on $\RR^d$. \item(Regularity) \label{i:reg} $q$ is $\mathcal{C}^3$ and each of its derivatives is in $L^2(\RR^d)$. Further, $q$ is $L^1$. \item(Decay of correlations with map $F$) \label{i:decay} There exists a function $F$ from $\RR$ to $\RR$ which decays to $0$ at $\infty$ such that for any $x\in\RR$, for any multi-index $\alpha$ such that $|\alpha|\leq 1$, $$\Bigg(\int\limits_{\|u\|\geq x}|\partial^\alpha q(u)|^2\Bigg)^{1/2}\leq F(x).$$ \item(Symmetry)\label{i:sym} The moving-average kernel $q$ is symmetric under permutation of the axes and under reflection in each axis. \item(Positive spectral density)\label{i:spectr} The moving-average kernel $q$ verifies $\int\limits_{\RR^d} q>0$. \end{enumerate} \end{asp} \begin{rem} Assumption \ref{i:whitenoise} is implied by the covariance kernel $\kappa$ having fast enough decay. When it holds, we have the equality $q\star q=\kappa(0,.)$, and further the Fourier transform of $q$ is the square root of that of $\kappa(0,.)$, and is a continuous function. For a definition of the white-noise, see section \ref{p:WN} of the appendix. Assumption \ref{i:reg} ascertains that $f$ is almost surely $\mathcal{C}^2$.
\end{rem} Further, we will use the following extra assumption for some of our results: \begin{asp}(Positive association) \label{a:pos} We say that a Gaussian field $f$ verifies the \emph{positive association assumption} if its moving-average kernel $q$ verifies $$ q\geq 0. $$ \end{asp} \begin{rem} Recalling that $q\star q=\kappa(0,.)$, this assumption implies $$ \kappa\geq 0. $$ \end{rem} In the rest of the article, the norm used on $\RR^d$ is the sup norm, and for any $r$, $\mathcal{B}_r$ denotes the ball of radius $r$ for said norm. Within proofs, all numbered constants depend only on the field $f$ and level $\ell$ considered, and two constants with the same number in two different proofs may be different. \subsection{Statements} Our first result is the following constant lower bound. \begin{thm}\label{main2} Let $f$ be a Gaussian field on $\RR^d$ verifying Assumption \ref{a:basic} as well as the positivity assumption, Assumption \ref{a:pos}. Let $\ell\in\RR$ be a level. Let $T$ be a pseudometric as defined in Definition \ref{pseudometric}. Then there exists a constant $C>0$ such that for any $|x|\geq 2$, $$ \Var T(0,x)\geq C. $$ \end{thm} Our main general result, an upper bound on the variance, is the following: \begin{thm}\label{main} Let $f$ be a Gaussian field on $\RR^d$ verifying Assumption \ref{a:basic} with $\alpha$-sub-exponential decay function $F$ for some $\alpha>1$. Let $\ell\in\RR$ be a level. Let $T$ be a pseudometric as defined in Definition \ref{pseudometric}. Suppose that Condition \ref{macrotime} is verified. Fix $\varepsilon>0$. Then there exists a constant $C_\varepsilon>0$ such that for any $|x|\geq 2$, $$ \Var T(0,x)\leq C_\varepsilon|x|(\log|x|)^{1/\alpha+\varepsilon}. 
$$ \end{thm} In particular, using Proposition \ref{numberballs}, we have \begin{cor}\label{2dcoro} Let $f$ be a Gaussian field on $\RR^d$ verifying Assumption \ref{a:basic} with $\alpha$-sub-exponential decay function $F$ for some $\alpha>1$ and the positivity assumption, Assumption \ref{a:pos}. Let the level $\ell$ verify $\ell>-\ell_c(f)$ (the critical level of the field $f$ as defined in (\ref{ellcdef})). Let $T$ be a pseudometric as defined in Definition \ref{pseudometric}. Fix $\varepsilon>0$. Then there exists a constant $C_\varepsilon>0$ such that for any $|x|\geq 2$, $$ \Var T(0,x)\leq C_\varepsilon|x|(\log|x|)^{1/\alpha+\varepsilon}. $$ \end{cor} \paragraph{\bf{Open questions}} \begin{itemize} \item Though we strongly suspect it to be the case, it remains to ascertain whether our methods can be adapted to other models in which randomness can be split into disjoint boxes, as we do here; notably, the Poisson-Boolean and Voronoi FPP models (see e.g.\ \cite{dewangayet} for their definitions). \item It would be interesting to relax the assumptions on the Gaussian field to a slower than exponential decay. The sharpness of phase transition result from \cite{severo2021sharp}, which we use to establish Condition \ref{macrotime}, only requires polynomial decay with exponent larger than the dimension, and we expect that our result should hold in this regime as well. \item One might wonder why our upper bound doesn't match the best bound in the classical framework obtained by Benjamini, Kalai and Schramm in \cite{BKS}. Our main tool, the Efron-Stein inequality (Proposition \ref{ES}), is the one used by Kesten to obtain a linear bound, and we do not rely on methods based on Poincaré inequalities as in \cite{BKS} and \cite{10.1214/07-AIHP124}. Such inequalities have been established for Gaussian measures.
However, to our knowledge, these apply in a framework where one has a control of order $\|x\|$ on the number of real random variables which intervene in computing the functional $T(0,x)$, which is not the case in our framework since the underlying space is a Gaussian white-noise. \end{itemize} \section{Proof of main results} \subsection{Proof of the constant lower bound} We give a proof of our constant lower bound, Theorem \ref{main2}, which does not need Condition \ref{macrotime} but does need the positive correlation assumption, Assumption \ref{a:pos}. It is strongly inspired by Kesten's original FPP proof in \cite{10.1214/aoap/1177005426} and its restatement by Auffinger, Damron and Hanson in \cite{auffinger201750}. Before the proof itself, we only need one auxiliary result: the following decomposition proved by S. Muirhead and the author in a previous work \cite{dewan2021upper}. \begin{defn}\label{efess} Let $f$ be a Gaussian field satisfying Assumption \ref{a:basic} with moving-average kernel $q$. For any $r>0$, we define the field $\tilde{f}_r$ to be the field $$ q \star (W|_{\mathcal{B}_r}); $$ see section \ref{p:WN} of the appendix for details on restrictions of the white-noise. \end{defn} The decomposition is as follows. \begin{prop}[\cite{dewan2021upper}, Proposition A.1]\label{decomp} Let $f$ be a Gaussian field satisfying Assumption \ref{a:basic} with moving-average kernel $q$ and let $r>0$. Let $Z_1$ be a standard normal random variable. Then there exists a Gaussian field $g$ independent of $Z_1$ such that we have the following equality in law: \[ \tilde{f}_r(\cdot) \stackrel{d}{=} \frac{Z_1 (q \star \un_{\mathcal{B}_r})(\cdot)}{r^{d/2}} + g(\cdot) .\] \end{prop} On to the proof itself. \begin{proof}[Proof of Theorem \ref{main2}] Let $f$ be a Gaussian field verifying the assumptions, and $\ell\in\RR$.
By Proposition \ref{decomp}, we have the equality in law: \begin{equation}\label{equality} \tilde{f}_{1} \stackrel{d}{=} Z_1 (q \star \un_{\mathcal{B}_{1}}) + g , \end{equation} where we recall (Definition \ref{efess}) that $\tilde{f}_1=q\star (W|_{\mathcal{B}_1})$, $Z_1$ is a standard Gaussian and $g$ is an independent Gaussian field. Let $Z_1'$ be an independent standard Gaussian. We write $f$ (resp. $f'$) for the associated realization of the Gaussian field using $Z_1$ (resp. $Z_1'$), and a common realization of $g$ and $q\star (W|_{\mathcal{B}_1^\compl})$ (see Proposition \ref{splitboxes} for a justification of this splitting of the white-noise). Let $A$ be the event $\{Z_1\geq 1\}$ and $B$ be the event $\{Z'_1\leq -1\}$, and let $a>0$ be their common probability. We now write, for all $|x|\geq 2$, \begin{align} \begin{split} \label{variancelowerbd} &\Var_f T(0,x) \geq \Var\EE_f\Big[ T(0,x)\Big| Z_1\Big]\\ &\quad =\frac12 \EE\Bigg[\Big(\EE_f\Big[ T(0,x)\Big| Z_1\Big]-\EE_{f'}\Big[ T(0,x)\Big| Z_1'\Big]\Big)^2\Bigg]\\ &\quad \geq \frac12 \EE\Bigg[\Big(\EE_f\Big[ T(0,x)\Big| Z_1\Big]-\EE_{f'}\Big[ T(0,x)\Big| Z_1'\Big]\Big)^2\un_{A\cap B}\Bigg]. \end{split} \end{align} For any $\varepsilon>0$, define the constant $C_0$, independent of $x$, to be: $$ C_0:=\EE\Bigg[\inf\limits_{\substack{\gamma\text{ piecewise affine}\\ \text{ path from $0$ to }\partial\mathcal{B}_\varepsilon }}\Big(T_f(\gamma)-T_{f'}(\gamma)\Big)\Bigg| A\cap B\Bigg], $$ where $T_f(\gamma)$ (resp. $T_{f'}(\gamma)$) denotes the integral along $\gamma$ of $\psi(f+\ell)$ (resp. $\psi(f'+\ell)$). Now, $q\geq 0$ by Assumption \ref{a:pos} (and $q$ is non-zero by Assumption \ref{a:basic}), and on $A\cap B$, $Z_1-Z_1'\geq 2$. Plugging this into (\ref{equality}), we deduce that there exists a constant $C_f>0$ such that on the event $A\cap B$, $f\geq f'+C_f$ on $\mathcal{B}_1$. For any $\varepsilon>0$, consider the event $$ E_\varepsilon:=\{ C_f/4-\ell\leq f\leq C_f/2-\ell\text{ on $\mathcal{B}_\varepsilon$}\}.
$$ Notice that $E_\varepsilon\cap A$ and $B$ are independent. Further, the event $E_\varepsilon$ can be written conditionally on the value of $Z_1$ as an event of the form $g|_{\mathcal{B}_\varepsilon}\in I$, where $g$ is the independent field from (\ref{equality}) and $I$ is some interval with nonempty interior. It follows that $E_\varepsilon\cap A$ has positive probability for $\varepsilon$ small enough. We fix such an $\varepsilon$. In total, the event $E_\varepsilon\cap A\cap B$ has positive probability. We then have, by Definition \ref{pseudometric}, that on $E_\varepsilon\cap A\cap B$, $$ \inf\limits_{\substack{\gamma\text{ piecewise affine}\\ \text{ path from $0$ to }\partial\mathcal{B}_\varepsilon }}T_f(\gamma)>0, $$ and $$ f'|_{\mathcal{B}_\varepsilon}\leq -\ell \text{, hence }T_{f'}|_{\mathcal{B}_\varepsilon}=0. $$ Then, since $$ C_0\geq \EE\Bigg[\Big(\inf\limits_{\substack{\gamma\text{ piecewise affine}\\ \text{path from $0$ to }\partial\mathcal{B}_\varepsilon }}\big(T_f(\gamma)-T_{f'}(\gamma)\big)\Big)\un_{E_\varepsilon}\Bigg| A\cap B\Bigg], $$ we conclude that $$ C_0>0. $$ Finally, we claim that when the event $A\cap B$ occurs, $$ \EE_f\Big[ T(0,x)\Big| Z_1\Big]-\EE_{f'}\Big[ T(0,x)\Big| Z_1'\Big]\geq C_0. $$ Indeed, if $\gamma$ is a path between $0$ and $x$, we have by definition of $C_0$ that on $A\cap B$, $$\EE_f T(\gamma|_{\mathcal{B}_\varepsilon}) \geq \EE_{f'}T(\gamma|_{\mathcal{B}_\varepsilon})+C_0.$$ Further, since $T$ is an increasing random variable (see Definition \ref{increasing}) and $q\geq 0$, $Z_1\geq Z_1'$ implies that $T_f\geq T_{f'}$. Hence, on $A\cap B$, $$T_f(\gamma|_{\mathcal{B}_\varepsilon^\compl}) \geq T_{f'}(\gamma|_{\mathcal{B}_\varepsilon^\compl}).$$ In total, on $A\cap B$, $$\EE_f T(\gamma) \geq \EE_{f'}T(\gamma)+C_0.$$ Returning to (\ref{variancelowerbd}), for any $|x|\geq 2$, $$ \Var T(0,x) \geq \frac12 C_0^2a^2. $$ \end{proof} \subsection{Proof of the upper bound} We start with the auxiliary results used in the proof of our main upper bound.
The longer proofs are deferred to section \ref{proofaux}. \subsubsection{Auxiliary results} The first lemma gives us a control of the sup norm of a Gaussian field in a box of given size, using only its pointwise variances and those of its first derivatives. \begin{prop}[\cite{muirhead2018sharp}, Lemma 3.12]\label{muirvansupnorm} There exists a constant $c_0>0$ such that for any $\mathcal{C}^1$ Gaussian field $g$ on $\RR^2$ and for any $R_1\geq c_0$ and $R_2\geq \log R_1$, $$ \PP\Big[\|g\|_{\infty,\mathcal{B}_{R_1}}\geq mR_2\Big]\leq e^{-R_2^2/c_0}, $$ where $$ m=\Big(\sup\limits_{x\in\RR^2}\sup\limits_{|\alpha|\leq 1}\EE[(\partial^\alpha g)^2(x)]\Big)^{1/2}. $$ \end{prop} Integrating this estimate yields the following corollary: \begin{cor}\label{muirvansupnorm2} For any integer $k$ there exists a constant $C>0$ such that for any $\mathcal{C}^1$ Gaussian field $g$ on $\RR^2$ and for any $R_1\geq C$, $$ \EE\Big[\|g\|^k_{\infty,\mathcal{B}_{R_1}}\Big]\leq Cm^k \log^k R_1. $$ \end{cor} \begin{rem} In the case of the fields we will be working with, i.e.\ those satisfying Assumption \ref{a:basic}, we have $m<\infty$, uniformly over the finite-range counterparts $g_r$, $r\in[1,\infty]$. \end{rem} One key notion in using the previous estimates is that of monotonic events. \begin{defn}\label{increasing} \begin{itemize} \item A measurable event $A$ of a Gaussian field $f$ over $\RR^d$ is said to be \emph{increasing} if for any non-negative function $h$ on $\RR^d$, $f\in A \implies f+h\in A$. \item It is said to be \emph{decreasing} if the same holds for any non-positive function $h$. \item Finally, an event is said to be \emph{monotonic} if it is either increasing or decreasing. \item Similarly, a real-valued random variable $X$ which is a function of a Gaussian field on $\RR^d$ is said to be \emph{increasing} (resp.
non-positive) function $h$, $X$ evaluated with respect to $f$ is at most $X$ evaluated with respect to $f+h$. \end{itemize} \end{defn} Muirhead and Vanneuville established a comparison result for probabilities of monotonic events between a Gaussian field and its finite correlation range versions (see Proposition \ref{CMMV}). It relies on the notion of the \emph{Cameron-Martin space} of a Gaussian field (introduced and discussed in section \ref{p:CM} of the appendix). We have slightly modified their proof to obtain the following comparison between variances for Gaussian fields with infinite correlation range and their finite-range counterparts. \begin{prop}\label{variancesfields} Let $f$ be a Gaussian field satisfying Assumption \ref{a:basic}, with some $\alpha$-sub-exponential decay function $F$ for $\alpha>1$, and $\ell\in\RR$ such that Condition \ref{macrotime} is verified. Let $\varepsilon>0$. Then for any positive integer $N$ and any $x$ of norm $N$, $$ \Var T(0,x) \leq \Var_{(\log N)^{1/\alpha+\varepsilon}}[T^{\mathcal{B}_{N^2}}(0,x)](1+o(1))+o(1), $$ where $o(1)$ designates quantities which go to $0$ as $N$ goes to infinity, and $T^{\mathcal{B}_{N^2}}$ is as in (\ref{restrtime}). \end{prop} Now for the tools used in establishing an upper bound for the variance in the bounded correlation model. The first one is a classical technique for estimating the variance of a function of several random variables by ``splitting the variance contributed by each one''. We will be using it in the context of the mesoscopic squares which geodesics for our pseudometric pass through. \begin{prop}[Efron-Stein's inequality]\label{ES} Let $(X_1,...,X_n)$ be a finite sequence of independent random variables, and let $X'_i$ be an independent copy of $X_i$ for all $i$. Let $\phi$ be an $L^2$ function of $(X_1,...,X_n)$.
Then, $$ \Var \phi(X_1,...,X_n)\leq \sum\limits_{i=1}^{n}\EE\left[\Big(\phi(X_1,...,X_{i-1},X'_i,X_{i+1},...,X_n)- \phi(X_1,...,X_n)\Big)_+^2\right], $$ where the index $+$ denotes the positive part. \end{prop} The following lemma will be used in order to control the number of ``small increments'' in a geodesic. It is akin to a classical lemma by Kesten (\cite{kesten1986aspects}, Proposition 5.8). \begin{defn} For any pair of numbers $r$, $R$ and any finite sequence of compact sets in $\RR^d$, we say that the sequence is $(r,R)$\emph{-separated} if the distance between any two sets in the sequence is more than $r$ and the distance between any two consecutive sets is less than $R$. \end{defn} \begin{defn}\label{E2} For any $a,B,M>0$, for any sequence $(r_N)_{N\in\NN}$ of real numbers, for any $N\in\NN$, let $\mathcal{E}_{a,B,r,N,M}$ be the event that there exists a sequence of more than $M$ disjoint translates of $A_{r_N}$ with centers in $\ZZ^d$, $(r_N,Br_N)$-separated, with the first annulus being at distance less than $Br_N$ from $0$, and with the sum of their times smaller than $aN$. \end{defn} \begin{lem}\label{length} Let $f$ be a Gaussian field on $\RR^d$ verifying Assumption \ref{a:basic}, as well as $\ell\in\RR$ such that Condition \ref{macrotime} is verified. Let $B>1$. Let $(r_N)_{N\in\NN}$ be a sequence going to $\infty$ as $N$ goes to $\infty$. Then there exist $a>0$, $N_0\in\NN$ such that $$ \forall N\geq N_0\text{, }\forall M\geq 2\frac{N}{r_N}\text{, }\quad\PP_{r_N}[\mathcal{E}_{a,B,r,N,M}]\leq \left(\frac12\right)^{M}. $$ \end{lem} \begin{proof} Fix a Gaussian field and a parameter $\ell$. Consider a sequence of annuli as in Definition \ref{E2} and $a>0$. Call $(\mathcal{A}_1,...,\mathcal{A}_M)$ the first $M$ annuli of the sequence. For each annulus $\mathcal{A}_i$, define $$ \mathcal{V}_{i,a,N}:=\{T(\mathcal{A}_i)\leq ar_N\}. $$ Further, notice that for $\mathcal{E}_{a,B,r,N,M}$ to occur, at least $M-N/r_N$ events of the form $\mathcal{V}_{i,a,N}$ must occur: otherwise, more than $N/r_N$ annuli would each have time larger than $ar_N$, making the total time exceed $aN$.
The separation condition in $\mathcal{E}_{a,B,r,N,M}$ allows us to note that, for any annulus in the sequence, the next one can be chosen among $C_0(2B)^dr_N^d$ possibilities, $C_0$ being a universal constant. The same holds for the first annulus. Hence the total number of different possible sequences is smaller than $$ (C_0(2B)^dr_N^d)^M. $$ Thus the number of possible choices of a sequence together with a set of $M-\frac{N}{r_N}$ annuli of time smaller than $ar_N$ is smaller than \begin{align*} (C_0(2B)^dr_N^d)^M{M \choose M-\frac{N}{r_N}} \leq (C_0(2B)^dr_N^d)^M(2e)^{M-\frac{N}{r_N}} \end{align*} (by the classical inequality ${n \choose k}\leq \left(\frac{en}{k}\right)^k$, since $M\geq 2\frac{N}{r_N}$). Finally, using the separation assumption in Definition \ref{E2}, which makes the events $\mathcal{V}_{i,a,N}$ independent under $\PP_{r_N}$ since the field has correlation range $r_N$, we have for all $N$ and all $M\geq 2\frac{N}{r_N}$, $$ \PP_{r_N}[\mathcal{E}_{a,B,r,N,M}]\leq (C_0(2B)^dr_N^d)^M \left(2e\PP_{r_N}(\mathcal{V}_{i,a,N})\right)^{M-\frac{N}{r_N}}. $$ Hence, since $M\geq 2\frac{N}{r_N}$, \begin{equation}\label{repr} \PP_{r_N}[\mathcal{E}_{a,B,r,N,M}]\leq \left(C_0(2B)^dr_N^d\sqrt{2e\PP_{r_N}(\mathcal{V}_{i,a,N})}\right)^M. \end{equation} By Condition \ref{macrotime}, there exist $a>0$, $N_0\in\NN$ such that for all $N\geq N_0$, for all $i$, $$ \PP_{r_N}(\mathcal{V}_{i,a,N})\leq \frac{1}{8eC_0^2(2B)^{2d}r_N^{2d}}. $$ The conclusion then follows from (\ref{repr}). \end{proof} We now make a few geometric observations which will allow us to apply the previous lemma. \begin{defn}\label{crossed} We say that a continuous path $\gamma$ \emph{crosses} an annulus $A$ if $\gamma$ intersects both the inner and outer squares of $A$. \end{defn} \begin{defn}\label{properlyused} Consider any continuous path $\gamma$, and a $d$-dimensional square $\mathcal{S}$ of side $r$. We call the square $\mathcal{S}$ \emph{properly used} if its intersection with $\gamma$ has diameter greater than $\frac{r}{3^d}$.
\end{defn} \begin{defn}\label{squareslattice} For any $r>0$ and integer $d$, let $\mathcal{S}$ be the $d$-dimensional hyper-square defined by $$ \mathcal{S}:=\Big\{x=(x_1,...,x_d)\in\RR^d\big|\quad \forall i\in\{1,...,d\}, \quad 0< x_i<r\Big\}. $$ We will call \emph{set of hyper-squares defined by the lattice $r\ZZ^d$} the set of translates of $\mathcal{S}$ by a vector of the form $$ r(k_1,...,k_d), $$ the $k_i$'s being integers. \end{defn} Let $r>0$ and $f$ be a Gaussian field. Let $\ell\in\RR$. Let $T$ be the pseudometric associated with $f$ and $\ell$ (see Definition \ref{pseudometric}). Let $(S_i)_i$ be the set of hyper-squares defined by $r \ZZ^d$. \begin{defn}\label{Gox} For any $x\in\RR^d$, call $\mathcal{G}(0,x)$ the set of squares of $r\ZZ^d$ which are at distance less than $r$ from all geodesics between $0$ and $x$. \end{defn} \begin{lem}\label{proper} Consider a square $S_i$ that a path $\gamma$ goes through. Then, as long as $\gamma$ does not \emph{properly use} (see Definition \ref{properlyused}) a neighboring square, it stays within $S_i$ and its neighbors. \end{lem} The proof of this lemma is immediate: \begin{proof} Let $(\tilde{S}_j)_{j=1,...,3^d-1}$ be the family of neighboring squares of $S_i$, i.e.\ squares that share at least one boundary vertex with $S_i$ in $r\ZZ^d$. If the path $\gamma$ does not \emph{properly use} any of the $\tilde{S}_j$, by adding up the diameters of its intersections with each of them, we deduce that $\gamma$ cannot travel to distance larger than $\frac{3^d-1}{3^d}r<r$ from $S_i$, and thus it cannot intersect any square other than $S_i$ or the $\tilde{S}_j$'s. \end{proof} For any $x\in\RR^d$, given any geodesic $\gamma$ for the pseudometric $T$ (see Definition \ref{pseudometric}) between $0$ and $x$, we define an $(r,2r)$-separated set of squares, which we call $\tilde{\mathcal{G}}_\gamma(0,x)$, by the following procedure: \begin{defn}\label{algo1} Let $x\in\RR^d$. Start with $\tilde{\mathcal{G}}_\gamma(0,x)$ being the empty set.
Each new square the geodesic intersects is added to the set $\tilde{\mathcal{G}}_\gamma(0,x)$ if: \begin{itemize} \item none of its $3^d-1$ neighbors is already in $\tilde{\mathcal{G}}_\gamma(0,x)$; \item it is \emph{properly used}. \end{itemize} Since, by definition, a geodesic has finite Euclidean length, this procedure terminates. \end{defn} \begin{figure} \caption{In red, a geodesic $\gamma$ between $0$ and $x$. In light green, all squares in $\tilde{\mathcal{G}}_\gamma(0,x)$.} \label{f:squares} \end{figure} This construction is illustrated in Figure \ref{f:squares}. The fact that these squares are separated by a distance more than $r$ is clear from the definition. The fact that consecutive squares are at distance less than $2r$ can be seen thanks to Lemma \ref{proper}. \begin{rem}\label{ann} For any square in $\tilde{\mathcal{G}}_\gamma(0,x)$, there exists an integer point at distance less than $1$ from its boundary and a copy of $A_{r/3^d}$ centered at that point which is crossed by the geodesic $\gamma$, as is clear given the definition of a \emph{properly used} square (Definition \ref{properlyused}). \end{rem} For any geodesic $\gamma$, starting from $\tilde{\mathcal{G}}_\gamma(0,x)$, we define the set $\mathcal{G}_\gamma^\dagger(0,x)$ in the following way: \begin{defn}\label{algo2} Start with $\mathcal{G}_\gamma^\dagger(0,x)=\tilde{\mathcal{G}}_\gamma(0,x)$. \begin{itemize} \item Add to the set $\mathcal{G}_\gamma^\dagger(0,x)$ all neighbors of its elements which are \emph{properly used}. We thus add no more than $3^d-1$ new squares per already present square. \item Then add to the set $\mathcal{G}_\gamma^\dagger(0,x)$ all neighbors of its elements which the geodesic $\gamma$ goes through but are not \emph{properly used}. Again, no more than $3^d-1$ squares per previously present square are added. \item Finally, add all neighbors of all previously present squares. Once more, no more than $3^d-1$ squares per previously present square are added.
\end{itemize} \end{defn} \begin{prop}\label{contained} For any $r>0$, for any Gaussian field $f$, for any level $\ell$, for any $x\in\RR^d$, for any geodesic $\gamma$ between $0$ and $x$, $$ \mathcal{G}_\gamma^\dagger(0,x)\supseteq \mathcal{G}(0,x). $$ \end{prop} \begin{proof} The first step of the procedure in Definition \ref{algo2} allows us to obtain a set of squares containing all \emph{properly used} squares; this is clear given Definition \ref{algo1}. The second step allows us to recover a set containing all the squares the geodesic goes through, as can be seen thanks to Lemma \ref{proper}. Finally, the third step allows us to recover all squares in the set $\mathcal{G}(0,x)$, which is clear given its definition. Thus for any $\gamma$, $\mathcal{G}_\gamma^\dagger(0,x)\supseteq \mathcal{G}(0,x)$. \end{proof} In Figure \ref{f:squares}, all intermediate values of the set $\mathcal{G}_\gamma^\dagger(0,x)$ are represented. Combining all the estimates inside Definition \ref{algo2} and using Proposition \ref{contained}, we get the following. \begin{lem}\label{combin} For any $r>0$, for any Gaussian field $f$ and associated pseudometric, for any $x\in\RR^d$, and for any geodesic $\gamma$ between $0$ and $x$, \begin{equation*} \#\tilde{\mathcal{G}}_\gamma(0,x)\geq \frac{1}{(3^d)^3}\#\mathcal{G}_\gamma^\dagger(0,x)\geq \frac{1}{3^{3d}} \#\mathcal{G}(0,x). \end{equation*} \end{lem} Finally, we will need the following control on the expectation of the pseudometric. \begin{lem}\label{subaddlem} Let $f$ be a Gaussian field satisfying Assumption \ref{a:basic}. Let $u=(u_n)_{n\in\NN}$ be a superadditive sequence such that for any $n$, $u_n\geq n$. Let $T$ be a pseudometric as in Definition \ref{pseudometric}, and for any $r>0$, $f_r$ be given by Definition \ref{d:cutoff}. Let $\ell\in\RR$.
There exists a constant $C_{f,u}>0$ such that for any $n\in\NN$ and $x$ of norm $n$, for any $r\geq 1$, $$ \frac{\EE_{r}T^{\mathcal{B}_{u_n}}(0,x)}{n}\leq C_{f,u}, $$ where $T^{\mathcal{B}_{u_n}}$ is as in (\ref{restrtime}). \end{lem} \begin{proof} Let $f$ be a Gaussian field satisfying Assumption \ref{a:basic} and $\ell\in\RR$. Fix a vector $y$ of norm $1$. For any $r\geq 1$, the sequence $$ (\EE_{r}T^{\mathcal{B}_{u_n}}(0,ny))_n $$ is subadditive. Indeed, if $n$, $m$ are two integers, \begin{align*} & T^{\mathcal{B}_{u_{n+m}}}(0,(n+m)y)\\ &\quad \leq T^{\mathcal{B}_{u_{n+m}}}(0,ny)+T^{\mathcal{B}_{u_{n+m}}}(ny,(n+m)y)\\ &\quad \leq T^{\mathcal{B}_{u_{n}}}(0,ny)+T^{\mathcal{B}_{u_{m}}+ny}(ny,(n+m)y), \end{align*} where in the last step we have used the superadditivity of the sequence $(u_n)$ and the fact that for any $n$, $u_n\geq n$ to argue that $$ \mathcal{B}_{u_{m}}+ny\subseteq \mathcal{B}_{u_{n}}+\mathcal{B}_{u_{m}}\subseteq \mathcal{B}_{u_{n+m}}. $$ Thus, for any $r\geq 1$, for any $n\in\NN$, \begin{equation}\label{usesubadd} \frac{\EE_{r}T^{\mathcal{B}_{u_n}}(0,ny)}{n}\leq\EE_{r}T^{\mathcal{B}_{u_1}}(0,y), \end{equation} Further, by Definition \ref{pseudometric}, for any $r$, $$\EE_{r}T^{\mathcal{B}_{u_1}}(0,y)\leq u_1\sqrt{d}\text{ }\EE[\psi(\|f_r\|_{\infty,\mathcal{B}_{u_1}})]\leq C_\psi u_1\sqrt{d}\text{ } \EE\|f_r\|_{\infty,\mathcal{B}_{u_1}}.$$ Thus by Corollary \ref{muirvansupnorm2} we deduce that there exists a constant $C_{f,u}$ such that for any $r\geq 1$, $$ \EE_{r}T^{\mathcal{B}_{u_1}}(0,y)\leq C_{f,u}. $$ Combining this with (\ref{usesubadd}) yields the conclusion. \end{proof} \subsubsection{Proof} We are now ready to start the main proof. Once again, it is inspired by Kesten's original proof of the linear variance bound for classical first passage percolation (\cite{10.1214/aoap/1177005426}, equation 1.13 in Theorem 1), as well as its simplified version presented by Auffinger, Damron and Hanson in \cite{auffinger201750}. 
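The main proof below rests on the Efron-Stein inequality (Proposition \ref{ES}). As a quick numerical sanity check of that inequality, one can compare both sides by Monte Carlo for a simple $L^2$ function, such as the maximum of independent uniform variables. The following Python sketch does so; the sample sizes and the choice of $\phi$ are purely illustrative, and none of this is part of the argument.

```python
import random

random.seed(0)
n, trials = 5, 20000

def phi(xs):
    # a simple L^2 function of n independent variables
    return max(xs)

samples = [[random.random() for _ in range(n)] for _ in range(trials)]
vals = [phi(x) for x in samples]
mean = sum(vals) / trials
var = sum((v - mean) ** 2 for v in vals) / trials  # Var phi (exact value: 5/252)

# Efron-Stein upper bound: sum_i E[(phi with X_i resampled - phi)_+^2]
es = 0.0
for x in samples:
    for i in range(n):
        y = list(x)
        y[i] = random.random()  # independent copy of coordinate i
        es += max(0.0, phi(y) - phi(x)) ** 2
es /= trials  # exact value of the bound: 5/168

print(var, es)
assert var <= es
```

For the maximum of $5$ uniforms the two sides are $5/252\approx 0.020$ and $5/168\approx 0.030$, so the bound is sharp up to a moderate constant, which is the regime exploited in the resampling argument below.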
\begin{proof}[Proof of Theorem \ref{main}] Let $f$ be a Gaussian field and $\ell$ be a level such that the assumptions are verified. Recall that for any $r>0$, $f_r$ is the field obtained from the same white-noise as $f$ but which has correlation range $r$ (Definition \ref{corrrange}), and for any level $\ell$, $\Var_{r}$, $\EE_{r}$, $\PP_{r}$ are defined for the corresponding measure. Notice that, by Proposition \ref{variancesfields}, it is enough to show that for any $\varepsilon>0$ there exists $C'_\varepsilon>0$ such that for any $N\in \NN_{\geq 2}$ and $x$ of norm $N$ \begin{equation}\label{goal} \Var_{(\log N)^{1/\alpha+\varepsilon}} T^{\mathcal{B}_{N^2}}(0,x)\leq C'_\varepsilon N(\log N)^{1/\alpha+\varepsilon}. \end{equation} Fix $\varepsilon>0$. In the rest of this proof, all statements we make for some integer $N$ relate to the field $f_{(\log N)^{1/\alpha+\varepsilon}}$ and the pseudometric $T^{\mathcal{B}_{N^2}}$. For the sake of readability, we do not write the corresponding indices and exponents. For example, in this proof, when we write $$ \Var T (0,x),\quad\text{we mean}\quad\Var_{(\log N)^{1/\alpha+\varepsilon}} T^{\mathcal{B}_{N^2}} (0,x). $$ Let $N\in\NN_{\geq 2}$ and $x$ be of norm $N$. Let $(S_i)_{i\in\NN}$ be the sequence of hyper-squares defined by the lattice $(\log N)^{1/\alpha+\varepsilon}\ZZ^d$ (see Definition \ref{squareslattice}), ordered in an arbitrary way. We apply Efron-Stein's inequality (Proposition \ref{ES}) to $T(0,x)$: $$ \Var T (0,x)\leq\sum\limits_{i}\EE\left[\left(T^{*i}(0,x)-T(0,x)\right)_+^2\right], $$ where $T^{*i}(0,x)$ denotes the random variable $T(0,x)$ computed after resampling the white-noise in the square $S_i$. Now, notice that since the field has correlation range $(\log N)^{1/\alpha+\varepsilon}$, $\Big(T^{*i}(0,x)-T(0,x)\Big)_+$ can be non-zero only if all of the geodesics defining $T(0,x)$ go at distance less than $(\log N)^{1/\alpha+\varepsilon}$ from the square $S_i$.
Indeed, if some geodesic $\gamma$ goes at a larger distance from the square, then resampling the white-noise in the square does not change $T(\gamma)$, hence $T(0,x)$ does not increase. In other words, $$\Big(T^{*i}(0,x)-T(0,x)\Big)_+\neq 0 \implies S_i \in \mathcal{G}(0,x),$$ recalling Definition \ref{Gox}. Further, by Definition \ref{pseudometric}, we always have for any $N$, for any $x$ of norm $N$, $$ \Big(T^{*i}(0,x)-T(0,x)\Big)_+\leq C_\psi (\log N)^{1/\alpha+\varepsilon} \|f^{*i}+\ell\|_{\infty, S_i^+}, $$ where $f^{*i}$ designates the non-stationary Gaussian field $q\star W|_{S_i}$, $W|_{S_i}$ being the resampled white-noise on $S_i$ and $0$ elsewhere, and $S_i^+$ designates the union of $S_i$ and all of its neighbors. In particular, $\|f^{*i}+\ell\|_{\infty, S_i^+}$ depends only on the resampled white-noise in $S_i$. Hence, \begin{align*} \begin{split} & \Var T (0,x) \\ & \quad \leq \sum\limits_{i} \EE\Bigg[ \Big(T^{*i}(0,x)-T(0,x)\Big)_+^2\un_{S_i\in \mathcal{G}(0,x)}\Bigg] \\ & \quad \leq C_\psi ^2(\log N)^{2/\alpha+2\varepsilon}\sum\limits_{i} \EE\Bigg[ \|f^{*i}+\ell\|_{\infty, S_i^+}^2\un_{S_i\in \mathcal{G}(0,x)}\Bigg] \\ & \quad = C_\psi ^2(\log N)^{2/\alpha+2\varepsilon}\sum\limits_{i}\EE\left[ \|f^{*i}+\ell\|_{\infty, S_i^+}^2\right]\EE\left[\un_{S_i\in\mathcal{G}(0,x)}\right]. \end{split} \end{align*} Now, using Corollary \ref{muirvansupnorm2}, there exists a constant $C_0$ such that for any $N\in\NN_{\geq 3}$ and for any $i$, $$ \EE\left[ \|f^{*i}+\ell\|_{\infty, S_i^+}^2\right]\leq C_0(\log\log N)^2. $$ Thus, for any $N$, for any $x$ of norm $N$, $$ \Var T (0,x)\leq C_0C_\psi^2 (\log N)^{2(1/\alpha+\varepsilon)}(\log\log N)^2 \EE \#\mathcal{G}(0,x). $$ Fix some geodesic $\gamma$ from $0$ to $x$.
Using Lemma \ref{combin} and recalling the set $\tilde{\mathcal{G}}_\gamma(0,x)$ from Definition \ref{algo1}, we deduce that for any $N\in\NN_{\geq 3}$, $x\in\RR^d$ such that $|x|=N$, \begin{equation} \label{varmain} \Var T (0,x)\leq C_0C_\psi^2 3^{3d} (\log N)^{2(1/\alpha+\varepsilon)}(\log \log N)^2\EE \#\tilde{\mathcal{G}}_\gamma(0,x). \end{equation} Now, for any $N$, for any $x$ such that $|x|=N$, for any $a>0$, define the random variable $Y_N^a$ to be: $$ Y_{N}^a:=\#\tilde{\mathcal{G}}_\gamma(0,x)\un\Big\{\sum\limits_{S_i\in\tilde{\mathcal{G}}_\gamma(0,x)}T(\mathcal{A}_{S_i})<a(\log N)^{1/\alpha+\varepsilon}\#\tilde{\mathcal{G}}_\gamma(0,x)\Big\}, $$ where for each $i$, $\mathcal{A}_{S_i}$ is the first annulus of side $(\log N)^{1/\alpha+\varepsilon} /3^d$ with center in $\ZZ^d$ and at distance less than one from the boundary of $S_i$ crossed by the geodesic. Such an annulus exists, as is justified by Remark \ref{ann}. We thus write for any $a$, $N$, for any $x$ such that $|x|=N$, \begin{equation}\label{twoparts2} \EE \#\tilde{\mathcal{G}}_\gamma(0,x)\leq a^{-1}(\log N)^{-(1/\alpha+\varepsilon)}\EE T(0,x)+\EE Y_{N}^a. \end{equation} Definition \ref{algo1} ensures that the annuli associated with $\tilde{\mathcal{G}}_\gamma(0,x)$ satisfy the separation conditions imposed on annuli in the family of events $\mathcal{E}_{a,B,r,N,M}$ (Definition \ref{E2}). We apply Lemma \ref{length}, recalling that Condition \ref{macrotime} is assumed to be satisfied. We get $a>0$, $N_0\in\NN$ such that for any $N\geq N_0$, for any $M\geq 2\frac{N}{(\log N)^{1/\alpha+\varepsilon}}$, $$ \PP[Y_{N}^a\geq M]\leq \left(\frac{1}{2}\right)^{M}, $$ so that, up to increasing $N_0$, for any $N\geq N_0$, $$ \EE Y_N^a \leq 3\frac{N}{(\log N)^{1/\alpha+\varepsilon}}. $$ Further, by Lemma \ref{subaddlem}, there exists a constant $C_1$ such that for any $N$, for any $x$ such that $|x|=N$, $$ \EE T(0,x)\leq C_1 N.
$$ We deduce by (\ref{twoparts2}) that for any $N\geq N_0$, for any $x$ such that $|x|=N$, \begin{equation*} \EE \#\tilde{\mathcal{G}}_\gamma(0,x)\leq (a^{-1}C_1+3) \frac{N}{(\log N)^{1/\alpha+\varepsilon}}. \end{equation*} Hence, returning to (\ref{varmain}), for any $N\geq N_0$, for any $x$ such that $|x|=N$, \begin{equation*} \Var T (0,x)\leq C_0C_\psi^2 3^{3d}(a^{-1}C_1+3) N(\log N)^{1/\alpha+\varepsilon}(\log \log N)^2, \end{equation*} which yields, up to replacing $\varepsilon$ by $\varepsilon/2$, a constant $C_\varepsilon>0$ depending on $\varepsilon$ such that for any $N\in\NN_{\geq 2}$ and $x$ of norm $N$ \begin{equation}\label{maincutoff} \Var T (0,x)\leq C_\varepsilon N(\log N)^{1/\alpha+\varepsilon}, \end{equation} which is (\ref{goal}). \end{proof} Finally, let us state the proposition used to deduce Corollary \ref{2dcoro} from Theorem \ref{main}. \begin{prop}\label{numberballs} Let $f$ be a Gaussian field on $\RR^d$ satisfying Assumption \ref{a:basic} with an $\alpha$-sub-exponential decay function $F$ (see Definition \ref{subpoly}) for some $\alpha>1$, as well as Assumption \ref{a:pos}. Let $\ell>-\ell_c(f)$. There exist constants $a$, $C_1$ such that for any $N\geq 1$, $$ \PP (T(A_N)\leq aN)\leq C_1e^{-N^{\alpha/5}}. $$ In particular, Condition \ref{macrotime} is then verified. \end{prop} \section{Proof of auxiliary results}\label{proofaux} In this section, we prove the toolbox results and all other auxiliary results from the previous section. \subsection{Variance comparison} Let us start with everything that pertains to establishing the main variance comparison result, Proposition \ref{variancesfields}. The following is a useful lemma for comparing a Gaussian field with its finite-correlation-range counterparts.
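Before stating it, the qualitative effect of truncating the correlation range can be illustrated numerically. The Python sketch below (a hypothetical one-dimensional discretisation with an assumed Gaussian moving-average kernel; none of it is part of the proofs) builds a field $f=q\star W$ and cutoff versions $f_r$ from the same discrete white-noise, and shows the sup-norm distance $\|f-f_r\|_\infty$ decaying as $r$ grows, in line with the role played by the decay function $F$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Discretised white-noise on a 1-D grid (illustrative parameters).
dx = 0.1
grid = np.arange(-20.0, 20.0, dx)
W = rng.normal(size=grid.size) * np.sqrt(dx)

def kernel(u):
    # assumed rapidly decaying moving-average kernel q
    return np.exp(-u ** 2 / 2.0)

def field(cutoff=np.inf):
    # f_r(t) = sum_s q(t - s) 1_{|t - s| <= cutoff} W(s)
    out = np.empty(grid.size)
    for i, t in enumerate(grid):
        q = kernel(t - grid)
        q[np.abs(t - grid) > cutoff] = 0.0
        out[i] = q @ W
    return out

f = field()
errs = [np.max(np.abs(f - field(r))) for r in (1.0, 2.0, 4.0)]
print(errs)
assert errs[0] > errs[1] > errs[2]
```

Since the assumed kernel decays like $e^{-u^2/2}$, the sup-norm error drops by orders of magnitude between $r=1$ and $r=4$, mirroring the factor $F(r)$ in the estimates that follow.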
\begin{prop}[\cite{muirhead2018sharp}, Proposition 3.11]\label{muirvanapprox} For any Gaussian field $f$ verifying Assumption \ref{a:basic} for some decay function $F$, there exist constants $C_1$, $C_2$, $r_0>0$ such that for all $N\in \NN$, $r\geq r_0$, for all $t\geq \log N$, $$ \PP \left[\|f-f_r\|_{\infty, \mathcal{B}_N}\geq C_1 t F(r)\right]\leq e^{-C_2t^2}. $$ \end{prop} Muirhead and Vanneuville \cite{muirhead2018sharp} proved the following result using the previous proposition. \begin{prop}[\cite{muirhead2018sharp}, Proposition 4.1]\label{CMMV} Consider a Gaussian field $f$ on $\RR^2$ satisfying Assumption \ref{a:basic} with moving average kernel $q$ and decay function $F$. Then there exists $c_1>0$ such that, for every $N\in \NN$ and $r\geq1$, every monotonic event $A$ measurable with respect to the field inside a ball of radius $N$ and every level $\ell$ $$ |\PP[A]-\PP_{r}[A]|\leq c_1\Big(N(\log N)F(r)+N^{-\log N}\Big). $$ \end{prop} We now turn to the variance control results, with a method inspired by that of Muirhead and Vanneuville. We use the last two propositions to establish a similar estimate, but this time for variances: \begin{lem}\label{CM} For any Gaussian field $f$ satisfying Assumption \ref{a:basic} with moving average kernel $q$ and decay function $F$, there exist constants $C_0$, $C_1>0$, $r_0$ such that for any $N\in\NN$, $r\geq r_0$, for any pair of sets $A,B$ in a square of side $N$, for any $\ell$, $$ \Big|\Var (T^{\mathcal{B}_N}(A,B))- \Var_{r} (T^{\mathcal{B}_N} (A,B))\Big| \leq C_0 \max\left(N^4(\log N)^5 F(r),N^{-C_1\log N}\right), $$ $T^{\mathcal{B}_N}$ being the restricted pseudometric (see (\ref{restrtime})). \end{lem} To prove this lemma, we will need an intermediate result, Lemma \ref{Cs}. Let us make the following preliminary statement, whose elementary proof we omit. \begin{lem}\label{varianceprop} Let $X$ be an $L^2$ real-valued random variable. For any constant $c$, we have $$ \EE[(X-c)^2] \geq \Var X.
$$ \end{lem} The auxiliary Lemma is the following. \begin{lem}\label{Cs} For any pair of Gaussian fields $f,g$ satisfying Assumption \ref{a:basic} there exist $c>0$, $N_0\in\NN$ such that for any $N\geq N_0$, $t>0$ such that $Nt\leq c$, for any random variable $X(f)$ that is $L^4$, measurable with respect to the field $f$ in $\mathcal{B}_N$ and monotonic (see Definition \ref{increasing}), $$ \Var (X(f)\mathcal{E}_t)\leq \Var [X(g)] + \PP[\mathcal{E}_t=0]\EE[X(g)]^2 +\frac{2Nt}{\int\limits_{\RR^d} q} \left(\EE\left[\Big(X(g)-\EE[X(g)]\Big)^4\right]\right)^{1/2}, $$ where $\mathcal{E}_t$ is the random variable $\un_{\|f-g\|_{\infty,\mathcal{B}_N}\leq t}$ and $q$ is the moving-average kernel of $g$. Further, there exists $r_0>0$ such that for all $g=f_r$, $r\geq r_0$, the bound holds with the value of $N_0$ being identical and the kernel $q$ being that of $f$. \end{lem} \begin{proof} Let $f$ and $g$ be as in the statement. Through a Cameron-Martin construction presented in the Appendix (see section \ref{p:CM}) applied to $g$, we can find a function $h$ such that $|h|\geq 1$ on $\mathcal{B}_N$ and there exists a random variable $Q(h)$ called the Radon-Nikodym difference associated to $h$ such that for any event $A$, by Proposition \ref{CMcontrol} \begin{equation}\label{controldifference} |\PP[g\in A]-\PP[g+h\in A]|= \left|\EE_g\left[Q(h)\un_A\right]\right|. \end{equation} By Proposition \ref{CMcontrol2} there exist universal constants $c$, $C_0>0$ and constants $N_0$ depending on $g$ such that for any $N\geq N_0$, for any $t\leq c/N$, \begin{equation}\label{q} \EE [Q(th)^2]\leq \frac{C_0tN}{\int q}, \end{equation} where $$N_0=\inf \{N\in\NN, \quad \inf\limits_{\mathcal{B}_{\frac1N}} \rho\geq \frac{\rho(0)}{2}\}$$ and $q$ is the moving-average kernel of $g$. Further, if $g=f_r$, if we call $\rho$ the spectral density of $f$ and for any $r$, $\rho_r$ that of $f_r$, we have $\rho_r\overset{a.e}{\underset{r\to\infty}{\longrightarrow}} \rho$. 
Thus there exist universal constants $c$, $C_0>0$ and constants $N_0$, $r_0$ depending on $f$ such that for any $N\geq N_0$, $r\geq r_0$, for any $t\leq c/N$, \begin{equation}\label{q'} \EE [Q_r(th)^2]\leq \frac{C_0tN}{\int q}, \end{equation} $Q_r$ being the Radon-Nikodym difference for $f_r$ and $q$ being the moving-average kernel of $f$. Now, consider an integer $N$ and a number $0<t\leq c/N$. Consider a monotonic $L^4$ random variable $X$. We suppose without loss of generality that it is increasing. By Lemma \ref{varianceprop}, it is enough to bound $\EE[(X(f)\mathcal{E}_t-\EE [X(g)])^2]$. We thus write \begin{align} \begin{split} \label{beforeCM} & \EE[(X(f)\mathcal{E}_t-\EE[ X(g)])^2] \\ &\quad = \int\limits_{\RR_+} \PP[(X(f)\mathcal{E}_t-\EE[ X(g)])^2\geq u]du \\ &\quad = \int\limits_{\RR_+} \left[\PP[X(f)\mathcal{E}_t-\EE[ X(g)]\geq \sqrt{u}]+\PP[X(f)\mathcal{E}_t-\EE[ X(g)]\leq -\sqrt{u}]\right]du \\ &\quad \leq \int\limits_{\RR_+} \left[\PP[X(g+th)\mathcal{E}_t-\EE[ X(g)]\geq \sqrt{u}]+\PP[X(g-th)\mathcal{E}_t-\EE[ X(g)]\leq -\sqrt{u}]\right]du, \end{split} \end{align} where in the last step we have used that when $\mathcal{E}_t\neq 0$, on $\mathcal{B}_N$, $$ g-th\leq f\leq g+th, $$ and the fact that $X$ is increasing. Now, we always have $X\mathcal{E}_t\leq X$ and further, when $u> \EE [X(g)]^2$, we have $$ \PP[X(g-th)\mathcal{E}_t-\EE[ X(g)]\leq -\sqrt{u}]\leq \PP[X(g-th)-\EE[ X(g)]\leq -\sqrt{u}]. $$ When $u\leq \EE [X(g)]^2$, we bound this quantity by $$ \PP[X(g-th)-\EE[ X(g)]\leq -\sqrt{u}]+\PP[\mathcal{E}_t=0]. $$ Hence, returning to (\ref{beforeCM}), we have \begin{align*} & \EE[(X(f)\mathcal{E}_t-\EE[ X(g)])^2] \\ &\quad \leq \int\limits_{\RR_+} \left[ \PP[X(g+th)-\EE[ X(g)]\geq \sqrt{u}]+ \PP[X(g-th)-\EE[ X(g)]\leq -\sqrt{u}]\right]du \\ &\quad + \EE [X(g)]^2\PP[\mathcal{E}_t=0].
\end{align*} We recall (\ref{controldifference}) and get \begin{align*} & \EE[(X(f)\mathcal{E}_t-\EE[ X(g)])^2] \\ & \quad \leq \int\limits_{\RR_+} \left[\PP[X(g)-\EE[X(g)]\geq \sqrt{u}]+\PP[X(g)-\EE[X(g)]\leq -\sqrt{u}]\right]du + \EE [X(g)]^2\PP[\mathcal{E}_t=0] \\ & \quad + \int\limits_{\RR_+}\left[\EE[Q(th)\un_{X(g)-\EE[X(g)]\geq \sqrt{u}}]+\EE[Q(-th)\un_{X(g)-\EE[X(g)]\leq -\sqrt{u}}]\right]du \\ & \quad \leq \Var[X(g)]+\int\limits_{\RR_+}\EE[(|Q(th)|+|Q(-th)|)\un_{(X(g)-\EE[X(g)])^2\geq u}]du+ \EE [X(g)]^2\PP[\mathcal{E}_t=0], \end{align*} $Q$ being the Radon-Nikodym difference of $g$. So that by the Cauchy-Schwarz inequality, this can be bounded for any $N\geq N_0$, for any $t$ small enough by \begin{align*} & \Var [X(g)]+\Big(\EE\left[(|Q(th)|+|Q(-th)|)^2\right]\EE\Big[\Big(\int\limits_{\RR_+}\un_{(X(g)-\EE[X(g)])^2\geq u}du\Big)^2\Big]\Big)^{1/2}+ \EE [X(g)]^2\PP[\mathcal{E}_t=0] \\ & =\Var [X(g)]+\left(\EE\left[(|Q(th)|+|Q(-th)|)^2\right]\EE\left[(X(g)-\EE [X(g)])^4\right]\right)^{1/2}+ \EE[X(g)]^2\PP[\mathcal{E}_t=0] \\ & \leq \Var [X(g)]+\frac{C_0N}{\int q}t\left(\EE\left[(X(g)-\EE[X(g)])^4\right]\right)^{1/2}+ \EE [X(g)]^2\PP[\mathcal{E}_t=0], \end{align*} where we have used relation (\ref{q}) (resp.\ (\ref{q'}) for the statement with $g=f_r$). \end{proof} We can now prove Lemma \ref{CM}. \begin{proof}[Proof of Lemma \ref{CM}] Let $f$ be a Gaussian field satisfying the assumptions of Lemma \ref{CM}. Let $\ell\in\RR$. For clarity, in this proof, we use the index $r$ directly on the pseudometric $T$, rather than on the variance operator, to indicate which field we work with, since the quantities involved may depend on both $f$ and $f_r$.
We prove that there exist constants $C_0$, $C_1$ such that for any $N\in\NN_{\geq 2}$ \begin{equation}\label{oneineq} \Var (T^{\mathcal{B}_N}(A,B))\leq \Var (T_{r}^{\mathcal{B}_N} (A,B)) + C_0 \max\left(N^4(\log N)^5 F(r),N^{-C_1\log N}\right), \end{equation} the proof of the other inequality being identical, with the roles of the two fields reversed. Let $C_1$ be the constant from Proposition \ref{muirvanapprox}. For any $N$,$r$, $t$, let $\mathcal{E}_{N,r,t}$ be the event $$\mathcal{E}_{N,r,t}:=\{\|f-f_r\|_{\infty, \mathcal{B}_N}\geq C_1 t F(r)\}.$$ We have, for all $N,r$, $A,B\subseteq\mathcal{B}_N$, for all $t$, for all $\ell$, \begin{align} \begin{split} \label{twoparts} &\Var T^{\mathcal{B}_N}(A,B) \\ &\quad =\Var\Big[ T^{\mathcal{B}_N}(A,B)\un_{\mathcal{E}_{N,r,t}^\compl} +T^{\mathcal{B}_N}(A,B)\un_{\mathcal{E}_{N,r,t}}\Big] \\ &\quad =\Var\Big[ T^{\mathcal{B}_N}(A,B)\un_{\mathcal{E}_{N,r,t}^\compl}\Big]+\Var\Big[ T^{\mathcal{B}_N}(A,B)\un_{\mathcal{E}_{N,r,t}}\Big] \\ &\quad +2\cov\left(T^{\mathcal{B}_N}(A,B)\un_{\mathcal{E}_{N,r,t}^\compl}, T^{\mathcal{B}_N}(A,B)\un_{\mathcal{E}_{N,r,t}}\right). \end{split} \end{align} Let us treat the second term. Recall Definition \ref{pseudometric}. It implies that any pseudodistance between two sets $A$, $B$ within a ball of size $N$ is upper bounded by $C_\psi N\|f+\ell\|_{\infty,\mathcal{B}_N}$. We use Propositions \ref{muirvansupnorm} and \ref{muirvanapprox} to control the two probabilities.
For any $N\in \NN$, $r>0$, $t\geq\log N$, for any $A,B$: \begin{align*} \begin{split} & \Var\Big[T^{\mathcal{B}_N}(A,B)\un_{\mathcal{E}_{N,r,t}}\Big]\leq C_\psi ^2N^2\EE[\|f+\ell\|_{\infty,\mathcal{B}_N}^2\un_{\mathcal{E}_{N,r,t}}] \\ & \quad \leq C_\psi ^2N^2 \int\limits_{\RR_+} \min\Big(\PP\Big[\|f+\ell\|_{\infty,\mathcal{B}_N}^2\geq u \Big],\PP[\mathcal{E}_{N,r,t}]\Big)du \\ & \quad \leq C_\psi ^2N^2 \int\limits_{\RR_+} \min(\un_{u\leq \log N}+e^{-u^2/c_0}\un_{u\geq\log N},e^{-C_2t^2})du \\ & \quad \leq C_3 N^2\log Ne^{-C_4t^2}, \end{split} \end{align*} for some constants $C_3, C_4$ independent of $r$. Returning to (\ref{twoparts}), for all $N$, $r$, $t$, for any $A,B$ \begin{align}\label{whenwestay} \begin{split} &\Var T^{\mathcal{B}_N}(A,B)\leq \Var\Big[ T^{\mathcal{B}_N}(A,B)\un_{\mathcal{E}_{N,r,t}^\compl}\Big]+\Var\Big[ T^{\mathcal{B}_N}(A,B)\un_{\mathcal{E}_{N,r,t}}\Big] \\ &\quad + 2 \left(\Var\Big[ T^{\mathcal{B}_N}(A,B)\un_{\mathcal{E}_{N,r,t}^\compl}\Big]\Var\Big[ T^{\mathcal{B}_N}(A,B)\un_{\mathcal{E}_{N,r,t}}\Big]\right)^{1/2} \\ & \quad \leq \Var\Big[ T^{\mathcal{B}_N}(A,B)\un_{\mathcal{E}_{N,r,t}^\compl}\Big]\Big(1+3(C_3\log N)^{1/2} Ne^{-\frac{C_4t^2}{2}} \Big), \end{split} \end{align} assuming $\Var\Big[ T^{\mathcal{B}_N}(A,B)\un_{\mathcal{E}_{N,r,t}^\compl}\Big]$ is larger than $1$ (in the opposite case one can write the term $(C_3\log N)^{1/2} Ne^{-\frac{C_4t^2}{2}}$ without the factor). Now, apply Lemma \ref{Cs}. We bound the expectation of $T^{\mathcal{B}_N}(A,B)$ using Lemma \ref{subaddlem}. We bound the fourth moment by $\EE_r[(NC_\psi \|f_r\|_{\infty,\mathcal{B}_N})^4]$, itself bounded using Corollary \ref{muirvansupnorm2}. 
We get $N_0\in\NN$, $C_5, C_6$, $r_0>0$ such that for any $N\geq N_0$, $r\geq r_0$, for any $t\geq \log N$, for any $A,B$ \begin{equation}\label{whenwestay3} \Var \Big[ T^{\mathcal{B}_N}(A,B)\un_{\mathcal{E}_{N,r,t}^\compl}\Big] \leq \Var\Big[ T_r^{\mathcal{B}_N}(A,B)\Big]+ C_5 N^2e^{-C_2t^2}+ C_6C_\psi^4 N^4(\log N)^4 tF(r). \end{equation} Combining (\ref{whenwestay}), then (\ref{whenwestay3}), we get for any $N\geq N_0$, $r\geq r_0$, for any $t\geq \log N$, for any $A,B$ \begin{align*} \begin{split} &\Var T^{\mathcal{B}_N}(A,B)\\ &\quad \leq \Var \Big[ T^{\mathcal{B}_N}(A,B)\un_{\mathcal{E}_{N,r,t}^\compl}\Big]\Big(1+3(C_3\log N)^{1/2} Ne^{-\frac{C_4t^2}{2}} \Big)\\ &\quad \leq \left( \Var\Big[ T_r^{\mathcal{B}_N}(A,B)\Big]+C_5 N^2e^{-C_2t^2}+ C_6C_\psi^4 N^4(\log N)^4 tF(r)\right)\left(1+3(C_3\log N)^{1/2} Ne^{-\frac{C_4t^2}{2}}\right). \end{split} \end{align*} Take $t=\log N$ and get constants $C_7$, $C_8$ such that for any $N\in \NN_{\geq 2}$, $r\geq 1$, $$ \Var T^{\mathcal{B}_N}(A,B)\leq \left( \Var T_r^{\mathcal{B}_N}(A,B) + C_6C_\psi^4 N^4(\log N)^5F(r)\right)\left(1+C_7N^{-C_8\log N}\right), $$ which proves (\ref{oneineq}). \end{proof} The following technical lemma is combined with Lemma \ref{CM} in our computations to get our final variance comparison result. \begin{defn} For any point $x\in\RR^d$, define $\Gamma(0,x)$ to be a geodesic between $0$ and $x$ with minimal Euclidean diameter (chosen according to some arbitrary rule). \end{defn} \begin{lem}\label{length2} Let $f$ be a Gaussian field on $\RR^d$ verifying Assumption \ref{a:basic} with decay $F$, as well as $\ell$ such that Condition \ref{macrotime} is verified for some function $G$. There exists a constant $c_0>0$ such that for any $N\geq 1$, for any $x$ such that $|x|=N$ and $M\geq N^2$, $$ \PP\Big[\Diam (\Gamma(0,x))\geq M\Big]\leq G(M) + e^{-M/c_0}.
$$ \end{lem} \begin{proof} We have, for any $N$, for any $x$ such that $|x|=N$, $M\geq N^2$, \begin{equation}\label{macrotimes1} \PP\Big[\Diam (\Gamma(0,x))\geq M\Big]\leq \PP[ T(A_{M})\leq \|f+\ell\|_{\infty, \mathcal{B}_N}N]. \end{equation} Indeed, if $\Diam (\Gamma(0,x))\geq M$ then all geodesics exit the annulus $A_{M}$, and do so with time smaller than $\|f+\ell\|_{\infty, \mathcal{B}_N}N$; otherwise, the Euclidean segment between $0$ and $x$ would have smaller time. Now, by Proposition \ref{muirvansupnorm} for any $a>0$, there exists a constant $c_0>0$ such that for any $N\in\NN$, for any $M\geq N^2$, \begin{equation}\label{macrotimes2} \PP[\|f+\ell\|_{\infty, \mathcal{B}_N}N\geq aM]\leq e^{-\frac{M^2}{c_0N^2}}. \end{equation} Now, since $M\geq N^2$, $\frac{M^2}{N^2}\geq M$. Moreover, $$ \PP[ T(A_{M})\leq \|f+\ell\|_{\infty, \mathcal{B}_N}N]\leq \PP[\|f+\ell \|_{\infty, \mathcal{B}_N}N\geq aM]+\PP[ T(A_{M})\leq aM]. $$ So that, by combining (\ref{macrotimes1}), (\ref{macrotimes2}) and Condition \ref{macrotime}, we get the desired result. \end{proof} We can now turn to the main proof of this subsection. \begin{proof}[Proof of Proposition \ref{variancesfields}] Once again, we only prove one inequality, the other one's proof being identical. Fix a field $f$ and a level $\ell$. For any $N$, for any $x$ of norm $N$ and $M>N$, recall that $\Gamma(0,x)$ is a geodesic between $0$ and $x$ with minimal Euclidean diameter, and let \begin{equation}\label{smallevent} \mathcal{E}_{N,x,M} :=\{\Diam\Gamma(0,x) \leq M\}. \end{equation} On this event, $T^{\mathcal{B}_M}(0,x)$ and $T(0,x)$ coincide, and we have \begin{align}\label{cov} \begin{split} & \Var T(0,x) \\ & \quad \leq \Var [T^{\mathcal{B}_M}(0,x)\un_{\mathcal{E}_{N,x,M}}]+\Var [T(0,x)\un_{\mathcal{E}_{N,x,M}^\compl}] \\ & \quad +2\left(\Var [T^{\mathcal{B}_M}(0,x)\un_{\mathcal{E}_{N,x,M}}]\Var[T(0,x)\un_{\mathcal{E}_{N,x,M}^\compl}]\right)^{1/2}.
\end{split} \end{align} Likewise, \begin{align}\label{cov2} \begin{split} & \Var [T^{\mathcal{B}_M}(0,x)\un_{\mathcal{E}_{N,x,M}}] \\ & \quad \leq \Var [T^{\mathcal{B}_M}(0,x)]+\Var [T^{\mathcal{B}_M}(0,x)\un_{\mathcal{E}_{N,x,M}^\compl}] \\ & \quad +2\left(\Var [T^{\mathcal{B}_M}(0,x)]\Var[T^{\mathcal{B}_M}(0,x)\un_{\mathcal{E}_{N,x,M}^\compl}]\right)^{1/2}. \end{split} \end{align} Now, using Lemma \ref{CM}, we get constants $C_0$, $C_1$ such that for any $N$, for any $M\geq N^2$ and $x$ such that $|x|=N$, \begin{align} \begin{split} \label{restriction} &\Var [T^{\mathcal{B}_M}(0,x)] \\ & \quad \leq \Var_{(\log N)^{1/\alpha+\varepsilon}}[T^{\mathcal{B}_M}(0,x)] \\ & \quad +C_0 \max\left(M^4(\log M)^5 F((\log N)^{1/\alpha+\varepsilon}),N^{-C_1\log N}\right). \\ \end{split} \end{align} Further, recalling Definition \ref{pseudometric}, there exists a constant $C_\psi $ such that for any $N,M$, for any $x$ of norm $N$, \begin{align*} \begin{split} & \Var [T(0,x)\un_{\mathcal{E}_{N,x,M}^\compl}] \\ & \quad \leq C_\psi \EE[\|f+\ell\|^2_{\infty,\mathcal{B}_{\Diam(\Gamma(0,x))}}\Diam^2(\Gamma(0,x))\un_{\mathcal{E}_{N,x,M}^\compl}]\\ & \quad\leq C_\psi \Big( \EE[\Diam^4(\Gamma(0,x))\un_{\|f+\ell\|_{\infty,\mathcal{B}_{\Diam(\Gamma(0,x))}}\leq \Diam(\Gamma(0,x))}\un_{\mathcal{E}_{N,x,M}^\compl}] \\ & \quad + \EE\big[\|f+\ell\|^4_{\infty,\mathcal{B}_{\Diam(\Gamma(0,x))}}\un_{\mathcal{E}_{N,x,M}^\compl}\un_{\|f+\ell\|_{\infty,\mathcal{B}_{\Diam(\Gamma(0,x))}}\geq \Diam(\Gamma(0,x))}\big] \Big) \\ &\quad \leq C_\psi\Big(\int\limits_{s\in\RR_+}\PP\big[\Diam^4(\Gamma(0,x))\un_{\mathcal{E}_{N,x,M}^\compl}\geq s\big]ds \\ &\quad + \int\limits_{s\in\RR_+}\PP\big[\|f+\ell\|^4_{\infty,\mathcal{B}_{\Diam(\Gamma(0,x))}}\un_{\mathcal{E}_{N,x,M}^\compl}\un_{\|f+\ell\|_{\infty,\mathcal{B}_{\Diam(\Gamma(0,x))}}\geq \Diam(\Gamma(0,x))}\geq s\big]ds \Big). \end{split} \end{align*} We notice that in both previous integrals, for $s>0$, we can bound the integrand by $\PP[\mathcal{E}_{N,x,M}^\compl]$. We do so for $s\in(0,M^4]$.
It follows that \begin{align} \begin{split} \label{smallterm} & \Var [T(0,x)\un_{\mathcal{E}_{N,x,M}^\compl}] \\ &\quad \leq C_\psi\Big( 2M^4\PP[\mathcal{E}_{N,x,M}^\compl]+\int\limits_{s\geq M^4}\PP\big[\Diam^4(\Gamma(0,x))\un_{\mathcal{E}_{N,x,M}^\compl}\geq s\big]ds \\ &\quad + \int\limits_{s\geq M^4}\PP\big[\|f+\ell\|^4_{\infty,\mathcal{B}_{\Diam(\Gamma(0,x))}}\un_{\mathcal{E}_{N,x,M}^\compl}\un_{\|f+\ell\|_{\infty,\mathcal{B}_{\Diam(\Gamma(0,x))}}\geq \Diam(\Gamma(0,x))}\geq s\big]ds \Big). \end{split} \end{align} Notice that $$\PP\Big[\|f+\ell\|^4_{\infty,\mathcal{B}_{\Diam(\Gamma(0,x))}}\un_{\mathcal{E}_{N,x,M}^\compl}\un_{\|f+\ell\|_{\infty,\mathcal{B}_{\Diam(\Gamma(0,x))}}\geq \Diam(\Gamma(0,x))}\geq s\Big]\leq \PP\big[\|f+\ell\|^4_{\infty,\mathcal{B}_{s}}\geq s\big].$$ Applying Proposition \ref{muirvansupnorm} in (\ref{smallterm}), we get, for any $N$, for any $M> N$, for any $x$ of norm $N$, \begin{align}\label{smallterm2} \begin{split} & \Var [T(0,x)\un_{\mathcal{E}_{N,x,M}^\compl}] \\ & \quad \leq C_\psi \Big( 2M^4\PP[\mathcal{E}_{N,x,M}^\compl]+\int\limits_{s\geq M^4} \PP[\mathcal{E}_{N,x,s^{1/4}}^\compl]ds+\int\limits_{s\geq M^4}e^{-s^{1/2}/c_0}ds\Big) \\ & \quad \leq C_\psi \Big( 2M^4\PP[\mathcal{E}_{N,x,M}^\compl]+\int\limits_{s\geq M} 4s^3\PP[\mathcal{E}_{N,x,s}^\compl]ds+C_1 e^{-M}\Big), \end{split} \end{align} for some constant $C_1$. Similarly, \begin{align}\label{smallterm3} \begin{split} & \Var [T^{\mathcal{B}_M}(0,x)\un_{\mathcal{E}_{N,x,M}^\compl}] \\ & \quad \leq C_\psi M^2\int\limits_{s\in\RR_+}\PP\big[\|f+\ell\|^2_{\infty,\mathcal{B}_{\Diam(\Gamma(0,x))}}\un_{\mathcal{E}_{N,x,M}^\compl}\geq s\big]ds \\ & \quad \leq C_\psi \Big( M^4\PP[\mathcal{E}_{N,x,M}^\compl]+C_1 e^{-M}\Big).
\end{split} \end{align} Now, by Lemma \ref{length2}, recalling (\ref{smallevent}), there exists a $\max(2d,4)$-sub-polynomial function $G$ such that, for any $N$ and $M\geq N^2$, \begin{equation}\label{largediam} \PP[\mathcal{E}_{N,x,M}^\compl]\leq G(M), \end{equation} where the function $G$ absorbs both the function $G$ of the lemma and the exponential term. We now return to (\ref{cov}), use (\ref{cov2}) to remove the indicator of $\mathcal{E}_{N,x,M}$, then use (\ref{restriction}) to switch from the field $f$ to $f_{(\log N)^{1/\alpha+\varepsilon}}$, and finally use (\ref{smallterm2}), (\ref{smallterm3}) and (\ref{largediam}) to control the error terms. We get a constant $C_2$ such that for all $N\in\NN$, for all $x$ such that $|x|=N$ and $M\geq N^2$, \begin{align*} &\Var T(0,x) \\ & \quad \leq\Big[\Var_{(\log N)^{1/\alpha+\varepsilon}}[T^{\mathcal{B}_M}(0,x)] \\ & \quad +C_2 \max\left(M^4(\log M)^5 F((\log N)^{1/\alpha+\varepsilon}),N^{-C_1\log N}\right)\Big] \\ & \quad \times \Big[1+6C_\psi \Big(3M^4G(M)+\int\limits_{s\geq M}4s^3G(s)ds +C_1e^{-M}\Big)^{1/2}\Big]. \end{align*} Take $M=N^2$ and recall that $F$ is $\alpha$\emph{-sub-exponential} for some $\alpha>1$ and $G$ is $4$\emph{-sub-polynomial} (see Definition \ref{subpoly}). We then have, for any $N$ and $x$ such that $|x|=N$, $$ \Var T(0,x) \leq \Var_{(\log N)^{1/\alpha+\varepsilon}}\Big[T^{\mathcal{B}_{N^2}}(0,x)\Big]\big(1+o(1)\big)+o(1), $$ where the $o(1)$ terms depend only on $N$ and go to $0$ as $N$ goes to infinity. \end{proof} \subsection{On Condition \ref{macrotime}} To prove Proposition \ref{numberballs}, the key proposition in establishing that Condition \ref{macrotime} is verified, we need the following auxiliary results. \begin{lem} \label{expdecann} Let $f$ be a Gaussian field on $\RR^d$ satisfying Assumption \ref{a:basic} for some $\alpha$-sub-exponential function $F$ with $\alpha>0$, as well as the positivity assumption, Assumption \ref{a:pos}. Let $\ell>-\ell_c$.
Then, there exist positive constants $c,M_0$ such that $$ \forall M\geq M_0, \exists a>0 \text{ such that}\ \PP\left[\frac{T(A_M)}{M}\leq a\right]\leq e^{-cM}. $$ \end{lem} \begin{proof} This follows readily from \cite[Theorem 1.2]{severo2021sharp} which gives exponential decay for annulus crossing probabilities, combined with right-continuity of the map $x\mapsto\PP[X\leq x]$ for any real-valued random variable $X$. \end{proof} \begin{lem}[\cite{dewangayet}, Proposition 4.4 and Corollary 5.9] \label{bootstrap} Let $f$ be a Gaussian field on $\RR^d$ satisfying Assumption \ref{a:basic} for some $\alpha$-sub-exponential decay function $F$ with $1<\alpha<2$. Let $\ell\in\RR$. Then for any $1\leq Q<R<S$ and any positive constant $\delta$, \begin{equation}\label{strass} \PP \left[\frac{T(A_S)}S< \frac{\delta}{1+\frac{Q}{R}}\right]\leq \left(c_d S^{d-1}\frac{R}{Q}\right)^n \left( \PP \left[\frac{T(A_R)}R< \delta \right]^{n} +n Se^{-\frac12Q^{\alpha}} \right), \end{equation} where $c_d>0$ is a constant depending only on the dimension $d$, where $ n=\lfloor N\frac{Q}{2R+2Q}\rfloor $ with $N= \lfloor \frac{S-1}{2R+Q}\rfloor $. \end{lem} Now on to the proof of Proposition \ref{numberballs}. \begin{proof}Fix a Gaussian field $f$ verifying the assumptions. Let $M_0$ and $c$ be the constants from Lemma \ref{expdecann}. For all $k\in \NN$, and $M\in[M_0,M_0^2]$, let \begin{equation}\label{scales} N_{M,k}:=M^{2^k}. \end{equation} Thus, note that for all $x\in [M_0,+\infty)$, there exist $M,k$ such that $x=N_{M,k}$. 
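As an illustrative numerical sanity check (not part of the proof; the values $M_0=2$ and $\alpha=3/2$ below are arbitrary choices of ours), one can verify both that the doubling scales $N_{M,k}=M^{2^k}$ cover $[M_0,+\infty)$ and that the exponent comparisons invoked in the induction below hold for large $N$:

```python
import math

def scale_decomposition(x, M0):
    # For x >= M0, find k >= 0 and M in [M0, M0**2) with M**(2**k) == x,
    # mirroring the scales N_{M,k} = M^(2^k): k is the largest integer
    # with x**(1 / 2**k) >= M0.
    k = max(0, math.floor(math.log2(math.log(x) / math.log(M0))))
    M = x ** (1.0 / 2 ** k)
    return M, k

def exponent_gaps(N, alpha):
    # With N playing the role of N_{M,k-1} (so N_{M,k} = N**2), both
    # quantities below should lie below -N_{M,k}^(alpha/5) for large N:
    #   sqrt(N) - N^(alpha/5) * sqrt(N)   and   sqrt(N) - (1/2) N^(alpha/2).
    rhs = -((N ** 2) ** (alpha / 5))
    lhs1 = N ** 0.5 - N ** (alpha / 5 + 0.5)
    lhs2 = N ** 0.5 - 0.5 * N ** (alpha / 2)
    return lhs1 - rhs, lhs2 - rhs
```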
Now, fix some $M\in[M_0,M_0^2].$ For any $k\geq 1$ and $\delta>0$, we have, by Lemma \ref{bootstrap} applied with $S=N_{M,k}$, $R=N_{M,k-1}$ and $Q=\sqrt{N_{M,k-1}}$ ($=N_{M,k-2}$ if $k\geq 2$), \begin{align}\label{usebootstrap} \begin{split} &\PP\left[\frac{T(A_{N_{M,k}})}{N_{M,k}}< \frac{\delta}{1+\frac{1}{\sqrt{N_{M,k-1}}}}\right] \\ & \leq \left(c_d N_{M,k}^{d-1}\sqrt{N_{M,k-1}}\right)^{\sqrt{N_{M,k-1}}} \left( \PP\left[\frac{T(A_{N_{M,k-1}})}{N_{M,k-1}}< \delta \right]^{\sqrt{N_{M,k-1}}} +\sqrt{N_{M,k-1}} N_{M,k}e^{-\frac12N_{M,k-1}^{\alpha/2}} \right). \end{split} \end{align} Now, by Lemma \ref{expdecann}, there exist $\delta>0$, a constant $C_1>0$ and $k_0$ such that for any $M\in[M_0,M_0^2],$ \begin{equation}\label{init} \PP\left[\frac{T(A_{N_{M,k_0-1}})}{N_{M,k_0-1}}< \delta\right]\leq C_1 e^{-N_{M,k_0-1}^{\alpha/5}}. \end{equation} Therefore, up to increasing $k_0$, we have for all $M\in[M_0,M_0^2]$, \begin{align}\label{usebootstrap2} \begin{split} &\PP\left[\frac{T(A_{N_{M,k_0}})}{N_{M,k_0}}< \frac{\delta}{1+\frac{1}{\sqrt{N_{M,k_0-1}}}}\right] \\ & \quad \leq \left(c_d N_{M,k_0}^{d-1}\sqrt{N_{M,k_0-1}}\right)^{\sqrt{N_{M,k_0-1}}} \left( (C_1 e^{-N_{M,k_0-1}^{\alpha/5}})^{\sqrt{N_{M,k_0-1}}} +\sqrt{N_{M,k_0-1}} N_{M,k_0}e^{-\frac12N_{M,k_0-1}^{\alpha/2}} \right) \\ &\quad \leq C_1 e^{-N_{M,k_0}^{\alpha/5}}. \end{split} \end{align} Indeed, we have for all $M\in[M_0,M_0^2]$ and $k$ large enough $$\sqrt{N_{M,k-1}}-N_{M,k-1}^{\alpha/5}\sqrt{N_{M,k-1}}=N_{M,k-1}^{1/2}-N_{M,k-1}^{\alpha/5+1/2}<-N_{M,k}^{\alpha/5}, $$ and recalling that $\alpha>1$, $$ \sqrt{N_{M,k-1}}-\frac12N_{M,k-1}^{\alpha/2}=\sqrt{N_{M,k-1}}-\frac12N_{M,k}^{\alpha/4}=N_{M,k}^{1/4}-\frac12N_{M,k}^{\alpha/4} < -N_{M,k}^{\alpha/5}.
$$ Hence, using (\ref{usebootstrap}) and repeating the reasoning of (\ref{usebootstrap2}), we obtain by induction that for any $k\geq k_0$, for any $M\in[M_0,M_0^2]$, $$ \PP\left[\frac{T(A_{N_{M,k}})}{N_{M,k}}< \delta_\infty\right]\leq C_1e^{-N_{M,k}^{\alpha/5}}, $$ where $$\delta_\infty:=\delta\prod\limits_{k=1}^\infty\frac{1}{1+\frac{1}{\sqrt{N_{M_0,k-1}}}}>0,$$ which is the conclusion of Proposition \ref{numberballs}. \end{proof} \section{Appendix} \subsection{Cameron-Martin space}\label{p:CM} Given a Gaussian field $f$, we introduce a Hilbert space $H$. It consists of elements of $\mathcal{C}(\RR^d)$ and is called the \emph{Cameron-Martin space} of $f$. To define it, first define the Hilbert space $G$ to be the space of Gaussian random variables of the form $$ \sum\limits_{i\in\NN}a_if(x_i), $$ where the $a_i$ are in $\RR$ and the $x_i$ in $\RR^d$, and they satisfy $$ \sum\limits_{i,j}a_ia_j\kappa(x_i,x_j)<\infty, $$ $\kappa$ being the covariance kernel of $f$. We further define the map $P$ from $G$ to $\mathcal{C}(\RR^d)$ by $$ \xi\mapsto P(\xi)(.):=\EE[\xi f(.)]. $$ \begin{defn}[Cameron-Martin space]\label{d:cm} The Cameron-Martin space $H$ of $f$ is then the set $P(G)$ equipped with the scalar product $$ \langle h_1,h_2\rangle:=\EE[P^{-1}(h_1)P^{-1}(h_2)]. $$ \end{defn} We now explain a construction which is used to exhibit elements of the Cameron-Martin space whose support contains large balls. Suppose that the field $f$ has a spectral density $\rho^2$. Then the Cameron-Martin space of $f$ can be equivalently described as the space $$ \tilde{H}=\mathcal{F}[g\rho], \quad\quad g\in L^2_{\text{sym}}(S), $$ where $\mathcal{F}$ denotes the Fourier transform, $S$ is the support of $\rho$, $L^2_{\text{sym}}(S)$ is the set of complex Hermitian $L^2$ functions supported on $S$ and the inner product is the one associated with $L^2_{\text{sym}}(S)$.
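For intuition (a standard fact, not specific to this paper), the simplest elements of the Cameron-Martin space of Definition \ref{d:cm} come from single evaluations of the field: taking $\xi=f(x_0)\in G$ gives

```latex
h(\cdot) = P\big(f(x_0)\big)(\cdot) = \EE[f(x_0) f(\cdot)] = \kappa(x_0, \cdot),
\qquad
\|h\|_H^2 = \EE\big[f(x_0)^2\big] = \kappa(x_0, x_0),
```

so the covariance functions $\kappa(x_0,\cdot)$ all belong to $H$. Consistently with the $L^2$ description above, up to the choice of Fourier convention this $h$ corresponds to $g=\rho\, e^{i\langle x_0,\cdot\rangle}$, with $\|g\|_{L^2}^2=\int\rho^2=\kappa(x_0,x_0)$.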
We then have, for any $h\in H$ such that its Fourier transform $\hat{h}$ is defined, $$ \|h\|_H^2=\int\limits_{\RR^d}\frac{|\hat{h}|^2(x)}{\rho^2(x)}dx. $$ Using this description, if the field $f$ verifies Assumption \ref{a:basic}, in particular the spectral density assumption \ref{i:spectr}, one can establish the following: \begin{equation}\label{normcontrol} \|h\|_H\leq \frac{(\sup |\hat{h}|)\Vol(\Supp(\hat{h}))}{\inf\limits_{\Supp(\hat{h})}|\rho|}. \end{equation} The following are the key propositions used to establish the comparisons between the laws of Gaussian fields with close moving-average kernels. \begin{prop}[Cameron-Martin theorem, see e.g.\ \cite[Theorems 14.1 and 3.33]{janson_1997}] \label{CMcontrol} Let $f$ be a Gaussian field satisfying Assumption \ref{a:basic}. Let $h$ be an element of its Cameron-Martin space $H$. Let $X=P^{-1}(h)$ (see Definition \ref{d:cm}). Then the Radon-Nikodym derivative of the law of $f+h$ with respect to that of $f$ is $\exp[X-\frac12 \EE[X^2]]$. Otherwise stated, if $A$ is an event, then $$ |\PP[f\in A]-\PP[f+h\in A]|= \left|\EE_f\left[Q(h)\un_A\right]\right|, $$ where $$Q(h):=1-\exp[X-\frac12 \EE[X^2]].$$ \end{prop} We will call $Q(h)$ the \emph{Radon-Nikodym difference} associated to $h$. \begin{prop}[\cite{muirhead2018sharp}, proof of Proposition 3.6 and Corollary 3.10] \label{CMcontrol2} There exists a universal constant $c>0$ such that for any Gaussian field $f$, for any element $h$ of its Cameron-Martin space verifying $\|h\|_H\leq c$, its Radon-Nikodym difference $Q(h)$ verifies: $$ \EE_f\left[Q(h)^2\right]\leq \frac{\|h\|_H^2}{\log 2}, $$ $\|\cdot\|_H$ being as in Definition \ref{d:cm}. Further, if $$N_0:=\inf \{N\in\NN,\quad \inf\limits_{\mathcal{B}_{\frac1N}} \rho\geq \frac{\rho(0)}{2}\},$$ then for any $N\geq N_0$, there is an element $h$ of the Cameron-Martin space verifying $|h|\geq 1$ on $\mathcal{B}_N$ such that $$ \|h\|_H\leq \frac{C_0 N}{\int\limits_{\RR^d} q}, $$ where $C_0$ is a universal constant.
\end{prop} \begin{rem} The latter estimate follows from equation (\ref{normcontrol}), by considering functions $g\in L^2_{\textup{sym}}(S)$ with support on small annuli, and recalling that $q\star q=\kappa(0,.)$ so that $\int\limits_{\RR^d} q=\rho(0)$. \end{rem} \subsection{White-noise and subsets}\label{p:WN} We state a few facts about convolution with the white noise on $\RR^d$; see \cite[Appendix A]{dewan2021upper} for more details. \begin{defn} Let $(\varphi_i)_{i\in\NN}$ be a Hilbertian basis of $L^2(\RR^d)$, and $(Z_i)_{i\in\NN}$ be an i.i.d.\ sequence of centered Gaussian random variables of variance $1$. For any $L^2$ map $q$, define $$ q\star W:=\lim\limits_{n\to\infty}\sum\limits_{i=1}^n Z_i\, q\star \varphi_i, $$ where the limit is that of convergence in law with respect to the $\mathcal{C}^0$ topology on compact sets. \end{defn} The limit law in this convergence is independent of the Hilbertian basis we have chosen. Now, if $(S_i)_{i\in\NN}$ is a family of compact sets intersecting only on their boundaries (which have $0$ Lebesgue measure), and covering the whole space $\RR^d$, we can define $q\star (W|_{S_i})$ for any $i$ in the same way, with the $\varphi_j$ being this time elements of a Hilbertian basis of $L^2(S_i)$. The $q\star (W|_{S_i})$ can thus be taken independent of one another, and in that case it is easy to see that the following holds. \begin{prop}\label{splitboxes} For any such family $(S_i)_{i\in\NN}$ of compact sets covering $\RR^d$ and any $q\in L^2$, we have the following equality in law. \begin{equation}\label{wnsplit} q\star W = \sum\limits_{i=1}^\infty q\star (W|_{S_i}). \end{equation} \end{prop} \end{document}
\begin{document} \pagenumbering{arabic} \begin{abstract} We prove that a group homomorphism $\varphi\colon L\to G$ from a locally compact Hausdorff group $L$ into a discrete group $G$ either is continuous, or there exists a normal open subgroup $N\subseteq L$ such that $\varphi(N)$ is a torsion group, provided that $G$ does not include $\mathbb{Q}$, the $p$-adic integers $\mathbb{Z}_p$ or the Pr\"ufer $p$-group $\mathbb{Z}(p^\infty)$ for any prime $p$ as a subgroup, and that the torsion subgroups of $G$ are small in the sense that any torsion subgroup of $G$ is artinian. In particular, if $\varphi$ is surjective and $G$ additionally does not have non-trivial normal torsion subgroups, then $\varphi$ is continuous. As an application we obtain results concerning the continuity of group homomorphisms from locally compact Hausdorff groups to many groups from geometric group theory, in particular to automorphism groups of right-angled Artin groups and to Helly groups. \hspace{-0.4cm}{\bf Key words.} \textit{Automatic continuity, locally compact Hausdorff groups, metrically injective groups, Helly groups, automorphism groups of right-angled Artin groups} \hspace{-0.4cm}{\bf 2010 Mathematics Subject Classification.} Primary: 22D05; Secondary: 20F65 \end{abstract} \thanks{The work was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy EXC 2044--390685587, Mathematics M\"unster: Dynamics-Geometry-Structure. The first author was supported by (Polish) Narodowe Centrum Nauki, UMO-2018/31/G/ST1/02681. The second and third author were also partially supported by (Polish) Narodowe Centrum Nauki, UMO-2018/31/G/ST1/02681. The second author was supported by a stipend of the Studienstiftung des deutschen Volkes. The third author was supported by DFG grant VA 1397/2-1.
This work is part of the PhD projects of the first and second author.} \maketitle \section{Introduction} In the class of locally compact Hausdorff groups ${\bf LCG}$ one has to distinguish between algebraic morphisms and continuous morphisms. {\em We will always assume that locally compact groups have the Hausdorff property.} By ${\rm Hom}(L,G)$ we denote the set of algebraic morphisms, i.e.\ group homomorphisms that are not necessarily continuous, and by ${\rm cHom}(L,G)$ we denote the subset of continuous group homomorphisms. We are interested in conditions on the discrete group $G$ such that ${\rm Hom}({\bf LCG},G)={\rm cHom}({\bf LCG},G)$, i.e.\ every algebraic homomorphism $\varphi\colon L\to G$ is continuous for every locally compact group $L$. From a category theory perspective, this is the question of whether the forgetful functor from ${\bf LCG}$ to ${\bf Grp}$ is full. Questions concerning automatic continuity of group homomorphisms from locally compact groups into discrete groups have been studied for many years. A remarkable result obtained by Dudley in \cite{Dudley} says that any group homomorphism from a locally compact group into a free (abelian) group is continuous. Further results in this direction can be found in \cite{BogopolskiCorson}, \cite{ConnerCorson}, \cite{CorsonAut} and in \cite{CorsonKazachkov}. A characterization in terms of forbidden subgroups of $G$ was obtained in \cite{CorsonVarghese}: ${\rm Hom}({\bf LCG},G)={\rm cHom}({\bf LCG},G)$ if and only if $G$ is torsion-free and does not contain $\mathbb{Q}$ or the $p$-adic integers $\mathbb{Z}_p$ for any prime $p$ as a subgroup. By definition, a discrete group $G$ is called $lcH$\textit{-slender} if ${\rm Hom}({\bf LCG},G)={\rm cHom}({\bf LCG},G)$. In geometric group theory it is common to investigate virtual properties of groups, hence we call a group $G$ \textit{virtually lcH-slender} if it has a finite index $lcH$-slender subgroup.
Using \cite{CorsonVarghese}, we obtain a characterization of virtually $lcH$-slender groups. \begin{corollaryA} A group $G$ is virtually $lcH$-slender if and only if $G$ is virtually torsion-free and does not include $\mathbb{Q}$ or the $p$-adic integers $\mathbb{Z}_p$ for any prime $p$ as a subgroup. \end{corollaryA} Many groups from geometric group theory are not $lcH$-slender, but they are virtually $lcH$-slender. Some examples of these groups are Coxeter groups and (outer) automorphism groups of right-angled Artin groups. The main focus of this article is on automatic continuity for surjective group homomorphisms from locally compact groups into discrete groups. We know that any surjective group homomorphism from a locally compact group into $\mathbb{Z}$ is continuous, but what happens if we replace the group $\mathbb{Z}$ by a slightly bigger group that contains torsion elements, for example the infinite dihedral group $\mathbb{Z}\rtimes\mathbb{Z}/2\mathbb{Z}$? Let ${\rm Epi}(L,G)$ be the set of surjective group homomorphisms and ${\rm cEpi}(L,G)$ the subset consisting of continuous surjective group homomorphisms. The question we address here is the following:\\ \textcolor{black}{\rule{\textwidth}{0.07cm}} \begin{quote} {\em Under which conditions on the discrete group $G$ does the equality $${\rm Epi}({\bf LCG},G)={\rm cEpi}({\bf LCG},G)$$ hold? } \end{quote} \textcolor{black}{\rule{\textwidth}{0.07cm}} It was proven by Morris and Nickolas in \cite{MN} that if $G$ is a non-trivial (finite) free product $\mathbin{*}_{i\in I} G_i$ of groups $G_i$, then ${\rm Epi}({\bf LCG},\mathbin{*}_{i\in I} G_i)={\rm cEpi}({\bf LCG},\mathbin{*}_{i\in I}G_i)$. Finite free products of groups are special cases of graph products of groups.
Given a finite simplicial graph $\Gamma=(V, E)$ and a collection of groups $\mathcal{G} = \{ G_u \mid u \in V\}$, the \emph{graph product} $G_\Gamma$ is defined as the quotient $({\ast}_{u\in V} G_u) / \langle \langle [G_v,G_w]\text{ for }\{v,w\}\in E \rangle \rangle$. Kramer and the third author proved in \cite{KramerVarghese} that if the vertex set of $\Gamma$ is not equal to $S\cup \{w\in V\mid \{v,w\}\in E\text{ for all }v\in S\}$ where the subgraph generated by $S$ is complete, then ${\rm Epi}({\bf LCG},G_\Gamma)={\rm cEpi}({\bf LCG},G_\Gamma).$ Further, the second and third author proved in \cite{MV20} that if $G$ is a subgroup of a CAT$(0)$ group whose torsion subgroups are finite and $G$ does not have non-trivial finite normal subgroups, then ${\rm Epi}({\bf LCG}, G)={\rm cEpi}({\bf LCG}, G)$ by geometric means. Our main result is the following. \begin{theoremB} Let $G$ be a discrete group. If \begin{enumerate} \item[(i)] $G$ does not include $\mathbb{Q}$ or the $p$-adic integers $\mathbb{Z}_p$ for any prime $p$ as a subgroup, \item[(ii)] $G$ does not include the Pr\"ufer $p$-group $\mathbb{Z}(p^\infty)$ for any prime $p$ as a subgroup and \item[(iii)] torsion subgroups in $G$ are artinian, \end{enumerate} then any group homomorphism $\varphi\colon L\to G$ from a locally compact group $L$ to $G$ is continuous, or there exists a normal open subgroup $N\subseteq L$ such that $\varphi(N)$ is a non-trivial torsion group. If additionally \begin{enumerate} \item[(iv)] $G$ does not have non-trivial torsion normal subgroups, \end{enumerate} then ${\rm Epi}({\bf LCG},G)={\rm cEpi}({\bf LCG},G)$. \end{theoremB} We want to remark that an open normal subgroup $N$ in a non-discrete locally compact group $L$ is large in the sense that $L/N$ is a discrete group. Thus a group homomorphism $\varphi\colon L\to G$ from a locally compact group $L$ to a discrete group $G$ with properties (i)-(iii) of Theorem B is continuous or the image is almost a torsion group. 
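To fix ideas, the combinatorics of the defining graph can be made concrete in a toy computation (illustrative only; the function names are ours): the graph product imposes exactly one commutation relator per edge, with the empty graph giving the free product and the complete graph the direct product.

```python
from itertools import combinations

def commutation_relators(vertices, edges):
    # One relator [G_v, G_w] per edge {v, w} of the defining graph Gamma.
    return sorted(tuple(sorted(e)) for e in {frozenset(e) for e in edges})

def is_free_product(vertices, edges):
    # No edges: the graph product degenerates to the free product.
    return len(list(edges)) == 0

def is_direct_product(vertices, edges):
    # Complete graph: all vertex groups commute pairwise.
    all_pairs = {frozenset(p) for p in combinations(vertices, 2)}
    return {frozenset(e) for e in edges} == all_pairs
```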
Many geometric groups, i.e.\ groups that admit a geometric action on a metric space with additional geometric structure, are of interest to us. Our main result is inspired by the question of whether similar automatic continuity results as in the case of CAT$(0)$ groups hold for other geometric groups, in particular for metrically injective groups. We call a group $G$ \textit{metrically injective} if it acts geometrically on an injective metric space, i.e.\ a metric space that is an injective object in the category of metric spaces and $1$-Lipschitz maps. Examples of metrically injective groups include Gromov-hyperbolic groups \cite{Lang}, uniform lattices in ${\rm GL}_n(\mathbb{R})$ \cite{Haettel} and Helly groups \cite{Helly}, in particular CAT$(0)$ cocompactly cubulated groups, finitely presented graphical $C(4)-T(4)$ small cancellation groups, type-preserving uniform lattices in Euclidean buildings of type $\widetilde{C}_n$ \cite{Helly}, uniform lattices in ${\rm GL}_n(\mathbb{Q}_p)$ \cite{Haettel} and Artin groups of type FC \cite{HuangOsajda}. As an application of Theorem B we obtain an automatic continuity result for group homomorphisms from locally compact groups to metrically injective groups and many other groups from geometric group theory. \begin{corollaryC}(see Proposition \ref{classG}) If $G$ is a subgroup of \begin{enumerate} \item a virtually $lcH$-slender group, \item a cocompactly cubulated ${\rm CAT}(0)$ group, \item a ${\rm CAT}(0)$ group whose torsion subgroups are artinian, e.g. a Coxeter group, \item a Gromov-hyperbolic group, \item a metrically injective group whose torsion subgroups are artinian, e.g. a Helly group whose torsion subgroups are artinian, \item a finitely generated residually finite group whose torsion subgroups are artinian, e.g.
the (outer) automorphism group of a right-angled Artin group, \item a one-relator group, \item a finitely generated linear group in characteristic $0$, \item the Higman group, \end{enumerate} then any group homomorphism $\varphi\colon L\to G$ from a locally compact group $L$ is continuous or there exists a normal open subgroup $N\subseteq L$ such that $\varphi(N)$ is a torsion group. If $G$ does not have non-trivial torsion normal subgroups, then ${\rm Epi}({\bf LCG},G)={\rm cEpi}({\bf LCG},G)$. \end{corollaryC} Furthermore, we show that we can generate new examples from these old ones by taking extensions and graph products where the defining graph $\Gamma$ is finite. More precisely, let $\mathcal{G}$ denote the class of groups that do not contain $\mathbb{Q}$ or $\mathbb{Z}_p$ or the Pr\"ufer group $\mathbb{Z}(p^\infty)$ for any prime number $p$ and whose torsion subgroups are artinian. \begin{Proposition D}(see Proposition \ref{Extensions} and Proposition \ref{ClassG}) The class $\mathcal{G}$ is closed under taking extensions and graph products of groups. \end{Proposition D} \subsection*{Structure of the proof of Theorem B} Given a locally compact group $L$, the connected component which contains the identity, denoted by $L^\circ$, is a closed normal subgroup. Thus we obtain a short exact sequence of locally compact groups $\left\{1\right\}\to L^\circ\to L\to L/L^\circ\to\left\{1\right\}$, where $L/L^\circ$ is a totally disconnected locally compact group. In this way, the study of group homomorphisms from a general locally compact group reduces to studying group homomorphisms from connected locally compact groups and totally disconnected locally compact groups. The world of connected locally compact groups is well understood. Iwasawa's structure Theorem \cite[Thm. 13]{Iwasawa} tells us that the puzzle pieces of these groups are compact groups and the real numbers.
On the other hand, any group with the discrete topology is a totally disconnected locally compact group, hence there are no structure theorems for this class of groups, but there is an important result by van Dantzig that says that any totally disconnected locally compact group has an open compact subgroup \cite[III \S 4, No. 6]{Dantzig}. Iwasawa's and van Dantzig's Theorems are the key ingredients in the proof of Theorem B. Given a group homomorphism $\varphi\colon L\to G$ where $G$ satisfies the conditions (i)-(iii), in the first step, using Iwasawa's structure Theorem, we prove that $\varphi(L^\circ)$ is a trivial group. In the second step, using van Dantzig's Theorem and the fact that torsion subgroups in $G$ are artinian, we prove that for the induced group homomorphism $\overline{\varphi}\colon L/L^\circ\to \varphi(L)$ there exists a compact open subgroup $K\subseteq L/L^\circ$ which is mapped to a minimal torsion normal subgroup. We then pull the subgroup $\overline{\varphi}(K)$ back and show it is indeed open and also mapped to a torsion normal subgroup under $\varphi$. If condition (iv) is satisfied too, then the subgroup $\overline{\varphi}(K)$ is trivial. Hence the maps $\overline{\varphi}$ and $\varphi$ have open kernels and therefore are continuous. \subsection*{Acknowledgment.} The authors thank Linus Kramer for helpful comments on a preprint of this paper and Damian Osajda for pointing out the right attributions. Additionally the authors thank the anonymous referee for pointing out a mistake in the argument and how to get around it via the strengthening of Corollary 2.5 and the introduction of Lemmas 4.1 and 4.2. The second and third author are grateful for the opportunity to take part in the Mini-Workshop ``Nonpositively Curved Complexes" in Oberwolfach organized by Damian Osajda, Piotr Przytycki and Petra Schwer that inspired them to start this project. 
The third author thanks Samuel Corson for introducing her to the Higman group and for giving arguments that this group satisfies the conditions (i)-(iv) of Theorem B. The second author thanks the Studienstiftung des deutschen Volkes for the support. \section{Preliminaries} \subsection{Locally compact groups} A {\em locally compact group} is a group endowed with a locally compact Hausdorff topology such that the group operations are continuous. We will always assume that locally compact groups have the Hausdorff property. The content of this section and much more information can be found in various books on topological groups, e.g.\ \cite{Stroppel}. Examples of (connected) locally compact groups are $\mathbb{R}$, the circle group $U(1) = \left\{z\in \mathbb{C} \mid | z |= 1\right\}$ and ${\rm SL}_n(\mathbb{R})$. Every group endowed with the discrete topology is a totally disconnected locally compact group. The $p$-adic integers $\mathbb{Z}_p$ and $\prod_{\mathbb{N}}\mathbb{Z}/2\mathbb{Z}$ are examples of non-discrete totally disconnected locally compact groups. Furthermore, the automorphism group of a locally finite connected graph with the topology of pointwise convergence is also such a group. There are two results on locally compact groups which we need for the proof of Theorem B. \begin{theorem} \label{IwasawaVanDantzig} Let $L$ be a locally compact group. \begin{enumerate} \item (Iwasawa's structure Theorem \cite[Thm. 13]{Iwasawa}) If $L$ is connected, then $L=H_0\cdot\ldots\cdot H_n\cdot K$ where each $H_i$ is isomorphic to $\mathbb{R}$ and $K$ is a compact connected group. \item (van Dantzig's Theorem \cite[III \S 4, No. 6]{Dantzig}) If $L$ is totally disconnected, then $L$ has a compact open subgroup.
\end{enumerate} \end{theorem} \subsection{Torsion and divisible groups} Recall that a group $G$ is called a \textit{torsion} group if all its elements have finite order and $G$ is called \textit{torsion-free} if every non-trivial element of $G$ is of infinite order. For a given group $G$ we denote by ${\rm Tor}(G)$ the subset of $G$ consisting of all finite order elements. If $G$ is abelian, the subset ${\rm Tor}(G)$ is a (torsion) subgroup of $G$ and $G/{\rm Tor}(G)$ is a torsion-free group \cite[Thm. 1.2]{Fuchs}. \begin{lemma}\label{TorsionNormal} Let $G$ be a group and $N\subseteq G$ a normal torsion-free subgroup. Let $\pi\colon G\to G/N$ be the canonical projection. If $H\subseteq G/N$ is torsion-free, then $\pi^{-1}(H)$ is torsion-free. \end{lemma} \begin{proof} Suppose there exists an $h\in \pi^{-1}(H)$ with ${\rm ord}(h)<\infty$. Then $\pi(h)$ also has finite order, so $\pi(h)=1\cdot N$. But that means $h\in N$ and since $N$ is torsion-free, we can therefore conclude $h=1$ which means that $\pi^{-1}(H)$ is torsion-free. \end{proof} The group of $p$-adic integers $\mathbb{Z}_p$ is a crucial poison group in the characterization of $lcH$-slender groups given in \cite[Thm. 1]{CorsonVarghese}. We recall a definition of this important group: For a prime $p$ we denote the group of \textit{$p$-adic integers} by $\mathbb{Z}_p$, which is defined as the inverse limit of the inverse system $0\longleftarrow \mathbb{Z}/p\mathbb{Z}\longleftarrow\mathbb{Z}/p^2\mathbb{Z}\longleftarrow\dots\longleftarrow\mathbb{Z}/p^n\mathbb{Z}\longleftarrow\dots$ where each of the maps $\mathbb{Z}/p^{n-1}\mathbb{Z}\longleftarrow\mathbb{Z}/p^n\mathbb{Z}$ reduces an integer mod $p^n $ to its value mod $p^{n-1}$. The group $\mathbb{Z}_p$ is a torsion-free uncountable compact group \cite[p. 9]{Sullivan}. For a natural number $n>0$ an element $g\in G$ is called \textit{$n$-divisible} if there exists an element $h$ in $G$ such that $h^n=g$. 
If every element of $G$ is $n$-divisible, we call $G$ \textit{$n$-divisible}. We call $g\in G$ \textit{divisible} if $g$ is $n$-divisible for all natural numbers $n>0$ and $G$ is said to be \textit{divisible} if every element $g\in G$ is divisible. Examples of divisible groups include $\mathbb{Q}$, $\mathbb{R}$ and compact connected groups \cite[Cor. 2]{Mycielski}; a non-example is the compact group $\mathbb{Z}_p$, which is nevertheless $q$-divisible for any prime $q\in\mathbb{P}-\left\{p\right\}$ \cite[p. 40]{Fuchs}. The structure of quotients of $\mathbb{Z}_p$ can be described using the following result. Since we could not find a reference, we give a proof here which is based on \cite{Mathoverflow}. \begin{lemma} \label{QuotientZp} If $H$ is a non-trivial subgroup of $\mathbb{Z}_p$, then $\mathbb{Z}_p/H\cong T\oplus D$, where $T$ is a finite cyclic group of $p$-power order and $D$ is an abelian divisible group. \end{lemma} \begin{proof} First we recall that any non-trivial closed subgroup of $\mathbb{Z}_p$ is also open \cite[Prop. 7]{RobertsonSchreiber}. Hence, we have the following equivalence: $$\text{A non-trivial subgroup }B\text{ is open in }\mathbb{Z}_p\text{ iff }B\text{ is closed in } \mathbb{Z}_p.$$ Now we show that for a non-trivial subgroup $H\subseteq \mathbb{Z}_p$, the quotient $\overline{H}/H$ is an abelian divisible group. Since multiplication by $p$, denoted by $m_p\colon\mathbb{Z}_p\to\mathbb{Z}_p$, $m_p(g)=p\cdot g$, is continuous and $\mathbb{Z}_p$ is compact, the group $p\overline{H}$ is compact, hence closed, and therefore open; furthermore the group $p\overline{H}+H=\bigcup_{h\in H}(p\overline{H}+h)$ is open and therefore closed and contains $H$. Hence $\overline{H}=p\overline{H}+H$. We obtain $p\cdot(\overline{H}/H)=(p\cdot\overline{H})/H=(p\overline{H}+H)/H=\overline{H}/H$, thus $\overline{H}/H$ is $p$-divisible. Furthermore, since $\overline{H}$ is closed, it is $q$-divisible for every prime $q\neq p$ \cite[Lemma 6.2.3]{Fuchs}.
So $\overline{H}/H$ is $q$-divisible for all primes $q$ and therefore divisible. It was proven in \cite[Thm. 4.2.5]{Fuchs} that a divisible subgroup of an abelian group is a direct summand of that group. Thus there exists a group $T$ such that $$\mathbb{Z}_p/H\cong T\oplus \overline{H}/H.$$ It remains to prove that $T$ is a finite cyclic group of $p$-power order. We have $$T\cong \left(\mathbb{Z}_p/H\right) \big/ \left(\overline{H}/H\right)\cong \mathbb{Z}_p/\overline{H}.$$ Again, the group $\overline{H}$ is an open subgroup, so the cosets $g+\overline{H}$, $g\in \mathbb{Z}_p$, form an open cover of $\mathbb{Z}_p$. Since $\mathbb{Z}_p$ is compact, finitely many cosets suffice, hence $\mathbb{Z}_p/\overline{H}$ is finite. Now we consider the group homomorphism $\pi:= \pi_2\circ \pi_1\colon\mathbb{Z}_p\overset{\pi_1}{\rightarrow} \mathbb{Z}_p/H\cong T\oplus D\overset{\pi_2}{\rightarrow} T.$ Since the group $T$ is finite, this group homomorphism is continuous \cite[Thm. 5.9]{Klopsch}. The group $\mathbb{Z}_p$ has a dense cyclic subgroup, whose image under the continuous map $\pi$ is dense in the finite discrete group $T$, thus $T$ has to be cyclic. Further, the group $\mathbb{Z}_p$ is $q$-divisible for any prime $q\neq p$, hence $\pi(\mathbb{Z}_p)=T$ is also $q$-divisible for any prime $q\neq p$; a finite $q$-divisible group has order coprime to $q$, so $T$ is a cyclic group of $p$-power order. \end{proof} \subsection{Abelian divisible groups} Some important divisible abelian groups are the Pr\"ufer $p$-groups $\mathbb{Z}(p^\infty)$: Let $p$ be a prime. For each natural number $n$, consider the quotient group $\mathbb{Z}/p^n\mathbb{Z}$ and the embedding $\mathbb{Z}/p^n\mathbb{Z}\to \mathbb{Z}/p^{n+1}\mathbb{Z}$ induced by multiplication by $p$. The direct limit of this system is called the Pr\"ufer $p$-group $\mathbb{Z}(p^\infty)$. Each Pr\"ufer $p$-group $\mathbb{Z}(p^\infty)$ is divisible and abelian, and every proper subgroup of it is finite and cyclic.
Moreover, the complete list of subgroups of a Pr\"ufer $p$-group is given by $0\subsetneq \mathbb{Z}/p\mathbb{Z}\subsetneq \mathbb{Z}/p^2\mathbb{Z}\subsetneq\dots\subsetneq\mathbb{Z}/p^n\mathbb{Z}\subsetneq\dots\subsetneq \mathbb{Z}(p^\infty)$. For more information regarding these groups we refer to \cite[Chap. 1.3]{Fuchs}.\\ The structure of abelian divisible groups is completely understood, as can be seen from the following theorem, which can be found in \cite[Thm. 4.3.1]{Fuchs}. \begin{StructureTheorem} \label{structureTheorem} An abelian divisible group is the direct sum of groups each of which is isomorphic either to the additive group $(\mathbb{Q},+)$ of rational numbers or to a Pr\"ufer $p$-group $\mathbb{Z}(p^\infty)$ for a prime $p$. \end{StructureTheorem} In particular this can be applied to homomorphisms from divisible groups (e.g. $\mathbb{R}$) into groups $G$ that do not contain $\mathbb{Q}$ or the Pr\"ufer $p$-group $\mathbb{Z}(p^\infty)$ for any prime $p$: \begin{corollary} \label{Rtrivial} Let $H$ be a divisible group and $f\colon H\to G$ be a group homomorphism. If $G$ does not include $\mathbb{Q}$ or the Pr\"ufer $p$-group $\mathbb{Z}(p^\infty)$ for any prime $p$ as a subgroup, then $f(H)$ is trivial. \end{corollary} The very clever idea of the following proof was suggested to us by the anonymous referee. \begin{proof} Supposing to the contrary that $f(H)$ is non-trivial, we select $g_0\in f(H)-\{1\}$. Of course, $f(H)$ is divisible since $H$ is divisible, so assuming we have selected $g_n\in f(H)$ we select $g_{n+1}\in f(H)$ such that $g_{n+1}^{(n+1)!}=g_n$. Now the subgroup $\langle g_0,g_1,\dots\rangle\subseteq f(H)$ is a nontrivial abelian divisible group: any two generators are powers of a common $g_m$, so the subgroup is abelian, and for every $N\in\mathbb{N}$ each $g_n$ satisfies $g_n=g_m^{(n+1)!(n+2)!\cdots m!}$ for $m>n$, where the exponent is divisible by $N$ once $m\geq N$, so every element is $N$-divisible. By Theorem \ref{structureTheorem} this subgroup must therefore include a copy of $\mathbb{Q}$ or some $\mathbb{Z}(p^\infty)$, contradicting the assumption that $G$, and therefore $f(H)$, does not include such subgroups.
\end{proof} \section{Virtually lcH-slender groups} A discrete group $G$ is called \textit{locally compact Hausdorff slender} (abbrev. \textit{lcH-slender}) if any group homomorphism from a locally compact group to $G$ is continuous. This concept was first introduced by Conner and Corson in \cite{ConnerCorson}. There are some obvious ``poison subgroups'' that prevent a group $G$ from being \textit{lcH}-slender. These subgroups are $\mathbb{Q}$, the $p$-adic integers $\mathbb{Z}_p$, the Pr\"ufer $p$-group and torsion groups in general, for the following reasons: There exist discontinuous homomorphisms from $\mathbb{R}$ to $\mathbb{Q}$. For example, take a Hamel basis $B$ of $\mathbb{R}=\bigoplus_{b\in B}b\mathbb{Q}$; then the map that sends any linear combination $\sum_{b\in B} bq_b$ with coefficients $q_b\in\mathbb{Q}$ to the sum of the coefficients in $\mathbb{Q}$ is discontinuous with respect to the standard topology on $\mathbb{R}$. Similarly, one can construct a discontinuous homomorphism from the compact circle group $\mathbb{R}/\mathbb{Z}$ to the Pr\"ufer $p$-group $\mathbb{Z}(p^\infty)$ for any prime $p$. If $G$ contains torsion, then $G$ includes $\mathbb{Z}/p\mathbb{Z}$ as a subgroup for some prime $p$. One can construct a discontinuous homomorphism from the compact group $\prod_\mathbb{N} \mathbb{Z}/p\mathbb{Z}$ to $\mathbb{Z}/p\mathbb{Z}$ by extending, via a vector space argument, the homomorphism on $\bigoplus_\mathbb{N} \mathbb{Z}/p\mathbb{Z}$ that takes the sum of all entries. The $p$-adic integers $\mathbb{Z}_p$ form a non-discrete compact group for any prime $p$. Therefore, if $G$ includes $\mathbb{Z}_p$ for some $p$, then the identity homomorphism from the non-discrete compact group $\mathbb{Z}_p$ to the discrete subgroup of $G$ isomorphic to $\mathbb{Z}_p$ is discontinuous. The algebraic structure of $lcH$-slender groups was characterized by Corson and the third author in \cite[Thm. 1]{CorsonVarghese}.
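For illustration, we sketch the standard argument (included here only for the reader's convenience) for why a non-zero additive map into $\mathbb{Q}$, such as the coefficient-sum map above, can never be continuous: if $f\colon\mathbb{R}\to\mathbb{Q}\subseteq\mathbb{R}$ is additive and continuous, then
$$f(\mathbb{R})\ \text{is a connected subset of}\ \mathbb{R}\ \text{contained in the totally disconnected set}\ \mathbb{Q},$$
hence a single point; since $f(0)=0$, this forces $f\equiv 0$. Thus every non-zero homomorphism $\mathbb{R}\to\mathbb{Q}$ is automatically discontinuous.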
\begin{theorem} \label{SamOlga} A group $G$ is $lcH$-slender if and only if $G$ is torsion-free and does not include $\mathbb{Q}$ or any $p$-adic integer group $\mathbb{Z}_p$ as a subgroup. \end{theorem} We extend this concept to groups with torsion by studying \textit{virtually lcH-slender groups}, i.e. groups that contain a finite index $lcH$-slender subgroup. \begin{corollaryA} A group $G$ is virtually $lcH$-slender if and only if $G$ is virtually torsion-free and does not include $\mathbb{Q}$ or the $p$-adic integers $\mathbb{Z}_p$ for any prime $p$ as a subgroup. \end{corollaryA} \begin{proof} Suppose $G$ is virtually torsion-free and does not include $\mathbb{Q}$ or any $p$-adic integer group $\mathbb{Z}_p$ as a subgroup, and let $H$ be a finite index torsion-free subgroup of $G$. Then $H$ does not include $\mathbb{Q}$ or any $p$-adic integer group $\mathbb{Z}_p$ as a subgroup, so $H$ is $lcH$-slender by \cite[Thm. 1]{CorsonVarghese} and therefore $G$ is virtually $lcH$-slender. Suppose now that $G$ is virtually $lcH$-slender. Then $G$ is virtually torsion-free. Let $H$ be a finite index $lcH$-slender subgroup of $G$. Without loss of generality we can assume that $H$ is normal in $G$. Suppose that $G$ includes a subgroup $P$ which is either isomorphic to $\mathbb{Q}$ or to the group of $p$-adic integers $\mathbb{Z}_p$. Since $H$ has finite index in $G$, $P/(P\cap H)\cong PH/H$ is a finite quotient group of $P$.\\ \textit{Case 1:} $P\cong \mathbb{Q}$. Every quotient of $\mathbb{Q}$ is divisible, thus $\mathbb{Q}$ does not have a non-trivial finite quotient group, so $P/(P\cap H)$ must be trivial and therefore $P\subseteq H$, contradicting the assumption that $H$ is \textit{lcH}-slender, by \cite[Thm. 1]{CorsonVarghese}.\\ \textit{Case 2:} $P\cong \mathbb{Z}_p$ for some prime $p$. Since $P/(P\cap H)$ is finite, $P\cap H$ must be isomorphic to $\mathbb{Z}_p$ \cite[p. 18]{Fuchs}, contradicting the assumption that $H$ is $lcH$-slender, by Theorem \ref{SamOlga}. \end{proof} The class consisting of virtually torsion-free groups includes the following groups: \begin{itemize} \item (Outer) automorphism groups of right-angled Artin groups, by \cite[Thm. 5.2]{CharneyVogtmann} combined with Lemma \ref{TorsionNormal}, since ${\rm Inn}(A_\Gamma)$ is torsion-free. \item Finitely generated linear groups in characteristic $0$ \cite{Selberg}, in particular Coxeter groups \cite[Cor. 6.12.12]{Davis}. \end{itemize} In Proposition \ref{classG} we will show that all these groups contain neither $\mathbb{Q}$ nor $\mathbb{Z}_p$ for any prime $p$ as a subgroup, hence these groups are virtually $lcH$-slender. \section{Proof of Theorem B} We recall some concepts from abelian group theory. An abelian group is \textit{algebraically compact} if it is an algebraic direct summand of a compact Hausdorff abelian group (see \cite[Section 6.1]{Fuchs}). In particular, a compact abelian group is algebraically compact, since it is trivially a direct summand of itself. An abelian group $A$ is \textit{cotorsion} if every short exact sequence of abelian groups,\\ \begin{center} \begin{tikzcd} 0 \arrow[r] & A \arrow[r, "i"] & B \arrow[r, "\pi"] & C \arrow[r] & 0 \end{tikzcd} \end{center} with $C$ torsion-free, splits (that is, there is a homomorphism $\psi \colon C\to B$ such that $\pi\circ\psi$ is the identity) \cite[Section 9.6]{Fuchs}. Algebraically compact groups are cotorsion and the homomorphic image of a cotorsion group is cotorsion (see \cite[p. 282]{Fuchs} for both assertions).\\ An abelian group is \textit{cotorsion-free} if $0$ is its only cotorsion subgroup \cite[p. 500]{Fuchs}. An abelian group is cotorsion-free if and only if it is torsion-free and does not include a copy of $\mathbb{Z}_p$ or $\mathbb{Q}$ \cite[Thm.
13.3.8]{Fuchs}.\\ A torsion abelian group $A$ is cotorsion if and only if it is of the form $A=B\oplus D$ where $B$ is bounded (that is, there exists some positive natural number $n$ with $nB=0$) and $D$ is divisible \cite[Cor. 9.8.4]{Fuchs}. In particular, a finite abelian torsion group is cotorsion. Recall that a group $H$ is called \textit{artinian} if every non-empty collection of subgroups of $H$ has a minimal element with respect to inclusion. \begin{lemma} \label{AbelianArtinian} Let $A$ be an abelian group. The group $A$ is artinian if and only if $A$ can be written as a direct sum $A=J\oplus\bigoplus_{i=0}^m \mathbb{Z}(p_i^\infty)$ of a finite abelian group $J$ and finitely many Pr\"ufer $p_i$-groups. \end{lemma} \begin{proof} By \cite[Thm. 4.5.3]{Fuchs}, $A$ is artinian if and only if $A$ is a direct sum of a finite number of so-called cocyclic groups, which are each isomorphic to either $\mathbb{Z}/p^k\mathbb{Z}$ or $\mathbb{Z}(p^\infty)$ for some prime $p$ and some positive integer $k$ by \cite[Thm. 1.3.3]{Fuchs}. \end{proof} For the proof of Theorem B we need one additional lemma. \begin{lemma} \label{CompactTorsion} Let $\varphi\colon K\to G$ be a group homomorphism from a compact group $K$ into a discrete group $G$. Assume that $G$ does not include $\mathbb{Q}$, or the $p$-adic integers $\mathbb{Z}_p$ for any prime $p$, or the Pr\"ufer $p$-group $\mathbb{Z}(p^\infty)$ for any prime $p$ as a subgroup. Also suppose that abelian torsion subgroups of $G$ are artinian. Then $\varphi(K)$ is a torsion group. \end{lemma} \begin{proof} For $k\in K$ the subgroup $\overline{\langle k\rangle}$ of $K$ is a compact abelian group \cite[Lemma 4.4]{Stroppel}, therefore algebraically compact and therefore cotorsion. Thus the homomorphic image $A=\varphi(\overline{\langle k\rangle})$ is cotorsion. Let $\rm{Tor}(A)$ denote the torsion subgroup of $A$.
As $A$ is an abelian subgroup of $G$ we know that $\rm{Tor}(A)$ is artinian, and therefore $\rm{Tor}(A)$ is a direct sum of a finite abelian group and some Pr\"ufer groups by Lemma \ref{AbelianArtinian}. But $G$ does not include any Pr\"ufer groups, so $\rm{Tor}(A)$ is finite.\\ Now, $\rm{Tor}(A)$ is a finite abelian group, and therefore cotorsion. Since $A/\rm{Tor}(A)$ is torsion-free, the short exact sequence\\ \begin{center} \begin{tikzcd} 0 \arrow[r] & \rm{Tor}(A) \arrow[r] & A \arrow[r] & A/\rm{Tor}(A) \arrow[r] & 0 \end{tikzcd} \end{center} splits, and so we may write $A\cong \rm{Tor}(A)\oplus A/\rm{Tor}(A)$. Now $A/\rm{Tor}(A)$ is torsion-free and it also does not include $\mathbb{Q}$ or any $\mathbb{Z}_p$ since it is isomorphic to a subgroup of $A$ (hence isomorphic to a subgroup of $G$). Therefore $A/\rm{Tor}(A)$ is cotorsion-free. But $A/\rm{Tor}(A)$ is a homomorphic image of the cotorsion group $A$, and therefore is itself cotorsion. Thus $A/\rm{Tor}(A)$ is trivial. In particular $\rm{Tor}(A)=A$ and $A$ is finite, so $\varphi(k)$ has finite order. Hence $\varphi(K)$ is a torsion group. \end{proof} \begin{proof}[{\bf Proof of Theorem B}] We will first show the part of the theorem in which the target group only satisfies conditions (i)-(iii). The statement for a group also satisfying the fourth condition will be an easy corollary of that case. Let $\varphi\colon L\to G$ be a group homomorphism from a locally compact group $L$ into a group $G$ that satisfies the conditions (i)-(iii) of Theorem B. Without loss of generality we can assume $\varphi(L)=G$, since every subgroup of $G$ satisfies (i)-(iii) of Theorem B. \textit{Step 1: $\varphi(L^\circ)$ is trivial.}\\ Due to Iwasawa's structure Theorem \ref{IwasawaVanDantzig} we can write the connected component $L^\circ$ as $L^\circ=H_1\cdots H_k\cdot K$, where each $H_i$ is isomorphic to $\mathbb{R}$ and $K$ is a compact connected group.
Due to Corollary \ref{Rtrivial}, the image $\varphi(H_i)$ of each $H_i$ is trivial. Also, since $K$ is compact connected it is divisible \cite[Cor. 2]{Mycielski}, and therefore $\varphi(K)$ is also trivial by Corollary \ref{Rtrivial}. Since $\varphi(L^\circ)$ is trivial, we know that $\varphi$ descends to a homomorphism $\overline{\varphi}:L/L^\circ\to G$: \begin{center} \begin{tikzcd} L \arrow[rr, "\varphi"] \arrow[swap, rd, "\pi"] & & G \\ & L/L^\circ \arrow[swap, ru, "\overline{\varphi}"] & \end{tikzcd} \end{center} \textit{Step 2: Apply van Dantzig's Theorem to $L/L^\circ$:}\\ Since $L/L^\circ$ is a totally disconnected locally compact group, we apply van Dantzig's Theorem \ref{IwasawaVanDantzig} to find at least one compact open subgroup $K_1\subseteq L/L^\circ$. This allows us to distinguish the following two cases:\\ \textit{Case A:} There exists a compact open subgroup $K\subseteq L/L^\circ$ such that $\overline{\varphi}(K)$ is trivial. Then $\pi^{-1}(K)$ is open in $L$ and contained in the kernel of $\varphi$, so $\ker(\varphi)$ is open. Hence the map $\varphi$ is continuous.\\ \textit{Case B:} There is no such compact open subgroup. This case requires more separate steps: \textit{Step B.1: Find a ``minimal'' compact open subgroup $K_0\subseteq L/L^\circ$ using condition (iii):}\\ By Lemma \ref{CompactTorsion} we know that for every compact open subgroup $K$, the image $\overline{\varphi}(K)$ is a torsion group. We consider the following family of torsion subgroups of $G$: $$\mathcal{T}=\{\bar{\varphi}(K)\mid K\subseteq L/L^\circ\text{ compact open subgroup}\}.$$ Since torsion subgroups of $G$ are artinian, the set $\mathcal{T}$ has a minimal element; we choose one and call it $\overline{\varphi}(K_0)$. \textit{Step B.2: We now show that $\overline{\varphi}(K_0)$ is a normal subgroup, hence the theorem holds for t.d.l.c.
groups:}\\ The group $gK_0g^{-1}$ is a compact open subgroup of $L/L^\circ$ for all $g\in L/L^\circ$, so $gK_0g^{-1} \cap K_0$ is as well. Thus we have $\bar{\varphi}(gK_0g^{-1}\cap K_0)\subseteq \bar{\varphi}(K_0)$ and due to minimality we have $\bar{\varphi}(gK_0g^{-1}\cap K_0)= \bar{\varphi}(K_0)$. That means $\bar{\varphi}(gK_0g^{-1})=\bar{\varphi}(K_0)$ for every $g\in L/L^\circ$ and therefore $\bar{\varphi}(L/L^\circ)$ is contained in the normalizer ${\rm Nor}_{G}(\bar{\varphi}(K_0))$. By assumption the map $\varphi$ is surjective, therefore $G=\overline{\varphi}(L/L^\circ)={\rm Nor}_{G}(\bar{\varphi}(K_0))$, so $\bar{\varphi}(K_0)$ is normal in $G$. We now set $N_0:=\bar{\varphi}(K_0)$. \textit{Step B.3: Get the desired result for $\varphi$.}\\ The subgroup $\overline{\varphi}^{-1}(\overline{\varphi}(K_0))$ is an open normal subgroup in $L/L^\circ$. Therefore $N:=\pi^{-1}(\overline{\varphi}^{-1}(\overline{\varphi}(K_0)))$ is an open normal subgroup of $L$, due to the continuity of $\pi$, and $\varphi(N)=\overline{\varphi}(K_0)$, which is torsion. This concludes the proof if $G$ satisfies conditions (i)-(iii). If $G$ additionally satisfies condition (iv), then $\varphi(N)$ has to be trivial. But then $N\subseteq \ker(\varphi)$, so the kernel is open and $\varphi$ is continuous. \end{proof} If $L$ is an \textit{almost connected} locally compact group (that is, a locally compact group where $L/L^\circ$ is compact) we can strengthen Theorem B: \begin{corollary} If $G$ satisfies conditions (i) and (ii) of Theorem B and \textbf{abelian} torsion subgroups are artinian, then for any homomorphism $\varphi\colon L\to G$ from an almost connected locally compact group $L$ to $G$ the image $\varphi(L)$ is a torsion group. \end{corollary} \begin{proof} Analogously to Step 1 of the proof of Theorem B we see that $\varphi(L^\circ)$ is trivial. Since $L/L^\circ$ is compact, the image $\bar{\varphi}(L/L^\circ)$ is torsion by Lemma \ref{CompactTorsion}. But then $\varphi(L)=\bar{\varphi}(L/L^\circ)$ is also torsion.
\end{proof} In particular, every homomorphism from an almost connected locally compact group into a ${\rm CAT}(0)$ group has torsion image (for the proof see the proof of Proposition 5.2 (3)). \section{On discrete groups with properties (i)-(iii) of Theorem B} In this last section we collect many groups that arise in geometric group theory and satisfy the conditions (i)-(iii) of Theorem B. Let $\mathcal{G}$ denote the class of all groups $G$ with the following three properties: \begin{enumerate} \item[(i)] $G$ does not include $\mathbb{Q}$ or the $p$-adic integers $\mathbb{Z}_p$ for any prime $p$ as a subgroup, \item[(ii)] $G$ does not include the Pr\"ufer $p$-group $\mathbb{Z}(p^\infty)$ for any prime $p$ as a subgroup and \item[(iii)] torsion subgroups in $G$ are artinian. \end{enumerate} The class $\mathcal{G}$ is closed under taking subgroups. Furthermore, this class is closed under taking finite graph products of groups and group extensions, which we will show in Section 5.3. An algebraic property of a group that prohibits the existence of subgroups isomorphic to $\mathbb{Q}$ or $\mathbb{Z}(p^\infty)$ is residual finiteness. We recall that a group $G$ is said to be \textit{residually finite} if for any $g\in G-\left\{1_G\right\}$ there exists a finite group $F_g$ and a group homomorphism $f\colon G\to F_g$ such that $f(g)\neq 1$. It follows from the definition that any subgroup of a residually finite group is also residually finite. \begin{lemma} \label{ResFinite} Let $G$ be a group. If $G$ is residually finite, then $G$ does not have non-trivial divisible subgroups. In particular, a residually finite group does not contain $\mathbb{Q}$ or the Pr\"ufer $p$-group $\mathbb{Z}(p^\infty)$ for any prime $p$ as a subgroup. \end{lemma} \begin{proof} Assume that $G$ has a non-trivial divisible subgroup $D$. For $g\in D-\left\{1_G\right\}$ there exists a finite group $F$ and a group homomorphism $f\colon D\to F$ such that $f(g)\neq 1_F$.
Since $D$ is divisible, there exists $d\in D$ such that $d^{{\rm ord}(F)}=g$, where ${\rm ord}(F)$ is the cardinality of the group $F$. We obtain $$f(g)=f(d^{{\rm ord}(F)})=f(d)^{{\rm ord}(F)}=1_F.$$ This contradicts the fact that $f(g)\neq 1_F$, thus $D$ is trivial. \end{proof} Our goal is to convince the reader that the class $\mathcal{G}$ is huge and contains many geometric groups, i.e. groups with nice actions on injective metric spaces or Gromov-hyperbolic spaces or ${\rm CAT}(0)$ spaces. For a definition of a metrically injective group see Subsection 5.1 and for definitions and further properties of Gromov-hyperbolic groups and ${\rm CAT}(0)$ groups we refer to \cite{BridsonHaefliger}. \begin{proposition} \label{classG} If $G$ is \begin{enumerate} \item a virtually $lcH$-slender group, \item a cocompactly cubulated ${\rm CAT}(0)$ group, \item a ${\rm CAT}(0)$ group whose torsion subgroups are artinian, e.g. a Coxeter group, \item a metrically injective group whose torsion subgroups are artinian, e.g. a Helly group whose torsion subgroups are artinian, \item a Gromov-hyperbolic group, \item a finitely generated residually finite group whose torsion subgroups are artinian, e.g. the (outer) automorphism group of a right-angled Artin group, \item a one-relator group, \item a finitely generated linear group in characteristic $0$, \item the Higman group, \end{enumerate} then $G$ is in the class $\mathcal{G}$. \end{proposition} \begin{proof} First of all we note that the groups in (2)-(9) are all finitely generated, in particular countable. Thus these groups cannot have $\mathbb{Z}_p$ as a subgroup, since this group is uncountable. \textit{to (1):} Let $G$ be a virtually $lcH$-slender group. By Corollary A the group $G$ does not include $\mathbb{Q}$ or $\mathbb{Z}_p$ for any prime $p$ as a subgroup. Further, the group $G$ is virtually torsion-free, in particular $G$ has a normal torsion-free subgroup $H$ of finite index. Let $T\subseteq G$ be a torsion subgroup.
We consider the map $\pi\circ\iota \colon T\hookrightarrow G\twoheadrightarrow G/H$. Since $H$ is torsion-free, this map $\pi\circ\iota$ is injective and therefore $T$ has to be finite. Thus, the group $G$ is in the class $\mathcal{G}$. \textit{to (2) and (3):} It is known that an abelian subgroup of a ${\rm CAT}(0)$ group is finitely generated, see \cite[Thm. I.4.1]{Davis}, thus a ${\rm CAT}(0)$ group does not have $\mathbb{Q}$ as a subgroup. Further, a ${\rm CAT}(0)$ group has only finitely many conjugacy classes of finite subgroups \cite[Thm. I.4.1]{Davis}, hence there is a bound on the order of finite order elements in a ${\rm CAT}(0)$ group and therefore this group cannot have $\mathbb{Z}(p^\infty)$ as a subgroup. Furthermore, it is known that torsion subgroups in Coxeter groups are finite \cite[Prop. 7.4]{KramerVarghese}. Finiteness of torsion subgroups of cocompactly cubulated groups follows from \cite[Cor. G]{CapraceSageev}.\\ Despite the fact that ${\rm CAT}(0)$ groups are very popular, the question regarding the finiteness of torsion subgroups of general ${\rm CAT}(0)$ groups is still open. \textit{to (4):} See Section 5.1, in particular Lemma \ref{divisible}, Lemma \ref{torsion} and Corollary \ref{Helly}. \textit{to (5):} Let $G$ be a Gromov-hyperbolic group. It is known that torsion subgroups of Gromov-hyperbolic groups are finite \cite[Chap. 8 Cor. 36]{Ghys}, thus $\mathbb{Z}(p^\infty)$ is not a subgroup of $G$ and any torsion subgroup of $G$ is artinian. Further, it was proven in \cite[Cor. 6.8]{Helly} that any Gromov-hyperbolic group is a Helly group. Thus, by (4) the group $G$ is in the class $\mathcal{G}$. \textit{to (6):} By Lemma \ref{ResFinite} we know that a residually finite group does not include $\mathbb{Q}$ or $\mathbb{Z}(p^\infty)$ as a subgroup. Hence, a finitely generated residually finite group whose torsion subgroups are artinian is in the class $\mathcal{G}$.
Given a right-angled Artin group $A_\Gamma$, there exist a number $m\in\mathbb{N}$ and an injective group homomorphism $A_\Gamma\hookrightarrow{\rm GL}_m(\mathbb{R})$ \cite[Cor. 3.6]{HsuWise}, thus any right-angled Artin group is a finitely generated linear group and as such it is residually finite \cite{Malcev}. Furthermore, it was proven in \cite{Baumslag} that the automorphism group of a finitely generated residually finite group is also residually finite, thus ${\rm Aut}(A_\Gamma)$ is residually finite. It was proven in \cite[Thm. 4.2]{CVout} that the outer automorphism group of a right-angled Artin group is also residually finite. Hence, by Lemma \ref{ResFinite} the (outer) automorphism group of a right-angled Artin group does not include $\mathbb{Q}$ or $\mathbb{Z}(p^\infty)$ as a subgroup. Torsion subgroups in the (outer) automorphism group of a right-angled Artin group are finite, which follows from the fact that this group is virtually torsion-free, as we already mentioned before. \textit{to (7):} Given a one-relator group $G$, there are two possibilities: either $G$ has torsion elements or $G$ is torsion-free. In the first case it was proven in \cite[Thm. IV.5.5]{Lyndon} that $G$ is hyperbolic, thus by (5) this group is in the class $\mathcal{G}$. If $G$ is torsion-free, then $G$ does not include $\mathbb{Q}$ or $\mathbb{Z}_p$ as a subgroup \cite[Thm. 1]{Newman}. \textit{to (8):} Let $G$ be a finitely generated linear group in characteristic $0$. It was proven in \cite{Selberg} that $G$ is virtually torsion-free, hence torsion subgroups of $G$ are always finite. Further, it is known that $G$ is residually finite \cite{Malcev}, thus by (6) this group is in the class $\mathcal{G}$. \textit{to (9):} The Higman group was defined in \cite{Higman} and is an amalgam of two torsion-free groups \cite[Thm. IV.2.7]{Lyndon}, thus it is torsion-free by \cite[\S 6.2 Prop. 21]{Serre}.
Further, this group does not include $\mathbb{Q}$ as a subgroup, which follows from \cite[Thm. E]{Martin}. \end{proof} \subsection{Geometric groups} To prove the fact that a metrically injective group $G$ is in the class $\mathcal{G}$, we first need to gather some information about these groups. The goal is to briefly introduce the framework in which these groups are studied, state the results we need in order to prove Proposition \ref{classG} (4) and give its proof. Let $(X,d)$ be a metric space. For an isometry $f\colon X\to X$ the \textit{translation length of }$f$, denoted by $|f|$, is defined as $|f|:={\rm inf}\left\{d(x,f(x))\mid x\in X\right\}$. We have the following general classification of the isometries of a metric space $X$ in terms of ${\rm Min}(f):=\left\{x\in X\mid d(x, f(x))=|f|\right\}$: An isometry $f$ is said to be \textit{parabolic} if ${\rm Min}(f)=\emptyset$ and \textit{semi-simple} otherwise. In the latter case, $f$ is \textit{elliptic} if for any $x\in X$ the subset $\left\{f^n(x)\mid n\in \mathbb{N}\right\}\subseteq X$ is bounded and \textit{hyperbolic} if it is unbounded. For a group $G$ and a metric space $X$, a group action $\Phi\colon G\to{\rm Isom}(X)$ is called \textit{proper} if for each $x\in X$ there exists a real number $r>0$ such that the set $\left\{g\in G\mid \Phi(g)(B_r(x))\cap B_r(x)\neq\emptyset\right\}$ is finite, and $\Phi$ is called \textit{cocompact} if there exists a compact subset $K\subseteq X$ such that $\Phi(G)(K)=X$. In geometric group theory it is common to work with actions that have both properties; such an action is called \textit{geometric}. The main idea of geometric group theory is, given a group $G$, to construct a geometric action on a metric space $X$ with rich geometry and to use this geometry to prove algebraic results about the group $G$.
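To fix ideas, we include a simple illustration of this classification (our own example, not needed in the sequel). For $X=\mathbb{R}^2$ with the Euclidean metric, a translation $f(x)=x+v$ with $v\neq 0$ satisfies $d(x,f(x))=\|v\|$ for every $x$, so
$$|f|=\|v\|>0,\qquad {\rm Min}(f)=\mathbb{R}^2,$$
and the orbit $\left\{f^n(x)=x+nv\mid n\in\mathbb{N}\right\}$ is unbounded, so $f$ is hyperbolic; note also that $|f^n|=n\|v\|=n\cdot|f|$. A rotation $r$ about the origin satisfies $|r|=0$ with $0\in{\rm Min}(r)$ and has bounded orbits, so $r$ is elliptic. Parabolic isometries do not occur in $\mathbb{R}^2$, but they do on the hyperbolic plane $\mathbb{H}^2$: the isometry $z\mapsto z+1$ of the upper half-plane has $|f|=0$, since $d(iy,iy+1)\to 0$ as $y\to\infty$, yet the infimum is never attained, so ${\rm Min}(f)=\emptyset$.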
Here, the spaces will be injective metric spaces, while the algebraic property of the groups will be the occurrence of subgroups isomorphic to $\mathbb{Q}$, the $p$-adic integers $\mathbb{Z}_p$ or the Pr\"ufer $p$-group $\mathbb{Z}(p^\infty)$. Before proving Proposition \ref{classG} (4) we need to discuss a few useful results. We first need to study groups that are \textit{almost divisible}, that is, there is an unbounded sequence $(n_i)_{i\in \mathbb{N}}$ of natural numbers such that the group is $n_i$-divisible for each $n_i$. We note that the groups $\mathbb{Q}$, the $p$-adic integers $\mathbb{Z}_p$ and the Pr\"ufer $p$-group $\mathbb{Z}(p^\infty)$ are examples of almost divisible groups. \begin{proposition} \label{toolQ} Let $\Phi\colon G\to{\rm Isom}(X)$ be a geometric action on a metric space $X$. If for any hyperbolic isometry $\Phi(g)\in \Phi(G)$ we have $|\Phi(g)^n|\geq n\cdot |\Phi(g)|$ for all $n\in\mathbb{N}$, then any almost divisible subgroup $H\subseteq G$ is a torsion group. \end{proposition} \begin{proof} First of all we note that every isometry in $\Phi(G)$ is semi-simple \cite[Thm. II.6.2.10]{BridsonHaefliger}. Further, it was shown in \cite[Thm. II.6.2.10]{BridsonHaefliger} that if $G$ acts geometrically on a metric space $X$, then the infimum of the translation lengths of hyperbolic isometries is always positive. Let $H\subseteq G$ be an almost divisible subgroup. Our goal is to prove that for $h\in H$ the order of $h$ has to be finite. First, we can see that if ${\rm ord}(\Phi(h))< \infty$, then ${\rm ord}(h)<\infty$ since the action is proper, so in that case $h$ is a torsion element. If ${\rm ord}(\Phi(h))=\infty$, then $|\Phi(h)|>0$, since otherwise the action would not be proper. Now $H$ is almost divisible, so there exists an unbounded sequence $(n_i)_{i\in \mathbb{N}}$ such that there exist elements $d_i\in H$ with $h=d_i^{n_i}$.
By assumption we have $$|\Phi(h)|=|\Phi(d_i^{n_i})|=|\Phi(d_i)^{n_i}|\geq n_i\cdot |\Phi(d_i)|.$$ But since the sequence $(n_i)_{i\in\mathbb{N}}$ is unbounded, this means that the sequence of translation lengths $(|\Phi(d_i)|)_{i\in \mathbb{N}}$ converges to zero, which in turn contradicts the fact that the infimum of translation lengths of hyperbolic isometries is positive. \end{proof} The result of Proposition \ref{toolQ} gives us a tool to prove that a group of interest does not include $\mathbb{Q}$ or the $p$-adic integers $\mathbb{Z}_p$ as a subgroup. \begin{remark}\label{finEmb} The inequality condition on the translation lengths of hyperbolic isometries is necessary, since any finitely generated group $G$ acts geometrically on its Cayley-graph ${\rm Cay}(G,S)$ where $S$ is a finite generating set of $G$. There are finitely generated groups that contain divisible torsion-free subgroups, e.g. $\mathbb{Q}$, as can be found in \cite[Thm. IV]{Neumann}; there even exist finitely presented groups that contain $\mathbb{Q}$ \cite[Thm. 1.4, Prop. 1.10]{finitelypresented}. \end{remark} Now we are interested in actions on metric spaces which satisfy the condition on hyperbolic isometries from Proposition \ref{toolQ}. One possibility for this is to study actions on injective metric spaces. We say a metric space $(X,d)$ is \textit{injective} if it is an injective object in the category of metric spaces and $1$-Lipschitz maps. Examples of injective metric spaces are $\mathbb{R}$-trees or more generally finite dimensional ${\rm CAT}(0)$ cube complexes with the $\ell^{\infty}$ metric (see \cite{Bowditch} for more details). Following \cite{Haettel}, we call a group $G$ \textit{metrically injective} if it acts geometrically on an injective metric space. A very important tool for studying these spaces is the notion of a \textit{bicombing}.
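Before turning to bicombings, we illustrate injectivity with a standard example (included here only for convenience): the real line $\mathbb{R}$ is an injective metric space. Indeed, if $A\subseteq Y$ is a subspace of a metric space $Y$ and $f\colon A\to\mathbb{R}$ is $1$-Lipschitz, then
$$F(y):=\inf_{a\in A}\bigl(f(a)+d_Y(y,a)\bigr),\qquad y\in Y,$$
defines a $1$-Lipschitz extension of $f$ to all of $Y$ (the McShane extension), so every $1$-Lipschitz map into $\mathbb{R}$ extends, which is exactly the injectivity property. Applying this argument coordinatewise shows that $(\mathbb{R}^n,\ell^\infty)$ is injective as well.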
Given a geodesic metric space $(X,d)$, we call a map $\sigma\colon X\times X\times [0,1]\to X$ a \textit{bicombing} if the family of maps $\sigma_{xy}:=\sigma(x, y, \cdot)\colon [0,1]\to X $ satisfies the following three properties: \begin{itemize} \item $\sigma_{xy}$ is a constant speed geodesic from $x$ to $y$, that is $\sigma_{xy}(0)=x$, $\sigma_{xy}(1)=y$ and $d(\sigma_{xy}(s), \sigma_{xy}(t))=|t-s|\, d(x,y)$ for $s, t\in [0,1]$ and $x,y\in X$. \item $\sigma_{yx}(t)=\sigma_{xy}(1-t)$ for $t\in [0,1]$ and $x,y\in X$. \item $d(\sigma_{xy}(t), \sigma_{vw}(t))\leq (1-t)d(x,v)+td(y,w)$ for $t\in [0,1]$ and $x,y,v,w\in X$. \end{itemize} Given an isometry $\gamma\in {\rm Isom}(X)$, we say the bicombing $\sigma$ is $\gamma$\textit{-equivariant} if $\gamma\left(\sigma(x,y,t)\right)=\sigma(\gamma(x),\gamma(y),t)$ for all $x,y\in X$ and $t\in [0,1]$. It was proven in \cite[Prop. 3.8]{Lang} that an injective metric space $X$ always admits an ${\rm Isom}(X)$-equivariant bicombing. We now want to construct ``barycenter'' maps for injective metric spaces in order to prove that the inequality for translation lengths of hyperbolic isometries from the previous proposition holds in injective metric spaces. Descombes and Lang observed in \cite{bicombings} that the barycenter maps given in \cite{Navas} for Busemann spaces translate to injective metric spaces. Here we include the complete construction of these maps. \begin{lemma} \label{bar} Let $X$ be an injective metric space.
\begin{enumerate} \item For every $n\in\mathbb{N}$ there exists a map ${\rm bar}_n\colon X^n\to X$ such that for all $x_1,\ldots,x_n,y_1,\ldots,y_n\in X$: \begin{enumerate} \item $d({\rm bar}_n(x_1, x_2, \ldots, x_n), {\rm bar}_n(y_1, y_2, \ldots, y_n))\leq\frac{1}{n}\sum^n_{i=1} d(x_i, y_i)$, \item ${\rm bar}_n(x_1, x_2, \ldots, x_n)={\rm bar}_n(x_{\pi(1)}, x_{\pi(2)}, \ldots, x_{\pi(n)})$ for every $\pi\in{\rm Sym}(n)$ and \item $\gamma({\rm bar}_n(x_1, x_2, \ldots, x_n))={\rm bar}_n(\gamma(x_1), \gamma(x_2),\ldots, \gamma(x_n))$ for every isometry $\gamma\in{\rm Isom}(X)$. \end{enumerate} \item Let $\varphi\in {\rm Isom}(X)$ be an isometry. For every $n\in\mathbb{N}$ and $x\in X$, the inequality $|\varphi| \leq\frac{1}{n} d(x, \varphi^n(x))$ holds. \end{enumerate} \begin{proof} Let $X$ be an injective metric space and $\sigma$ be an ${\rm Isom}(X)$-equivariant bicombing. We first define ${\rm bar}_1\colon X\to X$ as ${\rm bar}_1(x):=x$ for $x\in X$ and ${\rm bar}_2\colon X\times X\to X$ as ${\rm bar}_2(x,y):=\sigma_{xy}(\frac{1}{2})$ for $x,y\in X$. The maps ${\rm bar}_1$ and ${\rm bar}_2$ obviously satisfy conditions (a), (b) and (c). Now assuming that ${\rm bar}_n$ has been defined and satisfies the properties (a), (b) and (c), we define ${\rm bar}_{n+1}(x_1,\ldots,x_{n+1})$ as follows: For a tuple $z=(z_1,z_2,\ldots,z_n, z_{n+1})\in X^{n+1}$ we set $\hat{z}^{(i)}:=(z_1,z_2,\ldots,z_{i-1},z_{i+1},\ldots,z_n, z_{n+1})$. We now define a sequence $(y_k)_{k\in\mathbb{N}}$ where each $y_k=(y_{1k}, \ldots, y_{(n+1)k})\in X^{n+1}$. We start with $y_1:=(x_1, \ldots,x_{n+1})$. For $k\geq 2$, $y_{k}$ is defined by recursively applying ${\rm bar}_n$ to $\hat{y}^{(i)}_{k-1}$. More precisely: $$y_{ik}:={\rm bar}_n(\hat{y}^{(i)}_{k-1})\text{ for }i\in\{1,\ldots,n+1\}.$$ One can show that each sequence $\left(y_{ik}\right)_{k\in \mathbb{N}}$ for $i=1, \ldots, n+1$ is a Cauchy sequence and therefore convergent since $X$ is complete (see \cite[Section 1]{Navas}).
Additionally, one can show that $\lim_{k\to \infty}y_{ik}=\lim_{k\to \infty}y_{jk}$ for all $i,j\in \{1,\ldots,n+1\}$. Therefore we can define ${\rm bar}_{n+1}(x_1,\ldots,x_{n+1}):=\lim_{k\to \infty}y_{ik}$ for some $i\in\{1,\ldots,n+1\}$; the limit is independent of the choice of $i$, as can also be found in \cite[Section 1]{Navas}. The fact that properties (b) and (c) are satisfied is immediate from the construction; checking property (a) requires considerably more work and is carried out carefully in \cite[Section 1]{Navas}. As an example, we can visualize the construction of ${\rm bar}_4(x)$ with $x=(x_1,x_2,x_3,x_4)$, $x_i\in(\mathbb{R}^2,\ell^\infty)$ as follows. \begin{center} \begin{tikzpicture}[scale=0.95] \coordinate[label=right: {$x_1$}] (A) at (5,0); \coordinate[label=above: {$x_2$}] (B) at (0,3); \coordinate[label=left: {$x_3$}] (C) at (-5,0); \coordinate[label=below: {$x_4$}] (D) at (0,-3); \fill (A) circle (2pt); \fill (B) circle (2pt); \fill (C) circle (2pt); \fill (D) circle (2pt); \fill[color=blue] (-2.2,0) circle (2pt); \draw[color=blue] (-2.2,0) -- (-2.2,0) node[left]{${\rm bar}_3(\hat{x}^{(1)})$}; \fill[color=blue] (2.2,0) circle (2pt); \draw[color=blue] (2.2,0) -- (2.2,0) node[right]{${\rm bar}_3(\hat{x}^{(3)})$}; \fill[color=blue] (0,1.2) circle (2pt); \draw[color=blue] (0,1.2) -- (0,1.2) node[above]{${\rm bar}_3(\hat{x}^{(4)})$}; \fill[color=blue] (0,-1.2) circle (2pt); \draw[color=blue] (0,-1.2) -- (0,-1.2) node[below]{${\rm bar}_3(\hat{x}^{(2)})$}; \fill[color=dgreen] (-1,0) circle (2pt); \draw[color=dgreen] (-1,0) -- (-1,0) node[above]{${\rm bar}_3(\hat{y}_1^{(1)})$}; \fill[color=dgreen] (1,0) circle (2pt); \draw[color=dgreen] (1,0) -- (1,0) node[above]{${\rm bar}_3(\hat{y}_1^{(3)})$}; \fill[color=dgreen] (0,0.5) circle (2pt); \draw[color=dgreen] (0,0.5) -- (0,0.5) node[above]{${\rm bar}_3(\hat{y}_1^{(4)})$}; \fill[color=dgreen] (0,-0.5) circle (2pt); \draw[color=dgreen] (0,-0.5) -- (0,-0.5) node[below]{${\rm bar}_3(\hat{y}_1^{(2)})$}; \fill[color=dgreen] (-0.1,0)
circle (1pt); \fill[color=dgreen] (-0.22,0) circle (1pt); \fill[color=dgreen] (-0.4,0) circle (1pt); \fill[color=dgreen] (0.1,0) circle (1pt); \fill[color=dgreen] (0.22,0) circle (1pt); \fill[color=dgreen] (0.4,0) circle (1pt); \fill[color=dgreen] (0,0.1) circle (1pt); \fill[color=dgreen] (0,0.3) circle (1pt); \fill[color=dgreen] (0,0.18) circle (1pt); \fill[color=dgreen] (0,-0.1) circle (1pt); \fill[color=dgreen] (0,-0.3) circle (1pt); \fill[color=dgreen] (0,-0.18) circle (1pt); \fill[color=red] (0,0) circle (2pt); \draw[color=red] (0.9,-0.65)--(0.9,-0.65) node[above]{${\rm bar}_4(x)$}; \end{tikzpicture} \end{center} For the second part let $\varphi\in{\rm Isom}(X)$ be an isometry and $n\in\mathbb{N}$. For $x\in X$ we have: \begin{align*} |\varphi| &\overset{\text{Def.}}{=}\inf\left\{d(y,\varphi(y))\mid y\in X\right\} \\ &\leq d({\rm bar}_n(x, \varphi(x), \varphi^2(x), \ldots, \varphi^{n-1}(x)), \varphi({\rm bar}_n(x, \varphi(x), \varphi^2(x), \ldots, \varphi^{n-1}(x)))) \\ &\overset{(1c)}{=} d( {\rm bar}_n(x, \varphi(x), \varphi^2(x), \ldots, \varphi^{n-1}(x)) ,{\rm bar}_n(\varphi(x), \varphi^2(x), \varphi^3(x), \ldots, \varphi^{n}(x))) \\ &\overset{(1b)}{=} d( {\rm bar}_n(x, \varphi(x), \varphi^2(x), \ldots, \varphi^{n-1}(x)), {\rm bar}_n(\varphi^n(x), \varphi(x), \varphi^{2}(x), \ldots, \varphi^{n-1}(x))) \\ &\overset{(1a)}{\leq} \frac{1}{n}\cdot( d(x, \varphi^n(x))+d(\varphi(x), \varphi(x))+\ldots+d(\varphi^{n-1}(x), \varphi^{n-1}(x))) \\ &=\frac{1}{n} d(x, \varphi^n(x)). \end{align*} \end{proof} We note that the statement of the following lemma follows from various results of \cite{bicombings}. For the sake of completeness we give its proof here, following the ideas of \cite{bicombings}. \begin{lemma}\label{divisible} Suppose $G$ is a metrically injective group and $H\leq G$ a subgroup. If $H$ is almost divisible, then $H$ is a torsion group. In particular, a metrically injective group has neither $\mathbb{Q}$ nor $\mathbb{Z}_p$ as a subgroup.
\end{lemma} \begin{proof} Let $\Phi\colon G\to {\rm Isom}(X)$ be a geometric action of $G$ on an injective metric space $X$. Let $g\in G$ be such that $\Phi(g)$ is a hyperbolic isometry and let $n\in\mathbb{N}$. By Lemma \ref{bar}, for $x\in{\rm Min}(\Phi(g)^n)$ the inequality $$|\Phi(g)| \leq\frac{1}{n} d(x, \Phi(g)^n(x))=\frac{1}{n}|\Phi(g)^n|$$ holds, hence $n\cdot |\Phi(g)| \leq |\Phi(g)^n|$. It follows by Proposition \ref{toolQ} that any almost divisible subgroup of $G$ is a torsion group. Since both $\mathbb{Q}$ and the $p$-adic integers $\mathbb{Z}_p$ are (almost) divisible and torsion-free, the group $G$ has neither $\mathbb{Q}$ nor $\mathbb{Z}_p$ as a subgroup. \end{proof} We want to point out that an injective metric space is contractible \cite[Chap. 2]{Bowditch} and geodesic \cite[\S 2]{Lang}. Thus, a metrically injective group is finitely presented due to \cite[Cor. I.8.11]{BridsonHaefliger}. Therefore it cannot contain $\mathbb{Z}_p$, since this group is uncountable. However, as we noted in Remark \ref{finEmb}, this alone is not enough to prevent the existence of subgroups isomorphic to $\mathbb{Q}$. \begin{lemma}\label{torsion} The order of elements in torsion subgroups of a metrically injective group is bounded. In particular, a metrically injective group does not contain the Pr\"ufer $p$-group $\mathbb{Z}(p^\infty)$ for any prime $p$ as a subgroup. \end{lemma} \begin{proof} We show that $G$ has finitely many conjugacy classes of finite subgroups. We first use \cite[Prop. 1.2]{Lang} to see that any finite subgroup of $G$ fixes a point. Then we can use \cite[Prop I.8.5]{BridsonHaefliger} to obtain that there are only finitely many conjugacy classes of isotropy subgroups; these are all finite due to the properness of the action. Every torsion element of $G$ is mapped into such an isotropy subgroup, so the order of its image is bounded; since the kernel of the action is also finite by properness, the order of the element itself is bounded as well.
Since the order of elements in the Pr\"ufer $p$-group $\mathbb{Z}(p^\infty)$ is unbounded, a metrically injective group cannot have this group as a subgroup. \end{proof} Helly graphs are discrete versions of injective metric spaces. More precisely, a connected graph is called \textit{Helly} if any family of pairwise intersecting combinatorial balls has a non-empty global intersection. A group $G$ is called \textit{Helly} if it acts geometrically by simplicial isometries on a Helly graph. For example, all Gromov-hyperbolic groups are Helly, as are all groups acting geometrically on a ${\rm CAT}(0)$ cube complex, see \cite[Prop. 6.1, Cor. 6.8]{Helly}. Since a Helly group is also metrically injective \cite[Thm. 1.5]{Helly}, we obtain \begin{corollary} \label{Helly} A Helly group contains none of the groups $\mathbb{Q}$, $\mathbb{Z}_p$ and $\mathbb{Z}(p^\infty)$ for any prime $p$ as a subgroup. \end{corollary} \subsection{The class $\mathcal{G}$ and graph products} In this subsection we show that the class $\mathcal{G}$ is closed under taking extensions and graph products. The following lemma shows that the property of a group to have only artinian torsion subgroups is inherited under taking extensions. \begin{lemma}\label{artinian} Let $\left\{1\right\}\to A\overset{\iota}{\to} B\overset{\pi}{\to} C\to\left\{1\right\}$ be a short exact sequence of groups. If the torsion subgroups of $A$ and $C$ are artinian, then the torsion subgroups of $B$ are also artinian. In particular, if $S,T$ are two groups whose torsion subgroups are artinian, then the torsion subgroups of $S\times T$ are also artinian. \end{lemma} \begin{proof} For a torsion subgroup $H$ of $B$ we have a short exact sequence of torsion groups $$\left\{1\right\}\rightarrow H\cap \iota(A)\rightarrow H\rightarrow \pi(H)\rightarrow\left\{1\right\}.$$ By assumption the torsion groups $H\cap\iota(A)$ and $\pi(H)$ are artinian. Further, it is known that being artinian is preserved under taking extensions \cite[Thm. 7.3]{Olshanskii}.
Hence the group $H$ is artinian. For the ``in particular'' statement, we consider the exact sequence $$\left\{1\right\}\to S\to S\times T\to (S\times T)/S\to\left\{1\right\},$$ where the torsion subgroups of $S$ and of $(S\times T)/S\cong T$ are artinian. Thus the torsion subgroups of $S\times T$ are also artinian by the previous statement. \end{proof} \begin{proposition}\label{Extensions} The class $\mathcal{G}$ is closed under taking extensions. \end{proposition} \begin{proof} Let $\left\{1\right\}\rightarrow A\overset{\iota}{\rightarrow} B\overset{\pi}{\rightarrow} C\rightarrow \left\{1\right\}$ be an exact sequence of groups where $A$ and $C$ are in the class $\mathcal{G}$. Our goal is to show that $B$ is also in the class $\mathcal{G}$. Suppose that $\mathbb{Q}$ is a subgroup of $B$. Since $\mathbb{Q}$ is divisible, the image of $\mathbb{Q}$ under $\pi$ is also an abelian divisible group. By assumption the group $C$ is contained in the class $\mathcal{G}$ and therefore any abelian divisible subgroup of $C$ is trivial by Theorem \ref{structureTheorem}. Hence $\mathbb{Q}\subseteq \ker(\pi)=\iota(A)\cong A$, which contradicts the fact that the group $A$ is in the class $\mathcal{G}$. The same argument applies to the group $\mathbb{Z}(p^\infty)$, since this group is also divisible. Suppose now that $\mathbb{Z}_p$ is a subgroup of $B$. The kernel of $\pi_{\mid\mathbb{Z}_p}\colon\mathbb{Z}_p\to C$ is non-trivial since the group $C$ is in the class $\mathcal{G}$. Thus, by Lemma \ref{QuotientZp} we know that $\pi(\mathbb{Z}_p)\cong T\times D$ where $T$ is a finite cyclic group of $p$-power order and $D$ is an abelian divisible group. By assumption the group $C$ is in the class $\mathcal{G}$ and therefore $D$ has to be trivial. Thus $\ker(\pi_{\mid\mathbb{Z}_p})\subseteq \ker(\pi)=\iota(A)\cong A$ is an open subgroup of $p$-power index of $\mathbb{Z}_p$ by \cite[Thm. 5.2]{Klopsch}. It is therefore also a closed subgroup and thus is isomorphic to $\mathbb{Z}_p$ by \cite[Prop.
2.7(b)]{Profinite}, contradicting the assumption that $A$ is in the class $\mathcal{G}$. By assumption the torsion subgroups of $A$ and $C$ are artinian, thus by Lemma \ref{artinian} the torsion subgroups of $B$ are artinian too. Hence the group $B$ is in the class $\mathcal{G}$. \end{proof} Now we show that the class $\mathcal{G}$ is closed under taking graph products. We do this by geometric means, inspired by \cite{KramerVarghese}. \begin{definition} Given a finite simplicial graph $\Gamma=(V, E)$ and a collection of groups $\{ G_u \mid u \in V\}$, the \emph{graph product} $G_\Gamma$ is defined as the quotient $$({\ast}_{u\in V} G_u) / \langle \langle [G_v,G_w]\text{ for }\{v,w\}\in E \rangle \rangle.$$ \end{definition} Given a graph product $G_\Gamma$, there exists a finite dimensional right-angled building $X_\Gamma$ on which $G_\Gamma$ acts isometrically \cite[Thm. 5.1]{DavisBuildings}, see also \cite[Section 3.5]{KramerVarghese}. \begin{proposition}\label{Building} If a subgroup $H\leq G_\Gamma$ of a graph product $G_\Gamma$ acts locally elliptically on the associated building $X_\Gamma$, i.e., each element has a fixed point, then $H$ has a global fixed point and $H$ is contained in a point-stabilizer, which has the form $gG_\Delta g^{-1}$ for a maximal clique $\Delta$ in $\Gamma$ and some $g\in G_\Gamma$. \end{proposition} \begin{proof} This can be found in \cite[Section 3.5]{KramerVarghese} and \cite[Lemma 3.6]{KramerVarghese}. \end{proof} We are now ready to prove our final proposition. \begin{proposition}\label{ClassG} The class $\mathcal{G}$ is closed under taking graph products. \end{proposition} In particular, any finite direct or free product of the groups named in Corollary C provides another example to which Theorem B can be applied. \begin{proof} Given a graph product $G_\Gamma$, let $X_\Gamma$ denote the associated finite dimensional right-angled building.
Such a building is a ${\rm CAT}(0)$ cube complex \cite[Thm. 5.1, Thm. 11.1]{DavisBuildings} and therefore we can apply \cite{Bridson} to see that every isometry of $X_\Gamma$ is semi-simple and that the infimum of translation lengths of hyperbolic isometries is positive. Now suppose that $\mathbb{Q}$ is not a subgroup of any of the vertex groups $G_v$ but is a subgroup of $G_\Gamma$. Then $\mathbb{Q}$ acts on $X_\Gamma$ via semi-simple isometries. Since $X_\Gamma$ is ${\rm CAT}(0)$ we can apply \cite[Thm 2.5, Claim 7]{Caprace Marquis} to see that the action of $\mathbb{Q}$ has to be locally elliptic (since the infimum of translation lengths of hyperbolic isometries is positive). Therefore we can apply Proposition \ref{Building} to see that $\mathbb{Q}$ is contained in a vertex stabilizer. Since stabilizers are conjugates of subgroups $G_\Delta$ and conjugation is an isomorphism of groups, there exists a complete subgraph $\Delta$ such that $\mathbb{Q}\subseteq G_\Delta=G_{v_1}\times G_{v_2}\times\ldots\times G_{v_n}$ for some $n\in\mathbb{N}$. We then obtain maps $\pi_i\colon \mathbb{Q}\to G_{v_i}$ for $i\in\{1,\ldots,n\}$ by taking quotients: $\pi_i$ is the canonical quotient map $G_\Delta\to G_\Delta\big/\left(G_{v_1}\times\ldots\times G_{v_{i-1}}\times G_{v_{i+1}}\times\ldots\times G_{v_n}\right)\cong G_{v_i}$. Each $\pi_i(\mathbb{Q})$ is an abelian divisible subgroup of $G_{v_i}$ and hence needs to be trivial by Theorem \ref{structureTheorem}. But then $\mathbb{Q}$ itself would be trivial, a contradiction; so the assumption that $G_\Gamma$ contains $\mathbb{Q}$ was wrong.\\ The same argument holds for the Pr\"ufer $p$-groups.\\ For $\mathbb{Z}_p$ we apply a similar argument to conclude that $\mathbb{Z}_p\subseteq G_\Delta=G_{v_1}\times G_{v_2}\times\ldots\times G_{v_n}$ for some $n\in\mathbb{N}$. This is possible, since $\mathbb{Z}_p$ is $q$-divisible for every prime $q\neq p$. We again obtain maps $\pi_i\colon \mathbb{Z}_p\to G_{v_i}$ for $i\in\{1,\ldots,n\}$.
Since no $G_{v_i}$ contains $\mathbb{Z}_p$, each $\pi_i(\mathbb{Z}_p)$ needs to be a proper quotient of $\mathbb{Z}_p$, thus there are finite groups $T_i$ and abelian divisible groups $A_i$ with $\pi_i(\mathbb{Z}_p)\cong T_i\times A_i$ by Lemma \ref{QuotientZp}. As above, each $A_i$ has to be trivial. But then $\mathbb{Z}_p$ would embed into the finite group $T_1\times\ldots\times T_n$, which is a contradiction, since $\mathbb{Z}_p$ is infinite. Finally, to show that the torsion subgroups of $G_\Gamma$ are artinian, we reduce to the direct product case as follows: given a torsion subgroup $H$ of $G_\Gamma$, it acts locally elliptically on $X_\Gamma$, since every element has finite order. Thus, due to Proposition \ref{Building}, the torsion group $H$ is contained in $g (G_{v_1}\times \ldots \times G_{v_k}) g^{-1}$ for some vertex groups $G_{v_i}$, $1\leq i\leq k$, some $k\in \mathbb{N}$ and some $g\in G_\Gamma$. So $G_{v_1}\times \ldots \times G_{v_k}$ contains a subgroup isomorphic to $H$. Since all the torsion subgroups of the vertex groups are artinian, we can apply Lemma \ref{artinian} to see that the torsion subgroups of $G_{v_1}\times \ldots \times G_{v_k}$ are artinian too. Therefore $H$ is artinian as well, which is what we wanted to show. \end{proof} \end{document}
\begin{document} \begin{abstract} In this paper we compute Gr\"{o}bner bases for determinantal ideals of the form $I_{1}(XY)$, where $X$ and $Y$ are both matrices whose entries are indeterminates over a field $K$. We use the Gr\"{o}bner basis structure to determine Betti numbers for such ideals. \end{abstract} \maketitle \section{Introduction} Let $K$ be a field and $\{x_{ij}; \, 1\leq i \leq m, \, 1\leq j \leq n\}$, $\{y_{j}; \, 1\leq j \leq n\}$ be indeterminates over $K$. Let $K[x_{ij}]$ and $K[x_{ij}, y_{j}]$ denote the polynomial algebras over $K$. Let $X$ denote an $m\times n$ matrix whose entries belong to the ideal $\langle \{x_{ij}; \, 1\leq i \leq m, \, 1\leq j \leq n\}\rangle$. Let $Y=(y_{j})_{n\times 1}$ be the generic $n\times 1$ column matrix. Let $I_{1}(XY)$ denote the ideal generated by the $1\times 1$ minors, that is, the entries of the $m\times 1$ matrix $XY$. Ideals of the form $I_{1}(XY)$ appeared in the work of J. Herzog \cite{herzog} in 1974. These ideals are closely related to the notion of a Buchsbaum-Eisenbud variety of complexes. A characteristic-free study of these varieties can be found in \cite{concini}, where the defining equations of these varieties have been described as minors of matrices using the combinatorial structure of multitableaux. It has also been proved there that the varieties are Cohen-Macaulay and normal. The ideal $I_{1}(XY)$ is a special case of the defining ideal of a variety of complexes, when $n_{0} = m$, $n_{1} = n$, $n_{2} = 1$, in the notation of \cite{concini}. These ideals feature once again in \cite{tchernev}, in the study of the structure of a \textit{universal ring} of a \textit{universal pair} defined by Hochster. It has been proved in \cite{tchernev} that the set of standard monomials forms a free basis for the universal ring. The initial ideal of the defining ideal is given by the set of all nonstandard monomials, which form a monomial ideal.
A combination of Gr\"{o}bner basis techniques and representation theory techniques yields the results in \cite{tchernev}. We were not aware of this work when we computed a Gr\"{o}bner basis for the ideal $I_{1}(XY)$ using very elementary techniques. Our technique uses nothing more than Buchberger's criterion and the description of Gr\"{o}bner bases for the ideals of minors of matrices from \cite{conca} and \cite{sturmfels}. Given determinantal ideals $I$ and $J$, the sum ideal $I + J$ is often difficult to understand, and such sums appear in various contexts. Ideals of the form $I_{1}(XY)+J$ are special in the sense that they occur in several geometric considerations like linkage and generic residual intersection of polynomial ideals, especially in the context of syzygies; see \cite{nor}, \cite{akm}, \cite{bucheis}, \cite{bkm}, \cite{johnson}. Some important classes of ideals in this category are the Northcott ideals, the Herzog ideals (see Definition 3.4 in \cite{akm}) and the deviation two Gorenstein ideals defined in \cite{hunul}. Northcott ideals were resolved by Northcott in \cite{nor}. Herzog gave a resolution of a special case of the Herzog ideals in \cite{herzog}. These results were extended in \cite{bucheis}. In a similar vein, Bruns-Kustin-Miller \cite{bkm} resolved the ideal $I_{1}(XY)+ I_{\min (m,n)}(X)$, where $X$ is a generic $m \times n$ matrix and $Y$ is a generic $n\times 1$ matrix. Johnson-McLoud \cite{johnson} proved certain properties of ideals of the form $I_{1}(XY)+I_{2}(X)$, where $X$ is a generic symmetric matrix and $Y$ is either generic or generic alternating. A recent article \cite{it} shows a connection between ideals of this form and the ideal of the dual of the quotient bundle on the Grassmannian $G(2,n)$. Ideals of the form $I+J$ also appear naturally in the study of some natural classes of curves; see \cite{hip}. While computing Betti numbers for such ideals, a useful technique is often the iterated mapping cone.
This technique requires a good understanding of successive colon ideals between $I$ and $J$, which is often difficult to achieve. It is helpful if Gr\"{o}bner bases for $I$ and $J$ are known. In this paper our aim is to produce suitable Gr\"{o}bner bases for ideals of the form $I_{1}(XY)$, when $Y$ is a generic column matrix and $X$ is one of the following: \begin{enumerate} \item $X$ is a generic square matrix; \item $X$ is a generic symmetric matrix; \item $X$ is a generic $(n+1)\times n$ matrix. \end{enumerate} We have also studied $I_{1}(XY)$, when \begin{enumerate} \item[(4)] $X$ is an $(m\times mn)$ generic matrix and $Y$ is an $(mn\times n)$ generic matrix. \end{enumerate} Our method is constructive, and it will exhibit that the first two cases behave similarly. The newly constructed Gr\"{o}bner bases will be used to compute the Betti numbers of $I_{1}(XY)$. We will see that computing Betti numbers for $I_{1}(XY)$ in the first two cases is not difficult, while the last two cases are not so straightforward. We will use some results from \cite{sstprime} and \cite{sstsum}, which derive some deeper consequences of the Gr\"{o}bner basis computations carried out in this paper. \section{Defining the problems} Let $K$ be a field and $\{x_{ij}; \, 1\leq i \leq n+1, \, 1\leq j \leq n\}$, $\{y_{j}; \, 1\leq j \leq n\}$ be indeterminates over $K$. Let $R = K[x_{ij}, y_{j}\mid 1\leq i, j \leq n]$, $\widehat{R} = K[x_{ij}, y_{j}\mid 1\leq i \leq n+1, \, 1\leq j \leq n]$ denote polynomial $K$-algebras. Let $X=(x_{ij})_{n\times n}$ be either generic or generic symmetric. Let $\widehat{X}=(x_{ij})_{(n+1)\times n}$ and $Y=(y_{j})_{n\times 1}$ be generic matrices. We define $\mathcal{I} = I_{1}(XY)$ and $\mathcal{J} = I_{1}(\widehat{X}Y)$. Let $g_{i} = \sum_{j=1}^{n}x_{ij}y_{j}$, for $1\leq i\leq n$. Then, $\mathcal{I} = \langle g_{1}, \ldots , g_{n}\rangle$.
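To make the defining generators concrete, the following is a minimal pure-Python sketch (our illustration, not part of the paper's machinery) that builds the entries $g_i$ of $XY$ for $n=3$ and checks that, under a lexicographic order whose largest variables are the diagonal entries $x_{11} > x_{22} > x_{33}$, the leading term of $g_i$ is $x_{ii}y_i$, so the leading terms are pairwise coprime.

```python
from itertools import combinations

# Variable precedence: the diagonal entries x11 > x22 > x33 come first;
# all other x_ij and the y_j are smaller (their relative order is
# irrelevant for this check).  A monomial is a dict {variable: exponent}.
n = 3
diag = [f"x{i}{i}" for i in range(1, n + 1)]
others = [f"x{i}{j}" for i in range(1, n + 1)
          for j in range(1, n + 1) if i != j]
ys = [f"y{j}" for j in range(1, n + 1)]
rank = {v: k for k, v in enumerate(diag + others + ys)}  # small rank = big var

def lex_key(mono):
    # exponent vector listed in decreasing variable precedence, so that
    # tuple comparison is exactly lexicographic monomial comparison
    return tuple(mono.get(v, 0) for v in sorted(rank, key=rank.get))

def leading_monomial(poly):
    # poly is a list of monomials; all coefficients are +1 here
    return max(poly, key=lex_key)

# g_i = sum_j x_ij * y_j, the entries of XY for a generic 3x3 matrix X
gens = [[{f"x{i}{j}": 1, f"y{j}": 1} for j in range(1, n + 1)]
        for i in range(1, n + 1)]

lts = [leading_monomial(g) for g in gens]
# expect the diagonal terms x11*y1, x22*y2, x33*y3
assert lts == [{"x11": 1, "y1": 1}, {"x22": 1, "y2": 1}, {"x33": 1, "y3": 1}]

# pairwise coprime leading terms: no variable occurs in two of them
for a, b in combinations(lts, 2):
    assert set(a).isdisjoint(set(b))
```

Pairwise coprime leading terms force all S-polynomials to reduce to zero, which is the content of the regular-sequence observation made for this order.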
Let us choose the lexicographic monomial order on $R$ induced by the following order on the variables: \begin{enumerate} \item $x_{11}> x_{22}> \cdots >x_{nn}$; \item $x_{ij}, y_{j} < x_{nn}$ for every $1 \leq i \neq j \leq n$. \end{enumerate} It is an interesting observation that the set $\{g_{1}, \ldots , g_{n}\}$ is a Gr\"obner basis for $\mathcal{I}$ with respect to the above monomial order and that the elements $g_{1}, \ldots , g_{n}$ form a regular sequence as well; see Lemma \ref{disjoint} and Theorem \ref{regseqI}. However, this Gr\"{o}bner basis is too small in size to be of much help in applications like computing a primary decomposition of $I_{1}(XY)$ or computing Betti numbers of ideals of the form $I_{1}(XY) + J$, carried out in \cite{sstprime} and \cite{sstsum} respectively. This motivated us to look for a different Gr\"{o}bner basis for $\mathcal{I}$; see Theorem \ref{gbtheorem}. This construction gives rise to a bigger picture and naturally generalizes to a Gr\"{o}bner basis for the ideal $\mathcal{J} = I_{1}(\widehat{X}Y)$. As an application, we compute the Betti numbers for the ideals $\mathcal{I}$ and $\mathcal{J}$; see Section 6. \section{Notation} \begin{enumerate} \item[(i)] $C_{k}:= \{\mathbf{a} = (a_{1},\cdots,a_{k}) \mid 1\leq a_{1} < \cdots < a_{k} \leq n\}$ denotes the collection of all ordered $k$-tuples from $\{1,\cdots,n\}$. In the case of $\mathcal{J} = I_{1}(\widehat{X}Y)$, the set $C_{k}$ denotes the collection of all ordered $k$-tuples $(a_{1},\cdots,a_{k})$ from $\{1,\cdots,n+1\}$. \item[(ii)] Given $\mathbf{a}=(a_{1},\ldots,a_{k})\in C_{k}$: \begin{itemize} \item $X^{\mathbf{a}}=[a_{1},\cdots,a_{k}|1,2,\ldots , k]$ denotes the $k\times k$ minor of the matrix $X$, with $a_{1},\ldots,a_{k}$ as rows and $1,\ldots , k$ as columns. Similarly, $\widehat{X}^{\mathbf{a}}=[a_{1},\cdots,a_{k}|1,\ldots , k]$ denotes the $k\times k$ minor of the matrix $\widehat{X}$, with $a_{1},\ldots,a_{k}$ as rows and $1,\ldots , k$ as columns.
\item $S_{k}:=\{X^{\mathbf{a}}:\mathbf{a}\in C_{k}\}$ and $I_{k}$ denotes the ideal generated by $S_{k}$ in the polynomial ring $R$ (respectively $\widehat{R}$); \item $X^{\mathbf{a},m}:= [a_{1},\cdots,a_{k}|1,\cdots,k-1,m]$ if $m\geq k$; \item $\widetilde{X^{\mathbf{a}}} = \sum_{m\geq k}[a_{1},\cdots,a_{k}|1,\cdots,k-1,m]y_{m} = \sum_{m\geq k}X^{\mathbf{a},m}y_{m}$; \item $\widetilde{S}_{k}:= \{\widetilde{X^{\mathbf{a}}}: X^{\mathbf{a}} \in S_{k}\}$ and $\widetilde{I}_{k}$ denotes the ideal generated by $\widetilde{S}_{k}$ in the polynomial ring $R$ (respectively $\widehat{R}$); \item $G_{k}=\cup_{i\geq k}\widetilde{S}_{i}$; \item $G =\cup_{k\geq 1}G_{k}$; \item $X^{\mathbf{a}}_{r}:= [a_{1},a_{2},\cdots,\hat{a_{r}},a_{r+1}\cdots,a_{k}|1,2,\cdots,k-1]$, if $k\geq 2$. \end{itemize} \item[(iii)] Suppose that $C_{k} = \left\lbrace\mathbf{a}_{1}< \ldots < \mathbf{a}_{\binom{n}{k}}\right\rbrace$, where $<$ is the lexicographic ordering. Given $m\geq k$, the map $$\sigma_{m}:\left\lbrace X^{\mathbf{a}_{1},m}, \ldots , X^{\mathbf{a}_{\binom{n}{k}},m}\right\rbrace \rightarrow \left\lbrace 1,\cdots,\binom{n}{k}\right\rbrace$$ is defined by $\sigma_{m}(X^{\mathbf{a}_{i},m})=i$. This is a bijective map. The map $\sigma_{k}$ will be denoted by $\sigma$, which is the bijection from $S_{k}$ to $\{1,\cdots,\binom{n}{k}\}$ given by $\sigma(X^{\mathbf{a}_{i}})=\sigma_{k}(X^{\mathbf{a}_{i}, k})=i$. \end{enumerate} \section{Gr\"{o}bner basis for $\mathcal{I}$} We first construct a Gr\"{o}bner basis for the ideal $\mathcal{I}$. A similar computation works for computing a Gr\"{o}bner basis for the ideal $\mathcal{J}$, which will be discussed in the next section. 
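Before the formal proof, the mechanism producing the extra generators can already be seen for $n=2$. The following pure-Python sketch (our illustration, independent of the paper's arguments) computes the S-polynomial of $g_1 = x_{11}y_1 + x_{12}y_2$ and $g_2 = x_{21}y_1 + x_{22}y_2$ under an order with $y_1 > y_2 >$ all $x_{ij}$: the result is $-(x_{11}x_{22}-x_{12}x_{21})y_2 = -\det(X)\,y_2$, i.e. (up to sign) the element $\widetilde{X^{(1,2)}}$ of $\widetilde{S}_2$.

```python
from collections import defaultdict

# A polynomial is {monomial: coeff}; a monomial is a frozenset of
# (variable, exponent) pairs.  All exponents are 1 in this example,
# so the frozenset representation is safe.
def mono(*vars_):
    return frozenset((v, 1) for v in vars_)

def mul_var(poly, var):
    # multiply a polynomial by a single variable not yet occurring in it
    return {m | {(var, 1)}: c for m, c in poly.items()}

def sub(p, q):
    out = defaultdict(int)
    for m, c in p.items():
        out[m] += c
    for m, c in q.items():
        out[m] -= c
    return {m: c for m, c in out.items() if c != 0}

# entries of XY for a generic 2x2 matrix X and Y = (y1, y2)^T
g1 = {mono("x11", "y1"): 1, mono("x12", "y2"): 1}
g2 = {mono("x21", "y1"): 1, mono("x22", "y2"): 1}

# Under y1 > y2 > x_ij the leading terms are x11*y1 and x21*y1, so
# S(g1, g2) = x21*g1 - x11*g2.
s_poly = sub(mul_var(g1, "x21"), mul_var(g2, "x11"))

# Expected: x12*x21*y2 - x11*x22*y2 = -det(X)*y2.  No term involves y1,
# so neither leading term x11*y1 nor x21*y1 divides any term of it:
# the reduction modulo {g1, g2} stops here with this nonzero remainder.
expected = {mono("x12", "x21", "y2"): 1, mono("x11", "x22", "y2"): -1}
assert s_poly == expected
```

This remainder is exactly why, under the $y$-first order, the generators $g_i$ alone are not a Gr\"{o}bner basis and the sets $\widetilde{S}_k$ of higher minors enter the picture.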
Our aim in this section is to prove \begin{theorem}\label{gbtheorem} The set $G_{k}$ is a reduced Gr\"{o}bner Basis for the ideal $\widetilde{I}_{k}$, with respect to the lexicographic monomial order induced by the following order on the variables: $y_{1}>y_{2}>\cdots >y_{n}> x_{ij}$ for all $i,j$, such that $x_{ij}>x_{i'j'}$ if $i<i'$ or if $i=i'$ and $j< j'$. In particular, $\mathcal{G} = G_{1}$ is a reduced Gr\"{o}bner Basis for the ideal $\widetilde{I}_{1} = \mathcal{I}$. \end{theorem} \noindent We first write down the main steps involved in the proof. Let $\widetilde{X}^{\mathbf{a}},\widetilde{X}^{\mathbf{b}}\in G_{k}=\cup_{i\geq k}\widetilde{S}_{i}$. Then, either $X^{\mathbf{a}},X^{\mathbf{b}}\in S_{k}$ or $X^{\mathbf{a}}\in S_{k}$, $X^{\mathbf{b}}\in S_{k'}$, for $k'>k$. Our aim is to show that $S(\widetilde{X}^{\mathbf{a}},\widetilde{X}^{\mathbf{b}})\rightarrow_{G_{k}} 0$ and use Buchberger's criterion. \begin{enumerate} \item[(A)] By Lemma \ref{concath}, we have $S(X^{\mathbf{a}},X^{\mathbf{b}})\longrightarrow_{S_{k}} 0$. We write $m_{\mathbf{a}}X^{\mathbf{a}}+m_{\mathbf{b}}X^{\mathbf{b}} =S(X^{\mathbf{a}},X^{\mathbf{b}}) = \sum_{t=1}^{\binom{n}{k}}\alpha_{t}X^{\mathbf{a}_{t}}\longrightarrow_{S_{k}} 0$, such that $X^{\mathbf{a}_{i}} = X^{\mathbf{a}}$ and $X^{\mathbf{a}_{j}} = X^{\mathbf{b}}$, for some $i$ and $j$. Therefore, by Schreyer's theorem the tuples $(\alpha_{1},\ldots ,\alpha_{i}-m_{\mathbf{a}},\ldots , \alpha_{j}-m_{\mathbf{b}},\ldots,\alpha_{r} )$ generate ${\rm Syz}(I_{k})$. \item[(B)] ${\rm Syz}(I_{k})$ is precisely known by \cite{en}. \item[(C)] $S(\widetilde{X}^{\mathbf{a}},\widetilde{X}^{\mathbf{b}})\longrightarrow_{\widetilde{S}_{k}} S(\widetilde{X}^{\mathbf{a}},\widetilde{X}^{\mathbf{b}}) - \sum_{t=1}^{\binom{n}{k}} \alpha_{t}\widetilde{X}^{\mathbf{a}_{t}}$ by Lemma \ref{spolylemma}, if $X^{\mathbf{a}},X^{\mathbf{b}}\in S_{k}$ and by Lemma \ref{spolydifferent}, if $X^{\mathbf{a}}\in S_{k}$, $X^{\mathbf{b}}\in S_{k'}$, for $k'>k$. 
\item[(D)] $S(\widetilde{X}^{\mathbf{a}},\widetilde{X}^{\mathbf{b}})- \sum_{t=1}^{\binom{n}{k}} \alpha_{t}\widetilde{X}^{\mathbf{a}_{t}}=s\in \widetilde{I}_{k+1}$, by Lemma \ref{spolylemma}, if $X^{\mathbf{a}},X^{\mathbf{b}}\in S_{k}$. \item[(E)] $S(\widetilde{X}^{\mathbf{a}},\widetilde{X}^{\mathbf{b}})- \sum_{t=1}^{\binom{n}{k}} \alpha_{t}\widetilde{X}^{\mathbf{a}_{t}}=s\in \widetilde{I}_{k'+1}$, by Lemma \ref{spolydifferent}, if $ X^{\mathbf{a}}\in S_{k}$, $X^{\mathbf{b}}\in S_{k'}$, for $k'>k$. \item[(F)] $s\longrightarrow_{G_{k}} 0$, proved in Theorem \ref{gbtheorem} for both the cases. \end{enumerate} We first prove a number of lemmas to complete the proof through the steps mentioned above. \begin{lemma}\label{concath} The set $S_{k}$ forms a Gr\"{o}bner basis of $I_{k}$ with respect to the chosen monomial order on $R$. \end{lemma} \proof We use Buchberger's criterion for the proof. Let $X^{\mathbf{c}}, X^{\mathbf{d}}\in S_{k}$. Suppose that $S(X^{\mathbf{c}},X^{\mathbf{d}})\stackrel{S_{k}}{\longrightarrow} r$. Then, $S(X^{\mathbf{c}},X^{\mathbf{d}})-\sum_{\mathbf{a}_{i}\in C_{k}}h_{i}X^{\mathbf{a}_{i}}=r$. If $X$ is generic (respectively generic symmetric), we know by \cite{sturmfels} (respectively by \cite{conca}) that the set of all $k\times k$ minors of the matrix $X$ forms a Gr\"{o}bner basis for the ideal $I_{k}(X)$, with respect to the chosen monomial order. Therefore, if $r\neq 0$, there exists $[a_{1},a_{2},\cdots,a_{k}\mid b_{1},b_{2},\cdots,b_{k}]$ such that its leading term $\prod_{i=1}^{k}x_{a_{i}b_{i}}$ divides ${\rm \mbox{Lt}}(r)$. We see that if $b_{k}= k$, the minor belongs to the set $S_{k}$ and we are done. Let us now consider the case $b_{k}\geq k+1$. Let $X$ be generic symmetric. Then, $a_{k} = k$ and $b_{k}\geq k+1$ imply that the minor belongs to the set $S_{k}$. If $a_{k},b_{k}\geq k+1$, then $x_{a_{k}b_{k}}\mid {\rm \mbox{Lt}}(r)$, but $x_{a_{k}b_{k}}$ does not divide any term of the elements in $S_{k}$. Let $X$ be generic.
Then, for any $a_{k}$, under the condition $b_{k}\geq k+1$ we have $x_{a_{k}b_{k}}\mid {\rm \mbox{Lt}}(r)$, but $x_{a_{k}b_{k}}$ does not divide any term of the elements in $S_{k}$.\qed \begin{lemma}\label{disjoint} Let $h_{1},h_{2},\ldots, h_{n}\in R$ be such that, with respect to a suitable monomial order on $R$, their leading terms are pairwise coprime. Then, $h_{1},h_{2},\ldots, h_{n}$ is a Gr\"{o}bner basis of the ideal generated by $h_{1},h_{2},\ldots, h_{n}$ with respect to the same monomial order, and these elements form a regular sequence in $R$. \end{lemma} \proof The proof is a routine application of the division algorithm.\qed \begin{lemma}\label{EN} Let $1\leq k \leq n$. The height of the ideal $I_{k}$ is $n-k+1$, in the case of $X$. \end{lemma} \proof We know that ${\rm ht}(I_{k})\leq n-k+1$. It suffices to find a regular sequence of that length in the ideal $I_{k}$. We claim that $\{[1\cdots k|1 \cdots k], [2\cdots k+1|1\cdots k],\ldots , [n-k+1 \cdots n|1 \cdots k]\}$ forms a regular sequence. The leading term of $[a_{1},a_{2},\cdots,a_{k}\mid b_{1},b_{2},\cdots,b_{k}]$ with respect to the chosen monomial order is $\prod_{i=1}^{k}x_{a_{i}b_{i}}$. Therefore, the leading terms of the above minors are mutually coprime and we are done by Lemma \ref{disjoint}. \qed \begin{remark} We now assume that $X = (x_{ij})$ is a generic $n\times n$ matrix. The proof for the symmetric case is exactly the same. \end{remark} \subsection*{Description of generators of ${\rm Syz}(I_{k})$} By Lemma \ref{EN} we conclude that a minimal free resolution of the ideal $I_{k}$ is given by the Eagon-Northcott complex. Let us describe the first syzygies of the Eagon-Northcott resolution of $I_{k}$. Let $\mathbf{a} = (a_{1}, \ldots , a_{k+1})\in C_{k+1}$. For $1\leq r\leq k+1$, we define $X^{\mathbf{a}}_{r}=[a_{1},\ldots,\hat{a_{r}}, \ldots,a_{k+1}|1,\ldots,k]$. Hence $X^{\mathbf{a}}_{r}\in S_{k}$. We define the map $\phi$ as follows.
\begin{eqnarray*} \{1,2,\cdots,k\}\times C_{k+1}& \stackrel{\phi}{\longrightarrow} & R^{\binom{n}{k}}\\ (j,\mathbf{a})& \mapsto & \alpha \end{eqnarray*} such that $\, \alpha(i) = \begin{cases} (-1)^{r_{i}+1}x_{(a_{r_{i}},\ j)} & \textrm{if} \, i=\sigma(X^{\mathbf{a}}_{r_{i}})\ \textrm{for \ some} \quad r_{i}; \\[2mm] 0 & \textrm{otherwise}. \end{cases}$ \noindent The map $\sigma$ is the bijection from $S_{k}$ to $\{1,2,\cdots,\binom{n}{k}\}$ defined before. The image of $\phi$ gives a complete list of generators of ${\rm Syz}(I_{k})$. \begin{example} We give an example by taking $k=3$ and $n=5$. Let $\sigma\colon S_{3} \longrightarrow \{1,\cdots, \binom{5}{3}\}$ be defined by \begin{itemize} \item $[1,2,3\mid 1,2,3]\mapsto 1$ \item $[1, 2, 4\mid 1,2,3]\mapsto 2$ \item $[1,2,5\mid 1,2,3]\mapsto 3$ \item $[1, 3, 4\mid 1,2,3]\mapsto 4$ \item $[1, 3, 5\mid 1,2,3]\mapsto 5$ \item $[1,4,5\mid 1,2,3]\mapsto 6$ \item $[2,3,4\mid 1,2,3]\mapsto 7$ \item $[2,3,5\mid 1,2,3]\mapsto 8$ \item $[2,4,5\mid 1,2,3]\mapsto 9$ \item $[3,4,5\mid 1,2,3]\mapsto 10$ \end{itemize} In our example, $\phi\colon\{1,\cdots, 3\}\times C_{4}\longrightarrow R^{\binom{5}{3}}$, $(j,\mathbf{a})\mapsto \alpha$. Let $j=2$ and $\mathbf{a}=(1, 3, 4, 5)$. Then, $X^{\mathbf{a}}_{1}=[3, 4, 5\mid 1, 2, 3]$, $X^{\mathbf{a}}_{2}=[1, 4, 5\mid 1, 2, 3]$, $X^{\mathbf{a}}_{3}=[1, 3, 5\mid 1, 2, 3]$, $X^{\mathbf{a}}_{4}=[1, 3, 4\mid 1, 2, 3]$. Therefore, $\sigma(X^{\mathbf{a}}_{1})=10$, $\sigma(X^{\mathbf{a}}_{2})=6$, $\sigma(X^{\mathbf{a}}_{3})=5$, $\sigma(X^{\mathbf{a}}_{4})=4$. Consequently, $\alpha(4)=(-1)^{4+1}x_{52}=-x_{52}$, $\alpha(5)=(-1)^{3+1}x_{42}=x_{42}$, $\alpha(6)=(-1)^{2+1}x_{32}=-x_{32}$, $\alpha(10)=(-1)^{1+1}x_{12}=x_{12}$.
Therefore, $\alpha=(0,0,0,-x_{52},x_{42},-x_{32},0,0,0,x_{12})$.\\ \end{example} \begin{lemma}\label{syzlemma} Let $1\leq k\leq n-1$ and let $S_{k}=\left\lbrace X^{\mathbf{a}_{1}}, \ldots , X^{\mathbf{a}_{\binom{n}{k}}}\right\rbrace$ be such that $\mathbf{a}_{1}< \ldots < \mathbf{a}_{\binom{n}{k}}$ with respect to the lexicographic ordering. Suppose that $\alpha =(\alpha_{1},\ldots,\alpha_{\binom{n}{k}})\in {\rm Syz}(I_{k})$, so that $\sum_{i=1}^{\binom{n}{k}}\alpha_{i}X^{\mathbf{a}_{i}}=0$. Then $\sum_{i=1}^{\binom{n}{k}}\alpha_{i}\widetilde{X^{\mathbf{a}_{i}}}\in \widetilde{I}_{k+1}$. \end{lemma} \proof We have $\widetilde{X}^{\mathbf{a}_{i}}=\sum_{m\geq k}\sigma_{m}^{-1}(i)y_{m}$. Therefore $$\sum_{i=1}^{\binom{n}{k}}\alpha_{i}\widetilde{X}^{\mathbf{a}_{i}}=\sum_{i}\alpha_{i}\Big(\sum_{m\geq k}\sigma_{m}^{-1}(i)y_{m}\Big)=\sum_{m\geq k}\Big(\sum_{i}\alpha_{i}\sigma_{m}^{-1}(i)\Big)y_{m}.$$ We compute the coefficient $\sum_{i}\alpha_{i}\sigma_{m}^{-1}(i)$ for every $m\geq k$. We have $\alpha\in {\rm Syz}(I_{k})=\langle {\rm Im}(\phi) \rangle$, and without loss of generality we may assume that $\alpha\in {\rm Im}(\phi)$: there exists $(j,\mathbf{a})\in \{1,2,\ldots,k\}\times C_{k+1}$, with $\mathbf{a}=(a_{1},\ldots,a_{k+1})$, such that $\phi(j,\mathbf{a})=\alpha$. Whenever $\alpha_{i}\neq 0$, we have $i=\sigma (X_{r_{i}}^{\mathbf{a}})$ for some $r_{i}$, and then $\sigma_{m}^{-1}(i)=[a_{1},\ldots,\hat{a_{r_{i}}},\ldots,a_{k+1}|1,\ldots,k-1,m ]$.
We have $$[a_{1},\ldots,a_{k+1}|j,1,\ldots,k-1,m]=0\quad \text{for} \quad j\leq k-1\quad \text{and}$$ $$[a_{1},\ldots,a_{k+1}|k,1,\ldots,k-1,m]= (-1)^{k-1}[a_{1},\ldots,a_{k+1}|1,\ldots,k,m]\in I_{k+1}.$$ Therefore, \begin{eqnarray*} \sum_{i=1}^{\binom{n}{k}}\alpha_{i}\cdot\sigma_{m}^{-1}(i) & = & \sum_{r=1}^{k+1} (-1)^{r+1}x_{(a_{r},\ j)}[a_{1},\ldots,\hat{a_{r}},\ldots,a_{k+1}|1,\ldots,k-1,m]\\ {} & = & [a_{1},\ldots,a_{k+1}|j,1,\ldots,k-1,m]. \end{eqnarray*} Hence, $$\sum_{i=1}^{\binom{n}{k}}\alpha_{i}\widetilde{X^{\mathbf{a}_{i}}} =\sum_{m\geq k}[a_{1},\ldots,a_{k+1}|j,1,\ldots,k-1,m]\,y_{m},$$ which is $0$ if $j\leq k-1$, and equals $(-1)^{k-1}\sum_{m\geq k}[a_{1},\ldots,a_{k+1}|1,\ldots,k,m]\,y_{m}=(-1)^{k-1}\widetilde{X}^{\mathbf{a}}\in \widetilde{I}_{k+1}$ if $j=k$ (note that the term for $m=k$ vanishes, since the minor then has a repeated column). \qed \begin{lemma}\label{spolylemma} Let $X^{\mathbf{a}_{i}},X^{\mathbf{a}_{j}}\in S_{k}=\left\lbrace X^{\mathbf{a}_{1}}, \ldots , X^{\mathbf{a}_{\binom{n}{k}}}\right\rbrace$, for $i\neq j$. Then, there exist monomials $h_{t}$ in $R$ and a polynomial $r\in \widetilde{I}_{k+1}$ such that \begin{enumerate} \item[(i)] $S(X^{\mathbf{a}_{i}},X^{\mathbf{a}_{j}}) =\sum_{t=1}^{\binom{n}{k}} h_{t}X^{\mathbf{a}_{t}}$, upon division by $S_{k}$; \item[(ii)] $S(\widetilde{X}^{\mathbf{a}_{i}},\widetilde{X}^{\mathbf{a}_{j}})=\sum_{t=1}^{\binom{n}{k}} h_{t}\widetilde{X}^{\mathbf{a}_{t}}+r$, upon division by $\widetilde{S}_{k}$. \end{enumerate} \end{lemma} \proof (i) The expression follows from the observation that $S_{k}$ is a Gr\"{o}bner basis for the ideal $I_{k}$. \noindent (ii) We first note that ${\rm \mbox{Lt}}(\widetilde{X}^{\mathbf{a}_{t}})={\rm \mbox{Lt}}(X^{\mathbf{a}_{t}})y_{k}$, for every $X^{\mathbf{a}_{t}}\in S_{k}$.
Let $S(X^{\mathbf{a}_{i}},X^{\mathbf{a}_{j}})=cX^{\mathbf{a}_{i}}-dX^{\mathbf{a}_{j}}$, where $c=\dfrac{{\rm \mbox{lcm}}({\rm \mbox{Lt}}(X^{\mathbf{a}_{i}}),{\rm \mbox{Lt}}(X^{\mathbf{a}_{j}}))}{{\rm \mbox{Lt}}(X^{\mathbf{a}_{i}})}$ and $d=\dfrac{{\rm \mbox{lcm}}({\rm \mbox{Lt}}(X^{\mathbf{a}_{i}}),{\rm \mbox{Lt}}(X^{\mathbf{a}_{j}}))}{{\rm \mbox{Lt}}(X^{\mathbf{a}_{j}})}$. Hence, \begin{eqnarray*} S(\widetilde{X}^{\mathbf{a}_{i}},\widetilde{X}^{\mathbf{a}_{j}}) & = & c\cdot \widetilde{X}^{\mathbf{a}_{i}}-d\cdot \widetilde{X}^{\mathbf{a}_{j}}\\ {} & = & \sum_{m\geq k}\left[c\cdot X^{\mathbf{a}_{i}, m}-d\cdot X^{\mathbf{a}_{j}, m}\right]y_{m}. \end{eqnarray*} It follows immediately that ${\rm \mbox{Lt}}(S(\widetilde{X}^{\mathbf{a}_{i}},\widetilde{X}^{\mathbf{a}_{j}}))=y_{k}{\rm \mbox{Lt}}(S(X^{\mathbf{a}_{i}},X^{\mathbf{a}_{j}}))$. The set $S_{k}$ is a Gr\"{o}bner basis for the ideal $I_{k}$. Therefore, we have ${\rm \mbox{Lt}}(X^{\mathbf{a}_{t}})\mid {\rm \mbox{Lt}}(S(X^{\mathbf{a}_{i}},X^{\mathbf{a}_{j}}))$, for some $t$. Then, ${\rm \mbox{Lt}}(\widetilde{X}^{\mathbf{a}_{t}})\mid {\rm \mbox{Lt}}(S(\widetilde{X}^{\mathbf{a}_{i}},\widetilde{X}^{\mathbf{a}_{j}}))$ and we have $h_{t}=\dfrac{{\rm \mbox{Lt}}(S(X^{\mathbf{a}_{i}},X^{\mathbf{a}_{j}}))}{{\rm \mbox{Lt}}(X^{\mathbf{a}_{t}})}= \dfrac{{\rm \mbox{Lt}}(S(\widetilde{X}^{\mathbf{a}_{i}},\widetilde{X}^{\mathbf{a}_{j}}))}{{\rm \mbox{Lt}}(\widetilde{X}^{\mathbf{a}_{t}})}$.
We can write \begin{eqnarray*} r_{1} & := & S(\widetilde{X}^{\mathbf{a}_{i}},\widetilde{X}^{\mathbf{a}_{j}})- h_{t}\widetilde{X}^{\mathbf{a}_{t}}\\ {} & = & \sum_{m\geq k}[c\cdot X^{\mathbf{a}_{i},m}-d\cdot X^{\mathbf{a}_{j},m}-h_{t}X^{\mathbf{a}_{t},m}]y_{m}\\ {} & = & \sum_{m> k}[c\cdot X^{\mathbf{a}_{i},m}-d\cdot X^{\mathbf{a}_{j},m}-h_{t}X^{\mathbf{a}_{t},m}]y_{m}+[c\cdot X^{\mathbf{a}_{i}}-d\cdot X^{\mathbf{a}_{j}}-h_{t}X^{\mathbf{a}_{t}}] y_{k} \end{eqnarray*} Note that $r_{1}\in \widetilde{I}_{k}$ and ${\rm \mbox{Lt}}(r_{1}) = {\rm \mbox{Lt}} (S(\widetilde{X}^{\mathbf{a}_{i}},\widetilde{X}^{\mathbf{a}_{j}})- h_{t}\widetilde{X}^{\mathbf{a}_{t}})= y_{k}{\rm \mbox{Lt}}(S(X^{\mathbf{a}_{i}},X^{\mathbf{a}_{j}})-h_{t}X^{\mathbf{a}_{t}})$. We proceed as before with the polynomial $S(X^{\mathbf{a}_{i}},X^{\mathbf{a}_{j}})-h_{t}X^{\mathbf{a}_{t}}\in I_{k}$ and continue the process to obtain the desired expression involving the polynomial $r$. We now show that the polynomial $r$ is in the ideal $\widetilde{I}_{k+1}$. Let us write $H_{j}= h_{j}+ d$, $H_{i}= h_{i}-c$ and $H_{t}= h_{t}$ for $t\neq i,j$. It follows from $S(X^{\mathbf{a}_{i}},X^{\mathbf{a}_{j}})= \sum_{t=1}^{\binom{n}{k}} h_{t}X^{\mathbf{a}_{t}}$ that $\sum_{t=1}^{\binom{n}{k}}H_{t}X^{\mathbf{a}_{t}}= 0$. Therefore, $\mathbf{H} = (H_{1}, \ldots , H_{\binom{n}{k}})\in {\rm Syz}(I_{k})$ and by Lemma \ref{syzlemma} we have $\sum_{t=1}^{\binom{n}{k}}H_{t}\widetilde{X}^{\mathbf{a}_{t}}\in \widetilde{I}_{k+1}$. Hence, $r = S(\widetilde{X}^{\mathbf{a}_{i}},\widetilde{X}^{\mathbf{a}_{j}})-\sum_{t=1}^{\binom{n}{k}} h_{t}\widetilde{X}^{\mathbf{a}_{t}} = -\sum_{t=1}^{\binom{n}{k}}H_{t}\widetilde{X}^{\mathbf{a}_{t}}\in \widetilde{I}_{k+1}$. \qed \begin{lemma}\label{laplace} \begin{enumerate} \item [(i)] Let $k^{'}>k$ and $\mathbf{a} = (a_{1}, \ldots , a_{k^{'}})\in C_{k^{'}}$. Suppose that $X^{\mathbf{a}}=\sum_{\mathbf{b}_{t}\in C_{k}}\beta_{\mathbf{b}_{t}}X^{\mathbf{b}_{t}}$ is the Laplace expansion of $X^{\mathbf{a}}$.
Then $$\sum_{\mathbf{b}_{t}\in C_{k}}\beta_{\mathbf{b}_{t}}X^{\mathbf{b}_{t},i} = [a_{1}, \ldots , a_{k^{'}}|1, \ldots , k-1, i, k+1, \ldots , k^{'}].$$ \item [(ii)] Let $k^{'}>k$; $\mathbf{a}=(a_{1},\ldots,a_{k^{'}})\in C_{k^{'}}$, $\mathbf{b}=(b_{1},\ldots,b_{k})\in C_{k}$. Suppose that $X^{\mathbf{a}}=\sum_{\mathbf{p}\in C_{k}}\alpha_{\mathbf{p}}X^{\mathbf{p}}$ and $S(X^{\mathbf{a}},X^{\mathbf{b}})=cX^{\mathbf{a}}-dX^{\mathbf{b}}=\sum_{\mathbf{p}\in C_{k}}\beta_{\mathbf{p}}X^{\mathbf{p}}$. Then $$c\sum_{t\geq k}[a_{1}, \ldots , a_{k^{'}}| 1, \ldots, k-1, t, k+1, \ldots , k^{'}]y_{t}-d\widetilde{X}^{\mathbf{b}}-\sum_{\mathbf{p}\in C_{k}}\beta_{\mathbf{p}}\widetilde{X}^{\mathbf{p}}\in \widetilde{I}_{k+1}.$$ \end{enumerate} \end{lemma} \proof (i) See \cite{janjic}. \noindent (ii) We have $S(X^{\mathbf{a}},X^{\mathbf{b}})=cX^{\mathbf{a}}-dX^{\mathbf{b}}=\sum_{\mathbf{p}\in C_{k}}\beta_{\mathbf{p}}X^{\mathbf{p}}$. By rearranging terms we get $\sum_{\mathbf{p}\in C_{k}}(c\alpha_{\mathbf{p}}-\beta_{\mathbf{p}})X^{\mathbf{p}}-dX^{\mathbf{b}}=0$, and by separating out the term $(c\alpha_{\mathbf{b}}-\beta_{\mathbf{b}})X^{\mathbf{b}}$ we get $\sum_{\mathbf{p}\neq\mathbf{b}}(c\alpha_{\mathbf{p}}-\beta_{\mathbf{p}})X^{\mathbf{p}}+(c\alpha_{\mathbf{b}}-\beta_{\mathbf{b}}-d)X^{\mathbf{b}}=0$. Therefore, $\sum_{\mathbf{p}\neq\mathbf{b}}(c\alpha_{\mathbf{p}}-\beta_{\mathbf{p}})\widetilde{X}^{\mathbf{p}}+(c\alpha_{\mathbf{b}}-\beta_{\mathbf{b}}-d)\widetilde{X}^{\mathbf{b}} \in \widetilde{I}_{k+1}$, by Lemma \ref{syzlemma}. Hence $\sum_{t\geq k}\sum_{\mathbf{p}\neq\mathbf{b}}(c\alpha_{\mathbf{p}}-\beta_{\mathbf{p}})X^{\mathbf{p},t}y_{t}+(c\alpha_{\mathbf{b}}-\beta_{\mathbf{b}}-d)\sum_{t\geq k}X^{\mathbf{b},t}y_{t}\in \widetilde{I}_{k+1}$. Now $\sum_{\mathbf{p}\in C_{k}}\alpha_{\mathbf{p}}X^{\mathbf{p},t}= [a_{1}, \ldots , a_{k^{'}}| 1, \ldots, k-1, t, k+1, \ldots , k^{'}]$ for every $t\geq k$, by (i).
Hence, $$c\sum_{t\geq k}[a_{1}, \ldots , a_{k^{'}}| 1, \ldots, k-1, t, k+1, \ldots , k^{'}]y_{t}-d\widetilde{X}^{\mathbf{b}}-\sum_{\mathbf{p}\in C_{k}}\beta_{\mathbf{p}}\widetilde{X}^{\mathbf{p}}\in \widetilde{I}_{k+1}.\qed $$ \begin{lemma}\label{spolydifferent} Let $k^{'}>k$; $\mathbf{a}=(a_{1},\ldots,a_{k^{'}})\in C_{k^{'}}$, $\mathbf{b}=(b_{1},\ldots,b_{k})\in C_{k}$. Suppose that $S_{k}=\left\lbrace X^{\mathbf{a}_{1}}, \ldots , X^{\mathbf{a}_{\binom{n}{k}}}\right\rbrace $, with $\mathbf{a}_{1}< \ldots < \mathbf{a}_{\binom{n}{k}}$ with respect to the lexicographic ordering. Then, there exist monomials $h_{t}\in R$ and a polynomial $r\in \widetilde{I}_{k+1}$ such that \begin{enumerate} \item [(i)] $S(X^{\mathbf{a}},X^{\mathbf{b}})=\sum_{t=1}^{\binom{n}{k}}h_{t}X^{\mathbf{a}_{t}}$, upon division by $S_{k}$; \item [(ii)] $S(\widetilde{X}^{\mathbf{a}},\widetilde{X}^{\mathbf{b}})=\sum_{t=1}^{\binom{n}{k}}(h_{t}\widetilde{X}^{\mathbf{a}_{t}})y_{k'} + r$, upon division by $\widetilde{S}_{k}$. \end{enumerate} \end{lemma} \proof (i) The expression follows from the observation that $S_{k}$ is a Gr\"{o}bner basis for the ideal $I_{k}$. \noindent (ii) Let $S(X^{\mathbf{a}},X^{\mathbf{b}})=cX^{\mathbf{a}}-dX^{\mathbf{b}}$, where $c=\dfrac{{\rm \mbox{lcm}} ({\rm \mbox{Lt}}(X^{\mathbf{a}}),{\rm \mbox{Lt}}(X^{\mathbf{b}}))}{{\rm \mbox{Lt}}(X^{\mathbf{a}})}$ and $d=\dfrac{{\rm \mbox{lcm}} ({\rm \mbox{Lt}}(X^{\mathbf{a}}),{\rm \mbox{Lt}}(X^{\mathbf{b}}))}{{\rm \mbox{Lt}}(X^{\mathbf{b}})}$. Then, \begin{eqnarray*} S(\widetilde{X}^{\mathbf{a}},\widetilde{X}^{\mathbf{b}}) & = & cy_{k}\widetilde{X}^{\mathbf{a}}-dy_{k^{'}}\widetilde{X}^{\mathbf{b}}\\ {} & = & cy_{k}\sum_{t\geq k^{'}}X^{\mathbf{a},t}y_{t}-dy_{k^{'}}\sum_{t\geq k}X^{\mathbf{b},t}y_{t}\\ {} & = & y_{k}y_{k^{'}}(cX^{\mathbf{a}}-dX^{\mathbf{b}})+ \text{terms \,devoid \,of} \, y_{k}.
\end{eqnarray*} We therefore have $\text{Lt}(S(\widetilde{X}^{\mathbf{a}},\widetilde{X}^{\mathbf{b}}))=y_{k}y_{k^{'}}\text{Lt}(S(X^{\mathbf{a}},X^{\mathbf{b}}))$, since $y_{k}$ is the largest variable appearing in the above expression. The set $S_{k}$ being a Gr\"{o}bner basis for the ideal $I_{k}$, we have ${\rm \mbox{Lt}}(X^{\mathbf{a}_{t}})$ dividing ${\rm \mbox{Lt}}(S(X^{\mathbf{a}},X^{\mathbf{b}}))$ for some $t$. Let $h_{t}=\dfrac{{\rm \mbox{Lt}}(cX^{\mathbf{a}}-dX^{\mathbf{b}})}{{\rm \mbox{Lt}}(X^{\mathbf{a}_{t}})}$ for such an index $t$. Moreover, ${\rm \mbox{Lt}}(\widetilde{X}^{\mathbf{a}_{t}})$ being equal to $y_{k}{\rm \mbox{Lt}}(X^{\mathbf{a}_{t}})$, it divides ${\rm \mbox{Lt}}(S(\widetilde{X}^{\mathbf{a}},\widetilde{X}^{\mathbf{b}}))$. Let $$r_{1}:= S(\widetilde{X}^{\mathbf{a}},\widetilde{X}^{\mathbf{b}})-\frac{{\rm \mbox{Lt}}(S(\widetilde{X}^{\mathbf{a}},\widetilde{X}^{\mathbf{b}}))}{{\rm \mbox{Lt}}(\widetilde{X}^{\mathbf{a}_{t}})}\widetilde{X}^{\mathbf{a}_{t}}=S(\widetilde{X}^{\mathbf{a}},\widetilde{X}^{\mathbf{b}})-y_{k^{'}}h_{t}\widetilde{X}^{\mathbf{a}_{t}}\in\widetilde{I}_{k}.$$ We have \begin{eqnarray*} r_{1} & = & y_{k}y_{k^{'}}(cX^{\mathbf{a}}-dX^{\mathbf{b}})-y_{k^{'}}h_{t}\widetilde{X}^{\mathbf{a}_{t}}+ \text{terms \,devoid \,of} \, y_{k}\\ {} & = & y_{k}y_{k^{'}}(cX^{\mathbf{a}}-dX^{\mathbf{b}}) - y_{k^{'}}h_{t}\sum_{i\geq k}X^{\mathbf{a}_{t},i}y_{i}+ \text{terms \,devoid \,of} \, y_{k}\\ {} & = & y_{k}y_{k^{'}}(cX^{\mathbf{a}}-dX^{\mathbf{b}} - h_{t}X^{\mathbf{a}_{t}})+ \text{terms \,devoid \,of} \, y_{k}\\ {} & = & y_{k}y_{k^{'}}(S(X^{\mathbf{a}},X^{\mathbf{b}}) - h_{t}X^{\mathbf{a}_{t}}) + \text{terms \,devoid \,of} \, y_{k}.\\ \end{eqnarray*} Hence, ${\rm \mbox{Lt}}(r_{1}) = {\rm \mbox{Lt}}\big(y_{k}y_{k'}(S(X^{\mathbf{a}},X^{\mathbf{b}}) - h_{t}X^{\mathbf{a}_{t}})\big) = y_{k}y_{k'}{\rm \mbox{Lt}}(S(X^{\mathbf{a}},X^{\mathbf{b}}) - h_{t}X^{\mathbf{a}_{t}})$.
We proceed as before with the polynomial $S(X^{\mathbf{a}},X^{\mathbf{b}})-h_{t}X^{\mathbf{a}_{t}}\in I_{k}$ and continue the process to obtain the desired expression involving the polynomial $r$. We now show that the polynomial $r$ is in the ideal $\widetilde{I}_{k+1}$. Let us write \begin{eqnarray*} r & = & S(\widetilde{X}^{\mathbf{a}},\widetilde{X}^{\mathbf{b}})-\sum_{t=1}^{\binom{n}{k}}(h_{t}\widetilde{X}^{\mathbf{a}_{t}})y_{k'}\\ {} & = & cy_{k}\sum_{l\geq k^{'}}X^{\mathbf{a},l}y_{l}-dy_{k^{'}}\sum_{l\geq k}X^{\mathbf{b},l}y_{l}-\sum_{t=1}^{\binom{n}{k}}\sum_{l\geq k}h_{t}X^{\mathbf{a}_{t},l}y_{l}y_{k^{'}} + T - T, \end{eqnarray*} where $T = c\sum_{l\geq k}[a_{1},\ldots, a_{k^{'}}\mid 1,\ldots,k-1,l,k+1,\ldots, k^{'}]y_{l}y_{k^{'}}$. After a rearrangement of terms, we may write \begin{eqnarray*} r & = & \left(T-\sum_{t=1}^{\binom{n}{k}}\sum_{l\geq k}h_{t}X^{\mathbf{a}_{t},l}y_{l}y_{k^{'}} -dy_{k^{'}}\sum_{l\geq k}X^{\mathbf{b},l}y_{l}\right)\\ {} & {} & +\left(cy_{k}\sum_{l\geq k^{'}}X^{\mathbf{a},l}y_{l}\right) - T.\\[2mm] \end{eqnarray*} Let $T^{'}= c\sum_{l> k}[a_{1},\ldots, a_{k^{'}}\mid 1,\ldots,k-1,l,k+1,\ldots, k^{'}]y_{l}y_{k^{'}}$. Now we note that $cX^{\textbf{a}}-dX^{\textbf{b}}- \sum_{t=1}^{\binom{n}{k}}h_{t}X^{\mathbf{a}_{t}}=0 $. Hence $T-\sum_{t=1}^{\binom{n}{k}}\sum_{l\geq k}h_{t}X^{\mathbf{a}_{t},l}y_{l}y_{k^{'}} - dy_{k^{'}}\sum_{l\geq k}X^{\mathbf{b},l}y_{l}$ becomes equal to $$T^{'}-\sum_{t=1}^{\binom{n}{k}}\sum_{l> k}h_{t}X^{\mathbf{a}_{t},l}y_{l}y_{k^{'}} -dy_{k^{'}}\sum_{l> k}X^{\mathbf{b},l}y_{l}.$$ We also have $cy_{k}\sum_{l\geq k^{'}}X^{\mathbf{a},l}y_{l} - T = cy_{k}\sum_{l> k^{'}}X^{\mathbf{a},l}y_{l} - T^{'}$, since the term for $l=k^{'}$ in $cy_{k}\sum_{l\geq k^{'}}X^{\mathbf{a},l}y_{l}$ cancels against the term appearing in $T$ for $l=k$.
Hence we write \begin{eqnarray*} r & = & \left(T^{'}-\sum_{t=1}^{\binom{n}{k}}\sum_{l> k}h_{t}X^{\mathbf{a}_{t},l}y_{l}y_{k^{'}} -dy_{k^{'}}\sum_{l> k}X^{\mathbf{b},l}y_{l}\right)_{1}\\ {} & {} & +\left(cy_{k}\sum_{l> k^{'}}X^{\mathbf{a},l}y_{l}\right)_{2} - T^{'}\\[2mm] {}&= & \left(\ \right)_{1} + \left(\ \right)_{2}- T^{'}. \end{eqnarray*} Clearly, the expression $(\ )_{1}$ belongs to $\widetilde{I}_{k+1}$, by Lemma \ref{laplace}. We note that no term of $(\ )_{1}$ contains $y_{k}$; the same holds for $T^{'}$. Hence, the leading term of $r$ is the leading term of $(\ )_{2}$. By a similar argument we see that the expression $(\ )_{2}$, after division by elements of $\widetilde{S}_{k}$, further reduces to \begin{eqnarray*} -\left(\sum _{l> k^{'}}\sum_{s\geq k^{'}}c[a_{1},\ldots, a_{k'}|1,\ldots, k-1 ,s ,k+1 ,\ldots ,k'-1, l]y_{l}y_{s}\right)\\ = \quad -\left(\sum _{l> k^{'}}\sum_{s> k^{'}}c[a_{1},\ldots, a_{k'}|1,\ldots, k-1 ,s ,k+1 ,\ldots ,k'-1, l]y_{l}y_{s}\right)\\ -\left(\sum _{l> k^{'}}c[a_{1},\ldots, a_{k'}|1,\ldots, k-1 ,k^{'} ,k+1 ,\ldots ,k'-1, l]y_{l}y_{k^{'}}\right). \end{eqnarray*} Moreover, $$\sum _{l> k^{'}}c[a_{1},\ldots, a_{k'}|1,\ldots, k-1 ,k^{'} ,k+1 ,\ldots ,k'-1, l]y_{l}y_{k^{'}} + T' = 0$$ and $$\sum _{l> k^{'}}\sum_{s>k^{'}}c[a_{1},\ldots, a_{k'}|1,\ldots, k-1 ,s ,k+1 ,\ldots ,k'-1, l]y_{l}y_{s}=0.$$ Therefore, after division by elements of $\widetilde{S}_{k}$, the expression $(\ )_{1}+(\ )_{2}-T'$ reduces to $(\ )_{1}$, which is in $\widetilde{I}_{k+1}$. \qed \noindent\textbf{Proof of Theorem \ref{gbtheorem}.} We use induction on $n-k$ to prove that $G_{k}$ is a Gr\"{o}bner basis for the ideal $\widetilde{I}_{k}$. For $n-k=0$, the set $G_{k} = \widetilde{S}_{n}$ contains only one element and hence trivially forms a Gr\"{o}bner basis. We apply Buchberger's criterion to prove our claim. Let $X^{\mathbf{a}},X^{\mathbf{b}} \in G_{k}$.
The following cases may arise: \begin{itemize} \item $X^{\mathbf{a}},X^{\mathbf{b}}\in S_{k}$, for $\mathbf{a}, \mathbf{b}\in C_{k}$; \item $X^{\mathbf{a}}\in S_{k'}$ and $X^{\mathbf{b}}\in S_{k}$ where $k'>k$; $\mathbf{a}\in C_{k'}$ and $\mathbf{b}\in C_{k}$. \end{itemize} We have proved in Lemmas \ref{spolylemma} and \ref{spolydifferent} that, upon division by $\widetilde{S}_{k}$, the $S$-polynomial $S(\widetilde{X}^{\mathbf{a}},\widetilde{X}^{\mathbf{b}})\longrightarrow r$ for some $r\in\widetilde{I}_{k+1}$, in both cases. By the induction hypothesis, $G_{k+1}$ is a Gr\"{o}bner basis for $\widetilde{I}_{k+1}$. Hence $r$ reduces to $0$ modulo $G_{k+1}$, and hence modulo $G_{k}$, since $G_{k+1}\subset G_{k}$. We now show that $G_{k}$ is a reduced Gr\"{o}bner basis for $\widetilde{I}_{k}$. Let $X^{\mathbf{a}}\in S_{k'}$ and $X^{\mathbf{b}}\in S_{k}$ where $k'\geq k$; $\mathbf{a}\in C_{k'}$ and $\mathbf{b}\in C_{k}$. Then, $\widetilde{X}^{\mathbf{a}} = \sum_{i\geq k'}X^{\mathbf{a},i}y_{i}$ and $\widetilde{X}^{\mathbf{b}} = \sum_{i\geq k}X^{\mathbf{b},i}y_{i}$. If $k' > k$, then $y_{k'}\mid{\rm \mbox{Lt}}(\widetilde{X}^{\mathbf{a}})$ but $y_{k'}$ does not divide ${\rm \mbox{Lt}}(\widetilde{X}^{\mathbf{b}})$. Hence, ${\rm \mbox{Lt}}(\widetilde{X}^{\mathbf{a}})$ does not divide ${\rm \mbox{Lt}}(\widetilde{X}^{\mathbf{b}})$. If $k'=k$, then ${\rm \mbox{Lt}}(\widetilde{X}^{\mathbf{a}})=x_{(a_{1},1)}\cdots x_{(a_{k},k)}y_{k}$ and ${\rm \mbox{Lt}}(\widetilde{X}^{\mathbf{b}})=x_{(b_{1},1)}\cdots x_{(b_{k},k)}y_{k}$. Therefore, ${\rm \mbox{Lt}}(\widetilde{X}^{\mathbf{a}})\mid{\rm \mbox{Lt}}(\widetilde{X}^{\mathbf{b}})$ implies that $\mathbf{a}=\mathbf{b}$. This proves that the Gr\"{o}bner basis is reduced. \qed \section{Gr\"{o}bner basis for $\mathcal{J}$} \begin{theorem}\label{gbtheoremJ} Let us consider the lexicographic monomial order induced by $y_{1}>y_{2}>\cdots >y_{n}>x_{11}>x_{12}>\cdots > x_{(n+1),(n-1)}>x_{(n+1),n}$ on $\widehat{R} = K[x_{ij}, y_{j}\mid 1\leq i \leq n+1, \, 1\leq j \leq n]$.
The set $G_{k}$ is a reduced Gr\"{o}bner basis for the ideal $\widetilde{I}_{k}$. In particular, $\mathcal{G} = G_{1}$ is a reduced Gr\"{o}bner basis for the ideal $\widetilde{I}_{1} = \mathcal{J}$. \end{theorem} \proof The scheme of the proof is the same as that for $\mathcal{I}$, with suitable changes made for $\widehat{X}$ in the lemmas. We only reiterate the last part of the proof, where we carry out induction on $n-k$. For $n-k=0$, the set $G_{k} = \widetilde{S}_{n} = \{\Delta_{1}y_{n}, \ldots , \Delta_{n+1}y_{n}\}$, where $\Delta_{i} = \det(\widehat{X}_{i})$. We first note that ${\rm \mbox{Lt}}(\Delta_{i})$ and ${\rm \mbox{Lt}}(\Delta_{j})$ are coprime for $i\neq j$. Therefore, writing $\Delta_{i} = {\rm \mbox{Lt}}(\Delta_{i}) + p_{i}$, \begin{eqnarray*} S(\Delta_{i}y_{n}, \Delta_{j}y_{n}) & = & {\rm \mbox{Lt}}(\Delta_{j})\cdot(\Delta_{i}y_{n}) - {\rm \mbox{Lt}}(\Delta_{i})\cdot(\Delta_{j}y_{n})\\ {} & = & {\rm \mbox{Lt}}(\Delta_{j})({\rm \mbox{Lt}}(\Delta_{i})y_{n} + y_{n}p_{i}) - {\rm \mbox{Lt}}(\Delta_{i})({\rm \mbox{Lt}}(\Delta_{j})y_{n} + y_{n}p_{j})\\ {} & = & ({\rm \mbox{Lt}}(\Delta_{j})y_{n})p_{i} - ({\rm \mbox{Lt}}(\Delta_{i})y_{n})p_{j}\\ {} & = & (\Delta_{j}y_{n} - p_{j}y_{n})p_{i} - (\Delta_{i}y_{n} - p_{i}y_{n})p_{j}\\ {} & = & \Delta_{j}y_{n}p_{i} - \Delta_{i}y_{n}p_{j}\longrightarrow _{G_{n}} 0. \end{eqnarray*} The rest of the proof is essentially the same as that of Theorem \ref{gbtheorem}.\qed \section{Betti Numbers of $\mathcal{I}$ and $\mathcal{J}$} \begin{theorem}\label{regseqI} Suppose that $X = (x_{ij})_{n\times n}$ is either a generic or a generic symmetric $n\times n$ matrix and $Y$ a generic $n\times 1$ matrix given by $Y=(y_{j})_{n\times 1}$. If $X$ is generic, we write $g_{i}=\sum_{j=1}^{n} x_{ij}y_{j}$ and $\mathcal{I} = I_{1}(XY) = \langle g_{1},g_{2},\cdots ,g_{n}\rangle$.
If $X$ is generic symmetric, we write $g_{1}=\sum_{j=1}^{n}x_{1j}y_{j}$, $g_{n}=(\sum_{1\leq k \leq n}x_{kn}y_{k})$ and $g_{i}=(\sum_{1\leq k<i}x_{ki}y_{k})+(\sum_{i\leq k\leq n}x_{ik}y_{k})$ for $1<i<n$, and $\mathcal{I} = I_{1}(XY) = \langle g_{1},\cdots ,g_{n}\rangle$. The generators $g_{1}, \ldots , g_{n}$ of $\mathcal{I} = I_{1}(XY)$ in either case form a regular sequence in the polynomial $K$-algebra $R = K[x_{ij}, \, y_{j} \mid 1\leq i,j\leq n]$. Moreover, $\{g_{1}, \ldots , g_{n}\}$ forms a Gr\"obner basis for $\mathcal{I}$ in either case with respect to the lexicographic monomial order which satisfies (1) and (2) given below: \begin{enumerate} \item $x_{11}> x_{22}> \cdots >x_{nn}$; \item $x_{ij}, y_{j} < x_{nn}$ for every $1 \leq i \neq j \leq n$. \end{enumerate} \end{theorem} \proof The chosen monomial order is the lexicographic order induced by the ordering of the variables given by (1) and (2). It is clear from the expressions of the $g_{i}$ that their leading terms are pairwise coprime. Therefore, the proof follows from Lemma \ref{disjoint}. \qed \begin{corollary}\label{bettiI} $\mathcal{I}$ is minimally resolved by the Koszul complex $\mathbb{G}$ and the $i$-th Betti number of $\mathcal{I}$ is $\binom{n}{i}$. \end{corollary} \begin{theorem}\label{bettiJ} Suppose that $\widehat{X} = (x_{ij})_{(n+1)\times n}$ is a generic $(n+1)\times n$ matrix and $Y$ a generic $n\times 1$ matrix given by $Y=(y_{j})_{n\times 1}$. Let $g_{i}=\sum_{j=1}^{n} x_{ij}y_{j}$ and $\mathcal{J} = I_{1}(\widehat{X}Y) = \langle g_{1},\cdots ,g_{n+1}\rangle$. The total Betti numbers of the ideal $\mathcal{J}$ are $\beta_{0}=1, \beta_{1}=n+1$, $\beta_{n+1}= n$, $\beta_{k+1}=\binom{n}{k}+\binom{n}{k-1}+\binom{n}{k+1}$ for $1\leq k < n$. \end{theorem} We first discuss the scheme of the proof. We will use the following observations to compute the total Betti numbers of $\mathcal{J}$. \begin{itemize} \item[Step 1.]
The minimal graded free resolution of $\mathcal{I}=\langle g_{1},\cdots,g_{n}\rangle $ is given by the Koszul resolution. \item[Step 2.] We prove that $\langle g_{1},\cdots,g_{n}\rangle : g_{n+1} = \langle g_{1},\cdots,g_{n},\Delta\rangle$, where $\Delta= \det(X)$. This proof requires the fact that $\langle g_{1},\cdots,g_{n},\Delta\rangle$ is a prime ideal, which has been proved in Theorem 5.4 in \cite{sstprime}. \item[Step 3.] We prove that $\langle g_{1},\cdots, g_{n}\rangle :\Delta =\langle y_{1},y_{2},\cdots, y_{n}\rangle$. \item[Step 4.] We construct a graded free resolution of $\langle g_{1},\cdots,g_{n},\Delta\rangle $ using the mapping cone between resolutions of $\langle g_{1},\cdots,g_{n}\rangle$ and $\langle y_{1},\cdots,y_{n}\rangle $. We extract a minimal free resolution from this resolution. \item[Step 5.] Finally, we construct a graded free resolution of $\langle g_{1},\cdots,g_{n},g_{n+1}\rangle$ using the mapping cone between free resolutions of $\langle g_{1},\cdots,g_{n},\Delta\rangle$ and $\langle g_{1},\cdots,g_{n}\rangle$. We extract a minimal free resolution from this resolution. \end{itemize} \begin{remark} We need detailed information about the ideal $\langle g_{1},\cdots,g_{n},\Delta\rangle$, where $\Delta = \det(X)$. We need the fact that this ideal is a prime ideal, which has been proved in Theorem 5.4 in \cite{sstprime}. We also need a minimal free resolution for this ideal, which is proved below in Lemma \ref{resgennorth}. We learned much later that $\langle g_{1},\cdots,g_{n},\Delta\rangle$ was defined in \cite{nor}. It is known as the generic Northcott ideal, and a minimal free resolution can be found in \cite{nor}. However, we give a different proof here using our Gr\"{o}bner basis computation, which also shows the linking of nested complete intersection ideals.
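The cofactor identity $\Delta y_{i}=\sum_{j=1}^{n}A_{ji}g_{j}$ that drives these colon-ideal computations (Lemma \ref{cofactor} below) is also easy to verify symbolically. The following Python sketch (using SymPy, for the illustrative special case $n=3$; the flat symbol names are our own convention) checks it via the adjugate, whose $(i,j)$ entry is the cofactor $A_{ji}$:

```python
import sympy as sp

n = 3
# generic n x n matrix X and generic column vector y
X = sp.Matrix(n, n, lambda i, j: sp.Symbol(f"x{i + 1}{j + 1}"))
y = sp.Matrix([sp.Symbol(f"y{j + 1}") for j in range(n)])

g = X * y          # column vector with entries g_i = sum_j x_{ij} y_j
Delta = X.det()
adj = X.adjugate() # adj[i, j] (0-based) is the cofactor A_{(j+1)(i+1)}

# Delta * y_i = sum_j A_{ji} g_j, i.e. adj * g = Delta * y,
# since adj * X = Delta * Id
residual = (adj * g - Delta * y).expand()
print(residual)    # zero vector
```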
Moreover, Northcott's resolution can perhaps be used to prove that $\langle g_{1},\cdots,g_{n},\Delta\rangle$ is a prime ideal, although our proof in \cite{sstprime} is entirely different and uses the result in \cite{fer}. \end{remark} \begin{lemma}\label{cofactor} $\Delta y_{i}=\sum_{j=1}^{n}A_{ji}g_{j}$, where $A_{ji}$ is the cofactor of $x_{ji}$ in $X$. \end{lemma} \proof We have $$\Delta y_{i} = \sum_{j=1}^{n}A_{ji}x_{ji}y_{i} = \sum_{j=1}^{n}A_{ji}\left(\sum_{k=1}^{n}x_{jk}y_{k}\right) - \sum_{j=1}^{n}A_{ji}\left(\sum_{k\neq i}x_{jk}y_{k}\right) = \sum_{j=1}^{n}A_{ji}g_{j},$$ since $\sum_{j=1}^{n}A_{ji}\left(\sum_{k\neq i}x_{jk}y_{k}\right) = \sum_{k\neq i}\left(\sum_{j=1}^{n}A_{ji}x_{jk}\right)y_{k} = 0$.\qed \begin{lemma}\label{cofactorcor} $\langle g_{1},\cdots,g_{n},\Delta\rangle\subseteq \langle g_{1},\cdots,g_{n}\rangle : g_{n+1}$. \end{lemma} \proof We have $g_{i}\in \langle g_{1},\cdots,g_{n}\rangle : g_{n+1}$, for every $1\leq i\leq n$. Moreover, $y_{i}\Delta\in \langle g_{1},\cdots,g_{n}\rangle$, by Lemma \ref{cofactor}. Hence, $g_{n+1}\Delta\in \langle g_{1},\cdots,g_{n}\rangle$, that is, $\Delta\in \langle g_{1},\cdots,g_{n}\rangle : g_{n+1}$. \qed \begin{lemma}\label{colon1} $\langle g_{1},\cdots,g_{n}\rangle : g_{n+1}=\langle g_{1},\cdots,g_{n},\Delta\rangle $. \end{lemma} \proof We have proved that $\langle g_{1},\cdots,g_{n},\Delta\rangle\subseteq \langle g_{1},\cdots,g_{n}\rangle : g_{n+1}$ in Lemma \ref{cofactorcor}. We now prove that $\langle g_{1},\cdots,g_{n}\rangle : g_{n+1}\subseteq \langle g_{1},\cdots,g_{n},\Delta\rangle$. Let $z\in \langle g_{1},\cdots,g_{n}\rangle : g_{n+1}$. Then $zg_{n+1}\in \langle g_{1},\cdots,g_{n}\rangle\subset\langle g_{1},\cdots,g_{n},\Delta\rangle$. It is easy to see that $g_{n+1}\notin \langle g_{1},\cdots,g_{n},\Delta\rangle$. Therefore, $z\in \langle g_{1},\cdots,g_{n},\Delta\rangle$, since $\langle g_{1},\cdots,g_{n},\Delta\rangle$ is a prime ideal by Theorem 5.4 in \cite{sstprime}.
\qed \begin{lemma}\label{colon2} $\langle g_{1},\cdots,g_{n}\rangle :\Delta=\langle y_{1},\cdots,y_{n}\rangle$. \end{lemma} \proof We have $y_{i}\Delta\in \langle g_{1},\cdots,g_{n}\rangle$ by Lemma \ref{cofactor}, which implies that $\langle y_{1},\cdots,y_{n}\rangle\subseteq\langle g_{1},\cdots,g_{n}\rangle :\Delta$. Let $z\in \langle g_{1},\cdots,g_{n}\rangle :\Delta$. Then $z\Delta\in\langle g_{1},\cdots,g_{n}\rangle\subseteq \langle y_{1},\cdots,y_{n}\rangle$. Therefore, $z\in \langle y_{1},\cdots,y_{n}\rangle$, since $\Delta\notin \langle y_{1},\cdots,y_{n}\rangle$ and $\langle y_{1},\cdots,y_{n}\rangle$ is a prime ideal.\qed \noindent\textbf{Mapping Cones.} \, The resolution of $\langle y_{1},\cdots,y_{n}\rangle$ is given by the Koszul complex $\mathbb{F}_{\centerdot}$. We now give a resolution of $\langle g_{1},\cdots,g_{n},\Delta\rangle$ by the mapping cone technique. We know that $\langle g_{1}, \cdots,g_{n}\rangle :\Delta=\langle y_{1},\cdots,y_{n}\rangle$, by Lemma \ref{colon2}. We first construct a connecting homomorphism $\phi_{\centerdot}:\mathbb{F}_{\centerdot}\longrightarrow \mathbb{G}_{\centerdot}$. Let $\phi_{0}$ denote multiplication by $\Delta$. In order to make the map $\phi_{0}$ a degree zero map, we set the grading as $\mathbb{F}_{0}\cong (R(-n))^{1}$ and $\mathbb{G}_{0}= (R(0))^{1}$. Since $\mathbb{F}_{\centerdot}$ and $\mathbb{G}_{\centerdot}$ are both Koszul resolutions, we set the grading as $\mathbb{G}_{i}\cong (R(-2i))^{\binom{n}{i}}$ and $\mathbb{F}_{i}\cong (R(-n-i))^{\binom{n}{i}}$. Now we see that $i\neq n$ implies $-2i\neq -n -i$. Hence the image of $\phi_{i}$ for $i\neq n$ is contained in the maximal ideal. We have $\mathbb{F}_{i}=\mathbb{G}_{i}$ only for $i=n$. If we can show that the map $\phi_{n}$ is not the zero map, then this is the only free part of the resolution that can be cancelled to obtain the minimal resolution. \begin{lemma} The map $\phi_{n}$ is not the zero map.
\end{lemma} \proof We refer to \cite{hip}. If $\phi_{n}$ were the zero map, then $\phi_{0}(R)\subseteq \delta_{1}(\mathbb{G}_{1})$, where $\delta_{\centerdot}$ denotes the differential of $\mathbb{G}_{\centerdot}$. The image of $\delta_{1}$ is the ideal $\langle g_{1},\cdots,g_{n}\rangle$, which does not contain $\phi_{0}(1)=\Delta$. Hence $\phi_{n}$ is not the zero map.\qed The above discussion proves the following lemma. \begin{lemma}\label{resgennorth} A minimal graded free resolution of $\langle g_{1},\cdots,g_{n},\Delta\rangle$ is given by $\mathbb{M_{\centerdot}}$, where $\mathbb{M}_{i}\cong (R(-n-i+1))^{\binom{n}{i-1}}\oplus (R(-2i))^{\binom{n}{i}}$ for $0<i<n$, $\mathbb{M}_{0}\cong R(0)$ and $\mathbb{M}_{n}\cong (R(-2n+1))^{n}$. \end{lemma} \noindent\textbf{(Proof of Theorem \ref{bettiJ}.)} We now find the Betti numbers of the ideal $\langle g_{1},\cdots, g_{n+1}\rangle$ by constructing the mapping cone between the resolution $\mathbb{M_{\centerdot}}$ and the resolution $\mathbb{G_{\centerdot}}$ of $\langle g_{1},\cdots, g_{n}\rangle$. The connecting map $\psi_{0}$ is multiplication by $g_{n+1}$. Hence, to make it a degree zero map, we set $\mathbb{G}_{0}= (R(2))^{1}$ and $\mathbb{G}_{i}\cong (R(2-2i))^{\binom{n}{i}}$ for $i>0$. Here we note that $2-2i\neq -2i$ and $-n-i+1\neq 2-2i$ for $1\leq i\leq n$. Hence, for each $1\leq i\leq n$, the image of $\psi_{i}$ is contained in the maximal ideal. This shows that the resolution obtained by the mapping cone between $\mathbb{M_{\centerdot}}$ and $\mathbb{G_{\centerdot}}$ is minimal. Hence the total Betti numbers of $\mathcal{J}$ are: \noindent $\beta_{0}=1, \beta_{1}=n+1$; \\ $\beta_{n+1}= n$; \\ $\beta_{k+1}=\binom{n}{k}+\binom{n}{k-1}+\binom{n}{k+1}$ for $1\leq k < n$. \qed \begin{corollary} The ring $R/\mathcal{I}$ is Cohen-Macaulay and the ring $\hat{R}/\mathcal{J}$ is not Cohen-Macaulay.
\end{corollary} \proof The polynomial ring $R$ is Cohen-Macaulay and $g_{1},\ldots,g_{n}$ is a regular sequence; therefore the ring $R/\mathcal{I}$ is Cohen-Macaulay. We have seen that $\textrm{projdim}_{\widehat{R}}\widehat{R}/\mathcal{J}=n+1$. Therefore, by the Auslander-Buchsbaum formula, $\textrm{depth}_{\widehat{R}}\widehat{R}/\mathcal{J}=n(n+1)+n-(n+1)=n^{2}+n-1$. We have proved in Lemma 5.5 in \cite{sstprime} that $\langle y_{1},\ldots,y_{n}\rangle$ is a minimal prime over $\mathcal{J}$. Therefore, $\textrm{dim}\,\widehat{R}/\mathcal{J}\geq \textrm{dim}\,\widehat{R}/\langle y_{1},\ldots,y_{n}\rangle=n^{2}+n $; hence the ring $\widehat{R}/\mathcal{J}$ is not Cohen-Macaulay.\qed \section{$I_{1}(XY)$, where $X$ is an $m\times mn$ generic matrix and $Y$ is an $mn\times n$ generic matrix} Finally, we consider the case when $X=(x_{ij})_{m\times mn}$ is a generic matrix of size $m\times mn$ and $Y=(y_{ij})_{mn\times n}$ is a generic matrix of size $mn\times n$. We define $\mathfrak{I} = I_{1}(XY)$. Let $g_{ij} = \sum_{t=1}^{mn}x_{it}y_{tj}$, with $1\leq i\leq m, \, 1\leq j\leq n$. Then, $\mathfrak{I} = \langle \{g_{ij} \mid 1\leq i\leq m, \, 1\leq j\leq n\}\rangle$. In this section we construct a Gr\"{o}bner basis for the ideal $\mathfrak{I}$ with respect to a suitable monomial order and use it to show that the generators $g_{ij}$, with $1\leq i\leq m$, $1\leq j\leq n$, form a regular sequence. We first fix some notation before proving the main results. \begin{itemize} \item $X=\begin{pmatrix}A_{1}&\cdots & A_{n} \end{pmatrix}$, where $A_{s}=\begin{pmatrix} x_{1(m(s-1)+1)}&\cdots & x_{1(ms)}\\ \vdots & \ddots & \vdots\\ x_{m(m(s-1)+1)}&\cdots & x_{m(ms)}\\ \end{pmatrix} $ is an $m\times m$ matrix for every $1\leq s\leq n$. \item $[X]_{s}=\begin{pmatrix} A_{s}& A_{1}&\cdots &\widehat{A_{s}}&\cdots & A_{n} \end{pmatrix}$, for every $1\leq s\leq n$.
\item $[Y]_{s}=\begin{pmatrix} y_{(m(s-1)+1)s}\\ \vdots\\ y_{(ms)s}\\ y_{1s}\\ \vdots\\ y_{(mn)s} \end{pmatrix}$, for every $1\leq s\leq n$, where the block $y_{(m(s-1)+1)s},\ldots,y_{(ms)s}$ is omitted from the lower part of the column, so that the rows of $[Y]_{s}$ correspond to the columns of $[X]_{s}$. \end{itemize} We will use Theorem \ref{gbtheorem} to construct a Gr\"{o}bner basis for the ideal $\mathfrak{I}$. An important reason for considering this class of ideals is that it provides some nice examples of transversal intersection of ideals. Two results that will be useful for our purpose are the following: \begin{lemma} \label{trans} Let $>$ be a monomial ordering on $R$. Let $I$ and $J$ be ideals in $R$, and let $m(I)$ and $m(J)$ denote the unique minimal generating sets of their leading ideals ${\rm \mbox{Lt}}(I)$ and ${\rm \mbox{Lt}}(J)$ respectively. Then, $I\cap J = IJ$ if the set of variables occurring in $m(I)$ is disjoint from the set of variables occurring in $m(J)$. \end{lemma} \proof See Lemma 3.6 in \cite{sstsum}.\qed \begin{lemma}\label{tensorprod} Let $I$ and $J$ be graded ideals in a graded ring $R$, such that $I\cap J=I\cdot J$. Suppose that $\mathbb{F}_{\centerdot}$ and $\mathbb{G}_{\centerdot}$ are minimal free resolutions of $I$ and $J$ respectively. Then $\mathbb{F}_{\centerdot}\otimes \mathbb{G}_{\centerdot}$ is a minimal free resolution for the graded ideal $I+J$. \end{lemma} \proof See Lemma 3.7 in \cite{sstsum}.\qed \begin{theorem}\label{gmain} Let us choose the lexicographic monomial order on $R$ induced by $ y_{11}>y_{21}>\cdots >y_{(mn)1}> y_{(m+1)2}>y_{(m+2)2}>\cdots >y_{(2m)2}>y_{12}>\cdots > y_{(mn)2}>\cdots > y_{(m(n-1)+1)n}>y_{(m(n-1)+2)n}>\cdots >y_{(mn)n}>y_{1n}>\cdots > y_{(m(n-1))n}>x_{11}> x_{12}> \cdots >x_{m(mn)}$. Let $\mathcal{G}_{s}$ be the reduced Gr\"{o}bner basis of the ideal $I_{1}([X]_{s}[Y]_{s})$ for $1\leq s\leq n$, obtained by Theorem \ref{gbtheorem}. Then $\mathfrak{G}_{t}=\cup_{s=1}^{t}\mathcal{G}_{s}$ is a reduced Gr\"{o}bner basis for the ideal $P_{t}=\sum_{s=1}^{t}I_{1}([X]_{s}[Y]_{s})$ for $1\leq t\leq n$.
In particular, $\mathfrak{G}_{n}$ is a reduced Gr\"{o}bner basis for the ideal $P_{n}=\mathfrak{I}=I_{1}(XY)$. \end{theorem} \proof We have $P_{t}=\sum_{s=1}^{t}I_{1}([X]_{s}[Y]_{s})$, and we observe that if $p\in \mathcal{G}_{s}$ and $q\in \mathcal{G}_{t}$ for $1\leq s<t\leq n$, then $\gcd({\rm \mbox{Lt}}(p),{\rm \mbox{Lt}}(q))=1$. Therefore the $S$-polynomial of $p,q$ reduces to zero upon division by $\mathfrak{G}_{t}$.\qed \begin{theorem} Let us denote $P_{t}=\sum_{s=1}^{t}I_{1}([X]_{s}[Y]_{s})$, for $1\leq t\leq n-1$. Then $P_{t}\cap I_{1}([X]_{t+1}[Y]_{t+1})= P_{t}\cdot I_{1}([X]_{t+1}[Y]_{t+1})$. Hence the elements $g_{ij} = \sum_{t=1}^{mn}x_{it}y_{tj}$, $1\leq i\leq m$, $1\leq j\leq n$, form a regular sequence, and the Koszul complex resolves $R/\mathfrak{I}$ as an $R$-module minimally. \end{theorem} \proof If $p\in \mathcal{G}_{s}$ and $q\in \mathcal{G}_{t}$ for $1\leq s<t\leq n$, then $\gcd({\rm \mbox{Lt}}(p),{\rm \mbox{Lt}}(q))=1$; therefore, by Theorem \ref{gmain} and Lemma \ref{trans}, we have $P_{t}\cap I_{1}([X]_{t+1}[Y]_{t+1})= P_{t}\cdot I_{1}([X]_{t+1}[Y]_{t+1})$. By Theorem \ref{regseqI} the generators of the ideal $P_{1}$ form a regular sequence, and the generators of the ideal $I_{1}([X]_{s}[Y]_{s})$ also form a regular sequence for each $1\leq s\leq n$. Hence the Koszul complex resolves $R/P_{1}$ and $R/I_{1}([X]_{s}[Y]_{s})$ minimally. Now $P_{t}\cap I_{1}([X]_{t+1}[Y]_{t+1})= P_{t}\cdot I_{1}([X]_{t+1}[Y]_{t+1})$; hence, by applying Lemma \ref{tensorprod}, we conclude that the Koszul complex resolves $R/\mathfrak{I}$ minimally. \qed \end{document}
\begin{document} \title{The antipode of a dual quasi-Hopf algebra with nonzero integrals is bijective} \dedicatory{Dedicated to Fred Van Oystaeyen for his sixtieth birthday} \author{M. Beattie}\thanks{The first author's research was supported by an NSERC Discovery Grant.} \address{Department of Mathematics and Computer Science, Mount Allison University, Sackville, New Brunswick, Canada E4L 1E6} \email{[email protected]} \author{M.C. Iovanov}\thanks{The second author was partially supported by the contract nr. 24/28.09.07 with UEFISCU ``Groups, quantum groups, corings and representation theory" of CNCIS, PN II (ID\_1002)} \address{State University of New York, 244 Mathematics Building, Buffalo, NY 14260-2900, USA, and University of Bucharest, Fac. Matematica \& Informatica, Str. Academiei nr. 14, Bucharest 010014, Romania} \email{[email protected]} \author{\c{S}. Raianu} \address{Mathematics Department, California State University, Dominguez Hills, 1000 E Victoria St, Carson CA 90747, USA} \email{[email protected]} \begin{abstract} For $A$ a Hopf algebra of arbitrary dimension over a field $K$, it is well known that if $A$ has nonzero integrals, or, in other words, if the coalgebra $A$ is co-Frobenius, then the space of integrals is one-dimensional and the antipode of $A$ is bijective. Bulacu and Caenepeel recently showed that if $H$ is a dual quasi-Hopf algebra with nonzero integrals, then the space of integrals is one-dimensional, and the antipode is injective. In this short note we show that the antipode is bijective. \end{abstract} \section{Introduction} The definition of quasi-Hopf algebras and the dual notion of dual quasi-Hopf algebras is motivated by quantum physics and dates back to work of Drinfel'd \cite{D}.
The theory of integrals for quasi-Hopf algebras was studied in \cite{pvo, hn, bc}. In \cite{bc}, Bulacu and Caenepeel showed that a dual quasi-Hopf algebra is co-Frobenius as a coalgebra if and only if it has a nonzero integral. In this case, the space of integrals is one-dimensional and the antipode is injective, so that for finite-dimensional dual quasi-Hopf algebras the antipode is bijective. In this note, we use the ideas from a new short proof of the bijectivity of the antipode for Hopf algebras by the second author \cite{i} to show that the antipode of a dual quasi-Hopf algebra with nonzero integrals is bijective, thus extending the classical result of Radford \cite{R} for Hopf algebras. In this paper we prove \begin{theorem}\label{TH} Let $H$ be a co-Frobenius dual quasi-Hopf algebra, equivalently, a dual quasi-Hopf algebra having nonzero integrals. Then the antipode of $H$ is bijective. \end{theorem} \section{Preliminaries} In this section we briefly review the definition of a dual quasi-Hopf algebra over a field $K$. We refer the reader to \cite{abe, dnr, sw} for the basic definitions and properties of coalgebras and their comodules and of Hopf algebras. For the definition of a dual quasi-Hopf algebra we follow \cite[Section 2.4]{M}. \begin{de} A dual quasi-bialgebra $H$ over $K$ is a coassociative coalgebra $(H, \Delta, \varepsilon)$ together with a unit $u:K\rightarrow H$, $u(1)=1$, and a not necessarily associative multiplication $M:H\otimes H\rightarrow H$. The maps $u$ and $M$ are coalgebra maps. We write $ab$ for $M(a \otimes b)$. As well, there is an element $\varphi\in(H\otimes H\otimes H)^*$, called the {\it reassociator}, which is invertible with respect to the convolution algebra structure of $(H\otimes H\otimes H)^*$.
The following relations must hold for all $h,g,f,e\in H$: \begin{eqnarray} h_1(g_1f_1)\varphi(h_2,g_2,f_2) & = & \varphi(h_1,g_1,f_1)(h_2g_2)f_2 \label{e1}\\ 1h=h1=h \label{e2}\\ \varphi(h_1,g_1,f_1e_1)\varphi(h_2g_2,f_2,e_2) & = & \varphi(g_1,f_1,e_1)\varphi(h_1,g_2f_2,e_2)\varphi(h_2,g_3,f_3) \label{e3}\\ \varphi(h,1,g) & = & \varepsilon(h)\varepsilon(g) \label{e4} \end{eqnarray} \end{de} Here we use Sweedler's sigma notation with the summation symbol omitted. \begin{de} A dual quasi-bialgebra $H$ is called a dual quasi-Hopf algebra if there exists an antimorphism $S$ of the coalgebra $H$ and elements $\alpha,\beta\in H^*$ such that for all $h\in H$: \begin{eqnarray} S(h_1)\alpha(h_2)h_3=\alpha(h)1, & & h_1\beta(h_2)S(h_3)=\beta(h)1 \label{e5}\\ \varphi(h_1\beta(h_2),S(h_3),\alpha(h_4)h_5) & = & \varphi^{-1}(S(h_1),\alpha(h_2)h_3,\beta(h_4)S(h_5)) =\varepsilon(h). \label{e6} \end{eqnarray} \end{de} Let $H$ be a dual quasi-Hopf algebra. As in the Hopf algebra case, a left integral on $H$ is an element $T\in H^*$ such that $h^*T=h^*(1)T$ for all $h^* \in H^*$; the space of left integrals is denoted by $\int_l$ and by \cite[Proposition 4.7]{bc} has dimension 0 or 1. Right integrals are defined analogously, with the space of right integrals denoted by $\int_r$. Suppose $0 \neq T \in \int_l$. It is easily seen that $\int_l$ is a two-sided ideal of the algebra $H^*$, and $KT\subseteq Rat(H^*)$ with right comultiplication given by $T\mapsto T\otimes 1$. Since for co-Frobenius coalgebras $Rat(H^*)=Rat({}_{H^*}H^*)=Rat(H^*_{H^*})$, $KT$ must have left comultiplication $T\mapsto a\otimes T$. By coassociativity, $a$ is a grouplike element, called the distinguished grouplike of $H$. Then, for all $h^* \in H^*$, \begin{eqnarray} Th^* & = & h^*(a)T.
\label{e7.5} \end{eqnarray} From \cite[Proposition 4.2]{bc}, the function $\theta^*:\int_l\otimes H\rightarrow Rat({}_{H^*}H^*)$, \begin{eqnarray} \theta^*(T\otimes h)= \sigma(S(h_5)\otimes \alpha(h_6)h_7)(S(h_4) \rightharpoonup T ) \sigma^{-1}(S(h_3)\otimes\beta(S(h_2))S^2(h_1)) \label{e7} \end{eqnarray} is an isomorphism of right $H$-comodules, where $\sigma :H\otimes H\rightarrow H^*$ is defined by $\sigma(h\otimes g)(f)=\varphi (f,h,g)$, $\sigma^{-1}$ is the convolution inverse of $\sigma$, and, as usual, $(h \rightharpoonup T)(g) = T(gh)$. \section{Proof of the theorem} Let $H$ be a dual quasi-Hopf algebra with $0 \neq T \in \int_l$. As in \cite{i}, for each right $H$-comodule $(M,\rho)$, we denote by ${}^aM$ the left $H$-comodule structure on $M$ defined by $m_{-1}^a\otimes m_{0}^a=aS(m_1)\otimes m_0$, where $\rho(m)=m_0\otimes m_1$. Denote the induced right $H^*$-module structure on ${}^aM$ by $m\cdot^a h^*=h^*(m_{-1}^a)m_{0}^a = h^*(aS(m_1))m_0$. By \cite[Corollary 4.4]{bc} the antipode $S$ of $H$ is injective, and therefore has a left inverse $S^l$. Then, for $\sigma$ as above, we have the following analogue of \cite[Proposition 2.5]{i}: \begin{proposition}\label{surjmap} The map $p:{}^aH\to Rat(H^*)$ defined by \begin{eqnarray} p(h) & = & \sigma(S(S^l(h_3))\otimes\alpha(S^l(h_2))S^l(h_1))*(h_4\rightharpoonup T) *\sigma^{-1}(h_5\beta(h_6)\otimes S(h_7)) \label{e8} \end{eqnarray} is a surjective morphism of left $H$-comodules. \end{proposition} \begin{proof} Let $\Psi:=\sigma(S(S^l(h_3))\otimes\alpha(S^l(h_2))S^l(h_1))$.
Then for $c^*\in H^*$ and $g\in H$: \begin{eqnarray*} (p(h)*c^*)(g)&=&p(h)(g_1)c^*(g_2)\\ (\ref{e8})&=&\Psi(g_1)T(g_2h_4)\sigma^{-1}(h_5\beta(h_6)\otimes S(h_7))(g_3)c^*(g_4)\\ &=&\Psi(g_1)T(g_2h_4)\varphi^{-1}(g_3,h_5\beta(h_6),S(h_7))c^*(g_4)\\ &=&\Psi(g_1)T(g_2h_4)\varphi^{-1}(g_3,h_5,S(h_7))c^*(g_4\beta(h_6))\\ (\ref{e5})&=&\Psi(g_1)T(g_2h_4)\varphi^{-1}(g_3,h_5,S(h_9))c^*(g_4(h_6\beta(h_7)S(h_8)))\\ &=&\Psi(g_1)T(g_2h_4)c^*(\varphi^{-1}(g_3,h_5,S(h_9))g_4(h_6\beta(h_7)S(h_8)))\\ &=&\Psi(g_1)T(g_2h_4)\beta(h_7)c^*(\varphi^{-1}(g_3,h_5,S(h_9))g_4(h_6S(h_8)))\\ (\ref{e1})&=&\Psi(g_1)T(g_2h_4)\beta(h_7)c^*((g_3h_5)S(h_9)\varphi^{-1}(g_4,h_6,S(h_8)))\\ &=&\Psi(g_1)T(g_2h_4)(S(h_9) \rightharpoonup c^* )(g_3h_5)\beta(h_7)\varphi^{-1}(g_4,h_6,S(h_8))\\ &=&\Psi(g_1)(T*(S(h_8)\rightharpoonup c^*))(g_2h_4)\beta(h_6)\varphi^{-1}(g_3,h_5,S(h_7))\\ (\ref{e7.5})&=&\Psi(g_1)(S(h_8) \rightharpoonup c^*)(a)T(g_2h_4)\beta(h_6)\varphi^{-1}(g_3,h_5,S(h_7))\\ &=&\Psi(g_1)T(g_2h_4)\sigma^{-1}(h_5\beta(h_6)\otimes S(h_7))(g_3)c^*(aS(h_8))\\ (\ref{e8})&=&p(h_1)(g)c^*(aS(h_2))\\ &=&p(c^*(aS(h_2))h_1)(g)\\ &=&p(c^*(h^a_{-1})h^a_{0})(g)\\ &=&p(h\cdot^a c^*)(g). \end{eqnarray*} Thus $p$ is left $H$-colinear. Finally, we note that $p\circ S = \theta^*(T\otimes -)$, where $\theta^*$ is the isomorphism from (\ref{e7}), so that $p$ is surjective. \end{proof} Let $c$ be a grouplike element of $H$. From \cite[p.580]{bc}, $c$ is invertible with inverse $S(c)$. We will show that left multiplication by $c$ has an inverse too.
\par Let $\theta_c\in{\rm End}(H)$ be defined by $\theta_c(h)=ch$ and define the coinner automorphisms $q_c$ and $r_c= q_c^{-1} \in{\rm End}(H)$ by: $$q_c(h)= \varphi^{-1}(c,S(c),h_1)h_2\varphi(c,S(c),h_3) \mbox{ and } r_c(h)= \varphi(c,S(c),h_1)h_2\varphi^{-1}(c,S(c),h_3).$$ \begin{lemma}\label{theta} For any grouplike element $c$ and $\theta_c, r_c, q_c$ as above, $\theta_c\circ\theta_{c^{-1}}=r_c$, and thus $\theta_c$ is bijective with inverse $\theta_c^{-1}=\theta_{c^{-1}}\circ q_c=q_{c^{-1}}\circ\theta_{c^{-1}}$. \end{lemma} \begin{proof} Using (\ref{e1}) and the fact that $c^{-1}=S(c)$, we see that $$\theta_c\circ\theta_{c^{-1}}(h) = c(c^{-1}h)= \varphi(c,S(c),h_1)(cS(c))h_2\varphi^{-1}(c,S(c),h_3) = r_c(h).$$ The same formula for $c^{-1}=S(c)$ yields $\theta_{c^{-1}}\circ\theta_c=r_{c^{-1}}$ and the statement then follows directly. \end{proof} We can now prove our main result.\\[.5cm] {\it Proof of Theorem \ref{TH}.}\\ We only need to prove the surjectivity. The proof goes along the lines of the proof of \cite[Theorem 2.6]{i}, but with the difference that here the antipode is not necessarily an anti-morphism of algebras. \par Let $\pi$ be the composition map ${}^aH\stackrel{p}{\rightarrow}Rat(H^*_{H^*})\stackrel{\sim}{\rightarrow}H\otimes \int_r\simeq H$, where the last two isomorphisms follow by left-right symmetry of the results of \cite{bc}. Since $H$ is a co-Frobenius coalgebra, ${^H\!{H}}$ is projective by \cite[Theorem 1.3]{GTN} or \cite[Theorem 4.5, (x)]{bc}, and as $\pi$ is surjective, there is a morphism of left $H$-comodules $\lambda:H\rightarrow {}^aH$ such that $\pi\lambda={\rm Id}_H$.
We then have $$ aS(\lambda(h)_2)\otimes\lambda(h)_1 = \lambda(h)^a_{-1}\otimes \lambda(h)^a_{0}=h_1\otimes\lambda(h_2). $$ Applying ${\rm Id}\otimes \varepsilon\pi$, we get $aS(\varepsilon\pi(\lambda(h)_1)\lambda(h)_2)=h$ for any $h \in H$. Thus $\theta_a \circ S$ is surjective, and since $\theta_a$ is bijective by Lemma \ref{theta}, $S$ is surjective also. $\square$ \begin{thebibliography}{MMM} \bibitem{abe} E. Abe, {\it Hopf Algebras}, Cambridge Univ. Press, 1977. \bibitem{bc} D. Bulacu, S. Caenepeel, Integrals for (dual) quasi-Hopf algebras. Applications, J. Algebra {\bf 266} (2003), no. 2, 552-583. \bibitem{dnr} S. D\u{a}sc\u{a}lescu, C. N\u{a}st\u{a}sescu, \c{S}. Raianu, {\it Hopf Algebras: an Introduction}, Monographs and Textbooks in Pure and Applied Mathematics {\bf 235}, Marcel Dekker, Inc., New York, 2001. \bibitem{D} V.G. Drinfel'd, Quasi-Hopf Algebras, Leningrad Math. J. {\bf 1} (1990) 1419-1457. \bibitem{GTN} J. Gomez-Torrecillas, C. N\u{a}st\u{a}sescu, Quasi-co-Frobenius coalgebras, J. Algebra {\bf 174} (1995), no. 3, 909-923. \bibitem{hn} F. Hausser, F. Nill, Integral theory for quasi-Hopf algebras, arXiv:math/9904164v2. \bibitem{i} M.C. Iovanov, Generalized Frobenius algebras and the theory of Hopf algebras, preprint, arXiv:0803.0775. \bibitem{M} S. Majid, {\it Foundations of Quantum Group Theory}, Cambridge University Press, Cambridge, 1995. \bibitem{pvo} F. Panaite, F. van Oystaeyen, Existence of integrals for finite dimensional Hopf algebras, Bull. Belg. Math. Soc. Simon Stevin {\bf 7} (2000) 261-264. \bibitem{R} D.E. Radford, Finiteness conditions for a Hopf algebra with a nonzero integral, J. Algebra {\bf 46} (1977), no. 1, 189-195. \bibitem{sw} M.E. Sweedler, {\it Hopf Algebras}, Benjamin, New York, 1969. \end{thebibliography} \end{document}
\begin{document} \title{A fractional Hadamard formula and applications} \author[] {Sidy Moctar Djitte${}^{1,2}$, Mouhamed Moustapha Fall${}^1$, Tobias Weth${}^2$} \address{${}^1$African Institute for Mathematical Sciences in Senegal (AIMS Senegal), KM 2, Route de Joal, B.P. 14 18. Mbour, S\'en\'egal.} \address{${}^2$Goethe-Universit\"{a}t Frankfurt, Institut f\"{u}r Mathematik. Robert-Mayer-Str. 10, D-60629 Frankfurt, Germany.} \email{[email protected], [email protected]} \email{[email protected]} \email{[email protected]} \begin{abstract} \noindent We derive a shape derivative formula for the family of principal Dirichlet eigenvalues $\l_s(\Omega)$ of the fractional Laplacian $(-\Delta)^s$ associated with bounded open sets $\Omega \subset \mathbb{R}^N$ of class $C^{1,1}$. This extends, with the help of a new approach, a result in \cite{dal} which was restricted to the case $s=\frac{1}{2}$. As an application, we consider the maximization problem for $\lambda_s(\O)$ among annular-shaped domains of fixed volume of the type $B\setminus \overline B'$, where $B$ is a fixed ball and $B'$ is a ball whose position is varied within $B$. We prove that $\lambda_s(B\setminus \overline B')$ is maximal when the two balls are concentric. Our approach also allows us to derive similar results for the fractional torsional rigidity. More generally, we will characterize one-sided shape derivatives for best constants of a family of subcritical fractional Sobolev embeddings. \end{abstract} \section{Introduction}\label{S:intro} Let $s\in (0,1)$ and $\Omega \subset \mathbb{R}^N$ be a bounded open set.
The present paper is devoted to the study of best constants $\lambda_{s,p}(\O)$ in the family of subcritical Sobolev inequalities \begin{equation} \label{eq:sobolev-ineq-main} \lambda_{s,p}(\O) \|u\|_{L^p(\Omega)}^2 \le [u]_{s}^2 \qquad \text{for all $u \in {\mathcal H}^s_0(\Omega)$,} \end{equation} where $p\in [1, \frac{2N}{N-2s})$ if $2s<N$ and $p\in [1, \infty)$ if $2s\geq N=1$. Here, the Sobolev space ${\mathcal H}^s_0(\Omega)$ is given as the completion of $C^\infty_c(\Omega)$ with respect to the norm $[\,\cdot\,]_{s}$ defined by \begin{equation} \label{eq:def-bNs} [u]_{s}^2= \frac{ c_{N,s}}{2}\int_{\mathbb{R}^{N}}\! \int_{\mathbb{R}^{N}}\!\frac{(u(x)-u(y))^2}{|x-y|^{N+2s}}\,dx\, dy \qquad \text{with} \quad c_{N,s}= \pi^{-\frac{N}{2}}s 4^s\frac{\Gamma(\frac{N}{2}+s)}{\Gamma(1-s)}. \end{equation} The normalization constant $c_{N,s}$ is chosen such that $[u]_{s}^2=\int_{\mathbb{R}^N}|\xi|^{2s}|\hat{u}(\xi)|^2d\xi$ for $u \in {\mathcal H}^s_0(\Omega)$, where $\hat{u}$ denotes the Fourier transform of $u$. The best (i.e., largest possible) constant in (\ref{eq:sobolev-ineq-main}) is given by \begin{equation} \label{eq:def-lambda-sp} \lambda_{s,p}(\O):=\inf \left\{[u]_s^2\::\, u\in {\mathcal H}^s_0(\Omega),\quad \|u\|_{L^p(\O)}=1 \right\}. \end{equation} As a consequence of the subcriticality assumption on $p$ and the boundedness of $\Omega$, the space ${\mathcal H}^s_0(\Omega)$ embeds compactly into $L^p(\O)$. Therefore a direct minimization argument shows that $\lambda_{s,p}(\O)$ admits a nonnegative minimizer $u\in {\mathcal H}^s_0(\Omega)$ with $\|u\|_{L^p(\Omega)}=1$.
Moreover, every such minimizer solves, in the weak sense, the semilinear problem \begin{align}\label{eq:1.1} (-\D)^s u = \l_{s,p}(\O)u^{p-1}\quad\text{ in $\O$},\qquad \quad u = 0\quad\text{ in $\mathbb{R}^N\setminus\O$,} \end{align} where $ (-\D)^s$ stands for the fractional Laplacian. It therefore follows from regularity theory and the strong maximum principle for $ (-\D)^s$ that $u$ is strictly positive in $\Omega$, see Lemma~\ref{reg-prop-minimizers} below. We recall that, for functions $\vp\in C^{1,1}_c(\mathbb{R}^N)$, the fractional Laplacian is given by $$ (-\D)^s\vp(x)=c_{N,s}\,PV \!\!\int_{\mathbb{R}^N} \frac{\vp(x)-\vp(y)}{|x-y|^{N+2s}}\, dy=\frac{c_{N,s}}{2}\int_{\mathbb{R}^N}\frac{2\vp(x)-\vp(x+y)-\vp(x-y)}{|y|^{N+2s}}\, dy. $$ Of particular interest are the cases $p=1$ and $p=2$, which correspond to the fractional torsion problem \begin{align}\label{eq:1.1-ft} (-\D)^s u = \l_{s,1}(\O)\quad\text{ in $\O$},\qquad \quad u = 0\quad\text{ in $\mathbb{R}^N\setminus\O$,} \end{align} and the eigenvalue problem \begin{align}\label{eq:1.1-ev} (-\D)^s u = \l_{s,2}(\O)u\quad\text{ in $\O$},\qquad \quad u = 0\quad\text{ in $\mathbb{R}^N\setminus\O$,} \end{align} associated with the first Dirichlet eigenvalue of the fractional Laplacian, respectively. In these cases, the minimization problem for $\l_{s,p}(\O)$ in \eqref{eq:def-lambda-sp} possesses a unique positive minimizer. Indeed, it is a well-known consequence of the fractional maximum principle that \eqref{eq:1.1-ft} admits a unique solution, and that \eqref{eq:1.1-ev} has a unique positive eigenfunction with $\|u\|_{L^2(\Omega)}=1$. Incidentally, the uniqueness of positive minimizers extends to the full range $1 \le p \le 2$, as we shall show in Lemma~\ref{uniqueness-extended} in the appendix of this paper.
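For the fractional torsion problem, an explicit example is available in the unit ball; we include it here only as an illustration (it is a well-known computation, not used in the sequel). The function
$$
u(x)=\gamma_{N,s}\,(1-|x|^2)_+^s \qquad \text{with}\quad \gamma_{N,s}=\frac{\Gamma(\frac{N}{2})}{4^s\,\Gamma(\frac{N}{2}+s)\,\Gamma(1+s)}
$$
satisfies $(-\D)^s u = 1$ in $B_1(0)$ and $u=0$ in $\mathbb{R}^N\setminus B_1(0)$; for $N=1$ and $s=\frac{1}{2}$ this reduces to the classical identity $(-\D)^{1/2}\sqrt{1-x^2}=1$ on $(-1,1)$. Up to normalization in $L^1$, this function is the unique positive minimizer associated with \eqref{eq:1.1-ft} for $\O=B_1(0)$.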
Our first goal in this paper is to analyze the dependence of the best constants on the underlying domain $\Omega$. For this we shall derive a formula for a one-sided shape derivative of the map $\O\mapsto \l_{s,p}(\O)$. We assume from now on that $\Omega \subset \mathbb{R}^N$ is a bounded open set of class $C^{1,1}$, and we consider a family of deformations $\{\Phi_\varepsilon\}_{\varepsilon \in (-1,1)}$ with the following properties: \begin{equation}\label{eq:def-diffeom} \begin{aligned} &\text{$\Phi_\varepsilon \in C^{1,1}(\mathbb{R}^N; \mathbb{R}^N)$ for $\varepsilon \in (-1,1)$, $\Phi_0= \id_{\mathbb{R}^N}$, and}\\ &\text{the map $(-1,1) \to C^{0,1}(\mathbb{R}^N,\mathbb{R}^N)$, $\varepsilon \mapsto \Phi_\varepsilon$ is of class $C^2$.} \end{aligned} \end{equation} {\color{red} We note that \eqref{eq:def-diffeom} implies that $\Phi_\e:\mathbb{R}^N\to \mathbb{R}^N$ is a global diffeomorphism if $|\e|$ is small enough, see e.g. \cite[Chapter 4.1]{delfour-zolesio}. To clarify, we stress that we only need the $C^2$-dependence of $\Phi_\varepsilon$ on $\varepsilon$ with respect to Lipschitz norms, while $\Phi_\varepsilon$ is assumed to be a $C^{1,1}$-function for $\varepsilon \in (-1,1)$ to guarantee the $C^{1,1}$-regularity of the perturbed domains $\Phi_\e(\O)$.} From the variational characterization of $\l_{s,p}(\O)$ it is not difficult to see that the map $\e\mapsto \l_{s,p}(\Phi_\e(\O))$ is continuous. However, since $\l_{s,p}(\O)$ may not have a unique positive minimizer, we cannot expect this map to be differentiable. We therefore rely on determining the right derivative of $\e\mapsto \l_{s,p}(\Phi_\e(\O))$, from which we derive differentiability whenever $\l_{s,p}(\O)$ admits a unique positive minimizer, thereby extending the classical Hadamard shape derivative formula for the first Dirichlet eigenvalue of the Laplacian $-\Delta$.
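As a simple example of an admissible family of deformations (included here for illustration; the vector field $V$ below is arbitrary), one may take
$$
\Phi_\e(x)= x+ \e V(x), \qquad \e \in (-1,1),
$$
for a fixed $V\in C^{1,1}(\mathbb{R}^N;\mathbb{R}^N)$. Then $\Phi_\e\in C^{1,1}(\mathbb{R}^N;\mathbb{R}^N)$, $\Phi_0=\id_{\mathbb{R}^N}$, and $\e\mapsto \Phi_\e$ is affine, hence of class $C^2$ as a map into $C^{0,1}(\mathbb{R}^N,\mathbb{R}^N)$, so that \eqref{eq:def-diffeom} holds with $\partial_\e\big|_{\e=0}\Phi_\e = V$.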
Throughout this paper, we consider a fixed function $\delta \in C^{1,1}(\mathbb{R}^N)$ which coincides with the \textit{signed} distance function $ {\rm dist}(\cdot,\mathbb{R}^{N}\setminus \Omega)-{\rm dist}(\cdot, \Omega)$ in a neighborhood of the boundary $\partial\O$. We note here that, since we assume that $\Omega$ is of class $C^{1,1}$, the signed distance function is also of class $C^{1,1}$ in a neighborhood of $\partial \O$, but not globally on $\mathbb{R}^N$. We also suppose that $\delta$ is chosen with the property that $\delta$ is positive in $\O$ and negative in $\mathbb{R}^N\setminus \overline\O$, as is the case for the signed distance function. Our first main result is the following. \begin{theorem}\label{th:Shape-deriv} Let $\l_{s,p}(\O)$ be given by \eqref{eq:def-lambda-sp} and consider a family of deformations $\Phi_\varepsilon$ satisfying \eqref{eq:def-diffeom}. Then the map $\e\mapsto \th(\e):=\l_{s,p}(\Phi_\e(\O))$ is right differentiable at $\e=0$. Moreover, \begin{equation}\label{eq:right-der-main-th} \partial_+\th(0)=\min \left\{ \G(1+s)^2 \int_{\partial\O}(u/\delta ^s)^2 X \cdot \nu \,dx\::\, u\in \mathcal{H} \right\}, \end{equation} where $\nu$ denotes the interior unit normal on $\partial \Omega$, $\calH$ is the set of positive minimizers for $ \l_{s,p}(\O)$, and $X := \partial_\varepsilon \big|_{\e=0} \Phi_\varepsilon$. \end{theorem} Here the function $u/\delta ^s$ is defined on $\partial \Omega$ as a limit. Namely, for $x_0\in \partial \O$, the limit \begin{equation} \label{eq:limit-def-fractional-normal} \frac{u}{\delta ^s}(x_0)=\lim_{\stackrel{x\to x_0 }{x\in \O}}\frac{u}{\delta ^s}(x) \end{equation} exists, as the function $u/\delta ^s$ extends to a function in $C^\a(\overline\O)$ for some $\a>0$, see \cite{RX}.
In addition, the function $\delta ^{1-s}\nabla u$ also admits a Hölder continuous extension on $\overline \Omega$ satisfying $\delta ^{1-s}\nabla u\cdot \nu=s u/\delta ^s$ on $\partial\O$, see \cite{FS-2019}. As a consequence, the expression $u/\delta ^s$, restricted to $\partial\O$, plays the role of an inner fractional normal derivative. Note that, for $s = 1$, the limit on the RHS of (\ref{eq:limit-def-fractional-normal}) coincides with the classical inner normal derivative of $u$ at $x_0$. We observe that the constant $\G(1+s)^2$ also appears in the fractional Pohozaev identity, see e.g. \cite{RX-Poh}. This is, to some extent, not surprising, at least in the classical case, since Pohozaev's identity can be obtained using techniques of domain variation, see e.g. \cite{Wagner}. We also remark that one-sided derivatives naturally arise in the analysis of parameter-dependent minimization problems, see e.g. \cite[Section 10.2.3]{delfour-zolesio} for an abstract result in this direction. Related to this, they also appear in the analysis of the domain dependence of eigenvalue problems with possible degeneracy, see e.g. \cite{FW18} and the references therein. A natural consequence of Theorem \ref{th:Shape-deriv} is that the map $\e\mapsto \th(\e)=\l_{s,p}(\Phi_\e(\O))$ is differentiable at $\varepsilon = 0$ whenever $\l_{s,p}(\O)$ admits a unique positive minimizer. Indeed, applying Theorem \ref{th:Shape-deriv} to the map $\e\mapsto \widetilde \th(\e):=\l_{s,p}(\Phi_{-\e}(\O))$ yields $$ \partial_-\th(0)=-\partial_+\widetilde \th(0)=\max \left\{\G(1+s)^2 \int_{\partial\O}(u/\delta ^s)^2 X\cdot \nu\,dx\::\, u\in \mathcal{H} \right\}, $$ where ${\mathcal H}$ is given as in Theorem~\ref{th:Shape-deriv}. As a consequence, we obtain the following result. \begin{corollary}\label{cor:1.2} Let $\l_{s,p}(\O)$ be given by \eqref{eq:def-lambda-sp} and consider a family of deformations $\Phi_\varepsilon$ satisfying \eqref{eq:def-diffeom}.
Suppose that $ \l_{s,p}(\O)$ admits a unique positive minimizer $u\in {\mathcal H}^s_0(\Omega)$. Then the map $\e\mapsto \th(\e)=\l_{s,p}(\Phi_\e(\O))$ is differentiable at $\e=0$. Moreover, \begin{equation} \label{eq:cor:1.2-formula} \th'(0)=\G(1+s)^2 \int_{\partial\O}(u/\delta ^s)^2 X\cdot \nu\,dx, \end{equation} where $X:=\partial_\e\big|_{\e=0}\Phi_\varepsilon$. \end{corollary} As mentioned earlier, $ \l_{s,p}(\O)$ admits a unique positive minimizer $u\in {\mathcal H}^s_0(\Omega)$ for $1 \le p \le 2$, see Lemma~\ref{uniqueness-extended} in the appendix. Therefore Corollary \ref{cor:1.2} extends, in particular, the classical Hadamard formula for the first Dirichlet eigenvalue $\l_{1,2}(\O)$ of $-\Delta$ to the fractional setting. We recall, see e.g. \cite{Eh}, that the classical Hadamard formula is given by \begin{align}\label{eq:1.2} \frac{d}{d\e}\Big|_{\varepsilon = 0}\l_{1,2}(\Phi_\e(\O)) = \int_{\partial\O} |\nabla u|^2 X\cdot \nu \,dx. \end{align} {\color{red} An analogue of Corollary \ref{cor:1.2} for the case of the local $r$-Laplace operator was obtained in \cite{ML,CFR}. We also point out that, prior to this paper, a Hadamard formula in the fractional setting of the type (\ref{eq:cor:1.2-formula}) was obtained in \cite{dal} for the special case $p=1$, $s= \frac{1}{2}$, $N = 2$ and $\O$ of class $C^\infty$. We are not aware of any other previous work related to Theorem~\ref{th:Shape-deriv} or Corollary~\ref{cor:1.2} in the fractional setting.} Our next result provides a characterization of constrained local minima of $\l_{s,p}$.
Here and in the following, we call a bounded open subset $\O$ of class $C^{1,1}$ a constrained local minimum for $\l_{s,p}$ if for all families of deformations $\Phi_\varepsilon$ satisfying \eqref{eq:def-diffeom} and the volume invariance condition $|\Phi_\varepsilon(\Omega)|= |\Omega|$ for $\varepsilon \in (-1,1)$, there exists $\e_0 \in (0,1)$ with $\l_{s,p}(\Phi_\e(\O)) \geq \l_{s,p}(\O)$ for $\e\in (-\e_0,\e_0)$. Our classification result reads as follows. \begin{corollary}\label{cor:1.3} Let $p\in \{1\}\cup [2,\infty)$. If an open subset $\O$ of $\mathbb{R}^N$ of class $C^{3}$ is a volume-constrained local minimum for $\O\mapsto \l_{s,p}(\O)$, then $\O$ is a ball. \end{corollary} Corollary \ref{cor:1.3} is a consequence of Theorem \ref{th:Shape-deriv}, from which we derive that if $\O$ is a constrained local minimum, then any element $u\in \calH$ satisfies the overdetermined condition $u/\delta ^s\equiv \text{constant}$ on $\partial\O$. Therefore, by the rigidity result in \cite{JW17}, we find that $\O$ must be a ball. {\color{red} We point out that we are not able to include the case $p \in (1,2)$ in Corollary \ref{cor:1.3}, since the rigidity result in \cite{JW17} is based on the moving plane method and therefore requires the nonlinearity in \eqref{eq:1.1} to be Lipschitz. The case $p \in (1,2)$ therefore remains an open problem in Corollary \ref{cor:1.3}.} \\ We note that the authors in \cite{dal} considered the shape minimization problem for $\l_{s,p}(\Omega)$ in the case $p=1$, $s= \frac{1}{2}$, $N = 2$ among domains $\O$ of class $C^\infty$ of fixed volume. They showed in \cite{dal} that such minimizers are discs. Next we consider the optimization problem of $\O\mapsto \lambda_{s,p}(\O)$ for $p\in \{1,2\}$ and $\O$ a punctured ball, with the hole having the shape of a ball. We show that, as the hole moves within $\O$, $\lambda_{s,p}(\O)$ is maximal when the two balls are concentric.
In the local case $s=1$ and $N=2$, this is a classical result by Hersch \cite{Hersch}. For subsequent generalizations in the case of the local problem, see \cite{LL14,A-C,Kesavan}. \begin{theorem}\label{th:1.2} Let $p\in \{1,2\}$, let $B_1(0)$ be the unit ball centered at the origin, and let $\tau \in (0,1)$. Define $$ {\mathcal A}:=\{a\in B_{1}(0)\,:\, B_\tau(a) \subset B_1(0)\}. $$ Then the map ${\mathcal A}\to \mathbb{R}$, $\,a\mapsto\lambda_{s,p}(B_1(0)\setminus \overline{ B_\tau(a)})$ takes its maximum at $a=0$. \end{theorem} The proof of Theorem \ref{th:1.2} is inspired by the argument given in \cite{LL14,Kesavan} for the local case $s=1$. It uses the fractional Hadamard formula in Corollary \ref{cor:1.2} and maximum principles for anti-symmetric functions. Our proof also shows that the map $ a\mapsto\lambda_{s,p}(B_1(0)\setminus \overline{ B_\tau(a)})$ takes its minimum when the boundary of the ball $B_{\tau}(a)$ touches that of $B_{1}(0)$, see Section~\ref{S:proofs} below. The proof of Theorem \ref{th:Shape-deriv} is based on the use of test functions in the variational characterization of $\l_{s,p}(\O)$ and $\l_{s,p}(\Phi_\e(\O))$. The general strategy is inspired by the direct approach in \cite{FW18}, which is related to a Neumann eigenvalue problem on manifolds. In the case of $\l_{s,p}(\Phi_\e(\O))$, it is important to make a change of variables so that $\l_{s,p}(\Phi_\e(\O))$ is determined by minimizing an $\e$-dependent family of seminorms among functions $u\in {\mathcal H}^s_0(\Omega)$, see Section \ref{S:Not-per} below. An obvious choice of test functions are minimizers $u$ and $v_\e$ for $\l_{s,p}( \O)$ and $\l_{s,p}(\Phi_\e(\O))$, respectively. However, due to the fact that $u$ is only of class $C^s$ up to the boundary, we cannot obtain a boundary integral term directly from the divergence theorem.
In particular, the integration by parts formula given in \cite[Theorem 1.9]{RX-Poh} does not apply to the general vector fields $X$ which appear in \eqref{eq:right-der-main-th}. Hence, we need to replace $u$ with $\z_k u$, where $\z_k$ is a cut-off function vanishing in a $\frac{1}{k}$-neighborhood of $\partial\O$. This leads to upper and lower estimates of $\l_{s,p}(\Phi_\e(\O))$ up to order $o(\e)$, where the first order term is given by an integral involving $ (-\D)^s(\z_k u)$ and $\nabla (\z_k u)$. We refer the reader to Section~\ref{S:Onse-side} below for more precise information. A highly nontrivial task is now to pass to the limit as $k\to \infty$ in order to get boundary integrals involving $\psi:= u/\delta ^s$. This is the most difficult part of the paper. We refer to Proposition~\ref{lem:lim-ok} and Section~\ref{s:proof-Lemma-conv} below for more details. The paper is organized as follows. In Section \ref{S:Not-per}, we provide preliminary results on convergence properties of integral functionals, on inner approximations of functions in ${\mathcal H}^s_0(\Omega)$, and on properties of minimizers of \eqref{eq:def-lambda-sp}. In Section~\ref{sec:doma-pert-assoc}, we introduce notation related to domain deformations and related quantities. In Section \ref{S:Onse-side} we establish a preliminary variant of Theorem~\ref{th:Shape-deriv}, which is given in Proposition~\ref{prop:shape-deriv}. In this variant, the constant $\G(1+s)^2$ in \eqref{eq:right-der-main-th} is replaced by an implicitly given value which still depends on cut-off data. The proofs of the main results, as stated in this introduction, are then completed in Section \ref{S:proofs}. Finally, Section \ref{s:proof-Lemma-conv} is devoted to the proof of the main technical ingredient of the paper, which is given by Proposition \ref{lem:lim-ok}.\\ \textbf{Acknowledgements:} This work is supported by DAAD and BMBF (Germany) within the project 57385104.
The authors would like to thank Sven Jarohs for helpful discussions. M.M. Fall's work is supported by the Alexander von Humboldt foundation. \section{Notation and preliminary results}\label{S:Not-per} Throughout this section, we fix a bounded open set $\Omega \subset \mathbb{R}^N$. As noted in the introduction, we define the space ${\mathcal H}^s_0(\Omega)$ as the completion of $C^\infty_c(\Omega)$ with respect to the norm $[\,\cdot\,]_{s}$ given in \eqref{eq:def-bNs}. Then ${\mathcal H}^s_0(\Omega)$ is a Hilbert space with scalar product $$ (u,v) \mapsto [u,v]_{s}= \frac{ c_{N,s}}{2}\int_{\mathbb{R}^{N}}\! \int_{\mathbb{R}^{N}}\!\frac{(u(x)-u(y))(v(x)-v(y))}{|x-y|^{N+2s}}dx dy, $$ where $c_{N,s}$ is given in \eqref{eq:def-bNs}. It is well known and easy to see that ${\mathcal H}^s_0(\Omega)$ coincides with the closure of $C^\infty_c(\Omega)$ in the standard fractional Sobolev space $H^s(\mathbb{R}^N)$. Moreover, if $\Omega$ has a continuous boundary, then ${\mathcal H}^s_0(\Omega)$ admits the highly useful characterization \begin{equation} \label{useful-characterization} {\mathcal H}^s_0(\Omega)= \bigl \{ w \in L^1_{loc}(\mathbb{R}^N)\::\: [w]_s^2 < \infty,\quad w \equiv 0\; \text{on $\mathbb{R}^N \setminus \Omega$} \bigr\}, \end{equation} see e.g. \cite[Theorem 1.4.2.2]{G11}. We start with an elementary but useful observation. \begin{lemma}\label{lem:v_ez_k-ok-prelim} Let $\mu\in L^\infty(\mathbb{R}^N\times \mathbb{R}^N)$, and let $(v_k)_k$ be a sequence in ${\mathcal H}^s_0(\Omega)$ with $v_k \to v$ in ${\mathcal H}^s_0(\Omega)$ as $k \to \infty$. Then we have $$ \lim_{k\to\infty}\int_{\mathbb{R}^{2N}} \frac{(v_k(x)-v_k(y))^2\mu(x,y)}{|x-y|^{N+2s}} dxdy = \int_{\mathbb{R}^{2N}} \frac{(v(x)-v(y))^2\mu(x,y)}{|x-y|^{N+2s}} dxdy. $$ \end{lemma} \begin{proof} We have \begin{align*} &\Bigl|\int_{\mathbb{R}^{2N}} \frac{\bigl[(v_k(x)-v_k(y))^2-(v(x)-v(y))^2\bigr]\mu(x,y)}{|x-y|^{N+2s}} dxdy\Bigr|\\ &\le \|\mu\|_{L^\infty} \int_{\mathbb{R}^{2N}} \frac{|(v_k(x)-v_k(y))^2-(v(x)-v(y))^2|}{|x-y|^{N+2s}} dxdy, \end{align*} where \begin{align*} &\int_{\mathbb{R}^{2N}} \frac{|(v_k(x)-v_k(y))^2-(v(x)-v(y))^2|}{|x-y|^{N+2s}} dxdy\\ &=\int_{\mathbb{R}^{2N}} \frac{|[(v_k(x)-v(x))-(v_k(y)-v(y))][(v_k(x)+v(x))-(v_k(y)+v(y))]|}{|x-y|^{N+2s}} dxdy\\ &\le \frac{2}{c_{N,s}}[v_k-v]_s [v_k+v]_s\: \to\: 0 \qquad \text{as $k \to \infty$.} \end{align*} \end{proof} Throughout the remainder of this paper, we fix $\rho\in C^\infty_c(-2,2)$ with $0 \le \rho \le 1$, $\rho\equiv 1$ on $(-1,1)$, and we define \begin{equation} \label{eq:def-z} \zeta\in C^\infty(\mathbb{R}),\qquad \z(t)=1-\rho(t). \end{equation} Moreover, for $k \in \mathbb{N}$, we define the functions \begin{equation} \label{eq:def-z-k-etc} \rho_k,\: \z_k\: \in\: C^{1,1}(\mathbb{R}^N), \qquad \rho_k(x)= \rho(k \delta(x)), \;\quad \z_k(x)= \z(k \delta(x)). \end{equation} We note that the function $\rho_k$ is supported in the $\frac{2}{k}$-neighborhood of the boundary, while the function $\z_k$ vanishes in the $\frac{1}{k}$-neighborhood of the boundary. \begin{lemma} \label{inner-approx-property} Let $\Omega \subset \mathbb{R}^N$ be a bounded Lipschitz domain and let $u \in {\mathcal H}^s_0(\Omega)$. Moreover, for $k \in \mathbb{N}$, let $u_k := u \zeta_k \in {\mathcal H}^s_0(\Omega)$ denote the inner approximations of $u$. Then we have $$ u_k \to u \qquad \text{in ${\mathcal H}^s_0(\Omega)$.} $$ \end{lemma} \begin{proof} In the following, the letter $C>0$ stands for various constants independent of $k$.
Since $\rho_k= 1 -\z_k$, it suffices to show that \begin{equation} \label{eq:u-phi-r-claim} u \rho_k \in {\mathcal H}^s_0(\Omega)\; \text{for $k$ sufficiently large}\qquad \text{and}\qquad [u \rho_k]_s \to 0 \quad \text{as $k \to \infty$.} \end{equation} For $\varepsilon>0$, we put $A_\varepsilon= \{x \in \Omega\::\: \delta(x)< \varepsilon\}$. Since $u \rho_k$ vanishes in $\mathbb{R}^N \setminus A_{\frac{2}{k}}$, $0 \le \rho_k \le 1$ on $\mathbb{R}^N$ and $|{\rho_k}(x)-{\rho_k}(y)| \le C \min \{k |x-y|,1\}$ for $x,y \in \mathbb{R}^N$, we observe that \begin{align} &\frac{1}{c_{N,s}} [{\rho_k} u]^2_s =\frac{1}{2}\int_{\mathbb{R}^N}\int_{\mathbb{R}^N} \frac{[u(x){\rho_k}(x)-u(y){\rho_k}(y)]^2}{|x-y|^{N+2s}} dydx \nonumber\\ &= \frac{1}{2}\int_{A_{\frac{4}{k}}}\int_{A_{\frac{4}{k}}} \frac{[u(x){\rho_k}(x)-u(y){\rho_k}(y)]^2}{|x-y|^{N+2s}}\ dydx +\int_{A_{{\frac{2}{k}}}} u(x)^2 {\rho_k}(x)^2 \int_{\mathbb{R}^N \setminus A_{\frac{4}{k}}} |x-y|^{-N-2s}\ dydx\nonumber\\ &\le \frac{1}{2}\int_{A_{\frac{4}{k}}}\int_{A_{\frac{4}{k}}} \frac{\bigl[u(x)\bigl({\rho_k}(x)-{\rho_k}(y)\bigr)+ {\rho_k}(y)\bigl(u(x)-u(y)\bigr)\bigr]^2}{|x-y|^{N+2s}}\ dydx\nonumber\\ &\quad+ C \int_{A_{{\frac{2}{k}}}} u(x)^2 {\rm dist}(x,\mathbb{R}^N \setminus A_{\frac{4}{k}})^{-2s} dx \nonumber\\ &\le \int_{A_{\frac{4}{k}}} u^2(x) \int_{A_{\frac{4}{k}}}\frac{({\rho_k}(x)-{\rho_k}(y))^2}{|x-y|^{N+2s}}\ dydx + \int_{A_{\frac{4}{k}}}\int_{A_{\frac{4}{k}}}\frac{(u(x)-u(y))^2}{|x-y|^{N+2s}}dydx \nonumber \end{align} \begin{align} &\quad+ C\int_{A_{{\frac{2}{k}}}} u(x)^2 \delta^{-2s}(x)dx\nonumber\\ &\le C k^2 \int_{A_{\frac{4}{k}}} u^2(x)\int_{B_{\frac{1}{k}}(x)} |x-y|^{2-2s-N} dydx + C \int_{A_{\frac{4}{k}}} u^2(x)\int_{\mathbb{R}^N \setminus B_{\frac{1}{k}}(x)} |x-y|^{-N-2s} dydx\nonumber\\ &\quad + \int_{A_{\frac{4}{k}}}\int_{A_{\frac{4}{k}}}\frac{(u(x)-u(y))^2}{|x-y|^{N+2s}}dydx + C \int_{A_{{\frac{2}{k}}}} u(x)^2 \delta^{-2s}(x) dx \nonumber\\ &\le C k^{2s} \int_{A_{\frac{4}{k}}} u^2(x)dx + \int_{A_{\frac{4}{k}}}\int_{A_{\frac{4}{k}}}\frac{(u(x)-u(y))^2}{|x-y|^{N+2s}}dydx + C \int_{A_{{\frac{2}{k}}}} u(x)^2 \delta^{-2s}(x) dx\nonumber\\ &\le C \int_{A_{\frac{4}{k}}} u^2(x)\delta^{-2s}(x)dx + \int_{A_{\frac{4}{k}}}\int_{A_{\frac{4}{k}}}\frac{(u(x)-u(y))^2}{|x-y|^{N+2s}}dydx. \label{proof-inner-approximation-1} \end{align} Now, since $\Omega$ has a Lipschitz boundary, using $\int_{\mathbb{R}^N \setminus \Omega}|x-y|^{-N-2s}\,dy\sim \delta^{-2s}(x)$, see e.g. \cite{HC}, we get $$ \int_{\Omega} u^2(x)\delta^{-2s}(x)dx \le C \int_{\Omega}u^2(x) \int_{\mathbb{R}^N \setminus \Omega}|x-y|^{-N-2s}\,dy dx \le C[u]_s^2, $$ and therefore \begin{equation} \label{proof-inner-approximation-2} \int_{A_{\frac{4}{k}}} u^2(x)\delta^{-2s}(x)dx \to 0 \qquad \text{as $k \to \infty$.} \end{equation} Moreover, since also $$ \int_{\Omega} \int_{\Omega}\frac{(u(x)-u(y))^2}{|x-y|^{N+2s}}dydx \le \frac{2}{c_{N,s}}[u]_s^2, $$ we have \begin{equation} \label{proof-inner-approximation-3} \int_{A_{\frac{4}{k}}}\int_{A_{\frac{4}{k}}}\frac{(u(x)-u(y))^2}{|x-y|^{N+2s}}dydx \to 0 \qquad \text{as $k \to \infty$.} \end{equation} Combining (\ref{proof-inner-approximation-1}), (\ref{proof-inner-approximation-2}) and (\ref{proof-inner-approximation-3}), we obtain (\ref{eq:u-phi-r-claim}), as required. \end{proof} From now on, we fix a bounded $C^{1,1}$-domain $\Omega \subset \mathbb{R}^N$. We also let $$ C^s_0(\overline \O) = \left\lbrace w\in C^s(\overline\O): w = 0\quad\text{in}\quad \mathbb{R}^N\setminus\Omega \right\rbrace, $$ and we recall the following regularity and positivity properties of nonnegative minimizers for $\lambda_{s,p}(\Omega)$ as defined in \eqref{eq:def-lambda-sp}.
\begin{lemma} \label{reg-prop-minimizers} Let $u \in {\mathcal H}_0^s(\Omega)$ be a nonnegative minimizer for $\lambda_{s,p}(\Omega)$. Then $u\in C^s_0(\overline \O) \cap C^\infty_{loc}(\Omega)$. Moreover, $\psi:=\frac{u}{\delta^s} \in C^{\a}(\overline\O)$ for some $\alpha \in (0,1)$, and there exists a constant $c=c(N,s,\O,\a,p)>0$ with the property that \begin{equation} \label{eq:Bnd-reg-v-eps} \|\psi\|_{C^{\a}(\overline\O)}\leq c \end{equation} and \begin{equation}\label{eq:Grad-bnd-psi-eps} | \nabla \psi (x) | \leq c \delta^{\a-1}(x) \qquad\textrm{ for all $x\in \O$.} \end{equation} Moreover, $\psi >0$ on $\overline \Omega$, so in particular $u>0$ in $\Omega$. \end{lemma} \begin{proof} By standard arguments in the calculus of variations, $u$ is a weak solution of (\ref{eq:1.1}). By \cite[Proposition 1.3]{XJV} we have $u\in L^\infty(\O)$, and therefore the right-hand side of (\ref{eq:1.1}) is a function in $L^\infty(\O)$. Thus the regularity up to the boundary, $u \in C^s_0(\overline \O)$, is proved in \cite{RX}, where also the $C^{\alpha}$-bound \eqref{eq:Bnd-reg-v-eps} for the function $\psi= \frac{u}{\delta^s}$ is established for some $\alpha>0$. Moreover, \eqref{eq:Grad-bnd-psi-eps} is proved in \cite{FS-2019}. It also follows from (\ref{eq:1.1}), the strong maximum principle and the Hopf lemma for the fractional Laplacian that $\psi$ is a strictly positive function on $\overline \Omega$. In particular, $u>0$ in $\Omega$. Therefore, $u \in C^\infty_{loc}(\O)$ follows by interior regularity theory (see e.g. \cite{RS16a}) and the fact that the function $t \mapsto t^{p-1}$ is of class $C^\infty$ on $(0,\infty)$. \end{proof} The computation of one-sided shape derivatives as given in Theorem~\ref{th:Shape-deriv} will be carried out in Section~\ref{S:Onse-side}, and it requires the following key technical proposition.
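The boundary behavior of the quotient $\psi = u/\delta^s$ can be made concrete in a simple one-dimensional illustration. The following Python sketch (an illustration under our own choice of data, not part of the paper's argument) takes $\Omega=(-1,1)$ and the explicit $C^s$ profile $u(x)=(1-x^2)_+^s$; here $\delta(x)=1-|x|$, so $\psi$ factorizes as $(1+|x|)^s$ and extends continuously up to the boundary with $\psi(\pm 1)=2^s$, in line with the regularity of $\psi$ asserted above.

```python
import math

s = 0.5  # sample fractional order; any s in (0, 1) works for this illustration

def u(x):
    """Explicit C^s profile u(x) = (1 - x^2)_+^s on Omega = (-1, 1)."""
    return max(1.0 - x * x, 0.0) ** s

def delta(x):
    """Distance from x to the boundary {-1, 1} of Omega = (-1, 1)."""
    return max(1.0 - abs(x), 0.0)

def psi(x):
    """Boundary quotient psi = u / delta^s, which factorizes as (1 + |x|)^s."""
    return u(x) / delta(x) ** s

# psi extends continuously up to the boundary with psi(+-1) = 2^s:
for x in [0.9, 0.99, 0.999999]:
    print(x, psi(x))
print("boundary value 2^s =", 2 ** s)
```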
Since its proof is long and quite involved, we postpone it to Section \ref{s:proof-Lemma-conv} below. \begin{proposition}\label{lem:lim-ok} Let $X \in C^{0}(\overline \Omega,\mathbb{R}^N)$, let $u \in C^s_0(\overline \O)\cap C^1(\O)$, and assume that $\psi:=\frac{u}{\delta^s}$ extends to a function on $\overline \Omega$ satisfying \eqref{eq:Bnd-reg-v-eps} and \eqref{eq:Grad-bnd-psi-eps}. Moreover, put $U_k:=u \z_k \in C^{1,1}_c(\Omega)$, where $\z_k$ is defined in (\ref{eq:def-z-k-etc}). Then $$ \lim_{k\to \infty}\int_{\Omega}\nabla U_k \cdot X \Bigl( u (-\D)^s \z_k -I(u,\z_k)\Bigr)\,dx= -\kappa_s \int_{\partial\O} \psi^2 X\cdot \nu \, dx, $$ where \begin{equation} \label{eq:-kappa-s-implicit} \kappa_s:= - \int_{\mathbb{R}}h'(r) (-\D)^s h(r)\, dr \qquad \text{with}\quad h(r):=r_+^s \z(r)=\max(r,0)^s\z(r) \end{equation} and $\z$ given in (\ref{eq:def-z}), and where we use the notation \begin{equation}\label{eq:def-I-u-v} I(u,v)(x):= c_{N,s}\int_{\mathbb{R}^N}\frac{(u(x)-u(y))(v(x)-v(y))}{|x-y|^{N+2s}}\, dy \end{equation} for $u\in C^s_c(\mathbb{R}^N)$, $v\in C^{0,1}(\mathbb{R}^N)$ and $x \in \mathbb{R}^N$. \end{proposition} \begin{remark} \label{sec:notat-prel-results-remark} The minus sign in the definition of the constant $\kappa_s$ in \eqref{eq:-kappa-s-implicit} might appear a bit strange at first glance. We shall see later that, defined in this way, $\kappa_s$ has a positive value. A priori it is not clear that the value of $\kappa_s$ does not depend on the particular choice of the function $\z$. This follows a posteriori once we have established in Proposition~\ref{prop:shape-deriv} below that this constant appears in Theorem~\ref{th:Shape-deriv}. This will then allow us to show that $\kappa_s = \frac{\Gamma(1+s)^2}{2}$ by applying the resulting shape derivative formula to a one-parameter family of concentric balls, see Section~\ref{S:proofs} below.
A more direct, but somewhat lengthy computation of $\kappa_s$ is possible via the logarithmic Laplacian, which has been introduced in \cite{WC}. \end{remark} \section{Domain perturbation and the associated variational problem} \label{sec:doma-pert-assoc} Here and in the following, we define $\Omega_\varepsilon := \Phi_\varepsilon(\Omega)$. In order to study the dependence of $\lambda_{s,p}(\Omega_\varepsilon)$ on $\varepsilon$, it is convenient to pull the problem back to the fixed domain $\Omega$ via a change of variables. For this we let $\textrm{Jac}_{\Phi_\e}$ denote the Jacobian determinant of the map $\Phi_\varepsilon \in C^{1,1}(\mathbb{R}^N)$, and we define the kernels \begin{equation} \label{eq:def-K-eps} K_\e(x,y):= { c_{N,s}}\frac{\textrm{Jac}_{\Phi_\e}(x)\textrm{Jac}_{\Phi_\e}(y)}{|\Phi_\e(x)-\Phi_\e(y)|^{N+2s}}\qquad\textrm{and} \qquad K_0(x,y)= { c_{N,s}}\frac{1}{|x-y|^{N+2s}}. \end{equation} Then \eqref{eq:def-diffeom} gives rise to the well known expansions \begin{equation} \label{eq:Jacob} \textrm{Jac}_{\Phi_\e}(x)=1+\e\,\textrm{div}X(x)+ O(\e^2),\qquad \partial_\varepsilon \textrm{Jac}_{\Phi_\e}(x) = \textrm{div}X(x)+ O(\varepsilon) \end{equation} uniformly in $x \in \mathbb{R}^N$, where $X:=\partial_\varepsilon \big|_{\e=0} \Phi_\varepsilon \in C^{0,1}(\mathbb{R}^N;\mathbb{R}^N)$ and therefore $\textrm{div}X$ is a.e. defined on $\mathbb{R}^N$.
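The Jacobian expansion above can be sanity-checked numerically (a side illustration with an arbitrarily chosen smooth field, not part of the argument): for $\Phi_\e = \mathrm{id} + \e X$ in dimension $N=2$, the discrepancy between the Jacobian determinant and $1+\e\,\mathrm{div}X$ decays at the quadratic rate $O(\e^2)$.

```python
import math

def X(p):
    """Deformation field chosen for illustration (our own choice, not from the paper)."""
    x1, x2 = p
    return (x1 * x2, math.sin(x1))

def div_X(p):
    """div X = d/dx1 (x1*x2) + d/dx2 (sin x1) = x2."""
    x1, x2 = p
    return x2

def jac_Phi(eps, p, h=1e-6):
    """Jacobian determinant of Phi_eps = id + eps*X via central differences."""
    x1, x2 = p
    Phi = lambda q: (q[0] + eps * X(q)[0], q[1] + eps * X(q)[1])
    d11 = (Phi((x1 + h, x2))[0] - Phi((x1 - h, x2))[0]) / (2 * h)
    d12 = (Phi((x1, x2 + h))[0] - Phi((x1, x2 - h))[0]) / (2 * h)
    d21 = (Phi((x1 + h, x2))[1] - Phi((x1 - h, x2))[1]) / (2 * h)
    d22 = (Phi((x1, x2 + h))[1] - Phi((x1, x2 - h))[1]) / (2 * h)
    return d11 * d22 - d12 * d21

p = (0.3, -0.7)
for eps in [1e-1, 1e-2, 1e-3]:
    err = abs(jac_Phi(eps, p) - (1.0 + eps * div_X(p)))
    print(eps, err)  # discrepancy shrinks like eps^2
```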
From \eqref{eq:def-diffeom}, we also get $$ {|\Phi_\e(x)-\Phi_\e(y)|^{-N-2s}}=|x-y|^{-N-2s}\left( 1+ 2\e\frac{x-y}{|x-y|} \cdot P_{X}(x,y)+O(\e^2) \right)^{-\frac{N+2s}{2}}, $$ and $$ \partial_\varepsilon {|\Phi_\e(x)-\Phi_\e(y)|^{-N-2s}}=|x-y|^{-N-2s}\left(- (N+2s) \frac{x-y}{|x-y|} \cdot P_{X}(x,y)+O(\e)\right), $$ uniformly in $x,y \in \mathbb{R}^N$, $x \not = y$, with $$ P_X \in L^\infty(\mathbb{R}^N \times \mathbb{R}^N), \qquad P_{X}(x,y)=\frac{X(x)-X(y)}{|x-y|}. $$ Moreover, by \eqref{eq:Jacob} and the fact that $\partial_\varepsilon \Phi_\e$, $X\in C^{0,1}(\mathbb{R}^N)$, we have that \begin{align} K_\e(x,y)=K_0(x,y)+\e\,\partial_\e\Big|_{\e=0} K_\e(x,y)+O(\e^2) K_0(x,y) , \label{eq:estim-K-eps-1-1} \end{align} and \begin{align} \partial_\varepsilon K_\e(x,y)=\partial_\e\Big|_{\e=0}K_\e(x,y) +O(\e)K_0(x,y), \label{eq:estim-K-eps-1-2} \end{align} uniformly in $x,y \in \mathbb{R}^N$, $x \not = y$, where \begin{align} \partial_\e\Big|_{\e=0} K_\e(x,y)=- \Bigl[(N+2s)\frac{x-y}{|x-y|} \cdot P_{X}(x,y)-&(\textrm{div}X(x)+ \textrm{div}X(y))\Bigr] K_0(x,y). \label{eq:estim-K-eps-2} \end{align} In particular, it follows from (\ref{eq:estim-K-eps-1-1}) and (\ref{eq:estim-K-eps-2}) that there exist $\e_0,C>0$ with the property that \begin{equation} \label{eq:Elliptic-K-e} \frac{1}{C}K_0(x,y)\leq K_\e(x,y)\leq C K_0(x,y) \qquad\textrm{ for all $x, y\in \mathbb{R}^N$, $x \not =y$ and $\e\in (-\e_0,\e_0)$.} \end{equation} For $v\in {\mathcal H}^s_0(\Omega)$ and $\e\in (-\varepsilon_0,\varepsilon_0)$, we now define \begin{equation}\label{eq:def-j-Uk} {\mathcal V}_{v}(\e):= \frac{1}{2}\int_{\mathbb{R}^{2N}} {(v(x)-v(y))^2} K_\e(x,y) dxdy . \end{equation} Then, by \eqref{eq:def-lambda-sp}, \eqref{eq:def-diffeom} and a change of variables, we have the following variational characterization of $\lambda_{s,p}(\O_\e)$: \begin{align}\label{eq:var-carac-lOe} \lambda_{s,p}^\varepsilon &:= \lambda_{s,p}(\O_\e)=\inf \left\{[u]_s^2\::\: u\in {\mathcal H}^s_0(\Omega_\varepsilon), \quad\int_{\O_\varepsilon} |u|^p\, dx=1 \right\}\nonumber\\ &=\inf \left\{ {\mathcal V}_{v}(\e) \::\: v\in {\mathcal H}^s_0(\Omega),\quad\int_{\O} |v|^p\,\textrm{Jac}_{\Phi_\e}(x)\, dx=1 \right\} \quad \text{for $\varepsilon \in (-\e_0,\e_0)$.} \end{align} As mentioned earlier, from now on we prefer to work with (\ref{eq:var-carac-lOe}), where the underlying domain is fixed and instead the integral terms depend on $\varepsilon$. It follows from (\ref{eq:estim-K-eps-1-1}) and (\ref{eq:estim-K-eps-1-2}) that, for given $v \in {\mathcal H}^s_0(\Omega)$, the function ${\mathcal V}_v: (-\varepsilon_0,\varepsilon_0) \to \mathbb{R}$ is of class $C^1$ with \begin{equation} \label{cV-v-diff-property-zero} {\mathcal V}_v'(0) = \frac{1}{2}\int_{\mathbb{R}^{2N}} {(v(x)-v(y))^2} \partial_\e\big|_{\e=0} K_\e(x,y) dxdy, \end{equation} where $\partial_\e\big|_{\e=0} K_\e(x,y)$ is given in (\ref{eq:estim-K-eps-2}), \begin{equation} \label{cV-v-deriv-zero-bounded} |{\mathcal V}_v'(0)| \le C [v]_s^2 \qquad \text{with a constant $C >0$,} \end{equation} and we have the expansions \begin{equation} \label{cV-v-diff-property} {\mathcal V}_v(\varepsilon) = {\mathcal V}_v(0)+ \varepsilon {\mathcal V}_v'(0) + O(\varepsilon^2)[v]_s^2, \qquad {\mathcal V}_v'(\varepsilon) = {\mathcal V}_v'(0) + O(\varepsilon)[v]_s^2 \end{equation} with $O(\varepsilon)$, $O(\varepsilon^2)$ independent of $v$.
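The first-order kernel expansion can be cross-checked numerically in a special case where everything is explicit (again a side illustration, with the normalization constant $c_{N,s}$ set to $1$): for $N=1$ and $X(x)=x$, one has $\Phi_\e(x)=(1+\e)x$, so $K_\e=(1+\e)^{1-2s}K_0$ and the derivative at $\e=0$ predicted by \eqref{eq:estim-K-eps-2} reduces to $(1-2s)K_0$.

```python
# Finite-difference check of the first-order kernel expansion in the explicit
# special case N = 1, X(x) = x, Phi_eps(x) = (1 + eps) x; the normalization
# constant c_{1,s} is set to 1 here, which does not affect the comparison.
s = 0.3
x, y = 0.4, -1.1

def K(eps):
    jac = (1.0 + eps) ** 2  # Jac_{Phi_eps}(x) * Jac_{Phi_eps}(y) = (1+eps)^2
    return jac / abs((1.0 + eps) * (x - y)) ** (1 + 2 * s)

K0 = K(0.0)
# General formula: the bracket is (N+2s)*((x-y)/|x-y|)*P_X - (divX(x)+divX(y));
# here P_X = (x-y)/|x-y| and div X = 1, so the bracket equals (1+2s) - 2.
predicted = -((1 + 2 * s) - 2.0) * K0  # = (1 - 2s) * K0

h = 1e-6
numeric = (K(h) - K(-h)) / (2 * h)  # central difference in eps
print(predicted, numeric)
```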
From \eqref{eq:Jacob}, \eqref{eq:Elliptic-K-e} and the variational characterization (\ref{eq:var-carac-lOe}), it is easy to see that $$ \frac{1}{C} \le \l_{s,p}^\varepsilon \leq C \qquad \text{for all $\e\in (-\e_0,\e_0)$ with some constant $C>0$.} $$ Using this and \eqref{eq:Jacob}, \eqref{eq:Elliptic-K-e} once more, we can show that \begin{equation}\label{eq;bound-v-eps} \frac{1}{C}\leq \|v_\e\|_{L^p(\O)}\leq C\qquad \text{and}\qquad \frac{1}{C} \le [v_\e]_s\leq C \end{equation} for every $\e\in (-\e_0,\e_0)$ and every minimizer $v_\varepsilon \in {\mathcal H}^s_0(\Omega)$ for (\ref{eq:var-carac-lOe}), with a constant $C>0$. The following lemma is essentially a corollary of Lemma~\ref{lem:v_ez_k-ok-prelim}. \begin{lemma} \label{lem:v_ez_k-ok-corollary} Let $(v_k)_k$ be a sequence in ${\mathcal H}^s_0(\Omega)$ with $v_k \to v$ in ${\mathcal H}^s_0(\Omega)$. Then we have $$ \lim_{k\to\infty}{\mathcal V}_{v_{k}}(0)= {\mathcal V}_{v}(0)\qquad \text{and}\qquad \lim_{k\to\infty}{\mathcal V}_{v_k}'(0)= {\mathcal V}_{v}'(0). $$ \end{lemma} \begin{proof} The first limit is trivial since ${\mathcal V}_v(0)= [v]_s^2$ for $v \in {\mathcal H}^s_0(\Omega)$. The second limit follows from Lemma~\ref{lem:v_ez_k-ok-prelim}, (\ref{eq:estim-K-eps-2}) and (\ref{cV-v-diff-property-zero}) by noting that $\mu \in L^\infty(\mathbb{R}^N \times \mathbb{R}^N)$ for the function $$ \mu(x,y)=- (N+2s) \frac{x-y}{|x-y|} \cdot P_{X}(x,y) + (\textrm{div}X(x)+ \textrm{div}X(y)). $$ \end{proof} \section{One-sided shape derivative computations}\label{S:Onse-side} We keep using the notation of the previous sections, and we recall in particular the variational characterization of $\l_{s,p}^\e=\l_{s,p}(\O_\e)$ given in (\ref{eq:var-carac-lOe}). The aim of this section is to prove the following result.
\begin{proposition}\label{prop:shape-deriv} We have \begin{align*} \partial_\e^+ \Big|_{\e=0}\l_{s,p}^\e =\min &\left\{ 2 \kappa_s \int_{\partial\O} (u/\delta^s)^2X\cdot \nu \, dx\::\, u\in \mathcal{H} \right\}, \end{align*} where $\calH$ is the set of positive minimizers for $\l_{s,p}^0:= \l_{s,p}(\O)$, $X:=\partial_\varepsilon \big|_{\e=0} \Phi_\varepsilon$ and $\kappa_s$ is given by \eqref{eq:-kappa-s-implicit}. \end{proposition} The proof of Proposition \ref{prop:shape-deriv} requires several preliminary results. We start with a formula for the derivative of the function given by \eqref{eq:def-j-Uk}. \begin{lemma}\label{lem:j1kprimes-preliminary} Let $U \in C^{1,1}_c(\O)$. Then \begin{align} &{\mathcal V}_{U}'(0)= -2 \int_{\mathbb{R}^N} \nabla U \cdot X (-\Delta)^s U dx. \end{align} \end{lemma} \begin{proof} By \eqref{eq:estim-K-eps-2}, (\ref{cV-v-diff-property}) and Fubini's theorem, we have \begin{align*} {\mathcal V}_{U}'(0)=& \frac{-(N+2s) c_{N,s}}{2}\int_{\mathbb{R}^{2N}} {(U(x)-U(y))^2}\frac{(x-y)\cdot(X(x)-X(y))}{|x-y|^{N+2s+2}} dx dy \nonumber\\ &+ \frac{1}{2}\int_{\mathbb{R}^{2N}} {(U(x)-U(y))^2} K_0(x,y) (\textrm{div}X(x)+ \textrm{div}X(y))dxdy\\ =& \frac{-(N+2s) c_{N,s}}{2} \lim_{\mu \to 0} \int_{|x-y|>\mu} {(U(x)-U(y))^2}\frac{(x-y)\cdot(X(x)-X(y))}{|x-y|^{N+2s+2}} dxdy \nonumber\\ &+ \int_{\mathbb{R}^{2N}} {(U(x)-U(y))^2} K_0(x,y)\textrm{div}X(x) dxdy\\ =&-(N+2s) c_{N,s} \lim_{\mu \to 0} \int_{\mathbb{R}^N} \int_{\mathbb{R}^N \setminus \overline{B_\mu(y)}} {(U(x)-U(y))^2}\frac{(x-y)\cdot X(x)}{|x-y|^{N+2s+2}} dxdy \nonumber\\ &+ \int_{\mathbb{R}^{2N}} {(U(x)-U(y))^2} K_0(x,y)\textrm{div}X(x) dxdy \end{align*} Applying, for fixed $y \in \mathbb{R}^N$ and $\mu>0$, the divergence theorem in the domain $\{x \in \mathbb{R}^{N}\::\: |x-y| > \mu \}$ and using that $\nabla_{x} |x-y|^{-N-2s}=-(N+2s)\frac{x-y}{|x-y|^{N+2s+2}}$, we obtain \begin{align} {\mathcal V}_{U}'(0) =&c_{N,s} \lim_{\mu\to 0} \int_{\mathbb{R}^N} \int_{\mathbb{R}^N \setminus \overline{B_\mu(y)}} {(U(x)-U(y))^2} \nabla_{x} |x-y|^{-N-2s} \cdot X(x) dxdy \nonumber\\ &+ \int_{\mathbb{R}^{2N}} {(U(x)-U(y))^2} K_0(x,y)\textrm{div}X(x) dxdy \nonumber\\ =&- \lim_{\mu\to 0} \int_{\mathbb{R}^N} \int_{\mathbb{R}^N \setminus \overline{B_\mu(y)}} {(U(x)-U(y))^2} K_0(x,y) \textrm{div}X(x)dxdy \nonumber\\ & - \lim_{\mu\to 0}\int_{\mathbb{R}^N} \int_{\mathbb{R}^N \setminus \overline{B_\mu(y)}} {(U(x)-U(y))\nabla U(x) \cdot X(x) } K_0(x,y) dxdy \nonumber\\ &+ \lim_{\mu\to 0} \int_{\mathbb{R}^N} \int_{\partial B_\mu(y)} {(U(x)-U(y))^2} \frac{y-x}{|x-y|}\cdot X(x) K_0(x,y) \, d\sigma(y)\,dx \nonumber\\ &+ \int_{\mathbb{R}^{2N}} {(U(x)-U(y))^2} K_0(x,y) \textrm{div}X(x)dxdy\nonumber\\ =& - \lim_{\mu\to 0}\int_{|x-y|>\mu} (U(x)-U(y))\nabla U(x) \cdot X(x) K_0(x,y) d(x,y) \nonumber\\ &+ \lim_{\mu\to 0}\mu^{-N-1-2s} \int_{|x-y|=\mu} {(U(x)-U(y))^2} (y-x) \cdot X(x) \, d\sigma(x,y)\nonumber\\ =&- \frac{c_{N,s}}{2} \lim_{\mu\to 0}\int_{\mathbb{R}^N}\nabla U(x) \cdot X(x) \int_{\mathbb{R}^N \setminus \overline{B_\mu(0)}} \frac{2 U(x)-U(x+z)-U(x-z)}{|z|^{N+2s}} dz dx \nonumber\\ &+ \frac{1}{2} \lim_{\mu\to 0}\mu^{-N-1-2s} \int_{|x-y|= \mu} {(U(x)-U(y))^2} (y-x) \cdot (X(x)-X(y)) \, d\sigma(x,y). \label{revision-eq-1} \end{align} Since $U \in C^{1,1}_c(\Omega)$, we have that \begin{align} &\frac{c_{N,s}}{2} \lim_{\mu\to 0}\int_{\mathbb{R}^N} \nabla U(x) \cdot X(x) \int_{\mathbb{R}^N \setminus \overline{B_\mu(0)}} \frac{2 U(x)-U(x+z)-U(x-z)}{|z|^{N+2s}} dz dx\nonumber \\ &= \frac{c_{N,s}}{2} \int_{\mathbb{R}^N}\nabla U(x) \cdot X(x) \int_{\mathbb{R}^N} \frac{2 U(x)-U(x+z)-U(x-z)}{|z|^{N+2s}} dz dx\nonumber\\ &= \int_{\mathbb{R}^N} (-\Delta)^sU(x) \nabla U(x) \cdot X(x) dx.
\label{revision-eq-2} \end{align} Moreover, since $U$ is compactly supported, we may fix $R>0$ large enough such that $(U(x)-U(y))^2 = 0$ whenever $|x-y|<1$ and $(x,y) \notin B_R(0) \times B_R(0)$. Setting $N_\mu:= \{(x,y) \in B_R(0) \times B_R(0) \::\: |x-y|= \mu\}$ for $0<\mu<1$ and using that $U, X \in C^{0,1}(\mathbb{R}^N)$, we thus deduce that \begin{align} &\mu^{-N-1-2s} \int_{|x-y|= \mu} {(U(x)-U(y))^2} (y-x) \cdot (X(x)-X(y)) \, d\sigma(x,y)\nonumber\\ &= \mu^{-N-1-2s} \int_{N_\mu} {(U(x)-U(y))^2} (y-x) \cdot (X(x)-X(y))\, d\sigma(x,y) = O(\mu^{2-2s}) \to 0,\label{revision-eq-3} \end{align} as $\mu \to 0$, since the $(2N-1)$-dimensional measure of the set $N_\mu$ is of order $O(\mu^{N-1})$ as $\mu \to 0$. The claim now follows by combining \eqref{revision-eq-1}, \eqref{revision-eq-2} and \eqref{revision-eq-3}. \end{proof} We cannot apply Lemma~\ref{lem:j1kprimes-preliminary} directly to minimizers $u \in {\mathcal H}^s_0(\Omega)$ for $\l_{s,p}(\O)$ since these are not contained in $C^{1,1}_c(\Omega)$. The aim is therefore to apply Lemma~\ref{lem:j1kprimes-preliminary} to $U_k:=u \z_k \in C^{1,1}_c(\Omega)$ with $\z_k$ given in (\ref{eq:def-z-k-etc}), and to use Proposition~\ref{lem:lim-ok}. This leads to the following derivative formula, which plays a key role in the proof of Proposition~\ref{prop:shape-deriv}. \begin{lemma}\label{lem:j1kprimes} Let $u\in {\mathcal H}^s_0(\Omega)$ be a solution to \eqref{eq:1.1}. Then we have $$ {\mathcal V}_u'(0)= \frac{2\l_{s,p}(\O) }{p}\int_{\O} u^p \,\textrm{div}\, X \,dx + 2 \kappa_s \int_{\partial\O} (u/\delta^s)^2 X\cdot \nu\, dx. $$ \end{lemma} \begin{proof} By Lemma~\ref{reg-prop-minimizers} and since $\Omega$ is of class $C^{1,1}$, we have $U_k:=u \zeta_k\in C^{1,1}_c(\Omega) \subset {\mathcal H}^s_0(\Omega)$ for $k \in \mathbb{N}$, and $U_k \to u$ in ${\mathcal H}^s_0(\Omega)$ by Lemma~\ref{inner-approx-property}.
Consequently, ${\mathcal V}_u'(0)= \lim \limits_{k \to \infty}{\mathcal V}_{U_k}'(0)$ by Lemma~\ref{lem:v_ez_k-ok-corollary}, so it remains to show that \begin{equation} \label{eq:remains-to-show} \lim_{k \to \infty}{\mathcal V}_{U_k}'(0) = \frac{2\l_{s,p}(\O) }{p}\int_{\O} u^p \,\textrm{div}\, X \,dx + 2 \kappa_s \int_{\partial\O} (u/\delta^s)^2 X\cdot \nu\, dx. \end{equation} Applying Lemma~\ref{lem:j1kprimes-preliminary} to $U_k$, we find that $$ {\mathcal V}_{U_k}'(0)= -2 \int_{\mathbb{R}^N} \nabla U_k \cdot X (-\Delta)^s U_k dx \qquad \text{for $k \in \mathbb{N}$.} $$ By the standard product rule for the fractional Laplacian, we have $(-\D)^s U_k=u (-\D)^s \z_k + \z_k (-\D)^s u-I(u, \z_k)$ with $I(u, \z_k)$ given by \eqref{eq:def-I-u-v}. We thus obtain \begin{align} &{\mathcal V}_{U_k}'(0)=- 2\int_{\mathbb{R}^N} \nabla U_k \cdot X \z_k (-\D)^s u\, dx-2\int_{\mathbb{R}^N} [\nabla U_k \cdot X] u (-\D)^s \z_k \, dx \label{eq:first-est-jk1-pr} \\ &\qquad\qquad+ 2 \int_{\mathbb{R}^N} \nabla U_k \cdot X I(u, \z_k)\, dx \nonumber\\ &=-2\l_{s,p}(\O) \int_{\O} \nabla U_k \cdot X \z_k u^{p-1}\, dx -2 \int_{\mathbb{R}^N}\nabla U_k \cdot X \Bigl( u (-\D)^s \z_k - I(u, \z_k)\Bigr)\,dx,\nonumber \end{align} where we used that $(-\D)^s u=\l_{s,p}(\O) u^{p-1}$ in $\O$. Consequently, Proposition~\ref{lem:lim-ok} yields that \begin{equation} \label{limit-formula-prelim} \lim_{k \to \infty}{\mathcal V}_{U_k}'(0)= -2\l_{s,p}(\O) \lim_{k \to \infty} \int_{\O} \nabla U_k\cdot X \z_k u^{p-1}\, dx + 2 \kappa_s \int_{\partial\O} \psi^2 X\cdot \nu\, dx. \end{equation} Moreover, integrating by parts, we obtain, for $k \in \mathbb{N}$, \begin{align} \int_{\O} [\nabla U_k\cdot &X] \z_k u^{p-1}\, dx = \frac{1}{p} \int_{\O} [\nabla u^{p} \cdot X] \z_k^2 \, dx+ \int_{\O} [\nabla \z_k \cdot X] \z_k u^{p}\, dx \nonumber \\ &=-\frac{1}{p}\int_{\O} u^p \,\textrm{div}X\, \z_k^2 \,dx -\frac{2}{p}\int_{\O} u^p \z_k [X\cdot\nabla \z_k] \,dx + \int_{\O} u^{p} \z_k [X\cdot \nabla \z_k] \, dx.\label{eq:rho1-prime} \end{align} Since $u^p\in C^s_0(\overline\O)$ by Lemma~\ref{reg-prop-minimizers}, it is easy to see from the definition of $\z_k$ that the last two terms in \eqref{eq:rho1-prime} tend to zero as $k\to \infty$, whereas $$ \lim_{k\to\infty}\int_{\O} u^p\,\textrm{div} X \,\z_k^2 \,dx= \int_{\O} u^p\,\textrm{div} X \,dx. $$ Hence $$ \lim_{k \to \infty} \int_{\O} \nabla U_k\cdot X \z_k u^{p-1} \, dx = - \frac{1}{p} \int_{\O} u^p\,\textrm{div} X \,dx. $$ Plugging this into (\ref{limit-formula-prelim}), we obtain (\ref{eq:remains-to-show}), as required. \end{proof} Our next lemma provides an upper estimate for $\partial_\e^+ \Big|_{\e=0}\l_{s,p}^\e$. \begin{lemma}\label{lem:upper-estim} Let $u\in {\mathcal H}$ be a positive minimizer for $\lambda_{s,p}^0 = \l_{s,p}(\O)$. Then \begin{align}\label{eq:upper-bnd} \limsup_{\varepsilon \to 0^+}\frac{\l_{s,p}^\e-\l_{s,p}^0}{\e} \leq& 2 \kappa_s \int_{\partial\O} (u/\delta^s)^2 X\cdot \nu\, dx . \end{align} \end{lemma} \begin{proof} For $\varepsilon \in (-\varepsilon_0,\varepsilon_0)$, we define $$ j(\varepsilon):=\frac{{\mathcal V}_{u}(\e)}{\tau(\e)} \quad \text{with}\quad \tau(\e):= \left( \int_{\O}|u|^p \,\textrm{Jac}_{\Phi_\e}(x)\, dx \right)^{2/p}. $$ By \eqref{eq:var-carac-lOe}, we then have $\l_{s,p}^\varepsilon \leq j(\e)$ for $\varepsilon \in (-\varepsilon_0,\varepsilon_0)$.
Moreover, $$ \tau(0)=\|u\|_{L^p(\Omega)}^{2/p}=1, \quad {\mathcal V}_{u}(0)= [u]_s^2 = \l_{s,p}(\O)\quad \text{and}\quad j(0)= \frac{{\mathcal V}_{u}(0)}{\tau(0)} = \l_{s,p}^0, $$ which implies that $$ \partial_\e^+ \Big|_{\e=0}\l_{s,p}^\e\leq j'(0) = 2 \kappa_s \int_{\partial\O} (u/\delta^s)^2 X\cdot \nu \, dx, $$ by Lemma \ref{lem:j1kprimes} and \eqref{eq:Jacob}, as claimed. \end{proof} Next, we shall prove a lower estimate for $\partial_\e^+ \Big|_{\e=0}\l_{s,p}^\e$. \begin{lemma}\label{lem:lower-estim} We have $$ \liminf_{\e\searrow 0}\frac{\l_{s,p}^\e-\l_{s,p}^0}{\e}\geq \inf\left\{ 2\kappa_s \int_{\partial\O}(u/\delta^s)^2 X\cdot \nu\,dx\::\, u\in {\mathcal H} \right\}. $$ \end{lemma} \begin{proof} Let $(\e_n)_n$ be a sequence of positive numbers converging to zero with the property that \begin{equation} \label{eq:lim-to-lim-inf} \lim_{n\to \infty}\frac{ \l_{s,p}^{\e_n}- \l_{s,p}^0}{\e_n} = \liminf_{\e\searrow 0}\frac{\l_{s,p}^\e-\l_{s,p}^0}{\e}. \end{equation} For $n\in\mathbb{N}$, we let $v_{\e_n}$ be a positive minimizer corresponding to the variational characterization of $\l_{s,p}^{\e_n}$ given in (\ref{eq:var-carac-lOe}), i.e. we have \begin{equation} \label{eq:e-n-prime-eq} \l_{s,p}^{\e_n} = {\mathcal V}_{v_{\e_n}}(\e_n) \qquad \text{and}\qquad \int_{\O} v_{\e_n}^p\,\textrm{Jac}_{\Phi_{\e_n}}dx = 1. \end{equation} Since $v_{\e_n}$ remains bounded in ${\mathcal H}^s_0(\Omega)$ by \eqref{eq;bound-v-eps}, we may pass to a subsequence with the property that $v_{\e_n} \rightharpoonup u$ in ${\mathcal H}^s_0(\Omega)$ for some $u\in {\mathcal H}^s_0(\Omega)$. Moreover, $v_{\e_n} \to u$ in $L^p(\Omega)$ as $n \to \infty$ since the embedding ${\mathcal H}^s_0(\Omega) \hookrightarrow L^p(\Omega)$ is compact. In the following, to keep the notation simple, we write $\e$ in place of $\e_n$.
By (\ref{cV-v-deriv-zero-bounded}), (\ref{cV-v-diff-property}) and (\ref{eq:e-n-prime-eq}), we have \begin{equation}\label{eq:ATl-prelim} {\mathcal V}_{v_{\e}}(0)= {\mathcal V}_{v_\e}(\e)- \varepsilon {\mathcal V}_{v_\varepsilon}'(0) + O(\varepsilon^2)[v_\varepsilon]_s^2 = \l_{s,p}^\varepsilon - \varepsilon {\mathcal V}_{v_\varepsilon}'(0) + O(\varepsilon^2)=\l_{s,p}^\varepsilon + O(\varepsilon) \end{equation} and therefore \begin{equation} \label{strong-convergence-v-eps-prelim} {\mathcal V}_{u}(0) =[u]_s^2 \le \liminf_{\varepsilon \to 0} [v_\varepsilon]_s^2 = \liminf_{\varepsilon \to 0}{\mathcal V}_{v_{\e}}(0) \le \limsup_{\varepsilon \to 0}\l_{s,p}^\varepsilon \le \l_{s,p}^0, \end{equation} where the last inequality follows from Lemma~\ref{lem:upper-estim}. In view of \eqref{eq:Jacob} and the strong convergence $v_\varepsilon \to u$ in $L^p(\Omega)$, we see that \begin{equation} \label{eq:div-new-expansion-v-eps} 1 = \int_{\O} v_\e^p\,\textrm{Jac}_{\Phi_\e}dx= \int_{\O} v_\e^p(1+\varepsilon \,\textrm{div}X )dx+O(\e^2) = \int_{\Omega} u^p\,dx +o(1) \end{equation} as $\varepsilon \to 0$, and hence $\|u\|_{L^p(\Omega)}=1$. Combining this with \eqref{strong-convergence-v-eps-prelim}, we see that $u \in {\mathcal H}$ is a minimizer for $\l_{s,p}^0$, and that equality must hold in all inequalities of \eqref{strong-convergence-v-eps-prelim}. From this we deduce that \begin{equation} \label{strong-convergence-v-eps} \text{$v_\varepsilon \to u$ strongly in ${\mathcal H}^s_0(\Omega)$.} \end{equation} Now (\ref{eq:ATl-prelim}) and the variational characterization of $\l_{s,p}^0$ imply that \begin{equation}\label{eq:ATl} \l_{s,p}^0 \left(\int_{\O} v_{\e}^pdx \right)^{2/p}\leq {\mathcal V}_{v_{\e}}(0)= \l_{s,p}(\O_\e) - \varepsilon {\mathcal V}_{v_\varepsilon}'(0) + O(\varepsilon^2), \end{equation} whereas by (\ref{eq:div-new-expansion-v-eps}) we have $$ \int_{\O} v_\e^p\,dx = 1-\varepsilon \int_{\O} v_\e^p \,\textrm{div}X dx+O(\e^2) = 1-\varepsilon \int_{\O} u^p \,\textrm{div}X dx+o(\e) $$ and therefore \begin{equation} \label{eq:div-expansion-lower-bound} \left(\int_{\O} v_{\e}^pdx \right)^{2/p}=1- \frac{2 \e}{p}\int_{\O} u^p \,\textrm{div}X dx +o(\e). \end{equation} Plugging this into (\ref{eq:ATl}), we get the inequality $$ \l_{s,p}^\varepsilon \ge \Bigl(1- \frac{2 \e}{p}\int_{\O} u^p \,\textrm{div}X dx\Bigr) \l_{s,p}^0 + \varepsilon {\mathcal V}_{v_\varepsilon}'(0)+o(\varepsilon). $$ Since, moreover, ${\mathcal V}_{v_\varepsilon}'(0) \to {\mathcal V}_u'(0)$ as $\varepsilon \to 0$ by Lemma~\ref{lem:v_ez_k-ok-corollary} and (\ref{strong-convergence-v-eps}), it follows that $$ \l_{s,p}^\varepsilon-\l_{s,p}^0 \ge \varepsilon \Bigl({\mathcal V}_{u}'(0) -\frac{2 \l_{s,p}^0}{p}\int_{\O} u^p \,\textrm{div}X dx\Bigr)+o(\varepsilon) $$ and therefore $$ \l_{s,p}^\e-\l_{s,p}^0 \ge 2 \varepsilon \kappa_s \int_{\partial\O}(u/\delta^s)^2 X\cdot \nu\,dx+o(\e) $$ by Lemma~\ref{lem:j1kprimes}. We thus conclude that $$ \lim_{\varepsilon \to 0^+}\frac{\l_{s,p}^\e-\l_{s,p}^0}{\e} \ge 2\kappa_s \int_{\partial\O}(u/\delta^s)^2 X\cdot \nu\,dx.
$$
Taking the infimum over $u\in {\mathcal H}$ on the right-hand side of this inequality and using \eqref{eq:lim-to-lim-inf}, we get the result.
\end{proof}

\begin{proof}[Proof of Proposition \ref{prop:shape-deriv} (completed)]
Proposition \ref{prop:shape-deriv} is a consequence of Lemma \ref{lem:upper-estim} and Lemma \ref{lem:lower-estim}. Indeed, let
$$
A_{s,p}(\Omega):= \inf \left\{2\kappa_s \int_{\partial\Omega}(u/\delta^s)^2\, X\cdot \nu\,dx \::\, u\in {\mathcal H} \right\}.
$$
Thanks to \eqref{eq:Bnd-reg-v-eps}, the infimum $A_{s,p}(\Omega)$ is attained. Finally, by Lemma \ref{lem:upper-estim} and Lemma \ref{lem:lower-estim} we get
\begin{align*}
A_{s,p}(\Omega) \geq \partial_\varepsilon^+ \Big|_{\varepsilon=0}\lambda_{s,p}^\varepsilon \geq \liminf_{\varepsilon\searrow 0}\frac{\lambda_{s,p}^\varepsilon-\lambda_{s,p}^0}{\varepsilon}\geq A_{s,p}(\Omega).
\end{align*}
\end{proof}

\section{Proof of the main results}\label{S:proofs}
In this section we complete the proofs of the main results stated in the introduction.

\begin{proof}[Proof of Theorem \ref{th:Shape-deriv} (completed)]
In view of Proposition \ref{prop:shape-deriv}, the proof of Theorem \ref{th:Shape-deriv} is complete once we show that
\begin{equation}\label{eq:k-s-Gamma}
2\kappa_s=\Gamma(1+s)^2,
\end{equation}
where $\Gamma$ is the usual Gamma function. Since, in view of \eqref{eq:-kappa-s-implicit}, the constant $\kappa_s$ does not depend on $N$, $p$ and $\Omega$, we may consider the case $N=p=1$ and the family of diffeomorphisms $\Phi_\varepsilon$ on $\mathbb{R}^N$ given by $\Phi_\varepsilon(x)=(1+\varepsilon)x$, $\varepsilon \in (-1,1)$, so that $X:= \partial_\varepsilon \big|_{\varepsilon = 0} \Phi_\varepsilon$ is simply given by $X(x)=x$. Letting $\Omega_0:=(-1,1)$, we define $\Omega_{\varepsilon}=\Phi_\varepsilon(\Omega_0)=(-1-\varepsilon,1+\varepsilon)$. Moreover, we consider $w_\varepsilon\in {\mathcal H}^s_0(\Omega_\varepsilon) \cap C^s_0([-1-\varepsilon,1+\varepsilon])$ given by
\begin{equation}\label{eq:express-ells}
w_\varepsilon(x)=\ell_s\bigl((1+\varepsilon)^2-|x|^2\bigr)^s_+ \qquad\textrm{with}\qquad \ell_s:= \frac{2^{-2s}\Gamma(1/2)}{\Gamma(s+1/2)\Gamma(1+s)}.
\end{equation}
It is well known that $w_\varepsilon$ is the unique solution of the problem
$$
(-\Delta)^s w_\varepsilon=1 \quad \text{in $\Omega_\varepsilon$,}\qquad w_\varepsilon \equiv 0 \quad \text{on $\mathbb{R}^N \setminus \Omega_\varepsilon$,}
$$
see e.g. \cite{RX-Poh} or \cite{JW17}. Recalling (\ref{eq:1.1}), we thus deduce that $u_\varepsilon = \lambda_{s,1}(\Omega_\varepsilon)w_\varepsilon$ is the unique positive minimizer corresponding to \eqref{eq:def-lambda-sp} in the case $N=p=1$, which implies that $\|u_\varepsilon\|_{L^1(\mathbb{R})}=1$ and therefore
\begin{equation}\label{eq:-to-diff-eq}
\lambda_{s,1}(\Omega_\varepsilon)= \|w_\varepsilon\|_{L^1(\mathbb{R})}^{-1}= (1+\varepsilon)^{-(2s+1)} \|w_0\|_{L^1(\mathbb{R})}^{-1}.
\end{equation}
Moreover, by standard properties of the Gamma function,
\begin{align*}
\|w_0\|_{L^1(\mathbb{R})} &=\ell_s \int_{-1}^1 (1-|x|^2)^s\, dx=2 \ell_s \int_0^1(1-r^2)^s\, dr= \ell_s \int_0^1 t^{-1/2} (1-t)^s\, dt\\
&= \ell_s \frac{\Gamma(1/2)\Gamma(s+1)}{\Gamma(s+3/2)}= \ell_s \frac{\Gamma(1/2)\Gamma(s+1)}{(s+1/2)\Gamma(s+1/2)}= \frac{2^{2s}\,\ell_s^2\,\Gamma(s+1)^2}{s+1/2}.
\end{align*}
By differentiating (\ref{eq:-to-diff-eq}), we get
\begin{equation}\label{eq:diff-simple-case}
\partial_\varepsilon\Big|_{\varepsilon=0}\lambda_{s,1}(\Omega_\varepsilon) =-\frac{2s+1}{\|w_0\|_{L^1(\mathbb{R})}}.
\end{equation}
On the other hand, by Proposition \ref{prop:shape-deriv} and the fact that $u_0$ is the unique positive minimizer for $\lambda_{s,1}$, we deduce that
$$
\partial_\varepsilon^+\Big|_{\varepsilon=0}\lambda_{s,1}(\Omega_\varepsilon)= -2\kappa_s \bigl[(u_0/\delta^s)^2(1)+(u_0/\delta^s)^2(-1)\bigr]= -2^{2+2s} \kappa_s \,\ell_s^2 \,\lambda_{s,1}(\Omega_0)^2= -\frac{2^{2+2s} \kappa_s\, \ell_s^2}{\|w_0\|_{L^1(\mathbb{R})}^2}.
$$
We thus conclude that
$$
2 \kappa_s = \frac{(2s+1)\|w_0\|_{L^1(\mathbb{R})}}{2^{1+2s} \ell_s^2} = \Gamma(s+1)^2.
$$
Thus, by Proposition \ref{prop:shape-deriv}, we get the result as stated in the theorem.
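The Gamma-function computations above reduce to elementary identities that can be checked numerically. The following sketch (plain Python, purely illustrative and not part of the proof) verifies the closed form of $\|w_0\|_{L^1(\mathbb{R})}$ and the resulting identity $2\kappa_s=\Gamma(1+s)^2$ for a few values of $s$; the helper names \texttt{ell} and \texttt{w0\_norm} are ad hoc.

```python
from math import gamma

def ell(s):
    # ell_s = 2^{-2s} Gamma(1/2) / (Gamma(s+1/2) Gamma(1+s))
    return 2**(-2*s) * gamma(0.5) / (gamma(s + 0.5) * gamma(1 + s))

def w0_norm(s):
    # ||w_0||_{L^1} = ell_s * Gamma(1/2) Gamma(s+1) / Gamma(s+3/2)
    return ell(s) * gamma(0.5) * gamma(s + 1) / gamma(s + 1.5)

for s in (0.25, 0.5, 0.75):
    # closed form derived above: ||w_0|| = 2^{2s} ell_s^2 Gamma(s+1)^2 / (s+1/2)
    closed = 2**(2*s) * ell(s)**2 * gamma(s + 1)**2 / (s + 0.5)
    assert abs(w0_norm(s) - closed) < 1e-12
    # 2 kappa_s = (2s+1) ||w_0|| / (2^{1+2s} ell_s^2) should equal Gamma(1+s)^2
    two_kappa = (2*s + 1) * w0_norm(s) / (2**(1 + 2*s) * ell(s)**2)
    assert abs(two_kappa - gamma(1 + s)**2) < 1e-12
```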
\end{proof}

\noindent
\begin{proof}[Proof of Corollary \ref{cor:1.3}]
Let $h\in C^{3}(\partial\Omega)$ with $\int_{\partial\Omega}h\, dx=0$. Then it is well known (see e.g. \cite[Lemma 2.2]{FW18}) that there exists a family of diffeomorphisms $\Phi_\varepsilon: \mathbb{R}^N \to \mathbb{R}^N$, $\varepsilon \in (-1,1)$, satisfying \eqref{eq:def-diffeom} and having the following properties:
\begin{equation}\label{eq:Vectorfil-Y}
\text{$|\Phi_\varepsilon(\Omega)|= |\Omega|$ for $\varepsilon \in (-1,1)$, and $X:= \partial_\varepsilon \big|_{\varepsilon=0}\Phi_\varepsilon$ equals $h \nu$ on $\partial \Omega$.}
\end{equation}
By assumption, there exists $\varepsilon_0 \in (0,1)$ with $\lambda_{s,p}(\Phi_\varepsilon(\Omega)) \geq \lambda_{s,p}(\Omega)$ for $\varepsilon \in (-\varepsilon_0,\varepsilon_0)$. Applying Theorem \ref{th:Shape-deriv} and noting that $X \cdot \nu \equiv h$ on $\partial \Omega$ by \eqref{eq:Vectorfil-Y}, we get
$$
\min \left\{\Gamma(1+s)^2 \int_{\partial\Omega}(u/\delta^s)^2\, h\,dx\::\, u\in \mathcal{H} \right\}\geq 0.
$$
By the same argument applied to $-h$, we get
\begin{equation}
\max \left\{\Gamma(1+s)^2 \int_{\partial\Omega}(u/\delta^s)^2\, h\,dx\::\, u\in \mathcal{H} \right\}\leq 0.
\end{equation}
We thus conclude that
$$
\int_{\partial\Omega}(u/\delta^s)^2\, h\,dx = 0 \qquad \text{for every $u\in \mathcal{H}$ and for all $h\in C^3(\partial\Omega)$ with $\int_{\partial\Omega}h\, dx=0$.}
$$
By a standard argument, this implies that $u/\delta^s$ is constant on $\partial\Omega$. Now, since $u$ solves \eqref{eq:1.1} and $p\in \{1\}\cup [2,\infty)$, we deduce from \cite[Theorem 1.2]{JW17} that $\Omega$ is a ball.
\end{proof}

\begin{proof}[Proof of Theorem \ref{th:1.2}]
Consider the unit centered ball $B_1=B_1(0)$. For $\tau\in (0,1)$ and $t\in (\tau-1,1-\tau)$, we define $B^t:=B_{\tau}(te_1)$, where $e_1$ is the first coordinate direction.
To prove Theorem \ref{th:1.2}, we can take advantage of the invariance of the problem under rotations and may restrict our attention to domains of the form $\Omega(t) = B_1\setminus \overline{B^t}$. We define
\begin{equation}\label{eq:def-theta}
\theta:(\tau-1,1-\tau)\to \mathbb{R}, \qquad \theta(t):=\lambda_{s,p}(\Omega(t)).
\end{equation}
We claim that $\theta$ is differentiable and satisfies
\begin{equation}\label{4.4}
\theta'(t)<0 \qquad\textrm{for $t\in (0,1-\tau)$.}
\end{equation}
For this we fix $t \in (\tau-1,1-\tau)$ and a vector field $X: \mathbb{R}^N \to \mathbb{R}^N$ given by $X(x) = \rho(x)e_{1}$, where $\rho \in C^{\infty}_{c}(B_{1})$ satisfies $\rho \equiv 1$ in a neighborhood of $B^t$. For $\varepsilon \in (-1,1)$, we then define $\Phi_\varepsilon: \mathbb{R}^N \to \mathbb{R}^N$ by $\Phi_\varepsilon(x)=x + \beta \varepsilon X(x)$, where $\beta>0$ is chosen sufficiently small to guarantee that $\Phi_\varepsilon$, $\varepsilon \in (-1,1)$, is a family of diffeomorphisms satisfying \eqref{eq:def-diffeom} and satisfying $\Phi_\varepsilon(B_1) = B_1$ for $\varepsilon \in (-1,1)$. Then, by construction, we have
\begin{equation}\label{eq:o-t-characterization}
\Phi_\varepsilon(\Omega(t))= \Phi_{\varepsilon}\left(B_1\setminus \overline{B^t}\right) = B_1\setminus \overline{\Phi_{\varepsilon}(B^t)}= B_1\setminus \overline{B^{t+\beta \varepsilon}} = \Omega(t+\beta \varepsilon).
\end{equation}
Next we recall that, since $p \in \{1,2\}$, there exists a unique positive minimizer $u \in {\mathcal H}^s_0(\Omega(t))$ corresponding to the variational characterization \eqref{eq:def-lambda-sp} of $\lambda_{s,p}(\Omega(t))$. Hence, by Corollary~\ref{cor:1.2}, the map $\varepsilon \mapsto \lambda_{s,p}(\Phi_\varepsilon(\Omega(t)))$ is differentiable at $\varepsilon = 0$.
In view of (\ref{eq:o-t-characterization}), we thus find that the map $\theta$ in (\ref{eq:def-theta}) is differentiable at $t$, and
\begin{equation}\label{eq:theta-prime-formula}
\theta'(t)= \frac{1}{\beta} \frac{d}{d\varepsilon} \Big|_{\varepsilon=0}\lambda_{s,p}(\Phi_\varepsilon(\Omega(t))) = \Gamma(1+s)^2\int_{\partial\Omega(t)}\left(\frac{u}{\delta^{s}}\right)^{2} X\cdot \nu\, dx = \Gamma(1+s)^2\int_{\partial B^t}\left(\frac{u}{\delta^{s}}\right)^{2} \nu_1\, dx
\end{equation}
by (\ref{eq:cor:1.2-formula}). Here $\nu$ denotes the interior unit normal on $\partial \Omega(t)$, which coincides with the exterior unit normal to $B^t$ on $\partial B^t$, and we used that
$$
X \equiv e_1\quad \text{on $\partial B^t$},\qquad X \equiv 0 \quad \text{on $\partial B_1 = \partial \Omega(t) \setminus \partial B^t$}
$$
to get the last equality in (\ref{eq:theta-prime-formula}). Next, for fixed $t\in (0,1-\tau)$, let $H$ be the half space defined by $H= \{x\in \mathbb{R}^N: x\cdot e_1> t\}$ and let $\Theta = H \cap \Omega(t)$. We also let $r_H:\mathbb{R}^N\to \mathbb{R}^N$ be the reflection map with respect to the hyperplane $\partial H:=\{x\in \mathbb{R}^N: x\cdot e_1= t\}$. For $x\in \mathbb{R}^N$, we write $\overline{x}:= r_{H}(x)$ and $\overline{u}(x):= u(\overline{x})$. Using these notations, we have
\begin{align}
\theta'(t) &= \Gamma(1+s)^2\int_{\partial B^{t}}\left(\frac{u}{\delta^{s}}\right)^{2}\nu_{1}\,dx \nonumber\\
&=\Gamma(1+s)^2\int_{\partial B^{t}\cap\Theta}\left( \left(\frac{u}{\delta^{s}}\right)^{2}(x)-\left(\frac{\overline u}{\delta^{s}} \right)^{2}(x)\right) \nu_{1}\, dx. \label{eq:firslet-derap}
\end{align}
Let $w = \overline u -u \in H^s(\mathbb{R}^N)$.
Then $w$ is a (weak) solution of the problem
\begin{equation}
(-\Delta)^s w= \lambda_{s,p}(\Omega({t})) \overline u^{p-1}- \lambda_{s,p}(\Omega({t})) u^{p-1}= c_p w \qquad\text{in}\quad \Theta,
\end{equation}
where
$$
\begin{cases}
c_p:= \lambda_{s,p}(\Omega({t}))&\qquad \textrm{for $p=2$,}\\
c_p:=0 &\qquad \textrm{for $p=1$}.
\end{cases}
$$
Moreover, by definition, $w \equiv \overline u\geq 0$ in $H\setminus \overline\Theta$, and $w \equiv \overline u >0$ in the subset $[r_H(B_1) \cap H] \setminus \overline\Theta$, which has positive measure since $t>0$. Using that $w$ is antisymmetric with respect to $H$ and the fact that $\lambda_{s,p}(\Theta)> c_p$ (which follows since $\Theta$ is a proper subdomain of $\Omega(t)$), we can apply the weak maximum principle for antisymmetric functions (see \cite[Proposition 3.1]{JW17} or \cite[Proposition 3.5]{JW}) to deduce that $w\ge 0$ in $\Theta$. Moreover, since $w \not \equiv 0$ in $\mathbb{R}^N$, it follows from the strong maximum principle for antisymmetric functions given in \cite[Proposition 3.6]{JW} that $w>0$ in $\Theta$. Now by the fractional Hopf lemma for antisymmetric functions (see \cite[Proposition 3.3]{JW17}) we conclude that
$$
0< \frac{w}{\delta^s}= \frac{\overline u}{\delta^s}- \frac{u}{\delta^s}\quad \text{and therefore}\quad \frac{\overline u}{\delta^s} > \frac{u}{\delta^s} \geq 0 \qquad\textrm{on $\partial B^{t} \cap \Theta$}.
$$
From this and \eqref{eq:firslet-derap} we get \eqref{4.4}, since $\nu_1>0$ on $\partial B^{t}\cap \Theta$.\\
To conclude, we observe that the function $t\mapsto \theta(t)=\lambda_{s,p}(\Omega(t))$ is even, thanks to the invariance of the problem under rotations. Therefore the function $\theta$ attains its maximum uniquely at $t=0$.
\end{proof}

\section{Proof of Proposition \ref{lem:lim-ok}}\label{s:proof-Lemma-conv}
The aim of this section is to prove Proposition \ref{lem:lim-ok}.
For the reader's convenience, we repeat the statement here.
\begin{proposition}\label{lem:lim-ok-new-section}
Let $X \in C^{0}(\overline \Omega,\mathbb{R}^N)$, let $u \in C^s_0(\overline \Omega)\cap C^1(\Omega)$, and assume that $\psi:=\frac{u}{\delta^s}$ extends to a function on $\overline \Omega$ satisfying \eqref{eq:Bnd-reg-v-eps} and \eqref{eq:Grad-bnd-psi-eps}. Moreover, put $U_k:=u \zeta_k \in C^{1,1}_c(\Omega)$, where $\zeta_k$ is defined in (\ref{eq:def-z-k-etc}). Then
\begin{equation}\label{eq:lem:new-section-claim}
\lim_{k\to \infty}\int_{\Omega}\nabla U_k \cdot X \Bigl( u (-\Delta)^s \zeta_k -I(u,\zeta_k)\Bigr)\,dx= - \kappa_s \int_{\partial\Omega} \psi^2\, X\cdot \nu\, dx,
\end{equation}
where
\begin{equation}\label{eq:-kappa-s-implicit-new-section}
\kappa_s:= -\int_{\mathbb{R}} h'(r) (-\Delta)^s h(r)\, dr \qquad \text{with}\quad h(r):=r^s_+ \zeta(r)
\end{equation}
and $\zeta$ given in (\ref{eq:def-z}), and where we use the notation
\begin{equation}\label{eq:def-I-u-v-new-section}
I(u,v)(x):= \int_{\mathbb{R}^N}{(u(x)-u(y))(v(x)-v(y))}K_0(x,y)\, dy
\end{equation}
for $u\in C^s_c(\mathbb{R}^N)$, $v\in C^{0,1}(\mathbb{R}^N)$ and $x \in \mathbb{R}^N$.
\end{proposition}
The remainder of this section is devoted to the proof of this proposition. For $k \in \mathbb{N}$, we define
\begin{equation}\label{eq:def-ge-k}
g_k:= \nabla U_k \cdot X \Bigl( u (-\Delta)^s \zeta_k -I(u,\zeta_k)\Bigr) \quad : \quad \Omega \to \mathbb{R}.
\end{equation}
For $\varepsilon>0$, we put
$$
\Omega^\varepsilon= \{x \in \mathbb{R}^N\::\: |\delta(x)|< \varepsilon\} \qquad \text{and}\qquad \Omega^\varepsilon_+= \{x \in \mathbb{R}^N\::\: 0< \delta(x)< \varepsilon\}= \{x \in \Omega\::\: \delta(x)< \varepsilon\}.
$$
For every $\varepsilon>0$, we then have
\begin{equation}\label{eq:reduction-to-eps-collar-prelim-1}
\lim_{k\to \infty}\int_{\Omega \setminus \Omega^\varepsilon} g_k\,dx =0.
\end{equation}
To see this, we first note that $\zeta_k \to 1$ pointwise on $\mathbb{R}^N \setminus \partial \Omega$, and therefore a.e. on $\mathbb{R}^N$. Moreover, choosing a compact neighborhood $K \subset \Omega$ of $\Omega \setminus \Omega^\varepsilon$, we have
$$
(-\Delta)^s \zeta_k(x)= c_{N,s}\int_{\mathbb{R}^N \setminus K}\frac{1- \zeta_k(y)}{|x-y|^{N+2s}}\,dy \qquad \text{for $x \in \Omega \setminus \Omega^\varepsilon$ and $k$ sufficiently large,}
$$
where $\frac{|1- \zeta_k(y)|}{|x-y|^{N+2s}}\le \frac{C}{1+|y|^{N+2s}}$ for $x \in \Omega \setminus \Omega^\varepsilon$, $y\in \mathbb{R}^N \setminus K$ and $C>0$ independent of $x$ and $y$. Consequently, $\| (-\Delta)^s \zeta_k\|_{L^\infty(\Omega \setminus \Omega^\varepsilon)}$ remains bounded independently of $k$ and $(-\Delta)^s \zeta_k \to 0$ pointwise on $\Omega \setminus \Omega^\varepsilon$ by the dominated convergence theorem. Similarly, we see that $\|I(u,\zeta_k)\|_{L^\infty(\Omega \setminus \Omega^\varepsilon)}$ remains bounded independently of $k$ and $I(u,\zeta_k) \to 0$ pointwise on $\Omega \setminus \Omega^\varepsilon$. Consequently, we find that
$$
\text{$\|g_k\|_{L^\infty(\Omega \setminus \Omega^\varepsilon)}$ is bounded independently of $k$ and $g_k \to 0$ pointwise on $\Omega \setminus \Omega^\varepsilon$.}
$$
Hence (\ref{eq:reduction-to-eps-collar-prelim-1}) follows again by the dominated convergence theorem.
As a consequence,
\begin{equation}\label{eq:reduction-to-eps-collar}
\lim_{k\to \infty}\int_{\Omega} g_k(x) \,dx = \lim_{k\to \infty}\int_{\Omega^\varepsilon_+} g_k(x)\,dx \qquad \text{for every $\varepsilon>0$.}
\end{equation}
As before, let $\nu: \partial \Omega \to \mathbb{R}^N$ denote the unit interior normal vector field on $\partial \Omega$. Since we assume that $\partial \Omega$ is of class $C^{1,1}$, the map $\nu$ is Lipschitz, which means that the derivative $d \nu: T \partial \Omega \to \mathbb{R}^N$ is a.e. well defined and bounded. Moreover, we may fix $\varepsilon>0$ from now on with the property that the map
\begin{equation}\label{eq:Om-diffeo}
\Psi: \partial \Omega \times (-\varepsilon, \varepsilon) \to \Omega^{\varepsilon},\qquad (\sigma,r) \mapsto \Psi(\sigma,r)= \sigma + r \nu(\sigma)
\end{equation}
is a bi-Lipschitz map with $\Psi(\partial \Omega \times (0, \varepsilon))= \Omega^{\varepsilon}_+$. In particular, $\Psi$ is a.e.
differentiable, and the variable $r$ is precisely the signed distance of the point $\Psi(\sigma,r)$ to the boundary $\partial \Omega$, i.e.,
\begin{equation}\label{eq:new-delta-equation}
\delta(\Psi(\sigma,r)) = r \qquad \text{for $\sigma \in \partial \Omega,\: 0 \le r < \varepsilon.$}
\end{equation}
Moreover, for $0<\varepsilon' \le \varepsilon$, it follows from (\ref{eq:reduction-to-eps-collar}) that
\begin{align}
\lim_{k\to \infty}\int_{\Omega} g_k\,dx&= \lim_{k\to \infty}\int_{\Omega^{\varepsilon'}_+} g_k\,dx = \lim_{k\to \infty}\int_{\partial \Omega} \int_{0}^{\varepsilon'}{\rm Jac}_{\Psi}(\sigma,r)\, g_k(\Psi(\sigma,r))\,dr\, d\sigma \nonumber\\
&= \lim_{k\to \infty}\frac{1}{k} \int_{\partial \Omega} \int_{0}^{k \varepsilon'}j_k(\sigma,r)\, G_k(\sigma,r) \,dr\, d\sigma, \label{claim-first-reduction}
\end{align}
where we define
\begin{equation}\label{eq:def-j-k-G-k}
j_k(\sigma,r) = {\rm Jac}_{\Psi}(\sigma,\frac{r}{k}) \quad \text{and}\quad G_k(\sigma,r)=g_k(\Psi(\sigma,\frac{r}{k})) \qquad \text{for a.e. $\sigma \in \partial \Omega,\: 0 \le r < k \varepsilon.$}
\end{equation}
We note that
\begin{equation}\label{eq:j-k-est}
\begin{aligned}
&\|j_k\|_{L^\infty(\partial \Omega \times [0,k\varepsilon))} \le \|{\rm Jac}_{\Psi}\|_{L^\infty(\Omega_\varepsilon)}< \infty \quad \text{for all $k$, and}\\
&\lim_{k \to \infty} j_k(\sigma,r)= {\rm Jac}_{\Psi}(\sigma,0)=1 \quad \text{for a.e.
$\sigma \in \partial \Omega$, $r>0$.}
\end{aligned}
\end{equation}
By definition of the functions $g_k$ in (\ref{eq:def-ge-k}), we may write
\begin{equation}\label{G-k-splitting}
G_k(\sigma,r) = G_{k}^0(\sigma,r)[G_{k}^1(\sigma,r) - G_{k}^2(\sigma,r)] \qquad \text{for $\sigma \in \partial \Omega,\: 0 \le r < k \varepsilon$}
\end{equation}
with
\begin{equation}\label{h-0-1-2}
\begin{aligned}
G_k^0(\sigma,r) &= [\nabla U_k \cdot X](\Psi(\sigma,\frac{r}{k})),\\
G_k^1(\sigma,r) &= [u (-\Delta)^s \zeta_k](\Psi(\sigma,\frac{r}{k})) \qquad \text{and}\\
G_k^2(\sigma,r) &= I(u,\zeta_k)(\Psi(\sigma,\frac{r}{k})).
\end{aligned}
\end{equation}
In order to analyze the limit in (\ref{claim-first-reduction}) for suitable $\varepsilon' \in (0,\varepsilon]$, we provide estimates for the functions $G_k^0, G_k^1, G_k^2$ separately in the following. We start with an estimate for $G_k^0$ given by the following lemma.
\begin{lemma}\label{first-function-est}
Let $\alpha \in (0,1)$ be given by Lemma~\ref{reg-prop-minimizers}. Then we have
\begin{equation}\label{eq:h-k-0-bound}
k^{s-1}|G_k^0(\sigma,r)| \le C(r^{s-1}+r^{s-1+\alpha}) \qquad \text{for $k \in \mathbb{N}$, $0 \le r < k\varepsilon$}
\end{equation}
with a constant $C>0$, and
\begin{equation}\label{eq:h-k-0-limit}
\lim_{k \to \infty} k^{s-1} G_k^0(\sigma,r) = h'(r)\, \psi(\sigma)\, [X(\sigma) \cdot \nu(\sigma)] \quad \text{for $\sigma \in \partial \Omega$, $r>0$}
\end{equation}
with the function $r \mapsto h(r)= r^s_+ \zeta(r)$ given in \eqref{eq:-kappa-s-implicit-new-section}.
\end{lemma}
\begin{proof}
Since $u= \psi \delta^s$, we have
$$
\nabla u = s \delta^{s-1} \psi \nabla \delta + \delta^s \nabla \psi = s \delta^{s-1} \psi \nabla \delta + O(\delta^{s-1+\alpha})\qquad \text{in $\Omega$}
$$
by Lemma~\ref{reg-prop-minimizers}, and therefore, since $\zeta_k = \zeta\circ (k \delta)$ by (\ref{eq:def-z-k-etc}),
$$
\nabla U_k = \nabla \Bigl(u \zeta_k \Bigr)= \Bigl(s\, \zeta\circ (k \delta) + k \delta\, \zeta' \circ (k \delta)\Bigr) \psi\, \delta^{s-1}\nabla \delta + O(\delta^{s-1+\alpha}) \qquad \text{in $\Omega$.}
$$
Consequently, by (\ref{eq:new-delta-equation}) we have
$$
\bigl[\bigl(\nabla U_k\bigr) \circ \Psi\bigr](\sigma,\frac{r}{k}) = \Bigl(s \zeta(r) + r \zeta'(r)\Bigr) \psi(\sigma + \frac{r}{k}\nu(\sigma)) \bigl(\frac{r}{k}\bigr)^{s-1} \nabla \delta(\sigma + \frac{r}{k}\nu(\sigma)) + O\Bigl(\bigl(\frac{r}{k}\bigr)^{s-1+\alpha}\Bigr)
$$
for $\sigma \in \partial \Omega$, $0 \le r < \varepsilon$ with $O(r^{s-1+\alpha})$ independent of $k$, and therefore
\begin{align*}
G_k^0(\sigma,r)&= \Bigl(s \zeta(r) + r \zeta'(r)\Bigr) \psi(\sigma + \frac{r}{k}\nu(\sigma))\, \nabla \delta(\sigma + \frac{r}{k}\nu(\sigma)) \cdot X(\sigma + \frac{r}{k}\nu(\sigma))\,k^{1-s} r^{s-1}\\
&\quad + k^{1-s-\alpha} O(r^{s-1+\alpha}) \qquad \text{for $\sigma \in \partial \Omega$, $0 \le r < k \varepsilon$.}
\end{align*}
Since $\alpha>0$, we deduce that
\begin{align*}
k^{s-1}G_k^0(\sigma,r) &\to \Bigl(s \zeta(r) + r \zeta'(r)\Bigr) \psi(\sigma)\, \nabla \delta(\sigma) \cdot X(\sigma)\,r^{s-1}=h'(r)\, \psi(\sigma)\, X(\sigma) \cdot \nu(\sigma) \qquad \text{as $k \to \infty$}
\end{align*}
for $\sigma \in \partial \Omega$, $r>0$, while
$$
k^{s-1}|G_k^0(\sigma,r)| \le
C(r^{s-1}+r^{s-1+\alpha}) \qquad \text{for $k \in \mathbb{N}$, $0 \le r < k\varepsilon$}
$$
with a constant $C>0$ independent of $k$ and $r$, as claimed.
\end{proof}
Next we consider the functions $G_k^1$ defined in (\ref{h-0-1-2}), and we first state the following estimate.
\begin{proposition}\label{delta-s-z-k-conv}
There exists $\varepsilon'>0$ with the property that
\begin{equation}\label{eq:delta-s-z-k-concl-bound}
|k^{-2s} (-\Delta)^s \zeta_k(\Psi(\sigma, \frac{r}{k}))|\le \frac{C}{1+r^{1+2s}}\qquad \text{for $k \in \mathbb{N}$, $0 \le r < k \varepsilon'$}
\end{equation}
with a constant $C>0$. Moreover,
\begin{equation}\label{eq:delta-s-z-k-concl-limit}
\lim_{k \to \infty}k^{-2s} (-\Delta)^s \zeta_k(\Psi(\sigma, \frac{r}{k})) = (-\Delta)^s \zeta(r) \quad \text{for $\sigma \in \partial \Omega$, $r>0$.}
\end{equation}
\end{proposition}
Before giving the somewhat lengthy proof of this proposition, we infer the following corollary related to the functions $G_k^1$.
\begin{corollary}\label{corol-second-function-est}
There exists $\varepsilon'>0$ with the property that
\begin{equation}\label{eq:h-2-bound}
|k^{-s}G_k^1(\sigma,r)|\le \frac{C r^s}{1+r^{1+2s}}\qquad \text{for $k \in \mathbb{N}$, $0 \le r < k \varepsilon'$}
\end{equation}
with a constant $C>0$.
Moreover,
\begin{equation}\label{eq:h-2-limit}
\lim_{k \to \infty}k^{-s}G_k^1(\sigma,r) = \psi(\sigma)\, r^s (-\Delta)^s \zeta(r) \quad \text{for $\sigma \in \partial \Omega$, $r>0$.}
\end{equation}
\end{corollary}
\begin{proof}
Since $u = \psi \delta^s$, we have $u(\Psi(\sigma,\frac{r}{k}))= k^{-s} \psi(\sigma + \frac{r}{k} \nu(\sigma))\, r^s$ for $k \in \mathbb{N}$, $0 \le r < k\varepsilon$, and
$$
\lim_{k \to \infty}k^{s}u(\Psi(\sigma,\frac{r}{k})) = \psi(\sigma)\, r^s \quad \text{for $\sigma \in \partial \Omega$, $r>0$.}
$$
Since moreover $\|\psi\|_{L^\infty(\Omega_\varepsilon)}<\infty$, the claim now follows from Proposition~\ref{delta-s-z-k-conv} by recalling the definition of $G_k^1$ in (\ref{h-0-1-2}).
\end{proof}
We now turn to the proof of Proposition~\ref{delta-s-z-k-conv}, for which we need some preliminary considerations. Since $\partial \Omega$ is of class $C^{1,1}$ by assumption, there exists an open ball $B \subset \mathbb{R}^{N-1}$ centered at the origin and, for every $\sigma\in \partial\Omega$, a parametrization $f_\sigma: B \to \partial\Omega$ of class $C^{1,1}$ with the property that $f_\sigma(0)=\sigma$ and that $d f_\sigma(0): \mathbb{R}^{N-1} \to \mathbb{R}^N$ is a linear isometry. For $z \in B$ we then have
$$
f_\sigma(z)-f_\sigma(0)= df_\sigma(0)z + O(|z|^2)
$$
and therefore
\begin{align}
\label{eq:f-s-first-exp}
&|f_\sigma(0)-f_\sigma(z)|^2 = |df_\sigma(0)z|^2 + O(|z|^3) = |z|^2 + O(|z|^3),\\
\label{eq:f-s-second-exp}
&(f_\sigma(0)-f_\sigma(z))\cdot \nu(\sigma) = - df_\sigma(0)z \cdot \nu(\sigma)+ O(|z|^2)= O(|z|^2),
\end{align}
where we used in (\ref{eq:f-s-second-exp}) that $d f_\sigma(0)z$ belongs to the tangent space $T_\sigma \partial \Omega = \{\nu(\sigma)\}^\perp$. Here and in the following, the term $O(\tau)$ stands for a function depending on $\tau$ and possibly other quantities but satisfying $|O(\tau)| \le C \tau$ with a constant $C>0$.
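The boundary expansions \eqref{eq:f-s-first-exp} and \eqref{eq:f-s-second-exp} can be illustrated on the simplest example, the unit circle in $\mathbb{R}^2$ with $\sigma=(1,0)$, arc-length parametrization $f_\sigma(z)=(\cos z,\sin z)$ and interior normal $\nu(\sigma)=(-1,0)$. The following numerical sketch (plain Python, illustrative only and not part of the argument) confirms both remainder orders, and also the tubular-coordinate identity $\delta(\Psi(\sigma,r))=r$ from \eqref{eq:new-delta-equation} for this domain.

```python
from math import cos, sin

# Unit circle, sigma = (1, 0): f(z) = (cos z, sin z) is an arc-length
# parametrization with f(0) = sigma; the interior unit normal is nu = (-1, 0).
for z in (0.2, 0.1, 0.05):
    dx, dy = 1.0 - cos(z), -sin(z)            # f(0) - f(z)
    dist_sq = dx * dx + dy * dy               # |f(0) - f(z)|^2 = 2 - 2 cos z
    assert abs(dist_sq - z * z) <= z**3       # = |z|^2 + O(|z|^3)
    normal_part = -dx                         # (f(0) - f(z)) . nu
    assert abs(normal_part) <= z * z          # = O(|z|^2)

# Tubular coordinates Psi(sigma, r) = sigma + r nu(sigma) = (1 - r, 0):
# the distance of Psi(sigma, r) to the boundary circle is exactly r.
for r in (0.1, 0.3):
    delta = 1.0 - ((1.0 - r)**2 + 0.0**2) ** 0.5
    assert abs(delta - r) < 1e-12
```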
Recalling the definition of the map $\Psi$ in \eqref{eq:Om-diffeo} and writing $\nu_\sigma(z):= \nu(f_\sigma(z))$ for $z \in B$, we now define
\begin{equation}
\Psi_\sigma: (-\varepsilon,\varepsilon) \times B \to \Omega^\varepsilon,\qquad \Psi_\sigma(r,z) = \Psi(f_\sigma(z),r) = f_\sigma(z) + r \nu_\sigma(z).
\end{equation}
Then $\Psi_\sigma$ is a bi-Lipschitz map which maps $(-\varepsilon,\varepsilon) \times B$ onto a neighborhood of $\sigma$. Consequently, there exists $\varepsilon' \in (0,\frac{\varepsilon}{2})$ with the property that
\begin{equation}\label{eq:distance-sigma-ineq}
|\sigma-y| \ge 3\varepsilon' \qquad \text{for all $y \in \mathbb{R}^N \setminus \Psi_\sigma((-\varepsilon,\varepsilon) \times B)$.}
\end{equation}
Moreover, $\varepsilon'$ can be chosen independently of $\sigma \in \partial \Omega$. Coming back to the proof of Proposition~\ref{delta-s-z-k-conv}, we now write, for $\sigma \in \partial \Omega$ and $r \in [0,k\varepsilon')$,
\begin{equation}\label{eq:delta-s-splitting}
(-\Delta)^s \zeta_k(\Psi(\sigma,\frac{r}{k})) = c_{N,s} \Bigl( A_k(\sigma,r)+ B_k(\sigma,r) \Bigr)
\end{equation}
with
$$
A_{k}(\sigma,r) := \int_{\Psi_\sigma((-\varepsilon,\varepsilon) \times B)} \frac{\zeta(r)-\zeta_k(y)}{|\Psi(\sigma,\frac{r}{k}) - y|^{N+2s}}\, dy
$$
and
$$
B_k(\sigma,r) := \int_{\mathbb{R}^N \setminus \Psi_\sigma((-\varepsilon,\varepsilon) \times B)} \frac{\zeta(r)-\zeta_k(y)}{|\Psi(\sigma,\frac{r}{k})- y|^{N+2s}}\, dy.
$$
Here we used that $\zeta_k(\Psi(\sigma,\frac{r}{k}))= \zeta(r)$ for $\sigma \in \partial \Omega$, $r \in [0,k \varepsilon')$ by (\ref{eq:new-delta-equation}) and the definition of $\zeta_k$. We first provide a rather straightforward estimate for the functions $B_k$.
\begin{lemma}\label{B-k-new-lemma}
We have
\begin{equation}\label{eq:B-estimate-majorization}
k^{-2s}|B_k(\sigma,r)| \le \frac{C}{1+r^{1+2s}} \qquad \text{for $k \in \mathbb{N}$, $0 \le r < k \varepsilon'$, $\sigma \in \partial \Omega$}
\end{equation}
with a constant $C>0$, and
\begin{equation}\label{eq:B-estimate-limit}
\lim_{k \to \infty}k^{-2s}|B_k(\sigma,r)|=0 \qquad \text{for every $\sigma \in \partial \Omega$, $r\ge 0$.}
\end{equation}
\end{lemma}
\begin{proof}
By (\ref{eq:distance-sigma-ineq}) and since $r < k\varepsilon'$, we have
$$
|\Psi(\sigma,\frac{r}{k})- y|= |\sigma - y + \frac{r}{k}\nu(\sigma)|\ge |\sigma - y|- \frac{r}{k} \ge \frac{|\sigma - y|}{3} +\varepsilon' \quad \text{for $y \in \mathbb{R}^N \setminus \Psi_\sigma((-\varepsilon,\varepsilon) \times B)$.}
$$
Recalling that $\zeta= 1 -\rho$, $\:\zeta_k = 1-\rho_k$ and that $\rho_k$ is supported in $\Omega^{2/k}$, we thus estimate
\begin{align*}
|B_k(\sigma,r)| &\le \int_{\mathbb{R}^N \setminus \Psi_\sigma((-\varepsilon,\varepsilon) \times B)} \frac{|\rho(r)-\rho_k(y)|}{|\Psi(\sigma,\frac{r}{k})- y|^{N+2s}}\, dy \\
&\le 3^{N+2s} |\rho(r)| \int_{\mathbb{R}^N} \Bigl(|\sigma - y| +3 \varepsilon'\Bigr)^{-N-2s}dy + \bigl(\varepsilon'\bigr)^{-N-2s} \int_{\mathbb{R}^N} |\rho_k(y)| \, dy\\
&\le C \Bigl(|\rho(r)| +|\Omega^{2/k}|\Bigr) \le C \Bigl(|\rho(r)| + k^{-1}\Bigr).
\end{align*}
Here and in the following, the letter $C$ stands for various positive constants. This estimate readily yields (\ref{eq:B-estimate-limit}).
Moreover,
$$
k^{-2s}|B_k(\sigma,r)| \le C k^{-2s}\Bigl(|\rho(r)| + k^{-1}\Bigr) \le \frac{C}{1+r^{1+2s}} + k^{-1-2s} \le \frac{C}{1+r^{1+2s}}
$$
for $k \in \mathbb{N}$, $0 \le r < k \varepsilon'$, $\sigma \in \partial \Omega$, as claimed in (\ref{eq:B-estimate-majorization}).
\end{proof}
To complete the proof of Proposition~\ref{delta-s-z-k-conv}, it thus remains to consider the functions $A_{k}$ in the following. For this, we need the following additional estimates for the maps $\Psi_\sigma$, $\sigma \in \partial \Omega$. We note here that $\Psi_\sigma$ is a.e. differentiable since it is Lipschitz, so the Jacobian determinant ${\rm Jac}_{\Psi_\sigma}$ is a.e. well-defined on $(-\varepsilon,\varepsilon) \times B$.
\begin{lemma}\label{new-lemma-Psi-sigma}
There exists a constant $C_0$ with the property that for every $\sigma \in \partial \Omega$ we have the following estimates:\\[0.3cm]
(i) $\quad |{\rm Jac}_{\Psi_\sigma}(r,z)| \le C_0$ for a.e. $r \in (-\varepsilon,\varepsilon)$, $z \in B$;\\[0.3cm]
(ii) $\quad |{\rm Jac}_{\Psi_\sigma}(r,z)-1| \le C_0(|r|+|z|)$ for a.e. $r \in (-\varepsilon,\varepsilon)$, $z \in B$;\\[0.3cm]
(iii) $\quad |{\rm Jac}_{\Psi_\sigma}(r+t,z)-{\rm Jac}_{\Psi_\sigma}(r-t,z)| \le C_0|t|$ for a.e.
$r \in (-\varepsilon,\varepsilon)$, $z \in B$, $t \in (-\varepsilon-r,\varepsilon-r)$.\\[0.3cm]
Moreover, for $\sigma \in \partial \Omega$, $r \in (-\varepsilon,\varepsilon)$, $z \in B$, $t \in (-\varepsilon-r,\varepsilon-r)$ we have\\[0.3cm]
(iv) $\quad \frac{1}{C_0} \bigl(t^2 + |z|^2\bigr)^{\frac{1}{2}} \le |\Psi_\sigma(r,0)- \Psi_\sigma(r+t,z)| \le C_0 \bigl(t^2 + |z|^2\bigr)^{\frac{1}{2}}$,\\[0.3cm]
and for $\sigma \in \partial \Omega$, $r \in (-\varepsilon,\varepsilon)$, $t \in (-\varepsilon-r,\varepsilon-r) \setminus \{0\}$ and $z \in \frac{1}{|t|}B$ we have\\[0.3cm]
(v) $\quad \Bigl|\frac{|\Psi_\sigma(r,0)- \Psi_\sigma(r+t ,|t| z)|^{2}}{t^2}-(1+|z|^2)\Bigr| \le C_0 (|t|+ |r| + |tz|)|z|^2$;\\[0.3cm]
(vi) $\quad \Bigl|\bigl|\Psi_\sigma(r,0)- \Psi_\sigma(r+t ,|t| z)\bigr|^{-N-2s}- \bigl|\Psi_\sigma(r,0)- \Psi_\sigma(r-t ,|t| z)\bigr|^{-N-2s}\Bigr| \le C_0 |t|^{1-N-2s}(1+|z|^2)^{-\frac{N+2s}{2}}$.
\end{lemma}
\begin{proof}
The inequalities (i) and (iv) are direct consequences of the fact that $\Psi_\sigma$ is bi-Lipschitz. In particular, if $C_0$ is a Lipschitz constant for $\Psi_\sigma^{-1}$, we have
$$
\bigl(t^2 + |z|^2\bigr)^{\frac{1}{2}} = |(-t,z)|= |(r,0) -(r+t,z)| \le C_0 |\Psi_\sigma(r,0)- \Psi_\sigma(r+t,z)|
$$
for $\sigma \in \partial \Omega$, $r \in (-\varepsilon,\varepsilon)$, $z \in B$ and $t \in (-\varepsilon-r,\varepsilon-r)$, so the first inequality in (iv) follows. By making $C_0$ larger if necessary so that it is also a Lipschitz constant for $\Psi_\sigma$, we then deduce the second inequality in (iv). To see (ii) and (iii), we note that $d \Psi_\sigma$ is a.e.
given by
$$
d \Psi_\sigma (r,z)(r',z') = [d f_\sigma(z)+ r\, d \nu_\sigma(z)]z' + r' \nu_\sigma(z)
$$
for $(r,z) \in (-\varepsilon,\varepsilon) \times B$, $(r',z') \in \mathbb{R} \times \mathbb{R}^{N-1}$, which implies that
$$
[d \Psi_\sigma (r,z)-d \Psi_\sigma (0,0)] (r',z') = [d f_\sigma(z)-df_\sigma(0)]z'+ r\, d \nu_\sigma(z)z' +r' \bigl(\nu_\sigma(z)-\nu_\sigma(0)\bigr)
$$
and
$$
[d \Psi_\sigma (r+t,z)-d \Psi_\sigma (r-t,z)] (r',z') = 2t\, d \nu_\sigma(z) z'.
$$
Since $d f_\sigma$ and $\nu_\sigma$ are Lipschitz functions on $B$, $d\nu_\sigma$ is a bounded function on $B$ and the determinant is a locally Lipschitz continuous function on the space of linear endomorphisms of $\mathbb{R}^N$, it follows that
$$
|{\rm Jac}_{\Psi_\sigma}(r,z)-{\rm Jac}_{\Psi_\sigma}(0,0)| \le C_0(|r|+|z|) \quad \text{and}\quad |{\rm Jac}_{\Psi_\sigma}(r+t,z)-{\rm Jac}_{\Psi_\sigma}(r-t,z)| \le C_0|t|
$$
for a.e. $r \in (-\varepsilon,\varepsilon)$, $z \in B$, $t \in (-\varepsilon-r,\varepsilon-r)$. Moreover, ${\rm Jac}_{\Psi_\sigma}(0,0) = 1$ since the map
$$
\mathbb{R} \times \mathbb{R}^{N-1} \to \mathbb{R}^N, \qquad (r',z') \mapsto d\Psi_\sigma(0,0)(r',z') = d f_\sigma(0)z' + r' \nu_\sigma(0)
$$
is an isometry. Hence (ii) and (iii) follow. To see (v) and (vi), we note that by definition of $\Psi_\sigma$ we have
$$
\Psi_\sigma(r,0)- \Psi_\sigma(r+t,z)=f_\sigma(0)-f_\sigma(z)-t \nu_\sigma(0)+(r+t)(\nu_\sigma(0)-\nu_\sigma(z))
$$
for $z \in B$, $r \in (0,\varepsilon')$ and $t \in (-\varepsilon-r,\varepsilon-r)$.
Using moreover that $(\nu_\s(0)-\nu_\s(z))\cdot \nu_\s(0)=\frac{1}{2}|\nu_\s(0)-\nu_\s(z)|^2$, we get \begin{align} &|{\Psi_\s}(r,0)- {\Psi_\s}(r+t,z)|^2 =t^2+ |f_\s(0)-f_\s(z)|^2 +(r+t)^2|\nu_\s(0)-\nu_\s(z)|^2 \nonumber\\ &- 2t(f_\s(0)-f_\s(z))\cdot \nu_\s(0) -t(r+t)|\nu_\s(0)-\nu_\s(z)|^2 +2(r+t)(f_\s(0)-f_\s(z))\cdot (\nu_\s(0)-\nu_\s(z)) \nonumber\\ &=t^2+ |f_\s(0)-f_\s(z)|^2 +r(r+t)|\nu_\s(0)-\nu_\s(z)|^2 \nonumber\\ &\qquad \qquad \qquad - 2t(f_\s(0)-f_\s(z))\cdot \nu_\s(0) +2(r+t)(f_\s(0)-f_\s(z))\cdot (\nu_\s(0)-\nu_\s(z)) \nonumber \\ &=t^2+ |z|^2 + \bigl[|z| m_\s(z) +r(r+t) n_\sigma(z)-2t p_\s(z) +2(r+t)q_\sigma(z) \bigr]|z|^2 \label{prelim-expansion-psi-s-diff} \end{align} for $z \in B$, $r \in (-\e,\e)$ and $t \in (-\varepsilon-r,\e-r)$ with the functions $$ m_\s(z) = \frac{|f_\s(0)-f_\s(z)|^2-|z|^2}{|z|^3}, \quad n_\s(z)= \frac{|\nu_\s(0)-\nu_\s(z)|^2}{|z|^2},\quad p_\s(z)= \frac{(f_\s(0)-f_\s(z))\cdot \nu_\s(0)}{|z|^2} $$ and $$ q_\s(z) = \frac{(f_\s(0)-f_\s(z))\cdot (\nu_\s(0)-\nu_\s(z))}{|z|^2}, \qquad z \in B \setminus \{0\}, $$ which are all bounded as a consequence of the Lipschitz continuity of $f_\s$ and $\nu_\s$ and of (\ref{eq:f-s-first-exp}) and (\ref{eq:f-s-second-exp}). We deduce that \begin{align*} &\Bigl|\frac{|{\Psi_\s}(r,0)- {\Psi_\s}(r+t,|t| z)|^{2}}{t^2}-(1+|z|^2)\Bigr|\\ &= \Bigl||tz| m_\s(|t|z) +r(r+t) n_\sigma(|t|z)-2t p_\s(|t|z) +2(r+t)q_\sigma(|t|z) \Bigr| |z|^2 \le C_0(|tz|+ |r| + |t|)|z|^2 \end{align*} for $\sigma \in \partial\Omega$, $r \in (-\varepsilon,\varepsilon)$, $t \in (-\varepsilon-r,\varepsilon-r) \setminus \{0\}$ and $z \in \frac{1}{|t|}B$ if $C_0$ is chosen sufficiently large, as claimed in (v).
For the proof of (vi), we now set $w_\sigma(r,t,z):= \frac{1}{t^2}|{\Psi_\s}(r,0)- {\Psi_\s}(r+t,|t|z)|^2$, and we note that $$ w_\sigma(r,t,z) \ge \frac{1+|z|^2}{C_0^2}\quad \text{for $\sigma \in \partial\Omega$, $r \in (-\varepsilon,\varepsilon)$, $t \in (-\varepsilon-r,\varepsilon-r) \setminus \{0\}$, $z \in \frac{1}{|t|}B$} $$ by (iv). Moreover, from (\ref{prelim-expansion-psi-s-diff}) we infer that \begin{equation*} \Bigl|w_\sigma(r,t,z) -w_\sigma(r,-t,z) \Bigr|= \Bigl| 2 r t n_\sigma(|t|z) +4t\bigl( q_\sigma(|t|z)-p_\s(|t|z)\bigr) \Bigr| |z|^2 \le C_0 |t||z|^2 \end{equation*} for $\sigma \in \partial\Omega$, $r \in (-\varepsilon,\varepsilon)$, $t \in (-\varepsilon-r,\varepsilon-r) \setminus \{0\}$ and $z \in \frac{1}{|t|}B$ if $C_0$ is made larger if necessary. Using these estimates together with the mean value theorem, we get that, for some $\tau= \tau(\sigma,r,t,z)$ with $-t < \tau< t$, \begin{align*} \Bigl|&|{\Psi_\s}(r,0)- {\Psi_\s}(r+t,|t|z)|^{-N-2s}-|{\Psi_\s}(r,0)- {\Psi_\s}(r-t,|t|z)|^{-N-2s}\Bigr|\\ &= |t|^{-N-2s}\Bigl|w_\sigma(r,t,z)^{-\frac{N+2s}{2}} -w_\sigma(r,-t,z)^{-\frac{N+2s}{2}} \Bigr|\\ &= \frac{(N+2s)|t|^{-N-2s} }{2} w_\sigma(r,\tau,z)^{-\frac{N+2s+2}{2}} \Bigl|w_\sigma(r,t,z) -w_\sigma(r,-t,z) \Bigr|\\ &\le C_0|t|^{1-N-2s} (1+|z|^2)^{-\frac{N+2s+2}{2}}|z|^2 \le C_0|t|^{1-N-2s} (1+|z|^2)^{-\frac{N+2s}{2}} \end{align*} for $z\in B$, $r\in (0,\e')$ and $t \in (-\e+r,\e-r)$ after making $C_0$ larger if necessary, as claimed in (vi). \end{proof} We now have all the tools to study the quantity $A_{k}(\sigma,r)$ in \eqref{eq:delta-s-splitting}.
\begin{lemma} \label{A-k-new-lemma} We have \begin{equation} \label{eq:A-estimate-majorization} k^{-2s}|A_k(\sigma,r)| \le \frac{C}{1+r^{1+2s}} \qquad\quad \text{for $k \in \mathbb{N}$, $0 \le r < k \varepsilon'$, $\sigma \in \partial\Omega$} \end{equation} with a constant $C>0$ and \begin{equation} \label{eq:A-estimate-limit} \lim_{k \to \infty}k^{-2s}A_k(\sigma,r)=\frac{(-\Delta)^s\z(r)}{c_{N,s}} \qquad \text{for every $\sigma \in \partial\Omega$, $r\ge 0$.} \end{equation} \end{lemma} \begin{proof} For $\sigma \in \partial\Omega$ and $0 < r < k \e'$, we write, with a change of variables, \begin{align} &A_{k}(\sigma,r) \label{change-of-variables-A}\\ &= \int_{ \Psi_\s((-\e,\e) \times B)} \frac{\z(r)-\z_k(y)}{|\Psi(\s,\frac{r}{k}) - y|^{N+2s}}\, dy= \int_{-\e}^{\e} \int_{B}{\rm Jac}_{\Psi_\s}(\tilde r,z)\: \frac{\z(r)-\z(k \tilde r)}{|\Psi_\s(\frac{r}{k},0) - \Psi_\s(\tilde r,z)|^{N+2s}}\,d z d \tilde r \nonumber\\ &= \frac{1}{k}\int_{-k\e-r}^{k\e-r} \int_{B}{\rm Jac}_{\Psi_\s}(\frac{r+t}{k},z)\: \frac{\z(r)-\z(r+t)}{|\Psi_\s(\frac{r}{k},0) - \Psi_\s(\frac{r+t}{k},z)|^{N+2s}}\,d z d t \nonumber\\ &= \int_{-k\e-r}^{k\e-r} \frac{|t|^{N-1}}{k^N} \int_{\frac{k}{|t|}B}{\rm Jac}_{\Psi_\s}(\frac{r+t}{k},\frac{|t|z}{k})\: \frac{\z(r)-\z(r+t)}{|\Psi_\s(\frac{r}{k},0) - \Psi_\s(\frac{r+t}{k},\frac{|t|z}{k})|^{N+2s}}\,d z d t\nonumber\\ &= k^{2s}\int_{\mathbb{R}}\frac{\z(r)-\z(r+t)}{|t|^{1+2s}}{\mathcal K}_k(r,t) dt \nonumber \end{align} with the kernels ${\mathcal K}_k: (0,k\e') \times \mathbb{R} \to \mathbb{R}$ defined by $$ {\mathcal K}_k(r,t)= \left\{ \begin{aligned} & \Bigl(\frac{|t|}{k}\Bigr)^{N+2s} \int_{\frac{k}{|t|}B} \frac{ {\rm Jac}_{{\Psi_\s}}(\frac{r+t}{k},\frac{|t|z}{k})}{\bigl|{\Psi_\s}(\frac{r}{k},0)-{\Psi_\s}(\frac{r+t}{k},\frac{|t|z}{k}) \bigr|^{N+2s}}dz,&&\quad
t\in (-k\varepsilon-r,k\varepsilon-r), \\ &0,&&\quad t \not \in (-k\varepsilon-r,k\varepsilon-r). \end{aligned} \right. $$ Consequently, \begin{equation} \label{eq:A-k-splitting} A_{k}(\sigma,r)= k^{2s}\Bigl( J^1_k(\s,r)+ J^2_k(\s,r)\Bigr) \end{equation} with $$ J^1_k(\s,r):=\frac{1}{4} \int_{\mathbb{R}} \frac{2\z(r)-\z(r+t)-\z(r-t)}{|t|^{1+2s}} \bigl({\mathcal K}_k(r,t) +{\mathcal K}_k(r,-t)\bigr) dt $$ and $$ J^2_k(\s,r):=-\frac{1}{4} \int_{\mathbb{R}} \frac{\z(r+t)-\z(r-t)}{|t|^{2s}}\,\frac{{\mathcal K}_k(r,t)- {\mathcal K}_k(r,-t)}{|t|} dt. $$ By Lemma~\ref{new-lemma-Psi-sigma}(i),(iv) and the definition of ${\mathcal K}_k$, we have \begin{equation} \label{eq:est-cK1} |{\mathcal K}_k(r,t)| \le C_0^{N+2s+1} \int_{\frac{k}{|t|}B} \bigl(1+ |z|^2\bigr)^{-\frac{N+2s}{2}}dz \le C_0^{N+2s+1} a_{N,s} \end{equation} for $r \in (-k\e',k\e')$ and $t \in \mathbb{R} \setminus \{0\}$ with \begin{equation} \label{def-a-n-s} a_{N,s}:=\int_{\mathbb{R}^{N-1}}(1+|z|^2)^{-\frac{N+2s}{2}}dz < \infty. \end{equation} Moreover, by Lemma~\ref{new-lemma-Psi-sigma}(i),(ii),(iv),(v) and the dominated convergence theorem, we have \begin{equation} \label{eq:est-cK2} \lim_{k\to \infty}{\mathcal K}_k(r,t)= \int_{\mathbb{R}^{N-1}}(1+|z|^2)^{-\frac{N+2s}{2}}\,dz = a_{N,s} \qquad \text{for every $r \ge 0$, $t \in \mathbb{R} \setminus \{0\}$.} \end{equation} Using \eqref{eq:est-cK1} and the fact that $\rho=1-\zeta\in C^\infty_c(\mathbb{R})$, we obtain the estimate \begin{align} | J^1_k(\s,r)| &\leq C \int_{\mathbb{R}} \frac{|2\z(r)-\z(r+t)-\z(r-t)|}{|t|^{1+2s}} \,dt \nonumber\\ &= C \int_{\mathbb{R}} \frac{|2\rho(r)-\rho(r+t)-\rho(r-t)|}{|t|^{1+2s}} \,dt \leq \frac{C}{1+r^{1+2s}} \label{eq:J1} \end{align} for $k \in \mathbb{N}$, $r\in (0,k\e')$ and $\sigma \in \partial\Omega$. Here and in the following, the letter $C>0$ stands for different positive constants.
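The splitting of $A_{k}(\sigma,r)$ into $J^1_k(\s,r)$ and $J^2_k(\s,r)$ above is obtained by a symmetrization in $t$; since this step is only stated, we briefly sketch it here. Setting $g(t):= \frac{\z(r)-\z(r+t)}{|t|^{1+2s}}$ for brevity, the substitution $t \mapsto -t$ and polarization give \begin{align*} \int_{\mathbb{R}} g(t)\,{\mathcal K}_k(r,t)\, dt &= \frac{1}{2}\int_{\mathbb{R}} \bigl(g(t){\mathcal K}_k(r,t)+g(-t){\mathcal K}_k(r,-t)\bigr) dt\\ &= \frac{1}{4}\int_{\mathbb{R}} \bigl(g(t)+g(-t)\bigr)\bigl({\mathcal K}_k(r,t)+{\mathcal K}_k(r,-t)\bigr) dt\\ &\quad + \frac{1}{4}\int_{\mathbb{R}} \bigl(g(t)-g(-t)\bigr)\bigl({\mathcal K}_k(r,t)-{\mathcal K}_k(r,-t)\bigr) dt, \end{align*} and since $g(t)+g(-t)= \frac{2\z(r)-\z(r+t)-\z(r-t)}{|t|^{1+2s}}$ and $g(t)-g(-t)= -\frac{\z(r+t)-\z(r-t)}{|t|^{1+2s}}$, the two integrals on the right-hand side are precisely $J^1_k(\s,r)$ and $J^2_k(\s,r)$.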
Moreover, by \eqref{eq:est-cK1}, \eqref{eq:est-cK2} and the dominated convergence theorem, we find that \begin{equation} \lim_{k \to \infty} J^1_k(\s,r) = \frac{a_{N,s}}{2} \int_{\mathbb{R}}\frac{2\z(r)-\z(r+t)-\z(r-t)}{|t|^{1+2s}}dt = \frac{a_{N,s}}{c_{1,s}} (-\Delta)^s\z(r)= \frac{(-\Delta)^s\z(r)}{c_{N,s}}. \label{eq:J2} \end{equation} Here we have used the fact that \begin{equation}\label{eq:bNs-aNs} c_{N,s}a_{N,s}=c_{1,s}, \end{equation} see e.g. \cite{FW-half}. Next we deal with $J^2_k(\s,r)$, and for this we have to estimate the kernel differences $|{\mathcal K}_k(r,t)- {\mathcal K}_k(r,-t)|$. By Lemma~\ref{new-lemma-Psi-sigma}(i),(iii),(iv) and (vi), we have $$ \Bigl| \frac{ {\rm Jac}_{{\Psi_\s}}(\frac{r+t}{k},\frac{|t|z}{k})}{\bigl|{\Psi_\s}(\frac{r}{k},0)-{\Psi_\s}(\frac{r+t}{k},\frac{|t|z}{k}) \bigr|^{N+2s}}-\frac{ {\rm Jac}_{{\Psi_\s}}(\frac{r-t}{k},\frac{|t|z}{k})}{\bigl|{\Psi_\s}(\frac{r}{k},0)-{\Psi_\s}(\frac{r-t}{k},\frac{|t|z}{k}) \bigr|^{N+2s}}\Bigr|\le C \Bigl(\frac{|t|}{k}\Bigr)^{1-N-2s}(1+|z|^2)^{-\frac{N+2s}{2}} $$ for $z\in \frac{k}{|t|}B$, $r\in (0,k\e')$ and $t \in (-k\e+r,k\e-r)$ and therefore \begin{equation} \frac{|{\mathcal K}_k(r,t)- {\mathcal K}_k(r,-t)|}{|t|} \le \frac{C}{k} \int_{\frac{k}{|t|}B}(1+|z|^2)^{-\frac{N+2s}{2}} \,dz \le \frac{C}{k} \int_{\mathbb{R}^{N-1}}(1+|z|^2)^{-\frac{N+2s}{2}} \,dz \le \frac{C}{k} \label{eq:est-cK1-0-new} \end{equation} for $r\in (0,k\e')$ and $t \in (-k\e+r,k\e-r)$.
Moreover, by definition we have \begin{equation} \label{eq:est-cK1-0-0-new} |{\mathcal K}_k(r,t)- {\mathcal K}_k(r,-t)|= 0 \qquad \text{for $t \in \mathbb{R} \setminus (-k\e-r,k\e+r)$,} \end{equation} while for $t \in (-k\e-r,-k\e+r) \cup (k\e-r,k\e+r)$ we have $|t|\ge k\varepsilon -k\varepsilon' \ge \frac{k\varepsilon}{2}$ and therefore, similarly as in \eqref{eq:est-cK1}, \begin{equation} \label{eq:est-cK1-new-1} \frac{|{\mathcal K}_k(r,t)|}{|t|} \le \frac{C}{|t|} \int_{\frac{k}{|t|}B} \bigl(1+ |z|^2\bigr)^{-\frac{N+2s}{2}} dz \le \frac{C}{k} \int_{\frac{2}{\varepsilon}B} \bigl(1+ |z|^2\bigr)^{-\frac{N+2s}{2}} dz \le \frac{C}{k}. \end{equation} Note that the constant $C>0$ on the right-hand side depends on $\varepsilon$, but this causes no problem in the following. Combining (\ref{eq:est-cK1-0-new}), (\ref{eq:est-cK1-0-0-new}), \eqref{eq:est-cK1-new-1} and using that $\rho=1-\z\in C^\infty_c(\mathbb{R})$, we get \begin{align*} &|J^2_k(\s,r)| \le \frac{1}{4} \int_{\mathbb{R}} \frac{|\z(r+t)-\z(r-t)|}{|t|^{2s}} \frac{|{\mathcal K}_k(r,t)- {\mathcal K}_k(r,-t)|}{|t|} dt\\ &\le \frac{C}{k} \int_{\mathbb{R}} \frac{|\z(r+t)-\z(r-t)|}{|t|^{2s}}d t = \frac{C}{k} \int_{\mathbb{R}} \frac{|\rho(r+t)-\rho(r-t)|}{|t|^{2s}} d t \le \frac{C(1+r)^{-2s}}{k} \end{align*} for $k \in \mathbb{N}$, $\sigma \in \partial\Omega$ and $0 \le r <k \e'$. Hence \begin{equation} \label{eq:J-2-concl-bound} |J^2_k(\s,r)| \le \frac{C}{1+r^{1+2s}}\qquad \text{for $k \in \mathbb{N}$, $0 \le r < k \varepsilon'$} \end{equation} and \begin{equation} \label{eq:J-2-concl-limit} \lim_{k \to \infty}|J^2_k(\s,r)| = 0 \qquad \text{for all $r \ge 0$.} \end{equation} Now (\ref{eq:A-estimate-majorization}) follows by combining (\ref{eq:A-k-splitting}), (\ref{eq:J1}) and (\ref{eq:J-2-concl-bound}).
Moreover, (\ref{eq:A-estimate-limit}) follows by combining (\ref{eq:A-k-splitting}), (\ref{eq:J2}) and (\ref{eq:J-2-concl-limit}). \end{proof} \begin{proof}[Proof of Proposition~\ref{delta-s-z-k-conv}] The proof is completed by combining (\ref{eq:delta-s-splitting}) with Lemmas~\ref{B-k-new-lemma} and~\ref{A-k-new-lemma}. \end{proof} It finally remains to estimate the function $G_k^2$ in (\ref{G-k-splitting}). \begin{lemma} \label{I-s-z-k-conv} There exists $\varepsilon'>0$ with the property that the function $G_k^2$ defined in (\ref{h-0-1-2}) satisfies \begin{equation} \label{eq:I-s-z-k-concl-bound} |k^{-s} G_k^2(\sigma,r)| \le \frac{C}{1+r^{1+s}}\qquad \text{for $k \in \mathbb{N}$, $0 \le r < k \varepsilon'$, $\sigma \in \partial\Omega$} \end{equation} with a constant $C>0$. Moreover, \begin{equation} \label{eq:I-s-z-k-concl-limit} \lim_{k \to \infty} k^{-s} G_k^2(\sigma,r) = \psi(\s)\tilde I(r) \end{equation} with $$ \tilde I(r) = c_{1,s} \int_{\mathbb{R}}\frac{ \bigl(r^s_+ - (r+t)^s_+ \bigr)\bigl(\z(r)-\z(r+t)\bigr)}{|t|^{1+2s}}dt. $$ \end{lemma} \begin{proof} The proof is similar to the one of Proposition~\ref{delta-s-z-k-conv}, but there are some differences we need to deal with. First, as in the proof of Proposition~\ref{delta-s-z-k-conv}, we choose $\e' \in (0,\frac{\varepsilon}{2})$ small enough, so that (\ref{eq:distance-sigma-ineq}) holds.
Similarly as in (\ref{eq:delta-s-splitting}) we can then write \begin{equation} \label{eq:I-s-splitting} G_k^2(\sigma,r)= c_{N,s} \Bigl( \widetilde A_k(\sigma,r)+ \widetilde B_k(\sigma,r) \Bigr) \end{equation} with $$ \widetilde A_{k}(\sigma,r) := \int_{ \Psi_\s((-\e,\e) \times B)} \frac{(u({\Psi}(\sigma,\frac{r}{k}))-u(y))(\z(r)-\z_k(y))}{|\Psi(\s,\frac{r}{k}) - y|^{N+2s}}\, dy $$ and $$ \widetilde B_k(\sigma,r) = \int_{\mathbb{R}^N \setminus \Psi_\s((-\e,\e) \times B)} \frac{(u({\Psi}(\sigma,\frac{r}{k}))-u(y))(\z(r)-\z_k(y))}{|\Psi(\s,\frac{r}{k}) - y|^{N+2s}}\, dy. $$ As noted in the proof of Lemma~\ref{B-k-new-lemma}, we have $$ |\Psi(\s,\frac{r}{k})- y| \ge \frac{|\sigma - y|}{3} +\varepsilon' \quad \text{for $y \in \mathbb{R}^N \setminus \Psi_\s((-\e,\e) \times B)$, $0<r< k \e'$.} $$ Therefore, since $u \in L^\infty(\mathbb{R}^N)$, we may estimate as in the proof of Lemma~\ref{B-k-new-lemma} to get $$ |\widetilde B_k(\sigma,r)| \le 2 \|u\|_{L^\infty} \int_{\mathbb{R}^N \setminus \Psi_\s((-\e,\e) \times B)}\frac{|\rho(r)-\rho_k(y)|}{|\Psi(\s,\frac{r}{k}) - y|^{N+2s}}dy \le C \bigl(|\rho(r)| + k^{-1}\bigr). $$ Here, as before, the letter $C$ stands for various positive constants. Consequently, \begin{equation} \label{eq:tilde-B-estimate-limit} \lim_{k \to \infty}k^{-s}|\widetilde B_k(\sigma,r)|=0 \qquad \text{for every $\sigma \in \partial\Omega$, $r\ge 0$,} \end{equation} since $\rho$ has compact support in $\mathbb{R}$, and \begin{equation} \label{eq:tilde-B-estimate-majorization} k^{-s}|\widetilde B_k(\sigma,r)| \le C k^{-s}\bigl(|\rho(r)| + k^{-1}\bigr) \le \frac{C}{1+r^{1+s}} \qquad \text{for $k \in \mathbb{N}$, $0 \le r < k \varepsilon'$, $\sigma \in \partial\Omega$.} \end{equation} Hence it remains to estimate $\widetilde A_{k}(\sigma,r)$.
For this we note that, by the same change of variables as in (\ref{change-of-variables-A}), we have \begin{align} &\widetilde A_{k}(\sigma,r)= \int_{-\e}^{\e} \int_{B}{\rm Jac}_{\Psi_\s}(\tilde r,z)\: \frac{(u(\Psi_\s(\frac{r}{k},0))-u(\Psi_\s(\tilde r,z))) (\z(r)-\z(k \tilde r))}{|\Psi_\s(\frac{r}{k},0) - \Psi_\s(\tilde r,z)|^{N+2s}}\,d z d \tilde r \nonumber\\ &= k^s \int_{\mathbb{R}}\frac{\z(r)-\z(r+t)}{|t|^{1+s}} \widetilde {\mathcal K}_k(r,t)dt \label{tilde-A-change-of-variables} \end{align} with the kernel \begin{align*} &\widetilde {\mathcal K}_k(r,t)\\ &= \left\{ \begin{aligned} & \Bigl(\frac{|t|}{k}\Bigr)^{N+s} \int_{\frac{k}{|t|}B} \frac{\bigl(u(\Psi_\s(\frac{r}{k},0))-u(\Psi_\s(\frac{r+t}{k},\frac{|t|}{k}z))\bigr) {\rm Jac}_{{\Psi_\s}}(\frac{r+t}{k},\frac{|t|z}{k})}{\bigl|{\Psi_\s}(\frac{r}{k},0)-{\Psi_\s}(\frac{r+t}{k},\frac{|t|z}{k}) \bigr|^{N+2s}}dz,&&\quad t\in (-k\varepsilon-r,k\varepsilon-r), \\ &0,&&\quad t \not \in (-k\varepsilon-r,k\varepsilon-r). \end{aligned} \right. \end{align*} Since $u\in C^s(\mathbb{R}^N)$ and $\Psi_\s$ is Lipschitz, we have $$ \bigl|u(\Psi_\s(\frac{r}{k},0))-u(\Psi_\s(\frac{r+t}{k},\frac{|t|}{k}z))\bigr| \leq C \Bigl(\bigl(\frac{|t|}{k}\bigr)^2 +\bigl(\frac{|t z|}{k}\bigr)^{2}\Bigr)^{\frac{s}{2}} \leq C\Bigl(\frac{|t|}{k}\Bigr)^s(1+|z|^s) $$ for $\sigma \in \partial\Omega$, $r \in (-k\varepsilon,k\varepsilon)$, $t \in (-k\varepsilon-r,k\varepsilon-r) \setminus \{0\}$ and $z \in \frac{k}{|t|}B$. Therefore, by using Lemma~\ref{new-lemma-Psi-sigma}(i),(iv) as in \eqref{eq:est-cK1}, \begin{equation} \label{eq:tilde-K-bound} |\widetilde {\mathcal K}_k(r,t)| \le C \int_{\mathbb{R}^{N-1}}(1+|z|^s) (1+|z|^2)^{-\frac{N+2s}{2}}dz \le C \int_{\mathbb{R}^{N-1}}(1+|z|)^{-N-s}dz < \infty.
\end{equation} Inserting this estimate in (\ref{tilde-A-change-of-variables}), we conclude that $$ k^{-s} |\widetilde A_{k}(\sigma,r)|\le C \int_{\mathbb{R}}\frac{|\z(r)-\z(r+t)|}{|t|^{1+s}}\,dt = C \int_{\mathbb{R}}\frac{|\rho(r)-\rho(r+t)|}{|t|^{1+s}}\,dt \le \frac{C}{1+r^{1+s}} $$ for $k \in \mathbb{N}$, $0 \le r < k \varepsilon'$, $\sigma \in \partial\Omega$. Combining this inequality with (\ref{eq:tilde-B-estimate-majorization}), we obtain (\ref{eq:I-s-z-k-concl-bound}). Moreover, since $u \in C^s_0(\overline{\O})$ and $\psi= \frac{u}{\delta^s} \in C^0(\overline\O)$, we have \begin{equation}\label{eq:equzLemf} \lim_{k \to \infty} k^s \left[ u(\Psi_\s(\frac{r}{k},0))-u(\Psi_\s(\frac{r+t}{k},\frac{|t|}{k}z))\right] = \psi(\s)(r^s_+ - (r+t)^s_+ ) \end{equation} for $\sigma \in \partial\Omega$, $r> 0$, $t \in \mathbb{R}$ and $z \in \mathbb{R}^{N-1}$. Consequently, arguing as for \eqref{eq:est-cK2} with Lemma~\ref{new-lemma-Psi-sigma}(i),(ii),(iv),(v) and the dominated convergence theorem, we find that \begin{equation} \label{eq:w-k(r-t-z)-limit} \lim_{k \to \infty} \widetilde {\mathcal K}_k(r,t) = \psi(\s)\frac{(r^s_+ - (r+t)^s_+)}{|t|^s}\int_{\mathbb{R}^{N-1}} (1+|z|^2)^{-\frac{N+2s}{2}} dz = a_{N,s}\psi(\s)\frac{(r^s_+ - (r+t)^s_+)}{|t|^s} \end{equation} for $\sigma \in \partial\Omega$, $r> 0$ and $t \in \mathbb{R}$ with $a_{N,s}$ given in \eqref{def-a-n-s}. Hence, by (\ref{tilde-A-change-of-variables}), (\ref{eq:tilde-K-bound}), (\ref{eq:w-k(r-t-z)-limit}) and the dominated convergence theorem, $$ \lim_{k \to \infty} k^{-s} \widetilde A_{k}(\sigma,r)= a_{N,s} \psi(\s) \int_{\mathbb{R}}\frac{ (r^s_+ - (r+t)^s_+ )(\z(r)-\z(r+t))}{|t|^{1+2s}}dt = \frac{a_{N,s}}{c_{1,s}} \psi(\s) \tilde I(r) = \frac{\psi(\s) \tilde I(r)}{c_{N,s}}, $$ where we used again \eqref{eq:bNs-aNs} for the last equality.
Combining this with (\ref{eq:I-s-splitting}) and (\ref{eq:tilde-B-estimate-limit}), we obtain~(\ref{eq:I-s-z-k-concl-limit}). \end{proof} We are now ready to complete the \begin{proof}[Proof of Proposition~\ref{lem:lim-ok-new-section}] Combining (\ref{eq:h-k-0-bound}), (\ref{eq:h-2-bound}) and (\ref{eq:I-s-z-k-concl-bound}), we see that there exists $\varepsilon'>0$ with the property that the functions $G_k$ defined in (\ref{eq:def-ge-k}) satisfy \begin{equation} \label{eq:complete-proof-bound} \frac{G_k(\s,r)}{k} \le C \frac{r^{s-1} + r^{s-1+\alpha}}{1+r^{1+s}}\qquad \text{for $k \in \mathbb{N}$, $0 \le r < k \varepsilon'$} \end{equation} with a constant $C>0$ independent of $k$ and $r$. Since $s, \alpha \in (0,1)$, the RHS of this inequality is integrable over $[0,\infty)$. Moreover, by (\ref{eq:h-k-0-limit}), (\ref{eq:h-2-limit}) and (\ref{eq:I-s-z-k-concl-limit}), \begin{equation} \label{eq:complete-proof-limit} \frac{1}{k}G_k(\s,r) \to [X(\s) \cdot \nu(\s)] \psi^2(\s)h'(r)\bigl(r^{s}(-\Delta)^s \z(r)-\tilde I(r)\bigr) \end{equation} for every $r>0$, $\sigma \in \partial\Omega$ as $k \to \infty$. Next we note that, by a standard computation, \begin{equation} \label{eq:complete-proof-limit-1} (-\Delta)^s h(r)= (-\Delta)^s [r^{s}_+ \z(r)] = \z(r)(-\Delta)^s r^s_+ + r^s_+ (-\Delta)^s \z(r)-\tilde I(r) = r^s_+ (-\Delta)^s \z(r)-\tilde I(r) \end{equation} for $r>0$, since $r^s_+$ is an $s$-harmonic function on $(0,\infty)$, see e.g.\ \cite{BC}.
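For the reader's convenience, we recall the standard product rule for the fractional Laplacian underlying the computation above: for sufficiently regular functions $v,w$ one has, pointwise, $$ (-\Delta)^s (vw)(x) = v(x)(-\Delta)^s w(x) + w(x)(-\Delta)^s v(x) - c_{N,s}\int_{\mathbb{R}^N}\frac{(v(x)-v(y))(w(x)-w(y))}{|x-y|^{N+2s}}\,dy. $$ Applied with $N=1$, $v = r^s_+$ and $w=\z$, the integral term is exactly $\tilde I(r)$, which yields the middle equality in the preceding identity.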
Hence, by (\ref{claim-first-reduction}), (\ref{eq:complete-proof-bound}), (\ref{eq:complete-proof-limit}), (\ref{eq:complete-proof-limit-1}) and the dominated convergence theorem, we conclude that \begin{align*} \lim_{k \to \infty}\int_{\Omega}g_k dx &= \int_0^\infty h'(r)(-\Delta)^s h(r)dr \int_{\partial\Omega}[X(\s) \cdot \nu(\s)] \psi^2(\s)d\s\\ &= \int_{\mathbb{R}} h'(r)(-\Delta)^s h(r)dr \int_{\partial\Omega}[X(\s) \cdot \nu(\s)] \psi^2(\s)d\s, \end{align*} as claimed in (\ref{eq:lem:new-section-claim}). \end{proof} \appendix \section{} Here we give a short proof of the uniqueness of positive minimizers of the problem \eqref{eq:def-lambda-sp} for $1 \le p \le 2$. \begin{lemma} \label{uniqueness-extended} Let $\Omega \subset \mathbb{R}^N$ be a bounded open set of class $C^{1,1}$, let $p\in[1,2]$, and let $u_1$ and $u_2$ be two positive minimizers of \eqref{eq:def-lambda-sp}. Then $u_1=u_2$. \end{lemma} \begin{proof} Suppose by contradiction that there are two different positive minimizers $u_1,u_2$ for the minimization problem. Then, since $\|u_1\|_{L^p(\O)} = \|u_2\|_{L^p(\O)} = 1$, the difference $u_1-u_2$ changes sign. Since moreover $\frac{u_1}{\delta^s}$ and $\frac{u_2}{\delta^s}$ are continuous positive functions on $\overline\Omega$ by Lemma~\ref{reg-prop-minimizers}, there exists a maximal $\tau \in (0,1)$ with $$ \tau u_1 \le u_2 \qquad \text{on $\overline\Omega$.} $$ Moreover, $\tau u_1 \not \equiv u_2$ since $u_1-u_2$ changes sign. Consequently, $v:= u_2 -\tau u_1$ satisfies $v \ge 0$ on $\overline\Omega$ and $v \not \equiv 0$.
Moreover, using that $p-1 \in [0,1]$ and $\tau \in (0,1)$, we find that $$ (-\Delta)^s v = \lambda \bigl(u_2^{p-1}-\tau u_1^{p-1}\bigr) \ge \lambda \bigl(u_2^{p-1}-(\tau u_1)^{p-1}\bigr) \ge 0 \quad \text{in $\Omega$,}\qquad v= 0 \qquad \text{in $\mathbb{R}^N \setminus \Omega$} $$ with $\lambda := \lambda_{s,p}(\Omega)>0$. Now the strong maximum principle for the fractional Laplacian and the fractional Hopf lemma imply that $v= u_2- \tau u_1$ is strictly positive in $\Omega$ and $\frac{v}{\delta^s}>0$ on $\partial\Omega$. This contradicts the maximality of $\tau$. Hence uniqueness holds. \end{proof} \begin{thebibliography}{99} \bibitem{BC} C. Bucur, Some nonlocal operators and effects due to nonlocality. Preprint, arXiv:1705.00953 (2017). \bibitem{CFR} T. Caroll, M. M. Fall and J. Ratzkin, On the rate of change of the best constant in the Sobolev inequality. Math. Nachr. 290 (2017), no. 14-15, 2185--2197. \bibitem{HC} H. Chen, The Dirichlet elliptic problem involving regional fractional Laplacian. J. Math. Phys. 59 (2018), no. 7, 071504. \bibitem{WC} H. Chen and T. Weth, The Dirichlet problem for the logarithmic Laplacian. Comm. Partial Differential Equations 44 (2019), no. 11, 1100--1139. \bibitem{A-C} A. M. H. Chorwadwala and R. Mahadevan, A shape optimization problem for the $p$-Laplacian. Proc. Roy. Soc. Edinburgh Sect. A 145 (2015), no. 6, 1145--1151. \bibitem{dal} A.-L. Dalibard and D. G{\'e}rard-Varet, On shape optimization problems involving the fractional Laplacian. ESAIM Control Optim. Calc. Var. 19 (2013), no. 4, 976--1013. \bibitem{delfour-zolesio} M. Delfour and J. Zolesio, Shapes and Geometries. Analysis, Differential Calculus, and Optimization. Advances in Design and Control, Vol. 4, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 2001. \bibitem{FS-2019} M. M. Fall and S.
Jarohs, Gradient estimates in fractional Dirichlet problems. Potential Anal. 54 (2021), no. 4, 627--636. \bibitem{JW17} M. M. Fall and S. Jarohs, Overdetermined problems with fractional Laplacian. ESAIM Control Optim. Calc. Var. 21 (2015), no. 4, 924--938. \bibitem{FW-half} M. M. Fall and T. Weth, Monotonicity and nonexistence results for some fractional elliptic problems in the half-space. Commun. Contemp. Math. 18 (2016), no. 1, 1550012, 25 pp. \bibitem{FW18} M. M. Fall and T. Weth, Critical domains for the first nonzero Neumann eigenvalue in Riemannian manifolds. The Journal of Geometric Analysis (2018), 1--27. \bibitem{ML} J. Garc\'ia Meli\'an and J. J. Sabina de Lis, On the perturbation of eigenvalues for the $p$-Laplacian. C. R. Acad. Sci. Paris S\'er. I Math. 332 (2001), 893--898. \bibitem{G11} P. Grisvard, Elliptic problems in nonsmooth domains. Classics in Applied Mathematics, Vol. 69, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 2011. Reprint of the 1985 original. \bibitem{LL14} E. M. Harrell, P. Kr{\"o}ger and K. Kurata, On the placement of an obstacle or a well so as to optimize the fundamental eigenvalue. SIAM J. Math. Anal. 33 (2001), no. 1, 240--259. \bibitem{Eh} A. Henrot and M. Pierre, Variation et optimisation de formes. Une analyse g\'eom\'etrique. Math\'ematiques \& Applications, Vol. 48, Springer, Berlin, 2005. \bibitem{Hersch} J. Hersch, The method of interior parallels applied to polygonal or multiply connected membranes. Pacific J. Math. 13 (1963), 1229--1238. \bibitem{JW} S. Jarohs and T. Weth, Symmetry via antisymmetric maximum principles in nonlocal problems of variable order. Ann. Mat. Pura Appl. (4) 195 (2016), no. 1, 273--291. \bibitem{Kesavan} S. Kesavan, On two functionals connected to the Laplacian in a class of doubly connected domains. Proc. Roy. Soc. Edinburgh Sect. A 133 (2003), 617--624. \bibitem{RS16a} X. Ros-Oton and J.
Serra, Regularity theory for general stable operators. J. Differential Equations 260 (2016), 8675--8715. \bibitem{RX} X. Ros-Oton and J. Serra, The Dirichlet problem for the fractional Laplacian: regularity up to the boundary. J. Math. Pures Appl. (9) 101 (2014), no. 3, 275--302. \bibitem{RX-Poh} X. Ros-Oton and J. Serra, The Pohozaev identity for the fractional Laplacian. Arch. Ration. Mech. Anal. 213 (2014), 587--628. \bibitem{XJV} X. Ros-Oton, J. Serra and E. Valdinoci, Pohozaev identities for anisotropic integrodifferential operators. Comm. Partial Differential Equations 42 (2017), no. 8, 1290--1321. \bibitem{Le} N. N. Lebedev, Special functions and their applications. Revised edition, translated and edited by R. A. Silverman, Dover Publications, New York, 1972. \bibitem{Wagner} A. Wagner, Pohozaev's identity from a variational viewpoint. J. Math. Anal. Appl. 266 (2002), 149--159. \end{thebibliography} \end{document}
\begin{document} \title{Conjunction of Conditional Events and T-norms} \begin{abstract} We study the relationship between a notion of conjunction among conditional events, introduced in recent papers, and the notion of Frank t-norm. By examining different cases, in the setting of coherence, we show each time that the conjunction coincides with a suitable Frank t-norm. In particular, the conjunction may coincide with the Product t-norm, the Minimum t-norm, and the Lukasiewicz t-norm. We show by a counterexample that the prevision assessments obtained by the Lukasiewicz t-norm may not be coherent. Then, we give some conditions for coherence when using the Lukasiewicz t-norm. \keywords{Coherence, Conditional Event, Conjunction, Frank t-norm.} \end{abstract} \section{Introduction} In this paper we use the coherence-based approach to probability of de Finetti (\cite{biazzo00,biazzo05,coletti02,CoSV13,CoSV15,defi36,definetti74,gilio02,gilio12ijar,gilio16,gilio13,PfSa19}). We use a notion of conjunction which, differently from other authors, is defined as a suitable conditional random quantity with values in the unit interval (see, e.g., \cite{GiSa13c,GiSa13a,GiSa14,GiSa19,SPOG18}). We study the relationship between our notion of conjunction and the notion of Frank t-norm. For some aspects which relate probability and Frank t-norms see, e.g., \cite{Tasso12,coletti14FSS,coletti04,Dubois86,Flaminio18,Navara05}. We show that, under the hypothesis of logical independence, if the prevision assessments involved with the conjunction $(A|H) \wedge (B|K)$ of two conditional events are coherent, then the prevision of the conjunction coincides, for a suitable $\lambda \in [0,+\infty]$, with the Frank t-norm $T_\lambda(x,y)$, where $x=P(A|H)$ and $y=P(B|K)$. Moreover, $(A|H) \wedge (B|K)= T_\lambda(A|H,B|K)$. Then, we consider the case $A=B$, by determining the set of all coherent assessments $(x,y,z)$ on $\{A|H,A|K, (A|H) \wedge (A|K)\}$.
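For completeness, we recall that the family of Frank t-norms $T_\lambda$, $\lambda \in [0,+\infty]$, referred to above is given by $$ T_\lambda(x,y)= \left\{ \begin{array}{ll} \min\{x,y\}, & \mbox{if } \lambda=0,\\ xy, & \mbox{if } \lambda=1,\\ \max\{x+y-1,0\}, & \mbox{if } \lambda=+\infty,\\ \log_\lambda\Bigl(1+\frac{(\lambda^x-1)(\lambda^y-1)}{\lambda-1}\Bigr), & \mbox{otherwise}, \end{array} \right. $$ so that the cases $\lambda=0$, $\lambda=1$ and $\lambda=+\infty$ correspond to the Minimum, Product and Lukasiewicz t-norms, respectively.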
We show that, under coherence, it holds that $(A|H) \wedge (A|K)= T_\lambda(A|H,A|K)$, where $\lambda \in [0,1]$. We also study the particular case where $A=B$ and $HK=\emptyset$. Then, we consider conjunctions of three conditional events and we show that making prevision assignments by means of the Product t-norm, or the Minimum t-norm, is coherent. Finally, we examine the Lukasiewicz t-norm and we show by a counterexample that coherence is in general not assured. We give some conditions for coherence when the prevision assessments are made by using the Lukasiewicz t-norm. \section{Preliminary Notions and Results} In our approach, given two events $A$ and $H$, with $H \neq \emptyset$, the conditional event $A|H$ is looked at as a three-valued logical entity which is true, or false, or void, according to whether $AH$ is true, or $\widebar{A}H$ is true, or $\widebar{H}$ is true. We observe that the conditional probability and/or conditional prevision values are assessed in the setting of the coherence-based probabilistic approach. In numerical terms, $A|H$ assumes one of the values $1$, or $0$, or $x$, where $x=P(A|H)$ represents the assessed degree of belief in $A|H$. Then, $A|H=AH+x\widebar{H}$. Given a family $\mathcal F = \{X_1|H_1,\ldots,X_n|H_n\}$, for each $i \in \{1,\ldots,n\}$ we denote by $\{x_{i1}, \ldots,x_{ir_i}\}$ the set of possible values of $X_i$ when $H_i$ is true; then, for each $i$ and $j = 1, \ldots, r_i$, we set $A_{ij} = (X_i = x_{ij})$. We set $C_0 = \widebar{H}_1 \cdots \widebar{H}_n$ (it may be $C_0 = \emptyset$); moreover, we denote by $C_1, \ldots, C_m$ the constituents contained in $H_1\vee \cdots \vee H_n$. Hence $\bigwedge_{i=1}^n(A_{i1} \vee \cdots \vee A_{ir_i} \vee \widebar{H}_i) = \bigvee_{h = 0}^m C_h$.
With each $C_h,\, h \in \{1,\ldots,m\}$, we associate a vector $Q_h=(q_{h1},\ldots,q_{hn})$, where $q_{hi}=x_{ij}$ if $C_h \subseteq A_{ij},\, j=1,\ldots,r_i$, while $q_{hi}=\mu_i$ if $C_h \subseteq \widebar{H}_i$; with $C_0$ it is associated $Q_0=\mathcal M = (\mu_1,\ldots,\mu_n)$. Denoting by $\mathcal I$ the convex hull of $Q_1, \ldots, Q_m$, the condition $\mathcal M\in \mathcal I$ amounts to the existence of a vector $(\lambda_1,\ldots,\lambda_m)$ such that: $ \sum_{h=1}^m \lambda_h Q_h = \mathcal M \,,\; \sum_{h=1}^m \lambda_h = 1 \,,\; \lambda_h \geq 0 \,,\; \forall \, h$; in other words, $\mathcal M\in \mathcal I$ is equivalent to the solvability of the system $(\Sigma)$, associated with $(\mathcal F,\mathcal M)$, \begin{equation}\label{SYST-SIGMA} (\Sigma) \quad \begin{array}{ll} \sum_{h=1}^m \lambda_h q_{hi} = \mu_i \,,\; i \in\{1,\ldots,n\} \,, \\ \sum_{h=1}^m \lambda_h = 1,\;\;\lambda_h \geq 0 \,,\; \,h \in\{1,\ldots,m\}\,. \end{array} \end{equation} Given the assessment $\mathcal M =(\mu_1,\ldots,\mu_n)$ on $\mathcal F = \{X_1|H_1,\ldots,X_n|H_n\}$, let $S$ be the set of solutions $\Lambda = (\lambda_1, \ldots,\lambda_m)$ of system $(\Sigma)$. We point out that the solvability of system $(\Sigma)$ is a necessary (but not sufficient) condition for coherence of $\mathcal M$ on $\mathcal F$. When $(\Sigma)$ is solvable, that is $S \neq \emptyset$, we define: \begin{equation}\label{EQ:I0} \begin{array}{ll} I_0 = \{i : \max_{\Lambda \in S} \sum_{h:C_h\subseteq H_i}\lambda_h= 0\},\; \mathcal F_0 = \{X_i|H_i \,, i \in I_0\},\;\; \mathcal M_0 = (\mu_i ,\, i \in I_0)\,. \end{array} \end{equation} Concerning the probabilistic meaning of $I_0$, it holds that $i\in I_0$ if and only if the (unique) coherent extension of $\mathcal M$ to $H_i|(\bigvee_{j=1}^nH_j)$ is zero.
Then, the following theorem can be proved (\cite[Theorem 3]{BiGS08}). \begin{theorem}\label{CNES-PREV-I_0-INT}{\rm [{\em Operative characterization of coherence}] A conditional prevision assessment ${\mathcal M} = (\mu_1,\ldots,\mu_n)$ on the family $\mathcal F = \{X_1|H_1,\ldots,X_n|H_n\}$ is coherent if and only if the following conditions are satisfied: \\ (i) the system $(\Sigma)$ defined in (\ref{SYST-SIGMA}) is solvable; (ii) if $I_0 \neq \emptyset$, then $\mathcal M_0$ is coherent. } \end{theorem} Coherence can be related to proper scoring rules (\cite{BiGS12,GiSa11a,LSA12,LSA15,LaSA18}). \begin{definition}\label{CONJUNCTION} Given any pair of conditional events $A|H$ and $B|K$, with $P(A|H)=x$ and $P(B|K)=y$, their conjunction is the conditional random quantity $(A|H)\wedge(B|K)$, with $\mathbb{P}[(A|H)\wedge(B|K)]=z$, defined as \begin{equation}\label{EQ:CONJUNCTION} (A|H)\wedge(B|K) =\left\{\begin{array}{ll} 1, &\mbox{if $AHBK$ is true,}\\ 0, &\mbox{if $\widebar{A}H\vee \widebar{B}K$ is true,}\\ x, &\mbox{if $\widebar{H}BK$ is true,}\\ y, &\mbox{if $AH\widebar{K}$ is true,}\\ z, &\mbox{if $\widebar{H}\,\widebar{K}$ is true}. \end{array} \right. \end{equation} \end{definition} In betting terms, the prevision $z$ represents the amount you agree to pay, with the proviso that you will receive the quantity $(A|H)\wedge(B|K)$. Different approaches to compounded conditionals, not based on coherence, have been developed by other authors (see, e.g., \cite{Kauf09,mcgee89}). We recall a result which shows that the Fr\'echet-Hoeffding bounds still hold for the conjunction of conditional events (\cite[Theorem~7]{GiSa14}).
\begin{theorem}\label{THM:FRECHET}{\rm Given any coherent assessment $(x,y)$ on $\{A|H, B|K\}$, with $A,H,B$, $K$ logically independent, $H\neq \emptyset, K\neq \emptyset$, the extension $z = \mathbb{P}[(A|H) \wedge (B|K)]$ is coherent if and only if the following Fr\'echet-Hoeffding bounds are satisfied: \begin{equation}\label{LOW-UPPER} \max\{x+y-1,0\} = z' \; \leq \; z \; \leq \; z'' = \min\{x,y\} \,. \end{equation} }\end{theorem} \begin{remark} From Theorem \ref{THM:FRECHET}, as the assessment $(x,y)$ on $\{A|H,B|K\}$ is coherent for every $(x,y)\in[0,1]^2$, the set $\Pi$ of coherent assessments $(x,y,z)$ on $\{A|H,B|K,(A|H)\wedge(B|K)\}$ is \begin{equation}\label{EQ:PI2} \begin{small} \Pi=\{(x,y,z): (x,y)\in[0,1]^2, \max\{x+y-1,0\} \leq z\leq \min\{x,y\}\}. \end{small} \end{equation} The set $\Pi$ is the tetrahedron with vertices at the points $(1,1,1)$, $(1,0,0)$, $(0,1,0)$, $(0,0,0)$. For other definitions of conjunction, where the conjunction is a conditional event, some results on lower and upper bounds have been given in \cite{SUM2018S}. \end{remark} \begin{definition}\label{DEF:CONGn} Given $n$ conditional events $E_1|H_1,\ldots,E_n|H_n$, for each subset $S$, with $\emptyset\neq S \subseteq \{1,\ldots,n\}$, let $x_{S}$ be a prevision assessment on $\bigwedge_{i\in S} (E_i|H_i)$. The conjunction $\mathcal{C}_{1\cdots n}=(E_1|H_1) \wedge \cdots \wedge (E_n|H_n)$ is defined as \begin{equation}\label{EQ:CF} \begin{array}{lll} \mathcal{C}_{1\cdots n}= \left\{ \begin{array}{llll} 1, &\mbox{ if } \bigwedge_{i=1}^n E_iH_i \mbox{ is true}, \\ 0, &\mbox{ if } \bigvee_{i=1}^n \widebar{E}_iH_i \mbox{ is true}, \\ x_{S}, &\mbox{ if } \bigwedge_{i\in S} \widebar{H}_i\bigwedge_{i\notin S} E_i{H}_i\, \mbox{ is true}, \; \emptyset \neq S\subseteq \{1,\ldots,n\}. \end{array} \right.
\end{array} \end{equation} \end{definition} In particular, $\mathcal{C}_1=E_1|H_1$; moreover, for $\mathcal{S}=\{i_1,\ldots,i_k\}\subseteq \{1,\ldots,n\}$, the conjunction $\bigwedge_{i\in S} (E_i|H_i)$ is denoted by $\mathcal{C}_{i_1\cdots i_k}$ and $x_{\mathcal{S}}$ is also denoted by $x_{i_1\cdots i_k}$. In the betting framework, you agree to pay $x_{1\cdots n}=\mathbb{P}( \mathcal{C}_{1\cdots n})$ with the proviso that you will receive: $1$, if all conditional events are true; $0$, if at least one of the conditional events is false; the prevision of the conjunction of those conditional events which are void, otherwise. The operation of conjunction is associative and commutative. We observe that, based on Definition \ref{DEF:CONGn}, when $n=3$ we obtain \begin{equation}\label{EQ:CONJUNCTION3} \begin{small} \begin{array}{lll} \mathcal{C}_{123} =\left\{ \begin{array}{llll} 1, &\mbox{ if } E_1H_1E_2H_2E_3H_3 \mbox{ is true},\\ 0, &\mbox{ if } \widebar{E}_1H_1 \vee \widebar{E}_2H_2 \vee \widebar{E}_3H_3 \mbox{ is true},\\ x_1,& \mbox{ if } \widebar{H}_1E_2H_2E_3H_3 \mbox{ is true},\\ x_2,& \mbox{ if } \widebar{H}_2E_1H_1E_3H_3 \mbox{ is true},\\ x_3, &\mbox{ if } \widebar{H}_3E_1H_1E_2H_2 \mbox{ is true}, \\ x_{12}, &\mbox{ if } \widebar{H}_1\widebar{H}_2E_3H_3 \mbox{ is true}, \\ x_{13}, &\mbox{ if } \widebar{H}_1\widebar{H}_3E_2H_2 \mbox{ is true}, \\ x_{23}, &\mbox{ if } \widebar{H}_2\widebar{H}_3E_1H_1 \mbox{ is true}, \\ x_{123}, &\mbox{ if } \widebar{H}_1\widebar{H}_2\widebar{H}_3 \mbox{ is true}. \\ \end{array} \right. \end{array} \end{small} \end{equation} We recall the following result (\cite[Theorem 15]{GiSa19}).
\begin{theorem}\label{THM:PIFOR3} Assume that the events $E_1, E_2, E_3, H_1, H_2, H_3$ are logically independent, with $H_1\neq \emptyset, H_2\neq \emptyset, H_3\neq \emptyset$. Then, the set $\Pi$ of all coherent assessments $\mathcal{M}=(x_1,x_2,x_3,x_{12},x_{13},x_{23},x_{123})$ on $\mathcal F=\{\mathcal{C}_{1},\mathcal{C}_{2},\mathcal{C}_{3}, \mathcal{C}_{12}, \mathcal{C}_{13}, \mathcal{C}_{23}, \mathcal{C}_{123}\}$ is the set of points $(x_1,x_2,x_3,x_{12},x_{13},x_{23},x_{123})$ which satisfy the following conditions \begin{equation} \small \label{EQ:SYSTEMPISTATEMENT} \left\{ \begin{array}{l} (x_1,x_2,x_3)\in[0,1]^3,\\ \max\{x_1+x_2-1,x_{13}+x_{23}-x_3,0\}\leq x_{12}\leq \min\{x_1,x_2\},\\ \max\{x_1+x_3-1,x_{12}+x_{23}-x_2,0\}\leq x_{13}\leq \min\{x_1,x_3\},\\ \max\{x_2+x_3-1,x_{12}+x_{13}-x_1,0\}\leq x_{23}\leq \min\{x_2,x_3\},\\ 1-x_1-x_2-x_3+x_{12}+x_{13}+x_{23}\geq 0,\\ x_{123}\geq \max\{0,x_{12}+x_{13}-x_1,x_{12}+x_{23}-x_2,x_{13}+x_{23}-x_3\},\\ x_{123}\leq \min\{x_{12},x_{13},x_{23},1-x_1-x_2-x_3+x_{12}+x_{13}+x_{23}\}. \end{array} \right. \end{equation} \end{theorem} \begin{remark}\label{REM:INEQPI} As shown in (\ref{EQ:SYSTEMPISTATEMENT}), the coherence of $(x_1,x_2,x_3,x_{12},x_{13},x_{23},x_{123})$ amounts to the condition \begin{equation}\label{EQ:INEQPI} \begin{array}{ll} \max\{0,x_{12}+x_{13}-x_1,x_{12}+x_{23}-x_2,x_{13}+x_{23}-x_3\}\,\;\leq \;x_{123}\;\leq \\ \leq\;\; \min\{x_{12},x_{13},x_{23},1-x_1-x_2-x_3+x_{12}+x_{13}+x_{23}\}.
\end{array} \end{equation} Then, in particular, the extension $x_{123}$ on $\mathcal{C}_{123}$ is coherent if and only if $x_{123}\in[x_{123}',x_{123}'']$, where $x_{123}'=\max\{0,x_{12}+x_{13}-x_1,x_{12}+x_{23}-x_2,x_{13}+x_{23}-x_3\}$, $ x_{123}''= \min\{x_{12},x_{13},x_{23},1-x_1-x_2-x_3+x_{12}+x_{13}+x_{23}\}.$ \end{remark} Then, from Theorem \ref{THM:PIFOR3} the following corollary follows (\cite[Corollary 1]{GiSa19}). \begin{corollary}\label{COR:PIFOR3} For any coherent assessment $(x_1,x_2,x_3,x_{12},x_{13},x_{23})$ on $\{\mathcal{C}_{1},\mathcal{C}_{2},\mathcal{C}_{3}, \mathcal{C}_{12}, \mathcal{C}_{13}, \mathcal{C}_{23}\}$ the extension $x_{123}$ on $\mathcal{C}_{123}$ is coherent if and only if $x_{123}\in[x_{123}',x_{123}'']$, where \begin{equation}\label{EQ:INECOR} \begin{array}{ll} x_{123}'=\max\{0,x_{12}+x_{13}-x_1,x_{12}+x_{23}-x_2,x_{13}+x_{23}-x_3\},\\ x_{123}''= \min\{x_{12},x_{13},x_{23},1-x_1-x_2-x_3+x_{12}+x_{13}+x_{23}\}. \end{array} \end{equation} \end{corollary} We recall that in case of logical dependencies, the set of all coherent assessments may be smaller than the one associated with the case of logical independence. However (see \cite[Theorem 16]{GiSa19}), the set of coherent assessments is the same when $H_1=H_2=H_3=H$ (where possibly $H=\Omega$; see also \cite[p. 232]{Joe97}), and a corollary similar to Corollary \ref{COR:PIFOR3} also holds in this case. For a similar result based on copulas see \cite{Dura08}. \section{Representation by Frank t-norms for $(A|H)\wedge(B|K)$} We recall that for every $\lambda \in[0,+\infty]$ the Frank t-norm $T_{\lambda}:[0,1]^2\rightarrow [0,1]$ with parameter $\lambda$ is defined as \begin{equation}\label{EQ:FRANK} \begin{small} T_{\lambda}(u,v)=\left\{\begin{array}{ll} T_{M}(u,v)=\min\{u,v\}, & \text{ if } \lambda=0,\\ T_{P}(u,v)=uv, & \text{ if } \lambda=1,\\ T_{L}(u,v)=\max\{u+v-1,0\}, & \text{ if } \lambda=+\infty,\\ \log_{\lambda}(1+\frac{(\lambda^u-1)(\lambda^v-1)}{\lambda-1}), & \text{ otherwise}. \end{array}\right.
\end{small} \end{equation} We recall that $T_{\lambda}$ is continuous with respect to $\lambda$; moreover, for every $\lambda \in[0,+\infty]$, it holds that $T_{L}(u,v)\leq T_{\lambda}(u,v) \leq T_{M}(u,v)$, for every $(u,v)\in[0,1]^2$ (see, e.g., \cite{KlMP00,KlMe05}). In the next result we study the relation between our notion of conjunction and t-norms. \begin{theorem}\label{THM_TNORM2} Let us consider the conjunction $(A|H)\wedge(B|K)$, with $A, B, H, K$ logically independent and with $P(A|H)=x$, $P(B|K)=y$. Moreover, given any $\lambda \in[0,+\infty]$, let $T_{\lambda}$ be the Frank t-norm with parameter $\lambda$. Then, the assessment $z=T_{\lambda}(x,y)$ on $(A|H)\wedge(B|K)$ is a coherent extension of $(x,y)$ on $\{A|H,B|K\}$; moreover $(A|H)\wedge(B|K)=T_{\lambda}(A|H, B|K)$. Conversely, given any coherent extension $z=\mathbb{P}[(A|H)\wedge(B|K)]$ of $(x,y)$, there exists $\lambda \in[0,+\infty]$ such that $z=T_{\lambda}(x,y)$. \end{theorem} \begin{proof} We observe that from Theorem \ref{THM:FRECHET}, for any given $\lambda$, the assessment $z=T_{\lambda}(x,y)$ is a coherent extension of $(x,y)$ on $\{A|H,B|K\}$. Moreover, from (\ref{EQ:FRANK}) it holds that $T_{\lambda}(1,1)=1$, $T_{\lambda}(u,0)=T_{\lambda}(0,v)=0$, $T_{\lambda}(u,1)=u$, $T_{\lambda}(1,v)=v$. Hence, \begin{equation}\label{EQ:FRANKBIS} \begin{small} T_{\lambda}(A|H,B|K) =\left\{\begin{array}{ll} 1, &\mbox{ if $AHBK$ is true,}\\ 0, &\mbox{ if $\widebar{A}H$ is true or $\widebar{B}K$ is true,}\\ x, &\mbox{ if $\widebar{H}BK$ is true,}\\ y, &\mbox{ if $\widebar{K}AH$ is true,}\\ T_{\lambda}(x,y), &\mbox{ if $\widebar{H}\,\widebar{K}$ is true}, \end{array} \right. \end{small} \end{equation} and, if we choose $z=T_{\lambda}(x,y)$, from (\ref{EQ:CONJUNCTION}) and (\ref{EQ:FRANKBIS}) it follows that $(A|H)\wedge(B|K)=T_{\lambda}(A|H, B|K)$. \\ Conversely, given any coherent extension $z$ of $(x,y)$, there exists $\lambda$ such that $z=T_{\lambda}(x,y)$.
Indeed, if $z=\min\{x,y\}$, then $\lambda=0$; if $z=\max\{x+y-1,0\}$, then $\lambda=+\infty$; if $\max\{x+y-1,0\}<z<\min\{x,y\} $, then by continuity of $T_{\lambda}$ with respect to $\lambda$ it holds that $z=T_{\lambda}(x,y)$ for some $\lambda \in\,]0,\infty[$ (for instance, if $z=xy$, then $z=T_1(x,y)$) and hence $(A|H)\wedge(B|K)=T_{\lambda}(A|H, B|K)$. \qed \end{proof} \begin{remark} As we can see from (\ref{EQ:CONJUNCTION}) and Theorem \ref{THM_TNORM2}, in case of logically independent events, if the assessed values $x,y,z$ are such that $z=T_{\lambda}(x,y)$ for a given $\lambda$, then the conjunction $(A|H)\wedge (B|K)=T_{\lambda}(A|H,B|K)$. For instance, if $z=T_1(x,y)=xy$, then $(A|H)\wedge (B|K)=T_{1}(A|H,B|K)=(A|H)\cdot (B|K)$. Conversely, if $(A|H)\wedge (B|K)=T_{\lambda}(A|H,B|K)$ for a given $\lambda$, then $z=T_{\lambda}(x,y)$. Then, the set $\Pi$ given in (\ref{EQ:PI2}) can be written as $\Pi=\{(x,y,z): (x,y)\in[0,1]^2, z=T_{\lambda}(x,y), \lambda \in [0,+\infty]\}.$ \end{remark} \section{Conjunction of $(A|H)$ and $(A|K)$} In this section we examine the conjunction of two conditional events in the particular case when $A=B$, that is $(A|H)\wedge (A|K)$. By setting $P(A|H)=x$, $P(A|K)=y$ and $\mathbb{P}[(A|H)\wedge (A|K)]=z$, it holds that \[ (A|H)\wedge (A|K)=AHK+x\widebar{H}AK+y\widebar{K}AH+z\widebar{H}\,\widebar{K}\in\{1,0,x,y,z\}. \] \begin{theorem}\label{THM:A=B} Let $A, H, K$ be three logically independent events, with $H\neq \emptyset$, $K\neq \emptyset$. The set $\Pi$ of all coherent assessments $(x,y,z)$ on the family $\mathcal F=\{A|H,A|K,(A|H)\wedge (A|K)\}$ is given by \begin{equation}\label{EQ:PIA=B} \Pi=\{(x,y,z): (x,y)\in[0,1]^2, T_P(x,y)= xy\leq z\leq \min\{x,y\}=T_{M}(x,y)\}. \end{equation} \end{theorem} \begin{proof} Let $\mathcal M=(x,y,z)$ be a prevision assessment on $\mathcal F$.
The constituents associated with the pair $(\mathcal F,\mathcal M)$ and contained in $H \vee K$ are: $ C_1=AHK$, $C_2=\widebar{A}HK$, $C_3=\widebar{A}\widebar{H}K$, $C_4=\widebar{A}H\widebar{K}$, $C_5=A\widebar{H}K$, $C_6=AH\widebar{K}$. The associated points $Q_h$'s are $Q_1=(1,1,1), Q_2=(0,0,0), Q_3=(x,0,0), Q_4=(0,y,0), Q_5=(x,1,x), Q_6=(1,y,y)$. With the further constituent $C_0=\widebar{H}\widebar{K}$ we associate the point $Q_0=\mathcal{M}=(x,y,z)$. Considering the convex hull $\mathcal I$ (see Figure \ref{FIG:IEA1}) of $Q_1, \ldots, Q_6$, a necessary condition for the coherence of the prevision assessment $\mathcal M=(x,y,z)$ on $\mathcal F$ is that $\mathcal M \in \mathcal I$, that is, the following system must be solvable \[ (\Sigma) \left\{ \begin{array}{l} \lambda_1+x\lambda_3+x\lambda_5+\lambda_6=x,\;\; \lambda_1+y\lambda_4+\lambda_5+y\lambda_6=y,\;\; \lambda_1+x\lambda_5+y\lambda_6=z,\\ \sum_{h=1}^6\lambda_h=1,\;\; \lambda_h\geq 0,\; h=1,\ldots,6. \end{array} \right. \] First of all, we observe that the solvability of $(\Sigma)$ requires that $z\leq x$ and $z\leq y$, that is $z\leq \min\{x,y\}$. We now verify that $(x,y,z)$, with $(x,y)\in[0,1]^2$ and $z=\min\{x,y\}$, is coherent. We distinguish two cases: $(i)$ $x\leq y$ and $(ii)$ $x> y$. \\ Case $(i)$. In this case $z=\min\{x,y\}=x$. If $y=0$, the system $(\Sigma)$ becomes \[ \begin{array}{l} \lambda_1+\lambda_6=0,\;\; \lambda_1+\lambda_5=0,\;\; \lambda_1=0,\; \lambda_2+\lambda_3+\lambda_4=1,\;\; \lambda_h\geq 0,\;\; h=1,\ldots,6, \end{array} \] which is clearly solvable. In particular, there exist solutions with $\lambda_2>0,\lambda_3>0, \lambda_4>0$; hence, by Theorem \ref{CNES-PREV-I_0-INT}, as the set $I_0$ is empty, the solvability of $(\Sigma)$ is sufficient for the coherence of the assessment $(0,0,0)$. If $y>0$, the system $(\Sigma)$ is solvable and a solution is $ \Lambda=(\lambda_1,\ldots, \lambda_6)=(x,\frac{x(1-y)}{y},0,\frac{y-x}{y},0,0)$.
We observe that, if $x>0$, then $\lambda_1>0$ and $I_0=\emptyset$, because $C_1=AHK$ is contained in both $H$ and $K$, so that $\mathcal M=(x,y,x)$ is coherent. If $x=0$ (and hence $z=0$), then $\lambda_4=1$ and $I_0\subseteq \{2\}$. Then, as the sub-assessment $P(A|K)=y$ is coherent, it follows that the assessment $\mathcal M=(0,y,0)$ is coherent too.\\ Case $(ii)$. The system is solvable and a solution is $ \Lambda=(\lambda_1,\ldots, \lambda_6)=(y,\frac{y(1-x)}{x},\frac{x-y}{x},0,0,0).$ We observe that, if $y>0$, then $\lambda_1>0$ and $I_0=\emptyset$, because $C_1=AHK$ is contained in both $H$ and $K$, so that $\mathcal M=(x,y,y)$ is coherent. If $y=0$ (and hence $z=0$), then $\lambda_3=1$ and $I_0\subseteq \{1\}$. Then, as the sub-assessment $P(A|H)=x$ is coherent, it follows that the assessment $\mathcal M=(x,0,0)$ is coherent too. Thus, for every $(x,y)\in[0,1]^2$, the assessment $(x,y,\min\{x,y\})$ is coherent and, as $z\leq \min\{x,y\}$, the upper bound on $z$ is $\min\{x,y\}=T_M(x,y)$. \\ We now verify that $(x,y,xy)$, with $(x,y)\in[0,1]^2$, is coherent; moreover, we will show that $(x,y,z)$, with $z<xy$, is not coherent, in other words, the lower bound for $z$ is $xy$. First of all, we observe that $\mathcal M=(1-x)Q_4+xQ_6$, so that a solution of $(\Sigma)$ is $\Lambda_1=(0,0,0,1-x,0,x)$. Moreover, $\mathcal M=(1-y)Q_3+yQ_5$, so that another solution is $\Lambda_2=(0,0,1-y,0,y,0)$. Then $ \Lambda=\frac{\Lambda_1+\Lambda_2}{2}=(0,0,\frac{1-y}{2},\frac{1-x}{2},\frac{y}{2},\frac{x}{2}) $ is a solution of $(\Sigma)$ such that $I_0=\emptyset$. Thus the assessment $(x,y,xy)$ is coherent for every $(x,y)\in[0,1]^2$. In order to verify that $xy$ is the lower bound on $z$, we observe that the points $Q_3,Q_4,Q_5,Q_6$ belong to the plane $\pi$ with equation $yX+xY-Z=xy$, where $X,Y,Z$ are the coordinates.
Now, by considering the function $f(X,Y,Z)= yX+xY-Z$, we observe that for each constant $k$ the equation $f(X,Y,Z)=k$ represents a plane which is parallel to $\pi$ and coincides with $\pi$ when $k=xy$. We also observe that $f(Q_1)=f(1,1,1)=x+y-1=T_L(x,y)\leq xy=T_P(x,y)$, $f(Q_2)=f(0,0,0)=0 \leq xy=T_P(x,y)$, and $f(Q_3)=f(Q_4)=f(Q_5)=f(Q_6)= xy=T_P(x,y)$. Then, for every $\mathcal P=\sum_{h=1}^6\lambda_hQ_h$, with $\lambda_h\geq 0$ and $\sum_{h=1}^6\lambda_h=1$, that is $\mathcal P\in \mathcal I$, it holds that $ f(\mathcal P)=f\big(\sum_{h=1}^6\lambda_hQ_h\big)=\sum_{h=1}^6\lambda_hf(Q_h)\leq xy. $ On the other hand, given any $a>0$, by considering $\mathcal P=(x,y,xy-a)$ it holds that $ f(\mathcal P)=f(x,y,xy-a)=yx+xy-(xy-a)= xy+a>xy. $ Therefore, for any given $a>0$ the assessment $(x,y,xy-a)$ is not coherent because $(x,y,xy-a)\notin \mathcal I$. Then, the lower bound on $z$ is $xy=T_{P}(x,y)$. Finally, the set of all coherent assessments $(x,y,z)$ on $\mathcal F$ is the set $\Pi$ in (\ref{EQ:PIA=B}). \qed \end{proof} \begin{figure} \caption{Convex hull $\mathcal I$ of the points $Q_1, Q_2,Q_3, Q_4, Q_5,Q_6$; $\mathcal M'=(x,y,z')$, $\mathcal M''=(x,y,z'')$, where $(x,y)\in[0,1]^2$, $z'=xy$, $z''=\min\{x,y\}$.} \label{FIG:IEA1} \end{figure} Based on Theorem \ref{THM:A=B}, we can give an analogue of Theorem \ref{THM_TNORM2} (when $A=B$). \begin{theorem}\label{THM_TNORM2A=B} Let us consider the conjunction $(A|H)\wedge(A|K)$, with $A, H, K$ logically independent and with $P(A|H)=x$, $P(A|K)=y$. Moreover, given any $\lambda \in[0,1]$, let $T_{\lambda}$ be the Frank t-norm with parameter $\lambda$. Then, the assessment $z=T_{\lambda}(x,y)$ on $(A|H)\wedge(A|K)$ is a coherent extension of $(x,y)$ on $\{A|H,A|K\}$; moreover $(A|H)\wedge(A|K)=T_{\lambda}(A|H, A|K)$. Conversely, given any coherent extension $z=\mathbb{P}[(A|H)\wedge(A|K)]$ of $(x,y)$, there exists $\lambda \in[0,1]$ such that $z=T_{\lambda}(x,y)$.
\end{theorem} The next result follows from Theorem \ref{THM:A=B} when $H$ and $K$ are incompatible. \begin{theorem}\label{THM:SETPROD} Let $A, H, K$ be three events, with $A$ logically independent of both $H$ and $K$, with $H\neq \emptyset$, $K\neq \emptyset$, $HK=\emptyset$. The set $\Pi$ of all coherent assessments $(x,y,z)$ on the family $\mathcal F=\{A|H,A|K,(A|H)\wedge (A|K)\}$ is given by $\Pi=\{(x,y,z): (x,y)\in[0,1]^2, z= xy=T_P(x,y)\}.$ \end{theorem} \begin{proof} We observe that \[ \begin{small} (A|H)\wedge (A|K)=\left\{\begin{array}{ll} 0, &\mbox{ if } \widebar{A}\widebar{H}K \vee \widebar{A}H\widebar{K} \mbox{ is true,}\\ x, &\mbox{ if } \widebar{H}AK \mbox{ is true,}\\ y, &\mbox{ if } AH\widebar{K}\mbox{ is true,}\\ z, &\mbox{ if } \widebar{H}\widebar{K} \mbox{ is true.}\\ \end{array} \right. \end{small} \] Moreover, as $HK=\emptyset$, the points $Q_h$'s are $(x,0,0), (0,y,0), (x,1,x), (1,y,y)$, which coincide with the points $Q_3,\ldots, Q_6$ of the case $HK\neq \emptyset$. Then, as shown in the proof of Theorem \ref{THM:A=B}, the condition that $\mathcal M=(x,y,z)$ belongs to the convex hull of $(x,0,0), (0,y,0), (x,1,x), (1,y,y)$ amounts to $z=xy$. \qed \end{proof} \begin{remark} From Theorem \ref{THM:SETPROD}, when $HK=\emptyset$ it holds that $ (A|H)\wedge (A|K)=(A|H)\cdot (A|K)=T_P(A|H,A|K), $ where $x=P(A|H)$ and $y=P(A|K)$. \end{remark} \section{Further Results on Frank t-norms} In this section we give some results which concern Frank t-norms and the family $\mathcal F=\{\mathcal{C}_{1},\mathcal{C}_{2},\mathcal{C}_{3}, \mathcal{C}_{12}, \mathcal{C}_{13}, \mathcal{C}_{23}, \mathcal{C}_{123}\}$. We recall that, given any t-norm $T(x_1,x_2)$, it holds that $T(x_1,x_2,x_3)=T(T(x_1,x_2),x_3)$. \subsection{On the Product t-norm} \begin{theorem}\label{THM:PROD} Assume that the events $E_1, E_2, E_3, H_1, H_2, H_3$ are logically independent, with $H_1\neq \emptyset, H_2\neq \emptyset, H_3\neq \emptyset$.
If the assessment $\mathcal{M}=(x_1,x_2,x_3,x_{12},x_{13},x_{23},x_{123})$ on $\mathcal F=\{\mathcal{C}_{1},\mathcal{C}_{2},\mathcal{C}_{3}, \mathcal{C}_{12}, \mathcal{C}_{13}, \mathcal{C}_{23}, \mathcal{C}_{123}\}$ is such that $(x_1,x_2,x_3)\in[0,1]^3$, $x_{ij}=T_{1}(x_i,x_j)=x_ix_j$, $i\neq j$, and $x_{123}=T_{1}(x_1,x_2,x_3)=x_1x_2x_3$, then $\mathcal M$ is coherent. Moreover, $\mathcal{C}_{ij}=T_{1}(\mathcal{C}_i,\mathcal{C}_j)=\mathcal{C}_i\mathcal{C}_j$, $i\neq j$, and $\mathcal{C}_{123}=T_{1}(\mathcal{C}_1,\mathcal{C}_2,\mathcal{C}_3)=\mathcal{C}_{1}\mathcal{C}_{2}\mathcal{C}_{3}$. \end{theorem} \begin{proof} From Remark \ref{REM:INEQPI}, the coherence of $\mathcal M$ amounts to the inequalities in (\ref{EQ:INEQPI}). As $x_{ij}=T_{1}(x_i,x_j)=x_ix_j$, $i\neq j$, and $x_{123}=T_{1}(x_1,x_2,x_3)=x_1x_2x_3$, the inequalities (\ref{EQ:INEQPI}) become \begin{equation} \begin{array}{ll} \max\{0,x_1(x_2+x_3-1),x_{2}(x_1+x_3-1),x_3(x_1+x_2-1)\}\,\;\leq \;x_{1}x_2x_3\;\leq \\ \leq\;\; \min\{x_{1}x_2,x_{1}x_3,x_{2}x_3,(1-x_1)(1-x_2)(1-x_3)+x_1x_2x_3\}. \end{array} \end{equation} Thus, by recalling that $x_i+x_j-1\leq x_ix_j$, the inequalities are satisfied and hence $\mathcal M$ is coherent. Moreover, from (\ref{EQ:CONJUNCTION}) and (\ref{EQ:CONJUNCTION3}) it follows that $ \mathcal{C}_{ij}=T_{1}(\mathcal{C}_i,\mathcal{C}_j)=\mathcal{C}_i\mathcal{C}_j$, $i\neq j$, and $\mathcal{C}_{123}=T_{1}(\mathcal{C}_1,\mathcal{C}_2,\mathcal{C}_3)=\mathcal{C}_{1}\mathcal{C}_{2}\mathcal{C}_{3}$.\qed \end{proof} \subsection{On the Minimum t-norm} \begin{theorem}\label{THM:MIN} Assume that the events $E_1, E_2, E_3, H_1, H_2, H_3$ are logically independent, with $H_1\neq \emptyset, H_2\neq \emptyset, H_3\neq \emptyset$. 
If the assessment $\mathcal{M}=(x_1,x_2,x_3,x_{12},x_{13},x_{23},x_{123})$ on $\mathcal F=\{\mathcal{C}_{1},\mathcal{C}_{2},\mathcal{C}_{3}, \mathcal{C}_{12}, \mathcal{C}_{13}, \mathcal{C}_{23}, \mathcal{C}_{123}\}$ is such that $(x_1,x_2,x_3)\in[0,1]^3$, $x_{ij}=T_{M}(x_i,x_j)=\min\{x_i,x_j\}$, $i\neq j$, and $x_{123}=T_{M}(x_1,x_2,x_3)=\min\{x_1,x_2,x_3\}$, then $\mathcal M$ is coherent. Moreover, $\mathcal{C}_{ij}=T_{M}(\mathcal{C}_i,\mathcal{C}_j)=\min\{\mathcal{C}_i,\mathcal{C}_j\}$, $i\neq j$, and $\mathcal{C}_{123}=T_{M}(\mathcal{C}_1,\mathcal{C}_2,\mathcal{C}_3)=\min\{\mathcal{C}_{1},\mathcal{C}_{2},\mathcal{C}_{3}\}$. \end{theorem} \begin{proof} From Remark \ref{REM:INEQPI}, the coherence of $\mathcal M$ amounts to the inequalities in (\ref{EQ:INEQPI}). Without loss of generality, we assume that $x_1\leq x_2\leq x_3$. Then $x_{12}=T_{M}(x_1,x_2)=x_1$, $x_{13}=T_{M}(x_1,x_3)=x_1$, $x_{23}=T_{M}(x_2,x_3)=x_2$, and $x_{123}=T_{M}(x_1,x_2,x_3)=x_1$. The inequalities (\ref{EQ:INEQPI}) become \begin{equation}\label{EQ:MIN} \begin{array}{ll} \max\{0,x_{1},x_{1}+x_2-x_3\}=x_1\,\;\leq \;x_{1}\;\leq x_1= \min\{x_{1},x_{2},1-x_3+x_{1}\}. \end{array} \end{equation} Thus, the inequalities are satisfied and hence $\mathcal M$ is coherent. Moreover, from (\ref{EQ:CONJUNCTION}) and (\ref{EQ:CONJUNCTION3}) it follows that $\mathcal{C}_{ij}=T_{M}(\mathcal{C}_i,\mathcal{C}_j)=\min\{\mathcal{C}_i,\mathcal{C}_j\}$, $i\neq j$, and $\mathcal{C}_{123}=T_{M}(\mathcal{C}_1,\mathcal{C}_2,\mathcal{C}_3)=\min\{\mathcal{C}_{1},\mathcal{C}_{2},\mathcal{C}_{3}\}$. \qed \end{proof} \begin{remark} As we can see from $(\ref{EQ:MIN})$ and Corollary \ref{COR:PIFOR3}, the assessment $x_{123}=\min\{x_1,x_2,x_3\}$ is the unique coherent extension on $\mathcal{C}_{123}$ of the assessment $ (x_1,x_2,x_3,\min\{x_1,x_2\},\min\{x_1,x_3\},\min\{x_2,x_3\})$ on $\{\mathcal{C}_{1},\mathcal{C}_{2},\mathcal{C}_{3}, \mathcal{C}_{12}, \mathcal{C}_{13}, \mathcal{C}_{23}\}$. 
\\ We also notice that, if $\mathcal{C}_1\leq \mathcal{C}_2\leq \mathcal{C}_3$, then $\mathcal{C}_{12}=\mathcal{C}_1$, $\mathcal{C}_{13}=\mathcal{C}_1$, $\mathcal{C}_{23}=\mathcal{C}_2$, and $\mathcal{C}_{123}=\mathcal{C}_1$. Moreover, $x_{12}=x_1$, $x_{13}=x_1$, $x_{23}=x_2$, and $x_{123}=x_1$. \end{remark} \subsection{On the Lukasiewicz t-norm} We observe that in general the results of Theorems \ref{THM:PROD} and \ref{THM:MIN} do not hold for the Lukasiewicz t-norm (and hence do not hold for an arbitrary Frank t-norm), as shown in the example below. We recall that $T_L(x_1,x_2,x_3)=\max\{x_1+x_2+x_3-2,0\}$. \begin{example} The assessment $(x_1,x_2,x_3,T_L(x_1,x_2),T_L(x_1,x_3),T_L(x_2,x_3)$, $T_L(x_1,x_2,x_3))$ on the family $\mathcal F=\{\mathcal{C}_{1},\mathcal{C}_{2},\mathcal{C}_{3}, \mathcal{C}_{12}, \mathcal{C}_{13}, \mathcal{C}_{23}, \mathcal{C}_{123}\}$, with $(x_1,x_2,x_3)=(0.5,0.6,0.7)$, is not coherent. Indeed, by observing that $T_L(x_1,x_2)=0.1$, $T_L(x_1,x_3)=0.2$, $T_L(x_2,x_3)=0.3$, and $T_L(x_1,x_2,x_3)=0$, formula (\ref{EQ:INEQPI}) becomes $ \max\{0, 0.1+0.2-0.5,0.1+0.3-0.6,0.2+0.3-0.7\}\,\;\leq \;0\; \leq\;\; \min\{0.1,0.2,0.3,1-0.5-0.6-0.7+0.1+0.2+0.3\}$, that is: $ \max\{0, -0.2\}\,\;\leq \;0\;\leq \min\{0.1,0.2,0.3,-0.2\}; $ thus the inequalities are not satisfied and the assessment is not coherent. \end{example} More generally, we have \begin{theorem}\label{THM:LUK} The assessment $(x_1,x_2,x_3,T_L(x_1,x_2),T_L(x_1,x_3),T_L(x_2,x_3))$ on the family $\mathcal F=\{\mathcal{C}_{1},\mathcal{C}_{2},\mathcal{C}_{3}, \mathcal{C}_{12}, \mathcal{C}_{13}, \mathcal{C}_{23}\}$, with $T_L(x_1,x_2)>0$, ${T_L(x_1,x_3)>0}$, ${T_L(x_2,x_3)>0}$, is coherent if and only if $x_1+x_2+x_3-2\geq 0$. Moreover, when $x_1+x_2+x_3-2\geq 0$ the unique coherent extension $x_{123}$ on $\mathcal{C}_{123}$ is $x_{123}=T_L(x_1,x_2,x_3)$. \end{theorem} \begin{proof} We distinguish two cases: $(i)$ $x_1+x_2+x_3-2< 0$; $(ii)$ $x_1+x_2+x_3-2\geq 0$.\\ Case $(i)$.
From (\ref{EQ:SYSTEMPISTATEMENT}), the inequality $1-x_1-x_2-x_3+x_{12}+x_{13}+x_{23}\geq 0$ is not satisfied because $ 1-x_1-x_2-x_3+x_{12}+x_{13}+x_{23}=x_{1}+x_2+x_3-2<0. $ Therefore the assessment is not coherent.\\ Case $(ii)$. We set $x_{123}=T_L(x_1,x_2,x_3)=x_1+x_2+x_3-2$. Then, by observing that $0\leq x_1+x_2+x_3-2\leq x_i+x_j-1$, $i\neq j$, formula (\ref{EQ:INEQPI}) becomes $\max\{0,x_{1}+x_2+x_3-2\}\,\;\leq \;x_{1}+x_2+x_3-2\; \leq\;\; \min\{x_{1}+x_2-1,x_{1}+x_3-1,x_{2}+x_3-1,x_1+x_2+x_3-2\}$, that is: $ \;x_{1}+x_2+x_3-2\;\leq \;x_{1}+x_2+x_3-2\;\leq \;x_{1}+x_2+x_3-2. $ Thus, the inequalities are satisfied and the assessment $ (x_1,x_2,x_3,T_L(x_1,x_2),T_L(x_1,x_3),T_L(x_2,x_3), T_L(x_1,x_2,x_3))$ on $\{\mathcal{C}_{1},\mathcal{C}_{2},\mathcal{C}_{3}, \mathcal{C}_{12}, \mathcal{C}_{13}, \mathcal{C}_{23},\mathcal{C}_{123}\}$ is coherent and the sub-assessment $ (x_1,x_2,x_3,T_L(x_1,x_2),T_L(x_1,x_3),T_L(x_2,x_3)) $ on $\mathcal F$ is coherent too. \qed \end{proof} A result related with Theorem \ref{THM:LUK} is given below. \begin{theorem} If the assessment $(x_1,x_2,x_3,T_L(x_1,x_2),T_L(x_1,x_3),T_L(x_2,x_3)$, $T_L(x_1,x_2,x_3))$ on the family $\mathcal F=\{\mathcal{C}_{1},\mathcal{C}_{2},\mathcal{C}_{3}, \mathcal{C}_{12}, \mathcal{C}_{13}, \mathcal{C}_{23}, \mathcal{C}_{123} \}$, is such that $T_L(x_1,x_2,x_3)>0$, then the assessment is coherent. \end{theorem} \begin{proof} We observe that $T_L(x_1,x_2,x_3)=x_1+x_2+x_3-2>0$; then $x_i>0$, $i=1,2,3$, and $0<x_1+x_2+x_3-2\leq x_i+x_j-1$, $i\neq j$. Then formula (\ref{EQ:INEQPI}) becomes: $\;\;\max\{0,x_{1}+x_2+x_3-2\}\,\;\leq \;x_{1}+x_2+x_3-2\;\leq\\ \leq\;\; \min\{x_{1}+x_2-1,x_{1}+x_3-1,x_{2}+x_3-1,x_1+x_2+x_3-2\},$ that is: \\ $x_{1}+x_2+x_3-2\;\leq \;x_{1}+x_2+x_3-2\;\leq \;x_{1}+x_2+x_3-2.$ \\ Thus, the inequalities are satisfied and the assessment is coherent. \qed \end{proof} \section{Conclusions} We have studied the relationship between the notions of conjunction and of Frank t-norms.
We have shown that, under logical independence of events and coherence of prevision assessments, for a suitable $\lambda \in [0,+\infty]$ it holds that $\mathbb{P}((A|H) \wedge (B|K))= T_\lambda(x,y)$ and $(A|H) \wedge (B|K)= T_\lambda(A|H,B|K)$. Then, we have considered the case $A=B$, by determining the set of all coherent assessments $(x,y,z)$ on $\{A|H,A|K,(A|H) \wedge (A|K)\}$. We have shown that, under coherence, for a suitable $\lambda \in [0,1]$ it holds that $(A|H) \wedge (A|K)= T_\lambda(A|H,A|K)$. We have also studied the particular case where $A=B$ and $HK=\emptyset$. Then, we have considered the conjunction of three conditional events and we have shown that the prevision assessments produced by the Product t-norm, or the Minimum t-norm, are coherent. Finally, we have examined the Lukasiewicz t-norm and we have shown, by a counterexample, that coherence in general is not assured. We have given some conditions for coherence when the prevision assessments are based on the Lukasiewicz t-norm. Future work will concern the deepening and generalization of the results of this paper. \\ \ \\ {\bf Acknowledgments}. We thank three anonymous referees for their useful comments. \end{document}
\begin{document} \date{\today} \title{\textbf{Totally Geodesic Submanifolds in Tangent Bundle with g-natural Metric}} \author{Stanis\l aw Ewert-Krzemieniewski (Szczecin)} \maketitle \begin{abstract} In the paper we investigate submanifolds in a tangent bundle endowed with a g-natural metric $G$, defined by a vector field on the base manifold. We give a sufficient condition for a vector field on $M$ to define a totally geodesic submanifold in $(TM,G)$. The case of a parallel vector field is discussed in more detail. \textbf{Mathematics Subject Classification }Primary 53B25, 53C40, secondary 53B21, 55C25. \textbf{Key words}: Riemannian manifold, submanifold, tangent bundle, g-natural metric, totally geodesic, Sasaki metric. \end{abstract} \section{Preliminaries} A smooth vector field $u$ on a manifold $M$ defines a submanifold $u(M)$ in the tangent bundle $TM$ in an obvious way. A set of early results on the subject is presented in (\cite{1}, Chapter III). The metric on $u(M),$ if any is considered, is induced from the complete or vertical lifts of the metric on $M.$ On the other hand, there are few results on totally geodesic submanifolds in $TM$ defined by a vector field on $M,$ with the metric induced from the Sasaki one. First, if $u$ is a zero vector field on $M,$ then $u(M)$ is totally geodesic (\cite{2}). For $u\neq 0$ Walczak proved \begin{theorem} (\cite{3}) \begin{enumerate} \item If $\nabla u=0$ on $(M,g),$ then $u(M)$ is a totally geodesic submanifold.
\item If $u$ is of constant length and $u(M)$ is totally geodesic, then $u$ is parallel $(\nabla u=0).$ \end{enumerate} \end{theorem} For submanifolds in $(TM,G_{S})$ we have \begin{theorem} (\cite{4}, Proposition 2.1, Theorem 2.1) \begin{enumerate} \item Let $N^{m}\subset TM$ be a submanifold embedded in the tangent bundle of a Riemannian manifold $M,$ which is transverse to the fibre at a point $ z\in N^{m}.$ Then there is a submanifold $F^{m}\subset M,$ such that $\pi (z)\in F^{m},$ an open set $U$ containing $\pi (z),$ an open set $V\subset TM$ containing $z$ and a vector field $u$ along $F^{m}\cap U$ such that $N^{m}\cap V=u(F^{m}\cap U).$ \item Let $N^{m}\ $be a connected compact submanifold of the tangent bundle of a connected simply connected Riemannian manifold $M^{m}$ that is everywhere transverse to the fibres of $TM^{m}.$ Then $M^{m}$ is compact and there is a vector field $u$ on $M^{m},$ such that \begin{equation*} u(M^{m})=N^{m}. \end{equation*} \end{enumerate} \end{theorem} In this respect the former theorem was extended to the case of a vector field $u$ defined on an arbitrary submanifold of $M$ (\cite{4}, Proposition 3.1 and Corollaries 3.1 - 3.3) to give necessary and sufficient conditions for $u(M)$ to be totally geodesic. The aim of the paper is to extend the above results to the case of a tangent bundle endowed with a $g$-natural metric $G.$ Firstly, we shall indicate large classes of $g$-natural metrics for which the theorems by Walczak cited above hold. Secondly, we give a sufficient condition for a vector field $u$ to define a totally geodesic submanifold in $(TM,G).$ At the end of the paper some examples are given. Throughout the paper all manifolds under consideration are smooth Hausdorff manifolds. The metric $g$ of the base manifold $M$ is always assumed to be a Riemannian one.
\section{Tangent Bundle} Let $x$ be a point of a Riemannian manifold $(M,g),$ dim$M=n,$ covered by coordinate neighbourhoods $(U,$ $(x^{j})),$ $j=1,...,n.\ $Let $TM\ $be the tangent bundle of $M$ and $\pi :TM\longrightarrow M\ $be the natural projection on $M.$ If $x\in U$ and $u=u^{r}\frac{\partial }{\partial x^{r}} _{\mid x}\in T_{x}M,\ $then $(\pi ^{-1}(U),$ $((x^{r}),(u^{r}))),$ $r=1,...,n,$ is a coordinate neighbourhood on $TM.$ We shall write $\partial _{k}=\frac{ \partial }{\partial x^{k}}$ and $\delta _{k}=\frac{\partial }{\partial u^{k}} .$ The space $T_{(x,u)}TM$ tangent to $TM$ at $(x,u)$ splits into a direct sum of two isomorphic subspaces \begin{equation*} T_{(x,u)}TM=H_{(x,u)}TM\oplus V_{(x,u)}TM. \end{equation*} $V_{(x,u)}TM$ is the kernel of the differential of the projection $\pi :TM\longrightarrow M,$ i.e. \begin{equation*} V_{(x,u)}TM=Ker\left( d\pi |_{(x,u)}\right) \end{equation*} and is called the vertical subspace of $T_{(x,u)}TM.$ $H_{(x,u)}TM$ is the kernel of the connection map of the Levi-Civita connection $\nabla ,$ i.e. \begin{equation*} H_{(x,u)}TM=Ker(K_{(x,u)}) \end{equation*} and is called the horizontal subspace. For any smooth vector field $Z:M\longrightarrow TM$ and $X_{x}\in T_{x}M$ the connection map \begin{equation*} K_{(x,u)}:T_{(x,u)}TM\longrightarrow T_{x}M \end{equation*} of the Levi-Civita connection $\nabla $ \ is given by \begin{equation*} K(dZ_{x}(X_{x}))=\left( \nabla _{X}Z\right) _{x}. \end{equation*} If $\widetilde{X}=\left( X^{r}\partial _{r}+\overline{X}^{r}\delta _{r}\right) |_{(x,u)},$ then $d\pi \left( \widetilde{X}\right) =X^{r}\partial _{r}|_{x}$ and $K(\widetilde{X})=(\overline{X} ^{r}+u^{s}X^{t}\Gamma _{st}^{r})\partial _{r}|_{x}.$ On the other hand, for any vector field $X=X^{r}\partial _{r}$ on $M$ there exist unique vectors \begin{equation*} X^{h}=X^{j}\partial _{j}-u^{r}X^{s}\Gamma _{rs}^{j}\delta _{j},\quad X^{v}=X^{j}\delta _{j}, \end{equation*} called the horizontal and vertical lifts of $X$ respectively.
We have \begin{equation*} d\pi (X^{h})=X,\quad K(X^{h})=0,\quad d\pi (X^{v})=0,\quad K(X^{v})=X. \end{equation*} In \cite{5} the class of $g$-natural metrics was defined. We have \begin{lemma} (\cite{5}, \cite{6}, \cite{7}) Let $(M,g)$ be a Riemannian manifold and $G$ be a $g$-natural metric on $TM.$ There exist functions $a_{j},$ $b_{j}:[0,\infty )\longrightarrow \mathbb{R},$ $j=1,2,3,$ such that for every $X,$ $Y,$ $u\in T_{x}M$ \begin{multline*} G_{(x,u)}(X^{h},Y^{h})=(a_{1}+a_{3})(r^{2})g_{x}(X,Y)+(b_{1}+b_{3})(r^{2})g_{x}(X,u)g_{x}(Y,u), \\ G_{(x,u)}(X^{h},Y^{v})=G_{(x,u)}(X^{v},Y^{h})=a_{2}(r^{2})g_{x}(X,Y)+b_{2}(r^{2})g_{x}(X,u)g_{x}(Y,u), \\ G_{(x,u)}(X^{v},Y^{v})=a_{1}(r^{2})g_{x}(X,Y)+b_{1}(r^{2})g_{x}(X,u)g_{x}(Y,u), \end{multline*} where $r^{2}=g_{x}(u,u).$ For $\dim M=1$ the same holds with $b_{j}=0,$ $j=1,2,3.$ \end{lemma} Setting $a_{1}=1,$ $a_{2}=a_{3}=b_{j}=0$ we obtain the Sasaki metric, while setting $a_{1}=b_{1}=\frac{1}{1+r^{2}},$ $a_{2}=b_{2}=0,$ $a_{1}+a_{3}=1,$ $b_{1}+b_{3}=1$ we get the Cheeger-Gromoll one.
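As a numerical illustration of the Lemma (a sketch with randomly generated sample data, not part of the original text), one can evaluate the three formulas for the Sasaki coefficients, recovering $G(X^{h},Y^{h})=g(X,Y)$, $G(X^{h},Y^{v})=0$, $G(X^{v},Y^{v})=g(X,Y)$, and for the Cheeger-Gromoll values of $a_{1},b_{1}$, recovering the classical vertical part $\frac{1}{1+r^{2}}\left( g(X,Y)+g(X,u)g(Y,u)\right)$.

```python
import random

random.seed(1)
n = 3

# Random symmetric positive definite "metric at a point": g = M^T M + I
M = [[random.uniform(-1.0, 1.0) for _ in range(n)] for _ in range(n)]
g = [[sum(M[k][i] * M[k][j] for k in range(n)) + (1.0 if i == j else 0.0)
      for j in range(n)] for i in range(n)]

def gdot(a, b):
    return sum(g[i][j] * a[i] * b[j] for i in range(n) for j in range(n))

X = [random.uniform(-1.0, 1.0) for _ in range(n)]
Y = [random.uniform(-1.0, 1.0) for _ in range(n)]
u = [random.uniform(-1.0, 1.0) for _ in range(n)]
r2 = gdot(u, u)

def G_lifts(a1, a2, a3, b1, b2, b3):
    """The three formulas of the Lemma, with coefficients evaluated at r^2 = g(u,u)."""
    hh = (a1 + a3) * gdot(X, Y) + (b1 + b3) * gdot(X, u) * gdot(Y, u)
    hv = a2 * gdot(X, Y) + b2 * gdot(X, u) * gdot(Y, u)
    vv = a1 * gdot(X, Y) + b1 * gdot(X, u) * gdot(Y, u)
    return hh, hv, vv

# Sasaki choice: a1 = 1, all other coefficients 0
hh, hv, vv = G_lifts(1.0, 0.0, 0.0, 0.0, 0.0, 0.0)
print(abs(hh - gdot(X, Y)) < 1e-12, abs(hv) < 1e-12, abs(vv - gdot(X, Y)) < 1e-12)

# Vertical part for a1 = b1 = 1/(1 + r^2) (the Cheeger-Gromoll values of a1, b1)
cg = 1.0 / (1.0 + r2)
vv_cg = G_lifts(cg, 0.0, 0.0, cg, 0.0, 0.0)[2]
print(abs(vv_cg - (gdot(X, Y) + gdot(X, u) * gdot(Y, u)) / (1.0 + r2)) < 1e-12)
```

Only $a_{1}$ and $b_{1}$ enter the vertical component, so the last check is independent of the choice of $a_{3},b_{3}$.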
Following \cite{6} we put \begin{enumerate} \item $a(t)=a_{1}(t)\left( a_{1}(t)+a_{3}(t)\right) -a_{2}^{2}(t),$ \item $F_{j}(t)=a_{j}(t)+tb_{j}(t),$ \item $F(t)=F_{1}(t)\left[ F_{1}(t)+F_{3}(t)\right] -F_{2}^{2}(t)$ for all $t\in [0,\infty ).$ \end{enumerate} We shall often abbreviate: $A=a_{1}+a_{3},$ $B=b_{1}+b_{3}.$ \begin{lemma} \label{Lemma 9}(\cite{6}, Proposition 2.7) The necessary and sufficient conditions for a $g$-natural metric $G$ on the tangent bundle of a Riemannian manifold $(M,g)$ to be non-degenerate are $a(t)\neq 0$ and $F(t)\neq 0$ for all $t\in [0,\infty ).$ If $\dim M=1$ this is equivalent to $a(t)\neq 0$ for all $t\in [0,\infty ).$ \end{lemma} \section{Submanifolds Defined by Vector Fields} Let $u$ be a smooth vector field on a Riemannian manifold $(M,g)$ and $N=u(M).$ If $p\in M,$ then $z=(p,u(p))\in TM.$ The space $T_{z}N=T_{(p,u(p))}N$ consists of the vectors $u_{\ast }(X)=X^{h}+(\nabla _{X}u)^{v}$ for all $X\in T_{p}M.$ The normal space $T_{z}^{\perp }N$ consists of the vectors $\eta =W^{h}+V^{v},$ where $W,$ $V\in T_{p}M$ and \begin{equation} G_{z}(\eta ,u_{\ast }(X))=0.
\end{equation} Hence, we have \begin{multline} g(AW+Bg(u,W)u+a_{2}V+b_{2}g(u,V)u,\ X)+ \label{20} \\ g(a_{1}V+b_{1}g(u,V)u+a_{2}W+b_{2}g(u,W)u,\ \nabla _{X}u)=0 \end{multline} for all $X\in \mathfrak{X}(M).$ By the use of the Weingarten formula \begin{equation*} \widetilde{\nabla }_{X}V=-A_{V}X+D_{X}V, \end{equation*} the definition of the $g$-natural metric $G$ and its Levi-Civita connection $\widetilde{\nabla }$ (\cite{6}, \cite{7}; for the definitions of the $F$-tensors cf. \cite{8}) we obtain \begin{multline} -G_{z}(A_{W^{h}+V^{v}}u_{\ast }(X),\ u_{\ast }(X))=G(\widetilde{\nabla } _{u_{\ast }(X)}(W^{h}+V^{v}),\ u_{\ast }(X))= \label{30} \\ Ag(\nabla _{X}W,X)+Bg(\nabla _{X}W,u)g(X,u)+a_{2}g(\nabla _{X}W,\nabla _{X}u)+ \\ b_{2}g(\nabla _{X}W,u)g(\nabla _{X}u,u)+a_{2}g(\nabla _{X}V,X)+b_{2}g(\nabla _{X}V,u)g(X,u)+ \\ a_{1}g(\nabla _{X}V,\nabla _{X}u)+b_{1}g(\nabla _{X}V,u)g(\nabla _{X}u,u)+ \\ A\left\{ A[u,X,W,X]+C[u,X,V,X]+C[u,W,\nabla _{X}u,X]+\right. \left. E[u,\nabla _{X}u,V,X]\right\} + \\ B\left\{ A[u,X,W,u]+C[u,X,V,u]+C[u,W,\nabla _{X}u,u]+\right. \left. E[u,\nabla _{X}u,V,u]\right\} g(X,u)+ \\ a_{2}\left\{ A[u,X,W,\nabla _{X}u]+C[u,X,V,\nabla _{X}u]+C[u,W,\nabla _{X}u,\nabla _{X}u]+\right. \\ \left. E[u,\nabla _{X}u,V,\nabla _{X}u]\right\} + \\ b_{2}\left\{ A[u,X,W,u]+C[u,X,V,u]+C[u,W,\nabla _{X}u,u]+\right. \\ \left. E[u,\nabla _{X}u,V,u]\right\} g(u,\nabla _{X}u)+ \\ a_{2}\left\{ B[u,X,W,X]+D[u,X,V,X]+D[u,W,\nabla _{X}u,X]+F[u,\nabla _{X}u,V,X]\right\} + \\ b_{2}\left\{ B[u,X,W,u]+D[u,X,V,u]+D[u,W,\nabla _{X}u,u]+F[u,\nabla _{X}u,V,u]\right\} g(X,u)+ \\ a_{1}\left\{ B[u,X,W,\nabla _{X}u]+D[u,X,V,\nabla _{X}u]+D[u,W,\nabla _{X}u,\nabla _{X}u]+\right. \\ \left. F[u,\nabla _{X}u,V,\nabla _{X}u]\right\} + \\ b_{1}\left\{ B[u,X,W,u]+D[u,X,V,u]+D[u,W,\nabla _{X}u,u]+\right. \\ \left. F[u,\nabla _{X}u,V,u]\right\} g(u,\nabla _{X}u).
\end{multline} Hence, after straightforward long computations, we obtain \begin{lemma} \label{L10}Let $u$ be a vector field on a Riemannian manifold $(M,g).$ Let $u(M)$ be the submanifold defined by $u$ in $TM$ with the metric induced from a non-degenerate $g$-natural metric $G$ on $TM.$ Then the second fundamental form $A$ satisfies \begin{multline} -G_{z}(A_{W^{h}+V^{v}}u_{\ast }(X),\ u_{\ast }(X))=G(\widetilde{\nabla } _{u_{\ast }(X)}(W^{h}+V^{v}),\ u_{\ast }(X))= \label{40} \\ a_{1}R(u,\nabla _{X}u,W,X)+a_{2}R(u,X,W,X)+ \\ Ag(X,\nabla _{X}W)+Bg(u,X)g(u,\nabla _{X}W)+Bg(u,X)g(V,X)+ \\ a_{1}g(\nabla _{X}u,\nabla _{X}V)+b_{1}g(u,\nabla _{X}u)g(V,\nabla _{X}u)+b_{1}g(u,\nabla _{X}u)g(u,\nabla _{X}V)+ \\ a_{2}g(X,\nabla _{X}V)+a_{2}g(\nabla _{X}u,\nabla _{X}W)+ \\ b_{2}g(u,X)g(u,\nabla _{X}V)+b_{2}g(u,X)g(V,\nabla _{X}u)+b_{2}g(u,\nabla _{X}u)g(V,X)+ \\ b_{2}g(u,\nabla _{X}u)g(u,\nabla _{X}W)+ \\ A^{\prime }g(u,V)g(X,X)+B^{\prime }g(u,V)g(u,X)^{2}+a_{1}^{\prime }g(u,V)g(\nabla _{X}u,\nabla _{X}u)+ \\ 2a_{2}^{\prime }g(u,V)g(X,\nabla _{X}u)+b_{1}^{\prime }g(u,V)g(u,\nabla _{X}u)^{2}+2b_{2}^{\prime }g(u,V)g(u,X)g(u,\nabla _{X}u) \end{multline} at a point $z=(p,u(p))\in TM,$ $p\in M.$ \end{lemma} If $u$ is a parallel vector field on $M,$ i.e. $\nabla _{X}u=0$ for all $X\in \mathfrak{X}(M),$ then from (\ref{20}) we get \begin{equation*} AW+Bg(u,W)u+a_{2}V+b_{2}g(u,V)u=0, \end{equation*} whence \begin{equation*} A\nabla _{X}W+Bg(u,\nabla _{X}W)u+a_{2}\nabla _{X}V+b_{2}g(u,\nabla _{X}V)u=0, \end{equation*} since $\nabla _{X}f(r^{2})=2f^{\prime }(r^{2})g(u,\nabla _{X}u)=0.$ Applying Lemma \ref{L10} we get \begin{multline*} -G_{z}(A_{W^{h}+V^{v}}u_{\ast }(X),\ u_{\ast }(X))=a_{2}g\left( R(u,X,W),\ X\right) + \\ g\left( Bg(u,X)X+A^{\prime }g(X,X)u+B^{\prime }g(u,X)^{2}u,\ V\right) .
\end{multline*} Consequently, we have \begin{proposition} Let $u$ be a parallel vector field on a Riemannian manifold $(M,g).$ If $G$ is a non-degenerate $g$-natural metric on $TM$ such that $A^{\prime }=0,$ $B=0$ and either $a_{2}=0$ or $M$ is flat, then $u(M)$ is a totally geodesic submanifold in $TM.$ \end{proposition} \begin{proposition} Let $G$ be a non-degenerate $g$-natural metric on $TM$ such that $A^{\prime }=0,$ $B=0,$ $a_{2}=b_{2}=0$ on $TM$ and $a_{1}+a_{1}^{\prime }r^{2}\neq 0$ on a dense subset of $TM.$ If $u$ is of constant length and $u(M)$ is a totally geodesic submanifold in $(TM,G)$, then $u$ is parallel. \end{proposition} \begin{proof} Since \begin{equation*} G(u_{\ast }(X),\ (F_{1}+F_{3})u^{v}-F_{2}u^{h})=Fg(u,\nabla _{X}u), \end{equation*} the vector field $\tau =(F_{1}+F_{3})u^{v}-F_{2}u^{h}$ is normal if and only if $u$ is of constant length, i.e. $g(u,\nabla _{X}u)=0.$ Thus Lemma \ref{L10} yields \begin{multline*} -G_{z}(A_{\tau }u_{\ast }(X),\ u_{\ast }(X))= \\ \left[ \left( a_{1}+a_{1}^{\prime }r^{2}\right) (F_{1}+F_{3})-a_{2}F_{2}\right] g(\nabla _{X}u,\nabla _{X}u)- \\ F_{2}\left[ a_{1}R(u,\nabla _{X}u,u,X)+a_{2}R(u,X,u,X)+Ag(X,\nabla _{X}u)\right] + \\ (F_{1}+F_{3})\left[ (B+B^{\prime }r^{2})g(u,X)^{2}+(a_{2}+a_{2}^{\prime }r^{2})g(X,\nabla _{X}u)+A^{\prime }r^{2}g(X,X)\right] . \end{multline*} Under the assumptions we have $F_{2}=0$ and all the terms involving $A^{\prime },$ $B$ and $a_{2}$ vanish, so the right-hand side reduces to $\left( a_{1}+a_{1}^{\prime }r^{2}\right) (F_{1}+F_{3})g(\nabla _{X}u,\nabla _{X}u).$ If $u(M)$ is totally geodesic, this expression vanishes; since $F_{1}+F_{3}\neq 0$ by non-degeneracy and $a_{1}+a_{1}^{\prime }r^{2}\neq 0$ on a dense subset, we get $\nabla _{X}u=0.$ This completes the proof. \end{proof} \begin{corollary} In particular, this holds for $G$ being either the Sasaki or the Cheeger-Gromoll metric.
\end{corollary} \section{Generalization} Put \begin{multline*} T_{W}(X,u)=\nabla _{X}\left( AX+a_{2}\nabla _{X}u+Bg(u,X)u+b_{2}g(u,\nabla _{X}u)u\right) + \\ a_{1}R(u,\nabla _{X}u,X)+a_{2}R(u,X,X), \end{multline*} \begin{multline*} T_{V}(X,u)=\nabla _{X}(a_{2}X+a_{1}\nabla _{X}u)+b_{2}\nabla _{X}(g(u,X))u+\nabla _{X}(b_{1}g(u,\nabla _{X}u))u- \\ \left( Bg(u,X)+b_{2}g(u,\nabla _{X}u)\right) X- \\ \left( \frac{1}{2}\nabla _{X}(b_{1})g(u,\nabla _{X}u)+A^{\prime }g(X,X)+B^{\prime }g(u,X)^{2}+\right. \\ \left. a_{1}^{\prime }g(\nabla _{X}u,\nabla _{X}u)+2a_{2}^{\prime }g(X,\nabla _{X}u)\right) u. \end{multline*} \begin{proposition} For any vector fields $X,W,V$ tangent to $(M,g)$ such that $W^{h}+V^{v}$ is normal we have \begin{equation} G_{z}(A_{W^{h}+V^{v}}u_{\ast }(X),\ u_{\ast }(X))=g(W,T_{W}(X,u))+g(V,T_{V}(X,u)). \label{60} \end{equation} \end{proposition} \begin{proof} To prove the proposition one has to differentiate (\ref{20}) with respect to the vector field $X,$ then subtract (\ref{40}) to eliminate the terms containing $\nabla _{X}W,$ $\nabla _{X}V$ and, finally, apply the identity $g(\nabla _{X}u,\nabla _{X}u)=X(g(u,\nabla _{X}u))-g(u,\nabla _{X}\nabla _{X}u).$ \end{proof} \begin{corollary} The submanifold $u(M)$ defined in $(TM,G)$ by a smooth vector field $u$ on a Riemannian manifold $M$ is totally geodesic if $T_{W}(X,u)=T_{V}(X,u)=0$ for all vector fields $X$ tangent to $(M,g).$ \end{corollary} Observe that if $u$ is parallel, then replacing in (\ref{20}) $X$ with $\nabla _{X}X$ we obtain \begin{equation*} g(AW+Bg(u,W)u+a_{2}V+b_{2}g(u,V)u,\nabla _{X}X)=0. \end{equation*} Moreover, if $u$ is a torse-forming vector field on $M$, i.e. $\nabla _{X}u=\varrho (X)u+\alpha X$ for all $X\in TM,$ then \begin{multline*} g\left[ W,(A+\alpha a_{2})X+(B+\alpha b_{2})g(u,X)u+\varrho (X)F_{2}u\right] + \\ g\left[ V,(a_{2}+\alpha a_{1})X+(b_{2}+\alpha b_{1})g(u,X)u+\varrho (X)F_{1}u\right] =0.
\end{multline*} Replacing in the last equation (or in (\ref{20})) an arbitrary $X$ with $\nabla _{X}X$ and subtracting the result from (\ref{60}), we obtain the following for particular vector fields $u.$ \begin{proposition} If $u$ is a concircular vector field on $M$, i.e. $\nabla _{X}u=\alpha X$ for all $X\in TM,$ then \begin{multline*} T_{W}(X,u)=\left[ X(A+\alpha a_{2})+\alpha (B+\alpha b_{2})g(u,X)\right] X+(a_{2}+\alpha a_{1})R(u,X,X)+ \\ \left[ X(B+\alpha b_{2})g(u,X)+\alpha (B+\alpha b_{2})g(X,X)\right] u, \end{multline*} \begin{multline*} T_{V}(X,u)=\left[ X(a_{2}+\alpha a_{1})-(B+\alpha b_{2})g(u,X)\right] X+ \\ \left[ \left( b_{1}X(\alpha )+\frac{1}{2}\alpha X(b_{1})\right) g(u,X)+\alpha (b_{2}+\alpha b_{1})g(X,X)\right] u- \\ \left[ B^{\prime }g(u,X)^{2}+(A^{\prime }+\alpha (a_{1}^{\prime }\alpha +2a_{2}^{\prime }))g(X,X)\right] u. \end{multline*} \end{proposition} For the Sasaki metric and a concircular vector field $u$ we have \begin{equation*} T_{W}(X,u)=\alpha R(u,X,X),\quad T_{V}(X,u)=0, \end{equation*} so a concircular non-parallel vector field gives rise to a totally geodesic submanifold only if $(M,g)$ is flat. On the other hand, for a given concircular vector field on a non-flat $(M,g),$ we can always construct a family of non-degenerate $g$-natural metrics on $TM$, different from the Sasaki metric, such that $u(M)$ is totally geodesic in $TM$. Given $\alpha $ and, for example, $a_{1}$ on a neighbourhood of a point $z=(p,u(p))\in TM$, it is enough to put $a_{2}=-\alpha a_{1},$ $A=\alpha ^{2}a_{1}+C,$ $C=const\neq 0,$ $b_{j}=0.$ Similarly, if $(M,g)$ is flat and $\alpha =const$, one puts $a_{2}=-\alpha a_{1}+C_{1},$ $A=\alpha ^{2}a_{1}+C_{1}\alpha +C,$ $C=const,$ $C_{1}=const,$ $Ca_{1}+C_{1}\neq 0,$ $b_{j}=0.$ \begin{proposition} If $u$ is a recurrent vector field on $M$, i.e.
$\nabla _{X}u=\varrho (X)u$ for all $X\in TM,$ then \begin{multline*} T_{W}(X,u)=F_{2}\left[ (\nabla _{X}\varrho )(X)+\varrho ^{2}\right] u+2A^{\prime }\varrho r^{2}X+2\varrho (B+B^{\prime }r^{2})g(u,X)u+ \\ 2\varrho ^{2}r^{2}(a_{2}^{\prime }+b_{2}+b_{2}^{\prime }r^{2})u+a_{2}R(u,X,X), \end{multline*} \begin{multline*} T_{V}(X,u)=F_{1}\left[ (\nabla _{X}\varrho )(X)+\varrho ^{2}\right] u+\left[ b_{1}+a_{1}^{\prime }+b_{1}^{\prime }r^{2}\right] \varrho ^{2}r^{2}u+ \\ \left[ (2a_{2}^{\prime }-b_{2})\varrho r^{2}-Bg(u,X)\right] X-\left[ \varrho (2a_{2}^{\prime }-b_{2})g(u,X)-A^{\prime }g(X,X)u-B^{\prime }g(u,X)^{2}\right] u. \end{multline*} \end{proposition} For the Sasaki metric and a recurrent vector field $u$ we have \begin{equation*} T_{W}(X,u)=0,\quad T_{V}(X,u)=\left[ \left( \nabla _{X}\varrho \right) (X)+\varrho ^{2}\right] u \end{equation*} on an arbitrary $(M,g).$ If $(M,g)$ is flat and the recurrent form $\varrho $ is parallel, then there exists a family of $g$-natural metrics on $TM$ such that $u(M)$ is totally geodesic in $TM.$ \begin{example} \begin{eqnarray*} A &=&const\neq 0,\quad B=0,\quad a_{2}=b_{2}=0, \\ a_{1}+ta_{1}^{\prime }+t(2b_{1}+tb_{1}^{\prime }) &=&0,\quad F_{1}=a_{1}+tb_{1}\neq 0. \end{eqnarray*} \end{example} Stanis\l aw Ewert-Krzemieniewski, West Pomeranian University of Technology Szczecin, School of Mathematics, Al. Piast\'{o}w 17, 70-310 Szczecin, Poland. E-mail: [email protected] \end{document}
\begin{document} \date{} \title{Face reduction and the immobile indices approaches to regularization of linear Copositive Programming problems} \author{Kostyukova O.I.\thanks{Institute of Mathematics, National Academy of Sciences of Belarus, Surganov str. 11, 220072, Minsk, Belarus ({\tt [email protected]}).} \and Tchemisova T.V.\thanks{Mathematical Department, University of Aveiro, Campus Universitario Santiago, 3810-193, Aveiro, Portugal ({\tt [email protected]}).}} \maketitle \begin{abstract} The paper is devoted to the regularization of linear Copositive Programming problems which consists of transforming a problem to an equivalent form, where the Slater condition is satisfied and the strong duality holds. We describe here two regularization algorithms based on the concept of immobile indices and an understanding of the important role these indices play in the feasible sets' characterization. These algorithms are compared to some regularization procedures developed for a more general case of convex problems and based on a facial reduction approach. We show that the immobile-index-based approach combined with the specifics of copositive problems allows us to construct more explicit and detailed regularization algorithms for linear Copositive Programming problems than those already available. \end{abstract} \textbf{Key words.} Linear copositive programming, strong duality, normalized immobile index set, regularization, minimal cone, facial reduction, constraint qualifications \\ \textbf{AMS subject classification.} 90C25, 90C30, 90C34 \section{Introduction}\label{Introduction} Conic optimization is a subfield of convex optimization that studies the problems of minimizing a convex function over the intersection of an affine subspace and a convex cone.
For a gentle introduction to conic optimization and a survey of its applications in Operations Research and related areas, we refer interested readers to \cite{Let} and the references therein. Copositive Programming (CoP) problems form a special class of conic problems and can be considered as optimization over the convex cone of so-called {\it copositive matrices} (i.e. matrices which are positive semidefinite on the non-negative orthant). Copositive models arise in many important applications, including $\mathcal{NP}$-hard problems. For references on the motivation and applications of CoP see, e.g. \cite{Bomze2012, deKlerk,Dur2010}. In {\it linear} CoP, the objective function is linear and the constraints are formulated with the help of linear matrix functions. Linear copositive problems are closely related to linear {\it Semi-Infinite Programming} (SIP) and {\it Semidefinite Programming} (SDP) problems. Copositive and semidefinite problems are particular cases of SIP problems, but CoP deals with more challenging and less studied problems than SDP. The literature on the theory and methods of SIP, CoP, and SDP is quite extensive. We refer the interested readers to \cite{AH2013,HandbookSDPnew,Bomze2012, Dur2010,Weber2,W-handbook} and the references in these works. In convex and conic optimization, optimality conditions and duality results are usually formulated under certain regularity conditions, so-called constraint qualifications (CQ) (see, e.g. \cite{HandbookSDPnew,Kort,solodov,W-handbook}). Such conditions should guarantee the fulfillment of the Karush-Kuhn-Tucker (KKT)-type optimality conditions and the {\it strong duality} property, consisting in the fact that the optimal values of the primal problem and the corresponding Lagrangian dual one are equal and the dual problem attains its maximum. Strong duality is the cornerstone of convex optimization, playing a particularly important role in the stability of numerical methods.
Unfortunately, even in convex optimization, many problems cannot be classified as regular (i.e. satisfying some regularity conditions such as, for example, strict feasibility). In \cite{Dry-Wol}, we read: ``...{\it new optimization modeling techniques and convex relaxations for hard nonconvex problems have shown that the loss of strict feasibility is a more pronounced phenomenon than has previously been realized}''. This phenomenon can occur because of either the poor choice of functions that describe feasible sets or the degeneration of the feasible sets themselves. According to \cite{Tuncel}, sometimes the loss of a certain CQ ``...{\it is a modeling issue rather than inherent to the problem instance...}'' which ``... {\it justifies the pleasing paradigm: efficient modeling provides for a stable program}''. Thus, the idea of a {\it regularization} appears quite naturally, aimed at obtaining an equivalent and more convenient reformulation of the problem with some required properties, one of which is that the regularized problem must satisfy the generalized Slater condition. The first papers on regularization of abstract convex problems (the regularization procedures are called {\it pre-processing} there) appeared in the 1980s and were followed by various publications for special classes of conic problems (see, e.g. \cite{BW4,BW1}). Nevertheless, as Drusvyatskiy and Wolkowicz wrote in \cite{Dry-Wol}, published in 2017, for conic optimization in general, research in the field of regularization algorithms is still in its infancy. At the same time, the authors of \cite{Dry-Wol} confirm that to make a regularization algorithm viable, it is necessary to actively explore the structure of the problem since for some specific applications of conic optimization, the rich basic structure makes regularization quite possible and leads to significantly simplified models and enhanced algorithms.
Several approaches to the regularization of conic optimization problems are proposed in the literature. In \cite{BW4, BW1}, the concept of the {\it minimal cone} of constraints was used by Borwein and Wolkowicz for the regularization of abstract convex and conic convex problems for which any CQ fails. The algorithm proposed there for the description of the minimal cone is based on a successive reduction of the cone's faces and was named by the authors the {\it Facial Reduction Algorithm (FRA)}. A different approach was proposed by Luo, Sturm, and Zhang (see \cite{Luo} and the references therein) which is called the {\it dual regularization} or {\it conic expansion}. This approach tries to close a {\it duality gap} (the difference between the primal and dual optimal values) of the regularized problems by expanding the dual constraints' cone. In \cite{Waki}, Waki and Muramatsu applied the facial reduction approach to a conic optimization problem in such a way that each primal reduced cone is dual to the cone generated by the conic expansion approach. The facial reduction approach has been successfully applied to SDP and second-order cone programming problems, as well as to certain classes of optimization problems over symmetric (i.e. self-dual and homogeneous) and nice cones (see, e.g. \cite{PP, Perm,Terlaki,Ramana-W}). At the same time, the question of an effective constructive application of this approach to other classes of problems remains open. This is because the known FRAs are more {\it conceptual} than practical. In this paper, based on the results from \cite{KT-new,KT-JOTA, KT-SetValued}, we develop a different approach to the regularization of linear CoP problems. This approach is based on the concept of {\it immobile indices}, i.e. indices of the constraints that are active for all feasible solutions.
The purpose of the paper is to \begin{enumerate} \item[a)] describe in detail a finite algorithm for the regularization of linear CoP problems that is based on the concept of immobile indices but does not require any additional information about them; \item[b)] compare two approaches to the regularization of linear CoP problems, one based on facial reduction and the other based on the concept of immobile indices, together with the corresponding regularized problems constructed using these approaches. \end{enumerate} To the best of our knowledge, in CoP there has never been an attempt to develop detailed and easy-to-implement algorithms based on the minimal cone representation (see, e.g. the FRA in \cite{BW4,BW1} and the modified FRA in \cite{Waki}). Nor do we have any information about other attempts to describe constructive regularization procedures for linear copositive problems. The regularization algorithms presented in the paper are new, original, and timely due to the growing number of eminent applications of CoP. The paper is organized as follows. Section \ref{Introduction} is the introduction. Section \ref{optimality} contains equivalent formulations of the linear CoP problem and the basic definitions. In section \ref{sec5}, we describe different ways of regularizing copositive problems. In subsection \ref{subsec1}, we describe the minimal face regularization from \cite{BW4, BW1} when it is applied to linear CoP problems; in \ref{subsec2}, we briefly describe the one-step regularization proposed in \cite{KT-SetValued}, based on the concept of immobile indices, and compare the regularized problems obtained in subsections \ref{subsec1} and \ref{subsec2}.
Section \ref{Sec6} contains iterative algorithms for the regularization of linear copositive problems: the Waki and Muramatsu facial reduction algorithm is described in \ref{subsec5}, a new regularization algorithm based on the immobile index set, together with its compressed modification, is described in \ref{subsec6}, followed by a short discussion on the iterative algorithms. Section \ref{Concl} contains some conclusions. \section{Linear copositive programming problem: equivalent \\ formulations and basic definitions}\label{optimality} Given an integer $p>1$, denote by $\mathbb R^{ p}_+$ the set of all $p$-vectors with non-negative components, by ${\mathcal S}(p)$ and $\mathcal S_+(p)$ the space of real symmetric $p\times p$ matrices and the cone of symmetric positive semidefinite $p\times p$ matrices, respectively, and let $\mathcal{COP}^{p}$ stand for the cone of symmetric copositive $p\times p$ matrices: $$\mathcal{COP}^p:=\{D\in {\mathcal S}(p):t^{\top}Dt\geq 0 \ \forall t \in \mathbb R^p_+\}.$$ The space $\mathcal S(p)$ is considered here as a vector space with the trace inner product $$A\bullet B:={\rm trace}\, (AB).$$ Consider a linear copositive programming problem in the form \begin{equation} \displaystyle\min_{x \in \mathbb R^n } \ c^{\top}x, \;\;\; \mbox{s.t. } {\mathcal A}(x) \in \mathcal{COP}^p,\label{SDP-2}\end{equation} where $x=(x_1,...,x_n)^{\top}$ is the vector of decision variables. The data of the problem are given by the vector $c \in \mathbb R^n$ and the constraints matrix function $\mathcal A(x)$ defined in the form \begin{equation}\label{A} \mathcal A(x):=\displaystyle\sum_{i=1}^{n}A_ix_i+A_0,\end{equation} with given matrices $A_i\in \mathcal S(p), i=0,1,\dots,n$. It is well known (see e.g. \cite{AH2013}) that the copositive problem (\ref{SDP-2}) is equivalent to the following convex SIP problem: \begin{equation} \displaystyle\min_{x \in \mathbb R^n } c^{\top}x, \;\;\; \mbox{s.t. 
} t^{\top}{\mathcal A}(x)t \geq 0 \;\; \forall t \in T,\label{SIP}\end{equation} with a $p$-dimensional compact index set in the form of a simplex \begin{equation}\label{SetT}T:=\{t \in \mathbb R^p_+:\mathbf{e}^{\top}t=1\},\end{equation} where $\mathbf{e}=(1,1,...,1)^{\top}\in \mathbb R^p. $ Denote by $X$ the feasible set of the equivalent problems (\ref{SDP-2}) and (\ref{SIP}): \begin{equation} X:=\{x\in \mathbb R^n: {\mathcal A}(x)\in {\cal COP}^p\} =\{x\in \mathbb R^n:t^{\top}{\mathcal A}(x)t\geq 0 \;\; \forall t \in T\}.\label{setX}\end{equation} In what follows, we will suppose that $X\not=\emptyset.$ Evidently, the set $X$ is convex. \begin{remark}\label{R-1} Since $X\not=\emptyset$, without loss of generality we may assume that $A_0\in {\cal COP}^p. $ Indeed, by fixing a feasible solution $y\in X$ and substituting the variable $x$ by a new variable $z:=x-y$, we can replace the original problem (\ref{SDP-2}) by the following one in terms of $z$: \begin{equation*} \displaystyle\min_{z \in \mathbb R^n} \ c^{\top}z, \;\;\; \mbox{s.t. } \bar {\mathcal A}(z) \in \mathcal{COP}^p,\end{equation*} with $\bar {\mathcal A}(z):=\displaystyle\sum_{i=1}^{n}A_iz_i+\bar A_0,$ $ \bar A_0:= \mathcal A(y)\in {\cal COP}^p.$ \end{remark} According to the commonly used definition, the constraints of the copositive problem (\ref{SDP-2}) satisfy {\it the Slater condition} if \begin{equation}\label{Slait1} \exists \; \bar{x} \in \mathbb R^n \; \mbox{ such that } \; {\mathcal A}(\bar x) \in {\rm int }\, \mathcal{COP}^p= \{D\in \mathcal{S}(p): t^{\top}Dt>0 \; \forall t \in \mathbb R^p_+,\; t \not=\bf{0}\}. \end{equation} Here ${\rm int }\,\mathcal B$ denotes the interior of a set $\mathcal B$.
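The SIP reformulation above suggests a simple numerical probe of copositivity: sampling the simplex $T$ can certify that a matrix is not copositive (a negative sampled value of $t^{\top}Dt$), while a non-negative sampled minimum is only evidence, not a proof. A Python sketch with toy matrices of our own choosing:

```python
import random

random.seed(0)

def min_quadratic_on_simplex(D, samples=20000):
    """Approximate the min of t^T D t over T = {t >= 0, sum(t) = 1} by sampling.

    A negative value proves that D is not copositive; a non-negative sampled
    minimum is only evidence of copositivity, not a certificate.
    """
    p = len(D)
    pts = [[1.0 if i == k else 0.0 for i in range(p)] for k in range(p)]  # vertices e_k
    for _ in range(samples):
        w = [random.expovariate(1.0) for _ in range(p)]
        s = sum(w)
        pts.append([wi / s for wi in w])
    return min(sum(D[i][j] * t[i] * t[j] for i in range(p) for j in range(p))
               for t in pts)

D_cop = [[1.0, -1.0], [-1.0, 1.0]]   # copositive: t^T D t = (t1 - t2)^2 >= 0
D_not = [[1.0, -2.0], [-2.0, 1.0]]   # not copositive: value -1/2 at t = (1/2, 1/2)
print(min_quadratic_on_simplex(D_cop) >= -1e-9)   # True
print(min_quadratic_on_simplex(D_not) < 0.0)      # True
```

Exact copositivity testing is co-$\mathcal{NP}$-complete in general, so such a sampling check is only a heuristic illustration of the definition.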
Following \cite{KT-JOTA, KT-SetValued}, let us define the {\it set of normalized immobile indices} $ T_{im}$ in problem (\ref{SDP-2}): \begin{equation}\label{Tnormalized} T_{im}:=\{t \in T : t^{\top} \mathcal A (x)t=0 \ \ \forall x\in {X}\} .\end{equation} In what follows, the elements of the set $T_{im}$ are called immobile indices. The following lemma follows from Lemma 1 and Proposition 1 in \cite{KT-SetValued}. \begin{lemma} \label{LOK} Given the linear copositive problem (\ref{SDP-2}), (i) the Slater condition (\ref{Slait1}) is equivalent to the emptiness of the set $T_{im},$ (ii) the normalized immobile index set $ T_{im}$ is either empty or can be represented as a union of a finite number of convex closed bounded polyhedra. \end{lemma} For a vector $t=(t_k,k\in P)^{\top}\in \mathbb R^p_+$ with $ P:=\{1,2,...,p\}$, define the sets $$P_+(t):=\{k\in P:t_k>0\}, \ P_0(t):=P\setminus P_+(t).$$ Given a set $\mathcal B$ and a point $l=(l_k,k \in P)^{\top}$ in $\mathbb R^p$, denote by $\rho(l,\mathcal B)$ the distance between this point and the set, $\ \rho(l,\mathcal B):=\min\limits_{\tau \in \mathcal B}\sum\limits_{k\in P}|l_k-\tau_k|, $ and by ${\rm conv } \mathcal B$ the convex hull of the set $\mathcal B$. Suppose that in problem (\ref{SDP-2}), the normalized immobile index set $T_{im}$ is non-empty.
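To make the notion of immobile indices concrete, consider a toy instance of problem (\ref{SDP-2}) of our own construction (not taken from the paper): $n=1$, $p=2$, $\mathcal A(x)={\rm diag}(x,0)$, so that $t^{\top}\mathcal A(x)t=xt_{1}^{2}$, the feasible set is $X=\{x\geq 0\}$ (checked by hand), the Slater condition (\ref{Slait1}) fails, and $T_{im}=\{(0,1)^{\top}\}$, in line with part (i) of the lemma above. A small numerical sketch, in which feasible points are sampled, so it illustrates rather than proves the computation:

```python
# Toy copositive instance: A(x) = x*A1 + A0 with A1 = diag(1, 0) and A0 = 0,
# so t^T A(x) t = x * t1^2 and the feasible set is X = {x >= 0} (by hand).
def A(x):
    return [[x, 0.0], [0.0, 0.0]]

def q(D, t):
    """Quadratic form t^T D t."""
    return sum(D[i][j] * t[i] * t[j] for i in range(2) for j in range(2))

grid = [(k / 100.0, 1.0 - k / 100.0) for k in range(101)]  # grid on the simplex T
xs = [0.0, 0.5, 1.0, 5.0]                                  # sampled feasible points
immobile = [t for t in grid if all(abs(q(A(x), t)) < 1e-12 for x in xs)]
print(immobile)   # [(0.0, 1.0)]: the single immobile index t = (0, 1)
```

Here the grid and the sampled feasible points are illustrative choices; any positive feasible $x$ already rules out every index with $t_{1}>0$.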
Consider a finite non-empty subset of $T_{im}$: \begin{equation} V=\{\tau(i)\in T_{im}, \; i \in I\},\; 0<|I|<\infty.\label{N0**}\end{equation} For this set, define the following number and sets: \begin{equation} \sigma(V):=\min\{\tau_k(i), \ \ k \in P_+(\tau(i)) , \; i \in I\}>0,\ \ \qquad\qquad\qquad\qquad\label{ep}\end{equation} \begin{equation} \label{omega}\Omega(V):=\{t \in T:\rho(t,{\rm conv} V)\geq \sigma(V)\},\qquad\qquad\qquad\qquad\qquad\qquad\end{equation} \begin{equation} \mathcal X(V):=\{x \in \mathbb R^n: {\mathcal A}(x) \tau(i)\geq 0 \; \forall i \in {I}; \;\; t^\top {\mathcal A}(x)t\geq 0 \; \forall t \in \Omega(V)\}. \label{calX}\end{equation} In \cite{KT-new}, the following theorem is proved. \begin{theorem}\label{L25-02-1} Consider problem (\ref{SDP-2}) with the feasible set $X$. For any subset (\ref{N0**}) of the set of normalized immobile indices of this problem, the following equality holds true: $$ X={\mathcal X}(V),$$ where the set ${\mathcal X}(V)$ is defined in (\ref{calX}). \end{theorem} \section{Regularization of copositive problems}\label{sec5} In this section, we first recall a known regularization approach developed in \cite{BW4, BW1} for conic optimization problems and based on the concept of the minimal face. We briefly describe how this approach can be applied to linear CoP problems. Then, for the copositive problem (\ref{SDP-2}), we present another regularization approach based on the concept of immobile indices and compare the regularized problems obtained using the two considered approaches. \subsection{Minimal face regularization}\label{subsec1} Let us first recall the necessary terms and notions. By definition, a convex subset $\mathbf{F}$ of the cone ${\cal COP}^p$ is its {\it face} if for any $x\in {\cal COP}^p$, $y\in {\cal COP}^p$, the inclusion $x + y\in \mathbf{F}$ implies $x\in \mathbf{F},$ $y\in \mathbf{F}.$ It is evident that any face of the cone ${\cal COP}^p$ is also a cone.
Given the copositive problem (\ref{SDP-2}) with the feasible set $X$ presented in (\ref{setX}), let $\mathbf{F}_{min}$ be the smallest (by inclusion) face of ${\cal COP}^p$ containing the set $\cal D $ defined in terms of the constraints of this problem as follows: \begin{equation} {\cal D}:=\{ \mathcal A(x), \ x \in X\}.\label{D}\end{equation} In what follows, the face $\mathbf{F}_{min}$ will be called {\it the minimal face of the optimization problem } (\ref{SDP-2}). Generally speaking, for the copositive problem (\ref{SDP-2}), the approach suggested in \cite{BW4, BW1} is to replace the constraint ${\cal A}(x)\in {\cal COP}^p$ with an equivalent constraint ${\cal A}(x)\in \mathbf F_{min}$. The resulting regularized problem takes the form \begin{equation} \min_{ x\in \mathbb R^n }\ c^\top x, \;\; \mbox{ s.t. } {\mathcal A}(x)\in \mathbf{F}_{min}.\label{25P1}\end{equation} The dual problem to (\ref{25P1}) can be written in the form \begin{equation} \max_{ U \in \mathcal S(p) }\; -A_0\bullet U,\; \mbox{ s.t. } A_j\bullet U=c_j \; \forall j=1,...,n;\; U\in \mathbf{F}^*_{min},\label{25D1}\end{equation} where $\mathbf{F}^*_{min}$ is the dual cone to the cone $ \mathbf{F}_{min}.$ It is proved in \cite{BW4, BW1} that the constraints of problem (\ref{25P1}) satisfy the {\it generalized Slater condition}: there exists $\bar x\in X$ such that ${\mathcal A}(\bar x)\in {\rm relint} \; {\mathbf{F}}_{min}$, and hence the duality gap between the dual problems (\ref{25P1}) and (\ref{25D1}) vanishes. Here ${\rm relint }\,\mathcal B$ denotes the relative interior of a set $\mathcal B$. Unfortunately, there is no available information about how to explicitly construct the cones $ \mathbf{F}_{min}$ and $\mathbf{F}^*_{min}$ in the general case and, in particular, in the case of copositive problems.
\subsection{One-step regularization based on the concept of immobile indices}\label{subsec2} In our paper \cite{KT-SetValued}, for the copositive problem (\ref{SDP-2}), we obtained a regularized dual problem which is different from (\ref{25D1}). The construction of this dual is based on the concept of immobile indices and can be considered as a {\it one-step regularization} since it consists of a single step. Consider the copositive problem (\ref{SDP-2}). Let $T_{im}$ be the normalized set of immobile indices of this problem defined in (\ref{Tnormalized}). If $T_{im}=\emptyset$, then problem (\ref{SDP-2}) satisfies the Slater condition, which means that it is already regular and no regularization is required. Now, suppose that $T_{im}\not =\emptyset$. In this case, the Slater condition is not satisfied and the problem is not regular. Let us describe how one can convert problem (\ref{SDP-2}) into a regularized one. Consider the set ${\rm conv}\, T_{im}$ and the set $W$ of all vertices of ${\rm conv}\, T_{im}$: \begin{equation} W:=\{t(j), \; j \in J\}, \; 0<|J|<\infty.\label{ver}\end{equation} Suppose that the elements $t(j), \; j \in J,$ of the set $W$ are known. Then we can regularize problem (\ref{SDP-2}) in just \textbf{one step}.
In fact, it follows from Theorem \ref{L25-02-1} and Lemmas 2 and 3 in \cite{KT-SetValued} that the set $X$ of feasible solutions of the original problem (\ref{SDP-2}) coincides with the set of feasible solutions of the following system: $$ t^\top{\mathcal A}(x)t\geq 0\; \forall t \in \Omega(W);\;\;\; \mathcal A(x)t(i)\geq 0 \; \forall i \in J,$$ and the following condition is satisfied: \begin{equation} \exists \; \bar{x} \in X \; \mbox{ such that } \; t^\top{\mathcal A}(\bar x)t>0\;\; \forall t \in \Omega(W).\label{S-type}\end{equation} Here the set $\Omega(W)$ is defined by the rules (\ref{omega}) with $V=W.$ Consequently, the original copositive problem (\ref{SDP-2}) is equivalent to the following SIP problem: \begin{eqnarray} & \min\limits_{x\in \mathbb R^n} \; c^\top x,\qquad \label{reg-1}\\ &\mbox{s.t. }\;\; t^\top{\mathcal A}(x)t\geq 0\ \forall t \in \Omega(W),\label{reg-2}\\ &{\mathcal A}(x)t(i)\geq 0\ \forall i \in J.\label{reg-3}\end{eqnarray} Problem (\ref{reg-1})-(\ref{reg-3}) can be considered as a {\it regularized} primal problem since \begin{itemize} \item it possesses a finite number of linear inequality constraints (\ref{reg-3}), \item the first group of constraints, (\ref{reg-2}), satisfies the Slater type condition (\ref{S-type}), \item the set $\Omega(W)$ is compact. \end{itemize} Let us stress that in problem (\ref{reg-1})-(\ref{reg-3}), the infinite index set $\Omega(W)$ is obtained by removing from the original index set $T$ the set $T_{im}$ together with the $\sigma(W)$-neighborhood of its convex hull. Note here that the set $\Omega(W)$ \begin{itemize} \item[(a)] is explicitly constructed by the rules (\ref{ep}), (\ref{omega}), using the finite set $W=\{t(j), j \in J\}$ of vertices of ${\rm conv}\, T_{im}$, \item[(b)] does not contain the set ${\rm conv}\,T_{im}$, \item[(c)] may be rather small. \end{itemize} All these properties may be useful for numerically solving problem (\ref{reg-1})-(\ref{reg-3}).
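To make the role of the Slater type condition (\ref{S-type}) concrete, one can check it numerically for a given candidate $\bar x$ by sampling the simplex $T$ and discarding points close to $W$. The sketch below is ours and deliberately simplified: it approximates the distance to ${\rm conv}\, W$ by the distance to the finite set $W$, and certifies positivity only on the sampled points.

```python
import numpy as np

def min_quadratic_on_sample(A_bar, W, sigma, n_samples=2000, seed=0):
    """Estimate min_{t in Omega(W)} t^T A_bar t by sampling the simplex
    T = {t >= 0, sum t = 1} and discarding points within distance sigma
    of the vertex set W.  A_bar stands for A(x_bar) at a candidate x_bar;
    the distance to conv(W) is approximated by the distance to W itself."""
    rng = np.random.default_rng(seed)
    p = A_bar.shape[0]
    T_pts = rng.dirichlet(np.ones(p), size=n_samples)  # uniform on the simplex
    vals = []
    for t in T_pts:
        if W.size and np.min(np.linalg.norm(W - t, axis=1)) < sigma:
            continue  # too close to the immobile-index set: not in Omega(W)
        vals.append(t @ A_bar @ t)
    return min(vals) if vals else None

# Example: A_bar = I on R^3, W = {e_1}; away from e_1 the quadratic
# form t^T t stays positive on the simplex, so the estimate is > 0.
A_bar = np.eye(3)
W = np.array([[1.0, 0.0, 0.0]])
m = min_quadratic_on_sample(A_bar, W, sigma=0.1)
```

A positive sampled minimum is of course only evidence, not a proof, that (\ref{S-type}) holds for $\bar x$.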
It is evident that problem (\ref{reg-1})-(\ref{reg-3}) can be written in the equivalent conic form \begin{equation} \min_{ x\in \mathbb R^n }\ c^\top x,\;\; \mbox{ s.t. } {\mathcal A}(x)\in {\cal K}_0,\label{25P2-2}\end{equation} where $ \ {\mathcal K}_0:=\{D\in {\cal S}(p):\ t^\top Dt\geq 0\;\;\forall t\in \Omega(W); \ Dt(j)\geq 0 \; \forall j \in J \}.$ It can be shown that ${\cal K}_0\subset {\cal COP}^p$. The dual problem to (\ref{25P2-2}) is as follows: \begin{equation} \max_{ U \in \mathcal S(p)}\; -A_0\bullet U,\; \mbox{ s.t. } A_j\bullet U=c_j\; \forall j=1,...,n;\; U\in {\mathcal K}^*_{0}.\label{25D2}\end{equation} In the problem above, ${\mathcal K}^*_{0}$ is the dual cone to ${\mathcal K}_0$ and has the form \begin{equation} {\mathcal K}^*_{0}={\rm cl} \{D \in {\cal S}(p): \ D\in{\cal CP}(W)\oplus {\cal P}^*\},\label{K**}\end{equation} where \begin{eqnarray}& {\cal CP}(W):={\rm conv} \{tt^{\top}:t \in \Omega(W)\},\;\nonumber\\ & \ {\cal P}^*:=\{D \in {\cal S}(p): \ D= \sum\limits_{j\in J}(\lambda(j)(t(j))^\top+t(j)(\lambda(j))^\top), \ \lambda(j)\geq 0\; \forall j \in J\}.\nonumber\end{eqnarray} Here and in what follows, for given sets ${\mathcal B}$ and ${\mathcal G}$, ${\rm cl}\,{\mathcal B}$ denotes the closure of the set ${\mathcal B}$ and ${\mathcal B}\oplus {\mathcal G}$ stands for the Minkowski sum of these two sets. Notice that for the pair of dual conic problems (\ref{25P2-2}) and (\ref{25D2}), the duality gap is zero.
As was shown in \cite{KT-SetValued}, the cone (\ref{K**}) in problem (\ref{25D2}) can be replaced by the following one (which has a more explicit form since it does not contain the closure operator): $${\mathbf{{K}}_0^*}:= \{D\in{\mathcal S}(p): \ D\in {\cal CP}^{p}\oplus {\cal P}^*\},$$ where $\mathcal{CP}^{p}$ denotes the set of completely positive matrices: \begin{equation} {{\cal CP}^{p}}:={\rm conv} \{tt^{\top}:t \in \mathbb R_+^{p}\},\label{CP}\end{equation} and there is no duality gap for problem (\ref{25P2-2}) and its dual problem in the form (\ref{25D2}) with ${\mathcal K}^*_{0}$ replaced by ${\mathbf{{K}}_0^*}$. Note that the cones ${\mathcal K}_0$ and ${\mathbf{{K}}_0^*}$ are explicitly described in terms of the indices (\ref{ver}), and this is an advantage of the approach presented here over the one described in subsection \ref{subsec1}. The only drawback of the regularization procedure described here is the following: \begin{itemize} \item [] {\it to apply the one-step regularization, one needs to know the finite number of indices (\ref{ver}) which are the vertices of the set ${\rm conv}\, T_{im}.$} \end{itemize} It is easy to see that the regularized primal problem (\ref{25P2-2}) can be modified as follows: \begin{equation} \min\limits_{x\in \mathbb R^n} \; c^\top x,\;\; \mbox{ s.t. } {\mathcal A}(x)\in \overline {\mathcal K}_0,\label{R-2}\end{equation} where \begin{equation*}\begin{split} \overline {\mathcal K}_0=\{D\in S(p): \ & t^\top Dt\geq 0\;\; \forall t \in \Omega(W), \\ &e^\top_kDt(j)=0 \; \forall k \in P_+( t(j) ),\;\; e^\top_k Dt(j)\geq 0 \; \forall k \in P\setminus P_+( t(j) ),\ \forall j \in J\},\label{barK*}\end{split}\end{equation*} and $\{e_k, \; k \in P\}$ is the standard (canonical) basis of the vector space $\mathbb R^p$. It is evident that $\overline {\mathcal K}_0\subset {\mathcal K}_0 $ and, as mentioned above, ${\cal K}_0\subset {\cal COP}^{p}$.
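By the definition (\ref{CP}), any matrix of the form $\sum_i t(i)t(i)^\top$ with $t(i)\geq 0$ belongs to ${\cal CP}^p$. The small sketch below (the helper is ours, purely for illustration) builds such a matrix and checks the standard necessary condition of double nonnegativity: a completely positive matrix is both entrywise nonnegative and positive semidefinite.

```python
import numpy as np

def cp_matrix(factors):
    """Build D = sum_i t(i) t(i)^T from nonnegative vectors t(i); by the
    definition (CP), such D lies in the completely positive cone CP^p."""
    factors = np.asarray(factors, dtype=float)
    assert (factors >= 0).all(), "CP factors must be entrywise nonnegative"
    return factors.T @ factors  # rows of `factors` are the vectors t(i)

# Double nonnegativity is necessary (and, for p <= 4, also sufficient)
# for complete positivity.
D = cp_matrix([[1.0, 2.0, 0.0], [0.0, 1.0, 1.0]])
eigvals = np.linalg.eigvalsh(D)
```

Deciding membership in ${\cal CP}^p$ for general $p$ is, like copositivity, NP-hard, which again explains why the closure-free description of the dual cone above is valuable.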
Hence $\overline {\cal K}_0\subset {\cal COP}^p.$ To show that these regularizations are themselves deeply connected, let us give an explicit description of the minimal face $\mathbf{F}_{min}$ in terms of the vertices of the set ${\rm conv}\, T_{im}$ and the index sets $M(j),\ j \in J,$ defined as: \begin{equation} M(j):=\{k \in P:e^\top_k{\cal A}(x)t(j)=0\;\; \forall x \in X\}, \ j \in J.\label{N*}\end{equation} The following theorem can be proved (see \cite{KT-new}). \begin{theorem}\label{T3} Given the copositive problem (\ref{SDP-2}), let $\{t(j),j\in J\}$ be the (finite) set of all vertices of the set ${\rm conv}\, T_{im}$. Then the minimal face ${\mathbf{F}}_{min}$ of this problem can be described in two equivalent forms: \begin{equation*}\begin{split}\mathbf{F}_{min}=K_{min}(1):=\{D\in {\cal COP}^p: & \ e^\top_kDt(j)=0 \ \forall k \in M(j), \; \forall j \in J\}, \ \mbox{and}\\ \mathbf{F}_{min}={K}_{min}(2):=\{D\in {\cal COP}^p: &\ e^\top_kDt(j)=0 \ \forall k \in M(j),\\ &\ e^\top_kDt(j)\geq 0 \ \forall k \in P\setminus M(j), \forall j \in J\}.\end{split}\end{equation*} \end{theorem} Now, having described the minimal face $\mathbf{F}_{min}$ via immobile indices, we can compare the regularized problems (\ref{25P1}), (\ref{25P2-2}), and (\ref{R-2}) in more detail. The regularized problem (\ref{25P1}) is formulated using the facial reduction approach to the copositive problem (\ref{SDP-2}), while the regularized problems (\ref{25P2-2}) and (\ref{R-2}) are obtained using the immobile indices of this problem. The difference between these three problems is that in problem (\ref{25P1}), the constraint set is determined by the minimal face $\mathbf{F}_{min}$, while the constraints of problem (\ref{25P2-2}) are formulated with the help of the cone $\mathcal K_0$, and the constraints of problem (\ref{R-2}) use the cone $\overline {\mathcal K}_0$.
It should be noticed that the minimal face $\mathbf{F}_{min}$ and the cones $\mathcal K_0$ and $\overline {\mathcal K}_0$ satisfy the inclusions $$\mathbf{F}_{min}\subset \overline {\mathcal K}_{0}\subset {\mathcal K}_{0}.$$ At the same time, the cones $\mathbf{F}_{min}$ and $\overline {\mathcal K}_{0}$ are faces of the cone of copositive matrices ${\cal COP}^{p}$, while the cone ${\mathcal K}_{0}$, in general, is not. One can show that $\overline {\mathcal K}_{0}$ is an exposed face, while the face $\mathbf{F}_{min}$, in general, is not. For each of the conic problems mentioned above, we face certain challenges connected with the {\it concrete construction} of the respective cones. Thus, for example, for the copositive problem (\ref{SDP-2}), the following difficulties should be mentioned: \begin{itemize} \item to define the cones ${\mathcal K}_0$ and $\overline {\mathcal K}_0$, the elements $t(j), \; j \in J,$ of the finite set of indices (\ref{ver}) should be known; \item as far as we know, there are no explicit procedures for constructing the minimal face $\mathbf{F}_{min}$ and its dual cone $\mathbf{F}^*_{min}$. \end{itemize} Theorem \ref{T3} shows how the minimal face $\mathbf{F}_{min}$ can be represented in the form of the cones ${K}_{min}(1)$ and $K_{min}(2)$ via immobile indices. Notice that to construct these cones, one has to find not only the set of indices (\ref{ver}) but also the corresponding sets $M(j), j \in J$, defined in (\ref{N*}). As mentioned above, regularity is an important property of optimization problems. As a rule, the regularity of copositive problems is characterized by the Slater condition. In this regard, it is important to note that the regularized problem (\ref{25P1}) satisfies the {\it generalized} Slater condition, while the regularized problems (\ref{25P2-2}) and (\ref{R-2}) obtained here satisfy the Slater type condition (\ref{S-type}).
This difference can be important for further study of linear CoP problems, as well as for the development of stable numerical methods for them. \section{Iterative algorithms for regularization of linear copositive problems}\label{Sec6} In section \ref{sec5}, we considered general schemes of two theoretical methods that allowed us to obtain regularizations of the linear copositive problem (\ref{SDP-2}). In each of these schemes, we meet some difficulties associated with explicit representations of the respective ``regularized'' feasible cones and their duals. In this section, we consider and compare two different regularization approaches aimed at overcoming these difficulties using algorithmic procedures. \subsection{Waki and Muramatsu's facial reduction algorithm}\label{subsec5} In \cite{Waki}, a regularization algorithm for linear conic problems was proposed by Waki and Muramatsu. This algorithm can be considered as the Facial Reduction Algorithm (FRA) from \cite{BW4, BW1} applied to linear conic problems in finite-dimensional spaces. Let us describe the algorithm from \cite{Waki} for the linear copositive problem (\ref{SDP-2}) with the matrix constraint function $\mathcal A(x)$ defined in (\ref{A}). Recall that here we assume that problem (\ref{SDP-2}) is feasible. Then, according to Remark \ref{R-1}, we can assume that $A_0\in {\cal COP}^{p}$. Denote $${\rm Ker}\,\mathcal A:=\{D\in S(p): A_j\bullet D=0 \ \forall j=0,1,...,n\}.$$ As above, let ${\mathcal F}^*$ denote the dual cone of a given cone ${\mathcal F}$. For a given feasible copositive problem (\ref{SDP-2}), starting with ${\cal COP}^{p}$, Waki and Muramatsu's algorithm repeatedly finds smaller faces of ${\cal COP}^{p}$ until it stops with the minimal face $\mathbf{F}_{min}$.
{\bf Waki and Muramatsu's FRA for the copositive problem (\ref{SDP-2})} $\qquad$ {\bf Step 1:} Set $i:=0$ and ${\mathcal F}_0:={\cal COP}^{p}.$ $\qquad$ {\bf Step 2:} If ${\rm Ker}\,\mathcal A\cap {\mathcal F}^*_i\subset {\rm span}\{Y_1,...,Y_i\},$ then STOP: $\mathbf{F}_{min}={\mathcal F}_i.$ $\qquad$ {\bf Step 3:} Find $Y_{i+1}\in {\rm Ker}\,\mathcal A\cap {\mathcal F}^*_i\setminus {\rm span}\{Y_1,...,Y_i\}.$ $\qquad$ {\bf Step 4:} Set ${\mathcal F}_{i+1}:={\mathcal F}_i\cap \{Y_{i+1}\}^\bot$ and $i:=i+1$, and go to step 2. The description of the algorithm is very simple but, in practice, its implementation presents serious difficulties, which arise at step 2 and especially at step 3. As a matter of fact, in the case of the copositive problem (\ref{SDP-2}), carrying out step 3 is already hard at the first two iterations. Let us consider the initial iteration, when $i=0$. At step 3, one has to find a matrix $Y_1\in {\rm Ker}\,\mathcal A\cap {\mathcal F}^*_0.$ Since ${\mathcal F}_0={\cal COP}^{p}$, at the current iteration ($i=0$) we know the explicit description of the dual cone of ${\mathcal F}_0$: ${\mathcal F}^*_0={{\cal CP}^{p}}$, where the cone ${\cal CP}^{p}$ is defined in (\ref{CP}). Therefore, the matrix $Y_1$ should have the form \begin{equation*} Y_1=\sum\limits_{i\in I_1}t(i)(t(i))^\top, \; t(i)\geq 0,\; t(i)\not= 0 \ \forall i \in I_1,\;\; 0<|I_1|\leq p(p+1)/2,\label{3q}\end{equation*} and the condition $ \sum\limits_{i\in I_1}(t(i))^\top A_jt(i)=0\;\forall j=0,1,...,n$ has to be satisfied.
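At least the span-membership tests underlying steps 2 and 3 are implementable numerically: checking $Y\in{\rm span}\{Y_1,...,Y_i\}$ reduces to a rank comparison after vectorizing the matrices. The following sketch (our own, and covering only this easy part of step 3; it does not produce a new $Y_{i+1}$) illustrates the idea:

```python
import numpy as np

def in_span(Y, basis, tol=1e-10):
    """Check whether the symmetric matrix Y lies in span{Y_1, ..., Y_i}:
    vectorize the matrices and compare the ranks of the stacked systems
    with and without Y."""
    vecs = [B.ravel() for B in basis]
    if not vecs:
        return np.linalg.norm(Y) < tol
    M = np.stack(vecs)
    r0 = np.linalg.matrix_rank(M, tol=tol)
    r1 = np.linalg.matrix_rank(np.vstack([M, Y.ravel()]), tol=tol)
    return r1 == r0

Y1 = np.array([[1.0, 0.0], [0.0, 0.0]])
Y2 = np.array([[0.0, 0.0], [0.0, 1.0]])
```

The genuinely hard part of step 3 remains constructing an element of ${\rm Ker}\,\mathcal A\cap {\mathcal F}^*_i$ outside the span, since ${\mathcal F}^*_i$ has no explicit description for $i\geq 1$.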
At the next iteration ($i=1$), one looks for a matrix $Y_2$ such that \begin{enumerate} \item[\bf{C1}:] $Y_2\in {\mathcal F}^*_1={\rm cl} \{D\in S(p):D\in{\cal CP}^{p}\,\oplus\; \alpha Y_1,\; \alpha \in \mathbb R\},$ \item[\bf{C2}:] $Y_2\not \in {\rm span}\{Y_1\},$ \item[\bf{C3}:] $A_j\bullet Y_2=0 \ \forall j=0,1,...,n.$ \end{enumerate} The first difficulty arises when trying to satisfy condition \textbf{C1}, as there is no explicit description of the set ${\mathcal F}^*_1$. Notice that this set is defined using the closure operator, and this operator is essential for the definition of ${\mathcal F}^*_1$. Therefore, in general, for a matrix $Y_2$ satisfying condition {\bf C1}, it may happen that $Y_2\not \in \{D\in S(p): \ D\in {\cal CP}^{p}\,\oplus\; \alpha Y_1, \; \alpha\in \mathbb R\}.$ In \cite{Waki}, there is also no indication of how to find a matrix $Y_2$ satisfying conditions \textbf{C2} and \textbf{C3}. Notice that the fulfillment of these conditions is a non-trivial task as well. Thus, we can state that although the FRA reported in \cite{Waki} is an easy-to-describe method, its practical implementation is not constructively described, which makes it difficult to apply. There is no information concerning what form the matrix $Y_i$ should have at the $i$-th iteration ($i\geq 1$) of the algorithm and how to meet conditions \textbf{C1}--\textbf{C3} for it. \subsection{A regularization based on the immobile indices}\label{subsec6} Here we will describe and justify a different algorithm for regularization of the copositive problem (\ref{SDP-2}). This algorithm has a structure similar to that of Waki and Muramatsu's FRA considered in subsection \ref{subsec5}, but it is based on the concept of immobile indices and is described in more detail, and is therefore more constructive.
Note from the outset that although our algorithm exploits the properties of the set of immobile indices, it does not require the initial knowledge of either this set or the vertices of its convex hull. \subsubsection{Algorithm \textbf{REG-LCoP} (REGularization of Linear Copositive Problems)}\label{4-2-1} {\bf Iteration $\#$ 0.} Given the copositive problem in the form (\ref{SDP-2}), consider the following {\it regular} SIP problem: \begin{equation*}{SIP}_0: \qquad \displaystyle\min_{(x,\mu)\in \mathbb R^{n+1}}\ \mu, \mbox{ s.t. } t^{\top}{\mathcal A}(x)t+\mu\geq 0\; \forall t \in T,\label{1a}\end{equation*} with the index set $T$ defined in (\ref{SetT}). If there exists a feasible solution $(\bar x,\bar \mu)$ of this problem with $\bar \mu<0$, then set $m_*:=0$ and go to the {\it Final step.} Otherwise, the vector $(x=\mathbf 0,\mu=0)$ is an optimal solution of the problem ($SIP_0$). It should be noticed that in the problem ($SIP_0$), the index set $T$ is compact and the constraints satisfy the Slater condition. Hence (see, e.g., \cite{Bon}), it follows from the optimality conditions for the vector $(x=\mathbf 0,\mu=0)$ that there exist indices and numbers \begin{equation*}\tau(i)\in T, \; \; \gamma(i)>0 \ \forall i \in I_1, \;\; |I_1|\leq n+1,\end{equation*} such that $\sum\limits_{i\in I_1}\gamma(i)(\tau(i))^{\top} A_j\tau(i)=0 \ \forall j=0,1,...,n;\;\; \sum\limits_{i\in I_1}\gamma(i)=1.$ It follows from the relations above that $I_1\neq \emptyset$ and $\tau(i)\in T_{im}\subset T \; \forall i \in I_1,$ and hence $ e^\top_k{\cal A}(x)\tau(i)=0\; \forall k \in P_+(\tau(i)),\; \forall i \in I_1,\ \forall x\in X.$ Set $L_1(i):=P_+(\tau(i)), \; i \in I_1,$ and go to the next iteration.
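Iteration $\#$ 0 can be imitated numerically by discretizing the index set $T$ and solving the resulting linear program in $(x,\mu)$. The sketch below is entirely ours: the finite grid replaces the infinite set $T$, and an artificial box $|x_j|\le x\_bound$ is added to keep the LP bounded (in ($SIP_0$) itself only the existence of a feasible $\bar\mu<0$ matters).

```python
import numpy as np
from scipy.optimize import linprog

def solve_discretized_sip0(A_list, grid, x_bound=1.0):
    """Solve a discretized version of (SIP_0):
        min mu  s.t.  t^T A(x) t + mu >= 0 for t in a finite grid of T,
    where A(x) = A_0 + sum_j x_j A_j is linear in x, so every constraint
    is linear in (x, mu).  Illustrative sketch only."""
    A0, As = A_list[0], A_list[1:]
    n = len(As)
    c = np.zeros(n + 1)
    c[-1] = 1.0  # objective: minimize mu; variables are (x_1,...,x_n, mu)
    A_ub, b_ub = [], []
    for t in grid:
        q0 = t @ A0 @ t
        qs = [t @ Aj @ t for Aj in As]
        # -(sum_j x_j q_j) - mu <= q0  encodes  q0 + sum_j x_j q_j + mu >= 0
        A_ub.append([-q for q in qs] + [-1.0])
        b_ub.append(q0)
    bounds = [(-x_bound, x_bound)] * n + [(None, None)]
    return linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds)

# Example with p = 2, n = 1, A_0 = 0, A_1 = I: A(x) = xI is strictly
# copositive for x = 1, so the discretized optimal mu is negative.
grid = [np.array([s, 1.0 - s]) for s in np.linspace(0.0, 1.0, 101)]
res = solve_discretized_sip0([np.zeros((2, 2)), np.eye(2)], grid)
```

A negative optimal value of the discretized LP suggests (but, because of the discretization, does not prove) that the Slater condition holds and $m_*=0$.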
{\bf Iteration $\#$ $m$, $m\geq 1$.} By the beginning of the iteration, we have indices and sets $\tau(i),$ $L_m(i),$ $i\in I_m,$ such that \begin{equation} \tau(i)\in T_{im}, \ P_+(\tau(i))\subset {L_m(i)}\subset P,\; e^\top_k{\cal A}(x)\tau(i)=0\ \forall k \in L_m(i),\; \forall x\in X,\; \forall i \in I_m.\label{vvv}\end{equation} Consider the SIP problem \begin{equation*}\begin{split} & \min\limits_{(x,\mu)\in \mathbb R^{n+1}} \mu, \\ {SIP}_m: \qquad\mbox{ s.t. } \ \ &e^{\top}_k{\mathcal A}(x)\tau(i) =0 \; \forall k \in L_m(i);\ e^{\top}_k{\mathcal A}(x)\tau(i)\geq 0 \; \forall k\in P\setminus L_m(i),\ \; \forall i \in I_m,\\ &t^{\top}{\mathcal A}(x)t+\mu\geq 0 \; \forall t \in \Omega(W_m),\end{split}\end{equation*} where $W_m:=\{\tau(i), i \in I_m\}$ and the set $\Omega(W_m)$ is constructed by the rules (\ref{ep}), (\ref{omega}) with $V=W_m$. Since $W_m$ is a subset of the set of immobile indices of problem (\ref{SDP-2}), it follows from Theorem \ref{L25-02-1} and the equalities in (\ref{vvv}) that ${\cal X}(W_m)=X$. Notice that, by the definition of the set $\Omega(W_m)$, it holds that $$\rho(t,{\rm conv}\, W_m)\geq \sigma(W_m) >0\;\; \forall t \in \Omega(W_m).$$ In the problem (${SIP}_m$), the index set $\Omega(W_m)$ is compact and the constraints satisfy the following Slater type condition: \begin{equation*}\begin{split} \exists (\widehat x,\widehat \mu) \mbox{ such that } & e^{\top}_k{\mathcal A}(\widehat x)\tau(i) =0 \; \forall k \in {L_m(i)};\, e^{\top}_k{\mathcal A}(\widehat x)\tau(i)\geq 0 \; \forall k \in P\setminus {L_m(i)}, \; \forall i \in I_m, \\ &t^{\top}{\mathcal A}(\widehat x)t+\widehat \mu> 0 \; \forall t \in \Omega(W_m).\end{split}\end{equation*} Hence, this problem is {\it regular}.
If problem (${SIP}_m$) admits a feasible solution $(\bar x,\bar \mu)$ with $\bar \mu<0$, then STOP and go to the {\it Final step} with $m_*:=m.$ Otherwise, the vector $(x=\mathbf 0,\mu=0)$ is an optimal solution of (${SIP}_m$). Since this problem is regular, the optimality of the vector $(x=\mathbf 0,\mu=0)$ implies (see \cite{levin}) that there exist indices, numbers, and vectors \begin{equation} \tau(i)\in \Omega(W_m), \; \gamma(i), i \in \Delta I_m, \; \; 1\leq |\Delta I_m|\leq n+1;\ \lambda^m(i)\in \mathbb R^p, i\in I_m,\label{11.1}\end{equation} which satisfy the following conditions: \begin{equation}\begin{split} &\sum\limits_{i\in \Delta I_m}\gamma(i)(\tau(i))^{\top}A_j\tau(i)+ 2\sum\limits_{i\in I_m}( \lambda^m(i))^{\top}A_j\tau(i)=0 \; \forall j=0,1,...,n;\\ &\gamma(i)>0 \; \forall i \in \Delta I_m;\;\;\; \lambda^m_k(i)\geq 0 \; \forall k \in P\setminus {L_m(i)},\; \forall i \in I_m.\label{18-11}\end{split}\end{equation} Here and in what follows, without loss of generality, we suppose that $\Delta I_m\cap I_m=\emptyset.$ Moreover, applying the procedure \textbf{DAM} described in \cite{KT-Global} to the data (\ref{11.1}), it is possible to ensure that the following conditions are met: \begin{equation} P_0(\tau(i))\cap P_+(\tau(j))\not=\emptyset \; \; \forall i \in \Delta I_m, \; \forall j\in I_m.\label{11*}\end{equation} Let us set $\Delta L(i):=\{k \in P\setminus L_m(i):\lambda^m_k (i) >0\}, \ i \in I_m,$ $${L_{m+1}(i)}:={L_m(i)}\cup \Delta {L}(i), \ i \in I_m;\;\; {L_{m+1}(i)}:=P_+(\tau(i)), \ i \in \Delta I_m.\qquad$$ It follows from (\ref{18-11}) that $e^\top_k{\cal A}(x)\tau(i)=0\ \forall k \in \Delta L(i),$ $\ \forall i \in I_m,$ and $\tau(i)\in T_{im}\; \forall i \in \Delta I_m.$ The last inclusions imply the equalities $e^\top_k{\cal A}(x)\tau(i)=0\; \forall k \in P_+(\tau(i)),$ $ \forall i \in \Delta I_m.$ Go to the next iteration $\# (m+1)$ with the new data \begin{equation} \tau(i),\; L_{m+1}(i), \; i \in
I_{m+1}:=I_m\cup \Delta I_m.\label{o-18}\end{equation} {\bf Final step.} It follows from (\ref{11*}) that the algorithm \textbf{REG-LCoP} runs a finite number $m_*$ of iterations. Therefore, for some $m_*\geq 0$, the problem (${SIP}_{m_*}$) has a feasible solution $(\bar x, \bar \mu)$ with $\bar \mu<0$. Observe that by Theorem \ref{L25-02-1}, the found vector $\bar x$ is a feasible solution of the original copositive problem (\ref{SDP-2}). If $m_*=0$, then the constraints of problem (\ref{SDP-2}) satisfy the Slater condition with $\bar x$, and hence the problem is regular. Suppose now that $m_*>0$. Consider the problem \begin{equation*}\begin{split}& \min\limits_{x\in \mathbb R^{n}} c^\top x, \\ \mbox{ s.t. }& \ e^{\top}_k{\mathcal A}(x)\tau(i)=0 \; \forall k \in {L_{m_*}}(i),\ e^{\top}_k{\mathcal A}(x)\tau(i)\geq 0\ \; \forall k\in P\setminus {L_{m_*}}(i),\; \forall i \in I_{m_*}, \\ &t^{\top}{\mathcal A}(x)t\geq 0 \; \forall t \in \Omega(W_{m_*}),\end{split}\end{equation*} where the sets $W_{m_*}=\{\tau(i), i \in I_{m_*}\}$ and $\Omega(W_{m_*})$ are the same as in the problem (${SIP}_{m_*}$). Note that by construction, ${\cal X}(W_{m_*})=X.$ Hence the problem above is equivalent to problem (\ref{SDP-2}) and can be considered as its regularization since \begin{itemize} \item it has a finite number of linear equality/inequality constraints, \item by construction, $t^{\top}{\mathcal A}(\bar x)t> 0 \; \forall t \in \Omega(W_{m_*})$ for some $\bar x\in X$. \end{itemize} The algorithm is described.$ \qquad \blacksquare$ \begin{remark} In the algorithm \textbf{REG-LCoP} described above, it is assumed that $X\not =\emptyset$. It is easy to modify the algorithm so that this assumption is removed.
\end{remark} \subsubsection{On the comparison of the algorithms}\label{subsec7} To give another interpretation of the algorithm \textbf{REG-LCoP} and to better trace its correspondence to Waki and Muramatsu's FRA from \cite{Waki} (presented in subsection \ref{subsec5}), let us perform some additional constructions at the iterations of the algorithm \textbf{REG-LCoP}. At the end of {\bf Iteration $\#$ 0}, having the data $\tau(i),\; \gamma(i), \; L_1(i), i\in I_1,$ let us set $${\mathcal F}_0:={\cal COP}^p,\;\; Y_1:=\sum\limits_{i\in I_1}\gamma(i)\tau(i) (\tau(i))^{\top},\; {\mathcal F}_1:={\mathcal F}_0\cap \{Y_1\}^\bot.$$ Notice here that, by construction, \begin{equation*}\begin{split}&\qquad\qquad\qquad Y_1\not =\mathbb O_{p},\; Y_1\in {\rm Ker}\,\mathcal A,\; Y_1\in {\mathcal F}^*_0={\cal CP}^p,\\ {\mathcal F}_1: &={\mathcal F}_0\cap \{Y_1\}^\bot=\{D\in {\cal COP}^p: D\bullet Y_1=0\}=\{D\in {\cal COP}^p:(\tau(i))^\top D\tau(i)=0\ \forall i \in I_1\}\qquad \\&=\{D\in {\cal COP}^p:e^\top _kD\tau(i)=0 \ \forall k \in {L_1(i)},\ e^\top _kD\tau(i)\geq 0\ \forall k \in P\setminus {L_1(i)},\ \forall i \in I_1\},\end{split}\end{equation*} where $\mathbb O_p$ is the $p\times p$ null matrix. Consider {\bf Iteration $\#$ $m$, $1\leq m\leq m_*$.} By the beginning of the iteration, we have a cone ${\mathcal F}_m={\mathcal F}_{m-1}\cap \{Y_m\}^\bot$ that can be described as follows: \begin{equation}\begin{split}{\mathcal F}_m =\{D\in {\cal COP}^p: \ e^\top _kD\tau(i)=0 \ \forall k \in L_m(i), \; e^\top _kD\tau(i)\geq 0\ \forall k \in P\setminus L_m(i), \forall i \in I_m\}.\label{o-14}\end{split}\end{equation} At the end of this iteration, we have the new data (\ref{o-18}) and the numbers $\gamma(i), i\in \Delta I_m$.
Let us set \begin{equation} Y_{m+1}:=\sum\limits_{i\in \Delta I_m}\gamma(i)\tau(i)(\tau(i))^{\top}+ \sum\limits_{i\in I_m} [\tau(i)(\lambda^m(i))^{\top}+\lambda^m(i)(\tau(i))^{\top}].\label{Ym}\end{equation} From the equations in (\ref{18-11}), we conclude that $Y_{m+1}\in {\rm Ker}\,\mathcal A.$ From (\ref{o-14}), it follows that \begin{eqnarray} &{\mathcal F}^*_m =\ {\rm cl}\{D\in \mathcal S(p):\ D\in {\cal CP}^p \oplus {\cal P}^*_m \},\nonumber\\& {\cal P}^*_m:= \{D\in \mathcal S(p):D=\sum\limits_{i\in I_m} [\tau(i)(\lambda(i))^{\top}+\lambda(i)(\tau(i))^{\top}],\ \lambda_k(i)\geq 0 \ \forall k \in P\setminus {L_m(i)}, \forall i \in I_m\}.\nonumber\end{eqnarray} Hence, by construction, $Y_{m+1}\in {\mathcal F}^*_m.$ Consider the cone ${\mathcal F}_{m+1}:={\mathcal F}_{m}\cap \{Y_{m+1}\}^\bot$ and let us show that it can be described as follows: \begin{equation}\begin{split} {\mathcal F}_{m+1} =\{D\in {\cal COP}^p:\ &e^\top _kD\tau(i)=0 \ \forall k \in {L_{m+1}(i)},\\ & e^\top _kD\tau(i)\geq 0 \ \forall k \in P\setminus {L_{m+1}(i)}, \forall i \in I_{m+1}\}.\label{o-15}\end{split}\end{equation} In fact, it follows from (\ref{Ym}) that the equality $D\bullet Y_{m+1}=0$ can be rewritten in the form \begin{eqnarray} &0=D\bullet Y_{m+1}=\sum\limits_{i\in \Delta I_m}\gamma(i)(\tau(i))^{\top}D\tau(i)+2\sum\limits_{i\in I_m}(\lambda^m(i))^{\top}D\tau(i), \nonumber\\ &\mbox{ where } \ \ \gamma(i)>0 \ \forall i \in \Delta I_m;\;\;\; \lambda^m_k(i)\geq 0\ \forall k \in P\setminus {L_m(i)}, \forall i \in I_m.\nonumber\end{eqnarray} Taking into account (\ref{o-14}) and the relations above, we conclude that for $D\in {\mathcal F}_m$, the equality $D\bullet Y_{m+1}=0 $ implies the equalities $$ (\tau(i))^{\top}D\tau(i)=0 \ \forall i \in \Delta I_m,\; e^\top_kD\tau(i)=0\ \forall k \in \Delta {L}(i), \forall i \in I_m.$$ Notice that the relations $D\in {\cal COP}^{p},$ $(\tau(i))^{\top}D\tau(i)=0, \tau(i)\geq 0 \ \forall i \in \Delta I_m,$ imply
\begin{equation} e^\top_kD\tau(i)=0 \ \forall k \in P_+(\tau(i)) \mbox{ and } e^\top_kD\tau(i)\geq 0\ \forall k \in P\setminus P_+(\tau(i)), \forall i \in \Delta I_m.\label{o-17}\end{equation} Representation (\ref{o-15}) follows from (\ref{o-14}) and (\ref{o-17}). The matrices and cones \begin{equation} Y_m,\; {\mathcal F}_m,\; m=0, 1,...,m_*,\label{w4}\end{equation} constructed by the rules (\ref{Ym}) and (\ref{o-15}) satisfy the following relations: \begin{equation*}\begin{split}&Y_m\in {\mathcal F}^*_{m-1}\ \ \ \forall m=1,...,m_*, \ Y_0=\mathbb O_{p}; \qquad\qquad\qquad\qquad\qquad\qquad(I)\\ &Y_m\in {\rm Ker}\,\mathcal A\ \ \ \forall m=1,...,m_*; \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad(II)\\ &{\mathcal F}_m= {\mathcal F}_{m-1}\cap \{Y_m\}^\bot\ \ \ \forall m=1,...,m_*, \ {\mathcal F}_0={\mathcal {COP}}^p.\qquad\qquad\qquad(III)\end{split}\end{equation*} Now we see that the algorithm \textbf{REG-LCoP} allows one to get a clearer description of the structure of the matrices $Y_m, m=1,...,m_*,$ satisfying conditions $(I)-(III)$, as well as quite constructive rules for their formation: \begin{itemize} \item[]\it for a given $m$, the matrix $Y_m$ has the form (\ref{Ym}) and is built on the basis of the optimality conditions for the feasible solution $(x=\mathbf 0, \mu=0)$ in the corresponding \textbf{regular} SIP problem (${SIP}_m$). \end{itemize} As was shown in subsection \ref{subsec5}, at each iteration, Waki and Muramatsu's FRA produces a set of matrices and cones (\ref{w4}) satisfying the conditions $(I)-(III)$ and the condition $$Y_m\not\in {\rm span}\{Y_0,Y_1,...,Y_{m-1}\}\ \forall m=1,...,m_*-1.\qquad\qquad\qquad (IV)$$ On the other hand, the algorithm \textbf{REG-LCoP} described in subsection \ref{subsec6} produces at each iteration a set of matrices and cones (\ref{w4}) satisfying the conditions $(I)-(III)$ but not necessarily the condition (IV).
Since in the algorithm \textbf{REG-LCoP} the fulfillment of the condition (IV) is not guaranteed at each iteration, when comparing this algorithm with Waki and Muramatsu's FRA, it may seem at first glance that, in general, the number of iterations executed by the algorithm \textbf{REG-LCoP} is larger. Such an impression is caused by the fact that in \ref{4-2-1}, we described in more detail all the steps of the algorithm and explicitly indicated all the computations carried out at each iteration. As for Waki and Muramatsu's FRA, its iterations are described only in general terms. In what follows, we set out a {\it modification} of the algorithm \textbf{REG-LCoP}, where the number of iterations is reduced and it is guaranteed that all conditions $(I)-(IV)$ are satisfied at each {\it core} iteration. This modification is formal, being essentially another way of numbering the iterations. The actual amount of computation at the steps of this modified algorithm is the same as at the iterations of the original one. \subsubsection{A compressed modification of the algorithm \textbf{REG-LCoP}} Consider the algorithm \textbf{REG-LCoP} presented in \ref{4-2-1}. Evidently, one can reduce the number of iterations of the algorithm by squeezing into a single iteration those iterations of the algorithm which change the description of the dual cone ${\cal F}^*_m$ but do not change the cone ${\cal F}_m$ itself. In other words, we will only move to the next {\it core} iteration when all conditions $(I)-(IV)$ are satisfied. Formally, such a procedure can be described as follows.
Suppose that the algorithm \textbf{REG-LCoP} has constructed matrices and cones (\ref{w4}) satisfying the properties $(I)-(III)$, and let $m_*>0.$ Denote by $m_s\in \{0,1,...,m_*-1\},\; s=0,1,...,s_*,$ the iteration numbers such that \begin{equation*}\begin{split}&m_0:=0,\; m_s<m_{s+1}\; \forall s=0,1,...,s_*-1;\;\; m_{s_*}+1=m_*,\\ &Y_{m_s+1}\not\in {\rm span}\{Y_0,Y_1,...,Y_{m_s}\}\; \forall s=0,1, ...,s_*-1;\\ &Y_{m_s+1+i}\in {\rm span}\{Y_0,Y_1,...,Y_{m_s+i}\}\; \forall i=1,...,m_{s+1}-m_s-1, \ \forall s=0,1,...,s_*-1.\end{split}\end{equation*} Here $s_*$ denotes the number of iterations for which the conditions above are met. Notice that the set $\{l,l+1,...,w\}$ is considered empty if $w<l.$ In other words, the condition $(IV)$ is satisfied only for $m\in \{m_s+1,\; s=0,1,...,s_*-1\}$ and, possibly, for $m=m_*$. Set $$\bar Y_0:=Y_0,\; \bar {\mathcal F}_0:={\mathcal F}_0,\; \bar Y_{s+1}:=Y_{m_s+1},\; \bar {\mathcal F}_{s+1}:={\mathcal F}_{m_s+1}\; \forall s=0,1,...,s_*.$$ It is easy to check that the following conditions hold true: $$\bar Y_s\in \bar {\mathcal F}^*_{s-1},\; \bar Y_s\in {\rm Ker}\,\mathcal A,\; \bar {\mathcal F}_s=\bar {\mathcal F}_{s-1}\cap \{\bar Y_s\}^\bot \ \forall s=1,...,s_*+1;$$ $$\bar Y_s\not\in{\rm span}\{\bar Y_0,\bar Y_1,...,\bar Y_{s-1}\}\; \forall s=1,...,s_*.$$ Thus, after the squeezing described above, we get $s_*$ core iterations of the modified algorithm. It follows from the conditions above that $s_*\leq {\rm dim}({\rm Ker}\,\mathcal A).$ Notice that for any $s=0,1,...,s_*-1,$ the iterations of the algorithm \textbf{REG-LCoP} having the numbers $m_s+1+i,\; \mbox{where } i=1,...,m_{s+1}-m_s-1$ (the compressed iterations), are not useless. They can be considered as the steps of a regularization procedure for the cone ${\mathcal F}_{m_s+1}$ at the current core iteration $\#\, s$. At each of these iterations, we reformulate the cone ${\mathcal F}_{m_s+1}$ in a new equivalent form.
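The selection of the core iteration numbers $m_s+1$ is a purely linear-algebraic filtering of the sequence $Y_1,...,Y_{m_*}$: an index is kept exactly when the corresponding matrix enlarges the span of the previously produced ones. A small sketch of this filtering (the helper name is ours) can be given as follows:

```python
import numpy as np

def core_iterations(Ys, tol=1e-10):
    """Return the indices m of the 'core' iterations: those for which Y_m
    does not lie in the span of the previously produced matrices, i.e.
    those where condition (IV) holds.  The remaining (compressed)
    iterations only re-describe the current cone without shrinking it."""
    kept, basis = [], []
    for m, Y in enumerate(Ys, start=1):
        v = Y.ravel()
        if basis:
            M = np.stack(basis)
            r0 = np.linalg.matrix_rank(M, tol=tol)
            new = np.linalg.matrix_rank(np.vstack([M, v]), tol=tol) > r0
        else:
            new = np.linalg.norm(v) > tol
        if new:
            kept.append(m)
        basis.append(v)
    return kept

E11 = np.array([[1.0, 0.0], [0.0, 0.0]])
E22 = np.array([[0.0, 0.0], [0.0, 1.0]])
# Y_2 = 2*Y_1 is redundant (a compressed iteration); Y_3 is a core one.
```

Since each kept index increases the rank of the stacked system, the number of core iterations is bounded by ${\rm dim}({\rm Ker}\,\mathcal A)$, in agreement with the bound on $s_*$ above.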
This additional information allows us to improve (make more regular) the representation of the cone ${\mathcal F}_{m_s+1}$ and get a more explicit and useful description of its dual cone ${\mathcal F}^*_{m_s+1}$. \subsubsection{A short discussion on the algorithms considered in this section}\label{Discussion} By analyzing and comparing the iterative algorithms presented above, we can draw the following conclusions. \begin{enumerate} \item Waki and Muramatsu's facial reduction algorithm from \cite{Waki}, reformulated for copositive problems in subsection \ref{subsec5}, is very simple in its description and runs no more than ${\rm dim}({\rm Ker}\,\mathcal A)$ iterations. But this algorithm is more conceptual than constructive since it does not provide any information about the structure of the matrix $Y_m$ and the cone ${\mathcal F}^*_m$ at its $m$-th iteration. Moreover, it is not explained in \cite{Waki} how to carry out steps 2 and 3 at each iteration. \item The algorithm \textbf{REG-LCoP} proposed in subsection \ref{subsec6} also runs a finite number of iterations. This algorithm is described in all details and justified. Quite constructive rules for calculating the matrix $Y_m$ satisfying the condition $Y_m\in {\mathcal F}^*_{m-1}$ are presented using the information available at Iteration $\# m$ of this algorithm. These rules are derived from the optimality conditions for the optimal solution $(x=\mathbf 0, \mu=0)$ of the \textbf{regular} problem (${SIP}_{m}$). Notice that it is possible to develop a modification of the algorithm \textbf{REG-LCoP} which runs no more than $2n$ iterations. \item Finally, to show that the algorithm \textbf{REG-LCoP} described in \ref{4-2-1} is not worse (in the number of iterations) than the FRA from subsection \ref{subsec5}, we presented a compressed modification of the algorithm \textbf{REG-LCoP}.
Like the algorithm from subsection \ref{subsec5}, this modification consists of no more than ${\rm dim}({\rm Ker}\,\mathcal A)$ iterations. \end{enumerate} \section{Conclusions}\label{Concl} The main contribution of the paper is that, based on the concept of immobile indices, previously introduced for semi-infinite optimization problems, we suggested new methods for the regularization of copositive problems. The algorithmic procedure of regularization of copositive problems is described in the form of the algorithm \textbf{REG-LCoP} and is compared with the facial reduction approach based on the minimal cone representation. We show that, when applied to the linear CoP problem (\ref{SDP-2}), the algorithm \textbf{REG-LCoP} possesses the same properties as the FRA suggested by Waki and Muramatsu in \cite{Waki}, but its iterations are explicit, described in more detail, and hence more constructive. The algorithms described in the paper are useful for the study of convex copositive problems. In particular, for the linear copositive problem, they allow one to \begin{itemize} \item formulate an equivalent (regular) semi-infinite problem which satisfies a Slater-type regularity condition and can be solved numerically; \item prove new optimality conditions without any CQs; \item develop a strong duality theory based on an explicit representation of the ``regularized'' feasible cone and the corresponding dual (such as, e.g., the Extended Lagrange Dual Problem suggested for SDP by Ramana et al. in \cite{Ramana-W}). \end{itemize} The regularization approach described in the paper is novel and rather constructive. It is important to stress that no other constructive regularization procedures are known for linear copositive problems. \begin{thebibliography}{99} \bibitem{AH2013} F. Ahmed, M. D\"{u}r and G. Still, Copositive Programming via semi-infinite optimization. J. Optim. Theory Appl. \textbf{159} (2013) 322--340. \bibitem{HandbookSDPnew} M.F. Anjos and J.B.
Lasserre (Eds.), {\it Handbook of Semidefinite, Conic and Polynomial Optimization}, International Series in Operational Research and Management Science, \textbf{166}, 138 p. Springer US (2012). \bibitem{Bomze2012} I.M. Bomze, Copositive optimization -- recent developments and applications. EJOR \textbf{216}(3) (2012) 509--520. \bibitem{Bon} J.F. Bonnans and A. Shapiro, {\it Perturbation analysis of optimization problems}, 601 p. Springer-Verlag, New York (NY) (2000). \bibitem{BW4} J.M. Borwein and H. Wolkowicz, Facial reduction for a cone-convex programming problem. J. Austral. Math. Soc., Ser. A \textbf{30}(3) (1981) 369--380. \bibitem{BW1} J.M. Borwein and H. Wolkowicz, Regularizing the abstract convex program. J. Math. Anal. Appl. \textbf{83} (1981) 495--530. \bibitem{deKlerk} E. de Klerk and D.V. Pasechnik, Approximation of the stability number of a graph via copositive programming. SIAM J. Optim. \textbf{12} (2002) 875--892. \bibitem{Dur2010} M. D\"{u}r, Copositive Programming -- a survey. In: M. Diehl, F. Glineur, E. Jarlebring and W. Michielis (Eds.), {\it Recent advances in optimization and its applications in engineering}, 535 p. Springer-Verlag, Berlin, Heidelberg (2010). \bibitem{Dry-Wol} D. Drusvyatskiy and H. Wolkowicz, The many faces of degeneracy in conic optimization. Foundations and Trends in Optimization, Now Publishers Inc. \textbf{3}(2) (2017) 77--170. \bibitem{Kort} K.O. Kortanek and Q. Zhang, Perfect duality in semi-infinite and semidefinite programming. Math. Program., Ser. A \textbf{91} (2001) 127--144. \bibitem{KT-new} O.I. Kostyukova and T.V. Tchemisova, On equivalent representations and properties of faces of the cone of copositive matrices. arXiv:2012.03610v1 [math.OC] (2020). \bibitem{KT-Global} O.I. Kostyukova and T.V. Tchemisova, Strong duality for a problem of linear copositive programming. arXiv:2004.09865 [math.OC] (2019). \bibitem{KT-JOTA} O.I. Kostyukova and T.V.
Tchemisova, Optimality conditions for convex Semi-Infinite Programming problems with finitely representable compact index sets. J. Optim. Theory Appl. \textbf{175}(1) (2017) 76--103. \bibitem{KT-SetValued} O.I. Kostyukova, T.V. Tchemisova and O.S. Dudina, Immobile indices and CQ-free optimality criteria for linear copositive programming problems. Set-Valued Var. Anal. \textbf{28} (2020) 89--107. \bibitem{Let} A.N. Letchford and A.J. Parkes, A guide to conic optimisation and its applications. RAIRO -- Rech. Oper. Res. \textbf{52} (2018) 1087--1106. https://doi.org/10.1051/ro/2018034 \bibitem{levin} V.L. Levin, Application of E. Helly's theorem to convex programming, problems of best approximation and related questions. Math. USSR Sbornik \textbf{8}(2) (1969) 235--247. \bibitem{Luo} Z.-Q. Luo, F.J. Sturm and S. Zhang, Duality results for conic convex programming. Econometric Institute Report No. 9719/A, Erasmus University Rotterdam, Erasmus School of Economics (ESE), Econometric Institute (1997). \bibitem{PP} G. Pataki, A simple derivation of a facial reduction algorithm and extended dual systems. Preprint available at http://www.unc.edu/ pataki/papers/fr.pdf (2000). \bibitem{Perm} F. Permenter, H. Friberg and E. Andersen, Solving conic optimization problems via self-dual embedding and facial reduction: a unified approach. Tech. Report, DOI 10.13140/RG.2.1.4340.7847 (2015). \bibitem{Terlaki} I. P{\'o}lik and T. Terlaky, Exact duality for optimization over symmetric cones. AdvOL-Report No. 2007/10, McMaster University, Advanced Optimization Lab., Hamilton, Canada (2007). \bibitem{Ramana-W} M.V. Ramana, L. Tuncel and H. Wolkowicz, Strong duality for Semidefinite Programming. SIAM J. Optim. \textbf{7}(3) (1997) 641--662. \bibitem{solodov} M.V. Solodov, {\it Constraint Qualifications}, Wiley Encyclopedia of Operations Research and Management Science, James J. Cochran et al. (editors), John Wiley \& Sons, Inc. (2010). \bibitem{Tuncel} L. Tun\c{c}el and H.
Wolkowicz, Strong duality and minimal representations for cone optimization. Comput. Optim. Appl. \textbf{53} (2013) 619--648. \bibitem{Waki} H. Waki and M. Muramatsu, Facial reduction algorithms for conic optimization problems. J. Optim. Theory Appl. \textbf{158} (2013) 188--215. \bibitem{Weber2} G.-W. Weber, {\it Generalized semi-infinite optimization and related topics}, Heldermann publishing house, Research and Exposition in Mathematics 29, Lemgo, eds.: K.H. Hofmann and R. Willem (2003). \bibitem{W-handbook} H. Wolkowicz, R. Saigal and L. Vandenberghe, {\it Handbook of Semidefinite Programming - theory, algorithms, and applications}, Kluwer Academic Publishers (2000). \end{thebibliography} \end{document}
\begin{document} \title{Collision models can efficiently simulate any multipartite Markovian quantum dynamics } \author{Marco Cattaneo} \email{[email protected] } \affiliation{Instituto de F\'{i}sica Interdisciplinar y Sistemas Complejos (IFISC, UIB-CSIC), Campus Universitat de les Illes Balears E-07122, Palma de Mallorca, Spain} \affiliation{QTF Centre of Excellence, Turku Centre for Quantum Physics, Department of Physics and Astronomy, University of Turku, FI-20014 Turun Yliopisto, Finland} \affiliation{QTF Centre of Excellence, Department of Physics, University of Helsinki, P.O. Box 43, FI-00014 Helsinki, Finland} \author{Gabriele De Chiara} \affiliation{Centre for Theoretical Atomic, Molecular and Optical Physics, Queen’s University Belfast, Belfast BT7 1NN, United Kingdom} \author{Sabrina Maniscalco} \affiliation{QTF Centre of Excellence, Department of Physics, University of Helsinki, P.O. Box 43, FI-00014 Helsinki, Finland} \affiliation{QTF Centre of Excellence, Turku Centre for Quantum Physics, Department of Physics and Astronomy, University of Turku, FI-20014 Turun Yliopisto, Finland} \affiliation{QTF Centre of Excellence, Department of Applied Physics, School of Science, Aalto University, FI-00076 Aalto, Finland} \author{Roberta Zambrini} \affiliation{Instituto de F\'{i}sica Interdisciplinar y Sistemas Complejos (IFISC, UIB-CSIC), Campus Universitat de les Illes Balears E-07122, Palma de Mallorca, Spain} \author{Gian Luca Giorgi} \affiliation{Instituto de F\'{i}sica Interdisciplinar y Sistemas Complejos (IFISC, UIB-CSIC), Campus Universitat de les Illes Balears E-07122, Palma de Mallorca, Spain} \date{Received: \today / Accepted: date} \begin{abstract} We introduce the \textit{multipartite collision model}, defined in terms of elementary interactions between subsystems and ancillas, and show that it can simulate the Markovian dynamics of any multipartite open quantum system. 
{We} develop a method to estimate an analytical error bound for any repeated interactions model, and we use it to prove that the error of our scheme displays an optimal {scaling}. Finally, we provide a simple decomposition of the \textit{multipartite collision model} into elementary quantum gates, and show that it is efficiently simulable on a quantum computer according to the dissipative quantum Church-Turing theorem, i.e. it requires a polynomial number of resources. \end{abstract} \maketitle \textit{Introduction}.--The collision approach represents one of the most successful methods to describe the dynamics of an open quantum system, being based on the intriguing idea that environment-induced decoherence and dissipation arise because of rapid repeated collisions between each system unit and a set of environment ancillas, occurring during a timestep $\Delta t$. This framework, whose origins can be traced back to some important works of the previous century \cite{Karplus1948,Rau1963,Dumcke1985}, has given birth to a plethora of ``collision'' or ``repeated interactions'' models \cite{Scarani2002,Ziman2002,Ziman2005,Attal2006a,Giovannetti2012,Ciccarello2013a,Bruneau2014a,Vacchini2014,Grimmer2016a,Kretschmer2016a}, which have received increasing attention in recent years, especially due to their fundamental importance in the fields of quantum thermodynamics and open quantum systems.
For instance, collision models have been proven useful to investigate flux rectification \cite{Landi2014}, Landauer's principle \cite{Lorenzo2015a,PezzuttoNJP2016}, the emergence of thermalization or non-equilibrium steady states \cite{Karevski2009,Barra2017,Lostaglio2018,Cusumano2018a,Seah2019a,Arisoy2019,Manatuly2019a,Korzekwa2020,Ehrich2020,Guarnieri2020a}, quantum thermometry \cite{Seah2019}, quantum batteries \cite{Barra2019} and quantum thermal machines \cite{Dag2016,Pezzutto2019,Hewgill2020,DeChiara2020,Piccione2020}, as well as to analyze the thermodynamics of non-thermal baths \cite{Manzano2016,Manzano2018,Rodrigues2019} or in the presence of strong coupling \cite{Strasberg2019a}. Applications outside the field of thermodynamics include the study of open quantum optical systems \cite{Bruneau2014,Ciccarello2017,Cilluffo2020,Carollo2020}, simulation of non-Markovian effects \cite{Ciccarello2013a,Bernardes2014,McCloskey2014,Vacchini2014,Kretschmer2016a,Cakmak2017a,Lorenzo2017,Campbell2018,Jin2018a,Man2018a} and cascade models \cite{Giovannetti2012,Lorenzo2015a,Lorenzo2015,Cusumano2017a,Cusumano2018}, quantum synchronization \cite{Karpat2019,karpat2020synchronization}, entanglement generation \cite{Daryanoosh2018,Cakmak2019,Li_2020}, quantum transport \cite{Chisholm2020} and quantum Darwinism \cite{Campbell2019a,Garcia-Perez2020}. The structure of any single-qubit collision model and {the correspondence with an equivalent} master equation are well understood \cite{Ziman2005,Rybar2012,Filippov2017}. In contrast, while some collision models for multipartite systems have been presented in the past few years \cite{Daryanoosh2018,Lorenzo2017,Giovannetti2012,Supplemental}, a universal protocol suitable for the efficient simulation of multipartite open system dynamics via collision models, described in terms of elementary collisions between subsystems and ancillas, has not been provided yet.
Reproducing any possible open dynamics by means of elementary collision models promises to be particularly valuable to deal with the microscopic description of multipartite open systems, where \textit{global} master equations are needed \cite{Hofer2017,Gonzalez2017,Cattaneo2019b} and one cannot always rely on local descriptions, which may display fundamental differences e.g. from the thermodynamic point of view \cite{Levy2014,Stockburger2017}. Here, collision models are extremely useful to study the elementary exchange of heat and energy, and the microscopic production of work in each single interaction between a unit of the system and an environment ancilla \cite{Barra2015,Strasberg2017,DeChiara2018a}. For instance, a collision model analysis resolves the violation of the second law of thermodynamics when using a local master equation \cite{DeChiara2018a,Hewgill2020a}. In this Letter, we introduce the \textit{multipartite collision model} (MCM), based on elementary interactions between each unit of a multipartite system and a set of ancillary qubits of the environment. We show that the MCM is able to reproduce any Gorini-Kossakowski-Sudarshan-Lindblad (GKLS) master equation \cite{Lindblad1976,Gorini1976a} in the limit of small timestep $\Delta t\rightarrow 0^+$, therefore describing any possible divisible dynamical map. {After providing a simple decomposition into elementary quantum gates, we prove that the MCM is efficiently simulable on a quantum computer under the assumptions of the dissipative quantum Church-Turing theorem \cite{Kliesch2011}, as it requires a number of resources that scales polynomially as a function of the number of subsystems, time, and the inverse of the desired precision.} This allows for the efficient simulation of a whole range of complex open quantum systems under the Markovianity assumption: by tuning our model in an intuitive way, we can mimic the effect of different types of separate and/or common baths (bosonic, fermionic, spin, etc.)
at any temperature, as well as reproduce each elementary system-bath interaction characterizing a generic global master equation, with or without a non-local unitary system dynamics. Non-Markovian effects may then be simulated by Markovian embeddings of pseudomodes into the MCM \cite{Lorenzo2017}. Furthermore, by developing a method valid for \textit{any} collision model, we calculate an analytical error bound for the simulation of a generic semigroup dynamics by means of the MCM, proving that its scaling is optimal. To guarantee the generality of the MCM, we {will show that it can simulate the dynamics driven by} any GKLS master equation, both in the diagonal \cite{Lindblad1976} and non-diagonal form \cite{Gorini1976a}. The latter can be expressed by means of the Liouvillian superoperator $\mathcal{L}$ as: \begin{equation} \label{eqn:LindbladNoDiag} \begin{split} \mathcal{L}[\rho_S(t)]=&-i[\tilde{H}_S,\rho_S(t)]+\sum_{j,k=1}^{J} \gamma_{jk}\mathcal{D}_{F_j,F_k}[\rho_S(t)], \end{split} \end{equation} where $\mathcal{D}_{O_1,O_2}[\rho]=O_1\rho O_2^\dagger-\frac{1}{2}\{O_2^\dagger O_1,\rho\}$, {$\tilde{H}_S$} is an effective {system} Hamiltonian, $(\gamma_{jk})$ is the semipositive \textit{Kossakowski matrix}, while we refer to $\{F_k\}_{k=1}^J$ as the \textit{GKS operators} {\cite{Gorini1976a}}. If $\mathcal{H}_S=\bigotimes_{j=1}^M \mathcal{H}_S^{(j)}$ is the Hilbert space of the system composed of $M$ (for simplicity identical) subsystems with ${\rm dim}(\mathcal{H}_S^{(j)})=d$ ($d<\infty$), in general we have $J=d^{2M}-1$. We obtain the diagonal GKLS form by diagonalizing the Kossakowski matrix through a suitable unitary matrix $C$: we introduce the \textit{Lindblad operators} {\cite{Lindblad1976}} $L_k=\sum_{j=1}^J C_{jk} F_j$, and we derive the corresponding decay rates $\Gamma_k$ as the eigenvalues of $(\gamma_{jk})$. \begin{figure} \caption{(a): Pictorial representation of the MCM.
The ancilla $p$ generates a term in the master equation that couples subsystems $1$ and $2$. The ancilla $p'$ interacts with subsystem $3$ only, and yields a local term in the master equation. (b): Circuit scheme of the interaction between ancilla $p$ and subsystems $1$ and $2$. If the system is made of qubits, three two-qubit gates are required.} \label{fig:collMod} \end{figure} For the sake of clarity, we begin by assuming that each GKS operator $F_k$ {acts} non-trivially on a single subsystem only, although the MCM is not restricted to this case, {as we will see in the following}. {This assumption is satisfied by a wide range of local and global master equations \cite{Cattaneo2019b}, and corresponds to neglecting environment-mediated many-body interactions between the subsystems.} Under this assumption, the total number of Lindblad operators reduces to $J=M(d^2-1)$, and the index $j=(m,\alpha)$ {can be decomposed into} two additional indices: $m=1,\ldots,M$ labeling the subsystems and $\alpha=1,\ldots,(d^2-1)$ selecting the specific GKS operator acting locally thereon. \textit{Multipartite collision model}.--The procedure to implement the MCM under the assumption of local GKS operators is depicted in Fig.~\ref{fig:collMod}. For the non-diagonal case we can identify the following five steps: \begin{enumerate}[wide, labelwidth=!, labelindent=0pt] \item For each pair of GKS operators $F_{m,\alpha}$ and $F_{m',\alpha'}$ appearing in Eq.~\eqref{eqn:LindbladNoDiag}, consider an independent ancillary qubit of the environment labeled by {$p=(m,\alpha,m',\alpha')$}, and construct the sequence of local elementary subsystem-ancilla interactions given by: \begin{equation} \label{eqn:pairEv} U_p(\Delta t)=U_p^{(m,\alpha)}(\Delta t/2)U_p^{(m',\alpha')}(\Delta t)U_p^{(m,\alpha)}(\Delta t/2), \end{equation} where {($\hbar=1$)} \begin{equation} \label{eqn:elemInt} U_p^{(m,\alpha)}(\Delta t)=\exp(-i g_I \Delta t H_{I,p}^{(m,\alpha)}).
\end{equation} $H_{I,p}^{(m,\alpha)}=(\lambda_p^{(m,\alpha)}F_{m,\alpha}\sigma_{p}^++h.c.)$, $g_I$ is a fixed constant with the units of energy and $\lambda_p^{(m,\alpha)}$ is a dimensionless parameter we can freely tune. \item Compose all the unitary evolutions associated to each pair of GKS operators into a global unitary operator describing the overall interaction with the environment, choosing freely the order in which we insert the former: \begin{equation} \label{eqn:evComposition} U_I(\Delta t)=\prod_{p\in \mathsf{P}}U_p(\Delta t), \end{equation} where the elements of the set $\mathsf{P}$ are all the possible pairs $(m,\alpha,m',\alpha')$. \item Add a unitary system evolution driven by the dimensionless system Hamiltonian $H_S$ to obtain the final global operator for the simulation of the MCM: \begin{equation} \label{eqn:evSimTot} U_{sim}(\Delta t)=U_S(\Delta t)\circ U_I(\Delta t), \end{equation} with $U_S(\Delta t)=\exp(-i g_S\Delta t H_S)$, where $H_S=\tilde{H}_S/g_S$ and $g_S$ is a fixed constant with the units of energy, defining the order of magnitude of $\tilde{H}_S$. \item Prepare the set of environment {qubits} with $p\in\mathsf{P}$ in an initial separable state $\rho_E(0)=\bigotimes_{p\in\mathsf{P}} \eta_p$, where $\eta_p=c_p\ket{\downarrow}_p\bra{\downarrow}+(1-c_p)\ket{\uparrow}_p\bra{\uparrow}$, with $0\leq c_p\leq 1$, is a diagonal state in the basis of $\sigma_{p}^z$. \item Apply a single step of the MCM on the system state $\rho_S$ as the quantum map: \begin{equation} \label{eqn:quantumMap} \phi_{\Delta t}[\rho_S]=\Tr_E\left[U_{sim}(\Delta t)\rho_S\otimes\rho_E(0)U_{sim}^\dagger(\Delta t)\right], \end{equation} where the trace over the environment $E$ includes the trace over each environment ancilla with $p\in\mathsf{P}$. 
\end{enumerate} We show in the Supplemental Material \cite{Supplemental} that under certain requirements the dynamics generated by the MCM corresponds to the one driven by a general GKLS master equation~\eqref{eqn:LindbladNoDiag}. Specifically, we follow the standard derivation of a collision model \cite{Lorenzo2017}: we assume the limit of small timestep $\Delta t\rightarrow 0^+$, with $g_S\ll g_I \ll \Delta t^{-1}$, and $\lim_{\Delta t\rightarrow 0^+} g_I^2\Delta t=\gamma$, where $\gamma$ is a finite energy constant. For simplicity, the coefficients $\lambda_p^{(m,\alpha)}$ in Eq.~\eqref{eqn:elemInt} are taken to be of order $O(1)$. Under the above assumptions, the evolution generated by a single application of the quantum map corresponds to: \begin{equation} \label{eqn:approxMap} \phi_{\Delta t}=\mathbb{I}+\Delta t \mathcal{L}+O(\Delta t^2), \end{equation} where the Liouvillian superoperator reads \cite{Supplemental}: \begin{equation} \label{eqn:Liouvillian} \begin{split} \mathcal{L}[\rho_S]=&-i[\tilde{H}_S,\rho_S]+\sum_{p\in\mathsf{P}}\left(\gamma_p^\downarrow \mathcal{D}_{F_{m,\alpha},F_{m',\alpha'}}[\rho_S] \right.\\ &+\left. \gamma_p^\uparrow \mathcal{D}_{F_{m,\alpha}^\dagger,F_{m',\alpha'}^\dagger}[\rho_S] +h.c.\right).
\end{split} \end{equation} The coefficients are: \begin{equation} \label{eqn:coefficients} \begin{split} &\gamma_p^\downarrow=\begin{cases} \gamma c_p \lambda_p^{(m,\alpha)}(\lambda_p^{(m',\alpha')})^*&\textnormal{ if }m\neq m'\textnormal{ or }\alpha\neq\alpha'\\ \gamma\sum_{\bar{p}} c_{\bar{p}} \abs{\lambda_{\bar{p}}^{(m,\alpha)}}^2 &\textnormal{ otherwise}\\ \end{cases}\\ &\gamma_p^\uparrow=\begin{cases} \gamma(1-c_p) (\lambda_p^{(m,\alpha)})^*\lambda_p^{(m',\alpha')}&\textnormal{ if }m\neq m'\textnormal{ or }\alpha\neq\alpha'\\ \gamma\sum_{\bar{p}} (1-c_{\bar{p}}) \abs{\lambda_{\bar{p}}^{(m,\alpha)}}^2 &\textnormal{ otherwise},\\ \end{cases}\\ \end{split} \end{equation} with summation over all the unordered pairs of GKS operators $\bar{p}=(m,\alpha,\bar{m},\bar{\alpha})$. These coefficients give rise to a semipositive Kossakowski matrix, i.e. the master equation~\eqref{eqn:Liouvillian} is already in GKLS form. {Eq.~\eqref{eqn:Liouvillian} also contains all the terms associated to the adjoint GKS operators with Kossakowski matrix $\gamma_p^{\uparrow}$, that can be removed by setting $c_p=1$ $\forall p$ (i.e. by preparing each ancilla in the ground state).} Given the freedom in the choice of $\lambda_{p}^{(m,\alpha)}$ and $\lambda_{p}^{(m',\alpha')}$ in the Hamiltonian of each elementary subsystem-ancilla interaction introduced in Eq.~\eqref{eqn:elemInt}, we can engineer $\gamma_p^{\downarrow}$ in order to reproduce any Kossakowski matrix for the GKS operators of the master equation~\eqref{eqn:Liouvillian}, and therefore any non-diagonal GKLS master equation~\eqref{eqn:LindbladNoDiag} {with effective Hamiltonian $\tilde{H}_S$}. We can therefore conclude that repeated rapid applications of the MCM simulate the quantum semigroup dynamics driven by any Liouvillian $\mathcal{L}$: \begin{equation} \label{eqn:semigroupEv} \lim_{\Delta t\rightarrow 0^+} \left(\phi_{\Delta t}\right)^n=\exp\mathcal{L}t,\textnormal{ with }t=n\Delta t. 
\end{equation} {This is our first major result.} For a small but finite $\Delta t$, the MCM reproduces the open dynamics only for discrete times $t=n\Delta t$, where the resolution given by $\Delta t$ can be thought of as the \textit{coarse-graining} of the master equation \cite{Farina2019}. Finally, it is not always necessary to take one ancilla for each pair of jump operators. In certain scenarios we may rely on a simpler version of the MCM that requires a smaller number of resources \cite{Supplemental}. The collision scheme introduced above is particularly useful in situations where one has to apply the MCM to a symbolic GKLS master equation that cannot be diagonalized analytically. In all other cases, the MCM realizes the diagonal form of the GKLS master equation by following the same lines described above, with the prescription that we just need one ancillary qubit for each Lindblad operator $L_k$. Indeed, under the assumption of local GKS operators, we can write $L_k=\sum_{m=1}^M \tilde{F}_{m}^{(k)}$, {where $\tilde{F}_{m}^{(k)}=\sum_{\alpha=1}^{d^2-1} C_{\alpha k} F_{m,\alpha}$ is a local sum of GKS operators}, and the evolution in Eq.~\eqref{eqn:pairEv} is replaced by the sequence of elementary interactions \begin{equation} \label{eqn:diagU} U_k(\Delta t)=\prod_{m=1}^M U_k^{(M-m+1)}(\Delta t/2)\prod_{m'=1}^M U_k^{(m')}(\Delta t/2), \end{equation} and $U_k^{(m)}(\Delta t)=\exp [-i g_I \Delta t (\lambda_k \tilde{F}_m^{(k)}\sigma_k^+ + h.c.)] $, so that $\Gamma_k=\lim_{\Delta t\rightarrow 0^+} g_I^2 \Delta t \abs{\lambda_k}^2 $ is the decay rate of the $k$th Lindblad operator. Correspondingly, the product in the global unitary operator for the interaction with the environment in Eq.~\eqref{eqn:evComposition} runs over $k=1,\ldots,J$ instead of the pairs $p\in\mathsf{P}$. \textit{Temperature}.--Note that a suitable engineering of the parameters of the MCM allows for the simulation of any thermal bath at any (even negative) temperature. 
For instance, to mimic a single thermal bath at temperature $T$, one can use a single ancilla prepared in a thermal state at temperature $T$, and the strength of the decay rates can be engineered by tuning the parameters $\lambda_p^{(m,\alpha)}$ as a function of $T$ \cite{Supplemental}. Our model also allows for energy-nonconserving elementary interactions (e.g. with counter-rotating terms such as $a^\dagger\sigma_p^++h.c.$ for a bosonic mode $a$ and a qubit ancilla labeled by $p$). This may generate squeezing-like terms, which corresponds to the ancillas not having the same temperature as the effective bath, and any complex scenario with multiple baths can be realized. This engineering overcomes the physical constraints of previous open system quantum simulations based on qubit ancillas \cite{Wang2011}. \textit{Extension to many-body GKS operators and time-dependent semigroups}.--The MCM also works in the case of many-body GKS operators that cannot be trivially decomposed into single subsystem-ancilla interactions {(e.g. when a GKS operator is written as $F_j=\sigma_1^-\sigma_2^+$ \cite{Manzano2019})}. In this scenario, we will have an elementary collision in Eq.~\eqref{eqn:elemInt} with Hamiltonian $H_{I,p}^{(j)}=\lambda_p^{(j)}F_j\sigma_{p}^++h.c.$, where $F_j$ acts non-trivially on more than one subsystem, and therefore cannot be represented by a single two-qubit gate on a quantum computer. Its action may be implemented by multi-qubit gates, as already done in quantum simulation of open systems \cite{Barreiro2011}, or by a decomposition in terms of two-qubit gates \cite{nielsenchuang}. 
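As an illustration of the scheme, a single collision step of Eqs.~\eqref{eqn:pairEv}--\eqref{eqn:quantumMap} can be sketched numerically for two qubit subsystems with local GKS operators $F_m=\sigma_m^-$ and one ancilla prepared in a diagonal state. This is a toy check of trace preservation and of the weak excitation transfer per collision, not a full simulation of the MCM; all parameter values are illustrative:

```python
import numpy as np

sp = np.array([[0, 0], [1, 0]], dtype=complex)   # sigma^+ : |0> -> |1>
sm = sp.conj().T                                  # sigma^-
I2 = np.eye(2, dtype=complex)

def kron(*ops):
    out = np.eye(1, dtype=complex)
    for op in ops:
        out = np.kron(out, op)
    return out

def expmi(H, tau):
    """exp(-i*tau*H) for Hermitian H, via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * tau * w)) @ V.conj().T

g_I, dt = 1.0, 1e-3            # illustrative values; g_I**2 * dt -> gamma
lam1, lam2 = 1.0, 0.5          # tunable weights lambda_p^{(m,alpha)}

# elementary collisions: subsystem m couples to the ancilla (last tensor factor)
H1 = lam1 * kron(sm, I2, sp); H1 += H1.conj().T   # H_{I,p}^{(1)}
H2 = lam2 * kron(I2, sm, sp); H2 += H2.conj().T   # H_{I,p}^{(2)}

# symmetric composition U_p = U^{(1)}(dt/2) U^{(2)}(dt) U^{(1)}(dt/2)
U_p = expmi(H1, g_I * dt / 2) @ expmi(H2, g_I * dt) @ expmi(H1, g_I * dt / 2)

def collision_step(rho_S, c_p=1.0):
    """One MCM step: attach a fresh ancilla, evolve unitarily, trace it out."""
    eta = np.diag([c_p, 1 - c_p]).astype(complex)          # diagonal ancilla state
    rho = U_p @ np.kron(rho_S, eta) @ U_p.conj().T
    return np.einsum('iaja->ij', rho.reshape(4, 2, 4, 2))  # partial trace

rho0 = np.zeros((4, 4), dtype=complex); rho0[3, 3] = 1.0   # |11><11|
rho1 = collision_step(rho0)
```

With the ancilla in its ground state ($c_p=1$), a single step removes a small amount of excitation from the system while preserving the trace, as expected from Eq.~\eqref{eqn:approxMap}.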
In general terms, we may assume to have at our disposal a set of $R$ Hamiltonians for each GKS operator $F_j$ (or Lindblad operator $L_j$ in the diagonal case), $H_r^{(j)}=G_r^{(j)}\sigma_{p}^++h.c.$ ($p$ labels a generic ancilla), that we are able to simulate by elementary multi-qubit gates in our lab, through which we can build any required GKS operator Hamiltonian in Eq.~\eqref{eqn:elemInt} as $H_{I,p}^{(j)}=\sum_{r=1}^R\mu_r^{(p,j)} H_r^{(j)}$. Then, we can simulate the MCM by the decomposition $\exp(-ig_I\Delta tH_{I,p}^{(j)})=\prod_{r=1}^R U_{p,R-r+1}^{(j)}(\Delta t/2)\prod_{r'=1}^R U_{p,r'}^{(j)}(\Delta t/2)+O(g_I^3\Delta t^3)$, with $U_{p,r}^{(j)}(\Delta t)=\exp(-ig_I\Delta t\mu_r^{(p,j)} H_r^{(j)})$, which still brings an error of the order of $O(\Delta t^2)$ in Eq.~\eqref{eqn:approxMap} \cite{Supplemental}. {Note that, if we go back to the condition of local GKS operators,} for simplicity we can assume to be able to directly implement any elementary gate $U_{p}^{(m,\alpha)}$ in the lab, and therefore for the non-diagonal case $R=1$. In the diagonal case, we can interpret $R$ as the number of different elementary subsystem-ancilla interactions in Eq.~\eqref{eqn:diagU}, therefore $R=M$. Finally, extensions to time-dependent semigroups in which the Kossakowski matrix in Eq.~\eqref{eqn:LindbladNoDiag} depends on time, $\gamma_{jk}(t)$ semipositive for any time $t$, are {immediate}: we just need to set a time-dependent parameter $\lambda_p^{(m,\alpha)}(t)$ in the Hamiltonian of Eq.~\eqref{eqn:elemInt}, and to make it vary as a function of $t$. Analogously, we can make the system Hamiltonian depend on time as well, as $H_S(t)$. \textit{Error estimation}.--Previous treatments of the error analysis for a collision model have usually neglected higher-order terms in the Taylor expansion, e.g. see the detailed discussion in Ref.~\cite{Grimmer2016a}. 
Sometimes this may not be accurate, since the infinite series of higher-order terms may bring a non-negligible contribution \cite{Childs2019}. Here, we estimate an analytical error bound for the MCM by keeping all the terms of the infinite Taylor expansion, through a method valid for any collision model and based on Suzuki's higher-order integrators \cite{Suzuki1985}, which can be found in the Supplemental Material \cite{Supplemental}. For the sake of a general description, we compute the error bound without assuming the locality of the GKS operators, and therefore relying on the sets of $R$ many-body Hamiltonians $H_r^{(j)}$ introduced above. To estimate the error bound we employ the $1\rightarrow 1$ superoperator norm $\norm{\mathcal{T}}_{1\rightarrow 1}$ and the operator norm $\norm{A}_\infty$ \footnote{The superoperator norm is defined as \cite{Kliesch2011,Sweke2015} $\norm{\mathcal{T}}_{1\rightarrow 1}=\sup_{\norm{A}_1=1}\norm{\mathcal{T}[A]}_1$, where $\norm{A}_1=\Tr[\sqrt{A^\dagger A}]$ is the trace norm. The operator or infinity norm is defined as $\norm{A}_\infty=\sup_{\norm{v}=1}\norm{Av}$, where $\norm{v}$ is the standard vector norm.}. We can identify four different kinds of error made by approximating the semigroup evolution through the collision model: \begin{description} \item[Global error] $\epsilon_g=\norm{\exp\mathcal{L}t-(\phi_{\Delta t})^n}_{1\rightarrow 1}$, with $\Delta t=t/n$. \item[Single-step error] $\epsilon_s=\norm{\exp\mathcal{L}\Delta t-\phi_{\Delta t}}_{1\rightarrow 1}$. \item[Truncation error] $\epsilon_t=\norm{\exp\mathcal{L}\Delta t-(\mathbb{I}+\Delta t\mathcal{L})}_{1\rightarrow 1}$. \item[Collision error] $\epsilon_c=\norm{\phi_{\Delta t}-(\mathbb{I}+\Delta t\mathcal{L})}_{1\rightarrow 1}$. \end{description} Following \textit{Lemma 2} in Ref.~\cite{Sweke2014}, we have $\epsilon_g\leq n\epsilon_s$, and according to the triangle inequality $\epsilon_s\leq\epsilon_t+\epsilon_c$.
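The global error can be probed on the simplest example, a single qubit undergoing amplitude damping via collisions with ground-state ancillas: there the excited population after $n$ collisions is exactly $\cos^{2n}(g_I\Delta t)$, to be compared with the semigroup value $e^{-\gamma t}$. A toy numerical check (not the general bound) that this error halves when $n$ doubles, consistent with a global error $O(t^2/n)$:

```python
import numpy as np

def collision_vs_semigroup_error(n, t=1.0, gamma=1.0):
    """Error in the excited-state population after n collisions, compared with
    the exact amplitude-damping semigroup value exp(-gamma*t)."""
    dt = t / n
    g = np.sqrt(gamma / dt)           # scaling limit: g_I**2 * dt = gamma
    survival = np.cos(g * dt) ** 2    # per-collision survival of the excited state
    return abs(survival ** n - np.exp(-gamma * t))

errs = {n: collision_vs_semigroup_error(n) for n in (100, 200, 400)}
```

Doubling the number of collisions at fixed $t$ roughly halves the discrepancy, which is the $1/n$ behavior discussed below for the full MCM.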
The latter errors can be bounded as \cite{Supplemental}: \begin{equation} \label{eqn:boundT} \epsilon_t\leq 2e (R\Lambda(1+J R\Lambda)\Delta t)^2,\;2R\Lambda(1+J R\Lambda)\Delta t<1, \end{equation} \begin{equation} \label{eqn:boundC} \epsilon_c\leq \textnormal{pol}_1(\Lambda,\Xi,g_S,\gamma)\Delta t^2+\textnormal{pol}_2(\Lambda,\Xi,g_S,\gamma)\Delta t^3, \end{equation} where $\textnormal{pol}_1$ and $\textnormal{pol}_2$ are polynomial functions of $g_S,\gamma,\Lambda=\max_{r,j,p}(\norm{H_S}_\infty,\norm{\mu_r^{(p,j)} H_r^{(j)}}_\infty)$ and $\Xi$, equal to the total number of different elementary unitary evolutions driven by a single $H_r^{(j)}$ in Eq.~\eqref{eqn:evComposition} (in the case of the MCM for the diagonal master equation, we have $\Xi=R J$, while for the non-diagonal scenario $\Xi=R \abs{\mathsf{P}}$). The exact expressions of $\textnormal{pol}_1$ and $\textnormal{pol}_2$, as well as the above bounds under the assumption of $k$-locality \cite{Kliesch2011}, {are discussed in further detail} in the Supplemental Material \cite{Supplemental}. Here, we just remark that the global error of the MCM follows the behavior: \begin{equation} \label{eqn:globalErrBeh} \epsilon_g=O(n\Delta t^2)=O(t^2/n). \end{equation} This {scaling is optimal} for the error made by simulating an open system dynamics via a general scheme of repeated unitary evolutions \cite{Cleve2017}, and therefore via general collision models. Such a scaling, for instance, is always saturated by the truncation error $\epsilon_t$, which is the same for any model of rapid repeated interactions. {This is our second major result}. \textit{Resource estimation for quantum simulation}.--To address the quantum simulation efficiency of the MCM we assume the $k$-locality of the Liouvillian $\mathcal{L}$, {namely, that} it can be written as a sum of Liouvillians $\mathcal{L}_\sigma$ non-trivially acting on $k$ subsystems only: $\mathcal{L}=\sum_{\sigma=1}^K\mathcal{L}_\sigma$.
This is a standard assumption for quantum simulation on a circuital quantum computer, introduced by Kliesch et al. for open systems \cite{Kliesch2011,Barthel2012}, and first imposed in the seminal paper by Lloyd on Hamiltonian quantum simulation \cite{Lloyd1996}. $K\leq M^k$ is the total number of possible $k$-local terms, which for {large} $M$ goes as $K\sim M^k/k!$ \cite{Supplemental}. We estimate the number of resources focusing on the MCM for the diagonal GKLS master equation only, given that this is certainly the most convenient scheme for the simulation on a quantum computer. We allow for many-body GKS operators, and we count the number of elementary gates driven by the sets of $R$ Hamiltonians $\{H_r^{(\sigma,j)}\}_{r=1}^R$, corresponding to the Lindblad operators $L_{\sigma,j}$ of {each} $\mathcal{L}_\sigma$. {Note that $k$-locality implies $R< d^{2k}$.} Under these assumptions and with $H_S=\sum_{\sigma=1}^K H_S^{(\sigma)}$, the error bound in Eq.~\eqref{eqn:boundC} is conveniently rewritten by substituting $\Lambda\rightarrow\Lambda'=\max_{r,j,\sigma}(\norm{H_S^{(\sigma)}}_\infty,\norm{\mu_r^{(\sigma,j)} H_r^{(\sigma,j)}}_\infty)$, $\Xi\rightarrow \Xi'=K R J_k$ \cite{Supplemental}, where the total number of Lindblad operators for $k$-local Liouvillians is bounded by $J_k\leq d^{2k}-1$. $\Lambda'$ does not increase with the total number of subsystems, while $\Xi'$ scales polynomially with $M$. Moreover, the bound in Eq.~\eqref{eqn:boundT} is multiplied by $K^2$ and modified with $\Lambda\rightarrow\Lambda'$, $J\rightarrow J_k$ as above, thus it scales polynomially with $M$. Therefore, we can set $\epsilon_g\leq f(M)t^2/n$, where $f(M)$ is a polynomial function of the total number of subsystems \cite{Supplemental}. For a single timestep of the MCM, we need one ancilla for each Lindblad operator of each $k$-local Liouvillian. Therefore, we require $K J_k$ ancillas per timestep. 
For the simulation up to time $t$ within a global precision of $\epsilon_g$, we need $ N_A=\left\lceil K\cdot J_k\cdot f(M)t^2/\epsilon_g\right\rceil$ ancillas, which is a polynomial function $\textnormal{poly}(M,t,1/\epsilon_g)$ and therefore provides us with an efficient number of ancillas \footnote{We assume that a new set of ancillas is used at each timestep. If such a large number of ancillas is not available, one may instead reinitialize the set of $K J_k$ ancillas to the state $\rho_E(0)$ before each timestep, and the results of this work would still hold.} for quantum simulation \cite{Kliesch2011,Sweke2015}. We need $2R-1$ elementary quantum gates for each Lindblad operator of a single timestep. Hence, the total number of gates in a single timestep is $(2R-1) K J_k+N_G^{(S)}$, where $N_G^{(S)}$ is the necessary number of gates to simulate the free system evolution $U_S(\Delta t)$ in Eq.~\eqref{eqn:evSimTot}, which is efficient under the required assumptions \cite{Lloyd1996}. Consequently, to simulate the dynamics up to time $t$ with an error no larger than $\epsilon_g$, we need \begin{equation} \label{eqn:totNumG} N_G=\left\lceil ((2R-1)\cdot K \cdot J_k+N_G^{(S)}) f(M)t^2/\epsilon_g\right\rceil \end{equation} gates. Under the condition of local GKS operators, we can substitute $R=k$ in Eq.~\eqref{eqn:totNumG}. Once again, $N_G=\textnormal{poly}(M,t,1/\epsilon_g)$ and therefore the MCM is efficiently simulable on a quantum computer according to the dissipative quantum Church-Turing theorem. {This is our third major result}. The total number of gates scales as $t^2/\epsilon_g$, which is optimal \cite{Cleve2017} for collision models. 
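The resource counts above reduce to simple arithmetic. In the sketch below, $f(M)=M^2$ is an illustrative placeholder for the polynomial prefactor of the error bound (the actual $f(M)$ follows from the bounds in the Supplemental Material), and we take local GKS operators ($R=k$) and $N_G^{(S)}=0$ for simplicity:

```python
import math

# Bookkeeping for the ancilla and gate counts. f(M) = M**2 is an illustrative
# placeholder for the polynomial error-bound prefactor, and N_G^(S) = 0.
def resources(M, k, d, t, eps_g, f=lambda M: M**2, NG_S=0):
    K = math.comb(M, k)        # number of k-local Liouvillian terms (<= M**k)
    Jk = d**(2 * k) - 1        # max Lindblad operators per k-local term
    R = k                      # local GKS operators: R = k gates per Lindblad op
    N_A = math.ceil(K * Jk * f(M) * t**2 / eps_g)
    N_G = math.ceil(((2 * R - 1) * K * Jk + NG_S) * f(M) * t**2 / eps_g)
    return N_A, N_G

N_A, N_G = resources(M=6, k=2, d=2, t=1.0, eps_g=1e-2)
```

Both counts grow polynomially in $M$ and as $t^2/\epsilon_g$, in line with the dissipative quantum Church-Turing theorem.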
\textit{Conclusions}.--We have presented the \textit{multipartite collision model} (MCM), which is able to reproduce any Markovian dynamics (or, more precisely, any divisible dynamical map) of a general system made of $M$ subsystems by means of elementary interactions between each subsystem and a single environment ancilla, which can be efficiently simulated through elementary quantum gates. Furthermore, we have derived an analytical error bound for the simulation of generic semigroup dynamics via MCM, and observed that it displays an optimal scaling. In light of the above findings, we believe that the MCM will play a major role in the study and simulation of multipartite open quantum systems in the near future. {Our results pave the way towards general applications of the collision approach to global master equations, many-body dissipative collective effects like superradiance or synchronization, transport in complex open systems, as well as to a wide range of problems in quantum thermodynamics, such as the study of Landauer's principle in any multipartite system, of composed thermal machines or of the microscopic exchange of energy between subsystems and ancillas. Finally, the efficient simulation of the MCM on a NISQ device is within our reach through currently available technology \cite{Garcia-Perez2020a}.} \textit{Acknowledgments}-- M.C. thanks Rodrigo Mart\'{i}nez-Pe\~{n}a for useful suggestions. This work was supported by CAIB through QUAREC project (PRD2018/47), by the Spanish State Research Agency through projects PID2019-109094GB-C21, and through the Severo Ochoa and Mar\'ia de Maeztu Program for Centers and Units of Excellence in R\&D (MDM-2017-0711). G.L.G. is funded by the Spanish Ministerio de Educación y Formación Profesional/Ministerio de Universidades and co-funded by the University of the Balearic Islands through the Beatriz Galindo program (BG20 /00085). G. D. C. acknowledges support from the UK EPSRC grants EP/S02994X/1 and EP/T026715/1. S.M. 
acknowledges financial support from the Academy of Finland via the Centre of Excellence program (Project no.~336814). \pagebreak \widetext \begin{center} \textbf{\large Supplemental Material: Collision models can efficiently simulate any multipartite Markovian quantum dynamics} \end{center} \setcounter{equation}{0} \setcounter{figure}{0} \setcounter{table}{0} \setcounter{page}{1} \makeatletter \renewcommand{\theequation}{S\arabic{equation}} \section{Previous collision models for multipartite systems} In this section we briefly review some previous schemes of collision models focused on multipartite open systems and decomposable into elementary collisions. Let us start by considering Refs. \cite{Daryanoosh2018,DeChiara2020}, where collision models based on elementary subsystem-ancilla interactions with entangled ancillas have been shown to create correlations. The main limitation with respect to our approach is given by the geometrical constraints on the entangled state of the ancillas, which prevents such models from reproducing a general GKLS master equation. For instance, it cannot properly handle the temperature of a bath (see e.g. the examples in Ref.~\cite{Daryanoosh2018}). \newline \indent In Ref. \cite{Lorenzo2017}, a ``composite collision model'' was introduced; here, the system is partitioned into many auxiliary systems interacting with the environment ancillas. In this model, the total interaction and the master equations are strictly local, and therefore only \textit{local} master equations can be simulated \cite{DeChiara2018a}, as it is not possible to reproduce any global effect \cite{Cattaneo2019b}. A quite general \textit{cascade} collision model for correlated quantum channels has been introduced by Giovannetti and Palma \cite{Giovannetti2012} and has been employed in several interesting applications \cite{Lorenzo2015a,Lorenzo2015,Cusumano2017a,Cusumano2018}. 
The model assumes an ordered set of subsystems where the dynamics of the $m$th subsystem is not influenced by the dynamics of the $m'$th one, with $m'> m$. In general terms, the cascade model cannot reproduce a generic GKLS master equation, unless one applies it several times with permuted series of collisions in order to account for all possible causality dependences, say one for each possible order of the set of subsystems. This appears extremely complex and inefficient, both in terms of resources (ancillas and gates) and in terms of readability of the final one-to-one relation between the parameters of the model and the GKLS coefficients. Moreover, this cascade model does not include the possibility of non-local unitary system evolutions, and is therefore not suitable for the study of generic global master equations. \section{Derivation of the collision model and master equation} \subsection{General derivation of the master equation in the limit of infinitesimal timestep} We will review here the method to derive the master equation of a collision model with internal unitary dynamics of the system, introduced in Ref.~\cite{Lorenzo2017} or analogously in Ref.~\cite{Landi2014}. For simplicity, let us assume a single interaction Hamiltonian $H_I$ describing the system-environment collisions with associated energy constant $g_I$, and the free system Hamiltonian $H_S$ with energy constant $g_S$. Then, the total unitary evolution of the collision model during the timestep $\Delta t$ can be written as ($\hbar=1$ throughout all the supplemental material): \begin{equation} \label{eqn:totEv} U_{sim}(\Delta t)=U_S(\Delta t)\circ U_I(\Delta t),\qquad U_S(\Delta t)=e^{-i g_S \Delta t H_S},\qquad U_I(\Delta t)=e^{-i g_I \Delta t H_I}. \end{equation} The corresponding quantum map reads: \begin{equation} \phi_{\Delta t}[\rho_S]=\Tr_E\left[U_{sim}(\Delta t)\rho_{SE}U_{sim}^\dagger(\Delta t)\right], \end{equation} where $\rho_{SE}=\rho_S\otimes\rho_E(0)$. 
We set $g_S\ll g_I\ll \Delta t^{-1}$. In this limit, we are able to derive a consistent master equation for $\Delta t\rightarrow 0^+$. The condition $g_j\Delta t\rightarrow 0^+$ for all $j$ allows us to write a series expansion of $U_{sim}(\Delta t)$: up to second order, each unitary evolution reads: \begin{equation} U_j(\Delta t)= \mathbb{I}-ig_j\Delta t H_j-\frac{g_j^2\Delta t^2}{2}H_j^2+O(g_j^3\Delta t^3). \end{equation} Let us now expand the quantum map $\phi_{\Delta t}$ using the above equation: $\phi_{\Delta t}\simeq\phi_{\Delta t}^{(0)}+\phi_{\Delta t}^{(1)}+\phi_{\Delta t}^{(2)}$, where $\phi_{\Delta t}^{(k)}$ is of the order of $O((g_j\Delta t)^k)$. We obtain the following terms: \begin{equation} \phi_{\Delta t}^{(0)}[\rho_S]=\rho_S. \end{equation} \begin{equation} \phi_{\Delta t}^{(1)}[\rho_S]=-i g_S\Delta t [H_S,\rho_S]-i g_I\Delta t \Tr_E[[H_I,\rho_{SE}]]. \end{equation} \begin{equation} \begin{split} \phi_{\Delta t}^{(2)}[\rho_S]=&{g_S^2\Delta t^2\mathcal{D}_{H_S}[\rho_S]}+g_I^2\Delta t^2\Tr_E[\mathcal{D}_{H_I}[\rho_{SE}]]\\ &+{g_Sg_I\Delta t^2\Tr_E[H_S\rho_{SE} H_I+H_I\rho_{SE} H_S-H_SH_I\rho_{SE}-\rho_{SE}H_IH_S]}, \end{split} \end{equation} where $\mathcal{D}_{O}[\rho]=O\rho O^\dagger-\frac{1}{2}\{O^\dagger O,\rho\}$. We remove the first-order term proportional to $g_I$ by setting $\Tr_E[[H_I,\rho_{SE}]]=0$, which is the usual requirement in the derivation of the master equation \cite{Cattaneo2019b}. Furthermore, we assume $g_I^2\Delta t \rightarrow \gamma$, where $\gamma$ is a finite constant with units of energy, while $g_S^j\Delta t\rightarrow 0^+$ for all $j$ (i.e. $g_S=O(\gamma)=O(1)$, having fixed a proper energy scale). This can be analogously achieved, for instance, by setting $g_I=O(g_S)=O(1)$, while $H_I\rightarrow H_I/\sqrt{\Delta t}$ \cite{Landi2014}. Given that $g_I\gg g_S$, we can neglect all the terms of the second order depending on the system Hamiltonian. 
Finally, the master equation reads: \begin{equation} \begin{split} \label{eqn:compositeMasterEq} \frac{d\rho_S}{dt}=&\lim_{\Delta t\rightarrow 0^+}\frac{\phi_{\Delta t}[\rho_S]-\rho_S}{\Delta t}=-i g_S [H_S,\rho_S]+\gamma\Tr_E[H_I\rho_{SE} H_I-\frac{1}{2}\{H_I^2,\rho_{SE}\}]+O(\Delta t)+O(g_I \Delta t). \end{split} \end{equation} \subsection{Derivation for the \textit{multipartite collision model}} Consider a pair $p$ of GKS operators $F_{m,\alpha}$, $F_{m',\alpha'}$. Then, the Trotter formula symmetrization \cite{Suzuki1985,Hatano2005} applied to Eq.~\eqref{eqn:pairEv} of the main text leads to: \begin{equation} \label{eqn:suzukiSecOrd} U_p(\Delta t)=U_p^{(m,\alpha)}(\Delta t/2)U_p^{(m',\alpha')}(\Delta t)U_p^{(m,\alpha)}(\Delta t/2)=\exp\{-ig_I\Delta t [(\lambda_p^{(m,\alpha)}F_{m,\alpha}+\lambda_p^{(m',\alpha')}F_{m',\alpha'})\sigma_{p}^++h.c.]\}+O(g_I^3\Delta t^3), \end{equation} where the remainder of the above equation can be obtained from the equality \cite{Hatano2005} $e^{g_I\Delta t A/2}e^{g_I\Delta t B}e^{g_I\Delta t A/2}=e^{g_I\Delta t (A+B)+(g_I\Delta t)^3 R_3+(g_I\Delta t)^5 R_5+\ldots}$, where $R_j$ with $j$ odd and $j\geq 3$ are expressions proportional to $j$ operators $A$ and/or $B$, derived from the Baker-Campbell-Hausdorff formula. Let us now insert all the pairs $p\in\mathsf{P}$ in the product of Eq.~\eqref{eqn:evComposition} of the main text. For simplicity and to recover the notation of the master equation~\eqref{eqn:compositeMasterEq}, we will gather all the terms in a single interaction Hamiltonian written as: \begin{equation} \label{eqn:intHamTot} H_I=\sum_{p\in\mathsf{P}} \left[(\lambda_p^{(m,\alpha)}F_{m,\alpha}+\lambda_p^{(m',\alpha')}F_{m',\alpha'})\sigma_{p}^++h.c.\right], \end{equation} and compute the total evolution in Eq.~\eqref{eqn:evComposition} of the main text as $U_I(\Delta t)=\exp(-ig_I\Delta t H_I)$. 
In general, $\exp(-ig_I\Delta t H_I)\neq\prod_{p\in\mathsf{P}}\exp\{-ig_I\Delta t [(\lambda_p^{(m,\alpha)}F_{m,\alpha}+\lambda_p^{(m',\alpha')}F_{m',\alpha'})\sigma_{p}^++h.c.]\}$, because the system operators of different pairs may not commute, but we will see that this simple $U_I(\Delta t)$ does the job in the master equation up to a suitable order. Let us now derive the master equation in the limit $g_S\ll g_I\ll \Delta t^{-1}$ by inserting Eq.~\eqref{eqn:intHamTot} in Eq.~\eqref{eqn:compositeMasterEq}. The dissipator $\mathcal{D}$ (term proportional to $\gamma$) reads: \begin{equation} \label{eqn:dissipatorOne} \begin{split} \mathcal{D}[\rho_S]=&\gamma\sum_{p\in\mathsf{P}}\sum_{p'\in\mathsf{P}}\Bigg[\Tr_E[\sigma_{p'}^-\sigma_{p}^+\rho_E(0)]\left(\left(\lambda_{p}^{(m,\alpha)}F_{m,\alpha}+\lambda_{p}^{(m',\alpha')} F_{m',\alpha'}\right)\rho_S\left((\lambda_{p'}^{(n,\beta)})^*F_{n,\beta}^\dagger+(\lambda_{p'}^{(n',\beta')})^* F_{n',\beta'}^\dagger\right)\right.\\ &\left.-\frac{1}{2}\left\{\left((\lambda_{p'}^{(n,\beta)})^*F_{n,\beta}^\dagger+(\lambda_{p'}^{(n',\beta')})^* F_{n',\beta'}^\dagger\right)\left(\lambda_{p}^{(m,\alpha)}F_{m,\alpha}+\lambda_{p}^{(m',\alpha')}F_{m',\alpha'}\right),\rho_S \right\}\right)\\ &+\Tr_E[\sigma_{p'}^+\sigma_{p}^-\rho_E(0)]\left(\left((\lambda_{p}^{(m,\alpha)})^*F_{m,\alpha}^\dagger+(\lambda_{p}^{(m',\alpha')})^* F_{m',\alpha'}^\dagger\right)\rho_S\left(\lambda_{p'}^{(n,\beta)}F_{n,\beta}+\lambda_{p'}^{(n',\beta')}F_{n',\beta'}\right)\right.\\ &\left.-\frac{1}{2}\left\{\left(\lambda_{p'}^{(n,\beta)}F_{n,\beta}+\lambda_{p'}^{(n',\beta')}F_{n',\beta'}\right)\left((\lambda_{p}^{(m,\alpha)})^*F_{m,\alpha}^\dagger+(\lambda_{p}^{(m',\alpha')})^* F_{m',\alpha'}^\dagger\right),\rho_S \right\}\right)\Bigg], \end{split} \end{equation} where $p=(m,\alpha,m',\alpha')$, $p'=(n,\beta,n',\beta')$, and we have neglected the terms with double creation or double annihilation of an ancillary qubit excitation because of the following initial state 
choice. As explained in the main text, we choose as initial state of the ancillas $\rho_E(0)=\bigotimes_{p\in\mathsf{P}} \eta_p$, where $\eta_p=c_p\ket{\downarrow}_p\!\bra{\downarrow}+(1-c_p)\ket{\uparrow}_p\!\bra{\uparrow}$, with $0\leq c_p\leq 1$, is a diagonal state in the basis of $\sigma_{p}^z$. Since $\rho_E(0)$ is a separable state of all the ancillas, the autocorrelation functions of the environment are proportional to the Kronecker deltas $\delta_{m,n}\delta_{m',n'}\delta_{\alpha,\beta}\delta_{\alpha',\beta'}$ (or, in short, $\delta_{p,p'}$ for different pairs) and they can be simplified as $\Tr_E[\sigma_{p'}^-\sigma_{p}^+\rho_E(0)]=c_p\delta_{p,p'}$, $\Tr_E[\sigma_{p'}^+\sigma_{p}^-\rho_E(0)]=(1-c_p)\delta_{p,p'}$. Note that the choice of the initial environment state implies that $\Tr_E[\sigma_{p}^+\rho_E(0)]=0$, which ensures that the requirement $\Tr_E[[H_I,\rho_{SE}]]=0$, necessary to derive Eq.~\eqref{eqn:compositeMasterEq}, is satisfied. More generally, $\Tr_E[\underbrace{\sigma_{p}^{+-}\sigma_{p'}^{+-}\sigma_{p''}^{+-}\ldots}_{j\; (\sigma_E^{+-})'s}\rho_E(0)]=0$ if $j$ is odd. This immediately implies that all the terms proportional to $g_I^j$ with odd $j$ in the master equation vanish. Therefore, the remainder $O(g_I\Delta t)$ in Eq.~\eqref{eqn:compositeMasterEq} vanishes for the MCM, and the leading order is $O(\Delta t)$, or the equivalent $O(g_I^2\Delta t^2)=\gamma O(\Delta t)$. Accordingly, the error that the Suzuki-Trotter symmetrization in Eq.~\eqref{eqn:suzukiSecOrd} brings to the master equation is not of the order of $O(g_I^3\Delta t^3)$, but of $O(g_I^4\Delta t^4)=O(\Delta t^2)$ ($O(\Delta t)$ in Eq.~\eqref{eqn:compositeMasterEq}), and equivalently for the decomposition into $R$ elementary multi-qubit gates in a scenario with many-body GKS operators. 
Finally, we understand that deriving the master equation by employing $U_I(\Delta t)$ with Hamiltonian in Eq.~\eqref{eqn:intHamTot} instead of the product in Eq.~\eqref{eqn:evComposition} of the main text is accurate up to an error of the order of $O(g_I^4\Delta t^4)=O(\Delta t^2)$ ($O(\Delta t)$ in Eq.~\eqref{eqn:compositeMasterEq}), since GKS operators associated to different ancillas do not mix in the dissipator of Eq.~\eqref{eqn:dissipatorOne}. This is why the quantum map $\phi_{\Delta t}$ simulates $\mathbb{I}+\Delta t\mathcal{L}$ up to an error of the order of $O(\Delta t^2)$, as claimed in Eq.~\eqref{eqn:approxMap} of the main text. Let us now rewrite the dissipator by highlighting each term, with $p=(m,\alpha,m',\alpha')$: \begin{equation} \label{eqn:dissipatorTwo} \begin{split} \mathcal{D}[\rho_S]=&\gamma\sum_{p\in\mathsf{P}}\;\sum_{(x,r),(y,s)=(m,\alpha),(m',\alpha')}\left[c_{p}\lambda_{p}^{(x,r)}(\lambda_{p}^{(y,s)})^*\left(F_{x,r} \rho_S F_{y,s}^\dagger-\frac{1}{2}\{F_{y,s}^\dagger F_{x,r},\rho_S\}\right)\right.\\ &\left.+(1-c_p)(\lambda_{p}^{(x,r)})^*\lambda_{p}^{(y,s)}\left(F_{x,r}^\dagger \rho_S F_{y,s}-\frac{1}{2}\{F_{y,s} F_{x,r}^\dagger,\rho_S\}\right)\right]. \end{split} \end{equation} Finally, we obtain the master equation as: \begin{equation} \label{eqn:masterEqFin} \begin{split} \frac{d\rho_S}{dt}=&-i[g_S H_S,\rho_S]+\sum_{p\in\mathsf{P}} \left[\gamma_p^\downarrow \left(F_{m,\alpha}\rho_S F_{m',\alpha'}^\dagger-\frac{1}{2}\{F_{m',\alpha'}^\dagger F_{m,\alpha},\rho_S\}\right)+h.c.\right]\\ &+\sum_{p\in\mathsf{P}}\left[ \gamma_p^\uparrow \left(F_{m,\alpha}^\dagger\rho_S F_{m',\alpha'}-\frac{1}{2}\{F_{m',\alpha'} F_{m,\alpha}^\dagger,\rho_S\}\right)+h.c.\right]. 
\end{split} \end{equation} with \begin{equation} \label{eqn:coefficientsBis} \begin{split} &\gamma_p^\downarrow=\begin{cases} \gamma c_p \lambda_p^{(m,\alpha)}(\lambda_p^{(m',\alpha')})^*&\textnormal{ if }m\neq m'\textnormal{ or }\alpha\neq\alpha'\\ \gamma\sum_{\bar{p}} c_{\bar{p}} \abs{\lambda_{\bar{p}}^{(m,\alpha)}}^2 &\textnormal{ otherwise}\\ \end{cases}\\ &\gamma_p^\uparrow=\begin{cases} \gamma(1-c_p) (\lambda_p^{(m,\alpha)})^*\lambda_p^{(m',\alpha')}&\textnormal{ if }m\neq m'\textnormal{ or }\alpha\neq\alpha'\\ \gamma\sum_{\bar{p}} (1-c_{\bar{p}}) \abs{\lambda_{\bar{p}}^{(m,\alpha)}}^2 &\textnormal{ otherwise},\\ \end{cases}\\ \end{split} \end{equation} summing over all the unordered pairs of GKS operators $\bar{p}=(m,\alpha,\bar{m},\bar{\alpha})$. These are Eqs.~\eqref{eqn:Liouvillian} and~\eqref{eqn:coefficients} of the main text. The summation over all the pairs for $m=m'$, $\alpha=\alpha'$ is due to the fact that each interaction $U_p(\Delta t)$ for a pair $p$ of GKS operators also brings a ``self-contribution'' of the form $F_{m,\alpha}\rho_SF_{m,\alpha}^\dagger+\ldots$ in the dissipator. As can be observed in Eq.~\eqref{eqn:dissipatorTwo}, the ``emission'' contribution brought to the master equation by $U_p(\Delta t)$ for a given pair $p$ of GKS operators can be described by a $2\times 2$ matrix $v_p$, with $(v_p)_{11}=c_p \abs{\lambda_p^{(m,\alpha)}}^2$, $(v_p)_{12}=c_p \lambda_p^{(m,\alpha)}(\lambda_p^{(m',\alpha')})^*$, $(v_p)_{21}=c_p (\lambda_p^{(m,\alpha)})^*\lambda_p^{(m',\alpha')}$, $(v_p)_{22}=c_p \abs{\lambda_p^{(m',\alpha')}}^2$. Clearly, $v_p\geq 0$ since it is obtained from an autocorrelation function. The total Kossakowski matrix for the emission (and correspondingly for absorption), described by the coefficients $\gamma_p^\downarrow$, is obtained by summing all the positive semidefinite matrices $v_p$ (extended as sparse $J\times J$ matrices) for all $p\in\mathsf{P}$, therefore it is positive semidefinite as well and the master equation is in GKLS form. 
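The derivation above can be sanity-checked numerically in a minimal instance of our own construction (illustrative parameters): one system qubit, one ancilla prepared in its ground state ($c_p=1$), a single GKS operator $F=\sigma^-$, and $g_I=\sqrt{\gamma/\Delta t}$. The discrete generator $(\phi_{\Delta t}[\rho_S]-\rho_S)/\Delta t$ should then match the GKLS form up to $O(\Delta t)$:

```python
import numpy as np
from scipy.linalg import expm

# One system qubit + one ancilla qubit in |down><down| (c_p = 1): the collision
# map should reproduce -i*gS*[H_S, rho] + gamma*D[sigma^-](rho) up to O(dt).
dt, gamma, gS = 1e-4, 1.0, 0.5
gI = np.sqrt(gamma / dt)
sm = np.array([[0, 0], [1, 0]], dtype=complex)          # sigma^-
sp = sm.conj().T                                        # sigma^+
HS = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

HI = np.kron(sm, sp) + np.kron(sp, sm)                  # system (x) ancilla
U = expm(-1j * dt * gS * np.kron(HS, I2)) @ expm(-1j * dt * gI * HI)

eta = np.diag([0, 1]).astype(complex)                   # ancilla ground state
rho = np.array([[0.7, 0.2 - 0.1j], [0.2 + 0.1j, 0.3]], dtype=complex)

out = (U @ np.kron(rho, eta) @ U.conj().T).reshape(2, 2, 2, 2)
phi = np.einsum('aebe->ab', out)                        # trace out the ancilla

lhs = (phi - rho) / dt                                  # discrete generator
rhs = (-1j * gS * (HS @ rho - rho @ HS)
       + gamma * (sm @ rho @ sp - 0.5 * (sp @ sm @ rho + rho @ sp @ sm)))
```

Shrinking $\Delta t$ further reduces the residual $\mathrm{lhs}-\mathrm{rhs}$ linearly, consistent with the $O(\Delta t)$ remainder of the derivation.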
\section{Remarks on the collision model and examples} \subsection{Shortcuts} In case one is not able to diagonalize a symbolic Kossakowski matrix and has to rely on the non-diagonal MCM, some shortcuts may still reduce the number of required resources: while one ancilla for each pair of GKS operators is necessary to control all the degrees of freedom for the most general non-diagonal GKLS master equation, in many situations we do not need such a large number of them. For instance, let us address the master equation describing the perfectly collective superradiant emission of $M$ identical two-level atoms immersed in a single common bath: \begin{equation} \label{eqn:supMasEq} \frac{d}{dt}\rho_S(t)=-i[H_A+H_{LS},\rho_S(t)]+\sum_{m,m'=1}^M \gamma\left(\sigma_m^-\rho_S(t)\sigma_{m'}^+-\frac{1}{2}\{\sigma_{m'}^+\sigma_m^-,\rho_S(t)\}\right), \end{equation} with $H_A=\sum_{m=1}^M \frac{\omega}{2}\sigma_m^z$, and $H_{LS}=\sum_{m,m'=1}^M s^\downarrow\sigma_{m'}^+\sigma_m^-$ is the Lamb-shift Hamiltonian, which can be simulated by means of two-qubit gates. Then, we can employ a version of the MCM requiring only a single ancilla in the ground state, that couples to each atom through $\sigma_E^+$: let us build the elementary collisions through the two-qubit gates $U^{(m)}(\Delta t)=\exp\{-i g_I\Delta t (\sigma_m^-\sigma_E^++h.c.)\}$, corresponding to Eq.~\eqref{eqn:elemInt} of the main text. Then, we can build the symmetrized unitary operator (with a total number of $2M-1$ gates): \begin{equation} \label{eqn:supU} U_{sup}(\Delta t)=\prod_{m=M}^1U^{(m)}(\Delta t/2)\prod_{m'=1}^MU^{(m')}(\Delta t/2), \end{equation} corresponding to $U_p(\Delta t)$ in Eq.~\eqref{eqn:pairEv} of the main text. 
By running the MCM with $U_I(\Delta t)=U_{sup}(\Delta t)$, $U_S(\Delta t)=\exp(-i g_S\Delta t(H_A+H_{LS})/g_S)$ as in Eq.~\eqref{eqn:totEv} (we assume $\omega=O(g_S)$), with the usual requirement $g_S\ll g_I\ll \Delta t^{-1}$, $g_I^2\Delta t\rightarrow\gamma$, we readily get the master equation~\eqref{eqn:supMasEq}. The same result can be obtained in the case of an inhomogeneous spatial distribution of the atoms, i.e. when the decay rate $\gamma$ is not uniform anymore and we have to describe it through a suitable Kossakowski matrix $\gamma_{mm'}$. In this scenario, we need to introduce a proper weight $\lambda^{(m)}=\xi_m$ in each $U^{(m)}(\Delta t)$. For instance, if each atom couples to the electromagnetic field with a given dimensionless strength $\xi_m$, we will have $\gamma_{mm'}=\gamma\xi_m\xi_{m'}$, and correspondingly we will need to set $U^{(m)}(\Delta t)=\exp\{-i g_I\Delta t (\xi_m\sigma_m^-\sigma_E^++h.c.)\}$. In contrast, in a situation with a chain of $M$ strongly coupled two-level atoms, each one immersed in a local bath, we cannot rely on a single ancilla. Nonetheless, we can describe the master equation of such a scenario as a superposition of $M$ common baths, each one bringing a term as in Eq.~\eqref{eqn:supMasEq} (see for instance Ref.~\cite{Cattaneo2020}). Then, we can select one ancilla for each different bath, and implement the chain of elementary two-qubit gates as in Eq.~\eqref{eqn:supU}, for each of them. We thus obtain a further shortcut of the MCM for the non-diagonal GKLS master equation, which is able to simulate the open dynamics of the atomic chain by employing $M$ ancillas only. \subsection{Temperature} For simplicity, let us consider a common thermal bath of harmonic oscillators collectively acting on two identical qubits with frequency $\omega$, and let us suppose that the bath is at temperature $T$. 
Then, similarly to Eq.~\eqref{eqn:supMasEq}, the master equation reads: \begin{equation} \label{eqn:tempEq} \begin{split} \frac{d}{dt}\rho_S(t)=&-i[H_q+H_{LS},\rho_S(t)]+\underbrace{\sum_{m,m'=1,2} \gamma^\downarrow\left(\sigma_m^-\rho_S(t)\sigma_{m'}^+-\frac{1}{2}\{\sigma_{m'}^+\sigma_m^-,\rho_S(t)\}\right)}_\textnormal{emission}\\ &+\underbrace{\sum_{m,m'=1,2} \gamma^\uparrow\left(\sigma_m^+\rho_S(t)\sigma_{m'}^--\frac{1}{2}\{\sigma_{m'}^-\sigma_m^+,\rho_S(t)\}\right)}_\textnormal{absorption}, \end{split} \end{equation} where $H_q=(\omega/2)(\sigma_1^z+\sigma_2^z)$ is the free Hamiltonian of the qubits, $H_{LS}$ is the Lamb-shift Hamiltonian and the emission and absorption coefficients are given by: \begin{equation} \label{eqn:emisAbs} \gamma^\downarrow=\gamma (N_T(\omega)+1),\qquad \gamma^\uparrow=\gamma N_T(\omega), \end{equation} where $N_T(\omega)=1/(\exp(\beta\omega)-1)$, with $\beta=1/(k_B T)$. Then, the MCM still requires a single ancillary qubit to simulate the master equation~\eqref{eqn:tempEq}, along the same lines as the previous shortcut. Indeed, let us consider a single ancilla with operator $\sigma_E^+$, initialized in the state $\eta_E=c_E\ket{\downarrow}_E\!\bra{\downarrow}+(1-c_E)\ket{\uparrow}_E\!\bra{\uparrow}$, with $c_E=(N_T(\omega)+1)/(2N_T(\omega)+1)$. Then, we build the elementary collision gates as $U^{(m)}(\Delta t)=\exp\{-ig_I\Delta t \lambda( \sigma_m^-\sigma_E^++h.c.)\}$, where $\lambda=\sqrt{2N_T(\omega)+1}$. The total evolution for the system-environment interaction is then built as (equivalent to Eqs.~\eqref{eqn:pairEv} and~\eqref{eqn:evComposition} of the main text): \begin{equation} \label{eqn:totalIntEv} U_I(\Delta t)=U^{(1)}(\Delta t/2)U^{(2)}(\Delta t)U^{(1)}(\Delta t/2), \end{equation} and following the derivation of the MCM we straightforwardly obtain the master equation~\eqref{eqn:tempEq}. 
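The coefficient bookkeeping of this thermal shortcut is easy to verify: with $c_E=(N_T(\omega)+1)/(2N_T(\omega)+1)$ and $\lambda^2=2N_T(\omega)+1$, the MCM rates $\gamma c_E\lambda^2$ and $\gamma(1-c_E)\lambda^2$ reproduce Eq.~\eqref{eqn:emisAbs}, including detailed balance $\gamma^\uparrow/\gamma^\downarrow=e^{-\beta\omega}$. A minimal sketch (with the illustrative convention $k_B=1$):

```python
import math

# Thermal single-ancilla shortcut: ground-state population c_E and coupling
# weight lambda**2 reproduce the rates gamma*(N+1) and gamma*N (kB = 1 here).
def mcm_rates(gamma, omega, T):
    N = 1.0 / (math.exp(omega / T) - 1.0)       # Bose occupation N_T(omega)
    cE = (N + 1) / (2 * N + 1)                  # ancilla ground population
    lam2 = 2 * N + 1                            # weight lambda**2
    return gamma * cE * lam2, gamma * (1 - cE) * lam2

g_down, g_up = mcm_rates(gamma=1.0, omega=1.0, T=0.5)
```

Raising $T$ only changes $c_E$ and $\lambda$, which is why a single reweighted ancilla suffices at any temperature.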
Note that the proper engineering of the parameter $\lambda$ in each elementary interaction plays here a central role: if the temperature increases, for instance, we just have to tune the value of $\lambda$ and to suitably change the temperature of the initial ancillary state. Through this procedure, we can simulate baths at any temperature, even more complex ones. \section{Error bound derivation} For simplicity, let us derive the error bound by assuming the $k$-locality condition, i.e. $\mathcal{L}=\sum_{\sigma=1}^K\mathcal{L}_\sigma$ and each $\mathcal{L}_\sigma$ acts non-trivially on $k$ subsystems only. Each $\mathcal{L}_\sigma$ has at maximum $J_k=d^{2k}-1$ Lindblad operators. The result without $k$-locality can then be recovered by setting $k=M$, $K=1$, $J_k=J$. First of all, let us estimate the maximum number $K$ of possible different $k$-local Liouvillians in the presence of $M$ subsystems. We can evaluate it as the number of $k$-element combinations of $M$ objects, for simplicity without repetition (Liouvillian terms with repetitions can be merged with larger terms without repetition): \begin{equation} \label{eqn:K} K=\frac{M!}{k!(M-k)!}. \end{equation} To perform an accurate resource analysis for quantum computation, it is important to study the behavior of $K$ in the asymptotic limit $M\rightarrow \infty$. Using Stirling's formula, we have for fixed $k$ and $M\rightarrow \infty$: \begin{equation} \begin{split} \frac{M!}{k!(M-k)!}&\sim\frac{\sqrt{2\pi M}M^M e^{-M}}{k!\sqrt{2\pi (M-k)}(M-k)^{(M-k)} e^{-(M-k)}}\sim\frac{M^k}{k!}, \end{split} \end{equation} where in the last step we used $\sqrt{M/(M-k)}\rightarrow 1$ and $M^M/(M-k)^{M-k}=M^k(1-k/M)^{-(M-k)}\rightarrow M^k e^{k}$, which cancels the factor $e^{-k}$. This shows that $K$ behaves polynomially as a function of the number of subsystems (note that we obtain the same asymptotic behavior if we assume combinations with repetitions). Furthermore, for fixed $k$ and $M$ we always have fewer combinations than variations, therefore $K\leq M^k$ for all $M,k$. 
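The polynomial scaling of $K$ is easy to check numerically; the sketch below (with the illustrative choice $k=3$) verifies the bound $K\leq M^k$ and that doubling $M$ multiplies $K$ by approximately $2^k$:

```python
import math

# For fixed k, K = C(M, k) is polynomial in M: K <= M**k, and doubling M
# multiplies K by roughly 2**k.
def K(M, k):
    return math.comb(M, k)

k = 3
growth = [K(2 * M, k) / K(M, k) for M in (10, 100, 1000)]
```

The growth ratios approach $2^3=8$ as $M$ increases, confirming the fixed-$k$ polynomial behavior.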
As discussed in the main text, we employ the $1\rightarrow 1$ superoperator norm to estimate the error bound: $\norm{\mathcal{T}}_{1\rightarrow 1}=\sup_{\norm{A}_1=1}\norm{\mathcal{T}[A]}_1$, where $\norm{A}_1=\Tr[\sqrt{A^\dagger A}]$ is the trace norm. As noticed in Ref.~\cite{Kliesch2011}, this norm does not behave well as the system size increases, since $\norm{A\otimes\mathbb{I}_n}_1=n\norm{A}_1$; nonetheless, we can reduce it to the computation of some operator norms by means of Hölder's inequality: $\norm{AB}_1\leq\norm{A}_1\norm{B}_\infty$. $\norm{A}_\infty=\sup_{\norm{v}=1}\norm{Av}$ is the operator or infinity norm, where $\norm{v}$ is the standard vector norm. The superoperator norm, the trace norm and the operator norm satisfy the submultiplicativity property. Let us now evaluate an error bound for the truncation error. \subsection{Truncation error} We want to evaluate the truncation error $\epsilon_t=\norm{\exp\mathcal{L}\Delta t-(\mathbb{I}+\Delta t\mathcal{L})}_{1\rightarrow 1}$ for the simulation of a generic Liouvillian $\mathcal{L}$. This is basically the second-order remainder of the Taylor expansion of $\exp\mathcal{L}\Delta t$ around zero. Taking inspiration from Ref.~\cite{Childs2018a} (Supp. 
Inf.), we have: \begin{equation} \begin{split} \epsilon_t&=\norm{\exp\mathcal{L}\Delta t-(\mathbb{I}+\Delta t \mathcal{L})}_{1\rightarrow 1}=\norm{\sum_{j=2}^\infty \frac{(\mathcal{L}\Delta t)^j}{j!}}_{1\rightarrow 1}\leq \sum_{j=2}^\infty \frac{(\norm{\mathcal{L}}_{1\rightarrow 1}\Delta t)^j}{j!}\leq \frac{(\norm{\mathcal{L}}_{1\rightarrow 1}\Delta t)^2}{2!}\sum_{j=0}^\infty \frac{(\norm{\mathcal{L}}_{1\rightarrow 1}\Delta t)^j}{j!}, \end{split} \end{equation} therefore we get: \begin{equation} \boxed{\epsilon_t\leq \frac{(\norm{\mathcal{L}}_{1\rightarrow 1}\Delta t)^2}{2}\exp(\norm{\mathcal{L}}_{1\rightarrow 1}\Delta t)\leq e\frac{(\norm{\mathcal{L}}_{1\rightarrow 1}\Delta t)^2}{2},} \end{equation} with the prescription $\norm{\mathcal{L}}_{1\rightarrow 1}\Delta t<1$ as in Ref.~\cite{Berry2007}. Let us finally estimate $\norm{\mathcal{L}}_{1\rightarrow 1}$ for a $k$-local Liouvillian. Using the decomposition $\mathcal{L}=\sum_{\sigma=1}^K\mathcal{L}_\sigma$, we have $\norm{\mathcal{L}}_{1\rightarrow 1}\leq K \norm{\mathcal{L}_{max}}_{1\rightarrow 1}$, where $\norm{\mathcal{L}_{max}}_{1\rightarrow 1}=\max_\sigma\norm{\mathcal{L}_\sigma}_{1\rightarrow 1}$. Since any Liouvillian $\mathcal{L}_\sigma$ is $k$-local, it can be seen as a sparse matrix \cite{Childs2017} acting non-trivially on $k$ subsystems only (out of $M$). The maximal norm does not increase when the number of subsystems increases, therefore the truncation error has a polynomial dependence on $M$ (the exponential factor does not blow up, because in general we impose the condition $\norm{\mathcal{L}}_{1\rightarrow 1}\Delta t< 1$). 
We can further estimate $\norm{\mathcal{L}}_{1\rightarrow 1}$ following the original work on $k$-local Liouvillians \cite{Kliesch2011}, and find: \begin{equation} \begin{split} \norm{\mathcal{L}_\sigma}_{1\rightarrow 1}&=\sup_{\norm{\rho}_1=1}\norm{-i[H_\sigma,\rho]+\sum_{j=1}^{J_k} \left(L_{\sigma,j}\rho L_{\sigma,j}^\dagger-\frac{1}{2}\{L_{\sigma,j}^\dagger L_{\sigma,j},\rho\}\right)}_{1}\\ &\leq 2\norm{H_\sigma}_\infty+2\sum_{j=1}^{J_k}\norm{L_{\sigma,j}}_\infty^2\leq 2(a_\sigma+J_k a_\sigma^2), \end{split} \end{equation} with $a_\sigma=\max_j\{\norm{H_\sigma}_\infty,\norm{L_{\sigma,j}}_\infty\}$, where we have used Hölder's inequality and the fact that $\norm{A}_\infty=\norm{A^\dagger}_\infty$. Consequently, we have: \begin{equation} \label{eqn:truncErrOne} \norm{\mathcal{L}}_{1\rightarrow 1}\leq K\norm{\mathcal{L}_{max}}_{1\rightarrow 1}\leq 2K(a_{max}+J_k a_{max}^2), \end{equation} with $a_{max}=\max_\sigma a_\sigma$. Being a supremum over operator norms of bounded operators, $a_{max}$ does not increase with the number of subsystems. Finally, for simplicity we can write the truncation error bound as: \begin{equation} \label{eqn:truncErr} \boxed {\epsilon_t\leq 2e (K(a_{max}+J_k a_{max}^2)\Delta t)^2,} \end{equation} with the prescription $2K(a_{max}+J_k a_{max}^2)\Delta t<1$ \cite{Berry2007}. We will see later how this expression is related to the result given in Eq.~\eqref{eqn:boundT} of the main text. Note that the derivation of a bound for the truncation error is not restricted to the MCM, but \textit{is valid for any collision model} simulating the semigroup dynamics driven by $\mathcal{L}$. \subsection{Collision error} Let us finally estimate an error bound for the collision error $\epsilon_c=\norm{\phi_{\Delta t}-(\mathbb{I}+\Delta t\mathcal{L})}_{1\rightarrow 1}$, where $\phi_{\Delta t}$ is the MCM quantum map for the simulation of a generic Liouvillian $\mathcal{L}$. 
We will find that its expression is more complex than that of the truncation error. For simplicity, we will evaluate it for the MCM for a GKLS master equation in diagonal form, and then discuss the differences with the non-diagonal form. In contrast, we will not assume the locality of the GKS operators, which may be many-body as well. Let us start by defining the form of the global unitary transformation evolving the collision model (Eq.~\eqref{eqn:evSimTot} of the main text): \begin{equation} \label{eqn:decompositionSim} U_{sim}(\Delta t)=\exp(-i\Delta t g_S H_S)\prod_{\sigma=1}^K\prod_{j=1}^{J_k}\prod_{r=R}^{1} \exp(-ig_I\Delta t/2\mu_r^{(\sigma,j)} H_r^{(\sigma,j)})\prod_{r'=1}^{R} \exp(-ig_I\Delta t/2\mu_{r'}^{(\sigma,j)} H_{r'}^{(\sigma,j)}), \end{equation} where we recall that $\sigma$ labels the different $k$-local Liouvillians, $j$ labels the Lindblad operators inside each Liouvillian and $r$ labels the decomposition of each Lindblad operator into elementary quantum gates available in the lab: $L_{\sigma,j}\sigma_{E_{\sigma,j}}^++h.c.=\sum_{r=1}^R\mu_r^{(\sigma,j)} H_r^{(\sigma,j)}$ (see the related comment about extensions to non-local GKS operators in the main text). The collision error reads: \begin{equation} \begin{split} \epsilon_c=\sup_{\norm{\rho_S}_1=1}\norm{\Tr_E[U_{sim}(\Delta t)\rho_S\otimes\rho_E U_{sim}^\dagger(\Delta t)]-(\mathbb{I}+\Delta t \mathcal{L})}_1=\sup_{\norm{\rho_S}_1=1}\norm{\mathsf{R}_c\left(\Tr_E[U_{sim}(\Delta t)\rho_S\otimes\rho_E U_{sim}^\dagger(\Delta t)]\right)}_1, \end{split} \end{equation} where $\mathsf{R}_c\left(\Tr_E[U_{sim}(\Delta t)\rho_S\otimes\rho_E U_{sim}^\dagger(\Delta t)]\right)$ indicates the remainder of the expansion in Eq.~\eqref{eqn:compositeMasterEq} that is not cancelled out by the subtraction of the first-order expansion of the Lindblad semigroup.
Since the remainder consists of an infinite sum of terms and the partial trace is a linear operation, according to Ref.~\cite{Lidar2008} we can remove the latter: \begin{equation} \begin{split} \norm{\mathsf{R}_c\left(\Tr_E[U_{sim}(\Delta t)\rho_S\otimes\rho_E U_{sim}^\dagger(\Delta t)]\right)}_1&=\norm{\Tr_E\left[\mathsf{R}_c\left(U_{sim}(\Delta t)\rho_S\otimes\rho_E U_{sim}^\dagger(\Delta t)\right)\right]}_1\\ &\leq \norm{\mathsf{R}_c\left(U_{sim}(\Delta t)\rho_S\otimes\rho_E U_{sim}^\dagger(\Delta t)\right)}_1. \end{split} \end{equation} Then, extending a method for Hamiltonian simulation to the framework of open systems \cite{Childs2018a}, we expand the unitary evolutions: \begin{equation} \label{eqn:expansionGen} \begin{split} &\norm{\mathsf{R}_c\left(U_{sim}(\Delta t)\rho_S\otimes\rho_E U_{sim}^\dagger(\Delta t)\right)}_1=\Biggl|\!\Biggl|\sideset{}{'}\sum \prod_{\sigma=1}^K\prod_{j=1}^{J_k}\prod_{r=R}^{1}\prod_{r'=1}^{R}\\ &\frac{(-i \Delta t g_s H_S)^{w_s}}{w_s!}\frac{(-i \Delta t/2 g_I\mu_r^{(\sigma,j)} H_r^{(\sigma,j)})^{w_{i(\sigma,j,r)}}}{w_{i(\sigma,j,r)}!}\frac{(-i \Delta t/2 g_I\mu_{r'}^{(\sigma,j)} H_{r'}^{(\sigma,j)})^{w'_{i(\sigma,j,r')}}}{w'_{i(\sigma,j,r')}!}\rho_S\otimes\rho_E\\ &\frac{(i \Delta t/2 g_I\mu_{r'}^{(\sigma,j)} H_{r'}^{(\sigma,j)})^{u'_{i(\sigma,j,r')}}}{u'_{i(\sigma,j,r')}!}\frac{(i \Delta t/2 g_I\mu_r^{(\sigma,j)} H_r^{(\sigma,j)})^{u_{i(\sigma,j,r)}}}{u_{i(\sigma,j,r)}!}\frac{(i \Delta t g_s H_S)^{u_{s}}}{u_{s}!}\Biggl|\!\Biggl|_1\\ &\leq\sideset{}{'}\sum\prod_{\sigma=1}^K\prod_{j=1}^{J_k}\prod_{r=R}^{1}\prod_{r'=1}^{R}\\ &\frac{( \Delta t g_S \norm{H_S}_\infty)^{w_s}}{w_s!}\frac{( \Delta t/2 g_I \norm{\mu_r^{(\sigma,j)} H_r^{(\sigma,j)}}_\infty)^{w_{i(\sigma,j,r)}}}{w_{i(\sigma,j,r)}!}\frac{( \Delta t/2 g_I \norm{\mu_{r'}^{(\sigma,j)} H_{r'}^{(\sigma,j)}}_\infty)^{w'_{i(\sigma,j,r')}}}{w'_{i(\sigma,j,r')}!}\norm{\rho_S\otimes\rho_E}_1\\ &\frac{(\Delta t/2 g_I \norm{\mu_{r'}^{(\sigma,j)} 
H_{r'}^{(\sigma,j)}}_\infty)^{u'_{i(\sigma,j,r')}}}{u'_{i(\sigma,j,r')}!}\frac{( \Delta t/2 g_I \norm{\mu_{r}^{(\sigma,j)} H_{r}^{(\sigma,j)}}_\infty)^{u_{i(\sigma,j,r)}}}{u_{i(\sigma,j,r)}!}\frac{( \Delta t g_S \norm{H_S}_\infty)^{u_{s}}}{u_{s}!}\\ &\leq \mathsf{R}_c\left(\exp{2\Delta t\left[g_S\sum_{\sigma}\norm{H_S^{(\sigma)}}_\infty+\sum_{\sigma,j,r} g_I \norm{\mu_{r}^{(\sigma,j)} H_{r}^{(\sigma,j)}}_\infty\right]}\right). \end{split} \end{equation} We have introduced the notation $\sideset{}{'}\sum $ to express the sum over all the indices $w_{s},u_{s},w_{i(\sigma,j,r)},w'_{i(\sigma,j,r')},u_{i(\sigma,j,r)},u'_{i(\sigma,j,r')}=1,\ldots,\infty$, such that their possible combinations in the expansion of Eq.~\eqref{eqn:expansionGen} are contained in the remainder $\mathsf{R}_c$. Then, we have employed Hölder's inequality, the submultiplicativity of the norm, the triangle inequality and the fact that \cite{Lidar2008} $\norm{\rho_S\otimes\rho_E}_1=\norm{\rho_S}_1\norm{\rho_E}_1=1$, since $\norm{\rho_S}_1=1$ by definition in the supremum and $\norm{\rho_E}_1=1$ because it is a density matrix. Furthermore, we have decomposed the $k$-local system Hamiltonian as $H_S=\sum_{\sigma=1}^K H_S^{(\sigma)}$. Finally, let us simplify the result by extracting a maximal value: $\Lambda=\max_{\sigma,j,r}\{\norm{H_S^{(\sigma)}}_\infty,\norm{\mu_{r}^{(\sigma,j)} H_{r}^{(\sigma,j)}}_\infty\}$. Then, we have: \begin{equation} \label{eqn:collBis} \boxed{\epsilon_c\leq \mathsf{R}_c\left(\exp{2\Delta t\Lambda(Kg_S+ \Xi g_I)}\right),} \end{equation} where $\Xi=K\cdot J_k\cdot R$. If we employed the MCM for the non-diagonal GKLS form, we would find a product over all the pairs $p\in\mathsf{P}_\sigma$ instead of over all the Lindblad operators for $j=1,\ldots,J_k$ in Eq.~\eqref{eqn:decompositionSim}.
Then, we would obtain the same result as Eq.~\eqref{eqn:collBis} with $\Lambda$ maximized over $p\in\mathsf{P}_\sigma$ and $\Xi=K\cdot\abs{\mathsf{P}}_k\cdot R$ ($\abs{\mathsf{P}}_k=\max_\sigma \abs{\mathsf{P}}_\sigma$). Without assuming the $k$-locality condition, we have the same results without the maximization over $\sigma$, and respectively $\Xi=J\cdot R$ or $\Xi=R\cdot \abs{\mathsf{P}}$, which are the quantities given in the main text. Note that the estimation of the error bound in Eq.~\eqref{eqn:collBis} is once again valid \textit{for any collision model}, which is in general driven by a unitary operator such as the one decomposed in Eq.~\eqref{eqn:decompositionSim} and whose quantum map leads to the expansion treated in Eq.~\eqref{eqn:expansionGen}. The only difference between different collision models lies in the number of elementary gates appearing in the decomposition of Eq.~\eqref{eqn:decompositionSim}, expressed by the constant $\Xi$, and consequently in the maximization through which we obtain $\Lambda$. Finally, let us connect the constant $\Lambda$ to the constant $a_{max}$ introduced in Eq.~\eqref{eqn:truncErrOne} for the truncation error. $a_{max}$ is obtained through a maximization over the Lindblad operators, while $\Lambda$ is obtained through a maximization over all the elementary Hamiltonians $\mu_{r}^{(\sigma,j)} H_{r}^{(\sigma,j)}$ providing the decomposition of each Lindblad operator as $L_{\sigma,j}\sigma_{E_{\sigma,j}}^++h.c.=\sum_{r=1}^R\mu_{r}^{(\sigma,j)} H_{r}^{(\sigma,j)}$. Then, we can write $a_{max}\leq R\cdot\Lambda$, having used $\norm{\sigma_{E_{\sigma,j}}^+}_\infty=1$ and $\norm{A}_\infty=\norm{A\otimes\sigma^++A^\dagger\otimes\sigma^-}_\infty$.
We can prove the latter result by noticing that $\norm{A\otimes\sigma^++A^\dagger\otimes\sigma^-}_\infty^2=\norm{(A\otimes\sigma^++A^\dagger\otimes\sigma^-)^2}_\infty=\norm{AA^\dagger\otimes\ket{\uparrow}\bra{\uparrow}+A^\dagger A\otimes\ket{\downarrow}\!\bra{\downarrow}}_\infty=\max\{\norm{AA^\dagger}_\infty,\norm{A^\dagger A}_\infty\}=\norm{A}_\infty^2$, where the first equality holds because the operator is self-adjoint, and we have used $\norm{A A^\dagger}_\infty=\norm{A}_\infty^2$. Consequently, we can rewrite the error bound for the truncation error Eq.~\eqref{eqn:truncErr} as: \begin{equation} \label{eqn:truncFin} \boxed {\epsilon_t\leq 2e (KR\Lambda(1+J_k R\Lambda)\Delta t)^2,} \end{equation} with $2KR\Lambda(1+J_k R\Lambda)\Delta t<1$, corresponding to Eq.~\eqref{eqn:boundT} of the main text for $K=1$, $J_k=J$. Let us now evaluate an explicit expression for the remainder $\mathsf{R}_c$ according to the discussion in the derivation of the general master equation~\eqref{eqn:compositeMasterEq}. We can recognize three different contributions to $\mathsf{R}_c$: one containing only terms with $g_I$, one containing only terms with $g_S$ and one containing both $g_I$ and $g_S$.
Let us write them as: \begin{equation} \begin{split} \mathsf{R}_c^{(I)}=&\sum_{i=3}^\infty \frac{(2\Xi\Delta t g_I\Lambda)^i}{i!}\leq\frac{(2\Xi\Delta t g_I\Lambda)^3}{3!}\exp(2\Xi\Delta t g_I\Lambda)=\frac{(2\Xi\Lambda)^3\gamma\Delta t^2g_I}{3!}\exp(2\Xi\Delta t g_I\Lambda),\\ \mathsf{R}_c^{(S)}=&\sum_{i=2}^\infty \frac{(2K\Delta t g_S\Lambda)^i}{i!}\leq\frac{(2K\Delta t g_S\Lambda)^2}{2!}\exp(2K\Delta t g_S\Lambda),\\ \mathsf{R}_c^{(SI)}=&\sum_{i=2}^\infty\frac{(2\Delta t \Lambda)^i\sum_{i'=1}^{i-1}\binom{i}{i'}(Kg_S)^{i'}(\Xi g_I)^{i-i'}}{i!}\leq\sum_{i=2}^\infty\frac{(4\Delta t \Lambda)^iKg_S(\Xi g_I)^{i-1}}{i!}\\ \leq & \frac{(4\Lambda)^2 Kg_S \Xi\Delta t^2 g_I}{2!}\exp(4\Xi\Delta t g_I\Lambda),\\ \end{split} \end{equation} where, as discussed in the derivation of the master equation for a generic collision model, $\gamma=\Delta t g_I^2$, and for the last remainder we have used $g_S\ll g_I$ and $\sum_{i'=1}^{i-1}\binom{i}{i'}\leq\sum_{i'=0}^{i}\binom{i}{i'}=2^i$. Finally, we get the result: \begin{equation} \label{eqn:collErr} \boxed{\epsilon_c\leq \frac{(2\Xi\Lambda)^3\gamma\Delta t^2g_I}{3!}\exp(2\Xi\Delta t g_I\Lambda)+\frac{(2 K g_S\Lambda)^2\Delta t^2}{2!}\exp(2\Delta t Kg_S\Lambda)+ \frac{(4\Lambda)^2 Kg_S\Xi \Delta t^2 g_I}{2!}\exp(4\Xi\Delta t g_I\Lambda).} \end{equation} The error bound above is valid \textit{for any collision model} derived through the method discussed at the beginning of the Supplemental Material. Note that the expression in Eq.~\eqref{eqn:collErr} does not contain any exponential function of the number of subsystems, therefore the collision model is efficiently simulable according to the dissipative quantum Church-Turing theorem \cite{Kliesch2011}. However, it is suboptimal in the dependence on $\Delta t$: the first and the third term on the right-hand side of Eq.~\eqref{eqn:collErr} scale as $O(\Delta t^{3/2})$, having used $g_I=\sqrt{\gamma/\Delta t}$.
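The elementary tail estimates used for the remainders above, $\sum_{i\geq k}x^i/i!\leq (x^k/k!)\,e^x$ for $k=2,3$, are easy to confirm numerically (an illustrative sketch; the truncation at 120 terms is far beyond convergence for the values of $x$ sampled):

```python
import math

def tail(x, k, terms=120):
    """Tail sum  sum_{i>=k} x^i / i!  (truncated at `terms`, ample for x <= 2)."""
    return sum(x ** i / math.factorial(i) for i in range(k, terms))

for x in (0.1, 0.5, 1.0, 2.0):
    assert tail(x, 2) <= x ** 2 / 2 * math.exp(x)   # bound used for eps_t and R_c^(S)
    assert tail(x, 3) <= x ** 3 / 6 * math.exp(x)   # bound used for R_c^(I)
```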
As a consequence, the global error shows a behavior (see the discussion in the main text) $\epsilon_g=O(n\Delta t^{3/2})=O(t^{3/2}/\sqrt{n})$, and the number of repetitions of the algorithm is of the order of $n=O(t^3/\epsilon_g^2)$. As discussed in Ref.~\cite{Cleve2017}, ``the best we can do'' in a collision model is likely to have a scaling of $\epsilon_g=O(n\Delta t^2)$. We will now show that such a scaling is recovered when choosing the \textit{multipartite collision model}. From the discussion on the derivation of the MCM leading to Eq.~\eqref{eqn:dissipatorTwo}, if we take as bath operators $\sigma_E^{+-}$ and as initial environment state a density matrix diagonal in the computational basis, then all the terms in the master equation with an odd number of $g_I$'s are removed by the partial trace, and we should not consider them in the above calculations. This means that the remainders are modified as follows: \begin{equation} \begin{split} \mathsf{R}_c^{(I)}=&\sum_{i=2}^\infty \frac{(2\Xi\Delta t g_I\Lambda)^{2i}}{(2i)!}\leq\frac{(2\Xi\Delta t g_I\Lambda)^4}{4!}\cosh(2\Xi\Delta t g_I\Lambda)=\frac{(2\Xi\Lambda)^4\gamma^2\Delta t^2}{4!}\cosh(2\Xi\Delta t g_I\Lambda),\\ \mathsf{R}_c^{(SI)}=&\sum_{i=1}^\infty\frac{(2\Delta t \Lambda)^{2i+1}\sum_{i'=1}^{i}\binom{2i+1}{2i'}(Kg_S)^{2i-2i'+1}(\Xi g_I)^{2i'}}{(2i+1)!}+\sum_{i=2}^\infty\frac{(2\Delta t \Lambda)^{2i}\sum_{i'=1}^{i-1}\binom{2i}{2i'}(Kg_S)^{2i-2i'}(\Xi g_I)^{2i'}}{(2i)!}\\ \leq& Kg_S\sum_{i=1}^\infty\frac{(4\Delta t \Lambda)^{2i+1}(\Xi g_I)^{2i}}{(2i+1)!}+(Kg_S)^2\sum_{i=2}^\infty\frac{(4\Delta t \Lambda)^{2i}(\Xi g_I)^{2i-2}}{(2i)!}\\ \leq& \frac{(4\Lambda)^3 \Xi^2 g_S K\gamma \Delta t^2}{2!}\sinh(4\Xi\Delta t g_I\Lambda)+\frac{(4\Lambda)^4 \Xi^2 (Kg_S)^2 \gamma \Delta t^3}{4!}\cosh(4\Xi\Delta t g_I\Lambda),\\ \end{split} \end{equation} where for simplicity we have bounded the partial binomial sums by the full binomial sum (tighter approximations may be
found). We observe that we have recovered the desired limit of $\epsilon_c=O(\Delta t^2)$, corresponding to the best upper bound for the error of a collision model: for the global error, we have $\epsilon_g=O(n\Delta t^2)$, as claimed in Eq.~\eqref{eqn:globalErrBeh} of the main text. Let us finally provide a simplified expression for the error bound, by requiring $4\Xi\Delta t g_I\Lambda<1$ and $2\Delta t Kg_S\Lambda<1$. These requirements impose only a polynomial trade-off between $\Xi$ and $K$ (which grow with the number of subsystems) and $\Delta t$, therefore the simulation remains efficient. We have: \begin{equation} \label{eqn:collErrFin} \boxed{\epsilon_c\leq \textnormal{pol}_1(\Lambda,\Xi,K,g_S,\gamma)\Delta t^2+\textnormal{pol}_2(\Lambda,\Xi,K,g_S,\gamma)\Delta t^3,} \end{equation} with \begin{equation} \label{eqn:pol1pol2} \begin{split} &\boxed{\textnormal{pol}_1(\Lambda,\Xi,K,g_S,\gamma)= \frac{(2\Xi\Lambda)^4\gamma^2\cosh(1/2)}{4!}+\frac{(4\Lambda)^3 \Xi^2 g_S K\gamma\sinh(1)}{2!}+\frac{e(2 K g_S\Lambda)^2}{2!},}\\ &\boxed{\textnormal{pol}_2(\Lambda,\Xi,K,g_S,\gamma)= \frac{(4\Lambda)^4 \Xi^2 (Kg_S)^2 \gamma \cosh(1)}{4!},} \end{split} \end{equation} which, without assuming the $k$-locality condition ($K=1$), correspond to the functions introduced in Eq.~\eqref{eqn:boundC} of the main text. Finally, let us estimate the polynomial function $f(M)$ appearing in the resource estimation in Eq.~\eqref{eqn:totNumG} of the main text: \begin{equation} \label{eqn:fM} \boxed{f(M)=\frac{(2\Xi\Lambda)^4\gamma^2\cosh(1/2)}{4!}+\frac{(4\Lambda)^3 \Xi^2 g_S K\gamma\sinh(1)}{2!}+\frac{e(2 K g_S\Lambda)^2}{2!}+\frac{(4\Lambda)^4 \Xi^2 (Kg_S)^2 \gamma\Delta t \cosh(1)}{4!}+2e(KR\Lambda(1+J_k R\Lambda))^2.} \end{equation} The dependence on $M$ appears in $K$, as prescribed by Eq.~\eqref{eqn:K}, and in $\Xi=K\cdot J_k\cdot R$. Recalling that $K\leq M^k$, $f(M)$ is a polynomial function of the number of subsystems. \end{document}
\begin{document} \maketitle \begin{abstract} Let $G$ be a real connected Lie group with polynomial volume growth, endowed with its Haar measure $dx$. Given a $C^2$ positive function $M$ on $G$, we give a sufficient condition for an $L^2$ Poincar\'e inequality with respect to the measure $M(x)dx$ to hold on $G$. We then establish a non-local Poincar\'e inequality on $G$ with respect to $M(x)dx$. \end{abstract} \tableofcontents \section{Introduction} Let $G$ be a unimodular connected Lie group endowed with a measure $M(x)\, dx$ where $M \in L^1(G)$ and $dx$ stands for the Haar measure on $G$. By ``unimodular'', we mean that the Haar measure is left and right-invariant. We always assume that $M=e^{-v}$ where $v$ is a $C^2$ function on $G$. If we denote by $\mathcal G$ the Lie algebra of $G$, we consider a family $$\mathbb X= \left \{ X_1,...,X_k \right \}$$ of left-invariant vector fields on $G$ satisfying the H\"ormander condition, i.e. $\mathcal G$ is the Lie algebra generated by the $X_i$'s. A standard metric on $G$, called the Carnot-Carath\'eodory metric, is naturally associated with $\mathbb X$ and is defined as follows: let $\ell : [0,1] \to G$ be an absolutely continuous path. We say that $\ell$ is admissible if there exist measurable functions $a_1,...,a_k : [0,1] \to \mathbb C$ such that, for almost every $t \in [0,1]$, one has $$\ell'(t)=\sum_{i=1}^k a_i(t) X_i(\ell(t)).$$ If $\ell$ is admissible, its length is defined by $$|\ell |= \int_0^1\left(\sum_{i=1}^k |a_i(t)|^2 \right)^{ \frac 12 }\,dt.$$ For all $x,y \in G $, define $d(x,y)$ as the infimum of the lengths of all admissible paths joining $x$ to $y$ (such a path exists by the H\"ormander condition). This distance is left-invariant. For short, we denote by $|x|$ the distance between $e$, the neutral element of the group, and $x$, so that the distance from $x$ to $y$ is equal to $|y^{-1}x|$.
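For orientation, consider the simplest (Euclidean) example, which is standard but not treated specifically in this text: $G=(\mathbb{R}^k,+)$ with $X_i=\partial_{x_i}$. Every absolutely continuous path is then admissible, and one recovers the Euclidean distance:

```latex
$$d(x,y)=\inf_{\ell}\int_0^1\left(\sum_{i=1}^k |a_i(t)|^2\right)^{\frac 12}\,dt
=\inf_{\ell}\int_0^1 \left\vert \ell'(t)\right\vert\,dt=|y^{-1}x|=|x-y|,$$
```

the infimum being attained by the straight segment joining $x$ to $y$.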
For all $r>0$, denote by $B(x,r)$ the open ball in $G$ with respect to the Carnot-Carath\'eodory distance and by $V(r)$ the Haar measure of any ball of radius $r$. There exists $d\in \mathbb{N}^{\ast}$ (called the local dimension of $(G,\mathbb X)$) and $0<c<C$ such that, for all $r\in (0,1)$, $$ cr^d\leq V(r)\leq Cr^d, $$ see \cite{nsw}. When $r>1$, two situations may occur (see \cite{guivarch}): \begin{itemize} \item Either there exist $c,C,D >0$ such that, for all $r>1$, $$c r^D \leq V(r) \leq C r^D$$ where $D$ is called the dimension at infinity of the group (note that, contrary to $d$, $D$ does not depend on $\mathbb X$). The group is then said to have polynomial volume growth. \item Or there exist $c_1,c_2,C_1,C_2 >0$ such that, for all $r>1$, $$c_1 e^{c_2r} \leq V(r) \leq C_1 e^{C_2r}$$ and the group is said to have exponential volume growth. \end{itemize} When $G$ has polynomial volume growth, it is plain to see that there exists $C>0$ such that, for all $r>0$, \begin{equation} \label{homog} V(2r)\leq CV(r), \end{equation} which implies that there exist $C>0$ and $\kappa>0$ such that, for all $r>0$ and all $\theta>1$, \begin{equation} \label{homogiter} V(\theta r)\leq C\theta^{\kappa}V(r). \end{equation} \noindent Denote by $H^1(G,d\mu_M)$ the Sobolev space of functions $f\in L^2(G,d\mu_M)$ such that $X_if\in L^2(G,d\mu_M)$ for all $1\leq i\leq k$. We are interested in $L^2$ Poincar\'e inequalities for the measure $d\mu_M$. In order to state sufficient conditions for such an inequality to hold, we introduce the operator $$L_M f =-M^{-1} \sum_{i=1}^k X_i \Big \{ M X_i f \Big \}$$ for all $f$ such that $$f\in {\mathcal D}(L_M):=\left\{g\in H^1(G,d\mu_M);\ \frac 1{\sqrt{M}} X_i \Big \{ M X_i g \Big \}\in L^2(G,dx),\,\, \forall\, 1\leq i\leq k\right\}.$$ One therefore has, for all $f\in {\mathcal D}(L_M)$ and $g\in H^1(G,d\mu_M)$, $$ \int_{G} L_Mf(x)g(x) d\mu_M(x)=\sum_{i=1}^k \int_{G} X_i f(x)\cdot X_i g(x) d\mu_M(x).
$$ In particular, the operator $L_M$ is symmetric on $L^2(G,d\mu_M)$.\par \noindent Following \cite{bbcg}, we say that a $C^2$ function $W:G\rightarrow \mathbb{R}$ is a Lyapunov function if $W(x)\geq 1$ for all $x\in G$ and there exist constants $\theta>0$, $b\geq 0$ and $R>0$ such that, for all $x\in G$, \begin{equation} \label{lyap} -L_MW(x)\leq -\theta W(x)+b{\bf 1}_{B(e,R)}(x), \end{equation} where, for all $A\subset G$, ${\bf 1}_A$ denotes the characteristic function of $A$. We first claim: \begin{theo} \label{poincmu} Assume that $G$ is unimodular and that there exists a Lyapunov function $W$ on $G$. Then, $d\mu_M$ satisfies the following $L^2$ Poincar\'e inequality: there exists $C>0$ such that, for every function $f\in H^1(G,d\mu_M)$ with $\int_G f(x)d\mu_M(x)=0$, \begin{equation} \label{eqpoincmu} \int_G \left\vert f(x)\right\vert^2 d\mu_M(x)\leq C\sum_{i=1}^k \int_G \left\vert X_if(x)\right\vert^2 d\mu_M(x). \end{equation} \end{theo} Let us give, as a corollary, a sufficient condition on $v$ for (\ref{eqpoincmu}) to hold: \begin{cor} \label{suffpoincmu} Assume that $G$ is unimodular and that there exist constants $a\in (0,1)$, $c>0$ and $R>0$ such that, for all $x\in G$ with $\left\vert x\right\vert>R$, \begin{equation} \label{assumpoinc} a\sum_{i=1}^k \left\vert X_iv(x)\right\vert^2-\sum_{i=1}^k X_i^2v(x)\geq c. \end{equation} Then (\ref{eqpoincmu}) holds. \end{cor} Notice that, if (\ref{assumpoinc}) holds with $a\in \left(0,\frac 12\right)$, then the Poincar\'e inequality (\ref{eqpoincmu}) admits the following self-improvement: \begin{pro} \label{poincimprovedM} Assume that $G$ is unimodular and that there exist constants $c>0$, $R>0$ and $\varepsilon\in (0,1)$ such that, for all $x\in G$, \begin{equation} \label{c} \frac{1-\varepsilon}2\sum_{i=1}^k \left\vert X_iv(x)\right\vert^2 -\sum_{i=1}^k X_i^2v(x)\geq c\mbox{ whenever }\left\vert x\right\vert>R.
\end{equation} Then there exists $C>0$ such that, for every function $f\in H^1(G,d\mu_M)$ such that $\int_G f(x)d\mu_M(x)=0$: \begin{equation} \label{pim} \sum_{i=1}^k \int_{G} \left\vert X_if(x)\right\vert^2d\mu_M(x) \ge C \, \int_{G} \left\vert f(x)\right\vert ^2\left(1+\sum_{i=1}^k \left\vert X_iv(x)\right\vert^2\right)d\mu_M(x) \end{equation} \end{pro} We finally obtain a Poincar\'e inequality for $d\mu_M$ involving a non-local term: \begin{theo} \label{mainth} Let $G$ be a unimodular Lie group with polynomial volume growth. Let $d\mu_M= M dx$ be a measure absolutely continuous with respect to the Haar measure on $G$, where $M=e^{-v} \in L^1(G)$ and $v\in C^2(G)$. Assume that there exist constants $c>0$, $R>0$ and $\varepsilon\in (0,1)$ such that (\ref{c}) holds. Let $\alpha\in (0,2)$. Then there exists $\lambda_\alpha(M)>0$ such that, for any function $f\in {\mathcal D}(G)$ satisfying $\int_{G} f(x) \, d\mu_M(x)=0$, \begin{eqnarray} \label{poincfrac} \iint _{G \times G } \frac{\left\vert f(x)-f(y)\right\vert^2}{V\left(\left\vert y^{-1} x\right\vert\right)\left\vert y^{-1} x\right\vert^{\alpha}} \, dx\, d\mu_M(y) \ge \lambda_\alpha(M) \\ \, \int_{G} \left\vert f(x)\right\vert^2 \left(1+\sum_{i=1}^k \left\vert X_iv(x)\right\vert^2\right)^{\alpha/2} \,d\mu_M(x). \nonumber \end{eqnarray} \end{theo} Note that (\ref{poincfrac}) is an improvement of (\ref{pim}) in terms of fractional non-local quantities. The proof follows the same lines as the paper \cite{MRS}, but we concentrate here on a more geometric context. In order to prove Theorem \ref{mainth}, we need to introduce fractional powers of $L_M$. This is the object of the following developments.
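Before turning to the proofs, here is a standard example (not taken from the text) showing that assumption (\ref{c}) is satisfied by Gaussian-type weights: on $G=(\mathbb{R}^k,+)$ with $X_i=\partial_{x_i}$ and $v(x)=|x|^2/2$, i.e. $M(x)=e^{-|x|^2/2}$, one has $\sum_{i=1}^k\left\vert X_iv(x)\right\vert^2=|x|^2$ and $\sum_{i=1}^k X_i^2v(x)=k$, whence

```latex
$$\frac{1-\varepsilon}2\sum_{i=1}^k \left\vert X_iv(x)\right\vert^2-\sum_{i=1}^k X_i^2v(x)
=\frac{1-\varepsilon}2\,|x|^2-k\;\geq\; c
\quad\mbox{whenever}\quad |x|^2\geq \frac{2(c+k)}{1-\varepsilon},$$
```

so that (\ref{c}) holds with $R:=\sqrt{2(c+k)/(1-\varepsilon)}$.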
Since the operator $L_M$ is symmetric and non-negative on $L^2(G,d\mu_M)$, we can define the fractional power $L_M^{\beta}$ for any $\beta\in (0,1)$ by means of spectral theory.\par \noindent Section \ref{poinc1} is devoted to the proof of Theorem \ref{poincmu} and Corollary \ref{suffpoincmu}. Then, in Section \ref{proofmain}, we check $L^2$ ``off-diagonal'' estimates for the resolvent of $L_M$ and use them to establish Theorem \ref{mainth}. \section{A proof of the Poincar\'e inequality for $d\mu_M$} \label{poinc1} We follow closely the approach of \cite{bbcg}. Recall first that the following $L^2$ local Poincar\'e inequality holds on $G$ for the measure $dx$: for all $R>0$, there exists $C_R>0$ such that, for all $x\in G$, all $r \in (0,R)$, every ball $B:=B(x,r)$ and every function $f\in C^{\infty}(B)$, \begin{equation} \label{poincdx} \int_B \left\vert f(x)-f_B\right\vert^2dx\leq C_R r^2\sum_{i=1}^k \int_B \left\vert X_if(x)\right\vert^2dx, \end{equation} where $f_B:=\frac 1{V(r)}\int_B f(x)dx$. In the Euclidean context, Poincar\'e inequalities for vector fields satisfying H\"ormander conditions were obtained by Jerison in \cite{jerison}. A proof of (\ref{poincdx}) in the case of unimodular Lie groups can be found in \cite{saloff1995parabolic}, but the idea goes back to \cite{varo}. A nice survey on this topic can be found in \cite{sobmetpoinc}. Notice that no global growth assumption on the volume of balls is required for (\ref{poincdx}) to hold. \par \noindent The proof of (\ref{eqpoincmu}) relies on the following inequality: \begin{lem} \label{lempoinc} For every function $f\in H^1(G,d\mu_M)$ on $G$, \begin{equation} \label{firstpart} \int_G \frac{L_MW}{W}(x)f^2(x)d\mu_M(x) \leq \sum_{i=1}^k \int_G \left\vert X_if(x)\right\vert^2d\mu_M(x). \end{equation} \end{lem} {\bf Proof: } Assume first that $f$ is compactly supported on $G$.
Using the definition of $L_M$, one has $$ \begin{array}{lll} \displaystyle \int_G \frac{L_MW}{W}(x)f^2(x)d\mu_M(x) & = & \displaystyle \sum_{i=1}^k \int_G X_i\left(\frac{f^2}W\right)(x) \cdot X_iW(x)d\mu_M(x)\\ & = & \displaystyle 2\sum_{i=1}^k \int_G \frac fW(x) X_if(x)\cdot X_iW(x)d\mu_M(x)\\ & & \displaystyle -\sum_{i=1}^k \int_G \frac{f^2}{W^2}(x) \left\vert X_iW(x)\right\vert^2 d\mu_M(x)\\ & = & \displaystyle \sum_{i=1}^k \int_G \left\vert X_i f(x)\right\vert^2d\mu_M(x) \\ & & \displaystyle -\sum_{i=1}^k \int_G \left\vert X_if-\frac fW X_iW\right\vert^2(x)d\mu_M(x)\\ & \leq & \displaystyle \sum_{i=1}^k \int_G \left\vert X_if(x)\right\vert^2d\mu_M(x). \end{array} $$ Notice that all the previous integrals are finite because of the support condition on $f$. Now, if $f$ is as in Lemma \ref{lempoinc}, consider a nondecreasing sequence of smooth compactly supported functions $\chi_n$ satisfying $$ {\bf 1}_{B(e,nR)} \leq \chi_n\leq 1\mbox{ and } \left\vert X_i\chi_n\right\vert\leq 1\mbox{ for all }1\leq i\leq k. $$ Applying (\ref{firstpart}) to $f\chi_n$ and letting $n$ go to $+\infty$ yields the desired conclusion, by use of the monotone convergence theorem on the left-hand side and the dominated convergence theorem on the right-hand side. \fin\par \noindent Let us now establish (\ref{eqpoincmu}). Let $g$ be a smooth function on $G$ and let $f:=g-c$ on $G$, where $c$ is a constant to be chosen. By assumption (\ref{lyap}), \begin{equation} \label{twoterms} \int_G f^2(x)d\mu_M(x)\leq \int_G f^2(x)\frac{L_MW}{\theta W}(x)d\mu_M(x)+\int_{B(e,R)} f^2(x)\frac{b}{\theta W}(x)d\mu_M(x). \end{equation} \par \noindent Lemma \ref{lempoinc} shows that (\ref{firstpart}) holds. Let us now turn to the second term on the right-hand side of (\ref{twoterms}). Fix $c$ such that $\int_{B(e,R)} f(x)d\mu_M(x)=0$.
By (\ref{poincdx}) applied to $f$ on $B(e,R)$ and the fact that $M$ is bounded from above and below on $B(e,R)$, one has $$ \int_{B(e,R)} f^2(x)d\mu_M(x)\leq CR^2\sum_{i=1}^k \int_{B(e,R)} \left\vert X_if(x)\right\vert^2d\mu_M(x) $$ where the constant $C$ depends on $R$ and $M$. Therefore, using the fact that $W\geq 1$ on $G$, \begin{equation} \label{secondpart} \int_{B(e,R)} f^2(x)\frac{b}{\theta W}(x)d\mu_M(x)\leq CR^2\sum_{i=1}^k \int_{B(e,R)} \left\vert X_if(x)\right\vert^2d\mu_M(x) \end{equation} where the constant $C$ depends on $R, M, \theta$ and $b$. Gathering (\ref{twoterms}), (\ref{firstpart}) and (\ref{secondpart}) yields $$ \int_G (g(x)-c)^2d\mu_M(x)\leq C\sum_{i=1}^k \int_G \left\vert X_ig(x)\right\vert^2d\mu_M(x), $$ which easily implies (\ref{eqpoincmu}) for the function $g$ (with the same dependence for the constant $C$). \fin\par \noindent {\bf Proof of Corollary \ref{suffpoincmu}: } According to Theorem \ref{poincmu}, it is enough to find a Lyapunov function $W$. Define $$ W(x):=e^{\gamma\left(v(x)-\inf_Gv\right)} $$ where $\gamma>0$ will be chosen later. Since $$ -L_MW(x)=\gamma\left(\sum_{i=1}^k X_i^2v(x)-(1-\gamma)\sum_{i=1}^k \left\vert X_iv(x)\right\vert^2\right)W(x), $$ $W$ is a Lyapunov function for $\gamma:=1-a$ because of the assumption on $v$. Indeed, one can take $\theta=c \gamma$ and $b= \max_{B(e,R)} \Big \{-L_M W+\theta W \Big \}$ (recall that $M$ is a $C^2$ function). \fin\par \noindent Let us now prove Proposition \ref{poincimprovedM}. Observe first that, since $v$ is $C^2$ on $G$ and (\ref{c}) holds, there exists $\alpha\in \mathbb{R}$ such that, for all $x\in G$, \begin{equation} \label{alpha} \frac{1-\varepsilon}2\sum_{i=1}^k \left\vert X_iv(x)\right\vert^2 -\sum_{i=1}^k X_i^2v(x)\geq \alpha. \end{equation} Let $f$ be as in the statement of Proposition \ref{poincimprovedM} and let $g:=fM^{\frac 12}$. Observe that, for all $1\leq i\leq k$, $$ X_if=M^{-\frac 12}X_ig-\frac 12 g M^{-\frac 32} X_iM.
$$ Assumption (\ref{alpha}) then yields two positive constants $\beta,\gamma$ such that \begin{multline} \label{estim1} \displaystyle \sum_{i=1}^k \int_{G} \left\vert X_if(x)\right\vert^2 \, d\mu_M(x)= \\ \displaystyle \sum_{i=1}^k \int_{G} \left(\left\vert X_ig(x)\right\vert^2 +\frac 14g^2(x)\left\vert X_i v(x)\right\vert^2+ g(x)X_ig(x)X_i v(x)\right) \, dx\\ = \displaystyle \sum_{i=1}^k \int_{G} \left(\left\vert X_i g(x)\right\vert^2 +\frac 14g^2(x)\left\vert X_i v(x)\right\vert^2+\frac 12 X_i\left(g^2\right)(x)X_iv(x)\right) \, dx\\ \geq \displaystyle \sum_{i=1}^k \int_{G} g^2(x) \left(\frac 14\left\vert X_iv(x)\right\vert^2-\frac 12 X_i^2v(x)\right) \, dx\\ \geq \displaystyle \sum_{i=1}^k \int_{G} f^2(x)\left(\beta \left\vert X_iv(x)\right\vert^2-\gamma\right)d\mu_M(x). \end{multline} The conjunction of (\ref{eqpoincmu}), which holds because of (\ref{c}), and (\ref{estim1}) yields the desired conclusion. \fin\par \section{Proof of Theorem \ref{mainth}} \label{proofmain} We divide the proof into several steps. \subsection{Rewriting the improved Poincar\'e inequality} By the definition of $L_M$, the conclusion of Proposition \ref{poincimprovedM} means, in terms of operators in $L^2(G,d\mu_M)$, that, for some $\lambda>0$, \begin{equation} \label{ineqop} L_M\geq \lambda \mu, \end{equation} where $\mu$ is the multiplication operator by $1+\sum_{i=1}^k \left\vert X_iv\right\vert^2$. Using a functional calculus argument (see \cite{davies}, p.
110), one deduces from (\ref{ineqop}) that, for any $\alpha\in (0,2)$, \[ L_M^{\alpha/2}\geq \lambda^{\alpha/2}\mu^{\alpha/2} \] which implies, thanks to the fact that $L_M^{\alpha/2} = (L_M^{\alpha/4})^2$ and the symmetry of $L_M^{\alpha/4}$ on $L^2(G,d\mu_M)$, that $$ \int_{G} \left\vert f(x)\right\vert^2 \left(1+\sum_{i=1}^k \left\vert X_iv(x)\right\vert^2\right)^{\alpha/2}d\mu_M(x)\leq $$ $$C \int_{G} \left\vert L_M^{\alpha/4}f(x)\right\vert^2 \, d\mu_M(x) = C \left\| L_M^{\alpha/4} f \right\|^2 _{L^2(G,d\mu_M)}. $$ The conclusion of Theorem \ref{mainth} will follow by estimating the quantity $ \left\| L_M^{\alpha/4} f \right\|^2 _{L^2(G,d\mu_M)}.$ \subsection{Off-diagonal $L^2$ estimates for the resolvent of $L_M$} The crucial estimates to derive the desired inequality are some $L^2$ ``off-diagonal'' estimates for the resolvent of $L_M$, in the spirit of \cite{gaff}. This is the object of the following lemma. \begin{lem} \label{off} There exists $C>0$ with the following property: for all closed disjoint subsets $E,F\subset G$ with $\mbox{d}(E,F)=:d>0$, every function $f\in L^2(G,d\mu_M)$ supported in $E$ and all $t>0$, $$ \left\Vert (\mbox{I}+t \, L_M)^{-1}f\right\Vert_{L^2(F,d\mu_M)}+\left\Vert t \, L_M(\mbox{I}+t \, L_M)^{-1}f\right\Vert_{L^2(F,d\mu_M)}\leq $$ $$ 8 \, e^{-C \, \frac{d}{\sqrt t}} \left\Vert f\right\Vert_{L^2(E,d\mu_M)}. $$ \end{lem} \begin{proof} We argue as in \cite{kato}, Lemma 1.1. From the fact that $L_M$ is self-adjoint on $L^2(G,d\mu_M)$, we have \[ \| (L_M-z)^{-1} \|_{L^2(G,d\mu_M)} \le \frac{1}{\mbox{dist}(z,\Sigma(L_M))} \] where $\Sigma(L_M)$ denotes the spectrum of $L_M$, and $z \not \in \Sigma(L_M)$. We then deduce that $(\mbox{I}+t \, L_M)^{-1}$ is bounded with norm at most $1$ for all $t >0$, and it is clearly enough to argue when $0<t<d$. In the following computations, we will make explicit the dependence of the measure $d\mu_M$ in terms of $M$ for the sake of clarity.
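The uniform resolvent bounds used above and below follow from spectral calculus alone: since $L_M$ is self-adjoint and non-negative, for all $t>0$,

```latex
$$\left\Vert (\mbox{I}+t\,L_M)^{-1}\right\Vert_{L^2(G,d\mu_M)}
=\sup_{\lambda\in\Sigma(L_M)}\frac 1{1+t\lambda}\leq 1
\quad\mbox{and}\quad
\left\Vert t\,L_M(\mbox{I}+t\,L_M)^{-1}\right\Vert_{L^2(G,d\mu_M)}
=\sup_{\lambda\in\Sigma(L_M)}\frac {t\lambda}{1+t\lambda}\leq 1,$$
```

since $\Sigma(L_M)\subset [0,+\infty)$.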
Define $u_t=(\mbox{I}+t \, L_M)^{-1}f$, so that, for every function $v\in H^{1}(G,d\mu_M)$, \begin{eqnarray} \label{test} \int_{G} u_t(x) \, v(x) \, M(x) \, dx+ \\ \nonumber t \, \sum_{i=1}^k \int_{G} X_i u_t(x)\cdot X_i v(x) \, M(x) \, dx=\\ \nonumber \int_{G} f(x) \, v(x) \, M(x) \, dx. \end{eqnarray} Fix now a nonnegative function $\eta\in {\mathcal D}(G)$ vanishing on $E$. Since $f$ is supported in $E$, applying (\ref{test}) with $v=\eta^2 \, u_t$ (remember that $u_t\in H^1(G,d\mu_M)$) yields \[ \int_{G} \eta^2(x)\left\vert u_t(x)\right\vert^2 \, M(x) \, dx + t \sum_{i=1}^k\, \int_{G} X_i u_t(x)\cdot X_i (\eta^2u_t) \, M(x) \, dx=0, \] which implies \begin{multline*} \int_{G} \eta^2(x)\left\vert u_t(x)\right\vert^2 \, M(x) \, dx + t \, \int_{G} \eta^2(x)\sum_{i=1}^k \left\vert X_i u_t(x) \right \vert^2 \, M(x) \, dx \\ = -2 \, t \sum_{i=1}^k \, \int_{G} \eta(x) \, u_t(x) \, X_i \eta(x) \cdot X_i u_t(x) \, M(x) \, dx \\ \leq \displaystyle t \, \int_{G} \left\vert u_t(x)\right\vert^2 \sum_{i=1}^k |X_i \eta(x) |^2 \, M(x)\, dx +\\ t \, \int_{G} \eta^2(x)\sum_{i=1}^k \left\vert X_i u_t(x)\right\vert^2 \, M(x) \, dx, \end{multline*} hence \begin{equation} \label{hence} \int_{G} \eta^2(x)\left\vert u_t(x)\right\vert^2 \, M(x) \, dx\leq t \, \int_{G} \left\vert u_t(x)\right\vert^2 \sum_{i=1}^k \left\vert X_i \eta(x)\right\vert^2 \, M(x) \, dx. \end{equation} Let $\zeta$ be a nonnegative smooth function on $G$ such that $\zeta=0$ on $E$ and, for some $\alpha>0$ to be chosen, set $\eta := e^{\alpha \, \zeta}-1$, so that $\eta\geq 0$ and $\eta$ vanishes on $E$. Choosing this particular $\eta$ in \eqref{hence} gives \[ \int_{G} \left\vert e^{\alpha \, \zeta(x)}-1\right\vert^2\left\vert u_t(x)\right\vert^2 \, M(x) \, dx \leq \] \[ \alpha^2 \, t \, \int_{G} \left\vert u_t(x)\right\vert^2 \sum_{i=1}^k \left\vert X_i \zeta(x)\right\vert^2 \, e^{2 \, \alpha \, \zeta(x)} \, M(x) \, dx.
\] Taking $\alpha= 1/(2 \, \sqrt{t} \, \max_i \left\Vert X_i\zeta\right\Vert_{\infty})$, one obtains \[ \int_{G} \left\vert e^{\alpha \, \zeta(x)}-1\right\vert^2\left\vert u_t(x)\right\vert^2 \, M(x) \, dx\leq \frac 14 \, \int_{G} \left\vert u_t(x)\right\vert^2 e^{2\, \alpha \, \zeta(x)} \, M(x) \, dx. \] Using the fact that the norm of $(I+tL_M)^{-1}$ is bounded by $1$ uniformly in $t >0$, this gives \[ \begin{array}{lll} \displaystyle \left\Vert e^{\alpha\zeta} \, u_t\right\Vert_{L^2(G,d\mu_M)} & \leq & \displaystyle \left\Vert \left(e^{\alpha\zeta}-1\right) \, u_t\right\Vert_{L^2(G,d\mu_M)} + \left\Vert u_t\right\Vert_{L^2(G,d\mu_M)} \\ & \leq & \displaystyle \frac 12 \left\Vert e^{\alpha\zeta} \, u_t\right\Vert_{L^2(G,d\mu_M)} + \left\Vert f\right\Vert_{L^2(G,d\mu_M)}, \end{array} \] therefore \[ \begin{array}{lll} \displaystyle \int_{G} \left\vert e^{\alpha \, \zeta(x)}\right\vert^2\left\vert u_t(x)\right\vert^2 \, M(x) \, dx & \leq & \displaystyle 4 \, \int_{G} \left\vert f(x)\right\vert^2 \, M(x) \,dx. \end{array} \] We now choose $\zeta$ such that $\zeta =0$ on $E$ as before and, additionally, $\zeta=1$ on $F$. It can furthermore be chosen with $\max_{i=1,\ldots,k} \left\Vert X_i\zeta\right\Vert_{\infty} \leq C/d$, which yields the desired conclusion for the $L^2$ norm of $(I+tL_M)^{-1}f$ with a factor $4$ in the right-hand side. Since $t \, L_M(\mbox{I}+t \, L_M)^{-1}f=f-(\mbox{I}+t \, L_M)^{-1}f$, the desired inequality with a factor $8$ readily follows. \end{proof} \subsection{Control of $\left\Vert L_M^{\alpha/4}f\right\Vert_{L^2(G,d\mu_M)}$ and conclusion of the proof of Theorem \ref{mainth}} We now come to the heart of the proof of Theorem \ref{mainth}. The following first lemma is a standard quadratic estimate on powers of subelliptic operators. It is based on spectral theory. \begin{lem} \label{quadratic} Let $\alpha\in (0,2)$.
There exists $C>0$ such that, for all $f\in {\mathcal D}(L_M)$, \begin{equation} \label{spectral} \left\Vert L_M^{\alpha/4}f\right\Vert_{L^2(G, d\mu_M )}^2\leq C \, \int_0^{+\infty} t^{-1-\alpha/2} \left\Vert t \, L_M \, (\mbox{I} + t \, L_M)^{-1} f\right\Vert_{L^2(G,d\mu_M)}^2 \, dt. \end{equation} \end{lem} We now come to the desired estimate. \begin{lem} \label{controllalpha} Let $\alpha\in (0,2)$. There exists $C>0$ such that, for all $f\in {\mathcal D}(G)$, $$ \int_0^{\infty} t^{-1-\alpha/2} \left\Vert t \, L_M \, (\mbox{I} + t \, L_M)^{-1} f\right\Vert_{L^2(G,d\mu_M)}^2 \, dt \leq $$ $$C \, \iint_{G \times G} \frac{\left\vert f(x)-f(y)\right\vert^2}{V\left(\left\vert y^{-1}x\right\vert\right)\left\vert y^{-1}x\right\vert^{\alpha}} \, M(x) \, dx \, dy. $$ \end{lem} \begin{proof} Fix $t \in (0, +\infty)$. Following Lemma~\ref{quadratic}, we give an upper bound of $$\left\Vert t \, L_M \, (\mbox{I} + t \, L_M)^{-1} f\right\Vert_{L^2(G,d\mu_M)} ^2$$ involving first-order differences of $f$. Using (\ref{homog}), one can choose a countable family $x_j^t$, $j\in \mathbb{N}$, such that the balls $B\left(x_j^{t},\sqrt{t}\right)$ are pairwise disjoint and \begin{equation} \label{union} G=\bigcup_{j \in \mathbb{N}} B\left(x_j^{t},2\sqrt{t}\right). \end{equation} By Lemma \ref{cardinal} in Appendix A, there exists a constant $\widetilde C>0$ such that, for all $\theta>1$ and all $x\in G$, there are at most $\widetilde C\, \theta^{2\kappa}$ indices $j$ such that $|x^{-1}x_j^t |\leq \theta\sqrt{t}$, where $\kappa$ is given by (\ref{homogiter}).
For fixed $j$, one has $$ t \, L_M \, (\mbox{I} + t\, L_M)^{-1} f= t \, L_M \, (\mbox{I} + t\, L_M)^{-1} \, g^{j,t} $$ where, for all $x\in G$, $$ g^{j,t}(x):=f(x)-m^{j,t} $$ and $m^{j,t}$ is defined by $$ m^{j,t}:=\frac 1{V\left(2\sqrt{t}\right)}\int_{B\left(x_j^{t},2\sqrt{t}\right)} f(y) \, dy. $$ Note that, here, the mean value of $f$ is computed with respect to the Haar measure on $G$. Since (\ref{union}) holds, one clearly has $$ \begin{array}{lll} \displaystyle \left\Vert t \, L_M\, (\mbox{I} + t\, L_M)^{-1} f \right\Vert_{L^2(G,d\mu_M)}^2 & \leq & \displaystyle \sum_{j \in \mathbb{N}} \left\Vert t \, L_M \, (\mbox{I} + t\, L_M)^{-1} f \right\Vert_{L^2\left(B(x_j^t,2\sqrt{t}),d\mu_M\right)}^2\\ & = & \displaystyle \sum_{j \in \mathbb{N}} \left\Vert t\, L_M \, (\mbox{I} + t\, L_M)^{-1} g^{j,t} \right\Vert_{L^2\left(B\left(x_j^t,2\sqrt{t}\right),d\mu_M \right )}^2, \end{array} $$ and we are left with the task of estimating $$ \left\Vert t \, L_M \, (\mbox{I} + t\, L_M)^{-1} g^{j,t}\right\Vert_{L^2\left(B\left(x_j^{t},2\sqrt{t}\right),d\mu_M \right )} ^2. $$ For that purpose, set $$ C_0^{j,t}=B\left(x_j^t, 4\sqrt{t}\right) \ \mbox{ and } \ C_k^{j,t}=B\left(x_j^t,2^{k+2}\sqrt{t}\right)\setminus B\left(x_j^t,2^{k+1}\sqrt{t}\right), \ \forall \, k \ge 1, $$ and $g^{j,t}_k:=g^{j,t} \, {\bf 1}_{C_k^{j,t}}$, $k \ge 0$, where, for any subset $A\subset G$, ${\bf 1}_A$ is the usual characteristic function of $A$.
Since $g^{j,t}=\sum_{k\geq 0} g^{j,t}_k$, one has \begin{eqnarray} \left\Vert t \, L_M \, (\mbox{I} + t\, L_M)^{-1} g^{j,t} \right\Vert_{L^2\left(B\left(x_j^{t},2\sqrt{t}\right),d\mu_M\right)} \leq \\ \sum_{k\geq 0} \left\Vert t \, L_M \, (\mbox{I} + t\, L_M)^{-1} g_k^{j,t} \right\Vert_{L^2\left(B\left(x_j^{t},2\sqrt{t}\right),d\mu_M\right)} \nonumber \end{eqnarray} and, using Lemma \ref{off}, one obtains (for some constants $C,c>0$) \begin{eqnarray} \label{expdecay} \left\Vert t \, L_M \, (\mbox{I} + t\, L_M)^{-1} g^{j,t} \right\Vert_{L^2\left(B\left(x_j^{t},2\sqrt{t}\right),d\mu_M\right)} \leq \\ \nonumber C \, \left( \left\Vert g_0^{j,t}\right\Vert_{L^2(C_0^{j,t},d\mu_M)} +\sum_{k\geq 1} e^{-c \, 2^{k}} \left\Vert g_k^{j,t}\right\Vert_{L^2(C_k^{j,t},d\mu_M)} \right). \end{eqnarray} By the Cauchy-Schwarz inequality, we deduce (for another constant $C'>0$) \begin{eqnarray}\label{expdecaybis} \left\Vert t \, L_M \, (\mbox{I} + t\, L_M)^{-1} g^{j,t} \right\Vert_{L^2\left(B\left(x_j^{t},2\sqrt{t}\right),d\mu_M\right)} ^2 \leq \\ \nonumber C' \, \left( \left\Vert g_0^{j,t}\right\Vert_{L^2(C_0^{j,t},d\mu_M)}^2 +\sum_{k\geq 1} e^{-c \, 2^{k}} \left\Vert g_k^{j,t}\right\Vert_{L^2(C_k^{j,t},d\mu_M)}^2 \right). \end{eqnarray} As a consequence, we have \begin{equation} \label{expdecayter} \begin{array}{lll} \displaystyle \int_0^{\infty} t^{-1-\alpha/2} \left\Vert t \, L_M \, (\mbox{I} + t \, L_M)^{-1} f\right\Vert_{L^2(G,d\mu_M)}^2 \, dt \leq \\ \displaystyle C' \, \int_0^{\infty} t^{-1-\alpha/2} \sum_{j\ge 0} \left\Vert g_0^{j,t}\right\Vert_{L^2(C_0^{j,t},d\mu_M)}^2 dt+ \\ \displaystyle C' \, \int_0^{\infty} t^{-1-\alpha/2} \sum_{k\geq 1} e^{-c \, 2^{k}} \sum_{j \geq 0} \left\Vert g_k^{j,t}\right\Vert_{L^2(C_k^{j,t},d\mu_M)}^2 dt.
\end{array} \end{equation} We claim the following lemma, whose proof is postponed to Appendix B: \begin{lem} \label{estimg} There exists $\bar C>0$ such that, for all $t>0$ and all $j \in \mathbb{N}$: \begin{itemize} \item[{\bf A.}] For the first term: $$\displaystyle \left\Vert g_0^{j,t}\right\Vert_{L^2(C_0^{j,t},d\mu_M)}^2\leq \frac{\bar C}{V(\sqrt{t})} \int_{B\left(x_j^t,4\sqrt{t}\right)} \int_{B\left(x_j^t,4\sqrt{t}\right)} \left\vert f(x)-f(y)\right\vert^2 \, d\mu_M(x) \, dy. $$ \item[{\bf B.}] For all $k\geq 1$, \[ \left\Vert g^{j,t}_k\right\Vert_{L^2(C_k^{j,t},d\mu_M)}^2 \leq \] \[ \frac{\bar C}{V(2^k\sqrt{t})} \int_{x\in B(x^t_j,2^{k+2}\sqrt{t})} \int_{y\in B(x^t_j,2^{k+2}\sqrt{t})} \left\vert f(x)-f(y)\right\vert^2 \, d\mu_M(x)\, dy.\] \end{itemize} \end{lem} We now finish the proof of Lemma \ref{controllalpha}. Using Assertion {\bf A} in Lemma \ref{estimg}, summing up on $j \ge 0$ and integrating over $(0,\infty)$, we get \begin{multline*} \displaystyle \int_0^{\infty} t^{-1-\alpha/2} \sum_{j \ge 0} \left \Vert g_0^{j,t}\right\Vert_{L^2\left(C_0^{j,t},d\mu_M\right)}^2 \, dt = \sum_{j \ge 0} \int_0^{\infty} t^{-1-\alpha/2} \left\Vert g_0^{j,t}\right \Vert_{L^2\left(C_0^{j,t},d\mu_M\right)}^2 \, dt \\ \displaystyle \le \bar C \, \sum_{j \ge 0} \int_0^{\infty} \frac{t^{-1-\frac{\alpha}2}}{V(\sqrt{t})} \left(\int_{B\left(x_j^t,4\sqrt{t}\right)} \int_{B\left(x_j^t,4\sqrt{t}\right)} \left\vert f(x)-f(y)\right\vert^2 \, d\mu_M(x) \, dy\right) \, dt \\ \displaystyle \le \bar C\, \sum_{j \ge 0}\iint_{(x,y)\in G\times G} \left\vert f(x)-f(y)\right\vert^2 M(x)\times \\ \left(\int_{ t\geq \max\left\{\frac{\left\vert x^{-1}x_j^t\right\vert^2}{16}\,;\ \frac{\left\vert y^{-1}x_j^t\right\vert^2}{16}\right\}} \, \frac{t^{-1-\frac{\alpha}2}}{V(\sqrt{t})}dt\right) \, dx \, dy.
\end{multline*} Fubini's theorem now shows $$ \sum_{j \ge 0} \int_{ t\geq \max\left\{\frac{\left\vert x^{-1}x_j^t\right\vert^2}{16}\, ; \ \frac{\left\vert y^{-1}x_j^t\right\vert^2}{16}\right\}} \, \frac{t^{-1-\frac{\alpha}2}}{V(\sqrt{t})} dt = $$ $$ \int_0^{\infty} \frac{t^{-1-\frac{\alpha}2}}{V(\sqrt{t})} \, \sum_{j \ge 0} {\bf 1}_{ \left(\max\left\{ \frac{\left\vert x^{-1}x_j^t\right\vert^2}{16}\, ; \ \frac{\left\vert y^{-1}x_j^t\right\vert^2}{16} \right\},+\infty\right)} (t) \, dt. $$ Observe that, by Lemma \ref{cardinal}, there is a constant $N \in \mathbb{N}$ such that, for all $t>0$, there are at most $N$ indices $j$ such that $\left\vert x^{-1}x_j^t\right\vert^2< 16\, t$ and $ \left\vert y^{-1}x_j^t\right\vert^2<16 \, t$, and for these indices $j$, one has $ \left\vert x^{-1}y\right\vert<8\sqrt{t}$. It therefore follows that $$ \sum_{j \ge 0} {\bf 1}_{\left(\max\left\{ \frac{\left\vert x^{-1}x_j^t\right\vert^2}{16} \, ; \ \frac{\left\vert y^{-1}x_j^t\right\vert^2}{16} \right\},+\infty\right)}(t)\leq N \, {\bf 1}_{ \left(\left\vert x^{-1}y\right\vert^2/64,+\infty\right)}(t), $$ so that, by (\ref{homog}), \begin{multline} \label{intg0} \displaystyle \int_0^{\infty} t^{-1-\alpha/2} \sum_{j} \left\Vert g_0^{j,t}\right\Vert_{L^2\left(C_0^{j,t},d\mu_M\right)}^2 \, dt \\ \leq \bar C \, N \, \iint_{G\times G} \left\vert f(x)-f(y)\right\vert^2M(x) \left( \int_{\left\vert x^{-1}y\right\vert^2/64}^{\infty} \, \frac{t^{-1-\frac{\alpha}2}}{V(\sqrt{t})} \, dt\right) \, dx \, dy \\ \displaystyle \leq \bar C \, N \, \iint_{G\times G} \frac{\left\vert f(x)-f(y)\right\vert^2} {V\left(\left\vert x^{-1}y\right\vert\right)\left\vert x^{-1}y\right\vert^{\alpha}} \, d\mu_M(x) \, dy.
\end{multline} Using now Assertion {\bf B} in Lemma \ref{estimg}, we obtain, for all $k\geq 1$, $$ \begin{array}{l} \displaystyle \int_0^{\infty} t^{-1-\alpha/2} \, \sum_{j \ge 0} \left\Vert g^{j,t}_k\right\Vert_{L^2\left(C_k^{j,t},d\mu_M\right)}^2 \, dt \\ \displaystyle \leq \bar C \, \sum_{j \ge 0} \int_0^{\infty} \frac{t^{-1-\frac{\alpha}2}}{V\left(2^k\sqrt{t}\right)} \, \left(\iint_{ B(x^t_j,2^{k+2}\sqrt{t}) \times B(x^t_j,2^{k+2}\sqrt{t})} \left\vert f(x)-f(y)\right\vert^2 \, M(x) \, dx \, dy\right) \, dt\\ \displaystyle \leq \bar C \, \sum_{j \ge 0} \iint_{x,y\in G} \left\vert f(x)-f(y)\right\vert^2 \, M(x) \times \\ \, \displaystyle \left(\int_0^{\infty} \frac{t^{-1-\frac{\alpha}2}}{V(2^k\sqrt{t})} \, {\bf 1}_{\left( \max\left\{\frac{\left\vert x^{-1}x^t_j\right\vert^2}{4^{k+2}}, \frac{\left\vert y^{-1}x^t_j\right\vert^2}{4^{k+2}}\right\},+\infty\right)} (t) \, dt\right) \, dx \, dy. \end{array} $$ But, given $t>0$ and $x,y\in G$, by Lemma \ref{cardinal} again, there exist at most $\widetilde C \, 2^{2k\kappa}$ indices $j$ such that $$ \left\vert x^{-1}x_j^t\right\vert\leq 2^{k+2}\sqrt{t} \ \mbox{ and } \ \left\vert y^{-1}x_j^t\right\vert\leq 2^{k+2}\sqrt{t}, $$ and for these indices $j$, $\left\vert x^{-1}y\right\vert\leq 2^{k+3}\sqrt{t}$.
As a consequence, \begin{equation} \label{intgk} \begin{array}{lll} \displaystyle \int_0^{\infty} \frac{t^{-1-\frac{\alpha}2}}{V(2^k\sqrt{t})} \, \sum_{j \ge 0} {\bf 1}_{ \left(\max\left\{\frac{\left\vert x^{-1}x^t_j\right\vert^2}{4^{k+2}}, \frac{\left\vert y^{-1}x^t_j\right\vert^2}{4^{k+2}}\right\},+\infty\right)}(t) \, dt \leq \\ \displaystyle \widetilde C \, 2^{2k\kappa} \, \int_{ t\geq \frac{\left\vert x^{-1}y\right\vert^2}{4^{k+3}}} \, \frac{t^{-1-\frac{\alpha}2}}{V(2^k\sqrt{t})} \, dt \leq \\ \displaystyle \widetilde C' \frac{2^{k(2\kappa+\alpha)}} {V\left(\left\vert x^{-1}y\right\vert\right)\left\vert x^{-1}y\right\vert^{\alpha}}, \end{array} \end{equation} for some other constant $\widetilde C' >0$, and therefore $$ \displaystyle \int_0^{\infty} t^{-1-\alpha/2} \sum_{j} \left\Vert g_k^{j,t}\right\Vert_{L^2\left(C_k^{j,t},d\mu_M\right)}^2 \, dt \leq $$ $$ \bar C \, \widetilde C' \, 2^{k(2\kappa+\alpha)} \, \iint_{G\times G} \frac{\left\vert f(x)-f(y)\right\vert^2} {V\left(\left\vert x^{-1}y\right\vert\right)\left\vert x^{-1}y\right\vert^{\alpha}} \, M(x) \, dx \, dy. $$ We can now conclude the proof of Lemma \ref{controllalpha}, using Lemma \ref{quadratic}, (\ref{expdecay}), (\ref{intg0}) and (\ref{intgk}).
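\begin{rem}
The power $2^{k(2\kappa+\alpha)}$ appearing in (\ref{intgk}) can be made transparent in a model case. Assume, for illustration only, that the volume growth is exactly polynomial, $V(\rho)=\rho^{Q}$ for some $Q>0$ (in general, (\ref{homog}) only provides comparisons of this type). Then, writing $r=\left\vert x^{-1}y\right\vert$,
\[
\int_{\frac{r^2}{4^{k+3}}}^{\infty} \frac{t^{-1-\frac{\alpha}2}}{V(2^k\sqrt{t})}\,dt
= 2^{-kQ}\int_{\frac{r^2}{4^{k+3}}}^{\infty} t^{-1-\frac{\alpha+Q}2}\,dt
= \frac{2}{\alpha+Q}\, 2^{-kQ}\left(\frac{r^2}{4^{k+3}}\right)^{-\frac{\alpha+Q}2}
= \frac{C_{\alpha,Q}\, 2^{k\alpha}}{V(r)\, r^{\alpha}},
\]
with $C_{\alpha,Q}=\frac{2}{\alpha+Q}\, 8^{\alpha+Q}$. Together with the counting factor $\widetilde C \, 2^{2k\kappa}$, this accounts for the factor $2^{k(2\kappa+\alpha)}$ in (\ref{intgk}); its exponential growth in $k$ is harmless in the sequel, since $\sum_{k\geq 1} 2^{k(2\kappa+\alpha)} \, e^{-c\,2^{k}}<+\infty$.
\end{rem}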
Reconsidering \eqref{expdecayter}, we have proved: \begin{multline} \displaystyle \int_0^{\infty} t^{-1-\alpha/2} \left\Vert t \, L_M \, (\mbox{I} + t \, L_M)^{-1} f\right\Vert_{L^2(G,d\mu_M)}^2 \, dt \leq \\ \displaystyle C' \, \bar C \, N \, \iint_{G\times G} \frac{\left\vert f(x)-f(y)\right\vert^2} {V\left(\left\vert x^{-1}y\right\vert\right) \left\vert x^{-1}y\right\vert^{\alpha}} \, M(x) \, dx \, dy \\ + \displaystyle \sum_{k \ge 1} C' \, \bar C \, \widetilde C' \, 2^{k(2\kappa+\alpha)} \, e^{-c \, 2^k} \, \iint_{G\times G} \frac{\left\vert f(x)-f(y)\right\vert^2} {V\left(\left\vert x^{-1}y\right\vert\right)\left\vert x^{-1}y\right\vert^{\alpha}} \, M(x) \, dx \, dy \end{multline} and we deduce that $$ \displaystyle \int_0^{\infty} t^{-1-\alpha/2} \left\Vert t \, L_M \, (\mbox{I} + t \, L_M)^{-1} f\right\Vert_{L^2(G,d\mu_M)}^2 \, dt \le $$ $$ C \, \iint_{G\times G} \frac{\left\vert f(x)-f(y)\right\vert^2} {V\left(\left\vert x^{-1}y\right\vert\right)\left\vert x^{-1}y\right\vert^{\alpha}} \, d\mu_M(x) \, dy $$ for some constant $C$ as claimed in the statement. \end{proof} \begin{rem} In the Euclidean context, Strichartz proved in \cite{stri} that, when $0<\alpha<2$, for all $p\in (1,+\infty)$, \begin{equation} \label{compars} \left\Vert (-\Delta)^{\alpha/4}f\right\Vert_{L^p(\mathbb{R}^n)} \leq C_{\alpha,p} \left\Vert S_{\alpha}f\right\Vert_{L^p(\mathbb{R}^n)} \end{equation} where $$ S_{\alpha}f(x)=\left(\int_0^{+\infty} \left(\int_B \left\vert f(x+ry)-f(x)\right\vert dy\right)^2 \frac{dr}{r^{1+\alpha}}\right)^{\frac 12}, $$ and also (see \cite{stein}) \begin{equation} \label{compard} \left\Vert (-\Delta)^{\alpha/4}f\right\Vert_{L^p(\mathbb{R}^n)} \leq C_{\alpha,p} \left\Vert D_{\alpha}f\right\Vert_{L^p(\mathbb{R}^n)} \end{equation} where $$ D_{\alpha}f(x)=\left(\int_{\mathbb{R}^n} \frac{\left\vert f(x+y)-f(x)\right\vert^2}{\left\vert y\right\vert^{n+\alpha}}dy\right)^{\frac 12}.
$$ In \cite{crt}, these inequalities were extended to the setting of a unimodular Lie group endowed with a sub-Laplacian $\Delta$, relying on semigroup techniques and Littlewood-Paley-Stein functionals. In particular, in \cite{crt}, the authors use {\it pointwise} estimates of the kernel of the semigroup generated by $\Delta$. In the present paper, we deal with the operator $L_M$, for which these pointwise estimates are not available, but it turns out that $L^2$ off-diagonal estimates are enough for our purpose. Note that we do not obtain $L^p$ inequalities here. \end{rem} \section{Appendix A: Technical lemma} We prove the following lemma. \begin{lem} \label{cardinal} Let $G$ and the $x_j^t$ be as in the proof of Lemma \ref{controllalpha}. Then there exists a constant $\widetilde C>0$ with the following property: for all $\theta>1$ and all $x\in G$, there are at most $\widetilde C\, \theta^{2\kappa}$ indices $j$ such that $\left\vert x^{-1}x_j^t\right\vert\leq \theta\sqrt{t}$. \end{lem} \noindent {\bf Proof of Lemma~\ref{cardinal}.} The argument is very simple (see \cite{kanai}) and we give it for the sake of completeness.
Let $x\in G$ and set $$I(x):= \left\{j \in \mathbb{N} \, ;\ \left\vert x^{-1}x_j^t\right\vert\leq \theta\sqrt{t}\right\}.$$ Since, for all $j\in I(x)$, $$B\left(x_j^t,\sqrt{t}\right)\subset B\left(x,\left(1+\theta\right)\sqrt{t}\right),$$ and $$ B\left(x,\sqrt{t}\right)\subset B\left(x_j^t,(1+\theta)\sqrt{t}\right),$$ one has, by (\ref{homogiter}) and the fact that the balls $B\left(x_j^t,\sqrt{t}\right)$ are pairwise disjoint, $$ \begin{array}{lll} \displaystyle \left\vert I(x)\right\vert V\left(x,\sqrt{t}\right) & \leq & \displaystyle \sum_{j\in I(x)} V\left(x_j^t,\left(1+\theta\right)\sqrt{t}\right)\\ & \leq & \displaystyle C(1+\theta)^{\kappa} \sum_{j\in I(x)} V\left(x_j^t,\sqrt{t}\right)\\ & \leq & \displaystyle C(1+\theta)^{\kappa} V\left(x,\left(1+\theta\right)\sqrt{t}\right)\\ & \leq & \displaystyle C(1+\theta)^{2\kappa} V\left(x,\sqrt{t}\right) \end{array} $$ and we get the desired conclusion. \fin\par \section{Appendix B: Estimates for $g^{j,t}_k$} We prove Lemma \ref{estimg}. For all $x\in G$, $$ \begin{array}{lll} \displaystyle g_0^{j,t}(x) & = & \displaystyle f(x) -\frac 1{V(2\sqrt{t})} \int_{B\left(x_j^t,2\sqrt{t}\right)} f(y) \, dy\\ & = & \displaystyle \frac 1{V(2\sqrt t)} \int_{B\left(x_j^t,2\sqrt{t}\right)} (f(x)-f(y)) \, dy. \end{array} $$ By the Cauchy-Schwarz inequality and (\ref{homog}), it follows that $$ \left\vert g_0^{j,t}(x)\right\vert^2\leq \frac C{V(\sqrt{t})} \int_{ B\left(x_j^t,4\sqrt{t}\right)} \left\vert f(x)-f(y)\right\vert^2dy. $$ Therefore, $$ \left\Vert g_0^{j,t}\right\Vert_{L^2(C_0^{j,t},d\mu_M)}^2\leq \frac C{V(\sqrt{t})} \int_{ B\left(x_j^t,4\sqrt{t}\right)} \int_{ B\left(x_j^t,4\sqrt{t}\right)} \left\vert f(x)-f(y)\right\vert^2 \, d\mu_M(x) \, dy, $$ which shows Assertion {\bf A}.
We argue similarly for Assertion {\bf B} and obtain $$ \displaystyle \left\Vert g^{j,t}_k\right\Vert_{L^2(C_k^{j,t},d\mu_M)}^2 \leq \displaystyle \frac C{V(2^k\sqrt{t})} \int_{ x\in B(x^t_j,2^{k+2}\sqrt{t})} \int_{ y\in B(x^t_j,2^{k+2}\sqrt{t})} \left\vert f(x)-f(y)\right\vert^2 \, d\mu_M(x) \, dy, $$ which ends the proof. {\em Emmanuel Russ}-- Universit\'e Paul C\'ezanne, LATP,\\ Facult\'e des Sciences et Techniques, Case cour A\\ Avenue Escadrille Normandie-Niemen, F-13397 Marseille, Cedex 20, France et \\ CNRS, LATP, CMI, 39 rue F. Joliot-Curie, F-13453 Marseille Cedex 13, France {\em Yannick Sire}-- Universit\'e Paul C\'ezanne, LATP,\\ Facult\'e des Sciences et Techniques, Case cour A\\ Avenue Escadrille Normandie-Niemen, F-13397 Marseille, Cedex 20, France et \\ CNRS, LATP, CMI, 39 rue F. Joliot-Curie, F-13453 Marseille Cedex 13, France. \end{document}
\begin{document} \title{Generalized Normal Forms of Two-Dimensional Autonomous Systems with a Hamiltonian Unperturbed Part} \author{V.~V.~Basov} \email{[email protected]} \affiliation{Saint Petersburg State University} \author{A.~S.~Vaganyan} \email{[email protected]} \thanks{This research is supported by the Chebyshev Laboratory (Department of Mathematics and Mechanics, St. Petersburg State University) under RF Government grant 11.G34.31.0026.} \altaffiliation{Chebyshev Laboratory, St. Petersburg State University, 14th Line, 29b, Saint Petersburg, 199178 Russia} \affiliation{Saint Petersburg State University} \begin{abstract} Generalized pseudo-Hamiltonian normal forms (GPHNF) and an effective method of obtaining them are introduced for two-dimensional systems of autonomous ODEs with a Hamiltonian quasi-homogeneous unperturbed part of an arbitrary degree. The terms that can be additionally eliminated in a GPHNF are constructively distinguished, and it is shown that after removing them the GPHNF becomes a generalized normal form (GNF). By using the introduced method, all the GNFs are obtained in cases where the unperturbed part of the system is Hamiltonian and has monomial components, which allowed us to generalize some results by Takens, Baider and Sanders, as well as Basov et al. \end{abstract} \keywords{generalized normal forms, Hamiltonian normal forms, resonance equation.} \maketitle \section{Some facts from the theory of normal forms} \subsection{Quasi-homogeneous polynomials and vector fields} Let $x=(x_1,x_2),$ $p=(p_1,p_2)$ and $D=(\partial_1,\partial_2)$ where $x_1,x_2$ are scalar variables, $p_1,p_2$ are nonnegative integers, and $\partial_1=\partial/\partial x_1,$ $\partial_2=\partial/\partial x_2.$ Denote by $\langle\,.\,,\,.\,\rangle$ the standard inner product on vectors, Euclidean or Hermitian depending on the context.
Consider a vector $\gamma=(\gamma_1,\gamma_2)$ with $\gamma_1,\gamma_2\in \mathbb N$ and $\text{GCD}(\gamma_1,\gamma_2)=1,$ called the \textit{weight} of the variable $x,$ and let $\delta=\gamma_1+\gamma_2.$ \begin{definition}[g.~d.] \label{weight}\ \ We call the number $\langle p,\gamma\rangle$ \textit{the generalized degree} of a monomial $x_1^{p_1} x_2^{p_2}.$ \end{definition} \begin{definition}[QHP] \label{QHP}\ \ A polynomial $P(x)$ is called \textit{a quasi-homogeneous polynomial} of g.~d. $k$ with weight $\gamma$ and is denoted by $P^{[k]}_\gamma(x)$ if it is a linear combination of monomials of g.~d. $k.$ \end{definition} \begin{definition}[VQHP]\ \ A vector polynomial ${\mathcal P}(x)=\bigl(P_{1}(x),P_{2}(x)\bigr)$ is called \textit{a vector quasi-homogeneous polynomial} of g.~d. $k$ with weight $\gamma$ and is denoted by ${\mathcal P}_\gamma^{[k]}(x),$ if its components $P_{1}(x),P_{2}(x)$ are QHPs of g.~d. $k+\gamma_1$ and $k+\gamma_2$ respectively. \end{definition} \begin{remark}\ \ Further, we identify each VQHP ${\mathcal P}_\gamma^{[k]}(x)= \bigl(P_{\gamma,1}^{[k+\gamma_1]}(x),P_{\gamma,2}^{[k+\gamma_2]}(x)\bigr)$ with the corresponding vector field, i.~e., define the action of VQHPs on formal power series: $${\mathcal P}_\gamma^{[k]}(f)(x)=P_{\gamma,1}^{[k+\gamma_1]}(x)\,\partial_1 f(x)+P_{\gamma,2}^{[k+\gamma_2]}(x)\,\partial_2 f(x),$$ and define the Lie bracket of two VQHPs ${\mathcal P}_\gamma^{[k]}(x)= \bigl(P_{\gamma,1}^{[k+\gamma_1]}(x), P_{\gamma,2}^{[k+\gamma_2]}(x)\bigr)$ and ${\mathcal Q}_\gamma^{[l]}(x)=\bigl(Q_{\gamma,1}^{[l+\gamma_1]}(x), Q_{\gamma,2}^{[l+\gamma_2]}(x)\bigr):$ $$[{\mathcal P}_\gamma^{[k]}, {\mathcal Q}_\gamma^{[l]}](x)= \bigl({\mathcal P}_{\gamma}^{[k]}(Q_{\gamma,1}^{[l+\gamma_1]})(x)- {\mathcal Q}_\gamma^{[l]}(P_{\gamma,1}^{[k+\gamma_1]})(x),\ {\mathcal P}_{\gamma}^{[k]}(Q_{\gamma,2}^{[l+\gamma_2]})(x)- {\mathcal Q}_\gamma^{[l]}(P_{\gamma,2}^{[k+\gamma_2]})(x)\bigr).$$ \end{remark} \begin{definition}\ \ Given a vector series
$\mathcal P(x)=\sum_{k=0}^\infty \mathcal P_\gamma^{[k]}(x),$ we call the least integer $\chi\ge0$ such that $\mathcal P_\gamma^{[\chi]}\not\equiv0$ \textit{the generalized order} of $\mathcal P.$ Denote the generalized order by $\mathrm{ord}_\gamma \mathcal P.$ If $\mathcal P\equiv0,$ then $\mathrm{ord}_\gamma \mathcal P=+\infty.$ \end{definition} \subsection{Homological equation} Consider the system \begin{equation} \label{Eq1} \dot x={\mathcal P}_{\gamma}^{[\chi]}(x)+\mathcal X(x)\qquad\bigl(\chi\ge0,\ \mathcal X=(X_1,X_2),\ \mathrm{ord}_\gamma \mathcal X\ge\chi+1\bigr). \end{equation} Let the near-identity formal change of variables \begin{equation} \label{Eq2} x=y+\mathcal Q(y) \qquad\bigl(y=(y_1,y_2),\ \mathcal Q=(Q_1,Q_2),\ \mathrm{ord}_\gamma\mathcal Q\ge1\bigr) \end{equation} transform it into the system \begin{equation} \label{Eq3} \dot y={\mathcal P}_{\gamma}^{[\chi]}(y)+\mathcal Y(y)\qquad\bigl( \mathcal Y=(Y_1,Y_2),\ \mathrm{ord}_\gamma \mathcal Y\ge\chi+1\bigr). \end{equation} Then the series $\mathcal X,\mathcal Y$ and $\mathcal Q$ satisfy \textit{the homological equation} (see~\cite{basov03}): \begin{equation} \label{Eq4} [\mathcal P_{\gamma}^{[\chi]},\mathcal Q_{\gamma}^{[k]}]=\widetilde{ \mathcal Y}_{\gamma}^{[k+\chi]}-\mathcal Y_{\gamma}^{[k+\chi]} \end{equation} where $k\ge1,$ and $\widetilde{\mathcal Y}_{\gamma}^{[k+\chi]}$ involves only the components of the VQHPs $\mathcal Q_{\gamma}^{[l]}$ and $\mathcal Y_{\gamma}^{[l+\chi]}$ with $l=\overline{1,k-1}.$ If $\mathrm{ord}_\gamma \mathcal Q=m\ge1,$ then for $k=\overline{1,m},$ equation~\eqref{Eq4} takes the form \begin{equation} \label{Eq5} \mathcal X_{\gamma}^{[l+\chi]}=\mathcal Y_{\gamma}^{[l+\chi]}\quad (l=\overline{1,m-1}),\qquad[\mathcal P_{\gamma}^{[\chi]}, \mathcal Q_{\gamma}^{[m]}]=\mathcal X_{\gamma}^{[m+\chi]}- \mathcal Y_{\gamma}^{[m+\chi]}. \end{equation} \subsection{Generalized normal form (GNF)} Denote the linear spaces of VQHPs of g.~d.
$k\ge1$ and $k+\chi$ in $x$ by $\mathfrak V_\gamma^{[k]}$ and $\mathfrak V_\gamma^{[k+\chi]},$ and their dimensions by $k_\gamma=\dim\mathfrak V_\gamma^{[k]}$ and $(k+\chi)_\gamma= \dim\mathfrak V_\gamma^{[k+\chi]}.$ In the spaces $\mathfrak V_\gamma^{[k]}$ and $\mathfrak V_\gamma^{[k+\chi]},$ choose the standard bases $\{(x_1^{p_1}x_2^{p_2},0),(0,x_1^{p'_1}x_2^{p'_2}):\ \langle p,\gamma\rangle-\gamma_1=\langle p',\gamma\rangle-\gamma_2=q\},$ where $q$ equals $k$ and $k+\chi$ respectively, order their elements lexicographically, and denote the corresponding matrix representation of the linear operator $[\mathcal P_{\gamma}^{[\chi]},\,.\,]: \mathfrak V_\gamma^{[k]}\rightarrow\mathfrak V_\gamma^{[k+\chi]}$ by $\mathbf P_{\gamma,k}^{[\chi]}.$ Assume that the matrix $\mathbf P_{\gamma,k}^{[\chi]}$ has rank $r_k=k_\gamma-k_\gamma^0$ where $0\le k_\gamma^0\le k_\gamma-1.$ Identify the VQHPs $\mathcal Q_{\gamma}^{[k]},\ \widetilde{ \mathcal Y}_{\gamma}^{[k+\chi]}$ and $\mathcal Y_{\gamma}^{[k+\chi]}$ with the coefficient vectors $\mathbf Q_{\gamma}^{[k]},\ \widetilde{\mathbf Y}_{\gamma}^{[k+\chi]}$ and $\mathbf Y_{\gamma}^{[k+\chi]}$ of their expansions in the standard bases, and represent the homological equation~\eqref{Eq4} as a system of linear algebraic equations: $\mathbf P_{\gamma,k}^{[\chi]}\mathbf Q_{\gamma}^{[k]}= \widetilde{\mathbf Y}_{\gamma}^{[k+\chi]}-\mathbf Y_{\gamma}^{[k+\chi]}.$ From the obtained equations, select a subsystem of order $r_k$ with nonzero determinant, and solve it with respect to $\mathbf Q_{\gamma}^{[k]}.$ Arbitrarily fix the remaining $k_\gamma^0$ coefficients of $\mathbf Q_{\gamma}^{[k]}.$ Substituting $\mathbf Q_{\gamma}^{[k]}$ in the remaining equations, we obtain $n_k=(k+\chi)_\gamma-r_k$ linearly independent relations on the coefficients of the VQHP $\mathcal Y_{\gamma}^{[k+\chi]}$ called \textit{resonance equations}: $$\langle\mathbf a_i^{k},\mathbf Y_{\gamma}^{[k+\chi]}\rangle= \langle\mathbf a_i^{k},\widetilde{\mathbf
Y}_{\gamma}^{[k+\chi]}\rangle,\quad \mathbf a_i^{k}=\mathrm{const}\qquad(i=\overline{1,n_k}).$$ Coefficients of the VQHP $\mathcal Y_{\gamma}^{[k+\chi]}$ that actually appear in at least one of the resonance equations are called \textit{resonant} and the others are called \textit{nonresonant}. \begin{definition}\ \ We call any set $\mathfrak Y=\cup_{k=1}^\infty\{ \mathbf Y_{\gamma,l_j}^{[k+\chi]}\}_{j=1}^{n_k}$ of resonant coefficients of the VQHP $\mathcal Y_{\gamma}^{[k+\chi]}$ such that $\det(\{\mathbf a_{i,l_j}^{k}\}_{i,j=1}^{n_k})\ne0$ for all $k\ge1$ \textit{a resonant set}. \end{definition} \begin{definition}[GNF]\ \ System~\eqref{Eq3} is called \textit{a generalized normal form} if all coefficients of the vector series $\mathcal Y$ are zero except for the coefficients from some resonant set $\mathfrak Y$ that have arbitrary values. \end{definition} Thus, each GNF is generated by a certain resonant set $\mathfrak Y.$ \begin{proposition}[{\cite[Th.~2]{basov03}}]\ \ For any system~\eqref{Eq1} and an arbitrarily chosen resonant set $\mathfrak Y,$ there exists a near-identity formal transformation~\eqref{Eq2} that brings it to a certain GNF~\eqref{Eq3} generated by $\mathfrak Y.$ \end{proposition} \section{Generalized pseudo-Hamiltonian normal form (GPHNF)} \subsection{Euler operator} \begin{definition}\ \ \textit{The Euler operator} with weight $\gamma$ is the VQHP $\mathcal E_\gamma(x)=(\gamma_1 x_1,\gamma_2 x_2).$ \end{definition} Clearly, the Euler operator has zero g.~d., so the g.~d. is omitted in the notation $\mathcal E_\gamma.$ Let $Q_\gamma^{[k]}$ be a QHP, and ${\mathcal Q}_\gamma^{[k]}= \bigl(Q_{\gamma,1}^{[k+\gamma_1]},Q_{\gamma,2}^{[k+\gamma_2]}\bigr)$ be a VQHP of g.~d.
$k\ge0.$ \begin{proposition}\ \ The Euler operator with weight $\gamma$ has the following properties: $$1)\ \mathcal E_\gamma(Q_\gamma^{[k]})=k Q_\gamma^{[k]};\quad 2)\ [\mathcal E_\gamma,\mathcal Q_\gamma^{[k]}]=k\mathcal Q_\gamma^{[k]};\quad 3)\ \mathrm{div}(Q_\gamma^{[k]}\mathcal E_\gamma)=(k+\delta)Q_\gamma^{[k]}.$$ \end{proposition} \begin{proof}\ \ 1)\ The property is obvious. 2)\ According to 1), $[\mathcal E_\gamma,\mathcal Q_\gamma^{[k]}]= \bigl({\mathcal E}_{\gamma}(Q_{\gamma,1}^{[k+\gamma_1]})- {\mathcal Q}_\gamma^{[k]}(\gamma_1 x_1),\ {\mathcal E}_{\gamma}(Q_{\gamma,2}^{[k+\gamma_2]})- {\mathcal Q}_\gamma^{[k]}(\gamma_2 x_2)\bigr)=k\mathcal Q_\gamma^{[k]}.$ 3)\ According to 1) and the Leibniz rule, $\mathrm{div}(Q_\gamma^{[k]}\mathcal E_\gamma)=\mathcal E_\gamma(Q_\gamma^{[k]})+ Q_\gamma^{[k]} \mathrm{div}\mathcal E_\gamma=(k+\delta)Q_\gamma^{[k]}.$ \end{proof} \begin{lemma}\label{EulerLm}\ \ Each VQHP $\mathcal Q_\gamma^{[k]}= \bigl(Q_{\gamma,1}^{[k+\gamma_1]}, Q_{\gamma,2}^{[k+\gamma_2]}\bigr)$ can be uniquely represented in the form \begin{equation} \label{Eq6} \mathcal Q_\gamma^{[k]}=\mathcal I_\gamma^{[k]}+ J_\gamma^{[k]}\mathcal E_\gamma\qquad \bigl( \mathcal I_\gamma^{[k]}= (-\partial_2 I_\gamma^{[k+\delta]}, \partial_1 I_\gamma^{[k+\delta]})\bigr). \end{equation} Moreover, $J_\gamma^{[k]}=\mathrm{div}(\mathcal Q_\gamma^{[k]})/(k+\delta).$ \end{lemma} \begin{proof}\ \ Take the divergence of both sides of equality~\eqref{Eq6}.
According to property~3) of $\mathcal E_\gamma$ and the fact that $\mathrm{div}(\mathcal I_\gamma^{[k]})\equiv0,$ we obtain $J_\gamma^{[k]}=\mathrm{div}(\mathcal Q_\gamma^{[k]})/(k+\delta).$ Hence, $I_\gamma^{[k+\delta]}$ must satisfy the system of equations $\partial_1 I_\gamma^{[k+\delta]}=Q_{\gamma,2}^{[k+\gamma_2]}-\gamma_2 x_2 \mathrm{div}(\mathcal Q_\gamma^{[k]})/(k+\delta),$ $\partial_2 I_\gamma^{[k+\delta]}=\gamma_1 x_1 \mathrm{div}( \mathcal Q_\gamma^{[k]})/(k+\delta)-Q_{\gamma,1}^{[k+\gamma_1]}.$ By the Poincar\'e lemma, taking into account the quasi-homogeneity, $I_\gamma^{[k+\delta]}$ exists and is unique. \end{proof} \subsection{Hamiltonian resonant sets} Let $H_\gamma^{[\chi+\delta]}\not\equiv0$ be a QHP, and $\mathcal H_\gamma^{[\chi]}= (-\partial_2 H_\gamma^{[\chi+\delta]},\partial_1 H_\gamma^{[\chi+\delta]})$ be a VQHP of g.~d. $\chi\ge0.$ Denote the space of polynomials in $x$ by $\mathfrak{P},$ and introduce the inner product on $\mathfrak{P}:$ $$\langle\langle P,Q\rangle\rangle=P(D)\overline{Q}(x)|_{x=0}\qquad \bigl(P,Q\in\mathfrak{P},\ D=(\partial_1,\partial_2)\bigr)$$ where the upper bar stands for the complex conjugation of the coefficients. Then the operator $\left.\mathcal H_\gamma^{[\chi]}\right.^*: \mathfrak{P}\rightarrow\mathfrak{P}$ conjugate to $\mathcal H_\gamma^{[\chi]}$ with respect to $\langle\langle\,.\,,\,.\,\rangle\rangle$ has the form (see~\cite{BV}) \begin{equation} \label{Eq7} \left.\mathcal H_\gamma^{[\chi]}\right.^*= x_2\cdot{\partial_1 \overline{H}_\gamma^{[\chi+\delta]}}(D)- x_1\cdot{\partial_2 \overline{H}_\gamma^{[\chi+\delta]}}(D)= {\partial_1 \overline{H}_\gamma^{[\chi+\delta]}}(D)\cdot x_2- {\partial_2 \overline{H}_\gamma^{[\chi+\delta]}}(D)\cdot x_1. \end{equation} \begin{definition} \label{REdef}\ \ The kernel $\mathfrak{R}_\gamma= \mathrm{Ker}\left.\mathcal H_\gamma^{[\chi]}\right.^*$ of the operator~\eqref{Eq7} is called the space of \textit{resonant polynomials}. Denote the linear space of resonant QHPs of g.~d.
$k\ge0$ by $\mathfrak{R}_\gamma^{[k]}.$ \end{definition} Let $\{R_{\gamma,i}^{[k]}\}_{i=1}^{s_k}$ be a basis for $\mathfrak{R}_\gamma^{[k]},$ and $\{\widetilde R_{\gamma,i}^{[k]}\}_{i=1}^{\widetilde s_k}$ be a basis for $\widetilde{\mathfrak{R}}_\gamma^{[k]}=\mathfrak{R}_\gamma^{[k]}\cap \left.\mathcal H_\gamma^{[\chi]}\right.^*\!(\mathfrak P_\gamma^{[k+\chi]}).$ \begin{definition}\label{Nabor}\ \ Sets of QHPs $\mathfrak{S}_\gamma^{[k]}=\{S_{\gamma,j}^{[k]}\}_{j=1}^{s_k}$ and $\widetilde{\mathfrak{S}}_\gamma^{[k]}= \{\widetilde S_{\gamma,j}^{[k]}\}_{j=1}^{\widetilde s_k}$ are called, respectively, \textit{a Hamiltonian resonant set} and \textit{a Hamiltonian reduced resonant set} in g.~d. $k,$ if $\det(\{\langle\langle R_{\gamma,i}^{[k]},S_{\gamma,j}^{[k]}\rangle\rangle\}_{i,j=1}^{s_k})\ne0$ and $\det(\{\langle\langle\widetilde R_{\gamma,i}^{[k]}, \widetilde S_{\gamma,j}^{[k]}\rangle\rangle\}_{i,j=1}^{\widetilde s_k})\ne0.$ The sets $\mathfrak{S}_\gamma=\cup_{k\ge0}\mathfrak{S}_\gamma^{[k]}$ and $\widetilde{\mathfrak{S}}_\gamma=\cup_{k\ge0} \widetilde{\mathfrak{S}}_\gamma^{[k]}$ are called, respectively, a Hamiltonian resonant set and a Hamiltonian reduced resonant set. \end{definition} Let us show that the definition of a Hamiltonian resonant set does not depend on the choice of basis. Since any other basis $\{{R'}_{\gamma,i}^{[k]}\}_{i=1}^{s_k}$ for the space $\mathfrak{R}_\gamma^{[k]}$ is obtained from $\{R_{\gamma,i}^{[k]}\}_{i=1}^{s_k}$ by multiplication by a nonsingular matrix: ${R'}_{\gamma,i}^{[k]}=\sum_{j=1}^{s_k}A_{ij} R_{\gamma,j}^{[k]},$ $\det{A}\ne0$ $(i=\overline{1,s_k}),$ for each Hamiltonian resonant set $\mathfrak{S}_\gamma^{[k]}=\{S_{\gamma,j}^{[k]}\}_{j=1}^{s_k}$ in g.~d.
$k,$ we have $\det(\{\langle\langle {R'}_{\gamma,i}^{[k]},S_{\gamma,j}^{[k]}\rangle\rangle\}_{i,j=1}^{s_k})= \det(\{\sum_{l=1}^{s_k}A_{il}\langle\langle R_{\gamma,l}^{[k]}, S_{\gamma,j}^{[k]}\rangle\rangle\}_{i,j=1}^{s_k})= \det{A}\, \det(\{\langle\langle R_{\gamma,i}^{[k]}, S_{\gamma,j}^{[k]}\rangle\rangle\}_{i,j=1}^{s_k})\ne0.$ Similarly, the definition of a Hamiltonian reduced resonant set $\widetilde{\mathfrak{S}}_\gamma^{[k]}$ is also independent of the choice of basis. \begin{definition}\label{MinNabor}\ \ We say that a Hamiltonian resonant set ${\mathfrak{S}}_\gamma$ or a Hamiltonian reduced resonant set $\widetilde{\mathfrak{S}}_\gamma$ is \textit{minimal} if it consists of monomials. \end{definition} \subsection{Definition of the GPHNF and existence theorem for the normalizing transformation} Let the unperturbed part of system~\eqref{Eq1} have the form $\mathcal P_{\gamma}^{[\chi]}= \mathcal H_\gamma^{[\chi]}$ where, as in the previous subsection, $\mathcal H_\gamma^{[\chi]}= (-\partial_2 H_\gamma^{[\chi+\delta]}, \partial_1 H_\gamma^{[\chi+\delta]})$ is a VQHP of g.~d. 
$\chi\ge0.$ According to Lemma~\ref{EulerLm}, system~\eqref{Eq1} can be uniquely represented in the form \begin{equation} \label{Eq8} \dot x=\mathcal H_\gamma^{[\chi]}(x)+\sum_{k=1}^\infty \big( \mathcal F_\gamma^{[k+\chi]}(x)+G_\gamma^{[k+\chi]}(x) \mathcal E_\gamma(x)\big) \end{equation} where $\mathcal F_\gamma^{[k+\chi]}= \big(-\partial_2 F_\gamma^{[k+\chi+\delta]}, \partial_1 F_\gamma^{[k+\chi+\delta]}\big)$ and $G_\gamma^{[k+\chi]}=\mathrm{div}\big( \mathcal X_\gamma^{[k+\chi]}\big)/(k+\chi+\delta).$ \begin{definition}[GPHNF]\ \ System~\eqref{Eq8} is called \textit{a generalized pseudo-Hamiltonian normal form} if, for some Hamiltonian resonant set $\mathfrak S_\gamma$ and Hamiltonian reduced resonant set $\widetilde{\mathfrak S}_\gamma,$ the following conditions are satisfied: $$\forall\,k\ge1\qquad F_\gamma^{[k+\chi+\delta]}\in Lin( \widetilde{\mathfrak S}_\gamma^{[k+\chi+\delta]}),\quad G_\gamma^{[k+\chi]}\in Lin(\mathfrak S_\gamma^{[k+\chi]}).$$ \end{definition} Thus, the structure of each GPHNF is generated by certain Hamiltonian resonant and reduced resonant sets $\mathfrak S_\gamma$ and $\widetilde{\mathfrak S}_\gamma.$ \begin{lemma}\ \ 1) Let $J_\gamma^{[k]}$ be a QHP of g.~d. $k\ge0.$ Then for some QHP $K_\gamma^{[k+\chi+\delta]},$ \begin{equation} \label{Eq9} [\mathcal H_\gamma^{[\chi]},J_\gamma^{[k]}\mathcal E_\gamma]= \mathcal K_\gamma^{[k+\chi]}+\frac{k+\delta}{k+\chi+\delta} \mathcal H_\gamma^{[\chi]}(J_\gamma^{[k]})\mathcal E_\gamma\qquad \bigl(\mathcal K_\gamma^{[k+\chi]}=(-\partial_2 K_\gamma^{[k+\chi+\delta]}, \partial_1 K_\gamma^{[k+\chi+\delta]})\bigr).
\end{equation} Moreover, if $\mathcal H_\gamma^{[\chi]}(J_\gamma^{[k]})=0,$ then $\mathcal K_\gamma^{[k+\chi]}=-\chi J_\gamma^{[k]}\mathcal H_\gamma^{[\chi]}$ and $\mathcal H_\gamma^{[\chi]}(K_\gamma^{[k+\chi+\delta]})=0.$ 2) Conversely, for any QHP $K_\gamma^{[k+\chi+\delta]}$ such that $\mathcal H_\gamma^{[\chi]}(K_\gamma^{[k+\chi+\delta]})=0,$ there exists a unique QHP $J_\gamma^{[k]}$ such that $\mathcal H_\gamma^{[\chi]}(J_\gamma^{[k]})=0$ and equality~\eqref{Eq9} is satisfied. \end{lemma} \begin{proof}\ \ 1) By Lemma~\ref{EulerLm} and the identity $\mathrm{div}(J_\gamma^{[k]} \mathcal H_\gamma^{[\chi]})\equiv\mathcal H_\gamma^{[\chi]}(J_\gamma^{[k]}),$ it follows that \begin{multline}\notag [\mathcal H_\gamma^{[\chi]},J_\gamma^{[k]}\mathcal E_\gamma]= \mathcal H_\gamma^{[\chi]}(J_\gamma^{[k]})\mathcal E_\gamma+ J_\gamma^{[k]}[\mathcal H_\gamma^{[\chi]},\mathcal E_\gamma]= \mathcal H_\gamma^{[\chi]}(J_\gamma^{[k]})\mathcal E_\gamma- \chi J_\gamma^{[k]}\mathcal H_\gamma^{[\chi]}=\\ =\Bigl(1-\frac{\chi}{k+\chi+\delta}\Bigr) \mathcal H_\gamma^{[\chi]}(J_\gamma^{[k]})\mathcal E_\gamma+ \mathcal K_\gamma^{[k+\chi]}= \frac{k+\delta}{k+\chi+\delta} \mathcal H_\gamma^{[\chi]}(J_\gamma^{[k]})\mathcal E_\gamma+ \mathcal K_\gamma^{[k+\chi]}. \end{multline} In particular, if $\mathcal H_\gamma^{[\chi]}(J_\gamma^{[k]})=0,$ then $\mathcal K_\gamma^{[k+\chi]}=-\chi J_\gamma^{[k]}\mathcal H_\gamma^{[\chi]}.$ Moreover, from the chain of equalities $[\mathcal H_\gamma^{[\chi]},J_\gamma^{[k]}\mathcal H_\gamma^{[\chi]}]= \mathcal H_\gamma^{[\chi]}(J_\gamma^{[k]})\mathcal H_\gamma^{[\chi]}+ J_\gamma^{[k]}[\mathcal H_\gamma^{[\chi]},\mathcal H_\gamma^{[\chi]}]=0,$ it follows that $\mathcal H_\gamma^{[\chi]}(K_\gamma^{[k+\chi+\delta]})=0.$ 2) The converse follows from the fact that any two integrals of $\mathcal H_\gamma^{[\chi]}$ are functionally dependent.
\end{proof} \begin{theorem}\ \ For any system~\eqref{Eq8} and any Hamiltonian resonant set $\mathfrak S_\gamma$ and Hamiltonian reduced resonant set $\widetilde{\mathfrak S}_\gamma,$ there exists a near-identity formal transformation~\eqref{Eq2} that brings the system to a GPHNF generated by $\mathfrak S_\gamma$ and $\widetilde{\mathfrak S}_\gamma.$ \end{theorem} \begin{proof}\ \ Let $\mathfrak S_\gamma,\ \widetilde{\mathfrak S}_\gamma$ be Hamiltonian resonant and reduced resonant sets for $\mathcal H_\gamma^{[\chi]},$ and let transformation~\eqref{Eq2} with $\mathrm{ord}_\gamma \mathcal Q=m\ge1$ bring system~\eqref{Eq8} into system~\eqref{Eq3} of the form \begin{equation} \label{Eq10} \dot y=\mathcal H_\gamma^{[\chi]}(y)+\sum_{k=1}^\infty \big(\widetilde{\mathcal F}_\gamma^{[k+\chi]}(y)+ \widetilde G_\gamma^{[k+\chi]}(y)\mathcal E_\gamma(y)\big), \end{equation} where $\widetilde{\mathcal F}_\gamma^{[k+\chi]}= (-\partial_2 \widetilde F_\gamma^{[k+\chi+\delta]}, \partial_1 \widetilde F_\gamma^{[k+\chi+\delta]})$ and $\widetilde G_\gamma^{[k+\chi]}= \mathrm{div}(\mathcal Y_\gamma^{[k+\chi]})/(k+\chi+\delta).$ By formulas~\eqref{Eq6} and~\eqref{Eq9}, we get the following expression for the Lie bracket of the VQHPs $\mathcal H_{\gamma}^{[\chi]}$ and $\mathcal Q_{\gamma}^{[m]}:$ \begin{equation} \label{Eq11} [\mathcal H_{\gamma}^{[\chi]},\mathcal Q_{\gamma}^{[m]}]= [\mathcal H_{\gamma}^{[\chi]},\mathcal I_\gamma^{[m]}+ J_\gamma^{[m]}\mathcal E_\gamma]= \bigl([\mathcal H_{\gamma}^{[\chi]},\mathcal I_\gamma^{[m]}]+ \mathcal K_\gamma^{[m+\chi]}\bigr)+ \frac{m+\delta}{m+\chi+\delta} \mathcal H_\gamma^{[\chi]}(J_\gamma^{[m]})\mathcal E_\gamma.
\end{equation} Substituting into~\eqref{Eq5} $\mathcal P_{\gamma}^{[\chi]}= \mathcal H_\gamma^{[\chi]},$ $\mathcal X_{\gamma}^{[m+\chi]}= \mathcal F_\gamma^{[m+\chi]}+G_\gamma^{[m+\chi]}\mathcal E_\gamma,$ and $\mathcal Y_{\gamma}^{[m+\chi]}=\widetilde{\mathcal F}_\gamma^{[m+\chi]}+ \widetilde G_\gamma^{[m+\chi]}\mathcal E_\gamma$ and using equality~\eqref{Eq11}, we get the system of equations \begin{equation} \label{Eq12} \mathcal H_{\gamma}^{[\chi]}(I_\gamma^{[m+\delta]})+K_\gamma^{[m+\chi+\delta]}= {F}_\gamma^{[m+\chi+\delta]}-\widetilde{F}_\gamma^{[m+\chi+\delta]},\quad \frac{m+\delta}{m+\chi+\delta}\mathcal H_\gamma^{[\chi]}(J_\gamma^{[m]})= G_\gamma^{[m+\chi]}-\widetilde G_\gamma^{[m+\chi]}. \end{equation} Denote ${\mathfrak{S}}_\gamma^{[m+\chi]}= \{S_{\gamma,j}^{[m+\chi]}\}_{j=1}^{s_{m+\chi}},$ $\widetilde{\mathfrak{S}}_\gamma^{[m+\chi+\delta]}= \{\widetilde S_{\gamma,j}^{[m+\chi+\delta]}\}_{j=1}^{ \widetilde s_{m+\chi+\delta}}.$ In the linear spaces $\mathfrak{R}_\gamma^{[m+\chi]}$ and $\widetilde{\mathfrak{R}}_\gamma^{[m+\chi+\delta]}= \mathfrak{R}_\gamma^{[m+\chi+\delta]}\cap \left.\mathcal H_\gamma^{[\chi]}\right.^*\!( \mathfrak P_\gamma^{[m+2\chi+\delta]}),$ choose bases $\{R_{\gamma,i}^{[m+\chi]}\}_{i=1}^{ s_{m+\chi}}$ and $\{\widetilde R_{\gamma,i}^{[m+\chi+\delta]}\}_{i=1}^{ \widetilde s_{m+\chi+\delta}}$ respectively. 
Define matrices $A=\{\langle\langle R_{\gamma,i}^{[m+\chi]}, S_{\gamma,j}^{[m+\chi]}\rangle\rangle\}_{i,j=1}^{ s_{m+\chi}}$ and $\widetilde A=\{\langle\langle \widetilde R_{\gamma,i}^{[m+\chi+\delta]}, \widetilde S_{\gamma,j}^{[m+\chi+\delta]}\rangle\rangle\}_{i,j=1}^{ \widetilde s_{m+\chi+\delta}},$ vectors $c=\{\langle\langle R_{\gamma,i}^{[m+\chi]}, G_\gamma^{[m+\chi]}\rangle\rangle\}_{i=1}^{ s_{m+\chi}}$ and $\widetilde c=\{\langle\langle \widetilde R_{\gamma,i}^{[m+\chi+\delta]}, F_\gamma^{[m+\chi+\delta]}\rangle\rangle\}_{i=1}^{ \widetilde s_{m+\chi+\delta}},$ $b=A^{-1}c$ and $\widetilde b= \widetilde A^{-1}\widetilde c,$ and QHPs $\widetilde G_\gamma^{[m+\chi]}=\sum_{j=1}^{ s_{m+\chi}} b_j S_{\gamma,j}^{[m+\chi]}$ and $\widetilde F_\gamma^{[m+\chi+\delta]}= \sum_{j=1}^{ \widetilde s_{m+\chi+\delta}}\widetilde b_j \widetilde S_{\gamma,j}^{[m+\chi+\delta]}.$ Then $\langle\langle R_{\gamma,i}^{[m+\chi]}, \widetilde G_\gamma^{[m+\chi]}-G_\gamma^{[m+\chi]}\rangle\rangle= \sum_{j=1}^{ s_{m+\chi}}A_{ij}b_j-c_i=0$ for all $i=\overline{1, s_{m+\chi}}.$ Hence, by the Fredholm alternative, we obtain the QHP $J_\gamma^{[m]}$ satisfying~$(\ref{Eq12}_2),$ up to an integral of the VQHP $\mathcal H_\gamma^{[\chi]}.$ To complete the proof, it is enough to consider the case where $\widetilde G_\gamma^{[m+\chi]}=G_\gamma^{[m+\chi]}.$ We have $\langle\langle\widetilde R_{\gamma,i}^{[m+\chi+\delta]}, \widetilde F_\gamma^{[m+\chi+\delta]}-F_\gamma^{[m+\chi+\delta]}\rangle\rangle= \sum_{j=1}^{\widetilde s_{m+\chi+\delta}}\widetilde A_{ij} \widetilde b_j-\widetilde c_i=0$ for all $i=\overline{1,\widetilde s_{m+\chi+\delta}}.$ Hence, by the Fredholm alternative and the definition of $\widetilde R_{\gamma,i}^{[m+\chi+\delta]},$ the QHP $\widetilde F_\gamma^{[m+\chi+\delta]}-F_\gamma^{[m+\chi+\delta]}$ can be represented in the form~$(\ref{Eq12}_1)$ where $\mathcal H_\gamma^{[\chi]}(K_\gamma^{[m+\chi+\delta]})=0.$ According to Lemma~2, every such QHP $K_\gamma^{[m+\chi+\delta]}$
can be obtained by choosing an appropriate $J_\gamma^{[m]}$ in~\eqref{Eq11} such that $\mathcal H_\gamma^{[\chi]}(J_\gamma^{[m]})=0.$ So, we have proved the existence of the QHPs $I_\gamma^{[m+\delta]}$ and $J_\gamma^{[m]}$ that satisfy~\eqref{Eq12}, and hence, there exists a VQHP $\mathcal Q_\gamma^{[m]}$ such that the transformation~\eqref{Eq2} with $\mathrm{ord}_\gamma \mathcal Q=m\ge1$ takes system~\eqref{Eq8} to the form~\eqref{Eq10} with $\widetilde F_\gamma^{[m+\chi+\delta]}\in Lin(\widetilde{\mathfrak S}_\gamma^{[m+\chi+\delta]})$ and $\widetilde G_\gamma^{[m+\chi]}\in Lin(\mathfrak S_\gamma^{[m+\chi]}).$ Hence, increasing $m$ step by step, we find the required transformation as a composition of the transformations obtained at each step. \end{proof} \section{Reduction of the GPHNF to GNF} Consider GPHNF~\eqref{Eq10} $\dot y=\mathcal H_\gamma^{[\chi]}(y)+\sum_{k=1}^\infty \big(\widetilde{\mathcal F}_\gamma^{[k+\chi]}(y)+ \widetilde G_\gamma^{[k+\chi]}(y)\mathcal E_\gamma(y)\big)$ generated by arbitrary minimal sets $\mathfrak S_\gamma,\widetilde{\mathfrak S}_\gamma.$ The coefficients of its perturbation $\mathcal Y$ can be expressed in terms of $\widetilde F$ and $\widetilde G$ as follows: \begin{equation} \label{Eq13} Y_1^{(p_1+1,p_2)}=-(p_2+1)\widetilde{F}^{(p_1+1,p_2+1)}+ \gamma_1 \widetilde{G}^{(p_1,p_2)},\quad Y_2^{(p_1,p_2+1)}=(p_1+1)\widetilde{F}^{(p_1+1,p_2+1)}+ \gamma_2 \widetilde{G}^{(p_1,p_2)}. \end{equation} Note that if $y_1^{p_1} y_2^{p_2}\not\in \mathfrak S_\gamma,$ then there exists a vector $q=(q_1,q_2)$ with nonnegative integer components such that $\langle p-q,\gamma\rangle=\chi$ and at least one of the coefficients $[\mathcal H_\gamma^{[\chi]},y_1^{q_1} y_2^{q_2} \mathcal E_\gamma]_1^{(p_1+1,p_2)}$ and $[\mathcal H_\gamma^{[\chi]},y_1^{q_1} y_2^{q_2} \mathcal E_\gamma]_2^{(p_1,p_2+1)}$ is nonzero.
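Relation~\eqref{Eq13} simply reads off the monomial coefficients of $\widetilde{\mathcal F}_\gamma^{[k+\chi]}+\widetilde G_\gamma^{[k+\chi]}\mathcal E_\gamma$, where $\mathcal E_\gamma=(\gamma_1 y_1,\gamma_2 y_2)$. As an editorial sanity check (not part of the paper), the identity can be verified symbolically; the exponents and weights below are illustrative sample values:

```python
import sympy as sp

# Symbolic check of relation (13): the perturbation built from the
# Hamiltonian term F~^{(p1+1,p2+1)} and the Euler-multiplier term
# G~^{(p1,p2)} has exactly the coefficients stated in (13).
p1, p2 = 3, 2      # sample exponents (any nonnegative integers work)
g1, g2 = 2, 3      # sample weight vector gamma = (g1, g2)
y1, y2, Ft, Gt = sp.symbols('y1 y2 Ft Gt')

F = Ft * y1**(p1 + 1) * y2**(p2 + 1)   # Hamiltonian part F~
G = Gt * y1**p1 * y2**p2               # Euler-part coefficient G~

Y1 = -sp.diff(F, y2) + G * g1 * y1      # first component of (-d2 F~, d1 F~) + G~ E_gamma
Y2 = sp.diff(F, y1) + G * g2 * y2       # second component

# Coefficients of y1^{p1+1} y2^{p2} and y1^{p1} y2^{p2+1}:
c1 = sp.expand(Y1).coeff(y1**(p1 + 1) * y2**p2)
c2 = sp.expand(Y2).coeff(y1**p1 * y2**(p2 + 1))

assert sp.simplify(c1 + (p2 + 1) * Ft - g1 * Gt) == 0   # matches (13), first equation
assert sp.simplify(c2 - (p1 + 1) * Ft - g2 * Gt) == 0   # matches (13), second equation
print(c1, c2)
```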
Define a subset $\mathfrak Y$ of the set of coefficients of $\mathcal Y,$ element by element, as follows: \begin{itemize} \item[i)] $Y_1^{(p_1+1,p_2)},Y_2^{(p_1,p_2+1)}\in\mathfrak Y,$ if $y_1^{p_1+1} y_2^{p_2+1}\in \widetilde{\mathfrak S}_\gamma,$ $y_1^{p_1} y_2^{p_2}\in \mathfrak S_\gamma;$ \item[ii)] either $Y_1^{(p_1+1,p_2)}\in\mathfrak Y$ or $Y_2^{(p_1,p_2+1)}\in\mathfrak Y,$ if $y_1^{p_1+1} y_2^{p_2+1}\not\in \widetilde{\mathfrak S}_\gamma,$ $y_1^{p_1} y_2^{p_2}\in \mathfrak S_\gamma;$ \item[iii)] either $Y_1^{(p_1+1,p_2)}\in\mathfrak Y$ or $Y_2^{(p_1,p_2+1)}\in\mathfrak Y,$ if $y_1^{p_1+1} y_2^{p_2+1}\in \widetilde{\mathfrak S}_\gamma,$ $y_1^{p_1} y_2^{p_2}\not\in \mathfrak S_\gamma$ and there exists a vector $q=(q_1,q_2)$ with nonnegative integer components such that $\langle p-q,\gamma\rangle=\chi$ and, respectively, $[\mathcal H_\gamma^{[\chi]},y_1^{q_1} y_2^{q_2} \mathcal E_\gamma]_2^{(p_1,p_2+1)}\ne0$ or $[\mathcal H_\gamma^{[\chi]},y_1^{q_1} y_2^{q_2} \mathcal E_\gamma]_1^{(p_1+1,p_2)}\ne0.$ \end{itemize} \begin{theorem}\ \ For any system~\eqref{Eq1} with a Hamiltonian unperturbed part $\mathcal P_\gamma^{[\chi]}=\mathcal H_\gamma^{[\chi]},$ where $\mathcal H_\gamma^{[\chi]}=(-\partial_2 H_\gamma^{[\chi+\delta]}, \partial_1 H_\gamma^{[\chi+\delta]}),$ any Hamiltonian resonant set $\mathfrak S_\gamma,$ any Hamiltonian reduced resonant set $\widetilde{\mathfrak S}_\gamma,$ and the set $\mathfrak Y$ constructed by the above rules, there exists a near-identity formal transformation~\eqref{Eq2} that brings it to the form~\eqref{Eq3} in which all coefficients of the perturbation are zero, except for the coefficients from $\mathfrak Y,$ which take arbitrary values. Moreover, the obtained system is a GNF.
\end{theorem} \begin{proof}\ \ According to Proposition~1, it is enough to show that the set $\mathfrak Y$ is a resonant set for the unperturbed part $\mathcal H_\gamma^{[\chi]}.$ Denote $\mathfrak Y_\gamma^{[k+\chi]}= \{Y_1^{(p_1+1,p_2)},Y_2^{(p_1,p_2+1)}\in\mathfrak Y:\ \langle p,\gamma\rangle=k+\chi\},$ $n_k=|\mathfrak Y_\gamma^{[k+\chi]}|,$ $r_k=|\mathfrak S_\gamma^{[k+\chi]}|,$ and $\widetilde r_k= |\widetilde{\mathfrak S}_\gamma^{[k+\chi+\delta]}|.$ We first show that for all $k\ge1,$ the number $n_k$ is equal to the number of independent resonance equations in g.~d. $k+\chi,$ i.~e., $n_k=\dim\mathfrak V_\gamma^{[k+\chi]}-\dim [ \mathcal H_{\gamma}^{[\chi]},\mathfrak V_\gamma^{[k]}],$ where $\mathfrak V_\gamma^{[k+\chi]}$ denotes the linear space of VQHPs of g.~d. $k+\chi,$ and $[\mathcal H_{\gamma}^{[\chi]},\mathfrak V_\gamma^{[k]}]$ denotes the image of the linear operator $[\mathcal H_{\gamma}^{[\chi]},\,.\,]: \mathfrak V_\gamma^{[k]}\rightarrow\mathfrak V_\gamma^{[k+\chi]}.$ Indeed, by construction, $n_k=r_k+\widetilde r_k.$ In turn, it follows from the definition of the sets $\mathfrak S_\gamma$ and $\widetilde{\mathfrak S}_\gamma$ that $r_k=\dim\mathfrak P_\gamma^{[k+\chi]}- \dim\mathcal H_\gamma^{[\chi]}(\mathfrak P_\gamma^{[k]})$ and $\widetilde r_k=\dim\mathfrak P_\gamma^{[k+\chi+\delta]}- \dim\bigl(\mathcal H_\gamma^{[\chi]}(\mathfrak P_\gamma^{[k+\delta]})+ \mathrm{Ker\,}\mathcal H_\gamma^{[\chi]}\bigr|_{ \mathfrak P_\gamma^{[k+\chi+\delta]}}\bigr).$ According to~\eqref{Eq6} and~\eqref{Eq9}, $\dim\mathfrak V_\gamma^{[k+\chi]}=\dim\mathfrak P_\gamma^{[k+\chi+\delta]}+ \dim\mathfrak P_\gamma^{[k+\chi]},$ and $\dim [\mathcal H_{\gamma}^{[\chi]},\mathfrak V_\gamma^{[k]}]= \dim\mathcal H_\gamma^{[\chi]}(\mathfrak P_\gamma^{[k]})+ \dim\bigl(\mathcal H_\gamma^{[\chi]}(\mathfrak P_\gamma^{[k+\delta]})+ \mathrm{Ker\,}\mathcal H_\gamma^{[\chi]}\bigr|_{ \mathfrak P_\gamma^{[k+\chi+\delta]}}\bigr).$ Hence, $n_k=r_k+\widetilde r_k=\dim\mathfrak V_\gamma^{[k+\chi]}- \dim [\mathcal H_{\gamma}^{[\chi]},\mathfrak V_\gamma^{[k]}].$ By construction, all the $n_k$ elements of the set $\mathfrak Y_\gamma^{[k+\chi]}$ can be uniquely expressed from the system of $n_k$ linear equations~\eqref{Eq13} in terms of coefficients of the QHPs $\widetilde F_\gamma^{[k+\chi+\delta]}, \widetilde G_\gamma^{[k+\chi]}$ from~\eqref{Eq10} (all but the $n_k$ coefficients of the perturbation of the GPHNF are zero in g.~d. $k+\chi$). Therefore, it remains to exhibit transformations that eliminate the coefficients that do not belong to $\mathfrak Y,$ in cases~ii) and~iii). ii) Let $Y_1^{(p_1+1,p_2)}\in\mathfrak Y,$ $y_1^{p_1+1} y_2^{p_2+1}\not\in \widetilde{\mathfrak S}_\gamma,$ $y_1^{p_1} y_2^{p_2}\in \mathfrak S_\gamma.$ Hence $Y_2^{(p_1,p_2+1)}\not\in\mathfrak Y$ and $(Y_1^{(p_1+1,p_2)} y_1^{p_1+1} y_2^{p_2}, Y_2^{(p_1,p_2+1)} y_1^{p_1} y_2^{p_2+1})= \widetilde{G}^{(p_1,p_2)} y_1^{p_1} y_2^{p_2} \mathcal E_\gamma.$ It follows then from the expansion $$y_1^{p_1} y_2^{p_2} \mathcal E_\gamma= \frac{\gamma_2}{p_1+1} \Bigl(-(p_2+1)y_1^{p_1+1}y_2^{p_2},(p_1+1)y_1^{p_1}y_2^{p_2+1}\Bigr)+ \frac{\gamma_1(p_1+1)+\gamma_2(p_2+1)}{p_1+1}\Bigl(y_1^{p_1+1}y_2^{p_2},0 \Bigr)$$ that the coefficient $Y_2^{(p_1,p_2+1)}$ can be set to zero by adding the term $-\gamma_2 \widetilde{G}^{(p_1,p_2)}(p_1+1)^{-1} y_1^{p_1+1} y_2^{p_2+1}$ to~$\widetilde{F}_\gamma^{[k+\chi+\delta]}$ in the proof of Theorem~1, which is possible, since $y_1^{p_1+1} y_2^{p_2+1}\not\in \widetilde{\mathfrak S}_\gamma.$ Similarly, in the case where $Y_2^{(p_1,p_2+1)}\in\mathfrak Y,$ $y_1^{p_1+1} y_2^{p_2+1}\not\in \widetilde{\mathfrak S}_\gamma,$ $y_1^{p_1} y_2^{p_2}\in \mathfrak S_\gamma,$ we eliminate the coefficient $Y_1^{(p_1+1,p_2)}\not\in\mathfrak Y.$ iii) Let $Y_1^{(p_1+1,p_2)}\in\mathfrak Y,$ $y_1^{p_1+1} y_2^{p_2+1}\in \widetilde{\mathfrak S}_\gamma,$ $y_1^{p_1} y_2^{p_2}\not\in \mathfrak S_\gamma,$ and let there exist a vector $q=(q_1,q_2)$ with nonnegative integer components
$q_1,q_2$ such that $\langle p-q,\gamma\rangle=\chi,$ $[\mathcal H_\gamma^{[\chi]},y_1^{q_1} y_2^{q_2} \mathcal E_\gamma]_2^{(p_1,p_2+1)}\ne0.$ Hence $Y_2^{(p_1,p_2+1)}\not\in\mathfrak Y$ and $$\Bigl(Y_1^{(p_1+1,p_2)} y_1^{p_1+1} y_2^{p_2}, Y_2^{(p_1,p_2+1)} y_1^{p_1} y_2^{p_2+1}\Bigr)= \widetilde{F}^{(p_1+1,p_2+1)} \Bigl(-(p_2+1) y_1^{p_1+1}y_2^{p_2}, (p_1+1) y_1^{p_1}y_2^{p_2+1}\Bigr).$$ Then the coefficient $Y_2^{(p_1,p_2+1)}$ can be set to zero by using a transformation of the form~\eqref{Eq2} where $\mathcal Q=C y_1^{q_1} y_2^{q_2} \mathcal E$ with an appropriate coefficient $C.$ Here all the extra terms can be taken into account by adding, in a suitable way, terms that do not belong to $\widetilde{\mathfrak S}_\gamma$ and $\mathfrak S_\gamma$ to the QHPs $\widetilde{F}_\gamma^{[k+\chi+\delta]}$ and $\widetilde{G}_\gamma^{[k+\chi]}$ respectively (see the proof of Theorem~1). Similarly, if $Y_2^{(p_1,p_2+1)}\in\mathfrak Y,$ $y_1^{p_1+1} y_2^{p_2+1}\in \widetilde{\mathfrak S}_\gamma,$ $y_1^{p_1} y_2^{p_2}\not\in \mathfrak S_\gamma,$ and $[\mathcal H_\gamma^{[\chi]},y_1^{q_1} y_2^{q_2} \mathcal E_\gamma]_1^{(p_1+1,p_2)}\ne0,$ we eliminate the coefficient $Y_1^{(p_1+1,p_2)}\not\in\mathfrak Y.$ \end{proof} \begin{remark}\ \ If several pairs of coefficients $Y_1^{(p_1+1,p_2)}, Y_2^{(p_1,p_2+1)}$ in the same g.~d. fall under case~iii), then there exists the same number of monomials $y_1^{q_1} y_2^{q_2}$ satisfying the conditions of this case, and the coefficients of the transformations are obtained from an algebraic system with nonzero determinant expressing the vanishing of the corresponding coefficients of $\mathcal Y.$ \end{remark} \section{GNFs of systems whose Hamiltonian unperturbed part has monomial components} In this section, using Theorems~1 and~2, we compute GNFs for systems whose Hamiltonian unperturbed part is a vector monomial, i.~e., a vector with monomial components.
The results are compared with the known GNFs obtained earlier by Takens~\cite{takens}, Baider and Sanders~\cite{sanders2}, and Basov et~al.~\cite{basov03,basov,fed,skit}. \subsection{The unperturbed part $(x_2^{m-1},0)$ with $m\ge2$} Consider system~\eqref{Eq8} where $\mathcal H^{[\chi]}_\gamma=(x_2^{m-1},0),$ $H^{[\chi+\delta]}_\gamma=-\cfrac{x_2^{m}}{m},$ $\gamma=(1,1),$ $\chi=m-2,$ $m\ge2.$ In this case, the space of resonant polynomials has the form (see~\cite{BV}) $$\mathfrak R_\gamma=Lin(\{x_1^{p_1} x_2^{p_2}:\ p_2=\overline{0,m-2}\,\}).$$ $\mathfrak R_\gamma$ does not contain integrals of $\mathcal H_\gamma^{[\chi]}$ of degree higher than $m,$ so, in degrees higher than $m,$ every Hamiltonian reduced resonant set is a Hamiltonian resonant set. \begin{corollary}\ \ For any system~\eqref{Eq1} with the unperturbed part $(x_2^{m-1},0),$ where $m\ge2,$ there exists a near-identity formal transformation~\eqref{Eq2} that brings it into the GNF \begin{equation} \label{Eq14} \begin{split} &\dot y_1=y_2^{m-1}+ \sum_{i=m}^\infty\sum_{j=0}^{m-3}Y_1^{(i-j,j)}y_1^{i-j}y_2^j +y_1^{2} y_2^{m-2} \sum_{i=0}^\infty Y_1^{(i+2,m-2)} y_1^{i},\\ &\dot y_2= \sum_{i=m}^\infty\sum_{j=0}^{m-2}Y_2^{(i-j,j)}y_1^{i-j}y_2^j+ y_1 y_2^{m-1} \sum_{i=0}^\infty Y_2^{(i+1,m-1)} y_1^{i} \end{split} \end{equation} where for each $i\ge0,$ we take either $Y_1^{(i+2,m-2)}=0$ or $Y_2^{(i+1,m-1)}=0.$ \end{corollary} \begin{proof}\ \ For all $k>m,$ choose the Hamiltonian resonant sets $$\mathfrak S_\gamma^{[k]},\widetilde{\mathfrak S}_\gamma^{[k]}= \{x_1^{p_1} x_2^{p_2}:\ p_2=\overline{0,m-2};\ |p|=k\,\}.$$ Then GPHNF~\eqref{Eq10} takes the form~\eqref{Eq14} with $Y_1^{(i+2,m-2)}=Y_2^{(i+1,m-1)}=\widetilde G^{(i+1,m-2)}.$ Hence, the corollary follows from Theorem~2. \end{proof} For $m=2,$ formula~\eqref{Eq14} gives \textit{the Takens normal form}~\cite{takens}, and for $m=3,$ it gives~\cite[Th.~11]{fed}.
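For this unperturbed part, formula~\eqref{Eq7} reduces to $\left.\mathcal H_\gamma^{[\chi]}\right.^*=x_1\,\partial_2^{m-1}$, so a polynomial is resonant exactly when its degree in $x_2$ is at most $m-2$, in agreement with the description of $\mathfrak R_\gamma$ above. A small symbolic check (an illustration added in editing, with a sample value of $m$):

```python
import sympy as sp

# For the unperturbed part (x2^(m-1), 0) with H = -x2^m/m, formula (7)
# reduces to the operator H* = x1 * d^(m-1)/dx2^(m-1).  A polynomial is
# resonant iff H* annihilates it, i.e. iff its x2-degree is <= m-2.
x1, x2 = sp.symbols('x1 x2')
m = 4  # sample value; any m >= 2 behaves the same way

def H_star(P):
    """Adjoint operator for the unperturbed part (x2^(m-1), 0)."""
    return sp.expand(x1 * sp.diff(P, x2, m - 1))

# Monomials with x2-degree <= m-2 are resonant:
assert all(H_star(x1**i * x2**j) == 0 for i in range(5) for j in range(m - 1))
# A monomial of x2-degree m-1 is not:
assert H_star(x2**(m - 1)) != 0
print(H_star(x2**(m - 1)))  # (m-1)! * x1
```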
\subsection{The unperturbed part $(-m x_1^{l} x_2^{m-1}, l x_1^{l-1} x_2^{m})$ with $l>m\ge1$} Consider system~\eqref{Eq8} where $\mathcal H^{[\chi]}_\gamma=(-m x_1^{l} x_2^{m-1}, l x_1^{l-1} x_2^{m}),$ $H^{[\chi+\delta]}_\gamma={x_1^l x_2^m},$ $\gamma=(1,1),$ $\chi=l+m-2,$ $l>m\ge1,$ and $\text{GCD}(l,m)=d.$ In this case, the space of resonant polynomials has the form (see~\cite{BV}) $$\mathfrak R_\gamma=Lin\bigl(\{x_1^{p_1} x_2^{p_2}:\ p_1=\overline{0,l-2}, \text{ or } p_2=\overline{0,m-2},\text{ or } p_1=rl/d-1,\ p_2=rm/d-1,\ r\ge d\,\}\bigr).$$ $\mathfrak R_\gamma$ does not contain integrals of $\mathcal H_\gamma^{[\chi]}$ of degree higher than $l+m,$ so, in degrees higher than $l+m,$ every Hamiltonian reduced resonant set is a Hamiltonian resonant set. \begin{corollary}\ \ For any system~\eqref{Eq1} with the unperturbed part $(-m x_1^{l} x_2^{m-1}, l x_1^{l-1} x_2^{m}),$ where $l>m\ge1$ and $\mathrm{GCD}(l,m)=d,$ there exists a near-identity formal transformation~\eqref{Eq2} that brings it into the GNF \begin{equation}\label{Eq15} \begin{split} &\dot y_1=-m y_1^{l} y_2^{m-1}+ \sum_{k=l+m}^\infty\Bigl(\sum_{i=0}^{l-2} Y_1^{(i,k-i)}y_1^i y_2^{k-i}+ \sum_{j=0}^{m-3} Y_1^{(k-j,j)}y_1^{k-j} y_2^j\Bigr)+\\ &\qquad +y_1^{l+2} y_2^{m-2}\sum_{i=0}^\infty Y_1^{(i+l+2,m-2)}y_1^{i}+ y_1^{l-1} y_2^{m+1} \sum_{j=0}^\infty Y_1^{(l-1,m+j+1)}y_2^{j}+\\ &\qquad\qquad +\sum_{r=d+1}^\infty Y_1^{(rl/d,rm/d-1)} y_1^{rl/d} y_2^{rm/d-1}+ \sum_{s=d+1+[\frac{3d-1}{l+m}]}^\infty Y_1^{(sl/d-1,sm/d-2)} y_1^{sl/d-1} y_2^{sm/d-2},\\ &\dot y_2=l y_1^{l-1} y_2^{m}+ \sum_{k=l+m}^\infty\Bigl(\sum_{i=0}^{l-3} Y_2^{(i,k-i)}y_1^i y_2^{k-i}+ \sum_{j=0}^{m-2} Y_2^{(k-j,j)}y_1^{k-j} y_2^j\Bigr)+\\ &\qquad +y_1^{l+1} y_2^{m-1} \sum_{i=0}^\infty Y_2^{(i+l+1,m-1)}y_1^{i}+ y_1^{l-2} y_2^{m+2}\sum_{j=0}^\infty Y_2^{(l-2,m+j+2)}y_2^{j}+\\ &\qquad\qquad +\sum_{r=d+1}^\infty Y_2^{(rl/d-1,rm/d)} y_1^{rl/d-1} y_2^{rm/d}+ \sum_{s=d+1+[\frac{3d-1}{l+m}]}^\infty Y_2^{(sl/d-2,sm/d-1)} y_1^{sl/d-2}
y_2^{sm/d-1} \end{split} \end{equation} where for each $i,j\ge0,$ $r\ge d+1$ and $s\ge d+1+[\frac{3d-1}{l+m}],$ we take either $Y_1^{(i+l+2,m-2)}=0$ or $Y_2^{(i+l+1,m-1)}=0,$ either $Y_1^{(l-1,m+j+1)}=0$ or $Y_2^{(l-2,m+j+2)}=0,$ either $Y_1^{(rl/d,rm/d-1)}=0$ or $Y_2^{(rl/d-1,rm/d)}=0,$ and in the case $m=1,$ we take $Y_2^{(sl/d-2,sm/d-1)}=0,$ while in the case $m\ge2$ we take either $Y_1^{(sl/d-1,sm/d-2)}=0$ or $Y_2^{(sl/d-2,sm/d-1)}=0.$ \end{corollary} \begin{proof} \ \ For all $k>l+m,$ choose the Hamiltonian resonant sets \begin{multline}\notag \mathfrak S_\gamma^{[k]},\widetilde{\mathfrak S}_\gamma^{[k]}= \{x_1^{p_1} x_2^{p_2}:\ p_1=\overline{0,l-2}, \text{ or } p_2=\overline{0,m-2}, \text{ or }\\ p_1=rl/d-1,\ p_2=rm/d-1,\ r\ge d;\ |p|=k\,\}. \end{multline} Then GPHNF~\eqref{Eq10} takes the form~\eqref{Eq15} where $Y_1^{(l-1,m+j+1)}=Y_2^{(l-2,m+j+2)}=\widetilde G^{(l-2,m+j+1)},$ $Y_1^{(i+l+2,m-2)}=Y_2^{(i+l+1,m-1)}=\widetilde G^{(i+l+1,m-2)},$ $Y_1^{(rl/d,rm/d-1)}=Y_2^{(rl/d-1,rm/d)}=\widetilde G^{(rl/d-1,rm/d-1)},$ and $Y_1^{(sl/d-1,sm/d-2)}=(1-sm/d)\widetilde F^{(sl/d-1,sm/d-1)},$ $Y_2^{(sl/d-2,sm/d-1)}=(sl/d-1)\widetilde F^{(sl/d-1,sm/d-1)}.$ Transformation~\eqref{Eq2}, with $\mathcal Q= C y_1^{s l/d-l-1} y_2^{s m/d-m-1} \mathcal E$ where $C=\mathrm{const},$ changes the coefficients $Y_1^{(sl/d-1,sm/d-2)}$ and $Y_2^{(sl/d-2,sm/d-1)}$ by $C(1-m)(l+m)$ and $C(l-1)(l+m)$ respectively. Hence, the corollary follows from Theorem~2.
\end{proof} For $l=2,\,m=1,$ formula~\eqref{Eq15} gives~\cite[Th.~7]{skit} for $\alpha=-1/2.$ \subsection{The unperturbed part $(-x_1^{m} x_2^{m-1}, x_1^{m-1} x_2^{m})$ with $m\ge1$} Consider system~\eqref{Eq8} where $\mathcal H^{[\chi]}_\gamma= (-x_1^{m} x_2^{m-1}, x_1^{m-1} x_2^{m}),$ $H^{[\chi+\delta]}_\gamma= {x_1^m x_2^m}/{m},$ $\gamma=(1,1),$ $\chi=2m-2,$ and $m\ge1.$ In this case, the space of resonant polynomials has the form (see~\cite{BV}) $$\mathfrak R_\gamma=Lin\bigl(\{x_1^{p_1} x_2^{p_2}:\ p_1=\overline{0,m-2}, \text{ or } p_2=\overline{0,m-2}, \text{ or } p_1=p_2\,\}\bigr).$$ Since the monomials $x_1^k x_2^k$ $(k\ge0)$ are integrals of the unperturbed part, such monomials are absent from the Hamiltonian reduced resonant set, in degrees higher than $2m.$ \begin{corollary}\ \ For any system~\eqref{Eq1} with the unperturbed part $(-x_1^{m} x_2^{m-1}, x_1^{m-1} x_2^{m}),$ where $m\ge1,$ there exists a near-identity formal transformation~\eqref{Eq2} that brings it into the GNF \begin{equation} \label{Eq16} \begin{split} &\dot y_1=-y_1^{m} y_2^{m-1}+\sum_{k=2m}^\infty\Bigl(\sum_{i=0}^{m-2} Y_1^{(i,k-i)}y_1^i y_2^{k-i}+ \sum_{j=0}^{m-3} Y_1^{(k-j,j)}y_1^{k-j} y_2^j\Bigr)+\\ &\qquad+ y_1^{m+2} y_2^{m-2}\sum_{i=0}^\infty Y_1^{(i+m+2,m-2)} y_1^{i} +y_1^{m-1} y_2^{m+1} \sum_{j=0}^\infty Y_1^{(m-1,m+j+1)} y_2^{j}+\\ &\qquad\qquad+y_1^{m} y_2^{m}\sum_{r=0}^\infty Y_1^{(r+m+1,r+m)} y_1^{r+1} y_2^{r},\\ &\dot y_2=y_1^{m-1} y_2^{m}+\sum_{k=2m}^\infty\Bigl(\sum_{i=0}^{m-3} Y_2^{(i,k-i)}y_1^i y_2^{k-i}+ \sum_{j=0}^{m-2} Y_2^{(k-j,j)}y_1^{k-j} y_2^j\Bigr)+\\ &\qquad+y_1^{m+1} y_2^{m-1} \sum_{i=0}^\infty Y_2^{(i+m+1,m-1)} y_1^{i}+ y_1^{m-2} y_2^{m+2}\sum_{j=0}^\infty Y_2^{(m-2,m+j+2)} y_2^{j}+\\ &\qquad\qquad+y_1^{m} y_2^{m}\sum_{r=0}^\infty Y_2^{(r+m,r+m+1)} y_1^{r} y_2^{r+1} \end{split} \end{equation} where for each $i,j,r\ge0,$ we take either $Y_1^{(i+m+2,m-2)}=0$ or $Y_2^{(i+m+1,m-1)}=0,$ either $Y_1^{(m-1,m+j+1)}=0$ or $Y_2^{(m-2,m+j+2)}=0,$ and either
$Y_1^{(r+m+1,r+m)}=0$ or $Y_2^{(r+m,r+m+1)}=0.$ \end{corollary} \begin{proof}\ \ For all $k>2m,$ choose \begin{equation} \notag \begin{split} &\mathfrak S_\gamma^{[k]}= \{x_1^{p_1} x_2^{p_2}:\ p_1=\overline{0,m-2}, \text{ or } p_2=\overline{0,m-2}, \text{ or } p_1=p_2;\ |p|=k\,\},\\ &\widetilde{\mathfrak S}_\gamma^{[k]}= \{x_1^{p_1} x_2^{p_2}:\ p_1=\overline{0,m-2} \text{ or } p_2=\overline{0,m-2};\ |p|=k\,\}. \end{split} \end{equation} Then GPHNF~\eqref{Eq10} takes the form~\eqref{Eq16} where $Y_1^{(i+m+2,m-2)}=Y_2^{(i+m+1,m-1)}=\widetilde G^{(i+m+1,m-2)},$ $Y_1^{(m-1,m+j+1)}=Y_2^{(m-2,m+j+2)}=\widetilde G^{(m-2,m+j+1)},$ $Y_1^{(r+m+1,r+m)}=Y_2^{(r+m,r+m+1)}=\widetilde G^{(r+m,r+m)}.$ Hence, the corollary follows from Theorem~2. \end{proof} \subsection{The unperturbed part $(\pm x_2^{m-1}, x_1^{l-1})$ with $l\ge m\ge2$} Consider system~\eqref{Eq8} where $\mathcal H^{[\chi]}_\gamma= (\pm x_2^{m-1}, x_1^{l-1}),$ $H^{[\chi+\delta]}_\gamma=x_1^l/l\mp x_2^m/m,$ $\gamma=(m/d,l/d),$ $\chi=(lm-l-m)/d,$ $l\ge m\ge2,$ and $d=\mathrm{GCD}(l,m).$ \begin{lemma} \label{Lmbin}\ \ Minimal Hamiltonian resonant and reduced resonant sets in g.~d. 
$k>lm/d$ have the form \begin{equation} \notag \begin{split} &\mathfrak S_\gamma^{[k]}=\{x_1^{p_1-r_{p} l}x_2^{p_2+r_{p} m}:\ p_1\not\equiv-1\ \mathrm{mod}\,l,\ p_2=\overline{0,m-2}\,\},\\ &\widetilde{\mathfrak S}_\gamma^{[k]}= \{x_1^{p_1-\widetilde r_{p} l}x_2^{p_2+\widetilde r_{p} m}:\ p_1\not\equiv-1,0\ \mathrm{mod}\,l,\ p_2=0 \text{ or } p_1\not\equiv-1\ \mathrm{mod}\,l,\ p_2=\overline{1,m-2}\,\} \end{split} \end{equation} where $\langle p,\gamma\rangle=k,$ and $r_{p},\widetilde r_{p}$ are arbitrary integers such that $0\le r_{p},\widetilde r_{p}\le [p_1/l].$ \end{lemma} \begin{proof}\ \ Let $R=\sum\limits^\infty_{p_1,p_2=0} R^{(p_1,p_2)} x_1^{p_1} x_2^{p_2}\in\mathfrak R_\gamma.$ According to formula~\eqref{Eq7}, $\left.\mathcal H_\gamma^{[\chi]}\right.^*= x_2({\partial^{l-1}}/{\partial x_1^{l-1}})\pm x_1({\partial^{m-1}}/{\partial x_2^{m-1}}),$ thus \begin{equation} \notag \sum^\infty_{i=l-1}\sum^\infty_{j=0} (l-1)! C^{l-1}_i R^{(i,j)} x_1^{i-l+1} x_2^{j+1}\pm \sum^\infty_{i=0}\sum^\infty_{j=m-1} (m-1)! C^{m-1}_j R^{(i,j)} x_1^{i+1} x_2^{j-m+1} =0, \end{equation} whence, setting the coefficients of $x_1^{i+1},x_2^{j+1},$ and $x_1^{i+1}x_2^{j+1}$ to zero, we obtain the equations \begin{equation} \label{Eq17} R^{(i,m-1)}=0,\quad R^{(l-1,j)}=0,\quad(l-1)! C^{l-1}_{i+l} R^{(i+l,j)}\pm (m-1)! C^{m-1}_{j+m} R^{(i,j+m)}=0\qquad(i,j\ge0). \end{equation} Hence, by induction, we find that $R^{(i,km-1)},R^{(kl-1,j)}=0$ for all $k\ge1.$ It also follows from equations~\eqref{Eq17} that any resonant polynomial $R$ is uniquely defined by its coefficients $R^{(p_1-r_{p}l,p_2+r_{p}m)}$ of the monomials $x_1^{p_1-r_{p}l}x_2^{p_2+r_{p}m}$ where $p_2\le m-2$ $(0\le r_{p}\le [p_1/l]).$ Thus, the set $\mathfrak S_\gamma^{[k]}$ of such monomials is a minimal Hamiltonian resonant set.
For the Hamiltonian reduced resonant set, the lemma follows from the fact that any quasi-homogeneous polynomial integral for $\mathcal H^{[\chi]}_\gamma$ is a power of $H^{[\chi+\delta]}_\gamma.$ \end{proof} For each $k\in \mathbb Z,$ denote $\theta[k]=0$ if $k<0,$ and $\theta[k]=1$ if $k\ge0.$ \begin{corollary}\ \ For any system~\eqref{Eq1} with the unperturbed part $(\pm x_2^{m-1}, x_1^{l-1}),$ where $l\ge m\ge2,$ there exists a near-identity formal transformation~\eqref{Eq2} that brings it into the GNF \begin{equation} \label{Eq18} \begin{split} &\dot y_1=\pm y_2^{m-1}+\sum_{j=1}^{m-2}y_2^{j-1}\biggl( \sum_{\substack{i\not\equiv0,-1\,\mathrm{mod}\, l\\ i>l(1-j/m)}} Y_1^{(i,j-1)} y_1^{i} + \sum_{r=1+\theta[m-jl]}^\infty Y_1^{(r l-1,j-1)} y_1^{rl-1}+\\ &\qquad+\sum_{s=1}^\infty Y_1^{(sl,j-1)} y_1^{sl}\biggr)+ \sum_{\substack{i\not\equiv0\,\mathrm{mod}\, l\\ i>l/m}} Y_1^{(i,m-2)} y_1^{i}y_2^{m-2},\\ &\dot y_2=y_1^{l-1}+\sum_{j=1}^{m-2}y_2^{j}\biggl( \sum_{\substack{i\not\equiv0,-1\,\mathrm{mod}\, l\\ i>l(1-j/m)}} Y_2^{(i-1,j)}y_1^{i-1}+\sum_{r=1+\theta[m-jl]}^\infty Y_2^{(rl-2,j)} y_1^{rl-2}+\\ &\qquad+\sum_{s=1}^\infty Y_2^{(sl-1,j)} y_1^{sl-1}\biggr)+ \sum_{\substack{i\not\equiv0,-1\,\mathrm{mod}\, l\\ i>l}} Y_2^{(i-1,0)} y_1^{i-1}+\sum_{\substack{i\not\equiv0\,\mathrm{mod}\, l\\ i>l/m}} Y_2^{(i-1,m-1)} y_1^{i-1}y_2^{m-1} \end{split} \end{equation} where for each $i\not\equiv0\mod l,$ $i>l/m,$ $j=\overline{1,m-2},$ $r\ge 1+\theta[m-jl],$ and $s\ge1,$ we take either $Y_1^{(i,m-2)}=0$ or $Y_2^{(i-1,m-1)}=0,$ either $Y_1^{(r l-1,j-1)}=0$ or $Y_2^{(rl-2,j)}=0,$ either $Y_1^{(sl,j-1)}=0$ or $Y_2^{(sl-1,j)}=0,$ except for the case $l=m,$ in which for $j=m-2,$ we take $Y_1^{(sm,m-3)}=0$ in the pairs $\{Y_1^{(sm,m-3)},Y_2^{(sm-1,m-2)}\}.$ \end{corollary} \begin{proof}\ \ For all $k>lm/d,$ choose the Hamiltonian resonant and reduced resonant sets given by Lemma~\ref{Lmbin} with all $r_{p},\widetilde r_{p}=0:$ \begin{equation} \notag \begin{split}
&\mathfrak S_\gamma^{[k]}=\{x_1^{p_1} x_2^{p_2}:\ p_1\not\equiv-1\ \mathrm{mod}\,l,\ p_2=\overline{0,m-2};\ \langle p,\gamma\rangle=k\,\},\\ &\widetilde{\mathfrak S}_\gamma^{[k]}=\{x_1^{p_1} x_2^{p_2}:\ p_1\not\equiv-1,0\ \mathrm{mod}\,l,\ p_2=0 \text{ or } p_1\not\equiv-1\ \mathrm{mod}\,l,\ p_2=\overline{1,m-2};\ \langle p,\gamma\rangle=k\,\}. \end{split} \end{equation} Then GPHNF~\eqref{Eq10} takes the form~\eqref{Eq18} where $Y_1^{(i,m-2)}=Y_2^{(i-1,m-1)}=\widetilde G^{(i-1,m-2)},$ $Y_1^{(r l-1,j-1)}=Y_2^{(rl-2,j)}=\widetilde G^{(rl-2,j-1)},$ $Y_1^{(sl,j-1)}=-j\widetilde F^{(sl,j)},$ and $Y_2^{(sl-1,j)}= s l \widetilde F^{(sl,j)}.$ Transformation~\eqref{Eq2} with $\mathcal Q=C y_1^{sl-l} y_2^{j} \mathcal E$ where $C=\mathrm{const}$ changes the coefficients $Y_1^{(sl,j-1)}$ and $Y_2^{(sl-1,j)}$ by $-C j$ and $C(l-j-2)$ respectively. Hence, for $l>m,$ as well as for $l=m$ and $j=\overline{1,m-3},$ we can set either coefficient in the pair $\{Y_1^{(sl,j-1)}, Y_2^{(sl-1,j)}\}$ to zero by choosing $C.$ Only in the case $l=m,$ $j=m-2$ does the coefficient $Y_2^{(sm-1,m-2)}$ remain unchanged under this transformation. In this case, we set the coefficient $Y_1^{(sm,m-3)}$ in system~\eqref{Eq18} to zero by choosing $C=Y_1^{(sm,m-3)}/(m-3).$ Hence, the corollary follows from Theorem~2. \end{proof} For $m=2,$ GNF~\eqref{Eq18} is the so-called \textit{second order Takens--Bogdanov normal form} obtained by Baider and Sanders in~\cite{sanders2}. In particular, for $l=3,\,m=2,$ formula~\eqref{Eq18} agrees with~\cite[Th.~4]{basov}, and for $l=4,\,m=2,$ it is consistent with~\cite[Th.~3]{basov03}. \end{document}
\begin{document} \title{Nonlocal solutions of parabolic equations \\with strongly elliptic differential operators} \author{ Irene Benedetti\\ \small\textit{Department of Mathematics and Computer Sciences, University of Perugia}\\ \small\textit{I-06123 Italy, e-mail: [email protected]}\\ \and Luisa Malaguti\\ \small\textit{Department of Sciences and Methods for Engineering, University of Modena and Reggio Emilia}\\ \small\textit{I-42122 Italy, e-mail: [email protected]} \and Valentina Taddei\\ \small\textit{Department of Sciences and Methods for Engineering, University of Modena and Reggio Emilia}\\ \small\textit{I-42122 Italy, e-mail: [email protected]}} \date{} \maketitle \begin{abstract} The paper deals with second order parabolic equations on bounded domains with Dirichlet conditions in arbitrary Euclidean spaces. Their interest comes from being models for describing reaction-diffusion processes in several frameworks. A linear diffusion term in divergence form is included, which generates a strongly elliptic differential operator. A further linear part, of integral type, accounts for nonlocal diffusion behaviour. The main result provides a unifying method for studying the existence and localization of solutions satisfying nonlocal associated boundary conditions. The Cauchy multipoint and the mean value conditions are included in this investigation. The problem is transformed into its abstract setting and the proofs are based on the homotopic invariance of the Leray-Schauder topological degree. A \emph{bounding function} (i.e. Lyapunov-like function) theory is developed, which is new in this infinite dimensional context. It guarantees that the associated vector fields have no fixed points on the boundary of their domains, thus making the use of a degree argument possible. \noindent \textbf{AMS Subject Classification:} Primary 35K20. Secondary 34B10, 47H11, 93D30.
\smallskip \noindent \textbf{Keywords:} Parabolic equations; multipoint and mean value conditions; degree theory; Lyapunov-like functions. \end{abstract} \section{Introduction}\label{s:intro} The paper deals with the second order parabolic equation \begin{equation}\label{e:PE} \frac{\partial u(t, \xi)}{\partial t}= \displaystyle{\sum_{i,j=1}^n\frac{\partial}{\partial \xi_i} \left( a_{i,j}(\xi)\frac{\partial u(t,\xi)}{\partial \xi_j}\right)+\int_{D}k(\xi,y)u(t,y)dy}-bu(t,\xi)+g(t,u(t,\xi)) \end{equation} with $t\in [0,T]$ and $\xi\in D\subset \mathbb{R}^n$, where $D$ is a bounded domain with a sufficiently regular boundary $\partial D$. The coefficients $a_{i,j}\in C^1(\overline D)$, for $i,j=1,...,n$, are symmetric, i.e. \begin{equation}\label{e:sym} a_{i,j}(\xi)=a_{j,i}(\xi), \, \xi\in \overline D \quad \text{for }i, j =1,...,n \text{ with } i\ne j \end{equation} and there is a constant $C_0>0$ such that \begin{equation}\label{e:elliptic} C_0\Vert \sigma \Vert^2 \le \displaystyle{\sum_{i,j=1}^n}a_{i,j}(\xi)\sigma_i\sigma_j \text{ for all } \sigma \in \mathbb{R}^n, \; \xi \in \overline D. \end{equation} Moreover $b > 0$ is a prescribed constant, and $k: D \times D \to \mathbb{R}$ and $g:[0,T]\times \mathbb{R} \to \mathbb{R}$ are two given maps. The solution is subject to the Dirichlet boundary conditions \begin{equation}\label{e:D} u(t,\xi)=0, \quad \text{for } t\in[0,T], \, \xi \in \partial D. \end{equation} Equation \eqref{e:PE} is a model for reaction-diffusion processes in many frameworks and hence it is widely investigated. We refer to the recent monographs \cite{DB G V}, \cite{GK}, \cite{MN} and \cite{Yagi} for a wide discussion of parabolic dynamics. The symmetric second order differential operator on its right-hand side accounts for diffusion of a local, pointwise type, while the nonlocal term in integral form includes long-distance diffusive interactions or memory effects.
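As a purely illustrative aside (not part of the original analysis), the right-hand side of \eqref{e:PE} can be discretized in one space dimension to see all four terms at work; the coefficient $a$, the kernel $k$, the constant $b$ and the nonlinearity $g$ below are hypothetical choices consistent with the assumptions stated later.

```python
import numpy as np

# Hedged 1D sketch (not the authors' scheme): discretize the right-hand side
# of u_t = (a u')' + \int_D k(.,y) u(y) dy - b u + g(t,u) on D = (0,1),
# with homogeneous Dirichlet conditions u(t,0) = u(t,1) = 0.
n = 99
h = 1.0 / (n + 1)
xs = np.linspace(h, 1.0 - h, n)               # interior nodes
mid = np.linspace(h / 2, 1.0 - h / 2, n + 1)  # half-nodes for the flux

a = lambda x: 1.0 + 0.5 * x              # hypothetical diffusion coefficient a > 0
k = lambda x, y: np.exp(-np.abs(x - y))  # hypothetical kernel with 0 <= k <= 1
b = 3.0                                  # constant b > 0
g = lambda t, u: np.sin(u)               # Lipschitz nonlinearity (L = 1)

def rhs(t, u):
    """Discrete right-hand side of the parabolic equation at state u."""
    up = np.concatenate(([0.0], u, [0.0]))  # pad with the Dirichlet zeros
    flux = a(mid) * np.diff(up) / h         # a(x) u'(x) at the half-nodes
    diffusion = np.diff(flux) / h           # divergence-form diffusion term
    K = k(xs[:, None], xs[None, :])         # kernel matrix k(x_i, y_j)
    nonlocal_term = h * (K @ u)             # quadrature of the integral term
    return diffusion + nonlocal_term - b * u + g(t, u)

u0 = np.sin(np.pi * xs)
print(rhs(0.0, u0).shape)  # (99,)
```

Since $0\le k\le 1$, the discrete nonlocal term is bounded by $\vert D\vert \sup\vert u\vert$, mirroring the role of the kernel bound in the estimates below.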
When $a_{i,j}(\xi) \equiv \delta_{i,j}=\left\{\begin{array}{ll} 0 & i\ne j \\ 1 & i=j \end{array}\right.$, the differential term on the right-hand side of \eqref{e:PE} simply reduces to the Laplace operator and hence \eqref{e:PE} becomes \begin{equation*} u_t(t,\xi)=\Delta u(t,\xi)+\int_Dk(\xi, y)u(t, y)\, dy -bu(t,\xi)+g(t, u(t,\xi)), \enspace t\in[0,T], \, \xi \in D. \end{equation*} We always assume that \begin{equation}\label{e:gandk} \begin{array}{rl} (i)&g \text{ is continuous and there exist } L>0 \text{ and } \beta \in (0,1) \text{ such that}\\ &\vert g(t, \xi)-g(t, y)\vert \le L\max\{\vert \xi-y\vert^{\beta}, \vert \xi-y\vert \}, \text{ for } t \in [0,T], \, \xi,y \in \mathbb{R},\\[5mm] (ii)&k\in L^{\infty}(D \times D) \text{ and } 0\le k(\xi,y) \le 1 \text{ for a.a. } \xi,y \in D. \end{array} \end{equation} By the estimate in \eqref{e:gandk}\emph{(i)}, the function $g$ has sublinear growth in its second variable $\xi$ as $\vert \xi \vert \to \infty$, for every $t\in [0,T]$; $g$ is also H\"{o}lder continuous with exponent $\beta$ in $\xi$ for every $t$; in particular, $g(t,\xi)$ may approach $g(t,0)$ as $\vert \xi \vert^{\beta}$ when $\xi \to 0$. \noindent When $k(\xi, y):=h(\xi-y)$ for a.a. $\xi, y \in D$ with $h \in L^{\infty}(D)$ and $0\le h(\xi)\le 1$ for a.a. $\xi \in D$, the integral term in \eqref{e:PE} can be written in the form $h\ast u(t,\cdot)$, i.e. it is a convolution product with convolution kernel $h$. \smallskip As usual $L^p(D)$ denotes the Lebesgue space $L^p(D, \mathbb{R})$ and we always restrict to the case when \begin{equation*} 1<p<\infty.
\end{equation*} Under conditions \eqref{e:sym} and \eqref{e:elliptic} the linear elliptic partial differential operator in divergence form $A_p \colon W^{2,p}\left(D\right)\cap W^{1,p}_0\left(D\right)\to L^p\left(D\right)$ given by \begin{equation}\label{e:divergence} A_p(v)(x)=\sum_{i,j=1}^n \frac{\partial}{\partial x_i}\left( a_{ij}(x)\frac{\partial v(x)}{\partial x_j}\right) \end{equation} is well defined and is the infinitesimal generator of an analytic semigroup of contractions $\{S(t)\}_{t\ge 0}$ in $L^p(D)$ (see e.g. \cite[Theorem 3.6 p. 215]{p}); we refer to Section \ref{s:prelim} for an additional discussion of this semigroup. \noindent The abstract formulation of \eqref{e:PE} takes the form \begin{equation} \label{e:AbEq} \begin{array}{l} x^{\, \prime}(t)=Ax(t)+f(t,x(t)), \quad t\in [0,T], \, x\in L^p(D)\\ \end{array} \end{equation} with \begin{equation}\label{e:fromutox} x(t):=u(t,\cdot), \quad \text{for } t\in [0,T]. \end{equation} As usual, \eqref{e:AbEq} is obtained from equation \eqref{e:PE} when $u$ is no longer considered as a function of the variables $t$ and $\xi$, but as a mapping $t \longmapsto u(t, \cdot)$ with $t \in [0,T]$ and $u(t, \cdot)$ in a suitable function space. For brevity, in \eqref{e:AbEq} we denote the linear operator $A_p$ simply by $A$, while the function $f \colon [0,T]\times L^p(D) \to L^p(D)$ includes all the additional terms of \eqref{e:PE} (see formula \eqref{e:fH} in Section \ref{s:application}). \noindent By a solution of \eqref{e:PE} we mean a function $u\colon [0,T]\times D \to \mathbb{R}$ with $u(t, \cdot) \in L^p(D)$ for all $t \in [0,T]$ such that the corresponding function $x$ introduced in \eqref{e:fromutox} belongs to $C([0,T], L^p(D))$ and is a solution of \eqref{e:AbEq} in integral form, i.e.
$x$ is a mild solution (see Definition \ref{d:mild}) of \eqref{e:AbEq}. \smallskip The existence of solutions to \eqref{e:PE} which satisfy given nonlocal conditions has attracted growing interest, since such trajectories can capture additional information about the dynamics. We mention, for instance, the mean value condition \begin{equation}\label{e:mean} u(0,\xi)=\frac{1}{T}\int_0^T u(t,\xi)\, dt, \quad \text{for a.a. }\xi \in D \end{equation} and the multipoint condition \begin{equation}\label{e:Cauchy} u(0,\xi)=\displaystyle{\sum_{i=1}^{q}}\alpha_i u(t_i, \xi) \text{ with }t_i \in (0,T], \, \alpha_i \in \mathbb{R}, \, i=1,...,q \quad \text{and a.a. }\xi\in D. \end{equation} \smallskip The linear parabolic case with no integral term and boundary conditions as in \eqref{e:Cauchy} is treated in Chabrowski \cite{Cw}; the study is based on a maximum principle and the use of a Green function. The model in Deng \cite{Deng} deals with the evolution of a small quantity of gas in a tube; the nonlocal condition is of integral type (see \eqref{e:mean}) and $t$ varies on a half-line; the nonlinear term is smooth and the asymptotic behaviour of the solution at infinity is also discussed. The nonlocal condition in Jackson \cite{J} is quite general and possibly nonlinear. Pao \cite{pao} treated the existence and multiplicity of solutions between a pair of ordered upper and lower solutions, again in a smooth model which also includes a nonlocal initial condition. Infante-Maciejewski \cite{IM} and Xue \cite{xue09} studied systems of two equations whose elliptic part is given by the Laplace operator; while the latter is based on a fixed point argument, the former uses a degree argument and proves the existence of positive solutions; strong growth restrictions on the terms are assumed in both papers.
The model introduced by Zhu-Li \cite{zhuli11} is quite general, again with integral nonlocal conditions, but the growth and regularity conditions are rather strong and given in implicit form. A degree argument is also used by Benedetti-Loi-Taddei \cite{BLT}, combined with an approximation solvability method; it seems especially useful for treating the case when the nonlinearity depends on some weighted mean value of the solution. Finally, the model proposed by Viorel \cite{Viorel} has an autonomous nonlinearity of polynomial type with a superlinear growth at infinity. We will prove the following result on the existence of solutions satisfying the above boundary conditions. Notice that our model has quite general regularity conditions and no growth restrictions on its terms. As usual, the symbol $\vert D\vert$ denotes the Lebesgue measure of the set $D$. \begin{theorem}\label{t:nonPEP} Consider equation \eqref{e:PE} with $a_{i,j}\in C^1(\overline D), \, i,j=1,...,n$ satisfying \eqref{e:sym}, \eqref{e:elliptic}. Assume the conditions in \eqref{e:gandk} and let $b>L+\vert D\vert$. Then problem \eqref{e:PE}-\eqref{e:D} \begin{itemize} \item[\emph{(i)}] admits a solution satisfying condition \eqref{e:mean}; \item[\emph{(ii)}] admits a solution satisfying condition \eqref{e:Cauchy} provided that \begin{equation*} \sum_{i=1}^q \vert \alpha_i\vert \le 1. \end{equation*} \end{itemize} \end{theorem} The paper contains a wider discussion which involves the quite general nonlocal condition \begin{equation}\label{e:nonPE} x(0)=M(x) \end{equation} where $M\colon C([0,T], L^p(D)) \to L^p(D)$ and $x$ is the function defined in \eqref{e:fromutox}. It is clear that \eqref{e:mean}, \eqref{e:Cauchy} and the Cauchy condition $x(0)=x_0=u(0, \cdot)$ satisfy \eqref{e:nonPE}.
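To make the nonlocal maps concrete, here is a small numerical sketch of our own (the grid sizes, the radius $r$ and the sampled trajectory are hypothetical): once a trajectory is stored as an array indexed by time and space, the mean value and multipoint conditions become simple array operations, and the computation also illustrates why the bound $\sum_{i=1}^q\vert\alpha_i\vert\le 1$ in Theorem \ref{t:nonPEP}\emph{(ii)} keeps $M(x)$ inside any ball containing the trajectory.

```python
import numpy as np

# Hypothetical discretization (illustration only): a trajectory x is an array
# of shape (time steps, space nodes), so the nonlocal maps M become averages.
nt, nx, r = 50, 40, 2.0
rng = np.random.default_rng(0)
x = rng.uniform(-r, r, size=(nt, nx))   # trajectory with sup norm <= r

def M_mean(x):
    """Mean value condition x(0) = (1/T) int_0^T x(t) dt (uniform quadrature)."""
    return x.mean(axis=0)

def M_multipoint(x, idx, alphas):
    """Multipoint condition x(0) = sum_i alpha_i x(t_i)."""
    return sum(a * x[i] for a, i in zip(alphas, idx))

# With sum |alpha_i| <= 1 the multipoint map sends the r-ball into itself,
# which is exactly the invariance exploited in part (ii) of the theorem.
alphas, idx = [0.5, -0.3, 0.2], [10, 25, 49]
print(np.max(np.abs(M_multipoint(x, idx, alphas))) <= r)  # True
print(np.max(np.abs(M_mean(x))) <= r)                     # True
```

The same invariance of a ball under $M$ reappears in abstract form as condition (m$_0$) in Section \ref{s:abstrat}.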
Notice moreover that \begin{itemize} \item[(i)] the periodic condition: $M(x)=x(T)=u(T, \cdot);$ \item[(ii)] the antiperiodic condition: $M(x)=-x(T)=-u(T, \cdot)$ \end{itemize} are special cases of \eqref{e:Cauchy}. We remark that \eqref{e:nonPE} also includes nonlinear conditions such as \begin{equation*} u(0,\xi)=G\left( \int_0^T h(t)u(t,\xi)\, dt\right), \quad \xi \in D \end{equation*} with suitable $h \colon [0,T] \to \mathbb{R}$ and $G \colon \mathbb{R} \to \mathbb{R}$, introduced for instance in \cite{BTV} (see Example 5) in the framework of age-population models. \smallskip The proof of Theorem \ref{t:nonPEP} is in Section \ref{s:application}, where there is a quite general discussion of problem \eqref{e:PE}-\eqref{e:D}-\eqref{e:nonPE} (see Theorem \ref{t:PEgen}). The results are based on a unifying approach of topological type in the abstract setting, i.e. for equation \eqref{e:equation} (see Section \ref{s:abstrat}); in particular, a degree argument is used there, which then makes it possible to avoid strong restrictions on the terms of \eqref{e:PE}, as already noted about Theorem \ref{t:nonPEP}. On the other hand, the involved vector fields need to be fixed-point free on their boundary; this property is obtained by a bounding function (i.e. Lyapunov-like) method which is original in this infinite dimensional setting; the method is discussed in Section \ref{s:bounding}. Section \ref{s:prelim} contains some notation and preliminary results. \smallskip \smallskip Several Banach spaces appear in this paper; we simply use the symbol $\Vert \cdot \Vert$ to denote the norm in all of them when the space involved is clear from the context. The symbol $E^*$ stands for the dual space of $E.$ \section{Preliminary results and notation}\label{s:prelim} \noindent Let $E$ be a Banach space.
A family of linear, bounded operators $S(t):E\to E$, for $t$ in the interval $[0, \infty)$, is called a \emph{$C_0$-semigroup} if the following conditions are satisfied: \begin{enumerate}[(a)] \item $S(0)=I$; \item $S(t+r)=S(t)S(r)=S(r)S(t)$ for $t,r \in [0, \infty)$; \item the function $t\to S(t)x$ is continuous on $[0, \infty)$, for every $x\in E$. \end{enumerate} The \emph{infinitesimal generator} of $S(t)$ is the linear operator $A$ defined by \begin{equation*} Ax=\lim_{h\to 0^+}\frac {\left ( S(h)-I\right)x}{h}, \, x\in D(A) \end{equation*} with \begin{equation*} D(A):=\left\{x\in E \, : \, \lim_{h\to 0^+}\frac {\left ( S(h)-I\right)x }{h} \text{ exists }\right\}. \end{equation*} We refer to \cite{Lun}, \cite{p}, \cite{vr2} for the theory of semigroups. Here we restrict to those properties which are needed in our investigation. \noindent As a straightforward consequence of \emph{(c)} (see e.g. \cite[Theorem 2.4 p.4]{p}), we obtain \begin{equation}\label{e:media} \lim_{h\to 0}\frac{1}{h}\int_t^{t+h}S(s)x \, ds=S(t)x, \enspace t\ge 0, \, x \in E. \end{equation} \smallskip \noindent Every $C_0$-semigroup is bounded in the space $\mathcal{L}(E)$ of linear, bounded operators, for $t$ in a bounded interval (see e.g. \cite[Theorem 2.2 p.4]{p}). When, in addition, $\Vert S(t) \Vert \leq 1$ for $t \geq 0$, the semigroup $\{S(t)\}_{t\ge 0}$ is said to be a \emph{contraction semigroup}. It is easy to see that every contraction semigroup satisfies \begin{equation}\label{e:contraction} S(t)r\overline B \subseteq r\overline B, \qquad \text{for } r>0, \, t\ge0, \end{equation} where $B$ is the open unit ball in $E$ centered at $0$.
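The semigroup axioms and the contraction property can be checked directly on a finite-dimensional stand-in (an illustration of our own, not the operator $A_p$ of the paper): the discrete Dirichlet Laplacian is symmetric negative definite, so its matrix exponential is a contraction semigroup on $(\mathbb{R}^n, \Vert\cdot\Vert_2)$.

```python
import numpy as np

# Finite-dimensional stand-in (illustration only): the discrete Dirichlet
# Laplacian A is symmetric negative definite, so S(t) = exp(tA) is a
# C_0-semigroup of contractions on (R^n, ||.||_2).
n = 30
h = 1.0 / (n + 1)
A = (np.diag(-2.0 * np.ones(n))
     + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h**2

lam, Q = np.linalg.eigh(A)                 # eigendecomposition of symmetric A
S = lambda t: (Q * np.exp(t * lam)) @ Q.T  # matrix exponential exp(tA)

t, s = 0.3, 0.7
print(np.allclose(S(0.0), np.eye(n)))      # (a) S(0) = I
print(np.allclose(S(t + s), S(t) @ S(s)))  # (b) semigroup law
print(np.linalg.norm(S(t), 2) <= 1.0)      # contraction: ||S(t)|| <= 1
```

Since all eigenvalues of $A$ are negative, $\Vert S(t)\Vert_2=e^{t\lambda_{\max}}<1$ for $t>0$, the finite-dimensional analogue of \eqref{e:contraction}.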
The semigroup $\{S(t)\}_{t\ge 0}$ is said to be: \begin{enumerate}[] \item \emph{compact} if $S(t)$ is compact, for $t>0$; \item \emph{uniformly differentiable} if the map $t \longmapsto S(t)$ defined on $[0, \infty)$ with values in $\mathcal{L}(E)$ is differentiable for every $t>0$ (see e.g. \cite[Definition 6.3.2]{vr2}). \end{enumerate} For $0<\theta\le \pi$ define the sector \begin{equation*} C_{\theta}:=\{z\in \mathbb{C} \, : \, -\theta <\text{arg } z <\theta\}. \end{equation*} Clearly \begin{equation*} \overline{C}_{\theta}=\{z\in \mathbb{C} \, : \, -\theta \le \text{arg } z \le \theta\}\cup \{0\}. \end{equation*} \begin{defn}\label{d:analytic} (\cite[Definition 7.1.1]{vr2}) The $C_0$-semigroup $\{S(t)\}_{t\ge 0}$ is said to be \emph{analytic} if there are $\theta \in (0, \pi]$ and a mapping $\tilde S \colon \overline{C}_{\theta}\to \mathcal{L}(E)$ such that \begin{enumerate}[(i)] \item $S(t)=\tilde S(t), \, t\ge 0;$ \item $\tilde S(z+w)=\tilde S(z)\tilde S(w)$ for $z, w \in \overline{C}_{\theta};$ \item $\displaystyle{\lim_{z\in \overline{C}_{\theta}, \, z\to 0}}\tilde S(z)x=x$ for $x \in E;$ \item the mapping $z \longmapsto \tilde S(z)$ is analytic from $\overline{C}_{\theta}$ to $\mathcal{L}(E).$ \end{enumerate} \end{defn} \begin{lemma}\label{l:graphnorm}(\cite[Corollary 3.5.1]{vr2}) \ Let $A\colon D(A) \subseteq E \to E$ be the infinitesimal generator of a C$_0$-semigroup. The map $\Vert \cdot \Vert_{D(A)} \colon D(A) \to \mathbb{R}$ given by \begin{equation} \Vert x\Vert_{D(A)} :=\Vert x \Vert+\Vert Ax \Vert \end{equation} defines a norm in $D(A)$, called the graph norm, with respect to which $D(A)$ is a Banach space. \end{lemma} \noindent We make use, in the sequel, of the following compactness condition which involves the graph norm.
\begin{proposition}\cite[Corollary 6.3.2]{vr2}\label{p:Vrabie} \ Let $A \colon D(A) \subseteq E \to E$ be the infinitesimal generator of a uniformly differentiable C$_0$-semigroup of contractions. If $D(A)$, endowed with the graph norm, is compactly embedded in $E$, then the semigroup generated by $A$ is compact. \end{proposition} \smallskip We complete this brief discussion of semigroup theory with an important result about the semigroup generated by the elliptic operator introduced in \eqref{e:divergence}. \begin{theorem}\label{t:compact} \ Consider the linear operator $A_p$ defined in \eqref{e:divergence}. When conditions \eqref{e:sym}, \eqref{e:elliptic} are satisfied, the semigroup generated by $A_p$ is compact. \end{theorem} \begin{proof} The linear operator $A_p$ is the infinitesimal generator of an analytic semigroup of contractions on $L^p(D)$ (see e.g. \cite[Theorem 3.6 p.215]{p}). In particular, the semigroup is uniformly differentiable. $A_p$ is also strongly elliptic of order $2$ by conditions \eqref{e:sym}, \eqref{e:elliptic}. Therefore, the following estimate holds (see e.g. \cite[Theorem 3.1 p. 212]{p}) \begin{equation*} \Vert v\Vert_{W^{2,p}(D)}\le C\left( \Vert Av\Vert_{L^p(D)}+\Vert v\Vert_{L^p(D)}\right)=C\Vert v\Vert_{D(A_p)}, \quad v \in D(A_p), \end{equation*} for some $C>0$. Hence, the Banach space $\left(D(A_p), \Vert \cdot \Vert_{D(A_p)}\right)$ is continuously embedded into the space $\left(D(A_p), \Vert \cdot \Vert_{W^{2,p}(D)}\right)$. By the Sobolev-Rellich-Kondrachov Theorem (see e.g. \cite[Theorem 1.5.4]{vr2}) the embedding of $\left(D(A_p), \Vert \cdot \Vert_{W^{2,p}(D)}\right)$ into $\left(D(A_p), \Vert \cdot \Vert_{L^p(D)}\right)$ is compact. It follows that the embedding of $\left(D(A_p), \Vert \cdot \Vert_{D(A_p)}\right)$ into $\left(D(A_p), \Vert \cdot \Vert_{L^p(D)}\right)$ is compact. We complete the proof by means of Proposition \ref{p:Vrabie}.
\end{proof} \smallskip \begin{defn}\label{d:semi}(\cite[Definition 4.2.1]{KOZ}) \ A sequence $\{f_n\}\subset L^1([a,b], E)$ is said to be \emph{semicompact} if it is integrably bounded, i.e. $\Vert f_n(t) \Vert \le \nu(t)$ for a.a. $t \in [a,b]$ and all $n \in \mathbb{N}$ with $\nu\in L^1([a,b])$, and the set $\{f_n(t)\}$ is relatively compact for a.a. $t \in [a,b]$. \end{defn} We now recall a useful compactness result in the space of continuous functions, involving semicompact sequences. \begin{theorem}\label{t:semicomp} (\cite[Theorem 5.1.1]{KOZ}) \ Let $G\colon L^1([a,b], E) \to C([a,b], E)$ be an operator satisfying the following conditions \begin{itemize} \item[{(i)}] \ there exists $\sigma \ge 0$ such that \begin{equation*} \Vert Gf - Gg \Vert \le \sigma \Vert f-g \Vert; \end{equation*} \item[{(ii)}] \ for any compact $K \subset E$ and sequence $\{f_n\}\subset L^1([a,b], E)$ such that $\{f_n(t)\} \subset K$ for a.a. $t \in [a,b]$, the weak convergence $f_n \rightharpoonup f_0$ in $L^1([a,b], E)$ implies that $Gf_n \to Gf_0$. \end{itemize} Then, for every semicompact sequence $\{f_n\}\subset L^1([a,b], E)$ the sequence $\{Gf_n\}$ is relatively compact in $C([a,b], E)$ and, moreover, if $f_n \rightharpoonup f_0$ then $Gf_n \to Gf_0$. \end{theorem} \begin{Ex} \label{ex:CO} \ Let $\{S(t)\}_{t\ge 0}$ be a (not necessarily compact) $C_0$-semigroup. Then, the associated Cauchy operator $G \colon L^1([a,b], E) \to C([a,b], E)$ defined by $$ Gf(t)=\int_a^t S(t-s)f(s) \, ds $$ satisfies conditions (i) and (ii) in Theorem \ref{t:semicomp} (see e.g. \cite[Lemma 4.2.1]{KOZ}). \end{Ex} A function $f \colon X\to Y$ between the Banach spaces $X$ and $Y$ is said to be completely continuous if it is continuous and maps bounded subsets of $X$ into relatively compact subsets of $Y$.
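The Cauchy operator of Example \ref{ex:CO} can also be approximated numerically; in the following sketch the generator, the truncated exponential series and the quadrature are all hypothetical stand-ins for the abstract objects.

```python
import numpy as np

# Numerical sketch of the Cauchy operator (Gf)(t) = int_a^t S(t-s) f(s) ds,
# with a hypothetical finite-dimensional contraction semigroup S(t) = exp(tA).
n = 20
A = -np.eye(n) - 0.5 * np.diag(np.ones(n - 1), 1)  # hypothetical stable generator

def S(t, v):
    """Apply exp(tA) to the vector v via a truncated power series."""
    out, term = v.copy(), v.copy()
    for j in range(1, 60):
        term = (t / j) * (A @ term)
        out = out + term
    return out

def G(f, a, b, m=400):
    """Left-endpoint quadrature of (Gf)(b) = int_a^b S(b-s) f(s) ds."""
    ds = (b - a) / m
    ss = a + ds * np.arange(m)
    return ds * sum(S(b - s, f(s)) for s in ss)

# Condition (i) of the theorem: since ||S(t)|| <= 1 for this generator,
# ||Gf - Gg||_C <= ||f - g||_{L^1} holds with sigma = 1.
f = lambda s: np.full(n, np.sin(s))
g0 = lambda s: np.zeros(n)
print(G(f, 0.0, 1.0).shape)  # (20,)
```

The Lipschitz constant $\sigma$ in condition (i) is inherited directly from the bound $\Vert S(t)\Vert\le 1$, which is the role the semigroup estimate plays in \cite[Lemma 4.2.1]{KOZ}.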
Given a nonempty, open and bounded set $U\subset X$ and a completely continuous map $g\colon\overline{U}\to X$ satisfying $x\neq g(x)$ for all $x\in\partial U$, the Leray-Schauder topological degree $\deg(i-g,\overline{U})$ of the corresponding vector field $i-g$ (where $i$ denotes the identity map on $X$) is well defined (see, e.g. \cite{KrZa, LS}) and satisfies the usual properties. \section{Nonlocal solutions in Banach spaces}\label{s:abstrat} In this part we deal with equation \eqref{e:AbEq}. Indeed, in order to keep the discussion as general as possible, we let $t$ vary in an arbitrary interval $[a,b]$ and $x$ in a reflexive Banach space $E$, i.e. we consider \begin{equation}\label{e:equation} x^{\, \prime}(t)=Ax(t)+f(t,x(t)), \quad t\in [a,b], \, x\in E. \end{equation} We assume that \begin{itemize} \item[\emph{(A)}] $A$ is a linear, not necessarily bounded, operator with $A: D(A) \subset E \rightarrow E$ and it generates a compact $C_0$-semigroup of contractions $S: [0,\infty) \to \mathcal{L}(E)$ (see Section \ref{s:prelim} for details); \item[\emph{(f)}] the function $f\colon [a,b]\times E \to E$ is continuous and for every bounded $\Omega \subset E$ there is $\nu_{\Omega} \in L^1([a,b])$ such that $\Vert f(t,x) \Vert \le \nu_{\Omega}(t)$ for a.a. $t \in [a,b]$ and $x \in \Omega$. \end{itemize} \smallskip We consider mild solutions of \eqref{e:equation}, that is, functions $x \in C([a,b],E)$ which satisfy \eqref{e:equation} in integral form; more precisely \begin{defn}\label{d:mild} A function $x\in C([a,b], E)$ is said to be a mild solution of the equation \eqref{e:equation} if \begin{equation} x(t)=S(t-a)x(a)+\int_a^t S(t-s)f(s, x(s))\, ds, \quad t \in [a,b].
\end{equation} \end{defn} We combine equation \eqref{e:equation} with a nonlocal condition \begin{equation} \label{e:nonlocal} \begin{array}{l} x(a)=M(x), \end{array} \end{equation} where $M \colon C([a,b], E) \to E.$ We assume that (see formula \eqref{e:contraction} for the symbol $B$) \begin{itemize} \item[(\emph{m$_0$})] there exists a positive constant $r$ such that $M\left( C([a,b], r\overline B)\right) \subseteq r\overline B;$ \item[(\emph{m$_1$})] if $\{x_n\} \subset C([a,b], r\overline B)$ and $x_n(t) \to x(t)$ for $t \in (a,b]$ with $x \in C([a,b], E),$ then $M(x_n)\to M(x)$ as $n \to \infty$. \end{itemize} \smallskip \begin{remark}\label{r:M} By condition (m$_1$), the function $M$ is clearly continuous when restricted to $C([a,b], r\overline B).$ \end{remark} \smallskip The study of problem \eqref{e:equation}-\eqref{e:nonlocal} lends itself very naturally to a topological method. It was initiated by Byszewski \cite{BY}, where the nonlocal condition is of the type \begin{equation*} x(a)+g(t_1, ..., t_p, x(t_1), ..., x(t_p))=x_0, \quad a<t_1<...<t_p\le b, \end{equation*} hence possibly nonlinear. Additional results can be found in \cite{BLT}, \cite{BP}, \cite{CR}, \cite{KOWY}, \cite{LLX}, \cite{xue09} and \cite{ZSL12} (see also the references therein). A fixed point theorem is used in all these papers, such as the Banach contraction principle, the Schauder fixed point theorem or the fixed point theorem for condensing maps; hence, strong growth and regularity assumptions are needed, such as the compactness of the function involved in the nonlocal condition or the sublinearity of the nonlinear term with respect to the variable $x$. \noindent A topological degree was introduced by \'{C}wiszewski-Kokocki \cite{CK} (see also \cite{Koko}) for the study of a periodic problem.
A new approximation solvability method, involving a degree argument, was used in Benedetti-Loi-Taddei \cite{BLT}; it allows one to treat nonlinear terms satisfying the very general growth condition in \emph{(f)}, but requires the continuity of $f(t, \cdot)$ with respect to the weak topology in $E$ for a.a. $t$ and the linearity of $M$. Although, like the above-mentioned results, this paper uses the invariance of an appropriate topological degree under homotopy, our new technique allows us to assume the very general growth condition \emph{(f)} and we do not need any compactness or linearity conditions on $M$. However, in order that such an invariance holds, the vector fields need to be fixed-point free on the boundary of their domains (see Section \ref{s:prelim}). This is usually known as the \emph{transversality condition}, which is strictly related to the notion of bounding function (i.e. Lyapunov-like function, see Definition \ref{d:bf} and Section \ref{s:bounding}). \smallskip \noindent In some cases (see e.g. \cite{AOmair}, \cite{CR}, \cite{KOWY} and \cite{xue09}) the discussion took place in the multivalued setting. We claim that, with minor changes, the present investigation can be generalized to multivalued dynamics. \begin{defn}\label{d:bf} Let $K \subset E$ be nonempty, open and bounded. A function $V \colon E \to \mathbb{R}$ satisfying \begin{itemize} \item[\emph{(V1)}] $V\vert_{\partial K}=0, \quad V\vert_{K}\le 0$; \item[\emph{(V2)}] $\displaystyle{\liminf_{h \to 0^-}}\frac{V(x+hf(t,x))}{h}<0$, for all $t \in (a,b]$ and $x \in \partial K$; \end{itemize} is said to be a bounding function for equation \eqref{e:equation}. \end{defn} \noindent By means of this tool we can use the Leray-Schauder degree theory in order to prove the following result. \begin{theorem}\label{t:main} \ Consider the b.v.p.
\eqref{e:equation}-\eqref{e:nonlocal} under conditions \emph{(A)}, \emph{(f)}, \emph{(m$_0$)} and \emph{(m$_1$)}. Assume there exists a locally Lipschitzian bounding function $V \colon E \to \mathbb{R}$ for \eqref{e:equation} with $K:=rB$ and $r$ as in \emph{(m$_0$)}. \noindent Then problem \eqref{e:equation}-\eqref{e:nonlocal} admits at least one mild solution $x \in C([a,b], r\overline B)$. \end{theorem} \begin{proof}The proof splits into two parts. By means of the homotopic invariance of the Leray-Schauder degree, we first solve an initial value problem associated with equation \eqref{e:equation} in an arbitrary interval $[a+\frac 1m, b]$ with $m \in \mathbb{N}$ sufficiently large (see \eqref{e:x}-\eqref{e:x(a+1/m)} with $\lambda=1$). The compactness of the semigroup $S(t)$ is fundamental in this reasoning and this is why we restrict to the interval $[a+\frac 1m, b]$. In this way we obtain a sequence $\{x_m\}$ of continuous functions taking values in $r\overline B$. We then obtain a solution of the original problem \eqref{e:equation}-\eqref{e:nonlocal} by passing to the limit in the sequence $\{x_m\}$. Let $Q:=C([a,b], r\overline B)$ with $r>0$ as in \emph{(m$_0$)} and $m \in \mathbb{N}$ such that $a+\frac 1m <b.$ Notice that, since $\{S(t)\}_{t\ge 0}$ is a semigroup of contractions, it follows that $S(t)r\overline B\subseteq r\overline B, \, t \ge 0.$ \smallskip \noindent STEP 1. \ We define the map $\mathcal{T}_m \, :\, Q\times [0,1]\to C([a,b], E)$ as follows \begin{equation}\label{e:DTm} \mathcal{T}_m(q, \lambda)(t)=\left\{ \begin{array}{ll}\lambda S(\frac 1m)M(q) & t\in [a, a+\frac 1m]\\ \lambda S(t-a)M(q)+\lambda\int_{a+\frac 1m}^t S(t-s)f(s,q(s))ds & t\in[a+\frac 1m, b], \end{array} \right.
\end{equation} which, according to \emph{(A)} and \emph{(f)}, is well defined. \\ With $q, \, \lambda$ and $m$ as before, let $y_{q,\lambda}, \eta_{q,\lambda} \colon [a+\frac 1m, b] \to E$ be given by \begin{equation}\label{e:yeta} y_{q,\lambda}(t):=\lambda S(t-a)M(q), \qquad \eta_{q,\lambda}(t):=\lambda\int_{a+\frac 1m}^t S(t-s)f(s,q(s))ds; \end{equation} notice that \begin{equation*} \mathcal{T}_m(q, \lambda)(t)=y_{q,\lambda}(t)+ \eta_{q,\lambda}(t), \qquad t\in[a+\frac 1m, b]. \end{equation*} By the equality \begin{equation*} S(t-\frac 1m -a)\left[\lambda S(\frac1m) M(q)\right]= \lambda S(t-a)M(q), \qquad t\in[a+\frac 1m, b] \end{equation*} we obtain that the function $x:=\mathcal{T}_m(q, \lambda)$ is the unique solution (see \cite[Corollary 2.2 p.106]{p}) of the linear initial value problem \begin{equation}\label{e:ivpz} \left\{\begin{array}{rl} z'(t)=&Az(t)+\lambda f(t,q(t)), \enspace t\in[a+\frac 1m,b]\\ z(a+\frac 1m)=&\lambda S(\frac 1m) M(q). \end{array} \right. \end{equation} In particular, every fixed point $x=\mathcal{T}_m(x, \lambda),$ with $x\in Q$ and $\lambda \in [0,1],$ is a mild solution of the equation \begin{equation}\label{e:x} x^{\prime}(t)=Ax(t)+\lambda f(t,x(t)), \qquad t\in [a+\frac 1m, b] \end{equation} (see Definition \ref{d:mild}) which satisfies \begin{equation}\label{e:x(a+1/m)} x(a+\frac 1m)=\lambda S(\frac 1m)M(x). \end{equation} We will show that $\mathcal{T}_m(\cdot, 1)$ has a fixed point $x_m=\mathcal{T}_m(x_m, 1),$ with $x_m \in Q.$\\ The use of a topological method then arises quite naturally and hence we investigate, in the following, the regularity properties of the map $\mathcal{T}_m$. \smallskip \emph{(1a)} \ First we show that $\mathcal{T}_m$ is continuous.
In fact, let $\{q_n\} \subset Q$ be such that $q_n \to q$ in $C([a,b], E)$ and let $\{\lambda_n\}\subset [0,1]$ with $\lambda_n \to \lambda$. For every $n \in \mathbb{N}$, the function $x_n :=\mathcal{T}_m(q_n, \lambda_n)$ is such that \begin{equation}\label{e:xn} x_n(t)=\left\{ \begin{array}{ll} \lambda_n S(\frac 1m )M(q_n), & t\in [a, a+\frac 1m]\\ y_{q_n,\lambda_n}(t)+\eta_{q_n,\lambda_n}(t) & t\in [a+\frac 1m, b] \end{array} \right. \end{equation} (see \eqref{e:yeta}). Put $x: =\mathcal{T}_m(q, \lambda)$. Since, by \emph{(A)}, $\Vert S(t) \Vert \le 1$ for $t\ge 0,$ when $t\in [a, a+\frac 1m]$ we have that \begin{equation*} \begin{array}{rl} \Vert x_n(t) -x(t) \Vert =&\Vert \lambda_n S(\frac 1m)M(q_n) -\lambda S(\frac 1m)M(q)\Vert \\ \le &\vert \lambda_n -\lambda\vert \Vert S(\frac 1m) M(q_n) \Vert +\lambda \Vert S(\frac 1m)\left[M(q_n) -M(q) \right] \Vert \\ \le & \vert \lambda_n -\lambda\vert \Vert M(q_n) \Vert +\lambda \Vert M(q_n) -M(q) \Vert . \end{array} \end{equation*} Since $M(q_n) \to M(q)$ in $E$ (see Remark \ref{r:M}) and hence, in particular, $\{M(q_n)\}$ is bounded in $E$, we obtain that $x_n \to x$ in $C([a, a+\frac 1m], E)$.\\ Similarly it is easy to prove that \begin{equation}\label{e:yconv} y_{q_n, \lambda_n}\to y_{q,\lambda} \enspace \text{in } C([a+\frac 1m, b], E). \end{equation} Notice that, by \emph{(f)} and the convergence of $\{q_n\}$ to $q,$ it follows that $f(t, q_n(t)) \to f(t, q(t)), \, t \in [a,b].$ Again by \emph{(f)}, the convergence is also dominated since \begin{equation}\label{e:fqn} \Vert f(t, q_n(t)) \Vert \le \nu_{r\overline B}(t), \quad \text{for a.a. } t \in [a,b].
\end{equation} By the Lebesgue dominated convergence theorem we conclude that, for every $ t \in [a+\frac 1 m,b]$, \begin{equation*} \begin{array}{rl} \Vert \eta_{q_n, \lambda_n}(t) -\eta_{q, \lambda}(t) \Vert \le&\vert \lambda_n - \lambda \vert \int_{a+\frac 1 m}^t \Vert S(t-s) f(s, q_n(s)) \Vert \, ds + \\ & \hskip .2 cm \lambda \int_{a+\frac 1 m}^t \Vert S(t-s) [f(s, q_n(s)) - f(s, q(s))] \Vert \, ds \\ \le & \vert \lambda_n -\lambda\vert \int_{a+\frac 1m}^{b} \nu_{r\overline B}(s) \, ds + \lambda \int_{a+\frac 1 m}^b \Vert f(s, q_n(s)) - f(s, q(s)) \Vert \, ds \to 0. \end{array} \end{equation*} Hence $\eta_{q_n, \lambda_n}\to \eta_{q,\lambda}$ in $ C([a+\frac 1m, b], E) $ and, together with \eqref{e:yconv}, this proves that $x_n \to x$ in $C([a,b], E)$, i.e.\ $\mathcal{T}_m$ is a continuous operator. \emph{(1b)} \ Now we show that $\mathcal{T}_m$ is compact. To this aim, consider a sequence $\{x_n\}\subset \mathcal{T}_m(Q\times [0,1])$; this implies the existence of $\{q_n\} \subset Q$ and $\{\lambda_n\}\subset [0, 1]$ such that $x_n=\mathcal{T}_m(q_n, \lambda_n)$, i.e.\ $x_n$ satisfies condition \eqref{e:xn}, for all $n \in \mathbb{N}$. With no loss of generality we can restrict to a subsequence, as usual denoted as the sequence itself, such that $\lambda_n \to \lambda \in [0,1]$. \noindent First consider the interval $[a, a+\frac 1m]$. Since, by \emph{(m$_0$)}, the sequence $\{M(q_n)\}\subset r\overline B$ is bounded and the semigroup $\{S(t)\}_{t\ge 0}$ is compact, we obtain that $\{x_n\}$ is relatively compact in $C([a, a+\frac 1m], E)$. \noindent Let $t\in [a+\frac 1m, b]$. As before, the set $\{y_{q_n, \lambda_n}(t)\}\subset E$ is relatively compact in $E$. We prove now that $\{y_{q_n, \lambda_n}\}$ is equicontinuous in $ [a+\frac 1m, b]$. Let, in fact, $t \in [a+\frac 1m, b]$ and $\varepsilon >0$.
The compactness of $\{S(t)\}_{t\ge 0}$ implies the continuity of $S \colon (0, \infty) \to \mathcal{L}(E) $ in the uniform operator topology (see \cite[Theorem 3.2 p. 49]{p}). Hence we can find $\delta=\delta(\varepsilon)>0$ satisfying \begin{equation*} \Vert S(t-a)-S(t^{\, \prime}-a)\Vert \le\frac{\varepsilon }{r}, \quad \vert t-t^{\, \prime}\vert <\delta \text{ with } t^{\, \prime}\in [a+\frac 1m, b]. \end{equation*} Therefore, by \emph{(m$_0$)}, \begin{equation*} \Vert y_{q_n, \lambda_n}(t) -y_{q_n, \lambda_n}(t^{\, \prime})\Vert \le \lambda_n \Vert S(t-a)-S(t^{\, \prime}-a)\Vert \Vert M(q_n)\Vert \le\frac{\varepsilon }{r} \, \cdot \, r=\varepsilon, \end{equation*} for every $ n\in \mathbb{N} $ and $ t,t' \in [a+\frac 1m, b] $ with $ \vert t-t^{\, \prime}\vert <\delta$.\\ Then, by the abstract version of the Ascoli-Arzel\`a theorem, we obtain that \begin{equation}\label{e:ycompact} \{y_{q_n, \lambda_n}\} \text{ is relatively compact in } C([a+\frac 1m, b], E). \end{equation} \noindent Consider now the sequence $\{\eta_{q_n, \lambda_n}\}$ defined in \eqref{e:yeta}. Fix $t\in [a+\frac 1m, b]$ and let $h_{n,t}(s):=S(t-s)f(s, q_n(s))$ for $n \in \mathbb{N}$ and $s\in [a+\frac 1m, t]$. According to \eqref{e:fqn} and since, by \emph{(A)}, $ \Vert S(t) \Vert \le 1 $ for $ t\ge 0$, we have that $\Vert h_{n,t}(s)\Vert \le \nu_{r\overline B}(s)$ for a.a. $s\in [a+\frac 1m, t]$; hence $\{h_{n,t}\}$ is integrably bounded in $ [a+\frac 1m, t]$. Moreover, condition \eqref{e:fqn} implies that $\{f(s, q_n(s))\}$ is bounded in $E$ for a.a. $s\in [a+\frac 1m, t]$ and, by the compactness of the semigroup $\{S(t)\}_{t\ge 0}$, we obtain that the sequence $\{h_{n,t}(s)\}$ is relatively compact for a.a. $s \in [a+\frac 1m, t]$.
In conclusion, the sequence $\{h_{n,t}\}$ is semicompact in $ [a+\frac 1m, t]$ (see Definition \ref{d:semi}).\\ \noindent Let us now introduce the operator $\hat S \colon L^1( [a+\frac 1m, t], E) \to C( [a+\frac 1m, t], E)$ given by \begin{equation*} \hat S \varphi (\tau):= \int_{a+\frac 1m}^{\tau} \varphi(s)\, ds. \end{equation*} It is easy to see that $\hat S$ satisfies both conditions \emph{(i)} and \emph{(ii)} of Theorem \ref{t:semicomp}; indeed, $\hat S$ is the Cauchy operator (see Example \ref{ex:CO}) in the special case when the semigroup is identically equal to $I$. By virtue of Theorem \ref{t:semicomp}, the sequence $\{\eta_{q_n, \lambda_n}\}$ is relatively compact in $C([a+\frac 1m, t], E)$. Hence, in particular, $\{\eta_{q_n, \lambda_n}(t)\}$ is relatively compact in $E$ for all $t \in [a+\frac 1m, b]$.\\ \noindent We prove now that $\{\eta_{q_n, \lambda_n}\}$ is equicontinuous. In fact, fix $t \in [a+\frac 1m, b]$ and let $t^{\, \prime}\in [a+\frac 1m, b]$ with $t^{\, \prime}>t$. Notice that \begin{equation*} \Vert \eta_{q_n, \lambda_n}(t^{\, \prime}) -\eta_{q_n, \lambda_n}(t) \Vert = \lambda_n\Vert\int_{a+\frac 1m}^{t^{\, \prime}} S(t^{\, \prime}-s)f(s, q_n(s))\, ds-\int_{a+\frac 1m}^{t} S(t-s)f(s, q_n(s))\, ds\Vert.
\end{equation*} For every given $\sigma \in (0, t-\frac 1m-a)$, we can then estimate $\Vert \eta_{q_n, \lambda_n}(t^{\, \prime}) -\eta_{q_n, \lambda_n}(t) \Vert$ by means of the sum of the following three integrals: \begin{equation}\label{e:zn} \begin{array}{rl} \Vert \eta_{q_n, \lambda_n}(t^{\, \prime}) -\eta_{q_n, \lambda_n}(t) \Vert\le & \lambda_n \Vert \int_{a+\frac 1m}^{t-\sigma} [S(t^{\, \prime}-s)-S(t-s)]f(s, q_n(s))\, ds\Vert \\ \\ +& \lambda_n \Vert \int_{t-\sigma}^{t} [S(t^{\, \prime}-s)-S(t-s)]f(s, q_n(s))\, ds\Vert \\ \\ + & \lambda_n \Vert \int_{t}^{t^{\, \prime}} S(t^{\, \prime}-s)f(s, q_n(s))\, ds\Vert. \end{array} \end{equation} Now fix $\varepsilon>0$. With no loss of generality we can take $\sigma >0$ such that \begin{equation}\label{e:6D} \int_{t-\sigma}^t \nu_{r \overline B}(s) \, ds <\frac{\varepsilon}{6}. \end{equation} Let us start from the first integral in \eqref{e:zn}. Notice that $t^{\, \prime}-s, t-s \in [\sigma, t^{\, \prime}-\frac 1m-a]$ for $s \in [a+\frac 1m, t-\sigma]$. Since $\{S(t)\}_{t\ge 0}$ is compact, the map $t \longmapsto S(t)$ is also uniformly continuous in the compact interval $ [\sigma, t^{\, \prime}-\frac 1m-a]$. We can then find $\sigma_1(\varepsilon)>0$ such that \begin{equation*} \Vert S(\tau^{\, \prime})-S(\tau)\Vert \le \frac{\varepsilon}{3\Vert \nu_{r \overline B}\Vert}, \quad \vert \tau^{\, \prime}-\tau\vert <\sigma_1, \quad \tau^{\, \prime}, \tau \in \left[\sigma, t^{\, \prime}-\frac 1m -a\right].
\end{equation*} Since $t^{\, \prime}-s-(t-s)=t^{\, \prime}-t$ for $s\in [a+\frac 1m, t-\sigma]$, when $0<t^{\prime}-t <\sigma_1$, by \eqref{e:fqn} and since $ \Vert S(t) \Vert \le 1 $ for $ t\ge 0$, we have \begin{equation*} \begin{array}{rl} \lambda_n \Vert \int_{a+\frac 1m}^{t-\sigma} [S(t^{\, \prime}-s)-S(t-s)]f(s, q_n(s))\, ds\Vert \le& \int_{a+\frac 1m}^{t-\sigma} \Vert S(t^{\, \prime}-s)-S(t-s)\Vert \Vert f(s, q_n(s))\Vert \, ds \\ \le & \frac{\varepsilon}{3\Vert \nu_{r \overline B}\Vert }\int_a^b \Vert f(s, q_n(s)) \Vert \, ds \le \frac{\varepsilon}{3}. \end{array} \end{equation*} According to \eqref{e:6D}, the second integral in \eqref{e:zn} is such that \begin{equation*} \begin{array}{rl} \lambda_n\Vert \int_{t-\sigma}^t [S(t^{\, \prime}-s)-S(t-s)]f(s, q_n(s))\, ds\Vert \le & \int_{t-\sigma}^t [\Vert S(t^{\, \prime}-s)\Vert + \Vert S(t-s)\Vert] \Vert f(s, q_n(s))\Vert \, ds\\ \le & 2\int_{t-\sigma}^t \nu_{r\overline B}(s) \, ds\le \frac{\varepsilon}{3}. \end{array} \end{equation*} Moreover, let $\sigma_2>0$ be such that \begin{equation*} \int_{t}^{t^{\, \prime}}\nu_{r\overline B}(s) \, ds \le \frac{\varepsilon}{3}, \quad \text{for } t^{\, \prime}-t<\sigma_2; \end{equation*} hence we obtain the following estimate for the third integral in \eqref{e:zn}: \begin{equation*} \lambda_n\Vert \int_{t}^{t^{\, \prime}} S(t^{\, \prime}-s)f(s, q_n(s)) \, ds \Vert \le \int_{t}^{t^{\, \prime}}\nu_{r\overline B}(s) \, ds \le \frac{\varepsilon}{3}.
\end{equation*} Consequently, when $t^{\, \prime}-t <\min\{\sigma_1, \sigma_2\}$, we have that $\Vert \eta_{q_n, \lambda_n}(t^{\, \prime})-\eta_{q_n, \lambda_n}(t)\Vert <\varepsilon$; since the reasoning is similar in the case $t^{\, \prime}\in [a+\frac 1m, t]$, we conclude that $\{\eta_{q_n, \lambda_n}\}$ is equicontinuous in $[a+\frac 1m, b]$, and again we can use the abstract version of the Ascoli-Arzel\`a theorem in order to show that $\{\eta_{q_n, \lambda_n}\}$ is relatively compact in $C([a+\frac 1m, b], E)$. Therefore, by \eqref{e:ycompact} and the estimates in the interval $[a, a+\frac 1m]$ (see the beginning of \emph{(1b)}), we have that $\{x_n\}$ is relatively compact, and so the operator $\mathcal{T}_m$ is compact. In conclusion, $\mathcal{T}_m$ is completely continuous since it is both continuous and compact. \emph{(1c)} \ For every $q\in Q$ we have that $\mathcal{T}_m(q, 0)\equiv 0$ and, since $0\in rB$, this implies that $\mathcal{T}_m(Q, 0) \subset \mbox{int} \, Q$. \emph{(1d)} \ We now apply a degree argument to study the fixed points of $\mathcal{T}_m(\cdot, 1)$, and for this we need to show that $\mathcal{T}_m(\cdot, \lambda)$ is fixed-point free on $\partial Q$ for every $\lambda \in [0,1]$. The case $\lambda=0$ was already treated in \emph{(1c)}. Any possible fixed point $x\in \partial Q$ for $\lambda=1$, i.e.\ satisfying $x = \mathcal{T}_m(x, 1)$, is already a solution of our problem. So, it remains to show that $\mathcal{T}_m(\cdot, \lambda)$ is fixed-point free on $\partial Q$ only for $\lambda \in (0,1)$. We reason by contradiction and assume the existence of $(x, \lambda) \in \partial Q \times (0,1)$ satisfying $x=\mathcal{T}_m(x, \lambda)$. According to the definition of $Q$, there exists $t_0 \in [a,b]$ such that $\Vert x(t_0) \Vert=r.
$ By \emph{(m$_0$)} and since $\{S(t)\}_{t\ge 0} $ is a semigroup of contractions, the case $t_0 \in [a, a+\frac 1m]$ leads to the contradictory conclusion \begin{equation*} r=\Vert x(t_0) \Vert= \lambda \Vert S(\frac 1m)M(x)\Vert \le \lambda r <r. \end{equation*} Consequently $t_0 \in (a+\frac 1m, b]$; we recall that $x(t)$ is a mild solution of the equation \begin{equation}\label{e:g0} x^{\prime}(t)=Ax(t)+h(t,x(t)), \quad t\in [a+\frac 1m, b], \end{equation} where \begin{equation}\label{e:flambda} h(t,x):=\lambda f(t,x), \quad (t,x)\in [a,b]\times E. \end{equation} Notice that $ h $ satisfies conditions \emph{(f)}. We show that $ V $ is a bounding function for \eqref{e:g0} with $ K=rB$; in particular, that $ V $ satisfies condition \emph{(V2)}. In fact, let $x \in E$ with $\Vert x \Vert =r $ and $t \in (a, b]$; since $ V $ is a bounding function for \eqref{e:equation}, there is a sequence $\{h_n\}\subset \mathbb{R}$, with $h_n \to 0^-$, such that \begin{equation*} \lim_{n\to \infty}\frac{V(x+h_nf(t,x))}{h_n}<0. \end{equation*} Put $k_n:=\frac{ h_n}{\lambda}, \, n \in \mathbb{N}$; we have that $k_n \to 0^-$ and \begin{equation*} \lim_{n\to \infty}\frac{V(x+ k_nh(t,x))}{k_n}=\lim_{n\to \infty}\frac{V(x+ k_n \lambda f(t,x))}{k_n}=\lambda \, \lim_{n\to \infty}\frac{V(x+ h_nf(t,x))}{h_n}<0. \end{equation*} Hence $ V $ is a bounding function for \eqref{e:g0} and, by applying Theorem \ref{t:bf} to \eqref{e:g0}, we obtain that $\Vert x(t_0) \Vert <r$, contradicting $\Vert x(t_0) \Vert =r$. In conclusion, $\Vert x(t) \Vert <r $ for all $t \in [a+\frac 1m, b]$, and hence $\mathcal{T}_m$ is fixed-point free on the boundary $\partial Q$. \emph{(1e)} \ The properties already proved imply that $\mathcal{T}_m$ is an admissible homotopy which connects the fields $\mathcal{T}_m(\cdot, 0)$ and $\mathcal{T}_m(\cdot, 1)$.
According to the homotopy invariance and the normalization property of the Leray-Schauder topological degree, we then obtain \begin{equation*} \deg( i-\mathcal{T}_m(\cdot, 1), \, Q)=\deg( i-\mathcal{T}_m(\cdot, 0), \, Q)=1, \end{equation*} and hence there exists $x_m \in Q$ such that $x_m=\mathcal{T}_m(x_m, 1)$, i.e.\ $x_m(t) \equiv S(\frac 1m)M(x_m) $ for $t \in[a, a+\frac 1m]$ and $x_m$ is a solution of the initial value problem \eqref{e:x}-\eqref{e:x(a+1/m)} for $\lambda=1$ on $[a+\frac 1m, b]$; moreover, $\Vert x_m(t) \Vert \le r$ for $t \in [a,b]$, with $r$ introduced in \emph{(m$_0$)}. \noindent STEP 2. \ In this part we consider the sequence of functions $\{x_m\}$ obtained in the previous step and, by passing to the limit, we get a solution of problem \eqref{e:equation}-\eqref{e:nonlocal}. According to \eqref{e:DTm}, we recall that \begin{equation}\label{e:xm} x_m(t) =\left\{\begin{array}{rl} S(\frac 1m)M(x_m), \quad& t\in [a, a+\frac 1m]\\ S(t-a)M(x_m)+\int_{a+\frac{1}{m}}^t S(t-s)f(s, x_m(s)) \, ds, \quad& t \in [a+ \frac 1m, b]. \end{array} \right. \end{equation} \smallskip \emph{(2a)} \ Take $\alpha \in (a,b]$ and let $m$ be sufficiently large so that $a+\frac 1m <\alpha$. Since $\{x_m\}\subset Q$, according to \emph{(f)} we have that $\Vert f(t, x_m(t)) \Vert \le\nu_{ r\overline B}(t)$ for a.a. $t \in [a,b]$. Hence, with a reasoning similar to that in \emph{(1b)}, we can show that $\{x_m\}$ is relatively compact in $C([\alpha, b], E)$. \smallskip \emph{(2b)} \ Fix a decreasing sequence $\{ a_n \}\subset (a,b)$ satisfying $a_n \to a$ as $n \to \infty$. According to \emph{(2a)}, $ \{ x_m\} $ is relatively compact in $ C([a_1,b],E)$; hence there is a subsequence $ \{ x_m^{(1)}\} $ converging in $ C([a_1,b],E) $ to a continuous function $ \overline x:[a_1,b] \to E.
$ Similarly, there exists a subsequence $ \{ x_m^{(2)} \} $ of $\{ x_{m}^{(1)} \}$ converging in $ C([a_2,b],E) $ to a continuous function $ \overline {\overline x}: [a_2,b] \to E $ and, by the uniqueness of the limit, $ \overline x(t) = \overline {\overline x}(t) $ for $ t \in [a_1,b]$. Proceeding by induction, for every $n \in \mathbb{N}$ we can find a sequence $\{ x_{m}^{(n)} \}$ which is a subsequence of $\{ x_{m}^{(n-1)} \}$ and converges in $C([a_n, b], E)$. By the uniqueness of the limit, we can define a continuous function $ \tilde x:(a,b] \to E$ and, using a Cantor diagonalization argument, obtain that $ x_n^{(n)} (t) \to \tilde x(t)$ for $ t\in (a, b]$. By the continuity of $ f $ (see condition \emph{(f)}), this implies that $f(t, x_{n}^{(n)} (t)) \to f(t, \tilde x(t))$ for all $t \in (a,b]$ and also that $t \longmapsto f(t, \tilde x(t)) $ is continuous on $(a, b]$. Moreover, since $\{ x_m \} \subset Q$, again by \emph{(f)} we have that $\Vert f(t, x_{n}^{(n)}(t)) \Vert \le \nu_{r\overline B}(t)$ and hence $\Vert f(t, \tilde x(t)) \Vert \le \nu_{r\overline B}(t)$ for a.a. $t\in (a,b]$, with $\nu_{r\overline B} \in L^1([a,b])$. \smallskip \emph{(2c)} \ Since $\{ x_{n}^{(n)}\} \subset C([a,b], r\overline B)$, by \emph{(m$_0$)} we obtain that $\{M(x_{n}^{(n)})\}\subset r\overline B$. Hence, by the reflexivity of $ E$, there is a subsequence, as usual denoted as the sequence itself, satisfying \begin{equation}\label{e:x0} M\left(x_{n}^{(n)}\right)\rightharpoonup x_0 \enspace \mbox{as } n \to \infty, \enspace \text{with } \Vert x_0 \Vert \le r. \end{equation} \smallskip \emph{(2d)} \ Let us now introduce the continuous function \begin{equation}\label{e:solution} x(t):=S(t-a)x_0+\int_a^t S(t-s)f(s, \tilde x(s)) \, ds, \qquad t \in [a,b], \end{equation} with $x_0$ as in \emph{(2c)} and $\tilde x$ defined in \emph{(2b)}.
The integral in \eqref{e:solution} is well defined by the regularity of the function $t \longmapsto f(t, \tilde x(t)) $ on $[a, b] $ shown in \emph{(2b)}. We claim that \begin{equation}\label{e:conv} x_{n}^{(n)}(t) \to x(t), \enspace t \in (a,b], \qquad \mbox{as } n \to \infty. \end{equation} In fact, by \eqref{e:xm} and the definition of $ \{ x_{n}^{(n)}\}$, there is a sequence $ \{p_n\}$, with $ 0<p_n \le 1/n, \, n \in \mathbb{N}$, such that \begin{equation}\label{e:xpn} x_{n}^{(n)}(t)=\left\{ \begin{array}{ll} S(p_n)M(x_{n}^{(n)}), & t\in [a, a+p_n]\\ S(t-a)M(x_{n}^{(n)})+\int_{a+p_n}^t S(t-s)f(s,x_{n}^{(n)}(s))\, ds, & t\in[a+p_n, b]. \end{array} \right. \end{equation} Let $t \in (a,b]$. By \eqref{e:x0} we have that \begin{equation*} S(t-a)M(x_{n}^{(n)})\rightharpoonup S(t-a)x_0. \end{equation*} Now take $ \overline n $ such that $ t \in [a+p_n, b] $ for every $ n \ge \overline n$. Notice that \begin{equation*} \int_{a+p_n }^t S(t-s)f(s, x_{n}^{(n)}(s)) \, ds= \int_a^t S(t-s)f(s, x_{n}^{(n)}(s)) \, ds-\int_a^{a+p_n } S(t-s)f(s, x_{n}^{(n)}(s)) \, ds. \end{equation*} By the properties shown in \emph{(2b)}, we have \begin{equation*} S(t-s)f(s, x_{n}^{(n)}(s)) \to S(t-s)f(s, \tilde x(s)), \enspace s \in (a, t]. \end{equation*} Moreover, $\Vert S(t-s)f(s, x_{n}^{(n)}(s)) \Vert \le \nu_{r\overline B}(s)$ a.e. in $[a,t]$, with $\nu_{r\overline B} \in L^1([a, t])$. So the Lebesgue dominated convergence theorem leads to \begin{equation*} \int_a^t S(t-s)f(s, x_{n}^{(n)}(s)) \, ds \to \int_a^t S(t-s)f(s, \tilde x(s)) \, ds, \enspace \text{as } n \to \infty.
\end{equation*} Since $0<p_n \le 1/n, \, n \in \mathbb{N}$, we have \begin{equation*} \int_a^{a+p_n} S(t-s)f(s, x_{n}^{(n)}(s)) \, ds \to 0 \enspace \text{as } n \to \infty; \end{equation*} hence, by \eqref{e:xpn}, $x_{n}^{(n)}(t) \rightharpoonup x(t)$. Since we showed in \emph{(2b)} that $x_{n}^{(n)}(t) \to \tilde x(t)$ for $t \in (a, b]$, we obtain that $\tilde x(t)=x(t)$ and hence, by \eqref{e:solution}, $x$ is a mild solution of \eqref{e:equation} in $[a,b]$. By condition \emph{(m$_1$)} we get that $M(x_{n}^{(n)}) \to M(x)$; thus \eqref{e:x0} implies that $ x(a)=x_0=M(x)$, and hence $x$ satisfies the boundary condition \eqref{e:nonlocal}. The proof is complete. \end{proof} \section{Nonlocal solutions of the parabolic equation \eqref{e:PE}} \label{s:application} In this part we apply the methods and results of Section \ref{s:abstrat} to the study of nonlocal boundary value problems associated with equation \eqref{e:PE}. We prove, in particular, Theorem \ref{t:nonPEP} as a special case of the following more general result (see Theorem \ref{t:PEgen}).\\ We put \begin{equation} \label{e:maxg} \mu:= \max_{ t\in [0, T]} \vert g(t,0)\vert. \end{equation} The definition is well posed by the continuity of $g$ (see condition \eqref{e:gandk}\emph{(i)}). \begin{theorem}\label{t:PEgen} Consider problem \eqref{e:PE}-\eqref{e:D}-\eqref{e:nonPE}. Let $a_{i,j}\in C^1(\overline D), \, i,j=1,...,n$, satisfy conditions \eqref{e:sym}, \eqref{e:elliptic} and assume \eqref{e:gandk}; suppose that \emph{($m_0$)} and \emph{($m_1$)} hold (see Section \ref{s:abstrat}) with $E=L^p(D)$.
If, moreover, \begin{itemize} \item[(a)] $b>L+\vert D\vert$, \item [(b)] $\displaystyle{r>\frac{\left( L+\mu\right)\vert D\vert^{1/p}}{b-\vert D\vert-L}}$, \end{itemize} with $\mu$ as in \eqref{e:maxg}, then \eqref{e:PE}-\eqref{e:D}-\eqref{e:nonPE} is solvable. \end{theorem} \begin{proof} \ We reformulate \eqref{e:PE}-\eqref{e:D}-\eqref{e:nonPE} in the abstract setting \eqref{e:equation}-\eqref{e:nonlocal}, where $ E:=L^p(D), \, 1<p<\infty$. \\ The linear elliptic partial differential operator in divergence form $A_p$ introduced in \eqref{e:divergence} generates a compact $C_0$-semigroup of contractions $\{S(t)\}_{t\ge 0} $ (see Theorem \ref{t:compact}). \\ Given $ \eta \in E$, we denote, as usual, by $ \vert \eta\vert $ the map $ \xi \longmapsto \vert \eta(\xi)\vert $ for a.a. $ \xi \in D$. Consider $\beta \in (0,1)$; since $ D $ is bounded and $ c>0 $ implies $ \displaystyle{c^{\beta}\le \max\{ 1, \, c\}}$, it is clear that also $\vert \eta \vert^{\beta} \in E$. Moreover, it is easy to see that $ \vert \eta \vert^{p\beta} \in L^{1/\beta} (D) $ and the following estimate is satisfied: \begin{equation}\label{e:eta} \left \Vert \vert \eta \vert^{\beta} \right \Vert \le \vert D \vert^{\frac{1-\beta}{p}} \cdot \Vert \eta \Vert^{\beta}, \enspace \eta \in E. \end{equation} \smallskip \noindent We introduce the function $ f \colon [0, T]\times E \to E$ defined by \begin{equation}\label{e:fH} f(t,\eta)(\xi):=\int_D k(\xi,y) \eta(y) \, dy-b\eta(\xi)+g(t, \eta(\xi)), \enspace \text{for a.a. } \xi \in D.
\end{equation} By \eqref{e:gandk} we have \begin{equation*} \vert f(t,\eta)(\xi)\vert \le \int_{D} \vert \eta(y)\vert \, dy +b\vert \eta(\xi)\vert + \vert g(t, \eta(\xi))-g(t,0)\vert +\vert g(t,0)\vert, \end{equation*} for $ t \in [0,T] $ and a.a. $ \xi \in D$. Hence, by the H\"{o}lder inequality, we obtain \begin{equation}\label{e:about f} \begin{array}{rl}\vert f(t,\eta)(\xi) \vert\le &\vert D\vert^{1-\frac 1p} \Vert \eta \Vert +b\vert \eta(\xi)\vert +L\max\{\vert \eta(\xi)\vert, \, \vert \eta(\xi)\vert^{\beta}\}+ \mu\\ \le &\vert D\vert^{1-\frac 1p} \Vert \eta \Vert+(b+L)\vert \eta(\xi)\vert +L+\mu, \enspace \text{for a.a. } \xi \in D, \end{array} \end{equation} with $\mu$ as in \eqref{e:maxg}, implying that $f$ is well defined. Therefore, in the abstract setting, equation \eqref{e:PE} takes the form \eqref{e:equation} with $x(t)=u(t, \cdot)$.\\ Now we show that $f$ is continuous. Let $(t_n, \eta_n) \to (t,\eta)$ in $[0, T]\times E$. By the continuity of $g$ we have that $g(t_n, \eta(\xi)) \to g(t, \eta(\xi))$ for a.a. $ \xi \in D$, and the convergence is dominated in $ E $ since \begin{equation*} \begin{array}{rl} \vert g(t_n, \eta(\xi))-g(t, \eta(\xi))\vert \le & \vert g(t_n, \eta(\xi))-g(t_n, 0)\vert + \vert g(t, \eta(\xi))-g(t, 0)\vert\\ & + \, \vert g(t_n, 0)\vert + \vert g(t, 0)\vert \\ \le & 2L(1+\vert \eta(\xi)\vert)+2\mu, \enspace \text{for a.a. } \xi \in D. \end{array} \end{equation*} Therefore \begin{equation}\label{e:g} \Vert g(t_n, \eta(\cdot)) -g(t, \eta(\cdot)) \Vert \to 0, \enspace \text{as }n \to \infty, \text{ in } E.
\end{equation} By \eqref{e:gandk} and the H\"{o}lder inequality, we have \begin{equation*} \begin{array}{rl} \vert f(t_n, \eta_n)(\xi)-f(t, \eta)(\xi)\vert \le & \vert f(t_n, \eta_n)(\xi)-f(t_n, \eta)(\xi)\vert+ \vert f(t_n, \eta)(\xi)-f(t, \eta)(\xi)\vert\\ \le &\vert D \vert^{1-\frac1 p}\Vert \eta_n -\eta \Vert +(b+L) \vert \eta_n(\xi)-\eta(\xi)\vert + L\vert \eta_n(\xi)-\eta(\xi) \vert^{\beta}\\ & +\vert g(t_n, \eta(\xi))-g(t, \eta(\xi))\vert. \end{array} \end{equation*} Hence, by \eqref{e:eta} and \eqref{e:g}, $f(t_n, \eta_n) \to f(t,\eta)$ in $ E $ and $ f $ is continuous.\\ We prove that $ f $ also satisfies the growth condition in \emph{(f)}. So, let $\Omega \subset E$ be bounded and take $\eta \in \Omega$. By the estimate \begin{equation}\label{e:abp} (a+b)^p \le 2^p (a^p + b^p), \quad \text{for } a,b \ge 0 \text{ and } p>1, \end{equation} and according to \eqref{e:about f}, we have \begin{equation*} \Vert f(t, \eta)\Vert^p = \int_{D}\left[ \vert f(t, \eta)(\xi)\vert\right]^p \, d\xi\le 2^p(b+L)^p\Vert \eta \Vert^p +2^p \left( \vert D \vert^{1-\frac1p}\Vert \eta \Vert +L+\mu\right)^p \vert D \vert, \end{equation*} for all $t \in [0,T]$, and hence \emph{(f)} is satisfied. \smallskip By assumption, the nonlocal condition \eqref{e:nonPE} satisfies both \emph{(m$_0$)} and \emph{(m$_1$)}. \smallskip It remains to show the existence of a locally Lipschitzian bounding function $V \colon E \to \mathbb{R}$ (Definition \ref{d:bf}) with $K=rB$ and $r$ as in \emph{($m_0$)}. Consider the function \begin{equation*} V_r(x)=\frac 12 \left(\Vert x\Vert^2-r^2 \right), \quad x\in E.
\end{equation*} $V_r$ is locally Lipschitzian; it is also Fr\'{e}chet differentiable, since $L^p(D)$ is uniformly convex, and $$ \langle \dot V_r(y), z \rangle = \displaystyle\frac{1}{\|y\|^{p-2}} \displaystyle\int_D |y(\xi)|^{p-2} \; y(\xi) \; z(\xi) \, d\xi, \enspace y, z \in E $$ (see e.g. Example \ref{ex:Vnorma}\emph{(ii)}). We prove that $V_r$ satisfies condition \emph{(V2)}. Notice that, since $D$ is bounded, \begin{equation*} \int_D\vert \eta(\xi)\vert^{p-1}\, d\xi \le \vert D \vert^{\frac 1p}\Vert \eta\Vert ^{p-1}, \enspace \eta \in L^p(D). \end{equation*} Therefore, we have the following estimate: \begin{equation*} \begin{array}{rl} &\left \vert \int_{D} |\eta(\xi)|^{p-2}\eta(\xi) \left [ \int_D k(\xi, y)\eta(y)\, dy +g(t,\eta(\xi))\right]\, d\xi\right \vert\\ \\ \le &\int_{D} |\eta(\xi)|^{p-1}\left( \int_D \vert \eta(y)\vert\, dy +\vert g(t,\eta(\xi))-g(t,0)\vert +\mu\right)\, d\xi\\ \\ \le &\int_{D} |\eta(\xi)|^{p-1}\left( L\vert \eta(\xi)\vert +\vert D\vert^{1-\frac 1p}\Vert \eta\Vert +L+\mu\right)\, d\xi\\ \\ \le &L\int_D\vert \eta(\xi)\vert^p \, d\xi + \left(\vert D\vert^{1-\frac 1p}\Vert \eta \Vert +L+\mu \right)\int_D\vert \eta(\xi)\vert^{p-1}\, d\xi\\ \\ \le &L\Vert \eta\Vert^{p}+\left(\vert D\vert^{1-\frac 1p}\Vert \eta \Vert +L+\mu \right)\vert D\vert^{\frac 1p}\Vert \eta\Vert ^{p-1}\\ \\ =& \left(\vert D\vert +L \right)\Vert \eta\Vert^{p}+\left( L+\mu\right)\vert D\vert^{\frac 1p}\Vert \eta\Vert^{p-1}.
\end{array} \end{equation*} Thus, if $\Vert \eta \Vert =r$, by means of conditions \emph{(a)}-\emph{(b)} and the H\"{o}lder inequality, we have that \begin{equation*} \begin{array}{rl} \langle \dot V_r(\eta), f(t,\eta)\rangle =& \displaystyle \frac{1}{\|\eta\|^{p-2}}\int_{D} |\eta(\xi)|^{p-2}\eta(\xi) \left [ \int_D k(\xi, y)\eta(y)\, dy -b\eta(\xi)+g(t,\eta(\xi))\right]\, d\xi\\ \\ =&-b \Vert \eta \Vert^2+\frac{1}{\|\eta\|^{p-2}} \int_{D} |\eta(\xi)|^{p-2}\eta(\xi) \left [ \int_D k(\xi, y)\eta(y)\, dy +g(t,\eta(\xi))\right]\, d\xi\\ \\ \le&-b \Vert \eta \Vert^2+\frac{1}{\|\eta\|^{p-2}} \left \vert \int_{D} |\eta(\xi)|^{p-2}\eta(\xi) \left [ \int_D k(\xi, y)\eta(y)\, dy +g(t,\eta(\xi))\right]\, d\xi\right \vert\\ \\ \le &\left(-b+\vert D\vert +L \right)\Vert \eta\Vert^{2}+\left( L+\mu\right)\vert D\vert^{\frac 1p}\Vert \eta\Vert \\ \\ = & (-b+\vert D\vert +L)r^2+ \left( L+\mu\right)\vert D\vert^{\frac 1p}\, r. \end{array} \end{equation*} Since $b>L+\vert D\vert$, when \begin{equation*} r>\frac{(L+\mu) \vert D \vert^{1/p}}{b-\vert D\vert -L} \end{equation*} we obtain that \begin{equation*} \langle \dot V_r(\eta), \, f(t, \eta)\rangle <0, \enspace \Vert \eta \Vert =r, \end{equation*} and then $V_r$ is a locally Lipschitzian bounding function for \eqref{e:equation} (see Remark \ref{r:bf}). \smallskip \noindent All the assumptions of Theorem \ref{t:main} are then satisfied and the proof is complete. \end{proof} \noindent \emph{Proof of Theorem \ref{t:nonPEP}.} \ The proof follows from Theorem \ref{t:PEgen}.
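The inequality $\langle \dot V_r(\eta), f(t,\eta)\rangle <0$ on the sphere $\Vert \eta\Vert =r$ established above can also be checked numerically. The following Python sketch is illustrative only: it takes $p=2$, $D=(0,1)$ (so $\vert D\vert=1$), and hypothetical choices of the kernel $k$ (with $\vert k\vert \le 1$), the nonlinearity $g$ (with $g(t,0)=0$, hence $\mu=0$, and Lipschitz constant $L$ in its second variable), and constants $b$, $r$ satisfying conditions \emph{(a)}-\emph{(b)}; it then verifies the sign of the duality pairing at randomly sampled boundary points.

```python
import numpy as np

# Discretized check (hypothetical data, p = 2, D = (0,1)) of the inequality
#   <V_r'(eta), f(t,eta)> < 0  on  ||eta||_{L^2} = r,   where
#   f(t,eta)(xi) = int_D k(xi,y) eta(y) dy - b*eta(xi) + g(t, eta(xi)).
rng = np.random.default_rng(0)
n = 400
h = 1.0 / n                                     # midpoint quadrature weight
xi = np.linspace(0.0, 1.0, n, endpoint=False) + h / 2
L, b_const, mu = 0.5, 2.0, 0.0                  # condition (a): b > L + |D| = 1.5
r = 1.5                                         # condition (b): r > (L+mu)|D|^{1/p}/(b-|D|-L) = 1
K = 0.5 * np.cos(xi[:, None] - xi[None, :])     # kernel with |k(xi,y)| <= 1
g = lambda t, u: L * np.sin(u)                  # |g(t,u)-g(t,0)| <= L|u|, g(t,0) = 0

def f(t, eta):
    """Discretization of the nonlocal right-hand side f(t, eta)."""
    return h * (K @ eta) - b_const * eta + g(t, eta)

for _ in range(20):
    eta = rng.standard_normal(n)
    eta *= r / np.sqrt(h * np.sum(eta**2))      # normalise: ||eta||_{L^2} = r
    pairing = h * np.sum(eta * f(0.3, eta))     # <V_r'(eta), f(t,eta)> for p = 2
    assert pairing < 0                          # the bounding-function inequality
print("pairing < 0 at all sampled boundary points")
```

With these constants the estimate above gives $\langle \dot V_r(\eta), f(t,\eta)\rangle \le (-b+\vert D\vert +L)r^2+(L+\mu)\vert D\vert^{1/p} r = -1.125+0.75<0$, so the assertions hold with a comfortable margin; the discretization only approximates the integrals.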
It remains only to show that \eqref{e:mean} and \eqref{e:Cauchy}, in the abstract setting, satisfy \emph{($m_0$)} and \emph{($m_1$)} in $E=L^p(D)$. \begin{itemize} \item[\emph{(i)}] \ Let $M \colon C([0,T], L^p(D)) \to L^p(D)$ be defined by \begin{equation*} Mx=\frac 1T \int_0^T x(t)\, dt. \end{equation*} Notice that the definition is well posed since $x$ is a continuous function. Let $r>0$ and consider $x \in C([0,T], L^p(D))$ with $\Vert x\Vert \le r$. Then \begin{equation*} \Vert Mx \Vert \le \frac 1T \int_0^T \Vert x(t) \Vert \, dt \le r; \end{equation*} hence condition \emph{($m_0$)} is satisfied for any $r>0$. With no loss of generality, we can then assume that condition \emph{(b)} in Theorem \ref{t:PEgen} is also satisfied. Let $\{x_n\}\subset C([0,T], L^p(D)) $ be such that $x_n(t) \to x(t), \, t \in (0,T]$, with $x \in C([0,T], L^p(D))$ and $\Vert x_n\Vert\le r$ for all $n$. The convergence of $\{x_n\}$ is then dominated, implying that \begin{equation*} Mx_n=\frac 1T \int_0^T x_n(t)\, dt \to \frac 1T \int_0^T x(t)\, dt=Mx; \end{equation*} hence \emph{($m_1$)} is also satisfied. By applying Theorem \ref{t:PEgen}, we obtain claim \emph{(i)}. \item[\emph{(ii)}] Let $M \colon C([0,T], L^p(D)) \to L^p(D)$ be such that \begin{equation*} Mx=\sum_{i=1}^q \alpha_i x(t_i), \enspace \text{with } t_i\in (0,T], \, \alpha_i\in \mathbb{R}, \, i=1,...,q, \text{ and } \sum_{i=1}^q \vert \alpha_i\vert \le 1. \end{equation*} If $x \in C([0,T], L^p(D))$ with $\Vert x\Vert \le r, \, r>0$, we have \begin{equation*} \Vert Mx\Vert \le \sum_{i=1}^q \vert \alpha_i\vert \Vert x(t_i)\Vert \le r \sum_{i=1}^q \vert \alpha_i\vert\le r, \end{equation*} implying condition \emph{($m_0$)}.
As in \emph{(i)}, by the arbitrariness of $r$ we can assume that condition \emph{(b)} in Theorem \ref{t:PEgen} is satisfied. If, moreover, $\{x_n\}\subset C([0,T], L^p(D)) $ and $x \in C([0,T], L^p(D)) $ are as in \emph{(i)}, it is easy to see that \begin{equation*} Mx_n=\sum_{i=1}^q \alpha_i x_n(t_i) \to \sum_{i=1}^q \alpha_i x(t_i) =Mx, \enspace \text{as } n \to \infty, \end{equation*} so \emph{($m_1$)} is also satisfied. Claim \emph{(ii)} then follows, again by Theorem \ref{t:PEgen}, and the proof is complete. \end{itemize} \section{Bounding functions for mild solutions}\label{s:bounding} This part contains a brief discussion of the notion of bounding function introduced in Section \ref{s:abstrat} (see Definition \ref{d:bf}). Notice that the set $ \overline K $ in Definition \ref{d:bf} is the $0$-sublevel set of $ V$. The function $ V $ takes its name from its most relevant property: when such a $ V $ exists, every solution $ x \in C([a,b], E) $ of \eqref{e:equation} which is located in $ \overline K $ lies, in fact, in $ K $ for $ t \in (a,b] $ (see Theorem \ref{t:bf}). Hence, the existence of a bounding function for \eqref{e:equation} makes the transversality condition automatically satisfied on $ (a,b]$, and it remains to check the behaviour of the solution only at $ t = a$. \\ The bounding function theory was originally introduced in \cite{GaMa1} and \cite{MT} (see also \cite{GaMa2}), in the framework of finite-dimensional systems and with smooth bounding functions. Again in Euclidean spaces, the theory was extended in \cite{T} and \cite{Za} to the case of non-smooth functions. The bounding function theory was developed in \cite{AMT} in an infinite-dimensional setting, when $A\colon E \to E$ is linear and bounded, so that \eqref{e:equation} has classical, i.e. absolutely continuous, solutions.
A special type of bounding function (see \eqref{e:Vr} below) was used in \cite{BLT}, in combination with the Yosida approximation of the linear part. \smallskip To the best of our knowledge, no result about the existence of bounding functions is available in the present general framework, i.e. in an arbitrary Banach space when the linear term is not necessarily bounded and generates a $C_0$-semigroup. \smallskip \begin{remark}\label{r:bf} Let us denote by $\langle \cdot, \, \cdot \rangle$ the duality pairing between $E$ and its dual space $E^*$. When $V$ is G\^{a}teaux differentiable it is easy to see that \begin{equation*} \lim_{h \to 0}\frac{V(x+hy)-V(x)}{h}=\langle \dot V(x), \, y \rangle, \quad x, y \in E. \end{equation*} Therefore, since $V(x)=0$ for $x \in \partial K$, if $V$ is G\^{a}teaux differentiable on $\partial K$, condition \emph{(V2)} simply reduces to the inequality \begin{equation}\label{e:CH} \langle \dot V(x), \, f(t,x) \rangle<0, \enspace \text{for } t \in (a,b] \text{ and } x \in \partial K. \end{equation} \end{remark} \begin{theorem} \label{t:bf} \ Let $ E $ be a Banach space, $ A\colon D(A) \subset E \rightarrow E $ linear, not necessarily bounded, generating a $C_0$-semigroup $S\colon [0,\infty) \to \mathcal{L}(E)$, and $f\colon [a,b]\times E \to E$ continuous. Assume the existence of a bounding function $V \colon E \to \mathbb{R}$ of \eqref{e:equation} (Definition \ref{d:bf}) which is locally Lipschitzian, with $K \subset E$, and assume that there is $\delta \in (0,\infty)$ such that \begin{equation}\label{e:S(t)K} S(\tau)\overline K\subseteq \overline K \quad \text{for } \tau\in [0,\delta]. \end{equation} If $x \colon [a,b] \to E$ is a mild solution of \eqref{e:equation} with $x(t) \in \overline K$ for $t \in [a,b]$, then $x(t) \in K$ for $t \in (a,b]$.
\end{theorem} \begin{proof} \ Let $x\colon [a,b] \to E$ be a solution of \eqref{e:equation} satisfying $x(t) \in \overline K $ for all $ t \in [a,b] $. Assume, by contradiction, the existence of $ \hat t \in (a,b] $ such that $\hat x:=x(\hat t) \in \partial K$.\\ By condition \emph{(V2)} there is a sequence $\{k_n\}\subset (-\infty, 0)$ with $\hat t +k_n>a$ for all $n$, such that $k_n \to 0^-$ as $n \to \infty$ and \begin{equation}\label{e:V2s} \lim_{n \to \infty}\frac{V(\hat x+k_nf(\hat t,\hat x))}{k_n}<0. \end{equation} According to the definition of mild solution (see Definition \ref{d:mild}) and the properties of the semigroup, we have that $$ \begin{array}{rl} \hat x=x(\hat t)=&S(\hat t-a )x(a)+\int_a^{\hat t}S(\hat t-s)f(s, x(s))\, ds\\ \\ =&S(-k_n)\left[S(\hat t-a+k_n)x(a)+\int_a^{\hat t +k_n}S(\hat t-s+k_n)f(s, x(s)) \, ds\right]\\ &+ \int_{\hat t +k_n}^{\hat t}S(\hat t-s)f(s, x(s)) \, ds\\ \\ =&S(-k_n)x(\hat t+k_n) + \int_{\hat t +k_n}^{\hat t}S(\hat t-s)f(s, x(s))\, ds. \end{array} $$ Hence \begin{equation}\label{e:sxa} \hat x-S(-k_n)x(\hat t+k_n)=\int_{\hat t +k_n}^{\hat t}S(\hat t-s)f(s, x(s))\, ds, \enspace n \in \mathbb{N}. \end{equation} Since \begin{equation*} \int_{\hat t +k_n}^{\hat t}S(\hat t-s)f(s, x(s))\, ds=\int_{\hat t +k_n}^{\hat t}S(\hat t-s)f(\hat t, \hat x)\, ds+\int_{\hat t +k_n}^{\hat t}S(\hat t-s)\left[f(s, x(s))-f(\hat t, \hat x) \right]\, ds, \end{equation*} by means of the change of variables $\tau=\hat t -s$ in the first integral on the right-hand side, we have \begin{equation}\label{e:sxb} \begin{array}{rl} \displaystyle\int_{\hat t +k_n}^{\hat t}S(\hat t-s)f(s, x(s))\, ds=& \int_0^{-k_n} S(\tau)f(\hat t, \hat x)\, d\tau \\ \displaystyle+&\int_{\hat t +k_n}^{\hat t} S(\hat t-s)\left[f(s, x(s))-f(\hat t, \hat x) \right]\, ds.
\end{array} \end{equation} Since both $f$ and $x$ are continuous functions on their respective domains, for every $\varepsilon >0$ there is $\sigma=\sigma(\varepsilon)>0$ such that \begin{equation}\label{e:f} \Vert f(t, x(t))-f(\hat t, \hat x)\Vert \le \varepsilon, \quad \mbox{for }\vert t-\hat t\vert \le \sigma. \end{equation} Hence there is $\overline n=\overline n(\sigma)$ such that \eqref{e:f} is satisfied for $t\in [\hat t+k_n, \hat t]$ and $n\ge \overline n$. Consequently, since $ \Vert S(t) \Vert \le 1 $ for $ t\ge0 $ by \emph{(A)}, for $n \ge \overline n$ we have \begin{equation*} \begin{array}{rl} \frac{1}{-k_n}\left \Vert\int_{\hat t +k_n}^{\hat t}S(\hat t-s)\left[f(s, x(s))-f(\hat t, \hat x) \right]\, ds\right \Vert \le & \frac{1}{-k_n} \int_{\hat t +k_n}^{\hat t}\left \Vert S(\hat t-s)\right \Vert \left \Vert f(s, x(s))-f(\hat t, \hat x) \right \Vert \, ds\\ \\ \le &\frac{1}{-k_n}\varepsilon(-k_n)=\varepsilon. \end{array} \end{equation*} This shows that \begin{equation*} \frac{1}{-k_n}\int_{\hat t +k_n}^{\hat t}S(\hat t-s)\left[f(s, x(s))-f(\hat t, \hat x) \right]\, ds \to 0, \quad \text{as } n \to \infty. \end{equation*} Notice (see e.g. \eqref{e:media}) that \begin{equation*} \frac{1}{-k_n}\int_0^{-k_n}S(\tau)f(\hat t, \hat x)\, d\tau \to f(\hat t, \hat x), \quad \text{as } n \to \infty. \end{equation*} Therefore, from \eqref{e:sxa} and \eqref{e:sxb}, we obtain \begin{equation*} \frac{S(-k_n)x(\hat t +k_n)-\hat x}{k_n} \to f(\hat t, \hat x). \end{equation*} We have thus shown the existence of $\{\sigma_n\}\subset E$, depending on $\{k_n\}$, such that $\sigma_n \to 0$ as $n \to \infty$ and \begin{equation}\label{e:sxc} S(-k_n)x(\hat t +k_n)=\hat x+k_nf(\hat t, \hat x)+k_n\sigma_n.
\end{equation} Let $U \subset E$ be open with $\hat x \in U$ and let $L>0$ be such that $V|_U$ is $L$-Lipschitzian. Take $n$ large enough so that both $\hat x+k_nf(\hat t, \hat x)$ and $ \hat x+k_nf(\hat t, \hat x)+k_n\sigma_n$ belong to $U$ and $S(-k_n)x(\hat t +k_n) \in \overline K$ by \eqref{e:S(t)K}. According to \emph{(V1)} and \eqref{e:sxc} we obtain that $$ 0\le \frac{V(S(-k_n)x(\hat t +k_n))}{k_n}=\frac{V(\hat x+k_nf(\hat t, \hat x)+k_n\sigma_n)}{k_n}= \frac{V(\hat x+k_nf(\hat t, \hat x))}{k_n}+\Delta_n, $$ with $$ \Delta_n:= \frac{V(\hat x+k_nf(\hat t, \hat x)+k_n\sigma_n)-V(\hat x+k_nf(\hat t, \hat x))}{k_n}. $$ By the $L$-Lipschitzianity of $ V $ in $U$, we obtain that \begin{equation*} \vert \Delta_n \vert \le L\Vert \sigma_n \Vert \to 0, \enspace \text{as }n \to \infty. \end{equation*} Consequently, \begin{eqnarray}\label{eq:contr1} \lim_{n \to \infty} \frac{V(\hat x +k_nf(\hat t, \hat x))}{k_n}&=&\lim_{n \to \infty} \biggl[ \frac{V(\hat x+k_nf(\hat t, \hat x))}{k_n}+\Delta_n \biggr]\nonumber\\ &=& \lim_{n \to \infty} \frac{V(S(-k_n)x(\hat t +k_n))}{k_n} \ge 0, \end{eqnarray} in contradiction with \eqref{e:V2s}. Hence $\hat x \in K$ and the proof is complete. \end{proof} \smallskip The case when the $0$-sublevel set of the bounding function is the ball centered at $0$ with radius $ r,$ i.e. \begin{equation}\label{e:Vr} V_r(x)=\frac 12 \left( \Vert x \Vert^2 -r^2\right), \enspace x\in E, \, r>0, \end{equation} frequently occurs in applications (see e.g. Section \ref{s:application}). \begin{Ex}\label{ex:Vnorma} \ \emph{(i)} Let $E$ be a Hilbert space with scalar product $\langle \cdot, \, \cdot \rangle$. In this case $V_r \in C^1(E)$ and $\dot V_r (x) \colon E \to E $ is such that $\dot V_r(x)(y)= \langle x, \, y \rangle, \, \, x, y \in E, \, r>0.
$ Moreover $\Vert \dot V_r(x) \Vert =\Vert x \Vert, \, x \in E, $ and then $ V_r $ is also locally Lipschitzian. Therefore, when the function $ f $ satisfies \begin{equation}\label{e:V2H} \langle x, \, f(t,x) \rangle <0, \enspace \text{for } t\in (a,b] \text{ and } \, \Vert x\Vert =r, \end{equation} then, by \eqref{e:CH}, $V_r$ is a locally Lipschitzian bounding function for \eqref{e:equation} for all $ r>0. $ \emph{(ii) } Assume now that $ E^* $ is a uniformly convex Banach space. Hence the function $ V_r $ is Fr\'{e}chet differentiable on $ E $ with \begin{equation*} \langle \dot V_r(x), y \rangle=\langle J(x), y\rangle, \enspace \text{ for } x, y \in E, \end{equation*} where $ J \colon E \to E^* $ is the single-valued duality map given by $$ J(x)=x^* \in E^* \; : \; \|x^*\|=\|x\| \; \mbox{and} \; \bigl\langle x^*, x \bigr\rangle =\|x\|^2 $$ and $ J $ is continuous (see e.g. \cite{D}). Hence, if we further assume the existence of $ r>0 $ such that \begin{equation}\label{e:J} \langle J(x), \, f(t,x) \rangle <0, \enspace \text{for } t\in (a,b] \text{ and } \, \Vert x\Vert =r, \end{equation} then $V_r$ is a locally Lipschitzian bounding function for equation \eqref{e:equation}. \\ In particular, let $ E=L^p(\Omega)$, where $\Omega \subset \mathbb{R}^n$ is a bounded measurable subset of $ \mathbb{R}^n$, $n\ge 1$, and $ 1<p<\infty. $ Then $ E $ is reflexive, $ E^*=L^{p'}(\Omega), $ with $ 1/p+1/p^{\prime}=1, $ is uniformly convex, and (see \cite[Example 1.4.4]{vr1} and \cite[Chapter I, Example 2.7]{HP}) \begin{equation*} \langle J(x), y \rangle = \displaystyle\frac{1}{\|x\|^{p-2}} \displaystyle\int_\Omega |x(\xi)|^{p-2} \; x(\xi) \; y(\xi) \, d\xi, \enspace x, y \in L^p(\Omega).
\end{equation*} Again by \eqref{e:J}, if the following condition \begin{equation}\label{e:Jp} \langle J(x), f(t,x) \rangle = \displaystyle\frac{1}{\|x\|^{p-2}} \displaystyle\int_\Omega |x(\xi)|^{p-2} \; x(\xi) \; f(t, x)(\xi) \, d\xi<0, \enspace \text{for } t\in (a,b] \text{ and } \, \Vert x\Vert =r, \end{equation} is satisfied, then $V_r$ is a locally Lipschitzian bounding function for \eqref{e:equation}. \end{Ex} \begin{Ex}\label{ex:Vd} \ In a uniformly convex Banach space $ E, $ consider the usual distance function $d(\cdot, r\overline B) \colon E \to \mathbb{R}$ from the closed ball centered at $0$ with radius $r>0$. From its definition, \begin{equation*} d(x, r\overline B) :=\left\{\begin{array}{ll} 0 & \Vert x \Vert \le r\\ \displaystyle{\inf_{y\in r\overline B} } \, \Vert x-y\Vert=\Vert x \Vert -r & \Vert x \Vert >r \end{array} \right. \end{equation*} it is easy to show that it is a Lipschitzian function. If we further assume that \begin{equation}\label{e:dist} \liminf_{h \to 0^-}\frac{d(x+hf(t,x), r\overline B)}{h}<0, \qquad \text{for } t\in(a, b], \, \Vert x \Vert =r, \end{equation} the function $d(\cdot, r\overline B)$ is a bounding function for equation \eqref{e:equation}. \end{Ex} \section*{Acknowledgments} The authors are members of the {\em Gruppo Nazionale per l'Analisi Matematica, la Probabilit\`{a} e le loro Applicazioni} (GNAMPA) of the {\em Istituto Nazionale di Alta Matematica} (INdAM) and acknowledge financial support from this institution. The first author has been supported by the project Fondi Ricerca di Base 2016 \emph{Metodi topologici per equazioni differenziali in spazi astratti}, Department of Mathematics and Computer Science, University of Perugia. \noindent\begin{thebibliography}{99} \bibitem{AMT} J. Andres, L. Malaguti and V.
Taddei, {\it On boundary value problems in Banach spaces}, Dynam. Systems Appl. 18 (2009), 275--302. \bibitem{AOmair} R.~A. Al-Omair and A.~G. Ibrahim, \emph{Existence of mild solutions of a semilinear evolution differential inclusions with nonlocal conditions}, Electron. J. Differential Equations 2009, No. 42, 11 pp. \bibitem{BLT} I. Benedetti, N.~V. Loi and V. Taddei, {\it An approximation solvability method for nonlocal semilinear differential problems in Banach spaces}, Discrete Contin. Dyn. Syst. 37 (2017), 2977--2998. \bibitem{BTV} I. Benedetti, V. Taddei and M. V\"{a}th, {\it Evolution problems with nonlinear nonlocal boundary conditions}, J. Dynam. Differential Equations 25 (2013), 477--503. \bibitem{BP} A. Boucherif and R. Precup, \emph{Semilinear evolution equations with nonlocal initial conditions}, Dynam. Systems Appl. 16 (2007), 507--516. \bibitem{BY} L. Byszewski, \emph{Theorems about the existence and uniqueness of solutions of a semilinear evolution nonlocal Cauchy problem}, J. Math. Anal. Appl. 162 (1991), 494--505. \bibitem{Cw} J. Chabrowski, \emph{On nonlocal problems for parabolic equations}, Nagoya Math. J. 93 (1984), 109--131. \bibitem{CR} T. Cardinali and P. Rubbioni, {\it Aronszajn--Hukuhara type theorem for semilinear differential inclusions with nonlocal conditions}, Electron. J. Qual. Theory Differ. Equ. 2015, No. 45, 12 pp. \bibitem{CK} A. \'{C}wiszewski and P. Kokocki, \emph{Krasnosel'skii type formula and translation along trajectories method for evolution equations}, Discrete Contin. Dyn. Syst. 22 (2008), 605--628. \bibitem{Deng} K. Deng, \emph{Exponential decay of solutions of semilinear parabolic equations with nonlocal initial conditions}, J. Math. Anal. Appl. 179 (1993), 630--637. \bibitem{DB G V} E. Di Benedetto, U. Gianazza and V.
Vespri, \emph{Harnack's inequality for degenerate and singular parabolic equations}, Springer Monographs in Mathematics, Springer, New York, 2012. \bibitem{D} J. Diestel, \emph{Geometry of Banach Spaces -- Selected Topics}, Lecture Notes in Mathematics, no. 485, Springer-Verlag, Berlin-New York, 1975. \bibitem{GaMa1} R.~E. Gaines and J.~L. Mawhin, {\it Ordinary differential equations with nonlinear boundary conditions}, J. Differential Equations 26 (1977), 200--222. \bibitem{GaMa2} R.~E. Gaines and J.~L. Mawhin, \emph{Coincidence Degree and Nonlinear Differential Equations}, Lecture Notes in Mathematics, no. 568, Springer-Verlag, Berlin-New York, 1977. \bibitem{GK} B.~H. Gilding and R. Kersner, {\it Travelling Waves in Nonlinear Diffusion-Convection-Reaction}, Birkh\"{a}user Verlag, Basel, 2004. \bibitem{HP} S. Hu and N. Papageorgiou, \emph{Handbook of Multivalued Analysis Vol. I: Theory}, Springer US, 1997. \bibitem{IM} G. Infante and M. Maciejewski, \emph{Multiple positive solutions of parabolic systems with nonlinear, nonlocal initial conditions}, J. Lond. Math. Soc. (2) 94 (2016), 859--882. \bibitem{J} D. Jackson, \emph{Existence and uniqueness of solutions to semilinear nonlocal parabolic equations}, J. Math. Anal. Appl. 172 (1993), 256--265. \bibitem{KOZ} M. Kamenskii, V. Obukhovskii and P. Zecca, \emph{Condensing Multivalued Maps and Semilinear Differential Inclusions in Banach Space}, W.~de Gruyter, Berlin, 2001. \bibitem{KOWY} T.~D. Ke, V. Obukhovskii, N. Wong and J. Yao, \emph{On semilinear integro-differential equations with nonlocal conditions in Banach spaces}, Abstr. Appl. Anal. 2012, Art. ID 137576, 26 pp. \bibitem{Koko} P. Kokocki, \emph{Krasnosel'skii type formula and translation along trajectories method on the scale of fractional spaces}, Commun. Pure Appl. Anal. 14 (2015), 2315--2334. \bibitem{KrZa} M.~A. Krasnosel'skii and P.~P.
Zabreiko, \emph{Geometrical Methods of Nonlinear Analysis}, Izdat. Nauka, Moscow, 1975; English translation, Springer-Verlag, Berlin, 1984. \bibitem{LS} J. Leray and J. Schauder, {\it Topologie et \'equations fonctionnelles}, Ann. Sci. \'{E}cole Norm. Sup. 51 (1934), 45--78. \bibitem{LLX} J. Liang, J. Liu and T.~J. Xiao, \emph{Nonlocal Cauchy problems governed by compact operator families}, Nonlinear Anal. 57 (2004), 183--189. \bibitem{Lun} A. Lunardi, \emph{Analytic semigroups and optimal regularity in parabolic problems}, Progress in Nonlinear Differential Equations and their Applications, 16, Birkh\"{a}user Verlag, Basel, 1995. \bibitem{MT} J.~L. Mawhin and H.~B. Thompson, {\it Periodic or bounded solutions of Carath\'eodory systems of ordinary differential equations}, J. Dynam. Differential Equations 15 (2003), 327--334. \bibitem{MN} J.~C. Meyer and D.~J. Needham, \emph{The Cauchy problem for non-Lipschitz semi-linear parabolic partial differential equations}, London Mathematical Society Lecture Note Series, 419, Cambridge University Press, Cambridge, 2015. \bibitem{pao} C.~V. Pao, \emph{Reaction diffusion equations with nonlocal boundary and nonlocal initial conditions}, J. Math. Anal. Appl. 195 (1995), 702--718. \bibitem{p} A. Pazy, \emph{Semigroups of Linear Operators and Applications to Partial Differential Equations}, Springer-Verlag, Berlin, 1983. \bibitem{T} V. Taddei, {\it Bound sets for Floquet boundary value problems: the nonsmooth case}, Discrete Contin. Dynam. Systems 6 (2000), 459--473. \bibitem{Viorel} A. Viorel, \emph{Nonlocal Cauchy problems close to an asymptotically stable equilibrium point}, J. Math. Anal. Appl. 433 (2016), 1736--1742. \bibitem{vr1} I.~I. Vrabie, \emph{Compactness Methods for Nonlinear Evolutions}, 2nd ed., Longman House, Burnt Mill, Harlow, 1990. \bibitem{vr2} I.~I.
Vrabie, \emph{$C_0$-semigroups and applications}, North-Holland Mathematics Studies, 191, North-Holland Publishing Co., Amsterdam, 2003. \bibitem{xue09} X. Xue, \emph{Nonlocal nonlinear differential equations with a measure of noncompactness in Banach spaces}, Nonlinear Anal. 70 (2009), 2593--2601. \bibitem{Yagi} A. Yagi, \emph{Abstract parabolic evolution equations and their applications}, Springer Monographs in Mathematics, Springer-Verlag, Berlin, 2010. \bibitem{Za} F. Zanolin, {\it Bound sets, periodic solutions and flow-invariance for ordinary differential equations in $\mathbb{R}^{n}$: some remarks}, Rend. Istit. Mat. Univ. Trieste 19 (1987), 76--92. \bibitem{zhuli11} L. Zhu and G. Li, \emph{Existence results of semilinear differential equations with nonlocal initial conditions in Banach spaces}, Nonlinear Anal. 74 (2011), 5133--5140. \bibitem{ZSL12} T. Zhu, C. Song and G. Li, \emph{Existence of mild solutions for abstract semilinear evolution equations in Banach spaces}, Nonlinear Anal. 75 (2012), 177--181. \end{thebibliography} \end{document}
\begin{document} \title{Extremal conjugated unicyclic and bicyclic graphs with respect to total-eccentricity index} \blfootnote{\raggedright Email addresses: [email protected], [email protected] (M. A. Malik), [email protected] (R. Farooq).} \begin{abstract} Let $G$ be a molecular graph. The total-eccentricity index of graph $G$ is defined as the sum of eccentricities of all vertices of $G$. The extremal trees, unicyclic and bicyclic graphs, and extremal conjugated trees with respect to total-eccentricity index are known. In this paper, we extend these results and study the extremal conjugated unicyclic and bicyclic graphs with respect to total-eccentricity index. \end{abstract} \begin{quote} {\bf Keywords:} Topological indices, total-eccentricity index, conjugated graphs. \end{quote} \begin{quote} {\bf AMS Classification:} 05C05, 05C35 \end{quote} \section{Introduction} Let $G$ be an $n$-vertex molecular graph with vertex set $V(G)$ and edge set $E(G)$. The vertices and edges of $G$ respectively correspond to atoms and chemical bonds between atoms. A topological index $T$ is a numerical quantity associated with the chemical structure of a molecule. The aim of such an association is to correlate these indices with various physico-chemical properties of a chemical compound. An edge between two vertices $u,v\in V(G)$ is denoted by $uv$. The order and size of $G$ are respectively the cardinalities $|V(G)|$ and $|E(G)|$. The neighbourhood $N_G(v)$ of a vertex $v$ in $G$ is the set of vertices adjacent to $v$. A simple graph is a graph without loops and multiple edges. All graphs considered in this paper are simple. The degree $d_G(v)$ of a vertex $v$ in $G$ is the cardinality $|N_G(v)|$. A graph $G$ is called a $k$-regular graph if $d_G(v)=k$ for all $v\in V(G)$. A vertex of degree $1$ is called a pendant vertex. Let $P_n$, $C_n$ and $K_n$ respectively denote an $n$-vertex path, cycle and a complete graph.
An $n$-vertex complete bipartite graph and an $n$-vertex star are respectively denoted by $K_{a,b}$ and $K_{1,n-1}$ (or simply $S_n$), where $a+b=n$. A $(v_1,v_n)$-path with vertex set $\{v_1,v_2,\ldots,v_n\}$ is denoted by $v_1v_2\ldots v_n$. A graph $G$ is said to be connected if there exists a path between every pair of vertices in $G$. A maximal connected subgraph of a graph is called a component. A vertex $v\in V(G)$ is called a cut-vertex if deletion of $v$, along with the edges incident on it, increases the number of components of $G$. A maximal connected subgraph of a graph without any cut-vertex is called a block. A tree is a connected graph containing no cycle. Thus, an $n$-vertex tree is a connected graph with exactly $n-1$ edges. An $n$-vertex unicyclic graph is a simple connected graph which contains $n$ edges. Similarly, an $n$-vertex bicyclic graph is a simple connected graph which contains $n+1$ edges. A matching $M$ in a graph $G$ is a subset of edges of $G$ such that no two edges in $M$ share a common vertex. A vertex $u$ in $G$ is said to be $M$-saturated if an edge of $M$ is incident with $u$. A matching $M$ is said to be perfect if every vertex in $G$ is $M$-saturated. A conjugated graph is a graph that contains a perfect matching. In graphs representing organic compounds, perfect matchings correspond to Kekul\'{e} structures, playing an important role in the analysis of the resonance energy and stability of hydrocarbons~\cite{Gutman}. The distance $d_G(u,v)$ between two vertices $u,v\in V(G)$ is defined as the length of a shortest path between $u$ and $v$ in $G$. If there is no path between vertices $u$ and $v$, then $d_G(u,v)$ is defined to be $\infty$. The eccentricity $e_G(v)$ of a vertex $v\in V(G)$ is defined as the largest distance from $v$ to any other vertex in $G$.
The diameter ${\rm diam}(G)$ and radius ${\rm rad}(G)$ of a graph $G$ are respectively defined as: \begin{eqnarray} \label{rad} {\rm rad}(G) &=& \min_{v\in V(G)}e_G(v),\\ \label{diam} {\rm diam}(G) &=& \max_{v\in V(G)}e_G(v). \end{eqnarray} A vertex $v\in V(G)$ is said to be central if $e_G(v) = {\rm rad}(G)$. The subgraph induced by all central vertices of $G$ is called the center of $G$, denoted by $C(G)$. A vertex $w$ is called an eccentric vertex of a vertex $v$ in $G$ if $d_G(v,w) = e_G(v)$. The set of all eccentric vertices of $v$ in a graph $G$ is denoted by $E_G(v)$. The first topological index was introduced by Wiener~\cite{W1947} in 1947, to calculate the boiling points of paraffins. In 1971, Hosoya~\cite{H71} defined the notion of the Wiener index for any graph as the half sum of distances between all pairs of vertices. The average-eccentricity of an $n$-vertex graph $G$ was defined in 1988 by Skorobogatov and Dobrynin~\cite{Skoro1988} as: \begin{equation}\label{average} avec(G) = \frac{1}{n}\sum_{u\in V(G)} e_G(u). \end{equation} In the recent literature, a minor modification of the average-eccentricity index $avec(G)$ is used and referred to as the total-eccentricity index $\tau(G)$. It is defined as: \begin{equation}\label{sigma} \tau(G) = \sum_{u\in V(G)} e_G(u). \end{equation} The eccentric-connectivity index and the Randi\'c index of a graph $G$ are defined respectively by $\xi(G) = \sum_{v\in V(G)} d_G(v) e_G(v)$ and $R(G)= \sum_{uv\in E(G)}(d_G(u)d_G(v))^{-\frac{1}{2}}$. Liang and Liu~\cite{LiLiu2016} proved a conjecture on the relation between the average-eccentricity and the Randi\'c index. Dankelmann and Mukwembi~\cite{DankleMuk2004} obtained upper bounds on the average-eccentricity in terms of several graph parameters. Smith et al.~\cite{Smith2016} studied the extremal values of the total-eccentricity index in trees. Ilic~\cite{Ilic12} studied some extremal graphs with respect to average-eccentricity.
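For a concrete illustration of definition \eqref{sigma} (a small worked example), consider the path $P_5=v_1v_2v_3v_4v_5$ and the star $S_5$. In $P_5$ the eccentricities are $e(v_1)=e(v_5)=4$, $e(v_2)=e(v_4)=3$ and $e(v_3)=2$, while in $S_5$ the central vertex has eccentricity $1$ and the four pendant vertices have eccentricity $2$. Hence
\begin{equation*}
\tau(P_5) = 4+3+2+3+4 = 16, \qquad \tau(S_5) = 1+4\cdot 2 = 9,
\end{equation*}
in agreement with the closed-form expressions for $\tau(P_n)$ and $\tau(S_n)$ recalled below.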
Farooq et al.~\cite{Farooq2017} studied the extremal unicyclic and bicyclic graphs and the extremal conjugated trees with respect to total-eccentricity index. For more details on topological indices of graphs and networks, the reader is referred to~\cite{akhter-farooq2019,Doslic}. In this paper, we extend the results of \cite{Farooq2017} to conjugated unicyclic and bicyclic graphs. For some special families of graphs of order $n\geq 4$, the total-eccentricity index is given as follows: \begin{enumerate} \item For a $k$-regular graph $G$, we have $\tau(G) = \frac{\xi(G)}{k}$, \item $\tau(K_n) = n$, \item $\tau(K_{m,n}) = 2(m+n)$, $m,n\geq 2$, \item The total-eccentricity index of a star $S_n$, a cycle $C_n$ and a path $P_n$ is given by \begin{eqnarray} \label{tau Sn} \tau(S_n) &=& 2n - 1,\\ \tau(C_n) &=& \begin{cases} \frac{n^2}{2}& \mbox{if~ } n\equiv 0 \pmod 2\\ \frac{n(n-1)}{2} & \mbox{if~ } n\equiv 1 \pmod 2, \end{cases}\\ \label{tau Pn} \tau(P_n) &=& \begin{cases} \frac{3n^2}{4} - \frac{n}{2} & \mbox{if~ } n\equiv 0 \pmod 2\\ \frac{3n^2}{4} - \frac{n}{2} - \frac{1}{4} & \mbox{if~ } n\equiv 1 \pmod 2. \end{cases} \end{eqnarray} \end{enumerate} Let $\{v_1,v_2,\ldots,v_n\}$ be the vertices of a path $P_n$. Let $U_2$ be the unicyclic graph obtained from $P_n$ by joining $v_1$ and $v_3$ by an edge. Similarly, let $B_2$ be the bicyclic graph obtained from $P_n$ by joining $v_1$ to the two vertices $v_3$ and $v_4$. Note that when $n\equiv 0\pmod 2$, the graphs $U_2$ and $B_2$ are conjugated and are denoted by $\overline{U}_2$ and $\overline{B}_2$, respectively. In Figure~\ref{max conj}, we give the $7$-vertex unicyclic graph $U_2$ and the $7$-vertex bicyclic graph $B_2$. For $n\equiv 0\pmod 2$, let $S_n^*$ be the $n$-vertex conjugated tree obtained by identifying one pendant vertex from each of $\frac{n}{2}$ copies of $P_3$ and then deleting a single pendant vertex. Let $v$ be the unique central vertex of $S_n^*$.
Let $\overline{U}_1$ be the conjugated unicyclic graph obtained from $S_n^*$ by adding an edge between $v$ and any vertex not adjacent to $v$. In a similar fashion, let $\overline{B}_1$ be the conjugated bicyclic graph obtained from $S_n^*$ by adding two edges between $v$ and any two vertices not adjacent to $v$ (see Figure~\ref{ext conj}). \begin{figure} \caption{The $7$-vertex unicyclic and bicyclic graphs $U_2$ and $B_2$.} \label{max conj} \end{figure} \begin{figure} \caption{The $8$-vertex conjugated unicyclic and bicyclic graphs $\overline{U}_1$ and $\overline{B}_1$.} \label{ext conj} \end{figure} Now we give some previously known results on the center of a graph from \cite{Harary53} and some results on extremal graphs with respect to total-eccentricity index from \cite{Farooq2017}. The next theorem deals with the location of the center of a connected graph. \begin{theorem}[Harary and Norman \cite{Harary53}]\label{Harary} The center of a connected graph $G$ is contained in a block of $G$. \end{theorem} The only possible blocks in a unicyclic graph are $K_1$, $K_2$ or a cycle $C_k$. Thus the following corollary gives the center of an $n$-vertex conjugated unicyclic graph $\overline{U}$. \begin{corollary}\label{Harary2} If $\overline{U}$ is an $n$-vertex conjugated unicyclic graph with a unique cycle $C_k$, then $C(\overline{U}) = K_1$ or $K_2$, or $C(\overline{U}) \subseteq C_k$. \end{corollary} The following results give the extremal unicyclic and bicyclic graphs with respect to total-eccentricity index. \begin{theorem}[Farooq et al.~\cite{Farooq2017}] \label{tau max uni} Among all $n$-vertex unicyclic graphs, $n\geq 4$, the graph $U_2$ shown in Figure~\ref{max conj} has the maximal total-eccentricity index. \end{theorem} \begin{theorem}[Farooq et al.~\cite{Farooq2017}] \label{tau max bi} Among all $n$-vertex bicyclic graphs, $n\geq 5$, the graph $B_2$ shown in Figure~\ref{max conj} has the maximal total-eccentricity index.
\end{theorem} In Section 2, we find the extremal conjugated unicyclic and bicyclic graphs with respect to total-eccentricity index. \section{Conjugated unicyclic and bicyclic graphs} In this section, we find the extremal conjugated unicyclic and bicyclic graphs with respect to total-eccentricity index. In~\eqref{U_B_1}, we give the total-eccentricity indices of the conjugated graphs $\overline{U}_1$, $\overline{U}_2$, $\overline{B}_1$ and $\overline{B}_2$, which can easily be computed. \begin{equation}\label{U_B_1} \begin{array}{cc} \tau(\overline{U}_1) = \frac{7}{2}n - 3, & \tau(\overline{U}_2) = \frac{3n^2}{4} - n - \frac{3}{4}, \\[.2cm] \tau(\overline{B}_1) = \frac{7}{2}n - 4, & \tau(\overline{B}_2) = \frac{3}{4}n^2 - n - 2. \end{array} \end{equation} Using Theorem~\ref{Harary} and Corollary~\ref{Harary2}, we prove the following result. \begin{remark}\label{Theorem 3.1} When $n=4$, the graph shown in Figure~\ref{uni_n_46}(a) has the smallest total-eccentricity index among all $4$-vertex conjugated unicyclic graphs. When $n=6$, the graphs shown in Figure~\ref{uni_n_46}(b) and Figure~\ref{uni_n_46}(c) have the smallest total-eccentricity index among $6$-vertex conjugated unicyclic graphs. When $n=8$, the graph shown in Figure~\ref{uni_n_46}(d) has the smallest total-eccentricity index among $8$-vertex conjugated unicyclic graphs. \end{remark} \begin{figure} \caption{The $n$-vertex conjugated unicyclic graphs with minimal total-eccentricity index when $n=4,6,8$.} \label{uni_n_46} \end{figure} \begin{theorem}\label{tau conj uni min} Let $n\equiv 0\pmod 2$ and $n\geq 10$. Then among all $n$-vertex conjugated unicyclic graphs, the graph $\overline{U}_1$ shown in Figure~\ref{ext conj} has the minimal total-eccentricity index. \end{theorem} \begin{proof} Let $\overline{U}_1$ be the $n$-vertex conjugated unicyclic graph shown in Figure~\ref{ext conj}. Let $\overline{U}$ be an arbitrary $n$-vertex conjugated unicyclic graph with a unique cycle $C_k$.
We show that $\tau(\overline{U}) \geq \tau(\overline{U}_1)$. Let $n_i$ denote the number of vertices with eccentricity $i$ in $\overline{U}$. If $k \geq 8$ or ${\rm rad}(\overline{U}) \geq 4$, then \begin{equation*} \tau(\overline{U}) \geq 4n > \frac{7n}{2}-3 = \tau(\overline{U}_1). \end{equation*} In the rest of the proof, we assume that $k \in \{3,4,5,6,7\}$ and ${\rm rad}(\overline{U}) \in \{2,3\}$. Let $k\in \{6,7\}$. If $x$ is a vertex of $\overline{U}$ such that $x$ is not on $C_k$, then $e_{\overline{U}}(x) \geq 4$. Also, it is easily seen that there are at most five vertices on $C_k$ with eccentricity $3$. Thus \begin{equation}\label{ecc_k67} \tau(\overline{U}) \geq 3(5) + 4(n-5) = 4n - 5 > \frac{7n}{2}-3 = \tau(\overline{U}_1). \end{equation} We complete the proof by considering the following cases.\\ \textbf{Case 1}. When ${\rm rad}(\overline{U}) = 3$ and $k\in \{3,4,5\}$. By Corollary~\ref{Harary2}, $C(\overline{U}) = K_1$ or $C(\overline{U}) = K_2$ or $C(\overline{U}) \subseteq C_k$. This shows that $\overline{U}$ has at most five vertices with eccentricity $3$. Thus the inequality~\eqref{ecc_k67} holds in this case.\\ \textbf{Case 2}. When ${\rm rad}(\overline{U}) = 2$ and $k\in \{4,5\}$. Then ${\rm diam}(\overline{U}) \leq 2\,{\rm rad}(\overline{U}) = 4$ and there is exactly one vertex in $C(\overline{U})$, that is, $n_2 = 1$. Let $v$ be the vertex with $e_{\overline{U}}(v) = 2$. Then $v\in V(C_k)$. Considering the several possibilities for the longest paths (of length $2$) starting from $v$, and using the fact that $\overline{U}$ is conjugated, one can see that $\overline{U}$ is isomorphic to one of the graphs shown in Figure~\ref{r=2_k=5}. Moreover, observe that $n_4\geq \frac{n}{2}-1$.
\begin{figure} \caption{The $n$-vertex conjugated unicyclic graphs discussed in Case 2.} \label{r=2_k=5} \end{figure} \noindent Since $n_2 + n_3 + n_4 = n$ and $n_2 = 1$, we can write \begin{eqnarray*} \tau(\overline{U}) &=& 2n_2 + 3n_3 + 4n_4\\ &=& 2 + 3(n_3 + n_4) + n_4\\ &=& 3n - 1 + n_4\\ &\geq& 3n - 1 + \frac{n}{2} - 1 = \frac{7n}{2} - 2 > \tau(\overline{U}_1). \end{eqnarray*} \\ \textbf{Case 3.} When ${\rm rad}(\overline{U}) = 2$ and $k=3$. Then $n_2 = 1$ (see Figure~\ref{r=2_k=3}). Let $v$ be the unique central vertex of $\overline{U}$. Then either $v$ is a vertex of $C_3$ or $v$ is adjacent to a vertex of $C_3$. When $v\in V(C_3)$, then $\overline{U}$ is isomorphic to one of the graphs shown in Figure~\ref{r=2_k=3}(a), \ref{r=2_k=3}(b) or \ref{r=2_k=3}(c). In this case, all vertices with eccentricity $4$ are pendant. This gives $n_4\geq \frac{n}{2} - 2$. Therefore \begin{eqnarray*} \tau(\overline{U}) &=& 2 (1) + 3(n_3+n_4) + n_4\\ &\geq& 3n - 1 + \frac{n}{2} - 2\\ &=&\frac{7n}{2} - 3 = \tau(\overline{U}_1). \end{eqnarray*} Similarly, if the central vertex $v$ is not on $C_3$, then $\overline{U}$ is isomorphic to one of the graphs shown in Figure~\ref{r=2_k=3}(d) or \ref{r=2_k=3}(e). Note that $n_4 \geq \frac{n}{2}$. Thus $\tau(\overline{U}) \geq 2 (1) + 3(n - 1) + \frac{n}{2} = \frac{7n}{2}-1 > \tau(\overline{U}_1).$ Combining all the cases, we see that $\overline{U}_1$ is the minimal graph with respect to total-eccentricity index. This completes the proof. \begin{figure} \caption{The $n$-vertex conjugated unicyclic graphs discussed in Case 3.} \label{r=2_k=3} \end{figure} \end{proof} The following theorem gives the maximal conjugated unicyclic graph with respect to total-eccentricity index. \begin{theorem}\label{tau conj uni max} Let $n\equiv 0\pmod 2$. Then the $n$-vertex conjugated unicyclic graph with the maximal total-eccentricity index is the graph $\overline{U}_2$ shown in Figure~\ref{ext conj}.
\end{theorem} \begin{proof} Note that the class of all $n$-vertex conjugated unicyclic graphs forms a subclass of the class of all $n$-vertex unicyclic graphs. From Theorem~\ref{tau max uni}, we see that among all $n$-vertex unicyclic graphs, the graph $U_2$ (see Figure~\ref{max conj}) has the largest total-eccentricity index. Since $U_2$ admits a perfect matching when $n\equiv 0 \pmod 2$, the result follows. \end{proof} \begin{corollary} For an $n$-vertex conjugated unicyclic graph $\overline{U}$, we have $\frac{7n}{2} - 3 \leq \tau(\overline{U}) \leq \frac{3n^2}{4} - n - \frac{3}{4}$. \end{corollary} \begin{proof} Using Theorem~\ref{tau conj uni min}, Theorem~\ref{tau conj uni max} and equation~\eqref{U_B_1}, we obtain the required result. \end{proof} The next theorem gives the minimal conjugated bicyclic graphs with respect to total-eccentricity index; we first deal with small values of $n$. \begin{remark}\label{Theorem 3.4} Let $n = 4$. Then among all $4$-vertex conjugated bicyclic graphs, one can easily see that the graph shown in Figure~\ref{counterbi}(a) has the minimal total-eccentricity index. Similarly, when $n=6$ and $8$, the graphs respectively shown in Figure~\ref{counterbi}(b) and Figure~\ref{counterbi}(c) have the minimal total-eccentricity index among all $6$-vertex and $8$-vertex conjugated bicyclic graphs. \end{remark} \begin{figure} \caption{The $n$-vertex conjugated bicyclic graphs with minimal total-eccentricity index when $n=4,6$ and $8$.} \label{counterbi} \end{figure} \begin{theorem}\label{tau conj bi min} Let $n\equiv 0\pmod 2$ and $n\geq 10$. Then among the $n$-vertex conjugated bicyclic graphs, the graph $\overline{B}_1$ shown in Figure~\ref{ext conj} has the minimal total-eccentricity index. \end{theorem} \begin{proof} Let $\overline{B}_1$ be the $n$-vertex conjugated bicyclic graph shown in Figure~\ref{ext conj}. Let $\overline{B}$ be an arbitrary $n$-vertex conjugated bicyclic graph with $\overline{B} \ncong \overline{B}_1$.
Let $C(\overline{B})$ denote the center of $\overline{B}$ and $n_i$ denote the number of vertices with eccentricity $i$. The proof is divided into two cases depending upon the number of cycles in $\overline{B}$. \\ \textbf{Case 1}. When $\overline{B}$ contains two edge-disjoint cycles $C_{k_1}$ and $C_{k_2}$ of lengths $k_1$ and $k_2$, respectively. Without loss of generality, assume that $k_1\leq k_2$. If ${\rm rad}(\overline{B})\geq 4$ or $k_2\geq 8$, then $$\tau(\overline{B}) \geq 4n > \frac{7n}{2}-4 = \tau(\overline{B}_1).$$ Thus, we assume that $k_2\in \{3,4,5,6,7\}$ and ${\rm rad}(\overline{B}) \in \{2,3\}$. If $k_2\in \{6,7\}$, then for any vertex $x \notin V(C_{k_2})$, $e_{\overline{B}}(x) \geq 4$. Moreover, as $k_2\leq 7$, the number of vertices with eccentricity $3$ is at most $7$. Thus \begin{equation}\label{T67} \tau(\overline{B}) \geq 3(7) + 4(n-7) = 4n - 7 > \frac{7n}{2}-4 = \tau(\overline{B}_1). \end{equation} We consider the following three subcases. \\[.2cm] \textbf{(a)} Let ${\rm rad}(\overline{B})=3$ and $k_2 \in \{3,4,5\}$. By Theorem~\ref{Harary}, we have $|V(C(\overline{B}))| \leq 7$. Thus $\tau(\overline{B})$ satisfies inequality~\eqref{T67}. \\[.2cm] \textbf{(b)} Let ${\rm rad}(\overline{B})=2$ and $k_2\in \{4,5\}$. We observe that the center $C(\overline{B})$ is contained in $C_{k_2}$ and $n_2 = 1$; let $v$ be the unique central vertex of $\overline{B}$. Then, considering the possible choices of pendent vertices in the conjugated graph $\overline{B}$, one can observe that $\overline{B}$ is one of the graphs shown in Figure~\ref{r=2_k=45_bi}. Moreover, $n_4 \geq \frac{n}{2}-2$. Then $\tau(\overline{B}) \geq 2n_2 + 3n_3 + 4n_4 = 2n_2 + 3(n_3 + n_4) + n_4 \geq 2(1) + 3(n-1) + \frac{n}{2}-2 = \frac{7n}{2} - 3 > \tau(\overline{B}_1)$.
\begin{figure} \caption{The $n$-vertex conjugated bicyclic graphs discussed in Case 1-(b).} \label{r=2_k=45_bi} \end{figure} \\[.2cm] \textbf{(c)} Let ${\rm rad}(\overline{B})=2$ and $k_1 = k_2 = 3$. If $C(\overline{B}) \subseteq C_{k_2}$, then $C(\overline{B}) = K_1$, since otherwise $n\leq 8$, which is not true. Similarly, if $C(\overline{B}) \nsubseteq C_{k_2}$, then $C(\overline{B}) = K_2$ or $K_1$. Since $n_2 \ngeq 2$, we have $n_2 = 1$. Let $c$ be the unique central vertex. Then $\overline{B}$ is isomorphic to one of the graphs shown in Figure~\ref{counterbi-2}. When $c$ is not a vertex of $C_{k_1}$ or $C_{k_2}$, then $n_4 \geq \frac{n}{2}+1$ and $\tau(\overline{B}) \geq 2(1) + 3(n-1) + \frac{n}{2}+1 = \frac{7n}{2} > \tau(\overline{B}_1)$. If $c$ is a vertex of $C_{k_1}$ or $C_{k_2}$, then $n_4\geq \frac{n}{2}-3$. Thus $\tau(\overline{B}) \geq 2(1) + 3(n-1) + \frac{n}{2}-3 = \frac{7n}{2} - 4 = \tau(\overline{B}_1)$. \begin{figure} \caption{The $n$-vertex conjugated bicyclic graphs studied in Case 1-(c).} \label{counterbi-2} \end{figure} \\ \textbf{Case 2.} When the cycles of $\overline{B}$ share some edges. Then there are cycles $C_{k_1}$, $C_{k_2}$ and $C_{k_3}$ in $\overline{B}$ of lengths $k_1$, $k_2$ and $k_3$, respectively. Without loss of generality, assume that $k_1\leq k_2\leq k_3$. Then $k_1,k_2 \geq 3$ and $k_3 \geq 4$. Let $Q$ be the subgraph of $\overline{B}$ induced by the vertices of $C_{k_1}$, $C_{k_2}$ and $C_{k_3}$. Clearly, $Q$ contains the cycle $C_{k_3}$. Assume that $V(C_{k_3}) = \{v_1,v_2,\ldots,v_{k_3}\}$. Then $\overline{B}$ is minimal with respect to total-eccentricity index if $Q$ can be obtained from the cycle $C_{k_3}$ by adding the edge $v_1v_{\lfloor\frac{k_3}{2} \rfloor + 1}$ or $v_1v_{\lfloor\frac{k_3}{2} \rfloor + 2}$. When $k_3 \geq 13$, we have $e_{\overline{B}}(w) \geq 4$ for all $w\in V(\overline{B})$. This gives $\tau(\overline{B}) \geq 4n > \tau(\overline{B}_1)$. Thus, we may assume that ${\rm rad}(\overline{B}) \leq 3$ and $k_3\leq 12$.
We consider the following two subcases:\\[.2cm] \textbf{(a)} When $k_3\in \{9,10,11,12\}$. Then ${\rm rad}(\overline{B}) = 3$. There exists a vertex $c$ with degree $3$ in $Q$ such that $c\in C(\overline{B})$ and $c$ has at most $\frac{n-10}{2}+1$ neighbours not in $Q$. Clearly, all of these vertices have eccentricity $4$. Then $\frac{n}{2}-5$ such vertices can have unique pendent vertices with eccentricity $5$. Moreover, there is at least one vertex with eccentricity $5$ in $Q$, since otherwise ${\rm rad}(\overline{B}) \neq 3$. Thus $n_5 \geq \frac{n}{2}-4$. Moreover, at most $3$ vertices in $Q$ can have eccentricity $3$, which gives $1 \leq n_3 \leq 3$. When $k_3=9$, such a graph $\overline{B}$ is shown in Figure~\ref{r=2_k3=4}(a). The vertex $v\in V(Q)$ such that $e_{\overline{B}}(v) = 5$ is also shown in the figure. Using the facts that $n = n_3+n_4+n_5$ and $n_4+n_5 = n - n_3 \geq n - 3$, we get \begin{eqnarray*} \tau(\overline{B}) &=& 3n_3 + 4n_4 + 5n_5 \\ &=& 3n_3 + 4(n_4 + n_5) + n_5 \\ &\geq& 3(1) + 4(n - 3) + \frac{n}{2} - 4 \\ &=& \frac{9n}{2} - 13 \geq \tau(\overline{B}_1). \end{eqnarray*} \textbf{(b)} When $k_3 \in \{4,5,6,7,8\}$. We first assume that ${\rm rad}(\overline{B}) = 3$. Then by Theorem~\ref{Harary}, we have $1 \leq n_3 \leq 8$. Thus $\tau(\overline{B}) \geq 3(8) + 4(n-8) = 4n - 8 \geq \tau(\overline{B}_1)$. On the other hand, when ${\rm rad}(\overline{B}) = 2$, we have $n_2 = 1$. When $C(\overline{B}) \nsubseteq Q$, the minimal graph $\overline{B}$ is isomorphic to the graph shown in Figure~\ref{r=2_k3=4}(b). Clearly, $n_4 \geq \frac{n}{2}$. Thus $\tau(\overline{B}) = 2n_2 + 3(n_3 + n_4) + n_4 \geq 2(1) + 3(n-1) + \frac{n}{2} = \frac{7n}{2} - 1 > \tau(\overline{B}_1)$. When $C(\overline{B}) \subseteq Q$, then $\overline{B}$ is isomorphic to one of the graphs shown in Figures~\ref{r=2_k3=4}(c)$-$\ref{r=2_k3=4}(h). It can be seen that $n_4\geq \frac{n}{2}-2$.
Thus we can write $\tau(\overline{B}) = 2n_2 + 3(n_3 + n_4) + n_4 \geq 2(1) + 3(n-1) + \frac{n}{2} - 2 = \frac{7n}{2} - 3 > \tau(\overline{B}_1)$. \begin{figure} \caption{The conjugated bicyclic graphs studied in Case 2. The vertices $c$, $x$ and $y$ represent central vertices.} \label{r=2_k3=4} \end{figure} \\ Combining the results of Case 1 and Case 2, we see that among all conjugated bicyclic graphs, $\overline{B}_1$ has the minimal total-eccentricity index. The proof is complete. \end{proof} \begin{theorem}\label{tau conj bi max} Let $n\equiv 0\pmod 2$. Then among the $n$-vertex conjugated bicyclic graphs, the graph $\overline{B}_2$ shown in Figure~\ref{ext conj} has the maximal total-eccentricity index. \end{theorem} \begin{proof} For $n\equiv 0 \pmod 2$, the proof can be derived from the proof of Theorem~\ref{tau max bi}. \end{proof} \begin{corollary} For any $n$-vertex conjugated bicyclic graph $\overline{B}$, we have $\frac{7n}{2}-4 \leq \tau(\overline{B}) \leq \frac{3n^2}{4} - n -2$. \end{corollary} \begin{proof} The result follows by using Theorem~\ref{tau conj bi min}, Theorem~\ref{tau conj bi max} and equation~\eqref{T67}. \end{proof} \section{Conclusion} In this paper, we extended the results of Farooq et al.~\cite{Farooq2017} and studied the extremal conjugated unicyclic and bicyclic graphs with respect to total-eccentricity index. \end{document}
\begin{document} \title{Dense Bag-of-Temporal-SIFT-Words\\ for Time Series Classification} \author{Adeline Bailly\inst{1,4} \and Simon Malinowski\inst{2} \and Romain Tavenard\inst{1} \and \\ Laetitia Chapel\inst{3} \and Thomas Guyet\inst{4} } \institute{Universit{\'e} de Rennes 2, IRISA, LETG-Rennes COSTEL, Rennes, France \and Universit{\'e} de Rennes 1, IRISA, Rennes, France \and Universit{\'e} de Bretagne-Sud, IRISA, Vannes, France \and Agrocampus Ouest, IRISA, Rennes, France } \maketitle \setcounter{footnote}{0} \begin{abstract} The SIFT framework has been shown to be accurate in the image classification context. In~\cite{botsw15}, we designed a Bag-of-Words approach based on an adaptation of this framework to time series classification. It relies on two steps: SIFT-based features are first extracted and quantized into words; histograms of occurrences of each word are then fed into a classifier. In this paper, we investigate techniques to improve the performance of Bag-of-Temporal-SIFT-Words: dense extraction of keypoints and normalization of Bag-of-Words histograms. Extensive experiments show that our method significantly outperforms most state-of-the-art techniques for time series classification. \keywords{time series classification, Bag-of-Words, SIFT, dense features, BoTSW, D-BoTSW} \end{abstract} \section{Introduction \label{sec:intro} } Classification of time series has received a considerable amount of interest over the past years due to many real-life applications, such as medicine~\cite{wang2012bowbiomedical}, environmental modeling~\cite{dusseux13} and speech recognition~\cite{lecun95}. A wide range of algorithms has been proposed to solve this problem. One simple classifier is the $k$-nearest-neighbor ($k$NN), which is usually combined with the Euclidean Distance (ED) or the Dynamic Time Warping (DTW) similarity measure.
The combination of the $k$NN classifier with DTW is one of the most popular methods, since it achieves high classification accuracy~\cite{ratanamahatana2004everything}. However, this method has a high computational cost, which makes it difficult to use for large-scale real-life applications. The above-mentioned techniques compute similarity between time series based on point-to-point comparisons. Classification techniques based on higher-level structures (\emph{e.g.} feature vectors) are usually faster, while being at least as accurate as DTW-based classifiers. Hence, various works have investigated the extraction of local and global features in time series. Among these works, the Bag-of-Words (BoW) approach (also called Bag-of-Features) consists in representing documents using a histogram of word occurrences. It is a very common technique in text mining, information retrieval and content-based image retrieval because of its simplicity and performance. For these reasons, it has been adapted to time series data in several recent works~\cite{Bay15,baydogan2013bof,lin2012bop,senin2013saxsvm,wang2012bowbiomedical}. Different kinds of features based on simple statistics, computed at a local scale, are used to create the words. In the context of image retrieval and classification, scale-invariant descriptors have proved their accuracy. In particular, the Scale-Invariant Feature Transform (SIFT) framework has led to widely used descriptors~\cite{lowe2004distinctive}. These descriptors are scale and rotation invariant while being robust to noise. In~\cite{botsw15}, we built on this framework to design a BoW approach for time series classification where words correspond to quantized versions of local features. Features are built using the SIFT framework for both detection and description of the keypoints. This approach can be seen as an adaptation of~\cite{sivic2003video}, which uses SIFT features associated with visual words, to time series.
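For concreteness, the DTW-based 1-NN baseline discussed above can be sketched as follows (an illustrative, minimal dynamic-programming implementation without a warping window, not the code used in the cited experiments):

```python
import numpy as np

def dtw(x, y):
    """Dynamic Time Warping distance between two 1-d series (no warping window)."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)  # D[i, j]: best cost aligning x[:i] with y[:j]
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (x[i - 1] - y[j - 1]) ** 2
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return np.sqrt(D[n, m])

def nn_dtw_label(query, train_series, train_labels):
    """1-NN classification with DTW, as in the baseline discussed above."""
    dists = [dtw(query, s) for s in train_series]
    return train_labels[int(np.argmin(dists))]
```

Unlike the Euclidean distance, this measure is zero between two series that differ only by a local temporal distortion, which is what makes DTW robust to time shifts, at a quadratic cost in the series length.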
In this paper, we improve our previous work by applying enhancement techniques for BoW approaches, such as dense extraction and BoW normalization. To validate these improvements, we conduct extensive experiments on a wide range of datasets. This paper is organized as follows. Section~\ref{sec:related} summarizes related work, Section~\ref{sec:proposed} describes the proposed Bag-of-Temporal-SIFT-Words (BoTSW) method and its improved version (dense extraction and BoW normalization, D-BoTSW), and Section~\ref{sec:xp} reports experimental results. Finally, Section~\ref{sec:ccl} concludes and discusses future work. \section{Related work \label{sec:related} } Our approach for time series classification builds on two well-known methods in computer vision: local features are extracted from time series using a SIFT-based approach and a global representation of time series is produced using Bag-of-Words. This section first introduces state-of-the-art distance-based methods in time series classification and then presents previous works that make use of Bag-of-Words approaches for time series classification. \subsection{Distance-based time series classification} The data mining community has long investigated the field of time series classification. Early works focus on the use of dedicated similarity measures to assess the similarity between time series. In~\cite{ratanamahatana2004everything}, Ratanamahatana and Keogh compare Dynamic Time Warping to the Euclidean Distance when used with a simple $k$NN classifier. While the former benefits from its robustness to temporal distortions to achieve high accuracy, ED is known to have a much lower computational cost. Cuturi~\cite{cuturi2011fast} shows that, although DTW is well-suited to retrieval tasks since it focuses on the best possible alignment between time series, it fails at precisely quantifying the dissimilarity between non-matching sequences (which is backed by the fact that the DTW-derived kernel is not positive definite).
Hence, he introduces the Global Alignment Kernel, which takes into account all possible alignments in order to produce a reliable similarity measure to be used at the core of standard kernel methods such as Support Vector Machines (SVM). Lines and Bagnall~\cite{lines14} propose an ensemble classifier based on elastic distance measures (including DTW), named Proportional Elastic Ensemble (PROP). Instead of building the classification decision on similarities between time series, Ye and Keogh~\cite{ye2009time} use a decision tree in which the partitioning of time series is performed with respect to the presence (or absence) of discriminant subsequences (named shapelets) in the series. Though accurate, the method is computationally demanding, as building the decision tree requires checking all candidate shapelets. Douzal and Amblard~\cite{douzal2010pr} define a dedicated similarity measure for time series which is then used in a classification tree. \subsection{Bag-of-Words for time series classification} Inspired by the text mining, information retrieval and computer vision communities, recent works have investigated the use of Bag-of-Words for time series classification~\cite{Bay15,baydogan2013bof,lin2012bop,senin2013saxsvm,wang2012bowbiomedical}. These works are based on two main operations: converting time series into Bag-of-Words, and building a classifier upon this BoW representation. Usually, standard techniques such as random forests, SVM, neural networks or $k$NN are used for the classification step. Yet, many different ways of converting time series into Bag-of-Words have been introduced. Among them, Baydogan \emph{et al.}~\cite{baydogan2013bof} propose a framework to classify time series, denoted TSBF, where local features such as the mean, variance and extremum values are computed on sliding windows. These features are then quantized into words using a codebook learned from class probability estimate distributions.
In~\cite{wang2012bowbiomedical}, discrete wavelet coefficients are extracted on sliding windows and then quantized into words using $k$-means. In~\cite{lin2012bop,senin2013saxsvm}, words are constructed using the Symbolic Aggregate approXimation (SAX) representation~\cite{Lin03} of time series. SAX symbols are extracted from time series and histograms of $n$-grams of these symbols are computed to form a Bag-of-Patterns (BoP). In~\cite{senin2013saxsvm}, Senin and Malinchik combine SAX with the Vector Space Model to form the SAX-VSM method. In~\cite{Bay15}, Baydogan and Runger design a symbolic representation of multivariate time series (MTS), called SMTS, where MTS are transformed into a feature matrix, whose rows are feature vectors containing a time index, the values and the gradient of the time series at this time index (on all dimensions). Random samples of this matrix are given to decision trees whose leaves are seen as words. A histogram of words is output once the different trees are learned. Local feature extraction has long been investigated in the computer vision community. One of the most powerful local features for images is SIFT~\cite{lowe2004distinctive}. It consists in detecting keypoints as extrema of the Difference-of-Gaussians (DoG) function and describing their neighborhoods using histograms of gradients. Xie and Beigi~\cite{Xie09} use a similar keypoint detection for time series. Keypoints are then described by scale-invariant features that characterize the shapes surrounding the extrema. In~\cite{Can12}, extraction and description of time series keypoints in a SIFT-like framework is used to reduce the complexity of DTW: features are used to match anchor points from two different time series and prune the search space when searching for the optimal DTW path. In this paper, we build upon BoW of SIFT-based descriptors.
We propose an adaptation of SIFT to mono-dimensional signals that preserves their robustness to noise and their scale invariance. We then use BoW to gather information from many local features into a single global one. \section{Bag-of-Temporal-SIFT-Words (BoTSW) method \label{sec:proposed} } The proposed method is based on three main steps: (i) extraction of keypoints in time series, (ii) description of these keypoints by gradient magnitudes at a specific scale and (iii) representation of time series by a BoW, where words correspond to quantized versions of the keypoint descriptions. These steps are depicted in \figurename~\ref{fig:overview} and detailed below. \begin{figure} \caption{Dense extraction $(\tau_\text{step}=15)$} \caption{Keypoint description $(n_b=4, a=2)$} \caption{$k$-means generated codebook $(k=6)$} \caption{Resulting $k$-dimensional histogram} \caption{Approach overview: (a) A time series and its dense-extracted keypoints. (b) Keypoint description is based on the time series filtered at the scale at which the keypoint is extracted. Descriptors are quantized into words. (c) Codewords obtained \emph{via} $k$-means.} \label{fig:overview} \end{figure} \subsection{Keypoint extraction in time series} The first step of our method consists in extracting keypoints in time series. Two approaches are described here: the first one is based on scale-space extrema detection (as in~\cite{botsw15}) and the second one proposes a dense extraction scheme. \subsubsection{Scale-space extrema detection.} Following the SIFT framework, keypoints in time series are detected as local extrema in terms of both scale and (temporal) location. These scale-space extrema are identified using a DoG function, and form a list of scale-invariant keypoints.
Let $L(t,\sigma)$ be the convolution ($\ast$) of a Gaussian function $G(t,\sigma)$ of width $\sigma$ with a time series $S(t)$: \begin{equation} L(t,\sigma) = G(t,\sigma) \ast S(t) \end{equation} where $G(t,\sigma)$ is defined as \begin{equation} G(t,\sigma) = \frac{1}{\sqrt{2\pi}~\sigma}~e^{- t^2 / 2\sigma^2}. \end{equation} Lowe~\cite{lowe1999objectrecognition} proposes the Difference-of-Gaussians (DoG) function to detect scale-space extrema in images. Adapted to time series, a DoG function is obtained by subtracting two time series filtered at consecutive scales: \begin{equation} D(t,\sigma) = L(t, k_\text{sc}\sigma) - L(t,\sigma), \end{equation} where $k_\text{sc}$ is a parameter of the method that controls the scale ratio between two consecutive scales. Keypoints are then detected at time index $t$ and scale $j$ if they correspond to extrema of $D(t, k_\text{sc}^j\sigma_0)$ in both time and scale, where $\sigma_0$ is the width of the Gaussian corresponding to the reference scale. At a given scale, each point has two neighbors: one at the previous and one at the following time instant. Points also have neighbors one scale up and one scale down at the previous, same and next time instants, leading to a total of eight neighbors. If a point is higher (or lower) than all of its neighbors, it is considered as an extremum in the scale-space domain and hence a keypoint of $S$. \subsubsection{Dense extraction.} Previous research has shown that accurate classification can be achieved using densely extracted local features~\cite{jurie05,wang09}. In this section, we present the adaptation of this setup to our BoTSW scheme. Keypoints selected with dense extraction no longer correspond to extrema but are rather systematically extracted at all scales every $\tau_\text{step}$ time steps on the Gaussian-filtered time series $L(\cdot{},k_\text{sc}^j\sigma_0)$. Unlike scale-space extrema detection, regular sampling guarantees a minimal number of keypoints per time series.
This is especially crucial for smooth time series, from which very few keypoints are detected when using scale-space extrema detection. In addition, even if the densely extracted keypoints are not scale-space extrema, the description of these keypoints (cf. Section~\ref{ssec:desc}) covers the description of scale-space extrema if $\tau_\text{step}$ is not too large. This usually leads to more robust global descriptors. A dense extraction scheme is represented in~\figurename~\ref{fig:overview}, where we consider a step of $\tau_\text{step} = 15$ for the sake of readability. In the following, when dense extraction is performed, we refer to our method as D-BoTSW (for dense BoTSW). \subsection{Description of the extracted keypoints} \label{ssec:desc} The next step in our process is the description of the keypoints. A keypoint at time index $t$ and scale $j$ is described by the gradient magnitudes of $L(\cdot{}, k_\text{sc}^j\sigma_0)$ around $t$. To do so, $n_b$ blocks of size $a$ are selected around the keypoint. Gradients are computed at each point of each block and weighted using a Gaussian window of standard deviation $\frac{a \times n_b}{2}$ so that points that are farther in time from the detected keypoint have a lower influence. Then, each block is described by two values: the sum of positive gradients and the sum of negative gradients. The resulting feature vector is hence of dimension $2 \times n_b$. \subsection{Bag-of-Temporal-SIFT-Words for time series classification} \label{botsw_v_features} The set of all training features is used to learn a codebook of $k$ words using $k$-means clustering. Words represent different local behaviors in time series. Then, for a given time series, each feature vector is assigned to the closest word in the codebook. The number of occurrences of each word in a time series is computed. The (D-)BoTSW representation of a time series is the $\ell_2$-normalized histogram (\emph{i.e.} frequency vector) of word occurrences.
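The dense extraction, block-gradient description and quantization steps above can be sketched as follows (a simplified illustration under stated assumptions: the Gaussian weighting of gradients is omitted, the number of scales and default parameter values are ours, and SciPy/scikit-learn stand in for the authors' C++ implementation):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from sklearn.cluster import KMeans

def dense_descriptors(series, sigma0=1.6, k_sc=2 ** (1 / 3), n_scales=4,
                      tau_step=10, n_b=4, a=4):
    """Describe densely extracted keypoints of a 1-d series (Gaussian weighting omitted)."""
    feats = []
    half = (n_b * a) // 2
    for j in range(n_scales):
        L = gaussian_filter1d(np.asarray(series, dtype=float), sigma0 * k_sc ** j)
        grad = np.gradient(L)
        for t in range(half, len(L) - half, tau_step):
            blocks = grad[t - half:t + half].reshape(n_b, a)  # n_b blocks of size a
            pos = np.clip(blocks, 0, None).sum(axis=1)        # sum of positive gradients
            neg = np.clip(blocks, None, 0).sum(axis=1)        # sum of negative gradients
            feats.append(np.concatenate([pos, neg]))          # 2 * n_b values
    return np.array(feats)

def botsw_histograms(train_series, k=8, seed=0):
    """Learn a k-word codebook and return one l2-normalized BoW per series."""
    all_feats = [dense_descriptors(s) for s in train_series]
    km = KMeans(n_clusters=k, n_init=10, random_state=seed)
    km.fit(np.vstack(all_feats))
    bows = []
    for f in all_feats:
        h = np.bincount(km.predict(f), minlength=k).astype(float)
        bows.append(h / np.linalg.norm(h))
    return np.array(bows)
```

The resulting histograms are the global representations fed to the classifier in the experiments.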
\subsubsection{Bag-of-Words normalization.} Dense sampling on multiple Gaussian-filtered time series provides considerable information to process. It also tends to generate words with little informative power, as stop words do in text mining applications. In order to reduce the impact of those words, we compare two normalization schemes for BoW: Signed Square Root normalization (SSR) and Inverse Document Frequency normalization (IDF). These normalizations are commonly used in image retrieval and classification based on histograms~\cite{jegou12,jegou2010aggregating,perronin10,sivic2003video}. J{\'e}gou \emph{et al.}~\cite{jegou2010aggregating} and Perronin \emph{et al.}~\cite{perronin10} show that reducing the influence of frequent codewords before $\ell_2$ normalization can be profitable. They apply a power $\alpha \in [0,1]$ to their global representation. SSR normalization corresponds to the case where $\alpha = 0.5$, which leads to near-optimal results~\cite{jegou2010aggregating,perronin10}. IDF normalization also tends to lower the influence of frequent codewords. To do so, the document frequency of a word is computed as the number of training time series in which the word occurs. BoW are then updated by dividing each component by its associated document frequency. SSR and IDF normalizations both reduce the influence of frequent codewords in the codebook, and are applied before $\ell_2$ normalization. We show in the experimental part of this paper that using BoW normalization improves the accuracy of our method. Normalized histograms are finally given to a classifier that learns how to discriminate classes from this D-BoTSW representation. \section{Experiments and results} \label{sec:xp} In this section, we investigate the impact of both dense extraction of the keypoints and normalization of the Bag-of-Words on classification performance. We then compare our results to the ones obtained with standard time series classification techniques.
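The SSR and IDF schemes compared in the experiments below can be sketched as follows (our illustration; rows of `bows` are per-series word-count histograms, and the document frequency is computed on the training set):

```python
import numpy as np

def ssr_l2(bows):
    """Signed square root (power alpha = 0.5), then l2 normalization."""
    out = np.sign(bows) * np.sqrt(np.abs(bows))
    return out / np.linalg.norm(out, axis=1, keepdims=True)

def idf_l2(bows):
    """Divide each component by its document frequency, then l2-normalize."""
    df = np.count_nonzero(bows > 0, axis=0)  # number of series containing each word
    out = bows / np.maximum(df, 1)           # guard against words absent everywhere
    return out / np.linalg.norm(out, axis=1, keepdims=True)
```

Both transforms shrink the contribution of frequent codewords before the final $\ell_2$ normalization, mirroring the stop-word down-weighting used in text mining.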
For the sake of reproducibility, the C++ source code used for (D-)BoTSW in these experiments is made available for download\footnote{ \url{http://people.irisa.fr/Adeline.Bailly/code.html}}. To provide illustrative timings for our methods, we ran D-BoTSW on a personal computer, for a given set of parameters, using the dataset \emph{Cricket\_X}~\cite{ucr}, which is made of 390 training time series and 390 test ones. Each time series in the dataset is of length 300. Extraction and description of dense keypoints take around 1 second for all time series in the dataset. Then, 35 seconds are necessary to learn a $k$-means and fit a linear SVM classifier using training data only. Finally, classification of all D-BoTSW corresponding to test time series takes less than 1 second. \subsection{Experimental setup} \label{sec:xp_setup} Experiments are conducted on the 86 currently available datasets from the UCR repository~\cite{ucr}, the largest online database for time series classification. It includes a wide variety of problems, such as sensor readings (\emph{ECG}), image outlines (\emph{ArrowHead}), human motion (\emph{GunPoint}), as well as simulated problems (\emph{TwoPatterns}). All datasets are split into a training and a test set, whose sizes vary between less than 20 and more than 8000 time series. For a given dataset, all time series have the same length, ranging from 24 to more than 2500 points. Parameters $a$, $n_b$, $k$ and $C_{SVM}$ of (D-)BoTSW are learned, while we set $\sigma_0 = 1.6$ and $k_\text{sc} = 2^{1/3}$, as these values have been shown to produce stable results~\cite{lowe2004distinctive}. Parameters $a$, $n_b$, $k$ and $C_{SVM}$ vary inside the following sets: $\{4, 8\}$, $\{4, 8, 12, 16, 20\}$, $\left\{2^i,\ i \in \{5,\ldots,10\}\right\}$ and $\{1, 10, 100\}$, respectively. Codebooks are obtained \emph{via} $k$-means quantization and a linear SVM is used to classify time series represented as (D-)BoTSW.
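The classifier-fitting part of this setup can be sketched as follows (a simplified illustration: only $C_{SVM}$ is gridded here, whereas the paper also cross-validates $a$, $n_b$ and $k$; the dataset-dependent fold rule, leave-one-out below 300 training series, follows the paper):

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, LeaveOneOut
from sklearn.svm import LinearSVC

def fit_svm_cv(train_bows, train_labels):
    """Select C_SVM in {1, 10, 100} by cross-validation on the training set."""
    cv = LeaveOneOut() if len(train_bows) < 300 else 10  # fold rule as in the paper
    search = GridSearchCV(LinearSVC(), {"C": [1, 10, 100]}, cv=cv)
    search.fit(train_bows, train_labels)
    return search.best_estimator_
```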
For our approach, the best sets (in terms of accuracy) of $(a, n_b, k, C_{SVM})$ parameters are selected by performing cross-validation on the training set. Due to the heterogeneity of the datasets, leave-one-out cross-validation is performed on datasets whose training set contains less than 300 time series, and $10$-fold cross-validation is used otherwise. These best sets of parameters are then used to build the classifier on the training set and evaluate it on the test set. For datasets with little training data, it is likely that several sets of parameters yield the best performance during the cross-validation process. For example, when using the \emph{DiatomSizeReduction} dataset, BoTSW has 150 out of 180 parameter sets yielding the best performance, while there are 42 such sets for D-BoTSW with SSR normalization. In both cases, the number of \emph{best} parameter sets is too high to allow a fair parameter selection. When this happens, we keep all parameter sets with the best performance at training and perform a majority voting between their outputs at test time. Parameters $a$ and $n_b$ both influence the descriptions of the keypoints; their optimal values vary between datasets so that the description of keypoints can fit the shape of the data. If the data contains sharp peaks, the size of the neighborhood on which features are computed (equal to $a\times{}n_b$) should be small. On the contrary, if it contains smooth peaks, descriptions should take more points into account. Parameter $k$ of the $k$-means needs to be large enough to precisely represent the different features, yet small enough to avoid overfitting. We consequently allow a large range of values for $k$. In the following, BoTSW denotes the approach where keypoints are selected as scale-space extrema and BoW histograms are $\ell_2$-normalized. For all experiments with dense extraction, we set $\tau_\text{step} = 10$, and we extract keypoints at all scales.
Using such a value for $\tau_\text{step}$ enables one to have a sufficient number of keypoints even for short time series, and guarantees that keypoint neighborhoods overlap so that all subparts of the time series are described. \subsection{Experiments on dense extraction} \label{sec:xp_dense} \begin{figure} \caption{Pairwise comparison of error rates between BoTSW and D-BoTSW on the UCR datasets.} \label{fig:errorrate_basic_dense} \end{figure} \figurename~\ref{fig:errorrate_basic_dense} shows a pairwise comparison of error rates between BoTSW and its dense counterpart D-BoTSW for all datasets in the UCR repository. A point on the diagonal means that the obtained error rates are equal. A point above the diagonal illustrates a case where D-BoTSW has a smaller error rate than BoTSW. The Wilcoxon signed rank test's $p$-value and Win/Tie/Lose scores are given in the bottom-right corner of the figure. Win/Tie/Lose scores indicate that D-BoTSW reaches better performance than BoTSW on 61 datasets, equivalent performance on 4 datasets and worse performance on 21 datasets. The Wilcoxon test shows that this difference is significant (in the following, we use a significance level of 10\% for all statistical tests). D-BoTSW improves classification on a large majority of the datasets. However, most points are close to the diagonal, which means that the improvement is of small magnitude. In the following, we show how to further improve these results thanks to D-BoTSW normalization. \subsection{Experiments on BoW normalization} \label{sec:xp_norm} \begin{figure} \caption{Pairwise comparison of error rates between D-BoTSW and its normalized variants.} \label{fig:errorrate_dense_norm} \end{figure} In image retrieval and classification, Bag-of-Words normalizations have been shown to improve classification rates with densely extracted keypoints. We investigate here the impact of SSR and IDF normalizations on D-BoTSW for time series classification.
As can be seen in~\figurename~\ref{fig:errorrate_dense_norm}, both SSR and IDF normalizations improve classification performance (though the improvement brought by IDF is not statistically significant). Lowering the influence of largely-represented codewords hence leads to more accurate classification with D-BoTSW. IDF normalization only leads to a small improvement in classification accuracy: its Win/Tie/Lose score against non-normalized D-BoTSW is 38/14/34. On the contrary, SSR normalization significantly improves the classification accuracy, with a Win/Tie/Lose score of 61/10/15 over non-normalized D-BoTSW. \begin{figure} \caption{$\ell_2$-normalized D-BoTSW} \caption{IDF+$\ell_2$-normalized D-BoTSW} \caption{SSR+$\ell_2$-normalized D-BoTSW} \caption{Per-dimension energy of D-BoTSW vectors extracted from the \emph{ShapesAll} dataset.} \label{fig:var_dense_norm} \end{figure} This is backed by~\figurename~\ref{fig:var_dense_norm}, in which one can see that when using SSR normalization, variance (\emph{i.e.} energy) is spread across all dimensions of the BoW, leading to a more balanced representation than with the other two normalization schemes. \subsection{Comparison with state-of-the-art methods} In the following, we will refer to dense SSR-normalized BoTSW as D-BoTSW, since this setup is the one providing the best classification performance. We now compare D-BoTSW to the most popular state-of-the-art methods for time series classification. The UCR repository provides error rates for the 86 datasets with Euclidean distance 1NN (EDNN) and Dynamic Time Warping 1NN (DTWNN)~\cite{ratanamahatana2004everything}. We use published error rates for TSBF (45 datasets)~\cite{baydogan2013bof}, SAX-VSM (51 datasets)~\cite{senin2013saxsvm}, SMTS (45 datasets)~\cite{Bay15}, PROP (46 datasets)~\cite{lines14} and BoP (20 datasets)~\cite{lin2012bop}.
As BoP~\cite{lin2012bop} only provides classification performance for 20 datasets, we decided not to plot a pairwise comparison of error rates between D-BoTSW and BoP. Note however that the Win/Tie/Lose score is 17/1/2 in favor of D-BoTSW and that this difference is statistically significant ($p<0.001$). BoP has a smaller error rate than D-BoTSW on the \emph{wafer} (0.003 \emph{vs.} 0.004) and \emph{Olive Oil} (0.133 \emph{vs.} 0.167) datasets. \begin{figure} \caption{Pairwise comparison of error rates between D-BoTSW and state-of-the-art methods.} \label{fig:errorrate_baselines} \end{figure} \begin{table}[!p] \centering \input{erate_table.tex} \caption{\label{tab:errorrate} Classification error rates for D-BoTSW with SSR normalization (for each dataset, the best performance is written in bold).} \end{table} \figurename~\ref{fig:errorrate_baselines} shows that D-BoTSW performs better than 1NN combined with ED (EDNN) or DTW (DTWNN), TSBF, SAX-VSM and SMTS. Though relying on a single similarity measure with linear time complexity in the length of the time series, D-BoTSW slightly outperforms PROP, which relies on outputs from several similarity measures with quadratic time complexity. In~\figurename~\ref{fig:errorrate_baselines}, it is striking that D-BoTSW not only improves the classification, but can improve it considerably. The error rate on the \emph{Shapelet Sim} dataset drops from 0.461 (EDNN) and 0.35 (DTWNN) to 0 (D-BoTSW), for example. Pairwise comparisons of methods show that all observed differences between D-BoTSW and state-of-the-art methods are statistically significant, except for PROP. Error rates (ER) obtained with D-BoTSW are reported in~\tablename~\ref{tab:errorrate}, together with baseline scores publicly available at~\cite{ucr}. This set of experiments, conducted on a wide variety of time series datasets, shows that D-BoTSW significantly outperforms most state-of-the-art methods.
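The Win/Tie/Lose scores used throughout these comparisons are simple paired counts over datasets; a minimal sketch of this bookkeeping (the error-rate lists in the test are illustrative, not the published UCR values):

```python
def win_tie_lose(errors_a, errors_b):
    # Count datasets on which method A beats, ties with, or loses to
    # method B; the lower error rate wins the pairwise comparison.
    win = sum(1 for a, b in zip(errors_a, errors_b) if a < b)
    lose = sum(1 for a, b in zip(errors_a, errors_b) if a > b)
    return win, len(errors_a) - win - lose, lose
```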
\section{Conclusion} \label{sec:ccl} In this paper, we presented the D-BoTSW technique, which transforms time series into histograms of quantized local features. The association of SIFT keypoints and Bag-of-Words has been widely used and is considered a standard technique in the image domain; however, it had never been investigated for time series classification. We carried out extensive experiments and showed that dense keypoint extraction and SSR normalization of Bag-of-Words lead to the best performance for our method. We compared the results with standard techniques for time series classification: D-BoTSW has comparable performance to PROP with lower time complexity and significantly outperforms all other techniques. We believe that classification performance could be further improved by taking more temporal information into account and by reducing the impact of quantization losses in our representation. Indeed, only local temporal information is embedded in our model and the global structure of time series is ignored. Moreover, more detailed global representations for sets of features than the standard BoW have been proposed in the computer vision community~\cite{jegou2010aggregating,perronnin2007fisher}, and such global features could be used in our framework. \section*{Acknowledgments} This work has been partly funded by ANR project ASTERIX (ANR-13-JS02-0005-01), R{\'e}gion Bretagne and CNES-TOSCA project VEGIDAR. \end{document}
\begin{document} \title{Worst-case Quantum Hypothesis Testing with Separable Measurements} \author{Le Phuc Thinh} \email{[email protected]} \affiliation{Centre for Quantum Technologies, National University of Singapore, Singapore} \affiliation{Institut f{\"u}r Theoretische Physik, Leibniz Universit{\"a}t Hannover, Appelstr. 2, 30167 Hannover, Germany} \author{Michele Dall'Arno} \email{[email protected]} \affiliation{Centre for Quantum Technologies, National University of Singapore, Singapore} \affiliation{Yukawa Institute for Theoretical Physics, Kyoto University, Kitashirakawa Oiwakecho, Sakyoku, Kyoto 606-8502, Japan} \affiliation{Faculty of Education and Integrated Arts and Sciences, Waseda University, 1-6-1 Nishiwaseda, Shinjuku-ku, Tokyo 169-8050, Japan} \author{Valerio Scarani} \email{[email protected]} \affiliation{Centre for Quantum Technologies, National University of Singapore, Singapore} \affiliation{Department of Physics, National University of Singapore, Singapore} \begin{abstract} For any pair of quantum states (the hypotheses), the task of binary quantum hypothesis testing is to derive the tradeoff relation between the probability $p_{01}$ of rejecting the null hypothesis and the probability $p_{10}$ of accepting the null hypothesis when the alternative is true. The case when both hypotheses are explicitly given was solved in the pioneering work by Helstrom. Here, instead, for any null hypothesis given as a pure state, we consider the worst-case alternative hypothesis that maximizes $p_{10}$ under a constraint on the distinguishability of the two hypotheses. Additionally, we restrict the optimization to separable measurements, in order to describe tests that are performed locally. The case $p_{01}=0$ has been recently studied under the name of ``quantum state verification''. We show that the problem can be cast as a semi-definite program (SDP). Then we study in detail the two-qubit case. A comprehensive study in parameter space is done by solving the SDP numerically.
We also obtain analytical solutions in the case of commuting hypotheses, and in the case where the two hypotheses can be orthogonal (in the latter case, we prove that the restriction to separable measurements generically prevents perfect distinguishability). In regards to quantum state verification, our work shows the existence of more efficient strategies for noisy measurement scenarios. \end{abstract} \rightline{YITP-20-101} \maketitle \section{\label{sec:intro}Introduction} The task of quantum hypothesis testing~\cite{helstrom1969quantum}, a subfield of quantum state estimation~\cite{Hradil97, Paris_book, Holevo_book}, is to optimally identify, according to some given payoff function, an unknown quantum state given as a black box. The strategy consists of performing a quantum measurement, whose optimality depends upon the payoff function and the prior information about the state, available in the form of a probability distribution over the state space. Several discrimination problems~\cite{discriminate_unitary, discriminate_channel, quantum_reading, BDD11, DBDMJD12, qillumination, quantumstatediscrimination, discriminate_mixedstate, discriminate_errormargin, maxconfidence} are based on quantum hypothesis testing. In the simplest non-trivial instance of the problem, the prior distribution has support over two states only, the \textit{null} and the \textit{alternative} hypotheses. Hence, binary quantum hypothesis testing corresponds to the derivation of the tradeoff relation between two error probabilities: (I) the probability of rejecting the null hypothesis when it is true, and (II) the probability of accepting the null hypothesis when the alternative is true. The case when such hypotheses are given explicitly was solved analytically by Helstrom~\cite{helstrom1969quantum}. Here, instead, we consider the case in which only one state (the null hypothesis) is explicitly given. For the other state (the alternative hypothesis), we consider a constrained worst-case scenario.
The worst-case alternative hypothesis is the one that maximizes the type-II error probability, under a given lower bound on the distinguishability of the two hypotheses. Additionally, we consider multipartite hypotheses, and we restrict the optimization to separable measurements only. This setup generalizes the so-called ``quantum state verification''~\cite{qv, qv_bipartite, qv_dicke, qv_2qubit, qv_correspondence, qv_noniid, qv_pure}, by relaxing the assumption that the type-I error probability is zero. Our first contribution is a formulation of the aforementioned problem as a semi-definite program, which can be efficiently solved with readily available numerical tools. Then, we specialize to the case in which the hypotheses are two-qubit states, and we derive an analytical form for the worst-case hypothesis. Finally, we analytically derive the tradeoff relation between type-I and type-II error probabilities in the case when the hypotheses commute. In regards to quantum state verification, our work shows the existence of more efficient strategies in certain parameter regimes for noisy measurement scenarios. The structure of this paper is as follows. In Section~\ref{sec:hypothesis_testing} we recall the general problem of quantum hypothesis testing and introduce the specific problem addressed here. In Section~\ref{sec:sdp} we reformulate our problem as a semi-definite program, and we analytically derive the worst-case alternative hypothesis in the two-qubit case. In Section~\ref{sec:results} we analytically derive the tradeoff relation between type-I and type-II error probabilities for commuting hypotheses. Section~\ref{sec:conclusion} summarizes our results. \section{\label{sec:hypothesis_testing}Hypothesis testing of quantum states} The simplest scenario of hypothesis testing is binary {\em quantum state discrimination} between $\rho_0$ and $\rho_1$.
In other words, one is asked to decide which is more likely between two hypotheses: $H_0$ --- the {\em null hypothesis} --- representing the fact that the unknown state is $\rho_0$, and $H_1$ --- the {\em alternative hypothesis} --- corresponding to the unknown state being $\rho_1$. The decision process can be formalized by a POVM $\{\Omega,\openone-\Omega\}$, where the element $\Omega$ accepts $H_0$ and $\openone-\Omega$ accepts $H_1$. This naturally gives rise to two errors: type I, or {\em false positive}, $p_{01}=\tr(\rho_0(\openone-\Omega))$, and type II, or {\em false negative}, $p_{10}=\tr(\rho_1\Omega)$. The false positive probability captures the situation in which the decision process accepts $H_1$ when hypothesis $H_0$ is true. The false negative probability corresponds to the opposite situation, in which one accepts $H_0$ when $H_1$ is true. Therefore, in this language, the problem is to design an {\em optimal measurement} $\Omega$ that optimizes a certain figure of merit. For example, the Helstrom strategy minimizes the average probability of error $p_0p_{01}+p_1p_{10}$, where $p_0$ is the {\em a priori} probability of occurrence of hypothesis $H_0$, and similarly for $p_1$. Several problems in quantum information such as quantum channel coding~\cite{channelcoding} and quantum illumination~\cite{qillumination} can be seen as hypothesis testing problems by assigning appropriate sets to the hypotheses and choosing appropriate figures of merit (see e.g.~\cite{hayashi_2006,hayashi_2009}). Here we look at the task that has been called {\em quantum state verification}~\cite{qv}. In this task, $H_0$ is the state $\ketbra{\psi}$, and $H_1$ is the set of states $\sigma$ such that $\bra{\psi}\sigma\ket{\psi}\leq1-\epsilon$. Previous works \cite{qv, qv_bipartite, qv_dicke, qv_2qubit, qv_correspondence, qv_noniid, qv_pure} considered strategies that have no false positive, i.e.
$p_{01}=0$, and set out to minimize the worst-case probability of false negative $$ p_{10}(\epsilon):=\min_{\substack{0\preceq\Omega\preceq\openone \\ \bra{\psi}\Omega\ket{\psi}=1\\ \Omega\in\text{[set]}}}\max_{\substack{\sigma\succeq0 \\ \tr(\sigma)=1 \\ \bra{\psi}\sigma\ket{\psi}\leq1-\epsilon}}\tr(\Omega\sigma)\,. $$ The set to which $\Omega$ belongs can be that of all effects, or a restricted one. When dealing with composite systems, a particularly relevant set is the set $\text{SEP}$ of \textit{separable} measurements: they are easier to characterize mathematically than LOCC or other classes of local measurements, while at the same time providing a bound on the performance of those classes. In this work, we shall focus on this class and leave possible extensions to future work. Here we relax the condition $p_{01}=0$ to $p_{01}\leq\delta$, leading to the optimisation \begin{align}\label{eq:main_opt_sep} p_{10}(\delta,\epsilon):=\min_{\substack{0\preceq\Omega\preceq\openone \\ \bra{\psi}\Omega\ket{\psi}\geq1-\delta \\ \Omega\in\text{SEP}}}\max_{\substack{\sigma\succeq0 \\ \tr(\sigma)=1 \\ \bra{\psi}\sigma\ket{\psi}\leq1-\epsilon}}\tr(\Omega\sigma)\,. \end{align} This generalisation is relevant, as it allows the study of the \textit{tradeoff} between $\delta$ and $p_{10}(\delta,\epsilon)$. From the technical point of view, this study does not constitute a straightforward extension of previously employed mathematical tools, for the following reason. The condition $\bra{\psi}\Omega\ket{\psi}=1$ forces $\Omega$ to commute with $\ketbra{\psi}$, which significantly simplifies the number of parameters and the structure of the problem. When that condition is relaxed to $\bra{\psi}\Omega\ket{\psi}\geq 1-\delta$, commutativity can no longer be assumed \textit{a priori} (and we shall show that, for some values of $\delta$ and the other parameters, the optimal strategy is indeed \textit{not} the commuting one).
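Before reformulating the problem, it may help to make the unrestricted single-system case concrete. The sketch below (pure Python, restricted to real symmetric $2\times2$ matrices; the states used in the test are arbitrary examples, not taken from this paper) computes the two error probabilities for a given test $\Omega$, together with the Helstrom bound $(1-\|p_0\rho_0-p_1\rho_1\|_1)/2$ on the average error:

```python
import math

def sym2_eigvals(m):
    # Eigenvalues of a real symmetric 2x2 matrix [[a, b], [b, d]].
    (a, b), (_, d) = m
    disc = math.sqrt((a - d) ** 2 + 4 * b * b)
    return (a + d - disc) / 2, (a + d + disc) / 2

def error_probs(rho0, rho1, omega):
    # Type-I error p01 = tr(rho0 (1 - Omega)); type-II error p10 = tr(rho1 Omega).
    tr = lambda r, o: sum(r[i][j] * o[j][i] for i in range(2) for j in range(2))
    p01 = 1.0 - tr(rho0, omega)   # uses tr(rho0) = 1 for a state
    p10 = tr(rho1, omega)
    return p01, p10

def helstrom_error(rho0, rho1, p0=0.5):
    # Minimum average error p0*p01 + p1*p10 = (1 - ||p0 rho0 - p1 rho1||_1) / 2,
    # with the trace norm obtained from the eigenvalues.
    p1 = 1.0 - p0
    m = [[p0 * rho0[i][j] - p1 * rho1[i][j] for j in range(2)] for i in range(2)]
    return (1.0 - sum(abs(l) for l in sym2_eigvals(m))) / 2
```

For $\rho_0=\ket{0}\bra{0}$ and $\rho_1=\ket{+}\bra{+}$ with equal priors, this reproduces the familiar value $(1-1/\sqrt{2})/2\approx0.146$.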
\section{\label{sec:sdp}Reformulations of the optimisation} In this section, we first show that the optimisation \eqref{eq:main_opt_sep} for separable measurements can be cast as a semidefinite program (SDP), which allows for reliable numerical solutions. Then, for the case of two-qubit states, we solve the inner problem, thus casting the optimisation in a form which will allow deriving some analytical results in Section \ref{sec:results}. \subsection{Reformulation as an SDP} The problem we are considering is at first sight a min-max problem involving two variables $\Omega,\sigma$ that appear bilinearly in the objective function. Although the problem is an SDP in each variable separately, and can thus be reliably solved to any precision for either variable fixed, alternating numerical optimization offers no guarantee of optimality for the outer problem. A closer analysis shows that one can in fact use the duality theory of semidefinite programming to reformulate the problem. We refer the reader to the classic book~\cite{boyd2004convex} for more information on duality in optimization. \begin{lem} The optimisation \eqref{eq:main_opt_sep} can be reformulated as the semidefinite program \begin{equation}\label{eq:main_opt_sep_sdp} \begin{aligned} p_{10}(\delta,\epsilon)=&\min_{\Omega,y_1,y_2} y_1+(1-\epsilon)y_2 \\ &\text{ s. t. } 0\preceq\Omega\preceq\openone\\ &\phantom{\text{ s. t. }} \bra{\psi}\Omega\ket{\psi}\geq1-\delta\\ &\phantom{\text{ s. t. }} \Omega\in\textup{SEP}\\ &\phantom{\text{ s. t. }} y_1\openone+y_2\ketbra{\psi}\succeq\Omega \\ &\phantom{\text{ s. t.
}} y_1\in\mathbb{R}, y_2\geq0 \\ \end{aligned} \end{equation} \end{lem} \begin{proof} The constraints on $\Omega$ (outer optimisation) remain the same, while we replace the inner optimisation $$\max\{\tr(\Omega\sigma):\sigma\succeq0,\tr\sigma=1,\bra{\psi}\sigma\ket{\psi}\leq1-\epsilon\}$$ by its dual, which is the semidefinite program $$\min\{y_1+(1-\epsilon)y_2:y_1\openone+y_2\ketbra{\psi}\succeq\Omega^\dagger,y_2\geq0\}\,.$$ Moreover, strong duality holds because the primal is feasible and, thanks to $\Omega^\dagger=\Omega$, the dual is strictly feasible (choose $y_2>0$ such that $y_1\openone-\Omega+y_2\ketbra{\psi}\succ0$). This means that the primal and dual optima coincide, and that the primal optimum is attained. Hence, our minimax problem becomes~\eqref{eq:main_opt_sep_sdp}. Note that separability is an SDP constraint, albeit exponential in size~\cite{separable_SDP}, and not just a hierarchy of SDP constraints~\cite{DPS_hierarchy}. \end{proof} \subsection{Two-qubit states} For two-qubit states $\ket{\psi}=\cos\theta\ket{00}+\sin\theta\ket{11}$, we can proceed with additional analytic derivations. Without loss of generality, we consider the regime of parameters where $\theta\in[0,\pi/4]$, $\epsilon\in(0,1]$ and $\delta\in[0,1]$. \begin{lem}\label{lemma2} For two-qubit pure states $\ket{\psi}=\cos\theta\ket{00}+\sin\theta\ket{11}$, the optimisation~\eqref{eq:main_opt_sep_sdp} reduces to an optimisation over real variables \begin{equation}\label{eq:optlemma2} \begin{aligned} p_{10}(\delta,\epsilon)=&\min_{t,z,x,\omega,y_1,y_2} y_1+(1-\epsilon)y_2 \\ &\text{ s. t. } 0\preceq\begin{pmatrix} t+z & x\\ x & t-z \end{pmatrix} \preceq\openone\\ &\phantom{\text{ s. t. }} 0\leq\omega\leq1\\ &\phantom{\text{ s. t. }} t+z\geq1-\delta\\ &\phantom{\text{ s. t. }} \omega\geq\abs{x\cos2\theta+z\sin2\theta}\\ &\phantom{\text{ s. t. }} \begin{pmatrix} y_1+y_2-(t+z) & -x\\ -x & y_1-(t-z) \end{pmatrix}\succeq0 \\ &\phantom{\text{ s. t. }} y_1-\omega\geq0 \\ &\phantom{\text{ s.
t. }} y_1\in\mathbb{R}, y_2\geq0 \\ \end{aligned} \end{equation} \end{lem} \begin{proof} We first exploit the symmetry present in the state. Define \begin{align} \Omega_a:=\frac{1}{2\pi}\int_0^{2\pi}(U_{\phi} \otimes U_{-\phi})\Omega(U_{\phi} \otimes U_{-\phi})^\dagger \diff\phi \end{align} with $U_{\phi}=\ketbra{0}+e^{i\phi}\ketbra{1}$. For any feasible $(\Omega,y)$, the pair $(\Omega_a,y)$ remains feasible with the same value of the objective function. Moreover, the state is also invariant under the swap $S$ of the two qubits, so that $(\bar{\Omega}_a,y)$ with $\bar{\Omega}_a:=(\Omega_a+S\Omega_aS^\dagger)/2$ is feasible as well. Lastly, $\bar{\Omega}_a$ can be taken to be real symmetric because the feasible region is preserved under taking the entrywise complex conjugate, and the objective value is unchanged. We note that this same symmetrisation was carried out in \cite{qv_2qubit} on the primal inner problem, thanks to the assumption that $\Omega$ commutes with $\ket{\psi}\bra{\psi}$. It is by looking at the dual that we noticed that the symmetry is independent of the commutation assumption. This observation reduces the number of variables in our optimisation. Specifically, letting $\ket{\psi^\perp}=-\sin\theta\ket{00}+\cos\theta\ket{11}$, it suffices to optimize over real symmetric matrices \begin{align} \bar{\Omega}_a = \begin{pmatrix} t+z & x & 0 & 0\\ x & t-z & 0 & 0\\ 0 & 0 & \omega & 0\\ 0 & 0 & 0 & \omega \end{pmatrix} \end{align} in the ordered basis $\{\ket{\psi},\ket{\psi^\perp},\ket{01},\ket{10}\}$. Writing out the separability constraint, which for qubits is equivalent to positivity of the partial transpose, we arrive at the final form given by the Lemma. \end{proof} Remarkably, what was the inner optimisation (now the optimisation over $y_1$ and $y_2$) can be further solved analytically; besides, one can set $t=1-\delta-z$ and $\omega=\abs{x\cos2\theta+z\sin2\theta}$ without loss of generality. The lengthy proof of these steps is presented in Appendix \ref{sslengthy}.
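To make the separability constraint of Lemma \ref{lemma2} concrete, the sketch below (pure Python; the parameter values in the test are arbitrary) rebuilds $\bar{\Omega}_a$ in the computational basis and checks positivity of its partial transpose, which for this block structure reduces to the scalar condition $\omega\geq\abs{x\cos2\theta+z\sin2\theta}$ together with positivity of the diagonal block:

```python
import math

def omega_bar(t, z, x, w, theta):
    # \bar\Omega_a of Lemma 2, rebuilt in the computational basis
    # |00>, |01>, |10>, |11>, with |psi> = cos(theta)|00> + sin(theta)|11>
    # and |psi_perp> = -sin(theta)|00> + cos(theta)|11>.
    c, s = math.cos(theta), math.sin(theta)
    psi, perp = [c, 0.0, 0.0, s], [-s, 0.0, 0.0, c]
    m = [[w if (i == j and i in (1, 2)) else 0.0 for j in range(4)]
         for i in range(4)]
    for i in (0, 3):        # psi and psi_perp live on span{|00>, |11>}
        for j in (0, 3):
            m[i][j] += ((t + z) * psi[i] * psi[j] + (t - z) * perp[i] * perp[j]
                        + x * (psi[i] * perp[j] + perp[i] * psi[j]))
    return m

def min_eig_sym2(a, b, d):
    # Smaller eigenvalue of the real symmetric 2x2 matrix [[a, b], [b, d]].
    return (a + d - math.sqrt((a - d) ** 2 + 4 * b * b)) / 2

def ppt_min_eig(m):
    # Partial transpose on the second qubit swaps the entries (00,11) and
    # (01,10); the transposed matrix is block diagonal with 2x2 blocks on
    # {|00>,|11>} and {|01>,|10>}, so its spectrum comes from two 2x2 problems.
    return min(min_eig_sym2(m[0][0], m[1][2], m[3][3]),
               min_eig_sym2(m[1][1], m[0][3], m[2][2]))
```

Since $\bra{00}\bar{\Omega}_a\ket{11}=x\cos2\theta+z\sin2\theta$, the sign of $\mathrm{ppt\_min\_eig}$ flips exactly where $\omega$ crosses $\abs{x\cos2\theta+z\sin2\theta}$, reproducing the constraint in \eqref{eq:optlemma2}.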
The furthest version of the optimisation that we can reach analytically reads: \begin{lem}\label{lemmafinal} For two-qubit pure states $\ket{\psi}=\cos\theta\ket{00}+\sin\theta\ket{11}$, the optimisation~\eqref{eq:main_opt_sep_sdp} reduces to \begin{equation}\label{eq:final} \begin{aligned} p_{10}(\delta,\epsilon)=&\min_{z,x} f(\abs{x\cos2\theta+z\sin2\theta}) \\ &\text{ s. t. } 0\preceq\begin{pmatrix} 1-\delta & x\\ x & 1-\delta-2z \end{pmatrix} \preceq\openone\\ &\phantom{\text{ s. t. }} 1-\delta-2z+\sqrt{\frac{1-\epsilon}{\epsilon}}\abs{x}\leq\abs{x\cos2\theta+z\sin2\theta} \end{aligned} \end{equation} where \begin{equation*} f(y_1^*):=y_1^*+(1-\epsilon)\left[1-\delta-y_1^*+\frac{x^2}{y_1^*-(1-\delta-2z)}\right]\,. \end{equation*} \end{lem} \section{Results} \label{sec:results} For our results, we continue to focus on the case of two qubits, although we recall that the SDP \eqref{eq:main_opt_sep_sdp} is valid in general and one could therefore set out to solve it in any other case. \subsection{Commuting strategy} As we mentioned earlier, when $\delta=0$, which is the case considered in~\cite{qv,qv_2qubit}, the condition $\bra{\psi}\Omega\ket{\psi}=1$ immediately implies that $\Omega$ is diagonal in the same basis as $\ket{\psi}\bra{\psi}$, that is, $x=0$ in our notation. When $\delta\neq 0$, there is no \textit{a priori} guarantee that the optimal solution will be a commuting one; but we can obtain an upper bound $p^c_{10}(\delta,\epsilon)$ by \textit{enforcing} $x=0$. In this case, the optimisation \eqref{eq:final} becomes trivial: \begin{equation*} \begin{aligned} p^c_{10}(\delta,\epsilon)=&\min_{z} z\epsilon \sin2\theta +(1-\epsilon)(1-\delta)\\ &\text{ s. t. } z\geq \frac{1-\delta}{2+\sin2\theta}\,, \end{aligned} \end{equation*} that is \begin{equation} p^c_{10}(\delta,\epsilon)=(1-\delta)\left[1-\frac{\epsilon}{1+\sin\theta\cos\theta}\right]\,.
\end{equation} This result could have been derived at an earlier stage than Lemma \ref{lemmafinal} (see Lemma \ref{lemmacomm} in the Appendix). In fact, it can also be derived without any reliance on the SDP formulation, by adapting the steps made in Ref.~\cite{qv_2qubit} to the case $\delta\neq 0$. \subsection{Analytical solution for $\epsilon=1$} Next, we present the analytical solution of \eqref{eq:final} for the special case $\epsilon=1$. The optimisation now reads \begin{equation}\label{eq:finaleps1} \begin{aligned} p_{10}(\delta,1)=&\min_{z,x} \abs{x\cos2\theta+z\sin2\theta} \\ &\text{ s. t. } 1-\delta-z-\sqrt{x^2+z^2}\geq 0\\ &\phantom{\text{ s. t. }}1-\delta-z+\sqrt{x^2+z^2}\leq 1\\ &\phantom{\text{ s. t. }} 1-\delta-2z\leq\abs{x\cos2\theta+z\sin2\theta}\,, \end{aligned} \end{equation} where we have spelled out the two matrix constraints in \eqref{eq:final}. Even in this simple case, the analysis is lengthy, though without intrinsic difficulties. First we notice that for the maximally entangled state ($\cos 2\theta=0$, $\sin2\theta=1$) the figure of merit is simply $z$, and the last constraint is $z\geq \frac{1-\delta}{3}$. The two quadratic constraints are both feasible for $z= \frac{1-\delta}{3}$, for a variety of values of $x$ including $x=0$. Thus, for $\theta=\frac{\pi}{4}$ we find $p_{10}(\delta,1)=p^c_{10}(\delta,1)=\frac{1-\delta}{3}$; both the commuting strategy and several non-commuting ones achieve this bound. For $\cos 2\theta>0$, the solution is unique and can be inferred by studying the feasible region and the figure of merit graphically in the $(x,z)$ plane (Appendix \ref{appb}).
The end result is: \begin{equation}\label{eq:soleps1} p_{10}(\delta,1)=p^c_{10}(\delta,1)\,+\,x^*\,\frac{2\cos 2\theta}{2+\sin 2\theta} \end{equation} where $x^*=\max{(x_0,x_1)}$ is the optimal value of $x$, determined by \begin{equation}\label{x0x1} \begin{aligned} x_0&=(1-\delta)\,\left(\frac{\cos 2\theta-\sqrt{1+2\sin 2\theta}}{2+\sin 2\theta}\right)\,,\\ x_1&=-\frac{\delta\cos 2\theta+\sqrt{\delta^2(1+2\sin 2\theta)+2\delta(2+\sin 2\theta)}}{2+\sin 2\theta}\,. \end{aligned} \end{equation} Since both $x_0$ and $x_1$ are non-positive, $p_{10}(\delta,1)\leq p^c_{10}(\delta,1)$ as expected. Notice that \eqref{eq:soleps1} also captures the case $\theta=\frac{\pi}{4}$ (although $x^*$ is not unique in that case). Besides, $x_0=0$ holds only for $\cos 2\theta=1$, i.e. for the product state, and $x_1=0$ holds only for $\delta=0$. In summary, for $\epsilon=1$, $0<\delta<1$, and $0<\theta<\frac{\pi}{4}$, the optimal strategy is \textit{not} the commuting one. \begin{figure} \caption{Numerical solutions of the SDP \eqref{eq:main_opt_sep_sdp} in parameter space.} \label{fig:numerics} \end{figure} Since we have set $\epsilon=1$, which means that $\sigma$ can be orthogonal to $\ket{\psi}$, it is natural to also check when $p_{10}(\delta,1)=0$, which would obviously be the case if we had not added the constraint that $\Omega$ must be separable. By inspection, we see that this happens only for $\theta=0$, i.e. when the state itself is a product state (in which case the task is trivial: one can find an orthogonal product state and check the orthogonality locally). \subsection{Numerical solutions of the SDP} We have seen that, even in the case $\epsilon=1$, when the figure of merit is at its simplest, the analytical solution requires some work and yields a not-so-transparent result. In view of this, for arbitrary values of $\epsilon$ we leave aside any attempt at solving the optimisation \eqref{eq:final} analytically, and resort instead to numerical solutions of the SDP \eqref{eq:main_opt_sep_sdp}. The results are presented in Fig.~\ref{fig:numerics}.
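As a lightweight complement to the SDP numerics, the closed forms derived above can be sanity-checked by brute force (a pure-Python sketch; the grid resolutions and the test point $\delta=0.2$ are arbitrary choices, not values from this paper):

```python
import math

def p10_commuting(delta, eps, theta):
    # Closed form (1 - delta) * [1 - eps / (1 + sin(theta) cos(theta))].
    return (1 - delta) * (1 - eps / (1 + math.sin(theta) * math.cos(theta)))

def p10_commuting_scan(delta, eps, theta, steps=2001):
    # Direct minimisation of  z * eps * sin(2 theta) + (1 - eps)(1 - delta)
    # over z >= (1 - delta) / (2 + sin(2 theta)); the objective is linear
    # and increasing in z, so the scan confirms the minimum at the bound.
    s2 = math.sin(2 * theta)
    zmin = (1 - delta) / (2 + s2)
    return min(z * eps * s2 + (1 - eps) * (1 - delta)
               for z in (zmin + i * (1 - zmin) / (steps - 1)
                         for i in range(steps)))

def p10_eps1_grid(delta, theta, n=401):
    # Brute-force grid search over the reduced eps = 1 problem.
    c2, s2 = math.cos(2 * theta), math.sin(2 * theta)
    best = float("inf")
    for i in range(n):
        z = -1 + 2 * i / (n - 1)
        for j in range(n):
            x = -1 + 2 * j / (n - 1)
            r = math.hypot(x, z)
            obj = abs(x * c2 + z * s2)
            if (1 - delta - z - r >= -1e-12 and 1 - delta - z + r <= 1 + 1e-12
                    and 1 - delta - 2 * z <= obj + 1e-12):
                best = min(best, obj)
    return best
```

For the maximally entangled state ($\theta=\pi/4$) the grid search reproduces $p_{10}(\delta,1)=(1-\delta)/3$ up to the grid resolution, and the scan matches the commuting closed form.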
We see that, in a large portion of parameter space, a commuting strategy is very close to being optimal (if not exactly so). A significant difference is seen only for $\epsilon\gtrsim 0.8$, that is, when the state $\sigma$ is allowed to be almost orthogonal to $\ket{\psi}$. \section{\label{sec:conclusion}Conclusions} We have worked in the quantum hypothesis testing scenario that has been called ``quantum state verification'', in which the null hypothesis is a pure state $\ket{\psi}$, while the alternative hypothesis may be any state $\sigma$ that is ``distinguishable enough'' from $\ket{\psi}$ (quantified by $\langle \psi|\sigma|\psi\rangle \leq 1-\epsilon$). Like previous works, we focused on entangled states shared by two distant players, and studied hypothesis testing under separable operations. We studied the tradeoff between the probability of false negative and that of false positive (the latter had been set to zero in previous studies, which amounts to assuming that the optimal POVM for the discrimination is implemented perfectly). The bilinear nature of the resulting optimization is overcome by reformulating the problem as an SDP. Then we presented the detailed solution for the case of two qubits, including analytical results for some extreme cases. We showed that, in general, the solution is a non-trivial modification of previous constructions: in particular, the optimal POVM may not commute with the closest state $\sigma$. \acknowledgements{We thank Masahito Hayashi for discussions about our modifications of the hypotheses, and more generally the connection between quantum verification and hypothesis testing. This work is supported by the National Research Foundation and the Ministry of Education, Singapore, under the Research Centres of Excellence programme. L.P.T acknowledges support from the Alexander von Humboldt Foundation. M. D. acknowledges support from MEXT Quantum Leap Flagship Program (MEXT Q-LEAP) Grant No.
JPMXS0118067285, JSPS KAKENHI Grant Number JP20K03774, and the International Research Unit of Quantum Information, Kyoto University.} \begin{appendix} \section{Solution of the inner optimisation} \label{sslengthy} In this Appendix, we show how to go from Lemma \ref{lemma2} to Lemma \ref{lemmafinal}, while proving a few other intermediate results. We begin by simplifying the inequality constraints on $t+z$ and $\omega$. \begin{lem} Without loss of generality, the optimisation \eqref{eq:optlemma2} in Lemma \ref{lemma2} becomes \begin{equation} \begin{aligned} p_{10}(\delta,\epsilon)=&\min_{t,z,x,y_1,y_2} y_1+(1-\epsilon)y_2 \\ &\text{ s. t. } 0\preceq\begin{pmatrix} t+z & x\\ x & t-z \end{pmatrix} \preceq\openone\\ &\phantom{\text{ s. t. }} t+z = 1-\delta\\ &\phantom{\text{ s. t. }} \begin{pmatrix} y_1+y_2-(t+z) & -x\\ -x & y_1-(t-z) \end{pmatrix}\succeq0 \\ &\phantom{\text{ s. t. }} y_1\geq\omega:=\abs{x\cos2\theta+z\sin2\theta} \\ &\phantom{\text{ s. t. }} y_1\in\mathbb{R}, y_2\geq0 \\ \end{aligned} \end{equation} \end{lem} \begin{proof} With the notation introduced in the proof of Lemma \ref{lemma2}, for any feasible $(\bar{\Omega}_a,y_1,y_2)$ with $\bra{\psi}\bar{\Omega}_a\ket{\psi}>1-\delta\geq0$ for $\delta\in[0,1]$, the triple $(\bar{\Omega}_a',y_1,y_2)$ with $$\bar{\Omega}_a' = \frac{(1-\delta)}{\bra{\psi}\bar{\Omega}_a\ket{\psi}}\bar{\Omega}_a$$ is also feasible, satisfies $\bra{\psi}\bar{\Omega}_a'\ket{\psi}=1-\delta$, and achieves the same objective value; hence we can without loss of generality assume that $t+z=1-\delta$. It is clear that by reducing $\omega$ we increase the size of the feasible region of the inner optimisation over the variables $y_1,y_2$. Therefore, since $\abs{x\cos2\theta+z\sin2\theta}\leq1$ follows from the other constraints, we have that $0\leq\abs{x\cos2\theta+z\sin2\theta}\leq\omega\leq1$, which means it suffices to take $\omega$ equal to the lower bound. \end{proof} We now solve the inner optimisation, that is, the optimisation over $y_1$ and $y_2$.
It is natural to split our consideration into the commuting strategy $x=0$ and the non-commuting strategy $x\neq0$, as the commuting case is a simpler linear programming problem. \begin{lem}\label{lem:inner_commuting} For the commuting strategy $x=0$, the solution of the inner optimisation is \begin{equation} y_1^* + (1-\epsilon)\max\{0,t+z-y_1^*\} \end{equation} where $y_1^*:=\max\{t-z,\abs{z\sin2\theta}\}$. \end{lem} \begin{proof} By Sylvester's criterion for positive semidefiniteness, the inner minimization becomes \begin{align*} &\min_{y_1,y_2} y_1+(1-\epsilon)y_2 \\ &\text{ s. t. } y_1+y_2-(t+z)\geq0 \\ &\phantom{\text{ s. t. }} y_1-(t-z)\geq0 \\ &\phantom{\text{ s. t. }} (y_1+y_2-(t+z))(y_1-(t-z))\geq x^2 \\ &\phantom{\text{ s. t. }} y_1\geq\abs{x\cos2\theta+z\sin2\theta}, y_2\geq0 \end{align*} When $x=0$, the quadratic constraint trivially follows from the inequality constraints, so we can drop it and the optimisation is linear. The constraints are \begin{align*} y_1&\geq\max\{t-z,\abs{x\cos2\theta+z\sin2\theta}\} \\ y_2&\geq\max\{0,(t+z)-y_1\} \end{align*} so that the minimum is reached at the lower bounds. \end{proof} \begin{lem}\label{lemmacomm} For the commuting strategy $x=0$, the optimal error probability is given by \begin{equation}\label{commapp} p_{10}(\delta,\epsilon)=(1-\delta)\left[1-\frac{\epsilon}{1+\sin\theta\cos\theta}\right] \end{equation} \end{lem} \begin{proof} Since $x=0$, we are left with the program \begin{align*} &\min_{z} y_1^* + (1-\epsilon)\max\{0,t+z-y_1^*\} \\ &\text{ s. t. } 0\leq t-z\leq1\\ &\phantom{\text{ s. t. }} t+z = 1-\delta\\ &\phantom{\text{ s. t. }} y_1^*:=\max\{t-z,\abs{z\sin2\theta}\} \end{align*} The objective function can be rewritten as $$\max\{y_1^*,(1-\epsilon)(1-\delta)+\epsilon y_1^*\}$$ from which we consider two cases. If $y_1^*\geq1-\delta$ then \begin{align*} &\min_{z} y_1^* \\ &\text{ s. t. } 0\leq 1-\delta-2z\leq1\\ &\phantom{\text{ s. t.
}} y_1^*:=\max\{1-\delta-2z,\abs{z\sin2\theta}\}\geq1-\delta \end{align*} Here $y_1^*\geq0$ always, so the minimum is at least $1-\delta$. If $y_1^*\leq1-\delta$ then \begin{align*} &\min_{z} (1-\epsilon)(1-\delta)+\epsilon y_1^* \\ &\text{ s. t. } 0\leq 1-\delta-2z\leq1\\ &\phantom{\text{ s. t. }} y_1^*:=\max\{1-\delta-2z,\abs{z\sin2\theta}\}\leq1-\delta \end{align*} It is straightforward to see that the minimum is achieved when $$0\leq1-\delta-2z=\abs{z\sin2\theta}\leq1-\delta$$ corresponding to an optimal solution $z^*=\frac{1-\delta}{2+\sin2\theta}$ with optimum value $$(1-\delta)\left[1-\frac{\epsilon}{1+\sin\theta\cos\theta}\right]\,.$$ Since the global minimum is the smaller value of these two cases, the proof of the Lemma is complete. \end{proof} We remark that the structure of the optimal verification operator among all commuting strategies is rather simple. Explicitly, we have \begin{align} \Omega^* = \begin{pmatrix} 1-\delta & 0 & 0 & 0\\ 0 & \omega^* & 0 & 0\\ 0 & 0 & \omega^* & 0\\ 0 & 0 & 0 & \omega^* \end{pmatrix}\,,\quad \omega^*=\frac{(1-\delta)\sin2\theta}{2+\sin2\theta}\,. \end{align} This can be seen as a generalization of the optimal commuting strategy that Wang and Hayashi found for the $\delta=0$ case~\cite{qv_2qubit} to the $\delta\in[0,1]$ case. We now consider the noncommuting case, which is no longer a linear optimisation problem. \begin{lem}\label{lem:inner_noncommuting} For the noncommuting strategy $x\neq0$, the solution of the inner optimisation is \begin{equation} y_1^*+(1-\epsilon)\left[(t+z)-y_1^*+\frac{x^2}{y_1^*-(t-z)}\right] \end{equation} with the value \begin{equation}\label{eq:y1opt_cases} y_1^* = \begin{cases} \omega & \text{if } \hat{y}_1 \leq \omega \\ \hat{y}_1 & \text{if } \omega < \hat{y}_1 < t+\sqrt{x^2+z^2} \\ t+\sqrt{x^2+z^2} & \text{if } \hat{y}_1 > t+\sqrt{x^2+z^2} \end{cases} \end{equation} for $\omega=\abs{x\cos2\theta+z\sin2\theta}$ and $\hat{y}_1=(t-z)+\sqrt{\frac{1-\epsilon}{\epsilon}}\abs{x}$.
\end{lem} \begin{proof} When $x\neq0$ (noncommuting strategy), the feasible region excludes the points $(y_1,y_2)$ where \begin{align*} y_1-(t-z) = 0, \text{ or } y_1+y_2-(t+z) = 0 \end{align*} and so the optimisation becomes \begin{align*} &\min_{y_1,y_2} y_1+(1-\epsilon)y_2 \\ &\text{ s. t. } y_1>t-z, y_1\geq\omega \\ &\phantom{\text{ s. t. }} y_2\geq\max\left\{0,(t+z)-y_1+\frac{x^2}{y_1-(t-z)}\right\} \end{align*} Here the optimisation splits into two branches. Firstly, consider the branch $$(t+z)-y_1+\frac{x^2}{y_1-(t-z)}\leq0$$ equivalently, under the condition $y_1>t-z$, $$((t-z)-y_1)((t+z)-y_1)-x^2\geq0$$ and explicitly in terms of the roots \begin{align*} y_1&\leq \lambda_{\min}:=t-\sqrt{x^2+z^2}, \text{ or}\\ y_1&\geq \lambda_{\max}:=t+\sqrt{x^2+z^2} \end{align*} But then $t-\sqrt{x^2+z^2} < t-z < t+\sqrt{x^2+z^2}$ implies that the feasible region is $y_1\geq\max\{\omega,\lambda_{\max}\}$ leading to the optimum value $y_1^*=\max\{\omega,\lambda_{\max}\}$ which is always at least $\lambda_{\max}$. Secondly, the remaining branch $$(t+z)-y_1+\frac{x^2}{y_1-(t-z)}\geq0\,,$$ which is equivalent to \begin{align*} t-\sqrt{x^2+z^2}=:\lambda_{\min} \leq y_1\leq \lambda_{\max}:=t+\sqrt{x^2+z^2} \end{align*} could be infeasible depending on $\omega$. However, whenever it is feasible, i.e. $\omega\leq\lambda_{\max}$, the minimum is upper bounded by the value of the objective function $$y_1+(1-\epsilon)\left[(t+z)-y_1+\frac{x^2}{y_1-(t-z)}\right]$$ at the feasible point $y_1=\lambda_{\max}$, where the bracket vanishes and the objective value is exactly $\lambda_{\max}$. Therefore, without loss of generality we consider this latter branch whenever it is feasible. The inner optimisation becomes \begin{align*} &\min_{y_1} y_1+(1-\epsilon)\left[(t+z)-y_1+\frac{x^2}{y_1-(t-z)}\right]\\ &\text{ s. t. 
} y_1>t-z, \omega\leq y_1\leq\lambda_{\max}, \omega\leq\lambda_{\max} \end{align*} from which it is clear that the minimum is reached at the stationary point $\hat{y}_1$, which is the largest solution of $$\epsilon(\hat{y}_1-(t-z))^2=(1-\epsilon)x^2$$ whenever this point is feasible, or else at the endpoints: at $\omega$ if $\hat{y}_1<\omega$ and at $\lambda_{\max}$ if $\hat{y}_1>\lambda_{\max}$. (Note that $x\neq0$ ensures $\hat{y}_1>t-z$ whenever it exists, so that the smallest solution is always infeasible.) \end{proof} Finally, we present the proof of Lemma \ref{lemmafinal}: \begin{proof} With $y_1^*$ given before, we have to solve \begin{equation} \begin{aligned} p_{10}(\delta,\epsilon)=&\min_{t,z,x} f(y_1^*) \\ &\text{ s. t. } 0\preceq\begin{pmatrix} t+z & x\\ x & t-z \end{pmatrix} \preceq\openone\\ &\phantom{\text{ s. t. }} t+z = 1-\delta \end{aligned} \end{equation} This becomes an optimisation over two real variables $z,x$ after eliminating $t$. To see the branch reduction, we consider feasible $(z,x)$ satisfying \begin{equation}\label{eq:feasible_zx} 0\preceq\begin{pmatrix} 1-\delta & x\\ x & 1-\delta-2z \end{pmatrix} \preceq\openone \end{equation} and show that the objective value (abusing notation, we redefine the function $f$ with the variable $t$ eliminated) \begin{equation*} f(y_1^*):=y_1^*+(1-\epsilon)\left[1-\delta-y_1^*+\frac{x^2}{y_1^*-(1-\delta-2z)}\right] \end{equation*} is smaller in the region {\bf I} defined by $$1-\delta-2z+\sqrt{\frac{1-\epsilon}{\epsilon}}\abs{x}\leq\abs{x\cos2\theta+z\sin2\theta}\,.$$ In the region {\bf III} defined by the inequality $$1-\delta-2z+\sqrt{\frac{1-\epsilon}{\epsilon}}\abs{x}\geq1-\delta-z+\sqrt{x^2+z^2}$$ the objective function takes the value \begin{align*} &f(1-\delta-z+\sqrt{x^2+z^2})\\ &=(1-\delta)+\epsilon(-z+\sqrt{x^2+z^2})+\frac{(1-\epsilon)x^2}{z+\sqrt{x^2+z^2}}\\ &=(1-\delta)+\epsilon(-z+\sqrt{x^2+z^2})-(1-\epsilon)(z-\sqrt{x^2+z^2})\\ &=1-\delta-z+\sqrt{x^2+z^2} \end{align*} which is a function of two independent variables $z,x$, and 
is increasing in terms of $\abs{x}$ for a fixed value of $z$. Likewise, in the region {\bf II} defined by the inequality \begin{align*} \abs{x\cos2\theta+z\sin2\theta} &\leq 1-\delta-2z+\sqrt{\frac{1-\epsilon}{\epsilon}}\abs{x}\\ &\leq 1-\delta-z+\sqrt{x^2+z^2} \end{align*} the objective function takes the value \begin{align*} &f\left(1-\delta-2z+\sqrt{\frac{1-\epsilon}{\epsilon}}\abs{x}\right)\\ &=(1-\epsilon)(1-\delta)+\epsilon\left(1-\delta-2z+\sqrt{\frac{1-\epsilon}{\epsilon}}\abs{x}\right)+\sqrt{\epsilon(1-\epsilon)}\abs{x}\\ &=(1-\delta)-2\epsilon z+2\sqrt{\epsilon(1-\epsilon)}\abs{x}\,, \end{align*} which is also increasing in $\abs{x}$. Moreover, at the boundary between adjacent regions, the objective functions agree. The argument now goes as follows: for each feasible $z$, we look at the set of feasible $x$ defined by~\eqref{eq:feasible_zx}. For any feasible $x_1,x_2$ in region {\bf III} (if they exist), since the objective function is increasing, the point with smaller $\abs{x_j}$ achieves a lower objective value. Hence for the minimization, it suffices to consider feasible $x$ on the boundary of region {\bf III}. Since this boundary is also contained in region {\bf II}, we have shown that without loss of generality it suffices to consider the feasible $x$ that belong to regions {\bf I} and {\bf II}. Now the argument can be repeated: of two points $x_3,x_4$ in region {\bf II}, the one with smaller $\abs{x_j}$ achieves a smaller objective value. This reduces the feasible region to region {\bf I} only. \end{proof} \section{The optimal solution for $\epsilon=1$} \label{appb} In this Appendix, we proceed to solve \eqref{eq:finaleps1}. 
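As a sanity check of Lemma~\ref{lem:inner_noncommuting}, on which the branch reduction above relies, the clipped stationary point $y_1^*$ can be compared against a brute-force one-dimensional search. The following Python sketch is ours, not part of the original argument; the parameter values in the tests below are arbitrary feasible choices.

```python
import math

def inner_min_grid(t, z, x, eps, omega, num=400001):
    """Brute-force the inner minimisation of
    y1 + (1-eps)*[(t+z) - y1 + x^2/(y1 - (t-z))]
    over the feasible interval (t-z, lam_max] with y1 >= omega (branch II above)."""
    lam_max = t + math.sqrt(x * x + z * z)
    lo = max(omega, t - z + 1e-9)  # y1 must stay strictly above t-z
    best = float("inf")
    for i in range(num):
        y1 = lo + (lam_max - lo) * i / (num - 1)
        best = min(best, y1 + (1 - eps) * ((t + z) - y1 + x * x / (y1 - (t - z))))
    return best

def inner_min_closed(t, z, x, eps, omega):
    """Closed form from the Lemma: clip the stationary point y1_hat to [omega, lam_max]."""
    lam_max = t + math.sqrt(x * x + z * z)
    y1_hat = (t - z) + math.sqrt((1 - eps) / eps) * abs(x)
    y1 = min(max(y1_hat, omega), lam_max)
    return y1 + (1 - eps) * ((t + z) - y1 + x * x / (y1 - (t - z)))
```

The three regimes of Eq.~\eqref{eq:y1opt_cases} (interior stationary point, clipping at $\omega$, clipping at $\lambda_{\max}$) can each be probed by varying $\epsilon$ and $\omega$.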
\begin{figure} \caption{Graphic determination of the solution of the optimisation \eqref{eq:finaleps1}.\label{fig4}} \end{figure} The feasible region is the intersection of three regions in the $(x,z)$ plane: \begin{itemize} \item The constraint $1-\delta-z-\sqrt{x^2+z^2}\geq 0$ defines the region \begin{align}\label{eq:P0}\mathbf{P_0}:&\;z\,\leq\,-\frac{x^2}{2(1-\delta)}+\frac{1-\delta}{2}\,,\end{align} upper-bounded by a parabola whose maximum is at $(x,z)=(0,\frac{1-\delta}{2})$. \item The constraint $1-\delta-z+\sqrt{x^2+z^2}\leq 1$ defines the region \begin{align}\label{eq:P1}\mathbf{P_1}:&\;z\,\geq\, \frac{x^2}{2\delta}-\frac{\delta}{2}\,,\end{align} lower-bounded by a parabola whose minimum is at $(x,z)=(0,-\frac{\delta}{2})$. \item The constraint $1-\delta-2z\leq\abs{x\cos2\theta+z\sin2\theta}$ defines the region \begin{align}\label{eq:K} \mathbf{K}:&\;\left\{\begin{array}{lcl} z\,\geq\, \frac{1-\delta+x\cos 2\theta}{2-\sin 2\theta}&\textrm{ for } &x\leq x_k \\ z\,\geq\, \frac{1-\delta-x\cos 2\theta}{2+\sin 2\theta}&\textrm{ for } &x\geq x_k \end{array}\right. \end{align} lower-bounded by a broken line with a kink at $x_k=-\frac{1-\delta}{2}\tan(2\theta)$, the intersection with the line $x\cos2\theta+z\sin2\theta=0$. \end{itemize} The figure of merit to be minimised is $|x\cos2\theta+z\sin2\theta|$: this means that $p_{10}(\delta,\epsilon)$ is given by the smallest distance between the line $z=-\cot(2\theta)\,x$ and a point of the feasible region. A graphical inspection (see Figs.~\ref{fig4} and \ref{fig5}) shows that this minimal distance is always achieved by the point that is the intersection of either $\mathbf{P_0}$ or $\mathbf{P_1}$ with the line $z\,=\, \frac{1-\delta-x\cos 2\theta}{2+\sin 2\theta}$. These are the points whose $x$ coordinates have been called $x_0$ and $x_1$, given in Eq.~\eqref{x0x1} in the main text. 
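For completeness, the commuting-strategy optimum of Lemma~\ref{lemmacomm} also admits a quick numerical cross-check. The sketch below (ours, not part of the paper) grid-searches the outer minimisation over $z$ and compares the result with the closed form \eqref{commapp}.

```python
import math

def commuting_value(delta, eps, theta, num=200001):
    """Grid search of the outer minimisation over z for the commuting strategy x=0.

    Objective (after eliminating t via t+z = 1-delta):
        max{y1, (1-eps)(1-delta) + eps*y1},  y1 = max{1-delta-2z, |z sin(2theta)|},
    subject to 0 <= 1-delta-2z <= 1, i.e. z in [-delta/2, (1-delta)/2].
    """
    lo, hi = -delta / 2.0, (1.0 - delta) / 2.0
    best = float("inf")
    for i in range(num):
        z = lo + (hi - lo) * i / (num - 1)
        y1 = max(1.0 - delta - 2.0 * z, abs(z * math.sin(2.0 * theta)))
        best = min(best, max(y1, (1.0 - eps) * (1.0 - delta) + eps * y1))
    return best

def commuting_closed_form(delta, eps, theta):
    # Claimed optimum: (1-delta)*[1 - eps/(1 + sin(theta)cos(theta))].
    return (1.0 - delta) * (1.0 - eps / (1.0 + math.sin(theta) * math.cos(theta)))
```

Both routines agree to within the grid resolution for arbitrary $(\delta,\epsilon,\theta)$, including the boundary case $\epsilon=1$ relevant to this Appendix.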
A fully analytical proof of this result would not bring further clarity (and, for good measure, the correctness of the result has been double-checked numerically against the solution of the corresponding SDP). \begin{figure} \caption{When $\theta$ is reduced, the slope of the left segment of $K$ increases and also cuts $P_0$, whence the feasible region consists of two disjoint sets. Nonetheless, the closest point to the line $z=-\cot(2\theta)\,x$ is still one of the intersection points described in the text.\label{fig5}} \end{figure} \end{appendix} \end{document}
\begin{document} \title{Non-Local Boxes for Networks} \author{Jean-Daniel Bancal$^{1,2}$ and Nicolas Gisin$^{1,3}$ \\ \it \small $^1$Group of Applied Physics, University of Geneva, 1211 Geneva 4, Switzerland \\ $^2$Université Paris-Saclay, CEA, CNRS, Institut de physique théorique, 91191, Gif-sur-Yvette, France\\ $^3$Schaffhausen Institute of Technology - SIT, Geneva, Switzerland} \date{\small \today} \begin{abstract} Nonlocal boxes are conceptual tools that capture the essence of the phenomenon of quantum non-locality, central to modern quantum theory and quantum technologies. We introduce network nonlocal boxes tailored for quantum networks under the natural assumption that these networks connect independent sources and do not allow signaling. Hence, these boxes satisfy the No-Signaling and Independence (NSI) principle. For the case of boxes without inputs, connecting pairs of bipartite sources and producing binary outputs, we prove that the sources and boxes producing local random outputs and maximal 2-box correlations, i.e. $E_2=\sqrt{2}-1$, $E_2^o=1$, are essentially unique. \end{abstract} \maketitle \section{Introduction}\label{intro} Non-locality is a key feature of quantum physics and one of the major discoveries - arguably the major discovery - of last-century physics. Modern quantum technology promises, in addition to quantum computers, quantum networks that will connect these quantum processors and offer proven confidentiality of all communications. It is thus natural and timely to study quantum non-locality in networks, a field that has been burgeoning for about a decade under the names of bilocality and $n$-locality, for 2 and $n$ independent sources, respectively \cite{Branciard, Fritz,Branciard2,ChavesFritz,Tavakoli,Henson,Wood,ChavesKueng,TavakoliConnected,Fritz2,Rosset, Chaves,Tavakoli3,Tavakoli2,Andreoli,Fraser,Luo,Inflation,Wolfe,Salman,Renou}. 
Let us stress that in networks the characteristic feature of quantum physics, namely entanglement, enters twice. First, entanglement of the subsystems emitted by the sources. Second, entanglement produced by the joint measurements that connect independent subsystems emitted by different sources. This second form of entanglement is much less studied and understood than the first one \cite{EJM}. In this work we go beyond quantum physics and study $n$-locality for arbitrary boxes limited only by no-signaling and independence, a principle we name NSI \cite{GisinBancal}. This is in the spirit of the non-local boxes introduced by Popescu and Rohrlich - the so-called PR-boxes \cite{PR} - as a conceptual tool to study Bell non-locality. Here, however, we go beyond Bell non-locality: our boxes connect independent sources and have no inputs. A priori, such network non-local boxes could have any number of outputs and connect any number of sources, with sources connected to any number of boxes. In this paper we restrict ourselves to boxes connecting two sources and producing binary outputs, with sources connected to two boxes, see Fig.~1. We name the networks obtained with these resources binary networks. Similarly to the PR-boxes, the aim is to set the limits of non-locality in networks imposed by the NSI principle and to offer conceptual tools to study this new form of non-locality. Assuming that locally each output of a box is totally random and that the outputs of two neighboring boxes are maximally correlated when using a particular type of source, we find that the statistics produced by this box in the presence of this source in a binary network are unique (up to flipping all outputs). \begin{figure} \caption{\it Various networks. Boxes are indicated with a square that outputs $x_j$. 
Sources are marked as {\Large$*$}.} \end{figure} \section{Binary Network Non-Local Boxes} First, consider two identical but independent sources, sending out subsystems in two opposite directions, coupled by one box as illustrated in Fig. 1.a. Each box outputs a bit $x_j=\pm1$, which we often label merely $\pm$. We assume that this bit is random, hence its expectation value is zero: $E_1=\langle x_j\rangle=0$. Figure 1.b illustrates the case of a box connected to the two parts of a single source. Here also we assume $E_1^o=\langle x_j\rangle=0$, where the superscript ``o" indicates that this correlator corresponds to a closed loop. Next, let us add more independent sources and boxes. We focus on the case in which all boxes are identical, i.e. copies of our box of interest, and similarly all sources are also copies of our source of interest. Only two fully connected configurations are possible. Either all sources and boxes are on a line, Fig. 1c, or they form a loop, Fig. 1d. It is important to realize that no other configuration is possible. Hence, proving the existence of a source and box compatible with these two configurations suffices to prove that they are conceptually possible. In both configurations, the probability of the outputs $x_j$ can conveniently be expressed in terms of all correlators. Since all sources and boxes are identical, the correlators between $k$ adjacent boxes are all equal, denoted $E_k=\langle x_{m+1}\cdot x_{m+2}\cdot ...\cdot x_{m+k}\rangle$. In the case of a polygon configuration with $n$ boxes at the vertices, i.e. a loop, there is one additional correlator: $E_n^o=\langle x_1\cdot x_2\cdot...\cdot x_n\rangle$. Let us consider some examples. 
First, the joint probability of outputs for 3 boxes on a line reads: \begin{eqnarray} p_3(x_1,x_2,x_3)&=&\frac{1}{2^3}\big(1+(x_1+x_2+x_3)E_1 \nonumber\\ &+& (x_1x_2+x_2x_3)E_2 \nonumber\\ &+& x_1x_2x_3E_3 + x_1x_3E_1^2\big) \label{p3a} \end{eqnarray} where the bipartite correlator associated to $x_1x_3$ is $E_1^2$ due to the independence assumption. Since by assumption $E_1=0$, we obtain \begin{eqnarray} p_3(x_1,x_2,x_3)&=&\frac{1}{2^3}\big(1+ (x_1x_2+x_2x_3)E_2 \nonumber\\ &+& x_1x_2x_3E_3\big) \label{p3} \end{eqnarray} where the first term guarantees normalization. Next, for 4 boxes in a square we have: \begin{eqnarray} p_4^o(x_1x_2x_3x_4)&=&\frac{1}{2^4}\big(1+(x_1x_2+x_2x_3+x_3x_4+x_4x_1)E_2 \nonumber\\ &+&(x_1x_2x_3+x_2x_3x_4+x_3x_4x_1+x_4x_1x_2)E_3 \nonumber\\ &+& x_1x_2x_3x_4E_4^o\big) \end{eqnarray} and so on. The correlators corresponding to disconnected sets factorize, because of the assumed independence of the sources, as in the last term in (\ref{p3a}). It is important to realize that for 5 boxes in a line, for hexagons, and for larger networks, independence implies some non-linear correlators. For example, in a line with 5 boxes the last correlator equals the square of the 2-box correlator: \begin{eqnarray} p_5(x_1...x_5)&=&\frac{1}{2^5}\big(1+\sum_{k=1}^4x_kx_{k+1}E_2 +\sum_{k=1}^3 x_kx_{k+1}x_{k+2}E_3 \nonumber\\ &+& \sum_{k=1}^2 x_kx_{k+1}x_{k+2}x_{k+3}E_4 + \prod_{k=1}^5 x_k E_5 \nonumber\\ &+& x_1x_2x_4x_5 E_2^2\big) \end{eqnarray} Obviously, the correlators have to be such that $p(x_j)\geq0$ for all output strings $\{x_j\}$. 
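As an illustration (ours, not part of the paper), the expression for $p_5$ can be checked numerically once the correlator values listed in Table I are plugged in: the resulting distribution is normalized, non-negative, and assigns probability zero to any string containing three adjacent outputs $+-+$.

```python
import itertools
import math

S2 = math.sqrt(2)
# Line correlators taken from Table I: E_2 = sqrt(2)-1, E_3 = 3-2*sqrt(2), ...
E2, E3, E4, E5 = S2 - 1, 3 - 2 * S2, 3 * S2 - 4, 3 - 2 * S2

def p5(x1, x2, x3, x4, x5):
    """Joint output probability for 5 boxes in a line (terms with E_1 = 0 omitted)."""
    return (1
            + (x1*x2 + x2*x3 + x3*x4 + x4*x5) * E2
            + (x1*x2*x3 + x2*x3*x4 + x3*x4*x5) * E3
            + (x1*x2*x3*x4 + x2*x3*x4*x5) * E4
            + x1*x2*x3*x4*x5 * E5
            + x1*x2*x4*x5 * E2**2) / 2**5

probs = {x: p5(*x) for x in itertools.product((+1, -1), repeat=5)}
```

The same construction extends to longer lines by including the corresponding connected correlators and factorized terms.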
In \cite{GisinBancal}, using 6 sources and 6 boxes in a hexagonal loop, we proved the following bound on the 2-box correlator: \begin{equation}\label{sqrt2m1} E_2\leq\sqrt{2}-1\approx 0.4142 \end{equation} Here we prove that this upper bound can be saturated for all possible binary networks and that the saturation of (\ref{sqrt2m1}) is achieved in an essentially unique way (up to flipping all outputs). Our main results are the correlators for boxes in a line (\ref{En}), section \ref{line}, and in a loop (\ref{Eno}), section \ref{loop}. These correlators are unique given $E_1=0$, $E_2=\sqrt{2}-1$ and $E_2^o=1$, and we prove that they guarantee that all probabilities $p(\vec x)$ and $p^o(\vec x)$, where $\vec x=(x_1..x_n)$, are non-negative. But first, in the next section, we consider 3 boxes in a line and prove that the value of the corresponding correlator $E_3$ is unique (up to its sign). \section{3-box correlator}\label{sec:E3} Consider 3 boxes in a line as in eq. (\ref{p3}). We have $p(+-+)=\frac{1}{8}\big(1-2E_2-E_3\big)\geq0$. Hence, $E_2=\sqrt{2}-1$ imposes $E_3\leq1-2E_2=3-2\sqrt{2}$. Interestingly, when considering lines with more copies of the box, this inequality becomes an equality (up to a sign flip), namely \begin{equation}\label{E3} E_3=\pm(3-2\sqrt{2}), \end{equation} as implied by the following lemma proven in the Appendix. \!\!\!\!\!\!\textbf{Lemma 1.}\quad \textit{In a line configuration with 7 boxes, whenever $E_1=0$ and $E_2=\sqrt{2}-1$, the following inequality holds:} \begin{equation}\label{E3bound} E_3^2 \geq (3-2\sqrt{2})^2. \end{equation} Henceforth we assume the positive sign for $E_3$. We'll see that this choice implies that all correlators are non-negative. Note that by flipping all outputs, all odd correlators, in particular $E_3$, merely change sign. 
The relation (\ref{E3}), inserted in (\ref{p3}), has the following important consequence: $p(+-+)=0$. Accordingly, there are never 3 adjacent boxes with outputs $+-+$: \begin{eqnarray} p_n(x_1..x_{j-1}+-+x_{j+3}..x_n)&=& p(+-+)\,\cdot \nonumber\\ p(x_1..x_{j-1}x_{j+3}..x_n|x_j=+,x_{j+1}=-,x_{j+2}=+)&=&0 \end{eqnarray} This in turn implies that whenever two connected boxes output $+-$, then the next box necessarily outputs $-$. This is key in proving existence and uniqueness of our network non-local box, as we show in the next two sections. \section{Recursion formula for the $E_k$}\label{line} Let us define $q_n(\vec x)=2^np_n(\vec x)$, so that the normalization factors drop out. We still have the non-negativity conditions $q(\vec x)\geq0$ for all $\vec x$. The case of $n+1$ boxes in a line can be reduced to $n$ boxes as follows: \begin{eqnarray} q_{n+1}(x_1..x_{n+1})&=&q_n(x_1..x_n) + (\Pi_{j=1}^{n+1} x_j) E_{n+1} \nonumber\\ &+& \sum_{k=0}^{n-1}(\Pi_{j=k+2}^{n+1}x_j)q_kE_{n-k} \\ &=&q_n(x_1..x_n) \nonumber\\ &+& (\Pi_{j=1}^{n+1} x_j) (E_{n+1}+Q_n) \label{qnp1} \end{eqnarray} where $Q_n=\sum_{k=0}^{n-1}(\Pi_{j=1}^{k+1}x_j)q_kE_{n-k}$, $q_k=q_k(x_1..x_k)$ and we used $x_j^2=1$ for all $j$. The first term on the right of (\ref{qnp1}) takes into account the first $n$ boxes, the second term all $n+1$ boxes and the last term all cases that involve the last box but not all boxes. Considering successively the cases $(\Pi_{j=1}^{n+1} x_j) = \pm1$ one gets: \begin{equation} \label{EnBound} q_n(x_1..x_n)-Q_n\geq E_{n+1}\geq -q_n(x_1..x_n)-Q_n \end{equation} Applying this to the string of alternating outputs $q_n(+-+-+-...)$, the result of the previous section implies $q_k=0$ for all $k\geq3$. 
Hence, from (\ref{qnp1}) and (\ref{EnBound}) one obtains the recursion formula: \begin{equation}\label{Enp1} E_{n+1} = -E_n + E_{n-1} + (1-E_2)E_{n-2} \end{equation} This leads to the following closed form for all correlators of boxes in a line: \begin{equation}\label{En} \boxed{ E_n=\frac{1}{\mu}\big[ \big(\frac{2}{\mu-1}\big)^{n-1} - \big(\frac{-2}{\mu+1}\big)^{n-1} \big] }\end{equation} where $\mu=\sqrt{5+4\sqrt{2}}\approx 3.2645$. Notice that $E_n>0$ for all $n\geq2$. Obviously, flipping all outputs would change the sign of $E_n$ for all odd $n$'s. Table I lists the first 12 values of $E_n$; remarkably, they are all of the form $a+b\sqrt{2}$, with $a$ and $b$ integers. It remains to prove that these correlators guarantee $q_n(\vec x)\geq0$ for all $\vec x$. First, notice that by inserting the recursion formula (\ref{Enp1}) into (\ref{qnp1}) one gets, after some algebra: \begin{equation}\label{qone} q_n(\vec 1_n)=(\sqrt{2})^{n-1} \end{equation} where $\vec 1_n=(+1,+1,...,+1)$, with $n$ entries $+1$. Second, using $q_n(x_1..x_{j-1}+-+x_{j+3}..x_n)=0$ for all $2\leq j\leq n-3$ one sees that the line of boxes can be split in parts each time the output string changes sign. Without loss of generality we may assume $x_1=+1$. Then, either all outputs are $+1$, i.e. $\vec x=\vec 1_n$, in which case $q_n(\vec 1_n)=(\sqrt{2})^{n-1}>0$, or the first output $-1$ happens at position $j_0$, i.e. $x_{k}=+1$ for all $k<j_0$ and $x_{j_0}=-1$. In such a case, the box at position $j_0+1$ can be removed, as anyway the output $-1$ is certain. But if the $j_0+1$ box is removed, then the independence assumption implies that the probability factorizes: $p_n(x_1x_2...x_n)=p_{j_0}(++...++-)\cdot p_{n-j_0-1}(x_{j_0+2}...x_n)$. Eventually, all probabilities are products of the following terms: $p_n(\vec1_n)$, $p_n(\vec1_{n-1},-)$, $p_n(-,\vec1_{n-1})$ and $p_n(-,\vec1_{n-2},-)$. 
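The recursion and the boxed closed form for $E_n$ above can be cross-checked numerically against each other and against the entries of Table I. The following Python sketch is ours, not part of the paper.

```python
import math

S2 = math.sqrt(2)
MU = math.sqrt(5 + 4 * S2)  # mu = sqrt(5 + 4*sqrt(2)) ~ 3.2645

def E_closed(n):
    """Closed form for the n-box line correlator E_n."""
    return ((2 / (MU - 1)) ** (n - 1) - (-2 / (MU + 1)) ** (n - 1)) / MU

# Recursion E_{n+1} = -E_n + E_{n-1} + (1 - E_2)*E_{n-2}, seeded with E_1, E_2, E_3.
E = {1: 0.0, 2: S2 - 1, 3: 3 - 2 * S2}
for n in range(3, 12):
    E[n + 1] = -E[n] + E[n - 1] + (1 - E[2]) * E[n - 2]
```

Both routes reproduce, e.g., $E_4=3\sqrt{2}-4$, $E_7=10\sqrt{2}-14$ and $E_{12}=104\sqrt{2}-147$ from Table I.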
It remains to prove that these 4 terms are all non-negative for all $n$. Relation (\ref{qone}) proves the positivity of the first term and will be heavily used to prove the positivity of the 3 other terms: \begin{eqnarray} p_{n+1}(\vec1_n,-)&=&p_{n+1}(-,\vec1_n)=p_n(\vec1_n)-p_{n+1}(\vec1_n,+) \nonumber\\ &=&\frac{1}{\sqrt{2}^{n+1}}\big(1-\frac{1}{\sqrt{2}}\big)>0 \\ p_{n+2}(-,\vec1_n,-)&=&p_n(\vec1_n)-p_{n+2}(-,\vec1_n,+) \nonumber\\ &-&p_{n+2}(+,\vec1_n,-)-p_{n+2}(+,\vec1_n,+) \\ &=&\frac{1}{\sqrt{2}^{n+1}}\big(1-\frac{2}{\sqrt{2}}(1-\frac{1}{\sqrt{2}})-\frac{1}{2}\big) \\ &>&0 \nonumber \end{eqnarray} In summary, all the correlators (\ref{En}) corresponding to $n$ boxes in a line are fixed by the assumptions $E_1=0$ and $E_2=\sqrt{2}-1$ (and $E_3\geq0$) and they guarantee that all probabilities $p(\vec x)\geq0$, for all output strings $\vec x$. In the next section we prove that these correlators are also compatible with $n$ boxes in any polygon configuration. \section{Polygons}\label{loop} The smallest closed loop has a single box fed by the two links produced by a single source, see Fig. 1b. In this case we assume, similarly to a single box in a line, $E_1^o=0$, where the upper index $o$ indicates a closed loop. Accordingly, single boxes always produce fully random outputs. The second shortest loop has 2 boxes and 2 sources. Since $E_2^o$ is not limited by the NSI principle (see Fig. 1), we assume it takes the largest possible value\footnote{At first sight, one may fear that combining the two end sources of the 2-box configuration in a line into a single source, thus changing the line into a loop configuration, leads to signaling, since $E_2\neq E_2^o$. However, the change in the source takes time to influence the boxes, the time of flight of the subsystems emitted by the sources. Hence, $E_2\neq E_2^o$ does not lead to signaling.} : $E_2^o=1$. Let's now consider polygons with $n+1$ vertices and edges, for $n\geq 2$. 
Using a similar technique as for the line we have: \begin{eqnarray}\label{qo} q^o_{n+1}(x_1..x_{n+1})&=& q_n(x_1..x_n) + \Pi_{j=1}^{n+1}x_j\cdot E^o_{n+1} \nonumber\\ &+& \sum_{k=1}^n E_k \cdot\sum_{\ell=1}^k \big(\Pi_{j=n+1-k+\ell}^{n+\ell}~ x_j\big) \nonumber\\ &\cdot& q_{n-1-k}(x_{n-1-k+\ell}..x_{n+\ell+2}) \end{eqnarray} where all indices of $x_j$'s are assumed modulo $n+1$ (e.g. $x_{n+\ell+2}=x_{\ell+1}$) and we set $q_{-1}=q_0=1$. Using $q_3(+-+)=0$ we deduce that for all $n\geq3$, $q^o_{n+1}(\vec 1_n,-)=0$. Let's apply this to the above formula: \begin{eqnarray} q^o_{n+1}(\vec 1_n,-)&=&q_n(\vec 1_n)-E^o_{n+1} \nonumber\\ &+&\sum_{k=1}^n E_k\cdot\sum_{\ell=1}^k x_1..\hat x_\ell..\hat x_{\ell+n-k}..x_{n+1} \nonumber\\ &\cdot& q_{n-1-k}(x_{\ell}..x_{\ell+n-k-1}) \label{qno}\\ &=&\sqrt{2}^{n-1}-E^o_{n+1}-\sum_{k=1}^{n-2} E_k\cdot k\cdot\sqrt{2}^{n-k-2} \nonumber\\ &-&(n-1)E_{n-1} - nE_n=0 \label{qooo} \end{eqnarray} Consequently: \begin{eqnarray}\label{Eo} E^o_{n+1}&=&\sqrt{2}^{n-1}-\sum_{k=1}^{n-2}E_k\cdot k\cdot\sqrt{2}^{n-k-2} \nonumber\\ &-&n\cdot E_n - (n-1)\cdot E_{n-1} \end{eqnarray} Using eq. (\ref{En}), with the same $\mu=\sqrt{5+4\sqrt{2}}$, one gets, for all $n\geq2$: \begin{equation}\label{Eno} \boxed{ E_n^o=\big(\frac{2}{\mu-1}\big)^n + \big(\frac{-2}{\mu+1}\big)^n }\end{equation} Note that $E_n^o>0$ for all $n$. Table I lists the first 12 values of $E_n^o$; remarkably, they are all of the form $a+b\sqrt{2}$, with $a$ and $b$ integers, as we found for the correlators in a line. It remains to prove that $q_{n+1}^o(\vec x)\geq 0$ for all $\vec x$. First, assume that not all outputs are equal. Then, there are 2 adjacent boxes with outputs $+1-1$. Since $p_3(+-+)=0$, we can remove the next box as it necessarily outputs $-1$. In this way one opens the loop and reduces it to a line for which we already proved non-negativity of all probabilities. Next, assume all outputs are +1. 
Since $E_n^o\geq0$ for all $n$, $p_n^o(\vec 1_n)\geq0$. Finally, assume all outputs are -1: \begin{equation} p_n^o(\overrightarrow{-1}_n)=\frac{1}{2^n}\big(1+n\sum_{k=2}^{n-1}(-1)^kE_k + (-1)^nE_n^o\big) \end{equation} From (\ref{En}) and (\ref{Eno}), straightforward computations show that $E_{2n}-E_{2n+1}>0$ and $1+E_{n-1}-E_n^o>0$. Hence, $p_n^o(\overrightarrow{-1}_n)\geq0$ for all $n$. \section{Conclusion} We proved that under the natural assumption $E_1=E_1^o=0$ there is a single pair of source and box in binary networks that maximizes the 2-box correlators $E_2$ and $E_2^o$, in the sense that the statistics produced by these resources in any binary network are uniquely defined. The existence and uniqueness of such a source and box is highly non-trivial. Admittedly, the deep reason why such a source and box exist and - especially - are unique is left open. Another question is whether this binary non-local resource can be realized within quantum theory and, if not, how close quantum theory can come. A natural challenge is whether a similar result holds also for boxes with more outputs, boxes connecting more than 2 sources, and/or sources connecting more than 2 boxes. We investigated numerically the case of boxes with 4 outputs under the natural assumption of output permutation symmetry, i.e. for all permutations $\pi$ of $\{1,2,3,4\}$, $p_n(\vec x)=p_n(\pi(\vec x))$ (the same permutation applied to all outputs). However, we found no sign of uniqueness for 4-output network boxes, though the question remains open and should be investigated in future work. Also, other types of networks should be studied, in particular configurations beyond lines and loops, e.g., star networks. Network non-locality is a new and fascinating form of non-locality. It combines non-locality and ``entangling" joint measurements. 
It deserves to be analyzed within quantum theory and, as we do here, from outside quantum theory under the very general NSI principle: no-signaling and independence. The presented source and box for binary networks is a conceptual tool, similar in spirit to the PR-boxes introduced by Popescu and Rohrlich \cite{PR}; such conceptual tools are useful to understand and simulate quantum correlations, and for applications in the spirit of device-independent quantum information processing. Among the many fascinating research directions to which this one contributes are all questions on the limits of quantum non-locality (e.g. information causality \cite{infoCausality}). In particular, macroscopic locality \cite{Miguel} and classical limits of non-local boxes \cite{Daniel,NGMagnets} are especially interesting and should be applied to network boxes. \\ \onecolumngrid \begin{table}[h] \centering \begin{tabular} {c|c|c|c|c|c|c|c|c|c|c|c|c} $n$ & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 \\ \hline $E_n$ & 0 & $\sqrt{2}-1$ & $3-2\sqrt{2}$ & $3\sqrt{2}-4$ & $3-2\sqrt{2}$ & $3-2\sqrt{2}$ & $10\sqrt{2}-14$ & $27-19\sqrt{2}$ & $22\sqrt{2}-31$ & $10-7\sqrt{2}$ & $51-36\sqrt{2}$ & $104\sqrt{2}-147$ \\ \hline $E_n^o$ & 0 & 1 & $2-\sqrt{2}$ & $4\sqrt{2}-5$ & $9-6\sqrt{2}$ & $6\sqrt{2}-8$ & $\sqrt{2}-1$ & $23-16\sqrt{2}$ & $37\sqrt{2}-52$ & $71-50\sqrt{2}$ & $32\sqrt{2}-45$ & $44\sqrt{2}-62$ \\ \end{tabular} \caption{\it Table of the first 12 correlators in a line, $E_n$, and in a loop, $E_n^o$. (The indicated values of $E_2^o$ and $E_3^o$ are the maximal values compatible with $p_2^o(x_1x_2)\geq0$ and $p_3^o(x_1x_2x_3)\geq0$, respectively.)} \end{table} \twocolumngrid \small \section*{Acknowledgment} Stimulating discussions with Nicolas Brunner and financial support by the Swiss NCCR SwissMap are acknowledged. 
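The loop entries of Table I can be reproduced numerically: the sum formula for $E^o_{n+1}$ in terms of the line correlators and the boxed closed form for $E^o_n$ agree to machine precision. The following Python sketch is ours, not part of the paper.

```python
import math

S2 = math.sqrt(2)
MU = math.sqrt(5 + 4 * S2)

def E_line(n):
    # Closed form for the line correlators E_n.
    return ((2 / (MU - 1)) ** (n - 1) - (-2 / (MU + 1)) ** (n - 1)) / MU

def Eo_from_sum(n):
    """Loop correlator E^o_{n+1} via the sum formula, valid for n >= 2:
    sqrt(2)^(n-1) - sum_k E_k * k * sqrt(2)^(n-k-2) - n*E_n - (n-1)*E_{n-1}."""
    s = S2 ** (n - 1)
    s -= sum(E_line(k) * k * S2 ** (n - k - 2) for k in range(1, n - 1))
    s -= n * E_line(n) + (n - 1) * E_line(n - 1)
    return s

def Eo_closed(m):
    """Boxed closed form for E^o_m, m >= 2."""
    return (2 / (MU - 1)) ** m + (-2 / (MU + 1)) ** m
```

For instance, both routes give $E_2^o=1$, $E_3^o=2-\sqrt{2}$ and $E_7^o=\sqrt{2}-1$, as in Table I.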
\appendix \section{Proof of Lemma 1} In this appendix we prove the lemma used in section III: \!\!\!\!\!\!\textbf{Lemma 1.}\quad \textit{In a line configuration with 7 boxes, whenever $E_1=0$ and $E_2=\sqrt{2}-1$, the following inequality holds:} \begin{equation} E_3^2 \geq (3-2\sqrt{2})^2. \end{equation} \begin{proof} Let us identify 17 probabilities: \begin{eqnarray} P_1 &=& P(-------)\\ P_2 &=& P(---+---)\\ P_3 &=& P(--+-+--)\\ P_4 &=& P(+-+-+-+)\\ P_5 &=& P(++-+-++)\\ P_6 &=& P(+++-+++)\\ P_7 &=& P(+++++++)\\ P_8 &=& P(-++-+-+)\\ P_9 &=& P(-+-+--+)\\ Q_1 &=& P(----+-+)\\ Q_2 &=& P(----+++)\\ Q_3 &=& P(---+-++)\\ Q_4 &=& P(---++++)\\ Q_5 &=& P(--+-+++)\\ Q_6 &=& P(--++-+-)\\ Q_7 &=& P(-+-++++)\\ Q_8 &=& P(+-+--++). \end{eqnarray} We define three vectors in $\mathbb{R}^{24}$: \begin{equation} \begin{split} u = (8,9,6,1,9,4,5,3,7,1,6,2,7,1,7,5,3,7,1,6,2,6,7,1)\\ v = (4,2,7,6,4,2,2,4,1,7,6,8,6,3,5,4,2,7,1,1,3,5,3,5) \end{split} \end{equation} and \begin{equation} \begin{split} c = &\Big(\frac{11146 + 16545\sqrt{2}}{1160712}, \frac{2581 + 4686\sqrt{2}}{773808}, \frac{3 + 34\sqrt{2}}{18424},\\ &\frac{109 + 2003\sqrt{2}}{515872}, \frac{10253 + 16404\sqrt{2}}{1160712}, \frac{9529 + 14340\sqrt{2}}{1547616},\\ &\frac{29063 + 10799\sqrt{2}}{3095232}, \frac{50719 - 4773\sqrt{2}}{9285696}, \frac{1517 + 304\sqrt{2}}{147392},\\ &\frac{9(139 + 40\sqrt{2})}{147392}, \frac{3(-38 + 337\sqrt{2})}{257936}, \frac{6 + 19\sqrt{2}}{5488},\\ &\frac{227 + 1805\sqrt{2}}{515872}, \frac{16154 + 2677\sqrt{2}}{3095232}, \frac{107(80 + 139\sqrt{2})}{3095232},\\ &\frac{127277 - 41427\sqrt{2}}{9285696}, \frac{41879 + 4049\sqrt{2}}{3095232}, \frac{1387 + 366\sqrt{2}}{147392},\\ &\frac{1381 + 298\sqrt{2}}{147392}, \frac{3 + 34\sqrt{2}}{18424}, \frac{44 + 25\sqrt{2}}{5488},\\ &\frac{3(636 + 299\sqrt{2})}{257936}, \frac{2066 - 383\sqrt{2}}{442176}, \frac{8536 + 9995\sqrt{2}}{3095232}\Big) \end{split} \end{equation} A lengthy but direct computation shows that \begin{equation} 2^{14}\cdot\sum_{i=1}^{24} c_i P_{u_i} 
Q_{v_i} = 12\sqrt{2} - 17 + E_3^2. \end{equation} In other words, this expression does not involve any correlator with more than $3$ parties. Since the probabilities and all components of $c$ are positive, this expression also is positive, i.e. \begin{equation} 12\sqrt{2} - 17 + E_3^2 \geq 0, \end{equation} which is equivalent to Eq.~\eqref{E3bound}, since $(3-2\sqrt{2})^2=17-12\sqrt{2}$. \end{proof} The proof above was derived by considering probabilities for seven parties in a line in the space of correlator monomials. According to~\eqref{qnp1} and~\eqref{En}, every probability in this scenario can be expressed as a linear combination of some products of correlators. The task of finding a linear combination of probabilities with positive coefficients for which all monomials except $E_3^2$ vanish then amounts to solving a linear program. The proof constitutes a solution of this program. \end{document}
\begin{document} \begin{abstract} Let $\Omega$ be a smooth bounded domain in $\rn$ ($n\geq 3$) such that $0\in\partial \Omega$. We consider issues of non-existence, existence, and multiplicity of variational solutions in $\huno$ for the borderline Dirichlet problem, $$\left\{ \begin{array}{llll} -\Delta u-\gamma \frac{u}{|x|^2}- h(x) u &=& \frac{|u|^{\crits-2}u}{|x|^s} \ \ &\text{in } \Omega,\\ u&=&0 &\text{on }\bono, \end{array} \right.\eqno{(E)}$$ where $0<s<2$, ${\crits}:=\frac{2(n-s)}{n-2}$, $\gamma\in\rr$ and $h\in C^0(\overline{\Omega})$. We use sharp blow-up analysis on --possibly high energy-- solutions of corresponding subcritical problems to establish, for example, that if $\gamma<\frac{n^2}{4}-1$ and the principal curvatures of $\partial\Omega$ at $0$ are non-positive but not all of them vanishing, then Equation (E) has an infinite number of high energy (possibly sign-changing) solutions in $\huno$. This complements results of the first and third authors, who showed in \cite{gr4} that if $\gamma\leq \frac{n^2}{4}-\frac{1}{4}$ and the mean curvature of $\partial\Omega$ at $0$ is negative, then $(E)$ has a positive least energy solution. On the other hand, the sharp blow-up analysis also allows us to show that if the mean curvature at $0$ is nonzero and the mass, when defined, is also nonzero, then there is a surprising stability of regimes where there are no variational positive solutions under $C^1$-perturbations of the potential $h$. In particular, and in sharp contrast with the non-singular case (i.e., when $\gamma=s=0$), we prove non-existence of such solutions for $(E)$ in any dimension, whenever $\Omega$ is star-shaped and $h$ is close to $0$, which include situations not covered by the classical Pohozaev obstruction. 
\end{abstract} \maketitle \tableofcontents \section{\, Introduction} This manuscript is the continuation of a long-time project initiated by the first and third authors in \cite{gr1} about nonlinear critical equations involving the Hardy potential when the singularity is located on the boundary of the domain under study. Let $\Omega$ be such a smooth bounded domain in $\rn$, $n\geq 3$, with $0\in\partial\Omega$. We fix $s\in (0,2)$ and define the critical Sobolev exponent ${\crits}:=\frac{2(n-s)}{n-2}$. For $\gamma\in\rr$ and $h_0\in C^1(\overline{\Omega})$, we consider in the sequel issues of non-existence, existence, and multiplicity of variational solutions in $\huno$ for the borderline Dirichlet problem, \begin{eqnarray} \label{one} \left\{ \begin{array}{llll} -\Delta u-\gamma \frac{u}{|x|^2}-h_0(x) u &=& \frac{|u|^{\crits-2}u}{|x|^s} \ \ &\text{in } \Omega,\\ u&=&0 &\text{on }\bono. \end{array} \right. \end{eqnarray} By solutions, we mean here functions $u\in\huno$, i.e., the completion of $C^{\infty}_c(\Omega)$ for the $L^2$-norm of the gradient $\Vert\nabla u\Vert_2$. This problem has by now a long history, starting with the fact that when $\gamma=s=0$ and $h_0$ is a constant, it is the counterpart of the Yamabe problem \cites{aubin, LeeParker, schoen1} in Euclidean space, as initiated by Brezis-Nirenberg \cite{bn}, with important contributions in the critical dimension $n=3$ by Druet \cite{d2}, and for multiplicity results for $n\geq 7$ by Devillanova-Solimini \cite{ds}, among many others. The case dealing with least energy solutions for $s>0$ but $\gamma =0$, when the singularity $0$ is on the boundary of the domain, was initiated by Ghoussoub-Kang \cite{gk} and developed by Ghoussoub-Robert \cite{gr1}. The case involving the Hardy potential, i.e., when $\gamma >0$, was introduced by Lin-Wadade \cite{LW3} with a follow-up contribution by Ghoussoub-Robert \cite{gr4}.
This paper addresses remaining issues about the multiplicity of solutions, but also about obstructions to the existence of solutions and their stability under small perturbations. The existence of solutions is related to the coercivity of the operator $-\Delta -\frac{\gamma}{|x|^2} -h_0(x)$. It is clear that the operator $-\Delta - \frac{\gamma}{|x|^2}$ is coercive on $\huno$ whenever $\gamma <\gamma_H(\Omega)$, where $\gamma_H(\Omega)$ is the Hardy constant associated to the domain $\Omega$, that is, \begin{align}\label{Hardy inequality} \gamma_{H}(\Omega):= \inf_{u \in\huno \setminus \{ 0\}}\frac{ \int _{\Omega} |\nabla u|^{2} ~ dx}{ \int_{\Omega} \frac{u^{2}}{|x|^{2}}~ dx}, \end{align} which has been extensively studied (see for example \cite{GM.book} and \cite{gr4}). We recall that if $0\in \Omega$, then \begin{equation} \gamma_H (\Omega)=\gamma_{H}(\rn)=\frac{(n-2)^2}{4}. \end{equation} When $0\in \partial \Omega$, the situation is markedly different. For non-smooth domains modeled on cones, we refer to Egnell \cite{eg}, and the more recent works of Cheikh-Ali \cites{HCA,HCA2}. If $\Omega$ is smooth, then, around $0$, the domain is modeled on the half-space $\rnm:=\{x\in \rn; \, x_1<0\}$. We then get that (see \cite{gr4}) \begin{equation} \frac{(n-2)^2}{4} < \gamma_{H}(\Omega)\leq\gamma_H(\rnm)= \frac{n^2}{4}. \end{equation} Note that when $h_{0}\equiv 0$, (\ref{one}) is the Euler-Lagrange equation for the following Hardy-Sobolev variational problem: For $\gamma <\gamma_H(\Omega)$ and $0\leq s\leq 2$, there exists $\mu_{\gamma, s}(\Omega)>0$ such that \begin{equation} \label{general} \mu_{\gamma, s}(\Omega)=\inf\left\{\frac{\int_{\Omega} |\nabla u|^2\, dx-\gamma \int_{\Omega}\frac{u^2}{|x|^2}\, dx}{\left(\int_{\Omega}\frac{|u|^{\crit}}{|x|^s}\, dx\right)^{\frac{2}{\crit}}};\, u \in\huno \setminus \{ 0\} \right\}.
\end{equation} Note that when $s=2$ and $\gamma=0$, this is the Hardy inequality mentioned above, while if $s=0$ and $\gamma=0$, it is the Sobolev inequality. If $\Omega =\R^n$, $s\in [0, 2]$ and $\gamma \in (-\infty , \frac{(n-2)^2}{4})$, (\ref{general}) contains -- after a suitable change of variables -- the Caffarelli-Kohn-Nirenberg inequalities \cite{ckn}. The latter state that there is a constant $C:=C(a,b,n)>0$ such that \begin{equation} \label{CKN} \left(\int_{\rn}|x|^{-bq}|u|^q \, dx\right)^{\frac{2}{q}}\leq C\int_{\rn}|x|^{-2a}|\nabla u|^2 \, dx \hbox{ for all }u\in C^\infty_c(\rn), \end{equation} where \begin{equation}\label{cond1} -\infty<a<\frac{n-2}{2}, \ \ 0 \leq b-a\leq 1, \ \ {\rm and}\ \ q=\frac{2n}{n-2+2(b-a)}. \end{equation} The first difficulty in these problems is due to the fact that $\crits$ is critical from the viewpoint of the Sobolev embeddings: if $\Omega$ is bounded, then $\huno$ is embedded in the weighted space $L^p(\Omega, |x|^{-s})$ for $1\leq p\leq\crits $, and the embedding is compact if and only if $p<\crits$. This lack of compactness defeats the classical minimization strategy to get extremals for (\ref{general}). In fact, when $s=0$ and $\gamma=0$, this is the setting of the critical case in the classical Sobolev inequalities, which started this whole line of inquiry, due to its connection with the Yamabe problem on compact Riemannian manifolds \cite{aubin}, \cite{schoen1}, \cite{LeeParker}. Another complicating feature of the problem is that the term $\frac{u}{|x|^2}$ is as critical as $\frac{u^{\crits-1}}{|x|^s}$ in the sense that they have the same homogeneity as the Laplacian. These difficulties are reflected in the invariance of the problem under conformal transformations.
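As a quick sanity check (ours, not taken from \cite{ckn}): the exponent relation $q=\frac{2n}{n-2+2(b-a)}$ in \eqref{cond1} is exactly the one making both sides of \eqref{CKN} scale by the same power of $\lambda$ under the dilation $u\mapsto u(\lambda\,\cdot)$. The following sympy sketch verifies this dimensional balance.

```python
import sympy as sp

# Dimensional-balance check for the Caffarelli-Kohn-Nirenberg inequality (CKN):
# substituting u(lam*x) and changing variables y = lam*x, the left-hand side
# scales by lam**(2*(b*q - n)/q) and the right-hand side by lam**(2 + 2*a - n).
n, a, b = sp.symbols('n a b')
q = 2*n/(n - 2 + 2*(b - a))          # the exponent q from (cond1)
lhs_exp = 2*(b*q - n)/q              # scaling exponent of (int |x|^{-bq}|u|^q dx)^{2/q}
rhs_exp = 2 + 2*a - n                # scaling exponent of int |x|^{-2a}|grad u|^2 dx
# the two exponents agree identically in (n, a, b)
assert sp.simplify(lhs_exp - rhs_exp) == 0
```

Substituting $a=b=0$ recovers the classical Sobolev exponent $q=\frac{2n}{n-2}$, consistent with the non-weighted case discussed above.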
Indeed, for a function $u:\Omega\to\rr$ and $r>0$, let \begin{equation}\label{conf:trans} u_r: x\mapsto r^{\frac{n-2}{2}}u(r \cdot x) \end{equation} and note that Equation \eqref{one} is then ``essentially'' invariant under the transformation $u\mapsto u_r$ in the sense that \begin{eqnarray}\label{eq:trans} \left\{ \begin{array}{llll} -\Delta u_r-\gamma \frac{u_r}{|x|^2}-r^2h_{0}(rx) u_r &=& \frac{|u_r|^{\crits-2}u_r}{|x|^s} \ \ &\text{in } r^{-1}\Omega,\\ u_r&=&0 &\text{on }r^{-1}\bono. \end{array} \right. \end{eqnarray} This ``invariance'' is behind the lack of compactness in the embeddings associated to the variational formulation of \eqref{one}, which prohibits the use of general abstract topological or variational methods. However, as one notices, the invariance is not complete, since the potential $h$ has changed, and the domain itself was transformed. As we shall see, both the geometry of the domain and --to a lesser extent-- the potential $h$ break the invariance enough that one will be able to recover compactness for \eqref{one}. Another important aspect of this problem is the singularity at $0$ and its location within the domain, since the Hardy potential does not belong to the Kato class. Elliptic problems with singular potentials arise in quantum mechanics, astrophysics, as well as in Riemannian geometry, in particular in the study of the scalar curvature problem on the standard sphere. Indeed, if the latter is equipped with a metric, conformal to the standard one, whose scalar curvature is singular at the north and south poles, then by considering the stereographic projection onto $\rn$, the problem of finding a conformal metric with prescribed scalar curvature $K(x)$ leads to finding solutions of the form $-\Delta u-\gamma \frac{u}{|x|^2}=K(x)u^{2^\star(0)-1}$ on $\rn$. The latter is a simplified version of the nonlinear Wheeler-DeWitt equation, which appears in quantum cosmology (see \cites{BB,BE,LZ,SmetsTAMS} and the references cited therein).
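To make the ``essential'' invariance explicit (our own verification): inserting $u_r(x)=r^{\frac{n-2}{2}}u(rx)$ into \eqref{one} and matching the powers of $r$ produced on the two sides forces the exponent $p$ in a nonlinearity $|u|^{p-2}u/|x|^s$ to equal $\crits=\frac{2(n-s)}{n-2}$; the Hardy term $\gamma u/|x|^2$ picks up the same power of $r$ as the Laplacian, which is the homogeneity statement made above. A sympy check:

```python
import sympy as sp

# Under u_r(x) = r**((n-2)/2) * u(r*x), evaluated via the original equation
# at the point y = r*x:
#   -Delta u_r picks up the factor r**((n-2)/2 + 2), and so does gamma*u_r/|x|^2;
#   |u_r|^{p-2} u_r / |x|^s picks up the factor r**((p-1)*(n-2)/2 + s).
# Equality of the two exponents pins down the critical exponent p = 2(n-s)/(n-2).
n, s, p = sp.symbols('n s p')
crits = 2*(n - s)/(n - 2)
lhs_exp = (n - 2)/2 + 2
rhs_exp = (p - 1)*(n - 2)/2 + s
sol = sp.solve(sp.Eq(lhs_exp, rhs_exp), p)[0]
assert sp.simplify(sol - crits) == 0
```

This is the precise sense in which the nonlinearity is critical for the dilation structure of the problem; for $s=0$ it reduces to the usual Sobolev exponent $\frac{2n}{n-2}$.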
This paper deals specifically with the case where $0$ belongs to the boundary of a smooth domain $\Omega$. We shall see that the boundary at $0$ plays an important role, and our starting point is the existence Theorem \ref{gro} below for least energy solutions. It was first established by Ghoussoub-Robert \cite{gr1} when $\gamma=0$, then by Lin-Wadade \cite{LW3} when $0<\gamma <\frac{(n-2)^2}{4}$, under the assumption that the mean curvature at $0$ is negative. The result was extended to the range $\gamma\leq \frac{n^2-1}{4}$ in \cite{gr4}, but more importantly, it was shown there that in the remaining range $(\frac{n^2-1}{4}, \frac{n^2}{4})$, the curvature condition does not suffice anymore and a more global condition is needed: the boundary mass $m_{\gamma,h}(\Omega)$ of a domain associated to $\gamma$ and $h$, which we now recall. \subsection{The models and the definition of the mass} Formally letting $r\to 0$ in \eqref{eq:trans}, we get that $u$ should behave like solutions to \begin{eqnarray}\label{eq:U:sec2} \left\{ \begin{array}{llll} -\Delta U-\gamma \frac{U}{|x|^2} &=& \frac{|U|^{\crits-2}U}{|x|^s} \ \ &\text{ in }\rnm,\\ U&=&0 &\text{on }\partial\rnm. \end{array} \right. \end{eqnarray} To the best of our knowledge, no explicit positive solution of \eqref{eq:U:sec2} is known. This is the reason why a specific blow-up analysis was carried out in \cite{gr1}, which relied on the symmetry properties and a precise description of the asymptotic behavior of such solutions --also established in \cite{gr1}. On the other hand, the asymptotic behavior of such nonlinear problems is governed by the solutions to the linear problem \begin{eqnarray}\label{eq:U:lin} \left\{ \begin{array}{llll} -\Delta U-\gamma \frac{U}{|x|^2} &=& 0 \ \ &\text{ in }\rnm,\\ U&=&0 &\text{on }\partial\rnm. \end{array} \right.
\end{eqnarray} One can then easily see that a function of the form $u(x)=x_1|x|^{-\beta}$ is a solution to (\ref{eq:U:lin}) if and only if $\beta\in \{\beta_-(\gamma),\beta_+(\gamma)\}$, where \begin{align}\label{def:beta} \beta_{\pm}(\gamma):=\frac{n}{2}\pm\sqrt{\frac{n^2}{4}-\gamma} \quad \hbox{ for }\gamma<\frac{n^2}{4}. \end{align} \begin{thdefi}[\cite{gr4}]\label{thdefi:mass} Let $\Omega$ be a smooth bounded domain of $\mathbb{R}^{n}$ $(n \geq 3)$ such that $0 \in \partial \Omega$. Suppose $\gamma<\frac{n^2}{4}$ and let $h\in C^{1}(\overline\Omega)$ be such that the operator $-\Delta-\gamma|x|^{-2}-h$ is coercive. Assume that $$\gamma>\frac{n^2-1}{4}.$$ Then there exists $\mathcal{H}\in C^2(\overline{\Omega}\setminus\{0\})$ such that $$\qquad\qquad \left\{\begin{array}{ll} -\Delta \mathcal{H}-\frac{\gamma}{|x|^2}\mathcal{H}-h(x) \mathcal{H}=0 &\hbox{ in } \Omega\\ \mathcal{H}>0&\hbox{ in } \Omega\\ \mathcal{H}=0&\hbox{ on }\partial\Omega \setminus\{0\}. \end{array}\right.$$ Moreover, there exist constants $c_1,c_2\in\rr$ with $c_1>0$ such that $$\mathcal{H}(x)=c_1\frac{d(x,\bdry)}{|x|^{\beta_+(\gamma)}}+c_2\frac{d(x,\bdry)}{|x|^{\beta_-(\gamma)}}+o\left(\frac{d(x,\bdry)}{|x|^{\beta_-(\gamma)}}\right)$$ as $x\to 0$. In the spirit of Schoen-Yau \cite{SY}, we define the boundary mass as $$m_{\gamma,h}(\Omega):=\frac{c_2}{c_1},$$ which is independent of the choice of $\mathcal{H}$. \end{thdefi} The problem of existence of least energy solutions can now be summarized in the following theorem, whose proof can also be deduced from the refined blow-up techniques developed in this paper. \begin{theorem}[G.-R.\cite{gr1}, Lin-Wadade \cite{LW3}, G.-R. \cite{gr4}] \label{gro} Let $\Omega$ be a smooth bounded domain in $\rn$ $(n\geq 3)$ such that the singularity $0$ belongs to the boundary $\partial \Omega$. Suppose that $0< s <2$ and fix $h_0\in C^1(\overline{\Omega})$ such that $-\Delta-\gamma|x|^{-2}-h_0$ is coercive.
Assume that one of the following two conditions holds: \begin{itemize} \item $\gamma\leq \frac{n^2-1}{4}$ and the mean curvature of $\partial\Omega$ at $0$ is negative. \item $\frac{n^2-1}{4}<\gamma<\frac{n^2}{4}$ and the boundary mass $m_{\gamma,h_{0}}(\Omega)$ is positive. \end{itemize} Then, there is a positive solution to \eqref{one} that is a minimizer for the associated variational problem, \begin{equation} \inf\left\{\frac{\int_{\Omega} |\nabla u|^2\, dx-\gamma \int_{\Omega}\frac{u^2}{|x|^2}\, dx - \int_{\Omega} h_0(x) u^2\, dx}{\left(\int_{\Omega}\frac{|u|^{\crit}}{|x|^s}\, dx\right)^{\frac{2}{\crit}}};\, u \in\huno \setminus \{ 0\} \right\}. \end{equation} \end{theorem} Our focus in this project is to investigate the extent to which the above local curvature condition at $0$ and the global (mass) condition on the domain are necessary for the existence of positive solutions. Most importantly, we give results pertaining to the persistence of the lack of positive solutions for \eqref{one} under $C^1$-perturbations of the potential $h$. We will also show that, under suitable curvature conditions, this equation has an infinite number of not necessarily positive solutions. Both existence and non-existence results will follow from a sharp blow-up analysis of solutions to perturbations of Equation \eqref{one}. More precisely, we consider \begin{equation}\label{subexpos} \hbox{$\pe\in [0,\crits-2)$ such that $\lim_{\eps\to 0}\pe=0$, } \end{equation} and a family $(\he)_{\eps>0}\in C^1(\overline{\Omega})$ such that \begin{align}\label{hyp:he} \lim_{\eps\to 0}\he=h_0 \hbox{ in }C^1(\overline{\Omega})\hbox{ and }-\Delta - \frac{\gamma}{|x|^2}-h_{0}\hbox{ is coercive in }\Omega.
\end{align} We then perform a blow-up analysis, as $\eps$ goes to zero, on a sequence of functions $(\ue)_{\eps>0}$ in $\huno$ such that $\ue$ is a solution to the Dirichlet boundary value problem: $$ \left\{ \begin{array}{llll} -\Delta \ue-\gamma \frac{\ue}{|x|^2}-\he \ue&=& \frac{|\ue|^{\crits-2-\pe}\ue}{|x|^s} \ \ &\text{in } \Omega ,\\ \ue&=&0 &\text{on }\partial \Omega. \end{array}\right.\eqno{(E_\eps)} $$ The novelty and delicacy of our analysis stem from the fact that the sequence $(\ue)_{\eps>0}$ might blow up along excited states, as opposed to a unique ground state in \cite{gr1}. Moreover, the sequence $(\ue)_{\eps>0}$ is not assumed to have a fixed sign. \subsection{Non-existence: stability of the Pohozaev obstruction.} Starting with issues of non-existence of solutions, we shall prove the following surprising stability of regimes where variational positive solutions do not exist. \begin{theorem}\label{thm:non:ter} Let $\Omega$ be a smooth bounded domain in $\rn$ $(n\geq 3)$ such that the singularity $0$ belongs to the boundary $\partial \Omega$. Assume that $0< s <2$ and $\gamma<n^2/4$. Fix $h_0\in C^1(\overline{\Omega})$ such that $-\Delta-\gamma|x|^{-2}-h_0$ is coercive, and assume that one of the following conditions holds: \begin{itemize} \item $\gamma\leq \frac{n^2-1}{4}$ and the mean curvature at $0$ is non-zero; \item $\gamma> \frac{n^2-1}{4}$ and the boundary mass $m_{\gamma,h_0}(\Omega)$ is non-zero. \end{itemize} If there is no positive variational solution to \eqref{one} with $h=h_0$, then for all $\Lambda>0$, there exists $\epsilon:=\epsilon(\Lambda,h_0)>0$ such that for any $h\in C^1(\overline{\Omega})$ with $\Vert h-h_0\Vert_{C^1(\Omega)}<\epsilon,$ there is no positive solution to \eqref{one} such that $\Vert \nabla u\Vert_2\leq\Lambda$.
\end{theorem} The above result is surprising for the following reason: If $\Omega$ is star-shaped with respect to $0$, then the classical Pohozaev obstruction (see Section \ref{sec:app:poho}) yields that \eqref{one} has no positive variational solution whenever \begin{equation}\label{ineq:h:poho} \hbox{ $h_0(x)+\frac{1}{2}( \nabla h_0(x), x) \leq 0$ for all $x\in \Omega$.} \end{equation} We then get the following corollaries. \begin{coro}\label{thm:non} Let $\Omega$ be a smooth bounded domain in $\rn$ $(n\geq 3)$ such that $0 \in \partial \Omega$. Assume that $\Omega$ is star-shaped with respect to $0$, $0< s <2$ and $\gamma<\gamma_H(\Omega)$. If $\gamma\leq\frac{n^2-1}{4}$, we shall also assume that the mean curvature at $0$ is non-vanishing. If $h_0$ is a potential satisfying \eqref{ineq:h:poho}, then for all $\Lambda>0$, there exists $\epsilon(\Lambda,h_0)>0$ such that for all $h\in C^1(\overline{\Omega})$ satisfying $\Vert h-h_0\Vert_{C^1(\Omega)}<\epsilon(\Lambda,h_0),$ there is no positive solution to \eqref{one} such that $\Vert \nabla u\Vert_2\leq\Lambda$. \end{coro} \begin{coro}\label{thm:non:bis} Let $\Omega$ be a smooth bounded domain in $\rn$ $(n\geq 3)$, such that $0\in\partial \Omega$. We fix $0< s <2$ and $\gamma<\gamma_H(\Omega)$, the Hardy constant defined in \eqref{Hardy inequality}. Assume that $$\Omega\hbox{ is star-shaped with respect to }0.$$ When $\gamma\leq\frac{n^2-1}{4}$, we assume that the mean curvature at $0$ is positive. Then for all $\Lambda>0$, there exists $\epsilon(\Lambda)>0$ such that for all $\lambda\in [0,\epsilon(\Lambda))$, there is no positive solution to \begin{equation}\label{one:bis}\left\{ \begin{array}{cl} -\Delta u-\gamma \frac{u}{|x|^2}-\lambda u = \frac{u^{\crits-1}}{|x|^s} &\text{ in } \Omega,\\ u>0 &\hbox{ in }\Omega\\ u=0 &\hbox{ on }\bono \end{array}\right.\end{equation} with $\Vert \nabla u\Vert_2\leq\Lambda$. \end{coro} It is worth comparing these results to what happens in the nonsingular case.
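For a concrete (hypothetical, purely illustrative) instance of \eqref{ineq:h:poho}: the potential $h_0(x)=-c|x|^2$ with $c\geq 0$ satisfies the obstruction condition, since $h_0+\frac{1}{2}(\nabla h_0,x)=-2c|x|^2\leq 0$. A symbolic check in dimension $3$:

```python
import sympy as sp

# Illustrative example (not from the text): h0(x) = -c*|x|^2 with c >= 0
# satisfies the Pohozaev-type condition h0(x) + (1/2)*<grad h0(x), x> <= 0.
c = sp.symbols('c', nonnegative=True)
x = sp.symbols('x1:4')                 # coordinates (x1, x2, x3), n = 3 for illustration
h0 = -c*sum(v**2 for v in x)
poho = h0 + sp.Rational(1, 2)*sum(sp.diff(h0, v)*v for v in x)
# poho reduces to -2*c*|x|^2, which is <= 0 at every point
assert sp.simplify(poho + 2*c*sum(v**2 for v in x)) == 0
```

Any potential of this form therefore falls under Corollary \ref{thm:non} when the remaining geometric hypotheses hold.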
Indeed, in contrast to the singular case, a celebrated result of Brezis-Nirenberg \cite{bn} shows that, for $\gamma=s=0$, a variational solution to \eqref{one:bis} always exists whenever $n\geq 4$ and $0<\lambda <\lambda_1(\Omega)$, with the geometry of the domain playing no role whatsoever. On the other hand, Druet-Laurain \cite{dl} showed that the geometry plays a role in dimension $n=3$, still for $\gamma=s=0$, by proving that when $\Omega$ is star-shaped, there is no solution to \eqref{one:bis} for all small values of $\lambda>0$ (with no a priori bound on $\Vert \nabla u\Vert_2$). Another point of view is that for $n=3$, the nonexistence of solutions persists under small perturbations, but it does not for $n\geq 4$: the Pohozaev obstruction is stable only for $n=3$ in the nonsingular case. \noindent This is in stark contrast with the situation here, i.e., when $0\in\partial\Omega$ and $s>0$. In this case, for both the existence and non-existence results, the geometry plays a role in all dimensions: it is either the local geometry at $0$ (i.e., depending on whether the mean curvature at $0$ is vanishing or not) in high dimensions, or the global geometry of the domain (i.e., depending on whether the mass is positive or the domain is star-shaped) in low dimensions. Corollaries \ref{thm:non} and \ref{thm:non:bis} show that the Pohozaev obstruction is stable in all dimensions in the singular case. \noindent Let us discuss some extensions related to this presence or absence of a low/high dimension phenomenon. \begin{itemize} \item Our stability result still holds under an additional smooth perturbation of the domain $\Omega$, as was done by Druet-Hebey-Laurain \cite{dhl} when $n=3$, $\gamma=s=0$. \item In the forthcoming paper \cite{GMR2}, we tackle the case of the interior singularity $0\in \Omega$, where the results are much more in the spirit of Brezis-Nirenberg and Druet-Laurain concerning the dichotomy between low and high dimensions.
\item One of the main features of the stability result of Druet-Laurain \cite{dl} is the absence of any a priori control on $\Vert \nabla u\Vert_2$. In the interior case $0\in\Omega$, we expect to also remove the a priori bound in the singular case $s>0$. In the boundary case $0\in\partial\Omega$, bypassing the a priori bound by $\Lambda$ is more delicate and will require extra care. These issues are projects in progress. \end{itemize} \noindent The proof of Theorem \ref{thm:non:ter} (and Corollaries \ref{thm:non} and \ref{thm:non:bis}) relies on the blow-up analysis. Namely, arguing by contradiction, we assume the existence of solutions $(\ue)_\eps$ to $(E_\eps)$ with $p_\eps\equiv 0$ and $(h_\eps)_\eps\to h_0$ in $C^1$, with a control on the Dirichlet energy. Due to the ``invariance'' under the conformal transformation \eqref{conf:trans}, the $\ue$'s might concentrate on some peaks at $0$. The formation of these peaks is described via blow-up analysis in Proposition \ref{prop:fund:est}. Then Proposition \ref{prop:rate:sc:2} applies, which yields vanishing of the mean curvature or the mass, depending on the dimension, contradicting the hypothesis of Theorem \ref{thm:non:ter}. Concerning Corollaries \ref{thm:non} and \ref{thm:non:bis}, the hypotheses imply that the mass is negative when defined. \subsection{Multiplicity of sign-changing solutions} As to the question of multiplicity, we shall prove the following result, which uses the fact that in the subcritical case, i.e., when $p_\epsilon>0$, there is an infinite number of higher energy solutions for such $\epsilon$. Again, the core of the proof is a sharp blow-up analysis of such solutions as $p_\epsilon\to 0$. \begin{theorem}[{\textit{\rm The general case}}]\label{th:cpct:sc} Let $\Omega$ be a smooth bounded domain in $\rn$, $n\geq 3$, such that $0\in \bdry$ and assume that $0< s < 2$.
Let $h_{0} \in C^1(\overline{\Omega})$ and $(\he)_{\eps>0}\in C^1(\overline{\Omega})$ be such that \eqref{hyp:he} holds, and let $(\pe)_{\eps>0}$ be such that \eqref{subexpos} holds. Consider a sequence of functions $(\ue)_{\eps>0}$ that is uniformly bounded in $\huno$ and such that for each $\eps >0$, $u_\epsilon$ satisfies Equation $(E_\eps)$. Then, \begin{enumerate} \item If $\gamma<\frac{n^2}{4}-1$ and the principal curvatures of $\partial\Omega$ at $0$ are non-positive but not all of them vanish, then the sequence $(\ue)_{\eps>0}$ is pre-compact in $\huno$. \item In particular, Equation \eqref{one} has an infinite number of (possibly sign-changing) solutions in $\huno$. \end{enumerate} \end{theorem} The above result was established by Ghoussoub-Robert \cite{gr2} in the case when $\gamma=0$. The main challenge here is to prove the compactness of the subcritical solutions at high energy levels, as the nonlinearities approach the critical exponent. The multiplicity result then follows from standard min-max methods. The proof relies heavily on pointwise blow-up analysis techniques in the spirit of Druet-Hebey-Robert \cite{dhr} and Druet \cite{druet.jdg}, though our situation adds considerable difficulties to carrying out the program. \subsection{Compactness Theorems and blow-up analysis} As mentioned above, the central tool is an analysis of the formation of peaks on families $(\ue)_\eps$ of solutions to equations like \eqref{one} when blow-up occurs. This long analysis yields Propositions \ref{prop:rate:sc} and \ref{prop:rate:sc:2}, which describe the blow-up rate. When blow-up does not occur, there is compactness. The following theorems are immediate consequences of these propositions. We note that the restrictions on both $\gamma$ and the curvature at $0$ are more stringent than for the existence of a ground state solution in Theorem \ref{gro}.
The stronger assumptions turn out to be due to the potentially sign-changing approximate solutions --actually solutions of subcritical problems-- and not to the fact that they are not necessarily minimizing. Indeed, the following theorem does not assume any smallness of the energy bound as long as the approximate solutions are positive. It therefore yields another proof of Theorem \ref{gro}, which does not rely on the existence of a minimizing sequence below the energy level of a single bubble. \begin{theorem}[{\textit{\rm The non-changing sign case}}]\label{th:cpct:sc:3} Assume, in addition to the hypotheses of Theorem \ref{th:cpct:sc}, that the solutions $(\ue)_{\eps>0}$ satisfy for all $\eps>0$, \begin{equation} \hbox{$\ue >0$ \quad on $\Omega$.} \end{equation} Then, the sequence $(\ue)_{\eps>0}$ is pre-compact in $\huno$, provided one of the following conditions is satisfied: \begin{itemize} \item $\gamma\leq \frac{n^2-1}{4}$ and the mean curvature of $\partial\Omega$ at $0$ is negative. \item $\frac{n^2-1}{4}<\gamma<\frac{n^2}{4}$ and the boundary mass $m_{\gamma,h_{0}}(\Omega)$ is positive. \end{itemize} \end{theorem} Our method also shows that if the --possibly sign-changing-- sequence is weakly null, then the compactness result in Theorem \ref{th:cpct:sc} will still hold for $\gamma$ up to $\frac{n^2}{4}-\frac{1}{4}$: \begin{theorem}[{\textit{\rm The case of a weak null limit}}]\label{th:cpct:sc:2} Assume, in addition to the hypotheses of Theorem \ref{th:cpct:sc}, that the solutions $(\ue)_{\eps>0}$ satisfy \begin{equation}\lim_{\eps\to 0}\Vert \ue\Vert_2=0. \end{equation} If $\gamma<\frac{n^2-1}{4}$ and the principal curvatures of $\partial\Omega$ at $0$ are non-positive but not all of them vanishing, then the sequence $(\ue)_{\eps>0}$ converges strongly to $0$ in $\huno$. \end{theorem} \subsection{Structure of the manuscript} This paper is organized as follows.
Section \ref{sec:setup} consists of preliminary material introducing the sequence of functions that will be thoroughly analyzed in Sections \ref{sec:blowuplemma:1} to \ref{pf blow-up rates} in the case where they ``blow up''. Section \ref{sec:proof:th} contains the proof of the multiplicity result, and Section \ref{sec:nonex} contains the applications to non-existence regimes and their stability under perturbations. We then have five relevant appendices. The first (Appendix A, Section \ref{sec:app:poho}) introduces the Pohozaev identity in our setting. The second (Appendix B, Section \ref{sec:app:lemma}) contains a technical lemma on the continuity of the first eigenvalue $\lambda_1(\Delta +V)$ with respect to variations of the potential $V$. Appendix C (Section \ref{sec:app:regul}) recalls results established in \cite{gr4} about the regularity and behavior at $0$ of solutions of equations involving the Hardy-Schr\"odinger operator on bounded domains having $0$ on their boundary. In Appendix D (Section \ref{sec:app:c}), we construct the Green functions associated to the operators $-\Delta -\frac{\gamma}{|x|^2} -h$ on such domains, and exhibit some of their properties needed throughout the paper. The last Appendix E (Section \ref{sec:G:rnm}) does the same but for the Hardy-Schr\"odinger operator $-\Delta -\frac{\gamma}{|x|^2}$ on $\rnm$. \section{\, Setting up the blow-up }\label{sec:setup} Throughout this paper, $\Omega$ will always be a smooth bounded domain of $\rn$, $n\geq 3$, such that $0\in \bdry$. We will always assume that $\gamma <\frac{n^2}{4}$ and $s\in (0,2)$. We set $\crits:=\frac{2(n-s)}{n-2}$. When $\gamma < \gamma_{H}(\Omega)$, the following Hardy-Sobolev inequality holds on $\Omega$: there exists $C>0$ such that \begin{equation}\label{HS-ineq} C\left(\int_\Omega\frac{|u|^{\crits}}{|x|^s}\,dx\right)^{{2}/{\crits}}\leq \int_\Omega |\nabla u|^2\,dx-\gamma \int_\Omega\frac{u^2}{|x|^2}\,dx\hbox{ for all }u\in \huno.
\end{equation} For each $\eps>0$, we consider $\pe\in [0,\crits-2)$ such that \begin{align}\label{lim:pe} \lim_{\eps\to 0}\pe=0. \end{align} Let $h_{0}\in C^1(\overline{\Omega})$ and consider a family $(\he)_{\eps>0}\in C^1(\overline{\Omega})$ such that (\ref{hyp:he}) holds. Consider a sequence of functions $(\ue)_{\eps>0}$ in $\huno$ such that for all $\eps >0$ the function $\ue$ is a solution to the Dirichlet boundary value problem: $$ \left\{ \begin{array}{llll} -\Delta \ue-\gamma \frac{\ue}{|x|^2}-\he \ue&=& \frac{|\ue|^{\crits-2-\pe}\ue}{|x|^s} \ \ &\text{in } \Omega ,\\ \ue&=&0 &\text{on }\partial \Omega. \end{array}\right.\eqno{(E_\eps)} $$ By the regularity result Theorem \ref{th:hopf} in Appendix C, we have $\ue \in C^{2}(\overline{\Omega}\setminus\{0\})$ and there exists $K_\eps\in\rr$ such that $ \lim_{x\to 0}~\frac{ |x|^{\bm} \ue (x)}{ d(x, \bdry) } =K_{\epsilon}$. In addition, we assume that the sequence $(\ue)_{\eps>0}$ is bounded in $\huno$ and we let $\Lambda>0$ be such that \begin{align}\label{bnd:ue} \int \limits_{\Omega} \frac{|\ue|^{\crits-\pe}}{|x|^{s}} dx \leq \Lambda \qquad \hbox{ for all } \eps>0. \end{align} It then follows from the weak compactness of the unit ball of $\huno$ that there exists $u_0\in\huno$ such that as $\eps\to 0$ \bequa\label{weak:lim:ue} \ue\rightharpoonup u_0 \qquad \hbox{ weakly in } \huno. \eequa Note that $u_0$ is a solution to the Dirichlet boundary value problem: $$ \left\{ \begin{array}{llll} -\Delta u-\gamma \frac{u}{|x|^2}-h_{0} u&=& \frac{|u|^{\crits-2}u}{|x|^s} \ \ & \hbox{ in } \Ono ,\\ u&=&0 & \hbox{ on } \bono. \end{array}\right. $$ From the regularity Theorem \ref{th:hopf} we have $u_{0} \in C^{2}(\overline{\Omega}\setminus\{0\})$ and $ \ds \lim_{x\to 0}\frac{|x|^{\bm} u_{0} (x)}{d\left( x, \bdry\right)}=K_{0} \in \R$. It then follows that $\displaystyle \sup \limits_{\Omega} \frac{|x|^{\bm}|u_{0}(x)|}{d(x,\bdry)} $ is finite, and hence so is $\Vert |x|^{\bm-1} u_{0} (x) \Vert_{L^{\infty}(\Omega)}$.
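For context (assuming, as the notation suggests, that the macro $\bm$ denotes the exponent $\beta_-(\gamma)$ of \eqref{def:beta}): the exponents $\beta_\pm(\gamma)$ are the roots of the indicial equation $\beta^2-n\beta+\gamma=0$ attached to the Hardy-Schr\"odinger operator, as one sees by plugging $u(x)=x_1|x|^{-\beta}$ into \eqref{eq:U:lin}. A sympy verification in dimension $n=3$:

```python
import sympy as sp

# Plug u(x) = x1*|x|^{-beta} into -Delta u - gamma*u/|x|^2 in dimension n = 3.
# A direct computation gives -Delta u - gamma*u/|x|^2 = -(beta^2 - n*beta + gamma)*u/|x|^2,
# so u is annihilated exactly when beta solves the indicial equation.
x1, x2, x3 = sp.symbols('x1 x2 x3', positive=True)
beta, gamma = sp.symbols('beta gamma')
n = 3
r2 = x1**2 + x2**2 + x3**2               # |x|^2
u = x1 * r2**(-beta/2)
lap = sum(sp.diff(u, v, 2) for v in (x1, x2, x3))
residual = sp.simplify(-lap - gamma*u/r2 + (beta**2 - n*beta + gamma)*u/r2)
assert residual == 0
# the two roots are beta_pm(gamma) = n/2 +- sqrt(n^2/4 - gamma)
roots = sp.solve(beta**2 - n*beta + gamma, beta)
```

By Vieta's formulas, $\beta_-(\gamma)+\beta_+(\gamma)=n$ and $\beta_-(\gamma)\beta_+(\gamma)=\gamma$, consistent with \eqref{def:beta}.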
\noindent We fix $\tau\in\rr$ such that \begin{align}\label{tau} \bm-1<\tau<\frac{n-2}{2}. \end{align} The following proposition shows that the sequence $(\ue)_{\eps}$ is pre-compact in $\huno$ if $\left(|x|^{\tau}\ue \right)_{\eps>0}$ is uniformly bounded in $L^\infty(\Omega)$. \begin{proposition}\label{prop:bounded} Let $\Omega$ be a smooth bounded domain of $\rn$, $n\geq 3$, such that $0\in \bdry$ and assume that $0< s < 2$, $\gamma<\frac{n^2}{4}$. We let $(\ue)$, $(\he)$ and $(\pe)$ be such that $(E_\eps)$, \eqref{hyp:he} and \eqref{lim:pe} hold. Suppose that there exists $C>0$ such that $|x|^{\tau}|\ue(x)|\leq C$ for all $x\in\Omega$ and for all $\eps >0$. Then, up to a subsequence, $\lim \limits_{\eps\to 0}\ue=u_0$ in $\huno$, where $u_0$ is as in \eqref{weak:lim:ue}. \end{proposition} \noindent{\it Proof of Proposition \ref{prop:bounded}:} The sequence $(\ue)$ is clearly uniformly bounded in $L^{\infty}(\Omega')$ for any $\Omega' \subset \subset \Omegabar\setminus\{0\}$. Then, by standard elliptic estimates and from \eqref{weak:lim:ue}, it follows that $\ue \to u_{0}$ in $C^2_{loc}(\Omegabar\setminus\{0\})$. Now, since $|x|^{\tau}|\ue(x)|\leq C$ for all $x\in\Omega$ and for all $\eps >0$, and since $\tau<\frac{n-2}{2}$, we have \begin{align}\label{lim:int:0} \lim \limits_{\delta\to 0}\lim \limits_{\eps \to 0} \int \limits_{\Omega \cap B_{\delta}(0)} \frac{|\ue|^{\crits-\pe}}{|x|^{s}} dx=0 ~\hbox{ and } ~\lim \limits_{\delta\to 0}\lim \limits_{\eps \to 0} \int \limits_{\Omega \cap B_{\delta}(0)} \frac{|\ue|^{2}}{|x|^{2}} dx=0. \end{align} Therefore \begin{align*} \lim \limits_{\eps \to 0} \int \limits_{\Omega} \frac{|\ue|^{\crits-\pe}}{|x|^{s}} dx = \int \limits_{\Omega } \frac{|u_{0}|^{\crits}}{|x|^{s}} dx ~\hbox{ and } ~ \lim \limits_{\eps \to 0} \int \limits_{\Omega} \frac{|\ue|^{2}}{|x|^{2}} dx = \int \limits_{\Omega } \frac{|u_{0}|^{2}}{|x|^{2}} dx.
\end{align*} From $(E_{\eps})$ and \eqref{weak:lim:ue} we then obtain \begin{align*} \lim \limits_{\eps \to 0} \int \limits_{\Omega} \left( |\nabla \ue|^2-\gamma \frac{\ue^2}{|x|^2}- \he \ue^2 \right)dx &= \int \limits_{\Omega} \left( |\nabla u_{0}|^2-\gamma \frac{u_{0}^2}{|x|^2}- h_{0} u_{0}^2 \right)dx \\ \hbox{ and therefore } \lim \limits_{\eps \to 0} \int \limits_{\Omega} |\nabla \ue|^2 \, dx &= \int \limits_{\Omega} |\nabla u_{0}|^2 \, dx. \end{align*} Hence $\lim \limits_{\eps\to 0}\ue=u_0$ in $\huno$. This proves Proposition \ref{prop:bounded}. \qed \noindent From now on, we assume that \bequa\label{hyp:blowup} \lim_{\eps\to 0}\Vert |x|^{\tau }\ue\Vert_{L^\infty(\Omega)}=+\infty. \eequa We shall say that blow-up occurs whenever (\ref{hyp:blowup}) holds. \section{\, Scaling Lemmas}\label{sec:blowuplemma:1} \noindent In this section we state and prove two scaling lemmas which we shall use many times in our analysis. We start by describing a parametrization around a point of the boundary $\partial \Omega$. Let $ p \in \partial \Omega $. Then there exist open sets $U, V \subset {\R^{n}}$, an open interval $I \subset \R$, an open subset $U' \subset \R^{n-1}$, a smooth diffeomorphism $\mathcal{T}: U \longrightarrow V$ and $\T_0\in C^\infty(U')$, such that, up to a rotation of coordinates if necessary, \begin{align}\label{def:T:bdry} \left\{\begin{array}{ll} \bullet & 0 \in U=I\times U' \hbox{ and }p \in V.\\ \bullet & \mathcal{T}(0)=p.\\ \bullet &\mathcal{T} \left( U \cap \{x_{1} <0 \} \right)= V \cap \Omega ~ \hbox{ and } \mathcal{T} \left( U \cap \{x_{1} =0 \} \right)= V \cap \partial \Omega. \\ \bullet & D_{0} \mathcal{T} = \mathbb{I}_{\R^{n}}.
\hbox{ Here $D_{x} \mathcal{T} $ denotes the differential of $ \mathcal{T} $ at the point $x$}\\ & \hbox{ and $ \mathbb{I}_{\R^{n}}$ is the identity map on $\R^{n}$.}\\ \bullet &{\mathcal{T}}_{*}(0)~ (e_{1})= \nu_{p}~\hbox{ where }\nu_{p}\hbox{ denotes the outer unit normal vector to }\\ & \partial \Omega\hbox{ at the point }p.\\ \bullet & \{ { \mathcal{T}}_{*} (0)(e_{2}) , \cdots, {\mathcal{T}}_{*} (0)(e_{n}) \} \hbox{ forms an orthonormal basis of }\\ &T_{p} \partial \Omega .\\ \bullet & \T(x_1,y)=p+(x_1+\T_0(y),y)\hbox{ for all }(x_1,y)\in I\times U'=U \\ \bullet & \T_0(0)=0 \hbox{ and }\nabla\T_0(0)=0. \end{array}\right. \end{align} This boundary parametrization will be useful throughout our analysis. An important remark is that \begin{align}\label{rem:T:bdry} d\left( \T(x_{1},y), \bdry \right)=(1+o(1))|x_{1}| \qquad \hbox{ for all } (x_1,y)\in I\times U'=U ~ \hbox{ close to } 0. \end{align} \begin{lemma}{\label{scaling lemma 1}} Let $\Omega$ be a smooth bounded domain of $\rn$, $n\geq 3$, such that $0\in \bdry $ and assume that $0< s < 2$, $\gamma<\frac{n^2}{4}$. Let $(\ue)$, $(\he)$ and $(\pe)$ be such that $(E_\eps)$, \eqref{hyp:he}, \eqref{lim:pe} and \eqref{bnd:ue} hold. Let $(y_\eps)_\eps \in \Omega $ and let \begin{align*} \nu_\eps^{-\frac{n-2}{2}}:= |u_{\eps}(y_{\eps})|, \quad \elle:=\nu_\eps^{1-\frac{\pe}{\crits-2}} ~ \hbox{ and } \quad \kappa_{\eps}:= \left| y_{\eps} \right|^{s/2} \elle^{\frac{2-s}{2}} \qquad \hbox{ for } \eps >0. \end{align*} Suppose $\lim \limits_{\eps \to 0} y_{\eps}=0$ and $\lim \limits_{\eps \to 0} \nu_{\eps}=0$. Assume that for any $R > 0$ there exists $C(R)>0$ such that for all $\eps >0$ \begin{align}\label{scale lem:hyp 1 on u} |u_{\eps}(x)| \leq& C(R) ~\frac{|y_{\eps}|^{\tau }}{| x|^{\tau }} | u_{\eps}(y_{\eps})| \qquad \hbox{ for all } x\in B_{ R \kappa_{\eps}}( y_{\eps}) \cap \Omega. \end{align} Then \begin{align*} |\ye|=O(\elle) \qquad \hbox{ as } \eps \to 0.
\end{align*} \end{lemma} \noindent{\it Proof of Lemma \ref{scaling lemma 1}:} We proceed by contradiction and assume that \begin{align} \label{scaling lem 1: contradiction} \lim_{\eps\to 0}\frac{|\ye|}{\elle}=+\infty. \end{align} Then it follows from the definition of $\kappa_{\eps}$ that \begin{align} \label{ppty:ke} \lim \limits_{\eps\to 0}\kappa_{\eps}=0,\; \lim \limits_{\eps\to 0}\frac{\kappa_{\eps}}{\elle}=+\infty \hbox{ and }\lim \limits_{\eps\to 0}\frac{\kappa_{\eps}}{|\ye|}=0. \end{align} \noindent{\it Case 1:} We assume that there exists $\rho>0$ such that for all $\eps>0$ $$\frac{d(\ye,\partial\Omega)}{\kappa_{\eps}}\geq 3\rho.$$ \noindent We define for all $\eps>0$ $$\ve(x):=\nu_\epsilon^{\frac{n-2}{2}} \ue(\ye+\kappa_{\eps} x) \qquad \hbox{ for } x \in B_{2\rho}(0).$$ Note that this is well defined for $\eps >0$ small enough. It follows from \eqref{scale lem:hyp 1 on u} that there exists $C(\rho)>0$ such that for all $\eps>0$ \begin{align}\label{bnd:lem:1:bis} |\ve(x)|\leq C(\rho) \frac{1}{\left| \frac{\ye}{|y_{\epsilon}|}+\frac{\kappa_{\eps}}{|y_{\epsilon}|} x \right|^{\tau }} \qquad \forall x \in B_{2\rho}(0). \end{align} Using \eqref{ppty:ke} we then get as $\eps \to 0$ \begin{align*} |\ve(x)|\leq C(\rho)\left( 1+o(1)\right) \qquad \forall x \in B_{2\rho}(0). \end{align*} From equation $(E_\eps)$ we obtain that $v_{\eps}$ satisfies $$-\Delta\ve-\frac{\kappa_{\eps}^2}{|y_{\eps}|^{2}} \frac{\gamma}{\left|\frac{\ye}{|\ye|}+\frac{\kappa_{\eps}}{|\ye|}x\right|^{2}}~\ve-\kappa_{\eps}^2~\he(\ye+\kappa_{\eps} x)~\ve=\frac{|\ve|^{\crits-2-\pe}\ve}{\left|\frac{\ye}{|\ye|}+\frac{\kappa_{\eps}}{|\ye|}x\right|^s}$$ weakly in $B_{2\rho}(0)$ for all $\eps>0$.
With the help of \eqref{ppty:ke} and standard elliptic theory it then follows that there exists $v\in C^1(B_{2\rho}(0))$ such that $$\lim \limits_{\eps\to 0} \ve = v \qquad \hbox{ in } C^1(B_{\rho}(0)).$$ In particular, \begin{align} \label{lim:v:case1:nonzero} |v(0)|=\lim \limits_{\eps\to 0} |\ve(0)|=1 \end{align} and therefore $v\not\equiv 0$. \noindent On the other hand, a change of variables and the definition of $\kappa_{\eps}$ yield \begin{align*} \int \limits_{ B_{\rho \kappa_{\eps}}(\ye)}\frac{|\ue|^{\crits-\pe}}{|x|^s}~dx &= \frac{|\ue(\ye)|^{\crits-\pe}\kappa_{\eps}^n}{|\ye|^s}\int \limits_{B_{\rho}(0)}\frac{|\ve|^{\crits-\pe}}{\left|\frac{\ye}{|\ye|}+\frac{\kappa_{\eps}}{|\ye|} x\right|^s}~ dx\\ &= \elle^{-\left(1+\frac{2(2-s)}{\crits-2-\pe} \right)}\left(\frac{|\ye|}{\elle}\right)^{s\left(\frac{n-2}{2}\right)}\int \limits_{B_{\rho}(0)}\frac{|\ve|^{\crits-\pe}}{\left|\frac{\ye}{|\ye|}+\frac{\kappa_{\eps}}{|\ye|}x\right|^s}~dx \\ &\geq \left(\frac{|\ye|}{\elle}\right)^{s \left( \frac{n-2}{2} \right)}\int \limits_{B_{\rho}(0)}\frac{|\ve|^{\crits-\pe}}{\left|\frac{\ye}{|\ye|}+\frac{\kappa_{\eps}}{|\ye|}x\right|^s}~dx. \end{align*} Using the equation $(E_\eps)$, \eqref{bnd:ue}, \eqref{scaling lem 1: contradiction}, \eqref{ppty:ke} and passing to the limit $\eps\to 0$ we get that $$\int_{B_{\rho}(0)}|v|^{\crits}\, dx=0$$ and hence $v\equiv 0$ in $B_\rho(0)$, a contradiction with \eqref{lim:v:case1:nonzero}. Thus \eqref{scaling lem 1: contradiction} cannot hold in that case. \noindent{\it Case 2:} We assume that, up to a subsequence, \bequa\label{lim:d:be:0} \lim_{\eps\to 0}\frac{d(\ye,\partial\Omega)}{\kappa_{\eps}}=0. \eequa Note that $\lim \limits_{\eps \to 0} y_{\eps}= 0 $. Consider the boundary map $\T:U\to V$ as in \eqref{def:T:bdry}, where $U,V$ are both open neighbourhoods of $0$. We let $\tue=\ue\circ\T$, which is defined in $U\cap \rnm$.
For any $i,j=1,...,n$, we let $g_{ij}=(\partial_i\T,\partial_j\T)$, where $(\cdot,\cdot)$ denotes the Euclidean scalar product on $\rn$, and we consider $g$ as a metric on $\rn$. We let $\Delta_g=div_g(\nabla)$ be the Laplace-Beltrami operator with respect to the metric $g$. As easily checked, using $(E_{\eps})$ we get that for all $\eps >0$ $$-\Delta_g\tue-\frac{\gamma }{|\T(x)|^{2}} \tue-\he\circ\T(x)\cdot \tue=\frac{|\tue|^{\crits-2-\pe}\tue}{|\T(x)|^s}$$weakly in $U\cap \rnm$. We let $\ze\in\partial\Omega$ be such that \begin{align} \label{def:ze} |\ze-\ye|=d(\ye,\partial\Omega). \end{align} We let $\tye,\tze\in U$ be such that \begin{align} \label{def:tye:tze} \T(\tye)=\ye\hbox{ and }\T(\tze)=\ze. \end{align} It follows from the properties \eqref{def:T:bdry} of the boundary map $\T$ that \begin{align} \label{ppty:tye:tze} \lim_{\eps\to 0}\tye=\lim_{\eps\to 0}\tze=0,\; (\tye)_1<0\hbox{ and }(\tze)_1=0. \end{align} \noindent We rescale and define for all $\eps >0$ $$\tve(x):=\nu_{\eps}^{\frac{n-2}{2}}\tue(\tze+\kappa_{\eps} x) \qquad \hbox{ for } x\in \frac{U-\tze}{\kappa_{\eps}}\cap \rnm. $$ With \eqref{ppty:tye:tze}, we get that $\tve$ is defined on $B_R(0)\cap\{x_1<0\}$ for all $R>0$, for $\eps$ small enough. Then for all $\eps>0$ the function $\tve$ satisfies the equation $$-\Delta_{\tge}\tve- \frac{\kappa_{\eps}^2}{|y_{\eps}|^{2}} \frac{\gamma}{\left|\frac{ \T (\tze+\kappa_{\eps} x)}{|\ye|}\right|^2}\tve-\kappa_{\eps}^2\he\circ\T(\tze+\kappa_{\eps} x)\tve=\frac{|\tve|^{\crits-2-\pe}\tve}{\left|\frac{ \T (\tze+\kappa_{\eps} x)}{|\ye|}\right|^s}$$ weakly in $B_R(0)\cap\{x_1<0\}$. In this expression, $\tge=g(\tze+\kappa_{\eps} x)$ and $\Delta_{\tge}$ is the Laplace-Beltrami operator with respect to the metric $\tge$.
With \eqref{lim:d:be:0}, \eqref{def:ze} and \eqref{def:tye:tze}, we get for all $\eps>0$ $$\T(\tze+\kappa_{\eps} x)=\ye+O_R(1)\kappa_{\eps} \qquad \hbox{ for all } x\in B_{R}(0)\cap\{x_1\leq 0\}$$ where there exists $C_R>0$ such that $|O_R(1)|\leq C_R$ for all $x\in B_{R}(0)\cap\{x_1\leq0\}$. With \eqref{ppty:ke}, we then get that $$\lim_{\eps\to 0}\frac{|\T(\tze+\kappa_{\eps} x)|}{|\ye|}=1 \qquad \hbox{ in } C^0(B_{R}(0)\cap\{x_1\leq 0\}).$$ It follows from \eqref{scale lem:hyp 1 on u} that there exists $C'(R)>0$ such that for all $\eps>0$ \begin{align}\label{bnd:lem:1:ter} |\tve(x)|\leq C'(R) \frac{1}{{\left|\frac{ \T (\tze+\kappa_{\eps} x)}{|\ye|}\right|^{\tau}}} \qquad \forall x \in B_{R}(0)\cap\{x_1\leq0 \}. \end{align} Using \eqref{ppty:ke} and the properties of the boundary map $\T$ we then get as $\eps \to 0$ $$|\tve(x)|\leq C'(R) \left( 1+o(1)\right) \qquad \forall x \in B_{R}(0)\cap\{x_1\leq0 \}.$$ With the help of \eqref{ppty:ke} and standard elliptic theory it then follows that there exists $\tv\in C^1(B_R(0)\cap\{x_1\leq 0\})$ such that $$\lim_{\eps\to 0}\tve=\tv \qquad \hbox{ in } C^{0}(B_{R/2}(0)\cap \{x_1\leq 0\}).$$ Since $\tve$ vanishes on $B_R(0)\cap\{x_1=0\}$ and \eqref{bnd:lem:1:ter} holds, it follows that \begin{align}\label{eq:ter:vanish} \tv\equiv 0\hbox{ on }B_{R/2}(0)\cap \{x_1=0\}. \end{align} Moreover, from \eqref{lim:d:be:0}, \eqref{def:ze} and \eqref{def:tye:tze} we have that $$ \left| \tve\left(\frac{\tye-\tze}{\kappa_{\eps}}\right) \right|=1\hbox{ and }\lim_{\eps\to 0}\frac{\tye-\tze}{\kappa_{\eps}}=0.$$ In particular, $|\tv(0)|=1$, a contradiction with \eqref{eq:ter:vanish}. Thus \eqref{scaling lem 1: contradiction} cannot hold in {\it Case 2} either. \noindent In both cases, we have contradicted \eqref{scaling lem 1: contradiction}. This proves that $|\ye|=O(\elle)$ as $\eps\to 0$, which proves the Lemma.
\qed \begin{lemma}{\label{scaling lemma 2}} Let $\Omega$ be a smooth bounded domain of $\rn$, $n\geq 3$, such that $0\in \bdry $ and assume that $0< s < 2$, $\gamma<\frac{n^2}{4}$. Let $(\ue)$, $(\he)$ and $(\pe)$ be such that $(E_\eps)$, \eqref{hyp:he}, \eqref{lim:pe} and \eqref{bnd:ue} hold. Let $(y_\eps)_{\eps>0}$ be a family of points in $\Omega $ and let \begin{align*} \nu_\eps^{-\frac{n-2}{2}}:= |u_{\eps}(y_{\eps})| \quad \hbox{ and } \quad \elle:=\nu_\eps^{1-\frac{\pe}{\crits-2}} \qquad \hbox{ for } \eps >0. \end{align*} Suppose $\nu_{\epsilon} \to 0$ and $|\ye|=O(\elle) $ as $\eps \to 0$. \noindent Since $0\in\partial\Omega$, we let $\T :U\to V$ be as in \eqref{def:T:bdry} with $p=0$, where $U,V$ are open neighborhoods of $0$. For $\eps >0$ we rescale and define \begin{align*} \twe(x):= \nu_{\eps}^{\frac{n-2}{2}} u_{\eps} \circ \T( \elle x) \qquad \text{ for } x \in \elle^{-1} U \cap \rnmpbar. \end{align*} Assume that for any $R >\delta> 0$ there exists $C(R,\delta)>0$ such that for all $\eps>0$ \begin{align}\label{scale lem:hyp 2 on u} |\twe(x)| \leq& C(R, \delta) \qquad \text{ for all } x\in \left( B_{R }(0) \setminus \overline{ B_{\delta}(0)} \right) \cap \rnm. \end{align} \noindent Then there exists $\tw \in H_{1,0}^2(\rnm) \cap C^1(\rnmpbar)$ such that \begin{align*} \twe \rightharpoonup & ~\tw \qquad \hbox{ weakly in } H_{1,0}^2(\rnm) \quad \text{ as } \eps \rightarrow 0 \notag\\ \twe \rightarrow & ~\tw \qquad \hbox{in } C^{1}_{loc}(\rnmpbar) \qquad \text{ as } \eps \rightarrow 0 \end{align*} and $\tw$ satisfies weakly the equation $$-\Delta \tw-\frac{\gamma}{|x|^2}\tw= \frac{|\tw|^{\crits-2} \tw}{|x|^s}\hbox{ in }\rnm.$$ \noindent Moreover if $\tw \not\equiv 0$, then $$\int \limits_{\rnm} \frac{|\tw|^{\crits}}{|x|^{s}} \geq \mu_{\gamma, s}(\rnm)^{\frac{\crits}{\crits-2}} $$ and there exists $t \in (0,1]$ such that $\lim \limits_{\eps\to0} \nu_{\eps}^{\pe}=t$, where $\mu_{\gamma,s}(\rnm)$ is as in \eqref{general}.
\end{lemma} \noindent{\it Proof of Lemma \ref{scaling lemma 2}:} The proof proceeds in four steps.\\ \noindent {\bf Step \ref{scaling lemma 2}.1:} Let $\eta \in C^{\infty}_{c}(\R^{n})$. One has that $\eta \twe \in H_{1,0}^2(\rnm)$ for $\eps >0 $ sufficiently small. We claim that there exists $\tw_{\eta} \in H_{1,0}^2(\rnm)$ such that up to a subsequence \begin{eqnarray*} \left \{ \begin{array} {lc} \eta \twe \rightharpoonup \tw_{\eta} \qquad \quad \text{ weakly in } H_{1,0}^2(\rnm) ~ \text{ as } \eps \to 0, \\ \eta \twe \rightarrow \tw_{\eta}(x) \qquad a.e. ~ \text{ in } \rnm ~ \text{ as } \eps \to 0. \end{array} \right. \end{eqnarray*} \noindent We prove the claim. For $x \in \R_{-}^{n}$ we have \begin{align*} \nabla \left( \eta \twe \right)(x)= \twe(x) \nabla \eta(x) + \nu_{\eps}^{\frac{n-2}{2}} \elle~ \eta(x) D_{(\elle x)} \T \left[\nabla \ue \left( \T (\elle x) \right) \right]. \end{align*} In this expression, $D_x\T$ is the differential of the function $\T$ at $x$. \noindent Now for any $\theta >0$, there exists $C({\theta}) >0$ such that for any $a,b >0$ \begin{align*} (a+b)^{2} \leq C({\theta}) a^{2} + (1+ \theta) b^{2}. \end{align*} With this inequality we then obtain \begin{eqnarray*} \int \limits_{\R_{-}^{n}} \left| \nabla \left( \eta \twe \right)\right|^{2}~ dx& \leq& C(\theta) \int \limits_{\R_{-}^{n}} | \nabla \eta |^{2} \twe^{2} ~ dx \\ &&+ (1 + \theta) \nu_{\eps}^{n-2} \elle^{2} \int \limits_{\R_{-}^{n}} \eta^{2} \left| D_{(\elle x)} \T \left[\nabla \ue \left( \T (\elle x) \right) \right] \right|^{2} ~ dx. \end{eqnarray*} Since $D_{0} \T = \mathbb{I}_{\R^{n}} $ we have as $\eps \to0$ \begin{align*} \int \limits_{\R_{-}^{n}} \left| \nabla \left( \eta \twe \right)\right|^{2}~ dx & \leq C(\theta) \int \limits_{\R_{-}^{n}} | \nabla \eta |^{2} \twe^{2} ~ dx \\ & + (1 + \theta) \left( 1 + O(\elle )\right) \nu_{\eps}^{n-2} \elle^{2} \int \limits_{\R_{-}^{n}} \eta^{2} \left| \nabla \ue\left( \T (\elle x) \right) \right|^{2} (1+ o(1)) ~dx. \end{align*}
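For completeness, we note that one may take the explicit constant $C(\theta)=1+\theta^{-1}$ in the elementary inequality used above: by Young's inequality, for any $a,b>0$ and $\theta>0$,
\begin{align*}
2ab \leq \theta^{-1} a^{2} + \theta b^{2},
\qquad \hbox{ so that } \qquad
(a+b)^{2}=a^{2}+2ab+b^{2} \leq \left(1+\theta^{-1}\right) a^{2} + \left(1+ \theta\right) b^{2}.
\end{align*}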
With the H\"{o}lder inequality and a change of variables, the previous estimate becomes \begin{align}{\label{b'dd on the integral 1}} \int \limits_{\R_{-}^{n}} \left| \nabla \left( \eta \twe \right)\right|^{2}~ dx &\leq~ C(\theta) \left\| \nabla \eta \right\|^{2}_{L^{n}} \left( \frac{\nu_{\eps}}{\elle} \right)^{n-2} \left( ~\int \limits_{\Omega} |\ue|^{\crit} ~ dx \right)^{\frac{n-2}{n}} \notag \\ & + (1+ \theta) \ \left( \frac{\nu_{\eps}}{\elle} \right)^{n-2} \int \limits_{\Omega} \left| \nabla u_{\epsilon} \right|^{2} ~ dx. \end{align} Since $\left\| u_{\epsilon} \right\|_{\huno}= O(1)$, we obtain for $\eps >0$ small enough \begin{align*} \left\| \eta \twe \right\|_{H_{1,0}^2(\rnm)} \leq C_{\eta}, \end{align*} where $C_{\eta}$ is a constant depending on the function $\eta$. The claim then follows from the reflexivity of $H_{1,0}^2(\rnm)$. \noindent {\bf Step \ref{scaling lemma 2}.2:} Let ${\eta}_{1} \in C_{c}^{\infty}(\R^{n})$, $0 \leq {\eta}_{1} \leq 1$, be a smooth cut-off function such that \begin{eqnarray} {\eta} _{1} = \left \{ \begin{array} {lc} 1 \quad \text{for } \ \ x \in B_1(0)\\ 0 \quad \text{for } \ \ x \in {\R}^n \backslash B_2(0 ) \end{array} \right. \end{eqnarray} For any $R>0$ we let $\eta_{R}(x) = \eta_{1}(x/R)$. Then with a diagonal argument we can assume that up to a subsequence for any $R >0$ there exists $\tw_{R} \in H_{1,0}^2(\rnm)$ such that \begin{eqnarray*} \left \{ \begin{array} {lc} \eta_{R} \twe \rightharpoonup \tw_{R} \qquad \qquad \text{ weakly in } H_{1,0}^2(\rnm) ~ \hbox{ as } \eps \to 0 \\ \eta_{R} \twe(x) \rightarrow \tw_{R}(x) \qquad a.e. ~ x~\text{ in } \rnm ~ \text{ as } \eps \to 0 \end{array} \right.
\end{eqnarray*} Since $\displaystyle{\left\| \nabla \eta_{R} \right\|^{2}_{n} =\left\| \nabla \eta_{1} \right\|^{2}_{n} }$ for all $R >0$, letting $\eps \to 0$ in \eqref{b'dd on the integral 1} we obtain that \begin{align*} \int_{\R^{n}_{-}} \left|\nabla \tw_{R} \right|^{2} dx \leq C \qquad \text{for all } R>0, \end{align*} where $C$ is a constant independent of $R$. So there exists $ \tw \in H_{1,0}^2(\rnm) $ such that \begin{eqnarray*} \left \{ \begin{array} {lc} \tw_{R} \rightharpoonup \tw \qquad \qquad \text{ weakly in } D^{1,2} (\R^{n}) ~ \text{ as } R \to +\infty \\ \tw_{R}(x) \rightarrow \tw(x)\qquad a.e. ~ x~\text{ in } \rnm ~ \text{ as } R \to +\infty \end{array} \right. \end{eqnarray*} \noindent {\bf Step \ref{scaling lemma 2}.3:} We claim that $ \tw \in C^{1}( \rnmpbar )$ and that it satisfies weakly the equation $$ \left\{ \begin{array}{llll} -\Delta \tw-\frac{\gamma}{|x|^2} \tw&=& \frac{|\tw|^{\crits-2} \tw}{|x|^s} \ \ & \hbox{ in } \R^{n}_{-}\\ \tw&=&0 & \hbox{ on } \partial \rnm \setminus \{ 0\}. \end{array}\right. $$ \noindent We prove the claim. For any $i,j=1,...,n$, we let $(\tge)_{ij}=(\partial_i\T(\elle x),\partial_j\T(\elle x))$, where $(\cdot,\cdot)$ denotes the Euclidean scalar product on $\rn$. We consider $\tge$ as a metric on $\rn$. We let $\Delta_{\tge}=div_{\tge}(\nabla)$ be the Laplace-Beltrami operator with respect to the metric $\tge$. From $(E_{\eps})$ it follows that for any $\eps >0$ and $R>0$, $ \eta_{R} \twe $ satisfies weakly the equation \begin{align}{\label{blowup eqn 1}} -\Delta_{\tge} \left(\eta_{R} \twe \right)- \frac{\gamma}{\left| \frac{\T (\elle x)}{\elle}\right|^{2}}\eta_{R} \twe -\elle^2~\he \circ \T(\elle x) \eta_{R} \twe =\frac{|\left(\eta_{R} \twe \right)|^{\crits-2-\pe} \left(\eta_{R} \twe \right)}{\left| \frac{\T (\elle x)}{\elle}\right|^{s}} \end{align} and note that $ \eta_{R} \twe \equiv 0$ on $ \left( B_{R}(0) \setminus \{ 0\} \right) \cap \partial \rnm$.
From \eqref{def:T:bdry}, \eqref{scale lem:hyp 2 on u} and using the standard elliptic estimates it follows that $\tw_{R} \in C^{1} \left( \left( B_{R}(0) \setminus \{ 0\} \right) \cap \overline{\rnm} \right) $ and that up to a subsequence \begin{align*} \lim \limits_{\epsilon \to 0} \eta_{R} \twe = \tw_{R} \qquad \hbox{ in } C_{loc}^{1} \left( \left( B_{R/2}(0) \setminus \{ 0\} \right) \cap \overline{\rnm} \right). \end{align*} Letting $\eps \to 0$ in \eqref{blowup eqn 1} gives that $ \tw_{R} $ satisfies weakly the equation $$-\Delta \tw_{R} - \frac{\gamma}{|x|^{2}} \tw_{R} =\frac{|\tw_{R}|^{\crits-2} \tw_{R} }{|x|^s}.$$ Again we have that $|\tw_{R}(x)| \leq C(R, \delta)$ for all $x \in \overline{B_{R/2}(0)} \setminus \overline{B_{2 \delta}(0)}$, and then again from standard elliptic estimates it follows that $ \tw \in C^{1}( \rnmpbar)$ and $\lim \limits_{R\to + \infty} \tilde{w}_{R}= \tilde{w}$ in $C^{1}_{loc}(\rnmpbar )$, up to a subsequence. Letting $R \to + \infty$ we obtain that $ \tw $ satisfies weakly the equation $$ \left\{ \begin{array}{llll} -\Delta \tw-\frac{\gamma}{|x|^2} \tw&=& \frac{|\tw|^{\crits-2} \tw}{|x|^s} \ \ & \hbox{ in } \R^{n}_{-}\\ \tw&=&0 & \hbox{ on } \partial \rnm \setminus \{ 0\}. \end{array}\right. $$ This proves our claim. \noindent {\bf Step \ref{scaling lemma 2}.4:} Coming back to estimate \eqref{b'dd on the integral 1}, we have for $R>0$ \begin{align}{\label{b'dd on the integral 1b}} \int \limits_{\R_{-}^{n}} \left| \nabla \left( \eta_{R} \twe \right)\right|^{2}~ dx &\leq~ C(\theta) \left( ~\int \limits_{ \{ x \in \rnm: R <|x|<2R \}} (\eta_{2R} \twe)^{2^{*}} ~ dx \right)^{\frac{n-2}{n}} \notag \\ & + (1+ \theta) \ \left( \frac{\nu_{\eps}}{\elle} \right)^{n-2} \int \limits_{\Omega} \left| \nabla u_{\epsilon} \right|^{2} ~ dx.
\end{align} Since the sequence $(\ue)_{\eps}$ is bounded in $\huno$, letting $\eps \to 0$ and then $R \to + \infty$ we obtain for some constant $C$ \begin{align*} \int \limits_{\rnm} \left| \nabla \tw \right|^{2}~ dx \leq C\left( \lim \limits_{\epsilon \to 0} \left( \frac{\nu_{\eps}}{\elle} \right) \right)^{n-2} . \end{align*} \noindent Now assume that $\tw \not\equiv 0$. Since $\tw$ weakly satisfies the equation $$ \left\{ \begin{array}{llll} -\Delta \tw-\frac{\gamma}{|x|^2} \tw&=& \frac{|\tw|^{\crits-2} \tw}{|x|^s} \ \ & \hbox{ in } \R^{n}_{-}\\ \tw&=&0 & \hbox{ on } \partial \rnmp , \end{array}\right. $$ the definition of $\mu_{\gamma, s}(\rnm)$ then yields $$\int \limits_{\rnm} \frac{|\tw|^{\crits}}{|x|^{s}} \geq \mu_{\gamma, s}(\rnm)^{\frac{\crits}{\crits-2}}. $$ Hence $\displaystyle \lim \limits_{\epsilon \to 0} \left( \frac{\nu_{\eps}}{\elle} \right) >0$, which implies that \begin{align*} t:= \lim\limits_{\epsilon \to 0} \nu_{\eps} ^{\pe} >0. \end{align*} Since $\lim \limits_{\eps \to 0} \nu_{\eps}=0 $, we have that $ 0 <t \leq 1.$ This completes the proof of the lemma. \qed \section{\, Construction and exhaustion of the blow-up scales}\label{sec:exh} \noindent In this section we prove the following proposition in the spirit of Druet-Hebey-Robert \cite{dhr}: \begin{proposition}\label{prop:exhaust} Let $\Omega$ be a smooth bounded domain of $\rn$, $n\geq 3$, such that $0\in \bdry$ and assume that $0< s < 2$, $\gamma<\frac{n^2}{4}$. Let $(\ue)$, $(\he)$ and $(\pe)$ be such that $(E_\eps)$, \eqref{hyp:he}, \eqref{lim:pe} and \eqref{bnd:ue} hold. Assume that blow-up occurs, that is, $$\lim_{\eps\to 0}\Vert |x|^{\tau} \ue\Vert_{L^\infty(\Omega)}=+\infty ~\hbox{ where } ~ \bm-1<\tau<\frac{n-2}{2} .$$ \noindent Then there exist $N\in\nn^\star$ and $N$ families of scales $(\mei)_{\eps>0}$, $1\leq i\leq N$, such that: \par \begin{enumerate} \item[{\bf(A1)}] $\lim \limits_{\eps\to 0}\ue=u_0$ in $C^2_{loc}(\overline{\Omega}\setminus\{0\})$ where $u_0$ is as in \eqref{weak:lim:ue}.
\\ \item[{\bf(A2)}] $0< \meun< ...<\meN$ for all $\eps>0$. \\ \item[{\bf(A3)}] $\lim\limits_{\eps\to 0}\meN=0\hbox{ and } \lim\limits_{\eps\to 0}\frac{\mu_{i+1,\eps}}{\mei}=+\infty\hbox{ for all } 1 \leq i \leq N-1.$ \\ \item[{\bf(A4)}] For any $1 \leq i \leq N$ and for $\eps >0$ we rescale and define $$ \tuei(x):= \mei^{\frac{n-2}{2}}\ue( \T (\kei x)) \qquad \hbox{ for } x \in k_{i,\eps}^{-1} U \cap \rnmpbar,$$ where $\kei=\mei^{1-\frac{\pe}{\crits-2}}$. Then there exists $\tui \in H_{1,0}^2(\rnm) \cap C^1(\rnmpbar)$, $\tui \not\equiv 0$, such that $\tui $ weakly solves the equation $$\left\{ \begin{array}{llll} -\Delta \tui-\frac{\gamma}{|x|^2}\tui &=& \frac{|\tui|^{\crits-2} \tui}{|x|^s} &\hbox{ in } \rnm \\ \tui&=&0 & \hbox{ on } \partial \rnmp , \end{array}\right.$$ and \begin{align*} \tuei \longrightarrow &~ \tui \qquad \hbox{in } C^{1}_{loc}(\rnmpbar) \qquad \hbox{ as } \eps \to0, \notag\\ \tuei \rightharpoonup &~ \tui \qquad \hbox{ weakly in } H_{1,0}^2(\rnm) \quad \hbox{ as } \eps \to0.
\end{align*} \item[{\bf(A5)}] There exists $C>0$ such that $$|x|^{\frac{n-2}{2}}|\ue(x)|^{1-\frac{\pe}{\crits-2}}\leq C\quad \hbox{for all $\eps>0$ and all $x\in\Omega$.}$$ \item[{\bf(A6)}] $\lim \limits_{R\to +\infty}\lim \limits_{\eps\to 0}~\sup \limits_{ \Omega \setminus B_{R k_{N,\eps}}(0) } |x|^{\frac{n-2}{2}}|\ue(x)-u_0(x)|^{1-\frac{\pe}{\crits-2}}=0.$ \\ \item[{\bf(A7)}] $\lim \limits_{\delta \to 0} \lim \limits_{\eps\to 0}~\sup \limits_{ B_{\delta k_{1,\eps}}(0) \cap \Omega } |x|^{\frac{n-2}{2}}\left|\ue(x)-\mu_{1,\eps}^{-\frac{n-2}{2}} \tu_{1}\left( \frac{\T^{-1}(x)}{k_{1,\eps} } \right)\right|^{1-\frac{\pe}{\crits-2}}=0.$\\ \item[{\bf(A8)}] For any $\delta>0$ and any $1 \leq i \leq N-1$, we have $$\lim \limits_{R\to +\infty}\lim \limits_{\eps\to 0}\sup \limits_{\delta k_{i+1, \eps}\geq |x|\geq R \kei}|x|^{\frac{n-2}{2}}\left|\ue(x)-\mu_{i+1,\eps}^{-\frac{n-2}{2}} \tu_{i+1}\left( \frac{\T^{-1}(x)}{k_{i+1,\eps} } \right)\right|^{1-\frac{\pe}{\crits-2}}=0.$$\\ \item[{\bf(A9)}] For any $i\in\{1,...,N\}$, there exists $t_i\in (0,1]$ such that $\lim_{\eps\to 0}\mei^{\pe}=t_i.$ \end{enumerate} \end{proposition} \noindent The proof of this proposition is inspired by \cite{dhr} and proceeds in five steps. \noindent Since $s>0$, the subcriticality $\crits<\crit$ of equations $(E_{\eps})$ along with \eqref{weak:lim:ue} yields that $\ue \to u_{0}$ in $C^2_{loc}(\Omegabar\setminus\{0\})$. So the only blow-up point is the origin. \noindent{\bf Step \ref{sec:exh}.1:} The construction of the $\mei$'s proceeds by induction. This step is the initiation. \noindent By the regularity Theorem \ref{th:hopf} and the definition of $\tau$ in \eqref{tau} it follows that for any $\eps >0$ there exists $\xeun \in \Omegabar\setminus\{0\}$ such that \begin{align}\label{def: xe} \sup \limits_{ x \in \Ono} |x|^{\tau} |\ue (x)|= |\xeun|^{\tau} |\ue (\xeun)|. 
\end{align} \noindent We define $\meun$ and $\keun>0$ as follows \begin{align}\label{def:me:ke} \meun^{-\frac{n-2}{2}}:=|\ue(\xeun)| ~ \hbox{ and } ~\keun:=\meun^{1-\frac{\pe}{\crits-2}}. \end{align} Since blow-up occurs, that is, \eqref{hyp:blowup} holds, and since $\ue \to u_{0}$ in $C^2_{loc}(\Omegabar\setminus\{0\})$, we have that $$\lim \limits_{\eps \to 0} \xeun=0 \in \bdry \qquad \hbox{ and }\qquad \lim \limits_{\eps \to 0} \meun =0.$$ It follows that $\ue$ satisfies the hypothesis \eqref{scale lem:hyp 1 on u} of Lemma \ref{scaling lemma 1} with $\ye=\xeun$, $\nu_{\eps}=\meun$. Therefore $$|\xeun| = O\left(\keun\right)~ \hbox{ as } \eps \to 0. $$ \noindent In fact, we claim that there exists $c_{1}>0$ such that \begin{align}\label{def:c1} \lim \limits_{\eps \to 0} \frac{|\xeun|}{\keun}= c_{1}. \end{align} We argue by contradiction and assume that $|\xeun|=o(\keun)$ as $\eps \to 0$. Let $\displaystyle \tilde{x}_{1,\eps}:= \T^{-1}(\xeun) \in \rnm$. Since $|\xeun|=o(\keun)$ as $\eps \to 0$, we also have $| \tilde{x}_{1,\eps}|=o(k_{1,\eps})$ as $\eps \to 0$. \noindent We define for $\eps >0$ $$\tilde{v}_{\eps}(x):=\meun^{\frac{n-2}{2}}\ue( \T ( |\tilde{x}_{1,\eps}| ~x)) \qquad \hbox{ for } x \in \frac{U}{|\tilde{x}_{1,\eps}|} \cap \rnmpbar. $$ Using $(E_{\eps})$ we obtain that $\tilde{v}_{\eps}$ satisfies the equation $$-\Delta\tilde{v}_{\eps}- \frac{\gamma}{\left| \frac{\T \left(|\tilde{x}_{1,\eps}| x\right)}{|\tilde{x}_{1,\eps}|}\right|^2} \tilde{v}_{\eps}-|\tilde{x}_{1,\eps}|^2\he \circ \T(|\tilde{x}_{1,\eps}|x) ~\tilde{v}_{\eps}=\left(\frac{|\tilde{x}_{1,\eps}|}{\keun}\right)^{2-s-\pe}\frac{|\tilde{v}_{\eps}|^{\crits-2-\pe}\tilde{v}_{\eps}}{\left| \frac{\T \left(|\tilde{x}_{1,\eps}| x\right)}{|\tilde{x}_{1,\eps}|}\right|^s}.$$ The definition \eqref{def: xe} yields, for $\eps >0$ small enough, $\left| x \right|^\tau |\tilde{v}_{\eps}(x)|\leq 2$ for all $x\in \rnm.
$\\ Standard elliptic theory then yields the existence of $\tilde{v}\in C^2(\rnmpbar)$ such that $\tv_{\eps} \to \tv$ in $C^2_{loc}(\rnmpbar)$ where $$\left\{ \begin{array}{llll} -\Delta \tv -\frac{\gamma}{|x|^2}\tv &=&0 &\hbox{ in } \rnm \\ \tv&=&0 & \hbox{ on } \partial \rnmp . \end{array}\right.$$ In addition, we have that $ \left|\tilde{v}_{\eps} \left(|\tilde{x}_{1,\eps} |^{-1} \tilde{x}_{1,\eps} \right) \right|=1$ and so $\tv \not \equiv 0$. Also, since $|x|^\tau |\tilde{v}(x)|\leq 2$ in $\rnmpbar$, we have the bound \begin{align}\label{sing sol:grad control} |x|^{\tau+1} |\tilde{v}(x)|\leq 2 |x_{1}| \qquad \hbox{ for all } ~x =(x_{1}, \tilde{x}) \hbox{ in } \rnm, \end{align} which implies that $$|\tv(x)| < 4 \frac{|x_{1}|}{|x|^{\bp}}+ 4 \frac{|x_{1}|}{|x|^{\bm}} \qquad \hbox{ for all } ~x =(x_{1}, \tilde{x}) \hbox{ in } \rnm.$$ Therefore $x\mapsto \tilde{V}(x):=4 \frac{|x_{1}|}{|x|^{\bp}}+ 4 \frac{|x_{1}|}{|x|^{\bm}} -\tv(x)$ is a positive solution to $-\Delta \tilde{V}-\frac{\gamma}{|x|^2} \tilde{V}=0$ in $\rnm$. Proposition \ref{prop:liouville} yields the existence of $A,B \in \R$ such that $$\tv(x) =A \frac{|x_{1}|}{|x|^{\bp}}+ B \frac{|x_{1}|}{|x|^{\bm}} \qquad \hbox{ for all } ~x \hbox{ in } \rnm.$$ But the pointwise control \eqref{sing sol:grad control} then implies $A=B=0$ by letting $|x|\to 0$ and $|x|\to \infty$. This contradicts $\tv \not \equiv 0$. This proves Claim \eqref{def:c1}. \noindent We rescale and define for all $\eps >0$ $$\tu_{1,\eps}(x):=\meun^{\frac{n-2}{2}} \ue( \T (k_{1,\eps} ~x)) \qquad \hbox{ for } x \in k_{1,\epsilon}^{-1} U \cap \rnmpbar. $$ It follows from \eqref{def: xe} and \eqref{def:c1} that $\tueun$ satisfies the hypothesis \eqref{scale lem:hyp 2 on u} of Lemma \ref{scaling lemma 2} with $\ye=x_{1,\eps}$, $\nu_{\eps}=\mu_{1,\eps}$.
Then using Lemma \ref{scaling lemma 2} we get that there exists $\tu_{1} \in H_{1,0}^2(\rnm) \cap C^1(\rnmpbar)$ weakly satisfying the equation $$\left\{ \begin{array}{llll} -\Delta \tu_{1}-\frac{\gamma}{|x|^2}\tu_{1} &=& \frac{|\tu_{1}|^{\crits-2} \tu_{1}}{|x|^s} &\hbox{ in } \rnm \\ \tu_{1}&=&0 & \hbox{ on } \partial \rnmp , \end{array}\right.$$ and \begin{align*} \tu_{1,\eps} \longrightarrow &~ \tu_{1} \qquad \hbox{in } C^{1}_{loc}(\rnmpbar) \qquad \hbox{ as } \eps \to0, \notag\\ \tu_{1,\eps} \rightharpoonup &~ \tu_{1} \qquad \hbox{ weakly in } H_{1,0}^2(\rnm) \quad \hbox{ as } \eps \to0. \end{align*}\\ It follows from the definition that $\left| \tu_{1,\eps}\left( \frac{\tilde{x}_{1,\eps}}{\keun}\right) \right|=1$. From \eqref{def:c1} we therefore have that $\tu_{1} \not \equiv 0$. Hence, again from Lemma \ref{scaling lemma 2}, we get that $$\int \limits_{\rnm} \frac{|\tu_{1} |^{\crits}}{|x|^{s}} \geq \mu_{\gamma, s}(\rnm)^{\frac{\crits}{\crits-2}}.$$ Moreover, there exists $t_{1}\in (0,1]$ such that $\lim \limits_{\eps\to 0}\meun^{\pe}=t_{1}$.
Since $\frac{|x|^{\bm}}{|x_{1}|}\tu_{1}\in C^0(\R^{n})$, we get, for $\delta>0$ and $\ye \in B_{\delta k_{1,\eps}}(0) \cap \Omega$, $$|\ye|^{\frac{n-2}{2}}\left|\mu_{1, \eps}^{-\frac{n-2}{2}}\tu_{1}\left(\frac{\T^{-1}(\ye)}{k_{1,\eps}}\right)\right|^{1-\frac{\pe}{\crits-2}}=O\left(\frac{|\ye|}{k_{1,\eps}}\right)^{\frac{n}{2}-\bm}=O\left(\delta^{\frac{n}{2}-\bm}\right),$$ and $$\lim_{\delta \to 0} \lim_{\eps\to 0}~\sup_{ B_{\delta k_{1,\eps}}(0) \cap \Omega } |x|^{\frac{n-2}{2}}\left|\ue(x)-\mu_{1,\eps}^{-\frac{n-2}{2}} \tu_{1}\left( \frac{\T^{-1}(x)}{k_{1,\eps} } \right)\right|^{1-\frac{\pe}{\crits-2}}=0.$$ \qed \noindent{\bf Step \ref{sec:exh}.2:} We claim that there exists $C>0$ such that \begin{align}\label{ineq:est:1} |x|^{\frac{n-2}{2}}|\ue(x)|^{1-\frac{\pe}{\crits-2}} \leq C \quad \hbox{for all $\eps>0$ and all $x\in\Ono$.} \end{align} We argue by contradiction and let $(\ye)_{\eps>0} \in\Ono$ be such that \begin{align} \label{hyp:step32} \sup_{x\in\Ono}|x|^{\frac{n-2}{2}}|\ue(x)|^{1-\frac{\pe}{\crits-2}}=|\ye|^{\frac{n-2}{2}}|\ue(\ye)|^{1-\frac{\pe}{\crits-2}}\to +\infty ~ \hbox{ as } \eps\to 0. \end{align} By the regularity Theorem \ref{th:hopf}, it follows that the sequence $(\ye)_{\eps>0}$ is well-defined and moreover $\lim \limits_{\eps \to 0} y_{\eps} =0$, since $\ue \to u_{0}$ in $C^2_{loc}(\Omegabar\setminus\{0\})$. For $\eps >0$ we let $$\nu_{\eps}:=|\ue(\ye)|^{-\frac{2}{n-2}},~ \elle:=\nu_{\eps}^{1-\frac{\pe}{\crits-2}} \hbox{ and } \kappa_{\eps}:= \left| y_{\epsilon} \right|^{s/2} \elle^{\frac{2-s}{2}} .$$ Then it follows from \eqref{hyp:step32} that \begin{align}\label{lim:infty:34} \lim_{\eps\to 0}\nu_{\eps}=0, ~\lim_{\eps\to 0}\frac{|\ye|}{\elle}=+\infty \hbox{ and } \lim \limits_{\eps\to 0}\frac{\kappa_{\eps}}{|\ye|}=0. \end{align} \noindent Let $R>0$ and let $x\in B_R(0)$ be such that $\ye+\kappa_{\eps} x\in\Ono$.
It follows from the definition \eqref{hyp:step32} of $\ye$ that for all $\eps>0$ $$|\ye+\kappa_{\eps} x|^{\frac{n-2}{2}}|\ue(\ye+\kappa_{\eps} x)|^{1-\frac{\pe}{\crits-2}}\leq |\ye|^{\frac{n-2}{2}}|\ue(\ye)|^{1-\frac{\pe}{\crits-2}}$$ and then, for all $\eps>0$ $$\left(\frac{|\ue(\ye+\kappa_{\eps} x)|}{|\ue(\ye)|}\right)^{1-\frac{\pe}{\crits-2}}\leq\left(\frac{1}{1-\frac{\kappa_{\eps}}{|\ye|}R}\right)^{\frac{n-2}{2}}$$ for all $x\in B_R(0)$ such that $\ye+\kappa_{\eps} x\in\Ono$. Using \eqref{lim:infty:34}, we get that there exists $C(R)>0$ such that the hypothesis \eqref{scale lem:hyp 1 on u} of Lemma \ref{scaling lemma 1} is satisfied, and therefore one has $|\ye|=O(\elle)$ as $\eps\to 0$, a contradiction with \eqref{lim:infty:34}. This proves \eqref{ineq:est:1}. \qed \noindent Let $\I \in\mathbb{N}^\star$. We consider the following assertions: \begin{enumerate} \item[{\bf(B1)}] $0< \meun< ...<\mu_{\I,\eps}.$\\ \item[{\bf(B2)}] $\lim_{\eps\to 0}\mu_{\I,\eps}=0\hbox{ and }~\lim_{\eps\to 0}\frac{\mu_{i+1,\eps}}{\mei}=+\infty\hbox{ for all } 1 \leq i \leq \I-1.$\\ \item[{\bf(B3)}] For all $ 1 \leq i \leq \I$, there exists $\tui \in H_{1,0}^2(\rnm) \cap C^1(\rnmpbar)$ such that $\tui $ weakly solves the equation $$\left\{ \begin{array}{llll} -\Delta \tu_{i}-\frac{\gamma}{|x|^2}\tu_{i} &=& \frac{|\tu_{i}|^{\crits-2} \tui}{|x|^s} &\hbox{ in } \rnm \\ \tu_{i}&=&0 & \hbox{ on } \partial \rnmp, \end{array}\right.$$ with $$\int \limits_{\rnm} \frac{|\tui|^{\crits}}{|x|^{s}} \geq \mu_{\gamma, s}(\rnm)^{\frac{\crits}{\crits-2}},$$ and \begin{align*} \tuei \longrightarrow &~ \tui \qquad \hbox{in } C^{1}_{loc}(\rnmpbar) \qquad \hbox{ as } \eps \to0, \notag\\ \tuei \rightharpoonup &~ \tui \qquad \hbox{ weakly in } H_{1,0}^2(\rnm) \quad \hbox{ as } \eps \to0, \end{align*} where for $\eps >0$, we have set $\kei=\mei^{1-\frac{\pe}{\crits-2}}$ and $$\tu_{i,\eps}(x):=\mei^{\frac{n-2}{2}} \ue( \T (k_{i,\eps} ~x)) \qquad \hbox{ for } x \in k_{i,\epsilon}^{-1} U \cap \rnmpbar.$$
\item[{\bf(B4)}] For all $1 \leq i \leq \I $, there exists $t_i\in (0,1]$ such that $\lim_{\eps\to 0}\mei^{\pe}=t_i.$ \end{enumerate} \noindent We shall then say that $({\mathcal H}_{\I})$ holds if there exist $\I$ sequences $(\mei)_{\eps>0}$, $i=1,...,\I$, such that items (B1), (B2), (B3) and (B4) hold. Note that it follows from Step \ref{sec:exh}.1 that $({\mathcal H}_{1})$ holds. Next we show the following: \noindent {\bf Step \ref{sec:exh}.3} \label{prop:HN} Let $\I\geq 1$. We assume that $({\mathcal H}_{\I})$ holds. Then, either $$\lim_{R\to +\infty}\lim_{\eps\to 0}\sup_{ \Omega \setminus B_{R k_{\I,\eps}}(0)} |x|^{\frac{n-2}{2}}|\ue(x)-u_0(x)|^{1-\frac{\pe}{\crits-2}}=0,$$ or $({\mathcal H}_{\I+1})$ holds. \noindent{\it Proof of Step \ref{sec:exh}.3:} Suppose $\lim\limits_{R\to +\infty}\lim\limits_{\eps\to 0}\sup_{\Omega \setminus B_{R k_{\I,\eps}}(0)} |x|^{\frac{n-2}{2}}|\ue(x)-u_0(x)|^{1-\frac{\pe}{\crits-2}}\neq 0.$ Then, there exists a sequence of points $(\ye)_{\eps>0}\in\Ono$ such that \bequa\label{hyp:lim:ye:HN} \lim_{\eps\to 0}\frac{|\ye|}{k_{\I,\eps}}=+\infty\hbox{ and }\lim_{\eps\to 0}|\ye|^{\frac{n-2}{2}}|\ue(\ye)-u_0(\ye)|^{1-\frac{\pe}{\crits-2}}=a>0. \eequa Since $\ue \to u_{0}$ in $C^2_{loc}(\Omegabar\setminus\{0\})$ it follows that $\lim \limits_{\eps \to 0} y_{\eps} =0$. Then by the regularity Theorem \ref{th:hopf} and since $\bm < \frac{n-2}{2}$, we get \begin{align}\label{ref:a} \lim_{\eps\to 0}|\ye|^{\frac{n-2}{2}}|\ue(\ye)|^{1-\frac{\pe}{\crits-2}}=a>0, \end{align} with $a$ as in \eqref{hyp:lim:ye:HN}. In particular, $\lim \limits_{\eps\to 0}|\ue(\ye)|=+\infty$. Let $$\mu_{\I+1, \eps}:=|\ue(\ye)|^{-\frac{2}{n-2}}\hbox{ and } ~ k_{\I+1,\eps}:=\mu_{\I+1,\eps}^{1-\frac{\pe}{\crits-2}}.$$ As a consequence we have \begin{align} \label{hyp:lim:ye:1:HN} \lim \limits_{\eps\to 0}\mu_{\I +1,\eps}=0 \qquad \hbox{ and } \qquad \lim \limits_{\eps\to 0}\frac{|\ye|}{k_{\I+1,\eps}}=a^{\frac{2}{n-2}}>0.
\end{align} \noindent We rescale and define $$\tu_{\I+1,\eps}(x):=\mu_{\I+1,\eps}^{\frac{n-2}{2}} \ue( \T (k_{\I+1,\eps} ~x)) \qquad \hbox{ for } x \in k_{\I+1,\epsilon}^{-1} U \cap \rnmpbar. $$ It follows from \eqref{ineq:est:1} that for all $\eps >0$ $$\left| \frac{ \T (k_{\I+1,\eps} ~x)}{k_{\I+1,\eps}}\right|^{\frac{n-2}{2}}|\tu_{\I+1,\eps}(x)|^{1-\frac{\pe}{\crits-2}}\leq C \qquad \hbox{ for } x \in k_{\I+1,\epsilon}^{-1}\Omega\setminus\{0\},$$ and so hypothesis \eqref{scale lem:hyp 2 on u} of Lemma \ref{scaling lemma 2} is satisfied. Using Lemma \ref{scaling lemma 2}, we then get that there exists $\tu_{\I+1}\in H_{1,0}^2(\rnm) \cap C^1(\rnmpbar)$ that satisfies weakly the equation $$-\Delta \tu_{\I+1}-\frac{\gamma}{|x|^2}\tu_{\I+1}= \frac{|\tu_{\I+1}|^{\crits-2} \tu_{\I+1}}{|x|^s}\hbox{ in }\rnm,$$ while \begin{align*} \tu_{\I+1,\eps} \rightharpoonup &~\tu_{\I+1} \qquad \hbox{ weakly in } H_{1,0}^2(\rnm) \quad \hbox{ as } \eps \to 0, \notag \\ \tu_{\I+1,\eps} \rightarrow &~\tu_{\I+1} \qquad \hbox{in } C^{1}_{loc}(\rnmpbar) \qquad \hbox{ as } \eps \to 0. \end{align*} \noindent We denote $\displaystyle \tye:=\frac{ \T^{-1}(\ye) }{k_{\I+1,\eps}} \in \rnm$. From \eqref{hyp:lim:ye:1:HN} it follows that, up to a subsequence, $\tye \to \tilde{y}_0$ with $|\tilde{y}_0| \neq 0$. Therefore $$|\tu_{\I+1}(\tilde{y}_0)|=\lim_{\eps\to 0}|\tu_{\I+1,\eps}(\tye)|=1.$$ Since $\tu_{\I+1}\equiv 0$ on $ \partial \rnmp$, we get that $\tilde{y}_0 \notin \partial \rnm$, and hence $\tu_{\I+1}\not\equiv 0$. Hence again from Lemma \ref{scaling lemma 2}, we get $$\int \limits_{\rnm} \frac{|\tu_{\I+1}|^{\crits}}{|x|^{s}} \geq \mu_{\gamma, s}(\rnm)^{\frac{\crits}{\crits-2}}$$ and there exists $t_{\I+1}\in (0,1]$ such that $\lim \limits_{\eps\to0}\mu_{\I+1,\eps}^{\pe}=t_{\I+1}$.
Moreover, it follows from \eqref{hyp:lim:ye:HN} and \eqref{hyp:lim:ye:1:HN} that $$\lim \limits_{\eps\to 0}\frac{\mu_{\I+1,\eps}}{\mu_{\I,\eps}}=+\infty \hbox{ and } \lim_{\eps\to 0}\mu_{\I+1,\eps}=0.$$ Hence the families $(\mei)_{\eps>0}$, $1\leq i \leq \I+1$, satisfy $({\mathcal H}_{\I+1})$.\qed \noindent The next step is the analogue of Step \ref{sec:exh}.3 at intermediate scales. \noindent {\bf Step \ref{sec:exh}.4} \label{prop:HN+} Let $\I\geq 1$. We assume that $({\mathcal H}_{\I})$ holds. Then, for any $1\leq i \leq \I-1$ and for any $\delta>0$, either $$\lim \limits_{R\to +\infty} \lim \limits_{\eps\to 0} \sup_{\Omega \cap B_{\delta k_{i+1,\eps}}(0)\setminus \overline{B}_{R k_{i,\eps}}(0)}|x|^{\frac{n-2}{2}}\left|\ue(x)-\mu_{i+1,\eps}^{-\frac{n-2}{2}}\tu_{i+1}\left(\frac{ \T^{-1}(x)}{k_{i+1,\eps}}\right)\right|^{1-\frac{\pe}{\crits-2}}=0$$ or $({\mathcal H}_{\I+1})$ holds. \noindent{\it Proof of Step \ref{sec:exh}.4:} We assume that there exist $i\leq \I-1$ and $\delta>0$ such that $$\lim \limits_{R\to +\infty} \lim \limits_{\eps\to 0} \sup_{\Omega \cap B_{\delta k_{i+1,\eps}}(0)\setminus \overline{B}_{R k_{i,\eps}}(0)}|x|^{\frac{n-2}{2}}\left|\ue(x)-\mu_{i+1,\eps}^{-\frac{n-2}{2}}\tu_{i+1}\left(\frac{ \T^{-1}(x)}{k_{i+1,\eps}}\right)\right|^{1-\frac{\pe}{\crits-2}}>0.$$ It then follows that there exists a sequence $(\ye)_{\eps>0}\in\Omega$ such that \beqn &&\lim_{\eps\to 0}\frac{|\ye|}{k_{i, \eps}}=+\infty,\qquad |\ye|\leq \delta k_{i+1,\eps}\hbox{ for all }\eps>0\label{hyp:lim:ye:1:bis}\\ && \lim_{\eps\to 0}|\ye|^{\frac{n-2}{2}}\left|\ue(\ye)-\mu_{i+1,\eps}^{-\frac{n-2}{2}}\tu_{i+1}\left(\frac{\T^{-1}(\ye)}{k_{i+1,\eps}}\right)\right|^{1-\frac{\pe}{\crits-2}}=a>0,\label{hyp:lim:ye:1:ter} \eeqn for some positive constant $a$.
Note that $ a< +\infty$ since $$|x|^{\frac{n-2}{2}}\left|\ue(x)-\mu_{i+1,\eps}^{-\frac{n-2}{2}}\tu_{i+1}\left(\frac{ \T^{-1}(x)}{k_{i+1,\eps}}\right) \right|^{1-\frac{\pe}{\crits-2}}$$ is uniformly bounded for all $x \in \Omega \cap B_{\delta k_{i+1,\eps}}(0)\setminus \overline{B}_{R k_{i,\eps}}(0)$. \noindent Let $\tye^{*} \in\rnm$ be such that $ \T^{-1}(\ye)= k_{i+1,\eps}~\tye^{*}$. It follows that $|\tye^{*}|\leq \delta$ for all $\eps>0$. We rewrite \eqref{hyp:lim:ye:1:ter} as $$\lim_{\eps\to 0} |\tye^{*}|^{\frac{n-2}{2}}\left|\tu_{i+1, \eps}(\tye^{*})-\tu_{i+1}(\tye^{*})\right|^{1-\frac{\pe}{\crits-2}}=a>0.$$ Then from point (B3) of $({\mathcal H}_{\I})$ it follows that $\tye^{*}\to 0$ as $\eps\to 0$. Since $\frac{|x|^{\bm}}{|x_{1}|} \tu_{i+1}\in C^0(\R^{n})$, we get as $\eps\to 0$ $$|\ye|^{\frac{n-2}{2}}\left|\mu_{i+1, \eps}^{-\frac{n-2}{2}}\tu_{i+1}\left(\frac{\T^{-1}(\ye)}{k_{i+1,\eps}}\right)\right|^{1-\frac{\pe}{\crits-2}}=O\left(\left(\frac{|\ye|}{k_{i+1,\eps}}\right)^{\frac{n}{2}-\bm}\right)=o(1).$$ Then \eqref{hyp:lim:ye:1:ter} becomes \begin{align}\label{hyp:lim:ye:bis} \lim \limits_{\eps\to0}|\ye|^{\frac{n-2}{2}}|\ue(\ye)|^{1-\frac{\pe}{\crits-2}}=a>0. \end{align} In particular, $\lim \limits_{\eps\to 0}|\ue(\ye)|=+\infty$. We let $$\nu_{\eps}:=|\ue(\ye)|^{-\frac{2}{n-2}}\hbox{ and }\elle:=\nu_{\eps}^{1-\frac{\pe}{\crits-2}}.$$ Then we have \begin{align} \label{hyp:lim:ye:1:HN+} \lim \limits_{\eps\to 0}\nu_{\eps}=0 \qquad \hbox{ and } \qquad \lim \limits_{\eps\to 0}\frac{|\ye|}{\elle}=a>0. \end{align} \noindent We rescale and define $$\tu_{\eps}(x):=\nu_{\eps}^{\frac{n-2}{2}}\ue(\T(\elle~x)) \qquad \hbox{ for } x \in \elle^{-1} U \cap \rnmpbar . $$ It follows from \eqref{ineq:est:1} that for all $\eps >0$ $$|x|^{\frac{n-2}{2}}|\tu_{\eps}(x)|^{1-\frac{\pe}{\crits-2}}\leq C \qquad \hbox{ for } x \in \elle^{-1} U \cap \rnmpbar, $$ so that hypothesis \eqref{scale lem:hyp 2 on u} of Lemma \ref{scaling lemma 2} is satisfied.
We can then use it to get that there exists $\tu \in D^{1,2}(\rnm) \cap C^{1}(\rnmpbar)$ that weakly satisfies the equation $$-\Delta \tu-\frac{\gamma}{|x|^2} \tu= \frac{|\tu|^{\crits-2} \tu}{|x|^s}\hbox{ in }\rnm,$$ while \begin{align*} \tu_{ \eps} \rightharpoonup & ~\tu \qquad \hbox{ weakly in } H_{1,0}^2(\rnm) \quad \text{ as } \eps \rightarrow 0 \notag\\ \tu_{ \eps} \rightarrow & ~\tu \qquad \hbox{in } C^{1}_{loc}(\rnmpbar) \qquad \text{ as } \eps \rightarrow 0. \end{align*} \noindent We denote $\displaystyle \tye:=\frac{ \T^{-1}(\ye) }{\elle} \in \rnm$. From \eqref{hyp:lim:ye:bis} it follows that $\lim \limits_{\eps\to0} |\tye|=: |\tilde{y}_0| >a/2> 0$. Therefore $$|\tu(\tilde{y}_0)|=\lim_{\eps\to 0}|\tu_{\eps}(\tye)|=1.$$ Since $\tu \equiv 0$ on $ \partial \rnmp$, we get $\tilde{y}_0 \notin \partial \rnm$, and hence $\tu\not\equiv 0$. Hence again from Lemma \ref{scaling lemma 2} we get $$\int \limits_{\rnm} \frac{|\tu|^{\crits}}{|x|^{s}} \geq \mu_{\gamma, s}(\rnm)^{\frac{\crits}{\crits-2}},$$ and there exists $t\in (0,1]$ such that $\lim \limits_{\eps\to0}\nu_{\eps}^{\pe}=t$. Moreover, from \eqref{hyp:lim:ye:bis}, \eqref{hyp:lim:ye:1:bis}, and since $\lim \limits_{\eps \to 0} \frac{|\ye|}{k_{i+1,\eps}}=0$, it follows that $$\lim \limits_{\eps\to 0}\frac{\nu_{\eps}}{\mu_{i,\eps}}=+\infty~\hbox{ and }~\lim \limits_{\eps\to 0}\frac{\mu_{i+1, \eps}}{\nu_{\eps}}=+\infty .$$ Hence the families $(\meun)$,..., $(\mei)$, $(\nu_{\eps})$, $(\mu_{i+1, \eps})$,..., $(\mu_{\I,\eps})$ satisfy $({\mathcal H}_{\I+1})$.\qed \noindent The last step tells us that the process of constructing the $({\mathcal H}_{\I})$'s stops after a finite number of steps. \\ \noindent{\bf Step \ref{sec:exh}.5:} Let $N_0=\max\{\I : ({\mathcal H}_\I) \hbox{ holds } \}$. Then $N_0<+\infty$ and the conclusion of Proposition \ref{prop:exhaust} holds with $N=N_0$. \noindent{\it Proof of Step \ref{sec:exh}.5:} Indeed, assume that $({\mathcal H}_{\I})$ holds.
Since $\mei=o(\mu_{i+1, \eps})$ for all $1 \leq i \leq \I-1$, we get with a change of variable and the definition of $\tuei$ that for any $R > \delta>0$ \begin{align*} \int \limits_{\Omega} \frac{|\ue|^{\crits -\pe}}{|x|^{s}} dx & \geq \sum_{i=1}^{\I} \int \limits_{ \T \left( B_{R \kei}(0)\setminus \overline{B}_{\delta\kei}(0) \cap \rnm \right)} \frac{|\ue|^{\crits-\pe}}{|x|^{s}} dx \notag \\ &\geq \sum_{i=1}^{\I} \int \limits_{ B_{R \kei}(0)\setminus \overline{B}_{\delta\kei}(0) \cap \rnm } \frac{|\tu_{i,\eps}|^{\crits-\pe}}{|x|^{s}} dv_{g_{i,\eps}}. \end{align*} Here $g_{i,\eps}$ is the metric such that $(g_{i,\eps})_{qr}=(\partial_q\T(\kei x),\partial_r\T(\kei x))$ for all $q,r\in\{1,...,n\}$. Then from \eqref{bnd:ue} we have \begin{align} ~\Lambda \geq \sum_{i=1}^{\I} \int \limits_{ B_{R \kei}(0)\setminus \overline{B}_{\delta\kei}(0) \cap \rnm } \frac{|\tu_{i,\eps}|^{\crits-\pe}}{|x|^{s}} dv_{g_{i,\eps}}. \end{align} Passing to the limit $\eps\to 0$, and then letting $\delta \to 0$ and $R \to +\infty$, we obtain, using point (B3) of $({\mathcal H}_{\I})$, that $$\Lambda \geq \I \mu_{\gamma, s}(\rnm)^{\frac{\crits}{\crits-2}},$$ from which it follows that $N_0 \leq \Lambda\, \mu_{\gamma, s}(\rnm)^{-\frac{\crits}{\crits-2}}<+\infty$. $\Box$\\ \noindent To complete the proof, we let families $(\meun)_{\eps>0}$,..., $(\mu_{N_0, \eps})_{\eps>0}$ be such that $({\mathcal H}_{N_0})$ holds. We argue by contradiction and assume that the conclusion of Proposition \ref{prop:exhaust} does not hold with $N=N_0$. Assertions (A1), (A2), (A3), (A4), (A5), (A7) and (A9) hold. Assume that (A6) or (A8) does not hold. It then follows from Steps \ref{sec:exh}.3, \ref{sec:exh}.4 and \ref{sec:exh}.5 that $({\mathcal H}_{N_0+1})$ holds, a contradiction with the definition of $N_0$. Hence the proposition is proved. \qed \section{\, Strong pointwise estimates}\label{sec:se:1} \noindent The objective of this section is to obtain pointwise controls on $\ue$ and $\nabla \ue $.
The core is the proof of the following proposition, in the spirit of Druet-Hebey-Robert \cite{dhr}: \begin{proposition}\label{prop:fund:est} Let $\Omega$ be a smooth bounded domain of $\rn$, $n\geq 3$, such that $0\in \bdry $, and assume that $0< s < 2$, $\gamma<\frac{n^2}{4}$. Let $(\ue)$, $(\he)$ and $(\pe)$ be such that $(E_\eps)$, \eqref{hyp:he}, \eqref{lim:pe} and \eqref{bnd:ue} hold. Assume that blow-up occurs, that is $$\lim_{\eps\to 0}\Vert |x|^{\tau} \ue\Vert_{L^\infty(\Omega)}=+\infty ~\hbox{ where } ~ \bm-1<\tau<\frac{n-2}{2}.$$ Consider $\mu_{1,\eps},...,\mu_{N,\eps}$ from Proposition \ref{prop:exhaust}. Then, there exists $C>0$ such that for all $\eps>0$ \begin{align}\label{eq:est:global} |\ue(x)| \leq~ C& \left(~\sum_{i=1}^N\frac{\mei^{\frac{\bp-\bm}{2}} |x| }{ \mei^{\bp-\bm}|x|^{\bm}+|x|^{\bp}}+ \frac{ \Vert |x|^{\bm-1}u_0 \Vert_{L^{\infty}(\Omega)} }{|x|^{\bm}} |x| \right) \end{align} for all $x\in \Omega$. \end{proposition} \noindent The proof of this estimate, inspired by the methodology of \cite{dhr}, proceeds in seven steps. \noindent{\bf Step \ref{sec:se:1}.1:} We claim that for any $\alpha >0$ small and any $R>0$, there exists $C(\alpha, R)>0$ such that for all $\eps>0$ sufficiently small, we have for all $x\in\Omega\setminus \overline{B}_{ Rk_{N,\eps}}(0)$, \begin{align}\label{estim:alpha:1} |\ue(x)|\leq C(\alpha,R)~\left( \frac{\mu_{N,\eps}^{\frac{\bp-\bm}{2}-\alpha } |x|}{|x|^{\bp-\alpha}}+ \frac{\Vert |x|^{\bm-1}u_0 \Vert_{L^{\infty}(\Omega)}}{|x|^{\bm +\alpha}}|x| \right). \end{align} \noindent{\it Proof of Step \ref{sec:se:1}.1:} We fix $\gamma'$ such that $\gamma<\gamma'<\frac{n^2}{4}$. Since the operator $-\Delta - \frac{\gamma}{|x|^2}-h_{0}(x)$ is coercive, it follows, taking $\gamma'$ close enough to $\gamma$, that the operator $-\Delta-\frac{\gamma'}{|x|^2} -h_{0}$ is also coercive in $\Omega$.
From Theorem \ref{th:sing}, there exists $H\in C^2(\overline{\Omega}\setminus\{0\})$ such that \begin{align} \label{eqn:sup-sol} \left\{\begin{array}{ll} -\Delta H-\frac{\gamma'}{|x|^2}H-h_{0}(x) H=0 &\hbox{ in } \Omega \\ H>0&\hbox{ in } \Omega\\ H=0&\hbox{ on }\partial\Omega \setminus \{ 0\}. \end{array}\right. \end{align} Moreover, we have the following bound on $H$: there exists $C_{1}>0$ such that \begin{align}\label{esti:sup-sol} \frac{1}{C_{1}}\frac{d(x,\bdry)}{|x|^{\beta_{+}(\gamma') }} \leq H(x) \leq C_{1}\frac{d(x,\bdry)}{|x|^{\beta_{+}(\gamma') }} \qquad \hbox{ for all } x \in \Omega. \end{align} \noindent We let $\lambda^{\gamma'}_{1}>0$ be the first eigenvalue of the coercive operator $-\Delta-\frac{\gamma'}{|x|^2} -h_{0}$ on $\Omega$ and we let $\varphi\in C^2(\overline{\Omega}\setminus\{0\})\cap \huno$ be the unique eigenfunction such that \begin{align}\label{eqn:sub-sol} \left\{\begin{array}{ll} -\Delta\varphi-\frac{\gamma'}{|x|^2}\varphi-h_{0}(x) \varphi~=\lambda^{\gamma'}_{1}\varphi ~ &\hbox{ in } \Omega\\ \varphi~ >0 &\hbox{ in }\Omega\\ \varphi~=0 &\hbox{ on }\partial\Omega \setminus \{ 0\}. \end{array}\right. \end{align} It follows from the regularity result, Theorem \ref{th:hopf}, that there exists $C_2>0$ such that \begin{align}\label{esti:sub-sol} \frac{1}{C_{2}}\frac{d(x,\bdry)}{|x|^{\beta_{-}(\gamma') }} \leq \varphi(x) \leq C_{2}\frac{d(x,\bdry)}{|x|^{\beta_{-}(\gamma') }} \qquad \hbox{ for all } x \in \Omega.
\end{align} \noindent We define the operator \begin{equation}\label{def:L} \mathcal{L}_\eps:=-\Delta-\left(\frac{\gamma}{|x|^2}+h_{\eps} \right)-\frac{|\ue|^{\crits-2-p_{\eps}}}{|x|^s}.\end{equation} \noindent {\it Step \ref{sec:se:1}.1.1:} We claim that given any $\gamma<\gamma'<\frac{n^2}{4}$ there exist $\delta_0>0$ and $R_0>0$ such that for any $0<\delta < \delta_0$ and $R>R_0$, we have for $\eps>0$ sufficiently small \begin{align}\label{ineq:LG} \mathcal{L}_\eps H(x) >0\hbox{ and } \mathcal{L}_\eps \varphi(x) &~>0 \qquad \hbox{for all }x\in B_\delta(0)\setminus \overline{B}_{R\keN}(0) \cap \Omega, \notag\\ \mathcal{L}_\eps H(x) &~>0 \qquad \hbox{for all }x\in \Omega\setminus \overline{B}_{R\keN}(0), \hbox{ if } u_{0} \equiv0. \end{align} \noindent We prove the claim. As one checks, for all $\eps >0$ and $x \in \Omega$, \begin{align*} \frac{\mathcal{L}_\eps H(x)}{ H(x)}= \frac{\gamma' -\gamma}{|x|^2}+(h_{0} -h_{\eps}) -\frac{|\ue|^{\crits-2-p_{\eps}}}{|x|^s} \end{align*} and \begin{align*} \frac{\mathcal{L}_\eps \varphi (x)}{ \varphi(x)}= \frac{\gamma' -\gamma}{|x|^2} +(h_{0} -h_{\eps}) -\frac{|\ue|^{\crits-2-p_{\eps}}}{|x|^s}+ \lambda^{\gamma'}_{1}. \end{align*} \noindent One has for $\eps >0$ sufficiently small $\Vert h_{0}- h_{\eps}\Vert_\infty \leq \frac{\gamma'-\gamma}{4(1+\sup \limits_{\Omega}|x|^{2} )}$, and we choose $0 <\delta_0 <1$ such that \begin{align}\label{def:delta0} ~ \delta_{0}^{\left(\crits-2 \right)\left(\frac{n}{2}-\bm \right)} \Vert |x|^{\bm-1}u_0 \Vert_{L^{\infty}(\Omega)}^{\crits-2} \leq\frac{\gamma'-\gamma}{ 2^{\crits+3}}. \end{align} These choices are possible thanks to \eqref{hyp:he} and the regularity Theorem \ref{th:hopf}, respectively.
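\noindent Let us record why the exponent in \eqref{def:delta0} is the relevant one (here we use the value of the Hardy-Sobolev exponent, $\crits=\frac{2(n-s)}{n-2}$, so that $\crits-2=\frac{2(2-s)}{n-2}$): by definition of the norm $\Vert |x|^{\bm-1}u_0 \Vert_{L^{\infty}(\Omega)}$, which is finite by the regularity Theorem \ref{th:hopf}, we have $|u_0(x)|\leq \Vert |x|^{\bm-1}u_0 \Vert_{L^{\infty}(\Omega)}|x|^{1-\bm}$ for $x\in\Omega$, so that for $|x|\leq \delta_0$, $$|x|^{2-s}|u_0(x)|^{\crits-2}\leq \Vert |x|^{\bm-1}u_0 \Vert_{L^{\infty}(\Omega)}^{\crits-2}\,|x|^{(2-s)+(1-\bm)(\crits-2)}= \Vert |x|^{\bm-1}u_0 \Vert_{L^{\infty}(\Omega)}^{\crits-2}\,|x|^{\left(\crits-2 \right)\left(\frac{n}{2}-\bm \right)}\leq \frac{\gamma'-\gamma}{ 2^{\crits+3}},$$ since $(2-s)+(1-\bm)(\crits-2)=(2-s)\frac{n-2\bm}{n-2}=\left(\crits-2 \right)\left(\frac{n}{2}-\bm \right)$. This is the form in which \eqref{def:delta0} is used below.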
It follows from point (A6) of Proposition \ref{prop:exhaust} that there exists $R_0>0$ such that for any $R>R_0$, we have for all $\eps>0$ sufficiently small $$|x|^{\frac{n-2}{2}}|\ue(x)-u_0(x)|^{1-\frac{\pe}{\crits-2}}\leq \left(\frac{\gamma'-\gamma}{2^{\crits+2}}\right)^{\frac{1}{\crits-2}} \qquad \hbox{for all } x\in \Omega\setminus \overline{B}_{R\keN}(0). $$ \noindent With this choice of $\delta_0$ and $R_{0}$ we get that for any $0<\delta < \delta_0$ and $R>R_0$, we have for $\eps>0$ small enough \begin{align*} |x|^{2-s}|\ue(x)|^{\crits-2-\pe}&\leq 2^{\crits-1-\pe}|x|^{2-s}|\ue(x)-u_0(x)|^{\crits-2-\pe} \\ &\quad + 2^{\crits-1-\pe}|x|^{2-s}|u_0(x)|^{\crits-2-\pe} \notag \\ &\leq 2^{-\pe}\frac{\gamma' -\gamma}{4} \leq \frac{\gamma' -\gamma}{4} \end{align*} for all $x\in B_\delta(0) \setminus \overline{B}_{R\keN}(0) \cap \Omega$, if $u_{0} \not\equiv0$, and \[ ~|x|^{2-s}|\ue(x)|^{\crits-2-\pe} \leq \frac{\gamma' -\gamma}{4} \qquad \hbox{for all }x\in \Omega\setminus \overline{B}_{R\keN}(0), \hbox{ if } u_{0} \equiv0. \] Hence we obtain that for $\eps>0$ small enough \begin{align} \frac{\mathcal{L}_\eps H(x)}{ H(x)} = &~ \frac{\gamma' -\gamma}{|x|^2} +h_{0}-h_{\eps} -\frac{|\ue|^{\crits-2-p_{\eps}}}{|x|^s}\notag\\ \geq& ~\frac{\gamma' -\gamma}{|x|^2} +h_{0}-h_{\eps} - \frac{\gamma' -\gamma}{4 |x|^2} \notag\\ \geq& ~\frac{\gamma' -\gamma}{|x|^2} -\frac{\gamma' -\gamma}{4 |x|^2} - \frac{\gamma' -\gamma}{4 |x|^2} = \frac{\gamma' -\gamma}{2 |x|^2} \notag\\ >&~0 \qquad \hbox{ for all } x\in B_\delta (0)\setminus \overline{B}_{R\keN}(0) \cap \Omega \hbox{ if } u_{0} \not\equiv0 \notag\\ \hbox{and }~~\frac{\mathcal{L}_\eps H(x)}{ H(x)} >&~0 \qquad \hbox{for all }x\in \Omega\setminus \overline{B}_{R\keN}(0), \hbox{ if } u_{0} \equiv0.\notag \end{align} Similarly we have \begin{align*} \frac{\mathcal{L}_\eps \varphi (x)}{ \varphi(x)} >~0 \qquad \hbox{ for all } x\in B_\delta (0)\setminus \overline{B}_{R\keN}(0) \cap \Omega.
\end{align*} \qed \noindent{\it Step \ref{sec:se:1}.1.2:} It follows from point (A4) of Proposition \ref{prop:exhaust} that there exists $C'_1(R)>0$ such that for all $\eps>0$ small \begin{align*} |\ue(x)|\leq C'_1(R) \frac{\meN^{\frac{\beta_{+}(\gamma')-\beta_{-}(\gamma')}{2}} d(x, \bdry) }{|x|^{\beta_{+}(\gamma')}}\qquad \hbox{ for all } x\in \Omega \cap \partial B_{R\keN}(0). \end{align*} By estimate \eqref{esti:sup-sol} on $H$, we then have for some constant $C_1(R)>0$ \begin{align}\label{sup:ue:boundary:2} |\ue(x)|\leq C_1(R) \meN^{\frac{\beta_{+}(\gamma')-\beta_{-}(\gamma')}{2}} H(x)\qquad \hbox{ for all } x\in \Omega \cap \partial B_{R\keN}(0). \end{align} \noindent It follows from point (A1) of Proposition \ref{prop:exhaust} and the regularity Theorem \ref{th:hopf} that there exists $C'_2(\delta)>0$ such that for all $\eps>0$ small \begin{align}\label{sup:ue:boundary:1} |\ue(x)|\leq C'_2(\delta) \Vert { |x|^{\bm-1}u_0} \Vert_{L^{\infty}(\Omega)} \frac{ d(x, \bdry) }{|x|^{\beta_{-}(\gamma') }} \qquad \hbox{ for all } x\in \Omega \cap \partial B_{\delta}(0), \hbox{ if } u_{0}\not \equiv0. \end{align} Then, by the estimate \eqref{esti:sub-sol} on $\varphi$, we have for some constant $C_2(\delta)>0$ \begin{align}\label{sup:ue:boundary:1:bis} |\ue(x)|\leq C_2(\delta) \Vert { |x|^{\bm-1}u_0} \Vert_{L^{\infty}(\Omega)}~ \varphi(x) \qquad \hbox{ for all } x\in \Omega \cap \partial B_{\delta}(0), \hbox{ if } u_{0}\not \equiv0.
\end{align} \noindent We now let for all $\eps>0$ $$\Psi_{\eps}(x):= C_1(R) \meN^{\frac{\beta_{+}(\gamma')-\beta_{-}(\gamma')}{2}} H(x) +C_2(\delta) \Vert { |x|^{\bm-1}u_0} \Vert_{L^{\infty}(\Omega)}~ \varphi(x)~~ \hbox{ for } x \in \Omega.$$ \noindent Then \eqref{sup:ue:boundary:1} and \eqref{sup:ue:boundary:2} imply that for all $\eps>0$ small \begin{align}\label{control:ue:bord} |u_{\eps}(x)| \leq \Psi_{\eps}(x) \qquad \hbox{ for all } x \in \partial \left(B_\delta(0)\setminus \overline{B}_{R\keN}(0) \cap \Omega \right), \hbox{ if } u_{0} \not \equiv 0 \end{align} and \begin{align}\label{control:ue:bord:u0} |u_{\eps}(x)| \leq \Psi_{\eps}(x) \qquad \hbox{ for all } x \in \partial(\Omega\setminus \overline{B}_{R\keN}(0)), \hbox{ if } u_{0}\equiv0. \end{align} \noindent Therefore when $ u_{0}\not \equiv0$ it follows from \eqref{ineq:LG} and \eqref{control:ue:bord} that for all $\eps>0$ sufficiently small $$\left\{\begin{array}{ll} \mathcal{L}_\eps \Psi_\eps \geq 0=\mathcal{L}_\eps\ue & \hbox{ in } B_\delta(0)\setminus \overline{B}_{R\keN}(0) \cap \Omega\\ \Psi_\eps \geq \ue& \hbox{ on } \partial\left(B_\delta(0)\setminus \overline{B}_{R\keN}(0) \cap \Omega \right) \\ \mathcal{L}_\eps \Psi_\eps \geq 0=-\mathcal{L}_\eps\ue & \hbox{ in } B_\delta(0)\setminus \overline{B}_{R\keN}(0) \cap \Omega\\ \Psi_\eps \geq -\ue& \hbox{ on } \partial\left(B_\delta(0)\setminus \overline{B}_{R\keN}(0) \cap \Omega \right). \end{array}\right.$$ and from \eqref{ineq:LG} and \eqref{control:ue:bord:u0}, in case $u_{0} \equiv 0$, we have for $\eps>0$ sufficiently small $$\left\{\begin{array}{ll} \mathcal{L}_\eps \Psi_\eps \geq 0=\mathcal{L}_\eps \ue & \hbox{ in } \Omega \setminus \overline{B}_{R\keN}(0) \\ \Psi_\eps \geq \ue& \hbox{ on } \partial(\Omega\setminus \overline{B}_{R\keN}(0))\\ \mathcal{L}_\eps \Psi_\eps \geq 0=-\mathcal{L}_\eps\ue & \hbox{ in } \Omega \setminus \overline{B}_{R\keN}(0)\\ \Psi_\eps \geq -\ue& \hbox{ on } \partial(\Omega \setminus \overline{B}_{R\keN}(0)). 
\end{array}\right.$$ Since $\Psi_\eps>0$ and $\mathcal{L}_\eps \Psi_\eps>0$, it follows from the comparison principle of Berestycki-Nirenberg-Varadhan \cite{bnv} that the operator $\mathcal{L}_\eps$ satisfies the comparison principle on $ B_\delta(0)\setminus \overline{B}_{R\keN}(0) \cap \Omega$. Therefore \begin{align*} |\ue(x)|\leq&~ \Psi_\eps(x) \qquad \hbox{ for all } x \in B_\delta(0)\setminus \overline{B}_{R\keN}(0) \cap \Omega, \notag\\ \hbox{ and }~ |\ue(x)|\leq &~ \Psi_\eps(x) \qquad \hbox{ for all } x \in \Omega \setminus \overline{B}_{R\keN}(0) ~ \hbox{ if } u_{0} \equiv 0. \end{align*} Therefore, when $ u_{0}\not \equiv0$, we have for all $\eps>0$ small \begin{align*} |u_{\eps}(x)| \leq C_1(R) \meN^{\frac{\beta_{+}(\gamma')-\beta_{-}(\gamma')}{2}} H(x) +C_2(\delta) \Vert {|x|^{\bm-1}u_0} \Vert_{L^{\infty}(\Omega)}~ \varphi(x) \end{align*} for all $ x \in B_\delta(0)\setminus \overline{B}_{R\keN}(0) \cap \Omega$, for $R$ large and $\delta$ small. \noindent Then, when $ u_{0}\not \equiv0$, using the estimates \eqref{esti:sup-sol} and \eqref{esti:sub-sol}, we have for all $\eps>0$ small \begin{align*} |u_{\eps}(x)| &~\leq C_1(R) \frac{ \meN^{\frac{\beta_{+}(\gamma')-\beta_{-}(\gamma')}{2}} d\left( x, \bdry\right) }{|x|^{\beta_{+}(\gamma') }}+ C_2(\delta) \Vert {|x|^{\bm-1}u_0} \Vert_{L^{\infty}(\Omega)} \frac{d\left( x, \bdry\right)}{|x|^{\beta_{-}(\gamma') }} \notag\\ &~\leq C_1(R) \frac{ \meN^{\frac{\beta_{+}(\gamma')-\beta_{-}(\gamma')}{2}} |x| }{|x|^{\beta_{+}(\gamma') }}+C_2(\delta) \frac{\Vert {|x|^{\bm-1}u_0} \Vert_{L^{\infty}(\Omega)} }{|x|^{\beta_{-}(\gamma') }}|x| \end{align*} for all $ x \in B_\delta(0)\setminus \overline{B}_{R\keN}(0) \cap \Omega$, for $R$ large and $\delta$ small.
\noindent Similarly, if $u_{0} \equiv 0$, then for all $\eps>0$ small and $R >0$ large $$|u_{\eps}(x)|\leq C_1(R) \frac{\meN^{\frac{\beta_{+}(\gamma')-\beta_{-}(\gamma')}{2}} |x|}{|x|^{\beta_{+}(\gamma')}} ~ \hbox{ for all } x \in \Omega \setminus \overline{B}_{R\keN}(0).$$ Taking $\gamma'$ close to $\gamma$, along with points (A1) and (A4) of Proposition \ref{prop:exhaust}, it then follows that estimate \eqref{estim:alpha:1} holds on $\Omega\setminus \overline{B}_{ Rk_{N,\eps}}(0)$ for all $R>0$.\qed \noindent{\bf Step \ref{sec:se:1}.2:} Let $1 \leq i \leq N-1$. We claim that for any $\alpha >0$ small and any $R,\rho>0$, there exists $C(\alpha,R,\rho)>0$ such that for all $\eps>0$, \begin{align}\label{estim:alpha:1:bis} |\ue(x)|\leq C(\alpha,R,\rho)\left(\frac{\mu_{i,\eps}^{\frac{\bp-\bm}{2}-\alpha}|x|}{|x|^{\bp-\alpha}}+\frac{|x|}{\mu_{i+1,\eps}^{\frac{\bp-\bm}{2}-\alpha} |x|^{\bm +\alpha}} \right) \end{align} for all $x\in B_{\rho k_{i+1,\eps}}(0)\setminus \overline{B}_{R k_{i,\eps}}(0) \cap \Omega$. \noindent{\it Proof of Step \ref{sec:se:1}.2:} We let $i\in \{1,...,N-1\}$. We emulate the proof of Step \ref{sec:se:1}.1. Fix $\gamma'$ such that $\gamma<\gamma'<\frac{n^2}{4}$. Consider the functions $H$ and $\varphi$ defined in Step \ref{sec:se:1}.1, satisfying \eqref{eqn:sup-sol} and \eqref{eqn:sub-sol} respectively. \noindent{\it Step \ref{sec:se:1}.2.1:} We claim that given any $\gamma<\gamma'<\frac{n^2}{4}$ there exist $\rho_0>0$ and $R_0>0$ such that for any $0<\rho < \rho_0$ and $R>R_0$, we have for $\eps>0$ sufficiently small \begin{align} \label{ineq:LG2} \mathcal{L}_\eps H(x) >0\hbox{ and } \mathcal{L}_\eps \varphi(x) >0 \qquad \hbox{for all }x\in B_{\rho k_{i+1,\eps}}(0) \setminus \overline{B}_{R k_{i, \eps}}(0) \cap \Omega \end{align} where $\mathcal{L}_\eps$ is as in \eqref{def:L}. \noindent We prove the claim.
As one checks, for all $\eps >0$ and $x \in \Omega$, \begin{align*} \frac{\mathcal{L}_\eps H(x)}{ H(x)}&~= \frac{\gamma' -\gamma}{|x|^2} + h_{0} -h_{\eps} -\frac{|\ue|^{\crits-2-p_{\eps}}}{|x|^s}, \notag \\ \frac{\mathcal{L}_\eps \varphi (x)}{ \varphi(x)}&~\geq \frac{\gamma' -\gamma}{|x|^2} + h_{0}-h_{\eps} -\frac{|\ue|^{\crits-2-p_{\eps}}}{|x|^s}. \end{align*} \noindent We choose $0<\rho_0<1$ such that \begin{align}\label{def:rho0} & \rho_0^2\sup \limits_{\Omega}| h_{0}-h_{\eps}| \leq \frac{\gamma'-\gamma}{4} \quad \hbox{ for all } \eps>0 \hbox{ small and } ~ \notag \\ &\rho_{0}^{\left(\crits-2 \right)\left(\frac{n}{2}-\bm \right)} \Vert |x|^{\bm-1} \tu_{i+1} \Vert_{L^\infty(B_2(0)\cap \rnm)}^{\crits-2} \leq\frac{\gamma'-\gamma}{ 2^{\crits+3}}. \end{align} It follows from point (A8) of Proposition \ref{prop:exhaust} that there exists $R_0>0$ such that for any $R>R_0$ and any $0<\rho < \rho_0$, we have for all $\eps>0$ sufficiently small $$ |x|^{\frac{n-2}{2}}\left|\ue(x)-\mu_{i+1,\eps}^{-\frac{n-2}{2}}\tu_{i+1}\left( \frac{\T^{-1}(x)}{k_{i+1,\eps} } \right)\right|^{1-\frac{\pe}{\crits-2}} \leq \left(\frac{\gamma'-\gamma}{2^{\crits+2}}\right)^{\frac{1}{\crits-2}} $$ for all $x\in B_{\rho k_{i+1, \eps}}(0)\setminus \overline{B}_{R\kei}(0) \cap \Omega$.
\noindent With this choice of $\rho_0$ and $R_{0}$ we get that for any $0<\rho < \rho_0$ and $R>R_0$, we have for $\eps>0$ small enough \begin{align} |x|^{2-s}|\ue(x)|^{\crits-2-\pe} \leq &~ 2^{\crits-1-\pe}|x|^{2-s}\left|\ue(x)-\mu_{i+1,\eps}^{-\frac{n-2}{2}}\tu_{i+1}\left(\frac{\T^{-1}(x)}{k_{i+1, \eps}} \right)\right|^{\crits-2-\pe} \notag\\ &+2^{\crits-1-\pe}\left(\frac{|x|}{k_{i+1,\eps}}\right)^{2-s} \left|\tu_{i+1} \left(\frac{ \T^{-1}(x)}{k_{i+1, \eps}}\right) \right|^{\crits-2-\pe} \notag\\ & \leq \frac{\gamma' -\gamma}{4} \qquad \hbox{ for all } x\in B_{\rho k_{i+1, \eps}}(0)\setminus \overline{B}_{R\kei}(0).\notag \end{align} Hence as in Step \ref{sec:se:1}.1 we have that for $\eps>0$ small enough \begin{align*} \frac{\mathcal{L}_\eps H(x)}{ H(x)} >0 ~ \hbox{ and } \frac{\mathcal{L}_\eps \varphi (x)}{ \varphi(x)}>0 ~ \hbox{ for all }x\in B_{\rho k_{i+1,\eps}}(0) \setminus \overline{B}_{R k_{i, \eps}}(0) \cap \Omega. \end{align*} \noindent{\it Step \ref{sec:se:1}.2.2:} Let $i\in \{1,...,N-1\}$. It follows from point (A4) of Proposition \ref{prop:exhaust} that there exists $C'_1(R)>0$ such that for all $\eps>0$ small \begin{align*} |\ue(x)|\leq C'_1(R) \frac{\mei^{\frac{\beta_{+}(\gamma')-\beta_{-}(\gamma')}{2}} d(x, \bdry) }{|x|^{\beta_{+}(\gamma')}}\qquad \hbox{ for all } x\in \Omega \cap \partial B_{R\kei}(0). \end{align*} Then, by the estimate \eqref{esti:sup-sol} on $H$, we have for some constant $C_1(R)>0$ \begin{align}\label{sup:ue:boundary:2:bis} |\ue(x)|\leq C_1(R) \mei^{\frac{\beta_{+}(\gamma')-\beta_{-}(\gamma')}{2}} H(x)\qquad \hbox{ for all } x\in \Omega \cap \partial B_{R\kei}(0).
\end{align} \noindent Again from point (A4) of Proposition \ref{prop:exhaust} it follows that there exists $C'_2(\rho)>0$ such that for all $\eps>0$ small \begin{align*} |\ue(x)|\leq C'_2(\rho) \frac{d(x,\bdry)}{\mu_{i+1,\eps}^{\frac{\beta_{+}(\gamma')-\beta_{-}(\gamma')}{2}} |x|^{\beta_{-}(\gamma')}} \qquad \hbox{ for all } x\in \Omega\cap \partial B_{\rho k_{i+1, \eps}}(0), \end{align*} and then by the estimate \eqref{esti:sub-sol} on $\varphi$ we have for some constant $C_2(\rho)>0$ \begin{align}\label{sup:ue:boundary:3:bis} |\ue(x)|\leq C_2(\rho) \frac{\varphi(x) }{\mu_{i+1,\eps}^{\frac{\beta_{+}(\gamma')-\beta_{-}(\gamma')}{2}} } \qquad \hbox{ for all } x\in \Omega\cap \partial B_{\rho k_{i+1, \eps}}(0). \end{align} \noindent We let for all $\eps>0$ $$\tilde{\Psi}_{\eps}(x):= C_1(R) \mei^{\frac{\beta_{+}(\gamma')-\beta_{-}(\gamma')}{2}} H(x) +C_2(\rho) \frac{\varphi(x) }{\mu_{i+1,\eps}^{\frac{\beta_{+}(\gamma')-\beta_{-}(\gamma')}{2}} } \qquad \hbox { for } x \in \Omega. $$ Then \eqref{sup:ue:boundary:2:bis} and \eqref{sup:ue:boundary:3:bis} imply that for all $\eps>0$ small \begin{align}\label{control:ue:bord:bis} |u_{\eps}(x)| \leq \tilde{\Psi}_{\eps}(x) \qquad \hbox{ for all } x \in \partial \left(B_{\rho k_{i+1, \eps}}(0)\setminus \overline{B}_{R\kei}(0 ) \cap \Omega \right).
\end{align} \noindent Therefore it follows from \eqref{ineq:LG2} and \eqref{control:ue:bord:bis} that for $\eps>0$ sufficiently small $$\left\{\begin{array}{ll} \mathcal{L}_\eps \tilde{\Psi}_\eps \geq 0=\mathcal{L}_\eps\ue & \hbox{ in } B_{\rho k_{i+1, \eps}}(0)\setminus \overline{B}_{R\kei}(0) \cap \Omega \\ \tilde{\Psi}_\eps \geq \ue& \hbox{ on } \partial \left(B_{\rho k_{i+1, \eps}}(0)\setminus \overline{B}_{R\kei}(0) \cap \Omega \right) \\ \mathcal{L}_\eps \tilde{\Psi}_\eps \geq 0=-\mathcal{L}_\eps\ue & \hbox{ in } B_{\rho k_{i+1, \eps}}(0)\setminus \overline{B}_{R\kei}(0) \cap \Omega \\ \tilde{\Psi}_\eps \geq -\ue& \hbox{ on } \partial \left(B_{\rho k_{i+1, \eps}}(0)\setminus \overline{B}_{R\kei}(0 ) \cap \Omega\right). \end{array}\right.$$ Since $\tilde{\Psi}_\eps>0$ and $\mathcal{L}_\eps \tilde{\Psi}_\eps>0$, it follows from the comparison principle of Berestycki-Nirenberg-Varadhan \cite{bnv} that the operator $\mathcal{L}_\eps$ satisfies the comparison principle on $ B_{\rho k_{i+1, \eps}}(0)\setminus \overline{B}_{R\kei}(0) \cap \Omega$. Therefore $$|\ue(x)|\leq \tilde{\Psi}_\eps(x) \qquad \hbox{ for all } x \in B_{\rho k_{i+1, \eps}}(0)\setminus \overline{B}_{R\kei}(0) \cap \Omega.$$ So for all $\eps>0$ small $$|u_{\eps}(x)|\leq C_1(R) \mei^{\frac{\beta_{+}(\gamma')-\beta_{-}(\gamma')}{2}} H(x) +C_2(\rho) \frac{\varphi(x) }{\mu_{i+1,\eps}^{\frac{\beta_{+}(\gamma')-\beta_{-}(\gamma')}{2}} } $$ for all $ x \in B_{\rho k_{i+1, \eps}}(0)\setminus \overline{B}_{R\kei}(0) \cap \Omega$, for $R$ large and $\rho$ small.
Then, using the estimates \eqref{esti:sup-sol} and \eqref{esti:sub-sol}, we have for all $\eps>0$ small \begin{align*} |u_{\eps}(x)| &~\leq C_1(R) \frac{ \mei^{\frac{\beta_{+}(\gamma')-\beta_{-}(\gamma')}{2}} d\left( x, \bdry\right) }{|x|^{\beta_{+}(\gamma') }}+ C_2(\rho) \frac{d(x,\bdry)}{\mu_{i+1,\eps}^{\frac{\beta_{+}(\gamma')-\beta_{-}(\gamma')}{2}} |x|^{\beta_{-}(\gamma')}} \notag\\ &~\leq C_1(R) \frac{ \mei^{\frac{\beta_{+}(\gamma')-\beta_{-}(\gamma')}{2}} |x| }{|x|^{\beta_{+}(\gamma') }}+C_2(\rho) \frac{|x|}{\mu_{i+1,\eps}^{\frac{\beta_{+}(\gamma')-\beta_{-}(\gamma')}{2}} |x|^{\beta_{-}(\gamma')}} \end{align*} for all $ x \in B_{\rho k_{i+1, \eps}}(0)\setminus \overline{B}_{R\kei}(0) \cap \Omega$, for $R$ large and $\rho$ small. Taking $\gamma'$ close to $\gamma$, along with point (A4) of Proposition \ref{prop:exhaust}, it then follows that estimate \eqref{estim:alpha:1:bis} holds on $B_{\rho k_{i+1, \eps}}(0)\setminus \overline{B}_{R\kei}(0) \cap \Omega$, for all $R,\rho>0$.\qed \noindent{\bf Step \ref{sec:se:1}.3:} We claim that for any $\alpha >0$ small and any $\rho>0$, there exists $C(\alpha,\rho)>0$ such that for all $\eps>0$, \begin{align}\label{estim:alpha:1:tis} |\ue(x)|\leq C(\alpha,\rho)\frac{|x|}{\mu_{1,\eps}^{\frac{\bp-\bm}{2}-\alpha} |x|^{\bm +\alpha}} \qquad \hbox{ for all } x\in B_{\rho k_{1,\eps}}(0) \cap \Omega. \end{align} \noindent{\it Proof of Step \ref{sec:se:1}.3:} Fix $\gamma'$ such that $\gamma<\gamma'<\frac{n^2}{4}$. Consider the function $\varphi$ defined in Step \ref{sec:se:1}.1, satisfying \eqref{eqn:sub-sol}. \noindent{\it Step \ref{sec:se:1}.3.1:} We claim that given any $\gamma<\gamma'<\frac{n^2}{4}$ there exists $\rho_0>0$ such that for any $0<\rho < \rho_0$ we have for $\eps>0$ sufficiently small \begin{align} \label{ineq:LG3} \mathcal{L}_\eps \varphi(x) >0 \qquad \hbox{for all }x\in B_{\rho k_{1,\eps}}(0) \cap \Omega, \end{align} where $\mathcal{L}_\eps$ is as in \eqref{def:L}.
\noindent Indeed, for all $\eps >0$ and $ x \in \Omega$ \begin{align*} \frac{\mathcal{L}_\eps \varphi (x)}{ \varphi(x)}&~\geq \frac{\gamma' -\gamma}{|x|^2} -h_{\eps} -\frac{|\ue|^{\crits-2-p_{\eps}}}{|x|^s}. \end{align*} We choose $0<\rho_0<1$ such that \begin{align} & \rho_0^2\sup \limits_{\Omega}| h_{\eps}| \leq \frac{\gamma'-\gamma}{4} \quad \hbox{ for all } \eps>0 \hbox{ small and } ~ \notag \\ &\rho_{0}^{\left(\crits-2 \right)\left(\frac{n}{2}-\bm \right)} \Vert |x|^{\bm-1} \tu_{1} \Vert_{L^\infty(B_2(0)\cap \rnm)}^{\crits-2} \leq\frac{\gamma'-\gamma}{ 2^{\crits+3}}.\notag \end{align} It follows from point (A7) of Proposition \ref{prop:exhaust} that for any $0<\rho < \rho_0$, we have for all $\eps>0$ sufficiently small $$ |x|^{\frac{n-2}{2}}\left|\ue(x)-\mu_{1,\eps}^{-\frac{n-2}{2}}\tu_{1}\left( \frac{\T^{-1}(x)}{k_{1,\eps} } \right)\right|^{1-\frac{\pe}{\crits-2}} \leq \left(\frac{\gamma'-\gamma}{2^{\crits+2}}\right)^{\frac{1}{\crits-2}} $$ for all $x\in B_{\rho k_{1,\eps}}(0) \cap \Omega$. \noindent With this choice of $\rho_0$ we get that for any $0<\rho < \rho_0$ we have for $\eps>0$ small enough \begin{align} |x|^{2-s}|\ue(x)|^{\crits-2-\pe} \leq &~ 2^{\crits-1-\pe}|x|^{2-s}\left|\ue(x)-\mu_{1,\eps}^{-\frac{n-2}{2}}\tu_{1}\left(\frac{\T^{-1}(x)}{k_{1, \eps}} \right)\right|^{\crits-2-\pe} \notag\\ &+2^{\crits-1-\pe}\left(\frac{|x|}{k_{1,\eps}}\right)^{2-s} \left|\tu_{1} \left(\frac{ \T^{-1}(x)}{k_{1, \eps}}\right) \right|^{\crits-2-\pe} \notag\\ & \leq \frac{\gamma' -\gamma}{4} \qquad \hbox{ for all } x\in B_{\rho k_{1,\eps}}(0) \cap \Omega.\notag \end{align} Hence as in Step \ref{sec:se:1}.1 we have that for $\eps>0$ small enough \begin{align*} \frac{\mathcal{L}_\eps \varphi (x)}{ \varphi(x)}>0 ~ \hbox{ for all } x\in B_{\rho k_{1,\eps}}(0) \cap \Omega.
\end{align*} $\Box$ \noindent{\it Step \ref{sec:se:1}.3.2:} It follows from point (A4) of Proposition \ref{prop:exhaust} that there exists $C'_2(\rho)>0$ such that for all $\eps>0$ small \begin{align*} |\ue(x)|\leq C'_2(\rho) \frac{d(x,\bdry)}{\mu_{1,\eps}^{\frac{\beta_{+}(\gamma')-\beta_{-}(\gamma')}{2}} |x|^{\beta_{-}(\gamma')}} \qquad \hbox{ for all } x\in \partial B_{\rho k_{1, \eps}}(0) \cap \Omega \end{align*} and then by the estimate \eqref{esti:sub-sol} on $\varphi$ we have for some constant $C_2(\rho)>0$ \begin{align}\label{sup:ue:boundary:3:tis} |\ue(x)|\leq C_2(\rho) \frac{\varphi(x) }{\mu_{1,\eps}^{\frac{\beta_{+}(\gamma')-\beta_{-}(\gamma')}{2}} } \qquad \hbox{ for all } x\in \partial B_{\rho k_{1, \eps}}(0) \cap \Omega. \end{align} \noindent We let for all $\eps>0$ $$\Psi^{0}_{\eps}(x):= C_2(\rho) \frac{\varphi(x) }{\mu_{1,\eps}^{\frac{\beta_{+}(\gamma')-\beta_{-}(\gamma')}{2}} }\qquad \hbox{ for } x \in \Omega. $$ Then \eqref{sup:ue:boundary:3:tis} implies that for all $\eps>0$ small \begin{align}\label{control:ue:bord:tis} |u_{\eps}(x)| \leq \Psi^{0}_{\eps}(x) \qquad \hbox{ for all } x \in \partial\left( B_{\rho k_{1, \eps}}(0) \cap \Omega \setminus \{ 0\}\right) . \end{align} \noindent Therefore it follows from \eqref{ineq:LG3} and \eqref{control:ue:bord:tis} that for $\eps>0$ sufficiently small $$\left\{\begin{array}{ll} \mathcal{L}_\eps\Psi^{0}_\eps \geq 0=\mathcal{L}_\eps\ue & \hbox{ in } B_{\rho k_{1,\eps}}(0) \cap \Omega \\ \Psi^{0}_\eps \geq \ue& \hbox{ on } \partial\left( B_{\rho k_{1, \eps}}(0) \cap \Omega \setminus \{ 0\}\right) \\ \mathcal{L}_\eps \Psi^{0}_\eps \geq 0=-\mathcal{L}_\eps\ue & \hbox{ in } B_{\rho k_{1,\eps}}(0) \cap \Omega\\ \Psi^{0}_\eps \geq -\ue& \hbox{ on } \partial\left( B_{\rho k_{1, \eps}}(0) \cap \Omega \setminus \{ 0\}\right). \end{array}\right.$$ Since $\Psi^{0}_\eps>0$ and $\mathcal{L}_\eps \Psi^{0}_\eps>0$, it follows from the comparison principle of Berestycki-Nirenberg-Varadhan \cite{bnv} that the operator $\mathcal{L}_\eps$ satisfies the comparison principle on $ B_{\rho k_{1,\eps}}(0) \cap \Omega$.
Therefore
$$|\ue(x)|\leq\Psi^{0}_\eps(x) \qquad \hbox{ for all } x \in B_{\rho k_{1,\eps}}(0) \cap \Omega,$$
and so, for $\rho$ small and all $\eps >0$ small,
$$|u_{\eps}(x)|\leq C_2(\rho) \frac{ \varphi(x)}{\mu_{1,\eps}^{\frac{\beta_{+}(\gamma')-\beta_{-}(\gamma')}{2}} } ~ \hbox{ for all } x \in B_{\rho k_{1,\eps}}(0)\cap \Omega.$$
Using the estimate \eqref{esti:sub-sol}, we then have for all $\eps>0$ small
\begin{align*}
|u_{\eps}(x)| &~\leq C_2(\rho) \frac{d(x,\bdry)}{\mu_{1,\eps}^{\frac{\beta_{+}(\gamma')-\beta_{-}(\gamma')}{2}} |x|^{\beta_{-}(\gamma')}} \notag\\
&~\leq C_2(\rho) \frac{|x|}{\mu_{1,\eps}^{\frac{\beta_{+}(\gamma')-\beta_{-}(\gamma')}{2}} |x|^{\beta_{-}(\gamma')}} .
\end{align*}
It then follows from point (A4) of Proposition \ref{prop:exhaust} that the estimate \eqref{estim:alpha:1:tis} holds for $x \in B_{\rho k_{1,\eps}}(0) \cap \Omega$, for all $\rho>0$.\qed \\

\noindent{\bf Step \ref{sec:se:1}.4:} Combining the previous three steps, it follows from \eqref{estim:alpha:1}, \eqref{estim:alpha:1:bis}, \eqref{estim:alpha:1:tis} and Proposition \ref{prop:exhaust} that for any $\alpha>0$ small, there exists $C(\alpha)>0$ such that for all $\eps >0$ we have for all $x \in \Omega$,
\begin{eqnarray}\label{eq:est:alpha:global}
|\ue(x)|&\leq& C(\alpha) \left(~\sum_{i=1}^N\frac{\mei^{\frac{\bp-\bm}{2}-\alpha} ~ |x|}{ \mei^{\left(\bp-\bm\right)-2\alpha}|x|^{\bm+\alpha}+|x|^{\bp -\alpha}}\right.\nonumber\\
&&\left.+ \frac{\Vert |x|^{\bm-1}u_0 \Vert_{L^{\infty}(\Omega)}}{|x|^{\bm+\alpha}} |x| \right).
\end{eqnarray}

\noindent Next we improve the above estimate and show that one can take $\alpha =0$ in \eqref{eq:est:alpha:global}.\\

\noindent We let $G_{\epsilon}$ be the Green's function for the coercive operator $-\Delta-\frac{\gamma}{|x|^{2}}-h_{\epsilon}$ on $\Omega$ with Dirichlet boundary condition.
The Green's representation formula, the pointwise bounds \eqref{est:G:up} on the Green's function and the regularity Theorem \ref{th:hopf} yield, for any $ z \in \Omega$,
$$\ue(z) = \int \limits_\Omega G_{\epsilon}(z,x)\left( \frac{|\ue (x)|^{\crits-2-\pe} ~\ue(x)}{|x|^s}\right) dx,$$
and therefore,
\begin{align}\label{ineq:1}
|\ue(z)| \leq & \int \limits_\Omega G_{\epsilon}(z,x) \frac{|\ue (x)|^{\crits-1-\pe} }{|x|^s} ~ dx \notag \\
\leq&~ C \int \limits_\Omega \left(\frac{\max\{|z|,|x|\}}{\min\{|z|,|x|\}}\right)^{\bm} \frac{d(x,\bdry) d(z,\bdry)}{|x-z|^{n}} \frac{|\ue(x)|^{\crits-1-\pe}}{|x|^s}~ dx.
\end{align}

\noindent To estimate the above integral we break it into three parts. \\

\noindent{\bf Step \ref{sec:se:1}.5:} There exists $C>0$ such that for any sequence $(\ze)$ with $\ze\in \Omega\setminus \overline{B}_{k_{N,\eps}}(0)$, we have
\begin{align}\label{estim:1:ze}
\int \limits_\Omega G_{\epsilon}(\ze,x) \frac{|\ue (x)|^{\crits-1-\pe} }{|x|^s}~dx \leq C~\left(\frac{\mu_{N,\eps}^{\frac{\bp-\bm}{2}} |\ze|}{|\ze|^{\bp}}+\frac{\Vert |x|^{\bm-1}u_0 \Vert_{L^{\infty}(\Omega)}}{|\ze|^{\bm }}|\ze| \right).
\end{align}

\noindent{\it Proof of Step \ref{sec:se:1}.5:} To estimate the right-hand side of \eqref{ineq:1} in this case, we split $\Omega$ into four subdomains: $\Omega = \bigcup \limits_{i=1}^{4} D^{N}_{i,\eps}$, where
\begin{itemize}
\item $D^{N}_{1,\eps}:= B_{k_{N,\eps}}(0) \cap \Omega,\; D^{N}_{2,\eps} :=\{ k_{N,\eps}<|x|< \frac{1}{2}|\ze|\} \cap \Omega,$
\item $D^{N}_{3,\eps}:=\{\frac{1}{2}|\ze|<|x|< 2|\ze|\} \cap \Omega,\; D^{N}_{4,\eps}:=\{ 2|\ze|<|x|\} \cap \Omega. $
\end{itemize}
Note that one has $\frac{1}{2}|\ze|< |x-\ze|$ in $D^{N}_{2,\eps}$ and $\frac{1}{2}|x|< |x-\ze|$ in $D^{N}_{4,\eps}$.
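For the reader's convenience, we record the elementary verification of these two inequalities: both follow from the triangle inequality, since
$$|x|<\tfrac{1}{2}|\ze| \;\Rightarrow\; |x-\ze|\geq |\ze|-|x|>\tfrac{1}{2}|\ze|,
\qquad
|x|>2|\ze| \;\Rightarrow\; |x-\ze|\geq |x|-|\ze|>\tfrac{1}{2}|x|.$$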
\noindent Using point (A5) of Proposition \ref{prop:exhaust} and a change of variable, we get \begin{align}\label{estim:1:I1:N} I^{N}_{1}: =&~C~ \int \limits_{D^{N}_{1,\eps}} \left(\frac{\max\{|\ze|,|x|\}}{\min\{|\ze|,|x|\}}\right)^{\bm} \frac{d(x,\bdry) d(\ze,\bdry)}{|x-\ze|^{n}} \frac{|\ue(x)|^{\crits-1-\pe}}{|x|^s}~ dx \notag \\ \leq &~C~d(\ze,\bdry) \int \limits_{D^{N}_{1,\eps}} \frac{|\ze|^{\bm}}{|x|^{\bm-1}} |x-\ze|^{-n}\frac{|\ue(x)|^{\crits-1-\pe}}{|x|^s}~ dx \notag \\ \leq & ~C~ \frac{d(\ze,\bdry)}{|\ze|^{\bp}} \int \limits_{D^{N}_{1,\eps}} \frac{|\ue(x)|^{\crits-1-\pe}}{|x|^{\bm-1 +s}} ~ dx \notag \\ \leq & ~C~ \frac{d(\ze,\bdry)}{|\ze|^{\bp}} \int \limits_{B_{k_{N,\eps}}(0)} \frac{1}{|x|^{\bm -1+s+ (\crits-1-\pe)\frac{n-2}{2}}} ~ dx \notag \\ \leq & ~C ~\frac{\mu_{N,\eps}^{\frac{\bp-\bm}{2}} d(\ze,\bdry)}{ |\ze|^{\bp} } \int \limits_{B_{1}(0)} \frac{1}{|x|^{ n- \frac{\bp-\bm}{2} -\pe \frac{n-2}{2} }} ~ dx \notag \\ \leq & ~C~ \frac{\mu_{N,\eps}^{\frac{\bp-\bm}{2}}|\ze|}{ |\ze|^{\bp} } . \end{align} \noindent Now we estimate \begin{align*} I^{N}_{2}: =&~C \int \limits_{D^{N}_{2,\eps}} \left(\frac{\max\{|\ze|,|x|\}}{\min\{|\ze|,|x|\}}\right)^{\bm} \frac{d(x,\bdry) d(\ze,\bdry)}{|x-\ze|^{n}} \frac{|\ue(x)|^{\crits-1-\pe}}{|x|^s}~ dx \notag \\ \leq &~C~d(\ze,\bdry)\frac{|\ze|^{\bm}}{|\ze|^{n}} \int \limits_{D^{N}_{2,\eps}} \frac{|\ue(x)|^{\crits-1-\pe}}{|x|^{\bm-1 +s}}~ dx. 
\notag \\ \end{align*} Using \eqref{estim:alpha:1} we get for $0<\alpha < \frac{\crits-2}{\crits-1}\left( \frac{\bp-\bm}{2}\right)$ \begin{align}\label{estim:1:I2:N} I^{N}_{2}\leq &~C~ \frac{d(\ze,\bdry)}{|\ze|^{\bp}} \int \limits_{D^{N}_{2,\eps}} \frac{|\ue(x)|^{\crits-1-\pe}}{|x|^{\bm -1+s}}~ dx \notag \\ \leq &~C~ \frac{d(\ze,\bdry)}{|\ze|^{\bp}} \mu_{N,\eps}^{\left( \frac{\bp-\bm}{2}-\alpha\right)(\crits-1-\pe)} \int \limits_{D^{N}_{2,\eps}} \frac{|x|^{-\bm+1 -s} }{|x|^{(\crits-1-\pe)(\bp-1-\alpha)}}~ dx \notag \\ & + C~ \frac{d(\ze,\bdry)}{|\ze|^{\bp}} \int \limits_{D^{N}_{2,\eps}} \frac{\Vert |x|^{\bm-1}u_0 \vert|^{\crits-1-\pe}_{L^{\infty}(\Omega)} }{|x|^{(\crits-1-\pe)(\bm-1+\alpha)+\bm -1+s}}~ dx \notag \\ \leq &~C~ \frac{d(\ze,\bdry)}{|\ze|^{\bp}} \mu_{N,\eps}^{\frac{\bp-\bm}{2}} \int \limits_{1 \leq |x| } \frac{1}{|x|^{n+ (\crits-2-\pe)\left( \frac{\bp-\bm}{2}\right)-\alpha (\crits-1-\pe)}}~ dx \notag \\ & + C~ \frac{d(\ze,\bdry)}{|\ze|^{\bp}} \int \limits_{|x| \leq \frac{1}{2}|\ze|} \frac{\Vert |x|^{\bm-1}u_0 \vert|^{\crits-1-\pe}_{L^{\infty}(\Omega)} }{|x|^{ (\crits-\pe) (\bm-1) +s +\alpha(\crits-1-\pe)}}~ dx \notag \\ \leq &~C~ \ \frac{\mu_{N,\eps}^{\frac{\bp-\bm}{2}} |\ze|}{|\ze|^{\bp}} \int \limits_{1 \leq |x| } \frac{1}{|x|^{n+ (\crits-2-\pe)\left( \frac{\bp-\bm}{2}\right)-\alpha (\crits-1-\pe)}}~ dx \notag \\ & + C~ \frac{|\ze|^{(\crits-2-\pe)\left( \frac{\bp-\bm}{2}\right)-\alpha (\crits-1-\pe)} \Vert |x|^{\bm-1}u_0 \vert|^{\crits-1-\pe}_{L^{\infty}(\Omega)} }{|\ze|^{\bm}} |\ze| \notag \\ \leq &~ C \left(\ \frac{\mu_{N,\eps}^{\frac{\bp-\bm}{2}} |\ze|}{|\ze|^{\bp}} + \frac{ \Vert |x|^{\bm-1}u_0 \vert|^{\crits-1-\pe}_{L^{\infty}(\Omega)} }{|\ze|^{\bm}} |\ze| \right). 
\end{align} \noindent For the next integral \begin{align*} I^{N}_{3} :=&~C ~\int \limits_{D^{N}_{3,\eps}} \left(\frac{\max\{|\ze|,|x|\}}{\min\{|\ze|,|x|\}}\right)^{\bm} \frac{d(x,\bdry) d(\ze,\bdry)}{|x-\ze|^{n}} \frac{|\ue(x)|^{\crits-1-\pe}}{|x|^s}~ dx\notag \\ \leq &~C~d(\ze,\bdry) \int \limits_{D^{N}_{3,\eps}} \frac{|x| }{|x-\ze|^{n}}\frac{|\ue(x)|^{\crits-1-\pe}}{|x|^s}~ dx. \end{align*} From \eqref{estim:alpha:1} we get for $0<\alpha < \frac{\crits-2}{\crits-1}\left( \frac{\bp-\bm}{2}\right)$ \begin{align}\label{estim:1:I3:N} I^{N}_{3} \leq &~C ~d(\ze,\bdry) \mu_{N,\eps}^{\left( \frac{\bp-\bm}{2}-\alpha\right)(\crits-1-\pe)} \int \limits_{D^{N}_{3,\eps}} \frac{|x|^{1-s} |x-\ze|^{-n}}{|x|^{(\bp-1 -\alpha)(\crits -1-\pe)}}~ dx \notag \\ & + Cd(\ze,\bdry) \int \limits_{D^{N}_{3,\eps}} \frac{ |x|^{1-s} |x-\ze|^{-n}}{|x|^{(\bm-1 +\alpha)(\crits -1-\pe)}}\Vert |x|^{\bm-1}u_0 \vert|^{\crits-1-\pe}_{L^{\infty}(\Omega)} ~dx \notag \\ \leq &~C ~\frac{ \mu_{N,\eps}^{\left( \frac{\bp-\bm}{2}-\alpha\right)(\crits-1-\pe)} d(\ze,\bdry)}{|\ze|^{(\bp -1-\alpha)(\crits -1-\pe)+s-1}} \int \limits_{D^{N}_{3,\eps}} |x-\ze|^{-n} ~ dx \notag \\ & + C \frac{ \Vert |x|^{\bm-1}u_0 \vert|^{\crits-1-\pe}_{L^{\infty}(\Omega)}d(\ze,\bdry) }{|\ze|^{(\bm -1+\alpha)(\crits -1-\pe)+s-1}} \int \limits_{D^{N}_{3,\eps}} |x-\ze|^{-n}~ dx \notag \\ \leq &~C ~ \frac{\mu_{N,\eps}^{\frac{\bp-\bm}{2}} d(\ze,\bdry) }{|\ze|^{\bp}} \left( \frac{\mu_{N,\eps}}{|\ze|}\right)^{(\crits-2-\pe)\left( \frac{\bp-\bm}{2}\right)-\alpha (\crits-1-\pe)} \notag \\ & + C~\frac{ \Vert |x|^{\bm-1}u_0 \vert|^{\crits-1-\pe}_{L^{\infty}(\Omega)}d(\ze,\bdry)}{|\ze|^{\bm}} |\ze|^{(\crits-2-\pe)\left( \frac{\bp-\bm}{2}\right)-\alpha (\crits-1-\pe)} \notag \\ \leq &~ C~\left(\frac{\mu_{N,\eps}^{\frac{\bp-\bm}{2}}|\ze|}{|\ze|^{\bp}}+\frac{\Vert |x|^{\bm-1}u_0 \vert|^{\crits-1-\pe}_{L^{\infty}(\Omega)}}{|\ze|^{\bm -1}} |\ze|\right). 
\end{align} \noindent Finally we estimate \begin{align*} I^{N}_{4} :=&~C~ \int \limits_{D^{N}_{4,\eps}}\left(\frac{\max\{|\ze|,|x|\}}{\min\{|\ze|,|x|\}}\right)^{\bm} \frac{d(x,\bdry) d(\ze,\bdry)}{|x-\ze|^{n}} \frac{|\ue(x)|^{\crits-1-\pe}}{|x|^s}~ dx \notag \\ \leq &~C~\frac{d(\ze,\bdry)}{|\ze|^{\bm}} \int \limits_{2 |\ze| \leq |x|} |x|^{\bm+1-n} \frac{|\ue(x)|^{\crits-1-\pe}}{|x|^s}~ dx \notag \\ \leq &~C~ \frac{d(\ze,\bdry)}{|\ze|^{\bm}} \int \limits_{2 |\ze| \leq |x|} \frac{|\ue(x)|^{\crits-1-\pe}}{|x|^{\bp+s-1}}~ dx. \notag \\ \end{align*} Then from \eqref{estim:alpha:1} we get for $0<\alpha < \frac{\crits-2}{\crits-1}\left( \frac{\bp-\bm}{2}\right)$ \begin{align}\label{estim:1:I4:N} I^{N}_{4}\leq &~C~\frac{\mu_{N,\eps}^{\left( \frac{\bp-\bm}{2}-\alpha\right)(\crits-1-\pe)}d(\ze,\bdry)}{|\ze|^{\bm}} \int \limits_{2 |\ze| \leq |x|} \frac{|x|^{\alpha(\crits-1-\pe)}}{|x|^{(\crits-\pe )(\bp-1)+s}}~ dx \notag \\ & +C~ \frac{d(\ze,\bdry)}{|\ze|^{\bm}} \int \limits_{2 |\ze| \leq |x|} \frac{\Vert |x|^{\bm-1}u_0 \vert|^{\crits-1-\pe}_{L^{\infty}(\Omega)}}{|x|^{ \bp+s+\bm(\crits-1-\pe)+\alpha(\crits-1-\pe)}}~ dx \notag \\ \leq &~C~\frac{\mu_{N,\eps}^{\left( \frac{\bp-\bm}{2}-\alpha\right)(\crits-1-\pe)}d(\ze,\bdry)}{|\ze|^{\bm}} \int \limits_{2 |\ze| \leq |x|} \frac{|x|^{\alpha(\crits-1-\pe)}}{|x|^{n+( \crits-\pe)\left( \frac{\bp-\bm}{2}\right)}}~ dx \notag \\ & +C~ \frac{d(\ze,\bdry)}{|\ze|^{\bm}} \int \limits_{2 |\ze| \leq |x|} \frac{\Vert |x|^{\bm-1}u_0 \vert|^{\crits-1-\pe}_{L^{\infty}(\Omega)}}{|x|^{ n- \left[(\crits-2-\pe)\left( \frac{\bp-\bm}{2}\right)-\alpha (\crits-1-\pe)\right]}}~ dx \notag \\ \leq &~C~\frac{\mu_{N,\eps}^{\left( \frac{\bp-\bm}{2}-\alpha\right)(\crits-1-\pe)}}{|\ze|^{\bm}} \frac{d(\ze,\bdry)}{|\ze|^{ (\crits-\pe)\left( \frac{\bp-\bm}{2}\right)-\alpha (\crits-1-\pe)}} \notag \\ & + C~\frac{\Vert |x|^{\bm-1}u_0 \vert|^{\crits-1-\pe}_{L^{\infty}(\Omega)}d(\ze,\bdry)}{|\ze|^{\bm}} \notag \\ \leq &~C~\frac{\mu_{N,\eps}^{ 
\frac{\bp-\bm}{2}}d(\ze,\bdry)}{|\ze|^{\bp}} \left( \frac{\mu_{N,\eps}}{|\ze|}\right)^{(\crits-2-\pe)\left( \frac{\bp-\bm}{2}\right)-\alpha (\crits-1-\pe)} \notag \\
&+ C~\frac{\Vert |x|^{\bm-1}u_0 \Vert^{\crits-1-\pe}_{L^{\infty}(\Omega)}d(\ze,\bdry)}{|\ze|^{\bm}} \notag \\
\leq &~ C~\left(\frac{\mu_{N,\eps}^{\frac{\bp-\bm}{2}}|\ze|}{|\ze|^{\bp}}+\frac{\Vert |x|^{\bm-1}u_0 \Vert^{\crits-1-\pe}_{L^{\infty}(\Omega)}}{|\ze|^{\bm }} |\ze|\right).
\end{align}

\noindent Combining \eqref{estim:1:I1:N}, \eqref{estim:1:I2:N}, \eqref{estim:1:I3:N} and \eqref{estim:1:I4:N}, we then obtain for some constant $C>0$
\begin{eqnarray*}
\int \limits_\Omega G_{\epsilon}(\ze,x) \frac{|\ue (x)|^{\crits-1-\pe} }{|x|^s}~dx &\leq& ~ C~\left(\frac{\mu_{N,\eps}^{\frac{\bp-\bm}{2}}|\ze|}{|\ze|^{\bp}}\right.\\
&&\left.+\frac{\Vert |x|^{\bm-1}u_0 \Vert^{\crits-1-\pe}_{L^{\infty}(\Omega)}}{|\ze|^{\bm }} |\ze| \right),
\end{eqnarray*}
which, up to enlarging the constant $C>0$, we write as
\begin{eqnarray*}
&&\int \limits_\Omega G_{\epsilon}(\ze,x) \frac{|\ue (x)|^{\crits-1-\pe} }{|x|^s}~dx \\
&&\leq ~ C~\left(\frac{\mu_{N,\eps}^{\frac{\bp-\bm}{2}}|\ze|}{|\ze|^{\bp}}+\frac{\Vert |x|^{\bm-1}u_0 \Vert_{L^{\infty}(\Omega)}}{|\ze|^{\bm }}|\ze| \right)
\end{eqnarray*}
for some $C>0$. This proves \eqref{estim:1:ze}. \qed \\

\noindent{\bf Step \ref{sec:se:1}.6:} There exists $C>0$ such that for any sequence of points $(\ze)$ in $ B_{k_{1,\eps}}(0) \cap \Omega$ we have
\begin{align}\label{estim:1:tis:ze}
\int \limits_\Omega G_{\epsilon}(\ze,x) \frac{|\ue (x)|^{\crits-1-\pe} }{|x|^s}~dx \leq C~ \frac{|\ze|}{\mu_{1,\eps}^{\frac{\bp-\bm}{2}} |\ze|^{\bm }} .
\end{align}

\noindent{\it Proof of Step \ref{sec:se:1}.6:} Here again, to estimate the right-hand side of \eqref{ineq:1} in this case, we split $\Omega$ into four subdomains: $\Omega = \bigcup \limits_{i=1}^{4} D^{1}_{i,\eps}$, where
\begin{itemize}
\item $D^{1}_{1,\eps} :=\{|x|< \frac{1}{2}|\ze|\} \cap \Omega,$
\item $D^{1}_{2,\eps}:=\{\frac{1}{2}|\ze|<|x|< 2|\ze|\} \cap \Omega,$
\item $D^{1}_{3,\eps}:=\{ 2|\ze|<|x| \leq k_{1,\eps} \} \cap \Omega, $
\item $D^{1}_{4,\eps}:=\{ k_{1,\eps} <|x|\}\cap \Omega . $
\end{itemize}
Note that one has $\frac{1}{2}|\ze|< |x-\ze|$ in $D^{1}_{1,\eps}$ and $\frac{1}{2}|x|< |x-\ze|$ in $D^{1}_{3,\eps}$. We then have
\begin{align*}
I^{1}_{1} :=&~C \int \limits_{D^{1}_{1,\eps}} \left(\frac{\max\{|\ze|,|x|\}}{\min\{|\ze|,|x|\}}\right)^{\bm} \frac{d(x,\bdry) d(\ze,\bdry) }{|x-\ze|^{n}} \frac{|\ue(x)|^{\crits-1-\pe}}{|x|^s}~ dx \notag \\
\leq &~C d(\ze,\bdry)\frac{|\ze|^{\bm}}{|\ze|^{n-2}} \int \limits_{D^{1}_{1,\eps}} \frac{|\ue(x)|^{\crits-1-\pe}}{|x|^{\bm +s-1}}~ dx.
\notag \\ \end{align*} Using \eqref{estim:alpha:1:tis} we get for $0<\alpha < \frac{\crits-2}{\crits-1}\left( \frac{\bp-\bm}{2}\right)$ \begin{align}\label{estim:1:I1:tis} I^{1}_{1}\leq &~C~ \frac{d(\ze,\bdry)}{|\ze|^{\bp}} \int \limits_{D^{1}_{1,\eps}} \frac{|\ue(x)|^{\crits-1-\pe}}{|x|^{\bm +s-1}}~ dx \notag \\ \leq &~ C~ \frac{ \mu_{1,\eps}^{-\left( \frac{\bp-\bm}{2}-\alpha\right)(\crits-1-\pe)} d(\ze,\bdry)}{|\ze|^{\bp}} \int \limits_{D^{1}_{1,\eps}} \frac{|x|^{-\bm -s+1}}{|x|^{(\crits-1-\pe)(\bm-1+\alpha)}}~ dx \notag \\ \leq &~ C~\frac{ \mu_{1,\eps}^{-\left( \frac{\bp-\bm}{2}-\alpha\right)(\crits-1-\pe)} d(\ze,\bdry)}{|\ze|^{\bp}} \int \limits_{|x| \leq \frac{1}{2}|\ze|} \frac{|x|^{-\alpha(\crits-1-\pe)}}{|x|^{( \crits -\pe)(\bm-1) +s }}~ dx \notag \\ \leq &~ C~ \left(\frac{|\ze| }{\mu_{1,\eps}} \right)^{(\crits-2-\pe)\left( \frac{\bp-\bm}{2}\right)-\alpha (\crits-1-\pe)} \frac{d(\ze,\bdry)}{\mu_{1,\eps}^{\frac{\bp-\bm}{2}}|\ze|^{\bm}} \notag \\ \leq &~ C~\frac{|\ze|}{\mu_{1,\eps}^{\frac{\bp-\bm}{2}}|\ze|^{\bm}}. \end{align} \noindent Next we have \begin{align*} I^{1}_{2} :=&~C \int \limits_{D^{1}_{2,\eps}} \left(\frac{\max\{|\ze|,|x|\}}{\min\{|\ze|,|x|\}}\right)^{\bm} \frac{d(x,\bdry) d(\ze,\bdry) }{|x-\ze|^{n}} \frac{|\ue(x)|^{\crits-1-\pe}}{|x|^s}~ dx \notag \\ \leq &~C ~d(\ze,\bdry) \int \limits_{D^{1}_{2,\eps}} \frac{d(x,\bdry) }{|x-\ze|^{n}} \frac{|\ue(x)|^{\crits-1-\pe}}{|x|^s}~ dx. 
\end{align*} From \eqref{estim:alpha:1:tis} we get for $0<\alpha < \frac{\crits-2}{\crits-1}\left( \frac{\bp-\bm}{2}\right)$ \begin{align}\label{estim:1:I2:tis} I^{1}_{2} \leq &~C~ \mu_{1,\eps}^{-\left( \frac{\bp-\bm}{2}-\alpha\right)(\crits-1-\pe)} d(\ze,\bdry) \int \limits_{D^{1}_{2,\eps}} \frac{|x|^{-s+1} |x-\ze|^{-n}}{|x|^{(\bm-1 +\alpha)(\crits -1-\pe)}}~ dx \notag \\ \leq &~ C~ \frac{\mu_{1,\eps}^{-\left( \frac{\bp-\bm}{2}-\alpha\right)(\crits-1-\pe)}d(\ze,\bdry)}{|\ze|^{(\bm -1+\alpha)(\crits -1-\pe)+s-1}} \int \limits_{D^{1}_{2,\eps}} |x|^{-n}~ dx \notag \\ \leq &~ C~ \left(\frac{|\ze| }{\mu_{1,\eps}} \right)^{(\crits-2-\pe)\left( \frac{\bp-\bm}{2}\right)-\alpha (\crits-1-\pe)} \frac{d(\ze,\bdry) }{\mu_{1,\eps}^{\frac{\bp-\bm}{2}}|\ze|^{\bm}} \notag \\ \leq &~ C~\frac{|\ze|}{\mu_{1,\eps}^{\frac{\bp-\bm}{2}}|\ze|^{\bm}}. \end{align} \noindent For \begin{align*} I^{1}_{3}: =&~C \int \limits_{D^{1}_{3,\eps}} \left(\frac{\max\{|\ze|,|x|\}}{\min\{|\ze|,|x|\}}\right)^{\bm} \frac{d(x,\bdry) d(\ze,\bdry) }{|x-\ze|^{n}} \frac{|\ue(x)|^{\crits-1-\pe}}{|x|^s}~ dx \notag \\ \leq &~C ~\frac{d(\ze,\bdry)}{|\ze|^{\bm}} \int \limits_{2 |\ze| \leq |x|\leq k_{1,\eps}} |x|^{\bm+1-n} \frac{|\ue(x)|^{\crits-1-\pe}}{|x|^s}~ dx \notag \\ \leq &~C~\frac{d(\ze,\bdry)}{|\ze|^{\bm}} \int \limits_{2 |\ze| \leq |x| \leq k_{1,\eps}} \frac{|\ue(x)|^{\crits-1-\pe}}{|x|^{\bp+s-1}}~ dx. 
\end{align*}
Then from \eqref{estim:alpha:1:tis} we get for $0<\alpha < \frac{\crits-2}{\crits-1}\left( \frac{\bp-\bm}{2}\right)$
\begin{align*}
I^{1}_{3}\leq &~C~ \frac{\mu_{1,\eps}^{-\left( \frac{\bp-\bm}{2}-\alpha\right)(\crits-1-\pe)}d(\ze,\bdry)}{|\ze|^{\bm}} A_\eps
\end{align*}
where
$$A_\eps:=\int \limits_{2 |\ze| \leq |x| \leq k_{1,\eps}} \frac{1}{|x|^{ \bp+s-1+(\bm-1-\pe)(\crits-1)+\alpha(\crits-1-\pe)}}~ dx.$$
With a change of variable, we get that
\begin{eqnarray*}
A_\eps &\leq &\int \limits_{2 |\ze| \leq |x| \leq k_{1,\eps}} \frac{dx}{|x|^{ n- \left[(\crits-2-\pe)\left( \frac{\bp-\bm}{2}\right)-\alpha (\crits-1-\pe)\right]}}\\
&\leq & \mu_{1,\eps}^{\left( \frac{\bp-\bm}{2}\right)(\crits-2-\pe)-\alpha(\crits-1-\pe)}\int \limits_{B_{1}(0)} \frac{|x|^{ -\alpha (\crits-1-\pe)}dx}{|x|^{ n- (\crits-2)\left( \frac{\bp-\bm}{2}\right)}}\\
&\leq & C \mu_{1,\eps}^{\left( \frac{\bp-\bm}{2}\right)(\crits-2-\pe)-\alpha(\crits-1-\pe)},
\end{eqnarray*}
and then
\begin{align}\label{estim:1:I3:tis}
I^{1}_{3}\leq &~ C~ \frac{d(\ze,\bdry)}{\mu_{1,\eps}^{\frac{\bp-\bm}{2}}|\ze|^{\bm}} \leq C~ \frac{|\ze|}{\mu_{1,\eps}^{\frac{\bp-\bm}{2}}|\ze|^{\bm}}.
\end{align}

\noindent For the last integral, we use point (A5) of Proposition \ref{prop:exhaust} and a change of variable to get
\begin{align}
I^{1}_{4} :=&~C \int \limits_{D^{1}_{4,\eps}} \left(\frac{\max\{|\ze|,|x|\}}{\min\{|\ze|,|x|\}}\right)^{\bm} \frac{d(x,\bdry) d(\ze,\bdry) }{|x-\ze|^{n}} \frac{|\ue(x)|^{\crits-1-\pe}}{|x|^s}~ dx \notag \\
\leq &~C~\frac{d(\ze,\bdry)}{|\ze|^{\bm}} \int \limits_{|x| \geq k_{1,\eps}} \frac{|\ue(x)|^{\crits-1-\pe}}{|x|^{\bp+s-1}}~ dx \notag \\
\leq &~C ~\frac{d(\ze,\bdry)}{|\ze|^{\bm}} \int \limits_{|x| \geq k_{1,\eps}} \frac{1}{|x|^{\bp+s -1+\frac{n-2}{2}(\crits-1-\pe) }}~ dx \label{estim:1:I4:tis}\\
\leq &~ C~ \frac{d(\ze,\bdry)}{\mu_{1,\eps}^{\frac{\bp-\bm}{2}}|\ze|^{\bm}} \int \limits_{|x| \geq 1 }\frac{dx}{|x|^{ n+\frac{\bp-\bm}{2}} }\leq C~ \frac{|\ze|}{\mu_{1,\eps}^{\frac{\bp-\bm}{2}}|\ze|^{\bm}}.\notag
\end{align}

\noindent Combining \eqref{estim:1:I1:tis}, \eqref{estim:1:I2:tis}, \eqref{estim:1:I3:tis} and \eqref{estim:1:I4:tis}, we then obtain \eqref{estim:1:tis:ze}. $\Box$

\noindent{\bf Step \ref{sec:se:1}.7:} Let $1 \leq i \leq N-1$. There exists $C>0$ such that for any sequence of points $(\ze)$ in $\left(B_{ k_{i+1,\eps}}(0)\setminus \overline{B}_{ k_{i,\eps}}(0)\right) \cap \Omega$ we have
\begin{align}\label{estim:1:bis:ze}
\int \limits_\Omega G_{\epsilon}(\ze,x) &\frac{|\ue (x)|^{\crits-1-\pe} }{|x|^s}~dx \leq C~\left(\frac{\mu_{i,\eps}^{\frac{\bp-\bm}{2}}|\ze|}{|\ze|^{\bp}}+\frac{|\ze|}{\mu_{i+1,\eps}^{\frac{\bp-\bm}{2}} |\ze|^{\bm }} \right).
\end{align} \noindent{\it Proof of Step \ref{sec:se:1}.7:} To estimate the right-hand-side of \eqref{ineq:1} in this case, we split $\Omega$ into five subdomains as: $\Omega = \bigcup \limits_{j=1}^{5} D^{i}_{j,\eps}$ where \begin{itemize} \item $D^{i}_{1,\eps}:= B_{k_{i,\eps}}(0) \cap \Omega,$\\ \item $D^{i}_{2,\eps} :=\{ k_{i,\eps}<|x|< \frac{1}{2}|\ze|\} \cap \Omega,$\\ \item $D^{i}_{3,\eps}:=\{\frac{1}{2}|\ze|<|x|< 2|\ze|\} \cap \Omega,$\\ \item $D^{i}_{4,\eps}:=\{ 2|\ze|<|x| < k_{i+1,\eps} \} \cap \Omega, $\\ \item $D^{i}_{5,\eps}:=\{ k_{i+1,\eps} <|x| \} \cap \Omega. $ \end{itemize} Note that one has $\frac{1}{2}|\ze|< |x-\ze|$ in $D^{i}_{2,\eps}$ and $\frac{1}{2}|x|< |x-\ze|$ in $D^{i}_{4,\eps}$. \noindent First we have using point (A5) of Proposition \ref{prop:exhaust} and a change of variable \begin{align}\label{estim:1:I1:bis} I^{i}_{1} :=&~C \int \limits_{D^{i}_{1,\eps}} \left(\frac{\max\{|\ze|,|x|\}}{\min\{|\ze|,|x|\}}\right)^{\bm} \frac{d(x,\bdry) d(\ze,\bdry) }{|x-\ze|^{n}} \frac{|\ue(x)|^{\crits-1-\pe}}{|x|^s}~ dx\notag \\ \leq &~C~d(\ze,\bdry) \int \limits_{D^{i}_{1,\eps}} \frac{|\ze|^{\bm}}{|x|^{\bm-1}} |x-\ze|^{-n}\frac{|\ue(x)|^{\crits-1-\pe}}{|x|^s}~ dx \notag \\ \leq & ~C~ \frac{d(\ze,\bdry)}{|\ze|^{\bp}} \int \limits_{D^{i}_{1,\eps}} \frac{|\ue(x)|^{\crits-1-\pe}}{|x|^{\bm-1 +s}} ~ dx \notag \\ \leq & ~C~ \frac{d(\ze,\bdry)}{|\ze|^{\bp}} \int \limits_{B_{k_{i,\eps}}(0)} \frac{1}{|x|^{\bm -1+s+ (\crits-1-\pe)\frac{n-2}{2}}} ~ dx \notag \\ \leq & ~C ~\frac{\mu_{i,\eps}^{\frac{\bp-\bm}{2}} d(\ze,\bdry)}{ |\ze|^{\bp} } \int \limits_{B_{1}(0)} \frac{1}{|x|^{ n- \frac{\bp-\bm}{2}} } ~ dx\leq C~ \frac{\mu_{i,\eps}^{\frac{\bp-\bm}{2}}|\ze|}{ |\ze|^{\bp} } . 
\end{align} \noindent Now we estimate \begin{align*} I^{i}_{2} :=&~C \int \limits_{D^{i}_{2,\eps}} \left(\frac{\max\{|\ze|,|x|\}}{\min\{|\ze|,|x|\}}\right)^{\bm} \frac{d(x,\bdry) d(\ze,\bdry) }{|x-\ze|^{n}} \frac{|\ue(x)|^{\crits-1-\pe}}{|x|^s}~ dx\notag \\ \leq &~C~d(\ze,\bdry)\frac{|\ze|^{\bm}}{|\ze|^{n}} \int \limits_{D^{i}_{2,\eps}} \frac{|\ue(x)|^{\crits-1-\pe}}{|x|^{\bm-1 +s}}~ dx. \notag \\ \notag \\ \end{align*} Using \eqref{estim:alpha:1:bis} we get for $0<\alpha < \frac{\crits-2}{\crits-1}\left( \frac{\bp-\bm}{2}\right)$ \begin{align}\label{estim:1:I2:bis} I^{i}_{2}\leq &~C~ \frac{d(\ze,\bdry)}{|\ze|^{\bp}} \int \limits_{D^{i}_{2,\eps}} \frac{|\ue(x)|^{\crits-1-\pe}}{|x|^{\bm +s}}~ dx \notag \\ \leq &~C~ \frac{d(\ze,\bdry)}{|\ze|^{\bp}} \mu_{i,\eps}^{\left( \frac{\bp-\bm}{2}-\alpha\right)(\crits-1-\pe)} \int \limits_{D^{i}_{2,\eps}} \frac{|x|^{-\bm+1 -s} \, dx}{|x|^{(\crits-1-\pe)(\bp-1-\alpha)}} \notag \\ & + C~ \frac{ \mu_{i+1,\eps}^{-\left( \frac{\bp-\bm}{2}-\alpha\right)(\crits-1-\pe)}d(\ze,\bdry) }{|\ze|^{\bp}} \int \limits_{D^{i}_{2,\eps}} \frac{|x|^{-\bm+1 -s} \, dx}{|x|^{(\crits-1-\pe)(\bm-1+\alpha)}} \notag \\ \leq &~C~ \frac{d(\ze,\bdry)}{|\ze|^{\bp}} \mu_{i,\eps}^{\frac{\bp-\bm}{2}} \int \limits_{1 \leq |x| } \frac{dx}{|x|^{n+ (\crits-2-\pe)\left( \frac{\bp-\bm}{2}\right)-\alpha (\crits-1-\pe)}} \notag \\ & + C~ \frac{ \mu_{i+1,\eps}^{-\left( \frac{\bp-\bm}{2}-\alpha\right)(\crits-1-\pe)}d(\ze,\bdry) }{|\ze|^{\bp}} \int \limits_{|x| \leq \frac{1}{2}|\ze|} \frac{|x|^{-\alpha(\crits-1-\pe)}\, dx}{|x|^{ \crits (\bm-1) +s }} \notag \\ \leq &~C~ \frac{\mu_{i,\eps}^{\frac{\bp-\bm}{2}}d(\ze,\bdry)}{|\ze|^{\bp}} \int \limits_{1 \leq |x| } \frac{dx}{|x|^{n+ (\crits-2-\pe)\left( \frac{\bp-\bm}{2}\right)-\alpha (\crits-1-\pe)}}\notag \\ & + C~ \left(\frac{|\ze| }{\mu_{i+1,\eps}} \right)^{(\crits-2-\pe)\left( \frac{\bp-\bm}{2}\right)-\alpha (\crits-1-\pe)} \frac{d(\ze,\bdry)}{ \mu^{\frac{\bp-\bm}{2}}_{i+1,\eps} |\ze|^{\bm}} \notag \\ \leq &~ C ~\left 
(\frac{\mu_{i,\eps}^{\frac{\bp-\bm}{2}}|\ze|}{|\ze|^{\bp}} + \frac{|\ze|}{ \mu^{\frac{\bp-\bm}{2}}_{i+1,\eps} |\ze|^{\bm}} \right). \end{align} And next \begin{align*} I^{i}_{3} :=&~C \int \limits_{D^{i}_{3,\eps}} \left(\frac{\max\{|\ze|,|x|\}}{\min\{|\ze|,|x|\}}\right)^{\bm} \frac{d(x,\bdry) d(\ze,\bdry) }{|x-\ze|^{n}} \frac{|\ue(x)|^{\crits-1-\pe}}{|x|^s}~ dx\notag \\ \leq &~C~d(\ze,\bdry) \int \limits_{D^{i}_{3,\eps}} \frac{|x| }{|x-\ze|^{n}}\frac{|\ue(x)|^{\crits-1-\pe}}{|x|^s}~ dx. \end{align*} From \eqref{estim:alpha:1:bis} we get for $0<\alpha < \frac{\crits-2}{\crits-1}\left( \frac{\bp-\bm}{2}\right)$ \begin{align}\label{estim:1:I3:bis} I^{i}_{3} \leq &~C ~d(\ze,\bdry) \mu_{i,\eps}^{\left( \frac{\bp-\bm}{2}-\alpha\right)(\crits-1-\pe)} \int \limits_{D^{i}_{3,\eps}} \frac{|x|^{1-s} |x-\ze|^{-n}\, dx}{|x|^{(\bp-1 -\alpha)(\crits -1-\pe)}} \notag \\ & + C~d(\ze,\bdry) \mu_{i+1,\eps}^{-\left( \frac{\bp-\bm}{2}-\alpha\right)(\crits-1-\pe)} \int \limits_{D^{i}_{3,\eps}} \frac{|x|^{1-s} |x-\ze|^{-n}\, dx}{|x|^{(\bm-1 +\alpha)(\crits -1-\pe)}} \notag \\ \leq &~C ~\frac{ \mu_{i,\eps}^{\left( \frac{\bp-\bm}{2}-\alpha\right)(\crits-1-\pe)}d(\ze,\bdry)}{|\ze|^{(\bp -1-\alpha)(\crits -1-\pe)+s-1}} \int \limits_{D^{i}_{3,\eps}} |x-\ze|^{-n} ~ dx \notag \\ & + C~ \frac{ \mu_{i+1,\eps}^{-\left( \frac{\bp-\bm}{2}-\alpha\right)(\crits-1-\pe)}d(\ze,\bdry) }{|\ze|^{(\bm -1+\alpha)(\crits -1-\pe)+s-1}} \int \limits_{D^{i}_{3,\eps}} |x-\ze|^{-n}~ dx \notag \\ \leq &~C ~ \frac{\mu_{i,\eps}^{\frac{\bp-\bm}{2}} d(\ze,\bdry) }{|\ze|^{\bp}} \left( \frac{\mu_{i,\eps}}{|\ze|}\right)^{(\crits-2-\pe)\left( \frac{\bp-\bm}{2}\right)-\alpha (\crits-1-\pe)} \notag \\ & + C~\left(\frac{|\ze| }{\mu_{i+1,\eps}} \right)^{(\crits-2-\pe)\left( \frac{\bp-\bm}{2}\right)-\alpha (\crits-1-\pe)} \frac{ d(\ze,\bdry)}{|\ze|^{\bm}} \notag \\ \leq &~ C~\left(\frac{\mu_{i,\eps}^{\frac{\bp-\bm}{2}}|\ze|}{|\ze|^{\bp}}+\frac{|\ze| }{ \mu^{\frac{\bp-\bm}{2}}_{i+1,\eps} |\ze|^{\bm}} \right). 
\end{align}
Next we estimate
\begin{align*}
I^{i}_{4} :=&~C \int \limits_{D^{i}_{4,\eps}} \left(\frac{\max\{|\ze|,|x|\}}{\min\{|\ze|,|x|\}}\right)^{\bm} \frac{d(x,\bdry) d(\ze,\bdry)}{|x-\ze|^{n}} \frac{|\ue(x)|^{\crits-1-\pe}}{|x|^s}~ dx \notag \\
\leq &~C~\frac{d(\ze,\bdry)}{|\ze|^{\bm}} \int \limits_{2 |\ze| \leq |x| < k_{i+1,\eps}} |x|^{\bm+1-n} \frac{|\ue(x)|^{\crits-1-\pe}}{|x|^s}~ dx \notag \\
\leq &~C~ \frac{d(\ze,\bdry)}{|\ze|^{\bm}} \int \limits_{2 |\ze| \leq |x| < k_{i+1,\eps}} \frac{|\ue(x)|^{\crits-1-\pe}}{|x|^{\bp+s-1}}~ dx.
\end{align*}
Then from \eqref{estim:alpha:1:bis} we get for $0<\alpha < \frac{\crits-2}{\crits-1}\left( \frac{\bp-\bm}{2}\right)$
\begin{eqnarray*}
I^{i}_{4}&\leq& C\frac{d(\ze,\bdry)}{|\ze|^{\bm}}\left(\mu_{i,\eps}^{\left( \frac{\bp-\bm}{2}-\alpha\right)(\crits-1-\pe)} A_\eps \right.\\
&&\left.+ \mu_{i+1,\eps}^{-\left( \frac{\bp-\bm}{2}-\alpha\right)(\crits-1-\pe)} B_\eps\right)
\end{eqnarray*}
with
\begin{eqnarray*}
A_\eps&:=& \int \limits_{2 |\ze| \leq |x| < k_{i+1,\eps}} \frac{|x|^{\alpha(\crits-1-\pe)}\, dx}{|x|^{(\crits-\pe) (\bp-1)+s}}\\
&\leq & \int \limits_{2 |\ze| \leq |x|< k_{i+1,\eps}} \frac{|x|^{\alpha(\crits-1-\pe)}\, dx}{|x|^{n+ (\crits-\pe)\left( \frac{\bp-\bm}{2}\right)}}
\end{eqnarray*}
and
\begin{eqnarray*}
B_\eps&:=&\int \limits_{2 |\ze| \leq |x|< k_{i+1,\eps}} \frac{dx}{|x|^{ \bp+s+\bm(\crits-1-\pe)+\alpha(\crits-1-\pe)}}\\
&\leq & \int \limits_{2 |\ze| \leq |x| < k_{i+1,\eps}} \frac{ dx}{|x|^{ n- \left[(\crits-2-\pe)\left( \frac{\bp-\bm}{2}\right)-\alpha (\crits-1-\pe)\right]}}.
\end{eqnarray*}
Arguing as in the case $i=1$, with a change of variable we get that
\begin{eqnarray}\label{estim:1:I4:bis}
I^{i}_{4}&\leq& C\frac{\mu_{i,\eps}^{ \frac{\bp-\bm}{2}}d(\ze,\bdry)}{|\ze|^{\bp}} \left( \frac{\mu_{i,\eps}}{|\ze|}\right)^{(\crits-2-\pe)\left( \frac{\bp-\bm}{2}\right)-\alpha (\crits-1-\pe)} \notag \\
&& + C\left(\frac{|\ze| }{\mu_{i+1,\eps}} \right)^{(\crits-2-\pe)\left(
\frac{\bp-\bm}{2}\right)-\alpha (\crits-1-\pe)} \frac{ d(\ze,\bdry)}{|\ze|^{\bm}} \notag \\
&\leq& C\left(\frac{\mu_{i,\eps}^{\frac{\bp-\bm}{2}}|\ze|}{|\ze|^{\bp}}+\frac{|\ze| }{ \mu^{\frac{\bp-\bm}{2}}_{i+1,\eps} |\ze|^{\bm}} \right).
\end{eqnarray}
Finally, for the last integral, point (A5) of Proposition \ref{prop:exhaust} and a change of variable give
\begin{align}\label{estim:1:I5:bis}
I^{i}_{5} :=&~C \int \limits_{D^{i}_{5,\eps}}\left(\frac{\max\{|\ze|,|x|\}}{\min\{|\ze|,|x|\}}\right)^{\bm} \frac{d(x,\bdry) d(\ze,\bdry)}{|x-\ze|^{n}} \frac{|\ue(x)|^{\crits-1-\pe}}{|x|^s}~ dx \notag \\
\leq &~C~\frac{d(\ze,\bdry)}{|\ze|^{\bm}} \int \limits_{|x| \geq k_{i+1,\eps}} \frac{|\ue(x)|^{\crits-1-\pe}}{|x|^{\bp+s-1}}~ dx \notag \\
\leq &~C ~\frac{d(\ze,\bdry)}{|\ze|^{\bm}} \int \limits_{|x| \geq k_{i+1,\eps}} \frac{1}{|x|^{\bp+s -1+\frac{n-2}{2}(\crits-1-\pe) }}~ dx \notag\\
\leq &~ C~ \frac{d(\ze,\bdry)}{\mu_{i+1,\eps}^{\frac{\bp-\bm}{2}}|\ze|^{\bm}} \int \limits_{|x| \geq 1 }\frac{dx}{|x|^{ n+\frac{\bp-\bm}{2}} } \notag\\
\leq &~ C~ \frac{|\ze|}{\mu_{i+1,\eps}^{\frac{\bp-\bm}{2}}|\ze|^{\bm}}.
\end{align}

\noindent Then from \eqref{estim:1:I1:bis}, \eqref{estim:1:I2:bis}, \eqref{estim:1:I3:bis}, \eqref{estim:1:I4:bis} and \eqref{estim:1:I5:bis} we get the estimate \eqref{estim:1:bis:ze}. $\Box$

\noindent Combining the estimates \eqref{ineq:1}, \eqref{estim:1:ze}, \eqref{estim:1:tis:ze} and \eqref{estim:1:bis:ze}, we get that there exists a constant $C>0$ such that for any sequence of points $(\ze)$ in $\Omega$ we have
\begin{align*}
|\ue(\ze)| \leq &~ C \left( \sum_{i=1}^N\frac{\mei^{\frac{\bp-\bm}{2}}|\ze|}{ \mei^{\bp-\bm}|\ze|^{\bm}+|\ze|^{\bp}}+ \frac{\Vert |x|^{\bm-1}u_0 \Vert_{L^{\infty}(\Omega)}}{|\ze|^{\bm}} |\ze| \right). \notag
\end{align*}
This completes the proof of Proposition \ref{prop:fund:est}. $\Box$\\

\noindent In our next result we obtain a pointwise control on the gradient.
\begin{proposition}\label{prop:fund:est:grad} Let $\Omega$ be a smooth bounded domain of $\rn$, $n\geq 3$, such that $0\in \bdry $ and assume that $0< s < 2$, $\gamma<\frac{n^2}{4}$. Let $(\ue)$, $(\he)$ and $(\pe)$ be such that $(E_\eps)$, \eqref{hyp:he}, \eqref{lim:pe} and \eqref{bnd:ue} hold. Assume that blow-up occurs, that is
$$\lim_{\eps\to 0}\Vert |x|^{\tau} \ue\Vert_{L^\infty(\Omega)}=+\infty ~\hbox{ for some } ~ \bm-1<\tau<\frac{n-2}{2} .$$
Consider the $\mu_{1,\eps},...,\mu_{N,\eps}$ from Proposition \ref{prop:exhaust}. Then there exists $C>0$ such that for all $\eps>0$
\begin{align}\label{eq:est:grad:global}
|\nabla \ue(x)|\leq C \left(~\sum_{i=1}^N\frac{\mei^{\frac{\bp-\bm}{2}} }{ \mei^{\bp-\bm}|x|^{\bm}+|x|^{\bp}}+\frac{ \Vert |x|^{\bm-1}u_0 \Vert_{L^{\infty}(\Omega)} }{|x|^{\bm}} \right)
\end{align}
for all $x\in \overline{\Omega} \setminus \{ 0\}$. \end{proposition}

\noindent{\it Proof of Proposition \ref{prop:fund:est:grad}:} We let $G_{\epsilon}$ be the Green's function of the coercive operator $-\Delta-\frac{\gamma}{|x|^{2}}-h_{\epsilon}$ on $\Omega$ with Dirichlet boundary condition. Differentiating the Green's representation formula, and then using the pointwise bounds \eqref{ineq:grad:G} on the gradient of the Green's function and the regularity result Theorem \ref{th:hopf}, yields for any $ z \in \Omega$
\begin{align}\label{estim:grad:global:z}
\ue(z) =& \int \limits_\Omega G_{\epsilon}(z,x) \frac{|\ue (x)|^{\crits-2-\pe} ~\ue(x)}{|x|^s}dx \notag \\
|\nabla \ue(z)| \leq & ~C~\int \limits_\Omega \left|\nabla_{z} G_{\epsilon}(z,x) \right| \frac{|\ue (x)|^{\crits-1-\pe} }{|x|^s} dx \notag \\
\leq&~ C\int \limits_\Omega \left(\frac{\max\{|z|,|x|\}}{\min\{|z|,|x|\}}\right)^{\bm} \frac{d(x,\bdry) }{|x-z|^{n}} \frac{|\ue(x)|^{\crits-1-\pe}}{|x|^s}~ dx \notag.
\end{align}

\noindent Then, using the pointwise estimates \eqref{eq:est:global}, the proof goes exactly as in Proposition \ref{prop:fund:est}. $\Box$

\section{\, Sharp blow-up rates and the proof of compactness}\label{sec:cpct}

The proof of compactness relies on the following two key propositions.

\begin{proposition}\label{prop:rate:sc} Let $\Omega$ be a smooth bounded domain of $\rn$, $n\geq 3$, such that $0\in \bdry $ and assume that $0< s < 2$, $\gamma<\frac{n^2}{4}$. Let $(\ue)$, $(\he)$ and $(\pe)$ be such that $(E_\eps)$, \eqref{hyp:he}, \eqref{lim:pe} and \eqref{bnd:ue} hold. Assume that blow-up occurs, that is
\begin{equation}\label{blowup}
\lim_{\eps\to 0}\Vert |x|^{\tau} \ue\Vert_{L^\infty(\Omega)}=+\infty ~\hbox{ for some } ~ \bm-1<\tau<\frac{n-2}{2} .
\end{equation}
Consider the $\mu_{1,\eps},...,\mu_{N,\eps}$ and $t_{1},...,t_{N}$ from Proposition \ref{prop:exhaust}. Suppose that
\begin{equation}
\hbox{either }\{\,\bp-\bm>2\,\}\hbox{ or }\{\bp-\bm>1\hbox{ and }u_0\equiv 0\}.
\end{equation}
Then, we have the following blow-up rate:
\begin{align}\label{rate:1}
\lim \limits_{\eps \to 0} \frac{\pe}{\mu_{N,\eps}}&~=c_{n,s,t_N}\frac{ \int \limits_{\partial\rnm} II_{0}(x,x) |\nabla \tu_{N}|^2 ~d\sigma}{ \sum \limits_{i=1}^{N} \frac{1}{t_{i}^{ \frac{n-2}{\crits-2}}} \int \limits_{\rnm } \frac{|\tu_{i}|^{\crits }}{|x|^{s}} ~ dx }.
\end{align}
Here $II_{0}$ denotes the second fundamental form of $\bdry$ at $0 \in \bdry$ and
\begin{equation*}
c_{n,s,t_N}:=\frac{n-s}{(n-2)^{2}} \frac{1}{ t_{N}^{\frac{n-1}{\crits-2}}}.
\end{equation*}
\end{proposition}

\begin{proposition}[\textit{The positive case}]\label{prop:rate:sc:2} Let $\Omega$ be a smooth bounded domain of $\rn$, $n\geq 3$, such that $0\in \bdry $ and assume that $0< s < 2$, $\gamma<\frac{n^2}{4}$. Let $(\ue)$, $(\he)$ and $(\pe)$ be as in Proposition \ref{prop:rate:sc} and let $H(0)$ denote the mean curvature of $\partial\Omega$ at $0$. Assume that blow-up occurs as in \eqref{blowup}.
Consider the $\mu_{1,\eps},...,\mu_{N,\eps}$ and $t_{1},...,t_{N}$ from Proposition \ref{prop:exhaust}. Suppose in addition that \begin{equation} \ue>0 \qquad \hbox{for all } \eps >0. \end{equation} We define \begin{equation} C_{n,s,(t_i), (\tu)_i}:=\frac{c_{n,s,t_N} \int \limits_{\partial\rnm} |x|^2 |\nabla \tu_{N}|^2 ~d\sigma}{ (n-1)\sum \limits_{i=1}^{N} \frac{1}{t_{i}^{ \frac{n-2}{\crits-2}}} \int \limits_{\rnm } \frac{|\tu_{i}|^{\crits }}{|x|^{s}} ~ dx }. \end{equation} Then, we have the following blow-up rates: \noindent 1) When $\bp-\bm\geq 2$, then \begin{align*} \lim \limits_{\eps \to 0} \frac{\pe}{\mu_{N,\eps}}&~=C_{n,s,(t_i), (\tu)_i}\cdot H(0)\,\, \hbox{ if }\left\{\begin{array}{l} \bp-\bm>2\\ \hbox{ or }\bp-\bm=2\hbox{ and }u_0\equiv 0 \end{array}\right\}. \end{align*} \begin{align*} \lim \limits_{\eps \to 0} \frac{\pe}{\mu_{N,\eps}}&~=C_{n,s,(t_i), (\tu)_i}\cdot H(0)-\tilde{K} \,\, \hbox{ if }\bp-\bm=2\hbox{ and }u_0>0, \end{align*} for some $\tilde{K}>0$. \noindent 2) When $\bp-\bm<2$, then $u_0\equiv 0$ and \begin{align*} \lim \limits_{\eps \to 0} \frac{\pe}{\mu_{N,\eps}}&~=C_{n,s,(t_i), (\tu)_i}\cdot H(0)\quad \hbox{ if }\bp-\bm>1.\end{align*} \begin{equation}\label{asymp:crit:log} \lim \limits_{\eps \to 0} \frac{\pe}{\mu_{N,\eps}\ln\frac{1}{\mu_{\eps,N}}}=C'_{n,s,(t_i), (\tu)_i}\cdot H(0)\quad \hbox{ if }\bp-\bm=1 \end{equation} \begin{equation} \lim_{\eps\to 0}\frac{p_\eps}{\mu_{N,\eps}^{\bp-\bm}}=-\chi\cdot m_{\gamma,h}(\Omega)\quad \hbox{ if }\bp-\bm<1\label{est:mass} \end{equation} where $$C'_{n,s,(t_i), (\tu)_i}:=\frac{n-s}{(n-2)^{2}}\frac{K^2\omega_{n-2}}{(n-1) \sum \limits_{i=1}^{N} \frac{1}{t_{i}^{ \frac{n-2}{\crits-2}}} \int \limits_{\rnm } \frac{|\tu_{i}|^{\crits }}{|x|^{s}} ~ dx}, $$ the constant $K$ is as in \eqref{def:K}, $\chi>0$ is a constant and $m_{\gamma,h}(\Omega)$ is the boundary mass defined in Theorem \ref{thdefi:mass}.
\end{proposition} \noindent {\it Proof of Theorems \ref{th:cpct:sc}, \ref{th:cpct:sc:2} and \ref{th:cpct:sc:3}:} We argue by contradiction and assume that the family is not pre-compact. Then, up to a subsequence, it blows up. We then apply Propositions \ref{prop:rate:sc} and \ref{prop:rate:sc:2} to get the blow-up rate (which is nonnegative). However, the hypotheses of Theorems \ref{th:cpct:sc}, \ref{th:cpct:sc:2} and \ref{th:cpct:sc:3} yield negative blow-up rates. This is a contradiction, and therefore the family is pre-compact. This proves the theorems.\qed \noindent We now establish Propositions \ref{prop:rate:sc} and \ref{prop:rate:sc:2}. The proof is divided into 13 steps in Sections \ref{loc pohoaev id} to \ref{pf blow-up rates}. These steps are numbered Steps P1, P2, etc. \section{\, Estimates on the localized Pohozaev identity}\label{loc pohoaev id} \noindent In the sequel, we let $(\ue)$, $(\he)$ and $(\pe)$ be such that $(E_\eps)$, \eqref{hyp:he}, \eqref{lim:pe} and \eqref{bnd:ue} hold. We assume that blow-up occurs. Note that, since $\bp-\bm=2\sqrt{\frac{n^2}{4}-\gamma}$, $$\gamma<\frac{n^2}{4}-1\; \Leftrightarrow\;\bp-\bm>2,$$ and $$\gamma<\frac{n^2-1}{4}\; \Leftrightarrow\;\bp-\bm>1.$$ In the sequel, we will repeatedly use the following consequence of (A9) of Proposition \ref{prop:exhaust}: for all $i=1,...,N$, there exists $c_i>1$ such that \begin{equation}\label{bnd:k:mu} c_i^{-1} \mu_{\eps,i}\leq k_{\eps,i}\leq c_i \mu_{\eps,i}. \end{equation} \begin{step}[Pohozaev identity]\label{step:p1} We let $(\ue)$, $(\he)$ and $(\pe)$ be such that $(E_\eps)$, \eqref{hyp:he}, \eqref{lim:pe} and \eqref{bnd:ue} hold. We assume that blow-up occurs.
We define \begin{align}\label{def:Fe} F_{\eps}(x):=&~( x, \nu) \left( \frac{|\nabla \ue|^2}{2} -\frac{\gamma}{2}\frac{\ue^{2}}{|x|^{2}} - \frac{h_{\eps}(x)}{2} \ue^{2} - \frac{1}{\crits -p_{\eps}} \frac{|\ue|^{\crits-p_{\eps} }}{|x|^{s}} \right) \notag\\ & ~- \left(x^i\partial_i \ue+\frac{n-2}{2} \ue \right)\partial_\nu \ue \end{align} We let $\T$ be a chart at $0$ as in \eqref{def:T:bdry}. We define $\re:=\sqrt{\mu_{N,\eps}}$. Then \begin{align}\label{PohoId3} & \int \limits_{\T\left( \rnm \cap B_{ r_{\eps} }(0) \setminus B_{ k_{1,\eps}^{3} }(0) \right)} \left( h_{\eps}(x) +\frac{ \left( \nabla h_{\eps}, x \right)}{2} \right)\ue^{2}~ dx \notag\\ & +\frac{p_{\eps}}{\crits} \left( \frac{n-s}{\crits -p_{\eps}}\right) \int \limits_{\T\left( \rnm \cap B_{ r_{\eps} }(0) \setminus B_{ k_{1,\eps}^{3} }(0) \right) } \frac{|\ue|^{\crits-p_{\eps} }}{|x|^{s}} ~ dx \notag \\ =~& - \int \limits_{ \T \left( \rnm \cap \partial B_{ r_{\eps} }(0) \right) } F_{\eps}(x)~d\sigma +\int \limits_{ \T \left( \rnm \cap \partial B_{ k_{1,\eps}^{3} }(0) \right) } F_{\eps}(x)~d\sigma \notag\\ & + \int \limits_{ \T\left( \partial \rnm \cap B_{ r_{\eps} }(0) \setminus B_{ k_{1,\eps}^{3} }(0) \right) } ( x, \nu) \frac{|\nabla \ue|^2}{2} ~d\sigma \end{align} and, for $\delta_0>0$ small enough, \begin{align}\label{PohoId3:bis} & \int \limits_{\T\left( \rnm \cap B_{\delta_0 }(0) \setminus B_{ k_{1,\eps}^{3} }(0) \right)} \left( h_{\eps}(x) +\frac{ \left( \nabla h_{\eps}, x \right)}{2} \right)\ue^{2}~ dx \notag\\ & \qquad \qquad +\frac{p_{\eps}}{\crits} \left( \frac{n-s}{\crits -p_{\eps}}\right) \int \limits_{\T\left( \rnm \cap B_{ \delta_0 }(0) \setminus B_{ k_{1,\eps}^{3} }(0) \right) } \frac{|\ue|^{\crits-p_{\eps} }}{|x|^{s}} ~ dx \notag \\ =~& -\int \limits_{ \T \left( \rnm \cap \partial B_{ \delta_0 }(0) \right) } F_{\eps}(x)~d\sigma + \int \limits_{ \T \left( \rnm \cap \partial B_{ k_{1,\eps}^{3} }(0) \right) } F_{\eps}(x)~d\sigma \notag\\ & + \int \limits_{ \T\left( \partial \rnm \cap B_{ 
\delta_0 }(0) \setminus B_{ k_{1,\eps}^{3} }(0) \right) } ( x, \nu) \frac{|\nabla \ue|^2}{2} ~d\sigma \end{align} \end{step} \noindent{\it Proof of Step P1:} We apply the Pohozaev identity \eqref{PohoId} with $y_{0}=0$ and $$U_{\eps}= \T\left( \rnm \cap B_{ r_{\eps} }(0) \setminus B_{ k_{1,\eps}^{3} }(0) \right) \subset \Omega.$$ This yields \begin{align}\label{PohoId2} - \int \limits_{U_{\eps}} \left( h_{\eps}(x) +\frac{ \left( \nabla h_{\eps}, x \right)}{2} \right)\ue^{2}~ dx &~ - \frac{p_{\eps}}{\crits} \left( \frac{n-s}{\crits -p_{\eps}}\right) \int \limits_{U_{\eps} } \frac{|\ue|^{\crits-p_{\eps} }}{|x|^{s}} ~ dx \notag \\ &~ = \int \limits_{\partial U_{\eps}} F_{\eps}(x)~d\sigma. \end{align} It follows from the properties of the boundary map that \begin{align*} \partial U_{\eps} =&~\partial \left( \T \left( \rnm \cap B_{ r_{\eps} }(0) \setminus B_{ k_{1,\eps}^{3} }(0) \right) \right)&~ \notag\\ = &~\T \left( \rnm \cap \partial B_{ r_{\eps} }(0) \right) \cup \T \left( \rnm \cap \partial B_{ k_{1,\eps}^{3} }(0) \right) \cup \T\left( \partial \rnm \cap B_{ r_{\eps} }(0) \setminus B_{ k_{1,\eps}^{3} }(0) \right) \end{align*} Since for all $\eps >0$, $\ue \equiv 0$ on $\bdry$, identity \eqref{PohoId2} yields \eqref{PohoId3}. Concerning \eqref{PohoId3:bis}, we apply the Pohozaev identity \eqref{PohoId} with $y_{0}=0$ and $$V_{\eps}= \T\left( \rnm \cap B_{ \delta_0 }(0) \setminus B_{ k_{1,\eps}^{3} }(0) \right) \subset \Omega.$$ The argument is similar. This ends the proof of Step \ref{step:p1}.\qed \noindent We will estimate each of the terms in the above integral identities and calculate the limit as $\eps \to 0$. \subsection{Estimates of the $L^{\crits}$ and $L^2-$terms in the localized Pohozaev identity} \begin{step}\label{step:p2} We let $(\ue)$, $(\he)$ and $(\pe)$ be such that $(E_\eps)$, \eqref{hyp:he}, \eqref{lim:pe} and \eqref{bnd:ue} hold. We assume that blow-up occurs. 
We claim that, as $\eps \to 0$ \begin{equation}\label{eq:mystery} \int \limits_{\T\left( \rnm \cap B_{r_\eps }(0) \setminus B_{ k_{1,\eps}^3 }(0)\right)} \frac{|\ue|^{\crits-p_{\eps} }}{|x|^{s}} ~ dx = \sum \limits_{i=1}^{N} \frac{1}{t_{i}^{ \frac{n-2}{\crits-2}}} \int \limits_{\rnm } \frac{|\tu_{i}|^{\crits }}{|x|^{s}} ~ dx +o(1). \end{equation} and \begin{align}\label{cpct:2*term:2} \int \limits_{\T \left( \rnm \cap B_{ \delta_0 }(0) \setminus B_{ k_{1,\eps}^3 }(0) \right)} \frac{|\ue|^{\crits-p_\eps }}{|x|^s} ~ dx= \sum \limits_{i=1}^{N} \frac{1}{t_i^{\frac{n-2}{\crits-2}}} \int \limits_{\rnm } \frac{|\tu_{i}|^{\crits }}{|x|^{s}} ~ dx +o(1)\hbox{ if }u_0\equiv 0. \end{align} \end{step} \noindent {\it Proof of Step \ref{step:p2}:} For any $R,\rho >0$ we decompose the above integral as \begin{align*} \int \limits_{ \T \left( \rnm \cap B_{r_{\eps} }(0) \setminus B_{k_{1,\eps}^{3} }(0) \right)} \frac{|\ue|^{\crits-p_{\eps} }}{|x|^{s}} ~ dx&= \int \limits_{ \T \left( \rnm \cap B_{ r_{\eps} }(0) \setminus \overline{B}_{R k_{N,\eps}}(0) \right)} \frac{|\ue|^{\crits-p_{\eps} }}{|x|^{s}} ~ dx \notag\\ &+ \sum \limits_{i=1}^{N} \int \limits_{ \T \left( \rnm \cap B_{R k_{i,\eps}}(0) \setminus \overline{B}_{\rho k_{i,\eps}}(0) \right)} \frac{|\ue|^{\crits-p_{\eps} }}{|x|^{s}} ~ dx \notag\\ &+ \sum \limits_{i=1}^{N-1} \int \limits_{ \T \left( \rnm \cap B_{\rho k_{i+1,\eps}}(0) \setminus \overline{B}_{R k_{i,\eps}}(0) \right)} \frac{|\ue|^{\crits-p_{\eps} }}{|x|^{s}} ~ dx \notag\\ & + \int \limits_{ \T \left( \rnm \cap B_{\rho k_{1,\eps}}(0) \setminus B_{ k_{1,\eps}^{3} }(0) \right) } \frac{|\ue|^{\crits-p_{\eps} }}{|x|^{s}} ~ dx. \end{align*} We will evaluate each of the above terms and calculate the limit $\lim\limits_{R \to + \infty} \lim\limits_{\rho \to 0} \lim\limits_{\eps \to 0}$. 
\noindent From the estimate \eqref{eq:est:global}, we get as $\eps \to 0$ \begin{align*} &\int \limits_{ \T \left( \rnm \cap B_{ r_{\eps} }(0) \setminus \overline{B}_{R k_{N,\eps}}(0) \right)} \frac{|\ue|^{\crits-p_{\eps} }}{|x|^{s}} ~ dx \notag\\ \leq &~ C~\int \limits_{ \T \left( \rnm \cap B_{ r_{\eps} }(0) \setminus \overline{B}_{R k_{N,\eps}}(0) \right)} \left[ \frac{ \mu_{N,\eps}^{\frac{\bp-\bm}{2}(\crits-\pe)}}{|x|^{(\bp-1)(\crits-\pe)+s}}+ \frac{1}{ |x|^{(\bm-1)(\crits-\pe)+s}} \right] ~dx\notag\\ \leq &~ C~\int \limits_{ \rnm \cap B_{ r_{\eps}}(0) \setminus \overline{B}_{{R k_{N,\eps}}}(0) } \frac{ \mu_{N,\eps}^{\frac{\bp-\bm}{2}(\crits-\pe)}}{|x|^{(\bp-1)(\crits-\pe)+s}} \left| \hbox{Jac } \T(x) \right|~dx \notag \\ + &~ C~\int \limits_{ \rnm \cap B_{ r_{\eps}}(0) \setminus \overline{B}_{{R k_{N,\eps}}}(0) } \frac{1}{ |x|^{(\bm-1)(\crits-\pe)+s}}\left| \hbox{Jac } \T(x) \right|~dx \notag \\ \leq &~ C~ \int \limits_{ \rnm \cap B_{ \frac{ r_{\eps}}{k_{N,\eps}}}(0) \setminus \overline{B}_{R }(0) } \frac{ 1}{|x|^{n+ \crits \left( \frac{\bp-\bm}{2}\right)-\pe(\bp-1)}}\left| \hbox{Jac } \T(k_{N,\eps}x) \right|~dx \notag\\ + &~ C~\int \limits_{ \rnm \cap B_{ 1}(0) \setminus \overline{B}_{\frac{R k_{N,\eps}}{r_{\eps}}}(0) } \frac{ 1 }{ |x|^{n-\crits \left( \frac{\bp-\bm}{2}\right)-\pe(\bm-1)}} \left| \hbox{Jac } \T( r_{\eps} x) \right|~dx \notag \\ \leq &C\left( R^{-\crits \left( \frac{\bp-\bm}{2}\right)-\pe(\bp-1)} + r_{\eps}^{\crits \left( \frac{\bp-\bm}{2}\right)+\pe(\bm-1)}\right) . \end{align*} Therefore \begin{align}{\label{cpct:2*term:1}} \lim\limits_{R \to + \infty} \lim\limits_{\eps \to 0}\int \limits_{ \T \left( \rnm \cap B_{ r_{\eps} }(0) \setminus \overline{B}_{R k_{N,\eps}}(0) \right)} \frac{|\ue|^{\crits-p_{\eps} }}{|x|^{s}} ~ dx =0. 
\end{align} \noindent It follows from Proposition \ref{prop:exhaust} that for any $1 \leq i\leq N$ \begin{align}{\label{cpct:2*term:2:lala}} \lim\limits_{R \to + \infty} \lim\limits_{\rho \to 0} \lim\limits_{\eps \to 0} \int \limits_{ \T \left( \rnm \cap B_{R k_{i,\eps}}(0) \setminus \overline{B}_{\rho k_{i,\eps}}(0) \right)} \frac{|\ue|^{\crits-p_{\eps} }}{|x|^{s}} ~ dx = \frac{1}{t_{i}^{ \frac{n-2}{\crits-2}}} \int \limits_{\rnm } \frac{|\tu_{i}|^{\crits }}{|x|^{s}} ~ dx. \end{align} \noindent Let $1 \leq i\leq N-1$. In Proposition \ref{prop:fund:est}, we had obtained the following pointwise estimates: For any $R, \rho>0$ and all $\eps>0$ we have \begin{align*} |\ue(x)| \leq C~ \frac{\mei^{\frac{\bp-\bm}{2}} |x|}{|x|^{\bp}}+C~ \frac{ |x| }{ \mu_{ i+1,\eps}^{\frac{\bp-\bm}{2}}|x|^{\bm}} \end{align*} for all $x \in B_{\rho k_{i+1,\eps}}(0) \setminus \overline{B}_{R k_{i,\eps}}(0) $. \noindent Then we have as $\eps \to 0$ \begin{align*} &\int \limits_{ \T \left( \rnm \cap B_{\rho k_{i+1,\eps}}(0) \setminus \overline{B}_{R k_{i,\eps}}(0) \right)} \frac{|\ue|^{\crits-p_{\eps} }}{|x|^{s}} ~ dx \notag\\ \leq &~ C~\int \limits_{ \T \left( \rnm \cap B_{\rho k_{i+1,\eps}}(0) \setminus \overline{B}_{R k_{i,\eps}}(0) \right)} \left[ \frac{ \mei^{\frac{\bp-\bm}{2}(\crits-\pe)}}{|x|^{(\bp-1)(\crits-\pe)+s}}+ \frac{ \mu_{i+1,\eps}^{-\frac{\bp-\bm}{2}(\crits-\pe)} }{ |x|^{(\bm-1)(\crits-\pe)+s}} \right]~dx \notag\\ \leq &~ C~\int \limits_{ \rnm \cap B_{ \rho k_{i+1,\eps}}(0) \setminus \overline{B}_{R k_{i,\eps}}(0) } \left[ \frac{ \mei^{\frac{\bp-\bm}{2}(\crits-\pe)}}{|x|^{(\bp-1)(\crits-\pe)+s}}+ \frac{ \mu_{i+1,\eps}^{-\frac{\bp-\bm}{2}(\crits-\pe)} }{ |x|^{(\bm-1)(\crits-\pe)+s}} \right] \,dx \notag \\ \leq &~ C~ \int \limits_{ \rnm \cap B_{ \frac{\rho k_{i+1,\eps}}{k_{i,\eps}}}(0) \setminus \overline{B}_{R }(0) } \frac{ 1}{|x|^{n+ \crits \left( \frac{\bp-\bm}{2}\right)-\pe(\bp-1)}}\,dx \notag\\ &+ ~ C~\int \limits_{ \rnm \cap B_{2\rho }(0) \setminus \overline{B}_{ \frac{R 
k_{i,\eps}}{k_{i+1,\eps}}}(0) } \frac{ 1 }{ |x|^{n-\crits \left( \frac{\bp-\bm}{2}\right)-\pe(\bm-1)}} \, dx \notag\\ \leq &C\left( R^{-\crits \left( \frac{\bp-\bm}{2}\right)-\pe(\bp-1)} + \rho^{\crits \left( \frac{\bp-\bm}{2}\right)+\pe(\bm-1)}\right). \end{align*} And so \begin{align}{\label{cpct:2*term:3}} \lim\limits_{R \to + \infty} \lim\limits_{\rho \to 0} \lim\limits_{\eps \to 0} \int \limits_{ \T \left( \rnm \cap B_{\rho k_{i+1,\eps}}(0) \setminus \overline{B}_{R k_{i,\eps}}(0) \right)} \frac{|\ue|^{\crits-p_{\eps} }}{|x|^{s}} ~ dx =0. \end{align} \noindent Again, from the pointwise estimates of Proposition \ref{prop:fund:est}, we have as $\eps \to 0$ \begin{align*} &\int \limits_{ \T \left( \rnm \cap B_{\rho k_{1,\eps}}(0) \setminus B_{ k_{1,\eps}^{3} }(0) \right) } \frac{|\ue|^{\crits-p_{\eps} }}{|x|^{s}} ~ dx \notag\\ \leq &~ C~\int \limits_{ \T \left( \rnm \cap B_{\rho k_{1,\eps}}(0) \setminus B_{ k_{1,\eps}^{3} }(0) \right) } \frac{ \mu_{1,\eps}^{-\frac{\bp-\bm}{2}(\crits-\pe)} }{ |x|^{(\bm-1)(\crits-\pe)+s}} ~dx \notag\\ \leq &~ C~\int \limits_{ \rnm \cap B_{\rho k_{1,\eps}}(0) \setminus \overline{B}_{ k_{1,\eps}^{3}}(0) } \frac{ \mu_{1,\eps}^{-\frac{\bp-\bm}{2}(\crits-\pe)} }{ |x|^{(\bm-1)(\crits-\pe)+s}} \left| \hbox{Jac } \T(x) \right|~dx \notag \\ \leq& ~ C~\int \limits_{ \rnm \cap B_{\rho }(0) \setminus \overline{B}_{ k_{1,\eps}^{2}}(0) } \frac{ 1 }{ |x|^{n-\crits \left( \frac{\bp-\bm}{2}\right)-\pe(\bm-1)}} \left| \hbox{Jac } \T(k_{1,\eps}x) \right|~dx \notag\\ \leq &C ~\rho^{\crits \left( \frac{\bp-\bm}{2}\right)+\pe(\bm-1)}. \end{align*} Therefore \begin{align}{\label{cpct:2*term:4}} \lim\limits_{\rho \to 0} \lim\limits_{\eps \to 0} \int \limits_{ \T \left( \rnm \cap B_{\rho k_{1,\eps}}(0) \setminus B_{ k_{1,\eps}^{3} }(0) \right) } \frac{|\ue|^{\crits-p_{\eps} }}{|x|^{s}}~dx=0. \end{align} \noindent Combining \eqref{cpct:2*term:1}, \eqref{cpct:2*term:2:lala}, \eqref{cpct:2*term:3} and \eqref{cpct:2*term:4} we obtain \eqref{eq:mystery}. 
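\noindent For the convenience of the reader, we record the elementary computation behind the three vanishing limits \eqref{cpct:2*term:1}, \eqref{cpct:2*term:3} and \eqref{cpct:2*term:4}: for $a>0$, $R\geq 1$ and $0<\rho\leq 1$, $$\int \limits_{\rnm \cap \{|x|\geq R\}} \frac{dx}{|x|^{n+a}}=\frac{\omega_{n-1}}{2}\,\frac{R^{-a}}{a} \qquad \hbox{ and } \qquad \int \limits_{\rnm \cap \{|x|\leq \rho\}} \frac{dx}{|x|^{n-a}}=\frac{\omega_{n-1}}{2}\,\frac{\rho^{a}}{a},$$ where $\omega_{n-1}$ denotes the volume of the unit sphere $\mathbb{S}^{n-1}$. Since $\pe\to 0$, the exponents $a$ appearing above remain bounded away from $0$, so that these quantities go to $0$ as $R\to +\infty$ and $\rho\to 0$ respectively.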
\noindent We now prove \eqref{cpct:2*term:2} under the assumption that $u_0\equiv 0$. We decompose the integral as \begin{align*} \int \limits_{ \T \left( \rnm \cap B_{\delta_0 }(0) \setminus B_{k_{1,\eps}^{3} }(0) \right)} \frac{|\ue|^{\crits-p_{\eps} }}{|x|^{s}} ~ dx&= \int \limits_{ \T \left( \rnm \cap B_{ \delta_0 }(0) \setminus \overline{B}_{r_\eps}(0) \right)} \frac{|\ue|^{\crits-p_{\eps} }}{|x|^{s}} ~ dx\notag\\ & \qquad + \int \limits_{ \T \left( \rnm \cap B_{r_\eps }(0) \setminus B_{k_{1,\eps}^{3} }(0) \right)} \frac{|\ue|^{\crits-p_{\eps} }}{|x|^{s}} ~ dx, \end{align*} with $r_\eps:=\sqrt{\mu_{N,\eps}}$. From the estimate \eqref{eq:est:global} and $u_0\equiv 0$, we get as $\eps \to 0$ \begin{equation*} \int \limits_{ \T \left( \rnm \cap B_{ \delta_0 }(0) \setminus \overline{B}_{r_\eps}(0) \right)} \frac{|\ue|^{\crits-p_{\eps} }}{|x|^{s}} ~ dx \leq C~\int \limits_{ \rnm \cap B_{ \delta_0 }(0) \setminus \overline{B}_{r_{\eps}}(0) } \frac{ \mu_{N,\eps}^{\frac{\bp-\bm}{2}(\crits-\pe)}}{|x|^{(\bp-1)(\crits-\pe)+s}} ~dx. \end{equation*} Since $(\bp-1)\crits+s>n$, we then get that $$ \int \limits_{ \T \left( \rnm \cap B_{ \delta_0 }(0) \setminus \overline{B}_{r_\eps}(0) \right)} \frac{|\ue|^{\crits-p_{\eps} }}{|x|^{s}} ~ dx \leq C\left(\frac{\mu_{N,\eps}}{r_\eps}\right)^{\frac{\crits}{2}(\bp-\bm)}=o(1)$$ as $\eps\to 0$. Therefore, with \eqref{cpct:2*term:1}, we get \eqref{cpct:2*term:2}. \noindent This ends the proof of Step \ref{step:p2}.\qed \begin{step}\label{step:p3} We let $(\ue)$, $(\he)$ and $(\pe)$ be such that $(E_\eps)$, \eqref{hyp:he}, \eqref{lim:pe} and \eqref{bnd:ue} hold. We assume that blow-up occurs.
We claim that \begin{eqnarray} &&\int \limits_{\T \left( \rnm \cap B_{ r_{\eps} }(0) \setminus B_{ k_{1,\eps}^{3} }(0) \right)} \left( h_{\eps}(x) +\frac{ \left( \nabla h_{\eps}, x \right)}{2} \right)\ue^{2}~ dx \nonumber\\ &&= \left\{ \begin{array}{llll} O(\mu_{N,\eps}^2) ~ & \hbox{ if } \bp-\bm > 2\\ O(\mu_{N,\eps}^2\ln \frac{1}{\mu_{N,\eps}}) ~ & \hbox{ if } \bp-\bm = 2\\ O(\mu_{N,\eps}^{1+\frac{\bp-\bm}{2}}) ~ &\hbox{ if } \bp-\bm < 2. \end{array} \right. \label{cpct:L2term:sc} \end{eqnarray} And if $u_{0}\equiv 0$, \begin{eqnarray} && \int \limits_{\T \left( \rnm \cap B_{ \delta_{0} }(0) \setminus B_{ k_{1,\eps}^{3} }(0) \right)} \left( h_{\eps}(x) +\frac{ \left( \nabla h_{\eps}, x \right)}{2} \right)\ue^{2}~ dx \nonumber\\ &&= \left\{ \begin{array}{llll} O(\mu_{N,\eps}^2) ~ & \hbox{ if } \bp-\bm > 2\\ O(\mu_{N,\eps}^2\ln \frac{1}{\mu_{N,\eps}}) ~ & \hbox{ if } \bp-\bm = 2\\ O(\mu_{N,\eps}^{\bp-\bm}) ~ &\hbox{ if } \bp-\bm < 2. \end{array} \right. \label{cpct:L2term:u=0} \end{eqnarray} \end{step} \noindent{\it Proof of Step \ref{step:p3}:} From estimate \eqref{eq:est:global} and after a change of variables, we get as $\eps \to 0$, \begin{eqnarray} && \int \limits_{\T \left( \rnm \cap B_{ r_{\eps} }(0) \setminus B_{ k_{1,\eps}^{3} }(0) \right)} \left( h_{\eps}(x) +\frac{ \left( \nabla h_{\eps}, x \right)}{2} \right)\ue^{2}~ dx \notag \\ && \leq C \int \limits_{\T \left( \rnm \cap B_{ r_{\eps} }(0) \setminus B_{ k_{1,\eps}^{3} }(0) \right)} \ue^{2}~ dx \notag\\ && \leq ~ C~\int \limits_{\T \left( \rnm \cap B_{ r_{\eps} }(0) \setminus B_{ k_{1,\eps}^{3} }(0) \right)} \left[ \frac{ \mu_{N,\eps}^{\bp-\bm}}{|x|^{2(\bp-1)}} + \frac{ 1}{|x|^{2(\bm-1)}} \right]~dx \label{ineq:2018}\\ && \leq ~ C~ \int \limits_{ \rnm \cap B_{ r_{\eps}}(0) \setminus \overline{B}_{ k_{1,\eps}^{3}}(0) } \left(\sum_{i=1}^N\frac{\mu_{i,\eps}^{\bp-\bm}|x|^2}{\mu_{i,\eps}^{2(\bp-\bm)}|x|^{2\bm}+|x|^{2\bp}}\right.\notag\\ &&\hskip4cm \left.+\frac{ 1}{|x|^{2(\bm-1)}}\right)\,
dx. \notag \end{eqnarray} \noindent{\it Case 1:} Assuming that $\bp-\bm<2$, we then have the following rough bound from \eqref{ineq:2018}: \begin{eqnarray} && \int_{\T \left( \rnm \cap B_{ r_{\eps} }(0) \setminus B_{ k_{1,\eps}^{3} }(0) \right)} \ue^{2}\, dx\nonumber\\ &&\leq C\int \limits_{ \rnm \cap B_{ r_{\eps}}(0) \setminus \overline{B}_{ k_{1,\eps}^{3}}(0) } \left(\frac{ \mu_{N,\eps}^{\bp-\bm}}{|x|^{2(\bp-1)}} + \frac{ 1}{|x|^{2(\bm-1)}}\right)\, dx\nonumber\\ &&\leq C \mu_{N,\eps}^{1+\frac{\bp-\bm}{2}}\hbox{ if }\bp-\bm<2.\label{ineq:111} \end{eqnarray} \noindent{\it Case 2:} Assuming $\bp-\bm\geq 2$, then via a change of variable in \eqref{ineq:2018}, we get \begin{eqnarray*} \int \limits_{\T \left( \rnm \cap B_{ r_{\eps} }(0) \setminus B_{ k_{1,\eps}^{3} }(0) \right)} \ue^{2}~ dx &&\leq C\sum_{i=1}^N\mu_{i,\eps}^2 \int_{ B_{ \frac{r_{\eps}}{\mu_{i,\eps}}}(0) \setminus \overline{B}_{ \frac{k_{1,\eps}^3}{\mu_{i,\eps}}}(0) }\frac{|x|^2\, dx}{|x|^{2\bm}+|x|^{2\bp}}\notag \\ &&+ C\int_{B_{ r_{\eps} }(0) \setminus B_{ k_{1,\eps}^{3} }(0) }|x|^{2-2\bm}\, dx. \end{eqnarray*} Therefore, if $\bp-\bm>2$, then \begin{eqnarray} && \int \limits_{\T \left( \rnm \cap B_{ r_{\eps} }(0) \setminus B_{ k_{1,\eps}^{3} }(0) \right)} \ue^{2}~ dx \leq C\sum_{i=1}^N\mu_{i,\eps}^2 + Cr_\eps^{n+2-2\bm}\leq C\mu_{N,\eps}^2.\label{ineq:222} \end{eqnarray} When $\bp-\bm=2$, we get that \begin{eqnarray*} \int \limits_{\T \left( \rnm \cap B_{ r_{\eps} }(0) \setminus B_{ k_{1,\eps}^{3} }(0) \right)} \ue^{2}~ dx &&\leq C\sum_{i=1}^N\mu_{i,\eps}^2 \left(1+\int_{ B_{ \frac{r_{\eps}}{\mu_{i,\eps}}}(0) \setminus \overline{B}_1(0) }|x|^{2-2\bp}\, dx\right)\\ &&+ Cr_\eps^{2+\bp-\bm}\\ &&\leq C\mu_{N,\eps}^2\ln\frac{1}{\mu_{N,\eps}}+C\sum_{i=1}^{N-1}\mu_{i,\eps}^2\ln\frac{1}{\mu_{i,\eps}}.
\end{eqnarray*} Since $\mu_{N,\eps}\to 0$ and $\lim_{\eps\to 0}\mu_{i,\eps}/\mu_{N,\eps}$ is finite for all $i=1,...,N-1$, we get that \begin{equation} \int \limits_{\T \left( \rnm \cap B_{ r_{\eps} }(0) \setminus B_{ k_{1,\eps}^{3} }(0) \right)} \ue^{2}\, dx=O\left(\mu_{N,\eps}^2\ln\frac{1}{\mu_{N,\eps}}\right)\label{ineq:333} \end{equation} when $\bp-\bm=2$. Inequality \eqref{ineq:2018}, put together with \eqref{ineq:111}, \eqref{ineq:222} and \eqref{ineq:333}, yields \eqref{cpct:L2term:sc}. \noindent When $u_0\equiv 0$, we decompose the integral and proceed as in the proof of \eqref{cpct:2*term:2} to obtain \eqref{cpct:L2term:u=0}. This ends Step \ref{step:p3}.\qed \subsection{Estimate of the curvature term in the Pohozaev identity when $\bp-\bm>1$} \begin{step}\label{step:p4} We let $(\ue)$, $(\he)$ and $(\pe)$ be such that $(E_\eps)$, \eqref{hyp:he}, \eqref{lim:pe} and \eqref{bnd:ue} hold. We assume that blow-up occurs and that $\bp -\bm >1$. We claim that, as $\eps \to 0$, \begin{eqnarray} &&\int \limits_{ \T\left( \partial \rnm \cap B_{ r_{\eps} }(0) \setminus B_{ k_{1,\eps}^{3} }(0) \right) } ( x, \nu) \frac{|\nabla \ue|^2}{2} ~d\sigma\nonumber\\ &&= \frac{\mu_{N,\eps}}{2} \left(\frac{1}{ t_{N}^{\frac{n-1}{\crits-2}}} \int \limits_{\partial\rnm} II_{0}(x,x) \frac{|\nabla \tu_{N}|^2}{2} ~d\sigma +o(1) \right).\label{cpct:bdry:term:C} \end{eqnarray} Here $II_0$ denotes the second fundamental form of $\bdry$ at $0$ (see Proposition \ref{prop:rate:sc}).
Moreover, when $u_0\equiv 0$, we claim that as $\eps\to 0$, \begin{eqnarray} &&\int \limits_{ \T\left( \partial \rnm \cap B_{ \delta_0 }(0) \setminus B_{ k_{1,\eps}^{3} }(0) \right) } ( x, \nu) \frac{|\nabla \ue|^2}{2} ~d\sigma\nonumber\\ &&= \frac{\mu_{N,\eps}}{2} \left(\frac{1}{ t_{N}^{\frac{n-1}{\crits-2}}} \int \limits_{\partial\rnm} II_{0}(x,x) \frac{|\nabla \tu_{N}|^2}{2} ~d\sigma +o(1) \right).\label{cpct:bdry:term:C:2} \end{eqnarray} \end{step} \noindent {\it Proof of Step \ref{step:p4}:} We have for any $R, \rho >0$, \begin{eqnarray}\label{decomp:bdry:term:C} &&\int \limits_{ \T\left( \partial \rnm \cap B_{ r_{\eps} }(0) \setminus B_{ k_{1,\eps}^{3} }(0) \right) } ( x, \nu) \frac{|\nabla \ue|^2}{2} ~d\sigma\\ &&= \int \limits_{ \T \left( \partial \rnm \cap B_{ r_{\eps} }(0) \setminus \overline{B}_{R k_{N,\eps}}(0) \right)} ( x, \nu) \frac{|\nabla \ue|^2}{2} ~d\sigma \notag\\ &&+ \sum \limits_{i=1}^{N} \int \limits_{ \T \left( \partial \rnm \cap B_{R k_{i,\eps}}(0) \setminus \overline{B}_{\rho k_{i,\eps}}(0) \right)} ( x, \nu) \frac{|\nabla \ue|^2}{2} ~d\sigma\notag\\ && + \sum \limits_{i=1}^{N-1} \int \limits_{ \T \left( \partial \rnm \cap B_{\rho k_{i+1,\eps}}(0) \setminus \overline{B}_{R k_{i,\eps}}(0) \right)} ( x, \nu) \frac{|\nabla \ue|^2}{2} ~d\sigma \notag\\ & &+ \int \limits_{ \T \left( \partial \rnm \cap B_{\rho k_{1,\eps}}(0) \setminus B_{ k_{1,\eps}^{3} }(0) \right)} ( x, \nu) \frac{|\nabla \ue|^2}{2} ~d\sigma . \notag \end{eqnarray} \noindent We consider the second fundamental form of $\partial\Omega$ at $0\in\partial\Omega$, $II_0(x,y)=(d\nu_0x,y)$ for all $x,y\in T_{0}\partial\Omega$, where $\nu$ is the outward unit normal vector to the hypersurface $\partial\Omega$. In the canonical basis of $\partial\rnm=T_0\partial\Omega$, the matrix of the bilinear form $II_{0}$ is $-D^2_0\T_0$, where $D^2_0\T_0$ is the Hessian matrix of $\T_0$ at $0$.
Using the expression of $\T$ (see \eqref{def:T:bdry}), we can write for all $x \in U \cap \partial \rnm$ $$\nu(\T(x))=\frac{(1,-\partial_2\T_0(x), ...,-\partial_n\T_0(x))}{\sqrt{1+\sum_{i=2}^n(\partial_i\T_0(x))^2}}.$$ With the expression of $\T$, we then get that \begin{align*} \left(\nu\circ \T(x),\T(x) \right)= \frac{\T_0(x)-\sum_{p=2}^n x^p\partial_p\T_0(x)}{\sqrt{1+\sum_{p=2}^n(\partial_p\T_0(x))^2}}, \end{align*} and so, for all $x\in U\cap\partial \rnm$, \begin{align}\label{est:T:2} |(\T (x),\nu\circ\T(x))|\leq C|x|^2. \end{align} Since $\T_0(0)=0$ and $\nabla\T_0(0)=0$ (see \eqref{def:T:bdry}), we then get as $|x| \to 0$ \begin{align}\label{est:T:3} \left(\nu\circ \T(x),\T(x) \right)=-\frac{1}{2}\sum \limits_{p,q=2}^n x^px^q \partial_{pq} \T_0(0)+O(|x|^{3}) \end{align} and therefore for all $\eps>0$ and all $x\in B_R(0) \cap \partial \rnm $ \begin{align}\label{est:T:4} \left(\T(k_{N,\eps}x ), \nu\circ\T(k_{N,\eps} x)\right)&~=-\frac{1}{2}\keN^2 \sum_{p,q=2}^n x^px^q\partial_{pq}\T_0(0)+\theta_{\eps,R}(x) \keN^2 \notag\\ &~= \frac{1}{2}\keN^2 II_{0}(x,x)+\theta_{\eps,R}(x) \keN^2 \end{align} where $\lim \limits_{\eps\to 0}\sup \limits_{B_R(0)\cap\{x_1=0\}}|\theta_{\eps,R}|=0$ for any $R>0$. \noindent{\it Step \ref{step:p4}.1:} Let $1 \leq i\leq N-1$. From Proposition \ref{prop:fund:est:grad} we have the following pointwise estimates: for any $R, \rho>0$, all $\eps>0$ and all $x \in B_{\rho k_{i+1,\eps}}(0) \setminus \overline{B}_{R k_{i,\eps}}(0)$, \begin{align*} |\nabla \ue(x)| \leq C~ \frac{\mei^{\frac{\bp-\bm}{2}} }{|x|^{\bp}}+C~ \frac{ 1 }{ \mu_{ i+1,\eps}^{\frac{\bp-\bm}{2}}|x|^{\bm}}.
\end{align*} \noindent For clarity, in this step we write $$D_\eps:= \T \left( \partial \rnm \cap B_{\rho k_{i+1,\eps}}(0) \setminus \overline{B}_{R k_{i,\eps}}(0) \right).$$ As $\eps \to 0$ we have that \begin{eqnarray*} \left|\int \limits_{D_\eps} ( x, \nu) \frac{|\nabla \ue|^2}{2} ~d\sigma\right|& \leq &~ C~\int \limits_{D_\eps} |( x, \nu)| \left[ \frac{ \mei^{\bp-\bm}}{|x|^{2\bp}}+ \frac{ 1 }{ \mu_{ i+1,\eps}^{\bp-\bm}|x|^{2\bm}} \right]~d\sigma \notag\\ & \leq &~ C~\int \limits_{ D_\eps} |x|^{2} \left[ \frac{ \mei^{\bp-\bm}}{|x|^{2\bp}}+ \frac{ 1 }{ \mu_{ i+1,\eps}^{\bp-\bm}|x|^{2\bm}} \right]~d\sigma \notag\\ & \leq &~ C~\mu_{i,\eps} \int \limits_{ \partial \rnm \cap B_{ \frac{\rho k_{i+1,\eps}}{k_{i,\eps}}}(0) \setminus \overline{B}_{R }(0) } \frac{ d\sigma}{|x|^{(n-1)+(\bp-\bm-1)}} \notag\\ &&+ ~ C~\mu_{i+1,\eps}\int \limits_{ \partial \rnm \cap B_{\rho }(0) \setminus \overline{B}_{ \frac{R k_{i,\eps}}{k_{i+1,\eps}}}(0) } \frac{ d\sigma }{ |x|^{(n-1)-(\bp-\bm+1)}} \notag\\ & \leq& C~\left( \mu_{i,\eps} R^{-(\bp-\bm-1)}+ \mu_{i+1,\eps} \rho^{\bp-\bm+1} \right). \end{eqnarray*} Hence for all $1 \leq i\leq N-1$ \begin{align}\label{cpct:bdry:term:C:3} \lim\limits_{R \to + \infty} \lim\limits_{\rho \to 0} \lim\limits_{\eps \to 0} \left( \mu_{N,\eps}^{-1} \int \limits_{ \T \left( \partial \rnm \cap B_{\rho k_{i+1,\eps}}(0) \setminus \overline{B}_{R k_{i,\eps}}(0) \right)} ( x, \nu) \frac{|\nabla \ue|^2}{2} ~d\sigma \right)=0. \end{align} This ends Step \ref{step:p4}.1.
\noindent{\it Step \ref{step:p4}.2:} Again from the estimates of Proposition \ref{prop:fund:est:grad}, we have as $\eps \to 0$ \begin{eqnarray*} && \left|\int \limits_{ \T \left( \partial \rnm \cap B_{\rho k_{1,\eps}}(0) \setminus B_{ k_{1,\eps}^{3} }(0) \right)} ( x, \nu) \frac{|\nabla \ue|^2}{2} ~d\sigma \right| \notag \\ && \leq C \int \limits_{ \T \left( \partial \rnm \cap B_{\rho k_{1,\eps}}(0) \setminus B_{ k_{1,\eps}^{3} }(0) \right)} \frac{ |( x, \nu)|\,d\sigma}{ \mu_{ 1,\eps}^{\bp-\bm}|x|^{2\bm}} \notag\\ && \leq ~C~ \int \limits_{ \partial \rnm \cap B_{\rho k_{1,\eps}}(0) \setminus B_{ k_{1,\eps}^{3} }(0) } \frac{ |x|^{2}\,d\sigma}{ \mu_{ 1,\eps}^{\bp-\bm}|x|^{2\bm}} \notag\\ && \leq ~ C~k_{1,\eps}\int \limits_{ \partial \rnm \cap B_{\rho}(0) \setminus B_{ k^{2}_{1,\eps} }(0) } \frac{ d\sigma }{ |x|^{2\bm-2}} \leq ~C~\mu_{1,\eps}~ \rho^{\bp-\bm+1}. \notag \end{eqnarray*} Then, using again \eqref{bnd:k:mu}, we get that \begin{align}\label{cpct:bdry:term:C:4} \lim\limits_{R \to + \infty} \lim\limits_{\rho \to 0} \lim\limits_{\eps \to 0} \left( \mu_{N,\eps}^{-1} \int \limits_{ \T \left( \partial \rnm \cap B_{\rho k_{1,\eps}}(0) \setminus B_{ k_{1,\eps}^{3} }(0) \right)} ( x, \nu) \frac{|\nabla \ue|^2}{2} ~d\sigma \right)=0. \end{align} This ends Step \ref{step:p4}.2. 
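\noindent Note that in Steps \ref{step:p4}.1 and \ref{step:p4}.2 we used that $\bm+\bp=n$, so that on the $(n-1)$-dimensional boundary $\partial\rnm$ one has, for $0<\rho<1$, $$\int \limits_{\partial\rnm \cap B_{\rho}(0)} \frac{|x|^{2}}{|x|^{2\bm}}~d\sigma=\omega_{n-2}\,\frac{\rho^{\bp-\bm+1}}{\bp-\bm+1},$$ since $(n-2)+2-2\bm=\bp-\bm>-1$; here $\omega_{n-2}$ is the volume of the unit sphere of $\R^{n-1}$. This is the origin of the powers $\rho^{\bp-\bm+1}$ in the above estimates.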
\noindent{\it Step \ref{step:p4}.3:} With the pointwise estimates of Proposition \ref{prop:fund:est:grad}, we obtain as $\eps \to 0$ \begin{eqnarray*} &&\left|\int \limits_{ \T \left( \partial \rnm \cap B_{ r_{\eps} }(0) \setminus \overline{B}_{R k_{N,\eps}}(0) \right)} ( x, \nu) \frac{|\nabla \ue|^2}{2} ~d\sigma\right| \notag\\ && \leq~ C~\int \limits_{ \T \left( \partial \rnm \cap B_{ r_{\eps} }(0) \setminus \overline{B}_{R k_{N,\eps}}(0) \right)} |x|^{2} \left[ \frac{ \mu_{N,\eps}^{\bp-\bm}}{|x|^{2\bp}}+ \frac{ 1 }{|x|^{2\bm}} \right]~d\sigma \notag\\ &&\leq ~ C~k_{N,\eps} \int \limits_{ \partial \rnm \cap B_{ \frac{ r_{\eps}}{k_{N,\eps}}}(0) \setminus \overline{B}_{R }(0) } \frac{ d\sigma}{|x|^{(n-1)+(\bp-\bm-1)}} + ~ C~r_{\eps}^{\bp-\bm+1}\int \limits_{ \partial \rnm \cap B_{1}(0) \setminus \overline{B}_{ \frac{R k_{N,\eps}}{ r_{\eps}}}(0) } \frac{ d\sigma }{ |x|^{(n-1)-(\bp-\bm+1)}} \notag\\ && \leq C~k_{N,\eps}~\left( R^{-(\bp-\bm-1)}+ r_{\eps}^{\bp-\bm-1} \right). \end{eqnarray*} So if $\bp-\bm>1$, \begin{align}\label{cpct:bdry:term:C:1} \lim\limits_{R \to + \infty} \lim\limits_{\rho \to 0} \lim\limits_{\eps \to 0} \left( \mu_{N,\eps}^{-1} \int \limits_{ \T \left( \partial \rnm \cap B_{ r_{\eps} }(0) \setminus \overline{B}_{R k_{N,\eps}}(0) \right)} ( x, \nu) \frac{|\nabla \ue|^2}{2} ~d\sigma \right)=0. \end{align} This ends Step \ref{step:p4}.3.
\noindent{\it Step \ref{step:p4}.4:} Let $1 \leq i\leq N$. When $\bp -\bm >1$, we have \begin{align}\label{cpct:bdry:term:C:i} \lim\limits_{R \to + \infty} \lim\limits_{\rho \to 0} \lim\limits_{\eps \to 0} &\left( \mu_{i,\eps}^{-1} \int \limits_{ \T \left( \partial \rnm \cap B_{R k_{i,\eps}}(0) \setminus \overline{B}_{\rho k_{i,\eps}}(0) \right)} ( x, \nu) \frac{|\nabla \ue|^2}{2} ~d\sigma \right) \notag\\ &\qquad \qquad \qquad =\frac{1}{2}\frac{1}{ t_{i}^{\frac{n-1}{\crits-2}}} \int \limits_{\partial\rnm} II_{0}(x,x) \frac{|\nabla \tui|^2}{2} ~d\sigma, \end{align} where $II_{0}(x,x)$ is the second fundamental form of the boundary $\bdry$ at $0$. \noindent{\it Proof of Step \ref{step:p4}.4:} Consider $\tu_{i}$ obtained in Proposition \ref{prop:exhaust}. It follows that for some constant $C>0$, \begin{align*} |\nabla \tu_{i}(x)| \leq \frac{C}{|x|^{\bm}+|x|^{\bp}} \qquad \hbox{ for all } x \in \rnmpbar. \end{align*} So when $\bp-\bm >1$, the function $|x|\,|\nabla \tu_{i}|$ is in $L^{2}(\R^{n-1})$.
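\noindent Indeed, since $\bm+\bp=n$, the above pointwise bound gives $$\int \limits_{\partial\rnm} |x|^{2}|\nabla \tu_{i}|^2~d\sigma \leq C\,\omega_{n-2}\left( \int_0^1 r^{n-2\bm}~dr+\int_1^{+\infty} r^{n-2\bp}~dr\right)<+\infty:$$ the first integral always converges since $n-2\bm=\bp-\bm\geq 0$, and the second converges since $n-2\bp=-(\bp-\bm)<-1$ exactly when $\bp-\bm>1$.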
\noindent With a change of variable and the definition of $\tu_{i,\eps}$ we then obtain \begin{eqnarray} &&\mu_{i,\eps}^{-1} \int \limits_{ \T \left( \partial \rnm \cap B_{R k_{i,\eps}}(0) \setminus \overline{B}_{\rho k_{i,\eps}}(0) \right)} ( x, \nu) \frac{|\nabla \ue|^2}{2} ~d\sigma \notag\\ && = \frac{k_{i,\eps}^{n-3}}{ \mu_{i,\eps}^{n-1}} ~ \int \limits_{ \partial \rnm \cap B_{R }(0) \setminus \overline{B}_{\rho }(0) } \left(\T(k_{i,\eps}x ), \nu\circ\T(k_{i,\eps} x)\right) \frac{|\nabla \tu_{i,\eps}|^2}{2} ~d\sigma \notag\\ && = -\frac{k_{i,\eps}^{n-3}}{ \mu_{i,\eps}^{n-1}} \left(~ \int \limits_{ \partial \rnm \cap B_{R }(0) \setminus \overline{B}_{\rho }(0) } \frac{1}{2} k_{i,\eps}^2 \sum_{p,q=2}^n\partial_{pq}\T_0(0)x^px^q \frac{|\nabla \tu_{i,\eps}|^2}{2} ~d\sigma +\theta_{\eps,R}(x) k_{i,\eps}^2 \right) \notag\\ && = -\left( \frac{k_{i,\eps}}{ \mu_{i,\eps}} \right)^{n-1} \left(~ \int \limits_{ \partial \rnm \cap B_{R }(0) \setminus \overline{B}_{\rho }(0) } \frac{1}{2} \sum_{p,q=2}^n\partial_{pq}\T_0(0)x^px^q \frac{|\nabla \tu_{i,\eps}|^2}{2} ~d\sigma +\theta_{\eps,R}(x) \right). \notag \end{eqnarray} Since $|x|\,|\nabla \tu_{i}| \in L^{2}(\R^{n-1}) $, passing to the limit, it follows from the expression \eqref{est:T:4} of the second fundamental form that \begin{eqnarray*} &&\lim\limits_{R \to + \infty} \lim\limits_{\rho \to 0} \lim\limits_{\eps \to 0} \left( \mu_{i,\eps}^{-1} \int \limits_{ \T \left( \partial \rnm \cap B_{R k_{i,\eps}}(0) \setminus \overline{B}_{\rho k_{i,\eps}}(0) \right)} ( x, \nu) \frac{|\nabla \ue|^2}{2} ~d\sigma \right) \notag\\ && = - \frac{1}{2}\frac{1}{ t_{i}^{\frac{n-1}{\crits-2}}} \int \limits_{\partial\rnm} \sum_{p,q=2}^n\partial_{pq}\T_0(0)x^px^q \frac{|\nabla \tui|^2}{2} ~d\sigma \notag\\ && = \frac{1}{2}\frac{1}{ t_{i}^{\frac{n-1}{\crits-2}}} \int \limits_{\partial\rnm} II_{0}(x,x) \frac{|\nabla \tui|^2}{2} ~d\sigma. \end{eqnarray*} This ends Step \ref{step:p4}.4.
\noindent Plugging \eqref{cpct:bdry:term:C:1}, \eqref{cpct:bdry:term:C:i}, \eqref{cpct:bdry:term:C:3} and \eqref{cpct:bdry:term:C:4} in the integral \eqref{decomp:bdry:term:C}, we get \eqref{cpct:bdry:term:C}. This proves the first identity of Step \ref{step:p4}. \noindent{\it Step \ref{step:p4}.5:} We now assume that $u_0\equiv 0$ and $\bp-\bm>1$. We prove \eqref{cpct:bdry:term:C:2}. We write \begin{align}\label{decomp:bdry:term:C:2} \int \limits_{ \T\left( \partial \rnm \cap B_{ \delta_0 }(0) \setminus B_{ k_{1,\eps}^{3} }(0) \right) } ( x, \nu) \frac{|\nabla \ue|^2}{2} ~d\sigma=& \int \limits_{ \T \left( \partial \rnm \cap B_{ \delta_0 }(0) \setminus \overline{B}_{r_\eps}(0) \right)} ( x, \nu) \frac{|\nabla \ue|^2}{2} ~d\sigma \notag\\ &+ \int \limits_{ \T \left( \partial \rnm \cap \overline{B}_{r_\eps}(0) \setminus B_{ k_{1,\eps}^{3} }(0) \right)} ( x, \nu) \frac{|\nabla \ue|^2}{2} ~d\sigma. \end{align} \noindent With the pointwise estimates of Proposition \ref{prop:fund:est:grad} with $u_0\equiv 0$, and using that $\bp-\bm>1$, we obtain, as $\eps \to 0$, \begin{eqnarray*} &&\left|\int \limits_{ \T \left( \partial \rnm \cap B_{\delta_0 }(0) \setminus \overline{B}_{r_{\eps}}(0) \right)} ( x, \nu) \frac{|\nabla \ue|^2}{2} ~d\sigma\right| \\&&\leq C~\int \limits_{ \partial \rnm \cap B_{ \delta_0 }(0) \setminus \overline{B}_{r_\eps}(0) } |x|^{2} \left[ \frac{ \mu_{N,\eps}^{\bp-\bm}}{|x|^{2\bp}} \right]~d\sigma \notag\\ &&\leq C \frac{ \mu_{N,\eps}^{\bp-\bm}}{\re^{2\bp-2-n+1}}\leq C \mu_{N,\eps}^{1+\frac{\bp-\bm-1}{2}}=o(\mu_{N,\eps}), \end{eqnarray*} since $\bp-\bm>1$. Then, with \eqref{cpct:bdry:term:C}, we get \eqref{cpct:bdry:term:C:2}. This ends Step \ref{step:p4}.5. \noindent These five substeps prove Step \ref{step:p4}.\qed \subsection{Estimates of the boundary terms} \begin{step}\label{step:p5} We let $(\ue)$, $(\he)$ and $(\pe)$ be such that $(E_\eps)$, \eqref{hyp:he}, \eqref{lim:pe} and \eqref{bnd:ue} hold. We assume that blow-up occurs.
We fix a chart $\T$ as in \eqref{def:T:bdry} and, for any $\eps>0$, we define \begin{align*} \tve(x):= r_{\eps}^{\bm-1} \ue( \T (r_{\eps} x)) \qquad \hbox{for } x \in r_{\eps}^{-1} U \cap \rnmpbar, \end{align*} where $r_{\eps}:= \sqrt{\mu_{N,\eps}}$. We claim that there exists $\tv \in C^{1} ( \rnmpbar) $ such that $$\lim \limits_{\eps \to 0} \tve= \tv~\hbox{ in } C^{1}_{loc} ( \rnmpbar), $$ where $\tv$ is a solution of \begin{equation}\label{blowup:eqn:cpct} \left\{ \begin{array}{llll} -\Delta \tv -\frac{\gamma}{|x|^{2}} \tv&=&0 & \hbox{ in } \rnm \\ \tv&=&0 & \hbox{ on } \partial \rnm \setminus \{ 0\}. \end{array}\right. \end{equation} \end{step} \noindent {\it Proof of Step \ref{step:p5}:} For any $i,j=1,...,n$, we let $(\tge)_{ij}=(\T^*Eucl)(r_\eps x)_{ij}=(\partial_i\T(r_{\eps} x),\partial_j\T( r_{\eps} x))$, where $(\cdot,\cdot)$ denotes the Euclidean scalar product on $\rn$. We consider $\tge$ as a metric on $\rn$. In the sequel, we let $\Delta_g=div_g(\nabla)$ be the Laplace-Beltrami operator with respect to a metric $g$. From $(E_{\eps})$ it follows that, for all $\eps >0$, the rescaled functions $\tve $ weakly satisfy the equation \begin{align}{\label{eqn:cpct:1}} -\Delta_{\tge} \tve - \frac{\gamma}{\left| \frac{\T ( r_{\eps} x)}{r_{\eps}}\right|^{2}} \tve -r_{\eps}^2~\he \circ \T(r_{\eps} x)~ \tve =r_{\eps}^{\theta +\pe \bm}\frac{| \tve |^{\crits-2-\pe} \tve}{\left| \frac{\T (r_{\eps} x)}{r_{\eps}}\right|^{s}}, \end{align} with $\theta:=(\crits-2)\frac{\bp-\bm}{2}>0$ and $ \tve \equiv 0$ on $ \partial \rnm \setminus \{ 0\}$.
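As a sanity check, independent of the proof, one can verify symbolically that the functions $v_\alpha(x)=x_1|x|^{-\alpha}$ (used in Step \ref{step:p9} below) solve the limit problem \eqref{blowup:eqn:cpct} exactly when $\alpha$ is a root of $X^2-nX+\gamma=0$. A minimal SymPy sketch, in the illustrative dimension $n=5$ with $\gamma=21/4$, so that the roots are $\bm=3/2$ and $\bp=7/2$:

```python
import sympy as sp

# Illustrative dimension and Hardy parameter: the roots of X^2 - n X + gamma = 0
# are beta_- = 3/2 and beta_+ = 7/2 for n = 5, gamma = 21/4.
n = 5
gamma = sp.Rational(21, 4)
xs = sp.symbols(f'x1:{n + 1}', positive=True)
r = sp.sqrt(sum(x**2 for x in xs))

for alpha in (sp.Rational(3, 2), sp.Rational(7, 2)):
    v = xs[0] * r**(-alpha)  # v_alpha(x) = x_1 |x|^{-alpha}
    lap = sum(sp.diff(v, x, 2) for x in xs)
    # v_alpha solves -Delta v - gamma |x|^{-2} v = 0 away from the origin
    assert sp.simplify(-lap - gamma / r**2 * v) == 0
    # and vanishes on the hyperplane {x_1 = 0}, i.e. on the boundary of the half-space
    assert v.subs(xs[0], 0) == 0
```

The same computation with any exponent $\alpha$ that is not a root leaves the nonzero residual $(\gamma-\alpha(n-\alpha))\,v_\alpha/|x|^{2}$.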
\noindent Using the pointwise estimates \eqref{eq:est:global}, we obtain that, for $\eps>0$ small enough and all $ x \in \rnm$, \begin{align*} |\tv_{\eps}(x)| \leq & ~C~r_{\eps}^{\bm-1} \sum_{i=1}^N\frac{\mei^{\frac{\bp-\bm}{2}} | \T (r_{\eps} x)|}{ \mei^{\bp-\bm}| \T (r_{\eps} x)|^{\bm}+| \T (r_{\eps} x)|^{\bp}} \notag\\ &~+~C~ r_{\eps}^{\bm-1}\frac{\Vert |x|^{\bm-1}u_0 \Vert_{L^{\infty}(\Omega)}}{| \T (r_{\eps} x)|^{\bm}} |\T (r_{\eps} x)| \notag\\ \leq & ~C~\sum_{i=1}^N \frac{\left( \frac{\mei}{\mu_{N,\eps}} \right)^{\frac{\bp-\bm}{2}} \left| \frac{\T (r_{\eps} x)}{r_{\eps}} \right|}{ \left( \frac{\mei}{ \sqrt{\mu_{N,\eps}}} \right)^{\bp-\bm} \left| \frac{\T (r_{\eps} x)}{r_{\eps}} \right|^{\bm}+ \left| \frac{\T (r_{\eps} x)}{r_{\eps}} \right|^{\bp}} \notag\\ &~+~C~ \frac{\Vert |x|^{\bm-1}u_0 \Vert_{L^{\infty}(\Omega)} }{ \left| \frac{\T (r_{\eps} x)}{r_{\eps}} \right|^{\bm}} \left| \frac{\T (r_{\eps} x)}{r_{\eps}} \right| \notag\\ \leq & ~C \left(~\sum_{i=1}^N\frac{ \left( \frac{\mei}{\mu_{N,\eps}} \right)^{\frac{\bp-\bm}{2}} |x| }{ \left( \frac{\mei}{ \sqrt{\mu_{N,\eps}}} \right)^{\bp-\bm}| x|^{\bm}+|x|^{\bp}}\right.\\ &\left.+ \frac{\Vert |x|^{\bm-1}u_0 \Vert_{L^{\infty}(\Omega)} }{|x|^{\bm}} |x| \right) \notag\\ \leq &~C ~ \left(\frac{1}{ |x|^{\bp-1}} + \frac{\Vert |x|^{\bm-1}u_0 \Vert_{L^{\infty}(\Omega)}}{ |x|^{\bm-1} }\right). \end{align*} Then, passing to the limits in the equation \eqref{eqn:cpct:1}, standard elliptic theory yields the existence of $\tilde{v}\in C^2(\rnmpbar)$ such that $\tv_{\eps} \to \tv$ in $C^2_{loc}(\rnmpbar)$ and $\tv$ satisfies the equation \begin{align*} \left\{ \begin{array}{llll} -\Delta \tv -\frac{\gamma}{|x|^{2}} \tv&=&0 \ \ & \hbox{ in } \rnm \\ \tv&=&0 & \hbox{ on } \partial \rnm \setminus \{ 0\}. \end{array}\right.
\end{align*} Moreover, we have the following bound on $\tv$: $$|\tv(x)| \leq C ~ \left(\frac{|x_{1}|}{ |x|^{\bp}} + \frac{\Vert |x|^{\bm-1}u_0 \Vert_{L^{\infty}(\Omega)}}{ |x|^{\bm} }|x_{1}| \right) \qquad \hbox{ for all } x=(x_{1},\tilde{x}) \hbox{ in } \rnm.$$ This ends the proof of Step \ref{step:p5}.\qed \begin{step}\label{step:p6} We let $(\ue)$, $(\he)$ and $(\pe)$ be such that $(E_\eps)$, \eqref{hyp:he}, \eqref{lim:pe} and \eqref{bnd:ue} hold. We assume that blow-up occurs. We claim that, as $\eps \to 0$, \begin{align}\label{cpct:bdry:term1and23} \int \limits_{ \T \left( \rnm \cap \partial B_{ r_{\eps} }(0) \right)} F_{\eps}(x)~d\sigma = \mu_{N,\eps}^{\frac{\bp-\bm}{2}} \left( \mathcal{F}_0 +o(1) \right) \end{align} with \begin{align}\label{def:F1} \mathcal{F}_0 := \int \limits_{\rnm \cap \partial B_{1 }(0) } ( x, \nu) \left( \frac{|\nabla \tv|^2}{2} -\frac{\gamma}{2}\frac{\tv^{2}}{|x|^{2}} \right) - \left(x^i\partial_i \tv+\frac{n-2}{2} \tv \right)\partial_\nu \tv ~d \sigma \end{align} and \begin{align}\label{cpct:bdry:term1and2} \int \limits_{ \T \left( \rnm \cap \partial B_{ k_{1,\eps}^{3} }(0) \right)} F_{\eps}(x)~d\sigma = o\left( \mu_{N,\eps}^{\bp-\bm}\right). \end{align} \end{step} \noindent{\it Proof of Step \ref{step:p6}:} We keep the notations of Step \ref{step:p5}.
With a change of variable, the definition of $\tve$ and the fact that $\theta=(\crits-2)\frac{\bp-\bm}{2}>0$, we get \begin{align*} & \int \limits_{ \T \left( \rnm \cap \partial B_{ r_{\eps} }(0) \right)} F_{\eps}(x)~d\sigma= \notag\\ & ~ r_{\eps}^{\bp-\bm} \int \limits_{\rnm \cap \partial B_{1 }(0) } ( x, \nu)_{\tge} \left( \frac{|\nabla_{\tge} \tve|^2}{2} -\frac{\gamma}{2}\frac{\tve^{2}}{|x|_{\tge}^{2}} \right) - \left(x^i\partial_i \tve+\frac{n-2}{2} \tve \right)\partial_{\nu} \tve ~d \sigma_{\tge} \notag\\ & ~-r_{\eps}^{\bp-\bm} \int \limits_{\rnm \cap \partial B_{1 }(0) } \left(r_{\eps}^{2}\frac{h_{\eps}( \T (r_{\eps}x))}{2} \tve^{2} - \frac{ r_{\eps}^{\theta+(\bm -1)\pe} }{\crits -p_{\eps} } \frac{|\tve|^{\crits-p_{\eps} }}{|x|_{\tge}^{s}}\right) d\sigma_{\tge}. \end{align*} From the convergence result of Step \ref{step:p5}, we then get \eqref{cpct:bdry:term1and23}. \noindent For the next boundary term, from the estimates \eqref{eq:est:global} and \eqref{eq:est:grad:global} we obtain \begin{eqnarray*} &&\left| \int_{ \T \left( \rnm \cap \partial B_{ k_{1,\eps}^{3} }(0) \right)} F_{\eps}(x)~d\sigma \right|\\ && \leq ~\frac{C}{\mu_{1,\eps}^{\bp-\bm}} \int \limits_{ \T \left( \rnm \cap \partial B_{ k_{1,\eps}^{3} }(0) \right)} |x| \left( \frac{1}{|x|^{2\bm}}+\frac{|x|^{2}}{|x|^{2\bm}} \right)~d\sigma \notag\\ &&+C \int \limits_{ \T \left( \rnm \cap \partial B_{ k_{1,\eps}^{3} }(0) \right)} |x| \frac{ \mu_{1,\eps}^{-\crits \left(\frac{\bp -\bm}{2} \right)+ \pe \left(\frac{\bp -\bm}{2} \right)} }{|x|^{(\bm-1)(\crits-\pe)+s}}~d\sigma \notag\\ && \leq ~\frac{C}{\mu_{1,\eps}^{\bp-\bm}} \int \limits_{ \rnm \cap \partial B_{ k_{1,\eps}^{3} }(0) } |x| \left( \frac{1}{|x|^{2\bm}}+\frac{|x|^{2}}{|x|^{2\bm}} \right)~d\sigma \notag\\ && +C~ \int \limits_{ \rnm \cap \partial B_{ k_{1,\eps}^{3} }(0) } |x| \frac{ \mu_{1,\eps}^{-\crits \left(\frac{\bp -\bm}{2} \right)+ \pe \left(\frac{\bp -\bm}{2} \right)} }{|x|^{(\bm-1)(\crits-\pe)+s}}~d\sigma \notag\\ & &\leq C \mu_{1,\eps}^{\bp-\bm} \left(
\mu_{1,\eps}^{\bp-\bm}+\mu_{1,\eps}^{(\bp-\bm)\left( \frac{2-s}{n-2}\right) +\pe\left( \frac{n-2}{2}\right)} \right). \end{eqnarray*} Hence \begin{align}\label{cpct:bdry:term2} \int \limits_{ \T \left( \rnm \cap \partial B_{ k_{1,\eps}^{3} }(0) \right)} F_{\eps}(x)~d\sigma= o\left( \mu_{N,\eps}^{\bp-\bm}\right). \end{align} This ends Step \ref{step:p6}.\qed \begin{step}\label{step:p7} We let $(\ue)$, $(\he)$ and $(\pe)$ be such that $(E_\eps)$, \eqref{hyp:he}, \eqref{lim:pe} and \eqref{bnd:ue} hold. We assume that blow-up occurs. We assume that $u_0\equiv 0$. We define \begin{equation}\label{def:bue} \bar{u}_\eps:=\frac{\ue}{\mu_{N,\eps}^{\frac{\bp-\bm}{2}}}. \end{equation} We claim that there exists $\bar{u}\in C^2(\overline{\Omega}\setminus\{0\})$ such that \begin{equation}\label{cv:bue} \lim_{\eps\to 0}\bar{u}_\eps=\bar{u}\hbox{ in }C^2_{loc}(\overline{\Omega}\setminus\{0\})\hbox{ with }\left\{\begin{array}{ll} -\Delta \bar{u}-\left(\frac{\gamma}{|x|^2}+h_0\right)\bar{u}=0 &\hbox{ in }\Omega\\ \bar{u}=0&\hbox{ on }\partial\Omega\setminus\{0\}. \end{array}\right. \end{equation} \end{step} \noindent{\it Proof of Step \ref{step:p7}:} Since $u_0\equiv 0$, it follows from \eqref{eq:est:global} that there exists $C>0$ such that \begin{equation}\label{control:bue} |\bar{u}_\eps(x)|\leq C |x|^{1-\bp}\hbox{ for all }x\in\Omega\hbox{ and }\eps>0. \end{equation} Moreover, equation $(E_\eps)$ rewrites as $$-\Delta \bar{u}_\eps-\left(\frac{\gamma}{|x|^2}+h_\eps\right)\bar{u}_\eps=\mu_{N,\eps}^{\frac{\bp-\bm}{2}(\crits-2-p_\eps)}\frac{|\bar{u}_\eps|^{\crits-2-p_\eps}\bar{u}_\eps}{|x|^s}\;\hbox{ in }\Omega, $$ with $\bar{u}_\eps=0$ on $\partial\Omega$. It then follows from standard elliptic theory that the claim holds. This ends Step \ref{step:p7}.\qed \begin{step}\label{step:p8} We let $(\ue)$, $(\he)$ and $(\pe)$ be such that $(E_\eps)$, \eqref{hyp:he}, \eqref{lim:pe} and \eqref{bnd:ue} hold. We assume that blow-up occurs. We assume that $u_0\equiv 0$.
We claim that \begin{align}\label{cpct:bdry:term1and2:lala} \int \limits_{ \T \left( \rnm \cap \partial B_{ \delta_0 }(0) \right)} F_{\eps}(x)~d\sigma&~=\left( \mathcal{F}_{\delta_{0}}+o(1)\right)\mu_{N,\eps}^{\bp-\bm}, \end{align} and \begin{align}\label{id:tralala} \int \limits_{ \T \left( \rnm \cap \partial B_{ k_{1,\eps}^{3} }(0) \right)} F_{\eps}(x)~d\sigma&~= o\left( \mu_{N,\eps}^{\bp-\bm}\right), \end{align} where \begin{equation}\label{def:Fd} \mathcal{F}_{\delta_{0}} := \int \limits_{ \T \left( \rnm \cap \partial B_{ \delta_0 }(0) \right)} ( x, \nu) \left( \frac{|\nabla \bar{u}|^2}{2} -\left(\frac{\gamma}{|x|^2}+h_0\right)\frac{\bar{u}^{2}}{2} \right) - \left(x^i\partial_i \bar{u}+\frac{n-2}{2} \bar{u} \right)\partial_{\nu} \bar{u} ~d \sigma. \end{equation} \end{step} \noindent{\it Proof of Step \ref{step:p8}:} The second term has already been estimated in \eqref{cpct:bdry:term1and2}. We are left with the first term. With a change of variable, the definition \eqref{def:bue} of $\bar{u}_\eps$ and the convergence \eqref{cv:bue}, we get \begin{eqnarray} && \int \limits_{ \T \left( \rnm \cap \partial B_{ \delta_0 }(0) \right)} F_{\eps}(x)~d\sigma\\ &&= \mu_{N,\eps}^{\bp-\bm} \int \limits_{ \T \left( \rnm \cap \partial B_{ \delta_0 }(0) \right)} ( x, \nu) \left( \frac{|\nabla \bar{u}_\eps|^2}{2} -\left(\frac{\gamma}{|x|^{2}}+h_\eps\right)\frac{\bar{u}_\eps^{2}}{2}\right)~d \sigma \notag \\ && \quad- \mu_{N,\eps}^{\bp-\bm}\, \frac{\mu_{N,\eps}^{\frac{\bp-\bm}{2}(\crits-2-p_\eps)}}{\crits-p_\eps}\int \limits_{ \T \left( \rnm \cap \partial B_{ \delta_0 }(0) \right)}\frac{|\bar{u}_\eps|^{\crits-p_\eps}}{|x|^s}~d \sigma \notag \\ &&\quad- \mu_{N,\eps}^{\bp-\bm}\int \limits_{ \T \left( \rnm \cap \partial B_{ \delta_0 }(0) \right)} \left(x^i\partial_i \bar{u}_\eps+\frac{n-2}{2} \bar{u}_\eps \right)\partial_{\nu} \bar{u}_\eps ~d \sigma \notag\\ &&=\mu_{N,\eps}^{\bp-\bm} \left( \mathcal{F}_{\delta_{0}}+o(1) \right),\label{est:Fe} \end{eqnarray} where $\mathcal{F}_{\delta_{0}}$ is defined in \eqref{def:Fd}.
Arguing as in the proof of \eqref{cpct:bdry:term2}, we get that \begin{align}\label{cpct:bdry:term2:bis} \int \limits_{ \T \left( \rnm \cap \partial B_{ k_{1,\eps}^{3} }(0) \right)} F_{\eps}(x)~d\sigma= o\left( \mu_{N,\eps}^{\bp-\bm}\right)\hbox{ as }\eps\to 0. \end{align} This ends Step \ref{step:p8}.\qed \begin{step}\label{step:p9} We let $(\ue)$, $(\he)$ and $(\pe)$ be such that $(E_\eps)$, \eqref{hyp:he}, \eqref{lim:pe} and \eqref{bnd:ue} hold. We assume that blow-up occurs. We assume that $\ue>0$ for all $\eps>0$. Then $\mathcal{F}_0\geq 0$ and $${\mathcal F}_0>0\;\Leftrightarrow\; u_0>0,$$ where $\mathcal{F}_0$ is as in \eqref{def:F1}. \end{step} \noindent{\it Proof of Step \ref{step:p9}:} We let $\tv$ be defined as in Step \ref{step:p5}. It follows from Step \ref{step:p5} that $\tv$ satisfies \eqref{blowup:eqn:cpct} and we have the following bound on $\tv$: \begin{equation} |\tv(x)| \leq C ~ \left(\frac{|x_{1}|}{ |x|^{\bp}} + \frac{\Vert |x|^{\bm-1}u_0 \Vert_{L^{\infty}(\Omega)}}{ |x|^{\bm} }|x_{1}| \right) \qquad \hbox{ for all } x=(x_{1},\tilde{x}) \hbox{ in } \rnm.\label{est:tv} \end{equation} Given $\alpha\in\rr$, we define $v_\alpha(x):=x_1|x|^{-\alpha}$ for all $x\in \rnm$. Since $\tv\geq 0$, it follows from Proposition 6.4 in Ghoussoub-Robert \cite{gr4} that there exist $A,B\geq 0$ such that \begin{equation}\label{eq:tv:A:B} \tv=A v_{\bp}+B v_{\bm}. \end{equation} \noindent{\it Step \ref{step:p9}.1:} We claim that $B=0$ when $u_0\equiv 0$. \noindent This is a direct consequence of comparing \eqref{eq:tv:A:B} with \eqref{est:tv} when $u_0\equiv 0$ and letting $|x|\to\infty$. \noindent{\it Step \ref{step:p9}.2:} We claim that $B>0$ when $u_0>0$. \noindent We prove the claim. We fix $x\in \rnm$. Green's representation formula yields \begin{eqnarray*} \tv_\eps(x)&=& \int_\Omega r_{\eps}^{\bm-1} G_\eps(\T (r_{\eps} x), y)\frac{\ue^{\crits-1}(y)}{|y|^s}\, dy. \end{eqnarray*} We fix $\omega\subset\subset \Omega$.
Then there exists $c(\omega)>0$ such that $|y|\geq d(y,\partial\Omega)\geq c(\omega)$ for all $y\in \omega$. Moreover, the control \eqref{est:G:up} of the Green's function yields \begin{eqnarray*} \tv_\eps(x)&\geq & c\int_\omega r_{\eps}^{\bm-1} \frac{r_\eps x_1}{r_\eps^{\bm}|x|^{\bm}}|c(\omega)-r_\eps |x||^{-n}\frac{\ue^{\crits-1}(y)}{|y|^s}\, dy, \end{eqnarray*} and then, passing to the limit $\eps\to 0$, we get that \begin{eqnarray*} \tv(x)&\geq & \frac{c x_1}{|x|^{\bm}}\int_\omega \frac{u_0^{\crits-1}(y)}{|y|^s}\, dy \end{eqnarray*} for all $x\in \rnm$. As one checks, this yields $B\geq c\int_\omega \frac{u_0^{\crits-1}(y)}{|y|^s}\, dy >0$ when $u_0>0$. This ends Step \ref{step:p9}.2. \noindent{\it Step \ref{step:p9}.3:} We claim that $A>0$. \noindent The proof is similar to Step \ref{step:p9}.2. We fix $x\in \rnm $ and $\omega\subset\subset \rnm$. Green's representation formula and the pointwise control \eqref{est:G:up} yield \begin{eqnarray*} \tv_\eps(x)&\geq & \int_{\T (\mu_{N,\eps}\omega)} r_{\eps}^{\bm-1} G_\eps(\T (r_{\eps} x), y)\frac{\ue^{\crits-1}(y)}{|y|^s}\, dy\\ &\geq & \int_{\omega} r_{\eps}^{\bm-1} G_\eps(\T (r_{\eps} x), \T(\mu_{N,\eps} y))\mu_{N,\eps}^n \frac{\ue(\T(\mu_{N,\eps} y))^{\crits-1}}{|\mu_{N,\eps} y|^s}\, dy\\ &\geq & \int_{\omega} r_{\eps}^{\bm-1} \left(\frac{r_{\eps}|x|}{\mu_{N,\eps}|y|}\right)^{\bm}K_\eps(x,y) \mu_{N,\eps}^{\frac{n-2}{2}}\frac{\tuei(y)^{\crits-1}}{|y|^s}\, dy\\ &\geq & \int_{\omega} r_{\eps}^{2\bm-n} |x|^{\bm} \left|x-\frac{\mu_{N,\eps}}{r_\eps} y \right|^{-n}x_1 y_1 \mu_{N,\eps}^{\frac{n}{2}-\bm}\frac{\tuei(y)^{\crits-1}}{|y|^s}\, dy, \end{eqnarray*} with $$K_\eps(x,y)=|r_\eps x-\mu_{N,\eps} y|^{2-n}\min\left\{1,\frac{\mu_{N,\eps}}{r_\eps}\frac{x_1 y_1}{|x-\frac{\mu_{N,\eps}}{r_\eps} y|^2}\right\}. $$ Since $r_\eps=\sqrt{\mu_{N,\eps}}$, letting $\eps\to 0$, we get with the convergence (A4) of Proposition \ref{prop:exhaust} that \begin{eqnarray*} \tv(x)&\geq & \frac{x_1}{|x|^{\bp}}\int_{\omega} \frac{\tilde{u}_i(y)^{\crits-1}}{|y|^s}\, dy \end{eqnarray*} for all $x\in\rnm$. Therefore, as one checks, $A\geq \int_{\omega} \frac{\tilde{u}_i(y)^{\crits-1}}{|y|^s}\, dy>0$. This ends Step \ref{step:p9}.3. \noindent{\it Step \ref{step:p9}.4:} We claim that \begin{equation}\label{id:F:A:B} \mathcal{F}_0=\frac{\omega_{n-1}}{n}\left(\frac{n^2}{4}-\gamma\right)\cdot AB. \end{equation} We prove the claim. The definition \eqref{def:F1} reads \begin{align}\label{def:F0:2} \mathcal{F}_0 = \int \limits_{\rnm \cap \partial B_{1 }(0) } ( x, \nu) \left( \frac{|\nabla \tv|^2}{2} -\frac{\gamma}{2}\frac{\tv^{2}}{|x|^{2}} \right) - \left(x^i\partial_i \tv+\frac{n-2}{2} \tv \right)\partial_\nu \tv ~d \sigma. \end{align} For simplicity, we define the bilinear form \begin{eqnarray*} {\mathcal H}_\delta(u,v)&=&\int \limits_{\rnm \cap \partial B_{\delta}(0) } \left[( x, \nu) \left( (\nabla u,\nabla v) -\gamma \frac{uv}{|x|^{2}} \right) - \left(x^i\partial_i u+\frac{n-2}{2} u \right)\partial_\nu v\right.\\ &&\left.-\left(x^i\partial_i v+\frac{n-2}{2} v \right)\partial_\nu u\right] ~d \sigma. \end{eqnarray*} As one checks, \begin{eqnarray} \mathcal{F}_0&=&\frac{1}{2}{\mathcal H}_1(A v_{\bp}+B v_{\bm},A v_{\bp}+B v_{\bm})\nonumber\\ &=& \frac{A^2}{2}{\mathcal H}_1(v_{\bp},v_{\bp})+AB{\mathcal H}_1(v_{\bp},v_{\bm})\nonumber\\ &&+\frac{B^2}{2}{\mathcal H}_1(v_{\bm},v_{\bm}).\nonumber \end{eqnarray} In full generality, we compute ${\mathcal H}_\delta(v_\alpha,v_\beta)$ for all $\alpha,\beta\in\rr$ and all $\delta>0$. As one checks, for any $i=1,...,n$, we have that $\partial_i v_\alpha=\left(\delta_{i,1}-\alpha\frac{x_1 x_i}{|x|^2}\right)|x|^{-\alpha}$ for all $x\in\rnm$. Moreover, for $x\in \partial B_\delta(0)$, we have that $\partial_\nu v_\alpha=\frac{x^i}{|x|}\partial_i v_\alpha$.
Consequently, straightforward computations yield $$ \left(x^i\partial_i v_\alpha+\frac{n-2}{2} v_\alpha \right)\partial_\nu v_\beta=-(\beta-1)\left(\frac{n}{2}-\alpha\right)\frac{v_\alpha v_\beta}{|x|}$$ and $$(x,\nu)\left((\nabla v_\alpha,\nabla v_\beta)-\frac{\gamma}{|x|^2}v_\alpha v_\beta\right)=|x|^{1-\alpha-\beta}+(\alpha\beta-\alpha-\beta-\gamma)\frac{v_\alpha v_\beta}{|x|},$$ and then \begin{eqnarray*} {\mathcal H}_\delta(v_\alpha,v_\beta) &=& \int_{\rnm\cap \partial B_\delta(0)}\left(|x|^{1-\alpha-\beta}+\left(\frac{n}{2}(\alpha+\beta)-n-\alpha\beta-\gamma\right)\frac{v_\alpha v_\beta}{|x|}\right)\, d\sigma. \end{eqnarray*} We have that $$\int_{\rnm\cap \partial B_\delta(0)}|x|^{1-\alpha-\beta}\, d\sigma=\frac{1}{2}\int_{\partial B_\delta(0)}|x|^{1-\alpha-\beta}\, d\sigma=\frac{\omega_{n-1}}{2}\delta^{n-\alpha-\beta}$$ and \begin{eqnarray*} \int_{\rnm\cap \partial B_\delta(0)}\frac{v_\alpha v_\beta}{|x|}\, d\sigma&=&\frac{1}{2}\int_{\partial B_\delta(0)}x_1^2|x|^{-\alpha-\beta-1}\, d\sigma\\ &=&\frac{1}{2n}\int_{\partial B_\delta(0)}|x|^{-\alpha-\beta+1}\, d\sigma=\frac{\omega_{n-1}}{2n}\delta^{n-\alpha-\beta}. \end{eqnarray*} Plugging all these identities together yields $${\mathcal H}_\delta(v_\alpha,v_\beta) =\frac{\omega_{n-1}}{2n}\delta^{n-\alpha-\beta}\left(\frac{n}{2}(\alpha+\beta)-\alpha\beta-\gamma\right).$$ Since $\bp,\bm$ are solutions to $X^2-nX+\gamma=0$, we get that $${\mathcal H}_\delta(v_{\bm},v_{\bm})= {\mathcal H}_\delta(v_{\bp},v_{\bp})= 0.$$ Since $\bp+\bm=n$ and $\bp\bm=\gamma$, we get that $${\mathcal H}_\delta(v_{\bm},v_{\bp})=\frac{\omega_{n-1}}{n}\left(\frac{n^2}{4}-\gamma\right).$$ Plugging all these results together yields \eqref{id:F:A:B}. This ends Step \ref{step:p9}.4. \noindent These substeps end the proof of Step \ref{step:p9}.\qed \begin{step}\label{step:p10} We let $(\ue)$, $(\he)$ and $(\pe)$ be such that $(E_\eps)$, \eqref{hyp:he}, \eqref{lim:pe} and \eqref{bnd:ue} hold. We assume that blow-up occurs.
We assume that $\bp-\bm<2$ and $\ue>0$ for all $\eps>0$. Then $u_0\equiv 0$. \end{step} \noindent{\it Proof of Step \ref{step:p10}:} We claim that, as $\eps\to 0$, \begin{equation}\label{est:2:1} \int \limits_{ \T\left( \partial \rnm \cap B_{ r_{\eps} }(0) \setminus B_{ k_{1,\eps}^{3} }(0) \right) } ( x, \nu) \frac{|\nabla \ue|^2}{2} ~d\sigma=o\left(\mu_{N,\eps}^{\frac{\bp-\bm}{2}}\right)\hbox{ when }\bp-\bm<2. \end{equation} Indeed, if $\bp-\bm>1$, the claim follows from \eqref{cpct:bdry:term:C} and $1>\frac{\bp-\bm}{2}$. If now $\bp-\bm<1$, then \eqref{est:T:2} and the control \eqref{eq:est:grad:global} yield that \begin{eqnarray*} &&\left|\int \limits_{ \T\left( \partial \rnm \cap B_{ r_{\eps} }(0) \setminus B_{ k_{1,\eps}^{3} }(0) \right) } ( x, \nu) \frac{|\nabla \ue|^2}{2} ~d\sigma\right| \\ &&\leq C\int_{ \partial \rnm \cap B_{ r_{\eps} }(0) }|x|^2\left(\sum_{i=1}^N\frac{\mu_{i,\eps}^{\bp-\bm}}{|x|^{2\bp}}+ \frac{1}{|x|^{2\bm}}\right) ~d\sigma\\ &&\leq C\sum_{i=1}^N\mu_{i,\eps}^{\bp-\bm}r_\eps^{n-1-2(\bp-1)}+Cr_\eps^{\bp-\bm+1}= o(\mu_{N,\eps}^{\frac{\bp-\bm}{2}}) \end{eqnarray*} as $\eps\to 0$. The limit case $\bp-\bm=1$ is similar. This proves the claim. \noindent Plugging \eqref{eq:mystery}, \eqref{cpct:L2term:sc}, \eqref{cpct:bdry:term1and23}, \eqref{cpct:bdry:term1and2} and \eqref{est:2:1} into the Pohozaev identity \eqref{PohoId3}, we get \begin{align}\label{id:poho:36} & \frac{p_{\eps}}{\crits} \left( \frac{n-s}{\crits}\right) \left(\sum \limits_{i=1}^{N} \frac{1}{t_{i}^{ \frac{n-2}{\crits-2}}} \int \limits_{\rnm } \frac{|\tu_{i}|^{\crits }}{|x|^{s}} ~ dx +o(1)\right) =-\left(\mathcal{F}_0+o(1)\right)\mu_{N,\eps}^{\frac{\bp-\bm}{2}} \end{align} as $\eps\to 0$, where $\mathcal{F}_0$ is as in \eqref{def:F0:2}. Therefore $\mathcal{F}_0\leq 0$, and then $\mathcal{F}_0=0$ since $\mathcal{F}_0\geq 0$ by Step \ref{step:p9}. Since $\ue>0$, it then follows from \eqref{id:F:A:B} of Step \ref{step:p9} that $u_0\equiv 0$.
This proves Step \ref{step:p10}.\qed \section{\, Proof of the sharp blow-up rates}\label{pf blow-up rates} We now prove the sharp blow-up rates claimed in Propositions \ref{prop:rate:sc} and \ref{prop:rate:sc:2}. We start with the case when $\bp-\bm\neq 1$. As a preliminary estimate, we claim that \begin{eqnarray} & & \frac{p_{\eps}}{\crits} \left( \frac{n-s}{\crits -p_{\eps}}\right) \left(\sum \limits_{i=1}^{N} \frac{1}{t_{i}^{ \frac{n-2}{\crits-2}}} \int \limits_{\rnm } \frac{|\tu_{i}|^{\crits }}{|x|^{s}} ~ dx +o(1)\right)\notag\\ &&=\int \limits_{ \T\left( \partial \rnm \cap B_{ r_{\eps} }(0) \setminus B_{ k_{1,\eps}^{3} }(0) \right) } ( x, \nu) \frac{|\nabla \ue|^2}{2} ~d\sigma -\left( \mathcal{F}_0+o(1)\right)\mu_{N,\eps}^{\frac{\bp-\bm}{2}} \label{est:1} \end{eqnarray} as $\eps\to 0$, where $\mathcal{F}_0 $ is as in \eqref{def:F1}; and, when $u_0\equiv 0$, we claim that \begin{eqnarray}\label{est:4:1} && \frac{p_{\eps}}{\crits} \left( \frac{n-s}{\crits -p_{\eps}}\right) \left(\sum \limits_{i=1}^{N} \frac{1}{t_{i}^{ \frac{n-2}{\crits-2}}} \int \limits_{\rnm } \frac{|\tu_{i}|^{\crits }}{|x|^{s}} ~ dx +o(1)\right) \notag\\ && = \int \limits_{ \T\left( \partial \rnm \cap B_{ \delta_0 }(0) \setminus B_{ k_{1,\eps}^{3} }(0) \right) } ( x, \nu) \frac{|\nabla \ue|^2}{2} ~d\sigma - \left(\mathcal{F}_{\delta_0} +o(1)\right)\mu_{N,\eps}^{\bp-\bm} \notag\\ && {+ \underbrace{o(\mu_{N,\eps})}_{\text{when $\bp-\bm \geq 2$}} + \underbrace{O(\mu_{N,\eps}^{\bp-\bm})}_{\text{when $\bp-\bm < 2$}}}, \end{eqnarray} where $\mathcal{F}_{\delta_{0}}$ is as in \eqref{def:Fd}. \noindent We prove the claim. Collecting the first estimate of Step P2, \eqref{cpct:L2term:sc}, \eqref{cpct:bdry:term1and23} and \eqref{cpct:bdry:term1and2} of the terms of the Pohozaev identity \eqref{PohoId3} gives \eqref{est:1}. 
Similarly, the second estimate of Step P2, \eqref{cpct:L2term:u=0}, \eqref{cpct:bdry:term1and2:lala} and \eqref{id:tralala} of the terms of the Pohozaev identity \eqref{PohoId3:bis} gives \eqref{est:4:1}. \subsection{Proof of the sharp blow-up rates when $\bp-\bm\neq 1$} We first assume $\ue>0$ and $\bp-\bm<1$. \begin{step}\label{step:p11} We let $(\ue)$, $(\he)$ and $(\pe)$ be such that $(E_\eps)$, \eqref{hyp:he}, \eqref{lim:pe} and \eqref{bnd:ue} hold. We assume that blow-up occurs. We assume that $\ue>0$ and $\bp-\bm<1$. Then \eqref{est:mass} holds, that is, \begin{equation} \lim_{\eps\to 0}\frac{p_\eps}{\mu_{N,\eps}^{\bp-\bm}}=-\frac{\frac{\omega_{n-1}\crits^2}{n}\left(\frac{n^2}{4}-\gamma\right)A^2}{(n-s)\sum \limits_{i=1}^{N} \frac{1}{t_{i}^{ \frac{n-2}{\crits-2}}} \int \limits_{\rnm } \frac{|\tu_{i}|^{\crits }}{|x|^{s}} ~ dx}\cdot m_{\gamma,h}(\Omega) \label{est:mass:bis} \end{equation} for some $A>0$, where $m_{\gamma,h}(\Omega)$ is the boundary mass. \end{step} \noindent{\it Proof of Step \ref{step:p11}:} It follows from Step \ref{step:p10} that $u_0\equiv 0$. \noindent{\it Step \ref{step:p11}.1:} We now claim that \begin{equation} \frac{p_{\eps}}{\crits} \left( \frac{n-s}{\crits }\right) \left(\sum \limits_{i=1}^{N} \frac{1}{t_{i}^{ \frac{n-2}{\crits-2}}} \int \limits_{\rnm } \frac{|\tu_{i}|^{\crits }}{|x|^{s}} ~ dx +o(1)\right) = \mu_{N,\eps}^{\bp-\bm}\left(M_{\delta_0}+o(1)\right)\nonumber\end{equation} where \begin{eqnarray}\label{est:inf:1} M_{\delta_0}&:=&-\int_{\T\left( \rnm \cap B_{\delta_0 }(0) \right)} \left( h_0(x) +\frac{ \left( \nabla h_0, x \right)}{2} \right)\bar{u}^{2}~ dx- \mathcal{F}_{\delta_0}\nonumber\\ &&+\int_{ \T\left( \partial \rnm \cap B_{ \delta_0}(0) \right) } ( x, \nu) \frac{|\nabla \bar{u}|^2}{2} ~d\sigma, \end{eqnarray} with $\mathcal{F}_{\delta_{0}}$ as in \eqref{def:Fd} and $\bar{u}$ as in \eqref{cv:bue}.
\noindent Indeed, the Pohozaev identity \eqref{PohoId3}, the definition \eqref{def:bue}, the bound \eqref{control:bue}, the convergence \eqref{cv:bue} and $\bp-\bm<1$ yield \begin{eqnarray}\label{est:l2} &&\int \limits_{\T\left( \rnm \cap B_{\delta_0 }(0) \setminus B_{ k_{1,\eps}^{3} }(0) \right)} \left( h_{\eps}(x) +\frac{ \left( \nabla h_{\eps}, x \right)}{2} \right)\ue^{2}~ dx\\ &&=\mu_{N,\eps}^{\bp-\bm}\left(\int_{\T\left( \rnm \cap B_{\delta_0 }(0) \right)} \left( h_0(x) +\frac{ \left( \nabla h_0, x \right)}{2} \right)\bar{u}^{2}~ dx+o(1)\right)\nonumber \end{eqnarray} as $\eps\to 0$. With $u_0\equiv 0$ and the control \eqref{eq:est:grad:global}, we get that $|\nabla \ue(x)|\leq C\mu_{N,\eps}^{\frac{\bp-\bm}{2}}|x|^{-\bp}$ for all $\eps>0$ and $x\in \Omega$. Therefore, with \eqref{def:bue} and \eqref{cv:bue}, we get that \begin{eqnarray}\label{est:grad} &&\int \limits_{ \T\left( \partial \rnm \cap B_{ \delta_0}(0) \setminus B_{ k_{1,\eps}^{3} }(0) \right) } ( x, \nu) \frac{|\nabla \ue|^2}{2} ~d\sigma\nonumber\\ &&=\mu_{N,\eps}^{\bp-\bm}\left(\int_{ \T\left( \partial \rnm \cap B_{ \delta_0}(0) \right) } ( x, \nu) \frac{|\nabla \bar{u}|^2}{2} ~d\sigma+o(1)\right) \end{eqnarray} as $\eps\to 0$. Plugging \eqref{cpct:bdry:term1and2}, \eqref{est:l2} and \eqref{est:grad} into \eqref{PohoId3:bis}, we get \eqref{est:inf:1}. This proves the claim and ends Step \ref{step:p11}.1. \noindent We fix $0<\delta<\delta'$. Taking $U:=\T(\rnm\cap B_{\delta'}(0)\setminus B_\delta(0))$, $K=0$ and $u=\bar{u}$ in \eqref{PohoId}, and using \eqref{cv:bue}, we get that $M_\delta$ is independent of the choice of $\delta>0$ small enough. \noindent{\it Step \ref{step:p11}.2:} We claim that $\bar{u}>0$. \noindent We prove the claim. Since $\bar{u}\geq 0$ is a solution to \eqref{cv:bue}, by the strong maximum principle it is enough to prove that $\bar{u}\not\equiv 0$. We argue as in the proof of Step \ref{step:p9}. We fix $x\in \Omega$.
Green's representation formula and $\ue>0$ yield \begin{eqnarray*} &&\quad \bue(x)=\mu_{N,\eps}^{-(\bp-\bm)/2}\int_\Omega G_\eps(x,y)\frac{\ue(y)^{\crits-1-p_\eps}}{|y|^s}\, dy\\ &&\geq \mu_{N,\eps}^{-(\bp-\bm)/2}\int_{A_\eps} G_\eps(x,y)\frac{\ue(y)^{\crits-1-p_\eps}}{|y|^s}\, dy\\ &&\geq C \mu_{N,\eps}^{n-s-(\bp-\bm)/2}\int_{A} G_\eps(x,\T(\mu_{N,\eps}y))\frac{\ue(\T(\mu_{N,\eps}y))^{\crits-1-p_\eps}}{|y|^s}\, dy, \end{eqnarray*} where $A_\eps:=\T(\rnm\cap B_{2\mu_{N,\eps}}(0)\setminus B_{\mu_{N,\eps}}(0))$ and $A:=\rnm\cap B_2(0)\setminus B_1(0)$. With the pointwise control \eqref{est:G:up}, we get \begin{eqnarray*} &&\quad \bue(x)\geq\\ && C \int_A \left(\frac{|x|}{|y|}\right)^{\bm}|x-\T(\mu_{N,\eps}y)|^{2-n}\left(\frac{d(x,\partial\Omega) |y_1|}{|x-\T(\mu_{N,\eps}y)|^2}\right) \frac{u_{\eps,i}(y)^{\crits-1-p_\eps}}{|y|^s}\, dy, \end{eqnarray*} where $u_{\eps,i}$ is as in Proposition \ref{prop:exhaust}. Letting $\eps\to 0$ and using the convergence (A4) of Proposition \ref{prop:exhaust}, we get that $$\bar{u}(x)\geq C\frac{d(x,\partial\Omega)}{|x|^{\bp}} \hbox{ for all }x\in \Omega.$$ Hence $\bar{u}>0$ in $\Omega$. This proves the claim and Step \ref{step:p11}.2. \noindent We fix $r_0>0$ and $\eta\in C^\infty(\rn)$ such that $\eta(x)=1$ in $B_{r_0}(0)$ and $\eta(x)=0$ in $\rn\setminus B_{2r_0}(0)$. It then follows from \cites{gr4,gr5} that, for $r_0>0$ small enough, there exist $A>0$ and $\beta\in H_{0}^1(\Omega)$ such that $$\bar{u}(x)=A\left(\frac{\eta(x) d(x,\partial\Omega)}{|x|^{\bp}}+\beta(x)\right)\hbox{ for all }x\in \Omega$$ with $$\beta(x)=m_{\gamma,h}(\Omega)\frac{\eta(x) d(x,\partial\Omega)}{|x|^{\bm}}+o\left(\frac{\eta(x) d(x,\partial\Omega)}{|x|^{\bm}}\right)$$ as $x\to 0$. Here, $m_{\gamma,h}(\Omega)$ is the boundary mass. \noindent{\it Step \ref{step:p11}.3:} We claim that \begin{equation}\label{est:Md} \lim_{\delta\to 0}M_\delta= - \frac{\omega_{n-1}}{n}\left(\frac{n^2}{4}-\gamma\right)A^2\cdot m_{\gamma,h}(\Omega). \end{equation} We prove the claim.
Since $\bar{u}$ is a solution to \eqref{cv:bue}, it follows from standard elliptic theory that there exists $C>0$ such that $\bar{u}(x)+|x||\nabla\bar{u}(x)|\leq C|x|^{1-\bp}$ for all $x\in \Omega$. Therefore, since $\bp-\bm<1$, we get that $$\lim_{\delta\to 0}\int_{\T(\rnm\cap B_\delta(0))}\bar{u}^2\, dx+\int_{\T(\rnm\cap \partial B_\delta(0))}\bar{u}^2\, d\sigma+\int_{\T(\partial\rnm\cap B_\delta(0))}|x|^2|\nabla\bar{u}|^2\, d\sigma=0.$$ Therefore, \begin{equation*} M_\delta=-\frac{A^2}{2}\bar{\mathcal H}_\delta(\bar{v}_{\bp}+\bar{v}_{\bm},\bar{v}_{\bp}+\bar{v}_{\bm} )+o(1) \end{equation*} as $\delta\to 0$, where \begin{eqnarray*} \bar{\mathcal H}_\delta(u,v)&:=&\int \limits_{ \T \left( \rnm \cap \partial B_{ \delta }(0) \right)} \left[( x, \nu) \left( (\nabla u,\nabla v) -\frac{\gamma}{|x|^2}uv \right) - \left(x^i\partial_i u+\frac{n-2}{2} u \right)\partial_{\nu} v\right.\\ &&\left.- \left(x^i\partial_i v+\frac{n-2}{2} v \right)\partial_{\nu} u\right]\,d \sigma \end{eqnarray*} and $$\bar{v}_{\bp}(x):=\frac{\eta(x) d(x,\partial\Omega)}{|x|^{\bp}}\hbox{ and }\bar{v}_{\bm}(x):=\beta(x)\hbox{ for all }x\in \Omega.$$ We then get that \begin{eqnarray*} M_\delta&=&-\frac{A^2}{2}\bar{\mathcal H}_\delta(\bar{v}_{\bp},\bar{v}_{\bp} )-A^2\bar{\mathcal H}_\delta(\bar{v}_{\bp},\bar{v}_{\bm} )\\ &&-\frac{A^2}{2}\bar{\mathcal H}_\delta(\bar{v}_{\bm},\bar{v}_{\bm} )+o(1) \end{eqnarray*} as $\delta\to 0$.
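\noindent For the reader's convenience, we record the elementary estimates behind the vanishing limit at the beginning of this step: the bound $\bar{u}(x)+|x||\nabla\bar{u}(x)|\leq C|x|^{1-\bp}$, together with $\bp+\bm=n$, gives \begin{align*} \int_{\T(\rnm\cap B_\delta(0))}\bar{u}^2\, dx=O\left(\delta^{2-(\bp-\bm)}\right),\qquad \int_{\T(\rnm\cap \partial B_\delta(0))}\bar{u}^2\, d\sigma=O\left(\delta^{1-(\bp-\bm)}\right), \end{align*} and \begin{align*} \int_{\T(\partial\rnm\cap B_\delta(0))}|x|^2|\nabla\bar{u}|^2\, d\sigma=O\left(\delta^{1-(\bp-\bm)}\right), \end{align*} where all three exponents are positive precisely because $\bp-\bm<1$.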
For any $x\in \rnm\cap B_\delta(0)$, with the chart $\T$ and the definition of $\beta$, we get $$\bar{v}_{\bp}(\T(x))=\frac{|x_1|}{|x|^{\bp}}+O(|x|^{2-\bp})=v_{\bp}+O(|x|^{2-\bp})$$ $$\hbox{ and }\bar{v}_{\bm}(\T(x))=m_{\gamma,h}(\Omega)\frac{|x_1|}{|x|^{\bm}}+O(|x|^{2-\bm})=m_{\gamma,h}(\Omega)\cdot v_{\bm}+O(|x|^{2-\bm}).$$ Moreover, elliptic theory yields $$\nabla (\bar{v}_{\bp}\circ\T)(x)=\nabla v_{\bp}+O(|x|^{1-\bp})$$ $$\hbox{ and }\nabla (\bar{v}_{\bm}\circ\T)(x)=m_{\gamma,h}(\Omega)\cdot \nabla v_{\bm}+O(|x|^{1-\bm})\hbox{ for all }x\in \rnm\cap B_\delta(0), $$ where $v_{\bp}$ and $v_{\bm}$ are defined in the proof of Step \ref{step:p9}. Since $\bp-\bm<1$ and $\bp+\bm=n$, we get with a change of variable that, as $\delta\to 0$, \begin{eqnarray*} \bar{\mathcal H}_\delta(\bar{v}_{\bp},\bar{v}_{\bp} )&=&{\mathcal H}_\delta(v_{\bp},v_{\bp} )+O(\delta^{1-(\bp-\bm)})\\ \bar{\mathcal H}_\delta(\bar{v}_{\bp},\bar{v}_{\bm} )&=&m_{\gamma,h}(\Omega)\cdot {\mathcal H}_\delta(v_{\bp},v_{\bm} )+O(\delta^{1-(\bp-\bm)})\\ \bar{\mathcal H}_\delta(\bar{v}_{\bm},\bar{v}_{\bm} )&=&O(\delta^{n-2\bm}). \end{eqnarray*} Using the computations performed in the proof of Step \ref{step:p9}, we then get \eqref{est:Md}. This proves the claim and ends Step \ref{step:p11}.3. \noindent{\it End of the proof of Step \ref{step:p11}:} Since $M_\delta$ is independent of small $\delta>0$, we then get that $M_{\delta_0}=- \frac{\omega_{n-1}}{n}\left(\frac{n^2}{4}-\gamma\right)A^2m_{\gamma,h}(\Omega)$. Putting this estimate in \eqref{est:inf:1}, we then get \eqref{est:mass:bis}.
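The coefficient identities for ${\mathcal H}_\delta$ established in Step \ref{step:p9}, which are reused at this point, reduce to elementary algebra on the roots $\bm,\bp$ of $X^2-nX+\gamma=0$. The following SymPy snippet (an illustration only, with symbolic $n$ and $\gamma$) confirms them:

```python
import sympy as sp

# beta_- and beta_+ are the roots of X^2 - n X + gamma = 0.
n, gamma = sp.symbols('n gamma', positive=True)
disc = sp.sqrt(n**2 - 4 * gamma)
bm, bp = (n - disc) / 2, (n + disc) / 2

def coeff(a, b):
    # coefficient (n/2)(alpha + beta) - alpha*beta - gamma appearing in
    # H_delta(v_alpha, v_beta) = (omega_{n-1}/(2n)) delta^{n-alpha-beta} * coeff
    return sp.Rational(1, 2) * n * (a + b) - a * b - gamma

# diagonal terms vanish: H_delta(v_{beta_-}, v_{beta_-}) = H_delta(v_{beta_+}, v_{beta_+}) = 0
assert sp.simplify(coeff(bm, bm)) == 0
assert sp.simplify(coeff(bp, bp)) == 0
# cross term: the coefficient equals 2 (n^2/4 - gamma), which after the prefactor
# omega_{n-1}/(2n) gives H_delta(v_{beta_-}, v_{beta_+}) = (omega_{n-1}/n)(n^2/4 - gamma)
assert sp.simplify(coeff(bm, bp) - 2 * (n**2 / 4 - gamma)) == 0
```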
This ends Step \ref{step:p11}.\qed \noindent {\it Proof of Proposition \ref{prop:rate:sc} when $\bp-\bm>2$:} Plugging \eqref{cpct:bdry:term:C} into \eqref{est:1} and using that $ \bp-\bm>2$, we obtain \begin{align*} \lim \limits_{\eps \to 0} \frac{\pe}{\mu_{N,\eps}}&~=\frac{n-s}{(n-2)^{2}} \frac{1}{ t_{N}^{\frac{n-1}{\crits-2}}}\frac{ \int \limits_{\partial\rnm} II_{0}(x,x) |\nabla \tu_{N}|^2 ~d\sigma}{ \sum \limits_{i=1}^{N} \frac{1}{t_{i}^{ \frac{n-2}{\crits-2}}} \int \limits_{\rnm } \frac{|\tu_{i}|^{\crits }}{|x|^{s}} ~ dx }. \end{align*} This yields \eqref{rate:1} when $\bp-\bm>2$. \noindent{\it Proof of Proposition \ref{prop:rate:sc} when $\bp-\bm>1$ and $u_0\equiv 0$:} Plugging \eqref{cpct:bdry:term:C:2} into \eqref{est:4:1} and using that $ \bp-\bm>1$, we also obtain \eqref{rate:1}.\\ \noindent{\it Proof of Proposition \ref{prop:rate:sc:2} when $\bp-\bm>1$:} Since $\ue>0$, we get that $\tilde{u}_N>0$. Therefore, it follows from Ghoussoub-Robert \cite{gr4} that $\bar{u}_N(x_1,x')=\bar{U}_N(x_1,|x'|)$ for all $(x_1,x')\in(0,+\infty)\times \rr^{n-1}$. Due to this symmetry, when $\bp-\bm>1$, we get that \begin{eqnarray} &&\int_{\partial\rnm} II_{0}(x,x) |\nabla \tu_{N}|^2 ~d\sigma=\sum_{i,j=1}^{n-1}\int_{\partial\rnm} II_{0,ij}x^ix^j |\nabla \tu_{N}|^2 ~d\sigma\label{est:3:1}\\ &&=\frac{\sum_{i=1}^{n-1}II_{0,ii}}{n-1}\int_{\partial\rnm} |x|^2 |\nabla \tu_{N}|^2 ~d\sigma=\frac{\int_{\partial\rnm} |x|^2 |\nabla \tu_{N}|^2 ~d\sigma}{n-1}H(0).\notag \end{eqnarray} When $\bp-\bm>2$ or $\{\bp-\bm=2\hbox{ and }u_0\equiv 0\}$, Proposition \ref{prop:rate:sc:2} follows from \eqref{rate:1} and \eqref{est:3:1}. When $\{\bp-\bm=2\hbox{ and }u_0>0\}$, Proposition \ref{prop:rate:sc:2} follows from \eqref{est:1}, \eqref{id:F:A:B} of Step \ref{step:p9}, \eqref{cpct:bdry:term:C} and \eqref{est:3:1}. When $1<\bp-\bm<2$, Proposition \ref{prop:rate:sc:2} follows from Step \ref{step:p10}, \eqref{est:4:1}, \eqref{cpct:bdry:term:C:2} and \eqref{est:3:1}.
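The averaging step in \eqref{est:3:1} rests on the classical second-moment identity for spheres; we record its verification for the reader's convenience. By the symmetry $\sigma^i\mapsto -\sigma^i$, the off-diagonal moments vanish, while the diagonal ones are equal and sum to the total measure:
$$\int_{\mathbb{S}^{n-2}}\sigma^i\sigma^j\, d\sigma=\delta_{ij}\,\frac{\omega_{n-2}}{n-1},\qquad\hbox{ since }\qquad \sum_{k=1}^{n-1}\int_{\mathbb{S}^{n-2}}(\sigma^k)^2\, d\sigma=\int_{\mathbb{S}^{n-2}}|\sigma|^2\, d\sigma=\omega_{n-2}.$$
Integrating $II_{0,ij}x^ix^j$ against the radially symmetric weight $|\nabla \tu_{N}|^2$ on $\partial\rnm\simeq\rr^{n-1}$ in polar coordinates and applying this identity to $\sigma=x/|x|$ replaces $II_{0,ij}x^ix^j$ by $\frac{|x|^2}{n-1}\sum_{i=1}^{n-1}II_{0,ii}$, which is \eqref{est:3:1}.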
\noindent{\it Proof of Proposition \ref{prop:rate:sc:2} when $\bp-\bm<1$:} This is a direct consequence of Steps \ref{step:p10} and \ref{step:p11}. \subsection{Proof of the sharp blow-up rates when $\bp-\bm = 1$} We start with the following refined asymptotics when $\ue>0$, $\bp-\bm=1$ and $u_0\equiv 0$. \begin{step}\label{prop:11} We let $(\ue)$, $(\he)$ and $(\pe)$ be such that $(E_\eps)$, \eqref{hyp:he}, \eqref{lim:pe} and \eqref{bnd:ue} hold. We assume that blow-up occurs. We assume that $\ue>0$ and $u_0\equiv 0$. We fix a family of parameters $(\lambda_\eps)_{\eps>0}\in (0,+\infty)$ such that \begin{equation}\label{hyp:lambda} \lim_{\eps\to 0}\lambda_\eps=0\hbox{ and }\lim_{\eps\to 0}\frac{\mu_{N,\eps}}{\lambda_\eps}=0. \end{equation} Then, for all $x\in\overline{\rnm}$, $x\neq 0$, we have that \begin{equation*} \lim_{\eps\to 0}\frac{\lambda_\eps^{\bp-1}}{\mu_{N,\eps}^{\frac{\bp-\bm}{2}}}\ue(\T(\lambda_\eps x))=K\cdot\frac{|x_1|}{|x|^{\bp}}, \end{equation*} where $\T$ is as in \eqref{def:T:bdry}, \begin{equation}\label{def:K} K:=t_N^{-\frac{\bp-1}{\crits-2}}L_{\gamma,\Omega}\int_{\rnm}\frac{|y_1|}{|y|^{\bm}}\frac{\tilde{u}_N^{\crits-1}(y)}{|y|^s}\, dy>0 \end{equation} and $L_{\gamma,\Omega}>0$ is given by \eqref{est:G:4}. Moreover, this limit holds in $C^2_{loc}(\overline{\rnm}\setminus\{0\})$. \end{step} \noindent{\it Proof of Step \ref{prop:11}:} We define $$w_\eps(x):=\frac{\lambda_\eps^{\bp-1}}{\mu_{N,\eps}^{\frac{\bp-\bm}{2}}}\ue(\T(\lambda_\eps x))$$ for all $x\in \rnm\cap \lambda_\eps^{-1}U$. As in the proof of \eqref{cpct:bdry:term1and23}, for any $i,j=1,...,n$, we let $(\tge)_{ij}=(\partial_i\T(\lambda_{\eps} x),\partial_j\T( \lambda_{\eps} x))$, where $(\cdot,\cdot)$ denotes the Euclidean scalar product on $\rn$. We consider $\tge$ as a metric on $\rn$. We let $\Delta_{\tge}=div_{\tge}(\nabla)$ be the Laplace-Beltrami operator with respect to the metric $\tge$.
From $(E_{\eps})$ it follows that for all $\eps >0$, we have that \begin{equation*} \left\{\begin{array}{cl} -\Delta_{\tge} w_\eps - \frac{\gamma}{\left| \frac{\T ( \lambda_{\eps} x)}{\lambda_{\eps}}\right|^{2}} w_\eps -\lambda_{\eps}^2~\he \circ \T(\lambda_{\eps} x) w_\eps =s_\eps\frac{w_\eps^{\crits-1-\pe}}{\left| \frac{\T (\lambda_{\eps} x)}{\lambda_{\eps}}\right|^{s}}& \hbox{ in }\rnm\cap \lambda_\eps^{-1}U\\ w_\eps>0 & \hbox{ in }\rnm\cap \lambda_\eps^{-1}U\\ w_\eps=0 & \hbox{ on }(\partial\rnm\setminus\{0\})\cap \lambda_\eps^{-1}U, \end{array}\right. \end{equation*} where $$s_\eps:=\left(\frac{\mu_{N,\eps}^{\frac{\bp-\bm}{2}}}{\lambda_\eps^{\bp-1}}\right)^{\crits-2-p_\eps}\lambda_\eps^{2-s}.$$ Since $\mu_{N,\eps}^{p_\eps}\to t_N>0$ (see (A9) of Proposition \ref{prop:exhaust}) and $$(\bp-1)(\crits-2)-(2-s)=(2-s)\frac{2\bp-n}{n-2}=(\crits-2)\frac{\bp-\bm}{2}$$ (using $\crits-2=\frac{2(2-s)}{n-2}$ and $2\bp-n=\bp-\bm$), then using the hypothesis \eqref{hyp:lambda}, we get that $$\left(\frac{\mu_{N,\eps}^{\frac{\bp-\bm}{2}}}{\lambda_\eps^{\bp-1}}\right)^{\crits-2-p_\eps}\lambda_\eps^{2-s}\leq C \left(\frac{\mu_{N,\eps}}{\lambda_\eps}\right)^{(\bp-1)(\crits-2-p_\eps)-(2-s)}=o(1)$$ as $\eps\to 0$. Since $u_0\equiv 0$, it follows from the pointwise control \eqref{eq:est:global} that there exists $C>0$ such that $0<w_\eps(x)\leq C|x_1|\cdot |x|^{-\bp}$ for all $x\in \rnm\cap \lambda_\eps^{-1}U$. It then follows from standard elliptic theory that there exists $w\in C^2(\overline{\rnm}\setminus\{0\})$ such that \begin{equation}\label{lim:w:c2} \lim_{\eps\to 0}w_\eps=w\hbox{ in }C^2_{loc}(\overline{\rnm}\setminus\{0\}) \end{equation} with \begin{equation*} \left\{\begin{array}{ll} -\Delta w - \frac{\gamma}{|x|^2} w=0&\hbox{ in }\rnm\\ 0\leq w(x)\leq C|x_1|\cdot |x|^{-\bp}&\hbox{ in }\rnm\\ w=0&\hbox{ on }\partial\rnm\setminus\{0\}. \end{array}\right.
\end{equation*} It follows from Lemma 4.2 in Ghoussoub-Robert \cite{gr4} (see also Pinchover-Tintarev \cite{PT}) that there exists $\Lambda\geq 0$ such that $w(x)= \Lambda |x_1|\cdot |x|^{-\bp}$ for all $x\in\rnm$. We are left with proving that $\Lambda=K$, where $K$ is defined in \eqref{def:K}. We fix $x\in \rnm$. Green's representation formula yields \begin{eqnarray} w_\eps(x)&=&\int_\Omega\frac{\lambda_\eps^{\bp-1}}{\mu_{N,\eps}^{\frac{\bp-\bm}{2}}}G_\eps(\T(\lambda_\eps x),y)\frac{\ue(y)^{\crits-1-p_\eps}}{|y|^s}\,dy\nonumber\\ &=& \int_{\T(\rnm\cap (B_{R k_{N,\eps}}(0)\setminus B_{\delta k_{N,\eps}}(0)))}+\int_{\Omega\setminus\T(\rnm\cap (B_{R k_{N,\eps}}(0)\setminus B_{\delta k_{N,\eps}}(0)))}\label{est:w:0} \end{eqnarray} \noindent{\it Step \ref{prop:11}.1:} We estimate the first term of the right-hand side. Since $D_{0} \mathcal{T} = \mathbb{I}_{\R^{n}}$, a change of variable yields \begin{eqnarray*} &&\int_{\T(\rnm\cap (B_{Rk_{N,\eps}}(0)\setminus B_{\delta k_{N,\eps}}(0)))}\frac{\lambda_\eps^{\bp-1}}{\mu_{N,\eps}^{\frac{\bp-\bm}{2}}}G_\eps(\T(\lambda_\eps x),y)\frac{\ue(y)^{\crits-1-p_\eps}}{|y|^s}\,dy\\ &&=s_\eps^{(1)}\int_{\rnm\cap (B_{R}(0)\setminus B_{\delta}(0))}G_\eps(\T(\lambda_\eps x),\T(k_{N,\eps}z))\frac{\tilde{u}_{N,\eps}(z)^{\crits-1-p_\eps}}{|z|^s}(1+o(1))\,dz \end{eqnarray*} with $$s_\eps^{(1)}:=\frac{\lambda_\eps^{\bp-1}}{\mu_{N,\eps}^{\frac{\bp-\bm}{2}}}k_{N,\eps}^{n-s}\mu_{N,\eps}^{-\frac{n-2}{2}(\crits-1-p_\eps)}.$$ It follows from \eqref{est:G:4} that for any $z\in \rnm$, we have that $$G_\eps(\T(\lambda_\eps x),\T(k_{N,\eps}z))=(L_{\gamma,\Omega}+o(1))\frac{\lambda_\eps|x_1|}{\lambda_\eps^{\bp}|x|^{\bp}}\cdot \frac{k_{N,\eps}|z_1|}{k_{N,\eps}^{\bm}|z|^{\bm}},$$ and that the convergence is uniform with respect to $z\in \rnm\cap (B_{R}(0)\setminus B_{\delta}(0))$.
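As a bookkeeping check, recorded here for convenience, the constant produced by $s_\eps^{(1)}$ together with the Green's function asymptotics is indeed the one appearing in \eqref{def:K}: using $k_{N,\eps}=\mu_{N,\eps}^{1-p_\eps/(\crits-2)}$ and $\frac{n-2}{2}\crits=n-s$,
$$s_\eps^{(1)}\cdot\frac{\lambda_\eps}{\lambda_\eps^{\bp}}\cdot\frac{k_{N,\eps}}{k_{N,\eps}^{\bm}}=k_{N,\eps}^{n-s+1-\bm}\,\mu_{N,\eps}^{-\frac{\bp-\bm}{2}-\frac{n-2}{2}(\crits-1-p_\eps)}=\left(\mu_{N,\eps}^{p_\eps}\right)^{\frac{n-2}{2}-\frac{n-s+1-\bm}{\crits-2}}\longrightarrow t_N^{-\frac{\bp-1}{\crits-2}},$$
since the $p_\eps$-free exponent of $\mu_{N,\eps}$ vanishes, namely $n-s+1-\bm-\frac{\bp-\bm}{2}-\frac{n-2}{2}(\crits-1)=0$, and $\frac{n-2}{2}(\crits-2)-(n-s+1-\bm)=(2-s)-(n-s+1-\bm)=\bm+1-n=-(\bp-1)$, together with $\mu_{N,\eps}^{p_\eps}\to t_N>0$.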
Plugging this estimate in the above equality, using that $k_{N,\eps}=\mu_{N,\eps}^{1-p_\eps/(\crits-2)}$, $\mu_{N,\eps}^{p_\eps}\to t_N>0$ and the convergence of $\tilde{u}_{N,\eps}$ to $\tilde{u}_N$ (see Proposition \ref{prop:exhaust}), we get that \begin{eqnarray*} &&\int_{\T(\rnm\cap (B_{R k_{N,\eps}}(0)\setminus B_{\delta k_{N,\eps}}(0)))}\frac{\lambda_\eps^{\bp-1}}{\mu_{N,\eps}^{\frac{\bp-\bm}{2}}}G_\eps(\T(\lambda_\eps x),y)\frac{\ue(y)^{\crits-1-p_\eps}}{|y|^s}\,dy\\ &&=L_{\gamma,\Omega}\frac{|x_1|}{|x|^{\bp}}t_N^{-\frac{\bp-1}{\crits-2}}\int_{\rnm\cap (B_{R}(0)\setminus B_{\delta}(0))}\frac{|z_1|}{|z|^{\bm}}\frac{\tilde{u}_{N}(z)^{\crits-1}}{|z|^s}\,dz+o(1) \end{eqnarray*} as $\eps\to 0$. Therefore, \begin{eqnarray} &&\lim_{R\to +\infty, \delta\to 0}\lim_{\eps\to 0}\int_{\T(\rnm\cap (B_{R k_{N,\eps}}(0)\setminus B_{\delta k_{N,\eps}}(0)))}\frac{\lambda_\eps^{\bp-1}}{\mu_{N,\eps}^{\frac{\bp-\bm}{2}}}G_\eps(\T(\lambda_\eps x),y)\frac{\ue(y)^{\crits-1-p_\eps}}{|y|^s}\,dy\nonumber\\ &&=K\frac{|x_1|}{|x|^{\bp}},\label{est:w:1} \end{eqnarray} where $K$ is as in \eqref{def:K}.
\noindent{\it Step \ref{prop:11}.2:} With the control \eqref{est:G:up} on the Green's function and the pointwise control \eqref{eq:est:global} on $\ue$, we get that \begin{eqnarray} &&\int_{\Omega\setminus\T(\rnm\cap (B_{R k_{N,\eps}}(0)\setminus B_{\delta k_{N,\eps}}(0)))}\frac{\lambda_\eps^{\bp-1}}{\mu_{N,\eps}^{\frac{\bp-\bm}{2}}}G_\eps(\T(\lambda_\eps x),y)\frac{\ue(y)^{\crits-1-\pe}}{|y|^s}\,dy\nonumber\\ &&\leq \sum_{i=1}^{N-1}A_{i,\eps} +B_\eps(R)+C_\eps(\delta),\label{est:w:0:bis} \end{eqnarray} where \begin{equation*} A_{i,\eps}:=C\frac{\lambda_\eps^{\bp-1}}{\mu_{N,\eps}^{\frac{\bp-\bm}{2}}}\int_{B_{R_0}(0)}\frac{\ell_\epsilon (x, y)^{\bm}r_\epsilon (x , y)}{|\T(\lambda_\eps x)-y|^{n-2}|y|^s}\left(\frac{\mu_{i,\eps}^{\frac{\bp-\bm}{2}}|y|}{\mu_{i,\eps}^{\bp-\bm}|y|^{\bm}+|y|^{\bp}}\right)^{\crits-1}\, dy, \end{equation*} \begin{equation*} B_\eps(R):=C\frac{\lambda_\eps^{\bp-1}}{\mu_{N,\eps}^{\frac{\bp-\bm}{2}}}\int_{B_{R_0}(0)\setminus B_{R k_{N,\eps}}(0)}\frac{\ell_\epsilon (x, y)^{\bm}r_\epsilon (x, y)}{|\T(\lambda_\eps x)-y|^{n-2}}\frac{\mu_{N,\eps}^{\frac{\bp-\bm}{2}(\crits-1)}}{|y|^{(\bp-1)(\crits-1)+s}}\, dy \end{equation*} and \begin{equation*} C_\eps(\delta):=C(x)\frac{\lambda_\eps^{\bp-1+2-n+\bm-1}}{\mu_{N,\eps}^{\frac{\bp-\bm}{2}\cdot\crits}}\int_{ B_{\delta k_{N,\eps}}(0)}\frac{dy}{ |y|^{(\bm-1)(\crits-1)+s+\bm-1}}, \end{equation*} with $\ell_\epsilon (x, y):=\frac{\max\{\lambda_\eps|x|,|y|\}}{\min\{\lambda_\eps|x|,|y|\}}$ and $r_\epsilon (x, y):=\min\left\{1,\frac{\lambda_\eps|x_1|\cdot|y|}{|\T(\lambda_\eps x)-y|^2}\right\}$. \noindent{\it Step \ref{prop:11}.3:} We first estimate $C_\eps(\delta)$. Since $n>s+\crits(\bm-1)$ (this is a consequence of $\bm<n/2$), straightforward computations (integrating $|y|^{-((\bm-1)\crits+s)}$ over $B_{\delta k_{N,\eps}}(0)$ and using $\bp+\bm=n$, $\frac{n-2}{2}\crits=n-s$ and $\mu_{N,\eps}^{p_\eps}\to t_N>0$) yield $$C_\eps(\delta)\leq C(x)\delta^{\frac{\crits}{2}(\bp-\bm)},$$ and therefore \begin{equation}\label{est:w:2} \lim_{\delta\to 0}\lim_{\eps\to 0}C_\eps(\delta)=0. \end{equation} \noindent{\it Step \ref{prop:11}.4:} We estimate $B_\eps(R)$.
We split the integral as $$B_\eps(R)=\int_{R k_{N,\eps}<|y|<\frac{\lambda_\eps|x|}{2}}I_\eps(y)\, dy+\int_{\frac{\lambda_\eps|x|}{2}<|y|<2\lambda_\eps|x|}I_\eps(y)\, dy+\int_{|y|>2\lambda_\eps|x|}I_\eps(y)\, dy,$$ where $I_\eps(y)$ denotes the integrand. Since $$n-(s+(\bp-1)(\crits-1)+\bm-1)=-\frac{\crits-2}{2}(\bp-\bm)<0,$$ straightforward computations yield \begin{eqnarray*} &&\int_{R k_{N,\eps}<|y|<\frac{\lambda_\eps|x|}{2}}I_\eps(y)\, dy\\ &&\leq C(x)\frac{\lambda_\eps^{\bp-1+\bm-1+2-n}}{\mu_{N,\eps}^{\frac{\bp-\bm}{2}}} \int_{R k_{N,\eps}<|y|<\frac{\lambda_\eps|x|}{2}}\frac{\mu_{N,\eps}^{\frac{\bp-\bm}{2}(\crits-1)}}{|y|^{(\bp-1)(\crits-1)+s+\bm-1}}\, dy\\ &&\leq C(x)R^{-\frac{\crits-2}{2}(\bp-\bm)}. \end{eqnarray*} For the next term, a change of variable yields \begin{eqnarray*} &&\int_{\frac{\lambda_\eps|x|}{2}<|y|<2\lambda_\eps|x|}I_\eps(y)\, dy\\ &&\leq C(x)\frac{\lambda_\eps^{\bp-1}}{\mu_{N,\eps}^{\frac{\bp-\bm}{2}}} \int_{\frac{\lambda_\eps|x|}{2}<|y|<2\lambda_\eps|x|}|\T(\lambda_\eps x)-y|^{2-n}\frac{\mu_{N,\eps}^{\frac{\bp-\bm}{2}(\crits-1)}}{|y|^{(\bp-1)(\crits-1)+s}}\, dy\\ &&\leq C(x) \left(\frac{\mu_{N,\eps}}{\lambda_\eps}\right)^{\frac{\crits-2}{2}(\bp-\bm)}\int_{\frac{|x|}{2}<|z|<2|x|}|x-z|^{2-n}\, dz=o(1) \end{eqnarray*} as $\eps\to 0$. Finally, since $\bp+\bm=n$ and $n-s-(\bp-1)\crits=-\frac{\crits}{2}(\bp-\bm)$, we estimate the last term \begin{eqnarray*} &&\int_{|y|>2\lambda_\eps|x|}I_\eps(y)\, dy\\ &&\leq C(x) \mu_{N,\eps}^{\frac{\crits-2}{2}(\bp-\bm)}\lambda_\eps^{\bp-\bm}\int_{|y|>2\lambda_\eps|x|}\frac{|y|^{\bm+1-n-s}\, dy}{|y|^{(\bp-1)(\crits-1)}}\\ &&\leq C(x) \left(\frac{\mu_{N,\eps}}{\lambda_\eps}\right)^{\frac{\crits-2}{2}(\bp-\bm)}=o(1) \end{eqnarray*} as $\eps\to 0$. All these inequalities yield \begin{equation}\label{est:w:3} \lim_{R\to +\infty}\lim_{\eps\to 0}B_\eps(R)=0. \end{equation} \noindent{\it Step \ref{prop:11}.5:} We fix $i\in \{1,...,N-1\}$ and estimate $A_{i,\eps}$.
As above, we split the integral as $$A_{i,\eps}=\int_{|y|<\frac{\lambda_\eps|x|}{2}}J_{i,\eps}(y)\, dy+\int_{\frac{\lambda_\eps|x|}{2}<|y|<2\lambda_\eps|x|}J_{i,\eps}(y)\, dy+\int_{|y|>2\lambda_\eps|x|}J_{i,\eps}(y)\, dy,$$ where $J_{i,\eps}$ is the integrand. Since $\mu_{i,\eps}\leq \mu_{N,\eps}$, as one checks, the second and the third integrals of the right-hand side are controlled from above respectively by $\int_{\frac{\lambda_\eps|x|}{2}<|y|<2\lambda_\eps|x|}I_\eps(y)\, dy$ and $\int_{|y|>2\lambda_\eps|x|}I_\eps(y)\, dy$, which have been estimated just above and go to $0$ as $\eps\to 0$. We are then left with the first term. With a change of variables, we have that \begin{eqnarray*} &&\int_{|y|<\frac{\lambda_\eps|x|}{2}}J_{i,\eps}(y)\, dy\\ &&\leq C(x)\frac{\lambda_\eps^{\bp-1+\bm+2-n-1}}{\mu_{N,\eps}^{\frac{\bp-\bm}{2}}} \\ &&\times\int_{|y|<\frac{\lambda_\eps|x|}{2}} \left(\frac{\mu_{i,\eps}^{\frac{\bp-\bm}{2}}|y|}{\mu_{i,\eps}^{\bp-\bm}|y|^{\bm}+|y|^{\bp}}\right)^{\crits-1} \frac{dy}{|y|^{s+\bm-1}}\\ &&\leq C(x) \frac{\mu_{i,\eps}^{1+n-s-\bm-\frac{n-2}{2}(\crits-1)}}{\mu_{N,\eps}^{\frac{\bp-\bm}{2}}}\\ &&\times\int_{|z|<\frac{\lambda_\eps|x|}{2\mu_{i,\eps}}}\frac{1}{|z|^{(\bm-1)+s}}\left(\frac{|z|}{|z|^{\bm}+|z|^{\bp}}\right)^{\crits-1}\, dz\\ &&\leq C(x)\left(\frac{\mu_{i,\eps}}{\mu_{N,\eps}}\right)^{\frac{\bp-\bm}{2}}, \end{eqnarray*} since $n>s+\crits(\bm-1)$ and $n<(\bm-1)+s+(\crits-1)(\bp-1)$. Since $\mu_{i,\eps}=o(\mu_{N,\eps})$ as $\eps\to 0$, we get that \begin{equation}\label{est:w:4} \lim_{\eps\to 0}A_{i,\eps}=0. \end{equation} \noindent{\it Step \ref{prop:11}.6:} Plugging \eqref{est:w:1}, \eqref{est:w:2}, \eqref{est:w:3} and \eqref{est:w:4} into \eqref{est:w:0} and \eqref{est:w:0:bis} yields $\lim_{\eps\to 0}w_\eps(x)=K\frac{|x_1|}{|x|^{\bp}}$ for all $x\in \rnm$. With \eqref{lim:w:c2}, we then get that $\Lambda=K$.
This proves Step \ref{prop:11}.\\ Now we can prove Proposition \ref{prop:rate:sc:2} when $\bp-\bm=1$ in the case where $\ue>0$. \begin{step}\label{step:p13} We let $(\ue)$, $(\he)$ and $(\pe)$ be such that $(E_\eps)$, \eqref{hyp:he}, \eqref{lim:pe} and \eqref{bnd:ue} hold. We assume that blow-up occurs. We assume that $\ue>0$ and $\bp-\bm=1$. Then $u_0\equiv 0$ and \begin{eqnarray} && \frac{p_{\eps}}{\crits} \left( \frac{n-s}{\crits}\right) \left(\sum \limits_{i=1}^{N} \frac{1}{t_{i}^{ \frac{n-2}{\crits-2}}} \int \limits_{\rnm } \frac{|\tu_{i}|^{\crits }}{|x|^{s}} ~ dx +o(1)\right)\nonumber\\ && = \frac{K^2\omega_{n-2}H(0)}{4(n-1)}\mu_{N,\eps}\ln\frac{1}{\mu_{N,\eps}}+o\left(\mu_{N,\eps}\ln\frac{1}{\mu_{N,\eps}}\right).\label{est:inter} \end{eqnarray} \end{step} The case $\bp-\bm=1$ of Proposition \ref{prop:rate:sc:2} is a consequence of Step \ref{step:p13}.\\ \noindent{\it Proof of Step \ref{step:p13}:} First, remark that, since $\bp+\bm=n$ and $\bp-\bm=1$, we have that $$\bp=\frac{n+1}{2}\hbox{ and }\bm=\frac{n-1}{2}.$$ It follows from Step \ref{step:p10} that $u_0\equiv 0$. We use \eqref{est:4:1}, which reads {\small \begin{equation} \frac{p_{\eps}}{\crits} \left( \frac{n-s}{\crits -p_{\eps}}\right) \left(\sum \limits_{i=1}^{N} \frac{1}{t_{i}^{ \frac{n-2}{\crits-2}}} \int \limits_{\rnm } \frac{|\tu_{i}|^{\crits }}{|x|^{s}} ~ dx +o(1)\right) = \int \limits_{ {\mathcal T}_\epsilon } ( x, \nu) \frac{|\nabla \ue|^2}{2} ~d\sigma +O\left(\mu_{N,\eps}\right),\label{est:5} \end{equation} } where ${\mathcal T}_\epsilon:= \T\left( \partial \rnm \cap B_{ \delta_0 }(0) \setminus B_{ k_{1,\eps}^{3} }(0) \right)$.
It follows from \eqref{est:T:3} that \begin{eqnarray} &&\ds \int \limits_{ {\mathcal T}_\epsilon} ( x, \nu) \frac{|\nabla \ue|^2}{2} ~d\sigma=\ds-\frac{1}{4}\int \limits_{ {\mathcal T}_\epsilon} \sum_{p,q=2}^{n} x^px^q \partial_{pq} \T_{0}(0) |\nabla (\ue\circ\T)|_{\T^\star\eucl}^2 (1+O(|x|))~d\sigma\nonumber\\ &&\ds +O\left(\int \limits_{ \partial \rnm \cap B_{ \delta_0 }(0) }|x|^3 |\nabla (\ue\circ\T)|_{\T^\star\eucl}^2 \,d\sigma\right)\nonumber\\ &&=\ds-\frac{1}{4}\int \limits_{ \partial \rnm \cap B_{ \delta_0 }(0) \setminus B_{ k_{1,\eps}^{3} }(0) } \sum_{p,q=2}^{n} x^px^q \partial_{pq} \T_{0}(0) |\nabla (\ue\circ\T)|^2 \,d\sigma\nonumber\\ &&\ds +O\left(\int \limits_{ \partial \rnm \cap B_{ \delta_0 }(0) }|x|^3 |\nabla (\ue\circ\T)|^2 \,d\sigma\right).\label{est:23} \end{eqnarray} With the control \eqref{eq:est:grad:global} and $\bp-\bm=1$, we get that \begin{eqnarray} \int_{ \partial \rnm \cap B_{ \delta_0 }(0) }|x|^3 |\nabla (\ue\circ\T)|^2 \,d\sigma&\leq& C\sum_{i=1}^N\int_{ \partial \rnm \cap B_{ \delta_0 }(0) }|x|^3 \frac{\mu_{i,\eps}^{\bp-\bm}}{|x|^{2\bp}}\,d\sigma\nonumber\\ &\leq& C\mu_{N,\eps}^{\bp-\bm}= C\mu_{N,\eps}.\label{est:24} \end{eqnarray} \noindent We need an intermediate result. We let $(s_\eps)_\eps,(t_\eps)_\eps\in [0,+\infty)$ be such that $0\leq s_\eps\leq t_\eps$ and $\mu_{N,\eps}=o(t_\eps)$ as $\eps\to 0$.
We claim that \begin{eqnarray}\label{est:square} \int_{\partial \rnm \cap \left(B_{ t_\eps }(0)\setminus B_{s_\eps}(0) \right)}|x|^2 |\nabla (\ue\circ\T)|^2 \,d\sigma&\leq& C\sum_{i=1}^N\mu_{i,\eps}\ln\left(\frac{t_\eps}{\max\{s_\eps,\mu_{i,\eps}\}}\right). \end{eqnarray} Indeed, with the pointwise control \eqref{eq:est:grad:global}, $u_0\equiv 0$ and $2\bp=n+1$, we get that \begin{eqnarray*} &&\ds \int \limits_{ \partial \rnm \cap \left(B_{ t_\eps }(0)\setminus B_{s_\eps}(0) \right)}|x|^2 |\nabla (\ue\circ\T)|^2 \,d\sigma\\ &&\leq C\sum_{i=1,...,N}\mu_{i,\eps}^{\bp-\bm}\int_{s_\eps}^{t_\eps}\frac{r^{2+(n-1)-1}\, dr}{\mu_{i,\eps}^{2(\bp-\bm)}r^{2\bm}+r^{2\bp}}\\ &&\leq C\sum_{i=1,...,N}\mu_{i,\eps}\int_{\frac{s_\eps}{\mu_{i,\eps}}}^{\frac{t_\eps}{\mu_{i,\eps}}}\frac{r^{2\bp-1}\, dr}{r^{2\bm}+r^{2\bp}}. \end{eqnarray*} Since $2(\bp-\bm)=2$, the last integrand equals $\frac{r}{1+r^2}$, whose primitive is $\frac{1}{2}\ln(1+r^2)$; distinguishing the cases $s_\eps\leq \mu_{i,\eps}$ and $s_\eps\geq \mu_{i,\eps}$, we get \eqref{est:square}. This proves the claim. \noindent We define $\theta_\eps:=\frac{1}{\sqrt{|\ln\mu_{N,\eps}|}}$, $\alpha_\eps:=\mu_{N,\eps}^{\theta_\eps}$ and $\beta_\eps:=\mu_{N,\eps}^{1-\theta_\eps}$. As one checks, we have that \begin{equation} \left\{\begin{array}{ccc} \mu_{N,\eps}=o(\beta_\eps) & \beta_\eps=o(\alpha_\eps) & \alpha_\eps=o(1) \\ \ln \frac{\alpha_\eps}{\beta_\eps}\simeq \ln\frac{1}{\mu_{N,\eps}} & \ln \frac{\beta_\eps}{\mu_{N,\eps}}=o\left(\ln\frac{1}{\mu_{N,\eps}} \right) & \ln\alpha_\eps=o(\ln\mu_{N,\eps}) \end{array}\right\}\label{ppty:a:b} \end{equation} as $\eps\to 0$.
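The verification of \eqref{ppty:a:b} is immediate from the definitions $\theta_\eps=|\ln\mu_{N,\eps}|^{-1/2}$, $\alpha_\eps=\mu_{N,\eps}^{\theta_\eps}$ and $\beta_\eps=\mu_{N,\eps}^{1-\theta_\eps}$; we record it for convenience:
$$\frac{\mu_{N,\eps}}{\beta_\eps}=\mu_{N,\eps}^{\theta_\eps}=e^{-\sqrt{|\ln\mu_{N,\eps}|}}\to 0,\qquad \frac{\beta_\eps}{\alpha_\eps}=\mu_{N,\eps}^{1-2\theta_\eps}\to 0,\qquad \alpha_\eps=e^{-\sqrt{|\ln\mu_{N,\eps}|}}\to 0,$$
$$\ln\frac{\alpha_\eps}{\beta_\eps}=(1-2\theta_\eps)\ln\frac{1}{\mu_{N,\eps}},\qquad \ln\frac{\beta_\eps}{\mu_{N,\eps}}=\theta_\eps\ln\frac{1}{\mu_{N,\eps}}=\sqrt{|\ln\mu_{N,\eps}|},\qquad \ln\alpha_\eps=\theta_\eps\ln\mu_{N,\eps},$$
so that $\ln\frac{\alpha_\eps}{\beta_\eps}\simeq\ln\frac{1}{\mu_{N,\eps}}$, $\ln\frac{\beta_\eps}{\mu_{N,\eps}}=o\left(\ln\frac{1}{\mu_{N,\eps}}\right)$ and $\ln\alpha_\eps=o(\ln\mu_{N,\eps})$ as $\eps\to 0$.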
It then follows from \eqref{est:square} and the properties \eqref{ppty:a:b} that \begin{equation} \left\{\begin{array}{l} \ds \int\limits_{ \partial \rnm \cap \left(B_{ \delta_0}(0)\setminus B_{\alpha_\eps}(0) \right)}|x|^2 |\nabla (\ue\circ\T)|^2 =o\left(\mu_{N,\eps} \ln\frac{1}{\mu_{N,\eps}}\right);\\ \ds \int \limits_{ \partial \rnm \cap B_{ \beta_\eps}(0)}|x|^2 |\nabla (\ue\circ\T)|^2 =o\left(\mu_{N,\eps} \ln\frac{1}{\mu_{N,\eps}}\right) \end{array}\right\}\label{est:25} \end{equation} Since $\mu_{N,\eps}=o(\beta_\eps)$ and $\alpha_\eps=o(1)$ as $\eps\to 0$, it follows from Proposition \ref{prop:11} that \begin{equation}\label{asymp:ue} \lim_{\eps\to 0}\sup_{x\in \partial\rnm\cap B_{\alpha_\eps}(0)\setminus B_{\beta_\eps}(0)}\left|\frac{|x|^{2\bp}|\nabla (\ue\circ\T)|^2(x)}{\mu_{N,\eps}^{\bp-\bm}}-K^2\right|=0 \end{equation} We fix $i,j\in \{2,...,n\}$. It follows from \eqref{asymp:ue} and $\bp-\bm=1$ that \begin{eqnarray} &&\ds \int \limits_{\partial\rnm\cap B_{\alpha_\eps}(0)\setminus B_{\beta_\eps}(0)}x^ix^j \partial_{ij} \T_{0}(0)|\nabla(\ue\circ\T)|^2\, dx\nonumber\\ &&= \ds \int \limits_{\partial\rnm\cap B_{\alpha_\eps}(0)\setminus B_{\beta_\eps}(0)}\mu_{N,\eps}\frac{x^ix^j \partial_{ij} \T_{0}(0)}{|x|^{2\bp}}K^2\, dx\nonumber\\ &&\ds +\int \limits_{\partial\rnm\cap B_{\alpha_\eps}(0)\setminus B_{\beta_\eps}(0)}\mu_{N,\eps}\frac{x^ix^j \partial_{ij} \T_{0}(0) }{|x|^{2\bp}}\left(\frac{|x|^{2\bp}|\nabla(\ue\circ\T)|^2}{\mu_{N,\eps}}-K^2\right)\, dx\nonumber\\ &&\ds =\int \limits_{\partial\rnm\cap B_{\alpha_\eps}(0)\setminus B_{\beta_\eps}(0)}\mu_{N,\eps}\frac{x^ix^j \partial_{ij} \T_{0}(0) }{|x|^{2\bp}}K^2\, dx\nonumber\\ &&+o\left(\int_{\partial\rnm\cap B_{\alpha_\eps}(0)\setminus B_{\beta_\eps}(0)}\mu_{N,\eps}\frac{|x|^2}{|x|^{2\bp}}\, dx\right)\label{est:234} \end{eqnarray} Independently, with a change of variable and $2\bp=n+1$, we get that \begin{align*} \ds \int \limits_{\partial\rnm\cap B_{\alpha_\eps}(0)\setminus B_{\beta_\eps}(0)}\frac{x^ix^j 
\partial_{ij} \T_{0}(0) }{|x|^{2\bp}}\, dx&= \partial_{ij} \T_{0}(0) \left(\int_{\beta_\eps}^{\alpha_\eps}\frac{dr}{r}\right)\left(\int_{\mathbb{S}^{n-2}}\sigma^i\sigma^j\, d\sigma\right) \\ &=\delta_{ij} \partial_{ij} \T_{0}(0)\frac{ \omega_{n-2}}{n-1}\ln\frac{\alpha_\eps}{\beta_\eps}, \end{align*} where $\omega_{n-2}$ is the volume of the round $(n-2)$-dimensional unit sphere. This equality, \eqref{est:234} and the properties \eqref{ppty:a:b} yield \begin{eqnarray}\label{est:26} &&\ds \int \limits_{\partial\rnm\cap B_{\alpha_\eps}(0)\setminus B_{\beta_\eps}(0)}x^ix^j \partial_{ij} \T_{0}(0)|\nabla(\ue\circ\T)|^2\, dx\nonumber\\ &&=\delta_{ij} \partial_{ij} \T_{0}(0) \frac{K^2\omega_{n-2}}{n-1}\mu_{N,\eps}\ln\frac{1}{\mu_{N,\eps}}+o\left(\mu_{N,\eps}\ln\frac{1}{\mu_{N,\eps}}\right). \end{eqnarray} Therefore, plugging \eqref{est:24}, \eqref{est:25} and \eqref{est:26} into \eqref{est:23} yields \begin{eqnarray*} && \int \limits_{ \T\left( \partial \rnm \cap B_{ \delta_0 }(0) \setminus B_{ k_{1,\eps}^{3} }(0) \right) } ( x, \nu) \frac{|\nabla \ue|^2}{2} \,d\sigma\\ &&= -\frac{K^2\omega_{n-2} \sum_{i=2}^{n} \partial_{ii} \T_{0}(0)}{4(n-1)}\mu_{N,\eps}\ln\frac{1}{\mu_{N,\eps}}+o\left(\mu_{N,\eps}\ln\frac{1}{\mu_{N,\eps}}\right)\\ &&=\frac{K^2\omega_{n-2} \sum_{i=2}^{n} II_{0,ii}}{4(n-1)}\mu_{N,\eps}\ln\frac{1}{\mu_{N,\eps}}+o\left(\mu_{N,\eps}\ln\frac{1}{\mu_{N,\eps}}\right)\\ &&= \frac{K^2\omega_{n-2} H(0)}{4(n-1)}\mu_{N, \eps}\ln\frac{1}{\mu_{N,\eps}}+o\left(\mu_{N,\eps}\ln\frac{1}{\mu_{N,\eps}}\right). \end{eqnarray*} Plugging this latest estimate into \eqref{est:5} yields \eqref{est:inter}. This ends the proof of Step \ref{step:p13}.\qed \section{\, Proof of multiplicity}\label{sec:proof:th} \noindent {\bf Proof of Theorem \ref{th:cpct:sc}:} We fix $\gamma<n^2/4$ and $h\in C^1(\overline{\Omega})$ such that $-\Delta-\gamma|x|^{-2}-h$ is coercive.
For each $2<p\leq \crits$, we consider the $C^2$-functional \begin{equation*} I_{p, \gamma}(u)=\frac{1}{2}\int_\Omega\left(\vert \nabla u\vert^2-\gamma\frac{u^2}{|x|^2}-hu^2\right)\, dx- \frac{1}{p}\int_{\Omega} \frac {|u|^p}{|x|^s}\, dx \end{equation*} on $\huno$, whose critical points are the weak solutions of \begin{equation} \label{main} \left\{ \begin{array}{lll} -\Delta u-\frac{\gamma}{|x|^2}u-h u&= \frac{|u|^{p-2}u}{|x|^s} &{\rm in} \ \Omega \\ u &= 0 &{\rm on} \ \partial\Omega. \end{array} \right. \end{equation} For a fixed $u\in \huno$, $u\not\equiv 0$, we have that $$I_{p, \gamma}(\lambda u)=\frac{\lambda^2}{2}\int_\Omega\vert \nabla u\vert^2\, dx-\frac{\gamma \lambda^2}{2}\int_{\Omega} \frac {|u|^2}{|x|^2}\, dx-\frac{\lambda^2}{2}\int_\Omega hu^2\, dx- \frac{\lambda^p}{p}\int_{\Omega} \frac {|u|^p}{|x|^s}\, dx.$$ Then, since coercivity holds, we have that $\lim_{\lambda \to \infty}I_{p, \gamma}(\lambda u)=-\infty$, which means that for each finite dimensional subspace $E_k \subset E:=\huno$, there exists $R_k>0$, which can be chosen uniformly for $p$ close to $\crits$, such that \begin{equation} \label{neg} \sup \{I_{p, \gamma}(u); u\in E_k, \Vert u\Vert_{H_1^2} >R_k\} <0. \end{equation} Let $(E_k)^\infty_{k=1}$ be an increasing sequence of subspaces of $\huno$ such that $\dim E_k=k $ and $\overline{\cup^\infty_{k=1} E_k}=E:=\huno$, and define the min-max values: $$c_{p,k} = {\ds \inf_{g \in {\bf H}_k} \sup_{x\in E_k} I_{p, \gamma}(g(x))},$$ where $$ {\bf H}_k=\{g \in C(E,E);\ g \hbox{ is odd and } g(v)=v \hbox{ for } \|v\| > R_k \hbox{ for some } R_k>0\}.$$ \begin{proposition} \label{critical.values} With the above notation and assuming $n\ge 3$, we have: \begin{enumerate} \item For each $k\in \nn$, $c_{p,k}>0$ and $\lim\limits_{p\to \crits} c_{p,k}=c_{\crits,k}:=c_k.$ \item If $2<p<{\crits}$, there exist, for each $k$, functions $u_{p,k}\in \huno$ such that $I'_{p, \gamma}(u_{p,k})=0$ and $I_{p, \gamma}(u_{p,k})=c_{p,k}$.
\item For each $2<p < {\crits}$, we have $ c_{p,k}\ge D_{n,p} k^{\frac{p+1}{p-1}\frac{2}{n}}$ where $D_{n,p}>0$ is such that $\lim \limits_{p\to {\crits}}D_{n,p}=0$. \item $\lim\limits_{k\to \infty} c_k=\lim\limits_{k\to \infty} c_{\crits,k}=+\infty$. \end{enumerate} \end{proposition} \noindent {\bf Proof:} (1) Coercivity yields the existence of $a_0>0$ such that \begin{equation}\label{coer} \int_\Omega\left(|\nabla u|^2-\frac{\gamma}{|x|^2}u^2-hu^2\right)\, dx\geq a_0\int_\Omega|\nabla u|^2\, dx\hbox{ for all }u\in \huno. \end{equation} With \eqref{coer}, the Hardy and the Hardy-Sobolev inequality \eqref{HS-ineq}, there exist $C>0$ and $\alpha>0$ such that \[ I_{p, \gamma}(u)\geq \frac{a_0}{2}\|\nabla u\|_2^2-C\|\nabla u\|_2^p =\|\nabla u\|_2^2\left(\frac{a_0}{2}-C\|\nabla u\|_2^{p-2}\right)\geq \alpha >0 \] for all $u\in\huno$ with $\Vert \nabla u\Vert_2= \rho$, for some $\rho>0$ small enough. Then the sphere $S_\rho=\{u\in E; \|\nabla u\|_{2}= \rho\}$ intersects every image $g(E_k)$ by an odd continuous function $g$. It follows that \[ c_{p,k}\geq \inf \{I_{p, \gamma}(u); u\in S_\rho \} \geq \alpha >0. \] In view of (\ref{neg}), it follows that for each $g\in {\bf H}_k$, we have that \[ \sup\limits_{x\in E_k}I_{p, \gamma}(g(x))=\sup\limits_{x\in D_k}I_{p, \gamma}(g(x)), \] where $D_k$ denotes the ball in $E_k$ of radius $R_k$. Consider now a sequence $p_i \to \crits$ and note first that for each $u\in E$, we have that $I_{p_i, \gamma}(u) \to I_{\crits, \gamma} (u)$. Since $g(D_k)$ is compact and the family of functionals $(I_{p, \gamma})_p$ is equicontinuous, it follows that $\sup\limits_{x\in E_k}I_{p_i, \gamma}(g(x))\to \sup\limits_{x\in E_k}I_{\crits, \gamma }(g(x))$, from which it follows that $\limsup\limits_{i\in \nn}c_{p_i,k}\leq \sup\limits_{x\in E_k}I_{\crits, \gamma} (g(x))$. Since this holds for any $g\in {\bf H}_k$, it follows that \[ \limsup\limits_{i\in \nn}c_{p_i,k}\leq c_{\crits, k}=c_k.
\] On the other hand, the function $f(r)=\frac{1}{p}r^p-\frac{1}{\crits}r^{\crits}$ attains its maximum on $[0, +\infty)$ at $r=1$ and therefore $f(r) \leq \frac{1}{p}-\frac{1}{\crits}$ for all $r>0$. It follows that \begin{eqnarray*} I_{\crits, \gamma}(u) &= &I_{p, \gamma}(u)+\int_\Omega \frac{1}{|x|^s} \left(\frac{1}{p}|u(x)|^p-\frac{1}{\crits}|u(x)|^{\crits }\right)\, dx\\ &\leq& I_{p, \gamma}(u)+\int_\Omega \frac{1}{|x|^s} \left(\frac{1}{p}-\frac{1}{\crits}\right)\, dx, \end{eqnarray*} from which it follows that $c_k\leq \liminf\limits_{i\in \nn}c_{p_i,k}$, and claim (1) is proved. \\ If now $p< {\crits}$, we are in the subcritical case, that is, we have compactness of the embedding $\huno \to L^p(\Omega; |x|^{-s}dx)$ and therefore $I_{p, \gamma}$ satisfies the Palais-Smale condition. It is then standard to find critical points $u_{p,k}$ for $I_{p, \gamma}$ at each level $c_{p,k}$ (see for example the book \cite{Gh}). Consider now the functional \begin{equation*} I_{p, 0}(u)=\frac{1}{2}\int_\Omega\vert \nabla u\vert^2\, dx- \frac{1}{p}\int_{\Omega} \frac {|u|^p}{|x|^s}\, dx \end{equation*} and its critical values $$c^0_{p,k} = {\ds \inf_{g \in {\bf H}_k} \sup_{x\in E_k} I_{p, 0}(g(x))}.$$ It has been shown in \cite{gr2} that (1), (2) and (3) of Proposition \ref{critical.values} hold, with $c^0_{p,k}$ and $c^0_{k}$ replacing $c_{p,k}$ and $c_{k}$ respectively. In particular, $\lim\limits_{k\to \infty} c^0_k=\lim\limits_{k\to \infty} c^0_{\crits,k}=+\infty$. \noindent On the other hand, with the coercivity \eqref{coer}, we have that \[ I_{p, \gamma}(u) \geq a_0^{\frac{p}{p-2}} I_{p,0}(v) \quad \hbox{for every $u\in \huno$,} \] where $v=a_0^{-\frac{1}{p-2}}u$. It then follows that $\lim\limits_{k\to \infty} c_k=\lim\limits_{k\to \infty} c_{\crits,k}=+\infty$.
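The inequality $I_{p, \gamma}(u) \geq a_0^{\frac{p}{p-2}} I_{p,0}(v)$ is a direct scaling computation, recorded here for convenience: with $v=a_0^{-\frac{1}{p-2}}u$, one has $\Vert\nabla u\Vert_2^2=a_0^{\frac{2}{p-2}}\Vert\nabla v\Vert_2^2$ and $|u|^p=a_0^{\frac{p}{p-2}}|v|^p$, so \eqref{coer} yields
$$I_{p,\gamma}(u)\geq \frac{a_0}{2}\Vert\nabla u\Vert_2^2-\frac{1}{p}\int_\Omega\frac{|u|^p}{|x|^s}\, dx=a_0^{\frac{p}{p-2}}\left(\frac{1}{2}\Vert\nabla v\Vert_2^2-\frac{1}{p}\int_\Omega\frac{|v|^p}{|x|^s}\, dx\right)=a_0^{\frac{p}{p-2}}I_{p,0}(v),$$
since $\frac{a_0}{2}\cdot a_0^{\frac{2}{p-2}}=\frac{1}{2}\, a_0^{\frac{p}{p-2}}$.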
\noindent To complete the proof of Theorem \ref{th:cpct:sc}, notice that since for each $k$, we have $$ \lim\limits_{p_i\to \crits}I_{p_i, \gamma}(u_{p_i,k})=\lim\limits_{p_i\to\crits}c_{p_i,k}=c_k,$$ it follows that the sequence $(u_{p_i,k})_i $ is uniformly bounded in $\huno$. Moreover, since $I_{p_i, \gamma}'(u_{p_i,k})=0$, it follows from the compactness result that, letting $p_i\to \crits$, we get a solution $u_k$ of (\ref{main}) such that $I_{\crits, \gamma}(u_k)=\lim\limits_{p\to \crits}I_{p, \gamma}(u_{p,k})=\lim\limits_{p\to \crits}c_{p,k}=c_k$. Since the latter sequence goes to infinity, it follows that (\ref{main}) has an infinite number of critical levels.\qed \section{Proof of the non-existence result}\label{sec:nonex} \noindent{\bf Proof of Theorem \ref{thm:non:ter}:} We argue by contradiction. We fix $\gamma<\gamma_H(\Omega)\leq\frac{n^2}{4}$ and $\Lambda>0$. We assume that there is a family $(\ue)_{\eps>0}\in \huno$ of solutions to \begin{equation}\label{eq:non}\left\{ \begin{array}{cl} -\Delta \ue-\gamma \frac{\ue}{|x|^2}-h_\eps \ue = \frac{\ue^{\crits-1}}{|x|^s} &\text{in } \Omega,\\ \ue>0 &\hbox{ in }\Omega\\ \ue=0 &\hbox{ on }\bono \end{array}\right.\end{equation} with $\Vert \nabla \ue\Vert_2\leq\Lambda$ and $\lim_{\eps\to 0}h_\eps=h_0$ in $C^1(\overline{\Omega})$. \noindent We claim that $(\ue)_{\eps>0}$ is not pre-compact in $\huno$. Otherwise, up to extraction, there would be $u_0\in \huno$, $u_0\geq 0$, such that $\ue\to u_0$ in $\huno$ as $\eps\to 0$. Passing to the limit in the equation, we get that \begin{equation}\label{eq:non:2}\left\{ \begin{array}{cl} -\Delta u_0-\gamma \frac{u_0}{|x|^2}-h_0 u_0= \frac{u_0^{\crits-1}}{|x|^s} &\text{in } \Omega,\\ u_0\geq 0 &\hbox{ in }\Omega\\ u_0=0 &\hbox{ on }\bono.
\end{array}\right.\end{equation} The coercivity of $-\Delta-\gamma |x|^{-2}-h_0$ and the convergence of $(h_\eps)_\eps$ yield \begin{eqnarray*} &&C\left(\int_{\Omega}\frac{\ue^{\crits}}{|x|^s}dx\right)^{{2}/{\crits}}\leq \int_{\Omega} |\nabla \ue|^2\,dx-\int_\Omega\left(\frac{\gamma}{|x|^2}+\he\right)\ue^2\, dx\leq\int_{\Omega}\frac{\ue^{\crits}}{|x|^s}dx, \end{eqnarray*} for small $\epsilon>0$, and then, since $\ue>0$, there exists $c_0>0$ such that $$\int_{\Omega}\frac{\ue^{\crits}}{|x|^s}dx\geq c_0$$ for all $\eps>0$. Passing to the limit yields $u_0\not\equiv 0$. Therefore, $u_0>0$ is a solution to \eqref{eq:non} with $\epsilon=0$, which is impossible by hypothesis. The family $(\ue)_\eps$ is therefore not pre-compact, and it blows up with bounded energy. Let $u_0\in \huno$ be its weak limit, which is necessarily a solution to \eqref{eq:non:2}, and hence must be the trivial solution $u_0\equiv 0$. Proposition \ref{prop:rate:sc:2} then yields that either \begin{equation} \hbox{$\bp-\bm\geq 1$ and therefore $H(0)=0$,} \end{equation} or \begin{equation}\label{res:blowup} \hbox{$\bp-\bm<1$ and therefore $m_{\gamma,h_0}(\Omega)=0$.} \end{equation} It now suffices to note that when $\gamma\leq (n^2-1)/4$, then $\bp-\bm\geq 1$ and the above contradicts our assumption that $H(0)\neq 0$. Similarly, if $\gamma>(n^2-1)/4$, then $\bp-\bm<1$ and the above contradicts our assumption that the mass is non-zero. In either case, no such family of positive solutions $(\ue)_{\eps>0}$ exists. \qed \noindent{\bf Proof of Corollary \ref{thm:non}:} First note that if $h_0$ satisfies \begin{equation}\label{hyp:h00} h_0(x)+\frac{1}{2}(\nabla h_0(x), x)\leq 0\hbox{ for all }x\in \Omega, \end{equation} then, by differentiating, for any $x\in \Omega$, the function $t\mapsto t^2 h_0(tx)$ (which is well defined for $t\in [0,1]$ since $\Omega$ is starshaped), we get that $h_0\leq 0$.
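To spell out the differentiation argument: for fixed $x\in\Omega$ and $t\in(0,1]$, \eqref{hyp:h00} applied at the point $tx\in\Omega$ gives
$$\frac{d}{dt}\left(t^2 h_0(tx)\right)=2t\, h_0(tx)+t^2(\nabla h_0(tx), x)=2t\left(h_0(tx)+\frac{1}{2}(\nabla h_0(tx), tx)\right)\leq 0,$$
so $t\mapsto t^2 h_0(tx)$ is nonincreasing on $[0,1]$; since it vanishes at $t=0$, its value at $t=1$ yields $h_0(x)\leq 0$.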
Therefore $-\Delta-\gamma|x|^{-2}-h_0$ is coercive.\\ Assume now that there is a positive variational solution $u_0$ corresponding to $h_0$. The Pohozaev identity \eqref{PohoId} then gives $$\int_{\partial\Omega}(x,\nu)\frac{(\partial_\nu u_0)^2}{2}\, d\sigma-\int_\Omega \left(h_0+\frac{1}{2}(\nabla h_0, x)\right)u_0^2\, dx=0.$$ Hopf's strong comparison principle yields $\partial_\nu u_0 <0$. Since $\Omega$ is star-shaped with respect to $0$, we get that $(x,\nu)\geq 0$ on $\partial\Omega$. Therefore, with \eqref{hyp:h00}, we get that $(x,\nu)=0$ for all $x\in \partial\Omega$, which is a contradiction since $\Omega$ is smooth and bounded. \\ If now $\gamma\leq (n^2-1)/4$, the result follows from Theorem \ref{thm:non:ter} since we have assumed that $H(0)\neq 0$. \\ If $\gamma > (n^2-1)/4$, we use Theorem 7.1 in Ghoussoub-Robert \cite{gr4} to find $\mathcal{K}\in C^2(\overline{\Omega}\setminus\{0\})$ and $A>0$ such that $$\left\{\begin{array}{ll} -\Delta\mathcal{K}-\frac{\gamma}{|x|^2}\mathcal{K}-h_0 \mathcal{K}=0 &\hbox{ in }\Omega\\ \mathcal{K}>0 &\hbox{ in }\Omega\\ \mathcal{K}=0 &\hbox{ on }\partial\Omega\setminus\{0\} \end{array}\right.$$ and such that $$\mathcal{K}(x)=A\left(\frac{\eta(x)d(x,\partial\Omega)}{|x|^{\bp}}+\beta(x)\right)\hbox{ for all }x\in\Omega,$$ where $\eta\in C^\infty_c(\rn)$ and $\beta\in \huno$ are as in Step \ref{step:p11}. We now apply the Pohozaev identity \eqref{PohoId} to $\mathcal{K}$ on the domain $U:=\Omega\setminus \T(B_\delta(0))$ for $\T$ as in \eqref{def:T:bdry}: using that $\mathcal{K}^2\in L^1(\Omega)$ and $(\cdot,\nu)(\partial_\nu \mathcal{K})^2\in L^1(\partial\Omega)$ when $\bp-\bm<1$, we get that $$\int_{\partial\Omega} (x,\nu)\frac{(\partial_\nu \mathcal{K})^2}{2}\, d\sigma-\int_\Omega \left(h_0+\frac{1}{2}(\nabla h_0, x)\right)\mathcal{K}^2\, dx=M_{\delta}$$ where $M_\delta$ is defined in \eqref{est:inf:1}.
With \eqref{est:Md}, we then get $$\int_{\partial\Omega} (x,\nu)\frac{(\partial_\nu \mathcal{K})^2}{2}\, d\sigma-\int_\Omega \left(h_0+\frac{1}{2}(\nabla h_0, x)\right)\mathcal{K}^2\, dx=- \frac{\omega_{n-1}}{n}\left(\frac{n^2}{4}-\gamma\right)A^2\cdot m_{\gamma,h_0}(\Omega).$$ Since $\Omega$ is star-shaped and $h_0$ satisfies (\ref{hyp:h00}), it follows that $m_{\gamma,h_0}(\Omega)<0$ and Theorem \ref{thm:non:ter} then applies to complete our corollary. \section{Appendix A: The Pohozaev identity}\label{sec:app:poho} \begin{proposition} Let $U\subset\rn$ be a smooth bounded domain and let $u\in C^2(\overline{U})$ be a solution of \begin{equation}\label{poh} - \Delta u -\gamma \frac{u}{|x|^2}-hu =K\frac{|u|^{\crits -2-p}}{|x|^{s}}u \quad \hbox{on $U$}. \end{equation} Then, we have \begin{eqnarray*} &&- \int \limits_{U} \left( h(x) +\frac{ \left( \nabla h, x \right)}{2} \right)u^{2}~ dx ~ - \frac{p}{\crits} \left(\frac{n-s}{\crits -p}\right) \int \limits_{U } K \frac{|u|^{\crits-p }}{|x|^{s}} ~ dx \\ &&= \int \limits_{\partial U} F(x)~d\sigma, \end{eqnarray*} where \begin{eqnarray*} F(x)&:=&( x, \nu) \left( \frac{|\nabla u|^2}{2} -\frac{\gamma}{2}\frac{u^{2}}{|x|^{2}} - \frac{h(x)}{2} u^{2} - \frac{K}{\crits -p} \frac{|u|^{\crits-p }}{|x|^{s}} \right) \\ && - \left(x^i\partial_i u+\frac{n-2}{2} u \right)\partial_\nu u. \end{eqnarray*} \end{proposition} \noindent{\it Proof:} For any $y_{0}\in \rn$, the classical Pohozaev identity yields \begin{eqnarray*} &&-\int \limits_{U}\left((x-y_{0})^i\partial_i u+\frac{n-2}{2}u\right)\Delta u~ dx\\ && = \int \limits_{\partial U}\left[(x-y_{0},\nu)\frac{|\nabla u|^2}{2}-\left((x-y_{0})^i\partial_i u+\frac{n-2}{2}u\right)\partial_\nu u\right]d\sigma, \end{eqnarray*} where $\nu$ is the outer normal to the boundary $\partial U$. 
\noindent One has for $1 \leq j \leq n$ \begin{align*} \partial_{j} \left( \frac{|u|^{\crits-p }}{|x|^{s}} \right) = -s \frac{x^{j}}{|x|^{s+2}} |u|^{\crits -p}+ (\crits -p) \frac{|u|^{\crits -2-p}}{|x|^{s}} u \partial_{j}u. \end{align*} Therefore, \begin{eqnarray*} && \left( x-y_{0}, \nabla u \right)\frac{|u|^{\crits -2-p}}{|x|^{s}}u = \frac{1}{\crits -p}(x-y_{0})^{j}\partial_{j} \left( \frac{|u|^{\crits-p }}{|x|^{s}} \right) \\ &&+ \frac{s}{\crits -p} \frac{|u|^{\crits -p}}{|x|^{s}} - \frac{s}{\crits -p} \frac{(x,y_{0})}{|x|^{s+2}}|u|^{\crits -p}. \end{eqnarray*} Then integration by parts yields \begin{align*} \int \limits_{U} \left( x-y_{0}, \nabla u \right)\frac{|u|^{\crits -2-p}}{|x|^{s}}u ~ dx= \frac{1}{\crits -p} \int \limits_{U} (x-y_{0})^{j}\partial_{j} \left( \frac{|u|^{\crits-p }}{|x|^{s}} \right) dx \\ + \frac{s}{\crits -p} \int \limits_{U}\frac{|u|^{\crits -p}}{|x|^{s}} dx - \frac{s}{\crits -p} \int \limits_{U}\frac{(x,y_{0})}{|x|^{s+2}}|u|^{\crits -p} dx \end{align*} \begin{align*} =& -\frac{n-s}{\crits -p} \int \limits_{U} \frac{|u|^{\crits-p }}{|x|^{s}} ~ dx - \frac{s}{\crits -p} \int \limits_{U}\frac{(x,y_{0})}{|x|^{s+2}}|u|^{\crits -p} dx \\ & + \frac{1}{\crits -p} \int \limits_{\partial U} ( x-y_{0}, \nu) \frac{|u|^{\crits-p }}{|x|^{s}}~ d\sigma.
\end{align*} \noindent Similarly, \begin{align*} \left( x-y_{0}, \nabla u \right)\frac{u}{|x|^{2}} = \frac{1}{2}(x-y_{0})^{j}\partial_{j} \left( \frac{u^{2}}{|x|^{2}} \right) +\frac{u^{2}}{|x|^{2}} - \frac{(x,y_{0})}{|x|^{4}}u^{2} \end{align*} \begin{align*} \int \limits_{U} \left( x-y_{0}, \nabla u \right)\frac{u}{|x|^{2}} ~ dx =& -\frac{n-2}{2} \int \limits_{U} \frac{u^{2}}{|x|^{2}} ~ dx - \int \limits_{U}\frac{(x,y_{0})}{|x|^{4}}u^{2} dx \notag \\ &+ \frac{1}{2}\int \limits_{\partial U} ( x-y_{0}, \nu) \frac{u^{2}}{|x|^{2}}~ d\sigma \end{align*} and \begin{align*} \int \limits_{U} \left( x-y_{0}, \nabla u \right) h(x) u ~ dx = & -\frac{n}{2} \int \limits_{U} h(x) u^{2}~ dx -\frac{1}{2}\int \limits_{U} \left( \nabla h, x-y_{0}\right) u^{2} ~ dx \\ &+ \frac{1}{2}\int \limits_{\partial U} ( x-y_{0}, \nu) h(x)u^{2}~ d\sigma \end{align*} \noindent Combining the above, we obtain for any $K$ and any $y_0\in \rn$, \begin{align}\label{PohoId} &\int \limits_{U}\left((x-y_{0})^i\partial_i u+\frac{n-2}{2}u\right)\left(- \Delta u -\gamma \frac{u}{|x|^2}-hu -K\frac{|u|^{\crits -2-p}}{|x|^{s}}u \right)~ dx \notag \\ &\quad - \int \limits_{U} h(x) u^{2}~ dx - \frac{1}{2}\int \limits_{U} \left( \nabla h, x-y_{0}\right) u^{2} ~ dx \notag\\ &\quad - \frac{p}{\crits} \left( \frac{n-s}{\crits -p}\right) \int \limits_{U} K\frac{|u|^{\crits-p }}{|x|^{s}} ~ dx \notag \\ &\quad - \gamma \int \limits_{U}\frac{(x,y_{0})}{|x|^{4}}u^{2} dx - \frac{s}{\crits -p} \int \limits_{U}\frac{(x,y_{0})}{|x|^{s+2}}K|u|^{\crits -p} dx \notag \\ & = \int \limits_{\partial U}\left[( x-y_{0}, \nu) \left( \frac{|\nabla u|^2}{2} -\frac{\gamma}{2}\frac{u^{2}}{|x|^{2}} - \frac{h(x)}{2} u^{2} - \frac{K}{\crits -p} \frac{|u|^{\crits-p }}{|x|^{s}} \right) \right]d\sigma \notag \\ &\quad - \int \limits_{\partial U} \left[ \left((x-y_{0})^i\partial_i u+\frac{n-2}{2}u\right)\partial_\nu u \right]d\sigma. \end{align} We conclude by taking $y_0=0$ and using that $u$ satisfies (\ref{poh}) on $U$. 
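\noindent For the reader's convenience, we record the elementary algebra behind the coefficient $\frac{p}{\crits}\left(\frac{n-s}{\crits -p}\right)$ appearing in the statement: since $\crits=\frac{2(n-s)}{n-2}$, we have $\frac{n-s}{\crits}=\frac{n-2}{2}$, and therefore $$\frac{n-s}{\crits -p}-\frac{n-2}{2}=\frac{n-s}{\crits -p}-\frac{n-s}{\crits}=(n-s)\,\frac{\crits-(\crits -p)}{\crits\,(\crits -p)}=\frac{p}{\crits}\left(\frac{n-s}{\crits -p}\right),$$ which is the interior coefficient produced when the contribution of the term $\frac{n-2}{2}\,u$ is combined with the integration by parts above.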
\section[Appendix B: A continuity property for the first eigenvalue]{Appendix B: A continuity property of the first eigenvalue of Schr\"odinger operators}\label{sec:app:lemma} \begin{lemma}\label{lem:vp} Let $\Omega\subset\rn$, $n\geq 3$, be a smooth bounded domain. Let $(V_k)_k:\Omega\to \rr$ and $V_\infty:\Omega\to\rr$ be measurable functions and let {$(x_k)_k\in \overline{\Omega}$} be a sequence of points. We assume that \begin{enumerate}[i)] \item $\lim \limits_{k\to +\infty}V_k(x)=V_\infty(x)$ for a.e. $x\in\Omega$, \item There exists $C>0$ such that $|V_k(x)|\leq C|x-x_k|^{-2}$ for all $k\in\nn$ and $x\in\Omega$. \item $\lim \limits_{k\to +\infty}x_k=0\in\partial\Omega.$ \item For some $\gamma_0<n^2/4$, there exists $\delta>0$ such that $|V_k(x)|\leq \gamma_0|x-x_k|^{-2}$ for all $k\in\nn$ and {$x\in B_\delta(0)\cap\Omega$}. \item The first eigenvalue $\lambda_1(-\Delta+V_k)$ is achieved for all $k\in\nn$. \end{enumerate} Then, \begin{equation} \lim_{k\to +\infty}\lambda_1(-\Delta+V_k)=\lambda_1(-\Delta+V_\infty). \end{equation} \end{lemma} \noindent{\it Proof:} We first claim that $(\lambda_1(-\Delta+V_k))_k$ is bounded. 
Indeed, fix $\varphi\in \huno\setminus\{0\}$ and use the Hardy inequality to write for all $k\in\nn$, $$\lambda_1(-\Delta+V_k)\leq \frac{\int_\Omega (|\nabla \varphi|^2+V_k\varphi^2)\, dx}{\int_\Omega \varphi^2\, dx}\leq \frac{\int_\Omega (|\nabla \varphi|^2+C|x-x_k|^{-2}\varphi^2)\, dx}{\int_\Omega \varphi^2\, dx}:=M<+\infty$$ For the lower bound, we have for any $\varphi\in \huno$, \begin{eqnarray} \int_\Omega (|\nabla \varphi|^2+V_k\varphi^2)\, dx&= & \int_\Omega |\nabla \varphi|^2\, dx+\int_{B_\delta(0)}V_k\varphi^2\, dx+\int_{\Omega\setminus B_\delta(0)}V_k\varphi^2\, dx\nonumber \\ &\geq & \int_\Omega |\nabla \varphi|^2\, dx-\gamma_0\int_{B_\delta(0)}|x-x_k|^{-2}\varphi^2\, dx\nonumber\\ &&-4C\delta^{-2}\int_{\Omega\setminus B_\delta(0)}\varphi^2\, dx\nonumber \\ &\geq & \left(1-4\gamma_0/n^2\right)\int_\Omega |\nabla \varphi|^2\, dx-4C\delta^{-2}\int_{\Omega}\varphi^2\, dx.\label{ineq:123} \end{eqnarray} Since $\gamma_0<n^2/4$, we then get that $\lambda_1(-\Delta+V_k)\geq -4C\delta^{-2}$ for large $k$, which proves the lower bound. \noindent Up to a subsequence, we can now assume that $(\lambda_1(-\Delta+V_k))_k$ converges as $k\to +\infty$. We now show that \begin{equation}\label{ineq:vp:100} \liminf_{k\to +\infty}\lambda_1(-\Delta+V_k)\geq \lambda_1(-\Delta+V_\infty). \end{equation} For $k\in\nn$, we let $\varphi_k\in \huno$ be a minimizer of $\lambda_1(-\Delta+V_k)$ such that $\int_\Omega\varphi_k^2\, dx=1$. In particular, \begin{equation}\label{eq:phi:vp} -\Delta\varphi_k+V_k\varphi_k=\lambda_1(-\Delta+V_k)\varphi_k\hbox{ weakly in }\huno. \end{equation} Inequality \eqref{ineq:123} above yields the boundedness of $(\varphi_k)_k$ in $\huno$. Up to a subsequence, we let $\varphi\in \huno$ such that, as $k\to +\infty$, $\varphi_k\rightharpoonup \varphi$ weakly in $\huno$, $\varphi_k\to \varphi$ strongly in $L^2(\Omega)$ (then $\int_\Omega\varphi^2\, dx=1$) and $\varphi_k(x)\to\varphi(x)$ for a.e. $x\in \Omega$. 
Letting $k\to +\infty$ in \eqref{eq:phi:vp}, the hypotheses on $(V_k)_k$ allow us to conclude that $$-\Delta\varphi+V_\infty\varphi=\lim_{k\to +\infty}\lambda_1(-\Delta+V_k)\varphi\hbox{ weakly in }\huno.$$ Since $\int_\Omega\varphi^2\, dx=1$ and we have extracted subsequences, we then get \eqref{ineq:vp:100}. \noindent Finally, we prove the reverse inequality. For $\eps>0$, let $\varphi\in \huno$ be such that $$ \frac{\int_\Omega (|\nabla \varphi|^2+V_\infty\varphi^2)\, dx}{\int_\Omega \varphi^2\, dx}\leq \lambda_1(-\Delta+V_\infty)+\eps.$$ We have $$\lambda_1(-\Delta+V_k)\leq \lambda_1(-\Delta+V_\infty)+\eps +\frac{\int_\Omega |V_k-V_\infty|\varphi^2\, dx}{\int_\Omega \varphi^2\, dx}.$$ The hypotheses of Lemma \ref{lem:vp} allow us to conclude that $\int_\Omega |V_k-V_\infty|\varphi^2\, dx\to 0$ as $k\to +\infty$. Therefore $\limsup_{k\to +\infty}\lambda_1(-\Delta+V_k)\leq \lambda_1(-\Delta+V_\infty)+\eps$ for all $\eps>0$. Letting $\eps\to 0$, we get the reverse inequality and the conclusion of Lemma \ref{lem:vp}.\qed \section[Appendix C: Regularity and the Hardy-Schr\"odinger operator]{Appendix C: Regularity and the Hardy-Schr\"odinger operator on $\R^{n}_{-}$}\label{sec:app:regul} In this section, we collect some important results from the paper \cite{gr4} that are used in the proof of the compactness theorems. First we state the following regularity result: \begin{theorem}[\cite{gr4}, see also \cite{ff} ]\label{th:hopf} Let $\Omega$ be a smooth bounded domain of $\mathbb{R}^{n}$ $(n \geq 3)$ such that $0 \in \partial \Omega$. We fix $\gamma<\frac{n^2}{4}$ and let $f: \Omega\times \rr\to \rr$ be a Carath\'eodory function such that $$|f(x, v)|\leq C|v| \left(1+\frac{|v|^{\crits-2}}{|x|^s}\right)\hbox{\rm for all }x\in \Omega\hbox{ \rm and }v\in \rr.$$ Let $u\in \huno$ be a weak solution of \begin{align}\label{regul:eq} -\Delta u-\frac{\gamma+O(|x|^\theta)}{|x|^2}u=f(x,u) \qquad \hbox{ in } \left(\huno \right)' \end{align} for some $\theta>0$.
Then there exists $K\in\rr$ such that \begin{align} \label{eq:hopf} \lim_{x\to 0}\frac{u(x)}{d(x,\bdry) |x|^{-\bm}}=K. \end{align} Moreover, if $u\geq 0$ and $u\not\equiv 0$, we have that $K>0$. \end{theorem} The following result characterizes the nonnegative solutions of the singular global equation: \begin{proposition}[\cite{gr4}]\label{prop:liouville} Let $\gamma<\frac{n^2}{4}$ and let $u\in C^2(\rn\setminus \{0\})$ be a nonnegative function such that \begin{align*} \left\{ \begin{array}{llll} -\Delta u -\frac{\gamma}{|x|^2} u &=&0 &\hbox{ in } \rnm \\ u&=&0 & \hbox{ on } \partial \rnm . \end{array}\right. \end{align*} Then there exist $C_-,C_+\geq 0$ such that $$u(x)=C_{-}\frac{|x_{1}|}{|x|^{\bp}}+ C_{+} \frac{|x_{1}|}{|x|^{\bm}} \qquad \hbox{ for all } x\in \rnm.$$ \end{proposition} \noindent Next, we recall the existence and behaviour of the singular solution to the homogeneous equation. \begin{theorem}[\cite{gr4}] \label{th:sing} Let $\Omega$ be a smooth bounded domain of $\mathbb{R}^{n}$ $(n \geq 3)$ such that $0 \in \partial \Omega$. Fix $\gamma<\frac{n^2}{4}$ and let $h\in C^{1}(\overline\Omega)$ be such that the operator $-\Delta-\gamma|x|^{-2}-h$ is coercive. Then there exists $\mathcal{H}\in C^2(\overline{\Omega}\setminus\{0\})$ such that $$\qquad\qquad \left\{\begin{array}{ll} -\Delta \mathcal{H}-\frac{\gamma}{|x|^2}\mathcal{H}-h(x) \mathcal{H}=0 &\hbox{ in } \Omega\\ \mathcal{H}>0&\hbox{ in } \Omega\\ \mathcal{H}=0&\hbox{ on }\partial\Omega \setminus\{0\}. \end{array}\right.$$ Such a solution is unique up to a positive multiplicative constant, and there exists $c>0$ such that $\mathcal{H}(x)\simeq_{x\to 0}c\frac{d(x,\bdry)}{|x|^{\beta_+(\gamma)}}.$ \end{theorem} \section[Appendix D: Green's function on a bounded domain]{Appendix D: Green's function for the Hardy-Schr\"odinger operator with boundary singularity on a bounded domain}\label{sec:app:c} \begin{defi} Let $\Omega$ be a smooth bounded domain of $\rn$, $n\geq 3$, such that $0\in\partial\Omega$.
We fix $\gamma<n^2/4$ and $h\in C^{0,\theta}(\overline{\Omega})$, $\theta\in (0,1)$, such that $-\Delta-(\gamma|x|^{-2}+h)$ is coercive. We say that $G:\Omega\times\Omega\setminus\{(x,x)/\, x\in \Omega\}\to\rr$ is a Green's function for $-\Delta-\gamma|x|^{-2}-h$ if \noindent{\bf $\bullet$} For any $p\in\Omega$, $G_p:=G(p,\cdot)\in L^1(\Omega)$. \noindent{\bf $\bullet$} For all $f\in C^\infty_c(\Omega)$ and all $p\in \Omega$, \begin{equation*} \varphi(p)=\int_{\Omega}G_p(x)f(x)\, dx, \end{equation*} where $\varphi\in \huno\cap C^0(\Omega)$ is the unique solution to \begin{equation*} -\Delta\varphi-\left(\frac{\gamma}{|x|^2}+h(x)\right)\varphi=f\hbox{ in }\Omega\; ; \; \varphi_{|\partial\Omega}=0. \end{equation*} \end{defi} \noindent This appendix is devoted to the proof of the following result. \begin{theorem}\label{th:green:gamma:domain} Let $\Omega$ be a smooth bounded domain of $\rn$ such that $0\in\partial\Omega$. We fix $\gamma<\frac{n^2}{4}$. We let $h\in C^{0,\theta}(\Omegabar)$ be such that $-\Delta-\gamma|x|^{-2}-h$ is coercive. Then there exists a unique Green's function $G$ for $-\Delta-\gamma|x|^{-2}-h$. Moreover: \noindent{\bf I. Properties of $G$.} The Green's function $G$ is such that \noindent{\bf (a)} $G_p\in C^{2,\theta}(\overline{\Omega}\setminus\{0,p\})$ and $G_p>0$ for all $p\in\Omega$. \noindent{\bf (b)} For all $p\in\Omega$ and all $\eta\in C^\infty_c(\rn\setminus\{p\})$, we have that $\eta G_p\in \huno$. \noindent{\bf (c)} For all $f\in L^{\frac{2n}{n+2}}(\Omega)\cap L^q(\Omega\setminus B_\delta(0))$ for all $\delta>0$ and some $q>n/2$, we have for any $p\in\Omega$ \begin{equation}\label{id:172} \varphi(p)=\int_{\Omega}G_p(x)f(x)\, dx,
\end{equation} where $\varphi\in \huno\cap C^0(\Omega)$ is the unique solution to \begin{equation}\label{eq:G:dist} -\Delta\varphi-\left(\frac{\gamma}{|x|^2}+h(x)\right)\varphi=f\hbox{ in }\Omega\; ; \; \varphi_{|\partial\Omega}=0. \end{equation} In particular, \begin{equation}\label{eq:G:c2} \left\{\begin{array}{ll} -\Delta G_p-\left(\frac{\gamma}{|x|^2}+h(x)\right)G_p=0 &\hbox{ in }\Omega\setminus\{p\},\\ G_p>0 &\hbox{ in }\Omega\setminus\{p\},\\ G_p=0 &\hbox{ on }\partial\Omega\setminus\{0\}. \end{array}\right. \end{equation} \noindent{\bf II. Asymptotics.}\label{th:green:gamma:asymp} $G$ satisfies the following properties:\\ \noindent{\bf (d)} For all $p\in\Omega\setminus\{0\}$, there exists $c_0(p)>0$ such that \begin{equation}\label{asymp:G} G_p(x)\sim_{x\to 0} c_0(p)\frac{d(x,\partial\Omega)}{|x|^{\bm}}\hbox{ and }G_p(x)\sim_{x\to p}\frac{1}{(n-2)\omega_{n-1}|x-p|^{n-2}} \end{equation} where $$\bm:=\frac{n}{2}-\sqrt{\frac{n^2}{4}-\gamma}\hbox{ and }\bp:=\frac{n}{2}+\sqrt{\frac{n^2}{4}-\gamma}.$$ \noindent{\bf (e)} There exists $c>0$ depending only on $\gamma$, the coercivity constant and an upper bound for $\Vert h\Vert_{C^{0,\theta}}$ such that \begin{equation}\label{est:G:up} c^{-1}H_p(x)< G_p(x)< c H_p(x)\hbox{ for }x\in\Omega-\{0,p\}, \end{equation} where \begin{equation}\label{def:Hp:1} H_p(x):=\left(\frac{\max\{|p|,|x|\}}{\min\{|p|,|x|\}}\right)^{\bm}|x-p|^{2-n}\min\left\{1,\frac{d(x,\partial\Omega)d(p,\partial\Omega)}{|x-p|^2}\right\}. \end{equation} Moreover, \begin{equation}\label{ineq:grad:G} |\nabla G_p(x)|\leq c \left(\frac{\max\{|p|,|x|\}}{\min\{|p|,|x|\}}\right)^{\bm}|x-p|^{1-n}\min\left\{1,\frac{d(p,\partial\Omega)}{|x-p|}\right\}\hbox{ for }x\in\Omega-\{0,p\}.
\end{equation} \noindent{\bf (f)} There exists $L_{\gamma,\Omega}>0$ such that, for any $(h_i)_i\in C^{0,\theta}(\Omega)$ with $\lim \limits_{i\to+\infty}h_i=h$ in $C^{0,\theta}$ and any sequences $(x_i)_i,(y_i)_i\in \Omega$ such that $$y_i=o(|x_i|)\hbox{ and }x_i=o(1)\hbox{ as }i\to +\infty,$$ we have, as $i\to +\infty$, \begin{equation}\label{est:G:4} G_{h_i}(x_i,y_i)=(L_{\gamma,\Omega}+o(1))\frac{d(x_i,\partial\Omega)}{|x_i|^{\bp}}\frac{d(y_i,\partial\Omega)}{|y_i|^{\bm}}. \end{equation} \end{theorem} \noindent{\bf Notations:} In order to simplify notations, we will often drop the dependence on the domain $\Omega$ and the dimension $n\geq 3$. If $F: A\times B\to\rr$ is a function, then for any $x\in A$, we define $F_x: B\to\rr$ by $F_x(y):=F(x,y)$ for all $y\in B$. Finally, we will write $\hbox{Diag}(A):=\{(x,x)/\, x\in A\}$ for any set $A$. \\ We split the proof into several parts. \subsection{Proof of existence and uniqueness of the Green function} We let $\eta_\eps(x):=\tilde{\eta}(\eps^{-1}|x|)$ for all $x\in\rn$ and $\eps>0$, where $\tilde{\eta}\in C^\infty(\rr)$ is nondecreasing and such that $\tilde{\eta}(t)=0$ for $t<1$ and $\tilde{\eta}(t)=1$ for $t>2$. It follows from Lemma \ref{lem:vp} (see Appendix B) and the coercivity of $-\Delta-\left(\gamma|x|^{-2}+h\right)$ that there exist $\eps_0>0$ and $c>0$ such that for all $\varphi\in \huno$ and $\eps\in (0,\eps_0)$, $$\int_\Omega\left(|\nabla \varphi|^2-\left(\frac{\gamma\eta_\eps}{|x|^2}+h(x)\right)\varphi^2\right)\, dx\geq c\int_\Omega\varphi^2\, dx.$$ As a consequence, there exists $c>0$ such that for all $\varphi\in \huno$ and $\eps\in (0,\eps_0)$, \begin{equation}\label{bnd:coer} \int_\Omega\left(|\nabla \varphi|^2-\left(\frac{\gamma\eta_\eps}{|x|^2}+h(x)\right)\varphi^2\right)\, dx\geq c\Vert\varphi\Vert_{H_{1}^2}^2. \end{equation} Let $G_\eps>0$ be the Green's function of $-\Delta-\left(\gamma\eta_\eps|x|^{-2}+h\right)$ on $\Omega$ with Dirichlet boundary condition.
The existence follows from the coercivity and the $C^{0,\theta}$ regularity of the potential for any $\eps>0$ (see Robert \cite{rob.green}). In particular, we have that \begin{equation}\label{eq:G:eps} \left\{\begin{array}{ll} -\Delta G_\eps(x,\cdot)-\left(\frac{\gamma\eta_\eps}{|\cdot|^2}+h\right)G_\eps(x,\cdot)=0&\hbox{ in }\Omega\setminus\{x\}\\ G_\eps(x,\cdot)=0&\hbox{ on }\partial\Omega \end{array}\right. \end{equation} \noindent{\bf Step \ref{sec:app:c}.1: Integral bounds for $G_\eps$.} We claim that for all $\delta>0$ and $1<q<\frac{n}{n-2}$ and $\delta'\in (0,\delta)$, there exist $C(\delta,q)>0$ and $C(\delta,\delta')>0$ such that \begin{equation}\label{int:bnd:G} \Vert G_\eps(x,\cdot)\Vert_{L^q(\Omega)}\leq C(\delta,q)\hbox{ and }\Vert G_\eps(x,\cdot)\Vert_{L^{\frac{2n}{n-2}}(\Omega\setminus B_{\delta'}(x))}\leq C(\delta,\delta') \end{equation} for all $x\in \Omega$, $|x|>\delta$. We prove the claim. We fix $f\in C^\infty_c(\Omega)$ and let $\varphi_\eps\in C^{2,\theta}(\overline{\Omega})$ be the solution to the boundary value problem \begin{equation} \label{eq:phi:eps}\left\{\begin{array}{ll} -\Delta \varphi_\eps-\left(\frac{\gamma\eta_\eps}{|x|^2}+h(x)\right)\varphi_\eps= f &\hbox{ in }\Omega\\ \quad \varphi_\eps=0&\hbox{ on }\partial\Omega \end{array}\right. \end{equation} Multiplying the equation by $\varphi_\eps$, integrating by parts on $\Omega$, using \eqref{bnd:coer} and H\"older's inequality, we get that $$\int_\Omega |\nabla\varphi_\eps|^2\, dx\leq C\Vert f\Vert_{\frac{2n}{n+2}}\Vert\varphi_\eps\Vert_{\frac{2n}{n-2}}$$ where $C>0$ is independent of $\epsilon$, $f$ and $\varphi_\epsilon$. The Sobolev inequality $\Vert\varphi\Vert_{\frac{2n}{n-2}}\leq C\Vert \nabla\varphi\Vert_2$ for $\varphi\in \huno$ then yields \begin{equation*} \Vert\varphi_\eps\Vert_{\frac{2n}{n-2}}\leq C\Vert f\Vert_{\frac{2n}{n+2}} \end{equation*} where $C>0$ is independent of $\epsilon$, $f$ and $\varphi_\epsilon$.
Fix $p>n/2$ and $\delta\in (0,\delta_0)$ and $\delta_1,\delta_2>0$ such that $\delta_1+\delta_2<\delta$, and $x\in\Omega$ such that $|x|>\delta$. It follows from standard elliptic theory that \begin{eqnarray*} |\varphi_\eps(x)|&\leq & \Vert \varphi_\eps\Vert_{C^0(B_{\delta_1}(x))}\\ &\leq & C\left(\Vert \varphi_\epsilon\Vert_{L^{\frac{2n}{n-2}}(B_{\delta_1+\delta_2}(x))}+\Vert f\Vert_{L^{p}(B_{\delta_1+\delta_2}(x))}\right)\\ &\leq & C\left(\Vert f\Vert_{L^{\frac{2n}{n+2}}(\Omega)}+\Vert f\Vert_{L^{p}(B_{\delta_1+\delta_2}(x))}\right) \end{eqnarray*} where $C>0$ depends on $p,\delta,\delta_1,\delta_2$, $\gamma$ and $\Vert h\Vert_\infty$. Therefore, Green's representation formula yields \begin{equation}\label{rep:G:f} \left|\int_\Omega G_\eps(x,\cdot)f\, dy\right|\leq C\left(\Vert f\Vert_{L^{\frac{2n}{n+2}}(\Omega)}+\Vert f\Vert_{L^{p}(B_{\delta_1+\delta_2}(x))}\right) \end{equation} for all $f\in C^\infty_c(\Omega)$. It follows from \eqref{rep:G:f} that $$\left|\int_\Omega G_\eps(x,\cdot)f\, dy\right|\leq C\cdot\Vert f\Vert_{L^{p}(\Omega)}$$ for all $f\in C^\infty_c(\Omega)$ where $p>n/2$. By duality, the linear form $f\mapsto \int_\Omega G_\eps(x,\cdot)f\, dy$ is then bounded on $L^{p}(\Omega)$, so that $G_\eps(x,\cdot)\in L^{p'}(\Omega)$ with $p'=p/(p-1)$; as $p$ ranges over $(n/2,+\infty)$, the conjugate exponent $p'$ ranges over $(1,n/(n-2))$. Hence, for any $q\in (1, n/(n-2))$ and any $\delta>0$, there exists $C(\delta,q)>0$ such that $\Vert G_\eps(x,\cdot)\Vert_{L^q(\Omega)}\leq C(\delta,q)$ for all $\eps<\eps_0$ and $x\in \Omega\setminus B_\delta(0)$. \noindent Let $\delta'\in (0,\delta)$ and $\delta_1,\delta_2>0$ such that $\delta_1+\delta_2<\delta'$. We get from \eqref{rep:G:f} that \begin{equation}\label{rep:G:f:2} \left|\int_\Omega G_\eps(x,\cdot)f\, dy\right|\leq C\Vert f\Vert_{L^{\frac{2n}{n+2}}(\Omega\setminus B_{\delta'}(x))} \end{equation} for all $f\in C^\infty_c(\Omega\setminus B_{\delta'}(x))$. Here again, a duality argument yields \eqref{int:bnd:G}, which proves the claim in Step \ref{sec:app:c}.1. \noindent Using the same method, we can improve the control at the cost of restricting the integrability exponent $q$: when $q\in (1,n/(n-1))$, the dual exponent satisfies $p>n$.
Then, $\Vert \varphi_\eps\Vert_{C^1(B_{\delta_1}(x)\cap\Omega)}$ is controlled by the $L^p$ and $L^{\frac{2n}{n+2}}$ norms. Moreover, $|\varphi_\eps(x)|\leq \Vert \varphi_\eps\Vert_{C^1(B_{\delta_1}(x)\cap\Omega)}\,d(x,\partial\Omega)$. The argument above then yields \begin{equation}\label{int:bnd:G:boundary} \Vert G_\eps(x,\cdot)\Vert_{L^q(\Omega)}\leq C(\delta,q)d(x,\partial\Omega) \hbox{ for }q\in \left(1,\frac{n}{n-1}\right). \end{equation} \noindent{\bf Step \ref{sec:app:c}.2: Convergence of $G_\epsilon$.} Fix $x\in \Omega\setminus\{0\}$. For $0<\eps<\eps'$, since $G_\eps(x,\cdot)$, $G_{\eps'}(x,\cdot)$ are $C^2$ outside $x$, \eqref{eq:G:eps} yields $$-\Delta(G_\eps(x,\cdot)-G_{\eps'}(x,\cdot))-\left(\frac{\gamma\eta_\eps}{|\cdot|^2}+h\right)(G_\eps(x,\cdot)-G_{\eps'}(x,\cdot))= \frac{\gamma(\eta_\eps-\eta_{\eps'})}{|\cdot|^2}G_{\eps'}(x,\cdot)$$ in the strong sense. The coercivity \eqref{bnd:coer} then yields $G_\eps(x,\cdot)\geq G_{\eps'}(x,\cdot)$ for $0<\eps<\eps'$ if $\gamma\geq 0$, and the reverse inequality if $\gamma<0$. It then follows from the integral bound \eqref{int:bnd:G} and elliptic regularity that there exists $G(x,\cdot)\in C^{2,\theta}(\overline{\Omega}\setminus\{0,x\})$ such that \begin{equation}\label{lim:G:eps} \lim_{\eps\to 0}G_\eps(x,\cdot)=G(x,\cdot)\geq 0\hbox{ in }C^{2,\theta}_{loc}(\overline{\Omega}-\{0,x\}). \end{equation} In particular, $G$ is symmetric (as the pointwise limit of the symmetric kernels $G_\eps$) and \begin{equation}\label{eq:G:x} -\Delta G(x,\cdot)-\left(\frac{\gamma}{|\cdot|^2}+h\right)G(x,\cdot)=0\hbox{ in }\Omega\setminus\{x\}\hbox{ and }G(x,\cdot)=0\hbox{ on }\partial\Omega.
\end{equation} Moreover, passing to the limit $\eps\to 0$ in \eqref{int:bnd:G}, \eqref{int:bnd:G:boundary} and using elliptic regularity, we get that for all $\delta>0$, $1<q<\frac{n}{n-2}$ and $\delta'\in (0,\delta)$, there exist $C(\delta,q)>0$ and $C(\delta,\delta')>0$ such that for all $x\in \Omega$, $|x|>\delta$, \begin{equation}\label{int:bnd:G:bis} \Vert G(x,\cdot)\Vert_{L^q(\Omega)}\leq C(\delta,q)\hbox{ and }\Vert G(x,\cdot)\Vert_{L^{\frac{2n}{n-2}}(\Omega\setminus B_{\delta'}(x))}\leq C(\delta,\delta') \end{equation} and \begin{equation}\label{int:bnd:G:boundary:2} \Vert G(x,\cdot)\Vert_{L^q(\Omega)}\leq C(\delta,q)d(x,\partial\Omega) \hbox{ for }q\in \left(1,\frac{n}{n-1}\right). \end{equation} In particular, for any $x\in\Omega\setminus\{0\}$, $G(x,\cdot)\in L^{k}(\Omega)$ for all $1<k<n/(n-2)$ and $G(x,\cdot)\in L^{2n/(n-2)}(\Omega\setminus B_\delta(x))$ for all $\delta>0$. Moreover, for any $f\in L^{\frac{2n}{n+2}}(\Omega)\cap L^q(\Omega\setminus B_\delta(0))$ for all $\delta>0$ with $q>n/2$, let $\varphi_\eps\in\huno$ be such that \eqref{eq:phi:eps} holds. It follows from elliptic theory that $\varphi_\eps\in C^{0,\tau}(\overline{\Omega}\setminus\{0\})$ for some $\tau\in (0,1)$ and that for all $\delta_1>0$, there exists $C(\delta_1)>0$ such that $\Vert\varphi_\eps\Vert_{C^{0,\tau}(\overline{\Omega}\setminus B_{\delta_1}(0))}\leq C(\delta_1)$. We fix $x\in \Omega\setminus \{0\}$. 
Passing to the limit $\eps\to 0$ in the Green identity $\varphi_\eps(x)=\int_\Omega G_\eps(x,\cdot)f\, dy$ yields \begin{equation}\label{id:regul} \varphi(x)=\int_\Omega G(x,\cdot)f\, dy\hbox{ for all }x\in\Omega\setminus\{0\} \end{equation} where $\varphi\in \huno\cap C^{0}(\overline{\Omega}\setminus\{0\})$ is the unique weak solution to $$\left\{\begin{array}{ll} -\Delta \varphi-\left(\frac{\gamma}{|x|^2}+h(x)\right)\varphi= f &\hbox{ in }\Omega\\ \varphi=0&\hbox{ on }\partial\Omega \end{array}\right.$$ Since $G(x,\cdot)\geq 0$, \eqref{eq:G:x} and the strong comparison principle yield $G(x,\cdot)>0$. These points prove that $G$ is a Green's function for the operator and that (c) holds. \noindent We now prove point (b). We fix $\eta\in C^\infty_c(\rn-\{x\})$ such that $\eta(y)=1$ when $y\in B_\delta(0)$ for some $\delta>0$. Then $\eta G_\eps(x,\cdot)\in C^{2,\theta}(\overline{\Omega})\cap \huno$. It follows from \eqref{eq:G:eps} and \eqref{lim:G:eps} that $$-\Delta (\eta G_\eps(x,\cdot))-\left(\frac{\gamma\eta_\eps}{|\cdot|^2}+h\right)(\eta G_\eps(x,\cdot))={\bf 1}_{B_\delta(0)^c}f_\eps\hbox{ in }\Omega$$ where $\Vert f_\eps\Vert_{C^0(\overline{\Omega})}\leq C$ for some $C>0$ and all $\eps>0$. Therefore, with the coercivity \eqref{bnd:coer} and the convergence \eqref{lim:G:eps}, we get that $$c\Vert \eta G_\eps(x,\cdot)\Vert_{H_1^2}^2\leq \int_{\Omega\setminus B_\delta(0)}f_\eps\eta G_\eps(x,\cdot)\, dy\leq C$$ for all $\eps>0$. Reflexivity yields, up to extraction, weak convergence of $(\eta G_\eps(x,\cdot))$ in $\huno$ and strong convergence in $L^2(\Omega)$ as $\eps\to 0$. The convergence in $C^2$ and uniqueness then yield $\eta G(x,\cdot)\in \huno$ and $\eta G_\eps(x,\cdot)\to \eta G(x,\cdot)$ in $\huno$ as $\eps\to 0$. The case of a general $ \eta$ is a direct consequence. This proves point (b). \noindent For the uniqueness, we suppose that $G'$ is another Green's function. We fix $x\in \Omega$ and we define $H_x:=G_x-G'_x$.
Then $H_x\in L^1(\Omega)$ and for any $f\in C^\infty_c(\Omega)$, we have that $\int_\Omega H_x f\, dy=0$. Approximating a compactly supported function by smooth functions with compact support, we get that this equality holds for all $f\in C^0_c(\Omega)$. Integration theory then yields $H_x\equiv 0$, and then $G'_x\equiv G_x$. This proves uniqueness. This finishes the proof of (a). \noindent This proves existence and uniqueness of the Green's function in Theorem \ref{th:green:gamma:domain}(I). \subsection{Proof of the upper bound}\label{sec:proof} The behavior \eqref{asymp:G} is a consequence of the classification of solutions to harmonic equations and Theorem 4.1 in Ghoussoub-Robert \cite{gr4}. \noindent In the proof, we will often use sub- and super-solutions to the linear problem. The following existence result is contained in Proposition 4.3 of \cite{gr4}: \begin{proposition}\label{prop:sub:super} Let $\Omega$ be a smooth domain and let $h\in C^{0}(\overline{\Omega})$ be a continuous function. We fix $\gamma<\frac{n^2}{4}$ and $\beta\in \{\bm,\bp\}$. Then, there exist $r>0$ and $\overline{u}_{\beta},\underline{u}_{\beta}\in C^\infty(\overline{\Omega}\setminus\{0\})$ such that \begin{equation}\label{ppty:ua} \left\{\begin{array}{cc} \overline{u}_{\beta},\underline{u}_{\beta}=0 &\hbox{ on }\partial\Omega \cap B_r(0)\\ -\Delta \overline{u}_{\beta}-\left(\frac{\gamma}{|x|^2}+h\right)\overline{u}_{\beta}>0&\hbox{ in }\Omega\cap B_r(0)\\ -\Delta \underline{u}_{\beta}-\left(\frac{\gamma}{|x|^2}+h\right)\underline{u}_{\beta}<0&\hbox{ in }\Omega\cap B_r(0). \end{array}\right. \end{equation} Moreover, for some $\tau>0$, we have that, as $x\to 0$, $x\in \Omega$, \begin{equation}\label{asymp:ua:plus} \overline{u}_{\beta}(x)=\underline{u}_{\beta}(x)(1+O(|x|^\tau))=\frac{d(x,\partial\Omega)}{|x|^{\beta}}(1+O(|x|^{\tau})).
\end{equation} \end{proposition} \noindent{\bf Step \ref{sec:app:c}.3: Upper bound for $G(x,y)$ when one variable is far from $0$.} \noindent{\it Step \ref{sec:app:c}.3.1:} It follows from \eqref{eq:G:x}, elliptic theory, \eqref{int:bnd:G:boundary:2} and \eqref{int:bnd:G:bis} that for any $\delta>0$, there exists $C(\delta)>0$ such that \begin{equation}\label{est:1:bis} 0<G(x,y)\leq C(\delta)d(y,\partial\Omega)d(x,\partial\Omega)\hbox{ for }x,y\in\Omega\hbox{ s.t. }|x|,|y|>\delta,\, |x-y|>\delta. \end{equation} \noindent{\it Step \ref{sec:app:c}.3.2:} We claim that for any $\delta>0$, there exists $C(\delta)>0$ such that \begin{equation}\label{est:2} |x-y|^{n-2}G(x,y)\leq C(\delta)\min\left\{1,\frac{d(x,\partial\Omega)d(y,\partial\Omega)}{|x-y|^2}\right\}\hbox{ for }x,y\in\Omega\hbox{ s.t. }|x|,|y|>\delta. \end{equation} Indeed, with no loss of generality, we can assume that $\delta\in (0,\delta_0)$. Let $\Omega_\delta$ be a smooth domain of $\rn$ be such that $\Omega\setminus B_{3\delta/4}(0)\subset \Omega_\delta\subset \Omega\setminus B_{\delta/2}(0)$. We fix $x\in\Omega$ such that $|x|>\delta$. Let $H_x$ be the Green's function for $-\Delta -\left(\frac{\gamma}{|x|^2}+h(x)\right)$ in $\Omega_\delta$ with Dirichlet boundary condition. Classical estimates (see \cite{rob.green}) yield the existence of $C(\delta)>0$ such that $$|x-y|^{n-2}H_x(y)\leq C(\delta)\min\left\{1,\frac{d(x,\partial\Omega)d(y,\partial\Omega)}{|x-y|^2}\right\}\hbox{ for all }x,y\in \Omega_\delta.$$ It is easy to check that $$\left\{\begin{array}{ll} -\Delta (G_x-H_x)-\left(\frac{\gamma}{|\cdot|^2}+h\right)(G_x-H_x)= 0 &\hbox{ weakly in }\Omega_\delta\\ G_x-H_x=0&\hbox{ on }\left(\partial\Omega_\delta\right)\setminus B_{3\delta/4}(0)\\ G_x-H_x=G_x&\hbox{ on }\left(\partial\Omega_\delta\right)\cap B_{3\delta/4}(0). \end{array}\right.$$ Regularity theory then yields that $G_x-H_x\in C^{2,\theta}(\overline{\Omega_\delta})$. 
It follows from \eqref{est:1:bis} that $G_x(y)\leq C_1(\delta)d(y,\partial\Omega)d(x,\partial\Omega)$ on $(\partial\Omega_\delta)\cap B_{3\delta/4}(0)$ for $|x|>\delta$. The comparison principle then yields $G_x(y)-H_x(y)\leq C_1(\delta)d(y,\partial\Omega)d(x,\partial\Omega)$ for $y\in \Omega_\delta$ and $|x|>\delta$. The above bound for $H_x$ and \eqref{est:1:bis} then yield \eqref{est:2}. \noindent{\it Step \ref{sec:app:c}.3.3:} We now claim that for any $0<\delta'<\delta$, there exists $C(\delta,\delta')>0$ such that \begin{equation}\label{est:3} |y|^{\bm}G(x,y)\leq C(\delta,\delta')d(y,\partial\Omega)d(x,\partial\Omega)\hbox{ for }x,y\in\Omega\hbox{ s.t. }|x|>\delta>\delta'>|y|. \end{equation} We let $\delta_1\in (0,\delta')$ be a constant that will be fixed later. We use \eqref{est:1:bis} to deduce that $G_x(y)\leq C(\delta,\delta_1)d(x,\partial\Omega)d(y,\partial\Omega)$ for all $x\in \Omega\setminus B_\delta(0)$ and $y\in \partial B_{\delta_1}(0)\cap\Omega$. Since $\delta_1<|x|$, we have that $$\left\{\begin{array}{ll} -\Delta G_x-\left(\frac{\gamma}{|x|^2}+h\right)G_x= 0 &\hbox{ in }\Omega\cap B_{\delta_1}(0)\\ 0\leq G_x\leq C(\delta,\delta_1)d(y,\partial\Omega)d(x,\partial\Omega)&\hbox{ on }\partial (\Omega\cap B_{\delta_1}(0))\setminus\{0\}. \end{array}\right.$$ We choose a supersolution $\overline{u}_{\bm}$ as in \eqref{ppty:ua} of Proposition \ref{prop:sub:super}. It follows from \eqref{asymp:ua:plus} and \eqref{est:1:bis} that for $\delta_1>0$, there exists $C(\delta,\delta_1)>0$ such that $G_x(z)\leq C(\delta,\delta_1)d(x,\partial\Omega)\overline{u}_{\bm}(z)$ for all $z\in \partial (\Omega\cap B_{\delta_1}(0))$. It then follows from the comparison principle that $G_x(y)\leq C(\delta,\delta_1)d(x,\partial\Omega)\overline{u}_{\bm}(y)$ for all $y\in (\Omega\cap B_{\delta_1}(0))\setminus\{0\}$. Combining this with \eqref{est:1:bis} and \eqref{ppty:ua}, we obtain \eqref{est:3}.
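To spell out the final combination (a routine verification, using only the comparison bound above and the expansion \eqref{asymp:ua:plus} of the supersolution $\overline{u}_{\bm}$): for $y\in (\Omega\cap B_{\delta_1}(0))\setminus\{0\}$,

```latex
G_x(y)\;\leq\; C(\delta,\delta_1)\, d(x,\partial\Omega)\,\overline{u}_{\bm}(y)
\;=\; C(\delta,\delta_1)\, d(x,\partial\Omega)\,
\frac{d(y,\partial\Omega)}{|y|^{\bm}}\,\bigl(1+O(|y|^{\tau})\bigr),
```

so that $|y|^{\bm}G_x(y)\leq C(\delta,\delta')\,d(x,\partial\Omega)\,d(y,\partial\Omega)$ for $0<|y|<\delta'$, which is \eqref{est:3}.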
\noindent Note that by symmetry, we also get that for any $0<\delta'<\delta$, there exists $C(\delta,\delta')>0$ such that \begin{equation}\label{est:4} |x|^{\bm}G(x,y)\leq C(\delta,\delta')d(x,\partial\Omega)d(y,\partial\Omega)\hbox{ for }x,y\in\Omega\hbox{ s.t. }|y|>\delta>\delta'>|x|. \end{equation} \noindent{\bf Step \ref{sec:app:c}.4: Upper bound for $G(x,y)$ when both variables approach $0$.}\\ We claim first that for all $c_1,c_2,c_3>0$, there exists $C(c_1,c_2,c_3)>0$ such that for $x,y\in\Omega$ such that $c_1|x|<|y|<c_2|x|$ and $|x-y|>c_3|x|$, we have \begin{equation}\label{est:5:bis} |x-y|^{n-2}G(x,y)\leq C(c_1,c_2,c_3)\frac{d(x,\partial\Omega)d(y,\partial\Omega)}{|x|^2}.\end{equation} When one of the variables stays far from $0$, \eqref{est:5:bis} is a consequence of \eqref{est:1:bis}. We now consider a chart $\T$ at $0$ as in \eqref{def:T:bdry}. In particular, there is $\delta_0>0$, $0\in V\subset \rn$ and $\T:B_{2\delta_0}(0)\to V$ a smooth diffeomorphism such that $\T(0)=0$ and \begin{equation}\label{chart:phi} \T(B_{2\delta_0}(0)\cap\rnm)=V\cap\Omega\hbox{ and }\T(B_{2\delta_0}(0)\cap\partial\rnm)=V\cap\partial\Omega. \end{equation} Moreover, $D_{0} \mathcal{T} = \mathbb{I}_{\R^{n}}$ and \begin{equation}\label{asymp:phi} |\T(X)|=(1+O(|X|))|X|\hbox{ for all }X\in B_{3\delta_0/2}(0). \end{equation} We fix $X\in \rnm$ such that $0<|X|<3\delta_0/2$. We define $$H(z):=G_{\T(X)}(\T(|X|z))\hbox{ for }z\in B_{\delta_0/|X|}(0)\setminus \left\{0,\frac{X}{|X|}\right\},$$ so that $$-\Delta_{g_X} H-\left(\frac{\gamma}{\left(\frac{|\T(|X|z)|}{|X|}\right)^2}+|X|^2h(\T(|X|z))\right)H=0\hbox{ in }B_{\delta_0/|X|}(0)\setminus\left \{0,\frac{X}{|X|}\right\},$$ where $g_X:=(\T^\star \eucl)_X$ is the pulled-back metric of the Euclidean metric $\eucl$ via the chart $\T$ at the point $X$.
Since $H>0$, it follows from the Harnack inequality on the boundary (see Proposition 6.3 in Ghoussoub-Robert \cite{gr4}) that for all $R>0$ large enough and $r>0$ small enough, there exist $\delta_1>0$ and $C>0$ independent of $|X|<3\delta_0/2$ such that \begin{equation*} \frac{H(z)}{|z_1|}\leq C \frac{H(z')}{|z'_1|}\hbox{ for all }z,z'\in (B_R(0)\cap \rnm)\setminus \left(B_r(0)\cup B_r\left(\frac{X}{|X|}\right)\right), \end{equation*} which, via the chart $\T$, yields \begin{equation}\label{harnack:g} \frac{G_x(y)}{d(y,\partial\Omega)}\leq C \frac{G_x(y')}{d(y',\partial\Omega)}\hbox{ for all }y,y'\in \Omega\cap B_{R|x|/2}(0)\setminus \left(B_{2r|x|}(0)\cup B_{2r|x|}(x)\right) \end{equation} for all $x\in \Omega$ such that $|x|<\delta_0$. We let $W$ be a smooth domain of $\rn$ such that for some $\lambda>0$ small enough, we have \begin{equation}\label{ppte:W.bis} B_\lambda(0)\cap\Omega\subset W\subset B_{2\lambda}(0)\cap\Omega\hbox{ and } B_\lambda(0)\cap\partial W=B_\lambda(0)\cap\partial \Omega. \end{equation} We choose a subsolution $\underline{u}_{\bp}$ as in \eqref{ppty:ua} of Proposition \ref{prop:sub:super}. It follows from \eqref{asymp:ua:plus} and \eqref{est:1:bis} that there exists $\delta_2>0$ such that for $|x|<\delta_2$ $$G_x(z)\geq C(R)|x|^{\bp}\left(\inf_{y\in \Omega\cap \partial B_{R|x|}(0)}\frac{G_x(y)}{d(y,\partial\Omega)}\right) \underline{u}_{\bp}(z)\hbox{ for all }z\in W \cap \partial B_{R|x|/3}(0).$$ Since $-\Delta G_x-(\gamma|\cdot|^{-2}+h)G_x=0$ outside $0$, it follows from coercivity and the comparison principle that $$G_x(z)\geq c|x|^{\bp}\left(\inf_{y\in \Omega\cap \partial B_{R|x|}(0)}\frac{G_x(y)}{d(y,\partial\Omega)}\right) \underline{u}_{\bp}(z)\hbox{ for all }z\in W\setminus B_{R|x|/3}(0).$$ We fix $z_0\in W\setminus\{0\}$.
Then for $\delta_3$ small enough, when $|x|<\delta_3$, it follows from \eqref{est:4} and the Harnack inequality \eqref{harnack:g} that there exists $C>0$ independent of $x$ such that $$G_x(y)\leq C |x|^{-\bp-\bm}d(x,\partial\Omega)d(y,\partial\Omega)\hbox{ for all }y\in B_{R|x|}(0)\setminus \left(B_{r|x|}(0)\cup B_{r|x|}(x)\right).$$ Taking $r>0$ small enough and $R>0$ large enough, we then get \eqref{est:5:bis} for $|x|<\delta_3$. The general case for arbitrary $x\in \Omega\setminus \{0\}$ then follows from \eqref{est:2}. This completes the proof of \eqref{est:5:bis}. \noindent{\it Step \ref{sec:app:c}.4.2:} We claim that for all $c_1,c_2>0$, there exists $C(c_1,c_2)>0$ such that \begin{equation}\label{est:6} |x-y|^{n-2}G(x,y)\leq C(c_1,c_2)\min\left\{1,\frac{d(x,\partial\Omega)d(y,\partial\Omega)}{|x-y|^2}\right\} \end{equation} for all $x,y\in\Omega$ s.t. $c_1|x|<|y|<c_2|x|$. To prove \eqref{est:6}, we distinguish three cases: \noindent{\it Case 1:} We assume that \begin{equation}\label{hyp:x:1} |x|\leq C_1 d(x,\partial\Omega)\hbox{ with }C_1>1. \end{equation} We define $$H(z):=|x|^{n-2}G_x(x+|x|z)\hbox{ for }z\in B_{1/C_1}(0)\setminus \{0\}.$$ Note that this definition makes sense since for such $z$, $x+|x|z\in \Omega$. We then have that $H\in C^2(\overline{B_{1/(2C_1)}(0)}\setminus\{0\})$ and $$-\Delta H-\left(\frac{\gamma}{\left|\frac{x}{|x|}+z\right|^2}+|x|^2h(x+|x|z)\right)H=\delta_0\hbox{ weakly in }B_{1/(2C_1)}(0).$$ We now argue as in the proof of \eqref{est:2}. From \eqref{est:5:bis}, we have that $|H(z)|\leq C$ for all $z\in \partial B_{1/(2C_1)}(0)$ where $C$ is independent of $x\in \Omega\setminus\{0\}$ satisfying \eqref{hyp:x:1}. Let $\Gamma_0$ be the Green's function of $-\Delta -\left(\frac{\gamma}{\left|\frac{x}{|x|}+z\right|^2}+|x|^2h(x+|x|z)\right)$ at $0$ on $B_{1/(2C_1)}(0)$ with Dirichlet boundary condition.
Therefore, $H-\Gamma_0\in C^2(\overline{B_{1/(2C_1)}(0)})$ and, via the comparison principle, it is bounded by its supremum on the boundary. Therefore $|z|^{n-2}H(z)\leq C$ for all $z\in B_{1/(2C_1)}(0)\setminus\{0\}$, where $C$ is independent of $x\in \Omega\setminus\{0\}$ satisfying \eqref{hyp:x:1}. Scaling back and using \eqref{est:5:bis}, we get $|x-y|^{n-2}G_x(y)\leq C$ for all $x,y\in \Omega\setminus\{0\}$ such that $c_1|x|<|y|<c_2|x|$ and \eqref{hyp:x:1} holds. This proves \eqref{est:6} if $d(x,\partial\Omega)d(y,\partial\Omega)\geq|x-y|^2$. If $d(x,\partial\Omega)d(y,\partial\Omega)<|x-y|^2$, we get that $d(x,\partial\Omega)<2|x-y|$, and then \eqref{hyp:x:1} yields $|x|\leq 2C_1|x-y|$, and \eqref{est:6} is a consequence of \eqref{est:5:bis}. \noindent This ends the proof of \eqref{est:6} in Case 1. \noindent{\it Case 2:} By symmetry, \eqref{est:6} also holds when $|y|\leq C_1 d(y,\partial\Omega)$. \noindent{\it Case 3:} We assume that $d(x,\partial\Omega)\leq C_1^{-1} |x|$ and $d(y,\partial\Omega)\leq C_1^{-1} |y|$. We consider a chart at $0$, that is $\delta_0>0$, $0\in V\subset \rn$ and $\T:B_{2\delta_0}(0)\to V$ a smooth diffeomorphism such that $\T(0)=0$ and that \eqref{chart:phi} and \eqref{asymp:phi} hold. We fix $x'\in \rr^{n-1}$ such that $0<|x'|<3\delta_0/2$. \noindent We fix $c_0>1$ and let $r>0$ be such that $r\leq c_0|x'|$. We define $$H_y(z):=r^{n-2}G_{\T((0,x')+r y)}(\T((0,x')+r z))\hbox{ for }y,z\in B_{\delta_0/(2r)}(0)\cap\rnm\setminus \{0\}.$$ We then have that $H_y\in C^2(\overline{B_{R_0}(0)}\cap\rnm\setminus\{0,y\})$ and $$-\Delta_{g_r} H_y-\left(\frac{\gamma}{\left(\frac{|\T((0,x')+rz)|}{r}\right)^2}+r^2h(\T((0,x')+r z))\right)H_y=\delta_y\hbox{ weakly in }B_{R_0}(0)\cap\rnm,$$ where $g_{r}:=(\T^\star \eucl)_{(0,x')+rz}$ is the pulled-back metric of the Euclidean metric $\eucl$ via the chart $\T$ at the point $(0,x')+rz$. We now argue as in the proof of \eqref{est:2}.
From \eqref{est:5}, we have that $|H_y(z)|\leq C$ for all $z\in \partial B_{R_0}(0)\cap\rnm$ where $C$ is independent of $y\in B_{R_0/2}(0)$ and $r\in (0,\delta_0/4)$. Let $\Gamma_y$ be the Green's function of $-\Delta_{g_r} -\left(\frac{\gamma}{\left(\frac{|\T((0,x')+rz)|}{r}\right)^2}+r^2h(\T((0,x')+r z))\right)$ at $y$ on $B_{c_0/2}(0)\cap\rnm$ with Dirichlet boundary condition. Therefore, $H_y-\Gamma_y\in C^2(\overline{B_{c_0/2}(0)\cap\rnm})$ and, via the comparison principle, it is bounded by its supremum on the boundary. It follows from \eqref{est:5} and elliptic estimates for $\Gamma_y$ (see for instance \cite{rob.green}) that $|H_y-\Gamma_y|(z)\leq C |y_1|\cdot|z_1|$ for $z\in \partial(B_{c_0/2}(0)\cap\rnm)$ and $y\in B_{c_0/4}(0)\cap\rnm$. Applying elliptic estimates, we then get that $|H_y-\Gamma_y|(z)\leq C |y_1|\cdot|z_1|$ for $z\in B_{c_0/2}(0)\cap\rnm$ and $y\in B_{c_0/4}(0)\cap\rnm$, and since $$\Gamma_y(z)\leq C|z-y|^{2-n}\min\left\{1,\frac{|y_1|\cdot|z_1|}{|y-z|^2}\right\}\hbox{ for all }y,z\in B_{c_0/2}(0)\cap\rnm$$ (see \cite{rob.green}), we get that $$|z-y|^{n-2}H_y(z)\leq C\min\left\{1,\frac{|y_1|\cdot|z_1|}{|y-z|^2}\right\}\hbox{ for all }y,z\in B_{c_0/2}(0)\cap\rnm$$ where $C$ is independent of $x'\in B_{\delta_0/2}(0)\setminus\{0\}$. This yields \begin{equation}\label{est:green:36} |rz-ry|^{n-2}G_{\T((0,x')+r y)}(\T((0,x')+r z))\leq C\min\left\{1,\frac{|y_1|\cdot|z_1|}{|y-z|^2}\right\} \end{equation} for $|x'|<\delta_0/3$, $r\leq c_0|x'|$ and $|y|,|z|\leq c_0/4$. \noindent We now prove \eqref{est:6} in the last case. We fix $x,y\in\Omega\setminus \{0\}$ such that $|x|<\delta_0/3$, and we assume that $d(x,\partial\Omega)\leq C_1^{-1} |x|$, $d(y,\partial\Omega)\leq C_1^{-1} |y|$ and $|x-y|\leq \epsilon_0|x|$. We let $(x_1,x'), (y_1,y')\in B_{\delta_0}(0)$ be such that $x=\T(x_1,x')$ and $y=\T(y_1,y')$. Taking the norm $|(x_1,x')|=|x_1|+|x'|$, we define $r:=\max\{d(x,\partial\Omega),|x-y|\}$.
Using that $|X|/2\leq |\T(X)|\leq 2|X|$ for $X\in B_{\delta_0}(0)$, up to taking $\epsilon_0>0$ small and $C_1,c_0>1$ large enough, we get that $$\left|\frac{x_1}{r}\right|\leq \frac{c_0}{4}\; ,\; \left|\left(\frac{y_1}{r},\frac{y'-x'}{r}\right)\right|\leq \frac{c_0}{4}\hbox{ and }r\leq c_0|x'|.$$ Therefore, \eqref{est:green:36} applies and we get \eqref{est:6} in Case 3. \noindent We are now in position to conclude. Inequality \eqref{est:6} is a consequence of Cases 1, 2, 3, \eqref{est:2} and \eqref{est:5}. This ends the proof of \eqref{est:6}. \noindent{\it Step \ref{sec:app:c}.4.3:} We now show that there exists $C>0$ such that \begin{equation}\label{est:7} |y|^{\bm}|x|^{\bp}G(x,y)\leq Cd(x,\partial\Omega) d(y,\partial\Omega)\hbox{ for }x,y\in\Omega\hbox{ such that }|y|<\frac{1}{2}|x|. \end{equation} The proof goes essentially as in the proof of \eqref{est:3}. For $|x|<\delta$ with $\delta>0$ small, we have that $G_x\in H^1(\Omega\cap B_{|x|/3}(0))\cap C^2(\Omegabar\cap B_{|x|/3}(0)\setminus\{0\})$ satisfies $$-\Delta G_x-\left(\frac{\gamma}{|\cdot|^2}+h\right)G_x=0\hbox{ in }\Omega\cap B_{|x|/3}(0).$$ It follows from \eqref{est:5} that $G_x(y)\leq C |x|^{-n} d(x,\partial\Omega) d(y,\partial\Omega)$ on $\Omega\cap \partial B_{|x|/3}(0)$. We choose a supersolution $\overline{u}_{\bm}$ as in \eqref{ppty:ua} of Proposition \ref{prop:sub:super}. It follows from \eqref{asymp:ua:plus} and \eqref{est:5} that there exists $C>0$ such that $$G_x(y)\leq C |x|^{-\bp} d(x,\partial\Omega) \overline{u}_{\bm}(y) \hbox{ for all }y\in \Omega\cap \partial B_{|x|/3}(0).$$ The comparison principle yields that this inequality holds on $\Omega\cap B_{|x|/3}(0)$; together with \eqref{asymp:ua:plus}, this proves \eqref{est:7}. \noindent{\it Step \ref{sec:app:c}.4.4:} By symmetry, we conclude that there exists $C>0$ such that \begin{equation}\label{est:8} |x|^{\bm}|y|^{\bp}G(x,y)\leq Cd(x,\partial\Omega) d(y,\partial\Omega)\hbox{ for }x,y\in\Omega\hbox{ s.t. }|x|<\frac{1}{2}|y|.
\end{equation} \noindent{\bf Step \ref{sec:app:c}.5:} Finally, it follows from \eqref{est:7}, \eqref{est:8} and \eqref{est:6} that there exists $c>0$ such that \begin{equation}\label{G:up} G(x,y)\leq c \left(\frac{\max\{|y|,|x|\}}{\min\{|y|,|x|\}}\right)^{\bm}|x-y|^{2-n}\min\left\{1,\frac{d(x,\partial\Omega)d(y,\partial\Omega)}{|x-y|^2}\right\} \end{equation} for all $x,y\in \Omega$, $x\neq y$. This proves the upper bound in \eqref{est:G:up} of Theorem \ref{th:green:gamma:asymp}. The lower bound and the control of the gradient will be proved in Section \ref{sec:lower}. \subsection{Behavior at infinitesimal scale}\label{sec:th:cv} We prove three convergence results that give a comprehensive description of the behavior of the Green's function. Throughout this subsection, we assume that $\Omega$ is a smooth bounded domain of $\rn$ such that $0\in\partial\Omega$. We fix $\gamma<\frac{n^2}{4}$ and let $h\in C^{0,\theta}(\Omegabar)$ be such that $-\Delta-\gamma|x|^{-2}-h$ is coercive. We consider $G$ to be the Green's function of $-\Delta-\gamma|x|^{-2}-h$ with Dirichlet boundary condition on $\partial\Omega$. \begin{lemma}\label{th:cv:1} Let $(x_i)_i\in\Omega$ and $(r_i)_i\in (0,+\infty)$ be such that $$\lim_{i\to +\infty}r_i=0\hbox{ and }\lim_{i\to +\infty}\frac{d(x_i,\partial\Omega)}{r_i}=+\infty.$$ Then, for all $X,Y\in \rn$ such that $X\neq Y$, we have that \begin{equation*} \lim_{i\to +\infty}r_i^{n-2}G(x_i+r_iX,x_i+r_iY)=\frac{1}{(n-2)\omega_{n-1}}|X-Y|^{2-n}. \end{equation*} Moreover, the convergence holds in $C^2_{loc}((\rn)^2\setminus\hbox{Diag}(\rn))$. \end{lemma} To deal with the case when the points approach the boundary, we consider a chart $\T$ as in \eqref{def:T:bdry}. In particular, $D_{0} \mathcal{T} = \mathbb{I}_{\R^{n}}$.
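The normalization $c_n=((n-2)\omega_{n-1})^{-1}$ appearing in Lemma \ref{th:cv:1} can be sanity-checked numerically (an illustration only, not part of the proof): the flux of $-\nabla\left(c_n|X-\cdot|^{2-n}\right)$ through any sphere centered at $X$ must equal $1$, the Dirac-mass normalization of the Green's function. Below, in dimension $n=3$ (so that $\omega_{2}=4\pi$), the normal derivative is approximated by central finite differences and averaged over random directions:

```python
import math, random

n = 3
c_n = 1.0 / ((n - 2) * 4.0 * math.pi)   # 1/((n-2) omega_{n-1}), omega_2 = 4*pi
X = (0.2, -0.1, 0.4)

def u(Y):
    # c_n |X - Y|^{2-n}, the candidate fundamental solution
    return c_n * math.dist(X, Y) ** (2 - n)

# Flux of -grad u through the sphere |Y - X| = r, by averaging the
# (finite-difference) radial derivative over random directions.
random.seed(0)
r, h, N = 0.5, 1e-4, 2000
total = 0.0
for _ in range(N):
    w = [random.gauss(0.0, 1.0) for _ in range(n)]
    s = math.sqrt(sum(c * c for c in w))
    w = [c / s for c in w]                          # uniform direction on S^2
    up = u(tuple(X[i] + (r + h) * w[i] for i in range(n)))
    um = u(tuple(X[i] + (r - h) * w[i] for i in range(n)))
    total += -(up - um) / (2.0 * h)                 # -d/dr of u along w
flux = (4.0 * math.pi * r ** 2) * total / N         # area x mean normal derivative
print(flux)   # approximately 1
```

The computation returns $1$ up to the $O(h^2)$ finite-difference error, consistently with the limit stated in Lemma \ref{th:cv:1}.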
\begin{lemma}\label{th:cv:2} Let $(x_i)_i\in\partial\Omega$, $(r_i)_i\in (0,+\infty)$ and $x_0\in\partial\Omega$ be such that $$\lim_{i\to +\infty}r_i=0,\; \lim_{i\to +\infty}x_i=x_0\in \partial\Omega\hbox{ and }\lim_{i\to +\infty}\frac{|x_i|}{r_i}=+\infty.$$ We let $\T$ be a chart at $x_0$ as in \eqref{def:T:bdry}. We define $x_i'\in\rr^{n-1}$ such that $x_i=\T(0,x_i')$. Then, for all $X,Y\in \rnm$ such that $X\neq Y$, we have that \begin{eqnarray*} &&\lim_{i\to +\infty}r_i^{n-2}G(\T\left((0,x'_i)+r_iX\right),\T\left((0,x'_i)+r_iY\right))\\ &&=\frac{1}{(n-2)\omega_{n-1}}\left(|X-Y|^{2-n}-|X-Y^*|^{2-n}\right) \end{eqnarray*} where $(Y_1,Y')^*=(-Y_1,Y')$ for $(Y_1,Y')\in \rr\times\rr^{n-1}$. Moreover, the convergence holds in $C^2_{loc}((\overline{\rnm})^2\setminus\hbox{Diag}(\overline{\rnm}))$. \end{lemma} \begin{lemma}\label{th:cv:3} Let $(r_i)_i\in (0,+\infty)$ be such that $\lim_{i\to +\infty}r_i=0$. We let $\T$ be a chart at $0$ as in \eqref{def:T:bdry}. Then, for all $X,Y\in \overline{\rnm}\setminus\{0\}$ such that $X\neq Y$, we have that \begin{equation*} \lim_{i\to +\infty}r_i^{n-2}G(\T\left(r_iX\right),\T\left(r_iY\right))={\mathcal G}(X,Y) \end{equation*} where ${\mathcal G}(X,Y)={\mathcal G}_X(Y)$ is the Green's function for $-\Delta-\gamma|x|^{-2}$ on $\rnm$ with Dirichlet boundary condition. Moreover, the convergence holds in $C^2_{loc}((\overline{\rnm}\setminus\{0\})^2\setminus\hbox{Diag}(\overline{\rnm}\setminus\{0\}))$. \end{lemma} \noindent{\it Proof of Lemma \ref{th:cv:1}:} We let $(r_i)_i\in (0,+\infty)$ and $(x_i)_i\in\Omega$ be as in the statement of the lemma. For any $X,Y\in\rn$, $X\neq Y$, we define $$G_i(X,Y):=r_i^{n-2}G(x_i+r_iX,x_i+r_iY)$$ for all $i\in\nn$. Since $r_i=o(d(x_i,\partial\Omega))$ as $i\to +\infty$, for any $R>0$, there exists $i_0\in\nn$ such that this definition makes sense for any $X,Y\in B_R(0)$.
Equation \eqref{eq:G:c2} yields \begin{equation}\label{est:eq:Gi:1} -\Delta G_i(X,\cdot)-\left(\frac{\gamma}{\left|\frac{x_i}{r_i}+\cdot\right|^2}+r_i^2 h(x_i+r_i\cdot)\right)G_i(X,\cdot)=0\hbox{ in }B_R(0)\setminus\{X\}. \end{equation} The pointwise control \eqref{G:up} reads \begin{equation}\label{est:Gi:1} 0< G_i(X,Y)\leq c \left(\frac{\max\{|x_i+r_i X|,|x_i+r_i Y|\}}{\min\{|x_i+r_i X|,|x_i+r_i Y|\}}\right)^{\bm}|X-Y|^{2-n} \end{equation} for all $X,Y\in B_R(0)$ such that $X\neq Y$. Since $0\in\partial\Omega$, we have that $d(x_i,\partial\Omega)\leq |x_i|$, and therefore $r_i=o(|x_i|)$ as $i\to +\infty$. Equation \eqref{est:eq:Gi:1} and inequality \eqref{est:Gi:1} yield \begin{equation*} -\Delta G_i(X,\cdot)+\theta_i(X,\cdot)G_i(X,\cdot)=0\hbox{ in }B_R(0)\setminus\{X\}, \end{equation*} where $\theta_i\to0$ in $C^0_{loc}((\rn)^2)$ and $0< G_i(X,Y)\leq c |X-Y|^{2-n}$ for all $X,Y\in B_R(0)$ such that $X\neq Y$. It then follows from standard elliptic theory that, up to a subsequence, there exists $G_\infty(X,\cdot)\in C^2(\rn\setminus\{X\})$ such that $G_i(X,\cdot)\to G_\infty(X,\cdot)\geq 0$ in $C^2_{loc}(\rn\setminus\{X\})$ and $$-\Delta G_\infty(X,\cdot)=0\hbox{ in }\rn\setminus\{X\}\hbox{ and }G_\infty(X,Y)\leq c |X-Y|^{2-n}\hbox{ for }X,Y\in \rn,\; X\neq Y.$$ It then follows from the classification of positive harmonic functions that there exists $\lambda>0$ such that $G_\infty(X,Y)=\lambda |X-Y|^{2-n}$ for all $X,Y\in\rn$, $X\neq Y$. \noindent We fix $\varphi\in C^\infty_c(\rn)$. We define $\varphi_i(x):=\varphi(r_i^{-1}(x-x_i))$ for $x\in \Omega$ (this makes sense for $i$ large enough).
It follows from \eqref{eq:G:dist} that $$\varphi_i(x_i+r_i X)=\int_\Omega G(x_i+r_i X, y)\left(-\Delta\varphi_i(y)-\left(\frac{\gamma}{|y|^2}+h(y)\right)\varphi_i(y)\right)\, dy.$$ Via a change of variable, and passing to the limit, we get that $$\varphi(X)=\int_{\rn} G_\infty(X,Y)\left(-\Delta\varphi(Y)\right)\, dY.$$ Since $G_\infty(X,Y)=\lambda |X-Y|^{2-n}$, we get that $\lambda=1/((n-2)\omega_{n-1}) $. Since the limit is unique, the convergence holds without extracting a subsequence. The convergence in $C^2_{loc}((\rn)^2\setminus\hbox{Diag}(\rn))$ follows from the symmetry of $G$ and elliptic theory.\qed \noindent{\it Proof of Lemma \ref{th:cv:2}:} The proof goes as in the proof of Lemma \ref{th:cv:1}, except that we have to take a chart due to the proximity of the boundary. We let $(r_i)_i\in (0,+\infty)$, $(x_i)_i\in\partial\Omega$ and $x_0\in \partial\Omega$ be as in the statement of the lemma. We let $\T$ be a chart at $x_0$ as in \eqref{def:T:bdry} (in particular $D_{0} \mathcal{T} = \mathbb{I}_{\R^{n}}$) and we set $x'_i\in\rr^{n-1}$ such that $x_i=\T(0, x_i')$. In particular, $\lim_{i\to +\infty}x_i'=0$. For any $X,Y\in\overline{\rnm}$, $X\neq Y$, we define $$G_i(X,Y):=r_i^{n-2}G(\T\left((0,x'_i)+r_iX\right),\T\left((0,x'_i)+r_iY\right))$$ for all $i\in\nn$. Here again, provided $X,Y$ remain in a given compact set, the definition of $G_i$ makes sense for large $i$. Equation \eqref{eq:G:c2} then rewrites as \begin{equation}\label{est:eq:Gi:2} -\Delta_{g_i} G_i(X,\cdot)-\hat{\theta}_iG_i(X,\cdot)=0\hbox{ in }B_R(0)\cap\rnm\setminus\{X\}\hbox{ ; }G_i(X,\cdot)\equiv 0\hbox{ on }\partial\rnm\cap B_R(0) \end{equation} where $$\hat{\theta}_i(Y):=\frac{\gamma}{\left|\frac{\T((0,x'_i)+r_i Y)}{r_i}\right|^2}+r_i^2 h(\T((0,x'_i)+r_i Y))$$ and $g_i=\T^\star\eucl((0,x'_i)+r_i \cdot)$ is the pull-back of the Euclidean metric. In particular, since $D_{0} \mathcal{T} = \mathbb{I}_{\R^{n}}$, we get that $g_i\to\eucl$ in $C^2_{loc}(\rn)$.
Since $r_i=o(|x_i|)$, we get that $r_i=o(|x'_i|)$ as $i\to +\infty$, and, using again that $D_{0} \mathcal{T} = \mathbb{I}_{\R^{n}}$, we get that $\hat{\theta}_i\to 0$ uniformly in $B_R(0)\cap\rnm$. The pointwise control \eqref{G:up} rewrites as $G_i(X,Y)\leq c |X-Y|^{2-n}$ for all $X,Y\in \rnm$, $X\neq Y$. With the same arguments as above, we get that for any $X\in \overline{\rnm}$, there exists $G_\infty(X,\cdot)\in C^2(\overline{\rnm}\setminus\{X\})$ such that $$\lim_{i\to +\infty}G_i(X,\cdot)=G_\infty(X,\cdot)\hbox{ in }C^2_{loc}(\overline{\rnm}\setminus\{X\})$$ $$\hbox{ with }\left\{\begin{array}{ll} -\Delta G_\infty(X,\cdot)=0&\hbox{ in }\rnm\setminus\{X\}\\ G_\infty(X,\cdot)\geq 0&\\ G_\infty(X,\cdot)\equiv 0&\hbox{ on }\partial\rnm\setminus\{X\} \end{array}\right.$$ and $$\varphi(X)=\int_{\rnm}G_\infty(X,\cdot)(-\Delta\varphi)\, dY\hbox{ for all }\varphi\in C^\infty_c(\rnm),$$ with $0\leq G_\infty(X,Y)\leq c |X-Y|^{2-n}$ for all $X,Y\in \rnm$, $X\neq Y$. Define $$\Gamma_{\rnm}(X,Y)=\frac{1}{(n-2)\omega_{n-1}}\left(|X-Y|^{2-n}-|X-Y^*|^{2-n}\right).$$ As one checks (see for instance \cite{rob.green}), $\Gamma_{\rnm}$ satisfies the same properties as $G_\infty$. We set $f:=G_\infty(X,\cdot)-\Gamma_{\rnm}(X,\cdot)$. As one checks, $f\in C^\infty(\overline{\rnm}\setminus\{X\})$, $-\Delta f=0$ in the distribution sense in $\rnm$, $|f|\leq C|X-\cdot|^{2-n}$ in $\rnm\setminus\{X\}$ and $f=0$ on $\partial\rnm$. Hypoellipticity yields $f\in C^\infty(\overline{\rnm})$. Multiplying $-\Delta f$ by $f$ and integrating by parts, we get that $f\equiv 0$, and then $G_\infty(X,\cdot)=\Gamma_{\rnm}(X,\cdot)$. As above, this proves the convergence without any extraction. The convergence in $C^2_{loc}((\overline{\rnm})^2\setminus\hbox{Diag}(\overline{\rnm}))$ follows from the symmetry of $G$ and elliptic theory.\qed \noindent{\it Proof of Lemma \ref{th:cv:3}:} Here again, the proof is similar to the two preceding proofs.
We let $(r_i)_i\in (0,+\infty)$ be such that $\lim_{i\to +\infty}r_i=0$. We let $\T$ be a chart at $0$ as in \eqref{def:T:bdry} (in particular $D_{0} \mathcal{T} = \mathbb{I}_{\R^{n}}$). For any $X,Y\in\overline{\rnm}\setminus\{0\}$, we define $$G_i(X,Y):=r_i^{n-2}G(\T\left(r_iX\right),\T\left(r_iY\right))$$ for all $i\in\nn$. Equation \eqref{eq:G:c2} rewrites as \begin{equation*} -\Delta_{g_i} G_i(X,\cdot)-\left(\frac{\gamma}{\left|\frac{\T(r_i \cdot)}{r_i}\right|^2}+r_i^2 h(\T(r_i \cdot))\right)G_i(X,\cdot)=0\hbox{ in }B_R(0)\cap\rnm\setminus\{0,X\} \end{equation*} with $G_i(X,\cdot)\equiv 0$ on $B_R(0)\cap\partial\rnm$, where $g_i=\T^\star\eucl(r_i \cdot)$ is the pull-back of the Euclidean metric. In particular, since $D_{0} \mathcal{T} = \mathbb{I}_{\R^{n}}$, we get that $g_i\to\eucl$ in $C^2_{loc}(\rn)$. The pointwise control \eqref{G:up} yields $$0\leq G_i(X,Y)\leq C \left(\frac{\max\{|X|,|Y|\}}{\min\{|X|,|Y|\}}\right)^{\bm}|X-Y|^{2-n}\hbox{ for }X,Y\in\rnm,\; X\neq Y. $$ It then follows from elliptic theory that $G_i(X,\cdot)\to G_\infty(X,\cdot)$ in $C^2_{loc}(\overline{\rnm}\setminus\{0,X\})$. In particular, $G_\infty(X,\cdot)$ vanishes on $\partial\rnm\setminus\{0\}$ and \begin{equation}\label{est:G:rn} 0\leq G_\infty(X,Y)\leq C \left(\frac{\max\{|X|,|Y|\}}{\min\{|X|,|Y|\}}\right)^{\bm}|X-Y|^{2-n}\hbox{ for }X,Y\in\rnm,\; X\neq Y. \end{equation} Moreover, passing to the limit in Green's representation formula, we get that $$\varphi(X)=\int_{\rnm}G_\infty(X,Y)\left(-\Delta \varphi-\frac{\gamma }{|Y|^2}\varphi\right)\, dY\hbox{ for all }\varphi\in C^\infty_c(\rnm).$$ Since $G(x,\cdot)$ is locally in $\huno$ (see (b) in Theorem \ref{th:green:gamma:domain}), we get that $(\eta G_i(X,\cdot))_i$ is uniformly bounded in $H_{1,0}^2(\rnm)$ for all $\eta\in C^\infty_c(\rn\setminus\{X\})$. Up to another extraction, we get weak convergence in $H_{1,0}^2(\rnm)$, and then $\eta G_\infty(X,\cdot)\in H_{1,0}^2(\rnm)$ for all $\eta\in C^\infty_c(\rn\setminus\{X\})$.
It then follows from Theorem \ref{th:green:gamma:rn} and \eqref{est:G:rn} that $G_\infty(X,\cdot)={\mathcal G}_{X}$ is the unique Green's function of $-\Delta-\gamma|x|^{-2}$ on $\rnm$ with Dirichlet boundary condition. Here again, the convergence in $C^2$ follows from elliptic theory.\qed \subsection{A lower bound for the Green's function}\label{sec:lower} We let $\Omega$, $\gamma$, $h$ be as in Theorem \ref{th:green:gamma:domain}. We let $G$ be the Green's function for $-\Delta-(\gamma|x|^{-2}+h)$ on $\Omega$ with Dirichlet boundary condition. We let $(x_i)_{i\in\nn}$ and $(y_i)_{i\in\nn}$ be such that $x_i,y_i\in\Omega$ and $x_i\neq y_i$ for all $i\in\nn$. We also assume that there exist $x_\infty,y_\infty\in \overline{\Omega}$ such that $$\lim_{i\to +\infty}x_i=x_\infty\hbox{ and }\lim_{i\to +\infty}y_i=y_\infty$$ and that there exist $c_1,c_2$ such that \begin{equation*} \lim_{i\to +\infty}\frac{G(x_i,y_i)}{H(x_i,y_i)}=c_1\in [0,+\infty] \hbox{ and }\lim_{i\to +\infty}\frac{|\nabla G_{x_i}(y_i)|}{\Gamma(x_i,y_i)} =c_2\in [0,+\infty] \end{equation*} where $H(x,y)$ is defined in \eqref{def:Hp:1} and $$\Gamma(x,y):=\left(\frac{\max\{|x|,|y|\}}{\min\{|x|,|y|\}}\right)^{\bm}|x-y|^{1-n}\min\left\{1,\frac{d(x,\partial\Omega)}{|x-y|}\right\}$$ for $x,y\in\Omega$, $x\neq y$. Note that $c_1<+\infty$ by \eqref{G:up}. We claim that \begin{equation}\label{lim:c} 0<c_1\hbox{ and }0\leq c_2<+\infty. \end{equation} The lower bound in \eqref{est:G:up} and the upper bound in \eqref{ineq:grad:G} both follow from \eqref{lim:c}. \noindent This section is devoted to proving \eqref{lim:c}. We distinguish several cases: \noindent{\bf Case 1: $x_\infty\neq y_\infty$, $x_\infty,y_\infty\in \Omega$.} As one checks, we then have that $$\lim_{i\to +\infty}G(x_i,y_i)=G(x_\infty,y_\infty)> 0.$$ Therefore, we get that $c_1\in (0,+\infty)$. Concerning the gradient, $\lim_{i\to +\infty}|\nabla G_{x_i}(y_i)|=|\nabla G_{x_\infty}(y_\infty)|\geq 0$ and this yields $c_2<+\infty$.
This proves \eqref{lim:c} in Case 1. \noindent{\bf Case 2: $x_\infty\in\Omega$ and $y_\infty\in \partial\Omega\setminus\{0\}$.} Since $x_\infty,y_\infty$ are distinct and far from $0$, we have that $G(x_i,y_i)=d(y_i,\partial\Omega)\left(-\partial_{\nu}G_{x_\infty}(y_\infty)+o(1)\right)$ as $i\to +\infty$, where $\partial_\nu G_{x_\infty}(y_\infty)$ is the normal derivative of $G_{x_\infty}>0$ at the boundary point $y_\infty$. Hopf's Lemma then yields $\partial_\nu G_{x_\infty}(y_\infty)<0$. As one checks, we have that $H(x_i,y_i)=(c+o(1))d(y_i,\partial\Omega)$ as $i\to +\infty$. This then yields $0<c_1<+\infty$. Concerning the gradient, we get that $\lim_{i\to +\infty}|\nabla G_{x_i}(y_i)|=|\nabla G_{x_\infty}(y_\infty)|\geq 0$ and $\lim_{i\to +\infty}\Gamma(x_i,y_i)\in (0,+\infty)$, which yields $c_2<+\infty$. This proves \eqref{lim:c} in Case 2. \noindent{\bf Case 3: $x_\infty\in\Omega$ and $y_\infty=0\in\partial\Omega$.} It follows from Case 2 above that there exists $c>0$ such that $G_{x_i}(y)\geq c d(y,\partial\Omega)|y|^{-\bm}$ for all $y\in \partial (\Omega\cap B_{r_0}(0))$. We take the subsolution $\underline{u}_{\bm}$ defined in Proposition \ref{prop:sub:super}. With \eqref{asymp:ua:plus}, there exists $c'>0$ such that $G_{x_i}(y)\geq c'\underline{u}_{\bm}(y)$ for all $y\in \partial (\Omega\cap B_{r_0}(0))$. Since $G_{x_i}$ is locally in $H_{1,0}^2$ around $0$, the comparison principle and \eqref{asymp:ua:plus} yield $G_{x_i}(y)\geq c'' d(y,\partial\Omega)|y|^{-\bm}$ for all $y\in\Omega\cap B_{r_0}(0)$. This yields $c_1>0$. \noindent We deal with the gradient. We let $\T$ be a chart at $0$ as in \eqref{def:T:bdry} and we define $$G_i(y):=r_i^{\bm-1}G_{x_i}(\T(r_i y))\hbox{ for }y\in \rnm\cap B_2(0)$$ with $r_i\to 0$. It follows from \eqref{G:up} that $G_i(y)\leq C |y_1|\cdot |y|^{-\bm}$ for all $y\in \rnm\cap B_2(0)$.
It follows from \eqref{eq:G:c2} that $-\Delta_{g_i}G_i-\left(\gamma|\cdot|^{-2}+o(1)\right)G_i=0$ in $\rnm\cap B_2(0)$ where $g_i:=\T^\star\eucl(r_i \cdot)$ and $o(1)\to 0$ in $L^\infty_{loc}(\rn)$. Elliptic regularity then yields $|\nabla G_i(y)|\leq C$ for $y\in \rnm\cap B_{3/2}(0)$. We now let $r_i:=|\tilde{y}_i|$ where $y_i:=\T(\tilde{y}_i)$, so that $r_i\to 0$. We then have that $|\nabla G_i(\tilde{y_i}/r_i)|\leq C$, which rewrites as $|\nabla G_{x_i}(y_i)|\leq C|y_i|^{-\bm}$. By estimating $\Gamma(x_i,y_i)$, we then get that $c_2<+\infty$. This proves \eqref{lim:c} in Case 3. \noindent{\bf Case 4: $x_\infty\neq y_\infty$, $x_\infty,y_\infty\in \partial\Omega\setminus\{0\}$.} Since $x_\infty,y_\infty$ are distinct and far from $0$, we have that $G(x_i,y_i)=d(y_i,\partial\Omega)d(x_i,\partial\Omega)\left(\partial_{\nu_x}\partial_{\nu_y}G_{x_\infty}(y_\infty)+o(1)\right)$ as $i\to +\infty$, where $\partial_{\nu_x}$ is the normal derivative along the first coordinate, and $\partial_{\nu_y}$ is the normal derivative along the second coordinate. Since $y\mapsto G_x(y)$ is positive for $x,y\in\Omega$, $x\neq y$, and solves \eqref{eq:G:c2}, Hopf's maximum principle yields $-\partial_{\nu_y}G(x,y_\infty)>0$ for $x\in \Omega$. Moreover, it follows from the symmetry of $G$ that $x\mapsto -\partial_{\nu_y}G(x,y_\infty)>0$ also solves \eqref{eq:G:c2}. Another application of Hopf's principle yields $\partial_{\nu_x}\partial_{\nu_y}G_{x_\infty}(y_\infty)>0$. Estimating $H(x_i,y_i)$ independently, we get that $0<c_1<+\infty$. \noindent We deal with the gradient. We have that $|\nabla_y G_{x_i}(y_i)|=|\nabla_y (G_{x_i}-G_{\tilde{x_i}})(y_i)|$ where $\tilde{x}_i\in\partial\Omega$ is the projection of $x_i$ on $\partial\Omega$. The $C^2$-control then yields $|\nabla_y G_{x_i}(y_i)|\leq Cd(x_i,\partial\Omega)$. Estimating $\Gamma(x_i,y_i)$ independently, we get that $c_2<+\infty$. This proves \eqref{lim:c} in Case 4.
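The sign $\partial_{\nu_x}\partial_{\nu_y}G_{x_\infty}(y_\infty)>0$ obtained here from the double application of Hopf's principle can be illustrated numerically on the explicit half-space kernel $c_n\left(|X-Y|^{2-n}-|X-Y^*|^{2-n}\right)$ of Lemma \ref{th:cv:2} (an illustration only, in dimension $n=3$ and with the constant $c_n$ omitted): since the kernel vanishes whenever one of the two points lies on $\partial\rnm$, its value at two points at depth $h$ behaves like $h^2$ times the mixed inward derivative, whose positivity can then be read off directly.

```python
import math

def kernel(x, y):
    # Half-space kernel |x-y|^{-1} - |x-y*|^{-1} on {z_1 < 0}, with y* = (-y_1, y')
    ystar = (-y[0], y[1], y[2])
    return 1.0 / math.dist(x, y) - 1.0 / math.dist(x, ystar)

# The kernel vanishes when x_1 = 0 or y_1 = 0, so for the boundary points
# x0 = (0,0,0) and y0 = (0,1,0) the mixed inward derivative is approximated by
# kernel((-h,0,0), (-h,1,0)) / h^2.
h = 1e-3
mixed = kernel((-h, 0.0, 0.0), (-h, 1.0, 0.0)) / h ** 2
print(mixed)   # approximately 2, in particular > 0
```

The positive value matches the sign $\partial_{Y_1}\partial_{X_1}\Psi(X_\infty,Y_\infty)>0$ used again in Case 7.3 below for the same kernel.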
\noindent{\bf Case 5: $x_\infty\neq y_\infty$, $x_\infty\in \partial\Omega\setminus\{0\}$ and $y_\infty=0$.} It follows from Cases 2 and 4 that $G_{x_i}(y)\geq C d(x_i,\partial\Omega)d(y,\partial\Omega)$ for all $y\in \partial(B_{|x_\infty|/2}(0)\cap\Omega)$. Using a sub-solution as in Case 3, we get that $G_{x_i}(y)\geq c d(x_i,\partial\Omega)d(y,\partial\Omega)|y|^{-\bm}$ for all $y\in B_{|x_\infty|/2}(0)\cap\Omega$. This yields $0<c_1$. \noindent For the gradient estimate, we choose a chart $\T$ around $y_\infty=0$ as in \eqref{def:T:bdry}, we let $r_i:=|\tilde{y}_i|\to 0$, where $y_i=\T(\tilde{y}_i)$, and we define $G_i(y):=r_i^{\bm-1}G_{x_i}(\T(r_i y))/d(x_i,\partial\Omega)$ for $y\in \rnm\cap B_2(0)$. The pointwise control \eqref{G:up} and equation \eqref{eq:G:c2} yield the convergence of $(G_i)$ in $C^1_{loc}(\overline{\rnm}\cap B_2(0)\setminus\{0\})$ as $i\to +\infty$. The boundedness of $|\nabla G_i|$ yields $c_2<+\infty$. This proves \eqref{lim:c} in Case 5. \noindent Since $G$ is symmetric, it follows from Cases 1 to 5 that \eqref{lim:c} holds when $x_\infty\neq y_\infty$. \noindent We now deal with the case $x_\infty=y_\infty$, which rewrites as $\lim_{i\to +\infty}|x_i-y_i|=0$. Via a rescaling, we are essentially back to the case $x_\infty\neq y_\infty$, thanks to the convergence Lemmas \ref{th:cv:1}, \ref{th:cv:2} and \ref{th:cv:3}. \noindent{\bf Case 6: $|x_i-y_i|=o(d(x_i,\partial\Omega))$ as $i\to +\infty$.} We set $r_i:=|x_i-y_i|\to 0$ as $i\to +\infty$ and we define $$G_i(Y):=r_i^{n-2}G(x_i, x_i+r_iY)\hbox{ for }Y\in \frac{\Omega-x_i}{r_i}\setminus\{0\}.$$ It follows from Lemma \ref{th:cv:1} that $G_i\to c_n|\cdot|^{2-n}$ in $C^2_{loc}(\rn\setminus\{0\})$ as $i\to +\infty$, with $c_n:=((n-2)\omega_{n-1})^{-1}$. We define $Y_i:=\frac{y_i-x_i}{|y_i-x_i|}$, and we then get that $|y_i-x_i|^{n-2}G(x_i,y_i)=G_i(Y_i)\to c_n$ as $i\to +\infty$.
Estimating $H(x_i,y_i)$ (and noting that $d(x_i,\partial\Omega)\leq |x_i-0|=|x_i|$), we get that $0<c_1<+\infty$. \noindent The convergence of the gradient yields $|\nabla G_i(Y_i)|\leq C$ for all $i$. With the original function $G$ and points $x_i$, $y_i$, this yields $c_2<+\infty$. This proves \eqref{lim:c} in Case 6. \noindent{\bf Case 7: $d(x_i,\partial\Omega)=O(|x_i-y_i|)$ and $|x_i-y_i|=o(|x_i|)$ as $i\to +\infty$.} Then $\lim_{i\to +\infty}x_i=x_\infty\in \partial\Omega$. We let $\T$ be a chart at $x_\infty$ as in \eqref{def:T:bdry}, in particular $D_{0} \mathcal{T} = \mathbb{I}_{\R^{n}}$. We let $x_i=\T(x_{i,1}, x_i')$ and $y_i=\T(y_{i,1}, y_i')$ where $(x_{i,1}, x_i'),(y_{i,1}, y_i')\in (-\infty, 0)\times \rr^{n-1}$ tend to $0$ as $i\to +\infty$. In particular $d(x_i,\partial\Omega)=(1+o(1))|x_{i,1}|$ and $d(y_i,\partial\Omega)=(1+o(1))|y_{i,1}|$ as $i\to +\infty$. We define $r_i:=|(y_{i,1}, y_i')-(x_{i,1}, x_i')|$. In particular $r_i=(1+o(1))|x_i-y_i|$ as $i\to +\infty$. The hypotheses of Case 7 rewrite as $x_{i,1}=O(r_i)$ and $r_i=o(|(x_{i,1}, x_i')|)$. Consequently, we have that $y_{i,1}=O(r_i)$ and $r_i=o(|x_{i}'|)$ as $i\to +\infty$. We define $$G_i(X,Y):=r_i^{n-2}G(\T\left((0,x'_i)+r_iX\right),\T\left((0,x'_i)+r_iY\right))$$ for $X,Y\in \rnm$ such that $X\neq Y$. It follows from Lemma \ref{th:cv:2} that $$\lim_{i\to +\infty}G_i(X,Y)=c_n\left(|X-Y|^{2-n}-|X-Y^*|^{2-n}\right)=:\Psi(X,Y)$$ for all $X,Y\in \overline{\rnm}$, $X\neq Y$, and this convergence holds in $C^2_{loc}$. We define $X_i:=(r_i^{-1}x_{i,1},0)$ and $Y_i:=(r_i^{-1}y_{i,1}, r_i^{-1}(y_i'-x_i'))$: the definition of $r_i$ yields $X_i\to X_\infty\in \overline{\rnm}$ and $Y_i\to Y_\infty\in \overline{\rnm}$ as $i\to +\infty$. Therefore, we get that $$|x_i-y_i|^{n-2}G(x_i,y_i)=(1+o(1))G_i(X_i,Y_i)\to \Psi(X_\infty,Y_\infty)$$ as $i\to +\infty$, and \begin{equation}\label{lim:bdy} |X_{\infty,1}|=\lim_{i\to +\infty}\frac{|x_{i,1}|}{r_i}=\lim_{i\to +\infty}\frac{d(x_i,\partial\Omega)}{r_i}.
\end{equation} \noindent{\it Case 7.1: $X_{\infty,1}\neq 0$ and $Y_{\infty,1}\neq 0$.} We then get that $\lim_{i\to +\infty}|x_i-y_i|^{n-2}G(x_i,y_i)=\Psi(X_\infty,Y_\infty)>0$. Moreover, it follows from \eqref{lim:bdy} that $d(x_i,\partial\Omega)d(y_i,\partial\Omega)=(c+o(1))|x_i-y_i|^2$ as $i\to +\infty$ for some $c>0$. Since $|x_i|=(1+o(1))|y_i|$ as $i\to +\infty$ (this follows from the assumption of Case 7), we get that $\lim_{i\to +\infty}|x_i-y_i|^{n-2}H(x_i,y_i)\in (0,+\infty)$. Then $0<c_1<+\infty$. \noindent{\it Case 7.2: $X_{\infty,1}\neq 0$ and $Y_{\infty,1}=0$.} Then $Y_{i,1}\to 0$ as $i\to +\infty$, and then, there exists a sequence $(\tau_i)_i$ in $(0,1)$ such that $G_i(X_i,Y_i)=Y_{i,1}\partial_{Y_1}G_i(X_i, (\tau_i Y_{i,1}, Y'_i))$. Letting $i\to +\infty$ and using the convergence of $G_i$ in $C^1$, we get that \begin{eqnarray*} |x_i-y_i|^{n-2}G(x_i,y_i)&=&(1+o(1))G_i(X_i,Y_i)=(1+o(1))Y_{i,1}\partial_{Y_1}G_i(X_i, (\tau_i Y_{i,1}, Y'_i))\\ &=&\frac{d(y_i,\partial\Omega)}{|x_i-y_i|}\left(-\partial_{Y_1}\Psi(X_\infty,Y_\infty)+o(1)\right) \end{eqnarray*} as $i\to +\infty$. As one checks, $\partial_{Y_1}\Psi(X_\infty,Y_\infty)<0$. Arguing as in Case 7.1, we get that $0<c_1<+\infty$. \noindent{\it Case 7.3: $X_{\infty,1}=Y_{\infty,1}=0$.} As in Case 7.2, there exist sequences $(\tau_i)_i,(\sigma_i)_i$ in $(0,1)$ such that $G_i(X_i,Y_i)=Y_{i,1}X_{i,1}\partial_{Y_1}\partial_{X_1}G_i((\sigma_i X_{i,1}, X'_i), (\tau_i Y_{i,1}, Y'_i))$. We conclude as above, noting that $\partial_{Y_1}\partial_{X_1}\Psi(X_\infty,Y_\infty)>0$. Then $0<c_1<+\infty$. \noindent The gradient estimate is proved as in Cases 1 to 6. This proves \eqref{lim:c} in Case 7. \noindent{\bf Case 8: $d(x_i,\partial\Omega)=O(|x_i-y_i|)$, $|x_i|=O(|x_i-y_i|)$ and $|y_i|=O(|x_i-y_i|)$ as $i\to +\infty$.} In particular, $x_\infty=y_\infty=0$. We take a chart at $0$ as in Case 7, and we define $(x_{i,1},x_i'),(y_{i,1},y_i')$ similarly. We define $r_i:=|(y_{i,1}, y_i')-(x_{i,1}, x_i')|=(1+o(1))|x_i-y_i|$ as $i\to +\infty$.
We define $$G_i(X,Y):=r_i^{n-2}G(\T\left(r_iX\right),\T\left(r_iY\right))$$ for $X,Y\in \rnm$ such that $X\neq Y$. It follows from Theorem \ref{th:cv:3} that $G_i\to {\mathcal G}$ in $C^2_{loc}((\overline{\rnm}\setminus\{0\})^2\setminus\hbox{Diag}(\overline{\rnm}\setminus\{0\}))$, where $\mathcal G$ is the Green's function for $-\Delta-\gamma|\cdot|^{-2}$ in $\rnm$. We define $X_i:=r_i^{-1}(x_{i,1},x_i')$ and $Y_i:=r_i^{-1}(y_{i,1},y_i')$: up to a subsequence, $X_i\to X_\infty\in \overline{\rnm}$ and $Y_i\to Y_\infty\in \overline{\rnm}$ as $i\to +\infty$, with $|X_\infty-Y_\infty|=1$. Then $$|x_i-y_i|^{n-2}G(x_i,y_i)=(1+o(1))G_i(X_i,Y_i)={\mathcal G}(X_\infty,Y_\infty)+o(1)$$ as $i\to +\infty$. \noindent{\it Case 8.1: We assume that $X_{\infty,1}\neq 0$ and $Y_{\infty,1}\neq 0$.} Then we get $0<c_1<+\infty$ as in Case 7.1. \noindent{\it Case 8.2: We assume that $X_\infty\in \rnm$ and $Y_\infty\in \partial\rnm\setminus\{0\}$ or $X_\infty,Y_\infty\in \partial\rnm\setminus\{0\}$}. Then we argue as in Cases 7.2 and 7.3 to get $0<c_1<+\infty$ provided $\{\partial_{Y_1}{\mathcal G}(X_\infty,Y_\infty)<0\hbox{ if }X_\infty\in \rnm\hbox{ and }Y_\infty\in \partial\rnm\}$ and $\{\partial_{Y_1}\partial_{X_1}{\mathcal G}(X_\infty,Y_\infty)>0\hbox{ if }X_\infty,Y_\infty\in \partial\rnm\}$. So we are just left with proving these two inequalities. \noindent We assume that $X_\infty\in \rnm$. It follows from Theorem \ref{th:green:gamma:rn} below that ${\mathcal G}(X_\infty,\cdot)>0$ is a solution to $(-\Delta-\gamma|\cdot|^{-2}){\mathcal G}(X_\infty,\cdot)=0$ in $\rnm\setminus\{X_\infty\}$, vanishing on $\partial\rnm\setminus\{0\}$. Hopf's maximum principle then yields $-\partial_{Y_1}{\mathcal G}(X_\infty,Y_\infty)>0$ for $Y_\infty\in \partial\rnm\setminus\{0\}$. \noindent We fix $Y_\infty\in \partial\rnm\setminus\{0\}$. For $X\in\rnm$, we then define $H(X):=-\partial_{Y_1}{\mathcal G}(X,Y_\infty)>0$ by the above argument. Moreover, $(-\Delta-\gamma|\cdot|^{-2})H=0$ in $\rnm$, and $H$ vanishes on $\partial\rnm\setminus\{0, Y_\infty\}$.
Hopf's maximum principle yields $-\partial_{X_1}H(X_\infty)=\partial_{Y_1}\partial_{X_1}{\mathcal G}(X_\infty,Y_\infty)>0$ for $X_\infty,Y_\infty\in \partial\rnm\setminus\{0\}$. \noindent{\it Case 8.3: We assume that $X_\infty=0$ or $Y_\infty=0$.} Since $|X_\infty-Y_\infty|=1$, without loss of generality, we can assume that $X_\infty\neq 0$. It follows from Cases 8.1 and 8.2 that there exists $C>0$ such that \begin{equation}\label{low:G:34} C^{-1}\frac{d(x_i,\partial\Omega)}{|x_i|^{n-\bm}}\frac{d(y,\partial\Omega)}{|y|^{\bm}}\leq G_{x_i}(y)\leq C\frac{d(x_i,\partial\Omega)}{|x_i|^{n-\bm}}\frac{d(y,\partial\Omega)}{|y|^{\bm}} \end{equation} for all $y\in \partial (B_{|x_i|/2}(0)\cap\Omega)$. We let $\underline{u}_{\bm}$ be the sub-solution given by Proposition \ref{prop:sub:super}. Arguing as in Case 3, it then follows from the comparison principle that \eqref{low:G:34} holds for $y\in B_{|x_i|/2}(0)\cap\Omega$. Since $|y_i|=o(|x_i|)$, we then get that \eqref{low:G:34} holds with $y:=y_i$. Estimating $H(x_i,y_i)$, we then get that $0<c_1<+\infty$. \noindent The gradient estimate is proved as in Cases 1 to 6. This proves \eqref{lim:c} in Case 8. \noindent Since $G$ is symmetric, it follows from Cases 7 and 8 that \eqref{lim:c} holds when $x_\infty=y_\infty$. \noindent In conclusion, we get that \eqref{lim:c} holds, which proves the initial claim. As noted previously, both the lower bound in \eqref{est:G:up} and the upper bound in \eqref{ineq:grad:G} follow from these results. \noindent We are now left with proving \eqref{est:G:4}. We let $(\txi)_i, (\tyi)_i\in \Omega$ be such that $$|\tyi|=o(|\txi|)\hbox{ and }\txi=o(1)\hbox{ as }i\to +\infty,$$ and $(h_i)_i\in C^{0,\theta}(\Omega)$ be such that $\lim \limits_{i\to+\infty}h_i=h$ in $C^{0,\theta}$.
It follows from \eqref{est:G:up} that, up to extraction, there exists $l>0$ such that \begin{equation}\label{est:G:pf} G_{h_i}(\txi,\tyi)=(l+o(1))\frac{d(\txi,\partial\Omega)}{|\txi|^{\bp}}\frac{d(\tyi,\partial\Omega)}{|\tyi|^{\bm}}\hbox{ as }i\to +\infty. \end{equation} From now on, to avoid unnecessary notations, the extraction is \underline{fixed}. We define $$r_i:=|\txi|\, ;\, s_i:=|\tyi|\, ;\, \tau_i:=s_i^{-1}\T^{-1}(\tyi)\in\rnm\hbox{ and }\theta_i:=r_i^{-1}\T^{-1}(\txi)\in\rnm,$$ and $\theta_\infty,\tau_\infty\in \overline{\rnm}$ such that \begin{equation}\label{def:theta} \txi=\T(r_i\theta_i)\, ;\, \tyi=\T(s_i\tau_i)\, ; \, \theta_i\to \theta_\infty\neq 0\hbox{ and }\tau_i\to\tau_\infty\neq 0\hbox{ as }i\to +\infty. \end{equation} \begin{step}\label{step:G:pf:1} We fix $R>0$. We claim that \begin{equation}\label{cv:G:pf:1} G_{h_i}(\txi,y)=(l+o(1))\frac{d(\txi,\partial\Omega)}{|\txi|^{\bp}}\frac{d(y,\partial\Omega)}{|y|^{\bm}}\hbox{ as }i\to +\infty \end{equation} uniformly for $y\in \Omega\cap \T(B_{Rs_i}\setminus B_{R^{-1}s_i})$. \end{step} \noindent{\it Proof of Step \ref{step:G:pf:1}:} For $z\in B_{2R}\setminus B_{(2R)^{-1}}$, we define $$G_i(z):=\frac{s_i^{\bm-1}|\txi|^{\bp}}{d(\txi,\partial\Omega)}G_{h_i}(\txi, \T(s_i z)).$$ As one checks, \eqref{cv:G:pf:1} is equivalent to proving that \begin{equation}\label{cv:G:pf:1:equiv} G_i(y)=(l+o(1))\frac{|y_1|}{|y|^{\bm}}\hbox{ uniformly for }y\in B_{R}(0)\setminus B_{R^{-1}}(0). \end{equation} Since $s_i=o(|\txi|)$ and \eqref{rem:T:bdry} holds, it follows from the control \eqref{est:G:up} that there exists $C>0$ such that \begin{equation}\label{pf:G:1} \frac{1}{C}\cdot \frac{|z_1|}{|z|^{\bm}}\leq G_i(z)\leq C \cdot \frac{|z_1|}{|z|^{\bm}}\hbox{ for all }z\in\rnm\cap B_{2R}(0)\setminus B_{(2R)^{-1}}(0).
\end{equation} As for \eqref{est:eq:Gi:2}, it follows from \eqref{eq:G:c2} that \begin{equation}\label{pf:G:2} -\Delta_{g_i} G_i-\left(\frac{\gamma s_i^2}{|\T(s_i \cdot)|^2}+O(s_i^2)\right)G_i=0\hbox{ in }B_R(0)\cap\rnm\hbox{ ; }G_i\equiv 0\hbox{ on }\partial\rnm\cap B_R(0)\setminus\{0\}. \end{equation} It follows from \eqref{pf:G:1}, \eqref{pf:G:2} and standard elliptic theory that there exists $G\in C^2(\overline{\rnm}\setminus\{0\})$ such that, up to a subsequence, \begin{equation}\label{lim:Gi:pf} \lim_{i\to +\infty}G_i=G\hbox{ in }C^2_{loc}(\overline{\rnm}\setminus\{0\}) \end{equation} with $$-\Delta G-\frac{\gamma}{|x|^2}G=0\hbox{ in }\overline{\rnm}\setminus\{0\}\, ;\, G=0\hbox{ on }\partial\rnm\setminus\{0\}\, ;$$ $$\frac{1}{C}\cdot \frac{|z_1|}{|z|^{\bm}}\leq G(z)\leq C \cdot \frac{|z_1|}{|z|^{\bm}}\hbox{ for all }z\in \rnm\setminus\{0\}.$$ It then follows from Proposition 6.4 in \cite{gr4} that there exists $\lambda>0$ such that \begin{equation}\label{G:pf:explicit} G(z)=\lambda \cdot \frac{|z_1|}{|z|^{\bm}}\hbox{ for all }z\in \rnm. \end{equation} We claim that $\lambda=l$. We prove the claim. It follows from \eqref{est:G:pf} and the definition \eqref{def:theta} of $\tau_i$ that \begin{equation}\label{eq:G:l} G_i(\tau_i)=(l+o(1))\frac{|\tau_{i,1}|}{|\tau_i|^{\bm}}\hbox{ and }\tau_i\to\tau_\infty\neq 0\hbox{ as }i\to +\infty. \end{equation} \noindent{\it Case 1:} we assume that $\tau_\infty\in\rnm\setminus\{0\}$, that is $\tau_{\infty,1}\neq 0$. Passing to the limit in \eqref{eq:G:l}, using the convergence \eqref{lim:Gi:pf} and the explicit form \eqref{G:pf:explicit}, we get that $$l\frac{|\tau_{\infty,1}|}{|\tau_{\infty}|^{\bm}}=\lambda\frac{|\tau_{\infty,1}|}{|\tau_\infty|^{\bm}},$$ and therefore, since $\tau_{\infty,1}\neq 0$, we get that $\lambda=l$. \noindent{\it Case 2:} we assume that $\tau_\infty\in \partial\rnm\setminus\{0\}$, that is $\tau_{i,1}\to 0$ as $i\to +\infty$.
With a Taylor expansion, we get that there exists a sequence $(t_i)_{i\in\nn}$ in $(0,1)$ such that $G_i(\tau_i)=\partial_1G_i(t_i \tau_{i,1},\tau_i')\tau_{i,1}$ for all $i\in \nn$. With the convergence \eqref{lim:Gi:pf} of $G_i$ to $G$ in $C^1$, we get that $$G_i(\tau_i)=\left(\partial_1G(\tau_\infty)+o(1)\right)\cdot\tau_{i,1}=\left(\frac{\lambda}{|\tau_\infty|^{\bm}}+o(1)\right)\cdot |\tau_{i,1}|.$$ Since $\tau_{i,1}\neq 0$ for all $i\in\nn$, it follows from \eqref{eq:G:l} that $\lambda=l$. \noindent Therefore, in both cases, we have proved that $\lambda=l$. It follows from this uniqueness that the convergence of $G_i$ holds with no extraction. \noindent We now prove \eqref{cv:G:pf:1:equiv}. We let $(z_i)_i\in \rnm\setminus\{0\}$ be such that $z_i\to z_\infty\in \overline{\rnm}\setminus\{0\}$. Then $G_i(z_i)\to G(z_\infty)$ as $i\to +\infty$. Therefore, if $z_{\infty,1}\neq 0$, we get that $G_i(z_i)=(1+o(1)) G(z_i)$ as $i\to +\infty$. We now assume that $z_{\infty,1}=0$, that is $z_{i,1}\to 0$ as $i\to +\infty$. We use the $C^1-$convergence of $(G_i)$ and argue as in Case 2 above to get that $\lim_{i\to +\infty}|z_{i,1}|^{-1}G_i(z_i)=-\partial_1G(z_\infty)\neq 0$. As one checks, this also yields $G_i(z_i)=(1+o(1)) G(z_i)$ as $i\to +\infty$. As noticed above, this proves \eqref{cv:G:pf:1:equiv}, hence \eqref{cv:G:pf:1}, and ends Step \ref{step:G:pf:1}.\qed \begin{step}\label{step:G:pf:2} We fix $R>0$. We claim that \begin{equation}\label{cv:G:pf:2} G_{h_i}(\txi,y)=(l+o(1))\frac{d(\txi,\partial\Omega)}{|\txi|^{\bp}}\frac{d(y,\partial\Omega)}{|y|^{\bm}}\hbox{ as }i\to +\infty \end{equation} uniformly for $y\in \Omega\cap \T(B_{Rs_i}(0))$. \end{step} \noindent{\it Proof of Step \ref{step:G:pf:2}:} For $r>0$ small, we choose $\bar{u}_{\bm}\in C^2(\Omega\cap B_r(0))$ a supersolution to $-\Delta \bar{u}_{\bm}-(\gamma|x|^{-2}+h_i)\bar{u}_{\bm}>0$ as in \eqref{ppty:ua} and \eqref{asymp:ua:plus}.
Note that, due to the convergence of $(h_i)$ to $h$ in $C^0$, the choice of $\bar{u}_{\bm}$ can be made independently of $i$. We fix $\eps>0$. It follows from the convergence \eqref{cv:G:pf:1} of Step \ref{step:G:pf:1} and \eqref{asymp:ua:plus} that there exists $i_0\in\nn$ such that \begin{equation}\label{est:sup:G:pf} G_{h_i}(\txi,y)\leq (l+\eps)\frac{d(\txi,\partial\Omega)}{|\txi|^{\bp}}\bar{u}_{\bm}(y)\hbox{ for all }y\in \partial \left(\Omega\cap \T(B_{Rs_i}(0))\right)\hbox{ for all }i\geq i_0. \end{equation} Note that $G_{h_i}(\txi,\cdot), \bar{u}_{\bm}\in H_1^2\left(\Omega\cap \T(B_{Rs_i}(0))\right)$ (these are variational super- or sub-solutions) and that the operator $-\Delta -(\gamma|x|^{-2}+h_i)$ is coercive. Since $G_{h_i}(\txi,\cdot)$ is a solution and $\bar{u}_{\bm}$ is a supersolution to $-\Delta u-(\gamma|x|^{-2}+h_i)u=0$, it follows from the comparison principle that \eqref{est:sup:G:pf} holds for $y\in \Omega\cap \T(B_{Rs_i}(0))$. With \eqref{asymp:ua:plus}, we get that there exists $i_1\in\nn$ such that \begin{equation}\label{est:sup:G:pf:2} G_{h_i}(\txi,y)\leq (l+2\eps)\frac{d(\txi,\partial\Omega)}{|\txi|^{\bp}}\frac{d(y,\partial\Omega)}{|y|^{\bm}}\hbox{ for all }y\in \Omega\cap \T(B_{Rs_i}(0))\hbox{ for all }i\geq i_1. \end{equation} Using a subsolution $\underline{u}_{\bm}$ as in \eqref{ppty:ua} and \eqref{asymp:ua:plus} and arguing as above, we get that there exists $i_2\in\nn$ such that \begin{equation}\label{est:sup:G:pf:3} G_{h_i}(\txi,y)\geq (l-2\eps)\frac{d(\txi,\partial\Omega)}{|\txi|^{\bp}}\frac{d(y,\partial\Omega)}{|y|^{\bm}}\hbox{ for all }y\in \Omega\cap \T(B_{Rs_i}(0))\hbox{ for all }i\geq i_2. \end{equation} The inequalities \eqref{est:sup:G:pf:2} and \eqref{est:sup:G:pf:3} put together yield \eqref{cv:G:pf:2}. This ends Step \ref{step:G:pf:2}.\qed \noindent We now vary the $x-$variable. \begin{step}\label{step:G:pf:3} We fix $R,R'>0$.
We claim that \begin{eqnarray} &&G_{h_i}(x,y)=(l+o(1))\frac{d(x,\partial\Omega)}{|x|^{\bp}}\frac{d(y,\partial\Omega)}{|y|^{\bm}}\hbox{ as }i\to +\infty\label{cv:G:pf:3}\\ &&\hbox{ uniformly for }y\in \Omega\cap \T(B_{Rs_i}(0))\hbox{ and }x\in \Omega\cap \T(B_{R'r_i}(0)\setminus B_{(R')^{-1}r_i}(0)) .\nonumber \end{eqnarray} \end{step} \noindent{\it Proof of Step \ref{step:G:pf:3}:} We fix a sequence $(y_i)_i\in \Omega$ such that $y_i\in \T(B_{Rs_i}(0))$ for all $i\in \nn$. For $z\in B_{2R'}\setminus B_{(2R')^{-1}}$, we define $$\tilde{G}_i(z):=\frac{|y_i|^{\bm}r_i^{\bp-1}}{d(y_i,\partial\Omega)}G_{h_i}(\T(r_i z), y_i).$$ As one checks, \eqref{cv:G:pf:3} is equivalent to proving that \begin{equation}\label{cv:G:pf:3:equiv} \tilde{G}_i(x)=(l+o(1))\frac{|x_1|}{|x|^{\bp}}\hbox{ uniformly for }x\in B_{R'}(0)\setminus B_{(R')^{-1}}(0). \end{equation} Since $|y_i|=o(r_i)$ as $i\to +\infty$ and \eqref{rem:T:bdry} holds, it follows from the control \eqref{est:G:up} that there exists $C>0$ such that \begin{equation}\label{pf:G:3} \frac{1}{C}\cdot \frac{|z_1|}{|z|^{\bp}}\leq \tilde{G}_i(z)\leq C \cdot \frac{|z_1|}{|z|^{\bp}}\hbox{ for all }z\in\rnm\cap B_{2R'}\setminus B_{(2R')^{-1}}. \end{equation} As for \eqref{est:eq:Gi:2}, it follows from \eqref{eq:G:c2} that \begin{equation}\label{pf:G:4} -\Delta_{g_i} \tilde{G}_i-\left(\frac{\gamma r_i^2}{|\T(r_i \cdot)|^2}+O(r_i^2)\right)\tilde{G}_i=0\hbox{ in }B_{2R'}(0)\cap\rnm\hbox{ ; }\tilde{G}_i\equiv 0\hbox{ on }\partial\rnm\cap B_{2R'}(0)\setminus\{0\}.
\end{equation} It follows from \eqref{pf:G:3}, \eqref{pf:G:4} and standard elliptic theory that there exists $\tilde{G}\in C^2(\overline{\rnm}\setminus\{0\})$ such that, up to a subsequence, \begin{equation}\label{lim:Gi:pf:bis} \lim_{i\to +\infty}\tilde{G}_i=\tilde{G}\hbox{ in }C^2_{loc}(\overline{\rnm}\setminus\{0\}) \end{equation} with $$-\Delta \tilde{G}-\frac{\gamma}{|x|^2}\tilde{G}=0\hbox{ in }\overline{\rnm}\setminus\{0\}\, ;\, \tilde{G}=0\hbox{ on }\partial\rnm\setminus\{0\}\, ;$$ $$\frac{1}{C}\cdot \frac{|z_1|}{|z|^{\bp}}\leq \tilde{G}(z)\leq C \cdot \frac{|z_1|}{|z|^{\bp}}\hbox{ for all }z\in \rnm\setminus\{0\}.$$ It then follows from Proposition 6.4 in \cite{gr4} that there exists $\mu>0$ such that \begin{equation}\label{G:pf:explicit:bis} \tilde{G}(z)=\mu \cdot \frac{|z_1|}{|z|^{\bp}}\hbox{ for all }z\in \rnm. \end{equation} We claim that $\mu=l$. We prove the claim. It follows from \eqref{cv:G:pf:2} and the definition \eqref{def:theta} of $\theta_i$ that \begin{equation}\label{eq:G:l:2} \tilde{G}_i(\theta_i)=(l+o(1))\frac{|\theta_{i,1}|}{|\theta_i|^{\bp}}\hbox{ and }\theta_i\to\theta_\infty\neq 0\hbox{ as }i\to +\infty. \end{equation} \noindent{\it Case 1:} we assume that $\theta_\infty\in\rnm\setminus\{0\}$, that is $\theta_{\infty,1}\neq 0$. Passing to the limit in \eqref{eq:G:l:2}, using the convergence \eqref{lim:Gi:pf:bis} and the explicit form \eqref{G:pf:explicit:bis}, as in Case 1 of Step \ref{step:G:pf:1}, we get that $l|\theta_{\infty,1}|\cdot |\theta_{\infty}|^{-\bp}=\mu|\theta_{\infty,1}|\cdot|\theta_\infty|^{-\bp}$, and therefore, since $\theta_{\infty,1}\neq 0$, we get that $\mu=l$. \noindent{\it Case 2:} we assume that $\theta_\infty\in \partial\rnm\setminus\{0\}$, that is $\theta_{i,1}\to 0$ as $i\to +\infty$. With a Taylor expansion, we get that there exists a sequence $(\tilde{t}_i)_{i\in\nn}$ in $(0,1)$ such that $\tilde{G}_i(\theta_i)=\partial_1\tilde{G}_i(\tilde{t}_i \theta_{i,1},\theta_i')\theta_{i,1}$ for all $i\in \nn$.
With the convergence \eqref{lim:Gi:pf:bis} of $\tilde{G}_i$ to $\tilde{G}$ in $C^1$, we get that $$\tilde{G}_i(\theta_i)=\left(\partial_1\tilde{G}(\theta_\infty)+o(1)\right)\cdot\theta_{i,1}=\left(\frac{\mu}{|\theta_\infty|^{\bp}}+o(1)\right)\cdot |\theta_{i,1}|.$$ Since $\theta_{i,1}\neq 0$ for all $i\in\nn$, it follows from \eqref{eq:G:l:2} that $\mu=l$. \noindent Therefore, in both cases, we have proved that $\mu=l$. It follows from this uniqueness that the convergence of $\tilde{G}_i$ holds with no extraction. As for Step \ref{step:G:pf:1}, we get \eqref{cv:G:pf:3:equiv}, and hence \eqref{cv:G:pf:3}. This ends Step \ref{step:G:pf:3}.\qed \begin{step}\label{step:G:pf:4} We fix $R,R'>0$. We claim that \begin{equation} G_{h_i}(x,y)=(l+o(1)+O(|x|^{\bp-\bm}))\frac{d(x,\partial\Omega)}{|x|^{\bp}}\frac{d(y,\partial\Omega)}{|y|^{\bm}}\hbox{ as }i\to +\infty\label{cv:G:pf:4} \end{equation} uniformly for $y\in \Omega\cap \T(B_{Rs_i}(0))$ and $x\in \Omega\setminus \T(B_{(R')^{-1}r_i}(0))$. \end{step} \noindent{\it Proof of Step \ref{step:G:pf:4}:} This step differs from Step \ref{step:G:pf:2} since one works on domains exterior to the ball of radius $r_i$. Here again, we choose $(y_i)_i$ such that $y_i\in \T(B_{Rs_i}(0))$. For $r>0$ small, we choose $\bar{u}_{\bp}\in C^2(\Omega\cap B_r(0))$ a supersolution to $-\Delta \bar{u}_{\bp}-(\gamma|x|^{-2}+h_i)\bar{u}_{\bp}>0$ as in \eqref{ppty:ua} and \eqref{asymp:ua:plus}. Note that, due to the convergence of $(h_i)$ to $h$ in $C^0$, the choice of $\bar{u}_{\bp}$ can be made independently of $i$. We fix $\eps>0$. It follows from the convergence \eqref{cv:G:pf:3} of Step \ref{step:G:pf:3} and \eqref{asymp:ua:plus} that there exists $i_0\in\nn$ such that \begin{equation}\label{est:sup:G:pf:4} G_{h_i}(x,y_i)\leq (l+\eps)\frac{d(y_i,\partial\Omega)}{|y_i|^{\bm}}\bar{u}_{\bp}(x)\hbox{ for all }x\in \Omega\cap\partial \T(B_{(R')^{-1}r_i}(0))\hbox{ for all }i\geq i_0. \end{equation} We fix $\delta>0$ such that $\delta<r$.
We choose a supersolution $\bar{u}_{\bm}$ as in \eqref{ppty:ua} and \eqref{asymp:ua:plus}. It follows from the upper bound \eqref{est:G:up} that for some $i_1\in\nn$, there exists $C>0$ such that \begin{equation}\label{est:sup:G:pf:5} G_{h_i}(x,y_i)\leq C\frac{d(y_i,\partial\Omega)}{|y_i|^{\bm}}\bar{u}_{\bm}(x)\hbox{ for all }x\in \Omega\cap\partial B_\delta(0)\hbox{ for all }i\geq i_1. \end{equation} Therefore, \begin{equation}\label{est:G:up:34} G_{h_i}(x,y_i)\leq w_i(x)\hbox{ for all }x\in \partial\left(\Omega\cap \T(B_\delta(0)\setminus B_{(R')^{-1}r_i}(0))\right) \end{equation} where $$w_i:=\frac{d(y_i,\partial\Omega)}{|y_i|^{\bm}}\left((l+\eps)\bar{u}_{\bp}+C\bar{u}_{\bm}\right)$$ and, since $\bar{u}_{\bp},\bar{u}_{\bm}$ are supersolutions, $$-\Delta w_i-\left(\frac{\gamma}{|x|^2}+h_i\right)w_i\geq 0\hbox{ in }\Omega\cap \T(B_\delta(0)\setminus B_{(R')^{-1}r_i}(0)).$$ Since $-\Delta -\left(\gamma|x|^{-2}+h_i\right)$ is coercive, the maximum principle holds and \eqref{est:G:up:34} holds on $\Omega\cap \T(B_\delta(0)\setminus B_{(R')^{-1}r_i}(0))$. With \eqref{asymp:ua:plus}, we get that there exists $i_2\in\nn$ such that \begin{equation}\label{est:sup:G:pf:45} G_{h_i}(x,y_i)\leq \left(l+2\eps+C|x|^{\bp-\bm}\right)\frac{d(x,\partial\Omega)}{|x|^{\bp}}\frac{d(y_i,\partial\Omega)}{|y_i|^{\bm}} \end{equation} for all $x\in \Omega\cap \T(B_\delta(0)\setminus B_{(R')^{-1}r_i}(0))$ for all $i\geq i_2$. Using subsolutions and arguing as above, we get that for some $i_3\in\nn$ \begin{equation}\label{est:sup:G:pf:55} G_{h_i}(x,y_i)\geq \left(l-2\eps-C|x|^{\bp-\bm}\right)\frac{d(x,\partial\Omega)}{|x|^{\bp}}\frac{d(y_i,\partial\Omega)}{|y_i|^{\bm}} \end{equation} for all $x\in \Omega\cap \T(B_\delta(0)\setminus B_{(R')^{-1}r_i}(0))$ for all $i\geq i_3$. The inequalities \eqref{est:sup:G:pf:45} and \eqref{est:sup:G:pf:55} put together yield \eqref{cv:G:pf:4}.
This ends Step \ref{step:G:pf:4}.\qed \begin{step}\label{step:G:pf:5} We let $(X_i)_i,(Y_i)_i\in \Omega$ be such that $|Y_i|=o(|X_i|)$ and $X_i=o(1)$ as $i\to +\infty$. We assume that there exists $l'>0$ such that $$G_{h_i}(X_i,Y_i)=(l'+o(1))\frac{d(X_i,\partial\Omega)}{|X_i|^{\bp}}\frac{d(Y_i,\partial\Omega)}{|Y_i|^{\bm}}\hbox{ as }i\to +\infty.$$ Then $l'=l$. \end{step} \noindent{\it Proof of Step \ref{step:G:pf:5}:} We define $$\sigma_i:=\min\{|\tyi|, |Y_i|\}\hbox{ and }\rho_i:=\max\{|\txi|, |X_i|\}.$$ We let $(z_i)_i, (t_i)_i\in\Omega$ be such that $c_1\sigma_i\leq |z_i|\leq c_2\sigma_i$ and $c_1\rho_i\leq |t_i|\leq c_2\rho_i$ for all $i\in\nn$. Since $|z_i|=O(s_i)$, $r_i=O(|t_i|)$ and $t_i\to 0$ as $i\to +\infty$, it follows from \eqref{cv:G:pf:4} that $$G_{h_i}(z_i,t_i)=(l+o(1))\frac{d(z_i,\partial\Omega)}{|z_i|^{\bm}}\frac{d(t_i,\partial\Omega)}{|t_i|^{\bp}}\hbox{ as }i\to +\infty.$$ In addition, since $|z_i|=O(|Y_i|)$, $|X_i|=O(|t_i|)$ and $t_i\to 0$ as $i\to +\infty$, it follows from the analogue of \eqref{cv:G:pf:4} obtained by running Steps \ref{step:G:pf:1} to \ref{step:G:pf:4} with $(X_i)_i,(Y_i)_i$ in place of $(\txi)_i,(\tyi)_i$ that $$G_{h_i}(z_i,t_i)=(l'+o(1))\frac{d(z_i,\partial\Omega)}{|z_i|^{\bm}}\frac{d(t_i,\partial\Omega)}{|t_i|^{\bp}}\hbox{ as }i\to +\infty.$$ Therefore, we get that $l'=l$. This ends Step \ref{step:G:pf:5}.\qed \begin{step}\label{step:G:pf:6} We let $(X_i)_i,(Y_i)_i\in \Omega$ be such that $|Y_i|=o(|X_i|)$ and $X_i=o(1)$ as $i\to +\infty$. Then $$G_{h_i}(X_i,Y_i)=(l+o(1))\frac{d(X_i,\partial\Omega)}{|X_i|^{\bp}}\frac{d(Y_i,\partial\Omega)}{|Y_i|^{\bm}}\hbox{ as }i\to +\infty.$$ \end{step} \noindent{\it Proof of Step \ref{step:G:pf:6}:} We argue by contradiction and we assume that there exist $\eps_0>0$ and a subsequence $(\varphi(i))_i$ such that $|U_{\varphi(i)}-l|\geq\eps_0$ for all $i\in\nn$ where $$U_i:=\frac{G_{h_i}(X_i,Y_i)|Y_i|^{\bm}|X_i|^{\bp}}{d(X_i,\partial\Omega)d(Y_i,\partial\Omega)}.$$ Since $(U_{\varphi(i)})$ is bounded, up to another extraction, there exists $l^{\prime\prime}>0$ such that $U_{\varphi(i)}\to l^{\prime\prime}$ as $i\to +\infty$.
Therefore, $|l-l^{\prime\prime}|\geq\eps_0$ and $l^{\prime\prime}\neq l$. Since \eqref{est:G:pf} holds for the subfamily $(\varphi(i))$, it then follows from Step \ref{step:G:pf:5} that $l^{\prime\prime}=l$, contradicting $l^{\prime\prime}\neq l$. This ends Step \ref{step:G:pf:6}.\qed \noindent We are now in position to prove \eqref{est:G:4}, that is the convergence with no extraction of subsequence. It follows from \eqref{est:G:pf} and Step \ref{step:G:pf:6} applied to $(h_i)_i$ and to the null function that there exist a subsequence $(h_{\varphi(i)})$ and $l,L_{\gamma,\Omega}>0$ such that for any $(x_i)_i,(y_i)_i\in\Omega$ such that $|y_i|=o(|x_i|)$ and $x_i=o(1)$ as $i\to +\infty$, then \begin{equation}\label{G:l1} G_{h_{\varphi(i)}}(x_i,y_i)=(l+o(1))\frac{d(x_i,\partial\Omega)}{|x_i|^{\bp}}\frac{d(y_i,\partial\Omega)}{|y_i|^{\bm}}, \end{equation} and \begin{equation}\label{G:l2} G_{0}(x_i,y_i)=(L_{\gamma,\Omega}+o(1))\frac{d(x_i,\partial\Omega)}{|x_i|^{\bp}}\frac{d(y_i,\partial\Omega)}{|y_i|^{\bm}} \end{equation} as $i\to +\infty$. We fix a sequence $(x_i)_i\in\Omega$ such that $x_i\to 0$ as $i\to +\infty$ and $d(x_i,\partial\Omega)\geq |x_i|/2$ for all $i\in\nn$. In the distribution sense, we have that $$-\Delta(G_{h_{\varphi(i)}}(x_i,\cdot)-G_{0}(x_i,\cdot))-\left(\frac{\gamma}{|x|^2}+h_{\varphi(i)}\right) (G_{h_{\varphi(i)}}(x_i,\cdot)-G_{0}(x_i,\cdot))=h_{\varphi(i)}G_{0}(x_i,\cdot)\hbox{ in }\Omega$$ and $G_{h_{\varphi(i)}}(x_i,\cdot)-G_{0}(x_i,\cdot)=0$ on $\partial\Omega$. It follows from \eqref{est:G:up} that for any $1<p<\frac{n}{n-2}$, we have that $\Vert G_{0}(x_i,\cdot)\Vert_p\leq C(p)$ for all $i\in\nn$. It then follows from elliptic theory that $G_{h_{\varphi(i)}}(x_i,\cdot)-G_{0}(x_i,\cdot)\in W^{2,p}(\Omega)$ and that $$\Vert G_{h_{\varphi(i)}}(x_i,\cdot)-G_{0}(x_i,\cdot)\Vert_{W^{2,p}}\leq C \Vert h_{\varphi(i)}\Vert_\infty.$$ For $1<p<\min\{n/2;n/(n-2)\}$, we define $q:=\frac{np}{n-2p}$.
Sobolev embeddings then yield $$\Vert G_{h_{\varphi(i)}}(x_i,\cdot)-G_{0}(x_i,\cdot)\Vert_{L^q(\Omega)}\leq C \Vert h_{\varphi(i)}\Vert_\infty.$$ We let $(\eps_i)_i$ be a sequence of positive numbers such that $\eps_i\to 0$ as $i\to +\infty$. We define $\alpha_i:=\eps_i|x_i|$ so that $\alpha_i=o(|x_i|)$ as $i\to +\infty$. We have that $$\int_{B_{\alpha_i}(0)}\left|G_{h_{\varphi(i)}}(x_i,y)-G_{0}(x_i,y)\right|^q\, dy\leq C \Vert h_{\varphi(i)}\Vert_\infty^q.$$ It then follows from \eqref{G:l1}, \eqref{G:l2} and the boundedness of $(h_i)$ in $C^0$ that $$\int_{B_{\alpha_i}(0)}\left|(l-L_{\gamma,\Omega}+o(1))\frac{d(x_i,\partial\Omega)}{|x_i|^{\bp}}\frac{d(y,\partial\Omega)}{|y|^{\bm}}\right|^q\, dy\leq C.$$ We assume by contradiction that $l\neq L_{\gamma,\Omega}$, so that $$\frac{d(x_i,\partial\Omega)}{|x_i|^{\bp}}\left(\int_{B_{\alpha_i}(0)}\left|\frac{d(y,\partial\Omega)}{|y|^{\bm}}\right|^q\, dy\right)^{1/q}\leq C .$$ If $n\leq q(\bm-1)$, then the integral is infinite. This is a contradiction. Therefore $n>q(\bm-1)$. Estimating the integral and using that $|x_i|\leq 2d(x_i,\partial\Omega)$, we get that $$|x_i|^{1-\bp}\alpha_i^{1-\bm+\frac{n}{q}}\leq C .$$ With $\alpha_i=\eps_i|x_i|$, $\bm+\bp=n$ and the definition of $q$, we get that $$|x_i|^{-n\left(1-\frac{1}{p}\right)}\eps_i^{1-\bm+\frac{n}{q}}\leq C.$$ Since $|x_i|\to 0$, with a suitable choice of $\eps_i\to 0$, we get a contradiction. \noindent Therefore $l=L_{\gamma,\Omega}$, which is independent of the choice of the sequence $(h_i)$. This proves \eqref{est:G:4} and ends the proof of Theorem \ref{th:green:gamma:domain}. \section[Appendix E: Green's function on $\rnm$]{Appendix E: Green's function for the Hardy-Schr\"odinger operator on $\rnm$}\label{sec:G:rnm} In this section, we prove the following: \begin{theorem}\label{th:green:gamma:rn} Fix $\gamma<\frac{n^2}{4}$.
For all $p\in\rnm\setminus\{0\}$, there exists $G_p\in L^1(\rnm)$ such that \noindent{\bf (i)} $\eta G_p\in H_{1,0}^2(\rnm)$ for all $\eta\in C^\infty_c(\rn\setminus\{p\})$, \noindent{\bf (ii)} For all $\varphi\in C^\infty_c(\rnm)$, we have that \begin{equation}\label{eq:kernel} \varphi(p)=\int_{\rnm}G_p(x)\left(-\Delta \varphi-\frac{\gamma }{|x|^2}\varphi\right)\, dx. \end{equation} \noindent Moreover, if $G_p,G'_p$ satisfy $(i)$ and $(ii)$ and are positive, then there exists $C\in\rr$ such that $G_p(x)-G'_p(x)=C|x_1|\cdot|x|^{-\bm}$ for all $x\in\rnm\setminus\{0,p\}$. \noindent In particular, there exists one and only one function ${\mathcal G}_p={\mathcal G}(p,\cdot)>0$ such that (i) and (ii) hold with $G_p={\mathcal G}_p$ and \noindent{\bf (iii) } ${\mathcal G}_p(x)=O\left(\frac{|x_1|}{|x|^{\bp}}\right)\hbox{ as }|x|\to+\infty. $ We then say that ${\mathcal G}$ is the Green's function for $-\Delta-\gamma|x|^{-2}$ on $\rnm$ with Dirichlet boundary condition. \noindent In addition, ${\mathcal G}$ satisfies the following properties: \noindent{\bf (iv) } For all $p\in\rnp$, there exist $c_0(p),c_\infty(p)>0$ such that \begin{equation}\label{eq:23} {\mathcal G}_p(x)\sim_{x\to 0} \frac{c_0(p)|x_1|}{|x|^{\bm}}\hbox{ and }{\mathcal G}_p(x)\sim_{x\to \infty} \frac{c_\infty(p)|x_1|}{|x|^{\bp}} \end{equation} and \begin{equation}\label{eq:24} {\mathcal G}_p(x)\sim_{x\to p}\frac{1}{(n-2)\omega_{n-1}|x-p|^{n-2}}. \end{equation} \noindent{\bf (v) } There exists $c>0$ independent of $p$ such that \begin{equation}\label{est:G:glob} c^{-1} {\mathcal H}_p(x)\leq {\mathcal G}_p(x)\leq c {\mathcal H}_p(x) \end{equation} where \begin{equation}\label{def:Hp} {\mathcal H}_p(x):=\left(\frac{\max\{|p|,|x|\}}{\min\{|p|,|x|\}}\right)^{\bm}|x-p|^{2-n}\min\left\{1,\frac{|x_1|\cdot|p_1|}{|x-p|^2}\right\}. \end{equation} \end{theorem} \noindent{\it Proof of Theorem \ref{th:green:gamma:rn}:} We shall again proceed with several steps.
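\noindent Before starting, let us recall, for the reader's convenience, the elementary computation behind the model functions $|x_1|\cdot|x|^{-\bm}$ and $|x_1|\cdot|x|^{-\bp}$ that appear throughout (this is a standard verification, not part of the argument itself). For $\beta\in\rr$, define $u_\beta(x):=|x_1|\cdot|x|^{-\beta}$ on $\rnm$. A direct computation gives $$-\Delta u_\beta-\frac{\gamma}{|x|^2}u_\beta=\left(\beta(n-\beta)-\gamma\right)\frac{|x_1|}{|x|^{\beta+2}}\hbox{ in }\rnm,$$ so that $u_\beta$ solves $-\Delta u_\beta-\gamma|x|^{-2}u_\beta=0$ if and only if $\beta^2-n\beta+\gamma=0$, that is $\beta\in\{\bm,\bp\}$; in particular $\bm+\bp=n$ and $\bm\bp=\gamma$, and $u_\beta$ vanishes on $\partial\rnm\setminus\{0\}$. These two solutions are exchanged by the Kelvin transform $u\mapsto |x|^{2-n}u(x/|x|^2)$, again since $\bm+\bp=n$.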
\noindent{\bf Step \ref{sec:G:rnm}.1: Construction of a positive kernel at a given point:} For a fixed $p_0\in\rnm\setminus\{0\}$, we show that there exists $G_{p_0}\in C^2(\overline{\rnm}\setminus\{0,p_0\})$ such that \begin{equation}\label{lim:G:bis} \left\{\begin{array}{ll} -\Delta G_{p_0}-\frac{\gamma}{|x|^2}G_{p_0}=0&\hbox{ in }\rnm\setminus\{0,p_0\}\\ G_{p_0}>0&\\G_{p_0}\in L^{\frac{2n}{n-2}}(B_\delta(0)\cap\rnm)&\hbox{ with }\delta:=|p_0|/4\\ G_{p_0}\hbox{ satisfies }(ii)\hbox{ with }p=p_0. \end{array}\right. \end{equation} Indeed, let $\tilde{\eta}\in C^\infty(\rr)$ be a nondecreasing function such that $0\leq \tilde{\eta}\leq 1$, $\tilde{\eta}(t)=0$ for all $t\leq 1$ and $\tilde{\eta}(t)=1$ for all $t\geq 2$. For $\eps>0$, set $\eta_\eps(x):=\tilde{\eta}\left(\frac{|x|}{\eps}\right)$ for all $x\in \rn$. \noindent We let $\Omega_1$ be a smooth bounded domain of $\rn$ such that $\rnm\cap B_1(0)\subset \Omega_1\subset \rnm\cap B_3(0)$. We define $\Omega_R:=R\cdot \Omega_1$ so that $\rnm\cap B_R(0)\subset \Omega_R\subset \rnm\cap B_{3R}(0).$ We argue as in the proof of \eqref{bnd:coer} to deduce that the operator $-\Delta-\frac{\gamma\eta_\eps}{|x|^2}$ is coercive on $\Omega_R$ and that there exists $c>0$ independent of $R,\eps>0$ such that \begin{equation*} \int_{\Omega_R}\left(|\nabla \varphi|^2-\frac{\gamma\eta_\eps}{|x|^2}\varphi^2\right)\, dx\geq c\int_{\Omega_R}|\nabla \varphi|^2\, dx \quad \hbox{for all $\varphi\in C^\infty_c(\Omega_R)$.} \end{equation*} \noindent Consider $R,\eps>0$ such that $R>2|p_0|$ and $\eps<\frac{|p_0|}{6}$, and let $G_{R,\eps}$ be the Green's function of $-\Delta-\frac{\gamma\eta_\eps}{|x|^2}$ in $\Omega_R$ with Dirichlet boundary condition. We have that $G_{R,\eps}>0$ since the operator is coercive.
\noindent Fix $R_0>0$ and $q'\in (1,\frac{n}{n-2})$. By arguing as in the proof of \eqref{int:bnd:G}, we get that there exists $C=C(\gamma,p_0, q', R_0)$ such that \begin{equation}\label{bnd:G:1} \Vert G_{R,\eps}(p_0,\cdot)\Vert_{L^{q'}(B_{R_0}(0)\cap\rnm)}\leq C\hbox{ for all }R>R_0\hbox{ and }0<\eps<\frac{|p_0|}{6}, \end{equation} and \begin{equation}\label{bnd:G:0} \Vert G_{R,\eps}(p_0,\cdot)\Vert_{L^{\frac{2n}{n-2}}(B_{\delta}(0)\cap\rnm)}\leq C\hbox{ for all }R>R_0\hbox{ and }0<\eps<\frac{|p_0|}{6}, \end{equation} where $\delta:=|p_0|/4$. Arguing again as in Step \ref{sec:app:c}.2 of the proof of Theorem \ref{th:green:gamma:domain}, there exists $G_{p_0}\in C^2(\overline{\rnm}\setminus\{0,p_0\})$ such that \begin{equation}\label{lim:G} \left\{\begin{array}{ll} G_{R,\eps}(p_0,\cdot)\to G_{p_0}\geq 0&\hbox{ in }C^2_{loc}(\overline{\rnm}\setminus\{0,p_0\})\hbox{ as }R\to +\infty,\; \eps\to 0\\ -\Delta G_{p_0}-\frac{\gamma}{|x|^2}G_{p_0}=0&\hbox{ in }\rnm\setminus\{0,p_0\}\\ G_{p_0}\equiv 0\hbox{ on }\partial\rnm\setminus\{0\}\\ G_{p_0}\in L^{\frac{2n}{n-2}}(B_\delta(0)\cap \rnm) \end{array}\right. \end{equation} and $\eta G_{p_0}\in H_{1,0}^2(\rnm)$ for all $\eta\in C^\infty_c(\rn\setminus\{p_0\})$. Fix $\varphi\in C^\infty_c(\rnm)$. For $R>0$ large enough, we have that $\varphi(p_0)=\int_{\rnm}G_{R,\eps}(p_0,\cdot)(-\Delta\varphi-\gamma\eta_\eps|x|^{-2}\varphi)\, dx$. The integral bounds above yield $x\mapsto G_{p_0}(x) |x|^{-2}\in L^1_{loc}(\rnm)$. Therefore, we get \begin{equation}\label{formula:G} \varphi(p_0)=\int_{\rnm}G_{p_0}(x)\left(-\Delta \varphi-\frac{\gamma }{|x|^2}\varphi\right)\, dx\hbox{ for all }\varphi\in C^\infty_c(\rnm). \end{equation} As a consequence, $G_{p_0}>0$. \noindent{\bf Step \ref{sec:G:rnm}.2: Asymptotic behavior at $0$ and $p_0$ for solutions to \eqref{lim:G:bis}.} It follows from Theorem 6.1 in Ghoussoub-Robert \cite{gr4} that either $G_{p_0}$ behaves like $|x_1|\cdot|x|^{-\bm}$ or $|x_1|\cdot|x|^{-\bp}$ at $0$.
Since $G_{p_0}\in L^{\frac{2n}{n-2}}(B_\delta(0)\cap\rnm)$ for some small $\delta>0$ and $\bm<\frac{n}{2}<\bp$, we get that there exists $c_0>0$ such that \begin{equation}\label{asymp:G:0} \lim_{x\to 0}\frac{G_{p_0}(x)}{|x_1|\cdot |x|^{-\bm}}=c_0. \end{equation} \noindent Since $G_{p_0}$ is positive and smooth in a neighborhood of $p_0$, it follows from \eqref{formula:G} and the classification of solutions to harmonic equations that \begin{equation}\label{asymp:p} G_{p_0}(x)\sim_{x\to p_0}\frac{1}{(n-2)\omega_{n-1}|x-p_0|^{n-2}}. \end{equation} \noindent{\bf Step \ref{sec:G:rnm}.3: Asymptotic behavior at $\infty$ for solutions to \eqref{lim:G:bis}:} We let $$\tilde{G}_{p_0}(x):=\frac{1}{|x|^{n-2}}G_{p_0}\left(\frac{x}{|x|^2}\right)\hbox{ for all }x\in\rnm\setminus\left\{0, \frac{p_0}{|p_0|^2}\right\},$$ be the Kelvin transform of $G_{p_0}$. Since $\Delta \tilde{G}_{p_0}(x)=|x|^{-n-2}(\Delta G_{p_0})\left(\frac{x}{|x|^2}\right)$ and $\left|\frac{x}{|x|^2}\right|^{-2}=|x|^2$, the operator $-\Delta-\gamma|x|^{-2}$ is invariant under the Kelvin transform, and we have that $$-\Delta \tilde{G}_{p_0}-\frac{\gamma}{|x|^2}\tilde{G}_{p_0}=0\hbox{ in }\rnm\setminus\left\{0, \frac{p_0}{|p_0|^2}\right\}\; ;\; \tilde{G}_{p_0}\equiv 0\hbox{ on }\partial\rnm\setminus\{0\}.$$ Since $\tilde{G}_{p_0}>0$, it follows from Theorem 6.1 in \cite{gr4} that there exists $c_1>0$ such that \begin{equation*} \hbox{either }\tilde{G}_{p_0}(x)\sim_{x\to 0}c_1\frac{|x_1|}{|x|^{\bm}}\hbox{ or }\tilde{G}_{p_0}(x)\sim_{x\to 0}c_1\frac{|x_1|}{|x|^{\bp}}. \end{equation*} Coming back to $G_{p_0}$, we get that \begin{equation}\label{choice} \hbox{either }G_{p_0}(x)\sim_{|x|\to \infty}c_1\frac{|x_1|}{|x|^{\bp}}\hbox{ or }G_{p_0}(x)\sim_{|x|\to \infty}c_1\frac{|x_1|}{|x|^{\bm}}. \end{equation} Assuming we are in the second case, for any $c\leq c_1$, we define $$\bar{G}_c(x):=G_{p_0}(x)-c\frac{|x_1|}{|x|^{\bm}}\hbox{ in }\rnm\setminus\{0, p_0\},$$ which satisfies $-\Delta \bar{G}_c-\frac{\gamma}{|x|^2}\bar{G}_c=0$ in $\rnm\setminus\{0, p_0\}$. It follows from \eqref{choice} and \eqref{asymp:p} that for $c<c_1$, $\bar{G}_c>0$ around $p_0$ and $\infty$.
Using that $\eta \bar{G}_c\in H_{1,0}^2(\rnm)$ for all $\eta\in C^\infty_c(\rn\setminus\{p_0\})$, it follows from the coercivity of $-\Delta-\gamma|x|^{-2}$ that $\bar{G}_c>0$ in $\rnm\setminus\{0, p_0\}$ for $c<c_1$. Letting $c\to c_1$ yields $\bar{G}_{c_1}\geq 0$, and then $\bar{G}_{c_1}> 0$. Since $\bar{G}_{c_1}(x)=o(|x_1|\cdot|x|^{-\bm})$ as $|x|\to \infty$, another Kelvin transform and Theorem 6.1 in \cite{gr4} yield $|x_1|^{-1}|x|^{\bp}\bar{G}_{c_1}(x)\to c_2$ as $|x|\to \infty$ for some $c_2>0$. Then there exists $c_3>0$ such that \begin{equation}\label{again} \lim_{x\to 0}\frac{\bar{G}_{c_1}(x)}{|x_1|\cdot |x|^{-\bm}}=c_3>0\hbox{ and }\lim_{x\to \infty}\frac{\bar{G}_{c_1}(x)}{|x_1|\cdot |x|^{-\bp}}=c_2. \end{equation} Since $x\mapsto |x_1|\cdot|x|^{-\bm}\in H_{1,loc}^2(\rn)$, we get that $$\varphi(p_0)=\int_{\rnm}\bar{G}_{c_1}(x)\left(-\Delta \varphi-\frac{\gamma }{|x|^2}\varphi\right)\, dx\hbox{ for all }\varphi\in C^\infty_c(\rnm).$$ \noindent{\bf Step \ref{sec:G:rnm}.4: Uniqueness:} Let $G_1,G_2>0$ be two functions such that $(i),(ii)$ hold for $p:=p_0$, and set $H:=G_1-G_2$. It follows from Steps 2 and 3 that there exists $c\in\rr$ such that $H'(x):=H(x)-c|x_1|\cdot|x|^{-\bm}$ satisfies \begin{equation}\label{bnd:H:prime} H'(x)=_{x\to 0}O\left(|x_1|\cdot|x|^{-\bm}\right)\hbox{ and }H'(x)=_{|x|\to \infty}O\left(|x_1|\cdot|x|^{-\bp}\right). \end{equation} We then have that $\eta H'\in H_{1,0}^2(\rnm)$ for all $\eta\in C^\infty_c(\rn\setminus\{p_0\})$ and $$\int_{\rnm}H'(x)\left(-\Delta\varphi-\frac{\gamma}{|x|^2}\varphi\right)\, dx=0 \quad \hbox{for all $\varphi\in C^\infty_c(\rnm)$.}$$ The ellipticity of the Laplacian then yields $H'\in C^\infty(\overline{\rnm}\setminus\{0\})$. The pointwise bounds \eqref{bnd:H:prime} yield that $H'\in H_{1,0}^2(\rnm)$. Multiplying $-\Delta H'-\frac{\gamma}{|x|^2}H'=0$ by $H'$, integrating by parts and using the coercivity yield $H'\equiv 0$, and therefore $(G_1-G_2)(x)=c|x_1|\cdot|x|^{-\bm}$ for all $x\in\rnm$.
This proves uniqueness. \noindent{\bf Step \ref{sec:G:rnm}.5: Existence.} It follows from Steps 2 and 3 that, up to subtracting a multiple of $x\mapsto |x_1|\cdot|x|^{-\bm}$, there exists a unique function ${\mathcal G}_{p_0}>0$ satisfying (i), (ii) and the pointwise control (iii). Moreover, \eqref{asymp:G:0}, \eqref{asymp:p} and \eqref{again} yield \eqref{eq:23} and \eqref{eq:24}. As a consequence, \eqref{est:G:glob} holds with $p=p_0$. \noindent For $p\in\rnp$, consider a linear isometry $\rho_p: \rn\to\rn$ preserving $\rnm$ and such that $\rho_p(\frac{p_0}{|p_0|})=\frac{p}{|p|}$, and define $${\mathcal G}_p(x):=\left(\frac{|p_0|}{|p|}\right)^{n-2}{\mathcal G}_{p_0}\left(\rho_p^{-1}\left(\frac{|p_0|}{|p|}x\right)\right)\hbox{ for all }x\in \overline{\rnm}\setminus\{0,p\}.$$ As one checks, ${\mathcal G}_p>0$ satisfies (i), (ii), (iii), \eqref{eq:23}, \eqref{eq:24} and \eqref{est:G:glob}. \noindent The definition of ${\mathcal G}_p$ is independent of the choice of $\rho_p$. Indeed, for any linear isometry $\rho_{p_0}: \rn\to\rn$ fixing $p_0$ and preserving $\rnm$, ${\mathcal G}_{p_0}\circ \rho_{p_0}^{-1}$ satisfies (i), (ii), (iii), and therefore ${\mathcal G}_{p_0}\circ \rho_{p_0}^{-1}={\mathcal G}_{p_0}$. The argument is similar for any isometry fixing $p$. \end{document}
\begin{document} \title{$k$-disjunctive cuts and a finite cutting plane algorithm for general mixed integer linear programs} \begin{abstract} \textbf{Abstract:} In this paper we give a generalization of the well known split cuts of Cook, Kannan and Schrijver \cite{CKS90} to cuts which are based on multi-term disjunctions. They will be called $k$-disjunctive cuts. The starting point is the question of what kind of cuts is needed for a finite cutting plane algorithm for general mixed integer programs. We will deal with this question in detail and derive cutting planes based on $k$-disjunctions related to a given cut vector. Finally we will show how a finite cutting plane algorithm can be established using these cuts in combination with Gomory mixed integer cuts. \end{abstract} \section{Introduction}\label{sec:intro} In this paper we will deal with cutting planes and related algorithms for general mixed integer linear programs (MILP). As most of the results will be derived by geometric arguments, we focus on programs that are given by inequality constraints, i.e. \begin{equation}\label{eq:MILP} \begin{array}{ll} \max & cx + hy \\ & Ax + Gy \leq b \\ & x \in {\mathbb Z}^p \end{array} \end{equation} where the input data are the matrices $ A \in {\mathbb Q}^{m \times p}, G \in {\mathbb Q}^{m \times q}$, the column vector $b \in {\mathbb Q}^m$ and the row vectors $ c \in {\mathbb Q}^p, h \in {\mathbb Q}^q$. Moreover we denote by the polyhedra $P = \{ (x,y) : Ax + Gy \leq b\} \subset {\mathbb R}^{p+q}$ and $ P_I = \mathrm{conv}( \{(x,y) \in P: x \in {\mathbb Z}^p\}) \subset {\mathbb R}^{p+q} $ the feasible domains of the LP relaxation and the (mixed) integer hull of a given MILP, respectively. We call a MILP bounded if the polyhedron $P$ is bounded. We will also need the projection $\mathrm{proj}_X(P) := \{ x \in {\mathbb R}^p : \exists y \in {\mathbb R}^q : (x,y) \in P\}$ of the polyhedron $P$ on the space of the integer variables.
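For intuition, the projection $\mathrm{proj}_X(P)$ can be computed by eliminating the continuous variables one at a time via Fourier--Motzkin elimination. The following sketch (pure Python; the function name and the toy instance are ours, not from the paper) eliminates a single continuous variable $y$ using exact rational arithmetic:

```python
from fractions import Fraction

def project_out_y(rows):
    """Fourier-Motzkin elimination of a single variable y.

    Each row (a, g, b) encodes  a.x + g*y <= b,  where a is a tuple of
    coefficients.  Returns rows (a, b) describing the projection of the
    polyhedron onto the x-space.
    """
    zero, pos, neg = [], [], []
    for a, g, b in rows:
        (zero if g == 0 else pos if g > 0 else neg).append((a, g, b))
    out = [(a, b) for a, _, b in zero]
    # Pair every row with g > 0 with every row with g < 0 so that a
    # positive combination cancels y.
    for a1, g1, b1 in pos:
        for a2, g2, b2 in neg:
            lam1, lam2 = Fraction(-g2), Fraction(g1)   # both positive
            a = tuple(lam1 * u + lam2 * v for u, v in zip(a1, a2))
            out.append((a, lam1 * b1 + lam2 * b2))
    return out

# Toy instance: -x1 + y <= 0, -x2 + y <= 0, x1 + x2 + y <= 2, -y <= 0.
rows = [((-1, 0), 1, 0), ((0, -1), 1, 0), ((1, 1), 1, 2), ((0, 0), -1, 0)]
proj = [(tuple(int(c) for c in a), int(b)) for a, b in project_out_y(rows)]
print(proj)   # [((-1, 0), 0), ((0, -1), 0), ((1, 1), 2)]
```

Each pairing of a positive with a negative $y$-coefficient contributes one inequality of the projection; for a single eliminated variable this matches the extreme-ray description of the projection used below.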
By a cutting plane for $P$ we understand an inequality $ \alpha x + \beta y \leq \gamma$ with row vectors $ \alpha \in {\mathbb Q}^p, \beta \in {\mathbb Q}^q$ which is valid for $P_I$ but not for $P$. Using cutting planes gives a simple idea of how to solve a general MILP: Solve the LP relaxation of the MILP. If the optimal solution is feasible, i.e., satisfies the integrality constraints, an optimal solution is found. Otherwise find a valid cutting plane that cuts off the current solution and repeat. But unlike in the pure integer case, no finite exact cutting plane algorithm is known for general MILP. In this context we remark that most cutting planes for general MILP, such as, e.g., Gomory mixed integer cuts \cite{Gom63} or mixed integer rounding cuts \cite{NW90}, are special cases of or equivalent to split cuts \cite{CKS90}. This fact and more detailed relations between these and other cuts are stated in \cite{CL01}. Here a split cut is defined as a cutting plane $ \alpha x + \beta y \leq \gamma$ for $P$ with the additional property that there exist $ d \in {\mathbb Z}^p, \delta \in {\mathbb Z}$ such that $ \alpha x + \beta y \leq \gamma$ is valid for all $ (x,y) \in P$ which satisfy the split disjunction $ dx \leq \delta $ or $ dx \geq \delta + 1$. So split cuts are not defined constructively but only by a property. Now one can see in the following 'classical' example of Cook, Kannan and Schrijver \cite{CKS90} that split cuts are not sufficient for solving a general MILP in finite time. \begin{exa}\label{exa:cks} The MILP \begin{eqnarray*} && \max y \\ && -x_1 + y \leq 0 \\ && -x_2 + y \leq 0 \\ && x_1 + x_2 + y \leq 2 \\ && x_1, x_2 \in {\mathbb Z} \end{eqnarray*} has the optimal objective function value 0, but the problem cannot be solved by any algorithm that uses only split cuts. A proof of this statement in a more general context is given in \autoref{lem:expneed}.
\end{exa} On the other hand, as positive results in the context of cutting plane algorithms for MILP we can only give the following two special cases: For mixed 0-1 programs, split cuts are sufficient for generating the integer hull $P_I$ of a given polyhedron $P$. See e.g. \cite{NW90} in the context of mixed integer rounding cuts or \cite{BCC93} in the more recent representation of lift-and-project cuts. For general MILP, there only exists a finite approximation algorithm of Owen and Mehrotra \cite{SM01} which finds a feasible $\epsilon$-optimal solution and uses simple split cuts, that is, split cuts based on disjunctions $x_i \leq \delta \vee x_i \geq \delta + 1$. So, as split cuts fail in the design of a finite cutting plane algorithm for general MILP, we want to generalize this approach to cuts that are based on multi-term disjunctions. To this end we start in \autoref{sec:kdis} with the introduction of $k$-disjunctive cuts and some of their basic properties. Afterwards we look at the approximation properties of the $k$-disjunctive closures and deal with the question of what kind of cuts is needed for an exact finite cutting plane algorithm, both in general and in special cases. Finally we derive a $k$-disjunctive cut according to a given cut vector. In \autoref{sec:algo} we turn to algorithmic aspects and show how a finite cutting plane algorithm for general MILP can be designed using $k$-disjunctive cuts in connection with the well known mixed integer Gomory cuts. Finally we will discuss the algorithm and give some interpretations. \subsection{Preliminaries}\label{ssec:prel} Here we repeat two basic results that we will need throughout this paper. The first one deals with the computation of the projection $\mathrm{proj}_X(P)$, the second one with the convergence of the mixed integer Gomory algorithm in a special case. \begin{lem}\label{lem:projection} Let a polyhedron $P = \{ (x,y) : Ax + Gy \leq b\}$ be given.
Then \begin{equation*} \mathrm{proj}_X(P) = \{x \in {\mathbb R}^p:\ v^rAx \leq v^rb, \ \forall r \in R \}, \end{equation*} where $\{v^r\}_{r \in R} $ is the set of extreme rays of the cone $ Q := \{v \in {\mathbb R}^m: \ G^Tv = 0, v \geq 0\}.$ \end{lem} \begin{proof} The statement follows by applying Farkas' lemma, see e.g. \cite{NW88}, I.4.4. \end{proof} Next we look at the usual mixed integer Gomory algorithm \cite{Gom63}. Although the algorithm in general does not even converge to the optimum, the special case in which the optimal objective function value can be assumed to be integral, e.g. the case of $h = 0$, can be solved finitely using the algorithm. In detail we have the following \begin{thm}\label{thm:gomory} Let a bounded MILP \eqref{eq:MILP} be given. Then the mixed integer Gomory algorithm terminates finitely with an optimal solution or detects infeasibility under the following conditions: \begin{enumerate} \item One uses the lexicographic version of the simplex algorithm for solving the LP relaxation. \item The optimal objective function value is integral. \item A least index rule is used for cut generation, i.e. the mixed integer Gomory cut according to the first variable $x_j$ that is fractional in the current LP solution is added to the program. Here $x_0$ corresponds to the objective function value. \end{enumerate} \end{thm} Using the last theorem, it is obvious that we can check in finite time whether there is a feasible point in a polytope with a given (rational) objective function value, as by scaling it can always be assumed that the optimal objective function value is integral. This is expressed in the following \begin{cor}\label{cor:gomory} Let a bounded MILP \eqref{eq:MILP} with the additional constraint $ cx + hy = \gamma$ be given. Then the mixed integer Gomory algorithm terminates finitely with a feasible solution or detects infeasibility.
\end{cor} \section{$k$-disjunctive cuts}\label{sec:kdis} \subsection{Basic definitions and properties}\label{ssec:basic} In analogy to the definition of a split cut based on a split disjunction, we now define a $k$-disjunctive cut that is based on a $k$-disjunction covering every integral vector. \begin{defi}\label{defi:kdis} Let $k \geq 2$ be a natural number, $d^1,\ldots,d^k \in {\mathbb Z}^p $ integral vectors and $\delta^1,\ldots,\delta^k \in {\mathbb Z}$. Then we call the inequalities $d^1x \leq \delta^1,\ldots,d^kx \leq \delta^k$ a $k$-disjunction if for all $x \in {\mathbb Z}^p$ there is an $i \in \{1,\ldots,k\}$ with $ d^ix \leq \delta^i$. In this case we write $D(k,d,\delta)$ with $ d = (d^1,\ldots,d^k), \delta = (\delta^1,\ldots,\delta^k)$ for the $k$-disjunction. \end{defi} \begin{figure} \caption{Examples of $k$-disjunctions in ${\mathbb R}^2$.} \label{sfig:3dis} \label{sfig:4dis} \label{fig:kdis} \end{figure} We note that we do not require the pairs $(d^i, \delta^i)$ to be different. So every $l$-disjunction is also a $k$-disjunction for $l < k$. In particular, every split disjunction is also a $k$-disjunction. Moreover, every $k$-disjunction is a cover of ${\mathbb Z}^p$ by definition. \begin{defi}\label{defi:kdiscut} Let $P \subset {\mathbb R}^{p+q }$ be a polyhedron and $\alpha x + \beta y\leq \gamma$ be a cutting plane. Then $\alpha x + \beta y\leq \gamma$ is called a $k$-disjunctive cut for $P$ if there exists a $k$-disjunction $D(k,d,\delta)$ with \begin{equation*} (x,y) \in P: \ \alpha x + \beta y > \gamma \Longrightarrow d^i x > \delta^i, \ \forall i \in \{1,\ldots,k\} . \end{equation*} \end{defi} Of course every $k$-disjunctive cut for $P$ is valid for $P_I$ by definition. According to the remark after \autoref{defi:kdis}, every $l$-disjunctive cut is also a $k$-disjunctive cut for $l < k$. So every split cut is a $k$-disjunctive cut. \begin{defi}\label{defi:kdisclos} Let $P \subset {\mathbb R}^{p+q }$ be a polyhedron.
Then the intersection of all $k$-disjunctive inequalities is called the $k$-disjunctive closure of $P$ and denoted by $P_k^{(1)}$. Analogously, the $i$-th $k$-disjunctive closure $P_k^{(i)}$ of $P$ is defined as the $k$-disjunctive closure of $P_k^{(i-1)}$. In the special case of $k = 2$ we will also write $P^{(i)}$ instead of $P_2^{(i)}$. \end{defi} We want to remark that it is not evident whether the $k$-disjunctive closure $P_k^{(1)}$ of a given polyhedron $P$ is again a polyhedron in the case of $k \geq 3$. Both proofs of this property for the split closure, \cite{ACL05} and \cite{CKS90}, cannot be applied to the more general case. However, we will not deal further with this question, as the results in the following are independent of this property. We further remark that \autoref{defi:kdiscut} also applies in a natural way to closed convex sets $P$. This guarantees that the definition of the $i$-th $k$-disjunctive closure $P_k^{(i)}$ of a polyhedron $P$ is well defined. A valid cut to a given $k$-disjunction can be computed as an intersection cut with respect to any basic solution of the LP relaxation that is not contained in the disjunction, according to \cite{Bal71}. In the case of $k = 2$, Andersen, Cornu\'ejols and Li have shown \cite{ACL05} that intersection cuts are sufficient to describe all cuts to a given split disjunction. This result is not true for general $k$-disjunctions. Here not every valid $k$-disjunctive cut to a given disjunction is equal to or dominated by an intersection cut. This can be seen in the following \begin{exa} We look at the polyhedral cone $C \subset {\mathbb R}^{2+1}$ with apex $(\frac{1}{2},\frac{1}{2},\frac{1}{2})$ that is defined by \begin{eqnarray*} -x_1 + y &\leq& 0 \\ -x_2 + y &\leq& 0 \\ x_1 + y &\leq& 1 \\ x_2 + y &\leq& 1. \end{eqnarray*} Then $ y \leq 0$ is a $4$-disjunctive cut for $C$ to the $4$-disjunction $D:= \{x_1 + x_2 \geq 2, x_1 - x_2 \geq 1, -x_1 + x_2 \geq 1, -x_1 - x_2 \geq 0 \}$ .
The set of all bases is given by any three of the above constraints. Computing the four corresponding intersection cuts to the $4$-disjunction $D$, we get that the point $(\frac{1}{2},\frac{1}{2},\frac{1}{6})$ is valid for all four cuts, and so the cut $y \leq 0$ is not dominated by them. \end{exa} Although the properties of general $k$-disjunctive cuts are more involved than in the case of split cuts, an investigation of these cuts is useful because every valid cutting plane for a given polyhedron is a $k$-disjunctive cut for some $k$. \begin{lem}\label{lem:every} Let $P \subset {\mathbb R}^{p+q }$ be a polyhedron and $\alpha x + \beta y \leq \gamma$ be a valid cutting plane. Then $\alpha x + \beta y \leq \gamma$ is a $k$-disjunctive cut for some $k \in {\mathbb N}$. \end{lem} \begin{proof} Let $P$ be a polyhedron and $\alpha x + \beta y \leq \gamma$ be a valid cutting plane. The set $M$ that is cut off by the above cutting plane is given by \begin{equation*} M:=\{(x,y) \in P: \alpha x + \beta y > \gamma \}. \end{equation*} $M$ contains no feasible points of $P_I$. So we have $x \not\in {\mathbb Z}^p$ for $(x,y) \in M$. By \autoref{lem:projection} the projection of $M$ can be expressed as \begin{equation*} \mathrm{proj}_X (M) = \{ x \in {\mathbb R}^p: A^e x \leq b^e, \, A^l x < b^l\} \end{equation*} with (w.l.o.g.) integral matrices $ A^e, A^l$ with rows $a_i$ satisfying $\mathrm{gcd}(a_i) = 1$ and integer vectors $b^e,b^l$. We modify the coefficients of the vectors $ b^e, b^l$ by \begin{eqnarray*} \widetilde b_i^e &=& \lfloor b_i^e \rfloor + 1, \\ \widetilde b_i^l &=& \lceil b_i^l \rceil. \end{eqnarray*} Altogether we get that $\alpha x + \beta y \leq \gamma$ is a $k$-disjunctive cut according to \\ $ D(k, -(A^e,A^l), -(\widetilde b^e, \widetilde b^l))$. \end{proof} It is our goal to compute the mixed integer hull of a given polyhedron using $k$-disjunctive cuts.
Of course this should be done 'as simply as possible', meaning that both the number $k$ of hyperplanes needed for the disjunctions and the number of iterations in a cutting plane procedure should be small. At least the latter property can be easily realized, as the next theorem shows. \begin{thm}\label{thm:2pclos} Let $P \subset {\mathbb R}^{p+q}$ be a polyhedron. Then $ P_I = P^{(1)}_{2^p}$. \end{thm} \begin{proof} We will show that every valid inequality $\alpha x + \beta y \leq \gamma$ for $P_I$ is a $2^p$-disjunctive cut for $P$. This suffices to prove the theorem. Using \autoref{lem:every} we get that $ \alpha x + \beta y \leq \gamma$ is a $k$-disjunctive cut with a related disjunction $D(k,d,\delta)$. So the claim is shown for $k \leq 2^p$. Otherwise the number of inequalities of the disjunction can be reduced to the required limit of $2^p$. Since $D(k,d,\delta)$ is a $k$-disjunction, we have \begin{eqnarray*} \forall x \in {\mathbb Z}^p \ \exists i \in \{1,\ldots,k\}: \ d^ix \leq \delta^i. \end{eqnarray*} On the other hand, we take the set of all integral vectors with the property $ d^ix = \delta^i $ for a given $i \in \{1,\ldots,k\}$. Either there exists a vector $\bar{x} \in {\mathbb Z}^p$ with $d^i \bar{x} = \delta^i $ and $ d^j \bar{x} > \delta^j, \ \forall j \in \{1,\ldots,k\} \setminus \{i\}$, or we can tighten the disjunction by setting the right hand side of the inequality to $ \delta^i - 1$ and repeat this consideration. This may also lead to the case that the inequality can be dropped.
Therewith we can restrict ourselves to disjunctions with the additional condition: \begin{equation*} \forall i \in \{1,\ldots,k\} \ \exists x^i \in {\mathbb Z}^p : \ d^ix^i = \delta^i \wedge d^jx^i > \delta^j, \ \forall j \in \{1,\ldots,k\} \setminus \{i \} \end{equation*} The set $\mathrm{conv}(\{x^1,\ldots,x^k\})$ contains no integral vectors other than its vertices $\{x^1,\ldots,x^k\}$: if there were another integral vector $z \in \mathrm{conv}(\{x^1,\ldots,x^k\})$, we would have $ d^i z \leq \delta^i$ for some $i$, contradicting the definition of the vectors $x^i$. So we have constructed a set that contains exactly $k$ integer points, namely its vertices. This leads to a contradiction for $k > 2^p$: then there would be at least two vertices $v,w$ such that each pair of components $v_i,w_i, \ i \in \{1,\ldots,p\},$ is either both even or both odd. So $ \frac{1}{2}(v+w) $ would be an integral vector contained in $\mathrm{conv}(\{x^1,\ldots,x^k\})$, a contradiction to the properties of the set. \end{proof} We look at an easy example to see that $2^p$-disjunctive cuts are needed in general to compute the mixed integer hull of a polyhedron in one step. \begin{exa}\label{exa:2pclos1} We take the $p$-dimensional unit cube $C = [0;1]^p$ and define the polyhedron $Q$ by \begin{equation*} Q = \{x: ax \leq \max_{x \in C} ax, \, a \in \{-1,1\}^p\}. \end{equation*} Next we embed $Q$ in ${\mathbb R}^{p+1}$ and define the polyhedron \begin{equation*} P = \mathrm{conv} \left\{ \left( \begin{array}{c} x \\ 0 \end{array} \right), \frac{1}{2} \mathbf{1} \right\}, \ x \in Q. \end{equation*} Of course, $P_I = C$, and the only valid $k$-disjunction for the cutting plane $ x_{p+1} \leq 0$ is defined by the facets of $Q$ itself. \end{exa} As \autoref{thm:2pclos} shows, the mixed integer hull of a general polyhedron can be 'easily' generated with $2^p$-disjunctive cuts in theory.
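The covering property of \autoref{defi:kdis} can be sanity-checked by brute force over a finite box of integer points. The following sketch (pure Python; the helper is ours, and since only a finite window is inspected it provides evidence rather than a proof) checks the $3$-disjunction $x_1 \leq 0 \vee x_2 \leq 0 \vee x_1 + x_2 \geq 2$ underlying \autoref{exa:cks}:

```python
from itertools import product

def is_k_disjunction(d, delta, box):
    """Check that every integer point x in the box satisfies d^i x <= delta^i
    for at least one i.

    d: list of integer coefficient tuples, delta: list of integers,
    box: one range per coordinate.  Only the finite box is inspected.
    """
    for x in product(*box):
        if not any(sum(di * xi for di, xi in zip(drow, x)) <= de
                   for drow, de in zip(d, delta)):
            return False
    return True

# x1 <= 0  or  x2 <= 0  or  -x1 - x2 <= -2  (i.e. x1 + x2 >= 2):
d = [(1, 0), (0, 1), (-1, -1)]
delta = [0, 0, -2]
window = [range(-5, 6), range(-5, 6)]
print(is_k_disjunction(d, delta, window))          # True
print(is_k_disjunction(d[:2], delta[:2], window))  # False: (1,1) escapes
```

Dropping the third term leaves the integer point $(1,1)$ uncovered, which is exactly why the example of \cite{CKS90} defeats split cuts.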
Of course, for practical purposes the use of disjunctions with an exponential number of defining hyperplanes is very expensive. So we will deal in the following with the second question mentioned above, i.e., what kind of $k$-disjunctive cuts is at least needed to compute the mixed integer hull by a repeated application of $k$-disjunctive cuts. \subsection{Approximation property of split cuts} \label{ssec:convkclos} Before we further analyze which cuts we need to solve a MILP exactly, we will deal with the approximation properties of $k$-disjunctive cuts. We recall that using split cuts is already sufficient to approximate the optimal objective function value of any MILP arbitrarily well. To this end, consider the sequence $ (\gamma^{(i)})_{i \in {\mathbb N}}$ of objective function values given by \begin{equation}\label{eq:series} \gamma^{(i)} := \max \{cx + hy| \ (x,y) \in P^{(i)}\} \end{equation} for an arbitrary objective function $cx + hy $ that is bounded over the polyhedron $P$. In detail we get the following \begin{thm}\label{thm:finconv} Let $P \subset {\mathbb R}^{p+q}$ be a polytope, $cx + hy$ an objective function, $ \gamma^* = \max \{cx +hy| \ (x,y) \in P_I \}$ and $ \gamma^{(i)} $ as defined in \eqref{eq:series}. Then for all $\epsilon > 0$ there is an $i_0 \in {\mathbb N}$ with $ | \gamma^{(i_0)} - \gamma^*| < \epsilon $. \end{thm} \begin{proof} A proof of this statement in a slightly different form using a repeated variable disjunction can be found in the paper \cite{SM01} of Owen and Mehrotra. Moreover, the algorithm in this paper also gives a constructive proof. \end{proof} As we can approximate any optimal objective function value arbitrarily well using split cuts, general $k$-disjunctive cuts become necessary only for determining exact solutions. Moreover we want to remark that in practical applications, optimizing over the first split closure alone often gives a good approximation of the optimal objective function value.
This was investigated in detail by Balas and Saxena \cite{BS06} for instances from the MIPLIB 3.0 and several other classes of structured MILP. \subsection{Solving MILP exactly}\label{ssec:finite} We now get back to the question of what kind of cuts is needed to solve a general MILP exactly. As we will see, this depends on the structure of the projection of the solution space on the $x$-space of integral variables. For example, the important special case of the solution space being a vertex can be solved using split cuts only. However, we will see that in general the required number of disjunctive hyperplanes is exponential in the dimension of the integer space. We start with the case that the solution set contains relative interior integer points. \begin{thm}\label{thm:verfin} Let $P \subset {\mathbb R}^{p+q}$ be a polyhedron, $cx + hy$ an objective function bounded over $P$, and $ \gamma^* = \max \{cx +hy| \ (x,y) \in P_I \}$. If \begin{equation*} {\mathrm{relint}}\,( \mathrm{proj}_X( \{ (x,y) \in P_I: cx + hy = \gamma^*\} )) \cap {\mathbb Z}^p \not= \emptyset \end{equation*} then there is a $k \in {\mathbb N}$ with $\max \{cx +hy| \ (x,y) \in P^{(k)} \} = \gamma^*$. \end{thm} \begin{proof} If $ \max \{cx +hy| \ (x,y) \in P \} = \gamma^*$ there is nothing to show, so let $ \max \{cx +hy| \ (x,y) \in P \} > \gamma^*$. In particular this means that $ {\mathrm{int}}\,(\mathrm{proj}_X(M)) \cap {\mathbb Z}^p = \emptyset$, where $M := P_I \cap \{ (x,y): cx + hy = \gamma^*\} $ denotes the solution set. Moreover let $x^* \in {\mathrm{relint}}\,( \mathrm{proj}_X(M)) \cap {\mathbb Z}^p$. To prove the claim we have to show that $ cx + hy \leq \gamma^*$ is a split cut for one of the polyhedra $P^{(k)}, \, k \in {\mathbb N}$. Let $A_Ix + G_Iy \leq b_I$ denote those inequalities in the representation of the mixed integer hull that define the set $M$.
With \autoref{thm:finconv} we get \begin{equation}\label{eq:finconv1} \lim_{k \rightarrow \infty} \max \{a_{I,i}x + g_{I,i}y| \ (x,y) \in P^{(k)}\} = b_{I,i}. \end{equation} Moreover, $x^*$ lies on the boundary of the projection $\mathrm{proj}_X(M^{(k)})$ of each set $ M^{(k)}:= P^{(k)} \cap \{ (x,y): cx + hy \geq \gamma^*\}$. As $ M^{(k+1)} \subseteq M^{(k)} $, there exists an inequality $px \leq \pi$ that is valid for all of the sets $\mathrm{proj}_X(M^{(k)})$ with the additional property \begin{equation}\label{eq:finconv2} px = \pi, \forall x \in \mathrm{proj}_X(M), \end{equation} as $x^* \in {\mathrm{relint}}\,(\mathrm{proj}_X(M))$. If we combine \eqref{eq:finconv1} and \eqref{eq:finconv2}, we get as a direct consequence that $cx + hy \leq \gamma^*$ is a split cut to the disjunction $D(p,\pi)$ for some $P^{(n)}, n \in {\mathbb N}$. \end{proof} After having seen that split cuts are even sufficient for solving an important class of MILP exactly, we turn to the general situation. The idea for finite convergence using $k$-disjunctive cuts in general rests on the basic principle that there has to exist a $k$-disjunction $D$ such that the interior of the projection $\mathrm{proj}_X(M)$ of the solution set is not contained in $D$. If no appropriate $k$-disjunction exists for all closures $P_k^{(i)}$, then we cannot achieve a finite algorithm using $k$-disjunctive cuts. On the other hand, if this condition is satisfied for each face of the solution set, finite convergence can be shown in the general case. \begin{thm}\label{thm:mainfin} Let $P \subset {\mathbb R}^{p+q}$ be a polyhedron, $cx + hy$ an objective function bounded over $P$, $ \gamma^* = \max \{cx +hy| \ (x,y) \in P_I \}$ and $M := P_I \cap \{ (x,y): cx + hy = \gamma^*\}$.
If there exists, for both $M$ and all its faces $f \in F$ with ${\mathrm{relint}}\,(\mathrm{proj}_X(f)) \cap {\mathbb Z}^p = \emptyset$, a $k$-disjunction $D_f(k,d,\delta)$ with the property \begin{equation*} x \in {\mathrm{relint}}\,(\mathrm{proj}_X(f)) \ \Longrightarrow x \not\in D_f, \end{equation*} then there exists an $n \in {\mathbb N}$ with $ \max \{cx + hy| \ (x,y) \in P_k^{(n)} \} = \gamma^*$. \end{thm} \begin{proof} We prove the claim by induction over the dimension $l$ of the solution set $M$. We start with $l = 0$. In this case $k = 2 $ can always be chosen and the result is a special case of \autoref{thm:verfin}. We assume now that the claim is true for $l-1, l \in {\mathbb N}$. \\ So let $M$ be the solution set of $\max \{cx + hy |\ (x,y) \in P_I \}$ and $\dim(M) = l$. Moreover, let $D(k,d,\delta)$ be a $k$-disjunction for $M$ according to the assumption, $k \in {\mathbb N}$. We prove that $cx + hy \leq \gamma^*$ is a $k$-disjunctive cut to the disjunction $D$ for one of the sets $P_k^{(n)}$. To this end we show that there exists an $n \in {\mathbb N}$ such that $ (P_k^{(n)} \cap \{ (x,y): cx + hy > \gamma^*\}) \cap D = \emptyset$. With the disjunction $D$ we define the polyhedron $Q:= \{(x,y): dx \geq \delta \wedge cx + hy = \gamma^*\}$. As $cx + hy = \gamma^*$ is a supporting hyperplane of $P_I$, there exists $(\widehat x, \widehat y)$ with $ c \widehat x + h \widehat y > \gamma^*$ and $ \widehat x \not\in D$ such that each of the inequalities $c_f x + h_f y \leq \gamma_f$ defined by $(\widehat x, \widehat y)$ and a facet $f$ of $Q$ is valid for $P_I$. All inequalities $c_f x + h_f y \leq \gamma_f$ support $M$ at most in a lower-dimensional face. So it follows either by the induction hypothesis or by \autoref{thm:finconv} that the inequalities $c_f x + h_f y \leq \gamma_f$ are valid for some $P_k^{(n)}$. Herewith, the condition $ (P_k^{(n)} \cap \{ (x,y): cx + hy > \gamma^*\}) \cap D = \emptyset$ is satisfied and the theorem is proven.
\end{proof} We remark that it is necessary to involve all the faces of the solution set in the last theorem, as the following example shows. \begin{exa} We define the polyhedron $P \subset {\mathbb R}^{3 + 1}$ through the vertices \begin{eqnarray*} &&(0,0,0,0), (2,0,0,0), (0,2,0,0) \\ &&(0,0,1,0), (2,0,1,0), (0,2,1,0) \\ &&\left(\frac{1}{2},\frac{1}{2},\frac{1}{2},\frac{1}{2} \right) \end{eqnarray*} For the objective function vector $ (0,0,0,1)$ we get that $ M $ is contained in a split disjunction, whereas the cut according to the face $x_3 = 0$ is only a $3$-disjunctive cut. \end{exa} We now deal with the question of what cuts we need to solve a general MILP. To this end we use special sets that can arise as solution sets of MILP to give a lower bound on the required number of disjunction terms. The idea is based on a generalization of \autoref{exa:cks}. \begin{lem}\label{lem:expneed} Let $P \subset {\mathbb R}^{p+q}$ be a polyhedron, $cx + hy$ an objective function bounded over $P$, $ \gamma^* = \max \{cx +hy| \ (x,y) \in P_I \} < \max \{cx +hy| \ (x,y) \in P \}$ and $M := P_I \cap \{ (x,y): cx + hy = \gamma^*\}$. If ${\mathrm{proj}_X}\,(M) \subset {\mathbb R}^p$ has $k$ facets, each containing a relative interior integer point, then $ \max \{cx +hy| \ (x,y) \in P_{k-1}^{(i)} \} > \gamma^*, \, \forall i \in {\mathbb N}$. \end{lem} \begin{proof} Let $\mathrm{proj}_X(M)$ be given with relative interior points $ x^1,\ldots,x^k \in {\mathbb Z}^p$ that are contained in pairwise different facets. Then we have $ x^{ij} := \frac{1}{2} (x^i + x^j) \in {\mathrm{int}}\,(\mathrm{proj}_X(M))$ for all $i,j, \, i \not=j$. By assumption there exists $(x^{ij},y^{ij}) \in P$ with $cx^{ij} + hy^{ij} > \gamma^*$. Moreover, at least one of the points $(x^{ij},y^{ij})$ is not cut off by any given $(k-1)$-disjunctive cut. So the cut is valid for the set $ Q^{ij} := \mathrm{conv}((x^{ij},y^{ij}), P_I)$.
As each cut can be classified by this property, we get that $ \bigcap_{i \not=j} Q^{ij} \subseteq P_{k-1}^{(1)}$. It is clear that $\bigcap_{i \not=j} Q^{ij}$ contains a point $(x,y)$ with $cx + hy > \gamma^*$. As $P_{k-1}^{(1)}$ satisfies all assumptions and the solution set $M$ does not change, the proof follows by induction. \end{proof} We can now show that, in general, cutting planes with a number of disjunctive terms that is exponential in the dimension $p$ are needed to solve a MILP. \begin{thm}\label{thm:expon} Let $P$ be a polyhedron and $cx + hy $ be an objective function bounded over $P$ with $ \max\{cx + hy : (x,y) \in P_I \} = \gamma^*$. Then in general \begin{equation*} \max\{cx + hy : (x,y) \in P_{2^{p-1} + 1 }^{(n)} \} > \gamma^*, \ \forall n \in {\mathbb N}. \end{equation*} \end{thm} \begin{proof} Using \autoref{lem:expneed}, it is sufficient to give an integral polytope $Q \subset {\mathbb R}^p$ with $p = n + 1$ and at least $2^{n} + 2 $ facets that contains no interior integer point but a relative interior integer point in each facet. To this end we define $Q$ as the set of all $(x,x_{n+1}) \in {\mathbb R}^{n+1}$ with the property: \begin{eqnarray*} && a x - \pi(a)x_{n+1} \leq 1, \ a \in \{\pm 1\}^n \\ && 0 \leq x_{n+1} \leq 2 \end{eqnarray*} with $ \pi(a) := |\{i \in \{1,\ldots,n\}:a_i = 1 \} |- 1$. We show that $Q$ has the desired properties. \\ Its vertices are contained in the hyperplane $x_{n+1} = 0$ or $x_{n+1} = 2$. For $x_{n+1} = 0$ the related polytope is the $n$-dimensional cross polytope. For $x_{n+1} = 2$ the related polytope is generated by the vertices $ \mathbf{1} \pm (n-1)u_i$. The last property follows from the fact that $ax \leq 1 $ is active for a vector $ \pm u_i$ if, and only if, $ ax \leq 1 + 2 \pi(a)$ is active for $ \mathbf{1} \pm (n-1)u_i$, by definition of $\pi(a)$. So $Q$ is integral. \\ Let $(z,1) \in {\mathbb Z}^{n+1}$ be given.
We take the side constraint $ ax \leq 1 + \pi(a)$ with $a_i = 1 \Longleftrightarrow z_i > 0, \ i \in \{1,\ldots,n\}$, and get \begin{equation*} a z = \sum_{1 \leq i \leq n} |z_i| \geq \sum_{z_i > 0} z_i \geq |\{ i \in \{1,\ldots,n\}: a_i = 1\}| = 1 + \pi(a). \end{equation*} So $(z,1)$ is not an interior point of $Q$. Moreover we can see that $(z,1) \in Q $ for $ z \in \{0,1\}^n$ and that $(z,1)$ is a relative interior point of the facet $ ax - \pi(a) x_{n+1} \leq 1 $ for $a_i = 1 \Longleftrightarrow z_i = 1, \ i \in \{1,\ldots,n\}$. As $ \mathbf{0}$ and $\mathbf{1}$ are relative interior points of the facets $ x_{n+1} = 0$ and $ x_{n+1} = 2$, respectively, $Q$ has all the desired properties. \end{proof} \begin{figure} \caption{The set $Q$ constructed in \autoref{thm:expon}.} \label{sfig:exa_2p_y0} \label{sfig:exa_2p_y1} \label{sfig:exa_2p_y2} \label{fig:expon} \end{figure} So we have proven that in general at least $(2^{p-1} +2)$-disjunctive cuts are required to solve a MILP exactly in a finite number of steps. We remark that we have an upper bound of $ 2^p$, as shown in \autoref{thm:2pclos}. With this result we can see that an exact cutting plane algorithm gets in general very expensive, as a large number of disjunctive hyperplanes has to be computed. Moreover we have not yet discussed how to determine a cut to the related $k$-disjunction. As intersection cuts according to basis relaxations do not generate strong cuts in general, this is an important issue for practical applications. On the other hand, we have seen that a wide class of problems can even be solved using split cuts. Moreover, the convergence properties of $k$-disjunctive cuts depend on the structure of the projection of the polyhedron and the objective function. This fact suggests using information about the projection in cutting plane algorithms. \subsection{Computing $k$-disjunctive cuts}\label{ssec:compkcuts} At the end of this section we want to give an alternative way to compute strong valid $k$-disjunctive cuts.
Unlike the usual generation of valid cuts for a MILP, we need the vector $(c \ h)$ along which we want to cut as additional input. Moreover, we restrict ourselves to shifted polyhedral cones $ P= \{(x,y): Ax + Gy \leq b\} $ with apex $(x^*,y^*), x^* \not\in {\mathbb Z}^p$, and assume that the function $ cx + hy$ attains its unique maximum over $P$ at $(x^*,y^*)$ with value $\gamma^*$. In this situation we can describe $(x^*,y^*)$ as the polyhedron given by the (overdetermined) system \begin{equation}\label{eq:compkcut1} P_{\gamma^*} := \left\{ \left( \begin{array}{c} A \\ -c \end{array}\right)x + \left( \begin{array}{c} G \\ -h \end{array}\right)y \leq \left( \begin{array}{c} b \\ -\gamma^* \end{array}\right) \right\}. \end{equation} Using \autoref{lem:projection}, the projection of the above system onto the $x$-space, which equals $\{x^*\}$, is given by \begin{equation}\label{eq:compkcut2} {\mathrm{proj}_X}\,(P_{\gamma^*}) = \left\{ x \in {\mathbb R}^p: v^r \left( \begin{array}{c} A \\ -c \end{array} \right) x \leq v^r \left( \begin{array}{c} b \\ -\gamma^* \end{array} \right) , \ \forall r \in R \right\} \end{equation} with $R$ being the set of extreme rays of the cone \begin{equation}\label{eq:compkcut3} Q = \left\{ v \in {\mathbb R}^{m+1}: \left( \begin{array}{c} G \\ -h \end{array} \right)^T v = 0, \ v \geq 0 \right\}. \end{equation} As the cone $Q$ is rational, we can assume that the extreme rays $v^r$ are elements of the additive group \begin{equation}\label{eq:compkcut4} \mathcal{G}_{P_{\gamma^*}} := \left\{w \in {\mathbb Q}^{m+1}: w \left( \begin{array}{c} A \\ -c \end{array} \right) \in {\mathbb Z}^{p} \right\}. \end{equation} We can therefore use the above polyhedral description of ${\mathrm{proj}_X}\,(P_{\gamma^*})$ to define a valid $k$-disjunction for $P$ that does not contain the apex $(x^*,y^*)$. We do this by rounding up the right hand sides of the defining constraints of ${\mathrm{proj}_X}\,(P_{\gamma^*})$ in \eqref{eq:compkcut2}.
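For the special case of a single continuous variable $y$, the extreme rays of the cone \eqref{eq:compkcut3} reduce to pairings of one row with positive and one row with negative $y$-coefficient, i.e.\ to Fourier--Motzkin elimination. The following sketch (the helper name and the tuple encoding are ours, exact rational arithmetic via `Fraction`) illustrates the projection \eqref{eq:compkcut2} on a toy instance; it is an illustration only, not the general extreme-ray computation:

```python
from fractions import Fraction as F

def project_out_y(rows):
    """Fourier-Motzkin elimination of a single continuous variable y.

    Each row (a, g, b) encodes  a.x + g*y <= b.  For one y-variable the
    projection onto x-space consists of the rows with g == 0 together
    with all positive combinations of one row with g > 0 and one with
    g < 0 -- exactly the multipliers v^r of the projection formula."""
    pos = [r for r in rows if r[1] > 0]
    neg = [r for r in rows if r[1] < 0]
    out = [(a, b) for a, g, b in rows if g == 0]
    for a1, g1, b1 in pos:
        for a2, g2, b2 in neg:
            # multipliers (-g2, g1) >= 0 cancel the y-coefficient
            a = tuple(-g2 * s + g1 * t for s, t in zip(a1, a2))
            out.append((a, -g2 * b1 + g1 * b2))
    return out

# toy instance:  -x1 + y <= 0,  -x2 + y <= 0,  x1 + x2 + y <= 2,
# plus the objective row  -y <= -gamma  (we maximize y, here gamma = 2/3)
gamma = F(2, 3)
rows = [((F(-1), F(0)), F(1), F(0)),
        ((F(0), F(-1)), F(1), F(0)),
        ((F(1), F(1)), F(1), F(2)),
        ((F(0), F(0)), F(-1), -gamma)]
proj = project_out_y(rows)
for a, b in proj:
    print(a, "<=", b)   # -x1 <= -2/3,  -x2 <= -2/3,  x1 + x2 <= 4/3
```

Rounding up the three resulting right hand sides is then exactly the step described above.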
\begin{lem}\label{lem:Rdis} Let $P,(x^*,y^*),(c \ h),\gamma^*$ and $ {\mathrm{proj}_X}\,(P_{\gamma^*})$ be as defined above. Moreover, define for $r \in R$ \begin{eqnarray*} d^r &:=& v_{1,\ldots,m}^r A - v_{m+1}^r c \\ \delta^r &:=& \lfloor v_{1,\ldots,m}^r b - v_{m+1}^r \gamma^* \rfloor + 1 \end{eqnarray*} with $ v^r = (v_{1,\ldots,m}^r, v_{m+1}^r)$. Then $ D(|R|,-d,-\delta)$ is a valid $|R|$-disjunction for $P$ that does not contain $(x^*,y^*)$. \end{lem} \begin{proof} By definition, $\max\{ cx +hy: (x,y) \in P_I\} < \gamma^*$. Hence there is an $\epsilon > 0$ such that $cx + hy \leq \gamma^* - \epsilon $ is valid but not optimal for $P_I$. Moreover, the inequalities defining $ {\mathrm{proj}_X}\,(P_{\gamma^* - \epsilon})$ have left hand sides $ v_{1,\ldots,m}^r A - v_{m+1}^r c $ and right hand sides $v_{1,\ldots,m}^r b - v_{m+1}^r (\gamma^* - \epsilon)$. The set $ {\mathrm{proj}_X}\,(P_{\gamma^* - \epsilon})$ contains no integer points, so for all $x \in {\mathbb Z}^p$ there is an $r \in R$ with \begin{equation*} d^r x > v_{1,\ldots,m}^r b - v_{m+1}^r \gamma^*. \end{equation*} It follows that the polyhedron $ \{x \in {\mathbb R}^p: dx \leq \delta\} $ contains no integer point in its interior. This is equivalent to the set $ D(|R|,-d,-\delta)$ being a valid $|R|$-disjunction. Moreover, as the right hand side of each defining hyperplane of the projection has been enlarged in the definition of $\delta$, it is obvious that $(x^*,y^*)$ is not contained in the disjunction. This proves the lemma. We remark that the above definitions and the proof are similar to \autoref{lem:every}. \end{proof} So we have found a $k$-disjunction that can be used to cut off the current LP solution $(x^*,y^*)$. As mentioned at the beginning of this section, we want to cut along the vector $(c \ h)$. We can now do this using the right hand side $\delta$ of the disjunction $D$.
Since for $ v_{m+1}^r > 0$ the value of $\delta^r$ depends on the objective function value, we can compute the objective function value that corresponds to the value of $\delta^r$ obtained by the rounding operation. The inequalities of $D$ whose right hand sides $\delta^r$ are independent of the value of $\gamma$ can be omitted in these considerations. Taking the maximum of the related objective function values over all constraints gives us a valid cut along the vector $(c \ h)$. As the disjunction does not contain $x^*$, we can ensure that the current solution is cut off. \begin{thm}\label{thm:Rdiscut} Let $P,(x^*,y^*),(c \ h),\gamma^*, {\mathrm{proj}_X}\,(P_{\gamma^*})$ and $ D(|R|,-d,-\delta)$ be as defined above. For $r \in R$ with $v_{m+1}^r > 0$ let \begin{equation*} \gamma^r := \frac{\delta^r - v_{1,\ldots,m}^r b }{ - v_{m+1}^r} \end{equation*} and $\widehat \gamma = \max\{\gamma^r: r \in R, \, v_{m+1}^r > 0\}$. Then $cx + hy \leq \widehat \gamma $ is valid for $P_I$ and $cx^* + hy^* > \widehat \gamma$. \end{thm} \begin{proof} The validity of the inequality $cx + hy \leq \widehat \gamma $ follows directly from \autoref{lem:Rdis}, as it is by definition a disjunctive cut according to $ D(|R|,-d,-\delta)$. Similarly it follows that $cx^* + hy^* > \widehat \gamma$. \end{proof} Having finished the derivation of the $k$-disjunctive cut, we add some remarks. Using the projection as $k$-disjunction, we solve the problem of how to find a suitable $k$-disjunction for cutting in general. This concerns both the selection of the number $k$ and the selection of the defining hyperplanes of the disjunction. Moreover, we have seen in the last subsections that using information about the projection can be useful. On the other hand, the projection that we use corresponds to the prescribed cutting vector. So the selection of a suitable $k$-disjunction is partially shifted to the selection of the cutting vector. For example, it remains
open how to choose cutting vectors to get deep cuts in general. However, we will see in the next section that, for solving a given MILP, this approach leads to a finite algorithm if we use the objective function vector. \section{Algorithm}\label{sec:algo} We now turn to an algorithmic application of the previous results and present an exact algorithm that solves a bounded MILP in finite time. It is based on a sequence of mixed integer Gomory cuts interleaved with certain $k$-disjunctive cuts, which are required as discussed in \autoref{ssec:finite}. The $k$-disjunctive cuts we use here are similar to the ones introduced in \autoref{ssec:compkcuts}, using the objective function as the vector along which we cut. As the assumptions made there for the $k$-disjunctive cuts are in general not satisfied, we have to make some modifications: we will define $k$-disjunctive cuts over general polyhedra $P$ for an arbitrary cutting vector $(c \ h)$. We now discuss the details of the generalization. It is clear that the equalities \eqref{eq:compkcut2}, \eqref{eq:compkcut3}, \eqref{eq:compkcut4} also describe the projection for $P$ being a general polyhedron and $\gamma^*$ being an arbitrary value of the objective function. Even the derivation of a valid $k$-disjunction and of a valid $k$-disjunctive cut, respectively, remains correct if the value $\gamma^*$ of the objective function $cx +hy$ is not optimal for $P_I$. However, for the application in the algorithm we will define a slightly weaker version of the $|R|$-disjunctive cut that does not always cut off the current LP solution, but can be applied more generally. We do this in the following theorem. \begin{thm}\label{thm:genRdiscut} Let $P$ be a polyhedron and let $\gamma^*$ be such that $c x + h y \leq \gamma^*$ is valid for $P_I$ but not for $P$.
Define the $|R|$-disjunction $ D(|R|,-d,-\delta)$ using the equalities \eqref{eq:compkcut1}, \eqref{eq:compkcut2}, \eqref{eq:compkcut3}, \eqref{eq:compkcut4} with \begin{eqnarray*} d^r &:=& v_{1,\ldots,m}^r A - v_{m+1}^r c \\ \delta^r &:=& \lceil v_{1,\ldots,m}^r b - v_{m+1}^r \gamma^* \rceil \end{eqnarray*} and let $\widehat \gamma = \max\{\gamma^r: r \in R, \, v_{m+1}^r > 0\}$, analogously to \autoref{thm:Rdiscut}. Then $c x + h y \leq \widehat \gamma $ is a valid cutting plane for $P_I$. \end{thm} \begin{proof} By assumption, ${\mathrm{proj}_X}\,(P_{\gamma^*})$ cannot contain an integer point in its interior. So rounding up the right hand sides gives a valid $|R|$-disjunction. Therefore $c x + h y \leq \widehat \gamma $ is a valid cutting plane, analogously to the proof of \autoref{thm:Rdiscut}. \end{proof} We continue with the individual steps of the algorithm. We run the usual mixed integer Gomory algorithm until we obtain either a feasible solution of the MILP or a solution of the LP relaxation with a lower objective function value. This happens in finite time by \autoref{cor:gomory} if we restrict ourselves to polytopes. If the objective function value has decreased, we can apply \autoref{thm:genRdiscut} and compute a valid $|R|$-disjunctive cut using the objective function as the vector along which we cut. Now we can apply the Gomory algorithm to the modified program again, until either a feasible solution is found or the objective function value decreases, and use \autoref{thm:genRdiscut} again. In this way we get an algorithm that terminates finitely with an optimal solution to the given MILP or detects infeasibility. The formal algorithm is stated in \autoref{alg:exact}.
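The $\widehat\gamma$ computation of \autoref{thm:genRdiscut} is easy to sketch numerically; the following illustration (the helper name and the plain-tuple encoding are ours) uses the data of the first example of this section, where the extreme rays of the relevant cone are known explicitly:

```python
import math
from fractions import Fraction as F

def disjunctive_bound(rays, b, gamma_star):
    """Sketch of the gamma-hat computation: for every extreme ray v with
    v_{m+1} > 0, round up the projected right hand side,
    delta = ceil(v.b - v_{m+1} * gamma_star), and convert it back to an
    objective value gamma = (delta - v.b) / (-v_{m+1}); the maximum over
    all such rays is a valid bound for P_I."""
    gammas = []
    for v in rays:
        vm1 = v[-1]                      # multiplier of the objective row
        if vm1 <= 0:
            continue                     # rhs independent of gamma: omitted
        vb = sum(vi * bi for vi, bi in zip(v[:-1], b))
        delta = math.ceil(vb - vm1 * gamma_star)
        gammas.append(F(delta - vb, -vm1))
    return max(gammas)

# data of the first example below: extreme rays of
# {v >= 0 : v1 + v2 + v3 - v4 = 0}, rhs b, current LP value gamma* = 2/5
rays = [(1, 0, 0, 1), (0, 1, 0, 1), (0, 0, 1, 1)]
b = (0, 0, 2)
print(disjunctive_bound(rays, b, F(2, 5)))   # -> 0, i.e. the cut y <= 0
```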
\begin{algorithm}\caption{Exact cutting plane algorithm}
\label{alg:exact}
\begin{algorithmic}[1]
\State \textbf{Input:} bounded MILP \eqref{eq:MILP}
\State \textbf{Output:} ``optimal solution $(x^*,y^*)$'' or ``problem infeasible'' if no solution exists;
\State
\State $(x^*, y^*) := {\mathrm{argmax}}\,\{cx + hy: (x,y) \in P\}$;
\State $ \gamma^* := \max\{cx + hy: (x,y) \in P\}$;
\State
\If{$ P = \emptyset$}
\State ``problem infeasible''; \textbf{break}
\EndIf
\If{$ x^* \in {\mathbb Z}^p$}
\State ``optimal solution $ (x^*, y^* )$''; \textbf{break}
\EndIf
\State
\While{$x^* \not\in {\mathbb Z}^p$}
\State $ \gamma := \gamma^*$;
\While{$\gamma^* = \gamma$}
\State Compute Gomory cut $ \alpha^1 x + \alpha^2 y \leq \beta$ to $P, (x^*, y^*)$ by the least index rule;
\State $ P := P \cap \{(x,y): \alpha^1 x + \alpha^2 y \leq \beta\}$;
\State $( x^*, y^*) := {\mathrm{argmax}}\,\{cx + hy: (x,y) \in P\}$;
\State $\gamma^* := \max\{cx + hy: (x,y) \in P\}$;
\If{$ P = \emptyset$}
\State ``problem infeasible''; \textbf{break}
\EndIf
\If{$ x^* \in {\mathbb Z}^p$}
\State ``optimal solution $( x^*, y^*)$''; \textbf{break}
\EndIf
\EndWhile
\State
\State Compute $\widehat \gamma = \max\{\gamma^r : r \in R, v_{m+1}^r > 0 \}$ according to \autoref{thm:genRdiscut} for $P, (c \ h ), \gamma^*$;
\State $\gamma^* := \widehat \gamma$;
\EndWhile
\State
\end{algorithmic}
\end{algorithm}
\begin{thm}\label{thm:finalg} Let a bounded MILP \eqref{eq:MILP} be given. Then \autoref{alg:exact} either finds an optimal solution or detects infeasibility in a finite number of steps. \end{thm} \begin{proof} The proof follows immediately from the following two facts: every inner while loop, lines (16) to (27), has only finitely many iterations by \autoref{cor:gomory}, as $P$ is bounded by assumption.
Similarly, the outer while loop, lines (14) to (31), has only finitely many iterations, as the number of possible different values $\widehat \gamma$ is finite. \end{proof} Before we discuss the algorithm further, we give two examples. We start by revisiting \autoref{exa:cks}: \begin{exa} Let again the MILP \begin{eqnarray*} && \max y \\ && -x_1 + y \leq 0 \\ && -x_2 + y \leq 0 \\ && x_1 + x_2 + y \leq 2 \\ && x_1, x_2 \in {\mathbb Z} \end{eqnarray*} with the optimal solution $(\frac{2}{3},\frac{2}{3},\frac{2}{3})$ of the LP relaxation be given. The mixed integer Gomory cuts according to $x_1$ and $x_2$ are given by $ -x_1 + 2 y \leq 0 $ and $ -x_2 + 2 y \leq 0 $, with the new LP solution $ (\frac{4}{5},\frac{4}{5}, \frac{2}{5})$. As the value of the objective function has decreased, we compute $\widehat \gamma$ as in \autoref{thm:genRdiscut}. The extreme rays of the cone $\{v \in {\mathbb R}^4: (\begin{array}{cccc} 1 & 1 & 1 & -1 \end{array})\,v = 0, \ v \geq 0\}$ are the three vectors \begin{equation*} (1,0,0,1),(0,1,0,1),(0,0,1,1), \end{equation*} so the projection ${\mathrm{proj}_X}\, (P_\gamma)$ is given by \begin{eqnarray*} -x_1 &\leq& 0 - \gamma\\ -x_2 &\leq& 0 - \gamma \\ x_1 + x_2 &\leq& 2 - \gamma. \end{eqnarray*} Inserting the current value $ \gamma = \frac{2}{5}$ of the objective function and rounding gives \\ $\max_{r =1,2,3} \gamma^r = \max\{0,0,0\} = 0$. After applying the related cut $ y \leq 0$ we get as new LP solution the feasible point $(2,0,0)$, and the algorithm stops with an optimal solution. \end{exa} Second, we show how the algorithm works for the example of Owen and Mehrotra \cite{SM01}. For this ILP the usual mixed integer Gomory algorithm does not converge to the optimum. \begin{exa} Let the ILP \begin{eqnarray*} \max&&x_1 + x_2 \\ && 8x_1 + 12x_2 \leq 27 \\ && 8x_1 + 3 x_2 \leq 18 \\ && x_1, x_2 \geq 0 \\ && x_1, x_2 \in {\mathbb Z} \end{eqnarray*} with the initial LP solution $(\frac{15}{8}, 1) $ be given.
After applying the first possible cut, to $x_1$, the value of the objective function decreases and we can go to the second step of the algorithm. As we have an ILP, we have $ {\mathrm{proj}_X}\,(P_{\gamma}) = P_{\gamma} $ with $ P_{\gamma} $ given by \begin{eqnarray*} - x_1 - x_2 &\leq& -\gamma \\ 8x_1 + 12x_2&\leq& 27 \\ 8x_1 + 3 x_2 &\leq& 18 \\ x_1, x_2 &\geq& 0. \end{eqnarray*} By rounding we finally get the valid cut $ x_1 + x_2 \leq 2 $, which corresponds to the optimal objective function value. The result of this example is typical of applying the algorithm to ILPs. In this case we have to assume that all input data are integral, and the $k$-disjunctive cut along the objective function reduces to the Chv\'atal--Gomory cut for the objective function vector. \end{exa} To conclude, we discuss the algorithm. We have seen in the last example that for an ILP the $k$-disjunctive cut reduces to an integer Gomory cut for the objective function. So in this case the whole algorithm can be seen as a variant of the pure integer Gomory algorithm. The crucial fact for finite convergence of the integer algorithm is the possibility of adding cuts both to the objective function and to each variable that is not integral. Using $k$-disjunctive cuts along the objective function, we can now add such cuts in the case of a MILP as well. We thus obtain a convergent algorithm in analogy to the integer case. Of course, the complex part of the algorithm consists in computing the $k$-disjunctive cut, as the number $|R|$ of extreme rays $v^r$ of the cone $Q$ grows exponentially. So an efficient algorithm for computing the extreme rays of the related cone is required. Moreover, we have to ensure that the computed rays satisfy the integrality constraints, i.e.\ are contained in the group $ \mathcal G_{P_{\gamma^*}}$. For this reason we may assume in practical applications that the coefficients of the matrix $A$ and of the vector $c$ are integer.
Then the integrality constraints are satisfied if all of the extreme rays $v^r$ are integer. However, we will not deal with this issue further here, but refer to the papers of Henk and Weismantel \cite{HW96} and of Hemmecke \cite{Hem06} and the references therein. They state several algorithms for this and for the related problem of computing Hilbert bases of polyhedral cones. Finally, we give a further interpretation of the algorithm. To this end we assume that the feasible domain $P$ is full-dimensional and bounded. One can see that in this case we can always choose an optimal solution of the MILP such that $q$ defining inequalities of $P$ are active. So the solution is contained in a $ (p+q) - q = p $-dimensional face of $P$. Therefore we can solve the MILP by solving each of the related $p$-dimensional subproblems and taking the best solution. Moreover, the set of feasible solutions in each $p$-dimensional face is discrete in general, so solving a MILP over a $p$-dimensional face can be interpreted as solving an ILP, as we could apply a suitable affine transformation. This means that solving a MILP can be seen as solving several ILPs in parallel. In particular, every valid cutting plane for $P_I$ is also valid for each of the discrete subproblems. Therefore we need information about the related discrete subproblems if we want to generate strong valid cuts. As the number of $p$-dimensional faces of $P$ grows exponentially, this interpretation also gives another argument for why we need $k$-disjunctive cuts with an exponential number of defining disjunctive hyperplanes to solve general MILPs. Within the algorithm, the $p$-dimensional subproblems can be found in the facets of the polyhedron ${\mathrm{proj}_X}\,(P_{\gamma^*})$, where the value of the right hand side $\delta^r$ can be related to the current objective function value of the subproblem. \end{document}
\begin{document} \title{Vacuum Landscaping: Cause of Nonlocal Influences without Signalling} \author{Gerhard \surname{Grössing}} \email[Corresponding author: ]{[email protected]} \homepage{http://www.nonlinearstudies.at} \selectlanguage{british} \author{Siegfried \surname{Fussy}} \author{Johannes \surname{Mesa Pascasio}} \author{Herbert \surname{Schwabl}} \affiliation{Austrian Institute for Nonlinear Studies, Akademiehof, Friedrichstr.~10, 1010 Vienna, Austria } \begin{abstract} In the quest for an understanding of nonlocality with respect to an appropriate ontology, we propose a \textquotedblleft cosmological solution\textquotedblright. We assume that from the beginning of the universe each point in space has been the location of a scalar field representing a zero-point vacuum energy that nonlocally vibrates at a vast range of different frequencies across the whole universe. A quantum, then, is a nonequilibrium steady state in the form of a \textquotedblleft bouncer\textquotedblright{} coupled resonantly to one of those (particle type dependent) frequencies, in remote analogy to the bouncing oil drops on an oscillating oil bath as in Couder's experiments. A major difference to the latter analogy is given by the nonlocal nature of the vacuum oscillations. We show with the examples of double- and $n$-slit interference that the assumed nonlocality of the distribution functions alone suffices to derive the de~Broglie--Bohm guiding equation for $N$ particles with otherwise purely classical means. In our model, no influences from configuration space are required, as everything can be described in 3-space. Importantly, the setting up of an experimental arrangement limits and shapes the forward and osmotic contributions and is described as vacuum landscaping.
\begin{lyxgreyedout}
\global\long\def\VEC#1{\mathbf{#1}}
\global\long\def\d{\,\mathrm{d}}
\global\long\def\e{\mathrm{e}}
\global\long\def\i{\mathrm{i}}
\global\long\def\meant#1{\left<#1\right>}
\global\long\def\meanx#1{\overline{#1}}
\global\long\def\p{\partial}
\end{lyxgreyedout}
\end{abstract}
\maketitle
\section{Introduction: Quantum Mechanics without Wavefunctions\label{sec:introduction}}
``Emergent Quantum Mechanics'' stands for the idea that quantum mechanics is based on a more encompassing deeper level theory. This counters the traditional belief, usually expressed in the context of orthodox Copenhagen-type quantum mechanics, that quantum theory is an ``ultimate'' theory whose main features will prevail for all time and will be applicable to all questions of physics. Note, for example, that even in more recent approaches to spacetime, the concept of an ``emergent spacetime'' is introduced as a description even of space and time emerging from basic quantum mechanical entities. This, of course, need not be so, considering the fact that there is ``plenty of room at the bottom'', i.e.\ as Feynman implied, between present-day resolutions and minimally possible times and distances, which could in principle be way below resolutions reasonably argued about in present times (i.e.\ on Planck scales).
One of the main attractive features of the de~Broglie\textendash Bohm interpretation of the quantum mechanical formalism, and of Bohmian mechanics as well, lies in the possibility to extend its domain into space and/or time resolutions where modified behaviours different from quantum mechanical ones may be expected. In other words, there may be new physics involved that would require an explicitly more encompassing theory than quantum mechanics, i.e.\ a deeper level theory. Our group's approach, which we pursued throughout the last 10 years, is characterized by the search for such a theory under the premise that even for nonrelativistic quantum mechanics, the Schrödinger equation cannot be an appropriate starting point, since the wavefunction is still lacking a firm theoretical basis and its meaning is generally not agreed upon. For a similar reason, also the de~Broglie\textendash Bohm theory cannot be our starting point, as it is based on the Schrödinger equation and the use of the wavefunction to begin with. Rather, we aim at an explicit ansatz for a deeper level theory without wavefunctions, from which the Schrödinger equation, or the de~Broglie\textendash Bohm guiding equation, can be derived. We firmly believe that we have accomplished this and we can now proceed to study consequences of the approach beyond orthodox expectations. Throughout recent years, apart from our own model, several approaches to a quantum mechanics without wavefunctions have been proposed~\cite{Deckert.2007quantum,Poirier.2010bohmian,Poirier.2012trajectory-based,Schiff.2012communication:,Hall.2014quantum}. These refer to ``many classical worlds'' which provide Bohm-type trajectories with certain repulsion effects. From our realistic point of view, the true ontologies of these models, however, do not become apparent. So let us turn to our model. As every physical theory is based on metaphysical assumptions, we must make clear what our assumptions are. Here they are. 
We propose a ``cosmological solution'' in that the Big Bang, or any other model explaining the apparent expansion of the universe, is essentially related to the vacuum energy. (The latter may constitute what is called the dark energy, but we do not need to specify this here.) We assume that from the beginning of the universe each point in space has been the location of a scalar field representing a zero-point vacuum energy that vibrates at a vast range of different frequencies across the whole universe. More specifically, we consider the universe as an energetically open system where the vacuum energy not only drives expansion, but also each individual ``particle'' oscillation $\omega=E/\hbar$ in the universe. In order to maintain a particular frequency, any such oscillator must be characterized by a throughput of energy external to it. In this regard, we have time and again employed the analogy of Couder's experiments with bouncing oil drops on a vibrating bath~\cite{Couder.2005,Couder.2006single-particle,Couder.2012probabilities,Bush.2010quantum,Bush.2015new,Bush.2015pilot-wave}: the bouncer/particle is always in resonant interaction with a relevant environment. Our model, though also largely classical, has a very different ontology from the ``many classical worlds'' one. We consider \emph{one} ``superclassical'' world instead: a purely classical world plus ``cosmological nonlocality'', i.e.\ a nonlocal bath for every oscillator/particle due to the all-pervading vacuum energy, which \textendash{} mostly in the context of quantum mechanics \textendash{} is called the zero-point energy. So, it is the one classical world together with the fluctuating environment related to the vacuum energy that enters our definition of a quantum as an emergent system.
The latter consists of a bouncer and an undulatory/wave-like nonlocal environment defined by proper boundary conditions.\footnote{As an aside we note that this is not related to de~Broglie's ``nonlinear wave mechanics''~\cite{DeBroglie.1960book}, as there the nonlinear wave, with the particle as soliton-like singularity, is considered as one ontic entity. In our case, however, we speak of two separate, though synchronous elements: local oscillators and generally nonlocal oscillating fields.} In previous work, we have shown how the Schrödinger equation can be derived from a nonequilibrium sub-quantum dynamics~\cite{Groessing.2008vacuum,Groessing.2010emergence,Groessing.2011dice,Groessing.2012doubleslit}, where, in accordance with the model sketched above, the particle is considered as a steady state with a constant throughput of energy. This, then, leads to the two-momenta approach to emergent quantum mechanics which shall be outlined in the next section. \section{The Two-Momenta Approach to Emergent Quantum Mechanics\label{sec:motivation}} We consider the empirical fact that each particle of nature is attributed an energy $E=\hbar\omega$ as one of the essential features of quantum systems.\footnote{We have also presented a classical explanation for this relation from our sub-quantum model~\cite{Groessing.2011explan}, but do not need to use the details for our present purposes.} Oscillations, characterized by some typical angular frequency $\omega$, are described as properties of off-equilibrium steady-state systems. ``Particles'' can then be assumed to be dissipative systems maintained in a nonequilibrium steady-state by a permanent throughput of energy, or heat flow, respectively. The heat flow must be described by an external kinetic energy term.
Then the energy of the total system, i.e.\ of the particle and its thermal context, becomes \begin{equation} E_{{\rm tot}}=\hbar\omega+\frac{(\delta\VEC p)^{2}}{2m}\;,\label{eq:2.1} \end{equation} where $\delta\VEC p$ is an additional, fluctuating momentum component of the particle of mass $m$. We assume that an effect of said thermal context is given by detection probability distributions which are wave-like in the particle's surroundings. Thus, the detection probability density $P(\VEC x,t)$ is considered to coincide with a classical wave's intensity $I(\VEC x,t)=R^{2}(\VEC x,t)$, with $R(\VEC x,t)$ being the wave's real-valued amplitude \begin{equation} P(\VEC x,t)=R^{2}(\VEC x,t)\;,\quad\text{with normalization }\int P\,\mathrm{d}^{n}x=1\;.\label{eq:2.2} \end{equation} In ref.~\cite{Groessing.2008vacuum}, we combine some results of nonequilibrium thermodynamics with classical wave mechanics. We propose that the many microscopic degrees of freedom associated with the hypothesized sub-quantum medium can be recast into the emergent macroscopic properties of the wave-like behaviour on the quantum level. Thus, for the relevant description of the total system one no longer needs the full phase space information of all microscopic entities, but only the emergent particle coordinates. For implementation, we model a particle as being surrounded by a heat bath, i.e.\ a reservoir that is very large compared to the small dissipative system, such that the momentum distribution in this region is given by the usual Maxwell\textendash Boltzmann distribution. This corresponds to a ``thermostatic'' regulation of the reservoir's temperature, which is equivalent to the statement that the energy lost to the thermostat can be regarded as heat.
Thus, one can formulate a \textit{proposition of emergence} \cite{Groessing.2008vacuum} providing the equilibrium-type probability (density) ratio \begin{equation} \frac{P(\VEC x,t)}{P(\VEC x,0)}=\mathrm{e}^{-\frac{\Delta Q(t)}{kT}}\;,\label{eq:2.3} \end{equation} with $k$ being Boltzmann's constant, $T$ the reservoir temperature, and $\Delta Q(t)$ the heat that is exchanged between the particle and its environment. Equations (\ref{eq:2.1}), (\ref{eq:2.2}), and (\ref{eq:2.3}) are the only assumptions necessary to derive the Schrödinger equation from (modern) classical mechanics. We need to employ only two additional well-known results. The first is given by Boltzmann's formula for the slow transformation of a periodic motion (with period $\tau=2\pi/\omega$) upon application of a heat transfer $\Delta Q$. This is needed as we deal with an oscillator of angular frequency $\omega$ in a heat bath $Q$, and a change in the vacuum surroundings of the oscillator will come as a heat transfer $\Delta Q$. The latter is responsible for a change $\delta S$ of the action function $S$ representing the effect of the vacuum's ``zero-point'' fluctuations.
With the action function $S=\int\left(E_{{\rm kin}}-V\right)\,\mathrm{d} t$, the relation between heat and action was first given by Boltzmann~\cite{Boltzmann.1866uber}, \begin{equation} \Delta Q(t)=2\omega[\delta S(t)-\delta S(0)]\;.\label{eq:2.4} \end{equation} Finally, the requirement that the average kinetic energy of the thermostat equals the average kinetic energy of the oscillator is given, for each degree of freedom, by \begin{equation} \frac{kT}{2}=\frac{\hbar\omega}{2}\;.\label{eq:2.5} \end{equation} Combining these two results, Eqs.~(\ref{eq:2.4}) and (\ref{eq:2.5}), with~(\ref{eq:2.3}) one obtains \begin{equation} P(\VEC x,t)=P(\VEC x,0)\,\mathrm{e}^{-\frac{2}{\hbar}[\delta S(\VEC x,t)-\delta S(\VEC x,0)]}\;,\label{eq:2.6} \end{equation} from which follows the expression for the momentum fluctuation $\delta\VEC p$ of (\ref{eq:2.1}) as \begin{equation} \delta\VEC p(\VEC x,t)=\nabla(\delta S(\VEC x,t))=-\frac{\hbar}{2}\frac{\nabla P(\VEC x,t)}{P(\VEC x,t)}\;.\label{eq:2.7} \end{equation} This, then, provides the additional kinetic energy term for one particle as \begin{equation} \delta E_{{\rm kin}}=\frac{1}{2m}\nabla(\delta S)\cdot\nabla(\delta S)=\frac{1}{2m}\left(\frac{\hbar}{2}\frac{\nabla P}{P}\right)^{2}\;.\label{eq:2.8} \end{equation} Thus, writing down a classical action integral for $N$ particles in $m$-dimensional space, including this new term for each of them, yields (with external potential $V$) \begin{equation} A=\int L\,\mathrm{d}^{m}x\,\mathrm{d} t=\int P\left[\frac{\partial S}{\partial t}+\sum_{j=1}^{N}\frac{1}{2m_{j}}\nabla_{j}S\cdot\nabla_{j}S+\sum_{j=1}^{N}\frac{1}{2m_{j}}\left(\frac{\hbar}{2}\frac{\nabla_{j}P}{P}\right)^{2}+V\right]\,\mathrm{d}^{m}x\,\mathrm{d} t\;,\label{eq:2.9} \end{equation} where the probability density $P=P(\VEC x_{1},\VEC x_{2},\ldots,\VEC x_{N},t)$.
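Relation \eqref{eq:2.7} can be checked numerically for any concrete choice of $\delta S$, provided $P(\VEC x,0)$ is itself of the form \eqref{eq:2.6} at $t=0$. The following sketch (our illustration only: one dimension, units with $\hbar=1$, and an arbitrary smooth choice of $\delta S$) compares $\nabla(\delta S)$ with $-\tfrac{\hbar}{2}\nabla P/P$ by central differences:

```python
import math

hbar = 1.0   # illustrative units with hbar = 1

def dS(x, t):                 # arbitrary smooth choice for delta S(x, t)
    return 0.5 * x * x / (1.0 + t)

def P(x, t):                  # density implied by Eq. (2.6); P(x, 0) is
    # chosen consistently as exp(-(2/hbar) * delta S(x, 0))
    P0 = math.exp(-(2.0 / hbar) * dS(x, 0.0))
    return P0 * math.exp(-(2.0 / hbar) * (dS(x, t) - dS(x, 0.0)))

def ddx(f, x, h=1e-6):        # central difference approximation
    return (f(x + h) - f(x - h)) / (2.0 * h)

x, t = 0.7, 0.3
lhs = ddx(lambda s: dS(s, t), x)                         # nabla(delta S)
rhs = -0.5 * hbar * ddx(lambda s: P(s, t), x) / P(x, t)  # -(hbar/2) nabla P/P
print(lhs, rhs)   # both equal x/(1+t) up to discretization error
```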
With the definition of forward and osmotic velocities, respectively, \begin{equation} \VEC v_{j}:=\frac{\VEC p_{j}}{m_{j}}=\frac{\nabla_{j}S}{m_{j}}\qquad\textrm{ and }\qquad\VEC u_{j}:=\frac{\delta\VEC p_{j}}{m_{j}}=-\frac{\hbar}{2m_{j}}\frac{\nabla_{j}P}{P},\label{eq:2.9b} \end{equation} one can rewrite (\ref{eq:2.9}) as \begin{equation} A=\int L\,\mathrm{d}^{m}x\,\mathrm{d} t=\int P\left[\frac{\partial S}{\partial t}+V+\sum_{j=1}^{N}\frac{m_{j}}{2}\VEC v_{j}^{2}+\sum_{j=1}^{N}\frac{m_{j}}{2}\VEC u_{j}^{2}\right]\,\mathrm{d}^{m}x\,\mathrm{d} t\;.\label{eq:2.9c} \end{equation} This can be considered as the basis for our approach with two momenta, i.e.\ the forward momentum $m\VEC v$ and the osmotic momentum $m\VEC u$, respectively. At first glance, the Lagrangian in Eq.~(\ref{eq:2.9c}) looks completely classical, with two kinetic energy terms per particle instead of one. However, due to the particular nature of the osmotic momentum as given in Eq.~(\ref{eq:2.9b}), nonlocal influences are introduced: even at long distances from the particle location, where the particle's contribution to $P$ is practically negligible, an expression of the form $\frac{\nabla_{j}P}{P}$ may be large and immediately affects the whole fluctuating environment. This is why the osmotic variant of the kinetic energy makes all the difference to usual classical mechanics, or, in other words, is the basis for quantum mechanics.
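The nonlocal reach of the osmotic term can be made concrete with a toy density (an illustrative choice of ours: $\hbar=m=1$ and two unit-width Gaussian humps a distance $2a$ apart): $\nabla P/P$, and hence $\VEC u$, remains of order one even where $P$ itself is vanishingly small.

```python
import math

hbar, m, a = 1.0, 1.0, 10.0     # illustrative units; humps at +/- a

def P(x):                        # two far-apart unit-width Gaussian humps
    return math.exp(-(x - a) ** 2) + math.exp(-(x + a) ** 2)

def u(x):                        # osmotic velocity  u = -(hbar/2m) P'/P
    dP = (-2.0 * (x - a) * math.exp(-(x - a) ** 2)
          - 2.0 * (x + a) * math.exp(-(x + a) ** 2))
    return -hbar / (2.0 * m) * dP / P(x)

# midway between the humps P is astronomically small, yet u is of order one:
print(P(5.0), u(5.0))   # ~1.4e-11 and ~-5.0
```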
Introducing now the Madelung transformation \begin{equation} \psi=R\;{\rm e}^{\frac{{\rm i}}{\hbar}S}\;,\label{eq:2.10} \end{equation} where $R=\sqrt{P}$ as in (\ref{eq:2.2}), one has, with bars denoting averages, \begin{equation} \overline{\left|\frac{\nabla_{j}\psi}{\psi}\right|^{2}}:=\int\,\mathrm{d}^{m}x\,\mathrm{d} t\left|\frac{\nabla_{j}\psi}{\psi}\right|^{2}=\overline{\left(\frac{1}{2}\frac{\nabla_{j}P}{P}\right)^{2}}+\overline{\left(\frac{\nabla_{j}S}{\hbar}\right)^{2}}\;,\label{eq:2.11} \end{equation} and one can rewrite (\ref{eq:2.9}) as \begin{equation} A=\int L\,\mathrm{d}^{m}x\,\mathrm{d} t=\int\,\mathrm{d}^{m}x\,\mathrm{d} t\left[|\psi|^{2}\left(\frac{\partial S}{\partial t}+V\right)+\sum_{j=1}^{N}\frac{\hbar^{2}}{2m_{j}}|\nabla_{j}\psi|^{2}\right]\;.\label{eq:2.12} \end{equation} Thus, with the identity $|\psi|^{2}\frac{\partial S}{\partial t}=-\frac{{\rm i}\hbar}{2}(\psi^{*}\dot{\psi}-\dot{\psi}^{*}\psi)$, one obtains the familiar Lagrange density \begin{equation} L=-\frac{{\rm i}\hbar}{2}(\psi^{*}\dot{\psi}-\dot{\psi}^{*}\psi)+\sum_{j=1}^{N}\frac{\hbar^{2}}{2m_{j}}\nabla_{j}\psi\cdot\nabla_{j}\psi^{*}+V\psi^{*}\psi\;,\label{eq:2.13} \end{equation} from which by the usual procedures one arrives at the $N$-particle Schrödinger equation \begin{equation} {\rm i}\hbar\frac{\partial\psi}{\partial t}=\left(-\sum_{j=1}^{N}\frac{\hbar^{2}}{2m_{j}}\nabla_{j}^{2}+V\right)\psi\;.\label{eq:2.14} \end{equation} Note also that from (\ref{eq:2.9}) one obtains upon variation in $P$ the modified Hamilton\textendash Jacobi equation familiar from the de~Broglie\textendash Bohm interpretation, i.e.
\begin{equation} \frac{\partial S}{\partial t}+\sum_{j=1}^{N}\frac{(\nabla_{j}S)^{2}}{2m_{j}}+V(\VEC x_{1},\VEC x_{2},\dots,\VEC x_{N},t)+U(\VEC x_{1},\VEC x_{2},\ldots,\VEC x_{N},t)=0\;,\label{eq:2.15} \end{equation} where $U$ is known as the ``quantum potential'' \begin{equation} U(\VEC x_{1},\VEC x_{2},\ldots,\VEC x_{N},t)=\sum_{j=1}^{N}\frac{\hbar^{2}}{4m_{j}}\left[\frac{1}{2}\left(\frac{\nabla_{j}P}{P}\right)^{2}-\frac{\nabla_{j}^{2}P}{P}\right]=-\sum_{j=1}^{N}\frac{\hbar^{2}}{2m_{j}}\frac{\nabla_{j}^{2}R}{R}\;.\label{eq:2.16} \end{equation} Moreover, with the definition of $\VEC u_{j}$ in~(\ref{eq:2.9b}) one can rewrite $U$ as \begin{equation} U=\sum_{j=1}^{N}\left[\frac{m_{j}\VEC u_{j}^{2}}{2}-\frac{\hbar}{2}(\nabla_{j}\cdot\VEC u_{j})\right]\;.\label{eq:2.18} \end{equation} However, as was already pointed out in ref.~\cite{Groessing.2008vacuum}, with the aid of (\ref{eq:2.4}) and (\ref{eq:2.6}), $\VEC u_{j}$ can also be written as \begin{equation} \VEC u_{j}=\frac{1}{2\omega_{j}m_{j}}\nabla_{j}Q\;,\label{eq:2.19} \end{equation} which thus explicitly shows its dependence on the spatial behaviour of the heat flow $\delta Q$. Insertion of (\ref{eq:2.19}) into (\ref{eq:2.18}) then provides the thermodynamic formulation of the quantum potential as \begin{equation} U=\sum_{j=1}^{N}\frac{\hbar^{2}}{4m_{j}}\left[\frac{1}{2}\left(\frac{\nabla_{j}Q}{\hbar\omega_{j}}\right)^{2}-\frac{\nabla_{j}^{2}Q}{\hbar\omega_{j}}\right]\;.\label{eq:2.20} \end{equation} As in our model particles and fields are dynamically interlocked, it would be highly misleading to picture the quantum potential in a manner similar to the classical scenario of particle plus field, where the latter can be switched on and off like an ordinary potential. Contrariwise, in our case the particle velocities/momenta must be considered as \emph{emergent}.
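The equality of the two forms of $U$ in Eq.~(\ref{eq:2.16}) follows from $P=R^{2}$, since $\nabla P/P=2\nabla R/R$ and $\nabla^{2}P/P=2(\nabla R/R)^{2}+2\nabla^{2}R/R$. The following sketch checks this identity numerically for a one-particle, one-dimensional Gaussian $P$ with analytic derivatives; the choice $\hbar=m=1$ and the Gaussian profile are assumptions made purely for illustration.

```python
import numpy as np

hbar, m = 1.0, 1.0                 # illustrative units (assumption)
x = np.linspace(-3.0, 3.0, 121)

# Gaussian density P = R^2 with R = exp(-x^2/2); all derivatives analytic
gradP_over_P = -2.0 * x            # P'/P
lapP_over_P = 4.0 * x**2 - 2.0     # P''/P
lapR_over_R = x**2 - 1.0           # R''/R

# Eq. (2.16), P-form and R-form of the quantum potential U
U_P_form = (hbar**2 / (4 * m)) * (0.5 * gradP_over_P**2 - lapP_over_P)
U_R_form = -(hbar**2 / (2 * m)) * lapR_over_R

assert np.allclose(U_P_form, U_R_form)   # both forms agree pointwise
```

For this particular density both forms reduce to $U=\frac{\hbar^{2}}{2m}(1-x^{2})$, so the check also confirms the sign and prefactor of the $R$-form.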
One can illustrate this with the situation in double-slit interference (Figure~\ref{fig:interf-2}). Considering an incoming beam of, say, electrons with wave number $\mathbf{k}$ impinging on a wall with two slits, two beams with wave numbers $\mathbf{k}_{A}$ and $\mathbf{k}_{B}$, respectively, are created, which one may denote as ``pre-determined'' quantities, resulting also in pre-determined velocities $\mathbf{v}_{\alpha}=\frac{1}{m}\hbar\mathbf{k}_{\alpha}$, $\alpha=A$ or $B$. However, the electrons are not moving in empty space, but in an undulatory environment created by the ubiquitous zero-point field ``filling'' the whole experimental setup. One therefore has to combine all the velocities/momenta at a given point in space and time in order to compute the resulting, or emergent, velocity/momentum field $\mathbf{v}_{i}=\frac{1}{m}\hbar\boldsymbol{\kappa}_{i}$, $i=1$ or $2$ (Figure~\ref{fig:interf-2}), where $i$ is a bookkeeping index not necessarily related to the particle coming from a particular slit~\cite{Fussy.2014multislit}. The relevant contributions other than the particle's forward momentum $m\mathbf{v}$ originate from the osmotic momentum $m\mathbf{u}$. The latter is well known from Nelson's stochastic theory~\cite{Nelson.1966derivation}, but its identical form has been derived by one of us from an assumed sub-quantum nonequilibrium thermodynamics~\cite{Groessing.2008vacuum,Groessing.2009origin}, as described above. As shall be shown in the next section, our model also provides an understanding and deeper-level explanation of the microphysical, causal processes involved, i.e.\ of the guiding law~\cite{Groessing.2015implications} of the de~Broglie\textendash Bohm theory. \begin{figure}[h] \centering{} \begin{minipage}[t]{0.9\columnwidth} \begin{center} \includegraphics[width=1\columnwidth]{emqm17-fig1f}\caption{{\small{}Scheme of interference at a double-slit.
Considering an incoming beam of electrons with wave number $\mathbf{k}$ impinging on a wall with two slits, two beams with wave numbers $\mathbf{k}_{A}$ and $\mathbf{k}_{B}$, respectively, are created, which one may denote as ``pre-determined'' velocities $\mathbf{v}_{\alpha}=\frac{1}{m}\hbar\mathbf{k}_{\alpha}$, $\alpha=A$ or $B$. Taking into account the influences of the osmotic momentum field $m\mathbf{u}$, one has to combine all the velocities/momenta at a given point in space and time in order to compute the resulting, or emergent, velocity/momentum field $\mathbf{v}_{i}=\frac{1}{m}\hbar\boldsymbol{\kappa}_{i}$, $i=1$ or $2$. This, then, provides the correct intensity distributions and average trajectories (lower plane). }\label{fig:interf-2}} \par\end{center} \end{minipage} \end{figure} \section{Derivation of the De Broglie\textendash Bohm Guiding Equation for \emph{$N$} Particles} Consider at first one particle in an $n$-slit system. In quantum mechanics, as well as in our emergent quantum mechanics approach, one can write down a formula for the total intensity distribution $P$ which is very similar to the classical formula. For the general case of $n$ slits, it holds with phase differences $\varphi_{ii'}=\varphi_{i}-\varphi_{i'}$ between the slits $i$, $i'$ that \begin{equation} P=\sum_{i=1}^{n}\left(P_{i}+\sum_{i'=i+1}^{n}2R_{i}R_{i'}\cos\varphi_{ii'}\right),\label{eq:Sup2.1} \end{equation} where the phase differences are defined over the whole domain of the experimental setup.
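Eq.~(\ref{eq:Sup2.1}) is precisely the expansion of the superposed intensity $\bigl|\sum_{i}R_{i}{\rm e}^{{\rm i}\varphi_{i}}\bigr|^{2}$, which can be verified directly. The sketch below does this for $n=3$ with arbitrary toy amplitudes and phases (the random values are an assumption made only for illustration).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
R = rng.uniform(0.5, 2.0, n)            # channel amplitudes (toy values)
phase = rng.uniform(0.0, 2 * np.pi, n)  # channel phases (toy values)

# Eq. (Sup2.1): P = sum_i ( P_i + sum_{i'>i} 2 R_i R_i' cos(phi_i - phi_i') )
P = 0.0
for i in range(n):
    P += R[i] ** 2                       # P_i = R_i^2
    for ip in range(i + 1, n):
        P += 2 * R[i] * R[ip] * np.cos(phase[i] - phase[ip])

# Direct superposition: P must equal |sum_i R_i e^{i phi_i}|^2
P_direct = abs(np.sum(R * np.exp(1j * phase))) ** 2
assert np.isclose(P, P_direct)
```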
As in our model the ``particle'' is actually a bouncer in a fluctuating wave-like environment, i.e.~analogous to the bouncers of the Couder experiments, one does have some (e.g.\ Gaussian) distribution, with its centre following the Ehrenfest trajectory in the free case, but one also has a diffusion to the right and to the left of the mean path which is due just to that stochastic bouncing. Thus the total velocity field of our bouncer in its fluctuating environment is given by the sum of the forward velocity $\VEC v$ and the respective osmotic velocities $\VEC u_{\mathrm{L}}$ and $\VEC u_{\mathrm{R}}$ to the left and the right. As for any direction $\alpha$ the osmotic velocity $\VEC u_{\alpha}=\frac{\hbar}{2m}\frac{\nabla P}{P}$ does not necessarily fall off with the distance, one has long effective tails of the distributions which contribute to the nonlocal nature of the interference phenomena~\cite{Groessing.2013dice}. In sum, one has three distinct velocity (or current) channels per slit in an $n$-slit system. We have previously shown~\cite{Fussy.2014multislit,Groessing.2014relational} how one can derive the Bohmian guidance formula from our two-momenta approach. Introducing classical wave amplitudes $R(\VEC w_{i})$ and generalized velocity field vectors $\VEC w_{i}$, which represent either a forward velocity $\VEC v$ or an osmotic velocity $\VEC u$ in the direction transversal to $\VEC v$, we calculate the phase-dependent amplitude contributions of the total system's wave field projected on one channel's amplitude $R(\VEC w_{i})$ at the point $(\VEC x,t)$ in the following way. We define a \emph{relational intensity} $P(\VEC w_{i})$ as the local wave intensity in each channel $\VEC w_{i}$, recalling that there are three velocity channels per slit: $\VEC u_{\mathrm{L}}$, $\VEC u_{\mathrm{R}}$, and $\VEC v$. The sum of all relational intensities then gives the total intensity, i.e.\ the total probability density.
In an $n$-slit system, we thus obtain for the relational intensities and the corresponding currents, i.e.\ for each channel component $i$, \begin{align} P(\VEC w_{i}) & =R(\VEC w_{i})\VEC{\hat{w}}_{i}\cdot{\displaystyle \sum_{i'=1}^{3n}}\VEC{\hat{w}}_{i'}R(\VEC w_{i'})\label{eq:Proj-1}\\ \VEC J(\VEC w_{i}) & =\VEC w_{i}P(\VEC w_{i}),\qquad i=1,\ldots,3n \end{align} with unit vectors $\VEC{\hat{w}}_{i}$ and \begin{equation} \cos\varphi_{ii'}:=\VEC{\hat{w}}_{i}\cdot\VEC{\hat{w}}_{i'}\,. \end{equation} Consequently, the total intensity and current of our field read \begin{align} P_{\mathrm{tot}} & ={\displaystyle \sum_{i=1}^{3n}}P(\VEC w_{i})=\left({\displaystyle \sum_{i=1}^{3n}}\VEC{\hat{w}}_{i}R(\VEC w_{i})\right)^{2}\label{eq:Ptot6-1}\\ \VEC J_{\mathrm{tot}} & =\sum_{i=1}^{3n}\VEC J(\VEC w_{i})={\displaystyle \sum_{i=1}^{3n}}\VEC w_{i}P(\VEC w_{i}),\label{eq:Jtot6-1} \end{align} leading to the \textit{emergent total velocity} \begin{equation} \VEC v_{\mathrm{tot}}=\frac{\VEC J_{\mathrm{tot}}}{P_{\mathrm{tot}}}=\frac{{\displaystyle \sum_{i=1}^{3n}}\VEC w_{i}P(\VEC w_{i})}{{\displaystyle \sum_{i=1}^{3n}}P(\VEC w_{i})}\,,\label{eq:vtot_fin-1} \end{equation} which represents the \textit{probability flux lines}. In~\cite{Groessing.2012doubleslit,Fussy.2014multislit} we have shown with the example of $n=2$, i.e.\ a double-slit system, that Eq.~(\ref{eq:vtot_fin-1}) can equivalently be written in the form \begin{equation} \VEC v_{\mathrm{tot}}=\frac{R_{1}^{2}\VEC v_{1}+R_{2}^{2}\VEC v_{2}+R_{1}R_{2}\left(\VEC v_{1}+\VEC v_{2}\right)\cos\varphi+R_{1}R_{2}\left(\VEC u_{1}-\VEC u_{2}\right)\sin\varphi}{R_{1}^{2}+R_{2}^{2}+2R_{1}R_{2}\cos\varphi}\,.\label{eq:vtot-1} \end{equation} The trajectories or streamlines, respectively, are obtained according to $\VEC{\dot{x}}=\VEC v_{\mathrm{tot}}$ in the usual way by
integration. As we have first shown in~\cite{Groessing.2012doubleslit}, by re-inserting the expressions for forward and osmotic velocities, respectively, i.e. \begin{equation} \VEC v_{i}=\frac{\nabla S_{i}}{m}\,,\qquad\VEC u_{i}=-\frac{\hbar}{m}\frac{\nabla R_{i}}{R_{i}}\,,\label{eq:velocities} \end{equation} one immediately identifies Eq.~(\ref{eq:vtot-1}) with the Bohmian guidance formula. Naturally, employing the Madelung transformation for each slit $\alpha$ ($\alpha=1$ or $2$), \begin{equation} \psi_{\alpha}=R_{\alpha}{\rm e}^{\mathrm{i}S_{\alpha}/\hbar},\label{eq:3.14-1} \end{equation} and thus $P_{\alpha}=R_{\alpha}^{2}=|\psi_{\alpha}|^{2}=\psi_{\alpha}^{*}\psi_{\alpha}$, with $\varphi=(S_{1}-S_{2})/\hbar$, and recalling the usual trigonometric identities such as $\cos\varphi=\frac{1}{2}\left({\rm e}^{\mathrm{i}\varphi}+{\rm e}^{-\mathrm{i}\varphi}\right)$, one can rewrite the total average current immediately in the usual quantum mechanical form as \begin{equation} \begin{array}{rl} {\displaystyle \mathbf{J}_{{\rm tot}}} & =P_{{\rm tot}}\mathbf{v}_{{\rm tot}}\\ & ={\displaystyle (\psi_{1}+\psi_{2})^{*}(\psi_{1}+\psi_{2})\frac{1}{2}\left[\frac{1}{m}\left(-\mathrm{i}\hbar\frac{\nabla(\psi_{1}+\psi_{2})}{(\psi_{1}+\psi_{2})}\right)+\frac{1}{m}\left(\mathrm{i}\hbar\frac{\nabla(\psi_{1}+\psi_{2})^{*}}{(\psi_{1}+\psi_{2})^{*}}\right)\right]}\\ & ={\displaystyle -\frac{\mathrm{i}\hbar}{2m}\left[\Psi^{*}\nabla\Psi-\Psi\nabla\Psi^{*}\right]={\displaystyle \frac{1}{m}{\rm Re}\left\{ \Psi^{*}(-\mathrm{i}\hbar\nabla)\Psi\right\} ,}} \end{array}\label{eq:3.18-1} \end{equation} where $P_{{\rm tot}}=|\psi_{1}+\psi_{2}|^{2}=:|\Psi|^{2}$. Eq.~(\ref{eq:vtot_fin-1}) has been derived for one particle in an $n$-slit system.
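This identification can also be checked numerically. The sketch below evaluates the double-slit form of the emergent total velocity, Eq.~(\ref{eq:vtot-1}), on a grid for two Gaussian channels with plane-wave phases, and compares it with the Bohmian guidance law $v=\frac{\hbar}{m}\mathrm{Im}\frac{\nabla\Psi}{\Psi}$. All profiles, units, and parameter values are toy assumptions for this example, and the osmotic velocities are taken here with Nelson's sign convention $u_{i}=+\frac{\hbar}{m}\frac{\nabla R_{i}}{R_{i}}$ (the sign convention under which the two expressions coincide identically).

```python
import numpy as np

hbar, m = 1.0, 1.0                      # illustrative units (assumption)
x = np.linspace(-6.0, 6.0, 241)

def channel(x0, v):
    """Gaussian amplitude centred at x0, plane-wave phase S = m*v*x (toy profile)."""
    R = np.exp(-(x - x0) ** 2 / 2.0)
    dR = -(x - x0) * R                  # analytic dR/dx
    S = m * v * x
    dS = m * v * np.ones_like(x)        # analytic dS/dx
    return R, dR, S, dS

R1, dR1, S1, dS1 = channel(-2.0, +0.5)
R2, dR2, S2, dS2 = channel(+2.0, -0.5)

v1, v2 = dS1 / m, dS2 / m               # forward velocities v = grad(S)/m
u1 = hbar * dR1 / (m * R1)              # osmotic velocities (Nelson sign convention)
u2 = hbar * dR2 / (m * R2)
phi = (S1 - S2) / hbar

# Emergent total velocity, Eq. (vtot-1)
num = (R1**2 * v1 + R2**2 * v2
       + R1 * R2 * (v1 + v2) * np.cos(phi)
       + R1 * R2 * (u1 - u2) * np.sin(phi))
den = R1**2 + R2**2 + 2 * R1 * R2 * np.cos(phi)
v_tot = num / den

# Bohmian guidance law v = (hbar/m) Im(grad Psi / Psi), with analytic gradient
psi = R1 * np.exp(1j * S1 / hbar) + R2 * np.exp(1j * S2 / hbar)
dpsi = (dR1 + 1j * dS1 * R1 / hbar) * np.exp(1j * S1 / hbar) \
     + (dR2 + 1j * dS2 * R2 / hbar) * np.exp(1j * S2 / hbar)
v_bohm = (hbar / m) * np.imag(dpsi / psi)

assert np.allclose(v_tot, v_bohm)       # the two formulations coincide
```

Note that the denominator of Eq.~(\ref{eq:vtot-1}) is just $P_{\mathrm{tot}}=|\psi_{1}+\psi_{2}|^{2}$, the two-slit intensity of Eq.~(\ref{eq:Sup2.1}).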
However, for the spinless particles obeying the Schrödinger equation\footnote{As we do not yet have a relativistic model involving spin, our results for the many-particle case cannot account for the difference in particle statistics, i.e.\ for fermions or bosons. This will be a task for future work.} it is straightforward to extend this derivation to the many-particle case. Due to the purely additive terms in the expressions for the total current and total probability density, respectively, Eqs.~(\ref{eq:Ptot6-1}) and (\ref{eq:Jtot6-1}) become, also for \emph{$N$} particles, \begin{align} P_{\mathrm{tot,}N} & =\sum_{j=1}^{N}\left[{\displaystyle \sum_{i=1}^{3n}}P(\VEC w_{i})\right]_{j}=\sum_{j=1}^{N}\left[\left({\displaystyle \sum_{i=1}^{3n}}\VEC{\hat{w}}_{i}R(\VEC w_{i})\right)^{2}\right]_{j}\,,\\ \VEC J_{\mathrm{tot,}N} & =\sum_{j=1}^{N}\left[\sum_{i=1}^{3n}\VEC J(\VEC w_{i})\right]_{j}=\sum_{j=1}^{N}\left[{\displaystyle \sum_{i=1}^{3n}}\VEC w_{i}P(\VEC w_{i})\right]_{j}\,, \end{align} and, analogously, Eq.~(\ref{eq:vtot_fin-1}) becomes \begin{align} \VEC v_{\mathrm{tot,}N} & =\frac{\VEC J_{\mathrm{tot}}}{P_{\mathrm{tot}}}=\frac{{\displaystyle \sum_{j=1}^{N}}\left[{\displaystyle \sum_{i=1}^{3n}}\VEC w_{i}P(\VEC w_{i})\right]_{j}}{{\displaystyle \sum_{j=1}^{N}}\left[{\displaystyle \sum_{i=1}^{3n}}P(\VEC w_{i})\right]_{j}}\,, \end{align} where $\VEC w_{i}$ depends on the velocities~(\ref{eq:velocities}) with different $S_{i}$ and $R_{i}$ for every $j$.
In quantum mechanical terms the only difference now is that the currents' nabla operators have to be applied at all of the locations of the respective \emph{$N$} particles, thus providing \begin{equation} {\displaystyle \mathbf{J}_{{\rm tot}}}\left(N\right)={\displaystyle \sum_{j=1}^{N}}\frac{1}{m_{j}}{\rm Re}\left\{ \Psi^{*}\left(t\right)(-\mathrm{i}\hbar\nabla_{j})\Psi\left(t\right)\right\} , \end{equation} where $\Psi\left(t\right)$ now is the total $N$-particle wave function, whereas the flux lines are given by \begin{equation} \VEC v_{j}\left(t\right)=\frac{\hbar}{m_{j}}\mathrm{Im}\frac{\nabla_{j}\Psi\left(t\right)}{\Psi\left(t\right)}\qquad\forall j=1,\ldots,N. \end{equation} In sum, with our introduction of a relational intensity $P(\VEC w_{i})$ for channels $\VEC w_{i}$, which include sub-quantum velocity fields, we obtain the guidance formula also for $N$-particle systems in real 3-dimensional space.\textsl{\emph{ The central ingredient for this to be possible is to consider the emergence of the velocity field from the interplay of the totality of all of the system's velocity channels.}} \begin{figure}[h] \centering{} \begin{minipage}[t]{0.9\columnwidth} \begin{center} \includegraphics[width=1.0\columnwidth,height=1.0\columnwidth]{ds_fig3} \caption{Classical computer simulation of the interference pattern: intensity distribution with increasing intensity from white through yellow and orange, with trajectories (red) for two Gaussian slits, and with \textbf{large dispersion} (evolution from bottom to top; $v_{x,1}=v_{x,2}=0$).
\label{fig:3}} \par\end{center} \end{minipage} \end{figure} \begin{figure}[h] \centering{} \begin{minipage}[t]{0.9\columnwidth} \begin{center} \includegraphics[width=1.0\columnwidth,height=1.0\columnwidth]{ds_fig2a} \caption{Classical computer simulation of the interference pattern: intensity distribution with increasing intensity from white through yellow and orange, with trajectories (red) for two Gaussian slits, and with \textbf{small dispersion} (evolution from bottom to top; $v_{x,1}=-v_{x,2}$). \label{fig:2}} \par\end{center} \end{minipage} \end{figure} In Figures~\ref{fig:3} and \ref{fig:2}, trajectories (flux lines) for two Gaussian slits are shown (from ref.~\cite{Groessing.2012doubleslit}). These trajectories are in full accordance with those obtained from the Bohmian approach, as can be seen by comparison with references~\cite{Holland.1993}, \cite{Bohm.1993undivided}, and \cite{Sanz.2009context}, for example. \section{Vacuum Landscaping: Cause of nonlocal influences without signalling\label{sec:vacuum}} In the foregoing sections, we pointed out how nonlocality appears in our model. Particularly in discussing Eqs.~(\ref{eq:2.9})\textendash(\ref{eq:2.9c}), it was shown that the form of the osmotic momentum \begin{equation} m\mathbf{u}=-\frac{\hbar}{2}\frac{\nabla P}{P} \end{equation} may be responsible for relevant influences. Moreover, if one assumes a particle at some position $\VEC x$ in space, with a probability distribution $P$, the latter is a distribution around $\VEC x$ with long tails across the whole experimental setup which may be very thin but still non-zero. Then, even at locations $\VEC y$ very remote from $\VEC x$, and although the probability distribution $P$ pertaining to the far-away particle might be minuscule, it still may become effective immediately through the zero-point field.
The physical reason for bringing in nonlocality is the assumed resonant coupling of the particle(s) with fluctuations of the zero-point vacuum filling the whole experimental setup. Take, for example, a typical ``Gaussian slit'': we effectively describe $P$ by a Gaussian with long non-zero tails throughout the whole apparatus. As we have seen, in order to calculate on-screen distributions (i.e.\ total intensities) of particles which went one at a time through an $n$-slit device, one only needs a two-momentum description and a calculation which uses the totality of all relational intensities involving the relative phases determined across the whole apparatus. In general, we propose a resonant interaction of the bouncing ``particle'' with a \emph{relevant environment}.\footnote{In a similar vein, Bohm~\cite{Bohm.1980wholeness} speaks of a ``relatively independent subtotality'' of the universe, to account for the possible neglect of the ``rest of the universe'' in practical calculations.} For idealized, non-interacting particles, this relevant environment would be the whole universe, and thus the idealized prototype of the ``cosmological solution'' referred to in the introduction. For any particle in any experimental setup, however, the relevant environment is defined by the boundary conditions of the apparatus. Whereas the idealized one-particle scenario would constitute an indefinite order of vibrations with respect to the particle oscillations potentially locking in, the very building up of an experiment may represent a dynamical transition from this indefinite order to the establishment of a definite order. The latter is characterized by the emergence of standing waves between the boundaries of the apparatus (like, e.g., source and detector), to which the particle oscillations lock in.
Moreover, if an experimenter decides to change the boundary conditions (e.g., by altering the probability landscape between source and detector), such a ``switching'' establishes yet another definite order. The introduction or change of boundary conditions, which immediately affects the probability landscape as well as the forward and osmotic fields, is what we term ``vacuum landscaping''. In other words, the change of boundary conditions of an experimental arrangement constitutes the immediate transition from one cosmological solution in the relevant environment (i.e.\ within the old boundary conditions) to another (i.e.\ the new ones). The ``surfing'' bouncer/particle just locally jumps from the old to the new standing-wave solutions. This is a process that happens locally for the particle, practically instantaneously (i.e.\ within a time span $\propto\nicefrac{1}{\omega}$), and nonlocally for the standing waves, due to the very definition of the cosmological solutions. The vacuum landscape is thus nonlocally changed without the propagation of ``signals'' in a communication-theoretical sense.\footnote{It is \emph{exclusively} the latter that must be prohibited in order to avoid causal loops leading to paradoxes. See Walleczek and Grössing~\cite{Walleczek.2014non-signalling,Walleczek.2016nonlocal} for an extensive clarification of this issue.} We have, for example, discussed in some detail what happens in a double-slit experiment if one starts with one slit only, and when the particle might pass it, one opens the second slit~\cite{Groessing.2013dice,Groessing.2013joaopessoa}. In accordance with Tollaksen \textit{et~al.}~\cite{Tollaksen.2010quantum} we found that the opening of the second slit (i.e.\ a change in boundary conditions) results in an uncontrollable shift in momentum on the particle passing the first slit.
Due to its uncontrollability (or, the ``complete uncertainty'' in~\cite{Tollaksen.2010quantum}), this momentum shift cannot be used for signalling. Still, it is necessary in order to understand \textit{a posteriori} the final distributions on the screen, which would be incorrect without acknowledging said momentum kick. Similarly, Aspect-type experiments of two-particle interferometry can be understood as alterations of vacuum landscapes. Consider, for example, the case in two-particle interferometry where Alice and Bob are each equipped with an interfering device and receive one of the counter-propagating particles from their common source. If Alice, during the time of flight of the particles, changes her device by making, with suitable mirrors, one of the interferometer arms longer than the other, this constitutes an immediate switching from one vacuum landscape to another, with the standing waves of the zero-point field now reflecting the new experimental arrangement. In other words, the $P$-field has been changed nonlocally throughout the experimental setup, and therefore also all relational intensities \begin{equation} P(\VEC w_{i})=R(\VEC w_{i})\VEC{\hat{w}}_{i}\cdot{\displaystyle \sum_{i'}}\VEC{\hat{w}}_{i'}R(\VEC w_{i'}) \end{equation} involved. The latter represent the relative phase shifts $\delta\varphi_{i,i'}=\delta\arccos\VEC{\hat{w}}_{i}\cdot\VEC{\hat{w}}_{i'}$ occurring due to the switching, and this change becomes manifest also in the total probability density \begin{equation} P_{\mathrm{tot}}={\displaystyle \sum_{i}}P(\VEC w_{i})=\left({\displaystyle \sum_{i}}\VEC{\hat{w}}_{i}R(\VEC w_{i})\right)^{2}, \end{equation} with $i$ running through all channels of both Alice and Bob. The quantum mechanical nonlocal correlations thus appear without any propagation (e.g., from Alice to Bob), superluminal or otherwise.
As implied by the work of Gisin's group~\cite{Bancal.2012quantum}, this violates a ``principle of continuity'' of propagating influences from \emph{A} to \emph{B}, but its non-signalling character is still in accordance with relativity and the nonlocal correlations of quantum mechanics. Practically instantaneous vacuum landscaping by Alice and/or Bob thus ensures full agreement with the quantum mechanical predictions without the need to invoke (superluminal or other) signalling. Our model is, therefore, an example of nonlocal influencing without signalling, which was recently shown to provide a viable option for realistic modelling of nonlocal correlations~\cite{Walleczek.2014non-signalling,Walleczek.2016nonlocal}. \section{Conclusions and outlook\label{sec:conclusion}} With our two-momentum approach to an emergent quantum mechanics we have shown that one can in principle base the foundations of quantum mechanics on a deeper level that does not need wavefunctions. Still, one can derive from this new starting point, which is largely rooted in classical nonequilibrium thermodynamics, the usual nonrelativistic quantum mechanical formalism involving wavefunctions, such as the Schrödinger equation or the de~Broglie\textendash Bohm guiding law. With regard to the latter, the big advantage of our approach is that we avoid the troublesome influence from configuration space on particles in real space, which Bohm himself called ``indigestible''. Instead, in our model the guiding equation is completely understandable in real coordinate space, and is actually a rather typical consequence of the fact that the total current is the sum of all particular currents, and the total intensity, or probability density, is the sum of all relational intensities. As we are working with Schrödinger (i.e.\ spinless) particles, accounting for differences in particle statistics is still an open problem.
As shown, we can replicate quantum mechanical features exactly by subjecting classical particle trajectories to diffusive processes caused by the presence of the zero-point field, with the important property that the probability densities involved extend, however feebly, over the whole setup of an experiment. The model employs a two-momenta approach to the particle propagation, i.e.\ forward and osmotic momenta. The form of the latter has been derived without any recourse to other approaches such as Nelson's. The one thing that \emph{is} to be digested from our model is the fact that the relational intensities are nonlocally defined over the whole experimental arrangement (i.e.\ the ``relevant environment''). This lies at the bottom of our deeper-level ansatz, and it is the \emph{only} difference from an otherwise completely classical approach. We believe that this price is not too high, for we obtain a logical, realistic picture of quantum processes which is rather simple to arrive at. Nevertheless, in order to accept it, one needs to radically reconsider what an ``object'' is. We believe that it is very much in the spirit of David Bohm's thinking to direct one's attention away from a particle-centred view and consider an alternative option: that the universe is to be taken as a totality, which only under very specific and delicate experimental arrangements can be broken down to a laboratory-sized relevant environment, even if that laboratory might stretch along interplanetary distances. In our approach, the setting up of an experimental arrangement limits and shapes the forward and osmotic contributions and is described as vacuum landscaping. Accordingly, any change of the boundary conditions can be the cause of nonlocal influences throughout the whole setup, thus explaining, e.g., Aspect-type experiments.
We argue that these influences can in no way be used for signalling purposes in the communication-theoretic sense, and are therefore fully compatible with special relativity. Accepting that the vacuum fluctuations throughout the universe, or at least within such a laboratory, are a defining part of a quantum amounts to seeing any object like an ``elementary particle'' as nonlocally extended and, eventually, as exerting nonlocal influences on other particles. For anyone who can digest this, quantum mechanics is no more mysterious than classical mechanics or any other branch of physics. \begin{acknowledgments} We thank Jan Walleczek for many enlightening discussions, and the Fetzer Franklin Fund of the John E. Fetzer Memorial Trust for partial support of the current work. Also, we wish to thank Lev Vaidman and several other colleagues for stimulating discussions. We thank the latter for the exchange of viewpoints sometimes closely related to our approach, as they also become apparent in their respective works: Herman Batelaan~\cite{Batelaan.2016momentum}, Ana María Cetto and Luis de la Pe{\~n}a~\cite{Cetto.2014emerging-quantum}, Hans-Thomas Elze~\cite{Elze.2018configuration}, Basil Hiley~\cite{Hiley.2016structure}, Tim Maudlin~\cite{Maudlin.2011quantum}, Travis Norsen~\cite{Norsen.2016bohmian}, Garnet Ord~\cite{Ord.2009quantum}, and Louis Vervoort~\cite{Vervoort.2016no-go}. \end{acknowledgments} \providecommand{\href}[2]{#2}\begingroup\raggedright\begin{thebibliography}{10} \bibitem{Deckert.2007quantum} D.-A. Deckert, D.~D{\"u}rr, and P.~Pickl, ``Quantum dynamics with {B}ohmian trajectories,'' \href{http://dx.doi.org/10.1021/jp0711996}{{\em J. Phys. Chem. A} {\bfseries 111} (2007) 10325--10330}.
\bibitem{Poirier.2010bohmian} B.~Poirier, ``Bohmian mechanics without pilot waves,'' \href{http://dx.doi.org/10.1016/j.chemphys.2009.12.024}{{\em Chem. Phys.} {\bfseries 370} (2010) 4--14}. \bibitem{Poirier.2012trajectory-based} B.~Poirier, ``Trajectory-based theory of relativistic quantum particles,'' {\em pre-print} (2012), \href{http://arxiv.org/abs/1208.6260}{{\ttfamily arXiv:1208.6260 [quant-ph]}}. \bibitem{Schiff.2012communication:} J.~Schiff and B.~Poirier, ``Communication: Quantum mechanics without wavefunctions,'' \href{http://dx.doi.org/10.1063/1.3680558}{{\em J. Chem. Phys.} {\bfseries 136} (2012) 031102}, \href{http://arxiv.org/abs/1201.2382v1}{{\ttfamily arXiv:1201.2382v1 [quant-ph]}}. \bibitem{Hall.2014quantum} M.~J.~W. Hall, D.-A. Deckert, and H.~M. Wiseman, ``Quantum phenomena modeled by interactions between many classical worlds,'' \href{http://dx.doi.org/10.1103/PhysRevX.4.041013}{{\em Phys. Rev. X} {\bfseries 4} (2014) 041013}, \href{http://arxiv.org/abs/1402.6144}{{\ttfamily arXiv:1402.6144 [quant-ph]}}. \bibitem{Couder.2005} Y.~Couder, S.~Proti{\`e}re, E.~Fort, and A.~Boudaoud, ``Dynamical phenomena: {W}alking and orbiting droplets,'' \href{http://dx.doi.org/10.1038/437208a}{{\em Nature} {\bfseries 437} (2005) 208--208}. \bibitem{Couder.2006single-particle} Y.~Couder and E.~Fort, ``Single-particle diffraction and interference at a macroscopic scale,'' \href{http://dx.doi.org/10.1103/PhysRevLett.97.154101}{{\em Phys. Rev. Lett.} {\bfseries 97} (2006) 154101}. \bibitem{Couder.2012probabilities} Y.~Couder and E.~Fort, ``Probabilities and trajectories in a classical wave-particle duality,'' \href{http://dx.doi.org/10.1088/1742-6596/361/1/012001}{{\em J. Phys.: Conf. Ser.} {\bfseries 361} (2012) 012001}. \bibitem{Bush.2010quantum} J.~W.~M. Bush, ``Quantum mechanics writ large,'' \href{http://dx.doi.org/10.1073/pnas.1012399107}{{\em {PNAS}} {\bfseries 107} (2010) 17455--17456}. \bibitem{Bush.2015new} J.~W.~M.
Bush, ``The new wave of pilot-wave theory,'' \href{http://dx.doi.org/10.1063/PT.3.2882}{{\em Phys. Today} {\bfseries 68} (2015) 47--53}. \bibitem{Bush.2015pilot-wave} J.~W.~M. Bush, ``Pilot-wave hydrodynamics,'' \href{http://dx.doi.org/10.1146/annurev-fluid-010814-014506}{{\em Annu. Rev. Fluid Mech.} {\bfseries 47} (2015) 269--292}. \bibitem{DeBroglie.1960book} L.~V. P.~R. de~Broglie, {\em Non-Linear Wave Mechanics: A Causal Interpretation}. \newblock Elsevier, Amsterdam, 1960. \bibitem{Groessing.2008vacuum} G.~Gr{\"o}ssing, ``The vacuum fluctuation theorem: {E}xact {S}chr{\"o}dinger equation via nonequilibrium thermodynamics,'' \href{http://dx.doi.org/10.1016/j.physleta.2008.05.007}{{\em Phys. Lett. A} {\bfseries 372} (2008) 4556--4563}, \href{http://arxiv.org/abs/0711.4954}{{\ttfamily arXiv:0711.4954 [quant-ph]}}. \bibitem{Groessing.2010emergence} G.~Gr{\"o}ssing, S.~Fussy, J.~Mesa~Pascasio, and H.~Schwabl, ``Emergence and collapse of quantum mechanical superposition: {O}rthogonality of reversible dynamics and irreversible diffusion,'' \href{http://dx.doi.org/10.1016/j.physa.2010.07.017}{{\em Physica A} {\bfseries 389} (2010) 4473--4484}, \href{http://arxiv.org/abs/1004.4596}{{\ttfamily arXiv:1004.4596 [quant-ph]}}. \bibitem{Groessing.2011dice} G.~Gr{\"o}ssing, S.~Fussy, J.~Mesa~Pascasio, and H.~Schwabl, ``Elements of sub-quantum thermodynamics: {Q}uantum motion as ballistic diffusion,'' \href{http://dx.doi.org/10.1088/1742-6596/306/1/012046}{{\em J. Phys.: Conf. Ser.} {\bfseries 306} (2011) 012046}, \href{http://arxiv.org/abs/1005.1058}{{\ttfamily arXiv:1005.1058 [physics.gen-ph]}}. \bibitem{Groessing.2012doubleslit} G.~Gr{\"o}ssing, S.~Fussy, J.~Mesa~Pascasio, and H.~Schwabl, ``An explanation of interference effects in the double slit experiment: {C}lassical trajectories plus ballistic diffusion caused by zero-point fluctuations,'' \href{http://dx.doi.org/10.1016/j.aop.2011.11.010}{{\em Ann.
Phys.} {\bfseries 327} (2012) 421--437}, \href{http://arxiv.org/abs/1106.5994}{{\ttfamily arXiv:1106.5994 [quant-ph]}}. \bibitem{Groessing.2011explan} G.~Gr{\"o}ssing, J.~Mesa~Pascasio, and H.~Schwabl, ``A classical explanation of quantization,'' \href{http://dx.doi.org/10.1007/s10701-011-9556-1}{{{\rm e}m Found. Phys.} {\bfseries 41} (2011) 1437--1453}, \href{http://arxiv.org/abs/0812.3561}{{\ttfamily arXiv:0812.3561 [quant-ph]}}. \bibitem{Boltzmann.1866uber} L.~Boltzmann, ``{{\"U}}ber die mechanische {B}edeutung des zweiten {H}auptsatzes der {W}{\"a}rmetheorie,'' {{\rm e}m Wien. Ber.} {\bfseries 53} (1866) 195--200. \bibitem{Fussy.2014multislit} S.~Fussy, J.~Mesa~Pascasio, H.~Schwabl, and G.~Gr{\"o}ssing, ``Born's rule as signature of a superclassical current algebra,'' \href{http://dx.doi.org/10.1016/j.aop.2014.02.002}{{{\rm e}m Ann. Phys.} {\bfseries 343} (2014) 200--214}, \href{http://arxiv.org/abs/1308.5924}{{\ttfamily arXiv:1308.5924 [quant-ph]}}. \bibitem{Nelson.1966derivation} E.~Nelson, ``Derivation of the {S}chr{\"o}dinger equation from {N}ewtonian mechanics,'' \href{http://dx.doi.org/10.1103/PhysRev.150.1079}{{{\rm e}m Phys. Rev.} {\bfseries 150} (1966) 1079--1085}. \bibitem{Groessing.2009origin} G.~Gr{\"o}ssing, ``On the thermodynamic origin of the quantum potential,'' \href{http://dx.doi.org/10.1016/j.physa.2008.11.033}{{{\rm e}m Physica A} {\bfseries 388} (2009) 811--823}, \href{http://arxiv.org/abs/0808.3539}{{\ttfamily arXiv:0808.3539 [quant-ph]}}. \bibitem{Groessing.2015implications} G.~Gr{\"o}ssing, S.~Fussy, J.~Mesa~Pascasio, and H.~Schwabl, ``Implications of a deeper level explanation of the de{B}roglie--{B}ohm version of quantum mechanics,'' \href{http://dx.doi.org/10.1007/s40509-015-0031-0}{{{\rm e}m Quantum Stud.: Math. Found.} {\bfseries 2} (2015) 133--140}, \href{http://arxiv.org/abs/1412.8349}{{\ttfamily arXiv:1412.8349 [quant-ph]}}. 
\bibitem{Groessing.2013dice} G.~Gr{\"o}ssing, S.~Fussy, J.~Mesa~Pascasio, and H.~Schwabl, ``{'Systemic} nonlocality' from changing constraints on sub-quantum kinematics,'' \href{http://dx.doi.org/10.1088/1742-6596/442/1/012012}{{{\rm e}m J. Phys.: Conf. Ser.} {\bfseries 442} (2013) 012012}, \href{http://arxiv.org/abs/1303.2867}{{\ttfamily arXiv:1303.2867 [quant-ph]}}. \bibitem{Groessing.2014relational} G.~Gr{\"o}ssing, S.~Fussy, J.~Mesa~Pascasio, and H.~Schwabl, ``Relational causality and classical probability: {G}rounding quantum phenomenology in a superclassical theory,'' \href{http://dx.doi.org/10.1088/1742-6596/504/1/012006}{{{\rm e}m J. Phys.: Conf. Ser.} {\bfseries 504} (2014) 012006}, \href{http://arxiv.org/abs/1403.3295}{{\ttfamily arXiv:1403.3295 [quant-ph]}}. \bibitem{Holland.1993} P.~R. Holland, {{\rm e}m The Quantum Theory of Motion}. \newblock Cambridge University Press, Cambridge, {UK}, 1993. \bibitem{Bohm.1993undivided} D.~Bohm and B.~J. Hiley, {{\rm e}m The Undivided Universe: An Ontological Interpretation of Quantum Theory}. \newblock Routledge, London, {UK}, 1993. \bibitem{Sanz.2009context} {\'A}.~S. Sanz and F.~Borondo, ``Contextuality, decoherence and quantum trajectories,'' \href{http://dx.doi.org/10.1016/j.cplett.2009.07.061}{{{\rm e}m Chem. Phys. Lett.} {\bfseries 478} (2009) 301--306}, \href{http://arxiv.org/abs/0803.2581}{{\ttfamily arXiv:0803.2581 [quant-ph]}}. \bibitem{Bohm.1980wholeness} D.~Bohm, {{\rm e}m Wholeness and the Implicate Order}. \newblock Routledge, London, {UK}, 1980. \bibitem{Walleczek.2014non-signalling} J.~Walleczek and G.~Gr{\"o}ssing, ``The non-signalling theorem in generalizations of {B}ell's theorem,'' \href{http://dx.doi.org/10.1088/1742-6596/504/1/012001}{{{\rm e}m J. Phys.: Conf. Ser.} {\bfseries 504} (2014) 012001}, \href{http://arxiv.org/abs/1403.3588}{{\ttfamily arXiv:1403.3588 [quant-ph]}}. 
\bibitem{Walleczek.2016nonlocal} J.~Walleczek and G.~Gr{\"o}ssing, ``Nonlocal quantum information transfer without superluminal signalling and communication,'' \href{http://dx.doi.org/10.1007/s10701-016-9987-9}{{{\rm e}m Found. Phys.} {\bfseries 46} (2016) 1208--1228}, \href{http://arxiv.org/abs/1501.07177}{{\ttfamily arXiv:1501.07177 [quant-ph]}}. \bibitem{Groessing.2013joaopessoa} G.~Gr{\"o}ssing, ``Emergence of quantum mechanics from a sub-quantum statistical mechanics,'' \href{http://dx.doi.org/10.1142/S0217979214501793}{{{\rm e}m Int. J. Mod. Phys. B} (2014) 145--179}, \href{http://arxiv.org/abs/1304.3719}{{\ttfamily arXiv:1304.3719 [quant-ph]}}. \bibitem{Tollaksen.2010quantum} J.~Tollaksen, Y.~Aharonov, A.~Casher, T.~Kaufherr, and S.~Nussinov, ``Quantum interference experiments, modular variables and weak measurements,'' \href{http://dx.doi.org/10.1088/1367-2630/12/1/013023}{{{\rm e}m New J. Phys.} {\bfseries 12} (2010) 013023}, \href{http://arxiv.org/abs/0910.4227v1}{{\ttfamily arXiv:0910.4227v1 [quant-ph]}}. \bibitem{Bancal.2012quantum} J.-D. Bancal, S.~Pironio, A.~Acin, Y.-C. Liang, V.~Scarani, and N.~Gisin, ``Quantum nonlocality based on finite-speed causal influences leads to superluminal signaling,'' \href{http://dx.doi.org/10.1038/nphys2460}{{{\rm e}m Nature Phys.} {\bfseries 8} (2012) 867--870}, \href{http://arxiv.org/abs/1110.3795}{{\ttfamily arXiv:1110.3795 [quant-ph]}}. \bibitem{Batelaan.2016momentum} H.~Batelaan, E.~Jones, W.~C.-W. Huang, and R.~Bach, ``Momentum exchange in the electron double-slit experiment,'' \href{http://dx.doi.org/10.1088/1742-6596/701/1/012007}{{{\rm e}m J. Phys.: Conf. Ser.} {\bfseries 701} (2016) 012007}. \bibitem{Cetto.2014emerging-quantum} L.~de~la Pe{\~n}a, A.~M. Cetto, and A.~Vald{\'e}s-Hern{\'a}ndes, {{\rm e}m The Emerging Quantum: The Physics behind Quantum Mechanics}. \newblock Springer, Berlin, 2014. \bibitem{Elze.2018configuration} H.-T. 
Elze, ``On configuration space, born's rule and ontological states,'' {{\rm e}m pre-print} (2018) , \href{http://arxiv.org/abs/1802.07189}{{\ttfamily arXiv:1802.07189 [quant-ph]}}. \bibitem{Hiley.2016structure} B.~J. Hiley, ``Structure process, weak values and local momentum,'' \href{http://dx.doi.org/10.1088/1742-6596/701/1/012010}{{{\rm e}m J. Phys.: Conf. Ser.} {\bfseries 701} (2016) 012010}. \bibitem{Maudlin.2011quantum} T.~Maudlin, {{\rm e}m Quantum Non-Locality and Relativity: Metaphysical Intimations of Modern Physics}. \newblock Wiley-Blackwell, West Sussex, {UK}, 3~ed., 2011. \bibitem{Norsen.2016bohmian} T.~Norsen, ``Bohmian conditional wave functions (and the status of the quantum state),'' \href{http://dx.doi.org/10.1088/1742-6596/701/1/012003}{{{\rm e}m J. Phys.: Conf. Ser.} {\bfseries 701} (2016) 012003}. \bibitem{Ord.2009quantum} G.~Ord, ``Quantum mechanics in a two-dimensional spacetime: What is a wavefunction?,'' \href{http://dx.doi.org/10.1016/j.aop.2009.03.007}{{{\rm e}m Ann. Phys.} {\bfseries 324} (2009) 1211--1218}. \bibitem{Vervoort.2016no-go} L.~Vervoort, ``No-go theorems face background-based theories for quantum mechanics,'' \href{http://dx.doi.org/10.1007/s10701-015-9973-7}{{{\rm e}m Found. Phys.} {\bfseries 46} (2016) 458--472}, \href{http://arxiv.org/abs/1406.0901}{{\ttfamily arXiv:1406.0901 [quant-ph]}}. {\rm e}nd{thebibliography}{\rm e}ndgroup {\rm e}nd{document}
\begin{document} \title{ANALYSIS OF THE RIEMANN ZETA FUNCTION} \begin{abstract} The paper uses a feature of the computation of the Riemann zeta function in the critical strip, where its approximate value is determined by partial sums of the Dirichlet series by which the function is defined. \par These expressions are called the first and the second approximate equation of the Riemann zeta function. \par Representing the terms of the Dirichlet series by vectors allows us, when analyzing the polyline formed by these vectors, to: \par 1) explain the geometric meaning of the generalized summation of the Dirichlet series in the critical strip; \par 2) obtain a formula for computing the Riemann zeta function; \par 3) obtain the functional equation of the Riemann zeta function from the geometric properties of the vectors forming the polyline; \par 4) explain the geometric meaning of the second approximate equation of the Riemann zeta function; \par 5) obtain the vector equation of the non-trivial zeros of the Riemann zeta function; \par 6) determine why the Riemann zeta function has non-trivial zeros on the critical line; \par 7) understand why the Riemann zeta function cannot have non-trivial zeros in the critical strip other than on the critical line. \par The main result of the paper is a description of possible ways to confirm the Riemann hypothesis based on the properties of the vector system of the second approximate equation of the Riemann zeta function. \end{abstract} \keywords{the Riemann zeta function, non-trivial zeros of the Riemann zeta function, second approximate functional equation of the Riemann zeta function} \section{Introduction} A more detailed title of the paper could be formulated as follows: \par \textit{Geometric analysis of the expressions that determine a value of the Riemann zeta function in the critical strip.} \par However, this title would not fully reflect the content of the paper; it is necessary to refer to the sequence of problems.
The starting point of the analysis of the Riemann zeta function was the question: \par \textit{Why does the Riemann zeta function have non-trivial zeros?} \par A superficial study of this question led to the idea of a geometric analysis of the Dirichlet series. \par The preliminary geometric analysis yielded the following results: \par 1) an expression for computing a value of the Riemann zeta function in the critical strip; \par 2) a derivation of the functional equation of the Riemann zeta function based on the geometric properties of the vectors; \par 3) an explanation of the geometric meaning of the second approximate equation of the Riemann zeta function. \par The most important outcome of the preliminary geometric analysis was the question of why the Riemann zeta function cannot have non-trivial zeros in the critical strip other than on the critical line. \par Therefore, the extended title of the paper is as follows: \par \textit{Geometric analysis of the expressions defining a value of the Riemann zeta function in the critical strip, with the aim of answering the question of why the Riemann zeta function cannot have non-trivial zeros in the critical strip except on the critical line.} \par We must immediately define the upper limit of the paper: although it explores the central question of the Riemann hypothesis, we are not talking about its proof. \par The outcome of the geometric analysis of the expressions defining a value of the Riemann zeta function in the critical strip is a description of possible ways to confirm the Riemann hypothesis based on the results of this analysis. \par Speaking of boundaries, we also mark the lower limit of the paper: we will not use the latest advances in the theory of the Riemann zeta function, nor improve anyone's result (we will state our position on this later).
\par We turn to the origins of the theory of the Riemann zeta function and build on the very first results obtained by Riemann, Hardy, Littlewood and Titchmarsh, as well as by Hadamard and de la Vall\'ee Poussin. \par In addition, of course, we cannot do without the official statement of the Riemann hypothesis: \par \textit{The Riemann zeta function is the function of the complex variable $s$, defined in the half-plane $Re(s) > 1$ by the absolutely convergent series:} \begin{equation}\label{zeta_dirichlet}\zeta(s) = \sum\limits_{n=1}^{\infty}\frac{1}{n^s};\end{equation} \textit{and in the whole complex plane $C$ by analytic continuation. As shown by Riemann, $\zeta(s)$ extends to $C$ as a meromorphic function with only a simple pole at $s=1$, with residue 1, and satisfies the functional equation:} \begin{equation}\label{zeta_func_eq}\pi^{-s/2}\Gamma(\frac{s}{2})\zeta(s)=\pi^{-(1-s)/2}\Gamma(\frac{1-s}{2})\zeta(1-s);\end{equation} \textit{Thus, in terms of the function $\zeta(s)$, we can state the Riemann hypothesis: the non-trivial zeros of $\zeta(s)$ have real part equal to $1/2$.} \par We define two important concepts of the theory of the Riemann zeta function: the critical strip and the critical line. \par Both concepts relate to the region where the non-trivial zeros of the Riemann zeta function are located. \par Hadamard and de la Vall\'ee Poussin, independently, in the course of proving the theorem on the distribution of prime numbers (which is based on the behavior of the non-trivial zeros of the Riemann zeta function), showed that all non-trivial zeros of the Riemann zeta function are located in the narrow strip $0<Re(s)<1$. In the theory of the Riemann zeta function, this strip is called the \textit{critical strip}. \par The line $Re(s)=1/2$ referred to in the Riemann hypothesis is called the \textit{critical line}.
\par It is known that the Dirichlet series (\ref{zeta_dirichlet}), which defines the Riemann zeta function, diverges in the critical strip; therefore, to perform the analytic continuation of the Riemann zeta function, \textit{methods of generalized summation} of divergent series are used, in particular the generalized Euler-Maclaurin summation formula. \par We should note that the generalized Euler-Maclaurin summation formula is used in the case where the partial sums of a divergent series are suitable for computing the generalized sum of that series. \par From the theory of generalized summation of divergent series \cite{HA3} it is known that any method of generalized summation, if it yields any result at all, yields the same result as every other method of generalized summation. \par On the one hand, this fact confirms the uniqueness of the analytic continuation of a function of a complex variable; on the other hand, the principle of analytic continuation apparently extends, in the form of generalized summation, to functions of a real variable that are defined by divergent series. \par Now we can formulate the principles of the geometric analysis of the expressions that define a value of the Riemann zeta function in the critical strip: \par 1) a complex number can be represented geometrically by points or vectors in the plane; \par 2) the terms of the Dirichlet series that defines the Riemann zeta function are complex numbers; \par 3) a value of the Riemann zeta function in the region of analytic continuation can be obtained by any method of generalized summation of divergent series. \par What exactly the geometric analysis of values of the Riemann zeta function consists of, we will see when we construct the polyline formed by the vectors corresponding to the terms of the Dirichlet series that defines the Riemann zeta function. \par All results in the paper are obtained empirically, by calculations with a given precision (15 significant digits).
\section{Representation of the Riemann zeta function value by a vector system} This section presents detailed results of the geometric analysis of the expressions that determine a value of the Riemann zeta function in the critical strip. \par The possibility of such an analysis rests on two important facts: \par 1) the representation of complex numbers by vectors; \par 2) the use of methods of generalized summation of divergent series. \subsection{Representation of the partial sums of the Dirichlet series by a vector system} Each term of the Dirichlet series (\ref{zeta_dirichlet}), which defines the Riemann zeta function, is a complex number $x_n+iy_n$, hence it can be represented by the vector $X_n=(x_n, y_n)$. \par To obtain the coordinates of the vector $X_n(s)$, $s=\sigma+it$, we first write the expression $n^{-s}$ in the exponential form of a complex number and then, using Euler's formula, in the trigonometric form: \begin{equation}\label{x_vect_direchlet}X_n(s)=\frac{1}{n^{s}}=\frac{1}{n^{\sigma}}e^{-it\log(n)}=\frac{1}{n^{\sigma}}(\cos(t\log(n))-i\sin(t\log(n)));\end{equation} \par Then the coordinates of the vector $X_n(s)$ can be computed by the formulas: \begin{equation}\label{x_n_y_n}x_n(s)=\frac{1}{n^{\sigma}}\cos(t\log(n)); y_n(s)=-\frac{1}{n^{\sigma}}\sin(t\log(n));\end{equation} \par \textit{Using the rules of analytic geometry, we obtain the coordinates corresponding to the partial sum $s_m(s)$ of the Dirichlet series (\ref{zeta_dirichlet}):} \begin{equation}\label{s_m_x_s_m_y}s_m(s)_x=\sum_{n=1}^{m}{x_n(s)}; s_m(s)_y=\sum_{n=1}^{m}{y_n(s)};\end{equation} \begin{figure} \caption{Polyline, $s=1.25+279.229250928i$} \label{fig:s1_1} \end{figure} \par We construct the polyline corresponding to the partial sums $s_m(s)$. Select a value \textit{in the region of convergence} of the Dirichlet series (\ref{zeta_dirichlet}), for example, $s=1.25+279.229250928 i$.
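The coordinate formulas (\ref{x_vect_direchlet})--(\ref{s_m_x_s_m_y}) are easy to check numerically. The following minimal sketch (plain Python; the helper names are ours, not the paper's) computes the vectors $X_n(s)$ and the polyline vertices, i.e. the partial sums $s_m(s)$:

```python
# Sketch: coordinates of the Riemann-spiral vectors X_n(s) = n^{-s}
# and of the polyline vertices s_m(s); helper names are ours.
import math

def vector(n, s):
    """Vector X_n(s): real and imaginary parts of n^{-s}."""
    z = n ** (-s)                      # n^{-sigma} * e^{-i t log n}
    return (z.real, z.imag)

def partial_sum(m, s):
    """Polyline vertex: coordinates of s_m(s) = sum_{n<=m} n^{-s}."""
    z = sum(n ** (-s) for n in range(1, m + 1))
    return (z.real, z.imag)

s = 1.25 + 279.229250928j              # point in the convergence region
sigma, t = s.real, s.imag

# check against the trigonometric form of Eq. (x_n_y_n):
# x_n = n^{-sigma} cos(t log n),  y_n = -n^{-sigma} sin(t log n)
x5, y5 = vector(5, s)
assert abs(x5 - 5 ** -sigma * math.cos(t * math.log(5))) < 1e-12
assert abs(y5 + 5 ** -sigma * math.sin(t * math.log(5))) < 1e-12
```

Complex exponentiation `n ** (-s)` reproduces the trigonometric form directly, so the two assertions check formula (\ref{x_n_y_n}) term by term.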
\par We also display the vector $(0.69444570272324, 0.61658346971775)$, which corresponds to a value of the Riemann zeta function at $s=1.25+279.229250928 i$. \par We display the first $m=90$ vectors (later we will explain why we chose this number) so that the vectors follow in ascending order of their numbers (fig. \ref{fig:s1_1}). \par Now we change a value of the real part, $s=0.75+279.229250928 i$, and move to the region where the Dirichlet series (\ref{zeta_dirichlet}) \textit{diverges.} \begin{figure} \caption{Polyline, $s=0.75+279.229250928i$} \label{fig:s3_1} \end{figure} \par We also display the vector $(0.22903651233853, 0.51572970834588)$, which corresponds to a value of the Riemann zeta function at $s=0.75+279.229250928 i$. \par We observe (fig. \ref{fig:s3_1}) an increase in the size of the polyline, but the qualitative behavior of the graph does not change: \textit{the polyline twists around the point corresponding to a value of the Riemann zeta function.} \par To see what exactly the difference is, we need to consider the behavior of vectors with smaller numbers, for example in the range $m=(300, 310)$, relative to vectors with larger numbers, for example in the range $m=(500, 510)$. \begin{figure} \caption{Part of polyline, $s=1.25+279.229250928i$} \label{fig:s1_2_1} \end{figure} \par We see (fig. \ref{fig:s1_2_1}) that when $\sigma=1.25$ the polyline is \textit{a converging} spiral, since the radius of the spiral in the range $m=(300, 310)$ is greater than the radius of the spiral in the range $m=(500, 510)$. \begin{figure} \caption{Part of polyline, $s=0.75+279.229250928i$} \label{fig:s3_2_1} \end{figure} \par Unlike the first case, when $\sigma=0.75$ we see (fig. \ref{fig:s3_2_1}) that the polyline is \textit{a diverging} spiral, since the radius of the spiral in the range $m=(300, 310)$ is less than the radius of the spiral in the range $m=(500, 510)$.
\par But regardless of whether the spiral converges or diverges, in both cases we see that the center of the spiral coincides with the point corresponding to a value of the Riemann zeta function (we will later show that this is indeed the case). \par This fact can be considered a geometric explanation of the method of generalized summation: \par \textit{The coordinates of the partial sums $s_m(s)_x$ and $s_m(s)_y$ of the Dirichlet series (\ref{zeta_dirichlet}) oscillate around certain mean values of $x$ and $y$, which we take as a value of the Riemann zeta function; in one case the coordinates of the partial sums converge indefinitely to these values, and in the other case they diverge indefinitely from them.} \subsection{Properties of the vector system of the partial sums of the Dirichlet series – the Riemann spiral} In order not to keep repeating long descriptions of the vector system that we are going to study in detail, we introduce the following definition: \par \textit{The Riemann spiral is the polyline formed by the vectors corresponding to the terms of the Dirichlet series (\ref{zeta_dirichlet}) defining the Riemann zeta function, taken in ascending order of their numbers.} \par A preliminary analysis of the \textit{Riemann spiral} showed that the vectors, arranged in ascending order of their numbers, form a converging or diverging spiral with its center at the point corresponding to a value of the Riemann zeta function. \par We also see (fig. \ref{fig:s3_1}) that the Riemann spiral has several more centers, where the vectors first form a converging spiral and then a diverging one. \par Comparing (fig. \ref{fig:s1_1}) and (fig. \ref{fig:s3_1}), we see that the number of such centers does not depend on the real part of the complex number. \par Comparing (fig. \ref{fig:s3_1}) and (fig. \ref{fig:s7_1}), we see that the number of such centers increases as the imaginary part of the complex number increases.
\begin{figure} \caption{The Riemann spiral, $s=0.75+959.459168807i$} \label{fig:s7_1} \end{figure} \par We can explain this behavior of the Riemann spiral vectors if we return to the exponential form (\ref{x_vect_direchlet}) of the terms of the Dirichlet series (\ref{zeta_dirichlet}). \par The paradox of the Riemann spiral is that, with the unbounded growth of the function $t\log(n)$, the absolute angles $\varphi_n(t)$ of its vectors behave in a pseudo-random way (fig. \ref{fig:absolute_angles}), because we can recognize angles only in the range $[0, 2\pi]$: \begin{equation}\label{phi_n}\varphi_n(t)=t\log(n) \bmod 2\pi;\end{equation} \begin{figure} \caption{Absolute angles $\varphi_n(t)$ of vectors of the Riemann spiral, rad, $s=0.75+959.459168807i$} \label{fig:absolute_angles} \end{figure} \par The angles between the vectors $\Delta\varphi_n(t)$, if they are measured not as the visible angles between segments but as angles between directions, repeatedly grow (fig. \ref{fig:relative_angles}) to a value $2\pi$, then sharply decrease to a value $0$, and again grow to a value $2\pi$: \begin{equation}\label{delta_phi_n}\Delta\varphi_n(t)=\varphi_n(t)-\varphi_{n-1}(t);\end{equation} \par In the last range this growth is \textit{asymptotic}, i.e. no matter how large the vector number $n$ is, a value $\log(n)$ will never equal a value $\log(n+1)$.
\begin{figure} \caption{Relative angles $\Delta\varphi_n(t)$ of vectors of the Riemann spiral, rad, $s=0.75+959.459168807i$} \label{fig:relative_angles} \end{figure} \par As a consequence of the revealed properties of the Riemann spiral vectors, we observe two types of singular points: \par 1) \textit{reverse points}, where the visible twisting of the vectors is replaced by untwisting; they occur where the direction change passes the odd multiples $(2k-1)\pi$; \par 2) \textit{inflection points}, where the visible untwisting of the vectors is replaced by twisting; they occur where the direction change passes the even multiples $2k\pi$. \par Now we can explain why, for a value of $s=1.25+279.229250928 i$, we took 90 vectors to construct the Riemann spiral: \par \begin{equation}\label{m1}279.229250928/\pi=88.881431082;\end{equation} \par Therefore, the first reverse point lies between the 88th and 89th vectors; we simply rounded the number of vectors up to a round number. \par \textit{This is the number of vectors we must use to build the Riemann spiral so that the vectors twist fully around the point corresponding to a value of the Riemann zeta function.} \par In addition, we can determine the number of reverse points $m$; as we remember, this number determines the range in which (fig. \ref{fig:relative_angles}) the periodic monotonic growth of the angles between the Riemann spiral vectors is observed; moreover, this number plays an important role (as we will see later) in the representation of values of the Riemann zeta function by the vector system.
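These phase observations can be reproduced with a short sketch (plain Python; the function names are ours). It evaluates the absolute angle $\varphi_n(t)$, the direction change between consecutive vectors, and the position $t/\pi$ of the first reverse point for the value $t=279.229250928$ used above:

```python
import math

TWO_PI = 2.0 * math.pi

def phi(n, t):
    """Absolute angle phi_n(t) = t*log(n) mod 2*pi (pseudo-random in n)."""
    return (t * math.log(n)) % TWO_PI

def direction_change(n, t):
    """Angle between the directions of X_{n-1} and X_n before reduction
    mod 2*pi: t*(log n - log(n-1)); it decreases monotonically in n."""
    return t * math.log(n / (n - 1))

t = 279.229250928
first_reverse = t / math.pi        # ~88.88: between vectors 88 and 89
print(first_reverse)

# the reduced direction change agrees with phi_n - phi_{n-1} mod 2*pi
n = 120
assert abs(direction_change(n, t) % TWO_PI
           - (phi(n, t) - phi(n - 1, t)) % TWO_PI) < 1e-9
```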
\par We can determine the number of reverse points $m$ from the condition that between two reverse points there is at least one vector: \par \begin{equation}\label{m2_eq}\frac{t}{(2m-1)\pi}-\frac{t}{(2m+1)\pi}=1;\end{equation} \par From this equation we find: \par \begin{equation}\label{m2}m=\sqrt{\frac{t}{2\pi}+\frac{1}{4}};\end{equation} \par To conclude the consideration of the static parameters of the Riemann spiral, we analyze its radius of curvature: \par \begin{equation}\label{curvature_radius_eq}r_n=\frac{|X_n|\cos(\Delta\varphi_n)}{\sqrt{1-\cos(\Delta\varphi_n)^2}};\end{equation} \par where \begin{equation}\label{cos_delta_varphi_n}\cos(\Delta\varphi_n)=\frac{(X_n,X_{n-1})}{|X_n||X_{n-1}|};\end{equation} \begin{equation}\label{module}|X_n|=\sqrt{x_n^2+y_n^2};\end{equation} \begin{equation}\label{scalar_product}(X_n,X_{n-1})=x_nx_{n-1}+y_ny_{n-1};\end{equation} \begin{figure} \caption{Curvature radius of the Riemann spiral, $s=0.75+959.459168807i$} \label{fig:curvature_radius} \end{figure} \par We see (fig. \ref{fig:curvature_radius}) that the Riemann spiral has \textit{an alternating sign} of the radius of curvature; it is the only spiral with this property. \par The radius of curvature takes its maximum negative value at a reverse point and its maximum positive value at an inflection point. \subsection{Derivation of an empirical expression for the Riemann zeta function} We study in detail the behavior of the Riemann spiral vectors after the first reverse point: $$m=\frac{t}{\pi};$$ \par As we know, the Riemann spiral vectors in the critical strip form a diverging spiral (fig. \ref{fig:s3_2_1}). \par Consider the Riemann spiral after the first reverse point on the left boundary of the critical strip (fig. \ref{fig:s8_2_1}), i.e. when $\sigma=0$. \begin{figure} \caption{Part of polyline, $s=0+279.229250928i$} \label{fig:s8_2_1} \end{figure} \par We see that the size of the polyline has increased in comparison with (fig. \ref{fig:s3_2_1}), but the behavior of the vectors has not changed: as their numbers increase, the vectors tend to form a regular polygon whose number of edges grows with the vector numbers. \par Consequently, the vectors tend to form a circle whose radius is the larger, the greater the vector numbers, and the center of this circle tends to the center of the spiral, hence to the point that corresponds to a value of the Riemann zeta function. \par In analytic number theory this fact\footnote{Namely, the correspondence to the graph of partial sums of the Dirichlet series considered by Erickson in his paper \cite{ER}.} is known as the first approximate equation of the Riemann zeta function: \begin{equation}\label{zeta_eq_1}\zeta(s)=\sum_{n\le x}{\frac{1}{n^s}}-\frac{x^{1-s}}{1-s}+\mathcal{O}(x^{-\sigma}); \sigma>0; |t|< 2\pi x/C; C > 1\end{equation} \par As we know, Hardy and Littlewood obtained this approximate equation on the basis of the generalized Euler-Maclaurin summation method. \par It is known from the theory of generalized summation of divergent series \cite{HA3} that we can apply \textit{any other method} of generalized summation (if it yields any value) and obtain the same result. \par We use the Riemann spiral to define another method of generalized summation. \par Consider the first 30 vectors of the Riemann spiral after the first reverse point (fig. \ref{fig:q_1}); here the vectors form a star-shaped polygon, with the point that corresponds to a value of the Riemann zeta function at the center of this polygon, as in the case of a diverging spiral (fig. \ref{fig:s3_2_1}). \begin{figure} \caption{First 30 vectors after the first reverse point, $s=0.75+279.229250928i$} \label{fig:q_1} \end{figure} \par Connect the midpoints of the segments formed by the vectors to get 29 segments (fig. \ref{fig:q_2}).
\begin{figure} \caption{The first step of generalized summation, $s=0.75+279.229250928i$} \label{fig:q_2} \end{figure} \par We see that the star-shaped polygon has decreased in size, and the point that corresponds to a value of the Riemann zeta function is again at the center of this polygon. \begin{figure} \caption{The twentieth step of generalized summation, $s=0.75+279.229250928i$} \label{fig:q_20} \end{figure} \par We repeat the operation of reducing the polygon (fig. \ref{fig:q_20}) until a single segment is left. \par \textit{Calculations show that, when the coordinates of the vectors are computed to 15 digits, the coordinates of the midpoint of this segment match a value of the Riemann zeta function to at least 13 digits; for example, at the point $0.75+279.229250928i$ the exact value \cite{ZF} equals $0.22903651233853+0.51572970834588i$, while the computation of the midpoints of the segments yields a value $0.22903651233856+0.51572970834589i$.} \par A reduced formula for computing values of the Riemann zeta function can be obtained from the described method. Write out sequentially the expressions for the midpoints of the segments and substitute the resulting formulas into one another: \begin{equation}\label{a}a_i = \frac{x_i+x_{i+1}}{2};\end{equation} \begin{equation}\label{b}b_i =\frac{a_i+a_{i+1}}{2}=\frac{x_i+2x_{i+1}+x_{i+2}}{4};\end{equation} \begin{equation}\label{c}c_i =\frac{b_i+b_{i+1}}{2}= \frac{x_i+3x_{i+1}+3x_{i+2}+x_{i+3}}{8};\end{equation} \begin{equation}\label{d}d_i =\frac{c_i+c_{i+1}}{2}=\frac{x_i+4x_{i+1}+6x_{i+2}+4x_{i+3}+x_{i+4}}{16};\end{equation} \par We write the same formulas for the coordinates $y_k$.
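The repeated-midpoint procedure described above can be sketched in a few lines (plain Python; the function name is ours). Starting from the 30 partial sums that follow the first reverse point $n_0\approx t/\pi$, repeated pairwise averaging reproduces the value of $\zeta(0.75+279.229250928i)$ cited above:

```python
import math

def smoothed_value(s, n0, count):
    """Repeated midpoint averaging applied to the partial sums
    S_{n0}, ..., S_{n0+count-1} of the Dirichlet series."""
    acc = sum(n ** (-s) for n in range(1, n0))     # S_{n0-1}
    pts = []
    for n in range(n0, n0 + count):
        acc += n ** (-s)
        pts.append(acc)                            # S_n as a complex number
    while len(pts) > 1:                            # shrink the polygon
        pts = [(a + b) / 2.0 for a, b in zip(pts, pts[1:])]
    return pts[0]

s = 0.75 + 279.229250928j
n0 = round(s.imag / math.pi)                       # first reverse point, n0 = 89
approx = smoothed_value(s, n0, 30)
exact = 0.22903651233853 + 0.51572970834588j       # value cited above
assert abs(approx - exact) < 1e-6
```

The final average is the same as weighting the partial sums with binomial coefficients, which is exactly the abbreviated formula derived next.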
\par In the numerator of each formula (\ref{a}--\ref{d}) we see the sum of the vectors multiplied by the binomial coefficients of the power of Newton's binomial equal to the number of vectors minus one, and in the denominator the power of two equal to the number of vectors minus one. \par Now we can write down the abbreviated formula: \par \begin{equation}\label{s_x_s_y}s_x=\frac{1}{2^m}\sum_{k=0}^{m}\binom{m}{k}x_k; s_y=\frac{1}{2^m}\sum_{k=0}^{m}\binom{m}{k}y_k;\end{equation} \par where $x_k$ and $y_k$ are the coordinates of the partial sums (\ref{s_m_x_s_m_y}) of the Dirichlet series. \par \textit{We have obtained the formula of the generalized Ces\`aro summation method $(C, k)$ \cite{HA3}.} \par It should be noted that to compute the coordinates of the center of the star-shaped polygon, when a value of the imaginary part of the complex number equals 279.229250928, we use 30 vectors after the first reverse point, while starting from a value of the imaginary part equal to 1000 it is enough to use only 10 vectors after the first reverse point. \par We have obtained a result that applies not only to the Riemann zeta function but to all functions of a complex variable that have an analytic continuation. \par A result that relates not only to the analytic continuation of the Riemann zeta function, but to analytic continuation as the essence of any function of a complex variable. \par A result that refers to any function and to any physical process where such functions are applied, and hence a result that defines the essence of that physical process. \par Euler's intuitive belief\footnote{These inconveniences and apparent contradictions can be avoided if we give the word \glqq sum\grqq\ a meaning different from the usual.
Let us say that the sum of any infinite series is a finite expression from which the series can be derived.} that a divergent series can be matched with a certain value \cite{EL} has evolved into the fundamental theory\footnote{It is impossible to state Euler's principle accurately without clear ideas about functions of a complex variable and analytic continuation.} of generalized summation of divergent series, which Hardy systematically laid out in his book \cite{HA3}. \par The essence of our conclusions is as follows: \par 1) When we speak of the analytic continuation of some function of a complex variable, this automatically means that there are at least two regions of definition of this function: \par a) the region where the series by which this function is defined converges; \par b) the region where the same series diverges. \par \textit{In fact, the question of analytic continuation arises precisely because there is a region where the series that defines the function of a complex variable diverges.} \par Thus, the analytic continuation of any function of a complex variable is inseparably linked to the existence of a divergent series. \par We can say that this is the very essence of analytic continuation. \par 2) Analytic continuation is possible only if there is some method of generalized summation that gives a result, i.e. some value other than infinity. \par This value will be a value of the function in the region where the series by which the function is defined diverges. \par 3) The most important point here is that the series by which the function is defined must behave asymptotically both in the region where it converges and in the region where it diverges. \par And then we come to an important point: \par \textit{A function of a complex variable, if it has an analytic continuation (and therefore is given by a series that converges in one region and diverges in another, i.e.
has no limit of partial sums of this series) is determined by the asymptotic law of behavior of the series by which this function is given and it is no matter whether this series converges or diverges, a value of this function will be the asymptotic value with relation to which this series converges or diverges.} \par Analytical continuation is possible only if the series with which the function is given has asymptotic behavior, i.e. its values oscillate with relation to the asymptote, which is a value of the function. \par In the case of a function of a complex variable, there are two such asymptotes, and in the case of a function of a real variable, such an asymptote is one. \par If the Riemann zeta function is given asymptotic values, then it also has \textit{asymptotic value of zero,} hence it may seem that the Riemann hypothesis cannot be proved. \par As we will show later, a value of the Riemann zeta function can be given by a finite vector system, the sum of which gives \textit{the exact value of zero} if these vectors form a polygon. \par \textit{In this regard, we can conclude that the Riemann hypothesis can be confirmed only if such a finite vector system exists and only using the properties of the vectors of this system.} \par One can disagree with the conclusion that it is possible to confirm the Riemann hypothesis using a finite vector system, but we will go this way, because the chosen method of geometric analysis of the Riemann zeta function allows us to penetrate into the essence of the phenomenon. \subsection{Derivation of an empirical expression for the functional equation of the Riemann zeta function} We already know one dynamic property of the Riemann spiral, which is that the number of reverse points increases with the growth of the imaginary part of a complex number. 
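\par Before studying these points, the binomial averaging formula (\ref{s_x_s_y}) can be illustrated numerically. The sketch below is our own (the helper names are ours, not part of the derivation); for simplicity the partial sums are handled as complex numbers $s_k = x_k + i y_k$ rather than as coordinate pairs.

```python
from math import comb, pi

def binomial_average(sums):
    """(1/2^m) * sum_{k=0}^{m} C(m,k) * s_k with m = len(sums) - 1,
    i.e. the binomial-weighted (Cesaro-like) mean of the partial sums."""
    m = len(sums) - 1
    return sum(comb(m, k) * sums[k] for k in range(m + 1)) / 2 ** m

def dirichlet_partial_sums(s, n0, m):
    """Partial sums s_{n0}, ..., s_{n0+m} of the Dirichlet series sum n^(-s)."""
    total = sum(n ** (-s) for n in range(1, n0 + 1))
    out = [total]
    for n in range(n0 + 1, n0 + m + 1):
        total += n ** (-s)
        out.append(total)
    return out

# Classic sanity check: the partial sums of Grandi's series 1-1+1-1+...
# oscillate between 1 and 0; the binomial average recovers the Cesaro value 1/2.
grandi = [1, 0, 1, 0, 1, 0, 1, 0]
print(binomial_average(grandi))  # 0.5

# In the convergent half-plane the average stays close to the ordinary limit:
est = binomial_average(dirichlet_partial_sums(2.0, 50, 10))
print(abs(est - pi ** 2 / 6) < 0.05)  # True
```

The same averaging applied to complex partial sums of $\sum n^{-s}$ is what extracts the coordinates of the spiral centers in the text.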
\par We will study \textit{reverse points} and \textit{inflection points}, which, as we will see later, are not merely special points of the Riemann spiral: they are essential points that define the nature of the Riemann spiral and of the Riemann zeta function. \par We will use our empirical formula for calculating a value of the Riemann zeta function, which corresponds to the Ces\`aro generalized summation method. \par As we saw (fig. \ref{fig:s7_1}), at any reverse point where the Riemann spiral has a negative radius of curvature, the vectors form a converging spiral before the reverse point and a diverging spiral after it. \par This fact allows us to apply the formula for a value of the Riemann zeta function at any reverse point, because this formula yields the coordinates of any center of the diverging spiral formed by the vectors of the Riemann spiral. \par For the convenience of further presentation, we introduce the following definition: \par \textit{A middle vector of the Riemann spiral is a directed segment connecting two adjacent centers of the Riemann spiral, drawn from the center with the smaller number to the center with the larger number (fig. \ref{fig:average_vectors}), numbering from the center corresponding to the value of the Riemann zeta function.} \begin{figure} \caption{The middle vectors of the Riemann spiral, $s=0.25+5002.981i$} \label{fig:average_vectors} \end{figure} \par To identify the moduli of the middle vectors of the Riemann spiral, we first determine the coordinates of the centers of the Riemann spiral using the formula (\ref{s_x_s_y}). \par To calculate the coordinates of the first center of the Riemann spiral (a value of the Riemann zeta function) we will use 30 vectors, and for the coordinates of the second center, 20 vectors.
\par To increase the accuracy we could choose a different number of vectors for each center of the Riemann spiral, but since the imaginary part of the complex number is sufficiently large and the number of vectors sufficiently small, it is enough to use 5 vectors for the coordinates of the third and subsequent centers. \par Then, using the formula for the distance between two points, we find the moduli of the middle vectors: \begin{equation}\label{dist}|Y_n|=\sqrt{(x_{n+1}-x_n)^2+(y_{n+1}-y_n)^2};\end{equation} \par We calculate the moduli of the first six middle vectors of the Riemann spiral for values of the real part ranging from 0 to 1 in increments of 0.1, with the imaginary part of the complex number fixed at 5000, and then approximate the dependence of the modulus of a middle vector on its sequence number (fig. \ref{fig:v1_5000_6}). \begin{figure} \caption{The dependence of the modulus of the middle vector on the sequence number, $Re(s)=(0.0, 0.1, 0.2, 0.3)$, $Im(s)=5000$} \label{fig:v1_5000_6} \end{figure} \par The best approximation of the dependence of the modulus of the middle vector on the sequence number (fig. \ref{fig:v1_5000_6}, where $x=n$) for different values of the real part of the complex number (with the imaginary part fixed) is a power function \begin{equation}\label{depence_1}|Y_n|=An^B;\end{equation} \par It should be noted that the accuracy of the approximation increases either with a decrease in the number of vectors (fig. \ref{fig:v0_5000_5}) or with an increase in the imaginary part of the complex number (fig. \ref{fig:v0_8000_6}); this fact indicates the \textit{asymptotic} nature of the obtained expressions.
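\par The fitting step above can be sketched in code. The sketch is ours: the centers below are synthetic data constructed to follow an exact power law (they are not the actual spiral centers), and the fit of $|Y_n|=An^B$ is an ordinary least-squares regression in log--log coordinates.

```python
from math import log, exp, sqrt

def middle_vector_moduli(centers):
    """Distances between adjacent centers (x_n, y_n), i.e. formula (dist)."""
    return [sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2)
            for (x1, y1), (x2, y2) in zip(centers, centers[1:])]

def fit_power_law(moduli):
    """Least-squares fit of |Y_n| = A * n^B in log-log coordinates."""
    pts = [(log(n), log(y)) for n, y in enumerate(moduli, start=1)]
    n = len(pts)
    sx = sum(p[0] for p in pts); sy = sum(p[1] for p in pts)
    sxx = sum(p[0] ** 2 for p in pts); sxy = sum(p[0] * p[1] for p in pts)
    B = (n * sxy - sx * sy) / (n * sxx - sx ** 2)
    A = exp((sy - B * sx) / n)
    return A, B

# Synthetic check: centers spaced so that |Y_n| = 2.5 * n^(-0.75) exactly;
# the regression must recover A = 2.5 and B = -0.75.
centers = [(0.0, 0.0)]
for n in range(1, 7):
    x, y = centers[-1]
    centers.append((x + 2.5 * n ** -0.75, y))
A, B = fit_power_law(middle_vector_moduli(centers))
print(round(A, 6), round(B, 6))  # 2.5 -0.75
```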
\begin{figure} \caption{The dependence of the modulus of the middle vector for five vectors, $s=0+5000i$} \label{fig:v0_5000_5} \end{figure} \begin{figure} \caption{Dependence of the middle vector modulus for a larger value of the imaginary part of the complex number, $s=0+8000i$} \label{fig:v0_8000_6} \end{figure} \par We now approximate the dependence of the coefficients $A$ and $B$ on the real part of the complex number. \par We start with the coefficient $B$, because its analysis can be completed at this stage, while the analysis of the coefficient $A$ requires additional calculations. \begin{figure} \caption{Dependence of the coefficient $B$ on the real part of the complex number, $Im(s)=5000$} \label{fig:factor2_5000} \end{figure} \par The coefficient $B$ depends linearly (fig. \ref{fig:factor2_5000}, where $x=\sigma$) on the real part of the complex number. Therefore, taking into account the identified asymptotic dependence, we can rewrite the expression (\ref{depence_1}) for the moduli of the middle vectors of the Riemann spiral: \begin{equation}\label{depence_2}|Y_n|=A\frac{1}{n^{1-\sigma}};\end{equation} \par The best approximation of the dependence of the coefficient $A$ (fig. \ref{fig:factor1_5000}, where $x=\sigma$) on the real part of the complex number is an exponential: \begin{equation}\label{depence_3}A=Ce^{D\sigma};\end{equation} \begin{figure} \caption{Dependence of the coefficient $A$ on the real part of the complex number, $Im(s)=5000$} \label{fig:factor1_5000} \end{figure} \par We first find the relation between the coefficients $C$ and $D$; to this end we calculate $\log(C)$: \begin{equation}\label{depence_4}2\log(C)=6.67772573;\end{equation} \par Taking into account the revealed asymptotic relation $D=-2\log(C)$, we can rewrite the expression (\ref{depence_3}) for the coefficient $A$: \begin{equation}\label{depence_5}A=Ce^{-2\log(C)\sigma}=e^{\log(C)-2\log(C)\sigma}=e^{2\log(C)(\frac{1}{2}-\sigma)}=(C^2)^{\frac{1}{2}-\sigma};\end{equation} \par Now, taking into account the identified asymptotic relation $|Y_1|=A=C$ when $\sigma=0$, we calculate the modulus of the first middle vector at $\sigma=0$ for values of the imaginary part of the complex number ranging from 1000 to 9000 in increments of 1000, and then approximate the dependence of the coefficient $C$ on the imaginary part of the complex number. \begin{figure} \caption{Dependence of the coefficient $C$ on the imaginary part of the complex number, $Re(s)=0$} \label{fig:factor1} \end{figure} \par The best approximation of the dependence of the coefficient $C$ (fig. \ref{fig:factor1}, where $x=t$) on the imaginary part of the complex number is a power function: \begin{equation}\label{depence_6}C=Et^F;\end{equation} \par Taking into account the revealed asymptotic dependence, we set $$F=\frac{1}{2};$$ \par and then we can write the final expression for the moduli of the middle vectors of the Riemann spiral: \begin{equation}\label{depence_8}|Y_n|=(E^2t)^{\frac{1}{2}-\sigma}\frac{1}{n^{1-\sigma}};\end{equation} \par where $E^2=0.159154719364$ is a constant whose meaning we will learn later.
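\par As a small numerical check of the empirical formula (\ref{depence_8}) (the function below is our own sketch; $E^2$ is the fitted constant quoted in the text):

```python
def middle_vector_modulus(sigma, t, n, E2=0.159154719364):
    """Empirical formula (depence_8): |Y_n| = (E^2 t)^(1/2 - sigma) / n^(1 - sigma).
    E2 is the empirically fitted constant from the text."""
    return (E2 * t) ** (0.5 - sigma) * n ** (sigma - 1.0)

# The moduli decay like n^(sigma - 1), hence decrease monotonically for 0 < sigma < 1:
mods = [middle_vector_modulus(0.25, 5000.0, n) for n in range(1, 7)]
print(all(a > b for a, b in zip(mods, mods[1:])))  # True
# The ratio |Y_n| / |Y_1| depends only on n and sigma, not on t:
print(abs(mods[3] / mods[0] - 4 ** (0.25 - 1.0)) < 1e-12)  # True
```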
\par We have obtained an asymptotic expression (\ref{depence_8}) for the moduli of the middle vectors of the Riemann spiral, which, as we found out, becomes more precise as the imaginary part of the complex number increases. \par \textit{Owing to the asymptotic form of the resulting expression, we can apply it to any middle vector, even when we can no longer calculate its coordinates, or when the calculated coordinates give an inaccurate value of its modulus.} \par Therefore we can obtain as many middle vectors as are necessary to construct \textit{the inverse Riemann spiral}, and hence an infinite series given by the middle vectors of the Riemann spiral. \par Comparing the expression for the moduli of the vectors (\ref{x_n_y_n}) of the Riemann spiral with the expression for the moduli of its middle vectors (\ref{depence_8}), we may assume that the infinite series formed by the middle vectors of the Riemann spiral defines a value of the Riemann zeta function $\zeta(1-s)$. \par \textit{Thus we may assume that the values of the Riemann zeta function $\zeta(s)$ and $\zeta(1-s)$ are related through the expression for the middle vectors of the Riemann spiral.} \par To complete the derivation of the relation between $\zeta(s)$ and $\zeta(1-s)$, it remains to determine the angles between the middle vectors of the Riemann spiral and to construct an inverse Riemann spiral whose vectors, as we show below, asymptotically twist around the zero of the complex plane. \par While the \textit{reverse points} were used to determine the coordinates of the centers of the Riemann spiral, to determine the angles between the middle vectors we will use the \textit{inflection points}. We see (fig. \ref{fig:average_vectors}) that the inflection points are not only the points at which the visible untwisting of the vectors is replaced by twisting, but also the points at which the middle vectors intersect the Riemann spiral. \par One can show by computation that the angles between the middle vectors and the Riemann spiral at the intersection points are asymptotically equal to $\pi/4$; the angles between the middle vectors can therefore be equated to the angles between the vectors of the Riemann spiral at the inflection points. \par As we remember, the inflection points occur at multiples of $2k\pi$; then, using the properties of the logarithm, we can find the angle between the first and any other middle vector of the Riemann spiral: \begin{equation}\label{depence_9}\beta_k=\alpha_k-\alpha_1=t\log\Big(\frac{t}{2\pi}\Big)-t\log(k)-t\log\Big(\frac{t}{2\pi}\Big)=-t\log(k);\end{equation} \par This expression shows that the angles between the first middle vector of the Riemann spiral and any other middle vector are equal in modulus and opposite in sign to the angles between the first vector and the corresponding vector of the Riemann spiral; the negative sign shows that the middle vectors possess a special kind of symmetry (which we will consider later) with respect to the vectors of the Riemann spiral. \par Knowing the coordinates of the first middle vector, which we have calculated with sufficient accuracy, we can now find the angles and moduli of the remaining middle vectors from the obtained asymptotic expressions and construct the inverse Riemann spiral (fig. \ref{fig:reverse_spiral_approx}).
\begin{figure} \caption{Inverse Riemann spiral, $s=0.25+5002.981i$} \label{fig:reverse_spiral_approx} \end{figure} \par Now, setting $n=k$, we can write the final formulas for the coordinates of the middle vectors $Y_n$: \begin{equation}\label{h_x_n_h_y_n}\tilde{x}_n(s)=(E^2t)^{\frac{1}{2}-\sigma}\frac{1}{n^{1-\sigma}}\cos(\alpha_1-t\log(n)); \tilde{y}_n(s)=(E^2t)^{\frac{1}{2}-\sigma}\frac{1}{n^{1-\sigma}}\sin(t\log(n)-\alpha_1);\end{equation} \par Using Euler's formula for complex numbers, we write the expression for the middle vectors of the Riemann spiral in exponential form: \begin{equation}\label{y_n_exp}Y_n(s)=(E^2t)^{\frac{1}{2}-\sigma}\frac{1}{n^{1-\sigma}}e^{-i(\alpha_1-t\log(n))};\end{equation} \par Using the rules of analytic geometry, we obtain the coordinates of the vector corresponding to the partial sum $\tilde{s}_m(s)$ of the inverse Riemann spiral: \begin{equation}\label{h_s_m_x_h_s_m_y}\tilde{s}_m(s)_x=\zeta(s)_x-\sum_{n=1}^{m}{\tilde{x}_n(s)}; \tilde{s}_m(s)_y=\zeta(s)_y-\sum_{n=1}^{m}{\tilde{y}_n(s)};\end{equation} \par To verify that the obtained expression for the middle vectors of the Riemann spiral defines the relation between the values of the Riemann zeta function $\zeta(s)$ and $\zeta(1-s)$, consider the middle vectors of the Riemann spiral near the zero point (fig. \ref{fig:reverse_spiral_zero}). \begin{figure} \caption{Middle vectors of the inverse Riemann spiral near the zero point, $s=0.25+5002.981i$} \label{fig:reverse_spiral_zero} \end{figure} \par We see that the middle vectors twist around the zero point in the same way as the vectors of the Riemann spiral twist around the point whose coordinates are a value of the Riemann zeta function\footnote{This fact was already discussed on the Stack Exchange forum \cite{MA}, but nobody tried to compute the Riemann zeta function with the geometric method corresponding to the Ces\`aro method of generalized summation.}.
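\par The exponential form (\ref{y_n_exp}) and the angle relation (\ref{depence_9}) can be checked numerically; the sketch below is ours, and $\alpha_1$ is an arbitrary placeholder angle, since only the relative angles and moduli are being tested.

```python
import cmath
from math import log, pi

def middle_vector(s, n, alpha1, E2=0.159154719364):
    """Y_n(s) = (E^2 t)^(1/2-sigma) * n^(sigma-1) * exp(-i(alpha_1 - t log n)),
    the exponential form (y_n_exp); alpha1 is the angle of the first middle vector."""
    sigma, t = s.real, s.imag
    r = (E2 * t) ** (0.5 - sigma) * n ** (sigma - 1.0)
    return r * cmath.exp(-1j * (alpha1 - t * log(n)))

def wrapped(angle):
    """Reduce an angle to (-pi, pi]."""
    return (angle + pi) % (2 * pi) - pi

s = 0.25 + 5002.981j
alpha1 = 0.3  # placeholder value; only *relative* quantities are tested below
t = s.imag
Y1 = middle_vector(s, 1, alpha1)
for k in (2, 3, 7):
    Yk = middle_vector(s, k, alpha1)
    # modulus ratio |Y_k|/|Y_1| = k^(sigma-1), from (depence_8)
    assert abs(abs(Yk) / abs(Y1) - k ** (s.real - 1.0)) < 1e-12
    # relative angle: phase(Y_k) - phase(Y_1) = t*log(k) (mod 2 pi), i.e.
    # beta_k = -t*log(k) when angles are counted clockwise as in the text
    assert abs(wrapped(cmath.phase(Yk / Y1) - t * log(k))) < 1e-9
print("middle-vector relations hold")
```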
\par Now we can write the final equation relating the values of the Riemann zeta function $\zeta(s)$ and $\zeta(1-s)$, \textit{understood in the sense of the generalized summation of divergent series:} \begin{equation}\label{zeta_func_approx}\sum_{n=1}^{\infty}{\frac{1}{n^s}}-(E^2t)^{\frac{1}{2}-\sigma}e^{-i\alpha_1}\sum_{n=1}^{\infty}{\frac{1}{n^{1-s}}}=0;\end{equation} \par where $\alpha_1$ is the angle of the first middle vector of the Riemann spiral. \par \textit{This fact shows the asymptotic form of the functional equation and the geometric nature of the Riemann zeta function, based on the significant role of the reverse points and inflection points of the Riemann spiral, which determine the middle vectors and the inverse Riemann spiral.} \par We could try to find an asymptotic expression for the angle $\alpha_1$ of the first middle vector of the Riemann spiral geometrically, but this would not significantly improve the result. \par Instead we use the results of analytic number theory to find, in an arithmetic way, the exact expression for the angle $\alpha_1$ of the first middle vector of the Riemann spiral and to determine the meaning of the constant $E^2=0.159154719364$.
\subsection{Derivation of an empirical expression for the CHI function} The functional equation of the Riemann zeta function \cite{TI} has several equivalent forms: \begin{equation}\label{zeta_func_eq2}\zeta(s)=\chi(s)\zeta(1-s); \end{equation} \par where \begin{equation}\label{chi_eq}\chi(s)=\frac{(2\pi)^s}{2\Gamma(s)\cos(\frac{\pi s}{2})}=2^s\pi^{s-1}\sin(\frac{\pi s}{2})\Gamma(1-s)=\pi^{s-\frac{1}{2}}\frac{\Gamma(\frac{1-s}{2})}{\Gamma(\frac{s}{2})};\end{equation} \par Comparing (\ref{zeta_func_approx}) and (\ref{zeta_func_eq2}), we get: \begin{equation}\label{chi_eq_app}\chi(s)=(E^2t)^{\frac{1}{2}-\sigma}e^{-i\alpha_1};\end{equation} \par We find a corresponding expression in Titchmarsh \cite{TI}: in any fixed strip $\alpha\le\sigma\le\beta$, as $t\to \infty$, \begin{equation}\label{chi_eq_app2}\chi(s)= \Big(\frac{2\pi}{t}\Big)^{(\sigma+it-\frac{1}{2})}e^{i(t+\frac{\pi}{4})}\Big\{1+\mathcal{O}\Big(\frac{1}{t}\Big)\Big\};\end{equation} \par We write the expression (\ref{chi_eq_app2}) in the exponential form of a complex number: \begin{equation}\label{chi_eq_ex}\chi(s)= \Big(\frac{t}{2\pi}\Big)^{(\frac{1}{2}-\sigma)}e^{-i(t(\log{\frac{t}{2\pi}}-1)-\frac{\pi}{4}+\tau(s))};\end{equation} \par where \begin{equation}\label{tau}\tau(s)= \mathcal{O}\Big(\frac{1}{t}\Big);\end{equation} \par Matching (\ref{chi_eq_app}) and (\ref{chi_eq_ex}), we obtain the angle of the first middle vector of the Riemann spiral: \begin{equation}\label{alpha1}\alpha_1=t(\log{\frac{t}{2\pi}}-1)-\frac{\pi}{4}+\tau(s);\end{equation} \par and identify the meaning of the constant $E^2=0.159154719364$: \begin{equation}\label{e_2}E^2=\frac{1}{2\pi}=0.159154943091;\end{equation} \par Now we can write the asymptotic equation (\ref{zeta_func_approx}) in its final form: \begin{equation}\label{zeta_func_approx2}\sum_{n=1}^{\infty}{\frac{1}{n^s}}-\Big(\frac{t}{2\pi}\Big)^{\frac{1}{2}-\sigma}e^{-i\alpha_1}\sum_{n=1}^{\infty}{\frac{1}{n^{1-s}}}=0;\end{equation} \par We will now find an empirical expression for the remainder term $\tau(s)$ of the CHI function, which relates the modulus and the argument of the exact value $\chi(s)$ to those of the approximate value $\tilde\chi(s)$: \begin{equation}\label{chi_eq_rem}\tau(s)=\Delta\varphi_{\chi}+\log\Big(\frac{|\chi(s)|}{|\tilde\chi(s)|}\Big)i;\end{equation} \par where \begin{equation}\label{delta_varphi_chi}\Delta\varphi_{\chi}=Arg(\chi(s))-Arg(\tilde\chi(s));\end{equation} \par The exact values\footnote{By an exact value we mean a value obtained with a given accuracy.} of the CHI function are found from the functional equation of the Riemann zeta function by substituting exact values of the Riemann zeta function: \begin{equation}\label{chi_eq_ex2}\chi(s)=\frac{\zeta(s)}{\zeta(1-s)};\end{equation} \par The approximate values of the CHI function are found from the expression (\ref{chi_eq_ex}) by dropping the function $\tau(s)$: \begin{equation}\label{chi_eq_app3}\tilde\chi(s)= \Big(\frac{t}{2\pi}\Big)^{(\frac{1}{2}-\sigma)}e^{-i(t(\log{\frac{t}{2\pi}}-1)-\frac{\pi}{4})};\end{equation} \par The calculation of values of the Riemann zeta function is currently available in various mathematical packages; to obtain exact values we will use the Internet service \cite{ZF}. \par We will use 15 significant digits, because this accuracy is sufficient for the analysis of the CHI function. \par We calculate the values of the Riemann zeta function in the numerator of (\ref{chi_eq_ex2}) for values of the real part of the complex number ranging from 0 to 1 in increments of 0.1 and values of the imaginary part ranging from 1000 to 9000 in increments of 1000.
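\par The comparison of $\chi(s)$ with $\tilde\chi(s)$ can also be sketched directly from the $\Gamma$-form (\ref{chi_eq}). The sketch below is ours: since the Python standard library has no complex gamma function, a standard Lanczos approximation is used (this is our substitution, not a tool from the text); note that for $t$ of order 1000, as used in the text, $\Gamma((1\pm s)/2)$ underflows double precision, so a moderate $t$ is taken for the comparison.

```python
import cmath
from math import pi, log

# Lanczos approximation for the gamma function of a complex argument (g = 7).
_P = [0.99999999999980993, 676.5203681218851, -1259.1392167224028,
      771.32342877765313, -176.61502916214059, 12.507343278686905,
      -0.13857109526572012, 9.9843695780195716e-6, 1.5056327351493116e-7]

def cgamma(z):
    if z.real < 0.5:  # reflection formula Gamma(z)Gamma(1-z) = pi/sin(pi z)
        return pi / (cmath.sin(pi * z) * cgamma(1 - z))
    z = z - 1
    x = _P[0]
    for i in range(1, 9):
        x += _P[i] / (z + i)
    w = z + 7.5
    return cmath.sqrt(2 * pi) * w ** (z + 0.5) * cmath.exp(-w) * x

def chi_exact(s):
    """chi(s) = pi^(s-1/2) * Gamma((1-s)/2) / Gamma(s/2), formula (chi_eq)."""
    return pi ** (s - 0.5) * cgamma((1 - s) / 2) / cgamma(s / 2)

def chi_approx(s):
    """chi~(s) = (t/2pi)^(1/2-sigma) * exp(-i(t(log(t/2pi)-1) - pi/4)),
    the asymptotic form (chi_eq_app3) with the remainder tau(s) dropped."""
    sigma, t = s.real, s.imag
    return (t / (2 * pi)) ** (0.5 - sigma) * cmath.exp(
        -1j * (t * (log(t / (2 * pi)) - 1) - pi / 4))

# On the critical line |chi| = 1 exactly, and chi~ is already very close:
s = 0.5 + 30j
print(abs(abs(chi_exact(s)) - 1) < 1e-6)             # True
print(abs(chi_exact(s) / chi_approx(s) - 1) < 0.01)  # True
```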
Note that the range of values of the real part of the complex number contains $2m+1$ values, which are related by the following relation: \begin{equation}\label{sigma} 1-\sigma_{k+1}=\sigma_{2m-k+1};\end{equation} \par where $k$ varies between 0 and $2m$; hence, for $k=m$: \begin{equation}\label{sigma_m}\sigma_{m+1} = 0.5;\end{equation} On the one hand, we use the property of the Riemann zeta function as a function of a complex variable: \begin{equation}\label{zeta_conj}\overline{\zeta(1-\sigma+it)}=\zeta(\overline{1-\sigma+it})=\zeta(1-\sigma-it)=\zeta(1-s);\end{equation} \par On the other hand: \begin{equation}\label{zeta_conj2}\overline{\zeta(1-\sigma+it)}=Re(\zeta(1-\sigma+it))-Im(\zeta(1-\sigma+it))i;\end{equation} \par Equating (\ref{zeta_conj}) and (\ref{zeta_conj2}): \begin{equation}\label{zeta_right}\zeta(1-s)=\zeta(1-\sigma-it)=Re(\zeta(1-\sigma+it))-Im(\zeta(1-\sigma+it))i;\end{equation} \par Now we use the relation (\ref{sigma}): \begin{equation}\label{zeta_right_req}\zeta(1-s_{k+1})=Re(\zeta(\sigma_{2m-k+1}+it))-Im(\zeta(\sigma_{2m-k+1}+it))i=Re(\zeta(s_{2m-k+1}))-Im(\zeta(s_{2m-k+1}))i;\end{equation} \par We use the resulting formula to compute the Riemann zeta function in the denominator of (\ref{chi_eq_ex2}) from the values already computed for the numerator.
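\par The index pairing (\ref{sigma}) and the conjugation property (\ref{zeta_conj}) are easy to verify numerically; in the sketch below (ours) the conjugation identity is checked on a truncated Dirichlet sum, which satisfies it termwise.

```python
def zeta_partial(s, N=2000):
    """Truncated Dirichlet series for zeta; enough to illustrate the identities."""
    return sum(n ** (-s) for n in range(1, N + 1))

# sigma grid 0.0, 0.1, ..., 1.0 (2m+1 values); the text's sigma_{k+1} is sigmas[k]
m = 5
sigmas = [k / (2 * m) for k in range(2 * m + 1)]
print(all(abs((1 - sigmas[k]) - sigmas[2 * m - k]) < 1e-12
          for k in range(2 * m + 1)))  # True: relation (sigma)
print(sigmas[m])  # 0.5: relation (sigma_m)

# conjugation property on the truncated sum: conj(Z(1-sigma+it)) = Z(1-sigma-it)
t = 1000.0
for sig in (0.2, 0.7):
    lhs = zeta_partial(complex(1 - sig, t)).conjugate()
    rhs = zeta_partial(complex(1 - sig, -t))
    assert abs(lhs - rhs) < 1e-9
```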
Then we get an expression for the CHI function: \begin{equation}\label{chi_req}\chi(s_{k+1})=\frac{(Re(\zeta(s_{k+1}))+Im(\zeta(s_{k+1}))i)(Re(\zeta(s_{2m-k+1}))+Im(\zeta(s_{2m-k+1}))i)}{Re(\zeta(s_{2m-k+1}))^2+Im(\zeta(s_{2m-k+1}))^2};\end{equation} \par Opening the brackets, we write separate expressions for the real part of the CHI function: \begin{equation}\label{chi_req_re}Re(\chi(s_{k+1}))=\frac{Re(\zeta(s_{k+1}))Re(\zeta(s_{2m-k+1}))-Im(\zeta(s_{k+1}))Im(\zeta(s_{2m-k+1}))}{Re(\zeta(s_{2m-k+1}))^2+Im(\zeta(s_{2m-k+1}))^2};\end{equation} \par and for the imaginary part: \begin{equation}\label{chi_req_im}Im(\chi(s_{k+1}))=\frac{Re(\zeta(s_{k+1}))Im(\zeta(s_{2m-k+1}))+Im(\zeta(s_{k+1}))Re(\zeta(s_{2m-k+1}))}{Re(\zeta(s_{2m-k+1}))^2+Im(\zeta(s_{2m-k+1}))^2};\end{equation} \par We will treat this as the exact value of the CHI function, because we can calculate it with a given accuracy. \par Now we write the expression for the real part of the approximate value of the CHI function: \begin{equation}\label{chi_req_re_app}Re(\tilde\chi(s_{k+1}))=(\frac{2\pi}{t})^{(\sigma-\frac{1}{2})}\cos(t(\log(\frac{2\pi}{t})+1)+\frac{\pi}{4});\end{equation} \par and for the imaginary part: \begin{equation}\label{chi_req_im_app}Im(\tilde\chi(s_{k+1}))=(\frac{2\pi}{t})^{(\sigma-\frac{1}{2})}\sin(t(\log(\frac{2\pi}{t})+1)+\frac{\pi}{4});\end{equation} \begin{figure} \caption{The ratio of the exact $|\chi(s)|$ and approximate $|\tilde\chi(s)|$ moduli of the CHI function} \label{fig:ratio_chi} \end{figure} \par We calculate the modulus of the exact value $\chi(s)$ of the CHI function: \begin{equation}\label{chi_abs}|\chi(s)|=\sqrt{Re(\chi(s))^2+Im(\chi(s))^2};\end{equation} \par of the approximate value $\tilde\chi(s)$: \begin{equation}\label{chi_abs_app}|\tilde\chi(s)|=\sqrt{Re(\tilde\chi(s))^2+Im(\tilde\chi(s))^2};\end{equation} \par and the angle between the exact $\chi(s)$ and approximate $\tilde\chi(s)$ values of the CHI function: \begin{equation}\label{chi_angle}\Delta\varphi_{\chi}=Arg(\chi(s))-Arg(\tilde\chi(s))= \arccos(\frac{Re(\chi(s))}{|\chi(s)|})-\arccos(\frac{Re(\tilde\chi(s))}{|\tilde\chi(s)|});\end{equation} \par We construct graphs of the ratio of the moduli of the exact $|\chi(s)|$ and approximate $|\tilde\chi(s)|$ values of the CHI function against the real part of the complex number (fig. \ref{fig:ratio_chi}). \par \textit{The graph (fig. \ref{fig:ratio_chi}) shows that the ratio of the moduli of the exact $|\chi(s)|$ and approximate $|\tilde\chi(s)|$ values of the CHI function can be taken as 1; therefore we can say that:} \begin{equation}\label{tau2}\tau(s)=\Delta\varphi_{\chi};\end{equation} \par We construct graphs of the dependence of $\Delta\varphi_{\chi}$ on the real part of the complex number (fig. \ref{fig:angle_chi_real}) and on the imaginary part of the complex number (fig. \ref{fig:angle_chi_complex}). \begin{figure} \caption{The angle $\Delta\varphi_{\chi}$ as a function of the real part of the complex number} \label{fig:angle_chi_real} \end{figure} \begin{figure} \caption{The angle $\Delta\varphi_{\chi}$ as a function of the imaginary part of the complex number} \label{fig:angle_chi_complex} \end{figure} \par Under condition (\ref{tau2}), $\tau(s)$ is the argument of the remainder term of the CHI function, hence it is a combination of the arguments of the remainder terms of the product $\Gamma(s)\cos(\pi s/2)$ in (\ref{chi_eq}). \par The argument of the remainder term $\mu(s)$ of the gamma function can be obtained from an expression found in Titchmarsh \cite{TI}: \begin{equation}\label{gamma_app}\log(\Gamma(\sigma+it))=(\sigma+it-\frac{1}{2})\log{it}-it+\frac{1}{2}\log{2\pi}+\mu(s);\end{equation} \par The most significant studies of the remainder term of the gamma function can be found in the works of Riemann \cite{SI} and Gabcke \cite{GA}, who independently and in different ways obtained an expression for the argument of the remainder term of the gamma function at $\sigma=1/2$.
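\par The componentwise formulas (\ref{chi_req_re}) and (\ref{chi_req_im}) can be checked against direct complex arithmetic. The sketch below is ours, and the complex values $z_1$, $z_2$ are placeholders standing in for tabulated zeta values, not actual zeta data; the angle is computed with \texttt{cmath.phase}, which keeps the sign of the argument that an $\arccos$-based formula alone would lose.

```python
import cmath

def chi_from_zeta_pair(z1, z2):
    """chi(s_{k+1}) built from zeta(s_{k+1}) = z1 and zeta(s_{2m-k+1}) = z2,
    using zeta(1 - s_{k+1}) = conj(z2):  chi = z1*z2 / |z2|^2 (formula chi_req)."""
    return z1 * z2 / (z2.real ** 2 + z2.imag ** 2)

# Placeholder complex values standing in for computed zeta values (illustrative
# only, not real zeta data):
z1, z2 = complex(1.2, -0.7), complex(0.4, 0.9)

chi = chi_from_zeta_pair(z1, z2)
den = z2.real ** 2 + z2.imag ** 2
# expanded component formulas (chi_req_re) and (chi_req_im):
re = (z1.real * z2.real - z1.imag * z2.imag) / den
im = (z1.real * z2.imag + z1.imag * z2.real) / den
assert abs(chi - complex(re, im)) < 1e-12
# both agree with the defining ratio zeta(s)/zeta(1-s) = z1 / conj(z2):
assert abs(chi - z1 / z2.conjugate()) < 1e-12

# Signed angle between an exact and an approximate value via phases:
chi_tilde = chi * cmath.exp(-0.01j)  # a value rotated clockwise by 0.01 rad
dphi = cmath.phase(chi) - cmath.phase(chi_tilde)
print(round(dphi, 6))  # 0.01
```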
\par We use the expression written explicitly by Gabcke \cite{GA}: \begin{equation}\label{mu}\mu(t)=\frac{1}{48t}+\frac{1}{5760t^3}+\frac{1}{80640t^5}+\mathcal{O}(t^{-7});\end{equation} \par We construct a graph of the dependence of the argument of the remainder term $\mu(t)$ of the gamma function on the imaginary part of the complex number at $\sigma=1/2$ (fig. \ref{fig:remainder_gamma}). \begin{figure} \caption{The remainder term of the gamma function, $\sigma=1/2$} \label{fig:remainder_gamma} \end{figure} We compare the obtained graph with the graph of the dependence of $\Delta\varphi_{\chi}$ on the imaginary part of the complex number at $\sigma=1/2$ (fig. \ref{fig:angle_chi_complex_0_5}). \begin{figure} \caption{The angle $\Delta\varphi_{\chi}$ as a function of the imaginary part of the complex number, $\sigma=1/2$} \label{fig:angle_chi_complex_0_5} \end{figure} \par These graphs correspond to each other up to sign and a constant factor: the absolute value of the angle between the exact $\chi(s)$ and approximate $\tilde\chi(s)$ values of the CHI function is exactly twice the argument of the remainder term of the gamma function at $\sigma=1/2$. \par \textit{A more significant result is obtained by dividing the values of the angle $\Delta\varphi_{\chi}$ between the exact $\chi(s)$ and approximate $\tilde\chi(s)$ values of the CHI function by the argument of the remainder term $\mu(t)$ of the gamma function at $\sigma=1/2$.} \par As a result of this operation we obtain the functional dependence $\lambda(\sigma)$ of the argument of the remainder term $\tau(s)$ of the CHI function (for different values of the real part of the complex number) on the argument of the remainder term $\mu(t)$ of the gamma function at $\sigma=1/2$.
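\par Gabcke's expansion (\ref{mu}) is trivial to evaluate; the short sketch below (ours) confirms that in the range $t \in [1000, 9000]$ used in the text the series is dominated by its first term and decreases monotonically.

```python
def mu(t):
    """Gabcke's expansion of the argument of the gamma-function remainder term:
    mu(t) = 1/(48 t) + 1/(5760 t^3) + 1/(80640 t^5) + O(t^-7)."""
    return 1 / (48 * t) + 1 / (5760 * t ** 3) + 1 / (80640 * t ** 5)

# the series is dominated by its first term for large t ...
print(abs(mu(5000.0) - 1 / (48 * 5000.0)) < 1e-13)  # True
# ... and decreases monotonically in t:
vals = [mu(float(t)) for t in range(1000, 10000, 1000)]
print(all(a > b for a, b in zip(vals, vals[1:])))  # True
```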
\begin{figure} \caption{The factor $\lambda(\sigma)$ as a function of the real part of the complex number} \label{fig:lambda_func} \end{figure} \begin{equation}\label{tau3}\tau(s)=\Delta\varphi_{\chi}=\lambda(\sigma)\mu(t);\end{equation} \par The coefficient $\lambda(\sigma)$ shows by which number the argument of the remainder term $\mu(t)$ of the gamma function in the Riemann--Gabcke form must be multiplied to obtain the argument of the remainder term $\tau(s)$ of the CHI function. \par We will find an explanation for this paradoxical identity in the further study of the expressions that determine a value of the Riemann zeta function. \par We have obtained an exact expression for the angle $\alpha_1$ of the first middle vector of the Riemann spiral: \par \begin{equation}\label{alpha1_ex}\alpha_1=t(\log{\frac{t}{2\pi}}-1)-\frac{\pi}{4}+\lambda(\sigma)\mu(t);\end{equation} \par as well as an exact expression for the CHI function: \begin{equation}\label{chi_eq_ex3}\chi(s)= \Big(\frac{t}{2\pi}\Big)^{(\frac{1}{2}-\sigma)}e^{-i(t(\log{\frac{t}{2\pi}}-1)-\frac{\pi}{4}+\lambda(\sigma)\mu(t))};\end{equation} \par Now we can perform the exact construction of the inverse Riemann spiral. \subsection{Representation of the second approximate equation of the Riemann zeta function by a vector system} We write the second approximate equation of the Riemann zeta function \cite{HA1} in vector form: \begin{equation}\label{zeta_eq_2}\zeta(s)=\sum_{n\le x}{\frac{1}{n^s}}+\chi(s)\sum_{n\le y}{\frac{1}{n^{1-s}}}+\mathcal{O}(x^{-\sigma})+\mathcal{O}(|t|^{1/2-\sigma}y^{\sigma-1}); \end{equation} \begin{equation}\label{zeta_eq_2_cond}0<\sigma <1; \quad 2\pi xy=|t|;\end{equation} \begin{equation}\label{chi_eq2}\chi(s)=\frac{(2\pi)^s}{2\Gamma(s)\cos(\frac{\pi s}{2})}=2^s\pi^{s-1}\sin(\frac{\pi s}{2})\Gamma(1-s);\end{equation} \par We use the exponential form of a complex number, then, using Euler's formula, pass to the trigonometric form and obtain the coordinates of the vectors.
Put $x=y$; then for $m=\Big[\sqrt{\frac{t}{2\pi}}\Big]$ we get: \begin{equation}\label{zeta_eq_2_vect}\zeta(s)=\sum_{n=1}^{m}{X_n(s)}+\sum_{n=1}^{m}{Y_n(s)}+R(s);\end{equation} \begin{description} \item where \begin{equation}\label{x_vect}X_n(s)=\frac{1}{n^{s}}=\frac{1}{n^{\sigma}}e^{-it\log(n)}=\frac{1}{n^{\sigma}}(\cos(t\log(n))-i\sin(t\log(n)));\end{equation} \begin{equation}\label{y_vect}Y_n(s)=\chi(s)\frac{1}{n^{1-s}}=\chi(s)\frac{1}{n^{1-\sigma}}e^{it\log(n)} =\chi(s)\frac{1}{n^{1-\sigma}}(\cos(t\log(n))+i\sin(t\log(n)));\end{equation} \item $R(s)$ is a function of the complex variable which we will estimate later using the exact values of $\zeta(s)$ and $\chi(s)$. \end{description} \par The vector system (\ref{zeta_eq_2_vect}) determines a value of $\zeta(s)$ on each interval: \begin{equation}\label{m_interval}t\in[2\pi m^2,2\pi (m+1)^2); \quad m=1,2,3...\end{equation} \begin{figure} \caption{Finite vector system, $s=0.25+5002.981i$} \label{fig:finite_vector_system} \end{figure} \par We see that the first sum in (\ref{zeta_eq_2_vect}) corresponds to the vectors of the Riemann spiral, and the second to the middle vectors of the Riemann spiral. \par Thus we can explain the geometric meaning of the second approximate equation of the Riemann zeta function. \begin{figure} \caption{Gap of the vector system, $s=0.25+5002.981i$} \label{fig:remainder_gap} \end{figure} \par In the analysis of the Riemann spiral we obtained the number of reverse points $$m=\sqrt{\frac{t}{2\pi}+\frac{1}{4}}$$ from the condition that between two reverse points there is at least one vector of the Riemann spiral. \par If we look at the inverse Riemann spiral (fig. \ref{fig:reverse_spiral_approx}), we note that a certain number of reverse points corresponds, on the one hand, to the middle vectors of the Riemann spiral and, on the other hand, to those vectors of the Riemann spiral that have not yet twisted into a convergent and then a divergent spiral.
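\par The finite vector system (\ref{zeta_eq_2_vect}) can be built directly from the formulas above; the sketch below is ours and uses the asymptotic $\tilde\chi(s)$ of (\ref{chi_eq_app3}) in place of the exact CHI function, which is accurate enough at this $t$ to illustrate the construction.

```python
import cmath
from math import log, pi, floor, sqrt

def chi_approx(s):
    """Asymptotic CHI (chi_eq_app3); accurate enough to illustrate the system."""
    sigma, t = s.real, s.imag
    return (t / (2 * pi)) ** (0.5 - sigma) * cmath.exp(
        -1j * (t * (log(t / (2 * pi)) - 1) - pi / 4))

def vector_system(s):
    """X_n and Y_n of (zeta_eq_2_vect) with m = floor(sqrt(t/(2 pi))), x = y."""
    sigma, t = s.real, s.imag
    m = floor(sqrt(t / (2 * pi)))
    X = [n ** -complex(sigma, t) for n in range(1, m + 1)]       # X_n = n^(-s)
    c = chi_approx(s)
    Y = [c * n ** complex(sigma - 1, t) for n in range(1, m + 1)]  # Y_n = chi * n^(s-1)
    return X, Y, m

s = 0.25 + 5002.981j
X, Y, m = vector_system(s)
t = s.imag
# t lies in the interval [2 pi m^2, 2 pi (m+1)^2) of (m_interval):
assert 2 * pi * m ** 2 <= t < 2 * pi * (m + 1) ** 2
# moduli follow n^(-sigma) and |chi| * n^(sigma - 1):
assert all(abs(abs(X[n - 1]) - n ** -s.real) < 1e-12 for n in range(1, m + 1))
assert all(abs(abs(Y[n - 1]) - abs(chi_approx(s)) * n ** (s.real - 1)) < 1e-9
           for n in range(1, m + 1))
print(m)  # 28 for this s; sum(X) + sum(Y) then approximates zeta(s) up to R(s)
```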
\par \textit{If we remove the vectors that twist into spirals, we get the vector system of (fig. \ref{fig:finite_vector_system}), which corresponds to the second approximate equation of the Riemann zeta function.} \par We can also explain the geometric meaning of the remainder term of the second approximate equation of the Riemann zeta function. \par As we increase the scale of the picture of the vector system (fig. \ref{fig:finite_vector_system}), we see a gap between the vectors and the middle vectors of the Riemann spiral (fig. \ref{fig:remainder_gap}); this gap is the remainder term of the second approximate equation of the Riemann zeta function. \subsection{The axis of symmetry of the vector system of the second approximate equation of the Riemann zeta function - conformal symmetry} \par In the process of the geometric derivation of the functional equation of the Riemann zeta function, we found that the angles (\ref{depence_9}) between the first middle vector of the Riemann spiral and any other middle vector are equal in modulus and opposite in sign to the angles between the first vector and the corresponding vector of the Riemann spiral. \par We obtained the same result in (\ref{x_vect}) and (\ref{y_vect}) when writing the second approximate equation of the Riemann zeta function in vector form. \par We will show 1) that the angles between any two middle vectors $Y_i$ and $Y_j$ of the Riemann spiral are equal in modulus and opposite in sign, if the angle is measured from the respective vectors, to the angles between the corresponding vectors $X_i$ and $X_j$ of the Riemann spiral, and 2) that there is a line whose angles with any pair of corresponding vectors $Y_i$ and $X_i$ of the Riemann spiral are equal in modulus and opposite in sign, if the angle is measured from this line (Lemma 1).
\par Let the first middle vector $Y_1$, two arbitrary middle vectors $Y_i$ and $Y_j$ of the Riemann spiral, the vector $X_1$ and the two corresponding vectors $X_i$ and $X_j$ of the Riemann spiral correspond to the segments $A_1A_2$, $A_2A_3$, $A_3A_4$, $A_1'A_2'$, $A_2'A_3'$ and $A_3'A_4'$ respectively. \par Consider (fig. \ref{fig:conformal_symmetry_lemma}) two polylines formed by the vertices $A_1A_2A_3A_4$ and $A_1'A_2'A_3'A_4'$ respectively, oriented arbitrarily. Then the edges $A_2A_3$ and $A_3A_4$ make angles with the edge $A_1A_2$ that are equal in modulus and opposite in sign to the angles which the edges $A_2'A_3'$ and $A_3'A_4'$ respectively make with the edge $A_1'A_2'$, if we measure the angles from the edges $A_1A_2$ and $A_1'A_2'$ respectively. \par 1) We show that the angle between the edges $A_2A_3$ and $A_3A_4$ is equal in modulus and opposite in sign to the angle between the edges $A_2'A_3'$ and $A_3'A_4'$, if we measure the angle from the corresponding edges, for example from $A_3A_4$ and $A_3'A_4'$. \begin{figure} \caption{Polylines with equal angles} \label{fig:conformal_symmetry_lemma} \end{figure} \par A) Consider first (fig. \ref{fig:conformal_symmetry_lemma}) the case when the edges $A_1A_2$, $A_3A_4$ and the edges $A_1'A_2'$, $A_3'A_4'$ respectively are not parallel to each other. \par Extending the edges $A_1A_2$, $A_3A_4$ and the edges $A_1'A_2'$, $A_3'A_4'$ to their intersections, we obtain the triangles $A_2B_2A_3$ and $A_2'B_2'A_3'$ respectively (fig. \ref{fig:conformal_symmetry_lemma}). \par These triangles are similar by two angles, because the angles $A_1B_2A_4$ and $A_1'B_2'A_4'$ are equal in modulus, being the angles of the edges $A_3A_4$ and $A_3'A_4'$ with the edges $A_1A_2$ and $A_1'A_2'$ respectively, and the angles $B_2A_2A_3$ and $B_2'A_2'A_3'$ are adjacent to the angles $A_1A_2A_3$ and $A_1'A_2'A_3'$ respectively, which are also equal in modulus, being the angles of the edges $A_2A_3$ and $A_2'A_3'$ with the edges $A_1A_2$ and $A_1'A_2'$ respectively.
\par Hence, the angles $A_2A_3A_4$ and $A_2'A_3'A_4'$ are equal in modulus as the angles adjacent to the angles $A_2A_3B_2$ and $A_2'A_3'B_2'$, respectively, which are equal as corresponding angles of similar triangles. \par B) Now consider (fig. \ref{fig:conformal_symmetry_2_lemma}) the case when the edges $A_1A_2$, $A_3A_4$ and the edges $A_1'A_2'$, $A_3'A_4'$, respectively, are parallel to each other. \begin{figure} \caption{Polylines with equal angles, parallel edges} \label{fig:conformal_symmetry_2_lemma} \end{figure} \par The extensions of the edges $A_1A_2$, $A_3A_4$ and of the edges $A_1'A_2'$, $A_3'A_4'$ do not intersect, because they form the parallel lines $A_1B_2$, $A_4B_3$ and $A_1'B_2'$, $A_4'B_3'$, respectively. \par Thus the edges $A_2A_3$ and $A_2'A_3'$ are transversals of these parallel lines, respectively. \par Hence, the angles $A_2A_3A_4$ and $A_2'A_3'A_4'$ are equal in modulus as corresponding angles at a transversal of two parallel lines, because the angles $A_1A_2A_3$ and $A_1'A_2'A_3'$ are equal in modulus, as the angles of the edges $A_2A_3$ and $A_2'A_3'$ with the edges $A_1A_2$ and $A_1'A_2'$, respectively. \par If we measure the angle $A_2A_3A_4$ from the edge $A_3A_4$, it must be counted anti-clockwise, so it has a positive sign. \par If we measure the angle $A_2'A_3'A_4'$ from the edge $A_3'A_4'$, it must be counted clockwise, so it has a negative sign. \par Therefore, the angle between the edges $A_2A_3$ and $A_3A_4$ is equal in modulus and opposite in sign to the angle between the edges $A_2'A_3'$ and $A_3'A_4'$, if we measure the angles from the edges $A_3A_4$ and $A_3'A_4'$, respectively. \par 2) Now we extend the edges $A_1A_2$ and $A_1'A_2'$ to their intersection and divide the angle $A_2O_1A_2'$ into two equal angles; we obtain a line $O_1O_3$ whose angles with the edges $A_1A_2$ and $A_1'A_2'$ are equal in modulus and opposite in sign, if we measure the angles from this line (fig. \ref{fig:conformal_symmetry_lemma}).
\par We show that the angles of the line $O_1O_3$ with the edges $A_2A_3$, $A_3A_4$ and $A_2'A_3'$, $A_3'A_4'$, respectively, are also equal in modulus and opposite in sign, if we measure the angles from this line. \par A) Consider first (fig. \ref{fig:conformal_symmetry_lemma}) the case when the edges $A_2A_3$ and $A_2'A_3'$ are not parallel to each other (the sum of the angles $A_2O_1O_3$, $A_1A_2A_3$ and of the angles $A_2'O_1O_3$, $A_1'A_2'A_3'$ is not equal to $\pi$). \par Extending the edges $A_2A_3$, $A_2'A_3'$ to their intersections with the line $O_1O_3$, we obtain the triangles $O_1A_2O_2$ and $O_1A_2'O_2'$, respectively. \par These triangles are similar, having two pairs of equal angles: the angles $A_2O_1O_3$ and $A_2'O_1O_3$ are equal in modulus by construction, and the angles $A_1A_2A_3$ and $A_1'A_2'A_3'$ are equal in modulus, as the angles of the edges $A_1A_2$ and $A_1'A_2'$ with the edges $A_2A_3$ and $A_2'A_3'$, respectively. \par Hence, the angles $A_2O_2O_1$ and $A_2'O_2'O_1$ are equal in modulus as corresponding angles of similar triangles. \par B) Now consider (fig. \ref{fig:conformal_symmetry_3_lemma}) the case when the edges $A_2A_3$ and $A_2'A_3'$ are parallel to each other (the sum of the angles $A_2O_1O_3$, $A_1A_2A_3$ and of the angles $A_2'O_1O_3$, $A_1'A_2'A_3'$ is equal to $\pi$). \begin{figure} \caption{Polylines with equal angles, sum of angles $\pi$} \label{fig:conformal_symmetry_3_lemma} \end{figure} \par In this case, the lines $O_1A_2$ and $O_1A_2'$ are transversals of the two pairs of parallel lines $O_1O_3$, $A_2O_2$ and $O_1O_3$, $A_2'O_2'$, respectively, since the angles $A_2O_1O_3$ and $A_2'O_1O_3$ are equal in modulus by construction, the angles $A_1A_2A_3$ and $A_1'A_2'A_3'$ are equal in modulus as the angles of the edges $A_1A_2$ and $A_1'A_2'$ with the edges $A_2A_3$ and $A_2'A_3'$, respectively, and the sum of the corresponding angles at a transversal of parallel lines is equal to $\pi$. \par Therefore, the angles between the segments $A_2A_3$, $A_2'A_3'$ and the line $O_1O_3$ are equal to zero, because $A_2A_3$ and $A_2'A_3'$ are parallel to the line $O_1O_3$.
\par Extending the edges $A_3A_4$, $A_3'A_4'$ to their intersections with the line $O_1O_3$, we obtain the triangles $O_1B_2O_3$ and $O_1B_2'O_3'$, respectively (fig. \ref{fig:conformal_symmetry_lemma}). \par These triangles are similar, having two pairs of equal angles: the angles $A_2O_1O_3$ and $A_2'O_1O_3$ are equal in modulus by construction, and the angles $A_1B_2A_4$ and $A_1'B_2'A_4'$ are equal in modulus, as the angles of the edges $A_1A_2$ and $A_1'A_2'$ with the edges $A_3A_4$ and $A_3'A_4'$, respectively. \par Hence, the angles $B_2O_3O_1$ and $B_2'O_3'O_1$ are equal in modulus as corresponding angles of similar triangles. \par If we measure the angles $A_2O_2O_1$ and $B_2O_3O_1$ from the line $O_1O_3$, they must be counted anti-clockwise, so both have a positive sign. \par If we measure the angles $A_2'O_2'O_1$ and $B_2'O_3'O_1$ from the line $O_1O_3$, they must be counted clockwise, so both have a negative sign. \par Therefore, the line $O_1O_3$ has angles equal in modulus and opposite in sign, if we measure the angles from this line, with the edges $A_1A_2$, $A_2A_3$, $A_3A_4$ and $A_1'A_2'$, $A_2'A_3'$, $A_3'A_4'$, respectively. $\square$ \par According to Lemma 1, the vector system of the second approximate equation of the Riemann zeta function has \textit{a special kind of symmetry}: there is a line whose angles with any pair of corresponding vectors $Y_i$ and $X_i$ of the Riemann spiral are equal in modulus and opposite in sign, if we measure the angles from this line. \par It should be noted that this symmetry of angles persists when $\sigma=1/2$, when, as we will show later, the vector system of the second approximate equation of the Riemann zeta function also has \textit{mirror symmetry.} \par To distinguish these two types of symmetry, we give a name to this special kind of symmetry of the vector system of the second approximate equation of the Riemann zeta function, by analogy with conformal transformations, which preserve angles.
\par \textit{Conformal symmetry is a special kind of symmetry in which there is a line whose angles with any pair of corresponding segments are equal in modulus and opposite in sign, if we measure the angles from this line.} \par The angle $\hat\varphi_M$ of the axis of mirror symmetry is equal to the angle $\varphi_M$ of the axis of conformal symmetry: \begin{equation}\label{phi_m}\hat\varphi_M=\varphi_M=\frac{Arg(\chi(s))}{2}+\frac{\pi}{2};\end{equation} \par In other words, they are the same line if we draw the axis of conformal symmetry at the same distance from the end of the first middle vector $Y_1$ and from the end of the first vector $X_1$ of the Riemann spiral. \subsection{Mirror symmetry of the vector system of the second approximate equation of the Riemann zeta function} In 1932 Siegel published Riemann's notes \cite{SI}, in which Riemann, unlike Hardy and Littlewood, represented the remainder term of the second approximate equation of the Riemann zeta function explicitly: \begin{equation}\label{zeta_eq_2_zi}\zeta(s)=\sum_{l=1}^{m}{l^{-s}}+\frac{(2\pi)^s}{2\Gamma(s)\cos(\frac{\pi s}{2})}\sum_{l=1}^{m}{l^{s-1}}+(-1)^{m-1}\frac{(2\pi)^{\frac{s+1}{2}}}{\Gamma(s)}t^{\frac{s-1}{2}}e^{\frac{\pi is}{2}-\frac{ti}{2}-\frac{\pi i}{8}}\mathcal{S}; \end{equation} \begin{equation}\label{rem_sum}\mathcal{S}=\sum_{0\le 2r\le k\le n-1}{\frac{2^{-k}i^{r-k}k!}{r!(k-2r)!}a_kF^{(k-2r)}(\delta)}+\mathcal{O}\Big(\big(\frac{3n}{t}\big)^{\frac{n}{6}}\Big);\end{equation} \begin{equation}\label{rem_param}n\le 2\cdot 10^{-8}t; m=\Big[\sqrt{\frac{t}{2\pi}}\Big]; \delta=\sqrt{t}-(m+\frac{1}{2})\sqrt{2\pi};\end{equation} \begin{equation}\label{rem_func}F(u) =\frac{\cos{(u^2+\frac{3\pi}{8})}}{\cos{(\sqrt{2\pi}u)}};\end{equation} \par We use the approximate expression (\ref{gamma_app}) for the gamma function to write the expression for the remainder term of the second approximate equation of the Riemann zeta function in exponential form.
\begin{equation}\label{rem_exp}R(s) = (-1)^{m-1}\Big(\frac{t}{2\pi}\Big)^{-\frac{\sigma}{2}}e^{-i[\frac{t}{2}(\log{\frac{t}{2\pi}}-1)-\frac{\pi}{8}+\mu(s)]}\mathcal{S};\end{equation} \par Using an estimate for the sum $\mathcal{S}$, which can be found, for example, in Titchmarsh \cite{TI}: \begin{equation}\label{sum_app}\mathcal{S}=\frac{\cos{(\delta^2+\frac{3\pi}{8})}}{\cos{(\sqrt{2\pi}\delta)}}+\mathcal{O}(t^{-\frac{1}{2}});\end{equation} we can extract the expression for the argument of the remainder term of the second approximate equation of the Riemann zeta function: \begin{equation}\label{arg_rem}Arg(R(s))=-(\frac{t}{2}(\log{\frac{t}{2\pi}}-1)-\frac{\pi}{8}+\mu(s));\end{equation} \par Taking into account the expression (\ref{mu}) for the argument of the remainder term of the gamma function when $\sigma=1/2$, we obtain an expression for the argument of the remainder term of the second approximate equation of the Riemann zeta function on the critical line: \begin{equation}\label{arg_rem_2}Arg(\hat R(s))=-(\frac{t}{2}(\log{\frac{t}{2\pi}}-1)-\frac{\pi}{8}+\mu(t));\end{equation} \par In the derivation of exact expressions for the CHI function, we found the expression (\ref{tau3}), which shows that when $\sigma=1/2$ the value of the argument of the remainder term $\tau(s)$ of the CHI function is exactly twice the argument of the remainder term $\mu(t)$ of the gamma function (\ref{mu}) in the Riemann-Gabcke form.
\begin{equation}\label{arg_chi}Arg(\hat\chi(s))=-(t(\log{\frac{t}{2\pi}}-1)-\frac{\pi}{4}+2\mu(t));\end{equation} \par Comparing the expression (\ref{arg_rem_2}) for the argument of the remainder term of the second approximate equation of the Riemann zeta function with the expression (\ref{arg_chi}) for the argument of the CHI function on the critical line, we find a fundamental property of the vector of the remainder term of the second approximate equation of the Riemann zeta function: \begin{equation}\label{arg_rem_3}Arg(\hat R(s))=\frac{Arg(\hat\chi(s))}{2};\end{equation} \par \textit{On the critical line, the argument of the remainder term of the second approximate equation of the Riemann zeta function is exactly half the argument of the CHI function.} \par Comparing (\ref{phi_m}) and (\ref{arg_rem_3}), we see that when $\sigma=1/2$ the vector of the remainder term $\hat R(s)$ is perpendicular to the axis of symmetry of the vector system of the second approximate equation of the Riemann zeta function: \begin{equation}\label{arg_rem_4}Arg(\hat R(s))=\varphi_L;\end{equation} \par As we will show later, this fact is fundamental for the existence of non-trivial zeros of the Riemann zeta function on the critical line. \par As the imaginary part of the complex number changes, the vectors of the vector system of the second approximate equation of the Riemann zeta function can occupy any position in the entire range of angles $[0, 2\pi]$, as a consequence of which they form a polyline with self-intersections (fig. \ref{fig:finite_vector_system}), which complicates the analysis of this vector system. \par To obtain a polyline formed by the vectors $X_n$ and the middle vectors $Y_n$ of the Riemann spiral without self-intersections, the vectors can be ordered by angle; they then form a polyline with no intersections (fig. \ref{fig:finite_vector_system_perm}).
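\par Returning to the remainder term (\ref{rem_exp})--(\ref{sum_app}), its size can be checked numerically. The following sketch (our illustration, not code from the paper) evaluates the Riemann-Siegel main sum in the standard Hardy $Z$-function convention together with the leading remainder term $F(\delta)$ from (\ref{rem_param})--(\ref{rem_func}), and compares the result with mpmath's built-in \texttt{siegelz}; the choice $t=5002.981$ matches the value used in the figures.

```python
# Sketch: Riemann-Siegel main sum plus the leading remainder term F(delta).
# Conventions are the standard ones for Hardy's Z-function (mpmath's
# siegeltheta / siegelz), not taken from the paper itself.
import mpmath as mp

mp.mp.dps = 30
t = mp.mpf("5002.981")                      # value used in the figures
theta = mp.siegeltheta(t)

m = int(mp.floor(mp.sqrt(t / (2 * mp.pi))))
# main sum: 2 * sum_{n<=m} cos(theta(t) - t*log n) / sqrt(n)
main = 2 * mp.fsum(mp.cos(theta - t * mp.log(n)) / mp.sqrt(n)
                   for n in range(1, m + 1))

# leading remainder term: (-1)^(m-1) * (t/2pi)^(-1/4) * F(delta), with
# F(u) = cos(u^2 + 3pi/8)/cos(sqrt(2pi)*u), delta = sqrt(t) - (m+1/2)*sqrt(2pi)
delta = mp.sqrt(t) - (m + mp.mpf("0.5")) * mp.sqrt(2 * mp.pi)
F = mp.cos(delta**2 + 3 * mp.pi / 8) / mp.cos(mp.sqrt(2 * mp.pi) * delta)
R = (-1)**(m - 1) * (t / (2 * mp.pi))**mp.mpf("-0.25") * F

Z = mp.siegelz(t)                           # reference value
print(abs(main - Z), abs(main + R - Z))     # gap without / with the remainder
```

At this height, including the leading remainder term shrinks the discrepancy from about $10^{-1}$ to below $10^{-2}$, which is the gap between the vectors discussed above, rendered numerically.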
\begin{figure} \caption{Permutation of vectors of the Riemann spiral vector system, $s=0.25+5002.981i$} \label{fig:finite_vector_system_perm} \end{figure} \par We now determine the properties of the vector system of the second approximate equation of the Riemann zeta function under permutation of its vectors. \par As is known from analytic geometry, the sum of vectors obeys the commutative law, i.e. it does not change when the vectors are permuted. \par Thus, a permutation of the vectors of the vector system of the second approximate equation of the Riemann zeta function does not affect the value of the Riemann zeta function. \par Since conformal symmetry, by definition, depends only on angles and not on the actual position of the segments, under a permutation of the vectors of the vector system of the second approximate equation of the Riemann zeta function conformal symmetry is preserved, because the angles between the lines formed by the vectors and the axis of symmetry do not change. \par For mirror symmetry, however, a permutation of the vectors forms a new pair of vertices, and it is necessary to verify that they lie on the same line perpendicular to the axis of symmetry and at the same distance from the axis of symmetry. \begin{figure} \caption{Permutation of vectors, new pair of vertices} \label{fig:vector_permutation} \end{figure} \par Consider (fig. \ref{fig:vector_permutation}) the symmetric polygon $S_1$ formed by the vertices $A_1A_2A_3$ $A_3'A_2'A_1'$. \par The axis of symmetry $O_1O_3$ divides the segments $A_1'A_1$, $A_2'A_2$ and $A_3'A_3$ into equal parts, because the vertices $A_1'$ and $A_1$, $A_2'$ and $A_2$, $A_3'$ and $A_3$ are mirror-symmetric with respect to the axis of symmetry $O_1O_3$. \par Exchanging the edges $A_1'A_2'$, $A_2'A_3'$ and $A_1A_2$, $A_2A_3$, we obtain two new vertices $\hat A_2'$ and $\hat A_2$.
\par Connecting the vertices $\hat A_2'$, $O_3$ and $\hat A_2$, $O_3$, we obtain the triangles $T_1'$ and $T_1$ formed by the vertices $\hat A_2'A_3'O_3$ and $\hat A_2A_3O_3$, respectively. \par We show that the triangles $T_1'$ and $T_1$ are congruent. \par The parallelograms $A_1'A_2'A_3'\hat A_2'$ and $A_1A_2A_3\hat A_2$ are congruent by construction; therefore, the angles $A_2'A_3'\hat A_2'$ and $A_2A_3\hat A_2$ are equal. \par In the source polygon $S_1$, the angles $A_2'A_3'O_3$ and $A_2A_3O_3$ are equal; hence the angles $\hat A_2'A_3'O_3$ and $\hat A_2A_3O_3$ are equal as differences of equal angles. \par Then the triangles $T_1'$ and $T_1$ are congruent by the equality of the two sides $\hat A_2'A_3'$, $A_3'O_3$ and $\hat A_2A_3$, $A_3O_3$ and of the angles $\hat A_2'A_3'O_3$ and $\hat A_2A_3O_3$ between them. \par Connecting the vertices $\hat A_2'$ and $\hat A_2$, we obtain the point $\hat O_2$ of intersection of the segment $\hat A_2'\hat A_2$ and the axis of symmetry $O_1O_3$. \par The angles $\hat A_2'O_3A_3'$ and $\hat A_2O_3A_3$ are equal as corresponding angles of the congruent triangles $T_1'$ and $T_1$; hence the angles $\hat A_2'O_3\hat O_2$ and $\hat A_2O_3\hat O_2$ are equal. \par Then the triangle $\hat A_2'O_3\hat A_2$ is isosceles and the segment $\hat O_2O_3$ is its altitude, because in an isosceles triangle the bisector drawn from the apex is also the altitude. \par Therefore, the vertices $\hat A_2'$ and $\hat A_2$ lie on a line perpendicular to the axis of symmetry $O_1O_3$ and at the same distance from the axis of symmetry $O_1O_3$. \par Thus, when the corresponding edges of a symmetric polygon are permuted, the mirror symmetry is preserved (Lemma 2). $\square$ \par We now establish a property of the vector system of the second approximate equation of the Riemann zeta function when $\sigma=1/2$. \par Consider (fig.
\ref{fig:conformal_vectors}) the polygon formed by the vertices $A_1A_2A_3A_3'A_2'A_1'$, whose edges $A_1'A_2'$, $A_2'A_3'$ and $A_1A_2$, $A_2A_3$ are equal and have conformal symmetry with respect to the axis of symmetry $O_1O_3$, and whose vertices $A_1'$ and $A_1$ are mirror-symmetric with respect to the axis of symmetry $O_1O_3$. \begin{figure} \caption{Equal vectors possessing conformal symmetry} \label{fig:conformal_vectors} \end{figure} \par We show that the vertices $A_2'$, $A_3'$ and $A_2$, $A_3$, respectively, are mirror-symmetric about the axis of symmetry $O_1O_3$ (Lemma 3). \par The axis of symmetry $O_1O_3$ divides the segment $A_1'A_1$ into equal parts, and the segment $A_1'A_1$ is perpendicular to the axis of symmetry $O_1O_3$. \par The angles $A_1A_2A_3$ and $A_1'A_2'A_3'$ are equal by Lemma 1, since the edges $A_1A_2$, $A_2A_3$ and $A_1'A_2'$, $A_2'A_3'$, respectively, have conformal symmetry. \par Construct the segments $O_1A_2'$, $O_1A_3'$ and $O_1A_2$, $O_1A_3$. \par The triangles $O_1A_1'A_2'$ and $O_1A_1A_2$ are congruent by the equality of the two sides $O_1A_1'$, $A_1'A_2'$ and $O_1A_1$, $A_1A_2$, respectively, and of the angles $O_1A_1'A_2'$ and $O_1A_1A_2$ between them. \par The angles $O_1A_2'A_3'$ and $O_1A_2A_3$ are equal as parts of the equal angles $A_1'A_2'A_3'$ and $A_1A_2A_3$, since the angles $A_1'A_2'O_1$ and $A_1A_2O_1$ are equal as corresponding angles of congruent triangles. \par The triangles $O_1A_2'A_3'$ and $O_1A_2A_3$ are congruent by the equality of the two sides $O_1A_2'$, $A_2'A_3'$ and $O_1A_2$, $A_2A_3$, respectively, and of the angles $O_1A_2'A_3'$ and $O_1A_2A_3$ between them. \par The angles $O_3O_1A_2'$ and $O_3O_1A_2$ are equal as parts of the equal angles $A_1'O_1O_3$ and $A_1O_1O_3$, since the angles $A_1'O_1A_2'$ and $A_1O_1A_2$ are equal as corresponding angles of congruent triangles.
\par The angles $O_3O_1A_3'$ and $O_3O_1A_3$ are equal as parts of the equal angles $A_1'O_1O_3$ and $A_1O_1O_3$, since the angles $A_1'O_1A_2'$, $A_1O_1A_2$ and $A_2'O_1A_3'$, $A_2O_1A_3$, respectively, are equal as corresponding angles of congruent triangles. \par The triangles $A_3'O_1A_3$ and $A_2'O_1A_2$ are isosceles, and the segments $O_1O_2$ and $O_1O_3$ are respectively their altitudes; hence the segments $O_1O_2$ and $O_1O_3$ divide the segments $A_2'A_2$ and $A_3A_3'$ into equal parts, and the segments $A_2'A_2$ and $A_3A_3'$ are perpendicular to the axis of symmetry $O_1O_3$. $\square$ \par We construct the polyline formed by the vectors of the second approximate equation of the Riemann zeta function when $\sigma=1/2$ (fig. \ref{fig:mirror_symmetry}). \par In accordance with (\ref{x_vect}) and (\ref{y_vect}) for $X_n$ and $Y_n$, respectively, all segments $A_1'A_2', A_2'A_3', \ldots, A_{m-2}'A_{m-1}', A_{m-1}'A_m'$ and $A_1A_2, A_2A_3, \ldots, A_{m-2}A_{m-1}, A_{m-1}A_m$ are equal when $\sigma=1/2$. \par We draw the axis of symmetry $M$ of the vector system of the second approximate equation of the Riemann zeta function through the middle of the segment formed by the vector $\hat R(s)$ of the remainder term when $\sigma=1/2$. \par We obtain two mirror-symmetric vertices $A_m'$ and $A_m$, because when $\sigma=1/2$ the vector $\hat R(s)$ of the remainder term is perpendicular to the axis of symmetry $M$ of the vector system of the second approximate equation of the Riemann zeta function. \begin{figure} \caption{Mirror symmetry of the Riemann spiral vector system, $s=0.5+5002.981i$} \label{fig:mirror_symmetry} \end{figure} \par \textit{According to Lemma 3, the vector system of the second approximate equation of the Riemann zeta function has mirror symmetry when $\sigma=1/2$} \footnote{We later show that the mirror symmetry of the vector system of the second approximate equation of the Riemann zeta function is also determined by the argument of the Riemann zeta function when $\sigma=1/2$.}.
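\par The length condition behind this mirror symmetry is easy to verify numerically: on the critical line $|\chi(1/2+it)|=1$, so the segment lengths $|Y_n|=|\chi(s)|\,n^{-1/2}$ and $|X_n|=n^{-1/2}$ are pairwise equal, while off the critical line they are not. A minimal sketch (our illustration, using the standard product formula for $\chi$ from the functional equation and the value of $t$ from the figures):

```python
# Check that |chi(1/2 + it)| = 1, so the vectors X_n = n^{-s} and
# Y_n = chi(s) * n^{s-1} have pairwise equal lengths on the critical line.
import mpmath as mp

mp.mp.dps = 30

def chi(s):
    # standard factor of the functional equation zeta(s) = chi(s) * zeta(1-s)
    return 2**s * mp.pi**(s - 1) * mp.sin(mp.pi * s / 2) * mp.gamma(1 - s)

t = mp.mpf("5002.981")                       # value used in the figures
s_half = mp.mpc(mp.mpf("0.5"), t)
s_off = mp.mpc(mp.mpf("0.25"), t)

print(abs(chi(s_half)))                      # = 1 on the critical line
print(abs(chi(s_off)))                       # > 1 off the critical line

n = 7                                        # any index 1 <= n <= m works
print(abs(n**(-s_half)), abs(chi(s_half) * n**(s_half - 1)))   # equal moduli
```

Off the critical line the moduli of the pairs $X_n$, $Y_n$ differ by the factor $|\chi(s)|$, which is why the mirror symmetry holds only when $\sigma=1/2$.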
\par Corollary 1. The vector $A_1'A_1$ corresponds to the vector of the value of the Riemann zeta function when $\sigma=1/2$; therefore, when $\sigma=1/2$, the argument of the Riemann zeta function corresponds, up to sign, to the direction of the normal $L$ to the axis of symmetry of the vector system of the second approximate equation of the Riemann zeta function. \par Corollary 2. Consider the projection of the vector system of the second approximate equation of the Riemann zeta function when $\sigma=1/2$ onto the axis of symmetry $M$ of this vector system (fig. \ref{fig:mirror_symmetry}). \par The segments $A_1'A_1$ and $A_m'A_m$ are perpendicular to the axis of symmetry $M$; hence their projections onto the axis of symmetry $M$ are equal to zero. \par The projections of the vectors $X_n$ and $Y_n$ onto the axis of symmetry $M$ are equal in modulus and opposite in sign; hence \begin{equation}\label{x_n_y_n_m}(\sum_{n=1}^{m}{X_n})_M+(\sum_{n=1}^{m}{Y_n})_M=0;\end{equation} \par Therefore \begin{equation}\label{zeta_app_m}(\sum_{n=1}^{m}{X_n})_M+(\sum_{n=1}^{m}{Y_n})_M+\hat R(s)_M=0;\end{equation} \par and \begin{equation}\label{zeta_m}\hat \zeta(s)_M=0;\end{equation} \par Corollary 3. Consider the projection of the vector system of the second approximate equation of the Riemann zeta function when $\sigma=1/2$ onto the normal $L$ to the axis of symmetry $M$ of this vector system (fig. \ref{fig:mirror_symmetry}).
\par According to the rules of vector summation, \begin{equation}\label{zeta_app_l}\hat \zeta(s)=\hat \zeta(s)_L=(\sum_{n=1}^{m}{X_n})_L+(\sum_{n=1}^{m}{Y_n})_L+\hat R(s)_L;\end{equation} \par \textit{This expression, which, as we will show later, corresponds to the Riemann-Siegel formula, is used to find the non-trivial zeros of the Riemann zeta function on the critical line.} \par It should be noted that, while the vector $X_1$ remains fixed relative to the axes $x=Re(s)$ and $y=Im(s)$, relative to the normal $L$ to the axis of symmetry and to the axis of symmetry $M$ of the vector system of the second approximate equation of the Riemann zeta function the vectors $X_n$ and $Y_n$ rotate, in accordance with (\ref{x_vect}) and (\ref{y_vect}), towards each other with equal speeds, while the angle of the vector of the remainder term $R(s)$ remains fixed when $\sigma=1/2$. \par This rotation of the vectors $X_n$ and $Y_n$ leads to the cyclic behaviour of the projection $\hat\zeta(s)_L$ of the Riemann zeta function onto the normal $L$ to the axis of symmetry of the vector system of the second approximate equation of the Riemann zeta function on \textit{the critical line,} i.e. $\hat\zeta(s)_L=\zeta(1/2+it)_L$ alternately takes its maximum positive and maximum negative values; therefore, $\hat\zeta(s)_L$ cyclically takes the value zero. \par The projections $\zeta(s)_L$ and $\zeta(s)_M$ of the Riemann zeta function onto the normal $L$ to the axis of symmetry and onto the axis of symmetry $M$ of the vector system of the second approximate equation of the Riemann zeta function, respectively, exhibit the same cyclic behaviour in the critical strip when $\sigma\ne 1/2$, i.e. $\zeta(s)_L$ and $\zeta(s)_M$ alternately take their maximum positive and maximum negative values; hence $\zeta(s)_L$ and $\zeta(s)_M$ cyclically take the value zero, and, as we will show later, if $\zeta(s)_L$ or $\zeta(s)_M$ vanishes for some value $\sigma+it$, then it also vanishes for the value $1-\sigma+it$.
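\par The identity $\hat\zeta(s)_M=0$ and the cyclic sign changes of $\hat\zeta(s)_L$ on the critical line can be illustrated numerically. In the standard convention (mpmath's \texttt{siegeltheta} and \texttt{siegelz}) the product $e^{i\theta(t)}\zeta(1/2+it)$ is Hardy's real-valued $Z$-function: its imaginary part (the component along one fixed direction) vanishes identically, while its real part oscillates and changes sign at each simple zero. A sketch:

```python
# On the critical line, exp(i*theta(t)) * zeta(1/2 + it) is real: one
# component vanishes identically, the other oscillates through zero.
import mpmath as mp

mp.mp.dps = 25

def Z(t):
    t = mp.mpf(t)
    return mp.exp(1j * mp.siegeltheta(t)) * mp.zeta(mp.mpc(0.5, t))

z14, z15 = Z(14), Z(15)
print(mp.im(z14), mp.im(z15))        # both ~ 0: the component along M vanishes
print(mp.re(z14) * mp.re(z15))       # < 0: sign change, a zero lies in (14, 15)
print(mp.siegelz(14) - mp.re(z14))   # ~ 0: agrees with mpmath's siegelz
```

The sign change between $t=14$ and $t=15$ corresponds to the first non-trivial zero at $t\approx 14.1347$.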
\subsection{Non-trivial zeros of the Riemann zeta function} Using the vector equation of the Riemann zeta function (\ref{zeta_eq_2_vect}), we can obtain the vector equation of the non-trivial zeros of the Riemann zeta function: \begin{equation}\label{zeta_eq_2_zero}\sum_{n=1}^{m}{X_n(s)}+\sum_{n=1}^{m}{Y_n(s)}+R(s)=0;\end{equation} \par Denote the sums of the vectors $X_n$ and $Y_n$ by \begin{equation}\label{l_1_l_2}L_1=\sum_{n=1}^{m}{X_n(s)}; L_2=\sum_{n=1}^{m}{Y_n(s)};\end{equation} \par The vectors $L_1$ and $L_2$ are invariants of the vector system of the second approximate equation of the Riemann zeta function, since they depend neither on the order of the vectors $X_n$ and $Y_n$ nor on their number. \par We can now state the geometric condition for the non-trivial zeros of the Riemann zeta function: \begin{equation}\label{zeta_zero_vect}L_1+L_2+R=0;\end{equation} \par This condition means that when the Riemann zeta function takes the value of a non-trivial zero with $\sigma=1/2$, the vectors $L_1$, $L_2$ and $R$ form \textit{an isosceles} triangle, because $|L_1|=|L_2|$ when $\sigma=1/2$; if the Riemann zeta function took the value of a non-trivial zero with $\sigma\ne 1/2$, these vectors would have to form a triangle of \textit{general form,} because $|L_1|\ne |L_2|$ when $\sigma\ne 1/2$. \par According to Hardy's theorem \cite{HA2}, the Riemann zeta function has an infinite number of zeros when $\sigma=1/2$.
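\par The isosceles-triangle claim can be checked numerically: at $\sigma=1/2$ one has $\sum n^{s-1}=\overline{\sum n^{-s}}$ and $|\chi(1/2+it)|=1$, so $|L_1|=|L_2|$ for every $t$, not only at zeros. A sketch (our illustration, with the standard product formula for $\chi$; mpmath's \texttt{zetazero} supplies a non-trivial zero):

```python
# At sigma = 1/2 the sums L1 = sum n^{-s} and L2 = chi(s) * sum n^{s-1}
# have equal moduli; at a non-trivial zero rho the closing vector
# R = -(L1 + L2) makes the triangle L1 + L2 + R = 0 isosceles.
import mpmath as mp

mp.mp.dps = 25

def chi(s):
    # standard factor of the functional equation zeta(s) = chi(s) * zeta(1-s)
    return 2**s * mp.pi**(s - 1) * mp.sin(mp.pi * s / 2) * mp.gamma(1 - s)

rho = mp.zetazero(100)                 # 100th non-trivial zero, 1/2 + i*236.5...
t = mp.im(rho)
m = int(mp.floor(mp.sqrt(t / (2 * mp.pi))))

L1 = sum(n**(-rho) for n in range(1, m + 1))
L2 = chi(rho) * sum(n**(rho - 1) for n in range(1, m + 1))

print(abs(L1), abs(L2))                # equal moduli: the triangle is isosceles
print(abs(mp.zeta(rho)))               # ~ 0: the triangle closes up at a zero
```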
\par \textit{This fact is confirmed by the values of the projections $\hat\zeta(s)_L$ and $\hat\zeta(s)_M$ of the Riemann zeta function onto the normal $L$ to the axis of symmetry and onto the axis of symmetry $M$ of the vector system of the second approximate equation of the Riemann zeta function, respectively, on the critical line.} \par The Riemann zeta function takes the value of a non-trivial zero with $\sigma=1/2$ every time the projection $\hat\zeta(s)_L$ takes the value zero, because the projection $\hat\zeta(s)_M$ is identically equal to zero when $\sigma=1/2$, in accordance with the mirror symmetry of the vector system of the second approximate equation of the Riemann zeta function on the critical line. \par \textit{The geometric meaning of the non-trivial zeros of the Riemann zeta function is that when the Riemann zeta function takes the value of a non-trivial zero with $\sigma=1/2$, the vector system of the second approximate equation of the Riemann zeta function, as a consequence of the mirror symmetry of this vector system on the critical line, in accordance with Lemma 3, forms a symmetric polygon, since the sum of the vectors forming the polygon is equal to zero.} \par Using the vector system of the second approximate equation of the Riemann zeta function, we can also explain the geometric meaning of the Riemann-Siegel function \cite{SI,GA,TI}, which is used to compute the non-trivial zeros of the Riemann zeta function on the critical line: \begin{equation}\label{zeta_ri_si}Z(t)=e^{\theta i}\zeta(\frac{1}{2}+it);\end{equation} \par where \begin{equation}\label{teta}e^{\theta i}=\Big(\chi(\frac{1}{2}+it)\Big)^{\frac{1}{2}};\end{equation} \par According to the rules of multiplication of complex numbers, the Riemann-Siegel function determines the projection $\hat\zeta(s)_L$ of the Riemann zeta function onto the normal $L$ to the axis of symmetry of the vector system of the second approximate equation of the Riemann zeta function, since
\begin{equation}\label{teta2}\theta=\frac{Arg(\chi(\frac{1}{2}+it))}{2}=\varphi_L;\end{equation} \par This corresponds to the results of our research, i.e. $\hat \zeta(s)=\hat \zeta(s)_L$, since $\hat \zeta(s)_M=0$ when $\sigma=1/2$, in accordance with the mirror symmetry of the vector system of the second approximate equation of the Riemann zeta function on the critical line. \par To conclude the research of the vector system of the second approximate equation of the Riemann zeta function, we establish another fundamental fact. \par The first middle vector $Y_1$ of the Riemann spiral rotates, relative to the axes $x=Re(s)$ and $y=Im(s)$, around the fixed first vector $X_1$ of the Riemann spiral and, in accordance with the argument of the CHI function (\ref{chi_eq_app}), makes $N(t)$ complete rotations: \begin{equation}\label{x_1_num}N(t)=\frac{|Arg(\chi(s))|}{2\pi};\end{equation} \par Later we show that when $\sigma=1/2$ the first middle vector $Y_1$ of the Riemann spiral passes through the zero of the complex plane on average once per complete rotation \footnote{The research of the vector system of the second approximate equation of the Riemann zeta function on the critical line shows that if, during some complete rotation around the fixed first vector $X_1$ of the Riemann spiral, the first middle vector $Y_1$ does not pass through the zero of the complex plane, then during another complete rotation it passes through the zero of the complex plane twice.}, since, in accordance with the mirror symmetry of the vector system of the second approximate equation of the Riemann zeta function on the critical line, the end of the first middle vector $Y_1$ makes reciprocating motions along the normal $L$ to the axis of symmetry of this vector system, because this normal passes through the end of the first vector $X_1$ of the Riemann spiral, which is at the zero of the complex plane.
\par \textit{Thus, we can determine the number of non-trivial zeros of the Riemann zeta function on the critical line (as opposed to the Riemann-von Mangoldt formula, which determines the number of non-trivial zeros of the Riemann zeta function in the critical strip) via the number of complete rotations of the first middle vector $Y_1$ of the Riemann spiral:} \begin{equation}\label{zeta_zero_num}N_0(t)=\Bigg[\frac{|Arg(\chi(s))-\alpha_2|}{2\pi}\Bigg]+2;\end{equation} \par where $\alpha_2$ is the argument of the second base point \footnote{A base point is a value of the complex variable at which the first middle vector $Y_1$ occupies the position opposite to the first vector $X_1$ of the Riemann spiral.} of the first middle vector $Y_1$ of the Riemann spiral. \section{Variants of confirmation of the Riemann hypothesis} Before proceeding to the variants of confirmation of the Riemann hypothesis based on the analysis of the vector system of the second approximate equation of the Riemann zeta function, we consider two traditional approaches. \par The authors of the first approach estimate the proportion of non-trivial zeros $k$ and the proportion of simple zeros $k^*$ on the critical line relative to the number $N(T)$ of non-trivial zeros of the Riemann zeta function in the critical strip.
\begin{equation}\label{k}k=\lim_{T\to\infty}\inf\frac{N_0(T)}{N(T)};\end{equation} \begin{equation}\label{k_s}k^*=\lim_{T\to\infty}\inf\frac{N_{0s}(T)}{N(T)};\end{equation} \par The number $N(T)$ of non-trivial zeros of the Riemann zeta function in the critical strip is determined by the Riemann-von Mangoldt formula \cite{TI}: \begin{equation}\label{n_mn}N(T)=\frac{T}{2\pi}(\log{\frac{T}{2\pi}}-1)+\frac{7}{8}+S(T)+\mathcal{O}(\frac{1}{T});\end{equation} \begin{equation}\label{s_mn}S(T)=\frac{1}{\pi}Arg(\zeta(\frac{1}{2}+iT))=\mathcal{O}(\log T), T\to\infty;\end{equation} \par This expression \cite{BU} is used to determine the proportion of non-trivial zeros $k$ and the proportion of simple zeros $k^*$ on the critical line: \begin{equation}\label{k_2}k\ge 1-\frac{1}{R}\log\Big(\frac{1}{T}\int_{1}^{T}|V\psi(\sigma_0+it)|^2dt\Big)+o(1);\end{equation} \begin{equation}\label{k_s_2}k^*\ge 1-\frac{1}{R}\log\Big(\frac{1}{T}\int_{2}^{T}|V\psi(\sigma_0+it)|^2dt\Big)+o(1);\end{equation} \par where $V(s)$ is some function whose number of zeros is the same as the number of zeros of $\zeta(s)$ in a contour bounded by a rectangle (i.e. not on the critical line): \begin{equation}\label{rect}\frac{1}{2}<\sigma<1; 0<t<T;\end{equation} \par $\psi(s)$ is some \glqq mollifier\grqq \ function that has no zeros and compensates for the variation of $|V(s)|$, \par $R$ is some positive real number, and \begin{equation}\label{sigma_0}\sigma_0=\frac{1}{2}-\frac{R}{\log T};\end{equation} \par In recent papers \cite{BU, FE} the function $V(s)$ is used in the form of Levinson \cite{LE}: \begin{equation}\label{v}V(s)=Q(-\frac{1}{\log T}\frac{d}{dt})\zeta(s);\end{equation} \par where $Q(x)$ is a real polynomial with $Q(0)=1$ and $Q'(x)=Q'(1-x)$.
\par In this approach, the authors use different kinds of polynomials $Q(x)$ and \glqq mollifier\grqq \ functions $\psi (s)$, as well as different methods of approximation and estimation of the integral \begin{equation}\label{int}\int_{1}^{T}|V\psi(\sigma_0+it)|^2dt;\end{equation} \par In the paper \cite{BU} of 2011 it is proved that \begin{equation}\label{k_bu}k\ge .4105; k^*\ge .4058\end{equation} \par In the parallel paper \cite{FE}, also of 2011, it is proved that \begin{equation}\label{k_fe}k\ge .4128;\end{equation} \par To understand the dynamics of the results in this direction, we compare them with the paper \cite{CO}, in which it was proved in 1989 that \begin{equation}\label{k_co}k\ge .4088; k^*\ge .4013\end{equation} \par \textit{The difficulty of this approach lies in the fact that the available methods of approximation of the integral (\ref{int}) yield insufficiently accurate results.} \par We hope that the authors of \cite{PR} in 2019 found a way to accurately determine the number of non-trivial zeros in the critical strip and on the critical line, since they claim to have obtained the result $k=1$. \par The second traditional direction of confirmation of the Riemann hypothesis is connected with the direct verification of non-trivial zeros of the Riemann zeta function. \par As we have already mentioned, the Riemann-Siegel formula (\ref{zeta_ri_si}) is used for this purpose. \par One of the recent papers \cite{GO}, published in 2004, besides computing the first $10^{13}$ non-trivial zeros of the Riemann zeta function on the critical line, offers a statistical analysis of these zeros and an improved approximation method for the Riemann-Siegel formula.
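\par This kind of direct verification is easy to reproduce on a small scale (a sketch with mpmath, not Gourdon's method): the zeros are located on the critical line, and their count up to a height $T$ can be compared with the main term $\theta(T)/\pi+1$ of the Riemann-von Mangoldt formula (\ref{n_mn}), the difference being the small term $S(T)$.

```python
# Small-scale direct verification: count the zeros up to T = 100 and
# compare with the main term theta(T)/pi + 1 of the Riemann-von Mangoldt
# formula; S(T) is the small remaining correction.
import mpmath as mp

mp.mp.dps = 20
T = 100

N = mp.nzeros(T)                         # zeros with 0 < Im(s) <= T
main_term = mp.siegeltheta(T) / mp.pi + 1
print(N, main_term)                      # 29 vs ~29.004, so S(100) ~ -0.004

rho = mp.zetazero(29)                    # the 29th zero, Im(rho) ~ 98.83 < T
print(mp.re(rho), abs(mp.zeta(rho)))     # on the critical line, zeta(rho) ~ 0
```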
\par \textit{The complexity of this approach lies in the fact that it is impossible to calculate all the non-trivial zeros of the Riemann zeta function; therefore, by this method of confirmation the Riemann hypothesis can be refuted rather than confirmed.} \par It should be noted that there is \textit{a contradiction} between the results of the first and the second method of confirmation of the Riemann hypothesis. \par The determination of the proportion of non-trivial zeros on the critical line is in no way related to \textit{a specific interval} of the imaginary part of a complex number: the authors of the first approach do not try to show that there is a sufficiently large interval where the Riemann hypothesis is true in accordance with the second method of confirmation of the Riemann hypothesis. \par In other words, by determining only a proportion of non-trivial zeros of the Riemann zeta function on the critical line, the authors of the first direction of confirmation of the Riemann hypothesis \textit{indirectly allow that part of the non-trivial zeros of the Riemann zeta function lies off the critical line in any interval}, i.e. even where these zeros have already been verified by the second method. \par The site \cite{WA} collects some unsuccessful attempts to prove the Riemann hypothesis, some of them given with a detailed error analysis. \par In his presentation, Peter Sarnak \cite{SA}, in the course of analyzing the different approaches to proving the Riemann hypothesis, mentions that about three papers a week are submitted for consideration annually. \par However, the situation with the proof of the Riemann hypothesis remains uncertain, and we tend to agree with Pete Clark \cite{KL}: \par \textit{So far as I know, there is no approach to the Riemann Hypothesis which has been fleshed out far enough to get an even moderately skeptical expert to back it, with any odds whatsoever.
I think this situation should be contrasted with that of Fermat's Last Theorem [FLT]: a lot of number theorists, had they known in say 1990 that Wiles was working on FLT via Taniyama-Shimura, would have found that plausible and encouraging.} \par Based on the research results described in the second section of our paper, we propose several new approaches to confirming the Riemann hypothesis, based on the properties of the vector system of the second approximate equation of the Riemann zeta function: \par 1) the first method is based on determining the exact number (\ref{zeta_zero_num}) of non-trivial zeros of the Riemann zeta function on the critical line; \par 2) the second method is based on the analysis of the projections $\zeta(s)_L$ and $\zeta(s)_M$ of the Riemann zeta function, respectively, on the normal $L$ to the axis of symmetry and on the axis of symmetry $M$ of the vector system of the second approximate equation of the Riemann zeta function in the critical strip; \par 3) the third method is based on the vector condition (\ref{zeta_zero_vect}) for non-trivial zeros of the Riemann zeta function.
\par Determining the exact number (\ref{zeta_zero_num}) of non-trivial zeros of the Riemann zeta function on the critical line is based on several facts: \par 1) the vector system of the second approximate equation of the Riemann zeta function has mirror symmetry (\ref{zeta_m}) when $\sigma=1/2$; \par 2) the vector system of the second approximate equation of the Riemann zeta function rotates around the end of the first vector $X_1$ of the Riemann spiral (\ref{phi_m}) in the fixed coordinate system of the complex plane; \par 3) the vectors $X_n$ and the middle vectors $Y_n$ of the Riemann spiral rotate in opposite directions (\ref{x_vect}) and (\ref{y_vect}) in the moving coordinate system formed by the normal $L$ to the axis of symmetry and the axis of symmetry $M$ of the vector system of the second approximate equation of the Riemann zeta function. \par The first middle vector $Y_1$ of the Riemann spiral rotates with the vector system of the second approximate equation of the Riemann zeta function around the end of the first vector $X_1$ of the Riemann spiral (\ref{phi_m}) in the fixed coordinate system of the complex plane; hence, in accordance with (\ref{chi_eq_app}), the first middle vector $Y_1$ periodically passes a \textit{base point}, where it takes a position opposite to the first vector $X_1$ of the Riemann spiral: \begin{equation}\label{base_point}Arg(\chi(\frac{1}{2}+it_k))=(2k-1)\pi;\end{equation} \par The argument of \textit{the base points} of the first middle vector $Y_1$ of the Riemann spiral differs from the argument of the Gram points \cite{GR} by $\pi/2$: \begin{equation}\label{gram_point}\theta=\frac{Arg(\chi(\frac{1}{2}+it_n))}{2}=(n-1)\pi;\end{equation} \par \textit{We say that a complete rotation of the first middle vector $Y_1$ of the Riemann spiral is the rotation from any base point to the next base point.} \begin{figure} \caption{Base point \#4520, the first middle vector is above the real axis, $s=0.5+5001.099505i$}
\label{fig:gram_point_4520} \end{figure} \par In accordance with the mirror symmetry of the vector system of the second approximate equation of the Riemann zeta function when $\sigma=1/2$, at the base points the end of the first middle vector $Y_1$ of the Riemann spiral lies on the imaginary axis of the complex plane, so this vector can occupy one of two positions: above (fig. \ref{fig:gram_point_4520}) or below (fig. \ref{fig:gram_point_4525}) the real axis of the complex plane. \begin{figure} \caption{Base point \#4525, the first middle vector is below the real axis, $s=0.5+5005.8024855i$} \label{fig:gram_point_4525} \end{figure} \par Since the vectors $X_n$ and the middle vectors $Y_n$ of the Riemann spiral rotate in opposite directions (\ref{x_vect}, \ref{y_vect}) in the moving coordinate system formed by the normal $L$ to the axis of symmetry and the axis of symmetry $M$ of the vector system of the second approximate equation of the Riemann zeta function, in the fixed coordinate system of the complex plane the first middle vector $Y_1$ of the Riemann spiral rotates towards the first vector $X_1$ of the Riemann spiral when it is below the real axis of the complex plane and, conversely, rotates away from the first vector $X_1$ of the Riemann spiral when it is above the real axis of the complex plane. \par If the first middle vector $Y_1$ of the Riemann spiral at a base point rotates towards the first vector $X_1$ of the Riemann spiral, then, in accordance with the mirror symmetry of the vector system of the second approximate equation of the Riemann zeta function when $\sigma=1/2$, before the rotation of the first middle vector $Y_1$ of the Riemann spiral is completed, the axis of symmetry $M$ of this vector system crosses the zero of the complex plane (fig. \ref{fig:root_4525}).
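\par The base-point condition (\ref{base_point}) can be checked numerically under one identification assumption: on the critical line the CHI factor satisfies $Arg(\chi(1/2+it))=-2\theta(t)$, where $\theta$ is the Riemann-Siegel theta function, and the Gram points $g_n$ satisfy $\theta(g_n)=n\pi$; a base point then corresponds to $\theta(t)$ being a half-odd multiple of $\pi$, i.e. offset by $\pi/2$ in $\theta$ from the Gram points. The sketch below uses the third-party mpmath library; the helper name base\_point is our own.

```python
# Sketch: locate a "base point" as the t where theta(t) = (m + 1/2)*pi,
# strictly between Gram points g_m and g_{m+1}.  Requires mpmath.
import mpmath

def base_point(m):
    """Solve theta(t) = (m + 1/2)*pi between the Gram points g_m and g_{m+1}."""
    g0, g1 = mpmath.grampoint(m), mpmath.grampoint(m + 1)
    target = (m + mpmath.mpf(1) / 2) * mpmath.pi
    return mpmath.findroot(lambda t: mpmath.siegeltheta(t) - target,
                           (g0 + g1) / 2)

t_base = base_point(10)   # lies strictly between g_10 and g_11
```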
\par Conversely, if the first middle vector $Y_1$ of the Riemann spiral at a base point rotates away from the first vector $X_1$ of the Riemann spiral, then by the beginning of the rotation of the first middle vector $Y_1$ of the Riemann spiral the axis of symmetry $M$ of this vector system has already crossed the zero of the complex plane (fig. \ref{fig:root_4520}). \begin{figure} \caption{Non-trivial zero of the Riemann zeta function \#4525, after the base point, $s=0.5+5006.208381106i$} \label{fig:root_4525} \end{figure} \begin{figure} \caption{Non-trivial zero of the Riemann zeta function \#4520, before the base point, $s=0.5+5000.834381i$} \label{fig:root_4520} \end{figure} \par As a result of the mirror symmetry of the vector system of the second approximate equation of the Riemann zeta function, when the axis of symmetry $M$ of this vector system crosses the zero of the complex plane, the end of the first middle vector $Y_1$ of the Riemann spiral also crosses the zero of the complex plane, so at that point the vector system of the second approximate equation of the Riemann zeta function forms a closed polyline. \par \textit{It is known from analytic geometry that the sum of vectors forming a closed polyline is equal to zero.} \par Therefore, when the first middle vector $Y_1$ of the Riemann spiral is at a base point, we can be sure that either before that point or after that point the Riemann zeta function takes the value of a non-trivial zero.
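\par A classical numerical counterpart of this bracketing argument (a sketch, not the paper's construction) uses the Hardy Z-function, $Z(t)=e^{i\theta(t)}\zeta(1/2+it)$, which is real on the critical line: a sign change of $Z$ between two points guarantees a non-trivial zero between them. The third-party mpmath library provides $Z$ as mpmath.siegelz and the Gram points as mpmath.grampoint.

```python
# Sketch: a sign change of the Hardy Z-function between consecutive Gram
# points brackets a non-trivial zero on the critical line.  Requires mpmath.
import mpmath

g0 = mpmath.grampoint(0)          # ~ 17.8456
g1 = mpmath.grampoint(1)          # ~ 23.1703
z0, z1 = mpmath.siegelz(g0), mpmath.siegelz(g1)
# The second zero, ~ 1/2 + 21.0220i, is the one bracketed by g0 and g1.
rho2 = mpmath.zetazero(2).imag
```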
\par \textit{This fact allows us to conclude that one non-trivial zero of the Riemann zeta function corresponds to one complete rotation of the first middle vector $Y_1$ of the Riemann spiral.} \par Depending on the combination of positions of the first middle vector $Y_1$ of the Riemann spiral relative to the real axis of the complex plane at adjacent base points, there may be a different number of non-trivial zeros of the Riemann zeta function in the interval between the base points: \par a) one non-trivial zero, if the first middle vector $Y_1$ of the Riemann spiral occupies the same position at two consecutive base points; \par b) two zeros, if at the first base point the first middle vector $Y_1$ of the Riemann spiral is below and at the next base point above the real axis of the complex plane; \par c) no zeros if, on the contrary, at the first base point the first middle vector $Y_1$ of the Riemann spiral is above and at the next base point below the real axis of the complex plane. \par Denote the types of base points: \par $a_1$ - if the first middle vector $Y_1$ of the Riemann spiral occupies a position above\footnote{If the Riemann zeta function can take the value of a non-trivial zero at the base point itself, then the first middle vector $Y_1$ will occupy the position of the first vector $X_1$ of the Riemann spiral (we will not consider this state in detail); we assume that this position belongs to the base point type $a_1$.} the real axis of the complex plane; \par $a_2$ - if the first middle vector $Y_1$ of the Riemann spiral occupies a position below the real axis of the complex plane.
\par Then we can define the sequences of base points of the same type, which correspond to the first type of interval, containing one non-trivial zero: $$A_1=a_1a_1;$$ $$A_2=a_2a_2;$$ \par and the sequences of base points of different types, which correspond to the second and third interval type, respectively: $$B=a_2a_1;$$ $$C=a_1a_2;$$ \par It is obvious that the sequences $A_1$ and $A_2$ cannot follow each other, because $$a_1a_1a_2a_2=A_1 CA_2$$ \par or $$a_2 a_2a_1 a_1=A_2 BA_1$$ \par It is also clear that the sequence $B$ cannot follow itself, since $$a_2a_1 a_2 a_1=BCB$$ \par and the sequence $C$ cannot follow itself, since $$a_1a_2a_1a_2=CBC$$ \par Therefore, if on some interval between two base points the Riemann zeta function has no non-trivial zeros, i.e. the interval is of type $C$, then on another interval the Riemann zeta function will have two non-trivial zeros, namely on an interval of type $B$, since these intervals appear every time the type of base point changes, as in $$a_1a_2a_1=CB$$ or in a longer chain $$a_1a_2a_2a_2a_1a_1a_1a_1a_1a_2a_2a_1=CA_2A_2BA_1A_1A_1A_1CA_2B$$ \par \textit{Thus, the total number of non-trivial zeros of the Riemann zeta function on the critical line always corresponds to the number of base points, i.e. to the number of complete rotations of the first middle vector $Y_1$ of the Riemann spiral around the end of the first vector $X_1$ of the Riemann spiral in the fixed coordinate system of the complex plane.} \par We used this property to obtain the expression (\ref{zeta_zero_num}) for the number of non-trivial zeros of the Riemann zeta function on the critical line through the angle of the first middle vector $Y_1$ of the Riemann spiral.
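\par The interval bookkeeping above can be sketched as a small illustrative script (the names are our own): each interval of type $A_1$ or $A_2$ contributes one zero, type $B$ two, type $C$ none, and when a chain begins and ends with base points of the same type the numbers of $B$ and $C$ intervals coincide, so the total equals the number of intervals.

```python
# Sketch: count non-trivial zeros from a chain of base-point types.
ZEROS_PER_INTERVAL = {
    ("a1", "a1"): 1,  # interval type A1
    ("a2", "a2"): 1,  # interval type A2
    ("a2", "a1"): 2,  # interval type B
    ("a1", "a2"): 0,  # interval type C
}

def count_zeros(base_points):
    """Total non-trivial zeros over the intervals of a base-point chain."""
    return sum(ZEROS_PER_INTERVAL[pair]
               for pair in zip(base_points, base_points[1:]))

# The longer chain from the text, decomposed as C A2 A2 B A1 A1 A1 A1 C A2 B:
chain = ["a1", "a2", "a2", "a2", "a1", "a1",
         "a1", "a1", "a1", "a2", "a2", "a1"]
```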
\par We substitute the exact expression (\ref{arg_chi}) for the argument of the CHI function into (\ref{zeta_zero_num}): \begin{equation}\label{zeta_zero_num_0}N_0(T)=\Bigg[\Big|\frac{T}{2\pi}(\log{\frac{T}{2\pi}}-1)-\frac{1}{8}+\frac{2\mu(T)-\alpha_2}{2\pi}\Big|\Bigg]+2;\end{equation} \par where $\mu(T)$ is the remainder term of the gamma function (\ref{mu}) when $\sigma=1/2$; \par $\alpha_2$ is the argument of the CHI function at the second base point. \par Comparing the expression (\ref{n_mn}) for the number of non-trivial zeros in the critical strip with the expression (\ref{zeta_zero_num_0}) for the number of non-trivial zeros on the critical line, we see that these values \glqq match\grqq. \par We are in a paradoxical situation where we know the exact number of zeros on the critical line, because $\mu(T)\to 0$ as $T\to \infty$, but do not know the exact number of zeros in the critical strip, because even the most optimistic estimate \cite{TR} gives: \begin{equation}\label{s_tr}|S(T)|<1.998+0.17\log(T); T>e;\end{equation} \par \textit{Thus, based on the last estimate (\ref{s_tr}) of the remainder term of the Riemann-von Mangoldt formula, we can say that \glqq almost all\grqq \ non-trivial zeros of the Riemann zeta function lie on the critical line.} \par It should be noted that for the final solution of the problem by this method it is not enough to show that \begin{equation}\label{s_li}S(T)=\mathcal{O}\Big(\frac{\log(T)}{\log\log(T)}\Big), T\to \infty;\end{equation} \par although this result was obtained by Littlewood \cite{LI} under the assumption that the Riemann hypothesis is true, since the expression $(2\mu(T)-\alpha_2)/2\pi$ has a limit as $T\to \infty$, while the expression $\log(T)/\log\log (T)$ has no such limit, although it grows very slowly.
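\par The counting expression (\ref{zeta_zero_num_0}) can be evaluated numerically, as a sketch under a stated assumption: we drop the remainder $(2\mu(T)-\alpha_2)/(2\pi)$, whose exact value depends on constants defined elsewhere in the paper, and compare with a direct count via the third-party mpmath library. The helper name n0\_main is our own.

```python
# Sketch: main part of the counting expression with the remainder dropped,
# checked against a direct zero count.  Requires mpmath.
import math
import mpmath

def n0_main(T):
    """Floor of |T/(2*pi)*(log(T/(2*pi)) - 1) - 1/8| plus 2, remainder dropped."""
    x = T / (2 * math.pi)
    return math.floor(abs(x * (math.log(x) - 1) - 0.125)) + 2

T = 236.6   # chosen between the 100th and 101st zero ordinates
t100 = mpmath.zetazero(100).imag
t101 = mpmath.zetazero(101).imag
# Zero ordinates increase, so t100 < T < t101 means exactly 100 zeros below T.
```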
\par \textit{In other words, by comparing the number of non-trivial zeros of the Riemann zeta function in the critical strip and on the critical line, it is almost impossible to confirm the Riemann hypothesis.} \par Using the method of analysis of the vector system of the second approximate equation of the Riemann zeta function, we can offer a confirmation of the Riemann hypothesis \textit{by contradiction.} \par This approach is to show that the Riemann zeta function \textit{cannot have non-trivial zeros} when $\sigma\ne 1/2$. \par This problem is solved in different ways by the second and third methods of confirmation of the Riemann hypothesis, based on the properties of the vector system of the second approximate equation of the Riemann zeta function. \par We have already considered the dynamics of the first middle vector $Y_1$ of the Riemann spiral at the base points when $\sigma=1/2$; now we consider the dynamics of the first middle vector $Y_1$ of the Riemann spiral at the base points when $\sigma\ne 1/2$. \par In accordance with (\ref{alpha1_ex}), the base points for $\sigma\ne 1/2$ lie in a neighbourhood $\epsilon=\mathcal{O}(t^{-1})$ of the base points for $\sigma=1/2$, i.e. they have practically the same value as $t\to \infty$. \par Therefore, at the base points when $\sigma\ne 1/2$ the first middle vector $Y_1$ of the Riemann spiral occupies a position opposite to the first vector $X_1$ of the Riemann spiral, while, in accordance with the violation of the mirror symmetry of the vector system of the second approximate equation of the Riemann zeta function, the end of the first middle vector $Y_1$ of the Riemann spiral \textit{cannot lie} on the imaginary axis of the complex plane. \par Therefore, when $\sigma<1/2$, the end of the first middle vector $Y_1$ of the Riemann spiral is on the left (fig. \ref{fig:gram_point_4525_left}) and, when $\sigma>1/2$, it is on the right (fig.
\ref{fig:gram_point_4525_right}) of the imaginary axis of the complex plane, regardless of its position relative to the real axis of the complex plane. \begin{figure} \caption{Base point \#4525, the first middle vector of the Riemann spiral is on the left, $s=0.35+5005.8024855i$} \label{fig:gram_point_4525_left} \end{figure} \begin{figure} \caption{Base point \#4525, the first middle vector of the Riemann spiral is on the right, $s=0.65+5005.8024855i$} \label{fig:gram_point_4525_right} \end{figure} \par It is easy to notice that at a base point the value vectors of the Riemann zeta function at $\sigma+it$ and $1-\sigma+it$ deviate from the normal $L$ to the axis of symmetry of the vector system of the second approximate equation in opposite directions by \textit{the same angle} (fig. \ref{fig:gram_point_4525_two_angles}). \begin{figure} \caption{The deviation of the value vectors of the Riemann zeta function at the base point \#4525, when $\sigma=0.35$ and $\sigma=0.65$} \label{fig:gram_point_4525_two_angles} \end{figure} \par This behaviour of the value vectors of the Riemann zeta function, for values of the complex variable symmetric about the critical line, is easily explained by the arithmetic of the arguments of complex numbers at the base point.
\begin{equation}\label{arg1}Arg(\zeta(s))_B=Arg(\chi(s))_B+Arg(\zeta(1-s))_B\end{equation} \par At the base point \begin{equation}\label{arg2}Arg(\chi(s))_B=\pi;\end{equation} \par Since \begin{equation}\label{zeta_conj3}\zeta(1-\sigma+it)=\zeta(\overline{1-\sigma-it})=\zeta(\overline{1-s})=\overline{\zeta(1-s)};\end{equation} \begin{equation}\label{arg3}Arg(\zeta(1-s))=-Arg(\overline{\zeta(1-s)})=-Arg(\zeta(1-\sigma+it));\end{equation} \par at the base point we obtain \begin{equation}\label{arg4}Arg(\zeta(\sigma+it))_B=\pi-Arg(\zeta(1-\sigma+it))_B;\end{equation} \par This relation between the arguments is preserved in the moving coordinate system formed by the normal $L$ to the axis of symmetry and the axis of symmetry $M$ of the vector system of the second approximate equation of the Riemann zeta function \textit{for any values} of the complex variable symmetric about the critical line: \begin{equation}\label{arg5}Arg(\zeta(\sigma+it))-\frac{Arg(\chi(\sigma+it))}{2}=-(Arg(\zeta(1-\sigma+it))-\frac{Arg(\chi(\sigma+it))}{2});\end{equation} \par The vector system of the second approximate equation of the Riemann zeta function rotates relative to the end of the first vector $X_1$ of the Riemann spiral (\ref{phi_m}), while, in accordance with (\ref{x_vect}, \ref{y_vect}), each next vector of this vector system rotates relative to the previous vector in the same direction as the entire vector system; therefore, the angle of twist of the vectors \textit{grows monotonically.} \par Thus, in accordance with (\ref{x_vect}, \ref{y_vect}), when $\sigma<1/2$ the angle of twist of the vectors \textit{grows faster} than when $\sigma=1/2$: the angles between the vectors are equal, but when $\sigma<1/2$ the modulus of each vector is greater than the modulus of the corresponding vector for $\sigma=1/2$, so the vector system for $\sigma<1/2$ is twisted \textit{by a greater angle} than in the case $\sigma=1/2$.
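\par The two ingredients of the argument relations above, the conjugation identity (\ref{zeta_conj3}) and the functional equation $\zeta(s)=\chi(s)\zeta(1-s)$ with $\chi(s)=2^s\pi^{s-1}\sin(\pi s/2)\Gamma(1-s)$, can be checked numerically; this is an independent sketch using the third-party mpmath library, not the paper's own computation.

```python
# Sketch: verify the functional equation and the conjugation identity
# at an off-line sample point.  Requires mpmath.
import mpmath

def chi(s):
    """chi(s) = 2^s * pi^(s-1) * sin(pi*s/2) * Gamma(1-s)."""
    return (2**s * mpmath.pi**(s - 1)
            * mpmath.sin(mpmath.pi * s / 2) * mpmath.gamma(1 - s))

s = mpmath.mpc(0.35, 50.0)
fe_err = abs(mpmath.zeta(s) - chi(s) * mpmath.zeta(1 - s))
# Schwarz reflection zeta(conj(w)) = conj(zeta(w)) gives the conjugation
# identity used in the argument relations:
refl_err = abs(mpmath.zeta(mpmath.conj(1 - s)) - mpmath.conj(mpmath.zeta(1 - s)))
```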
\par Similarly, when $\sigma>1/2$, the angle of twist of the vectors \textit{grows slower} than when $\sigma=1/2$: the angles between the vectors are equal, but when $\sigma>1/2$ the modulus of each vector is less than the modulus of the corresponding vector for $\sigma=1/2$, so the vector system for $\sigma>1/2$ is twisted \textit{by a smaller angle} than in the case $\sigma=1/2$. \begin{figure} \caption{Base point \#4525, the value vector of the Riemann zeta function parallel to the axis of symmetry, $s=0.35+5006.186i$} \label{fig:gram_point_4525_symmetry1} \end{figure} \par In accordance with the identified relation between the arguments (\ref{arg5}) and the monotonic increase of the angle of twist of the vectors, we can conclude that the value vectors of the Riemann zeta function, for values of the complex variable symmetric about the critical line, rotate in opposite directions with the same speed in the moving coordinate system formed by the normal $L$ to the axis of symmetry and the axis of symmetry $M$ of the vector system of the second approximate equation of the Riemann zeta function, and therefore at certain values of the imaginary part of a complex number they occupy special positions: \par a) directed in opposite directions along the axis of symmetry $M$ (fig. \ref{fig:gram_point_4525_symmetry1}, \ref{fig:gram_point_4525_symmetry2}); \par b) directed in the same direction along the normal $L$ to the axis of symmetry of the vector system of the second approximate equation of the Riemann zeta function (fig. \ref{fig:gram_point_4525_normal1}, \ref{fig:gram_point_4525_normal2}).
\begin{figure} \caption{Base point \#4525, the value vector of the Riemann zeta function parallel to the axis of symmetry, $s=0.65+5006.186i$} \label{fig:gram_point_4525_symmetry2} \end{figure} \begin{figure} \caption{Base point \#4525, the value vector of the Riemann zeta function parallel to the normal to the axis of symmetry, $s=0.35+5006.484i$} \label{fig:gram_point_4525_normal1} \end{figure} \begin{figure} \caption{Base point \#4525, the value vector of the Riemann zeta function parallel to the normal to the axis of symmetry, $s=0.65+5006.484i$} \label{fig:gram_point_4525_normal2} \end{figure} \par Meanwhile, when $\sigma=1/2$, in accordance with the mirror symmetry of the vector system of the second approximate equation of the Riemann zeta function for $\sigma=1/2$, the value vector of the Riemann zeta function rotates so that it always lies along the normal $L$ to the axis of symmetry of this coordinate system (fig. \ref{fig:gram_point_4525}). \par As for the position relative to the real axis of the complex plane at a base point when $\sigma\ne 1/2$, the first middle vector $Y_1$ of the Riemann spiral rotates towards the first vector $X_1$ of the Riemann spiral when it is below the real axis of the complex plane and, conversely, rotates away from the first vector $X_1$ of the Riemann spiral when it is above the real axis of the complex plane.
\par If the first middle vector $Y_1$ of the Riemann spiral at a base point with $\sigma\ne 1/2$ rotates towards the first vector $X_1$ of the Riemann spiral, then, in accordance with the conformal symmetry of the vector system of the second approximate equation of the Riemann zeta function for $\sigma\ne 1/2$, before the rotation of the first middle vector $Y_1$ of the Riemann spiral is completed, it will take a special position in which the axis of symmetry $M$ of this vector system passes through both the zero of the complex plane and the end of the first middle vector $Y_1$ of the Riemann spiral. \par Conversely, if the first middle vector $Y_1$ of the Riemann spiral at a base point with $\sigma\ne 1/2$ rotates away from the first vector $X_1$ of the Riemann spiral, then by the beginning of the rotation of the first middle vector $Y_1$ of the Riemann spiral it has already occupied a special position in which the axis of symmetry $M$ of this vector system passes through both the zero of the complex plane and the end of the first middle vector $Y_1$ of the Riemann spiral. \par This special position of the first middle vector $Y_1$ of the Riemann spiral corresponds to the special position of the value vector of the Riemann zeta function in which it lies along the axis of symmetry $M$ (fig. \ref{fig:gram_point_4525_symmetry1}, \ref{fig:gram_point_4525_symmetry2}) of the vector system of the second approximate equation of the Riemann zeta function.
\par According to the rules of summation of vectors, in this special position of the first middle vector $Y_1$ of the Riemann spiral the projection of the vector system of the second approximate equation of the Riemann zeta function on the normal $L$ to the axis of symmetry of this vector system is equal to zero, hence \begin{equation}\label{zeta_l_y1}\zeta(s)_L=0;\end{equation} \par As the imaginary part of the complex number increases, the first middle vector $Y_1$ of the Riemann spiral moves to another special position, in which the normal $L$ to the axis of symmetry of the vector system of the second approximate equation of the Riemann zeta function passes through both the zero of the complex plane and the end of the first middle vector $Y_1$ of the Riemann spiral; by that moment it has rotated by an angle of $\pi/2$ from the first special position in the moving coordinate system formed by the normal $L$ to the axis of symmetry and the axis of symmetry $M$ of this vector system. \par The second special position of the first middle vector $Y_1$ of the Riemann spiral corresponds to the special position of the value vector of the Riemann zeta function in which it lies along the normal $L$ to the axis of symmetry (fig. \ref{fig:gram_point_4525_normal1}, \ref{fig:gram_point_4525_normal2}) of the vector system of the second approximate equation of the Riemann zeta function.
\par According to the rules of summation of vectors, in this special position of the first middle vector $Y_1$ of the Riemann spiral the projection of the vector system of the second approximate equation of the Riemann zeta function on the axis of symmetry $M$ of this vector system is equal to zero, hence \begin{equation}\label{zeta_m_y1}\zeta(s)_M=0;\end{equation} \par Thus, having performed an additional analysis of the vector system of the second approximate equation of the Riemann zeta function in accordance with the identified relation between the arguments (\ref{arg5}) and the monotonic increase of the angle of twist of the vectors, we found that each base point corresponds to \textit{two special positions} of the first middle vector $Y_1$ of the Riemann spiral, in which, when $\sigma\ne 1/2$, the projections of the vector system of the second approximate equation of the Riemann zeta function on the normal $L$ to the axis of symmetry and on the axis of symmetry $M$ of this vector system \textit{alternately} take the value zero, while, when $\sigma=1/2$, one of these positions corresponds to a non-trivial zero of the Riemann zeta function. \par Now we need to make sure that the modulus of the Riemann zeta function cannot take the value zero except in the special position of the first middle vector $Y_1$ of the Riemann spiral when $\sigma=1/2$. \par We will analyze the projections of the vector system of the second approximate equation of the Riemann zeta function on the normal $L$ to the axis of symmetry and on the axis of symmetry $M$ of this vector system starting from the boundary of the critical strip, where the Riemann zeta function has no non-trivial zeros, i.e. for the values $\sigma=0$ and $\sigma=1$.
\par We construct the graphs of the projections of the vector system of the second approximate equation of the Riemann zeta function on the normal $L$ to the axis of symmetry and on the axis of symmetry $M$ of this vector system when $\zeta(1+it)_L=0$; in accordance with the relation (\ref{arg5}) between the arguments of the value vectors of the Riemann zeta function, for values of the complex variable symmetric about the critical line, we obtain another equality, $\zeta(0+it)_L=0$ (fig. \ref{fig:gram_point_4525_zeta_1_l}). \begin{figure} \caption{Base point \#4525, projection of the vector system on the normal $L$ to the axis of symmetry when $\zeta(1+5006.09072i)_L=0$} \label{fig:gram_point_4525_zeta_1_l} \end{figure} \begin{figure} \caption{Base point \#4525, projection of the vector system on the axis of symmetry $M$ when $\zeta(1+5006.09072i)_L=0$} \label{fig:gram_point_4525_zeta_1_m} \end{figure} \par Analysis of the projections of the vector system of the second approximate equation of the Riemann zeta function on the normal $L$ to the axis of symmetry and on the axis of symmetry $M$ of this vector system shows that in the interval $A_k$ of the imaginary part of a complex number from $t_1$: $\zeta(1+it_1)_L=0$ to $t_2$: $\zeta(1/2+it_2)_L=0$, any value of $t$ corresponds to a value $0<\sigma<1$ with $\zeta(\sigma+it)_L=0$. \par In other words, in the interval $A_k$, for each value of the imaginary part of a complex number, the graph of the function $\zeta(\sigma+it)_L$ crosses the abscissa axis twice, at the symmetric values $\sigma+it$ and $1-\sigma+it$, until, when $\sigma=1/2$, it reaches the value of a non-trivial zero of the Riemann zeta function. \par Meanwhile, the graph of the function $\zeta(\sigma+it)_M$, at any value $t$ of the imaginary part of a complex number from the interval $A_k$, crosses the abscissa axis only once, at the point $\sigma=1/2$ (fig. \ref{fig:gram_point_4525_zeta_1_m}).
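\par A numerical sketch of these projections is possible under an identification assumption that is ours, not the paper's: if the moving $L/M$ frame is identified with the Riemann-Siegel rotation $e^{i\theta(t)}$, then $\zeta(s)_L=\mathrm{Re}(e^{i\theta(t)}\zeta(\sigma+it))$ and $\zeta(s)_M=\mathrm{Im}(e^{i\theta(t)}\zeta(\sigma+it))$; on the critical line $e^{i\theta(t)}\zeta(1/2+it)$ is the real Hardy Z-function, so the $M$-projection vanishes identically there, matching the single crossing at $\sigma=1/2$ described above. Uses the third-party mpmath library; the helper name proj\_LM is our own.

```python
# Sketch: L/M projections under the stated frame identification; on the
# critical line the M-projection vanishes and the L-projection equals the
# Hardy Z-function.  Requires mpmath.
import mpmath

def proj_LM(sigma, t):
    """Projections Re and Im of exp(i*theta(t)) * zeta(sigma + i*t)."""
    rot = mpmath.exp(mpmath.mpc(0, 1) * mpmath.siegeltheta(t))
    w = rot * mpmath.zeta(mpmath.mpc(sigma, t))
    return mpmath.re(w), mpmath.im(w)

zl_half, zm_half = proj_LM(0.5, 50.0)   # on the critical line
```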
\par Now we construct the graphs of the projections of the vector system of the second approximate equation of the Riemann zeta function on the normal $L$ to the axis of symmetry and on the axis of symmetry $M$ of this vector system when $\zeta(1+it)_M=0$; in accordance with the relation (\ref{arg5}) between the arguments of the value vectors of the Riemann zeta function, for values of the complex variable symmetric about the critical line, we obtain another equality, $\zeta(0+it)_M=0$ (fig. \ref{fig:gram_point_4525_zeta_2_m}). \begin{figure} \caption{Base point \#4525, projection of the vector system on the normal $L$ to the axis of symmetry when $\zeta(1+5006.4559i)_M=0$} \label{fig:gram_point_4525_zeta_2_l} \end{figure} \begin{figure} \caption{Base point \#4525, projection of the vector system on the axis of symmetry $M$ when $\zeta(1+5006.4559i)_M=0$} \label{fig:gram_point_4525_zeta_2_m} \end{figure} \par Analysis of the projections of the vector system of the second approximate equation of the Riemann zeta function on the normal $L$ to the axis of symmetry and on the axis of symmetry $M$ of this vector system shows that in the interval $C_k$ of the imaginary part of a complex number from $t'_1$: $\zeta(1+it'_1)_M=0$ to $t'_2$: $\zeta(1/2+it'_2)_M=0$, any value of $t'$ corresponds to a value $0<\sigma<1$ with $\zeta(\sigma+it')_M=0$. \par In other words, in the interval $C_k$, for each value of the imaginary part of a complex number, the graph of the function $\zeta(\sigma+it')_M$ crosses the abscissa axis three times: at the symmetric values $\sigma+it'$ and $1-\sigma+it'$ and at the point $\sigma=1/2$. \par Meanwhile, the graph of the function $\zeta(\sigma+it')_L$, at any value $t'$ of the imaginary part of a complex number from the interval $C_k$, never crosses the abscissa axis (fig. \ref{fig:gram_point_4525_zeta_2_l}).
\par Therefore, in the interval $C_k$, for any value of the imaginary part of a complex number and any value of the real part of a complex number $0<\sigma<1$, the Riemann zeta function has no non-trivial zeros. \begin{figure} \caption{Base point \#4525, projection of the vector system on the normal $L$ to the axis of symmetry when $\zeta(1/2+5006.208381106i)_L=0$} \label{fig:gram_point_4525_zeta_3_l} \end{figure} \begin{figure} \caption{Base point \#4525, projection of the vector system on the axis of symmetry $M$ when $\zeta(1/2+5006.208381106i)_L=0$} \label{fig:gram_point_4525_zeta_3_m} \end{figure} \par Analysis of the projections of the vector system of the second approximate equation of the Riemann zeta function on the normal $L$ to the axis of symmetry and on the axis of symmetry $M$ of this vector system in the interval $B_k$ between the intervals $A_k$ and $C_k$, and in the interval $D_k$ between the intervals $C_k$ and $A_{k+1}$, shows that the projections, for any value of the imaginary part of a complex number and any value of the real part of a complex number with $0<\sigma<1$, are not equal to zero; therefore, in the intervals $B_k$ and $D_k$ the Riemann zeta function has no non-trivial zeros. \par It should be noted that the sign of the projection of the vector system on the axis of symmetry $M$ of the vector system of the second approximate equation of the Riemann zeta function changes at each interval $C_k$, i.e. it depends on the number of the base point (fig. \ref{fig:graphics_projections}). \par Meanwhile, the sign of the projection of the vector system on the normal $L$ to the axis of symmetry of the vector system of the second approximate equation of the Riemann zeta function depends on the position of the first middle vector $Y_1$ of the Riemann spiral at the base point (fig. \ref{fig:graphics_projections}), i.e. it changes at each interval $A_k$.
\par Thus, having performed an additional analysis of the vector system of the second approximate equation of the Riemann zeta function, we found that each base point corresponds to \textit{four intervals} in which the projections of the vector system of the second approximate equation of the Riemann zeta function on the normal $L$ to the axis of symmetry and on the axis of symmetry $M$ of this vector system take certain values, and only in one interval, when $\sigma=1/2$, can they be zero at the same time (fig. \ref{fig:gram_point_4525_zeta_3_l} and \ref{fig:gram_point_4525_zeta_3_m}); at that moment the Riemann zeta function takes the value of a non-trivial zero. \par Now that we know all possible variants of the ratio of the projections of the vector system of the second approximate equation of the Riemann zeta function on the normal $L$ to the axis of symmetry and on the axis of symmetry $M$ of this vector system, we have found out \textit{what these projections can be, and why}, for any value of the real part of a complex number in the critical strip and on the boundary of the critical strip, i.e. for the values $\sigma=0$ and $\sigma=1$, where the Riemann zeta function has no non-trivial zeros, as proved by Hadamard and de la Vall\'ee Poussin. \par We have found out how these projections change when the imaginary part of the complex number changes and how these changes are related to the number of the base point and the position of the first middle vector $Y_1$ of the Riemann spiral at the base point; it remains to answer two questions: \par 1) Why must these projections have such ratios at any base point? \par 2) Why can there be no other reason for the modulus of the Riemann zeta function to vanish when $\sigma\ne 1/2$?
\par The first question has already been answered in the analysis of the projections of the vector system on the normal $L$ to the axis of symmetry and on the axis of symmetry $M$ of the vector system of the second approximate equation of the Riemann zeta function: \par 1) The vector system of the second approximate equation of the Riemann zeta function rotates when the imaginary part of the complex number changes, which we can confirm by the equation of the axis of symmetry (\ref{phi_m}) of this vector system; \par 2) The vector system periodically passes the base points, where it is convenient to fix its special properties; \par 3) These special properties are determined by the position of the first middle vector $Y_1$ of the Riemann spiral at the base point relative to the real axis of the complex plane and to the axes of the moving coordinate system formed by the normal $L$ to the axis of symmetry and the axis of symmetry $M$ of the vector system of the second approximate equation of the Riemann zeta function; \par 4) The arguments of $\zeta(s)$ and $\overline{\zeta(1-s)}$ are symmetric about the normal $L$ to the axis of symmetry of the vector system of the second approximate equation of the Riemann zeta function; \par 5) The vectors of the values of $\zeta(s)$ and $\overline{\zeta(1-s)}$ rotate in opposite directions at the same speed in the moving coordinate system formed by the normal $L$ to the axis of symmetry and the axis of symmetry $M$; \par 6) The projections of the vector system of the second approximate equation of the Riemann zeta function on the normal $L$ to the axis of symmetry and on the axis of symmetry $M$ are determined by the vectors of the values of $\zeta(s)$ and $\overline{\zeta(1-s)}$.
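\par Points 4) and 5) can be illustrated with plain complex arithmetic. The following toy sketch (unit vectors standing in for the value vectors, not the actual zeta values) shows that two vectors rotating in opposite directions at equal speed always keep their sum on one fixed axis of the moving frame and their difference on the perpendicular axis:

```python
import cmath

# Two vectors rotating at equal speed in opposite directions in the moving
# frame: their sum stays on the real axis (2*cos(theta)), their difference
# on the imaginary axis (2i*sin(theta)), for every rotation angle theta.
for k in range(100):
    theta = 0.0628 * k
    z_plus = cmath.exp(1j * theta)    # rotates counter-clockwise
    z_minus = cmath.exp(-1j * theta)  # rotates clockwise
    s = z_plus + z_minus
    d = z_plus - z_minus
    assert abs(s.imag) < 1e-12 and abs(d.real) < 1e-12
```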
\par \textit{Thus, the projections of the vector system of the second approximate equation of the Riemann zeta function on the normal $L$ to the axis of symmetry and on the axis of symmetry $M$ of this vector system have the periodic properties described earlier with respect to the base points; hence these properties of the projections must be observed at any base point, and therefore for all values of the complex variable where this vector system determines a value of the Riemann zeta function, i.e. on the entire complex plane except the real axis, where, as is known, the Riemann zeta function has only trivial zeros.} \par To answer the second question, why there can be no other reason for the modulus of the Riemann zeta function to vanish when $\sigma\ne 1/2$, it is necessary to consider the possible variants of such a vanishing, for example: \par 1) The interval $A_k$, where $\zeta(s)_L=0$, intersects the interval $C_k$, where $\zeta(s)_M=0$, for some value of the complex variable with $\sigma\ne 1/2$; \par 2) The conditions $\zeta(s)_L=0$ and $\zeta(s)_M=0$ are satisfied in the interval $B_k$ or $D_k$, where $\zeta(s)_L\ne 0$ and $\zeta(s)_M\ne 0$, for some value of the complex variable in the critical strip.
\par \textit{It is obvious that the modulus of $\zeta(s)$ cannot vanish in an arbitrary way, because this would require the moduli of all vectors $X_n$ and $Y_n$ of the Riemann spiral to vanish, which contradicts (\ref{x_vect}, \ref{y_vect}).} \par One can think of other reasons why the modulus of the Riemann zeta function could vanish when $\sigma\ne 1/2$; we expect that upon careful examination they will all be refuted, because the identified properties of the projections of the vector system of the second approximate equation of the Riemann zeta function on the normal $L$ to the axis of symmetry and on the axis of symmetry $M$ of this system indicate why the Riemann zeta function cannot have non-trivial zeros when $\sigma\ne 1/2$. \par We now consider another method of confirming the Riemann hypothesis, based on the properties of the vector system of the second approximate equation of the Riemann zeta function. \par We need to find out in which cases the invariants $L_1$ and $L_2$ (\ref{l_1_l_2}) of this vector system and the vector of the remainder term $R$ of the second approximate equation of the Riemann zeta function can form a triangle (\ref{zeta_zero_vect}). \par Consider the invariants $L_1$ and $L_2$ at the special positions of the first middle vector $Y_1$ of the Riemann spiral: when the axis of symmetry $M$ of the vector system of the second approximate equation of the Riemann zeta function passes through both the zero of the complex plane and the end of the first middle vector $Y_1$ of the Riemann spiral (fig. \ref{fig:gram_point_4525_symmetry_inv}), and when the normal $L$ to the axis of symmetry passes through both the zero of the complex plane and the end of the first middle vector $Y_1$ of the Riemann spiral (fig. \ref{fig:gram_point_4525_normal_inv}).
\begin{figure} \caption{Base point \#4525, the vector of values of the Riemann zeta function parallel to the axis of symmetry, $s=0.35+5006.186i$ and $s=0.65+5006.186i$} \label{fig:gram_point_4525_symmetry_inv} \end{figure} \begin{figure} \caption{Base point \#4525, the vector of values of the Riemann zeta function parallel to the normal to the axis of symmetry, $s=0.35+5006.484i$ and $s=0.65+5006.484i$} \label{fig:gram_point_4525_normal_inv} \end{figure} \par In all other cases, in accordance with the continuity of the values of the Riemann zeta function, the invariants $L_1$ and $L_2$ occupy various intermediate positions (fig. \ref{fig:gram_point_4525_intermediate_inv}). \begin{figure} \caption{Base point \#4525, intermediate position of the vector of values of the Riemann zeta function, $s=0.35+5006.186i$ and $s=0.65+5006.186i$} \label{fig:gram_point_4525_intermediate_inv} \end{figure} \par When the normal $L$ to the axis of symmetry of the vector system of the second approximate equation of the Riemann zeta function passes through both the zero of the complex plane and the end of the first middle vector $Y_1$ of the Riemann spiral (fig. \ref{fig:gram_point_4520_normal_inv_opposite}), the invariants $L_1$ and $L_2$ can occupy a position close to a trapezoid but obviously cannot form a triangle. \begin{figure} \caption{Base point \#4520, the vector of values of the Riemann zeta function parallel to the normal to the axis of symmetry, $s=0.35+5001.415i$ and $s=0.65+5001.415i$} \label{fig:gram_point_4520_normal_inv_opposite} \end{figure} \par \textit{It is obvious that the invariants $L_1$ and $L_2$ can occupy the position closest to a triangle only when the axis of symmetry $M$ of the vector system of the second approximate equation of the Riemann zeta function passes through both the zero of the complex plane and the end of the first middle vector $Y_1$ of the Riemann spiral (fig.
\ref{fig:gram_point_4525_symmetry_inv}), so we perform the further analysis of the invariants $L_1$ and $L_2$ in this special position of the first middle vector $Y_1$ of the Riemann spiral.} \par It should be noted that in this position of the axis of symmetry $M$ of the vector system of the second approximate equation of the Riemann zeta function, when $\sigma=1/2$, in accordance with the mirror symmetry this vector system forms a closed polyline, and therefore the Riemann zeta function takes the value of a non-trivial zero (fig. \ref{fig:gram_point_4525_symmetry_inv_zero}). \begin{figure} \caption{Base point \#4525, non-trivial zero of the Riemann zeta function, $s=0.5+5006.208381106i$} \label{fig:gram_point_4525_symmetry_inv_zero} \end{figure} \par At the ordinate of the non-trivial zero of the Riemann zeta function, the modulus of the projection of the invariant $L_1$ on the axis of symmetry $M$ of the vector system of the second approximate equation of the Riemann zeta function increases when $\sigma$ decreases and, conversely, decreases when $\sigma$ increases. \par The projection of the invariant $L_2$ on the axis of symmetry $M$ of the vector system of the second approximate equation of the Riemann zeta function changes according to the sign of the projection of its gradient on the axis of symmetry $M$: \begin{equation}\label{grad_l_2}grad_M L_2=\sum_{n=1}^{m}\Big(\frac{\partial Y_n}{\partial\sigma}\Big)_M;\end{equation} \par If the sign of the projection of the invariant $L_2$ on the axis of symmetry $M$ of the vector system of the second approximate equation of the Riemann zeta function equals the sign of the projection of its gradient on the axis of symmetry $M$, then the projection of the invariant $L_2$ on the axis of symmetry $M$ increases when $\sigma$ increases (fig. \ref{fig:gram_point_4525_symmetry_3_inv}) and, respectively, decreases when $\sigma$ decreases (fig.
\ref{fig:gram_point_4525_symmetry_2_inv}); conversely, if the sign of the projection of the invariant $L_2$ on the axis of symmetry $M$ differs from the sign of the projection of its gradient on the axis of symmetry $M$, then the projection of the invariant $L_2$ on the axis of symmetry $M$ decreases when $\sigma$ increases (fig. \ref{fig:gram_point_4525_symmetry_5_inv}) and, respectively, increases when $\sigma$ decreases (fig. \ref{fig:gram_point_4525_symmetry_4_inv}). \begin{figure} \caption{Base point \#4525, the ordinate of the non-trivial zero of the Riemann zeta function, positive gradient, $s=0.65+5006.208381106i$} \label{fig:gram_point_4525_symmetry_3_inv} \end{figure} \begin{figure} \caption{Base point \#4525, the ordinate of the non-trivial zero of the Riemann zeta function, positive gradient, $s=0.35+5006.208381106i$} \label{fig:gram_point_4525_symmetry_2_inv} \end{figure} \begin{figure} \caption{Base point \#4521, the ordinate of the non-trivial zero of the Riemann zeta function, negative gradient, $s=0.65+5001.889773627i$} \label{fig:gram_point_4525_symmetry_5_inv} \end{figure} \begin{figure} \caption{Base point \#4521, the ordinate of the non-trivial zero of the Riemann zeta function, negative gradient, $s=0.35+5001.889773627i$} \label{fig:gram_point_4525_symmetry_4_inv} \end{figure} \par The sign of the projection of the gradient of the invariant $L_2$ on the axis of symmetry $M$ depends on the distribution of the angles and moduli of the middle vectors, which becomes obvious if we arrange the middle vectors in increasing order of their angles (fig. \ref{fig:gram_point_4525_symmetry_inv_zero}). \par Our computations show that when $\sigma\ne 1/2$, at the point where $\zeta(s)_M=0$, the sign of the projection of the gradient of the invariant $L_2$ on the axis of symmetry $M$ is preserved.
\par Now, when we consider the invariants $L_1$ and $L_2$ at the point where the axis of symmetry $M$ of the vector system of the second approximate equation of the Riemann zeta function passes through both the zero of the complex plane and the end of the first middle vector $Y_1$ of the Riemann spiral (fig. \ref{fig:gram_point_4525_symmetry_inv}), it is sufficient to consider the sum of the projections of the vectors $L_1$, $L_2$ and $R$ on the axis of symmetry $M$, because at this point the sum of the projections of the vectors $L_1$, $L_2$ and $R$ on the normal $L$ to the axis of symmetry is equal to zero. \par Consider separately the sum of the projections of the invariants $L_1$ and $L_2$ on the axis of symmetry $M$ of the vector system of the second approximate equation of the Riemann zeta function: \begin{equation}\label{delta_l}\Delta L=\sum_{n=1}^{m}{(X_n(s))_M}+\sum_{n=1}^{m}{(Y_n(s))_M};\end{equation} \par and the projection of the vector $R$ of the remainder term of the second approximate equation of the Riemann zeta function: \begin{equation}\label{delta_r}\Delta R=R\sin(\Delta\varphi_R);\end{equation} \par where $\Delta\varphi_R$ is the deviation of the vector of the remainder term $R$ of the second approximate equation of the Riemann zeta function from the normal $L$ to the axis of symmetry of this vector system. \par Then the vector condition (\ref{zeta_zero_vect}) for a non-trivial zero of the Riemann zeta function can be rewritten as follows: \begin{equation}\label{zeta_zero_module}|\Delta L|=|\Delta R|;\end{equation} \par We already know that, in accordance with the mirror symmetry of the vector system of the second approximate equation of the Riemann zeta function, when $\sigma=1/2$, at the point where the axis of symmetry $M$ of this vector system passes through both the zero of the complex plane and the end of the first middle vector $Y_1$ of the Riemann spiral (fig.
\ref{fig:gram_point_4525_symmetry_inv_zero}), the Riemann zeta function takes the value of a non-trivial zero. \par Thus $\Delta L=0$ and $\Delta R=0$. \par When $\sigma\ne 1/2$, the mirror symmetry of the vector system of the second approximate equation of the Riemann zeta function is broken, in other words, $\Delta L\ne 0$ and $\Delta R\ne 0$; hence the change in $\Delta L$, if the Riemann zeta function can take the value of a non-trivial zero, must be compensated by the change in $\Delta R$. \par Consider the dependence of the sum of the projections of the invariants $L_1$ and $L_2$ (fig. \ref{fig:projections_sum_l1_l2}) and of the projection of the vector $R$ of the remainder term (fig. \ref{fig:projection_r}) on the axis of symmetry $M$ of the vector system of the second approximate equation of the Riemann zeta function on the real part of a complex number, at the point where the axis of symmetry $M$ of this vector system passes through both the zero of the complex plane and the end of the first middle vector $Y_1$ of the Riemann spiral. \begin{figure} \caption{Base point \#4525, sum of projections of invariants $L_1$ and $L_2$, ordinate $\zeta(0.35+5006.186i)_L=0$} \label{fig:projections_sum_l1_l2} \end{figure} \begin{figure} \caption{Base point \#4525, projection of the remainder term $R$, ordinate $\zeta(0.35+5006.186i)_L=0$} \label{fig:projection_r} \end{figure} \par It is obvious that these functions are equal only at the point $\sigma=1/2$, i.e. on the critical line. \par Consider the \textit{boundary function} that separates the values $\Delta L$ and $\Delta R$ (fig.
\ref{fig:boundary_function}), at the point where the axis of symmetry $M$ of the vector system of the second approximate equation of the Riemann zeta function passes through both the zero of the complex plane and the end of the first middle vector $Y_1$ of the Riemann spiral: \begin{equation}\label{boundary_function}F(s)=A \Big(\frac{\sum_{n=1}^{m}{|Y_n(s)|}}{\sum_{n=1}^{m}{|X_n(s)|}}-1\Big);\end{equation} \par where $A$ is some positive constant. \begin{figure} \caption{Base point \#4525, boundary function $F(s)$, ordinate $\zeta(0.35+5006.186i)_L=0$, $A=0.5$} \label{fig:boundary_function} \end{figure} \begin{figure} \caption{Graphics of projections of the Riemann zeta function} \label{fig:graphics_projections} \end{figure} \par It is obvious that the sum of the projections of the invariants $L_1$ and $L_2$ (fig. \ref{fig:projections_sum_l1_l2}) on the axis of symmetry $M$ of the vector system of the second approximate equation of the Riemann zeta function at $\sigma=0$ is determined by the value of the projection $\zeta(0+it)_M$ (fig. \ref{fig:graphics_projections}), because in this case $|\Delta L|\gg|\Delta R|$ (fig. \ref{fig:projection_r}). Therefore, at all points except $\sigma=1/2$, the sum of the projections of the invariants $L_1$ and $L_2$ on the axis of symmetry $M$ is greater in absolute value than the boundary function, while the projection of the vector $R$ of the remainder term on the axis of symmetry $M$ of the vector system of the second approximate equation of the Riemann zeta function is smaller in absolute value than the boundary function at all points except $\sigma=1/2$. \par We now need to find out how the deviation angle $\Delta\varphi_R$ of the vector of the remainder term of the second approximate equation of the Riemann zeta function from the normal $L$ to the axis of symmetry of this vector system behaves, because the value of this angle determines the value of $\Delta R$.
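\par A rough numerical sketch of the boundary function (\ref{boundary_function}) can be written under the identifications $X_n(s)=n^{-s}$, $Y_n(s)=\chi(s)n^{s-1}$ and $m=\lfloor\sqrt{t/2\pi}\rfloor$, together with the asymptotic $|\chi(\sigma+it)|\approx(t/2\pi)^{1/2-\sigma}$; these identifications are our assumptions from the standard form of the approximate functional equation, not quotations from the text:

```python
import math

def boundary_function(sigma: float, t: float, A: float = 0.5) -> float:
    """Sketch of F(s) = A*(sum|Y_n|/sum|X_n| - 1), assuming |X_n| = n**(-sigma),
    |Y_n| = |chi(s)| * n**(sigma-1), and |chi| given by its asymptotic."""
    m = int(math.sqrt(t / (2 * math.pi)))
    chi_mod = (t / (2 * math.pi)) ** (0.5 - sigma)  # asymptotic |chi(sigma+it)|
    sum_x = sum(n ** (-sigma) for n in range(1, m + 1))
    sum_y = chi_mod * sum(n ** (sigma - 1) for n in range(1, m + 1))
    return A * (sum_y / sum_x - 1.0)
```

In this approximation $F$ vanishes identically on the critical line, since $|\chi(1/2+it)|=1$ there, and is positive for $\sigma<1/2$.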
\par Although Riemann recorded the remainder term (\ref{zeta_eq_2_zi}) of the second approximate equation of the Riemann zeta function explicitly, Riemann and other authors use the argument of the remainder term only in the particular case (\ref{arg_rem_2}) when $\sigma=1/2$; we therefore use the explicit expression for the chi function (\ref{chi_eq_ex3}) and calculate the argument of the remainder term using the exact values of the Riemann zeta function \cite{ZF}: \begin{equation}\label{delta_varphi_r}\Delta\varphi_R=\frac{1}{2}Arg(\chi(s))-Arg(R(s))=\frac{1}{2}\arccos(\frac{Re(\chi(s))}{|\chi(s)|})-\arccos(\frac{Re(R(s))}{|R(s)|});\end{equation} \par where \begin{equation}\label{r_vect}R(s)=\zeta(s)-\sum_{n=1}^{m}{X_n(s)}-\sum_{n=1}^{m}{Y_n(s)};\end{equation} \begin{equation}\label{r_module}|R(s)|=\sqrt{Re(R(s))^2+Im(R(s))^2};\end{equation} \par The calculations show a linear dependence (fig. \ref{fig:delta_varphi_r_real}) of the deviation angle $\Delta\varphi_R$ of the vector of the remainder term of the second approximate equation of the Riemann zeta function from the normal $L$ to the axis of symmetry of this vector system on the real part of a complex number.
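\par One caveat worth flagging about (\ref{delta_varphi_r}): $\arccos(Re(z)/|z|)$ recovers the argument of $z$ only up to sign, because it discards the sign of $Im(z)$; a signed computation would use the two-argument arctangent. A small sketch of the difference:

```python
import cmath
import math

def arg_via_arccos(z: complex) -> float:
    """Argument recovered from the real part alone, as in the arccos form:
    the sign of the imaginary part is lost."""
    return math.acos(z.real / abs(z))

z = 1 - 1j                      # true argument is -pi/4
signed = cmath.phase(z)         # two-argument arctangent: keeps the sign
unsigned = arg_via_arccos(z)    # returns +pi/4: sign of Im(z) discarded
```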
\begin{figure} \caption{Base point \#4525, the deviation angle $\Delta\varphi_R$ of the vector of the remainder term, rad, ordinate $\zeta(0.35+5006.186i)_L=0$} \label{fig:delta_varphi_r_real} \end{figure} \begin{figure} \caption{The deviation angle $\Delta\varphi_R$ of the vector of the remainder term, rad (versus the imaginary part of a complex number)} \label{fig:delta_varphi_r_complex} \end{figure} \begin{figure} \caption{Fractional part of $\sqrt{t/2\pi}$} \label{fig:frac_part_m_complex} \end{figure} \begin{figure} \caption{The maximum deviation angle $\Delta\varphi_R$ of the vector of the remainder term, rad (versus the imaginary part of a complex number)} \label{fig:delta_varphi_r_max_complex} \end{figure} \par \textit{This dependence is confirmed for any values of the imaginary part of a complex number, since it is determined by the form of the polyline formed by the vector system of the second approximate equation of the Riemann zeta function, for the symmetric values $\sigma+it$ (fig. \ref{fig:gram_point_4525_left}) and $1-\sigma+it$ (fig. \ref{fig:gram_point_4525_right}).} \par It can be shown that these polylines are mirror congruent, hence all the angles of these polylines are mirror congruent, including the arguments of the remainder terms. \par The deviation angle $\Delta\varphi_R$ of the vector of the remainder term of the second approximate equation of the Riemann zeta function from the normal $L$ to the axis of symmetry of this vector system has a periodic dependence (fig. \ref{fig:delta_varphi_r_complex}) on the imaginary part of a complex number, with a period equal to an interval (\ref{m_interval}); the fractional part of the expression $\sqrt{t/2\pi}$, which varies from 0 to 1 (fig. \ref{fig:frac_part_m_complex}), has the same periodic dependence.
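\par The claimed period structure is easy to reproduce numerically: assuming $m=\lfloor\sqrt{t/2\pi}\rfloor$, the fractional part of $\sqrt{t/2\pi}$ sweeps from 0 to 1 between consecutive jumps of $m$, which occur at the interval boundaries $t_m=2\pi m^2$:

```python
import math

def m_index(t: float) -> int:
    """Number of terms in the main sums at height t (assumed m = floor(sqrt(t/2pi)))."""
    return int(math.sqrt(t / (2 * math.pi)))

def frac_part(t: float) -> float:
    """Fractional part of sqrt(t/2pi); it sweeps 0 -> 1 over each interval
    between consecutive jumps of m."""
    x = math.sqrt(t / (2 * math.pi))
    return x - math.floor(x)

# m jumps from 28 to 29 at the boundary t = 2*pi*29**2, where the
# fractional part wraps from just below 1 back to just above 0.
t_boundary = 2 * math.pi * 29 ** 2
```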
\par The deviation angle $\Delta\varphi_R$ of the vector of the remainder term of the second approximate equation of the Riemann zeta function from the normal $L$ to the axis of symmetry of this vector system attains its maximum value at the boundaries of the intervals (\ref{m_interval}), and the maximum absolute value of the deviation decreases asymptotically (fig. \ref{fig:delta_varphi_r_max_complex}) as the imaginary part of a complex number grows, which corresponds to the estimate of the remainder term of the gamma function: \begin{equation}\label{mu_limit}\mu(s)\to 0, t\to\infty;\end{equation} \par \textit{Thus, at the point where the axis of symmetry $M$ of the vector system of the second approximate equation of the Riemann zeta function passes through both the zero of the complex plane and the end of the first middle vector $Y_1$ of the Riemann spiral, the condition (\ref{zeta_zero_module}) can be true only when $\sigma=1/2$, since for $\sigma\ne 1/2$ we have $|\Delta L|>|\Delta R|$ on any interval (\ref{m_interval}).} \par One can think of various reasons why the expression (\ref{boundary_function}) might fail to be a \textit{boundary function}, but we expect that they will all be refuted. \section{Summary} Was Riemann going to speak at the Berlin Academy of Sciences, on the occasion of his election as a corresponding member, to present a proof of the asymptotic law of the distribution of prime numbers? \par Only Riemann himself could have answered this question, but we are inclined to assume that he did not intend to. \par Neither before nor after (Riemann died seven years later) did he return to the subject publicly.
\par Riemann did not publish works in progress, and the paper analysing the formula now called the Riemann-Siegel formula was published by Siegel on the basis of Riemann's notes; that paper, however, is a development of the analytic study of the Riemann zeta function rather than a proof of the asymptotic law of the distribution of prime numbers. \par The lecture was rather about analytic functions of a complex variable. \par Riemann showed how, bypassing the difficult arguments about the convergence of the series that defines the function, one can obtain its analytic continuation using the residue theorem. \par Riemann also used a feature of a function of a complex variable: its zeros are its singular points, which carry the basic information about the function. \par It was in connection with this feature of functions of a complex variable that the Riemann hypothesis appeared, the hypothesis on the distribution of the zeros of the Riemann zeta function. \par Also strange is the absence of any mention of the representation of complex numbers by points on the plane, although Riemann in his report uses the rotation of the zeta function by the angle $Arg(\chi(s))/2$, which is certainly an operation on complex numbers as points on the plane. \par Such an attitude toward complex numbers seems even more strange in light of the fact that Riemann was a student of Gauss, who was one of the first to introduce the representation of complex numbers by points on the plane. \par \textit{In other words, we tend to assume that the zeta function was chosen to show how the problem can be solved by the methods of functions of a complex variable; for Riemann the problem itself was not what mattered, what mattered was the approach and the methods given by the theory of functions of a complex variable as the apotheosis of the theory of analytic functions.} \par Although the theorem on the distribution of prime numbers was first proved analytically, i.e.
using the analytic Riemann zeta function, a proof using functions of a real variable, i.e. an elementary proof, was later found. \par Thus, the role of the Riemann zeta function has shifted towards regularization, the so-called methods of generalized summation of divergent series. \par The Riemann hypothesis seems to belong to such problems, since the generalized Riemann hypothesis deals with a whole class of Dirichlet L-functions. \par Zeta function regularization is also used in physics, particularly in quantum field theory. \par We prefer to conclude that the main role of the Riemann zeta function is in the understanding of generalized methods for summing divergent asymptotic series and, as a consequence, in constructing analytic continuations of functions of a complex variable. \par Judging by the fact that on numerous forums the infinite sum of the natural numbers is \glqq seriously\grqq\ discussed: \begin{equation}\label{123}1+2+3+\dots = -\frac{1}{12};\end{equation} \par only a few understand the essence of generalized summation, or regularization, of divergent series. \par Here mathematics encounters philosophy, namely \textit{the law of the unity of opposites}. \par The essence of generalized summation is \textit{regularization} (which is why the second name of the method is regularization): a divergent series cannot exist without its \textit{opposite}, a convergent series. \par In other words, there is only \textit{one series}, but it converges in one region and diverges in another. \par The most natural such series is the Dirichlet series: \begin{equation}\label{dirichlet}\sum_{n=1}^{\infty}\frac{1}{n^s};\end{equation} \par which in real form was studied by Euler (he first raised the question of the need for a concept of generalized summation), and in complex form it was considered by Riemann.
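\par The regularization idea can be made concrete with the alternating (eta) series, which converges for $Re(s)>0$ and extends the Dirichlet series into part of its region of divergence via $\zeta(s)=\eta(s)/(1-2^{1-s})$. A minimal numerical sketch (the two-term averaging of partial sums is our own choice of acceleration, not a method from the text):

```python
# The alternating Dirichlet (eta) series converges for Re(s) > 0 and gives
# zeta(s) = eta(s) / (1 - 2**(1-s)) where the plain series sum(1/n**s) diverges.
def zeta_via_eta(s: float, terms: int = 20000) -> float:
    prev = 0.0
    partial = 0.0
    for n in range(1, terms + 1):
        prev = partial
        partial += (-1) ** (n - 1) * n ** (-s)
    eta = (partial + prev) / 2  # averaging damps the alternating tail
    return eta / (1 - 2 ** (1 - s))

# At s = 1/2 the series sum(1/sqrt(n)) diverges, yet the regularized
# value zeta(1/2) = -1.46035... is recovered.
approx = zeta_via_eta(0.5)
```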
\par Riemann left aside the question of generalized summation, skillfully replacing the divergent Dirichlet series by the already regularized contour integral for the gamma function of a complex variable: \begin{equation}\label{gamma_int}\Gamma (z)={\frac {1}{e^{i2\pi z}-1}}\int \limits _{L}\!t^{\,z-1}e^{-t}\,dt;\end{equation} \par The essence of a unified \glqq summation\grqq\ of an infinite series lies in defining the \glqq sum\grqq\ of this series both in the region where it converges and in the region where \textit{the same} series diverges. \par The misconception begins with the definition of an \textit{infinite sum}. In the case of a convergent series, it only seems to us that we can find the \glqq sum\grqq\ of this series. In fact, we find the \textit{limit of the partial sums}, which exists by the very definition of convergence. \par Euler first formulated the need for a different meaning of the word \glqq sum\grqq\ as applied to divergent series (or rather to a series in the region where it diverges); he explained this by the practical need to attach some value to a divergent series. \par We now know that this value is found as the \textit{limit of generalized partial sums} of an infinite series. \par And the main condition imposed on the method of obtaining generalized partial sums (besides the existence of the limit) is \textit{regularity}, i.e. in the region where the series converges the limit of the generalized partial sums must equal the limit of the partial sums of this series. \par This understanding, as Hardy observed, came only with the development of the theory of functions of a complex variable, namely with the notion of \textit{analytic continuation}, which is closely linked to the infinite series that defines an analytic function, and also to the fact that this infinite series converges in one region and diverges in another.
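\par The notion of a regular generalized limit of partial sums can be illustrated by Cesaro $(C,1)$ summation, in which the generalized partial sums are the running averages of the ordinary partial sums; for Grandi's divergent series $1-1+1-\dots$ they converge to $1/2$ (a standard textbook example, not taken from the text):

```python
# Cesaro (C,1) summation: the generalized sum is the limit of the averages
# of the ordinary partial sums.
def cesaro_sum(terms) -> float:
    partial = 0.0
    running_total = 0.0
    count = 0
    for a in terms:
        partial += a
        running_total += partial
        count += 1
    return running_total / count

# Grandi's series 1 - 1 + 1 - ...: partial sums oscillate 1, 0, 1, 0, ...
# and their averages tend to the regularized value 1/2.
grandi = [(-1) ** n for n in range(100001)]
c1 = cesaro_sum(grandi)
```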
\par An analytic continuation of a function of a complex variable, if it is possible, is unique, and this fact (which has a rigorous proof) is possible only if there exists a limit of the generalized partial sums of the infinite series by which the analytic function is defined, in the region where this series diverges. \par In the theory of generalized summation of divergent series it is also rigorously proved that if the limit of generalized partial sums exists for two different regularization methods (of obtaining generalized partial sums), then it has the same value. \par \textit{It is this correspondence between different methods of obtaining generalized partial sums and the analytic continuation of a function of a complex variable that Hardy had in mind.} \par The Dirichlet series $\sum\limits_{n=1}^{\infty}\frac{1}{n^s}$ in complex form defines the Riemann zeta function. \par As is known, this series diverges in the critical strip, where the Riemann zeta function has non-trivial zeros. \par Hence all non-trivial zeros of the Riemann zeta function are \textit{limits} of generalized partial sums of the Dirichlet series, while the trivial zeros of the Riemann zeta function correspond to the odd-index Bernoulli numbers, which are all zero. \par In the theory of the Riemann zeta function, the Euler-Maclaurin formula\footnote{unfortunately, the fact that the Euler-Maclaurin summation formula is used for the Riemann zeta function as a method of generalized summation of divergent series is not mentioned in all textbooks.} is traditionally used to regularize the Dirichlet series; as we mentioned earlier, this formula is used when the partial sums of a divergent series are suitable for calculating the generalized sum of that divergent series. \par The Euler-Maclaurin generalized summation formula allows us to pass from an infinite sum to an improper integral, i.e. to the limit of generalized partial sums of the Dirichlet series.
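\par A compact numerical sketch of the Euler-Maclaurin regularization just described (our own implementation of the standard formula $\zeta(s)=\sum_{n<N}n^{-s}+\frac{N^{1-s}}{s-1}+\frac{N^{-s}}{2}+\sum_k\frac{B_{2k}}{(2k)!}s(s+1)\cdots(s+2k-2)N^{-s-2k+1}+\dots$, truncated at three Bernoulli corrections): for $s=-1$ all corrections beyond the first vanish and the formula returns exactly $-1/12$, the regularized value of $1+2+3+\dots$ in (\ref{123}).

```python
import math

# Bernoulli numbers B_2, B_4, B_6 used in the correction terms
BERNOULLI = [1.0 / 6.0, -1.0 / 30.0, 1.0 / 42.0]

def zeta_euler_maclaurin(s: complex, N: int = 50) -> complex:
    """Euler-Maclaurin regularization of the Dirichlet series (valid for s != 1)."""
    s = complex(s)
    total = sum(n ** (-s) for n in range(1, N))
    total += N ** (1 - s) / (s - 1) + N ** (-s) / 2
    for k, B in enumerate(BERNOULLI, start=1):
        poly = 1.0  # product s*(s+1)*...*(s+2k-2)
        for j in range(2 * k - 1):
            poly *= s + j
        total += B / math.factorial(2 * k) * poly * N ** (-s - 2 * k + 1)
    return total

val_minus1 = zeta_euler_maclaurin(-1)  # regularized 1+2+3+... -> -1/12
val_2 = zeta_euler_maclaurin(2)        # convergent region: pi**2/6
```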
\par The geometric analysis of the partial sums of the Dirichlet series that defines the Riemann zeta function allowed us to draw an important conclusion: a generalized summation of an infinite series is possible if this series diverges asymptotically. \par Here we touch the essence of defining analytic functions through \textit{asymptotic} infinite series: the value of the function is equal to the value of the asymptote both when the series converges asymptotically and when it diverges asymptotically; to find the limit of this asymptote we use the partial sums of the series when it converges, and we must use the regular generalized partial sums when it diverges. \par In this way we can obtain an analytic continuation of a function given by an \textit{asymptotic infinite series}. \par Therefore, the result obtained at the very beginning of the research, namely the geometrically derived use of the alternative generalized summation method of Cesaro, is as important as the description of the various options for confirming the Riemann hypothesis. \section{Conclusion} \par We believe that the method of geometric analysis of Dirichlet series, based on the representation of complex numbers by points on the plane, will complement the toolset of analytic number theory. \par The method described in this paper allows us to get at the essence of a function of a complex variable, to identify regularities that explain any value of this function (including in the region where the series that defines this function diverges) and, most importantly, it gives \textit{an idea of the exact value of zero} as a sum of vectors that form a closed polyline.
\par After going through the analysis of the vector system of the second approximate equation of the Riemann zeta function, we can formulate the results of the second method of confirming the Riemann hypothesis without using this vector system, since we only needed it to find the key points indicating that the Riemann zeta function \textit{cannot have non-trivial zeros in the critical strip, except on the critical line.} \par Obviously we can pass from the fixed coordinate system formed by the axes $x=Re(s)$ and $y=Im(s)$ to the moving coordinate system formed by the axis $L$ with angle $\varphi_L=Arg(\chi(s))/2$ and the axis $M$ with angle $\varphi_M=(Arg(\chi(s))+\pi)/2$, both passing through the zero of the complex plane. \par Then from the functional equation of the Riemann zeta function: \begin{equation}\label{zeta_func_eq3}\zeta(s)=\chi(s)\zeta(1-s); \end{equation} \par and the equality of the arguments of the functions: \begin{equation}\label{arg6}Arg(\zeta(1-s))=-Arg(\zeta(\overline{1-s})); \end{equation} \par using the arithmetic of the arguments of complex numbers, when we rotate the vector of the value of the Riemann zeta function by the angle $Arg(\chi(s))/2$ in the negative direction, we obtain: \begin{equation}\label{zeta_arg}Arg(\zeta(s))-\frac{Arg(\chi(s))}{2}=-(Arg(\zeta(\overline{1-s}))-\frac{Arg(\chi(s))}{2});\end{equation} \par Consequently, in the moving coordinate system formed by the axes $L$ and $M$, the angles of the vectors of the values of $\zeta(s)$ and $\zeta(\overline{1-s})$ are symmetric about the axis $L$; thus, in accordance with the symmetry of the angles, the vector of the value of $\hat\zeta(s)=\zeta (1/2+it)$ is always directed along the axis $L$.
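\par The identity (\ref{zeta_arg}) is pure argument arithmetic, independent of the zeta function itself: for any complex $a$ (standing in for $\zeta(1-s)$) and any $\chi$, setting $z_1=\chi a$ (the role of $\zeta(s)$) and $z_2=\bar a$ (the role of $\zeta(\overline{1-s})$) gives $Arg(z_1)-Arg(\chi)/2=-(Arg(z_2)-Arg(\chi)/2)$ modulo $2\pi$. A sketch with generic values, chosen so that no branch wrap-around occurs:

```python
import cmath

# Generic stand-ins: chi with argument 0.6 and a with argument 0.2.
chi = cmath.exp(0.6j)
a = 1.3 * cmath.exp(0.2j)

z1 = chi * a            # role of zeta(s); argument 0.6 + 0.2 = 0.8
z2 = a.conjugate()      # role of zeta(conj(1-s)); argument -0.2

left = cmath.phase(z1) - cmath.phase(chi) / 2       # 0.8 - 0.3 = 0.5
right = -(cmath.phase(z2) - cmath.phase(chi) / 2)   # -(-0.2 - 0.3) = 0.5
```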
\par In other words, in the moving coordinate system formed by the axes $L$ and $M$, the vector of the value of $\hat\zeta(s)=\zeta(1/2+it)$ remains fixed and only changes its modulus and sign, while for $\sigma\ne 1/2$ the vectors of the values of $\zeta(s)$ and $\zeta(\overline{1-s})$ rotate in this moving coordinate system in different directions with the same speed. \par Therefore: \par a) the projection $\hat\zeta(s)_M=\zeta(1/2+it)_M$ is always zero; \par b) the projections $\zeta(s)_L$ and $\zeta(\overline{1-s})_L$ are periodically equal to zero when the vectors of the values of $\zeta(s)$ and $\zeta(\overline{1-s})$ both lie along the axis $M$ with opposite directions; at such a point the projections $\zeta(s)_M$ and $\zeta(\overline{1-s})_M$ are not equal to zero, because $\zeta(s)_L$ is an odd harmonic function and $\zeta(s)_M$ is an even harmonic function conjugate to $\zeta(s)_L$; \par c) the projections $\zeta(s)_M$ and $\zeta(\overline{1-s})_M$ are periodically equal to zero when the vectors of the values of $\zeta(s)$ and $\zeta(\overline{1-s})$ both lie along the axis $L$ with the same direction; at such a point the projections $\zeta(s)_L$ and $\zeta(\overline{1-s})_L$ are not equal to zero, since $\zeta(s)_L$ is an odd harmonic function and $\zeta(s)_M$ is an even harmonic function conjugate to $\zeta(s)_L$. \par So only $\hat\zeta(s)_M$ and $\hat\zeta(s)_L$ can be equal to zero at the same time, because $\hat\zeta(s)_L=\zeta(1/2+it)_L$ is an odd harmonic function, while $\hat\zeta(s)_M=\zeta(1/2+it)_M$ is a harmonic function \textit{identically equal to zero} (fig. \ref{fig:graphics_projections_2}). \par \textit{Therefore, when $\sigma\ne 1/2$, the Riemann zeta function cannot be zero, because when $\sigma\ne 1/2$ the projections $\zeta(s)_L$ and $\zeta(s)_M$ cannot be equal to zero at the same time} (fig. \ref{fig:graphics_projections_3}).
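On the critical line this symmetry can be stated without reference to the figures; the following short derivation is our addition, using only equation (\ref{zeta_arg}). For $s=\frac{1}{2}+it$ we have $\overline{1-s}=\overline{\frac{1}{2}-it}=\frac{1}{2}+it=s$, so equation (\ref{zeta_arg}) becomes

```latex
\begin{equation*}
Arg(\zeta(\tfrac{1}{2}+it))-\frac{Arg(\chi(\tfrac{1}{2}+it))}{2}
=-\Big(Arg(\zeta(\tfrac{1}{2}+it))-\frac{Arg(\chi(\tfrac{1}{2}+it))}{2}\Big),
\end{equation*}
```

hence $Arg(\zeta(\frac{1}{2}+it))\equiv\frac{Arg(\chi(\frac{1}{2}+it))}{2} \pmod{\pi}$. This is precisely the statement that the vector of the value of $\hat\zeta(s)$ lies along the line of the axis $L$, changing only its modulus and sign.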
\begin{figure} \caption{Graphics of projections of the Riemann zeta function, $\sigma=1/2$} \label{fig:graphics_projections_2} \end{figure} \begin{figure} \caption{Graphics of projections of the Riemann zeta function, $\sigma=0$} \label{fig:graphics_projections_3} \end{figure} \par There may be a lemma on conjugate harmonic functions that are not identically zero, but we have not found one, just as with the lemma on the symmetric polygon that we proved earlier (Lemma 3). \par Therefore, to confirm our conclusions, we are forced to return to the vector system of the second approximate equation of the Riemann zeta function. \par The vertical marks on the graphs (fig. \ref{fig:graphics_projections_2} and \ref{fig:graphics_projections_3}) are \textit{the base points}, corresponding to solutions of the equation: \begin{equation}\label{}Arg(\chi(\frac{1}{2}+it_k))=(2k-1)\pi;\end{equation} \par Using the mirror symmetry property of the vector system of the second approximate equation of the Riemann zeta function when $\sigma=1/2$, we previously defined two types of base points: \par $a_1$ - the first middle vector of the Riemann spiral at the base point is above or along the real axis of the complex plane (in the second case it is likely that the non-trivial zero of the Riemann zeta function lies at the base point itself); in this case the non-trivial zero of the Riemann zeta function that corresponds to this base point is located between this and the previous base point; \par $a_2$ - the first middle vector of the Riemann spiral at the base point is below the real axis of the complex plane; in this case the non-trivial zero of the Riemann zeta function that corresponds to this base point is located between this and the next base point; \par as well as four types of intervals between base points of different types: \par $A_1=a_1a_1$ and $A_2=a_2a_2$ - intervals of this kind contain one non-trivial zero of the Riemann zeta function; \par $B=a_2a_1$ - an interval of this kind contains two non-trivial
zeros of the Riemann zeta function. \par $C=a_1a_2$ - an interval of this kind does not contain any non-trivial zero of the Riemann zeta function. \par We also found that the following combinations of intervals cannot occur: \par $A_1A_2$, $A_2A_1$, $BB$ and $CC$; \par Hence we have \textit{a fixed set} of combinations of intervals: \par $A_1A_1=a_1a_1a_1$, $A_1C=a_1a_1a_2$, $CB=a_1a_2a_1$, $CA_2=a_1a_2a_2$, $BA_1=a_2a_1a_1$, $BC=a_2a_1a_2$, $A_2B=a_2a_2a_1$, $A_2A_2=a_2a_2a_2$; \par The sequence of base points represented in the graphs (fig. \ref{fig:graphics_projections_2} and \ref{fig:graphics_projections_3}) contains all the possible combinations of intervals: \par $a_2a_2a_2a_1a_1a_1a_2a_1a_2a_2=A_2A_2BA_1A_1CBCA_2$; \par In accordance with the properties of the vector system of the second approximate equation of the Riemann zeta function, we can determine the sign of the function $\zeta(\sigma+it)_L$ and the sign of its first derivative at each base point.
\par This requires: \par 1) determining the type of the base point by the position of the first middle vector of the Riemann spiral relative to the real axis of the complex plane at the base point; \par 2) determining the direction of the normal $L$ to the axis of symmetry of the vector system of the second approximate equation of the Riemann zeta function at the base point. \par Then: \par A) if the first middle vector of the Riemann spiral is below the real axis of the complex plane, the function $\zeta(\sigma+it)_L$ has the sign opposite to the direction of the normal $L$ to the axis of symmetry; \par B) if the first middle vector of the Riemann spiral is above the real axis of the complex plane, the function $\zeta(\sigma+it)_L$ has the sign corresponding to the direction of the normal $L$ to the axis of symmetry; \par C) the sign of the first derivative of the function $\zeta(\sigma+it)_L$ always corresponds to the direction of the normal $L$ to the axis of symmetry. \par These rules hold for any combination of intervals from \textit{the fixed set}, so they hold for any combination of intervals that can occur. \par In the moving coordinate system formed by the axes $L$ and $M$, when $\sigma\ne 1/2$ the vector of the values of the Riemann zeta function must rotate through the angle $\pi/2$ from the position corresponding to $\zeta(\sigma+it)_L=0$ to the position corresponding to $\zeta(\sigma+it)_M=0$, as well as from the position corresponding to $\zeta(\sigma+it)_M=0$ to the position corresponding to $\zeta(\sigma+it)_L=0$; hence \textit{all the zeros of the function $\zeta(\sigma+it)_M$ when $\sigma\ne 1/2$ lie between the zeros of the function $\zeta(\sigma+it)_L$.} \par Corollary 1. The function $\zeta(1/2+it)_L$ has an infinite number of zeros (this statement corresponds to Hardy's theorem \cite{HA2} on the infinite number of non-trivial zeros of the Riemann zeta function on the critical line). \par Corollary 2.
The number of non-trivial zeros of the Riemann zeta function on the critical line corresponds to the number of base points: \begin{equation}\label{}N_0(T)=\Bigg[\Big|\frac{T}{2\pi}(\log{\frac{T}{2\pi}}-1)-\frac{1}{8}+\frac{2\mu(T)-\alpha_2}{2\pi}\Big|\Bigg]+2;\end{equation} \par where $\mu(T)$ is the remainder term of the gamma function (\ref{mu}) when $\sigma=1/2$, and $\alpha_2$ is the argument of the function $\chi(s)$ at the second base point. \par Corollary 3. The function $\zeta(\sigma+it)_L$ when $\sigma\ne 1/2$ has an infinite number of zeros. \par Corollary 4. The function $\zeta(\sigma+it)_M$ when $\sigma\ne 1/2$ has an infinite number of zeros. \par Corollary 5. The zeros of the function $\zeta(\sigma+it)_L$ and of the function $\zeta(\sigma+it)_M$ when $\sigma\ne 1/2$ do not coincide, because all zeros of the function $\zeta(\sigma+it)_M$ when $\sigma\ne 1/2$ lie between the zeros of the function $\zeta(\sigma+it)_L$. \par Based on the obtained results, we believe that the methods of confirmation of the Riemann hypothesis based on the properties of the vector system of the second approximate equation of the Riemann zeta function will soon lead to its proof. \section{Acknowledgements} Special thanks to Professor July Dubensky, who allowed me to speak at the seminar and instilled confidence to continue the research. I thank my colleagues and friends, who listened to the first results and offered support throughout the research, and my wife, who helped to prepare the presentation for the seminar, supported me and believed in success. \par I also thank the organizers of the Chebyshev collection conference in Tula: inclusion in the list of participants of this conference allowed me to prepare the first version of the paper, while the subsequent exclusion from the list mobilized me and led to new important results of the research. \end{document}
\begin{document} \title{A new structure for difference matrices over abelian $p$-groups} \author{Koen van Greevenbroek \and Jonathan Jedwab} \date{8 June 2018 (revised 28 November 2018)} \maketitle \symbolfootnote[0]{ Department of Mathematics, Simon Fraser University, 8888 University Drive, Burnaby BC V5A 1S6, Canada. \par J.~Jedwab is supported by NSERC. \par Email: {\tt [email protected]}, {\tt [email protected]} } \begin{abstract} A difference matrix over a group is a discrete structure that is intimately related to many other combinatorial designs, including mutually orthogonal Latin squares, orthogonal arrays, and transversal designs. Interest in constructing difference matrices over $2$-groups has been renewed by the recent discovery that these matrices can be used to construct large linking systems of difference sets, which in turn provide examples of systems of linked symmetric designs and association schemes. We survey the main constructive and nonexistence results for difference matrices, beginning with a classical construction based on the properties of a finite field. We then introduce the concept of a contracted difference matrix, which generates a much larger difference matrix. We show that several of the main constructive results for difference matrices over abelian $p$-groups can be substantially simplified and extended using contracted difference matrices. In particular, we obtain new linking systems of difference sets of size $7$ in infinite families of abelian $2$-groups, whereas previously the largest known size was~$3$. \end{abstract} \section{Introduction} \label{sec:introduction} Let $G$ be a non-trivial group.
A \emph{$(G, m, \lambda)$ difference matrix over~$G$} is an $m \times \lambda |G|$ matrix $(a_{ij})$ with $0 \le i \le m-1$ and $0 \le j \le \lambda |G|-1$ and each entry $a_{ij} \in G$ such that, for all distinct rows $i$ and $\ell$, the multiset of ``differences'' \[ \{a_{ij}a_{\ell j}^{-1} : 0 \le j \le \lambda |G|-1\} \] contains each element of $G$ exactly $\lambda$ times. When the group $G$ is abelian, we shall (except in \cref{sec:red-link}) use additive rather than multiplicative group notation. \begin{example} Let $G = {\mathbb{Z}}_2^3$ and represent the element $(u,v,w) \in G$ in the compressed form $uvw$. The matrix \begin{equation*} \left[ \begin{array}{cccccccc} 000 & 000 & 000 & 000 & 000 & 000 & 000 & 000 \\ \rowcolor{thm2} 000 & 101 & 111 & 010 & 110 & 011 & 001 & 100 \\ 000 & 100 & 010 & 110 & 001 & 101 & 011 & 111 \\ \rowcolor{thm2} 000 & 111 & 110 & 001 & 011 & 100 & 101 & 010 \\ 000 & 001 & 101 & 100 & 111 & 110 & 010 & 011 \\ \end{array} \right] \end{equation*} is a $(G, 5, 1)$ difference matrix. For example, the differences between corresponding entries of the two shaded rows are \[ 000, 010, 001, 011, 101, 111, 100, 110, \] in which each element of $G$ appears exactly once. \end{example} Difference matrices are related to many other combinatorial designs, including mutually orthogonal Latin squares, orthogonal arrays, transversal designs, whist tournaments, generalized Steiner triple systems, and optical orthogonal codes \cite{colbourn-diffmatrices}, \cite{pan-chang}. Recently, difference matrices over $2$-groups were used as the key ingredient in a new construction of linking systems of difference sets \cite{jedwab-li-simon-arxiv}. The central objective is to determine, for a given group $G$ and parameter $\lambda$, the largest number of rows $m$ for which a $(G,m,\lambda)$ difference matrix exists. Colbourn~\cite{colbourn-diffmatrices} gives a concise summary of known existence and nonexistence results as of 2007.
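The defining property is purely combinatorial and easy to verify by machine. The following sketch is our own minimal illustration (not from the paper): each element $uvw$ of ${\mathbb{Z}}_2^3$ is encoded as a $3$-bit integer, so that the group operation, and hence every ``difference'', is bitwise XOR, and the example above is checked directly.

```python
from itertools import combinations
from collections import Counter

def is_difference_matrix(rows, group_order, lam=1):
    """Check the (G, m, lam) difference-matrix property for G = Z_2^k,
    with elements encoded as integers 0..group_order-1 and the group
    operation bitwise XOR (so every element is its own inverse)."""
    for r, s in combinations(rows, 2):
        diffs = Counter(a ^ b for a, b in zip(r, s))
        if any(diffs[g] != lam for g in range(group_order)):
            return False
    return True

# The (Z_2^3, 5, 1) difference matrix from the example, e.g. 101 -> 5.
M = [
    [0, 0, 0, 0, 0, 0, 0, 0],
    [0, 5, 7, 2, 6, 3, 1, 4],
    [0, 4, 2, 6, 1, 5, 3, 7],
    [0, 7, 6, 1, 3, 4, 5, 2],
    [0, 1, 5, 4, 7, 6, 2, 3],
]
print(is_difference_matrix(M, 8))  # True
```

The same function checks any difference matrix over an elementary abelian $2$-group once its entries are encoded as integers in this way.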
This paper is organized so that the material up to the end of \cref{sec:red-link} is a survey, whereas that from Section~\ref{sec:contr-diff-matrices} onwards presents new ideas and results. In \cref{sec:basic} we describe some basic properties of difference matrices, including nonexistence results and connections to other combinatorial structures. In \cref{sec:diff-matrices} we review the major constructions for difference matrices, principally: a classical construction over elementary abelian $p$-groups based on finite fields; a composition construction based on the Kronecker product; and a construction of 4-row difference matrices over abelian noncyclic groups. In \cref{sec:red-link} we explain how difference matrices with $\lambda=1$ over certain $2$-groups were recently used to construct linking systems of difference sets. In \cref{sec:contr-diff-matrices} we introduce the concept of a contracted difference matrix over an abelian $p$-group, which generates a much larger difference matrix over the same group. We derive a finite field construction, a composition construction, and an abelian noncyclic 2-group construction for contracted difference matrices. These constructions are significantly simpler and more compact than the corresponding constructions for difference matrices given in \cref{sec:diff-matrices}, but we show that they can often be used to produce results for difference matrices that are just as powerful as those obtained in \cref{sec:diff-matrices}. In \cref{sec:new-contr-diff} we present four examples of contracted difference matrices found by computer search. These examples generate new infinite families of (contracted) difference matrices over abelian $2$-groups with more rows than previously known, from which we in turn construct larger linking systems of difference sets than previously known. In \cref{sec:open-questions} we present some open questions about (contracted) difference matrices. 
Appendix~\ref{sec:list-contr-diff} contains an example of the largest known contracted difference matrix over each abelian $2$-group of order at most~$64$. Python 3 code for checking and searching for (contracted) difference matrices is available at https://gitlab.com/koenvg/contracted-difference-matrices. \section{Basic properties} \label{sec:basic} In this section we present some basic properties of difference matrices, including nonexistence results and connections to other combinatorial designs. If $G$ is a group and $A = (a_{ij})$ is a $(G,m,\lambda)$ difference matrix, then the difference matrix property is preserved when each entry of a column of $A$ is right-multiplied by a fixed $g \in G$, because $(a_{ij}g)(a_{\ell j}g)^{-1}= a_{ij} a_{\ell j}^{-1}$. By right-multiplying all entries of each column $j$ of $A$ by $a_{0j}^{-1}$, we may therefore assume that each entry of row $0$ of $A$ is~$1_G$. The difference property of the matrix then implies that, for each $i \ge 1$, row $i$ of $A$ contains every element of $G$ exactly $\lambda$ times. We may likewise right-multiply all entries of each row $i$ by $a_{i0}^{-1}$, so that each entry of column $0$ of $A$ is also~$1_G$. The resulting matrix is in \emph{normalized form}. If a $(G,m,\lambda)$ difference matrix with $m \ge 2$ exists, then deleting one row gives a $(G,m-1,\lambda)$ difference matrix. The existence of a $(G,m,\lambda)$ difference matrix implies the existence of a resolvable orthogonal array ${\rm OA}_\lambda(m,|G|)$ and a transversal design ${\rm TD}_\lambda(m+1,|G|)$, and such a matrix is itself a generalized Bhaskar Rao design ${\rm GBRD}(m,m,\lambda|G|;G)$ \cite{colbourn-diffmatrices}. A trivial $(G,2,\lambda)$ difference matrix exists for every group $G$ and every integer $\lambda \ge 1$, for example comprising a first row containing $\lambda |G|$ copies of the identity~$1_G$ and a second row containing each element of $G$ exactly $\lambda$ times.
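The normalization steps above can be illustrated in code for an abelian $2$-group, where right-multiplication by an inverse is again bitwise XOR. The sketch below is our own illustration: the input is a $({\mathbb{Z}}_2^2,4,1)$ difference matrix deliberately scrambled by column and row shifts (operations which, in an abelian group, preserve the difference property), and normalization recovers a matrix whose row $0$ and column $0$ consist of the identity.

```python
def normalize(rows):
    """Put a difference matrix over Z_2^k (entries as ints, operation XOR)
    into normalized form: XOR each column j by a_{0j} (its own inverse),
    then each row i by the new a_{i0}, so row 0 and column 0 become 0."""
    cols = [[a ^ r0 for a, r0 in zip(row, rows[0])] for row in rows]
    return [[a ^ row[0] for a in row] for row in cols]

# A (Z_2^2, 4, 1) difference matrix, scrambled by column and row shifts.
A = [
    [1, 3, 2, 0],
    [0, 3, 1, 2],
    [3, 3, 3, 3],
    [2, 3, 0, 1],
]
N = normalize(A)
print(N)  # N == [[0,0,0,0], [0,1,2,3], [0,2,3,1], [0,3,1,2]]
```

The result is the normalized form; here it coincides with the additive multiplication table of $\GF(2^2)$.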
The number of rows $m$ in a nontrivial $(G,m,\lambda)$ difference matrix therefore satisfies $m \ge 3$, and by the following counting result it also satisfies $m \le \lambda|G|$. \begin{theorem}[Jungnickel~{\cite[Theorem~2.2]{jungnickel-diffmatrices}}] \label{thm:jungnickel-nonexistence} Let $G$ be a group, and suppose there exists a $(G,m,\lambda)$ difference matrix. Then $m \le \lambda|G|$. \end{theorem} \begin{proof} The following self-contained argument is adapted from the proof of a more general result given in \cite[Proposition~3.1]{jungnickel-diffmatrices}. By assumption there exists a $(G,m,\lambda)$ difference matrix $A= (a_{ij})$, and we may assume that $A$ is in normalized form. For $g \in G$ and $0 \le j \le \lambda|G|-1$, let $n_{gj}$ be the number of times $g$ occurs in column $j$ of $A$, so that \[ n_{gj}=\sum_{i=0}^{m-1} I[a_{ij}=g] \] where $I[X]$ is the indicator function of event~$X$. Then \begin{align} \sum_{j>0} \sum_{g \in G} n_{gj} &= \sum_{j>0} \sum_i \sum_{g \in G} I[a_{ij}=g] \nonumber \\ &= \sum_{j>0} \sum_i 1 \nonumber \\ &= (\lambda|G|-1)m \label{eqn:ngj}, \end{align} whereas \begin{align*} \sum_{j>0} \sum_{g \in G} n_{gj}^2 &= \sum_{j>0} \sum_{g \in G} \sum_{i, \ell} I[a_{ij}=a_{\ell j} =g] \\ &= \sum_{j>0} \sum_{g \in G} \bigg ( \sum_i I[a_{ij}=g] + \sum_{i \ne \ell} I[a_{ij}=a_{\ell j} =g] \bigg ) \\ &= \sum_{j>0} \sum_i \sum_{g \in G} I[a_{ij}=g] + \sum_{i \ne \ell} \sum_{j>0} \sum_{g \in G} I[a_{ij}=a_{\ell j} =g] \\ &= \sum_{j>0} \sum_i 1 + \sum_{i \ne \ell} \sum_{j>0} I[a_{ij}=a_{\ell j}] \\ &= \sum_{j>0} \sum_i 1 + \sum_{i \ne \ell} (\lambda-1) \end{align*} because the normalization of $A$ gives $a_{i0} = a_{\ell 0}$ for all distinct $i$ and $\ell$.
Therefore \begin{equation} \label{eqn:ngj2} \sum_{j>0} \sum_{g \in G} n_{gj}^2 = (\lambda|G|-1)m + m(m-1)(\lambda-1), \end{equation} and the result follows by substituting \eqref{eqn:ngj} and \eqref{eqn:ngj2} into the Cauchy-Schwarz inequality \[ \Big( \sum_{j>0} \sum_{g \in G} n_{gj}\Big)^2 \le (\lambda|G|-1) |G| \sum_{j>0} \sum_{g \in G} n_{gj}^2 \] and simplifying. \end{proof} If the upper bound $m = \lambda|G|$ in \cref{thm:jungnickel-nonexistence} is attained, then the resulting $(G,\lambda|G|,\lambda)$ difference matrix is a square matrix known as a \emph{generalized Hadamard matrix $\GH(|G|,\lambda)$ over $G$} (see \cite{delauney-hcd} for a survey). In particular, a $\GH(2,2\lambda)$ over the group $(\{-1,1\},\cdot)$ is a \emph{Hadamard matrix} of order~$4\lambda$ (see \cite{horadam-book} or \cite{craigen-hcd}, for example, for background on this much-studied topic). In all known examples of a $\GH(|G|,\lambda)$ over $G$, the group order $|G|$ is a prime power and, if $G$ is not elementary abelian, then $|G|$ is a square \cite[p.~303]{delauney-hcd}. We shall be mostly concerned with $(G,m,\lambda)$ difference matrices for which $m < \lambda|G|$, and especially those with $\lambda = 1$ because of several connections to other combinatorial objects. In particular, a $(G,m,1)$ difference matrix is equivalent to a $G$-regular set of $m-1$ mutually orthogonal Latin squares of order $|G|$ \cite[Theorem~1]{jungnickel-latin}, and to a set of $m-2$ pairwise orthogonal orthomorphisms of~$G$ \cite[p.~195]{evans-jcd}. Moreover, a crucial ingredient in a recent construction of reduced linking systems of difference sets \cite{jedwab-li-simon-arxiv} is a $(G,m,1)$ difference matrix for certain $2$-groups $G$, as described in \cref{sec:red-link}. We shall therefore pay special attention to $(G,m,1)$ difference matrices over $2$-groups.
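The two counting identities \eqref{eqn:ngj} and \eqref{eqn:ngj2} in the proof above, and the Cauchy-Schwarz step, can be verified numerically. The sketch below is our own illustration (not from the paper); it uses the normalized $({\mathbb{Z}}_2^3,5,1)$ difference matrix of the introductory example, with elements encoded as $3$-bit integers.

```python
from collections import Counter

# Normalized (Z_2^3, 5, 1) difference matrix: m = 5, lam = 1, |G| = 8.
M = [
    [0, 0, 0, 0, 0, 0, 0, 0],
    [0, 5, 7, 2, 6, 3, 1, 4],
    [0, 4, 2, 6, 1, 5, 3, 7],
    [0, 7, 6, 1, 3, 4, 5, 2],
    [0, 1, 5, 4, 7, 6, 2, 3],
]
m, lam, G = 5, 1, 8

# n_{gj} = number of occurrences of g in column j, taken over columns j > 0.
counts = [Counter(row[j] for row in M) for j in range(1, lam * G)]
s1 = sum(c[g] for c in counts for g in range(G))       # sum of n_{gj}
s2 = sum(c[g] ** 2 for c in counts for g in range(G))  # sum of n_{gj}^2

print(s1 == (lam * G - 1) * m)                            # identity (1)
print(s2 == (lam * G - 1) * m + m * (m - 1) * (lam - 1))  # identity (2)
print(s1 ** 2 <= (lam * G - 1) * G * s2)                  # Cauchy-Schwarz step
```

Here both sums equal $35$, and the Cauchy-Schwarz inequality $35^2 \le 7 \cdot 8 \cdot 35$ holds, consistent with $m \le \lambda|G|$.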
A further reason for regarding the case $\lambda=1$ as fundamental is that there are many methods for composing two difference matrices (the composition constructions of Theorems~\ref{thm:prod-jungnickel}, \ref{comp}, \ref{mm'}, and \ref{lambda+mu}), and for constructing a new difference matrix from another (the homomorphism construction of \cref{group-hom}), under all of which the value of $\lambda$ increases or remains the same; in particular, we can use \cref{lambda+mu} to produce a $(G,m,\lambda)$ difference matrix for each $\lambda > 1$ from a $(G,m,1)$ matrix. The following nonexistence result rules out, as a special case, the existence of a $(G,3,1)$ difference matrix when $G$ is a cyclic $2$-group. Indeed, we shall see (for example in \cref{thm:rec} and from \cref{small-2-grp-info}) that the currently known existence pattern for $(G,m,1)$ difference matrices over a $2$-group~$G$ of fixed order and for fixed $m$ appears to favor groups of smaller exponent and larger rank. \begin{theorem}[Hall and Paige~{\cite[Theorem~5]{hall-paige}}, Drake~{\cite[Theorem~1.10]{drake}}] \label{thm:drake-nonexistence} Let $G$ be a group containing a nontrivial cyclic Sylow $2$-subgroup, and let $\lambda$ be odd. Then there does not exist a $(G,3,\lambda)$ difference matrix. \end{theorem} \section{Constructions for difference matrices} \label{sec:diff-matrices} In this section we describe some of the principal constructive results for difference matrices, especially as they relate to the case $\lambda =1$. We sometimes omit proofs, or else describe constructions without proving they satisfy the required properties. \subsection{Finite field construction (Drake)} \label{sec:finite-field} The following construction, based on properties of a finite field, is a foundational example that shows the upper bound of \cref{thm:jungnickel-nonexistence} can be attained for every elementary abelian group.
\begin{proposition}[Drake~{\cite[Proposition~1.5]{drake}}] \label{elem-ab-1} Let $p$ be prime and let $n$ be a positive integer. Then the additive form of a multiplication table for $\GF(p^n)$ is a $({\mathbb{Z}}_{p}^{n}, p^n, 1)$ difference matrix. \end{proposition} \begin{example} \label{ex:Z22} We use \cref{elem-ab-1} to construct a $({\mathbb{Z}}_{2}^2, 4, 1)$ difference matrix. Let $\alpha$ be a root of the primitive polynomial $f(x)=x^2+x+1$ in ${\mathbb{Z}}_2[x]$, and construct $\GF(2^2)$ as ${\mathbb{Z}}_2[x]/\langle f(x) \rangle$. The additive group of $\GF(2^2)$ is ${\mathbb{Z}}_2^2$, and the multiplication table of $\GF(2^{2})$ written in additive notation gives the \mbox{$({\mathbb{Z}}_2^2, 4, 1)$} difference matrix \[ \kbordermatrix{ \cdot & 0 & 1 & \alpha & \alpha^2 \\ 0 & 00 & 00 & 00 & 00 \\ 1 & 00 & 01 & 10 & 11 \\ \alpha & 00 & 10 & 11 & 01 \\ \alpha^2 & 00 & 11 & 01 & 10 \\ }. \] \end{example} \begin{example} \label{ex:Z23} We use \cref{elem-ab-1} to construct a $({\mathbb{Z}}_{2}^3, 8, 1)$ difference matrix. Let $\alpha$ be a root of the primitive polynomial $f(x)=x^3+x+1$ in ${\mathbb{Z}}_2[x]$, and construct $\GF(2^3)$ as ${\mathbb{Z}}_2[x]/\langle f(x) \rangle$.
The additive group of $\GF(2^3)$ is ${\mathbb{Z}}_2^3$, and the multiplication table of $\GF(2^{3})$ written in additive notation gives the \mbox{$({\mathbb{Z}}_2^3, 8, 1)$} difference matrix \begin{equation*} \DrawBox[fill=thm2]{pic cs:l1}{pic cs:r1} \kbordermatrix{ \cdot & 0 & 1 & \alpha & \alpha^2 & \alpha^3 & \alpha^4 & \alpha^5 & \alpha^6 \\ 0 & 000 & 000 & 000 & 000 & 000 & 000 & 000 & 000 \\ 1 & 000 & \tikzmark{l1}001 & 010 & 100 & 011 & 110 & 111 & 101 \\ \alpha & 000 & 010 & 100 & 011 & 110 & 111 & 101 & 001 \\ \alpha^2 & 000 & 100 & 011 & 110\tikzmark{r1} & 111 & 101 & 001 & 010 \\ \alpha^3 & 000 & 011 & 110 & 111 & 101 & 001 & 010 & 100 \\ \alpha^4 & 000 & 110 & 111 & 101 & 001 & 010 & 100 & 011 \\ \alpha^5 & 000 & 111 & 101 & 001 & 010 & 100 & 011 & 110 \\ \alpha^6 & 000 & 101 & 001 & 010 & 100 & 011 & 110 & 111 \\ }. \end{equation*} (We shall refer to the shaded entries of this multiplication table in Example~\ref{ex:contr-field}.) \end{example} The next result extends the construction of \cref{elem-ab-1} to give examples with $\lambda > 1$. \begin{lemma}[{\cite[Proposition~1.8]{drake}}, {\cite[Proposition~4.4]{jungnickel-diffmatrices}}] \label{group-hom} Let $G$ and $H$ be groups. Suppose that $\phi \colon G \to H$ is a surjective homomorphism and that $A = (a_{ij})$ is a $(G, m, \lambda)$ difference matrix. Then $\phi(A) = (\phi(a_{ij}))$ is an $(H, m, \lambda |\Ker \phi|)$ difference matrix. \end{lemma} \begin{proof} The difference of two distinct rows of~$A$ contains each element of $G$ exactly $\lambda$ times, so by the First Isomorphism Theorem the difference of two distinct rows of $\phi(A)$ contains each element of $H$ exactly $\lambda |\Ker \phi|$ times. \end{proof} \begin{corollary}[{\cite[Corollary~1.9]{drake}}] \label{elem-ab} Let $p$ be prime, and let $m, n \geq 1$ and $s \geq 0$ be integers. Then there exists a $({\mathbb{Z}}_p^n, m, p^s)$ difference matrix if and only if $m \le p^{n+s}$.
\end{corollary} \begin{proof} The condition $m \le p^{n+s}$ is necessary, by \cref{thm:jungnickel-nonexistence}. To show existence for $m < p^{n+s}$, delete $p^{n+s}-m$ of the rows of the difference matrix for $m=p^{n+s}$. It remains to construct a $({\mathbb{Z}}_p^n,p^{n+s},p^s)$ difference matrix. By \cref{elem-ab-1}, there exists a $({\mathbb{Z}}_p^{n+s},p^{n+s},1)$ difference matrix. Apply \cref{group-hom} using a surjective homomorphism $\phi: {\mathbb{Z}}_p^{n+s} \to {\mathbb{Z}}_p^n$. \end{proof} \begin{example} We use \cref{group-hom} to construct a $({\mathbb{Z}}_2^2, 8, 2)$ difference matrix from the $({\mathbb{Z}}_2^3, 8, 1)$ difference matrix of \cref{ex:Z23}. Apply the canonical homomorphism ${\mathbb{Z}}_2^3 \to {\mathbb{Z}}_2^2$ to remove the third component of each element of ${\mathbb{Z}}_2^3$, giving the $({\mathbb{Z}}_2^2, 8, 2)$ difference matrix \begin{equation*} \begin{bmatrix} 00 & 00 & 00 & 00 & 00 & 00 & 00 & 00 \\ 00 & 00 & 01 & 10 & 01 & 11 & 11 & 10 \\ 00 & 01 & 10 & 01 & 11 & 11 & 10 & 00 \\ 00 & 10 & 01 & 11 & 11 & 10 & 00 & 01 \\ 00 & 01 & 11 & 11 & 10 & 00 & 01 & 10 \\ 00 & 11 & 11 & 10 & 00 & 01 & 10 & 01 \\ 00 & 11 & 10 & 00 & 01 & 10 & 01 & 11 \\ 00 & 10 & 00 & 01 & 10 & 01 & 11 & 11 \\ \end{bmatrix}. \end{equation*} \end{example} \subsection{Composition construction (Buratti)} \label{sec:composition} The following composition construction combines difference matrices in groups $H$ and $K$ to produce a difference matrix in~$H \times K$. We can use this composition to combine difference matrices in groups of prime power order, including those of \cref{sec:abelian-noncyclic}, giving a rich existence pattern. \begin{theorem}[Jungnickel~{\cite[Proposition~4.5]{jungnickel-diffmatrices}}] \label{thm:prod-jungnickel} Let $H$ and $K$ be groups. Suppose there exists an $(H,m,\lambda)$ difference matrix and a $(K,m,\mu)$ difference matrix.
Then there exists an $(H \times K,m,\lambda \mu)$ difference matrix. \end{theorem} \cref{thm:prod-jungnickel} occurs as the case $G = H \times K$ of the following more general composition construction, which combines difference matrices in groups $H$ and $G/H$ to produce a difference matrix in~$G$. \begin{theorem}[Buratti~{\cite[Theorem~2.5~and~Corollary~2.6]{buratti}}] \label{comp} Let $G$ be a group containing a normal subgroup~$H$. Suppose that $A$ is an $(H, m, \lambda)$ difference matrix and that $(b_{ij}H)$ is a $(G/H, m, \mu)$ difference matrix, and let $B = (b_{ij})$. Then the matrix each of whose rows is the Kronecker product of the corresponding rows in $A$ and $B$ is a $(G, m, \lambda \mu)$ difference matrix. \end{theorem} \begin{example} We use \cref{comp} to construct a $(G, 4, 1)$ difference matrix for $G = {\mathbb{Z}}_{4} \times {\mathbb{Z}}_{2} \times {\mathbb{Z}}_{2}$, using additive notation. Let $H = \langle 010, 200 \rangle$, so that $G / H = \langle 001+H, 100+H \rangle$ and both $H$ and $G / H$ are isomorphic to ${\mathbb{Z}}_{2} \times {\mathbb{Z}}_{2}$. The $({\mathbb{Z}}_2^2,4,1)$ difference matrix of \cref{ex:Z22} gives the $(H, 4, 1)$ difference matrix \begin{equation*} A = \begin{bmatrix} 000 & 000 & 000 & 000 \\ 000 & 010 & 200 & 210 \\ 000 & 200 & 210 & 010 \\ 000 & 210 & 010 & 200 \end{bmatrix} \end{equation*} and the $(G / H, 4, 1)$ difference matrix $(b_{ij} + H)$, where \begin{equation*} B = (b_{ij}) = \begin{bmatrix} 000 & 000 & 000 & 000 \\ 000 & 001 & 100 & 101 \\ 000 & 100 & 101 & 001 \\ 000 & 101 & 001 & 100 \end{bmatrix}.
\end{equation*} By \cref{comp}, the Kronecker product of corresponding rows in $A$ and $B$ then gives the rows of the $(G, 4, 1)$ difference matrix \begin{equation*} \begin{bmatrix} (000 + 000) & (000 + 000) & (000 + 000) & (000 + 000) & (000 + 000) & (000 + 000) & \cdots \\ (000 + 000) & (000 + 001) & (000 + 100) & (000 + 101) & (010 + 000) & (010 + 001) & \cdots \\ (000 + 000) & (000 + 100) & (000 + 101) & (000 + 001) & (200 + 000) & (200 + 100) & \cdots \\ (000 + 000) & (000 + 101) & (000 + 001) & (000 + 100) & (210 + 000) & (210 + 101) & \cdots \end{bmatrix} \end{equation*} \begin{equation*} = \setlength{\arraycolsep}{2.8pt} \begin{bmatrix} 000 & 000 & 000 & 000 & 000 & 000 & 000 & 000 & 000 & 000 & 000 & 000 & 000 & 000 & 000 & 000 \\ 000 & 001 & 100 & 101 & 010 & 011 & 110 & 111 & 200 & 201 & 300 & 301 & 210 & 211 & 310 & 311 \\ 000 & 100 & 101 & 001 & 200 & 300 & 301 & 201 & 210 & 310 & 311 & 211 & 010 & 110 & 111 & 011 \\ 000 & 101 & 001 & 100 & 210 & 311 & 211 & 310 & 010 & 111 & 011 & 110 & 200 & 301 & 201 & 300 \end{bmatrix}. \end{equation*} \end{example} Repeated application of the composition construction of \cref{comp} to difference matrices over elementary abelian groups, as given by~\cref{elem-ab-1}, produces examples over a larger set of groups. \begin{example}\label{ex:2chains} We use \cref{comp} to construct a $(G,4,1)$ difference matrix for $G = {\mathbb{Z}}_{8} \times {\mathbb{Z}}_{8} \times {\mathbb{Z}}_{4} \times {\mathbb{Z}}_{2}$. Form a chain of subgroups \[ G \supset G_1 \supset G_2, \] where $G_1 \cong {\mathbb{Z}}_4 \times {\mathbb{Z}}_4 \times {\mathbb{Z}}_2$ and $G_2 \cong {\mathbb{Z}}_2 \times {\mathbb{Z}}_2$ such that $G/G_1 \cong {\mathbb{Z}}_2^4$ and $G_1/G_2 \cong {\mathbb{Z}}_2^3$. By \cref{elem-ab-1}, there is a $(G/G_1,2^4,1)$ and a $(G_1/G_2,2^3,1)$ and a $(G_2,2^2,1)$ difference matrix.
Use \cref{comp} to combine the first $2^2$ rows of the $(G_2,2^2,1)$ and $(G_1/G_2,2^3,1)$ difference matrices to give a $(G_1,2^2,1)$ difference matrix; then combine this with the first $2^2$ rows of the $(G/G_1,2^4,1)$ difference matrix to give a $(G,2^2,1)$ difference matrix. The number of rows in this difference matrix is $\min(2^4,2^3,2^2) = 2^2$. However, by using a different chain of subgroups we can instead obtain a $(G,8,1)$ difference matrix: choose $G'_1 \cong {\mathbb{Z}}_4 \times {\mathbb{Z}}_4 \times {\mathbb{Z}}_2 \times {\mathbb{Z}}_2$ and $G'_2 \cong {\mathbb{Z}}_2 \times {\mathbb{Z}}_2 \times {\mathbb{Z}}_2$ such that $G/G'_1 \cong {\mathbb{Z}}_2^3$ and $G'_1/G'_2 \cong {\mathbb{Z}}_2^3$. Since each of $G'_2$, $G'_1/G'_2$, and $G/G'_1$ is isomorphic to ${\mathbb{Z}}_2^3$, combination under \cref{comp} produces a $(G,2^3,1)$ difference matrix. The number of rows in this difference matrix is $\min(2^3,2^3,2^3) =2^3$. \end{example} \cref{ex:2chains} shows that when \cref{elem-ab-1} and \cref{comp} are used to produce a difference matrix by choosing a chain of subgroups, some choices can result in a larger number of rows for the final difference matrix than others. \cref{prop:buratti-chain} shows how to choose a chain of subgroups that will produce the largest number of rows in the final difference matrix, and \cref{thm:rec} gives the result of making this choice. \begin{proposition}[Buratti~{\cite[Lemma~2.10]{buratti}}]\label{prop:buratti-chain} Let $p$ be prime, and let $G$ be an abelian group of order~$p^n$ and exponent $p^e$. Then $\floor{n/e}$ is the largest integer $s$ for which there is a chain of subgroups $G_i$ of $G$ satisfying \begin{equation*} G = G_0 \supset G_1 \supset \dots \supset G_r \end{equation*} for some integer $r \ge 0$, such that $G_r$ and each of the quotient groups $G_{i-1}/G_i$ is elementary abelian and of order at least~$p^s$.
The upper bound $\floor{n/e}$ is attained when each of the $\floor{n/e}$ largest direct factors of $G_{i-1}$ is reduced by a factor of $p$ in $G_i$ and when $G_r$ is the first resulting subgroup that is elementary abelian, and in this case we have \[ \mbox{$G_{i-1}/G_i \cong {\mathbb{Z}}_p^{\floor{n/e}}$ for each $i$ satisfying $1 \le i \le r$, and $G_{r} \cong {\mathbb{Z}}_p^\ell$ for some integer $\ell \ge \floor{n/e}$.} \] \end{proposition} \begin{theorem}[Buratti~{\cite[Theorem~2.11]{buratti}}] \label{thm:rec} Let $p$ be prime, and let $G$ be an abelian group of order $p^{n}$ and exponent~$p^{e}$. Then there exists a $(G, p^{\floor{n/e}}, 1)$ difference matrix. \end{theorem} \begin{proof} By \cref{prop:buratti-chain}, there is an integer $r \ge 0$ and a chain of subgroups \[ G = G_0 \supset G_1 \supset \dots \supset G_r \] such that $G_{i-1}/G_i \cong {\mathbb{Z}}_p^{\floor{n/e}}$ for each $i$ satisfying $1 \le i \le r$, and $G_r \cong {\mathbb{Z}}_p^\ell$ for some integer $\ell \ge \floor{n/e}$. By \cref{elem-ab-1} there is therefore a $(G_{i-1}/G_i,p^\floor{n/e},1)$ difference matrix for each~$i$, and by \cref{elem-ab} there is a $(G_r,p^\floor{n/e},1)$ difference matrix. Apply \cref{comp} to successive pairs $(G,H) = (G_{r-1},G_r), (G_{r-2},G_{r-1}), \dots, (G_0,G_1)$ to obtain a $(G_i, p^\floor{n/e}, 1)$ difference matrix for $i = r-1, r-2, \dots, 0$. The case $i=0$ gives the result. \end{proof} \cref{comp} can also be used to obtain the following result. \begin{theorem}[{\cite[Theorem~2.13]{buratti}}] Let $G$ be a group and let $p$ be the smallest prime divisor of $|G|$. Then there exists a $(G,p,1)$ difference matrix. \end{theorem} \subsection{Abelian noncyclic construction (Pan and Chang)} \label{sec:abelian-noncyclic} By \cref{prop:buratti-chain}, no chain of subgroups will produce a larger number of rows than $p^\floor{n/e}$ in \cref{thm:rec} under combination of \cref{elem-ab-1} and \cref{comp}.
However, we now show that larger values are sometimes possible, using the following construction in $2$-groups with large exponent. \begin{theorem}[Pan and Chang~{\cite[Lemma~3.3]{pan-chang}}] \label{thm:pc} Let $e$ be a positive integer. Then there exists a $({\mathbb{Z}}_{2^{e}} \times {\mathbb{Z}}_{2}, 4, 1)$ difference matrix. \end{theorem} \begin{proof}[Construction for \cref{thm:pc}] Define sets \begin{align*} I_{1} &= \{0, 1, \ldots, 2^{e-2} - 1\}, \\ I_{2} &= \{2^{e-2}, 2^{e-2} + 1, \ldots, 2^{e-1} - 1\}, \\ I_{1}^{*} &= I_{1} \setminus \{2^{e-2} - 1\} \cup \{2^{e-1} - 1\}, \\ I_{2}^{*} &= I_{2} \setminus \{2^{e-1} - 1\} \cup \{2^{e-2} - 1\}. \end{align*} For $0 \le i \le 2^{e-1}-1$, define length 4 column vectors over ${\mathbb{Z}}_{2^{e}} \times {\mathbb{Z}}_{2}$ \begin{align*} c_{i}^{(0)} &= \begin{cases} \begin{bmatrix} (0,0) & (2i, 0) & (4i, 0) & (-2i, 0) \end{bmatrix}^{\intercal} & \mbox{for $i \in I_{1}$}, \\[1ex] \begin{bmatrix} (0,0) & (2i, 0) & (4i, 1) & (-2i, 1) \end{bmatrix}^{\intercal} & \mbox{for $i \in I_{2}$}, \end{cases} \\ c_{i}^{(1)} &= \begin{cases} \begin{bmatrix} (0,0) & (2i, 1) & (4i+1, 0) & (-2i-1, 1) \end{bmatrix}^{\intercal} & \mbox{for $i \in I_{1}$}, \\[1ex] \begin{bmatrix} (0,0) & (2i, 1) & (4i+1, 1) & (-2i-1, 0) \end{bmatrix}^{\intercal} & \mbox{for $i \in I_{2}$}, \end{cases} \\ c_{i}^{(2)} &= \begin{cases} \begin{bmatrix} (0,0) & (2i+1, 0) & (4i+2, 0) & (-2i-1, 0) \end{bmatrix}^{\intercal} & \mbox{for $i \in I_{1}$}, \\[1ex] \begin{bmatrix} (0,0) & (2i+1, 0) & (4i+2, 1) & (-2i-1, 1) \end{bmatrix}^{\intercal} & \mbox{for $i \in I_{2}$}, \end{cases} \\ c_{i}^{(3)} &= \begin{cases} \begin{bmatrix} (0,0) & (2i+1, 1) & (4i+3, 0) & (-2i-2, 1) \end{bmatrix}^{\intercal} & \mbox{for $i \in I_{1}^{*}$}, \\[1ex] \begin{bmatrix} (0,0) & (2i+1, 1) & (4i+3, 1) & (-2i-2, 0) \end{bmatrix}^{\intercal}
& \mbox{for $i \in I_{2}^{*}$}. \end{cases} \end{align*} Define a $4 \times 2^{e-1}$ matrix \begin{equation*} D_{r} = \begin{bmatrix} c_{0}^{(r)} & c_{1}^{(r)} & \ldots & c_{2^{e-1}-1}^{(r)} \end{bmatrix} \quad \mbox{for $r = 0, 1, 2, 3$}. \end{equation*} Then a $({\mathbb{Z}}_{2^e} \times {\mathbb{Z}}_2,4,1)$ difference matrix is \begin{equation*} D = \begin{bmatrix} D_{0} \mid D_{1} \mid D_{2} \mid D_{3} \end{bmatrix}.\qedhere \end{equation*} \end{proof} \begin{example} We use the construction for Theorem~\ref{thm:pc} to produce a $({\mathbb{Z}}_{8} \times {\mathbb{Z}}_{2}, 4, 1)$ difference matrix. Set $I_{1} = \{0, 1\}$, $I_{2} = \{2, 3\}$, $I_{1}^{*} = \{0, 3\}$, $I_{2}^{*} = \{1,2\}$. Each $D_r$ is a $4 \times 4$ matrix whose columns are $c_0^{(r)}, c_1^{(r)}, c_2^{(r)}, c_3^{(r)}$, and the constructed matrix is \begin{equation*} D = \begin{bmatrix} 00 & 00 & 00 & 00 & 00 & 00 & 00 & 00 & 00 & 00 & 00 & 00 & 00 & 00 & 00 & 00 \\ 00 & 20 & 40 & 60 & 01 & 21 & 41 & 61 & 10 & 30 & 50 & 70 & 11 & 31 & 51 & 71 \\ 00 & 40 & 01 & 41 & 10 & 50 & 11 & 51 & 20 & 60 & 21 & 61 & 30 & 71 & 31 & 70 \\ 00 & 60 & 41 & 21 & 71 & 51 & 30 & 10 & 70 & 50 & 31 & 11 & 61 & 40 & 20 & 01 \end{bmatrix}. \end{equation*} \end{example} By combining the result of \cref{thm:pc} with a suitable chain of subgroups, we can construct a $(G,4,1)$ difference matrix in all abelian noncyclic $2$-groups. \begin{theorem}[Pan and Chang~{\cite[Lemma~3.4]{pan-chang}}] \label{thm:non-cyclic-2} Let $G$ be an abelian noncyclic $2$-group. Then there exists a $(G, 4, 1)$ difference matrix. \end{theorem} \begin{proof} Let $G$ have order $2^n$. The proof is by induction on $n \ge 2$. The base case $n=2$ requires a $({\mathbb{Z}}_2^2, 4, 1)$ difference matrix, which is provided by \cref{elem-ab-1}. Now assume all cases up to $n-1 \ge 2$ are true.
If $G = {\mathbb{Z}}_{2^{n-1}} \times {\mathbb{Z}}_2$, then case $n$ is true by \cref{thm:pc}, and if $G = {\mathbb{Z}}_2^3$ then case $n$ is true by \cref{elem-ab}. Otherwise we can choose a noncyclic subgroup $H$ of $G$ such that $G/H \cong {\mathbb{Z}}_2^2$. There exists a $(G/H,4,1)$ difference matrix by \cref{elem-ab-1}, and an $(H,4,1)$ difference matrix by the inductive hypothesis. Apply \cref{comp} to produce a $(G, 4, 1)$ difference matrix, proving case $n$ and completing the induction. \end{proof} Pan and Chang provide a generalization of \cref{thm:non-cyclic-2} to non-$2$-groups, and a corresponding result for the case $\lambda > 1$. \begin{theorem}[{\cite[Theorem~1.2]{pan-chang}}] \label{pan-chang-non-2} Let $G$ be an abelian noncyclic group whose Sylow $2$-subgroup is trivial or noncyclic. Then there exists a $(G,4,1)$ difference matrix. \end{theorem} \begin{theorem}[{\cite[Theorem~1.3]{pan-chang}}] Let $G$ be an abelian group and let $\lambda > 1$ be an integer. If $\lambda$ is even, or if $\lambda$ is odd and the Sylow $2$-subgroup of $G$ is trivial or noncyclic, then there exists a $(G,4,\lambda)$ difference matrix. \end{theorem} \subsection{Other constructions} \label{sec:other} Theorems~\ref{thm:drake-nonexistence} and~\ref{pan-chang-non-2} settle the existence question for a $(G,4,1)$ difference matrix over all abelian groups~$G$, except those that are cyclic of odd order. The following result concerns these groups. \begin{theorem}[Ge~{\cite[Theorem~3.12]{ge-diffmatrices}}] \label{G41-cyclic} Let $v \ge 5$ be an odd integer for which $\gcd(v,27) \ne 9$. Then there exists a $({\mathbb{Z}}_v,4,1)$ difference matrix. \end{theorem} The existence pattern for the cases not handled by \cref{G41-cyclic} (namely those for which $\gcd(v,27) = 9$) is not yet clear; it is known {\cite[Lemma~2.2]{pan-chang}} that there does not exist a $({\mathbb{Z}}_9,4,1)$ difference matrix.
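The construction for \cref{thm:pc} is explicit enough to check mechanically. The following Python sketch is our own illustration (the function names are not from the literature): it assembles the columns $c_i^{(r)}$ exactly as defined in the construction and then verifies the difference-matrix property directly. For $e=3$ it reproduces the $4 \times 16$ matrix of the example above.

```python
def pc_difference_matrix(e):
    """Columns c_i^{(r)} of the Pan--Chang (Z_{2^e} x Z_2, 4, 1)
    difference matrix, assembled into D = [D_0 | D_1 | D_2 | D_3].
    Entries are pairs (x mod 2^e, y mod 2)."""
    q, h = 2 ** e, 2 ** (e - 1)
    I1 = set(range(h // 2))                  # {0, ..., 2^{e-2} - 1}
    I1s = (I1 - {h // 2 - 1}) | {h - 1}      # I_1^*

    def col(i, r):
        # Which of the two cases applies: I_1 (or I_1^* when r = 3).
        first = i in (I1 if r < 3 else I1s)
        if r == 0:
            body = [(2 * i, 0), (4 * i, 0), (-2 * i, 0)]
        elif r == 1:
            body = [(2 * i, 1), (4 * i + 1, 0), (-2 * i - 1, 1)]
        elif r == 2:
            body = [(2 * i + 1, 0), (4 * i + 2, 0), (-2 * i - 1, 0)]
        else:
            body = [(2 * i + 1, 1), (4 * i + 3, 0), (-2 * i - 2, 1)]
        if not first:
            # The I_2 / I_2^* case flips the Z_2 part of the last two rows.
            body = [body[0]] + [(x, 1 - y) for x, y in body[1:]]
        return [(0, 0)] + [(x % q, y % 2) for x, y in body]

    cols = [col(i, r) for r in range(4) for i in range(h)]
    return [list(row) for row in zip(*cols)]

def is_difference_matrix(rows, q):
    """(Z_q x Z_2, m, 1) property: for every pair of distinct rows, the
    entrywise differences hit each group element exactly once."""
    group = [(x, y) for x in range(q) for y in range(2)]
    for a in range(len(rows)):
        for b in range(len(rows)):
            if a == b:
                continue
            diffs = [((x1 - x2) % q, (y1 - y2) % 2)
                     for (x1, y1), (x2, y2) in zip(rows[a], rows[b])]
            if sorted(diffs) != sorted(group):
                return False
    return True
```

For example, `is_difference_matrix(pc_difference_matrix(4), 16)` confirms the $e=4$ case of \cref{thm:pc} by exhaustive check.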
The following construction, like that of \cref{elem-ab-1}, is based on properties of a finite field and provides examples of generalized Hadamard matrices. \begin{theorem}[Jungnickel~{\cite[Theorem~2.4]{jungnickel-diffmatrices}}] Let $p$ be an odd prime and let $n$ be a positive integer. Then there exists a $({\mathbb{Z}}_p^n,2p^n,2)$ difference matrix. \end{theorem} The construction of \cref{comp} composes difference matrices over groups $H$ and~$G/H$. In contrast, the constructions of \cref{mm'} (based on a Kronecker product) and of~\cref{lambda+mu} (based on concatenation) both compose two difference matrices over the same group. \begin{theorem}[Shrikhande~{\cite[Theorem~3]{shrikhande}}] \label{mm'} Let $G$ be a group. Suppose there exists a $(G,m,\lambda)$ difference matrix and a $(G,m',\mu)$ difference matrix. Then there exists a $(G,mm',\lambda \mu |G|)$ difference matrix. \end{theorem} \begin{theorem}[Jungnickel~{\cite[Proposition~4.2]{jungnickel-diffmatrices}}] \label{lambda+mu} Let $G$ be a group. Suppose there exists a $(G,m,\lambda)$ difference matrix and a $(G,m,\mu)$ difference matrix. Then there exists a $(G, m, \lambda + \mu)$ difference matrix. \end{theorem} There are several constructions of difference matrices based on the existence of other types of combinatorial design, such as pairwise balanced designs, orthogonal arrays, transversal designs, affine resolvable block designs, rings, difference families, and group complementary pairs \cite{colbourn-kreher}, \cite{jungnickel-diffmatrices}, \cite{delauney-genHadamard}, \cite{delauney-diffmatrices}. \subsection{Computer search results} After submitting the original version of this paper, we became aware of the following computer search results for difference matrices in groups of order 16. These results, which were found using the viewpoint of orthogonal orthomorphisms, improve on the constructions of Sections~\ref{sec:finite-field}--\ref{sec:abelian-noncyclic}.
\begin{proposition}[Lazebnik and Thomason~{\cite[p.1556]{lazebnik-thomason}}] \label{prop:lazebnik-thomason} The largest number of rows $m$ for which a $(G, m, 1)$ difference matrix exists is \begin{enumerate}[(i)] \item $5$ for $G = {\mathbb{Z}}_8 \times {\mathbb{Z}}_2$, \item $8$ for $G = {\mathbb{Z}}_4 \times {\mathbb{Z}}_4$, \item $8$ for $G = {\mathbb{Z}}_4 \times {\mathbb{Z}}_2 \times {\mathbb{Z}}_2$. \end{enumerate} \end{proposition} \section{Reduced linking systems of difference sets} \label{sec:red-link} We use multiplicative notation for groups throughout this section. \begin{definition} Let $G$ be a group of order $v$ and let $D$ be a subset of $G$ with $k$ elements. Then $D$ is a $(v,k,\lambda,n)$-\emph{difference set in $G$} if the multiset $\{d_1 d_2^{-1}: d_1, d_2 \in D \text{ and } d_1 \ne d_2 \}$ contains every non-identity element of $G$ exactly $\lambda$ times and, by convention, $n = k - \lambda$. \end{definition} A difference set in a group $G$ is equivalent to a symmetric design with a regular automorphism group \cite{lander} (see \cite{jungnickel-survey} and its updates \cite{jungnickel-survey-update}, \cite{jungnickel-survey-update2}, for example, for background). \begin{definition} Let $G$ be a group of order $v$ and let $\ell \ge 2$. Suppose $\mathcal{R}=\{D_1, D_2, \dots, D_\ell \}$ is a collection of $\ell$ $(v,k,\lambda,n)$-difference sets in $G$. Then $\mathcal{R}$ is a \emph{reduced $(v,k,\lambda,n; \ell)$-linking system of difference sets in $G$ of size $\ell$} if there are integers $\mu,\nu$ such that for all distinct $i,j$ there is some $(v,k,\lambda,n)$-difference set $D(i,j)$ in $G$ satisfying \begin{align}\label{LinkingProperty} \sum_{d_i \in D_i} d_i \sum_{d_j \in D_j} d_j^{-1} = (\mu - \nu) \sum_{d \in D(i,j)} d + \nu \sum_{g \in G} g \quad \text{ in } {\mathbb{Z}}[G].
\end{align} \end{definition} A reduced linking system of difference sets is equivalent to a linking system of difference sets \cite[Proposition~1.7]{jedwab-li-simon-arxiv}, as introduced by Davis, Martin, and Polhill~\cite{davis-martin-polhill}. Such a system gives rise to a system of linked symmetric designs, as introduced by Cameron \cite{cameron-doubly} and studied by Cameron and Seidel~\cite{cameron-seidel}, and is equivalent to a 3-class Q-antipodal cometric association scheme~\cite{vandam}. Jedwab, Li, and Simon \cite{jedwab-li-simon-arxiv} recently showed how to construct a reduced linking system of difference sets, based on a difference matrix over a $2$-group having $\lambda=1$. \begin{theorem}[{\cite[Theorems~1.2~and~5.6]{jedwab-li-simon-arxiv}}]\label{thm:jls} Let $G$ be a group of order $2^{2d+2}$ which contains a central subgroup $E$ isomorphic to ${\mathbb{Z}}_2^{d+1}$. Let $m \ge 3$ and suppose there exists a $(G/E, m,1)$-difference matrix. Then there exists a reduced linking system of $(v,k,\lambda,n)$-difference sets in $G$ of size $m-1$, where \begin{equation} \label{eqn:hadamard} (v,k,\lambda,n) = (2^{2d+2}, 2^d(2^{d+1}-1), 2^d(2^d-1), 2^{2d}). \end{equation} \end{theorem} \begin{proof}[Construction for \cref{thm:jls}] Let $s = 2^{d+1}-1$ and let $H_1,H_2, \dots, H_s$ be the subgroups of $G$ corresponding to the hyperplanes ($d$-dimensional subspaces) of $E$ when $E$ is regarded as a vector space of dimension $d+1$ over $\GF(2)$. Let the normalized form (see \cref{sec:basic}) of the $(G/E,m,1)$-difference matrix be $B=(b_{ij}E)$ for $0 \le i \le m-1$ and $0 \le j \le s$. Choose $e_{ij} \in E$ for each $1 \le i \le m-1$ and $1 \le j \le s$ arbitrarily and let \[ D_i = \bigcup_{j=1}^s b_{ij} e_{ij} H_j \quad \text{ for } 1 \le i \le m-1. \] Then $\{D_1, D_2, \dots, D_{m-1}\}$ is a reduced linking system of $(v,k,\lambda,n)$-difference sets in $G$ of size $m-1$, with $(v,k,\lambda,n)$ as given in~\eqref{eqn:hadamard}.
\end{proof} \begin{example}[{\cite[Example~5.7]{jedwab-li-simon-arxiv}}] We use the construction for \cref{thm:jls} to produce a reduced linking system of $(16,6,2,4)$-difference sets in $G = {\mathbb{Z}}_{4} \times {\mathbb{Z}}_2 \times {\mathbb{Z}}_2 = \langle x,y,z \rangle$ of size $3$. Let $E = \langle x^2, z \rangle$, which is isomorphic to~${\mathbb{Z}}_2^2$. The subgroups of $G$ corresponding to the hyperplanes of $E$ when $E$ is regarded as a vector space of dimension $2$ over $\GF(2)$ are $H_1 = \langle x^2 \rangle, H_2 = \langle z \rangle, H_3 = \langle x^2 z \rangle$. Using the $({\mathbb{Z}}_2^2,4,1)$ difference matrix of \cref{ex:Z22}, the matrix $(b_{ij}E)$ is a $(G/E,4,1)$-difference matrix where \begin{align*} (b_{ij}) = \begin{bmatrix} 1_{G} & 1_{G} & 1_{G} & 1_{G} \\ 1_{G} & x & y & xy \\ 1_{G} & y & xy & x \\ 1_{G} & xy & x & y \end{bmatrix} \text{ for } 0 \le i,j \le 3. \end{align*} Take, for example, \begin{align*} (e_{ij}) = \begin{bmatrix} 1_E & 1_E & 1_E \\ z & x^2 & x^2 \\ z & 1_E & 1_E \end{bmatrix} \text{ for } 1 \le i,j \le 3. \end{align*} Then $\{D_1, D_2, D_3\}$ is a reduced linking system of $(16,6,2,4)$-difference sets in $G$ of size $3$, where \begin{align*} D_1 &=x H_1 \cup y H_2 \cup xy H_3, \\ D_2 &=y z H_1 \cup x^3y H_2 \cup x^3H_3, \\ D_3 &=xyz H_1 \cup x H_2 \cup y H_3. \end{align*} \end{example} The application of \cref{thm:jls} to the difference matrices specified in Theorems~\ref{thm:non-cyclic-2} and~\ref{thm:rec} gives the infinite families of linking systems of difference sets of \cref{cor:jls}~(i) and~(ii), respectively. \begin{theorem}[{\cite[Corollaries~5.8~and~5.9]{jedwab-li-simon-arxiv}}]\label{cor:jls} Let $G$ be an abelian group of order $2^{2d+2}$, rank at least $d+1$, and exponent $2^e$.
\begin{enumerate}[(i)] \item If $e \le d+1$, then there exists a reduced linking system of $(v,k,\lambda,n)$-difference sets in~$G$ of size~$3$, with $(v,k,\lambda,n)$ as given in~\eqref{eqn:hadamard}. \item If $2 \le e \le \frac{d+3}{2}$, then there exists a reduced linking system of $(v,k,\lambda,n)$-difference sets in~$G$ of size $2^{ \left\lfloor \frac{d+1}{e-1} \right\rfloor }-1$, with $(v,k,\lambda,n)$ as given in~\eqref{eqn:hadamard}. \end{enumerate} \end{theorem} \section{Contracted difference matrices} \label{sec:contr-diff-matrices} We observe that some difference matrices over an abelian $p$-group have a particularly rich structure, in that their rows can be written as ${\mathbb{Z}}_p$-linear combinations of a small set of rows, and their columns can likewise be written as ${\mathbb{Z}}_p$-linear combinations of a small set of columns. In this section we capture this structure by introducing the concept of a contracted difference matrix, and present a series of constructions for contracted difference matrices related to those for difference matrices given in \cref{sec:diff-matrices}. \begin{definition} Let $p$ be prime and let $M$ be a $k \times \ell$ matrix over an abelian $p$-group~$(G,+)$. The \emph{$p$-expansion of $M$}, written $f_p(M)$, is a $p^{k} \times p^{\ell}$ matrix over $G$ whose rows are indexed by ${\mathbf{r}} \in {\mathbb{Z}}_p^k$ and whose columns are indexed by ${\mathbf{c}} \in {\mathbb{Z}}_p^\ell$. The $({\mathbf{r}}, {\mathbf{c}})$ entry of $f_p(M)$ is the vector-matrix-vector product ${\mathbf{r}} M {\mathbf{c}}^{\intercal}$ (in which ${\mathbf{r}}$ and ${\mathbf{c}}$ are regarded as row vectors). \end{definition} The $p$-expansion of a matrix can be represented as a product of matrices, as demonstrated in \cref{ex:expansion}. \begin{definition}\label{defn:cdm} Let $p$ be prime and let $G$ be an abelian group of order $p^{n}$.
A \mbox{$k \times (n + s)$} matrix~$M$ over $G$ is a \emph{$(G, k, s)$ contracted difference matrix} if the $p^k \times p^{n+s}$ matrix $f_{p}(M)$ is a $(G, p^{k}, p^{s})$ difference matrix. \end{definition} \begin{example}\label{ex:expansion} The matrix $M = \begin{bmatrix} 01 & 10 & 20 \\ 21 & 01 & 10 \end{bmatrix}$ over $G = {\mathbb{Z}}_{4} \times {\mathbb{Z}}_{2}$ is a $(G, 2, 0)$ contracted difference matrix because its $2$-expansion \begin{align*} f_{2}(M) &= \begin{bmatrix} 0 & 0 \\ 0 & 1 \\ 1 & 0 \\ 1 & 1 \end{bmatrix} \begin{bmatrix} 01 & 10 & 20 \\ 21 & 01 & 10 \end{bmatrix} \begin{bmatrix} 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 \\ 0 & 0 & 1 & 1 & 0 & 0 & 1 & 1 \\ 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 \end{bmatrix} \\ &=\; \setlength\arraycolsep{3.3pt} \kbordermatrix{ & 000 & 001 & 010 & 011 & 100 & 101 & 110 & 111 \\ 00 & 00 & 00 & 00 & 00 & 00 & 00 & 00 & 00 \\ 01 & 00 & 10 & 01 & 11 & 21 & 31 & 20 & 30 \\ 10 & 00 & 20 & 10 & 30 & 01 & 21 & 11 & 31 \\ 11 & 00 & 30 & 11 & 01 & 20 & 10 & 31 & 21 } \end{align*} (in which the row and column indexing is shown explicitly) is a $(G, 4, 1)$ difference matrix.
\end{example} \begin{example} The matrix $M = \begin{bmatrix} 01 & 10 \\ 10 & 11 \end{bmatrix}$ over $G = {\mathbb{Z}}_{3} \times {\mathbb{Z}}_{3}$ is a $(G, 2, 0)$ contracted difference matrix, because its $3$-expansion \begin{align*} f_{3}(M) &= {\setlength\arraycolsep{3pt} \begin{bmatrix} 0 & 0 & 0 & 1 & 1 & 1 & 2 & 2 & 2 \\ 0 & 1 & 2 & 0 & 1 & 2 & 0 & 1 & 2 \end{bmatrix}^{\intercal}} \begin{bmatrix} 01 & 10 \\ 10 & 11 \end{bmatrix} {\setlength\arraycolsep{3pt} \begin{bmatrix} 0 & 0 & 0 & 1 & 1 & 1 & 2 & 2 & 2 \\ 0 & 1 & 2 & 0 & 1 & 2 & 0 & 1 & 2 \end{bmatrix}} \\ &= \kbordermatrix{ & 00 & 01 & 02 & 10 & 11 & 12 & 20 & 21 & 22 \\ 00 & 00 & 00 & 00 & 00 & 00 & 00 & 00 & 00 & 00 \\ 01 & 00 & 11 & 22 & 10 & 21 & 02 & 20 & 01 & 12 \\ 02 & 00 & 22 & 11 & 20 & 12 & 01 & 10 & 02 & 21 \\ 10 & 00 & 10 & 20 & 01 & 11 & 21 & 02 & 12 & 22 \\ 11 & 00 & 21 & 12 & 11 & 02 & 20 & 22 & 10 & 01 \\ 12 & 00 & 02 & 01 & 21 & 20 & 22 & 12 & 11 & 10 \\ 20 & 00 & 20 & 10 & 02 & 22 & 12 & 01 & 21 & 11 \\ 21 & 00 & 01 & 02 & 12 & 10 & 11 & 21 & 22 & 20 \\ 22 & 00 & 12 & 21 & 22 & 01 & 10 & 11 & 20 & 02 } \end{align*} is a $(G, 9, 1)$ difference matrix. \end{example} If a $(G,k,s)$ contracted difference matrix with $k \ge 2$ exists, then deleting one row gives a $(G,k-1,s)$ contracted difference matrix. A trivial $(G,1,s)$ contracted difference matrix exists for every abelian $p$-group $G = {\mathbb{Z}}_{p^{a_1}} \times {\mathbb{Z}}_{p^{a_2}} \times \dots \times {\mathbb{Z}}_{p^{a_r}}$ and every integer $s \ge 0$, for example comprising a single row containing the elements $\bigcup_{i=1}^r\{{\mathbf{e}}_i, p {\mathbf{e}}_i,p^2 {\mathbf{e}}_i, \dots,p^{a_{i}-1}{\mathbf{e}}_i\}$ together with $s$ copies of the identity $0_G$, where ${\mathbf{e}}_i$ is the vector of length $r$ taking the value 1 in position $i$ and 0 in all other positions.
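The $p$-expansion is easy to compute mechanically. The following Python sketch is our own illustration (function names are not from the literature): it builds $f_p(M)$ over a group given by its coordinate moduli and tests the difference-matrix property, reproducing \cref{ex:expansion}.

```python
from itertools import product

def p_expansion(M, p, mods):
    """p-expansion f_p(M): rows indexed by r in Z_p^k, columns by c in
    Z_p^l; the (r, c) entry is r M c^T, computed in the group whose
    coordinate moduli are `mods` (e.g. (4, 2) for Z_4 x Z_2)."""
    k, l = len(M), len(M[0])
    def entry(r, c):
        s = [0] * len(mods)
        for i in range(k):
            for j in range(l):
                for t, m in enumerate(mods):
                    s[t] = (s[t] + r[i] * c[j] * M[i][j][t]) % m
        return tuple(s)
    return [[entry(r, c) for c in product(range(p), repeat=l)]
            for r in product(range(p), repeat=k)]

def is_difference_matrix(rows, mods, lam=1):
    """Entrywise differences of each pair of distinct rows must cover
    the group exactly `lam` times."""
    group = list(product(*(range(m) for m in mods)))
    for a in range(len(rows)):
        for b in range(len(rows)):
            if a == b:
                continue
            diffs = [tuple((x - y) % m for x, y, m in zip(g, h, mods))
                     for g, h in zip(rows[a], rows[b])]
            if any(diffs.count(g) != lam for g in group):
                return False
    return True

# The matrix M of the first example above, over Z_4 x Z_2:
M = [[(0, 1), (1, 0), (2, 0)],
     [(2, 1), (0, 1), (1, 0)]]
```

Here `is_difference_matrix(p_expansion(M, 2, (4, 2)), (4, 2))` returns `True`, confirming that $M$ is a $(G, 2, 0)$ contracted difference matrix; the ${\mathbb{Z}}_3 \times {\mathbb{Z}}_3$ example can be checked the same way.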
For $p$ a prime and $G$ an abelian group of order $p^n$, the number of rows in a nontrivial $(G,k,s)$ contracted difference matrix therefore satisfies $k \ge 2$, and by \cref{thm:jungnickel-nonexistence} and \cref{defn:cdm} it also satisfies $k \le n+s$. We now give a method for testing whether a given matrix is a $(G,k,s)$ contracted difference matrix without calculating the $p$-expansion of the matrix in full. The method simplifies further in the case $s = 0$, which is of particular interest because a $(G,k,0)$ contracted difference matrix produces a difference matrix with $\lambda = 1$ (having special significance, as discussed in \cref{sec:basic}). \begin{lemma} \label{lem:contr-char} Let $p$ be prime and let $G$ be an abelian group of order $p^{n}$. \begin{enumerate}[(i)] \item Let $M$ be a $k \times (n+s)$ matrix over $G$. Then $M$ is a $(G, k, s)$ contracted difference matrix if and only if the set \[ \mbox{$\big \{{\mathbf{a}} M {\mathbf{c}}^\intercal : {\mathbf{c}} \in {\mathbb{Z}}_p^{n+s} \big\}$ contains each element of $G$ exactly $p^s$ times} \] for all nonzero row vectors ${\mathbf{a}} = (a_{i})$ of length $k$, where each $a_{i}$ is an integer satisfying $-p < a_{i} < p$. \item Let $M$ be a $k \times n$ matrix over $G$. Then $M$ is a $(G, k, 0)$ contracted difference matrix if and only if \begin{equation*} {\mathbf{a}} M {\mathbf{b}}^{\intercal} = 0_{G} \quad \mbox{implies} \quad {\mathbf{a}} = {\mathbf{0}} \text{ or } {\mathbf{b}} = {\mathbf{0}} \end{equation*} for all row vectors ${\mathbf{a}} = (a_{i})$ and ${\mathbf{b}} = (b_{j})$ of length $k$ and $n$ respectively, where each $a_{i}$ and $b_{j}$ is an integer satisfying $-p < a_{i}, b_{j} < p$. \end{enumerate} \end{lemma} \begin{proof} \mbox{} \begin{enumerate}[(i)] \item By definition, $M$ is a $(G,k,s)$ contracted difference matrix if and only if $f_p(M)$ is a $(G,p^k,p^s)$ difference matrix.
Since the row of $f_p(M)$ indexed by ${\mathbf{r}} \in {\mathbb{Z}}_p^k$ comprises the elements of the set $\big \{{\mathbf{r}} M {\mathbf{c}}^\intercal : {\mathbf{c}} \in {\mathbb{Z}}_p^{n+s} \big\}$, this condition holds if and only if the set $\big \{ ({\mathbf{r}}_1 - {\mathbf{r}}_2)M {\mathbf{c}}^\intercal : {\mathbf{c}} \in {\mathbb{Z}}_p^{n+s} \big \}$ contains each element of $G$ exactly $p^s$ times, for all distinct ${\mathbf{r}}_1, {\mathbf{r}}_2 \in {\mathbb{Z}}_p^k$. Set ${\mathbf{a}} = {\mathbf{r}}_1-{\mathbf{r}}_2$ to obtain the result. \item Using the case $s=0$ in the proof of part (i), we have that $M$ is a $(G,k,0)$ contracted difference matrix if and only if the set $\big \{ ({\mathbf{r}}_1 - {\mathbf{r}}_2)M {\mathbf{c}}^\intercal : {\mathbf{c}} \in {\mathbb{Z}}_p^n \big \}$ contains each element of $G$ exactly once, for all distinct ${\mathbf{r}}_1, {\mathbf{r}}_2 \in {\mathbb{Z}}_p^k$. Since $G$ has order $p^n$, this condition holds if and only if $({\mathbf{r}}_1-{\mathbf{r}}_2) M ({\mathbf{c}}_1-{\mathbf{c}}_2)^\intercal \ne 0_G$ for all distinct ${\mathbf{r}}_1, {\mathbf{r}}_2 \in {\mathbb{Z}}_p^k$ and all distinct ${\mathbf{c}}_1, {\mathbf{c}}_2 \in {\mathbb{Z}}_p^n$. Set ${\mathbf{a}} = {\mathbf{r}}_1-{\mathbf{r}}_2$ and ${\mathbf{b}} = {\mathbf{c}}_1-{\mathbf{c}}_2$ to obtain the result.\qedhere \end{enumerate} \end{proof} We now present several constructions of contracted difference matrices over abelian $p$-groups, which are related to the constructions for difference matrices given in \cref{sec:diff-matrices} as set out in \cref{tab:related-constructions}. Each of the contracted difference matrix constructions is more compact and simpler than the corresponding difference matrix construction, as can be seen by comparing the examples of this section with those of \cref{sec:diff-matrices}.
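Part (ii) of \cref{lem:contr-char} gives a test that avoids building the $p$-expansion at all: it ranges over the $O((2p-1)^{k+n})$ coefficient vectors rather than the $p^{k+n}$ entries of $f_p(M)$ and their differences. A Python sketch of this criterion (our own illustration; the function name is not from the literature):

```python
from itertools import product

def is_contracted_dm(M, p, mods):
    """Criterion of Lemma lem:contr-char (ii): a k x n matrix M over an
    abelian p-group of order p^n (coordinate moduli `mods`) is a
    (G, k, 0) contracted difference matrix iff a M b^T = 0_G forces
    a = 0 or b = 0, for coefficients a_i, b_j in {-(p-1), ..., p-1}."""
    k, n = len(M), len(M[0])
    coeffs = range(-(p - 1), p)
    for a in product(coeffs, repeat=k):
        if not any(a):
            continue                      # a = 0 is excluded
        for b in product(coeffs, repeat=n):
            if not any(b):
                continue                  # b = 0 is excluded
            s = [0] * len(mods)
            for i in range(k):
                for j in range(n):
                    for t, m in enumerate(mods):
                        s[t] = (s[t] + a[i] * b[j] * M[i][j][t]) % m
            if not any(s):                # a M b^T = 0_G with a, b nonzero
                return False
    return True
```

Applied to the matrix of \cref{ex:expansion} over ${\mathbb{Z}}_4 \times {\mathbb{Z}}_2$ (or to its ${\mathbb{Z}}_3 \times {\mathbb{Z}}_3$ companion) the test succeeds, while perturbing an entry typically breaks it.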
The proofs of some of the corresponding pairs of results are similar (particularly Corollary~\ref{elem-ab}/\ref{contr-elem-ab}, Corollary~\ref{thm:rec}/\ref{thm:contr-rec}, and Corollary~\ref{thm:non-cyclic-2}/\ref{thm:contr-non-cyclic-2}); however, the construction proving \cref{thm:contr-pc} is considerably more straightforward than that proving \cref{thm:pc}. By \cref{defn:cdm}, the major results Theorems~\ref{thm:rec} and~\ref{thm:non-cyclic-2} are direct consequences of Corollaries~\ref{thm:contr-rec} and~\ref{thm:contr-non-cyclic-2}, respectively. \begin{table} \begin{center} \caption{Related constructions in Sections~\ref{sec:diff-matrices} and~\ref{sec:contr-diff-matrices}.} \mbox{} \\ \label{tab:related-constructions} \begin{tabular}{|l|l|l|} \toprule & construction of & construction of contracted \\ & difference matrix & difference matrix \\ \midrule Finite field & \cref{elem-ab-1} & \cref{contr-elem-ab-1} \\ & \cref{elem-ab} & \cref{contr-elem-ab} \\ \midrule Homomorphism & \cref{group-hom} & \cref{contr-group-hom} \\ \midrule Composition & \cref{comp} & \cref{contr-comp} \\ & \cref{thm:rec} & \cref{thm:contr-rec} \\ \midrule Abelian noncyclic $2$-group & \cref{thm:pc} & \cref{thm:contr-pc} \\ & \cref{thm:non-cyclic-2} & \cref{thm:contr-non-cyclic-2} \\ \bottomrule \end{tabular} \end{center} \end{table} \subsection{Finite field construction} \label{sec:contr-finite-field} The constructions in this section are related to those in \cref{sec:finite-field}. \begin{proposition} \label{contr-elem-ab-1} Let $p$ be prime, let $n$ be a positive integer, and let $\alpha$ be a primitive element of~$\GF(p^n)$. Then the additive form of the multiplication table for $\{1,\alpha,\alpha^2,\dots,\alpha^{n-1}\}$ is a $({\mathbb{Z}}_{p}^{n}, n, 0)$ contracted difference matrix.
\end{proposition} \begin{proof} The matrix corresponding to the multiplication table for $\{1,\alpha,\alpha^2,\dots,\alpha^{n-1}\}$ has $(i,j)$ entry $\alpha^{i+j}$ for $0 \le i,j \le n-1$. We shall use Lemma~\ref{lem:contr-char}~(ii) to show that the additive form of this matrix is a $({\mathbb{Z}}_p^n, n, 0)$ contracted difference matrix, where we regard ${\mathbb{Z}}_p^n$ as the additive group of $\GF(p^n)$. Suppose that $0 = \sum_{i,j=0}^{n-1} a_i \alpha^{i+j} b_j = \big(\sum_{i=0}^{n-1} a_i \alpha^i\big)\big(\sum_{j=0}^{n-1} b_j \alpha^j\big)$ for integers $a_i, b_j$ satisfying $-p < a_i, b_j < p$. Since the field $\GF(p^n)$ has no zero divisors, $\sum_{i=0}^{n-1} a_i \alpha^i = 0$ or $\sum_{j=0}^{n-1} b_j \alpha^j = 0$, and since $\{1,\alpha,\alpha^2,\dots,\alpha^{n-1}\}$ is an integral basis for ${\mathbb{Z}}_p^n$ we conclude that either $a_i = 0$ for all $i$ or else $b_j = 0$ for all $j$. \end{proof} \begin{example} \label{ex:contr-field} We use \cref{contr-elem-ab-1} to construct a $({\mathbb{Z}}_2^3, 3, 0)$ contracted difference matrix. The shaded entries of \cref{ex:Z23}, constructed using a primitive element $\alpha$ of $\GF(2^3)$ satisfying $\alpha^3+\alpha+1 = 0$, comprise the additive form of the multiplication table for $\{1,\alpha,\alpha^2\}$ and so are a $({\mathbb{Z}}_2^3, 3, 0)$ contracted difference matrix. \end{example} The next result extends the construction of \cref{contr-elem-ab-1} to give examples with $s > 0$. \begin{lemma} \label{contr-group-hom} Let $p$ be prime, and let $G$ and $H$ be abelian groups of orders $p^{n+u}$ and $p^{n}$, respectively. Suppose that $\phi : G \to H$ is a surjective homomorphism and that $M=(m_{ij})$ is a $(G, k, s)$ contracted difference matrix. Then $\phi(M) = (\phi(m_{ij}))$ is an $(H, k, u + s)$ contracted difference matrix. \end{lemma} \begin{proof} Let ${\mathbf{a}} = (a_i)$ be a nonzero row vector of length $k$, where each $a_i$ is an integer satisfying $-p < a_i < p$.
By Lemma~\ref{lem:contr-char}~(i), we are given that $\big \{{\mathbf{a}} M {\mathbf{c}}^\intercal : {\mathbf{c}} \in {\mathbb{Z}}_p^{n+u+s}\big\}$ contains each element of $G$ exactly $p^s$ times and are required to prove that $\big \{{\mathbf{a}} \phi(M) {\mathbf{c}}^\intercal : {\mathbf{c}} \in {\mathbb{Z}}_p^{n+u+s}\big\}$ contains each element of $H$ exactly $p^u p^s$ times. This follows from the First Isomorphism Theorem, because $|\Ker(\phi)| = |G|/|H| = p^u$. \end{proof} \begin{corollary} \label{contr-elem-ab} Let $p$ be prime, and let $k, n \ge 1$ and $s \ge 0$ be integers. Then there exists a $({\mathbb{Z}}_{p}^{n}, k, s)$ contracted difference matrix if and only if $k \le n+s$. \end{corollary} \begin{proof} The condition $k \le n+s$ is necessary, by \cref{thm:jungnickel-nonexistence} and \cref{defn:cdm}. To show existence for $k < n+s$, delete $n+s-k$ of the rows of the contracted difference matrix for $k = n+s$. It remains to construct a $({\mathbb{Z}}_p^n,n+s,s)$ contracted difference matrix. By \cref{contr-elem-ab-1}, there exists a $({\mathbb{Z}}_p^{n+s},n+s,0)$ contracted difference matrix. Apply \cref{contr-group-hom} using a surjective homomorphism $\phi: {\mathbb{Z}}_p^{n+s} \to {\mathbb{Z}}_p^n$. \end{proof} \begin{example} We use \cref{contr-group-hom} to construct a $({\mathbb{Z}}_2^2,4,2)$ contracted difference matrix. Firstly construct a $({\mathbb{Z}}_{2}^{4}, 4, 0)$ contracted difference matrix according to \cref{contr-elem-ab-1}, using a primitive element $\alpha$ of $\GF(2^4)$ satisfying $\alpha^{4} + \alpha + 1 = 0$: \begin{equation*} \kbordermatrix{ \cdot & 1 & \alpha & \alpha^2 & \alpha^3 \\ 1 & 0001 & 0010 & 0100 & 1000 \\ \alpha & 0010 & 0100 & 1000 & 0011 \\ \alpha^2 & 0100 & 1000 & 0011 & 0110 \\ \alpha^3 & 1000 & 0011 & 0110 & 1100 }.
\end{equation*} Now apply the canonical homomorphism from ${\mathbb{Z}}_2^4$ to ${\mathbb{Z}}_2^2$ that removes the last two components of each element of ${\mathbb{Z}}_2^4$, giving the $({\mathbb{Z}}_2^2, 4, 2)$ contracted difference matrix \begin{equation*} \begin{bmatrix} 00 & 00 & 01 & 10 \\ 00 & 01 & 10 & 00 \\ 01 & 10 & 00 & 01 \\ 10 & 00 & 01 & 11 \end{bmatrix}. \end{equation*} \end{example} \subsection{Composition construction} The constructions in this section are related to those in \cref{sec:composition}. \begin{theorem} \label{contr-comp} Let $G$ be an abelian $p$-group and let $H$ be a subgroup of~$G$. Suppose that $L$ is an $(H, k, s)$ contracted difference matrix and that $(m_{ij}+H)$ is a $(G/H, k, t)$ contracted difference matrix, and let $M = (m_{ij})$. Then the matrix $\begin{bmatrix}L\mid M\end{bmatrix}$ is a $(G, k, s + t)$ contracted difference matrix. \end{theorem} \begin{proof} Let $|H| = p^n$ and $|G| = p^{n+u}$. Let ${\mathbf{a}} = (a_i)$ be a nonzero row vector of length $k$, where each $a_i$ is an integer satisfying $-p < a_i < p$. By \cref{lem:contr-char}~(i), we are given that $\big \{{\mathbf{a}} L {\mathbf{c}}^\intercal : {\mathbf{c}} \in {\mathbb{Z}}_p^{n+s}\big\}$ contains each element of $H$ exactly $p^s$ times and that $\big \{{\mathbf{a}} (m_{ij}+H) {\mathbf{d}}^\intercal : {\mathbf{d}} \in {\mathbb{Z}}_p^{u+t}\big\}$ contains each element of $G/H$ exactly $p^t$ times. Since each element of $G$ can be written uniquely as the sum of an element of $H$ and a coset representative of $H$ in $G$, it follows that $\big \{{\mathbf{a}} [L \mid M] ({\mathbf{c}},{\mathbf{d}})^\intercal : ({\mathbf{c}},{\mathbf{d}}) \in {\mathbb{Z}}_p^{n+s+u+t}\big\}$ contains each element of $G$ exactly $p^sp^t$ times. The result follows by \cref{lem:contr-char}~(i).
\end{proof} \begin{example} We use \cref{contr-comp} to construct a $(G,2,0)$ contracted difference matrix for $G = {\mathbb{Z}}_{9} \times {\mathbb{Z}}_{3} \times {\mathbb{Z}}_{3}$. Let $H = \langle 300, 010 \rangle$, so that both $H$ and $G/H$ are isomorphic to ${\mathbb{Z}}_{3} \times {\mathbb{Z}}_{3}$. Apply \cref{contr-elem-ab-1}, using a primitive element $\alpha$ of $\GF(3^2)$ satisfying $\alpha^{2} + \alpha + 2=0$, to construct the $(H, 2, 0)$ and $(G / H, 2, 0)$ contracted difference matrices \begin{equation*} \begin{bmatrix} 010 & 300 \\ 300 & 610 \end{bmatrix} \qquad\text{and}\qquad \begin{bmatrix} 001 + H & 100 + H \\ 100 + H & 201 + H \end{bmatrix}. \end{equation*} By \cref{contr-comp}, a $(G, 2, 0)$ contracted difference matrix is \begin{equation*} \begin{bmatrix} 010 & 300 & 001 & 100 \\ 300 & 610 & 100 & 201 \end{bmatrix}. \end{equation*} \end{example} The proof of the next result follows that of \cref{thm:rec}, by replacing each quoted result for a difference matrix by the corresponding result for a contracted difference matrix according to \cref{tab:related-constructions}. \begin{corollary} \label{thm:contr-rec} Let $p$ be prime, and let $G$ be an abelian group of order $p^{n}$ and exponent~$p^{e}$. Then there exists a $(G, \floor{n/e}, 0)$ contracted difference matrix. \end{corollary} \begin{example} We illustrate the proof of \cref{thm:contr-rec} with $p=2$, $n=11$, $e=3$ to construct a $(G,3,0)$ contracted difference matrix for $G = {\mathbb{Z}}_8 \times {\mathbb{Z}}_8 \times {\mathbb{Z}}_4 \times {\mathbb{Z}}_4 \times {\mathbb{Z}}_2$.
Following \cref{prop:buratti-chain}, we set $G=G_0$ and choose successive subgroups $G_1$, $G_2$, \dots so that each of the $\floor{n/e} = 3$ largest direct factors of $G_{i-1}$ is reduced by a factor of $2$ in $G_i$, and determine $G_r$ to be the first resulting subgroup that is elementary abelian: \begin{align*} G_0 &= {\mathbb{Z}}_8 \times {\mathbb{Z}}_8 \times {\mathbb{Z}}_4 \times {\mathbb{Z}}_4 \times {\mathbb{Z}}_2, \\ \langle 20000, 02000, 00200, 00010, 00001 \rangle = G_1 & \cong {\mathbb{Z}}_4 \times {\mathbb{Z}}_4 \times {\mathbb{Z}}_2 \times {\mathbb{Z}}_4 \times {\mathbb{Z}}_2, \\ \langle 40000, 04000, 00200, 00020, 00001 \rangle = G_2 & \cong {\mathbb{Z}}_2 \times {\mathbb{Z}}_2 \times {\mathbb{Z}}_2 \times {\mathbb{Z}}_2 \times {\mathbb{Z}}_2. \end{align*} This gives $r=2$ and $G_0/G_1 \cong {\mathbb{Z}}_2^3$ and $G_1/G_2 \cong {\mathbb{Z}}_2^3$ and $G_2 \cong {\mathbb{Z}}_2^5$. Use \cref{ex:contr-field} to construct the $(G_0/G_1,3,0)$ and $(G_1/G_2,3,0)$ contracted difference matrices \[ \begin{bmatrix} 00100 + G_1 & 01000 + G_1 & 10000 + G_1 \\ 01000 + G_1 & 10000 + G_1 & 01100 + G_1 \\ 10000 + G_1 & 01100 + G_1 & 11000 + G_1 \end{bmatrix} \quad \mbox{and} \quad \begin{bmatrix} 00010 + G_2 & 02000 + G_2 & 20000 + G_2 \\ 02000 + G_2 & 20000 + G_2 & 02010 + G_2 \\ 20000 + G_2 & 02010 + G_2 & 22000 + G_2 \end{bmatrix}. \] Apply \cref{contr-elem-ab-1}, using a primitive element $\alpha$ of $\GF(2^5)$ satisfying $\alpha^5+\alpha^2+1 = 0$, to construct a $(G_2,5,0)$ contracted difference matrix and then delete the last two rows to leave a $(G_2,3,0)$ contracted difference matrix \[ \begin{bmatrix} 00001 & 00020 & 00200 & 04000 & 40000 \\ 00020 & 00200 & 04000 & 40000 & 00201 \\ 00200 & 04000 & 40000 & 00201 & 04020 \end{bmatrix}.
\] Now use \cref{contr-comp} to combine the $(G_2,3,0)$ and $(G_1/G_2,3,0)$ contracted difference matrices to give a $(G_1,3,0)$ contracted difference matrix; then combine this with the $(G_0/G_1,3,0)$ contracted difference matrix to give a $(G_0,3,0)$ contracted difference matrix \[ \begin{bmatrix} 00001 & 00020 & 00200 & 04000 & 40000 & 00010 & 02000 & 20000 & 00100 & 01000 & 10000 \\ 00020 & 00200 & 04000 & 40000 & 00201 & 02000 & 20000 & 02010 & 01000 & 10000 & 01100 \\ 00200 & 04000 & 40000 & 00201 & 04020 & 20000 & 02010 & 22000 & 10000 & 01100 & 11000 \end{bmatrix}. \] \end{example} \subsection{Abelian noncyclic $2$-group construction} The constructions in this section are related to those in \cref{sec:abelian-noncyclic}. \begin{theorem} \label{thm:contr-pc} Let $e$ be a positive integer. Then there exists a $({\mathbb{Z}}_{2^{e}} \times {\mathbb{Z}}_{2}, 2, 0)$ contracted difference matrix. \end{theorem} \begin{proof} We use \cref{lem:contr-char}~(ii) to show that the $2 \times (e+1)$ matrix \begin{equation*} M = \begin{bmatrix} (0,1) & (1,0) & (2,0) & (4,0) & (8,0) & \cdots & (2^{e-1},0) \\ (2^{e-1},1) & (0,1) & (1,0) & (2,0) & (4,0) & \cdots & (2^{e-2},0) \end{bmatrix} \end{equation*} is a $({\mathbb{Z}}_{2^{e}} \times {\mathbb{Z}}_{2}, 2, 0)$ contracted difference matrix. Suppose that ${\mathbf{a}} M {\mathbf{b}}^\intercal = (0,0)$, where ${\mathbf{a}} = \begin{bmatrix} a_1 & a_2 \end{bmatrix}$ and ${\mathbf{b}} = \begin{bmatrix} b_e & b_0 & b_1 & b_2 & \dots & b_{e-1}\end{bmatrix}$, and that each $a_i, b_j \in \{-1,0,1\}$ and $(a_1, a_2) \ne (0,0)$. It is sufficient to show that each $b_j$ is~0. Expand the equation ${\mathbf{a}} M {\mathbf{b}}^\intercal = (0,0)$ to give \begin{align} 2^{e-1}a_2b_e + a_1b_0 + (2a_1+a_2)(b_1+2b_2+4b_3+\dots+2^{e-2}b_{e-1}) & \equiv 0 \pmod{2^e} \label{eqn:M1} \\ (a_1+a_2)b_e + a_2b_0 &\equiv 0 \pmod{2}.
\label{eqn:M2} \end{align} \begin{description} \item[Case 1: $a_2=0$.] By \eqref{eqn:M1}, \[ a_1(b_0+2b_1+4b_2+\dots+2^{e-1}b_{e-1}) \equiv 0 \pmod{2^e}. \] Since $(a_1,a_2)\ne(0,0)$ we have $a_1 \ne 0$, and therefore $b_0 = b_1 = \dots = b_{e-1} = 0$. Then \eqref{eqn:M2} gives $a_1 b_e \equiv 0 \pmod{2}$, so that $b_e = 0$. \item[Case 2: $a_2 \in \{-1,1\}$.] We firstly note that $a_1 b_0=0$: if $a_1=0$ this is immediate, otherwise we have $a_1 + a_2 \equiv 0 \pmod{2}$ and then from \eqref{eqn:M2} we conclude that $b_0=0$. Now substitute $a_1 b_0=0$ in \eqref{eqn:M1} to give \[ (2a_1+a_2)(b_1+2b_2+4b_3+\dots+2^{e-2}b_{e-1}+2^{e-1}b_e) \equiv 0 \pmod{2^e}. \] Since $2a_1+a_2 \in \{-3,-1,1,3\}$, this implies that \[ b_1+2b_2+4b_3+\dots+2^{e-2}b_{e-1}+2^{e-1}b_e \equiv 0 \pmod{2^e} \] and so $b_1 = b_2 = \dots = b_e = 0$. Then from \eqref{eqn:M2} we have $b_0=0$.\qedhere \end{description} \end{proof} We remark that \cref{thm:contr-pc} implies \cref{thm:pc}, and yet relies on a considerably simpler construction. The proof of the next result follows that of \cref{thm:non-cyclic-2}, by replacing each quoted result for a difference matrix by the corresponding result for a contracted difference matrix according to \cref{tab:related-constructions}. \begin{corollary} \label{thm:contr-non-cyclic-2} Let $G$ be an abelian noncyclic $2$-group. Then there exists a $(G, 2, 0)$ contracted difference matrix. \end{corollary} \section{Further examples of contracted difference matrices} \label{sec:new-contr-diff} In this section we present a $(G,3,0)$ contracted difference matrix in each of four abelian 2-groups~$G$, and derive several consequences. These examples cannot be obtained from the constructions given in \cref{sec:contr-diff-matrices}, but were instead found by computer search.
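Candidate matrices, whether constructed or found by search, can be validated mechanically. The following Python sketch (an illustration added here; the tuple encoding of group elements and the helper name are ours, not from the paper) brute-forces the characterization quoted in the proof of \cref{contr-comp}: for every admissible nonzero ${\mathbf{a}}$, the multiset $\{{\mathbf{a}} L {\mathbf{c}}^\intercal\}$ must contain each element of $G$ exactly $p^s$ times. It is applied to the matrix of \cref{thm:contr-pc} with $e=4$.

```python
from itertools import product

def is_contracted_dm(matrix, moduli, p, s):
    """Brute-force check of the multiset characterization: for every nonzero
    integer row vector a with -p < a_i < p, the multiset {a L c^T : c in Z_p^(n+s)}
    must contain each group element exactly p^s times.  Group elements are
    tuples reduced componentwise modulo `moduli`."""
    k = len(matrix)
    ncols = len(matrix[0])
    zero = tuple(0 for _ in moduli)

    def add(g, h, scale=1):
        return tuple((x + scale * y) % m for x, y, m in zip(g, h, moduli))

    order = 1
    for m in moduli:
        order *= m
    for a in product(range(-p + 1, p), repeat=k):
        if all(x == 0 for x in a):
            continue
        # v_j = sum_i a_i * L[i][j], one group element per column j
        v = [zero] * ncols
        for i, ai in enumerate(a):
            if ai:
                for j in range(ncols):
                    v[j] = add(v[j], matrix[i][j], ai)
        counts = {}
        for c in product(range(p), repeat=ncols):
            g = zero
            for cj, vj in zip(c, v):
                if cj:
                    g = add(g, vj, cj)
            counts[g] = counts.get(g, 0) + 1
        if len(counts) != order or any(t != p ** s for t in counts.values()):
            return False
    return True

# The matrix of Theorem `thm:contr-pc` for e = 4, over Z_16 x Z_2:
e = 4
M = [[(0, 1)] + [(2 ** j, 0) for j in range(e)],
     [(2 ** (e - 1), 1), (0, 1)] + [(2 ** j, 0) for j in range(e - 1)]]
print(is_contracted_dm(M, (2 ** e, 2), p=2, s=0))  # True
```

The check runs over $(2p-1)^k - 1$ vectors ${\mathbf{a}}$ and $p^{n+s}$ vectors ${\mathbf{c}}$, which is feasible for the group orders considered here.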
A principal advantage of searching for a contracted difference matrix is that an exhaustive search can be feasible even though an exhaustive search for the corresponding size of difference matrix is not, because only an exponentially smaller number of matrices need be considered: in the present case, when $G$ is an abelian group of order~$2^n$, a $(G,3,0)$ contracted difference matrix has $3n$ entries in~$G$ whereas a $(G,2^3,1)$ difference matrix has $2^{n+3}$ entries in~$G$. Even when neither exhaustive search is feasible, a random search for a $(G,3,0)$ contracted difference matrix appears experimentally to be successful far more often than a random search for a $(G,2^3,1)$ difference matrix. A further advantage of searching for a contracted difference matrix is that \cref{lem:contr-char} allows us to check a candidate contracted difference matrix efficiently, without the need to expand the matrix and check the differences between all row pairs explicitly. \begin{example}\label{ex:3new} The following $(G,3,0)$ contracted difference matrices were found by computer search: \begin{align*} \begin{bmatrix} 001 & 010 & 100 & 200 \\ 010 & 201 & 001 & 100 \\ 211 & 100 & 201 & 210 \end{bmatrix} &\mbox{ over $G={\mathbb{Z}}_4 \times {\mathbb{Z}}_2 \times {\mathbb{Z}}_2$,} \\ \begin{bmatrix} 001 & 010 & 020 & 100 & 200 \\ 021 & 001 & 211 & 010 & 100 \\ 220 & 101 & 200 & 210 & 320 \end{bmatrix} &\mbox{ over $G={\mathbb{Z}}_4 \times {\mathbb{Z}}_4 \times {\mathbb{Z}}_2$,} \\ \begin{bmatrix} 0001 & 0010 & 0100 & 1000 & 2000 \\ 0010 & 0100 & 2001 & 0001 & 1000 \\ 1001 & 2110 & 0001 & 0010 & 0101 \end{bmatrix} &\mbox{ over $G={\mathbb{Z}}_4 \times {\mathbb{Z}}_2 \times {\mathbb{Z}}_2 \times {\mathbb{Z}}_2$,} \\ \begin{bmatrix} 4101 & 2000 & 1100 & 0010 & 7111 & 0101 \\ 4010 & 0111 & 2011 & 4001 & 7001 & 0001 \\ 4000 & 7110 & 5100 & 0011 & 5010 & 2011 \end{bmatrix} &\mbox{ over $G={\mathbb{Z}}_8 \times {\mathbb{Z}}_2 \times
{\mathbb{Z}}_2 \times {\mathbb{Z}}_2$.} \end{align*} \end{example} \cref{small-2-grp-info} combines the contracted difference matrices of \cref{ex:3new} with the constructive results of Corollaries~\ref{thm:contr-rec} and~\ref{thm:contr-non-cyclic-2} to show the largest number of rows $k$ for which a $(G,k,0)$ contracted difference matrix is known to exist, for all abelian $2$-groups of order at most~$64$. The nonexistence results of Theorems~\ref{thm:jungnickel-nonexistence} and~\ref{thm:drake-nonexistence}, together with exhaustive search results, are used to indicate when the displayed value of $k$ is known to be the maximum possible. \begin{table} \begin{center} \begin{threeparttable} \caption{The largest number of rows $k$ for which a $(G,k,0)$ contracted difference matrix is known to exist, for each abelian $2$-group $G$ of order at most~$64$. An example matrix for each of these groups is given in Appendix~\ref{sec:list-contr-diff}.} \label{small-2-grp-info} \begin{tabular}{l|c|l|l} \toprule Group $G$ & \# rows $k$ & Source & Maximum possible $k$?
\\ \midrule ${\mathbb{Z}}_{2}$ & 1 & trivial & yes (\cref{thm:drake-nonexistence}) \\ \midrule ${\mathbb{Z}}_{2} \times {\mathbb{Z}}_{2}$ & 2 & \cref{thm:contr-rec} & yes (\cref{thm:jungnickel-nonexistence}) \\ ${\mathbb{Z}}_{4}$ & 1 & trivial & yes (\cref{thm:drake-nonexistence}) \\ \midrule ${\mathbb{Z}}_{2} \times {\mathbb{Z}}_{2} \times {\mathbb{Z}}_{2}$ & 3 & \cref{thm:contr-rec} & yes (\cref{thm:jungnickel-nonexistence}) \\ ${\mathbb{Z}}_{4} \times {\mathbb{Z}}_{2}$ & 2 & \cref{thm:contr-non-cyclic-2} & yes (computer search) \\ ${\mathbb{Z}}_{8}$ & 1 & trivial & yes (\cref{thm:drake-nonexistence}) \\ \midrule ${\mathbb{Z}}_{2} \times {\mathbb{Z}}_{2} \times {\mathbb{Z}}_{2} \times {\mathbb{Z}}_{2}$ & 4 & \cref{thm:contr-rec} & yes (\cref{thm:jungnickel-nonexistence}) \\ ${\mathbb{Z}}_{4} \times {\mathbb{Z}}_{2} \times {\mathbb{Z}}_{2}$ & 3 & computer search & yes (computer search) \\ ${\mathbb{Z}}_{4} \times {\mathbb{Z}}_{4}$ & 2 & \cref{thm:contr-non-cyclic-2} & yes (computer search) \\ ${\mathbb{Z}}_{8} \times {\mathbb{Z}}_{2}$ & 2 & \cref{thm:contr-non-cyclic-2} & yes (computer search) \\ ${\mathbb{Z}}_{16}$ & 1 & trivial & yes (\cref{thm:drake-nonexistence}) \\ \midrule ${\mathbb{Z}}_{2} \times {\mathbb{Z}}_{2} \times {\mathbb{Z}}_{2} \times {\mathbb{Z}}_{2} \times {\mathbb{Z}}_{2}$ & 5 & \cref{thm:contr-rec} & yes (\cref{thm:jungnickel-nonexistence}) \\ ${\mathbb{Z}}_{4} \times {\mathbb{Z}}_{2} \times {\mathbb{Z}}_{2} \times {\mathbb{Z}}_{2}$ & 3 & computer search & unknown \\ ${\mathbb{Z}}_{4} \times {\mathbb{Z}}_{4} \times {\mathbb{Z}}_{2}$ & 3 & computer search & unknown$^*$ \\ ${\mathbb{Z}}_{8} \times {\mathbb{Z}}_{2} \times {\mathbb{Z}}_{2}$ & 2 & \cref{thm:contr-non-cyclic-2} & unknown$^*$ \\ ${\mathbb{Z}}_{8} \times {\mathbb{Z}}_{4}$ & 2 & \cref{thm:contr-non-cyclic-2} & unknown$^*$ \\ ${\mathbb{Z}}_{16} \times {\mathbb{Z}}_{2}$ & 2 & \cref{thm:contr-non-cyclic-2} & unknown$^*$ \\ ${\mathbb{Z}}_{32}$ & 1 & trivial & yes 
(\cref{thm:drake-nonexistence}) \\ \midrule ${\mathbb{Z}}_{2} \times {\mathbb{Z}}_{2} \times {\mathbb{Z}}_{2} \times {\mathbb{Z}}_{2} \times {\mathbb{Z}}_{2} \times {\mathbb{Z}}_{2}$ & 6 & \cref{thm:contr-rec} & yes (\cref{thm:jungnickel-nonexistence}) \\ ${\mathbb{Z}}_{4} \times {\mathbb{Z}}_{2} \times {\mathbb{Z}}_{2} \times {\mathbb{Z}}_{2} \times {\mathbb{Z}}_{2}$ & 3$^\dagger$ & \cref{thm:contr-rec} & unknown \\ ${\mathbb{Z}}_{4} \times {\mathbb{Z}}_{4} \times {\mathbb{Z}}_{2} \times {\mathbb{Z}}_{2}$ & 3$^\dagger$ & \cref{thm:contr-rec} & unknown \\ ${\mathbb{Z}}_{4} \times {\mathbb{Z}}_{4} \times {\mathbb{Z}}_{4}$ & 3 & \cref{thm:contr-rec} & unknown \\ ${\mathbb{Z}}_{8} \times {\mathbb{Z}}_{2} \times {\mathbb{Z}}_{2} \times {\mathbb{Z}}_{2}$ & 3 & computer search & unknown \\ ${\mathbb{Z}}_{8} \times {\mathbb{Z}}_{4} \times {\mathbb{Z}}_{2}$ & 2$^\dagger$ & \cref{thm:contr-non-cyclic-2} & unknown \\ ${\mathbb{Z}}_{8} \times {\mathbb{Z}}_{8}$ & 2 & \cref{thm:contr-non-cyclic-2} & unknown \\ ${\mathbb{Z}}_{16} \times {\mathbb{Z}}_{2} \times {\mathbb{Z}}_{2}$ & 2 & \cref{thm:contr-non-cyclic-2} & unknown \\ ${\mathbb{Z}}_{16} \times {\mathbb{Z}}_{4}$ & 2 & \cref{thm:contr-non-cyclic-2} & unknown \\ ${\mathbb{Z}}_{32} \times {\mathbb{Z}}_{2}$ & 2 & \cref{thm:contr-non-cyclic-2} & unknown \\ ${\mathbb{Z}}_{64}$ & 1 & trivial & yes (\cref{thm:drake-nonexistence}) \\ \bottomrule \end{tabular} \begin{tablenotes} \item $^*$ Known by exhaustive search to be the maximum possible $k$ when one of the rows of the contracted difference matrix takes its lexicographically first feasible value (the $2$-expansion of this row consisting of a row of only $0_G$ and a row containing every element of~$G$), for example $\begin{bmatrix} 001 & 010 & 020 & 100 & 200 \end{bmatrix}$ when $G = {\mathbb{Z}}_{4} \times {\mathbb{Z}}_{4} \times {\mathbb{Z}}_{2}$. \item $^\dagger$ \cref{q:rank-r} asks: can this number be increased by one?
\end{tablenotes} \end{threeparttable} \end{center} \end{table} \subsection{A new infinite family of $(G,3,0)$ contracted difference matrices} \label{sec:new-family} We can use the four examples given in \cref{ex:3new} to construct a $(G,3,0)$ contracted difference matrix for infinitely many abelian 2-groups~$G$ that are not handled by the methods of Section~\ref{sec:contr-diff-matrices}. By \cref{contr-comp} it is sufficient to find a chain of subgroups $G_i$ of $G$ satisfying \[ G = G_0 \supset G_1 \supset \dots \supset G_r \supset G_{r+1} = \{0_G\} \] such that there exists a $(G_{i-1}/G_i,3,0)$ contracted difference matrix for each $i$ satisfying \mbox{$1 \le i \le r+1$}. By \cref{contr-elem-ab-1} and \cref{ex:3new}, this is possible in particular if each quotient group $G_{i-1}/G_i$ is isomorphic to ${\mathbb{Z}}_2^m$ for some $m \ge 3$ or to one of the groups in the set \[ \{{\mathbb{Z}}_4 \times {\mathbb{Z}}_2 \times {\mathbb{Z}}_2, \, \, {\mathbb{Z}}_4 \times {\mathbb{Z}}_4 \times {\mathbb{Z}}_2, \, \, {\mathbb{Z}}_4 \times {\mathbb{Z}}_2 \times {\mathbb{Z}}_2 \times {\mathbb{Z}}_2, \, \, {\mathbb{Z}}_8 \times {\mathbb{Z}}_2 \times {\mathbb{Z}}_2 \times {\mathbb{Z}}_2 \}. \] For example, to construct a $(G,3,0)$ contracted difference matrix for $G = {\mathbb{Z}}_{256} \times {\mathbb{Z}}_{32} \times {\mathbb{Z}}_{16} \times {\mathbb{Z}}_4 \times {\mathbb{Z}}_2$, we can use the subgroup chain $G = G_0$, $G_1 \cong {\mathbb{Z}}_{32} \times {\mathbb{Z}}_{16} \times {\mathbb{Z}}_{16} \times {\mathbb{Z}}_2$, $G_2 \cong {\mathbb{Z}}_{16} \times {\mathbb{Z}}_4 \times {\mathbb{Z}}_4 \times {\mathbb{Z}}_2$, $G_3 \cong {\mathbb{Z}}_2 \times {\mathbb{Z}}_2 \times {\mathbb{Z}}_2$, $G_4 = \{0_G\}$. It does not seem straightforward to determine the set of all such groups $G$ explicitly, but we can show the existence of a large set of such groups by a simple induction.
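The quotient building blocks from \cref{ex:3new} used in such chains can each be confirmed directly against the multiset characterization of \cref{lem:contr-char}~(i). The following Python sketch (an added illustration; the tuple encoding of group elements is ours) verifies the first matrix of \cref{ex:3new}, over ${\mathbb{Z}}_4 \times {\mathbb{Z}}_2 \times {\mathbb{Z}}_2$:

```python
from itertools import product

# First matrix of ex:3new over Z_4 x Z_2 x Z_2, digits read as tuples;
# |G| = 2^4, so a (G,3,0) contracted difference matrix has 3 rows, 4 columns.
mod = (4, 2, 2)
L = [[(0, 0, 1), (0, 1, 0), (1, 0, 0), (2, 0, 0)],
     [(0, 1, 0), (2, 0, 1), (0, 0, 1), (1, 0, 0)],
     [(2, 1, 1), (1, 0, 0), (2, 0, 1), (2, 1, 0)]]

ok = True
for a in product((-1, 0, 1), repeat=3):
    if a == (0, 0, 0):
        continue
    # v_j = sum_i a_i L[i][j]; then a L c^T = sum_j c_j v_j over c in Z_2^4
    v = [tuple(sum(a[i] * L[i][j][t] for i in range(3)) % mod[t]
               for t in range(3))
         for j in range(4)]
    hits = {}
    for c in product((0, 1), repeat=4):
        g = tuple(sum(c[j] * v[j][t] for j in range(4)) % mod[t]
                  for t in range(3))
        hits[g] = hits.get(g, 0) + 1
    # s = 0, so each of the 16 elements of G must be hit exactly once
    ok = ok and len(hits) == 16 and set(hits.values()) == {1}
print(ok)  # True
```

The same loop, with the obvious changes to `mod` and the row/column counts, applies to the other three matrices of \cref{ex:3new}.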
\begin{theorem} \label{thm:contr-3-rows-partial} Let $n \ge 3$ and $e \le n/2$ be positive integers. Then for at least one abelian group~$G$ of order $2^n$ and exponent $2^e$ there exists a $(G, 3, 0)$ contracted difference matrix. \end{theorem} \begin{proof} The proof is by induction on $n \geq 3$, using base cases $n = 3, 4, 5, 6$. The base cases are provided by entries of \cref{small-2-grp-info}. Now let $n \ge 7$ and assume that all smaller cases (in particular case $n-4 \ge 3$) are true. If $e=1$ or $2$, then case $n$ is true by \cref{thm:contr-rec} because $\floor{n/e} \ge 3$. Otherwise $e \ge 3$, and then by the inductive hypothesis there is a group $H$ of order $2^{n-4}$ and exponent $2^{e-2}$ for which there exists an $(H,3,0)$ contracted difference matrix. Take $G$ to be an abelian group of order $2^n$ and exponent $2^e$ such that $G/H \cong {\mathbb{Z}}_4\times{\mathbb{Z}}_2\times{\mathbb{Z}}_2$. By \cref{ex:3new}, there exists a $(G/H,3,0)$ contracted difference matrix. Apply \cref{contr-comp} to produce a $(G,3,0)$ contracted difference matrix, proving case $n$ and completing the induction. \end{proof} \begin{example} We construct a $(G, 3, 0)$ contracted difference matrix for $G = {\mathbb{Z}}_{16} \times {\mathbb{Z}}_{8} \times {\mathbb{Z}}_{4}$ (whereas \cref{thm:contr-rec} gives only a $(G,2,0)$ contracted difference matrix). Choose $G_1 \cong {\mathbb{Z}}_4 \times {\mathbb{Z}}_4 \times {\mathbb{Z}}_2$ to be a subgroup of $G$ such that $G/G_1 \cong {\mathbb{Z}}_4 \times {\mathbb{Z}}_2 \times {\mathbb{Z}}_2$.
Using the $({\mathbb{Z}}_4\times{\mathbb{Z}}_4\times{\mathbb{Z}}_2,3,0)$ and $({\mathbb{Z}}_4\times{\mathbb{Z}}_2\times{\mathbb{Z}}_2,3,0)$ contracted difference matrices of \cref{ex:3new} in \cref{contr-comp} we then obtain the following $(G,3,0)$ contracted difference matrix: \begin{equation*} \begin{bmatrix} 002 & 020 & 040 & 400 & 800 & 001 & 010 & 100 & 200 \\ 042 & 002 & 822 & 020 & 400 & 010 & 201 & 001 & 100 \\ 840 & 402 & 800 & 820 & (12)40 & 211 & 100 & 201 & 210 \end{bmatrix}. \end{equation*} \end{example} \cref{largest-known-size} shows how the results of \cref{thm:contr-3-rows-partial} improve those of Corollaries~\ref{thm:contr-rec} and~\ref{thm:contr-non-cyclic-2}. \newlength\celldim \newlength\fontheight \newlength\extraheight \newcommand\nl{\tabularnewline} \newcolumntype{Z}{ @{} >{\centering$} p{\celldim} <{$}@{} } \newcolumntype{S}{ @{} >{\centering \rule[-0.5\extraheight]{0pt}{\fontheight + \extraheight} \begin{minipage}{\celldim}\centering$} p{\celldim} <{$\end{minipage}} @{} } \setlength\celldim{1.8em} \settoheight\fontheight{A} \setlength\extraheight{\celldim - \fontheight} \begin{figure} \centering \begin{equation*} \setlength\arraycolsep{0pt} \begin{array}{SZ|ZZZZZZZZZZZZZZZ} \multirow{8}{*}{ $e$} & 8 & & & & & & & & 1 & \cellcolor{thm2} 2 & \cellcolor{thm2} 2 & \cellcolor{thm2} 2 & \cellcolor{thm2} \tikzmark{thm2}2 & \cellcolor{thm2} 2 & \cellcolor{thm2} 2 \nl & 7 & & & & & & & 1 & \cellcolor{thm2} 2 & \cellcolor{thm2} 2 & \cellcolor{thm2} 2 & \cellcolor{thm2} 2 & \cellcolor{thm2} 2 & \cellcolor{thm2} 2 & \cellcolor{new} 3 \nl & 6 & & & & & & 1 & \cellcolor{thm2} 2 & \cellcolor{thm2} 2 & \cellcolor{thm2} 2 & \cellcolor{thm2} 2 & \cellcolor{thm2} 2 & \cellcolor{new} 3 & \cellcolor{new} 3 & \cellcolor{new} 3\tikzmark{new} \nl & 5 & & & & & 1 & \cellcolor{thm2} 2 & \cellcolor{thm2} 2 & \cellcolor{thm2} 2 & \cellcolor{thm2} 2 & \cellcolor{new} 3 & \cellcolor{new} 3 & \cellcolor{new} 3 & \cellcolor{new} 3 &
\cellcolor{new} 3 \nl & 4 & & & & 1 & \cellcolor{thm2} 2 & \cellcolor{thm2} 2 & \cellcolor{thm2} 2 & \cellcolor{new} 3 & \cellcolor{new} 3 & \cellcolor{new} 3 & \cellcolor{new} 3 & \cellcolor{thm1} 3 & \cellcolor{thm1} 3 & \cellcolor{thm1} 3 \nl & 3 & & & 1 & \cellcolor{thm2} 2 & \cellcolor{thm2} 2 & \cellcolor{base} \textcolor{basetext}{\tikzmark{b1}3} & \cellcolor{new} 3 & \cellcolor{new} 3 & \cellcolor{thm1} 3 & \cellcolor{thm1} 3 & \cellcolor{thm1} 3 & \cellcolor{thm1} 4 & \cellcolor{thm1} 4 & \cellcolor{thm1} 4\tikzmark{thm1} \nl & 2 & & 1 & \cellcolor{thm2} 2 & \cellcolor{base} \textcolor{basetext}{\tikzmark{b2}3} & \cellcolor{base} \textcolor{basetext}{\tikzmark{b3}3} & \cellcolor{thm1} 3 & \cellcolor{thm1} 3 & \cellcolor{thm1} 4 & \cellcolor{thm1} 4 & \cellcolor{thm1} 5 & \cellcolor{thm1} 5 & \cellcolor{thm1} 6 & \cellcolor{thm1} 6 & \cellcolor{thm1} 7 \nl & 1 & 1 & \cellcolor{thm1} 2 & \cellcolor{thm1} 3 & \cellcolor{thm1} 4 & \cellcolor{thm1} 5 & \cellcolor{thm1} 6 & \cellcolor{thm1} 7 & \cellcolor{thm1} 8 & \cellcolor{thm1} 9 & \cellcolor{thm1} 10 & \cellcolor{thm1} 11 & \cellcolor{thm1} 12 & \cellcolor{thm1} 13 & \cellcolor{thm1} 14 \nl \cline{2-16} & & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 \nl \multicolumn{16}{c}{\hspace{1.3cm} n } \end{array} \end{equation*} \begin{tikzpicture}[overlay, remember picture] \node[left=0.5cm, above=3cm] at (pic cs:b2) (B) {\colorbox{base}{from \cref{ex:3new}}}; \foreach \x in {b1, b2, b3} \draw[black!50, >=stealth, dashed, ->, shorten >=4mm] (B) -- ($(pic cs:\x) + (0.5ex,0.5em)$); \node[right=5mm, above=-2mm, rotate=-90] at (pic cs:thm1) (1) {\colorbox{thm1}{\cref{thm:contr-rec}}}; \node[right=5mm, above=4mm, rotate=-90] at (pic cs:new) (3) {\colorbox{new}{~\cref{thm:contr-3-rows-partial}}}; \node[right=-4mm, above=6mm] at (pic cs:thm2) (2) {\colorbox{thm2}{\cref{thm:contr-non-cyclic-2}}}; \end{tikzpicture} \caption{The largest number of rows $k$ for which a $(G, k, 0)$ contracted difference
matrix is known to exist for at least one abelian group $G$ of order~$2^n$ and exponent~$2^e$.} \label{largest-known-size} \end{figure} \subsection{New linking systems of difference sets of size~$7$} \label{sec:link-syst-diff} Question~1 of \cite[Section~6]{jedwab-li-simon-arxiv} asks whether there are examples of difference matrices over $2$-groups having more rows than those specified in Theorems~\ref{thm:rec} and~\ref{thm:non-cyclic-2}. The difference matrices described in \cref{prop:lazebnik-thomason} provide such examples. By \cref{defn:cdm}, the four new $(G,3,0)$ contracted difference matrices of \cref{ex:3new} immediately give $(G,8,1)$ difference matrices that likewise cannot be obtained from those two theorems (giving, in particular, a second source for a $({\mathbb{Z}}_4 \times {\mathbb{Z}}_2 \times {\mathbb{Z}}_2, 8, 1)$ difference matrix). Moreover, we now describe how these four new contracted difference matrices can be used to construct reduced linking systems of difference sets of size 7 in certain abelian $2$-groups $G$ of order $2^{2d+2}$ for which the largest previously known size was~3. To construct such a reduced linking system, it is sufficient by \cref{thm:jls} for $G$ to contain a subgroup $E$ isomorphic to ${\mathbb{Z}}_2^{d+1}$ such that there is a $(G/E,3,0)$ contracted difference matrix. We may therefore choose $H$ to be any group satisfying the quotient chain condition described in \cref{sec:new-family}, and then choose $G$ to be any abelian group containing a subgroup $E$ isomorphic to ${\mathbb{Z}}_2^{d+1}$ such that $G/E \cong H$. It again does not seem straightforward to determine the set of all such groups $G$ explicitly (in the absence of an explicit condition for all suitable groups $H$), but we can show the existence of a large set of such groups~$G$. \begin{corollary}\label{cor:new-linking} Let $d \ge 2$ and let $e$ be an integer satisfying $2 \le e \le \frac{d+3}{2}$.
Then there is at least one abelian group $G$ of order $2^{2d+2}$, rank $d+1$, and exponent $2^e$, such that there is a reduced linking system of $(v,k,\lambda,n)$-difference sets in $G$ of size~$7$, with $(v,k,\lambda,n)$ as given in~\eqref{eqn:hadamard}. \end{corollary} \begin{proof} Applying \cref{thm:contr-3-rows-partial} with $n = d+1$ and $e$ replaced by $e-1$, there is at least one abelian group $H$ of order $2^{d+1}$ and exponent $2^{e-1}$ for which there exists an $(H,3,0)$ contracted difference matrix and therefore an $(H,8,1)$ difference matrix. Since $H$ has rank at most $d+1$, there is therefore a group $G$ of order $2^{2d+2}$, rank $d+1$, and exponent $2^e$, containing a subgroup $E \cong {\mathbb{Z}}_{2}^{d+1}$ such that $G/E \cong H$. The result then follows from \cref{thm:jls}. \end{proof} The reduced linking systems of difference sets given by \cref{cor:jls}~(i) all have size 3, and those given by \cref{cor:jls}~(ii) (under the stated condition $2 \le e \le \frac{d+3}{2}$) have size~3 when $\frac{d+1}{e-1} < 3$. \cref{cor:new-linking} therefore provides reduced linking systems of difference sets larger than those previously known, for all $d \ge 2$ and all~$e$ satisfying $\frac{d+4}{3} < e \le \frac{d+3}{2}$. \begin{example} The largest known size of a reduced linking system of $(256,120,56,64)$-difference sets in an abelian group of order $256$ and exponent $8$ was given as~$3$ in \cite[Table~3]{jedwab-li-simon-arxiv}.
Using the above procedure, we can increase this size to $7$ for each of the groups \[ {\mathbb{Z}}_8 \times {\mathbb{Z}}_2^5, \quad {\mathbb{Z}}_8 \times {\mathbb{Z}}_4 \times {\mathbb{Z}}_2^3, \quad {\mathbb{Z}}_8 \times {\mathbb{Z}}_4^2 \times {\mathbb{Z}}_2 \] (but not for the group ${\mathbb{Z}}_8^2 \times {\mathbb{Z}}_2^2$) by choosing a subgroup $E$ isomorphic to ${\mathbb{Z}}_2^4$ such that $G/E \cong {\mathbb{Z}}_4 \times {\mathbb{Z}}_2 \times {\mathbb{Z}}_2$, and using the $({\mathbb{Z}}_4 \times {\mathbb{Z}}_2 \times {\mathbb{Z}}_2,3,0)$ contracted difference matrix of \cref{ex:3new} to provide a $({\mathbb{Z}}_4 \times {\mathbb{Z}}_2 \times {\mathbb{Z}}_2,8,1)$ difference matrix for use in \cref{thm:jls}. \end{example} We remark that additional reduced linking systems of difference sets of size~7 can be obtained by applying the composition construction of \cref{comp} to the $({\mathbb{Z}}_4 \times {\mathbb{Z}}_4, 8, 1)$ difference matrix of \cref{prop:lazebnik-thomason}~(ii) to produce a further infinite family of difference matrices with 8 rows in abelian 2-groups. This infinite family does not arise from the analogous construction for contracted difference matrices, because there is no $({\mathbb{Z}}_4 \times {\mathbb{Z}}_4, 3, 0)$ contracted difference matrix (see \cref{small-2-grp-info}). \section{Open Questions} \label{sec:open-questions} We conclude with three open questions. The first question concerns how effective the concept of a contracted difference matrix is in explaining the existence pattern of difference matrices in abelian $p$-groups. It is motivated by comparing the results of \cref{elem-ab-1}, \cref{thm:rec}, \cref{thm:non-cyclic-2} for difference matrices with those of \cref{contr-elem-ab-1}, \cref{thm:contr-rec}, \cref{thm:contr-non-cyclic-2}, respectively, for contracted difference matrices.
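Equation~\eqref{eqn:hadamard} is not reproduced in this excerpt; assuming it gives the standard Hadamard difference-set parameters $(v,k,\lambda,n) = (2^{2d+2},\, 2^{2d+1}-2^d,\, 2^{2d}-2^d,\, 2^{2d})$, which is consistent with the $(256,120,56,64)$ example above ($d=3$), the following short sketch (an added illustration; the function name is ours) checks the usual difference-set counting identities for these parameters:

```python
def hadamard_params(d):
    """Hadamard difference-set parameters for a group of order 2^(2d+2)
    (assumed form of the equation labelled eqn:hadamard, which is not
    reproduced in this excerpt)."""
    v = 2 ** (2 * d + 2)
    k = 2 ** (2 * d + 1) - 2 ** d
    lam = 2 ** (2 * d) - 2 ** d
    n = 2 ** (2 * d)
    return v, k, lam, n

for d in range(2, 8):
    v, k, lam, n = hadamard_params(d)
    # Basic (v,k,lambda)-difference-set identity, and order n = k - lambda
    assert k * (k - 1) == lam * (v - 1)
    assert n == k - lam

print(hadamard_params(3))  # (256, 120, 56, 64), matching the example above
```

Writing $q = 2^d$, the identity $k(k-1) = \lambda(v-1)$ reduces to $(2q^2-q-1) = (q-1)(2q+1)$, which holds for all $q$.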
\begin{question} \label{q:contr-implies-diff} For which primes $p$, abelian $p$-groups $G$, and integers $k \ge 1$ does there exist a $(G, p^k, 1)$ difference matrix but not a $(G, k, 0)$ contracted difference matrix? \end{question} \noindent The only example currently known that satisfies the conditions of \cref{q:contr-implies-diff} is given by $p=2$ and $G = {\mathbb{Z}}_4 \times {\mathbb{Z}}_4$ and $k=3$, from \cref{prop:lazebnik-thomason}~(ii) and the exhaustive search result for ${\mathbb{Z}}_4 \times {\mathbb{Z}}_4$ in \cref{small-2-grp-info}. The second question concerns the largest number of rows of a difference matrix over an abelian $p$-group. \begin{question} \label{q:log} For which primes $p$ and abelian $p$-groups $G$ is the largest integer $m$ for which a $(G, m, 1)$ difference matrix exists not a power of~$p$? \end{question} \noindent The only example currently known that satisfies the conditions of \cref{q:log} is given by $p=2$ and $G = {\mathbb{Z}}_8 \times {\mathbb{Z}}_2$ and $m=5$, from \cref{prop:lazebnik-thomason}~(i). We also know that $p=3$ and $G={\mathbb{Z}}_9 \times {\mathbb{Z}}_3$ and $k=2$ satisfy the conditions of at least one of Questions~\ref{q:contr-implies-diff} and~\ref{q:log}: by \cref{pan-chang-non-2} there exists a $({\mathbb{Z}}_9 \times {\mathbb{Z}}_3, 4, 1)$ difference matrix, but exhaustive search shows there is no $({\mathbb{Z}}_9 \times {\mathbb{Z}}_3, 2, 0)$ contracted difference matrix. The third question follows from the observation that the currently known existence pattern for contracted difference matrices over $2$-groups of fixed order, as illustrated in \cref{small-2-grp-info}, appears to favor groups of smaller exponent and larger rank. \begin{question} \label{q:rank-r} Let $G$ be an abelian group of order $2^{n}$, exponent at most $2^{\frac{n}{r-1}}$, and rank at least~$r$. Does there exist a $(G, r, 0)$ contracted difference matrix?
\end{question} A positive answer to \cref{q:rank-r} for the case $r=2$ is given by \cref{thm:contr-non-cyclic-2}; and a positive answer for the case $r=3$ would allow the words ``for at least one abelian group'' in \cref{thm:contr-3-rows-partial} to be replaced by ``for all abelian groups'', provided that a minimum rank of $3$ is specified. \begin{appendices} \section{Contracted difference matrix examples} \label{sec:list-contr-diff} This appendix gives an example $(G,k,0)$ contracted difference matrix containing the largest known number of rows~$k$ (as stated in \cref{small-2-grp-info}), for each abelian $2$-group $G$ of order at most~$64$. \begin{longtable}{>{$}l<{$}>{$}l<{$}} \label{tab:small-2-grp-examples} \\ \toprule \text{Group} & \text{Matrix} \\ \midrule \addlinespace {\mathbb{Z}}_{2} & \begin{bmatrix} 1 \end{bmatrix} \\ \addlinespace \midrule \addlinespace {\mathbb{Z}}_{2} \times {\mathbb{Z}}_{2} & \begin{bmatrix} 01 & 10 \\ 10 & 11 \end{bmatrix} \\ \addlinespace {\mathbb{Z}}_{4} & \begin{bmatrix} 1 & 2 \end{bmatrix} \\ \addlinespace \midrule \addlinespace {\mathbb{Z}}_{2} \times {\mathbb{Z}}_{2} \times {\mathbb{Z}}_{2} & \begin{bmatrix} 001 & 010 & 100 \\ 010 & 100 & 011 \\ 100 & 011 & 110 \end{bmatrix} \\ \addlinespace {\mathbb{Z}}_{4} \times {\mathbb{Z}}_{2} & \begin{bmatrix} 01 & 10 & 20 \\ 21 & 01 & 10 \end{bmatrix} \\ \addlinespace {\mathbb{Z}}_{8} & \begin{bmatrix} 1 & 2 & 4 \end{bmatrix} \\ \addlinespace \midrule \addlinespace {\mathbb{Z}}_{2} \times {\mathbb{Z}}_{2} \times {\mathbb{Z}}_{2} \times {\mathbb{Z}}_{2} & \begin{bmatrix} 0001 & 0010 & 0100 & 1000 \\ 0010 & 0100 & 1000 & 0011 \\ 0100 & 1000 & 0011 & 0110 \\ 1000 & 0011 & 0110 & 1100 \end{bmatrix} \\ \addlinespace {\mathbb{Z}}_{4} \times {\mathbb{Z}}_{2} \times {\mathbb{Z}}_{2} & \begin{bmatrix} 001 & 010 & 100 & 200 \\ 010 & 201 & 001 & 100 \\ 211 & 100 & 201 & 210 \end{bmatrix} \\ \addlinespace {\mathbb{Z}}_{4} \times
{\mathbb{Z}}_{4} & \begin{bmatrix} 01 & 10 & 02 & 20 \\ 10 & 11 & 20 & 22 \end{bmatrix} \\ \addlinespace {\mathbb{Z}}_{8} \times {\mathbb{Z}}_{2} & \begin{bmatrix} 01 & 10 & 20 & 40 \\ 41 & 01 & 10 & 20 \end{bmatrix} \\ \addlinespace {\mathbb{Z}}_{16} & \begin{bmatrix} 1 & 2 & 4 & 8 \end{bmatrix} \\ \addlinespace \midrule \addlinespace {\mathbb{Z}}_{2} \times {\mathbb{Z}}_{2} \times {\mathbb{Z}}_{2} \times {\mathbb{Z}}_{2} \times {\mathbb{Z}}_{2} & \begin{bmatrix} 00001 & 00010 & 00100 & 01000 & 10000 \\ 00010 & 00100 & 01000 & 10000 & 00101 \\ 00100 & 01000 & 10000 & 00101 & 01010 \\ 01000 & 10000 & 00101 & 01010 & 10100 \\ 10000 & 00101 & 01010 & 10100 & 01101 \end{bmatrix} \\ \addlinespace {\mathbb{Z}}_{4} \times {\mathbb{Z}}_{2} \times {\mathbb{Z}}_{2} \times {\mathbb{Z}}_{2} & \begin{bmatrix} 0001 & 0010 & 0100 & 1000 & 2000 \\ 0010 & 0100 & 2001 & 0001 & 1000 \\ 1001 & 2110 & 0001 & 0010 & 0101 \end{bmatrix} \\ \addlinespace {\mathbb{Z}}_{4} \times {\mathbb{Z}}_{4} \times {\mathbb{Z}}_{2} & \begin{bmatrix} 001 & 010 & 020 & 100 & 200 \\ 021 & 001 & 211 & 010 & 100 \\ 220 & 101 & 200 & 210 & 320 \end{bmatrix}\\ \addlinespace {\mathbb{Z}}_{8} \times {\mathbb{Z}}_{2} \times {\mathbb{Z}}_{2} & \begin{bmatrix} 010 & 100 & 200 & 001 & 400 \\ 210 & 010 & 100 & 400 & 401 \end{bmatrix} \\ \addlinespace {\mathbb{Z}}_{8} \times {\mathbb{Z}}_{4} & \begin{bmatrix} 01 & 10 & 20 & 02 & 40 \\ 21 & 01 & 10 & 40 & 42 \end{bmatrix} \\ \addlinespace {\mathbb{Z}}_{16} \times {\mathbb{Z}}_{2} & \begin{bmatrix} 01 & 10 & 20 & 40 & 80 \\ 81 & 01 & 10 & 20 & 40 \end{bmatrix} \\ \addlinespace {\mathbb{Z}}_{32} & \begin{bmatrix} 1 & 2 & 4 & 8 & (16) \end{bmatrix} \\ \addlinespace \midrule \addlinespace {\mathbb{Z}}_{2} \times {\mathbb{Z}}_{2} \times {\mathbb{Z}}_{2} \times {\mathbb{Z}}_{2} \times {\mathbb{Z}}_{2} \times {\mathbb{Z}}_{2} & \setlength\arraycolsep{3.2pt} \begin{bmatrix}
000001 & 000010 & 000100 & 001000 & 010000 & 100000 \\ 000010 & 000100 & 001000 & 010000 & 100000 & 000011 \\ 000100 & 001000 & 010000 & 100000 & 000011 & 000110 \\ 001000 & 010000 & 100000 & 000011 & 000110 & 001100 \\ 010000 & 100000 & 000011 & 000110 & 001100 & 011000 \\ 100000 & 000011 & 000110 & 001100 & 011000 & 110000 \end{bmatrix} \\ \addlinespace {\mathbb{Z}}_{4} \times {\mathbb{Z}}_{2} \times {\mathbb{Z}}_{2} \times {\mathbb{Z}}_{2} \times {\mathbb{Z}}_{2} & \setlength\arraycolsep{3.2pt} \begin{bmatrix} 00100 & 01000 & 20000 & 00001 & 00010 & 10000 \\ 01000 & 20000 & 01100 & 00010 & 10000 & 00011 \\ 20000 & 01100 & 21000 & 10000 & 00011 & 10010 \end{bmatrix} \\ \addlinespace {\mathbb{Z}}_{4} \times {\mathbb{Z}}_{4} \times {\mathbb{Z}}_{2} \times {\mathbb{Z}}_{2} & \begin{bmatrix} 0010 & 0200 & 2000 & 0001 & 0100 & 1000 \\ 0200 & 2000 & 0210 & 0100 & 1000 & 0101 \\ 2000 & 0210 & 2200 & 1000 & 0101 & 1100 \end{bmatrix} \\ \addlinespace {\mathbb{Z}}_{4} \times {\mathbb{Z}}_{4} \times {\mathbb{Z}}_{4} & \begin{bmatrix} 002 & 020 & 200 & 001 & 010 & 100 \\ 020 & 200 & 022 & 010 & 100 & 011 \\ 200 & 022 & 220 & 100 & 011 & 110 \end{bmatrix} \\ \addlinespace {\mathbb{Z}}_{8} \times {\mathbb{Z}}_{2} \times {\mathbb{Z}}_{2} \times {\mathbb{Z}}_{2} & \begin{bmatrix} 4101 & 2000 & 1100 & 0010 & 7111 & 0101 \\ 4010 & 0111 & 2011 & 4001 & 7001 & 0001 \\ 4000 & 7110 & 5100 & 0011 & 5010 & 2011 \end{bmatrix} \\ \addlinespace {\mathbb{Z}}_{8} \times {\mathbb{Z}}_{4} \times {\mathbb{Z}}_{2} & \begin{bmatrix} 020 & 400 & 010 & 200 & 001 & 100 \\ 400 & 420 & 200 & 210 & 100 & 101 \end{bmatrix} \\ \addlinespace {\mathbb{Z}}_{8} \times {\mathbb{Z}}_{8} & \begin{bmatrix} 04 & 40 & 02 & 20 & 01 & 10 \\ 40 & 44 & 20 & 22 & 10 & 11 \end{bmatrix} \\ \addlinespace {\mathbb{Z}}_{16} \times {\mathbb{Z}}_{2} \times {\mathbb{Z}}_{2} & \begin{bmatrix} 010 & 800 & 001 & 100 & 200 & 400 \\ 800 & 810 & 401 & 001 & 100 & 200
\end{bmatrix} \\ \addlinespace {\mathbb{Z}}_{16} \times {\mathbb{Z}}_{4} & \begin{bmatrix} 02 & 80 & 01 & 10 & 20 & 40 \\ 80 & 82 & 41 & 01 & 10 & 20 \end{bmatrix} \\ \addlinespace {\mathbb{Z}}_{32} \times {\mathbb{Z}}_{2} & \begin{bmatrix} 01 & 10 & 20 & 40 & 80 & (16)0 \\ (16)1 & 01 & 10 & 20 & 40 & 80 \end{bmatrix} \\ \addlinespace {\mathbb{Z}}_{64} & \begin{bmatrix} 1 & 2 & 4 & 8 & (16) & (32) \end{bmatrix} \\ \addlinespace \bottomrule \end{longtable} \end{appendices} \end{document}
\begin{document} \title{ON $(g,\varphi)$-CONTRACTION IN FUZZY METRIC SPACES} \begin{abstract} In this paper, we give a generalization of Hicks type contractions and Golet type contractions on fuzzy metric spaces. We prove some fixed point theorems for this new type of contraction mapping on fuzzy metric spaces. \end{abstract} \section{Introduction and preliminaries} The notion of fuzzy sets was introduced by Zadeh \cite{Zad}. Since then, various concepts of fuzzy metric spaces were considered in \cite{Geo, Kal, Mih}. Many authors have studied fixed point theory in fuzzy metric spaces. The most interesting references are \cite{Fa1, Fa2, Gre, Had, Pap, Raz}. Some works on intuitionistic fuzzy metric/normed spaces have been carried out intensively in \cite{Raf, Sad}. In the sequel, we shall adopt the usual terminology, notation and conventions of fuzzy metric spaces introduced by George and Veeramani \cite{Geo}. \begin{definition}A binary operation $\ast \colon [0,1] \times [0,1]\to [0,1]$ is a continuous $t$-norm if $([0,1],\ast)$ is a topological monoid with unit 1 such that $a \ast b \leq c \ast d$ whenever $a \leq c$ and $b \leq d$ for $a,b,c,d \in [0,1]$. \end{definition} \begin{definition}A {\em fuzzy metric space} (briefly $FM$-space) is a triple $(X,M,\ast)$ where $X$ is an arbitrary set, $\ast$ is a continuous $t$-norm and $M\colon X\times X\times [0,+\infty)\to [0,1]$ is a (fuzzy) mapping satisfying the following conditions:\\ For all $x,y,z\in X$ and $s,t>0$,\\ (FM1) $M(x,y,0)=0$;\\ (FM2) $M(x,y,t)>0$;\\ (FM3) $M(x,y,t)=1$ if and only if $x=y$;\\ (FM4) $M(x,y,t)=M(y,x,t)$;\\ (FM5) $M(x,z,t+s)\geq M(x,y,t)\ast M(y,z,s)$;\\ (FM6) $M(x,y,\cdot)\colon (0,+\infty)\to [0,1]$ is continuous for all $x,y \in X$. \end{definition} \begin{lemma}$M(x,y,\cdot)$ is a nondecreasing function for all $x,y\in X$. \end{lemma} Let $U_M$ denote the $M$-uniformity, the uniformity generated by the fuzzy metric $M$.
Then the family \[ \{U_{\epsilon,\lambda}\colon\epsilon > 0,\lambda \in (0,1)\}\] where \[U_{\epsilon,\lambda}=\{(x,y)\in X\times X \colon M(x,y,\epsilon)>1-\lambda\},\] and \[\{U_{\lambda}\colon \lambda > 0\}\] where \[U_{\lambda}=\{(x,y)\in X \times X \colon M(x,y,\lambda)>1-\lambda\}\] are bases for this uniformity. \begin{remark}Every continuous $t$-norm $\ast$ satisfies \[\sup_{a<1}(a\ast a)=1,\] which ensures the existence of the $M$-uniformity on $X$. \end{remark} \begin{lemma}\label{lem} If $\ast$ is a continuous $t$-norm and $r\in (0,1)$, then there is an $s\in (0,1)$ such that $s\ast s\geq r$. \end{lemma} \begin{definition}In an $FM$-space $(X,M,\ast)$, the mapping $f\colon X\to X$ is said to be {\em fuzzy continuous} at $x_0$ if and only if for every $t>0$, there exists $s>0$ such that \[M(x_{0},y,s)>1-s\Rightarrow M(fx_{0},fy,t)>1-t.\] \end{definition} The mapping $f\colon X\to X$ is fuzzy continuous if and only if it is fuzzy continuous at every point in $X$. \begin{definition}Let $(X,M,\ast)$ be an $FM$-space and $U_{M}$ be the $M$-uniformity induced by the fuzzy metric $M$. The sequence $\{x_{n}\}$ in $X$ is said to be {\em fuzzy convergent\/} to $x\in X$ (in short, $x_{n}\to x$) if and only if \[\lim_{n\to\infty}M(x_{n},x,t)=1\] for all $t>0$. \end{definition} \begin{definition} A sequence $\{x_n\}$ in an $FM$-space $(X,M,\ast)$ is a fuzzy Cauchy sequence if and only if \[\lim_{m,n\to +\infty}M(x_{m},x_{n},t)=1\] for every $t>0$. \end{definition} \begin{definition} An $FM$-space $(X,M,\ast)$ is said to be {\em fuzzy compact\/} if every sequence $\{x_{n}\}$ in $X$ has a subsequence $\{x_{n_k}\}$ such that $x_{n_k}\to x\in X$. \end{definition} A fuzzy metric space in which every fuzzy Cauchy sequence is convergent is called a {\em complete fuzzy metric space\/}. \begin{definition} Let $(X,M,\ast)$ be an $FM$-space and $A$ be a nonempty subset of $X$.
The fuzzy closure of $A$, denoted by $\overline A$, is the set \[\overline A=\{y\in X\colon \forall \epsilon >0, \forall \lambda\in (0,1), \exists x\in A \text{ with } M(x,y,\epsilon)>1-\lambda\}.\] \end{definition} \section{Main Results} \noindent In our fixed point theorem we consider the $FM$-space $(X,M,\ast)$ endowed with the $M$-uniformity. \begin{definition}\label{def1}Let $\Phi$ be the class of all mappings $\varphi\colon\mathbb R^{+}\to\mathbb R^{+}$ ($\mathbb R^{+} = [0,+\infty)$) with the following properties: \begin{enumerate} \item $\varphi$ is nondecreasing, \item $\varphi$ is right continuous, \item $\lim_{n\to +\infty}\varphi ^{n}(t)=0$ for every $t>0$. \end{enumerate} \end{definition} \begin{remark}\label{rem1}(a) It is easy to see that under these conditions, the function $\varphi$ also satisfies $\varphi(t)< t$ for all $t>0$ and therefore $\varphi(0)= 0$.\\ (b) By property (iii), we mean that for every $\epsilon > 0$ and $\lambda\in (0,1)$ there exists an integer $N(\epsilon,\lambda)$ such that $\varphi^{n}(t) \leq \min\{\epsilon,\lambda\}$ whenever $n \geq N(\epsilon,\lambda)$. \end{remark} In \cite{Gol}, Golet introduced $g$-contraction mappings in probabilistic metric spaces. In the following definition, we give the $g$-contraction in the fuzzy setting. \begin{definition}\label{def2}Let $f$ and $g$ be mappings defined on an $FM$-space $(X,M,\ast)$ and suppose that $g$ is bijective. The mapping $f$ is called a fuzzy {\em $g$-contraction\/} if there exists a $k\in (0,1)$ such that for every $x,y\in X$, $t>0$, \begin{eqnarray}\label{add1} M(gx,gy,t)>1-t \Rightarrow M(fx,fy,kt)>1-kt. \end{eqnarray} \end{definition} By considering a mapping $\varphi \in \Phi$ as given in Definition~\ref{def1}, we generalize the condition (\ref{add1}) in Definition~\ref{def2} as follows. \begin{definition}\label{def3} Let $(X,M,\ast)$ be an $FM$-space and $\varphi \in \Phi$.
We say that the mapping $f\colon X \to X$ is a fuzzy {\em $(g,\varphi)$-contraction\/} if there exists a bijective function $g\colon X \to X$ such that for every $x,y\in X$ and for every $t>0$, \begin{eqnarray}\label{add2} M(gx,gy,t)>1-t \Rightarrow M(fx,fy,\varphi (t))>1- \varphi (t). \end{eqnarray} \end{definition} \noindent Note that if $\varphi (t) = kt$ for $k \in (0,1)$, $t>0$, then the condition (\ref{add2}) is exactly the fuzzy $g$-contraction due to Golet \cite{Gol}. If $g$ is the identity function, then (\ref{add2}) represents the fuzzy $(\varphi-H)$-contraction according to Mihet \cite{Mih}. Hence the $(g,\varphi)$-contraction generalizes both Golet's and Mihet's contraction principles in fuzzy metric spaces. The following lemma is reproduced from \cite{Gol} to suit our purposes in fuzzy metric spaces. \begin{lemma}\label{lem1}Let $g$ be an injective mapping on $(X,M,\ast)$.\\ (i) If $M^{g}(x,y,t)=M(gx,gy,t)$, then $(X,M^{g},\ast)$ is a fuzzy metric space.\\ (ii) If $X_{1}=g(X)$ and $(X,M,\ast)$ is a complete $FM$-space, then $(X,M^{g},\ast)$ is also a complete $FM$-space.\\ (iii) If $(X_{1},M,\ast)$ is fuzzy compact, then $(X,M^{g},\ast)$ is also fuzzy compact. \end{lemma} \begin{proof}The proofs of (i) and (ii) are immediate. To prove (iii), let $\{x_{n}\}$ be a sequence in $X$. Then, for $u_{n}=gx_{n}$, $\{u_{n}\}$ is a sequence in $X_{1}$ for which we can find a convergent subsequence $\{u_{n_{k}}\}$, say $u_{n_{k}}\to u\in X_{1}$ as $k\to +\infty$. Define the sequence $\{y_{n_{k}}\}$ in $X$ by $y_{n_{k}}=g^{-1}u_{n_{k}}$ and set $y = g^{-1}u$. Then \[ M^{g}(y_{n_{k}},y,t) = M(gy_{n_{k}},gy,t) = M(u_{n_{k}},u,t) \to 1\] as $k \to +\infty$ for every $t>0$. This implies that $(X,M^{g},\ast)$ is fuzzy compact. \end{proof} \begin{lemma}\label{lem2} Let $f$ be a fuzzy $(g,\varphi)$-contraction. Then\\ (i) $f$ is a fuzzy (uniformly) continuous mapping on $(X,M^{g},\ast)$ with values in $(X,M,\ast)$.
(ii) $g^{-1}\circ f$ is a continuous mapping on $(X,M^{g},\ast)$ with values into itself. \end{lemma} \begin{proof}(i) Let $\{x_{n}\}$ be a sequence in $X$ such that $x_{n}\to x$ in $(X,M^{g},\ast)$. This means that \[ \lim_{n\to +\infty}M^{g}(x_{n},x,t)=\lim_{n\to +\infty}M(gx_{n},gx,t)=1, \forall t>0.\] By the fuzzy $(g,\varphi)$-contraction condition (\ref{add2}) it follows that \[ \lim_{n\to +\infty}M(fx_{n},fx,t)\geq \lim_{n\to +\infty}M(fx_{n},fx,\varphi (t))=1, \forall t>0.\] This implies that $f$ is fuzzy continuous.\\ (ii) Note that, since \[ \lim_{n\to +\infty}M(gg^{-1}fx_{n},gg^{-1}fx,t)= \lim_{n\to +\infty}M^{g}(g^{-1}fx_{n},g^{-1}fx,t)=1, \forall t>0,\] the mapping $g^{-1} \circ f$ defined on $(X,M^{g},\ast)$ with values in itself is fuzzy continuous. \end{proof} \begin{theorem}\label{main} Let $f$ and $g$ be two functions defined on a complete $FM$-space $(X,M,\ast)$. If $g$ is bijective and $f$ is a fuzzy $(g,\varphi)$-contraction, then there exists a unique coincidence point $z\in X$ such that $gz=fz$. \end{theorem} \begin{proof} It is obvious that $M^{g}(x,y,t)>1-t$ whenever $t>1$. Hence, we have $M(gx,gy,t)>1-t$. By condition (\ref{add2}), we have $M(fx,fy,\varphi (t))>1-\varphi (t)$. But, \begin{eqnarray*} M(fx,fy,\varphi (t)) & = & M(gg^{-1}fx,gg^{-1}fy,\varphi(t))\\ & = & M^{g}(g^{-1}fx,g^{-1}fy,\varphi(t))\\ & > & 1-\varphi(t). \end{eqnarray*} Now, by letting $h=g^{-1}f$, we have \[ M^{g}(hx,hy,\varphi (t))>1-\varphi(t).\] By condition (\ref{add2}), we have \begin{eqnarray*} M(fhx,fhy, \varphi^{2}(t)) &=& M(gg^{-1}fhx,gg^{-1}fhy,\varphi^{2}(t))\\ &=& M^{g}(g^{-1}fhx,g^{-1}fhy,\varphi^{2}(t))\\ &=& M^{g}(h^{2}x,h^{2}y,\varphi^{2}(t))\\ &>& 1-\varphi^{2}(t).
\end{eqnarray*} By repeating this process, we have \[ M^{g}(h^{n}x,h^{n}y,\varphi^{n}(t))>1-\varphi^{n}(t).\] Since $\lim_{n\to +\infty}\varphi^{n}(t)=0$, for every $\epsilon >0$ and $\lambda \in (0,1)$ there exists a positive integer $N(\epsilon,\lambda)$ such that $\varphi^{n}(t)\leq \min\{\epsilon, \lambda\}$ whenever $n\geq N(\epsilon,\lambda)$. Furthermore, since $M$ is nondecreasing in $t$, we have \[ M^{g}(h^{n}x,h^{n}y,\epsilon)\geq M^{g}(h^{n}x,h^{n}y,\varphi^{n}(t))>1-\varphi^{n}(t)>1-\lambda.\] Fix $x_{0}\in X$ and let the sequence $\{x_{n}\}$ in $X$ be defined recursively by $x_{n+1}=hx_{n}$, or equivalently by $gx_{n+1}=fx_{n}$. Now, consider $x=x_{p}$ and $y=x_{0}$; then from the above inequality we have \[ M^{g}(x_{n+p},x_{n},\epsilon) = M^{g}(h^{n}x_{p},h^{n}x_{0},\epsilon)>1-\lambda,\] for every $n\geq N(\epsilon,\lambda)$ and $p\geq 1$. Therefore $\{x_{n}\}$ is a fuzzy Cauchy sequence in $X$. Since $(X,M,\ast)$ is complete, by Lemma \ref{lem1}, $(X,M^{g},\ast)$ is complete and there is a point $z\in X$ such that $x_{n}\to z$ under $M^{g}$. Since by Lemma \ref{lem2}, $h$ is continuous, we have $z=hz$, i.e., $z=g^{-1}fz$, or equivalently $gz=fz$. For the uniqueness, assume $gw=fw$ for some $w\in X$. Then, for any $t>0$ and using (\ref{add2}) repeatedly, after $n$ iterates we have \[ M(gz,gw,t)>1-t \Rightarrow M(fz,fw,\varphi^{n}(t))>1-\varphi^{n}(t).\] Thus, we have $\lim_{n\to +\infty}M(fz,fw,\varphi^{n}(t))=1$, which implies that $fz=fw$; hence $gz=gw$ and, since $g$ is injective, $z=w$. \end{proof} As an example, taking into account that every metric space $(X,d)$ can be made into a fuzzy metric space $(X,M,\ast)$ in a natural way, by setting $M(x,y,t)= \frac{t}{t+d(x,y)}$ for every $x,y\in X$, $t>0$ and $a\ast b=ab$ for every $a,b\in [0,1]$, Theorem~\ref{main} yields the following fixed point theorem for mappings defined on metric spaces.
\begin{example}Let $f$ and $g$ be two mappings defined on a non-empty set $X$ with values in a complete metric space $(X,d)$, where $g$ is bijective and $f$ is a $(g,\varphi)$-contraction, that is, there exists a $\varphi \in \Phi$ such that \[ d(fx,fy) \leq \varphi(d(gx,gy)),\] for every $x,y\in X$. Then there exists a unique element $x^{*}\in X$ such that $fx^{*}=gx^{*}$. \begin{proof} We suppose that $d(x,y)\in [0,1]$. If this is not true, we define the mapping $d_{1}(x,y)=1-e^{-d(x,y)}$; then the pair $(X,d_{1})$ is a metric space and the uniformities defined by the metrics $d$ and $d_{1}$ are equivalent.\\ Now suppose that $t>0$ and $M(gx,gy,t)>1-t$. Then we have $\frac{t}{t+d(gx,gy)}>1-t$. This implies $d(gx,gy)<t$ and consequently $\varphi(d(gx,gy))<\varphi(t)$ for all $t>0$. Thus, $d(fx,fy)<\varphi(t)$, which implies that $\frac{\varphi(t)}{\varphi(t)+d(fx,fy)}>1-\varphi(t)$, i.e., $M(fx,fy,\varphi(t))>1-\varphi(t)$. So the mapping $f$ is a fuzzy $(g,\varphi)$-contraction defined on $(X,M,\ast)$ and the conclusion follows by Theorem~\ref{main}. \end{proof} \end{example} As a multivalued generalization of the notion of $(g,\varphi)$-contraction (Definition \ref{def3}), we shall introduce the notion of a fuzzy $(g,\varphi)$-contraction, where $\varphi \in \Phi$, for a multivalued mapping. Let $2^{X}$ be the family of all nonempty subsets of $X$. \begin{definition}\label{def4}Let $(X,M,\ast)$ be an $FM$-space, let $A$ be a nonempty subset of $X$ and let $T\colon A \to 2^{X}$. The mapping $T$ is called a fuzzy $(g,\varphi)$-contraction, where $\varphi \in \Phi$, if there exists a bijective function $g\colon X \to X$ such that for every $x,y\in X$ and every $t>0$, \begin{eqnarray} M^{g}(x,y,t)>1-t \Rightarrow \\ \nonumber\forall u \in (T\circ g)(x), \exists v\in (T\circ g)(y)\colon M(u,v,\varphi(t))>1-\varphi(t).
\end{eqnarray} \end{definition} \begin{definition} Let $(X,M,\ast)$ be an $FM$-space, let $A$ be a nonempty subset of $X$ and let $T\colon A \to 2^{X}$. We say that $T$ is {\em weakly fuzzy demicompact\/} if for every sequence $\{x_{n}\}$ from $A$ such that $x_{n+1}\in Tx_{n}, n\in\mathbb{N}$, and \[\lim_{n\to+\infty}M(x_{n+1},x_{n},t)=1\] for every $t>0$, there exists a fuzzy convergent subsequence $\{x_{n_{k}}\}$. \end{definition} By $cl(X)$ we shall denote the family of all nonempty closed subsets of $X$. \begin{theorem}\label{main2}Let $(X,M,\ast)$ be a complete $FM$-space, $g\colon X \to X$ be a bijective function and $T\colon A \to cl(A)$, where $A\in cl(X)$, be a fuzzy $(g,\varphi)$-contraction, where $\varphi \in \Phi$. If $T$ is weakly fuzzy demicompact, then there exists at least one element $x\in A$ such that $x\in Tx$. \end{theorem} \begin{proof}Let $x_{0}\in A$ and $x_{1}\in (T\circ g)(x_{0})$. Let $t>1$. Then it follows that \[M^{g}(x_{1},x_{0},t)>1-t.\] The mapping $T$ is a fuzzy $(g,\varphi)$-contraction, therefore by Definition \ref{def4} there exists $x_{2}\in (T\circ g)(x_{1})$ such that \[M(x_{2},x_{1},\varphi(t))>1-\varphi(t).\] Since $M(x_{2},x_{1},\varphi(t))=M^{g}(g^{-1}x_{2},g^{-1}x_{1},\varphi(t))$, we have \[ M^{g}(g^{-1}x_{2},g^{-1}x_{1},\varphi(t))>1-\varphi(t).\] Similarly, it follows that there exists $x_{3}\in (T\circ g)(x_{2})$ such that \[M(x_{3},x_{2},\varphi^{2}(t))>1-\varphi^{2}(t).\] By repeating the above process, there exists $x_{n}\in (T\circ g)(x_{n-1})$ $(n\geq 4)$ such that \[M(x_{n},x_{n-1},\varphi^{n-1}(t))>1-\varphi^{n-1}(t), \forall t>0.\] By letting $n\to +\infty$, \[\lim_{n\to +\infty}M(x_{n},x_{n-1},\varphi^{n-1}(t))=1, \forall t>0.\] Further, by Remark \ref{rem1}(b), we have \[\lim_{n\to +\infty}M(x_{n},x_{n-1},\epsilon)=1, \forall \epsilon>0.\] Since $T$ is weakly fuzzy demicompact, from the above limit there exists a fuzzy convergent subsequence $\{x_{n_{k}}\}$ such that $\lim_{k\to +\infty}x_{n_{k}}=x$ for some $x\in
A$. Now, we show that $x\in (T\circ g)(x)$. Since $(T\circ g)(x)=\overline{(T\circ g)(x)}$, we shall prove that $x\in \overline{(T\circ g)(x)}$, i.e., for every $\epsilon >0$ and $\lambda\in (0,1)$, there exists $y\in (T\circ g)(x)$ such that \[M(x,y,\epsilon)>1-\lambda.\] Note that, since $\ast$ is a continuous $t$-norm, by Lemma \ref{lem}, for $\lambda\in (0,1)$ there is a $\delta\in (0,1)$ such that \[(1-\delta)\ast (1-\delta)\geq 1-\lambda.\] Further, if $\delta_{1}\in (0,1)$ is such that \[(1-\delta_{1})\ast (1-\delta_{1})\geq 1-\delta\] and $\delta_{2}= \min\{\delta,\delta_{1}\}$, we have \begin{eqnarray*} (1-\delta_{2})\ast [(1-\delta_{2})\ast (1-\delta_{2})] &\geq& (1-\delta)\ast [(1-\delta_{1})\ast (1-\delta_{1})]\\ &\geq& (1-\delta)\ast (1-\delta)\\ &\geq& 1-\lambda. \end{eqnarray*} Since $\lim_{k\to +\infty}x_{n_{k}}=x$, there exists an integer $k_{1}$ such that \[M(x,x_{n_{k}},\epsilon/3)>1-\delta_{2}, \forall k\geq k_{1}.\] Let $k_{2}$ be an integer such that \[M(x_{n_{k}},x_{n_{k}+1},\epsilon/3)>1-\delta_{2}, \forall k\geq k_{2}.\] Let $s>0$ be such that $\varphi(s)< \min\{\epsilon/3,\delta_{2}\}$ and $k_{3}$ be an integer such that \[M^{g}(x_{n_{k}},x,s)>1-s, \forall k\geq k_{3}.\] Since $T$ is a fuzzy $(g,\varphi)$-contraction, there exists $y\in (T\circ g)(x)$ such that \[M(x_{n_{k}+1},y,\varphi(s))>1-\varphi(s),\] and so \[ M(x_{n_{k}+1},y,\epsilon /3) \geq M(x_{n_{k}+1},y,\varphi(s)) >1-\varphi(s)>1-\delta_{2}\] for every $k\geq k_{3}$. If $k\geq \max\{k_{1},k_{2},k_{3}\}$, we have \begin{eqnarray*} M(x,y,\epsilon) &\geq& M(x,x_{n_{k}},\epsilon /3)\ast (M(x_{n_{k}}, x_{n_{k}+1},\epsilon /3)\ast M(x_{n_{k}+1},y, \epsilon /3))\\ &\geq& (1-\delta_{2})\ast((1-\delta_{2})\ast (1-\delta_{2}))\\ &\geq& 1-\lambda. \end{eqnarray*} Hence $x\in\overline{(T\circ g)(x)}=(T\circ g)(x)$. The proof is completed. \end{proof} Note that Theorem \ref{main2} above generalizes the theorem proved by Pap et al. in \cite{Pap}.
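As a numerical illustration of the single-valued Theorem \ref{main} (our own sketch with hypothetical choices, not part of the original development): take $X=\mathbb{R}$ with the standard fuzzy metric $M(x,y,t)=t/(t+|x-y|)$, the bijection $g(x)=2x$ and $f(x)=x/2$. The Python snippet below runs the Picard-type iteration $x_{n+1}=g^{-1}(fx_{n})$ from the proof and checks that the iterates fuzzy-converge to the coincidence point $z=0$ with $gz=fz$.

```python
def M(x, y, t):
    # Standard fuzzy metric induced by the usual distance on R:
    # M(x, y, t) = t / (t + |x - y|).
    return t / (t + abs(x - y))

def g(x):
    # Bijective mapping (hypothetical choice for this illustration).
    return 2.0 * x

def g_inv(x):
    return x / 2.0

def f(x):
    # Mapping paired with g; the coincidence point solves g(z) = f(z).
    return x / 2.0

# Picard-type iteration x_{n+1} = g^{-1}(f(x_n)) from the proof.
x = 8.0
for _ in range(40):
    x = g_inv(f(x))  # here h(x) = x / 4, so the iterates shrink geometrically
z = x

print(abs(z))            # essentially 0: z is the coincidence point
print(abs(g(z) - f(z)))  # g(z) = f(z) up to floating-point error
print(M(z, 0.0, 0.5))    # close to 1: the iterates fuzzy-converge to z
```

The geometric shrinking of $h=g^{-1}f$ mirrors the role of $\varphi^{n}(t)\to 0$ in the proof: each iterate tightens the fuzzy neighbourhood in which the remaining iterates live.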
\begin{remark} We note that Theorem \ref{main} can be obtained in a more general setting, namely when $g$ is not injective. In this case we can use a pseudo-inverse of $g$, defined as a selector of the multivalued inverse of $g$. We will consider this situation in our next paper. \end{remark} \end{document}
\begin{document} \title{InferNet for Delayed Reinforcement Tasks: Addressing the Temporal Credit Assignment Problem } \begin{abstract} The temporal Credit Assignment Problem (CAP) is a well-known and challenging task in AI. While Reinforcement Learning (RL), especially Deep RL, works well when immediate rewards are available, it can fail when only delayed rewards are available or when the reward function is noisy. In this work, we propose delegating the CAP to a Neural Network-based algorithm named \emph{InferNet} that explicitly learns to infer the immediate rewards from the delayed rewards. The effectiveness of InferNet was evaluated on two online RL tasks: a simple GridWorld and 40 Atari games; and two offline RL tasks: GridWorld and a real-life Sepsis treatment task. For all tasks, the effectiveness of using the InferNet inferred rewards is compared against the immediate and the delayed rewards under two settings: with noisy rewards and without noise. Overall, our results show that the effectiveness of InferNet is robust against noisy reward functions and that it is an effective add-on mechanism for solving the temporal CAP in a wide range of RL tasks, from classic RL simulation environments to a real-world RL problem, for both online and offline learning. \end{abstract} \keywords{Deep Reinforcement Learning \and Credit Assignment Problem} \section{Introduction} \label{sec:delayed-reinforcement} A large body of real-world tasks can be characterized as sequential multi-step learning problems, where the outcome of the selected actions is \emph{delayed}. Discovering which action(s) are responsible for the delayed outcome is known as the \textbf{\emph{(temporal) Credit Assignment Problem (CAP)}} \cite{Minsky1961StepsTA}.
Solving the temporal CAP is especially important for \emph{delayed reinforcement} tasks \cite{sutton1985temporal}, in which a reward $r_t$ obtained at time $t$ can be affected by all previous actions $a_0, a_1, \ldots, a_{t-1}, a_t$, and thus we need to assign credit or blame to each of those actions individually. Such tasks become extremely challenging if there are long delays between the actions and their corresponding outcomes. Prior research has explored solving the CAP by formulating it as an RL problem \cite{suttonXreinforcement}, in which an agent learns how to interact with a potentially non-stationary, stochastic, and partially observable environment to maximize the long-term cumulative reward. For example, Temporal Difference (TD) learning methods \cite{sutton1988learning} have been widely used to tackle the CAP \cite{sutton1990time}. In particular, the TD($\lambda$) algorithm \cite{sutton1988learning,tesauro1992practical} uses eligibility traces to update the value of a state by using all the future rewards in the episode, which makes it easier to assign credit over long trajectories. In prior work, one way to mitigate the impact of the CAP is to use model-based RL or simulations, which allow collecting vast amounts of data. However, in many real-life domains such as healthcare, building accurate simulations is especially challenging because disease progression is a rather complex process; moreover, learning policies while interacting with patients can be unethical or illegal. On the other hand, it is essential to solve the CAP in such domains because reward functions are often not only delayed but also noisy. The most appropriate rewards to use in healthcare are the patient outcomes, which are typically unavailable until the entire trajectory is complete.
This is because disease progression makes it difficult to assess patient health states moment by moment, and more importantly, many instructional or medical interventions that boost short-term performance may not be effective over the long term. Furthermore, reward functions in such domains are often incomplete or imperfect observations of the underlying true reward mechanisms. For example, a patient's final outcome of a stay can be inaccurate/noisy: the 30-day readmission rate among survivors of sepsis, across 633,407 hospitalizations at 3,315 hospitals, is 28.7\% \cite{sepsisarticle}. Previously, an alternative approach was proposed by \cite{azizsoltani2019unobserved}, denoted \emph{InferGP}. They first applied Gaussian Processes (GP) to infer \emph{unobservable} immediate rewards from delayed rewards, and then applied standard RL algorithms to induce policies based on the inferred rewards. While promising, that work has the following three limitations, in order of increasing severity: 1) it did not investigate the effectiveness of InferGP with noisy reward functions, even though GP are known to be robust against noise; 2) InferGP does not scale up well, as it has poor time and space complexities, as shown below; and 3) InferGP can only be applied to offline RL because it incorporates information from the entire training dataset into the model when applying Bayesian inference to infer the reward. Many DRL algorithms, however, often need millions or even billions of interactions obtained by extensively exploring the environment before they can learn a competitive policy, which makes InferGP impractical for large datasets and online RL. In this work, we propose a novel Neural Network based approach named \emph{InferNet}, which infers ``immediate rewards'' from the delayed rewards; those inferred immediate rewards can then be used to train any RL agent.
InferNet is a general, \emph{scalable} mechanism that works alongside any online or offline RL algorithm. It is an easy yet effective add-on mechanism for mitigating the temporal CAP. The effectiveness of InferNet was evaluated on two online RL tasks: a simple GridWorld and 40 Atari games; and two offline RL tasks: GridWorld and a real-life Sepsis treatment task. For both online and offline RL tasks, the effectiveness of using the InferNet inferred rewards is compared against immediate and delayed rewards. Additionally, we evaluated the effectiveness of each reward setting by adding noise to the reward functions to more accurately mimic real-world scenarios. Our results show that for online RL tasks, the InferNet policy significantly outperforms the delayed policy in the GridWorld, and its performance is on par with the immediate policy. For the Atari games, InferNet outperforms the corresponding delayed policy on 32 out of 40 games, and it performs as well as or better than the immediate policy on 8 games. When noise is present in the rewards, InferNet outperforms the immediate policy on the GridWorld; for the Atari games, the performance of the immediate policies suffers greatly while the delayed and InferNet policies are less affected by the noise. Moreover, our proposed InferNet policy outperformed the corresponding delayed reward policy across 30 Atari games, and it even performs as well as or better than the immediate reward policy on 23 games. On the offline RL tasks, InferNet performs as well as or better than InferGP and the immediate and delayed reward baselines. \section{InferNet: Neural Net Inferred Rewards} \textbf{Problem Definition:} The environment is modeled as a Markov Decision Process, where at each time-step $t$ the agent observes the environment in state $s_t$, takes an action $a_t$, receives a scalar reward $r_t$, and the environment moves to state $s_{t+1}$.
In the discrete case, $a_t$ is selected from a discrete set of actions $a_t \in A = \{1, ..., |A|\}$. The RL agent is tasked with maximizing the expected discounted sum of future rewards, or return, defined as $R_t = \sum_{\tau = t}^T \gamma^{\tau - t} r_\tau$, where $\gamma \in (0, 1]$ is the discount factor and $T$ is the last timestep in the episode. A value function is commonly used to estimate the expected return for each state or state-action pair. The optimal action-value function is defined as $Q^*(s, a) = \max_\pi Q^\pi (s, a)$, where $Q^\pi$ estimates the long-term reward the agent would observe after taking action $a$ in state $s$ and following policy $\pi$ thereafter. \noindent \textbf{InferNet:} The intuition behind InferNet is rather straightforward. InferNet uses a deep neural network to infer the immediate rewards from the delayed reward in an episode. At each timestep, the observed state and action are passed as input to the neural network, which outputs a single scalar, the inferred immediate reward for that state and action: $r_t = f(s_t, a_t | \theta)$. Here $\theta$ denotes the parameters (weights and biases) of the neural network. To address the credit assignment problem, InferNet distributes the final delayed reward among all the states in the episode. More specifically, the network learns to infer the immediate rewards from the delayed reward by applying a constraint on the predicted rewards: the sum of all the predicted rewards in one episode must be equal to the delayed reward, as shown in Equation \ref{inferred rewards}, where $R_{del}$ denotes the delayed reward. This way, the network needs to model the reward function, conditioned on the state-action pair for each timestep, and it minimizes the loss between the sum of predicted rewards and the delayed reward for each episode. \begin{equation} \label{inferred rewards} R_{del} = f(s_0, a_0|\theta) + f(s_1, a_1|\theta) + ...
+ f(s_{T-1}, a_{T-1}|\theta) \end{equation} We used the \emph{TimeDistributed} layer available in TensorFlow Keras \cite{tensorflow2015-whitepaper,chollet2015keras} in order to repeat the same neural network operation multiple times, sharing weights across time, and pass the entire episode at once as input to the neural network. It should be noted that despite sharing weights across time, there is no internal state that is passed to the next timestep (as in a recurrent neural network). Each output depends only on the state and action passed as inputs at that timestep. We train InferNet by minimizing the loss function shown in Equation \ref{eq:loss function}. The pseudo-code for training InferNet alongside an RL algorithm is shown in Algorithm \ref{alg:main_alg}. This process can be seen as making the neural network learn a function that outputs a reward for each state-action pair, subject to the constraint that all rewards in an episode sum up to the delayed reward for that episode. \begin{equation} \label{eq:loss function} Loss(\theta) = (R_{del} - \sum_{t=0}^{T-1} f(s_t, a_t|\theta) )^2 \end{equation} To evaluate the effectiveness of InferNet, we divide the experimental evaluation into online and offline RL tasks. \begin{algorithm}[tb] \caption{InferNet Online} \label{alg:main_alg} \begin{algorithmic}[1] \STATE Initialize InferNet buffer $D \leftarrow ()$ \STATE // Pretrain InferNet \FOR{$episode \gets1$ to $K$} \STATE Play an episode randomly and collect the data \STATE Delayed reward $R_{del} = r_0 + r_1 + ..
+ r_{T-1}$ \STATE $D \leftarrow D \cup (s_0, a_0, ..., s_{T-1}, a_{T-1}, R_{del})$ \STATE Sample mini-batch of episodes $B \sim D$ \STATE Train InferNet on $B$: \\ $L(\theta) = (R_{del} - \sum_{t=0}^{T-1} f(s_t, a_t|\theta))^2$ \ENDFOR \FOR{$episode \gets1$ to $M$} \STATE Set episode data sequence $tmp \leftarrow ()$ \WHILE{not end of episode} \STATE Get state $s$ from env \STATE Select action $a \sim \pi$ \STATE $s', r \sim env(s, a)$ \STATE $tmp \leftarrow tmp \cup (s, a, r, s')$ \STATE Train RL agent \STATE Sample batch of episodes $B \sim D$ \STATE Train InferNet on $B$: \\ $L(\theta) = (R_{del} - \sum_{t=0}^{T-1} f(s_t, a_t|\theta))^2$ \ENDWHILE \STATE Use InferNet to infer rewards for the steps in $tmp$\\ \STATE Replace rewards in $tmp$ with InferNet rewards\\ \STATE $D \leftarrow D \cup tmp$\\ \STATE Store data in $tmp$ to train the RL agent later on \ENDFOR \end{algorithmic} \end{algorithm} \section{Online RL Experiments} The effectiveness of InferNet is investigated on a GridWorld first, and then on the Atari 2600 Learning Environment. We compare the following reward settings: 1) \emph{Immediate rewards:} when available, they are the gold standard. 2) \emph{Delayed rewards:} these rewards are used as a baseline. All the intermediate rewards are zero, and the reward that indicates how good or bad the intermediate actions were is only provided at the end of the episode. When the rewards are not delayed by nature, we simulate the delayed rewards by ``hiding'' the immediate rewards and providing the sum of all the immediate rewards at the end of the episode, as one big delayed reward. 3) \emph{InferNet rewards:} our proposed method, which uses a Neural Network to predict the immediate rewards from the delayed reward. On both tasks, we also evaluate the robustness of InferNet when the reward function is noisy.
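To make the objective of Eq. \ref{eq:loss function} concrete, here is a minimal NumPy sketch (our own illustration: it substitutes a toy linear reward model $f(s_t,a_t|\theta)=\theta^{\top}\phi(s_t,a_t)$ for the paper's TensorFlow network, and the feature dimensions, learning rate, and episode counts are hypothetical choices). Gradient descent on the squared constraint loss recovers per-step rewards whose episode sums match the observed delayed rewards.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 episodes of length T = 10, features phi(s_t, a_t) in R^4.
T, dim = 10, 4
w_true = np.array([1.0, -0.5, 0.3, 0.0])
episodes = rng.normal(size=(200, T, dim))   # phi(s_t, a_t) for every step
true_r = episodes @ w_true                   # true immediate rewards (hidden)
delayed = true_r.sum(axis=1)                 # only the episode sum is observed

# InferNet-style training: minimise (R_del - sum_t f(s_t, a_t | theta))^2
# over mini-batches of episodes, as in Algorithm 1.
theta = np.zeros(dim)
lr = 1e-3
for _ in range(2000):
    idx = rng.integers(len(episodes), size=32)       # mini-batch of episodes
    phi, R = episodes[idx], delayed[idx]
    resid = R - (phi @ theta).sum(axis=1)            # R_del minus predicted sum
    grad = -2 * (resid[:, None] * phi.sum(axis=1)).mean(axis=0)
    theta -= lr * grad

# The inferred per-step rewards now sum to the delayed reward per episode.
print(np.abs((episodes @ theta).sum(axis=1) - delayed).max())  # ~0
```

Because the delayed reward is the only supervision, identification of the per-step rewards comes entirely from enforcing the sum constraint across many episodes that share one reward model; in this realizable toy setting, minimizing the constraint loss also drives the per-step predictions toward the hidden immediate rewards.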
\subsection{Grid World} \label{sec:gridworld} A GridWorld environment was designed as a simple RL testbed where we can compare the true immediate rewards to the inferred rewards produced by InferNet. This environment consists of a $14\times 7$ grid, with five positive rewards (+1) and four negative rewards (-1), located randomly along the grid, but always in the same locations. All other states have a reward of zero. The initial state is located at the bottom-right corner of the grid, and the agent's goal is to reach the terminal state, located at the top-left corner, while collecting the positive rewards and avoiding the negative ones. The three available actions are: move up, left, and down. Figure \ref{fig:losses} (Appendix) shows that by minimizing the training error in Eq. \ref{eq:loss function} (the difference between the delayed reward and the sum of the predicted immediate rewards; red line), InferNet also minimizes the true error (the difference between the predicted and true immediate rewards; blue line). We then evaluate the effectiveness of the rewards predicted by InferNet when used to train an RL agent, and compare them to the immediate and delayed rewards. We repeated each experiment five times with different random seeds, and show the mean and standard deviation of those runs. We explored Q-Learning and TD($\lambda$) as RL agents; the latter is known for being able to solve the temporal CAP. \noindent \textbf{Q-Learning } is a TD-Learning method, but it does not employ any eligibility traces, and it usually only uses a 1-step reward. Figure \ref{fig:gridworld-online-results} (Top Left) shows the result of training different Q-Learning agents on the three reward settings without noise. These results clearly show that the Q-Learning agent that uses the delayed rewards cannot learn to solve the simple GridWorld task. It is no better than a random agent.
However, when we first use InferNet to infer the ``immediate'' rewards from the delayed ones, the agent is able to solve the task as effectively as the agent that uses the immediate rewards. When the reward function is noisy, Figure \ref{fig:gridworld-online-results} (Bottom Left) shows that the immediate reward agent suffers significantly, and cannot solve the environment completely, while InferNet is much more robust to the noise. \begin{figure} \caption{Results of training a Q-Learning agent (left) and a TD($\lambda$) agent (right) on the GridWorld task with immediate, delayed, and InferNet rewards. No noise in the rewards (Top) and noisy rewards (Bottom); the noise is Gaussian ($\mathcal{N}(0, \sigma^2)$).} \label{fig:gridworld-online-results} \end{figure} \noindent \textbf{TD($\lambda$) } is known to be one of the strongest methods to solve the CAP. This algorithm takes advantage of the benefits of TD methods, and includes eligibility traces, which allow the agent to look at all the future rewards to estimate the value of each state. This makes propagating the delayed reward easier than in the case of 1-step rewards. Despite all these advantages, Figure \ref{fig:gridworld-online-results} (Top Right) shows that when the rewards are delayed, the agent is not able to learn as effectively as the agent that has access to the true immediate rewards. However, the agent that uses the InferNet predicted rewards achieves the same performance as the agent that uses the immediate rewards; they can both fully solve the environment. When the reward function is noisy, Figure \ref{fig:gridworld-online-results} (Bottom Right) shows that neither agent suffers. This result shows that TD($\lambda$) is more robust to noisy rewards than Q-Learning. \subsection{Atari Learning Environment (ALE)} The ALE provides visually complex environments in which the state space is very high dimensional, represented by pixels on a screen.
It is important to note that in some games, each episode can consist of thousands of steps, so learning from a single delayed reward is no trivial task. We used OpenAI gym \cite{1606.01540} to simulate the environments, and the stable baselines library \cite{stable-baselines} to train the Deep RL agent. Here we evaluate the performance of InferNet in conjunction with a Prioritized Dueling DQN agent \cite{wang2015dueling,Schaul2015_priortized}. \begin{figure*} \caption{Performance of the Prioritized Dueling DQN agent on the different Atari games with the three reward settings: Delayed (vertical line at x=0), Immediate (red) and InferNet (blue). Without noisy rewards (Left) and with noisy rewards (Right). The results have been normalized so that the Delayed rewards appear as a vertical line at x=0, and each bar shows how many times better than the delayed agent the corresponding setting is.} \label{fig:Atari} \end{figure*} \noindent \textbf{Noise-free Rewards:} The results of training the Dueling DQN agent on the three different reward settings are shown in Figure \ref{fig:Atari} (Left). The full results can be found in the supplementary material. The agent trained on the rewards provided by InferNet performs as well as or better than the agent which uses the delayed rewards in almost all games. In some cases, it can even match the performance of the agent in the Immediate setting. These results clearly show that when immediate rewards are not available, using InferNet is preferable to training the agent on the delayed rewards. \begin{figure} \caption{Training process for Seaquest (Top) and Freeway (Bottom). Left: No noise. Right: Gaussian noise ($\mathcal{N}(0, \sigma^2)$).} \label{fig:seaquest-freeway} \end{figure} \noindent \textbf{Noisy Rewards:} We repeated the Atari experiments after adding Gaussian noise to the observed rewards. As the noise is unbiased, the expectation of the sum of rewards is the same with and without noise, as shown in Eq. \ref{noise info}.
\begin{dmath} \label{noise info} \mathbb{E}[R] = \mathbb{E}[r_0 + ... + r_{T-1}] = \mathbb{E}[r_0 + \mathcal{N}(0, \sigma^2) + ... + r_{T-1} + \mathcal{N}(0, \sigma^2)] \end{dmath} Figure \ref{fig:Atari} (Right) shows the results of training the same Prioritized Dueling DQN agent on noisy immediate, delayed and InferNet rewards. It shows that the performance of the agent trained on noisy immediate rewards suffers significantly when compared to the noise-free immediate rewards. InferNet outperforms the Immediate rewards in more games than in the noise-free setting. Two examples of this are the games of Seaquest and Freeway (Figure \ref{fig:seaquest-freeway}). \section{Offline RL Experiments} When applying RL to solve many real-life tasks such as healthcare, we have to perform offline learning. This means that the training needs to be done from a fixed training dataset, and no further exploration of the environment is possible. In these situations, having a method that effectively solves the CAP is crucial. For the offline experiments, we added one more reward setting to the experiment: InferGP. This is an alternative, previously proposed method for inferring the immediate rewards from the delayed ones. Prior work has shown that InferGP works reasonably well in a wide range of offline RL tasks \cite{azizsoltani2019unobserved}. Our goal is to determine the efficacy of InferNet for offline RL tasks, when compared to immediate, delayed and InferGP rewards. We use the same GridWorld environment as in Section \ref{sec:gridworld}. Additionally, we want to evaluate our method on a real-world problem: a healthcare task where the goal of the agent is to induce a policy for sepsis treatment and septic shock prevention. \begin{figure} \caption{RMSE between the inferred and the true immediate rewards as a function of the number of episodes collected from the GridWorld environment. Left: No noise.
Right: Gaussian noise ($\mathcal{N}(0, \sigma^2)$).} \label{fig:rmse-gridworld} \end{figure} \subsection{GridWorld} In this offline experiment, we generate random data, and then use that data to infer the rewards and train the RL agent. \noindent \textbf{RMSE:} We evaluated the amount of data needed for InferNet and InferGP to approximate the true immediate rewards. We calculated the root mean squared error (RMSE) between the inferred rewards and the true immediate rewards in the training dataset while varying the amount of training data. Figure \ref{fig:rmse-gridworld} (Left) compares this RMSE for InferNet and InferGP. Overall, with 100 or more trajectories, InferNet consistently has a lower RMSE than InferGP. Figure \ref{fig:rmse-gridworld} (Right) compares the RMSE of the two approaches with noisy rewards. Adding noise makes the CAP more challenging, and InferGP cannot adapt as well as InferNet (the RMSE of InferGP increases from 0.15 to 0.5, while that of InferNet increases from 0.13 to 0.2), despite the fact that GPs are known to be able to handle noisy data. \noindent \textbf{Offline Q-Learning:} We trained a tabular Q-Learning agent for 5000 iterations on the same dataset used to infer the rewards. We compared the four reward settings: immediate, delayed, InferGP and InferNet. Once our RL policies are trained, their effectiveness is evaluated online by interacting with the GridWorld environment directly. Figure \ref{fig:offline-gridworld} (Left) shows the mean and standard deviation of the performance of the agent when interacting with the environment for 50 episodes, as a function of the number of episodes available in the training dataset. It shows that, as expected, the delayed policy performs poorly, while the Immediate policy can converge to the optimal policy after only 10 episodes of data; additionally, both InferNet and InferGP can converge to the optimal policy, but they need more training data (around 150 episodes) than the Immediate policy.
Figure \ref{fig:offline-gridworld} (Right) shows the performance of the policies when the rewards are noisy. It clearly shows that adding noise to the reward function degrades the Immediate policy; the InferNet and InferGP policies also suffer, but InferNet remains the best option. \begin{figure} \caption{Performance of Q-Learning agents on the GridWorld environment as a function of the number of training episodes. Left: No noise. Right: Gaussian noise ($\mathcal{N}(0, \sigma^2)$).} \label{fig:offline-gridworld} \end{figure} \noindent \textbf{Time Complexity}: Figure \ref{fig:time-gridworld} empirically compares the time complexity of InferNet and InferGP: the training time of InferNet is less sensitive to the size of the training dataset, while the training time of InferGP increases cubically as the training data increases. Fundamentally, InferGP has an asymptotic time complexity of $O(n^3)$ and an asymptotic space complexity of $O(n^2)$, where $n$ refers to the size of the dataset. InferNet has a time complexity of $O(n)$ since we sample a constant number of mini-batches from the dataset for each gradient descent step, and we only need to train the network for a constant number of epochs. The space complexity of InferNet is $O(f \cdot l)$, where $f$ is the number of features in the state and action that are passed as inputs, and $l$ is the length of the episode that is passed as input. \begin{wrapfigure}{r}{0.4\columnwidth} \centering \includegraphics[width=0.49\columnwidth]{Figures/time_analysis.jpg} \caption{Time analysis of InferNet and InferGP.} \label{fig:time-gridworld} \end{wrapfigure} \subsection{Healthcare} We evaluate InferNet and InferGP on a real-world sepsis treatment task where the rewards are delayed. The goal is to learn an optimal treatment policy to prevent patients from going into septic shock, the most severe complication of sepsis, which leads to a mortality rate as high as 50\%.
As many as 80\% of sepsis deaths could be prevented with timely diagnosis and treatment \cite{kumar2006duration}; thus, it is crucial to monitor sepsis progression and recommend the optimal treatment as early as possible. Despite the severity of the disease and the challenges faced by practitioners, it is notoriously difficult to reach an agreement on the optimal treatment due to the complex nature of sepsis and different patients' constitutions. Moreover, continuous updates to the sepsis guidelines often lead to inconsistent clinical practices \cite{Backer2017}. Recently, several DRL approaches have been investigated for septic treatment, utilizing Electronic Health Records (EHRs). However, \cite{Raghu2017,Komorowski2018} only considered delayed rewards, while \cite{azizsoltani2019unobserved} leveraged the Gaussian process based immediate reward inference method, which is one of our baselines. \noindent \textbf{Data:} Our EHRs were collected from a large US healthcare system (July 2013 to December 2015). We identified $2,964$ septic shock positive visits and sampled $2,964$ negative visits based on expert clinical rules, keeping the same ratio of age, gender, race, and length of hospital stay as in the original EHR. We selected 22 sepsis-related state features such as vital signs, lab results and medical interventions, and defined four types of treatments as actions: no treatment, oxygen control, anti-infection drug, and vasopressor. \noindent \textbf{Reward:} The rewards were assigned by an expert-guided reward function based on the multiple septic stages. These rewards are delayed in time and noisy. The delayed rewards are given when the patient goes into septic shock or recovers at the end of their stay, and the noise in the rewards is a result of imperfect sensors or incomplete measurements. \noindent \textbf{Experiment setting:} We compared three reward settings: delayed, InferGP, and InferNet, since the immediate rewards are not available.
InferNet predicts one reward for each timestep. After inferring the immediate rewards, the septic treatment policies were induced using a Dueling DQN agent. The hyper-parameters are shown in the Appendix. \noindent \textbf{Evaluation metric:} The induced policies were evaluated using the \emph{septic shock rate}: visits are grouped by their policy agreement rate, from 0 to 1 in intervals of 0.1, and the septic shock rate is the proportion of shock-positive visits in each group. Ideally, the higher the agreement rate, the lower the septic shock rate should be. The policy agreement rate for the non-shock patients should also be higher, meaning that our policy agrees more with the physicians' actions for the non-shock patients. \begin{figure} \caption{Healthcare: Septic shock rate as a function of the agreement rate between the policy and the physician actions for the training (Left) and test (Right) sets.} \label{fig:ehr} \end{figure} \noindent \textbf{Results:} The underlying assumption is that the agreed treatments are adequate and acceptable, as they were taken by real doctors in real clinical cases, but not necessarily optimal. Figure \ref{fig:ehr} shows the \emph{septic shock rate} in each visit group for the training (Left) and test (Right) sets, as a function of the corresponding agreement rates. For InferNet, the septic shock rate decreases almost monotonically in the test set evaluation as the agreement rate increases, and it reaches the lowest shock rate of all policies. InferGP shows a general trend of decreasing shock rate as the agreement rate increases, with a larger variance than InferNet, while the delayed setting fails to learn an effective shock prevention policy. This supports the claim that InferNet significantly improves the policy training process at preventing septic shock, compared with InferGP and delayed rewards.
Furthermore, InferNet and InferGP induced policies that agree with the physicians more for the non-shock patients, while for the patients who are more likely to go into septic shock, the agents search for a treatment strategy different from the given treatments that resulted in septic shock. \section{Related Work} In recent years, by utilizing deep learning and novel RL algorithms, Deep RL (DRL) has shown great success in various complex tasks \cite{berner2019dota,silver2018general}. Much of the prior work on DRL has focused on online learning, where the agent learns while interacting with the environment. Immediate rewards are generally much more effective than delayed rewards for RL because of the CAP: the more we delay rewards or punishments, the harder it becomes to assign credit or blame properly. Different approaches have been proposed and applied for solving the CAP. For example, when applying DRL to games such as Chess, Shogi and Go, the final rewards are determined by the outcomes of the game: $-1$/$0$/$+1$ for loss/draw/win respectively; and for each state, Monte Carlo Tree Search (MCTS) was used to learn the likelihood of each outcome \cite{silver2017mastering,silver2018general}. Because of the CAP, RL algorithms often need more training data to learn an effective policy using delayed rewards than using immediate rewards. More importantly, for some extremely complicated games, DRL may fail to learn an effective policy altogether. As a result, prior research used expert-designed immediate rewards, or learned a reward function from expert experience trajectories, using reward engineering methods such as Inverse RL \cite{ziebart2008maximum,abbeel2004apprenticeship,levine2011nonlinear,ramachandran2007bayesian}. For example, Berner et al. used human-crafted intermediate rewards to simplify the CAP. They designed a reward function based on what expert players agree to be good in that game \cite{berner2019dota}.
While effective, such expert-designed rewards are often labor-intensive, expensive, and domain specific. Additionally, these expert rewards might introduce expert bias into the process, leading to sub-optimal agent performance, as shown by AlphaGo Zero \cite{silver2017mastering} outperforming the original AlphaGo \cite{silver2016mastering}. The human brain is very efficient at solving the CAP when learning to perform new tasks \cite{asaad2017prefrontal,richards2019dendritic}. Thus a wealth of neuroscience research focuses on understanding the learning and decision-making process in animals and humans. For example, \cite{agogino2004unifying} studied the structural and temporal CAP and suggested a unification of the problem for multi-agent, time-extended problems. In RL, the temporal CAP has been widely studied \cite{sutton1985temporal}, and solutions to it have been proposed in order to more successfully train neural network systems \cite{ororbia2018conducting,lansdell2019learning}. In machine learning, prior research tried to solve the CAP by formulating it as an RL task \cite{suttonXreinforcement}. The best-known family of algorithms for tackling the CAP is Temporal Difference (TD) Learning, and TD($\lambda$) in particular \cite{sutton1988learning}. TD($\lambda$) employs eligibility traces to use all the future rewards when updating the value of each state, resulting in better assignment of credit and blame for each action. \section{Conclusion} We developed a deep learning algorithm that explicitly tackles the CAP by generating immediate rewards from the delayed rewards. Our results show that our algorithm makes it easier for the RL algorithm of choice to solve the task at hand, both for online and offline RL, while mitigating the problems caused by noisy rewards. We showed that InferNet can accurately predict the true immediate rewards on a simple GridWorld and help Q-Learning and TD($\lambda$) agents solve the environment.
An RL agent that learns a treatment to avoid septic shock from a real-life healthcare dataset can also benefit from the rewards provided by InferNet to make more effective decisions. Finally, we showed that our algorithm scales to large datasets and to online RL, which allows it to help solve more complex pixel-based games such as the Atari games, and it can be especially useful when the reward is noisy, as shown by the performance of the agent on the noisy version of the Atari games. \appendix \section{Additional Results} \subsection{GridWorld} As mentioned in Section \ref{sec:gridworld}, Figure \ref{fig:losses} shows that by minimizing the training error in Eq. \ref{eq:loss function} (the difference between the delayed reward and the sum of immediate predicted rewards) (red line), InferNet also minimizes the true error (the difference between the predicted and true immediate rewards) (blue line). This result empirically shows that minimizing our objective loss function indeed makes InferNet learn the true immediate rewards. \subsection{CartPole} The goal in CartPole is to move a cart left and right in order to keep a pole balanced in its vertical position. The reward function provides a reward of +1 for each step where the pole is kept vertical. This means that the reward function InferNet needs to learn is quite simple: a reward of +1 for each timestep, regardless of the state and action passed as inputs. The difficulty in this environment lies in the continuous state space, which makes tabular RL methods inapplicable. For this reason, we compared the same three reward settings using the DQN algorithm. For the InferNet setting, we trained the DQN agent \cite{mnih2015human} and InferNet simultaneously, providing the DQN agent with the rewards produced by InferNet.
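The delayed-reward simulation used throughout these experiments (hiding the immediate rewards and emitting their sum at episode end) can be sketched as a gym-style wrapper; `ToyEnv` and the classic four-tuple `step` API are illustrative assumptions, not the paper's actual code.

```python
class DelayedRewardWrapper:
    """Hides the per-step rewards of an environment and returns their sum
    as one delayed reward on the final step (0 on all earlier steps).
    Duck-typed stand-in for a gym.Wrapper; env.step is assumed to return
    (obs, reward, done, info) as in the classic gym API."""

    def __init__(self, env):
        self.env = env
        self._accumulated = 0.0

    def reset(self):
        self._accumulated = 0.0
        return self.env.reset()

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        self._accumulated += reward
        delayed = self._accumulated if done else 0.0
        return obs, delayed, done, info

# Toy env: +1 per step, episode ends after 3 steps
class ToyEnv:
    def reset(self):
        self.t = 0
        return 0
    def step(self, action):
        self.t += 1
        return self.t, 1.0, self.t >= 3, {}

env = DelayedRewardWrapper(ToyEnv())
env.reset()
rewards = [env.step(0)[1] for _ in range(3)]
assert rewards == [0.0, 0.0, 3.0]  # all reward arrives at episode end
```

This preserves the episode return exactly while removing all intermediate credit information, which is the setting InferNet is designed to recover from.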
The results of training this DQN agent on the immediate, delayed and InferNet rewards are shown in Figure \ref{fig:cartpole}, where we show the mean and standard deviation of training each agent with 20 different random seeds as a function of the number of training steps. These results show that InferNet can work in combination with a Deep RL agent to mitigate the credit assignment problem and boost the performance of the agent when only delayed rewards are available, and can perform as well as the agent that uses the immediate rewards. Meanwhile, the agent that learns from the delayed rewards cannot completely solve this task, although it is able to get a reasonable score. \subsection{Impact of Different Noise Levels} In this section of the appendix, we explore the impact that different levels of noise have on the different reward settings. For that, we chose three Atari games where the Immediate reward setting was affected to different degrees: Centipede, Freeway and Seaquest. The standard deviation of the Gaussian distribution used to generate the white noise (the mean is still zero) was modified to adjust the overall level of noise. We tried the following standard deviations: 0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6. Figures \ref{fig:centipede}, \ref{fig:seaquest}, and \ref{fig:freeway} show the mean performance of the agent over two runs with different random seeds. They show how the different noise levels affect the performance of each setting for these three games. InferNet is arguably the method that best balances robustness to noise with overall performance. \begin{figure} \caption{Training process for InferNet.
Minimizing the objective loss (red line) results in also minimizing the true loss (blue line), which is what we want to ultimately achieve.} \label{fig:losses} \end{figure} \begin{figure} \caption{Comparison of immediate, delayed and InferNet policies in the CartPole task.} \label{fig:cartpole} \end{figure} \begin{figure*} \caption{Training on Centipede with different noise levels. Left: Immediate. Center: InferNet. Right: Delayed.} \label{fig:centipede} \end{figure*} \begin{figure*} \caption{Training on Seaquest with different noise levels. Left: Immediate. Center: InferNet. Right: Delayed.} \label{fig:seaquest} \end{figure*} \begin{figure*} \caption{Training on Freeway with different noise levels. Left: Immediate. Center: InferNet. Right: Delayed.} \label{fig:freeway} \end{figure*} \subsection{Atari Learning Environment} The NN architecture for the Deep RL agent was the same as in \cite{wang2015dueling}. InferNet also uses the same architecture as the agent. The only differences were: 1) the output layer consists of a single value for InferNet; 2) InferNet uses dropout during training for regularization; and 3) we wrapped all the InferNet layers in the TimeDistributed layer, in order to be able to pass all the steps in an episode as input at once, and infer the immediate reward for each of those steps (see Algorithm \ref{alg:main_alg}). It should be noted that the hyper-parameters (including the replay buffer size) are different from those in the Dueling DQN paper. We allowed for these changes because our goal is not to outperform the current state-of-the-art method or to improve upon some prior Deep Reinforcement Learning algorithm. Our task is to create a method that infers better rewards and can be used by other RL algorithms in order to learn more effectively. Modifying some of these hyper-parameters allowed for less expensive training.
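The TimeDistributed wrapping described above (one shared per-step network applied to every step of an episode, with a single scalar output per step) can be illustrated with a minimal numpy stand-in; the shared linear head and feature sizes are illustrative assumptions, not the actual convolutional architecture.

```python
import numpy as np

def time_distributed(f, episode):
    """Apply the same per-step network f to every step of an episode,
    mirroring Keras' TimeDistributed wrapper: one shared set of weights,
    one scalar reward prediction per step."""
    return np.array([f(x) for x in episode])

rng = np.random.default_rng(0)
W = rng.normal(size=4)               # shared weights of the per-step head
f = lambda x: x @ W                  # per-step scalar reward prediction

episode = rng.normal(size=(5, 4))    # 5 steps, 4 features each
per_step = time_distributed(f, episode)
assert per_step.shape == (5,)        # one inferred reward per step
# The sum of the per-step outputs is what is compared to the delayed reward
predicted_return = per_step.sum()
```

The key point is that the weights are shared across time, so training against whole-episode delayed returns still constrains the reward predicted at every individual step.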
Table \ref{final-table} shows the complete results of the Prioritized Dueling DQN agent, with the different reward settings, on all the Atari games. However, note that, as we did not intend to outperform any other algorithm or state-of-the-art method, our evaluation differed from the Dueling DQN paper in several ways: 1) the size of the experience replay buffer was 10,000 instead of 1,000,000; 2) the evaluation was performed by making the loss of a life indicate the end of the episode; 3) we used the default hyper-parameters in the stable baselines library (which can also be seen in our source code). We also show the approximate episode length to give an idea of the situations in which InferNet may perform best. Finally, we moved some of the games to the lower section of the table for three possible reasons: 1) none of the agents learned anything useful, i.e., they are no better than random (shown in the lower section of the table, with numbers in the columns); 2) the computational resources needed to train InferNet on these games were too large (shown in the lower section of the table, with the columns left blank); 3) other runtime errors occurred (shown in the lower section of the table, with the columns left blank). \begin{table*}[b!] \caption{Performance of the different settings on the Atari games, starting with 30 no-op actions. The evaluation differed from the usual protocol in that: 1) the size of the experience replay buffer was 10,000 instead of 1,000,000; 2) the evaluation was performed by making the loss of a life indicate the end of the episode; 3) we used the default hyper-parameters in the stable baselines library.} \label{final-table} \begin{center} \begin{small} \begin{sc} \begin{tabular}{|l|rrr|rrr|r|} \toprule & & Non-Noisy & & & Noisy & & Approx. Ep.
Length \\ \midrule Game & Immediate & InferNet & Delayed & Immediate & InferNet & Delayed & \\ \midrule Amidar & \textbf{24} & 14 & 16 & 5.7 & \textbf{15.8} & 5.7 & 200 \\ Seaquest & \textbf{93} & 83 & 40 & 51 & \textbf{80} & 42 & 200 \\ Space Invaders & \textbf{202} & 175 & 58 & 83 & \textbf{129} & 66.5 & 220 \\ Star Gunner & \textbf{240} & 196 & 110 & 82 & \textbf{90} & 78 & 220 \\ Wizard Of Wor & \textbf{75} & 68 & 35 & 30 & \textbf{45} & 25 & 170 \\ Asterix & \textbf{970} & 570 & 200 & 75 & \textbf{202} & 75 & 200 \\ Battle Zone & \textbf{1000} & 955 & 885 & 982 & \textbf{1043} & 730 & 300 \\ Breakout & \textbf{42} & 8.3 & 0.8 & 2.6 & \textbf{16} & 0.8 & 120 \\ Crazy Climber & \textbf{10,070} & 3850 & 700 & 3560 & \textbf{4200} & 600 & 800 \\ Freeway & \textbf{32} & 21.7 & 3.4 & 4 & \textbf{21.7} & 20.7 & 2000 \\ Kangaroo & \textbf{715} & 380 & 210 & 48 & \textbf{123} & 48 & 180 \\ Kung-Fu Master & \textbf{2765} & 1350 & 45 & 30 & \textbf{1530} & 30 & 300 \\ Name This Game & \textbf{1400} & 720 & 190 & 425 & \textbf{781} & 396 & 600 \\ Phoenix & \textbf{570} & 255 & 35 & 12 & \textbf{330} & 25.5 & 200 \\ Q*Bert & \textbf{1150} & 220 & 105 & 120 & \textbf{125} & 83 & 140 \\ Road Runner & \textbf{10000} & 330 & 500 & 236 & \textbf{354} & 187 & 180 \\ Atlantis & \textbf{4680} & 980 & 760 & \textbf{1650} & \textbf{1650} & 410 & 120 \\ Centipede & \textbf{1500} & 1300 & 370 & \textbf{900} & \textbf{900} & 227 & 250 \\ Bank Heist & \textbf{185} & 2 & 2.5 & \textbf{16.6} & 1 & 3.8 & 300 \\ Beam Rider & \textbf{360} & 210 & 143 & \textbf{165} & 140 & 65 & 600 \\ Boxing & \textbf{10} & -26 & -26 & \textbf{-8.7} & -26 & -35 & 1800 \\ Gopher & \textbf{640} & 145 & 41 & \textbf{298} & 42 & 42 & 350 \\ Gravitar & \textbf{30} & 20 & 21.2 & \textbf{50} & 19 & 2.2 & 125 \\ Krull & \textbf{1725} & 725 & 330 & \textbf{1745} & 398 & 140 & 500 \\ Ms Pac-Man & \textbf{590} & 190 & 190 & \textbf{348} & 235 & 175 & 210 \\ Pitfall! 
& \textbf{-3.8} & -16 & -37.2 & \textbf{-9.8} & -15.4 & -12.3 & 700 \\ Pong & \textbf{19.5} & -20.8 & -17 & \textbf{-18.5} & -20.9 & -20.7 & 1000 \\ River Raid & \textbf{1160} & 490 & 255 & \textbf{513} & 381 & 150 & 150 \\ Robotank & \textbf{1.85} & 1.8 & 0.8 & \textbf{2.43} & 0.8 & 1.2 & 600 \\ Video Pinball & \textbf{11600} & 4440 & 1000 & \textbf{3000} & 2500 & 330 & 800 \\ Alien & \textbf{475} & 121 & 140 & \textbf{150} & 110 & \textbf{150} & 200 \\ HERO & \textbf{3525} & 3520 & 1765 & 1160 & 1090 & \textbf{1960} & 400 \\ Assault & \textbf{400} & 140 & 53 & 83.4 & 73 & \textbf{97} & 200 \\ Demon Attack & \textbf{580} & 370 & 110 & 53 & 54 & \textbf{110} & 1000 \\ Private Eye & \textbf{1200} & 600 & 700 & 155 & 10 & \textbf{1500} & 2700 \\ Time Pilot & 290 & \textbf{460} & 330 & 98 & \textbf{393} & 221 & 300 \\ Up and Down & 515 & \textbf{700} & 258 & 350 & \textbf{592} & 160 & 150 \\ James Bond & 32.3 & \textbf{39.5} & 16 & 9.5 & \textbf{15.8} & 1.4 & 120 \\ Berzerk & 150 & \textbf{900} & 125 & 73 & \textbf{128} & 100 & 150 \\ Bowling & 30 & \textbf{30.8} & 27 & 20 & \textbf{30} & 21 & 2350 \\ Yars' Revenge & 1337 & \textbf{2005} & 900 & 1300 & \textbf{1950} & 900 & 160 \\ Solaris & 292 & \textbf{370} & 116 & \textbf{250} & 210 & 45 & 2500 \\ Frostbite & 100 & \textbf{105} & 50 & \textbf{50} & 48.7 & 25 & 150 \\ \midrule Zaxxon & 0 & 0 & 0 & 0 & 0 & 0 & 175 \\ Venture & 0 & 0 & 0 & 0 & 0 & 0 & 700 \\ Ice Hockey & -14 & -14 & -15 & -11 & -12.7 & -15 & 3400 \\ Double Dunk & -22.7 & -22.1 & -22.5 & -20.5 & -21.8 & -22.5 & 6000 \\ Tennis & -23.3 & -23.8 & -23.9 & -22.8 & -23.8 & -23.9 & 6000 \\ Fishing Derby & -89 & -91 & -90.8 & -91 & -94.1 & -91 & 1850 \\ Skiing & -31,000 & -31000 & -31,000 & -31000 & -31000 & -31000 & 4400 \\ Chopper Command & & & & & & & 16000 \\ Tutankham & & & & & & & 4500 \\ Enduro & & & & & & & 3320 \\ \bottomrule \end{tabular} \end{sc} \end{small} \end{center} \end{table*} \section{Hyper-parameters} Table \ref{hyperparameters} shows the 
hyper-parameters for the four experiments in this work. The dashes indicate that the hyper-parameter was not used, either because the training was performed offline or because of a design decision. \begin{table*}[b] \caption{Hyper-parameters used for the different experiments.} \label{hyperparameters} \begin{center} \begin{small} \begin{tabular}{|l|l|l|l|l|l|} \toprule Parameter Name & GW Online & GW Offline & Healthcare & CartPole & Atari \\ \midrule InferNet Hidden Layers & 3 Dense & 3 Dense & 3 Dense & 3 Dense & 3 Conv + 1 Dense \\ InferNet Num. Units & 3x256 & 3x256 & 3x256 & 3x64 & Conv: 32, 64, 64. Dense: 512 \\ InferNet Activation & Leaky ReLU & Leaky ReLU & Leaky ReLu & ReLU & ReLU \\ InferNet Dropout Rate & --- & --- & 0.2 & 0.2 & 0.2 \\ InferNet Optimizer & Adam & Adam & Adam & Adam & Adam \\ InferNet Learning Rate & 1e-4 & 1e-3 & 1e-4 & 1e-4 & 3e-3 \\ InferNet Batch Size & 32 ep. & 32 ep. & 20 ep. & 10 ep. & 1 ep. \\ InferNet Training Steps & 500,000 & 50,000 & 1,000,000 & 60,000 & Varying per game \\ InferNet Buffer Size & 500 & --- & --- & 500 ep. & 500 ep. \\ Agent Training Steps & 2,000 ep. & 5,000 & 1,000,000 & 150,000 & Varying per game \\ Agent Discount $\gamma$ & 0.9 & 0.90 & 0.99 & 0.99 & 0.99 \\ Agent Batch Size & --- & 32 & 32 & 32 & 32 \\ Agent Buffer Size & --- & --- & --- & 500,000 & 10,000 \\ Agent Hidden Layers & --- & --- & 2 Dense & 2 Dense & 3 Conv + 1 Dense \\ Agent Num. Units & --- & --- & 2x256 & 2x32 & Conv: 32, 64, 64. 
Dense: 512 \\ Agent Activation & --- & --- & ReLU & ReLU & ReLU \\ Agent Learning Rate & --- & --- & 1e-4 & 2.5e-4 & 1e-4 \\ TD($\lambda$): $\lambda$ & 0.91 & --- & --- & --- & --- \\ TD($\lambda$): $\alpha$ & 0.1 & --- & --- & --- & --- \\ TD($\lambda$): traces & Dutch & --- & --- & --- & --- \\ \bottomrule \end{tabular} \end{small} \end{center} \end{table*} \section{Ethical Considerations} In our human-machine mixed-initiative RL framework, human experts are the final decision makers, and the RL agent assists them in making better and timely decisions. In healthcare, despite the vast amount of data used to induce RL policies, the types of input are limited to EHRs and do not cover all the considerations that human physicians weigh, in terms of knowledge, resources, finance, patients' preferences, and unrecorded context. Even with such limitations, the RL agent could still help domain experts as an assistant, by learning and generalizing an effective treatment policy from medical records related to a similar disease or symptoms, and by suggesting, in a timely manner, the best possible treatment options together with their expected consequences. Such an application would particularly benefit medical students, busy experts in emergency rooms requiring urgent decisions, and the many hospitals suffering from a lack of expertise on a specific disease such as sepsis. \end{document}
\begin{document} \date{} \title{\bf A note on property $(gb)$ and perturbations} \large \begin{quote} {\bf Abstract:} ~An operator $T \in \mathcal{B}(X)$ defined on a Banach space $X$ satisfies property $(gb)$ if the complement in the approximate point spectrum $\sigma_{a}(T)$ of the upper semi-B-Weyl spectrum $\sigma_{SBF_{+}^{-}}(T)$ coincides with the set $\Pi(T)$ of all poles of the resolvent of $T$. In this note we continue the study of property $(gb)$ and of its stability, for a bounded linear operator $T$ acting on a Banach space, under perturbations by nilpotent operators, by finite rank operators, and by quasi-nilpotent operators commuting with $T$. Two counterexamples show that property $(gb)$ is in general not preserved under commuting quasi-nilpotent perturbations or commuting finite rank perturbations. \\ {\bf 2010 Mathematics Subject Classification:} primary 47A10, 47A11; secondary 47A53, 47A55 \\ {\bf Key words:} Generalized a-Browder's theorem; property $(gb)$; eventual topological uniform descent; commuting perturbation. \end{quote} \section{Introduction} \quad\,~Throughout this note, let $\mathcal{B}(X)$ denote the Banach algebra of all bounded linear operators acting on an infinite dimensional complex Banach space $X$, and let $\mathcal{F}(X)$ denote its ideal of finite rank operators on $X$. For an operator $T \in \mathcal{B}(X)$, let $T^{*}$ denote its dual, $\mathcal {N}(T)$ its kernel, $\alpha(T)$ its nullity, $\mathcal {R}(T)$ its range, $\beta(T)$ its defect, $\sigma(T)$ its spectrum and $\sigma_{a}(T)$ its approximate point spectrum. If the range $\mathcal {R}(T)$ is closed and $\alpha(T) < \infty$ (resp. $\beta(T) < \infty$), then $T$ is said to be $upper$ $semi$-$Fredholm$ (resp. $lower$ $semi$-$Fredholm$). If $T \in \mathcal{B}(X)$ is both upper and lower semi-Fredholm, then $T$ is said to be $Fredholm$.
If $T \in \mathcal{B}(X)$ is either upper or lower semi-Fredholm, then $T$ is said to be $semi$-$Fredholm$, and its index is defined by \begin{upshape}ind\end{upshape}$(T)$ = $\alpha(T)-\beta(T)$. The $upper$ $semi$-$Weyl$ $operators$ are defined as the class of upper semi-Fredholm operators with index less than or equal to zero, while $Weyl$ $operators$ are defined as the class of Fredholm operators of index zero. These classes of operators generate the following spectra: the $Weyl$ $spectrum$ defined by $$\sigma_{W}(T):= \{ \lambda \in \mathbb{C}: T - \lambda I \makebox{ is not a Weyl operator} \},$$ the $upper$ $semi$-$Weyl$ $spectrum$ (in the literature also called the $Weyl$ $essential$ $approximate$ $point$ $spectrum$) defined by $$\sigma_{SF_{+}^{-}}(T):= \{ \lambda \in \mathbb{C}: T - \lambda I \makebox{ is not an upper semi-Weyl operator}\}.$$ Recall that the $descent$ and the $ascent$ of $T \in \mathcal{B}(X)$ are $dsc(T)= \inf \{n \in \mathbf{\mathbb{N}}:\mathcal {R}(T^{n})= \mathcal {R}(T^{n+1})\}$ and $asc(T)=\inf \{n \in \mathbf{\mathbb{N}}:\mathcal {N}(T^{n})= \mathcal {N}(T^{n+1})\}$, respectively (the infimum of an empty set is defined to be $\infty$). If $asc(T) < \infty$ and $\mathcal {R}(T^{asc(T)+1})$ is closed, then $T$ is said to be $left$ $Drazin$ $invertible$. If $dsc(T) < \infty $ and $\mathcal {R}(T^{dsc(T)})$ is closed, then $T$ is said to be $right$ $Drazin$ $invertible$. If $asc(T) = dsc(T) < \infty $, then $T$ is said to be $Drazin$ $invertible$. Clearly, $T \in \mathcal{B}(X)$ is both left and right Drazin invertible if and only if $T$ is Drazin invertible. An operator $T \in \mathcal{B}(X)$ is called $upper$ $semi$-$Browder$ if it is an upper semi-Fredholm operator with finite ascent, while $T$ is called $Browder$ if it is a Fredholm operator of finite ascent and descent.
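As a quick illustration of ascent, descent and Drazin invertibility (a standard observation, sketched here for orientation rather than taken from the references above), consider a nilpotent operator:

```latex
% Standard illustration: a nilpotent operator of index k is Drazin invertible.
Let $N \in \mathcal{B}(X)$ be nilpotent of index $k$, that is, $N^{k}=0$ and
$N^{k-1} \neq 0$. Then
$$\mathcal{N}(N^{k-1}) \subsetneq \mathcal{N}(N^{k}) = X = \mathcal{N}(N^{k+1})
  \quad \makebox{and} \quad
  \mathcal{R}(N^{k-1}) \supsetneq \mathcal{R}(N^{k}) = \{0\} = \mathcal{R}(N^{k+1}),$$
so $asc(N) = dsc(N) = k < \infty$ and $N$ is Drazin invertible.
```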
The $Browder$ $spectrum$ of $T \in \mathcal{B}(X)$ is defined by $$\sigma_{B}(T):= \{ \lambda \in \mathbb{C}: T - \lambda I \makebox{ is not a Browder operator} \},$$ the $upper$ $semi$-$Browder$ $spectrum$ (in the literature also called the $Browder$ $essential$ $approximate$ $point$ $spectrum$) is defined by $$\sigma_{UB}(T):= \{ \lambda \in \mathbb{C}: T - \lambda I \makebox{ is not an upper semi-Browder operator}\}.$$ An operator $T \in \mathcal{B}(X)$ is called $Riesz$ if its essential spectrum $\sigma_e(T):= \{ \lambda \in \mathbb{C}: T - \lambda I \makebox{ is not Fredholm} \}=\{0\}$. Suppose that $T \in \mathcal{B}(X)$ and that $R \in \mathcal{B}(X)$ is a Riesz operator commuting with $T$. Then it follows from [\cite{Tylli}, Proposition 5] and [\cite{Rakocevic}, Theorem 1] that \begin{equation} {\label{eq 1.1}} \qquad\qquad\qquad\qquad\qquad\quad \ \ \sigma_{SF_{+}^{-}}(T+R) = \sigma_{SF_{+}^{-}}(T); \end{equation} \begin{equation} {\label{eq 1.2}} \qquad\qquad\qquad\qquad\qquad\qquad \ \ \sigma_{W}(T+R) = \sigma_{W}(T); \end{equation} \begin{equation} {\label{eq 1.3}} \qquad\qquad\qquad\qquad\qquad\quad \ \ \sigma_{UB}(T+R) = \sigma_{UB}(T); \end{equation} \begin{equation} {\label{eq 1.4}} \qquad\qquad\qquad\qquad\qquad\qquad \ \ \sigma_{B}(T+R) = \sigma_{B}(T). \end{equation} For each integer $n$, define $T_{n}$ to be the restriction of $T$ to $\mathcal{R}(T^{n})$ viewed as a map from $\mathcal{R}(T^{n})$ into $\mathcal{R}(T^{n})$ (in particular $T_{0} = T $). If there exists $n \in \mathbb{N}$ such that $\mathcal {R}(T^{n})$ is closed and $T_{n}$ is upper semi-Fredholm, then $T$ is called $upper$ $semi$-$B$-$Fredholm$.
It follows from [\cite{Berkani semi-B-fredholm}, Proposition 2.1] that if there exists $n \in \mathbb{N}$ such that $\mathcal {R}(T^{n})$ is closed and $T_{n}$ is upper semi-Fredholm, then $\mathcal {R}(T^{m})$ is closed, $T_{m}$ is upper semi-Fredholm and \begin{upshape}ind\end{upshape}$(T_{m})$ = \begin{upshape}ind\end{upshape}$(T_{n})$ for all $m \geq n$. This enables us to define the index of an upper semi-B-Fredholm operator $T$ as the index of the upper semi-Fredholm operator $T_{n}$, where $n$ is any integer such that $\mathcal {R}(T^{n})$ is closed and $T_{n}$ is upper semi-Fredholm. An operator $T \in \mathcal {B}(X)$ is called $upper$ $semi$-$B$-$Weyl$ if $T$ is upper semi-B-Fredholm and \begin{upshape}ind\end{upshape}$(T) \leq 0$. For $T \in \mathcal{B}(X)$, let us define the $left$ $Drazin$ $spectrum$, the $Drazin$ $spectrum$ and the $upper$ $semi$-$B$-$Weyl$ $spectrum$ of $T$ as follows, respectively: $$ \sigma_{LD}(T) := \{ \lambda \in \mathbb{C}: T - \lambda I \makebox{ is not a left Drazin invertible operator} \};$$ $$ \sigma_{D}(T):= \{ \lambda \in \mathbb{C}: T - \lambda I \makebox{ is not a Drazin invertible operator} \};$$ $$ \sigma_{SBF_{+}^{-}}(T) := \{ \lambda \in \mathbb{C}: T - \lambda I \makebox{ is not an upper semi-B-Weyl operator} \}.$$ Let $\Pi(T)$ denote the set of all poles of $T$. We say that $\lambda \in \sigma_{a}(T)$ is a left pole of $T$ if $T-\lambda I$ is left Drazin invertible. Let $\Pi_{a}(T)$ denote the set of all left poles of $T$. It is well known that $\Pi(T)=\sigma(T) \backslash \sigma_{D}(T) = \makebox{iso}\sigma(T) \backslash \sigma_{D}(T)$ and $\Pi_{a}(T)=\sigma_{a}(T) \backslash \sigma_{LD}(T)=\makebox{iso}\sigma_{a}(T) \backslash \sigma_{LD}(T) $. Here and henceforth, for $A \subseteq \mathbb{C}$, $\makebox{iso}A$ is the set of isolated points of $A$.
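For orientation, the spectra introduced so far are nested. The following chains are standard consequences of the definitions (an upper semi-Weyl operator is upper semi-B-Weyl with $n=0$, and finite ascent forces a nonpositive index); they are sketched here as a summary rather than quoted from a specific source:

```latex
% Standard inclusions between the spectra defined above (summary sketch).
$$\sigma_{SBF_{+}^{-}}(T) \subseteq \sigma_{SF_{+}^{-}}(T) \subseteq
  \sigma_{UB}(T) \subseteq \sigma_{B}(T)
  \quad \makebox{and} \quad
  \sigma_{SBF_{+}^{-}}(T) \subseteq \sigma_{LD}(T) \subseteq \sigma_{D}(T).$$
```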
An operator $T \in \mathcal{B}(X)$ is called $a$-$polaroid$ if $\makebox{iso}\sigma_{a}(T)= \varnothing $ or every isolated point of $\sigma_{a}(T)$ is a left pole of $T$. Following Harte and Lee [\cite{Harte-Lee}] we say that $T \in \mathcal{B}(X)$ satisfies Browder's theorem if $\sigma_{W}(T)=\sigma_{B}(T)$. While, according to Djordjevi\'c and Han [\cite{Djordjevic-Han}], we say that $T$ satisfies a-Browder's theorem if $\sigma_{SF_{+}^{-}}(T)=\sigma_{UB}(T)$. The following two variants of Browder's theorem have been introduced by Berkani and Zariouh [\cite{Berkani-Zariouh Extended}] and Berkani and Koliha [\cite{Berkani-Koliha}], respectively. \begin {definition}${\label{1.1}}$ \begin{upshape} An operator $T \in \mathcal{B}(X)$ is said to possess property $(gb)$ if $$\sigma_{a}(T) \backslash \sigma_{SBF_{+}^{-}}(T) = \Pi(T).$$ While $T \in \mathcal{B}(X)$ is said to satisfy generalized a-Browder's theorem if $$\sigma_{a}(T) \backslash \sigma_{SBF_{+}^{-}}(T) = \Pi_{a}(T).$$ \end{upshape} \end{definition} From formulas (\ref{eq 1.1})--(\ref{eq 1.4}), it follows immediately that Browder's theorem and a-Browder's theorem are preserved under commuting Riesz perturbations. It is proved in [\cite{AmouchM ZguittiH equivalence}, Theorem 2.2] that generalized a-Browder's theorem is equivalent to a-Browder's theorem. Hence, generalized a-Browder's theorem is stable under commuting Riesz perturbations. That is, if $T \in \mathcal{B}(X)$ satisfies generalized a-Browder's theorem and $R$ is a Riesz operator commuting with $T$, then $T+R$ satisfies generalized a-Browder's theorem. The single-valued extension property was introduced by Dunford in [\cite{Dunford 1},\cite{Dunford 2}] and has an important role in local spectral theory and Fredholm theory, see the recent monographs [\cite{Aiena}] by Aiena and [\cite{Laursen-Neumann}] by Laursen and Neumann. 
\begin {definition}${\label{1.2}}$ \begin{upshape} An operator $T \in \mathcal{B}(X)$ is said to have the single-valued extension property at $\lambda_{0} \in \mathbb{C}$ (SVEP at $\lambda_{0}$ for brevity), if for every open neighborhood $U$ of $\lambda_{0}$ the only analytic function $f:U \rightarrow X$ which satisfies the equation $(\lambda I-T)f(\lambda) = 0$ for all $ \lambda \in U$ is the function $f (\lambda ) \equiv 0 $. Let $S(T):= \{\lambda \in \mathbb{C}: T \makebox{ does not have the SVEP at } \lambda \}$. An operator $T \in \mathcal{B}(X)$ is said to have SVEP if $S(T)=\varnothing$. \end{upshape} \end{definition} In this note we continue the study of property $(gb)$, which has been studied in some recent papers [\cite{Berkani-Zariouh Extended},\cite{Berkani-Zariouh New Extended},\cite{Rashid}]. We show that property $(gb)$ is satisfied by an operator $T$ satisfying $S(T^{*}) \subseteq \sigma_{SBF_{+}^{-}}(T)$. We give a revised proof of [\cite{Rashid}, Theorem 3.10] to show that property $(gb)$ is preserved under commuting nilpotent perturbations. We also show that if $T \in \mathcal{B}(X)$ satisfies $S(T^*) \subseteq \sigma_{SBF_{+}^{-}}(T)$ and $F$ is a finite rank operator commuting with $T$, then $T+F$ satisfies property $(gb)$. We show that if $T \in \mathcal{B}(X)$ is an a-polaroid operator satisfying property $(gb)$ and $Q$ is a quasi-nilpotent operator commuting with $T$, then $T+Q$ satisfies property $(gb)$. Two counterexamples are also given to show that property $(gb)$ in general is not preserved under commuting quasi-nilpotent perturbations or commuting finite rank perturbations. These results improve and revise some recent results of Rashid in [\cite{Rashid}]. \section{Main results} \quad\,~We begin with the following lemmas.
\begin {lemma}${\label{2.1}}$ \begin{upshape} ([\cite{Berkani-Zariouh Extended}, Corollary 2.9]) \end{upshape} An operator $T \in \mathcal{B}(X)$ possesses property $(gb)$ if and only if $T$ satisfies generalized a-Browder's theorem and $\Pi(T)=\Pi_{a}(T)$. \end{lemma} \begin {lemma}${\label{2.2}}$ If the equality $\sigma_{SBF_{+}^{-}}(T)=\sigma_{D}(T)$ holds for $T \in \mathcal{B}(X)$, then $T$ possesses property $(gb)$. \end{lemma} \begin{proof} Suppose that $\sigma_{SBF_{+}^{-}}(T)=\sigma_{D}(T)$. If $\lambda \in \sigma_{a}(T) \backslash \sigma_{SBF_{+}^{-}}(T)$, then $\lambda \in \sigma_{a}(T) \backslash \sigma_{D}(T) \subseteq \Pi(T)$. This implies that $\sigma_{a}(T) \backslash \sigma_{SBF_{+}^{-}}(T) \subseteq \Pi(T)$. Since $\Pi(T) \subseteq \sigma_{a}(T) \backslash \sigma_{SBF_{+}^{-}}(T)$ is always true, $\sigma_{a}(T) \backslash \sigma_{SBF_{+}^{-}}(T) = \Pi(T)$, i.e. $T$ possesses property $(gb)$. \end{proof} \begin {lemma}${\label{2.3}}$ If $T \in \mathcal{B}(X)$, then $\sigma_{SBF_{+}^{-}}(T) \cup S(T^*) = \sigma_{D}(T)$. \end{lemma} \begin{proof} Let $\lambda \notin \sigma_{SBF_{+}^{-}}(T) \cup S(T^*)$. Then $T - \lambda$ is an upper semi-B-Weyl operator and $T^*$ has SVEP at $\lambda$. Thus $T - \lambda$ is an upper semi-B-Fredholm operator and $\makebox{ind}(T - \lambda) \leq 0$. Hence there exists $n \in \mathbb{N}$ such that $\mathcal {R}((T-\lambda)^{n})$ is closed, $(T - \lambda)_{n}$ is an upper semi-Fredholm operator and $\makebox{ind}(T - \lambda)_{n} \leq 0$. By [\cite{Aiena2007quasifredholm}, Theorem 2.11], $dsc(T - \lambda) < \infty$. Thus $dsc(T - \lambda)_{n} < \infty$, and by [\cite{Aiena}, Theorem 3.4(ii)], $\makebox{ind}(T - \lambda)_{n} \geq 0$. By [\cite{Aiena}, Theorem 3.4(iv)], $asc(T - \lambda)_{n} = dsc(T - \lambda)_{n} < \infty$. Consequently, $(T - \lambda)_{n}$ is a Browder operator. Thus by [\cite{Aiena-Biondi-Carpintero}, Theorem 2.9] we conclude that $T - \lambda$ is Drazin invertible, i.e. $\lambda \notin \sigma_{D}(T)$.
Hence $\sigma_{D}(T) \subseteq \sigma_{SBF_{+}^{-}}(T) \cup S(T^*)$. Since the reverse inclusion obviously holds, we get $\sigma_{SBF_{+}^{-}}(T) \cup S(T^*) = \sigma_{D}(T)$. \end{proof} \begin {theorem}${\label{2.4}}$ If $T \in \mathcal{B}(X)$ satisfies $S(T^*) \subseteq \sigma_{SBF_{+}^{-}}(T)$, then $T$ possesses property $(gb)$. In particular, if $T^*$ has SVEP, then $T$ possesses property $(gb)$. \end{theorem} \begin{proof} Suppose that $S(T^*) \subseteq \sigma_{SBF_{+}^{-}}(T)$. Then by Lemma \ref{2.3}, we get $\sigma_{SBF_{+}^{-}}(T) = \sigma_{D}(T)$. Consequently, by Lemma \ref{2.2}, $T$ possesses property $(gb)$. If $T^*$ has SVEP, then $S(T^*)=\varnothing$ and the conclusion follows immediately. \end{proof} The following example shows that the converse of Theorem \ref{2.4} is not true. \begin {example}${\label{2.5}}$ \begin{upshape} Let $X$ be the Hilbert space $l_{2}(\mathbb{N})$ and let $T: l_{2}(\mathbb{N}) \longrightarrow l_{2}(\mathbb{N})$ be the unilateral right shift operator defined by $$T(x_{1},x_{2},\cdots )=(0,x_{1},x_{2}, \cdots ) \makebox{\ \ \ for all \ } (x_{n}) \in l_{2}(\mathbb{N}).$$ Then, $$\sigma_{a}(T) = \{\lambda \in \mathbb{C}: |\lambda| = 1 \},$$ $$\sigma_{SBF_{+}^{-}}(T) = \{\lambda \in \mathbb{C}: |\lambda| = 1 \}$$ and $$\Pi(T) = \varnothing.$$ Hence $\sigma_{a}(T) \backslash \sigma_{SBF_{+}^{-}}(T) = \Pi(T)$, i.e. $T$ possesses property $(gb)$. But $S(T^*)= \{\lambda \in \mathbb{C}: 0 \leq |\lambda| < 1 \} \nsubseteq \{\lambda \in \mathbb{C}: |\lambda| = 1 \} = \sigma_{SBF_{+}^{-}}(T).$ \end{upshape} \end{example} The next theorem was established in [\cite{Rashid}, Theorem 3.10], but its proof was not entirely clear, so we give a revised proof of it. \begin {theorem}${\label{2.6}}$ If $T \in \mathcal{B}(X)$ satisfies property $(gb)$ and $N$ is a nilpotent operator that commutes with $T$, then $T+N$ satisfies property $(gb)$.
\end{theorem} \begin{proof} Suppose that $T \in \mathcal{B}(X)$ satisfies property $(gb)$ and $N$ is a nilpotent operator that commutes with $T$. By Lemma \ref{2.1}, $T$ satisfies generalized a-Browder's theorem and $\Pi(T)=\Pi_{a}(T)$. Hence $T+N$ satisfies generalized a-Browder's theorem. By [\cite{Mbekhta-Muller 9}], $\sigma(T+N) = \sigma(T)$ and $\sigma_{a}(T+N) = \sigma_{a}(T)$. Hence, by [\cite{Kaashoek-Lay}, Theorem 2.2] and [\cite{Bel-Burgos-Oudghiri 3}, Theorem 3.2], we have that $\Pi(T+N)=\sigma(T+N) \backslash \sigma_{D}(T+N) = \sigma(T) \backslash \sigma_{D}(T)=\Pi(T)=\Pi_{a}(T) = \sigma_{a}(T) \backslash \sigma_{LD}(T)=\sigma_{a}(T+N) \backslash \sigma_{LD}(T+N) = \Pi_{a}(T+N)$. By Lemma \ref{2.1} again, $T+N$ satisfies property $(gb)$. \end{proof} The following example, which is a revised version of [\cite{Rashid}, Example 3.11], shows that the hypothesis of commutativity in Theorem \ref{2.6} is crucial. \begin {example}${\label{2.7}}$ \begin{upshape} Let $T: l_{2}(\mathbb{N}) \longrightarrow l_{2}(\mathbb{N})$ be the unilateral right shift operator defined by $$T(x_{1},x_{2},\cdots )=(0,x_{1},x_{2}, \cdots ) \makebox{\ \ \ for all \ } (x_{n}) \in l_{2}(\mathbb{N}).$$ Let $N: l_{2}(\mathbb{N}) \longrightarrow l_{2}(\mathbb{N})$ be a nilpotent operator with rank one defined by $$N(x_{1},x_{2},\cdots )=(0,-x_{1},0, \cdots ) \makebox{\ \ \ for all \ } (x_{n}) \in l_{2}(\mathbb{N}).$$ Then $TN \neq NT$. Moreover, $$\sigma(T) = \{\lambda \in \mathbb{C}: 0 \leq |\lambda| \leq 1 \},$$ $$\sigma_{a}(T) = \{\lambda \in \mathbb{C}: |\lambda| = 1 \} ,$$ $$\sigma(T+N) = \{\lambda \in \mathbb{C}: 0 \leq |\lambda| \leq 1 \}$$ and $$\sigma_{a}(T+N) = \{\lambda \in \mathbb{C}: |\lambda| = 1 \} \cup \{ 0 \}.$$ It follows that $\Pi_{a}(T)=\Pi(T)= \varnothing$ and $\{0 \}= \Pi_{a}(T+N) \neq \Pi(T+N)= \varnothing.$ Hence by Lemma \ref{2.1}, $T+N$ does not satisfy property $(gb)$. 
But since $T$ has SVEP, $T$ satisfies a-Browder's theorem or equivalently, by [\cite{AmouchM ZguittiH equivalence}, Theorem 2.2], $T$ satisfies generalized a-Browder's theorem. Therefore by Lemma \ref{2.1} again, $T$ satisfies property $(gb)$. \end{upshape} \end{example} To continue the discussion of this note, we recall some classical definitions. Using the isomorphism $X/\mathcal {N}(T^{d}) \approx \mathcal {R}(T^{d})$ and following [\cite{Grabiner 9}], a topology on $\mathcal {R}(T^{d})$ is defined as follows. \begin{definition} ${\label{2.8}}$ \begin{upshape} Let $T \in \mathcal{B}(X)$. For every $d \in \mathbf{\mathbb{N}},$ the operator range topology on $\mathcal {R}(T^{d})$ is defined by the norm $||\small{\cdot}||_{\mathcal {R}(T^{d})}$ such that for all $y \in \mathcal {R}(T^{d})$, $$||y||_{\mathcal {R}(T^{d})} := \inf\{||x||: x \in X,y=T^{d}x\}.$$ \end{upshape} \end{definition} For a detailed discussion of operator ranges and their topologies, we refer the reader to [\cite{Fillmore-Williams 7}] and [\cite{Grabiner 8}]. \begin {definition}${\label{2.9}}$ \begin{upshape} Let $T \in \mathcal{B}(X)$ and let $d \in \mathbf{\mathbb{N}}$. Then $T$ has $uniform$ $descent$ for $n \geq d$ if $k_{n}(T)=0$ for all $n \geq d$. If in addition $\mathcal {R}(T^{n})$ is closed in the operator range topology of $\mathcal {R}(T^{d})$ for all $n \geq d$, then we say that $T$ has $eventual$ $topological$ $uniform$ $descent$, and, more precisely, that $T$ has $topological$ $uniform$ $descent$ $for$ $n \geq d$. \end{upshape} \end{definition} Operators with eventual topological uniform descent were introduced by Grabiner in [\cite{Grabiner 9}]. This class includes many of the classes of operators introduced in the Introduction of this note, such as upper semi-B-Fredholm operators, left Drazin invertible operators, Drazin invertible operators, and so on.
It also includes many other classes of operators such as operators of Kato type, quasi-Fredholm operators, operators with finite descent and operators with finite essential descent, and so on. A very detailed and far-reaching account of these notions can be found in [\cite{Aiena},\cite{Berkani},\cite{Mbekhta-Muller 9}]. Especially, operators which have topological uniform descent for $n \geq 0$ are precisely the $semi$-$regular$ operators studied by Mbekhta in [\cite{Mbekhta}]. Discussions of operators with eventual topological uniform descent may be found in [\cite{Berkani-Castro-Djordjevic},\cite{Cao},\cite{Grabiner 9},\cite{Jiang-Zhong-Zeng},\cite{Zeng-Zhong-Wu}]. \begin {lemma}${\label{2.10}}$ If $T \in \mathcal{B}(X)$ and $F$ is a finite rank operator commuting with $T$, then $(1)$ $\sigma_{SBF_{+}^{-}}(T+F) =\sigma_{SBF_{+}^{-}}(T)$; $(2)$ $\sigma_{D}(T+F) =\sigma_{D}(T)$. \end{lemma} \begin{proof} $(1)$ Without loss of generality, we need only to show that $0 \notin \sigma_{SBF_{+}^{-}}(T+F)$ if and only if $0 \notin \sigma_{SBF_{+}^{-}}(T)$. By symmetry, it suffices to prove that $0 \notin \sigma_{SBF_{+}^{-}}(T+F)$ if $0 \notin \sigma_{SBF_{+}^{-}}(T)$. Suppose that $0 \notin \sigma_{SBF_{+}^{-}}(T)$. Then $T$ is an upper semi-B-Fredholm operator and $\makebox{ind}(T) \leq 0$. Hence it follows from [\cite{Berkani}, Theorem 3.6] and [\cite{Bel-Burgos-Oudghiri 3}, Theorem 3.2] that $T+F$ is also an upper semi-B-Fredholm operator. Thus by [\cite{Grabiner 9}, Theorem 5.8], $\makebox{ind}(T+F)=\makebox{ind}(T) \leq 0$. Consequently, $T+F$ is an upper semi-B-Weyl operator, i.e. $0 \notin \sigma_{SBF_{+}^{-}}(T+F)$, and this completes the proof of $(1)$. $(2)$ Noting that an operator is Drazin invertible if and only if it is of finite ascent and finite descent, the conclusion follows from [\cite{Kaashoek-Lay}, Theorem 2.2].
\end{proof} \begin {theorem}${\label{2.11}}$ If $T \in \mathcal{B}(X)$ satisfies $S(T^*) \subseteq \sigma_{SBF_{+}^{-}}(T)$ and $F$ is a finite rank operator commuting with $T$, then $T+F$ satisfies property $(gb)$. \end{theorem} \begin{proof} Since $F$ is a finite rank operator commuting with $T$, by Lemma \ref{2.10}, $\sigma_{SBF_{+}^{-}}(T+F) =\sigma_{SBF_{+}^{-}}(T)$ and $\sigma_{D}(T+F) =\sigma_{D}(T)$. Since $S(T^*) \subseteq \sigma_{SBF_{+}^{-}}(T)$, by Lemma \ref{2.3}, $\sigma_{SBF_{+}^{-}}(T)=\sigma_{D}(T).$ Thus, $\sigma_{SBF_{+}^{-}}(T+F)=\sigma_{D}(T+F)$. By Lemma \ref{2.2}, $T+F$ satisfies property $(gb)$. \end{proof} The following example illustrates that property $(gb)$ in general is not preserved under commuting finite rank perturbations. \begin {example}${\label{2.12}}$ \begin{upshape} Let $U: l_{2}(\mathbb{N}) \longrightarrow l_{2}(\mathbb{N})$ be the unilateral right shift operator defined by $$U(x_{1},x_{2},\cdots )=(0,x_{1},x_{2}, \cdots ) \makebox{\ \ \ for all \ } (x_{n}) \in l_{2}(\mathbb{N}).$$ For fixed $0 < \varepsilon < 1$, let $F_{\varepsilon}: l_{2}(\mathbb{N}) \longrightarrow l_{2}(\mathbb{N})$ be a finite rank operator defined by $$F_{\varepsilon}(x_{1},x_{2},\cdots )=(-\varepsilon x_{1},0,0, \cdots ) \makebox{\ \ \ for all \ } (x_{n}) \in l_{2}(\mathbb{N}).$$ We consider the operators $T$ and $F$ defined by $T=U \oplus I$ and $F=0 \oplus F_{\varepsilon}$, respectively. Then $F$ is a finite rank operator and $TF=FT$. 
Moreover, $$\sigma(T) = \sigma(U) \cup \sigma(I) = \{\lambda \in \mathbb{C}: 0 \leq |\lambda| \leq 1 \},$$ $$\sigma_{a}(T) = \sigma_{a}(U) \cup \sigma_{a}(I) = \{\lambda \in \mathbb{C}: |\lambda| = 1 \} ,$$ $$\sigma(T+F) = \sigma(U) \cup \sigma(I+F_{\varepsilon}) = \{\lambda \in \mathbb{C}: 0 \leq |\lambda| \leq 1 \}$$ and $$\sigma_{a}(T+F) = \sigma_{a}(U) \cup \sigma_{a}(I+F_{\varepsilon}) = \{\lambda \in \mathbb{C}: |\lambda| = 1 \} \cup \{ 1 - \varepsilon \}.$$ It follows that $\Pi_{a}(T)=\Pi(T)= \varnothing$ and $\{1 - \varepsilon \}= \Pi_{a}(T+F) \neq \Pi(T+F)= \varnothing.$ Hence by Lemma \ref{2.1}, $T+F$ does not satisfy property $(gb)$. But since $T$ has SVEP, $T$ satisfies a-Browder's theorem or equivalently, by [\cite{AmouchM ZguittiH equivalence}, Theorem 2.2], $T$ satisfies generalized a-Browder's theorem. Therefore by Lemma \ref{2.1} again, $T$ satisfies property $(gb)$. \end{upshape} \end{example} Rashid states in [\cite{Rashid}, Theorem 3.15] that if $T \in \mathcal{B}(X)$ and $Q$ is a quasi-nilpotent operator that commutes with $T$, then $$\sigma_{SBF_{+}^{-}}(T+Q)=\sigma_{SBF_{+}^{-}}(T).$$ The next example shows that this equality does not hold in general. \begin {example}${\label{2.13}}$ \begin{upshape} Let $Q$ denote the Volterra operator on the Banach space $C[0,1]$ defined by $$(Qf)(t) = \int_{0}^{t}f(s)\, \mathrm{d}s \makebox{\ \ \ for all } f \in C[0,1] \makebox{\ and } t \in [0,1]. $$ $Q$ is injective and quasi-nilpotent. Hence it is easy to see that $\mathcal {R}(Q^{n})$ is not closed for every $n \in \mathbb{N}$. Let $T = 0 \in \mathcal{B}(C[0,1])$. It is easy to see that $TQ=0=QT$ and $0 \notin \sigma_{SBF_{+}^{-}}(0) = \sigma_{SBF_{+}^{-}}(T)$, but $0 \in \sigma_{SBF_{+}^{-}}(Q) = \sigma_{SBF_{+}^{-}}(0+Q) = \sigma_{SBF_{+}^{-}}(T+Q)$.
Hence $\sigma_{SBF_{+}^{-}}(T+Q) \neq \sigma_{SBF_{+}^{-}}(T).$ \end{upshape} \end{example} Rashid claims in [\cite{Rashid}, Theorem 3.16] that property $(gb)$ is stable under commuting quasi-nilpotent perturbations, but its proof relies on [\cite{Rashid}, Theorem 3.15] which, by Example \ref{2.13}, does not always hold. The following example shows that property $(gb)$ in general is not preserved under commuting quasi-nilpotent perturbations. \begin {example}${\label{2.14}}$ \begin{upshape} Let $U: l_{2}(\mathbb{N}) \longrightarrow l_{2}(\mathbb{N})$ be the unilateral right shift operator defined by $$U(x_{1},x_{2},\cdots )=(0,x_{1},x_{2}, \cdots ) \makebox{\ \ \ for all \ } (x_{n}) \in l_{2}(\mathbb{N}).$$ Let $V: l_{2}(\mathbb{N}) \longrightarrow l_{2}(\mathbb{N})$ be a quasi-nilpotent operator defined by $$V(x_{1},x_{2},\cdots )=(0,x_{1},0,\frac{x_{3}}{3},\frac{x_{4}}{4}, \cdots ) \makebox{\ \ \ for all \ } (x_{n}) \in l_{2}(\mathbb{N}).$$ Let $N: l_{2}(\mathbb{N}) \longrightarrow l_{2}(\mathbb{N})$ be a quasi-nilpotent operator defined by $$N(x_{1},x_{2},\cdots )=(0,0,0,-\frac{x_{3}}{3},-\frac{x_{4}}{4}, \cdots ) \makebox{\ \ \ for all \ } (x_{n}) \in l_{2}(\mathbb{N}).$$ It is easy to verify that $VN=NV$. We consider the operators $T$ and $Q$ defined by $T=U \oplus V$ and $Q=0 \oplus N$, respectively. Then $Q$ is quasi-nilpotent and $TQ=QT$. Moreover, $$\sigma(T) = \sigma(U) \cup \sigma(V) = \{\lambda \in \mathbb{C}: 0 \leq |\lambda| \leq 1 \},$$ $$\sigma_{a}(T) = \sigma_{a}(U) \cup \sigma_{a}(V) = \{\lambda \in \mathbb{C}: |\lambda| = 1 \} \cup \{0\},$$ $$\sigma(T+Q) = \sigma(U) \cup \sigma(V+N) = \{\lambda \in \mathbb{C}: 0 \leq |\lambda| \leq 1 \}$$ and $$\sigma_{a}(T+Q) = \sigma_{a}(U) \cup \sigma_{a}(V+N) = \{\lambda \in \mathbb{C}: |\lambda| = 1 \} \cup \{0\}.$$ It follows that $\Pi_{a}(T)=\Pi(T)= \varnothing$ and $\{0\}= \Pi_{a}(T+Q) \neq \Pi(T+Q)= \varnothing.$ Hence by Lemma \ref{2.1}, $T+Q$ does not satisfy property $(gb)$.
But since $T$ has SVEP, $T$ satisfies a-Browder's theorem or equivalently, by [\cite{AmouchM ZguittiH equivalence}, Theorem 2.2], $T$ satisfies generalized a-Browder's theorem. Therefore by Lemma \ref{2.1} again, $T$ satisfies property $(gb)$. \end{upshape} \end{example} \begin {theorem}${\label{2.15}}$ Suppose that $T \in \mathcal{B}(X)$ obeys property $(gb)$ and that $Q \in \mathcal{B}(X)$ is a quasi-nilpotent operator commuting with $T$. If $T$ is a-polaroid, then $T+Q$ obeys $(gb)$. \end{theorem} \begin{proof} Since $T$ satisfies property $(gb)$, by Lemma \ref{2.1}, $T$ satisfies generalized a-Browder's theorem and $\Pi(T)=\Pi_{a}(T)$. Hence $T+Q$ satisfies generalized a-Browder's theorem. In order to show that $T+Q$ satisfies property $(gb)$, by Lemma \ref{2.1} again, it suffices to show that $\Pi(T+Q)=\Pi_{a}(T+Q)$. Since $\Pi(T+Q) \subseteq \Pi_{a}(T+Q)$ is always true, one needs only to show that $\Pi_{a}(T+Q) \subseteq \Pi(T+Q)$. Let $\lambda \in \Pi_{a}(T+Q)=\sigma_{a}(T+Q) \backslash \sigma_{LD}(T+Q) = \makebox{iso}\sigma_{a}(T+Q) \backslash \sigma_{LD}(T+Q)$. Then by [\cite{Mbekhta-Muller 9}], $\lambda \in \makebox{iso}\sigma_{a}(T)$. Since $T$ is a-polaroid, $\lambda \in \Pi_{a}(T) = \Pi(T)$. Thus by [\cite{Zeng-Zhong-Wu}, Theorem 3.12], $\lambda \in \Pi(T+Q)$. Therefore $\Pi_{a}(T+Q) \subseteq \Pi(T+Q)$, and this completes the proof. \end{proof} \end{document}
\begin{document} \begin{frontmatter} \title{Atoms in Quasilocal Integral Domains} \author{D.D. Anderson} \ead{[email protected]} \author{K. Bombardier\corref{cor1}} \ead{[email protected]} \cortext[cor1]{Corresponding author} \address{Department of Mathematics, The University of Iowa, Iowa City, IA, 52242, USA} \begin{abstract} Let $(R,M)$ be a quasilocal integral domain. We investigate the set of irreducible elements (atoms) of $R$. Special attention is given to the set of atoms in $M \backslash M^2$ and to the existence of atoms in $M^2$. While our main interest is in local Cohen-Kaplansky (CK) domains (atomic integral domains with only finitely many nonassociate atoms), we endeavor to obtain results in the greatest generality possible. In contradiction to a statement of Cohen and Kaplansky, we construct a local CK domain with precisely eight nonassociate atoms having an atom in $M^2$. \end{abstract} \begin{keyword} quasilocal domain \sep irreducible element \sep atom \sep atomic domain \sep Cohen-Kaplansky domain \end{keyword} \end{frontmatter} \section{Introduction} \label{S:1} Let $R$ be a (commutative) integral domain. A nonzero nonunit $x \in R$ is \emph{irreducible}, or an \emph{atom}, if $x = ab$ implies $a$ or $b$ is a unit and $R$ is \emph{atomic} if each nonzero nonunit of $R$ is a finite product of atoms. An atomic domain with only finitely many nonassociate atoms is called a \emph{Cohen-Kaplansky} (\emph{CK}) \emph{domain}. (While a field is an atomic domain, even a CK domain, to avoid trivialities, we assume throughout that $R$ is not a field.) While the purpose of this article is to study local CK domains and their atoms, in Section 2 we begin by investigating atoms in quasilocal domains that need not even be atomic. While we focus on quasilocal domains, we should point out that the study of atoms or atomicity cannot generally be reduced to the quasilocal case.
Indeed, the ring of integer-valued polynomials is a two-dimensional Pr\"{u}fer BFD (and hence is atomic), but has a localization at a maximal ideal that is not atomic \cite[Example 2.7(b)]{AAZ}. Conversely, if $R$ is a Bezout almost Dedekind domain that is not a PID (take $R = D(X)$ where $D$ is your favorite non-Dedekind almost Dedekind domain), then $R$ is not atomic, but each localization of $R$ is a DVR and hence atomic. However, for CK domains we can effectively reduce to the local case, see Theorem \ref{Lattice}. As usual two elements $a$ and $b$ of a domain $R$ are \emph{associates}, denoted $a \sim b$, if $b = ua$ for some unit $u \in R$. The setup for Section 2 is a not necessarily atomic quasilocal domain $(R,M)$, usually with $M \neq M^2$. (We reserve the term \enquote{local} for a Noetherian quasilocal domain.) Set $\overline{R} = R/M$. We begin by remarking that if $M^{\beta} = 0$ for some ordinal $\beta$, then $R$ satisfies ACCP (Theorem \ref{BFDgen}). If $x \in M \backslash M^2$, $x$ is certainly an atom. Special attention is given to the set of atoms contained in $M \backslash M^2$ and to the existence of atoms in $M^2$. We say that $M^n$ is (\emph{weakly}) \emph{universal} if $M^n \subseteq Rx$ for each atom $x \in R$ ($x \in M \backslash M^2$). We show that if there are exactly $n$ nonassociate atoms (in $M \backslash M^2$), then $M^{n-1}$ $(M^{n})$ is (weakly) universal (Theorem \ref{Weakly}). Suppose that $M \neq M^2$. Let $\{x_{\alpha}\}_{\alpha \in \Lambda} \subseteq M \backslash M^2$ be a complete set of representatives of the one-dimensional $\overline{R}$-subspaces of $M / M^2$.
Then $\{x_{\alpha}\}_{\alpha \in \Lambda}$ is a set of nonassociate atoms of $R$ lying in $M \backslash M^2$ (thus we have a lower bound for the number of nonassociate atoms in $M \backslash M^2$) and $M^2$ is universal if and only if $\{x_{\alpha}\}_{\alpha \in \Lambda}$ is a complete set of nonassociate atoms of $R$ (lying in $M \backslash M^2$) (Theorem \ref{QLdoms}). We show that $M^2$ is universal if and only if $[M:M] = \{x \in K | xM \subseteq M\}$ ($K$ the quotient field of $R$) is a quasilocal domain with principal maximal ideal $M$ (Theorem \ref{QLdoms}). For $M^n$ universal $(n \geq 2)$, we give an upper bound for the number of nonassociate atoms in $M^{n-1} \backslash M^n$ (Theorem \ref{Upper}). Finally we show that if $(R,M)$ is a quasilocal domain with $M \neq M^2$ having only finitely many nonassociate atoms, then $P = \bigcap_{n=1}^{\infty} M^n$ is prime and $R/P$ is a CK domain (Theorem \ref{Uni}). Section 3 concentrates on local CK domains. We review some characterizations of local CK domains. We offer alternative proofs and sharpen several results from \cite{CK}. Let $(R,M)$ be a local CK domain that is not a DVR. Let $V = U([M:M]) / U(R)$ where for a ring $S$, $U(S)$ is the group of units of $S$. Now $V$ is finite and $|V| \geq |\overline{R}|$ with $|V| \geq |\overline{R}|+1$ if $M$ is the maximal ideal of $[M:M]$ (Theorem \ref{Vcard}). For $x \in M$ and $u \in U([M:M])$, $x \in M^{n-1} \backslash M^n$ $\iff$ $ux \in M^{n-1} \backslash M^n$ and $x$ is an atom $\iff$ $ux$ is an atom. Thus the number of nonassociate atoms in $M^{n-1} \backslash M^n$ is a multiple of $|V|$, possibly $0$ for $n \geq 3$. Moreover, $M^2$ is universal $\iff$ the nonassociate atoms consist of a single $V$-class $\iff$ the nonassociate atoms contained in $M \backslash M^2$ consist of a single $V$-class. Thus if the number of nonassociate atoms in $R$ (or in $M \backslash M^2$) is prime, $M^2$ is universal (Corollary \ref{2p}). The fourth section consists of examples. 
Of particular interest are local CK domains of the form $R = K + WX + F[[X]]X^2$ where $K \subseteq F$ is an extension of finite fields and $W$ is a $K$-subspace of $F$. Cohen and Kaplansky's paper \cite{CK} is entitled \enquote{Rings with a finite number of primes. I.} (They use the term \enquote{prime} to mean an atom.) II never appeared, but on page 472 in regard to the result on the universality of $M^{n-1}$ when $R$ is a CK domain with precisely $n$ nonassociate atoms, they state \enquote{This result will incidentally be considerably sharpened in the paper that follows.} A question they raised, but were unable to answer, was whether a local CK domain $(R,M)$ could have an atom in $M^2$. To quote from page $473$ of their paper: \enquote{Whether or not there exist rings with a prime (sic) in $M^2$ is a question that has not yet been settled. It follows from (2), and the fact that $k$ and $N$ are at least $2$, that such a ring must have at least seven primes (sic). Since we shall prove below that $M^2$ is universal when $n$ is prime, the lower bound becomes $n=8$. We shall continue this discussion in the second paper; but we remark that at the moment our best result has ruled out the possibility of a prime (sic) in $M^2$ for $n=8$ or $n=9$.} Now in \cite{AMO} it was shown that you can have an atom in $M^2$. Using the construction given there, we give an example of a local CK domain $(R,M)$ with exactly eight nonassociate atoms having two nonassociate atoms in $M^2$. Perhaps this is why II never appeared. We also use the construction given in \cite{AMO} to construct a local CK domain $(R,M)$ with $M^{2n}$ universal, but $M^{2n-1}$ not universal. In Section 5 we investigate the existence of local CK domains with exactly $n$ nonassociate atoms for small $n$. \section{Atoms in Quasilocal Domains} In this section we study the set of atoms of a quasilocal domain $(R,M)$. We will usually assume that $M \neq M^2$ so we have atoms in $M \backslash M^2$. 
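Before turning to the general results, it may help to keep the simplest case in mind (a standard fact, recorded here only as an orienting sketch): in a DVR every atom lies in $M \backslash M^2$ and $M^2$ is universal.

```latex
% The simplest example: a DVR such as k[[t]] (standard fact, for orientation).
Let $R = k[[t]]$ with $k$ a field, so that $M = (t)$. Every nonzero nonunit has
the form $u t^{n}$ with $u$ a unit and $n \geq 1$, so $t$ is the unique atom up
to associates; it lies in $M \backslash M^{2} = \{u t : u \makebox{ a unit}\}$,
there are no atoms in $M^{2} = (t^{2})$, and $M^{2} \subseteq Rt$, so $M^{2}$
is universal.
```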
While our main goal is to study local CK domains, in this section we try to keep the results as general as possible by not assuming that $R$ is atomic or that the number of nonassociate atoms involved is necessarily finite. Several of the results of this section have previously been given for CK domains \cite{CK}. Recall that $R$ is a \emph{bounded factorization domain} (\emph{BFD}) if for each nonzero nonunit $x \in R$ there is a natural number $N(x)$ so that if $x = x_1 \cdots x_n$ where each $x_i \in R$ is a nonunit, then $n \leq N(x)$. We say that $R$ satisfies the \emph{ascending chain condition on principal ideals} (\emph{ACCP}) if any ascending chain of principal ideals of $R$ stabilizes. It is well known and easily proved that \begin{center} BFD $\implies$ ACCP $\implies$ atomic \end{center} \noindent and that none of these implications can be reversed, even for quasilocal domains. See Section 4 for more details. We next generalize the well-known result that a quasilocal domain $(R,M)$ with $\bigcap_{n=1}^{\infty} M^n = 0$ is a BFD. Recall that $M^{\beta}$ is defined for each ordinal $\beta$ where $M^{\beta + 1} = M M^{\beta}$ and for $\beta$ a limit ordinal $M^{\beta} = \bigcap_{\alpha < \beta} M^{\alpha}$. \begin{thm} \label{BFDgen} Let $(R,M)$ be a quasilocal domain. If $M^{\alpha}=0$ for some ordinal $\alpha$, then $R$ satisfies ACCP. If further $M^{\omega} = \bigcap_{n=1}^{\infty} M^n = 0$, then $R$ is a BFD. \end{thm} \begin{proof} Define a function $\phi: M \backslash \{0\} \to \text{ORD}$ by $\phi(x) = \beta$ where $x \in M^{\beta} \backslash M^{\beta+1}.$ Now for $x,y \in M \backslash \{0\}$, $\phi(xy) > \phi(x)$. Hence if $0 \neq Rx_1 \subsetneq Rx_2 \subsetneq Rx_3 \subsetneq \cdots$ is an infinite ascending chain of principal ideals in $R$, then $\phi(x_1) > \phi(x_2) > \phi(x_3) > \cdots$ is an infinite descending chain of ordinals, a contradiction. (This is \cite[Proposition 2]{AJ}.)
For the case where $0 = M^{\omega} = \bigcap_{n=1}^{\infty} M^n$, let $0 \neq x = x_1 \cdots x_m$ where $x_i \in M$. Then $m \leq \phi(x)$; so $R$ is a BFD. \end{proof} Let $S$ and $T$ be subsets of $R \backslash \{0\}$ where $R$ is an integral domain. We say that $S$ is \emph{universally divisible by $T$} if each element of $S$ is divisible by each element of $T$, or equivalently, $S \subseteq \bigcap_{t \in T} Rt$. For $(R,M)$ quasilocal, $M^n$ is (\emph{weakly}) \emph{universal} if $M^n$ is universally divisible by $T=\{x \in R \mid x \text{ is an atom}\}$ ($T=\{x \mid x \in M \backslash M^2\}$). The concept of $M^n$ being universal was introduced by Cohen and Kaplansky \cite{CK} who characterized the CK domains with $M^2$ universal and showed that if $R$ is a local CK domain with exactly $n$ nonassociate atoms, then $M^{n-1}$ is universal; see Theorem \ref{Weakly} for a generalization. We next characterize quasilocal domains $(R,M)$ with $M^2$ universal. \begin{thm} \label{QLdoms} Let $(R,M)$ be a quasilocal domain with $M \neq M^2$. Put $\overline{R}=R/M$. Let $\{V_{\alpha}\}_{\alpha \in \Lambda}$ be the set of one-dimensional $\overline{R}$-subspaces of $M/M^2$. For each $\alpha \in \Lambda$, let $x_{\alpha} \in M \backslash M^2$ with $V_{\alpha} = \overline{R} \overline{x_{\alpha}}$. \begin{itemize} \item[(1)] If $x \in M \backslash M^2$, then $x$ is an atom of $R$. \item[(2)] If $x,y \in M$ with $x \sim y$, then $\overline{R} \overline{x} = \overline{R} \overline{y}$. So associate atoms of $M \backslash M^2$ determine the same one-dimensional subspace of $M/M^2$. \item[(3)] $\{x_{\alpha}\}_{\alpha \in \Lambda}$ is a set of nonassociate atoms in $M \backslash M^2$. \item[(4)] Suppose that there is an atom $q \in M^2$. Let $\{u_{\beta}\}_{\beta \in \Gamma}$ be a complete set of representatives of $\overline{R}$. Then $\{x_{\alpha}+u_{\beta}q\}_{(\alpha,\beta)\in \Lambda \times \Gamma}$ is a set of nonassociate atoms in $M \backslash M^2$.
\item[(5)] The following are equivalent: \begin{itemize} \item[(a)] $M^2$ is universal, \item[(b)] $\{x_{\alpha}\}_{\alpha \in \Lambda}$ is a complete set of nonassociate atoms in $M \backslash M^2$, \item[(c)] $\{x_{\alpha}\}_{\alpha \in \Lambda}$ is a complete set of nonassociate atoms of $R$, \item[(d)] $aM=M^2$ for each $a \in M \backslash M^2$, \item[(e)] $M^2$ is weakly universal, and \item[(f)] $[M:M]$ is a quasilocal domain with principal maximal ideal $M$. \end{itemize} \item[(6)] $R$ is an atomic domain with $M^2$ universal if and only if $[M:M]$ is a DVR with maximal ideal $M$. In this case $R$ is even a BFD. \item[(7)] Suppose that $R$ is local. Then $M^2$ is universal if and only if $R'$, the integral closure of $R$, is a DVR with maximal ideal $M$. \end{itemize} \end{thm} \begin{proof} (1) and (2) are clear and together prove (3). (We note that (3) is well known with the finite case given in \cite{CK}.) (4) Cohen and Kaplansky \cite{CK} proved this for $R$ a CK domain. While their proof extends to this case mutatis mutandis, we give the simple proof for completeness. Certainly each $x_{\alpha}+u_{\beta}q \in M \backslash M^2$ is an atom. Suppose that $x_{\alpha}+u_{\beta}q \sim x_{\alpha '}+u_{\beta '}q$, so $x_{\alpha}+u_{\beta}q = u (x_{\alpha '}+u_{\beta '}q)$ for some unit $u \in R$. Then $\overline{x_{\alpha}} = \overline{u} \; \overline{x_{\alpha '}}$, so $\alpha = \alpha '$ and $\overline{u} = \overline{1}$ in $\overline{R}$. Now $x_{\alpha}(1-u) = (u u_{\beta '} - u_{\beta})q$, so $x_{\alpha} \not \sim q$ gives $u u_{\beta '} - u_{\beta} \in M$. Finally, $\overline{u}=\overline{1}$ in $\overline{R}$ gives $\overline{u_{\beta '}} = \overline{u_{\beta}}$; so $\beta ' = \beta$. (5) (a) $\implies$ (b) Suppose that $M^2$ is universal. Let $x \in M \backslash M^2$ be an atom. So $\overline{R} \overline{x} = V_{\alpha} = \overline{R} \overline{x_{\alpha}}$ for some $\alpha \in \Lambda$. 
Now $M^2$ universal gives $M^2 \subseteq Rx \cap Rx_{\alpha}$, so $Rx = Rx+M^2 = Rx_{\alpha}+M^2=Rx_{\alpha}$. Hence $x \sim x_{\alpha}$. (b) $\implies$ (c) Suppose there is an atom $q \in M^2$. Then by (4), for any $\alpha \in \Lambda$, $x_{\alpha}+q$ is an atom in $M \backslash M^2$ not associated with any $x_{\beta}$, a contradiction. (c) $\implies$ (a) Let $x \in M^2$. For any $\alpha \in \Lambda$, $x_{\alpha}+x \in M \backslash M^2$ and hence is an atom. So $x_{\alpha}+x \sim x_{\beta}$ for some $\beta$, necessarily with $\beta = \alpha$ since $\overline{x_{\alpha}}=\overline{x_{\beta}}$. So $x_{\alpha}+x = ux_{\alpha}$ for some unit $u \in R$. Then $x=(u-1)x_{\alpha} \in R x_{\alpha}$. So $M^2 \subseteq \bigcap_{\alpha \in \Lambda} Rx_{\alpha} = \bigcap \{Ra \; | \; a \text{ is an atom of R}\}$. (a) , (e) $\iff$ (d) Just observe that for $a \in M \backslash M^2$, $Ra \supseteq M^2 \iff aM=M^2$. (d) $\implies$ (f) Let $x \in [M:M]$ be a nonunit, so $xM \subsetneq M$. Let $a \in M \backslash M^2$. Now $xa \in M \backslash M^2 \implies xaM=M^2 = aM \implies xM=M$, a contradiction. Thus $xa \in M^2 = aM \implies x \in M$. Hence $[M:M]$ is quasilocal with maximal ideal $M$. Let $a \in M \backslash M^2$; we show $M = a[M:M]$. For $b \in M$, $aM = M^2 \supseteq bM$ so $b/a \in [M:M]$. Thus $b \in a[M:M]$. Hence $M \subseteq a[M:M] \subseteq M$. (f) $\implies$ (a) Suppose $M=a[M:M]$ where $a \in M$. So atoms of $R$ have the form $ua$ where $u \in [M:M]$ is a unit. Hence $M^2 = a^2 [M:M]=a[M:M](ua) \subseteq Rua$. (6) $(\impliedby)$ Suppose that $[M:M]$ is a DVR with maximal ideal $M$. By (5), $M^2$ is universal. Since $[M:M]$ is a DVR, $\bigcap_{n=1}^{\infty} M^n = 0.$ Hence $R$ is a BFD and hence is atomic. ($\implies$) This is \cite[Corollary 5.2]{AMO}, but we offer a simple self-contained proof. By (5), $[M:M]$ is a quasilocal domain with principal maximal ideal $M$. We first show that $[M:M]$ is a valuation domain. 
Let $x,y \in M \backslash \{0\}$ so $x = a_1 \cdots a_n$, $y = b_1 \cdots b_m$ where $a_i, b_j$ are atoms. Now by (5) $a_i M = M^2 = b_j M$. Hence $a_1 \cdots a_n M = M^{n+1}$ and $b_1 \cdots b_m M = M^{m+1}$. Suppose $n \leq m$. Then $yM = b_1 \cdots b_m M = M^{m+1} \subseteq M^{n+1} = a_1 \cdots a_n M = xM$. So $y/x \in [M:M]$. It follows that $[M:M]$ is a valuation domain. Now $M^{n+1} \subseteq Rx$. So $\bigcap_{n=1}^{\infty} M^n \subseteq \bigcap \{Rx \; | \; x \in M \backslash \{0\}\}=0$. Thus $[M:M]$ is a DVR. (7) Suppose that $R$ is local. Then $R \subseteq [M:M]\subseteq R'$. If $M^2$ is universal, $[M:M]$ is a DVR by (6) and hence $[M:M] = R'$. Conversely, suppose that $R'$ is a DVR with maximal ideal $M$. Then $R'M \subseteq M$ so $R' \subseteq [M:M]$. Hence $[M:M] = R'$ is a DVR with maximal ideal $M$. By (6), $M^2$ is universal. \end{proof} \begin{cor} Let $(V,M)$ be a quasilocal domain with principal maximal ideal $M$. Let $L$ be a subfield of $V/M$. Let $R$ be the pullback of \begin{center} \[\xymatrixcolsep{1.5pc} \xymatrix{ R \ar@{-->}[r] \ar@{-->}[d] & V \ar[d] \\ L \ar@{}[r]|-*[@]{\subseteq} & V/M.} \] \end{center} Then $R$ is a quasilocal domain with maximal ideal $M$, $M^2$ universal, and $V=[M:M]$. Conversely, if $R$ is a quasilocal domain with maximal ideal $M \neq M^2$ and $M^2$ universal, then $V=[M:M]$ has principal maximal ideal $M$ and $R$ is the pullback of \begin{center} \[\xymatrixcolsep{3pc} \xymatrix{ & V \ar[d] \\ R/M \ar@{^{(}->}[r] & V/M.} \] \end{center} \end{cor} \begin{cor} Let $(V,M)$ be a quasilocal domain with principal maximal ideal $M$. Let $(D,P)$ be a quasilocal subring of $V$ with $P = D \cap M$. Let $R = D+M$. Then $R$ is a quasilocal domain with maximal ideal $M$ and $M^2$ is universal. \end{cor} \begin{note} $(R,M)$ with $M^2$ universal does not imply that $R$ is atomic. For example, take $(V,M)$ to be a valuation domain with principal maximal ideal $M$ and dim $V > 1$. 
Let $(D,P)$ be a subring of $V$ with $P = M \cap D$ (e.g., $D = V$). Then $R = D+M$ is quasilocal with maximal ideal $M$ and $M^2$ is universal, but $R$ is not atomic since $V = [M:M]$ is not a DVR. \end{note} The next theorem investigates the number of nonassociate atoms in $M \backslash M^2$ for a quasilocal domain $(R,M)$. We need the following definitions. Let $R$ be an integral domain. We call $R$ a \emph{finite atom} (\emph{FA}) \emph{domain} if $R$ has only finitely many (possibly none) nonassociate atoms. In the extreme case where $R$ has no atoms, following \cite{CDM} we call $R$ an \emph{antimatter domain}. Thus an atomic FA domain is just a CK domain. For $(R,M)$ quasilocal, $R$ is a \emph{weak finite atom} (\emph{WFA}) \emph{domain} if there are only finitely many nonassociate atoms in $M \backslash M^2$. Thus if $M=M^2$, $R$ is a WFA domain. Let $(V,M)$ be a valuation domain. As either $M=M^2$ and $V$ is antimatter or $M=(a)$ and $a$ is the only atom of $V$ up to associates, $V$ is an FA domain. \begin{thm} \label{QLatom} Let $(R,M)$ be a quasilocal domain. Put $\overline{R} = R/M$. \begin{itemize} \item [(1)] $R$ has no atoms in $M \backslash M^2$ if and only if $M=M^2$. \item [(2)] If there are only finitely many nonassociate atoms in $M \backslash M^2$, but at least one, then $M$ is finitely generated. Thus if $R$ is a WFA domain, either $M=M^2$ or $M$ is finitely generated. \item [(3)] If $M = (a)$ is principal, then $a \in M \backslash M^2$ and $a$ is the only atom of $R$ up to associates. Conversely, suppose that up to associates there is only one atom $a$ in $M \backslash M^2$. Then $M = (a)$.
\item [(4)] The following are equivalent: \begin{itemize} \item [(a)] $R$ is a DVR, \item [(b)] $M$ is principal and $R$ is atomic, \item [(c)] $R$ is atomic, $\text{dim}_{\overline{R}} \; M/M^2 = 1$, and there are only finitely many atoms in $M \backslash M^2$ up to associates, \item [(d)] $R$ is atomic and has exactly one atom up to associates, and \item [(e)] $R$ is atomic and has exactly one atom in $M \backslash M^2$ up to associates. \end{itemize} \item [(5)] If $\overline{R}$ is infinite, there are either no atoms in $M \backslash M^2$ (i.e., $M=M^2$), exactly one atom in $M \backslash M^2$ up to associates (i.e., $M$ is principal), or infinitely many nonassociate atoms in $M \backslash M^2$. Thus if $R$ is a WFA domain, either $M=M^2$ or $M$ is principal. \item [(6)] Suppose that $\overline{R}$ is finite. If $\text{dim}_{\overline{R}} M/M^2$ is infinite, there are infinitely many nonassociate atoms in $M \backslash M^2$. If $\text{dim}_{\overline{R}} \; M/M^2 = 0$, then $M=M^2$ and there are no atoms in $M \backslash M^2$. Suppose that $1 \leq \text{dim}_{\overline{R}} M/M^2 = k < \infty$ and set $m := (|\overline{R}|^k -1) / (|\overline{R}|-1)$. Suppose there are $n$ nonassociate atoms in $M \backslash M^2$. Then $n \geq m$. If there is an atom in $M^2$, then $n \geq m |\overline{R}|$ and hence there are at least $m |\overline{R}|+1 = (|\overline{R}|^{k+1}-1)/(|\overline{R}|-1)$ nonassociate atoms in $R$. If $n < \infty$, then $M$ can be generated by $\floor{\text{log}_{|\overline{R}|} \; n}+1$ elements. Finally, the following are equivalent: \begin{itemize} \item [(a)] $M^2$ is universal, \item [(b)] $n=m$, and \item [(c)] there are exactly $m$ nonassociate atoms in $M \backslash M^2$. \end{itemize} \item [(7)] $R$ cannot have exactly two nonassociate atoms in $M \backslash M^2$. If either $R$ is atomic or $M \neq M^2$, $R$ cannot have exactly two nonassociate atoms.
If $M \neq M^2$ and $R$ is not a DVR, there are at least three nonassociate atoms in $M \backslash M^2$. If further there is an atom in $M^2$, $R$ has at least six nonassociate atoms in $M \backslash M^2$ and hence $R$ has at least seven nonassociate atoms. (In the next section we will see that if a local CK domain has an atom in $M^2$, then there must be at least eight nonassociate atoms. In Section 4 we give an example of a local CK domain with eight nonassociate atoms having an atom in $M^2$.) \end{itemize} \end{thm} \begin{proof} (1) Clear since an element of $M \backslash M^2$ is an atom. (2) Now suppose that $a_1,\ldots,a_n$ ($n \geq 1)$ is a complete set of nonassociate atoms in $M \backslash M^2$. Then $M = (a_1) \cup \cdots \cup (a_n) \cup M^2 = (a_1,\ldots,a_n) \cup M^2$. Since $n \geq 1$, $M \neq M^2$, so $M = (a_1,\ldots,a_n)$. (3) ($\implies$) Clear. ($\impliedby$) By Theorem \ref{QLdoms}, $\text{dim}_{\overline{R}} M/M^2 = 1$, so $M = (a) + M^2$ for some $a \in M \backslash M^2$. By (2), $M$ is finitely generated, so $M = (a)$ by Nakayama's Lemma. (4) (a) $\implies$ (b) $\implies$ (c) Clear. (c) $\implies$ (d) By (2), $M$ is finitely generated. Then $\text{dim}_{\overline{R}} M/M^2 = 1$ gives that $M$ is principal and hence that $R$ has only one atom up to associates. (d) $\implies$ (e) Suppose $a$ is the only atom of $R$ up to associates. Then $M = (a)$, so $a \notin M^2$. (e) $\implies$ (a) Let $a$ be the atom in $M \backslash M^2$. As we have seen, $M = (a)$. So $a$ is the only atom of $R$ up to associates. Hence every nonzero nonunit of $R$ has the form $u a^n$ where $u$ is a unit and $n \geq 1$. Thus $R$ is a DVR. (5) Suppose that $\overline{R}$ is infinite. If $\text{dim}_{\overline{R}} M/M^2 > 1$, then $M/M^2$ has infinitely many one-dimensional subspaces and hence there are infinitely many nonassociate atoms in $M \backslash M^2$ by Theorem \ref{QLdoms} (3).
If $\text{dim}_{\overline{R}} M/M^2 = 1$ and there are only finitely many nonassociate atoms in $M \backslash M^2$, then $M$ is principal since $M$ is finitely generated by (2). So up to associates there is one atom in $M \backslash M^2$. If $\text{dim}_{\overline{R}} M/M^2 = 0$, $M = M^2$ and there are no atoms in $M \backslash M^2$. The last statement is now immediate. (6) Suppose that $\overline{R}$ is finite. The statements concerning the cases when $\text{dim}_{\overline{R}} M/M^2$ is infinite or $0$ are clear. So suppose that $1 \leq \text{dim}_{\overline{R}} M/M^2 = k < \infty$. Then $M/M^2$ has $m:= (|\overline{R}|^k-1) / (|\overline{R}|-1)$ one-dimensional $\overline{R}$-subspaces. Thus by Theorem \ref{QLdoms} (3), there are at least $m$ nonassociate atoms in $M \backslash M^2$. Suppose there is an atom $q$ in $M^2$. Let $x_1,\ldots,x_m$ be the $m$ atoms corresponding to the one-dimensional subspaces of $M/M^2$ and $u_1,\ldots,u_{|\overline{R}|}$ be a complete set of representatives of $\overline{R}$. Then by Theorem \ref{QLdoms} (4), the elements $x_i + u_j q$, $i = 1,\ldots,m$, $j=1,\ldots,|\overline{R}|$, are $m |\overline{R}|$ nonassociate atoms in $M \backslash M^2$. Thus there are at least $m |\overline{R}|+1 = (|\overline{R}|^{k+1}-1)/(|\overline{R}|-1)$ nonassociate atoms. Suppose $n<\infty$; then $M$ is finitely generated. Now $M$ can be generated by $k$ elements and $(|\overline{R}|^{k}-1)/(|\overline{R}|-1) \leq n$. Hence $k \leq \floor{\text{log}_{|\overline{R}|} \; n}+1$. The statement concerning the universality of $M^2$ follows from Theorem \ref{QLdoms} (5). (Most of (5) is given in \cite{CK} for the case where $R$ is a local CK domain. However, the hypothesis that $R$ is atomic is not needed.) (7) The fact that there cannot be exactly two nonassociate atoms in $M \backslash M^2$ follows from (5) and (6). Suppose that $R$ has exactly two nonassociate atoms $p$ and $q$. First suppose that $R$ is atomic.
Then $p+q$ must have an atomic factor not an associate of $p$ or $q$, a contradiction. Next assume that $M \neq M^2$. So there is an atom in $M \backslash M^2$, say $p \in M \backslash M^2$. If $q \in M \backslash M^2$ we contradict the first statement of (7). So $q \in M^2$. But then by Theorem \ref{QLdoms} (4), $p+q$ is a third atom. The last statement follows from (6). \end{proof} However, as the following example shows, it is quite possible to have infinitely many nonassociate atoms in $M \backslash M^2$ where $(R,M)$ is a quasilocal domain with $\overline{R}=R/M$ finite and $\text{dim}_{\overline{R}} M / M^2 = 1$. \begin{ex} Let $(V,N)$ be a quasilocal domain with nonzero idempotent maximal ideal $N$ (e.g., $V$ is a valuation domain with nonprincipal maximal ideal). Let $R = V[[X]]$. So $R$ is a quasilocal domain with maximal ideal $M=(X,N)$ and $\overline{R}=R/M=V/N$. Here $\text{dim}_{\overline{R}} M/M^2 = 1$, but $M$ is not principal and there are infinitely many nonassociate atoms in $M \backslash M^2$ (since for $n,n' \in N$, $n+X \sim n'+X \implies n \sim n'$). By choosing $V/N$ to be finite, we even have that $\overline{R}$ and $M / M^2$ are finite. \end{ex} We have given a lower bound, the cardinality of the set of one-dimensional $\overline{R}$-subspaces of $M/M^2$, for the number of nonassociate atoms in $M \backslash M^2$. We next give an upper bound, the cardinality of the set of one-dimensional $\overline{R}$-subspaces of $M^{n-1} / M^n$, for the number of nonassociate atoms in $M^{n-1} \backslash M^n$ where $M^n$ is universal. \begin{thm} \label{Upper} Let $(R,M)$ be a quasilocal domain, $\overline{R}=R/M$, and $n \geq 2$. Let $x,y \in M^{n-1} \backslash M^n$. \begin{itemize} \item[(1)] If $x \sim y$, then $\overline{R} \overline{x} = \overline{R} \overline{y}$. \item[(2)] Suppose that $y$ is an atom and $M^n$ is universal. If $\overline{R} \overline{x} = \overline{R} \overline{y}$, then $x \sim y$. \item[(3)] Suppose that $M^n$ is universal.
Then two atoms $x,y \in M^{n-1}$ are associates if and only if $\overline{R} \overline{x} = \overline{R} \overline{y}$. \item[(4)] Suppose that $M^n$ is universal. Let $\alpha$ be the cardinality of the set of one-dimensional $\overline{R}$-subspaces of $M^{n-1} / M^n$. Then there are at most $\alpha$ nonassociate atoms in $M^{n-1} \backslash M^n$. Hence if $\overline{R}$ and $l := \text{dim}_{\overline{R}} \; M^{n-1} / M^n$ are finite, there are at most $(|\overline{R}|^l -1) / (|\overline{R}|-1)$ nonassociate atoms in $M^{n-1} \backslash M^n$. For $n \geq 3$, there are at most $(|\overline{R}|^l-1) / (|\overline{R}|-1)-1$ nonassociate atoms in $M^{n-1} \backslash M^n$. \item[(5)] Suppose that $M^n$ is universal. Let $\{x_{\alpha}\}_{\alpha \in \Lambda}$ be a complete set of representatives for the one-dimensional $\overline{R}$-subspaces of $M^{n-1} / M^n$. Then each $x_{\alpha}$ is an atom if and only if $n=2$ or $M^{n-1}=M^n$. \end{itemize} \end{thm} \begin{proof} (1) Clear. (2) Now $\overline{R} \overline{x} = \overline{R} \overline{y}$ gives $x -ry \in M^n$ for some $r \in R$. Since $M^n$ is universal, $y | x-ry$, so $y | x$. Since $x,y \in M^{n-1} \backslash M^n$, $x \sim y$. (3) This follows from (1) and (2). (4) The first part follows from (3). Suppose that $n \geq 3$. If $M^{n-1} = M^n$, there are no atoms in $M^{n-1} \backslash M^n$; so assume $M^{n-1} \neq M^n$. Then by (5) not all of the one-dimensional $\overline{R}$-subspaces of $M^{n-1} / M^n$ can give rise to an atom. (5) If $n=2$, then each $x_{\alpha} \in M \backslash M^2$ and hence is an atom. If $M^{n-1}=M^n$, the result is obvious. Conversely, suppose that each $x_{\alpha}$ is an atom. Then for $y \in M^{n-1} \backslash M^n$, $\overline{R} \overline{y} = \overline{R} \overline{x_{\alpha}}$ for some $\alpha \in \Lambda$. Hence by (2), $y \sim x_{\alpha}$ and hence is an atom. Thus each element $y \in M^{n-1} \backslash M^n$ is an atom. Suppose that $n > 2$. Let $x \in M \backslash M^2$. If $xM^{n-2} \not \subseteq M^n$, we have $xm \in M^{n-1} \backslash M^n$ for some $m \in M$.
But this is a contradiction since $xm$ is not an atom. Thus $xM^{n-2} \subseteq M^n$ for each $x \in M$ and hence $M^{n-1} = M M^{n-2} \subseteq M^n$. So $M^{n-1} = M^n$. \end{proof} Cohen and Kaplansky \cite{CK} showed that if $(R,M)$ is a local CK domain with precisely $n$ nonassociate atoms, then $M^{n-1}$ is universal. We generalize this result in Theorem \ref{Weakly} (which does not require $R$ to be atomic). Our proof of Theorem \ref{Weakly} is modeled after their proof. But we first show that for $(R,M)$ a quasilocal (W)FA domain with $M \neq M^2$, some power of $M$ is (weakly) universal. We also generalize the well-known result that if $P$ is a principal prime ideal, then $Q = \bigcap_{n=1}^{\infty} P^n$ is prime and there are no prime ideals properly between $P$ and $Q$. \begin{thm} \label{Uni} Let $(R,M)$ be a quasilocal domain and let $P = \bigcap_{n=1}^{\infty} M^n$. \begin{itemize} \item[(1)] Suppose that $M \neq M^2$. Then $R$ is a (W)FA domain if and only if either (a) $M$ is principal, or (b) $R/M$ is finite, $M$ is finitely generated, and some power of $M$ is (weakly) universal. \item[(2)] If $R$ is a WFA domain with $M \neq M^2$, then there can be no atoms in $P$ and each nonzero nonunit of $R$ has an atom as a factor. For $a \in M \backslash M^2$, $(a)$ is $M$-primary. Thus if $R$ is Noetherian, dim $R=1$. \item[(3)] Let $R$ be an FA domain. Then $R/P$ is a field or a CK domain and hence $P$ is prime and there are no prime ideals properly between $P$ and $M$. If $M \neq M^2$ and either dim $R=1$ or $R$ is completely integrally closed, $R$ is a CK domain. \end{itemize} \end{thm} \begin{proof} (1) ($\impliedby$) If $M$ is principal, then $R$ is an FA domain and $M$ is universal. So suppose that $R/M$ is finite, $M$ is finitely generated, and $M^l$ is (weakly) universal. Since $R/M$ is finite and $M$ is finitely generated, $R/M^l$ is finite. Thus there are only finitely many principal ideals $(a) \supseteq M^l$. Hence $R$ is a (W)FA domain.
($\implies$) Since $M \neq M^2$, there is at least one atom in $M \backslash M^2$. By Theorem \ref{QLatom} (2), $M$ is finitely generated. If $R/M$ is infinite, Theorem \ref{QLatom} (5) gives that $M$ is principal. So suppose that $M$ is not principal. Then $R/M$ is finite. Let $a_1,\ldots,a_n$ be a complete set of nonassociate atoms (in $M \backslash M^2$). For the WFA case, $M = (a_1) \cup \cdots \cup (a_n) \cup M^2$ is an irredundant union. So by McCoy's Theorem \cite{M}, there exists an $l$ with $M^l \subseteq (a_1) \cap \cdots \cap (a_n) \cap M^2 \subseteq (a_1) \cap \cdots \cap (a_n).$ So $M^l$ is weakly universal. Now consider the FA case. Since $R$ is a WFA domain with $M \neq M^2$, for $a \in M \backslash M^2$, some $M^l \subseteq (a)$. So without loss of generality we can assume that $\bigcap_{n=1}^{\infty} M^n \subset (a_1)$. Now each element of $M^i \backslash M^{i+1}$ for $i \geq 1$ is a finite product of atoms. Hence $M = (a_1) \cup \cdots \cup (a_n)$ and this union is irredundant. So again by McCoy's Theorem, some $M^k \subseteq (a_1) \cap \cdots \cap (a_n)$. So $M^k$ is universal. (2) The first statement follows since some $M^l$ is weakly universal, so each nonzero element of $P$ has an atom as a proper factor, and since each element of $M^n \backslash M^{n+1}$ is a finite product of atoms. Let $a \in M \backslash M^2$. We noted in the proof of (1) that some $M^l \subseteq (a)$. Thus $(a)$ is $M$-primary. If $R$ is Noetherian, the Principal Ideal Theorem gives that dim $R=1$. (3) We can assume that $M \neq M^2$. Since $M$ is finitely generated, the powers of $M$ properly descend. Thus $\overline{R} = R/P$ is not Artinian. Let $x_1, \ldots, x_n$ be a complete set of nonassociate atoms of $R$. Then every nonzero nonunit of $\overline{R}$ is a unit of $\overline{R}$ times a power-product of the $\overline{x_i}$'s. By \cite[Theorem 1]{A}, $\overline{R}$ is either a finite ring, an SPIR, or a CK domain.
Since in the first two cases $\overline{R}$ is Artinian, we must have that $\overline{R}$ is a CK domain. Thus $P$ is prime and there are no prime ideals properly between $P$ and $M$. Suppose that $M \neq M^2$. If dim $R=1$, $P=0$ and $R$ is a CK domain. Suppose that $R$ is completely integrally closed. Then for $a \in M$, $\bigcap_{n=1}^{\infty} (a^n)=0$. Let $a \in M \backslash M^2$, so some $M^l \subseteq (a)$. Then $P = \bigcap_{n=1}^{\infty} M^n \subseteq \bigcap_{n=1}^{\infty} (a^n)=0$. Thus $R$ is a CK domain (even a DVR). \end{proof} \begin{rem} Of course it is quite possible for a (W)FA domain $(R,M)$ to have $\bigcap_{n=1}^{\infty} M^n \neq 0$. Let $(V,M)$ be a valuation domain, so either $M=M^2$ or $M$ is principal, and $\bigcap_{n=1}^{\infty} M^n = 0$ $\iff$ $V$ is a DVR. We remark that we know of no quasilocal atomic domain $(R,M)$ with $M=M^2$. \end{rem} \begin{thm} \label{Weakly} Let $(R,M)$ be a quasilocal domain. \begin{itemize} \item[(1)] Suppose that there are exactly $n$, $0 \leq n < \infty$, nonassociate atoms in $M \backslash M^2$. Then $M^n$ is weakly universal. \item[(2)] Suppose that $M \neq M^2$ and that there are exactly $n$ nonassociate atoms in $R$, where $2 \leq n < \infty$. Then $M^{n-1}$ is universal. \end{itemize} \end{thm} \begin{proof} Note that (1) is trivial for $n=0$. For $n=1$, $M$ is principal by Theorem \ref{QLatom} and (1) also holds. So we can assume in both cases that $2 \leq n < \infty$. Let $a_1, \ldots, a_n$ ($n \geq 2$) be a complete set of nonassociate atoms (in $M \backslash M^2$). By the proof of Theorem \ref{QLatom} (2), $M = (a_1,\ldots,a_n)$; so $M^k$ is generated by the products $a_{i_1} \cdots a_{i_k}$. Suppose that $M^{n-1}$ ($M^n$) is not (weakly) universal. Without loss of generality we can assume that $a_1 \nmid \prod_{j=1}^{n-1} a_{i_j}$ $\left(a_1 \nmid \prod_{j=1}^n a_{i_j} \right)$. Put $x_k := \prod_{j=k}^{n-1} a_{i_j} \left(\prod_{j=k}^n a_{i_j} \right)$ for $k=1,\ldots,n-1$. So $a_1 \nmid x_k$. Set $y_k = a_1 + x_k$.
So $a_1 \nmid y_k$ and $a_{i_{n-1}} \nmid y_k$ ($a_{i_n} \nmid y_k$). In case (1) each $y_k \in M \backslash M^2$ and hence is an atom. In case (2) each $y_k$ is divisible by an atom as noted in Theorem \ref{Uni}. Thus in either case (1) or (2) each of the $n-1$ elements $y_1,\ldots,y_{n-1}$ is divisible by one of the $n-2$ atoms other than $a_1$ and $a_{i_{n-1}}$ ($a_{i_n}$). So by the Pigeonhole Principle, we have $i,j$ with $1 \leq i < j \leq n-1$ and an atom $a_l$, $a_l \neq a_1, a_{i_{n-1}}$ ($a_{i_n}$), with $a_l | y_i,y_j$. So $a_l | y_j - y_i = x_j (1 - x_i / x_j).$ Now $1-x_i/x_j$ is a unit, so $a_l | x_j$. Hence $a_l | y_j - x_j = a_1$, a contradiction. \end{proof} Three remarks concerning Theorem \ref{Weakly} (2) are in order. First, $n-1$ may be best possible. For example, $R = \text{GF}(2) + \text{GF}(2^2)[[X]]X$ (Example \ref{Wint}) (resp., $R = \text{GF}(2) + \text{GF}(2)[[X]]X^2$ (Example \ref{Corrected})) has $3$ (resp., $4$) nonassociate atoms and $M^2$ (resp., $M^3$) is universal while $M$ (resp., $M^2$) is not. Second, a CK domain $(R,M)$ with $M^2$ universal can have an arbitrarily large number of nonassociate atoms. So certainly $n-1$ need not be the least power of $M$ that is universal. Third, the least power of $M$ that is universal can be arbitrarily large. For each $n \geq 2$, Example \ref{AndMottEx} gives a local CK domain $(R_n,M_n)$ with $M_n^{2n}$ universal, but $M_n^{2n-1}$ not universal. \section{Local CK Domains} In this section we sharpen and offer alternative proofs for some of the results in \cite{CK} concerning the number of atoms in a local CK domain. For the reader's convenience, we recall some characterizations of local CK domains. \begin{thm} \label{CKequiv} Let $(R,M)$ be a quasilocal domain and $\overline{R}=R/M$. Then the following conditions are equivalent. \begin{itemize} \item[(1)] $R$ is a CK domain. \item[(2)] Either $\overline{R}$ is infinite and $R$ is a DVR or $\overline{R}$ is finite and $R$ is a one-dimensional analytically irreducible local domain.
\item[(3)] Either $\overline{R}$ is infinite and $R$ is a DVR or $\overline{R}$ is finite, $R'$ is a DVR and a finitely generated $R$-module where $R'$ is the integral closure of $R$. \item[(4)] Either $\overline{R}$ is infinite and $R$ is a DVR or $R$ is atomic (e.g., $R$ is Noetherian), $\overline{R}$ is finite, $M$ is finitely generated (e.g., $R$ is Noetherian), and some power of $M$ is universal. \item[(5)] $R$ is a one-dimensional local domain that is an FFD and has finite elasticity $\rho (R)$. \item[(6)] $R$ has group of divisibility $G(R) \cong \mathbb{Z} \oplus F$ where $F$ is finite. \end{itemize} \end{thm} \begin{proof} The equivalence of (1)--(3) may be found in \cite[Theorem 4.3]{AMO}. (1) $\iff$ (4) This follows from Theorem \ref{Uni} (1). (1) $\iff$ (6) This is \cite[Corollary 3.6]{AMO}. (2) $\iff$ (5) Let $(R,M)$ be a one-dimensional local domain. Then $R$ is an FFD $\iff$ $R$ is a DVR or $\overline{R}$ is finite \cite[Corollary 6]{AMU}, and $\rho (R) < \infty$ $\iff$ $R$ is analytically irreducible \cite[Theorem 2.12]{AA}. \end{proof} Let $(R,M)$ be a local CK domain. Since $R$ is analytically irreducible, the map $\theta: L(R) \to L(\hat{R})$ given by $\theta(I) = \hat{R} I$, where $L(R)$ (resp., $L(\hat{R})$) is the lattice of ideals of $R$ (resp., $\hat{R}$) and $\hat{R}$ is the $M$-adic completion of $R$, is a multiplicative lattice isomorphism. So $R$ is a CK domain if and only if $\hat{R}$ is a CK domain, and both the ideal structure and the factorization structure (up to units) of $R$ and $\hat{R}$ are identical. Thus in the local case, very little is lost by assuming that $R$ is complete. The following result \cite[Theorem 4.5]{AMO} characterizes complete local CK domains. \begin{thm} (1) Let $F_0 \subseteq F$ be finite fields and let $n \geq 1$. Suppose that $R$ is an integral domain with $F_0 + F[[X]]X^n \subseteq R \subseteq F[[X]]$. Then $R$ is a complete local CK domain with residue field between $F_0$ and $F$.
Conversely, suppose that $(R,M)$ is a complete local CK domain with $R/M$ finite and char $R= \text{char } R/M$. Let $F_0$ (resp., $F$) be a coefficient field for $R$ (resp., $R'$, the integral closure of $R$). Then there exists an $n \geq 1$ with $F_0 + F[[X]]X^n \subseteq R \subseteq F[[X]]$. (2) Let $p > 0$ be prime, let $\mathbb{Z}_p$ be the ring of $p$-adic integers and $\mathbb{Q}_p$ the field of $p$-adic numbers, and let $L$ be a finite field extension of $\mathbb{Q}_p$. Let $(\overline{\mathbb{Z}_p}, (\pi))$ be the integral closure of $\mathbb{Z}_p$ in $L$. So $\overline{\mathbb{Z}_p}$ is a complete DVR. Suppose that $R$ is an integral domain with $\mathbb{Z}_p + \pi^n \overline{\mathbb{Z}_p} \subseteq R \subseteq \overline{\mathbb{Z}_p}$ for some $n$. Then $(R,M)$ is a complete local CK domain with char $R=0$ and char $R/M = p > 0$. Conversely, suppose that $(R,M)$ is a complete local CK domain with char $R=0$ and $R/M$ finite with char $R/M = p > 0$. Let $L$ be the quotient field of $R$ and $\overline{\mathbb{Z}_p}$ the integral closure of $\mathbb{Z}_p$ in $L$. Then $\overline{\mathbb{Z}_p} = R'$, the integral closure of $R$, and there exists an $n \geq 1$ so that $\mathbb{Z}_p + \pi^n \overline{\mathbb{Z}_p} \subseteq R \subseteq \overline{\mathbb{Z}_p}$. \end{thm} Let $(R,M)$ be a local CK domain that is not a DVR. Let $R'$ be the integral closure of $R$, so $R'$ is a DVR. Since $M$ has grade one, $R \subsetneq M^{-1}$, so $R \subsetneq M^{-1} \subseteq [M:M] \subseteq R'$. Since $R'$ is local, $[M:M]$ is as well. Now $[M:M]$ local and $R \subsetneq [M:M]$ gives that $U(R) \subsetneq U([M:M])$. So $V:= U([M:M]) / U(R)$ is a nontrivial subgroup of $U(R') /U(R)$ and $U(R') /U(R)$ is finite. (The fact that $U(R')/U(R)$ (and hence $V$) is finite follows from \cite[Theorem 3.9]{GM}. However, the fact that $V$ is finite also follows from the correspondence below.) Fix $x \in M \backslash \{0\}$.
Then the set $\{R \sigma x \; | \; \sigma \in U([M:M])\}$ of principal ideals corresponds to the set $Vx$ via $R \sigma x \leftrightarrow \sigma x U(R)$. But $\{R \sigma x | \sigma \in U([M:M])\}$ corresponds to a complete set of nonassociate elements of the form $\sigma x$ where $\sigma \in U([M:M])$. Cohen and Kaplansky showed that $|V| \geq |\overline{R}|$ where $\overline{R} = R/M$. (See the paragraph preceding \cite[Theorem 11]{CK}.) We sharpen this result and give an alternative proof. \begin{thm} \label{Vcard} Let $(R,M)$ be a local CK domain that is not a DVR. Let $\overline{R} = R/M$. Then $|V| \geq |\overline{R}|$. If $M$ is the maximal ideal of $[M:M]$, then $|V| \geq |\overline{R}|+1$. \end{thm} \begin{proof} First, suppose $M \neq \mathcal{M}$, the maximal ideal of $[M:M]$. Let $m \in \mathcal{M} \backslash M$. Let $u_0 = 0, u_1 = 1, \ldots, u_{N-1}, N = |\overline{R}|$, be a complete set of representatives of $\overline{R}$. So $u_1,\cdots,u_{N-1}$ are units of $R$. Thus $1, u_1+m, \cdots,u_{N-1}+m$ are units of $[M:M]$ with $(u_i+m)U(R) \neq 1 U(R)$. Suppose that $u_i + m = \lambda (u_j+m)$ for some $\lambda \in U(R)$. Then $u_i-\lambda u_j = (\lambda-1)m \in R \cap \mathcal{M}=M$. So $m \notin M$ gives $\lambda-1 \in M$; thus $\overline{\lambda}=\overline{1}$ in $\overline{R}$. But then $0 \equiv u_i - \lambda u_j \equiv u_i - u_j$ mod $M$, so $i=j$. Thus $1 U(R),(u_1+m)U(R),\ldots,(u_{N-1}+m)U(R)$ are $N$ distinct elements of $V$. Now suppose that $M$ is the maximal ideal of $[M:M]$. Now $[M:M]/M$ is an $\overline{R}$-vector space of dimension greater than one, so it has at least $N+1$ one-dimensional $\overline{R}$-subspaces. Suppose that $\overline{R} \overline{v_1}, \cdots, \overline{R} \overline{v_l}$, $l \geq N+1$, are the one-dimensional $\overline{R}$-subspaces of $[M:M]/M$ where $v_i \in [M:M]$. Since $M$ is the maximal ideal of $[M:M]$, $v_i \notin M$; so $v_i \in U([M:M])$. 
Also, if $v_i = \lambda v_j$ for some $\lambda \in U(R)$, then $\overline{R} \overline{v_i} = \overline{R} \overline{v_j}$, and hence $i=j$. So $|V| \geq l \geq N+1$. \end{proof} \begin{thm} \label{AtomEquiv} Let $(R,M)$ be a local CK domain that is not a DVR. Let $x \in M \backslash \{0\}$ and $\sigma \in U([M:M])$. \begin{itemize} \item[(1)] $x$ is an atom if and only if $\sigma x$ is an atom. \item[(2)] $x \in M^n$ if and only if $\sigma x \in M^n$. \item[(3)] $x$ is an atom in $M^{n-1} \backslash M^n$ ($n \geq 2$) if and only if $\sigma x$ is an atom in $M^{n-1} \backslash M^n$. Thus the number of nonassociate atoms of $R$ and the number of nonassociate atoms in $M \backslash M^2$ are each a nonzero multiple of $|V|$, while for $n \geq 3$ the number of nonassociate atoms in $M^{n-1} \backslash M^n$ is a multiple of $|V|$, possibly $0$. \item[(4)] The following are equivalent. \begin{itemize} \item[(a)] $M^2$ is universal. \item[(b)] The atoms of $R$ are given by a single coset of $V$. \item[(c)] The atoms of $M \backslash M^2$ are given by a single coset of $V$. \end{itemize} \end{itemize} \end{thm} \begin{proof} (1) Suppose that $x$ is an atom. If $\sigma x = ab$ where $a,b \in M$, then $x = (\sigma^{-1}a)b$ with $\sigma^{-1}a \in M$, a contradiction. So $\sigma x$ is an atom. Hence if $\sigma x$ is an atom, so is $x = \sigma^{-1} (\sigma x)$. (2) Suppose $x \in M^n$, so $x = \sum_{i=1}^k m_{i_1} \cdots m_{i_n}$ for some $m_{i_j} \in M$. Then $\sigma x = \sum_{i=1}^k (\sigma m_{i_1}) m_{i_2} \cdots m_{i_n} \in M^n$. Thus if $\sigma x \in M^n$, $x = \sigma^{-1} (\sigma x) \in M^n$. (3) This follows from (1) and (2) and the remarks concerning $V$ given in the paragraph preceding Theorem \ref{Vcard}. (4) $(a) \implies (b)$ Suppose that $M^2$ is universal. By Theorem \ref{QLdoms} (5), $R' = [M:M]$ is a DVR with maximal ideal $M=\pi [M:M]$ for any $\pi \in M \backslash M^2$. Thus the atoms of $R$ have the form $\sigma \pi$ where $\sigma \in U([M:M])$. So the single coset $V \pi$ gives the atoms of $R$. $(b) \implies (c)$ Clear.
$(c) \implies (a)$ We have that for any $x \in M \backslash M^2$, the elements of $M \backslash M^2$ have the form $\sigma x$ where $\sigma \in U([M:M])$. So for $a,b \in M \backslash M^2$, $a = u_1 x$ and $b = u_2 x$ for some $u_1,u_2 \in U([M:M])$. So $ab = (u_1 x) (u_2 x) = (u_1 u_2 x) x \in MRx$. Thus $M^2 \subseteq MRx + M^3$. By Nakayama's Lemma $M^2 = MRx$. By Theorem \ref{QLdoms} (5), $M^2$ is universal. \end{proof} Parts of Theorem \ref{AtomEquiv} were proved by Cohen and Kaplansky \cite{CK}. They noted (1), (4) $(a) \iff (b)$, and that the number of nonassociate atoms is a multiple of $|V|$. \begin{cor} \label{2p} Let $(R,M)$ be a local CK domain that is not a DVR. \begin{itemize} \item[(1)] \cite[Corollary, page 475]{CK} If the number of nonassociate atoms of $R$ is prime, then $M^2$ is universal. \item[(2)] If the number of nonassociate atoms in $M \backslash M^2$ is prime, then $M^2$ is universal. \item[(3)] Suppose that $R$ has exactly $2p$ nonassociate atoms, where $p$ is prime, and that $|\overline{R}| \neq 2$ where $\overline{R} = R/M$. Then $R$ has no atoms in $M^2$. \end{itemize} \end{cor} \begin{proof} (1) and (2) follow immediately from Theorem \ref{AtomEquiv} (4). (3) Here $|V| \geq |\overline{R}| \geq 3$ by Theorem \ref{Vcard} and $|V| \; | \; 2p$, so $|V|=p$ or $2p$. If $|V|=2p$, every atom is in $M \backslash M^2$ (in fact, $M^2$ is universal). So suppose that $|V|=p$. So there are either $2p$ nonassociate atoms in $M \backslash M^2$ or $p$ nonassociate atoms in $M \backslash M^2$ and $p$ nonassociate atoms in $M^2$. In the first case every atom is in $M \backslash M^2$. The second case cannot occur since by (2) if the number of nonassociate atoms in $M \backslash M^2$ is prime, $M^2$ is universal and hence all atoms lie in $M \backslash M^2$. \end{proof} \begin{cor} Let $(R,M)$ be a local CK domain that is not a DVR and let $\overline{R} = R/M$.
Suppose that there are fewer than $2|\overline{R}|$ nonassociate atoms in $M \backslash M^2$. Then $R$ has exactly $|\overline{R}|+1$ nonassociate atoms and $M^2$ is universal. \end{cor} \begin{proof} If $M^2$ is not universal, then there are at least two cosets of $V$ containing atoms in $M \backslash M^2$, so there are at least $2 |V| \geq 2 |\overline{R}|$ nonassociate atoms in $M \backslash M^2$. Thus $M^2$ must be universal. Hence $R$ has $1 + |\overline{R}| + \cdots + |\overline{R}|^{k-1} = (|\overline{R}|^k-1) / (|\overline{R}|-1)$ nonassociate atoms where $k = \text{dim}_{\overline{R}} \; M / M^2$. But $1 + |\overline{R}| + |\overline{R}|^2 \geq 2 |\overline{R}|$, and $k \geq 2$ since $R$ is not a DVR, so we must have $k=2$, in which case $R$ has $|\overline{R}|+1$ nonassociate atoms. \end{proof} Let $R = GF(2)[[X^2,X^3]] = GF(2) + GF(2)[[X]]X^2$, so $\overline{R} = GF(2)$ and hence $|\overline{R}|=2$ where $\overline{R} = R/M$, $M$ the maximal ideal of $R$. By Example \ref{CKex}, $R$ has exactly $4 = 2 |\overline{R}|$ nonassociate atoms, but $M^2$ is not universal. We can now improve on Theorem \ref{QLatom} (6) which stated that if $(R,M)$ is a local CK domain with an atom in $M^2$, then the number of nonassociate atoms of $R$ is at least $|\overline{R}| (|\overline{R}|^k-1) / (|\overline{R}|-1)+1 = (|\overline{R}|^{k+1}-1) / (|\overline{R}|-1)$ where $\overline{R} = R/M$ and $k = \text{dim}_{\overline{R}} M / M^2$. \begin{cor} Let $(R,M)$ be a local CK domain with an atom in $M^2$. Let $\overline{R} = R/M$ and $k = \text{dim}_{\overline{R}} M / M^2$. Then the number of nonassociate atoms of $R$ is at least $|\overline{R}| (|\overline{R}|^k-1) / (|\overline{R}|-1)+|\overline{R}| = (|\overline{R}|^{k+1}-1) / (|\overline{R}|-1) + |\overline{R}|-1$. If further $M$ is the maximal ideal of $[M:M]$, there are at least $(|\overline{R}|^{k+1}-1) / (|\overline{R}|-1) + |\overline{R}|$ nonassociate atoms. \end{cor} \section{Examples} This section consists of examples.
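Before the ring-theoretic examples, here is a quick computational sanity check (purely illustrative, not part of the mathematics above): it verifies the identity between the two expressions for the lower bound in the last corollary, and the inequality on the geometric series $1 + |\overline{R}| + \cdots + |\overline{R}|^{k-1}$ used in the proof of the preceding corollary.

```python
# Numerical check of two counting facts used above (illustrative sketch):
# (i) q*(q^k - 1)/(q - 1) + q  ==  (q^{k+1} - 1)/(q - 1) + q - 1,
# (ii) the atom count (q^k - 1)/(q - 1) is below 2q exactly when k = 2.
def geom(q, k):
    # number of nonassociate atoms when M^2 is universal: (q^k - 1)/(q - 1)
    return (q**k - 1) // (q - 1)

for q in [2, 3, 4, 5, 7, 8, 9]:      # q = |R/M|, a prime power
    for k in range(2, 7):             # k = dim_{R/M} M/M^2
        lower1 = q * geom(q, k) + q   # first form of the lower bound
        lower2 = geom(q, k + 1) + q - 1  # second form of the lower bound
        assert lower1 == lower2       # the identity in the last corollary
        assert (geom(q, k) < 2 * q) == (k == 2)
print("counting identities verified")
```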
We begin by stating the following example from \cite{AMO} showing for each $n \geq 2$ the existence of a local CK domain $(R,M)$ with an atom in $M^n \backslash M^{n+1}$, thus answering a question raised by Cohen and Kaplansky \cite{CK}. While not noted in \cite{AMO}, we show here that $M^{2n}$ is universal while $M^{2n-1}$ is not. Thus in a local CK domain $(R,M)$, the least power of $M$ that is universal can be arbitrarily large. \begin{ex} (\cite[Example 7.3]{AMO}) \label{AndMottEx} Let $K$ be a finite field and let $n \geq 2$. Then there is a complete local CK domain $(R,M)$ with $R/M \cong K$ and an atom $f \in M^n \backslash M^{n+1}$. Moreover, no element of $M^{n+1}$ is an atom. Here $M^{2n}$ is universal but $M^{2n-1}$ is not universal. Let $f_n \in K[Y]$ be irreducible of degree $n$. Let $F$ be a field extension of $K$ with $[F:K]=n+1$ and let $1,y,\ldots,y^{n}$ be a $K$-basis for $F$. For $i$, $1 \leq i \leq n-1$, put $V_i=K \cdot 1 + K \cdot y + \cdots + K \cdot y^i$ and $R = K + V_1 X + \cdots + V_{n-1}X^{n-1}+F[[X]]X^n$. So $R$ is a local CK domain with maximal ideal $M=V_1 X + \cdots + V_{n-1}X^{n-1}+F[[X]]X^n$. For $i \geq n$, $M^i = F[[X]]X^i$. Let $f=f_n(y)X^n$. Then $f \in M^n \backslash M^{n+1}$ is an atom. Let $g \in R$ with $\text{ord}(g)=j$. Then $Rg \supseteq M^n g = F[[X]]X^ng = F[[X]]X^{n+j}=M^{n+j}$. This shows that there is no atom in $M^{n+1}=F[[X]]X^{n+1}$ (take $g=X$) and hence that $M^{2n}=F[[X]]X^{2n}$ is universal. However, $M^{2n-1} \not \subseteq Rf$, so $M^{2n-1}$ is not universal. For suppose $M^{2n-1} \subseteq Rf$. Let $g \in Rf$ with $\text{ord}(g)=2n-1$, say $g = rf$ where $\text{ord}(r)=n-1$. Then the leading coefficient of $g$ lies in the $n$-dimensional $K$-subspace $V_{n-1}f_n(y) = \{vf_n(y) \; | \; v \in V_{n-1}\} \subsetneq F$, contradicting our assumption that $F[[X]]X^{2n-1} = M^{2n-1} \subseteq Rf$.
\end{ex} For the case $n=2$, the ring $R$ has the form $R = K + WX + F[[X]]X^2$ where $K \subsetneq F$ is a field extension and $W$ is a $K$-subspace of $F$. As we will see this example has exactly $8$ nonassociate atoms with $2$ atoms in $M^2$ where $M$ is the maximal ideal of $R$. Thus we begin (Example \ref{GenEx}) with a careful study of quasilocal domains of the form $R = K + WX + F[[X]]X^2$ where $K \subseteq F$ is an arbitrary field extension and $W$ is a $K$-subspace of $F$ (possibly $0$). Such a domain is a BFD and is a CK domain if and only if $K =W = F$ or $F$ is finite. We completely determine the atoms of $R$ and the atoms of $R$ lying in $M^2$, if any. We determine the cardinality of the set of nonassociate atoms of $R$ that lie in $M \backslash M^2$ and the cardinality of the set of nonassociate atoms of $R$ that lie in $M^2$. For $R$ a CK domain, these numbers are given in Example \ref{CKex}. We then give our example (Example \ref{8atom}) of a local CK domain with $8$ atoms, 2 of which are in $M^2$. As previously remarked, this contradicts a statement of Cohen and Kaplansky. We end this section by giving a construction of a family of local CK domains with precisely $3$ nonassociate atoms. \begin{ex} \label{GenEx} Let $K \subseteq F$ be a field extension and let $W$ be a $K$-subspace of $F$, possibly $0$. Let $R = K + WX + F[[X]]X^2$. So $R$ is a quasilocal domain with maximal ideal $M = WX + F[[X]]X^2$. For $n \geq 1$, $W^n :=\{w_1 \cdots w_n | w_i \in W\}$ and $KW^n$ denotes the $K$-subspace of $F$ spanned by $W^n$. Now $\bigcap_{n=1}^{\infty} M^n =0$, so $R$ is a BFD. We have $R$ is Noetherian if and only if $[F:K] < \infty$, and $R$ is a CK domain if and only if $R$ is a DVR (that is, $F=W=K$) or $F$ is finite. Furthermore, $R$ has residue field $\overline{R}=R/M=K$. 
The quotient field of $R$ is $F[[X]][X^{-1}]$, its complete integral closure is $R^c=F[[X]]$, and its integral closure is $R' = L + F[[X]]X$ where $L$ is the algebraic closure of $K$ in $F$. We have that $[M:M]=[W:_F W]+F[[X]]X$ where $K \subseteq [W:_F W] \subseteq F$ and $[W:_F W] = \{x \in F\; | \; xW \subseteq W \}$ is an integral domain. Additionally, $U(R) = \{\sum_{n=0}^{\infty} a_n X^n \in R \; | \; a_0 \neq 0 \}$ and \begin{align*} U([M:M]) &= \{\sum_{n=0}^{\infty} a_n X^n \in [M:M] \; | \; a_0 \in U([W:_F W]) \} \\ &= \{\sum_{n=0}^{\infty} a_n X^n \in F[[X]] \; | \; a_0 \in U([W:_F W]) \}. \end{align*} We have the short exact sequence $$0 \to U(R^c) / U(R) \to G(R) \to G(R^c) \to 0$$ which splits since $G(R^c) \cong \mathbb{Z}$, so $G(R) \cong \mathbb{Z} \oplus U(R^c) / U(R)$. Now $U(R^c) /U(R) \cong F^* / K^* \oplus F/W$, for the map $$\sum_{n=0}^{\infty} a_n X^n \mapsto (a_0 K^*, a_0^{-1} a_1 + W),$$ $\sum_{n=0}^{\infty} a_n X^n \in F[[X]]$, $a_0 \neq 0$, is a homomorphism with kernel $U(R)$. With this identification $V:=U([M:M]) / U(R) \cong U([W:_F W]) / K^* \oplus F/W$. Let $f = a_n X^n + a_{n+1}X^{n+1}+ \cdots \in R$ where $n \geq 0$ and $a_n \neq 0$. Then $f \sim a_n X^n + a_{n+1}X^{n+1}$. Suppose that $a X^n + bX^{n+1} \sim c X^n + d X^{n+1}$ in $R$, $a, c \neq 0$. Then $a c^{-1} \in K^*$, so after multiplying $cX^n + dX^{n+1}$ by $ac^{-1}$, it suffices to determine when $aX^n + bX^{n+1} \sim aX^n + d X^{n+1}$. But this holds if and only if $(aX^n + bX^{n+1}) (aX^n + dX^{n+1})^{-1} \in U(R)$ $\iff$ $\frac{b}{a} - \frac{d}{a} \in W$ $\iff$ $b + aW = d + aW$. Here $aW$ is a $K$-subspace of $F$. Let $\{a_{\alpha}\}_{\alpha \in \Lambda}$ be a complete set of representatives of $F^* / K^*$. Equivalently, we have that $\{a_{\alpha}\}_{\alpha \in \Lambda}$ is a complete set of representatives of the one-dimensional $K$-subspaces of $F$ ($a_{\alpha}K^* \leftrightarrow K a_{\alpha}$). Let $\{b_{\beta}\}_{\beta \in \Gamma}$ be a complete set of representatives of $F/W$.
For $a \in F^*$, $\{ab_{\beta}\}_{\beta \in \Gamma}$ is a complete set of representatives of $F/aW$. We have $|\Lambda| = |F^* / K^*|$ and $|\Gamma| = |F/W| = |F/aW|$. For $W \neq 0$, we let $\{a_{\alpha}\}_{\alpha \in \Omega} \subseteq \{a_{\alpha}\}_{\alpha \in \Lambda}$ be a complete set of representatives of the one-dimensional $K$-subspaces of $W$, or equivalently, of the cosets $\{w K^* \; | \; w \in W \backslash \{0\}\}$. Suppose $n \geq 2$. Now $a_{\alpha}X^n + bX^{n+1} \sim a_{\beta}X^n + dX^{n+1} \implies \alpha = \beta$ and $a_{\alpha}X^n + bX^{n+1} \sim a_{\alpha}X^n + dX^{n+1} \iff b + a_{\alpha}W = d + a_{\alpha}W$. Thus we have that the set $\{a_{\alpha}X^n + a_{\alpha}b_{\beta}X^{n+1}\}_{(\alpha,\beta) \in \Lambda \times \Gamma}$ is a complete set of representatives for the equivalence classes of associate elements of the form $aX^n + bX^{n+1}$, $0 \neq a \in F$. Next suppose that $n=1$ and $W \neq 0$. Then $\{a_{\alpha}X + a_{\alpha}b_{\beta}X^2\}_{(\alpha,\beta)\in \Omega \times \Gamma}$ is a complete set of representatives for the equivalence classes of associate elements of the form $aX + bX^2$ where $0 \neq a \in W$. We next determine the atoms of $R$. Let $f = a_n X^n + a_{n+1}X^{n+1} + \cdots \in R$ where $a_n \neq 0$, so $\text{ord}(f)=n$. Now $f$ is a unit $\iff$ $\text{ord}(f)=0$. If $n = \text{ord}(f)=1$, $f$ is an atom. If $\text{ord}(f) \geq 4$, $f$ is never an atom. Suppose $W \neq 0$. Let $0 \neq w \in W$. Let $\text{ord}(f)=3$. Then $f = w X (w^{-1}a_3 X^2+w^{-1}a_4X^3 + \cdots )$ and hence $f$ is not an atom. For $W=0$ and $\text{ord}(f)=3$, $f$ is always an atom. We next determine when $f$ is an atom for $\text{ord}(f)=2$. Now $f = a_2X^2+a_3X^3 + \cdots$ is an atom $\iff$ $a_2 X^2 + a_3 X^3$ is an atom. Since $f$ is not an atom $\iff$ $a_2 \in W^2$, we have that $a_2 X^2 + a_3 X^3$ is an atom $\iff$ $a_2 \in F \backslash W^2$. Case $W = 0$.
So $R = K + F[[X]]X^2$, $M^n = F[[X]]X^{2n}$, $[M:M]=F[[X]]$, $G(R) \cong \mathbb{Z} \oplus F^* /K^* \oplus F$ and $V \cong F^* / K^* \oplus F$, under this identification. Also, $\{a_{\alpha}X^2 + a_{\alpha}bX^3 \; | \; \alpha \in \Lambda, b \in F\}$ or $\{a_{\alpha}X^2 + bX^3 \; | \; \alpha \in \Lambda, b \in F\}$ is a complete set of nonassociate atoms of $R$ of order $2$. Hence this set of nonassociate atoms has cardinality $|F^* /K^*||F|$. Likewise $\{a_{\alpha}X^3 + a_{\alpha}bX^4 \; | \; \alpha \in \Lambda, b \in F\}$ or $\{a_{\alpha} X^3 + bX^4 \; | \; \alpha \in \Lambda, b \in F\}$ is a complete set of nonassociate atoms of order $3$ and has cardinality $|F^*/K^*||F|$. So the set of nonassociate atoms of $R$ has cardinality $2|F^*/K^*||F|$. There are no atoms in $M^2$. Here $M^2$ is not universal, but $M^3$ is universal. Case $W \neq 0$. \begin{itemize} \item[a)] $W=F$, so $R = K + F[[X]]X$. Here $M^2$ is universal and $\{a_{\alpha}X\}_{\alpha \in \Lambda}$ is a complete set of nonassociate atoms of $R$. All atoms are in $M \backslash M^2$. The cardinality of the set of atoms is $|F^*/K^*|$. We can identify $V$ with $F^*/K^*$ where $G(R) \cong \mathbb{Z} \oplus F^*/K^*$. \item[b)] $W \neq F$, so $0 \subsetneq W \subsetneq F$. Here atoms have order $1$ or order $2$, so there are no atoms in $M^3$. We see that $\{a_{\alpha}X + a_{\alpha}b_{\beta}X^2\}_{(\alpha,\beta) \in \Omega \times \Gamma}$ is a complete set of nonassociate atoms of order $1$, of course, all lying in $M \backslash M^2$. So this gives $|\Omega| |F/W|$ nonassociate atoms. Now $aX^2 + bX^3$ is an atom $\iff$ $a \notin W^2$. Thus $\{a_{\alpha}X^2 + a_{\alpha}b_{\beta}X^3 \; | \; a_{\alpha} \in F^* \backslash W^2, \beta \in \Gamma\}$ is a complete set of nonassociate atoms of order $2$. The cardinality of this set is $|\{a_{\alpha} \; | \; \alpha \in \Lambda, a_{\alpha} \in F^* \backslash W^2\}| |F/W|$.
Note that $|\{a_{\alpha} \; | \; \alpha \in \Lambda, a_{\alpha} \in F^* \backslash W^2\}| = |\{aK^* \; | \; a \in F^* \backslash W^2\}|=|\{Ka \; | \; a \in F^* \backslash W^2\}|$. So the cardinality of the set of nonassociate atoms of $R$ is $|\Omega| |F/W| + |\{a_{\alpha} \; | \; \alpha \in \Lambda, a_{\alpha} \in F^* \backslash W^2\}| |F/W|$. Note that the atom (of order 2) $a_{\alpha} X^2 + a_{\alpha} b_{\beta}X^3$ is in $M^2 = KW^2 X^2 + F[[X]]X^3 \iff a_{\alpha} \in KW^2 \backslash W^2$. Thus there is an atom in $M^2$ $\iff$ $W^2 \subsetneq KW^2$. The set $\{a_{\alpha} X^2 + a_{\alpha}b_{\beta}X^3 \; | \; a_{\alpha} \in KW^2 \backslash W^2, \beta \in \Gamma\}$ is a complete set of nonassociate atoms of $R$ in $M^2$. The cardinality of this set is $|\{a_{\alpha} \; | \; \alpha \in \Lambda, a_{\alpha} \in KW^2 \backslash W^2\}| |F/W|$. Here $M^2$ is never universal, but $M^4$ is always universal. Let $f$ be an atom of $R$. If $\text{ord}(f)=1$, $Rf \supseteq M^3 = KW^3X^3 + F[[X]]X^4$. So suppose $\text{ord}(f)=2$, so $f \sim aX^2 + bX^3$ where $a \in F^* \backslash W^2$ and $b \in F$. Now $Rf \supseteq M^3 \implies F = Wa$. Conversely, if $a \in F^* \backslash W^2$ with $F = aW$, then $R(aX^2 + bX^3) \supseteq M^3$ for any $b \in F$. Thus $M^3$ is universal if and only if $F=aW$ for each $a \in F^* \backslash W^2$. However, $F=aW$ $\iff$ $W = a^{-1}F = F$. But we are assuming that $W \neq F$. Thus $M^3$ is universal if and only if $F=W^2$. \end{itemize} \end{ex} We summarize the results for Example \ref{GenEx} for the case where $R$ is a CK domain. \begin{ex} \label{CKex} Let $R = K+WX + F[[X]]X^2$ where $K \subseteq F$ is an extension of finite fields and $W$ is a $K$-subspace of $F$. Let $M$ be the maximal ideal of $R$. We have $R^c = R' = F[[X]]$ and $[M:M]=[W:_F W] + F[[X]]X$. Here $[W:_F W]$ is an intermediate field of $K \subseteq F$.
Let $\{a_{\alpha}\}_{\alpha \in \Lambda}$ be a complete set of representatives of $F^*/K^*$ (or equivalently, of the one-dimensional $K$-subspaces of $F$). Let $\Omega = \{\alpha \in \Lambda \; | \; a_{\alpha} \in W\}$. So $|\Lambda| = (|F|-1)/(|K|-1)$ and $|\Omega| = (|W|-1)/(|K|-1)$. Let $\{b_{\beta}\}_{\beta \in \Gamma}$ be a complete set of representatives of $F/W$, so $|\Gamma|=|F|/|W|$. Here $G(R) \cong \mathbb{Z} \oplus F^* / K^* \oplus F/W$ and we can identify $V$ with $[W:_F W]^* / K^* \oplus F/W$. So $|V| = ((|[W:_F W]|-1) / (|K|-1) ) (|F| / |W|)$. \begin{itemize} \item[(1)] $R$ is a DVR $\iff$ $K=W=F$. (Here we don't need that $F$ is finite.) \item[(2)] $M^2$ is universal $\iff$ $W=F$. In this case $\{a_{\alpha}X\}_{\alpha \in \Lambda}$ is a complete set of nonassociate atoms of $R$. So the number of nonassociate atoms is $(|F|-1)/(|K|-1)$. There is one $V$-class of nonassociate atoms. \item[(3)] Suppose $W=0$, so $R=K+F[[X]]X^2$. Then $\{a_{\alpha}X^2 + bX^3 \; | \; \alpha \in \Lambda, b \in F\}$ (resp., $\{a_{\alpha}X^3 + bX^4 \; | \; \alpha \in \Lambda, b \in F\}$) is a complete set of nonassociate atoms of $R$ of order $2$ (resp., order $3$). So the number of nonassociate atoms of $R$ is $2 ((|F|-1)/(|K|-1)) |F|$. We have $M^3$ is universal, but $M^2$ is not. There are no atoms in $M^2$. Here $[M:M]=F[[X]]$ and $V \cong F^* /K^* \oplus F$. So $|V| = ((|F|-1)/(|K|-1))|F|$ and there are two $V$-classes of nonassociate atoms. \item[(4)] Suppose that $0 \subsetneq W \subsetneq F$. Then $\{a_{\alpha}X + a_{\alpha}b_{\beta}X^2 \; | \; \alpha \in \Omega, \beta \in \Gamma \}$ is a complete set of nonassociate atoms of $R$ of order $1$. Their cardinality is $((|W|-1)/(|K|-1))(|F|/|W|)$. And $\{a_{\alpha}X^2 + a_{\alpha} b_{\beta}X^3 \; | \; \alpha \in \Lambda, a_{\alpha} \in F^* \backslash W^2, \beta \in \Gamma \}$ is a complete set of nonassociate atoms of $R$ of order $2$.
Their number is $m (|F|/|W|)$ where $m = |\{a_{\alpha} \; | \; \alpha \in \Lambda, a_{\alpha} \in F^* \backslash W^2\}| = |\{aK^* \; | \; a \in F^* \backslash W^2\}| = |\{Ka \; | \; a \in F^* \backslash W^2 \}|$. So $R$ has $((|W|-1)/(|K|-1)+m)(|F|/|W|)$ nonassociate atoms. There is an atom in $M^2$ (and hence in $M^2 \backslash M^3$) if and only if $W^2 \subsetneq KW^2$. In this case $\{a_{\alpha}X^2 + a_{\alpha} b_{\beta}X^3 \; | \; \alpha \in \Lambda, a_{\alpha} \in KW^2 \backslash W^2, \beta \in \Gamma \}$ is a complete set of nonassociate atoms in $M^2$. The cardinality of this set is $m' (|F|/|W|)$ where $m' = |\{a_{\alpha} \; | \; \alpha \in \Lambda, a_{\alpha} \in KW^2 \backslash W^2 \}|=|\{aK^* \; | \; a \in KW^2 \backslash W^2 \}|=|\{Ka \; | \; a \in KW^2 \backslash W^2 \}|$. We have $M^3$ is universal if and only if $F=W^2$. Otherwise $M^4$ is universal. Here $K \subseteq [W:_F W] \subseteq F$ is an intermediate field and $|V| = ((|[W:_F W]|-1)/(|K|-1))(|F|/|W|)$. \end{itemize} \end{ex} We specialize further to the case where $W$ is an intermediate field $L$, $K \subseteq L \subseteq F$, with $F$ still finite. \begin{ex} \label{Wint} Let $L$ be an intermediate field of the field extension $K \subseteq F$ where $F$ is finite and let $R = K + LX + F[[X]]X^2$. So $M = LX + F[[X]]X^2$ is the maximal ideal of $R$ and $[M:M]=L + F[[X]]X$. Here $M^4$ is universal, but $M^3$ is universal $\iff$ $M^2$ is universal $\iff$ $L=F$. There are no atoms in $M^2$. We see that $R$ has $((|F|-1) / (|K|-1)) (|F| /|L|)$ nonassociate atoms and $|V| = ((|L|-1) / (|K|-1))(|F|/|L|)$, so there are $(|F|-1) / (|L|-1)$ $V$-classes of nonassociate atoms. \end{ex} We now give our example of a local CK domain $(R,M)$ with $8$ nonassociate atoms having an atom in $M^2$. \begin{ex} \label{8atom} Let $\{1,y,y^2\}$ be a GF(2)-basis for GF($2^3$) where $y$ is a zero of the irreducible polynomial $Y^3+Y+1 \in GF(2)[Y]$.
Let $W$ be the subspace of GF($2^3$) spanned by $1$ and $y$ and $R = GF(2)+WX + GF(2^3)[[X]]X^2$. (This is \cite[Example 7.3]{AMO} for the case $n=2$.) Here $W^2 = \{0,1,y,1+y,y^2,1+y^2,y+y^2\}$, so $GF(2)W^2 = GF(2^3)$. Now $\{1,y,1+y,y^2,1+y^2,y+y^2,1+y+y^2 \}$ is a complete set of representatives of the one-dimensional $GF(2)$-subspaces of $GF(2^3)$ and those lying in $W$ are $\{1,y,1+y\}$. We take $0,1+y+y^2$ as a complete set of representatives of $GF(2^3) / W$. So the order $1$ atoms are $X, X+(1+y+y^2)X^2, yX, yX+y(1+y+y^2)X^2 = yX+(1+y^2)X^2, (1+y)X, \text{ and } (1+y)X + (1+y)(1+y+y^2)X^2 = (1+y)X + yX^2$. These $6$ atoms are in $M \backslash M^2$. Now $GF(2) W^2 \backslash W^2 = GF(2^3) \backslash W^2 = \{1+y+y^2\}$. So the two remaining atoms of $R$ are $(1+y+y^2)X^2$ and $(1+y+y^2)X^2 + (1+y+y^2)^2X^3 = (1+y+y^2)X^2 + (1+y)X^3$, both of which lie in $M^2 \backslash M^3$. Here $[W:_{GF(2^3)} W] = GF(2)$, since $1 \in W$ forces $[W:_{GF(2^3)} W] \subseteq W$ and the only subfield of $GF(2^3)$ contained in $W$ is $GF(2)$; so $[M:M]=GF(2) + GF(2^3)[[X]]X$. So we can take $1$ and $1+(1+y+y^2)X$ as representatives of $V = U([M:M]) /U(R)$. Here the $V$-classes are $\{X, X(1+(1+y+y^2)X)\}$, $\{yX, yX(1+(1+y+y^2)X)\}$, $\{(1+y)X, (1+y)X(1+(1+y+y^2)X)\}$, and $\{(1+y+y^2)X^2, (1+y+y^2)X^2(1+(1+y+y^2)X)\}$. Finally, since $GF(2^3) \neq W^2$, $M^3$ is not universal, but $M^4$ is universal.
Then $(R,M)$ is a local CK domain with $M^4$ universal (but $M^3$ is not universal). Here $|W^2| = \frac{p^{3n}}{2}+p^{2n}-\frac{p^n}{2}$, so $|\text{GF}(p^{3n}) \backslash W^2 | = p^{3n} - \left( \frac{p^{3n}}{2}+p^{2n}-\frac{p^n}{2} \right) = \frac{p^n (p^n-1)^2}{2}$. So $|\{\text{GF}(p^{n})^*a \; | \; a \in \text{GF}(p^{3n}) \backslash W^2 \} | = \frac{p^n(p^n-1)}{2}$. So from Example \ref{GenEx} we have \begin{align*} |V| &= p^n, \\ p^n(p^n+1) & \text{ nonassociate atoms in } M \backslash M^2, \\ \frac{p^{2n}(p^n-1)}{2} & \text{ nonassociate atoms in } M^2 \backslash M^3,\text{ and }\\ \frac{p^n(p^{2n}+p^n+2)}{2} & \text{ total nonassociate atoms.} \end{align*} For small values of $p^n$ we have: \begin{center} \begin{tabular}{ | c | c | c | c |} \hline $p^n$ & Atoms in $M \backslash M^2$ & Atoms in $M^2$ & Atoms \\ \hline 2 & 6 & 2 & 8 \\ \hline 3 & 12 & 9 & 21 \\ \hline 4 & 20 & 24 & 44 \\ \hline 5 & 30 & 50 & 80 \\ \hline 7 & 56 & 147 & 203 \\ \hline 8 & 72 & 224 & 296 \\ \hline 9 & 90 & 324 & 414 \\ \hline \end{tabular} \end{center} \end{ex} After pointing out an error in \cite{CK}, it is time to point out a mathematical error and a typographical error in \cite{AMO} and to note a partial correction. In \cite[Theorem 7.1]{AMO} it is stated that for $R = K+V_1X + \cdots + V_{n-1} X^{n-1}+F[[X]]X^n$ where $K \subseteq F$ is a field extension and $V_1,\ldots,V_{n-1}$ are $K$-subspaces of $F$ with $V_i V_j \subseteq V_{i+j}$ for $i+j \leq n-1$, we have $G(R) \cong \mathbb{Z} \oplus F^* / K^* \oplus F/V_1 \oplus \cdots \oplus F / V_{n-1}$. In the proof it is alleged that the map $\pi: U(F[[X]]) \to F^* / K^* \oplus F/V_1 \oplus \cdots \oplus F / V_{n-1}$ given by $\pi (a_0 (1+a_1 X + a_2X^2 + \cdots)) = (a_0K^*,a_1+V_1,\ldots,a_{n-1}+V_{n-1})$ is a homomorphism. However, this is only the case for $n \leq 2$. Thus we only have the isomorphism $G(K+WX + F[[X]]X^2) \cong \mathbb{Z} \oplus F^* / K^* \oplus F/W$ which is given in Example \ref{GenEx}.
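The counts in Example \ref{ExAtomsInM2} can be rechecked mechanically. The following Python sketch (purely illustrative, not part of the paper) encodes $GF(8) = GF(2)[y]/(y^3+y+1)$ as bit masks to recompute $W^2$ for the case $p^n = 2$ (which is Example \ref{8atom}), and confirms that the closed-form counts reproduce the rows of the table above.

```python
# Brute-force check of Example ExAtomsInM2 for p^n = 2, plus the table's
# closed-form atom counts (a verification sketch, not from the paper).

# GF(8) = GF(2)[y]/(y^3 + y + 1); element b2*y^2 + b1*y + b0 <-> bit mask b2b1b0.
def gf8_mul(a, b):
    r = 0
    for i in range(3):            # schoolbook polynomial multiplication over GF(2)
        if (b >> i) & 1:
            r ^= a << i
    for i in (4, 3):              # reduce modulo y^3 + y + 1 (y^3 = y + 1)
        if (r >> i) & 1:
            r ^= (1 << i) | (0b011 << (i - 3))
    return r

W = {0b000, 0b001, 0b010, 0b011}            # GF(2)-span of {1, y}
W2 = {gf8_mul(a, b) for a in W for b in W}  # the set of products W^2
assert len(W2) == 7                         # |W^2| = q^3/2 + q^2 - q/2 = 7 for q = 2
assert set(range(8)) - W2 == {0b111}        # GF(8) \ W^2 = {1 + y + y^2}

def counts(q):                              # closed forms from the example, q = p^n
    m1 = q * (q + 1)                        # nonassociate atoms in M \ M^2
    m2 = q**2 * (q - 1) // 2                # nonassociate atoms in M^2 \ M^3
    return m1, m2, m1 + m2

assert counts(2) == (6, 2, 8)
assert counts(3) == (12, 9, 21)
assert counts(9) == (90, 324, 414)
print("W^2 verified for q = 2; table rows match the closed forms")
```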
Corollary 7.2 of \cite{AMO} concerns the special case where $R = K+F[[X]]X^n$. The assertion that $G(R) \cong \mathbb{Z} \oplus F^* / K^* \oplus F^{n-1}$ is only valid for $n=1,2$. Also, there is a typographical error in giving the number of nonassociate atoms of $R$ as $n |F^*/K^*| |F|^n$ where obviously the correct number is $n |F^* /K^*| |F|^{n-1}$. The proof given is correct once the typographical error is corrected. However, as we will use this example, we state the correct result with a self-contained proof. \begin{ex} \label{Corrected} Let $K \subseteq F$ be a field extension, $n \geq 1$, and $R = K+F[[X]]X^n$. Then $R$ is a BFD but is local (resp., a CK domain) if and only if $[F:K] < \infty$ (resp., $K=F$ and $n=1$ or $F$ is finite). Let $\{a_{\alpha}\}_{\alpha \in \Lambda}$ be a complete set of representatives of $F^*/K^*$. Then $\{a_{\alpha}X^i + b_1 X^{i+1} + \cdots + b_{n-1}X^{i+n-1} \; | \; \alpha \in \Lambda, b_1,\ldots,b_{n-1} \in F, n \leq i \leq 2n-1\}$ is a complete set of nonassociate atoms of $R$. Thus $R$ has $n |F^*/K^*| |F|^{n-1}$ nonassociate atoms. We have $M^3$ is universal, but $M^2$ is not universal unless $n=1$. There are no atoms in $M^2$ where $M$ is the maximal ideal of $R$. It follows from \cite[Corollary 5.6]{AMO} that atoms have the form $u(X)X^i$ where $u(X) \in U(F[[X]])$ and $n \leq i \leq 2n-1$, and that the number of nonassociate atoms is $n \, |U(F[[X]])/U(R)|$. But we prove this directly. It is easy to see that atoms of $R$ have the form $a_i X^i + a_{i+1}X^{i+1}+ \cdots$ where $a_i \neq 0$ and $n \leq i \leq 2n-1$. But $a_i X^i + a_{i+1}X^{i+1} + \cdots = u(X) X^i$ where $u(X)=a_i + a_{i+1}X + \cdots \in U(F[[X]])$. Note that $u(X)X^i \sim v(X)X^j \iff i=j$ and $u(X)U(R) = v(X)U(R)$. But $|U(F[[X]])/U(R)| = |F^*/K^*||F|^{n-1}$ as seen by the bijection given by $\left(\sum_{m=0}^{\infty} a_m X^m \right) U(R) \leftrightarrow (a_0 K^*,a_0^{-1}a_1,\ldots,a_0^{-1}a_{n-1})$ between $U(F[[X]])/U(R)$ and $F^*/K^* \oplus F^{n-1}$.
(This map is an isomorphism only for $n \leq 2$.) \end{ex} Let $(R,M)$ be a local CK domain with $\overline{R}=R/M$ and $k = \text{dim}_{\overline{R}} M/M^2$. Then by Theorem \ref{QLatom} (6) $R$ has at least $(|\overline{R}|^k-1) / (|\overline{R}|-1)$ nonassociate atoms. Thus the number of nonassociate atoms gives an upper bound for $|\overline{R}|$ and $k$. However, for a given $|\overline{R}|$ and $k$, the number of nonassociate atoms can be arbitrarily large. We illustrate this for the case $\overline{R} \cong GF(2)$ and $k=2$. \begin{ex} (An example of a local CK domain $(R,M)$ with $\overline{R} = R/M \cong GF(2)$ and $\text{dim}_{\overline{R}} M/M^2 = 2$ having at least $2^{n-1}+1$ nonassociate atoms for $n \geq 2$) Let $f$ be a principal prime of $GF(2) [[X,Y]]$ with $\text{ord}(f) = n \geq 2$. Take $R = GF(2)[[X,Y]] / (f)$, so $R$ is a complete local CK domain with $\overline{R} \cong GF(2)$ and $\text{dim}_{\overline{R}} M/M^2 = 2$. Suppose that $R$ has $m$ nonassociate atoms $a_1,\ldots,a_m$. Taking $f_i \in GF(2) [[X,Y]]$ with $\overline{f_i} = a_i$, we have $M = (a_1)\cup \cdots \cup (a_m)$ and hence $(X,Y) = (f_1,f) \cup \cdots \cup (f_m,f) \subseteq ((f_1) + (X,Y)^n) \cup \cdots \cup ((f_m) + (X,Y)^n)$. We claim that $m \geq 2^{n-1}+1$. Put $S = GF(2)[[X,Y]] / (X,Y)^n$ and $N = (X,Y) / (X,Y)^n$. Then $N$ is a union of $m$ principal ideals. But $|S|=2^{\frac{n(n+1)}{2}}$, $|N|=2^{\frac{(n-1)(n+2)}{2}}$, and if $Sa$ is a proper principal ideal, $$|Sa| = |S| / |\text{ann}(a)| \leq |S| / |N^{n-1}| = 2^{n(n+1)/2} / 2^n = 2^{(n-1)n/2}.$$ Since the $m$ principal ideals all contain $0$ and each has at most $2^{(n-1)n/2}$ elements, $m(2^{(n-1)n/2}-1) \geq |N|-1$, so $$m \geq \frac{|N|-1}{2^{(n-1)n/2}-1} > \frac{|N|}{2^{(n-1)n/2}} = 2^{(n-1)(n+2)/2} / 2^{(n-1)n/2} = 2^{n-1},$$ and hence $m \geq 2^{n-1}+1$. So $R$ has at least $2^{n-1}+1$ nonassociate atoms. \end{ex} The following well-known diagram appears in \cite{AAZ} where examples are given to show that none of the implications can be reversed.
\[ \xymatrix{ &HFD \ar@{=>}[rd] \\ UFD \ar@{=>}[ru] \ar@{=>}[rd] & &BFD \ar@{=>}[r] &ACCP \ar@{=>}[r] &Atomic\\ &FFD \ar@{=>}[ru]} \] Let us extend this diagram for the case of a quasilocal domain $(R,M)$. \[ \xymatrix{ & CK \ar@{=>}[r] \ar@{=>}[d] \ar@{=>}@/_6pc/[lddd] & Noetherian \ar@{=>}[d] \\ HFD \ar@{=>}[r] & RBFD \ar@{=>}[rd] & M^{\omega}=0 \ar@{=>}[r] \ar@{=>}[d] & M^{\alpha}=0 \ar@{=>}[d] \\ UFD \ar@{=>}[u] \ar@{=>}[d] & & BFD \ar@{=>}[r] &ACCP \ar@{=>}[r] & Atomic \\ FFD \ar@{=>}[rru]} \] Here $R$ is an RBFD if $R$ has finite elasticity $\rho(R)$. A one-dimensional local domain is an RBFD if and only if $R$ is analytically irreducible \cite[Theorem 2.12]{AA}. Also, a one-dimensional local domain is an FFD if and only if $R$ is a DVR or $R/M$ is finite \cite[Corollary 6]{AMU}. Thus a one-dimensional local domain is a CK domain if and only if it is an FFD and an RBFD. Except for the implication $M^{\alpha}=0$ $\implies$ ACCP, we give examples of quasilocal domains showing that the implications cannot be reversed. In fact, in all cases except $M^{\omega}=0$ $\implies$ BFD we give one-dimensional quasilocal examples. (1) BFD $\notimplies M^{\omega} = 0$: Localizing \cite[Example 5.7]{HLV} gives an example of a quasilocal Krull domain $(R,M)$ with $\bigcap_{n=1}^{\infty} M^n \neq 0$. But a Krull domain is a BFD. (2) Atomic $\notimplies$ ACCP: Grams' example \cite{G} of an atomic domain not satisfying ACCP is a one-dimensional quasilocal domain. (3) ACCP $\notimplies$ BFD, $M^{\alpha} =0 \notimplies M^{\omega}=0$: take \cite[Example 2.1]{AAZ} $K[X;T]$, $K$ a field and $T$ the additive submonoid of $\mathbb{Q}^+$ generated by $\{1/p \; | \; p \text{ is prime}\}$, localized at $N = \{f \in K[X;T] \; | \; f \text{ has nonzero constant term}\}$. (4) $M^{\omega} =0 \notimplies$ Noetherian: $\mathbb{Q}+\mathbb{C}[[X]]X$. (5) HFD $\notimplies$ UFD, BFD $\notimplies$ FFD, RBFD $\notimplies$ CK, Noetherian $\notimplies$ CK: $\mathbb{R}+\mathbb{C}[[X]]X$.
(6) FFD $\notimplies$ UFD, RBFD $\notimplies$ HFD, FFD $\notimplies$ CK: $K[[X^2,X^3]]$, $K$ a field (with $K$ infinite for FFD $\notimplies$ CK). (7) BFD $\notimplies$ RBFD: any one-dimensional local domain that is not analytically irreducible. We end by investigating local CK domains with exactly three nonassociate atoms. As we know, in this case $M^2$ is universal. In the complete equi-characteristic case it is easy to completely characterize such integral domains. We begin with the following more general result. \begin{thm} Let $(R,M)$ be a complete local domain with residue field $\overline{R}$. Suppose that $M^2$ is universal and char $R =$ char $\overline{R}$. So the integral closure $R'$ is a complete DVR and hence $R' \cong F[[X]]$ where $F$ is a subfield of $R'$ that maps isomorphically onto the residue field of $R'$. Then $R \cong K + F[[X]]X$ where $K = R \cap F$ is a subfield of $R$ isomorphic to $\overline{R}$. Also, $R$ is a CK domain if and only if $K=F$ (and hence $R \cong F[[X]]$ is a DVR) or $\overline{R}$ is finite. \end{thm} \begin{proof} Since $M^2$ is universal, $R'=[M:M]$ is a DVR with maximal ideal $M$ by Theorem \ref{QLdoms} (7). So $R'$ is a complete DVR with char $R' =$ char $R'/M$. So $R' \cong F[[X]]$ where $F$ is a subfield of $R'$ that maps isomorphically onto $R'/M$. Now $R'$ has maximal ideal $M=F[[X]]X$. Let $K=R \cap F$, so $F[[X]]X \subseteq R$ gives $R = K+F[[X]]X$. Since $K \subseteq F$ is integral, $K$ is a field and clearly $K$ is isomorphic to $\overline{R}$. Certainly if $K=F$ (and hence $R$ is a DVR) or $\overline{R}$ is finite, $R$ is a CK domain. Conversely, suppose that $R$ is a CK domain. If $\overline{R}$ is infinite, $R$ must be a DVR and hence $K=F$. \end{proof} \begin{cor} Let $(R,M)$ be a complete local CK domain with exactly three nonassociate atoms with char $R= $ char $\overline{R}$. Then $R \cong GF(2) + GF(2^2)[[X]]X$. 
\end{cor} \begin{proof} Now $R = GF(2)+GF(2^2)[[X]]X$ is a complete local CK domain with char $R= $ char $\overline{R}$ having exactly $|GF(2^2)^*/GF(2)^*|=3$ nonassociate atoms. Conversely, suppose that $(R,M)$ is a complete local CK domain with exactly $3$ nonassociate atoms. Since $3$ is prime, $M^2$ is universal. Since $R$ is not a DVR, $\overline{R}$ must be finite. So $R \cong GF(p^m) + GF(p^{mk})[[X]]X$ for some prime $p$ ($=$ char $\overline{R}$), $m \geq 1$, and $k \geq 2$. Now $R$ has $3 = |GF(p^{mk})^* / GF(p^m)^*|$ nonassociate atoms. So we must have $p=2$, $m=1$, and $k=2$. Thus $R \cong GF(2) + GF(2^2)[[X]]X$. \end{proof} There is another way to realize $R = GF(2)+GF(2^2)[[X]]X$. Here $GF(2^2) = \{0,1,\alpha,1+\alpha\}$ where $\alpha^2 = \alpha+1$. We claim that $$R \cong GF(2)[[X,Y]] / (X^2+XY+Y^2).$$ Here is a sketch. The map $\phi: GF(2)[[X,Y]] \to R$ given by $\phi(X)=X$ and $\phi(Y)=\alpha X$ is an epimorphism. Since $R$ is an integral domain, $\text{ker}(\phi)$ is a prime ideal of $GF(2)[[X,Y]]$. Now $\phi(X^2+XY+Y^2) = X^2 + \alpha X^2 + \alpha^2 X^2 = (1+\alpha+\alpha^2)X^2 = 0$, so $(X^2+XY+Y^2) \subseteq \text{ker}(\phi)$. But $(X^2+XY+Y^2)$ is a prime ideal and $\text{ker}(\phi)$ has height one since $R$ is one-dimensional, so $\text{ker}(\phi) = (X^2+XY+Y^2)$. Here we have realized a local CK domain with precisely $3$ nonassociate atoms as a homomorphic image of a two-dimensional regular local ring. We next generalize this result. \begin{thm} \label{Image} Let $(D,M)$ be a two-dimensional regular local domain with $M = (x_1,x_2)$ and let $f$ be a principal prime of $D$. Then $D/(f)$ is an integral domain with precisely three nonassociate atoms if and only if $D/M \cong \text{GF}(2)$ and $f = u_1x_1^2 + u_2 x_1 x_2 + u_3 x_2^2$ where $u_1,u_2,u_3$ are units. \end{thm} \begin{proof} Note that if $f \in M \backslash M^2$, $\overline{D}:=D/(f)$ is a DVR, and if $\overline{D}$ has exactly three nonassociate atoms, then $D/M \cong \text{GF}(2)$ by Theorem \ref{QLatom} (2).
Now $\overline{D}$ has precisely three nonassociate atoms $\iff$ $\overline{M} = (\overline{x_1}) \cup (\overline{x_2}) \cup (\overline{x_1}+\overline{x_2})$ $\iff$ $M = (x_1,f) \cup (x_2,f) \cup (x_1+x_2,f)$. Let $f = a_1 x_1^2 + a_2 x_1 x_2 + a_3 x_2^2$. Then $(x_1,f) = (x_1,a_3x_2^2)$, $(x_2,f) = (x_2,a_1x_1^2)$, and $(x_1+x_2,f) = (x_1+x_2,(a_3-a_2-a_1)x_1x_2)$. $(\impliedby)$ Suppose that $a_1, a_2$, and $a_3$ are units. Then $D/M \cong \text{GF}(2)$ gives that $a_3-a_2-a_1$ is a unit. So $(x_1,f) \cup (x_2,f) \cup (x_1+x_2,f) = ((x_1) + M^2) \cup ((x_2)+M^2) \cup ((x_1+x_2)+M^2) = M$ since $(x_1)+M^2$, $(x_2)+M^2$, and $(x_1+x_2)+M^2$ are the one-dimensional $D/M$-subspaces of $M / M^2$. $(\implies)$ Suppose that $$M=(x_1,f) \cup (x_2,f) \cup (x_1+x_2,f).$$ If $a_1 \in M$, $(x_2,f) \subseteq (x_2)+M^3$, so $M = ((x_1)+M^2) \cup ((x_2)+M^3) \cup ((x_1+x_2)+M^2)$. Consider $x_2+x_1^2$. Now $x_2+x_1^2 \in (x_1)+M^2$ $\implies$ $x_2 \in (x_1)+M^2$ and $x_2+x_1^2 \in (x_1+x_2)+M^2$ $\implies$ $x_2 \in (x_1+x_2)+M^2$, both contradictions. And $x_2+x_1^2 \in (x_2)+M^3$ $\implies$ $M^2 \subseteq (x_2)+M^3$, a contradiction. Interchanging $x_1$ and $x_2$ shows that $a_3 \in M$ leads to a contradiction. So $a_1$ and $a_3$ must be units. Suppose that $a_2 \in M$. Then $a_3-a_2-a_1 \in M$. Thus $M = (x_1,f) \cup (x_2,f) \cup (x_1+x_2,f)$ gives $M = ((x_1) + M^2) \cup ((x_2)+M^2) \cup ((x_1+x_2)+M^3)$. Put $y_1 = -x_1$, $y_2 = x_1+x_2$, so $y_1+y_2 = x_2$ and $(y_1,y_2)=M$. Now $M = ((y_1)+M^2) \cup ((y_2)+M^3) \cup ((y_1+y_2)+M^2)$, a contradiction. \end{proof} With regard to the element $f$ in Theorem \ref{Image}, we have the following proposition. \begin{prop} Let $(D,M)$ be a two-dimensional regular local domain with $M=(x_1,x_2)$. Suppose that $D/M \cong \text{GF}(2)$. Then \begin{itemize} \item[(1)] $f = u_1x_1^2 + u_2x_1x_2 + u_3x_2^2$ where $u_1,u_2,u_3$ are units if and only if $f = x_1^2 + x_1x_2 + x_2^2 + g$ for some $g \in M^3$.
\item[(2)] For units $u_1,u_2,u_3$ of $D$, $f = u_1x_1^2 + u_2x_1x_2 + u_3 x_2^2$ is a nonzero principal prime. \end{itemize} \end{prop} \begin{proof} (1) $(\implies)$ Since $D/M \cong \text{GF}(2)$, $u_i = 1+m_i$ for some $m_i \in M$. Then $f=x_1^2 + x_1x_2+x_2^2 + g$ where $g = m_1x_1^2 + m_2x_1x_2 + m_3 x_2^2 \in M^3$. $(\impliedby)$ Suppose $f = x_1^2 + x_1x_2 + x_2^2 + g$ where $g \in M^3$. Note that $g = ax_1^2 + bx_2^2$ for some $a,b \in M$. Then $f = x_1^2 + x_1x_2 + x_2^2 + g = (1+a)x_1^2 + x_1x_2 + (1+b)x_2^2$ where $1+a$ and $1+b$ are units. (Note that this shows that we can take $u_2=1$.) (2) Note that $D$ is a UFD and $x_1,x_2$ are principal primes. (The simple proof that a two-dimensional regular local ring is a UFD does not require the more general result that a regular local ring is a UFD.) We first note that $a_1 x_1^2 + a_2 x_1 x_2 + a_3 x_2^2 = 0$ implies $a_1,a_2,a_3 \in M$. While this follows from analytic independence, we give a simple proof. Suppose that $a_1x_1^2 + a_2 x_1 x_2 + a_3 x_2^2 = 0$ and say $a_2$ is a unit. Then $a_2 x_1 x_2 = -a_1 x_1^2 - a_3 x_2^2$ gives $x_1 | a_3 x_2^2$. So $x_1 | a_3$. Likewise $x_2 | a_1$. So dividing by $x_1 x_2$ gives $a_2 \in M$, a contradiction. Similar proofs show that $a_1,a_3 \in M$. Let $f = u_1 x_1^2 + u_2 x_1 x_2 + u_3 x_2^2$ where $u_1,u_2,u_3$ are units. So $f \neq 0$. We show that $f$ is irreducible and hence prime. Suppose that $f = (Ax_1 + Bx_2)(Cx_1 + Dx_2)$. Then $(u_1-AC)x_1^2 + (u_2 - (AD+BC))x_1x_2 + (u_3-BD)x_2^2 = 0$. So $u_1 - AC, u_2-(AD+BC), u_3-BD \in M$. Thus $AC$, $AD+BC$, $BD$ are units. Hence $A,B,C,D$ are units. But then $AD$ and $BC$ are units, so $AD+BC \in M$ since $D/M \cong \text{GF}(2)$, contradicting that $AD+BC$ is a unit. \end{proof} Let $(R,m)$ be a complete local CK domain with precisely three nonassociate atoms. Then $R/m \cong \text{GF}(2)$ and $\text{dim}_{R/m} m/m^2 = 2$, so $R$ is a homomorphic image of a two-dimensional complete regular local ring $(D,M)$ with residue field $\text{GF}(2)$.
So if $\text{char} R = 2$, $D \cong \text{GF}(2) [[X,Y]]$ while if char $R =0$, $D \cong V[[X]]$ (with $M = (2,X)$) or $D \cong V[[X,Y]] / (g)$ (with $M = (\overline{X},\overline{Y})$) where $g = 2-h$ with $h \in (2,X,Y)^2$ and $(V,2V)$ is a complete DVR with residue field $\text{GF}(2)$. In the first case char $R=2$, $R \cong \text{GF}(2)[[X,Y]] / (X^2 + XY + Y^2)$ (as $\text{GF}(2) [[X,Y]] / (X^2 + XY + Y^2)$ is such a ring and any such one is isomorphic to $\text{GF}(2) + \text{GF}(2^2)[[X]]X$). In the second case where char $R = 0$ and $2 \notin m^2$, $R \cong V[[X]] / (u_1 \cdot 4 + u_2 \cdot 2X + u_3 X^2)$ where $u_1,u_2,u_3$ are units of $V[[X]]$. In the third case where char $R=0$ and $2 \in m^2$, $R \cong V[[X,Y]] / (g,u_1X^2+u_2XY + u_3Y^2)$ where $g = 2-h$, $h \in (2,X,Y)^2$ and $u_1,u_2,u_3$ are units of $V[[X,Y]]$. \section{CK Domains with $n$ Atoms} Let $R$ be an integral domain. We say that $R$ is \emph{primefree} if $R$ has no nonzero principal primes. Let $\alpha$ be a possibly infinite cardinal number. We say that \emph{$R$ has $\alpha$ atoms} if there is a set $A$ of atoms of $R$ with $|A|=\alpha$ such that every atom of $R$ is an associate of exactly one element of $A$. In this section we are interested in local CK domains or more generally primefree CK domains with a prescribed number of atoms. However, we begin by noting that for an infinite cardinal number $\alpha$, there exists a one-dimensional local domain with $\alpha$ atoms. \begin{ex} \label{AlphaInf} (A one-dimensional local domain with $\alpha$ atoms for $\alpha$ infinite) Let $(D,N)$ be a one-dimensional local domain that is not a DVR with $|D| \leq \alpha$. Let $\{X_{\beta}\}_{\beta \in \Lambda}$ be a set of indeterminates with $|\Lambda|= \alpha$ and let $R = D(\{X_{\beta}\}_{\beta \in \Lambda})$ be the Nagata ring $D[\{X_{\beta}\}_{\beta \in \Lambda}]_{N[\{X_{\beta}\}_{\beta \in \Lambda}]}$. Then $R$ is a one-dimensional local domain with maximal ideal $M=RN$. 
As $|R|=\alpha$, $R$ has at most $\alpha$ atoms. Let $m_1,m_2$ be part of a minimal basis for $N$. Then for $\beta_1,\beta_2 \in \Lambda$, $\beta_1 \neq \beta_2$, $R(m_1+m_2X_{\beta_1},m_1+m_2X_{\beta_2}) = R(m_1,m_2)$. So $m_1+m_2X_{\beta_1}$ and $m_1 + m_2X_{\beta_2}$ are part of a minimal basis for $M$. Thus $m_1 + m_2 X_{\beta_1}$ and $m_1 + m_2 X_{\beta_2}$ are nonassociate atoms of $R$. Hence $R$ has $\alpha$ atoms. \end{ex} Thus we have the following question. \begin{question} For which natural numbers $n$ does there exist a local CK domain with $n$ atoms? \end{question} Now for $n=1$ we just have a DVR and by Theorem \ref{QLatom} (7) a local CK domain cannot have $2$ atoms. So suppose $n \geq 3$. Cohen and Kaplansky \cite{CK} showed that if $n = (p^{mk}-1) / (p^m-1)$ where $p$ is prime and $m,k \geq 1$, there is a (complete) local CK domain $(R,M)$ with $M^2$ universal having $n$ atoms. We can construct $R$ as follows. Let $D$ be a DVR with residue field $\text{GF}(p^{mk})$ and take $R$ to be the pullback of the natural map $D \to \text{GF}(p^{mk})$ along $\text{GF}(p^m) \hookrightarrow \text{GF}(p^{mk})$. So such an $R$ can have characteristic $0$ or characteristic $p$. In the equicharacteristic complete case $R$ has the form $\text{GF}(p^m) + \text{GF}(p^{mk})[[X]]X$ (Example \ref{Wint}). This was generalized in \cite{AMO} (see Example \ref{Corrected}): $R = \text{GF}(p^m) + \text{GF}(p^{mk})[[X]]X^l$, $m,k,l \geq 1,$ is a complete local CK domain with $l((p^{mk}-1)/(p^m-1))p^{mk(l-1)}$ atoms. Suppose that $n \geq 3$ is prime. Then there is a local CK domain $(R,M)$ with $n$ atoms if and only if $M^2$ is universal if and only if $n = (p^{mk}-1) / (p^m-1)$ for some prime $p$ and natural numbers $m,k$ with $k \geq 2.$ The first odd prime not of this form is $11$. So there is no local CK domain with $11$ atoms. (As we will see there are local CK domains with $n$ atoms for $n=3,4,\ldots,10$.)
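Which primes are of the form $(p^{mk}-1)/(p^m-1)$ can be checked mechanically. The following sketch (plain Python, written for this discussion and not part of the original text) enumerates, with $q = p^m$, the candidate values $(q^k-1)/(q-1) < 100$ for prime powers $q$ and $k \geq 2$, and lists the odd primes below $100$ that are not of this form.

```python
def prime_powers(limit):
    """Yield the prime powers q = p^m with 2 <= q < limit."""
    for q in range(2, limit):
        p = min(d for d in range(2, q + 1) if q % d == 0)  # least prime factor
        m = q
        while m % p == 0:
            m //= p
        if m == 1:
            yield q

# Atom counts (q^k - 1)/(q - 1) < 100 realizable by GF(q) + GF(q^k)[[X]]X, k >= 2.
reachable = set()
for q in prime_powers(100):
    k, v = 2, q + 1
    while v < 100:
        reachable.add(v)
        k += 1
        v = (q ** k - 1) // (q - 1)

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

missing_primes = sorted(p for p in range(3, 100) if is_prime(p) and p not in reachable)
print(missing_primes)
# [11, 19, 23, 29, 37, 41, 43, 47, 53, 59, 61, 67, 71, 79, 83, 89, 97]
```

Note, for example, that $73 = (2^9-1)/(2^3-1)$ and $85 = (2^8-1)/(2^2-1)$ are of this form, while $53$ and $61$ are not.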
The odd primes less than $100$ not of this form are $11$, $19$, $23$, $29$, $37$, $41$, $43$, $47$, $53$, $59$, $61$, $67$, $71$, $79$, $83$, $89$, and $97$. Clark, Gosavi, and Pollack \cite{CGP} have noted that, among the prime numbers, the set of primes of the form $(p^{mk}-1)/(p^m-1)$ has density $0$. We have constructed three infinite families of positive characteristic CK domains \begin{itemize} \item[(1)] (Example \ref{Wint}) $\text{GF}(p^m) + \text{GF}(p^{mk})X + \text{GF}(p^{mkl})[[X]]X^2$ \hspace*{\fill} $m,k,l \geq 1$ \item[(2)] (Example \ref{ExAtomsInM2}) $\text{GF}(p^m) + WX + \text{GF}(p^{3m})[[X]]X^2$ \hspace*{\fill} $m \geq 1$ \item[(3)] (Example \ref{Corrected}) $\text{GF}(p^m) + \text{GF}(p^{mk})[[X]]X^l$ \hspace*{\fill} $m,k,l \geq 1$ \end{itemize} The following table gives the number of atoms, the number of atoms in $M^2$ (for (2) necessarily not in $M^3$), and the minimal power of $M$ which is universal for each family. \begin{center} \begin{tabular}{ | c | c | c | c |} \hline \thead{Family} & Atoms & Atoms in $M^2$ & Universality \\ \hline 1 & $\frac{(p^{mkl}-1)p^{mk(l-1)}}{p^m-1}$ & 0 & \makecell{$M^4$, $M^3 = M^2$ $\iff$ $l=1$ \\ $M \iff k=l=1$} \\ \hline 2 & $\frac{p^m(p^{2m}+p^m+2)}{2}$ & $\frac{p^{2m}(p^m-1)}{2}$ & $M^4$ \\ \hline 3 & $\frac{l(p^{mk}-1)p^{mk(l-1)}}{p^m-1}$ & 0 & \makecell{$M^3$, $M^2 \iff l=1$ \\ $M \iff k=l=1$} \\ \hline \end{tabular} \end{center} Suppose that we take $l=1$ in Family $1$ or $3$, so $M^2$ is universal. For $n < 100$, this gives local CK domains with $M^2$ universal for $n=1$, 3, 4, 5, 6, 7, 8, 9, 10, 12, 13, 14, 15, 17, 18, 20, 21, 24, 26, 28, 30, 31, 32, 33, 38, 40, 42, 44, 48, 50, 54, 57, 60, 62, 63, 65, 68, 72, 73, 74, 80, 82, 84, 85, 90, 91, 98. For Family $1$ if we take $l > 1$, then $M^4$ is universal. For $n < 100$, we get local CK domains with $n=6$, 12, 20, 28, 30, 56, and $60$ atoms. For Family $2$, $M^4$ is universal and for $n < 100$ we get local CK domains with $n=8,$ 21, 44, and $80$ atoms.
For Family $3$ if we take $l \geq 2$, then $M^3$ is universal. We get local CK domains with $n$ atoms for $l=2$: $n=4$, 6, 8, 10, 14, 16, 18, 22, 24, 26, 32, 34, 38, 46, 50, 54, 58, 62, 64, 72, 74, 82, 86, 94, and 98; $l=3$: $n=12$, 27, 48, and $75$; $l=4$: $n=32$. Thus for $n < 100$, after deleting the primes $n$ with no local CK domain with $n$ atoms, it is unknown to us whether there exist local CK domains with $n$ atoms for $n=25$, 35, 36, 39, 45, 49, 51, 52, 55, 66, 69, 70, 76, 77, 78, 81, 87, 88, 92, 93, 95, 96, or 99. So far most of our examples of local CK domains have been in characteristic $p$. Coykendall and Spicer \cite{CS} and Clark, Gosavi, and Pollack \cite{CGP} gave some examples in characteristic $0$ from number theory. We begin with the following example. \begin{ex} \label{Zpw} (\cite[Corollary 3.6]{CS}). Let $d$ be a squarefree integer and $\mathbb{Z}[\omega]$ be the ring of integers of $\mathbb{Q}[\sqrt{d}]$. Let $p_1,\ldots,p_n$ be distinct primes that are inert in $\mathbb{Z}[\omega]$ and put $p=p_1 \cdots p_n$. Let $R = \mathbb{Z}[p \omega]_S$ where $S = \mathbb{Z}[p\omega] \backslash ((p_1, p \omega) \cup \cdots \cup (p_n, p \omega))$. Then $R$ is a primefree CK domain with $n$ maximal ideals $M_i = (p_i,p \omega)_S$, $i=1,\ldots,n$. Each $R_{M_i} = \mathbb{Z}[p \omega]_{(p_i,p \omega)}$ is a local CK domain with $p_i+1$ atoms. Thus $R$ has $\sum_{i=1}^n (p_i+1)$ atoms (see Theorem \ref{Lattice} (1)). Here one can show that $R_{M_i}$ is a local CK domain via Theorem \ref{CKequiv} ((1) $\iff$ (3)). It is easily checked that $M_{i_{M_i}}^2$ is universal. Since $M_{i_{M_i}}$ is doubly generated and $R_{M_i}$ has residue field $\text{GF}(p_i)$, $R_{M_i}$ has $(p_i^2-1)/(p_i-1) = p_i+1$ atoms. \end{ex} We next investigate the local CK domains with $n$ atoms for $n \leq 11$. \begin{ex} Local CK domain $(R,M)$ with $n \leq 11$ atoms.
\begin{itemize} \item[$n=1$:] $R$ is a DVR \item[$n=2$:] $R$ does not exist (Theorem \ref{QLatom} (7)) \item[$n=3$:] $3$ is prime so $M^2$ is universal with $|\overline{R}|=2$ and $\text{dim}_{\overline{R}} M / M^2 = 2$. Examples include $\mathbb{Z}[2 \omega]_{(2,2 \omega)}$ with $2$ inert in $\mathbb{Z}[\omega]$ (say, for example, $d=5$) (see Example \ref{Zpw}) in characteristic $0$ and the unique complete local example $\text{GF}(2)+\text{GF}(2^2)[[X]]X$ in positive characteristic. \item[$n=4$:] No atoms in $M^2$ and $M^3$ is universal, $|\overline{R}|=2$ or $3$ and $\text{dim}_{\overline{R}} M / M^2 = 2$. $M^2$ universal: Examples include $\mathbb{Z}[3 \omega]_{(3,3 \omega)}$ with $3$ inert in $\mathbb{Z}[\omega]$ (say for $d=5$) in characteristic $0$ and the unique complete local example $\text{GF}(3)+\text{GF}(3^2)[[X]]X$ in positive characteristic. $M^3$ universal: $\text{GF}(2) + \text{GF}(2)[[X]]X^2$ \item[$n=5$:] $5$ is prime so $M^2$ is universal with $|\overline{R}| = 4$ and $\text{dim}_{\overline{R}} M / M^2 = 2$. Here $\text{GF}(2^2) + \text{GF}(2^4)[[X]]X$ is the unique complete local example in positive characteristic. For a characteristic $0$ example, let $D$ be a DVR of characteristic $0$ with residue field $\text{GF}(2^4)$ and take $R$ to be the pullback of the natural map $D \to \text{GF}(2^4)$ along $\text{GF}(2^2) \hookrightarrow \text{GF}(2^4)$. \item[$n=6$:] There are no atoms in $M^2$ and $M^5$ is universal, $|\overline{R}| \leq 5$, and $\text{dim}_{\overline{R}} M / M^2 = 2$. $M^2$ universal: $|\overline{R}|=5$. In positive characteristic $\text{GF}(5)+\text{GF}(5^2)[[X]]X$ is the unique example.
In characteristic $0$ we can take $\mathbb{Z}[5 \omega]_{(5,5 \omega)}$ where $5$ is inert in $\mathbb{Z}[\omega]$ (say for $d=13$). $M^3$ universal: $\text{GF}(3) + \text{GF}(3)[[X]]X^2$ $M^4$ universal: $\text{GF}(2) + \text{GF}(2)X + \text{GF}(2^2)[[X]]X^2$ $M^5$ universal: no example known \item[$n=7$:] $7$ is prime so $M^2$ is universal with $|\overline{R}|=2$ and $\text{dim}_{\overline{R}} M / M^2 = 3$. Here $\text{GF}(2) + \text{GF}(2^3)[[X]]X$ is the unique complete local example in positive characteristic. For a characteristic $0$ example, take a DVR $D$ with residue field $\text{GF}(2^3)$ and take $R$ to be the pullback of the natural map $D \to \text{GF}(2^3)$ along $\text{GF}(2) \hookrightarrow \text{GF}(2^3)$. \item[$n=8$:] Here $M^7$ is universal, $|\overline{R}| \leq 7$ and $\text{dim}_{\overline{R}} M / M^2 = 2$ except for $|\overline{R}|=2$ and $\text{dim}_{\overline{R}} M / M^2 = 3$. Since there are at least $6$ atoms in $M \backslash M^2$, either all atoms are in $M \backslash M^2$ or there are $6$ atoms in $M \backslash M^2$ and $2$ in $M^2$. The case $|\overline{R}|=5$ is ruled out since this implies $|V| \geq 5$ so $|V|=8$ which gives $M^2$ universal, a contradiction. $M^2$ universal: $|\overline{R}|=7$ and $\text{dim}_{\overline{R}} M / M^2 = 2$. So $\text{GF}(7) + \text{GF}(7^2)[[X]]X$ is the unique complete local example in positive characteristic and $\mathbb{Z}[7 \omega]_{(7,7 \omega)}$ with $7$ inert in $\mathbb{Z}[\omega]$ (say $d=5$) is a characteristic $0$ example. $M^2$ not universal: Here $|V|=2$ or $4$. If $|V|=4$ all $8$ atoms are in $M \backslash M^2$. For $|V|=4$, $|\overline{R}| \leq 4$. If $|\overline{R}|=3$ or $4$, $\text{dim}_{\overline{R}} M / M^2 = 2$. $M^3$ universal example: $\text{GF}(2^2) + \text{GF}(2^2)[[X]]X^2$ (no atoms in $M^2$) $M^4$ universal example: $\text{GF}(2) + WX + \text{GF}(2^3)[[X]]X^2$ (Example \ref{8atom}) (6 atoms in $M \backslash M^2$, 2 atoms in $M^2$).
\item[$n=9$:] Here $|V|=9$ in which case $M^2$ is universal or $|V|=3$. There are either $9$ atoms in $M \backslash M^2$ or $6$ atoms in $M \backslash M^2$ and $3$ in $M^2$. $M^2$ universal: Here $|\overline{R}|=8$ and $\text{dim}_{\overline{R}} M / M^2 = 2$. We have that $\text{GF}(2^3) + \text{GF}(2^6)[[X]]X$ is the unique complete local example in positive characteristic. In characteristic $0$ we can take a DVR $D$ with residue field $\text{GF}(2^6)$ and take $R$ to be the pullback of the natural map $D \to \text{GF}(2^6)$ along $\text{GF}(2^3) \hookrightarrow \text{GF}(2^6)$. $M^2$ is not universal: So $|V|=3$. So $|\overline{R}|=2$ or $3$. And we have that $\text{dim}_{\overline{R}} M / M^2 = 2$ unless $|\overline{R}|=2$ and $\text{dim}_{\overline{R}} M / M^2 = 3$. Suppose $|\overline{R}|=3$. Then we cannot have an atom in $M^2$ since an atom in $M^2$ would give at least $4 \cdot 3$ atoms in $M \backslash M^2$. Suppose $|\overline{R}|=2$. Here either all $9$ atoms are in $M \backslash M^2$ or $6$ are in $M \backslash M^2$ and $3$ in $M^2$. If $\text{dim}_{\overline{R}} M / M^2 = 3$, all atoms are in $M \backslash M^2$. Cohen and Kaplansky claimed that for $n=9$, we cannot have atoms in $M^2$. We have been unable to verify this. \item[$n=10$:] Here $|V|=2, 5$, or 10. If $|V|=10$, $M^2$ is universal. Here $|\overline{R}|=9$ and also $\text{dim}_{\overline{R}} M / M^2 = 2$. So in the positive characteristic case $\text{GF}(3^2) + \text{GF}(3^4)[[X]]X$ is the unique complete local example. A characteristic $0$ example can be obtained via pullbacks. If $|V|=5$, then all ten atoms are in $M \backslash M^2$. The only remaining case $|V|=2$ forces $|\overline{R}|=2$ and $\text{dim}_{\overline{R}} M / M^2 = 2$ or $3$. If $\text{dim}_{\overline{R}} M / M^2 = 3$, there are at least $7$ atoms in $M \backslash M^2$ and hence all atoms are in $M \backslash M^2$. Suppose that $\text{dim}_{\overline{R}} M / M^2 = 2$.
If there is an atom in $M^2$, there are at least $6$ atoms in $M \backslash M^2$. Thus either all $10$ atoms are in $M \backslash M^2$, $8$ are in $M \backslash M^2$ and $2$ in $M^2$, or $6$ in $M \backslash M^2$ and $4$ in $M^2$. \item[$n=11$:] $11$ is prime and not of the form $(p^{mk}-1) / (p^m-1)$. So an example does not exist. \end{itemize} \end{ex} We end by considering the case of nonlocal CK domains. Using Example \ref{Zpw}, Coykendall and Spicer \cite{CS} showed that for $n=\sum_{i=1}^m (p_i+1)$ for distinct primes $p_1, \ldots, p_m$ there is a primefree CK domain with $n$ atoms. Assuming a variant of the Goldbach Conjecture (each even number $n \geq 8$ is a sum of two distinct primes), they show that for $n \geq 3$, there is a primefree CK domain with at most $3$ maximal ideals with $n$ atoms. Then Clark, Gosavi, and Pollack \cite[Theorem 1.4]{CGP} showed using a generalization of Bertrand's Postulate that each $n \geq 6$ can be written as $\sum_{i=1}^m (p_i+1)$ for distinct primes $p_1,\ldots,p_m$. Using Theorem \ref{Lattice} (2), they obtained a number of interesting results concerning primefree CK domains with $n$ atoms and $m$ maximal ideals. We list some of their results. \begin{thm} Let $m$ and $n$ be positive integers. \begin{itemize} \item[(1)] \cite[Theorem 1.6]{CGP} If $n \geq 10$ is even (resp., $n \geq 13$ is odd) and $m \in [3,\frac{n}{3}]$ (resp., $m \in [4,\frac{n}{3}])$, there is a primefree CK domain of characteristic $0$ with $n$ atoms and $m$ maximal ideals. \item[(2)] \cite[Theorem 1.11]{CGP} Let $q$ be a prime power. If $q$ is even (resp., $q$ is odd), then for all $n \geq 2q^2-q$ (resp., $n \geq 2q^2 - q + 1$), there is a primefree CK domain with $n$ atoms that is a $\text{GF}(q)$-algebra. \end{itemize} \end{thm} We end with the following result that shows that for CK domains we can usually reduce to the complete local domain case.
Here the first statement is well known while the second is just a restatement of a result of Clark, Gosavi, and Pollack \cite[Theorem 1.10]{CGP}. \begin{thm} \label{Lattice} \begin{itemize} \item[(1)] Let $R$ be a CK domain with maximal ideals $M_1,\ldots,M_n$. Then the map $$L(R) \to L(\widehat{R_{M_1}}) \times \cdots \times L(\widehat{R_{M_n}})$$ given by $$A \mapsto (\widehat{R_{M_1}}A,\ldots,\widehat{R_{M_n}}A)$$ is a multiplicative lattice isomorphism that induces an order preserving monoid isomorphism of the positive cones of the corresponding groups of divisibility \begin{align*} G_+(R) \to G_+(\widehat{R_{M_1}}) \times \cdots \times G_+(\widehat{R_{M_n}}) \\ aU(R) \mapsto (aU(\widehat{R_{M_1}}),\ldots,aU(\widehat{R_{M_n}})) \end{align*} where $\; \; \; \widehat{} \; \; $ denotes the appropriate $M$-adic completion. \item[(2)] Let $(R_1,m_1),\ldots,(R_n,m_n)$ be local CK domains with finite residue fields such that either (1) each char $R_i=0$ or (2) each $R_i$ is an $F$-algebra where $F$ is a finite field. Then there exists a CK domain $R$ with maximal ideals $M_1,\ldots,M_n$ such that either (1) char $R=0$ or (2) $R$ is an $F$-algebra, each $R_i / m_i \cong R_{M_i} / M_{i_{M_i}}$ and $\widehat{R_i} \cong \widehat{R_{M_i}}$, where $\; \; \widehat{} \; $ denotes the appropriate $M$-adic completion. Thus if each $R_i$ has $l_i$ atoms, $R$ has $l_1 + \cdots + l_n$ atoms and if no $R_i$ is a DVR, $R$ is primefree. \end{itemize} \end{thm} \begin{proof} (1) Let $A \neq 0$ be an ideal of $R$. Then $A = (A_{M_1} \cap R) \cap \cdots \cap (A_{M_n} \cap R) = (A_{M_1} \cap R) \cdots (A_{M_n} \cap R)$ where each $A_{M_i} \cap R = R$ or is $M_i$-primary. Moreover, $A$ is principal if and only if each $A_{M_i} \cap R$ is principal. This induces a multiplicative lattice homomorphism $L(R) \to L(R_{M_1}) \times \cdots \times L(R_{M_n})$ by $A \mapsto (A_{M_1},\ldots,A_{M_n})$.
Also since each nonzero ideal of $R_{M_i}$ (resp., $\widehat{R_{M_i}}$) is $M_{i_{M_i}}$-primary (resp., $\widehat{M_{i_{M_i}}}$-primary), the map $L(R_{M_i}) \to L(\widehat{R_{M_i}})$ given by $A \mapsto \widehat{R_{M_i}} A$ is a multiplicative lattice isomorphism. Clearly both maps preserve principal ideals. Thus the map $A \mapsto (\widehat{R_{M_1}}A, \ldots,\widehat{R_{M_n}}A)$ is a multiplicative lattice isomorphism that preserves principal ideals. (2) The proof of \cite[Theorem 1.10]{CGP} showed that if $R_1,\ldots,R_n$ are any primefree local CK domains with either (1) each char $R_i=0$ or (2) each $R_i$ is an $F$-algebra where $F$ is a finite field, then there is a primefree CK domain $R$ with (1) either char $R=0$ or (2) $R$ is an $F$-algebra such that $R_i / m_i \cong R_{M_i} / M_{M_i} (\cong R/M_i)$ and $\widehat{R_i} \cong \widehat{R_{M_i}}$. The condition that the $R_i$ were primefree gives that each residue field $R_i / m_i$ is finite. The same proof carries through if we allow $R_i$ to be a DVR as long as $R_i / m_i$ is finite. \end{proof} \end{document}
\begin{document}

\title{Unsupervised Graph Spectral Feature Denoising for Crop Yield Prediction\\
\thanks{Gene Cheung acknowledges the support of the NSERC grants RGPIN-2019-06271, RGPAS-2019-00110.}
}

\author{\IEEEauthorblockN{Saghar Bagheri}
\IEEEauthorblockA{\textit{Dept. of EECS} \\
\textit{York University}\\
Toronto, Canada\\
[email protected]}
\and
\IEEEauthorblockN{Chinthaka Dinesh}
\IEEEauthorblockA{\textit{Dept. of EECS} \\
\textit{York University}\\
Toronto, Canada \\
[email protected]}
\and
\IEEEauthorblockN{Gene Cheung}
\IEEEauthorblockA{\textit{Dept. of EECS} \\
\textit{York University}\\
Toronto, Canada \\
[email protected]}
\and
\IEEEauthorblockN{Timothy Eadie}
\IEEEauthorblockA{\textit{} \\
\textit{GrowersEdge}\\
Iowa, USA\\
[email protected]}
}

\maketitle

\begin{abstract}
Prediction of annual crop yields at a county granularity is important for national food production and price stability. In this paper, towards the goal of better crop yield prediction, leveraging recent graph signal processing (GSP) tools to exploit spatial correlation among neighboring counties, we denoise relevant features that are inputs to a deep learning prediction model via graph spectral filtering. Specifically, we first construct a combinatorial graph with edge weights that encode county-to-county similarities in soil and location features via metric learning. We then denoise features via a maximum a posteriori (MAP) formulation with a graph Laplacian regularizer (GLR). We focus on the challenge of estimating, in an unsupervised manner, the crucial weight parameter $\mu$, trading off the fidelity term and GLR, which is a function of the noise variance.
We first estimate noise variance directly from noise-corrupted graph signals using a graph clique detection (GCD) procedure that discovers locally constant regions. We then compute an optimal $\mu$ minimizing an approximate mean square error function via bias-variance analysis. Experimental results from collected USDA data show that using denoised features as input, performance of a crop yield prediction model can be improved noticeably.
\end{abstract}

\begin{IEEEkeywords}
Graph spectral filtering, unsupervised learning, bias-variance analysis, crop yield prediction
\end{IEEEkeywords}

\section{Introduction}
\label{sec:intro}

As weather patterns become more volatile due to unprecedented climate change, accurate \textit{crop yield prediction}---forecast of agriculture production such as corn or soybean at a county / state granularity---is increasingly important in agronomics to ensure a robust and reliable national food supply \cite{cai17}. A conventional crop yield prediction scheme gathers \textit{relevant features} that influence crop production---\eg, soil composition, precipitation, temperature---as input to a deep learning (DL) model such as convolutional neural net (CNN) \cite{khaki20} and long short-term memory (LSTM) \cite{sun19} to estimate yield per county / state in a \textit{supervised} manner. While this is feasible when the training dataset is sufficiently large, the trained model is nonetheless susceptible to noise in feature data, typically collected by USDA from satellite images and farmer surveys\footnote{https://www.usda.gov/}. In this paper, we focus on the problem of pre-denoising relevant features prior to DL model training to improve crop yield prediction performance.
Given that basic environmental conditions such as soil makeup, rainfall and drought index at one county are typically similar to nearby ones, one would expect crucial features directly related to crop yields, such as \textit{normalized difference vegetation index} (NDVI) and \textit{enhanced vegetation index} (EVI)~\cite{matsushita2007}, at neighboring counties to be similar as well. To exploit these inter-county similarities for feature denoising, leveraging recent rapid progress in \textit{graph signal processing} (GSP) \cite{ortega18ieee,cheung18}, we pursue a graph spectral filtering approach. While graph signal denoising is now well studied in many contexts, including general band-limited graph signals \cite{chen15}, 2D images \cite{pang17,vu21}, and 3D point clouds \cite{zeng20,dinesh20}, our problem setting for crop feature denoising is particularly challenging because of its \textit{unsupervised} nature. Specifically, an obtained feature ${\mathbf y} \in \mathbb{R}^N$ for $N$ counties is typically noise-corrupted, and one has no access to ground truth data ${\mathbf x}^o$ nor knowledge of the noise variance $\sigma^2$.
Thus, the important weight parameter $\mu$ that trades off the fidelity term $\|{\mathbf y} - {\mathbf x}\|^2_2$ against the graph signal prior such as the \textit{graph Laplacian regularizer}\footnote{${\mathbf L}$ is the combinatorial graph Laplacian matrix; the formal definitions are in Section\;\ref{sec:preli}.} ${\mathbf x}^\top {\mathbf L} {\mathbf x}$ \cite{pang17} or \textit{graph total variation} (GTV) \cite{bai19} in a \textit{maximum a posteriori} (MAP) formulation cannot be easily derived \cite{chen17} or trained end-to-end \cite{vu21} as previously done. In this paper, we focus on the unsupervised estimation of the weight parameter $\mu$ in a GLR-regularized MAP formulation for relevant feature pre-denoising to improve crop yield prediction. Specifically, we first construct a combinatorial graph $\cG$ with edge weights $w_{i,j}$ encoding similarities between counties (nodes) $i$ and $j$. $w_{i,j}$ is inversely proportional to the \textit{Mahalanobis distance} $d_{i,j} = ({\mathbf f}_i - {\mathbf f}_j)^\top {\mathbf M} ({\mathbf f}_i - {\mathbf f}_j)$, where ${\mathbf f}_i$ is a vector for node $i$ composed of soil and location features, and ${\mathbf M}$ is an optimized metric matrix \cite{wei_TSP2020}. We then estimate noise variance $\sigma^2$ directly from noise-corrupted features using our proposed \textit{graph clique detection} (GCD) procedure, generalized from noise estimation in 2D imaging~\cite{wu2015}. Finally, we derive equations analyzing the \textit{bias-variance tradeoff} \cite{chen17} to minimize the resulting MSE of our MAP estimate and compute the optimal weight parameter $\mu$.
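The GLR-regularized MAP filter described above has a well-known closed form: minimizing $\|{\mathbf y}-{\mathbf x}\|_2^2 + \mu\, {\mathbf x}^\top {\mathbf L} {\mathbf x}$ gives ${\mathbf x}^* = ({\mathbf I} + \mu {\mathbf L})^{-1}{\mathbf y}$. The following minimal numerical sketch illustrates this pipeline; the feature vectors, edge set, identity metric matrix, and the exponential mapping from Mahalanobis distance to edge weight are illustrative assumptions (the text only states that $w_{i,j}$ is inversely proportional to $d_{i,j}$), not the paper's actual data or learned metric.

```python
import numpy as np

# Toy sketch: 4 "counties" with 2-dimensional feature vectors f_i (made-up data).
F = np.array([[0.2, 1.0], [0.3, 1.1], [0.9, 0.2], [1.0, 0.3]])
M = np.eye(2)                      # metric matrix (identity as a stand-in)
edges = [(0, 1), (1, 2), (2, 3)]   # assumed county adjacencies

N = len(F)
W = np.zeros((N, N))
for i, j in edges:
    d = (F[i] - F[j]) @ M @ (F[i] - F[j])   # Mahalanobis distance d_{i,j}
    W[i, j] = W[j, i] = np.exp(-d)          # one way to make w inversely related to d

L = np.diag(W.sum(axis=1)) - W              # combinatorial graph Laplacian

# MAP denoising with graph Laplacian regularizer:
#   x* = argmin_x ||y - x||_2^2 + mu * x^T L x   =>   x* = (I + mu L)^{-1} y
rng = np.random.default_rng(0)
x_true = np.array([1.0, 1.0, 0.0, 0.0])
y = x_true + 0.3 * rng.standard_normal(N)
mu = 1.0
x_hat = np.linalg.solve(np.eye(N) + mu * L, y)
```

In the Laplacian eigenbasis this is a low-pass filter, $\hat{x}_k = y_k / (1 + \mu \lambda_k)$, so the estimate always has a smaller GLR value than the noisy observation.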
See Fig.\;\ref{fig:countygraphsignal} for an illustration of a similarity graph connecting neighboring counties in Iowa with undirected edges, where the set of feature values per county is shown as a discrete signal on top of the graph.
\begin{figure}[t]
\centering
\includegraphics[width=0.65\columnwidth]{county_GSP.png}
\vspace{-0.05in}
\caption{Feature of different counties in Iowa as a discrete signal on a combinatorial graph.}
\label{fig:countygraphsignal}
\end{figure}
Using USDA corn data from $10$ states in the corn belt (Iowa, Illinois, Indiana, Ohio, Nebraska, Minnesota, Wisconsin, Michigan, Missouri, and Kentucky) containing $938$ counties, experimental results show that using our GLR-regularized denoiser with optimized $\mu$ to denoise two important EVI features led to improved performance in a DL model \cite{chen16}: a reduction in \textit{root mean square error} (RMSE) \cite{pasquel22} of $0.434\%$ in crop yield prediction compared to the baseline where the features were not pre-denoised. The paper is organized as follows. We first overview our crop yield prediction model in Section\;\ref{sec:overview}. We then present our unsupervised feature denoising algorithm in Section\;\ref{sec:opt}. Finally, we present experimental results and conclusions in Sections\;\ref{sec:results} and \ref{sec:conclude}, respectively.

\section{Prediction Framework}
\label{sec:overview}

We overview a conventional crop yield prediction framework \cite{khaki20,sun19}.
First, \textit{relevant features} at the county level, such as silt/clay/sand percentages in soil, accumulated rainfall, drought index, and growing degree days (GDD), are collected from various sources, including USDA and satellite images. Features of the same county are input to a DL model like a CNN or LSTM for future yield prediction, trained in a supervised manner using annual county-level yield data provided by USDA. Note that existing yield prediction schemes \cite{khaki20,sun19} focus mainly on exploiting \textit{temporal correlation} (both short-term and long-term) to predict future crop yields. Depending on the type of features, the acquired measurements may be noise-corrupted. This may be due to measurement errors by faulty mechanical instruments, human errors during farmer surveys, etc. Given that environmental variables are likely similar in neighboring counties, one would expect similar basic features in a local region. To exploit this \textit{spatial correlation} for feature denoising, we employ a graph spectral approach, described next.

\section{Feature Denoising}
\label{sec:opt}

\subsection{Preliminaries}
\label{sec:preli}

An $N$-node undirected positive graph ${\mathcal G}({\mathcal N},{\mathcal E},{\mathbf W})$ can be specified by a symmetric \textit{adjacency matrix} ${\mathbf W} \in \mathbb{R}^{N \times N}$, where $W_{i,j} = w_{i,j} > 0$ is the weight of an edge $(i,j) \in {\mathcal E}$ connecting nodes $i, j \in {\mathcal N} = \{1, \ldots, N\}$, and $W_{i,j} = 0$ if there is no edge $(i,j) \not\in {\mathcal E}$. Here we assume there are no self-loops, and thus $W_{i,i} = 0, \forall i$.
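As a concrete sanity check of these definitions, the following NumPy sketch (a toy graph with hypothetical weights, not our county graph) builds a small symmetric adjacency matrix with no self-loops, forms the combinatorial Laplacian ${\mathbf L} = {\mathbf D} - {\mathbf W}$ introduced next, and numerically verifies the nodal- and spectral-domain expressions of the GLR discussed below:

```python
import numpy as np

# Hypothetical 4-node toy graph: symmetric adjacency, no self-loops.
W = np.array([[0.0, 0.9, 0.2, 0.0],
              [0.9, 0.0, 0.7, 0.1],
              [0.2, 0.7, 0.0, 0.8],
              [0.0, 0.1, 0.8, 0.0]])
D = np.diag(W.sum(axis=1))          # diagonal degree matrix
L = D - W                           # combinatorial graph Laplacian

x = np.array([1.0, 1.1, 0.9, 0.3])  # toy graph signal (one value per node)

# GLR in the nodal domain: sum over edges of w_ij (x_i - x_j)^2.
glr_nodal = sum(W[i, j] * (x[i] - x[j]) ** 2
                for i in range(4) for j in range(i + 1, 4))

# GLR in the spectral domain: sum_k lambda_k * alpha_k^2, alpha = V^T x (GFT).
lam, V = np.linalg.eigh(L)          # eigh: L is symmetric, eigenvalues ascending
alpha = V.T @ x
glr_spectral = np.sum(lam * alpha ** 2)

assert np.isclose(x @ L @ x, glr_nodal)
assert np.isclose(x @ L @ x, glr_spectral)
assert lam.min() > -1e-10           # L is PSD for a positive graph
```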
The diagonal \textit{degree matrix} ${\mathbf D} \in \mathbb{R}^{N \times N}$ has diagonal entries $D_{i,i} = \sum_j W_{i,j}$. We can now define the combinatorial \textit{graph Laplacian matrix} ${\mathbf L} \triangleq {\mathbf D} - {\mathbf W}$, which is \textit{positive semi-definite} (PSD) for a positive graph ${\mathcal G}$ (i.e., $W_{i,j} \geq 0, \forall i,j$) \cite{cheung18}. An assignment of a scalar $x_i$ to each graph node $i \in {\mathcal N}$ composes a \textit{graph signal} ${\mathbf x} \in \mathbb{R}^N$. Signal ${\mathbf x}$ is smooth with respect to (w.r.t.) graph ${\mathcal G}$ if its variation over ${\mathcal G}$ is small. A popular graph smoothness measure is the \textit{graph Laplacian regularizer} (GLR) ${\mathbf x}^\top {\mathbf L} {\mathbf x}$ \cite{pang17}, i.e., ${\mathbf x}$ is smooth iff ${\mathbf x}^\top {\mathbf L} {\mathbf x}$ is small. Denote by $(\lambda_i,{\mathbf v}_i)$ the $i$-th eigen-pair of matrix ${\mathbf L}$, and by ${\mathbf V}$ the eigen-matrix composed of eigenvectors $\{{\mathbf v}_i\}_{i=1}^N$ as columns. ${\mathbf V}^\top$ is known as the \textit{Graph Fourier Transform} (GFT) \cite{cheung18} that converts a graph signal ${\mathbf x}$ to its graph frequency representation $\boldsymbol{\alpha} = {\mathbf V}^\top {\mathbf x}$. The GLR can be expanded as
\begin{align}
{\mathbf x}^\top {\mathbf L} {\mathbf x} = \sum_{(i,j) \in {\mathcal E}} w_{i,j} (x_i - x_j)^2 = \sum_{k=1}^{N} \lambda_k \alpha_k^2 .
\end{align}
Thus, a small GLR means that a connected node pair $(i,j) \in {\mathcal E}$ with large edge weight $w_{i,j}$ has similar sample values $x_i$ and $x_j$ in the nodal domain, and most signal energy resides in low graph frequency coefficients $\alpha_k$ in the spectral domain---${\mathbf x}$ is a \textit{low-pass} (LP) signal.

\subsection{Graph Metric Learning}
\label{sec:graph_learning}

Assuming that each node $i \in {\mathcal N}$ is endowed with a length-$K$ \textit{feature vector} ${\mathbf f}_i \in \mathbb{R}^K$, one can compute the edge weight $w_{i,j}$ connecting nodes $i$ and $j$ in ${\mathcal G}$ as
\begin{align}
w_{i,j} = \exp \left\{ - ({\mathbf f}_i - {\mathbf f}_j)^\top {\mathbf M} ({\mathbf f}_i - {\mathbf f}_j) \right\}
\end{align}
where ${\mathbf M} \succeq 0$ is a PSD \textit{metric matrix} that determines the squared Mahalanobis distance (feature distance) $d_{i,j} = ({\mathbf f}_i - {\mathbf f}_j)^\top {\mathbf M} ({\mathbf f}_i - {\mathbf f}_j) \geq 0$ between nodes $i$ and $j$. There exist \textit{metric learning} schemes \cite{wei_TSP2020,yang21} that optimize ${\mathbf M}$ given an objective function $f({\mathbf M})$ and training data ${\mathcal X} = \{{\mathbf x}_1, \ldots, {\mathbf x}_T\}$. For example, we can define $f({\mathbf M})$ using the GLR and seek ${\mathbf M}$ by minimizing $f({\mathbf M})$:
\begin{align}
\min_{{\mathbf M} \succeq 0} f({\mathbf M}) = \sum_{t=1}^T {\mathbf x}_t^\top {\mathbf L}({\mathbf M}) {\mathbf x}_t .
\end{align}
In this paper, we adopt an existing metric learning scheme \cite{wei_TSP2020}, and use soil- and location-related features---clay percentage, available water storage estimate (AWS), soil organic carbon stock estimate (SOC), and 2D location features---to compose ${\mathbf f}_i \in \mathbb{R}^5$. These features are comparatively noise-free and thus reliable. We also use these features as training data ${\mathcal X}$ to optimize ${\mathbf M}$, resulting in graph ${\mathcal G}$. (A node pair $(i,j)$ with distance $d_{i,j}$ larger than a threshold has no edge $(i,j) \not\in {\mathcal E}$.) We will use ${\mathcal G}$ to denoise two EVI features that are important for yield prediction.

\subsection{Denoising Formulation}

Given a constructed graph ${\mathcal G}$ specified by a graph Laplacian matrix ${\mathbf L}$, one can denoise a target input feature ${\mathbf y} \in \mathbb{R}^N$ using a MAP formulation regularized by the GLR \cite{pang17}:
\begin{align}
\min_{{\mathbf x}} \|{\mathbf y} - {\mathbf x} \|^2_2 + \mu {\mathbf x}^\top {\mathbf L} {\mathbf x}
\label{eq:MAP}
\end{align}
where $\mu > 0$ is a weight parameter trading off the fidelity term and the GLR. Given that ${\mathbf L}$ is PSD, objective \eqref{eq:MAP} is convex, with a system of linear equations as solution:
\begin{align}
\left({\mathbf I} + \mu {\mathbf L} \right) {\mathbf x}^* = {\mathbf y} .
\label{eq:MAP_sol}
\end{align}
Given that matrix ${\mathbf I} + \mu {\mathbf L}$ is symmetric, \textit{positive definite} (PD) and sparse, \eqref{eq:MAP_sol} can be solved using \textit{conjugate gradient} (CG) \cite{shewchuk1994} without matrix inversion. We focus next on the selection of $\mu$ in \eqref{eq:MAP}.

\subsection{Estimating Noise Variance}

In our feature denoising scenario, we first estimate the noise variance $\sigma^2$ directly from the noisy feature (signal) ${\mathbf y}$, from which the weight parameter $\mu$ in \eqref{eq:MAP} is then computed. We propose a noise estimation procedure called \textit{graph clique detection} (GCD) for the case when a graph ${\mathcal G}$ encoding inter-node similarities is provided, generalizing a noise estimation scheme for 2D images~\cite{wu2015}. First, we identify \textit{locally constant regions} (LCRs) ${\mathcal R}_m$ where signal samples are expected to be similar, i.e., $x_i \approx x_j, \forall i,j \in {\mathcal R}_m$. Then, we compute the mean $\bar{x}_m = \frac{1}{|{\mathcal R}_{m}|}\sum_{i \in {\mathcal R}_m} x_i$ and variance $\sigma^2_m = \frac{1}{|{\mathcal R}_{m}|} \sum_{i \in {\mathcal R}_m} (x_i - \bar{x}_m)^2$ for each ${\mathcal R}_m$. Finally, we compute the global noise variance as the weighted average
\begin{align}
\sigma^2 = \sum_m \frac{|{\mathcal R}_m|}{\sum_k |{\mathcal R}_k|}\sigma_m^2 .
\label{eq:noise_variance}
\end{align}
The crux thus resides in the identification of LCRs in a graph. Note that this is not conventional \textit{graph clustering}~\cite{schaeffer2007graph}, i.e., the grouping of \textit{all} graph nodes into two or more non-overlapping sets; there is no requirement here to put every node in an LCR. We describe our proposal based on cliques. A \textit{clique} is a (sub-)graph in which every node is connected to every other node. Thus, a clique implies a node cluster with strong inter-node similarities, which we assume is roughly constant. Given an input graph ${\mathcal G}$, we identify cliques in ${\mathcal G}$ as follows.

\subsubsection{$k$-hop Connected Graph}

\begin{figure}[t]
\centering
\subfloat[]{
\includegraphics[width=0.21\textwidth]{original1}
\label{fig:pcd-on}}
\hspace{-10pt}
\subfloat[{}]{
\includegraphics[width=0.24\textwidth]{images/cliques1.png}
\label{fig:pcd-off}}
\vspace{-0.05in}
\caption{(a) Example of a $10$-node graph, where edges with weights less than threshold $\hat{w}$ are colored in green; (b) the resulting $k$-hop connected graph (KCG) for $k=2$, after removing green edges and creating edges (colored in magenta) by connecting $2$-hop neighbors. Two maximal cliques (out of $5$) in the KCG are highlighted.}
\label{fig:cliques}
\end{figure}

We first sort the $M = |{\mathcal E}|$ edges in ${\mathcal G}$ by weight, from smallest to largest.
For a given \textit{threshold weight} $\hat{w}$ (to be discussed) and $k \in \mathbb{Z}^+$, we remove all edges $(i,j) \in {\mathcal E}$ with weights $w_{i,j} < \hat{w}$ and construct a \textit{$k$-hop connected graph} (KCG) ${\mathcal G}^{(k)}$ with edges connecting nodes $i$ and $j$ that are $k$-hop neighbors in ${\mathcal G}$. If ${\mathcal G}^{(k)}$ has at least a target number $\hat{E}$ of edges, then it is a \textit{feasible} KCG, with minimum connectivity $C({\mathcal G}^{(k)}) = \hat{w}^k$. $C({\mathcal G}^{(k)})$ is the weakest possible connection between two connected nodes in ${\mathcal G}^{(k)}$, interpreting edge weights $w_{i,j}$ as conditional probabilities as done in a \textit{Gaussian Markov Random Field} (GMRF)~\cite{rue2005}. To find the threshold $\hat{w}$ for a given $k$, we seek the \textit{largest} $\hat{w}$ yielding a feasible graph ${\mathcal G}^{(k)}$ (with at least $\hat{E}$ edges) via binary search among the $M$ edges, in complexity ${\mathcal O}(\log M)$. We initialize $k=1$, compute threshold $\hat{w}$, then increment $k$ and repeat the procedure until we identify a maximal\footnote{Given $0<\hat{w}<1$, $\hat{w}^k$ becomes smaller as $k$ increases. Thus, in practice we observe a local maximum in $C({\mathcal G}^{(k)})$ as a function of $k$.} $C({\mathcal G}^{(k)}) = \hat{w}^k$ for $k \in \{1, 2, \ldots\}$. See Fig.\;\ref{fig:cliques}(b) for an example of a KCG ${\mathcal G}^{(2)}$ given the original graph ${\mathcal G}$ in Fig.\;\ref{fig:cliques}(a).
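The KCG construction and threshold search described above can be sketched as follows. This is an illustrative NumPy toy implementation (the dense boolean-matrix $k$-hop closure and the function names are our choices, not the exact procedure used in our experiments); the binary search exploits the fact that the KCG edge count is monotone non-increasing in $\hat{w}$:

```python
import numpy as np

def kcg(W, w_hat, k):
    """k-hop connected graph: drop edges with weight < w_hat, then connect
    all node pairs within k hops in the thresholded graph. Returns 0/1 matrix."""
    A = ((W >= w_hat) & (W > 0)).astype(int)   # thresholded adjacency
    reach = A.copy()
    for _ in range(k - 1):                     # pairs reachable in <= k hops
        reach = ((reach @ A) > 0).astype(int) | reach
    np.fill_diagonal(reach, 0)                 # no self-loops
    return reach

def largest_feasible_threshold(W, k, E_hat):
    """Binary search over sorted edge weights for the largest w_hat whose
    KCG still has at least E_hat edges (feasible KCG)."""
    weights = np.sort(W[np.triu_indices_from(W, 1)])
    weights = weights[weights > 0]             # the M existing edge weights
    lo, hi, best = 0, len(weights) - 1, None
    while lo <= hi:
        mid = (lo + hi) // 2
        n_edges = kcg(W, weights[mid], k).sum() // 2
        if n_edges >= E_hat:
            best = weights[mid]                # feasible: try a larger threshold
            lo = mid + 1
        else:
            hi = mid - 1
    return best
```

For example, on a 3-node path with weights $0.9$ and $0.8$, `kcg(W, 0.5, 2)` adds the 2-hop edge between the two endpoints.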
We see, for example, that edge $(3,10)$ is removed from ${\mathcal G}$, but edge $(4,10)$ is added in ${\mathcal G}^{(2)}$ because nodes $4$ and $10$ are $2$-hop neighbors in ${\mathcal G}$. The idea is to identify strongly similar pairs in the original ${\mathcal G}$ and connect them with explicit edges in ${\mathcal G}^{(k)}$. The maximal cliques\footnote{A maximal clique is a clique that cannot be extended by including one more adjacent node. Hence, a maximal clique is not a subset of a larger clique in the graph.} are then discovered using the algorithm in \cite{cazals2008note}, as shown in Fig.\;\ref{fig:cliques}(b). The cliques in the resulting graph ${\mathcal G}^{(k)}$ are the LCRs used to calculate the noise variance via \eqref{eq:noise_variance}.

\subsubsection{Target $\hat{E}$ Edges}

The last issue is the designation of the target number of edges $\hat{E}$. $\hat{E}$ should be chosen so that each clique $m$ discovered in the KCG ${\mathcal G}^{(k)}$ has enough nodes to reliably compute the mean $\bar{x}_m$ and variance $\sigma_m^2$. We estimate $\hat{E}$ as follows. Given input graph ${\mathcal G}$, we first compute the average degree $\bar{d}$. Then, we target a given clique to have an average of $n_c$ nodes---large enough to reliably compute mean and variance. Thus, the average degree of the resulting graph can be approximated as $\bar{d}+n_c-1$. Finally, we compute $\hat{E} \approx N(\bar{d}+n_c-1)$, where $N$ is the number of nodes in the input graph ${\mathcal G}$.
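The clique-based noise estimate above can be sketched as follows. For illustration we enumerate maximal cliques with a basic Bron--Kerbosch recursion (our experiments use the dedicated algorithm of \cite{cazals2008note}); each maximal clique of at least `min_size` nodes is treated as an LCR, and the per-clique biased ($1/|{\mathcal R}_m|$) variances are averaged with size weights as in \eqref{eq:noise_variance}:

```python
import numpy as np

def maximal_cliques(adj):
    """Basic Bron-Kerbosch enumeration of maximal cliques.
    `adj` maps each node to the set of its neighbors."""
    found = []
    def bk(R, P, X):
        if not P and not X:
            found.append(sorted(R))
            return
        for v in list(P):
            bk(R | {v}, P & adj[v], X & adj[v])
            P = P - {v}
            X = X | {v}
    bk(set(), set(adj), set())
    return found

def gcd_noise_variance(reach, y, min_size=2):
    """Treat each maximal clique of the KCG (0/1 matrix `reach`) as an LCR
    and return the size-weighted average of per-clique variances."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    adj = {i: {j for j in range(n) if reach[i][j]} for i in range(n)}
    lcrs = [c for c in maximal_cliques(adj) if len(c) >= min_size]
    sizes = np.array([len(c) for c in lcrs], dtype=float)
    variances = np.array([np.var(y[c]) for c in lcrs])  # biased (1/|R|) variance
    return float(np.sum(sizes / sizes.sum() * variances))
```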
\subsection{Deriving the Weight Parameter}

\begin{figure}[t]
\centering
\includegraphics[width=0.5\columnwidth]{MSE2.png}
\caption{Bias $B(\mu)$, variance $V(\mu)$, and $\text{MSE}(\mu)$, as functions of weight parameter $\mu$, for a signal with respect to a graph constructed as in Section~\ref{sec:graph_learning}. The underlying graph is constructed by connecting adjacent counties in Iowa. Graph signal ${\mathbf x}^{o}$ is the clay percentage in each county. We assume $\sigma^2=1$ when computing $B(\mu)$, $V(\mu)$, and $\text{MSE}(\mu)$ in \eqref{eq:MSE}.}
\label{fig:functions}
\end{figure}

\begin{figure}[t]
\centering
\subfloat[]{
\includegraphics[width=0.235\textwidth]{eig_2}
\label{fig:pcd-on}}
\subfloat[{}]{
\includegraphics[width=0.235\textwidth]{energy_2}
\label{fig:pcd-off}}
\vspace{-0.05in}
\caption{(a) Modeling $\lambda_i$'s as an exponentially increasing function $f(i)$; (b) modeling $\alpha_i^2$ as an exponentially decreasing function $g(i)$.}
\label{fig:approximations}
\end{figure}

Having estimated the noise variance $\sigma^2$, we now derive the optimal weight parameter $\mu$ for MAP formulation \eqref{eq:MAP}.
Following the derivation in \cite{chen17}, given $\sigma^2$, the mean square error (MSE) of the MAP estimate ${\mathbf x}^*$ from \eqref{eq:MAP}, computed against the ground truth signal ${\mathbf x}^o$ as a function of $\mu$, is
\begin{align}
\text{MSE}(\mu) = \underbrace{\sum_{i=2}^N \psi_i^2 ({\mathbf v}_i^\top {\mathbf x}^o)^2}_{B(\mu)} + \underbrace{\sigma^2 \sum_{i=1}^N \phi_i^2}_{V(\mu)}
\label{eq:MSE}
\end{align}
where $\psi_i = \frac{1}{1 + \frac{1}{\mu \lambda_i}}$ and $\phi_i = \frac{1}{1 + \mu \lambda_i}$. The first term $B(\mu)$ corresponds to the \textit{bias} of estimate ${\mathbf x}^*$, which is a differentiable, \textit{concave} and monotonically increasing function of $\mu>0$. In contrast, the second term $V(\mu)$ corresponds to the \textit{variance} of ${\mathbf x}^*$, and is a differentiable, \textit{convex} and monotonically decreasing function of $\mu>0$. When combined, the MSE is a differentiable and provably \textit{pseudo-convex} function of $\mu>0$ \cite{mangasarian1975pseudo}, i.e.,
\begin{align}
\nabla \text{MSE}(\mu_1) \cdot (\mu_2 - \mu_1) \geq 0 \rightarrow \text{MSE}(\mu_2) \geq \text{MSE}(\mu_1),
\label{eq:covex_def}
\end{align}
$\forall \mu_1, \mu_2 >0$.
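To make the bias-variance decomposition in \eqref{eq:MSE} concrete, the following NumPy sketch (a hypothetical 4-node path graph and toy signal, not our county graph) evaluates $B(\mu)$, $V(\mu)$ and $\text{MSE}(\mu)$ spectrally, and checks the bias term against the deterministic error of the linear-system solution $({\mathbf I} + \mu {\mathbf L}){\mathbf x}^* = {\mathbf y}$:

```python
import numpy as np

# Toy graph (4-node path) and ground-truth signal, for illustration only.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(W.sum(1)) - W
lam, V = np.linalg.eigh(L)                    # lam[0] = 0 is the DC mode
x0 = np.array([1.0, 1.2, 1.1, 0.8])           # hypothetical ground truth x^o
alpha2 = (V.T @ x0) ** 2                      # spectral energies (v_i^T x^o)^2
sigma2 = 1.0

def mse_map(mu):
    """Bias B(mu), variance V(mu) and MSE(mu) of the MAP estimate."""
    psi = 1.0 / (1.0 + 1.0 / (mu * lam[1:]))  # bias shrinkage on non-DC modes
    phi = 1.0 / (1.0 + mu * lam)
    B = float(np.sum(psi ** 2 * alpha2[1:]))
    Vv = float(sigma2 * np.sum(phi ** 2))
    return B, Vv, B + Vv

# Bias grows and variance shrinks with mu, as stated above.
B1, V1, _ = mse_map(0.5)
B2, V2, _ = mse_map(2.0)
assert B2 > B1 and V2 < V1

# Bias term equals the deterministic error of (I + mu L)^{-1} x^o.
mu = 1.3
x_det = np.linalg.solve(np.eye(4) + mu * L, x0)
assert np.isclose(mse_map(mu)[0], np.sum((x_det - x0) ** 2))
```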
See Fig.\;\ref{fig:functions} for an example of the bias $B(\mu)$, variance $V(\mu)$ and $\text{MSE}(\mu)$ for a specific graph signal ${\mathbf x}^o$ and graph ${\mathcal G}$, and Appendix~\ref{app:convex} for a proof of pseudo-convexity. In \cite{chen17}, the authors derived a corollary where $\text{MSE}(\mu)$ in \eqref{eq:MSE} is replaced by a more easily computable convex upper bound $\text{MSE}^+(\mu)$. The optimal $\mu$ is then computed by minimizing the convex function $\text{MSE}^+(\mu)$ using conventional optimization methods. However, this upper bound is too loose in practice to be useful. Instead, we take an alternative approach: we approximate \eqref{eq:MSE} by modeling the distributions of the eigenvalues $\lambda_i$ of ${\mathbf L}$ and the signal energies $\alpha_i^2 = ({\mathbf v}_i^\top {\mathbf x}^o)^2$ at graph frequencies $i$ as follows. We model the $\lambda_i$'s as an exponentially increasing function $f(i)$, and the $\alpha_i^2$'s as an exponentially decreasing function $g(i)$, namely
\begin{equation}
\lambda_i \approx f(i)=q\exp\{\gamma i\}; \hspace{10pt} \alpha_i^2 \approx g(i)=r\exp\{-\theta i\},
\label{eq:approx_func}
\end{equation}
where $q, \gamma, r, \theta$ are parameters. See Fig.\;\ref{fig:approximations} for illustrations of both approximations.
To compute these parameters, we first compute the extreme eigen-pairs $(\lambda_i, {\mathbf v}_i)$ for $i \in \{2, N\}$ in linear time using LOBPCG~\cite{knyazev2001}. Hence we have the following expressions from~\eqref{eq:approx_func}:
\begin{equation}
\begin{split}
\lambda_2 &\approx q\exp\{2\gamma\}; \hspace{20pt} \lambda_N \approx q\exp\{N\gamma\},\\
\alpha_2^2 &\approx r\exp\{-2\theta\}; \hspace{15pt} \alpha_N^2 \approx r\exp\{-N\theta\}.
\end{split}
\end{equation}
By solving these equations, one can obtain the four parameters as
\begin{equation}
\begin{split}
\gamma &= \frac{\ln{\frac{\lambda_N}{\lambda_2}}}{N-2}; \hspace{15pt} q=\lambda_2\exp\left\{-2\frac{\ln{\frac{\lambda_N}{\lambda_2}}}{N-2}\right\},\\
\theta &= -\frac{\ln{\frac{\alpha_N^2}{\alpha_2^2}}}{N-2}; \hspace{10pt} r=\alpha_2^2\exp\left\{-2\frac{\ln{\frac{\alpha_N^2}{\alpha_2^2}}}{N-2}\right\}.
\end{split}
\end{equation}
One can thus approximate the MSE in~\eqref{eq:MSE} as
\begin{align}
\text{MSE}^a(\mu) = \sum_{i=2}^N \frac{g(i)}{\left(1+\frac{1}{\mu f(i)}\right)^2} + \sigma^2 \sum_{i=1}^N \frac{1}{\left(1 + \mu f(i)\right)^2} .
\label{eq:approx_MSE}
\end{align}
Since the MSE in~\eqref{eq:MSE} is a differentiable and pseudo-convex function for $\mu >0$, $\text{MSE}^a$ in~\eqref{eq:approx_MSE} is also a differentiable and pseudo-convex function for $\mu>0$, with gradient
\begin{equation}
\nabla \text{MSE}^a(\mu)=\sum_{i=2}^{N}\frac{2\mu g(i)f(i)^2-2f(i)\sigma^2}{(1+\mu f(i))^3}.
\end{equation}
Finally, the optimal $\mu>0$ is computed by iteratively minimizing the pseudo-convex function $\text{MSE}^a(\mu)$ using a standard gradient-descent algorithm:
\begin{equation}
\mu^{(k)}=\mu^{(k-1)}-t\nabla \text{MSE}^a(\mu^{(k-1)}),
\label{eq:GD}
\end{equation}
where $t$ is the step size and $\mu^{(k)}$ is the value of $\mu$ at the $k$-th iteration. We iterate \eqref{eq:GD} until convergence.
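The closed-form parameter fit and the gradient-descent update above can be sketched as follows. This is an illustrative NumPy implementation with hypothetical parameter values; for simplicity, both the surrogate objective and its gradient sum only over the non-DC indices $i \in \{2, \ldots, N\}$, and a fixed step size with a positivity floor stands in for a tuned schedule:

```python
import numpy as np

def fit_exp_models(lam2, lamN, a2_2, a2_N, N):
    """Closed-form q, gamma, r, theta of f(i) = q*exp(gamma*i) and
    g(i) = r*exp(-theta*i) from the extreme eigen-pairs at i = 2 and i = N."""
    gamma = np.log(lamN / lam2) / (N - 2)
    q = lam2 * np.exp(-2 * gamma)
    theta = -np.log(a2_N / a2_2) / (N - 2)
    r = a2_2 * np.exp(2 * theta)
    return q, gamma, r, theta

def mse_a(mu, q, gamma, r, theta, sigma2, N):
    """Surrogate MSE^a(mu) over the non-DC modes."""
    i = np.arange(2, N + 1)
    f = q * np.exp(gamma * i)
    g = r * np.exp(-theta * i)
    return float(np.sum(g / (1 + 1 / (mu * f)) ** 2)
                 + sigma2 * np.sum(1 / (1 + mu * f) ** 2))

def grad_mse_a(mu, q, gamma, r, theta, sigma2, N):
    """Gradient of MSE^a(mu), per the expression above."""
    i = np.arange(2, N + 1)
    f = q * np.exp(gamma * i)
    g = r * np.exp(-theta * i)
    return float(np.sum((2 * mu * g * f ** 2 - 2 * f * sigma2)
                        / (1 + mu * f) ** 3))

def optimize_mu(q, gamma, r, theta, sigma2, N, mu0=1.0, t=0.1, iters=2000):
    """Gradient descent mu <- mu - t * grad MSE^a(mu), kept positive."""
    mu = mu0
    for _ in range(iters):
        mu = max(mu - t * grad_mse_a(mu, q, gamma, r, theta, sigma2, N), 1e-8)
    return mu
```

With synthetic $\lambda_2, \lambda_N, \alpha_2^2, \alpha_N^2$ generated from known parameters, `fit_exp_models` recovers them exactly, and gradient descent strictly decreases $\text{MSE}^a$.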
\section{Experimentation}
\label{sec:results}

\begin{figure}[t]
\centering
\hspace{-35pt}
\subfloat[]{
\includegraphics[width=0.26\textwidth]{images/juneimportance.png}
\label{fig:pcd-on}}
\hspace{-0.25in}
\subfloat[{}]{
\includegraphics[width=0.26\textwidth]{images/julyimportance.png}
\label{fig:pcd-off}}
\vspace{-0.05in}
\caption{Feature importance for (a) \texttt{EVI\_June} and (b) \texttt{EVI\_July}.}
\label{fig:featureimportance}
\vspace{-14pt}
\end{figure}

\begin{figure}[t]
\centering
\hspace{-30pt}
\subfloat[]{
\includegraphics[width=0.235\textwidth]{images/evijune_corr.png}
\label{fig:pcd-on}}
\subfloat[{}]{
\includegraphics[width=0.235\textwidth]{images/evijuly_corr.png}
\label{fig:pcd-off}}
\vspace{-0.05in}
\caption{Correlation coefficient between the denoised EVI feature ((a) \texttt{EVI\_June}; (b) \texttt{EVI\_July}) and the crop yield feature.}
\label{fig:correlations}
\vspace{-10pt}
\end{figure}

\subsection{Experimental Setup}

To test the effectiveness of our proposed feature pre-denoising algorithm, we conducted the following experiment. We used county-level corn yield data between 2010 and 2019 provided by USDA and the National Agricultural Statistics Service\footnote{https://quickstats.nass.usda.gov/} to predict yields in 2020.
We performed our experiments on $938$ counties in $10$ states (Iowa, Illinois, Indiana, Kentucky, Michigan, Minnesota, Missouri, Nebraska, Ohio, Wisconsin) in the corn belt. As discussed in Section\;\ref{sec:graph_learning}, we used five soil- and location-related features to compose the feature vector ${\mathbf f}_i \in \mathbb{R}^5$ for each county $i$, and the metric learning algorithm in \cite{wei_TSP2020} to compute the metric matrix ${\mathbf M}$, in order to build a similarity graph ${\mathcal G}$. For feature denoising, we targeted the enhanced vegetation index (EVI) for the months of June and July, \texttt{EVI\_June} and \texttt{EVI\_July}. In a nutshell, EVI quantifies vegetation greenness per area based on captured satellite images, and is an important feature for yield prediction. EVI is noisy for a variety of reasons: low-resolution satellite images, cloud occlusion, etc. We built a DL model for yield prediction based on XGBoost \cite{chen16} as the baseline, into which different versions of \texttt{EVI\_June} and \texttt{EVI\_July} were injected as input along with other relevant features.

\subsection{Experimental Results}

First, we computed the optimal weight parameter $\mu$ for both \texttt{EVI\_June} and \texttt{EVI\_July}, which was $\mu = 2.2$. Table\;\ref{table1} shows the crop yield prediction performance using noisy features versus denoised features with different weight parameters, under three metrics common in the yield prediction literature: root-mean-square error (RMSE), mean absolute error (MAE), and R2 score (the larger the better) \cite{pasquel22}. Results in Table\;\ref{table1} demonstrate that our optimal weight parameter (i.e., $\mu = 2.2$) yields the best results among the tested $\mu$ values.
Specifically, our denoised features reduce RMSE by $0.434\%$. In addition to the metrics in Table\;\ref{table1}, we measured the \textit{permutation feature importance}~\cite{altmann2010permutation} for both \texttt{EVI\_June} and \texttt{EVI\_July} before and after denoising. Fig.\;\ref{fig:featureimportance} shows that the importance of these features increases after denoising, demonstrating the positive effect of our unsupervised feature denoiser. Specifically, the optimal $\mu = 2.2$ induced the largest feature importance.

\begin{table}[h!]
\caption{Performance metrics with different weight parameters}
\label{table1}
\centering
\begin{footnotesize}
\begin{tabular}{|l|l|l|l|l|}
\hline
\multirow{1}{*}{Metric} & Original & $\mu = 0.001$ & $\mu= 0.01$ & $\mu = 2.2$ \\
\hline
RMSE (bu/ac) & 14.139 & 14.1966 & 14.2042 & \textbf{14.0776} \\
\hline
MAE (bu/ac) & 11.225 & 11.2635 & 11.2674 & \textbf{10.9839} \\
\hline
R2 & 0.5894 & 0.5860 & 0.5856 & \textbf{0.5929} \\
\hline
\end{tabular}
\end{footnotesize}
\end{table}

Further, we calculated the correlation between the original / denoised features \texttt{EVI\_June} and \texttt{EVI\_July} and the actual crop yield. Fig.\;\ref{fig:correlations} shows that the denoised features with the optimal $\mu = 2.2$ have the largest correlation with the crop yield feature. In comparison, using the previous method in \cite{chen17} to estimate $\mu$, yielding $\mu = 3.35$, resulted in a weaker correlation.
\begin{figure}[ht]
\centering
{\includegraphics[width=0.5\textwidth]{images/2denoised.png}}
\vspace{-0.2in}
\caption{Yield prediction error in all the counties using denoised features.}
\label{fig:yield map}
\end{figure}

Lastly, to visualize the effect of our denoising algorithm, Fig.\;\ref{fig:yield map} shows the yield prediction error for different counties in the $10$ states in the corn belt. We observe that, with the exception of a set of counties in southern Iowa devastated by a rare strong wind event (called a \textit{derecho}) in 2020, there were very few noticeably large yield prediction errors.

\section{Conclusion}
\label{sec:conclude}

Conventional crop yield prediction schemes exploit only temporal correlation to estimate future yields per county given relevant input features. In contrast, to exploit inherent spatial correlations among neighboring counties, we perform graph spectral filtering to pre-denoise input features for a deep learning model prior to network parameter training. Specifically, we formulate the feature denoising problem as a MAP formulation with the \textit{graph Laplacian regularizer} (GLR). We derive the weight parameter $\mu$ trading off the fidelity term against the GLR in two steps. We first estimate the noise variance directly from noisy observations using a graph clique detection (GCD) procedure that discovers locally constant regions. We then compute the optimal $\mu$ minimizing an MSE objective via bias-variance analysis. Experiments show that using denoised features as input can improve a DL model's crop yield prediction.
\appendices
\section{Proof of pseudo-convexity for~(\ref{eq:MSE})}
\label{app:convex}
\begin{small}
We rewrite \eqref{eq:MSE} as
\begin{equation}
\text{MSE}(\mu)=\sum_{i=2}^{N}\frac{\mu^2\lambda_i^2\alpha_i^2 + \sigma^{2}}{(1+\mu\lambda_i)^2} + \sigma^2,
\end{equation}
where $\alpha_i^2 = (\mathbf{v}_i^\top \mathbf{x}^o)^2$. For simplicity, we only provide the proof for a single summand $i$. In this case, we can write
\begin{equation}
\nabla \text{MSE}(\mu)=\frac{2\mu \lambda_i^2\alpha_i^2-2\lambda_i\sigma^2}{(1+\mu \lambda_i)^3}.
\end{equation}
Thus, the following expressions follow naturally:
\begin{equation}
\begin{split}
\mu &\geq \frac{\sigma^2}{\lambda_i\alpha_i^2} > 0 \rightarrow \nabla \text{MSE}(\mu)\geq 0; \\
0 &< \mu < \frac{\sigma^2}{\lambda_i\alpha_i^2} \rightarrow \nabla \text{MSE}(\mu)< 0.
\end{split}
\label{eq:grad_sign}
\end{equation}
Further, according to~(\ref{eq:grad_sign}), for $\mu_1\geq\frac{\sigma^2}{\lambda_i\alpha_i^2} > 0$,
\begin{equation}
(\mu_2-\mu_1)\geq 0 \rightarrow (\text{MSE}(\mu_2)-\text{MSE}(\mu_1))\geq 0,
\label{eq:grad_diff1}
\end{equation}
and for $0<\mu_1<\frac{\sigma^2}{\lambda_i\alpha_i^2}$,
\begin{equation}
(\mu_2-\mu_1)< 0 \rightarrow (\text{MSE}(\mu_2)-\text{MSE}(\mu_1))\geq 0.
\label{eq:grad_diff2}
\end{equation}
Now, by combining \eqref{eq:grad_sign}, \eqref{eq:grad_diff1}, and \eqref{eq:grad_diff2}, one can write \eqref{eq:covex_def}, which concludes the proof.
\end{small}

\bibliographystyle{IEEEtran}
\bibliography{ref2}
\end{document}
\begin{document} \title{Computing Small Unit-Distance Graphs \mbox{with Chromatic Number 5}} \begin{abstract} We present a new method for reducing the size of graphs with a given property. Our method, which is based on clausal proof minimization, allowed us to compute several 553-vertex unit-distance graphs with chromatic number 5, while the smallest published unit-distance graph with chromatic number 5 has 1581 vertices. The latter graph was constructed by Aubrey de Grey to show that the chromatic number of the plane is at least 5. The lack of a 4-coloring of our graphs is due to a clear pattern enforced on some vertices. Also, our graphs can be mechanically validated in a second, which suggests that the pattern is based on a reasonably short argument. \end{abstract} \section{Introduction} The {\em chromatic number of the plane}, a problem first proposed by Edward Nelson in 1950~\cite{coloring}, asks how many colors are needed to color all points of the plane such that no two points at distance 1 from each other have the same color. Early results showed that at least four and at most seven colors are required. By the de Bruijn--Erd\H{o}s theorem, the chromatic number of the plane is the largest possible chromatic number of a finite unit-distance graph~\cite{deBruijn}. The Moser Spindle, a unit-distance graph with 7 vertices and 11 edges, shows the lower bound~\cite{Moser}, while the upper bound is due to a 7-coloring of the entire plane by John Isbell~\cite{coloring}. In a recent breakthrough for this problem, Aubrey de Grey improved the lower bound by providing a unit-distance graph with 1581 vertices with chromatic number 5~\cite{DeGrey}. This graph was obtained by shrinking an initial graph with chromatic number 5 consisting of $20\,425$ vertices. The 1581-vertex graph is almost minimal: at most 4 vertices can be removed without introducing a 4-coloring of the remaining graph. 
The discovery by de Grey started a Polymath project to find smaller unit-distance graphs with chromatic number 5. In this paper, we present our first contributions in this direction and we describe a method to reduce the size of graphs while preserving their chromatic number. Our results show that this method is quite effective as it was able to produce unit-distance graphs with 553 vertices. Our method exploits two formal-methods technologies: the ability of \emph{satisfiability \textup{(}SAT\textup{)} solvers} to find a short refutation for unsatisfiable formulas (if they exist) and \emph{proof checkers} that can minimize refutations and unsatisfiable formulas. The refutations emitted by SAT solvers are hardly minimal. Depending on the application from which the formula originates, typically $10\%$ to $99\%$ of the refutation can be omitted. Several techniques have been developed to avoid checking irrelevant parts of a refutation~\cite{Heule:2013:trim}. These techniques minimize proofs in order to share and revalidate them. For example, the proofs of the Boolean Pythagorean Triples~\cite{ptn} and Schur Number Five~\cite{S5} problems are enormous, even after minimization: 200 terabytes and 2 petabytes, respectively. Here we use clausal-proof-minimization techniques for a different purpose: shrinking graphs. Given a unit-distance graph with chromatic number 5, we first construct a propositional formula that encodes whether there exists a valid 4-coloring of this graph. This formula is unsatisfiable and we can use a SAT solver to compute a refutation. From the minimized refutation, we extract a subgraph that also has chromatic number 5. We then apply this process repeatedly to make the graph ever smaller. \section{Preliminaries} \subsection{Chromatic Number of the Plane} The Chromatic Number of the Plane (CNP) asks how many colors are required in a coloring of the plane to ensure that there exists no monochromatic pair of points with distance 1. 
A graph for which all edges have the same length is called a {\em unit-distance graph}. A lower bound for CNP of $k$ colors can be obtained by showing that a unit-distance graph has chromatic number $k$. We will use three operations to construct larger and larger graphs: the Minkowski sum, rotation, and merge. Given two sets of points $A$ and $B$, the Minkowski sum of $A$ and $B$, denoted by $A \oplus B$, equals \mbox{$\{a+b \mid a \in A, b \in B\}$}. Consider the sets of points $A = \{(0,0), (1,0)\}$ and \mbox{$B = \{(0,0), (1/2,\sqrt{3}/2)\}$}, then $A \oplus B = \{(0,0), (1,0), (1/2,\sqrt{3}/2), (3/2,\sqrt{3}/2)\}$. \begin{figure} \caption{From left to right: illustrations of $A$, $B$, $A \oplus B$, and the Moser Spindle. The graphs shown have chromatic number 2, 2, 3, and 4, respectively.} \label{fig:intro} \end{figure} Given a positive integer $i$, we denote by $\theta_i$ the rotation around point $(0,0)$ with angle $\arccos(\frac{2i-1}{2i})$ and by $\theta_i^k$ the application of $\theta_i$ $k$ times. Let $p$ be a point with distance $\sqrt{i}$ from $(0,0)$, then the points $p$ and $\theta_i(p)$ are exactly distance 1 apart and thus would be connected with an edge in a unit-distance graph. Consider again the set of points $A \oplus B$ above. The points $A \oplus B \cup \theta_3(A \oplus B)$ form the Moser Spindle. Figure~\ref{fig:intro} shows visualizations of these sets with connected vertices colored differently. \subsection{Propositional Formulas} We will minimize graphs on the propositional level. We consider propositional formulas in \emph{conjunctive normal form} (CNF), which are defined as follows. A \emph{literal} is either a variable $x$ (a \emph{positive literal}) or the negation $\overline x$ of a variable~$x$ (a \emph{negative literal}). The \emph{complement} $\overline l$ of a literal $l$ is defined as $\overline l = \overline x$ if $l = x$ and $\overline l = x$ if $l = \overline x$. 
For a literal $l$, $\mathit{var}(l)$ denotes the variable of $l$. A \emph{clause} is a disjunction of literals and a \emph{formula} is a conjunction of clauses. An \emph{assignment} is a function from a set of variables to the truth values 1{}~(\emph{true}) and 0{} (\emph{false}). A literal $l$ is \emph{satisfied} by an assignment $\alpha$ if $l$ is positive and \mbox{$\alpha(\mathit{var}(l)) = 1$} or if it is negative and $\alpha(\mathit{var}(l)) = 0$. A literal is \emph{falsified} by an assignment if its complement is satisfied by the assignment. A clause is satisfied by an assignment $\alpha$ if it contains a literal that is satisfied by~$\alpha$. Finally, a formula is satisfied by an assignment $\alpha$ if all its clauses are satisfied by $\alpha$. A formula is \emph{satisfiable} if there exists an assignment that satisfies it and otherwise it is \emph{unsatisfiable}. Two formulas are \emph{logically equivalent} if they are satisfied by the same assignments; they are \emph{satisfiability equivalent} if they are either both satisfiable or both unsatisfiable. \subsection{Clausal Proofs} In the following, we introduce a formal notion of clause redundancy. A clause $C$ is \emph{redundant} with respect to a formula $F$ if $F$ and $F \land C$ are satisfiability equivalent. For instance, the clause $C = x \lor y$ is redundant with respect to the formula $F = (\overline x \lor \overline y)$ since $F$ and $F \land C$ are satisfiability equivalent (although they are not logically equivalent). This redundancy notion allows us to add redundant clauses to a formula without affecting its satisfiability. Given a formula $F = \{C_1, \dots, C_m\}$, a \emph{clausal derivation} of a clause $C_n$ from $F$ is a sequence $C_{m+1}, \dots, C_n$ of clauses. Such a sequence gives rise to \mbox{formulas} $F_m, F_{m+1}, \dots, F_n$, where $F_i = \{C_1, \dots, C_i\}$. We call $F_i$ the \emph{accumulated formula} corresponding to the \mbox{$i$-th} proof step. 
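The redundancy example above ($F = (\overline x \lor \overline y)$ and $C = x \lor y$) can be checked by brute force on tiny formulas. A sketch using DIMACS-style integer literals (a positive integer for a variable, its negation for the complemented literal); the helper name is ours:

```python
import itertools

def satisfiable(clauses, nvars):
    """Brute-force satisfiability check: try every assignment; a clause
    is satisfied iff some literal in it is made true by the assignment."""
    for bits in itertools.product([False, True], repeat=nvars):
        alpha = {v + 1: bits[v] for v in range(nvars)}
        if all(any(alpha[abs(l)] == (l > 0) for l in cl) for cl in clauses):
            return True
    return False

F = [[-1, -2]]   # (not x or not y)
C = [1, 2]       # (x or y)
# C is redundant w.r.t. F: F and F /\ C are satisfiability equivalent
# (both satisfied by x = true, y = false), though not logically equivalent.
```

Exponential enumeration is of course only a specification of the semantics; the paper relies on SAT solvers for the real instances.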
A clausal derivation is \emph{correct} if every clause $C_i$ ($i > m$) is redundant with respect to the formula $F_{i-1}$ and if this redundancy can be checked in polynomial time with respect to the size of the proof. A clausal derivation is a \emph{proof} of a formula $F$ if it derives the unsatisfiable empty clause. Clearly, since every clause-addition step preserves satisfiability, and since the empty clause is always false, a proof of $F$ certifies the unsatisfiability of~$F$. The proofs computed in this paper show that the chromatic number of a given graph is at least 5. We will also refer to proofs as {\em refutations} as they refute the existence of a valid 4-coloring. \section{Clausal Proof Minimization} SAT solving techniques are not only useful to validate the chromatic number of a graph, but they can also help reduce the size of the graph while preserving the chromatic number. The method works as follows. Given a graph $G$ with chromatic number $k$, first generate the propositional formula $F$ that encodes whether the graph can be colored with $k-1$ colors. This formula is unsatisfiable. Most SAT solvers can emit a proof of unsatisfiability. There exist several checkers for such proofs, even checkers that are formally verified in the theorem provers {\sf ACL2}, {\sf Coq}, and {\sf Isabelle}~\cite{Cruz-Filipe2017,Lammich2017}. We used the (unverified) checker {\sf DRAT-trim}~\cite{Heule:2013:trim} that allows minimizing the clausal proof as well as extracting an unsatisfiable core, i.e., a subformula that is also unsatisfiable. From the unsatisfiable core one can easily extract a subgraph $G'$ of $G$ such that $G'$ also has chromatic number $k$. \subsection{Encoding} We can compute the chromatic number of a graph $G$ as follows. Construct two formulas, one asking whether $G$ can be colored with $k-1$ colors, and one whether $G$ can be colored with $k$ colors. 
Now, $G$ has chromatic number $k$ if and only if the former is unsatisfiable while the latter is satisfiable. The construction of these two formulas can be achieved using the following encoding. Given a graph $G = (V,E)$ and a parameter $k$, the encoding uses $k|V|$ Boolean variables $x_{v,c}$ with $v \in V$ and $c \in \{1,\dots,k\}$. These variables have the following meaning: $x_{v,c}$ is true if and only if vertex $v$ has color $c$. Now we can encode whether $G$ can be colored with $k$ colors: \[ G_k := \bigwedge_{v \in V} (x_{v,1} \lor \dots \lor x_{v,k}) \land \bigwedge_{\{v,w\} \in E} \bigwedge_{c \in \{1,\dots,k\}} (\overline x_{v,c} \lor \overline x_{w,c}) \] The first type of clauses ensures that each vertex has at least one color, while the second type of clauses forces that two connected vertices are colored differently. Additionally, we could include clauses to require that each vertex has at most one color. However, these clauses are redundant and would be eliminated by blocked clause elimination~\cite{BCE}, a SAT preprocessing technique. We added symmetry-breaking predicates~\cite{Crawford} during all experiments to speed up solving and proof minimization. The color symmetries were broken by fixing the vertex at ($0$, $0$) to the first color, the vertex at ($1$, $0$) to the second color, and the vertex at ($1/2$, $\sqrt{3}/2$) to the third color. These three points are at distance 1 from each other and occurred in all our graphs. The speedup is roughly a factor of 24 ($4 \cdot 3 \cdot 2$) when trying to find a 4-coloring. We did not explore yet whether an encoding based on Zykov contraction~\cite{Zykov} would allow shorter proofs of unsatisfiability. In essence, such an encoding would add variables and clauses that encode for a pair of vertices whether they have the same color. Solving graph coloring problems using such an extended encoding has been successful in the past~\cite{Schaafsma}. 
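The encoding $G_k$ can be generated mechanically. The sketch below uses an illustrative variable-indexing scheme ($x_{v,c} \mapsto vk + c + 1$ for vertices $0,\dots,n-1$; the scheme itself is ours, not the paper's) and checks by brute force that a triangle is 3-colorable but not 2-colorable:

```python
import itertools

def coloring_cnf(n, edges, k):
    """Clauses of G_k for vertices 0..n-1: each vertex gets at least one
    of k colors, and adjacent vertices never share a color."""
    var = lambda v, c: v * k + c + 1
    clauses = [[var(v, c) for c in range(k)] for v in range(n)]
    clauses += [[-var(v, c), -var(w, c)]
                for (v, w) in edges for c in range(k)]
    return clauses

def satisfiable(clauses, nvars):
    """Brute-force SAT check on DIMACS-style integer literals."""
    for bits in itertools.product([False, True], repeat=nvars):
        alpha = {v + 1: bits[v] for v in range(nvars)}
        if all(any(alpha[abs(l)] == (l > 0) for l in cl) for cl in clauses):
            return True
    return False

triangle = [(0, 1), (1, 2), (0, 2)]
```

The clause count is $|V| + k|E|$, matching the two conjunctions in $G_k$; for real instances the formula would be written in DIMACS format and handed to a SAT solver rather than enumerated.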
\subsection{Graph Trimming} Modern SAT solvers can emit clausal proofs. We used the SAT solver {\sf Glucose}~\cite{glucose} to produce the proofs. The most commonly supported format for clausal proofs is $\mathrm{DRAT}$, which computes the redundancy of clauses using the resolution asymmetric tautology check~\cite{rules}. Some $\mathrm{DRAT}$ proof checkers can extract from a refutation an unsatisfiable core, i.e., a subformula that is still unsatisfiable. When the formula expresses a graph coloring property, the unsatisfiable core represents a subgraph with the same coloring property. The absence of the clause $(x_{v,1} \lor \dots \lor x_{v,k})$ in the core shows that vertex $v$ can be removed, while the absence of all clauses $(\overline x_{v,c} \lor \overline x_{w,c})$ with $c \in \{1,\dots,k\}$ shows that edge $\{v,w\}$ can be removed. When trying to find a small unit-distance graph with a given chromatic number, we are interested in reducing the number of vertices. Although the proof checker can be easily modified to ensure that no edges are removed, we achieved larger reductions by allowing edges to be deleted and then restoring edges between vertices that survived the shrinking. \subsection{Randomization} SAT solvers and clausal-proof-minimization tools are deterministic. To increase the probability of finding small unit-distance graphs with chromatic number 5, we want to randomize the process and minimize many clausal proofs. The proofs produced by SAT solvers depend heavily on the ordering of the clauses in the input file. The initial heuristic ordering of the variables is based on their occurrence in the input file. The earlier a variable occurs in the input file, the higher its place in the ordering. Although more sophisticated initialization methods have been proposed, this method is effective in practice. The effectiveness is caused by the typical encoding of a problem into propositional logic where one starts with the more important variables. 
However, for our application there are no clear important variables. Based on these observations, we applied the following lightweight randomization. First, we shuffle the input formula and apply graph trimming on the result. When the clausal-proof-minimization tool is no longer able to remove vertices from the graph, we shuffle the clauses of the current formula and produce a new clausal proof. Then we continue graph trimming using the new formula and proof. This process is repeated until randomization cannot further reduce the size of the graph. \subsection{Critical Graphs} A graph is vertex/edge {\em critical} with respect to a given property if removing any vertex/edge would break that property. Here we are interested in vertex critical graphs with respect to the chromatic number. Graph trimming as described above would remove most redundant vertices of the graph and the randomization method allows shrinking the graph even further. However, in most cases the reduced graphs are not critical: There still exist some vertices that can be removed while preserving the chromatic number. Both the SAT solver and the clausal-proof-minimization tool aim to find a relatively short argument (i.e., clausal proof) explaining why no 4-coloring exists, which does not necessarily imply the fewest number of involved vertices. In fact, an argument can frequently be shortened by using redundant (non-critical) vertices. In the final step we therefore make the graph critical by the following procedure. Randomly pick a vertex from the graph and determine the chromatic number of the graph without it. If the chromatic number is not changed, then the vertex is removed from the graph. This process is repeated until all remaining vertices have been determined to be critical. Instead of using this naive method to make the graph critical, we could have used more sophisticated tools that compute a minimal unsatisfiable core from the propositional formula. 
However, these tools did not improve the performance or the size in an observable way. \subsection{Validation} Determining whether a set of points forms a unit-distance graph with chromatic number 5 requires two checks: (i) does the corresponding graph have chromatic number 5; and (ii) is the distance between two connected points exactly 1. The techniques discussed in this paper can easily perform the first check. SAT solvers can compute valid 5-colorings for the critical graphs in a fraction of a second. The proofs showing that there exists no 4-coloring are actually quite small: between $14\,000$ and $19\,000$ clause addition steps. Proofs of that size can be checked in roughly a second even with formally verified checkers. We used the {\sf DRAT-trim} tool~\cite{Heule:2013:trim} to validate them. Proofs of recently solved hard-combinatorial problems, such as the Pythagorean Triples and Schur Number Five, are much larger: roughly 1 trillion and 10 trillion clause addition steps, respectively~\cite{ptn,S5}. For the second check we used a tool based on Gr\"obner bases, available at \url{http://fmv.jku.at/dist1sqrtgb/}, to validate for every edge in the graph that the corresponding points are exactly 1 apart. The tool produces files that can be validated using {\sf Singular}~\cite{Singular} and {\sf pactrim}~\cite{RitircBiereKauers-SCSC18}. There is no need to check whether all edges are present as missing edges can only decrease the chromatic number. Checking only the correctness of the edges in the graph is cheap. The total validation time for our smallest critical graphs is about a second or two. \begin{figure} \caption{Left, a 3-coloring of de Grey's graph $V_{31}$; right, the graph $V_{151}$.} \label{fig:V} \end{figure} \section{Results} In this section we discuss the various techniques that we used and developed to obtain small unit-distance graphs with chromatic number 5. 
The techniques were originally designed for verification purposes and applying them to graph minimization is novel and unexpected. The main strategy is to start with a large graph and shrink it using clausal proof minimization. We minimized several large graphs with various heuristics and most reduced graphs consisted of 800 to 900 vertices. However, we were able to produce several graphs of 553 vertices using three techniques. The first technique (Section~\ref{sec:small}) enabled producing graphs with less than 700 vertices consistently. Second, we obtained graphs with just over 600 vertices by shrinking merged copies of graphs with less than 700 vertices (Section~\ref{sec:merge}). Finally, we added some points far away from the origin in order to eliminate more points close to the origin (Section~\ref{sec:outer}). The graphs and corresponding proofs mentioned in this section are available at \url{https://github.com/marijnheule/CNP-SAT}. \subsection{Finding a small symmetric subgraph} \label{sec:small} The main building block of our graphs is de Grey's $V_{31}$~\cite{DeGrey}: five 7-wheels with a common central vertex. The graph $V_{31}$ has 31 vertices and 60 edges, is 3-colorable, and all points are in the field $\mathbb{Q}[\sqrt{3}, \sqrt{11}]$. These points can be obtained by applying $\theta_1^{j}\theta_3^{k}$ on point ($1$, $0$) around ($0$, $0$) with $j \in \{0,1,2,3,4,5\}$ and $k \in \{-1,-\frac{1}{2},0,\frac{1}{2},1\}$. A visualization of this graph is shown in Figure~\ref{fig:V} (left). During an early stage of the experimentation, we observed that the graph $(V_{31} \oplus V_{31} \oplus V_{31}) \cup \theta_4(V_{31} \oplus V_{31} \oplus V_{31})$ has chromatic number 5. Furthermore, all points that are further away than 2 from the center can be removed without affecting the chromatic number. 
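The construction of $V_{31}$ from rotations can be reproduced numerically. The sketch below rebuilds the point set and counts unit-distance edges; the floating-point tolerance is ours, in place of the paper's exact arithmetic over $\mathbb{Q}[\sqrt{3}, \sqrt{11}]$:

```python
import math

def rot(angle, p):
    """Rotate point p about the origin by the given angle."""
    c, s = math.cos(angle), math.sin(angle)
    return (c * p[0] - s * p[1], s * p[0] + c * p[1])

theta1 = math.acos(1.0 / 2.0)   # 60 degrees
theta3 = math.acos(5.0 / 6.0)

# theta_1^j theta_3^k applied to (1, 0), plus the common central vertex.
raw = [(0.0, 0.0)] + [rot(j * theta1 + k * theta3, (1.0, 0.0))
                      for j in range(6)
                      for k in (-1.0, -0.5, 0.0, 0.5, 1.0)]

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

V31 = []
for p in raw:                    # deduplicate numerically
    if all(dist(p, q) > 1e-9 for q in V31):
        V31.append(p)

unit_edges = [(i, j) for i in range(len(V31)) for j in range(i + 1, len(V31))
              if abs(dist(V31[i], V31[j]) - 1.0) < 1e-9]
```

The 30 rim points all lie on the unit circle, so each is at distance 1 from the center, and each of the five rotated hexagons contributes 6 unit-length chords, giving the 31 vertices and 60 edges stated above. The same numerics confirm the defining property of $\theta_i$: a point at distance $\sqrt{i}$ from the origin moves by exactly distance 1.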
\begin{figure} \caption{A 4-coloring of $V_{1939}$.} \label{fig:V1939} \end{figure} Instead of removing the points at distance larger than 2 from the center, we constructed the following graph. Let $V_{151}$ be the Minkowski sum of $V_{31}$ and $V_{31}$ without the points at distance larger than 1 from the center. This graph has 151 vertices and 510 edges and is shown in Figure~\ref{fig:V} (right). Now let $V_{1939}$ be the Minkowski sum of $V_{31}$ and $V_{151}$. This graph is shown in Figure~\ref{fig:V1939}. The graph $V_{1939} \cup \theta_4(V_{1939})$ has chromatic number 5 as well. We applied clausal proof minimization on the formula that encodes whether the graph $V_{1939} \cup \theta_4(V_{1939})$ is 4-colorable. Most random probes of clausal proof minimization produced a subgraph of $V_{1939} \cup \theta_4(V_{1939})$ with slightly more than 800 vertices. Occasionally it produced graphs with fewer than 700 vertices, while never producing graphs in the range of 700 to 800 vertices. \begin{figure} \caption{A visualization of the edges between $S_{199}$ and $\theta_4(S_{199})$.} \label{fig:T4S199} \end{figure} Closer examination of the minimized graphs with fewer than 700 vertices revealed that only a small fraction of the points (always less than 200 vertices) are in the field $\mathbb{Q}[\sqrt{3}, \sqrt{11}]$. These points originate from the subgraph $V_{1939}$, while the other points originate from the subgraph $\theta_4(V_{1939})$. Other patterns can be observed in the graphs with fewer than 700 vertices: there were at least 12 points in the field $\mathbb{Q}[\sqrt{3}, \sqrt{11}]$ at distance 2, while the graphs with more than 800 vertices had fewer than three such points. Hence keeping the points at distance 2 appears crucial to find smaller graphs. Rotation $\theta_4$ does not only add edges between points at distance 2 (by construction), but also between points at other distances. 
In fact, half the edges between points in $V_{1939}$ and $\theta_4(V_{1939})$ are due to points that are closer to the center: i.e., at $\frac{\sqrt{33} + 1}{2\sqrt{3}}$ and $\frac{\sqrt{33} - 1}{2\sqrt{3}}$ from the origin. Figure~\ref{fig:T4S199} shows the newly introduced edges due to $\theta_4$. \begin{figure} \caption{A 4-coloring of the graph $S_{199}$.} \label{fig:S199} \end{figure} Visualizing the points in the field $\mathbb{Q}[\sqrt{3}, \sqrt{11}]$ reveals that they are highly symmetric: both reflection in the horizontal axis and a rotation of $\theta_1 = 60^{\circ}$ map the points onto themselves. Figure~\ref{fig:S199} shows this visualization. Shown is a 199-vertex graph with 888 edges at unit distance, which we call $S_{199}$. The minimized graphs did not fully produce $S_{199}$, but always yielded a subgraph that missed a handful (up to a dozen) vertices in various locations. There exist many 4-colorings of $S_{199}$, but we observed no clear pattern. Interesting patterns emerge when merging $S_{199}$ and $\theta_4(S_{199})$, as shown in Figure~\ref{fig:T7plot}. Notice that points that are close to each other frequently have the same color. More importantly, roughly half of the vertices that are close to distance 2 from the center have the same color as the central vertex. In later experiments we minimized the graph $V_{1939} \cup \theta_4(S_{199})$, which allowed us to consistently produce unit-distance graphs with fewer than 700 vertices. We suspect that the above mentioned patterns contribute to the lack of a 4-coloring of $V_{1939} \cup \theta_4(S_{199})$. Notice that $V_{1939}$ has $S_{199}$ as a subgraph. \begin{figure} \caption{A 4-coloring of the graph $S_{199} \cup \theta_4(S_{199})$.} \label{fig:T7plot} \end{figure} \subsection{Merging Critical Graphs} \label{sec:merge} In order to further produce smaller unit-distance graphs with chromatic number 5, we selected two critical graphs obtained earlier, merged them, and applied clausal proof minimization again. 
There are a significant number of options to merge two graphs and we experimented with a variety of these. The most effective merging strategy in our experiments turned out to be rotating the graphs along the central vertex in such a way that a vertex in one graph at unit distance from the center is merged with a vertex from the other graph at unit distance. Although two different critical graphs can be used for merging, we observed that it is also effective to merge two copies of the same critical graph. The minimization procedure frequently produced a graph that was larger than both the critical graphs that were merged. Only three kinds of rotations occasionally resulted in smaller graphs. The first two rotations are $\theta_1^k$ and $\theta_3^k$ for a small value of $k$. These rotations clearly increase the average vertex degree by merging vertices and introducing edges between points at distance 1 ($\theta_1^k$) and $\sqrt{3}$ ($\theta_3^k$) from the origin. Most other rotations result in little interaction between the two critical graphs and thus hardly increase the average vertex degree. \begin{figure} \caption{A rotation by $\theta_3^{\frac{1}{2}}$ connecting the points of two copies of three 7-wheel graphs.} \label{fig:3wheel} \end{figure} The most effective rotations introduce edges between the points at distance $1$ and the distances $\frac{\sqrt{33} + 3}{6}$ and $\frac{\sqrt{33} - 3}{6}$ from the origin, thereby increasing the average vertex degree significantly. Figure~\ref{fig:3wheel} illustrates this by showing three 7-wheel graphs with the radii $1$, $\frac{\sqrt{33} + 3}{6}$, and $\frac{\sqrt{33} - 3}{6}$ (left) and two copies of this graph rotated in such a way that the points on these distances become connected (right). The graph on the left has average vertex degree $\frac{36}{19}$, while the graph on the right has average vertex degree $\frac{120}{37}$. A rotation by $\theta_3^{\frac{1}{2}}$ for example achieves this and maps point $(1,0)$ onto point $(\frac{\sqrt{33}}{6}, \frac{\sqrt{3}}{6})$. 
Both points are at unit distance from the origin and both are part of $V_{31}$ and of most other graphs that we used in the experiments. \begin{figure} \caption{A visualization of a 610-vertex unit-distance graph with chromatic number 5. Five colors are used for the vertices. Only the center uses the fifth color (white).} \label{fig:610} \end{figure} The combination of merging and minimization only introduced vertices in the field $\mathbb{Q}[\sqrt{3}, \sqrt{11}]$. The smallest graphs contained roughly 50 vertices that do not occur in $(V_{31} \oplus V_{31} \oplus V_{31})$ and thus not in $V_{1939}$. The smallest graph that we found using the techniques discussed so far contains 610 vertices and 3000 edges. This graph is shown in Figure~\ref{fig:610} using a 5-coloring in which only the central vertex has the fifth color. Recall that this graph is vertex critical. Hence our graph possesses such a coloring in which any vertex can be the only one with the fifth color. \subsection {Minimizing the Small Part} \label{sec:outer} The critical graphs found so far can be partitioned into two parts: a subgraph of $\theta_4(S_{199})$ and the subgraph induced by the remaining vertices. We refer to the former as the small part, as it consists typically of only 187 vertices, and to the latter as the large part. In all statements regarding the size of these graphs, we count the central vertex in both parts. All points in the smallest critical graphs are at distance 2 or less from the center. Several approaches have been examined in order to find unit-distance graphs with fewer than 600 vertices. Only one approach was effective. \begin{figure} \caption{A visualization of a 553-vertex unit-distance graph with chromatic number 5. Five colors are used for the vertices. Only the center uses the fifth color (white).} \label{fig:553} \end{figure} We focussed on adding points that are further away than 2 from the origin in order to remove more inner vertices. 
Adding points from the field $\mathbb{Q}[\sqrt{3}, \sqrt{11}]$ may allow reducing the large part, but none of the experiments were successful. However, we were able to substantially reduce the small part using this strategy. The most effective approach was as follows. We first constructed the Minkowski sum of $\theta_4(S_{199})$ and $\theta_4(S_{199})$ and removed all points at distance 2 or less from the origin. This graph consists of 2028 vertices. All points were added to the smallest critical graphs that were found in the earlier steps, followed by clausal proof minimization. This resulted in a dozen graphs with 553 vertices and (on average) 2720 edges. Figure~\ref{fig:553} shows one of these graphs, which we refer to as $G_{553}$. Practically all vertices that were removed during minimization originated from the small part. This part was reduced to 133 or 134 vertices. The 553-vertex graphs appear less symmetric compared to the earlier graphs. This is caused by the few vertices that are far from the origin. The unit-distance graphs with 553 vertices are vertex critical, but not edge critical. The proofs of unsatisfiability show that many of the edge clauses can be removed without introducing a 4-coloring. Randomly removing edges until fixpoint eliminates about 270 edges (close to $10\%$) of these graphs. Remarkably, all critical graphs have a handful of vertices with degree 4. If we removed such a vertex from the graph, its four neighbors would all have different colors in every valid 4-coloring. Graph $S_{199}$ even has 12 vertices with degree 4. Reducing a 553-vertex graph to become edge critical will increase the number of vertices with degree 4 to roughly 12. These vertices tend to be evenly distributed between the small and large parts of the graph. \subsection{Analysis} The sizes of the small and the large parts of the 553-vertex graphs suggest that they play a different role in eliminating 4-colorings. 
We analyzed the 4-colorings of both parts when restricted to the key vertices, i.e., the ones that connect the parts. These are the vertices at distance $\frac{\sqrt{33} - 1}{2\sqrt{3}}$, $\frac{\sqrt{33} + 1}{2\sqrt{3}}$, and 2 from the origin. Recall that Figure~\ref{fig:T4S199} shows the interaction between these vertices. The 553-vertex graphs contain all 24 vertices of $S_{199} \cup \theta_4(S_{199})$ at distance 2 from the origin and most of the vertices at distances $\frac{\sqrt{33} - 1}{2\sqrt{3}}$ and $\frac{\sqrt{33} + 1}{2\sqrt{3}}$ from the origin. \begin{figure} \caption{The 420-vertex large part of $G_{553}$ with the central and key vertices numbered.} \label{fig:numberL} \end{figure} Figure~\ref{fig:numberL} shows the large part of $G_{553}$ in which the central and key vertices are numbered. The other vertices in the large part significantly restrict the number of different 4-colorings of these key vertices. In fact, there are only twenty 4-colorings of these vertices when one only distinguishes whether a key vertex has the same color as the central vertex or a different one. Table~\ref{tab:solutions} shows the details. The clearest pattern is that either the vertices $v_1$ to $v_5$ (at distance $\frac{\sqrt{33} - 1}{2\sqrt{3}}$) have the same color as the central vertex or the vertices $v_6$ to $v_{10}$ (at distance $\frac{\sqrt{33} + 1}{2\sqrt{3}}$) have the same color as the central vertex. This pattern heavily constrains the vertices in the small part at those distances: Either all vertices at distance $\frac{\sqrt{33} + 1}{2\sqrt{3}}$ or all vertices at distance $\frac{\sqrt{33} - 1}{2\sqrt{3}}$ from the origin must have a different color than the central vertex. Other patterns can be observed as well. For example, if the two missing vertices were added, then most colorings could be obtained by a rotation of 60 degrees. In contrast, the small part hardly constrains its key vertices since as many as $9974$ 4-colorings are allowed. 
However, none of these many 4-colorings is compatible with the twenty 4-colorings of the large part. Enforcing the above mentioned pattern, i.e., either all vertices at distance $\frac{\sqrt{33} + 1}{2\sqrt{3}}$ or all vertices at distance $\frac{\sqrt{33} - 1}{2\sqrt{3}}$ from the origin must have a different color than the central vertex, reduces the number of 4-colorings of the small part to 1353. \newcolumntype{P}[1]{>{\centering\arraybackslash}p{#1}} \begin{table}[t] \caption{A list of the twenty 4-colorings of $v_1$ to $v_{22}$ in $G_{553}$. The $1$s denote that the vertex has the same color as the central vertex, while the $0$s denote that the vertex is colored differently than the central vertex. The bold $1$s indicate the main pattern.} \label{tab:solutions} \centering \taburulecolor{lightgray} \begin{tabular}{|@{}P{0.55cm}@{}|@{}P{0.55cm}@{}|@{}P{0.55cm}@{}|@{}P{0.55cm}@{}|@{}P{0.55cm}@{}||@{} P{0.55cm}@{}|@{}P{0.55cm}@{}|@{}P{0.55cm}@{}|@{}P{0.55cm}@{}|@{}P{0.55cm}@{}||@{}P{0.55cm}@{}|@{} P{0.55cm}@{}|@{}P{0.55cm}@{}|@{}P{0.55cm}@{}|@{}P{0.55cm}@{}|@{}P{0.55cm}@{}|@{}P{0.55cm}@{}|@{}P{0.55cm}@{}|@{}P{0.55cm}@{}|@{}P{0.55cm}@{}|@{}P{0.55cm}@{}|@{}P{0.55cm}@{}|} \hhline{-----||-----||------------} $v_1$ & $v_2$ &$v_3$ &$v_4$ &$v_5$ &$v_6$ &$v_7$ &$v_8$ &$v_9$ &$v_{10}$ &$v_{11}$ & $v_{12}$ &$v_{13}$ &$v_{14}$ &$v_{15}$ &$v_{16}$ &$v_{17}$ &$v_{18}$ &$v_{19}$ &$v_{20}$ &$v_{21}$ &$v_{22}$ \\ \hhline{-----||-----||------------}\noalign{ }\hhline{-----||-----||------------} 0 & 0 & 0 & 0 & 0 & {\bf 1} & {\bf 1} & {\bf 1} & {\bf 1} & {\bf 1} & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ \hhline{-----||-----||------------} 0 & 0 & 0 & 0 & 0 & {\bf 1} & {\bf 1} & {\bf 1} & {\bf 1} & {\bf 1} & 0 & 1 & 0 & 1 & 1 & 1 & 1 & 0 & 1 & 0 & 1 & 1 \\ \hhline{-----||-----||------------} 1 & 0 & 0 & 0 & 0 & {\bf 1} & {\bf 1} & {\bf 1} & {\bf 1} & {\bf 1} & 1 & 1 & 0 & 1 & 0 & 1 & 1 & 1 & 1 & 0 & 1 & 0 \\ \hhline{-----||-----||------------} 0 & 1 & 0 & 0 & 0 & {\bf 1} & 
{\bf 1} & {\bf 1} & {\bf 1} & {\bf 1} & 1 & 0 & 1 & 1 & 0 & 1 & 0 & 1 & 1 & 1 & 1 & 0 \\ \hhline{-----||-----||------------} 0 & 0 & 1 & 0 & 0 & {\bf 1} & {\bf 1} & {\bf 1} & {\bf 1} & {\bf 1} & 1 & 0 & 1 & 0 & 1 & 1 & 0 & 1 & 0 & 1 & 1 & 1 \\ \hhline{-----||-----||------------} 0 & 0 & 0 & 1 & 0 & {\bf 1} & {\bf 1} & {\bf 1} & {\bf 1} & {\bf 1} & 1 & 1 & 1 & 0 & 1 & 0 & 1 & 1 & 0 & 1 & 0 & 1 \\ \hhline{-----||-----||------------} 0 & 0 & 0 & 0 & 1 & {\bf 1} & {\bf 1} & {\bf 1} & {\bf 1} & {\bf 1} & 0 & 1 & 1 & 1 & 1 & 0 & 1 & 0 & 1 & 1 & 0 & 1 \\ \hhline{-----||-----||------------} 1 & 0 & 0 & 1 & 0 & {\bf 1} & {\bf 1} & {\bf 1} & {\bf 1} & {\bf 1} & 1 & 1 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 \\ \hhline{-----||-----||------------} 0 & 1 & 0 & 0 & 1 & {\bf 1} & {\bf 1} & {\bf 1} & {\bf 1} & {\bf 1} & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 \\ \hhline{-----||-----||------------} 0 & 0 & 1 & 0 & 0 & {\bf 1} & {\bf 1} & {\bf 1} & {\bf 1} & {\bf 1} & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 1 & 1 \\ \hhline{-----||-----||------------} \noalign{ }\hhline{-----||-----||------------} {\bf 1} & {\bf 1} & {\bf 1} & {\bf 1} & {\bf 1} & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ \hhline{-----||-----||------------} {\bf 1} & {\bf 1} & {\bf 1} & {\bf 1} & {\bf 1} & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 1 & 0 & 0 \\ \hhline{-----||-----||------------} {\bf 1} & {\bf 1} & {\bf 1} & {\bf 1} & {\bf 1} & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 1 \\ \hhline{-----||-----||------------} {\bf 1} & {\bf 1} & {\bf 1} & {\bf 1} & {\bf 1} & 0 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 1 & 1 & 1 & 1 & 1 & 1 \\ \hhline{-----||-----||------------} {\bf 1} & {\bf 1} & {\bf 1} & {\bf 1} & {\bf 1} & 0 & 0 & 1 & 0 & 0 & 1 & 1 & 0 & 1 & 0 & 0 & 1 & 0 & 1 & 1 & 1 & 1 \\ \hhline{-----||-----||------------} {\bf 1} & {\bf 1} & {\bf 1} & {\bf 1} & {\bf 1} & 0 & 0 & 0 & 1 & 0 & 1 & 1 & 1 & 1 & 0 & 1 & 0 & 0 & 1 & 0 & 1 & 1 \\ 
\hhline{-----||-----||------------} {\bf 1} & {\bf 1} & {\bf 1} & {\bf 1} & {\bf 1} & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 1 & 0 & 0 & 1 & 0 \\ \hhline{-----||-----||------------} {\bf 1} & {\bf 1} & {\bf 1} & {\bf 1} & {\bf 1} & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 1 \\ \hhline{-----||-----||------------} {\bf 1} & {\bf 1} & {\bf 1} & {\bf 1} & {\bf 1} & 0 & 1 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 \\ \hhline{-----||-----||------------} {\bf 1} & {\bf 1} & {\bf 1} & {\bf 1} & {\bf 1} & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 \\ \hhline{-----||-----||------------} \end{tabular} \end{table} \section{Conclusions} We demonstrated that clausal proof minimization can be an effective technique to reduce the size of graphs with a given property. We used this method to shrink graphs while preserving the chromatic number. This resulted in a dozen unit-distance graphs with chromatic number 5 consisting of 553 vertices --- a reduction of over 1000 vertices compared to the smallest previously known unit-distance graph with chromatic number 5. A main goal of this research is to obtain a human-understandable unit-distance graph with chromatic number 5. Although that goal has not been reached yet, the experiments produced some interesting results. For example, either all vertices at distance $\frac{\sqrt{33} - 1}{2\sqrt{3}}$ or all vertices at distance $\frac{\sqrt{33} + 1}{2\sqrt{3}}$ from the central vertex are forced to the same color as the central vertex by the large part of the minimized graphs. Also, our research produced a symmetric graph of 199 vertices that was vital for the reduction. We will study this graph in more detail to determine which properties make it so useful. Moreover, we found two rotations that connected points at multiple distances, thus increasing the average vertex degree of unit-distance graphs. 
Finding more such rotations may allow us to shrink the graphs even further. Applying clausal-proof techniques to provide mathematical insights is an interesting twist in the discussion about the usefulness of mechanized mathematics. It has been argued that computers are just ``ticking off possibilities''~\cite{Lamb}. In this case, however, they reveal important patterns. The techniques described in this paper may actually produce the cleanest and most compact proof that the chromatic number of the plane is at least 5. Finally, all graphs used in our experiments could easily be colored with 5 colors, even the ones with many thousands of vertices. However, we observed that this does not hold for the graph $(S_{199} \oplus S_{199}) \cup \theta_4(S_{199} \oplus S_{199})$. This graph is 5-colorable, even when requiring two colors for the central vertex, but computing such a coloring is expensive. Consequently, such colorings may be rare and may thus contain certain patterns. This could point to the existence of unit-distance graphs with chromatic number 6 with thousands of vertices. \end{document}
\begin{document} \title{Mitigating Coupling Map Constrained Correlated Measurement Errors on Quantum Devices} \thispagestyle{plain} \pagestyle{plain} \begin{abstract} We introduce a technique for the suppression of state-dependent and correlated measurement errors, which are commonly observed on modern superconducting quantum devices. Our method leverages previous results establishing that correlated errors tend to be physically localised on quantum devices: we perform characterisations over the coupling map of the device and join overlapping measurement calibrations into a series of sparse matrices. We term this `Coupling Map Calibration'. We quantitatively demonstrate the advantages of our proposed error mitigation system design across a range of current IBM quantum devices. Our experimental results on common benchmark circuits demonstrate up to a $41\%$ reduction in the error rate, without increasing the number of executions of the quantum device required, when compared to conventional error mitigation methods. \end{abstract} \section{Introduction} Advancements in techniques to accurately store and process quantum information have led towards the realisation of small-scale prototype quantum devices~\cite{preskill_nisq,Qiskit,sycamore,honeywell,ionq,rigetti}. It is hoped that these noisy intermediate-scale quantum (NISQ) devices may improve to the point that they are able to implement algorithms with asymptotic improvements over their classical counterparts~\cite{deutsch_rapid_1992, shor_polynomial-time_1995, grover_fast_1996}. These algorithms present more efficient methods for tackling problems such as integer factorisation and nano-scale simulation~\cite{kassal_polynomial-time_2008,full_stack}. One of the many hurdles that hinder the potential applications of quantum computation is the extreme sensitivity to noise~\cite{nielsen} and the associated difficulty in ensuring the reliable execution of quantum algorithms. 
Error rates are too high and qubit counts too low to perform quantum error correction and satisfy a fault-tolerance threshold~\cite{Proctor_2021,gottesman}. In this noisy regime, devices are limited to low circuit depths and require a large number of repeated executions of a quantum circuit to statistically determine the correct output~\cite{altepeter}. One of the major challenges in designing future quantum systems is to mitigate or even eliminate the sources of errors, with the hope of eventually achieving fault tolerance and scalable computation. Several approaches have been taken to the problems of characterising, suppressing~\cite{magesan_characterizing_2012,emerson_rb,altepeter,Granade_2017}, mitigating~\cite{harper_correlated,howard,martonosi_crosstalk}, and correcting~\cite{gottesman} errors in quantum systems. As quantum devices require regular recalibration, long-term characterisation is not always possible, since the error behaviour of these systems drifts over time~\cite{harper_fault-tolerant_2019}. This characterisation is essential for mitigating errors in quantum systems. Once characterised, a variety of techniques may be deployed to attempt to suppress the associated errors and, in the long term, approach a fault-tolerance threshold. 
\begin{figure} \caption{\small Frobenius norm between calibration matrices $C_{ij}$ and $C_i \otimes C_j$ for the IBMQ Oslo, Lima, Quito, Manila, Nairobi and Belem devices.} \label{fig:corr_coupling_demo} \end{figure} Current state-of-the-art characterisation techniques include randomised benchmarking~\cite{magesan_characterizing_2012}, tomography~\cite{berry,howard,altepeter}, heuristic methods~\cite{Granade_2017}, and particle filtering techniques~\cite{riddhi, qinfer-1_0}. In this paper, we focus on suppressing two categories of significant errors widely observed on today's superconducting NISQ devices: \textit{state-dependent measurement errors} and \textit{correlated measurement errors}. Prior approaches~\cite{swamit_state_dep,Qiskit} to suppress these errors have been hemmed in by the competing demands of scalability and of fully characterising correlated errors. More recent Bayesian approaches~\cite{jigsaw} are constrained by sampling requirements on devices that exhibit inherently correlated, non-uniform and non-Markovian noise~\cite{wildcard_rbk}. An example of the complexity of this error landscape can be seen in Fig.~\ref{fig:corr_coupling_demo}. Here the physical layout of the qubits is represented by their proximity in the figure, while the correlation of measurement errors is represented by the edge thickness. 
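The comparison underlying Fig.~\ref{fig:corr_coupling_demo} can be sketched numerically. The calibration matrices below are illustrative stand-ins, not measured data; entry $[i, j]$ of a calibration matrix is taken to be the probability of observing outcome $i$ when basis state $j$ was prepared.

```python
import numpy as np

# Illustrative single-qubit calibration matrices; entry [i, j] is the
# probability of observing outcome i when basis state j was prepared.
C0 = np.array([[0.97, 0.08],
               [0.03, 0.92]])
C1 = np.array([[0.95, 0.12],
               [0.05, 0.88]])

# Under independent measurement errors, the joint two-qubit calibration
# equals the tensor (Kronecker) product of the single-qubit ones.
C_indep = np.kron(C0, C1)

# A jointly measured calibration with a small correlated double-flip term
# moving probability from |00> -> |00> into |00> -> |11> (made-up numbers).
C_joint = C_indep.copy()
C_joint[0, 0] -= 0.02
C_joint[3, 0] += 0.02

# The Frobenius norm of the difference quantifies the correlated part.
frob = np.linalg.norm(C_joint - C_indep, ord='fro')
```

A norm near zero indicates independent measurement errors; larger values flag qubit pairs whose joint calibration cannot be factored.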
To address this challenge, we propose a {\it Coupling Map Calibration (CMC)} scheme, which represents a middle-ground approach that efficiently stitches together small measurement error calibration patches while preserving the information about local correlations. To extend this approach, we also propose a coupling map profiling method, {\it ERR}, which constructs maps of the most strongly locally-correlated qubit measurements; these maps may then be passed to CMC. In summary, this paper makes the following contributions: \begin{itemize} \item We demonstrate a method for efficiently constructing and joining sparse measurement calibration matrices to form a full-device measurement calibration matrix. From this we construct CMC: a scheme for performing joint sparse calibrations using the coupling maps of NISQ devices; \item We extend this method to account for local but non-coupling-map-aligned error models with a patching technique termed {\it ERR}. We demonstrate that ERR maps are stable on the order of several weeks; \item Our experiments implementing CMC and CMC-ERR on a range of IBMQ devices show error reductions of $35\%$ on average and up to $41\%$, equalling or outperforming existing scalable methods. We additionally demonstrate that these results are consistent with expectations of measurement errors from simulation, and that the choice of measurement error mitigation technique depends upon the noise profile and architectural topology of the device. \end{itemize} \section{Background} \label{section:background} \subsection{Quantum Computation with Qubits} The basis for classical binary computation is the manipulation of a set of discrete two-level systems with states labelled $0$ or $1$: \emph{bits}. A classical computer with $n$ bits can access $2^{n}$ states represented by a string of $0$s and $1$s. 
By comparison, quantum computation is the manipulation of a set of continuous systems, the most commonly proposed of which is a similar ensemble of two-level systems, termed \emph{qubits}. The states of these quantum systems are labelled using the basis vectors $\ket{0} = \left[\begin{smallmatrix} 1 \\ 0 \end{smallmatrix}\right]$ and $\ket{1} = \left[\begin{smallmatrix} 0 \\ 1 \end{smallmatrix}\right]$. In addition to these basis states, the system may access a continuum of linear combinations of states denoted by $\ket{\psi} = \alpha \ket{0} + \beta \ket{1}$. Here $\ket{\psi}$ is an arbitrary qubit state and $\alpha$ and $\beta$ are complex numbers such that $\left|\alpha\right|^2 + \left|\beta\right|^2 = 1$. This linear combination of states is not replicated in classical computation. As $\alpha$ and $\beta$ are normalised complex numbers, the state may be visualised as a point on a sphere with orthogonal axes $\hat{x}, \hat{y}$ and $\hat{z}$. For a particular choice of measurement basis, $\left|\alpha\right|^2$ and $|\beta|^2$ describe the probabilities of measuring the associated states. We can extend this to a density state representation, where $\rho = \sum_j p_j \ket{\psi_j}\bra{\psi_j}$ and operators act by conjugation. Measurements of density states are performed by tracing over a measurement operator $M$. The choice of these measurement operators will be important for our model. Operations on quantum systems (termed {\it gates}) are unitary operations that act on the state and alter the probability distribution of measurement outcomes. The general rotation of a single qubit by angles $\theta$ about the X axis, $\phi$ about the Y axis and $\lambda$ about the Z axis can be described by \begin{equation} U3(\theta, \phi, \lambda) = \begin{pmatrix} \cos(\frac{\theta}{2}) & -e^{i\lambda}\sin(\frac{\theta}{2}) \\ e^{i\phi}\sin(\frac{\theta}{2}) & e^{i(\phi+\lambda)}\cos(\frac{\theta}{2}) \end{pmatrix} \label{equ:unitary}. 
\end{equation} The gates $RX(\theta)$, $RY(\phi)$ and $RZ(\lambda)$ are then special cases of this unitary gate and constitute the {\it Pauli generators}. A rotation by each of these generators through an angle of $\pi$ then gives the $X$, $Y$ and $Z$ gates (up to a global phase). While multi-qubit systems can be constructed using the tensor product of two single-qubit systems, not all quantum states or operations may be described as a linear combination of single-qubit systems. Some useful multi-qubit operations include invertible controlled gates that conditionally apply a single-qubit rotation to a target qubit. It is possible to deconstruct an arbitrary quantum operation into a sequence of single-qubit rotations and two-qubit controlled operations~\cite{nielsen}. A sequence of quantum gates is termed a {\it circuit} (an example of which may be found in Fig.~\ref{fig:x_gates}). Horizontal lines in a circuit represent qubits, while `boxes' represent gates; the progression from left to right indicates the order in which these gates are applied. \subsection{Types of Quantum Noise} Given the probabilistic nature of quantum systems, even a small amount of noise results in a non-zero probability of obtaining an incorrect measurement outcome. To mitigate this, quantum circuits are executed multiple times in order to collect statistics and attempt to reconstruct an `error-free' output. Each iteration may be referred to as a `shot' or a `trial'. This repetition adds a multiplicative factor to the complexity of the execution of the quantum circuit. If the error rate is sufficiently high, it may not be possible to distinguish the output distribution from the noise. Errors may be characterised as being due to interactions with the environment~\cite{unruh}, imperfect gate operations or, as we focus on in this paper, imperfect state preparation and measurement (SPAM). \textbf{Gate Errors} describe errors due to imperfect gate operations. 
As measurement is probabilistic, any imperfect gate will result in a non-zero probability of producing an incorrect output. This noise accumulates as the circuit depth grows. \textbf{SPAM Errors} relate to the incorrect preparation and measurement of the system. Unlike environmental and gate errors, they only occur during the preparation and measurement stages of the circuit and do not scale with circuit depth, though they do scale with the size of the quantum register and the number of measurements required. \subsection{State-Dependent, Biased and Correlated Errors} In addition to the causes of errors on quantum devices, we may also consider how errors act on quantum states. An example is an independent and identically distributed (IID) uniform random Pauli error over all qubits. This is equivalent to applying the channel $E(\rho) = (1 - p)\rho + \frac{p}{3}(X\rho X + Y\rho Y + Z\rho Z)$ independently to each qubit in the system for some error rate $p$. Other types of errors are not as impartial. In this paper we will focus on two significant categories of NISQ errors: state-dependent errors and correlated errors, both of which have been observed on NISQ devices~\cite{swamit_state_dep,harper_correlated,jigsaw}. \textbf{State-Dependent} errors occur where the error rate associated with an operation depends on the state of the qubit~\cite{swamit_state_dep}; we focus in particular on state-dependent errors that occur during measurement. During the relatively long period of time required for measurement on NISQ superconducting devices, the rate of decay from the $\ket{1}$ state is greater than the rate at which qubits in the $\ket{0}$ state are spontaneously excited. \textbf{Correlated Errors} occur when the probability of an error acting on two qubits is greater than the product of the probabilities with which the error would act on either qubit individually~\cite{harper_correlated,jigsaw}, as demonstrated in Fig.~\ref{fig:correlation}. 
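This definition of a correlated error can be illustrated with a small simulation. The flip probabilities below are made up for illustration; the point is only that the joint flip probability exceeds the product of the marginals.

```python
import numpy as np

rng = np.random.default_rng(1)
shots = 200_000

# Illustrative model: each qubit flips independently with probability 0.05,
# and with probability 0.02 an additional correlated event flips both.
single = rng.random((shots, 2)) < 0.05
both = rng.random(shots) < 0.02
flips = single | both[:, None]

p_a = flips[:, 0].mean()    # marginal flip rate of qubit A
p_b = flips[:, 1].mean()    # marginal flip rate of qubit B
p_ab = (flips[:, 0] & flips[:, 1]).mean()   # joint flip rate
```

Without the shared event, `p_ab` would be close to `p_a * p_b`; the correlated term makes it several times larger.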
As most modern quantum devices only admit two-qubit gates between a subset of the qubit pairs on the device, this network of admissible two-qubit relations forms a {\it coupling map}. It has been observed that correlated errors tend to occur in close proximity on the coupling map~\cite{harper_correlated}. \begin{figure}\label{fig:correlation} \end{figure} \subsection{Measurement Errors on IBMQ Devices} \begin{figure} \caption{\small Error probability over 4000 shots on the IBMQ Quito device following the application of a sequence of X gates. Measuring a $\ket{1}$ state is markedly more error-prone than measuring a $\ket{0}$ state.} \label{fig:x_gates} \end{figure} In this paper, we focus on state-dependent and correlated measurement errors as currently exhibited on IBMQ devices~\cite{harper_correlated,swamit_state_dep,jigsaw}. State-dependent measurement errors are still present in the most current versions of IBMQ systems, as demonstrated in Fig.~\ref{fig:x_gates}. Here a single qubit has been prepared in the $\ket{0}$ state before a sequence of $X$ gates is applied. As $X$ is self-inverse, if an odd number of $X$ gates is applied the expected state is $\ket{1}$, while an even number should result in the $\ket{0}$ state. Transpiler optimisations have been disabled to prevent the compiler reducing these gate sequences to either a single $X$ gate or no gate at all. 
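The experiment of Fig.~\ref{fig:x_gates} can be mimicked with a minimal simulation, assuming ideal gates and an asymmetric readout error only. The two error rates are illustrative, not measured values.

```python
import numpy as np

rng = np.random.default_rng(0)
shots = 100_000

# Assumed asymmetric readout-error rates (illustrative, not measured):
p_read0_given1 = 0.06   # P(measure 0 | qubit is in |1>) -- decay during readout
p_read1_given0 = 0.01   # P(measure 1 | qubit is in |0>) -- spontaneous excitation

def error_rate(n_x_gates):
    """Fraction of wrong readouts after n ideal X gates (readout noise only)."""
    state = n_x_gates % 2   # ideal post-circuit state: X is self-inverse
    p_flip = p_read0_given1 if state == 1 else p_read1_given0
    return float((rng.random(shots) < p_flip).mean())

odd = np.mean([error_rate(n) for n in range(1, 20, 2)])    # expect |1>
even = np.mean([error_rate(n) for n in range(2, 21, 2)])   # expect |0>
```

Under this model the error rate does not grow with the number of $X$ gates, but odd-length sequences (ideal state $\ket{1}$) fail markedly more often than even-length ones, reproducing the qualitative shape of the figure.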
\begin{table*}[t!]\small \centering \begin{tabular}{lcr} \toprule {\bf Method} & {\bf Computational Cost (Quantum Circuit Executions)} & {\bf Output} \\ \midrule Process Tomography~\cite{howard,altepeter,merkel_tomography} & $r4^{n}$ & SPAM + Gate Errors \\ Complete Calibration~\cite{Qiskit} & $r2^{n}$ & SPAM Errors \\ Tensored Calibrations~\cite{Qiskit} & $2nr$ & Non-correlated SPAM Errors \\ Randomised Benchmarking~\cite{emerson_rb,magesan_characterizing_2012} & $\text{Poly}(n)$ & Average SPAM and Gate \\ Pauli / Clifford Twirling~\cite{harper_correlated} & $\text{Poly}(n)$ & SPAM-Free Errors \\ SIM~\cite{swamit_state_dep} & $4r$ & Average Biased SPAM \\ AIM~\cite{swamit_state_dep} & $2nr + kr$ & Top $k$ least biased SPAM \\ JIGSAW~\cite{jigsaw} & $\frac{nk}{2} + k$ & Bayesian Error Distribution\\ {\it CMC (This Paper)} & $\frac{4}{k}er$ & Local SPAM Errors \\ \bottomrule \end{tabular} \caption{\small Computational cost for a range of error characterisation methods. $n$ is the number of qubits and $r$ is the number of repetitions required of the circuit. The $k$ for AIM is a pre-chosen constant, typically 4. For the coupling map method (CMC), $e$ is the number of edges in the graph while $k$ is the speed-up from performing non-local patches simultaneously.\label{table:comparisons}} \end{table*} If measurement errors were state-independent, we would expect similar error rates for the $\ket{0}$ and $\ket{1}$ states, with any gate noise causing the error rate to grow with circuit depth. Instead we observe that the error rate of the $\ket{1}$ states is significantly higher than that of the $\ket{0}$ states even as circuit depth increases, indicating the presence of state-dependent measurement errors. These results suggest that state-dependent measurement errors dominate both single-qubit gate errors and environmental noise at small circuit depths and up to the quantum volume of the device. Additionally, correlated measurement errors are present on current IBMQ devices, as seen in Fig.~\ref{fig:corr_coupling_demo}. 
Here we compare the joint two-qubit calibrations $C_{ij}$ with the tensor products of single-qubit calibrations $C_i \otimes C_j$. If the measurement errors were entirely independent then these two calibrations would be equal. This demonstrates that not only do correlated measurement errors exist on these devices, but that some appear to persist between calibration cycles. Previous work has demonstrated that these correlated errors tend to occur in physical proximity on the device~\cite{harper_correlated}. \section{Measurement Error Suppression} Previous approaches in error mitigation have involved two often distinct lines of inquiry: the characterisation of the noise of the system~\cite{riddhi,magesan_characterizing_2012,Granade_2017,harper_fault-tolerant_2019}, and the design of the system to reduce noise~\cite{swamit_state_dep, sc_estimating}. Often this is an iterative process, as errors cannot be suppressed or mitigated until they have been characterised. The output from an execution of a quantum system is a single sample from a larger probability distribution. In order to obtain any meaningful statistics to characterise the device, an ensemble of measurements must be taken. Measurement perturbs the state of a quantum system, so a diminishing amount of information is obtained by performing multiple measurements on the same qubit. Cloning a quantum state is also not permitted (the no-cloning theorem)~\cite{wooters}, which prevents ``copying'' a state and performing the same operations on the copies. As a result, the most common approach to quantum characterisation is to repeatedly prepare a state and then perform measurements on it~\cite{berry}. The primary figure of merit in comparing approaches to characterising quantum systems is the requisite number of circuits that must be executed. In this section we briefly sketch the details of a range of schemes that seek to mitigate measurement errors on quantum devices. 
These schemes trade between quantum computational overheads and the scope and quality of the data that is obtained. A comparison of previous characterisation and mitigation schemes is given in Table~\ref{table:comparisons}. \subsection{Tomography} One of the first and most accurate methods for characterising quantum states and processes, including errors, is \textit{tomography}~\cite{Granade_2017,altepeter,howard}. As the number of measurements required to perform tomography scales exponentially with the number of qubits, these approaches have become increasingly infeasible on recent devices. Tomography is performed by repeating an experiment over all measurement bases to reconstruct the density state of the system prior to measurement~\cite{greenbaum_tomography,altepeter}. Although it scales exponentially in the number of qubits, tomography provides the most accurate reconstruction of the state, and by extension the most accurate profile of the noise acting on it~\cite{howard,merkel_tomography}. The insight that ties process tomography to error characterisation is that errors are themselves quantum processes that act on the device. By performing process tomography over a portion of the system, environmental, gate and SPAM errors can be diagnosed; i.e., an error is itself an operation that evolves the state and can hence be characterised. Process tomography provides a description of the error channel that incorporates information about state-dependent and correlated errors. Once an error $\mathcal{E}$ is characterised, and if the matrix describing $\mathcal{E}$ is invertible, we can apply $\mathcal{E}^{-1}$ to mitigate the error. The downside is the cost: as the number of qubits increases, repetitions of $4^n$ circuits become computationally infeasible for modern quantum devices, let alone future devices with hundreds to thousands of qubits. The reconstruction of a quantum state from a set of measurements is termed {\it quantum state tomography}~\cite{howard, altepeter}. 
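The mitigation step just described, characterising $\mathcal{E}$ and, when invertible, applying $\mathcal{E}^{-1}$, reduces for measurement errors in a fixed basis to a small linear-algebra computation. All numbers below are illustrative.

```python
import numpy as np

# Hypothetical single-qubit calibration: entry [i, j] = P(observe i | prepared j).
C = np.array([[0.99, 0.06],
              [0.01, 0.94]])

# Noisy observed distribution from some circuit (illustrative numbers).
observed = np.array([0.55, 0.45])

# Applying the inverse of C undoes the modelled measurement error.  In
# practice the inverse can yield small negative quasi-probabilities, which
# are usually clipped and renormalised.
mitigated = np.linalg.inv(C) @ observed
```

Because the columns of $C$ sum to one, the mitigated vector still sums to one, though individual entries are shifted back towards the noise-free distribution.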
By taking a histogram of the measurement results over a complete basis of $2^n$ measurement operators, the resulting probability distribution can be used to estimate the quantum state~\cite{greenbaum_tomography}. To perform state tomography, we begin with a set of measurement operators $M$ and a density matrix $\rho$. Born's rule gives $p_{i} = \text{Tr}(M^{\dag}_i M_i \rho)$, where $p_i$ is the probability of obtaining a measurement result $M_i$ from the state $\rho$. This value can be determined experimentally by running the circuit and calculating the frequency with which $M_i$ is observed. The number of times the circuit is executed is termed the number of {\it shots}. A greater number of shots typically corresponds to a better estimate of $p_i$. By stacking the operators $M^{\dag}_0 M_0, M^{\dag}_1 M_1, \ldots, M^{\dag}_m M_m$ into a matrix $A$, we obtain the linear system $A\,\text{vec}(\rho) = \vec{p}$. Inverting $A$ then reconstructs the density matrix $\rho$ given the measurement outcomes $\vec{p}$. State tomography may be extended to determine the action of an operation on the system~\cite{merkel_tomography}. The Choi-Jamio\l kowski isomorphism may be used in open quantum systems to derive a correspondence between states and processes~\cite{wood_open_quantum,Granade_2017}. This forms the basis of {\it quantum process tomography}. By selecting $2^{2n}$ linearly independent inputs $\rho_j$ we can characterise the action of a process $V$ on the system. The approach is much the same as state tomography, except we now perform the gate we are characterising prior to measurement: $p_{ij} = \text{Tr}(M^{\dag}_i M_i V\rho_j V^{\dag})$. \subsection{Measurement Error Calibration} A more limited form of tomography can be used to characterise and invert measurement errors for devices with a fixed measurement basis (such as the IBM quantum devices)~\cite{Qiskit,nielsen}. 
For $n$ qubits, we construct a set of circuits that prepares and measures each of the $2^n$ possible states in the measurement basis. Each column of the resulting {\it calibration matrix} $C$ (a restriction of $\mathcal{E}$) corresponds to a prepared state and contains the probabilities with which each measurement outcome was observed. This calibration matrix describes the map from expected measurement states to observed measurement states, and hence describes the effect of measurement errors in the measurement basis. The calibration matrix can then be inverted and applied to mitigate noise on the device. This technique provides the most accurate characterisation of measurement errors, but comes with an exponential overhead in the number of calibration circuits. If we assume that measurement errors are independent then our calibration matrix is separable, $C_{1, 2, \ldots, n} = C_{1} \otimes C_{2} \otimes \hdots \otimes C_{n}$, and can be constructed with only $2n$ calibration circuits. This method will characterise state-dependent measurement errors, but no information will be gained about multi-qubit correlated errors. Taking the independence assumption further, we can perform all of our calibrations using only two circuits: $I^{\otimes{n}}$ and $X^{\otimes{n}}$. The individual calibration matrices $C_{i}$ can be recovered from the joint calibration matrix by marginalising the contributions of the other qubits to the probability distribution, which takes the form of the normalised partial trace $|\text{Tr}_j(C_{ij})| \approx C_i$. This technique is limited because NISQ devices exhibit measurement crosstalk~\cite{jigsaw}, and as a result measurement errors are not independent. \subsection{Randomised Benchmarking (RB)} RB is a fundamentally different technique from tomography-based calibration. A set of random circuits of varying lengths, each with overall action $I$, is constructed. 
Each circuit is executed, and the probability of measuring the $\ket{0}^{\otimes n}$ state dictates the average error rate of that circuit. The error rate is a function of the circuit depth, and by fitting error rates from random circuits of varying lengths we can estimate the average gate and SPAM errors on the device. Although it requires far fewer operations than tomographic methods, randomised benchmarking cannot distinguish correlated and state-dependent errors. This average error rate is useful for device profiling and some simulations, but is less useful for implementing error mitigation strategies. \textbf{Pauli / Clifford twirling}~\cite{harper_correlated} is an extension of randomised benchmarking that approximates the error channel of a quantum device as a {\it Gibbs random field} (GRF)~\cite{harper_correlated}. It uses the difference between the fit to the GRF and the observed distribution to determine the incidence of correlated errors on the device. This method scales polynomially in the number of qubits $n$. Pauli twirling provides an efficient approximation of the SPAM-free errors, but as a result does not capture biased or correlated measurement errors. \subsection{Circuit Specific Strategies} Another class of approaches involves attempting to profile and minimise the noise at the whole-circuit level. These methods depend on the choice of circuit and must be re-run for each new circuit, in contrast with schemes that directly characterise measurement errors on a device. {\bf Static Invert and Measure} (SIM)~\cite{swamit_state_dep} targets state-dependent measurement errors on a particular circuit and tackles scalability by restricting itself to exactly four characterisation circuits. The circuit that is being characterised is executed, and prior to measurement one of the following four operators is applied: $I^{\otimes n}$, $X^{\otimes n}$, $(I\otimes X)^{\otimes \frac{n}{2}}$ and $(X \otimes I)^{\otimes \frac{n}{2}}$. 
The results of these circuits are then averaged in an attempt to mitigate the action of any state-dependent error on the output. For a circuit with a state-dependent measurement bias, this method reduces the error rate by approximately half, as it averages between the dependent states. SIM may, however, average over correlated measurement errors that align with its characterisation circuits. {\bf Adaptive Invert and Measure} (AIM)~\cite{swamit_state_dep} follows a similar strategy to SIM in applying characterisation operators after the action of a target circuit. It increases the pool of characterisation circuits and attempts to determine a set of $k$ `correct' strings to use for averaging. It begins with $\frac{r_1 n}{2}$ characterisation circuits of the form $I^{\otimes 2i}\otimes X^{\otimes 4} \otimes I^{\otimes n - 2i - 4}$ for even $i$ in the range $0$ to $\frac{n}{2}$, applied prior to measurement on a target circuit. These `patch' circuits then act on sets of four qubits at a time, with an overlap of two qubits between adjacent patches. The outputs of these circuits are averaged, and the top $k$ characterisation circuits are selected and used in a further $r_2 k$ executions, which are in turn averaged to produce the final output. This selection mechanism assumes that some elements of those top $k$ bit strings improve the success probability of the circuit. {\bf JIGSAW}~\cite{jigsaw} is another circuit-specific strategy. Unlike AIM and SIM, it targets correlated measurement errors by constructing Bayesian filters. Each filter is constructed by randomly dividing the set of measured qubits into `patches'. The measurement distribution of each patch then forms a sub-table, which acts as a local Bayes filter on a global measurement distribution. These filters are applied by splitting the global measurement distribution and updating the elements corresponding to the patch. This subset is then normalised and rejoined with the global distribution.
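The sub-table update described above can be illustrated with a minimal sketch. The function names `subtable` and `apply_filter` are ours (not JIGSAW's API), and we assume probability tables keyed by bit strings:

```python
def subtable(patch_counts, total):
    """Normalise patch measurement counts into a local probability table."""
    return {bits: c / total for bits, c in patch_counts.items()}

def apply_filter(global_dist, patch_qubits, table):
    """Bayesian-style update: reweight each global outcome by the local
    probability of its patch bits, then renormalise the whole distribution
    (a sketch of the sub-table convolution described in the text)."""
    updated = {}
    for bits, p in global_dist.items():
        key = ''.join(bits[q] for q in patch_qubits)
        updated[bits] = p * table.get(key, 0.0)
    norm = sum(updated.values())
    if norm == 0:  # guard against the empty-sub-table pathology noted below
        return dict(global_dist)
    return {b: v / norm for b, v in updated.items()}
```

Note that when a sub-table is missing entries, outcomes absent from it are zeroed out and the renormalisation amplifies whatever survives, which is precisely the failure mode discussed next.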
While JIGSAW is currently a state-of-the-art method in scalable measurement error suppression, it suffers from inconsistent results: should one of the sub-tables contain a single value, the renormalisation of that subset will promote it to a state that is measured with probability 1. When convolved with the global measurement distribution, this can result in JIGSAW erroneously over-reporting states that occur with low probability. There are two avenues by which this can occur: (1) if a measurement outcome in the `global' table is sufficiently distant from all other outcomes, or (2) if not all results are measured in the subset circuit. The impact of this inconsistency then depends on the choice of subset circuits, the distribution of measurement results, the associated noise of the device and the order in which the sub-tables are applied. \section{Coupling Map Calibration (CMC)} \label{section:Model} \begin{figure*} \caption{\small Coupling Map Calibration scheme for measurement error mitigation using patched calibrations. The scheme converts the coupling map of the device into a series of calibration patch circuits, which are executed to collect data on two-qubit calibrations. These calibrations can then be joined or traced over as required to mitigate measurement errors for any other circuit that is executed on the device. Red boxes indicate executions on a quantum device, blue elements are classical operations, orange elements are inputs and green elements are outputs.
\label{fig:model}} \end{figure*} From the limitations of the current state-of-the-art techniques discussed in the previous section, we derive three design objectives: \begin{enumerate} \item Characterising both correlated and state-dependent measurement errors on NISQ devices; \item Maintaining a polynomial number of characterisation shots in the number of qubits; and \item Demonstrating an improvement in the error rate reduction compared to the other polynomial methods, ideally achieving similar rates of error mitigation as calibration techniques. \end{enumerate} To achieve this, we propose a novel coupling map calibration (CMC) scheme, as outlined in Fig.~\ref{fig:model}. Previous work~\cite{harper_correlated} has established that the majority of correlated errors on modern quantum devices occur within physical proximity on the device. Leveraging this insight, we construct a set of sparse calibration matrices for each edge on the coupling map, with each of these calibration edges termed a `patch'. By joining these sparse matrices we can then reconstruct an approximate global calibration matrix, which can in turn be used for measurement error mitigation. While the number of measurement circuits required to construct a full calibration matrix scales exponentially with the number of qubits, CMC only requires four such circuits for each edge and hence scales linearly in the number of edges on the coupling map. By the same argument we can perform non-local calibration circuits simultaneously and trace out the individual results. The number of calibration circuits is then reduced as a function of the sparsity of the coupling map. \subsection{Coupling Map Calibration Patches} Under the same assertion that correlated errors are physically local~\cite{harper_correlated}, physically distant qubits should not exhibit correlated errors. In other words, $C_{ij} = C_{i} \otimes C_{j}$ if $i$ and $j$ are sufficiently distant qubits.
By the same argument, for pairs of adjacent qubits $ab$ and $cd$ that are sufficiently distant on the device, $C_{abcd} = C_{ab} \otimes C_{cd}$. From this, $C_{ab} = |\text{Tr}_{cd}(C_{abcd})|$ and $C_{cd} = |\text{Tr}_{ab}(C_{abcd})|$. If $ab$ and $cd$ are sufficiently distant edges on the coupling map, then we can calculate $C_{ab}$ and $C_{cd}$ simultaneously. \begin{figure} \caption{\small Example measurement patches on the IBM Tokyo device with two qubits per patch and a distance between patches of at least one qubit, where each patch is represented by a colour. This pattern could be continued until all edges are incorporated in at least one patch.\label{fig:coupling_tokyo}} \end{figure} For this construction, we assume that all correlations are local on the device up to some distance $k$. Given the connectivity graph of the device, we can construct a set of `calibration patches' containing $n$ qubits such that all edges on the graph are included in at least one patch. We can then calibrate sufficiently distant patches simultaneously without an increase in the number of shots (i.e., the number of circuits executed on the quantum device). With this strategy, the total number of shots per calibration matrix is $4r$, and the total number of calibration matrices is approximately a constant fraction of the number of edges in the coupling map, which in turn is typically on the order of the number of qubits on the device. For example, in the case of the IBM Tokyo device, the number of edges is $3$-$4$ times the number of qubits, and a $k=1$ patching strategy reduces the number of circuits by approximately the same factor, as can be seen in Fig.~\ref{fig:coupling_tokyo}. We present a greedy construction for these patches in Algorithm~\ref{alg:coupling_map}. This approach iteratively finds sets of patches separated by at least distance $k$ until all edges of the graph are contained in a set.
Each patch within a set may then be constructed simultaneously. The total cost in the number of measurements required is then only four times the number of such sets. For the Tokyo device, this corresponds to $40$ calibration circuits for all qubits individually, $140$ calibration circuits to characterise each edge individually, $54$ circuits for coupling map patching, $760$ circuits for all pairs of qubits, and $2^{20}$ circuits for the full calibration. When tested on large random coupling maps ($>100$ qubits) with an average of four edges per qubit, this greedy approach reduced the number of calibration circuits by a factor of 3 to 10. This style of coupling map (albeit not randomly constructed) is representative of the coupling maps of current IBMQ, Google, Rigetti, IonQ, and Honeywell devices~\cite{sycamore,rigetti,honeywell,Qiskit}. \begin{algorithm}[t!]\small \caption{Greedy Distance $k$ Patch Construction}\label{alg:coupling_map} \begin{algorithmic} \STATE $\mathrm{patch\_construct} (E, k)$ \STATE Copy $E$ to $E'$ \STATE Initialise an empty list $C$ \WHILE{$E'$ is not empty} \STATE Initialise empty lists $C_i$, $B$ \STATE Pop $e$ from $E'$ and append it to $C_i$ and $B$ \STATE Mark all elements of $E$ as unvisited \WHILE {Not all vertices in $E$ are visited AND $B$ is not empty} \STATE Mark all elements of $B$ as visited \STATE Perform a depth $k$ BFS from each element in $B$, marking all vertices up to distance $k$ as `visited' \STATE Replace $B$ with the depth $k$ boundary of the BFS \FOR{each edge $b$ in $B$} \IF{$b$ is not in $E'$ AND $b$ has not yet been visited} \STATE Add $b$ to $C_i$ \STATE Mark $b$ as visited \STATE Perform a depth $k$ BFS from $b$, marking all vertices up to distance $k$ as `visited' \ENDIF \ENDFOR \ENDWHILE \STATE Append $C_i$ to $C$ \ENDWHILE \RETURN $C$ \end{algorithmic} \end{algorithm} \subsection{Joining Calibration Matrices} \label{sec:joining_calib} The core of our model is finding a joint
approximation of two overlapping local calibrations. Two disjoint calibrations may be joined via the tensor product: \begin{equation} C_{ij} = C_i \otimes C_j. \end{equation} However, this necessarily cannot capture any information regarding correlated errors between those two qubits. In order to capture correlated errors we must construct calibration circuits from the combination of all measurement operators across both qubits. Expanding this problem to $n$ qubits on a device scales as $2^n$. Ideally we would wish to perform two-qubit calibrations with four calibration circuits each and stitch together these individual `patches', as seen in Fig.~\ref{fig:patches}. This raises the problem of how to join two-qubit calibration matrices with overlapping qubits. \begin{figure} \caption{Overlapping patches of two-qubit calibrations; the goal of the method is to join these calibrations to construct an approximate $C_{012}$.\label{fig:patches}} \end{figure} As calibration matrices are normalised maps between probability distributions, we can split a calibration matrix formed by the tensor product of two other calibration matrices using the partial trace operation, normalising such that the sum of each column in the resulting calibration matrix is 1, as shown by \begin{equation} C_{i} = |\text{Tr}_j(C_i \otimes C_j)|. \end{equation} For a calibration matrix acting on multiple qubits, we can construct an approximate single-qubit calibration matrix using the same method: \begin{equation} C_{i} \approx |\text{Tr}_j(C_{ij})|. \end{equation} If we have multiple multi-qubit calibration matrices all acting on the same qubit, then we need a technique by which to apply each calibration jointly.
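The normalised partial trace $|\text{Tr}_j(\cdot)|$ used above can be sketched for the two-qubit case as follows. This is an illustrative sketch assuming $2\times 2$ single-qubit blocks; `normalised_partial_trace` is our name, not an API from the paper's codebase:

```python
import numpy as np

def normalised_partial_trace(C, keep_first=True):
    """|Tr_j(C_ij)|: trace out one qubit of a 4x4 two-qubit calibration
    matrix and renormalise each column so it sums to 1."""
    T = C.reshape(2, 2, 2, 2)            # indices: (i_out, j_out, i_in, j_in)
    if keep_first:
        M = np.einsum('ajbj->ab', T)     # trace over qubit j
    else:
        M = np.einsum('jajb->ab', T)     # trace over qubit i
    return M / M.sum(axis=0)             # column-normalise
```

For a product calibration $C_i \otimes C_j$ the operation recovers $C_i$ exactly, which is the identity stated in the first equation above.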
For $v$ calibration matrices overlapping on some qubit labelled $j$, we can construct an approximation of the calibration matrix between qubits $i$ and $j$ (with some order parameter $v_a$) by \begin{equation} C'_{ij}(v_a) = \left(I \otimes C_j^{\frac{v - 1 - v_a}{v}}\right)^{-1} C_{ij} \left(I \otimes C_j^{\frac{v_a}{v}}\right)^{-1} \end{equation} such that $\text{Tr}_i(C'_{ij}) \approx C_j^{\frac{1}{v}}$ and $\text{Tr}_{j}(C'_{ij}) \approx \text{Tr}_{j}(C_{ij})$. This construction can be seen in Fig.~\ref{fig:cij_construct}. \begin{figure} \caption{\small Construction of $C'_{ij}$.\label{fig:cij_construct}} \end{figure} Similarly, for some other qubit $k$ with order parameter $v_b$ we can construct \begin{equation} C'_{jk}(v_b) = \left( C_j^{\frac{v - 1 - v_b}{v}} \otimes I \right)^{-1} C_{jk} \left(C_j^{\frac{v_b}{v}} \otimes I\right)^{-1} \end{equation} such that if $v_b > v_a$ and $v = 2$, \begin{equation} C'_{ijk} = (I \otimes C'_{jk})(C'_{ij} \otimes I). \end{equation} This allows us to patch together and construct a joint calibration matrix from the overlapping individual calibrations given an explicit order. \begin{figure} \caption{\small A spanning set of two-qubit patches incorporating all edges in the graph, and the form of the associated calibration matrix. Columns represent tensor products while rows represent matrix products. Each individual column is itself a sparse matrix. For large systems (with matrices scaling with the number of qubits as $2^n \times 2^n$) it is faster to perform a series of sparse matrix-vector multiplications than a single dense matrix multiplication.\label{fig:spanning_set}} \end{figure} This method can be extended to joining calibration matrices of arbitrary sizes and overlaps by selecting the appropriate order parameter for each contributing term in $C'$. An example of this for a square plaquette of connected qubits can be seen in Fig.~\ref{fig:spanning_set}.
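The construction of $C'_{ij}(v_a)$ above can be sketched numerically using fractional matrix powers (via `scipy.linalg.fractional_matrix_power`). This is a sketch under the assumption that the shared qubit $j$ sits in the rightmost tensor slot; `split_overlap` is our illustrative name:

```python
import numpy as np
from scipy.linalg import fractional_matrix_power as mpow

def split_overlap(C_ij, C_j, v, v_a):
    """C'_{ij}(v_a): divide out fractional powers of the shared single-qubit
    calibration C_j so that v overlapping patches can later be multiplied
    together without double-counting qubit j."""
    I = np.eye(C_ij.shape[0] // C_j.shape[0])
    left = np.kron(I, mpow(C_j, (v - 1 - v_a) / v))
    right = np.kron(I, mpow(C_j, v_a / v))
    return np.linalg.inv(left) @ C_ij @ np.linalg.inv(right)
```

As a sanity check, for an uncorrelated patch $C_{ij} = C_i \otimes C_j$ with $v = 2$ and $v_a = 0$, the construction yields $C_i \otimes C_j^{1/2}$, so multiplying two such patches restores the full power of $C_j$.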
Armed with the ability to approximate larger calibration matrices from smaller ones, we can now build a calibration matrix for the system. To construct this, we need a spanning set of calibration matrices of $n$ qubits across the system, before using the method above to construct a set of calibrations that may be applied linearly. As correlated errors tend to be spatially close on the coupling map~\cite{harper_correlated}, the question then becomes one of deciding which spanning set of calibration matrices should be used. For a base application of CMC we have opted to use the edges of the coupling map as the spanning set of calibration matrices. \subsection{CMC: Mitigating Measurement Errors} Using this ability to construct and join sets of calibration matrices, we may now assemble them into a measurement error mitigation scheme. We start with the coupling map of the device and the set of qubits that we are measuring. If the measured qubit $i$ has no measured neighbours, then we can construct the effective calibration matrix from the patches $C_{ij}$ by tracing out the non-measured qubits $j$, such that $C'_i = |\prod_j \text{Tr}_j(C_{ij})^{\frac{1}{v}}|$. If instead qubit $i$ shares edges with other measured qubits, we follow the method discussed in Section~\ref{sec:joining_calib} and trace out over edges shared with non-measured qubits while multiplying the shared edges. In both cases, the order in which these matrices are applied must be identical to the order in which the calibration matrices were constructed, as seen in Fig.~\ref{fig:cij_construct}. Once this set of sparse calibration matrices has been constructed, we can invert each matrix and take the tensor product with the identity for each non-participating qubit. By reversing the order of these inverted matrices we obtain the inverse of the calibration matrix.
This set of inverted calibration matrices can now be applied to a measurement distribution in order to perform the desired measurement error mitigation. We term this scheme CMC. If the number of qubits in a calibration patch is significantly less than the number of qubits in the system, then each individual calibration matrix will be sparse. The maximum number of entries in the measurement vector is bounded by the number of shots performed on the system, and can be periodically culled of very low-weight entries. In the regime of a $50$+ qubit system, applying a series of sparse matrix-vector products is much more performant than applying a $2^{n}\times2^{n}$ dense full calibration matrix. \subsection{ERR: Device Tailored Mitigations} As seen in Fig.~\ref{fig:corr_coupling_demo}, while highly correlated errors tend to be local, they do not necessarily align with the coupling map. If a device exhibits a systemic correlated error that persists through multiple calibrations, we may modify our coupling map method to instead characterise the noisiest edges. The goal is to construct a coupling map with at most $n$ edges while attempting to maximise the coverage of highly correlated measurement errors. We present {\it ERR}, an $O(n^2)$ greedy approach to this problem, in Algorithm~\ref{alg:err_coupling_map}, with a graphical example in Fig.~\ref{fig:err_map_demo}. Here $E$ is the initial coupling map, and $k$ is a locality parameter, such that only two-qubit edges of distance less than $k$ are considered.
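The edge weight at the heart of ERR, $w_{i,j} = ||C_i \otimes C_j - C_{ij}||$, measures how far a joint two-qubit calibration departs from the independent model. A minimal sketch of computing and ranking these weights (the function names `edge_weight` and `noisiest_edges` are our own; the full greedy construction is Algorithm~\ref{alg:err_coupling_map}):

```python
import numpy as np

def edge_weight(C_i, C_j, C_ij):
    """ERR edge weight: distance between the joint two-qubit calibration
    and the independent (tensor-product) model of the same pair."""
    return np.linalg.norm(np.kron(C_i, C_j) - C_ij)

def noisiest_edges(singles, joints, n_edges):
    """Rank candidate edges by correlated-error weight (input to the
    greedy ERR construction). `singles` maps qubit -> 2x2 matrix,
    `joints` maps (i, j) -> 4x4 matrix."""
    weights = {e: edge_weight(singles[e[0]], singles[e[1]], C)
               for e, C in joints.items()}
    return sorted(weights, key=weights.get, reverse=True)[:n_edges]
```

An edge whose joint calibration factors exactly has weight zero and is safely ignored; large weights flag correlated measurement errors worth a dedicated patch.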
\begin{algorithm}[t!]\small \caption{Error Coupling Map Construction}\label{alg:err_coupling_map} \begin{algorithmic} \STATE $\mathrm{error\_coupling\_map\_construct} (E, k)$ \STATE Construct all one- and two-qubit calibration matrices $C_i$, $C_{ij}$ for all pairs $i < j$ in $E$ within distance $k$ \STATE Calculate all edge weights $w_{i, j} = ||C_i \otimes C_j - C_{ij}||$ \STATE Sort the edge weights $w_{i, j}$ into a list $W$ in descending order \STATE Initialise an empty graph $E'$ \FOR{$w_{i, j}$ in $W$ and $|E'| < n$} \IF{$i$ in $E'$ and $j$ not in $E'$} \STATE Add $j$ and edge $i, j$ to $E'$ \ENDIF \IF{$j$ in $E'$ and $i$ not in $E'$} \STATE Add $i$ and edge $i, j$ to $E'$ \ENDIF \IF{$i$ and $j$ not in $E'$} \STATE Let $w_i$, $w_j$ be the next weighted edges containing $i$ and $j$ \STATE Add $\min_{w}((i, w_i), (j, w_j))$ and dangling edge $i, j$ to $E'$ \ENDIF \ENDFOR \RETURN $E'$ \end{algorithmic} \end{algorithm} Note that unlike the computational coupling map, there is no requirement that this error coupling map be connected. We can then perform CMC over this error coupling map to construct our sparse measurement calibration matrices. \begin{figure} \caption{\scriptsize Coupling Map} \caption{\scriptsize $k\le3$ Correlations} \caption{\scriptsize Error Map} \caption{\small Construction of a $k=3$ local error map using ERR.\label{fig:err_map_demo}} \end{figure} \section{Evaluation Methodology} \label{section:Methodology} \begin{figure} \caption{\small (Left) Hinton diagrams of simulated correlated measurement errors over four qubits. Clockwise from top left: single qubit (uncorrelated), two qubit (all pairs), three qubit (triplets), four qubit (flip all bits). The four-qubit error channel simply flips all the bits in the state. (Right) Hinton diagrams of simulated state-dependent measurement errors over four qubits.
The four-qubit channel only has a single non-diagonal entry, as there is only one four-qubit state-dependent measurement error.\label{fig:hinton_corr}} \end{figure} This section discusses the application and methodology of the designs described in Section~\ref{section:Model} and the construction of the circuits that are executed on the IBMQ devices. These circuits will be used to compare the following measurement error mitigation techniques: SIM, AIM, Full calibration, Linear calibration, CMC, CMC-ERR, and JIGSAW. In this paper, we conducted experiments on the IBMQ `Quito', `Lima', `Manila', and `Nairobi' devices. The relevant figures of merit are the overall error rate (here taken as the probability with which the correct output is produced) and the total number of quantum device executions. The goal is to minimise the error rate for a fixed number of measurement shots when comparing circuit selection and averaging techniques (AIM/SIM) against classical post-processing approaches (calibration methods). We define the success probability as the frequency with which the measurement output aligns with a classically verified error-free result. An extension of this is the one-norm distance, which measures the difference between a classically verified distribution of measurement outcomes and an observed measurement distribution. To compare the various measurement error mitigation techniques, we focus on the GHZ benchmark. This benchmark is the smallest circuit that entangles all qubits on a device. \subsection{Benchmark Error Simulations} The first benchmarks are a set of simulated measurement errors over a range of states. The simulated error channels exhibit varying amounts of state-dependent, uniform, and correlated errors. These simulated error distributions can be seen in Fig.~\ref{fig:hinton_corr}. These diagrams indicate the probability with which a particular input state is mapped to an output state by the measurement error channel.
The size of the squares scales with the probability of that operation. By constructing combinations of these error channels, we compare different mitigation methods against a range of scenarios relating purely to measurement errors. Using these measurement error channels, we can construct errors that probe the responsiveness of different error mitigation techniques to correlated and state-dependent errors. We can also determine the scalability of each of these methods in terms of the total number of shots required to produce a consistent result. Simulations are performed by applying the matrices associated with each gate and measurement operator to construct the ideal, noiseless measurement output vector. We then apply the constructed measurement error channel to this output vector, producing a vector of the probabilities of measuring each state under that error channel. This distribution can then be sampled to produce simulated measurement results. In addition to these tailored measurement errors, we construct a set of simulated backend devices, as seen in Fig.~\ref{fig:simulated_device_backends}. The topologies of these simulated devices emulate the coupling map architectures of a range of modern quantum devices. We use these architectures along with the Qiskit statevector simulator~\cite{Qiskit} to evaluate GHZ circuits for varying device sizes. For these simulations, we set a one-qubit gate error rate of $0.1\%$, a two-qubit gate error rate of $1\%$, a readout error rate between $2\%$ and $8\%$, and state-dependent measurement errors between $2\%$ and $8\%$ for each qubit for both $\ket{0} \rightarrow \ket{1}$ and $\ket{1} \rightarrow \ket{0}$. $T_1$ and $T_2$ times are set to infinity.
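The simulation pipeline described above (ideal output vector, error channel applied, then sampling) can be sketched as follows. This is a minimal sketch; `simulate_measurements` is our illustrative name and the two-outcome channel in the example is an assumption, not one of the paper's benchmark channels:

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_measurements(ideal_probs, error_channel, shots):
    """Apply a (column-stochastic) measurement error channel to an ideal
    output distribution and sample shot outcomes from the result."""
    noisy = error_channel @ ideal_probs   # columns index the prepared state
    noisy = noisy / noisy.sum()           # guard against rounding drift
    outcomes = rng.choice(len(noisy), size=shots, p=noisy)
    return np.bincount(outcomes, minlength=len(noisy))
```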
These single and two-qubit error rates are approximately analogous to those exhibited by current NISQ devices; for example, one calibration of the IBM Quito device reports an average two-qubit error rate of $0.98\%$, an average single-qubit $H$ error rate of $0.03\%$ and readout errors between $3\%$ and $7\%$. \begin{figure} \caption{\footnotesize Hexagonal (IBM Washington)~\cite{Qiskit}\label{fig:hex_arch}} \caption{\footnotesize Octagonal (Rigetti Aspen)~\cite{rigetti_aspen}\label{fig:oct_arch}} \caption{\footnotesize Grid (Google Sycamore)~\cite{sycamore}\label{fig:squ_arch}} \caption{\footnotesize Fully Connected (IonQ Forte)~\cite{ionq}\label{fig:ful_arch}} \caption{\small Simulated device backends represent the coupling maps from a range of modern NISQ architectures. Devices are constructed where each qubit has a random state-dependent measurement error for both $\ket{0} \rightarrow \ket{1}$ and $\ket{1} \rightarrow \ket{0}$.\label{fig:simulated_device_backends}} \end{figure} \subsection{NISQ Device Benchmark Circuits} Additionally, we benchmark the performance of these measurement error mitigation methods on IBM quantum devices using GHZ circuits. These represent the smallest full-device entangling circuits, minimising the impact of gate errors while producing a non-classical distribution of measurement results. The GHZ circuits are constructed by performing a single-qubit Hadamard gate on qubit 0 of the device, and then performing a breadth-first search of two-qubit CNOT operations across the coupling map. This construction ensures that there is no advantage gained by different qubit allocations, routing methods or other compiler optimisations. \section{Evaluation} \label{section:evaluation} \subsection{Simulated Measurement Errors} \begin{figure} \caption{\small Correlated Measurement Error Mitigation} \caption{\small State-Dependent Measurement Error Mitigation} \caption{\small (a) A sample correlated measurement error mitigation. (b) A sample state-dependent measurement error mitigation.
Dashed lines indicate the average of each distribution. The `Bare' column shows the error distribution before any mitigations are applied. The bifurcation of JIGSAW occurs because, due to the pathological nature of the error model, the sub-tables may be missing entries, leading to errors during renormalisation.\label{fig:sim_state_err}} \end{figure} We benchmarked a range of measurement error mitigation methods over 136000 trials using a pair of known measurement error operations applied to the full set of $2^n$ computational basis states over four qubits, with results shown in Fig.~\ref{fig:sim_state_err}. Each mitigation method is afforded an equal number of measurements of the quantum system to perform any calibrations and operations of the circuit. As the classical input state is known, the success probability is the frequency with which the classical output state is reported. `Bare' represents the distribution of errors without any mitigation. The correlated measurement error model applies only two-qubit correlated errors. The state-dependent error model applies single-qubit state-dependent errors; as a result, the $\ket{0}^{\otimes n}$ state experiences no errors at all. AIM and SIM perform their averaging techniques, which have no overall effect on correlated errors and narrow the distribution of state-dependent errors. With these focused error models, JIGSAW suffers from an empty sub-table entry with high probability, and these results should not be considered representative of JIGSAW's performance on more rounded error models. CMC performs well in the presence of both classes of errors, but is outperformed by the Linear and Full methods, which require exponential overheads. As the total number of shots was constrained, the Full model suffers from slight sampling errors, leading to a tail in its distribution.
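The two figures of merit used throughout this evaluation can be computed directly from raw counts. A minimal sketch with our own function names (`success_probability`, `one_norm_distance`):

```python
def success_probability(counts, correct):
    """Frequency with which the classically verified outcome is observed."""
    total = sum(counts.values())
    return counts.get(correct, 0) / total

def one_norm_distance(counts, ideal_dist):
    """1-norm distance between the observed and ideal measurement
    distributions, summed over the union of their outcomes."""
    total = sum(counts.values())
    keys = set(counts) | set(ideal_dist)
    return sum(abs(counts.get(k, 0) / total - ideal_dist.get(k, 0.0))
               for k in keys)
```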
\subsection{Simulated Architectures} Using the architectures described in Fig.~\ref{fig:simulated_device_backends}, we consider the performance of our range of error mitigation techniques as a function of the topology and scale of the device. In each trial, each method is provided with a fixed 16000 shots to reconstruct a GHZ state from the output of the simulated device. As the Qiskit statevector simulators limit readout errors on a per-qubit basis, the noise in these experiments is biased but not correlated. \begin{figure} \caption{\small Error rate of GHZ state preparation over a family of simulated devices with grid coupling maps as shown in Fig.~\ref{fig:squ_arch}.\label{fig:grid_eval}} \end{figure} Figs.~\ref{fig:grid_eval} and~\ref{fig:hex_eval} show that CMC and CMC-ERR perform the best of the non-exponential methods over grid and hexagonal architectures. The Full and Linear methods provide the greatest reduction in one-norm distance, but require exponential overheads, which limits their application to low qubit counts. AIM and SIM are nearly indistinguishable from the bare error rate. JIGSAW outperforms the averaging methods, but is in turn outperformed by CMC. As the statevector simulator's measurement errors are applied on a per-qubit basis, the linearity of the error model is guaranteed. As a result, the linear calibration performs as well as the full calibration over these simulations, up to the difference in sampling during characterisation. \begin{figure} \caption{\small Error rate of GHZ state preparation over a family of simulated devices with hexagonal coupling maps as shown in Fig.~\ref{fig:hex_arch}.\label{fig:hex_eval}} \end{figure} In the case of the fully connected topology seen in Fig.~\ref{fig:ful_arch}, the number of edges scales quadratically with the number of qubits.
As a result, the CMC method begins to suffer from a reduced number of shots per coupling map patch, which significantly reduces the accuracy of the method, as can be seen in Fig.~\ref{fig:full_eval}. For this dense coupling map JIGSAW slightly outperforms CMC, and would be expected to significantly outperform it for larger devices. CMC-ERR outperforms both CMC and JIGSAW, using the constraint of $n$ edges to mitigate the quadratic scaling issues of base CMC. \begin{figure} \caption{\small Error rate of GHZ state preparation over a family of simulated devices with fully connected coupling maps as shown in Fig.~\ref{fig:ful_arch}.\label{fig:full_eval}} \end{figure} For completeness, we include the octagonal topology with the same error model as the other families of devices. At 16 qubits, JIGSAW achieves a $23\%$ reduction over the baseline error rate, while CMC reduces the error rate by $37\%$. For the same octagonal device, AIM and SIM are within $1\%$ of the initial error rate. \subsection{NISQ Device Benchmarks} \begin{table}[t!]\small \centering \begin{tabular}{lcccc} \toprule \multirow{2}*{\bf Method} & \multicolumn{4}{c}{GHZ Circuit Benchmarks} \\ & {\footnotesize Manila - $5$} & {\footnotesize Lima - $5$} & {\footnotesize Quito - $5$} & {\footnotesize Nairobi - $7$} \\ \midrule Bare & $0.20\substack{+0.10 \\ -0.04}$ & $0.23\substack{+0.02 \\ -0.01}$ & $0.26\substack{+0.01 \\ -0.01}$ & $0.56\substack{+0.02 \\ -0.01}$ \\ Full & $0.10\substack{+0.11 \\ -0.04}$ & $0.07\substack{+0.02 \\ -0.01}$ & $0.04\substack{+0.02 \\ -0.01}$ & N/A\\ Linear & $0.11\substack{+0.02 \\ -0.02}$ & $0.06\substack{+0.02 \\ -0.01}$ & $0.08\substack{+0.02 \\ -0.02}$ & N/A \\ AIM & $0.18\substack{+0.03 \\ -0.02}$ & $0.23\substack{+0.01 \\ -0.01}$ & $0.26\substack{+0.01 \\ -0.01}$ & $0.57\substack{+0.01 \\ -0.01}$\\ SIM & $0.19\substack{+0.03 \\ -0.02}$ & $0.23\substack{+0.01 \\ -0.01}$ & $0.27\substack{+0.01 \\ -0.01}$ & $0.62\substack{+0.01 \\ -0.02}$\\ JIGSAW & $0.18\substack{+0.06 \\ -0.06}$ &
${\bf 0.17\substack{\bf +0.06 \\ \bf -0.05}}$ & $0.21\substack{+0.04 \\ -0.04}$ & $0.52\substack{+0.19 \\ -0.23}$\\ {\it CMC } & $0.14\substack{+0.09 \\ -0.05}$ & ${\bf 0.17\substack{\bf +0.01 \\ \bf -0.04}}$ & ${\bf 0.16\substack{\bf +0.01 \\ \bf -0.01}}$ & $0.64\substack{+0.02 \\ -0.06}$\\ {\it $\text{CMC}_{\text{ERR}}$} & ${\bf0.13\substack{\bf +0.08 \\ \bf -0.03}}$ & $0.25\substack{+0.03 \\ -0.03}$ & $0.17\substack{+0.10 \\ -0.03}$ & ${\bf 0.33\substack{\bf +0.41 \\ \bf -0.12}}$\\ \bottomrule \end{tabular} \caption{\small GHZ benchmarks of the 1-norm distance between the output distribution and the ideal GHZ state. The relative performance of CMC and $\text{CMC}_{\text{ERR}}$ depends on the `uniformity' of the error distributions for that device, as seen in Fig.~\ref{fig:corr_coupling_demo}. The best non-exponential method is bolded in each column.\label{table:GHZ_dev}} \end{table} Lastly, we compare the performance of these methods on physical quantum devices. IBM Manila and Nairobi have local but non-coupling-map-aligned correlated errors, while Lima and Quito have locally uniform error profiles, as seen in Fig.~\ref{fig:corr_coupling_demo}. From this we would expect a relative advantage for CMC-ERR on Manila and Nairobi, and for CMC on Lima and Quito. This is supported by the results in Table~\ref{table:GHZ_dev}. Each method is allocated 32000 shots to perform both calibration and any required circuit executions. For all five-qubit devices the exponential methods achieve the best performance. However, at the seven-qubit mark these methods begin to encounter scaling issues, with the Full calibration approach exceeding 100 calibration circuits. Of the non-exponential methods, CMC and CMC-ERR beat or match JIGSAW on all devices. These results reveal a strong association between the performance of each method and the underlying error model of the device.
Correlated errors on IBMQ-Nairobi are almost anti-aligned with the device's coupling map, as seen in Fig.~\ref{fig:err_map_demo}. This explains the poor performance of base CMC on this device, which failed to characterise most of the correlated errors. Similarly, JIGSAW, which relies on calibrating against random pairs of qubits, performs best on relatively even error distributions. In several instances JIGSAW's best case matched that of CMC or CMC-ERR, but it had a worse average performance due to its reliance on randomised calibration pairs. CMC-ERR exhibited a $41\%$ reduction in the average error rate on IBMQ Nairobi, demonstrating the efficacy of tailoring correlated error mitigation methods to device noise profiles. \section{Discussion} \label{section:discussion} \subsection{Scalability} The IBMQ implementations of the Full and Linear calibration matrices quickly encounter scaling issues. These methods require the construction of a $2^n\times2^n$ matrix. The Full method performs this using $2^n$ calibration circuits. For $n > 10$ it becomes infeasible to queue and execute all the required calibration circuits. Alongside the constraints of constructing this matrix, we must also consider the classical computational overheads. For example, at $n = 14$ the dense calibration matrix (using a 32-bit floating point representation) occupies 32GB of RAM. Calculating the inverse of this matrix and applying it to the sparse output vector of the measurement results is computationally infeasible. By comparison, using a na\"ive COO sparse matrix representation with CMC, 32GB affords us 32 qubits. More memory-efficient sparse matrix constructions may be made given the regular structure of the CMC matrices. For calibration matrix methods (Full, Linear and CMC), as long as the error profile of the device does not drift significantly, it is possible to apply the same calibration matrix to multiple circuits executed on the same device.
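The memory argument above can be made concrete: a patch inverse can be applied as a sparse Kronecker product without ever materialising the dense $2^n \times 2^n$ matrix. A sketch, assuming for simplicity a patch on adjacent qubit indices; `apply_patch_inverse` is our illustrative name:

```python
import numpy as np
from scipy import sparse

def apply_patch_inverse(probs, C_edge, qubits, n):
    """Apply the inverse of a two-qubit patch calibration to an n-qubit
    probability vector via sparse Kronecker products, avoiding the dense
    2^n x 2^n matrix. Assumes the patch acts on adjacent indices (q, q+1)."""
    q = qubits[0]
    left = sparse.identity(2 ** q, format='csr')
    right = sparse.identity(2 ** (n - q - 2), format='csr')
    inv_e = sparse.csr_matrix(np.linalg.inv(C_edge))
    M = sparse.kron(sparse.kron(left, inv_e), right, format='csr')
    return M @ probs
```

Only the $4\times 4$ block is ever inverted densely; everything else stays as sparse identities, which is why a chain of such products remains tractable well past the dense-matrix memory wall.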
Circuit-dependent methods (AIM, SIM, and JIGSAW) must be re-run from scratch for every new circuit that is executed. We find that ERR characterisations are stable for a given device on the order of weeks between significant recalibrations. \subsection{Linearity of Edge Counts on NISQ Devices} The main concern for the scalability of CMC is the number of edges in the coupling map. A non-sparse coupling map increases the number of two-qubit calibrations that are performed and decreases the number of calibration patches that may be performed simultaneously. Table~\ref{table:edge_count} shows the number of edges as a function of the number of qubits for a range of modern architectures. \begin{table}[h] \centering \begin{tabular}{lcr} \toprule Architecture & Example & Edge Count$(n)$\\ \midrule {\footnotesize Linear} & {\footnotesize Honeywell H1}{\footnotesize~\cite{honeywell}} & {\footnotesize $n - 1$} \\ {\footnotesize Grid} {\footnotesize (Fig.~\ref{fig:squ_arch})} & {\footnotesize Google Sycamore}{\footnotesize~\cite{sycamore}} & {\footnotesize $2n + c + r$}\\ {\footnotesize Local Grid} {\footnotesize (Fig.~\ref{fig:coupling_tokyo})} & {\footnotesize IBM Tokyo}{\footnotesize~\cite{Qiskit}} & {\footnotesize $4n + c + r$}\\ {\footnotesize Hexagonal} {\footnotesize (Fig.~\ref{fig:hex_arch})} & {\footnotesize Rigetti ACORN}{\footnotesize \cite{rigetti}} & {\footnotesize $(n - 1) + cr$}\\ {\footnotesize Heavy Hex} {\footnotesize (Fig.~\ref{fig:hex_arch})} & {\footnotesize IBM Washington}{\footnotesize \cite{Qiskit}} & {\footnotesize $(n - 1) + cr$}\\ {\footnotesize Octagonal} {\footnotesize (Fig.~\ref{fig:oct_arch})} & {\footnotesize Rigetti ASPEN}{\footnotesize \cite{rigetti_aspen}} & {\footnotesize $\frac{3n}{2} - 2r - 2c$}\\ {\footnotesize Fully Connected} {\footnotesize (Fig.~\ref{fig:ful_arch})} & {\footnotesize IonQ Forte}{\footnotesize \cite{ionq}} & {\footnotesize $\frac{1}{2}(n^2 - n)$}\\ \bottomrule \end{tabular} \caption{\small Edge count as a function of the
number of qubits $n$, rows $r$ and columns $c$ for a range of device architectures. For all architectures $n \ge rc$. All architectures except IonQ's grow the number of edges linearly with the number of qubits.\label{table:edge_count}} \end{table} IonQ's fully-connected devices~\cite{ionq} (Fig.~\ref{fig:ful_arch}) are the only family of architectures with a greater than linear growth in the number of edges, hence it is not scalable to perform bare CMC over this fully connected graph. The construction of CMC-ERR avoids this scaling problem by bounding the total number of calibration patches by the number of qubits. \section{Conclusion} \label{section:conclusion} In this paper we have demonstrated CMC and CMC-ERR, novel methods for efficiently constructing and joining sparse and scalable measurement calibration matrices. These methods do not increase the number of measurement shots on the device. Results on IBMQ devices have achieved up to a $41\%$ reduction in the error rate of the circuit, averaging $35\%$ over the experiments performed, and outperform all other non-exponential methods. Our results demonstrate a strong association between the performance of measurement error mitigation methods and the underlying error model of the device. All code required to replicate these results can be found in the corresponding \href{https://github.com/Alan-Robertson/quantum_measurement_error_mitigation}{git repository}. \end{document}
\begin{document} \title{The effect of repeated differentiation on $L$-functions} \author{Jos Gunns} \address{School of Mathematics, University of Bristol, University Walk, Bristol, BS8 1TW} \email{[email protected]} \author{Christopher Hughes} \address{Department of Mathematics, University of York, York, YO10 5DD, United Kingdom} \email{[email protected]} \date{1 May 2018} \subjclass[2010]{11M41} \begin{abstract} We show that under repeated differentiation, the zeros of the Selberg $\Xi$-function become more evenly spaced out, but with some scaling towards the origin. We do this by showing the high derivatives of the $\Xi$-function converge to the cosine function, and this is achieved by expressing a product of Gamma functions as a single Fourier transform. \end{abstract} \maketitle \section{Introduction} In 2006 Haseo Ki \cite{Ki05} proved a conjecture of Farmer and Rhoades \cite{FarRho}, that differentiating the Riemann $\Xi$-function evens out the zero spacing. Specifically, Ki showed that there exist sequences $A_n$ and $C_n$ with $C_n \to 0$ slowly such that \begin{equation}\label{eq:Zeta_Cosine} \lim_{n \to \infty}A_n\Xi^{(2n)}(C_nz)=\cos(z). \end{equation} In this paper we extend Ki's result to the entire Selberg Class of $L$-functions, showing that there exist sequences $A_n$ and $C_n$ (which depend on the properties of the $L$-function under consideration) and constants $M'$ and $\theta$, such that \begin{equation*} \lim_{n \to \infty}A_n\Xi_F^{(2n)}\left(C_n z-\frac{M'}{\Lambda}\right)= \cos(z+\theta), \end{equation*} where $\Xi_F$ is the Xi-function for the $L$-function $F$, an element of the Selberg Class. This result is stated more precisely in Theorem~\ref{thm:diffL}. In \cite{Selberg}, Selberg proposed an axiomatic definition of an $L$-function, now known as the Selberg Class.
\begin{definition} A function $F(s)$ is an element of the Selberg Class if: \begin{enumerate} \item It has a Dirichlet series of the form \[ F(s) = \sum_{n=1}^\infty \frac{a_n}{n^s} \] which is absolutely convergent for $\operatorname{Re}(s)>1$. \item It is a meromorphic function such that $(s-1)^m F(s)$ is an entire function of order 1, for some integer $m\geq 0$. \item It has a functional equation of the form $\Phi(s) = \overline{\Phi(1-\overline{s})}$, where \[ \Phi(s) = \epsilon Q^s F(s) \prod_{j=1}^k \Gamma(\lambda_j s +\mu_j) \] with $\epsilon, Q, \lambda_j$ and $\mu_j$ all constants, and subject to $|\epsilon|=1$, $Q>0$, $\lambda_j>0$ and $\operatorname{Re}(\mu_j)\geq 0$. \item The coefficients in the Dirichlet series satisfy $a_1=1$ and $a_n = O(n^\delta)$ for some fixed positive $\delta$. \item It has an Euler product in the sense that \[ \log F(s) = \sum_{n} \frac{b_n}{n^s} \] with $b_n=0$ unless $n=p^r$ for some prime $p$ and positive integer $r$, and $b_n = O(n^\theta)$ for some $\theta<1/2$. \end{enumerate} \end{definition} Kaczorowski and Perelli \cite{KP11} define an Extended Selberg Class, essentially by dropping the requirement for the function to satisfy an Euler product. Our results apply equally to elements of this extended class of $L$-functions. \begin{definition} A function $F(s)$ is an element of the Extended Selberg Class if it satisfies axioms (1)--(3) above. \end{definition} \begin{remark} The degree of an $L$-function is $2\Lambda$, where \[ \Lambda = \sum_{j=1}^k \lambda_j . \] It is conjectured that the degree is always an integer. However, this is only known for $L$-functions of degree 2 or less \cite{KP11}. More specifically, it is believed that, using the duplication formula, the gamma functions can be transformed so that $\lambda_j=1/2$ for all $j$ (and in such a case, the $L$-function has degree $k$).
\end{remark} \begin{definition} Let $F$ be an element of the Selberg Class, and set \[ \xi_F(s) = s^m (1-s)^m \epsilon Q^s \prod_{j=1}^k \Gamma(\lambda_j s +\mu_j) F(s). \] \end{definition} Note that by assumption of $F$ being in the Selberg Class, $\xi_F(s)$ is an entire function of order 1, with the functional equation $\xi_F(s) = \overline{\xi_F(1-\overline{s})}$. \begin{definition} Set $\Xi_F(z) = \xi_F(\tfrac12+\mathrm{i} z)$. \end{definition} \begin{remark} From the functional equation $\Xi_F(z)$ is a real function for $z\in\mathbb R$. If the Dirichlet coefficients of $F$ are real, then $\Xi_F(z)$ is an even function. \end{remark} Ki proved his result for the Riemann $\Xi$-function by starting with the integral representation of the Gamma function to show that \begin{equation*} \Xi_\zeta(z)=\int_{-\infty}^{\infty}\varphi(x)e^{\mathrm{i} x z}\mathrm{d} x, \end{equation*} where \begin{equation*} \varphi(x)=2 \sum_{n=1}^\infty \left(2n^4 \pi^2 e^{9x/2} - 3n^2\pi e^{5x/2} \right) e^{-n^2\pi e^{2 x}}. \end{equation*} Note that the functional equation yields the fact that $\varphi(x) = \varphi(-x)$. After a suitable change of variables, this yields \begin{equation*} \Xi_\zeta(z)=2\pi^2 \int_{0}^{\infty} e^{-a e^x} e^{b x} \left(1+O(e^{-x})\right) \left(e^{\mathrm{i} x z/2} + e^{-\mathrm{i} x z/2}\right)\mathrm{d} x, \end{equation*} with $a=\pi$ and $b=9/4$. By differentiating such integrals, Ki was able to explicitly show the existence of sequences $A_n$ and $C_n$ such that \eqref{eq:Zeta_Cosine} held. His method also holds for Hecke $L$-functions, since the functional equation, analogously to the Riemann Xi-function, can be written with a single Gamma function. However, the Selberg Class of $L$-functions generally includes a product of disparate Gamma functions, which cannot be simplified down to a single one by the multiplication formula of the Gamma function.
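The classical Fourier pair above can be checked numerically. The sketch below (our own, with ad hoc choices for the series truncation and the integration cutoff, using mpmath) compares a direct evaluation of $\Xi_\zeta(z)=\xi(\tfrac12+\mathrm{i} z)$ with the cosine transform of $\varphi$, using the evenness of $\varphi$:

```python
import mpmath as mp

mp.mp.dps = 30

def phi(x, N=40):
    # Truncation of the series for varphi(x); for x >= 0 the tail
    # decays doubly exponentially, so N = 40 is far more than enough.
    t = mp.exp(2 * x)
    return 2 * mp.fsum((2 * n**4 * mp.pi**2 * mp.exp(9 * x / 2)
                        - 3 * n**2 * mp.pi * mp.exp(5 * x / 2))
                       * mp.exp(-n**2 * mp.pi * t)
                       for n in range(1, N + 1))

def Xi(z):
    # Xi_zeta(z) = xi(1/2 + iz), xi(s) = s(s-1)/2 * pi^(-s/2) Gamma(s/2) zeta(s)
    s = mp.mpf(1) / 2 + mp.mpc(0, 1) * z
    return s * (s - 1) / 2 * mp.pi ** (-s / 2) * mp.gamma(s / 2) * mp.zeta(s)

# varphi is even, so the Fourier transform reduces to a cosine transform;
# the integrand is negligible beyond x = 4 (double-exponential decay).
z = mp.mpf(1)
direct = Xi(z).real
fourier = 2 * mp.quad(lambda x: phi(x) * mp.cos(x * z), [0, 4])
print(mp.nstr(direct, 15), mp.nstr(fourier, 15))
```

The two evaluations agree to high precision, which is a useful sanity check on the normalisation of $\varphi$.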
In section \ref{sect:FourierTrans}, we find the Fourier transform of the analogous $\Xi$-function for an element of the (extended) Selberg Class of $L$-functions, showing it can be written as \begin{equation*} \Xi_F(z)=B \int_{-\infty}^{\infty} \varphi(x)e^{\mathrm{i} \Lambda z x}\mathrm{d} x, \end{equation*} where $\varphi(x) = e^{-a e^{x}} e^{b x} \left(1+O(e^{-x})\right)$ as $x\to\infty$, and where $\Lambda = \sum\lambda_j$. In section \ref{sect:RepeatedDiff}, we start from that result to demonstrate the existence of sequences $A_n$ and $C_n$ such that \begin{equation*} \lim_{n \to \infty} A_n \Xi_F^{(2n)}\left(C_n z-\frac{M'}{\Lambda}\right)=\cos(z+\theta) \end{equation*} where $\theta=\arg(B)$ and $M'=\sum_{j=1}^k \operatorname{Im} \mu_j$. We utilize a similar argument to that used by Ki. The rates of convergence are considered in section \ref{sect:graphs}, demonstrated by numerical examples. \section{Expressing the $\Xi$-function as a Fourier transform}\label{sect:FourierTrans} \begin{theorem}\label{thm:FourierTransformXi} Let $F$ be an element of the Selberg Class, with data $m$, $k$, $\varepsilon, Q, \lambda_j$, and $\mu_j$. The Fourier transform of the $\Xi$-function associated with $F$ is \begin{align*} \widehat\Xi_F(x) &= \int_{-\infty}^\infty \Xi_F(z) e^{-\mathrm{i} x z} \mathrm{d} z\\ &= \hat B \exp\left(- \hat a e^{x/\Lambda}+ \hat b x \right) \left(1+O\left(e^{-x/\Lambda}\right)\right) \end{align*} where \[ \hat a = \Lambda Q^{-1/\Lambda} \prod_{j=1}^k \lambda_j^{-\lambda_j / \Lambda} \] and \[ \hat b= \frac{2m+M+\tfrac12\Lambda}{\Lambda} \] and \[ \hat B = (-1)^m \varepsilon Q^{-(M+2m)/\Lambda} (2\pi)^{(k+1)/2} \Lambda^{2m-1/2} \prod_{j=1}^k \lambda_j^{-\frac12 + \mu_j + \lambda_j \left( - M -2m\right)/\Lambda} \] where \[ \Lambda= \sum_{j=1}^k \lambda_j \] and \[ M = \sum_{j=1}^k \mu_j - \tfrac12(k-1). \] \end{theorem} \begin{remark} Note that $\Lambda$ and $M$ are invariant under the Gamma multiplication formulae.
\end{remark} Recall that \begin{align*} \Xi_F(z)&=\xi_F(\frac{1}{2}+iz) \\ &=\varepsilon Q^{1/2+iz} \left(\tfrac14+z^2\right)^m F(\tfrac{1}{2}+\mathrm{i} z) \prod_{j=1}^k\Gamma(\mathrm{i} \lambda_jz+\mu_j+\tfrac{1}{2}\lambda_j) \end{align*} is an entire function. We wish to find its Fourier transform \[ \widehat\Xi_F(x) = \int_{-\infty}^\infty \Xi_F(z) e^{-\mathrm{i} x z} \mathrm{d} z. \] Shifting the contour so that $F(s)$ can be represented by its Dirichlet series, swapping the order of summation and integration and shifting the contour back, we find that \begin{equation}\label{eq:XiHat} \widehat\Xi_F(x) = \varepsilon Q^{1/2} \sum_{n=1}^\infty \frac{a_n}{n^{1/2}} \int_{-\infty}^\infty \left(\tfrac14+z^2\right)^m \prod_{j=1}^k\Gamma(\mathrm{i} \lambda_jz+\mu_j+\tfrac{1}{2}\lambda_j) \left(\frac{n e^x}{Q}\right)^{-\mathrm{i} z} \mathrm{d} z. \end{equation} Thus the Fourier transform can be found by convolutions and differentiations of the Fourier transform of the Gamma function. \begin{theorem}[Fourier transform of multiple gamma functions]\label{thm:FT gamma} Let $\lambda_1,\dots,\lambda_k>0$ and let $\alpha_1,\dots,\alpha_k$ be such that their real parts are all positive. Then for large $T$, \begin{multline*} \int_{-\infty}^\infty \left(\prod_{j=1}^k \Gamma(\alpha_j + \mathrm{i}\lambda_j z) \right) e^{-\mathrm{i} T z} \mathrm{d} z \\ = C_k \exp\left(-\Lambda e^{T/\Lambda} \prod_{j=1}^k \lambda_j^{-\lambda_j / \Lambda} + \frac{T(A-(k-1)/2)}{\Lambda}\right) \left(1+O\left(e^{-T/\Lambda}\right)\right) \end{multline*} where $\Lambda= \sum_{j=1}^k \lambda_j$ and $A = \sum_{j=1}^k \alpha_j$ and \begin{equation}\label{eq:Ck} C_k = \frac{(2\pi)^{(k+1)/2}}{\sqrt{\Lambda}} \prod_{j=1}^k \lambda_j^{-\frac12 + \alpha_j + \lambda_j \left( \frac12 (k-1) - A \right)/\Lambda}. \end{equation} \end{theorem} \begin{remark} Booker stated a similar result in the case when $\lambda_j=1/2$ for all $j$, in section 5.2 of \cite{Boo05}. 
\end{remark} \begin{proof} We prove this theorem by induction. The base case, when $k=1$, says that for $\lambda>0$ and $\operatorname{Re}(\alpha)>0$, \begin{equation}\label{eq:gamma_FT} \int_{-\infty}^\infty \Gamma(\mathrm{i} \lambda z+\alpha) e^{-\mathrm{i} T z} \mathrm{d} z = \frac{2\pi}{\lambda} \exp\left(-e^{T/\lambda}+ T\alpha/\lambda \right). \end{equation} This is simply the Fourier transform of one gamma function, a classical result. With our choice of Fourier constants the convolution theorem is \[ \int_{-\infty}^\infty f(z) g(z) e^{-\mathrm{i} T z} \mathrm{d} z = \frac{1}{2\pi} \int_{-\infty}^\infty \widehat f(x) \widehat g(T-x) \mathrm{d} x \] where $\widehat f$ and $\widehat g$ are the Fourier transforms of $f$ and $g$ respectively. The Fourier transform of $k+1$ gamma functions will be the convolution of the Fourier transform of $k$ gamma functions with the Fourier transform of one gamma function, both of which are known by the inductive hypothesis. That is, \begin{multline}\label{eq:conv_many_gammas} \int_{-\infty}^\infty \left(\prod_{j=1}^{k+1} \Gamma(\alpha_j + \mathrm{i}\lambda_j z) \right) e^{-\mathrm{i} T z} \mathrm{d} z \\ = \frac{C_k}{\lambda_{k+1}} \int_{-\infty}^\infty \exp\left(-\Lambda e^{x/\Lambda} \prod_{j=1}^k \lambda_j^{-\lambda_j / \Lambda} + \frac{x(A-(k-1)/2)}{\Lambda}\right) \left(1+O\left(e^{-x/\Lambda}\right) \right) \\ \times \exp\left(-e^{(T-x)/\lambda_{k+1}}+ \frac{(T-x)\alpha_{k+1}}{\lambda_{k+1}} \right) \mathrm{d} x \end{multline} where we have set $\Lambda= \sum_{j=1}^k \lambda_j$ and $A = \sum_{j=1}^k \alpha_j$. Later in the proof, we will also set $\Lambda'= \sum_{j=1}^{k+1} \lambda_j$ and $A' = \sum_{j=1}^{k+1} \alpha_j$. We will asymptotically evaluate this integral.
Note that the exponential in the integrand is dominated by \[ -\Lambda e^{x/\Lambda} \prod_{j=1}^k \lambda_j^{-\lambda_j / \Lambda} -e^{(T-x)/\lambda_{k+1}} \] and this has a maximum at $x=x_0$ where $x_0$ is such that \[ - e^{x_0/\Lambda} \prod_{j=1}^k \lambda_j^{-\lambda_j / \Lambda} + \frac{1}{\lambda_{k+1}} e^{(T-x_0)/\lambda_{k+1}} = 0 \] that is \[ x_0 = \frac{T \Lambda}{\Lambda'} + \frac{\lambda_{k+1} \Lambda}{\Lambda'} \ln\left(\frac{1}{\lambda_{k+1}} \prod_{j=1}^k \lambda_j^{\lambda_j / \Lambda} \right) \] where $\Lambda' = \Lambda+\lambda_{k+1} = \sum_{j=1}^{k+1} \lambda_j$. Thus, expanding around $x=x_0+\epsilon$ for small $\epsilon$, we have (after a fair amount of straightforward algebraic simplification, and using the identity $\Lambda' = \Lambda + \lambda_{k+1}$) \begin{multline*} - \Lambda e^{x/\Lambda} \prod_{j=1}^k \lambda_j^{-\lambda_j / \Lambda} -e^{(T-x)/\lambda_{k+1}} = -e^{T/\Lambda'} \prod_{j=1}^{k+1} \lambda_j^{-\lambda_j / \Lambda'} \left( \Lambda e^{\epsilon / \Lambda} + \lambda_{k+1} e^{-\epsilon/\lambda_{k+1}} \right) \\ = - \Lambda' e^{T/\Lambda'} \prod_{j=1}^{k+1} \lambda_j^{-\lambda_j / \Lambda'} \left(1 + \frac{1}{2\lambda_{k+1} \Lambda} \epsilon^2 + B_1 \epsilon^3 + O( \epsilon^4) \right) \end{multline*} where $B_1$ is an inconsequential constant that depends upon $\Lambda$ and $\lambda_{k+1}$. (We remark that it is no surprise the coefficient of the $\epsilon$ term is zero, as this is the expansion around the maximum of the LHS). 
Substituting $x=x_0+\epsilon$ in the two other terms in the exponent of the integrand in \eqref{eq:conv_many_gammas} and letting $A' = A + \alpha_{k+1} = \sum_{j=1}^{k+1} \alpha_j$ we have \begin{multline*} \frac{x(A-\tfrac12(k-1))}{\Lambda} + \frac{(T-x)\alpha_{k+1}}{\lambda_{k+1}} = \frac{T (A'-\tfrac12(k-1))}{\Lambda'} \\ + \frac{\lambda_{k+1} (A-\tfrac12(k-1)) - \alpha_{k+1}\Lambda}{\Lambda'} \ln\left(\frac{1}{\lambda_{k+1}} \prod_{j=1}^k \lambda_j^{\lambda_j / \Lambda} \right) + B_2 \epsilon \end{multline*} where $B_2 = \frac{A-\tfrac12(k-1)}{\Lambda} - \frac{\alpha_{k+1} }{\lambda_{k+1}} $ is another inconsequential constant. Substituting both these expansions back into \eqref{eq:conv_many_gammas} we see that the Fourier transform of the $k+1$ Gamma functions is asymptotically \begin{multline*} C \exp\left(- \Lambda' e^{T/\Lambda'} \prod_{j=1}^{k+1} \lambda_j^{-\lambda_j / \Lambda'} + \frac{T (A'-\tfrac12(k-1))}{\Lambda'} \right) \\ \times \int \exp\left( - \epsilon^2\frac{\Lambda'}{2\lambda_{k+1} \Lambda} e^{T/\Lambda'} \prod_{j=1}^{k+1} \lambda_j^{-\lambda_j / \Lambda'} \left(1 + B_1 \epsilon + O( \epsilon^2)\right) + B_2 \epsilon \right) \mathrm{d} \epsilon \end{multline*} where \begin{equation}\label{eq:C} C = \frac{ C_k }{\lambda_{k+1}} \left(\frac{1}{\lambda_{k+1}} \prod_{j=1}^k \lambda_j^{\lambda_j / \Lambda} \right)^{ \frac{\lambda_{k+1} (A-\frac12(k-1)) - \alpha_{k+1}\Lambda}{\Lambda'} }. \end{equation} We utilise here the normal methods of asymptotic analysis, where the range of the $\epsilon$ integral is thought of as being small (so $O(\epsilon)$ terms are small), but $\epsilon^2 e^{T/\Lambda'}$ is large, so the Gaussian integral can be extended to the whole real line with trivially small error. 
To be concrete, truncate the $\epsilon$ integral to be over $\left[-e^{-T/3\Lambda'} , e^{-T/3\Lambda'} \right]$ and let $R=\frac{\Lambda'}{2\lambda_{k+1} \Lambda} \prod_{j=1}^{k+1} \lambda_j^{-\lambda_j / \Lambda'}$ (we write $R$ rather than $Q$ to avoid a clash with the conductor), so we have \begin{multline*} \int_{-e^{-T/3\Lambda'}}^{e^{-T/3\Lambda'}} e^{- \epsilon^2 R e^{T/\Lambda'} \left(1 + B_1 \epsilon + O( \epsilon^2)\right) + B_2 \epsilon } \mathrm{d} \epsilon\\ = \int_{-e^{-T/3\Lambda'}}^{e^{-T/3\Lambda'}} e^{ - \epsilon^2 R e^{T/\Lambda'}} \left(1 - B_1 R e^{T/\Lambda'} \epsilon^3 + B_2 \epsilon + O\left(e^{2T/\Lambda'} \epsilon^6\right) \right) \mathrm{d} \epsilon. \end{multline*} We can extend the integral to be over all $\mathbb R$ with a tiny error, of size $O\left( e^{-R e^{T/3\Lambda'}}\right)$. Note that due to the symmetry of the integral, the odd terms in $\epsilon$ vanish, and note that the big-O term in the integrand contributes $O\left(e^{-3T/2\Lambda'}\right)$ to the integral. Therefore, the above integral equals \begin{multline*} \int_{-\infty}^\infty e^{ - \epsilon^2 R e^{T/\Lambda'}} \left(1 + O\left(e^{2T/\Lambda'} \epsilon^6\right) \right) \mathrm{d} \epsilon + O\left( e^{-R e^{T/3\Lambda'}}\right)\\ =\sqrt{\frac{\pi}{R}} e^{-T/2\Lambda'} \left(1 + O(e^{-T/\Lambda'})\right). \end{multline*} It is easy to see that the contribution to \eqref{eq:conv_many_gammas} from outside the range \begin{equation*} \left[x_0-e^{-T/3\Lambda'} , x_0+e^{-T/3\Lambda'} \right] \end{equation*} is tiny, dominated by the error term above, and so \begin{multline*} \int_{-\infty}^\infty \left(\prod_{j=1}^{k+1} \Gamma(\alpha_j + \mathrm{i}\lambda_j z) \right) e^{-\mathrm{i} T z} \mathrm{d} z = \sqrt{\frac{2\pi \lambda_{k+1} \Lambda}{\Lambda'}} \prod_{j=1}^{k+1} \lambda_j^{\lambda_j / (2\Lambda')} C \times\\ \times \exp\left(- \Lambda' e^{T/\Lambda'} \prod_{j=1}^{k+1} \lambda_j^{-\lambda_j / \Lambda'} + \frac{T (A'-\tfrac12 k)}{\Lambda'} \right) \left(1+O\left(e^{-T/\Lambda'}\right) \right).
\end{multline*} In order to simplify the constant, recall the definitions of $C$ given in \eqref{eq:C} and $C_k$ given in \eqref{eq:Ck}. After some rearranging, we see that \begin{align*} \sqrt{\frac{2\pi \lambda_{k+1} \Lambda}{\Lambda'}} \prod_{j=1}^{k+1} \lambda_j^{\lambda_j / (2\Lambda')} C &= \frac{(2\pi)^{(k+2)/2}}{\sqrt{\Lambda'}} \prod_{j=1}^{k+1} \lambda_j^{-1/2 + \alpha_j + \lambda_j \left(k/2 - A' \right)/\Lambda'} \\ &=C_{k+1} \end{align*} which is the required form for $k+1$ Gamma functions, thus completing the proof. \end{proof} \begin{corollary} Let $\lambda_1,\dots,\lambda_k>0$ and let $\alpha_1,\dots,\alpha_k$ be such that their real parts are all positive. Then for large $T$, \begin{multline*} \int_{-\infty}^\infty \left(\tfrac14 + z^2\right)^m \left(\prod_{j=1}^k \Gamma(\alpha_j + \mathrm{i}\lambda_j z) \right) e^{-\mathrm{i} T z} \mathrm{d} z \\ = C_{k,m} \exp\left(-\Lambda e^{T/\Lambda} \prod_{j=1}^k \lambda_j^{-\lambda_j / \Lambda} + \frac{T(2m+A-(k-1)/2)}{\Lambda}\right) \left(1+O\left(e^{-T/\Lambda}\right)\right) \end{multline*} where $\Lambda= \sum_{j=1}^k \lambda_j$ and $A = \sum_{j=1}^k \alpha_j$ and \begin{equation*} C_{k,m} = (-1)^m (2\pi)^{(k+1)/2} \Lambda^{2m-1/2} \prod_{j=1}^k \lambda_j^{-\frac12 + \alpha_j + \lambda_j \left( \frac12 (k-1) - A -2m\right)/\Lambda}. \end{equation*} \end{corollary} \begin{proof} The new term $\left(\tfrac14 + z^2\right)^m$ requires the first $2m$ derivatives of the RHS to be calculated. The big-O term is differentiable, and it dominates all the derivatives other than the $2m$\textsuperscript{th}. The result then follows immediately. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:FourierTransformXi}] First note that from the above Corollary, the contribution to \eqref{eq:XiHat} from the terms with $n>1$ is exponentially smaller than the error term in the $n=1$ term, for large $x$.
Since $a_1=1$ for an element of the Selberg Class, we have that for large $x$, \begin{multline*} \widehat \Xi_F(x) = (-1)^m \varepsilon Q^{-(M+2m)/\Lambda} (2\pi)^{(k+1)/2} \Lambda^{2m-1/2} \prod_{j=1}^k \lambda_j^{-\frac12 + \mu_j - \lambda_j \left( M +2m\right)/\Lambda} \\ \times \exp\left(-\Lambda Q^{-1/\Lambda} e^{x/\Lambda} \prod_{j=1}^k \lambda_j^{-\lambda_j / \Lambda} + (2m+M+\tfrac12\Lambda)\frac{x}{\Lambda}\right) \left(1+O\left(e^{-x/\Lambda}\right)\right), \end{multline*} where we have used the Corollary above, with $\alpha_j = \mu_j+\tfrac12\lambda_j$, $T=x-\log Q$ and we set $M = \sum_{j=1}^k \mu_j - \tfrac12(k-1)$. This is the theorem, with the constants $\hat B$, $\hat a$ and $\hat b$ given explicitly. \end{proof} \begin{remark} The proof made essential use of only the first three assumptions arising from $F(s)$ being an element of the Selberg class. Therefore this result holds for $F$ an element of the Extended Selberg Class (with $\hat B$ being trivially changed if $a_1\neq1$). \end{remark} \section{The $\Xi$-function under repeated differentiation}\label{sect:RepeatedDiff} Note that with our choice of constants, the inverse Fourier transform is \begin{equation*} \Xi_F(z) = \frac{1}{2\pi} \int_{-\infty}^\infty \widehat\Xi_F(x) e^{\mathrm{i} x z} \mathrm{d} x. \end{equation*} Note that the $\mu_j$, part of the data of the $L$-function $F$, could be complex. 
If we define \begin{equation*} M'=\sum_{j=1}^k \operatorname{Im} \mu_j, \end{equation*} and rescale $z$ we have \begin{align*} \Xi_F\left(\frac{z-M'}{\Lambda}\right) &=\frac{\Lambda}{2\pi} \int_{-\infty}^\infty \widehat\Xi_F(x \Lambda) e^{-\mathrm{i} x M'} e^{\mathrm{i} x z } \mathrm{d} x \\ &= B \int_{-\infty}^{\infty} \varphi(x)e^{ixz} \mathrm{d} x \end{align*} where by Theorem~\ref{thm:FourierTransformXi} \begin{equation}\label{eq:phi} \varphi(x) = e^{-a e^x} e^{b x} \left(1+O(e^{-x})\right), \end{equation} with \begin{equation}\label{eq:a} a = \Lambda Q^{-1/\Lambda} \prod_{j=1}^k \lambda_j^{-\lambda_j / \Lambda}, \end{equation} \begin{equation}\label{eq:b} b= 2m+\tfrac12\Lambda-\tfrac12(k-1) + \sum_{j=1}^k \operatorname{Re} \mu_j \end{equation} and $B = \hat B \Lambda / 2\pi$. (Note that $a,b \in\mathbb R$ and, in the notation of Theorem~\ref{thm:FourierTransformXi}, $a=\hat a$ and $b = \Lambda \hat b - \mathrm{i} M'$). \begin{theorem}\label{thm:diffL} Let $\Xi_F(z)$ be the Xi-function for the $L$-function $F$, an element of the Selberg Class. Let $w_n$ be defined as the solution to \begin{equation*} a w_n e^{w_n} = b w_n +2n \end{equation*} where $a$ and $b$ are given by \eqref{eq:a} and \eqref{eq:b} respectively. Then uniformly on compact subsets of $\mathbb C$, \begin{equation*} \lim_{n\to\infty} A_n \Xi_F^{(2n)} \left(C_n z-\frac{M'}{\Lambda}\right) = \cos(z+\arg(B)), \end{equation*} where $\Lambda$, $M'$, and $B$ are given in Theorem~\ref{thm:FourierTransformXi}, and the sequences $A_n$ and $C_n$ are given by \begin{equation*} A_n = (-1)^n \exp\left(a e^{w_n} - b w_n\right) \frac{\sqrt{n}}{2|B|\Lambda^{2n} w_n^{2n+1/2} \sqrt{\pi}} \end{equation*} and \begin{equation*} C_n = \frac{1}{\Lambda w_n}. \end{equation*} \end{theorem} \begin{remark} One can see that, for large $n$, the $w_n$ defined in the theorem satisfies \begin{equation*} w_n\sim \log\left(\frac{2n}{a}\right)-\log\log\left(\frac{2n}{a}\right) . 
\end{equation*} \end{remark} \begin{proof} From the functional equation for the $L$-function we have that \begin{equation*} \Xi_F\left(\frac{z-M'}{\Lambda}\right)=\overline{\Xi_F\left(\frac{\overline{z}-M'}{\Lambda}\right)} \end{equation*} so \begin{align*} B \int_{-\infty}^{\infty}\varphi(x)e^{ixz}\mathrm{d} x&=\overline{B}\int_{-\infty}^{\infty}\varphi(x)e^{-ixz}\mathrm{d} x\\ &=\overline{B}\int_{-\infty}^{\infty}\varphi(-x)e^{ixz}\mathrm{d} x, \end{align*} and since this holds for any $z \in \mathbb{C}$ we have \begin{equation*} B\varphi(x)=\overline{B}\varphi(-x). \end{equation*} Therefore \begin{equation}\label{eq:Xi_as_f_integral} \Xi_F\left(\frac{z-M'}{\Lambda}\right)=\int_0^{\infty}\varphi(x)\left(B e^{ixz}+\overline{B}e^{-ixz}\right) \mathrm{d} x. \end{equation} We can now consider just the integral \begin{equation*} f(z)=\int_0^{\infty}\varphi(x)e^{ixz}\mathrm{d} x \end{equation*} as the second integral will behave in much the same way. Differentiating this, we have that \begin{equation*} f^{(2n)}(z)=(-1)^n \int_0^{\infty}\varphi(x)x^{2n}e^{ixz}\mathrm{d} x. \end{equation*} Haseo Ki \cite{Ki05} proved that uniformly on compact subsets of $\mathbb C$, \begin{equation*} \lim_{n \to \infty}\int_0^{\infty}v_n\varphi(w_nx)x^{2n}e^{ixz}\mathrm{d} x =e^{iz}, \end{equation*} where $\varphi(x)$ is of the form given in \eqref{eq:phi}, and $w_n$ is defined such that \begin{equation*} a w_n e^{w_n} = b w_n +2n \end{equation*} and \begin{equation*} v_n = \sqrt{\frac{n w_n}{\pi}}e^{ ae^{w_n}} e^{-b w_n} . \end{equation*} Therefore, we have that \begin{align*} f^{(2n)}(z / w_n)&=(-1)^n\int_0^{\infty}\varphi(x)x^{2n}e^{\mathrm{i} x z / w_n}\mathrm{d} x\\ &=(-1)^n w_n^{2n+1}\int_0^{\infty}\varphi(w_nx)x^{2n}e^{ixz}\mathrm{d} x \end{align*} and using Ki's work (and including the error term) we have \begin{equation*} f^{(2n)}(z/w_n)=\sqrt{\frac{\pi}{nw_n}}(-1)^ne^{-ae^{w_n}}e^{bw_n}w_n^{2n+1}e^{iz}\left(1+\mathcal{O}(w_n^{-2})\right). 
\end{equation*} From \eqref{eq:Xi_as_f_integral} we see that \begin{equation*} \frac{1}{\Lambda^{2n}} \Xi_F^{(2n)}\left(\frac{z-M'}{\Lambda}\right) = B f^{(2n)}(z) + \overline{B}f^{(2n)}(-z) \end{equation*} so setting $C_n = \frac{1}{\Lambda w_n}$, \begin{align*} (-1)^n e^{a e^{w_n} - b w_n} w_n^{-2n-1} \sqrt{\frac{n w_n}{\pi}} \frac{1}{|B| \Lambda^{2n}} & \Xi_F^{(2n)}\left(C_n z-\frac{M'}{\Lambda}\right) \\ &=\left(\frac{B}{|B|} e^{iz}+\frac{\overline{B}}{|B|}e^{-iz}\right)(1+\mathcal{O}(w_n^{-2})) \\ &=2\cos(z+\arg(B))(1+\mathcal{O}(w_n^{-2})) \end{align*} and after taking the limit, the proof of Theorem~\ref{thm:diffL} is complete. \end{proof} \section{Numerical Demonstrations}\label{sect:graphs} In this section we briefly discuss how the $L$-function's data affects the convergence to the cosine function. Recall that the error term is $O(w_n^{-2})$ where \begin{equation*} w_n \sim \log\left(\frac{2n}{a}\right), \end{equation*} with \begin{equation*} a = \Lambda Q^{-1/\Lambda} \prod_{j=1}^k \lambda_j^{-\lambda_j / \Lambda}. \end{equation*} Therefore $L$-functions with larger conductor converge slightly more quickly, and high degree $L$-functions converge more slowly. This fact is clearer if one assumes that one can transform the $L$-function so its data has $\lambda_j=1/2$ for all $j$, since then $a = k Q^{-2/k}$. The sequence $C_n$ effectively scales the density of the zeros of the $(2n)$\textsuperscript{th} derivative. We have that \begin{equation*} C_n=\frac{1}{\Lambda w_n} \to 0, \end{equation*} which means that the zeros of the unscaled $(2n)$\textsuperscript{th} derivative have moved towards the origin. Compare, for example, the Riemann Xi-function before any derivatives have been taken and after 100 derivatives have been taken. \begin{figure} \caption{Plots of the Riemann Xi-function after $2n$ derivatives, for $n=0$ and $n=50$} \end{figure} These figures also demonstrate the convergence to the cosine function.
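To see concretely how slowly $C_n\to0$, the defining equation $a w_n e^{w_n} = b w_n + 2n$ can be solved numerically and compared with the asymptotic $w_n\sim\log(2n/a)-\log\log(2n/a)$. A small sketch for the Riemann case, where $a=\pi$ and $b=9/4$ as recalled in the introduction (the bracketing interval passed to the root finder is an ad hoc choice):

```python
import numpy as np
from scipy.optimize import brentq

# Riemann case: a = pi, b = 9/4; w_n solves a*w*e^w = b*w + 2n.
a, b = np.pi, 9.0 / 4.0

def w(n):
    f = lambda x: a * x * np.exp(x) - b * x - 2 * n
    return brentq(f, 1e-9, 50.0)  # ad hoc bracket, ample for these n

for n in (10, 100, 1000, 10000):
    asym = np.log(2 * n / a) - np.log(np.log(2 * n / a))
    print(n, round(w(n), 4), round(asym, 4))
```

Even at $n=10^4$ one only has $w_n\approx7$, which makes the $O(w_n^{-2})$ error term, and hence the convergence to the cosine, visibly slow.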
Finally, the $A_n$ term dictates how large the derivatives of the $L$-functions get. From \begin{equation*} A_n=\frac{\sqrt{n}(-1)^ne^{ae^{w_n}}e^{-bw_n}}{2w_n^{2n+1/2}\sqrt{\pi}|B| \Lambda^{2n}} \end{equation*} and using the defining equation for $w_n$, $a w_n e^{w_n} = b w_n + 2n$, we have that \begin{equation*} \log \left| A_n \right| = 2n(1-\log\Lambda - \log w_n) - a e^{w_n} ( w_n-1 ) + \tfrac12\log n - \tfrac12\log w_n + O(1) \end{equation*} and so since $w_n \sim \log(2n/a)$, as $n \to \infty$ we have that $A_n \to 0$, which means that the size of the $(2n)$\textsuperscript{th} derivative gets large as $n$ increases, although for $L$-functions of small degree where $\log \Lambda < 1$ the size of the derivatives can initially decrease, before eventually increasing. \end{document}
\begin{document} \begin{abstract} In this article, we study Griess algebras and vertex operator subalgebras generated by Ising vectors in a moonshine type VOA such that the subgroup generated by the corresponding Miyamoto involutions has the shape $3^2{:}2$ and any two Ising vectors generate a 3C subVOA $U_{3C}$. We show that such a Griess algebra is uniquely determined, up to isomorphisms. The structure of the corresponding vertex operator algebra is also discussed. In addition, we give a construction of such a VOA inside the lattice VOA $V_{E_8^3}$, which gives an explicit example for Majorana representations of the group $3^2{:}2$ of $3C$-pure type. \end{abstract} \maketitle \section{Introduction} A vertex operator algebra (VOA) $V=\oplus_{n=0}^{\infty} V_{n}$ is said to be of \emph{moonshine type} if $\dim(V_{0})=1$ and $V_{1}=0.$ In this case, the weight 2 subspace $V_{2}$ has a commutative non-associative product defined by $a\cdot b=a_{1}b$ for $a,b\in V_{2}$ and it has a symmetric invariant bilinear form $\langle\cdot,\cdot\rangle$ given by $\langle a,b\rangle\mathbbm{1}=a_{3}b$ for $a,b\in V_{2}$ \cite{FLM}. The algebra $(V_{2},\cdot,\langle\cdot,\cdot\rangle)$ is often called the \emph{Griess algebra} of $V$. An element $e\in V_{2}$ is called an \emph{Ising vector} if $e\cdot e=2e$ and the sub-VOA generated by $e$ is isomorphic to the simple Virasoro VOA $L(\frac{1}{2},0)$ of central charge $\frac{1}{2}$. In \cite{miy}, the basic properties of Ising vectors have been studied. Miyamoto also gave a simple method to construct involutive automorphisms of a VOA $V$ from Ising vectors. These automorphisms are often called Miyamoto involutions. When $V$ is the famous Moonshine VOA $V^{\natural}$, Miyamoto \cite{M} showed that there is a $1-1$ correspondence between the $2A$ involutions of the Monster group and Ising vectors in $V^{\natural}$ (see also \cite{ho}).
This correspondence is very useful for studying some mysterious phenomena of the Monster group and many problems about $2A$-involutions in the Monster group may also be translated into questions about Ising vectors. For example, McKay's observation on the affine $E_{8}$-diagram has been studied in \cite{LYY} using Miyamoto involutions and certain VOA generated by $2$ Ising vectors were constructed. Nine different VOA were constructed in \cite{LYY} and they are denoted by $U_{1A},U_{2A},U_{2B},U_{3A},U_{3C},U_{4A},U_{4B},U_{5A}$, and $U_{6A}$ because of their connection to the 6-transposition property of the Monster group (cf. \cite[Introduction]{LYY}), where $1A,2A,\dots,6A$ are the labels for certain conjugacy classes of the Monster as denoted in \cite{ATLAS}. In \cite{Sa}, Griess algebras generated by 2 Ising vectors contained in a moonshine type VOA over $\mathbb{R}$ with a positive definite invariant form are classified. There are also $9$ possible cases, and they correspond exactly to the Griess algebra $\mathcal{G}U_{nX}$ of the $9$ VOA $U_{nX}$, for $nX\in\{1A,2A,2B,3A,3C,4A,4B,5A,6A\}$. Therefore, there is again a correspondence between the dihedral subgroups generated by two $2A$-involutions, up to conjugacy, and the Griess subalgebras generated by 2 Ising vectors in $V^{\natural}$, up to isomorphism. Motivated by the work of Sakuma \cite{Sa}, Ivanov axiomatized the properties of Ising vectors and introduced the notion of Majorana representations for finite groups in his book \cite{Iva}. Ivanov and his research group also initiated a program on classifying the Majorana representations for various finite groups \cite{Iva2,Iva3,IS,IPSS}. In particular, the famous $196884$-dimensional Monster Griess algebra constructed by Griess\,\cite{G} is a Majorana representation of the Monster simple group. In fact, most known examples of Majorana representations are constructed as certain subalgebras of this Monster Griess algebra.
In this article, we study the Griess algebras and vertex operator subalgebras generated by Ising vectors in a moonshine type VOA such that the subgroup generated by the corresponding Miyamoto involutions has the shape $3^2{:}2$ and any two Ising vectors generate a 3C subVOA $U_{3C}$. We show that such a Griess algebra is uniquely determined, up to isomorphism. The structure of the corresponding vertex operator algebra is also discussed. In particular, we determine the central charge. We also give an explicit construction of such a VOA in a lattice VOA. Therefore, we obtain an example of a Majorana representation of the group $3^2{:}2$ of $3C$-pure type. We note that the Monster simple group does not have a subgroup of the shape $3^2{:}2$ such that all order 3 elements belong to the conjugacy class $3C$. Therefore, the VOA that we constructed cannot be embedded into the Moonshine VOA. The structure of the Griess algebra can actually be determined easily from the structure of the $3C$-algebra and the action of the group $3^2{:}2$. It is also not very difficult to determine the central charge. To construct the VOA explicitly, we combine the construction of the so-called dihedral subVOAs from \cite{LYY} and the construction of $EE_8$-pairs from \cite{GL}. In fact, it is quite straightforward to find Ising vectors satisfying our hypotheses. The main difficulty is to show that the subVOA generated by these Ising vectors has trivial weight $1$ subspace. The organization of this article is as follows. In Section 2, we recall some basic definitions and notations. We also review the structure of the so-called $3C$-algebra from \cite{LYY,LYY2}. In Section 3, we study the Griess algebra generated by $9$ Ising vectors such that the subgroup generated by the corresponding Miyamoto involutions has the shape $3^2{:}2$ and any two Ising vectors generate a 3C subVOA $U_{3C}$. We show that such a Griess algebra is uniquely determined, up to isomorphism.
We also study the subVOA generated by these Ising vectors. We show that such a VOA has central charge $4$ and it has a full subVOA isomorphic to $L(\frac{1}2,0)\otimes L(\frac{21}{22}, 0) \otimes L(\frac{28}{11},0).$ Some highest weight vectors are also determined. In Section 4, we give an explicit construction of a VOA $W$ in the lattice VOA $V_{E_8^3}$ satisfying our hypotheses. In Section 5, we show that the VOA $W$ is isomorphic to the commutant subVOA $\mathrm{Com}_{V_{E_8^3}}( L_{\widehat{sl}_9(\mathbb{C})}(3,0))$ using the theory of parafermion VOAs. The decomposition of $W$ as a sum of irreducible modules of the parafermion VOA $K(sl_3(\mathbb{C}),9)$ is also obtained. \section{Preliminaries} First we recall some definitions and review several basic facts. \begin{df} \label{A-bilinear-form} Let $V$ be a VOA. A bilinear form $\langle\!\langle\cdot,\cdot\rangle\!\rangle$ on $V$ is said to be \emph{invariant} (or \emph{contragredient}, see \cite{FHL}) if \begin{equation} \langle\!\langle Y(a,z)u,v\rangle\!\rangle=\langle\!\langle u,Y(e^{zL(1)}(-z^{-2})^{L(0)}a,z^{-1})v\rangle\!\rangle\label{eq:invariant} \end{equation} for any $a$, $u$, $v$ $\in$ $V$. \end{df} \begin{df} Let $V$ be a VOA over $\mathbb{C}$. A \emph{real form} of $V$ is a subVOA $V_\mathbb{R}$ (with the same vacuum and Virasoro element) of $V$ over $\mathbb{R}$ such that $V=V_\mathbb{R}\otimes \mathbb{C}$. A real form $V_\mathbb{R}$ is said to be \emph{a positive definite real form} if the invariant form $\langle\!\langle\cdot,\cdot\rangle\!\rangle$ restricted to $V_\mathbb{R}$ is real valued and positive definite. \end{df} \begin{df} \label{vvc} Let $V$ be a VOA. An element $v\in V_2$ is called a \emph{simple Virasoro vector} of central charge $c$ if the subVOA $\mathrm{Vir}(v)$ generated by $v$ is isomorphic to the simple Virasoro VOA $L(c,0)$ of central charge $c$.
\end{df} \begin{df} \label{Ising vector} A simple Virasoro vector of central charge $1/2$ is called an \emph{Ising vector}. \end{df} \begin{rem} \label{3irr module} It is well known that the VOA $L(\frac{1}{2},0)$ is rational and it has exactly 3 irreducible modules $L(\frac{1}{2},0)$, $L(\frac{1}{2},\frac{1}{2})$, and $L(\frac{1}{2},\frac{1}{16})$ (cf. \cite{dmz,miy}). \end{rem} \begin{rem}\label{Veh} Let $V$ be a VOA and let $e\in V$ be an Ising vector. Then we have the decomposition \[ V=V_{e}(0)\oplus V_{e}(\frac{1}{2})\oplus V_{e}(\frac{1}{16}), \] where $V_{e}(h)$ denotes the sum of all irreducible $\mathrm{Vir}(e)$-submodules of $V$ isomorphic to $L(\frac{1}{2},h)$ for $h\in\{0,\frac{1}{2},\frac{1}{16}\}$. \end{rem} \begin{thm}[\cite{miy}]\label{taue} The linear map $\tau_{e}:\, V\rightarrow V$ defined by \begin{equation} \tau_{e}:=\begin{cases} 1 & \text{on \ensuremath{V_{e}(0)\oplus V_{e}(\frac{1}{2})}},\\ -1 & \text{on \ensuremath{V_{e}(\frac{1}{16})}}, \end{cases}\label{eq:taue} \end{equation} is an automorphism of $V$. On the fixed point subspace $ V^{\tau_{e}}=V_{e}(0)\oplus V_{e}(1/2)$, the linear map $\sigma_{e}:\, V^{\tau_{e}}\rightarrow V^{\tau_{e}}$ defined by \begin{equation} \sigma_{e}:=\begin{cases} 1 & \text{on \ensuremath{V_{e}(0)}},\\ -1 & \text{on \ensuremath{V_{e}(\frac{1}{2})}}, \end{cases}\label{eq:-1} \end{equation} is an automorphism of $V^{\tau_{e}}$. \end{thm} \subsection{3C-algebra} Next we recall the properties of the $3C$-algebra $U_{3C}$ from \cite{LYY2} (see also \cite{Sa}). The following facts can be found in \cite[Section 3.9]{LYY2}. \begin{lem}\label{eiej} Let $U=U_{3C}$ be the $3C$-algebra defined as in \cite{LYY2}. Then \begin{enumerate} \item $U_1=0$ and $U$ is generated by its weight 2 subspace $U_2$ as a VOA. \item $\dim U_2=3$ and it is spanned by 3 Ising vectors. \item There exist exactly 3 Ising vectors in $U_2$, say, $ e^0,e^1, e^2$.
Moreover, we have \[ (e^i)_1 (e^j) =\frac{1}{32}( e^i+e^j -e^k), \quad \text{ and } \quad \langle e^i, e^j\rangle =\frac{1}{2^8} \] for $i\neq j$ and $\{i,j,k\}=\{0,1,2\}$. \item The Virasoro element of $U$ is given by $$\frac{32}{33}(e^0+e^1+e^2).$$ \item Let $a=\frac{32}{33}(e^0+e^1+e^2) -e^0$. Then $a$ is a simple Virasoro vector of central charge $21/22$. Moreover, the subVOA generated by $e^0$ and $a$ is isomorphic to $L(\frac{1}2,0)\otimes L(\frac{21}{22},0)$. \end{enumerate} \end{lem} \section{Griess algebras generated by Ising vectors} In this section, we study Griess algebras generated by Ising vectors in a moonshine type VOA $V$ over $\mathbb{R}$ such that the invariant bilinear form is positive definite. We assume that (1) the subgroup generated by the corresponding Miyamoto involutions has the shape $3^2{:}2$; and (2) any two of the Ising vectors generate a $3C$-algebra. We shall show that such a Griess algebra is unique, up to isomorphism. We also show that the subVOA generated by these Ising vectors has central charge $4$ and it has a full subVOA isomorphic to $L(\frac{1}2,0)\otimes L(\frac{21}{22}, 0) \otimes L(\frac{28}{11},0).$ First we note that the group $3^2{:}2$ has exactly 9 involutions and all of them are conjugate to each other. Consider a set of 9 Ising vectors $\{ e^0, e^1, \dots, e^8\} \subset V$ such that $\langle e^{i}, e^{j}\rangle=\frac{1}{2^8}$ and $\tau_{e^i}\tau_{e^j}$ has order $3$ for any $i\neq j$; this means that the Griess subalgebra generated by $e^{i}$ and $e^{j}$ is isomorphic to $\mathcal{G}U_{3C}$. We also assume that the subgroup $G$ generated by their Miyamoto involutions has the shape $3^2{:}2$. \begin{nota}\label{eijGW} Let $H=O_3(G)\cong 3^2$, the largest normal 3-subgroup of $G$. Let $g$ and $h$ be generators of $H$. Then $g$ and $h$ have order 3 and $\langle g,h\rangle= H\cong 3^2$. Let $e=e^0$ and denote \[ e^{i,j} := g^ih^j e \quad \text{ for all } 0\leq i,j \leq 2.
\] Then $\{e^{i,j} \mid 0\leq i,j \leq 2\}= \{ e^0, e^1, \dots, e^8\}$. We also denote the Griess subalgebra and the subVOA generated by $\{e^{i,j}\mid 0\leq i,j \leq 2\}$ by $\mathcal{G}$ and $W$, respectively. \end{nota} The following lemma shows the uniqueness of the Griess algebra of $3C$-pure type. \begin{lem} The Griess subalgebra $\mathcal{G}$ is spanned by $\{e^{i,j}\mid 0\leq i,j \leq 2\}$ and $\dim \mathcal{G}=9$. \end{lem} \begin{proof} By Lemma \ref{eiej}, we have \[ e^{i,j}\cdot e^{i',j'} = \begin{cases} \frac{1}{32}( e^{i,j} + e^{i',j'}- e^{i'', j''}), & \text{ if } (i,j)\neq (i',j'),\\ 2e^{i,j} &\text{ if } (i,j)= (i',j'), \end{cases} \] where $i+i'+i''\equiv j+j'+j''\equiv 0 \pmod 3$. Therefore, $\mathrm{span}\{e^{i,j}\mid 0\leq i,j \leq 2\}$ is closed under the Griess algebra product and $\mathcal{G}=\mathrm{span}\{e^{i,j}\mid 0\leq i,j \leq 2\}$. By our assumption, we have the Gram matrix \[ \begin{pmatrix} \langle e^{i,j}, e^{i',j'}\rangle \end{pmatrix} _{0\leq i,j,i',j' \leq 2} = \begin{pmatrix} \frac{1}4 & \frac{1}{2^8} & \dots & \frac{1}{2^8}\\ \frac{1}{2^8} & \frac{1}4 & \dots & \frac{1}{2^8}\\ \vdots & \vdots & \ddots & \vdots \\ \frac{1}{2^8} & \frac{1}{2^8} & \dots & \frac{1}4 \end{pmatrix}. \] It has rank $9$; hence $\{e^{i,j}\mid 0\leq i,j \leq 2\}$ is a linearly independent set and $\dim \mathcal{G}=9$. \end{proof} Next, we shall give more details about the VOA generated by $\{ e^{i,j}\}$. \begin{lem}\label{w} Let $$\omega:= \frac{8}9 \sum_{0\leq i,j \leq 2} e^{i,j}.$$ Then $\omega$ is a Virasoro vector of central charge $4$. Moreover, $\omega\cdot e^{i,j} =e^{i,j}\cdot \omega =2e^{i,j}$ for any $0\leq i,j \leq 2$. In other words, ${\omega}/2$ is the identity element in $\mathcal{G}$.
\end{lem} \begin{proof} By Lemma \ref{eiej}, we have \[ \begin{split} \omega\cdot \omega & =\left(\frac{8}{9}\right)^2\left( 2\sum_{0\leq i,j \leq 2} e^{i,j} + \sum_{0\leq i,j \leq 2} \left(\sum_{0\leq i',j' \leq 2 \atop {(i,j)\neq (i',j') \atop i+i'+i''\equiv j+j'+j''\equiv 0 \pmod 3} } \frac{1}{32}( e^{i,j} + e^{i',j'}- e^{i'', j''})\right) \right)\\ & =\left( \frac{8}{9}\right)^2\left( 2\sum_{0\leq i,j \leq 2} e^{i,j} + \sum_{0\leq i,j \leq 2} \frac{8}{32} e^{i,j} \right)\\ & = \left( \frac{8}{9}\right)^2 \cdot \frac{9}4 \sum_{0\leq i,j \leq 2} e^{i,j} = 2\omega \end{split} \] and \[ \langle \omega, \omega\rangle = \left(\frac{8}{9}\right)^2 \left( \frac{1}{4} \times 9 + \frac{1}{2^8} \times 72\right) = 2. \] Therefore, $\omega$ is a Virasoro vector of central charge $4$. That $\omega\cdot e^{i,j} =e^{i,j}\cdot \omega =2e^{i,j}$ for any $0\leq i,j \leq 2$ also follows easily from Lemma \ref{eiej}. \end{proof} \begin{lem}\label{aiaj} Let \[ \begin{split} a^1= & a= \frac{32}{33} (e^{0,0}+e^{0,1}+e^{0,2}) -e^{0,0},\\ a^2= & \frac{32}{33} (e^{0,0}+e^{1,0}+e^{2,0}) -e^{0,0},\\ a^3= & \frac{32}{33} (e^{0,0}+e^{1,1}+e^{2,2}) -e^{0,0},\\ a^4= & \frac{32}{33} (e^{0,0}+e^{1,2}+e^{2,1}) -e^{0,0}. \end{split} \] Then $a^1,a^2, a^3,a^4$ are Virasoro vectors of central charge $21/22$. Moreover, we have \[ a^i\cdot a^j = \frac{1}{33}( 2a^i+2a^j- a^k-a^\ell), \] for $i\neq j$ and $\{i,j,k,\ell\}=\{1,2,3,4\}$. \end{lem} \begin{proof} We only prove the case for $i=1$ and $j=2$. The other cases can be proved similarly.
By Lemma \ref{eiej}, we have \[ \begin{split} (e^{0,0}+e^{0,1}+e^{0,2})\cdot e^{0,0} &= 2e^{0,0}+\frac{1}{32} ( e^{0,0}+e^{0,1}-e^{0,2}+ e^{0,0}+e^{0,2} - e^{0,1}) = \frac{33}{16}e^{0,0}\\ (e^{0,0}+e^{0,1}+e^{0,2})\cdot e^{1,0} &= \frac{1}{32} ( e^{0,0}+e^{1,0}-e^{2,0}+ e^{0,1}+e^{1,0}-e^{2,2}+ e^{0,2}+e^{1,0}-e^{2,1})\\ (e^{0,0}+e^{0,1}+e^{0,2})\cdot e^{2,0} &= \frac{1}{32} ( e^{0,0}+e^{2,0}-e^{1,0}+ e^{0,1}+e^{2,0}-e^{1,2}+ e^{0,2}+e^{2,0}-e^{1,1}). \end{split} \] Thus, \[ \begin{split} &\qquad a^1\cdot a^2\\ & =\left[ \frac{32}{33} (e^{0,0}+e^{0,1}+e^{0,2}) -e^{0,0}\right] \cdot \left[ \frac{32}{33} (e^{0,0}+e^{1,0}+e^{2,0}) -e^{0,0}\right]\\ &= \left(\frac{32}{33}\right)^2 \left[ \frac{33}{16} e^{0,0} +\frac{1}{32} \left ( 2(e^{0,0}+e^{0,1}+e^{0,2} + e^{1,0}+e^{2,0}) - (e^{1,1}+ e^{1,2}+e^{2,1}+e^{2,2}) \right)\right]\\ & \quad - 4e^{0,0}+ 2e^{0,0} \\ &= \frac{32}{33^2} \left[ 2(e^{0,1}+e^{0,2} + e^{1,0}+e^{2,0}) - (e^{1,1}+ e^{1,2}+e^{2,1}+e^{2,2})\right] - \frac{2}{33^2}e^{0,0}\\ &= \frac{1}{33}( 2a^1+2a^2- a^3-a^4) \end{split} \] as desired. \end{proof} \begin{lem} Let \[ b^1= \frac{8}9 \sum_{0\leq i,j \leq 2} e^{i,j} -\frac{32}{33} (e^{0,0}+e^{0,1}+e^{0,2}). \] Then $b^1$ is a Virasoro vector of central charge $28/11$. Moreover, $e^{0,0}$, $a^1$ and $b^1$ are mutually orthogonal and $\omega= e^{0,0}+ a^1 +b^1$. Therefore, $W$ has a full subVOA isomorphic to the tensor product of Virasoro VOAs $$L(\frac{1}2,0)\otimes L(\frac{21}{22}, 0) \otimes L(\frac{28}{11},0).$$ \end{lem} \begin{proof} It follows from (4) and (5) of Lemma \ref{eiej} and Lemma \ref{w}. \end{proof} \begin{lem} For any $i,j\in \{2,3,4\}, i\neq j$, the element $a^i-a^j$ is a highest weight vector of weight $(0,1/11, 21/11)$ with respect to $\mathrm{Vir}(e^{0,0})\otimes \mathrm{Vir}(a^1)\otimes \mathrm{Vir}(b^1)$.
\end{lem} \begin{proof} By Lemma \ref{aiaj}, we have \[ a^1_1(a^i-a^j) =\frac{1}{33}[ (2a^1+2a^i -a^j -a^k) - (2a^1+2a^j -a^i -a^k)] =\frac{1}{11}(a^i-a^j), \] where $\{i,j,k\}=\{2,3,4\}$. Since $e^{0,0}_1 a^i=0$ and $\omega_1 a^i=2 a^i$ for all $i$, $a^i-a^j$ is a highest weight vector of weight $(0,1/11, 21/11)$ with respect to $\mathrm{Vir}(e^{0,0})\otimes \mathrm{Vir}(a^1)\otimes \mathrm{Vir}(b^1)$. \end{proof} \begin{lem}\label{1over16} The vector $e^{0,1}-e^{0,2}$ is a highest weight vector of weight $(1/16,31/16,0)$ with respect to $\mathrm{Vir}(e^{0,0})\otimes \mathrm{Vir}(a^1)\otimes \mathrm{Vir}(b^1)$. \end{lem} \begin{proof} By direct calculations, we have \[ e^{0,0}_1(e^{0,1}-e^{0,2}) =\frac{1}{32}[( e^{0,0} +e^{0,1}-e^{0,2}) - ( e^{0,0} +e^{0,2}-e^{0,1})]= \frac{1}{16} (e^{0,1}-e^{0,2}) \] and \[ \frac{32}{33}( e^{0,0} +e^{0,1}+e^{0,2})_1 (e^{0,1}-e^{0,2}) = 2 (e^{0,1}-e^{0,2}). \] Since $a^1= \frac{32}{33}( e^{0,0} +e^{0,1}+e^{0,2})-e^{0,0}$ and $b^1= \omega - \frac{32}{33}( e^{0,0} +e^{0,1}+e^{0,2})$, we have $$a^1_1(e^{0,1}-e^{0,2})= \frac{31}{16}(e^{0,1}-e^{0,2})\quad \text{ and }\quad b^1_1 (e^{0,1}-e^{0,2})=0$$ as desired. \end{proof} \begin{lem} With respect to $\mathrm{Vir}(e^{0,0})\otimes \mathrm{Vir}(a^1)\otimes \mathrm{Vir}(b^1)$, $(e^{1,0}+e^{1,1}+e^{1,2})- (e^{2,0}+e^{2,1}+e^{2,2})$ is a highest weight vector of weight $(1/16, 21/176, 20/11)$. On the other hand, $(e^{1,1} -e^{2,2})- (e^{1,2}-e^{2,1})$ and $(e^{1,0} -e^{2,0})- (e^{1,1}-e^{2,2})$ are highest weight vectors of weight $(1/16, 5/176, 21/11)$. \end{lem} \begin{proof} By the same calculations as in Lemma \ref{1over16}, $(e^{1,0}- e^{2,0})$, $(e^{1,1} -e^{2,2})$ and $(e^{1,2}-e^{2,1})$ are $1/16$-eigenvectors of $e^{0,0}_1$. By Lemma \ref{eiej}, we also have \[ \frac{32}{33}(e^{0,0}+e^{0,1}+e^{0,2})_1( e^{1,1}- e^{2,2}) = \frac{1}{33}(4( e^{1,1}- e^{2,2})+(e^{1,0}-e^{2,0})+(e^{1,2}-e^{2,1})). \] Let $v=(e^{1,0}+e^{1,1}+e^{1,2})- (e^{2,0}+e^{2,1}+e^{2,2})$.
Then \[ \frac{32}{33}(e^{0,0}+e^{0,1}+e^{0,2})_1 v = \frac{1}{33} (4+1+1)v=\frac{2}{11}v. \] Thus, $a_1^1 v= (\frac{2}{11} -\frac{1}{16})v =\frac{21}{176} v$ and $b^1_1v = (2- \frac{2}{11}) v=\frac{20}{11} v$. Moreover, \[ \begin{split} &\frac{32}{33}(e^{0,0}+e^{0,1}+e^{0,2})_1( (e^{1,1}- e^{2,2}) - (e^{1,2}-e^{2,1})) \\ =&\frac{1}{33} (4-1) ( (e^{1,1}- e^{2,2}) - (e^{1,2}-e^{2,1}))\\ = &\frac{1}{11} ( (e^{1,1}- e^{2,2}) - (e^{1,2}-e^{2,1})). \end{split} \] Thus, we have $$a_1^1 ( (e^{1,1}- e^{2,2}) - (e^{1,2}-e^{2,1}))= \frac{5}{176} ( (e^{1,1}- e^{2,2}) - (e^{1,2}-e^{2,1}))$$ and $$b^1_1 ( (e^{1,1}- e^{2,2}) - (e^{1,2}-e^{2,1})) = \frac{21}{11} ( (e^{1,1}- e^{2,2}) - (e^{1,2}-e^{2,1})).$$ The remaining cases can be proved similarly. \end{proof} \section{Lattice VOA $V_{E_8\perp E_8\perp E_8}$} In this section, we shall construct explicitly a VOA $W$ satisfying the hypotheses of Section 3 inside the lattice VOA $V_{E_8\perp E_8\perp E_8}$. Our notation for the lattice vertex operator algebra \begin{equation}\label{VL} V_L = M(1) \otimes \mathbb{C}\{L\} \end{equation} associated with a positive definite even lattice $L$ is standard \cite{FLM}. In particular, ${\mathfrak h}=\mathbb{C}\otimes_{\mathbb{Z}} L$ is an abelian Lie algebra and we extend the bilinear form to ${\mathfrak h}$ by $\mathbb{C}$-linearity. Also, $\hat {\mathfrak h}={\mathfrak h}\otimes \mathbb{C}[t,t^{-1}]\oplus \mathbb{C} k$ is the corresponding affine algebra and $\mathbb{C} k$ is the 1-dimensional center of $\hat{\mathfrak{h}}$.
The subspace $M(1)=\mathbb{C}[\alpha_i(n)\mid 1\leq i\leq d, n<0]$ for a basis $\{\alpha_1, \dots,\alpha_d\}$ of $\mathfrak{h}$, where $\alpha(n)=\alpha\otimes t^n,$ is the unique irreducible $\hat{\mathfrak h}$-module such that $\alpha(n)\cdot 1=0$ for all $\alpha\in {\mathfrak h}$ and $n$ nonnegative, and $k=1.$ Also, $\mathbb{C}\{L\}=\mathrm{span}\{e^{\beta}\mid \beta\in L\}$ is the twisted group algebra of the additive group $L$ such that $e^\beta e^\alpha=(-1)^{\langle \alpha, \beta\rangle} e^\alpha e^\beta$ for any $\alpha, \beta\in L$. The vacuum vector $\mathbbm{1}$ of $V_L$ is $1\otimes e^0$ and the Virasoro element $\omega_L$ is $\frac{1}{2}\sum_{i=1}^d\beta_i(-1)^2\cdot \mathbbm{1}$, where $\{\beta_1,\dots, \beta_d\}$ is an orthonormal basis of ${\mathfrak h}.$ For the explicit definition of the corresponding vertex operators, we refer to \cite{FLM}. \begin{df}\label{tensor} Let $A$ and $B$ be integral lattices with the inner products $\langle \ , \ \rangle_A$ and $\langle \ , \ \rangle_B$, respectively. {\it The tensor product of the lattices} $A$ and $B$ is defined to be the integral lattice which is isomorphic to $A\otimes_\mathbb{Z} B$ as a $\mathbb{Z}$-module and has the inner product given by \[ \langle \alpha\otimes \beta, \alpha' \otimes \beta'\rangle = \langle\alpha,\alpha'\rangle_A \cdot \langle \beta,\beta'\rangle_B, \quad \text{ for any } \alpha,\alpha'\in A,\text{ and } \beta,\beta'\in B. \] We simply denote the tensor product of the lattices $A$ and $B$ by $A\otimes B$. \end{df} Now let $L=E_8\perp E_8\perp E_8$ be the orthogonal sum of 3 copies of the root lattice of type $E_8$. Set \begin{equation}\label{MN} \begin{split} M& =\{ (\alpha, -\alpha,0) \mid \alpha\in E_8\} <L,\quad \text{ and }\\ N&=\{ (0, \alpha, -\alpha) \mid \alpha\in E_8\} <L. \end{split} \end{equation} Then $M\cong N \cong \sqrt{2}E_8$ and $M+N\cong A_2\otimes E_8$ (see \cite{GL}).
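As a quick numerical sanity check (not part of the original argument), the isometries $M\cong\sqrt{2}E_8$ and $M+N\cong A_2\otimes E_8$ can be verified on the level of Gram matrices. The sketch below assumes the standard Bourbaki simple root basis of $E_8$:

```python
import numpy as np

# Simple roots of E_8 (Bourbaki numbering) as rows of an 8x8 matrix.
B = np.array([
    [ 0.5, -0.5, -0.5, -0.5, -0.5, -0.5, -0.5,  0.5],
    [ 1,  1,  0,  0,  0,  0,  0,  0],
    [-1,  1,  0,  0,  0,  0,  0,  0],
    [ 0, -1,  1,  0,  0,  0,  0,  0],
    [ 0,  0, -1,  1,  0,  0,  0,  0],
    [ 0,  0,  0, -1,  1,  0,  0,  0],
    [ 0,  0,  0,  0, -1,  1,  0,  0],
    [ 0,  0,  0,  0,  0, -1,  1,  0],
])
G = B @ B.T                        # Gram matrix of E_8 (unimodular: det = 1)
assert abs(np.linalg.det(G) - 1) < 1e-9

Z = np.zeros((8, 8))
M = np.hstack([B, -B, Z])          # generators of M = {(a, -a, 0) : a in E_8}
N = np.hstack([Z, B, -B])          # generators of N = {(0, a, -a) : a in E_8}

# M has Gram matrix 2*G, i.e. M is isometric to sqrt(2)E_8.
assert np.allclose(M @ M.T, 2 * G)

# The Gram matrix of M + N is Gram(A_2) (Kronecker) Gram(E_8),
# matching M + N isometric to A_2 tensor E_8.
MN = np.vstack([M, N])
A2 = np.array([[2, -1], [-1, 2]])  # Gram matrix of the root lattice A_2
assert np.allclose(MN @ MN.T, np.kron(A2, G))
```

The Kronecker-product identity is exactly Definition \ref{tensor} applied to the chosen bases of $A_2$ and $E_8$.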
Note that there is a third $\sqrt{2}E_8$-sublattice \[ \tilde{N} =\{ (\alpha,0, -\alpha)\mid \alpha \in E_8\} < M+N. \] We shall fix a (bilinear) 2-cocycle $\varepsilon_0: E_8\times E_8 \to \mathbb{Z}_2$ such that \begin{equation}\label{epsilon0} \begin{split} \varepsilon_0(\alpha, \alpha) \equiv \frac{1}{2}\langle \alpha, \alpha\rangle \mod 2,\\ \text{and}\quad \varepsilon_0(\alpha, \beta) - \varepsilon_0(\beta, \alpha) \equiv \langle \alpha, \beta\rangle \mod 2, \end{split} \end{equation} for all $\alpha, \beta \in E_8$. Note that such a $2$-cocycle exists (cf. \cite[(6.1.27)-(6.1.29)]{FLM}). Moreover, $e^\alpha e^{-\alpha} = - e^0$ for any $\alpha \in E_8$ such that $ \langle \alpha, \alpha \rangle =2$. We shall extend $\varepsilon_0$ to $L$ by defining \[ \varepsilon_0\Big( (\alpha,\alpha', \alpha''), ( \beta, \beta', \beta'')\Big)= \varepsilon_0(\alpha, \beta )+ \varepsilon_0(\alpha', \beta') +\varepsilon_0(\alpha'', \beta''). \] It is easy to check by direct calculations that $\varepsilon_0$ is trivial on $M$, $N$ and $\tilde{N}$. Consider the lattice VOA \[ V_L\cong V_{E_8}\otimes V_{E_8}\otimes V_{E_8}. \] \begin{df}\label{rho} Let ${\bf a} $ be an element of $E_8$ such that \[ K:= \{\beta\in E_8\mid \langle \beta, {\bf a} \rangle \in 3\mathbb{Z}\} \cong A_8. \] Set $\tilde{{\bf a}}=({\bf a}, -{\bf a}, 0)$ and define an automorphism $\rho$ of $V_L$ by \[ \rho = \exp\left( \frac{2\pi i}3 \tilde{{\bf a}}(0)\right). \] Then $\rho$ has order $3$ and the fixed point subspace $V_M^\rho \cong V_{\sqrt{2}A_8}$. \end{df} \begin{nota}\label{efg} Let $M, N$ and $\rho$ be defined as above. Set \[ \begin{split} e& := e_M =\frac{1}{16}\omega_M + \frac{1}{32} \sum_{\alpha\in M(4)} e^\alpha,\\ f& := e_N =\frac{1}{16}\omega_N + \frac{1}{32} \sum_{\alpha\in N(4)} e^\alpha,\\ e_{ \tilde{N}} & :=\frac{1}{16}\omega_{ \tilde{N}} + \frac{1}{32} \sum_{\alpha\in { \tilde{N}} (4)} e^\alpha, \quad \text{ and }\\ e'&:= \rho(e).
\end{split} \] It is shown in \cite{DLMN} that $e, f$ and $e_{\tilde{N}}$ are Ising vectors and hence $e'= \rho(e)$ is also an Ising vector (see also \cite{LYY,LYY2}). \end{nota} The following lemma can be proved by direct calculations (see \cite{GL,LYY,LYY2}). \begin{lem} We have $\langle e, f\rangle =\langle e, e'\rangle= \langle f,e'\rangle = 1/{2^8}$. Moreover, the subVOAs $\mathrm{VOA}(e,f)$, $\mathrm{VOA}(e,e')$ and $\mathrm{VOA}(f,e')$ generated by $\{e,f\}$, $\{e,e'\}$ and $\{f,e'\}$, respectively, are isomorphic to the $3C$-algebra $U_{3C}$. We also have $ e_M \cdot e_N = \frac{1}{32} ( e_M + e_N - e_{ \tilde{N} })$, and hence $ \tau_e (f) = e_{\tilde{N}}$. \end{lem} \begin{nota}\label{Nota:W} Let $W:=\mathrm{VOA}(e,f,e')$ be the subVOA generated by $e, f$ and $e'$. We also denote $h=\tau_e\tau_f$ and $g=\tau_e\tau_{e'}$. Then $g$ and $h$ both have order $3$. Note also that $e,f,e'\in V_{M+N}$ and thus $W< V_{M+N} \cong V_{A_2\otimes E_8}$. \end{nota} \begin{lem}\label{ghcom} As automorphisms of $W$, $g$ commutes with $h$. \end{lem} \begin{proof} Recall that $g=\tau_e\tau_{e'} = \rho$ on $V_{L}$ (see \cite{LYY}). Moreover, $h(e) =f= e_N$ and $h^2(e)= e_{\tilde{N}}$. By a direct calculation, we have \[ hg(e)= hgh^{-1} h(e) = \rho^h (e_N), \] where $\rho^h = h\rho h^{-1}= \exp\left(\frac{2 \pi i}3 (0 , {\bf a}, -{\bf a})(0)\right)$. Since $\langle (0, \beta, -\beta), (0 , {\bf a}, -{\bf a})\rangle = 2\langle \beta, {\bf a}\rangle$ and $\langle (0, \beta, -\beta), ({\bf a}, -{\bf a},0)\rangle = -\langle \beta, {\bf a}\rangle$, we have \[ gh(e) =\rho(e_N)=\rho^h(e_N) =hg(e). \] Similarly, we have \[ hg(e') = hg^{2} (e) = (\rho^h)^{2} (e_N), \quad gh(e') =ghg(e) = g (\rho^h(e_N))= (\rho^h)^{2} (e_N) \] and \[ hg(f)=hgh(e)=(hgh^{2})h^2(e)= \rho^h (e_{\tilde{N}}) , \quad gh(f)=g(e_{\tilde{N}})=\rho(e_{\tilde{N}}). \] Hence $gh=hg$ on $W$. \end{proof} \begin{nota} For any $0 \leq i,j\leq 2$, denote \[ e^{i,j}= g^i h^j (e).
\] In particular, we have \begin{align*} &e^{0,0}= e_M, && e^{0,1}=e_N, && e^{0,2}= e_{\tilde{N}},\\ &e^{1,0}= \rho e_M, && e^{1,1}= \rho e_N, && e^{1,2}= \rho e_{\tilde{N}},\\ &e^{2,0}= \rho^2 e_M, && e^{2,1}=\rho^2e_N, && e^{2,2}= \rho^2 e_{\tilde{N}}. \end{align*} \end{nota} \begin{rem} By the same methods as in \cite{GL,LYY}, it is also quite straightforward to verify that $\langle e^{i,j}, e^{i',j'}\rangle=\frac{1}{2^8}$ whenever $(i,j)\neq (i',j')$. \end{rem} \begin{lem} Let $G$ be the subgroup of $\mathrm{Aut}(W)$ generated by $\tau_e$, $\tau_f$ and $\tau_{e'}$. Then $G$ has the shape $3^2{:}2$. \end{lem} \begin{proof} By Lemma \ref{ghcom}, we know that the group $ \langle g, h \rangle$ generated by $ g$ and $h$ is isomorphic to $ 3^2$. We claim that $ G \cong \langle g, h \rangle : \langle \tau_e \rangle$. For this we need to prove: 1) $ \langle g, h \rangle $ is normal in $ G$;\quad 2) $ G = \langle g, h \rangle \langle \tau_e \rangle$; and\quad 3) $ \langle g, h \rangle \cap \langle \tau_e \rangle =1$. \\ It is easy to prove 2) and 3). For 1), by Lemma \ref{ghcom} we have $ gh = hg$ and hence $ \tau_f \tau_e \tau_{ e'} = \tau_{ e'} \tau_e \tau_f $. Thus $ \tau_f g \tau_f = \tau_f \tau_e \tau_{ e'} \tau_f = \tau_{ e'} \tau_e \tau_f^2 = \tau_{ e'} \tau_e = g^2 \in \langle g, h \rangle$. A similar computation shows that $ \langle g, h \rangle $ is normal in $ G$. \end{proof} \section{Affine VOA $L_{\widehat{sl}_9(\mathbb{C})}(3,0)$} \label{sec:5} Recall from Definition \ref{rho} that the sublattice \[ K= \{\beta\in E_8\mid \langle \beta, {\bf a}\rangle \in 3\mathbb{Z}\} \cong A_8. \] Thus, we have an embedding \[ V_{K\perp K\perp K}\cong V_K\otimes V_K\otimes V_K \hookrightarrow V_L. \] It is also well known\,\cite{FLM} that $V_K\cong V_{A_8}$ is an irreducible level 1 representation of the affine Lie algebra $\widehat{sl}_9(\mathbb{C})$.
Moreover, the weight 1 subspace $(V_K)_1$ is a simple Lie algebra isomorphic to $sl_9(\mathbb{C})$. Let $\eta_i:K\to K\perp K\perp K$, $i=1,2,3$, be the embedding of $K$ into the $i$-th direct summand of $K\perp K\perp K$, i.e., \[ \eta_1(\alpha)=(\alpha,0,0),\quad \eta_2(\alpha)=(0, \alpha,0), \quad \eta_3(\alpha)=(0, 0,\alpha), \] for any $\alpha\in K$. \begin{nota}\label{hef} For any $\alpha\in K(2):=\{ \alpha\in K\mid \langle \alpha, \alpha\rangle =2\}$, set \[ \begin{split} H_\alpha &= (\alpha, \alpha, \alpha)(-1) \cdot \mathbbm{1}, \\ E_\alpha &= e^{\eta_1(\alpha)} + e^{\eta_2(\alpha)} +e^{\eta_3(\alpha)}. \end{split} \] Then $\{H_\alpha, E_\alpha\mid \alpha\in K(2)\}$ generates a subVOA isomorphic to the affine VOA $L_{\widehat{sl}_9(\mathbb{C})}(3,0)$ in $V_L$ (see \cite[Proposition 13.1]{DL} and \cite{FZ}). Moreover, the Virasoro element of $L_{\widehat{sl}_9(\mathbb{C})}(3,0)$ is given by \[ \Omega= \frac{1}{2(3+9)} \left[ \sum_{k=1}^8 (h^k, h^k,h^k)(-1)^2\cdot \mathbbm{1} + \sum_{\alpha\in K(2)} (E_\alpha)_{-1}(-E_{-\alpha})\right], \] where $\{h^1, \dots, h^8\}$ is an orthonormal basis of $K\otimes \mathbb{C}= E_8\otimes \mathbb{C}$. Note that the dual vector of $ E_\alpha$ is $-E_{-\alpha}$. We also denote $E=\{(\alpha, \alpha, \alpha) \mid \alpha\in E_8\}< L$. Note that \[ E=\mathrm{Ann}_{L}(M+N) :=\{ \beta\in L\mid \langle \beta, \beta' \rangle =0 \text{ for all } \beta'\in M+N\}. \] \end{nota} \begin{lem} Denote the Virasoro element of a lattice VOA $V_S$ by $\omega_S$. Then we have \[ \begin{split} \Omega&=\omega_E +\frac{3}4 \omega_{M+N} -\frac{1}{12} \sum_{\alpha\in K(2)\atop 1\leq i,j\leq 3, i\neq j} e^{\eta_i(\alpha)-\eta_j(\alpha)}\\ & = \omega_L -\frac{8}9 \sum_{ 0\leq i,j\leq2} e^{i,j}. \end{split} \] \end{lem} \begin{proof} Let $\{h^1, \dots, h^8\}$ be an orthonormal basis of $A_8\otimes \mathbb{C}= E_8\otimes \mathbb{C}$.
Then \[ \begin{split} \Omega= & \ \frac{1}{2(3+9)} \left[ \sum_{k=1}^8 (h^k, h^k,h^k)(-1)^2\cdot \mathbbm{1}\right.\\ &\left. \ - \sum_{\alpha\in K(2)} (e^{\eta_1(\alpha)} +e^{\eta_2(\alpha)} +e^{\eta_3(\alpha)})_{-1} ( e^{-\eta_1(\alpha)}+e^{-\eta_2(\alpha)}+e^{-\eta_3(\alpha)})\right ],\\ =& \frac{1}{24} \left[ 6\omega_E + \sum_{\alpha\in K(2)} \sum_{i=1}^3 \frac{1}2 ( \eta_i(\alpha)(-2)\cdot \mathbbm{1} + \eta_i(\alpha)(-1)^2\cdot \mathbbm{1}) \right. \\ & \left. \ - 2\sum_{\alpha\in K(2) \atop 1\leq i,j\leq 3, i\neq j} e^{\eta_i(\alpha)-\eta_j(\alpha)}\right],\\ =& \frac{1}4\omega_E + \frac{18}{24} \omega_L -\frac{1}{12} \sum_{\alpha\in K(2)\atop 1\leq i,j\leq 3, i\neq j} e^{\eta_i(\alpha)-\eta_j(\alpha)}. \end{split} \] Since $\omega_L=\omega_{M+N}+\omega_E$, we have \[ \Omega =\omega_E +\frac{3}4 \omega_{M+N} -\frac{1}{12} \sum_{\alpha\in K(2)\atop 1\leq i,j\leq 3, i\neq j} e^{\eta_i(\alpha)-\eta_j(\alpha)}. \] Now let $ \Delta^i := \{ \beta \in E_8(2) \mid \langle {\bf a}, \beta \rangle \equiv i \pmod 3\}$ for $ i = 0, 1$ and $ 2$. Note that $ \Delta^0 = K(2)$. Then we have \begin{align*} e^{0,0} = e_M = \frac{1}{ 16} \omega_M + \frac{1}{32} \sum_{ i =0}^2 \sum_{ \alpha \in \Delta^i } e^{( \alpha, - \alpha, 0)}, \\ e^{1,0} =\rho e_M = \frac{1}{ 16} \omega_M + \frac{1}{32}\sum_{ i =0}^2 \sum_{ \alpha \in \Delta^i} \xi^{2i} e^{( \alpha, - \alpha, 0)}, \\ e^{2,0} = \rho^2 e_M = \frac{1}{ 16} \omega_M + \frac{1}{32}\sum_{ i =0}^2 \sum_{ \alpha \in \Delta^i } \xi^{ i} e^{( \alpha, - \alpha, 0)}. \end{align*} Here $\xi = e^{2\pi i/3}$ denotes a primitive cube root of unity. Hence \[ \sum_{ i=0}^2 e^{i, 0} = ( 1 + \rho + \rho^2) e_M= \frac{3}{ 16} \omega_M + \frac{3}{32} \sum_{ \alpha \in K(2) } e^{( \alpha, - \alpha, 0)} .
\] A similar computation gives \begin{align*} \sum_{ 0 \le i, j \le 2} e^{i, j } = \frac{3}{ 16} ( \omega_M + \omega_N + \omega_ { \tilde{N}}) + \frac{3}{32}\sum_{\alpha\in K(2)\atop 1\leq i,j\leq 3, i\neq j} e^{\eta_i(\alpha)-\eta_j(\alpha)}. \end{align*} Recall that $M+N \cong A_2\otimes E_8$. It contains a full rank sublattice isometric to $(\sqrt{2}A_2)^8$ and hence $\omega _{M+N}$ is the sum of the conformal elements of the tensor copies of $V_{\sqrt{2}A_2}$ in $V_{\sqrt{2}A_2}^{\otimes 8}$. We also note that the conformal element of the lattice VOA $V_{\sqrt{2}A_2}$ is given by \[ \begin{split} \omega_{\sqrt{2}A_2} &= \frac{1}6 (\alpha_1(-1)^2+\alpha_2(-1)^2+\alpha_3(-1)^2)\cdot \mathbbm{1}\\ &= \frac{2}3\left( \frac{1}{2}(\frac{\alpha_1(-1)}{\sqrt{2}})^2+\frac{1}{2}(\frac{\alpha_2(-1)}{\sqrt{2}})^2 +\frac{1}{2}(\frac{\alpha_3(-1)}{\sqrt{2}})^2 \right)\cdot \mathbbm{1}, \end{split} \] where $\alpha_1, \alpha_2, \alpha_3$ are the positive roots of the root lattice of type $A_2$ \cite{DLMN}. Thus $ \omega_{ M +N } = \frac{2}{3} ( \omega_M + \omega_N + \omega_{ \tilde{N}} )$ and we get \[ \Omega = \omega_L -\frac{8}9 \sum_{ 0\leq i,j\leq2} e^{i,j}, \] as desired. \end{proof} \begin{lem}\label{coma8} For any $0 \leq i, j\leq 2$, we have $e^{i, j}\in \mathrm{Com}_{V_L}\left( L_{\widehat{sl}_9(\mathbb{C})}(3,0)\right)$. Hence $W\subset \mathrm{Com}_{V_L}\left( L_{\widehat{sl}_9(\mathbb{C})}(3,0)\right)$. \end{lem} \begin{proof} Since $e^{i,j}\in V_{M+N}$ and $E=\{(\alpha,\alpha, \alpha)\mid \alpha\in E_8\}$ is orthogonal to $M+N$, it is clear that $(H_\alpha)_n e^{i,j} =0 $ for all $n\geq 0$. It is also clear that $(E_\alpha)_n e^{i,j}=0 $ for any root $\alpha\in K$ and $n\geq 2$. Recall from \cite{FLM} that \[ Y(e^\alpha, z) = \exp\left(\sum_{n\in \mathbb{Z}^+} \frac{\alpha(-n)}{n} z^{n}\right) \exp\left(\sum_{n\in \mathbb{Z}^+} \frac{\alpha(n)}{-n} z^{-n}\right) e^\alpha z^\alpha. \] Now let $\sigma=(123)$ be a 3-cycle.
Then by a direct calculation, we have \[ \begin{split} &\ \ (E_\alpha)_1 e^{i,j} =(E_\alpha)_1 (\rho^ih^j e_M) \\ & =(e^{\eta_1(\alpha)} + e^{\eta_2(\alpha)} +e^{\eta_3(\alpha)})_1 \left( \frac{1}{16} \omega_{h^{j}(M)} +\frac{1}{32} \sum_{\beta\in \Delta^+(E_8)} \rho^i (e^{(\eta_{\sigma^{j}(1) } -\eta_{\sigma^{j}(2) })(\beta)} + e^{ -(\eta_{\sigma^{j}(1)} -\eta_{\sigma^{j}(2)})(\beta)} ) \right)\\ &= \frac{1}{16} \left(\langle \alpha, \alpha\rangle^2 \frac{1}8(e^{\eta_{\sigma^{j}(1)}(\alpha)}+e^{\eta_{\sigma^{j}(2)}(\alpha)})\right) + \frac{1}{32}\varepsilon(\alpha, -\alpha) (e^{\eta_{\sigma^{j}(1)}(\alpha)}+e^{\eta_{\sigma^{j}(2)}(\alpha)}) =0, \end{split} \] and \[ \begin{split} & \ \ (E_\alpha)_0 e^{i,j}\\ & =(e^{\eta_1(\alpha)} + e^{\eta_2(\alpha)} +e^{\eta_3(\alpha)})_0 \left( \frac{1}{16} \omega_{h^{j}(M)} +\frac{1}{32} \sum_{\beta\in \Delta^+(E_8)} \rho^i (e^{(\eta_{\sigma^{j}(1) } -\eta_{\sigma^{j}(2) })(\beta)} + e^{ -(\eta_{\sigma^{j}(1)} -\eta_{\sigma^{j}(2)})(\beta)} ) \right)\\ &= \frac{1}{16} \left(\langle \alpha, \alpha\rangle^2 \frac{1}8(\eta_{\sigma^{j}(1) }(\alpha)(-1)e^{\eta_{\sigma^{j}(1) }(\alpha)} + \eta_{\sigma^{j}(2)}(\alpha) (-1)e^{\eta_{\sigma^{j}(2)} (\alpha)})\right. \\ &\left. \quad - 2\langle \alpha, \alpha\rangle \frac{1}{8}\left((\eta_{\sigma^{j}(1) } -\eta_{\sigma^{j}(2) })(\alpha)(-1) e^{\eta_{\sigma^{j}(1) }(\alpha)} - (\eta_{\sigma^{j}(1) } -\eta_{\sigma^{j}(2) })(\alpha)(-1) e^{\eta_{\sigma^{j}(2) }(\alpha)}\right)\right)\\ & \quad + \frac{1}{32}\varepsilon(\alpha, -\alpha) (\eta_{\sigma^{j}(2) }(\alpha)(-1) e^{\eta_{\sigma^{j}(1) }(\alpha)} +\eta_{\sigma^{j}(1)}(\alpha)(-1) e^{\eta_{\sigma^{j}(2) }(\alpha)})\\ &=0 \end{split} \] for any root $\alpha\in K$. Therefore, $(E_\alpha)_n e^{i,j} =0$ for all $n\geq 0$. Since $L_{\widehat{sl}_9(\mathbb{C})}(3,0)$ is generated by the $E_\alpha$ and $H_\alpha$, we have the desired conclusion.
\end{proof} \begin{thm} Let $W$ be the subVOA generated by $e,f$ and $e'$ in $V_L$. Then $W_1=0$. \end{thm} \begin{proof} Since $h(-1)\cdot \mathbbm{1} \in L_{\widehat{sl}_9(\mathbb{C})}(3,0)$ for all $h\in E$, the commutant subVOA \[ \mathrm{Com}_{V_L}\left( L_{\widehat{sl}_9(\mathbb{C})}(3,0)\right) \subset V_{M+N}. \] Therefore, it suffices to show $W\cap (V_{M+N})_1 =0$. Recall that $M+N\cong A_2\otimes E_8$ has no roots. Thus, \[ (V_{M+N})_1= \mathrm{span}_\mathbb{C}\{h(-1)\cdot \mathbbm{1} \mid h\in (M+N)\otimes \mathbb{C}\}. \] However, \[ \Omega_1 h(-1)\cdot \mathbbm{1} = \left(\omega_E +\frac{3}4 \omega_{M+N} -\frac{1}{12} \sum_{\alpha\in K(2)\atop 1\leq i,j\leq 3, i\neq j} e^{\eta_i(\alpha)-\eta_j(\alpha)}\right)_1 h(-1)\cdot \mathbbm{1} = \frac{3}4 h(-1)\cdot \mathbbm{1} \neq 0 \] for any $0\neq h\in (M+N)\otimes \mathbb{C}$. Thus, $\left(\mathrm{Com}_{V_L}( L_{\widehat{sl}_9(\mathbb{C})}(3,0)) \right)\cap (V_{M+N})_1=0$ and we have $W_1=0$. \end{proof} \begin{rem} Note that the lattice VOA $V_L$ also contains a subVOA isomorphic to $L_{\hat{E}_8 }(3,0)$, the level $3$ affine VOA associated with the Kac-Moody Lie algebra of type $E_8^{(1)}$. The central charge of $\mathrm{Com}_{V_L}(L_{\hat{E}_8 }(3,0))$ is $16/11$, which is the same as that of $U_{3C}$. In fact, it can be shown by a calculation similar to that of Lemma \ref{coma8} that $e_M$ and $e_N$ defined in Notation \ref{efg} are contained in $\mathrm{Com}_{V_L}(L_{\hat{E}_8 }(3,0))$. Moreover, $$U_{3C}\cong \mathrm{VOA}(e_M,e_N)= \mathrm{Com}_{V_L}(L_{\hat{E}_8 }(3,0)).$$ \end{rem} \subsection{A positive definite real form} Next we shall show that the Ising vectors $e^{i,j}$, $0\leq i,j\leq 2,$ are contained in a positive definite real form of $V_{E_8^3}$. First we recall that the lattice VOA constructed in \cite{FLM} can be defined over $\mathbb{R}$.
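The central charge bookkeeping used in the preceding sections can be double-checked with exact rational arithmetic via the standard formula $c = k\dim\mathfrak{g}/(k+h^\vee)$ for the level-$k$ affine VOA $L_{\hat{\mathfrak{g}}}(k,0)$. The following small script is an illustrative addition (it assumes the usual additivity of central charges for a commutant pair inside $V_L$), not part of the original text:

```python
from fractions import Fraction as F

def affine_cc(k, dim_g, dual_coxeter):
    """Central charge of the level-k affine VOA: c = k*dim(g)/(k + h^vee)."""
    return F(k * dim_g, k + dual_coxeter)

c_sl9 = affine_cc(3, 80, 9)     # L_{sl_9}(3,0): dim sl_9 = 80, h^vee = 9
c_e8  = affine_cc(3, 248, 30)   # L_{E_8}(3,0): dim E_8 = 248, h^vee = 30
c_VL  = F(24)                   # the lattice VOA V_{E_8^3} has rank 24

assert c_sl9 == 20
assert c_VL - c_sl9 == 4          # central charge of W = Com(L_{sl_9}(3,0))
assert c_VL - c_e8 == F(16, 11)   # central charge of Com(L_{E_8}(3,0)) = U_{3C}
# c of the full subVOA L(1/2,0) x L(21/22,0) x L(28/11,0) of W:
assert F(1, 2) + F(21, 22) + F(28, 11) == 4
```

In particular, $24-20=4$ matches the central charge of $W$ computed in Section 3, and $24-\frac{248}{11}=\frac{16}{11}$ matches that of $U_{3C}$ in the remark above.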
Let $V_{L,\mathbb{R}}= S(\hat{\mathfrak{h}}^-_\mathbb{R})\otimes \mathbb{R}\{L\}$ be the real lattice VOA associated to an even positive definite lattice $L$, where $\mathfrak{h}=\mathbb{R}\otimes_\mathbb{Z} L$ and $ \hat{\mathfrak{h}}^-= \oplus_{n\in \mathbb{Z}^+} \mathfrak{h}\otimes \mathbb{R} t^{-n}$. As usual, we use $x(-n)$ to denote $x\otimes t^{-n}$ for $x\in \mathfrak{h}$ and $n\in \mathbb{Z}^+$. \begin{nota}\label{-1map} Let $\theta: V_{L,\mathbb{R}} \to V_{L, \mathbb{R}}$ be defined by \[ \theta( x(-n_1)\cdots x(-n_k)\otimes e^\alpha) = (-1)^k x(-n_1)\cdots x(-n_k)\otimes e^{-\alpha}. \] Then $\theta$ is an automorphism of $V_{L,\mathbb{R}}$, which is a lift of the $(-1)$-isometry of $L$ \cite{FLM}. We shall denote the $(\pm 1)$-eigenspaces of $\theta$ on $V_{L,\mathbb{R}}$ by $V_{L,\mathbb{R}}^\pm$. \end{nota} The following result is well-known \cite{FLM,M}. \begin{prop}[cf. Proposition 2.7 of \cite{M}]\label{VLpd} Let $L$ be an even positive definite lattice. Then the real subspace $\tilde{V}_{L,\mathbb{R}}=V_{L,\mathbb{R}}^+\oplus \sqrt{-1}V_{L,\mathbb{R}}^-$ is a real form of $V_L$. Furthermore, the invariant form on $\tilde{V}_{L,\mathbb{R}}$ is positive definite. \end{prop} Now applying the above proposition to the case $L=E_8^3$, we have the following result. \begin{prop} Let $\tilde{V}_{E_8^3,\mathbb{R}}=V_{E_8^3 ,\mathbb{R}}^+\oplus \sqrt{-1}V_{E_8^3,\mathbb{R}}^-$. Then $\tilde{V}_{E_8^3,\mathbb{R}}$ is a positive definite real form of $V_{E_8^3}$. \end{prop} The next lemma is clear from the definitions of $e_M, e_N$ and $e_{\tilde{N}}$. \begin{lem} The Ising vectors $e_M, e_N$ and $e_{\tilde{N}}$ defined in Notation \ref{efg} are contained in $V_{E_8^3 ,\mathbb{R}}^+$.
\end{lem} Recall the automorphism $\rho= \exp(\frac{2\pi i}3 ({\bf a}, -{\bf a},0)(0))$ defined in Definition \ref{rho}, where ${\bf a}$ is an element of $E_8$ such that $K= \{\beta\in E_8\mid \langle \beta, {\bf a}\rangle \in 3\mathbb{Z}\} \cong A_8.$ Then we have the coset decomposition \[ E_8= A_8 \cup (b+A_8) \cup (-b +A_8), \] where $b$ is a root of $E_8$ such that $\langle b, {\bf a}\rangle \equiv 1 \mod 3$. Note that \[ M=\{ (\alpha, -\alpha,0)\mid \alpha \in E_8\}\cong \sqrt{2}E_8 \] and \[ \tilde{K} =\{ (\alpha, -\alpha,0)\mid \alpha \in K\}\cong \sqrt{2}A_8. \] Set \[ \begin{split} X^0&: = \frac{1}3(e_M + \rho e_M +\rho^2 e_M), \\ X^1&:= \frac{1}3( e_M + \xi \rho e_M +\xi^2 \rho^2 e_M),\\ X^2&:= \frac{1}3(e_M + \xi^2 \rho e_M +\xi \rho^2 e_M) ,\\ \end{split} \] where $\xi=\exp(\frac{2 \pi i}3)= \frac{1}2( -1+\sqrt{-3})$. The next lemma can be proved by the same calculations as in \cite{LYY}. Note that $\rho X^0=X^0$, $\rho X^1=\xi^2 X^1$ and $\rho X^2= \xi X^2$. \begin{lem} The vector $X^0$ is contained in $V_{M,\mathbb{R}}^+$. Moreover, \[ X^1=\frac{1}{32} \sum_{\gamma\in (b, -b,0)+\tilde{K}\atop \langle \gamma, \gamma\rangle =4} e^\gamma \] and \[ X^2= \frac{1}{32} \sum_{\gamma\in -(b, -b,0)+\tilde{K}\atop \langle \gamma, \gamma\rangle =4} e^\gamma. \] Therefore, $X^1+X^2\in V_{M,\mathbb{R}}^+$ and $X^1- X^2\in V_{M,\mathbb{R}}^-$. \end{lem} \begin{lem} The Ising vectors $e^{i,j}$, $0\leq i,j\leq 2,$ are all contained in $\tilde{V}_{E_8^3,\mathbb{R}}$. \end{lem} \begin{proof} By the discussion above, since $\rho e_M = X^0+\xi^2 X^1+\xi X^2$, we have \[ \rho e_M = X^0 - \frac{1}2 (X^1+X^2) - \frac{1}2 \sqrt{-3} (X^1-X^2). \] Since $X^1+X^2\in V_{M,\mathbb{R}}^+$ and $X^1- X^2\in V_{M,\mathbb{R}}^-$, we have $\rho e_M \in \tilde{V}_{E_8^3,\mathbb{R}}$. Similarly, we also have $\rho^2 e_M, \rho e_N, \rho^2 e_N, \rho e_{\tilde{N}}, \rho^2 e_{\tilde{N}} \in \tilde{V}_{E_8^3,\mathbb{R}}$ as desired.
\end{proof} \section{Parafermion VOA and $W$} In this section, we shall show that the VOA $W$ defined in Notation \ref{Nota:W} is, in fact, equal to the commutant subVOA $\tilde{W}=\mathrm{Com}_{V_L}\left( L_{\widehat{sl}_9(\mathbb{C})}(3,0)\right)$. Recall from \cite{Lam2} that the lattice VOA $V_{A_8^3}$ contains a full subVOA $K(sl_3(\mathbb{C}),9)\otimes L_{\hat{sl}_9(\mathbb{C})}(3,0)$, where $K(sl_3(\mathbb{C}),9)$ is the parafermion VOA associated to the affine VOA $L_{\hat{sl}_3(\mathbb{C})}(9,0)$. Therefore, the VOA $\tilde{W}$ contains a full subVOA isomorphic to the parafermion VOA $K(sl_3(\mathbb{C}),9)$. \subsection{Parafermion VOA} First, we recall the definition of the parafermion VOA from \cite{DW} (cf. \cite{DLY,DLWY}). Let $\mathfrak{g}$ be a finite dimensional simple Lie algebra and $\widehat{\mathfrak{g}}$ the affine Kac-Moody Lie algebra associated with $\mathfrak{g}$. Let $\Pi=\{\alpha_1, \dots, \alpha_n\}$ be a set of simple roots and $\theta$ the highest root. Let $Q$ be the root lattice of $\mathfrak{g}$. For any positive integer $k$, we denote by \[ P_+^k(\mathfrak{g})= \{ \Lambda \in \mathbb{Q}\otimes_\mathbb{Z} Q\mid \langle \alpha_i, \Lambda\rangle \in \mathbb{Z}_{\geq 0} \text{ for all } i=1, \dots,n \text{ and } \langle \theta, \Lambda\rangle \leq k\} \] the set of dominant integral weights for $\mathfrak{g}$ of level $k$. Let $L_{\widehat{\mathfrak{g}}}(k,\Lambda)$ be the irreducible module of $\hat{\mathfrak{g}}$ with highest weight $\Lambda$ and level $k$. Then $L_{\widehat{\mathfrak{g}}}(k,0)$ forms a simple VOA with the Virasoro element given by the Sugawara construction \begin{equation}\label{suga} \Omega_{\mathfrak{g},k}= \frac{1}{2 ( k +h^\vee)} \sum_i (u_i)_{-1}u^i, \end{equation} where $h^\vee$ is the dual Coxeter number, $\{u_i\}$ is a basis of $\mathfrak{g}$ and $ \{ u^i:= ( u_i) ^\ast \}$ is the dual basis of $ \{ u_i \}$ with respect to the normalized Killing form (see \cite{FZ}).
Moreover, the central charge of $L_{\widehat{\mathfrak{g}}}(k,0)$ is \begin{equation}\label{cc} \frac{k\dim\mathfrak{g}}{k+h^\vee}. \end{equation} The vertex operator algebra $L_{\widehat{\mathfrak{g}}}(k,0)$ contains a Heisenberg vertex operator algebra corresponding to a Cartan subalgebra $\mathfrak{h}$ of $\mathfrak{g}$. Let $M_{\hat{\mathfrak{h}}}(k, 0)$ be the vertex operator subalgebra of $L_{\widehat{\mathfrak{g}}}(k,0)$ generated by $h(-1)\cdot \mathbbm{1}$ for $h\in{\mathfrak{h}}$. The commutant $K(\mathfrak{g},k)$ of $M_{\hat{\mathfrak{h}}}(k, 0)$ in $L_{\widehat{\mathfrak{g}}}(k,0)$ is called a parafermion vertex operator algebra. The VOA $L_{\widehat{\mathfrak{g}}}(k,0)$ is completely reducible as an $M_{\hat{\mathfrak{h}}}(k, 0)$-module and we have the following decomposition (see \cite{DW}). \begin{lem}\label{Klambda} For any $\lambda\in \mathfrak{h}^*$, let $M_{\hat{\mathfrak{h}}}(k, \lambda)$ be the irreducible highest weight module for $\hat{\mathfrak{h}}$ with a highest weight vector $v_\lambda$ such that $h(0)v_\lambda = \lambda(h)v_\lambda$ for $h\in \mathfrak{h}$. Denote \[ K_{\mathfrak{g},k}(\lambda)= K_{\mathfrak{g},k}(0, \lambda) = \{v\in L_{\widehat{\mathfrak{g}}}(k,0)\mid h(m)v =\lambda(h)\delta_{m,0} v \text{ for } h\in \mathfrak{h}, m\geq 0\}. \] Then we have \[ L_{\widehat{\mathfrak{g}}}(k,0) =\bigoplus_{\lambda\in Q} K_{\mathfrak{g},k}(\lambda) \otimes M_{\hat{\mathfrak{h}}}(k, \lambda), \] where $Q$ is the root lattice of $\mathfrak{g}$. \end{lem} Similarly, for any $\Lambda\in P_+^k(\mathfrak{g})$, we also have the following decomposition. \begin{lem}\label{KLl} Denote $ K_{\mathfrak{g},k}(\Lambda, \lambda)= \{v\in L_{\widehat{\mathfrak{g}}}(k,\Lambda)\mid h(m)v =\lambda(h)\delta_{m,0} v \text{ for } h\in \mathfrak{h}, m\geq 0\}.
$ Then \[ L_{\widehat{\mathfrak{g}}}(k,\Lambda) =\bigoplus_{\lambda\in \Lambda + Q} K_{\mathfrak{g},k}(\Lambda, \lambda) \otimes M_{\hat{\mathfrak{h}}}(k, \lambda). \] \end{lem} \subsection{A generating set} In \cite{DW}, it is shown that the parafermion VOA $K(\mathfrak{g}, k)$ is generated by subVOAs isomorphic to $K(sl_2(\mathbb{C}), k)$. We first give a brief review of their work. Let $\mathfrak{h}$ be a Cartan subalgebra of $\mathfrak{g}$ and let $\Delta_+$ be the set of all positive roots of $\mathfrak{g}$. Then \[ \mathfrak{g}= \mathfrak{h}\oplus \bigoplus_{\alpha\in \Delta_+} (\mathbb{C} x_\alpha \oplus \mathbb{C} x_{-\alpha}), \] where $x_{\pm \alpha} \in \mathfrak{g}_{\pm\alpha} =\{ u\in\mathfrak{g}\mid [h,u]=\pm \alpha(h) u \text{ for all } h\in \mathfrak{h}\}$. \begin{nota}\label{Pal} For any $\alpha\in \Delta_+$, let $h_\alpha=[x_\alpha, x_{-\alpha}]$. Then $S_\alpha=\mathrm{span}\{h_\alpha, x_\alpha, x_{-\alpha}\}$ is a Lie subalgebra of $\mathfrak{g}$ isomorphic to $sl_2(\mathbb{C})$. Define \[ \omega_\alpha = \frac{1}{ 2k(k + 2)} (kh_\alpha(-2)\mathbbm{1}- h_\alpha(-1)^2\mathbbm{1} + 2kx_\alpha(-1)x_{-\alpha}(-1)\mathbbm{1}) \] and \[ \begin{split} W^3_\alpha = &k^2h_\alpha(-3)\mathbbm{1} + 3kh_\alpha(-2)h_\alpha(-1)\mathbbm{1} + 2h_\alpha(-1)^3\mathbbm{1}- 6kh_\alpha(-1)x_\alpha(-1)x_{-\alpha}(-1)\mathbbm{1}\\ & +3k^2x_\alpha(-2)x_{-\alpha}(-1)\mathbbm{1}- 3k^2x_\alpha(-1)x_{-\alpha}(-2)\mathbbm{1}. \end{split} \] We use $P_\alpha$ to denote the vertex operator subalgebra of $K(\mathfrak{g}, k)$ generated by $\omega_\alpha$ and $W^3_\alpha$ for $\alpha\in \Delta_+$. \end{nota} The next theorem is proved in \cite{DW}. \begin{thm} [Theorem 4.2 of \cite{DW}]\label{dwtheorem} The simple vertex operator algebra $K(\mathfrak{g}, k)$ is generated by $P_\alpha$, $\alpha\in \Delta_+$, and each $P_\alpha$ is a simple vertex operator algebra isomorphic to the parafermion vertex operator algebra $K(sl_2(\mathbb{C}), k)$ associated to $sl_2(\mathbb{C})$.
\end{thm} \subsection{Lattice VOA $V_{A_n^{k+1}}$} Next we recall an embedding of the VOA $$K(sl_{k+1}(\mathbb{C}), n+1)\otimes L_{\hat{sl}_{n+1}(\mathbb{C})}(k+1, 0)$$ into the lattice VOA $V_{A_n^{k+1}}$ from \cite{Lam2}. We use the standard model for the root lattice of type $A_\ell$. In particular, \[ A_\ell=\{ \sum a_i\epsilon_i\in \mathbb{Z}^{\ell+1}\mid a_i\in \mathbb{Z} \text{ and } \sum_{i=1}^{\ell+1}a_i=0\}, \] where $\epsilon_i$ is the row vector whose $i$-th entry is $1$ and all other entries are $0$. The dual lattice \[ A_\ell^* =\bigcup_{i=0}^\ell \left ( \gamma_{A_\ell}(i)+A_\ell \right), \] where $ \gamma_{A_\ell}(i)= \frac{1}{\ell+1} \left( \sum_{j=1}^{\ell+1-i} i \epsilon_j - \sum_{j=\ell-i+2}^{\ell+1} (\ell+1-i)\epsilon_j\right)$ for $i=0, \dots, \ell$. \begin{nota}\label{nota3:1} Let $n$ and $k$ be positive integers. We shall consider the injective maps $\eta_i: \mathbb{Z}^{n+1} \to \mathbb{Z}^{(n+1)(k+1)}$, $i=1, \dots, k+1$, and $\iota_i: \mathbb{Z}^{k+1} \to \mathbb{Z}^{(n+1)(k+1)}$, $i=1, \dots, n+1$, defined by \[ \eta_i(\epsilon_j) = \epsilon_{(n+1)(i-1) +j}, \quad j=1, \dots, n+1, \quad \text{ and } \quad \iota_i(\epsilon_j)= \epsilon_{(n+1)(j-1) +i}, \quad j=1, \dots, k+1. \] Let $d_{k+1}= \sum_{j=1}^{k+1} \eta_j: \mathbb{Z}^{n+1} \to \mathbb{Z}^{(n+1)(k+1)}$ and $\mu_{n+1}= \sum_{j=1}^{n+1}\iota_{j}: \mathbb{Z}^{k+1} \to \mathbb{Z}^{(n+1)(k+1)}$. Then we have \[ d_{k+1}(a_1, \dots, a_{n+1}) = (a_1, \dots, a_{n+1}, a_1, \dots, a_{n+1}, \dots, a_1, \dots, a_{n+1}), \] and \[ \mu_{n+1}(a_1, \dots, a_{k+1}) = (a_1, \dots, a_1,a_2, \dots, a_2, \dots, a_{k+1},\dots, a_{k+1}). \] \end{nota} Set $X=d_{k+1}(A_n)$ and $Y=\mu_{n+1}(A_k)$. Then $X\cong \sqrt{k+1}A_n$ and $Y\cong \sqrt{n+1}A_k$.
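Indeed, the identification $X\cong \sqrt{k+1}A_n$ can be checked directly: the images of $\eta_1, \dots, \eta_{k+1}$ lie in pairwise orthogonal blocks of coordinates and each $\eta_i$ preserves the bilinear form, so \[ \langle d_{k+1}(x), d_{k+1}(y)\rangle = \sum_{i,j=1}^{k+1} \langle \eta_i(x), \eta_j(y)\rangle = (k+1)\langle x, y\rangle \] for $x, y\in A_n$, and similarly $\langle \mu_{n+1}(x), \mu_{n+1}(y)\rangle = (n+1)\langle x, y\rangle$ for $x, y\in A_k$.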
Moreover, we have \begin{equation}\label{ann} \begin{split} \mathrm{Ann}_{A_{(n+1)(k+1)-1}}(Y) = \oplus_{i=1}^{k+1} \eta_i(A_n) \cong A_n^{k+1},\\ \mathrm{Ann}_{A_{(n+1)(k+1)-1}}(X) = \oplus_{j=1}^{n+1}\iota_j(A_k) \cong A_k^{n+1}, \end{split} \end{equation} where $\mathrm{Ann}_{A}(B)=\{ x\in A\mid \langle x,y\rangle =0 \text{ for all } y\in B\}$ is the annihilator of a sublattice $B$ in an integral lattice $A$. By the same construction as in Notation \ref{hef} (see also \cite[Chapter 13]{DL}), one can obtain subVOAs isomorphic to $L_{\hat{sl}_{n+1}(\mathbb{C})}(k+1,0)$ and $L_{\hat{sl}_{k+1}(\mathbb{C})}(n+1,0)$ in the lattice VOA $V_{A_{(k+1)(n+1)-1}}$. The next proposition is well known in the literature \cite{KW,Lam2,NT}. \begin{prop}\label{decL} The VOAs $L_{\hat{sl}_{n+1}(\mathbb{C})}(k+1,0)$ and $L_{\hat{sl}_{k+1}(\mathbb{C})}(n+1,0)$ are mutually commutative in the lattice VOA $V_{A_{(k+1)(n+1)-1}}$. Moreover, $L_{\hat{sl}_{n+1}(\mathbb{C})}(k+1,0)\otimes L_{\hat{sl}_{k+1}(\mathbb{C})}(n+1,0)$ is a full subVOA of $V_{A_{(n+1)(k+1)-1}}$. \end{prop} \begin{rem}\label{Andec} It is also known that the VOA $V_{\mu_{n+1}(A_k)}$ is contained in the affine VOA $L_{\hat{sl}_{k+1}(\mathbb{C})}(n+1,0)$ and $ K(sl_{k+1}(\mathbb{C}), n+1) = \mathrm{Com}_{L_{\hat{sl}_{k+1}(\mathbb{C})}(n+1,0)} (V_{\mu_{n+1}(A_k)})$ (cf. \cite[Lemma 4.1]{Lam2}). Moreover, for any $\Lambda \in P^+_{n+1}(sl_{k+1}(\mathbb{C}))$, we have the decomposition \begin{equation}\label{decan} L_{\hat{sl}_{k+1}(\mathbb{C})}(n+1,\Lambda) =\bigoplus_{\lambda\in \frac{1}{n+1}(\mu_{n+1}(\Lambda+ A_k))/ {\mu_{n+1}(A_k)}} K_{sl_{k+1}(\mathbb{C}),n+1} (\Lambda, (n+1)\bar{\lambda}) \otimes V_{\lambda+\mu_{n+1}(A_k)} \end{equation} as a module of $V_{\mu_{n+1}(A_k)}\otimes K(sl_{k+1}(\mathbb{C}),n+1)$, where $\bar{\lambda}\in \frac{1}{n+1}A_k$ is such that $\mu_{n+1}(\bar{\lambda}) =\lambda$ (see \cite[Lemma 4.3]{Lam2}).
Note that it is shown in \cite[Theorem 14.20]{DL} that $K_{sl_{k+1}(\mathbb{C}),n+1} (\Lambda, (n+1)\bar{\lambda})$, for $\Lambda\in P_+^{n+1}(sl_{k+1}(\mathbb{C}))$, $\lambda\in (\mu_{n+1}(A_k))^*$, are irreducible $K(sl_{k+1}(\mathbb{C}),n+1)$-modules. \end{rem} Next we consider the case when $n=8$ and $k=2$. Then $(n+1)(k+1)-1=26$. We shall study the decomposition of $\tilde{W} =\mathrm{Com}_{V_{E_8^3}}(L_{\hat{sl}_9(\mathbb{C})}(3,0))$ as a $K(sl_3(\mathbb{C}),9)$-module. Denote $$\nu_1=\eta_1-\eta_2,\quad \quad \nu_2=\eta_2-\eta_3, $$ and define $\mu=\mu_3:\mathbb{Z}^3 \to \mathbb{Z}^{27}$ by \[ \mu (a_1, a_2, a_3) =(a_1, \dots, a_1, a_2, \dots, a_2,a_3, \dots, a_3). \] Note that $Y=\mu (A_2)\cong 3A_2$ and $$\mathrm{Ann}_{A_{26}}(Y)= \{ \alpha\in A_{26}\mid \langle \alpha, \beta\rangle =0 \text{ for any }\beta\in Y\}\cong A_8^3.$$ Next we discuss the coset decomposition of $Y+A_8^3$ in $A_{26}$. \begin{lem}\label{a26} Let $\alpha_1=(1,-1,0)$ and $\alpha_2=(0,1,-1)$ be roots of $A_2$. Then we have \[ \begin{split} A_{26} &= \bigcup_{0\leq i, j\leq 8} \left( \left(-\frac{1}9 (i \mu (\alpha_1)+j\mu(\alpha_2))+Y\right) + \left(\nu_1(\gamma_{A_8}(i))+\nu_2(\gamma_{A_8}(j))+A_8^3 \right) \right). \end{split} \] \end{lem} \begin{proof} First we note that $ [A_{26}: Y+A_8^3]= \sqrt{( 9^2\cdot 3) \cdot 9^3/27}=9^2$. Moreover, we have \[ -\frac{1}9 (i \mu (\alpha_1)+j\mu(\alpha_2)) + \nu_1(\gamma_{A_8}(i))+\nu_2(\gamma_{A_8}(j)) = \sum_{k=10-i}^9 \iota_k(\alpha_1) + \sum_{k'=10-j}^9 \iota_{k'}(\alpha_2). \] Note that $\sum_{k=10-i}^{9} \iota_k (\alpha_p) \notin Y+A_8^3$ for any $i\neq 0$, $p=1,2$. Therefore, \[ \left( -\frac{1}9 (i \mu (\alpha_1)+j\mu(\alpha_2)) + Y\right ) + \left(\nu_1(\gamma_{A_8}(i))+\nu_2(\gamma_{A_8}(j)) + A_8^3\right), \] for $i,j =0, \dots, 8,$ give $9^2$ distinct cosets in $A_{26}/ (Y+A_8^3)$. Thus, we have the desired conclusion.
\end{proof} \begin{lem}\label{e8overa8} Let $\delta=\gamma_{A_8}(3)=\frac{1}3(1^6 -2^3) \in A_8^*$. Then for any $k, \ell =0, \pm 1$, we have \[ \begin{split} \mathrm{Com}_{V_{(k\nu_1+\ell\nu_2)(\delta)+A_8^3}}(L_{\hat{sl}_9(\mathbb{C})}(3,0)) &= \{ v\in V_{(k\nu_1+\ell\nu_2)(\delta)+A_8^3}\mid \Omega_n v=0 \text{ for all } n\geq 0\}\\ &\cong K_{sl_3(\mathbb{C}), 9}(0, -3(k\alpha_1+\ell\alpha_2)). \end{split} \] \end{lem} \begin{proof} By Lemma \ref{a26}, \[ V_{A_{26}} =\bigoplus_{0\leq i,j \leq 8} V_{-\frac{1}9 (i \mu (\alpha_1)+j\mu(\alpha_2))+Y} \otimes V_{\nu_1(\gamma_{A_8}(i))+\nu_2(\gamma_{A_8}(j))+A_8^3}. \] Moreover, by \eqref{decan}, \[ L_{\hat{sl}_3(\mathbb{C})}(9,0) =\mathrm{Com}_{V_{A_{26}}}(L_{\hat{sl}_9(\mathbb{C})}(3,0)) =\bigoplus_{\lambda \in \frac{1}9 Y/Y} V_{\lambda+Y} \otimes K_{sl_3(\mathbb{C}), 9}(0, 9\bar{\lambda}). \] Take $i=3k$ and $j=3\ell$. Then we have \[ \begin{split} \mathrm{Com}_{V_{(k\nu_1+\ell\nu_2)(\delta)+A_8^3}}(L_{\hat{sl}_9(\mathbb{C})}(3,0)) &\cong K_{sl_3(\mathbb{C}), 9}\left(0, 9\cdot\left(-\frac{1}9\right) (3k\alpha_1 +3\ell \alpha_2)\right)\\ & = K_{sl_3(\mathbb{C}), 9}(0, -3(k\alpha_1+\ell\alpha_2)) \end{split} \] as desired. \end{proof} \begin{lem} We have the decomposition \[ \tilde{W}= \mathrm{Com}_{V_{E_8^3}}(L_{\hat{sl}_9(\mathbb{C})}(3,0)) = \bigoplus_{i,j=0, \pm 1} K_{sl_3(\mathbb{C}), 9}(0, 3(i\alpha_1+j\alpha_2)). \] \end{lem} \begin{proof} First we note that $M+N \cong A_2\otimes E_8$ and \[ M+N= \bigcup_{0\leq k,\ell \leq 2} \left( (k\nu_1+\ell\nu_2)(\delta)+A_2\otimes A_8\right). \] Since $A_2\otimes A_8 \cong \mathrm{Ann}_{A_8^3}(d_3(A_8))$ and $V_{d_3(A_8)}\subset L_{\hat{sl}_9(\mathbb{C})}(3,0)$, we have \[ \tilde{W}= \mathrm{Com}_{V_{E_8^3}}(L_{\hat{sl}_9(\mathbb{C})}(3,0)) < V_{M+N}. \] The conclusion now follows from Lemma \ref{e8overa8}. \end{proof} Now let $\alpha\in A_2$ be a root.
Then $\mathbb{Z}\alpha\cong A_1$ and \[ L(\alpha)=\oplus_{j=1}^9 \iota_j(\mathbb{Z}\alpha) \cong A_1^9 \subset A_{26}. \] Let $H_\alpha$ and $E_\alpha$ be defined as in Notation \ref{hef}. Then $\{H_\alpha, E_{\alpha}, -E_{-\alpha}\}$ forms an $sl_2$-triple in the lattice VOA $V_{A_1^9}< V_{A_{26}}$. Moreover, it generates a subVOA $\mathcal{L}_\alpha$ isomorphic to the affine VOA $L_{\hat{sl}_2(\mathbb{C})}(9,0)$. Let $M_\alpha(9,0)$ be the subVOA generated by $H_\alpha$. Then \[ \mathcal{K}_\alpha:= \mathrm{Com}_{\mathcal{L}_\alpha}(M_\alpha(9,0)) \cong K(sl_2(\mathbb{C}), 9). \] Note also that $\mathcal{K}_\alpha= \mathrm{Com}_{\mathcal{L}_\alpha}(M_\alpha(9,0)) < \mathrm{Com}_{L_{\hat{sl}_3(\mathbb{C})}(9,0)}( V_{\mu(A_2)})=K({sl}_3(\mathbb{C}),9).$ Set $h_\alpha=H_\alpha$, $x_\alpha=E_\alpha$ and $x_{-\alpha}= -E_{-\alpha}$. Then the elements $\omega_\alpha$ and $W_\alpha^3$ defined in Notation \ref{Pal} are contained in $\mathcal{K}_\alpha$. In fact, $\mathcal{K}_\alpha$ is generated by $\omega_\alpha$ and $W_\alpha^3$ (see \cite{DLY}). \begin{thm} The VOA $W$ defined in Notation \ref{eijGW} contains a full subVOA isomorphic to $K(sl_3(\mathbb{C}), 9)$. \end{thm} \begin{proof} Recall that $W= VOA( e^{i,j}\mid 0\leq i,j\leq 2)$. We also have \[ M=(\eta_1-\eta_2)(E_8) ,\quad N= (\eta_2-\eta_3)(E_8),\quad \tilde{N}=(\eta_1-\eta_3)(E_8). \] Let $\alpha_1=(1,-1,0)$, $\alpha_2=(0,1,-1)$ and $\alpha_3= \alpha_1+\alpha_2=(1,0,-1)$ be the positive roots of $A_2$. Then by the same calculations as in \cite{LYY}, it is straightforward to verify that \[ \mathcal{K}_{\alpha_1} < VOA(e_M, \rho e_M),\quad \mathcal{K}_{\alpha_2} < VOA(e_N, \rho e_N),\quad \mathcal{K}_{\alpha_3} < VOA(e_{\tilde{N}}, \rho e_{\tilde{N}}), \] where $e_M, e_N, e_{\tilde{N}}$ and $\rho$ are defined as in Notation \ref{efg}. By Theorem \ref{dwtheorem}, $\mathcal{K}_{\alpha_1}, \mathcal{K}_{\alpha_2}$ and $\mathcal{K}_{\alpha_3}$ generate a subVOA isomorphic to $K(sl_3(\mathbb{C}), 9)$ in $W$.
It is a full subVOA of $W$ because they have the same central charge. \end{proof} \begin{thm} We have $W=\tilde{W}= \mathrm{Com}_{V_{E_8^3}}(L_{\hat{sl}_9(\mathbb{C})}(3,0))$. \end{thm} \begin{proof} By the previous lemma, the subVOA $W$ contains $K(sl_3(\mathbb{C}),9)$ as a full subVOA. Therefore, it suffices to show that $K_{sl_3(\mathbb{C}), 9}(0, 3(i\alpha_1+j\alpha_2))$ is contained in $W$ for any $i,j =0, \pm 1$. By \cite[Proposition 2.2]{LYY}, \[ X_{\nu_1}^+ =\frac{1}{32}\sum_{\gamma\in \nu_1(\delta) + \nu_1(A_8)\atop \langle \gamma, \gamma \rangle=4} e^\gamma \qquad \text{ and } \qquad X_{\nu_1}^- =\frac{1}{32}\sum_{\gamma\in -\nu_1(\delta)+\nu_1(A_8)\atop \langle \gamma, \gamma \rangle=4} e^\gamma \] are contained in $VOA(e_M, \rho e_M)< W$. Moreover, it is straightforward to verify that \[ X_{\nu_1}^+ \in \mathrm{Com}_{V_{\nu_1(\delta)+A_8^3}}(L_{\hat{sl}_9(\mathbb{C})}(3,0)) \cong K_{sl_3(\mathbb{C}), 9}(0, -3\alpha_1) \] and \[ X_{\nu_1}^- \in \mathrm{Com}_{V_{-\nu_1(\delta)+A_8^3}}(L_{\hat{sl}_9(\mathbb{C})}(3,0)) \cong K_{sl_3(\mathbb{C}), 9}(0, 3\alpha_1). \] Therefore, $W$ contains $K_{sl_3(\mathbb{C}), 9}(0, \pm 3\alpha_1)$ as $K(sl_3(\mathbb{C}),9)$-submodules. Similarly, $W$ also contains $K_{sl_3(\mathbb{C}), 9}(0, \pm 3\alpha_2)$ and $ K_{sl_3(\mathbb{C}), 9}(0, \pm 3(\alpha_1+\alpha_2))$ as $K(sl_3(\mathbb{C}),9)$-submodules. Moreover, it is clear that $0 \neq (X_{\nu_1}^+)_{-3} (X_{\nu_2}^-) \in V_{(\nu_1-\nu_2)(\delta)+A_8^3} $. Since $X_{\nu_1}^+$ and $X_{\nu_2}^-$ are contained in the commutant of $L_{\hat{sl}_9(\mathbb{C})}(3,0)$, we have $(X_{\nu_1}^+)_{-3} (X_{\nu_2}^-) \in \mathrm{Com}_{V_{(\nu_1-\nu_2)(\delta)+A_8^3}}(L_{\hat{sl}_9(\mathbb{C})}(3,0))$. Hence $W$ contains $K_{sl_3(\mathbb{C}), 9}(0, 3(\alpha_1-\alpha_2))$. Similarly, $K_{sl_3(\mathbb{C}), 9}(0, 3(\alpha_2-\alpha_1))$ is also contained in $W$. \end{proof} \begin{thebibliography}{IPSS} \bibitem[ATLAS]{ATLAS} J.H. Conway, R.T. Curtis, S.P. Norton, R.A.
Parker and R.A. Wilson, ATLAS of finite groups. Clarendon Press, Oxford, 1985. \bibitem[DL]{DL} C. Dong and J. Lepowsky, Generalized vertex algebras and relative vertex operators, Progress in Math. {\bf 112}, Birkh\"{a}user, Boston, 1993. \bibitem[DLMN]{DLMN} C. Dong, H. Li, G. Mason and S.P. Norton, Associative subalgebras of Griess algebra and related topics. Proc. of the Conference on the Monster and Lie algebra at the Ohio State University, May 1996, ed. by J. Ferrar and K. Harada, Walter de Gruyter, Berlin--New York, 1998. \bibitem[DLWY]{DLWY} C. Dong, C.H. Lam, Q. Wang and H. Yamada, The structure of parafermion vertex operator algebras, \textit{J. Algebra} \textbf{323} (2010), 371--381. \bibitem[DLY]{DLY} C. Dong, C.H. Lam and H. Yamada, $W$-algebras related to parafermion algebras, \textit{J. Algebra} \textbf{322} (2009), 2366--2403. \bibitem[DMZ]{dmz} C. Dong, G. Mason and Y. Zhu, Discrete series of the Virasoro algebra and the Moonshine module, \emph{Proceedings of Symposia in Pure Mathematics} \textbf{56}, \emph{Part 2} (1994), 295--316. \bibitem[DW]{DW} C. Dong and Q. Wang, The structure of parafermion vertex operator algebras: general case, \textit{Commun. Math. Phys.} \textbf{299} (2010), 783--792. \bibitem[FHL]{FHL} I. Frenkel, Y. Z. Huang and J. Lepowsky, On axiomatic approaches to vertex operator algebras and modules, \emph{Memoirs of the American Mathematical Society} \textbf{104}, 1993. \bibitem[FLM]{FLM} I. Frenkel, J. Lepowsky and A. Meurman, Vertex operator algebras and the Monster, Academic Press, New York, 1988. \bibitem[FZ]{FZ} I. B. Frenkel and Y. Zhu, Vertex operator algebras associated to representations of affine and Virasoro algebras, \emph{Duke Mathematical Journal} \textbf{66} (1992), 123--168. \bibitem[G]{G} R.L. Griess, The friendly giant, {\it Invent. Math.} {\bf 69} (1982), 1--102. \bibitem[GL]{GL} R.L. Griess and C. H.
Lam, $EE_8$ lattices and dihedral groups, Pure and Applied Math Quarterly (special issue for Jacques Tits), {\bf 7} (2011), no. 3, 621--743. {\tt arXiv:0806.2753}. \bibitem[H\"o]{ho} G. H\"{o}hn, The group of symmetries of the shorter moonshine module. {\it Abhandlungen aus dem Mathematischen Seminar der Universit\"at Hamburg} {\bf 80} (2010), 275--283, {\tt arXiv:math/0210076}. \bibitem[Iv]{Iva} A. A. Ivanov, \emph{The Monster group and Majorana involutions}, Cambridge Tracts in Mathematics 176, Cambridge University Press, Cambridge, 2009. xiv+252 pp. \bibitem[Iv2]{Iva2} A. A. Ivanov, Majorana representation of $A_6$ involving $3C$-algebras, \emph{Bull. Math. Sci.} \textbf{1} (2011), no. 2, 365--378. \bibitem[Iv3]{Iva3} A. A. Ivanov, On Majorana representations of $A_6$ and $A_7$, \emph{Comm. Math. Phys.} \textbf{307} (2011), no. 1, 1--16. \bibitem[IS]{IS} A.A. Ivanov and A. Seress, Majorana representations of $A_5$, \emph{Math. Z.} \textbf{272} (2012), no. 1-2, 269--295. \bibitem[IPSS]{IPSS} A. A. Ivanov, D. V. Pasechnik, A. Seress and S. Shpectorov, Majorana representations of the symmetric group of degree 4, \emph{Journal of Algebra} \textbf{324} (2010), 2432--2463. \bibitem[KW]{KW} V. Kac and M. Wakimoto, Modular and conformal invariance constraints in representation theory of affine algebras, Advances in Math. \textbf{70} (1988), 156--234. \bibitem[La]{Lam2} C.H. Lam, A level-rank duality for parafermion vertex operator algebras of type A, to appear in Proc. Amer. Math. Soc. \bibitem[LYY1]{LYY} C. H. Lam, H. Yamada and H. Yamauchi, Vertex operator algebras, extended $E_{8}$ diagram, and McKay's observation on the Monster simple group, \emph{Transactions of the American Mathematical Society} \textbf{359} (2007), 4107--4123. \bibitem[LYY2]{LYY2} C.H. Lam, H. Yamada and H. Yamauchi, McKay's observation and vertex operator algebras generated by two conformal vectors of central charge 1/2. {\it Internat. Math. Res. Papers} {\bf 3} (2005), 117--181. \bibitem[Mi]{miy} M.
Miyamoto, Griess algebras and conformal vectors in vertex operator algebras, \emph{Journal of Algebra} \textbf{179} (1996), 523--548. \bibitem[Mi2]{M} M. Miyamoto, A new construction of the moonshine vertex operator algebra over the real number field, \emph{Annals of Mathematics} \textbf{159} (2004), 535--596. \bibitem[NT]{NT} T. Nakanishi and A. Tsuchiya, Level-rank duality of WZW models in conformal field theory, \emph{Commun. Math. Phys.} \textbf{144} (1992), 351--372. \bibitem[Sa]{Sa} S. Sakuma, 6-transposition property of $\tau$-involutions of vertex operator algebras, \emph{International Mathematics Research Notices} \textbf{2007}, Article ID rnm 030, 19 pages. \end{thebibliography} \end{document}
\begin{document} \begin{abstract} Let $R$ be a regular ring, let $J$ be an ideal generated by a regular sequence of codimension at least $2$, and let $I$ be an ideal containing $J$. We give an example of a module $H^3_I(J)$ with infinitely many associated primes, answering a question of Hochster and N\'u\~nez-Betancourt in the negative. In fact, for $i\leq 4$, we show that under suitable hypotheses on $R/J$, $\text{Ass}\,H^{i}_I(J)$ is finite if and only if $\text{Ass}\,H^{i-1}_I(R/J)$ is finite. Our proof of this statement involves a novel generalization of an isomorphism of Hellus, which may be of some independent interest. The finiteness comparison between $\text{Ass}\, H^i_I(J)$ and $\text{Ass}\, H^{i-1}_I(R/J)$ tends to improve as our hypotheses on $R/J$ become more restrictive. To illustrate the extreme end of this phenomenon, at least in the prime characteristic $p>0$ setting, we show that if $R/J$ is regular, then $\text{Ass}\, H^i_I(J)$ is finite for all $i\geq 0$. \end{abstract} \maketitle \section{Introduction} Local cohomology modules over a regular ring are known in many cases to exhibit remarkable finiteness properties. For a regular ring $S$, one may ask whether the set $\text{Ass}\, H^i_I(S)$ is finite for any ideal $I\subseteq S$ and any $i\geq 0$. A celebrated theorem originally due to Huneke and Sharp (later generalized by Lyubeznik's theory of $F$-modules \cite{lyufmod}) says that this is indeed the case if $S$ has prime characteristic $p>0$ \cite[Corollary 2.3]{hush}. Lyubeznik proved the corresponding statement for smooth $K$-algebras when $K$ is a field of characteristic $0$ \cite[Remark 3.7(i)]{lyudmod} and for any regular local ring containing $\mathbb{Q}$ \cite[Theorem 3.4]{lyudmod}. 
Concerning regular rings of mixed characteristic, the property is known to hold when $S$ is an unramified regular local ring \cite[Theorem 1]{lyuunram}, a smooth $\mathbb{Z}$-algebra \cite[Theorem 1.2]{bhatt}, or is local and of dimension $\leq 4$ \cite[Theorem 2.9]{marley}. The finiteness properties of local cohomology modules $H^i_I(S)$ when $S$ is a complete intersection ring are far less well-understood than when $S$ is regular. Infinite sets of associated primes can be found even when $S$ is a hypersurface ring. For example, Singh describes a hypersurface ring $S$ finitely generated over $\mathbb{Z}$, $$ S = \frac{\mathbb{Z}[u,v,w,x,y,z]}{(ux+vy+wz)} $$ such that for all prime integers $p$, the module $H^3_{(x,y,z)}(S)$ has a nonzero $p$-torsion element \cite{singhp}. This is not just a global phenomenon: Katzman later gave a local hypersurface ring $S$ containing a field $K$, $$ S = \frac{K[[u,v,w,x,y,z]]}{(wu^2x^2-(w+z)uxvy+zv^2y^2)} $$ such that $\text{Ass}\, H^2_{(x,y)}(S)$ is infinite \cite{katzinf}. Singh and Swanson construct examples of equicharacteristic local hypersurface rings to demonstrate that $\text{Ass}\, H^3_I(S)$ can be infinite even if $S$ is a UFD, an $F$-regular ring of characteristic $p>0$, or a characteristic $0$ ring with rational singularities \cite[Theorem 5.4]{singhswan}. Surprisingly, the local cohomology of a hypersurface ring is still known to possess striking finiteness properties, at least in the characteristic $p>0$ setting. A result proved independently by Katzman and Zhang \cite[Theorem 7.1]{katzhang} or by Hochster and N\'{u}\~{n}ez-Betancourt \cite[Corollary 4.13]{hochnb} says that if $R$ is a regular ring of prime characteristic $p>0$, $f\in R$ is a nonzerodivisor, and $S=R/f$, then for any ideal $I$ and any $i\geq 0$, $H^i_I(S)$ has only finitely many minimal primes. Equivalently, $\text{Supp}\, H^i_I(S)$ is a Zariski closed set. There are other situations in which the closed support property holds.
For example, if $M$ is finitely generated over $S$, then the set $\text{Supp}\,H^i_I(M)$ is known to be closed (i) when $S$ has prime characteristic $p>0$, $I$ is generated by $i$ elements, and $M=S$ \cite[Theorem 2.10]{katztop}, (ii) when $S$ is standard graded, $M$ is graded, $I$ is the irrelevant ideal, and $i$ is the cohomological dimension of $I$ on $M$ \cite[Theorem 1]{rottsega}, (iii) when $S$ is local of dimension at most $4$ \cite[Proposition 3.4]{hunkatmar}, or (iv) when $I$ has cohomological dimension at most $2$ \cite[Theorem 2.4]{hunkatmar}. It is not presently known how far these results generalize. Hochster asks whether the support of $H^i_I(M)$ must be closed for all $i\geq 0$ and all ideals $I$, where $S$ is assumed only to be Noetherian, and $M$ may be any finitely generated $S$-module \cite[Question 2]{hochreview}. Even at this level of generality, the question remains open. The present paper is motivated by Hochster's question in the complete intersection setting. Namely, does the closed support property of characteristic $p>0$ hypersurface rings \cite{katzhang,hochnb} generalize to complete intersection rings of higher codimension? \noindent \textbf{Question 1a.} Let $S$ be a complete intersection ring, let $I$ be an arbitrary ideal of $S$, and fix $i\geq 0$. Is $\text{Supp}\, H^i_I(S)$ a Zariski closed set? For simplicity, we restrict our focus to when $S$ is given by a presentation $S=R/J$ for $R$ a regular ring and $J$ an ideal generated by a regular sequence in $R$. Not all regular rings $R$ are known to have the property that $\text{Ass}\, H^i_I(R)$ is always finite. In order to avoid potential complications arising from $R$, we impose an additional hypothesis. \begin{defn} Call a Noetherian ring $R$ {\em LC-finite} if, for any ideal $I$ and any $i\geq 0$, the module $H^i_I(R)$ has a finite set of associated primes. \end{defn} We do not assume regularity in the definition.
A semilocal ring of dimension at most $1$ is trivially LC-finite, but can easily fail to be regular. Likewise, any $F$-finite ring with finite $F$-representation type (FFRT) is LC-finite \cite[Theorem 5.7]{hochnb}, but need not be regular. The class of LC-finite regular rings includes, for example, all regular local rings of dimension $\leq 4$, all regular rings of prime characteristic $p$, regular local rings containing $\mathbb{Q}$, smooth $K$-algebras for $K$ a field of characteristic $0$, unramified regular local rings of mixed characteristic, and smooth $\mathbb{Z}$-algebras. The class of LC-finite rings is closed under localization. If there is a finite set of maximal ideals $\mathfrak{m}_1,\dots,\mathfrak{m}_t$ of $R$ such that $\text{Spec}(R)-\{\mathfrak{m}_1,\dots,\mathfrak{m}_t\}$ can be covered by finitely many charts $\text{Spec}(R_f)$, each of which is LC-finite, then $R$ is LC-finite. For example, a ring of prime characteristic $p$ with isolated singularities is LC-finite. If $R$ is LC-finite and $A\to R$ is pure (e.g., if $A$ is a direct summand of $R$), then $A$ is LC-finite \cite[Theorem 3.1(d)]{hochnb}. We narrow the scope of Question 1a as follows. \noindent \textbf{Question 1b.} Let $R$ be an LC-finite regular ring, $J\subseteq R$ be an ideal generated by a regular sequence, and $S=R/J$. Let $I$ be an arbitrary ideal of $S$, and fix $i\geq 0$. Is $\text{Supp}\, H^i_I(S)$ a Zariski closed set? If $R$ is a regular ring of prime characteristic $p>0$, Hochster and N\'{u}\~{n}ez-Betancourt show that if $\text{Ass}\, H^{i+1}_I(J)$ is finite, then $\text{Supp}\, H^i_I(R/J)$ is closed \cite[Theorem 4.12]{hochnb}. Their theorem raises an immediate question. Although (to the best of our knowledge) it is not yet known whether Hochster and N\'{u}\~{n}ez-Betancourt's result generalizes to (equal or mixed) characteristic $0$, we will nonetheless investigate the following question for LC-finite regular rings of any characteristic.
\noindent \textbf{Question 2.} Let $R$ be an LC-finite regular ring, $J\subseteq R$ be an ideal generated by a regular sequence, $I$ be an ideal of $R$ containing $J$ (corresponding to an arbitrary ideal of $R/J$), and fix $i\geq 0$. Is $\text{Ass}\, H^i_I(J)$ a finite set? In Section 2, using some properties of the ideal transform functor associated with $I$, we show that, even under weaker hypotheses on $R$ and $J$, this question has a positive answer when $i=2$. These hypotheses include the case where $J=R$ and $R$ is a $\mathbb{Q}$-factorial normal ring (cf. Dao and Quy \cite[Theorem 3.3]{daoquyH2}). \begin{thm}[\ref{H2}] Let $R$ be a \textit{locally almost factorial} (see Definition \ref{laf}) Noetherian normal ring, and let $I$, $J$ be ideals of $R$. The set $\text{Ass}\, H^2_I(J)$ is finite. \end{thm} The result does not generalize to $i>2$. In Section 3, we give an example where $H^3_I(J)$ has infinitely many associated primes. In this example, $R$ is a $7$-dimensional polynomial ring, $J$ is generated by a regular sequence of length $2$, and $I$ is $4$-generated. Thus, Question 2 has a negative answer at the level of generality in which it is stated. However, our counterexample crucially requires $\text{Ass}\, H^2_I(R/J)$ to be infinite. In fact, $R/J$ in our counterexample is isomorphic to Katzman's hypersurface \cite{katzinf}. The nature of the counterexample therefore raises the following natural question that, to the best of our knowledge, remains open. \noindent \textbf{Question 3.} Let $R$ be an LC-finite regular ring, $J\subseteq R$ be an ideal generated by a regular sequence, $I$ be an ideal of $R$ containing $J$, and fix $i\geq 0$. Does the finiteness of $\text{Ass}\, H^{i-1}_I(R/J)$ imply the finiteness of $\text{Ass}\, H^i_I(J)$? Notice that if $\text{depth}_J(R)=1$, then $J\simeq R$ as an $R$-module, so the question has a trivially positive answer.
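To spell the trivial case out (an elementary remark, recorded only for completeness): if $\text{depth}_J(R)=1$ and $J$ is generated by a regular sequence, then $J=xR$ for a single nonzerodivisor $x\in R$, and multiplication by $x$ gives an isomorphism
$$R\xrightarrow{\ \cdot x\ } xR=J,\qquad\text{so that}\quad H^i_I(J)\simeq H^i_I(R)\ \text{for all}\ i\geq 0,$$
and the modules on the right-hand side have finitely many associated primes because $R$ is LC-finite.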
The question is only interesting when $\text{depth}_J(R)\geq 2$, and our methods in fact require $\text{depth}_J(R)\geq 2$. We investigate this question in Section 4 for various small values of $i$. We give a partial positive answer to both this question and its converse when $i\leq 4$. We emphasize in this result that $R$ is allowed to have any characteristic, even mixed characteristic, so long as it satisfies the LC-finiteness condition. \begin{thm}[\ref{smalli}] Let $R$ be an LC-finite regular ring, let $J\subseteq R$ be an ideal generated by a regular sequence of length $j\geq 2$, and let $S=R/J$. For $I$ an ideal of $R$ containing $J$, \begin{enumerate} \item[(i)] $\text{Ass}\, H^i_I(J)$ and $\text{Ass}\, H^{i-1}_I(S)$ are always finite for $i\leq 2$. \item[(ii)] If the irreducible components of $\text{Spec}(S)$ are disjoint (e.g. $S$ is a domain), then $\text{Ass}\, H^{3}_I(J)$ is finite if and only if $\text{Ass}\,{H^2_I(S)}$ is finite. \item[(iii)] If $S$ is normal and locally almost factorial (e.g. $S$ is a UFD), then $\text{Ass}\,{H^4_I(J)}$ is finite if and only if $\text{Ass}\, H^3_I(S)$ is finite. \end{enumerate} \end{thm} A key aspect of Hochster and Núñez-Betancourt's strategy in codimension $1$ was using the finiteness of $\text{Ass}\, H^i_I(J)$ to control $\text{Min}\, H^{i-1}_I(S)$, even in a situation where $\text{Ass}\, H^{i-1}_I(S)$ may be infinite. However, the ``only if'' statements in Theorem \ref{smalli} imply that this strategy cannot be straightforwardly generalized to the higher codimension setting. Namely, if $\text{Ass}\, H^{i-1}_I(S)$ is infinite, then (under the hypotheses of the theorem) $\text{Ass}\, H^{i}_I(J)$ must be infinite as well. The proof of Theorem \ref{smalli} relies heavily on the following isomorphism of functors, which may be of some independent interest. This isomorphism is a generalization of a result of Hellus \cite[Theorem 3]{hellus}. \begin{thm}[\ref{genhel}] Let $R$ be a Noetherian ring and let $I\subseteq R$ be any ideal.
Fix $i\geq 0$. There is an ideal $I'\supseteq I$ (resp. $I''\supseteq I$) such that \begin{itemize} \item There is a natural isomorphism $H^i_{I'}(-) \xrightarrow{\sim} H^i_{I}(-)$ (resp. there is a natural surjection $H^i_{I''}(-) \twoheadrightarrow H^i_{I}(-)$) \item $\text{ht}(I')\geq i-1$ (resp. $\text{ht}(I'')\geq i$) \end{itemize} \end{thm} Separately from our main application, Theorem \ref{genhel} yields a new proof of a result of Marley \cite[Proposition 2.3]{marley} on the set of height $i$ primes in the support of modules of the form $H^i_I(M)$. See Corollary \ref{heightcontrol}. Returning to the main objects of study, we observe that in Theorem \ref{smalli}, our control over the associated primes of $H^i_I(J)$ tends to become greater as our hypotheses on $R/J$ become more restrictive. We turn in Section 5 to investigating the extreme case in which $R/J$ is itself regular and LC-finite. At least in characteristic $p>0$, we obtain the following result. \begin{thm}[\ref{charpj}] Let $R$ be a regular ring of prime characteristic $p>0$, let $J\subseteq R$ be an ideal such that $R/J$ is regular, let $\mathcal{M}$ be an $F$-finite $F_R$-module (e.g. $\mathcal{M}=R$), and let $I$ be any ideal of $R$. For all $i\geq 0$, $\text{Ass}\, H^i_I(J\mathcal{M})$ is finite. \end{thm} \noindent\textit{Convention:} Throughout this paper, we assume that all given rings are Noetherian, unless stated otherwise. \section{Finiteness of $\text{Ass}\, H^2_I(J)$} \noindent If we are focused exclusively on cohomological degree $2$, many of the hypotheses of our basic setting can be relaxed. $R$ need only be normal with a condition somewhat weaker than local factoriality, and the ideal $J\subseteq R$ can be completely arbitrary -- we do not need $J$ to be generated by a regular sequence.
Our goal is to show that $\text{Ass}\, H^2_I(J)$ is finite for any ideal $I\subseteq R$. The main case is when $\text{depth}_I(R)=1$. \begin{lem}\label{depth1} Let $R$ be a Noetherian domain, and let $J\subseteq R$ be an ideal. If $I\subseteq R$ is an ideal such that $\text{depth}_I(R)\neq 1$, then $\text{Ass}\, H^2_I(J)$ is finite. \end{lem} \begin{proof} If $I= (0)$ or $I=R$, there is nothing to do, so we assume that $I$ is a nonzero proper ideal. Since $R$ is a domain, this implies that both $J$ and $R$ are $I$-torsionfree, giving $\text{depth}_I(R)>0$; since $\text{depth}_I(R)\neq 1$ by hypothesis, we have $\text{depth}_I(R)\geq 2$. The following sequence is exact. \begin{center}{ \begin{tikzpicture}[descr/.style={fill=white,inner sep=1.5pt}] \matrix (m) [ matrix of math nodes, row sep=1em, column sep=2.5em, text height=1.5ex, text depth=0.25ex ] { 0 & 0 & 0 & \Gamma_I(R/J) \\ & H^1_I(J) & 0 & H^1_I(R/J) \\ & H^{2}_I(J) & H^{2}_I(R) & H^{2}_I(R/J) \\ }; \path[overlay,->, font=\scriptsize,>=latex] (m-1-1) edge (m-1-2) (m-1-2) edge (m-1-3) (m-1-3) edge (m-1-4) (m-1-4) edge[out=345,in=165] (m-2-2) (m-2-2) edge (m-2-3) (m-2-3) edge (m-2-4) (m-2-4) edge[out=345,in=165] (m-3-2) (m-3-2) edge (m-3-3) (m-3-3) edge (m-3-4) ; \end{tikzpicture}} \end{center} Note that $H^1_I(J) \simeq \Gamma_I(R/J)$ is finitely generated, meaning that $H^2_I(J)$ is either finitely generated or the first non-finitely-generated local cohomology module of $J$ on $I$, and the stated result follows at once from Brodmann and Lashgari Faghani \cite[Theorem 2.2]{brodlash}. \end{proof} \begin{lem}\label{facdecomp} Let $R$ be a Noetherian normal ring, and $I\subseteq R$ be an ideal such that $\text{depth}_I(R)=1$. Then $\sqrt{I}=L\cap I_0$ for some ideal $L$ given by the intersection of height one primes, and some ideal $I_0\subseteq R$ with $\text{depth}_{I_0}(R)\geq 2$. \end{lem} \begin{proof} Since $R$ is normal and $\text{depth}_I(R)=1$, we have $\text{ht}(I)=1$, so $\sqrt{I}=L\cap I_0$ where $L$ is the intersection of the height one minimal primes of $I$ and $I_0$ is the intersection of the minimal primes of height at least $2$. In a Noetherian normal ring $R$, $\text{ht}(I_0)\geq 2$ implies $\text{depth}_{I_0}(R)\geq 2$.
\iffalse First note that for an ideal $\mf{a}$ in a normal ring $R$, $\text{depth}_{\mf{a}}(R)=1$ if and only if $\text{ht}(\mf{a})=1$. Indeed, if $\text{ht}(\mf{a})=1$, then certainly $\text{depth}_{\mf{a}}(R)\leq 1$. $R$ has no embedded primes, so if $\mf{a}$ is not contained in any minimal prime of $R$, $\mf{a}$ contains a nonzerodivisor, and thus $\text{depth}_{\mf{a}}(R)\geq 1$. If, on the other hand, we assume $\text{depth}_{\mf{a}}(R)=1$, then clearly $\text{ht}(\mf{a})\geq 1$. Take $x\in \mf{a}$ a nonzerodivisor. Since $\text{depth}_{\mf{a}}(R/xR)=0$, $\mf{a}$ is contained in an associated prime of $xR$, all of which have height $1$, giving $\text{ht}(\mf{a})\leq 1$. Since $\text{ht}(I)=1$, the radical of $I$ can be written as $L\cap I_0$ where $L$ has pure height one and $I_0$ is an intersection of primes of height $\geq 2$. Since $\text{ht}(I_0)\geq 2$, it must be the case that $\text{depth}_{I_0}(R)\geq 2$.\fi \end{proof} Recall the notion of an almost factorial ring. \begin{defn}\label{laf} A normal domain $R$ is called \textit{almost factorial} if the class group of $R$ is torsion. A normal ring $R$ is called \textit{locally almost factorial} if $R_P$ is almost factorial for all $P\in\text{Spec}(R)$. \end{defn} A regular ring, for example, is locally (almost) factorial. Hellus shows that an almost factorial Cohen-Macaulay local ring of dimension at most four is LC-finite \cite[Theorem 5]{hellus}. Our application of the almost factorial hypothesis is motivated by its use in \cite{hellus}. The main property we require is that every height $1$ prime of an almost factorial ring is principal up to taking radicals. Recall that an ideal $I$ is said to have \textit{pure} height $h$ if every minimal prime of $I$ has height exactly $h$. An ideal of pure height $1$ is (up to radical) a product of height $1$ primes, so in an almost factorial ring, any pure height $1$ ideal is principal up to radicals.
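To keep a concrete example in mind (a standard computation, recorded only for illustration): let $A=k[x,y,z]/(xy-z^2)$ for $k$ a field of characteristic not $2$. Then $A$ is a normal domain with class group $\mathbb{Z}/2\mathbb{Z}$, so $A$ is almost factorial without being factorial. The height $1$ prime $P=(x,z)A$ is not principal, but since $z^2=xy$ in $A$,
$$P^2=(x^2,xz,z^2)A=x(x,z,y)A\subseteq xA\subseteq P,$$
whence $\sqrt{xA}=P$, and $P$ is indeed principal up to radicals.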
\begin{lem} Let $R$ be a locally almost factorial Noetherian normal ring, and $L$ be an ideal of pure height $1$. Then there is a finite cover of $\text{Spec}(R)$ by open charts $\text{Spec}(R_{f_1}),\cdots,\text{Spec}(R_{f_t})$ such that for each $i$, $LR_{f_i}$ has the same radical as a principal ideal. \end{lem} \begin{proof} We do no harm in replacing $L$ with $\sqrt{L}$, so assume $L$ is radical. Consider a single point $P\in\text{Spec}(R)$. Since $R_P$ is almost factorial, we can write $LR_P = \sqrt{y R_P}$ for some $y\in R_P$. Up to multiplying by units of $R_P$, we may assume that $y$ is an element of $R$. Since $y\in LR_P\cap R$, there is some $u\in R-P$ such that $uy\in L$. Also, since $R$ is Noetherian, there is some $n>0$ such that $L^n R_P\subseteq yR_P$, hence $L^n\subseteq yR_P\cap R$, and there is some $v\in R-P$ such that $vL^n\subseteq yR$. If $f=uv$, then we see that $y\in LR_f$ and $L^n\subseteq yR_f$, giving $LR_f =\sqrt{yR_f}$. Our choice of $f$ depends on $P$. Varying over all $P\in\text{Spec}(R)$, we obtain a collection of open charts $\{\text{Spec}(R_{f_P})\}_{P\in\text{Spec}(R)}$ which cover $\text{Spec}(R)$ such that (the expansion of) $L$ is principal up to radicals on each chart. Since $\text{Spec}(R)$ is quasicompact, finitely many of these charts cover the whole space. \end{proof} \begin{corol}\label{localprinc} Let $R$ be a locally almost factorial Noetherian normal ring, and $I\subseteq R$ be an ideal such that $\text{depth}_I(R)=1$. Then there is an ideal $I_0\subseteq R$ with $\text{depth}_{I_0}(R)\geq 2$, and a finite cover of $\text{Spec}(R)$ by open charts $\text{Spec}(R_{f_1}),\cdots,\text{Spec}(R_{f_t})$ such that for each $i$, $\sqrt{IR_{f_i}}=\sqrt{y_iR_{f_i}}\cap I_0$ for some $y_i\in R$. \end{corol} To show that $\text{Ass}\, H^2_I(J)$ is finite, it would certainly suffice to show that $\text{Ass}\, H^2_{IR_{f}}(JR_{f})$ is finite on each chart $\text{Spec}(R_f)$ of a finite cover of $\text{Spec}(R)$.
Thus, up to radicals, the main case to understand is when $I$ has the form $yR \cap I_0$, with $y\in R$ and $\text{depth}_{I_0}(R)\geq 2$. This decomposition can be used to get a better understanding of the $I$-transform functor. For an ideal $\mathfrak{a}\subseteq R$, recall the $\mathfrak{a}$-transform $$D_{\mathfrak{a}}(-) := \varinjlim_{t} \text{Hom}_R(\mathfrak{a}^t,-)$$ is a left exact covariant functor whose right derived functors satisfy $\mathscr{R}^i D_{\mathfrak{a}}(-) \simeq H^{i+1}_{\mathfrak{a}}(-)$. There is a sense in which $D_{\mathfrak{a}}(-)$ forces $\text{depth}_{\mathfrak{a}}(-)\geq 2$ without modifying higher local cohomology on $\mathfrak{a}$. Namely, for any $R$-module $M$, $\Gamma_{\mathfrak{a}}(D_{\mathfrak{a}}(M))=H^1_{\mathfrak{a}}(D_{\mathfrak{a}}(M))=0$, and $H^i_{\mathfrak{a}}(D_{\mathfrak{a}}(M))=H^i_{\mathfrak{a}}(M)$ for all $i\geq 2$. Below, we collect a few fundamental facts about the ideal transform, all of which can be found in the treatment presented in Chapter 2 of Brodmann and Sharp \cite{brodsharp}, along with proofs of the previous assertions. \textit{Notational remark:} If $F$ and $G$ are functors $\mathfrak{C}\to\mathfrak{D}$, we will write natural transformations from $F$ to $G$ as $\phi(-):F(-)\to G(-)$, which consists of the data of a map denoted $\phi(A):F(A)\to G(A)$ for each object $A$ of $\mathfrak{C}$, such that for any $\mathfrak{C}$-morphism $f:A\to B$, $\phi(B)\circ F(f) = G(f)\circ \phi(A)$. \begin{lem}\label{four} (Brodmann and Sharp \cite[Theorem 2.2.4(i)]{brodsharp}) Let $R$ be a Noetherian ring and fix an ideal ${\mathfrak{a}}\subseteq R$. 
There is a natural transformation $\eta_{\mathfrak{a}}(-):\text{Id}\to D_{\mathfrak{a}}(-)$ such that, for any $R$-module $M$, there is an exact sequence $$ 0\to\Gamma_{\mathfrak{a}}(M)\to M\xrightarrow{\eta_{\mathfrak{a}}(M)} D_{\mathfrak{a}}(M)\to H^1_{\mathfrak{a}}(M)\to 0 $$ \end{lem} \begin{lem}\label{fac} (Brodmann and Sharp \cite[Proposition 2.2.13]{brodsharp}) Let $R$ be a Noetherian ring, and ${\mathfrak{a}}\subseteq R$ be an ideal. Let $e:M\to M'$ be a homomorphism of $R$-modules such that $\text{Ker}\,e$ and $\text{Coker}\,e$ are both ${\mathfrak{a}}$-torsion. Then \begin{enumerate}[(i)] \item The map $D_{\mathfrak{a}}(e):D_{\mathfrak{a}}(M)\to D_{\mathfrak{a}}(M')$ is an isomorphism. \item There is a unique $R$-module homomorphism $\varphi:M'\to D_{\mathfrak{a}}(M)$ such that the diagram \begin{center} \begin{tikzcd} M\ar[r,"e"]\ar[dr,"\eta_{\mathfrak{a}}(M)"'] & M'\ar[d,"\varphi"]\\ &D_{\mathfrak{a}}(M) \end{tikzcd} \end{center} commutes. In fact, $\varphi = D_{\mathfrak{a}}(e)^{-1}\circ \eta_{\mathfrak{a}}(M')$. \item The map $\varphi$ of (ii) is an isomorphism if and only if $\eta_{\mathfrak{a}}(M')$ is an isomorphism, and this is the case if and only if $\Gamma_{\mathfrak{a}}(M')=H^1_{\mathfrak{a}}(M')=0$. \end{enumerate} \end{lem} The main ingredient for dealing with $H^2_I(J)$ when $\text{depth}_I(R)=1$ is the following compatibility property of the $I$-transform functor with our decomposition $I=yR\cap I_0$. \begin{prop}\label{loc} Let $R$ be a Noetherian ring, $y$ an element of $R$, and $I_0\subseteq R$ an ideal. Let $I=yR\cap I_0$. There is a natural isomorphism of functors $D_{I_0}(-)_y\simeq D_{I}(-)$. \end{prop} \begin{proof} Precomposing $\eta_{I_0}(-)_y:(-)_y\to D_{I_0}(-)_y$ with the localization map $\text{Id}\to (-)_y$, we obtain a natural transformation $\gamma(-):\text{Id}\to D_{I_0}(-)_y$.
We claim that for any module $M$, both the kernel and cokernel of $\gamma(M):M\to D_{I_0}(M)_y$ are $I$-torsion, where $I=yR\cap I_0$: \begin{itemize} \item $\text{Ker}\,\gamma(M)$ consists of those $m\in M$ such that $m/1\in \Gamma_{I_0}(M)_y$, or, equivalently, $y^t m\in \Gamma_{I_0}(M)$ for some $t\geq 0$. Let $s$ be such that $I_0^s y^t m =0$. Then $(yI_0)^{\max(s,t)}m=0$, so $m\in \Gamma_{yI_0}(M)=\Gamma_{yR\cap I_0}(M)$ (since $\sqrt{yI_0} = \sqrt{yR\cap I_0}$). \item An element of $C=\text{Coker}\,\gamma(M)$ can be represented by $c=f/y^t$ for some $f\in D_{I_0}(M)$, $t\geq 0$. $\text{Coker}\,\eta_{I_0}(M)$ is $I_0$-torsion, so there is some $s$ such that $I_0^s f\subseteq \text{Im}\,\eta_{I_0}(M)$. Since $f=y^t c$, we have $(yI_0)^{\max(s,t)}c \subseteq \text{Im}\,\gamma(M)$. The element of $C$ represented by $c$ therefore belongs to $\Gamma_{yI_0}(C) = \Gamma_{yR\cap I_0}(C)$. \end{itemize} Lemma \ref{fac}(ii) therefore gives a map $\varphi(M): D_{I_0}(M)_y\to D_I(M)$, specifically $\varphi(M)=D_I(\gamma(M))^{-1}\circ \eta_{I}(D_{I_0}(M)_y)$. Both factors of this composite come from the natural transformations $D_I(\gamma(-))^{-1}$ and $\eta_I(D_{I_0}(-)_y)$, so $\varphi(-):D_{I_0}(-)_y\to D_I(-)$ is itself a natural transformation. It remains to show that $\varphi(M)$ is an isomorphism for each $M$, which is equivalent, by Lemma \ref{fac}(iii), to showing that $\Gamma_I(D_{I_0}(M)_y)=H^1_I(D_{I_0}(M)_y)=0$. This can be done using the Mayer-Vietoris sequence associated with the intersection $yR\cap I_0$. Each module $H^i_{yR+I_0}(D_{I_0}(M)_y)$ vanishes because $y\in yR+I_0$ acts as a unit on $D_{I_0}(M)_y$, and likewise for the modules $\Gamma_{yR}(D_{I_0}(M)_y)$ and $H^1_{yR}(D_{I_0}(M)_y)$. Note that $\text{depth}_{I_0}(D_{I_0}(M))\geq 2$, and localization can only make depth go up, so $\Gamma_{I_0}(D_{I_0}(M)_y)=H^1_{I_0}(D_{I_0}(M)_y)=0$.
\begin{center}{ \begin{tikzpicture}[descr/.style={fill=white,inner sep=1.5pt}] \matrix (m) [ matrix of math nodes, row sep=1em, column sep=2.5em, text height=1.5ex, text depth=0.25ex ] { 0 & 0 & 0 & \Gamma_{yR\cap I_0}(D_{I_0}(M)_y)\\ & 0 & 0 & H^1_{yR\cap I_0}(D_{I_0}(M)_y)\\ & 0 & 0\oplus H^2_{I_0}(D_{I_0}(M)_y) & H^2_{yR\cap I_0}(D_{I_0}(M)_y)\\ }; \path[overlay,->, font=\scriptsize,>=latex] (m-1-1) edge (m-1-2) (m-1-2) edge (m-1-3) (m-1-3) edge (m-1-4) (m-1-4) edge[out=350,in=170] (m-2-2) (m-2-2) edge (m-2-3) (m-2-3) edge (m-2-4) (m-2-4) edge[out=350,in=170] (m-3-2) (m-3-2) edge (m-3-3) (m-3-3) edge (m-3-4) ; \end{tikzpicture}} \end{center} We can now see that $\Gamma_I(D_{I_0}(M)_y)=H^1_I(D_{I_0}(M)_y)=0$, as desired. \end{proof} \begin{corol}\label{depth1coh} Let $R$ be a Noetherian ring, $y$ an element of $R$, and $I_0\subseteq R$ an ideal. Let $I=yR\cap I_0$. Then for all $i\geq 2$, there is a natural isomorphism of functors $H^i_I(-)\simeq H^i_{I_0}(-)_y$. \end{corol} \begin{proof} It is equivalent to show that $\mathscr{R}^{i-1}D_I(-)\simeq \left(\mathscr{R}^{i-1}D_{I_0}(-)\right)_y$. We can calculate $\mathscr{R}^{i-1}D_I(M)$ as $H^{i-1}(D_I(E^\bullet))$ where $M\to E^\bullet$ is an injective resolution, but by Proposition \ref{loc}, $D_I(-)\simeq D_{I_0}(-)_y$, and $(-)_y$ commutes with the formation of cohomology. Thus, $$H^{i-1}(D_I(E^\bullet))\simeq H^{i-1}(D_{I_0}(E^\bullet))_y= \left(\mathscr{R}^{i-1}D_{I_0}(M)\right)_y.$$ \end{proof} \begin{theor}\label{H2} Let $R$ be a locally almost factorial Noetherian normal ring, and $I$, $J$ be ideals of $R$. The set $\text{Ass}\, H^2_I(J)$ is finite. \end{theor} \begin{proof} $R$ is a product of normal domains $R_1\times\cdots\times R_k$, and $J$ is a product of ideals $J_1\times\cdots\times J_k$ with $J_i\subseteq R_i$. It is enough to show that $\text{Ass}\, H^2_{IR_i}(J_i)$ is finite for all $i$, so assume that $R$ is a domain.
By Lemma \ref{depth1}, we need only deal with the case in which $\text{depth}_I(R)=1$. It is enough to show that $\text{Ass}\, H^2_{IR_f}(J_f)$ is finite for each chart $\text{Spec}(R_f)$ in a finite cover of $\text{Spec}(R)$. By Corollary \ref{localprinc}, working with one chart at a time, and replacing $R$ by $R_f$ and $I$ by an ideal with the same radical, we may assume that $I$ has the form $I=yR\cap I_0$ where $\text{depth}_{I_0}(R)\geq 2$. By Corollary \ref{depth1coh}, this decomposition gives $H^2_I(J)\simeq H^2_{I_0}(J)_y$. It is therefore enough to show that $\text{Ass}\, H^2_{I_0}(J)$ is finite. But $\text{depth}_{I_0}(R)\geq 2$, so this follows at once from Lemma \ref{depth1}. \end{proof} \section{An example of an infinite $\text{Ass}\, H^3_I(J)$} Let $K$ be a field, $A = K[u,v,w,x,y,z]$, and $f=wv^2 x^2 - (w+z)vxuy + zu^2y^2$. Katzman \cite{katzinf} showed that $\text{Ass}\, H^2_{(x,y)}(A/f)$ is infinite. The hypersurface ring $S=A/f$ can be presented as a complete intersection ring of higher codimension. For example, if $R = A[t]$ and $J=(t,f)R$, then clearly $S \simeq R/J$. Also, if $I = (t,f,x,y)R$, then $IS=(x,y)S$ and thus $H^i_{I}(S)\simeq H^i_{(x,y)}(S)$ for all $i$. 
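As a side remark that can be verified by direct expansion, Katzman's polynomial is reducible:
$$f=wv^2x^2-(w+z)uvxy+zu^2y^2=(vx-uy)(wvx-zuy),$$
so $S$ is not a domain. On the other hand, $x$ lies in neither of the minimal primes $(vx-uy)$ and $(wvx-zuy)$ of $fA$, so $x$ is a nonzerodivisor on $S$.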
In this case, $\text{depth}_I(R) = 3$ (the sequence $t,f,x\in I$ is $R$-regular), so the long exact sequence from applying $\Gamma_I(-)$ to $0 \to J \to R \to S\to 0$ begins with \begin{center}{ \begin{tikzpicture}[descr/.style={fill=white,inner sep=1.5pt}] \matrix (m) [ matrix of math nodes, row sep=1em, column sep=2.5em, text height=1.5ex, text depth=0.25ex ] { 0 & 0 & 0 & 0 \\ & H^1_I(J) & 0 & H^1_{(x,y)}(S) \\ & H^2_I(J) & 0 & H^2_{(x,y)}(S) \\ & H^{3}_I(J) & H^3_I(R) & 0 \\ & H^{4}_I(J) & H^{4}_I(R) & 0 \\ & \vdots & \vdots & \vdots \\ }; \path[overlay,->, font=\scriptsize,>=latex] (m-1-1) edge (m-1-2) (m-1-2) edge (m-1-3) (m-1-3) edge (m-1-4) (m-1-4) edge[out=345,in=165] (m-2-2) (m-2-2) edge (m-2-3) (m-2-3) edge (m-2-4) (m-2-4) edge[out=345,in=165] (m-3-2) (m-3-2) edge (m-3-3) (m-3-3) edge (m-3-4) (m-3-4) edge[out=345,in=165] (m-4-2) (m-4-2) edge (m-4-3) (m-4-3) edge (m-4-4) (m-4-4) edge[out=345,in=165] (m-5-2) (m-5-2) edge (m-5-3) (m-5-3) edge (m-5-4) (m-5-4) edge[out=345,in=165] (m-6-2) ; \end{tikzpicture}} \end{center} From this, we see that $H^2_{(x,y)}(S)$ is isomorphic to a submodule of $H^3_I(J)$. $\text{Ass}\, H^2_{(x,y)}(S)$ is infinite, so $\text{Ass}\, H^3_I(J)$ must be infinite as well. Note that this example occurs when $J$ is generated by a regular sequence of length $2$. This approach can be used to produce examples of parameter ideals $J$ with a similar relationship to $R$ and $S$. If $S = A/J_0$ is a complete intersection ring of dimension $n$, with $A$ an LC-finite regular ring and $J_0$ an ideal generated by a regular sequence, we can let $R = A[z_1,\cdots,z_{N}]$ for $N\gg 0$ and $J=(z_1,\cdots,z_N)R + J_0R$, and we will have $S\simeq R/J$. For any ideal $I_0\subseteq A$, set $I = J + I_0 R$; then $H^i_{I_0}(S) \simeq H^i_{I}(S)$ for all $i$, and $d=\text{depth}_I(R) > n+1$ if $N$ is large enough.
Using the long exact sequence from applying $\Gamma_I(-)$ to $0\to J \to R \to S\to 0$, it follows at once that $H^i_I(J) \simeq H^{i-1}_{I_0}(S)$ for all $1\leq i \leq n+1$, that $H^i_I(J) = 0$ for $n+2\leq i \leq d-1$, and that $H^i_I(J) \simeq H^i_I(R)$ for all $i\geq d$. In sufficiently large cohomological degrees, $\text{Ass}\, H^i_I(J)$ is finite, and in degrees $1\leq i\leq n+1$, the local cohomology of $J$ has exactly the same finiteness properties as the local cohomology of $S$. \section{Finiteness of $\text{Ass}\, H^i_I(J)$ vs finiteness of $\text{Ass}\, H^{i-1}_I(R/J)$} The class of examples presented in Section 3 may be somewhat unsatisfying, since the infinite collection of embedded primes we found in $H^i_I(J)$ was directly inherited from $H^{i-1}_I(R/J)$. It is an interesting and unresolved question whether there exist modules $H^i_I(J)$ that exhibit this type of behavior \textit{even when} $H^{i-1}_I(R/J)$ is well-controlled. To be precise, the following question is, to the best of our knowledge, still open: \noindent\textbf{Question 3.} Let $R$ be an LC-finite regular ring, $J\subseteq R$ be an ideal generated by a regular sequence, and $I$ be an ideal of $R$ containing $J$. Does the finiteness of $\text{Ass}\, H^{i-1}_I(R/J)$ imply the finiteness of $\text{Ass}\, H^i_I(J)$? If $J$ is generated by a regular sequence of length $1$, then $J\simeq R$ as an $R$-module, and $\text{Ass}\, H^i_I(J)$ is finite, so the claim is trivially true. We therefore restrict our attention to when $\text{depth}_J(R)\geq 2$. We think of $i$ as being fixed with $I$ varying. The case $i = 2$ is completely answered, since $\text{Ass}\, H^1_I(R/J)$ is finite (as is true of $H^1_I(M)$ for any finitely generated module $M$), and $\text{Ass}\, H^2_I(J)$ is finite by Theorem \ref{H2}. The goal of this section is to give a partial positive answer to this question when $i=3$ and when $i=4$. As $i$ gets larger, our results require increasingly restrictive hypotheses on $R/J$.
To begin, notice that we can very easily ignore ideals $I$ where the depth of $R$ on $I$ is too large. \begin{lem}\label{largedep} Let $R$ be a Noetherian ring, let $I$ and $J$ be ideals of $R$, and let $S=R/J$. Fix $i\geq 1$ and assume $\text{Ass}\, H^i_I(R)$ is finite. If $\text{depth}_I(R)>i-1$, then $\text{Ass}\, H^{i}_I(J)$ is finite if and only if $\text{Ass}\, H^{i-1}_I(R/J)$ is finite. \end{lem} \begin{proof} There is a short exact sequence $ 0 \to H^{i-1}_I(R/J) \to H^{i}_I(J) \to N \to 0 $ where $N\subseteq H^i_I(R)$, so $\text{Ass}\, N$ is finite. \end{proof} We may therefore restrict our focus to the case where $\text{depth}_I(R)\leq i-1$. We will show that it actually suffices to only consider ideals $I$ such that $\text{depth}_I(R) =i-1$. This crucial simplification is inspired by a very similar strategy employed by Hellus, albeit for different applications \cite{hellus}. \subsection{A generalized isomorphism of Hellus} Before proceeding, we require a notion of parameters in a global ring, and the following lemma provides one suitable enough for use in our main proofs. \begin{lem} Let $R$ be a Noetherian ring, let $I$ be a proper ideal of height $h\geq 0$, and let $J\subseteq I$ be an ideal of height $j\geq 0$. \begin{enumerate}[(a)] \item If an ideal of the form $(x_1,\cdots,x_h)R$ has height $h$, then it has pure height $h$. \item Any sequence $x_1,\cdots,x_j\in J$ generating an ideal of height $j$ (including the empty sequence if $j=0$) can be extended to a sequence $x_1,\cdots,x_h\in I$ generating an ideal of height $h$. \item There is a sequence $x_1,\cdots,x_h\in I$ such that $(x_1,\cdots,x_h)R$ has (necessarily pure) height $h$. \end{enumerate} \end{lem} \begin{proof} (a) Every minimal prime of a height $h$ ideal has height at least $h$, and every minimal prime of an $h$-generated ideal has height at most $h$. (b) If $j=h$, there is nothing to do, so assume $j<h$.
By induction, it is enough to show that we can extend the sequence by one element. Since $j<h$, $I$ is not contained in any minimal prime of $(x_1,\cdots,x_j)R$ (all of which have height $j$), and so we may choose $x\in I$ avoiding all such primes. A height $j$ prime containing $(x_1,\cdots,x_j)R$ therefore cannot also contain $xR$. Thus, the minimal primes of $(x,x_1,\cdots,x_j)R$ have height at least $j+1$. By Krull's height theorem, they also have height at most $j+1$. (c) This follows at once from (b) by taking $J=(0)$. \end{proof} \noindent By convention, we take the height of the unit ideal to be $+\infty$. An intersection of prime ideals of $R$ indexed by the empty set is taken to be $\bigcap_{i\in\varnothing} P_i=R$. If $R$ is a ring and $I$ is an ideal, let $\text{ara}_R(I)$ denote the arithmetic rank of $I$, i.e. the minimum number of generators among all ideals with the same radical as $I$. \begin{lem}\label{chooseelt} Let $R$ be a Noetherian ring, let $I$ be a proper ideal of height $h$, and let $J\subseteq I$ be an ideal of height $j\leq h$ generated by $j$ elements. There is an element $y\in R$ that satisfies the following properties. \begin{enumerate}[(i)] \item $\text{ara}_R(yR\cap I)\leq h$ \item $\text{ara}_{R/J}(y(R/J)\cap(I/J))\leq h-j$ \item $\text{ht}(yR+I)\geq h+1$ \end{enumerate} \end{lem} \begin{proof} Write $J=(x_1,\cdots,x_j)R$, and extend this sequence to $x_1,\cdots,x_h\in I$ generating an ideal of height $h$. $I$ is contained in at least one minimal prime of $(x_1,\cdots,x_h)R$. \iffalse If $I$ is contained in \textit{every} minimal prime of $(x_1,\cdots,x_h)R$, namely $\sqrt{I}=\sqrt{(x_1,\cdots,x_h)R}$, then $\text{ara}(I)\leq h$, and $\sqrt{I/J} =\sqrt{(x_1,\cdots,x_h)R/J} = \sqrt{(x_{j+1},\cdots,x_h)R/J}$, so $\text{ara}(I/J)\leq h-j$.
For any $y\in R$, $\sqrt{yR\cap I} = \sqrt{(yx_1,\cdots,yx_h)R}$ and $\sqrt{y(R/J)\cap I} = \sqrt{(yx_{j+1},\cdots,yx_h)R/J}$, meaning that conditions (i) and (ii) on $y$ are automatic. We could take $y=1$, in which case $\text{ht}(yR+I)=+\infty > h+1$, satisfying condition (iii).\fi Let $P_1,\cdots,P_t$ be the minimal primes of $(x_1,\cdots,x_h)R$ containing $I$, and $Q_1,\cdots,Q_s$ be the minimal primes of $(x_1,\cdots,x_h)R$ that do not. We may have $s=0$. Since these primes are pairwise incomparable, there exist elements $y\in Q_1\cap\cdots\cap Q_s$ that avoid the union $P_1\cup\cdots\cup P_t$ (if $s=0$, we can take $y=1$). For any such $y$, it holds that $$yR \cap I\subseteq P_1\cap\cdots\cap P_t\cap Q_1\cap\cdots\cap Q_s=\sqrt{(x_1,\cdots,x_h)R}$$ and thus $yR\cap I\subseteq yR\cap \sqrt{(x_1,\cdots,x_h)R}$. Since $(x_{1},\cdots,x_h) \subseteq I$, we see that $$ yR \cap (x_{1},\cdots,x_h)R \subseteq yR \cap I \subseteq yR \cap \sqrt{(x_{1},\cdots,x_h)R} $$ It follows at once that $$\sqrt{yR\cap I}=\sqrt{yR\cap(x_{1},\cdots,x_h)R}=\sqrt{(yx_{1},\cdots,yx_h)R}$$ producing the desired bound on arithmetic rank: $\text{ara}_R(yR \cap I)\leq h$. Since $yR/J\cap I/J\subseteq \sqrt{(x_{j+1},\cdots,x_h)R/J}$, an identical argument to the above shows that $$\sqrt{yR/J\cap I/J} = \sqrt{(yx_{j+1},\cdots,yx_h)R/J}$$ so $\text{ara}_{R/J}(yR/J\cap I/J)\leq h-j$. We have established (i) and (ii). Concerning (iii), note that all primes containing $yR+I$ also contain $(x_1,\cdots,x_h)R$, and thus, to show that $\text{ht}(yR+I)\geq h+1$, it is enough to show that none of the height $h$ primes containing $(x_1,\cdots,x_h)R$ appear in $V(yR+I)$. But this is clear, since $\{P\supseteq (x_1,\cdots,x_h)R\hspace{0.1cm}|\hspace{0.1cm} \text{ht}(P)=h\} =\{P_1,\cdots,P_t,Q_1,\cdots,Q_s\}$.
None of the primes $P_i$ contain $yR$, and none of the primes $Q_j$ contain $I$. \end{proof} The following is a generalization of Hellus's isomorphism \cite[Theorem 3]{hellus}. We significantly relax the hypotheses (originally $R$ was assumed to be a Cohen-Macaulay local ring) and obtain an isomorphism of functors (originally the isomorphism was of modules $H^i_{I'}(R)\to H^i_I(R)$). \begin{theor}\label{mainj} Let $R$ be a Noetherian ring, let $I$ be an ideal of height $h$, and let $J\subseteq I$ be an ideal of height $j\geq 0$ generated by $j$ elements. For any $k\geq 0$, there is an ideal $I_{k,J}\supseteq I$ such that \begin{itemize} \item $\text{ht}(I_{k,J}) \geq \text{ht}(I)+k$ \item The natural transformation \begin{center} \begin{tikzcd}[column sep = 2.0em] H^i_{I_{k,J}}(-)\ar[r] & H^i_{I}(-) \end{tikzcd} \end{center} is an isomorphism on $R$-modules for all $i>h+k$, and an isomorphism on $R/J$-modules for all $i>h-j+k$. If $i=h+k$ (resp. $i=h-j+k$) this natural transformation is a surjection on $R$-modules (resp. $R/J$-modules). \end{itemize} \end{theor} \begin{proof} If $k=0$, choose $I_{0,J}=I$. Fix $k\geq 1$, and suppose that we've chosen the ideal $I_{k-1,J}$ by induction. We must choose $I_{k,J}$. For brevity, we will suppress $J$ from our notation, and write $I_{k-1}$ and $I_k$ for $I_{k-1,J}$ and $I_{k,J}$, respectively. If $\text{ht}(I_{k-1}) > h + k-1$ we can simply pick $I_{k} = I_{k-1}$, so assume that $\text{ht}(I_{k-1}) = h + k-1$. By Lemma \ref{chooseelt} there is an element $y\in R$ such that $\text{ht}(yR+I_{k-1})\geq (h+k-1)+1$, with $$\text{ara}_R(yR\cap I_{k-1})\leq h+k-1\hspace{0.5em}\text{and}\hspace{0.5em} \text{ara}_{R/J}(y(R/J)\cap I_{k-1}/J)\leq (h+k-1)-j$$ Consider the Mayer-Vietoris sequence on the intersection $yR\cap I_{k-1}$.
We use $(-)$ in our notation to mean that the sequence is exact when $-$ is replaced by any $R$-module $M$, and that all maps in the sequence are given by natural transformations. \begin{center}{ \begin{tikzpicture}[descr/.style={fill=white,inner sep=1.5pt}] \matrix (m) [ matrix of math nodes, row sep=1em, column sep=2.5em, text height=1.5ex, text depth=0.25ex ] { & & \cdots & H^{i-1}_{yR\cap {I_{k-1}}}(-) \\ & H^{i}_{yR+{I_{k-1}}}(-) & H^{i}_{yR}(-)\oplus H^i_{{I_{k-1}}}(-) & H^{i}_{yR\cap {I_{k-1}}}(-) \\ }; \path[overlay,->, font=\scriptsize,>=latex] (m-1-3) edge (m-1-4) (m-1-4) edge[out=350,in=170] (m-2-2) (m-2-2) edge (m-2-3) (m-2-3) edge (m-2-4) ; \end{tikzpicture}} \end{center} Let $i>h+k$. Since $i-1>\text{ara}_{R}(yR\cap I_{k-1})$, we get vanishing $H^{i-1}_{yR\cap I_{k-1}}(-)=H^i_{yR\cap I_{k-1}}(-)=0$. Since $i\geq h+k+1\geq 2$, we also have $H^i_{yR}(-)=0$, and therefore obtain a natural isomorphism $H^i_{yR+I_{k-1}}(-)\xrightarrow{\sim} H^i_{I_{k-1}}(-)$. Notice that if $i=h+k$, then we still have $H^i_{yR\cap I_{k-1}}(-)=0$, so $$H^i_{yR+I_{k-1}}(-)\to H^i_{yR}(-)\oplus H^i_{I_{k-1}}(-)\to 0$$ is exact, implying that the component map $H^i_{yR+I_{k-1}}(-)\to H^i_{I_{k-1}}(-)$ is surjective. Working with $R/J$-modules, an identical argument using the fact that $$\text{ara}_{R/J}(y(R/J)\cap I_{k-1}/J)\leq (h+k-1)-j$$ shows that \[H^i_{y(R/J)+I_{k-1}/J}(-)\xrightarrow{\sim} H^i_{I_{k-1}/J}(-)\] when $i>h+k-j$ and \[H^i_{y(R/J)+I_{k-1}/J}(-)\twoheadrightarrow H^i_{I_{k-1}/J}(-)\] when $i=h+k-j$. Finally, $\text{ht}(yR + I_{k-1})\geq h+k$, so we may in fact choose $I_k = yR + I_{k-1}$, which completes the induction. \end{proof} \begin{corol}\label{genhel} Let $R$ be a Noetherian ring and let $I\subseteq R$ be any ideal. Fix $i\geq 0$. There is an ideal $I'\supseteq I$ (resp.
$I''\supseteq I$) such that \begin{itemize} \item $\text{ht}(I')\geq i-1$ (resp. $\text{ht}(I'')\geq i$) \item $H^i_{I'}(-) \xrightarrow{\sim} H^i_{I}(-)$ (resp. $H^i_{I''}(-) \twoheadrightarrow H^i_{I}(-)$) \end{itemize} \end{corol} \begin{proof} Let $h=\text{ht}(I)$. If $h\geq i-1$ (resp. $h\geq i$) simply choose $I'=I$ (resp. $I''=I$). Otherwise, $h<i-1$ (resp. $h < i$). Apply Theorem \ref{mainj} in the case $k=i-1-h$ (resp. $k'=i-h$) to obtain an ideal $I'\supseteq I$ (resp. $I''\supseteq I$) satisfying $\text{ht}(I')\geq h+k=i-1$ (resp. $\text{ht}(I'')\geq h+k'=i$) and $H^i_{I'}(-)\xrightarrow{\sim}H^i_I(-)$, since $i>h+k$ (resp. $H^i_{I''}(-)\twoheadrightarrow H^i_I(-)$, since $i=h+k'$). \end{proof} An immediate application of this theorem is to generalize a corollary of Hellus \cite[Corollary 2]{hellus}. This generalization provides a new proof of a result of Marley \cite[Proposition 2.3]{marley}, namely, for any Noetherian ring $R$, any ideal $I\subseteq R$, and any $R$-module $M$, $\{P\in\text{Supp}\, H^i_I(M)\,|\,\text{ht}(P)=i\}$ is a finite set. Since our result comes from a surjection of functors, we will describe it in terms of the ``support'' of $H^i_I(-)$. By the \textit{support} of a functor $F:\text{Mod}_R\to\text{Mod}_R$, we mean the set of primes $P\in\text{Spec}(R)$ such that $F(-)_P$ is not the zero functor. That is to say, $$ \text{Supp}(F) := \{P\in\text{Spec}(R)\,|\, \exists M\in\text{Mod}_R \text{ such that } F(M)_P \neq 0\} $$ For example, if $I\subseteq R$ is an ideal and $i\geq 0$, then $\text{Supp}\, H^i_I(-)\subseteq V(I)$. If $i>\text{ht}(I)$, this inclusion is not sharp.
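For a concrete illustration (a standard example, not needed in the sequel), let $R=k[x_1,x_2,x_3,x_4]$ and $I=(x_1,x_2)\cap(x_3,x_4)$, so that $\text{ht}(I)=2$. Writing $\mathfrak{m}=(x_1,x_2,x_3,x_4)$, the Mayer-Vietoris sequence for the pair $(x_1,x_2)$, $(x_3,x_4)$ gives, for any $R$-module $M$, $$ H^3_I(M)\simeq H^4_{\mathfrak{m}}(M), $$ since $H^i_{(x_1,x_2)}(-)=H^i_{(x_3,x_4)}(-)=0$ for $i\geq 3$. In particular $\text{Supp}\, H^3_I(-)\subseteq V(\mathfrak{m})\subsetneq V(I)$, so $\mathfrak{m}$ serves here as an ideal of height $4\geq 3$ controlling the support of $H^3_I(-)$.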
The following corollary shows how to find a strictly smaller closed set containing $\text{Supp}\, H^i_I(-)$. \begin{corol}\label{heightcontrol} (cf. Marley \cite[Proposition 2.3]{marley}) Let $R$ be a Noetherian ring and $I$ be an ideal. Then for all $i\geq 0$, there is an ideal $I''\supseteq I$ with $\text{ht}(I'')\geq i$ such that $\text{Supp}\, H^i_I(-)\subseteq V(I'')$. In particular, for any $R$-module $M$, the set $$\{P\in\text{Supp}\, H^i_I(M)\,\,|\,\, \text{ht}(P)= i\}$$ is a subset of $\text{Min}_R(R/I'')$, and is therefore finite. If $R$ is semilocal and $i\geq \dim(R)-1$, then $\text{Supp}\, H^i_I(M)$ is a finite set. \end{corol} \begin{proof} Fix $i\geq 0$ and write $h=\text{ht}(I)$. If $i<h$, then because $\text{Supp}\, H^i_I(-)\subseteq V(I)$, we already have $\text{ht}(P) \geq h>i$ for all $P\in \text{Supp}\, H^i_I(-)$ and there is nothing to prove. So assume that $i\geq h$. By Corollary \ref{genhel}, there is an ideal $I''\supseteq I$ such that $\text{ht}(I'')\geq i$ and $H^i_{I''}(-)\twoheadrightarrow H^i_I(-)$. In particular, for any $R$-module $M$, $H^i_I(M)$ is $I''$-torsion, and thus $\text{Supp}\, H^i_I(M)\subseteq V(I'')$. All primes in $V(I'')$ have height at least $i$. Any primes of height exactly $i$ must be among the minimal primes of $I''$, of which there are only finitely many. \end{proof} \subsection{Comparing the local cohomology of $J$ and $R/J$} Of primary interest within the scope of this paper is the following application of Theorem \ref{mainj} to the setting of Question 3. \begin{corol}\label{reduction} Let $R$ be a Cohen-Macaulay ring, and let $J\subseteq R$ be an ideal generated by a regular sequence of length $j\geq 1$.
Fix a nonnegative integer $i$, and let $I$ be an ideal containing $J$ such that $\text{depth}_{I}(R)\leq i-1$. Then there is an ideal $I'\supseteq I$ such that $\text{depth}_{I'}(R)\geq i-1$ and such that $H^i_{I'}(J)\simeq H^i_{I}(J)$ and $H^{i-1}_{I'}(R/J)\simeq H^{i-1}_I(R/J)$. \end{corol} \begin{proof} Write $h=\text{ht}(I)$ and $j=\text{ht}(J)$. Apply Theorem \ref{mainj} with $k=i-1-h$ to obtain $I'\supseteq I$ such that $\text{depth}_{I'}(R) = \text{ht}(I') \geq i-1$ and such that the natural transformation $H^\ell_{I'}(-)\to H^\ell_{I}(-)$ is an isomorphism on $R$-modules whenever $\ell>i-1$ and on $R/J$-modules whenever $\ell>i-1-j$. In particular, we see that $H^i_{I'}(-)\to H^i_{I}(-)$ is an isomorphism on $R$-modules and $H^{i-1}_{I'}(-)\to H^{i-1}_I(-)$ is an isomorphism on $R/J$-modules. \end{proof} Assume that $R$ is Cohen-Macaulay, $J$ is generated by a regular sequence of length $j\geq 2$, and $I\supseteq J$ is any ideal. Fix $i\geq 0$ and let $a=\text{depth}_I(R/J)=\text{depth}_I(R)-j$. If $a+j\leq i-1$, then by Corollary \ref{reduction}, we can replace $I$ with a possibly larger ideal in order to assume that $a+j\geq i-1$, without affecting $H^i_I(J)$ and $H^{i-1}_I(R/J)$. Lemma \ref{largedep} gives a positive answer to Question 3 if $a+j>i-1$, so we may assume that $a+j=i-1$. Note that this allows us to ignore all values of $i$ and $j$ for which $j>i-1$. Here is a table illustrating the relevant values of $a$ to consider for various small values of $i$ and $j$.
\begin{center} \begin{tabular}{l|| c| c| c| c| c } & $i=3$ & $i=4$ & $i=5$ & $i=6$ & $i=7$\\\hline\hline $j=2$& $a=0$ & $a=1$ & $a=2$ & $a=3$ & $a=4$\\\hline $j=3$& $\varnothing$ & $a=0$ & $a=1$ & $a=2$ & $a=3$\\\hline $j=4$& $\varnothing$ & $\varnothing$ & $a=0$ & $a=1$ & $a=2$\\\hline $j=5$& $\varnothing$ & $\varnothing$ & $\varnothing$ & $a=0$ & $a=1$\\\hline $j=6$& $\varnothing$ & $\varnothing$ & $\varnothing$ & $\varnothing$ & $a=0$\\\hline $j=7$& $\varnothing$ & $\varnothing$ & $\varnothing$ & $\varnothing$ & $\varnothing$\\ \end{tabular} \end{center} \noindent We will attack the cases $a=0$ and $a=1$ directly, and the next lemma is our main tool in doing so. \begin{lem}\label{prinq3} Fix $a\geq 0$. Let $R$ be a Noetherian ring, let $J\subseteq R$ be an ideal generated by a regular sequence of length $j\geq 2-a$, let $I$ be an ideal containing $J$, and let $S=R/J$. Suppose that $IS$ can be decomposed (up to radicals) as $yS\cap I_0$ with $\text{depth}_{I_0}(S)> a$. Suppose further that $\text{Ass}\, H^{j+a+1}_I(R)$ is finite. Then $\text{Ass}\, H^{j+a+1}_I(J)$ is finite if and only if $\text{Ass}\, H^{j+a}_I(S)$ is finite. \end{lem} \begin{proof} By Corollary \ref{depth1coh}, there is a natural isomorphism $H^i_{I_0}(S)_y\simeq H^i_I(S)$ for all $i\geq 2$, so that in particular, $H^{j+a}_I(S)$ is an $S_y$-module. The natural map $\psi:H^{j+a}_I(R)\to H^{j+a}_I(S)$ therefore factors through $H^{j+a}_I(R)\to S_y\otimes_R H^{j+a}_I(R)$ to give an $S_y$-linear map $S_y\otimes_R H^{j+a}_I(R)\to H^{j+a}_I(S)$. \begin{center} \begin{tikzcd} H^{j+a}_I(R)\ar[r,"\psi"]\ar[dr] & H^{j+a}_I(S)\\ & S_y\otimes_R H^{j+a}_I(R)\ar[u] \end{tikzcd} \end{center} We claim that $\psi=0$, and for this it suffices to show that $S_y\otimes_R H^{j+a}_I(R)=0$. Consider the decomposition of $I$ up to radicals as $yS\cap I_0$ in $S$.
We can replace $y$ by some lift mod $J$ to assume that $y\in R$, and since $I_0$ is expanded from $R$, we can write $I_0=I_0' S$ for some ideal $I_0'$ of $R$ containing $J$. We therefore have $I=(yR+J)\cap I_0'$ in $R$ (after possibly replacing $I$ by an ideal with the same radical). Note that $\text{depth}_{I_0'}(R)> j+a$. We can write $S_y\otimes_R H^{j+a}_I(R) = S_y\otimes_{R_y} H^{j+a}_{IR_y}(R_y)$ where $IR_y = (yR + J)R_y \cap I_0' R_y = I_0' R_y$, and thus $H^{j+a}_{I R_y}(R_y) = H^{j+a}_{I_0'}(R)_y$. Since $\text{depth}_{I_0'}(R)>j+a$, we have $H^{j+a}_{I_0'}(R)=0$ and consequently, $\psi=0$. We therefore have an exact sequence $$ 0\to H^{j+a}_I(S)\to H^{j+a+1}_I(J)\to H^{j+a+1}_I(R). $$ Since $\text{Ass}\,{H^{j+a+1}_I(R)}$ is finite, the claim follows at once. \end{proof} We can now prove the main result of this section. \begin{theor}\label{smalli} Let $R$ be an LC-finite regular ring, let $J\subseteq R$ be an ideal generated by a regular sequence of length $j\geq 2$, and let $S=R/J$. For any ideal $I$ of $R$ containing $J$, \begin{enumerate} \item[(i)] $\text{Ass}\, H^i_I(J)$ and $\text{Ass}\, H^{i-1}_I(S)$ are always finite for $i\leq 2$. \item[(ii)] If the irreducible components of $\text{Spec}(S)$ are disjoint, then $\text{Ass}\, H^{3}_I(J)$ is finite if and only if $\text{Ass}\,{H^2_I(S)}$ is finite. \item[(iii)] If $S$ is normal and locally almost factorial, then $\text{Ass}\,{H^4_I(J)}$ is finite if and only if $\text{Ass}\, H^3_I(S)$ is finite. \end{enumerate} \end{theor} \begin{proof} For (i), it is well known that for any finitely generated $R$-module $M$, $\text{Ass}\, H^i_I(M)$ is finite whenever $i\leq 1$. The finiteness of $\text{Ass}\, H^2_I(J)$ is the subject of Theorem \ref{H2}. For (ii), we may use Corollary \ref{reduction} to replace $I$ with a possibly larger ideal in order to assume that $\text{depth}_I(R) \geq 2$.
By Lemma \ref{largedep}, (ii) is immediate if $\text{depth}_I(R)> 2$, so assume $\text{depth}_I(R)= 2$. Since $J\subseteq I$ and $\text{depth}_J(R)\geq 2$, it follows that $\text{depth}_J(R)=2$ and $\text{depth}_I(S)=\text{ht}(IS)=0$. Let $e_1,\cdots,e_t\in S$ be a complete set of orthogonal idempotents. The minimal primes of $S$ are $\sqrt{(1-e_1)S},\cdots,\sqrt{(1-e_t)S}$, and thus, every pure height $0$ ideal of $S$ has arithmetic rank at most $1$. Up to radicals, we can therefore write $IS$ as $yS\cap I_0$ where $\text{ht}(I_0)=\text{depth}_{I_0}(S)>0$. Since $\text{depth}_J(R)\geq 2-0$, the claim follows from Lemma \ref{prinq3} in the case where $a=0$. For (iii), again using Corollary \ref{reduction} and Lemma \ref{largedep}, we may assume that $\text{depth}_I(R)=\text{depth}_I(S)+\text{depth}_J(R)=3$. Since $\text{depth}_J(R)\geq 2$, this means $\text{depth}_I(S)\leq 1$. If $\text{depth}_J(R)=3$, giving $\text{depth}_I(S)=0$, then we may argue as in the proof of (ii) (note that $S$ is a product of domains). If $\text{depth}_J(R)=2$, giving $\text{depth}_I(S)=1$, then by Corollary \ref{localprinc} there is a finite cover of $\text{Spec}(S)$ by charts $\text{Spec}(S_{f_1}),\cdots,\text{Spec}(S_{f_t})$ such that for each $i$, we can write (up to radicals) $IS_{f_i} = y_iS_{f_i}\cap I_{0,i}$ with $\text{depth}_{I_{0,i}}(S_{f_i})>1$. Replace $f_1,\cdots,f_t$ with lifts from $R/J$ to $R$ in order to assume $f_1,\cdots,f_t\in R$. Lemma \ref{prinq3} in the case $a=1$ shows that for each $i$, $\text{Ass}\, H^4_I(J)_{f_i}$ is finite if and only if $\text{Ass}\, H^{3}_I(S)_{f_i}$ is finite. The charts $\text{Spec}(R_{f_1}),\cdots,\text{Spec}(R_{f_t})$ do not necessarily cover $\text{Spec}(R)$, but they do cover the subset $V(J)$.
Since $I\supseteq J$, $\text{Supp}\, H^\ell_I(-)\subseteq V(I)\subseteq V(J)$ for all $\ell$, so showing that $\text{Ass}\, H^4_I(J)$ is finite is equivalent to showing that $\text{Ass}\, H^4_I(J)_{f_i}$ is finite for each $i$. The result we proved on each chart therefore implies $\text{Ass}\, H^4_I(J)$ is finite if and only if $\text{Ass}\, H^3_I(S)$ is finite. \end{proof} Under the hypotheses of (iii), we can give the following partial answer to Question 3 for local rings of sufficiently small dimension. \begin{corol} Let $(R,\mathfrak{m},K)$ be an LC-finite regular local ring of dimension at most $7$, let $J$ be an ideal generated by a regular sequence of length at least $2$ such that $S=R/J$ is normal and almost factorial. Let $I$ be any ideal of $R$ containing $J$. Then for all $i\geq 1$, $\text{Ass}\, H^i_I(J)$ is finite if and only if $\text{Ass}\, H^{i-1}_I(S)$ is finite. \end{corol} \begin{proof} The case $i\leq 4$ is the subject of Theorem \ref{smalli}. We must have $\dim(S)\leq 5$ since $\text{depth}_J(R)\geq 2$, so by Corollary \ref{heightcontrol}, $\text{Supp}\, H^{i-1}_I(S)$ (and hence $\text{Ass}\, H^{i-1}_I(S)$) is a finite set if $i-1\geq 4$. Likewise, for any homomorphic image $H^{i-1}_I(S)\twoheadrightarrow N$, the set $\text{Supp}\,(N)$ is finite. There is an exact sequence $0\to N \to H^i_I(J)\to M\to 0$ where $N$ is a homomorphic image of $H^{i-1}_I(S)$ and $M$ is a submodule of $H^i_I(R)$. If $i\geq 5$, both $\text{Ass}\,(N)$ (a subset of $\text{Supp}\,(N)$) and $\text{Ass}\,(M)$ (a subset of $\text{Ass}\, H^i_I(R)$) are finite, so $\text{Ass}\, H^i_I(J)$ is finite as well. \end{proof} \section{Regular parameter ideals in characteristic $p>0$} In this section, we will show that if $R$ is a regular ring of prime characteristic $p>0$ and $J$ is an ideal such that $R/J$ is regular, then for any ideal $I\subseteq R$ and any $i\geq 0$, the set $\text{Ass}\, H^i_I(J)$ is finite. This result is essentially a corollary of a stronger result. 
Specifically, we will show that if $R\to S$ is a homomorphism between two regular rings of prime characteristic $p>0$, and $I\subseteq R$ is an ideal, then for any $i\geq 0$, the natural map $$ S\otimes_R H^i_I(R)\to H^i_I(S) $$ is a morphism of $F$-finite $F_S$-modules in the sense of Lyubeznik \cite{lyufmod}. The proof of this statement relies on understanding a certain family of natural transformations that compare the cohomology of a complex before and after applying some base change functor. \subsection{Natural transformations associated with cohomology and base change} Let $\text{Mod}_{R}$ denote the category of modules over a ring $R$, and let $\text{Kom}_{R}$ denote the category of cohomologically indexed complexes of $R$-modules. Let $H^i:\text{Kom}_{R}\to\text{Mod}_{R}$ denote the functor that takes a complex $C^\bullet$ to its $i$th cohomology module $H^i(C^\bullet)$. \begin{defn} If $f:R\to S$ is a ring homomorphism and $C^\bullet$ is an $R$-complex, then the natural ($R$-linear) map $C^\bullet \to S\otimes_R C^\bullet$ induces $H^i(C^\bullet)\to H^i(S\otimes_R C^\bullet)$, which factors uniquely through the natural map $H^i(C^\bullet)\to S\otimes_R H^i(C^\bullet)$ to an $S$-linear map $S\otimes_R H^i(C^\bullet)\to H^i(S\otimes_R C^\bullet)$. Call this map $h^i_{f}(C^\bullet)$, and let $h^i_{f}$ denote the corresponding natural transformation $$h^i_{f}(-):S\otimes_R H^i(-)\longrightarrow H^i(S\otimes_R -)$$ of functors $\text{Kom}_{R}\to\text{Mod}_{S}$. \end{defn} If the homomorphism $f:R\to S$ is understood from context, we will write $h^i_{S/R}(-)$ instead of $h^i_{f}(-)$. The latter, more precise notation is reserved for ambiguous cases, such as when $R=S$ has prime characteristic $p$ and $f$ is the Frobenius homomorphism. Note that $S$ is flat over $R$ if and only if $h^i_{S/R}(-)$ is an isomorphism of functors. 
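To see how $h^i_{S/R}(-)$ can fail to be an isomorphism in the absence of flatness, consider the following elementary example (included only for orientation). Take $R=\mathbb{Z}$ and $S=\mathbb{Z}/p\mathbb{Z}$ for a prime $p$, and let $C^\bullet$ be the complex $0\to\mathbb{Z}\xrightarrow{\cdot p}\mathbb{Z}\to 0$ concentrated in degrees $0$ and $1$. Then $H^0(C^\bullet)=0$, while $S\otimes_R C^\bullet$ has zero differential, so that $H^0(S\otimes_R C^\bullet)\simeq \mathbb{Z}/p\mathbb{Z}$. Hence $$ h^0_{S/R}(C^\bullet): S\otimes_R H^0(C^\bullet)=0 \longrightarrow H^0(S\otimes_R C^\bullet)\simeq \mathbb{Z}/p\mathbb{Z} $$ is not surjective.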
Our goal in this section is to briefly review some straightforward but crucially important compatibility properties of these natural transformations. \begin{prop}\label{comp} (Compatibility with composition) Let $R\to S\to T$ be ring homomorphisms. The following diagram of functors $\text{Kom}_{R}\to\text{Mod}_{T}$ commutes \begin{center} \begin{tikzcd}[column sep = small] T\otimes_R H^i(-)\ar[rrrrrdd,"h^i_{T/R}(-)"'] \ar[r,no head, shift right = 0.1em]\ar[r,no head,shift left=0.1em]& T\otimes_S (S\otimes_R H^i(-))\ar[rrrr,"\text{id}_T\otimes_S h^i_{S/R}(-)"] & & & & T\otimes_S H^i(S\otimes_R -)\ar[d,"h^i_{T/S}(S\otimes_R -)"]\\ & & & & & H^i(T\otimes_S(S\otimes_R -))\ar[d,no head, shift right = 0.1em]\ar[d,no head,shift left=0.1em]\\ & & & & & H^i(T\otimes_R -) \end{tikzcd} \end{center} where the equalities shown above come from identifying the functor $T\otimes_S(S\otimes_R-)$ with $T\otimes_R -$. \end{prop} \iffalse\begin{proof} The diagram for a single $R$-complex $C^\bullet$ is obtained by applying $H^i(-)$ to \begin{center} \begin{tikzcd} C^\bullet \ar[rr]\ar[ddrr]& & S\otimes_R C^\bullet\ar[d]\\ & & T\otimes_S (S\otimes_R C^\bullet)\ar[d,no head, shift right = 0.1em]\ar[d,no head,shift left=0.1em]\\ & & T\otimes_R C^\bullet \end{tikzcd} \end{center} and base changing the modules $H^i(C^\bullet)$ and $H^i(S\otimes_R C^\bullet)$ from $\text{Mod}_{R}$ and $\text{Mod}_{S}$ to $\text{Mod}_{T}$, respectively. Naturality of the resulting diagram is straightforward to verify. \end{proof}\fi \begin{prop}\label{simflat} (Simultaneous flat base change) Suppose there is a commutative square of ring homomorphisms \begin{center} \begin{tikzcd} R\ar[d]\ar[r] & S\ar[d]\\ R'\ar[r] & S' \end{tikzcd} \end{center} such that $R'$ is flat over $R$, and $S'$ is flat over $S$.
Then there is a commutative square of functors $\text{Kom}_{R}\to \text{Mod}_{S'}$ \begin{center} \begin{tikzcd}[column sep = huge] S'\otimes_S (S\otimes_R H^i(-))\ar[d,"\text{\rotatebox{90}{$\sim$}}"'] \ar[r,"\text{id}_{S'}\otimes_S h^i_{S/R}(-)"] & S'\otimes_S H^i(S\otimes_R-)\ar[d,"\text{\rotatebox{90}{$\sim$}}"]\\ S' \otimes_{R'} H^i(R'\otimes_R -) \ar[r,"h^i_{S'/R'}(R'\otimes_R -)"'] & H^i(S'\otimes_{R'} (R'\otimes_R-)) \end{tikzcd} \end{center} where the vertical arrows are natural isomorphisms. \end{prop} \begin{proof} Apply Proposition \ref{comp} to $R\to S\to S'$ in order to see that the upper right corner of the below diagram commutes, and then to $R\to R' \to S'$ to see that the lower left corner commutes as well: \begin{center} \begin{tikzcd}[column sep = huge, row sep = huge] S'\otimes_S (S\otimes_R H^i(-))\ar[r,"\text{id}_{S'}\otimes_S h^i_{S/R}(-)"]\ar[d,no head, shift right = 0.1em]\ar[d,no head,shift left=0.1em] & S'\otimes_S H^i(S\otimes_R-)\ar[d,"h^i_{S'/S}(S\otimes_R-)"] \\ S' \otimes_R H^i(-)\ar[d,no head, shift right = 0.1em]\ar[d,no head,shift left=0.1em] \ar[rd,"h^i_{S'/R}(-)"] & H^i(S'\otimes_S (S\otimes_R -))\ar[d,no head, shift right = 0.1em]\ar[d,no head,shift left=0.1em] \\ S' \otimes_{R'}(R'\otimes_R H^i(-)) \ar[d,"\text{id}_{S'}\otimes_{R'} h^i_{R'/R}(-)"'] & H^i(S'\otimes_R -)\ar[d,no head, shift right = 0.1em]\ar[d,no head,shift left=0.1em]\\ S' \otimes_{R'} H^i(R'\otimes_R -)\ar[r,"h^i_{S'/R'}(R'\otimes_R -)"'] & H^i(S'\otimes_{R'} (R'\otimes_R-)) \end{tikzcd} \end{center} We are finished upon observing that the flatness of $R'$ over $R$ (resp.
of $S'$ over $S$) implies that $h^i_{R'/R}(-)$ (resp. $h^i_{S'/S}(-)$) is a natural isomorphism. \end{proof} Our main interest is in applying Proposition \ref{simflat} in the case where $R$ and $S$ are regular rings of prime characteristic $p>0$, and $R\to R'$, $S\to S'$ are the ($e$-fold iterates of the) Frobenius homomorphisms of $R$ and $S$. Some notation: if $A$ is a ring of prime characteristic $p>0$, let $F_A:A\to A$ denote the Frobenius homomorphism of $A$, and $\mathcal{F}_A(-)$ (either $\text{Mod}_A\to \text{Mod}_A$ or $\text{Kom}_A\to \text{Kom}_A$, depending on context) denote the base change functor associated with $F_A$. Let $\mathcal{F}^e_A(-)$ denote the $e$-fold iterate of $\mathcal{F}_A(-)$. \begin{corol}\label{frob} Let $R\to S$ be a homomorphism between two regular rings of prime characteristic $p>0$. For each $e\geq 1$, the following diagram commutes, \begin{center} \begin{tikzcd}[column sep = huge] \mathcal{F}^e_S(S\otimes_R H^i(-))\ar[d,"\text{\rotatebox{90}{$\sim$}}"'] \ar[r,"\mathcal{F}^e_S(h^i_{S/R}(-))"] & \mathcal{F}^e_S(H^i(S\otimes_R -))\ar[d,"\text{\rotatebox{90}{$\sim$}}"]\\ S \otimes_{R} H^i(\mathcal{F}^e_R(-)) \ar[r,"h^i_{S/R}(\mathcal{F}^e_R(-))"'] & H^i(S\otimes_{R} \mathcal{F}^e_R(-)) \end{tikzcd} \end{center} where the vertical arrows are isomorphisms. \end{corol} In other words, if we identify $\mathcal{F}^e_S(S\otimes_R H^i(-))$ with $S\otimes_R H^i(\mathcal{F}^e_R(-))$ and $\mathcal{F}^e_S(H^i(S\otimes_R -))$ with $H^i(S\otimes_R \mathcal{F}^e_R(-))$ using the vertical isomorphisms, then for all $e\geq 0$, $h^i_{S/R}(\mathcal{F}^e_R(-))=\mathcal{F}^e_S(h^i_{S/R}(-))$.
Concretely, if \iffalse $C^\bullet$ is a complex of $R$-modules and $C^\bullet_e := \mathcal{F}^e_R(C^\bullet)$, then as long as $R$ and $S$ are both regular, the family $\{h^i_{S/R}(C^\bullet_e)\}_{e=0}^\infty$ of homomorphisms $$ S\otimes_R H^i(C^\bullet_e)\to H^i(S\otimes_R C^\bullet_e) $$ is determined by the homomorphism at $e=0$ by taking Frobenius powers. For example, if\fi $C^\bullet = K^\bullet(\mathbf{\underline{f}};R)$ is the Koszul complex on some sequence of elements $\mathbf{\underline{f}} = f_1,\cdots,f_t\in R$, and if $\mathbf{\underline{f}}^{[p^e]}:= f_1^{p^e},\cdots,f_t^{p^e}$, then a consequence of Corollary \ref{frob} is that the family of maps $\{h^i_e\}_{e=0}^\infty$ given by $$ h^i_e:S\otimes_R H^i(\mathbf{\underline{f}}^{[p^e]};R)\to H^i(\mathbf{\underline{f}}^{[p^e]};S) $$ is determined by the data of just the $0$th map $h^i_0$ by taking Frobenius powers. The statement is more abstruse when applied directly to the \v{C}ech complex, since $\check{C}(\mathbf{\underline{f}};R)$ is canonically identified with $\mathcal{F}_R(\check{C}(\mathbf{\underline{f}};R))$. However, when we replace $R$ with a general $F$-finite $F_R$-module $\mathcal{M}$ in the sequel, it will simplify our main proofs if we work at the level of \v{C}ech cohomology. \subsection{The natural map on local cohomology} Throughout this section, $R$ and $S$ are regular rings of prime characteristic $p>0$. Recall that an $F_R$-module is a pair $(\mathcal{M},\theta)$ consisting of an $R$-module $\mathcal{M}$ and an isomorphism $\theta:\mathcal{M}\to\mathcal{F}_R(\mathcal{M})$. A morphism of $F_R$-modules $(\mathcal{M},\theta)\to (\mathcal{N},\phi)$ is an $R$-linear map $h:\mathcal{M}\to\mathcal{N}$ such that $\phi\circ h = \mathcal{F}_R(h)\circ \theta$.
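For orientation, we recall the standard examples (cf. \cite{lyufmod}). The ring $R$ itself is an $F_R$-module via the canonical identification $\mathcal{F}_R(R)\simeq R$, and for any $f\in R$, the localization $R_f$ is an $F_R$-module, since base change along the Frobenius gives $$ \mathcal{F}_R(R_f)\simeq R_{f^p}=R_f. $$ More generally, as recalled below, each local cohomology module $H^i_I(\mathcal{M})$ of an $F_R$-module $\mathcal{M}$ carries a natural $F_R$-module structure.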
If $R\to S$ is a ring homomorphism and $(\mathcal{M},\theta)$ is an $F_R$-module, then $S\otimes_R \mathcal{M}$ is an $F_S$-module via the structure isomorphism $\text{id}_S\otimes \theta:S\otimes_R \mathcal{M}\to S\otimes_R \mathcal{F}_R(\mathcal{M})$, where we identify $S\otimes_R \mathcal{F}_R(\mathcal{M})$ with $\mathcal{F}_S(S\otimes_R \mathcal{M})$ in the canonical way. Let $\mathbf{\underline{f}}=f_1,\cdots,f_t$ be a sequence of elements of $R$, and let $C^\bullet:=\check{C}^\bullet(\mathbf{\underline{f}};R)$ be the \v{C}ech complex on $R$ associated with $\mathbf{\underline{f}}$. For shorthand, if $M$ is an $R$-module, denote by $C^\bullet_M:= C^\bullet\otimes_R M =\check{C}^\bullet(\mathbf{\underline{f}};M)$ the \v{C}ech complex on $M$ associated with $\mathbf{\underline{f}}$. The complex $\mathcal{F}_R(C^\bullet)$ is canonically identified with $C^\bullet$ itself, and likewise, for any $R$-module $M$, $\mathcal{F}_R(C^\bullet_M)$ is canonically identified with $C^\bullet_{\mathcal{F}_R(M)}$. If $(\mathcal{M},\theta)$ is an $F_R$-module, then applying $C^\bullet\otimes_R -$ to $\theta:\mathcal{M}\to \mathcal{F}_R(\mathcal{M})$ gives an isomorphism of complexes $\Theta:C^\bullet_{\mathcal{M}}\to \mathcal{F}_R(C^\bullet_{\mathcal{M}})$. Using this isomorphism, $H^i_I(\mathcal{M})$ can naturally be made into an $F_R$-module for any $i$: its structure isomorphism $\Psi: H^i_I(\mathcal{M})\to \mathcal{F}_R(H^i_I(\mathcal{M}))$ is given by the composition \begin{center} \begin{equation}\label{decomp} \begin{tikzcd} H^i(C^\bullet_{\mathcal{M}})\ar[r,"H^i(\Theta)"] & H^i(\mathcal{F}_R(C^\bullet_{\mathcal{M}}))\ar[rr,"h^i_{F_R}(C^\bullet_{\mathcal{M}})^{-1}"] && \mathcal{F}_R(H^i(C^\bullet_{\mathcal{M}})) \end{tikzcd}\tag{$\star\star$} \end{equation} \end{center} We now prove our main compatibility result.
\begin{theor}\label{morph} Let $R\to S$ be a homomorphism between two regular rings of prime characteristic $p>0$, fix an ideal $I\subseteq R$ and an index $i\geq 0$, and let $(\mathcal{M},\theta)$ be an $F_R$-module. The natural map $$ S\otimes_R H^i_I(\mathcal{M})\to H^i_I(S\otimes_R \mathcal{M}) $$ is a morphism of $F_S$-modules. \end{theor} \begin{proof} Let $C^\bullet_{\mathcal{M}} = \check{C}^\bullet(\mathbf{\underline{f}};\mathcal{M})$ be the \v{C}ech complex on $\mathcal{M}$ associated with a sequence of elements $\mathbf{\underline{f}}=f_1,\cdots,f_t$ generating $I$. It is enough to show that the diagram \begin{center} \begin{tikzcd}[column sep = huge] \mathcal{F}_S(S\otimes_R H^i(C^\bullet_{\mathcal{M}}))\ar[r,"\mathcal{F}_S(h^i_{S/R}(C^\bullet_{\mathcal{M}}))"]\ar[d,"\text{\rotatebox{90}{$\sim$}}"'] & \mathcal{F}_S(H^i(S\otimes_R C^\bullet_{\mathcal{M}}))\ar[d,"\text{\rotatebox{90}{$\sim$}}"']\\ S\otimes_R H^i(C^\bullet_{\mathcal{M}})\ar[r,"h^i_{S/R}(C^\bullet_{\mathcal{M}})"] & H^i(S\otimes_RC^\bullet_{\mathcal{M}}) \end{tikzcd} \end{center} commutes, where the vertical arrows are the inverses of the structure morphisms of $S\otimes_R H^i_I(\mathcal{M})$ and $H^i_I(S\otimes_R \mathcal{M})$ as $F_S$-modules, respectively. Let $\Theta:C^\bullet_{\mathcal{M}}\to\mathcal{F}_R(C^\bullet_{\mathcal{M}})$ denote the isomorphism of complexes induced by $\theta$. Using the decomposition \eqref{decomp} of the structure isomorphism of $H^i_I(\mathcal{M})$, the stated result is equivalent to showing that the following diagram commutes.
\begin{center} \begin{tikzcd} \mathcal{F}_S(S\otimes_R H^i(C^\bullet_{\mathcal{M}})) \ar[d,no head, shift right = 0.1em]\ar[d,no head,shift left=0.1em] \ar[rr,"\mathcal{F}_S(h^i_{S/R}(C^\bullet_{\mathcal{M}}))"] && \mathcal{F}_S(H^i(S\otimes_R C^\bullet_{\mathcal{M}})) \ar[d,"h^i_{F_S}(S\otimes_R C^\bullet_{\mathcal{M}})","\text{\rotatebox{90}{$\sim$}}"']\\ S\otimes_R \mathcal{F}_R(H^i(C^\bullet_{\mathcal{M}})) \ar[d,"\text{id}_S\otimes h^i_{F_R}(C^\bullet_{\mathcal{M}})"',"\text{\rotatebox{90}{$\sim$}}"] && H^i(\mathcal{F}_S(S\otimes_R C^\bullet_{\mathcal{M}})) \ar[d,no head, shift right = 0.1em]\ar[d,no head,shift left=0.1em]\\ S\otimes_R H^i(\mathcal{F}_R(C^\bullet_{\mathcal{M}})) \ar[d,"\text{id}_S\otimes H^i(\Theta)^{-1}"',"\text{\rotatebox{90}{$\sim$}}"] \ar[rr,"h^i_{S/R}(\mathcal{F}_R(C^\bullet_{\mathcal{M}}))"] && H^i(S\otimes_R\mathcal{F}_R(C^\bullet_{\mathcal{M}})) \ar[d,"H^i(\text{id}_S\otimes \Theta)^{-1}","\text{\rotatebox{90}{$\sim$}}"']\\ S\otimes_R H^i(C^\bullet_{\mathcal{M}}) \ar[rr,"h^i_{S/R}(C^\bullet_{\mathcal{M}})"] && H^i(S\otimes_R C^\bullet_{\mathcal{M}}) \end{tikzcd} \end{center} The commutativity of the rectangle of maps in the top three rows is precisely the content of Proposition \ref{simflat} (see specifically Corollary \ref{frob}) applied to the complex $C^\bullet_{\mathcal{M}}$. 
The square of maps in the bottom two rows is induced from the diagram that results from applying $H^i(-)$ to \begin{center} \begin{tikzcd} \mathcal{F}_R(C^\bullet_{\mathcal{M}}) \ar[d,"\Theta^{-1}"',"\text{\rotatebox{90}{$\sim$}}"] \ar[r,"\text{nat}"] & S\otimes_R\mathcal{F}_R(C^\bullet_{\mathcal{M}}) \ar[d,"(\text{id}_S\otimes \Theta)^{-1}","\text{\rotatebox{90}{$\sim$}}"']\\ C^\bullet_{\mathcal{M}} \ar[r,"\text{nat}"] &S\otimes_R C^\bullet_{\mathcal{M}} \end{tikzcd} \end{center} Recall that $C^\bullet_{\mathcal{M}}=C^\bullet\otimes_R \mathcal{M}$ and $\mathcal{F}_R(C^\bullet_{\mathcal{M}}) = C^\bullet \otimes_R \mathcal{F}_R(\mathcal{M})$, where $C^\bullet =\check{C}^\bullet(\mathbf{\underline{f}};R)$, so that the above diagram is $C^\bullet\otimes_R-$ applied to \begin{center} \begin{tikzcd} \mathcal{F}_R({\mathcal{M}}) \ar[d,"\Theta^{-1}"',"\text{\rotatebox{90}{$\sim$}}"] \ar[r,"\text{nat}"] & S\otimes_R\mathcal{F}_R(\mathcal{M}) \ar[d,"(\text{id}_S\otimes \Theta)^{-1}","\text{\rotatebox{90}{$\sim$}}"']\\ \mathcal{M} \ar[r,"\text{nat}"] &S\otimes_R \mathcal{M} \end{tikzcd} \end{center} This final diagram obviously commutes. \end{proof} \begin{corol}\label{quot} Let $R$ be a regular ring of prime characteristic $p>0$, let $J\subseteq R$ be an ideal such that $R/J$ is regular, and let $\mathcal{M}$ be an $F_R$-module. For any ideal $I\subseteq R$ and any $i\geq 0$, the natural map $H^i_I(\mathcal{M})/J H^i_I(\mathcal{M}) \to H^i_I(\mathcal{M}/J\mathcal{M})$ is an $F_{R/J}$-module morphism. \end{corol} \begin{theor}\label{charpj} Let $R$ be a regular ring of prime characteristic $p>0$, let $J\subseteq R$ be an ideal such that $R/J$ is regular, and let $\mathcal{M}$ be an $F$-finite $F_R$-module (e.g. $\mathcal{M}=R$). For any ideal $I\subseteq R$ and any $i\geq 0$, $$\text{Coker}\left(H^i_I(\mathcal{M}) \to H^i_I(\mathcal{M}/J\mathcal{M})\right)$$ is an $F$-finite $F_{R/J}$-module, and hence, has finitely many associated primes. 
The module $H^i_I(J\mathcal{M})$ also has finitely many associated primes. \end{theor} \begin{proof} Note that $H^i_I(\mathcal{M})$ is an $F$-finite $F_R$-module, and $H^i_I(\mathcal{M}/J\mathcal{M})$ is an $F$-finite $F_{R/J}$-module \cite[Proposition 2.10]{lyufmod}. The map $H^i_I(\mathcal{M}) \to H^i_I(\mathcal{M}/J\mathcal{M})$ factors through the natural map $H^i_I(\mathcal{M}) \to H^i_I(\mathcal{M})/J H^i_I(\mathcal{M})$, and since $R\twoheadrightarrow R/J$ is surjective, the images of $H^i_I(\mathcal{M})$ and $H^i_I(\mathcal{M})/J H^i_I(\mathcal{M})$ inside $H^i_I(\mathcal{M}/J\mathcal{M})$ are equal. Thus, $$\text{Coker}\left(H^i_I(\mathcal{M}) \to H^i_I(\mathcal{M}/J\mathcal{M})\right) = \text{Coker}\left(H^i_I(\mathcal{M})/JH^i_I(\mathcal{M}) \to H^i_I(\mathcal{M}/J\mathcal{M})\right)$$ where the latter is the cokernel of a morphism of $F$-finite $F_{R/J}$-modules by Corollary \ref{quot}. It is therefore itself an $F$-finite $F_{R/J}$-module \cite[Theorem 2.8]{lyufmod}, and accordingly, has a finite set of associated primes \cite[Theorem 2.12(a)]{lyufmod}. 
Regarding the claim about the associated primes of $H^i_I(J\mathcal{M})$, apply $\Gamma_I(-)$ to $0\to J\mathcal{M}\to\mathcal{M}\to\mathcal{M}/J\mathcal{M}\to 0$ to obtain the exact sequence \begin{center}{ \begin{tikzpicture}[descr/.style={fill=white,inner sep=1.5pt}] \matrix (m) [ matrix of math nodes, row sep=1em, column sep=2.5em, text height=1.5ex, text depth=0.25ex ] { & \cdots & H^{i-1}_I(\mathcal{M}) & H^{i-1}_I(\mathcal{M}/J\mathcal{M}) \\ & H^{i}_I(J\mathcal{M}) & H^{i}_I(\mathcal{M}) & \cdots \\ }; \path[overlay,->, font=\scriptsize,>=latex] (m-1-2) edge (m-1-3) (m-1-3) edge (m-1-4) (m-1-4) edge[out=345,in=165] (m-2-2) (m-2-2) edge (m-2-3) (m-2-3) edge (m-2-4) ; \end{tikzpicture}} \end{center} We therefore have a short exact sequence $$ 0\to \text{Coker}\left(H^{i-1}_I(\mathcal{M}) \to H^{i-1}_I(\mathcal{M}/J\mathcal{M})\right) \to H^i_I(J\mathcal{M})\to N\to 0 $$ for some submodule $N\subseteq H^i_I(\mathcal{M})$, and the stated result now follows at once. \end{proof} Let $(R,\mathfrak{m},K)$ be a regular local ring. Recall that a parameter ideal $J\subseteq R$ is called {\it regular} if it is generated by an $R$-regular sequence whose images in $\mathfrak{m}/\mathfrak{m}^2$ are linearly independent over $K$. Every ideal $J$ such that $R/J$ is regular has this form. If $R$ is complete and contains a field, then by the Cohen Structure Theorem, all examples of regular parameter ideals are isomorphic to an example of the form $R=K[[x_1,\cdots,x_m,z_1,\cdots,z_n]]$ and $J=(x_1,\cdots,x_m)R$ for some $m,n\geq 0$. \begin{corol} Let $(R,\mathfrak{m},K)$ be a regular local ring containing a field of prime characteristic $p>0$ and $J\subseteq R$ be a regular parameter ideal. For any ideal $I\subseteq R$ and any $i\geq 0$, the module $H^i_I(J)$ has finitely many associated primes. \end{corol} \end{document}
\begin{document} \title[The 3D Navier-Stokes equations]{Boundary $\varepsilon$-regularity criteria for the 3D Navier-Stokes equations} \author[H. Dong]{Hongjie Dong} \address[H. Dong]{Division of Applied Mathematics, Brown University, 182 George Street, Providence, RI 02912, USA} \email{Hongjie\[email protected]} \thanks{H. Dong and K. Wang were partially supported by the NSF under agreement DMS-1600593.} \author[K. Wang]{Kunrui Wang} \address[K. Wang]{Division of Applied Mathematics, Brown University, 182 George Street, Providence, RI 02912, USA} \email{Kunrui\[email protected]} \subjclass[2010]{Primary 35Q30, 35B65, 76D05} \date{\today} \begin{abstract} We establish several boundary $\varepsilon$-regularity criteria for suitable weak solutions of the 3D incompressible Navier-Stokes equations in a half cylinder with the Dirichlet boundary condition on the flat boundary. Our proofs are based on delicate iteration arguments and interpolation techniques. These results extend and provide alternative proofs for the earlier interior results by Vasseur \cite{MR2374209}, Choi-Vasseur \cite{MR3258360}, and Phuc-Guevara \cite{Phuc}. \end{abstract} \maketitle \section{Introduction and main results} In this paper we discuss the 3-dimensional incompressible Navier-Stokes equations with unit viscosity and zero external force: \begin{equation}\label{NS1} \partial_tu+u\cdot\nabla u- \Delta u +\nabla p=0, \quad \text{div } u=0, \end{equation} where $u = (u^1(t,x),u^2(t,x),u^3(t,x))$ is the velocity field and $p = p(t,x)$ is the pressure. We consider the local problem: $(t,x)\in Q$ or $Q^+$, where $Q$ and $Q^+$ denote the unit cylinder and the unit half-cylinder, respectively. For the half-cylinder case, we assume that $u$ satisfies the zero Dirichlet boundary condition: \begin{equation}\label{NS3} u = 0 \quad\text{on}\ \{x_d=0\}\cap\partial Q^+.
\end{equation} We are concerned with different types of $\varepsilon$-regularity criteria for suitable weak solutions of the 3D Navier-Stokes equations. Suitable weak solutions are a class of Leray-Hopf weak solutions satisfying the so-called local energy inequality, a notion which originated with Scheffer in a series of papers \cite{Refer23,Refer24,Refer25}. The formal definition of suitable weak solutions was first introduced by Caffarelli, Kohn, and Nirenberg \cite{Caf1}. See Section \ref{sws}. In \cite{MR2374209} Vasseur proved the following interior $\varepsilon$-regularity criterion, which provided an alternative proof of the well-known partial regularity result for the 3D incompressible Navier-Stokes equations proved by Caffarelli, Kohn, and Nirenberg \cite{Caf1}. His proof is based on the De Giorgi iteration argument, originally developed for elliptic equations in divergence form. \begin{theorem}[Vasseur \cite{MR2374209}] \label{thm2} For any $\tilde q\in (1,\infty)$, there exists an $\varepsilon_0=\varepsilon_0(\tilde q)>0$ such that if $(u,p)$ is a suitable weak solution to \eqref{NS1} in $Q$ and satisfies \begin{align*} \sup_{t\in [-1,0]}\int_{B}\abs{u(t,x)}^2\,dx + \int_Q\abs{\nabla u}^2\, dx dt+\int_{-1}^{0}\norm{p}^{\tilde q}_{L_1^{x}(B)}\, dt\leq \varepsilon_0, \end{align*} then $u$ is regular in $\overline{Q(1/2)}$. \end{theorem} Later Choi and Vasseur extended this result up to $\tilde{q}=1$ in \cite[Proposition 2.1]{MR3258360} with an additional condition on the maximal function of $\nabla u$. In \cite{Phuc}, Phuc and Guevara further refined this result to the case with simply $\tilde q =1$. Their proof exploits fractional Sobolev spaces of negative order and an inductive argument from \cite{Caf1} and \cite{MR3233577}. In this paper, we show a boundary version of Theorem \ref{thm2}.
Namely, \begin{theorem} \label{thm0} For any $\tilde q\in (1,\infty)$, there exists a universal constant $\varepsilon_0 = \varepsilon_0(\tilde q)>0$ such that if $(u,p)$ is a suitable weak solution to \eqref{NS1}-\eqref{NS3} in $Q^+$ with $p\in L_{\tilde q}^tL^x_1(Q^+)$ and satisfies \begin{equation*} \sup_{t\in [-1,0]}\int_{B^+}\abs{u(t,x)}^2\,dx + \int_{Q^+}\abs{\nabla u}^2\,dx dt+\int_{-1}^{0}\norm{p}^{\tilde q}_{L_{1}^x(B^+)}\, dt\leq \varepsilon_0, \end{equation*} then $u$ is regular in $\overline{Q^+(1/2)}$. \end{theorem} The condition $\tilde q>1$ is required when we apply the coercive estimate for the linear Stokes system to estimate the pressure term. At the time of this writing, it is not clear to us whether it is possible to take $\tilde q=1$ as in the interior case. Theorem \ref{thm0} can be used to give a new proof of the boundary partial regularity result by Seregin \cite{Refer23b}. Another consequence of the theorem is the following boundary regularity criterion, which does not involve $\nabla u$. \begin{theorem} \label{thm4} For any $q>5/2$ and $\tilde q>1$, there exists a universal constant $\varepsilon_0 = \varepsilon_0(q,\tilde{q})>0$ such that if $(u,p)$ is a suitable weak solution to \eqref{NS1}-\eqref{NS3} in $Q^+$ with $p\in L_{\tilde q}^tL^x_1(Q^+)$ and satisfies \begin{equation*} \|u\|_{L_q(Q^+)}+\|p\|_{L^t_{\tilde q}L^x_1(Q^+)}<\varepsilon_0, \end{equation*} then $u$ is regular in $\overline{Q^+(1/2)}$. \end{theorem} The above theorem is a special case of Theorem \ref{thm1b}, which will be proved in Section \ref{sec4}. The corresponding interior result when $q>20/7$ was proved recently in \cite{Phuc} by viewing the ``head pressure'' $|u|^2/2+p$ as a signed distribution, which belongs to certain fractional Sobolev spaces of negative order.
This answered a question raised by Kukavica in \cite{MR2483004} about whether one can lower the exponent $3$ in the original $\varepsilon$-regularity criterion in \cite{Caf1}. See also the more recent \cite{170901382H} for an extension to the case $q>5/2$ with an application to the estimate of box dimensions of singular sets for the Navier--Stokes equations. We refer the reader to \cite{Refer11, MR2559050} and the references therein for various interior and boundary $\varepsilon$-regularity criteria for the Navier-Stokes equations. The proofs of Theorems \ref{thm0} and \ref{thm4} both rely on iteration arguments. Compared to the argument in \cite{MR2374209}, our proof of Theorem \ref{thm0} is much shorter and, on the conceptual level, closer to the original argument in \cite{Caf1}. Instead of the fractional Sobolev spaces used in \cite{Phuc}, which do not seem to work for the boundary case, we consider scale invariant quantities in the usual mixed-norm Lebesgue spaces and apply a decomposition of the pressure due to Seregin \cite{Refer23b}. We adopt some ideas from \cite{MR3129108, Dong2} on showing uniform decay rates of scale invariant quantities by induction. More precisely, we use different induction step lengths when iterating between the scale invariant quantities associated with the energy norm and with the pressure, respectively. In the last step, we use parabolic regularity to further improve the estimate of the mean oscillation of $u$ and conclude H\"{o}lder continuity according to Campanato's characterization of H\"{o}lder continuous functions. By a minor modification of the proof of Theorem \ref{thm0}, adapted to the interior case, we also obtain a different proof of Theorem \ref{thm2} with the refined condition $\tilde q=1$ from \cite{Phuc}. The proof of Theorem \ref{thm4} uses a delicate interpolation argument.
We treat each term on the right-hand side of the generalized energy inequality in a consistent way, so that each is interpolated between the energy norms and the mixed norm which is assumed to be small in the hypothesis. By keeping the exponents of those energy norms slightly less than $2$, we gain some room to apply Young's inequality and proceed with an iteration to obtain the desired results. The remaining part of the paper is organized as follows. In the next section, we introduce some notation and the definition of suitable weak solutions to the Navier-Stokes equations. The proof of Theorem \ref{thm0} is given in Section \ref{sec_iter}. Section \ref{sec4} is devoted to the proof of Theorem \ref{thm1b}. In Appendix \ref{append}, we show how to adapt our proof to the interior case, where we can take $\tilde q=1$ thanks to a simpler estimate of the pressure. Throughout the paper, various constants are denoted by $N$, which may vary from line to line. The expression $N = N(\cdots)$ means that the given constant $N$ depends only on the contents in the parentheses. \section{Preliminaries} \label{sec2} \subsection{Notation} Let $T>0$, $\Omega$ be a domain in $\mathbb{R}^3$, $\Gamma\subset \partial\Omega$, and $\Omega_T:=(0,T)\times\Omega$ with the parabolic boundary $$ \partial_p \Omega_T = \big([0,T) \times \partial\Omega\big) \cup \big(\{t=0\}\times \Omega\big). $$ We denote by $\dot{C}_0^{\infty}(\Omega,\Gamma)$ the space of divergence-free infinitely differentiable vector fields which vanish near $\Gamma$. Let $\dot{J}(\Omega,\Gamma)$ and $\dot{J}_2^1(\Omega,\Gamma)$ be the closures of $\dot{C}^{\infty}_0(\Omega,\Gamma)$ in the spaces $L_2(\Omega)$ and $W^1_2(\Omega)$, respectively.
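The mixed-norm notation $L_q^tL_r^x$ and the spaces $W^{1,2}_{q,r}$ and $W^{0,1}_{q,r}$ are used throughout; for the reader's convenience, we record here the convention we assume (the standard one): for $q,r\in[1,\infty)$ and a space-time domain $\mathcal{D}=(S,T)\times D$, $$ \|f\|_{L_q^tL_r^x(\mathcal{D})} = \left(\int_S^T\Big(\int_{D}|f(t,x)|^r\,dx\Big)^{q/r}\,dt\right)^{1/q}, $$ with the usual modifications when $q=\infty$ or $r=\infty$, and $L_q=L_q^tL_q^x$. Accordingly, $W^{1,2}_{q,r}(\mathcal{D})$ denotes the space of functions $u$ with $u,Du,D^2u,\partial_t u\in L_q^tL_r^x(\mathcal{D})$, and $W^{0,1}_{q,r}(\mathcal{D})$ the space of those with $u,Du\in L_q^tL_r^x(\mathcal{D})$.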
We shall use the following notation for balls, half balls, parabolic cylinders, and half parabolic cylinders: \begin{align*} &B(\hat{x},\rho) = \{x\in \mathbb{R}^3 \mid \abs{x-\hat{x}}<\rho\}, \quad B(\rho) = B(0,\rho),\quad B=B(1);\\ &B^+(\hat{x},\rho)=\{x\in B(\hat{x},\rho)\mid x=(x_1,x_2,x_3),\ x_3>\hat{x}_3\},\\ & B^+(\rho) = B^+(0,\rho), \quad B^+=B^+(1);\\ &Q(\hat{z},\rho)=(\hat{t}-\rho^2,\hat{t}) \times B(\hat{x},\rho),\quad Q(\rho)=Q(0,\rho), \quad Q=Q(1);\\ &Q^+(\hat{z},\rho)=(\hat{t}-\rho^2,\hat{t}) \times B^+(\hat{x},\rho),\quad Q^+(\rho)=Q^+(0,\rho), \quad Q^+=Q^+(1), \end{align*} where $\hat{z}=(\hat{t},\hat{x})$. We denote $$ A(\rho) = \text{ess sup}_{-\rho^2\le t\le 0}\frac 1 \rho\int_{B^+(\rho)}\vert u\vert ^2 \,dx,\quad E(\rho)=\frac 1 \rho\int_{Q^+(\rho)}\vert \nabla u\vert ^2 \,dz, $$ $$ C_{q,r}(\rho) = \rho^{1-2/q-3/r}\norm{u}_{L_q^tL_r^x(Q^+(\rho))}, \quad D_{q,r}(\rho) = \rho^{2-2/q-3/r}\norm{p-[p]_\rho(t)}_{L_q^tL_r^x(Q^+(\rho))},$$ where $q,r\in [1,\infty]$ and $[p]_\rho(t)$ is the average of $p$ with respect to $x$ in $B^+(\rho)$. Note that all of these quantities are invariant with respect to the natural scaling for \eqref{NS1}: \begin{equation*} u(t,x)\rightarrow \lambda u(\lambda^2t,\lambda x),\quad p(t,x)\rightarrow \lambda^2 p(\lambda^2 t, \lambda x). \end{equation*} \subsection{Suitable weak solutions} \label{sws} The definition of suitable weak solutions was introduced in \cite{Caf1}. We say a pair $(u,p)$ is a suitable weak solution of the Navier-Stokes equations on the set $\Omega_T$ vanishing on $(0,T)\times \Gamma$ if \textsl{i)} $u\in L_{\infty}(0,T;\dot{J}(\Omega,\Gamma))\cap L_2(0,T;\dot{J}^1_2(\Omega,\Gamma))$ and $p\in L_{\tilde q}^tL^x_1(\Omega_T)$ for some $\tilde q\ge 1$; \textsl{ii)} $u$ and $p$ satisfy equation (\ref{NS1}) in the sense of distributions;
\textsl{iii)} for any $t\in(0,T)$ and any nonnegative function $\psi\in C^{\infty}(\overline{\Omega_T})$ vanishing in a neighborhood of $\{t=0\}\times \Omega$ and $(0,T)\times (\partial\Omega\setminus \Gamma)$, the integrals in the following local energy inequality are summable and the inequality holds true: \begin{align}\label{eqn_sw_energy} \begin{split} \text{ess sup}_{0\leq s\leq t}&\int_{\Omega}\abs{u(s,x)}^2\psi(s,x)\,dx+2\int_{\Omega_t}\abs{\nabla u}^2\psi \,dx \,ds\\ &\leq \int_{\Omega_t}\{\abs{u}^2(\psi_t+\Delta\psi)+(\abs{u}^2+2p)u\cdot\nabla\psi\}\,dx \,ds. \end{split} \end{align} We will specify the constant $\tilde q$ later so that the integrals on the right-hand side of \eqref{eqn_sw_energy} are summable. \section{Proof of Theorem \ref{thm0}} \label{sec_iter} This section is devoted to the proof of Theorem \ref{thm0}. We use the abbreviation $$ D(\rho) = D_{\tilde{q},\tilde{r}}(\rho) = \frac{1}{\rho} \norm{p-[p]_{\rho}(t)}_{L_{\tilde{q}}^tL_{\tilde{r}}^x(Q^+(\rho))}, $$ where $\tilde{q}\in (1, 2)$, $\tilde{r}\in (3/2,3)$, and ${2}/{\tilde{q}}+{3}/{\tilde{r}}=3$. We first prove a few lemmas which will be used below. \begin{lemma} \label{lem1b} For any $\rho>0$ and any pair of exponents $(q,r)$ such that $$\frac{2}{q}+\frac{3}{r}=\frac 3 2, \quad 2\leq q\leq \infty, \quad 2\leq r\leq 6,$$ we have $$ \rho^{-1/2} \norm{u}_{L_{q}^tL_{r}^x(Q^+(\rho))} \leq N\left(A(\rho)+E(\rho)\right)^{\frac{1}{2}}, $$ where $N>0$ is a universal constant. \end{lemma} \begin{proof} This follows from the standard interpolation argument based on the Sobolev embedding inequality and H\"{o}lder's inequality. \end{proof} \begin{lemma} \label{lem4} Let $(u,p)$ be a suitable weak solution of (\ref{NS1}).
For constants $\gamma \in (0,1/2]$, $\rho >0$, we have \begin{equation*} A(\gamma\rho)+E(\gamma\rho) \leq N\left[\gamma^{2}A(\rho)+\gamma^{-2}\left((A(\rho)+E(\rho))^{\frac{3}{2}} +(A(\rho)+E(\rho))^{\frac{1}{2}}D(\rho)\right)\right], \end{equation*} where $N>0$ is a universal constant. \end{lemma} \begin{proof} The proof is more or less standard. We give the details for the sake of completeness. By scaling, we may assume $\rho=1$. Define the backward heat kernel as $$ \Gamma(t,x)=\frac{1}{(4\pi(\gamma^2-t))^{3/2}}e^{-\frac{\abs{x}^2}{4(\gamma^2-t)}}. $$ In the energy inequality (\ref{eqn_sw_energy}), we choose $\psi=\Gamma\phi$, where $\phi\in C^{\infty}_0( (-1,1)\times B(1))$ is a suitable smooth cut-off function satisfying $$ 0\leq \phi \leq 1 \quad \text{in } \mathbb{R} \times\mathbb{R}^3, \quad \phi \equiv 1 \quad \text{ in } Q(1/2),$$ $$|\nabla \phi |\leq N,\quad |\nabla^2\phi| \leq N, \quad|\partial_t\phi |\leq N \text{ in } \mathbb{R} \times\mathbb{R}^3.$$ By using the equation $$\Delta \Gamma +\Gamma_t=0,$$ we have \begin{align}\label{eqn8c} \begin{split} &\text{ess sup}_{-1\leq s\leq 0}\int_{B^+}\abs{u(s,x)}^2\Gamma(s,x)\phi(s,x)\,dx+2\int_{Q^+}\abs{\nabla u}^2\Gamma\phi \,dz\\ &\leq \int_{Q^+}\{\abs{u}^2(\Gamma\phi_t+\Gamma\Delta\phi+2\nabla\phi\cdot\nabla\Gamma)+(\abs{u}^2+2\abs{p-[p] _{1}})u\cdot(\Gamma\nabla\phi+\phi\nabla\Gamma)\} \,dz.
\end{split} \end{align} The test function has the following properties: \begin{enumerate} \item[(i)] For some constant $c>0$, on $Q^+(\gamma)$ it holds that $$\Gamma\phi = \Gamma \geq c\gamma^{-3}.$$ \item[(ii)] For any $z\in Q^+(1)$, we have $$\abs{\Gamma(z)\phi(z)}\leq N\gamma^{-3}, \quad \abs{\nabla\Gamma(z)\phi(z)}+\abs{\Gamma(z)\nabla\phi(z)}\leq N\gamma^{-4}.$$ \item[(iii)] For any $z\in Q^+(1)$, we have $$\abs{\Gamma(z)\phi_t(z)}+\abs{\Gamma(z)\Delta\phi(z)}+\abs{\nabla\Gamma(z)\nabla\phi(z)}\leq N.$$ \end{enumerate} Therefore, (\ref{eqn8c}) and Lemma \ref{lem1b} yield \begin{align*} \begin{split} A(\gamma)+E(\gamma)&= \gamma^{-1}\text{ess sup}_{-1\leq t\leq 0}\int_{B^+(\gamma)}\abs{u(t,x)}^2\,dx+\gamma^{-1}\int_{Q^+(\gamma)}\abs{\nabla u}^2 \,dz\\ &\leq N\gamma^2\int_{Q^+}\abs{u}^2 \,dz +N\gamma^{-2} \int_{Q^+}(\abs{u}^2+2\abs{p-[p]_{1}})|u|\,dz\\ &\leq N\gamma^2A(1)+N\gamma^{-2}(A(1)+E(1))^{3/2}+N\gamma^{-2}(A(1)+E(1))^{1/2}D(1). \end{split} \end{align*} The lemma is proved. \end{proof} \begin{lemma}\label{lem5} Let $(u,p)$ be a suitable weak solution of (\ref{NS1})-\eqref{NS3}. For $\gamma\in(0,1/2]$, $\rho>0$, and $\varepsilon_1\in(0,3/\tilde{r}-1)$, we have \begin{align}\label{eqn1c} D(\gamma\rho)\leq N\left[\gamma^{1+\varepsilon_1}(D_{\tilde q,1}(\rho)+(A(\rho)+E(\rho))^{\frac{1}{2}})+\gamma^{-1}(A(\rho)+E(\rho))\right], \end{align} and \begin{align}\label{eqn1cc} D(\gamma\rho)\leq N\left[\gamma^{1+\varepsilon_1}(D(\rho)+(A(\rho)+E(\rho))^{\frac{1}{2}})+\gamma^{-1}(A(\rho)+E(\rho))\right], \end{align} where $N$ is a constant independent of $u$, $p$, $\gamma$, and $\rho$, but may depend on $\tilde q$, $\tilde r$, and $\varepsilon_1$. \end{lemma} \begin{proof} By the scale-invariance property, we may again assume $\rho=1$.
We fix a domain $\tilde{B}\subset \mathbb{R}^3$ with smooth boundary so that $$B^+(1/2)\subset \tilde{B}\subset B^+,$$ and denote $\tilde{Q}=(-1,0)\times \tilde{B}$. Define $r^*>1$ by ${1}/{r^*}={1}/{\tilde r}+{1}/{3}$. Using H\"{o}lder's inequality, we get \begin{align*} \norm{u\cdot \nabla u}_{L_{\tilde{q}}^tL^x_{r^*}(Q^+)} \leq \norm{\nabla u}_{L_{2}^tL^x_{2}(Q^+)} \norm{ u}_{L_{q_1}^tL^x_{r_1}(Q^+)}, \end{align*} where $$\frac{1}{\tilde{q}} = \frac{1}{2}+\frac{1}{q_1},\quad \frac{1}{r^*} = \frac{1}{2}+\frac{1}{r_1}.$$ Because of the conditions on $(\tilde{q},\tilde{r})$, we have $q_1\in (2,\infty)$ and $2/q_1+3/r_1=3/2$. Thus by Lemma \ref{lem1b}, we get \begin{equation} \label{eqn5c} \norm{u\cdot \nabla u}_{L_{\tilde{q}}^tL^x_{r^*}(Q^+)} \leq N(A(1)+E(1)). \end{equation} By the solvability of the linear Stokes system with the zero Dirichlet boundary condition \cite[Theorem 1.1]{Refer29} (see Lemma \ref{lem2} below), there is a unique solution $$v\in W^{1,2}_{\tilde{q},r^*}(\tilde{Q}) \quad \text{and}\quad p_1\in W^{0,1}_{\tilde{q},r^*}(\tilde{Q})$$ to the following initial boundary value problem: \[\begin{cases} \partial_t v-\Delta v+\nabla p_1=-u\cdot \nabla u \quad &\text{in } \tilde{Q},\\ \nabla\cdot v=0 &\text{in } \tilde{Q},\\ [p_1]_{\tilde{B}}(t)=0 & \text{for a.e. }t\in [-1,0],\\ v=0 & \text{on }\partial_p\tilde{Q}. \end{cases}\] Moreover, we have \begin{align}\label{eqn6c} \begin{split} &\Vert |v|+|\nabla v|+|p_1|+|\nabla p_1| \Vert_{L^t_{\tilde{q}}L_{r^*}^x (\tilde{Q})}\le N\|u\cdot\nabla u\|_{L^t_{\tilde{q}}L_{r^*}^x (\tilde{Q})} \leq N(A(1)+E(1)), \end{split} \end{align} where in the last inequality we used (\ref{eqn5c}). By the Sobolev-Poincar\'{e} inequality, we have \begin{equation} \label{eqn7c} \norm{p_1}_{L_{\tilde{q}}^tL^x_{\tilde{r}}(Q^+)} \leq N\norm{\nabla p_1}_{L_{\tilde{q}}^tL^x_{r^*}(Q^+)} \leq N(A(1)+E(1)). \end{equation} We set $w=u-v$ and $p_2=p-p_1$.
Then $w$ and $p_2$ satisfy \[\begin{cases} \partial_t w-\Delta w+\nabla p_2=0 \quad &\text{in } \tilde{Q},\\ \nabla\cdot w=0 & \text{in } \tilde{Q},\\ w=0 & \text{on } [-1,0) \times (\partial\tilde{B}\cap\{x_3=0\}).\end{cases}\] By the Sobolev-Poincar\'{e} inequality, H\"older's inequality, and an improved integrability result for the Stokes system \cite[Theorem 1.2]{Refer26b} (see also Lemma \ref{lem3} below), we have \begin{align} \label{eqn4c} &\frac{1}{\gamma}\norm{ p_2-[p_2]_\gamma}_{L_{\tilde{q}}^tL^x_{\tilde r}(Q^+(\gamma))} \leq N \norm{\nabla p_2}_{L_{\tilde{q}}^tL^x_{\tilde r}(Q^+(\gamma))} \nonumber \\ &\leq N \gamma^{3(\frac{1}{\tilde{r}}-\frac{1}{r'})}\norm{\nabla p_2}_{L_{\tilde{q}}^tL^x_{r'}(Q^+(\gamma))} \leq N \gamma^{3(\frac{1}{\tilde{r}}-\frac{1}{r'})}\norm{\nabla p_2}_{L_{\tilde{q}}^tL^x_{r'}(Q^+(1/2))}\nonumber \\ &\leq N\gamma^{3(\frac{1}{\tilde{r}}-\frac{1}{r'})}\left(\norm{p_2-[p_2]_1}_{L_{\tilde{q}}^tL^x_{1}(Q^+)}+\norm{w}_{L_{\tilde{q}}^tL^x_{1}(Q^+)}\right) \nonumber \\ &\leq N\gamma^{3(\frac{1}{\tilde{r}}-\frac{1}{r'})} \left( D_{\tilde{q},1}(1) + \norm{p_1}_{L_{\tilde{q}}^tL^x_{1}(Q^+)}+ \norm{v}_{L_{\tilde{q}}^tL^x_{1}(Q^+)}+\norm{u}_{L_{\tilde{q}}^tL^x_{1}(Q^+)}\right) \nonumber \\ &\leq N\gamma^{3(\frac{1}{\tilde{r}}-\frac{1}{r'})} \left( D_{\tilde{q},1}(1) + A(1)+E(1)+A^{\frac{1}{2}}(1)\right), \end{align} where $r'>\tilde{r}$ is any large number and we also used \eqref{eqn6c} in the last inequality. From \eqref{eqn7c} and \eqref{eqn4c}, we obtain \begin{equation} D(\gamma)\leq N \left[ \gamma^{3(\frac{1}{\tilde{r}}-\frac{1}{r'})} \left( D_{\tilde{q},1}(1) + (A(1)+E(1))^{\frac{1}{2}}\right)+\gamma^{-1}(A(1)+E(1))\right]. \nonumber \end{equation} Because $\tilde{r}<3$ and $r'$ can be arbitrarily large, $\varepsilon_1$ may range in $(0,3/\tilde{r}-1)$. Finally, \eqref{eqn1cc} follows from \eqref{eqn1c} and H\"older's inequality. The lemma is proved.
\end{proof} The following is the key lemma for the proof of Theorem \ref{thm0}. \begin{lemma} \label{lem6} There exists a constant $\varepsilon_2=\varepsilon_2(\tilde{q})>0$ with the following property. If \begin{equation*} A(1)+E(1)+D_{\tilde q,1}(1)\leq \varepsilon_2, \end{equation*} then there exist sufficiently small $\delta>0$ and $\rho_0>0$ such that \begin{equation} \label{eqn11c} A(\rho)+E(\rho)\leq \rho^{2-\delta},\quad D(\rho)\leq \rho^{1+\delta} \end{equation} for any $\rho\in(0,\rho_0]$. \end{lemma} \begin{proof} Let $\rho_k = \rho_0^{(1+\beta)^k}$, where $\rho_0\in (0,1)$ and $\beta\in (0,\delta)$ are small constants to be specified later. To prove \eqref{eqn11c} for any $\rho\in(0,1/2]$ (with a slightly smaller $\delta$), it suffices to show that for every $k$ we have \begin{equation} \label{eqn12c} A(\rho_k)+E(\rho_k)\leq \rho_k^{2-\delta},\quad D(\rho_k)\leq \rho_k^{1+\delta}. \end{equation} We will establish the initial steps of the induction later by specifying some conditions on $\rho_0$ and $\varepsilon_2$. Now suppose \eqref{eqn12c} holds for $0,1,\ldots,k$ with $k\geq l$, where $l$ is an integer to be specified later. For $A(\rho_{k+1})+E(\rho_{k+1})$, we set $\gamma = \rho_k^{\beta}$ and $\rho = \rho_k$ in Lemma \ref{lem4} to obtain \begin{align} \label{eqn13c} A(\rho_{k+1})+E(\rho_{k+1})&\leq N\left[ \rho_k^{2\beta+2-\delta}+\rho_k^{-2\beta+3(2-\delta)/2}+\rho_k^{-2\beta+1+\delta+(2-\delta)/2} \right] \nonumber\\ &\leq N\left[ \rho_k^{2\beta+2-\delta}+\rho_k^{-2\beta+3-3\delta/2}+\rho_k^{-2\beta+2+\delta/2} \right]. \end{align} For $\delta\ll 1$, when $\beta<\frac{3\delta}{2(4-\delta)}$, it holds that \begin{equation} \label{eqn14c} \min\left\{2\beta+2-\delta,-2\beta+3-3\delta/2, -2\beta+2+\delta/2 \right\}>(2-\delta)(1+\beta). \end{equation} For $D(\rho_{k+1})$, we let $\tilde{\beta} = (1+\beta)^{l+1}-1$.
We set $\gamma = \rho_{k-l}^{\tilde{\beta}}$ and $\rho = \rho_{k-l}$ in \eqref{eqn1c} to obtain \begin{equation} \label{eqn15c} D(\rho_{k+1})\leq N\left[ \rho_{k-l}^{(1+\varepsilon_1)\tilde{\beta}+1+\delta} +\rho_{k-l}^{(1+\varepsilon_1)\tilde{\beta}+1-\delta/2}+\rho_{k-l}^{-\tilde{\beta}+2-\delta}\right]. \end{equation} To satisfy \begin{equation} \label{eqn16c} \min\left\{(1+\varepsilon_1)\tilde{\beta}+1+\delta, (1+\varepsilon_1)\tilde{\beta}+1-\delta/2,-\tilde{\beta}+2-\delta \right\}>(1+\delta)(1+\tilde{\beta}), \end{equation} we require $(\varepsilon_1-\delta)\tilde{\beta}>3\delta/2$. Indeed, we can choose $\delta = \varepsilon_1^2$, $\beta = \varepsilon_1^2/4$, and take a sufficiently large integer $l$ of order $1/\varepsilon_1$ so that $\tilde \beta\sim 3\varepsilon_1$. A direct calculation shows that both \eqref{eqn14c} and \eqref{eqn16c} are satisfied. From \eqref{eqn13c} and \eqref{eqn15c} we can find some $\xi>0$ such that $$A(\rho_{k+1})+E(\rho_{k+1})\leq N\rho_{k+1}^{2-\delta+\xi},\quad D(\rho_{k+1})\leq N\rho_{k+1}^{1+\delta+\xi},$$ where $N$ is a constant independent of $k$ and $\xi$. We choose $\rho_0$ small enough such that $$ N\rho_{k+1}^{\xi}<N\rho_0^{\xi}<1 $$ to get \eqref{eqn12c} for $k\ge l$. Finally, by using \eqref{eqn1c} in Lemma \ref{lem5} with $\gamma=1/2$ and $\rho=1$, we have $$ A(1/2)+E(1/2)+D(1/2)\le N\varepsilon_2^{1/2}. $$ Then by choosing $\varepsilon_2$ sufficiently small, we can make \eqref{eqn12c} hold for $k = 0,1,2,\ldots,l$. Therefore, by induction we conclude that \eqref{eqn12c} is true for any integer $k\geq 0$. \end{proof} \begin{lemma}\label{lem7} Suppose $f(\rho_{0})\leq C_0$.
If there exist $\alpha >\beta >0$ and $C_1,C_2>0$ such that for any $0<\gamma<\rho\le\rho_{0}$, it holds that $$ f(\gamma)\leq C_1\left({\gamma}/{\rho}\right)^{\alpha}f(\rho)+C_2\rho^{\beta}, $$ then there exist constants $ C_3,C_4>0$, depending on $C_0,C_1,C_2,\alpha,\beta$, such that $$ f(\gamma)\leq C_3\left({\gamma}/{\rho_0}\right)^{\beta}f(\rho_0)+ C_4\gamma^{\beta} $$ for any $\gamma\in (0,\rho_{0}]$. \end{lemma} \begin{proof} See, for instance, \cite[Chapter III, Lemma 2.1]{iter}. \end{proof} Now we are ready to give \begin{proof}[Proof of Theorem \ref{thm0}] By Lemma \ref{lem6} we have the following estimates for $\rho\in (0,\rho_0]$: \begin{equation} \label{eqn17c} \norm{p-[p]_\rho}_{L^t_{\tilde{q}}L^x_{\tilde{r}}(Q^+(\rho))}\leq \rho^{2+\delta},\quad \norm{\abs{u}^2}_{L^t_{\tilde{q}}L^x_{\tilde{r}}(Q^+(\rho))}\leq \rho^{3-\delta}. \end{equation} Let $w$ be the unique weak solution to the heat equation $$\partial_t w_i-\Delta w_i=-\partial_j(u_iu_j)-\partial_i(p-[p]_{\rho}) \quad \text{in } Q^+(\rho)$$ with the zero boundary condition. By the classical $L_p$ estimate for the heat equation, we have $$\Vert\nabla w\Vert_{L^t_{\tilde{q}}L^x_{\tilde{r}}(Q^+(\rho))}\leq N\left\lVert |u|^2 \right\rVert_{L^t_{\tilde{q}}L^x_{\tilde{r}}(Q^+(\rho))} + N\left\lVert p-[p]_{\rho}\right\rVert_{L^t_{\tilde{q}}L^x_{\tilde{r}}(Q^+(\rho))} ,$$ which together with (\ref{eqn17c}) yields \begin{equation}\label{eqn17d} \Vert\nabla w\Vert_{L^t_{\tilde{q}}L^x_{\tilde{r}}(Q^+(\rho))}\leq N\rho^{2+\delta}. \end{equation} By the Poincar\'{e} inequality with the zero boundary condition, we get from \eqref{eqn17d} that \begin{equation}\label{eqn19c} \Vert w\Vert_{L^t_{\tilde{q}}L^x_{\tilde{r}}(Q^+(\rho))}\leq N\rho^{3+\delta}. \end{equation} Denote $v=u-w$, which satisfies the homogeneous heat equation $$\partial_t v-\Delta v=0 \quad \text{in } Q^+(\rho)$$ with the boundary condition $v=u$ on $\partial_p Q^+(\rho)$.
Let $0<\gamma<\rho$. By the Poincar\'{e} inequality with zero mean value, and using the fact that any H\"older norm of a caloric function in a smaller half cylinder is controlled by any $L_p$ norm of it in a larger half cylinder, we have \begin{align}\label{eqn188c} \begin{split} \norm{v-(v)_\gamma}_{L^t_{\tilde{q}}L^x_{\tilde{r}}(Q^+(\gamma))} &\leq N\gamma^{4}\norm{v}_{C^{1/2,1}(Q^+(\gamma))}\\ &\leq N(\gamma/\rho)^{4}\norm{v-(u)_{\rho}}_{L^t_{\tilde{q}}L^x_{\tilde{r}}(Q^+(\rho))}, \end{split} \end{align} where $(u)_{\rho}$ is the average of $u$ in $Q^+(\rho)$. Using (\ref{eqn188c}), (\ref{eqn19c}), and the triangle inequality, we have \begin{align*} \Vert u-(u)_{\gamma}\Vert_{L^t_{\tilde{q}}L^x_{\tilde{r}}(Q^+(\gamma))} \leq &\Vert v-(v)_{\gamma}\Vert_{L^t_{\tilde{q}}L^x_{\tilde{r}}(Q^+(\gamma))}+2\Vert w\Vert_{L^t_{\tilde{q}}L^x_{\tilde{r}}(Q^+(\gamma))}\\ \leq & N(\gamma/\rho)^{4} \Vert v-(u)_{\rho}\Vert_{L^t_{\tilde{q}}L^x_{\tilde{r}}(Q^+(\rho))} + N \rho^{3+\delta}\\ \leq & N(\gamma/\rho)^{4} \left(\Vert u-(u)_{\rho}\Vert_{L^t_{\tilde{q}}L^x_{\tilde{r}}(Q^+(\rho))} +\Vert w\Vert_{L^t_{\tilde{q}}L^x_{\tilde{r}}(Q^+(\rho))}\right) + N \rho^{3+\delta}\\ \leq & N(\gamma/\rho)^{4}\Vert u-(u)_{\rho}\Vert_{L^t_{\tilde{q}}L^x_{\tilde{r}}(Q^+(\rho))} +N \rho^{3+\delta}. \end{align*} Applying Lemma \ref{lem7}, we obtain $$\Vert u-(u)_{\gamma}\Vert_{L^t_{\tilde{q}}L^x_{\tilde{r}}(Q^+(\gamma))}\leq N \gamma^{3+\delta} $$ for any $\gamma\in (0,\rho_0)$. By H\"{o}lder's inequality, we have \begin{equation*} \Vert u-(u)_{\gamma}\Vert_{L_1(Q^+(\gamma))}\leq N\gamma^2 \Vert u-(u)_{\gamma}\Vert_{L^t_{\tilde{q}}L^x_{\tilde{r}}(Q^+(\gamma))}\leq N \gamma^{5+\delta}. \end{equation*} Similar estimates can be derived for interior points using the same techniques. We conclude that $u$ is H\"{o}lder continuous in $Q^+(\rho_0)$ by Campanato's characterization of H\"{o}lder continuity. The theorem then follows by a covering argument.
\end{proof} \section{A boundary regularity criterion without involving $\nabla u$} \label{sec4} In this section, we shall prove the following theorem, which is more general than Theorem \ref{thm4}. \begin{theorem} \label{thm1b} For each triple of exponents $(q,r,\tilde{q})$ satisfying \begin{equation} \label{eq9.15} \frac{2}{q}+\frac{3}{r}<2,\quad 2<q<\infty,\quad \frac{3}{2}<r<\infty, \end{equation} and \begin{equation} \label{eqn100b} \frac{1}{\tilde{q}}<F(q,r):=1- \frac{1}{q}\cdot\frac{\left(1/r-1/2\right)_+}{1/r-1/6}, \end{equation} there exists a universal constant $\varepsilon_0 = \varepsilon_0(q,r,\tilde{q})>0$ such that if $(u,p)$ is a suitable weak solution to \eqref{NS1}-\eqref{NS3} in $Q^+$ with $p\in L_{\tilde q}^tL^x_1(Q^+)$ and satisfies \begin{equation} \label{eqn0b} C_{q,r}(1)+D_{\tilde{q},1}(1)<\varepsilon_0, \end{equation} then $u$ is regular in $\overline{Q^+(1/2)}$. \end{theorem} \begin{remark} The restriction \eqref{eqn100b} arises in the estimate of the pressure term below, where we use the coercive estimate for the linear Stokes system. It is not clear to us whether this restriction can be dropped. A straightforward calculation shows that under the constraints $$ \frac{2}{q}+\frac{3}{r}\le 2,\quad 2<q<\infty,\quad \frac{3}{2}<r<\infty, $$ the function $F(q,r)$ attains its minimum $1-(\sqrt 3-\sqrt 2)^2/4$ when $1/r=1/\sqrt 6+1/6$ and $1/q=3/4-\sqrt 6/4$. Therefore, if $$ \tilde q\ge \left(1-\frac {(\sqrt 3-\sqrt 2)^2} 4\right)^{-1}\approx 1.02591, $$ then \eqref{eqn100b} holds trivially for any $(q,r)$ satisfying \eqref{eq9.15}. Moreover, it is easily seen that $$ \max\{1-1/q,1/2+1/q\}< F(q,r). $$ Therefore, by decreasing $\tilde q$ if necessary, we may assume that \begin{equation} \label{eqn111} \max\{1-1/q,1/2+1/q,7/8\}<1/\tilde q< F(q,r).
\end{equation} \end{remark} \begin{remark} By a slight modification of the proof below, we have the following result. Under the conditions of Theorem \ref{thm1b}, if instead of \eqref{eqn0b} we assume $$ C_{q,r}(1)+D_{\tilde{q},1}(1)\le C_0 $$ for some constant $C_0>0$, then $$ A(1/2)+E(1/2)\le N(q,r,\tilde q, C_0)<\infty. $$ \end{remark} We first set up some notation and state a few lemmas that will be useful in the proof of Theorem \ref{thm1b}. Let $\rho_{k}=1-2^{-k}$. Denote $Q^+_{k}=Q^+(\rho_k)$ and $B^+_k = B^+(\rho_k)$ for integers $k\geq 1$. Again we denote $A_k = A(\rho_k)$ and $E_k = E(\rho_k)$. For each integer $k\geq 0$, we fix a domain $\tilde{B}_k\subset \mathbb{R}^3$ with smooth boundary so that $$ B^+_{k+1}\subset\tilde{B}_k \subset B^+_{k+2} $$ and such that the $C^2$ norm of $\partial \tilde B_k$ is bounded by $N2^{k}$. We also denote $\tilde{Q}_k=(-\rho^2_{k+1},0)\times \tilde{B}_k$. \begin{lemma}\label{lem2} Let $m,n\in (1,\infty)$ be two fixed numbers. Assume that $g\in L_{n,m}(\tilde{Q}_k).$ Then there exists a unique function pair $(v,p)$ satisfying the following equations: \[\begin{cases} \partial_t v-\Delta v+\nabla p=g \quad &\text{in } \tilde{Q}_k,\\ \nabla\cdot v=0 &\text{in }\tilde{Q}_k,\\ [p]_{\tilde{B}_k}(t)=0 & \text{for a.e. }t\in [-\rho^2_{k+1},0],\\ v=0 & \text{on }\partial_p \tilde{Q}_k. \end{cases}\] Moreover, $v$ and $p$ satisfy the following estimate: $$\Vert v\Vert _{W^{1,2}_{n,m}(\tilde{Q}_k)}+\Vert p\Vert _{W^{0,1}_{n,m}(\tilde{Q}_k)}\leq C2^{bk} \Vert g\Vert _{L_{n,m}(\tilde{Q}_k)},$$ where the constants $C$ and $b$ depend only on $m$ and $n$. \end{lemma} We refer the reader to \cite[Theorem 1.1]{Refer29} for the proof of Lemma \ref{lem2}. The factor $2^{bk}$ can be obtained by keeping track of the constants in the localization argument in \cite[Sect. 3]{Refer29}. \begin{lemma}\label{lem3} Let $n,s\in (1,\infty)$ be constants and $g\in L_{n}^tL_s^x(Q^+_{k+1})$.
Assume that the functions $v\in W^{0,1}_{n,1}(Q^+_{k+1})$ and $p\in L_{n}^tL_1^x(Q^+_{k+1})$ satisfy the equations \[\begin{cases} \partial_tv-\Delta v+\nabla p=g \quad &\text{in } Q^+_{k+1},\\ \nabla\cdot v=0 &\text{in }Q^+_{k+1}, \end{cases}\] and the boundary condition $$ v=0 \quad \text{on }\{y\,\vert\, y=(y',0),|y'|<\rho_{k+1}\}\times[-\rho^2_{k+1},0).$$ Then we have $v\in W^{1,2}_{n,s}(Q^+_k)$, $p\in W^{0,1}_{n,s}(Q^+_k)$, and \begin{equation} \label{eq9.31} \Vert v\Vert _{W^{1,2}_{n,s}(Q^+_k)}+\Vert p\Vert _{W^{0,1}_{n,s}(Q^+_k)}\leq C2^{bk} \left(\Vert g\Vert _{L_{n}^t L_s^x(Q^+_{k+1})}+\Vert v\Vert _{L_{n}^t L_1^x(Q^+_{k+1})}+\Vert p\Vert _{L_{n}^t L_1^x(Q^+_{k+1})}\right), \end{equation} where the constants $C$ and $b$ depend only on $n$ and $s$. \end{lemma} \begin{proof} We use a mollification argument. Denote $x'=(x_1,x_2)$ and $\hat Q_k=Q^+((\rho_k+\rho_{k+1})/2)$. By the Sobolev embedding theorem, we have $v\in L_{n}^t L_m^x(Q_{k+1}^+)$ for some $m\in (1,\min(n,s))$. Let $v^{\varepsilon}$, $p^{\varepsilon}$, and $g^{\varepsilon}$ be the standard mollifications with respect to $(t,x')$, which satisfy the same equations as $v$, $p$, and $g$. By the properties of mollifications, it is clear that for sufficiently small $\varepsilon$, \begin{equation} \label{eq2.25} D_{x'} v^{\varepsilon},\,\,D_{x'}^2 v^{\varepsilon},\,\, \partial_t v^{\varepsilon}\in L_{n}^t L_m^x(\hat Q_k),\quad D_{x'}p^{\varepsilon}\in L_{n}^{t,x'} L_1^{x_3}(\hat Q_k). \end{equation} Then from the equations for $v_1^{\varepsilon}$ and $v_2^{\varepsilon}$, we get $D_{x_3x_3}v_j^{\varepsilon}\in L_{n}^{t,x'} L_1^{x_3}(\hat Q_k)$ for $j=1,2$. By applying the Sobolev embedding theorem in the $x_3$ direction, we get $D_{x_3}v_j^{\varepsilon}\in L_{n}^t L_m^x(\hat Q_k)$, which together with \eqref{eq2.25} implies that $Dv_j^{\varepsilon}\in L_{n}^t L_m^x(\hat Q_k)$ for $j=1,2$.
Since $D_{x'}v^{\varepsilon}$ satisfies the same equation, we see that $D_{x'}Dv_j^{\varepsilon}\in L_{n}^t L_m^x(\hat Q_k)$ for $j=1,2$. Now owing to $\nabla\cdot v^{\varepsilon}=0$, we have $$ D_{x_3}v_3^{\varepsilon},\ D_{x_3x_3}v_3^{\varepsilon}\in L_{n}^t L_m^x(\hat Q_k), $$ which together with the equation for $v_3^{\varepsilon}$ further implies $D_{x_3}p^{\varepsilon}\in L_{n}^t L_m^x(\hat Q_k)$. Using the Sobolev embedding theorem, we then get $p^{\varepsilon}\in L_{n}^t L_m^x(\hat Q_k)$. By \cite[Theorem 1.2]{Refer26b} (see also \cite{Refer24b}), we have $v^{\varepsilon}\in W^{1,2}_{n,s}(\hat Q_k)$, $p^{\varepsilon}\in W^{0,1}_{n,s}(\hat Q_k)$, and \begin{equation*} \|v^{\varepsilon}\|_{W_{n,s}^{1,2} (Q^+_k)}+\|p^{\varepsilon}\|_{W_{n,s}^{0,1} (Q^+_k)} \leq C2^{bk}\big(\|g^{\varepsilon}\|_{L_{n}^t L_s^x(\hat Q_k)} +\|v^{\varepsilon}\|_{W_{n,m}^{1,0}(\hat Q_k)} +\|p^{\varepsilon}\|_{L_{n}^t L_m^x(\hat Q_k)}\big), \end{equation*} where $C$ is independent of $\varepsilon$ and $k$. Again, the factor $2^{bk}$ can be obtained by keeping track of the constants in the proofs in \cite{Refer24b}. Taking the limit as $\varepsilon\to 0$, we get \begin{equation*} \|v\|_{W_{n,s}^{1,2} (Q^+_k)}+\|p\|_{W_{n,s}^{0,1} (Q^+_k)}\leq C2^{bk}\big(\|g\|_{L_{n}^t L_s^x(\hat Q_k)} +\|v\|_{W_{n,m}^{1,0}(\hat Q_k)} +\|p\|_{L_{n}^t L_m^x(\hat Q_k)}\big). \end{equation*} By interpolation inequalities, for any $\delta\in(0,1)$ we have \begin{align*} &\|v\|_{W_{n,s}^{1,2} (Q^+_k)}+\|p\|_{W_{n,s}^{0,1} (Q^+_k)}\\ &\, \leq \delta \big(\|v\|_{W_{n,s}^{1,2} (\hat Q_k)}+\|p\|_{W_{n,s}^{0,1} (\hat Q_k)}\big)+ C2^{bk}\|g\|_{L_{n}^t L_s^x(\hat Q_k)} +C_\delta 2^{bk}\big(\|v\|_{L_{n}^t L_1^x (\hat Q_k)} +\|p\|_{L_{n}^t L_1^x(\hat Q_k)}\big). \end{align*} Finally, \eqref{eq9.31} follows by a standard iteration argument. \end{proof} We now give the proof of Theorem \ref{thm1b}.
\begin{proof}[Proof of Theorem \ref{thm1b}]
By replacing $p$ with $p-[p]_1(t)$, without loss of generality we may assume that $[p]_1(t)=0$ for $t\in (-1,0)$. Let $\tau>0$ be such that
\begin{equation*}
\frac{2}{q}+\frac{3}{r}:=2-\tau.
\end{equation*}
It is easily seen that we can find $\hat r\in (3/2,r)$ such that
\begin{equation}
\label{eq4.54}
\frac{2}{q}+\frac{3}{\hat{r}}:=2-\tau_1<2,\quad \frac{1}q+\frac{2}{\hat{r}} > 1, \quad \frac{1}q+\frac{1}{\hat{r}} > \frac 2 3.
\end{equation}
In the sequel, we will choose $\tau_1$ sufficiently small by reducing $\hat{r}$. Let $\rho_{k}=1-2^{-k}$, $B_k = B(\rho_k)$, and $Q_{k}=Q(\rho_k)$ for each integer $k\geq 1$. For each $k$, we choose a cut-off function $\psi_k$ satisfying
$$ \text{supp } \psi_k \subset Q_{k+1}, \quad \psi_k\equiv 1 \text{ on } Q_{k},$$
$$|\partial_t \psi_k|\leq N2^{2k}, \quad |D\psi_k|\leq N2^k, \quad |D^2\psi_k|\leq N2^{2k}.$$
By \eqref{eqn_sw_energy}, we have
\begin{equation*}
\text{ess sup}_{-\rho_k^2 \leq s\leq 0}\int_{B^+_k}\abs{u(s,x)}^2\,dx+2\int_{Q^+_k}\abs{\nabla u}^2 \,dx \,ds \leq N\int_{Q^+_{k+1}}\{2^{2k}\abs{u}^2+2^k(\abs{u}^2+2|p|)|u|\}\,dx \,ds,
\end{equation*}
where $N$ is independent of $k$. For simplicity, we denote $A_k = A(\rho_k)$ and $E_k = E(\rho_k)$. Thus we can rewrite the above inequality as
\begin{equation}
\label{eqn1}
A_k+E_k \leq N 2^{2k}\int_{Q^+_{k+1}}\left\{ \abs{u}^2+\abs{u}^3+\abs{pu} \right\}\, dz.
\end{equation}
Using H\"{o}lder's inequality, we have the interpolation
\begin{equation}
\int_{Q^+_{k+1}} \abs{u}^3\,dz\leq \norm{u}^{\frac{1}{1-2\tau_1}}_{L_{q}^tL_{\hat r}^x(Q^+_{k+1})}\norm{u}^{\frac{2-6\tau_1}{1-2\tau_1}}_{L_{q_1}^tL_{r_1}^x(Q^+_{k+1})}\leq \norm{u}^{\frac{1}{1-2\tau_1}}_{L_{q}^tL_{r}^x(Q^+_{k+1})} \norm{u}^{\frac{2-6\tau_1}{1-2\tau_1}}_{L_{q_1}^tL_{r_1}^x(Q^+_{k+1})}, \nonumber
\end{equation}
where
\begin{equation}
\label{eq5.04}
q_1\in [2,\infty],\quad r_1\in [2,6]
\end{equation}
need to satisfy $2/q_1+3/r_1=3/2$. A simple calculation shows that
$$ q_1 = \frac{(2-6\tau_1)q}{(1-2\tau_1)q-1}, \quad r_1 = \frac{(2-6\tau_1)\hat r}{(1-2\tau_1)\hat r-1}. $$
Indeed, \eqref{eq5.04} holds because of \eqref{eq4.54}. We then apply Lemma \ref{lem1b} to get
\begin{equation}
\label{eqn2}
\int_{Q^+_{k+1}} \abs{u}^3\,dz\leq \norm{u}^{\frac{1}{1-2\tau_1}}_{L_{q}^tL_{r}^x(Q^+_{k+1})}(A_{k+1}+E_{k+1})^{\frac{1-3\tau_1}{1-2\tau_1}}.
\end{equation}
Again by H\"{o}lder's inequality, we have
\begin{equation}
\label{eqn13}
\int_{Q^+_{k+1}} |u|^2\,dz \leq N\left(\int_{Q^+_{k+1}} \abs{u}^3\,dz\right)^{2/3} \leq N\norm{u}^{\frac{2}{3(1-2\tau_1)}}_{L_q^tL_r^x(Q^+_{k+1})} (A_{k+1}+E_{k+1})^{\frac{2(1-3\tau_1)}{3(1-2\tau_1)}}.
\end{equation}
To deal with the last term in \eqref{eqn1}, we make the following decomposition. For suitable $(q',s^*)$ which we will specify later, there exists a unique pair of solutions
$$v_k\in W^{1,2}_{q',s^*}(\tilde{Q}_k) \quad \text{and}\quad p_k\in W^{0,1}_{q',s^*}(\tilde{Q}_k)$$
to the following initial boundary value problem:
\begin{equation}
\label{eqn_decomp1}
\begin{cases}
\partial_t v_k-\Delta v_k+\nabla p_k=-u\cdot \nabla u \quad &\text{in } \tilde{Q}_k,\\
\nabla\cdot v_k=0 &\text{in } \tilde{Q}_k,\\
[p_k]_{\tilde{B}_k}(t)=0 & \text{for a.e. }t\in [-\rho^2_{k+1},0],\\
v_k=0 & \text{on }\partial_p\tilde{Q}_k.
\end{cases}
\end{equation}
We set $w_k=u-v_k$ and $h_k=p-p_k$. Then $w_k$ and $h_k$ satisfy
\begin{equation*}
\begin{cases}
\partial_t w_k-\Delta w_k+\nabla h_k=0 \quad &\text{in } \tilde{Q}_k,\\
\nabla\cdot w_k=0 & \text{in } \tilde{Q}_k,\\
w_k=0 & \text{on } [-\rho^2_{k+1},0) \times (\partial\tilde{B}_k\cap \{x_3=0 \}).
\end{cases}
\end{equation*}
We choose $\hat{r}_2\in (3/2,r)$ satisfying
$$ \frac{2}{q}+\frac{3}{\hat{r}_2}:=2-\tau_2<2. $$
In the sequel, we will choose $\tau_2$ sufficiently small by reducing $\hat{r}_2$.

{\em Estimates for $p_k$:} For $\alpha_1\in (0,1)$ which we will specify later, by H\"older's inequality,
\begin{equation}
\label{eqn11b}
\int_{Q^+_{k+1}}\abs{up_k}\, dz\leq \norm{u}^{1-\alpha_1}_{L_{q_2}^tL^x_{r_2}(Q^+_{k+1})} \norm{u}^{\alpha_1}_{L_{q}^tL^x_{\hat{r}_2}(Q^+_{k+1})} \norm{p_k}_{L^t_{q'}L^x_{s'}(Q^+_{k+1})},
\end{equation}
where the exponents need to satisfy
$$ \frac{2}{q_2}+\frac{3}{r_2} = \frac{3}{2},\quad \frac{1}{q'}+\frac{1-\alpha_1}{q_2}+\frac{\alpha_1}{q}=1,\quad \frac{1}{s'}+\frac{1-\alpha_1}{r_2}+\frac{\alpha_1}{\hat{r}_2}=1. $$
The system above implies that
\begin{equation}
\label{eqn7b}
\frac{2}{q'}+\frac{3}{s'} = 5-\left((1-\alpha_1)\cdot \frac{3}{2}+\alpha_1(2-\tau_2)\right).
\end{equation}
To make use of Lemma \ref{lem1b}, we need $ 2\leq q_2\leq \infty$, which is equivalent to
\begin{equation}
\label{eqn17b}
\frac{1}{q'}+\frac{\alpha_1}q\leq 1\leq \frac{1}{q'}+\frac{\alpha_1}q+\frac{1-\alpha_1}{2}.
\end{equation}
We are going to check this condition later. Next we estimate the nonlinear term $u\cdot \nabla u$. Define another exponent $s^*$ by
$$ \frac{1}{s^*} := \frac{1}{s'}+\frac{1}{3}.
$$
Using H\"{o}lder's inequality, we get
\begin{equation}
\label{eqn20}
\norm{u\cdot \nabla u}_{L^t_{q'}L^x_{s^*}(\tilde{Q}_k)}\leq \norm{ \nabla u}_{L^t_2L^x_2(\tilde{Q}_k)}\norm{u}^{1-\alpha_2}_{L^t_{q}L^x_{\hat{r}_2}(\tilde{Q}_k)}\norm{u}^{\alpha_2}_{L^t_{q_3}L^x_{r_3}(\tilde{Q}_k)},
\end{equation}
where
\begin{equation}
\label{eq5.00}
\frac{1}{q'}=\frac{1}{2}+\frac{1-\alpha_2}{q}+\frac{\alpha_2}{q_3}, \quad\frac{1}{s^*} = \frac{1}{2}+\frac{1-\alpha_2}{\hat{r}_2}+\frac{\alpha_2}{r_3}
\end{equation}
$$q_3\in [2,\infty],\quad r_3\in [2,6],\quad \frac{2}{q_3}+\frac{3}{r_3} = \frac{3}{2}.$$
In particular, we take $q_3 = 2$ and $r_3 = 6$ so that
\begin{equation}
\label{eqn28}
\alpha_2 = \frac{{1}/{q'}-{1}/{2}-{1}/{q}}{{1}/{2}-{1}/{q}}.
\end{equation}
By \eqref{eq5.00} we have
\begin{equation}
\label{eqn19b}
\frac{2}{q'}+\frac{3}{s'}=(1-\alpha_2)(2-\tau_2)+(1+\alpha_2)\cdot\frac{3}{2}.
\end{equation}
From \eqref{eqn7b}, \eqref{eqn28}, and \eqref{eqn19b}, we get
$$\alpha_1= \alpha_2+\frac{2\tau_2}{1-2\tau_2} = \frac{1/q'-1/2-{1}/{q}}{{1}/{2}-{1}/{q}}+\frac{2\tau_2}{1-2\tau_2}.$$
Since we have solved for $\alpha_1$, we can now go back to verify \eqref{eqn17b}. A simple calculation shows that it indeed holds when $q'> \frac{q^2}{q^2-q+2}$ and $\tau_2$ is sufficiently small. We note that the conditions above contain an implicit restriction $q>2$. This can be seen by adding up the first inequality in \eqref{eqn17b} and the first equality in \eqref{eq5.00} and using the fact that $\alpha_1>\alpha_2$.
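As a sanity check (our own verification, not part of the original argument), the relation $\alpha_1=\alpha_2+\frac{2\tau_2}{1-2\tau_2}$ obtained by equating (eqn7b) and (eqn19b) can be confirmed with exact rational arithmetic; the sample values of $\alpha_2$ and $\tau_2$ below are arbitrary, since the identity is purely algebraic.

```python
from fractions import Fraction

# Hypothetical sample values; the identity holds for any admissible choice.
alpha2 = Fraction(1, 3)
tau2 = Fraction(1, 20)

# Right-hand side of (eqn19b): 2/q' + 3/s' = (1-a2)(2-t2) + (1+a2)*3/2
common = (1 - alpha2) * (2 - tau2) + (1 + alpha2) * Fraction(3, 2)

# (eqn7b) reads 2/q' + 3/s' = 5 - ((1-a1)*3/2 + a1*(2-t2))
#                           = 7/2 - a1*(1/2 - t2); solve it for alpha_1:
alpha1 = (Fraction(7, 2) - common) / (Fraction(1, 2) - tau2)

# Exactly the relation derived in the proof:
assert alpha1 == alpha2 + 2 * tau2 / (1 - 2 * tau2)
print(alpha1)  # 4/9 for these sample values
```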
To make use of Lemma \ref{lem2}, we need to check that $s^*> 1$, which is equivalent to
$$\frac{1-\alpha_2}{\hat{r}_2}+\frac{\alpha_2}{6}< \frac{1}{2}.$$
In the special case when $q'=\frac{q^2}{q^2-q+2}$, we have $\alpha_2 = 1-2/q\in (0,1)$ and the above inequality becomes
\begin{equation*}
\frac{2}{\hat{r}_2 q}+\left(1-\frac{2}{q}\right) \frac{1}{6}< \frac{1}{2},
\end{equation*}
which clearly holds true because
$$2\sqrt{\frac{6}{\hat{r}_2q}}\leq \frac{2}{q}+\frac{3}{\hat{r}_2}<2$$
and thus $\hat{r}_2 q> 6$. By continuity, when $q'$ is sufficiently close to $\frac{q^2}{q^2-q+2}$, we still have $s^*> 1$ and $\alpha_2\in (0,1)$. Now by Lemma \ref{lem2}, we have the existence of the unique solution pair $(v_k,p_k)$ to \eqref{eqn_decomp1} and
\begin{align*}
\begin{split}
&\Vert |p_k|+|\nabla p_k| \Vert_{L^t_{q'}L_{s^*}^x (\tilde{Q}_k)}\le N2^{bk} \|u\cdot\nabla u\|_{L^t_{q'}L_{s^*}^x (\tilde{Q}_k)}\\
&\leq N2^{bk}\norm{ \nabla u}_{L^t_2L^x_2(\tilde{Q}_k)}\norm{u}^{1-\alpha_2}_{L^t_qL^x_{\hat r_2}(\tilde{Q}_k)}\norm{u}^{\alpha_2}_{L^t_{q_3}L^x_{r_3}(\tilde{Q}_k)},
\end{split}
\end{align*}
where in the last inequality we used \eqref{eqn20}. Here and in the sequel, $b$ is a positive constant which is independent of $k$ and may vary from line to line. Together with the Sobolev-Poincar\'{e} inequality and H\"{o}lder's inequality, we obtain
\begin{equation*}
\norm{p_k}_{L^t_{q'}L^x_{s'}(\tilde{Q}_k)}\leq N2^{bk}\norm{ \nabla u}_{L^t_2L^x_2(\tilde{Q}_k)}\norm{u}^{1-\alpha_2}_{L^t_{q}L^x_{\hat r_2}(\tilde{Q}_k)}\norm{u}^{\alpha_2}_{L^t_{q_3}L^x_{r_3}(\tilde{Q}_k)}.
\end{equation*}
Combining this with \eqref{eqn11b}, we have
\begin{align}
\label{eqn5p}
\int_{Q^+_{k+1}}\abs{up_k}\, dz &\leq N2^{b k} \norm{u}^{1-\alpha_1}_{L_{q_2}^tL^x_{r_2}(Q^+_{k+2})} \norm{u}^{1+\alpha_1-\alpha_2}_{L^t_{q}L^x_{\hat r_2}(Q^+_{k+2})} \cdot\norm{ \nabla u}_{L^t_2L^x_2(Q^+_{k+2})}\norm{u}^{\alpha_2}_{L^t_{q_3}L^x_{r_3}(Q^+_{k+2})}\nonumber \\
& \leq N2^{b k}\varepsilon_0^{\frac{1}{1-2\tau_2}}(A_{k+2}+E_{k+2})^{\frac{1-3\tau_2}{1-2\tau_2}}.
\end{align}

{\em Estimates for $h_k$:} For some $\alpha_3\in (0,1]$ which we will specify later, analogously to \eqref{eqn11b}, by H\"older's inequality,
\begin{equation}
\label{eqn11h}
\int_{Q^+_{k+1}}\abs{uh_k}\, dz\leq \norm{u}^{1-\alpha_3}_{L_{q_4}^tL^x_{r_4}(Q^+_{k+1})} \norm{u}^{\alpha_3}_{L_{q}^tL^x_{r}(Q^+_{k+1})} \norm{h_k}_{L^t_{\tilde{q}}L^x_{\tilde{s}}(Q^+_{k+1})},
\end{equation}
where the exponents satisfy
$$\frac{2}{q_4}+\frac{3}{r_4} = \frac{3}{2},\quad \frac{1}{\tilde{q}}+\frac{1-\alpha_3}{q_4}+\frac{\alpha_3}{q}=1,\quad \frac{1}{\tilde{s}}+\frac{1-\alpha_3}{r_4}+\frac{\alpha_3}{r}=1.$$
To make use of Lemma \ref{lem1b}, we require $ 2\leq q_4\leq \infty$, which is equivalent to
\begin{equation*}
\frac{1}{\tilde{q}}+\frac{\alpha_3}{q}\leq 1\leq \frac{1}{\tilde{q}}+\frac{\alpha_3}{q}+\frac{1-\alpha_3}{2}.
\end{equation*}
We simply choose $\alpha_3 = q(1-1/\tilde{q})\in (0,1)$ and use \eqref{eqn111} to verify this condition.
\begin{comment}
We set $w_k=u-v_k$ and $h_k=p-p_k$.
Then $w_k$ and $h_k$ satisfy
\[\begin{cases}
\partial_t w_k-\Delta w_k+\nabla h_k=0 \quad &\text{in } \tilde{Q}_k,\\
\nabla\cdot w_k=0 & \text{in } \tilde{Q}_k,\\
w_k=0 & \text{on } [-1,0) \times (\partial\tilde{B}_k\cap \{x_3=0 \}).
\end{cases}\]
\end{comment}
By Lemma \ref{lem3} and the triangle inequality, we have $h_k\in W^{0,1}_{\tilde{q},\tilde{s}}(Q^+_{k+2})$ and the following estimate:
\begin{align}
\label{eqn29}
&\Vert h_k\Vert_{ L^t_{\tilde{q}}L^x_{\tilde{s}}(Q^+_{k+1})}\leq N2^{bk}\Vert |w_k|+|h_k|\Vert_{ L^t_{\tilde{q}}L^x_{1}(Q^+_{k+2})}\nonumber\\
&\leq N 2^{bk}\Vert |u|+|p|\Vert_{ L^t_{\tilde{q}}L^x_{1}(Q^+_{k+2})} +N2^{bk}\Vert |v_k|+|p_k|\Vert_{ L^t_{\tilde{q}}L^x_{1}(Q^+_{k+2})}.
\end{align}
By H\"{o}lder's inequality, \eqref{eqn111}, and \eqref{eqn0b}, we have
\begin{equation}
\label{eqn26}
\Vert |u|+|p|\Vert_{ L^t_{\tilde{q}}L^x_{1}(Q^+_{k+2})}\leq \varepsilon_0.
\end{equation}
For any $\tilde{s}^*>1$, by Lemma \ref{lem2}, we have
\begin{equation}
\label{eqn23b}
\norm{|v_k|+|p_k|}_{L^t_{\tilde{q}}L_{1}^x (\tilde{Q}_{k+1})} \le N\norm{|v_k|+|p_k|}_{L^t_{\tilde{q}}L_{\tilde{s}^*}^x (\tilde{Q}_{k+1})}\le N2^{bk} \|u\cdot\nabla u\|_{L^t_{\tilde{q}}L_{\tilde{s}^*}^x (\tilde{Q}_{k+1})}.
\end{equation}
Next, analogously to \eqref{eqn20}, by H\"older's inequality,
\begin{equation}
\label{eqn20h}
\norm{u\cdot \nabla u}_{L^t_{\tilde{q}}L^x_{\tilde{s}^*}(\tilde{Q}_{k+1})}\leq \norm{ \nabla u}_{L^t_2L^x_2(\tilde{Q}_{k+1})} \norm{u}^{1-\alpha_4}_{L^t_qL^x_r(\tilde{Q}_{k+1})}\norm{u}^{\alpha_4}_{L^t_{q_5}L^x_{r_5}(\tilde{Q}_{k+1})},
\end{equation}
where the exponents satisfy
$$2\leq q_5\leq \infty,\quad\frac{2}{q_5}+\frac{3}{r_5} = \frac{3}{2},\quad \frac{1}{2}+\frac{1-\alpha_4}{q}+\frac{\alpha_4}{q_5}\le \frac{1}{\tilde{q}}, \quad \frac{1}{2}+\frac{1-\alpha_4}{r}+\frac{\alpha_4}{r_5}\le \frac{1}{\tilde{s}^*}.
$$
To justify the use of Lemma \ref{lem2} in \eqref{eqn23b}, we need to check that $\tilde{s}^*> 1$, which is equivalent to
$$\frac{1-\alpha_4}{r}+\frac{\alpha_4}{r_5}<\frac{1}{2}.$$
For a later purpose, we also want $\alpha_3 = q(1-1/\tilde{q})>\alpha_4$. To satisfy all three conditions above, we distinguish two cases:

i) If $r>2$, we simply choose $\alpha_4=0$. Then such $\tilde s^*>1$ exists, and in view of \eqref{eqn111}, all the conditions are satisfied.

ii) If $r\in (3/2, 2]$, then $q>4$. We set $q_5 = 2$ and $r_5 = 6$. To ensure $\tilde{s}^*>1$ and $\alpha_3>\alpha_4$, we take
$$ \frac{1/r-1/2}{1/r-1/6}<\alpha_4< \min\{1/2, q(1-1/\tilde q)\}, $$
which is possible because of \eqref{eqn100b} and $r>3/2$. Moreover, we have
$$ \frac{1}{2}+\frac{1-\alpha_4}{q}+\frac{\alpha_4}{q_5} \le \frac{1}{2}+\frac{1-\alpha_4}{4}+\frac{\alpha_4}{2}\le \frac 7 8<\frac 1 {\tilde q}, $$
where we used \eqref{eqn111} in the last inequality. Now plugging \eqref{eqn26}, \eqref{eqn23b}, and \eqref{eqn20h} into \eqref{eqn29}, we obtain
\begin{equation*}
\norm{h_k}_{L^t_{\tilde{q}}L^x_{\tilde{s}}(Q^+_{k+1})}\leq N2^{bk} \left(\varepsilon_0+ \norm{ \nabla u}_{L^t_2L^x_2(Q^+_{k+3})}\norm{u}^{1-\alpha_4}_{L^t_qL^x_r(Q^+_{k+3})} \norm{u}^{\alpha_4}_{L^t_{q_5}L^x_{r_5}(Q^+_{k+3})} \right).
\end{equation*}
Together with \eqref{eqn11h}, we have
\begin{align}
\label{eqn5b}
\int_{Q^+_{k+1}}\abs{uh_k}\, dz \leq & N2^{b k} \norm{u}^{1-\alpha_3}_{L_{q_4}^tL^x_{r_4}(Q^+_{k+3})} \norm{u}^{\alpha_3}_{L_{q}^tL^x_{r}(Q^+_{k+3})} \nonumber \\
&\cdot \left(\varepsilon_0+ \norm{ \nabla u}_{L^t_2L^x_2(Q^+_{k+3})}\norm{u}^{1-\alpha_4}_{L^t_qL^x_r(Q^+_{k+3})}\norm{u}^{\alpha_4}_{L^t_{q_5}L^x_{r_5}(Q^+_{k+3})} \right).
\end{align}
Note that we have consistently chosen $(q_n,r_n)$ such that
$$ \frac{2}{q_n}+\frac{3}{r_n}=\frac{3}{2}, \quad 2\leq q_n\leq \infty, \quad 2\leq r_n\leq 6,\quad \text{ for }n = 1,2,3,4,5.$$
Thus by Lemma \ref{lem1b}, we know
$$\norm{u}_{L^t_{q_n}L^x_{r_n}(Q_k^+)}\leq (A_k+E_k)^{1/2} \quad \text{for }n = 1,2,3,4,5.$$
Substituting \eqref{eqn0b}, \eqref{eqn2}, \eqref{eqn13}, \eqref{eqn5p}, and \eqref{eqn5b} into \eqref{eqn1}, we obtain
\begin{align*}
A_k+E_k &\leq N 2^{bk}\left( \sum_{j=1}^2 \varepsilon_0^{\frac{1}{1-2\tau_j}}(A_{k+3}+E_{k+3})^{\frac{1-3\tau_j}{1-2\tau_j}} +\varepsilon_0^{\frac{2}{3(1-2\tau_1)}}(A_{k+3}+E_{k+3})^{\frac{2(1-3\tau_1)}{3(1-2\tau_1)}} \right. \nonumber \\
&\quad \left. + \varepsilon_0^{1+\alpha_3}(A_{k+3}+E_{k+3})^{\frac{1-\alpha_3}{2}} +\varepsilon_0^{1+\alpha_3-\alpha_4}(A_{k+3}+E_{k+3})^{\frac{2-\alpha_3+\alpha_4}{2}} \right).
\end{align*}
By Young's inequality, for any $\delta>0$ we have
\begin{align}
\label{eqn15b}
A_k+E_k\leq \delta^3(A_{k+3}+E_{k+3}) + \tilde{N}2^{bk}\varepsilon_0 ^{\beta},
\end{align}
where $\tilde{N} = \tilde{N}(\delta,q,r,b)>0$ and $\beta=\beta(q,r)\in (0,1)$. We multiply both sides of \eqref{eqn15b} by $\delta^k$ and sum over integer $k$ from $1$ to infinity. By setting $\delta = 3^{-b}$, we ensure that the second term on the right-hand side is summable, and we get
\begin{align*}
\sum_{k=1}^{\infty}\delta^k(A_k+E_k) &\leq \sum_{k=1}^{\infty}\delta^{k+3}(A_{k+3}+E_{k+3}) + \tilde{N}\varepsilon_0^{\beta}\sum_{k=1}^{\infty}(2/3)^{bk}\\
& \leq \sum_{k=3}^{\infty}\delta^{k}(A_{k}+E_k) + \tilde{N}\varepsilon_0^{\beta}.
\end{align*}
Therefore, we have
\begin{equation*}
A_1+E_1\leq \tilde{N}\varepsilon_0^{\beta},
\end{equation*}
where $\tilde{N} = \tilde{N}(\delta,q,r,b)>0$ and $\beta=\beta(q,r)\in (0,1)$.
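The absorption step above can be illustrated numerically. The following sketch (our own illustration, not taken from the paper) unrolls the recursion $a_k\le \delta^3 a_{k+3}+C2^{bk}$ with $\delta=3^{-b}$, starting from an a priori bound $M$ on all the $a_k$, and shows that the resulting bound on $a_1$ is independent of $M$, which is the content of the summation argument.

```python
from fractions import Fraction

def unrolled_bound(M, C, b=1, steps=60):
    """Unroll a_k <= delta^3 * a_{k+3} + C * 2**(b*k), with delta = 3**(-b),
    down to k = 1, starting from the a priori bound a_k <= M."""
    delta = Fraction(1, 3 ** b)
    bound = Fraction(M)
    for j in reversed(range(steps)):  # k = 1 + 3*j, from large k down to k = 1
        k = 1 + 3 * j
        bound = delta ** 3 * bound + C * Fraction(2) ** (b * k)
    return bound

# With b = 1 the geometric series sums to 2*C/(1 - 8/27) = 54*C/19, so the
# a priori bound M is completely damped out after enough iterations:
print(float(unrolled_bound(100, 1)))      # ~2.8421
print(float(unrolled_bound(10 ** 6, 1)))  # ~2.8421
```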
Together with $D_{\tilde{q},1}<\varepsilon_0$, we can use Theorem \ref{thm0} to conclude that there exists a universal constant $\varepsilon_0=\varepsilon_0(q,r)$ sufficiently small such that $u$ is regular in $\overline{Q^+(1/2)}$.
\end{proof}

\noindent {\bf Remark added after the proof:} After we finished this paper, we learned that a result similar to Theorem \ref{thm4} was proved in \cite{181200900W} under a much stronger assumption on the pressure.

\appendix
\section{Interior regularity criterion} \label{append}
In this appendix, we show how our proof is adapted to the interior case, where $\tilde q$ is allowed to be $1$. We note that Theorem \ref{thm1} was also obtained recently in \cite{170901382H} by using a different proof. For $\rho>0$, we define the scale-invariant quantities $A(\rho)$, $E(\rho)$, $C_{q,r}(\rho)$, and $D_{q,r}(\rho)$ with $B(\rho)$ and $Q(\rho)$ in place of $B^+(\rho)$ and $Q^+(\rho)$.
\begin{theorem}
\label{thm1}
For each pair of exponents $(q,r)$ satisfying
\begin{equation*}
\frac{2}{q}+\frac{3}{r}<2,\quad 1<q<\infty,\quad \frac{3}{2}<r<\infty, \nonumber
\end{equation*}
there exists a universal constant $\varepsilon_0 = \varepsilon_0(q,r)$ such that if $(u,p)$ is a suitable weak solution pair to \eqref{NS1} in $Q$ with $p\in L_{1}(Q)$ satisfying
\begin{equation}
\label{eqn0}
C_{q,r}(1)+D_{1,1}(1)<\varepsilon_0,
\end{equation}
then $u$ is regular in $\overline{Q(1/2)}$.
\end{theorem}
\begin{lemma}
\label{lem1}
For any $\rho>0$ and any pair of exponents $(q,r)$ such that
$$ \frac{2}{q}+\frac{3}{r}=\frac 3 2, \quad 2\leq q\leq \infty, \quad 2\leq r\leq 6, $$
we have
$$ \rho^{-1/2}\norm{u}_{L_{q}^tL_{r}^x(Q(\rho))} \leq N\left(A(\rho)+E(\rho)\right)^{\frac{1}{2}}. $$
\end{lemma}
\begin{proof}
Use the standard interpolation given by the Sobolev embedding theorem and H\"{o}lder's inequality.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm1}]
As before, we may assume that $[p]_1(t)=0$ for $t\in (-1,0)$. Also, we can find $\hat r\in (3/2,r)$ such that
\begin{equation}
\label{eq4a}
\frac{2}{q}+\frac{3}{\hat{r}}:=2-\tau_1<2,\quad \frac{1}q+\frac{2}{\hat{r}} > 1, \quad \frac{1}q+\frac{1}{\hat{r}} > \frac 2 3.
\end{equation}
As before, we choose $\tau_1$ sufficiently small by reducing $\hat{r}$. Following the beginning of the proof of Theorem \ref{thm1b}, we have
\begin{equation}
\label{eqn1b}
A_k+E_k \leq N 2^{2k}\int_{Q_{k+1}}\left\{ \abs{u}^2+\abs{u}^3+\abs{pu} \right\}\, dz.
\end{equation}
The estimates for the first two terms on the right remain the same. For the third term, we decompose it in the following way. For each $k$, let $\eta_k(x)$ be a smooth cut-off function supported in $B_{k+2}$ with $0 \leq\eta_k\leq 1$ and $\eta_k \equiv 1$ on $B(\frac{\rho_{k+1}+\rho_{k+2}}{2})$. In the sense of distributions, for a.e. $t\in (-1,0)$, it holds that
$$ \Delta p = D_{ij}(u_iu_j)\quad \text{in } B_1. $$
We consider the decomposition $p = p_k+h_k$, where $p_k$ is the Newtonian potential of $D_{ij}(u_iu_j\eta_k).$ Then $h_k$ is harmonic in $B(\frac{\rho_{k+1}+\rho_{k+2}}{2})$.

{\em Estimates for $p_k$:} Let $q'=\frac{2q}{q+1}\in (1,\infty)$, and let $s'\in (1,\infty)$ and $\alpha_1\in (0,1)$ be constants which we will specify later. By H\"older's inequality,
\begin{equation}
\label{eqn11}
\int_{Q_{k+1}}\abs{up_k}\, dz\leq \norm{u}^{1-\alpha_1}_{L_{q_2}^tL^x_{r_2}(Q_{k+1})} \norm{u}^{\alpha_1}_{L_{q}^tL^x_{\hat r}(Q_{k+1})} \norm{p_k}_{L^t_{q'}L^x_{s'}(Q_{k+1})},
\end{equation}
where the exponents satisfy
\begin{equation}
\label{eq4.54a}
\frac{2}{q_2}+\frac{3}{r_2} = \frac{3}{2}, \quad \frac{1}{q'}+\frac{1-\alpha_1}{q_2}+\frac{\alpha_1}{q}=1, \quad\frac{1}{s'}+\frac{1-\alpha_1}{r_2}+\frac{\alpha_1}{\hat r}=1.
\end{equation}
To make use of Lemma \ref{lem1}, we require $ 2\leq q_2\leq \infty$, which is equivalent to
\begin{equation}
\label{eqn17}
\frac{1}{q'}+\frac{\alpha_1}{q}\leq 1\leq \frac{1}{q'}+\frac{\alpha_1}{q}+\frac{1-\alpha_1}{2}.
\end{equation}
We will come back to check this condition later. By \eqref{eq4.54a} we also have
\begin{equation}
\label{eqn7}
\frac{2}{q'}+\frac{3}{s'} = 5-\left((1-\alpha_1)\cdot \frac{3}{2}+\alpha_1(2-\tau_1)\right).
\end{equation}
Using the Calder\'{o}n-Zygmund estimate, we have
\begin{equation}
\label{eqn3}
\norm{p_k}_{L^t_{q'}L^x_{s'}(Q_{k+1})}\leq \norm{p_k}_{L^t_{q'}L^x_{s'}(Q_{k+2})} \leq N \norm{u}^2_{L^t_{2q'}L^x_{2s'}(Q_{k+2})}.
\end{equation}
By H\"{o}lder's inequality, we have
\begin{equation}
\label{eqn6}
\norm{u}^2_{L^t_{2q'}L^x_{2s'}(Q_{k+2})} \leq \norm{u}_{L_q^tL_{\hat r}^x(Q_{k+2})}\norm{u}_{L_{q_3}^tL_{r_3}^x(Q_{k+2})},
\end{equation}
where
\begin{equation*}
q_3=\frac {2q}{q-1}\in (2,\infty),\quad r_3=\frac {6q}{q+2}\in (2,6),\quad \frac 2 {q_3}+\frac 3 {r_3}=\frac 3 2,
\end{equation*}
and
\begin{equation*}
\frac 1 {s'}=\frac 1 {\hat r}+\frac 1 {r_3}=\frac 1 {\hat r}+\frac 1 6 +\frac 1 {3 q}.
\end{equation*}
Plugging this into \eqref{eqn7} and using \eqref{eq4a}, we get
$$ \alpha_1= \frac{2\tau_1}{1-2\tau_1}. $$
Since we have solved for $\alpha_1$, we can now go back to verify \eqref{eqn17}, which is equivalent to
$$ \frac{1}{2}+\frac{1}{2q}+\frac{\alpha_1}{q}\leq 1\leq 1+\frac{1}{2q}+\frac{\alpha_1}{q}-\frac{\alpha_1}{2}. $$
This indeed is satisfied when $\tau_1$ is sufficiently small. Thus by Lemma \ref{lem1}, \eqref{eqn11}, \eqref{eqn3}, and \eqref{eqn6}, we have
\begin{align}
\label{eqn5}
\int_{Q_{k+1}}\abs{up_k}\, dz \leq & N (A_{k+2}+E_{k+2})^{\frac{1-3\tau_1}{1-2\tau_1}} \norm{u}^{\frac 1 {1-2\tau_1}}_{L_q^tL_r^x(Q_{k+2})}.
\end{align}

{\em Estimates for $h_k$:} By H\"{o}lder's inequality, we have
\begin{equation}
\label{eqn11g}
\int_{Q_{k+1}}\abs{uh_k}\, dz\leq \norm{u}_{L_{\infty}^tL^x_{2}(Q_{k+1})} \norm{h_k}_{L^t_{1}L^x_{2}(Q_{k+1})}.
\end{equation}
Recall that $h_k$ is harmonic in $B(\frac{\rho_{k+1}+\rho_{k+2}}{2})$. Since any Sobolev norm of a harmonic function on a smaller ball can be estimated by any of its $L_p$ norms on a larger ball, we know
\begin{equation*}
\norm{h_k}_{L_{2}^x(B_{k+1})}\leq N 2^{b k}\norm{h_k}_{L_1^x(B_{k+2})},\nonumber
\end{equation*}
where $b>0$ is a constant. Integrating in $t\in (-\rho^2_{k+1},0)$, we have
\begin{equation}
\label{eqn4q}
\norm{h_k}_{L_1^tL_{2}^x(Q_{k+1})}\leq N 2^{b k}\norm{h_k}_{L_1^tL_1^x(Q_{k+2})}\leq N 2^{b k}(\norm{p_k}_{L_1^tL_1^x(Q_{k+2})}+\norm{p}_{L_1^tL_1^x(Q_{k+2})}),
\end{equation}
where the second term is small by condition \eqref{eqn0}. By the Calder\'{o}n-Zygmund estimate and H\"{o}lder's inequality, for any $\tilde{r}>1$ we have
\begin{equation}
\label{eqn5g}
\norm{p_k}_{L_1^tL_{1}^x(Q_{k+2})}\leq N\norm{p_k}_{L_1^tL_{\tilde{r}}^x(Q_{k+2})}\leq N\norm{u}^2_{L_1^tL_{2\tilde{r}}^x(Q_{k+2})}.
\end{equation}
For $q>1$, we claim that the following interpolation holds for some $\alpha,q_4,r_4>0$:
\begin{equation}
\label{eqn2p}
\norm{u}^2_{L_1^tL_{2\tilde{r}}^x(Q_{k+2})} \leq N\norm{u}^{2-\alpha}_{L_q^tL_{r}^x(Q_{k+2})}\norm{u}^{\alpha}_{L_{q_4}^tL_{r_4}^x(Q_{k+2})},
\end{equation}
where
$$ q_4\in [2,\infty],\quad r_4\in [2,6],\quad \frac{2}{q_4}+\frac{3}{r_4}=\frac{3}{2}, $$
and they need to satisfy
$$ \frac{2-\alpha}{q}+\frac{\alpha}{q_4}\le 1,\quad \frac{2-\alpha}{r}+\frac{\alpha}{r_4}\le 1. $$
Indeed, we can choose $(\alpha,q_4,r_4)$ in the following way: when $1<q\leq 2$, we set $q_4 = \infty$, $r_4 =2$, and $\alpha = 2-q$; when $q>2$, we set $q_4 =2$, $r_4 = 6$, and $\alpha=6/7$. Note that in both cases we have $\alpha<1$.
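As a quick check of this choice of $(\alpha,q_4,r_4)$ (a verification we add for convenience; the sample pairs $(q,r)$ below are arbitrary admissible values satisfying $2/q+3/r<2$), exact rational arithmetic confirms the constraints listed under (eqn2p) in both cases:

```python
from fractions import Fraction

def check_choice(q, r):
    """Verify the constraints below (eqn2p) for the choice made in the text."""
    if q <= 2:
        # case 1 < q <= 2: q_4 = infinity, r_4 = 2, alpha = 2 - q
        r4, alpha = Fraction(2), 2 - q
        time_cond = (2 - alpha) / q              # the alpha/q_4 term vanishes
    else:
        # case q > 2: q_4 = 2, r_4 = 6, alpha = 6/7
        q4, r4, alpha = Fraction(2), Fraction(6), Fraction(6, 7)
        time_cond = (2 - alpha) / q + alpha / q4
    space_cond = (2 - alpha) / r + alpha / r4
    return alpha < 1 and time_cond <= 1 and space_cond <= 1

assert check_choice(Fraction(3, 2), Fraction(5))  # 2/q + 3/r = 29/15 < 2
assert check_choice(Fraction(4), Fraction(3))     # 2/q + 3/r = 3/2 < 2
```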
Now we plug \eqref{eqn4q}, \eqref{eqn5g}, and \eqref{eqn2p} into \eqref{eqn11g} to obtain
\begin{align}
\label{eqn5k}
\int_{Q_{k+1}}\abs{uh_k}\, dz \leq & N2^{bk}\left((A_{k+2}+E_{k+2})^{\frac{1}{2}} \norm{p}_{L_1^tL_1^x(Q_{k+2})}\right. \nonumber \\
&+ \left. (A_{k+2}+E_{k+2})^{\frac{1+\alpha}{2}} \norm{u}^{2-\alpha}_{L_q^tL_r^x(Q_{k+2})}\right).
\end{align}
By \eqref{eqn1b}, \eqref{eqn2}, \eqref{eqn13}, \eqref{eqn5}, \eqref{eqn5k}, and condition \eqref{eqn0}, we conclude that
\begin{align*}
A_k+E_k &\leq N 2^{bk}\left( \varepsilon_0^{\frac{1}{1-2\tau_1}}(A_{k+2}+E_{k+2})^{\frac{1-3\tau_1}{1-2\tau_1}}+ \varepsilon_0^{\frac{2}{3(1-2\tau_1)}}(A_{k+2}+E_{k+2})^{\frac{2(1-3\tau_1)}{3(1-2\tau_1)}} \right. \nonumber \\
&\quad \left. +\varepsilon_0(A_{k+2}+E_{k+2})^{\frac{1}{2}} +\varepsilon_0^{2-\alpha}(A_{k+2}+E_{k+2})^{\frac{1+\alpha}{2}} \right).
\end{align*}
Now, similarly to the proof in Section \ref{sec4}, we obtain
$$ A_1+E_1\leq \tilde{N}\varepsilon_0^{\beta}. $$
Together with $D_{1,1}<\varepsilon_0$, we can apply \cite[Theorem 1.5]{Phuc} to conclude that there exists a universal constant $\varepsilon_0=\varepsilon_0(q,r)$ sufficiently small such that $u$ is regular in $\overline{Q(1/2)}$.
\begin{comment}
To prove the other case when $1<q\leq 2$, we can simplify the above proof because we have $\tilde{q}=1$. We replace \eqref{eqn11} with
\begin{equation}
\int_{Q_{k+1}}\abs{up}\, dz\leq \norm{u}_{L_{\infty}^tL^x_{2}(Q_{k+1})}\norm{p}_{L^t_{1}L^x_{2}(Q_{k+1})}, \nonumber
\end{equation}
and \eqref{eqn6} with
\begin{equation}
\label{eqn6b}
\norm{u}^2_{L^t_{2}L^x_{4}(Q_{k+2})} \leq \norm{u}^{2-\alpha}_{L_q^tL_r^x(Q_{k+2})}\norm{u}^{\alpha}_{L_{q_3}^tL_{r_3}^x(Q_{k+2})},
\end{equation}
where
\begin{equation}
\label{eqn9b}
\alpha := \frac{1-4\tau}{1-2\tau}<1, \quad \frac{2}{q_3}+\frac{3}{r_3} = \frac{3}{2}.
\end{equation}
We still need to check $q_3\geq 2$ so that we can take advantage of Lemma \ref{lem1}:

i) When $1<q<2$, $\tilde{q}=1$, and $\tilde{s}=2$. For \eqref{eqn6b} to hold, we need
$$1=\frac{2-\alpha}{q}+\frac{\alpha}{q_3}.$$
If we were allowed to take $\tau = 0$, then $\alpha = 1$ and
$$\frac{1}{q_3} = 1-\frac{1}{q}<\frac{1}{2} \Rightarrow q_3>2.$$
By continuity, for $\tau>0$ sufficiently small, we still have $q_3>2$. The rest of the exponents can be determined and checked by \eqref{eqn8}, \eqref{eqn9b}, and
$$\frac{1}{2}=\frac{2-\alpha}{r}+\frac{\alpha}{r_3}.$$

ii) When $q=2$, $\tilde{q}=1$, and $\tilde{s}=2$. We can easily verify that $\alpha = \frac{1-4\tau}{1-2\tau}$, $q_3 = 2$, and $r_3=6$ will make \eqref{eqn6b} hold true.

After this minor modification, our proof for $q>2$ will readily work for $1<q\leq 2$ as well.
\end{comment}
\end{proof}

\providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace}
\providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR }
\providecommand{\MRhref}[2]{ \href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2} }
\providecommand{\href}[2]{#2}
\begin{comment}
\bibitem{2018arXiv180203164A} D.~{Albritton} and T.~{Barker}, \emph{{Global weak Besov solutions of the Navier-Stokes equations and applications}}, ArXiv e-prints (2018).

\bibitem{MR3803715} Dallas Albritton, \emph{Blow-up criteria for the {N}avier-{S}tokes equations in non-endpoint critical {B}esov spaces}, Anal. PDE \textbf{11} (2018), no.~6, 1415--1456. \MR{3803715}

\bibitem{MR3713543} T.~Barker and G.~Seregin, \emph{A necessary condition of potential blowup for the {N}avier-{S}tokes system in half-space}, Math. Ann. \textbf{369} (2017), no.~3-4, 1327--1352.
\MR{3713543}

\bibitem{Caf1} L.~Caffarelli, R.~Kohn, and L.~Nirenberg, \emph{Partial regularity of suitable weak solutions of the {N}avier-{S}tokes equations}, Comm. Pure Appl. Math. \textbf{35} (1982), no.~6, 771--831. \MR{673830}

\bibitem{Refer2} A.~Cheskidov and R.~Shvydkoy, \emph{The regularity of weak solutions of the 3{D} {N}avier-{S}tokes equations in {$B^{-1}_{\infty,\infty}$}}, Arch. Ration. Mech. Anal. \textbf{195} (2010), no.~1, 159--169. \MR{2564471}

\bibitem{Refer2b} H.~Dong and D.~Du, \emph{Partial regularity of solutions to the four-dimensional {N}avier-{S}tokes equations at the first blow-up time}, Comm. Math. Phys. \textbf{273} (2007), no.~3, 785--801. \MR{2318865}

\bibitem{Dong1} \bysame, \emph{The {N}avier-{S}tokes equations in the critical {L}ebesgue space}, Comm. Math. Phys. \textbf{292} (2009), no.~3, 811--827. \MR{2551795}

\bibitem{Dong2} H.~Dong and X.~Gu, \emph{Boundary partial regularity for the high dimensional {N}avier-{S}tokes equations}, J. Funct. Anal. \textbf{267} (2014), no.~8, 2606--2637. \MR{3255469}

\bibitem{Dong3} H.~Dong and D.~Li, \emph{Optimal local smoothing and analyticity rate estimates for the generalized {N}avier-{S}tokes equations}, Commun. Math. Sci. \textbf{7} (2009), no.~1, 67--80. \MR{2512833}

\bibitem{Seregin1} L.~Escauriaza, G.~A. Seregin, and V.~\v{S}ver\'{a}k, \emph{{$L_{3,\infty}$}-solutions of {N}avier-{S}tokes equations and backward uniqueness}, Uspekhi Mat. Nauk \textbf{58} (2003), no.~2(350), 3--44. \MR{1992563}

\bibitem{Seregin2} \bysame, \emph{On backward uniqueness for parabolic equations}, Arch. Ration. Mech. Anal. \textbf{169} (2003), no.~2, 147--157. \MR{2005639}

\bibitem{MR3475661} Isabelle Gallagher, Gabriel~S. Koch, and Fabrice Planchon, \emph{Blow-up of critical {B}esov norms at a potential {N}avier-{S}tokes singularity}, Comm. Math. Phys.
\textbf{343} (2016), no.~1, 39--82. \MR{3475661}

\bibitem{iter} M.~Giaquinta, \emph{Multiple integrals in the calculus of variations and nonlinear elliptic systems}, Annals of Mathematics Studies, vol. 105, Princeton University Press, Princeton, NJ, 1983. \MR{717034}

\bibitem{Refer8} Y.~Giga, \emph{Solutions for semilinear parabolic equations in {$L^p$} and regularity of weak solutions of the {N}avier-{S}tokes system}, J. Differential Equations \textbf{62} (1986), no.~2, 186--212. \MR{833416}

\bibitem{Giga1} Y.~Giga and T.~Miyakawa, \emph{Solutions in {$L_r$} of the {N}avier-{S}tokes initial value problem}, Arch. Rational Mech. Anal. \textbf{89} (1985), no.~3, 267--281. \MR{786550}

\bibitem{Giga2} Y.~Giga and O.~Sawada, \emph{On regularizing-decay rate estimates for solutions to the {N}avier-{S}tokes initial value problem}, Nonlinear analysis and applications: to {V}. {L}akshmikantham on his 80th birthday. {V}ol. 1, 2, Kluwer Acad. Publ., Dordrecht, 2003, pp.~549--562. \MR{2060233}

\bibitem{Hopf1} E.~Hopf, \emph{\"{U}ber die {A}nfangswertaufgabe f\"ur die hydrodynamischen {G}rundgleichungen}, Math. Nachr. \textbf{4} (1951), 213--231. \MR{0050423}

\bibitem{analyticity} C.~Kahane, \emph{On the spatial analyticity of solutions of the {N}avier-{S}tokes equations}, Arch. Rational Mech. Anal. \textbf{33} (1969), 386--405. \MR{0245989}

\bibitem{Kato1} T.~Kato, \emph{Strong {$L^{p}$}-solutions of the {N}avier-{S}tokes equation in {${\bf R}^{m}$}, with applications to weak solutions}, Math. Z. \textbf{187} (1984), no.~4, 471--480. \MR{760047}

\bibitem{MR2784068} Carlos~E. Kenig and Gabriel~S. Koch, \emph{An alternative approach to regularity for the {N}avier-{S}tokes equations in critical spaces}, Ann. Inst. H. Poincar\'{e} Anal. Non Lin\'{e}aire \textbf{28} (2011), no.~2, 159--187.
\MR{2784068}

\bibitem{Koch1} H.~Koch and D.~Tataru, \emph{Well-posedness for the {N}avier-{S}tokes equations}, Adv. Math. \textbf{157} (2001), no.~1, 22--35. \MR{1808843}

\bibitem{Refer15} O.~A. Lady\v{z}enskaja, \emph{Uniqueness and smoothness of generalized solutions of {N}avier-{S}tokes equations}, Zap. Nau\v cn. Sem. Leningrad. Otdel. Mat. Inst. Steklov. (LOMI) \textbf{5} (1967), 169--185. \MR{0236541}

\bibitem{Refer18} O.~A. Ladyzhenskaya and G.~A. Seregin, \emph{On partial regularity of suitable weak solutions to the three-dimensional {N}avier-{S}tokes equations}, J. Math. Fluid Mech. \textbf{1} (1999), no.~4, 356--387. \MR{1738171}

\bibitem{Leray1} J.~Leray, \emph{\'etude de diverses \'equations int\'egrales non lin\'eaires et de quelques probl\`emes que pose l'hydrodynamique}, NUMDAM, [place of publication not identified], 1933. \MR{3533002}

\bibitem{Lieberman1} Gary~M. Lieberman, \emph{Second order parabolic differential equations}, World Scientific Publishing Co., Inc., River Edge, NJ, 1996. \MR{1465184}

\bibitem{Refer21} F.~Lin, \emph{A new proof of the {C}affarelli-{K}ohn-{N}irenberg theorem}, Comm. Pure Appl. Math. \textbf{51} (1998), no.~3, 241--257. \MR{1488514}

\bibitem{Refer29} P.~Maremonti and V.~A. Solonnikov, \emph{On estimates for the solutions of the nonstationary {S}tokes problem in {S}. {L}. {S}obolev anisotropic spaces with a mixed norm}, Zap. Nauchn. Sem. S.-Peterburg. Otdel. Mat. Inst. Steklov. (POMI) \textbf{222} (1995), no.~Issled. po Line\u\i n. Oper. i Teor. Funktsi\u\i . 23, 124--150, 309. \MR{1359996}

\bibitem{analyticity1} K.~Masuda, \emph{On the analyticity and the unique continuation theorem for solutions of the {N}avier-{S}tokes equation}, Proc. Japan Acad. \textbf{43} (1967), 827--832. \MR{0247304}

\bibitem{Refer19} A.~S. Mikhailov and T.~N.
Shilkin, \varepsilonmph{{$L_{3,\infty}$}-solutions to the 3{D}-{N}avier-{S}tokes system in the domain with a curved boundary}, Zap. Nauchn. Sem. S.-Peterburg. Otdel. Mat. Inst. Steklov. (POMI) \textbf{336} (2006), no.~Kraev. Zadachi Mat. Fiz. i Smezh. Vopr. Teor. Funkts. 37, 133--152, 276. \mathcal{M}R{2270883} \bibitem{Refer22} G.~Prodi, \varepsilonmph{Un teorema di unicit\`a per le equazioni di {N}avier-{S}tokes}, Ann. Mat. Pura Appl. (4) \textbf{48} (1959), 173--182. \mathcal{M}R{0126088} \bibitem{Refer23} V.~Scheffer, \varepsilonmph{Partial regularity of solutions to the {N}avier-{S}tokes equations}, Pacific J. Math. \textbf{66} (1976), no.~2, 535--552. \mathcal{M}R{0454426} \bibitem{Refer24} \bysame, \varepsilonmph{Hausdorff measure and the {N}avier-{S}tokes equations}, Comm. Math. Phys. \textbf{55} (1977), no.~2, 97--112. \mathcal{M}R{0510154} \bibitem{Refer25} \bysame, \varepsilonmph{The {N}avier-{S}tokes equations on a bounded domain}, Comm. Math. Phys. \textbf{73} (1980), no.~1, 1--42. \mathcal{M}R{573611} \bibitem{Refer24b} G.~A. Seregin, \varepsilonmph{Some estimates near the boundary for solutions to the non-stationary linearized {N}avier-{S}tokes equations}, Zap. Nauchn. Sem. S.-Peterburg. Otdel. Mat. Inst. Steklov. (POMI) \textbf{271} (2000), no.~Kraev. Zadachi Mat. Fiz. i Smezh. Vopr. Teor. Funkts. 31, 204--223, 317. \mathcal{M}R{1810618} \bibitem{Refer23b} \bysame, \varepsilonmph{Local regularity of suitable weak solutions to the {N}avier-{S}tokes equations near the boundary}, J. Math. Fluid Mech. \textbf{4} (2002), no.~1, 1--29. \mathcal{M}R{1891072} \bibitem{Seregin3} \bysame, \varepsilonmph{On smoothness of {$L_{3,\infty}$}-solutions to the {N}avier-{S}tokes equations up to boundary}, Math. Ann. \textbf{332} (2005), no.~1, 219--238. \mathcal{M}R{2139258} \bibitem{Refer26b} \bysame, \varepsilonmph{A note on local boundary regularity for the {S}tokes system}, Zap. Nauchn. Sem. S.-Peterburg. Otdel. Mat. Inst. Steklov. 
(POMI) \textbf{370} (2009), no.~Kraevye Zadachi Matematichesko\u\i \ Fiziki i Smezhnye Voprosy Teorii Funktsi\u\i . 40, 151--159, 221--222. \mathcal{M}R{2749216} \bibitem{Refer27b} G.~A. Seregin, T.~N. Shilkin, and V.~A. Solonnikov, \varepsilonmph{Boundary partial regularity for the {N}avier-{S}tokes equations}, Zap. Nauchn. Sem. S.-Peterburg. Otdel. Mat. Inst. Steklov. (POMI) \textbf{310} (2004), no.~Kraev. Zadachi Mat. Fiz. i Smezh. Vopr. Teor. Funkts. 35 [34], 158--190, 228. \mathcal{M}R{2120190} \bibitem{Refer28} J.~Serrin, \varepsilonmph{On the interior regularity of weak solutions of the {N}avier-{S}tokes equations}, Arch. Rational Mech. Anal. \textbf{9} (1962), 187--195. \mathcal{M}R{0136885} \bibitem{Refer30} \bysame, \varepsilonmph{The initial value problem for the {N}avier-{S}tokes equations}, Nonlinear {P}roblems ({P}roc. {S}ympos., {M}adison, {W}is., 1962), Univ. of Wisconsin Press, Madison, Wis., 1963, pp.~69--98. \mathcal{M}R{0150444} \bibitem{Refer31} M.~Struwe, \varepsilonmph{On partial regularity results for the {N}avier-{S}tokes equations}, Comm. Pure Appl. Math. \textbf{41} (1988), no.~4, 437--458. \mathcal{M}R{933230} \bibitem{Taylor1} M.~E. Taylor, \varepsilonmph{Analysis on {M}orrey spaces and applications to {N}avier-{S}tokes and other evolution equations}, Comm. Partial Differential Equations \textbf{17} (1992), no.~9-10, 1407--1456. \mathcal{M}R{1187618} \bibitem{Wahl1} W.~von Wahl, \varepsilonmph{The equations of {N}avier-{S}tokes and abstract parabolic equations}, Aspects of Mathematics, E8, Friedr. Vieweg \& Sohn, Braunschweig, 1985. \mathcal{M}R{832442} \bibitem{MR3629487} WenDong Wang and ZhiFei Zhang, \varepsilonmph{Blow-up of critical norms for the 3-{D} {N}avier-{S}tokes equations}, Sci. China Math. \textbf{60} (2017), no.~4, 637--650. \mathcal{M}R{3629487} \bibitem{local_sol1} F.~B. Weissler, \varepsilonmph{The {N}avier-{S}tokes initial value problem in {$L^{p}$}}, Arch. Rational Mech. Anal. \textbf{74} (1980), no.~3, 219--230. 
\end{comment}

\providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace}
\providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR }
\providecommand{\MRhref}[2]{\href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2}}
\providecommand{\href}[2]{#2}

\begin{thebibliography}{10}

\bibitem{Caf1} L.~Caffarelli, R.~Kohn, and L.~Nirenberg, \emph{Partial regularity of suitable weak solutions of the {N}avier-{S}tokes equations}, Comm. Pure Appl. Math. \textbf{35} (1982), no.~6, 771--831. \MR{673830}

\bibitem{MR3258360} Kyudong Choi and Alexis~F. Vasseur, \emph{Estimates on fractional higher derivatives of weak solutions for the {N}avier-{S}tokes equations}, Ann. Inst. H. Poincar\'{e} Anal. Non Lin\'{e}aire \textbf{31} (2014), no.~5, 899--945. \MR{3258360}

\bibitem{Dong2} Hongjie Dong and Xumin Gu, \emph{Boundary partial regularity for the high dimensional {N}avier-{S}tokes equations}, J. Funct. Anal. \textbf{267} (2014), no.~8, 2606--2637. \MR{3255469}

\bibitem{MR3129108} Hongjie Dong and Robert~M. Strain, \emph{On partial regularity of steady-state solutions to the 6{D} {N}avier-{S}tokes equations}, Indiana Univ. Math. J. \textbf{61} (2012), no.~6, 2211--2229. \MR{3129108}

\bibitem{iter} Mariano Giaquinta, \emph{Multiple integrals in the calculus of variations and nonlinear elliptic systems}, Annals of Mathematics Studies, vol. 105, Princeton University Press, Princeton, NJ, 1983. \MR{717034}

\bibitem{Phuc} Cristi Guevara and Nguyen~Cong Phuc, \emph{Local energy bounds and {$\epsilon$}-regularity criteria for the 3{D} {N}avier-{S}tokes system}, Calc. Var. Partial Differential Equations \textbf{56} (2017), no.~3, Art. 68, 16. \MR{3640646}

\bibitem{Refer11} Stephen Gustafson, Kyungkeun Kang, and Tai-Peng Tsai, \emph{Interior regularity criteria for suitable weak solutions of the {N}avier-{S}tokes equations}, Comm. Math. Phys. \textbf{273} (2007), no.~1, 161--176. \MR{2308753}

\bibitem{170901382H} Cheng {He}, Yanqing {Wang}, and Daoguo {Zhou}, \emph{{New $\varepsilon$-regularity criteria and application to the box dimension of the singular set in the 3D Navier-Stokes equations}}, arXiv e-prints (2017), arXiv:1709.01382.

\bibitem{MR2559050} Jaewoo Kim and Myeonghyeon Kim, \emph{Local regularity of the {N}avier-{S}tokes equations near the curved boundary}, J. Math. Anal. Appl. \textbf{363} (2010), no.~1, 161--173. \MR{2559050}

\bibitem{MR2483004} Igor Kukavica, \emph{Regularity for the {N}avier-{S}tokes equations with a solution in a {M}orrey space}, Indiana Univ. Math. J. \textbf{57} (2008), no.~6, 2843--2860. \MR{2483004}

\bibitem{Refer29} P.~Maremonti and V.~A. Solonnikov, \emph{On estimates for the solutions of the nonstationary {S}tokes problem in {S}. {L}. {S}obolev anisotropic spaces with a mixed norm}, Zap. Nauchn. Sem. S.-Peterburg. Otdel. Mat. Inst. Steklov. (POMI) \textbf{222} (1995), no.~Issled. po Line\u\i n. Oper. i Teor. Funktsi\u\i . 23, 124--150, 309. \MR{1359996}

\bibitem{Refer23} Vladimir Scheffer, \emph{Partial regularity of solutions to the {N}avier-{S}tokes equations}, Pacific J. Math. \textbf{66} (1976), no.~2, 535--552. \MR{0454426}

\bibitem{Refer24} \bysame, \emph{Hausdorff measure and the {N}avier-{S}tokes equations}, Comm. Math. Phys. \textbf{55} (1977), no.~2, 97--112. \MR{0510154}

\bibitem{Refer25} \bysame, \emph{The {N}avier-{S}tokes equations on a bounded domain}, Comm. Math. Phys. \textbf{73} (1980), no.~1, 1--42. \MR{573611}

\bibitem{Refer24b} G.~A. Seregin, \emph{Some estimates near the boundary for solutions to the non-stationary linearized {N}avier-{S}tokes equations}, Zap. Nauchn. Sem. S.-Peterburg. Otdel. Mat. Inst. Steklov. (POMI) \textbf{271} (2000), no.~Kraev. Zadachi Mat. Fiz. i Smezh. Vopr. Teor. Funkts. 31, 204--223, 317. \MR{1810618}

\bibitem{Refer23b} \bysame, \emph{Local regularity of suitable weak solutions to the {N}avier-{S}tokes equations near the boundary}, J. Math. Fluid Mech. \textbf{4} (2002), no.~1, 1--29. \MR{1891072}

\bibitem{Refer26b} \bysame, \emph{A note on local boundary regularity for the {S}tokes system}, Zap. Nauchn. Sem. S.-Peterburg. Otdel. Mat. Inst. Steklov. (POMI) \textbf{370} (2009), no.~Kraevye Zadachi Matematichesko\u\i \ Fiziki i Smezhnye Voprosy Teorii Funktsi\u\i . 40, 151--159, 221--222. \MR{2749216}

\bibitem{MR2374209} Alexis~F. Vasseur, \emph{A new proof of partial regularity of solutions to {N}avier-{S}tokes equations}, NoDEA Nonlinear Differential Equations Appl. \textbf{14} (2007), no.~5-6, 753--785. \MR{2374209}

\bibitem{MR3233577} Wendong Wang and Zhifei Zhang, \emph{On the interior regularity criteria and the number of singular points to the {N}avier-{S}tokes equations}, J. Anal. Math. \textbf{123} (2014), 139--170. \MR{3233577}

\bibitem{181200900W} Yanqing {Wang} and Minsuk {Yang}, \emph{{Improved bounds for box dimensions of potential singular points to the Navier--Stokes equations}}, arXiv e-prints (2018), arXiv:1812.00900.

\end{thebibliography}

\end{document}
\begin{document} \title{\huge{\textbf{Existence and stability of minimizers in ferromagnetic nanowires}}} \author{\Large{Davit Harutyunyan}\\ \textit{Department of Mathematics of The University of Utah}} \maketitle \begin{center}\textbf{Abstract}\end{center} We study static 180 degree domain walls in infinite magnetic wires with bounded, $C^1$ and rotationally symmetric cross sections. We prove the existence of global minimizers of the energy of micromagnetics for any bounded $C^1$ cross section. Under some asymmetry assumption on the cross section we prove a stability result for the minimizers, namely, we show that magnetizations having an energy close to the minimal one must be $H^1$-close to the actual minimizer, which is itself close, up to a rotation and a translation, to the minimizer of the limit energy. \\ \newline \textbf{Keywords:}\ \ Nanowires; Magnetization reversal; Transverse wall; Vortex wall; Domain wall \section{Introduction} \newtheorem{Theorem}{Theorem}[section] \newtheorem{Lemma}[Theorem]{Lemma} \newtheorem{Corollary}[Theorem]{Corollary} \newtheorem{Definition}[Theorem]{Definition} In the theory of micromagnetics, to any domain $\Omega\subset \mathbb R^3$ and any unit vector field (called the magnetization) $m\colon\Omega\to\mathbb S^2$, extended by $m=0$ in $\mathbb R^3\setminus \Omega$, one assigns the energy of micromagnetics $$E(m)=A_{ex}\int_\Omega|\nabla m|^2+K_d\int_{\mathbb R^3}|\nabla u|^2+Q\int_\Omega\varphi(m)-2\int_\Omega H_{ext}\cdot m,$$ where $A_{ex},$ $K_d$, $Q$ are material parameters, $H_{ext}$ is the externally applied magnetic field, $\varphi$ is the anisotropy energy density and $u$ is obtained from Maxwell's equations of magnetostatics, $$ \begin{cases} \mathrm{curl}\, H_{ind}=0 & \quad\text{in}\quad \mathbb R^3,\\ \mathrm{div}(H_{ind}+m)=0 & \quad\text{in}\quad \mathbb R^3, \end{cases} $$ with the induced field $H_{ind}=-\nabla u$, i.e., $u$ is a weak solution of $$\triangle u= \textrm{div}\, m\qquad \text{in}\qquad \mathbb R^3.$$ According to micromagnetics, stable magnetization patterns are described by the
minimizers of the micromagnetic energy functional, see [\ref{bib:D.S.KMO1},\ref{bib:D.S.KMO2},\ref{bib:HSch}]. The study of magnetic wires and thin films has attracted significant attention in recent years, see [\ref{bib:AXFAPC},\ref{bib:BNKTE},\ref{bib:CL},\ref{bib:FSSSTDF},\ref{bib:HK},\ref{bib:Kuehn},\ref{bib:NT},\ref{bib:NWBKUFK},\ref{bib:PGDLFLOF},\ref{bib:Sanchez}, \ref{bib:SS},\ref{bib:WNU}] for wires and [\ref{bib:CMO},\ref{bib:D.S.KMO1},\ref{bib:D.S.KMO2},\ref{bib:GC1},\ref{bib:GC2},\ref{bib:KS1},\ref{bib:KS2},\ref{bib:Kurzke}] for thin films. It has been suggested in [\ref{bib:AXFAPC}] that magnetic nanowires can be effectively used as storage devices. When a homogeneous external field is applied along the axis of a magnetic wire, against the homogeneous magnetization direction (see Fig. 1), then at a critical strength of the field the reversal of the magnetization typically starts at one end of the wire, creating a domain wall which starts moving along the wire. The domain wall separates the reversed and the not yet reversed parts of the wire (see Fig. 1). It is known that the magnetization reversal time is closely related to the writing and reading speed of such a device; thus it is crucial to understand the magnetization reversal and switching processes. Several authors have numerically, experimentally and analytically observed two different magnetization modes in magnetic nanowires [\ref{bib:FSSSTDF},\ref{bib:HK},\ref{bib:Har},\ref{bib:Kuehn}]. In [\ref{bib:FSSSTDF}] the magnetization reversal process has been studied numerically in cobalt nanowires using the Landau-Lifshitz-Gilbert equation. Two different domain wall types were observed. For thin cobalt wires, 10 nm in diameter, the transverse mode has been observed: the magnetization is constant on each cross section and is moving along the wire.
For thick wires, with diameters bigger than 20 nm, the vortex wall has been observed: the magnetization is approximately tangential to the boundary and forms a vortex which propagates along the wire. In [\ref{bib:HK}] the magnetization reversal process has been studied both numerically and experimentally. By considering a conical type wire, so that the diameter of the cross section increases very slowly, the magnetization switching from the vortex wall to the transverse wall at a critical diameter has been observed, as the domain wall moves along the wire. The results in [\ref{bib:FSSSTDF}] and [\ref{bib:HK}] were the same: in thin wires the transverse wall occurs, while in thick wires the vortex wall occurs. \\ \setlength{\unitlength}{1.2mm} \begin{picture}(90,28) \put(23,27){\textbf{Homogeneous magnetization}} \put(10,10){\line(1,0){70}} \put(80,10){\line(0,1){15}} \put(10,25){\line(1,0){70}} \put(10,10){\line(0,1){15}} \thicklines \put(11,12){\vector(1,0){5}} \put(11,15){\vector(1,0){5}} \put(11,18){\vector(1,0){5}} \put(11,21){\vector(1,0){5}} \put(11,24){\vector(1,0){5}} \put(18,12){\vector(1,0){5}} \put(18,15){\vector(1,0){5}} \put(18,18){\vector(1,0){5}} \put(18,21){\vector(1,0){5}} \put(18,24){\vector(1,0){5}} \put(25,12){\vector(1,0){5}} \put(25,15){\vector(1,0){5}} \put(25,18){\vector(1,0){5}} \put(25,21){\vector(1,0){5}} \put(25,24){\vector(1,0){5}} \put(32,12){\vector(1,0){5}} \put(32,15){\vector(1,0){5}} \put(32,18){\vector(1,0){5}} \put(32,21){\vector(1,0){5}} \put(32,24){\vector(1,0){5}} \put(39,12){\vector(1,0){5}} \put(39,15){\vector(1,0){5}} \put(39,18){\vector(1,0){5}} \put(39,21){\vector(1,0){5}} \put(39,24){\vector(1,0){5}} \put(46,12){\vector(1,0){5}} \put(46,15){\vector(1,0){5}} \put(46,18){\vector(1,0){5}} \put(46,21){\vector(1,0){5}} \put(46,24){\vector(1,0){5}} \put(53,12){\vector(1,0){5}} \put(53,15){\vector(1,0){5}} \put(53,18){\vector(1,0){5}} \put(53,21){\vector(1,0){5}} \put(53,24){\vector(1,0){5}} \put(60,12){\vector(1,0){5}}
\put(60,15){\vector(1,0){5}} \put(60,18){\vector(1,0){5}} \put(60,21){\vector(1,0){5}} \put(60,24){\vector(1,0){5}} \put(67,12){\vector(1,0){5}} \put(67,15){\vector(1,0){5}} \put(67,18){\vector(1,0){5}} \put(67,21){\vector(1,0){5}} \put(67,24){\vector(1,0){5}} \put(74,12){\vector(1,0){5}} \put(74,15){\vector(1,0){5}} \put(74,18){\vector(1,0){5}} \put(74,21){\vector(1,0){5}} \put(74,24){\vector(1,0){5}} \end{picture} \begin{picture}(90,26) \put(25,27){\textbf{180 degree domain wall}} \put(10,10){\line(1,0){70}} \put(80,10){\line(0,1){15}} \put(10,25){\line(1,0){70}} \put(10,10){\line(0,1){15}} \put(15,2){\textbf{ Figure 1.}} \thicklines \put(50,6){\vector(-1,0){15}} \put(40,7){\textbf{$H_{ext}$}} \put(11,12){\vector(1,0){5}} \put(11,15){\vector(1,0){5}} \put(11,18){\vector(1,0){5}} \put(11,21){\vector(1,0){5}} \put(11,24){\vector(1,0){5}} \put(18,12){\vector(1,0){5}} \put(18,15){\vector(1,0){5}} \put(18,18){\vector(1,0){5}} \put(18,21){\vector(1,0){5}} \put(18,24){\vector(1,0){5}} \put(25,12){\vector(1,0){5}} \put(25,15){\vector(1,0){5}} \put(25,18){\vector(1,0){5}} \put(25,21){\vector(1,0){5}} \put(25,24){\vector(1,0){5}} \put(32,12){\vector(1,0){5}} \put(32,15){\vector(1,0){5}} \put(32,18){\vector(1,0){5}} \put(32,21){\vector(1,0){5}} \put(32,24){\vector(1,0){5}} \put(39,12){\vector(1,0){5}} \put(39,15){\vector(1,0){5}} \put(39,18){\vector(1,0){5}} \put(39,21){\vector(1,0){5}} \put(39,24){\vector(1,0){5}} \put(58,12){\vector(-1,0){5}} \put(58,15){\vector(-1,0){5}} \put(58,18){\vector(-1,0){5}} \put(58,21){\vector(-1,0){5}} \put(58,24){\vector(-1,0){5}} \put(65,12){\vector(-1,0){5}} \put(65,15){\vector(-1,0){5}} \put(65,18){\vector(-1,0){5}} \put(65,21){\vector(-1,0){5}} \put(65,24){\vector(-1,0){5}} \put(72,12){\vector(-1,0){5}} \put(72,15){\vector(-1,0){5}} \put(72,18){\vector(-1,0){5}} \put(72,21){\vector(-1,0){5}} \put(72,24){\vector(-1,0){5}} \put(79,12){\vector(-1,0){5}} \put(79,15){\vector(-1,0){5}} \put(79,18){\vector(-1,0){5}} \put(79,21){\vector(-1,0){5}} 
\put(79,24){\vector(-1,0){5}} \end{picture} In Figure 2 one can see the transverse and the vortex wall longitudinal and cross section pictures for wires with a rectangular cross section. \\ \setlength{\unitlength}{1mm} \begin{picture}(180,93) \put(0,15){\line(0,1){71}} \put(0,15){\line(1,0){20}} \put(20,15){\line(0,1){71}} \put(0,86){\line(1,0){20}} \put(2,15){\vector(0,1){4}} \put(5,15){\vector(0,1){4}} \put(8,15){\vector(0,1){4}} \put(11,15){\vector(0,1){4}} \put(14,15){\vector(0,1){4}} \put(17,15){\vector(0,1){4}} \put(2,20){\vector(0,1){4}} \put(5,20){\vector(0,1){4}} \put(8,20){\vector(0,1){4}} \put(11,20){\vector(0,1){4}} \put(14,20){\vector(0,1){4}} \put(17,20){\vector(0,1){4}} \put(2,25){\vector(1,3){1.2}} \put(5,25){\vector(1,3){1.2}} \put(8,25){\vector(1,3){1.2}} \put(11,25){\vector(1,3){1.2}} \put(14,25){\vector(1,3){1.2}} \put(17,25){\vector(1,3){1.2}} \put(2,29){\vector(1,3){1.2}} \put(5,29){\vector(1,3){1.2}} \put(8,29){\vector(1,3){1.2}} \put(11,29){\vector(1,3){1.2}} \put(14,29){\vector(1,3){1.2}} \put(17,29){\vector(1,3){1.2}} \put(2,33){\vector(1,2){1.8}} \put(5,33){\vector(1,2){1.8}} \put(8,33){\vector(1,2){1.8}} \put(11,33){\vector(1,2){1.8}} \put(14,33){\vector(1,2){1.8}} \put(17,33){\vector(1,2){1.8}} \put(2,37){\vector(1,1){3.6}} \put(5,37){\vector(1,1){3.6}} \put(8,37){\vector(1,1){3.6}} \put(11,37){\vector(1,1){3.6}} \put(14,37){\vector(1,1){3.6}} \put(17,37){\vector(1,1){3.6}} \put(2,41){\vector(2,1){3.6}} \put(5,41){\vector(2,1){3.6}} \put(8,41){\vector(2,1){3.6}} \put(11,41){\vector(2,1){3.6}} \put(14,41){\vector(2,1){3.6}} \put(17,41){\vector(2,1){3.6}} \put(2,43.5){\vector(3,1){3.6}} \put(5,43.5){\vector(3,1){3.6}} \put(8,43.5){\vector(3,1){3.6}} \put(11,43.5){\vector(3,1){3.6}} \put(14,43.5){\vector(3,1){3.6}} \put(17,43.5){\vector(3,1){3.6}} \put(2,45.5){\vector(4,1){4}} \put(5,45.5){\vector(4,1){4}} \put(8,45.5){\vector(4,1){4}} \put(11,45.5){\vector(4,1){4}} \put(14,45.5){\vector(4,1){4}} \put(17,45.5){\vector(4,1){4}} 
\put(1,47.5){\vector(1,0){4}} \put(6,47.5){\vector(1,0){4}} \put(11,47.5){\vector(1,0){4}} \put(16,47.5){\vector(1,0){4}} \put(2,49.5){\vector(4,-1){4}} \put(5,49.5){\vector(4,-1){4}} \put(8,49.5){\vector(4,-1){4}} \put(11,49.5){\vector(4,-1){4}} \put(14,49.5){\vector(4,-1){4}} \put(17,49.5){\vector(4,-1){4}} \put(2,51.5){\vector(3,-1){3.6}} \put(5,51.5){\vector(3,-1){3.6}} \put(8,51.5){\vector(3,-1){3.6}} \put(11,51.5){\vector(3,-1){3.6}} \put(14,51.5){\vector(3,-1){3.6}} \put(17,51.5){\vector(3,-1){3.6}} \put(2,54){\vector(2,-1){3.6}} \put(5,54){\vector(2,-1){3.6}} \put(8,54){\vector(2,-1){3.6}} \put(11,54){\vector(2,-1){3.6}} \put(14,54){\vector(2,-1){3.6}} \put(17,54){\vector(2,-1){3.6}} \put(2,58){\vector(1,-1){3.6}} \put(5,58){\vector(1,-1){3.6}} \put(8,58){\vector(1,-1){3.6}} \put(11,58){\vector(1,-1){3.6}} \put(14,58){\vector(1,-1){3.6}} \put(17,58){\vector(1,-1){3.6}} \put(2,62){\vector(1,-2){1.8}} \put(5,62){\vector(1,-2){1.8}} \put(8,62){\vector(1,-2){1.8}} \put(11,62){\vector(1,-2){1.8}} \put(14,62){\vector(1,-2){1.8}} \put(17,62){\vector(1,-2){1.8}} \put(2,66){\vector(1,-3){1.2}} \put(5,66){\vector(1,-3){1.2}} \put(8,66){\vector(1,-3){1.2}} \put(11,66){\vector(1,-3){1.2}} \put(14,66){\vector(1,-3){1.2}} \put(17,66){\vector(1,-3){1.2}} \put(2,70){\vector(1,-3){1.2}} \put(5,70){\vector(1,-3){1.2}} \put(8,70){\vector(1,-3){1.2}} \put(11,70){\vector(1,-3){1.2}} \put(14,70){\vector(1,-3){1.2}} \put(17,70){\vector(1,-3){1.2}} \put(2,75){\vector(0,-1){4}} \put(5,75){\vector(0,-1){4}} \put(8,75){\vector(0,-1){4}} \put(11,75){\vector(0,-1){4}} \put(14,75){\vector(0,-1){4}} \put(17,75){\vector(0,-1){4}} \put(2,80){\vector(0,-1){4}} \put(5,80){\vector(0,-1){4}} \put(8,80){\vector(0,-1){4}} \put(11,80){\vector(0,-1){4}} \put(14,80){\vector(0,-1){4}} \put(17,80){\vector(0,-1){4}} \put(2,85){\vector(0,-1){4}} \put(5,85){\vector(0,-1){4}} \put(8,85){\vector(0,-1){4}} \put(11,85){\vector(0,-1){4}} \put(14,85){\vector(0,-1){4}} \put(17,85){\vector(0,-1){4}} 
\put(0,0){\textbf{The transverse wall}} \put(60,0){\textbf{The vortex wall}} \put(27,40){\line(0,1){10}} \put(27,40){\line(1,0){21}} \put(48,40){\line(0,1){10}} \put(27,50){\line(1,0){21}} \put(28,41){\vector(1,0){3}} \put(28,43){\vector(1,0){3}} \put(28,45){\vector(1,0){3}} \put(28,47){\vector(1,0){3}} \put(28,49){\vector(1,0){3}} \put(32,41){\vector(1,0){3}} \put(32,43){\vector(1,0){3}} \put(32,45){\vector(1,0){3}} \put(32,47){\vector(1,0){3}} \put(32,49){\vector(1,0){3}} \put(36,41){\vector(1,0){3}} \put(36,43){\vector(1,0){3}} \put(36,45){\vector(1,0){3}} \put(36,47){\vector(1,0){3}} \put(36,49){\vector(1,0){3}} \put(40,41){\vector(1,0){3}} \put(40,43){\vector(1,0){3}} \put(40,45){\vector(1,0){3}} \put(40,47){\vector(1,0){3}} \put(40,49){\vector(1,0){3}} \put(44,41){\vector(1,0){3}} \put(44,43){\vector(1,0){3}} \put(44,45){\vector(1,0){3}} \put(44,47){\vector(1,0){3}} \put(44,49){\vector(1,0){3}} \put(55,15){\line(0,1){71}} \put(55,15){\line(1,0){20}} \put(55,86){\line(1,0){20}} \put(75,15){\line(0,1){71}} \put(55,28){\line(1,2){20}} \put(75,28){\line(-1,2){20}} \put(83,30){\line(0,1){30}} \put(83,30){\line(1,0){30}} \put(113,30){\line(0,1){30}} \put(83,60){\line(1,0){30}} \put(83,30){\line(1,1){30}} \put(83,60){\line(1,-1){30}} \put(90.5,30){\line(1,2){15}} \put(113,37.5){\line(-2,1){30}} \thicklines \put(93,55){\vector(1,0){6}} \put(105.5,55){\vector(1,-1){3.7}} \put(103,35){\vector(-1,0){6}} \put(90.5,35){\vector(-1,1){3.7}} \put(88,40){\vector(0,1){6}} \put(88,52.5){\vector(1,1){3.7}} \put(108,50){\vector(0,-1){6}} \put(108,37.5){\vector(-1,-1){3.7}} \put(65,47){\vector(0,-1){5}} \put(63.5,44){\vector(0,-1){5}} \put(66.5,44){\vector(0,-1){5}} \put(62,40){\vector(0,-1){5}} \put(65,40){\vector(0,-1){5}} \put(68,40){\vector(0,-1){5}} \put(60.5,36){\vector(0,-1){5}} \put(66.5,36){\vector(0,-1){5}} \put(63.5,36){\vector(0,-1){5}} \put(69.5,36){\vector(0,-1){5}} \put(58,30){\vector(0,-1){5}} \put(63,30){\vector(0,-1){5}} \put(68,30){\vector(0,-1){5}} 
\put(73,30){\vector(0,-1){5}} \put(58,23){\vector(0,-1){5}} \put(63,23){\vector(0,-1){5}} \put(68,23){\vector(0,-1){5}} \put(73,23){\vector(0,-1){5}} \put(65,49){\vector(0,1){5}} \put(63.5,52){\vector(0,1){5}} \put(66.5,52){\vector(0,1){5}} \put(62,56){\vector(0,1){5}} \put(65,56){\vector(0,1){5}} \put(68,56){\vector(0,1){5}} \put(60.5,60){\vector(0,1){5}} \put(66.5,60){\vector(0,1){5}} \put(63.5,60){\vector(0,1){5}} \put(69.5,60){\vector(0,1){5}} \put(58,66){\vector(0,1){5}} \put(63,66){\vector(0,1){5}} \put(68,66){\vector(0,1){5}} \put(73,66){\vector(0,1){5}} \put(63,73){\vector(0,1){5}} \put(68,73){\vector(0,1){5}} \put(73,73){\vector(0,1){5}} \put(58,73){\vector(0,1){5}} \put(58,80){\vector(0,1){5}} \put(63,80){\vector(0,1){5}} \put(68,80){\vector(0,1){5}} \put(73,80){\vector(0,1){5}} \put(62,49.5){\vector(0,1){3}} \put(62,46.5){\vector(0,-1){3}} \put(68,49.5){\vector(0,1){3}} \put(68,46.5){\vector(0,-1){3}} \put(62,48){\circle*{0.7}} \put(68,48){\circle*{0.7}} \put(58,48){\circle*{0.7}} \put(72,48){\circle*{0.7}} \put(58,46){\vector(0,-1){3}} \put(58,41.5){\vector(0,-1){4.5}} \put(58,50){\vector(0,1){3}} \put(58,55){\vector(0,1){4.5}} \put(72,46){\vector(0,-1){3}} \put(72,41.5){\vector(0,-1){4.5}} \put(72,50){\vector(0,1){3}} \put(72,55){\vector(0,1){4.5}} \put(5,7){\textbf{ Figure 2.}}\\ \end{picture} \\ \\ In [\ref{bib:Har1}] the author studies a similar problem for thin films and derives $\Gamma$-convergence result for the energies. In a series of papers the authors study the magnetization reversal process in thin films, identifying four different scaling regimes for the critical value of the applied external field, see [\ref{bib:Con.Ott.1},\ref{bib:Con.Ott.2},\ref{bib:Con.Ott.Ste.1},\ref{bib:Ot.St.},\ref{bib:St.Sc.Wi.Mc.Ot.}]. 
In nanowires it has been observed that there is a distinctive crossover between two different modes, which occurs at a critical diameter of the wire, and it was suggested that the magnetization switching process can be understood by analyzing the micromagnetic energy minimization problem for different diameters of the cross section. In [\ref{bib:Kuehn}], K.~K\"uhn studied $180$ degree static domain walls in magnetic wires with circular cross sections by an asymptotic analysis, proving that indeed the transverse mode must occur in thin magnetic wires. It is also shown in [\ref{bib:Kuehn}] that for thick wires the vortex wall has the optimal energy scaling and that the minimal energy scales like $R^2\sqrt{\ln R}.$ In [\ref{bib:SS}] V.~V. Slastikov and C.~Sonnenberg studied a similar problem for finite curved wires, proving a $\Gamma$-convergence result for the energies as the diameter of the wire goes to zero. In [\ref{bib:Har}] the author studied the same problem as K.~K\"uhn in [\ref{bib:Kuehn}] and, independently of [\ref{bib:SS}] (see the respective submission and publication dates of [\ref{bib:Har}] and [\ref{bib:SS}]), extended some of the results proven in [\ref{bib:Kuehn}] to arbitrary wires with a rotational symmetry. In this paper we study $180$ degree static domain walls in magnetic wires with arbitrary bounded, $C^1$ and rotationally symmetric cross sections. We generalize the existence of minimizers result proven by K.~K\"uhn for circular cross sections to wires with arbitrary bounded $C^1$ cross sections. For a class of domains we prove a stability result for the minimizers, which is new, does not follow from the $\Gamma$-convergence of the energies, and requires a much finer analysis of the problem of minimizing the energy of micromagnetics and of its minimizers. \section{The main results} Assume $\Omega=\mathbb R\times \omega$, where $\omega\subset \mathbb R^2$ is a bounded $C^1$ domain.
Consider the isotropic energy of micromagnetics without an external field, as in [\ref{bib:Kuehn},\ref{bib:SS},\ref{bib:Har}], $$E(m)=A_{ex}\int_\Omega|\nabla m|^2+K_d\int_{\mathbb R^3}|\nabla u|^2.$$ By rescaling the coordinates one can reduce to the situation $A_{ex}=K_d$, so we will henceforth assume that $A_{ex}=K_d=1.$ Next we rescale the magnetization $m$ in the $y$ and $z$ coordinates so that the domain of the rescaled magnetization is fixed, i.e., if $d=\mathrm{diam}(\omega),$ then we set $\acute m(x,y,z)=m(x,dy,dz).$ Denote $$A(\Omega)=\{m\colon\Omega\to\mathbb S^2 \ : \ m\in H_{loc}^1(\Omega), \ E(m)<\infty\}.$$ We are interested in $180$ degree domain walls, so set $$\tilde A(\Omega)=\{m\colon\Omega\to\mathbb S^2 \ : \ m-\bar e\in H^1(\Omega)\},$$ where \begin{equation*} \bar e(x,y,z) = \left\{ \begin{array}{rl} (-1,0,0) & \text{if } \ \ x<-1, \\ (x,0,0) & \text{if } \ \ -1\leq x \leq 1, \\ (1,0,0) & \text{if } \ \ 1<x. \\ \end{array} \right. \end{equation*} The objective of this work is to study the existence of minimizers for the minimization problem \begin{equation} \label{minimization problem} \inf_{m\in \tilde A(\Omega)}E(m), \end{equation} and the behavior of its almost minimizers, where the notion of ``almost minimizers'' is defined below in Definition~\ref{def:almost.min}. The following existence theorem is a generalization of the corresponding theorem proven for circular cross sections in [\ref{bib:Kuehn}].
\begin{Theorem}[Existence of minimizers] \label{th:existence} For every bounded $C^1$ domain $\omega\subset\mathbb R^2$ there exists a minimizer of $E$ in $\tilde A(\Omega).$ \end{Theorem} It has been shown for circular wires in [\ref{bib:Kuehn}], and later for any cross sections in [\ref{bib:SS}] and for cross sections with a rotational symmetry in [\ref{bib:Har}], that as $d$ goes to zero, the rescaled energy functional $\frac{E(m)}{d^2}$ $\Gamma$-converges to a one dimensional energy $E_0(m^0)$ under the following notion of convergence of magnetization vectors: \begin{Definition} \label{notion of convergence} The sequence $\{\acute m^n\}\subset A(\Omega)$ is said to converge to $m^0$ as $n$ goes to infinity if \begin{itemize} \item[(i)] $\nabla \acute m^n\rightharpoonup\nabla m^0 $ \ \ weakly in \ \ $L^2(\Omega)$, \item[(ii)] $\acute m^n \rightarrow m^0$ \ \ strongly in \ \ $L_{loc}^2(\Omega).$ \end{itemize} \end{Definition} The limit or reduced energy is given by \begin{equation} \label{linit.energy} E_0(m)= \begin{cases} |\omega|\int_{\mathbb{R}}|\partial_x m|^2\,\mathrm{d} x+\int_{\mathbb{R}}m M_\omega m^T\,\mathrm{d} x,&\quad \text{if}\quad m=m(x),\\ \infty,&\qquad \text{otherwise}, \end{cases} \end{equation} where $M_\omega$ is the symmetric matrix given by $$M_\omega=-\frac{1}{2\pi}\int_{\partial \omega}\int_{\partial \omega} n(x)\otimes n(y)\ln |x-y|\,\mathrm{d} x\,\mathrm{d} y,$$ and $n=(0,n_2,n_3)$ is the outward unit normal to $\partial\omega,$ see [\ref{bib:SS}]. Since $M_\omega$ is symmetric, it can be diagonalized by a rotation in the $OYZ$ plane. We choose the coordinate system such that $M_\omega$ is diagonal.
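As a numerical illustration (ours, not part of the paper), the boundary double integral defining $M_\omega$ can be approximated by a simple quadrature. For the unit disk, symmetry forces the two in-plane diagonal entries to coincide (both equal $\pi/2$), so a circular cross section can never satisfy the distinct-eigenvalue condition on $M_\omega$ imposed later. The sketch below uses equispaced boundary points and simply skips the integrable logarithmic singularity on the diagonal; the function name is ours.

```python
import numpy as np

def M_omega_disk(N=2000):
    """Quadrature approximation of the lower-right 2x2 block of M_omega
    for the unit disk: M = -(1/2pi) * iint n(x) (x) n(y) ln|x-y| ds ds.
    The log singularity at x = y is integrable and simply skipped."""
    t = 2.0 * np.pi * np.arange(N) / N
    n = np.stack([np.cos(t), np.sin(t)], axis=1)  # boundary point = outward normal
    # chord length |x - y| = 2 |sin((t_i - t_j)/2)| on the unit circle
    dist = 2.0 * np.abs(np.sin(0.5 * (t[:, None] - t[None, :])))
    np.fill_diagonal(dist, 1.0)                   # log(1) = 0: diagonal cell dropped
    logd = np.log(dist)
    h = 2.0 * np.pi / N                           # arclength step (radius 1)
    # M[a, b] = -(h^2 / 2pi) * sum_{i,j} n[i,a] n[j,b] log|x_i - x_j|
    return -(h * h / (2.0 * np.pi)) * np.einsum('ia,jb,ij->ab', n, n, logd)

M = M_omega_disk()
# Both diagonal entries approximate pi/2 and the off-diagonal entries vanish.
```

For a non-circular cross section one would replace the parametrization by the actual boundary curve, outward normal and arclength element; the distinct-eigenvalue condition can then be checked numerically in the same way.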
Assume now $\omega$ is fixed and $\mathrm{diam}(\omega)=1.$ The $\Gamma$-convergence theorem implies the following two properties of the minimal energies and of sequences of minimizers: \begin{itemize} \item[(i)] \begin{equation} \label{conv.energies} \lim_{d\to 0}\min_{m\in \tilde A(d\cdot\Omega)}\frac{E(m)}{d^2}=\min_{m\in A_0}E_0(m), \end{equation} where $A_0=\{m\colon\mathbb R\to \mathbb R^3 : |m|=1,\ m(\pm\infty)=(\pm 1,0,0) \}.$ \item[(ii)] If $d_n\to 0$ and $\{m^n\}$ is any sequence of minimizers with $m^n$ defined in $d_n\cdot\Omega,$ then a subsequence of $\{\acute m^n\}$ converges to a minimizer of $E_0$ in the sense of Definition~\ref{notion of convergence}. \end{itemize} It turns out that, under some asymmetry condition on $\omega$, a stronger convergence holds, namely $H^1$ convergence of the whole sequence of almost minimizers. \begin{Definition} \label{def:almost.min} Let $\{d_n\}$ be a sequence of positive numbers such that $d_n\to 0.$ A sequence of magnetizations $\{m^n\}$ defined in $d_n\cdot \Omega$ is called a sequence of almost minimizers if \begin{equation} \label{almost.min} \lim_{n\to \infty}\frac{E(m^n)}{d_n^2}=\min_{m \in A_0}E_0(m). \end{equation} \end{Definition} We are now ready to formulate the second main result of the paper. \begin{Theorem}[Convergence of almost minimizers] \label{th:almost.minimizers} Let $\{d_n\}$ be a sequence of positive numbers such that $d_n\to 0.$ Assume that the domain $\omega$ is such that $M_\omega$ has three different eigenvalues.
Then for any sequence of almost minimizers $\{m^n\}$ defined in $d_n\cdot\Omega,$ there exist a sequence $\{T_n\}$ of translations in the $x$ direction and a sequence $\{R_n\}$ of rotations in the $OYZ$ plane, each of which is either the identity or the rotation by $180$ degrees, such that, with $\tilde m^n(x,y,z)=m^n(T_n(R_n(x,y,z)))$ and $m^\omega$ a minimizer of $E_0$, there holds $$\lim_{n\to\infty}\frac{1}{d_n}\|\tilde m^n-m^\omega\|_{H^1(\Omega_n)}=0.$$ \end{Theorem} We refer to the Appendix for the definition of $m^\omega.$ The convergence in the above theorem states the stability of minimizers in nanowires, i.e., if the energy of a magnetization is close to the minimal one, then the magnetization must be close to the actual minimizer. \section{The oscillation preventing lemma} In this section we prove a lemma that will be crucial in proving both the existence and the convergence of almost minimizers results. The lemma bounds the oscillations of a magnetization $m$, and the total measure of the set where $m$ develops oscillations, by the energy of $m.$ Using the idea of Kohn and Slastikov in [\ref{bib:KS2}] of dimension reduction in thin domains, define $$\bar m(x)=\frac{1}{|\omega|}\int_{\omega}m(x,y,z)\,\mathrm{d} y\,\mathrm{d} z.$$ Let $M_\omega^1$ denote the lower right $2\times2$ block of $M_\omega;$ using the definition of $M_\omega$ it is straightforward to show that $M_\omega^1$ is positive definite. Denote for convenience $$M_\omega^1= \begin{bmatrix} \alpha_2 & 0\\ 0 & \alpha_3 \end{bmatrix}. $$ It has been shown explicitly in [\ref{bib:Har}, Corollary 3.7.5] and implicitly in [\ref{bib:SS}, Proof of Lemma 4.1] that the inequality below holds uniformly in $m\colon (d\cdot\Omega)\to\mathbb S^2$: \begin{equation} \label{lower.bound.E} \frac{E(m)}{d^2}\geq \int_{\Omega}|\nabla m|^2+\alpha_2\int_{\mathbb R}|\bar m_2|^2+\alpha_3\int_{\mathbb R}|\bar m_3|^2+o(1), \end{equation} as $d$ goes to zero.
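For orientation, the minimizer $m^\omega$ of the reduced energy can be computed explicitly by a classical argument; the following sketch is ours (it is not taken from the paper's Appendix) and assumes the wall rotates through the softer axis, i.e., $\alpha_2\leq\alpha_3$. Restricting to the in-plane ansatz $m=(\cos\theta,\sin\theta,0)$ with $\theta(-\infty)=\pi$, $\theta(+\infty)=0$, and using $mM_\omega m^T=\alpha_2\sin^2\theta$, one finds:

```latex
% Young's inequality a + b >= 2 sqrt(ab), followed by the substitution
% \int |\theta'| |\sin\theta| dx >= |\int \sin\theta \, d\theta| over the wall:
\begin{align*}
E_0(m) &= \int_{\mathbb{R}} \bigl( |\omega|\,|\theta'|^2
            + \alpha_2 \sin^2\theta \bigr)\,\mathrm{d}x
        \geq 2\sqrt{|\omega|\,\alpha_2}\int_{\mathbb{R}} |\theta'|\,|\sin\theta|\,\mathrm{d}x
        \geq 2\sqrt{|\omega|\,\alpha_2}\int_{0}^{\pi}\sin\theta\,\mathrm{d}\theta
        = 4\sqrt{|\omega|\,\alpha_2}.
\end{align*}
% Equality holds iff sqrt(|omega|) theta' = -sqrt(alpha_2) sin(theta),
% which integrates to the transverse-wall profile, with
% lambda = sqrt(alpha_2/|omega|):
\[
m^{\omega}(x) = \bigl( \tanh(\lambda x),\ \operatorname{sech}(\lambda x),\ 0 \bigr),
\qquad
E_0\bigl(m^{\omega}\bigr) = 4\sqrt{|\omega|\,\alpha_2}.
\]
```

Up to a translation in $x$ and the rotation by $180$ degrees in the $OYZ$ plane (which flips the sign of the second component), this is the family of competitors appearing in the convergence statement above.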
\begin{Lemma} \label{lem:m2.m.3.bdd.E} Assume $m^d\in A(d\cdot\Omega).$ Then there exists $d_0>0$ such that \begin{align} \label{bar.m.d<d0} &\int_{\mathbb R}(|\bar m_2^d|^2+|\bar m_3^d|^2)\leq \frac{2E(m^d)}{d^2\min(\alpha_2,\alpha_3)},\quad\text{if}\quad d\leq d_0,\\ \label{bar.m.d>d0} &\int_{\mathbb R}(|\bar m_2^d|^2+|\bar m_3^d|^2)\leq \frac{2\max\left(\frac{d}{d_0},(\frac{d}{d_0})^3\right)E(m^d)}{dd_0\min(\alpha_2,\alpha_3)},\quad\text{if}\quad d>d_0. \end{align} \end{Lemma} \begin{proof} Due to inequality (\ref{lower.bound.E}) there exists $d_0>0$ such that for $d\leq d_0$ we have $$\frac{2E(m^d)}{d^2}\geq \alpha_2\int_{\mathbb R}|\bar m_2^d|^2+\alpha_3\int_{\mathbb R}|\bar m_3^d|^2,$$ and inequality (\ref{bar.m.d<d0}) follows. Assume now $d>d_0.$ It is straightforward to check that if $m^d\in A(d\cdot\Omega)$ then $m_t^d(x,y,z)=m^d(tx,ty,tz)\in A(\frac{d}{t}\cdot\Omega)$ with $E(m_t^d)=tE_{ex}(m^d)+t^3E_{mag}(m^d)$, where $E_{ex}(m)=\int_{\Omega}|\nabla m|^2$ is the exchange energy and $E_{mag}(m)=\int_{\mathbb R^3}|\nabla u|^2$ is the magnetostatic energy; thus we get on the one hand \begin{equation} \label{E(m).E(mt)} E(m_t^d)\leq \max(t,t^3)E(m^d). \end{equation} On the other hand we have $$\int_{\mathbb R}(|\bar m_2^d|^2+|\bar m_3^d|^2)=\frac{1}{t}\int_{\mathbb R}(|\bar m_{t2}^d|^2+|\bar m_{t3}^d|^2),$$ thus, choosing $t=\frac{d}{d_0}$ and taking into account (\ref{bar.m.d<d0}) and (\ref{E(m).E(mt)}), we obtain $$\int_{\mathbb R}(|\bar m_2^d|^2+|\bar m_3^d|^2)\leq\frac{2d_0E(m_t^d)}{dd_0^2\min(\alpha_2,\alpha_3)}\leq \frac{2\max\left(\frac{d}{d_0},(\frac{d}{d_0})^3\right)E(m^d)}{dd_0\min(\alpha_2,\alpha_3)},$$ which completes the proof. \end{proof} Next we prove a simple estimate between $m$ and $\bar m$ that will be useful in the proof of the oscillation preventing lemma.
\begin{Lemma} \label{lem:ineq.m.bar.m} For any $m\in A(\Omega)$ and all $x\in \mathbb{R}$ there holds $$\int_{\omega}(|m|^2-|\bar m|^2)=\int_{\omega}|m-\bar m|^2\leq C_pd^2\int_{\omega}|\nabla_{yz}m|^2, $$ where $C_p$ is the Poincar\'e constant of $\omega.$ \end{Lemma} \begin{proof} We have for any $x\in\mathbb{R}$ $$\int_{\omega}(m-\bar m)=\int_{\omega}m-|\omega|\cdot \bar m(x)=0,$$ thus by the Poincar\'e inequality we get \begin{align*} \int_{\omega}|m|^2&=\int_{\omega}|\bar m|^2+\int_{\omega}|m-\bar m|^2+2\bar m(x)\cdot\int_{\omega}(m-\bar m)\\ &=\int_{\omega}|\bar m|^2+\int_{\omega}|m-\bar m|^2\\ &\leq \int_{\omega}|\bar m|^2+C_pd^2\int_{\omega}|\nabla_{yz}m|^2. \end{align*} The proof is complete. \end{proof} \begin{Lemma}[Oscillation preventing lemma] \label{lem:oscilation.preventing} Let $m\in A(\Omega)$ and let $\alpha,\beta,\rho\in \mathbb R$ be such that $-1<\alpha<\beta<1$ and $0<\rho<1.$ Assume $\Re$ is a family of disjoint intervals $(a,b)$ satisfying the conditions $$\{\bar m_1(a), \bar m_1(b)\}=\{\alpha,\beta\}\qquad \text{and}\qquad |\bar m_1(x)|\leq \rho,\qquad x\in (a,b).$$ Then, \begin{itemize} \item[(i)] \begin{equation} \label{card.sum} \mathrm{card}(\Re)\leq M \qquad \text{and}\qquad \sum_{(a,b)\in\Re}(b-a)\leq M, \end{equation} where $M$ is a constant depending on $\alpha$, $\beta,$ $\rho,$ $\omega$ and $E(m)$. \item[(ii)] The component $\bar m_1$ satisfies $\lim_{x\to\pm\infty}|\bar m_1(x)|=1.$ \end{itemize} \end{Lemma} \begin{proof} Let us first prove the second inequality in (\ref{card.sum}).
The function $\bar m$ is a weakly differentiable function of one variable and is therefore locally absolutely continuous on $\mathbb{R}.$ For any $(a,b)\in \Re$ we have, by Lemma~\ref{lem:ineq.m.bar.m} and by the assumptions of the lemma, \begin{align*} |\omega|(b-a)&=\int_{(a,b)\times \omega}|m|^2\\ &\leq \int_{(a,b)\times \omega}|\bar m|^2+C_pd^2\int_{(a,b)\times \omega}|\nabla_{yz}m|^2\\ &\leq \rho^2|\omega|(b-a)+\int_{(a,b)\times \omega}(\bar m_2^2+\bar m_3^2)+C_pd^2\int_{(a,b)\times \omega}|\nabla m|^2. \end{align*} Summing these inequalities over all $(a,b)\in \Re$ we get \begin{align*} |\omega|\cdot\sum_{(a,b)\in\Re}(b-a)&\leq \rho^2|\omega|\sum_{(a,b)\in\Re}(b-a)+\int_{\Sigma}(\bar m_2^2+\bar m_3^2)+ C_pd^2\int_{\Sigma}|\nabla m|^2\\ &\leq \rho^2|\omega|\sum_{(a,b)\in\Re}(b-a)+\int_{\Omega}(\bar m_2^2+\bar m_3^2)+C_pd^2\int_{\Omega}|\nabla m|^2, \end{align*} where $\Sigma=\bigcup_{(a,b)\in\Re}(a,b)\times \omega.$ By virtue of Lemma~\ref{lem:m2.m.3.bdd.E} we have $$\int_{\Omega}(\bar m_2^2+\bar m_3^2)\leq C_1$$ for some $C_1$ depending on $\omega$ and $E(m).$ Therefore we obtain \begin{equation} \label{sum (b-a)} \sum_{(a,b)\in\Re}(b-a)\leq \frac{C_1+C_pd^2E(m)}{|\omega|(1-\rho^2)}.
\end{equation} Next, for any point $(y,z)\in \omega$ and any interval $(a,b)\in \Re,$ we have by the Cauchy--Schwarz inequality $$ \int_a^b|\partial_xm_1(x,y,z)|^2\,\mathrm{d} x\geq \frac{1}{b-a}\bigg(\int_a^b|\partial_xm_1(x,y,z)|\,\mathrm{d} x\bigg)^2. $$ Integrating over $\omega$ we get \begin{align*} \int_{(a,b)\times \omega}|\partial_xm_1|^2\,\mathrm{d} \xi&\geq \frac{1}{b-a}\int_{\omega}\bigg(\int_a^b|\partial_xm_1(x,y,z)|\,\mathrm{d} x\bigg)^2\,\mathrm{d} y\,\mathrm{d} z\\ &\geq\frac{1}{b-a}\int_{\omega}|m_1(a,y,z)-m_1(b,y,z)|^2\,\mathrm{d} y\,\mathrm{d} z\\ &\geq\frac{1}{|\omega|(b-a)}\bigg(\int_{\omega}\big(m_1(a,y,z)-m_1(b,y,z)\big)\,\mathrm{d} y\,\mathrm{d} z\bigg)^2\\ &=\frac{|\omega|(\alpha-\beta)^2}{b-a}. \end{align*} Summing the last inequality over all $(a,b)\in \Re$ we arrive at \begin{align*} \sum_{(a,b)\in \Re}\frac{1}{b-a}&\leq \frac{1}{|\omega|(\alpha-\beta)^2}\int_{\Sigma}|\partial_xm_1|^2\,\mathrm{d} \xi\\ &\leq\frac{1}{|\omega|(\alpha-\beta)^2}\int_{\Omega}|\nabla m|^2\,\mathrm{d} \xi\\ &\leq \frac{E(m)}{|\omega|(\alpha-\beta)^2}, \end{align*} thus \begin{equation} \label{sum 1/(b-a)} \sum_{(a,b)\in \Re}\frac{1}{b-a}\leq \frac{E(m)}{|\omega|(\alpha-\beta)^2}. \end{equation} Combining now (\ref{sum (b-a)}) and (\ref{sum 1/(b-a)}) we obtain \begin{equation} \label{(a,b).finite.estimate} \sum_{(a,b)\in \Re}\bigg(\frac{1}{b-a}+b-a\bigg)\leq\frac{1}{|\omega|}\bigg(\frac{E(m)}{(\alpha-\beta)^2}+ \frac{C_1+C_pd^2E(m)}{1-\rho^2}\bigg)=:M(\alpha,\beta,\rho,\omega,E(m)). \end{equation} The last inequality, together with the elementary bound $\frac{1}{b-a}+b-a\geq 2,$ yields $M(\alpha,\beta,\rho,\omega,E(m))\geq 2\,\mathrm{card}(\Re),$ which finishes the proof of the first part.
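For completeness, the bound $\frac{1}{b-a}+b-a\geq 2$ invoked above is an instance of the AM--GM inequality: for every $s>0$, $$s+\frac{1}{s}-2=\bigg(\sqrt{s}-\frac{1}{\sqrt{s}}\bigg)^2\geq 0,$$ applied with $s=b-a.$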
It is clear that $$|\bar m_1(x)|=\frac{1}{|\omega|}\bigg|\int_{\omega}m_1(x,y,z)\,\mathrm{d} y\,\mathrm{d} z\bigg| \leq\frac{1}{|\omega|}\int_{\omega}|m_1(x,y,z)|\,\mathrm{d} y\,\mathrm{d} z\leq 1,$$ thus $$0\leq 1-\bar m_1^2(x)\leq 1,\qquad x\in\mathbb{R}.$$ By virtue of Lemma~\ref{lem:m2.m.3.bdd.E} and Lemma~\ref{lem:ineq.m.bar.m} we have $$\int_{\Omega}(1-\bar m_1^2)\,\mathrm{d} \xi\leq\int_{\Omega}(\bar m_2^2+\bar m_3^2)\,\mathrm{d} \xi+C_pd^2E(m)<\infty,$$ thus \begin{equation} \label{1-bar m_x^2 has finite norm} \int_{\mathbb{R}}(1-\bar m_1^2)\,\mathrm{d} x<\infty. \end{equation} The integrand is continuous and nonnegative, thus for any $0<\delta<1$ and $N>0$ there exists $x_\delta>N$ such that $|\bar m_1(x_\delta)|>1-\frac{\delta}{2}$. Therefore there exists an increasing sequence $\{x_n\}$ such that $x_n\to\infty$ and $|\bar m_1(x_n)|>1-\frac{\delta}{2}$. Thus for infinitely many indices $n$ one of the following holds: $\bar m_1(x_n)>1-\frac{\delta}{2}$\ or \ $\bar m_1(x_n)<-1+\frac{\delta}{2}$. Assume that for a subsequence (not relabeled) there holds $\bar m_1(x_n)>1-\frac{\delta}{2}$. Let us then show that $\bar m_1(x)>1-\delta$ for all $x>N_{\delta}$ and some $N_{\delta}$. Assume on the contrary that for an increasing sequence $(\tilde x_n)_{n\in\mathbb{N}}$ with $\tilde x_n\to\infty$ one has $\bar m_1(\tilde x_n)\leq 1-\delta$. We construct an infinite family of disjoint intervals $(a_n,b_n)$ such that the value of $\bar m_1$ at one end of $(a_n,b_n)$ is less than or equal to $1-\delta$ and at the other end is greater than $1-\frac{\delta}{2}$ for all $n\in \mathbb{N}$. We start by taking the smallest $n$ such that $\tilde x_n>x_1$, denote it by $\tilde n_1$ and set $a_1=x_1$, $b_1=\tilde x_{\tilde n_1}$. In the second step we take the smallest $n$ such that $x_n>b_1$, denote it by $n_2$, then take the smallest $n$ such that $\tilde x_n>x_{n_2}$, denote it by $\tilde n_2$ and set $a_2=x_{n_2}$ and $b_2=\tilde x_{\tilde n_2}$.
This process never stops, and the intervals $(a_n,b_n)$ are constructed so that $\bar m_1(a_n)>1-\frac{\delta}{2}$ \ and \ $\bar m_1(b_n)\leq 1-\delta.$ Since $\bar m_1$ is continuous on $\mathbb{R},$ the new sequence of disjoint intervals $(\acute a_n, \acute b_n),$ where $\acute a_n=\sup\{ x\in (a_n,b_n) \ | \ \bar m_1(x)\geq 1-\frac{\delta}{2}\}$ and $\acute b_n=\inf\{ x\in (\acute a_n,b_n) \ | \ \bar m_1(x)\leq 1-\delta\},$ has the properties $\bar m_1(\acute a_n)=1-\frac{\delta}{2},$\ $\bar m_1(\acute b_n)=1-\delta$ and $|\bar m_1(x)|\leq 1-\frac{\delta}{2} $ for all $x\in[\acute a_n,\acute b_n],$ which contradicts (\ref{card.sum}). The same argument applies at $-\infty$. \end{proof} \newtheorem{Remark}[Theorem]{Remark} \begin{Remark} \label{rem:lim.bar.m1=1} If $m\in\tilde A(\Omega)$ then $\lim_{x\to\pm\infty}\bar m_1(x)=\pm 1.$ \end{Remark} \begin{proof} By Lemma~\ref{lem:oscilation.preventing} we have $\lim_{x\to\pm\infty}|\bar m_1(x)|=1.$ Since $\bar m_1$ is continuous and $m-\bar e\in H^1(\Omega),$ the claim follows. \end{proof} \begin{Remark} \label{rem:lim.m0} If $|m|=1$ and $E_0(m)<\infty$ then $\lim_{x\to\pm\infty}|m_1(x)|=1.$ \end{Remark} \begin{proof} The proof is analogous to the proof of property $(ii)$ in Lemma~\ref{lem:oscilation.preventing}. \end{proof} \section{Existence of minimizers} We start by proving a simple compactness lemma that will be crucial in the proof of the existence theorem. \begin{Lemma} \label{lem:compactness} Assume that the sequence of magnetizations $\{m^n\},$ defined in the same domain $\Omega,$ satisfies $E(m^n)\leq C$ for some constant $C.$ Then there exists a magnetization $m^0\colon \Omega \rightarrow \mathbb{S}^2 $ such that for a subsequence of $\{m^n\}$ (not relabeled) the following statements hold: \begin{itemize} \item[(i)] $\nabla m^n\rightharpoonup\nabla m^0$ weakly in $L^2(\Omega)$, \item[(ii)] $m^n\rightarrow m^0$ strongly in $L_{loc}^2(\Omega)$, \item[(iii)] $E(m^0)\leq \liminf E(m^n)$.
\end{itemize} \end{Lemma} \begin{proof} Let $u_n$ be a weak solution of $\triangle u=\mathrm{div}\, m^n$. From $\int_{\Omega}|\nabla m^n|^2+\int_{\mathbb R^3}|\nabla u^n|^2\leq C$ we get by a standard compactness argument that $\nabla m^n\rightharpoonup \nabla m^0$ in $ L^2(\Omega),$ $\nabla u_n\rightharpoonup g$ in $L^2(\mathbb{R}^3)$ and $m^n\rightarrow m^0$ in $L_{loc}^2(\Omega)$ for the same subsequence (not relabeled) of $\{\nabla m^n\}$ and $\{\nabla u_n\}$ and some $ g\in L^2(\mathbb{R}^3).$ We extend $m^0$ by zero outside $\Omega$. The identities $$\int_{\Omega} m^n\cdot \nabla\varphi=\int_{\mathbb{R}^3} \nabla u_n\cdot \nabla\varphi\quad \text{for all}\quad n\in\mathbb N\quad\text{and}\quad\varphi\in C_0^\infty(\mathbb R^3)$$ then yield $$\int_{\Omega} m^0\cdot \nabla\varphi =\int_{\mathbb{R}^3} g\cdot \nabla\varphi\quad \text{for all}\quad \varphi\in C_0^\infty(\mathbb R^3).$$ Since $g\in L^2(\mathbb{R}^3),$ the Helmholtz projection of $g$ onto the subspace of gradient fields in $L^2(\mathbb R^3)$ has the form $\nabla u_0,$ satisfies $\|\nabla u_0\|_{L^2(\mathbb{R}^3)}\leq\|g\|_{L^2(\mathbb{R}^3)}$ and is a weak solution of $\triangle u=\mathrm{div}\, g,$ which is equivalent to $$\int_{\mathbb{R}^3} g\cdot \nabla\varphi =\int_{\mathbb{R}^3} \nabla u_0\cdot \nabla\varphi \quad \text{for all} \quad\varphi\in C_0^{\infty}(\mathbb{R}^3);$$ thus we get $$\int_{\mathbb{R}^3} m^0\cdot \nabla\varphi=\int_{\mathbb{R}^3} \nabla u_0\cdot \nabla\varphi\quad \text{for all} \quad \varphi\in C_0^{\infty}(\mathbb{R}^3),$$ which means that $u_0$ is a weak solution of $$\triangle u=\mathrm{div}\, m^0.$$ Therefore from the weak convergences $\nabla m^n\rightharpoonup \nabla m^0$ and $\nabla u_n\rightharpoonup g$ we obtain \begin{align*} \|\nabla u_0\|_{L^2(\mathbb{R}^3)}&\leq\|g\|_{L^2(\mathbb{R}^3)}\leq \liminf_{n\to\infty} \|\nabla u_n\|_{L^2(\mathbb{R}^3)},\\ \|\nabla m^0\|_{L^2(\Omega)}&\leq \liminf_{n\to\infty} \|\nabla
m^n\|_{L^2(\Omega)}, \end{align*} which yields $E(m^0)\leq \liminf_{n\to\infty} E(m^n).$ \end{proof} Now we have enough tools to prove the existence theorem.\\ \textbf{Proof of Theorem~\ref{th:existence}}. We adopt the direct method for proving the existence of a minimizer. The idea is, starting from an arbitrary minimizing sequence, to construct another minimizing sequence that has a limit in $\tilde A(\Omega)$ in the sense of Lemma~\ref{lem:compactness}. Let $\{m^n\}$ be a minimizing sequence, i.e., $$\lim_{n\rightarrow \infty}E(m^n)=\inf_{m\in\tilde A(\Omega)}E(m).$$ First of all note that the minimization problem (\ref{minimization problem}) is invariant under translations in the $x$ direction, that is, if $m\in \tilde A(\Omega)$ then obviously $m_c(x,y,z)=m(x-c,y,z)\in \tilde A(\Omega)$ and $E(m_c)=E(m).$ We have $E(m^n)\leq M$ for some $M$ and all $n \in \mathbb{N}$. For any $n\in\mathbb{N}$ consider the sets $A_n$, $B_n$ and $C_n$ defined as follows: \begin{align*} A_n&=\bigg\{x\in \mathbb{R} \ \ : \ -1\leq \bar m_1^n(x)< -\frac{1}{2}\bigg\},\\ B_n&=\bigg\{x\in \mathbb{R} \ \ : \ -\frac{1}{2}\leq \bar m_1^n(x)\leq \frac{1}{2}\bigg\},\\ C_n&=\bigg\{x\in \mathbb{R} \ \ : \ \frac{1}{2}<\bar m_1^n(x)\leq 1\bigg\}. \end{align*} Since $\bar m_1^n$ is continuous on $\mathbb{R},$ for all $n \in\mathbb{N}$ the sets $A_n,$ $B_n$ and $C_n$ are finite or countable unions of disjoint intervals. We distinguish two types of intervals in $B_n.$ A component interval $(a,b)$ of $B_n$ is said to be of the first type if $|\bar m_1^n(a)-\bar m_1^n(b)|=1,$ and of the second type otherwise. By Lemma~\ref{lem:oscilation.preventing} the sum of the lengths of all intervals, as well as the number of first type intervals in $B_n,$ is bounded by a number $s$ depending only on $M$ and $\omega$, i.e., a constant not depending on $n$.
Consider two cases:\\ \textbf{CASE1.} \textit{There are no second type intervals in $B_n$ for any $n\in\mathbb{N}.$}\\ Let us paint the points of $A_n$, $B_n$ and $C_n$ black, yellow and red, respectively, for all $n\in \mathbb{N}$. We call an increasing sequence $\{n_k\}\subset \mathbb N$ ``good'' if for every $k\in\mathbb{N}$ there exist two intervals $(a_1^k,a_2^k)\subset A_{n_k}$ and $(c_1^k,c_2^k)\subset C_{n_k}$ such that $$a_2^k-a_1^k\rightarrow +\infty,\qquad c_2^k-c_1^k\rightarrow +\infty,\qquad 0<c_1^k-a_2^k\leq C$$ for a constant $C$ not depending on $k.$ The endpoints $a_1^k$ and $c_2^k$ may also take the values $-\infty$ and $+\infty$ respectively. If $\{n_k\}$ is ``good'', the subsequence $\{m^{n_k}\}$ will also be called ``good''. We show that any minimizing sequence $\{m^n\}\subset\tilde A(\Omega)$ can be translated in the $x$ coordinate so that the new sequence contains a ``good'' subsequence. For every fixed $n,$ since $\bar m_1^n(\pm\infty)=\pm 1,$ there are an unbounded black interval $(-\infty, a_n)$ and an unbounded red interval $(c_n,+\infty),$ with some black, yellow and red intervals between them. Note that there is obviously at least one yellow interval between any two black and any two red ones, thus the number of both black and red intervals is at most $s+1$; hence the number of all intervals in the $n$-th family is bounded by the same number $S=3s+2$ for all $n.$ Let us number both the red and the black intervals in any family of intervals. Let us prove the proposition below, which is a reformulation of our problem:\\ \textbf{Proposition.} Assume that for every $n\in\mathbb{N}$ we are given a natural number $l_n$ and a family of $l_n$ disjoint intervals on the real line, painted black and red. Assume $l_n\leq l$ and that the sum of the lengths of the $l_n-1$ gaps between the intervals in the $n$-th family is bounded by the same number $M$ for all $n$.
Assume furthermore that for any $n$ the leftmost interval is black, the rightmost interval is red, and their lengths tend to $\infty$ as $n$ goes to infinity. Then there exist a subsequence $\{n_k\}$ and two associated intervals $(a_1^k, a_2^k)$ and $(c_1^k, c_2^k)$ in the $n_k$-th family such that $(a_1^k, a_2^k)$ is black, $(c_1^k, c_2^k)$ is red, and \begin{equation} a_2^k-a_1^k \rightarrow +\infty,\qquad c_2^k-c_1^k \rightarrow +\infty, \qquad 0<c_1^k-a_2^k\leq M_1 \end{equation} for a constant $M_1$ and all $k\in \mathbb{N}$.\\ \textbf{Proof of proposition.} The case $l=2$ is evident. Assume that the proposition is true for $l\leq N$ and let us prove it for $l=N+1$. Since $l\geq 3$, in every family there are at least two intervals of the same color. Assume that for infinitely many indices $n$ there are at least two black intervals in the $n$-th family. Consider the rightmost black interval of each such family. There are two possible cases:\\ \textbf{Case 1.} \textit{For a subsequence, their lengths tend to $+\infty$.}\\ In this case we can omit all the intervals placed to their left, which leads to a situation with fewer intervals in every family (of such a subsequence) fulfilling the requirements of the proposition, so by induction the existence of a ``good'' subsequence is proven.\\ \textbf{Case 2.} \textit{Their lengths are bounded by the same constant.}\\ In this case we can remove these intervals, which leads to a situation with fewer intervals in all families fulfilling the requirements of the proposition, so by induction the existence of a ``good'' subsequence is proven.\\ Let us now get back to our situation. If for every $n\in \mathbb{N}$ we remove all the yellow intervals from the real line, the families of black and red intervals fulfill the requirements of the proposition; thus the existence of a ``good'' subsequence is proven.
Take the two intervals $[a_1^k,a_2^k]$ and $[c_1^k,c_2^k]$ for all $k\in\mathbb{N}$ and denote the ``good'' subsequence of magnetizations again by $\{m^k\},$ which is also a minimizing sequence. Let us translate $m^k$ by $a_2^k$ and denote $$m_{good}^k(x,y,z)=m^k(x+a_2^k,y,z).$$ Then $\{m_{good}^k\}$ is a minimizing sequence and furthermore, denoting $a_3^k=a_2^k-a_1^k$, $c_3^k=c_1^k-a_2^k$ and $c_4^k=c_2^k-a_2^k,$ we obtain \begin{align} \label{conditions.good1} &\bar m_{good1}^k(x)\leq -\frac{1}{2} \quad\text{for} \quad x \in [-a_3^k,0] \quad \text{and} \quad\bar m_{good1}^k(x)\geq \frac{1}{2} \quad\text{for}\quad x \in [c_3^k,c_4^k],\\ &a_3^k\rightarrow \infty,\qquad c_4^k-c_3^k\rightarrow \infty,\qquad 0<c_3^k<M_1. \label{conditions.good2} \end{align} Owing to Lemma~\ref{lem:compactness} one can extract a subsequence of $\{m_{good}^k\}$ (not relabeled) with a limit $m^0\in A(\Omega).$ Let us now prove that conditions (\ref{conditions.good1}) and (\ref{conditions.good2}) imply $m^0\in \tilde A(\Omega).$ We have for any fixed $R>0,$ \begin{align*} \int_{-R}^{R}|\bar m_1^0-\bar m_{good1}^k|\,\mathrm{d} x&=\frac{1}{|\omega|}\int_{-R}^{R}\bigg|\int_{\omega}( m_1^0-m_{good1}^k)\,\mathrm{d} y\,\mathrm{d} z\bigg|\,\mathrm{d} x\\ &\leq\frac{1}{|\omega|}\int_{-R}^{R}\int_{ \omega}| m_1^0-m_{good1}^k|\,\mathrm{d} y\,\mathrm{d} z\,\mathrm{d} x\\ &\leq\frac{1}{|\omega|}\Bigg(2R|\omega|\cdot\int_{[-R,R]\times \omega}|m_1^0-m_{good1}^k|^2\,\mathrm{d}\xi\Bigg)^{\frac{1}{2}}\\ &=\sqrt{\frac{2R}{|\omega|}}\cdot \|m_1^0-m_{good1}^k\|_{L^2([-R,R]\times \omega)}\rightarrow 0 \end{align*} as $k\to\infty,$ because of the strong convergence $m_{good}^k\rightarrow m^0$ in $L_{loc}^2(\Omega)$; here the second inequality is the Cauchy--Schwarz inequality.
Therefore a subsequence of $\{\bar m_{good1}^k\}$ converges pointwise to $\bar m_1^0$ almost everywhere in $[-R,R].$ Letting $R$ run over the natural numbers and applying a diagonal argument, we establish that a subsequence of $\{\bar m_{good1}^k\}$ converges pointwise to $\bar m_1^0$ almost everywhere in $\mathbb{R},$ therefore \begin{equation} \label{cond.m0} \bar m_1^0(x)\leq -\frac{1}{2}\quad\text{ a.e. in}\quad (-\infty,0)\quad\text{ and}\quad \bar m_1^0(x)\geq\frac{1}{2} \quad\text{ a.e. in}\quad [M_1,+\infty). \end{equation} Let us now show that the conditions $E(m^0)<\infty$ and (\ref{cond.m0}) imply $m^0\in \tilde A(\Omega).$ We have by the triangle inequality $$\|\nabla (m^0-\bar e)\|_{L^2(\Omega)}^2\leq 2\|\nabla m^0\|_{L^2(\Omega)}^2+2\|\nabla \bar e\|_{L^2(\Omega)}^2\leq 2E(m^0)+4|\omega|<\infty, $$ thus it remains to prove that $m^0-\bar e\in L^2(\Omega)$. We have, again by the triangle inequality and by Lemma~\ref{lem:ineq.m.bar.m}, \begin{align*} \|m^0-\bar e\|_{L^2(\Omega)}^2&\leq 2\|\bar m^0-\bar e\|_{L^2(\Omega)}^2+2\|m^0-\bar m^0\|_{L^2(\Omega)}^2\\ &\leq 2\|\bar m^0-\bar e\|_{L^2(\Omega)}^2+2C_pd^2\|\nabla m^0\|_{L^2(\Omega)}^2\\ &\leq 2\|\bar m^0-\bar e\|_{L^2(\Omega)}^2+2C_pd^2E(m^0), \end{align*} thus it remains to prove that $\bar m^0-\bar e\in L^2(\Omega)$. One can assume without loss of generality that $M_1\geq1$ in (\ref{cond.m0}). We calculate \begin{align*} \int_\Omega |\bar m^0-\bar e|^2=\int_{[-1,M_1]\times\omega}|\bar m^0-\bar e|^2+\int_{(-\infty,-1]\times\omega}|\bar m^0-\bar e|^2+\int_{[M_1,\infty)\times\omega}|\bar m^0-\bar e|^2=I_1+I_2+I_3.
\end{align*} The estimation of $I_1,$ $I_2$ and $I_3$ is straightforward. First, $$I_1\leq 4(1+M_1)|\omega|.$$ Due to condition (\ref{cond.m0}) and Lemma~\ref{lem:ineq.m.bar.m} we have \begin{align*} I_2&=\int_{(-\infty,-1]\times\omega}(1+|\bar m^0|^2+2\bar m_1^0)\\ &=2\int_{(-\infty,-1]\times\omega}(1+\bar m_1^0)-\int_{(-\infty,-1]\times\omega}(|m^0|^2-|\bar m^0|^2)\\ &\leq 2\int_{(-\infty,-1]\times\omega}(1+\bar m_1^0)(1-\bar m_1^0)+C_pd^2\int_{(-\infty,-1]\times\omega}|\nabla m^0|^2\\ &=2\int_{(-\infty,-1]\times\omega}(|m^0|^2-|\bar m^0|^2)+2\int_{(-\infty,-1]\times\omega}(|\bar m_2^0|^2+|\bar m_3^0|^2)+C_pd^2\int_{(-\infty,-1]\times\omega}|\nabla m^0|^2\\ &\leq 3C_pd^2\int_{(-\infty,-1]\times\omega}|\nabla m^0|^2+2\int_{(-\infty,-1]\times\omega}(|\bar m_2^0|^2+|\bar m_3^0|^2), \end{align*} where we used that $1-\bar m_1^0\geq 1$ a.e. on $(-\infty,-1]$ and that $|\omega|\big(1-(\bar m_1^0)^2\big)=\int_\omega(|m^0|^2-|\bar m^0|^2)+|\omega|\big(|\bar m_2^0|^2+|\bar m_3^0|^2\big).$ An analogous analysis for $I_3$ gives $$I_3\leq3C_pd^2\int_{[M_1,\infty)\times\omega}|\nabla m^0|^2+2\int_{[M_1,\infty)\times\omega}(|\bar m_2^0|^2+|\bar m_3^0|^2).$$ Therefore, combining the estimates for $I_1,$ $I_2$ and $I_3$ and taking into account Lemma~\ref{lem:m2.m.3.bdd.E}, we discover $I_1+I_2+I_3<\infty,$ as desired. CASE1 is now established.\\ \textbf{CASE2.} \textit{There are some second type intervals in $B_n$ for some $n.$}\\ Removing all the second type yellow intervals from the real line, we can regard the rest as a real line without gaps, simply by shifting all the intervals to the left so that after this operation no overlaps occur and no gaps are left. Precisely, we shift each interval to the left by the sum of the lengths of the gaps between that interval and $-\infty.$ During this operation we unify the black and red intervals with the neighboring intervals of the same color, but we regard the possible neighboring first type yellow intervals as separate. We arrive at a situation as in CASE1 and can therefore prove the existence of a ``good'' subsequence.
It is easy to show that, since the sum of the lengths of the second type yellow intervals in each family is bounded by the same constant, the limit of the obtained ``good'' subsequence described in Lemma~\ref{lem:compactness} belongs to $\tilde A(\Omega)$ and hence is an energy minimizer in $\tilde A(\Omega)$. The proof is complete. \section{The stability of minimizers} Throughout this section we consider a sequence of domain-magnetization-energy triples $(\Omega_n, m^n,E(m^n))_{n\in\mathbb{N}}$ such that $\Omega_n=\mathbb R\times(d_n\cdot\omega),$ $m^n\in\tilde A(\Omega_n),$ $d_n\to 0$ and $\lim_{n\to\infty}\frac{E(m^n)}{d_n^2}=\min_{m\in A_0}E_0(m),$ i.e., $\{m^n\}$ is a sequence of almost minimizers. Assume furthermore that $\omega$ has $180$-degree rotational symmetry and that the matrix $M_\omega$ has three different eigenvalues, so that in particular $\alpha_2\neq\alpha_3;$ one can then assume without loss of generality that $\alpha_2<\alpha_3.$ Note that due to (\ref{almost.min}) we have \begin{equation} \label{E(m^n)is.bdd} E(m^n)\leq Cd_n^2\quad\text{for all}\quad n.
\end{equation} \begin{Lemma} \label{morms of avrages converges to the norm of limit} If $\{\acute m^n\}$ converges to some $m^0\in\tilde A(\Omega)$ in the sense of Definition~\ref{notion of convergence}, then \begin{itemize} \item[(i)] $ \lim_{n\to\infty}\|\nabla \acute{\bar m}^n\|_{L^2(\Omega)}=\|\nabla m^0\|_{L^2(\Omega)},$ \item[(ii)] $ \lim_{n\to\infty}\|\acute{\bar m}_2^n\|_{L^2(\Omega)}=\|m_2^0\|_{L^2(\Omega)},\qquad \lim_{n\to\infty}\|\acute{\bar m}_3^n\|_{L^2(\Omega)}=\|m_3^0\|_{L^2(\Omega)}.$ \end{itemize} \end{Lemma} \begin{proof} The inequality $ \liminf_{n\to\infty}\|\nabla \acute{\bar m}^n\|_{L^2(\Omega)}\geq\|\nabla m^0\|_{L^2(\Omega)}$ is trivial, while the inequality $ \liminf_{n\to\infty}\|\acute{m}_2^n\|_{L^2(\Omega)}\geq\|m_2^0\|_{L^2(\Omega)}$ follows from the convergence $\acute m_2^n\to m_2^0$ in $L_{loc}^2(\Omega).$ Furthermore, by Lemma~\ref{lem:ineq.m.bar.m} and by (\ref{E(m^n)is.bdd}) we have \begin{align*} \|\acute m_2^n-\acute{ \bar m}_2^n\|_{L^2(\Omega)}^2&=\frac{1}{d_n^2}\|m_2^n-\bar m_2^n\|_{L^2(\Omega_n)}^2\\ &\leq C_p\|\nabla m^n\|_{L^2(\Omega_n)}^2\\ &\leq C_pCd_n^2, \end{align*} thus \begin{equation} \label{acute.acute.bar} \|\acute m_2^n-\acute{ \bar m}_2^n\|_{L^2(\Omega)}\to 0. \end{equation} Therefore we get $ \liminf_{n\to\infty}\|\acute{\bar m}_2^n\|_{L^2(\Omega)}\geq\|m_2^0\|_{L^2(\Omega)},$ and a similar inequality holds for the third component. It only remains to show the opposite inequalities with $\limsup.$ It is clear that $\|\nabla \acute{\bar m}^n\|_{L^2(\Omega)}\leq \|\nabla \acute{m}^n\|_{L^2(\Omega)},$ thus it suffices to prove that $ \limsup_{n\to\infty}\|\nabla \acute{ m}^n\|_{L^2(\Omega)}\leq\|\nabla m^0\|_{L^2(\Omega)}.$ Assume now, for contradiction, that one of the three inequalities with $\limsup$ that we intend to prove fails.
Then, owing to (\ref{lower.bound.E}), for some $\delta>0$ there holds \begin{align*} \limsup_{n\to\infty}\frac{E(m^n)}{d_n^2}&\geq\max\bigg(\limsup_{n\to\infty}\|\nabla \acute m^n\|_{L^2(\Omega)}^2+\liminf_{n\to\infty} \alpha_2\|\bar m_2^n\|_{L^2(\mathbb R)}^2+\liminf_{n\to\infty}\alpha_3\|\bar m_3^n\|_{L^2(\mathbb R)}^2,\\ &\liminf_{n\to\infty}\|\nabla \acute m^n\|_{L^2(\Omega)}^2+\limsup_{n\to\infty} \alpha_2\|\bar m_2^n\|_{L^2(\mathbb R)}^2+\liminf_{n\to\infty}\alpha_3\|\bar m_3^n\|_{L^2(\mathbb R)}^2,\\ &\liminf_{n\to\infty}\|\nabla \acute m^n\|_{L^2(\Omega)}^2+\liminf_{n\to\infty} \alpha_2\|\bar m_2^n\|_{L^2(\mathbb R)}^2+\limsup_{n\to\infty}\alpha_3\|\bar m_3^n\|_{L^2(\mathbb R)}^2\bigg)\\ &\geq E_0(m^0)+\delta\\ &\geq \min_{m\in A_0}E_0(m)+\delta, \end{align*} which contradicts (\ref{almost.min}). The lemma is proved. \end{proof} \begin{Corollary} \label{norms convergs to the norm of the limit} Let $\{m^n\}$ and $m^0$ be as in Lemma~\ref{morms of avrages converges to the norm of limit}. Then \begin{itemize} \item[(i)] $ \lim_{n\to\infty}\|\acute{m}_2^n\|_{L^2(\Omega)}=\|m_2^0\|_{L^2(\Omega)},\qquad\lim_{n\to\infty}\|\acute{m}_3^n\|_{L^2(\Omega)}=\|m_3^0\|_{L^2(\Omega)}.$ \end{itemize} \end{Corollary} \begin{proof} This follows from Lemma~\ref{morms of avrages converges to the norm of limit} and the convergence (\ref{acute.acute.bar}). \end{proof} \begin{Lemma} \label{strong convergence1} Let $\{m^n\}$ and $m^0$ be as in Lemma~\ref{morms of avrages converges to the norm of limit}.
Then \begin{itemize} \item[(i)]$\lim_{n\to\infty}\|\nabla \acute m^n-\nabla m^0\|_{L^2(\Omega)}=0$, \item[(ii)] $\lim_{n\to\infty}\|\acute m_2^n- m_2^0\|_{L^2(\Omega)}=0, \qquad \lim_{n\to\infty}\|\acute m_3^n- m_3^0\|_{L^2(\Omega)}=0.$ \end{itemize} \end{Lemma} \begin{proof} The inequality $\liminf_{n\to\infty}\|\nabla \acute m^n\|_{L^2(\Omega)}\geq \|\nabla m^0\|_{L^2(\Omega)}$ is a consequence of the weak convergence $ \nabla \acute m^n\rightharpoonup\nabla m^0.$ The opposite inequality $\limsup_{n\to\infty}\|\nabla \acute m^n\|_{L^2(\Omega)}\leq \|\nabla m^0\|_{L^2(\Omega)}$ has been proven in the proof of Lemma~\ref{morms of avrages converges to the norm of limit}. Therefore $\lim_{n\to\infty}\|\nabla \acute m^n\|_{L^2(\Omega)}=\|\nabla m^0\|_{L^2(\Omega)},$ which combined with the weak convergence $\nabla \acute m^n \rightharpoonup \nabla m^0$ gives $(i)$. Fix now $l>0.$ By virtue of Corollary~\ref{norms convergs to the norm of the limit} and the strong convergence $\acute m^n\to m^0$ in $L_{loc}^2(\Omega)$ we have \begin{align*} \limsup_{n\to\infty}\int_{\Omega}|\acute m_2^n- m_2^0|^2&\leq \limsup_{n\to\infty}\int_{[-l,l]\times\omega}|\acute m_2^n- m_2^0|^2+\limsup_{n\to\infty}\int_{\Omega\setminus([-l,l]\times\omega)}|\acute m_2^n- m_2^0|^2\\ &\leq 2\limsup_{n\to\infty}\int_{\Omega\setminus([-l,l]\times\omega)}\big(|\acute m_2^n|^2+| m_2^0|^2\big)\\ &\leq 2\limsup_{n\to\infty}\int_{\Omega}\big(|\acute m_2^n|^2+|m_2^0|^2\big)-2\liminf_{n\to\infty}\int_{[-l,l]\times\omega}\big(|\acute m_2^n|^2+| m_2^0|^2\big)\\ &=4|\omega|\int_{\mathbb{R}\setminus[-l,l]}| m_2^0(x)|^2\,\mathrm{d} x. \end{align*} From the arbitrariness of $l$ we get the first equality in $(ii).$ The proof of the second equality in $(ii)$ is analogous. \end{proof} \begin{Lemma} \label{strong convergence2} Let $\{m^n\}$ and $m^0$ be as in Lemma~\ref{morms of avrages converges to the norm of limit}.
Assume in addition that for some $N\in\mathbb{N}$ and $l>0$ we have for all $n\geq N$ $$\bar m_1^n(x)\leq 0, \ x\in(-\infty,-l] \ \ \text{and}\ \ \ \bar m_1^n(x)\geq 0, \ x\in[l,+\infty).$$ Then $$\lim_{n\to\infty}\|\acute m^n-m^0\|_{H^1(\Omega)}=0.$$ \end{Lemma} \begin{proof} By Lemma~\ref{strong convergence1} it suffices to show that $\lim_{n\to\infty}\|\acute m_1^n-m_1^0\|_{L^2(\Omega)}=0.$ Since $m^0\in \tilde A(\Omega),$ due to Remark~\ref{rem:lim.bar.m1=1} there exists $l_1>0$ such that $$m_1^0(x)\leq-\frac{1}{2},\qquad x\in(-\infty, -l_1]\qquad \text{and}\qquad m_1^0(x)\geq\frac{1}{2},\qquad x\in[l_1, +\infty).$$ For any fixed $l_2>\max(l,l_1)$ we have $$\int_{\Omega}|\acute m_1^n-m_1^0|^2=\int_{[-l_2,l_2]\times\omega}|\acute m_1^n-m_1^0|^2+\int_{\Omega\setminus([-l_2,l_2]\times\omega)}|\acute m_1^n-m_1^0|^2.$$ The first summand converges to zero, and we have furthermore that $\|\acute m_1^n-\acute{\bar m}_1^n\|_{L^2(\Omega)}\to 0,$ thus it suffices to show that $$\lim_{n\to\infty}\int_{\Omega\setminus([-l_2,l_2]\times\omega)}|\acute{\bar m}_1^n-m_1^0|^2=0.$$ For $n\geq N$ the functions $\acute{\bar m}_1^n$ and $m_1^0$ have the same sign on $\Omega\setminus([-l_2,l_2]\times\omega),$ hence \begin{align*} \int_{\Omega\setminus([-l_2,l_2]\times\omega)}|\acute{\bar m}_1^n-m_1^0|^2&\leq \int_{\Omega\setminus([-l_2,l_2]\times\omega)}\big||\acute{\bar m}_1^n|^2-|m_1^0|^2\big|\\ &\leq\int_{\Omega\setminus([-l_2,l_2]\times\omega)}\big||\acute{\bar m}_1^n|^2-|\acute m_1^n|^2\big|+ \int_{\Omega\setminus([-l_2,l_2]\times\omega)}\big||\acute m_1^n|^2-|m_1^0|^2\big|. \end{align*} The first summand converges to zero; for the second summand we have by Corollary~\ref{norms convergs to the norm of the limit} \begin{align*} \limsup_{n\to\infty}\int_{\Omega\setminus([-l_2,l_2]\times\omega)}\big||\acute m_1^n|^2-|m_1^0|^2\big| &\leq\limsup_{n\to\infty}\int_{\Omega\setminus([-l_2,l_2]\times\omega)}(|\acute m_2^n|^2+|\acute m_3^n|^2+|m_2^0|^2+|m_3^0|^2)\\ &\leq 2\int_{\Omega\setminus([-l_2,l_2]\times\omega)}(|m_2^0|^2+|m_3^0|^2), \end{align*} which converges to zero as $l_2$ goes to infinity.
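The first inequality in the chain above relies on the elementary estimate valid for real numbers $a,b$ with $ab\geq 0$: $$|a-b|^2=|a-b|\,|a-b|\leq|a-b|\,|a+b|=\big|a^2-b^2\big|,$$ since $|a-b|\leq|a+b|$ whenever $a$ and $b$ have the same sign; it is applied pointwise with $a=\acute{\bar m}_1^n$ and $b=m_1^0,$ which have the same sign on $\Omega\setminus([-l_2,l_2]\times\omega)$ for $n\geq N.$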
\end{proof} \begin{Lemma} \label{bar m and [b^1,b^2] lemma} Let $0<\epsilon<1$ and let the sequence of intervals $\big([b_n^1, b_n^2]\big)_{n\in\mathbb N}$ be such that $$\bar m_1^n(b_n^1)=-1+\epsilon,\quad\bar m_1^n(b_n^2)=1-\epsilon.$$ Then for sufficiently big $n$ there holds \begin{align*} \bar m_1^n(x)&<-1+2\epsilon,\quad x\in(-\infty, b_n^1],\quad\bar m_1^n(x)>1-2\epsilon,\quad x\in[b_n^2,+\infty),\\ -1+\frac{\epsilon}{2}&<\bar m_1^n(x)<1-\frac{\epsilon}{2},\quad x\in [b_n^1,b_n^2]. \end{align*} \end{Lemma} \begin{proof} Assume for contradiction that for a subsequence (not relabeled) there are points $b_n^3\in(-\infty, b_n^1)$ such that $\bar m_1^n(b_n^3)\geq -1+2\epsilon.$ Since $\bar m_1^n(-\infty)=-1$ and $\bar m_1^n$ is continuous, we can without loss of generality assume that $\bar m_1^n(b_n^3)=-1+2\epsilon.$ Utilizing Lemma~\ref{lemma with f} for the intervals $(-\infty, b_n^3],$ $[b_n^3,b_n^1],$ $[b_n^1, +\infty)$ and (\ref{lower.bound.E}) we discover \begin{align*} \frac{E(m^n)}{d_n^2}&\geq\int_{\Omega}|\nabla \acute m^n|^2+\alpha_2\int_{\mathbb R}|\bar m_2^n|^2+\alpha_3\int_{\mathbb R}|\bar m_3^n|^2+o(1)\\ &\geq 2\sqrt{\alpha_2|\omega|}\bigg(|2\epsilon|+|\epsilon|+|2-2\epsilon|\bigg)+o(1)\\ &\geq\Big(1+\frac{\epsilon}{2}\Big)\min_{m\in A_0}E_0(m)+o(1), \end{align*} which contradicts the almost minimizing property of $\{m^n\}.$ The bounds near $+\infty$ and in $[b_n^1,b_n^2]$ are obtained similarly. \end{proof} \subsection{Proof of Theorem~\ref{th:almost.minimizers}} \begin{proof} The proof splits into several steps:\\ \textbf{Step1.} Let us prove that if a sequence of magnetizations converges to some $m^0\in\tilde A(\Omega)$ in the sense of Definition~\ref{notion of convergence}, satisfies condition (\ref{almost.min}) and $\bar m_2^n(x_0)\geq 0$ for some $x_0\in\mathbb R$ and for big $n,$ then $m_2^0(x_0)\geq 0.$ We have due to (\ref{almost.min}) that $$\int_{\Omega_n}|\partial_x \bar m^n|^2\leq\int_{\Omega_n}|\partial_x m^n|^2\leq Cd_n^2,$$ thus $$\int_{\mathbb R}|\partial_x \bar
m^n(x)|^2\,\mathrm{d} x\leq \frac{C}{|\omega|}$$ which yields that the sequence $\{\bar m^n\}$ is equicontinuous on $\mathbb R,$ and therefore by the Arzel\`a-Ascoli theorem $\{\bar m^n(x)\}$ has a subsequence with a uniform limit in the interval $[x_0-1,x_0+1].$ Clearly the limit is $m^0,$ and thus $\bar m_2^n(x_0)\geq 0$ yields $m_2^0(x_0)\geq 0.$ Evidently, the same sign-preserving property holds for the first and the third components of $\bar m^n$ and also for the opposite sign. This means in particular that if $\bar m_1^n(x_0)=0$ for large $n$ then $m_1^0(x_0)=0.$\\ \textbf{Step 2.} In the second step we construct the sequences $\{T_n\}$ and $\{R_n\}.$ Note first that the change of variables mentioned in the theorem maps the domain $\Omega$ to itself and preserves the energy, thus the minimization problem (\ref{minimization problem}) is invariant under such transformations. Let us now evaluate the constant in estimate (\ref{(a,b).finite.estimate}). The constant $C_1$ in (\ref{(a,b).finite.estimate}) comes from Lemma~\ref{lem:m2.m.3.bdd.E} and is given by $$C_1=\frac{2E(m^n)}{\alpha_2}\leq \frac{2Cd_n^2}{\alpha_2},$$ for large $n.$ Thus we get \begin{align*} M(\alpha,\beta,\rho,\omega_n,E(m^n))&=\frac{1}{|\omega_n|}\left(\frac{E(m^n)}{(\alpha-\beta)^2}+\frac{C_1+C_pd_n^2E(m^n)}{1-\rho^2}\right)\\ &\leq \frac{1}{|\omega|}\left(\frac{C}{(\alpha-\beta)^2}+\frac{\frac{2C}{\alpha_2}+C_pCd_n^2}{1-\rho^2}\right)\\ &\leq M_1 \end{align*} uniformly in $n.$ Next we choose the intervals $[b_n^1,b_n^2]$ to be as in Lemma~\ref{bar m and [b^1,b^2] lemma} with $\epsilon=\frac{1}{3},$ which is possible due to the continuity of $\bar m_1^n$ and the fact that $\bar m_1^n(\pm\infty)=\pm1.$ Owing to Lemma~\ref{bar m and [b^1,b^2] lemma} we get \begin{align} \label{b1b2.est} \bar m_1^n(x)&<-\frac{1}{3},\quad x\in(-\infty, b_n^1],\quad\bar m_1^n(x)>\frac{1}{3},\quad x\in[b_n^2,+\infty),\\ -\frac{5}{6}&<m_1^n(x)<\frac{5}{6},\quad x\in [b_n^1,b_n^2].
\end{align} Therefore, we obtain by the uniform estimate on $M(\alpha,\beta,\rho,\omega_n,E(m^n))$ and by the estimate (\ref{(a,b).finite.estimate}) of Lemma~\ref{lem:oscilation.preventing} that for sufficiently large $n$ there holds \begin{equation} \label{b2-b1.bdd} b_n^2-b_n^1\leq M_1. \end{equation} Let now $x_n\in [b_n^1,b_n^2]$ be such that $\bar m_1^n(x_n)=0$. For any $n\in\mathbb N$ we choose $T_n$ to be the translation by $x_n$, and we choose the rotation $R_n$ to be the identity if $\bar m_2^n(x_n)\geq 0$ and the rotation by $180$ degrees otherwise. In the last step we prove that the whole sequence $\{\acute{\tilde m}^n\}$ converges to $m^\omega$ in $H^1(\Omega).$\\ \textbf{Step 3.} For convenience of notation we will omit the ``tilde'' in $\acute{\tilde m}^n.$ We are now ready to prove that $\|\acute m^n-m^\omega\|_{H^1(\Omega)}\to 0$ as $n\to\infty.$ Assume for contradiction that for a subsequence (not relabeled) $\|\acute m^n-m^\omega\|_{H^1(\Omega)}\geq \delta>0$ for some $\delta.$ As in the proof of Lemma~\ref{lem:compactness} we can show that a subsequence of $\{\acute m^n\}$ converges to some $m^0$ in the sense of Definition~\ref{notion of convergence}. By the $\Gamma$-convergence theorem we then have $E_0(m^0)\leq \liminf_{n\to\infty}\frac{E(m^n)}{d_n^2}$, thus \begin{equation} \label{m0.mins.E0} E_0(m^0)=\min_{m\in A_0}E_0(m). \end{equation} Next we have, by the sign-preserving property of Step 1 and by the bounds (\ref{b1b2.est})--(\ref{b2-b1.bdd}), that \begin{equation} \label{m0.M1.est} \bar m_1^0(x)\leq-\frac{1}{3},\quad x\in(-\infty, -M_1],\quad\bar m_1^0(x)\geq\frac{1}{3},\quad x\in[M_1,+\infty). \end{equation} Invoking now Remark~\ref{rem:lim.m0} and the properties (\ref{m0.mins.E0}) and (\ref{m0.M1.est}) we discover $m_1^0(\pm\infty)=\pm1,$ which yields \begin{equation} \label{m0.in.A0} m^0\in A_0, \end{equation} i.e., $m^0$ is a minimizer of the minimization problem (\ref{min.prob.E0}).
Again, by the sign-preserving property we have $m_1^0(0)=0$ and $m_2^0(0)\geq 0,$ thus by the analysis of the minimization problem (\ref{min.prob.E0}) in the Appendix, we establish that $m^0$ and $m^\omega$ actually coincide. Note, finally, that the requirements of Lemma~\ref{strong convergence2} are satisfied, thus we get $$\lim_{n\to\infty}\|\acute m^n-m^\omega\|_{H^1(\Omega)}=\lim_{n\to\infty}\|\acute m^n-m^0\|_{H^1(\Omega)}=0,$$ which is a contradiction. This proves the theorem. \end{proof} We mention that it is easy to see that any rectangle that is not a square and any ellipse that is not a circle satisfy the condition $0<\alpha_2<\alpha_3.$ This condition means, in some sense, that the cross section $\omega$ does not have too many rotational symmetries. For instance, if $\omega$ has a 90 degree rotational symmetry, then one can show that $\alpha_2=\alpha_3.$ It is also worth mentioning that one can prove a modified version of Theorem~\ref{th:almost.minimizers} in the case when $\omega$ is a disc or a regular polygon with an even number of vertices: due to the symmetry it is no longer true that the rotations $R_n$ can be chosen to be either the identity or the rotation by $180$ degrees, but one can still prove their existence. In conclusion we state that Theorem~\ref{th:almost.minimizers} shows that in thin wires energy minimizers with a 180 degree domain wall are transverse (N\'eel) walls that have the shape of $m^\omega.$ \begin{Remark} The stability result holds for circular cross sections and for cross sections that are regular polygons. \end{Remark} \begin{proof} It is straightforward to check that the construction of the rotation $R_n$ and the translation $T_n$ can be done exactly in the same way as in the proof of Theorem~\ref{th:almost.minimizers}, although the cross sections do not have the required asymmetry.
\end{proof} \appendix \section{Appendix} In this appendix we recall some well-known facts, in particular the study of the minimization problem $\min_{m\in A_0}E_0(m).$ We start with a simple lemma. \begin{Lemma} \label{lem:mag.m_1.m_2} For any magnetizations $m_1,m_2\in A(\Omega)$ there holds $$|E_{mag}(m_1)-E_{mag}(m_2)|\leq \|m_1-m_2\|_{L^2(\Omega)}^2+2\|m_1-m_2\|_{L^2(\Omega)} \sqrt{E_{mag}(m_1)}.$$ \end{Lemma} \begin{proof} The proof is elementary and can be found in [\ref{bib:Kuehn}]. \end{proof} \begin{Lemma} \label{lem:bound.Green.function} For any $0<s\leq r$ denote $R(s,r)=[-s,s]\times [-r,r].$ Then for all points $(y_1,z_1)\in \mathbb R^2$ there holds $$I=\int_{R(s,r)}\frac{\,\mathrm{d} y\,\mathrm{d} z}{\sqrt{(y-y_1)^2+(z-z_1)^2}}< 10s\Big(1+\ln\frac{r}{s}\Big).$$ \end{Lemma} \begin{proof} It is clear that if we replace the point $(y_1,z_1)$ by the point of $R(s,r)$ closest to it, the integral can only increase. Thus one can without loss of generality assume that $(y_1,z_1)\in R(s,r).$ We have that \begin{align*} I\leq \int_{R(2s,2r)}\frac{\,\mathrm{d} y\,\mathrm{d} z}{\sqrt{y^2+z^2}}&=\int_{R(2s,2s)}\frac{\,\mathrm{d} y\,\mathrm{d} z}{\sqrt{y^2+z^2}}+ \int_{R(2s,2r)\setminus R(2s,2s)}\frac{\,\mathrm{d} y\,\mathrm{d} z}{\sqrt{y^2+z^2}}\\ &\leq\frac{1}{4}\int_{D_{4\sqrt2 s}(0)}\frac{\,\mathrm{d} y\,\mathrm{d} z}{\sqrt{y^2+z^2}}+8s\int_{2s}^{2r}\frac{\,\mathrm{d} y}{y}\\ &=2\sqrt2 \pi s+8s\ln\frac{r}{s}\\ &<10s\Big(1+\ln\frac{r}{s}\Big). \end{align*} \end{proof} \begin{Lemma} \label{lemma with f} Assume that $\omega\subset\mathbb{R}^2$ is a bounded Lipschitz domain. Then for any interval $(a,b)\subset\mathbb{R},$ any positive $\alpha$, and any unit vector field $f\in H^1\big((a,b)\times\omega, \mathbb{R}^3\big)$ there holds: $$\int_{(a,b)\times\omega}|\partial_x f|^2+\alpha^2\int_{(a,b)\times\omega}(|f_2|^2+|f_3|^2)\geq 2\alpha|\omega||\bar f_1(a)-\bar f_1(b)|.$$ (The endpoints $a$ and $b$ can take the values $-\infty$ and $\infty$ respectively).
\end{Lemma} \begin{proof} Fix a point $(y,z)\in\omega$ and consider the vector field $f$ on the segment with endpoints $(a,y,z)$ and $(b,y,z).$ Being an $H^1$ vector field, $f$ must be absolutely continuous on that segment as a function of one variable, thus writing $$f_1(x,y,z)=\sin\varphi(x),\quad f_2(x,y,z)=\cos\varphi(x)\cos\theta(x),\quad f_3(x,y,z)=\cos\varphi(x)\sin\theta(x),$$ we obtain that $\varphi$ and $\theta$ are differentiable a.e. in $[a,b].$ Therefore we can calculate \begin{align*} \int_{(a,b)\times(y,z)}|\partial_x f(\xi)|^2\,\mathrm{d} x&+\alpha^2\int_{(a,b)\times(y,z)}(|f_2(\xi)|^2+|f_3(\xi)|^2)\,\mathrm{d} x\\ &=\int_a^b(\varphi'^2(x)+\theta'^2(x)\cos^2\varphi(x))\,\mathrm{d} x+\alpha^2\int_a^b\cos^2\varphi(x)\,\mathrm{d} x\\ &\geq\int_a^b\varphi'^2(x)\,\mathrm{d} x+\alpha^2\int_a^b\cos^2\varphi(x)\,\mathrm{d} x\\ &\geq 2\alpha\bigg|\int_a^b\varphi'(x)\cos\varphi(x)\,\mathrm{d} x\bigg|\\ &=2\alpha|f_1(a,y,z)-f_1(b,y,z)|. \end{align*} An integration of the obtained inequality over $\omega$ completes the proof. \end{proof} Note now that, analogously to the proof of the above lemma, one can determine the minima of the energy functional $$E_{\alpha}(m)=\int_{\mathbb{R}}|\partial_x m(x)|^2\,\mathrm{d} x+\alpha\int_{\mathbb{R}}(| m_2(x)|^2+|m_3(x)|^2)\,\mathrm{d} x$$ for any $\alpha>0$ in the admissible set $$A_0=\{m\colon \mathbb{R}\to\mathbb{R}^3 \ : \ |m|=1,\ m_1(\pm\infty)=\pm1 \}.$$ The minimizer is unique up to a translation in the $x$ coordinate and a rotation in the $OYZ$ plane and is given by \begin{equation} \label{limit problem minimizer} m^{\alpha,\beta}=\bigg(\frac{e^{2\sqrt{\alpha}x}\cdot\beta-1}{e^{2\sqrt{\alpha}x}\cdot\beta+1},\ \frac{2\sqrt\beta e^{\sqrt{\alpha}x}}{e^{2\sqrt{\alpha}x}\cdot\beta+1}\cos\theta,\ \frac{2\sqrt\beta e^{\sqrt{\alpha}x}}{e^{2\sqrt{\alpha}x}\cdot\beta+1}\sin\theta\bigg).
\end{equation} Set $m^{\alpha}:=m^{\alpha,1}$ so that $m_1^{\alpha}(0)=0.$ The minimal value of $E_\alpha$ in $A_0$ is $4\sqrt{\alpha}.$ Let us now find the minimal value and the minimizers of the reduced problem for any $\omega$ under the condition \begin{equation} \label{alpha2.alpha3} 0<\alpha_2<\alpha_3, \end{equation} i.e., consider the minimization problem \begin{equation} \label{min.prob.E0} \min_{m\in A_0}E_0(m). \end{equation} Observe that \begin{align*} E_0(m)&=|\omega|\int_{\mathbb R}|\partial_x m(x)|^2\,\mathrm{d} x+\alpha_2\int_{\mathbb R}m_2^2(x)+\alpha_3\int_{\mathbb R}m_3^2(x)\\ &\geq |\omega|\int_{\mathbb R}|\partial_x m(x)|^2\,\mathrm{d} x+\alpha_2\int_{\mathbb R}m_2^2(x)+\alpha_2\int_{\mathbb R}m_3^2(x)\\ &\geq 4\sqrt{\alpha_2|\omega|}, \end{align*} and the minimum is realized by \begin{equation} \label{fixed minimizer of the limit enargy} m^\omega=\bigg(\frac{e^{2\sqrt{\alpha_\omega}x}-1}{e^{2\sqrt{\alpha_\omega}x}+1},\ \frac{2 e^{\sqrt{\alpha_\omega}x}}{e^{2\sqrt{\alpha_\omega}x}+1},\ 0\bigg), \end{equation} where $\alpha_\omega=\frac{\alpha_2}{|\omega|}.$ All other minimizers of $E_0$ are obtained via translations and 180 degree rotations of $m^\omega.$ \\ \textbf{\large{Acknowledgement}}\\ The author is very grateful to his Ph.D. thesis supervisor Prof. Dr. S. M\"uller for suggesting the topic and for many fruitful discussions. \end{document}
\begin{document} \title{Ordinal Notations in the Caucal Hierarchy} \author{Fedor~Pakhomov\thanks{This work was partially supported by RFFI grant 15-01-09218 and the Dynasty foundation.}\\Steklov Mathematical Institute,\\Moscow\\ \texttt{[email protected]}} \date{December 2015} \maketitle \begin{abstract} The Caucal hierarchy is a well-known class of graphs with decidable monadic theories. It was proved by L.~Braud and A.~Carayol that the well-orderings in the hierarchy are exactly the well-orderings with order types less than $\varepsilon_0$. Naturally, every well-ordering from the hierarchy can be considered as a constructive system of ordinal notations. In proof theory, constructive systems of ordinal notations with fixed systems of cofinal sequences are used for the classification of the provably recursive functions of theories. We show that any well-ordering from the hierarchy can be extended by a monadically definable system of cofinal sequences with the Bachmann property. We show that the growth speed of the functions from a fast-growing hierarchy based on constructive ordinal notations from the Caucal hierarchy is only slightly influenced by the choice of the monadically definable system of cofinal sequences. We show that for ordinals less than $\omega^\omega$ a fast-growing hierarchy based on any system of ordinal notations from the Caucal hierarchy essentially coincides with the Löb-Wainer hierarchy. \end{abstract} \section{Introduction} We are interested in the hierarchy of (possibly infinite) graphs that was introduced by D.~Caucal \cite{Cau96}. Graphs in the hierarchy are directed graphs with colored edges. They can naturally be considered as structures with several binary relations (one relation for each color). All the graphs in the hierarchy have decidable monadic theories. There are several characterizations of this hierarchy (we refer to the survey \cite{Ong07}). We will use the definition due to Caucal \cite{Cau02,CarWoh03}. Level zero of the hierarchy consists of all finite graphs.
Each subsequent level of the hierarchy consists of all graphs that are monadically interpretable in unfoldings of graphs of the previous level. The Caucal hierarchy contains a wide spectrum of structures. In our investigation we focus on well-orderings that are monadically definable in some graphs from the hierarchy. L.~Braud has shown that for every $\alpha<\varepsilon_0=\lim_{n\to\omega}\underbrace{\omega^{\dots^{\omega}}}_{\mbox{\scriptsize $n$ times}}$ there exists a well-ordering of order type $\alpha$ in the Caucal hierarchy \cite{Bra09}. L.~Braud and A.~Carayol have shown that the converse is true: any well-ordering that lies in the hierarchy has order type less than $\varepsilon_0$. The ordinal $\varepsilon_0$ is known to be the proof-theoretic ordinal of first-order arithmetic (for a survey of ordinals in proof theory we refer to \cite{Rat07}). Natural fragments of first-order arithmetic have proof-theoretic ordinals less than $\varepsilon_0$. Constructive ordinal notation systems are used in the study of proof-theoretic ordinals. The standard general method of defining constructive ordinal notations is Kleene's $\mathcal{O}$ \cite{Kln38,Chr38}. It is known that a vast number of proof-theoretic applications are sensitive to the choice of constructive representations of ordinals. There is a long-standing conceptual problem of determining what is a {\it natural} or {\it canonical} ordinal notation system \cite{Kre76,Fef96}. One of the uses of ordinals in proof theory is the classification of the provably recursive functions of theories. There are several approaches to defining fast-growing recursive functions using ordinal notations \cite{Ros84}. There are a number of connections between these different methods of constructing fast-growing functions. Most of the approaches essentially use systems of cofinal sequences for ordinals.
A cofinal sequence (of length $\omega$) for an ordinal $\alpha$ is a sequence $\alpha_0<\alpha_1<\ldots$ such that every $\alpha_i<\alpha$ and $\lim_{n\in \omega}\alpha_n=\alpha$. Let us consider some proper initial segment of the countable ordinals. A system of cofinal sequences for the ordinals in the segment is an assignment of cofinal sequences (of length $\omega$) to all limit ordinals in the segment. In this paper we focus on one of the methods to construct fast-growing functions from ordinals with systems of cofinal sequences: the fast-growing hierarchy. Because the monadic theories of graphs in the Caucal hierarchy are decidable and in every graph from the hierarchy all the vertices are monadically definable, if we have a monadically definable well-ordering in some graph from the hierarchy then we can straightforwardly build a constructive system of ordinal notations from it. In the present paper we investigate how one can naturally define systems of cofinal sequences for well-orderings from the hierarchy. We will represent systems of cofinal sequences by binary predicates $x R y$ meaning ``$x$ is an element of the cofinal sequence for $y$''; in this way we can talk about definability of systems of cofinal sequences in structures. We show that if there is a graph with a monadically definable well-ordering in the $n$-th level of the Caucal hierarchy then in the same level there is an extension of this graph by new colors and edges marked by the new colors such that some system of cofinal sequences is monadically definable in the extension. Moreover, for a deterministic tree from the Caucal hierarchy there is always a monadically definable system of cofinal sequences for a monadically definable well-ordering; note that every graph in the Caucal hierarchy can be monadically interpreted in a deterministic tree from the same level of the hierarchy \cite{CarWoh03}.
The Bachmann property for systems of cofinal sequences guarantees that hierarchies of functions based on them behave relatively well \cite{Bach67,Sch77}. We show that if there is a graph with a monadically definable well-ordering and a system of cofinal sequences for it in the $n$-th level of the Caucal hierarchy then in the same level there is an extension of this graph by new colors and edges of the new colors such that there is a monadically definable system of cofinal sequences with the Bachmann property that consists only of subsequences of the original cofinal sequences. Moreover, for a deterministic tree from the Caucal hierarchy the same can be done without switching to an extension. We consider graphs from the Caucal hierarchy with a monadically definable well-ordering $\prec$ and two monadically definable systems of cofinal sequences with the Bachmann property. Two fast-growing hierarchies of recursive functions arise: $F^1_{a}(x)$ and $F^2_{a}(x)$. We show that if $a \prec b$ then $F^1_b(x)$ eventually dominates $F^2_a(x)$ and $F^2_b(x)$ eventually dominates $F^1_a(x)$, i.e., the hierarchies are very close to each other. Let us consider a graph from the Caucal hierarchy with a monadically definable well-ordering $\prec$ and a monadically definable system of cofinal sequences with the Bachmann property. We have a fast-growing hierarchy of recursive functions based on this well-ordering and system of cofinal sequences: $F'_a(x)$. Let us denote by $f$ the embedding of the well-ordering onto an initial segment of the ordinals. The most widely known concrete fast-growing hierarchy up to $\varepsilon_0$ is the Löb-Wainer hierarchy \cite{LobWai70}; we denote the functions from it by $F_{\alpha}(x)$. We show that for all $\alpha<\beta<\omega^\omega$, if $f^{-1}(\beta)$ is defined then $F'_{f^{-1}(\beta)}(x)$ eventually dominates $F_\alpha(x)$ and $F_\beta(x)$ eventually dominates $F'_{f^{-1}(\alpha)}(x)$. \section{Caucal Hierarchy of Graphs} The Caucal hierarchy is a hierarchy of directed graphs with colored edges.
Levels of the hierarchy are indexed by natural numbers. Formally, a {\it directed graph with colored edges} is a tuple $(\mathbf{C},V,U)$, where $\mathbf{C}$ is a finite set of edge colors, $V$ is a set of vertices, and $U$ is a set of edges, $U\subset V\times\mathbf{C}\times V$. Because directed graphs with colored edges are the only graphs that we consider, the term graph will always refer to directed graphs with colored edges. We say that a triple $(v_1,c,v_2)\in V\times\mathbf{C}\times V$ is an {\it edge of color} $c$ from $v_1$ to $v_2$; we also say that the edge $(v_1,c,v_2)$ is marked by color $c$. We denote the set of all graphs from the $n$-th level of the Caucal hierarchy by $\mathbf{Graph}_n$. We will have $\mathbf{Graph}_n\subset\mathbf{Graph}_{n+1}$ for all $n$. $\mathbf{Graph}_0$ is the collection of all finite graphs (formally, in axiomatic set theory we cannot consider such a set; we can overcome this difficulty, for example, as follows: instead of considering all finite graphs we consider only graphs where $\mathbf{C}$ and $V$ are hereditarily countable sets). Suppose we have a graph $G=(\mathbf{C},V,U)$ and a vertex $v_0$. The {\it unfolding} of $G$ from $v_0$ is the graph $(\mathbf{C},P,R)$, where $P$ is the set of all sequences $(p_0,a_0,p_1,a_1,\ldots,a_{n-1},p_{n})$ such that $n\ge 0$, $p_0=v_0$, $p_i\in V$, $a_i\in U$, and every $a_i$ is of the form $(p_i,c,p_{i+1})$ for some $c\in\mathbf{C}$, and $R$ consists of all edges $$((p_0,a_0,p_1,a_1,\ldots,a_{n-1},p_{n}),c,(p_0,a_0,p_1,a_1,\ldots,a_{n-1},p_{n},(p_n,c,p_{n+1}),p_{n+1}))$$ such that $$(p_0,a_0,p_1,a_1,\ldots,a_{n-1},p_{n}),(p_0,a_0,p_1,a_1,\ldots,a_{n-1},p_{n},(p_n,c,p_{n+1}),p_{n+1})\in P.$$ We denote by $\mathcal{U}(G,v_0)$ the result of the unfolding of $G$ from $v_0$.
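The unfolding construction is easy to experiment with on finite graphs. The following sketch (our own illustration, not from the paper; all names are ours) enumerates the path-vertices of $\mathcal{U}(G,v_0)$ only up to a fixed depth, since the full unfolding is infinite as soon as a cycle is reachable from $v_0$.

```python
def unfold(edges, v0, depth):
    """Truncated unfolding of the graph given by `edges` (a set of
    (v1, c, v2) triples) from vertex v0. A vertex of the unfolding is a
    tuple (p0, a0, p1, ..., pn) recording a path from v0; an edge of
    color c goes from a path to its one-step extension by a c-edge."""
    vertices = {(v0,)}
    new_edges = set()
    frontier = {(v0,)}
    for _ in range(depth):
        nxt = set()
        for path in frontier:
            last = path[-1]
            for (a, c, b) in edges:
                if a == last:  # extend the path by the edge (a, c, b)
                    ext = path + ((a, c, b), b)
                    nxt.add(ext)
                    new_edges.add((path, c, ext))
        vertices |= nxt
        frontier = nxt
    return vertices, new_edges
```

For a single vertex with one self-loop the truncated unfolding is a finite path; with two self-loops of different colors it is a finite binary tree, matching the intuition that unfolding turns cycles into infinite deterministic branches.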
A graph $(\mathbf{C},V,U)$ can be treated as the structure with the domain $V$, the signature of binary predicates $\{\textsc{R}_c\mid c\in \mathbf{C}\}$, and the following interpretations of the predicates: $$v_1 \textsc{R}_c v_2 \stackrel{\text{def}}{\iff} (v_1,c,v_2)\in U.$$ Vice versa, for a finite set $\mathbf{C}$, a structure $\mathfrak{A}$ with the domain $A$ and a signature consisting of binary predicates $\{\textsc{R}_c\mid c\in \mathbf{C}\}$ can be treated as a graph with the set of colors $\mathbf{C}$, the set of vertices $A$, and the set of edges $\{(v_1,c,v_2) \mid v_1,v_2\in A\textrm{ and }\mathfrak{A}\models v_1 \textsc{R}_c v_2\}$. We freely switch between these two formalisms. {\it Monadic second-order formulas} are formulas in which, in addition to the usual first-order connectives and quantifiers, unary predicate variables and quantifiers over unary predicates are allowed. We use capital Latin letters $X,Y,Z,\ldots$ for predicate variables and small Latin letters $x,y,z,\ldots$ for first-order variables. The truth relation $\models_{\mathrm{MSO}}$ for monadic formulas in a structure can be defined in a natural way, with unary predicate variables ranging over arbitrary subsets of the domain of the structure. Suppose $\mathfrak{A}$ and $\mathfrak{B}$ are structures with predicate-only signatures. In a natural way, for a structure $\mathfrak{A}$ with the domain $A$ we can talk about {\it monadically definable} subsets of $$(\mathcal{P}(A))^n\times A^m,$$ for each $n,m$.
A set $$B\subset (\mathcal{P}(A))^n\times A^m$$ is monadically definable if there is a monadic formula $\varphi(X_1,\ldots, X_n,x_1,\ldots x_m)$ such that for every $(P_1,\ldots,P_n,p_1,\ldots,p_m)\in (\mathcal{P}(A))^n\times A^m $ we have $$(P_1,\ldots,P_n,p_1,\ldots,p_m)\in B \iff \mathfrak{A}\models_{\mathrm{MSO}}\varphi(P_1,\ldots,P_n,p_1,\ldots,p_m).$$ A {\it monadic interpretation} of a structure $\mathfrak{A}$ in a structure $\mathfrak{B}$ is a function $f$ that maps symbols from the signature of $\mathfrak{A}$ to monadic formulas of the signature of $\mathfrak{B}$ such that for an $n$-ary symbol $R$ from the signature of $\mathfrak{A}$ the formula $f(R)$ is of the form $F(x_1,\ldots,x_n)$, and there exists a bijection $g$ from the domain of $\mathfrak{A}$ to the domain of $\mathfrak{B}$ such that the following holds for any $n$, any $n$-ary symbol $R$ from the signature of $\mathfrak{A}$, and any $v_1,\ldots,v_n$ from the domain of $\mathfrak{A}$: $$\mathfrak{A}\models_{\mathrm{MSO}} R(v_1,\ldots,v_n)\iff \mathfrak{B}\models_{\mathrm{MSO}} f(R)(g(v_1),\ldots,g(v_n)).$$ For graphs $G_1,G_2$, a {\it monadic interpretation} of $G_1$ in $G_2$ is a monadic interpretation of the graph $G_1$, viewed as a structure, in the graph $G_2$, viewed as a structure. Suppose we have defined the class $\mathbf{Graph}_n$. Then the class $\mathbf{Tree}_{n+1}$ consists of all unfoldings of graphs from $\mathbf{Graph}_n$ from arbitrary fixed vertices. The class $\mathbf{Graph}_{n+1}$ consists of all graphs that can be monadically interpreted in the graphs from $\mathbf{Tree}_{n+1}$. For a finite set of colors $\mathbf{C}$ we denote by $\mathbf{C}^{-}$ the set of fresh colors $c^{-}$ for all $c\in\mathbf{C}$. We can extend every graph $G=(\mathbf{C},V,U)$ to the graph $\mathcal{R}(G)=(\mathbf{C}\sqcup\mathbf{C}^{-},V,U\sqcup U^{-})$, where $U^{-}=\{(v_2,c^{-},v_1)\mid (v_1,c,v_2)\in U\}$. It is easy to see that if $G\in \mathbf{Graph}_n$ then $\mathcal{R}(G)\in\mathbf{Graph}_n$.
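As a quick illustration (our own encoding, not from the paper), the passage from $G$ to $\mathcal{R}(G)$ is a one-line construction once a graph is stored as a set of colored edges; here the fresh color $c^{-}$ is encoded as the string c with '-' appended.

```python
def reverse_closure(edges):
    """R(G): for every c-colored edge (v1, v2), add a c^- -colored
    edge (v2, v1), keeping all original edges."""
    return edges | {(v2, c + '-', v1) for (v1, c, v2) in edges}
```

This is the operation that lets regular path expressions (as in the theorem below on deterministic trees) walk edges in both directions.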
For a graph $(\mathbf{C},V,E)$, vertices $v_1,v_2\in V$, and a word $c_1c_2\ldots c_k\in \mathbf{C}^{\star}$ we say that there is a $c_1c_2\ldots c_k$-marked path from $v_1$ to $v_2$ if there are edges $$(w_0,c_1,w_1),(w_1,c_2,w_2),\ldots,(w_{k-1},c_k,w_k)\in E$$ with $w_0=v_1$ and $w_k=v_2$. We call a graph $G$ {\it deterministic} if for each color and vertex there exists at most one edge of this color going from the vertex. \begin{theorem}\label{Inv_rat_theorem}(\cite{CarWoh03}) For every $G=(\mathbf{C},V,U)\in \mathbf{Graph}_n$ there exists a deterministic tree $T=(\mathbf{D},V,E)\in \mathbf{Tree}_n$ and regular languages $L_c\subset(\mathbf{D}\sqcup\mathbf{D}^{-})^{\star}$, for $c\in \mathbf{C}$, such that for all $v_1,v_2\in V$ we have $(v_1,c, v_2)\in U$ iff there exists a path from $v_1$ to $v_2$ in $\mathcal{R}(T)$ marked by some $\alpha\in L_c$. \end{theorem} Note that if some $G$ and $T$ are in a relation as in Theorem \ref{Inv_rat_theorem} then there is a monadic interpretation of $G$ in $T$ \cite{CarWoh03}. \section{Higher Order Pushdown Automata} One alternative way to represent the Caucal hierarchy is via configuration graphs of higher order pushdown automata. There is a pumping lemma for graphs in the Caucal hierarchy formulated in terms of this representation \cite{Par12}. We need this pumping lemma for the proofs in Section \ref{fast_growing_section}. Higher order pushdown automata were introduced by Maslov \cite{Mas74}. Suppose $\mathbf{A}$ is a finite set. We are going to define the notion of a {\it higher order pushdown store} ({\it pds}) over the alphabet $\mathbf{A}$. A $0$-pds over $\mathbf{A}$ is just a symbol from $\mathbf{A}$. An $(n+1)$-pds is a finite sequence of $n$-pds's. Suppose $n>m$, $\alpha^n$ is an $n$-pds and $\alpha^m$ is an $m$-pds. We will define the $n$-pds $\alpha^n{:}\alpha^m$. If $n=m+1$ then $\alpha^n{:}\alpha^m$ is the result of attaching $\alpha^m$ at the end of the sequence $\alpha^n$.
If $n>m+1$ then $\alpha^n{:}\alpha^m$ is equal to $\alpha^n{:}\beta^{m+1}$, where $\beta^{m+1}$ is the $(m+1)$-pds that is the one-element sequence containing $\alpha^m$. We say that every $0$-pds is {\it proper}. We call an $(n+1)$-pds proper if it is a non-empty sequence of proper $n$-pds's. Note that for every $n$, every $m<n$, and every proper $n$-pds $\alpha^n$ there is a unique representation of $\alpha^n$ in the form $\beta^n{:}\beta^m$, where $\beta^m$ is an $m$-pds; it is easy to see that here $\beta^m$ is always proper. Note that we can represent every proper $n$-pds $\alpha$ in the form $\beta^n{:}(\beta^{n-1}{:}(\ldots (\beta^1{:}\beta^0)\ldots))$ in a unique way. We say that $\beta^k{:}(\beta^{k-1}{:}(\ldots (\beta^1{:}\beta^0)\ldots))$ is the {\it topmost} $k$-pds of $\alpha$. There are the following operations on proper $n$-pds's over an alphabet $\mathbf{A}$: \begin{enumerate} \item Operation $\textsf{pop}^k$, where $0< k\le n$. This operation maps a proper $n$-pds $\alpha^n{:}\alpha^{k-1}$ to the $n$-pds $\alpha^n$. \item Operation $\textsf{push}^k(a)$, where $0< k\le n$ and $a\in\mathbf{A}$. This operation transforms a proper $n$-pds by replacing its topmost $k$-pds $\alpha^k{:}\alpha^{k-1}$ with $(\alpha^k{:}\alpha^{k-1}){:}\beta^{k-1}$, where $\beta^{k-1}$ is $\alpha^{k-1}$ with the topmost $0$-pds replaced by $a$. \end{enumerate} We denote by $\mathbf{OP}^n(\mathbf{A})$ the set of all operations on $n$-pds's over the alphabet $\mathbf{A}$ defined above.
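A purely functional encoding makes the two operations concrete (a sketch under our own naming conventions, not from the paper): a $0$-pds is a symbol, a $(k{+}1)$-pds is a Python list of $k$-pds's with the topmost component stored last, and each operation rebuilds only the spine it touches, so no mutation occurs.

```python
def initial(n, s):
    """The proper n-pds whose only symbol is s."""
    return s if n == 0 else [initial(n - 1, s)]

def set_top(pds, level, a):
    """Replace the topmost 0-pds of a level-`level` pds by the symbol a."""
    return a if level == 0 else pds[:-1] + [set_top(pds[-1], level - 1, a)]

def pop(pds, level, k):
    """pop^k: drop the topmost (k-1)-pds from the topmost k-pds."""
    if level == k:
        return pds[:-1]
    return pds[:-1] + [pop(pds[-1], level - 1, k)]

def push(pds, level, k, a):
    """push^k(a): duplicate the topmost (k-1)-pds inside the topmost
    k-pds, overwriting the copy's top symbol with a."""
    if level == k:
        return pds + [set_top(pds[-1], k - 1, a)]
    return pds[:-1] + [push(pds[-1], level - 1, k, a)]
```

For a level-2 store, $\textsf{push}^1$ acts like an ordinary stack push on the topmost 1-pds, while $\textsf{push}^2$ copies the whole topmost 1-pds; $\textsf{pop}^2$ then discards an entire copy at once.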
A pushdown system of level $n$ is a tuple $\mathcal{A}=(\mathbf{A},\mathbf{S},s_I,Q,q_I,\Delta,\lambda)$, where: \begin{enumerate} \item $\mathbf{A}$ is a finite input alphabet, \item $\mathbf{S}$ is a finite alphabet of stack symbols, \item $s_I$ is an initial stack symbol, \item $Q$ is a finite set of automaton states, \item $q_I$ is an initial state, \item $\Delta\subset Q\times \mathbf{S}\times Q\times\mathbf{OP}^n(\mathbf{S})$ is a set of transition instructions, \item $\lambda\colon \Delta\to \mathbf{A}\sqcup \{\varepsilon\}$ is a labeling function. \end{enumerate} A {\it configuration} of a pushdown system $\mathcal{A}$ is a pair $(q,\alpha)$, where $q\in Q$ and $\alpha$ is a proper $n$-pds. The {\it initial configuration} is the pair $(q_I,\varepsilon^n{:}(\varepsilon^{n-1}{:}(\ldots {:}(\varepsilon^1{:}s_I)\ldots )))$, where all the $\varepsilon^{k}$ are empty $k$-pds's. For configurations $w_1=(q_1,\alpha_1)$ and $w_2=(q_2,\alpha_2)$ we say that $\mathcal{A}$ {\it admits a one-step transition} from $w_1$ to $w_2$ if for some $s_1\in\mathbf{S}$ and $o\in\mathbf{OP}^n(\mathbf{S})$ the topmost $0$-pds of $\alpha_1$ is $s_1$, the result of the application of $o$ to $\alpha_1$ is $\alpha_2$, and $(q_1,s_1,q_2,o)\in\Delta$. Note that for a pair of configurations there exists at most one transition instruction $(q_1,s_1,q_2,o)$ with this property; we say that $(q_1,s_1,q_2,o)$ is the transition instruction that makes the switch from configuration $w_1$ to configuration $w_2$. We say that there is an $\mathcal{A}$ {\it run} from configuration $w$ to configuration $w'$ if we can find configurations $u_0,u_1,\ldots, u_k$ such that $\mathcal{A}$ admits a one-step transition from $u_i$ to $u_{i+1}$ for all $i<k$, $u_0=w$, and $u_k=w'$. A configuration $w$ is called {\it reachable} for $\mathcal{A}$ if there exists a run from the initial configuration of $\mathcal{A}$ to $w$.
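For level $n=1$ a pds is an ordinary stack ($\textsf{push}^1(a)$ duplicates the top symbol and overwrites the copy with $a$, i.e. pushes $a$; $\textsf{pop}^1$ pops), so one-step transitions and reachability can be explored directly. The sketch below (a toy system of our own; the labeling $\lambda$ is omitted) computes the configurations reachable within a step bound, since the full set of reachable configurations is infinite in general.

```python
def step(config, delta):
    """All one-step successors of a configuration of a level-1 system.
    A transition instruction is (q1, s1, q2, op) with op either
    ('pop',) or ('push', a)."""
    q, stack = config
    succs = []
    for (q1, s1, q2, op) in delta:
        if q1 == q and stack and stack[-1] == s1:
            if op[0] == 'pop':
                succs.append((q2, stack[:-1]))
            else:  # push^1(a): duplicate the top symbol, overwrite with a
                succs.append((q2, stack + (op[1],)))
    return succs

def reachable(q_init, s_init, delta, bound):
    """Configurations reachable from the initial one in <= bound steps."""
    init = (q_init, (s_init,))
    seen, frontier = {init}, [init]
    for _ in range(bound):
        nxt = []
        for v in frontier:
            for w in step(v, delta):
                if w not in seen:
                    seen.add(w)
                    nxt.append(w)
        frontier = nxt
    return seen
```

With a single state that can either copy or pop the top symbol, the reachable configurations within $k$ steps are exactly the stacks $s^j$ for $0\le j\le k+1$ (over the single state), which is already an infinite configuration graph in the limit.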
The {\it configuration graph} $C(\mathcal{A})=(\mathbf{A}\sqcup\{\varepsilon\},V_{C(\mathcal{A})},E_{C(\mathcal{A})})$ of $\mathcal{A}$ is the graph where $V_{C(\mathcal{A})}$ is the set of all reachable configurations of $\mathcal{A}$ and $E_{C(\mathcal{A})}$ is the set of all triples $(w_1,\lambda(t),w_2)$, where $w_1,w_2$ are reachable configurations of $\mathcal{A}$, $\mathcal{A}$ admits a one-step transition from $w_1$ to $w_2$, and $t$ is the transition instruction that makes the switch from configuration $w_1$ to configuration $w_2$. In what follows we consider $\varepsilon$-contractions only for graphs $C(\mathcal{A})$ such that for every vertex either all outgoing edges are marked by $\varepsilon$ or none of them is marked by $\varepsilon$. We define the {\it $\varepsilon$-contraction} $G=(\mathbf{C},V,U)$ of a configuration graph $C(\mathcal{A})$ as follows. $V$ consists of the initial configuration of $\mathcal{A}$ and all configurations that have an incoming edge marked by a non-$\varepsilon$ color. There is a $c$-marked edge in $G$ between $v_1,v_2\in V$ iff in $C(\mathcal{A})$ there is an $\varepsilon$-marked path from $v_1$ to some $v_3$ such that there is a $c$-marked edge from $v_3$ to $v_2$. We denote by $\mathbf{HOPDG}_n$ the collection of all graphs that are isomorphic to the $\varepsilon$-contraction of the configuration graph of some pushdown system of level $n$. \begin{theorem}(\cite{CarWoh03})\label{HOPDG} For every $n$ and every graph $G$ such that there is a vertex with paths from it to every other vertex, we have $G\in\mathbf{Graph}_n$ iff $G\in\mathbf{HOPDG}_n$. \end{theorem} P.~Parys \cite{Par12} has proved a variant of a pumping lemma for graphs in $\mathbf{HOPDG}_n$. The functions $\beth_n\colon \omega\to \omega$ are defined as follows: \begin{itemize} \item $\beth_0(x)=x$, \item $\beth_{n+1}(x)=2^{\beth_n(x)}$.
\end{itemize} \begin{theorem}(\cite{Par12}) \label{pumping_lemma} Suppose $\mathcal{A}$ is a pushdown system of level $n$ with input alphabet $\mathbf{A}$ and $L\subset \mathbf{A}^{\star}$ is a regular language. Let $G$ be the $\varepsilon$-contraction of the configuration graph of $\mathcal{A}$. Assume that $G$ is finitely branching. Assume that in $G$ there exists a path of length $m$ from the initial configuration to some configuration $w$. Assume that for some $\alpha\in L$ with $|\alpha|\ge\beth_{n-1}((m+1)C_{\mathcal{A},L})$ there is a path in $G$ starting at $w$ marked by $\alpha$; here $C_{\mathcal{A},L}$ is a constant which depends on $\mathcal{A}$ and $L$. Then there are infinitely many paths in $G$ which start at $w$ and are marked by some $\beta\in L$. \end{theorem} \section{Well-Orderings and Cofinal Sequences} We are interested in graphs $G=(\mathbf{C},V,U)$ such that one of the relations $R_c$, for $c\in\mathbf{C}$, is a strict well-ordering of $V$. We call such graphs {\it well-orderings for color $c$}. L.~Braud has proved that there are graphs in the Caucal hierarchy with a well-ordering of order type $\alpha$, for every $\alpha<\varepsilon_0$ \cite{Bra09}. L.~Braud and A.~Carayol have proved that every graph with a well-ordering in the Caucal hierarchy has order type less than $\varepsilon_0$ \cite{BraCar10}. More precisely, those works have shown that for the class $\mathbf{Graph}_n$ the exact (unattained) upper bound of the order types of well-orderings is $\omega_{n+1}$, where $\omega_0=1$ and $\omega_{k+1}=\omega^{\omega_k}$. Suppose $(A,R)$ is a strict well-ordering. We denote by $A^{\mathrm{lim}}$ the set of all non-zero limit points of $(A,R)$, i.e. $A^{\mathrm{lim}}=\{x\in A\mid x=\sup_R(\{y\in A\mid yR x\})\}\setminus \{\inf_R(A)\}$. A {\it system of cofinal sequences} is a function $s\colon A^{\mathrm{lim}}\times \omega \to A$ such that for all $x\in A^{\mathrm{lim}}$ and $n\in \omega$ we have $s(x,n)R x$ and $x=\sup_R(\{s(x,m)\mid m\in \omega\})$.
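To make the definition concrete, here is a sketch (our own illustration; the encoding and the particular assignment, the standard one for Cantor normal forms below $\varepsilon_0$, are not taken from the paper). An ordinal is the list of the exponents of its Cantor normal form in non-increasing order, each exponent being again such a list, with $0$ encoded as the empty list. The function F implements the fast-growing hierarchy over this system, as recalled later in this section.

```python
def is_limit(a):
    """a is a limit ordinal iff it is non-zero and its last CNF term
    omega^e has exponent e > 0."""
    return a != [] and a[-1] != []

def fs(a, n):
    """n-th element of the standard cofinal sequence for the limit a."""
    head, e = a[:-1], a[-1]
    if is_limit(e):                 # a = head + omega^e with e limit:
        return head + [fs(e, n)]    #   a[n] = head + omega^{e[n]}
    return head + [e[:-1]] * n      # e = f + 1: a[n] = head + omega^f * n

def F(a, x):
    """Fast-growing hierarchy F_a(x) over this system."""
    if a == []:                     # F_0(x) = x + 1
        return x + 1
    if a[-1] == []:                 # successor: F_{b+1}(x) = F_b^x(x)
        b = a[:-1]
        for _ in range(x):
            x = F(b, x)
        return x
    return F(fs(a, x), x)           # limit: F_l(x) = F_{l[x]}(x)
```

For example, $\omega$ is encoded as [[[]]] and $\omega[n]=n$; $\omega^\omega[2]=\omega^2$. This standard assignment is known to have the Bachmann property, so it is a reasonable baseline against which the paper's monadically definable systems can be compared.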
For the purpose of defining cofinal sequences in terms of monadic theories of graphs we further consider only strictly monotone cofinal sequences, i.e. for every $x\in A^{\mathrm{lim}}$, if $n<m$ then $s(x,n)R s(x,m)$. A system of cofinal sequences $s\colon A^{\mathrm{lim}}\times \omega \to A$ is said to have the {\it Bachmann property} \cite{Bach67} if for any $x,y\in A^{\mathrm{lim}}$ and $n\in \omega$ such that $s(x,n)<y\le s(x,n+1)$ we have $s(x,n)\le s(y,0)$. There is an alternative characterization of systems of cofinal sequences with the Bachmann property (we refer to \cite[\S3.3]{Ros84} for details). A system of cofinal sequences $s$ induces a step-down relation $\prec^s$ on $A$: the relation $\prec^s$ is the transitive closure of the relation given by $x \prec^s x+1$ and $s(x,0)\prec^s x$. A system of strictly monotone cofinal sequences $s\colon A^{\mathrm{lim}}\times \omega \to A$ is said to be Schmidt-coherent \cite{Sch77} if for all $x\in A^{\mathrm{lim}}$ and $n\in \omega$ we have $s(x,n)\prec^s s(x,n+1)$. Suppose $G=(\mathbf{C},V,U)$ is a well-ordering for color $c$ and $d\in\mathbf{C}$. We say that $d$ {\it gives a system of cofinal sequences for $R_c$} if there is a system of strictly monotone cofinal sequences $s_d$ for $(V,R_c)$ such that for all $v_1,v_2\in V$ $$v_1 R_d v_2\iff v_2\textrm{ is a non-zero limit of $R_c$ and } \exists n ( s_d(v_2,n)=v_1) .$$ Note that there exists at most one $s_d$ with the described property, and thus it is determined by $R_d$. For a well-ordering $(A,R)$ with a fixed system of cofinal sequences $s$ the {\it fast-growing hierarchy} is a family of functions $F^s_a\colon \omega \to \omega$ indexed by $a\in A$: \begin{enumerate} \item $F^s_{0}(x)=x+1$, \item $F^s_{b+1}(x)=\underbrace{F^s_{b}(F^s_{b}(\ldots F^s_{b}}_{\mbox{$x$ times}}(x)\ldots))$, \item $F^s_l(x)=F^s_{s(l,x)}(x)$, for every $l\in A^{\mathrm{lim}}$.
\end{enumerate} \section{Defining Cofinal Sequences in the Caucal Hierarchy} In the present section we prove that every $G\in \mathbf{Graph}_n$ that is a well-ordering for some color $c$ can be extended to a $G'\in \mathbf{Graph}_n$ by adding a color $d$ and some edges marked by $d$ such that $d$ gives a system of cofinal sequences for $R_c$. Note that, equivalently, Theorem \ref{adding_cofinals} and Theorem \ref{making_Bachmann} could be formulated in terms of monadically definable relations on deterministic trees from $\mathbf{Tree}_n$. We consider finite automata as tuples $(\mathbf{A},Q,\Delta)$, where $\mathbf{A}$ is the input alphabet, $Q$ is the set of states, and $\Delta\subset Q\times \mathbf{A}\times Q$ is the set of instructions. Suppose we have a tree $T=(\mathbf{D},V,E)$ and a finite automaton $\mathcal{A}$ with input alphabet $\mathbf{D}\sqcup\mathbf{D}^{-}$ and the set of states $Q_{\mathcal{A}}$. We call the set $\mathbf{Tp}_{\mathcal{A}}=\mathcal{P}(Q_{\mathcal{A}}\times Q_{\mathcal{A}})\times \mathcal{P}(Q_{\mathcal{A}}\times Q_{\mathcal{A}})$ the set of {\it vertex pair types}. For $q_1,q_2\in Q_{\mathcal{A}}$ and $v_1,v_2\in V$ we say that $\mathcal{A}$ can {\it switch} from $q_1$ to $q_2$ on a path from $v_1$ to $v_2$ if there exists a path in $\mathcal{R}(T)$ from $v_1$ to $v_2$ marked by some $\alpha\in (\mathbf{D}\sqcup\mathbf{D}^{-})^{\star}$ such that $\mathcal{A}$ can switch from $q_1$ to $q_2$ on input $\alpha$. We say that $(v_1,v_2)\in V\times V$ has type $(A,B)$ if for all $(q_1,q_2)\in Q_{\mathcal{A}}\times Q_{\mathcal{A}}$ we have \begin{enumerate} \item $(q_1,q_2)\in A$ iff $\mathcal{A}$ can switch from $q_1$ to $q_2$ on a path from $v_1$ to $v_2$, \item $(q_1,q_2)\in B$ iff $\mathcal{A}$ can switch from $q_1$ to $q_2$ on a path from $v_2$ to $v_1$. \end{enumerate} \begin{lemma} Suppose $\mathbf{D}$ is a finite set of colors and $\mathcal{A}$ is a finite automaton with input alphabet $\mathbf{D}\sqcup\mathbf{D}^{-}$.
Then for every vertex pair type $t\in\mathbf{Tp}_{\mathcal{A}}$ there exists a monadic formula $\varphi_t(x,y)$ such that for every tree $T=(\mathbf{D},V,E)$ and $v_1,v_2\in V$ we have $T\models_{\mathrm{MSO}} \varphi_t(v_1,v_2)$ iff $(v_1,v_2)$ has the type $t$. \end{lemma} \begin{proof}Suppose $k=|Q_{\mathcal{A}}|$ and fix an enumeration of the elements of $Q_{\mathcal{A}}$. Clearly it is enough to prove that for any state $q\in Q_{\mathcal{A}}$ there exists a monadic formula $\psi_{q}(x,Y_1,\ldots, Y_k)$ such that for every tree $T=(\mathbf{D},V,E)$, $v\in V$, and $(S_1,\ldots,S_k)\in (\mathcal{P}(V))^k$ we have: $T\models_{\mathrm{MSO}} \psi_q(v,S_1,\ldots,S_k)$ iff, for all $1\le i \le k$, $S_i$ is the set of all vertices $v'$ such that $\mathcal{A}$ can switch from the state $q$ to the $i$-th state on a path from $v$ to $v'$. Suppose $T=(\mathbf{D},V,E)$ is a tree, $v_1\in V$, and $q\in Q_{\mathcal{A}}$. Let us consider the tuple $(R_1,\ldots,R_k)\in (\mathcal{P}(V))^k$, where $R_i$ is the set of all vertices $v'$ such that $\mathcal{A}$ can switch from the state $q$ to the $i$-th state on a path from $v_1$ to $v'$. There is a natural partial order on $(\mathcal{P}(V))^k$ given by element-wise inclusion. Note that $(R_1,\ldots,R_k)$ is the least element $(S_1,\ldots,S_k)\in (\mathcal{P}(V))^k$ such that $v_1\in S_i$ whenever $q$ is the $i$-th state, and for all $1\le i,j\le k$, every $v'\in S_i$, and every edge $(v',a,v'')$ of $\mathcal{R}(T)$: if $\mathcal{A}$ can switch from the $i$-th state to the $j$-th state on input $a$, then $v''\in S_j$. The latter characterization of $(R_1,\ldots,R_k)$ clearly gives us the required monadic formula $\psi_{q}$. \end{proof} Obviously, the following two lemmas hold: \begin{lemma} \label{type_calculation1} Suppose $\mathbf{D}$ is a finite set of colors and $\mathcal{A}$ is a finite automaton with input alphabet $\mathbf{D}\sqcup\mathbf{D}^{-}$.
Then there exists a function $f\colon \mathbf{Tp}_{\mathcal{A}}\times \mathbf{Tp}_{\mathcal{A}} \to \mathbf{Tp}_{\mathcal{A}}$ with the following property: Suppose $T=(\mathbf{D},V,E)$ is a tree and $v_1,v_2,v_3$ are vertices such that $v_1$ is in the cone of $v_2$ and $v_3$ is not in the cone of $v_2$. Then the type $t_3$ of $(v_1,v_3)$ is equal to $f(t_1,t_2)$, where $t_1$ is the type of $(v_1,v_2)$ and $t_2$ is the type of $(v_2,v_3)$. \end{lemma} \begin{lemma} \label{type_calculation2} Suppose $\mathbf{D}$ is a finite set of colors and $\mathcal{A}$ is a finite automaton with input alphabet $\mathbf{D}\sqcup\mathbf{D}^{-}$. Then there exists a function $f\colon \mathbf{Tp}_{\mathcal{A}}\times \mathbf{Tp}_{\mathcal{A}} \times \mathbf{Tp}_{\mathcal{A}} \to \mathbf{Tp}_{\mathcal{A}}$ with the following property: Suppose $T=(\mathbf{D},V,E)$ is a tree, $v_1,v_2$ are vertices with non-intersecting cones, $u_1$ is a vertex from the cone of $v_1$, and $u_2$ is a vertex from the cone of $v_2$. Then the type $t_4$ of $(u_1,u_2)$ is equal to $f(t_1,t_2,t_3)$, where $t_1$ is the type of $(v_1,v_2)$, $t_2$ is the type of $(v_1,u_1)$, and $t_3$ is the type of $(v_2,u_2)$. \end{lemma} Suppose $T$ is a tree. For a vertex $v$ of $T$ we call the set of all vertices reachable from $v$ (including $v$ itself) the {\it $T$-cone} under $v$. We say that a vertex $v$ is a {\it $T$-successor} of $v'$ if there is an edge from $v'$ to $v$ in $T$. Suppose $(A,R)$ is a well-ordering and $a\in A^{\mathrm{lim}}$. We say that a set $B\subset A$ is {\it cofinal} in $a$ if $\sup_R B=a$. \begin{theorem} \label{adding_cofinals} Suppose $n$ is a number and $G=(\mathbf{C},V,U)\in \mathbf{Graph}_n$ is a well-ordering for color $c\in \mathbf{C}$. Then there exist a color $d\not \in \mathbf{C}$ and a set $U_d\subset V\times \{d\}\times V$ such that $G'=(\mathbf{C}\sqcup\{d\},V,U\sqcup U_d)\in \mathbf{Graph}_n$ and $d$ gives a system of cofinal sequences for $R_c$.
\end{theorem} \begin{proof} We apply Theorem \ref{Inv_rat_theorem} to $\mathbf{Graph}_n$. We obtain a deterministic tree $T=(\mathbf{D},V,E)\in \mathbf{Tree}_n$ and regular languages $L_e\subset (\mathbf{D}\sqcup \mathbf{D}^{-})^{\star}$, for $e\in\mathbf{C}$. We consider an automaton $\mathcal{B}$ with initial state $q_I$ and accepting state $q_c$ that recognizes the language $L_c$. Note that for all $v_1,v_2\in V$ we have $(v_1,c,v_2)\in U$ iff $\mathcal{B}$ can switch from $q_I$ to $q_c$ on a path from $v_1$ to $v_2$ in $T$. We consider some $R_c$-limit point $v_0$ (note that the set of all $R_c$-limit points is monadically definable). Then we consider the set $S_{v_0}$ of all points $v$ such that the $T$-cone under $v$ contains some subset that is $R_c$-cofinal in $v_0$. Since $T$ is deterministic, every vertex has only finitely many $T$-successors, and thus at least one $T$-successor of any vertex from $S_{v_0}$ must lie in $S_{v_0}$. Therefore, the set $S_{v_0}$ is infinite. We consider the set $P_{v_0}$ of all vertices from $S_{v_0}$ that do not contain $v_0$ in their $T$-cones. The set of vertices $v$ from $S_{v_0}$ such that $v_0$ lies in the $T$-cone under $v$ is finite because there are only finitely many vertices above $v_0$. Thus the set $P_{v_0}$ is infinite. Clearly, there is a formula $\psi(x,y)$ such that $T\models_{\mathrm{MSO}} \psi(v_0,v)$ iff $v\in P_{v_0}$. Now let us prove that for every $v_1,v_2\in P_{v_0}$ either the vertex $v_1$ lies in the $T$-cone of $v_2$ or the vertex $v_2$ lies in the $T$-cone of $v_1$. For a contradiction, assume that neither $v_1$ lies in the $T$-cone of $v_2$ nor $v_2$ lies in the $T$-cone of $v_1$. We partition the $T$-cone of $v_1$ into sets $A^{v_1}_t$, where $t\in \mathbf{Tp}_{\mathcal{B}}$; the set $A^{v_1}_t$ consists of all the points $v$ from the $T$-cone of $v_1$ such that the type of $(v,v_1)$ is equal to $t$. In the same fashion we partition the $T$-cone of $v_2$ into sets $A^{v_2}_t$.
Because $\mathbf{Tp}_{\mathcal{B}}$ is finite, there exist a set $A^{v_1}_{t_1}$ and a set $A^{v_2}_{t_2}$ that are both cofinal in $v_0$. From Lemma \ref{type_calculation2} it follows that the $\mathcal{B}$-type of the pairs $(u_1,u_2)$, where $u_1\in A^{v_1}_{t_1}$ and $u_2\in A^{v_2}_{t_2}$, does not depend on the choice of $u_1$ and $u_2$. Thus the sets $A^{v_1}_{t_1}$ and $A^{v_2}_{t_2}$ are $R_c$-comparable, i.e. either for all $u_1\in A^{v_1}_{t_1}$ and $u_2\in A^{v_2}_{t_2}$ we have $u_1 R_c u_2$, or for all $u_1\in A^{v_1}_{t_1}$ and $u_2\in A^{v_2}_{t_2}$ we have $u_2 R_c u_1$. Thus one of the sets is not cofinal in $v_0$, a contradiction. We have a linear order $\sqsubset$ on $P_{v_0}$: $$w_1\sqsubset w_2 \stackrel{\text{def}}{\iff} \mbox{ $w_1\ne w_2$ and $w_2$ is in the $T$-cone of $w_1$}.$$ Clearly, the order type of $(P_{v_0},\sqsubset)$ is $\omega$. We enumerate $P_{v_0}=\{w^{v_0}_0,w^{v_0}_1,w^{v_0}_2,\ldots\}$ in increasing $\sqsubset$-order. Now we consider the sets of vertices $B^{v_0}_0,B^{v_0}_1,B^{v_0}_2,\ldots$ that are the $T$-cones under $w^{v_0}_0,w^{v_0}_1,w^{v_0}_2,\ldots$, respectively. We consider the sequence $k^{v_0}_0<k^{v_0}_1<k^{v_0}_2<\ldots$ of all indices $k$ such that there exists $u\in B^{v_0}_k\setminus B^{v_0}_{k+1}$ with $uR_c v_0$. We denote the set of all $w^{v_0}_{k^{v_0}_i}$ by $J_{v_0}$. Now we define the desired cofinal sequence $u^{v_0}_0,u^{v_0}_1,u^{v_0}_2,\ldots$ for $v_0$. The element $u^{v_0}_i$ is the $R_c$-maximal element such that $u^{v_0}_i\in B^{v_0}_{k^{v_0}_i}\setminus B^{v_0}_{k^{v_0}_i+1}$ and $u^{v_0}_iR_c v_0$. We denote by $K_{v_0}$ the set $\{u^{v_0}_0,u^{v_0}_1,u^{v_0}_2,\ldots\}$. It is easy to see that there exists a monadic formula that defines the set of all pairs $(v_0,P_{v_0})$. Thus there exists a monadic formula that defines the set of all pairs $(v_0,J_{v_0})$, and therefore there exists a monadic formula that defines the set of all pairs $(v_0,K_{v_0})$.
We use the latter formula to define the required binary relation $R_d$ on $V$: $$v_1R_d v_2 \stackrel{\text{def}}{\iff} v_1\in K_{v_2}.$$ Using this definition we build a monadic interpretation of $G'$ with the desired property in $T$. \end{proof} \begin{theorem} \label{making_Bachmann}Suppose $n$ is a number, $G=(\mathbf{C},V,U)\in \mathbf{Graph}_n$ is a well-ordering for color $c\in \mathbf{C}$, and the color $d\in \mathbf{C}$ gives a system of cofinal sequences for $R_c$. Then there exist a color $e\not \in \mathbf{C}$ and a set $U_e\subset V\times \{e\}\times V$ such that $G'=(\mathbf{C}\sqcup\{e\},V,U\sqcup U_e)\in \mathbf{Graph}_n$, $e$ gives a system of cofinal sequences for $R_c$, $R_e\subset R_d$, and $s_e$ has the Bachmann property. \end{theorem} \begin{proof} As in the proof of the previous theorem we find a deterministic tree $T=(\mathbf{D},V,E)\in \mathbf{Tree}_n$ and regular languages $L_a\subset (\mathbf{D}\sqcup \mathbf{D}^{-})^{\star}$, for $a\in\mathbf{C}$. Then we build an automaton $\mathcal{B}$ with initial states $q_c^I$, $q_d^I$ and final states $q_c$, $q_d$ such that $\mathcal{B}$ recognizes $L_c$ on runs that start in the state $q_c^I$ and end in the state $q_c$, and recognizes $L_d$ on runs that start in the state $q_d^I$ and end in the state $q_d$. Note that for all $v_1,v_2\in V$ we have \begin{enumerate} \item $(v_1,c,v_2)\in U$ iff $\mathcal{B}$ can switch from $q_c^I$ to $q_c$ on a path from $v_1$ to $v_2$ in $T$, \item $(v_1,d,v_2)\in U$ iff $\mathcal{B}$ can switch from $q_d^I$ to $q_d$ on a path from $v_1$ to $v_2$ in $T$. \end{enumerate} Let us consider some $R_c$-limit point $v_0$. We find the first point $u'$ on the path from $v_0$ over inverse edges of $T$ to the root of $T$ such that the $T$-cone of $u'$ contains infinitely many elements of the $s_d$-cofinal sequence for $v_0$. We fix some ordering on $\mathbf{D}$.
We consider the first color $a\in \mathbf{D}$ and the $a$-successor $u_{v_0}$ of $u'$ in $T$ such that $u_{v_0}$ exists, the $T$-cone of $u_{v_0}$ does not contain $v_0$, and the $T$-cone of $u_{v_0}$ contains infinitely many elements of the $s_d$-cofinal sequence for $v_0$. A simple check shows that such a color $a$ and such a vertex $u_{v_0}$ exist. Thus we have found, in a deterministic way, a cone that contains infinitely many elements of the $s_d$-cofinal sequence for $v_0$ but not $v_0$ itself. We fix some linear ordering of $\mathbf{Tp}_{\mathcal{B}}$. We consider the least vertex pair type $t\in \mathbf{Tp}_{\mathcal{B}}$ such that there are infinitely many elements $w$ of the cofinal sequence for $v_0$ in the cone of $u_{v_0}$ such that the type of $(w,u_{v_0})$ is $t$; we denote the set of all such $w$ by $P_{v_0}$. From Lemma \ref{type_calculation1} it follows that all elements of $P_{v_0}$ are elements of the $s_d$-cofinal sequence for $v_0$. Clearly the set $B$ of all triples $(v_0, u_{v_0},P_{v_0})$ is monadically definable. Now for a given $R_c$-limit point $v_0$ we consider the set $Z_{v_0}$ of all triples $(v_1, u_{v_1},P_{v_1})\in B$, where $v_1\ne v_0$ and $u_{v_1}$ is on the path from $v_0$ over inverse edges in $T$ to the $T$-root. Obviously, only finitely many different $u_{v_1}$'s occur in elements of $Z_{v_0}$. Thus, because the set $\mathbf{Tp}_{\mathcal{B}}$ is finite, only finitely many different $(u_{v_1},P_{v_1})$'s occur in elements of $Z_{v_0}$. Therefore, because $v_1$ is always the limit of $P_{v_1}$, the set $Z_{v_0}$ is finite. We consider the set $O_{v_0}\subset P_{v_0}$ of all $w\in P_{v_0}$ such that for any $(v_1, u_{v_1},P_{v_1})\in Z_{v_0}$ and $w'\in P_{v_1}$ we have either $w'R_c w$, or $w'=v_0$, or $v_0 R_c w'$. Because for every $(v_1, u_{v_1},P_{v_1})\in Z_{v_0}$ there are only finitely many elements of $P_{v_1}$ that are not $R_c$-greater than $v_0$, the set $O_{v_0}$ is infinite.
Clearly, the set of all pairs $(v_0,O_{v_0})$ is monadically definable in $T$. For all $w_1,w_2\in V$ we put $$w_1 R_e w_2\stackrel{\text{def}}{\iff}\mbox{$w_2$ is an $R_c$-limit point and $w_1\in O_{w_2}$}.$$ Obviously, $R_e$ gives a system of cofinal sequences for $R_c$. Now we show that $s_e$ has the Bachmann property. We consider some $R_c$-limit points $v_0$ and $v_1$ such that $v_0 R_c v_1$. We claim that we can split $P_{v_1}$ into two disjoint sets $F^{-}$ and $F^{+}$ such that \begin{enumerate} \item we have $w R_c w'$, for all $w\in F^{-}$ and $w' \in O_{v_0}$, \item we have $w' R_c w$, for all $w\in F^{+}$ and $w' \in O_{v_0}$. \end{enumerate} There are two possible cases: 1. $(v_1,u_{v_1},P_{v_1})\in Z_{v_0}$, 2. $v_0$ is not in the $T$-cone of $u_{v_1}$. In the first case we have such a separation into $F^{-}$ and $F^{+}$ by the construction of $O_{v_0}$. In the second case it follows from Lemma \ref{type_calculation1} that all the elements of $P_{v_1}$ stand in the same $R_c$-relation to $v_0$. Because $P_{v_1}$ consists of elements of some cofinal sequence for $v_1$, there is some $w\in P_{v_1}$ such that $v_0 R_c w$. Therefore we have $v_0 R_c w$ for all $w\in P_{v_1}$. Hence we can put $F^{-}=\emptyset$ and $F^{+}=P_{v_1}$. It is easy to see that the Bachmann property for $s_e$ follows from the claim for all $v_0$ and $v_1$.\end{proof} Using Theorem \ref{adding_cofinals} and Theorem \ref{making_Bachmann} we obtain a stronger version of Theorem \ref{adding_cofinals}. \begin{corollary} Suppose $n$ is a number and $G=(\mathbf{C},V,U)\in \mathbf{Graph}_n$ is a well-ordering for color $c\in \mathbf{C}$. Then there exist a color $d\not \in \mathbf{C}$ and a set $U_d\subset V\times \{d\}\times V$ such that $G'=(\mathbf{C}\sqcup\{d\},V,U\sqcup U_d)\in \mathbf{Graph}_n$ and $d$ gives a system of cofinal sequences for $R_c$ with the Bachmann property.
\end{corollary} \section{Equivalence of Systems of Cofinal Sequences} \label{fast_growing_section} Suppose $(A,R)$ is a strict well-ordering and $s\colon A^{\mathrm{lim}}\times \omega \to A$ is a system of cofinal sequences. For an element $a\in A$ we encode finite down-paths from $a$ by finite sequences of natural numbers. We simultaneously define the set $\mathbf{Path}^s_a$ of path codes and the function $\rho^s_a\colon \mathbf{Path}^s_a\to A$ that maps a path code to the end of the corresponding path. The empty sequence $()$ lies in $\mathbf{Path}^s_a$ and $\rho^s_a( ())=a$. If a sequence $(n_1,\ldots,n_k)$ lies in $\mathbf{Path}^s_a$ then \begin{enumerate} \item if $\rho^s_a((n_1,\ldots,n_k))\in A^{\mathrm{lim}}$ then for all $m\ge 1$ the sequence $(n_1,\ldots,n_k,m)\in \mathbf{Path}^s_a$ and $\rho^s_a((n_1,\ldots,n_k,m))=s(\rho^s_a((n_1,\ldots,n_k)),m-1)$, \item if $\rho^s_a((n_1,\ldots,n_k))$ is the successor of $b\in A$ then the sequence $(n_1,\ldots,n_k,0)\in \mathbf{Path}^s_a$ and $\rho^s_a((n_1,\ldots,n_k,0))=b$. \end{enumerate} The set $\mathbf{Path}^s_a$ is the minimal set with the described properties. Clearly, for every $b$ with $b R a$ there exists $p\in \mathbf{Path}^s_a$ such that $\rho^s_a(p)=b$. For a sequence of natural numbers $(n_1,\ldots,n_k)$ we put $|(n_1,\ldots,n_k)|=n_1+\ldots+n_k+k$. Note that for a system of cofinal sequences with the Bachmann property there is a simple algorithm to find the path with the least $|\cdot|$ to a target point. The path is defined by induction. Suppose our partial path is $(e_1,\ldots,e_k)$. We find the least $e_{k+1}$ such that $(e_1,\ldots,e_k,e_{k+1})$ encodes a path which ends either at our target point or at a point that is above our target point; in the first case we are done, in the second case we repeat the procedure. The path defined this way has the least $|\cdot|$ because every other path to the same point goes through all intermediate points of the calculated path. Suppose $G=(\mathbf{C},V,U)$ is a graph and $e\not \in\mathbf{C}$.
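The path coding $\mathbf{Path}^s_a$, the norm $|\cdot|$, and the greedy search for a $|\cdot|$-least path described above can be sketched in Python for ordinals below $\omega^\omega$ with the standard cofinal sequences $s(\beta+\omega^{m+1},n)=\beta+\omega^{m}n$ (the little-endian tuple encoding of Cantor normal forms is our own illustration, not part of the paper; the helpers are included so that the snippet is self-contained):

```python
# Path codes over ordinals below ω^ω, encoded as little-endian Cantor normal
# forms: (c0, ..., ck) stands for c0 + c1*ω + ... + ck*ω^k.

def trim(a):
    """Canonical form: drop trailing zero coefficients."""
    a = tuple(a)
    while a and a[-1] == 0:
        a = a[:-1]
    return a

def less(a, b):
    """Ordinal comparison: most significant ω-power first."""
    k = max(len(a), len(b))
    pad = lambda t: (tuple(t) + (0,) * (k - len(t)))[::-1]
    return pad(a) < pad(b)

def s(a, n):
    """Standard cofinal sequence: s(β + ω^{m+1}, n) = β + ω^m * n."""
    c = list(a)
    m1 = next(i for i in range(1, len(c)) if c[i] > 0)
    c[m1] -= 1
    c[m1 - 1] = n
    return trim(c)

def rho(a, code):
    """End point rho^s_a(code): an entry 0 steps to the predecessor,
    an entry m >= 1 steps to s(x, m-1), as in the definition of Path^s_a."""
    for m in code:
        a = trim((a[0] - 1,) + tuple(a[1:])) if m == 0 else s(a, m - 1)
    return trim(a)

def norm(code):
    """The norm |(n_1,...,n_k)| = n_1 + ... + n_k + k."""
    return sum(code) + len(code)

def least_path(a, b):
    """Greedy algorithm from the text: repeatedly append the least entry
    whose one-step extension ends at the target b or above it.  This yields
    a |.|-least path because the standard system has the Bachmann property."""
    a, b, code = trim(a), trim(b), []
    while a != b:
        if a[0] > 0:                  # successor point: only a 0-step exists
            e = 0
        else:                         # limit point: least m with s(a, m-1) not below b
            e = 1
            while less(s(a, e - 1), b):
                e += 1
        code.append(e)
        a = rho(a, (e,))
    return code
```

For instance, `least_path((0, 2), (3,))` returns `[1, 4]`: from $\omega\cdot 2$ the path steps to $s(\omega\cdot 2,0)=\omega$ and then to $s(\omega,3)=3$, and its norm is $1+4+2=7$.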
We define the {\it treegraph} of $G$: it is the graph $\mathcal{T}_{e}(G)=(\mathbf{C}\sqcup \{e\},V^{+},U^{+}\sqcup S)$. The set $V^{+}$ is the set of all non-empty sequences of elements of $V$. The set $U^{+}$ consists of all edges of the form $$((v_1,\ldots,v_k,u),c,(v_1,\ldots,v_k,w)),$$ such that $(u,c,w)$ is an edge of $G$. The set $S$ contains all the edges of the form $$((v_1,\ldots,v_k,u),e,(v_1,\ldots,v_k,u,u)).$$ \begin{theorem}(\cite{CarWoh03}) \label{treegraph_theorem} If $G\in\mathbf{Graph}_n$ and $\mathcal{T}_{e}(G)$ is defined then $\mathcal{T}_{e}(G)\in \mathbf{Graph}_{n+1}$. \end{theorem} \begin{lemma} \label{path_transform_1}Suppose a graph $G=(\mathbf{C},V,U)$ lies in the Caucal hierarchy, $G$ is a well-ordering for color $c\in \mathbf{C}$, $d,e\in \mathbf{C}$ give systems of cofinal sequences for $R_c$ with the Bachmann property, and $a\in V$. Then there exists $n$ such that for every $p_1\in\mathbf{Path}^{s_d}_a$ and $p_2\in \mathbf{Path}^{s_e}_a$ with $\rho^{s_d}_a(p_1)R_c \rho^{s_e}_a(p_2)$ there exists $p_3\in\mathbf{Path}^{s_e}_{\rho^{s_e}_a(p_2)}$ such that $\rho^{s_e}_{\rho^{s_e}_a(p_2)}(p_3)=\rho^{s_d}_a(p_1)$ and $|p_3|\le\beth_n(|p_1|+|p_2|)$. \end{lemma} \begin{proof} We switch to a different graph $G'$. The set of vertices of $G'$ is $V_a$, which consists of $a$ and all vertices $R_c$-below $a$. The set of colors of $G'$ is $\{d_0,d_1,d_2,e_0,e_1,e_2,o\}$. We will describe how we construct $R_{d_0}$, $R_{d_1}$, and $R_{d_2}$ from $R_d$; we construct $R_{e_0}$, $R_{e_1}$, and $R_{e_2}$ from $R_e$ in the same way and omit the description.
For all $v_1,v_2\in V_a$ we put: $$v_1R_{d_0}v_2\stackrel{\text{def}}{\iff} \mbox{$v_2$ is an $R_c$-limit point and $s_d(v_2,0)=v_1$},$$ $$v_1R_{d_1}v_2\stackrel{\text{def}}{\iff} \mbox{$v_2$ is an $R_c$-limit point and $s_d(v_2,1)=v_1$},$$ $$v_1R_{d_2}v_2\stackrel{\text{def}}{\iff} \mbox{$\exists n\ge 1, v_0\in V_a^{\mathrm{lim}}(s_d(v_0,n)=v_1$ and $s_d(v_0,n+1)=v_2)$},$$ $$v_1R_{o}v_2\stackrel{\text{def}}{\iff} \mbox{$v_2$ is the immediate $R_c$-successor of $v_1$}.$$ Because every $\mathbf{Graph}_n$ is closed under monadic interpretations with domain restriction \cite{CarWoh03}, the graph $G'$ lies in the Caucal hierarchy. Note that the graph $\mathcal{R}(G')$ is deterministic for all the colors save $d_0$ and $e_0$. This is easily shown using the fact that $s_d$ has the Bachmann property, whence for every $v\in V_a$ there is at most one $v'\in V_a^{\mathrm{lim}}$ such that $s_d(v',n)=v$ for some $n\ge 1$ (the same holds for $s_e$). Also note that the restriction of $R_c$ to $V_a$ is monadically definable in $G'$. We consider some fresh color $h$ and the graph $\mathcal{R}(\mathcal{T}_h(G'))$. Clearly, $\mathcal{R}(\mathcal{T}_h(G'))$ is deterministic for all the colors save $d_0$ and $e_0$. From Theorem \ref{treegraph_theorem} it follows that $\mathcal{R}(\mathcal{T}_h(G'))$ lies in the Caucal hierarchy. We consider the subgraph $H$ of $\mathcal{R}(\mathcal{T}_h(G'))$ that consists of all sequences of length at most 2, i.e. the root copy of $\mathcal{R}(G')$ and the copies one step below it. Clearly $H$ lies in the Caucal hierarchy. We consider fresh colors $l,r$, add $l$-loops to every vertex $(v_1,v_2)$ of $H$ such that $v_1 R_c v_2$, and add $r$-loops to every vertex $(v_1,v_2)$ of $H$ such that $v_2 R_c v_1$ or $v_2=v_1$; thus we obtain a graph $H'$ with colors $\{d_0,d_1,d_2,e_0,e_1,e_2,o,h,d_0^{-},d_1^{-},d_2^{-},e_0^{-},e_1^{-},e_2^{-},o^{-},h^{-},l,r\}$.
It is easy to see that the sets of vertices where we have added $l$-loops and $r$-loops are monadically definable in $H$, and therefore $H'$ lies in the Caucal hierarchy. Also, $H'$ is clearly deterministic for all the colors save $d_0$ and $e_0$. We build $H''$ from $H'$ by adding a color $b$ and all the edges of the form $((u),b,(u,a))$, and by removing the colors $d_0,d_1,d_2,e_0,e_1,e_2,o$. Clearly, $H''$ lies in the Caucal hierarchy, is deterministic, and all its vertices are reachable from $(a)$. We consider the regular language $$L=(l\{o^{-}, e_0^{-},e_1^{-}(re_2^{-})^{\star}\})^{\star}h^{-}.$$ It is easy to see that for any two $v_1,v_2\in V_a$ with $v_1 R_c v_2$ there exists exactly one path from $(v_1,v_2)$ to $(v_1)$ in $H''$ marked by some $\alpha\in L$. Moreover, the corresponding $\alpha$ can be calculated from the $|\cdot|$-least path $p\in \mathbf{Path}^{s_e}_{v_2}$, $p=(n_1,\ldots,n_k)$, such that $\rho_{v_2}^{s_e}(p)=v_1$: $\alpha$ is the word $\alpha_1\ldots\alpha_kh^{-}$, where, for $1\le i\le k$, the word $\alpha_i$ is equal to $l o^{-}$ if $n_i=0$, is equal to $l e_0^{-}$ if $n_i=1$, and is equal to $le_1^{-}(re_2^{-})^{n_i-2}$ if $n_i\ge 2$. We consider some $p_1,p_2$ as in the formulation of the lemma. We consider the $|\cdot|$-least $p_3\in \mathbf{Path}^{s_e}_{\rho^{s_e}_a(p_2)}$ such that $\rho^{s_e}_{\rho^{s_e}_a(p_2)}(p_3)=\rho^{s_d}_a(p_1)$. From the consideration above we see that there exists exactly one path from $(\rho^{s_d}_a(p_1),\rho^{s_e}_a(p_2))$ to $(\rho^{s_d}_a(p_1))$ marked by some $\alpha_0\in L$, and $|p_3|\le|\alpha_0|$. It is easy to see that there is a path in $H''$ from $(a)$ to $(\rho^{s_d}_a(p_1))$ of length at most $|p_1|$, and a path in $H''$ from $(\rho^{s_d}_a(p_1))$ to $(\rho^{s_d}_a(p_1),\rho^{s_e}_a(p_2))$ of length at most $|p_2|+1$. Using Theorem \ref{HOPDG} we obtain some $m$-pds $\mathcal{A}$ such that the $\varepsilon$-contraction of the configuration graph of $\mathcal{A}$ is isomorphic to $H''$.
Clearly, we can assume that the initial configuration of $\mathcal{A}$ corresponds to the vertex $(a)$, because we can always modify $\mathcal{A}$ if this is not initially true. There is a path in the $\varepsilon$-contraction of the configuration graph of $\mathcal{A}$ from the initial configuration of $\mathcal{A}$ to the configuration corresponding to $(\rho^{s_d}_a(p_1),\rho^{s_e}_a(p_2))$ of length at most $|p_2|+2|p_1|+1$. Now we apply Theorem \ref{pumping_lemma} to $\mathcal{A}$, the language $L$, and the configuration corresponding to $(\rho^{s_d}_a(p_1),\rho^{s_e}_a(p_2))$ and see that $|\alpha_0|<\beth_m(C_{\mathcal{A},L}(|p_2|+2|p_1|+2))$ (note that every deterministic graph is finitely branching and thus we can apply Theorem \ref{pumping_lemma}). Hence $|p_3|<\beth_m(C_{\mathcal{A},L}(|p_2|+2|p_1|+2))$. Hence we can choose an $n$ that does not depend on $p_1$ and $p_2$ such that $|p_3|\le \beth_n(|p_1|+|p_2|)$. \end{proof} The following two lemmas can be proved in the same fashion as the previous lemma, with only slight modifications: \begin{lemma} \label{path_transform_2} Suppose a graph $G=(\mathbf{C},V,U)$ lies in the Caucal hierarchy, $G$ is a well-ordering for color $c\in \mathbf{C}$, $d,e\in \mathbf{C}$ give systems of cofinal sequences for $R_c$ with the Bachmann property, $a\in V$, and $f$ is an embedding of $(V,R_c)$ onto an initial segment of the ordinals. Then there exists $n$ such that for every $p_1\in\mathbf{Path}^{s_d}_a$ and $p_2\in \mathbf{Path}^{s_e}_a$ with $\rho^{s_d}_a(p_1)R_c \rho^{s_e}_a(p_2)$ there exists $p_3\in\mathbf{Path}^{s_e}_{\rho^{s_e}_a(p_2)}$ such that $\rho^{s_e}_{\rho^{s_e}_a(p_2)}(p_3)=f^{-1}(f(\rho^{s_d}_a(p_1))+1)$ and $|p_3|\le\beth_n(|p_1|+|p_2|)$.
\end{lemma} \begin{lemma} \label{path_transform_3} Suppose a graph $G=(\mathbf{C},V,U)$ lies in the Caucal hierarchy, $G$ is a well-ordering for color $c\in \mathbf{C}$, $d\in \mathbf{C}$ gives a system of cofinal sequences for $R_c$ with the Bachmann property, $a\in V$, and $f$ is an embedding of $(V,R_c)$ onto an initial segment of the ordinals. Then there exists $n$ such that for every $p\in\mathbf{Path}^{s_d}_a$ there exist $v\in V^{\mathrm{lim}}\cup\{f^{-1}(0)\}$ and $k<\beth_n(|p|)$ such that $\rho^{s_d}_{a}(p)=f^{-1}(f(v)+k)$. \end{lemma} \begin{lemma}\label{coherent_dom} Suppose $(A,R)$ is a well-ordering, $s$ is a system of cofinal sequences with the Bachmann property, $a,b\in A$ with $b R a$, $p\in \mathbf{Path}^s_a$, and $\rho^s_a(p)=b$. Then for all $x\ge |p|$ we have $F^s_a(x)\ge F^s_b(x)$. \end{lemma} \begin{proof} We prove the lemma by transfinite induction on $a$. The case of a successor $a$ is obviously true if $a$ is the successor of $b$, and trivially follows from the induction hypothesis otherwise. If $a$ is a limit then $F^s_a(x)=F^s_{s(a,x)}(x)$. Clearly the first point of the path $p=(u_1,\ldots,u_k)$, i.e. $s(a,u_1-1)$, is step-down reachable from $s(a,x)$, and hence $F^s_{s(a,x)}(y)\ge F^s_{s(a,u_1-1)}(y)$ for every $y$. Here we use the fact that for every $c$ and every point $d$ step-down reachable from $c$ we have $F^s_{c}(y)\ge F^s_{d}(y)$, for every $y$; this fact can be easily proved by transfinite induction. \end{proof} \begin{theorem} \label{Fast_growing_equivalency_1}Suppose a graph $G=(\mathbf{C},V,U)$ lies in the Caucal hierarchy, $G$ is a well-ordering for color $c\in \mathbf{C}$, $d,e\in \mathbf{C}$ give systems of cofinal sequences for $R_c$, and $s_d,s_e$ have the Bachmann property. Then for all $a,b\in V$ with $a R_c b$ there exists $N$ such that for all $x>N$ we have $F^{s_e}_b(x)> F^{s_d}_{a}(x)$. \end{theorem} \begin{proof} We denote by $f$ the embedding of $(V,R_c)$ onto an initial segment of the ordinals. Let us fix any $a\in V$.
We take $n$ to be the maximum of the three values provided by Lemma \ref{path_transform_1}, Lemma \ref{path_transform_2}, and Lemma \ref{path_transform_3}. We prove by transfinite induction on $v_2$ with $v_2 R_c a$ that for all $v_1R_c v_2$, $p_1\in\mathbf{Path}^{s_d}_{a}$, and $p_2\in\mathbf{Path}^{s_e}_{a}$ with $\rho^{s_d}_{a}(p_1)=v_1$ and $\rho^{s_e}_{a}(p_2)=v_2$ we have $F^{s_e}_{v_2}(x)>F^{s_d}_{v_1}(x)$, for all $x\ge \beth_n(|p_1|+|p_2|)$. There exists a path $p_3\in\mathbf{Path}^{s_e}_{v_2}$ such that $\rho^{s_e}_{v_2}(p_3)=f^{-1}(f(v_1)+1)=v_3$ and $|p_3|<\beth_n(|p_1|+|p_2|)$ by Lemma \ref{path_transform_2}. Hence by Lemma \ref{coherent_dom} for all $y\ge \beth_n(|p_1|+|p_2|)$ we have $F^{s_e}_{v_2}(y)\ge F^{s_e}_{v_3}(y)$. If there are no limit points below $v_3$ then we are done, because $F^{s_e}_{v_3}(y)=F^{s_d}_{v_3}(y)> F^{s_d}_{v_1}(y)$, for every $y$. Further we assume that there are limit points below $v_1$. Assume that $v_1=f^{-1}(f(v_4)+k)$, where $v_4$ is a limit point; note that from Lemma \ref{path_transform_3} it follows that $k<\beth_n(|p_1|)$. We claim that $s_d(v_4,y)R_c s_e(v_4,\beth_n(\beth_n(y)))$ for all $y>\beth_n(\beth_n(|p_1|+|p_2|))$. Indeed, let us consider some $y>\beth_n(\beth_n(|p_1|+|p_2|))$. There is $q_1\in \mathbf{Path}^{s_d}_a$ with $\rho_a^{s_d}(q_1)=s_d(v_4,y)$ and $|q_1|<y+k+|p_1|\le y+\beth_n(|p_1|+|p_2|)$, and there is $q_2\in \mathbf{Path}^{s_e}_a$ with $\rho_a^{s_e}(q_2)=v_4$ and $|q_2|=k+|p_3|+1\le \beth_n(\beth_n(|p_1|+|p_2|))$. Therefore from Lemma \ref{path_transform_1} it follows that there exists $q_3\in\mathbf{Path}^{s_e}_{v_4}$ with $\rho_{v_4}^{s_e}(q_3)=s_d(v_4,y)$ and $|q_3|<\beth_n(|q_1|+|q_2|)<\beth_n(3y)<\beth_n(\beth_n(y))$. Let $m_1$ denote the first element of the sequence $q_3$. Clearly, $m_1<\beth_n(\beth_n(y))$ and either $s_d(v_4,y)R_c s_e(v_4,m_1-1)$ or $s_d(v_4,y)= s_e(v_4,m_1-1)$. Hence, because $s_e(v_4,m_1-1)R_c s_e(v_4,\beth_n(\beth_n(y)))$, we have $s_d(v_4,y)R_c s_e(v_4,\beth_n(\beth_n(y)))$.
Now we see that from the induction hypothesis it follows that $F^{s_d}_{v_4}(y)<F^{s_e}_{v_4}(\beth_n(\beth_n(y)))$, for all $y>\beth_n(\beth_n(|p_1|+|p_2|))$. A simple estimation shows that $F^{s_d}_{v_4}(y)<F^{s_e}_{f^{-1}(f(v_4)+1)}(y)$ for all $y>\beth_n(|p_1|+|p_2|)$. Hence $F^{s_d}_{f^{-1}(f(v_4)+i)}(y)<F^{s_e}_{f^{-1}(f(v_4)+i+1)}(y)$, for all $y>\beth_n(|p_1|+|p_2|)$ and natural $i$. Thus the induction hypothesis holds for $v_2$. The theorem clearly follows from the induction hypothesis. \end{proof} \section{The Case of Ordinals Less than $\omega^\omega$} Theorem \ref{Fast_growing_equivalency_1} gives a comparison of two monadically definable systems of cofinal sequences on one structure. In the general case we do not know how to use Theorem \ref{Fast_growing_equivalency_1} to compare the systems of ordinal notations that arise from the Caucal hierarchy with the standard system of ordinal notations for the same ordinal. But in the case of the levels of the fast-growing hierarchy below $\omega^{\omega}$ we can prove that ordinal notations from the Caucal hierarchy give the same growth rate as the standard Cantor system of ordinal notations. We give a definition of the standard system of cofinal sequences $s_{\mathrm{st}}\colon \omega^\omega\cap \mathit{Lim} \times \omega \to \omega^\omega$ for ordinals less than $\omega^\omega$: $$s_{\mathrm{st}}(\alpha+\omega^{m+1},n)=\alpha+\omega^{m}n.$$ Note that from Cantor's theorem on normal forms of ordinals it follows that the definition is correct. Note that the family of fast-growing functions $F^{s_{\mathrm{st}}}_{\alpha}(x)$, for $\alpha<\omega^{\omega}$, is an initial segment of the Löb-Wainer hierarchy \cite{LobWai70}. \begin{theorem}\label{Fast_growing_equivalency_2} Suppose a graph $G=(\mathbf{C},V,U)$ lies in the Caucal hierarchy, $G$ is a well-ordering for color $c\in \mathbf{C}$, $d\in \mathbf{C}$ gives a system of cofinal sequences for $R_c$, $s_d$ has the Bachmann property, and $f$ is an embedding of $(V,R_c)$ onto an initial segment of the ordinals.
Then for all $\beta<\alpha<\omega^\omega$ there exists $N$ such that for all $x>N$ we have $F^{s_d}_{f^{-1}(\alpha)}(x)> F^{s_{\mathrm{st}}}_{\beta}(x)$ and $F^{s_{\mathrm{st}}}_{\alpha}(x)>F^{s_d}_{f^{-1}(\beta)}(x)$. \end{theorem} \begin{proof} We claim that for every $\alpha_0<\omega^{\omega}$ the following relation $R_{\mathrm{st},\alpha_0}$ on $V$ is monadically definable: $$ v_1 R_{\mathrm{st},\alpha_0} v_2\stackrel{\text{def}}{\iff} \mbox{$f(v_1),f(v_2)<\alpha_0$ and $\exists n (s_{\mathrm{st}}(f(v_2),n)=f(v_1))$}.$$ The claim obviously follows from the fact that for every $n$ we can monadically define the set $A_n$ of all $v\in V$ such that $f(v)$ is of the form $\alpha+\omega^{n}$. We prove monadic definability of these sets by induction on $n$. The set $A_0$ is the set $$V\setminus (V^{\mathrm{lim}}\cup\{\inf_{R_c}V\}).$$ The set $A_{n+1}$ is the set of all limit points $v$ of $A_n$ such that $v$ is not a limit point of the set of all limit points of $A_n$. Using Theorem \ref{Fast_growing_equivalency_1} and the claim, we conclude that the theorem holds. \end{proof} \end{document}
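The standard system $s_{\mathrm{st}}$ and the associated fast-growing functions below $\omega^\omega$ can be made concrete in a short sketch. The successor and limit clauses used here ($F_0(x)=x+1$, $F_{\alpha+1}(x)=F_\alpha^{x+1}(x)$, $F_\lambda(x)=F_{\lambda[x]}(x)$) are the usual Löb--Wainer ones, which the paper cites but does not restate, so they are an assumption of this illustration:

```python
# Ordinals below w^w in Cantor normal form: [c0, c1, ..., ck] stands for
# c0 + c1*w + ... + ck*w^k.  Illustration only; the recursion clauses are
# the usual Loeb-Wainer ones, assumed here rather than quoted from the paper.

def fund(alpha, n):
    """s_st(beta + w^{m+1}, n) = beta + w^m * n, for a limit ordinal alpha."""
    a = list(alpha)
    m = next(i for i, c in enumerate(a) if c > 0)  # least nonzero position, m >= 1
    a[m] -= 1
    a[m - 1] = n
    return a

def F(alpha, x):
    a = list(alpha)
    while a and a[-1] == 0:     # normalize: drop leading zero coefficients
        a.pop()
    if not a:                   # alpha = 0:  F_0(x) = x + 1
        return x + 1
    if a[0] > 0:                # successor:  F_{b+1}(x) = F_b^{x+1}(x)
        b = list(a)
        b[0] -= 1
        for _ in range(x + 1):
            x = F(b, x)
        return x
    return F(fund(a, x), x)     # limit:      F_l(x) = F_{l[x]}(x), via s_st

print(F([1], 3))     # F_1(3) = 2*3 + 1 = 7
print(F([0, 1], 2))  # F_w(2) = F_2(2) = 23
```

Already $F_\omega$ grows faster than any fixed $F_m$, which is why only the asymptotic comparison "$F^{s_d}_{f^{-1}(\alpha)}(x)>F^{s_{\mathrm{st}}}_{\beta}(x)$ for large $x$" can be expected in Theorem \ref{Fast_growing_equivalency_2}.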
\begin{document} \title{{\huge {\bf Embedding Quantum Universes into Classical Ones}} \thanks{This paper has been completed during the visits of the first author at the University of Technology Vienna (1997) and of the third author at the University of Auckland (1997). The first author has been partially supported by AURC A18/XXXXX/62090/F3414056, 1996. The second author was supported by DFG Research Grant No. HE 2489/2-1.}} \author{{\large {\bf Cristian S.\ Calude},}\thanks{Computer Science Department, The University of Auckland, Private Bag 92019, Auckland, New Zealand, e-mail: [email protected].} \quad {\large {\bf Peter H.\ Hertling},}\thanks{Computer Science Department, The University of Auckland, Private Bag 92019, Auckland, New Zealand, e-mail: [email protected].} \quad {\large {\bf Karl Svozil}}\thanks{Institut f\"ur Theoretische Physik, University of Technology Vienna, Wiedner Hauptstra\ss e 8-10/136, A-1040 Vienna, Austria, e-mail: [email protected].}} \date{ } \maketitle \thispagestyle{empty} \begin{abstract} Do the partial order and ortholattice operations of a quantum logic correspond to the logical implication and connectives of classical logic? Re-phrased, how far might a classical understanding of quantum mechanics be, in principle, possible? A celebrated result by Kochen and Specker answers the above question in the negative. However, this answer is just one among different possible ones, not all negative. It is our aim to discuss the above question in terms of mappings of quantum worlds into classical ones, more specifically, in terms of embeddings of quantum logics into classical logics; depending upon the type of restrictions imposed on embeddings the question may get negative or positive answers. 
\end{abstract} \section{Introduction} Quantum mechanics is a very successful theory which appears to predict novel ``counterintuitive'' phenomena (see Wheeler \cite{wheeler}, Greenberger, Horne and Zeilinger \cite{green-horn-zei}) even almost a century after its development, cf.\ Schr{\"o}dinger \cite{schrodinger}, Jammer \cite{jammer:66,jammer1}. Yet, it can be safely stated that quantum theory is not understood (Feynman \cite{feynman-law}). Indeed, it appears that progress is fostered by abandoning long--held beliefs and concepts rather than by attempts to derive it from some classical basis, cf.\ Greenberger and YaSin \cite{greenberger2}, Herzog, Kwiat, Weinfurter and Zeilinger \cite{hkwz} and Bennett \cite{benn:94}. But just how far might a classical understanding of quantum mechanics be, in principle, possible? We shall attempt an answer to this question in terms of mappings of quantum worlds into classical ones, more specifically, in terms of embeddings of quantum logics into classical logics. One physical motivation for this approach is a result proven for the first time by Kochen and Specker \cite{kochen1} (cf. also Specker \cite{specker-60}, Zierler and Schlessinger \cite{ZirlSchl-65} and John Bell \cite{bell-66}; see reviews by Mermin \cite{mermin-93}, Svozil and Tkadlec \cite{svozil-tkadlec}, and a forthcoming monograph by Svozil \cite{svozil-ql}) stating the impossibility to ``complete'' quantum physics by introducing noncontextual hidden parameter models. Such a possible ``completion'' had been suggested, though in not very concrete terms, by Einstein, Podolsky and Rosen (EPR) \cite{epr}. These authors speculated that ``elements of physical reality'' exist irrespective of whether they are actually observed. 
Moreover, EPR conjectured, the quantum formalism can be ``completed'' or ``embedded'' into a larger theoretical framework which would reproduce the quantum theoretical results but would otherwise be classical and deterministic from an algebraic and logical point of view. A proper formalization of the term ``element of physical reality'' suggested by EPR can be given in terms of two-valued states or valuations, which can take on only one of the two values $0$ and $1$, and which are interpretable as the classical logical truth assignments {\it false} and {\it true}, respectively. Kochen and Specker's results \cite{kochen1} state that for quantum systems representable by Hilbert spaces of dimension higher than two, there does not exist any such valuation $s: L\rightarrow \{0,1\}$ defined on the set $L$ of closed linear subspaces of the space (these subspaces are interpretable as quantum mechanical propositions) preserving the lattice operations and the orthocomplement, even if one restricts the attention to lattice operations carried out among commuting (orthogonal) elements. As a consequence, the set of truth assignments on quantum logics is not separating and not unital. That is, there exist different quantum propositions which cannot be distinguished by any classical truth assignment. The Kochen and Specker result, as it is commonly argued, e.g.\ by Peres \cite{peres} and Mermin \cite{mermin-93}, is directed against the noncontextual hidden parameter program envisaged by EPR. Indeed, if one takes into account the entire Hilbert logic (of dimension larger than two) and if one considers all states thereon, any truth value assignment to quantum propositions prior to the actual measurement yields a contradiction. This can be proven by finitistic means, that is, with a finite number of one-dimensional closed linear subspaces (generating an infinite set whose intersection with the unit sphere is dense; cf.\ Havlicek and Svozil \cite{havlicek}). 
But, the Kochen--Specker argument continues, it is always possible to prove the existence of separable valuations or truth assignments for classical propositional systems identifiable with Boolean algebras. Hence, there does not exist any injective morphism from a quantum logic into some Boolean algebra. Since the previous reviews of the Kochen--Specker theorem by Peres \cite{peres-91,peres}, Redhead \cite{redhead}, Clifton \cite{clifton-93}, Mermin \cite{mermin-93}, Svozil and Tkadlec \cite{svozil-tkadlec}, concentrated on the nonexistence of classical noncontextual elements of physical reality, we are going to discuss here some options and aspects of embeddings in greater detail. Particular emphasis will be given to embeddings of quantum universes into classical ones which do not necessarily preserve (binary lattice) operations identifiable with the logical {\it or} and {\it and} operations. Stated pointedly, if one is willing to abandon the preservation of quite commonly used logical functions, then it is possible to give a classical meaning to quantum physical statements, thus giving rise to an ``understanding'' of quantum mechanics. Quantum logic, according to Birkhoff \cite{birkhoff-36}, Mackey \cite{ma-57}, Jauch \cite{jauch}, Kalmbach \cite{kalmbach-83}, Pulmannov{\'{a}} \cite{pulmannova-91}, identifies logical entities with Hilbert space entities. In particular, elementary propositions $p,q,\ldots$ are associated with closed linear subspaces of a Hilbert space through the origin (zero vector); the implication relation $\leq$ is associated with the set theoretical subset relation $\subseteq$, and the logical {\it or} $\vee$, {\it and} $\wedge$, and {\it not} $'$ operations are associated with the linear span $\oplus$ of subspaces, the set theoretic intersection $\cap$, and the orthogonal subspace $\perp$, respectively. 
The trivial logical statement $1$ which is always true is identified with the entire Hilbert space $H$, and its complement $\emptyset$ with the zero-dimensional subspace (zero vector). Two propositions $p$ and $q$ are orthogonal if and only if $p\leq q'$. Two propositions $p,q$ are co--measurable (commuting) if and only if there exist mutually orthogonal propositions $a,b,c$ such that $p=a\vee b$ and $q=a\vee c$. Clearly, orthogonality implies co--measurability, since if $p$ and $q$ are orthogonal, we may identify $a, b, c$ with $0,p,q$, respectively. The negation of $p\leq q$ is denoted by $p \not\leq q$. \section{Varieties of embeddings} One of the questions already raised in Specker's almost forgotten first article \cite{specker-60}\footnote{In German.} concerned an embedding of a quantum logical structure $L$ of propositions into a classical universe represented by a Boolean algebra $B$. Thereby, it is taken as a matter of principle that such an embedding should preserve as much logico--algebraic structure as possible. An embedding of this kind can be formalized as a mapping $\varphi :L\rightarrow B$ with the following properties.\footnote{Specker had a modified notion of embedding in mind; see below.} Let $p,q\in L$. \begin{description} \item[{\rm (i)}] {\em Injectivity}: two different quantum logical propositions are mapped into two different propositions of the Boolean algebra, i.e., if $p\neq q, $ then $ \varphi (p)\neq \varphi (q)$. \item[{\rm (ii)}] {\em Preservation of the order relation}: if $p\leq q$, then $\varphi (p) \leq \varphi (q)$. \item[{\rm (iii)}] {\em Preservation of ortholattice operations}, i.e.\ preservation of the \begin{description} \item[{\rm (ortho-)complement}:] $\varphi(p')=\varphi (p)'$, \item[{\it or} {\rm operation}:] $\varphi (p\vee q)=\varphi (p) \vee \varphi (q)$, \item[{\it and} {\rm operation}:] $\varphi (p\wedge q)=\varphi (p) \wedge \varphi (q)$. 
\end{description} \end{description} As it turns out, we cannot have an embedding from the quantum universe to the classical universe satisfying all three requirements (i)--(iii). In particular, a head-on approach requiring (iii) is doomed to failure, since the nonpreservation of ortholattice operations among nonco--measurable propositions is quite evident, given the nondistributive structure of quantum logics. \subsection{Injective lattice morphisms} Here we shall review the rather evident fact that there does not exist an injective lattice morphism from any nondistributive lattice into a Boolean algebra. We illustrate this obvious fact with an example that we need to refer to later on in this paper: the propositional structure encountered in the quantum mechanics of spin state measurements of a spin one-half particle along two different directions (mod~$\pi$), that is, the modular, orthocomplemented lattice $MO_2$ drawn in Figure \ref{f-hd-mo2} (where $p_-=(p_+)^\prime$ and $q_-=(q_+)^\prime$). \begin{figure} \caption{Hasse diagram of the lattice $MO_2$.} \label{f-hd-mo2} \end{figure} Clearly, $MO_2$ is a nondistributive lattice, since for instance, $$p_- \wedge (q_- \vee q_+)=p_- \wedge 1=p_-,$$ whereas $$(p_- \wedge q_-)\vee (p_- \wedge q_+)= 0\vee0=0.$$ Hence, $$p_- \wedge (q_- \vee q_+)\neq (p_- \wedge q_-)\vee (p_- \wedge q_+).$$ In fact, $MO_2$ is the smallest orthocomplemented nondistributive lattice. The requirement (iii) that the embedding $\varphi$ preserves all ortholattice operations (even for nonco--measurable and nonorthogonal propositions) would mean that $\varphi(p_-) \wedge (\varphi(q_-) \vee \varphi (q_+))\neq (\varphi(p_-) \wedge \varphi(q_-))\vee (\varphi(p_-) \wedge \varphi(q_+))$. That is, the argument implies that the distributive law is not satisfied in the range of $\varphi$. But since the range of $\varphi$ is a subset of a Boolean algebra and for any Boolean algebra the distributive law is satisfied, this yields a contradiction. 
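The failure of distributivity in $MO_2$ can be checked mechanically. The following sketch (our illustration, not part of the paper; the six element names are ad hoc) hard-codes the order of $MO_2$ and evaluates both sides of the distributive law from the example above:

```python
# The six elements of MO_2: bottom "0", top "1", and the four pairwise
# incomparable middle elements p+, p-, q+, q- (with p- = (p+)', q- = (q+)').
def leq(a, b):
    """Order relation of MO_2."""
    return a == "0" or b == "1" or a == b

def meet(a, b):
    if leq(a, b): return a
    if leq(b, a): return b
    return "0"   # distinct middle elements are incomparable, so they meet in 0

def join(a, b):
    if leq(a, b): return b
    if leq(b, a): return a
    return "1"   # ... and join in 1

lhs = meet("p-", join("q-", "q+"))               # p- ^ (q- v q+) = p- ^ 1 = p-
rhs = join(meet("p-", "q-"), meet("p-", "q+"))   # (p- ^ q-) v (p- ^ q+) = 0 v 0 = 0
print(lhs, rhs)                                  # -> p- 0: distributivity fails
```

The same tables reappear below when $MO_2$ is obtained as a Kalmbach embedding of ${\bf 2}^2$.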
Could we still hope for a reasonable kind of embedding of a quantum universe into a classical one by weakening our requirements, most notably (iii)? In the next three subsections we are going to give different answers to this question. In the first we restrict the set of propositions among which we wish to preserve the three operations {\em complement} $'$, {\em or} $\vee$, and {\em and} $\wedge$. We will see that the Kochen--Specker result gives a very strong negative answer even when the restriction is considerable. In the second we analyze what happens if we try to preserve not all operations but just the complement. Here we will obtain a positive answer. In the third we discuss a different embedding which preserves the order relation but no ortholattice operation. \subsection{Injective order morphisms preserving ortholattice operations among orthogonal propositions} Let us follow Zierler and Schlessinger \cite{ZirlSchl-65} and Kochen and Specker \cite{kochen1} and weaken (iii) by requiring that the ortholattice operations need only to be preserved {\em among orthogonal} propositions. As shown by Kochen and Specker \cite{kochen1}, this is equivalent to the requirement of separability by the set of valuations or two-valued probability measures or truth assignments on $L$. As a matter of fact, Kochen and Specker \cite{kochen1} proved nonseparability, but also much more---the {\em nonexistence} of valuations on Hilbert lattices associated with Hilbert spaces of dimension at least three. For related arguments and conjectures, based upon a theorem by Gleason \cite{Gleason}, see Zierler and Schlessinger \cite{ZirlSchl-65} and John Bell \cite{bell-66}. 
Rather than rephrasing the Kochen and Specker argument \cite{kochen1} concerning nonexistence of valuations in three-dimensional Hilbert logics in its original form or in terms of fewer subspaces (cf.\ Peres \cite{peres}, Mermin \cite{mermin-93}), or of Greechie diagrams, which represent orthogonality very nicely (cf.\ Svozil and Tkadlec \cite{svozil-tkadlec}, Svozil \cite{svozil-ql}), we shall give two geometric arguments which are derived from proof methods for Gleason's theorem (see Piron \cite{piron-76}, Cooke, Keane, and Moran \cite{c-k-m}, and Kalmbach \cite{kalmbach-86}). Let $L$ be the lattice of closed linear subspaces of the three-dimensional real Hilbert space ${\Bbb R}^3$. A {\em two-valued probability measure} or {\em valuation} on $L$ is a map $v:L\to\{0,1\}$ which maps the zero-dimensional subspace containing only the origin $(0,0,0)$ to $0$, the full space ${\Bbb R}^3$ to $1$, and which is additive on orthogonal subspaces. This means that for two orthogonal subspaces $s_1, s_2 \in L$ the sum of the values $v(s_1)$ and $v(s_2)$ is equal to the value of the linear span of $s_1$ and $s_2$. Hence, if $s_1, s_2, s_3 \in L$ are a tripod of pairwise orthogonal one-dimensional subspaces, then \[ v(s_1) + v(s_2) + v(s_3) = v({\Bbb R}^3) = 1. \] The valuation $v$ must map one of these subspaces to $1$ and the other two to $0$. We will show that there is {\it no} such map. In fact, we show that there is no map $v$ which is defined on all one-dimensional subspaces of ${\Bbb R}^3$ and maps {\em exactly one subspace out of each tripod of pairwise orthogonal one-dimensional subspaces to $1$ and the other two to $0$}. In the following two geometric proofs we often identify a given one-dimensional subspace of ${\Bbb R}^3$ with one of its two intersection points with the unit sphere \[ S^2 = \{x \in {\Bbb R}^3 \ | \ ||x||=1\}\,. 
\] In the statements ``a point (on the unit sphere) has value $0$ (or value $1$)'' or that ``two points (on the unit sphere) are orthogonal'' we always mean the corresponding one-dimensional subspaces. Note also that the intersection of a two-dimensional subspace with the unit sphere is a great circle. To start the first proof, let us assume that a function $v$ satisfying the above condition exists. Let us consider an arbitrary tripod of orthogonal points and let us fix the point with value $1$. By a rotation we can assume that it is the north pole with the coordinates $(0,0,1)$. Then, by the condition above, all points on the equator $\{(x,y,z) \in S^2\ | \ z=0\}$ must have value $0$ since they are orthogonal to the north pole. Let $q=(q_x,q_y,q_z)$ be a point in the northern hemisphere, but not equal to the north pole, that is $0< q_z < 1$. Let $C(q)$ be the unique great circle which contains $q$ and the points $\pm(q_y,-q_x,0)/\sqrt{q_x^2+q_y^2}$ in the equator, which are orthogonal to $q$. Obviously, $q$ is the northern-most point on $C(q)$. To see this, rotate the sphere around the $z$-axis so that $q$ comes to lie in the $\{y=0\}$-plane; see Figure \ref{figure:greatcircle}. Then the two points in the equator orthogonal to $q$ are just the points $\pm(0,1,0)$, and $C(q)$ is the intersection of the plane through $q$ and $(0,1,0)$ with the unit sphere, hence \[C(q) = \{p \in {\Bbb R}^3 \ | \ (\exists \ \alpha,\beta \in {\Bbb R}) \ \alpha^2 + \beta^2 =1 \ \mbox{\rm and } p=\alpha q + \beta (0,1,0) \}\,.\] This shows that $q$ has the largest $z$-coordinate among all points in $C(q)$. \begin{figure} \caption{The great circle $C(q)$.} \label{figure:greatcircle} \end{figure} Assume that $q$ has value $0$. We claim that then all points on $C(q)$ must have value $0$. 
Indeed, since $q$ has value $0$ and the orthogonal point $(q_y,-q_x,0)/\sqrt{q_x^2+q_y^2}$ on the equator also has value $0$, the one-dimensional subspace orthogonal to both of them must have value $1$. But this subspace is orthogonal to all points on $C(q)$. Hence all points on $C(q)$ must have value $0$. Now we can apply the same argument to any point $\tilde{q}$ on $C(q)$ (by the last consideration $\tilde{q}$ must have value $0$) and derive that all points on $C(\tilde{q})$ have value $0$. The great circle $C(q)$ divides the northern hemisphere into two regions, one containing the north pole, the other consisting of the points below $C(q)$ or ``lying between $C(q)$ and the equator'', see Figure \ref{figure:greatcircle}. The circles $C(\tilde{q})$ with $\tilde{q} \in C(q)$ certainly cover the region between $C(q)$ and the equator.\footnote{This will be shown formally in the proof of the geometric lemma below.} Hence any point in this region must have value $0$. But the circles $C(\tilde{q})$ cover also a part of the other region. In fact, we can iterate this process. We say that a point $p$ in the northern hemisphere {\em can be reached} from a point $q$ in the northern hemisphere, if there is a finite sequence of points $q=q_0, q_1, \ldots, q_{n-1}, q_n=p$ in the northern hemisphere such that $q_i\in C(q_{i-1})$ for $i=1,\ldots,n$. Our analysis above shows that if $q$ has value $0$ and $p$ can be reached from $q$, then also $p$ has value $0$. The following geometric lemma due to Piron \cite{piron-76} (see also Cooke, Keane, and Moran \cite{c-k-m} or Kalmbach \cite{kalmbach-86}) is a consequence of the fact that the curve $C(q)$ is tangent to the horizontal plane through the point $q$: \begin{quote} {\it If $q$ and $p$ are points in the northern hemisphere with $p_z < q_z$, then $p$ can be reached from $q$.} \end{quote} This result will be proved in Appendix A. 
We conclude that, if a point $q$ in the northern hemisphere has value $0$, then every point $p$ in the northern hemisphere with $p_z < q_z$ must have value $0$ as well. Consider the tripod $(1,0,0), (0,{1 \over \sqrt{2}},{1 \over \sqrt{2}}), (0,-{1 \over \sqrt{2}},{1 \over \sqrt{2}})$. Since $(1,0,0)$ (on the equator) has value $0$, one of the two other points has value $0$ and one has value $1$. By the geometric lemma and our above considerations this implies that all points $p$ in the northern hemisphere with $p_z<{ 1\over\sqrt{2}}$ must have value $0$ and all points $p$ with $p_z>{1\over\sqrt{2}}$ must have value $1$. But now we can choose any point $p^\prime$ with ${1\over\sqrt{2}} < p^\prime_z < 1$ as our new north pole and deduce that the valuation must have the same form with respect to this pole. This is clearly impossible. Hence, we have proved our assertion that there is no mapping on the set of all one-dimensional subspaces of ${\Bbb R}^3$ which maps one space out of each tripod of pairwise orthogonal one-dimensional subspaces to $1$ and the other two to $0$. In the following we give a second topological and geometric proof for this fact. In this proof we shall not use the geometric lemma above. Fix an arbitrary point on the unit sphere with value $0$. The great circle consisting of points orthogonal to this point splits into two disjoint sets, the set of points with value $1$, and the set of points orthogonal to these points. They have value $0$. If one of these two sets were open, then the other had to be open as well. But this is impossible since the circle is connected and cannot be the union of two disjoint open sets. Hence the circle must contain a point $p$ with value $1$ and a sequence of points $q(n)$, $n=1,2,\ldots$ with value $0$ converging to $p$. By a rotation we can assume that $p$ is the north pole and the circle lies in the $\{y=0\}$-plane. 
Furthermore we can assume that all points $q(n)$ have the same sign in the $x$-coordinate. Otherwise, choose an infinite subsequence of the sequence $q(n)$ with this property. In fact, by a rotation we can assume that all points $q(n)$ have positive $x$-coordinate (i.e.\ all points $q(n)$, $n=1,2,\ldots$ lie as the point $q$ in Figure \ref{figure:greatcircle} and approach the north pole as $n$ tends to infinity). All points on the equator have value $0$. By the first step in the proof of the geometric lemma in the appendix, all points in the northern hemisphere which lie between $C(q(n))$ (the great circle through $q(n)$ and $\pm(0,1,0)$) and the equator can be reached from $q(n)$. Hence, as we have seen in the first proof, $v(q(n))=0$ implies that all these points must have value $0$. Since $q(n)$ approaches the north pole, the union of the regions between $C(q(n))$ and the equator is equal to the open right half $\{q \in S^2\ | \ q_z>0, q_x>0\}$ of the northern hemisphere. Hence all points in this set have value $0$. Let $q$ be a point in the left half $\{q \in S^2\ | \ q_z>0, q_x<0\}$ of the northern hemisphere. It forms a tripod together with the point $(q_y,-q_x,0)/\sqrt{q_x^2+q_y^2}$ in the equator and the point $(-q_x,-q_y,{q_x^2+q_y^2 \over q_z}) / ||(-q_x,-q_y,{q_x^2+q_y^2 \over q_z})||$ in the right half. Since these two points have value $0$, the point $q$ must have value $1$. Hence all points in the left half of the northern hemisphere must have value $1$. But this leads to a contradiction because there are tripods with two points in the left half, for example the tripod $(-{1 \over 2},{1 \over \sqrt{2}},{1 \over 2})$, $(-{1 \over 2},-{1 \over \sqrt{2}},{1 \over 2})$, $({1 \over \sqrt{2}},0,{1 \over \sqrt{2}})$. 
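The specific tripods used in the two proofs can be checked numerically. The sketch below (an illustration only, not part of the argument) verifies that the tripod $(1,0,0),(0,{1\over\sqrt2},{1\over\sqrt2}),(0,-{1\over\sqrt2},{1\over\sqrt2})$ and the tripod with two points in the left half are both orthonormal:

```python
from math import sqrt, isclose

def dot(u, v):
    """Standard inner product on R^3."""
    return sum(x * y for x, y in zip(u, v))

s = 1 / sqrt(2)
# Tripod from the first proof ...
tripod1 = [(1, 0, 0), (0, s, s), (0, -s, s)]
# ... and the tripod with two points in the left half of the northern hemisphere
tripod2 = [(-0.5, s, 0.5), (-0.5, -s, 0.5), (s, 0, s)]

for tripod in (tripod1, tripod2):
    for i in range(3):
        assert isclose(dot(tripod[i], tripod[i]), 1.0)                 # unit vectors
        for j in range(i + 1, 3):
            assert isclose(dot(tripod[i], tripod[j]), 0.0, abs_tol=1e-12)  # orthogonal
print("both tripods are orthonormal")
```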
This ends the second proof for the fact that there is no two-valued probability measure on the lattice of subspaces of the three-dimensional Euclidean space which preserves the ortholattice operations at least for orthogonal elements. \subsection{Injective morphisms preserving order as well as {\it or} and {\it and} operations} \label{section:2.3a} We have seen that we cannot hope to preserve the ortholattice operations, not even when we restrict ourselves to operations among orthogonal propositions. An even stronger weakening of condition (iii) would be to require preservation of ortholattice operations merely among the center $C$, i.e., among those propositions which are co--measurable (commuting) with all other propositions. It is not difficult to prove that in the case of complete Hilbert lattices (and not mere subalgebras thereof), the center consists of just the smallest and the greatest element, $C=\{0,1\}$, and thus is isomorphic to the two-element Boolean algebra ${\bf 2}=\{0,1\}$. As it turns out, the requirement is trivially fulfilled and its implications are quite trivial as well. Another weakening of (iii) is to restrict oneself to particular physical states and study the embeddability of quantum logics under these constraints; see Bell, Clifton \cite{bell-clifton}. In the following sections we analyze a completely different option: Is it possible to embed quantum logic into a Boolean algebra when one does not demand preservation of all ortholattice operations? One method of embedding an arbitrary partially ordered set into a concrete orthomodular lattice which in turn can be embedded into a Boolean algebra has been used by Kalmbach \cite{kalmbach-77} and extended by Harding \cite{harding-91} and Mayet and Navara \cite{navara-95}. In these {\it Kalmbach embeddings}, as they may be called, the meets and joins are preserved but not the complement. 
The Kalmbach embedding of some bounded lattice $L$ into a concrete orthomodular lattice $K(L)$ may be thought of as the pasting of Boolean algebras corresponding to all maximal chains of $L$ \cite{harding-priv}. First, let us consider linear chains $0=a_0\rightarrow a_1\rightarrow a_2\rightarrow \cdots \rightarrow 1=a_m$. Such chains generate Boolean algebras ${\bf 2}^{m}$ in the following way: from the first nonzero element $a_1$ on to the greatest element $1$, form $A_n=a_n\wedge (a_{n-1})'$, where $(a_{n-1})'$ is the complement of $a_{n-1}$ relative to $1$; i.e., $(a_{n-1})'=1-a_{n-1}$. $A_n$ is then an atom of the Boolean algebra generated by the bounded chain $0=a_0\rightarrow a_1\rightarrow a_2\rightarrow \cdots \rightarrow 1$. Take, for example, a three-element chain $0= a_0\rightarrow \{a\}\equiv a_1\rightarrow \{a,b\}\equiv 1=a_2$ as depicted in Figure \ref{f-thech}a). In this case, \begin{eqnarray*} A_1&=&a_1\wedge (a_0)'=a_1\wedge 1\equiv \{a\}\wedge \{a,b\}=\{a\},\\ A_2&=&a_2\wedge (a_1)'=1\wedge (a_1)'\equiv \{a,b\}\wedge \{b\}=\{b\}. \end{eqnarray*} This construction results in a four-element Boolean Kalmbach lattice $K(L)={\bf 2}^2$ with the two atoms $\{a\}$ and $\{b\}$ given in Figure \ref{f-thech}b). \begin{figure} \caption{Examples of the Kalmbach embedding construction.} \label{f-thech} \end{figure} Take, as a second example, a four-element chain $0= a_0\rightarrow \{a\}\equiv a_1\rightarrow \{a,b\} \rightarrow \{a,b,c\}\equiv 1=a_3$ as depicted in Figure \ref{f-thech}c). In this case, \begin{eqnarray*} A_1&=&a_1\wedge (a_0)'=a_1\wedge 1\equiv \{a\}\wedge \{a,b,c\}=\{a\},\\ A_2&=&a_2\wedge (a_1)'\equiv \{a,b\}\wedge \{b,c\}=\{b\},\\ A_3&=&a_3\wedge (a_2)'=1\wedge (a_2)'\equiv \{a,b,c\}\wedge \{c\}=\{c\}. \end{eqnarray*} This construction results in an eight-element Boolean Kalmbach lattice $K(L)={\bf 2}^3$ with the three atoms $\{a\}$, $\{b\}$ and $\{c\}$ depicted in Figure \ref{f-thech}d). 
To apply Kalmbach's construction to any bounded lattice, all Boolean algebras generated by the maximal chains of the lattice are pasted together. An element common to two or more maximal chains must be common to the blocks they generate. Take, as a third example, the Boolean lattice ${\bf 2}^2$ drawn in Figure \ref{f-thech}e). ${\bf 2}^2$ contains two linear chains of length three which are pasted together horizontally at their smallest and biggest elements. The resulting Kalmbach lattice $K({\bf 2}^2)=MO_2$ is of the ``Chinese lantern'' type, see Figure \ref{f-thech}f). Take, as a fourth example, the pentagon drawn in Figure \ref{f-thech}g). It contains two linear chains: one is of length three, the other is of length four. The resulting Boolean algebras ${\bf 2}^2$ and ${\bf 2}^3$ are again horizontally pasted together at their extremities $0,1$. The resulting Kalmbach lattice is given in Figure \ref{f-thech}h). In the fifth example drawn in Figure \ref{f-thech}i), the lattice has two maximal chains which share a common element. This element is common to the two Boolean algebras, hence central in $K(L)$. The construction of the five atoms proceeds as follows: \begin{eqnarray*} A_1&=& \{a\}\wedge \{a,b,c,d\}=\{a\},\\ A_2&=& \{a,b,c\}\wedge \{b,c,d\}=\{b,c\},\\ A_3&=&B_3= \{a,b,c,d\}\wedge \{d\}=\{d\},\\ B_1&=& \{b\}\wedge \{a,b,c,d\}=\{b\},\\ B_2&=& \{a,b,c\}\wedge \{a,c,d\}=\{a,c\}, \end{eqnarray*} where the two sets of atoms $\{A_1,A_2,A_3=B_3\}$ and $\{B_1,B_2,B_3=A_3\}$ span two Boolean algebras ${\bf 2}^3$ pasted together at the extremities and at $A_3=B_3$ and $A_3'=B_3'$. The resulting lattice is ${\bf 2}\times MO_2=L_{12}$ depicted in Figure \ref{f-thech}j). 
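For chains of sets, the atoms $A_n=a_n\wedge(a_{n-1})'$ are simply set differences, since the relative complement of $a_{n-1}$ in $1$ intersected with $a_n$ removes exactly the elements of $a_{n-1}$. A minimal sketch (ours, not from the paper) reproducing the chain examples above:

```python
def chain_atoms(chain):
    """Atoms A_n = a_n ^ (a_{n-1})' of the Boolean algebra generated by a
    bounded chain of sets a_0 = {} c a_1 c ... c a_m = 1; for sets, the
    relative complement turns the meet into a plain set difference."""
    return [chain[n] - chain[n - 1] for n in range(1, len(chain))]

# The three-element chain 0, {a}, {a,b} ...
print(chain_atoms([set(), {"a"}, {"a", "b"}]))
# ... and the four-element chain 0, {a}, {a,b}, {a,b,c}
print(chain_atoms([set(), {"a"}, {"a", "b"}, {"a", "b", "c"}]))
```

The first call returns the two atoms $\{a\},\{b\}$ of ${\bf 2}^2$, the second the three atoms $\{a\},\{b\},\{c\}$ of ${\bf 2}^3$, matching the computations displayed above.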
\subsection{Injective morphisms preserving order and complementation} \label{section:2.3} In the following, we shall show that {\em any orthoposet can be embedded into a Boolean algebra} where in this case by an {\em embedding} we understand an {\em injective mapping preserving the order relation and the orthocomplementation}. A slightly stronger version of this fact using more topological notions has already been shown by Katrno\v{s}ka \cite{katrnoska-82}. Zierler and Schlessinger constructed embeddings with more properties for orthomodular orthoposets \cite[Theorem 2.1]{ZirlSchl-65} and mentioned another slightly stronger version of the result above without explicit proof \cite[Section 2, Remark 2]{ZirlSchl-65}. For the sake of completeness we give the precise definition of an orthoposet. An {\em orthoposet} (or {\em orthocomplemented poset}) $(L,\leq,0,1,')$ is a set $L$ which is endowed with a partial ordering $\leq$ (i.e.\ a subset $\leq$ of $L \times L$ satisfying (1) $p \leq p$, (2) if $p \leq q$ and $q \leq r$, then $p \leq r$, (3) if $p \leq q$ and $q \leq p$, then $p=q$, for all $p,q,r \in L$). Furthermore, $L$ contains distinguished elements $0$ and $1$ satisfying $0 \leq p$ and $p \leq 1$, for all $p \in L$. Finally, $L$ is endowed with a function $'$ ({\em orthocomplementation}) from $L$ to $L$ satisfying the conditions (1) $p''=p$, (2) if $p \leq q$, then $q' \leq p'$, (3) the least upper bound of $p$ and $p'$ exists and is $1$, for all $p,q \in L$. Note that these conditions imply $0'=1$, $1'=0$, and that the greatest lower bound of $p$ and $p'$ exists and is $0$, for all $p \in L$. For example, an arbitrary sublattice of the lattice of all closed linear subspaces of a Hilbert space is an orthoposet, if it contains the subspace $\{0\}$ and the full Hilbert space and is closed under the orthogonal complement operation. 
Namely, the subspace $\{0\}$ is the $0$ in the orthoposet, the full Hilbert space is the $1$, the set-theoretic inclusion is the ordering $\leq$, and the orthogonal complement operation is the orthocomplementation $'$. In the rest of this section we always assume that $L$ is an arbitrary orthoposet. We shall construct a Boolean algebra $B$ and an injective mapping $\varphi:L\rightarrow B$ which preserves the order relation and the orthocomplementation. The construction goes essentially along the same lines as the construction of Zierler and Schlessinger \cite{ZirlSchl-65} and Katrno\v{s}ka \cite{katrnoska-82} and is similar to the proof of the Stone representation theorem for Boolean algebras, cf.\ Stone \cite{stone}. It is interesting to note that for a finite orthoposet the constructed Boolean algebra will be finite as well. We call a nonempty subset $K$ of $L$ an {\em ideal} if for all $p,q\in L$: \begin{enumerate} \item if $p\in K$, then $p'\not \in K$, \item if $p\le q$ and $q\in K$, then $p\in K $. \end{enumerate} Clearly, if $K$ is an ideal, then $0\in K$. An ideal $I$ is {\em maximal} provided that if $K$ is an ideal and $I\subseteq K$, then $K=I$. Let ${\cal I}$ be the set of all maximal ideals in $L$, and let $B$ be the power set of ${\cal I}$ considered as a Boolean algebra, i.e.\ $B$ is the Boolean algebra which consists of all subsets of ${\cal I}$. The order relation in $B$ is the set-theoretic inclusion, the ortholattice operations {\em complement}, {\em or}, and {\em and} are given by the set-theoretic complement, union, and intersection, and the elements $0$ and $1$ of the Boolean algebra are just the empty set and the full set ${\cal I}$. Consider the map \[ \varphi:L \to B \] which maps each element $p \in L$ to the set \[ \varphi(p) = \{ I \in {\cal I} \ | \ p \not\in I \} \] of all maximal ideals which do not contain $p$. 
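For a finite orthoposet the whole construction can be carried out by brute force. The sketch below (our illustration; the element names and helper functions are ad hoc) enumerates the ideals and maximal ideals of $MO_2$, builds $\varphi$, and checks that it is injective, order-preserving, and complement-preserving:

```python
from itertools import combinations

# MO_2 with its orthocomplementation; element names are ad hoc.
ELEMS = ["0", "p+", "p-", "q+", "q-", "1"]
COMP = {"0": "1", "1": "0", "p+": "p-", "p-": "p+", "q+": "q-", "q-": "q+"}

def leq(a, b):
    return a == "0" or b == "1" or a == b

def is_ideal(K):
    return ("0" in K
            and all(COMP[p] not in K for p in K)                      # condition 1
            and all(p in K for p in ELEMS for q in K if leq(p, q)))   # condition 2

ideals = [set(K) for r in range(1, len(ELEMS) + 1)
          for K in combinations(ELEMS, r) if is_ideal(set(K))]
maximal = [I for I in ideals if not any(I < J for J in ideals)]

def phi(p):
    """phi(p) = the set of maximal ideals not containing p (coded by indices)."""
    return frozenset(i for i, I in enumerate(maximal) if p not in I)

assert len({phi(p) for p in ELEMS}) == len(ELEMS)                      # injective
assert all(phi(a) <= phi(b) for a in ELEMS for b in ELEMS if leq(a, b))  # order
full = frozenset(range(len(maximal)))
assert all(phi(COMP[p]) == full - phi(p) for p in ELEMS)               # complement
print(len(maximal), "maximal ideals")   # -> 4 maximal ideals
```

Each of the four maximal ideals picks exactly one element out of each pair $\{p_+,p_-\}$, $\{q_+,q_-\}$, which illustrates the characterization of maximal ideals proved below.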
We claim that the map $\varphi$ \begin{enumerate} \item[(i)] is injective, \item[(ii)] preserves the order relation, \item[(iii)] preserves complementation. \end{enumerate} This provides an embedding of quantum logic into classical logic which preserves the implication relation and the negation.\footnote{Note that for a finite orthoposet $L$ the Boolean algebra $B$ is finite as well. Indeed, if $L$ is finite, then it has only finitely many subsets, in particular only finitely many maximal ideals. Hence ${\cal I}$ is finite, and thus also its power set $B$ is finite.} The rest of this section consists of the proof of the three claims above. Let us start with claim (ii). Assume that $p,q \in L$ satisfy $p \leq q$. We have to show the inclusion \[ \varphi(p) \subseteq \varphi(q) \,.\] Take a maximal ideal $I \in \varphi(p)$. Then $p \not\in I$. If $q$ were contained in $I$, then by condition 2. in the definition of an ideal also $p$ would have to be contained in $I$. Hence $q \not\in I$, thus proving that $I \in \varphi(q)$. Before we come to claims (iii) and (i) we give another characterization of maximal ideals. We start with the following assertion which will also be needed later: \begin{equation} \label{auxideal} \begin{array}{l} \mbox{\rm If $I$ is an ideal and $r \in L$ with $r \not\in I$ and $r^\prime\not\in I$,} \\ \mbox{\rm then also the set $J = I \cup \{s \in L \mid s \leq r\}$ is an ideal.} \end{array} \end{equation} Here is the proof: It is clear that $J$ satisfies condition 2. in the definition of an ideal. To show that it satisfies condition 1. assume, to the contrary, that $s\in J$ and $s'\in J$ for some $s\in L$. Then one of the following conditions must be true: (I) $s,s'\in I$, (II) $s\le r$ and $s'\le r$, (III) $s\in I$ and $s'\le r$, (IV) $s\le r$, $s'\in I$. The first case is impossible since $I$ is an ideal.
The second case is ruled out by the fact that $r\neq 1$: if $s\le r$ and $s'\le r$, then $r$ is an upper bound of $s$ and $s'$, hence $1={\rm lub}(s,s')\le r$, i.e.\ $r=1$; but $r=1$ would imply $r'=0$, which would contradict our assumption $r'\not\in I$. The third case is impossible since $s'\le r$ implies $r'\le s$ which, combined with $s\in I$, would imply $r'\in I$, contrary to our assumption. Finally the fourth case is nothing but a reformulation of the third case with $s$ and $s'$ interchanged. Thus we have proved that $J$ is an ideal and have proved the assertion (\ref{auxideal}). Next, we prove the following new characterization of maximal ideals: \begin{equation} \label{maxideal} \mbox{\rm An ideal $I$ is a maximal ideal iff $r \not\in I$ implies $r^\prime \in I$.} \end{equation} To prove this, first assume that for all $r\in L$, if $r\not \in I$, then $r'\in I$, and suppose $I$ is a {\em proper} subset of an ideal $K$. Then there exists $p\in K$ such that $p\not \in I$. By our hypothesis (for all $r\in L$, $r\not \in I$ implies $r'\in I$), we have $p^\prime\in I$. Thus both $p\in K$ and $p^\prime\in K$. This contradicts the fact that $K$ is an ideal. Conversely, suppose that $I$ is a maximal ideal in $L$ and suppose, to the contrary, that for some $r\in L,$ \begin{equation}\label{one} r\not\in I\ {\rm and}\ r'\not\in I\,. \end{equation} Of course $r\neq 1$, since $1'=0\ {\rm and}\ 0\in I$. Let \begin{equation}\label{two} J=I\cup(r) \end{equation} where $(r)=\{s\in L \ |\ s\le r\}$ is the principal ideal of $r$ (note that $(r)$ is indeed an ideal). Then, under assumption (\ref{one}), using (\ref{auxideal}) above, we have that $J$ is an ideal which properly contains $I$. This contradicts the maximality of $I$ and ends the proof of the assertion (\ref{maxideal}). For claim (iii) we have to show the relation: \[ \varphi(p') = {\cal I} \setminus \varphi(p)\,, \] for all $p \in L$. This can be restated as \[ I \in \varphi(p') \; {\rm iff} \; I \not\in \varphi(p) \] for all $I \in {\cal I}$.
But this means $p' \not\in I \; {\rm iff} \; p \in I$, which follows directly from condition 1. in the definition of an ideal and from assertion (\ref{maxideal}). We proceed to claim (i), which states that $\varphi$ is injective, i.e., if $p \neq q$, then $\varphi(p)\neq \varphi(q)$. But $p\neq q$ is equivalent to $p\not\le q\ {\it or}\ q\not\le p$. Furthermore, if we can show that there is a maximal ideal $I$ such that $q\in I$ and $p\not\in I$, then it follows easily that $\varphi(p)\neq\varphi(q)$. Indeed, $p\not\in I$ means $I\in \varphi(p)$ and $q\in I$ means $I\not\in\varphi(q)$. It is therefore enough to prove that: \begin{quote} If $p\not\le q$, then there exists a maximal ideal $I$ such that $q\in I$ and $p\not\in I$. \end{quote} To prove this we note that since $p\not\le q$, we have $p\neq 0$. Let \[ {\cal I}_{p,q}=\{K\subseteq L \mid K\ {\rm is\ an\ ideal\ and}\ p\not\in K\ {\rm and}\ q\in K\}.\] We have to show that among the elements of ${\cal I}_{p,q}$ there is a maximal ideal. To this end we will use Zorn's Lemma. In order to apply it to ${\cal I}_{p,q}$ we have to show that ${\cal I}_{p,q}$ is not empty and that every chain in ${\cal I}_{p,q}$ has an upper bound. The set ${\cal I}_{p,q}$ is not empty since $(q) \in {\cal I}_{p,q}$. Now we are going to show that every chain in ${\cal I}_{p,q}$ has an upper bound. This means that, given a subset ({\em chain}) ${\cal C}$ of ${\cal I}_{p,q}$ with the property \[ {\rm for \mbox{ } all} \; J,K \in {\cal C} \; {\rm one \mbox{ } has} \; J \subseteq K \ \mbox{\rm or } K \subseteq J \,, \] we have to show that there is an element ({\em upper bound}) $U \in {\cal I}_{p,q}$ with $K \subseteq U$ for all $K \in {\cal C}$. The union \[ U_{\cal C} = \bigcup_{K \in {\cal C}} K \] of all ideals $K \in {\cal C}$ is the required upper bound. It is clear that all $K \in {\cal C}$ are subsets of $U_{\cal C}$. We have to show that $U_{\cal C}$ is an element of ${\cal I}_{p,q}$.
Since $p \not\in K$ for all $K \in {\cal C}$ we also have $p \not\in U_{\cal C}$. Similarly, since $q \in K$ for some (even all) $K \in {\cal C}$, we have $q \in U_{\cal C}$. We still have to show that $U_{\cal C}$ is an ideal. Given two propositions $r,s$ with $r \leq s$ and $s \in U_{\cal C}$ we conclude that $s$ must be contained in one of the ideals $K \in {\cal C}$. Hence also $r \in K \subseteq U_{\cal C}$. Now assume $r \in U_{\cal C}$. Is it possible that the complement $r'$ belongs to $U_{\cal C}$? The answer is negative, since otherwise $r \in J$ and $r' \in K$, for some ideals $J,K \in {\cal C}$. But since ${\cal C}$ is a chain we have $J \subseteq K$ or $K \subseteq J$, hence $r, r' \in K$ in the first case and $r, r' \in J$ in the second case. Both cases contradict the fact that $J$ and $K$ are ideals. Hence, $U_{\cal C}$ is an ideal and thus an element of ${\cal I}_{p,q}$. We have proved that ${\cal I}_{p,q}$ is not empty and that each chain in ${\cal I}_{p,q}$ has an upper bound in ${\cal I}_{p,q}$. Consequently, we can apply Zorn's Lemma to ${\cal I}_{p,q}$ and obtain a maximal element $I$ in the ordered set ${\cal I}_{p,q}$. Thus \begin{equation}\label{four} p\not\in I\ {\rm and}\ q\in I. \end{equation} It remains to show that $I$ is a maximal ideal in $L$. Thus suppose, to the contrary, that $I$ is {\em not\/} a maximal ideal in $L$. By (\ref{maxideal}) there exists $r\in L$ such that both $r\not\in I$ and $r'\not\in I$. Furthermore, since $p\neq 0$, either $p\not\le r$ or $p\not\le r'$. Without loss of generality suppose \begin{equation}\label{five} p\not\le r. \end{equation} It follows, by (\ref{auxideal}), and since $r\not\in I$ and $r'\not\in I$, that $I\cup(r)$ is an ideal properly containing $I$. But since, by Conditions (\ref{four}) and (\ref{five}), $q\in I$ and $p\not\le r$, we have \begin{center} $p\not\in I\cup(r)$ and $q\in I\cup(r)$.
\end{center} Thus $I\cup(r)\in {\cal I}_{p,q}$ and, since $r\not\in I$, we deduce that $I\cup(r)$ properly contains $I$, contradicting the fact that $I$ is a {\em maximal element} in ${\cal I}_{p,q}$. This ends the proof of claim (i), the claim that the map $\varphi$ is injective. We have shown: \begin{quote} {\em Any orthoposet can be embedded into a Boolean algebra where the embedding preserves the order relation and the complementation.} \end{quote} \subsection{Injective order preserving morphisms} \label{section:injorder} In this section we analyze a different embedding suggested by Malhas \cite{malhas-87,malhas-92}. We consider an orthocomplemented lattice $(L, \leq, 0, 1, ')$, i.e.\ a lattice $(L, \leq, 0, 1)$ with $0\leq x\leq 1$ for all $x\in L$, with orthocomplementation, that is with a mapping $' : L \rightarrow L$ satisfying the following three properties: a) $x''=x$, b) if $x\leq y$, then $y'\leq x'$, c) $x\cdot x'=0$ and $x \vee x'=1$. Here $x\cdot y= {\rm glb} (x,y)$ and $x \vee y = {\rm lub}(x,y)$. Furthermore, we will assume that $L$ is atomic\footnote{For every $x\in L\setminus\{0\}$, there is an atom $a\in L$ such that $a\leq x$. An atom is an element $a\in L\setminus\{0\}$ with the property that if $y \leq a$, then $y=0$ or $y=a$.} and satisfies the following additional property: \begin{equation} \label{*} \mbox{ for all }\; x,y \in L, x \leq y \; \mbox{ iff for every atom }\; a\in L, a\leq x\; \mbox{ implies} \; a\leq y. \end{equation} Every atomic Boolean algebra and the lattice of closed subspaces of a separable Hilbert space satisfy the above conditions.
Consider next a set $U$\footnote{Not containing the logical symbols $\cup, ', \rightarrow $.} and let $W(U)$ be the smallest set of words over the alphabet $U \cup \{', \rightarrow\}$ which contains $U$ and is closed under negation (if $A\in W(U)$, then $ A' \in W(U)$) and implication (if $A, B \in W(U)$, then $A \rightarrow B \in W(U)$).\footnote{Define in a natural way $A \cup B = A' \rightarrow B$, $A \cap B = (A \rightarrow B')'$, $ A \leftrightarrow B = (A \rightarrow B) \cap (B \rightarrow A)$.} The elements of $U$ are called {\it simple propositions} and the elements of $W(U)$ are called {\it (compound) propositions}. A {\it valuation} is a mapping \[t: W(U) \rightarrow {\bf 2}\] such that $t(A) \not= t(A')$ and $t(A \rightarrow B) = 0$ iff $t(A)=1$ and $t(B)=0$. Clearly, every assignment $s: U \rightarrow {\bf 2}$ can be extended to a unique valuation $t_s$. A {\it tautology} is a proposition $A$ which is true under every possible valuation, i.e., $t(A)=1$, for every valuation $t$. A set ${\cal K} \subseteq W(U)$ is {\it consistent} if there is a valuation making true every proposition in ${\cal K}$. Let $A\in W(U)$ and ${\cal K} \subseteq W(U)$. We say that $A$ {\it derives} from ${\cal K}$, and write ${\cal K}\models A$, in case $t(A)=1$ for each valuation $t$ which makes true every proposition in ${\cal K}$ (that is, $t(B)=1$, for all $B\in {\cal K}$).
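These definitions can be made concrete in a short Python sketch (an illustration under our own encoding; the representation of propositions as nested tuples is our assumption, not the paper's). Since the truth value of a proposition depends only on the finitely many simple propositions occurring in it, the relation ${\cal K}\models A$ for finite ${\cal K}$ can be decided by enumerating assignments:

```python
from itertools import product

# Propositions: ('var', p), ('not', A), ('imp', A, B).
def value(prop, s):
    """Value t_s(prop) of the unique valuation extending the assignment s."""
    tag = prop[0]
    if tag == 'var':
        return s[prop[1]]
    if tag == 'not':
        return 1 - value(prop[1], s)
    # implication: 0 iff the antecedent is 1 and the consequent is 0
    return 0 if (value(prop[1], s) == 1 and value(prop[2], s) == 0) else 1

def variables(prop):
    if prop[0] == 'var':
        return {prop[1]}
    return set().union(*(variables(sub) for sub in prop[1:]))

def derives(K, A):
    """K |= A: every valuation making all of K true makes A true."""
    vs = sorted(set().union(variables(A), *(variables(B) for B in K)))
    for bits in product((0, 1), repeat=len(vs)):
        s = dict(zip(vs, bits))
        if all(value(B, s) == 1 for B in K) and value(A, s) == 0:
            return False
    return True
```

For instance, $\{p,\ p\rightarrow q\}\models q$ (modus ponens), while $\{p\}\not\models q$.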
We define the set of consequences of ${\cal K}$ by \[Con({\cal K}) = \{A\in W(U) \mid {\cal K}\models A\}.\] Finally, a set ${\cal K}$ is a {\it theory} if ${\cal K}$ is a fixed-point of the operator $Con$: \[Con({\cal K}) = {\cal K}.\] It is easy to see that $Con$ is in fact a finitary closure operator, i.e., it satisfies the following four properties: \begin{itemize} \item ${\cal K} \subseteq Con({\cal K})$, \item if $ {\cal K} \subseteq \tilde{\cal K}$, then $Con({\cal K})\subseteq Con(\tilde{\cal K})$, \item $Con(Con({\cal K})) = Con({\cal K})$, \item $Con({\cal K}) = \bigcup_{\{X\subseteq {\cal K}, X \;\mbox{ finite}\}} Con(X)$. \end{itemize} The first three properties can be proved easily. A topological proof for the fourth property can be found in Appendix B. The main example of a theory can be obtained by taking a set $X$ of valuations and constructing the set of all propositions true under all valuations in $X$: \[Th(X) = \{ A\in W(U)\mid t(A)=1, \; \mbox{ for all} \; t\in X\}.\] In fact, every theory is of the above form, that is, {\it for every theory ${\cal K}$ there exists a set of valuations $X$ (depending upon ${\cal K}$) such that ${\cal K} = Th(X).$ } Indeed, take \[X_{{\cal K}} = \{ t: W(U) \rightarrow {\bf 2} \mid t \; \mbox{ valuation with} \; t(A)=1,\; \mbox{ for all }\; A\in {\cal K}\},\] and notice that \begin{eqnarray*} Th(X_{{\cal K}}) & = & \{B\in W(U)\mid t(B)=1,\; \mbox{ for all }\; t\in X_{{\cal K}}\}\\ & = & \{ B\in W(U)\mid t(B)=1,\; \mbox{ for every valuation with}\; t(A)=1,\\ & & \mbox{ for all }\; A\in {\cal K}\}\\ & = & Con({\cal K}) = {\cal K}. \end{eqnarray*} In other words, {\it theories are those sets of propositions which are true under a certain set of valuations (interpretations).} Let now ${\cal T}$ be a theory. Two elements $p,q\in U$ are ${\cal T}$-equivalent, written $p \equiv_{{\cal T}} q$, in case $p \leftrightarrow q\in {\cal T}$.
The relation $\equiv_{{\cal T}}$ is an equivalence relation. The equivalence class of $p$ is $[p]_{{\cal T}}=\{q\in U\mid p \equiv_{{\cal T}} q\}$ and the factor set is denoted by $U_{\equiv_{{\cal T}}}$; for brevity, we will sometimes write $[p]$ instead of $[p]_{{\cal T}}$. The factor set comes with a natural partial order: \[[p] \leq [q] \; \mbox{ if}\; p \rightarrow q\in {\cal T}.\] Note that in general, $(U_{\equiv_{{\cal T}}}, \leq)$ is not a Boolean algebra.\footnote{For instance, in case ${\cal T} = Con(\{p\})$, for some $p\in U$. If $U$ has at least three elements, then $(U_{\equiv_{{\cal T}}}, \leq)$ does not have a minimum.} In a similar way we can define the $\equiv_{{\cal T}}$-equivalence of two propositions: \[ A \equiv_{{\cal T}} B \; \mbox{ if }\; A \leftrightarrow B \in {\cal T}.\] Denote by $[[A]]_{{\cal T}}$ (briefly, $[[A]]$) the equivalence class of $A$ and note that for every $p\in U$, \[ [p]=[[p]] \cap U.\] The resulting Boolean algebra $W(U)_{\equiv_{{\cal T}}}$ is the Lindenbaum algebra of ${\cal T}$. Fix now an atomic orthocomplemented lattice $(L, \leq, 0, 1, ')$ satisfying (\ref{*}). Let $U$ be a set of cardinality greater than or equal to that of $L$ and fix a surjective mapping $f: U \rightarrow L$. For every atom $a\in L$, let $s_a: U \rightarrow {\bf 2}$ be the assignment defined by $s_a (p)=1$ iff $a\leq f(p)$. Take \[X = \{t_{s_a} \mid a \; \mbox{ is an atom of}\; L\}\footnote{Recall that $t_s$ is the unique valuation extending $s$.} \; \mbox{\rm and } {\cal T} = Th(X) . \] Malhas \cite{malhas-87,malhas-92} has proven that the {\it lattice $(U_{\equiv_{{\cal T}}}, \leq)$ is orthocomplemented,} and, in fact, {\it isomorphic to $L$}. Here is the argument. Note first that there exist two elements $\underline{0}, \underline{1}$ in $U$ such that $f(\underline{0})=0, \; f(\underline{1})=1$. Clearly, $\underline{0}\not\in {\cal T}$, but $\underline{1}\in {\cal T}$.
Indeed, for every atom $a$, $a\leq f(\underline{1})=1$, so $s_a(\underline{1}) = 1$ and hence $\underline{1}\in {\cal T}$; the claim for $\underline{0}$ follows similarly. Secondly, for every $p,q\in U$, \[p \rightarrow q \in {\cal T} \; \mbox{ iff } \; f(p) \leq f(q).\] If $p \rightarrow q\not\in {\cal T}$, then there exists an atom $a\in L$ such that $t_{s_a}(p \rightarrow q)=0,$ so $s_a(p) = t_{s_a}(p)=1, \;s_a(q) = t_{s_a}(q)=0$, which---according to the definition of $s_a$---means that $a\leq f(p)$, but $a\not\leq f(q)$. If we had $f(p) \leq f(q)$, then $a\leq f(q)$, a contradiction. Conversely, if $f(p) \not\leq f(q)$, then by (\ref{*}) there exists an atom $a$ such that $a\leq f(p)$ and $a\not\leq f(q)$. So, $s_a(p) = t_{s_a}(p)=1, \;s_a(q) = t_{s_a}(q)=0$, i.e., $(p\rightarrow q)\not\in {\cal T}$. As immediate consequences we deduce the validity of the following three relations: for all $p,q\in U$, \begin{itemize} \item $ f(p) \leq f(q)$ iff $[p]\leq [q]$, \item $ f(p) = f(q)$ iff $[p] = [q]$, \item $[\underline{0}] \leq [p] \leq [\underline{1}]$. \end{itemize} Two simple propositions $p,q\in U$ are {\it conjugate} in case $f(p)'=f(q)$.\footnote{Of course, this relation is symmetrical.} Define now the operation $^{*}: U_{{\cal T}} \rightarrow U_{{\cal T}}$ as follows: $[p]^{*}=[q]$ in case $q$ is a conjugate of $p$. It is not difficult to see that the operation $^{*}$ is well-defined and actually is an orthocomplementation. It follows that $(U_{{\cal T}}, \leq_{{\cal T}}, ^{*})$ is an orthocomplemented lattice. To finish the argument we will show that this lattice is {\it isomorphic} with $L$. The isomorphism is given by the mapping $\psi : U_{{\cal T}} \rightarrow L$ defined by the formula $\psi([p]) = f(p)$. This is a well-defined function (because $f(p) = f(q)$ iff $[p]=[q]$), which is bijective: it is injective since $\psi([p])= \psi([q])$ implies $f(p) = f(q)$ and hence $[p]=[q]$, and it is surjective because $f$ is onto. If $[p]\leq [q]$, then $f(p)\leq f(q)$, i.e.\ $\psi([p]) \leq \psi([q])$.
Finally, if $q$ is a conjugate of $p$, then \[\psi([p]^{*}) = \psi([q]) = f(q) = f(p)'= \psi([p])'.\] In particular, {\it there exists a theory whose induced orthoposet is isomorphic to the lattice of all closed subspaces of a separable Hilbert space}. How does this relate to the Kochen-Specker theorem? {\it The natural embedding \[ \Gamma: U_{\equiv_{{\cal T}}} \rightarrow W(U)_{\equiv_{{\cal T}}}, \ \Gamma ([p]) = [[p]]\] is order preserving and one-to-one, but in general it does not preserve orthocomplementation}, i.e.\ in general $\Gamma ([p]^{*}) \neq \Gamma ([p])'$. We always have $\Gamma ([p]^{*}) \leq \Gamma ([p])'$, but sometimes $ \Gamma ([p])'\not\leq \Gamma ([p]^{*})$. The reason is that for every pair of conjugate simple propositions $p,q$ one has $(p \rightarrow q')\in {\cal T}$, but the converse is not true. By combining the inverse $\psi^{-1}$ of the isomorphism $\psi$ with $\Gamma$ we obtain an embedding $\varphi$ of $L$ into the Boolean Lindenbaum algebra $W(U)_{\equiv_{{\cal T}}}$. Thus, the above construction of Malhas gives us another method {\em to embed any quantum logic into a Boolean logic in case we require that only the order is preserved}.\footnote{In Section \ref{section:2.3} we saw that it is possible to embed quantum logic into a Boolean logic preserving the order and the complement.} Next we shall give a simple example of a Malhas type embedding $\varphi : MO_2 \rightarrow {\bf 2}^4$. Consider again the finite quantum logic $MO_2$ represented in Figure \ref{f-hd-mo2}. Let us choose $$U=\{A,B,C,D,E,F,G,H\}.$$ Since $U$ contains more elements than $MO_2$, we can map $U$ surjectively onto $MO_2$; e.g., \begin{eqnarray*} f(A)&=& 0,\\ f(B)&=& p_-,\\ f(C)&=& p_-,\\ f(D)&=& p_+,\\ f(E)&=& q_-,\\ f(F)&=& q_+,\\ f(G)&=& 1,\\ f(H)&=& 1.
\end{eqnarray*} For every atom $a \in MO_2$, let us introduce the truth assignment $s_{a}:U\rightarrow {\bf 2}=\{0,1\}$ as defined above (i.e.\ $s_a(r)=1$ iff $a \leq f(r)$) and thus a valuation on $W(U)$ separating it from the rest of the atoms of $MO_2$. That is, for instance, associate with $p_-\in MO_2$ the function $s_{p_-}$ as follows: \begin{eqnarray*} &&s_{p_-}(A)=s_{p_-}(D)=s_{p_-}(E)=s_{p_-}(F)=0,\\ &&s_{p_-}(B)=s_{p_-}(C)=s_{p_-}(G)=s_{p_-}(H)=1. \end{eqnarray*} The truth assignments associated with all the atoms are listed in Table \ref{t-mo2-tvs2}. \begin{table} \begin{center} \begin{tabular}{|c|cccccccc|} \hline\hline &$A$& $B$& $C$& $D$& $E$& $F$& $G$& $H$\\ \hline $s_{p_{-}}$ &0 & 1& 1 & 0 &0 & 0& 1 & 1 \\ $s_{p_{+}}$ &0 & 0& 0 & 1 &0 & 0& 1 & 1 \\ $s_{q_{-}}$ &0 & 0& 0 & 0 &1 & 0& 1 & 1 \\ $s_{q_{+}}$ &0 & 0& 0 & 0 &0 & 1& 1 & 1 \\ \hline\hline \end{tabular} \end{center} \caption{Truth assignments on $U$ corresponding to atoms $p_{-},p_{+},q_{-},q_{+}\in MO_{2}$. \label{t-mo2-tvs2}} \end{table} The theory ${\cal T}$ we are thus dealing with is determined by the union of all the truth assignments; i.e., \[ X =\{t_{s_{p_-}},t_{s_{p_+}},t_{s_{q_-}},t_{s_{q_+}}\} \ \mbox{\rm and } {\cal T} = Th(X) .\] By construction, $U$ splits into six equivalence classes with respect to the theory ${\cal T}$; i.e., \[ U_{\equiv_{{\cal T}}}=\{[A],[B],[D],[E],[F],[G]\}. \] Since $[p]\leq [q]$ if and only if $(p\rightarrow q)\in {\cal T}$, we obtain a partial order on $U_{\equiv_{{\cal T}}}$ induced by ${\cal T}$ which isomorphically reflects the original quantum logic $MO_2$. The Boolean Lindenbaum algebra $W(U)_{\equiv_{{\cal T}}}={\bf 2}^4$ is obtained by forming all the compound propositions of $U$ and imposing a partial order with respect to ${\cal T}$. It is represented in Figure \ref{f-pt-tre}.
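The computation behind this example is mechanical enough to be replayed in a few lines of Python (our sketch; the dictionaries simply transcribe the chosen $f$ and the order of $MO_2$, and two simple propositions are ${\cal T}$-equivalent iff all four assignments $s_a$ agree on them):

```python
# Atoms of MO_2; an atom a satisfies a <= x iff x is a itself or 1.
atoms = ['p-', 'p+', 'q-', 'q+']
def leq(a, x):
    return x == a or x == '1'

# The surjection f : U -> MO_2 chosen in the text.
f = {'A': '0', 'B': 'p-', 'C': 'p-', 'D': 'p+',
     'E': 'q-', 'F': 'q+', 'G': '1', 'H': '1'}
U = sorted(f)

# Truth assignment s_a(r) = 1 iff a <= f(r), one row per atom.
s = {a: {r: int(leq(a, f[r])) for r in U} for a in atoms}

# Group U into equivalence classes: p and q are T-equivalent iff
# s_a(p) = s_a(q) for every atom a.
classes = {}
for r in U:
    key = tuple(s[a][r] for a in atoms)
    classes.setdefault(key, []).append(r)
```

This reproduces the four rows of the table above and the six equivalence classes $[A],[B],[D],[E],[F],[G]$.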
\begin{figure} \caption{The Boolean Lindenbaum algebra $W(U)_{\equiv_{{\cal T}}}={\bf 2}^4$.} \label{f-pt-tre} \end{figure} The embedding is given by \begin{eqnarray*} \varphi (0)&=&[[A]],\\ \varphi( p_-)&=&[[B]],\\ \varphi( p_+)&=&[[D]],\\ \varphi(q_-)&=&[[E]],\\ \varphi (q_+)&=&[[F]],\\ \varphi (1)&=&[[G]] . \end{eqnarray*} It is order--preserving but does not preserve operations such as the complement. Although, in this particular example, $f(B)=(f(D))'$ implies $(B\rightarrow D')\in {\cal T}$, the converse is not true in general. For example, there is no $t\in X$ for which $t(B)=t(E)=1$. Thus, $(B\rightarrow E')\in {\cal T}$, but $f(B)\neq (f(E))'$. One need not be afraid of order-preserving embeddings which are not lattice morphisms, after all. Even automaton logics (see Svozil \cite[Chapter 11]{svozil-93}, Schaller and Svozil \cite{schaller-92,schaller-95,schaller-96}, and Dvure{\v{c}}enskij, Pulmannov{\'{a}} and Svozil \cite{dvur-pul-svo}) can be embedded in this way. Take again the lattice $MO_2$ depicted in Figure \ref{f-hd-mo2}. A partition (automaton) logic realization is, for instance, $$\{\{\{1\},\{2,3\}\},\{\{2\},\{1,3\}\}\},$$ with \begin{eqnarray*} \{1\}&\equiv &p_-,\\ \{2,3\}&\equiv& p_+,\\ \{2\}&\equiv & q_-,\\ \{1,3\}&\equiv &q_+ ,\end{eqnarray*} respectively. If we take $\{1\}$, $\{2\}$ and $\{3\}$ as atoms, then the Boolean algebra ${\bf 2}^3$ generated by all subsets of $\{1,2,3\}$ with the set-theoretic inclusion as order relation suggests itself as a candidate for an embedding. The embedding is quite trivially given by $$\varphi (p)=p\in {\bf 2}^{3}.$$ The particular example considered above is represented in Figure \ref{mooreh-e2}. \begin{figure} \caption{The embedding of the partition logic realization of $MO_2$ into ${\bf 2}^3$.} \label{mooreh-e2} \end{figure} It is not difficult to check that the embedding satisfies the requirements (i) and (ii), that is, it is injective and order preserving.
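This can again be checked by brute force (an illustrative Python sketch with our own encoding of $MO_2$ and of the partition logic): the map is injective and order preserving, but a join such as $p_-\vee q_-=1$ is not sent to the union of the images, so it is not a lattice morphism:

```python
# MO_2 elements mapped to the subsets of {1,2,3} given by the partition logic.
emb = {'0': frozenset(), 'p-': frozenset({1}), 'p+': frozenset({2, 3}),
       'q-': frozenset({2}), 'q+': frozenset({1, 3}), '1': frozenset({1, 2, 3})}

def leq(x, y):
    # The order of MO_2: 0 below everything, 1 above everything.
    return x == y or x == '0' or y == '1'

injective = len(set(emb.values())) == len(emb)
order_preserving = all(emb[x] <= emb[y]
                       for x in emb for y in emb if leq(x, y))
# The join p- v q- equals 1 in MO_2, but the union of the images is smaller:
join_preserved = (emb['p-'] | emb['q-']) == emb['1']
```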
It is important to realize at this point that, although different automaton partition logical structures may be isomorphic from a logical point of view (one-to-one translatable elements, order relations and operations), they may be very different with respect to their embeddability. Indeed, any two distinct partition logics correspond to two distinct embeddings. It should also be pointed out that in the case of an automaton partition logic and for all finite subalgebras of the Hilbert lattice of two-dimensional Hilbert space, it is always possible to find an embedding corresponding to a logically equivalent partition logic which is a lattice morphism for co--measurable elements (modified requirement (iii)). This is due to the fact that partition logics and $MO_{n}$ have a separating set of valuations. In the $MO_2$ case, this is, for instance $$\{\{\{1,2\},\{3,4\}\},\{\{1,3\},\{2,4\}\}\},$$ with \begin{eqnarray*} \{1,2\}&\equiv& p_-,\\ \{3,4\}&\equiv& p_+,\\ \{1,3\}&\equiv& q_-,\\ \{2,4\}&\equiv& q_+, \end{eqnarray*} respectively. This embedding is based upon the set of all valuations listed in Table \ref{t-mo2-tvs}. These are exactly the mappings from $MO_2$ to ${\bf 2}$ preserving the order relation and the complementation. They correspond to the maximal ideals considered in Section \ref{section:2.3}. In this special case the embedding is just the embedding obtained by applying the construction of Section \ref{section:2.3}, which had been suggested by Zierler and Schlessinger \cite[Theorem 2.1]{ZirlSchl-65}. \begin{table} \begin{center} \begin{tabular}{|c|cccc|} \hline\hline &$p_{-}$& $p_{+}$& $q_{-}$& $q_{+}$\\ \hline $s_1$ &1 & 0& 1 & 0 \\ $s_2$ &1 & 0& 0 & 1 \\ $s_3$ &0 & 1& 1 & 0 \\ $s_4$ &0 & 1& 0 & 1 \\ \hline\hline \end{tabular} \end{center} \caption{The four valuations $s_{1},s_{2},s_{3},s_{4}$ on $MO_2$ take on the values listed in the rows. \label{t-mo2-tvs}} \end{table} The embedding is drawn in Figure \ref{f-pt-treb}.
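That the four rows of the table are exactly the maps from $MO_2$ to ${\bf 2}$ preserving the order relation and the complementation can be verified by enumerating all $2^6$ candidate maps (a Python sketch under our own encoding of $MO_2$):

```python
from itertools import product

# MO_2: elements, complement, and order.
L = ['0', 'p-', 'p+', 'q-', 'q+', '1']
comp = {'0': '1', '1': '0', 'p-': 'p+', 'p+': 'p-', 'q-': 'q+', 'q+': 'q-'}
def leq(x, y):
    return x == y or x == '0' or y == '1'

# Keep every map v : MO_2 -> {0,1} that preserves order and complement,
# recorded by its values on the four atoms.
valuations = []
for bits in product((0, 1), repeat=len(L)):
    v = dict(zip(L, bits))
    order_ok = all(v[x] <= v[y] for x in L for y in L if leq(x, y))
    comp_ok = all(v[comp[x]] == 1 - v[x] for x in L)
    if order_ok and comp_ok:
        valuations.append(tuple(v[a] for a in ['p-', 'p+', 'q-', 'q+']))
```

Exactly four maps survive, with the atom values listed in the rows of the table.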
\begin{figure} \caption{The embedding of $MO_2$ into ${\bf 2}^4$.} \label{f-pt-treb} \end{figure} \section{Surjective extensions?} The original proposal put forward by EPR \cite{epr} in the last paragraph of their paper was some form of completion of quantum mechanics. Clearly, the first type of candidate for such a completion is the sort of embedding reviewed above. The physical intuition behind an embedding is that the ``actual physics'' is a classical one, but because of some yet unknown reason, some of this ``hidden arena'' becomes observable while other parts remain hidden. Nevertheless, there exists at least one other alternative to complete quantum mechanics. This is best described by a {\em surjective map} $\phi :B\rightarrow L$ of a classical Boolean algebra onto a quantum logic, such that $\vert B\vert \ge \vert L\vert$. Plato's cave metaphor applies to both approaches, in that observations are mere shadows of some more fundamental entities. \section{Summary} We have reviewed several options for a classical ``understanding'' of quantum mechanics. Particular emphasis has been given to techniques for embedding quantum universes into classical ones. The term ``embedding'' is formalized here as usual. That is, an embedding is a mapping of the entire set of quantum observables into a (bigger) set of classical observables such that different quantum observables correspond to different classical ones (injectivity). The term ``observables'' here is used for quantum propositions, some of which (the complementary ones) might not be co--measurable; see Gudder \cite{gudder1}. It might therefore be more appropriate to conceive these ``observables'' as ``potential observables.'' After a particular measurement has been chosen, some of these observables are actually determined and others (the complementary ones) become ``counterfactuals'' by quantum mechanical means; cf.\ Schr\"odinger's catalogue of expectation values \cite[p.\ 823]{schrodinger}.
For classical observables, there is no distinction between ``observables'' and ``counterfactuals,'' because everything can be measured precisely, at least in principle. We should also mention a {\it caveat}: the relationship between the states of a quantum universe and the states of a classical universe into which the former one is embedded is beyond the scope of this paper. As might have been suspected, it turns out that, in order to be able to perform the mapping from the quantum universe into the classical one consistently, important structural elements of the quantum universe have to be sacrificed: \begin{description} \item[$\bullet$] Since, {\it per definition}, the quantum propositional calculus is nondistributive (nonboolean), a straightforward embedding which preserves all the logical operations among observables, irrespective of whether or not they are co--measurable, is impossible. This is due to the quantum mechanical feature of {\em complementarity}. \item[$\bullet$] One may restrict the preservation of the logical operations to be valid only among mutually orthogonal propositions. In this case it turns out that again a consistent embedding is impossible, since no consistent meaning can be given to the classical existence of ``counterfactuals.'' This is due to the quantum mechanical feature of {\em contextuality}. That is, quantum observables may appear different, depending on the way by which they were measured (and inferred). \item[$\bullet$] In a further step, one may abandon preservation of lattice operations such as {\it not} and the binary {\it and} and {\it or} operations altogether. One may merely require the preservation of the implicational structure (order relation). It turns out that, with these provisos, it is indeed possible to map quantum universes into classical ones. Stated differently, definite values can be associated with elements of physical reality, irrespective of whether they have been measured or not.
In this sense, that is, in terms of more ``comprehensive'' classical universes (the hidden parameter models), quantum mechanics can be ``understood.'' \end{description} At the moment we can neither say whether the nonpreservation of the binary lattice operations (interpreted as {\it and} and {\it or}) is too high a price for value definiteness, nor speculate about whether the entire program of embedding quantum universes into classical theories is a progressive or a degenerative case (compare Lakatosch \cite{lakatosch}). \appendix \section*{Appendix A: Proof of the geometric lemma} In this appendix we are going to prove the geometric lemma due to Piron \cite{piron-76} which was formulated in Section 2.2. First let us restate it. Consider a point $q$ in the northern hemisphere of the unit sphere $S^2 = \{p \in {\Bbb R}^3 \ | \ ||p||=1\}$. By $C(q)$ we denote the unique great circle which contains $q$ and the points $\pm(q_y,-q_x,0)/\sqrt{q_x^2+q_y^2}$ in the equator, which are orthogonal to $q$; compare Figure \ref{figure:greatcircle}. We say that a point $p$ in the northern hemisphere {\em can be reached} from a point $q$ in the northern hemisphere if there is a finite sequence of points $q=q_0, q_1, \ldots, q_{n-1}, q_n=p$ in the northern hemisphere such that $q_i\in C(q_{i-1})$ for $i=1,\ldots,n$. The lemma states: \begin{quote} {\it If $q$ and $p$ are points in the northern hemisphere with $p_z < q_z$, then $p$ can be reached from $q$.} \end{quote} For the proof we follow Cooke, Keane, and Moran \cite{c-k-m} and Kalmbach \cite{kalmbach-86}. We consider the tangent plane $H=\{p \in {\Bbb R}^3\ | \ p_z=1\}$ of the unit sphere at the north pole and the projection $h$ from the northern hemisphere onto this plane which maps each point $q$ in the northern hemisphere to the intersection $h(q)$ of the line through the origin and $q$ with the plane $H$. This map $h$ is a bijection. The north pole $(0,0,1)$ is mapped to itself.
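The behaviour of the projection $h$ can be illustrated numerically (a Python sketch for an arbitrarily chosen point $q$; the parametrization of $C(q)$ is ours): the images $h(c(t))$ of the points of $C(q)$ lying in the northern hemisphere fall on a line through $h(q)$ with direction $(q_y,-q_x,0)/\sqrt{q_x^2+q_y^2}$, which is orthogonal to the segment from the north pole to $h(q)$:

```python
import math

def h(p):
    # Central projection of a point of the northern hemisphere onto the
    # tangent plane H = {p_z = 1}.
    return (p[0] / p[2], p[1] / p[2], 1.0)

q = (0.3, 0.4, math.sqrt(1 - 0.3**2 - 0.4**2))   # a point with q_z > 0
n = math.hypot(q[0], q[1])
u = (q[1] / n, -q[0] / n, 0.0)                   # equator points +-u on C(q)

def c(t):
    # Parametrization of the great circle C(q) through q and +-u.
    return tuple(math.cos(t) * q[i] + math.sin(t) * u[i] for i in range(3))

hq = h(q)
north = (0.0, 0.0, 1.0)
radial = (hq[0] - north[0], hq[1] - north[1])
dot = radial[0] * u[0] + radial[1] * u[1]        # 0: u is orthogonal to radial

samples = [h(c(t)) for t in (-0.6, -0.2, 0.2, 0.6)]   # cos(t) > 0: northern part
offsets = [(s[0] - hq[0], s[1] - hq[1]) for s in samples]
cross = [o[0] * u[1] - o[1] * u[0] for o in offsets]  # 0 iff parallel to u
```

Up to floating-point error, `dot` and every entry of `cross` vanish, in line with the geometric description of $h(C(q))$ given next.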
For each $q$ in the northern hemisphere (not equal to the north pole) the image $h(C(q))$ of the great circle $C(q)$ is the line in $H$ which goes through $h(q)$ and is orthogonal to the line through the north pole and through $h(q)$. Note that $C(q)$ is the intersection of a plane with $S^2$, and $h(C(q))$ is the intersection of the same plane with $H$; see Figure \ref{figure:projectionh}. \begin{figure} \caption{The plane $H$ viewed from above.} \label{figure:projectionh} \end{figure} The line $h(C(q))$ divides $H$ into two half planes. The half plane not containing the north pole is the image of the region in the northern hemisphere between $C(q)$ and the equator. Furthermore note that $q_z > p_z$ for two points in the northern hemisphere if and only if $h(p)$ is further away from the north pole than $h(q)$. We proceed in two steps. Step 1. First, we show that, if $p$ and $q$ are points in the northern hemisphere and $p$ lies in the region between $C(q)$ and the equator, then $p$ can be reached from $q$. In fact, we show that there is a point $\tilde{q}$ on $C(q)$ such that $p$ lies on $C(\tilde{q})$. To this end we consider the images of $q$ and $p$ in the plane $H$; see Figure \ref{figure:belowcq}. The point $h(p)$ lies in the half plane bounded by $h(C(q))$ not containing the north pole. \begin{figure} \caption{The point $p$ can be reached from $q$.} \label{figure:belowcq} \end{figure} Among all points $h(q^\prime)$ on the line $h(C(q))$, we choose $\tilde{q}$ to be one of the two points such that the line through the north pole and $h(\tilde{q})$ and the line through $h(\tilde{q})$ and $h(p)$ are orthogonal. Then this last line is the image of $C(\tilde{q})$, and $C(\tilde{q})$ contains the point $p$. Hence $p$ can be reached from $q$. Our first claim is proved. Step 2. Fix a point $q$ in the northern hemisphere.
Starting from $q$ we can wander around the northern hemisphere along great circles of the form $C(p)$ for points $p$ in the following way: for $n\geq 5$ we define a sequence $q_0, q_1, \ldots, q_n$ by setting $q_0=q$ and by choosing $q_{i+1}$ to be that point on the great circle $C(q_i)$ such that the angle at the north pole between $h(q_{i+1})$ and $h(q_i)$ is $2\pi/n$. The image in $H$ of this configuration is a shell where $h(q_n)$ is the point furthest away from the north pole; see Figure \ref{figure:schnecke}. \begin{figure} \caption{The shell in the plane $H$ for $n=16$.} \label{figure:schnecke} \end{figure} First, we claim that any point $p$ on the unit sphere with $p_z < {q_n}_z$ can be reached from $q$. Indeed, such a point corresponds to a point $h(p)$ which is further away from the north pole than $h(q_n)$. There is an index $i$ such that $h(p)$ lies in the half plane bounded by $h(C(q_i))$ and not containing the north pole, hence such that $p$ lies in the region between $C({q_i})$ and the equator. Then, as we have already seen, $p$ can be reached from $q_i$ and hence also from $q$. Secondly, we claim that $q_n$ approaches $q$ as $n$ tends to infinity. This is equivalent to showing that the distance of $h(q_n)$ from $(0,0,1)$ approaches the distance of $h(q)$ from $(0,0,1)$. Let $d_i$ denote the distance of $h(q_i)$ from $(0,0,1)$ for $i=0,\ldots,n$. Then $d_i / d_{i+1} = \cos(2\pi/n)$; see Figure \ref{figure:schnecke}. Hence $d_n = d_0 \cdot (\cos(2\pi/n))^{-n}$. That $d_n$ approaches $d_0$ as $n$ tends to infinity follows immediately from the fact that $(\cos(2\pi/n))^n$ approaches $1$ as $n$ tends to infinity. For completeness' sake\footnote{Actually, this is an exercise in elementary analysis.} we prove it by proving the equivalent statement that $\log((\cos(2\pi/n))^n)$ tends to $0$ as $n$ tends to infinity. Namely, for small $x$ we know the formulae $\cos(x)=1-x^2/2 + {\cal O}(x^4)$ and $\log(1+x)=x+{\cal O}(x^2)$.
Hence, for large $n$, \begin{eqnarray*} \log((\cos(2\pi/n))^n) & = & n \cdot \log(1-2{\pi^2 \over n^2} + {\cal O}(n^{-4})) \\ & = & n \cdot ( - 2 {\pi^2 \over n^2} + {\cal O}(n^{-4}))\\ & = & - {2 \pi^2 \over n} + {\cal O}(n^{-3}) \, . \end{eqnarray*} This ends the proof of the geometric lemma. \appendix \section*{Appendix B: Proof of a property of the set of consequences of a theory} In Section \ref{section:injorder} we introduced the set $Con({\cal K})$ of consequences of a set ${\cal K}$ of propositions over a set $U$ of {\em simple propositions} and the logical connectives negation $\phantom{x}^\prime$ and implication $\rightarrow$. We mentioned four properties of the operator $Con$. In this appendix we prove the fourth property: \[ Con({\cal K}) = \bigcup_{\{X\subseteq {\cal K}, X \;\mbox{ finite}\}} Con(X) \,.\] The inclusion $Con({\cal K}) \supseteq \bigcup_{\{X\subseteq {\cal K}, X \;\mbox{ finite}\}} Con(X)$ follows directly from the second property of $Con$, i.e., from the monotonicity: if $X \subseteq {\cal K}$, then $Con(X) \subseteq Con ({\cal K})$. For the other inclusion we assume that a proposition $A \in Con({\cal K})$ is given. We have to show that there exists a finite subset $X \subseteq {\cal K}$ such that $A \in Con(X)$. In order to do this we consider the set ${\cal V}(W(U))$ of all valuations. This set can be identified with the power set of $U$ and viewed as a topological space with the product topology of $|U|$ copies of the discrete topological space $\{0,1\}$. By Tychonoff's Theorem (see Munkres \cite{munkres-75}) ${\cal V}(W(U))$ is a compact topological space. For an arbitrary proposition $B$, the set $\{t \in {\cal V}(W(U)) \mid t(B)=0\}$ of valuations $t$ with $t(B)=0$ is a compact and open subset of valuations because the value $t(B)$ depends only on the finitely many simple propositions occurring in $B$. 
Note that our assumption $A \in Con({\cal K})$ is equivalent to the inclusion \[ \{t \in {\cal V}(W(U)) \mid t(A)=0\} \subseteq \bigcup_{B \in {\cal K}} \{t \in {\cal V}(W(U)) \mid t(B)=0\}.\] Since the set on the left-hand side is compact, there exists a finite subcover of the open cover on the right-hand side, i.e.\ there exists a finite set $X \subseteq {\cal K}$ with \[ \{t \in {\cal V}(W(U)) \mid t(A)=0\} \subseteq \bigcup_{B \in X} \{t \in {\cal V}(W(U)) \mid t(B)=0\}.\] This is equivalent to $A \in Con(X)$, which was to be shown. \section*{Acknowledgement} The authors thank the anonymous referees for their extremely helpful suggestions and comments leading to a better form of the paper. \end{document}
\begin{document} \title{Spherical Thrackles} \begin{abstract} We establish Conway's thrackle conjecture in the case of spherical thrackles; that is, for drawings on the unit sphere where the edges are arcs of great circles. \end{abstract} \section{Introduction} Let $G$ be a finite abstract graph. A \emph{thrackle} drawing is a graph drawing of $G$ on some surface where every pair of distinct edges in $G$ intersects in a single point, either at a common endpoint or at a proper crossing; see \cite{CN,CN2,CN3,FP,LPS,PRT,PS,PP,W}. A \emph{spherical} thrackle drawing is a thrackle drawing on the unit sphere where the edges are represented by arcs of great circles. The class of spherical thrackle drawings is a natural spherical analog of straight-line thrackles drawn on the plane. Despite the similarity, the graphs that can be drawn as spherical thrackles form a larger class than those that can be drawn as straight-line thrackles. Clearly, every graph that can be drawn as a straight-line thrackle can also be drawn as a (sufficiently small) spherical thrackle, but the converse is not true. By the results of Woodall \cite{W}, the only cycles which can be drawn as straight-line thrackles are the odd cycles. In comparison, all even cycles other than the $4$-cycle can be drawn as spherical thrackles; that is, every cycle that has a thrackle drawing in the plane also has a spherical thrackle drawing. Using an adaptation of Woodall's edge-insertion procedure for spherical thrackles \cite{W}, we can obtain from the $6$-cycle drawing the rest of the even cycle drawings, as demonstrated in Figure \ref{F1}. \begin{figure} \caption{Spherical thrackle drawings of a $6$-cycle (left) and an $8$-cycle (right).} \label{F1} \end{figure} The main result of this paper is that Conway's thrackle conjecture (see \cite{W}) holds in the case of spherical thrackles. \begin{theorem}\label{thcon} Let $G$ be an abstract graph with $n$ vertices and $m$ edges. 
If $G$ admits a spherical thrackle drawing, then $n\geq m$. \end{theorem} Let us comment straight away that Theorem \ref{thcon} opens up an approach to the thrackle conjecture in the plane; given a thrackle in the plane one can transport it to the sphere by central projection, and then attempt to deform it to a thrackle whose edges are arcs of great circles. We have not been able to complete this program; the difficulty lies in controlling the process so that during the deformation the edges do not cross any vertices. \section{Definitions and assumptions} Consider a spherical thrackle drawing of a graph $G$. Throughout this paper the graph $G$ will be assumed to be connected and have no terminal edges. This is not restrictive, since the existence of any counterexample to the thrackle conjecture obviously implies the existence of a counterexample which is connected and has no terminal edges. We define the \emph{crossing orientation} of any two directed edges $e$, $f$ in a similar manner to the vector cross product. To demonstrate this, in Figure \ref{F2} we have $\chi(e_3,e_1)=1$, while $\chi(e_2,e_4)=-1$. A similar definition applies for intersections at endpoints; in Figure \ref{F2} we have $\chi(e_1,e_2)=1$. Note that in general we have $\chi(e,f)=-\chi(f,e)$. \begin{figure} \caption{A directed $4$-path.} \label{F2} \end{figure} A (directed) $k$-path $p=e_1\dots e_k$ is called \emph{good} if either $\chi(e_{i-1},e_{i})=1$ for each $i=2,\dots,k$ or $\chi(e_{i-1},e_{i})=-1$ for each $i=2,\dots,k$, and is called \emph{bad} otherwise. Similarly, a $k$-cycle $c_k$ is called \emph{good} if every directed path in $c_k$ is good, and is called bad otherwise. The path shown in Figure \ref{F2} is good. For any edge $e$, denote by ${\mathcal C}(e)$ the great circle containing $e$. A \emph{long} (resp.~\emph{short}, resp.~\emph{medium}) edge is an edge whose length is greater than $\pi$ (resp.~less than $\pi$, resp.~equal to $\pi$). 
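The crossing orientation $\chi$ defined above can be made fully explicit with the vector cross product it is modelled on. The following sketch (ours, not from the paper; all function names are illustrative) evaluates $\chi(e,f)$ at a crossing point $p$ on the unit sphere as the sign of $(t_e \times t_f)\cdot p$, where $t_e, t_f$ are the unit tangents of the directed edges at $p$:

```python
import numpy as np

def tangent_at(normal, p):
    """Unit tangent at point p of the directed great circle whose
    oriented unit normal is `normal` (the right-hand rule gives the
    direction of travel along the circle)."""
    t = np.cross(normal, p)
    return t / np.linalg.norm(t)

def crossing_orientation(t_e, t_f, p):
    """Sign chi(e, f) of the crossing of directed edges e, f at the
    point p on the unit sphere: the sign of (t_e x t_f) . p."""
    return int(np.sign(np.dot(np.cross(t_e, t_f), p)))

p = np.array([1.0, 0.0, 0.0])                      # crossing point
t_e = tangent_at(np.array([0.0, 0.0, 1.0]), p)     # eastward along the equator
t_f = tangent_at(np.array([0.0, -1.0, 0.0]), p)    # northward along a meridian

chi_ef = crossing_orientation(t_e, t_f, p)   # +1
chi_fe = crossing_orientation(t_f, t_e, p)   # -1
```

The antisymmetry $\chi(e,f)=-\chi(f,e)$ noted in the text is immediate here from the antisymmetry of the cross product.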
For a given spherical thrackle drawing, by making small adjustments to the vertex positions if necessary, we may assume that no two edges lie on the same great circle. Hence the crossing orientations are all well-defined. Similarly, we may also assume that there are no medium edges, so every edge is either long or short. With these assumptions we say that the drawing is in \emph{general position}. Throughout this paper, the term \emph{spherical thrackle} will designate a spherical thrackle drawing in general position. We will say that an edge $e$ \emph{lies in} a given (open) hemisphere $\mathcal{H}$ if the interior of $e$ is contained in $\mathcal{H}$; that is, we permit the vertices of $e$ to lie on the boundary of $\mathcal{H}$. We require one further notion. Let $v\in G$ be a vertex of degree $\geq 3$ and let $e$ be an edge incident to $v$. For the purpose of illustration, suppose any other edges incident to $v$ are directed away from $v$. We say $e$ \emph{separates at} $v$ if and only if there are two other edges, $f,g$, incident to $v$ which start in opposite hemispheres bounded by ${\mathcal C} (e)$. This is illustrated in Figure \ref{F4}. If it is understood which vertex is being referred to, or it is irrelevant, we simply say $e$ \emph{separates}. \begin{figure} \caption{The edge $e$ separates at $v$.} \label{F4} \end{figure} \section{Cycles} \begin{theorem}\label{cth} Every spherical thrackle drawing of an $n$-cycle is good for $n\geq5$. Moreover, if $n$ is even, such a drawing contains at least one long edge. \end{theorem} \begin{proof} Let $c_n=e_1e_2\dots e_n$ be an $n$-cycle for some $n\geq 5$. For convenience of notation, let us write $e_k=e_{k+n}$ for all $k$, and choose the direction on $c_n$ in order of increasing edge index. Suppose $c_n$ is bad. We can assume without loss of generality that there are three adjacent edges $e_{j-1}, e_j, e_{j+1}$ such that $\chi(e_{j-1},e_{j})=1$ and $\chi(e_{j},e_{j+1})=-1$. 
Then $e_j$ is short; indeed, otherwise $e_{j-1}$ and $e_{j+1}$ would be forced to lie in opposite hemispheres bounded by ${\mathcal C} (e_j)$ and hence could not intersect. We must also have at least one of $e_{j-1}$ and $e_{j+1}$ long, for the same reason. Up to relabelling and direction change, we have the structure shown in Figure \ref{F3}, with $e_{j+1}$ possibly long. We see that the edge $e_{j-2}$, incident to $e_{j-1}$ at its starting point, must meet $e_{j}$ and $e_{j+1}$ at their common endpoint in order to intersect them both; this produces a $3$-cycle, which is a contradiction. Hence, $c_n$ is good. \begin{figure} \caption{A bad $3$-path.} \label{F3} \end{figure} Now, let $c_{2n}=e_1\dots e_{2n}$ be a $2n$-cycle for some $n\geq3$, directed in order of increasing edge index. Suppose that all edges in $c_{2n}$ are short. Denote by $\mathcal{H}$ the hemisphere bounded by ${\mathcal C} (e_1)$ in which the edge $e_2$ lies. As $c_{2n}$ is good, the starting point of $e_{2n}$ is also in $\mathcal{H}$. As $e_3$ is short and begins in $\mathcal{H}$, it ends in the other hemisphere $-\mathcal{H}$ (since it must cross $e_1$ and hence ${\mathcal C} (e_1)$). By induction, each even-numbered edge (other than $e_{2n}$) ends in $\mathcal{H}$, while each odd-numbered edge (other than $e_1$) ends in $-\mathcal{H}$. But $e_{2n-1}$ must end in $\mathcal{H}$ in order to meet the starting point of $e_{2n}$, so we have a contradiction. Hence, $c_{2n}$ contains a long edge. \end{proof} \begin{remark}\label{R:bad3cycle} Notice that since we will be dealing with graphs without terminal edges, the proof of Theorem \ref{cth} also establishes the following fact: for every bad $3$-path, the middle edge is short and is an edge in a bad $3$-cycle. Moreover, every bad $3$-cycle has a unique long edge; this is the edge whose vertices have the same crossing orientation. These observations will be useful in what follows. 
\end{remark} \section{Lemmata} Consider a spherical thrackle drawing of a connected graph $G$ having no terminal edges. \begin{lemma}\label{sel} In any spherical thrackle, every edge that separates at one of its vertices is short and is an edge of a bad $3$-cycle. \end{lemma} \begin{proof} Assume that $e$ separates at $v$ with adjacent edges $f,g$ in different hemispheres. Direct $f,g$ towards $v$ and $e$ away from $v$, and assume without loss of generality that $\chi(f,e)=-1$ and $\chi(g,e)=1$, as shown in Figure \ref{F4}. Since $G$ has no terminal edges, there is some edge $h$ incident to $e$ at its other endpoint. We have either $\chi(e,h)=1$ or $\chi(e,h)=-1$. If $\chi(e,h)=1$, then $feh$ is a bad $3$-path, and if $\chi(e,h)=-1$, then $geh$ is a bad $3$-path, so in either case $e$ is the middle edge of a bad $3$-path. So by Remark \ref{R:bad3cycle}, $e$ is short and is an edge in a bad $3$-cycle. \end{proof} \begin{lemma}\label{vl} Let $G$ be a connected graph with no terminal edges. If $\deg(v)\geq3$, then there exists a great circle ${\mathcal C} $ passing through $v$ such that the starting segments of all of the edges incident to $v$ lie in the same hemisphere bounded by ${\mathcal C} $. \end{lemma} \begin{proof} If no such circle ${\mathcal C} $ exists, then all of the edges incident to $v$ must separate; since there are at least three of them, this gives at least two different $3$-cycles. But this is impossible as thrackles can have at most one $3$-cycle \cite{CN}. \end{proof} \begin{lemma}\label{corollary} For any vertex $v$ we have $\deg(v)\leq4$. Moreover, if $\deg(v)>2$, then $v$ is a vertex of a bad $3$-cycle. \end{lemma} \begin{proof} By Lemma \ref{vl}, if we have a vertex of degree at least $5$, then at least three of the adjacent edges are separating edges; this gives at least two different $3$-cycles by Lemma \ref{sel}, which is impossible. Hence, $\deg(v)\leq4$ for any vertex $v$. 
If $\deg(v)>2$, we have at least one separating edge, which must be an edge of a bad $3$-cycle by Lemma \ref{sel}, so $v$ is a vertex of a bad $3$-cycle. \end{proof} \begin{lemma}\label{l:2long}{\ } \begin{enumerate}[{\rm (a)}] \item No two long edges are adjacent. \item All the edges adjacent to a long edge $e$ lie in the same open hemisphere bounded by ${\mathcal C}(e)$. \item If $e_1e_2e_3$ is a directed $3$-path, with the edge $e_2$ long, then the orientations of crossings of $e_2$ and $e_3$ with $e_1$ are the same: $\chi (e_1,e_2)=\chi (e_1,e_3)$. \item If $e_1e_2$ is a directed $2$-path, with both edges $e_1, e_2$ short, then any directed edge $e$ not incident to their common endpoint crosses $e_1, e_2$ with opposite orientations: $\chi (e,e_2)=-\chi (e,e_1)$. \item Let $p = e_1e_2\ldots e_m$ be a directed path, with all the edges short, and let $e$ be a directed edge none of whose endpoints is a common endpoint of two adjacent edges of $p$. Then $\chi (e, e_m)=(-1)^{m-1} \chi (e, e_1)$. \end{enumerate} \end{lemma} \begin{proof} (a) is trivial, as two adjacent long edges would have two common points. (b) Two edges that are adjacent to $e$ at different endpoints of $e$ are short by (a) and lie in the same hemisphere bounded by ${\mathcal C}(e)$ since otherwise they would have no points in common. The assertion follows. Part (c) is obvious. (d) follows from the fact that both edges $e_1, e_2$ are short, hence each crosses or meets the great circle ${\mathcal C}(e)$ exactly once. Part (e) follows from (d) by induction. \end{proof} \begin{lemma}\label{gpl} Let $e_0e_1e_2\dots e_{m-1}e_{m}e_{m+1}$ be a simple good path with $e_1, e_m$ long and all other edges short. Then $m$ is odd; that is, the long edges are separated by an odd number of short edges. \end{lemma} \begin{proof} Assume $m$ is even and direct the path from $e_0$ to $e_{m+1}$. By Lemma~\ref{l:2long}(a), $m \ge 4$, so $e_2 \ne e_{m-1}$. 
Without loss of generality, assume that $\chi (e_i, e_{i+1})=1$ for all $i \le m$. The proof has three steps. \emph{Step 1.} We first compute the orientations of some crossings. By Lemma \ref{l:2long}(c), $\chi (e_0, e_2) = \chi (e_0, e_1) = 1$. Applying Lemma~\ref{l:2long}(e) to the $(m-2)$-path $e_2 \ldots e_{m-1}$ and the edges $e_0$ and $e_1$ we obtain respectively $\chi (e_0, e_{m-1}) = (-1)^{m-3}\chi (e_0, e_{2}) = -1$ and $\chi (e_1, e_{m-1}) = (-1)^{m-3}\chi (e_1, e_{2})= -1$. Similarly, $\chi (e_{m+1}, e_2) = \chi (e_m, e_2) =1$. Again, by Lemma~\ref{l:2long}(e), $\chi (e_2, e_{m-1})= \chi (e_2, e_{3})= 1$. \emph{Step 2.} We claim that on the edge $e_1$, the crossing point $e_1 \cap e_{m-1}$ appears before the crossing point $e_1 \cap e_m$. Similarly, on the edge $e_m$, the crossing point $e_m \cap e_2$ appears after the crossing point $e_m \cap e_1$. To see this, we only need the edges $e_1, e_2, e_{m-1}, e_m$ and the information about the orientation of their crossings obtained in Step 1: \[ \chi (e_1, e_2)= \chi (e_{m-1}, e_m)=1,\quad \chi (e_1, e_{m-1}) = \chi (e_2, e_m) = -1,\quad \chi (e_2, e_{m-1})=1. \] Let $\mathcal{H}$ be the open hemisphere bounded by ${\mathcal C}(e_1)$ in which the short edge $e_2$ lies. Using the orientation of mutual crossings of $e_1,e_2$ and $e_{m-1}$, we find that the short edge $e_{m-1}$ starts in $\mathcal{H}$ and ends in $-\mathcal{H}$, the antipodal hemisphere. Then the long edge $e_m$ starts in $-\mathcal{H}$. If it crossed $e_2$ before $e_1$, the orientation $\chi (e_m, e_2)$ would be $-1$, which is not the case. Thus, on the edge $e_m$, the crossing with $e_2$ is after the crossing with $e_1$. Changing the direction of the path and relabelling edges accordingly, we find that on the edge $e_1$, the crossing with $e_{m-1}$ is before the crossing with $e_m$. \emph{Step 3.} Let $v_i$ be the starting point of the edge $e_i$ for $ i =0, \dots, m+1$, and let $p=e_1 \cap e_m$, $q=e_2 \cap e_{m+1}$ and $r=e_2 \cap e_{m}$. 
As we found in Step 1, $\chi (e_{m+1}, e_2)= \chi (e_m, e_2)= 1$. Consequently, since $v_{m+1}$ is the starting point of the short edge $e_{m+1}$, the arc $f$ of the edge $e_m$ from $r$ to $v_{m+1}$ is longer than $\pi$. Indeed, if $g$ denotes the arc of $e_2$ travelled in the reverse direction from $q$ to $r$, and $h$ denotes the arc of $e_{m+1}$ from $v_{m+1}$ to $q$, then the triangle $gfh$ is a bad $3$-cycle and thus $f$ has length greater than $\pi$ by Remark \ref{R:bad3cycle}. Then, by Step 2, the arc $pv_{m+1}$ of the edge $e_m$ is longer than $\pi$. Similarly, the arc $v_1p$ of the edge $e_1$ is longer than $\pi$. But then the edges $e_m$ and $e_1$ have two points in common, which is a contradiction. \end{proof} \begin{lemma}\label{gcy} In every good cycle, consecutive distinct long edges are separated by an odd number of short edges. In particular, good odd cycles have at most one long edge. \end{lemma} \begin{proof} By Lemma \ref{l:2long}(a), there are no adjacent long edges. If $e_k$ and $e_{\ell}$ are long edges with $k< \ell$ and there are no long edges $e_i$ for $k<i< \ell$, then one can consider the path $e_{k-1}e_k\dots e_{\ell}e_{\ell+1}$. So the first part of the lemma follows immediately from Lemma \ref{gpl} except in the case of a good cycle of the form $e_0e_1e_2\dots e_{m-1}e_{m}$ where $e_1, e_m$ are long and all other edges short. But in this case one can replace $e_0$ by two edges $e'_0,e''_0$, as in Figure \ref{Fsw}, and apply Lemma \ref{gpl} to the good path $e'_0e_1e_2\dots e_{m-1}e_{m}e''_0$. \begin{figure} \caption{Turning a good cycle into a good path.} \label{Fsw} \end{figure} If a good cycle has $n$ long edges with $n\ge2$, then from what we have just seen there is an odd number of short edges between each consecutive pair of long edges. So the total number $t$ of short edges is even if and only if $n$ is even. But then the total number of edges is $t+n$, which is always even. 
\end{proof} \section{Proof of the main result} Consider a spherical thrackle drawing and suppose, by way of contradiction, that $G$ has more edges than vertices. We prove Theorem \ref{thcon} in two parts. We first prove the existence of a spherical thrackle drawing of a figure $8$-graph $H$; that is, $H$ consists of two cycles that only share a vertex. In particular, we will show that $H$ consists of an even cycle and a bad $3$-cycle, with the vertex of degree $4$ opposite the long edge of the bad $3$-cycle. We then prove that no such graph can be drawn as a spherical thrackle. If $G$ has more edges than vertices, then there must be some vertex in $G$ with degree greater than $2$. By Lemma \ref{corollary}, any such vertex is the vertex of a bad $3$-cycle $c_3$ of $G$. Since $G$ can contain at most one $3$-cycle, only the vertices of $c_3$ can have degree greater than $2$. Let $c_3=ABC$, with $A$ opposite to the unique long edge $BC$; see Remark \ref{R:bad3cycle}. Let $\mathcal{H}$ be the hemisphere bounded by ${\mathcal C} (BC)$ containing $A$. We consider the three possible cases (up to symmetry): \begin{enumerate} \item $\deg(B)>2$ and $\deg(C)>2$. \item $\deg(B)>2$ and $\deg(C)=2$. \item $\deg(B)=\deg(C)=2$. \end{enumerate} Case (1). As $\deg(B)>2$, there is some edge $BD$ incident to $B$ with $D\neq A,C$. Since $BC$ is long, $BD$ is necessarily short by Lemma~\ref{l:2long}(a), so $D\in\mathcal{H}$ since $BD$ must cross $AC$. $BD$ does not separate, since otherwise it produces another bad $3$-cycle by Lemma \ref{sel}. If there is some other edge $BD'$ incident to $B$ with $D'\neq A,C,D$, then one of $BD$, $BD'$ would separate, giving another bad $3$-cycle. Hence $\deg(B)=3$. By symmetry, $\deg(C)=3$, with some short edge $CE$ incident to $C$ with $E\in\mathcal{H}$. 
We must then have $\deg(A)=2$, since if there were some edge $AF$ incident to $A$ with $F\neq B,C$ then $AF$ cannot separate $AB$, $AC$, or it produces another bad $3$-cycle, but this would imply that $AF$ has no points in common with one of $BD$, $CE$. This brings us to the drawing on the left in Figure \ref{F5}, from which we can obtain the structure shown on the right in Figure \ref{F5} by edge insertion. Since all other vertices have degree $2$, we have a cycle sharing the vertex $A$ with a $3$-cycle. Since the intersection of the two cycles is a touching intersection, the other cycle must be even by \cite{LPS}. \begin{figure} \caption{The drawing on the right is obtained from the drawing on the left.} \label{F5} \end{figure} Case (2). As in case (1), $\deg(B)=3$, with some short edge $BD$ incident to $B$ crossing $AC$. Since $\deg(C)=2$, and all vertices other than the vertices of $c_3$ have degree $2$, we must have $\deg(A)$ odd so that the degree sum of $G$ is even (which is true of any graph). Hence, $\deg(A)=3$, since this is the only possible odd degree by Lemma \ref{corollary}. The remaining edge $AF$ (with $F\neq B,C$) incident to $A$ must intersect $BD$ and $BC$, and cannot separate $AB$, $AC$, so we get the drawing on the left in Figure \ref{F6}. By edge insertion as shown on the right in Figure \ref{F6}, we again obtain a figure $8$-graph consisting of an even cycle and a $3$-cycle. \begin{figure} \caption{The drawing on the right is obtained from the drawing on the left.} \label{F6} \end{figure} Case (3). Since $A$ is the only vertex with degree greater than $2$, and the degree sum must be even, we have $\deg(A)=4$ by Lemma \ref{corollary}. The other two edges $AF$, $AF'$ must both cross $BC$, and since neither of them can separate at $A$, we get (without loss of generality) the structure shown in Figure \ref{F7}. We again have an even cycle sharing a vertex with a $3$-cycle. 
\begin{figure} \caption{Case (3) gives another figure $8$-graph.} \label{F7} \end{figure} This completes the first part of the proof; in each case we have obtained a figure $8$-graph $H$ consisting of an even cycle and a $3$-cycle, with the vertex of degree $4$ opposite the long edge of the bad $3$-cycle. Now we show that such a graph cannot be drawn in this way. Retain the labelling of the bad $3$-cycle with vertices $A$, $B$, $C$, with $A$ opposite the long edge $BC$. Let $c_{2n}=e_1\dots e_{2n}$ be the even cycle. Note that $c_{2n}$ is good by Theorem \ref{cth}. Direct $c_{2n}$ in order of increasing edge index, with $e_1$ starting at $A$, and without loss of generality, assume that $\chi(e_{i-1},e_{i})=1$ for each $i=2,\dots,2n$, and $\chi(e_{2n},e_{1})=1$. By swapping the labels of $B$ and $C$ if necessary, assume also that $\chi (AB,BC)=\chi( BC,CA)=1$. Since neither $e_1$ nor $e_{2n}$ can separate at $A$, we then get the drawing on the left-hand side of Figure~\ref{F8}. Note that necessarily $\chi(e_{2n},AB)=\chi (CA,e_1)=1$. In particular, the path $ABCAe_1\dots e_{2n}$ is good. Note that $ABCAe_1\dots e_{2n}$ can be made simple by introducing a new vertex $A'$ and disconnecting the edges $CA$ and $e_{1}$ from $A$, as shown on the right-hand side of Figure~\ref{F8}. This turns the figure $8$-graph $H$ into the good $(2n+3)$-cycle $ABCA'e'_1e_2\dots e_{2n}$. By Theorem \ref{cth}, $c_{2n}$ contains at least one long edge. So, since $BC$ is long, the good odd cycle $ABCA'e'_1e_2\dots e_{2n}$ has at least two long edges. But this is impossible by Lemma \ref{gcy}. This completes the proof of Theorem \ref{thcon}. \begin{figure} \caption{Turning a figure $8$-graph into a cycle.} \label{F8} \end{figure} \end{document}
\begin{document} \title{Recovering measures from approximate values on balls} \renewcommand{\phi}{\varphi} \renewcommand{\epsilon}{\varepsilon} \begin{abstract} In a metric space $(X,d)$ we reconstruct an approximation of a Borel measure $\mu$ starting from a premeasure $q$ defined on the collection of closed balls, and such that $q$ approximates the values of $\mu$ on these balls. More precisely, under a geometric assumption on the distance ensuring a Besicovitch covering property, and provided that there exists a Borel measure on $X$ satisfying an asymptotic doubling-type condition, we show that a suitable packing construction produces a measure ${\hat \mu}^{q}$ which is equivalent to $\mu$. Moreover we show the stability of this process with respect to the accuracy of the initial approximation. We also investigate the case of signed measures. \end{abstract} \tableofcontents \section{Introduction} Is a Borel measure $\mu$ on a metric space $(X,d)$ fully determined by its values on balls? In the context of general measure theory, such a question appears to be of an extremely basic nature. The answer (when it is known) strongly depends upon the interplay between the measure and the metric space. A clear overview on the subject is given in \cite{handbook_geometry_banach_spaces}. Let us mention some known facts about this issue. When $X=\mathbb R^{n}$ is equipped with a norm, the answer to the above question is in the affirmative. 
The reason is the following: if two Radon measures $\mu$ and $\nu$ coincide on every ball $B_{r}(x)\subset \mathbb R^{n}$, then in particular they are mutually absolutely continuous; therefore, by the Radon-Nikodym-Lebesgue Differentiation Theorem, one has $\mu(A) = \int_{A}\eta\, d\nu = \nu(A)$ for any Borel set $A$, where \[ \eta(x) = \lim_{r \to 0} \frac{\mu (B_r (x))}{\nu (B_r(x))} = 1 \] is the Radon-Nikodym derivative of $\mu$ with respect to $\nu$ (defined for $\nu$-almost all $x\in \mathbb R^{n}$). More generally, the same fact can be shown for any pair of Borel measures on a finite-dimensional Banach space $X$. Unfortunately, the Differentiation Theorem is valid on a Banach space $X$ if and only if $X$ is finite-dimensional. Of course, this does not prevent in general the possibility that Borel measures are uniquely determined by their values on balls. Indeed, Preiss and Ti\v{s}er proved in \cite{preiss_tiser} that in separable Banach spaces, two finite Borel measures coinciding on all balls also coincide on all Borel sets. Nevertheless, if this coincidence turns out to be satisfied only on balls of radius, say, less than $1$, then the question still stands. In the case of separable metric spaces, Federer introduced in \cite{federer} a geometrical condition on the distance (see Definition \ref{def:federer}) implying a Besicovitch-type covering lemma that can be used to show the property above, i.e., that any finite Borel measure is uniquely identified by its values on closed balls. When this condition on the distance is dropped, some examples of metric spaces and of pairs of distinct Borel measures coinciding on balls of upper-bounded diameter are known (see \cite{davies}). 
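The limit defining $\eta$ is easy to observe numerically in the simplest setting. The following sketch (ours, purely illustrative) works on $X=\mathbb R$ with $\nu$ the Lebesgue measure and $d\mu = f\,d\nu$, $f(t)=1+t^2$; a direct computation gives $\mu(B_r(x))/\nu(B_r(x)) = 1+x^2+r^2/3$, which converges to $f(x)$ as $r\to 0$, as the Differentiation Theorem predicts.

```python
# Toy check of eta(x) = lim_{r -> 0} mu(B_r(x)) / nu(B_r(x)) on the real line,
# with nu = Lebesgue measure and mu having density f(t) = 1 + t^2 w.r.t. nu.

def f(t):
    return 1.0 + t * t

def mu_ball(x, r):
    # closed-form integral of f over [x - r, x + r]
    a, b = x - r, x + r
    return (b - a) + (b**3 - a**3) / 3.0

def nu_ball(x, r):
    return 2.0 * r

x = 0.7
ratios = [mu_ball(x, r) / nu_ball(x, r) for r in (1.0, 0.1, 0.001)]
# the ratios decrease towards f(0.7) = 1.49 as r shrinks
```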
Here we consider the case of a separable metric space $(X,d)$ where the Besicovitch covering lemma (or at least some generalized version of it) holds, and we ask the following questions: \begin{question}\label{question_1}\it How can we reconstruct a Borel measure from its values on balls, and especially, what about the case of signed measures? \end{question} A classical approach to construct a measure from a given \emph{premeasure} $p$ defined on a family $\mathcal{C}$ of subsets of $X$ (here the premeasure $p$ is defined on closed balls) is to apply Carath\'eodory constructions (Method I and Method II, see \cite{bruckner_thomson}) to obtain an outer measure. We recall that a premeasure $p$ is a nonnegative function, defined on a given family $\mathcal{C}$ of subsets of $X$, such that $\emptyset \in \mathcal{C}$ and $p(\emptyset)=0$. By Method I, an outer measure $\mu^\ast$ is defined starting from $p$ as \[ \mu^\ast (A) = \inf \left\lbrace \sum_{k=1}^\infty p(B_k) \: : \: B_k \in \mathcal{C} \text{ and } A \subset \bigcup_{k=1}^\infty B_k \right\rbrace \: , \] for any $A \subset X$. But, as explained in \cite{bruckner_thomson} (Section $3.2$), Method I does not take into account that $X$ is a metric space, thus the resulting outer measure can be incompatible with the metric on $X$, in the sense that open sets are not necessarily $\mu^\ast$-measurable. On the other hand, Method II is used to define Hausdorff measures (see Theorem~\ref{thm_caratheodory_construction}) and it always produces a metric outer measure $\mu^{\ast}$, for which Borel sets are $\mu^\ast$-measurable. As for a signed measure $\mu = \mu^+ - \mu^-$, the main problem is that, given a closed ball $B$, it is impossible to directly reconstruct $\mu^+ (B)$ and $\mu^- (B)$ from $\mu(B)$. 
The idea is, then, to apply Carath\'eodory's construction to the premeasure $p^+ (B) = \left( \mu(B) \right)_+$ (here $a_{+}$ denotes the positive part of $a\in \mathbb R$) and check that the resulting outer measure is actually $\mu^+$. Then, by a similar argument we recover $\mu^{-}$. Now we consider the problem of reconstructing a measure $\mu$ from approximate values on balls. We thus assume that a premeasure $q$, defined on closed balls, is given and satisfies the following two properties: for some $0<\alpha\le 1$, $C\ge 1$, and $r_{0}>0$, \begin{equation} \label{eq_mean_premeasure} \begin{split} (i)&\ \ q(B_{r}(x)) \ge C^{-1}\mu(B_{\alpha r}(x)) \,,\\ (ii)&\ \ q(B_{r}(x)) \le C \mu(B_{r}(x)) \end{split} \end{equation} holds for all $0<r<r_{0}$ and all $x\in X$. \begin{question} \label{question_2} \it Given a positive Borel measure $\mu$ and a premeasure $q$ defined on balls and satisfying the two conditions in \eqref{eq_mean_premeasure}, is it possible to reconstruct an approximation up to constants of $\mu$ from $q$? What about the case when $\mu$ is a signed measure? \end{question} First notice that under the assumptions \eqref{eq_mean_premeasure} the best one can obtain is a reconstruction ${\hat \mu}$ of $\mu$ such that \[ \hat C^{-1}\mu \le {\hat \mu} \le \hat C\, \mu \] for some constant $\hat C\ge 1$. We stress that, in the case $\alpha = 1$ in \eqref{eq_mean_premeasure}(i), this can be easily obtained via Carath\'eodory Method II (with $\hat C = C$) while in the case $0<\alpha<1$ Carath\'eodory construction does not provide in general such a measure ${\hat \mu}$. Loosely speaking, a loss of mass can happen in the recovery process, as the example presented in section \ref{section:2.1} shows. 
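For intuition on \eqref{eq_mean_premeasure}, consider the averaged premeasure $q(B_{r}(x)) = \frac 1r \int_{0}^{r} \mu(B_{s}(x))\, ds$: since $\mu(B_{s}(x)) \le \mu(B_{r}(x))$ for $s\le r$, condition (ii) holds, and since $q(B_{r}(x)) \ge \frac 1r \int_{\alpha r}^{r} \mu(B_{s}(x))\, ds \ge (1-\alpha)\mu(B_{\alpha r}(x))$, condition (i) holds with $C=(1-\alpha)^{-1}$ for any $0<\alpha<1$. The sketch below (ours; names and the choice of atoms are purely illustrative) checks this numerically for a purely atomic $\mu$ on $\mathbb R$:

```python
import numpy as np

# mu = a finite sum of unit Dirac masses on R (positions chosen arbitrarily);
# q(B_r(x)) = (1/r) * integral_0^r mu(B_s(x)) ds, approximated by a midpoint sum.
atoms = np.array([-1.3, -0.2, 0.1, 0.4, 0.9, 2.0])

def mu_ball(x, s):
    return int(np.count_nonzero(np.abs(atoms - x) <= s))

def q_ball(x, r, n=4000):
    s = (np.arange(n) + 0.5) * (r / n)          # midpoint rule on [0, r]
    return float(np.mean([mu_ball(x, si) for si in s]))

alpha = 0.5
C = 1.0 / (1.0 - alpha)
checks = []
for x in (-0.5, 0.0, 0.3):
    for r in (0.25, 0.7, 1.5):
        q = q_ball(x, r)
        cond_ii = q <= C * mu_ball(x, r) + 1e-2          # q(B_r) <= C mu(B_r)
        cond_i = C * q >= mu_ball(x, alpha * r) - 1e-2   # q(B_r) >= C^{-1} mu(B_{alpha r})
        checks.append(cond_i and cond_ii)
# every entry of `checks` is True
```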
In order to recover $\mu$, or at least some measure equivalent or comparable to $\mu$, the choice of the centers of the balls in the collection, which are used to cover the support of $\mu$, is crucial. Indeed they must be placed in some nearly-optimal positions, such that even the concentric balls with smaller radius have a significant overlap with the support of $\mu$. This led us to consider a packing-type construction. The packing construction is mainly used to define the packing $s$-dimensional measure and its associated notion of packing dimension. It is in some sense dual to the construction leading to Hausdorff measure and dimension, and was introduced by C. Tricot in \cite{tricot_0}. Then Tricot and Taylor in \cite{tricot} extended it to the case of a general premeasure. In our setting we show that this packing construction can be formulated in a simpler and more manageable way (see section \ref{sect:packtype}). Since the lower bound on $q(B_{r}(x))$ is given in terms of $\mu(B_{\alpha r}(x))$, in the case $0<\alpha<1$ and under some additional assumptions on the metric space $(X,d)$ we prove a suitable version of the Besicovitch covering lemma (see Proposition \ref{prop_diffusion_in_X} and Corollary \ref{bescivoic_with_doubling_balls}), which represents a key ingredient in our construction and seems also of independent interest. Some further explanations about the assumption \eqref{eq_mean_premeasure} on $q(B_{r}(x))$ are in order. An example of $q(B_{r}(x))$ satisfying \eqref{eq_mean_premeasure} is \begin{equation}\label{varif-prem} q(B_{r}(x)) = \frac 1r \int_{0}^{r} \mu(B_{s}(x))\, ds\,; \end{equation} more generally one could consider \[ q(B_{r}(x)) = \frac 1r \int_{0}^{r} \mu(B_{s}(x))\,\omega(s/r) ds \] where $\omega:(0,1)\to (0,+\infty)$ is a non-increasing probability density function. 
Notice that in $\mathbb R^{n}$ this last expression corresponds to the convolution of $\mu$ with $\omega(|x|/r)$, while for a general metric space it may be understood as an extension of the convolution operation. We also remark that the starting motivation of our study is related to the problem of rectifiability of a $d$--varifold $V$ in $\mathbb R^{n}$ obtained as the limit of ``discrete varifolds'' (see \cite{Blanche-thesis,Blanche-rectifiability,BuetLeonardiMasnou}). The paper is organized as follows. In Section $2$, we explain how to reconstruct a positive measure and then a signed measure (Theorem~\ref{prop_positive_case}) from their values on balls, thanks to Carath\'eodory's construction, answering Question~\ref{question_1}. Section $3$ deals with Question~\ref{question_2}, that is, the reconstruction of a measure starting from a premeasure satisfying \eqref{eq_mean_premeasure}. After explaining the limitations of Carath\'eodory's construction for this problem, we prove our main result, Theorem~\ref{thm_main}, which says that by suitable packing constructions one can reconstruct a signed measure equivalent to the initial one in any metric space $(X,d)$ which is directionally limited and endowed with an asymptotically doubling measure $\nu$ (see Hypothesis \ref{hypo1} on page \pageref{hypo1}).
\subsection*{Some notations}
Let $(X,d)$ be a metric space.
\begin{itemize}
\item $\mathcal{B}(X)$ denotes the $\sigma$--algebra of Borel subsets of $X$.
\item $B_r (x) = \left\lbrace y \in X \: | \: d(y,x) \leq r \right\rbrace$ is the closed ball of radius $r>0$ and center $x \in X$.
\item $U_r (x) = \left\lbrace y \in X \: | \: d(y,x) < r \right\rbrace$ is the open ball of radius $r>0$ and center $x \in X$.
\item $\mathcal{C}$ denotes the collection of closed balls of $X$ and, for $\delta > 0$, $\mathcal{C}_\delta$ denotes the collection of closed balls of diameter $\leq \delta$.
\item $\mathcal{L}^n$ is the Lebesgue measure in $\mathbb R^n$.
\item $\mathcal{P} (X)$ is the set of all subsets of $X$.
\item ${\rm card}\, A$ is the cardinality of the set $A$.
\item $\mathbb N^\ast = \{ 1 , 2 , \ldots \}$.
\end{itemize}
\section{Carath\'eodory metric construction of outer measures}
We recall here some standard definitions and well-known facts about general measures, focusing in particular on the construction of measures from premeasures, in the sense of Carath\'eodory's Method II \cite{bruckner_thomson}.
\subsection{Outer measures and metric outer measures}
\begin{dfn}[Outer measure]
Let $X$ be a set, and let $\mu^\ast : \mathcal{P} (X) \rightarrow [0,+\infty]$ be a set function satisfying
\begin{enumerate}[$(i)$]
\item $\mu^\ast (\emptyset ) = 0$.
\item $\mu^\ast$ is monotone: if $A \subset B \subset X$, then $\mu^\ast (A) \leq \mu^\ast (B)$.
\item $\mu^\ast$ is \emph{countably subadditive}: if $(A_k)_{k \in \mathbb N}$ is a sequence of subsets of $X$, then
\[
\mu^\ast \left( \bigcup_{k=1}^\infty A_k \right) \leq \sum_{k=1}^\infty \mu^\ast (A_k) \: .
\]
\end{enumerate}
Then $\mu^\ast$ is called an \emph{outer measure} on $X$.
\end{dfn}
In order to obtain a measure from an outer measure, one defines the sets that are measurable with respect to $\mu^\ast$.
\begin{dfn}[$\mu^\ast$--measurable set]
Let $\mu^\ast$ be an outer measure on $X$. A set $A \subset X$ is \emph{$\mu^\ast$--measurable} if for all sets $E \subset X$,
\[
\mu^\ast (E) = \mu^\ast (E \cap A) + \mu^\ast (E \setminus A) \: .
\]
\end{dfn}
We can now define the measure associated with an outer measure. Thanks to the definition of $\mu^\ast$--measurable sets, the additivity of $\mu^\ast$ on the measurable sets is straightforward; in fact, $\mu^\ast$ turns out to be $\sigma$--additive on $\mu^\ast$--measurable sets.
\begin{theo}[Measure associated with an outer measure, see Theorem $2.32$ in \cite{bruckner_thomson}]
\label{thm_from_outer_measure_to_measure}
Let $X$ be a set, $\mu^\ast$ an outer measure on $X$, and $\mathcal{M}$ the class of $\mu^\ast$--measurable sets. Then $\mathcal{M}$ is a $\sigma$--algebra and $\mu^\ast$ is countably additive on $\mathcal{M}$. Thus the set function $\mu$ defined on $\mathcal{M}$ by $\mu(A) = \mu^\ast (A)$ for all $A \in \mathcal{M}$ is a measure.
\end{theo}
We now introduce \emph{metric outer measures}.
\begin{dfn}
Let $(X, d)$ be a metric space and $\mu^\ast$ be an outer measure on $X$. Then $\mu^\ast$ is called a \emph{metric outer measure} if
\[
\mu^\ast(E \cup F) = \mu^\ast(E) + \mu^\ast(F)
\]
for any $E,F \subset X$ such that $d(E,F) >0$.
\end{dfn}
When $\mu^\ast$ is a metric outer measure, every Borel set is $\mu^\ast$--measurable and thus the measure $\mu$ associated with $\mu^\ast$ is a Borel measure.
\begin{theo}[Carath\'eodory's Criterion, see Theorem $3.8$ in \cite{bruckner_thomson}]
Let $\mu^\ast$ be an outer measure on a metric space $(X,d)$. Then every Borel set in $X$ is $\mu^{\ast}$--measurable if and only if $\mu^\ast$ is a metric outer measure. In particular, a metric outer measure is a Borel measure.
\end{theo}
We recall two approximation properties of Borel measures defined on metric spaces.
\begin{theo}[see Theorems $3.13$ and $3.14$ in \cite{bruckner_thomson}]
\label{thm_approximation_Borel_measure}
Let $(X,d)$ be a metric space and $\mu$ be a Borel measure on $X$.
\begin{itemize}
\item \emph{Approximation from inside}: Let $F$ be a Borel set such that $\mu (F) < +\infty$; then for any $\epsilon > 0$ there exists a closed set $F_\epsilon \subset F$ such that $\mu (F \setminus F_\epsilon) < \epsilon$.
\item \emph{Approximation from outside}: Assume that $\mu$ is finite on bounded sets and let $F$ be a Borel set; then
\[
\mu (F) = \inf \{ \mu (U) \: : \: F \subset U, \, U \text{ open set} \} \: .
\]
\end{itemize}
\end{theo}
We can now introduce Carath\'eodory's construction of metric outer measures (Method II, see \cite{bruckner_thomson}).
\begin{dfn}[Premeasure]
Let $X$ be a set and $\mathcal{C}$ be a family of subsets of $X$ such that $\emptyset \in \mathcal{C}$. A nonnegative function $p$ defined on $\mathcal{C}$ and such that $p(\emptyset) = 0$ is called a \emph{premeasure}.
\end{dfn}
\begin{theo}[Carath\'eodory's construction, Method II]
\label{thm_caratheodory_construction}
Suppose $(X,d)$ is a metric space and $\mathcal{C}$ is a family of subsets of $X$ which contains the empty set. Let $p$ be a premeasure on $\mathcal{C}$. For each $\delta > 0$, let
\[
\mathcal{C}_\delta= \{A \in \mathcal{C} \: | \: \diam(A) \leq \delta\}
\]
and for any $E \subset X$ define
\[
\nu^p_\delta(E) = \inf \left\lbrace \sum_{i=0}^\infty p(A_i)\,\bigg| \, E \subset \bigcup_{i\in \mathbb N} A_i,\ \forall i, \: A_i\in \mathcal{C}_\delta \right\rbrace.
\]
As $\nu^p_\delta \geq \nu^p_{\delta^\prime}$ when $\delta \leq \delta^\prime$, the limit
\[
\nu^{p,\ast} (E) = \lim_{\delta \to 0} \nu^p_\delta(E)
\]
exists (possibly infinite). Then $\nu^{p,\ast}$ is a metric outer measure on $X$.
\end{theo}
\subsection{Effects of Carath\'eodory's construction on positive Borel measures}
Let $(X,d)$ be a metric space and $\mu$ be a positive Borel $\sigma$--finite measure on $X$.
Let $\mathcal{C}$ be the set of closed balls and let $p$ be the premeasure defined on $\mathcal{C}$ by
\begin{equation}
\label{dfn_premeasure}
\begin{array}{lllll}
p & : & \mathcal{C} & \rightarrow & [0,+\infty] \\
& & B & \mapsto & \mu (B)
\end{array}
\end{equation}
Let $\mu^{p,\ast}$ be the metric outer measure obtained by Carath\'eodory's metric construction applied to $(\mathcal{C},p)$, and let $\mu^p$ be the Borel measure associated with $\mu^{p,\ast}$. Then, the following question arises.
\begin{question}
\label{question_positive_case}\it Do we have $\mu^p = \mu$? In other words, can we recover the initial measure by Carath\'eodory's Method II?
\end{question}
The following lemma establishes one of the two inequalities needed to answer Question \ref{question_positive_case} in the affirmative.
\begin{lemma}
\label{lem_easy_inequality}
Let $(X,d)$ be a metric space and $\mu$ be a positive Borel measure on $X$. Then, with the same notation as above, we have $\mu \leq \mu^p$.
\end{lemma}
\begin{proof}
Let $A \subset X$ be a Borel set; we have to show that $\mu (A) \leq \mu^p (A) = \mu^{p,\ast} (A)$. This inequality relies only on the definition of $\mu^p_\delta$ as an infimum. Indeed, let $\delta >0$ be fixed; then for any $\eta > 0$ there exists a countable collection of closed balls $( B_j^\eta )_{j \in \mathbb N} \subset \mathcal{C}_\delta$ such that
\[
A \subset \bigcup_j B_j^\eta\qquad \text{and}\qquad \mu^p_\delta (A) \geq \sum_{j=1}^\infty p (B_j^\eta) - \eta\,,
\]
so that
\[
\mu^p_\delta (A) + \eta \geq \sum_{j=1}^\infty p (B_j^\eta) = \sum_{j=1}^\infty \mu (B_j^\eta) \geq \mu \Big( \bigcup_j B_j^\eta \Big) \geq \mu (A) \: .
\]
Letting $\eta \rightarrow 0$ and then $\delta \rightarrow 0$ leads to $\mu (A) \leq \mu^p (A)$.
\end{proof}
A consequence of Davies' result \cite{davies} is that the reverse inequality cannot hold in general. We need extra assumptions on the metric space $(X,d)$ ensuring that open sets are ``well approximated'' by closed balls, that is, we need some specific covering property. In $\mathbb R^n$ with the Euclidean norm, this approximation of open sets by disjoint unions of balls is provided by the Besicovitch Theorem, which we recall here:
\begin{theo}[Besicovitch Theorem, see Corollary $1$ p. $35$ in \cite{evans}]
\label{thm_besicovitch}
Let $\mu$ be a Borel measure on $\mathbb R^n$ and consider any collection $\mathcal{F}$ of non-degenerate closed balls. Let $A$ denote the set of centers of the balls in $\mathcal{F}$. Assume $\mu (A) < +\infty$ and that
\[
\inf \left\lbrace r>0 \: | \: B_r (a) \in \mathcal{F} \right\rbrace = 0 \qquad \forall\, a \in A\,.
\]
Then, for every open set $U \subset \mathbb R^n$, there exists a countable collection $\mathcal{G}$ of disjoint balls in $\mathcal{F}$ such that
\[
\bigsqcup_{B \in \mathcal{G}} B \subset U \quad \text{and} \quad \mu \left( (A \cap U) \setminus \bigsqcup_{B \in \mathcal{G}} B \right) = 0 \: .
\]
\end{theo}
A generalization of the Besicovitch Theorem to metric spaces is due to Federer, under a geometric assumption involving the distance function.
\begin{dfn}[Directionally limited distance, see $2.8.9$ in \cite{federer}]
\label{def:federer}
Let $(X,d)$ be a metric space, $A \subset X$ and $\xi > 0$, $0<\eta\leq \frac{1}{3}$, $\zeta \in \mathbb N^\ast$. The distance $d$ is said to be \emph{directionally $(\xi,\eta,\zeta)$--limited at $A$} if the following holds.
Take any $a \in A$ and $B \subset A \cap \left( U_\xi (a) \setminus \{a\} \right)$ satisfying the following property: let $b, c \in B$ with $b \neq c$ and assume without loss of generality that $d(a,b) \geq d(a,c)$; then for all $x \in X$ such that $d(a,x) = d(a,c)$ and $d(a,x) + d(x,b) = d(a,b)$ one has
\begin{equation}\label{directionally_limited_distance_eq}
\frac{d(x,c)}{d(a,c)} \geq \eta \: .
\end{equation}
Then ${\rm card}\, B \leq \zeta$.
\end{dfn}
Let us say a few words about this definition. If $(X,|\cdot|)$ is a Banach space with a strictly convex norm, then the above relations involving $x$ imply that
\[
x = a + \frac{|a-c|}{|a-b|} (b-a ) \: ,
\]
hence in this case \eqref{directionally_limited_distance_eq} is equivalent to
\[
\frac{d(x,c)}{d(a,c)} = \left| \frac{c-a}{|c-a|} - \frac{b-a}{|b-a|} \right| \geq \eta \: .
\]
Consequently, if $X$ is finite-dimensional, then thanks to the compactness of the unit sphere, for a given $\eta$ there exists $\zeta \in \mathbb N$ such that $(X, |\cdot |)$ is directionally $(\xi,\eta,\zeta)$--limited for all $\xi>0$. Hereafter we provide two examples of metric spaces that are not directionally limited.
\begin{xmpl}
Consider in $\mathbb R^2$ the union $X$ of a countable number of half-lines joining at the same point $a$. Then the geodesic metric $d$ induced on $X$ by the ambient metric is not directionally limited at $\{a\}$.
\noindent
\begin{minipage}{0.70\textwidth}
Indeed, let $B = X \cap \{y \: : \: d(a,y) = \xi \}$ and let $b, c \in B$ lie on two different half-lines, at the same distance $d(a,b) = d(a,c)=\xi$ from $a$. Then $x \in X$ such that $d(a,x) =d(a,c)=\xi$ and $d(b,x) = d(a,b) - d(a,c) = 0$ implies $x=b$, and thus
\[
\frac{d(x,c)}{d(a,c)} = \frac{d(b,c)}{\xi} = \frac{2\xi}{\xi} = 2 \: ,
\]
but ${\rm card} \, B$ is not finite.
\end{minipage}
\quad
\begin{minipage}{0.25\textwidth}
\includegraphics[scale=0.8]{metric_space_non_directionally_limited.pdf}
\end{minipage}
\end{xmpl}
\begin{xmpl}
If $H$ is an infinite-dimensional separable Hilbert space with Hilbert basis $B=(e_k)_{k \in \mathbb N}$, $a \in H$ and $b= a + e_j, \, c= a + e_k \in a+B$ with $j \neq k$, then the Hilbert norm is strictly convex, so that $d(a,x) = d(a,c)$, $d(b,x) = d(a,b) - d(a,c)$ uniquely define $x$ as
\[
x = a + \frac{|e_k|}{|e_j|} e_j = b \text{ and } \frac{d(x,c)}{d(a,c)} = \left| e_k - e_j \right| = \sqrt{2} \geq \eta
\]
for all $\eta \leq \frac{1}{3}$, while ${\rm card}\, (a + B)$ is infinite. Therefore $H$ is nowhere directionally limited.
\end{xmpl}
We can now state the generalized versions of the Besicovitch Covering Lemma and of the Besicovitch Theorem for directionally limited metric spaces.
\begin{theo}[Generalized Besicovitch Covering Lemma, see $2.8.14$ in \cite{federer}]
\label{thm_generalized_besicovic_federer_covering_lemma}
Let $(X,d)$ be a separable metric space directionally $(\xi,\eta,\zeta)$--limited at $A \subset X$. Let $0 < \delta < \frac{\xi}{2}$ and let $\mathcal{F}$ be a family of closed balls of radii less than $\delta$ such that each point of $A$ is the center of some ball of $\mathcal{F}$. Then, there exist $2 \zeta +1$ countable subfamilies of $\mathcal{F}$ of disjoint closed balls, $\mathcal{G}_1, \ldots , \mathcal{G}_{2 \zeta + 1}$, such that
\[
A \subset \bigcup_{j = 1}^{2 \zeta + 1} \bigsqcup_{B \in \mathcal{G}_j} B \: .
\]
\end{theo}
\begin{remk}
In $\mathbb R^n$ endowed with the Euclidean norm it is possible to take $\xi = +\infty$ and $\zeta$ depending only on $\eta$ and $n$. If we fix $\eta = \frac{1}{3}$, then $\zeta = \zeta_n$ only depends on the dimension $n$.
\end{remk}
\begin{theo}[Generalized Besicovitch Theorem, see $2.8.15$ in \cite{federer}]
\label{thm_generalized_besicovic_federer}
Let $(X,d)$ be a separable metric space directionally $(\xi,\eta,\zeta)$--limited at $A \subset X$. Let $\mathcal{F}$ be a family of closed balls of $X$ satisfying
\begin{equation}
\label{hyp_fine_cover}
\inf \left\lbrace r>0 \: | \: B_r (a) \in \mathcal{F} \right\rbrace = 0, \qquad \forall\, a\in A\:,
\end{equation}
and let $\mu$ be a positive Borel measure on $X$ such that $\mu (A) < +\infty$. Then, for any open set $U \subset X$ there exists a countable disjoint subfamily $\mathcal{G}$ of $\mathcal{F}$ such that
\[
\bigsqcup_{B \in \mathcal{G}} B \subset U \quad \text{and} \quad \mu \left( (A \cap U) \setminus \bigsqcup_{B \in \mathcal{G}} B \right) = 0 \: .
\]
\end{theo}
We can now prove the coincidence of the initial measure and the reconstructed measure under the assumptions of Theorem~\ref{thm_generalized_besicovic_federer}.
\begin{prop}
\label{prop_positive_case}
Let $(X,d)$ be a separable metric space directionally $(\xi,\eta,\zeta)$--limited at $X$. Let $\mu$ be a positive Borel measure on $X$, finite on bounded sets. Let $\mathcal{C}$ be the family of closed balls in $X$ and let $p$ be the premeasure defined on $\mathcal{C}$ by \eqref{dfn_premeasure}. Denote by $\mu^{p,\ast}$ the metric outer measure obtained by Carath\'eodory's metric construction applied to $(\mathcal{C},p)$ and by $\mu^p$ the Borel measure associated with $\mu^{p,\ast}$. Then $\mu^p = \mu$.
\end{prop}
\begin{proof}
\textbf{Step one.} We first show that $\mu^p$ is a Borel measure, finite on bounded sets. First we recall that $\mu^{p,\ast}$ is a metric outer measure by Theorem~\ref{thm_caratheodory_construction}; then, thanks to Theorem \ref{thm_from_outer_measure_to_measure}, $\mu^p$ is a Borel measure. Let us prove that $\mu^p$ is finite on bounded sets.
Fix a bounded Borel set $A \subset X$ and apply the generalized Besicovitch Covering Lemma (Theorem~\ref{thm_generalized_besicovic_federer_covering_lemma}) with the family
\[
\mathcal{F}_\delta = \left\lbrace B = B_r(x) \text{ closed ball} \: : \: x \in A \text{ and } \diam B \leq \delta \right\rbrace \: ,
\]
to get $2 \zeta + 1$ countable subfamilies $\mathcal{G}_1^\delta, \ldots , \mathcal{G}_{2\zeta +1}^\delta$ of disjoint balls in $\mathcal{F}_\delta$ such that
\[
A \subset \bigcup_{j=1}^{2\zeta +1} \bigsqcup_{B \in \mathcal{G}_j^\delta} B \: .
\]
Therefore,
\[
\mu_\delta^p (A) \leq \sum_{j=1}^{2 \zeta + 1} \sum_{B \in \mathcal{G}_j^\delta} p(B) \leq \sum_{j=1}^{2 \zeta +1} \mu \left( \bigsqcup_{B \in \mathcal{G}_j^\delta} B \right) \leq (2\zeta + 1) \mu (A + B_\delta (0)) \leq (2\zeta + 1) \mu (A + B_1 (0)) \: ,
\]
where $A + B_1 (0) = \bigcup_{x \in A} B_1 (x)$ is bounded, thus $ \mu (A + B_1 (0)) < +\infty$ by assumption, and hence for all $0 < \delta < 1$
\[
\mu_\delta^p (A) \leq (2\zeta + 1) \mu (A + B_1 (0)) < +\infty \: ,
\]
whence $\mu^{p,\ast}(A) < +\infty$. The claim is proved since $\mu^p = \mu^{p, \ast}$ on Borel sets.
\textbf{Step two.} We now prove that for any open set $U \subset X$ it holds that $ \mu^p (U) \leq \mu (U)$. Let $\delta > 0$ be fixed. The collection of closed balls
\[
\mathcal{C}_\delta = \left\lbrace B_r (x) \: | \: x \in U, \: 0< 2r \leq \delta \right\rbrace
\]
satisfies assumption \eqref{hyp_fine_cover}. We can apply Theorem~\ref{thm_generalized_besicovic_federer} to $\mu^p$ and get a countable collection $\mathcal{G}^\delta$ of disjoint balls in $\mathcal{C}_\delta$ such that
\[
\bigsqcup_{B \in \mathcal{G}^\delta} B \subset U \quad \textrm{and} \quad \mu^p (U) = \mu^p \left( \bigsqcup_{B \in \mathcal{G}^\delta} B \right) \: .
\]
At the same time we have
\begin{equation}
\label{proof_positive_measure_case_1}
\mu^p_\delta \left( \bigsqcup_{B \in \mathcal{G}^\delta} B \right) \leq \sum_{B \in \mathcal{G}^\delta } p (B) = \sum_{B \in \mathcal{G}^\delta} \mu (B) = \mu \left( \bigsqcup_{B \in \mathcal{G}^\delta} B \right) \leq \mu (U) \: .
\end{equation}
We fix any decreasing infinitesimal sequence $(\delta_n)_{n \in \mathbb N}$ and define $\displaystyle A = \bigcap_{n \in \mathbb N} \left( \bigsqcup_{B \in \mathcal{G}^{\delta_n}} B \right)$. We obtain $\displaystyle \mu^p (U) = \mu^p (A)$ and $\displaystyle A \subset \bigsqcup_{B \in \mathcal{G}^{\delta_n}} B$ for any $n$. Thus, owing to \eqref{proof_positive_measure_case_1}, we have
\[
\mu^p_{\delta_n} (A) \leq \mu^p_{\delta_n} \left( \bigsqcup_{B \in \mathcal{G}^{\delta_n}} B \right) \leq \mu (U) \quad \text{and then} \quad \mu^p (U) = \mu^p(A) \leq \mu (U) \: .
\]
This shows that $\mu^p (U) \leq \mu (U)$, as wanted.
\textbf{Step three.} Since $\mu$ and $\mu^p$ are Borel measures, finite on bounded sets, they are also outer regular (see Theorem~\ref{thm_approximation_Borel_measure}). Then for any Borel set $B \subset X$, and owing to Step two, it holds that
\begin{align*}
\mu^p (B)& = \inf \{ \mu^p (U) \: | \: U \textrm{ open, } B \subset U \} \\
& \leq \inf \{ \mu (U) \: | \: U \textrm{ open, } B \subset U \} = \mu (B) \: .
\end{align*}
Coupling this last inequality with Lemma~\ref{lem_easy_inequality}, we obtain $\mu^p = \mu$.
\end{proof}
\subsection{Carath\'eodory's construction for a signed measure}
We recall that a Borel signed measure $\mu$ on $(X,d)$ is an extended real-valued set function $\mu : \mathcal{B} (X) \rightarrow [-\infty,+\infty]$ such that $\mu (\emptyset)=0$ and, for any sequence of disjoint Borel sets $(A_k)_k$, one has
\begin{equation}
\label{eq_countably_additive}
\sum_{k=1}^\infty \mu (A_k) = \mu \left( \bigcup_{k=1}^\infty A_k \right) \: .
\end{equation}
\begin{remk}
Notice that when $\displaystyle \mu \left( \bigcup_{k=1}^\infty A_k \right)$ is finite, its value does not depend on the arrangement of the $A_k$; therefore the series on the left-hand side of \eqref{eq_countably_additive} is commutatively convergent, thus absolutely convergent. In particular, if we write the Hahn decomposition $\mu = \mu^+ - \mu^-$, with $\mu^{+}$ and $\mu^{-}$ being two non-negative and mutually orthogonal measures, then $\mu^+(X)$ and $\mu^-(X)$ cannot both be $+\infty$.
\end{remk}
The question is now the following:
\begin{question}\it Let $(X,d)$ be a metric space, separable and directionally $(\xi,\eta,\zeta)$--limited at $X$, and let $\mu$ be a Borel signed measure, finite on bounded sets. Is it possible to recover $\mu$ from its values on closed balls by some Carath\'eodory-type construction?
\end{question}
The main difference with the case of a positive measure is that $\mu$ is not monotone, and thus the previous construction is not directly applicable. A simple idea could be to rely on the Hahn decomposition of $\mu$: indeed, $\mu^+$ and $\mu^-$ are positive Borel measures, and since one of them is finite, both are finite on bounded sets (recall that $\mu$ is finite on bounded sets by assumption). Once again, we cannot directly apply Carath\'eodory's construction to $\mu^+$ or $\mu^-$, since we cannot reconstruct $\mu^+ (B)$ and $\mu^- (B)$ simply knowing $\mu(B)$ for any closed ball $B$. We thus try to apply Carath\'eodory's construction not with $\mu^+ (B)$, but with $\left( \mu (B) \right)_+$, where $a_+$ (resp. $a_-$) denotes the positive part $\max (a,0)$ (resp. the negative part $\max(-a,0)$) of any $a \in \mathbb R$. To be more precise, we state the following definition.
\begin{dfn}
\label{caratheodory_construction_signed}
Let $\mu$ be a Borel signed Radon measure in $X$ and let $\mathcal{C}$ be the family of closed balls in $X$.
We define
\[
\begin{array}{lcrclclcrcl}
p_+ & : & \mathcal{C} & \longrightarrow & \mathbb R_+ & \text{and} & p_- & : & \mathcal{C} & \longrightarrow & \mathbb R_+\\
& & B & \longmapsto & \displaystyle \left( \mu(B) \right)_+ & & & & B & \longmapsto & \displaystyle \left( \mu(B) \right)_- \: .
\end{array}
\]
Then, according to Carath\'eodory's construction, we define the metric outer measure $\mu^{p_+, \ast}$ such that for any $A \subset X$,
\[
\mu^{p_+, \ast}(A) = \lim_{\delta \to 0} \mu^{p_+, \ast}_\delta(A) = \lim_{\delta \to 0} \inf \left\lbrace \sum_{i=0}^\infty p_+ (A_i) \,\bigg| \, A \subset \bigcup_{i\in \mathbb N} A_i \text{ and, for all } i, \: A_i\in \mathcal{C},\ \diam(A_{i})\le \delta \right\rbrace \: .
\]
Similarly we define $\mu^{p_-,\ast}$, and we then call $\mu^{p_+}$ and $\mu^{p_-}$ the Borel measures associated with $\mu^{p_+,\ast}$ and $\mu^{p_-,\ast}$. Finally, we set $\mu^p = \mu^{p_+} - \mu^{p_-}$.
\end{dfn}
\begin{theo}
\label{thm_signed_case}
Let $(X,d)$ be a metric space, separable and directionally $(\xi,\eta,\zeta)$--limited at $X$, and let $\mu = \mu^+ - \mu^-$ be a Borel signed measure on $X$, finite on bounded sets. Let $\mu^{p} = \mu^{p_+}-\mu^{p_-}$ be as in Definition~\ref{caratheodory_construction_signed}. Then $\mu^p = \mu$.
\end{theo}
\begin{proof}
We observe that $\mu^{p_+}$ and $\mu^{p_-}$ are Borel measures: indeed, by construction $\mu^{p_+,\ast}$ and $\mu^{p_-,\ast}$ are metric outer measures, and Carath\'eodory's criterion implies that the associated measures are Borel measures.
Furthermore, for any closed ball $B \in \mathcal{C}$, if we set $p(\mu^+) (B) = \mu^+ (B)$ (note that $p(\mu^+)$ is the canonical premeasure associated with $\mu^+$, while $p_+$ is not a priori associated with any measure), then
\[
p_+ (B) = \left( \mu (B) \right)_+ \leq \mu^+ (B) = p(\mu^+) (B) \: ,
\]
thus by construction $\mu^{p_+,\ast} \leq \mu^{p(\mu^+), \ast}$, and then one gets $\mu^{p_+} \leq \mu^{p(\mu^+)}$; similarly one can show that $\mu^{p_-} \leq \mu^-$. Thanks to Proposition~\ref{prop_positive_case}, we have proven that
\begin{equation}
\label{eq_first_ineq_signed_case}
\mu^{p_+} \leq \mu^+ \quad \text{and} \quad \mu^{p_-} \leq \mu^- \: .
\end{equation}
In particular, $\mu^{p_+}$ and $\mu^{p_-}$ are finite on bounded sets, as is the case for $\mu^+$ and $\mu^-$. Let now $A \subset X$ be a Borel set. It remains to prove that $\mu^{p_+} (A) = \mu^{p_+,\ast} (A) \geq \mu^+ (A)$ (and the same for $\mu^{p_-}$). We argue exactly as in the proof of Lemma~\ref{lem_easy_inequality}. Let $\delta >0$; then for any $\eta > 0$ there exists a countable collection of closed balls $( B_j^\eta )_{j \in \mathbb N} \subset \mathcal{C}_\delta$ such that $A \subset \bigcup_j B_j^\eta$ and $\displaystyle \mu^{p_+,\ast}_\delta (A) \geq \sum_{j=1}^\infty p_+ (B_j^\eta) - \eta$, so that
\[
\mu^{p_+,\ast}_\delta (A) + \eta \geq \sum_{j=1}^\infty p_+ (B_j^\eta) = \sum_{j=1}^\infty \left( \mu (B_j^\eta) \right)_+ \geq \sum_{j=1}^\infty \mu (B_j^\eta) \geq \mu \left( \bigcup_j B_j^\eta \right) \geq \mu (A) \: .
\]
Letting $\eta \rightarrow 0$ and then $\delta \rightarrow 0$ gives
\begin{equation}
\label{proof_signed_case_1}
\mu (A) \leq \mu^{p_+,\ast} (A) = \mu^{p_+} (A) \: .
\end{equation}
We recall that $\mu^+$ and $\mu^-$ are mutually singular, hence there exists a Borel set $P \subset X$ such that for any Borel set $A$
\[
\mu^+ (A) = \mu (P \cap A) \quad \textrm{and} \quad \mu^- (A) = \mu ( (X \setminus P) \cap A ) \: .
\]
Thanks to \eqref{proof_signed_case_1} we already know that $\mu \leq \mu^{p_+}$, therefore we get $\mu^+ (A) = \mu (P \cap A) \leq \mu^{p_+} (P \cap A) \leq \mu^{p_+} (A)$ for any Borel set $A$. Thanks to \eqref{eq_first_ineq_signed_case}, we finally infer that $\mu^{p_+} = \mu^+$ and $\mu^{p_-} = \mu^-$, i.e., that $\mu^p = \mu$.
\end{proof}
\begin{remk}
If $\mu$ is a vector-valued measure on $X$, with values in a finite-dimensional vector space $E$, we can apply the same construction componentwise.
\end{remk}
\section{Recovering measures from approximate values on balls}
We now want to reconstruct a measure $\mu$ (or an approximation of $\mu$) starting from approximate values on closed balls, given by a premeasure $q$ satisfying \eqref{eq_mean_premeasure}. More precisely, we can now reformulate Question~\ref{question_2} in the context of directionally limited metric spaces.
\begin{question}\it Let $(X,d)$ be a separable metric space, directionally $(\xi,\eta,\zeta)$--limited at $X$, and let $\mu$ be a positive Borel measure on $X$. Is it possible to reconstruct $\mu$ from $q$ up to multiplicative constants? Can the same be done when $\mu$ is a signed measure?
\end{question}
In Section \ref{section:2.1} below we explain, with a simple example involving a Dirac mass, why Carath\'eodory's construction does not allow one to recover $\mu$ from the premeasure $q$ defined in \eqref{varif-prem}.
Then, in Section \ref{sect:packtype}, we define a \emph{packing construction} of a measure, which is in some sense dual to Carath\'eodory's Method II, and we show that in a directionally limited and separable metric space $(X,d)$, endowed with an asymptotically $(\alpha , \gamma)$-bounded measure $\nu$ (see \eqref{agbounded}), it produces a measure equivalent to the initial one.
\subsection{Why Carath\'eodory's construction is not well-suited}
\label{section:2.1}
Let us consider a Dirac mass $\mu = \delta_x$ in $\mathbb R^n$ and define
\[
q (B_r (y)) = \frac{1}{r} \int_{s=0}^r \mu (B_s (y)) \, ds\,.
\]
It is easy to check that this particular choice of premeasure $q$ satisfies \eqref{eq_mean_premeasure}. First of all, for any $r > 0$,
\[
q (B_r(x)) = \frac{1}{r} \int_{s=0}^r \delta_x (B_s (x)) \, ds = \frac{1}{r} \int_{s=0}^r 1 \, ds = 1 \: .
\]
If now $y$ is at distance $\eta$ from $x$ for some $0<\eta<r$, we have
\[
q (B_r (y)) = \frac{1}{r} \int_{s=0}^r \delta_x (B_s (y)) \, ds = \frac{1}{r} \int_{s=\eta}^r 1 \, ds = \frac{r-\eta}{r} \: .
\]
Therefore, $q(B_{r}(y)) \to 0$ as $d(x,y) = \eta \to r$. We can thus find a covering made of a single ball of radius less than $r$ for which $\mu_r^q ( \{x\} )$ is as small as we wish. This shows that Carath\'eodory's construction associated with this premeasure produces the zero measure.
\begin{center}
\setcounter{subfigure}{0}
\begin{figure}[!htp]
\subfigure[Bad covering of a Dirac mass]{\includegraphics[width=0.25\textwidth]{problem_caratheodory_dirac.pdf}}
\quad
\subfigure[Bad covering of a curve]{\includegraphics[width=0.60\textwidth]{problem_caratheodory_bad_covering.pdf}}
\end{figure}
\end{center}
More generally, as soon as it is possible to cover with small balls such that the mass of the measure inside each ball is close to the boundary of the ball, one sees that Carath\'eodory's construction ``loses mass''.
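The Dirac computation above is easy to replay numerically. The following minimal Python sketch (a toy Riemann-sum discretization, with arbitrary grid size and tolerances) evaluates $q(B_r(y))$ for $\mu = \delta_x$ and confirms that the value $(r-\eta)/r$ of a single covering ball vanishes as $\eta \to r$:

```python
def q_dirac(eta, r, n=100_000):
    # q(B_r(y)) = (1/r) * int_0^r delta_x(B_s(y)) ds for |y - x| = eta:
    # the integrand delta_x(B_s(y)) equals 1 iff s >= eta, so the exact value is (r - eta)/r.
    h = r / n
    return sum(1.0 for k in range(n) if (k + 0.5) * h >= eta) * h / r

r = 1.0
assert abs(q_dirac(0.0, r) - 1.0) < 1e-3            # ball centered at x carries the full mass
assert abs(q_dirac(0.4, r) - (r - 0.4) / r) < 1e-3  # q(B_r(y)) = (r - eta)/r
assert q_dirac(0.999, r) < 2e-3                      # almost all mass is lost as eta -> r
```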
For instance, take $\mu = \mathcal{H}^1_{| \Gamma}$, where $\Gamma\subset \mathbb R^{n}$ is a curve of length $L_\Gamma$ and $\mathcal{H}^{1}$ is the $1$-dimensional Hausdorff measure in $\mathbb R^{n}$, then cover $\Gamma$ with a family of closed balls $\mathcal{B}_\delta$ of radii $\delta$ with centers at distance $\eta$ from $\Gamma$. Assuming that no portion of the curve is covered more than twice, then
\begin{align*}
\sum_{B \in \mathcal{B}_\delta} q(B)& = \sum_k \frac{1}{\delta} \int_{s=0}^\delta \mu (B_s (x_k)) \, ds = \sum_k \frac{1}{\delta} \int_{s=\eta}^\delta \mu (B_s (x_k)) \, ds \\
& \leq \frac{\delta - \eta}{\delta} \sum_k \mu (B_\delta (x_k)) \\
& \leq 2 L_\Gamma \frac{\delta - \eta}{\delta} \xrightarrow[\delta \to 0]{} 0 \: ,
\end{align*}
with $\eta = \delta - \delta^2$ for instance. The same phenomenon cannot be excluded by blindly centering the balls on the support of the measure $\mu$. Indeed, take a line $\ell\subset \mathbb R^{2}$ with a Dirac mass placed on it at some $x\in \ell$, so that $\mu = \mathcal{H}^1_{| \ell} + \delta_x$. Then, by centering the balls on the support of $\mu$, we may recover the Hausdorff measure restricted to $\ell$, but not the Dirac mass, for the same reason as before. We thus understand that the position of the balls should be optimized in order to avoid this problem. For this reason we consider an alternative method, based on a packing-type construction.
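Informally, the Dirac example already suggests why a supremum over ball placements behaves better than an infimum: for $\mu = \delta_x$, the worst single-ball cover of $\{x\}$ carries almost no $q$-mass, while the best-placed ball carries full mass. The following small Python sketch illustrates this comparison (a toy search over candidate centers, not the packing construction itself):

```python
def q_dirac_exact(eta, r):
    # Exact value q(B_r(y)) = (r - eta)/r for mu = delta_x and |y - x| = eta < r.
    return max(r - eta, 0.0) / r

r = 1.0
offsets = [k * r / 1000 for k in range(1000)]   # candidate distances |y - x| < r

# Covering-type (infimum) value over single-ball covers of {x}: can be made tiny.
covering_value = min(q_dirac_exact(eta, r) for eta in offsets)
# Packing-type (supremum) value over single-ball placements containing x: stays 1.
packing_value = max(q_dirac_exact(eta, r) for eta in offsets)

assert covering_value < 1e-2   # the infimum over placements loses the mass
assert packing_value == 1.0    # the supremum over placements recovers delta_x fully
```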
\begin{figure}[!htp]
\begin{center}
\includegraphics[width=0.50\textwidth]{problem_caratheodory_T.pdf}
\caption{Bad covering with balls centered on the support of the measure}
\end{center}
\end{figure}
\subsection{A packing-type construction}
\label{sect:packtype}
Taking into account the problems described in the examples of the previous section, one realizes the need to optimize the position of the centers of the balls in order to properly reconstruct the original measure $\mu$. The idea is to consider a kind of dual construction, that is, a supremum over packings rather than an infimum over coverings. To this aim we recall a notion of \emph{packing} of balls.
\begin{dfn}[Packings]
\label{def:packing}
Let $(X,d)$ be a separable metric space and $U \subset X$ be an open set. We say that $\mathcal{F}$ is a packing of $U$ of order $\delta$ if $\mathcal{F}$ is a countable family of disjoint closed balls whose radii are less than $\delta$ and such that
\[
\bigsqcup_{B \in \mathcal{F}} B \subset U \: .
\]
\end{dfn}
\begin{dfn}[Packing construction of measures]
\label{dfn_packing_construction}
Let $(X,d)$ be a separable metric space and let $q$ be a premeasure defined on closed balls. Let $U \subset X$ be an open set and fix $\delta >0$. We set
\[
{\hat \mu}_\delta^{q} (U) := \sup \left\lbrace \sum_{B \in \mathcal{F}} q (B) \, : \, \mathcal{F} \textrm{ is a packing of order } \delta \textrm{ of } U \right\rbrace
\]
and, noting that $\delta^\prime \leq \delta$ implies ${\hat \mu}_{\delta^\prime}^{q} (U) \leq {\hat \mu}_\delta^{q} (U)$, we define, in a similar way as in Carath\'eodory's construction,
\[
{\hat \mu}^{q} (U) = \lim_{\delta \to 0} {\hat \mu}_{\delta}^{q} (U) = \inf_{\delta > 0} {\hat \mu}_{\delta}^{q} (U) \: .
\]
Then, ${\hat \mu}^{q}$ can be extended to all $A \subset X$ by setting \begin{equation} \label{eq:extensionHmuInf} {\hat \mu}^{q}(A) = \inf \left\lbrace {\hat \mu}^{q} (U) \, : \, A \subset U, \, U \textrm{ open set} \right\rbrace \, . \end{equation} \end{dfn} The main difference between Definition \ref{dfn_packing_construction} and Carath\'eodory's construction is that the set function ${\hat \mu}^{q}$ is not automatically an outer measure: it is monotone but not sub-additive in general. In order to fix this problem we may apply the construction of outer measures known as Munroe's Method I to the set function ${\hat \mu}^{q}$ restricted to the class of open sets. This amounts to setting, for any $A \subset X$, \begin{equation}\label{mutilde} \tilde{\mu}^{q} (A) = \inf \left\lbrace \sum_{n \in \mathbb N} {\hat \mu}^{q} (U_n) \, : \, A \subset \bigcup_{n \in \mathbb N} U_n, \, U_n \textrm{ open set } \right\rbrace \: . \end{equation} One can check that $\tilde{\mu}^{q}$ is an outer measure. We will prove in Theorem~\ref{thm_main} that, for the class of set functions $q$ we are focusing on, and under additional assumptions on $X$, ${\hat \mu}^{q}$ is already a Borel outer measure. \begin{remk} \label{remk_sub_additivity_and_tilde_mu} Knowing that ${\hat \mu}^{q}$ is sub-additive on the class of open sets is enough to show that ${\hat \mu}^{q} = \tilde{\mu}^{q}$. Indeed, the inequality $\tilde{\mu}^{q} (A) \leq {\hat \mu}^{q} (A)$ comes directly from the fact that minimizing ${\hat \mu}^{q} (U)$ over open sets $U$ such that $A \subset U$ is a special case of minimizing $\sum_k {\hat \mu}^{q} (U_k)$ among countable families of open sets $U_k$ such that $A \subset \bigcup_k U_k$. Assuming in addition that ${\hat \mu}^{q}$ is sub-additive on open sets implies that for any countable family of open sets $(U_k)_k$ such that $A \subset \bigcup_k U_k$, \[ {\hat \mu}^{q} (A) \leq {\hat \mu}^{q} (\bigcup_k U_k) \leq \sum_k {\hat \mu}^{q} (U_k) \: . 
\] By definition of $\tilde{\mu}^{q}$, taking the infimum over such families leads to ${\hat \mu}^{q} (A) \leq \tilde{\mu}^{q} (A)$. \end{remk} \begin{remk} Our construction of the measure ${\hat \mu}^{q}$ is slightly different from the more classical packing construction proposed by Taylor and Tricot in \cite{tricot}. In particular, \eqref{eq:extensionHmuInf} enforces outer regularity of ${\hat \mu}^{q}$, while for instance the classical $s$--dimensional packing measure in $\mathbb R^n$ associated with the premeasure $q(B_r(x)) = (2r)^s$ is not outer regular for $s < n$ (see \cite{edgar}). A more specific comparison between our definition and the one by Taylor and Tricot will be carried out in Section \ref{sect:BLversusTT}, where it will be proved that, under the assumption that $q(B) \le C \mu(B)$ for every ball $B\subset \mathbb R^{n}$, for some constant $C>0$ and some Radon measure $\mu$, the two constructions actually produce the same measure. \end{remk} \begin{theo} \label{thm_main} Let $(X,d)$ be a metric space satisfying Hypothesis \ref{hypo1}, let $\mu$ be a positive Borel measure on $X$ and let $q$ be a premeasure defined on closed balls satisfying \eqref{eq_mean_premeasure}. Let ${\hat \mu}^{q}$ be as in Definition~\ref{dfn_packing_construction}. Then, the following hold: \begin{enumerate} \item for any Borel set $A \subset X$, \begin{equation} \label{eq:generalControl} \frac{1}{\gamma C} \mu (A) \leq {\hat \mu}^{q} (A) \leq C \inf \{ \mu (U) \: | \: A \subset U \text{ open set } \} \: , \end{equation} where $\gamma$ and $C$ are the constants appearing in Hypothesis \ref{hypo1} and \eqref{eq_mean_premeasure}, respectively. 
\item ${\hat \mu}^q$ is countably sub-additive and is a metric outer measure; \item if moreover $\mu$ is outer regular, then $\mu$ and the positive Borel measure associated with the outer measure ${\hat \mu}^{q}$ (still denoted by ${\hat \mu}^{q}$) are equivalent; more precisely, $\displaystyle \frac{1}{\gamma C} \mu \leq {\hat \mu}^q \leq C \mu$. \end{enumerate} \end{theo} We briefly sketch the proof of this result for the reader's convenience. \begin{enumerate} \item The first step (see Proposition \ref{prop_sub_additivity_open_case}) is to prove that ${\hat \mu}^q$ is countably sub-additive on any sequence of open sets $(U_n)_n$ such that $\sum_n \mu (U_n) < +\infty$. This property does not require Hypothesis \ref{hypo1} and only relies on the upper bound \eqref{eq_mean_premeasure}--(ii) on $q$. \item Then, assuming moreover that $\mu$ is finite on bounded sets, we prove in Proposition \ref{prop_sub_additivity_loc_finite_case} the countable sub-additivity of ${\hat \mu}^q$. \item Next, we show in Proposition \ref{prop_equivalence_on_open_sets} a crucial fact, namely that the finiteness assumption on $\mu$ can be dropped: we only require that $\mu$ is a positive Borel measure (not necessarily finite on bounded sets) and that $(X,d)$ satisfies Hypothesis~\ref{hypo1}. The heart of the proof is to use the lower bound \eqref{eq_mean_premeasure} (i) on $q$ and show that it can be transferred to ${\hat \mu}^q$, which gives \eqref{eq:generalControl}. \item Up to this last step, we still do not know that ${\hat \mu}^q$ is countably sub-additive. This is proved in Proposition~\ref{cor_countable_sub_additvity} and follows from the partial sub-additivity result of the first step and the lower bound in \eqref{eq:generalControl}. We stress that this last result does not require that $\mu$ be finite on bounded sets. 
\end{enumerate} \begin{prop} \label{prop_sub_additivity_open_case} Let $(X,d)$ be a separable metric space and let $\mu$ be a positive Borel measure on $X$. Let $q$ be a premeasure defined on the class $\mathcal C$ of closed balls contained in $X$, such that \eqref{eq_mean_premeasure} (ii) holds. Then, for any countable family of open sets $(U_k)_k \subset X$ satisfying $\sum_{k\in \mathbb N} \mu (U_k) < +\infty$, one has \begin{equation}\label{subadd1} {\hat \mu}^{q} \left( \bigcup_{k \in \mathbb N} U_k \right) \leq \sum_{k \in \mathbb N} {\hat \mu}^{q} (U_k)\,. \end{equation} In particular, if $\mu$ is finite, then ${\hat \mu}^{q}$ is an outer measure. \end{prop} \begin{proof} Let $( U_k )_k$ be a sequence of open subsets of $X$ such that $\sum_k \mu (U_k) < +\infty$. Let $\varepsilon > 0$, then for all $k\in \mathbb N$ we define \[ U_k^\varepsilon = \left\lbrace x \in U_k \, : \, d(x , X - U_k) > \varepsilon \right\rbrace \: . \] Fix $0 < \delta < \frac{\varepsilon}{2}$. If $B$ is a closed ball such that $\diam B \leq 2 \delta$ and $B \subset \bigcup_k U_k^\varepsilon$, then there exists $k_0$ such that $B \subset U_{k_0}$. Indeed, if $B=B_\delta (x)$, then $x \in \bigcup_k U_k^\varepsilon$, so there exists $k_0$ such that $x \in U_{k_0}^\varepsilon$ and thus \[ B_\delta (x) \subset U_{k_0}^{\varepsilon - \delta} \subset U_{k_0}^{\frac{\varepsilon}{2}} \subset U_{k_0}\: . \] Of course the inclusion $B\subset U_{k_{0}}$ remains true for any closed ball $B$ with $\diam B \leq 2 \delta$ and center $x \in \bigcup_k U_k^\varepsilon$, since such a $B$ is contained in $B_\delta(x)$. 
\begin{figure}[!htp] \begin{center} \includegraphics[width=0.50\textwidth]{sub_additivity.pdf} \caption{Sub-additivity for the packing construction} \end{center} \end{figure} Therefore any packing $\mathcal{B}$ of $\displaystyle \bigcup_k U_k^\varepsilon$ of order $\delta$ can be decomposed as the union of a countable family of packings $\mathcal{B} = \bigsqcup_k \mathcal{B}_k $, where $\mathcal{B}_k$ is a packing of $U_k$ of order $\delta$, whence \[ \sum_{B \in \mathcal{B}} q(B) = \sum_k \sum_{B \in \mathcal{B}_k} q(B)\,. \] By taking the supremum over all such packings $\mathcal{B}$ of $\displaystyle \bigcup_k U_k^\varepsilon$, we get \[ {\hat \mu}^q_\delta \left( \bigcup_k U_k^\varepsilon \right) \leq \sum_k {\hat \mu}_\delta^q (U_k) \: . \] Then, taking the infimum over $\delta > 0$ and then the supremum over $\varepsilon > 0$ gives \begin{equation} \label{eq_sub_additivity_3} \sup_{\varepsilon > 0} {\hat \mu}^{q} \left( \bigcup_k U_k^\varepsilon \right) \leq \inf_{\delta > 0} \sum_{k \in \mathbb N} {\hat \mu}_\delta^q (U_k) \: . \end{equation} We now want to prove that \begin{equation}\label{eqsupeps} \sup_{\varepsilon > 0} {\hat \mu}^{q} \left( \bigcup_{k \in \mathbb N} U_k^\varepsilon \right) = {\hat \mu}^{q} \left( \bigcup_{k \in \mathbb N} U_k \right)\,. \end{equation} Let $\mathcal{B}$ be a packing of $\displaystyle \bigcup_k U_k$ of order $\delta < \frac{\varepsilon}{2}$. We have \begin{equation} \label{eq_sub_additivity_1} \sum_{B \in \mathcal{B}} q(B) = \sum_{\substack{B \in \mathcal{B} \\ B \subset \bigcup_k U_k^\varepsilon }} q(B) \: + \sum_{\substack{B \in \mathcal{B} \\ B \not\subset \bigcup_k U_k^\varepsilon }} q(B) \: . 
\end{equation} Notice that since $2 \delta < \varepsilon$, for any $B \in \mathcal{B}$, if $\displaystyle B \not\subset \bigcup_k U_k^\varepsilon$ then $\displaystyle B \subset \bigcup_k U_k \setminus \bigcup_k U_k^{2 \varepsilon}$. Since $q(B) \le C\mu (B)$ according to \eqref{eq_mean_premeasure} (ii), we get \begin{equation} \label{eq_sub_additivity_2} \sum_{\substack{B \in \mathcal{B} \\ B \not\subset \bigcup_k U_k^\varepsilon }} q(B) \leq C\sum_{\substack{B \in \mathcal{B} \\ B \not\subset \bigcup_k U_k^\varepsilon }} \mu(B) = C\mu \left( \bigsqcup_{\substack{B \in \mathcal{B} \\ B \not\subset \bigcup_k U_k^\varepsilon }} B \right) \leq C\mu \left( \bigcup_k U_k \setminus \bigcup_k U_k^{2 \varepsilon} \right)\, . \end{equation} Owing to the fact that $\displaystyle \bigcup_k U_k = \bigcup_{\varepsilon > 0} \bigcup_k U_k^{2 \varepsilon}$ (where a countable union, e.g.\ over $\varepsilon = 1/m$, suffices) and that $\bigcup_k U_k^{2 \varepsilon}$ is decreasing in $\varepsilon$, we have that \begin{equation}\label{muvazero} \mu \left( \bigcup_k U_k \setminus \bigcup_k U_k^{2 \varepsilon} \right) \xrightarrow[\varepsilon \to 0]{} 0 \end{equation} as soon as $\mu (\bigcup_k U_k) < +\infty$, which is true under the assumption $ \sum_k \mu (U_k) < +\infty$. Therefore, by \eqref{eq_sub_additivity_1}, \eqref{eq_sub_additivity_2} and \eqref{muvazero} we infer that \begin{equation}\label{eqsubadd3} \sum_{B \in \mathcal{B}} q(B) \leq \sum_{\substack{B \in \mathcal{B} \\ B \subset \bigcup_k U_k^\varepsilon }} q(B) + C \mu \left( \bigcup_k U_k \setminus \bigcup_k U_k^{2 \varepsilon} \right) \: . 
\end{equation} Taking the supremum in \eqref{eqsubadd3} over all packings $\mathcal{B}$ of order $\delta$ of $\bigcup_k U_k$, we get \begin{align*} {\hat \mu}_\delta^q \left( \bigcup_k U_k \right) & \leq \sup \left\lbrace \sum_{\substack{B \in \mathcal{B} \\ B \subset \bigcup_k U_k^\varepsilon }} q(B) \: : \: \mathcal{B} \text{ is a packing of } \bigcup_k U_k \text{ of order } \delta \right\rbrace + C\mu \left( \bigcup_k U_k \setminus \bigcup_k U_k^{2 \varepsilon} \right) \\ & \leq {\hat \mu}_\delta^q \left( \bigcup_k U_k^\varepsilon \right) + C\mu \left( \bigcup_k U_k \setminus \bigcup_k U_k^{2 \varepsilon} \right) \: . \end{align*} Then taking the limit as $\delta\to 0$ we obtain \[ {\hat \mu}^{q} \left( \bigcup_k U_k \right) \leq {\hat \mu}^{q} \left( \bigcup_k U_k^\varepsilon \right) + C\mu \left( \bigcup_k U_k \setminus \bigcup_k U_k^{2 \varepsilon} \right) \: \] and finally, letting $\varepsilon\to 0$, we prove \eqref{eqsupeps}. We now turn to the right hand side of \eqref{eq_sub_additivity_3}. For fixed $k$, ${\hat \mu}^q_\delta (U_k)$ is decreasing as $\delta \downarrow 0$, therefore \begin{equation} \label{eq_sub_additivity_4} \lim_{\delta \downarrow 0} \sum_k {\hat \mu}_\delta^q (U_k) = \sum_k \lim_{\delta \downarrow 0} {\hat \mu}_\delta^q (U_k) = \sum_k {\hat \mu}^q (U_k) \end{equation} provided that $ \sum_k {\hat \mu}_\delta^q (U_k)$ is finite for some $\delta > 0$; this is true since $q(B) \leq C\mu(B)$ implies ${\hat \mu}_\delta^q (U_k) \leq C\mu (U_k)$ for all $k$, so that $\sum_k {\hat \mu}_\delta^q (U_k) \leq C\sum_k \mu(U_k) < +\infty$. Finally, thanks to \eqref{eq_sub_additivity_3}, \eqref{eqsupeps} and \eqref{eq_sub_additivity_4} we obtain the conclusion. 
\end{proof} \begin{prop} \label{prop_sub_additivity_loc_finite_case} Let $(X,d)$ be a separable metric space and let $\mu$ be a positive Borel measure on $X$, finite on bounded sets. Let $q$ be a premeasure defined on the class $\mathcal C$ of closed balls contained in $X$, such that \eqref{eq_mean_premeasure} (ii) holds. Then ${\hat \mu}^q$ is countably sub-additive, thus it is an outer measure. \end{prop} \begin{proof} Let $(A_k)_k$ be a countable family of disjoint sets such that $\mu \left( \bigsqcup_k A_k \right) < +\infty$. We shall prove that \begin{equation} \label{eq_sub_additivity_6} {\hat \mu}^{q} \left( \bigsqcup_k A_k \right) \leq \sum_k {\hat \mu}^{q} (A_k) \: . \end{equation} Let $\varepsilon >0$. By outer regularity of $\mu$ (since $\mu$ is a Borel measure finite on bounded sets, it is outer regular by Theorem~\ref{thm_approximation_Borel_measure}) and by definition of ${\hat \mu}^q$, we can choose a family $\left( U_k \right)_k$ of open sets such that, for any $k$, \[ A_k \subset U_k \quad \textrm{and} \quad \mu (U_k ) \leq \mu (A_k) + \frac{1}{2^k} \quad \textrm{and} \quad {\hat \mu}^q (U_k ) \leq {\hat \mu}^q (A_k) + \frac{\varepsilon}{2^k} \: . \] Hence $\sum_k \mu (U_k) \leq \sum_k \mu(A_k) + 1 < +\infty$ and by Proposition~\ref{prop_sub_additivity_open_case} we thus find \[ {\hat \mu}^{q} \left( \bigsqcup_k A_k \right) \leq {\hat \mu}^{q} \left( \bigcup_k U_k \right) \leq \sum_k {\hat \mu}^{q} (U_k) \leq \sum_k {\hat \mu}^{q} (A_k) + \varepsilon \: . \] Letting $\varepsilon \rightarrow 0$, we get \eqref{eq_sub_additivity_6}. The case of a countable family $(A_k)_k$ such that $\mu \left( \bigcup_k A_k \right) < +\infty$ is obtained from the case of disjoint sets in the standard way, by defining $B_k \subset A_k$ as \[ B_k = A_k - \bigcup_{i=1}^{k-1} A_i \: . 
\] The family $(B_k)_k$ is disjoint and $\displaystyle \bigsqcup_{k \in \mathbb N} B_k = \bigcup_{k \in \mathbb N} A_k$, hence by \eqref{eq_sub_additivity_6} one gets \[ {\hat \mu}^{q} \left( \bigcup_k A_k \right) = {\hat \mu}^{q} \left( \bigsqcup_k B_k \right) \leq \sum_k {\hat \mu}^{q} (B_k) \leq \sum_k {\hat \mu}^{q} (A_k) \: . \] Finally, let us consider the general case of $(A_k)_k$ being any countable family of sets. Let $(X_n)_n$ be an increasing family of bounded sets such that $\bigcup_n X_n = X$; then for all $n$, $\bigcup_k ( A_k \cap X_n )$ is bounded, hence $\mu \left( \bigcup_k ( A_k \cap X_n ) \right) < +\infty$ and therefore \[ {\hat \mu}^q \left( \bigcup_k ( A_k \cap X_n ) \right) \leq \sum_k {\hat \mu}^q \left( A_k \cap X_n \right) \: . \] Letting $n \rightarrow + \infty$, we infer by monotone convergence that \[ {\hat \mu}^q \left( \bigcup_k A_k \right) \leq \sum_k {\hat \mu}^q \left( A_k \right) \: . \] \end{proof} In order to obtain the countable sub-additivity of ${\hat \mu}^{q}$ in the case where $\mu$ is not assumed to be finite on bounded sets, it is enough to show that $\sum_k \mu(U_k) = +\infty$ implies $\sum_k {\hat \mu}^{q} (U_k) = +\infty$. If this is true, then either $\sum_k \mu(U_k) < +\infty$ and the sub-additivity is given by Proposition~\ref{prop_sub_additivity_open_case}, or $\sum_k {\hat \mu}^{q} (U_k) = +\infty$ and the sub-additivity is immediate. Now the core of the problem is to estimate ${\hat \mu}^{q}$ from below by $\mu$. The main issue is that the lower bound \eqref{eq_mean_premeasure} (i) does not imply the stronger lower bound \[ q(B)\geq C^{-1} \mu (B)\,. \] Moreover, unless we know that the measure $\mu$ is doubling, there is no reason to expect that the inequality $\mu (2B) \le c\mu (B)$ holds for some $c\ge 1$ (where by $2B$ we denote the ball concentric with $B$ having twice its radius) and for every ball $B$. 
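To see concretely why the lower bound in \eqref{eq_mean_premeasure} (i) cannot be upgraded to $q(B) \geq C^{-1}\mu(B)$ without some doubling information, one may consider for instance $X = \mathbb R$ with $\alpha = \frac{1}{2}$ and $\mu = \delta_0$ the Dirac mass at the origin: for every $x$ with $\frac{r}{2} < |x| \leq r$ one has \[ \mu (B_{\alpha r}(x)) = 0 \quad \textrm{while} \quad \mu (B_{r}(x)) = 1 \: , \] so that \eqref{eq_mean_premeasure} (i) is compatible with $q(B_r(x)) = 0$ on balls carrying a full unit of $\mu$-mass.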
Nevertheless, by assuming that $(X,d)$ is directionally limited and by comparing $\mu$ with an asymptotically doubling measure $\nu$ on $X$, which we assume to exist, we are able to prove that a doubling property for $\mu$ actually holds for enough balls, so that we can choose packings among these balls. Before showing the result, we need to introduce the notion of \textit{asymptotically $(\alpha,\gamma)$-bounded measure}. We say that $\nu$ is an asymptotically $(\alpha,\gamma)$-bounded measure on $(X,d)$ if it is finite on bounded sets, strictly positive on any ball with positive radius, and for all $x\in X$ it satisfies \begin{equation}\label{agbounded} \limsup_{r\to 0^{+}} \frac{\nu(B_{r}(x))}{\nu(B_{\alpha r}(x))} \le \gamma \,. \end{equation} \begin{remk} Notice that if we assume that $\nu$ is asymptotically doubling on $X$, that is, $\nu$ is finite on bounded sets and there exists a constant $d \ge 1$ such that for all $x\in X$ it holds \begin{equation*} \limsup_{r\to 0^{+}} \frac{\nu(B_{2r}(x))}{\nu(B_{r}(x))} \le d \,, \end{equation*} then for any $\alpha\in (0,1]$, taking $Q$ as the unique integer such that $2^{-(Q+1)}< \alpha\le 2^{-Q}$, one can easily check that $\nu$ is asymptotically $(\alpha,d^{Q+1})$-bounded. \end{remk} We conveniently state some key properties of the metric space $(X,d)$ that will be constantly assumed in the rest of this section. \begin{hypothesis}\label{hypo1} $(X,d)$ is a directionally limited metric space endowed with an asymptotically $(\alpha , \gamma)$-bounded measure $\nu$ satisfying \eqref{agbounded} for some constants $\alpha \in (0,1]$ and $\gamma \ge 1$. \end{hypothesis} \begin{prop} \label{prop_diffusion_in_X} Assume that $(X,d,\nu)$ satisfies Hypothesis \ref{hypo1} and let $\mu$ be a positive Borel measure on $X$. 
Let \[ A_0 = \left\lbrace x \in X \: : \: \liminf_{r \to 0^{+}} \frac{\mu (B_{r}(x))}{\nu (B_r(x))} = 0 \right\rbrace \quad \textrm{and} \quad A_+ = \left\lbrace x \in X \: : \: 0 < \liminf_{r \to 0^{+}} \frac{\mu (B_r(x))}{\nu (B_r(x))} \leq +\infty \right\rbrace \: . \] Then the following hold. \begin{enumerate}[(i)] \item For all $x \in A_+$, either $\mu (B_r(x)) = +\infty$ for all $r>0$, or \[ \liminf_{r \to 0^{+}} \frac{\mu (B_{ r} (x))}{\mu (B_{\alpha r}(x))} \leq \gamma \: . \] \item $\mu (A_0) = 0$. \end{enumerate} In particular, for a fixed $\varepsilon_0 >0$ and for $\mu$--almost any $x \in X$, there exists a decreasing infinitesimal sequence $(r_{n})_{n}$ of radii (depending on $\varepsilon_0$ and $x$) such that \begin{equation}\label{epsliminf} \mu (B_{r_{n}} (x)) \le (\gamma+\varepsilon_0) \, \mu(B_{\alpha r_{n}}(x))\,,\quad \forall\, n\in \mathbb N \: . \end{equation} \end{prop} \begin{proof} \textbf{Proof of (i).} Let $x \in A_+$. By monotonicity, either $\mu (B_r(x)) = +\infty$ for all $r>0$ (and then \eqref{epsliminf} is also trivially satisfied) or there exists some $R$ such that, for all $r \leq R$, $\mu (B_r (x)) < +\infty$. In this case the function defined by \[ \displaystyle f(r) = \frac{\mu(B_r(x))}{\nu(B_r(x))} \] is non-negative and finite for $r$ small enough. Moreover, since $x \in A_+$ we have $\liminf_{r \to 0} f(r) > 0$. Let us prove that \begin{equation}\label{limsupf} \liminf_{r \to 0} \frac{f( r)}{f(\alpha r)} \leq 1 \: . \end{equation} Assume by contradiction that $\displaystyle \liminf_{r \to 0} \frac{f( r)}{f(\alpha r)} > 1$; then necessarily $\alpha < 1$ and there exist $r_0 > 0$ and $\beta > 1$ such that for all $r \leq r_0$, $f( r) \geq \beta f(\alpha r)$. Consider now the sequence $(r_k)_k$ defined by $r_k = \alpha^{k} r_0$. 
Then $r_k \rightarrow 0$ and \[ f(r_k) \leq \beta^{-1} f(\alpha^{-1} r_k) = \beta^{-1} f(r_{k-1}) \leq \beta^{-k} f(r_0) \xrightarrow[k \to \infty]{} 0 \: , \] which contradicts $\liminf_{r \to 0} f(r) > 0$ and thus proves \eqref{limsupf}. Let us notice that $\mu(B_{r}(x))>0$ for all $r>0$ since $x\in A_{+}$ and that, by definition, we have \[ \frac{\mu(B_{ r}(x))}{\mu(B_{\alpha r}(x))} = \frac{f( r)}{f(\alpha r)} \cdot \frac{\nu(B_{ r}(x))}{\nu(B_{\alpha r}(x))}\,. \] Since $\nu$ is asymptotically $(\alpha, \gamma)$-bounded, by \eqref{agbounded} and \eqref{limsupf} we get \[ \liminf_{r \to 0^{+}} \frac{\mu(B_{ r}(x))}{\mu(B_{\alpha r}(x))} \leq \limsup_{r \to 0^{+}} \frac{\nu(B_{ r}(x))}{\nu(B_{\alpha r}(x))} \cdot \liminf_{r \to 0^+} \frac{f( r)}{f(\alpha r)} \leq \gamma \: . \] \textbf{Proof of (ii).} Let us show that $\mu (A_0) = 0$. First assume that $\nu (X) < +\infty$ and let $\varepsilon > 0$. Consider \[ \mathcal{F}_\varepsilon = \left\lbrace B \subset X \: | \: B=B_r(a), \: a \in A_0 \text{ and } \mu(B) \leq \varepsilon \nu (B) \right\rbrace \: . \] Let $a \in A_0$ be fixed. Since $\displaystyle \liminf_{r \to 0^{+}} \frac{\mu(B_r(a))}{\nu(B_r(a))} = 0$, there exists $r >0$ such that $B_r (a) \in \mathcal{F}_\varepsilon$. Every point in $A_0$ is thus the center of some ball in $\mathcal{F}_\varepsilon$, so that we can apply Theorem \ref{thm_generalized_besicovic_federer_covering_lemma} and obtain $2\zeta +1$ countable families $\mathcal{G}_1, \ldots, \mathcal{G}_{2\zeta +1} $ of disjoint balls in $\mathcal{F}_\varepsilon$ such that \[ A_0 \subset \bigcup_{j=1}^{2\zeta +1} \bigsqcup_{B \in \mathcal{G}_j} B \: . \] Therefore \[ \mu (A_0) \leq \sum_{j=1}^{2\zeta +1} \sum_{B \in \mathcal{G}_j} \underbrace{\mu (B)}_{\leq \varepsilon \nu (B)} \leq \varepsilon \sum_{j=1}^{2\zeta +1} \nu \left( \bigsqcup_{B \in \mathcal{G}_j} B \right) \leq \varepsilon (2\zeta +1) \nu (X) \: . \] Hence $\mu (A_0) = 0$ if $\nu (X) < +\infty$. 
Otherwise, replace $X$ by $X \cap U_k (x_{0})$ for any fixed $x_{0}\in X$ to obtain that for any $k \in \mathbb N$, $\mu (A_0 \cap U_k (x_{0})) = 0$, then let $k\to\infty$ to conclude that $\mu (A_0) = 0$. Finally, \eqref{epsliminf} is an immediate consequence of the fact that $X = A_{0}\cup A_{+}$ coupled with (i) and (ii). \end{proof} \begin{corollary}[Besicovitch with doubling balls] \label{bescivoic_with_doubling_balls} Assume that $(X,d,\nu)$ satisfies Hypothesis \ref{hypo1} and let $\mu$ be a positive Borel measure on $X$. Fix $\varepsilon_{0}>0$ and for any $\delta > 0$ define \[ \mathcal{F}_\delta = \left\lbrace B=B_{r}(x) \text{ closed ball } \subset X \: : \: \mu(B) \leq (\gamma+\varepsilon_0) \mu (B_{\alpha r}(x)) \text{ and } \diam B \leq 2\delta \right\rbrace \: \] and, for any $A \subset X$, \[ \mathcal{F}_\delta^A = \{ B \in \mathcal{F}_\delta \: : \: B=B_r (a),\ a \in A \}\: . \] Then there exist $A_0 \subset X$ and $2\zeta +1$ countable subfamilies $\mathcal{G}_1, \ldots, \mathcal{G}_{2\zeta +1}$ of $\mathcal{F}_\delta^A$ of disjoint closed balls such that \[ A \subset A_0 \cup \bigcup_{j=1}^{2\zeta +1} \bigsqcup_{B \in \mathcal{G}_j} B\quad \text{ and }\quad \mu (A_0) = 0 \: . \] Moreover, if $\mu (A) < +\infty$, then for any open set $U \subset X$ there exists a countable collection $\mathcal{G}$ of disjoint balls in $\mathcal{F}_\delta^A$ such that \[ \bigsqcup_{B \in \mathcal{G}} B \subset U \quad \text{and} \quad \mu \left( (A\cap U) \setminus \bigsqcup_{B \in \mathcal{G}} B \right) = 0 \: . \] \end{corollary} \begin{proof} Thanks to \eqref{epsliminf} (see Proposition~\ref{prop_diffusion_in_X}) we know that for $\mu$-almost every $x \in X$ there exists a decreasing infinitesimal sequence $(r_{n})_{n}$ such that $\displaystyle B_{r_{n}}(x) \in \mathcal{F}_\delta$ for all $n\in \mathbb N$. 
Hence for $\mu$-almost any $x \in X$ we have \[ \inf \left\lbrace r \: | \: B_r (x) \in \mathcal{F}_\delta \right\rbrace = 0 \: . \] Then, the conclusion follows from Theorems \ref{thm_generalized_besicovic_federer_covering_lemma} and \ref{thm_generalized_besicovic_federer}. \end{proof} We can now prove that ${\hat \mu}^{q}$ and $\mu$ are equivalent on Borel sets (as set functions, since we have not completely proved the sub-additivity of ${\hat \mu}^{q}$ yet). \begin{prop} \label{prop_equivalence_on_open_sets} Let $(X,d,\nu)$ satisfy Hypothesis \ref{hypo1} with constants $(\alpha,\gamma)$ and let $\mu$ be a positive Borel measure on $X$. Let $q$ be a premeasure satisfying \eqref{eq_mean_premeasure} with constants $\alpha$ and $C$. Let ${\hat \mu}^{q}$ be as in Definition~\ref{dfn_packing_construction}. Then for any Borel set $A \subset X$ we have \[ \frac{1}{\gamma C} \mu (A) \leq {\hat \mu}^{q} (A) \leq C \inf \{ \mu (U) \: | \: A \subset U \text{ open set } \} \, . \] Therefore, if $\mu$ is outer regular, then \[ \frac{1}{\gamma C} \mu (A) \leq {\hat \mu}^{q} (A) \leq C\mu (A) \: . \] \end{prop} \begin{proof} Let $U \subset X$ be an open set; then the inequality ${\hat \mu}^{q} (U) \leq C\mu (U)$ is just a consequence of the definition of ${\hat \mu}^{q}$ and of the second inequality in \eqref{eq_mean_premeasure}, i.e., of the fact that, for any closed ball $B$, $q(B) \leq C\mu(B)$. Now we prove the other inequality by splitting the problem into two cases. 
\textit{Case $\mu(U) < +\infty$.} Let $\varepsilon_0 >0$ and $\delta > 0$; then we can apply Corollary~\ref{bescivoic_with_doubling_balls} (Besicovitch with doubling balls) to get a countable family $\mathcal{G}_\delta$ of disjoint balls of \[ \mathcal{F}_\delta^U = \left\lbrace B=B_r(x) \subset U \: : \: \mu( B) \leq (\gamma + \varepsilon_0) \mu (B_{\alpha r}(x)) \text{ and } \diam B \leq 2\delta \right\rbrace \] such that \[ \mu(U) = \mu \left( \bigsqcup_{B \in \mathcal{G}_\delta} B \right) \quad \textrm{and} \quad \bigsqcup_{B \in \mathcal{G}_\delta} B \subset U \: . \] Therefore, writing $\mathcal{G}_\delta = \{ B_{r_j}(x_j) \}_j$, by \eqref{eq_mean_premeasure} and by definition of ${\hat \mu}_\delta^q $, \begin{align*} {\hat \mu}_\delta^q (U) \geq \sum_{B \in \mathcal{G}_\delta} q(B) & \geq \sum_j \frac{1}{C} \mu ( B_{\alpha r_j} (x_j) ) \\ & \geq \frac{1}{C(\gamma+\varepsilon_0)} \sum_j \mu (B_{r_j} (x_j)) = \frac{1}{C(\gamma+\varepsilon_0)} \mu \left( \bigsqcup_{B \in \mathcal{G}_\delta} B \right) = \frac{1}{C(\gamma+\varepsilon_0)} \mu (U) \: . \end{align*} Letting $\delta\to 0$ and then $\varepsilon_0 \rightarrow 0$ gives $\displaystyle {\hat \mu}^{q}(U) \geq \frac{1}{ \gamma C} \mu(U)$. \textit{Case $\mu(U) = +\infty$.} Let $\delta > 0$ and let $\mathcal{F}_{\delta}^{U}$ be as in the previous case; then applying Corollary~\ref{bescivoic_with_doubling_balls} (Besicovitch with doubling balls) gives $2\zeta +1$ countable families $\mathcal{G}_\delta^1, \ldots , \mathcal{G}_\delta^{2\zeta +1}$ of balls in $\mathcal{F}_\delta^{U}$ such that \[ U \subset U_0 \cup \bigcup_{j=1}^{2\zeta +1} \bigsqcup_{B \in \mathcal{G}_\delta^j} B\quad \text{with}\quad \mu(U_0) = 0 \: . \] Then we get \[ \sum_{j=1}^{2\zeta +1} \mu \left( \bigsqcup_{B \in \mathcal{G}_\delta^j} B \right) \geq \mu (U) = +\infty \: . 
\] Consequently there exists $j_0 \in \{1, \ldots , 2\zeta +1 \}$ such that $\mu \left( \bigsqcup_{B \in \mathcal{G}_\delta^{j_0}} B \right) = +\infty$. Therefore, writing $\mathcal{G}_\delta^{j_0} = \{ B_{r_l}(x_l) \}_l$, we have the same estimate as in the case $\mu (U) < +\infty$: \begin{align*} {\hat \mu}_\delta^q (U) \geq \sum_{B \in \mathcal{G}_\delta^{j_0}} q(B) & \geq \sum_l \frac{1}{C} \mu ( B_{\alpha r_l} (x_l) ) \\ & \geq \frac{1}{C (\gamma + \varepsilon_0)} \sum_l \mu (B_{r_l} (x_l)) = \frac{1}{C(\gamma + \varepsilon_0)} \mu \left( \bigsqcup_{B \in \mathcal{G}_\delta^{j_0}} B \right) = + \infty \: , \end{align*} and we conclude that ${\hat \mu}^{q} (U) = +\infty$. \end{proof} \begin{prop} \label{cor_countable_sub_additvity} Let $(X,d,\nu)$ satisfy Hypothesis \ref{hypo1}. Let $\mu$ be a positive Borel measure on $X$ and let $q$ be a premeasure satisfying \eqref{eq_mean_premeasure}. Let ${\hat \mu}^{q}$ be as in Definition~\ref{dfn_packing_construction}. Then ${\hat \mu}^{q}$ is countably sub-additive. \end{prop} \begin{proof} Let $(A_n)_n$ be a countable collection of subsets of $X$. If $\displaystyle \sum_n \mu \left( A_n \right) = +\infty$, then by Proposition~\ref{prop_equivalence_on_open_sets} there exists $K > 0$ such that $\mu (A_n) \leq K {\hat \mu}^{q} (A_n)$ for all $n$, therefore \[ \sum_n {\hat \mu}^{q} (A_n) \geq \frac{1}{K} \sum_n \mu (A_n) = +\infty \: , \] whence the countable sub-additivity directly follows. Recall that if $\displaystyle \sum_n \mu \left( A_n \right) < +\infty$ and the $A_n$ are open sets, then countable sub-additivity was proved in Proposition~\ref{prop_sub_additivity_open_case}. It remains to check the case $\displaystyle \sum_n \mu \left( A_n \right) < +\infty$ for a generic sequence of Borel sets $A_n$. 
Fix $\varepsilon >0$; then by definition of ${\hat \mu}^q$, for every $n$ there exists an open set $U_n$, depending also on $\varepsilon$, such that $A_n \subset U_n$ and \[ {\hat \mu}^q (U_n) \leq {\hat \mu}^q (A_n) + \frac{\varepsilon}{2^n} \: . \] We may assume $\sum_n {\hat \mu}^q (A_n) < +\infty$ (otherwise there is nothing to prove), so that $\sum_n {\hat \mu}^q (U_n) < +\infty$ and, by Proposition~\ref{prop_equivalence_on_open_sets}, $\sum_n \mu (U_n) < +\infty$; hence Proposition~\ref{prop_sub_additivity_open_case} applies. By sub-additivity on open sets we thus have \[ {\hat \mu}^{q} \left( \bigcup_n A_n \right) \leq {\hat \mu}^{q} \left( \bigcup_n U_n \right) \leq \sum_n {\hat \mu}^{q} (U_n) \leq \sum_n {\hat \mu}^q (A_n) + \varepsilon \: . \] Letting $\varepsilon \rightarrow 0$ concludes the proof. \end{proof} \subsection{Connection with a classical packing construction} \label{sect:BLversusTT} Our packing construction (Definition \ref{dfn_packing_construction}) is very similar to the one introduced by Taylor and Tricot in \cite{tricot} for measures in $\mathbb R^n$. In that paper, starting from a given premeasure $q$ defined on a family of sets $\mathcal{C}$ (here, as in our construction, $\mathcal{C}$ will be the family of closed balls), a so-called \emph{packing premeasure} is defined for any $E \subset \mathbb R^n$ as \[ (q-P)(E) = \lim_{\delta \to 0} \sup \left\lbrace \sum_{B \in \mathcal{B}} q(B) \: : \: \mathcal{B} \text{ is a } T\text{--packing of order } \delta \text{ of } E, \: \mathcal{B} \subset \{ B_r (x) \: : \: x \in E, \, r>0 \} \right\rbrace \: , \] where their notion of packing (here specialized to families of closed balls, which we will refer to as $T$--packing) is slightly different from the one we introduced in Definition~\ref{def:packing}. \begin{dfn}[$T$--Packings] \label{def:Tpacking} Let $E \subset \mathbb R^n$. We say that $\mathcal{F}$ is a $T$--packing of $E$ of order $\delta$ if $\mathcal{F}$ is a countable family of disjoint closed balls whose radii are less than or equal to $\delta$ and such that for all $B \in \mathcal{F}$, $\overline{E} \cap B \neq \emptyset$. 
\end{dfn} We insist on the fact that such a packing is \emph{not} included in $E$ (as required in Definition~\ref{def:packing}) but only in some enlargement of $E$. Then, from this packing premeasure $(q-P)$, a packing measure $\mu^{q-P}$ is defined by applying Carath\'eodory's construction, Method I, to $q-P$ on Borel sets. To be precise, for any $A \subset \mathbb R^n$, \[ \mu^{q-P} (A) = \inf \left\lbrace \sum_{k=1}^\infty (q-P)(A_k) \: : \: A_k \in \mathcal{B}(\mathbb R^n), \: A \subset \bigcup_{k} A_k \right\rbrace \: . \] We will now prove that these two constructions are equivalent when the premeasure $q$ is controlled by some Radon measure $\mu$, i.e., when there exists $C >0$ such that for every closed ball $B$, $q(B) \leq C \mu(B)$. \begin{prop} Let $q$ be a premeasure defined on closed balls in $\mathbb R^{n}$. We assume that there exist a Radon measure $\mu$ and a constant $C>0$ such that $q(B)\le C \mu(B)$ for any closed ball $B\subset \mathbb R^{n}$. Then ${\hat \mu}^{q} = \mu^{q-P}$. \end{prop} \begin{proof} Let us prove that for any compact set $K \subset \mathbb R^n$ \begin{equation} \label{eq:comparisonTricotCompact0} \mu^{q-P}(K) \leq {\hat \mu}^q (K) \: . \end{equation} Fix an open set $U$ containing $K$. As $K$ is compact, there exists $\delta_0 > 0$ such that, for all $0 < \delta < \delta_0$, \[ K_{2 \delta} = \left\lbrace x \in \mathbb R^n \: : \: \dist (x,K) < 2\delta \right\rbrace \subset U \: . \] Note that if $\mathcal{B}$ is a $T$--packing of order $\delta$ of $K$, then it is a packing of order $\delta$ of $K_{2\delta}$ and thus of $U$. Hence, for all $\delta < \delta_0$, \[ \sup \left\lbrace \sum_{B \in \mathcal{B}} q(B) \: : \: \mathcal{B} \ T\text{--packing of } K \text{ of order } \delta \right\rbrace \leq \sup \left\lbrace \sum_{B \in \mathcal{B}} q(B) \: : \: \mathcal{B} \text{ packing of } U \text{ of order } \delta \right\rbrace \: . 
\] Passing to the limit as $\delta \rightarrow 0$, we obtain by definition \begin{equation} \label{eq:comparisonTricotCompact} (q-P)(K) \leq {\hat \mu}^q(U) \: . \end{equation} The inequality \eqref{eq:comparisonTricotCompact} holds for any open set $U \supset K$, therefore \[ (q-P)(K) \leq \inf_{\substack{U \supset K \\ U \text{ open}}} {\hat \mu}^q (U) = {\hat \mu}^q (K)\,. \] Since $\{K\}$ is trivially a covering of $K$, we have $\mu^{q-P}(K) \leq (q-P)(K)$, which leads to \eqref{eq:comparisonTricotCompact0}. It is easy to extend \eqref{eq:comparisonTricotCompact0} to any bounded Borel set thanks to Theorem~\ref{thm_approximation_Borel_measure} and the fact that $\mu$ is Radon and $\mathbb R^n$ is locally compact. Then for any Borel set $E \subset \mathbb R^n$, \[ \mu^{q-P} (E) \leq {\hat \mu}^q (E) \: . \] Let us now prove that for any bounded set $E \subset \mathbb R^n$ we have the converse inequality \begin{equation} \label{eq:comparisonTricotBounded0} {\hat \mu}^q (E) \leq \mu^{q-P}(E) \: . \end{equation} Given $\varepsilon>0$ and $0 < \delta < \varepsilon/2$, we define \[ E_\varepsilon = \left\lbrace x \in \mathbb R^n \: : \: \dist (x,\overline{E}) < \varepsilon \right\rbrace \: , \] which is obviously an open set. Let $\mathcal{B}$ be a $\delta$--packing of $E_\varepsilon$, then $\mathcal{B} = \mathcal{B}_1 \sqcup \mathcal{B}_2$ with \[ \mathcal{B}_1 = \left\lbrace B \in \mathcal{B} \: : \: B \cap \overline{E} \neq \emptyset \right\rbrace \quad \text{and} \quad \mathcal{B}_2 = \left\lbrace B \in \mathcal{B} \: : \: B \cap \overline{E} = \emptyset \right\rbrace \: . \] Then, by Definition~\ref{def:Tpacking}, $\mathcal{B}_1$ is a $T$--packing of order $\delta$ of $E$, while any ball of $\mathcal{B}_2$ is included in $E_\varepsilon - \overline{E}$.
Therefore, taking the supremum over all $\delta$--packings of $E_\varepsilon$, we get \[ {\hat \mu}^q_\delta (E_\varepsilon) \leq \sup \left\lbrace \sum_{\substack{B \in \mathcal{B} \\ \overline{B} \cap \overline{E} \neq \emptyset}} q(B) \: : \: \mathcal{B} \, \delta\text{--packing of } E_\varepsilon \right\rbrace + \underbrace{ \sup \left\lbrace \sum_{\substack{B \in \mathcal{B} \\ \overline{B} \cap \overline{E} = \emptyset}} q(B) \: : \: \mathcal{B} \, \delta\text{--packing of } E_\varepsilon \right\rbrace }_{\leq \mu \left( E_\varepsilon - \overline{E} \right)}\,, \] then letting $\delta \rightarrow 0$ leads to \begin{equation} \label{eq:comparisonTricotBounded1} {\hat \mu}^q (E_\varepsilon) \leq (q-P)(E) + \mu (E_\varepsilon - \overline{E}) \: . \end{equation} As $\mu$ is Radon and $E_\varepsilon - \overline{E}$ is bounded (since $E$ is bounded), we have $\mu (E_\varepsilon - \overline{E}) < +\infty$; moreover, $( E_\varepsilon - \overline{E} )_{\varepsilon}$ is increasing in $\varepsilon$ and $\bigcap_{\varepsilon>0}( E_\varepsilon - \overline{E} ) = \emptyset$, so that \[ \mu (E_\varepsilon - \overline{E}) \xrightarrow[\varepsilon \to 0]{} 0 \: . \] Letting $\varepsilon \rightarrow 0$ in \eqref{eq:comparisonTricotBounded1}, we get \begin{equation} \label{eq:comparisonTricotBounded2} {\hat \mu}^q (E) = \inf_{E \subset U} \left\lbrace {\hat \mu}^q (U) \: : \: U \text{ open } \right\rbrace \leq \lim_{\varepsilon \to 0} {\hat \mu}^q (E_\varepsilon) \leq (q-P)(E) \: . \end{equation} Thanks to Proposition~\ref{prop_sub_additivity_loc_finite_case}, we know that ${\hat \mu}^q$ is countably sub-additive, and for every countable family $(E_h)_h$ such that $E \subset \cup_h E_h$, we have by \eqref{eq:comparisonTricotBounded2} \[ {\hat \mu}^q (E) \leq \sum_h {\hat \mu}^q (E \cap E_h) \leq \sum_h (q-P)(E \cap E_h) \leq \sum_h (q - P)(E_h)\,.
\] Finally, taking the infimum over all such $(E_h)_h$ gives \eqref{eq:comparisonTricotBounded0}, whence \[ {\hat \mu}^q(E) \leq \mu^{q-P} (E) \] holds for any Borel set $E \subset \mathbb R^n$. This concludes the proof. \end{proof} \begin{remk} Reading the proof carefully, one should note that the only property of $\mathbb R^n$ which is used is its local compactness. Therefore, if we extend the definition of Taylor and Tricot \cite{tricot} to a metric space $(X,d)$, then assuming $(X,d)$ locally compact, the two packing constructions still coincide under the assumption $q(B) \leq C \mu (B)$ for all closed balls $B$, where $\mu$ is some given Radon measure and $C >0$. \end{remk} \subsection{The case of a signed measure} Our aim is to prove that the packing-type reconstruction applied to a signed measure $\mu$, with premeasures $q_{\pm}$ satisfying \begin{equation}\label{cond-signed-pplus} \frac{1}{C} \mu^+(B_{\alpha r}(x)) - \mu^{-}(B_{r}(x)) \le q_{+}(B_{r}(x)) \le C\mu^{+}(B_{r}(x)) \end{equation} and \begin{equation}\label{cond-signed-pminus} \frac{1}{C} \mu^-(B_{\alpha r}(x)) - \mu^{+}(B_{r}(x)) \le q_{-}(B_{r}(x)) \le C\mu^{-}(B_{r}(x)) \end{equation} for some $C\ge 1$ and $0<\alpha\le 1$, produces a signed measure ${\hat \mu}^{q}$ whose positive and negative parts are comparable with those of $\mu$. We notice that properties \eqref{cond-signed-pplus} and \eqref{cond-signed-pminus} are weaker than the following (and apparently more natural) ones: \begin{align} \label{csp1} \frac{1}{C} \mu^+(B_{\alpha r}(x)) &\le q_{+}(B_{r}(x)) \le C\mu^{+}(B_{r}(x)) \\ \label{csp2} \frac{1}{C} \mu^-(B_{\alpha r}(x)) & \le q_{-}(B_{r}(x)) \le C\mu^{-}(B_{r}(x))\,.
\end{align} On the other hand, we note that the premeasures defined as \[ q_{\pm}(B_{r}(x)) = \left( \frac{1}{r} \int_{s=0}^r \mu (B_s (x) ) \, ds \right)_\pm \] satisfy \eqref{cond-signed-pplus}--\eqref{cond-signed-pminus} but not \eqref{csp1}--\eqref{csp2}. \begin{theo}\label{thm:signedboh} Let $(X,d,\nu)$ satisfy Hypothesis~\ref{hypo1} for some constants $\alpha\in (0,1]$ and $\gamma\ge 1$, and let $\mu = \mu^+ - \mu^-$ be a locally finite, Borel-regular signed measure on $X$. Let $q_\pm$ be a pair of premeasures satisfying \eqref{cond-signed-pplus} and \eqref{cond-signed-pminus} for some $C\ge 1$. Take ${\hat \mu}^{q_\pm}$ as in Definition~\ref{dfn_packing_construction}. Then the following properties hold: \begin{itemize} \item[(i)] ${\hat \mu}^{q_+}$, ${\hat \mu}^{q_-}$ are metric outer measures finite on bounded sets; \item[(ii)] ${\hat \mu}^q = {\hat \mu}^{q_+} - {\hat \mu}^{q_{-}}$ is a signed measure such that, for any Borel set $A \subset X$, \[ \frac{1}{\gamma C} \mu^+ (A) \leq {\hat \mu}^{q_+} (A) \leq C\mu^+ (A)\quad \text{ and }\quad \frac{1}{\gamma C} \mu^- (A) \leq {\hat \mu}^{q_-} (A) \leq C\mu^- (A) \: , \] whence in particular \[ \frac{1}{\gamma C} | \mu | (A) \leq |{\hat \mu}^q | (A) \leq C | \mu | (A) \: . \] \end{itemize} \end{theo} \begin{proof} The countable sub-additivity of ${\hat \mu}^{q_{\pm}}$ follows from the second inequalities in \eqref{cond-signed-pplus}--\eqref{cond-signed-pminus} (see Proposition~\ref{prop_sub_additivity_loc_finite_case}). Then, for any open set $U \subset X$, both inequalities \[ {\hat \mu}^{q_\pm} (U) \leq C\mu^\pm (U) \] are just a consequence of the definition of ${\hat \mu}^{q_{\pm}}$ and of the second inequalities in \eqref{cond-signed-pplus}--\eqref{cond-signed-pminus}. This proves (i). Let now $A \subset X$ be a Borel set.
We first derive an estimate concerning ${\hat \mu}^{q_{+}}$ (the estimate for ${\hat \mu}^{q_{-}}$ can be obtained in the same way). If $\mu^+ (A) < +\infty$, we take an open set $U$ containing $A$ such that $\mu^{+}(U)<+\infty$. Let $\varepsilon_{0},\delta > 0$ be fixed; then apply Corollary~\ref{bescivoic_with_doubling_balls} to $\mu^{+}$ and get a countable family $\mathcal{G}_\delta = \{B_{r_{j}}(x_{j})\}_{j}$ of disjoint closed balls contained in $U$, with radii $r_{j}\leq \delta$ and satisfying $\mu^+ (B_{\alpha r_{j}}(x_{j})) \geq \frac{1}{\gamma +\varepsilon_{0}} \mu^+ (B_{r_{j}}(x_{j}))$ for all $j$, such that \[ \mu^+ (A) = \mu^+ \left(\bigsqcup_{B \in \mathcal{G}_\delta } B \right) \: . \] We have \begin{align*} {\hat \mu}^{q_+}_\delta (U) & \geq \sum_{B \in \mathcal{G}_\delta} q_+ (B) \geq \sum_{j} \left( \frac 1C \mu^{+}(B_{\alpha r_{j}}(x_{j})) - \mu^{-}(B_{r_{j}}(x_{j})) \right)\\ & \geq \sum_{j} \left( \frac 1{(\gamma + \varepsilon_{0})C} \mu^{+}(B_{r_{j}}(x_{j})) - \mu^{-}(B_{r_{j}}(x_{j})) \right) \geq \frac 1{(\gamma + \varepsilon_{0})C} \mu^{+}(A) - \mu^{-}(U)\,. \end{align*} Letting $\delta\to 0$ and then $\varepsilon_{0}\to 0$, we find \begin{equation} \label{eq_signed_case_1} {\hat \mu}^{q_+} (U) \geq \frac 1{\gamma C} \mu^+ (A) - \mu^- (U) \: . \end{equation} By definition of ${\hat \mu}^{q_+}(A)$, there exists a sequence of open sets $(U_k^1)_k$ such that, for all $k$, it holds $A \subset U_k^1$ and \[ {\hat \mu}^{q_+} (U_k^1) \xrightarrow[k \to \infty]{} {\hat \mu}^{q_+} (A) \: . \] By outer regularity of $\mu^-$ (which is Borel and finite on bounded sets), there exists a sequence of open sets $(U_k^2)_k$ such that, for all $k$, we get $A \subset U_k^2$ and \[ \mu^- (U_k^2) \xrightarrow[k \to \infty]{} \mu^- (A) \: .
\] For all $k$, let $U_k = U_k^1 \cap U_k^2$; then $U_k$ is an open set, $A\subset U_k$ and, by monotonicity, \begin{align*} {\hat \mu}^{q_+} (A) & \leq {\hat \mu}^{q_+} (U_k) \leq {\hat \mu}^{q_+} (U_k^1)\,, \\ \mu^- (A) & \leq \mu^- (U_k) \leq \mu^- (U_k^2) \: , \end{align*} therefore \[ {\hat \mu}^{q_+} (U_k) \xrightarrow[k \to \infty]{} {\hat \mu}^{q_+} (A) \quad \text{ and } \quad \mu^- (U_k) \xrightarrow[k \to \infty]{} \mu^-(A) \: . \] Evaluating \eqref{eq_signed_case_1} on the sequence $(U_k)_k$ and letting $k$ go to $+\infty$, we eventually get \begin{equation} \label{eq_signed_case_2} {\hat \mu}^{q_+} (A) \geq \frac{1}{\gamma C} \mu^+ (A) - \mu^- (A) \: . \end{equation} Owing to the Hahn decomposition of signed measures, there exists a Borel set $P$ such that for every Borel set $A$ it holds \[ \mu^+ (A) = \mu^+ (A \cap P) = \mu (A \cap P) \quad \text{ and } \quad \mu^- (A) = -\mu (A - P)\: . \] Finally, let $A$ be a Borel set; then by \eqref{eq_signed_case_2} applied to $A \cap P$ we get \begin{align*} {\hat \mu}^{q_+} (A) \geq {\hat \mu}^{q_+} (A \cap P) & \geq \frac 1{\gamma C} \mu^+ (A \cap P) - \mu^- (A \cap P) \\ & = \frac 1{\gamma C} \mu^+ (A)\,. \end{align*} It remains to show that if $\mu^+ (A) = +\infty$, then ${\hat \mu}^{q_+} (A) = +\infty$. This can be easily obtained by taking a sequence of open balls $U_{n}$ with fixed center and radius $n\in \mathbb N$, and by considering the sequence $A_{n} = A\cap U_{n}$, for which $\mu^{+}(A_{n})<+\infty$ and $\lim_{n} \mu^{+}(A_{n}) = \mu^{+}(A) = +\infty$. By applying the same argument as before, we get \[ {\hat \mu}^{q_{+}}(A)\ge {\hat \mu}^{q_{+}}(A_{n}) \ge \frac 1{\gamma C}\mu^{+}(A_{n})\,, \] and the conclusion follows by taking the limit as $n\to +\infty$. This completes the proof of (ii) and thus of the theorem.
\end{proof} \subsection{A stability result} If the approximate values $q(B_{r}(x))$ are closer and closer to the actual values $\mu(B_{r}(x))$ as $r\to 0$, one obtains by Theorem~\ref{thm_main} that the reconstructed measure ${\hat \mu}^{q}$ coincides with $\mu$. More precisely, we have the following stability result. \begin{corollary} Let us fix sequences $(\alpha_{n})_{n}, (\gamma_{n})_{n}, (C_{n})_{n}$ and $(r_{n})_{n}$ such that \begin{align*} &\alpha_{n}\in (0,1],\ \gamma_{n}\ge 1,\ C_{n}\ge 1,\ r_{n}>0\,, \\ &\alpha_{n},\gamma_{n},C_{n}\to 1\quad\text{and}\quad r_{n}\to 0\quad \text{as }n\to \infty\,. \end{align*} Let $(X,d)$ be a directionally limited metric space endowed with a sequence of asymptotically $(\alpha_{n},\gamma_{n})$-bounded measures $\nu_{n}$ satisfying \eqref{agbounded}. Let $\mu$ be a positive Borel measure on $X$ and let $q$ be a premeasure defined on closed balls and satisfying \[ C_{n}^{-1}\mu(B_{\alpha_{n}r}(x)) \le q(B_{r}(x)) \le C_{n}\, \mu(B_{r}(x)) \] for all $x\in X$, $n\in \mathbb N$, and $r\in (0,r_{n})$. Then ${\hat \mu}^{q} = \mu$. \end{corollary} \begin{proof} This is an immediate consequence of Theorem~\ref{thm_main}. \end{proof} \begin{remk} The above corollary can be formulated for signed measures as well. Indeed, under the same assumptions on $X$ and analogous assumptions on $q_{\pm}$, one obtains the same conclusion, i.e., the coincidence of the reconstructed signed measure with the initial measure, thanks to Theorem~\ref{thm:signedboh}. \end{remk} \begin{thebibliography}{10} \bibitem{bruckner_thomson} A.~M. Bruckner, J.~B. Bruckner, and B.~S. Thomson. \newblock {\em Elementary real analysis}. \newblock Prentice Hall, 2001. \bibitem{Blanche-rectifiability} B.~Buet. \newblock Quantitative conditions of rectifiability for varifolds. \newblock {\em Annales de l'Institut Fourier}. \newblock To appear. \bibitem{Blanche-thesis} B.~Buet.
\newblock {\em Approximation de surfaces par des varifolds discrets : repr{\'e}sentation, courbure, rectifiabilit{\'e}}. \newblock PhD thesis, Universit{\'e} Claude Bernard Lyon 1, 2014. \bibitem{BuetLeonardiMasnou} B.~Buet, G.~P. Leonardi, and S.~Masnou. \newblock Surface approximation, discrete varifolds, and regularized first variation. \newblock In preparation. \bibitem{davies} R.~O. Davies. \newblock Measures not approximable or not specifiable by means of balls. \newblock {\em Mathematika}, 18:157--160, 1971. \bibitem{edgar} G.~A. Edgar. \newblock Packing measure in general metric space. \newblock {\em Real Analysis Exchange}, 26(2):831--852, 2000. \bibitem{evans} L.~C. Evans and R.~F. Gariepy. \newblock {\em Measure theory and fine properties of functions}. \newblock Studies in Advanced Mathematics. CRC Press, Boca Raton, FL, 1992. \bibitem{federer} H.~Federer. \newblock {\em Geometric measure theory}. \newblock Die Grundlehren der mathematischen Wissenschaften, Band 153. Springer-Verlag New York Inc., New York, 1969. \bibitem{handbook_geometry_banach_spaces} D.~Preiss. \newblock {\em Geometric {M}easure {T}heory in {B}anach {S}paces}. \newblock North-Holland Publishing Co., Amsterdam, 2001. \bibitem{preiss_tiser} D.~Preiss and J.~Ti{\v{s}}er. \newblock Measures in {B}anach spaces are determined by their values on balls. \newblock {\em Mathematika}, 38(2):391--397 (1992), 1991. \bibitem{tricot} S.~J. Taylor and C.~Tricot. \newblock Packing measure, and its evaluation for a {B}rownian path. \newblock {\em Trans. Amer. Math. Soc.}, 288(2):679--699, 1985. \bibitem{tricot_0} C.~Tricot. \newblock Two definitions of fractional dimension. \newblock {\em Math. Proc. Cambridge Philos. Soc.}, 91(1):57--74, 1982. \end{thebibliography} \end{document}
\begin{document} \newtheorem{prop}{Proposition} \newtheorem{thrm}{Theorem} \newtheorem{defn}{Definition} \newtheorem{cor}{Corollary} \newtheorem{lemma}{Lemma} \newtheorem{remark}{Remark} \newtheorem{exam}{Example} \newcommand{\iid}{\stackrel{\mathrm{iid}}{\sim}} \newcommand{\eqd}{\stackrel{d}{=}} \newcommand{\cond}{\stackrel{d}{\rightarrow}} \newcommand{\conv}{\stackrel{v}{\rightarrow}} \newcommand{\conw}{\stackrel{w}{\rightarrow}} \newcommand{\conp}{\stackrel{p}{\rightarrow}} \newcommand{\simp}{\stackrel{p}{\sim}} \title{Representation and Simulation of Multivariate Dickman Distributions and Vervaat Perpetuities} \setcounter{page}{1} \pagenumbering{arabic} \begin{abstract} A multivariate extension of the Dickman distribution was recently introduced, but very few of its properties have been studied. We discuss several properties with an emphasis on simulation. Further, we introduce and study a multivariate extension of the more general class of Vervaat perpetuities and derive a number of properties and representations.
Most of our results are presented in the even more general context of so-called $\alpha$-times self-decomposable distributions.\\ \noindent\textbf{Keywords:} Multivariate Dickman distribution; Multivariate Vervaat perpetuities; Self-decomposable distributions; Simulation \end{abstract} \section{Introduction} The Dickman distribution arises in many applications, including in the study of random graphs, small jumps of L\'evy processes, and Hoare's quickselect algorithm. It is closely related to the Dickman function, which is important in the study of prime numbers. For details and many applications see the recent surveys \cite{Penrose:Wade:2004}, \cite{Molchanov:Panov:2020}, and \cite{Grabchak:Molchanov:Panov:2022}. The class of Vervaat perpetuities is closely related to the Dickman distribution and has applications in a variety of areas including economics, actuarial science, and astrophysics, see the references in \cite{Dassios:Qu:Lim:2019}. Recently, there has been much interest in the simulation of Dickman random variables and Vervaat perpetuities. This was studied in a series of papers, including \cite{Devroye:2001}, \cite{Devroye:Fawzi:2010}, \cite{Fill:Huber:2010}, \cite{Chi:2012}, \cite{Cloud:Huber:2017}, and \cite{Dassios:Qu:Lim:2019}. A multivariate extension of the Dickman distribution was recently introduced in \cite{Bhattacharjee:Molchanov:2020}, where it arose in the context of a limit theorem related to certain point processes. However, very few properties of the multivariate Dickman distribution have been studied and, from what we have seen, a multivariate extension of Vervaat perpetuities has not been introduced. In this paper, we introduce a multivariate extension of the class of Vervaat perpetuities, which includes the multivariate Dickman distribution as a special case. 
We show that these distributions are infinitely divisible and, more specifically, that they coincide with the class of self-decomposable distributions with finite background driving L\'evy measures, no Gaussian part, and no drift. We have not seen this relationship mentioned previously, even in the univariate case, although connections between the class of self-decomposable distributions and more general classes of perpetuities are known, see \cite{Jurek:1999}. In the interest of generality, our theoretical results are derived in the more general context of $\alpha$-times self-decomposable distributions. For these we derive three representations: as stochastic integrals, as shot noise, and as limits of certain triangular arrays. The latter two can be used for simulation. Further, for the multivariate Dickman distribution, we propose a third simulation method, which is based on a discretization of the spectral measure and is similar to the approach used in \cite{Xia:Grabchak:2022} for simulating multivariate tempered stable distributions. The need for simulation methods is motivated by the fact that the Dickman distribution can be used to model the small jumps of large classes of L\'evy processes. In the univariate case this was shown in \cite{Covo:2009}; we will extend it to the multivariate case in a forthcoming work. The simulation of large jumps tends to be easier, as they follow a compound Poisson distribution. Combining the two allows for the simulation of a wide variety of L\'evy processes, which can then be used for many applications. Our motivation comes from finance, where Monte Carlo methods based on L\'evy processes are often used for option pricing, see, e.g., \cite{Cont:Tankov:2004}. In particular, the ability to simulate multivariate L\'evy processes allows for the pricing of multi-asset, e.g., rainbow or basket, options. The rest of this paper is organized as follows.
In Section \ref{sec: alpha times SD} we recall basic facts about $\alpha$-times self-decomposable distributions and give our main theoretical results. In Section \ref{sec: MD}, we formally introduce the multivariate Dickman distribution and multivariate Vervaat perpetuities. In Section \ref{sec: sim methods} we discuss three approximate simulation methods for the multivariate Dickman distribution, with an emphasis on the bivariate case. A small-scale simulation study is conducted in Section \ref{sec: sim study}. Proofs are postponed to Section \ref{sec: proofs}. Before proceeding, we introduce some notation. We write $\mathbb R^d$ to denote the space of $d$-dimensional column vectors of real numbers, $\mathbb S^{d-1}=\{x\in\mathbb R^d:|x|=1\}$ to denote the unit sphere in $\mathbb R^d$, and $\mathfrak B(\mathbb R^d)$ and $\mathfrak B(\mathbb S^{d-1})$ to denote the classes of Borel sets on $\mathbb R^d$ and $\mathbb S^{d-1}$, respectively. For a distribution $\mu$ on $\mathbb R^d$, we write $\hat\mu$ to denote its characteristic function, $X\sim \mu$ to denote that $X$ is a random variable with distribution $\mu$, and $X_1,X_2,\dots\iid \mu$ to denote that $X_1,X_2,\dots$ are independent and identically distributed (iid) random variables with distribution $\mu$. We write $U(0,1)$ to denote the uniform distribution on $(0,1)$, $\mathrm{Exp}(\lambda)$ to denote the exponential distribution with rate $\lambda$, $\delta_a$ to denote a point mass at $a$, and $1_A$ to denote the indicator function of the event $A$. We write $\Gamma$ to denote the gamma function, $\lfloor\cdot\rfloor$ to denote the floor function, and $\vee$ and $\wedge$ to denote the maximum and minimum, respectively. We write $\eqd$, $\cond$, and $\conw$ to denote equality in distribution, convergence in distribution, and weak convergence, respectively. For two sequences of real numbers $\{a_n\}$ and $\{b_n\}$, we write $a_n\sim b_n$ to denote $a_n/b_n\to1$ as $n\to \infty$.
\section{$\alpha$-times self-decomposable distributions and main results}\label{sec: alpha times SD} We begin by recalling that the characteristic function of an infinitely divisible distribution $\mu$ on $\mathbb R^d$ can be written in the form $\hat\mu(z) = \exp\{C_\mu(z)\}$, where \begin{eqnarray*}\label{eq: char func inf div} C_{\mu}(z) = -\langle z,Az\rangle + i \left\langle b, z \right\rangle + \int_{\mathbb{R}^d}\left(e^{i\left\langle z, x\right\rangle } - 1 - \left\langle z, x\right\rangle 1_{[|x|\le1]}\right) M(\mathrm d x), \ \ \ z \in \mathbb{R}^d, \end{eqnarray*} $A$ is a $d\times d$-dimensional covariance matrix called the Gaussian part, $b\in\mathbb R^d$ is the shift, and $M$ is the L\'evy measure, which is a Borel measure on $\mathbb R^d$ satisfying \begin{equation*}\label{eq: levy measure equation gen} M(\{0\}) = 0 \text{ and } \int_{\mathbb{R}^d}(|x|^2 \wedge 1) M(\mathrm d x) < \infty. \end{equation*} The parameters $A$, $M$, and $b$ uniquely determine this distribution and we write $\mu=\mathrm{ID}(A,M,b)$. We call $C_\mu$ the cumulant generating function (cgf) of $\mu$. Associated with every infinitely divisible distribution $\mu$ is a L\'evy process $\{X_t:t\ge0\}$, where $X_1\sim\mu$. This process has finite variation if and only if $A=0$ and $M$ satisfies the additional assumption \begin{equation}\label{eq: levy measure equation finite var} \int_{\mathbb{R}^d}(|x| \wedge 1) M(\mathrm d x) < \infty. \end{equation} Through a slight abuse of terminology, we also say that the associated distribution $\mu$ has finite variation. In this case, the cgf can be written in the form \begin{eqnarray}\label{eq: char func inf div finite variation} C_\mu(z)= i \left\langle \gamma, z \right\rangle + \int_{\mathbb{R}^d}\left(e^{i\left\langle z, x\right\rangle } - 1\right) M(\mathrm d x), \ \ \ z \in \mathbb{R}^d, \end{eqnarray} where $\gamma = b-\int_{|x|\le1}x M(\mathrm d x) \in \mathbb{R}^d$ is the drift and we write $\mu = \mathrm{ID}_0(M, \gamma)$. 
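As a concrete illustration of the finite-variation formula \eqref{eq: char func inf div finite variation} (a numerical sketch of ours, not part of the original development): taking $M=\lambda\delta_1$ and $\gamma=0$ gives $\mathrm{ID}_0(M,\gamma)$ equal to the Poisson distribution with rate $\lambda$, so $\exp\{C_\mu(z)\}$ can be checked against the Poisson characteristic function computed by direct summation of its probability mass function.

```python
import cmath
import math

lam = 1.7  # Levy measure M = lam * delta_1 with drift gamma = 0, i.e. Poisson(lam)
for z in [-2.0, 0.3, 1.0, 4.0]:
    # Cumulant generating function from the finite-variation formula:
    # C(z) = lam * (e^{iz} - 1)
    cgf = lam * (cmath.exp(1j * z) - 1.0)
    # Poisson characteristic function, summed term by term
    # (the tail beyond k = 60 is negligible for lam = 1.7)
    cf = sum(cmath.exp(1j * z * k) * lam ** k * math.exp(-lam) / math.factorial(k)
             for k in range(60))
    assert abs(cmath.exp(cgf) - cf) < 1e-10
```

The same check applies to any finite atomic $M$, since $\mathrm{ID}_0(M,\gamma)$ is then a shifted compound Poisson distribution.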
A distribution $\mu$ is said to be self-decomposable if for any $c\in(0,1)$ there exists a probability measure $\rho_c$ with \begin{eqnarray}\label{eq: self dec} \hat\mu(z) = \hat\mu(cz)\hat\rho_c(z), \ \ z\in\mathbb R^d. \end{eqnarray} Equivalently, if $X\sim\mu$ and $Y_c\sim\rho_c$, then $X\eqd cX+Y_c$, where $X$ and $Y_c$ are independent on the right side. We denote the class of self-decomposable distributions by $L_1$. These distributions are important in the study of stationary $\mathrm{AR}(1)$ processes and the limits of sums of independent random variables. Next, for $\alpha\in\{2,3,\dots\}$ we define the classes $L_\alpha$ recursively, as follows. A distribution $\mu\in L_\alpha$ if for every $c\in(0,1)$ there exists a $\rho_c\in L_{\alpha-1}$ such that \eqref{eq: self dec} holds. There is substantial literature on the study of these classes, see, e.g., the monograph \cite{Rocha-Arteaga:Sato:2019} and the references therein. The distributions in $L_\alpha$ are sometimes called $\alpha$-times self-decomposable. These should not be confused with the so-called $\alpha$-self-decomposable distributions, studied in, e.g., \cite{Maejima:Ueda:2010} and the references therein. It is well-known that every $\mu\in L_\alpha$ is infinitely divisible. In fact, $\mu\in L_\alpha$ if and only if $\mu=\mathrm{ID}(A,M,b)$ with $M=M_\alpha$, where \begin{eqnarray}\label{eq: M alpha} M_\alpha(B) = \int_{\mathbb R^d}\int_0^1 1_{B}(yr) (-\log r)^{\alpha-1}r^{-1}\mathrm d r \nu(\mathrm d y), \ \ \ B\in\mathfrak B(\mathbb R^d), \end{eqnarray} for some Borel measure $\nu$ satisfying \begin{eqnarray}\label{eq: finite for MVP alpha} \nu(\{0\})=0\ \mbox{ and }\ \int_{|x|\le2}|x|^2\nu(\mathrm d x)+\int_{|x|>2}\left(\log|x|\right)^\alpha\nu(\mathrm d x)<\infty. \end{eqnarray} Clearly, $M_\alpha$ can be a L\'evy measure even if $\alpha$ is not an integer. In fact, $M_\alpha$ is a L\'evy measure for any $\alpha>0$ so long as \eqref{eq: finite for MVP alpha} holds. 
The study of this extension to non-integer $\alpha$ was initiated in \cite{Thu:1982}, see also \cite{Sato:2010} and the references therein. This leads to the following. \begin{defn} Fix $\alpha\in(0,\infty)$. We write $L_\alpha$ to denote the class of all infinitely divisible distributions $\mathrm{ID}(A,M,b)$ where $M=M_\alpha$ is of the form \eqref{eq: M alpha} for some Borel measure $\nu$ on $\mathbb R^d$ satisfying \eqref{eq: finite for MVP alpha}. We refer to $L_\alpha$ as the class of $\alpha$-times self-decomposable distributions. The measure $\nu$ is called the background driving L\'evy measure (BDLM). \end{defn} The name BDLM is motivated by the role that this measure plays in the context of OU-processes, see \cite{Rocha-Arteaga:Sato:2019}. We now give a result that relates certain moment properties of $M_\alpha$ to those of the BDLM $\nu$. \begin{lemma}\label{lemma: moments} Fix $\alpha\in(0,\infty)$ and let $M_\alpha$ be as in \eqref{eq: M alpha}, where $\nu$ is a Borel measure on $\mathbb R^d$ satisfying \eqref{eq: finite for MVP alpha}. In this case we can equivalently write \begin{eqnarray}\label{eq: M alpha 2} M_\alpha(B) &=& \int_{\mathbb R^d}\int_0^\infty 1_{B}(ye^{-r}) r^{\alpha-1}\mathrm d r \nu(\mathrm d y)\nonumber\\ &=& \theta^{-1}\int_{\mathbb R^d}\int_0^\infty 1_{B}(ye^{-(\alpha r/\theta)^{1/\alpha}})\mathrm d r \nu(\mathrm d y), \ \ \ B\in\mathfrak B(\mathbb R^d) \end{eqnarray} for any $\theta>0$. Further, $M_\alpha(\mathbb R^d)=\infty$ for any $\nu\ne0$ and for any $p>0$, we have $$ \int_{|x|\le1}|x|^p M_\alpha(\mathrm d x)<\infty \mbox{ if and only if } \int_{|x|\le1}|x|^p \nu(\mathrm d x)<\infty $$ and $$ \int_{|x|>1}|x|^p M_\alpha(\mathrm d x)<\infty \mbox{ if and only if } \int_{|x|>1}|x|^p \nu(\mathrm d x)<\infty. 
$$ \end{lemma} Combining the lemma with Proposition 25.4 in \cite{Sato:1999} shows that for $X\sim\mu\in L_\alpha$ with BDLM $\nu$ and any $p>0$ we have $$ \mathrm E|X|^p<\infty \mbox{ if and only if } \int_{|x|>1}|x|^p \nu(\mathrm d x)<\infty. $$ In the context of multivariate Vervaat perpetuities, we are only interested in distributions with no Gaussian part and a finite BDLM. In this case, Lemma \ref{lemma: moments} implies that $M_\alpha$ will satisfy \eqref{eq: levy measure equation finite var}, which leads to the following. \begin{defn} Fix $\alpha\in(0,\infty)$. We write $L^*_\alpha$ to denote the class of all infinitely divisible distributions $\mu=\mathrm{ID}_0(M,\gamma)$ where $M=M_\alpha$ is of the form \eqref{eq: M alpha} for some finite Borel measure $\nu$ on $\mathbb R^d$ satisfying \eqref{eq: finite for MVP alpha}. In this case, we write $\mu=L^*_\alpha(\nu,\gamma)$. \end{defn} From \eqref{eq: char func inf div finite variation} and \eqref{eq: M alpha} it follows that the cgf of $L^*_\alpha(\nu,\gamma)$ is given by \begin{eqnarray*}\label{eq: char func L*} i\langle \gamma,z\rangle+\int_{\mathbb{R}^{d}}\int_0^1 \left(e^{ir\left\langle z, y\right\rangle } - 1\right)(-\log r)^{\alpha-1}r^{-1}\mathrm d r \nu(\mathrm d y), \ \ \ z \in \mathbb{R}^d. \end{eqnarray*} Taking partial derivatives shows that, when they exist, the mean vector and covariance matrix of $X\sim L^*_\alpha(\nu,\gamma)$ are given by \begin{eqnarray}\label{eq: mean and var L*} \mathrm E[X] =\gamma+ \Gamma(\alpha) \int_{\mathbb R^d} y \nu(\mathrm d y) \mbox{ and } \mathrm{cov}(X) = \frac{\Gamma(\alpha)}{2^\alpha}\int_{\mathbb R^{d}} yy^T \nu(\mathrm d y). \end{eqnarray} Next, we give three representations of the distributions in $L^*_\alpha$: the first as a stochastic integral, the second as shot noise, and the third as the limit of a triangular array.
The first result is essentially contained in \cite{Sato:2010}, but, for completeness and due to a difference in presentation, a self-contained proof is given in Section \ref{sec: proofs}. \begin{thrm}\label{thrm: integ rep} Let $X\sim L^*_\alpha(\nu,\gamma)$ and fix $\theta>0$. If $\{Y_t:t\ge0\}$ is a L\'evy process with $Y_1\sim\mathrm{ID}_0(\nu',\gamma')$, where $\nu'=\nu/\theta$ and $\gamma'=\gamma/(\theta\Gamma(\alpha))$, then $$ X \eqd \int_0^\infty e^{-\left(\frac{\alpha s}{\theta}\right)^{1/\alpha}} \mathrm d Y_s, $$ where the stochastic integral is absolutely definable in the sense of \cite{Sato:2006}. \end{thrm} Our second representation is as an infinite series. This is sometimes called a shot noise representation. It is not just for one random variable, but for the corresponding L\'evy process. \begin{thrm}\label{thrm: series rep gen Vervaat} Fix $\alpha\in(0,\infty)$, let $\nu$ be a finite nonzero measure satisfying \eqref{eq: finite for MVP alpha}, and set $\theta=\nu(\mathbb R^d)$ and $\nu_1=\nu/\theta$. Let $E_1,E_2,\dots\iid \mathrm{Exp}(1)$, $V_1,V_2,\dots\iid U(0,1)$, and $Y_1,Y_2,\dots\iid \nu_1$ be independent sequences of random variables and let $\Gamma_i = E_1+E_2+\cdots+E_i$. Fix $\gamma\in\mathbb R^d$, $T>0$ and set \begin{eqnarray}\label{eq: shot noise for Levy meas} X_t = t\gamma+ \sum_{i=1}^{\infty} e^{-\left(\frac{\Gamma_i}{T\theta}\right)^{1/\alpha}} Y_i 1_{\left[0,\frac{t}{T}\right]}(V_i), \ \ \ t\in[0,T]. \end{eqnarray} Then the series converges almost surely and uniformly on $t\in[0,T]$ and $\{X_t:0\le t\le T\}$ is a L\'evy process with $X_t\sim L^*_\alpha(t\nu,t\gamma)$. 
When $\alpha=1$ we can write \eqref{eq: shot noise for Levy meas} as \begin{eqnarray}\label{eq: shot noise for Levy meas alpha=1} X_t = t\gamma+ \sum_{i=1}^{\infty} \left(U_1U_2\cdots U_i\right)^{1/(T\theta)} Y_i 1_{\left[0,\frac{t}{T}\right]}(V_i), \ \ \ t\in[0,T], \end{eqnarray} where $U_1,U_2,\dots\iid U(0,1)$ are independent of the sequences of $Y_i$'s and $V_i$'s. \end{thrm} In light of \eqref{eq: finite for MVP alpha}, Theorem \ref{thrm: series rep gen Vervaat} implicitly assumes that $\nu_1(\{0\})=P(Y_i=0)=0$. However, the representation in \eqref{eq: shot noise for Levy meas} holds even if this is not the case. To see this, assume that $X_t'$ is given as in \eqref{eq: shot noise for Levy meas}, but with some $\gamma'$ in place of $\gamma$ and $Y_i'$ in place of $Y_i$, where $Y_i'\sim \nu'_1$ and $\nu_1'(\{0\})=u$ for some $u\in(0,1)$. Thus, \begin{eqnarray*}\label{eq: shot noise for Levy meas 2} X_t' = t\gamma'+ \sum_{i=1}^{\infty} e^{-\left(\frac{\Gamma_i}{T\theta}\right)^{1/\alpha}} Y_i' 1_{\left[0,\frac{t}{T}\right]}(V_i). \end{eqnarray*} By independence $P([Y'_i=0]\cup[V_i>t/T]) = 1-(1-u)t/T$. Thus, $$ Y_i'1_{\left[0,\frac{t}{T}\right]}(V_i)\eqd Y_i1_{\left[0,\frac{(1-u)t}{T}\right]}(V_i), $$ where $Y_i\sim \nu_1$ is independent of $V_i$ and $\nu_1 (\mathrm d y)= 1_{[|y|>0]}\nu_1'(\mathrm d y)/(1-u)$ is a probability measure. It follows that $X_t'\eqd X_{t(1-u)}$, where $X_{t(1-u)}$ is given as in \eqref{eq: shot noise for Levy meas} if we take $\gamma=\gamma'/(1-u)$ and $\nu(\mathrm d y) =\theta 1_{[|y|>0]}\nu_1'(\mathrm d y)/(1-u)$ in Theorem \ref{thrm: series rep gen Vervaat}. Hence, $X_t' \sim L^*_\alpha(t\nu',t\gamma')$, where $\nu'(\mathrm d y) = \theta1_{[|y|>0]}\nu_1'(\mathrm d y)$. We now turn to the third representation, which is as the limiting distribution of a triangular array. 
Let $X_1,X_2,\dots$ be iid random variables with support contained in $[0,1]$ and $$ P(X_1>x) = (1-x)^\alpha \ell(1-x), \ x\in[0,1], $$ where $\alpha>0$ and $\ell$ is a function that is slowly varying at $0$. This means that for every $t>0$ $$ \lim_{x\to0^+} \frac{\ell(xt)}{\ell(x)}=1. $$ \begin{thrm}\label{thrm: conv of powers} Let $\nu_0$ be a distribution on $\mathbb R^d$ and let $T_1,T_2,\dots\iid\nu_0$ be independent of the sequence of $X_i$'s. Assume that $\ell$ is bounded away from $0$ and $\infty$ on every compact subset of $(0,1]$ and that there exists a $\gamma\in(0,1)$ with $\mathrm E|T_1|^\gamma<\infty$. Let $N_n$ be a sequence of integers with $N_nn^{-\alpha}\ell(1/n)\to c$ for some $c\in(0,\infty)$ and set $$ A_n =\sum_{i=1}^{N_n} T_i X_i^{n}. $$ Then \(A_n \cond A_\infty\), where \(A_\infty\sim L^*_\alpha(\nu,0)\) with $\nu(\mathrm d x) = c1_{[|x|>0]}\nu_0(\mathrm d x)$. \end{thrm} In the univariate case, versions of this result were studied in several papers. In \cite{Schlather:2001} it was studied in the context of limits of $\ell^p$ norms of random vectors as both $p$ and the dimension of the vector approach infinity. In \cite{Grabchak:Molchanov:2019} it was studied in the context of the so-called random energy model (REM), which is important in statistical physics. In both papers it is assumed that $P(T_1=1)=1$. While other distributions were considered in \cite{Molchanov:Panov:2020} and \cite{Grabchak:Molchanov:Panov:2022}, in all of these univariate results, it is assumed that $\nu_0$ either has a bounded support or exponential moments. Here we make a much weaker assumption on the tails. Thus, this result is new even in the one-dimensional case.
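As a quick numerical sanity check of Theorem \ref{thrm: conv of powers} (not part of the proof), take $d=1$, $P(T_1=1)=1$, $\alpha=1$, and $\ell\equiv1$, so that $X_1\sim U(0,1)$ and we may take $N_n=n$, $c=1$. Then $\mathrm E[A_n]=n/(n+1)\to1$, which is the mean of the limit. A sketch in Python (sample sizes are our choices):

```python
import numpy as np

rng = np.random.default_rng(1)

def triangular_array_sum(n, rng):
    """One draw of A_n = sum_{i=1}^{N_n} T_i X_i^n with T_i = 1,
    X_i ~ U(0,1) (alpha = 1, ell = 1) and N_n = n (so c = 1)."""
    return (rng.uniform(size=n) ** n).sum()

reps = np.array([triangular_array_sum(200, rng) for _ in range(10000)])
mean_est = reps.mean()  # E[A_n] = n/(n+1), close to the limiting mean 1
```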
\section{Multivariate Dickman Distribution and Vervaat Perpetuities}\label{sec: MD} In the univariate case, a positive random variable $X$ is said to have a generalized Dickman (GD) distribution if \begin{equation}\label{eq: dickman equation} X\eqd U^{1/\theta}(X+1), \end{equation} where $\theta>0$ and $U\sim U(0,1)$ is independent of $X$ on the right side. We denote this distribution by $\mathrm{GD}(\theta)$. When $\theta=1$, it is just called the Dickman distribution. A multivariate extension of this distribution was recently introduced in \cite{Bhattacharjee:Molchanov:2020}. It is defined as follows. \begin{defn} Let $\sigma_1$ be a probability measure on $\mathbb S^{d-1}$ and let $W\sim \sigma_1$. A random variable $X$ on $\mathbb R^d$ is said to have a multivariate Dickman (MD) distribution if for some $\theta>0$ \begin{eqnarray}\label{eq: relation for MD} X\eqd U^{1/\theta}( X+ W), \end{eqnarray} where $U\sim U(0,1)$ and $X, W, U$ are independent on the right side. We denote this distribution by $\mathrm{MD}(\sigma)$, where $\sigma=\theta\sigma_1$. We call $\sigma$ the spectral measure. \end{defn} There is no loss of information when working with $\sigma$ instead of $\theta$ and $\sigma_1$ since $\theta=\sigma(\mathbb S^{d-1})$ and $\sigma_1 = \sigma/\theta$. When the dimension $d=1$, $\sigma(\{-1\})=0$ and $\sigma(\{1\})=\theta>0$, we have $\mathrm{MD}(\sigma)=\mathrm{GD}(\theta)$. More generally, when $d=1$, $\sigma(\{1\})=\theta_1>0$, and $\sigma(\{-1\})=\theta_2>0$, it is easily checked that $\mathrm{MD}(\sigma)$ is the distribution of $X_1-X_2$, where $X_1\sim\mathrm{GD}(\theta_1)$ and $X_2\sim\mathrm{GD}(\theta_2)$ are independent. We now turn to Vervaat perpetuities, which are named after the author of \cite{Vervaat:1979}.
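Before doing so, we note that the fixed-point relation \eqref{eq: relation for MD} can be explored numerically by iterating the map $X \mapsto U^{1/\theta}(X+W)$ over a large sample: taking expectations in \eqref{eq: relation for MD} gives $\mathrm E[X]=\frac{\theta}{\theta+1}\left(\mathrm E[X]+\mathrm E[W]\right)$, i.e.\ $\mathrm E[X]=\theta\,\mathrm E[W]$. A sketch in the simplest case $d=1$, $W\equiv1$, $\theta=1$, where the limit is the Dickman distribution with mean $1$ (the iteration count and sample size are our choices):

```python
import numpy as np

rng = np.random.default_rng(2)

theta, n_iter, n_samples = 1.0, 80, 20000
X = np.zeros(n_samples)                   # any starting distribution works
for _ in range(n_iter):
    U = rng.uniform(size=n_samples)
    X = U ** (1.0 / theta) * (X + 1.0)    # X <- U^(1/theta) * (X + W), W = 1

mean_est = X.mean()  # E[X] = theta * E[W] = 1 at the fixed point
```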
In the univariate case a distribution $\mu$ on $[0,\infty)$ is said to be a Vervaat perpetuity if there exists a $\theta>0$ and a distribution $\nu_1$ on $[0,\infty)$ such that, if $X\sim\mu$, $Z\sim\nu_1$, and $U\sim U(0,1)$, then \begin{eqnarray}\label{eq: MVP defn} X\eqd U^{1/\theta}\left(X+Z\right), \end{eqnarray} where $X,Z,U$ are independent on the right. It can be shown that a solution exists if and only if \begin{eqnarray}\label{eq: finite for MVP} \int_{|x|>2}\log|x|\nu_1(\mathrm d x)<\infty. \end{eqnarray} We now extend this idea to the multivariate case. \begin{defn} Fix $\theta>0$ and let $\nu_1$ be a probability measure on $\mathbb R^d$ satisfying \eqref{eq: finite for MVP}. The distribution of a random variable $X$ on $\mathbb R^d$ is said to be a multivariate Vervaat perpetuity (MVP) if \eqref{eq: MVP defn} holds, where $Z\sim\nu_1$, $U\sim U(0,1)$, and $U,X,Z$ are independent on the right side. \end{defn} From \eqref{eq: MVP defn} it follows that the distribution of $X$ is MVP if and only if \begin{eqnarray}\label{eq: main sum for MVP} X\eqd Z_1 U_1^{1/\theta}+ Z_2 \left(U_1U_2\right)^{1/\theta}+Z_3\left(U_1U_2U_3\right)^{1/\theta} +\cdots, \end{eqnarray} where $Z_1,Z_2,\dots\iid\nu_1$ and $U_1,U_2,\dots\iid U(0,1)$ are independent sequences. By comparing \eqref{eq: main sum for MVP} with \eqref{eq: shot noise for Levy meas alpha=1} and taking into account the discussion just below Theorem \ref{thrm: series rep gen Vervaat}, it follows that $X\sim L^*_1(\nu,0)$, where $\nu(\mathrm d x) = \theta 1_{[|x|>0]}\nu_1(\mathrm d x)$. From here we get the following. \begin{thrm} $\mu$ is $\mathrm{MVP}$ if and only if $\mu= L_1^*(\nu,0)$ for some finite measure $\nu$. \end{thrm} To the best of our knowledge, this result was previously unknown, even in the univariate case. Next, comparing \eqref{eq: relation for MD} and \eqref{eq: MVP defn} shows that MD distributions are special cases of MVP and hence of $L_1^*$. 
More specifically, we immediately get the following. \begin{thrm}\label{thrm: char MD} $\mu$ is $\mathrm{MD}$ if and only if $\mu=L_1^*(\nu,0)$ where $\nu(\mathbb R^d\setminus\mathbb S^{d-1}) = 0$. In this case $\mu=\mathrm{MD}(\sigma)$, where $\sigma=\nu$. \end{thrm} It follows that all of the results in Section \ref{sec: alpha times SD} specialize to MVP and MD distributions. In particular Theorems \ref{thrm: series rep gen Vervaat} and \ref{thrm: conv of powers} can be used for approximate simulation. \section{Simulation from Multivariate Dickman Distributions}\label{sec: sim methods} In this section we focus on the simulation of MD random variables. First, combining Theorem \ref{thrm: char MD} with \eqref{eq: mean and var L*} shows that the mean vector and covariance matrix of $X\sim\mathrm{MD}(\sigma)$ are given by \begin{eqnarray}\label{eq: mean and var MD} \mathrm E[X] = \int_{\mathbb S^{d-1}} s \sigma(\mathrm d s) \mbox{ and } \mathrm{cov}(X) = \frac{1}{2}\int_{\mathbb S^{d-1}} ss^T \sigma(\mathrm d s). \end{eqnarray} As mentioned, Theorems \ref{thrm: series rep gen Vervaat} and \ref{thrm: conv of powers} can be used to develop approximate simulation methods. We now derive another method, which is exact in the important case when the spectral measure has finite support. This means that there is a positive integer $k$ such that the spectral measure is given by \begin{eqnarray}\label{eq: sigma k} \sigma_k = \sum_{i=1}^k a_i \delta_{s_i}, \end{eqnarray} where $s_1,s_2,\dots,s_k\in\mathbb S^{d-1}$ and $a_1,a_2,\dots,a_k\in(0,\infty)$. In this case, it is readily checked that $$ \sum_{i=1}^k s_i Y_i \sim \mathrm{MD}(\sigma_k), $$ where $Y_1,Y_2,\dots,Y_k$ are independent random variables with $Y_i\sim \mathrm{GD}(a_i)$ for $i=1,2,\dots,k$. Thus, to simulate from $\mathrm{MD}(\sigma_k)$ we just need a way to simulate from $\mathrm{GD}$.
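For instance, in the bivariate case with three equally weighted directions at angles $0$, $2\pi/3$, $4\pi/3$ and $a_i=1$, the sum $\sum_i s_iY_i$ has mean $(0,0)$ and covariance matrix $\tfrac34 I$ by \eqref{eq: mean and var MD}. The sketch below substitutes a truncated perpetuity series for an exact GD sampler, so it is only approximate; the truncation level is our choice:

```python
import numpy as np

rng = np.random.default_rng(3)

def gd_sample(theta, n_samples, rng, k=80):
    """Approximate GD(theta) draws via the truncated perpetuity series
    sum_{i<=k} (U_1 * ... * U_i)**(1/theta)."""
    U = rng.uniform(size=(n_samples, k))
    return (np.cumprod(U, axis=1) ** (1.0 / theta)).sum(axis=1)

# MD(sigma_3) with s_i at angles 0, 2*pi/3, 4*pi/3 and weights a_i = 1.
angles = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
S = np.stack([np.cos(angles), np.sin(angles)], axis=1)       # rows are the s_i
Y = np.stack([gd_sample(1.0, 20000, rng) for _ in angles], axis=1)
X = Y @ S                        # rows are samples of sum_i Y_i s_i

mean_est = X.mean(axis=0)        # target (0, 0)
var1_est = X[:, 0].var()         # target 3/4
```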
Exact simulation methods for $\mathrm{GD}$ are available, see, e.g., \cite{Devroye:2001}, \cite{Devroye:Fawzi:2010}, \cite{Fill:Huber:2010}, \cite{Chi:2012}, \cite{Cloud:Huber:2017}, and \cite{Dassios:Qu:Lim:2019}. In this paper, we use the method of \cite{Dassios:Qu:Lim:2019}, which is implemented in the SubTS \cite{Grabchak:Cao:2023} package for the statistical software R. This approach can be modified to work even when the support of the spectral measure is infinite. The idea is that for any spectral measure $\sigma$ there exists a sequence of spectral measures $\{\sigma_k\}$ on $\mathbb S^{d-1}$, each having finite support, such that $\mathrm{MD}(\sigma_k) \conw \mathrm{MD}(\sigma)$. This follows immediately from Theorem 7.1 in \cite{Xia:Grabchak:2022}. Thus, we can approximately simulate from $\mathrm{MD}(\sigma)$ by first discretizing $\sigma$ and approximating it by some $\sigma_k$ with finite support. We can then use the above approach to simulate from $\mathrm{MD}(\sigma_k)$, which is an approximate simulation from $\mathrm{MD}(\sigma)$. We call this the discretization and simulation (DS) method. A version of this approach was used in \cite{Xia:Grabchak:2022} to simulate from certain multivariate tempered stable distributions. For the remainder of this section, we specialize our results and give additional details in the important case of bivariate distributions. Here, every point $s\in\mathbb S^1$ can be written as $s=(\cos\phi,\sin\phi)$ for some angle $\phi\in[0,2\pi)$. Thus, corresponding to the measure $\sigma$ on $\mathbb S^1$, there is a measure $\sigma'$ on $[0,2\pi)$ satisfying $$ \sigma(B) = \int_{[0,2\pi)} 1_B((\cos\phi,\sin\phi)) \sigma'(\mathrm d \phi), \ \ B\in\mathfrak B(\mathbb S^1). 
$$ From \eqref{eq: mean and var MD}, it follows that if $X=(X_1,X_2)\sim\mathrm{MD}(\sigma)$, then in terms of $\sigma'$, we can write $$ \mathrm E[X_1] = \int_{[0,2\pi)} \cos\phi\ \sigma'(\mathrm d\phi), \quad \mathrm E[X_2] = \int_{[0,2\pi)} \sin\phi\ \sigma'(\mathrm d\phi), $$ $$ \mathrm{Var}(X_1) = \frac{1}{2} \int_{[0,2\pi)} \cos^2\phi\ \sigma'(\mathrm d\phi), \quad \mathrm{Var}(X_2) = \frac{1}{2} \int_{[0,2\pi)} \sin^2\phi\ \sigma'(\mathrm d\phi), $$ and $$ \mathrm{Cov}(X_1,X_2) = \frac{1}{2} \int_{[0,2\pi)} \cos\phi \sin\phi\ \sigma'(\mathrm d\phi). $$ Next, we specialize our three simulation methods to this case. Toward this end, let $\theta=\sigma(\mathbb S^1) = \sigma'([0,2\pi))$ and let $\sigma_1 = \sigma/\theta$ and $\sigma_1'=\sigma'/\theta$ be probability measures. Throughout, we assume that we know how to simulate from $\sigma'_1$. All of our simulation methods depend on a tuning parameter $k$. The first method is based on the shot-noise representation given in Theorem \ref{thrm: series rep gen Vervaat} and is denoted SN. For this approximation we take the first $k$ terms in the series, which gives $$ \sum_{i=1}^{k} \left(U_1U_2\cdots U_i\right)^{1/\theta} \zeta_i, $$ where $U_1,U_2,\dots,U_k\iid U(0,1)$ and $\zeta_1,\zeta_2,\dots,\zeta_k$ are iid and are simulated by taking $\zeta_i=(\cos(\phi_i),\sin(\phi_i))$, where $\phi_i\sim\sigma_1'$. The second method is based on the triangular array approximation given in Theorem \ref{thrm: conv of powers} and is denoted TA. Here we evaluate $A_k$ for some large $k$. We take $\alpha=1$, $\ell(x)=1$ for all $x$, $N_k=\lfloor k\theta\rfloor$, and $c=\theta$, which gives $$ \sum_{i=1}^{\lfloor k\theta\rfloor} U_i^k \zeta_i, $$ where $U_1,U_2,\dots,U_{\lfloor k\theta\rfloor}\iid U(0,1)$ and the $\zeta_i$ are as in the SN method. Finally, for the DS method, we assume either that $\sigma$ is of the form given in \eqref{eq: sigma k} or that we have an approximation $\sigma_k$ of $\sigma$ that is of this form.
Either way, we have $$ \sum_{i=1}^{k} Y_i s_i, $$ where $Y_1,Y_2,\dots,Y_k$ are independent random variables with $Y_i\sim \mathrm{GD}(a_i)$ for $i=1,2,\dots,k$ and $a_i,s_i$ are as in \eqref{eq: sigma k}. Note that, in the first two methods, the directions are random, while in the third they are deterministic. All that remains is to describe a systematic approach for discretizing $\sigma$. We start by selecting an integer $k$, which is the number of points in the support of the approximation, selecting $0=d_0<d_1<\cdots<d_k=2\pi$, and selecting $\phi_1,\phi_2,\dots,\phi_k$ with $d_{i-1}\le \phi_i<d_i$ for each $i=1,2,\dots,k$. Next we take $a_i= \sigma'([d_{i-1},d_i))$. We then approximate $\sigma$ and $\sigma'$ by $$ \sigma_k = \sum_{i=1}^k a_i \delta_{s_i} \mbox{ and } \sigma_k' = \sum_{i=1}^k a_i \delta_{\phi_i}, $$ where $s_i=(\cos\phi_i,\sin\phi_i)$. For simplicity, in this paper, we take $d_i =2\pi i/k$ to be evenly spaced and $\phi_i = d_{i-1}$. \section{Simulation Study}\label{sec: sim study} In this section we perform a small-scale simulation study to compare the performance of the three methods discussed in Section \ref{sec: sim methods}. For simplicity, we focus on the bivariate case. In this context, we consider two models for the spectral measure: in the first the spectral measure is a beta distribution and in the second it has finite support. In the latter case, the DS method is exact. We begin with the first model. Here the spectral measure depends on two parameters $\alpha,\beta>0$ and is denoted by $\sigma^{\alpha,\beta}$. We assume that $\sigma^{\alpha,\beta}$ is the probability measure on $\mathbb S^1$ given by the distribution of the random vector $\zeta=(\cos(\phi),\sin(\phi))$, where $\phi$ has a beta distribution on $[0,2\pi)$, i.e., the distribution of $\phi$ has a density given by $$ f(x) = \frac{(2\pi)^{1-\alpha - \beta}}{B(\alpha, \beta)} x^{\alpha -1}(2\pi - x)^{\beta - 1}, \ \ 0\le x<2\pi, $$ where $B$ is the beta function.
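A minimal Python sketch of the SN method for the beta model (our illustrative choices: $\alpha=\beta=1$, so the angles are uniform, and truncation level $k=60$); since $\theta=1$ here, \eqref{eq: mean and var MD} gives $\mathrm E[X]=(0,0)$ and $\mathrm{Var}(X_1)=\tfrac14$:

```python
import numpy as np

rng = np.random.default_rng(4)

def sn_sample(a, b, theta, k, n_samples, rng):
    """SN method: sum_{i<=k} (U_1*...*U_i)**(1/theta) * (cos phi_i, sin phi_i),
    with the angles phi_i = 2*pi*Beta(a, b)."""
    W = np.cumprod(rng.uniform(size=(n_samples, k)), axis=1) ** (1.0 / theta)
    phi = 2 * np.pi * rng.beta(a, b, size=(n_samples, k))
    return np.stack([(W * np.cos(phi)).sum(axis=1),
                     (W * np.sin(phi)).sum(axis=1)], axis=1)

X = sn_sample(1.0, 1.0, 1.0, 60, 20000, rng)   # alpha = beta = 1: uniform angles
mean_est = X.mean(axis=0)   # target (0, 0)
var1_est = X[:, 0].var()    # target 1/4
```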
When $\alpha=\beta=1$ this reduces to the uniform distribution on $[0,2\pi)$. Since $\sigma^{\alpha,\beta}$ is a probability measure, we have $\theta=1$, and since the support of $\sigma^{\alpha,\beta}$ is infinite, all three simulation methods are approximate. They depend on a tuning parameter $k$ and, as $k$ increases, all methods get closer to simulating from $\mathrm{MD}(\sigma^{\alpha,\beta})$. Our goal is to understand which methods converge faster. \begin{figure} \caption{Plots of errors in the beta model with all three methods and several choices of the parameters. The $x$-axis represents $k$, the number of terms in the sum, and the $y$-axis represents the errors.} \label{fig: error in sims} \end{figure} Our simulations are performed as follows. We begin by choosing values for $\alpha$ and $\beta$. We then select one of our three approximate simulation methods and a value for the tuning parameter $k$. Next, we use the method to approximately simulate $N$ observations from $\mathrm{MD}(\sigma^{\alpha,\beta})$. Using these, we estimate the means $m_1$, $m_2$ of both components, the variances $\sigma_1^2$, $\sigma^2_2$ of both components, and the covariance $\sigma_{12}$ between the components by using the empirical means $\bar x_1$, $\bar x_2$, the empirical variances $s_1^2$, $s_2^2$, and the empirical covariance $s_{12}$, respectively. We then quantify the error in the approximation by \begin{equation}\label{eq:total error} \mathrm E_k = \sqrt{(\bar x_1 - m_1)^2 + (\bar x_2 - m_2)^2 + (s_1^2 - \sigma_1^2)^2 + (s_2^2 - \sigma_2^2)^2 + (s_{12} - \sigma_{12})^2}. \end{equation} This was used to quantify errors in a similar context in \cite{Xia:Grabchak:2022}. The values of $m_1$, $m_2$, $\sigma_1^2$, $\sigma^2_2$, and $\sigma_{12}$ can be calculated by numerically integrating the formulas given in Section \ref{sec: sim methods}. When $\alpha$ and $\beta$ are integers, one can also evaluate the integrals explicitly using integration by parts. 
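The error metric \eqref{eq:total error} is straightforward to compute; a small helper (function and variable names are ours):

```python
import math

def total_error(est, true):
    """E_k: Euclidean distance between the empirical vector
    (xbar1, xbar2, s1^2, s2^2, s12) and the true vector
    (m1, m2, sigma1^2, sigma2^2, sigma12)."""
    return math.sqrt(sum((e - t) ** 2 for e, t in zip(est, true)))

true_moments = (1.0, 2.0, 0.5, 0.25, 0.1)
err_zero = total_error(true_moments, true_moments)                 # perfect estimates
err_half = total_error((1.3, 2.4, 0.5, 0.25, 0.1), true_moments)   # means off by (0.3, 0.4)
```

Perfect estimates give $\mathrm E_k=0$, while errors of $0.3$ and $0.4$ in the two means alone give $\mathrm E_k=0.5$.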
Note that only part of the error is due to the fact that the methods are approximate; the other part is due to Monte Carlo error, as we are using only a finite number ($N$) of replications. The results of our simulations are presented in Figure \ref{fig: error in sims}. Here we consider four combinations of the parameters $(\alpha, \beta)$: $(1, 1)$, $(2, 2)$, $(2, 5)$, and $(5,1)$. For each method and each choice of the parameters, we let $k$ range from $1$ to $200$. In each case, we simulate $N=160000$ replications. Figure \ref{fig: error in sims} presents the value of $k$ plotted against the error $\mathrm E_k$. When $\alpha=\beta=1$, which corresponds to a uniform distribution, the SN and DS methods have similar performance and the error decreases very quickly. In comparison, the error for TA decreases much more slowly. For the other cases, SN has the best performance, followed by TA, and then DS has the worst performance. Next, we turn to our second model. Here $\sigma$ has a finite support, i.e., $\sigma = \sum_{i=1}^{r} a_i \delta_{s_i}$. For simplicity, we take $a_i=1/r$ for each $i=1,2,\dots,r$ and the $s_i$'s to be evenly spaced. We again quantify the error using $\mathrm E_k$ as given in \eqref{eq:total error} and use the formulas in Section \ref{sec: sim methods} to calculate the means, variances, and the covariance. This time they are all finite sums. The results of our simulations are given in Figures \ref{fig: error in sims discrete} and \ref{fig: error in sims discrete diff dir}. In Figure \ref{fig: error in sims discrete} we consider the case where $r=50$. Since the DS method is exact in this case, we just evaluate it once and plot the resulting error as a baseline. Note that, due to Monte Carlo error, this error is not zero. For the other methods, we see that the error in SN decays quickly as $k$ increases, whereas for TA it decays more slowly.
To get an idea of how the performance of the methods depends on the number of terms $r$, in Figure \ref{fig: error in sims discrete diff dir} we consider the cases where $r=2,20,100$. We can see that both SN and TA work better when $r$ is larger. We do not include DS in these simulations as it is exact in this case. \begin{figure} \caption{Plots of errors when $\sigma$ has $50$ evenly spaced directions. The $x$-axis represents $k$, the number of terms in the sum, and the $y$-axis represents the errors. Since DS is exact in this case, it is presented as a baseline.} \label{fig: error in sims discrete} \end{figure} \begin{figure} \caption{Plots of errors when $\sigma$ has $r=2,20,100$ evenly spaced directions. The plot on the left is for the SN method and the one on the right is for the TA method. The $x$-axis represents $k$, the number of terms in the sum, and the $y$-axis represents the errors. DS is not presented as it is exact in this case.} \label{fig: error in sims discrete diff dir} \end{figure} Overall, SN converges quickly in all of the situations that we considered. It is also easy to implement. TA is also easy to implement, but it needs more terms to converge. DS is an exact method when $\sigma$ has finite support. However, it is slightly harder to implement, as it requires one to simulate GD random variables. \section{Proofs}\label{sec: proofs} \begin{proof}[Proof of Lemma \ref{lemma: moments}] We get \eqref{eq: M alpha 2} from \eqref{eq: M alpha} by a change of variables. Next, \eqref{eq: M alpha 2} implies that $M_\alpha(\mathbb R^d) =\int_{\mathbb R^d}\int_0^\infty r^{\alpha-1}\mathrm d r\nu(\mathrm d x)$ and so $M_\alpha(\mathbb R^d)=\infty$ for any $\nu\ne0$. For the next part, note that, for $p>0$, l'H\^opital's rule gives $$ \int_a^\infty r^{\alpha-1} e^{-rp}\mathrm d r\sim p^{-1}a^{\alpha-1} e^{-ap} \ \ \mbox{as} \ \ a\to\infty.
$$ Thus, there exists a $K\ge e$ such that, if $|y|>K$, then $$ \frac{1}{2p}(\log|y|)^{\alpha-1} e^{-p\log|y|}\le \int_{\log|y|}^\infty r^{\alpha-1} e^{-rp}\mathrm d r\le \frac{2}{p}(\log|y|)^{\alpha-1} e^{-p\log|y|}. $$ Hence, \begin{eqnarray*} \int_{|x|\le1}|x|^pM_\alpha(\mathrm d x) &=& \int_{\mathbb R^d}|y|^p\int_0^{|y|^{-1}\wedge1} (-\log r)^{\alpha-1}r^{p-1}\mathrm d r \nu(\mathrm d y)\\ &=&\int_{\mathbb R^d}|y|^p\int_{-\log(|y|^{-1}\wedge1)}^\infty r^{\alpha-1}e^{-pr}\mathrm d r \nu(\mathrm d y)\\ &\le& \int_{|y|\le K}|y|^p\nu(\mathrm d y)\int_0^\infty r^{\alpha-1}e^{-pr}\mathrm d r + \frac{2}{p}\int_{|y|>K} (\log|y|)^{\alpha}\nu(\mathrm d y) \end{eqnarray*} and similarly \begin{eqnarray*} \int_{|x|\le1}|x|^pM_\alpha(\mathrm d x) &\ge& \int_{|y|\le K}|y|^p\nu(\mathrm d y)\int_{\log K}^\infty r^{\alpha-1}e^{-pr}\mathrm d r + \frac{1}{2p}\int_{|y|>K} (\log|y|)^{\alpha}\nu(\mathrm d y). \end{eqnarray*} For the last part, we have \begin{eqnarray*} \int_{|x|>1}|x|^pM_\alpha(\mathrm d x) &=& \int_{|y|>1}|y|^p\int_{|y|^{-1}}^1 (-\log r)^{\alpha-1}r^{p-1}\mathrm d r \nu(\mathrm d y)\\ &=&\int_{|y|>1}|y|^p\int_0^{\log|y|} r^{\alpha-1}e^{-pr}\mathrm d r \nu(\mathrm d y)\\ &\le&\frac{\Gamma(\alpha)}{p^\alpha} \int_{|y|>1}|y|^p\nu(\mathrm d y) \end{eqnarray*} and \begin{eqnarray*} \int_{|x|>1}|x|^pM_\alpha(\mathrm d x) &=&\int_{|y|>1}|y|^p\int_0^{\log|y|} r^{\alpha-1}e^{-pr}\mathrm d r \nu(\mathrm d y)\\ &\ge&\int_{|y|>e}|y|^p \nu(\mathrm d y)\int_0^{1} r^{\alpha-1}e^{-pr}\mathrm d r, \end{eqnarray*} which completes the proof. \end{proof} \begin{proof}[Proof of Theorem \ref{thrm: integ rep}] Let $M_\alpha$ be as in \eqref{eq: M alpha} and let $C_1$ be the cgf of $Y_1$.
By Corollary 2.3 in \cite{Sato:2006}, the stochastic integral is absolutely definable so long as we have the finiteness of \begin{eqnarray*} &&\int_0^\infty \left|C_1\left(z e^{-(\alpha s/\theta)^{1/\alpha}} \right)\right| \mathrm d s\\ &&\qquad\le |z|| \gamma' | \int_0^\infty e^{-(\alpha s/\theta)^{1/\alpha}} \mathrm d s + \theta^{-1} \int_0^\infty \int_{\mathbb{R}^d}2 \wedge(|z||x|e^{-(\alpha s/\theta)^{1/\alpha}}) \nu(\mathrm d x) \mathrm d s\\ &&\qquad\le |z|| \gamma' | \int_0^\infty e^{-(\alpha s/\theta)^{1/\alpha}} \mathrm d s + \int_{\mathbb{R}^d}2 \wedge(|z||x|) M_\alpha (\mathrm d x) <\infty, \end{eqnarray*} where we use \eqref{eq: M alpha 2}, \eqref{eq: finite for MVP alpha}, and the fact that $|e^{ia}-1|\le2\wedge|a|$ for $a\in\mathbb R$, see Section 26 in \cite{Billingsley:1995}. Similarly, by Proposition 2.2 in \cite{Sato:2006}, the cgf of the stochastic integral is given by \begin{eqnarray*} &&\int_0^\infty C_1\left(z e^{-(\alpha s/\theta)^{1/\alpha}} \right) \mathrm d s\\ &&\qquad =\int_0^\infty\left(i \langle \gamma', z \rangle e^{-(\alpha s/\theta)^{1/\alpha}} + \int_{\mathbb{R}^d}\left(e^{i\left\langle z e^{-(\alpha s/\theta)^{1/\alpha}}, x\right\rangle } - 1\right) \nu'(\mathrm d x) \right) \mathrm d s\\ &&\qquad= i \langle \gamma', z \rangle \int_0^\infty e^{-(\alpha s/\theta)^{1/\alpha}} \mathrm d s + \int_{\mathbb{R}^d}\left(e^{i\left\langle z, x\right\rangle } - 1\right) M_\alpha (\mathrm d x). \end{eqnarray*} From here, the result follows from the fact that $ \int_0^\infty e^{-(\alpha s/\theta)^{1/\alpha}} \mathrm d s=\theta\Gamma(\alpha)$. \end{proof} \begin{proof}[Proof of Theorem \ref{thrm: series rep gen Vervaat}] The result follows by a general shot-noise representation of L\'evy processes given in \cite{Rosinski:2001}, see also Theorem 6.2 in \cite{Cont:Tankov:2004}. Define $$ H(r,y) = ye^{-(r/\theta)^{1/\alpha}}, \ \ r>0,\ y\in\mathbb R^d $$ and note that $|H(r,y)|$ is nonincreasing in $r$ for each $y$.
Let $$ \lambda(r,B) = P(H(r,Y_1)\in B) = P(Y_1\in e^{(r/\theta)^{1/\alpha}}B), \ \ r>0, \ B\in\mathfrak B(\mathbb R^d), $$ and $$ A(s) = \int_0^s \int_{|y|\le1} y \lambda(r,\mathrm d y)\mathrm d r, \ \ s\ge0. $$ From \eqref{eq: M alpha 2} it follows that for $B\in\mathfrak B(\mathbb R^d)$ \begin{eqnarray*} \int_0^\infty \lambda(r,B) \mathrm d r &=& \int_0^\infty \int_{\mathbb R^d} 1_{B}(ye^{-(r/\theta)^{1/\alpha}}) \nu_1(\mathrm d y)\mathrm d r =M_\alpha(B) \end{eqnarray*} and by dominated convergence that \begin{eqnarray*} \lim_{s\to\infty}A(s) &=& \int_0^\infty \int_{|y|\le e^{(r/\theta)^{1/\alpha}}} ye^{-(r/\theta)^{1/\alpha}} \nu_1(\mathrm d y)\mathrm d r = \int_{|x|\le1}xM_\alpha(\mathrm d x). \end{eqnarray*} We can use dominated convergence since by \eqref{eq: finite for MVP alpha} \begin{eqnarray*} \int_0^s \int_{|y|\le e^{(r/\theta)^{1/\alpha}}} |y|e^{-(r/\theta)^{1/\alpha}} \nu_1(\mathrm d y)\mathrm d r &\le& \int_0^\infty \int_{|y|\le e^{(r/\theta)^{1/\alpha}}} |y|e^{-(r/\theta)^{1/\alpha}} \nu_1(\mathrm d y)\mathrm d r \\ &=& \int_{|x|\le1}|x|M_\alpha(\mathrm d x)<\infty. \end{eqnarray*} From here Theorem 6.2 in \cite{Cont:Tankov:2004} gives the result for $T=1$. The result for general $T$ follows from the fact that the L\'evy process $\{X_t:0\le t\le T\}$ where $X_1\sim\mathrm{ID}_0(M,\gamma)$ has the same distribution as $\{X'_{t/T}:0\le t\le T\}$ where $X'_1\sim\mathrm{ID}_0(TM,T\gamma)$. The result for $\alpha=1$ follows from the well-known and easily checked fact that $e^{-E_i}\sim U(0,1)$. \end{proof} \begin{proof}[Proof of Theorem \ref{thrm: conv of powers}] To prove this result, it suffices to verify the conditions for convergence of sums of infinitesimal triangular arrays. Such conditions can be found in, e.g., \cite{Sato:1999}, \cite{Meerschaert:Scheffler:2001}, or \cite{Kallenberg:2002}. A version of these is as follows:\\ 1.
For any $C\in\mathscr B(\mathbb S^{d-1})$ with $\nu\left(\left\{y\in\mathbb R^d\setminus\{0\}: \frac{y}{|y|}\in\partial C\right\}\right)=0$ and any $s>0$, $$ \lim_{n \to \infty} N_n \mathrm P\left(|T_1|X_1^n>s, \frac{T_1}{|T_1|}\in C\right) = M_\alpha\left(\left\{x\in\mathbb R^d:|x|>s, \frac{x}{|x|}\in C\right\}\right). $$ 2. $$ \lim_{\epsilon\downarrow0}\limsup_{n\to\infty} N_n \mathrm E\left[X_1^n |T_1| 1(X_1^n |T_1| <\epsilon)\right] =0. $$ We begin by noting that, by L'H\^opital's rule, for any $t>s>0$ we have $$ n\left(1-(s/t)^{1/n} \right) \sim \log(t/s) \ \ \mbox{as} \ \ n\to\infty. $$ Let $\ell_0(t) = \ell(1/t)$ and note that $\ell_0$ is slowly varying at $\infty$, i.e.\ for every $t>0$ $$ \lim_{x\to\infty} \frac{\ell_0(xt)}{\ell_0(x)}=1. $$ Proposition 2.6 in \cite{Resnick:2007} implies that for $t>s>0$ we have $$ \ell\left(1-(s/t)^{1/n} \right)=\ell_0\left(\frac{n}{n(1-(s/t)^{1/n})} \right) \sim \ell_0\left(n \right) = \ell(1/n) $$ and hence \begin{eqnarray*} \lim_{n\to\infty} N_nP(X_1 >(s/t)^{1/n}) = \lim_{n\to\infty} cn^\alpha (1-(s/t)^{1/n})^\alpha \frac{ \ell(1-(s/t)^{1/n})}{\ell(1/n)} = c (\log(t/s))^{\alpha}. \end{eqnarray*} By Theorem 20.3 in \cite{Billingsley:1995} \begin{eqnarray*} \lim_{n\to\infty} N_n \mathrm P\left(|T_1|X_1^n>s, \frac{T_1}{|T_1|}\in C\right) &=& \lim_{n\to\infty} \int_{|y|>s, \frac{y}{|y|}\in C}N_nP(X_1 >(s/|y|)^{1/n}) \nu_0(\mathrm d y)\nonumber\\ &=& \int_{|y|>s, \frac{y}{|y|}\in C} \lim_{n\to\infty} N_n P(X_1 >(s/|y|)^{1/n}) \nu_0(\mathrm d y)\nonumber\\ &=& c \int_{|y|>s, \frac{y}{|y|}\in C} \left(\log(|y|/s)\right)^\alpha \nu_0(\mathrm d y)\nonumber\\ &=&M_\alpha\left(\left\{x\in\mathbb R^d:|x|>s, \frac{x}{|x|}\in C\right\}\right), \end{eqnarray*} where in the second line we interchange limit and integration using dominated convergence. To see that we can do this, assume that $n$ is large enough that $N_nn^{-\alpha}\ell(1/n)\le 2c$, let $t>s>0$, $a = \log(t/s)>0$, and fix $\delta\in(0,\alpha)$.
By the Potter bounds, see e.g.\ Theorem 1.5.6 in \cite{Bingham:Goldie:Teugels:1987}, there exists a constant $A>0$ with \begin{eqnarray}\label{eq: log bound} N_nP(X_1 >(s/t)^{1/n}) &\le& 2c \left(\frac{1-(s/t)^{1/n}}{1/n}\right)^\alpha \frac{\ell(1-(s/t)^{1/n})}{\ell(1/n)} \nonumber\\ &=& 2c \left(\frac{1-e^{-a/n}}{1/n}\right)^\alpha \frac{\ell(1-e^{-a/n})}{\ell(1/n)}\nonumber\\ &\le& A \left(\left(\frac{1-e^{-a/n}}{1/n}\right)^{\alpha+\delta}\vee \left(\frac{1-e^{-a/n}}{1/n}\right)^{\alpha-\delta}\right)\nonumber\\ &\le& A \left(( \log(t/s))^{\alpha+\delta}\vee ( \log(t/s))^{\alpha-\delta}\right)\le C_s t^\gamma, \end{eqnarray} where $C_s$ is some constant depending on $s$ and we use the fact that $1-e^{-x}\le x$. Next consider \begin{eqnarray*} &&\lim_{\epsilon\downarrow0}\limsup_{n\to\infty} N_n \mathrm E\left[X_1^n |T_1| 1_{[X_1^n |T_1| <\epsilon]}\right] \\ &&\qquad=\lim_{\epsilon\downarrow0}\limsup_{n\to\infty} N_n \int_0^\epsilon P(X_1^n |T_1| 1_{[X_1^n |T_1| <\epsilon]}>s)\mathrm d s\\ &&\qquad\le\lim_{\epsilon\downarrow0}\limsup_{n\to\infty} N_n \int_0^\epsilon P(X_1^n |T_1| >s)\mathrm d s\\ &&\qquad =\lim_{\epsilon\downarrow0} \limsup_{n\to\infty} \int_0^\epsilon \int_{s\le |y|} N_nP(X_1^n >s/|y|)\nu_0(\mathrm d y)\mathrm d s\\ &&\qquad \le A\lim_{\epsilon\downarrow0}\int_{|y|>0} \int_0^{\epsilon} \left(( \log(|y|/s))^{\alpha+\delta}\vee ( \log(|y|/s))^{\alpha-\delta}\right)\mathrm d s \nu_0(\mathrm d y)\\ &&\qquad =A\lim_{\epsilon\downarrow0}\int_{|y|>0} \int_{\log(|y|/\epsilon)}^\infty \left(s^{\alpha+\delta}\vee s^{\alpha-\delta}\right)e^{-s}\mathrm d s |y|\nu_0(\mathrm d y)\\ &&\qquad \le A\lim_{\epsilon\downarrow0}\epsilon^{1-\gamma} \int_{|y|>0} \int_{\log(|y|/\epsilon)}^\infty \left(s^{\alpha+\delta}\vee s^{\alpha-\delta}\right)e^{-s\gamma}\mathrm d s |y|^\gamma \nu_0(\mathrm d y)\\ &&\qquad \le A\lim_{\epsilon\downarrow0}\epsilon^{1-\gamma} \int_{\mathbb R^d} |y|^\gamma \nu_0(\mathrm d y) \int_{0}^\infty \left(s^{\alpha+\delta}\vee 
s^{\alpha-\delta}\right)e^{-s\gamma}\mathrm d s =0, \end{eqnarray*} where the fifth line follows by \eqref{eq: log bound} and the sixth by change of variables. \end{proof} \end{document}
\begin{document} \setcounter{page}{95} \title[Directed last-passage percolation]{Busemann functions, geodesics, and the competition interface for directed last-passage percolation} \author{Firas Rassoul-Agha} \address{University of Utah, Mathematics Department, 155 S 1400 E, Salt Lake City, UT 84109, USA} \email{[email protected]} \thanks{The author was partially supported by NSF grant DMS-1407574.} \subjclass[2010]{60K35, 60K37, 82B43} \date{\today} \begin{abstract} In this survey article we consider the directed last-passage percolation model on the planar square lattice with nearest-neighbor steps and general i.i.d.\ weights on the vertices, outside of the class of exactly solvable models. We show how stationary cocycles are constructed from queueing fixed points and how these cocycles characterize the limit shape, yield existence of Busemann functions in directions where the shape has some regularity, describe the direction of the competition interface, and answer questions on existence, uniqueness, and coalescence of directional semi-infinite geodesics, and on nonexistence of doubly infinite geodesics. \end{abstract} \maketitle \tableofcontents \section{Introduction} In 1965, Hammersley and Welsh \cite{Ham-Wel-65} introduced a model of a fluid flowing through a porous medium that they called \index{first-passage percolation (FPP)}\index{FPP} first-passage percolation (FPP). Roughly speaking, the model consists of putting random positive numbers (called weights) on the nearest-neighbor edges of the square lattice $\mathbb{Z}^d$. These numbers describe the time it takes to traverse the various edges. Only nearest-neighbor paths are allowed between lattice points. The (random) distance between two points is then given by the passage time -- the shortest time it takes to go between the points. A path that realizes the passage time is naturally called a geodesic.
Immediate questions arise regarding the structure of the random metric, its balls, and its geodesics (finite, semi-infinite, and bi-infinite). As is often the case in probability theory, structure emerges from randomness when we look at the large scale behavior of the model. This means scaling the lattice down as we look at farther and farther sites. When done properly, the random metric on the scaled lattice converges to a deterministic one on $\mathbb{R}^d$. As a result, random balls approach deterministic convex sets as their radii grow and geodesics from the origin to far away points approach straight lines. This raises the next layer of questions regarding the size and statistics of the fluctuations of the random objects from their deterministic limits. See Section 1 of \cite{ch:Damron} for more details and precise formulations. In a related model, called directed last-passage percolation or just \index{last-passage percolation (LPP)}\index{LPP} last-passage percolation (LPP), paths are only allowed to take steps in $\{e_1,\dotsc,e_d\}$. One can think of $e_1+\cdots+e_d$ as a time direction that cannot be reversed. In fact, in LPP passage time is maximized rather than minimized and the random weights the path collects are usually put on the vertices instead of on the edges, but these are only minor differences from FPP. The crucial difference is that paths are now directed. This creates some simplifications, but also introduces some complications. For example, in LPP it is clear that there exist maximizing paths \index{geodesic} (still called geodesics by analogy with FPP) from the origin to any point in the first quadrant. There are, after all, finitely many such paths, as opposed to the situation in FPP. On the other hand, the passage time is no longer symmetric in LPP: there is no path back to the origin, starting from a point in the first quadrant.
Nevertheless, LPP and FPP are expected to have similar qualitative behavior at large scales. One of the advantages two-dimensional LPP has over its cousin FPP is a connection to other models, such as queues and particle systems. Furthermore, with certain specific weight distributions, LPP becomes what we call an exactly solvable model. This means that many computations can be carried out all the way down to exact formulas. In this case, one is able to extract very detailed information about the behavior of the model, which then drives or confirms predictions about general LPP and FPP (and related) models. The focus of this article is on surveying recent developments in connection with the structure of geodesics in LPP. A main tool will be Busemann functions, originally used to study the large-scale geometry of geodesics in Riemannian manifolds. The next section collects some notation for easier reference. Section \ref{LPP:intro} introduces the last-passage percolation model and Section \ref{sec:connection} discusses its connections to the corner growth model, queues in tandem, a system of interacting particles, and a competition model. In Section \ref{sec:shape} we describe the scaling limit of the passage time. Section \ref{sec:Bus} introduces the Busemann functions and studies their properties. To prove existence of these functions we borrow tools from queueing theory. This is done in Section \ref{fixed-pt}. Busemann functions are then used in Section \ref{geodesics} to construct semi-infinite geodesics and describe their structure. They are also used to study the competition interface, in Section \ref{cif:sec}. In the last two sections we give a brief account of the recent history of Busemann functions and geodesics in percolation and a preview of the topic of the next two lecture notes \cite{ch:Seppalainen,ch:Corwin}: fluctuations.
\section{Notation} $\mathbb{Z}$ denotes the integers, $\mathbb{Z}_+$ the nonnegative integers, and $\mathbb{N}$ the positive integers. $\mathbb{R}$ denotes the real numbers and $\mathbb{R}_+$ the nonnegative real numbers. $\mathbb{Q}$ denotes the rational numbers. $e_1$ and $e_2$ are the canonical basis vectors of $\mathbb{R}^2$. $\mathcal U=\{(t,1-t)=te_1+(1-t)e_2:0\le t\le1\}$ and its relative interior is $\mathcal U^\circ=\{(t,1-t):0<t<1\}$. We will write $\xi,\zeta,\eta$ for elements of $\mathcal U$. $x_{i,j}=(x_i,\dotsc,x_j)$, for $-\infty\le i<j\le\infty$. $x\cdot y$ denotes the scalar product of vectors $x,y\in\mathbb{R}^2$. $\abs{x}_1=\abs{x_1}+\abs{x_2}=\abs{x\cdot e_1}+\abs{x\cdot e_2}$. $\fl{a}$ is the largest integer $\le a$. If $a$ is a vector, then $\fl{a}$ is taken coordinatewise. For a real number $a$ we write $a^+$ for $\max(a,0)$ and $a^-$ for $-\min(a,0)$. Thus, $a=a^+-a^-$. For $x,y\in\mathbb{R}^2$, $y\ge x$ and $y\le x$ mean the inequalities hold coordinatewise. We will say that a sequence $x_n$ is asymptotically directed into a subset $A\subset\mathcal U$ if all the limit points of $x_n/n$ are inside $A$. A random variable $X$ has an exponential distribution with rate $\theta>0$ if $P(X>s)=e^{-\theta s}$ for all $s\ge0$. The probability density function of such a variable equals $\theta e^{-\theta s}$ for $s\ge0$ and $0$ for $s<0$. The mean of $X$ then equals $E[X]=\theta \int_0^\infty s e^{-\theta s}\,ds=\theta^{-1}$. Its variance equals \[E[(X-E[X])^2]=E[X^2]-E[X]^2=\theta \int_0^\infty s^2 e^{-\theta s}\,ds-\theta^{-2}=\theta^{-2}.\] A random variable $X$ is said to have a continuous distribution if $P(X=s)=0$ for all $s\in\mathbb{R}$. \section{Directed last-passage percolation (LPP)}\label{LPP:intro} Consider the two-dimensional square lattice $\mathbb{Z}^2$. Put a random real number $\omega_x$ at each site $x\in\mathbb{Z}^2$ and let these assignments be independent.
In more precise terms, let $\Omega=\mathbb{R}^{\mathbb{Z}^2}$ and endow it with the product topology and the Borel $\sigma$-algebra. A generic element in $\Omega$ is denoted by $\omega=\{\omega_x:x\in\mathbb{Z}^2\}$ and called an \index{environment} {\sl environment} or a \index{configuration} {\sl configuration}. The numbers $\omega_x$ are called \index{weights} {\sl weights}. Let $\mu$ be a probability measure on $\mathbb{R}$ and let $\mathbb{P}$ be the product probability measure on $\Omega$ with all marginals equal to $\mu$: for a finite collection $x_1,\dotsc,x_n$ and measurable sets $A_1,\dotsc,A_n\subset\mathbb{R}$ \[\mathbb{P}\big\{\omega:\omega_{x_i}\in A_i,\,1\le i\le n\big\}=\prod_{i=1}^n \mu(A_i).\] (In particular, $\mathbb{P}(a<\omega_x\le b)=\mu((a,b])$ for all $x\in\mathbb{Z}^2$ and $-\infty\le a<b\le \infty$.) We will abbreviate the average weight value as $m_0=\mathbb{E}[\omega_0]$. When the weights $\omega_x$ are nonnegative they can be thought of as times spent at sites $x$. Thus, the \index{passage time} {\sl passage time} of a path $x_0,\dotsc,x_n\in\mathbb{Z}^2$ is the time it takes to traverse the path and equals $\sum_{i=1}^n\omega_{x_i}$. (We choose not to count the time spent at the very first point of the path.) We will abbreviate $x_{i,j}$ for a sequence $x_i,x_{i+1},\dotsc,x_j$, with a similar notation for $x_{i,\infty}$, $x_{-\infty,j}$, and $x_{-\infty,\infty}$. The model under consideration is directed, which means that we will only consider \index{up-right} {\sl up-right} paths, i.e.\ paths $x_{0,n}\in\mathbb{Z}^2$ with $x_{i+1}-x_i\in\{e_1,e_2\}$ for all $0\le i<n$. We sometimes also call such paths \index{admissible} {\sl admissible}.
If $x,y\in\mathbb{Z}^2$ are such that $x\le y$ coordinatewise, then the \index{last-passage time} {\sl last-passage time} between $x$ and $y$ is given by
\begin{align}\label{G:def}
G_{x,y}=\max\Big\{\sum_{i=1}^n\omega_{x_i}:x_{0,n}\text{ up-right, $x_0=x$, $x_n=y$, $n=\abs{y-x}_1$}\Big\}.
\end{align}
Here, $\abs{\cdot}_1$ is the $\ell^1$ norm on $\mathbb{R}^2$. Let us immediately record an important inequality, called \index{superadditivity} {\sl superadditivity}: for $x\le y\le z$ coordinatewise
\begin{align}\label{superadd}
G_{x,y}+G_{y,z}\le G_{x,z}.
\end{align}
Paths that maximize in \eqref{G:def} are called \index{geodesic} {\sl geodesics}. See Figure \ref{LPP:fig}. The above inequality comes simply from observing that concatenating a geodesic from $x$ to $y$ with one from $y$ to $z$ gives an up-right path from $x$ to $z$ (which may or may not be a geodesic).
\begin{figure}
\begin{center}
\begin{tikzpicture}[ >=latex, scale=0.6]
\foreach \x in {0,...,9}{ \foreach \y in {0,...,5}{ \fill[color=mygray](\x,\y)circle(2mm); } }
\draw[line width=3pt](9,5)--(6,5)--(6,4)--(5,4)--(5, 3)--(2, 3)--(2,1)--(1,1)--(1,0)--(0,0);
\fill[color=black](9,5)circle(2mm); \fill[color=black](8,5)circle(2mm); \fill[color=black](7,5)circle(2mm); \fill[color=black](6,5)circle(2mm); \fill[color=black](6,4)circle(2mm); \fill[color=black](5,4)circle(2mm); \fill[color=black](5,3)circle(2mm); \fill[color=black](4,3)circle(2mm); \fill[color=black](3,3)circle(2mm); \fill[color=black](2,3)circle(2mm); \fill[color=black](2,2)circle(2mm); \fill[color=black](2,1)circle(2mm); \fill[color=black](1,1)circle(2mm); \fill[color=black](1,0)circle(2mm); \fill[color=black](0,0)circle(2mm);
\draw (0,0)node[below left]{\Large$x$};
\draw (9,5)node[above right]{\Large$y$};
\end{tikzpicture}
\end{center}
\caption{\small Illustration of a possible geodesic from $x$ to $y$.}
\label{LPP:fig}
\end{figure}
In what follows the math will make sense even when $\omega_x$ can be negative and we continue to use the term \index{passage time} {\sl passage time} even with negative weights.
\begin{remark}
The above model is related to the standard first\hyp{}passage site\hyp{}percolation model (FPP) as follows. Allow admissible paths to take steps in $\{\pm e_1,\pm e_2\}$, i.e.\ be nearest-neighbor paths. Then, paths from $x$ to $y$ no longer have a fixed length and the last-passage time is defined by \[G_{x,y}=\max\Big\{\sum_{i=1}^n\omega_{x_i}:x_{0,n}\text{ nearest-neighbor, $x_0=x$, $x_n=y$, $n\ge1$}\Big\}.\] Replacing $\omega$ with $-\omega$ turns the $\max$ into a $\min$ and the model becomes the standard \index{first-passage percolation (FPP)}\index{FPP} first-passage percolation model. (Note that now the weights must be taken nonnegative, for otherwise $G_{x,y}$ will be infinite.) In this case, the inequality in \eqref{superadd} is reversed and $G_{x,y}$ defines a (random) metric on $\mathbb{Z}^2$. It is by analogy with this situation that optimizing paths in the directed model are called geodesics.
\end{remark}
\section{Connections to other models}\label{sec:connection} Say the weights $\omega_x$ are positive and for simplicity assume they have a continuous distribution, i.e.\ $\mathbb{P}(\omega_x=s)=0$ for all $s\in\mathbb{R}$. Then the last-passage percolation model we defined in Section \ref{LPP:intro} has several other equivalent descriptions.
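Computationally, the maximum in the definition of $G_{x,y}$ need not be taken over all up-right paths one by one: $G$ satisfies a simple dynamic-programming recursion, $G_{0,x}=\omega_x+\max(G_{0,x-e_1},G_{0,x-e_2})$, derived in the next section. The following Python sketch (function names and the grid convention, index $i$ along $e_1$ and $j$ along $e_2$, are ours, not from the text) computes $G_{0,\cdot}$ this way and checks it against brute-force enumeration over all paths, as well as against superadditivity:

```python
import itertools
import random

def last_passage(w):
    """G[i][j] = last-passage time from (0,0) to (i,j) for weights w[i][j],
    not counting the weight at the very first point of the path."""
    m, n = len(w), len(w[0])
    G = [[0.0] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            if (i, j) == (0, 0):
                continue  # time at the starting site is not counted
            prev = max(G[i - 1][j] if i > 0 else float('-inf'),
                       G[i][j - 1] if j > 0 else float('-inf'))
            G[i][j] = prev + w[i][j]
    return G

def brute_force(w, i, j):
    """Maximize over all binom(i+j, i) up-right paths explicitly."""
    best = float('-inf')
    for e1_positions in itertools.combinations(range(i + j), i):
        x = y = 0
        total = 0.0
        for step in range(i + j):
            if step in e1_positions:
                x += 1      # an e1 step
            else:
                y += 1      # an e2 step
            total += w[x][y]
        best = max(best, total)
    return best

random.seed(0)
w = [[random.random() for _ in range(4)] for _ in range(4)]
G = last_passage(w)
assert abs(G[3][3] - brute_force(w, 3, 3)) < 1e-12
# superadditivity: G_{0,y} + G_{y,z} <= G_{0,z} with y = (1,1), z = (3,3);
# G_{y,z} is a last-passage time on the shifted grid of weights
G2 = last_passage([row[1:] for row in w[1:]])
assert G[1][1] + G2[2][2] <= G[3][3] + 1e-12
```

The brute-force check is exponential in $|y-x|_1$ and only feasible on tiny grids; the recursion fills the whole $m\times n$ table in $O(mn)$ steps.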
\subsection{Random corner growth model (CGM)}\label{cgm} \index{corner growth model (CGM)} \index{CGM}
\begin{figure}
\centering
\begin{tikzpicture}[>=latex,scale=0.33]
\begin{scope}[shift={(15,0)}, local bounding box=aa]
\draw[line width=2pt,color=sussexb](0,0)--(6,0); \draw[line width=2pt,color=sussexb](0,0)--(0,5); \draw[line width=0.5pt](0.5,6)--(6,0.5);
\draw[ fill =sussexp](1,5.5)circle(2mm); \draw (1,5.5)+(0.3,0.3) node[color=sussexp,inner sep=0.6pt,sloped,above,rotate=-45] {\tiny4};
\draw[ fill =sussexp](1.5,5)circle(2mm); \draw (1.5,5)+(0.3,0.3) node[color=sussexp,inner sep=0.6pt,sloped,above,rotate=-45] {\tiny3};
\draw[ fill =sussexp](2,4.5)circle(2mm); \draw (2,4.5)+(0.3,0.3) node[color=sussexp,inner sep=0.6pt,sloped,above,rotate=-45] {\tiny2};
\draw[ fill =sussexp](2.5,4)circle(2mm); \draw (2.5,4)+(0.3,0.3) node[color=sussexp,inner sep=0.6pt,sloped,above,rotate=-45] {\tiny1};
\draw[ fill =sussexp](3,3.5)circle(2mm); \draw (3,3.5)+(0.3,0.3) node[color=sussexp,inner sep=0.6pt,sloped,above,rotate=-45] {\tiny0};
\draw[ fill =white](3.5,3)circle(2mm); \draw (3.5,3)+(0.3,0.3) node[shape=rectangle,inner sep=0.8pt,minimum size=1pt,draw,sloped,above,rotate=-45] {\tiny0};
\draw[ fill =white](4,2.5)circle(2mm); \draw (4,2.5)+(0.3,0.3) node[shape=rectangle,inner sep=0.8pt,minimum size=1pt,draw,sloped,above,rotate=-45] {\tiny1};
\draw[ fill =white](4.5,2)circle(2mm); \draw (4.5,2)+(0.3,0.3) node[shape=rectangle,inner sep=0.8pt,minimum size=1pt,draw,sloped,above,rotate=-45] {\tiny2};
\draw[ fill =white](5,1.5)circle(2mm); \draw (5,1.5)+(0.3,0.3) node[shape=rectangle,inner sep=0.8pt,minimum size=1pt,draw,sloped,above,rotate=-45] {\tiny3};
\draw[ fill =white](5.5,1)circle(2mm); \draw (5.5,1)+(0.3,0.3) node[shape=rectangle,inner sep=0.8pt,minimum size=1pt,draw,sloped,above,rotate=-45] {\tiny4};
\end{scope}
\begin{scope}[shift={(22.5,0)}, local bounding box=bb]
\draw[line width=2pt,color=sussexb](1,0)--(6,0); \draw[line width=2pt,color=sussexb](0,1)--(0,5); \draw[fill=mygray](0,0)--(0,1)--(1,1)--(1,0)--cycle; \draw[line width=2pt,color=sussexb](1,0)--(1,1)--(0,1); \draw[fill=red](0.5,0.5)circle(2mm); \draw[line width=0.5pt](0.5,6)--(6,0.5);
\draw[ fill =sussexp](1,5.5)circle(2mm); \draw (1,5.5)+(0.3,0.3) node[color=sussexp,inner sep=0.6pt,sloped,above,rotate=-45] {\tiny4};
\draw[ fill =sussexp](1.5,5)circle(2mm); \draw (1.5,5)+(0.3,0.3) node[color=sussexp,inner sep=0.6pt,sloped,above,rotate=-45] {\tiny3};
\draw[ fill =sussexp](2,4.5)circle(2mm); \draw (2,4.5)+(0.3,0.3) node[color=sussexp,inner sep=0.6pt,sloped,above,rotate=-45] {\tiny2};
\draw[ fill =sussexp](2.5,4)circle(2mm); \draw (2.5,4)+(0.3,0.3) node[color=sussexp,inner sep=0.6pt,sloped,above,rotate=-45] {\tiny1};
\draw[ fill =white](3,3.5)circle(2mm); \draw (3,3.5)+(0.3,0.3) node[shape=rectangle,inner sep=0.8pt,minimum size=1pt,draw,sloped,above,rotate=-45] {\tiny0};
\draw[ fill =sussexp](3.5,3)circle(2mm); \draw (3.5,3)+(0.3,0.3) node[color=sussexp,inner sep=0.6pt,sloped,above,rotate=-45] {\tiny0};
\draw[ fill =white](4,2.5)circle(2mm); \draw (4,2.5)+(0.3,0.3) node[shape=rectangle,inner sep=0.8pt,minimum size=1pt,draw,sloped,above,rotate=-45] {\tiny1};
\draw[ fill =white](4.5,2)circle(2mm); \draw (4.5,2)+(0.3,0.3) node[shape=rectangle,inner sep=0.8pt,minimum size=1pt,draw,sloped,above,rotate=-45] {\tiny2};
\draw[ fill =white](5,1.5)circle(2mm); \draw (5,1.5)+(0.3,0.3) node[shape=rectangle,inner sep=0.8pt,minimum size=1pt,draw,sloped,above,rotate=-45] {\tiny3};
\draw[ fill =white](5.5,1)circle(2mm); \draw (5.5,1)+(0.3,0.3) node[shape=rectangle,inner sep=0.8pt,minimum size=1pt,draw,sloped,above,rotate=-45] {\tiny4};
\end{scope}
\begin{scope}[shift={(30,0)}, local bounding box=cc]
\draw[line width=2pt,color=sussexb](2,0)--(6,0); \draw[line width=2pt,color=sussexb](0,1)--(0,5); \draw[fill=mygray](0,0)--(2,0)--(2,1)--(0,1)--cycle; \draw[line width=2pt,color=sussexb](0,1)--(2,1)--(2,0); \draw[line width=1pt](0.5,0.5)--(1.5,0.5); \draw[fill=red](0.5,0.5)circle(2mm); \draw[fill=black](1.5,0.5)circle(1.5mm); \draw[line width=0.5pt](0.5,6)--(6,0.5);
\draw[ fill =sussexp](1,5.5)circle(2mm); \draw (1,5.5)+(0.3,0.3) node[color=sussexp,inner sep=0.6pt,sloped,above,rotate=-45] {\tiny4};
\draw[ fill =sussexp](1.5,5)circle(2mm); \draw (1.5,5)+(0.3,0.3) node[color=sussexp,inner sep=0.6pt,sloped,above,rotate=-45] {\tiny3};
\draw[ fill =sussexp](2,4.5)circle(2mm); \draw (2,4.5)+(0.3,0.3) node[color=sussexp,inner sep=0.6pt,sloped,above,rotate=-45] {\tiny2};
\draw[ fill =sussexp](2.5,4)circle(2mm); \draw (2.5,4)+(0.3,0.3) node[color=sussexp,inner sep=0.6pt,sloped,above,rotate=-45] {\tiny1};
\draw[ fill =white](3,3.5)circle(2mm); \draw (3,3.5)+(0.3,0.3) node[shape=rectangle,inner sep=0.8pt,minimum size=1pt,draw,sloped,above,rotate=-45] {\tiny0};
\draw[ fill =white](3.5,3)circle(2mm); \draw (3.5,3)+(0.3,0.3) node[shape=rectangle,inner sep=0.8pt,minimum size=1pt,draw,sloped,above,rotate=-45] {\tiny1};
\draw[ fill =sussexp](4,2.5)circle(2mm); \draw (4,2.5)+(0.3,0.3) node[color=sussexp,inner sep=0.6pt,sloped,above,rotate=-45] {\tiny0};
\draw[ fill =white](4.5,2)circle(2mm); \draw (4.5,2)+(0.3,0.3) node[shape=rectangle,inner sep=0.8pt,minimum size=1pt,draw,sloped,above,rotate=-45] {\tiny2};
\draw[ fill =white](5,1.5)circle(2mm); \draw (5,1.5)+(0.3,0.3) node[shape=rectangle,inner sep=0.8pt,minimum size=1pt,draw,sloped,above,rotate=-45] {\tiny3};
\draw[ fill =white](5.5,1)circle(2mm); \draw (5.5,1)+(0.3,0.3) node[shape=rectangle,inner sep=0.8pt,minimum size=1pt,draw,sloped,above,rotate=-45] {\tiny4};
\end{scope}
\begin{scope}[shift={(15,-7.5)},local bounding box=dd]
\draw[line width=2pt,color=sussexb](2,0)--(6,0); \draw[line width=2pt,color=sussexb](0,2)--(0,5); \draw[fill=mygray](0,0)--(0,2)--(1,2)--(1,1)--(2,1)--(2,0)--cycle; \draw[line width=2pt,color=sussexb](0,2)--(1,2)--(1,1)--(2,1)--(2,0); \draw[line width=1pt](0.5,0.5)--(1.5,0.5); \draw[line width=1pt](0.5,0.5)--(0.5,1.5); \draw[fill=red](0.5,0.5)circle(2mm); \draw[fill=black](1.5,0.5)circle(1.5mm); \draw[fill=black](0.5,1.5)circle(1.5mm); \draw[line width=0.5pt](0.5,6)--(6,0.5);
\draw[ fill =sussexp](1,5.5)circle(2mm); \draw (1,5.5)+(0.3,0.3) node[color=sussexp,inner sep=0.6pt,sloped,above,rotate=-45] {\tiny4};
\draw[ fill =sussexp](1.5,5)circle(2mm); \draw (1.5,5)+(0.3,0.3) node[color=sussexp,inner sep=0.6pt,sloped,above,rotate=-45] {\tiny3};
\draw[ fill =sussexp](2,4.5)circle(2mm); \draw (2,4.5)+(0.3,0.3) node[color=sussexp,inner sep=0.6pt,sloped,above,rotate=-45] {\tiny2};
\draw[ fill =white](2.5,4)circle(2mm); \draw (2.5,4)+(0.3,0.3) node[shape=rectangle,inner sep=0.8pt,minimum size=1pt,draw,sloped,above,rotate=-45] {\tiny0};
\draw[ fill =sussexp](3,3.5)circle(2mm); \draw (3,3.5)+(0.3,0.3) node[color=sussexp,inner sep=0.6pt,sloped,above,rotate=-45] {\tiny1};
\draw[ fill =white](3.5,3)circle(2mm); \draw (3.5,3)+(0.3,0.3) node[shape=rectangle,inner sep=0.8pt,minimum size=1pt,draw,sloped,above,rotate=-45] {\tiny1};
\draw[ fill =sussexp](4,2.5)circle(2mm); \draw (4,2.5)+(0.3,0.3) node[color=sussexp,inner sep=0.6pt,sloped,above,rotate=-45] {\tiny0};
\draw[ fill =white](4.5,2)circle(2mm); \draw (4.5,2)+(0.3,0.3) node[shape=rectangle,inner sep=0.8pt,minimum size=1pt,draw,sloped,above,rotate=-45] {\tiny2};
\draw[ fill =white](5,1.5)circle(2mm); \draw (5,1.5)+(0.3,0.3) node[shape=rectangle,inner sep=0.8pt,minimum size=1pt,draw,sloped,above,rotate=-45] {\tiny3};
\draw[ fill =white](5.5,1)circle(2mm); \draw (5.5,1)+(0.3,0.3) node[shape=rectangle,inner sep=0.8pt,minimum size=1pt,draw,sloped,above,rotate=-45] {\tiny4};
\end{scope}
\begin{scope}[shift={(22.5,-7.5)},local bounding box=ee]
\draw[line width=2pt,color=sussexb](3,0)--(6,0); \draw[line width=2pt,color=sussexb](0,2)--(0,5); \draw[fill=mygray](0,0)--(0,2)--(1,2)--(1,1)--(3,1)--(3,0)--cycle; \draw[line width=2pt,color=sussexb](0,2)--(1,2)--(1,1)--(3,1)--(3,0); \draw[line width=1pt](0.5,0.5)--(2.5,0.5); \draw[line width=1pt](0.5,0.5)--(0.5,1.5); \draw[fill=red](0.5,0.5)circle(2mm); \draw[fill=black](1.5,0.5)circle(1.5mm); \draw[fill=black](0.5,1.5)circle(1.5mm); \draw[fill=black](2.5,0.5)circle(1.5mm); \draw[line width=0.5pt](0.5,6)--(6,0.5);
\draw[ fill =sussexp](1,5.5)circle(2mm); \draw (1,5.5)+(0.3,0.3) node[color=sussexp,inner sep=0.6pt,sloped,above,rotate=-45] {\tiny4};
\draw[ fill =sussexp](1.5,5)circle(2mm); \draw (1.5,5)+(0.3,0.3) node[color=sussexp,inner sep=0.6pt,sloped,above,rotate=-45] {\tiny3};
\draw[ fill =sussexp](2,4.5)circle(2mm); \draw (2,4.5)+(0.3,0.3) node[color=sussexp,inner sep=0.6pt,sloped,above,rotate=-45] {\tiny2};
\draw[ fill =white](2.5,4)circle(2mm); \draw (2.5,4)+(0.3,0.3) node[shape=rectangle,inner sep=0.8pt,minimum size=1pt,draw,sloped,above,rotate=-45] {\tiny0};
\draw[ fill =sussexp](3,3.5)circle(2mm); \draw (3,3.5)+(0.3,0.3) node[color=sussexp,inner sep=0.6pt,sloped,above,rotate=-45] {\tiny1};
\draw[ fill =white](3.5,3)circle(2mm); \draw (3.5,3)+(0.3,0.3) node[shape=rectangle,inner sep=0.8pt,minimum size=1pt,draw,sloped,above,rotate=-45] {\tiny1};
\draw[ fill =white](4,2.5)circle(2mm); \draw (4,2.5)+(0.3,0.3) node[shape=rectangle,inner sep=0.8pt,minimum size=1pt,draw,sloped,above,rotate=-45] {\tiny2};
\draw[ fill =sussexp](4.5,2)circle(2mm); \draw (4.5,2)+(0.3,0.3) node[color=sussexp,inner sep=0.6pt,sloped,above,rotate=-45] {\tiny0};
\draw[ fill =white](5,1.5)circle(2mm); \draw (5,1.5)+(0.3,0.3) node[shape=rectangle,inner sep=0.8pt,minimum size=1pt,draw,sloped,above,rotate=-45] {\tiny3};
\draw[ fill =white](5.5,1)circle(2mm); \draw (5.5,1)+(0.3,0.3) node[shape=rectangle,inner sep=0.8pt,minimum size=1pt,draw,sloped,above,rotate=-45] {\tiny4};
\end{scope}
\begin{scope}[shift={(30,-7.5)},local bounding box=ff]
\draw[line width=2pt,color=sussexb](3,0)--(6,0); \draw[line width=2pt,color=sussexb](0,2)--(0,5); \draw[fill=mygray](0,0)--(0,2)--(2,2)--(2,1)--(3,1)--(3,0)--cycle; \draw[line width=2pt,color=sussexb](0,2)--(2,2)--(2,1)--(3,1)--(3,0); \draw[line width=1pt](0.5,0.5)--(2.5,0.5); \draw[line width=1pt](0.5,0.5)--(0.5,1.5); \draw[line width=1pt](1.5,0.5)--(1.5,1.5); \draw[fill=red](0.5,0.5)circle(2mm); \draw[fill=black](1.5,0.5)circle(1.5mm); \draw[fill=black](0.5,1.5)circle(1.5mm); \draw[fill=black](2.5,0.5)circle(1.5mm); \draw[fill=black](1.5,1.5)circle(1.5mm); \draw[line width=0.5pt](0.5,6)--(6,0.5);
\draw[ fill =sussexp](1,5.5)circle(2mm); \draw (1,5.5)+(0.3,0.3) node[color=sussexp,inner sep=0.6pt,sloped,above,rotate=-45] {\tiny4};
\draw[ fill =sussexp](1.5,5)circle(2mm); \draw (1.5,5)+(0.3,0.3) node[color=sussexp,inner sep=0.6pt,sloped,above,rotate=-45] {\tiny3};
\draw[ fill =sussexp](2,4.5)circle(2mm); \draw (2,4.5)+(0.3,0.3) node[color=sussexp,inner sep=0.6pt,sloped,above,rotate=-45] {\tiny2};
\draw[ fill =white](2.5,4)circle(2mm); \draw (2.5,4)+(0.3,0.3) node[shape=rectangle,inner sep=0.8pt,minimum size=1pt,draw,sloped,above,rotate=-45] {\tiny0};
\draw[ fill =white](3,3.5)circle(2mm); \draw (3,3.5)+(0.3,0.3) node[shape=rectangle,inner sep=0.8pt,minimum size=1pt,draw,sloped,above,rotate=-45] {\tiny1};
\draw[ fill =sussexp](3.5,3)circle(2mm); \draw (3.5,3)+(0.3,0.3) node[color=sussexp,inner sep=0.6pt,sloped,above,rotate=-45] {\tiny1};
\draw[ fill =white](4,2.5)circle(2mm); \draw (4,2.5)+(0.3,0.3) node[shape=rectangle,inner sep=0.8pt,minimum size=1pt,draw,sloped,above,rotate=-45] {\tiny2};
\draw[ fill =sussexp](4.5,2)circle(2mm); \draw (4.5,2)+(0.3,0.3) node[color=sussexp,inner sep=0.6pt,sloped,above,rotate=-45] {\tiny0};
\draw[ fill =white](5,1.5)circle(2mm); \draw (5,1.5)+(0.3,0.3) node[shape=rectangle,inner sep=0.8pt,minimum size=1pt,draw,sloped,above,rotate=-45] {\tiny3};
\draw[ fill =white](5.5,1)circle(2mm); \draw (5.5,1)+(0.3,0.3) node[shape=rectangle,inner sep=0.8pt,minimum size=1pt,draw,sloped,above,rotate=-45] {\tiny4};
\end{scope}
\begin{scope}[shift={(40,-7.5)},scale=1.1,local bounding box=hh]
\draw[fill=mygray](0,0)--(0,10)--(1,10)--(1,7)--(2,7)--(2,6)--(3,6)--(3,4)--(5,4)--(5,3)--(6,3)--(6,2)--(8,2)--(8,1)--(10,1)--(10,0)--cycle;
\draw[line width=2pt,color=sussexb](0,11.5)--(0,10)--(1,10)--(1,7)--(2,7)--(2,6)--(3,6)--(3,4)--(5,4)--(5,3)--(6,3)--(6,2)--(8,2)--(8,1)--(10,1)--(10,0)--(11.5,0);
\draw[line width=1pt](9.5,0.5)--(0.5,0.5)--(0.5,9.5); \draw[line width=1pt](2.5,0.5)--(2.5,4.5); \draw[line width=1pt](1.5,0.5)--(1.5,3.5); \draw[line width=1pt](0.5,4.5)--(1.5,4.5)--(1.5,5.5)--(2.5,5.5); \draw[line width=1pt](0.5,6.5)--(1.5,6.5); \draw[line width=1pt](2.5,1.5)--(4.5,1.5); \draw[line width=1pt](5.5,1.5)--(7.5,1.5); \draw[line width=1pt](2.5,2.5)--(4.5,2.5)--(4.5,3.5); \draw[line width=1pt](5.5,0.5)--(5.5,1.5); \draw[line width=1pt](5.5,1.5)--(5.5,2.5); \draw[line width=1pt](3.5,2.5)--(3.5,3.5);
\foreach \x in {1,...,9}{ \draw[ fill =black](\x+0.5,0.5)circle(1.7mm); \draw[ fill =black](0.5,\x+0.5)circle(1.7mm); }
\draw[fill=red](0.5,0.5)circle(2.3mm);
\foreach \x in {0,...,4}{ \draw[ fill =black](2.5,\x+0.5)circle(1.7mm); }
\foreach \x in {0,...,3}{ \draw[ fill =black](1.5,\x+0.5)circle(1.7mm); }
\draw[ fill =black](2.5,5.5)circle(1.7mm); \draw[ fill =black](1.5,5.5)circle(1.7mm); \draw[ fill =black](1.5,6.5)circle(1.7mm); \draw[ fill =black](1.5,4.5)circle(1.7mm); \draw[ fill =black](4.5,1.5)circle(1.7mm); \draw[ fill =black](3.5,1.5)circle(1.7mm); \draw[ fill =black](3.5,2.5)circle(1.7mm); \draw[ fill =black](3.5,3.5)circle(1.7mm); \draw[ fill =black](4.5,3.5)circle(1.7mm); \draw[ fill =black](4.5,2.5)circle(1.7mm); \draw[ fill =black](5.5,2.5)circle(1.7mm); \draw[ fill =black](5.5,1.5)circle(1.7mm); \draw[ fill =black](6.5,1.5)circle(1.7mm); \draw[ fill =black](7.5,1.5)circle(1.7mm);
\end{scope}
\end{tikzpicture}
\caption{\small A possible early evolution of the corner growth model \index{corner growth model (CGM)} \index{CGM} on the first quadrant of the plane. Bullets mark infected lattice points $x$ with $G_{0,x}\le t$. The origin is distinguished with the larger red bullet. The gray region is the fattened set $\mathcal{B}(t)+[-1/2,1/2]^2$. The thick purple down-right path is the height function that is the boundary of the fattened set $\mathcal{H}(t)=(\mathbb{Z}^2_+\setminus\mathcal{B}(t))+[-1/2,1/2]^2$. The bold black edges are the paths of maximal passage time from the origin. They are all directed. The antidiagonals illustrate the mapping of the \index{corner growth model (CGM)} \index{CGM} corner growth model to TASEP.
Whenever a point is added to the growing infected cluster, a particle (solid purple circle) switches places with the hole (open circle) to its right. For the queueing picture, boxed numbers indicate the service stations and numbers without a box are the customers. When a customer and a station switch places, the customer has left that station and moved to the back of the line at the next station.}
\label{CGM:fig}
\end{figure}
For $x\in\mathbb{Z}^2$ set $G_{x,x}=0$. An infection starts at the origin and spreads into the first quadrant $\mathbb{Z}_+^2$. The set of infected sites at time $t \geq 0$ is \[\mathcal{B}(t) = \{x \in \mathbb{Z}^2_+ : G_{0,x} \le t\}.\] Let us see how $\mathcal{B}(t)$ evolves. Since for a site $x=ke_i$, $k\in\mathbb{N}$ and $i\in\{1,2\}$, we have \[G_{0,ke_i}=\sum_{j=1}^{k}\omega_{je_i}>G_{0,(k-1)e_i},\] such a site cannot get infected before $(k-1)e_i$ is infected. Similarly, for a site $x\in\mathbb{N}^2$ we have the induction
\begin{align}\label{G:ind}
G_{0,x}=\omega_x+\max(G_{0,x-e_1},G_{0,x-e_2}).
\end{align}
Thus, $G_{0,x}>G_{0,x-e_i}$ for both $i\in\{1,2\}$, and a site $x\in\mathbb{N}^2$ cannot be infected until after both $x-e_1$ and $x-e_2$ have been infected. There is a nice description of the evolution of $\mathcal{B}(t)$ using the fattened set of healthy sites \[\mathcal{H}(t)=(\mathbb{Z}^2_+\setminus\mathcal{B}(t))+[-1/2,1/2]^2.\] See the sequence of snapshots in Figure \ref{CGM:fig}. Start with $\mathcal{B}(0-)=\varnothing$ and
\begin{align}\label{H0-}
\mathcal{H}(0-)=\mathbb{Z}^2_++[-1/2,1/2]^2=\{x\in\mathbb{R}^2:x\ge-(e_1+e_2)/2\}.
\end{align}
The boundary of $\mathcal{H}(0-)$ is given by the path $\{h(s):s\in\mathbb{R}\}$ where
\begin{align}\label{h0-}
h(s)=s^+ e_1+s^- e_2-(e_1+e_2)/2.
\end{align}
At time $0$ the origin $x=0$ is infected, $\mathcal{B}(0)=\{0\}$, $\mathcal{H}(0)=\mathcal{H}(0-)\setminus[-1/2,1/2)^2$, and the boundary of $\mathcal{H}(0)$ is obtained from that of $\mathcal{H}(0-)$ by flipping the south-west corner located at $-(e_1+e_2)/2$ to a north-east corner, creating two new south-west corners at $(e_1-e_2)/2$ and $(e_2-e_1)/2$. For concreteness, say $\omega_{e_1}<\omega_{e_2}$. Then $e_1$ is the next site to become infected, exactly at time $t_1=\omega_{e_1}>0$. We have $\mathcal{B}(t)=\{0\}$ for $0\le t<t_1$ and $\mathcal{B}(t_1)=\{0,e_1\}$. Region $\mathcal{H}$ gets another square taken away: $\mathcal{H}(t_1)=\mathcal{H}(0)\setminus(e_1+[-1/2,1/2)^2)$. Its boundary changes by the south-west corner at $(e_1-e_2)/2$ getting flipped into a north-east corner. Site $(k,\ell)$ becomes infected at time $t=G_{0,(k,\ell)}$. Right before this time, the boundary of $\mathcal{H}(t-)$ has a south-west corner at $ke_1+\ell e_2-(e_1+e_2)/2$ that then flips into a north-east corner. Hence the name \index{corner growth model (CGM)} \index{CGM} ``corner growth''. The evolution is particularly nice when the weights $\omega_x$ have an exponential distribution, i.e.\ when $\mathbb{P}(\omega_x>s)=\mu((s,\infty))=e^{-\theta s}$ for some $\theta>0$ and all $s\in\mathbb{R}_+$. Parameter $\theta$ is called the rate of the exponential random variable. In this special case the evolution goes as follows. Given the set $\mathcal{B}(t_0)$ at some point in time $t_0\ge0$, consider the south-west corners on the boundary of $\mathcal{H}(t_0)$. (There are always finitely many such corners.) Assign to these corners independent random variables, exponentially distributed with the same rate $\theta$ as the weights $\omega_x$. Think of these variables as the time an ``alarm clock'' goes off at the corner the variable is assigned to.
When the first of these clocks rings the corresponding south-west corner gets flipped to a north-east corner. At that point in time, we have a new $\mathcal{H}$ and the procedure is repeated. The mathematics of this evolution is quite clear in the next equivalent description. \subsection{Queues in tandem}\label{queues} The queueing interpretation of LPP in terms of tandem service stations goes as follows. Imagine a queueing system with customers labeled by $\mathbb{Z}_+$ and service stations also labeled by $\mathbb{Z}_+$. The random weight $\omega_{k,\ell}$ is the service time of customer $k$ at station $\ell$. Right before time $0$ all customers are lined up at service station $0$ and customer $0$ is first in line and has just been served. At time $t=0$ customer $0$ is first in line at queue $1$ and the rest of the customers are still at queue $0$, with customer $1$ being first in line there, then customer $2$, and so on. Service of customers $0$ and $1$ begins. Customers proceed through the system in order, obeying the FIFO (first-in-first-out) discipline, and joining the queue at station $\ell+1$ as soon as service at station $\ell$ is complete. Once customer $k\ge0$ is first in line at station $\ell\ge0$, it takes $\omega_{k,\ell}$ time units to perform the service. (The only exception is $k=\ell=0$, where customer $0$ advances immediately from queue $0$ to queue $1$.) For each $k\ge0$ and $\ell\ge0$, $G_{0,(k,\ell)}$ is the time when customer $k$ departs station $\ell$ and joins the end of the queue at station $\ell+1$. To see this, observe that on the one hand this is clear when $k=0$ or $\ell=0$ and, on the other hand, this departure time satisfies the same recurrence \eqref{G:ind}.
Indeed, the departure time of customer $k$ from station $\ell$ is equal to the service time $\omega_{k,\ell}$ plus the time $G_{0,(k-1,\ell)}$ when customer $k-1$ departs station $\ell$ or the time customer $k$ departs station $\ell-1$, whichever is larger. (If the former time is larger, customer $k$ will have to wait for customer $k-1$ before service starts at station $\ell$. If instead the latter time is the larger of the two, then station $\ell$ will be empty when customer $k$ arrives and service starts immediately.) In terms of the \index{corner growth model (CGM)} \index{CGM} corner growth model, this is exactly when site $(k,\ell)$ gets infected. See Figure \ref{CGM:fig}. Among the seminal references for these ideas are \cite{Gly-Whi-91,Mut-79}. When the weights are exponentially distributed, the description is again quite transparent. At every point in time there are only finitely many non-empty queues. Assign to the first customer in each of these queues an independent random variable (a clock) with exponential distribution having the same rate as the weights $\omega_x$. The customer whose clock rings first has been served and moves on to the end of the queue at the next station, and then the procedure repeats. \subsection{Totally asymmetric simple exclusion process (TASEP)}\label{tasep} In this model, a configuration is an assignment of $0$s and $1$s to the integers $\mathbb{Z}$. More precisely, it is a function $\eta:\mathbb{Z}\to\{0,1\}$. Think of $\eta_j=1$ as a particle occupying site $j\in\mathbb{Z}$; then $\eta_j=0$ means $j$ is empty (or a hole).
Given a configuration $\eta$ such that $\exists j_0$ with $\eta_j=0$ for all $j\ge j_0$, define a curve (known as a \index{height function}{\sl height function}) $h:\mathbb{R}\to\mathbb{R}^2$ by $h(j_0-1/2)=j_0 e_1-(e_1+e_2)/2$, $h(j+1/2)-h(j-1/2)=(1-\eta_j)e_1-\eta_j e_2$ for all $j\in\mathbb{Z}$, and linear interpolation on $\mathbb{R}\setminus(\mathbb{Z}+1/2)$. (This is well defined, regardless of the choice of $j_0$.)

Right before time $t=0$ we start by placing a particle at every site $j\le-1$ and leaving the sites $j\ge0$ empty. In other words, $\eta_j(0-)=\mathbf{1}\{j<0\}$. The corresponding height function is given by \eqref{h0-}, i.e.\ it is the boundary of $\mathcal{H}(0-)$ from \eqref{H0-}. At time $t=0$ the particle at $j=-1$ jumps to the empty site $j=0$, leaving site $j=-1$ empty. Now the corresponding height function is given by the boundary of $\mathcal{H}(0)$. As $t$ grows, particles move around. At any point in time only one particle is allowed to make a move, and it can only move one step to its right, if there is no other particle there already.

Here is a more precise description of the particle dynamics. Particles move only at times when $\mathcal{B}(t)$ changes (i.e.\ when new sites are infected). Think of the boundary of $\mathcal{H}(t)$ as a height function. Then there is a one-to-one correspondence between south-west corners in the boundary of $\mathcal{H}(t)$ and particle-hole pairs (the hole being immediately to the right of the particle). When a south-west corner in the boundary of $\mathcal{H}(t)$ is flipped, the corresponding particle jumps one step to its right, switching positions with the hole that was there. See Figure \ref{CGM:fig}. Comparing this description to the queueing system, we see that holes play the role of service stations and particles to the left of a hole play the role of customers in line at that service station.
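The height-function bookkeeping is easy to check by machine. The following is a minimal Python sketch (the finite-window representation and the function name are ours, not the chapter's): it anchors the curve at $h(j_0-1/2)=j_0e_1-(e_1+e_2)/2$ just to the right of the window and walks left, inverting the increment $h(j+1/2)-h(j-1/2)=(1-\eta_j)e_1-\eta_j e_2$.

```python
def height_points(eta):
    """Height function h at the half-integer points j + 1/2 of a particle
    configuration, given as a dict {j: 0 or 1} on a finite window of sites,
    under the assumption that eta_j = 0 to the right of the window.
    Anchors at h(j0 - 1/2) = j0*e1 - (e1 + e2)/2 with j0 just past the
    window, then applies h(j + 1/2) - h(j - 1/2) = (1 - eta_j, -eta_j)."""
    js = sorted(eta)
    j0 = js[-1] + 1                   # eta_j = 0 for all j >= j0 by assumption
    h = {j0 - 0.5: (j0 - 0.5, -0.5)}  # anchor point of the curve
    for j in reversed(js):            # walk left, inverting each increment
        x, y = h[j + 0.5]
        h[j - 0.5] = (x - (1 - eta[j]), y + eta[j])
    return h

# The step configuration eta_j = 1{j < 0}: its height function is the
# boundary of the wedge H(0-), i.e. the corner along the coordinate axes.
step = {j: (1 if j < 0 else 0) for j in range(-4, 4)}
h = height_points(step)
```

For the step configuration one indeed recovers the corner: $h(j+1/2)=(j+1/2,-1/2)$ for $j\ge0$ and $h(-1/2-m)=(-1/2,m-1/2)$ for $m\ge0$.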
Once again, when the weights are exponentially distributed the evolution can be described in a Markovian way using exponential clocks. Given a configuration at some time $t_0$, there are only finitely many particles that have a hole immediately to their right. Each of these particles is given an independent exponential random variable with the same rate as the weights $\omega_x$, and the particle whose clock rings first moves one position to the right, effectively switching places with the hole that was there. Then the process is repeated. This is one of the most fundamental interacting particle systems. See \cite{Spi-70,Mac-Gib-Pip-68} for two of the earliest papers on the model.

Here is a table that summarizes the meaning of $G_{0,(k,\ell)}$ in the models in Sections \ref{cgm}-\ref{tasep}.

\renewcommand{\arraystretch}{1.2}
\begin{center}
\begin{tabular}{|l||l|}
\hline
{\bf Model} & {$\bm{G_{0,(k,\ell)}}$} {\bf is the time when:} \\
\hline\hline
CGM & site $(k,\ell)$ is infected\\
\hline
Queues & customer $k$ clears server $\ell$\\
\hline
TASEP & particle $k$ exchanges places with hole $\ell$\\
\hline
\end{tabular}
\end{center}

\subsection{The competition interface (CIF)}\label{cif-intro}

Start two infections, one at $e_1$ and one at $e_2$, and let sites get infected as before. Mark sites infected by $e_1$ purple and those infected by $e_2$ green. Once a site is infected by one of the two types, it remains like that forever. This partitions the first quadrant $\mathbb{Z}_+^2\setminus\{0\}$ into two regions of infection. Mathematically, recall induction \eqref{G:ind}. Since we assumed the weights to be independent and with a continuous distribution, the equality $G_{0,x-e_1}=G_{0,x-e_2}$ happens with zero probability. Thus,
\[G_{0,x}=\omega_x+G_{0,x-e_i}\]
for exactly one of $i\in\{1,2\}$. This indicates who infected site $x$. Another description of the spread of infection comes using geodesics.
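Before turning to geodesics, the recursion \eqref{G:ind} and the infection types it generates can be checked on a small grid. In this Python sketch (function names are ours) we adopt the convention that a path collects the weights of all its sites except the starting one, so that $G_{0,0}=0$ and $G_{0,x}=\omega_x+\max(G_{0,x-e_1},G_{0,x-e_2})$; the recursion is compared against a brute-force maximum over all up-right paths, and the maximizing steps are traced back to decide whether $e_1$ or $e_2$ infected a site.

```python
import itertools
import random

def lpp_grid(w):
    """G_{0,(k,l)} for every site of the grid, via the recursion
    G = omega + max(G one step down in e1, G one step down in e2),
    with G_{0,0} = 0, together with the step that achieved the maximum."""
    K, L = len(w), len(w[0])
    G = [[0.0] * L for _ in range(K)]
    step = [[None] * L for _ in range(K)]
    for k in range(K):
        for l in range(L):
            if (k, l) == (0, 0):
                continue
            if k == 0:
                G[k][l], step[k][l] = G[k][l - 1] + w[k][l], 'e2'
            elif l == 0:
                G[k][l], step[k][l] = G[k - 1][l] + w[k][l], 'e1'
            else:
                # ties have probability zero for continuous weights
                step[k][l] = 'e1' if G[k - 1][l] > G[k][l - 1] else 'e2'
                G[k][l] = w[k][l] + max(G[k - 1][l], G[k][l - 1])
    return G, step

def lpp_brute(w, k, l):
    """Maximum over all up-right paths from (0,0) to (k,l) of the sum of
    the weights at the sites of the path, the origin excluded."""
    best = float('-inf')
    for e1_positions in itertools.combinations(range(k + l), k):
        x = y = 0
        total = 0.0
        for i in range(k + l):
            if i in e1_positions:
                x += 1
            else:
                y += 1
            total += w[x][y]
        best = max(best, total)
    return best

def infected_by(step, k, l):
    """Trace the maximizing steps back from (k,l) until e1 or e2 is hit."""
    while (k, l) not in ((1, 0), (0, 1)):
        if step[k][l] == 'e1':
            k -= 1
        else:
            l -= 1
    return 'e1' if (k, l) == (1, 0) else 'e2'
```

On a $4\times4$ grid of independent uniform weights the two computations of $G$ agree at every site, and sites on the axes are infected by $e_1$ and $e_2$ respectively, as they must be.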
Again, because the weights are independent and have a continuous distribution, there is a unique geodesic between any two distinct sites $x\le y$. As such, the union of all geodesics from $0$ to sites $x\in\mathbb{Z}^2_+\setminus\{0\}$ forms a spanning tree of $\mathbb{Z}^2_+$ that represents the genealogy of the infection. The subtrees rooted at $e_1$ and $e_2$ consist precisely of the vertices infected by these two sites, respectively. They are separated by an up-right path on the dual lattice $\mathbb{Z}^2+(e_1+e_2)/2$ called the \index{competition interface}{\sl competition interface}. See Figure \ref{CIF:fig}. Properties of this interface, as well as references for further reading, are in Section \ref{cif:sec}.

\begin{figure}
\begin{center}
% TikZ drawing of the simulated tree of infection on a $10\times10$ grid
% omitted (machine-generated coordinate data).
\end{center}
\caption{\small The full tree of infection in the \index{corner growth model (CGM)}\index{CGM} corner growth model. The origin is the open circle at the bottom left. The solid black line marks the \index{competition interface} competition interface that separates the two competing infections that grow from points $e_1$ and $e_2$ marked with larger circles.}
\label{CIF:fig}
\end{figure}

\section{The shape function}\label{sec:shape}

One of the central questions in probability theory is to describe the order that emerges out of randomness as the number of random inputs into the system grows. For instance, the law of large numbers says that if $\{X_n:n\in\mathbb{N}\}$ are independent random variables that are identically distributed (i.e.\ random samples from the same population), then the sample or empirical mean $(X_1+\cdots+X_n)/n$ converges, with probability one, to the (population) mean $E[X_1]$ (provided this mean is well defined).
Even though $G_{0,x}$ is not simply a sum of independent random variables (it involves a maximum), the law of large numbers raises the question of whether or not $G_{0,x}$ should grow linearly with $\abs{x}_1$. This is indeed the case. For $a\in\mathbb{R}$ let $\fl{a}$ be the largest integer no greater than $a$. For $x\in\mathbb{R}^2$ let $\fl{x}$ act coordinatewise. Recall that $m_0=\mathbb{E}[\omega_0]$.

\index{shape theorem}
\begin{theorem}\label{th:shape}
{\rm\cite{Mar-04}}
Assume $\mathbb{E}[\abs{\omega_0}]<\infty$. Then for any $\xi\in\mathbb{R}_+^2$ the limit
\begin{align}\label{shape-sub}
g(\xi)=\lim_{n\to\infty}\frac{G_{0,\fl{n\xi}}}n
\end{align}
exists almost surely and {\rm(}if $g(\xi)<\infty${\rm)} in $L^1$. It is deterministic, $1$-homogeneous, concave, and satisfies $g(x_1,x_2)=g(x_2,x_1)$, $x_1,x_2\in\mathbb{R}_+$, $g(e_1)=g(e_2)=m_0$, and $g(\xi)\ge m_0\,\abs{\xi}_1$. If furthermore
\begin{align}\label{w-bound}
\int_0^\infty\!\!\!\sqrt{\mathbb{P}(\omega_0>s)}\,ds<\infty
\end{align}
{\rm(}e.g.\ if $\mathbb{E}[\abs{\omega_0}^{2+\varepsilon}]<\infty$ for some $\varepsilon>0${\rm)}, then $g$ is finite and continuous on all of $\mathbb{R}_+^2$ and
\begin{align}\label{shape}
\lim_{n\to\infty}\max_{x\in\mathbb{Z}_+^2:\abs{x}_1=n}\frac{\abs{G_{0,x}-g(x)}}n=0\quad\text{almost surely.}
\end{align}
\end{theorem}

In particular, limit \eqref{shape} says that the fattened set $(\mathcal{B}(t)+[-1/2,1/2]^2)/t$ converges almost surely, as $t\to\infty$, to the set $\{x\in\mathbb{R}_+^2:g(x)\le1\}$. Thus, \eqref{shape} is called a \index{shape theorem}{\sl shape theorem} and $g$ is the \index{shape function}{\sl shape function}. See Figure \ref{shape:fig}.
\begin{figure}
\centering
\begin{tikzpicture}[>=latex,scale=5]
% Simulated cluster and its boundary omitted (machine-generated coordinate
% data for the gray region and the thick blue down-right boundary path).
\draw[red, smooth, line width=1pt,domain=0:1] plot (\x, {1+\x-2*sqrt(\x)});
\end{tikzpicture}
\caption{\footnotesize LPP with exponentially distributed vertex weights with rate $1$. The gray region is a simulation of the scaled growing set $t^{-1}(\mathcal{B}(t)+[-1/2,1/2]^2)$ at time $t=160$. Its boundary (the thick blue down-right path) approximates the smooth red limit curve $\{x\in\mathbb{R}_+^2:\sqrt{x\cdot e_1}+\sqrt{x\cdot e_2}=1\}$, as first proved by Rost in 1981.}
\label{shape:fig}
\end{figure}

\begin{proof}[Proof of Theorem \ref{th:shape}]
Let us start by considering the case $\xi\in\mathbb{Z}_+^2$. By superadditivity \eqref{superadd} we have for $m\le n$
\[G_{0,m\xi}+G_{m\xi,n\xi}\le G_{0,n\xi}.\]
If we had additivity instead of superadditivity, then we could write
\[G_{0,n\xi}=\sum_{i=0}^{n-1}G_{i\xi,(i+1)\xi}.\]
The summands are \index{i.i.d.}{\sl independent and identically distributed} (i.i.d.) and as such $n^{-1}\sum_{i=0}^{n-1}G_{i\xi,(i+1)\xi}$ is the sample mean of random samples of $G_{0,\xi}$. A generalization of the law of large numbers, called the ergodic theorem, tells us then that this sample mean converges to the population mean $\mathbb{E}[G_{0,\xi}]$. Unfortunately, additivity does not hold. However, it turns out that one can prove a stochastic version of Fekete's subadditive lemma and apply this \index{subadditive ergodic theorem}{\sl subadditive ergodic theorem} to $-G_{0,n\xi}$ to obtain the limit \eqref{shape-sub}. A version of this theorem is given in \cite{ch:Damron} as Theorem 3.2. A bit more work is needed for the general case $\xi\in[0,\infty)^2$ and even more work is needed to get more uniform control and prove \eqref{shape}. The details are omitted as they are similar to the ones in the proof of the standard first-passage percolation shape theorem, given in \cite{ch:Damron}. See also Proposition 2.1(i) of \cite{Mar-04}. Symmetry of $g$ follows from that of the lattice and the fact that i.i.d.\ random variables are exchangeable (i.e.\ switching them around does not change the joint distribution).
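Superadditivity \eqref{superadd} is a pathwise fact: concatenating a maximizing path from $0$ to $x$ with a maximizing path from $x$ to $y$ yields one admissible path from $0$ to $y$. A minimal Python sketch of this check (with the convention that a path collects all of its weights except the one at its starting site; the function name is ours):

```python
import random

def lpp(w, start, end):
    """G_{start,end}: maximum over up-right paths from start to end of the
    sum of the weights at the sites of the path, the starting site excluded."""
    (a, b), (c, d) = start, end
    G = {}
    for i in range(a, c + 1):
        for j in range(b, d + 1):
            if (i, j) == (a, b):
                G[i, j] = 0.0
            else:
                # predecessors inside the rectangle [start, end]
                prev = [G[p] for p in ((i - 1, j), (i, j - 1)) if p in G]
                G[i, j] = w[i][j] + max(prev)
    return G[c, d]

rng = random.Random(0)
w = [[rng.expovariate(1.0) for _ in range(8)] for _ in range(8)]
x, y = (3, 2), (7, 7)
# G_{0,x} + G_{x,y} <= G_{0,y}: the concatenated path is one of the
# candidates in the maximum defining G_{0,y}.
assert lpp(w, (0, 0), x) + lpp(w, x, y) <= lpp(w, (0, 0), y) + 1e-12
```

The same pathwise inequality holds for every intermediate site $x$, not just the one chosen above.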
Since $G_{0,ne_i}=\sum_{k=0}^{n-1}\omega_{ke_i}$, $i\in\{1,2\}$, \eqref{shape-sub} and the law of large numbers give $g(e_1)=g(e_2)=\mathbb{E}[\omega_0]=m_0$.

Consider next the up-right path $x_{0,\fl{n\xi}}$ from $0$ to $\fl{n\xi}$ that first takes $\fl{n\xi_1}$ $e_1$-steps and then $\fl{n\xi_2}$ $e_2$-steps. We have
\[n^{-1} G_{0,\fl{n\xi}}\ge n^{-1}\sum_{i=0}^{\fl{n\xi_1}-1}\omega_{ie_1}+n^{-1}\sum_{i=0}^{\fl{n\xi_2}-1}\omega_{\fl{n\xi_1}e_1+ie_2}.\]
By \eqref{shape-sub} the left-hand side converges to $g(\xi)$ almost surely and hence also in probability. By the weak law of large numbers, the two sums on the right-hand side converge in probability to $m_0\xi_1$ and $m_0\xi_2$, respectively. It follows that $g(\xi)\ge m_0\abs{\xi}_1$.

Finiteness of $g(\xi)$ comes easily if the weights $\omega_x$ are bounded above by some constant: if $\mathbb{P}(\omega_0\le c)=1$ for some $c>0$, then clearly $G_{0,\fl{n\xi}}\le cn\abs{\xi}_1$ and thus $g(\xi)\le c\abs{\xi}_1$. More generally, finiteness would follow from the fact that $g$ is concave, homogeneous, and continuous on $\mathbb{R}_+^2$ (all the way up to the boundary).

Let us now prove the regularity properties claimed in the theorem. Homogeneity of $g$ comes simply from
\[g(c\xi)=\lim_{n\to\infty}\frac{G_{0,\fl{nc\xi}}}n=c\lim_{n\to\infty}\frac{G_{0,\fl{cn\xi}}}{cn}=cg(\xi),\quad\text{for }c>0.\]
Then concavity follows from homogeneity and superadditivity: for $\alpha\in(0,1)$
\[\alpha g(\xi)+(1-\alpha)g(\zeta)=g(\alpha\xi)+g((1-\alpha)\zeta)\]
and for $x,y\in\mathbb{R}_+^2$
\begin{align*}
&g(x)+g(y)\\
&\quad=\lim_{n\to\infty}\frac{G_{0,\fl{nx}}}n+\lim_{n\to\infty}\frac{G_{0,\fl{ny}}}n&\text{(almost surely)}\\
&\quad=\lim_{n\to\infty}\frac{G_{0,\fl{nx}}}n+\lim_{n\to\infty}\frac{G_{\fl{nx},\fl{ny}+\fl{nx}}}n&\text{(in probability, due to shift-invariance)}\\
&\quad\le\lim_{n\to\infty}\frac{G_{0,\fl{ny}+\fl{nx}}}n&\text{(by superadditivity \eqref{superadd})}\\
&\quad=g(x+y).
\end{align*}
In the second equality, we used the fact that if one shifts the picture, placing the origin where, say, $z$ used to be, then the two situations are statistically equivalent: for all $z\in\mathbb{Z}^2$ and $x\ge0$, $G_{z,z+x}$ has the same distribution as $G_{0,x}$.

It remains to prove continuity. We will do so under the assumption that the weights are bounded in absolute value. This captures the essence of the argument. The general case requires some extra technical work that we will avoid; the interested reader can find the details in the proof of Proposition 2.2 of \cite{Mar-04}. Fix an $\varepsilon\in\mathbb{Q}\cap(0,1/2)$ and an integer $k$ such that $k\varepsilon\in\mathbb{N}$. The above computation gives us
\begin{align}\label{cont1}
g(e_2+\varepsilon e_1)-g(e_2)\ge g(\varepsilon e_1)=\varepsilon g(e_1)=\varepsilon m_0.
\end{align}
We next prove a similar upper bound. Each up-right path from $0$ to $kn e_2+\varepsilon kn e_1$ consists of $kn$ $e_2$-steps and $\varepsilon k n$ $e_1$-steps. Thus, there are ${nk(1+\varepsilon)\choose nk\varepsilon}$ such paths. By Stirling's approximation $N!\sim\sqrt{2\pi N}N^Ne^{-N}$ we have that
\[{nk(1+\varepsilon)\choose nk\varepsilon}\sim\sqrt{\frac{1+\varepsilon}{2\pi nk\varepsilon}}\,e^{nkh(\varepsilon)},\]
where $h(\varepsilon)=(1+\varepsilon)\log(1+\varepsilon)-\varepsilon\log\varepsilon$.
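The path count and the Stirling estimate above are easy to confirm numerically: the number of up-right paths from $0$ to $(a,b)$ is $\binom{a+b}{a}$, since each site is reached from its left or its lower neighbour, and the estimate for $\binom{nk(1+\varepsilon)}{nk\varepsilon}$ is accurate already for moderate $nk$. A Python sketch (function names are ours):

```python
import math

def path_count(a, b):
    """Number of up-right lattice paths from (0,0) to (a,b): each site is
    reached from its left or its lower neighbour, so the counts add."""
    N = [[1] * (b + 1) for _ in range(a + 1)]
    for i in range(1, a + 1):
        for j in range(1, b + 1):
            N[i][j] = N[i - 1][j] + N[i][j - 1]
    return N[a][b]

def stirling_binom(nk, eps):
    """Stirling estimate sqrt((1+eps)/(2*pi*nk*eps)) * exp(nk*h(eps))
    for binom(nk*(1+eps), nk*eps), with h(eps) as in the text."""
    h = (1 + eps) * math.log(1 + eps) - eps * math.log(eps)
    return math.sqrt((1 + eps) / (2 * math.pi * nk * eps)) * math.exp(nk * h)
```

For instance, with $nk=40$ and $\varepsilon=1/2$ the estimate is within about half a percent of $\binom{60}{20}$.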
Next, fix $\delta>0$ and use a union bound (i.e.\ that $\mathbb{P}(\cup_j A_j)\le\sum_j\mathbb{P}(A_j)$) and $m_0=g(e_2)$ to write
\begin{align*}
&\mathbb{P}\bigl(G_{0,kne_2+\varepsilon kne_1}\ge nk g(e_2)+nk\delta\bigr)\\
&\qquad\le\sum_{\substack{x_{0,nk(1+\varepsilon)}\\ \text{up-right}}}\mathbb{P}\Bigl(\sum_{i=1}^{nk(1+\varepsilon)}\omega_{x_i}\ge nkg(e_2)+nk\delta\Bigr)\\
&\qquad=\sum_{\substack{x_{0,nk(1+\varepsilon)}\\ \text{up-right}}}\mathbb{P}\Bigl(\sum_{i=1}^{nk(1+\varepsilon)}(\omega_{x_i}-m_0)\ge -nk\varepsilon m_0+nk\delta\Bigr).
\end{align*}
Now observe that once the up-right path $x_{0,nk(1+\varepsilon)}$ is fixed, the weights $\omega_{x_i}$ are i.i.d.\ with mean $m_0$. Thus, all the probabilities in the last sum have the same value and we can continue by writing
\begin{align}\label{inter}
\begin{split}
&\mathbb{P}\bigl(G_{0,kne_2+\varepsilon kne_1}\ge nk g(e_2)+nk\delta\bigr)\\
&\qquad\qquad\le\sqrt{\frac{1+\varepsilon}{2\pi nk\varepsilon}}\,e^{nkh(\varepsilon)}\,\mathbb{P}\Bigl(\sum_{i=1}^{nk(1+\varepsilon)}Z_i\ge nk(\delta-\varepsilon m_0)\Bigr),
\end{split}
\end{align}
where the $Z_i$ are i.i.d.\ centered (and bounded) random variables with the same distribution as $\omega_0-m_0$. If $m_0>0$, pick $\varepsilon\in(0,\delta/m_0)$; otherwise, just pick $\varepsilon>0$. Then the event in the last probability contradicts the law of large numbers, which says that the sample mean $\frac1{nk(1+\varepsilon)}\sum_{i=1}^{nk(1+\varepsilon)}Z_i$ should be close to $0$. In fact, large deviation theory tells us that the probability of such an event decays exponentially fast. Indeed, fix a number $\lambda>0$ and use Chebyshev's exponential inequality to write
\begin{align*}
\mathbb{P}\Bigl(\sum_{i=1}^{nk(1+\varepsilon)}Z_i\ge nk(\delta-\varepsilon m_0)\Bigr)
&\le \mathbb{E}\bigl[e^{\sum_{i=1}^{nk(1+\varepsilon)}\lambda Z_i-\lambda nk(\delta-\varepsilon m_0)}\bigr]\\
&=e^{-\lambda nk(\delta-\varepsilon m_0)}\mathbb{E}[e^{\lambda Z_0}]^{nk(1+\varepsilon)}\\
&= \exp\Bigl\{-nk\Bigl(\lambda(\delta-\varepsilon m_0)-(1+\varepsilon)\log \mathbb{E}[e^{\lambda Z_0}]\Bigr)\Bigr\}.
\end{align*}
A little calculus exercise shows that $e^x\le1+x+x^2$ for $x$ close to $0$. Use this and $\log(1+x)\le x$ (valid for all $x>-1$) to continue the above computation, remembering that $\mathbb{E}[Z_0]=0$:
\begin{align*}
&\mathbb{P}\Bigl(\sum_{i=1}^{nk(1+\varepsilon)}Z_i\ge nk(\delta-\varepsilon m_0)\Bigr)\\
&\qquad\le \exp\Bigl\{-nk\Bigl(\lambda(\delta-\varepsilon m_0)-(1+\varepsilon)\log \bigl(1+\lambda^2 \mathbb{E}[Z_0^2]\bigr)\Bigr)\Bigr\}\\
&\qquad\le \exp\Bigl\{-nk\Bigl(\lambda(\delta-\varepsilon m_0)-(1+\varepsilon)\lambda^2 \mathbb{E}[Z_0^2]\Bigr)\Bigr\}.
\end{align*}
Take $\lambda=(\delta-\varepsilon m_0)/(2+2\varepsilon)$ and assume, after rescaling the weights if necessary, that $\mathbb{E}[Z_0^2]\le1$; the above becomes
\[\mathbb{P}\Bigl(\sum_{i=1}^{nk(1+\varepsilon)}Z_i\ge nk(\delta-\varepsilon m_0)\Bigr)\le e^{-(\delta-\varepsilon m_0)^2nk/(4+4\varepsilon)}.\]
Going back to \eqref{inter} we have
\begin{align*}
&\mathbb{P}\bigl(G_{0,kne_2+\varepsilon kne_1}\ge nk g(e_2)+nk\delta\bigr)\\
&\qquad\qquad\le \sqrt{\frac{1+\varepsilon}{2\pi nk\varepsilon}}\,\exp\Bigl\{-nk\Bigl(\frac{(\delta-\varepsilon m_0)^2}{4(1+\varepsilon)}-(1+\varepsilon)\log(1+\varepsilon)+\varepsilon\log\varepsilon\Bigr)\Bigr\}.
\end{align*}
Taking $\varepsilon$ small enough makes the right-hand side converge exponentially fast to $0$ as $n\to\infty$.
Combining this with the fact that $G_{0,kne_2+\varepsilon kne_1}/(nk)$ converges almost surely to $g(e_2+\varepsilon e_1)$ gives
\[g(e_2+\varepsilon e_1)-g(e_2)\le\delta.\]
Together with \eqref{cont1} this proves continuity of $a\mapsto g(e_2+a e_1)$ at $a=0$. Symmetry gives the same result with $e_1$ and $e_2$ switched around. This and the homogeneity of $g$ imply its continuity at the boundary of $\mathbb{R}^2_+$. Continuity in the interior is a known fact about concave functions.
\end{proof}

When the weights $\omega_x$ are exponentially distributed (with rate $\theta>0$) one can get an explicit formula for the \index{shape function!explicit} shape:
\begin{align}\label{shape:exp}
g(\xi_1,\xi_2)=m_0(\xi_1+\xi_2)+2\sigma_0\sqrt{\xi_1\,\xi_2}\,,\quad\xi=(\xi_1,\xi_2)\in\mathbb{R}_+^2.
\end{align}
Here, $m_0=\theta^{-1}$ is the mean of $\omega_x$ and $\sigma_0=\theta^{-1}$ is its standard deviation. (The two quantities are equal for exponential random variables, but this is not true in general.) A similar formula also holds when the weights $\omega_x$ are geometrically distributed, i.e.\ when $\mu$ is supported on $\mathbb{N}$ and for some $p\in(0,1)$ and all $j\in\mathbb{N}$, $\mathbb{P}(\omega_x=j)=\mu(\{j\})=p^{j-1}(1-p)$. In the exponential case this formula was first derived by Rost \cite{Ros-81} (who presented the model in its coupling with TASEP, without the last-passage formulation), while early derivations of the geometric case appeared in \cite{Coh-Elk-Pro-96, Joc-Pro-Sho-98, Sep-98-mprf-1}. We will prove the formula at the end of Section \ref{fixed-pt} using equation \eqref{B-g} and the explicit knowledge of the distribution of Busemann functions. See also Theorem 3.4 in \cite{ch:Seppalainen}. Other than these two cases, no explicit formula is known for $g$. However, it is known that formula \eqref{shape:exp} does hold in general near the boundary.
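Formula \eqref{shape:exp} can also be seen in simulation: for rate-one exponential weights $g(1,1)=2m_0+2\sigma_0=4$, and $G_{0,(n,n)}/n$ is close to $4$ already for moderate $n$ (typically from below, the corrections being of order $n^{-2/3}$). A minimal Python sketch (function name ours; the weight at the origin is not collected, which does not affect the limit):

```python
import random

def diagonal_lpp(n, rate=1.0, seed=0):
    """G_{0,(n,n)} by the recursion G = omega + max(left, below), with
    i.i.d. exponential(rate) weights, computed one row at a time."""
    rng = random.Random(seed)
    row = None
    for i in range(n + 1):
        new = [0.0] * (n + 1)
        for j in range(n + 1):
            if (i, j) == (0, 0):
                continue  # G_{0,0} = 0: the origin's weight is not collected
            best = 0.0
            if i > 0:
                best = row[j]
            if j > 0 and new[j - 1] > best:
                best = new[j - 1]
            new[j] = rng.expovariate(rate) + best
        row = new
    return row[n]
```

For $n$ of a few hundred the ratio $G_{0,(n,n)}/n$ should land within a few percent of $4$; the row-by-row sweep keeps the memory footprint at $O(n)$.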
\begin{theorem}\label{th:Martin}
{\rm\cite{Mar-04}}
Assume
\[\int_0^\infty\!\!\!\sqrt{\mathbb{P}(\omega_0>s)}\,ds<\infty\quad\text{and}\quad \int_{-\infty}^0\!\!\!\sqrt{\mathbb{P}(\omega_0\le s)}\,ds<\infty.\]
Let $m_0=\mathbb{E}[\omega_0]$ and $\sigma_0^2=\mathbb{E}[\omega_0^2]-m_0^2$. Then
\begin{align}\label{martin}
g(1,\alpha)=m_0+2\sigma_0\sqrt{\alpha}+o(\sqrt\alpha)\quad\text{as }\alpha\searrow0.
\end{align}
\end{theorem}

The fact that $g(1,\alpha)-m_0$ is of order $\sqrt{\alpha}$ for small $\alpha$ comes from a more careful look at the continuity proof above. An even more careful look, involving a comparison to the case with exponential weights, gives the more precise formula \eqref{martin}. The above has a nice consequence regarding the limiting shape.

\begin{corollary}
The limiting shape $\{x\in\mathbb{R}_+^2:g(x)\le1\}$ is not a polygon {\rm(}with finitely many sides{\rm)}.
\end{corollary}

It is tempting to think that perhaps formula \eqref{shape:exp} holds in general. The following situation shows that this is not the case. Assume that the LPP weights satisfy $\omega_x\le 1$ and $p=\mathbb{P}\{\omega_0=1\}>0$. The classical Durrett-Liggett flat edge result implies that if $p$ is large enough, then $g$ is linear on a whole cone. See Figure \ref{flat:fig}.
\begin{equation}gin{figure} \centering \mathrm def2ptk{4pt} \begin{equation}gin{tikzpicture}[>=latex,scale=8] \mathrm draw[line width=1pt,color=sussexb](0,0.794545454545454)-- (0.0101010101010101,0.839191919191919)-- (0.0202020202020202,0.866868686868686)-- (0.0303030303030303,0.887474747474746)-- (0.0404040404040404,0.900808080808079)-- (0.0505050505050505,0.91141414141414)-- (0.0606060606060606,0.92151515151515)-- (0.0707070707070707,0.93090909090909)-- (0.0808080808080808,0.937171717171716)-- (0.0909090909090909,0.944242424242423)-- (0.101010101010101,0.95070707070707)-- (0.111111111111111,0.957070707070706)-- (0.121212121212121,0.96090909090909)-- (0.131313131313131,0.966060606060605)-- (0.141414141414141,0.969797979797979)-- (0.151515151515152,0.973535353535353)-- (0.161616161616162,0.976666666666666)-- (0.171717171717172,0.979898989898989)-- (0.181818181818182,0.982424242424242)-- (0.191919191919192,0.985050505050505)-- (0.202020202020202,0.986262626262626)-- (0.212121212121212,0.987676767676767)-- (0.222222222222222,0.989191919191919)-- (0.232323232323232,0.990909090909091)-- (0.242424242424242,0.991212121212121)-- (0.252525252525253,0.991717171717172)-- (0.262626262626263,0.992020202020202)-- (0.272727272727273,0.993131313131313)-- (0.282828282828283,0.993939393939394)-- (0.292929292929293,0.994343434343434)-- (0.303030303030303,0.994646464646464)-- (0.313131313131313,0.995252525252525)-- (0.323232323232323,0.995454545454545)-- (0.333333333333333,0.995151515151515)-- (0.343434343434343,0.995959595959596)-- (0.353535353535354,0.995656565656566)-- (0.363636363636364,0.996060606060606)-- (0.373737373737374,0.996060606060606)-- (0.383838383838384,0.995555555555555)-- (0.393939393939394,0.995757575757576)-- (0.404040404040404,0.995757575757576)-- (0.414141414141414,0.995858585858586)-- (0.424242424242424,0.995757575757576)-- (0.434343434343434,0.995656565656566)-- (0.444444444444444,0.995656565656566)-- (0.454545454545455,0.996262626262626)-- 
(0.464646464646465,0.996363636363636)-- (0.474747474747475,0.996262626262626)-- (0.484848484848485,0.995757575757576)-- (0.494949494949495,0.996464646464646)-- (0.505050505050505,0.995555555555555)-- (0.515151515151515,0.995858585858586)-- (0.525252525252525,0.996161616161616)-- (0.535353535353535,0.996060606060606)-- (0.545454545454545,0.996161616161616)-- (0.555555555555556,0.995858585858586)-- (0.565656565656566,0.995858585858586)-- (0.575757575757576,0.995555555555555)-- (0.585858585858586,0.996262626262626)-- (0.595959595959596,0.996363636363636)-- (0.606060606060606,0.996161616161616)-- (0.616161616161616,0.996161616161616)-- (0.626262626262626,0.996161616161616)-- (0.636363636363636,0.995757575757576)-- (0.646464646464647,0.995959595959596)-- (0.656565656565657,0.995959595959596)-- (0.666666666666667,0.996060606060606)-- (0.676767676767677,0.995858585858586)-- (0.686868686868687,0.995050505050505)-- (0.696969696969697,0.995252525252525)-- (0.707070707070707,0.994444444444444)-- (0.717171717171717,0.994343434343434)-- (0.727272727272727,0.994141414141414)-- (0.737373737373737,0.993838383838384)-- (0.747474747474748,0.992727272727273)-- (0.757575757575758,0.991919191919192)-- (0.767676767676768,0.99060606060606)-- (0.777777777777778,0.989393939393939)-- (0.787878787878788,0.987575757575757)-- (0.797979797979798,0.985858585858586)-- (0.808080808080808,0.983939393939394)-- (0.818181818181818,0.981919191919192)-- (0.828282828282828,0.978989898989899)-- (0.838383838383838,0.974545454545454)-- (0.848484848484849,0.971313131313131)-- (0.858585858585859,0.967878787878787)-- (0.868686868686869,0.963939393939393)-- (0.878787878787879,0.957575757575757)-- (0.888888888888889,0.954343434343433)-- (0.898989898989899,0.949696969696969)-- (0.909090909090909,0.942222222222221)-- (0.919191919191919,0.933939393939393)-- (0.929292929292929,0.927373737373736)-- (0.939393939393939,0.92111111111111)-- (0.94949494949495,0.910606060606059)-- (0.95959595959596,0.898989898989898)-- 
(0.96969696969697,0.886363636363635)-- (0.97979797979798,0.864646464646464)-- (0.98989898989899,0.838585858585858)-- (1,0.791414141414141); \mathrm draw[->](0,0.7)--(1.2,0.7); \mathrm draw[->](0,0.7)--(0,1.2); \mathrm draw(-.02,0.8)--(0.02,0.8); \mathrm draw(-0.065,0.8)node{$0.8$}; \mathrm draw(-.02,0.98)--(0.02,0.98); \mathrm draw(-0.04,0.99)node{$1$}; \mathrm draw(1,0.7-.02)--(1,0.7+0.02); \mathrm draw(1,0.7-0.06)node{$1$}; \mathrm draw(0,0.7-0.06)node{$0$}; \mathrm draw[dashed](0.24,0.7)--(0.24,0.99); \mathrm draw[dashed](1-0.24,0.7)--(1-0.24,0.99); \mathrm draw(0.24,0.7-0.06)node{$t_0$}; \mathrm draw(1-0.24,0.7-0.06)node{$1-t_0$}; \mathrm draw[line width=1pt,color=red] (0,0.8)-- (0.001,0.825285569006847)-- (0.002,0.835741292645902)-- (0.003,0.843752028524401)-- (0.004,0.85049514828179)-- (0.005,0.856426943918664)-- (0.006,0.861781550644185)-- (0.007,0.866698125910703)-- (0.008,0.871267383844224)-- (0.009,0.875552365945747)-- (0.01,0.87959899496853)-- (0.011,0.883441955873529)-- (0.012,0.887108208568424)-- (0.013,0.890619203262885)-- (0.014,0.893992340113437)-- (0.015,0.897241966249146)-- (0.016,0.900380077704692)-- (0.017,0.903416826483895)-- (0.018,0.90636089506957)-- (0.019,0.909219778428634)-- (0.02,0.912)-- (0.021,0.914707279629499)-- (0.022,0.917346665909177)-- (0.023,0.919922641732077)-- (0.024,0.922439209406138)-- (0.025,0.924899959967968)-- (0.026,0.927308130141009)-- (0.027,0.929666649528705)-- (0.028,0.931978180014728)-- (0.029,0.934245148888144)-- (0.03,0.936469776873856)-- (0.031,0.938654101994856)-- (0.032,0.9408)-- (0.033,0.942909201943052)-- (0.034,0.944983309384218)-- (0.035,0.947023807595913)-- (0.036,0.949032077084096)-- (0.037,0.951009403680698)-- (0.038,0.952956987418032)-- (0.039,0.95487595036028)-- (0.04,0.956767343538123)-- (0.041,0.958632153109009)-- (0.042,0.960471305846248)-- (0.043,0.962285674044261)-- (0.044,0.964076079914167)-- (0.045,0.965843299533023)-- (0.046,0.967588066400923)-- (0.047,0.969311074652546)-- 
(0.048,0.971012981963359)-- (0.049,0.972694412185224)-- (0.05,0.974355957741627)-- (0.051,0.97599818180879)-- (0.052,0.977621620305637)-- (0.053,0.979226783712703)-- (0.054,0.980814158737639)-- (0.055,0.982384209842848)-- (0.056,0.983937380648959)-- (0.057,0.985474095226261)-- (0.058,0.986994759284853)-- (0.059,0.988499761273058)-- (0.06,0.989989473392607)-- (0.061,0.99146425253817)-- (0.062,0.992924441168039)-- (0.063,0.994370368112015)-- (0.064,0.995802349321963)-- (0.065,0.997220688569937)-- (0.066,0.998625678098276)-- (0.067,1.00001759922567)-- (0.068,1.00139672291276)-- (0.069,1.00276331029059)-- (0.07,1.00411761315477)-- (0.071,1.00545987442807)-- (0.072,1.00679032859397)-- (0.073,1.00810920210313)-- (0.074,1.00941671375513)-- (0.075,1.01071307505705)-- (0.076,1.01199849056066)-- (0.077,1.01327315817983)-- (0.078,1.01453726948948)-- (0.079,1.01579101000737)-- (0.08,1.01703455946001)-- (0.081,1.01826809203363)-- (0.082,1.01949177661133)-- (0.083,1.02070577699734)-- (0.084,1.0219102521291)-- (0.085,1.02310535627815)-- (0.086,1.02429123924041)-- (0.087,1.02546804651657)-- (0.088,1.02663591948321)-- (0.089,1.02779499555521)-- (0.09,1.02894540834007)-- (0.091,1.03008728778444)-- (0.092,1.0312207603136)-- (0.093,1.03234594896404)-- (0.094,1.03346297350972)-- (0.095,1.03457195058233)-- (0.096,1.03567299378588)-- (0.097,1.03676621380594)-- (0.098,1.03785171851387)-- (0.099,1.03892961306628)-- (0.1,1.04)-- (0.101,1.04106297932283)-- (0.102,1.04211864860023)-- (0.103,1.04316710303822)-- (0.104,1.04420843556274)-- (0.105,1.04524273689551)-- (0.106,1.04627009562673)-- (0.107,1.04729059828469)-- (0.108,1.04830432940245)-- (0.109,1.0493113715818)-- (0.11,1.05031180555459)-- (0.111,1.05130571024153)-- (0.112,1.05229316280867)-- (0.113,1.05327423872159)-- (0.114,1.05424901179749)-- (0.115,1.05521755425519)-- (0.116,1.05617993676321)-- (0.117,1.05713622848599)-- (0.118,1.05808649712839)-- (0.119,1.05903080897839)-- (0.12,1.05996922894835)-- (0.121,1.06090182061458)-- 
(0.122,1.06182864625552)-- (0.123,1.06274976688857)-- (0.124,1.06366524230547)-- (0.125,1.06457513110646)-- (0.126,1.06547949073328)-- (0.127,1.06637837750088)-- (0.128,1.06727184662811)-- (0.129,1.0681599522673)-- (0.13,1.0690427475328)-- (0.131,1.0699202845286)-- (0.132,1.07079261437491)-- (0.133,1.07165978723396)-- (0.134,1.07252185233482)-- (0.135,1.07337885799747)-- (0.136,1.07423085165605)-- (0.137,1.07507787988132)-- (0.138,1.07591998840244)-- (0.139,1.07675722212799)-- (0.14,1.07758962516636)-- (0.141,1.07841724084546)-- (0.142,1.07924011173182)-- (0.143,1.08005827964908)-- (0.144,1.08087178569589)-- (0.145,1.08168067026333)-- (0.146,1.08248497305167)-- (0.147,1.0832847330867)-- (0.148,1.08407998873557)-- (0.149,1.08487077772211)-- (0.15,1.08565713714171)-- (0.151,1.08643910347577)-- (0.152,1.08721671260566)-- (0.153,1.08798999982638)-- (0.154,1.08875899985974)-- (0.155,1.08952374686716)-- (0.156,1.09028427446212)-- (0.157,1.09104061572227)-- (0.158,1.09179280320118)-- (0.159,1.09254086893971)-- (0.16,1.09328484447717)-- (0.161,1.09402476086207)-- (0.162,1.09476064866261)-- (0.163,1.09549253797685)-- (0.164,1.09622045844269)-- (0.165,1.09694443924748)-- (0.166,1.09766450913738)-- (0.167,1.09838069642656)-- (0.168,1.09909302900603)-- (0.169,1.09980153435231)-- (0.17,1.10050623953589)-- (0.171,1.10120717122937)-- (0.172,1.10190435571551)-- (0.173,1.10259781889498)-- (0.174,1.10328758629393)-- (0.175,1.10397368307141)-- (0.176,1.10465613402654)-- (0.177,1.10533496360555)-- (0.178,1.10601019590857)-- (0.179,1.10668185469636)-- (0.18,1.10734996339678)-- (0.181,1.1080145451111)-- (0.182,1.10867562262025)-- (0.183,1.10933321839078)-- (0.184,1.1099873545808)-- (0.185,1.11063805304566)-- (0.186,1.11128533534364)-- (0.187,1.11192922274131)-- (0.188,1.11256973621898)-- (0.189,1.1132068964758)-- (0.19,1.11384072393493)-- (0.191,1.11447123874847)-- (0.192,1.11509846080233)-- (0.193,1.11572240972094)-- (0.194,1.11634310487191)-- (0.195,1.11696056537052)-- 
(0.196,1.11757481008418)-- (0.197,1.1181858576367)-- (0.198,1.11879372641255)-- (0.199,1.11939843456097)-- (0.2,1.12)-- (0.201,1.12059844042041)-- (0.202,1.12119377328958)-- (0.203,1.12178601585526)-- (0.204,1.12237518514923)-- (0.205,1.12296129799095)-- (0.206,1.12354437099106)-- (0.207,1.12412442055482)-- (0.208,1.12470146288553)-- (0.209,1.12527551398776)-- (0.21,1.12584658967066)-- (0.211,1.12641470555108)-- (0.212,1.12697987705668)-- (0.213,1.12754211942894)-- (0.214,1.12810144772616)-- (0.215,1.12865787682634)-- (0.216,1.12921142143006)-- (0.217,1.12976209606321)-- (0.218,1.13030991507976)-- (0.219,1.13085489266444)-- (0.22,1.13139704283533)-- (0.221,1.13193637944642)-- (0.222,1.13247291619018)-- (0.223,1.13300666659993)-- (0.224,1.13353764405236)-- (0.225,1.1340658617698)-- (0.226,1.13459133282259)-- (0.227,1.13511407013135)-- (0.228,1.13563408646918)-- (0.229,1.13615139446386)-- (0.23,1.13666600660001)-- (0.231,1.13717793522115)-- (0.232,1.13768719253179)-- (0.233,1.13819379059941)-- (0.234,1.13869774135651)-- (0.235,1.13919905660246)-- (0.236,1.13969774800549)-- (0.237,1.14019382710449)-- (0.238,1.1406873053109)-- (0.239,1.14117819391046)-- (0.24,1.141666504065)-- (0.241,1.14215224681419)-- (0.242,1.1426354330772)-- (0.243,1.14311607365438)-- (0.244,1.14359417922893)-- (0.245,1.14406976036845)-- (0.246,1.14454282752656)-- (0.247,1.14501339104446)-- (0.248,1.1454814611524)-- (0.249,1.14594704797122)-- (0.25,1.14641016151378)-- (0.251,1.14687081168643)-- (0.252,1.14732900829041)-- (0.253,1.14778476102325)-- (0.254,1.14823807948012)-- (0.255,1.14868897315516)-- (0.256,1.14913745144284)-- (0.257,1.1495835236392)-- (0.258,1.15002719894317)-- (0.259,1.15046848645777)-- (0.26,1.15090739519138)-- (0.261,1.15134393405892)-- (0.262,1.15177811188304)-- (0.263,1.1522099373953)-- (0.264,1.15263941923727)-- (0.265,1.15306656596172)-- (0.266,1.15349138603366)-- (0.267,1.15391388783149)-- (0.268,1.15433407964801)-- (0.269,1.1547519696915)-- (0.27,1.15516756608677)-- 
(0.271,1.15558087687613)-- (0.272,1.15599191002044)-- (0.273,1.15640067340004)-- (0.274,1.15680717481575)-- (0.275,1.15721142198984)-- (0.276,1.15761342256688)-- (0.277,1.15801318411478)-- (0.278,1.15841071412557)-- (0.279,1.15880602001639)-- (0.28,1.1591991091303)-- (0.281,1.15958998873717)-- (0.282,1.15997866603453)-- (0.283,1.16036514814837)-- (0.284,1.16074944213401)-- (0.285,1.16113155497685)-- (0.286,1.16151149359322)-- (0.287,1.16188926483111)-- (0.288,1.16226487547097)-- (0.289,1.16263833222648)-- (0.29,1.16300964174523)-- (0.291,1.16337881060953)-- (0.292,1.1637458453371)-- (0.293,1.16411075238174)-- (0.294,1.16447353813411)-- (0.295,1.16483420892235)-- (0.296,1.16519277101279)-- (0.297,1.1655492306106)-- (0.298,1.16590359386046)-- (0.299,1.16625586684721)-- (0.3,1.16660605559647)-- (0.301,1.16695416607527)-- (0.302,1.1673002041927)-- (0.303,1.16764417580046)-- (0.304,1.16798608669351)-- (0.305,1.16832594261062)-- (0.306,1.16866374923499)-- (0.307,1.1689995121948)-- (0.308,1.16933323706377)-- (0.309,1.16966492936171)-- (0.31,1.16999459455511)-- (0.311,1.17032223805761)-- (0.312,1.1706478652306)-- (0.313,1.17097148138368)-- (0.314,1.17129309177522)-- (0.315,1.17161270161285)-- (0.316,1.17193031605396)-- (0.317,1.1722459402062)-- (0.318,1.17255957912796)-- (0.319,1.17287123782882)-- (0.32,1.1731809212701)-- (0.321,1.17348863436522)-- (0.322,1.17379438198025)-- (0.323,1.17409816893431)-- (0.324,1.1744)-- (0.325,1.1746998799039)-- (0.326,1.17499781332696)-- (0.327,1.1752938049049)-- (0.328,1.1755878592287)-- (0.329,1.17587998084495)-- (0.33,1.17617017425628)-- (0.331,1.17645844392177)-- (0.332,1.17674479425733)-- (0.333,1.17702922963611)-- (0.334,1.17731175438886)-- (0.335,1.17759237280432)-- (0.336,1.17787108912961)-- (0.337,1.17814790757057)-- (0.338,1.17842283229213)-- (0.339,1.1786958674187)-- (0.34,1.17896701703446)-- (0.341,1.17923628518379)-- (0.342,1.17950367587153)-- (0.343,1.17976919306337)-- (0.344,1.18003284068617)-- (0.345,1.1802946226283)-- 
(0.346,1.18055454273993)-- (0.347,1.1808126048334)-- (0.348,1.18106881268348)-- (0.349,1.18132317002773)-- (0.35,1.18157568056678)-- (0.351,1.18182634796462)-- (0.352,1.18207517584894)-- (0.353,1.18232216781139)-- (0.354,1.18256732740787)-- (0.355,1.18281065815883)-- (0.356,1.18305216354956)-- (0.357,1.18329184703043)-- (0.358,1.1835297120172)-- (0.359,1.18376576189129)-- (0.36,1.184)-- (0.361,1.18423242965684)-- (0.362,1.18446305414175)-- (0.363,1.18469187670134)-- (0.364,1.18491890054919)-- (0.365,1.18514412886606)-- (0.366,1.18536756480015)-- (0.367,1.18558921146733)-- (0.368,1.1858090719514)-- (0.369,1.18602714930429)-- (0.37,1.18624344654635)-- (0.371,1.18645796666649)-- (0.372,1.18667071262251)-- (0.373,1.18688168734123)-- (0.374,1.18709089371878)-- (0.375,1.18729833462074)-- (0.376,1.18750401288245)-- (0.377,1.18770793130912)-- (0.378,1.18791009267613)-- (0.379,1.18811049972914)-- (0.38,1.18830915518437)-- (0.381,1.18850606172877)-- (0.382,1.1887012220202)-- (0.383,1.18889463868765)-- (0.384,1.18908631433141)-- (0.385,1.18927625152326)-- (0.386,1.18946445280667)-- (0.387,1.18965092069697)-- (0.388,1.18983565768154)-- (0.389,1.19001866621996)-- (0.39,1.19019994874423)-- (0.391,1.19037950765889)-- (0.392,1.19055734534124)-- (0.393,1.19073346414148)-- (0.394,1.19090786638286)-- (0.395,1.19108055436189)-- (0.396,1.19125153034844)-- (0.397,1.19142079658598)-- (0.398,1.19158835529163)-- (0.399,1.1917542086564)-- (0.4,1.19191835884531)-- (0.401,1.19208080799754)-- (0.402,1.19224155822656)-- (0.403,1.19240061162032)-- (0.404,1.19255797024134)-- (0.405,1.19271363612689)-- (0.406,1.1928676112891)-- (0.407,1.19301989771512)-- (0.408,1.19317049736724)-- (0.409,1.19331941218302)-- (0.41,1.19346664407545)-- (0.411,1.19361219493303)-- (0.412,1.19375606661993)-- (0.413,1.19389826097611)-- (0.414,1.19403877981742)-- (0.415,1.19417762493576)-- (0.416,1.19431479809918)-- (0.417,1.19445030105198)-- (0.418,1.19458413551485)-- (0.419,1.19471630318496)-- (0.42,1.1948468057361)-- 
(0.421,1.19497564481877)-- (0.422,1.19510282206028)-- (0.423,1.1952283390649)-- (0.424,1.1953521974139)-- (0.425,1.1954743986657)-- (0.426,1.19559494435597)-- (0.427,1.19571383599768)-- (0.428,1.19583107508128)-- (0.429,1.19594666307471)-- (0.43,1.19606060142357)-- (0.431,1.19617289155115)-- (0.432,1.19628353485857)-- (0.433,1.19639253272482)-- (0.434,1.19649988650692)-- (0.435,1.19660559753992)-- (0.436,1.19670966713706)-- (0.437,1.19681209658981)-- (0.438,1.19691288716795)-- (0.439,1.19701204011969)-- (0.44,1.19710955667171)-- (0.441,1.19720543802924)-- (0.442,1.19729968537617)-- (0.443,1.19739229987507)-- (0.444,1.19748328266733)-- (0.445,1.19757263487318)-- (0.446,1.19766035759175)-- (0.447,1.19774645190121)-- (0.448,1.19783091885875)-- (0.449,1.19791375950072)-- (0.45,1.19799497484265)-- (0.451,1.19807456587931)-- (0.452,1.19815253358481)-- (0.453,1.19822887891262)-- (0.454,1.19830360279566)-- (0.455,1.19837670614633)-- (0.456,1.1984481898566)-- (0.457,1.19851805479802)-- (0.458,1.19858630182183)-- (0.459,1.19865293175894)-- (0.46,1.19871794542007)-- (0.461,1.19878134359571)-- (0.462,1.19884312705624)-- (0.463,1.19890329655193)-- (0.464,1.19896185281302)-- (0.465,1.19901879654974)-- (0.466,1.19907412845235)-- (0.467,1.19912784919121)-- (0.468,1.1991799594168)-- (0.469,1.19923045975977)-- (0.47,1.19927935083097)-- (0.471,1.19932663322148)-- (0.472,1.19937230750266)-- (0.473,1.1994163742262)-- (0.474,1.1994588339241)-- (0.475,1.19949968710876)-- (0.476,1.19953893427299)-- (0.477,1.19957657589003)-- (0.478,1.19961261241357)-- (0.479,1.19964704427782)-- (0.48,1.1996798718975)-- (0.481,1.19971109566786)-- (0.482,1.19974071596474)-- (0.483,1.19976873314455)-- (0.484,1.19979514754434)-- (0.485,1.19981995948176)-- (0.486,1.19984316925515)-- (0.487,1.19986477714347)-- (0.488,1.19988478340642)-- (0.489,1.19990318828436)-- (0.49,1.1999199919984)-- (0.491,1.19993519475035)-- (0.492,1.19994879672278)-- (0.493,1.19996079807901)-- (0.494,1.19997119896313)-- 
(0.495,1.19997999949998)-- (0.496,1.19998719979519)-- (0.497,1.1999927999352)-- (0.498,1.1999967999872)-- (0.499,1.1999991999992)-- (0.5,1.2)-- (0.501,1.1999991999992)-- (0.502,1.1999967999872)-- (0.503,1.1999927999352)-- (0.504,1.19998719979519)-- (0.505,1.19997999949998)-- (0.506,1.19997119896313)-- (0.507,1.19996079807901)-- (0.508,1.19994879672278)-- (0.509,1.19993519475035)-- (0.51,1.1999199919984)-- (0.511,1.19990318828436)-- (0.512,1.19988478340642)-- (0.513,1.19986477714347)-- (0.514,1.19984316925515)-- (0.515,1.19981995948176)-- (0.516,1.19979514754434)-- (0.517,1.19976873314455)-- (0.518,1.19974071596474)-- (0.519,1.19971109566786)-- (0.52,1.1996798718975)-- (0.521,1.19964704427782)-- (0.522,1.19961261241357)-- (0.523,1.19957657589003)-- (0.524,1.19953893427299)-- (0.525,1.19949968710876)-- (0.526,1.1994588339241)-- (0.527,1.1994163742262)-- (0.528,1.19937230750266)-- (0.529,1.19932663322148)-- (0.53,1.19927935083097)-- (0.531,1.19923045975977)-- (0.532,1.1991799594168)-- (0.533,1.19912784919121)-- (0.534,1.19907412845235)-- (0.535,1.19901879654974)-- (0.536,1.19896185281302)-- (0.537,1.19890329655193)-- (0.538,1.19884312705624)-- (0.539,1.19878134359571)-- (0.54,1.19871794542007)-- (0.541,1.19865293175894)-- (0.542,1.19858630182183)-- (0.543,1.19851805479802)-- (0.544,1.1984481898566)-- (0.545,1.19837670614633)-- (0.546,1.19830360279566)-- (0.547,1.19822887891262)-- (0.548,1.19815253358481)-- (0.549,1.19807456587931)-- (0.55,1.19799497484265)-- (0.551,1.19791375950072)-- (0.552,1.19783091885875)-- (0.553,1.19774645190121)-- (0.554,1.19766035759175)-- (0.555,1.19757263487318)-- (0.556,1.19748328266733)-- (0.557,1.19739229987507)-- (0.558,1.19729968537617)-- (0.559,1.19720543802924)-- (0.56,1.19710955667171)-- (0.561,1.19701204011969)-- (0.562,1.19691288716795)-- (0.563,1.19681209658981)-- (0.564,1.19670966713706)-- (0.565,1.19660559753992)-- (0.566,1.19649988650692)-- (0.567,1.19639253272482)-- (0.568,1.19628353485857)-- (0.569,1.19617289155115)-- 
(0.57,1.19606060142357)-- (0.571,1.19594666307471)-- (0.572,1.19583107508128)-- (0.573,1.19571383599768)-- (0.574,1.19559494435597)-- (0.575,1.1954743986657)-- (0.576,1.1953521974139)-- (0.577,1.1952283390649)-- (0.578,1.19510282206028)-- (0.579,1.19497564481877)-- (0.58,1.1948468057361)-- (0.581,1.19471630318496)-- (0.582,1.19458413551485)-- (0.583,1.19445030105198)-- (0.584,1.19431479809918)-- (0.585,1.19417762493576)-- (0.586,1.19403877981742)-- (0.587,1.19389826097611)-- (0.588,1.19375606661993)-- (0.589,1.19361219493303)-- (0.59,1.19346664407545)-- (0.591,1.19331941218302)-- (0.592,1.19317049736724)-- (0.593,1.19301989771512)-- (0.594,1.1928676112891)-- (0.595,1.19271363612689)-- (0.596,1.19255797024134)-- (0.597,1.19240061162032)-- (0.598,1.19224155822656)-- (0.599,1.19208080799754)-- (0.6,1.19191835884531)-- (0.601,1.1917542086564)-- (0.602,1.19158835529163)-- (0.603,1.19142079658598)-- (0.604,1.19125153034844)-- (0.605,1.19108055436189)-- (0.606,1.19090786638286)-- (0.607,1.19073346414148)-- (0.608,1.19055734534124)-- (0.609,1.19037950765889)-- (0.61,1.19019994874423)-- (0.611,1.19001866621996)-- (0.612,1.18983565768154)-- (0.613,1.18965092069697)-- (0.614,1.18946445280667)-- (0.615,1.18927625152326)-- (0.616,1.18908631433141)-- (0.617,1.18889463868765)-- (0.618,1.1887012220202)-- (0.619,1.18850606172877)-- (0.62,1.18830915518437)-- (0.621,1.18811049972914)-- (0.622,1.18791009267613)-- (0.623,1.18770793130912)-- (0.624,1.18750401288245)-- (0.625,1.18729833462074)-- (0.626,1.18709089371878)-- (0.627,1.18688168734123)-- (0.628,1.18667071262251)-- (0.629,1.18645796666649)-- (0.63,1.18624344654635)-- (0.631,1.18602714930429)-- (0.632,1.1858090719514)-- (0.633,1.18558921146733)-- (0.634,1.18536756480015)-- (0.635,1.18514412886606)-- (0.636,1.18491890054919)-- (0.637,1.18469187670134)-- (0.638,1.18446305414175)-- (0.639,1.18423242965684)-- (0.64,1.184)-- (0.641,1.18376576189129)-- (0.642,1.1835297120172)-- (0.643,1.18329184703043)-- (0.644,1.18305216354956)-- 
(0.645,1.18281065815883)-- (0.646,1.18256732740787)-- (0.647,1.18232216781139)-- (0.648,1.18207517584894)-- (0.649,1.18182634796462)-- (0.65,1.18157568056678)-- (0.651,1.18132317002773)-- (0.652,1.18106881268348)-- (0.653,1.1808126048334)-- (0.654,1.18055454273993)-- (0.655,1.1802946226283)-- (0.656,1.18003284068617)-- (0.657,1.17976919306337)-- (0.658,1.17950367587153)-- (0.659,1.17923628518379)-- (0.66,1.17896701703446)-- (0.661,1.1786958674187)-- (0.662,1.17842283229213)-- (0.663,1.17814790757057)-- (0.664,1.17787108912961)-- (0.665,1.17759237280432)-- (0.666,1.17731175438886)-- (0.667,1.17702922963611)-- (0.668,1.17674479425733)-- (0.669,1.17645844392177)-- (0.67,1.17617017425628)-- (0.671,1.17587998084495)-- (0.672,1.1755878592287)-- (0.673,1.1752938049049)-- (0.674,1.17499781332696)-- (0.675,1.1746998799039)-- (0.676,1.1744)-- (0.677,1.17409816893431)-- (0.678,1.17379438198025)-- (0.679,1.17348863436522)-- (0.68,1.1731809212701)-- (0.681,1.17287123782882)-- (0.682,1.17255957912796)-- (0.683,1.1722459402062)-- (0.684,1.17193031605396)-- (0.685,1.17161270161285)-- (0.686,1.17129309177522)-- (0.687,1.17097148138368)-- (0.688,1.1706478652306)-- (0.689,1.17032223805761)-- (0.69,1.16999459455511)-- (0.691,1.16966492936171)-- (0.692,1.16933323706377)-- (0.693,1.1689995121948)-- (0.694,1.16866374923499)-- (0.695,1.16832594261062)-- (0.696,1.16798608669351)-- (0.697,1.16764417580046)-- (0.698,1.1673002041927)-- (0.699,1.16695416607527)-- (0.7,1.16660605559647)-- (0.701,1.16625586684721)-- (0.702,1.16590359386046)-- (0.703,1.1655492306106)-- (0.704,1.16519277101279)-- (0.705,1.16483420892235)-- (0.706,1.16447353813411)-- (0.707,1.16411075238174)-- (0.708,1.1637458453371)-- (0.709,1.16337881060953)-- (0.71,1.16300964174523)-- (0.711,1.16263833222648)-- (0.712,1.16226487547097)-- (0.713,1.16188926483111)-- (0.714,1.16151149359322)-- (0.715,1.16113155497685)-- (0.716,1.16074944213401)-- (0.717,1.16036514814837)-- (0.718,1.15997866603453)-- (0.719,1.15958998873717)-- 
(0.72,1.1591991091303)-- (0.721,1.15880602001639)-- (0.722,1.15841071412557)-- (0.723,1.15801318411478)-- (0.724,1.15761342256688)-- (0.725,1.15721142198984)-- (0.726,1.15680717481575)-- (0.727,1.15640067340004)-- (0.728,1.15599191002044)-- (0.729,1.15558087687613)-- (0.73,1.15516756608677)-- (0.731,1.1547519696915)-- (0.732,1.15433407964801)-- (0.733,1.15391388783149)-- (0.734,1.15349138603366)-- (0.735,1.15306656596172)-- (0.736,1.15263941923727)-- (0.737,1.1522099373953)-- (0.738,1.15177811188304)-- (0.739,1.15134393405892)-- (0.74,1.15090739519138)-- (0.741,1.15046848645777)-- (0.742,1.15002719894317)-- (0.743,1.1495835236392)-- (0.744,1.14913745144284)-- (0.745,1.14868897315516)-- (0.746,1.14823807948012)-- (0.747,1.14778476102325)-- (0.748,1.14732900829041)-- (0.749,1.14687081168643)-- (0.75,1.14641016151378)-- (0.751,1.14594704797122)-- (0.752,1.1454814611524)-- (0.753,1.14501339104446)-- (0.754,1.14454282752656)-- (0.755,1.14406976036845)-- (0.756,1.14359417922893)-- (0.757,1.14311607365438)-- (0.758,1.1426354330772)-- (0.759,1.14215224681419)-- (0.76,1.141666504065)-- (0.761,1.14117819391046)-- (0.762,1.1406873053109)-- (0.763,1.14019382710449)-- (0.764,1.13969774800549)-- (0.765,1.13919905660246)-- (0.766,1.13869774135651)-- (0.767,1.13819379059941)-- (0.768,1.13768719253179)-- (0.769,1.13717793522115)-- (0.77,1.13666600660001)-- (0.771,1.13615139446386)-- (0.772,1.13563408646918)-- (0.773,1.13511407013135)-- (0.774,1.13459133282259)-- (0.775,1.1340658617698)-- (0.776,1.13353764405236)-- (0.777,1.13300666659993)-- (0.778,1.13247291619018)-- (0.779,1.13193637944642)-- (0.78,1.13139704283533)-- (0.781,1.13085489266444)-- (0.782,1.13030991507976)-- (0.783,1.12976209606321)-- (0.784,1.12921142143006)-- (0.785,1.12865787682634)-- (0.786,1.12810144772616)-- (0.787,1.12754211942894)-- (0.788,1.12697987705668)-- (0.789,1.12641470555108)-- (0.79,1.12584658967066)-- (0.791,1.12527551398776)-- (0.792,1.12470146288552)-- (0.793,1.12412442055482)-- 
(0.794,1.12354437099106)-- (0.795,1.12296129799095)-- (0.796,1.12237518514923)-- (0.797,1.12178601585526)-- (0.798,1.12119377328958)-- (0.799,1.12059844042041)-- (0.8,1.12)-- (0.801,1.11939843456097)-- (0.802,1.11879372641255)-- (0.803,1.1181858576367)-- (0.804,1.11757481008418)-- (0.805,1.11696056537052)-- (0.806,1.11634310487191)-- (0.807,1.11572240972094)-- (0.808,1.11509846080233)-- (0.809,1.11447123874847)-- (0.81,1.11384072393493)-- (0.811,1.1132068964758)-- (0.812,1.11256973621898)-- (0.813,1.11192922274131)-- (0.814,1.11128533534364)-- (0.815,1.11063805304566)-- (0.816,1.1099873545808)-- (0.817,1.10933321839078)-- (0.818,1.10867562262025)-- (0.819,1.1080145451111)-- (0.82,1.10734996339678)-- (0.821,1.10668185469636)-- (0.822,1.10601019590857)-- (0.823,1.10533496360555)-- (0.824,1.10465613402654)-- (0.825,1.10397368307141)-- (0.826,1.10328758629393)-- (0.827,1.10259781889498)-- (0.828,1.10190435571551)-- (0.829,1.10120717122937)-- (0.83,1.10050623953589)-- (0.831,1.09980153435231)-- (0.832,1.09909302900603)-- (0.833,1.09838069642656)-- (0.834,1.09766450913738)-- (0.835,1.09694443924748)-- (0.836,1.09622045844269)-- (0.837,1.09549253797685)-- (0.838,1.09476064866261)-- (0.839,1.09402476086207)-- (0.84,1.09328484447717)-- (0.841,1.09254086893971)-- (0.842,1.09179280320118)-- (0.843,1.09104061572227)-- (0.844,1.09028427446212)-- (0.845,1.08952374686716)-- (0.846,1.08875899985974)-- (0.847,1.08798999982638)-- (0.848,1.08721671260566)-- (0.849,1.08643910347577)-- (0.85,1.08565713714171)-- (0.851,1.08487077772211)-- (0.852,1.08407998873557)-- (0.853,1.0832847330867)-- (0.854,1.08248497305167)-- (0.855,1.08168067026333)-- (0.856,1.08087178569589)-- (0.857,1.08005827964908)-- (0.858,1.07924011173182)-- (0.859,1.07841724084546)-- (0.86,1.07758962516636)-- (0.861,1.07675722212799)-- (0.862,1.07591998840244)-- (0.863,1.07507787988132)-- (0.864,1.07423085165605)-- (0.865,1.07337885799747)-- (0.866,1.07252185233482)-- (0.867,1.07165978723396)-- (0.868,1.07079261437491)-- 
(0.869,1.0699202845286)-- (0.87,1.0690427475328)-- (0.871,1.0681599522673)-- (0.872,1.06727184662811)-- (0.873,1.06637837750088)-- (0.874,1.06547949073328)-- (0.875,1.06457513110646)-- (0.876,1.06366524230547)-- (0.877,1.06274976688857)-- (0.878,1.06182864625552)-- (0.879,1.06090182061458)-- (0.88,1.05996922894835)-- (0.881,1.05903080897839)-- (0.882,1.05808649712839)-- (0.883,1.05713622848599)-- (0.884,1.05617993676321)-- (0.885,1.05521755425519)-- (0.886,1.05424901179749)-- (0.887,1.05327423872159)-- (0.888,1.05229316280867)-- (0.889,1.05130571024153)-- (0.89,1.05031180555459)-- (0.891,1.0493113715818)-- (0.892,1.04830432940245)-- (0.893,1.04729059828469)-- (0.894,1.04627009562673)-- (0.895,1.04524273689551)-- (0.896,1.04420843556274)-- (0.897,1.04316710303822)-- (0.898,1.04211864860023)-- (0.899,1.04106297932283)-- (0.9,1.04)-- (0.901,1.03892961306628)-- (0.902,1.03785171851387)-- (0.903,1.03676621380594)-- (0.904,1.03567299378588)-- (0.905,1.03457195058233)-- (0.906,1.03346297350972)-- (0.907,1.03234594896404)-- (0.908,1.0312207603136)-- (0.909,1.03008728778444)-- (0.91,1.02894540834007)-- (0.911,1.02779499555521)-- (0.912,1.02663591948321)-- (0.913,1.02546804651657)-- (0.914,1.02429123924041)-- (0.915,1.02310535627815)-- (0.916,1.0219102521291)-- (0.917,1.02070577699734)-- (0.918,1.01949177661133)-- (0.919,1.01826809203363)-- (0.92,1.01703455946001)-- (0.921,1.01579101000737)-- (0.922,1.01453726948948)-- (0.923,1.01327315817983)-- (0.924,1.01199849056066)-- (0.925,1.01071307505705)-- (0.926,1.00941671375513)-- (0.927,1.00810920210313)-- (0.928,1.00679032859397)-- (0.929,1.00545987442807)-- (0.93,1.00411761315477)-- (0.931,1.00276331029059)-- (0.932,1.00139672291276)-- (0.933,1.00001759922567)-- (0.934,0.998625678098276)-- (0.935,0.997220688569937)-- (0.936,0.995802349321963)-- (0.937,0.994370368112014)-- (0.938,0.992924441168039)-- (0.939,0.99146425253817)-- (0.94,0.989989473392607)-- (0.941,0.988499761273058)-- (0.942,0.986994759284853)-- 
(0.943,0.985474095226261)-- (0.944,0.983937380648959)-- (0.945,0.982384209842848)-- (0.946,0.980814158737639)-- (0.947,0.979226783712703)-- (0.948,0.977621620305637)-- (0.949,0.97599818180879)-- (0.95,0.974355957741627)-- (0.951,0.972694412185224)-- (0.952,0.971012981963359)-- (0.953,0.969311074652546)-- (0.954,0.967588066400923)-- (0.955,0.965843299533023)-- (0.956,0.964076079914167)-- (0.957,0.962285674044261)-- (0.958,0.960471305846248)-- (0.959,0.958632153109009)-- (0.96,0.956767343538124)-- (0.961,0.95487595036028)-- (0.962,0.952956987418032)-- (0.963,0.951009403680698)-- (0.964,0.949032077084096)-- (0.965,0.947023807595913)-- (0.966,0.944983309384219)-- (0.967,0.942909201943052)-- (0.968,0.9408)-- (0.969,0.938654101994856)-- (0.97,0.936469776873856)-- (0.971,0.934245148888144)-- (0.972,0.931978180014728)-- (0.973,0.929666649528705)-- (0.974,0.927308130141009)-- (0.975,0.924899959967968)-- (0.976,0.922439209406138)-- (0.977,0.919922641732077)-- (0.978,0.917346665909177)-- (0.979,0.914707279629499)-- (0.98,0.912)-- (0.981,0.909219778428635)-- (0.982,0.90636089506957)-- (0.983,0.903416826483895)-- (0.984,0.900380077704692)-- (0.985,0.897241966249146)-- (0.986,0.893992340113437)-- (0.987,0.890619203262885)-- (0.988,0.887108208568424)-- (0.989,0.883441955873529)-- (0.99,0.87959899496853)-- (0.991,0.875552365945747)-- (0.992,0.871267383844224)-- (0.993,0.866698125910703)-- (0.994,0.861781550644185)-- (0.995,0.856426943918664)-- (0.996,0.850495148281791)-- (0.997,0.843752028524401)-- (0.998,0.835741292645902)-- (0.999,0.825285569006847)-- (1,0.8);
\end{tikzpicture}
\caption{\footnotesize LPP with Bernoulli-distributed vertex weights: $\mathbb{P}(\omega_0=1)=1-\mathbb{P}(\omega_0=0)=0.8$. The bottom blue line is a simulation of the curve $t\mapsto g(te_1+(1-t)e_2)$, $t\in[0,1]$. The top red curve is $t\mapsto 0.8+\sqrt{0.8\times0.2\,t(1-t)}$, according to formula \eqref{shape:exp}.
The two curves are different: the top red one is strictly concave while the bottom blue one is flat on $[t_0,1-t_0]$. But their asymptotics match near $t=0$ and $t=1$.}
\label{flat:fig}
\end{figure}

\begin{theorem}
\label{flat:thm}
{\rm\cite{Dur-Lig-81}}
Suppose $\mathbb{E}[\abs{\omega_0}^{2+\varepsilon}]<\infty$ for some $\varepsilon>0$ and $\mathbb{P}\{\omega_x\le 1\}=1$. There exists a critical value $p_c\in(0,1)$ such that if $p=\mathbb{P}\{\omega_0=1\}>p_c$, then there exists $t_0\in(0,1/2)$ such that $g(\xi)=\abs{\xi}_1=\xi\cdot(e_1+e_2)$ for all $\xi\in\mathbb{R}_+^2$ with $\xi\cdot e_1/\abs{\xi}_1\in[t_0,1-t_0]$.
\end{theorem}

Here is a heuristic argument to help understand why the above result holds. When the probability $p$ is large, one can show that there is an infinite up-right path $x_{0,\infty}$ starting from the origin $x_0=0$ such that there exists $n_0$ with $\omega_{x_n}=1$ for all $n\ge n_0$. Furthermore, one can guarantee that $x_n\cdot e_1/n\to t_0$ for some $t_0\in(0,1/2)$ as $n\to\infty$. The shape theorem then implies that $g(t_0,1-t_0)=1$. By concavity of $g$ we have $g(t,1-t)=1$ for all $t\in[t_0,1-t_0]$, and the claim of the theorem follows from the homogeneity of $g$.

Before we close this section, it is worth noting that even though no closed formulas are known for $g$ in general, some variational characterizations of $g$ do exist in the general-weights setting. See Section 2 of \cite{ch:Seppalainen}.

\section{Busemann functions}\label{sec:Bus}
How does one prove \eqref{shape:exp}? Can one get more information on $g$ than what is given by Theorem \ref{th:shape}? One way to approach such questions is by taking a closer look at the shape function. In precise terms, abbreviate
\[\mathcal U=\{(t,1-t):t\in[0,1]\}\quad\text{and}\quad\Uset^\circ=\{(t,1-t):0<t<1\}\]
and fix a $\xi\in\Uset^\circ$.
Consider the collection of random variables
\[\{G_{0,\fl{n\xi}}-G_{0,\fl{n\xi}+z}:\abs{z}_1\le M\},\]
for some $M>0$. In other words, we want to examine the passage times to points in the vicinity of $n\xi$, relative to the passage time to $n\xi$ itself. Presumably, as $n\to\infty$, (the distribution of) this vector of random variables converges weakly to some limit, and this limit carries useful information about the large-scale behavior of the system ``in direction $\xi$''. Since the point of reference is moving, there is really no hope of the convergence being almost sure. However, we can change our frame of reference and view things from $n\xi$ by considering
\[\{G_{-\fl{n\xi},0}-G_{-\fl{n\xi},z}:\abs{z}_1\le M\},\]
or equivalently
\[\{G_{-\fl{n\xi},0}-G_{-\fl{n\xi},-z}:\abs{z}_1\le M\}.\]
Since $\{\omega_{-x}:x\in\mathbb{Z}^2\}$ has the same distribution as $\{\omega_x:x\in\mathbb{Z}^2\}$, the above has the same law as
\begin{align}\label{(*)}
\{G_{0,\fl{n\xi}}-G_{z,\fl{n\xi}}:\abs{z}_1\le M\}.
\end{align}
The advantage now is that there is a chance this random vector does converge to an almost sure limit. This is indeed the case under a mild regularity assumption on the shape function $g$. One little technicality: since we left out the weight $\omega_x$ when we defined $G_{x,y}$ in \eqref{G:def}, going through the above reflection argument leads us to slightly change the definition to
\begin{align}\label{G:def2}
G_{x,y}=\max\Big\{\sum_{i=0}^{n-1}\omega_{x_i}:x_{0,n}\text{ up-right, $x_0=x$, $x_n=y$, $n=\abs{y-x}_1$}\Big\},
\end{align}
i.e.\ we now leave out the last weight $\omega_y$ instead. We will use this definition in the rest of the chapter. The reader should note, though, that since we are interested in the large-scale behavior of the system, it is really immaterial whether we leave out or include the first or last weights.

Recall that $g$ is a concave function.
Then the set
\[\mathcal{C}=\{x\in\mathbb{R}^2_+:g(x)\ge1\}\]
is convex. It can then be decomposed into a union of closed faces (see Section 17 of \cite{Roc-70}). For a given $\xi\in\Uset^\circ$ the point $\xi/g(\xi)$ is on the relative boundary of $\mathcal{C}$. Let
\[\mathcal U_\xi=\Big\{\frac{\zeta}{\abs{\zeta}_1}:\zeta\text{ belongs to the closed face containing $\xi/g(\xi)$}\Big\}.\]
If $g$ is differentiable, then $\mathcal U_\xi$ is simply the largest connected subset of $\Uset^\circ$ containing $\xi$ on which $g$ is affine. The set $\mathcal U_\xi$ cannot contain $e_1$ or $e_2$ because we chose $\xi$ in the relative interior of $\mathcal U$ and Theorem \ref{th:Martin} prevents $g$ from being linear on a neighborhood of $e_1$ or $e_2$. Let $\mathcal U_{e_1}=\{e_1\}$ and $\mathcal U_{e_2}=\{e_2\}$. Clearly $\mathcal U=\cup_{\xi\in\mathcal U}\mathcal U_\xi$. When $g$ is differentiable, we have that for all $\xi,\zeta\in\mathcal U$, either $\mathcal U_\xi=\mathcal U_\zeta$ or $\mathcal U_\xi\cap\mathcal U_\zeta=\varnothing$. We will say that a sequence $x_n$ is asymptotically directed into a subset $A\subset\mathcal U$ if all the limit points of $x_n/n$ are inside $A$.

We are ready to state the theorem about the limits of the gradients in \eqref{(*)}. The proof requires a detour into queuing theory and is thus deferred to Section \ref{fixed-pt}.

\begin{theorem}\label{Bus:thm}
{\rm\cite{Geo-Ras-Sep-17-ptrf-1}}
Assume $\mathbb{P}\{\omega_0\ge c\}=1$ for some $c\in\mathbb{R}$ and $\mathbb{E}[\abs{\omega_0}^{2+\varepsilon}]<\infty$ for some $\varepsilon>0$. Assume $g$ is differentiable on $(0,\infty)^2$.
Then, for each $\xi\in\Uset^\circ$ the {\rm(}random{\rm)} limit
\begin{align}\label{B-limit}
B^\xi(\omega,x,y)=\lim_{n\to\infty}(G_{x,x_n}-G_{y,x_n})
\end{align}
exists almost surely and in $L^1$ for all $x,y\in\mathbb{Z}^2$ and any sequence $x_n\in\mathbb{Z}^2$ that is asymptotically directed into $\mathcal U_\xi$. Furthermore,
\begin{align}\label{B-g}
\begin{split}
&\text{for }i\in\{1,2\}\quad\mathbb{E}[B^\xi(0,e_i)]=e_i\cdot \nabla g(\xi)\\
&\text{and}\quad g(\xi)=\mathbb{E}[B^\xi(0,e_1)]\,\xi\cdot e_1+\mathbb{E}[B^\xi(0,e_2)]\,\xi\cdot e_2.
\end{split}
\end{align}
If $\xi,\zeta\in\Uset^\circ$ are such that $\mathcal U_\xi=\mathcal U_\zeta$, then $B^\xi\equiv B^\zeta$ almost surely.
\end{theorem}

(It is customary in probability theory to drop the dependence on $\omega$ from the notation of random variables, e.g.\ to write $B^\xi(x,y)$ instead of $B^\xi(\omega,x,y)$.)

The distribution of $B^\xi(0,e_i)$, $i\in\{1,2\}$, is known in the solvable cases. Formula \eqref{shape:exp} then follows from the second equation in \eqref{B-g}. See the end of Section \ref{fixed-pt} for more details.

The condition that the weights are bounded below by a deterministic constant is not really necessary. We assumed it in \cite{Geo-Ras-Sep-17-ptrf-1} because we used queuing theory for the proof, as we will see in Section \ref{fixed-pt}, and there the weights $\omega_x$ are service times and have to be nonnegative. The extension to weights that are bounded below is immediate. However, the math in the proof works just as well for general weights, even though interpreting them as service times would then not make sense.

The differentiability assumption is more serious. Although still an open question, differentiability is believed to hold in general. It can be directly verified from the explicit formula when the weights are either exponentially or geometrically distributed.
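To make the last remark concrete, here is a sketch of the computation in the exponential case. We take as given (without proof here) that for i.i.d.\ mean-one exponential weights the shape function is $g(\xi)=(\sqrt{\xi_1}+\sqrt{\xi_2})^2$, writing $\xi_i=\xi\cdot e_i$, and that $B^\xi(0,e_1)$ is exponentially distributed with mean $1+\sqrt{\xi_2/\xi_1}$ (symmetrically for $e_2$). The first equation in \eqref{B-g} is then consistent:
\begin{align*}
e_1\cdot\nabla g(\xi)=\frac{\sqrt{\xi_1}+\sqrt{\xi_2}}{\sqrt{\xi_1}}=1+\sqrt{\frac{\xi_2}{\xi_1}}=\mathbb{E}[B^\xi(0,e_1)],
\end{align*}
and similarly for $e_2$, while the second equation recovers the shape function:
\begin{align*}
\mathbb{E}[B^\xi(0,e_1)]\,\xi_1+\mathbb{E}[B^\xi(0,e_2)]\,\xi_2
=\xi_1+\sqrt{\xi_1\xi_2}+\xi_2+\sqrt{\xi_1\xi_2}
=\bigl(\sqrt{\xi_1}+\sqrt{\xi_2}\bigr)^2=g(\xi).
\end{align*}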
In the case when a flat segment occurs (see Theorem \ref{flat:thm}), it is known that $g$ is differentiable at the two edges of the segment. See \cite{Auf-Dam-13} for standard first-passage percolation and \cite{Geo-Ras-Sep-17-ptrf-1} for directed LPP. It is worth noting that when $\{\omega_x\}$ are only ergodic (a generalization of being i.i.d.), the limiting shape can have corners and linear segments, and can even be a polygon with finitely many edges. See \cite{Hag-Mee-95}. The limits $B^\xi$ are called \index{Busemann function} {\sl Busemann functions}. The name is borrowed from metric geometry, due to a connection between Busemann functions and geodesics that is revealed in Section \ref{geodesics}. The first equation in \eqref{B-g} can be understood as follows: $B^\xi$ is a (microscopic) gradient of passage times to ``far away points in direction $\xi$''. The shape function at $\xi$ is the large scale (macroscopic) limit of passage times to these far away points. \eqref{B-g} says that the mean of the microscopic gradient is exactly the macroscopic gradient. The second equation in \eqref{B-g} is simply a consequence of the first one, since differentiating $g(t\xi)=tg(\xi)$ in $t$ gives \begin{align}\label{euler} g(\xi)=\xi\cdot \nabla g(\xi). \end{align} Let us record right away a few important properties of the processes $B^\xi$. \index{cocycle} {\bf 1. Cocycle:} \begin{align}\label{cocycle} B^\xi(x,y)+B^\xi(y,z)=B^\xi(x,z),\quad\text{almost surely and for all }x,y,z\in\mathbb{Z}^2. \end{align} This follows immediately from \eqref{B-limit}. \index{recovery} {\bf 2. Recovery:} \begin{align}\label{recovery} \omega_x=\min(B^\xi(x,x+e_1),B^\xi(x,x+e_2))\quad\text{almost surely and for all }x\in\mathbb{Z}^2.
\end{align} To see that this holds, start with an induction equation similar to \eqref{G:ind} (but recall that passage times are now defined by \eqref{G:def2}): \[G_{x,y}=\omega_x+\max(G_{x+e_1,y},G_{x+e_2,y})\quad\text{for all $x$ and $y\ge x+e_i$, $i\in\{1,2\}$}.\] Rewrite this as \begin{align}\label{G:ind2} \omega_x=\min(G_{x,y}-G_{x+e_1,y},G_{x,y}-G_{x+e_2,y}). \end{align} Now set $y=\fl{n\xi}$ and take $n\to\infty$. {\bf 3. Stationarity:} To express this, define the group action of $\mathbb{Z}^2$ on $\Omega=\mathbb{R}^{\mathbb{Z}^2}$ that consists of shifting the weights: for $z\in\mathbb{Z}^2$ and $\omega\in\Omega$, $T_z\omega\in\Omega$ is such that $(T_z\omega)_x=\omega_{x+z}$. In words, $T_z\omega$ is simply the weight configuration obtained from $\omega$ by placing the origin at $z$. Then we have \begin{align}\label{shift} B^\xi(T_z\omega,x,y)=B^\xi(\omega,x+z,y+z)\quad\text{almost surely and for all }x,y,z\in\mathbb{Z}^2. \end{align} This follows directly from \eqref{B-limit} and the fact that $G_{x,y}(T_z\omega)=G_{x+z,y+z}(\omega)$. (All these equations are just saying that fixing the lattice and shifting the weight configuration is the same thing as fixing the weight configuration and shifting the lattice.) {\bf 4. Monotonicity:} We record the last property as a lemma. \begin{lemma}\label{lm:monotone} {\rm\cite{Geo-Ras-Sep-17-ptrf-1}} Make the same assumptions as in Theorem \ref{Bus:thm}. Fix $\xi,\zeta\in\Uset^\circ$ with $\xi_1<\zeta_1$. Then, almost surely and for all $x\in\mathbb{Z}^2$, \begin{align}\label{monotone} B^\xi(x,x+e_1)\ge B^\zeta(x,x+e_1)\quad\text{and}\quad B^\xi(x,x+e_2)\le B^\zeta(x,x+e_2).
\end{align} \end{lemma} \begin{proof} The claim follows from a monotonicity of the passage times $G_{x,y}$ themselves: for all $x\in\mathbb{Z}^2$ and $y,z\in\mathbb{Z}^2_+$ such that $x+e_1+e_2\le y$, $x+e_1+e_2\le z$, $\abs{y}_1=\abs{z}_1$, and $y\cdot e_1<z\cdot e_1$, we have \begin{align}\label{crossing} G_{x,y}-G_{x+e_1,y}\ge G_{x,z}-G_{x+e_1,z}\quad\text{and}\quad G_{x,y}-G_{x+e_2,y}\le G_{x,z}-G_{x+e_2,z}. \end{align} Indeed, once this is proved, set $y=y_n$ and $z=z_n$ with $y_n,z_n\in\mathbb{Z}_+^2$ any two sequences with $\abs{y_n}_1=\abs{z_n}_1=n$, $y_n/n\to\xi$, and $z_n/n\to\zeta$, then send $n\to\infty$. Inequalities \eqref{crossing} are due to paths crossing. Indeed, a geodesic from $x$ to $z$ must cross a geodesic from $x+e_1$ to $y$. Let $u$ be the first point where the two paths cross, i.e.\ $u\cdot(e_1+e_2)$ is the smallest possible. See Figure \ref{monotone:fig}. \begin{figure} \begin{center} \begin{tikzpicture}[ >=latex, scale=0.8] \draw[line width=1pt, sussexp](1.08,0)--(1.08,0.91)--(6,0.91)--(6,1.91)--(7,1.91)--(7,3); \draw[line width=1pt, sussexp](0.09,0)--(0.09,1.91)--(5,1.91)--(5,3)--(8,3); \draw[line width=1pt, sussexg](1,0)--(1,1)--(3,1)--(3,2)--(4,2)--(4,4)--(6,4)--(6,5); \draw[line width=1pt, sussexg](0,0)--(0,2)--(2,2)--(2,3)--(4,3); \draw[fill =black](0.04,0)circle(1.2mm); \draw(0.04,-0.4)node{$x$}; \draw[fill =black](1.04,0)circle(1.2mm); \draw(1.4,-0.4)node{$x+e_1$}; \draw[fill =black](8,3)circle(1.2mm); \draw(8.3,3.3)node{$z$}; \draw[fill =black](6,5)circle(1.2mm); \draw(6.35,5.35)node{$y$}; \draw[fill =red](3,1.95)circle(1.2mm); \draw(3,2.35)node{$u$}; \end{tikzpicture} \end{center} \caption{\small The paths crossing trick: a path from $x$ to $z$ must cross a path from $x+e_1$ to $y$.} \label{monotone:fig}
\end{figure} Then superadditivity implies that \[G_{x,u}+G_{u,y} \le G_{x,y}\quad\text{and}\quad G_{x+e_1,u}+G_{u,z} \le G_{x+e_1,z}.\] Add the two inequalities and rearrange to get \[G_{x,y}-G_{x+e_1,u}-G_{u,y} \ge G_{x,u}+G_{u,z}-G_{x+e_1,z}.\] Use the fact that $u$ is on both geodesics to add the passage times and get the desired inequality: \[G_{x,y}-G_{x+e_1,y}=G_{x,y}-G_{x+e_1,u}-G_{u,y} \ge G_{x,u}+G_{u,z}-G_{x+e_1,z}=G_{x,z} - G_{x+e_1,z}.\] A similar proof works for the $e_2$-gradients and, as we showed above, the claim of the lemma follows. This ``crossing trick'' has been used profitably in planar percolation, and goes back at least to \cite{Alm-98, Alm-Wie-99}. \end{proof} \section{Queuing fixed points}\label{fixed-pt} We now sketch how Theorem \ref{Bus:thm} is proved. By adding a constant to the weights, if necessary, we can assume they are nonnegative. Then we can use the queuing terminology. Let us note one small modification, though. Due to our new definition \eqref{G:def2}, which replaced \eqref{G:def}, now $G_{0,(k,\ell)}$ is the time when customer $k$ enters service at station $\ell$, and $G_{0,(k,\ell)}+\omega_{k,\ell}$ is the time when customer $k$ departs station $\ell$ and joins the end of the queue at station $\ell+1$. We are looking for stationary versions of the queuing process described in Section \ref{queues}. For this, customers need to have been arriving for a long time. Thus, indexing them by $\mathbb{N}$ will not do and we need instead to have customers indexed by $\mathbb{Z}$. Furthermore, one cannot track the customers' arrival times at the queues (since the process started in the far past) and the right thing to do is to consider \index{inter-arrival} {\sl inter-arrival} times.
Thus, the development begins with a process $\{A_{n,0}:n\in\mathbb{Z}\}$ that records the time between the arrivals of customers number $n$ and $n+1$ at queue $0$. This process is assumed to be stationary, i.e.\ the distribution of $\{A_{n+m,0}:n\in\mathbb{Z}\}$ does not depend on $m\in\mathbb{Z}$. We also assume it is \index{ergodic} {\sl ergodic}. (Stationary measures form a convex set whose extreme points are said to be ergodic. A process is ergodic if its distribution is ergodic.) One special case is a constant inter-arrival process: $A_{n,0}=\alpha$ for all $n\in\mathbb{Z}$ and some $\alpha\in\mathbb{R}$. We are also given service times $\{S_{n,k}:n\in\mathbb{Z},k\in\mathbb{Z}_+\}$. They represent the time it takes to serve customer $n$ at station $k$, once the customer is first in line at that station. These service times have the same joint distribution $\mathbb{P}$ as the weights in our directed LPP model. In particular, they are i.i.d. Service times are also independent of the inter-arrival process. In order for the system to be stable, we need to have customers served faster than they arrive. Hence, we require that \begin{align}\label{m<A} m_0=\mathbb{E}[S_{0,0}]<\mathbb{E}[A_{0,0}]<\infty. \end{align} Given the inter-arrival and service times, define waiting times at station $0$ by \begin{align}\label{W:def} W_{n,0}=\Big(\sup_{j\le n-1}\sum_{i=j}^{n-1}(S_{i,0}-A_{i,0})\Big)^+. \end{align} $W_{n,0}$ is the time customer $n$ waits at station $0$ before their service starts. See further down for an explanation. The process $\{S_{i,0}-A_{i,0}:i\in\mathbb{Z}\}$ is ergodic. This is because one can show that the product measure of an ergodic process with an i.i.d.\ one is ergodic. By the ergodic theorem, the sample mean $(n-j)^{-1}\sum_{i=j}^{n-1}(S_{i,0}-A_{i,0})$ converges to the population mean $\mathbb{E}[S_{0,0}-A_{0,0}]$, which by \eqref{m<A} is negative.
Therefore, almost surely, as $j\to-\infty$ the sum $\sum_{i=j}^{n-1}(S_{i,0}-A_{i,0})$ goes to $-\infty$ and $0\le W_{n,0}<\infty$ for all $n\in\mathbb{Z}$. It is immediate to check that these times satisfy Lindley's equation \begin{align}\label{ind1} W_{n+1,0}=(W_{n,0}+S_{n,0}-A_{n,0})^+. \end{align} This now explains why $W_{n,0}$ is the time customer $n$ waits at station $0$ before their service starts. Indeed, if $W_{n,0}+S_{n,0}<A_{n,0}$ then customer $n$ will leave station $0$ before the next customer $n+1$ arrives. As a result, customer $n+1$ does not wait and $W_{n+1,0}=0$. If, on the other hand, $W_{n,0}+S_{n,0}\ge A_{n,0}$, then customer $n+1$ waits time $W_{n+1,0}=W_{n,0}+S_{n,0}-A_{n,0}$ before service begins. Inter-departure times from queue $0$ or, equivalently, inter-arrival times at queue $1$ are given by \begin{align}\label{ind2} A_{n,1}=(W_{n,0}+S_{n,0}-A_{n,0})^-+S_{n+1,0}. \end{align} Again, if $W_{n+1,0}>0$ then customer $n+1$ is already waiting and will start being serviced as soon as customer $n$ departs. The time between the departure of customer $n$ and that of customer $n+1$ from station $0$ is then equal to $S_{n+1,0}$. In the other case, when $W_{n+1,0}=0$, station $0$ is empty before customer $n+1$ gets there, for exactly time $A_{n,0}-S_{n,0}-W_{n,0}$. The time between the departures of customers $n$ and $n+1$ from queue $0$ equals this idle time plus the service time $S_{n+1,0}$. From \eqref{ind1} and \eqref{ind2} we have \[W_{n+1,0}+S_{n+1,0}+A_{n,0}=W_{n,0}+S_{n,0}+A_{n,1}\] for all $n\in\mathbb{Z}$. This in turn gives \[\sum_{m=0}^{n-1} A_{m,0}+S_{n,0}-S_{0,0}+W_{n,0}-W_{0,0}=\sum_{m=0}^{n-1}A_{m,1}.\] The process $\{A_{n,1}:n\in\mathbb{Z}\}$ is again ergodic.
Hence, dividing by $n$ and letting $n\to\infty$, the ergodic theorem tells us that $n^{-1}\sum_{m=0}^{n-1} A_{m,0}$ and $n^{-1}\sum_{m=0}^{n-1}A_{m,1}$ converge to $\mathbb{E}[A_{0,0}]$ and $\mathbb{E}[A_{0,1}]$, respectively. Since $S_{n,0}$ is a stationary process, $S_{n,0}/n$ converges to $0$. The next lemma shows that also $W_{n,0}/n\to0$ almost surely. Consequently, we have that $\mathbb{E}[A_{0,1}]=\mathbb{E}[A_{0,0}]$. \begin{lemma} We have $n^{-1}W_{n,0}\to0$ almost surely, as $n\to\infty$. \end{lemma} \begin{proof} Let $U_n=S_{n,0}-A_{n,0}$. Fix $\varepsilon>0$, $a\ge0$, and let $u_0=\mathbb{E}[U_n]<0$. ($u_0$ does not depend on $n$ because $U_n$ is stationary.) Set $W^\varepsilon_0(a)=a$ and define inductively $W_{n+1}^\varepsilon(a)=(W_n^\varepsilon(a)+U_n-u_0+\varepsilon)^+$, $n\ge0$. By induction, \[W_n^\varepsilon(0)=\Bigl(\max_{0\le m<n}\sum_{k=m}^{n-1}(U_k-u_0+\varepsilon)\Bigr)^+.\] Also, since the induction preserves monotonicity, we have that \[W_n^\varepsilon(a)\ge W^\varepsilon_n(0)\ge \sum_{k=0}^{n-1}(U_k-u_0+\varepsilon).\] Since $\mathbb{E}[U_k-u_0+\varepsilon]=\varepsilon>0$, the ergodic theorem tells us the last sum grows to infinity (linearly in $n$). Consequently, there exists an $n_0$ such that $W_n^\varepsilon(a)>0$ for $n\ge n_0$. But then if $n\ge n_0$ we have $W_n^\varepsilon(a)=W_{n-1}^\varepsilon(a)+U_{n-1}-u_0+\varepsilon$ and thus \[n^{-1}W_n^\varepsilon(a)=n^{-1}W_{n_0}^\varepsilon(a)+n^{-1}\sum_{k=n_0}^{n-1}(U_k-u_0+\varepsilon).\] The ergodic theorem again tells us that the last term converges to $\varepsilon$ as $n\to\infty$. As a result, we have shown that with probability one, for any $a\ge0$, $n^{-1}W_n^\varepsilon(a)\to\varepsilon$. Recall now that $W_{0,0}<\infty$ and observe that $W_0^\varepsilon(W_{0,0})=W_{0,0}$. Induction then shows that $W_{n,0}\le W_n^\varepsilon(W_{0,0})$ for all $n\ge0$.
Indeed, \begin{align*} W_{n+1,0}&=(W_{n,0}+U_n)^+\le(W_n^\varepsilon(W_{0,0})+U_n)^+\\ &\le(W_n^\varepsilon(W_{0,0})+U_n-u_0+\varepsilon)^+=W_{n+1}^\varepsilon(W_{0,0}). \end{align*} But then \[0\le\varliminf_{n\to\infty}n^{-1}W_{n,0}\le\varlimsup_{n\to\infty}n^{-1}W_{n,0}\le\lim_{n\to\infty}n^{-1}W_n^\varepsilon(W_{0,0})=\varepsilon.\] Taking $\varepsilon\to0$ completes the proof. \end{proof} Note that $A_{n,1}$ only used the values $\{S_{m,0}:m\in\mathbb{Z}\}$ and is therefore independent of $\{S_{m,k}:m\in\mathbb{Z},k\ge1\}$. We can now repeat the above steps inductively: Say we have already computed the (ergodic) inter-arrival process $\{A_{n,k}:n\in\mathbb{Z}\}$ at queue $k\ge0$ and that it is independent of the service times $\{S_{n,\ell}:n\in\mathbb{Z},\,\ell\ge k\}$. Say also that $\mathbb{E}[A_{0,k}]=\mathbb{E}[A_{0,0}]$. Then we define the waiting times \begin{align}\label{W:def_k} W_{n,k}=\Big(\sup_{j\le n-1}\sum_{i=j}^{n-1}(S_{i,k}-A_{i,k})\Big)^+, \end{align} which satisfy \begin{align}\label{lindley} W_{n+1,k}=(W_{n,k}+S_{n,k}-A_{n,k})^+. \end{align} The inter-arrival process at queue $k+1$ is given by \begin{align}\label{A-ind} A_{n,k+1}=(W_{n,k}+S_{n,k}-A_{n,k})^-+S_{n+1,k}. \end{align} It is ergodic and has the same mean as $A_{n,k}$ (and thus as $A_{n,0}$). It is also independent of $\{S_{n,\ell}:n\in\mathbb{Z},\,\ell\ge k+1\}$ and we can continue the inductive process. Using \eqref{lindley} and \eqref{A-ind} we can check the conservation law \begin{align}\label{q-coc} W_{n+1,k}+S_{n+1,k}+A_{n,k}=W_{n,k}+S_{n,k}+A_{n,k+1} \end{align} and the equation \begin{align}\label{q-rec} S_{n+1,k}=\min(W_{n+1,k}+S_{n+1,k},A_{n,k+1}). \end{align} The times $W_{n,k}+S_{n,k}$ are called the \index{work load} {\sl work load} of station $k$ by customer $n$.
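These recursions are easy to sanity-check numerically. The sketch below is ours, not from the text; it truncates the supremum in \eqref{W:def_k} by starting every queue empty, so the sup effectively runs over $0\le j\le n-1$. With that convention it verifies that the sup formula agrees with Lindley's recursion \eqref{lindley}, and that the conservation law \eqref{q-coc} and the recovery equation \eqref{q-rec} hold exactly for the computed arrays.

```python
import numpy as np

rng = np.random.default_rng(2)
n, K = 2000, 4
S = rng.exponential(1.0, size=(n + 1, K))   # service times S_{i,k}, mean m0 = 1
A0 = rng.exponential(2.0, size=n)           # inter-arrival times at queue 0, mean 2 > m0

def lindley(A, S_col):
    """Waiting times at one station, started empty (W_0 = 0), via the Lindley recursion."""
    W = np.zeros(len(A) + 1)
    for i in range(len(A)):
        W[i + 1] = max(W[i] + S_col[i] - A[i], 0.0)
    return W

def w_sup(A, S_col, m):
    """Truncated sup formula: W_m = (max_{0<=j<=m-1} sum_{i=j}^{m-1}(S_i - A_i))^+."""
    if m == 0:
        return 0.0
    csum = np.cumsum((S_col[:m] - A[:m])[::-1])  # partial sums for j = m-1 down to 0
    return max(csum.max(), 0.0)

# Run the tandem queues, recording the worst violation of (q-coc) and (q-rec).
err_coc = err_rec = 0.0
A = A0
for k in range(K):
    m = len(A)
    W = lindley(A, S[:, k])
    A_next = np.array([max(-(W[i] + S[i, k] - A[i]), 0.0) + S[i + 1, k]
                       for i in range(m - 1)])                  # inter-departures (A-ind)
    for i in range(m - 1):
        lhs = W[i + 1] + S[i + 1, k] + A[i]                     # conservation law (q-coc)
        rhs = W[i] + S[i, k] + A_next[i]
        err_coc = max(err_coc, abs(lhs - rhs))
        err_rec = max(err_rec,
                      abs(S[i + 1, k] - min(W[i + 1] + S[i + 1, k], A_next[i])))
    A = A_next
```

Both violations stay at floating-point noise, as they must: given the recursions, \eqref{q-coc} and \eqref{q-rec} are algebraic identities.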
Let us assign values to the edges of $\mathbb{Z}\times\mathbb{Z}_+$: on the horizontal edge $(n,k)-(n+1,k)$ put weight $A_{n,k}$ and on the vertical edge $(n,k)-(n,k+1)$ put weight $W_{n,k}+S_{n,k}$. Also, put weights $S_{n,k}$ on the vertices $(n,k+1)$. Recall now \eqref{cocycle} and \eqref{recovery}. Then \eqref{q-coc} can be seen as a \index{cocycle} cocycle property and \eqref{q-rec} is a \index{recovery} recovery property (but in the south-west direction instead of the north-east direction). See Figure \ref{q-coc:fig}. This is where the connection to Busemann functions lies. \begin{figure} \begin{center} \begin{tikzpicture}[ >=latex, scale=1] \draw[line width=1pt](0,0)--(4,0)--(4,4)--(0,4)--(0,0); \draw[fill =black](0,0)circle(1.5mm); \draw[fill =black](4,0)circle(1.5mm); \draw[fill =black](0,4)circle(1.5mm); \draw[fill =black](4,4)circle(1.5mm); \draw(4.5,4.5) node{$S_{n+1,k}$}; \draw(4.7,-0.5) node{$S_{n+1,k-1}$}; \draw(-0.3,4.5) node{$S_{n,k}$}; \draw(0,-0.5) node{$S_{n,k-1}$}; \draw(2,4.5) node{$A_{n,k+1}$}; \draw(2,-0.5) node{$A_{n,k}$}; \draw(5.6,2) node{$W_{n+1,k}+S_{n+1,k}$}; \draw(-1.3,2) node{$W_{n,k}+S_{n,k}$}; \end{tikzpicture} \end{center} \caption{\small Assignment of work loads, inter-arrival times, and service times to edges and vertices.} \label{q-coc:fig} \end{figure} Observe that the system $\{A_{n,k},W_{n,k},S_{n,k}:n\in\mathbb{Z},\,k\in\mathbb{Z}_+\}$ is invariant with respect to shifts in the first coordinate, i.e.\ the distribution of $\{A_{n+m,k},W_{n+m,k},S_{n+m,k}:n\in\mathbb{Z},\,k\in\mathbb{Z}_+\}$ is the same for all $m\in\mathbb{Z}$. It is in fact also ergodic under shifts in the first coordinate.
However, it is not at all obvious (and in fact not true in general) that the system is invariant (let alone ergodic) under shifts in the second coordinate. That is, we do not know a priori that the distribution of $\{A_{n,k+\ell},W_{n,k+\ell},S_{n,k+\ell}:n\in\mathbb{Z},\,k\in\mathbb{Z}_+\}$ is independent of $\ell\in\mathbb{Z}_+$. For this to happen, clearly $\{A_{n,0}:n\in\mathbb{Z}\}$ needs to have some special distribution. If we denote the distribution of $\{A_{n,0}:n\in\mathbb{Z}\}$ by $\nu$, then write $\Phi(\nu)$ for the distribution of $\{A_{n,1}:n\in\mathbb{Z}\}$. This is the so-called \index{queuing operator} {\sl queuing operator}. It takes an ergodic probability measure on $\mathbb{R}^\mathbb{Z}$ and transforms it into another ergodic probability measure on $\mathbb{R}^\mathbb{Z}$, while preserving the value of the mean (recall that $\mathbb{E}[A_{n,1}]=\mathbb{E}[A_{n,0}]$). Let $\Phi^k$ be the $k$-th iterate of $\Phi$, i.e.\ $\Phi^1=\Phi$ and $\Phi^{k+1}(\nu)=\Phi(\Phi^k(\nu))$. For the invariance under shifts in the second coordinate to hold, what we need is $\Phi(\nu)=\nu$, i.e.\ that $\nu$ be an ergodic \index{fixed point} {\sl fixed point} of the queuing operator. Thus, the problem at hand is: Given $\alpha>m_0$, find ergodic measures $\nu_\alpha$ on $\mathbb{R}^\mathbb{Z}$ such that $\Phi(\nu_\alpha)=\nu_\alpha$. And it would be good if along the way we can also answer the question of uniqueness of such measures. One way to produce fixed points with a prescribed mean $\alpha>m_0$ is to start, for example, with the measure $\delta_\alpha^\mathbb{Z}$, the distribution of the constant process $\{A_{n,0}=\alpha:n\in\mathbb{Z}\}$, and hope that $\Phi^k(\delta_\alpha^\mathbb{Z})$ converges (weakly) to an ergodic fixed point that has mean $\alpha$, as $k\to\infty$.
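One can watch this iteration empirically. The Monte Carlo sketch below is ours, with illustrative parameters; starting each queue empty truncates the supremum defining the waiting times, which is harmless over a long run. Starting from the constant process (a sample from $\delta_\alpha^\mathbb{Z}$) with exponential services, each application of the queuing operator preserves the mean, and in this solvable case the iterates should approach the exponential fixed point described at the end of this section.

```python
import numpy as np

rng = np.random.default_rng(3)
alpha, m0 = 2.0, 1.0     # illustrative choices with alpha > m0
n, K = 50000, 4
S = rng.exponential(m0, size=(n + 1, K))  # i.i.d. exponential service times
A = np.full(n, alpha)    # constant inter-arrival process, i.e. delta_alpha^Z

means = [A.mean()]
for k in range(K):       # apply the queuing operator K times
    W = 0.0
    A_next = np.empty(len(A) - 1)
    for i in range(len(A) - 1):
        x = W + S[i, k] - A[i]
        W = max(x, 0.0)                          # Lindley recursion
        A_next[i] = max(-x, 0.0) + S[i + 1, k]   # inter-departure times
    A = A_next
    means.append(A.mean())
print(means)
```

The empirical means stay close to $\alpha$ at every level, illustrating the mean preservation noted above.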
Mairesse and Prabhakar \cite{Mai-Pra-03} proved that for each $\alpha>m_0$ there exists a unique stationary fixed point $\nu_\alpha$ such that if one starts with any ergodic process $\nu$ with mean $\alpha$, then the C\'esaro mean $k^{-1}\sum_{\ell=1}^k\Phi^\ell(\nu)$ converges weakly to $\nu_\alpha$. One consequence of this is that ergodic fixed points with a prescribed mean are unique. It is an open question whether or not for each $\alpha>m_0$ the probability measure $\nu_\alpha$ is ergodic. What is known, though, is that this is true if we have differentiability of the shape function $g$ for the LPP problem with weight distribution $\mathbb{P}$ (the distribution of the service times) \cite[Lemma 7.6(b)]{Geo-Ras-Sep-17-ptrf-1}. For a fixed $\alpha>m_0$, let $\{A_{n,0}:n\in\mathbb{Z}\}$ have distribution $\nu_\alpha$ and let $\{S_{n,k}:n\in\mathbb{Z},k\in\mathbb{Z}_+\}$ be i.i.d.\ with distribution $\mathbb{P}$, independent of $\{A_{n,0}\}$. Construct the processes $\{W_{n,k},A_{n,k}:n\in\mathbb{Z},k\in\mathbb{Z}_+\}$ by the inductions \eqref{lindley} and \eqref{A-ind}. Then, because $\nu_\alpha$ is a fixed point, the process $\{W_{0,k}:k\in\mathbb{Z}_+\}$ is stationary and can thus be extended to a stationary process $\{W_{0,k}:k\in\mathbb{Z}\}$. (If this is not clear to the reader, it is an excellent exercise on a standard application of Kolmogorov's extension theorem.) Let \begin{align}\label{def:f} f(\alpha)=\mathbb{E}[W_{0,0}]+\mathbb{E}[S_{0,0}]. \end{align} Using the symmetry of the construction one can prove that the distribution of $\{W_{0,k}+S_{0,k}:k\in\mathbb{Z}\}$ is again a fixed point of the queuing operator and that it is ergodic if $\nu_\alpha$ is ergodic \cite[Lemma 7.7]{Geo-Ras-Sep-17-ptrf-1}. In this case, this distribution is none other than $\nu_{f(\alpha)}$.
This tells us that not only is the system $\{A_{n,k},W_{n,k},S_{n,k}:n\in\mathbb{Z},k\in\mathbb{Z}_+\}$ ergodic under shifts of the $n$ coordinate, but it is also ergodic under shifts of the $k$ coordinate. \begin{theorem}\label{f(alpha)} The function $f$ takes values in $(m_0,\infty)$ and is a convex, continuous, and strictly decreasing involution {\rm(}i.e.\ $f(f(\alpha))=\alpha${\rm)}. Furthermore, $f(\alpha)$ converges to $\infty$ as $\alpha\searrow m_0$ and to $m_0$ as $\alpha\to\infty$. Let $g$ be the shape function for the LPP problem with weight distribution $\mathbb{P}$ {\rm(}the distribution of the service times{\rm)}. Assume it is differentiable on $\Uset^\circ$. Then for all $\xi\in\Uset^\circ$ we have \begin{align}\label{g-var} g(\xi)=\inf_{\alpha>m_0}\bigl(\alpha\xi_1+f(\alpha)\xi_2\bigr). \end{align} The infimum is attained at $\alpha=e_1\cdot\nabla g(\xi)$, and then $f(\alpha)=e_2\cdot\nabla g(\xi)$. \end{theorem} \begin{proof} Let $G_{me_1,ne_2}$ denote the last passage time from $me_1$ to $ne_2$ using the weights $\{S_{m,k}:m\in\mathbb{Z},k\in\mathbb{Z}_+\}$. Equation \eqref{W:def} for $n=0$ can be rewritten as \[S_{0,0}+W_{0,0}=\sup_{j\le0}\Bigl(G_{je_1,e_2}-\sum_{i=j}^{-1}A_{i,0}\Bigr),\] with the convention that an empty sum is zero. We can extend this by induction to \[\sum_{k=0}^{n-1}(S_{0,k}+W_{0,k})=\sup_{j\le0}\Bigl(G_{je_1,ne_2}-\sum_{i=j}^{-1}A_{i,0}\Bigr).\] Dividing by $n$, letting $n\to\infty$, and using the ergodic theorem and the shape theorem (plus a bit of work similar to what we did in the proof of continuity of $g$), we get \[f(\alpha)=\sup_{s\ge0}\bigl(g(s,1)-s\alpha\bigr)\] with a maximizer at $s$ such that $\alpha=e_1\cdot\nabla g(s,1)$. The above formula shows that $f$ is a nonincreasing convex function. Martin's asymptotic formula \eqref{martin} (and the homogeneity of $g$) implies that $m_0<f(\alpha)<\infty$ for $\alpha>m_0$.
In particular, the convex function $f$ is continuous. The symmetry of $g$ implies that $f$ is an involution and is thus strictly decreasing. The limits claimed in the theorem follow. Inverting the above convex duality we get \begin{align}\label{g-f} g(s,1)=\inf_{\alpha>m_0}\bigl(\alpha s+f(\alpha)\bigr) \end{align} with a minimizer at $\alpha=e_1\cdot\nabla g(s,1)$. The variational formula \eqref{g-var} now follows by using the homogeneity of $g$ to write $g(\xi)=\xi_2 g(\xi_1/\xi_2,1)$. The minimizing $\alpha$ is given by $e_1\cdot\nabla g(\xi)$, as claimed, and then the fact that $f(\alpha)=e_2\cdot\nabla g(\xi)$ comes from \eqref{euler}. \end{proof} The above construction of fixed points can be carried out simultaneously for any given countable set of parameters $\alpha>m_0$, thus coupling the fixed points $\nu_\alpha$. A bit more precisely, fix a countable set $\mathcal{A}_0\subset(m_0,\infty)$ and start with inter-arrival times $A^{(\alpha)}_{n,0}=\alpha$, $\alpha\in\mathcal{A}_0$, $n\in\mathbb{Z}$, and service times $\{S_{n,k}:n\in\mathbb{Z},\,k\in\mathbb{Z}_+\}$ that are independent and have the same distribution as the weights of the LPP model. (Note that the service times do not depend on $\alpha$.) Use \eqref{W:def_k} and \eqref{A-ind} to define inductively $W^{(\alpha)}_{n,k}$ and $A^{(\alpha)}_{n,k}$ on all of $\mathbb{Z}\times\mathbb{Z}_+$. Then, as $k\to\infty$, the C\'esaro mean of the distributions of $\{A^{(\alpha)}_{n,k}:n\in\mathbb{Z},\,\alpha\in\mathcal{A}_0\}$ converges weakly to a probability measure on $(\mathbb{R}^\mathbb{Z})^{\mathcal{A}_0}$ whose marginal for a fixed $\alpha\in\mathcal{A}_0$ is exactly $\nu_\alpha$. We can now go back to proving the existence of the limit \eqref{B-limit}. \begin{proof}[Proof of Theorem \ref{Bus:thm}] The first step is to extract from the above queuing objects candidates for the limits $B^\xi$.
Then we prove that the Busemann limits indeed exist and equal these candidates. Recall that we assume $g$ is differentiable. For $\zeta\in\Uset^\circ$ define $\alpha(\zeta)=e_1\cdot\nabla g(\zeta)$. Then $\alpha$ is nonincreasing and continuous in $\zeta_1=\zeta\cdot e_1$. Also, it follows from \eqref{martin} that $\alpha(\zeta)\to m_0$ as $\zeta\to e_1$ and $\alpha(\zeta)\to\infty$ as $\zeta\to e_2$. In particular, $\alpha(\zeta)$ is always strictly bigger than $m_0=\mathbb{E}[\omega_0]$. Fix $\xi\in\Uset^\circ$ and let $\mathcal U_0=\{\xi\}\cup(\Uset^\circ\cap\mathbb{Q}^2)$. Let $\mathcal{A}_0=\{\alpha(\zeta):\zeta\in\mathcal U_0\}$. This is a dense countable subset of $(m_0,\infty)$. Let $\{A^{(\alpha)}_{n,0}:n\in\mathbb{Z},\,\alpha\in\mathcal{A}_0\}$ be distributed according to the above coupling of fixed points and let $\{S_{n,k}:n\in\mathbb{Z},\,k\in\mathbb{Z}_+\}$ be independent, with the same joint distribution $\mathbb{P}$ as the weights in the directed LPP model, and independent of the inter-arrival times. Construct the times $\{A_{n,k}^{(\alpha)},W_{n,k}^{(\alpha)}:n\in\mathbb{Z},\,k\in\mathbb{N},\,\alpha\in\mathcal{A}_0\}$ using the inductions \eqref{W:def_k} and \eqref{A-ind}. Since it comes from fixed points, the system $\{A^{(\alpha)}_{n,k},W^{(\alpha)}_{n,k},S_{n,k}:n\in\mathbb{Z},\,k\in\mathbb{Z}_+,\alpha\in\mathcal{A}_0\}$ is stationary under shifts in both coordinates. We can then extend it to a system $\{A^{(\alpha)}_{n,k},W^{(\alpha)}_{n,k},S_{n,k}:n\in\mathbb{Z},\,k\in\mathbb{Z},\,\alpha\in\mathcal{A}_0\}$ on the whole lattice. For $\zeta\in\mathcal U_0$ define \begin{align*} &\omega_{ne_1+ke_2}=S_{-n,-k-1},\quad B^\zeta(ne_1+ke_2,(n+1)e_1+ke_2)=A^{(\alpha(\zeta))}_{-n-1,-k},\quad \text{and}\\ &B^\zeta(ne_1+ke_2,ne_1+(k+1)e_2)=W^{(\alpha(\zeta))}_{-n,-k-1}+S_{-n,-k-1}\,.
\end{align*} Then $\{\omega_x:x\in\mathbb{Z}^2\}$ has distribution $\mathbb{P}$, and \eqref{q-rec} says that $B^\zeta$ satisfies the recovery property \eqref{recovery}. Equation \eqref{q-coc} says that $B^\zeta$ satisfies \begin{align}\label{B-cell} \begin{split} &B^\zeta(x,x+e_1)+B^\zeta(x+e_1,x+e_1+e_2)\\ &\qquad\qquad\qquad=B^\zeta(x,x+e_2)+B^\zeta(x+e_2,x+e_1+e_2) \end{split} \end{align} for all $x\in\mathbb{Z}^2$. Set $B^\zeta(x+e_i,x)=-B^\zeta(x,x+e_i)$, $i\in\{1,2\}$, and for $x,y\in\mathbb{Z}^2$ define \[B^\zeta(x,y)=\sum_{i=0}^{n-1}B^\zeta(x_i,x_{i+1}),\] where $x_{0,n}$ is any nearest-neighbor path from $x_0=x$ to $x_n=y$. Thanks to \eqref{B-cell} this definition does not depend on the choice of the path $x_{0,n}$. Now, $B^\zeta$ is an $L^1$ cocycle and $\mathbb{E}[B^\zeta(0,e_1)]=\mathbb{E}[A^{(\alpha(\zeta))}_{0,0}]=\alpha(\zeta)$. This equality says that $B^\zeta$ satisfies the first equation in \eqref{B-g}, for the $e_1$ direction. The version for the $e_2$ direction comes from Theorem \ref{f(alpha)}: \[\mathbb{E}[B^\zeta(0,e_2)]=\mathbb{E}[W^{(\alpha(\zeta))}_{0,-1}+S_{0,-1}]=f(\alpha(\zeta))=e_2\cdot\nabla g(\zeta).\] As was mentioned earlier, the second equation in \eqref{B-g} is simply a consequence of the first one and \eqref{euler}. Observe next that if we start with two sequences of inter-arrival times $A_{n,0} \le A'_{n,0}$ for all $n\in\mathbb{Z}$, then \eqref{W:def} gives $W_{n,0}\ge W'_{n,0}$ for all $n\in\mathbb{Z}$. Then from \eqref{A-ind} we get that $A_{n,1}\le A'_{n,1}$. This monotonicity of the queuing operator leads to a monotonicity in the coupling of the fixed points.
Combining this with the fact that if $\zeta,\eta\in\mathcal U_0$ are such that $\zeta_1<\eta_1$, then $\alpha(\zeta)= e_1\cdot\nabla g(\zeta)\ge e_1\cdot \nabla g(\eta)=\alpha(\eta)$, we get that the cocycles $B^\zeta$ we constructed satisfy the monotonicity \eqref{monotone}. The first equation in \eqref{B-g} and the differentiability of $g$ imply that $\zeta\mapsto\mathbb{E}[B^\zeta(0,e_i)]$ is continuous, for $i\in\{1,2\}$. Combined with the above monotonicity, we get that with probability one \begin{align}\label{B-cont} \lim_{\mathcal U_0\ni\zeta\to\xi} B^\zeta(x,x+e_i)=B^\xi(x,x+e_i),\quad i\in\{1,2\}. \end{align} Note that the set of full $\mathbb{P}$-measure on which the above event holds depends on $\xi$. In fact, we will see in Corollary \ref{B-not-cont} that this continuity does not hold on all of $\Uset^\circ$ simultaneously (i.e.\ with one null set thrown away). Lastly, by another monotonicity argument, not too different from the one we used in the proof of Lemma \ref{lm:monotone}, we can show that for $x\in\mathbb{Z}^2$, a sequence $x_n$ directed into $\mathcal U_\xi$, directions $\zeta,\eta\in\mathcal U_0\setminus\mathcal U_\xi$ with $\zeta_1<\xi_1<\eta_1$, and a large integer $n$, we have the stochastic inequalities \begin{align}\label{eq:comp} \begin{split} &B^\eta(x,x+e_1)\le G_{x,x_n}-G_{x+e_1,x_n}\le B^\zeta(x,x+e_1)\quad\text{and}\\ &B^\eta(x,x+e_2)\ge G_{x,x_n}-G_{x+e_2,x_n}\ge B^\zeta(x,x+e_2). \end{split} \end{align} (A random variable $Y$ is said to be stochastically smaller than a random variable $Z$ if for all $a\in\mathbb{R}$ we have $P(Z\le a)\le P(Y\le a)$. If, moreover, one shows that $E[Z]=E[Y]$, then $Y$ and $Z$ have the same distribution.)
Taking $n\to\infty$, then $\eta$ and $\zeta$ to $\xi$, and applying \eqref{B-cont}, we get \[B^\xi(x,x+e_i)=\lim_{n\to\infty}(G_{x,x_n}-G_{x+e_i,x_n}).\] Then the cocycle property \eqref{B-cell} gives \eqref{B-limit}. If $\mathcal U_\xi=\mathcal U_\zeta$ for some $\xi,\zeta\in\Uset^\circ$, then $\alpha(\xi)=e_1\cdot\nabla g(\xi)=e_1\cdot\nabla g(\zeta)=\alpha(\zeta)$ and thus $B^\xi=B^\zeta$. The proof of Theorem \ref{Bus:thm} is complete. \end{proof} The next lemma develops the very first inequality in \eqref{eq:comp}, the others being similar. We focus on the case $x=0$, the general case following from the shift-invariance of $\mathbb{P}$. \begin{lemma} Fix $\xi\in\Uset^\circ$ and a {\rm(}possibly random{\rm)} sequence $x_n$ directed into $\mathcal U_\xi$. Fix $\eta\in\Uset^\circ\setminus\mathcal U_\xi$ with $\xi_1<\eta_1$. Then almost surely, for large $n$, \[B^\eta(0,e_1)\le G_{0,x_n}-G_{e_1,x_n}.\] \end{lemma} \begin{proof} Consider the two rectangles with common south-west corner at $0$ and with north-east corners at $x_n$ and $x_n+e_1+e_2$. Put weights $\omega_x$ at all vertices $x\le x_n$. Let $\bar\omega_x=\omega_x$ for such vertices. At sites $x=x_n+e_1+e_2-ke_1$, $k\in\mathbb{N}$, put weights $\bar\omega_x=B^\eta(x,x+e_1)$. Similarly, at sites $x=x_n+e_1+e_2-ke_2$, $k\in\mathbb{N}$, put weights $\bar\omega_x=B^\eta(x,x+e_2)$. We will write $\overline G_{y,x_n}$ for the passage time from $y$ to $x_n$ using the weights $\omega_x$, as defined in \eqref{G:def2}. We also use the passage times from $y$ to $x_n+e_1+e_2$ that use the combination of the weights $\omega_x$, $x\le x_n$, and \end{proof} To close this section, let us describe the situation for the solvable models. Here, the fixed points $\nu_\alpha$ can be described explicitly.
For example, in the case of exponentially distributed service times with mean $m_0>0$ (rate $1/m_0$) one can check directly that for any $\alpha>m_0$ inter-arrival times $\{A_{n,0}:n\in\mathbb{Z}\}$ that are i.i.d.\ exponentially distributed with mean $\alpha$ (rate $1/\alpha$) furnish an ergodic fixed point of the queuing operator. Because of the uniqueness of ergodic fixed points, this identifies $\nu_\alpha$. Another direct computation verifies that $f(\alpha)=\mathbb{E}[W_{0,0}+S_{0,0}]=m_0\alpha/(\alpha-m_0)$ and the symmetry observed below \eqref{def:f} says that $\{W_{0,k}+S_{0,k}:k\in\mathbb{Z}\}$ are i.i.d.\ exponentially distributed with mean $f(\alpha)$. One more miracle occurs: it turns out that $\{A_{n,0}:n\in\mathbb{Z}_+\}$ and $\{W_{0,k}+S_{0,k}:k\in\mathbb{Z}_+\}$ are independent of each other. See Theorem 3.1 in \cite{ch:Seppalainen} for the proofs of all these distributional claims. Once the explicit formula for $f$ is known, solving the variational formula \eqref{g-f} leads to Rost's formula \eqref{shape:exp}. A consequence of the above is that for $\xi\in\Uset^\circ$, $\{B^\xi(ne_1,(n+1)e_1):n\in\mathbb{Z}_+\}$ are independent and exponentially distributed with rate $\frac{\sqrt{\xi_1}}{m_0(\sqrt{\xi_1}+\sqrt{\xi_2})}$, $\{B^\xi(ne_2,(n+1)e_2):n\in\mathbb{Z}_+\}$ are independent and exponentially distributed with rate $\frac{\sqrt{\xi_2}}{m_0(\sqrt{\xi_1}+\sqrt{\xi_2})}$, and the two sets of random variables are independent of each other. Information about the distribution of the Busemann functions is powerful. For example, it allows one to derive bounds on the coalescence time of geodesics \cite{Pim-16,Bas-Sar-Sly-17-}. We will see in Section \ref{cif:sec} that it enables the calculation of the distribution of the asymptotic direction of the competition interface. It is also used in proving bounds on the fluctuations of passage times and geodesics, as is done in Section 5 of \cite{ch:Seppalainen}.
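These explicit rates can be probed by simulation. The following minimal numerical sketch (an illustration, not part of the source) computes last-passage times over i.i.d.\ exponential weights by dynamic programming, using the convention that $G_{x,y}$ includes the weights at both endpoints, and compares the empirical mean of the increment $G_{0,x_n}-G_{e_1,x_n}$ along the diagonal $\xi=(1,1)$ with the predicted Busemann mean $m_0(\sqrt{\xi_1}+\sqrt{\xi_2})/\sqrt{\xi_1}=2m_0$. The grid size and trial count are arbitrary illustrative choices.

```python
import random

def lpp_to_corner(w):
    """H[i][j] = max over up-right paths from (i,j) to the NE corner of the
    sum of weights along the path, both endpoints included -- the convention
    behind the recursion G_{x,y} = omega_x + max(G_{x+e1,y}, G_{x+e2,y})."""
    n, m = len(w), len(w[0])
    H = [[0.0] * m for _ in range(n)]
    for i in range(n - 1, -1, -1):
        for j in range(m - 1, -1, -1):
            if i == n - 1 and j == m - 1:
                best = 0.0
            elif i == n - 1:
                best = H[i][j + 1]
            elif j == m - 1:
                best = H[i + 1][j]
            else:
                best = max(H[i + 1][j], H[i][j + 1])
            H[i][j] = w[i][j] + best
    return H

# Deterministic sanity check on a 2x2 grid (w[i][j] = weight at i*e1 + j*e2):
# the path 0 -> e2 -> e1+e2 carries 1+3+4 = 8 and beats 1+2+4 = 7.
H = lpp_to_corner([[1.0, 3.0], [2.0, 4.0]])
assert H[0][0] == 8.0 and H[1][0] == 6.0

# Monte Carlo check of the Busemann mean in the exponential model:
# for xi = (1,1), E[B^xi(0,e1)] = m0*(sqrt(xi_1)+sqrt(xi_2))/sqrt(xi_1) = 2*m0.
random.seed(0)
m0, n, trials = 1.0, 120, 80
est = 0.0
for _ in range(trials):
    w = [[random.expovariate(1.0 / m0) for _ in range(n)] for _ in range(n)]
    H = lpp_to_corner(w)
    est += H[0][0] - H[1][0]   # finite-n proxy for B^xi(0,e1), cf. (B-limit)
est /= trials
print(est)  # close to 2*m0 = 2 for moderately large n
```

The increment $G_{0,x_n}-G_{e_1,x_n}$ used here is only a finite-$n$ approximation; its almost-sure convergence to the Busemann function is the content of \eqref{B-limit}.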
\section{Geodesics}\label{geodesics} In this section, let us assume the conditions of Theorem \ref{Bus:thm} to be satisfied. In particular, the shape $g$ is differentiable on $(0,\infty)^2$. One of the important questions in LPP concerns infinite geodesics: an infinite path is a geodesic if every finite segment of it is a geodesic between its endpoints. The following existence result comes quite easily. \begin{lemma}\label{exist-geo} With probability one, for every $x\in\mathbb{Z}^2$ there is at least one infinite geodesic starting at that point. \end{lemma} \begin{proof} Fix $x\in\mathbb{Z}^2$. Take any sequence $x_n\in\mathbb{Z}^2_+$ with $\abs{x_n}_1\to\infty$ and consider for each $n$ a geodesic from $x$ to $x_n$. Denote it by $x_{0,n}^{(n)}$. Fix $m\ge1$. We have only finitely many possible up-right paths of length $m$. Hence, there exists a subsequence along which the geodesics $x_{0,n}^{(n)}$ all share the same initial $m$ steps. Using the diagonal trick, we can find a subsequence $n_j$ such that for all $m\ge1$ there exists a $j_m$ such that $\{x_{0,n_j}^{(n_j)}:j\ge j_m\}$ share the first $m$ steps. This constructs an infinite path $x_{0,\infty}$ such that for all $m\ge1$, $x_{0,m}$ is the path shared by $\{x_{0,n_j}^{(n_j)}:j\ge j_m\}$. In particular, $x_{0,m}$ is a geodesic between $x_0=x$ and $x_m$, for all $m\ge1$, and thus $x_{0,\infty}$ is an infinite geodesic starting at $x_0=x$. \end{proof} Now we know that infinite geodesics exist. But how many infinite geodesics starting at a given point are there? And what are their properties? Does every infinite geodesic $x_{0,\infty}$ have to have an asymptotic direction, i.e.\ is it necessary that $x_n/n\to\xi$ for some $\xi\in\mathcal U$? Can there be multiple geodesics that go in the same asymptotic direction $\xi\in\mathcal U$? Do geodesics starting at different points and going in a given direction $\xi$ cross?
Does there exist a bi-infinite geodesic, i.e.\ an up-right path $x_{-\infty,\infty}$ whose finite segments are all geodesics? Busemann functions can help answer some (if not all) of the above questions. Take a look, for example, at Theorems \ref{direction}, \ref{direction2}, \ref{left}, \ref{coal:thm}, \ref{unique:thm}, \ref{double:thm} and Corollary \ref{nice-cor} below. To see the connection between Busemann functions and geodesics start with formula \eqref{G:ind2}. Assume, for simplicity, that the weights $\omega_x$ have a continuous distribution. Then $G_{x+e_1,y}=G_{x+e_2,y}$ happens with zero probability and thus there is a unique $i\in\{1,2\}$ for which \[\omega_x=G_{x,y}-G_{x+e_i,y}.\] The geodesic path from $x$ to $y$ will follow this increment and go from $x$ to $x+e_i$. From there the procedure can be repeated, until the path reaches the north-east boundary with corner $y$, i.e.\ until one gets to an $x\in\{y-ke_1:k\in\mathbb{N}\}\cup\{y-ke_2:k\in\mathbb{N}\}$. From there, the geodesic marches straight to $y$ using only $e_1$ or only $e_2$ steps. This description of geodesics motivates the following. \begin{lemma}\label{B-geo} {\rm\cite{Geo-Ras-Sep-17-ptrf-1}} Let $B:\mathbb{Z}^2\times\mathbb{Z}^2\to\mathbb{R}$ satisfy the cocycle and recovery properties \eqref{cocycle} and \eqref{recovery}. Let $x_{0,\infty}$ be a path such that for every $i\ge0$ we have \[\omega_{x_i}=B(x_i,x_{i+1}).\] In other words, the path goes along the ``minimal gradient'' of $B$. Then, $x_{0,\infty}$ is a geodesic. \end{lemma} \begin{proof} Fix $n\ge1$. Consider an arbitrary up-right path $y_{0,n}$ with $y_0=x_0$ and $y_n=x_n$. Write \[\sum_{i=0}^{n-1} \omega_{x_i} =\sum_{i=0}^{n-1}B(x_i,x_{i+1}) =B(x_0,x_n) =\sum_{i=0}^{n-1}B(y_i,y_{i+1}) \ge\sum_{i=0}^{n-1}\omega_{y_i}.\] (The first equality is the defining property of the path, the second and third use the cocycle property, and the final inequality uses recovery.)
Take the maximum over all up-right paths $y_{0,n}$ between $x_0$ and $x_n$ to get \[\sum_{i=0}^{n-1} \omega_{x_i}\ge G_{x_0,x_n},\] which says that $x_{0,n}$ is a geodesic. Since $n$ was arbitrary, the lemma is proved. \end{proof} As a bonus, we get in the above proof that when $x_{0,\infty}$ follows the smallest gradient of a cocycle $B$ that recovers, we have for $0\le m\le n$ \begin{align}\label{bonus} G_{x_m,x_n}=\sum_{i=m}^{n-1} \omega_{x_i}=B(x_m,x_n). \end{align} The above lemma says in particular that the Busemann functions $B^\xi$ from \eqref{B-limit} provide us with a ``machine'' to produce infinite geodesics starting from any given point. Given a starting point $u$, a direction $\xi\in\Uset^\circ$, and an integer $j\in\{1,2\}$, let $x_{0,\infty}^{u,\xi,j}$ be the path produced by the following inductive mechanism: $x_0=u$ and for $k\ge0$, if $B^\xi(x_k,x_k+e_1)\ne B^\xi(x_k,x_k+e_2)$, then let $x_{k+1}=x_k+e_i$ for the unique $i\in\{1,2\}$ such that $\omega_{x_k}=B^\xi(x_k,x_k+e_i)$. If, on the other hand, $B^\xi(x_k,x_k+e_1)=B^\xi(x_k,x_k+e_2)$, then break the tie by letting $x_{k+1}=x_k+e_j$. Now that we know how to produce geodesics, we ask whether these geodesics have an asymptotic direction. We have the following theorem. \index{geodesic!direction} \begin{theorem}\label{direction} {\rm\cite{Geo-Ras-Sep-17-ptrf-1}} Make the same assumptions as in Theorem \ref{Bus:thm}. For each $\xi\in\Uset^\circ$ we have \[\mathbb{P}\big\{\text{for all }u\in\mathbb{Z}^2\text{ and }j\in\{1,2\},\ \text{geodesic }x_{0,\infty}^{u,\xi,j}\text{ is asymptotically directed into }\mathcal U_\xi\big\}=1.\] In particular, if the boundary of $\mathcal{C}$ is strictly convex at $\xi$, then $x_{0,\infty}^{u,\xi,j}$ has asymptotic direction $\xi$: $n^{-1}x_n^{u,\xi,j}\to\xi$ as $n\to\infty$.
\end{theorem} The proof of the above theorem needs a fact about stationary $L^1$ cocycles, i.e.\ measurable functions $B:\Omega\times\mathbb{Z}^2\times\mathbb{Z}^2\to\mathbb{R}$ that satisfy \eqref{cocycle} and \eqref{shift} and are such that for each $x,y\in\mathbb{Z}^2$ we have $\mathbb{E}[\abs{B(\omega,x,y)}]<\infty$. \index{cocycle!ergodic theorem} \begin{theorem}\label{B-shape:thm} {\rm\cite{Geo-Ras-Sep-17-ptrf-1,Ras-Sep-Yil-13,Geo-etal-15}} Let $B$ be a stationary $L^1$ cocycle. Define $\overline B=\mathbb{E}[B(0,e_1)]e_1+\mathbb{E}[B(0,e_2)]e_2$. Then for any $\xi\in\mathbb{R}_+^2$ we have $\mathbb{P}$-almost surely \begin{align}\label{B-ergod} \lim_{n\to\infty} \frac{B(0,\fl{n\xi})}n=\overline B\cdot\xi. \end{align} If furthermore $B$ recovers {\rm(}i.e.\ satisfies \eqref{recovery}{\rm)}, then we also have \begin{align}\label{B-shape} \lim_{n\to\infty}\max_{x\in\mathbb{Z}_+^2:\abs{x}_1=n}\frac{\abs{B(0,x)-\overline B\cdot x}}n=0\quad\mathbb{P}\text{-almost surely.} \end{align} \end{theorem} \begin{proof}[Sketch of proof of Theorem \ref{B-shape:thm}] As was the case in Theorem \ref{th:shape}, we will show how the proof of \eqref{B-ergod} goes when $\xi\in\mathbb{Z}_+^2$. The case $\xi\in\mathbb{R}_+^2$ comes with some more work and \eqref{B-shape} comes with considerably more work. Thus assume that $\xi\in\mathbb{Z}^2_+$. Then we can use the cocycle and stationarity properties to write \[B(\omega,0,n\xi)=\sum_{i=0}^{n-1}B(\omega,i\xi,(i+1)\xi)=\sum_{i=0}^{n-1}B(T_{i\xi}\omega,0,\xi).\] The terms $B(T_{i\xi}\omega,0,\xi)$ are just shifted copies of the first term $B(\omega,0,\xi)$. As such, $n^{-1}B(\omega,0,n\xi)$ can be thought of as a sample mean of, albeit dependent, samples of $B(\omega,0,\xi)$.
A generalization of the law of large numbers, called the ergodic theorem, then tells us that this sample mean converges to the population mean $\mathbb{E}[B(\omega,0,\xi)]$. In other words, for $\mathbb{P}$-almost every $\omega$ \begin{align}\label{B-ergod2} \lim_{n\to\infty} \frac{B(\omega,0,n\xi)}n=\mathbb{E}[B(0,\xi)]. \end{align} Now use the cocycle property again to write \[B(0,\xi)=\sum_{i=0}^{\xi\cdot e_1-1}B(ie_1,(i+1)e_1)+\sum_{j=0}^{\xi\cdot e_2-1}B((\xi\cdot e_1)e_1+je_2,(\xi\cdot e_1)e_1+(j+1)e_2).\] The summands in the first sum are shifted copies of $B(0,e_1)$ and the summands in the second sum are shifted copies of $B(0,e_2)$. Hence, taking expectations we get $\mathbb{E}[B(0,\xi)]=\mathbb{E}[B(0,e_1)]\xi\cdot e_1+\mathbb{E}[B(0,e_2)]\xi\cdot e_2=\overline B\cdot\xi$, which combined with \eqref{B-ergod2} proves the claim of the theorem. \end{proof} \begin{proof}[Proof of Theorem \ref{direction}] Let us abbreviate and write $x_n$ for $x^{u,\xi,j}_n$. Let $\zeta\in\mathcal U$ be a (possibly random) limit point of $x_n/n$, i.e.\ there exists a (possibly random) subsequence $n_j$ such that $x_{n_j}/{n_j}\to\zeta$. By Lemma \ref{B-geo} we know $x_{0,\infty}$ is a geodesic and by its definition it moves along the smallest gradient of $B^\xi$.
By \eqref{bonus} \[G_{x_0,x_n}=B^\xi(x_0,x_n).\] Divide by $n$, apply this with $n=n_j$, and use \eqref{shape} and \eqref{B-shape} to deduce that \[g(\zeta)=\mathbb{E}[B^\xi(0,e_1)]\zeta\cdot e_1+\mathbb{E}[B^\xi(0,e_2)]\zeta\cdot e_2.\] Apply the first equality in \eqref{B-g} to get \[g(\zeta)=\zeta\cdot\nabla g(\xi).\] Combine with \eqref{euler} to get \[g(\zeta)-g(\xi)=(\zeta-\xi)\cdot\nabla g(\xi).\] Then the function \[f(t)=g(t\xi+(1-t)\zeta)-g(\xi)-(t\xi+(1-t)\zeta-\xi)\cdot\nabla g(\xi),\quad t\in[0,1],\] satisfies $f(0)=f(1)=0$ and \[\lim_{\varepsilon\searrow0}\frac{f(1)-f(1-\varepsilon)}\varepsilon=-\lim_{\varepsilon\searrow0}\frac{g(\xi+\varepsilon(\zeta-\xi))-g(\xi)}\varepsilon+(\zeta-\xi)\cdot\nabla g(\xi)=0.\] Since $f$ is also concave, it is identically $0$ and thus for all $t\in[0,1]$ \[g(t\xi+(1-t)\zeta)-g(\xi)=(t\xi+(1-t)\zeta-\xi)\cdot\nabla g(\xi)=(1-t)(\zeta-\xi)\cdot\nabla g(\xi).\] This says $g$ is affine on $\{t\xi+(1-t)\zeta:0\le t\le1\}$ and thus $\zeta\in\mathcal U_\xi$. The theorem is proved. \end{proof} Now that we know that geodesics generated using Busemann functions $B^\xi$ have an asymptotic direction, we can prove the same thing about all geodesics. \index{geodesic!direction} \begin{theorem}\label{direction2} {\rm\cite{Geo-Ras-Sep-17-ptrf-1}} Make the same assumptions as in Theorem \ref{Bus:thm}. With probability one, any geodesic is asymptotically directed into $\mathcal U_\xi$ for some $\xi\in\mathcal U$. \end{theorem} The proof will need one more fact about the geodesics $x_{0,\infty}^{u,\xi,j}$. \begin{lemma}\label{order} {\rm\cite{Geo-Ras-Sep-17-ptrf-1}} For all $n\ge m$, $x_{m,n}^{u,\xi,1}$ is the right-most geodesic between its two endpoints: if $y_{m,n}$ is a geodesic between $x_m^{u,\xi,1}$ and $x_n^{u,\xi,1}$, then we have $y_k\cdot e_1\le x_k^{u,\xi,1}\cdot e_1$ for $m\le k\le n$.
Similarly, $x_{m,n}^{u,\xi,2}$ is the left-most geodesic between its two endpoints. \end{lemma} \begin{proof} We will prove the claim about $x_{m,n}^{u,\xi,1}$, the other one being symmetric. Abbreviate this path by writing $x_{m,n}$. Take $y_{m,n}$ as in the claim. In particular, $y_m=x_m$. For starters we want to prove that $y_{m+1}\cdot e_1\le x_{m+1}\cdot e_1$. For this, we only need to consider the case when $x_{m+1}=x_m+e_2$, for the inequality clearly holds in the other case. Since we are using a superscript $j=1$ it cannot be that $B^\xi(x_m,x_m+e_1)=B^\xi(x_m,x_m+e_2)$, for otherwise the path would have taken an $e_1$-step out of $x_m$. Since the path always takes a step along the smaller $B^\xi$ gradient and since $B^\xi$ recovers, we conclude that in the case at hand we have $\omega_{x_m}=B^\xi(x_m,x_m+e_2)<B^\xi(x_m,x_m+e_1)$. Now, recovery and the cocycle property imply that $G_{x,y}\le B^\xi(x,y)$ for any $x\le y$. Combine this with \eqref{bonus} and the cocycle property again to get \begin{align*} \omega_{x_m}+G_{x_m+e_1,x_n} &\le B^\xi(x_m,x_m+e_2)+B^\xi(x_m+e_1,x_n)\\ &<B^\xi(x_m,x_m+e_1)+B^\xi(x_m+e_1,x_n)=B^\xi(x_m,x_n)=G_{x_m,x_n}. \end{align*} Therefore, no geodesic from $x_m$ to $x_n$ can go through $x_m+e_1$ and we have $y_{m+1}=x_m+e_2=x_{m+1}$. Now repeat this argument every time $x_{m,n}$ and $y_{m,n}$ intersect to see that the latter never goes to the ``right'' of the former. The lemma is proved. \end{proof} One can squeeze the proof of the above lemma a little bit more to get the following interesting result. \begin{theorem}\label{left} {\rm\cite{Geo-Ras-Sep-17-ptrf-1}} Make the same assumptions as in Theorem \ref{Bus:thm}. \begin{enumerate}[\ \ {\rm(}i{\rm)}] \item\label{left-i} Fix $\xi\in\Uset^\circ$.
With probability one and for all $u\in\mathbb{Z}^2$, $x_{0,\infty}^{u,\xi,1}$ is the right-most geodesic directed into $\mathcal U_\xi$ and $x_{0,\infty}^{u,\xi,2}$ is the left-most geodesic directed into $\mathcal U_\xi$. \item\label{left-ii} With probability one and for any $u\in\mathbb{Z}^2$, every infinite geodesic out of $u$ stays between $x_{0,\infty}^{u,\xi,1}$ and $x_{0,\infty}^{u,\xi,2}$ for some $\xi\in\Uset^\circ$. \end{enumerate} \end{theorem} \begin{proof}[Proof of Theorem \ref{direction2}] First, observe that although we proved Theorem \ref{direction} and Lemma \ref{order} for a fixed $\xi\in\Uset^\circ$, they both hold simultaneously (i.e.\ with one null set thrown away) for all $\xi\in\Uset^\circ\cap\mathbb{Q}^2$, which is countable and dense in $\Uset^\circ$. Assume that for some geodesic $x_{0,\infty}$, $x_n/n$ has limit points in both $\mathcal U_\zeta$ and $\mathcal U_\eta$ with $\zeta,\eta\in\mathcal U$ and $\mathcal U_\zeta\ne\mathcal U_\eta$. We can assume $\zeta\cdot e_1<\eta\cdot e_1$. Since we have assumed $g$ to be differentiable, there must exist at least one (and in fact infinitely many) point(s) $\xi\in\Uset^\circ\cap\mathbb{Q}^2$ such that \begin{align}\label{aux} \zeta\cdot e_1<\xi\cdot e_1<\eta\cdot e_1,\quad\mathcal U_\xi\not=\mathcal U_\zeta,\quad\text{and}\quad\mathcal U_\xi\not=\mathcal U_\eta. \end{align} Let $x_0=u$ (the starting point of the geodesic under study). Since we have shown that the geodesic $x_{0,\infty}^{u,\xi,1}$ is asymptotically directed into $\mathcal U_\xi$, the ordering in \eqref{aux} implies that $x_{0,\infty}$ passes infinitely often to the left of $x_{0,\infty}^{u,\xi,1}$. But then Lemma \ref{order} implies that once the former goes strictly to the left of the latter, it has to remain (weakly) on that side forever.
A similar argument shows that $x_{0,\infty}$ must also eventually stay to the right of $x_{0,\infty}^{u,\xi,2}$. In other words, $x_{0,\infty}$ eventually stays between $x_{0,\infty}^{u,\xi,1}$ and $x_{0,\infty}^{u,\xi,2}$. But both these geodesics are directed into $\mathcal U_\xi$. Hence, so is $x_{0,\infty}$, which contradicts the assumption that $x_n/n$ has limit points in $\mathcal U_\zeta$ and $\mathcal U_\eta$. \end{proof} Theorems \ref{direction} and \ref{direction2} have a very nice consequence when we know more about the regularity of $g$. \begin{corollary}\label{nice-cor} {\rm\cite{Geo-Ras-Sep-17-ptrf-1}} Make the same assumptions as in Theorem \ref{Bus:thm}. Assume also that $g$ is strictly concave. Then \begin{enumerate}[\ \ {\rm(}i{\rm)}] \item\label{cor1} For any given direction, with probability one, out of any given point, there exists an infinite geodesic going in this direction: \[\forall\xi\in\mathcal U:\quad\mathbb{P}\big\{\forall u\in\mathbb{Z}^2\ \exists x_{0,\infty}\text{ geodesic }:x_n/n\to\xi\big\}=1;\] \item\label{cor2} With probability one, every infinite geodesic has an asymptotic direction: \[\mathbb{P}\big\{\forall x_{0,\infty}\text{ geodesic }\ \exists\xi\in\mathcal U:x_n/n\to\xi\big\}=1.\] \end{enumerate} \end{corollary} This simply follows from the fact that if $g$ is strictly concave, then $\mathcal U_\xi=\{\xi\}$ for all $\xi\in\mathcal U$. Strict concavity is still an open question, but it is believed to hold in general, either when the maximum of $\omega_0$ does not percolate or outside the flat segment that occurs when the maximum does percolate (see Theorem \ref{flat:thm}). The claims in the above corollary appeared before in Proposition 7 of \cite{Fer-Pim-05} for the solvable model where weights $\omega_x$ are exponentially distributed.
Note that in this case formula \eqref{shape:exp} gives an explicit expression for $g$ and we can check directly that $g$ is indeed strictly concave. The approach used by \cite{Fer-Pim-05} follows the ideas of Licea and \index{Newman, Charles} Newman \cite{Lic-New-96} for nearest-neighbor \index{first-passage percolation (FPP)} \index{FPP} first-passage percolation (FPP). In \cite{Lic-New-96} the authors make a certain global curvature assumption on $g$ and use it to control how much infinite geodesics can wander, proving existence and directedness of infinite geodesics. They also use a lack-of-space argument to prove \index{geodesic!coalescence} coalescence (i.e.\ merger) of geodesics with a given asymptotic direction. This is the only method known to date for proving coalescence of directional geodesics. The same idea was adapted by \cite{Fer-Pim-05} to the directed LPP model with exponential weights and then by \cite{Geo-Ras-Sep-17-ptrf-1} to the general weights setting. \begin{theorem}\label{coal:thm} {\rm\cite{Geo-Ras-Sep-17-ptrf-1}} Make the same assumptions as in Theorem \ref{Bus:thm}. Fix $\xi\in\Uset^\circ$. With probability one and for all $u,v\in\mathbb{Z}^2$ the right-most geodesics directed into $\mathcal U_\xi$ and starting at $u$ and at $v$ coalesce: there exist $m,n\ge0$ such that $x_{m,\infty}^{u,\xi,1}=x_{n,\infty}^{v,\xi,1}$. The same claim holds for the left-most geodesics. \end{theorem} Here is a very rough and high-level sketch of how such a coalescence result is proved. Details can be found in Appendix A of \cite{Geo-Ras-Sep-15} (which is an extended version of \cite{Geo-Ras-Sep-17-ptrf-2}). See also the proof of Theorem 2.3 in \cite{ch:Hanson}. First observe that if $x_{0,\infty}^{u,\xi,1}$ and $x_{0,\infty}^{v,\xi,1}$ ever intersect, then from there on they follow the same evolution (smallest $B^\xi$ increment and increment $e_1$ in case of a tie).
Therefore, the task is really to prove that they eventually intersect. By stationarity, the existence of two nonintersecting geodesics implies we can find at least three nonintersecting ones. A local modification of the weights turns the middle geodesic of the triple into a geodesic that stays disjoint from all geodesics that emanate from sufficiently far away. By stationarity again, at least $\delta L^2$ such disjoint geodesics emanate from an $L\times L$ square. This gives a contradiction because there are only $2L$ boundary points for these geodesics to exit through. \begin{corollary}\label{B-not-cont} Make the same assumptions as in Theorem \ref{Bus:thm}. Then with probability one \[\{B^\xi(0,e_1):\xi\in\Uset^\circ\}\subset\{G_{0,z}-G_{e_1,z}:z\in e_1+\mathbb{Z}_+^2\}.\] In particular, since the set on the right-hand side is countable, the map $\xi\mapsto B^\xi(0,e_1)$ cannot be continuous on all of $\Uset^\circ$. \end{corollary} \begin{proof} For each $\xi\in\Uset^\circ\cap\mathbb{Q}^2$ consider the geodesics $x_{0,\infty}^{0,\xi,1}$ and $x_{0,\infty}^{e_1,\xi,1}$. Theorem \ref{direction} says that with probability one these geodesics are asymptotically directed into $\mathcal U_\xi$. Then Theorem \ref{Bus:thm} implies that almost surely and for all $\xi\in\Uset^\circ\cap\mathbb{Q}^2$ \[B^\xi(0,e_1)=\lim_{n\to\infty}(G_{0,x_n^{0,\xi,1}}-G_{e_1,x_n^{0,\xi,1}}).\] By Theorem \ref{coal:thm} the two geodesics $x_{0,\infty}^{0,\xi,1}$ and $x_{0,\infty}^{e_1,\xi,1}$ coalesce, say at a point we denote by $z^\xi$. This leads to \[B^\xi(0,e_1)=G_{0,z^\xi}-G_{e_1,z^\xi}.\] Therefore, we have almost surely \[\{B^\xi(0,e_1):\xi\in\Uset^\circ\cap\mathbb{Q}^2\}\subset\{G_{0,z}-G_{e_1,z}:z\in e_1+\mathbb{Z}_+^2\}.\] The claim now follows from monotonicity \eqref{monotone}. \end{proof} Note that the above result does not contradict \eqref{B-cont}.
Together, the two results say that there is zero probability that a given (fixed) $\xi\in\Uset^\circ$ happens to be a discontinuity point of $\xi\mapsto B^\xi(0,e_1)$. When the weights have a continuous distribution one has a \index{geodesic!unique} {\sl unique} geodesic between any two given points. What about infinite directional geodesics? The answer is also in the positive. \begin{theorem}\label{unique:thm} {\rm\cite{Geo-Ras-Sep-17-ptrf-1}} Make the same assumptions as in Theorem \ref{Bus:thm}. Assume also that $\omega_0$ has a continuous distribution. Fix $\xi\in\Uset^\circ$. Then with probability one, out of any $u\in\mathbb{Z}^2$, there exists a unique infinite geodesic directed into $\mathcal U_\xi$. \end{theorem} \begin{proof} In view of Theorem \ref{left}\eqref{left-ii}, it is enough to show that $x_{0,\infty}^{u,\xi,1}=x_{0,\infty}^{u,\xi,2}$. This in turn follows from the coalescence result. Indeed, assume the two geodesics do not match. We can assume that they separate right away, otherwise just consider the paths starting at the separation point. Under this assumption, it must be the case that $\omega_u=B^\xi(u,u+e_1)=B^\xi(u,u+e_2)$, for otherwise both geodesics would have followed the smaller $B^\xi$-gradient and thus stayed together. Now, the above coalescence result implies that $x_{0,\infty}^{u,\xi,1}$ and $x_{0,\infty}^{u+e_2,\xi,1}$ will eventually coalesce, say at the point $v=x_n^{u,\xi,1}=x_{n-1}^{u+e_2,\xi,1}$. But then applying \eqref{bonus} we would have \[\sum_{i=0}^{n-1}\omega(x_i^{u,\xi,1})=B^\xi(u,v)=B^\xi(u,u+e_2)+B^\xi(u+e_2,v)=\omega_u+\sum_{i=0}^{n-2}\omega(x_i^{u+e_2,\xi,1}),\] which says that the weights add up to the same amount along two different paths. This happens with zero probability if the weights have a continuous distribution. Hence the two geodesics can never separate and the theorem is proved. (We used $\omega(x)$ to denote $\omega_x$, for aesthetic reasons.)
\end{proof} One of the things Theorem \ref{unique:thm} is really saying is that one does not need to worry about breaking ties among $B^\xi$ gradients. Let us spell this out as a separate result. \begin{theorem}\label{tie:thm} {\rm\cite{Geo-Ras-Sep-17-ptrf-1}} Make the same assumptions as in Theorem \ref{Bus:thm}. Assume also that $\omega_0$ has a continuous distribution. Fix $\xi\in\Uset^\circ$. Then $\mathbb{P}\{\exists u: B^\xi(u,u+e_1)=B^\xi(u,u+e_2)\}=0.$ \end{theorem} \begin{proof} The proof is quite simple: when a tie happens at $u$, the geodesics $x_{0,\infty}^{u,\xi,1}$ and $x_{0,\infty}^{u,\xi,2}$ separate right away. Since we just showed this cannot happen when the weights have a continuous distribution, the theorem follows. \end{proof} As the last result of this section we address the existence of doubly-infinite geodesics (or rather the lack thereof). \index{geodesic!doubly-infinite} \index{bigeodesic} \begin{theorem}\label{double:thm} {\rm\cite{Geo-Ras-Sep-17-ptrf-1}} Make the same assumptions as in Theorem \ref{Bus:thm}. Assume also that $\omega_0$ has a continuous distribution. Fix $\xi\in\Uset^\circ$. Then \[\mathbb{P}\big\{\exists x_{-\infty,\infty}\text{ geodesic}:x_{0,\infty}\text{ is directed into }\mathcal U_\xi\big\}=0.\] \end{theorem} \begin{proof}[Sketch of the proof] If such a doubly-infinite geodesic existed with positive probability, then by stationarity we would have another (different) one $y_{-\infty,\infty}$, also directed into $\mathcal U_\xi$. By the coalescence result, $x_{0,\infty}$ and $y_{0,\infty}$ coalesce. Modulo renumbering the indices, we can assume that $x_{0,\infty}=y_{0,\infty}$ but $x_{-1}\ne y_{-1}$.
Because the weights have a continuous distribution, it cannot be that $x_{-n}=y_{-n}$ for any $n>1$ (otherwise the weights would add up to the same amount $G_{x_{-n},0}$ along the two different paths $x_{-n,0}$ and $y_{-n,0}$). We thus have a bi-infinite three-armed ``fork'' embedded in $\mathbb{Z}^2$. But by stationarity this picture will repeat infinitely often, allowing us to embed a binary tree into $\mathbb{Z}^2$, with its vertices having a positive density. This embedding is not possible and thus we have a contradiction. (Such a tree grows exponentially fast, while the boundary of a box in $\mathbb{Z}^2$ grows only linearly in the diameter of the box.) See Figure \ref{bi-infinite:fig} for an illustration and \cite{Geo-Ras-Sep-17-ptrf-1} for the details.
\end{proof}

\begin{figure}
\begin{center}
\begin{tikzpicture}[>=latex, scale=0.5]
\draw[line width=1pt](-2,-0.5)--(-2,0)--(2,0)--(2,1)--(3,1)--(3,2)--(5,2)--(5,3)--(8,3)--(8,4)--(10,4);
\draw[line width=1pt](-1,-1.5)--(-1,-1)--(0,-1)--(0,0);
\draw[line width=1pt](8,0.5)--(8,1)--(9,1)--(9,4);
\draw[line width=1pt](-0.5,-4)--(1,-4)--(1,-2)--(4,-2)--(4,-1)--(5,-1)--(5,1)--(6,1)--(6,3);
\draw[line width=1pt](2.5,-5)--(3,-5)--(3,-2);
\draw[line width=1pt,->] plot [smooth] coordinates {(10,4) (10.3,4.2) (10.5,5) (11.5,5) (12.5,6)};
\draw[line width=1pt,->] plot [smooth] coordinates {(2.5,-5) (1.8,-5.1) (1.7,-5.8) (1,-6.5)};
\draw[line width=1pt,->] plot [smooth] coordinates {(-0.5,-4) (-1.2,-4.1) (-1.3,-4.8) (-2,-5)};
\draw[line width=1pt,->] plot [smooth] coordinates {(-1,-1.5) (-1.2,-2.5) (-1.9,-2.7)};
\draw[line width=1pt,->] plot [smooth] coordinates {(-2,-0.5) (-2.2,-1.5) (-3.3,-1.7) (-3.5,-2.4) (-4.5,-2.5)};
\draw[line width=1pt,->] plot [smooth] coordinates {(8,0.5) (7.9,-0.5) (6.9,-1.5)};
\draw[fill=black](0,0)circle(2mm);
\draw[fill=black](3,-2)circle(2mm);
\draw[fill=black](6,3)circle(2mm);
\draw[fill=black](9,4)circle(2mm);
\draw(12,4.5) node[inner sep=0.6pt,sloped,rotate=30]{\small into $\mathcal U_\xi$};
\end{tikzpicture}
\end{center}
\caption{\small The tree of bi-infinite geodesics. Bullets mark the triple split points. They have a positive density.}
\label{bi-infinite:fig}
\end{figure}

The proof of the above theorem used the coalescence result, which requires us to fix the direction of the geodesics. But we will see in the next section that there are random directions in which there are multiple geodesics out of, say, the origin. So are there then doubly-infinite geodesics in these random directions? The answer is still expected to be in the negative, but a proof remains elusive.

\section{The competition interface}\label{cif:sec}
\index{competition interface}

We continue to assume the conditions of Theorem \ref{Bus:thm} to be satisfied. In particular, the shape $g$ is still assumed differentiable on $(0,\infty)^2$. Let us also assume in this section that the weights $\omega_x$ have a continuous distribution. Recall our earlier definition of the competition interface separating the two geodesic trees rooted at $e_1$ and $e_2$ (see Figure \ref{CIF:fig}). Denote this up-right path of sites in $\mathbb{Z}^2+(e_1+e_2)/2$ by $\varphi_n$. In particular, $\varphi_0=(e_1+e_2)/2$. Does $\varphi_n$ have an asymptotic direction? What can we say about this direction? Can we describe $\varphi_n$ using Busemann functions, as we did for geodesics in the previous section? By monotonicity \eqref{monotone} of the Busemann functions we have that \[B^\xi(0,e_1)-B^\xi(0,e_2)\] is monotone in $\xi\in\Uset^\circ\cap\mathbb{Q}^2$. Namely, the above is nonincreasing as $\xi\cdot e_1$ increases in $(0,1)\cap\mathbb{Q}$.
(The reason for only considering rational directions is that the limit in \eqref{B-limit} holds for configurations $\omega$ outside a set of measure zero, but this null set depends on the direction $\xi$. Thus, the limit can be claimed to hold almost surely for only countably many directions at once.) By Theorem \ref{tie:thm} we have $B^\xi(0,e_1)\ne B^\xi(0,e_2)$ almost surely for all $\xi\in\Uset^\circ\cap\mathbb{Q}^2$. If \[\mathbb{P}\big\{B^\xi(0,e_1)>B^\xi(0,e_2)\ \forall \xi\in\Uset^\circ\cap\mathbb{Q}^2\big\}>0,\] then, due to Theorem \ref{left}\eqref{left-ii}, for the configurations in the above event all infinite geodesics out of $0$ must start with an $e_2$-step. This can be contradicted by following an argument similar to the proof of Lemma \ref{exist-geo} to show that, with probability one, there exists at least one infinite geodesic out of $0$ that takes a first step $e_1$. A similar reasoning applies to the case where with positive probability $B^\xi(0,e_1)<B^\xi(0,e_2)$ for all $\xi\in\Uset^\circ\cap\mathbb{Q}^2$. Then, with probability one there exists a unique $\xi^*\in\Uset^\circ$ such that for all $\xi\in\Uset^\circ\setminus\{\xi^*\}$ \[B^\xi(0,e_1)>B^\xi(0,e_2)\text{ if }\xi\cdot e_1<\xi^*\cdot e_1\quad\text{and}\quad B^\xi(0,e_1)<B^\xi(0,e_2)\text{ if }\xi\cdot e_1>\xi^*\cdot e_1.\] One thing this says is that geodesics originating at $0$ that are directed into $\mathcal U_\xi$ with $\xi\cdot e_1<\xi^*\cdot e_1$ (i.e.\ $\xi$ to the left of $\xi^*$) must start with an $e_2$ step. Similarly, geodesics originating at $0$ that are directed into $\mathcal U_\xi$ with $\xi\cdot e_1>\xi^*\cdot e_1$ (i.e.\ $\xi$ to the right of $\xi^*$) must start with an $e_1$ step. A slightly sharper version of this argument leads to the following. \index{competition interface} \begin{theorem}\label{cif:thm} {\rm\cite{Geo-Ras-Sep-17-ptrf-1}} Make the same assumptions as in Theorem \ref{Bus:thm}. Assume also that $\omega_0$ has a continuous distribution.
\begin{enumerate}[\ \ {\rm(}i{\rm)}] \item\label{cif-lln} With probability one, the competition interface $\varphi_n$ has asymptotic direction $\xi^*$: \[\mathbb{P}\big\{\varphi_n/n\to\xi^*\big\}=1.\] \item\label{2geo} With probability one, there exist two infinite geodesics out of $0$ with asymptotic direction $\xi^*$, one going through $e_1$ and the other through $e_2$: \[\mathbb{P}\big\{\exists x^1_{0,\infty},x^2_{0,\infty}\text{ geodesics }:x^1_1=e_1,\ x^2_1=e_2,\ x^1_n/n\to\xi^*,\ x^2_n/n\to\xi^*\big\}=1.\] \item\label{cont-dist} $\xi^*$ is a genuine random variable that has a continuous distribution and is supported outside the linear segments of $g$ {\rm(}if any{\rm)}: \[\forall\xi\in\mathcal U:\quad\mathbb{P}\{\xi^*=\xi\}\le\mathbb{P}\{\xi^*\in\mathcal U_\xi\}=0.\] \item\label{cid-supp} $\xi^*$ is supported on all of $\mathcal U$, apart from the linear segments of $g$ {\rm(}if any{\rm)}: for any open interval (i.e.\ connected subset) $\mathcal V\subset\mathcal U$ such that $\mathcal U_\xi=\{\xi\}$ $\forall\xi\in\mathcal V$, we have $\mathbb{P}\{\xi^*\in\mathcal V\}>0$. \end{enumerate} \end{theorem} It is conjectured that $g$ is strictly concave when the weights $\omega_x$ have a continuous distribution. Hence, $\xi^*$ is expected to be supported on all of $\mathcal U$. Since the weights are assumed to have a continuous distribution, the two geodesics in Theorem \ref{cif:thm}\eqref{2geo} cannot coalesce. This does not contradict Theorem \ref{coal:thm} because the direction $\xi^*$ is random while the coalescence result was about fixed deterministic directions. In fact, this is one way to see why claim \eqref{cont-dist} is true. In the solvable case of exponentially distributed weights, one can compute the distribution of $\xi^*$ explicitly. \begin{lemma} Assume $\omega_0$ is exponentially distributed with rate $\theta>0$.
Then for $a\in(0,1)$ \begin{align}\label{cid-dist} \mathbb{P}\{\xi^*\cdot e_1>a\}=\frac{\sqrt{1-a}}{\sqrt{a}+\sqrt{1-a}}\,. \end{align} If we define the angle $\xi^*$ makes with $e_1$ by $\theta^*=\tan^{-1}(\xi^*\cdot e_2/\xi^*\cdot e_1)$, then for $t\in(0,\pi/2)$ \begin{align}\label{theta-dist} \mathbb{P}\{\theta^*\le t\}=\frac{\sqrt{\sin t}}{\sqrt{\sin t}+\sqrt{\cos t}}\,. \end{align} \end{lemma} \begin{proof} Recall that $B^\xi(0,e_i)$ has an exponential distribution with rate \[\lambda_i=\frac{\theta\sqrt{\xi\cdot e_i}}{\sqrt{\xi\cdot e_1}+\sqrt{\xi\cdot e_2}}.\] Furthermore, $B^\xi(0,e_1)$ and $B^\xi(0,e_2)$ are independent. See Section \ref{fixed-pt} below. Now compute \begin{align*} \mathbb{P}\{\xi^*\cdot e_1>a\} &=\mathbb{P}\{B^{ae_1+(1-a)e_2}(0,e_1)>B^{ae_1+(1-a)e_2}(0,e_2)\}\\ &=\int_0^\infty \lambda_2 e^{-\lambda_2 s} \mathbb{P}\{B^{ae_1+(1-a)e_2}(0,e_1)>s\}\,ds\\ &=\int_0^\infty \lambda_2 e^{-\lambda_2 s} e^{-\lambda_1 s}\,ds=\frac{\lambda_2}{\lambda_1+\lambda_2}\,, \end{align*} from which \eqref{cid-dist} follows. For the distribution of $\theta^*$ we have for $t\in(0,\pi/2)$ \[\mathbb{P}\{\theta^*\le t\}=\mathbb{P}\{\xi^*\cdot e_2\le \xi^*\cdot e_1\tan t\}=\mathbb{P}\{\xi^*\cdot e_1\ge 1/(1+\tan t)\}=\frac{\sqrt{\tan t}}{1+\sqrt{\tan t}}\,,\] which is \eqref{theta-dist}. \end{proof} The competition interface of the \index{corner growth model (CGM)!exponential} \index{CGM!exponential} exponential corner growth model maps to a certain object called the \index{second-class particle} {\sl second-class particle} in TASEP, so this object has been studied from both perspectives.
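As a sanity check on the algebra in the lemma's proof (an illustration added here, not part of the original argument), one can verify symbolically that the integral equals $\lambda_2/(\lambda_1+\lambda_2)$ and reduces to $\sqrt{1-a}/(\sqrt{a}+\sqrt{1-a})$ for $\xi=ae_1+(1-a)e_2$, and that substituting $a=1/(1+\tan t)$ recovers the stated distribution of the angle $\theta^*$.

```python
import sympy as sp

a, t, s, theta = sp.symbols('a t s theta', positive=True)

# Rates of the two independent exponentials B^xi(0,e_1), B^xi(0,e_2)
# for xi = a*e1 + (1-a)*e2, so that xi.e1 = a and xi.e2 = 1-a
lam1 = theta * sp.sqrt(a) / (sp.sqrt(a) + sp.sqrt(1 - a))
lam2 = theta * sp.sqrt(1 - a) / (sp.sqrt(a) + sp.sqrt(1 - a))

# P{xi*.e1 > a} = int_0^oo lam2 e^{-lam2 s} e^{-lam1 s} ds = lam2/(lam1+lam2)
rate = sp.simplify(lam1 + lam2)  # simplifies to theta
prob = sp.integrate(lam2 * sp.exp(-rate * s), (s, 0, sp.oo))
assert sp.simplify(prob - sp.sqrt(1 - a) / (sp.sqrt(a) + sp.sqrt(1 - a))) == 0

# Substituting a = 1/(1+tan t) should give sqrt(sin t)/(sqrt(sin t)+sqrt(cos t));
# check numerically at a sample point t in (0, pi/2)
at = 1 / (1 + sp.tan(t))
lhs = (sp.sqrt(1 - at) / (sp.sqrt(at) + sp.sqrt(1 - at))).subs(t, sp.pi / 7)
rhs = (sp.sqrt(sp.sin(t)) / (sp.sqrt(sp.sin(t)) + sp.sqrt(sp.cos(t)))).subs(t, sp.pi / 7)
assert abs(float(lhs - rhs)) < 1e-12
```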
In this case, a weak-limit version of Theorem \ref{cif:thm}\eqref{cif-lln} follows from translating a result of Ferrari and Kipnis \cite{Fer-Kip-95} on the limit of the scaled location of the second-class particle in TASEP to LPP language. Almost sure convergence was shown by Mountford and Guiol \cite{Mou-Gui-05} using concentration inequalities and the TASEP variational formula of Sepp\"al\"ainen \cite{Sep-99-aop}. Concurrently, \cite{Fer-Pim-05} gave a different proof of almost sure convergence of $\varphi_n/n$ by applying the techniques of directed geodesics and then obtained the distribution \eqref{theta-dist} of the angle of the asymptotic direction $\xi^*$ from the TASEP results of \cite{Fer-Kip-95}. Later, these results on the direction of the competition interface were extended from the quadrant to larger classes of initial profiles in two rounds: first by \cite{Fer-Mar-Pim-09}, still with TASEP and geodesic techniques, and then by \cite{Cat-Pim-13} using their earlier results on Busemann functions \cite{Cat-Pim-12}. Coupier \cite{Cou-11} also relied on the TASEP connection to sharpen the geodesics results of \cite{Fer-Pim-05}. He showed that with probability one there are no triple geodesics (out of the origin) in any direction. \section{History} We now give a quick overview of the last twenty or so years of research on Busemann functions and geodesics in percolation. As we mentioned above, Licea and \index{Newman, Charles} Newman \cite{Lic-New-96} were the first to introduce a technique for proving existence, uniqueness, and coalescence of directional geodesics under a global curvature assumption on the limit shape to control how much geodesics deviate from a straight line, and then, as a consequence, deducing the existence of Busemann functions. See also the summary in Newman's ICM paper \cite{New-95}.
Although verifying the curvature assumption for percolation models with general weights remains an open problem, it can be done in a number of special cases. Thus, Licea and Newman's approach was applied to directed LPP with exponential weights \cite{Fer-Pim-05} and to several other (non-lattice) models built on homogeneous Poisson processes \cite{How-New-01,Wut-02,Cat-Pim-11,Bak-Cat-Kha-14}. In the case of the \index{corner growth model (CGM)!exponential} \index{CGM!exponential} exponential corner growth model, another set of tools comes from its connection with TASEP, as explained at the end of Section \ref{cif:sec}. The idea of deducing existence and uniqueness of stationary processes by studying geodesic-like objects has also been used in random dynamical systems. For example, this is how \cite{E-etal-00} and its extensions \cite{Hoa-Kha-03,Itu-Kha-03,Bak-07,Bak-Kha-10,Bak-13} show existence of invariant measures for the Burgers equation with random forcing. These works treated cases where space is compact or essentially compact. To make progress in a non-compact case, the approach of Newman et al.\ was adopted again in \cite{Bak-Cat-Kha-14,Bak-16,Bak-Li-16-}. The approach we presented in this chapter is the one we took in \cite{Geo-Ras-Sep-17-ptrf-1,Geo-Ras-Sep-17-ptrf-2} and is the very opposite of the above. Using the connection to queues in tandem, the Busemann limits are constructed a priori in the form of stationary cocycles that come from certain \index{invariant measure} {\sl invariant} measures of the queueing system. Using a certain monotonicity, the cocycles are then compared to the gradients of passage times. The monotonicity, ergodicity, and differentiability (rather than curvature) of the limit shape give the control that proves the Busemann limits. After establishing existence of Busemann functions, we use them to prove existence, directedness, coalescence, and uniqueness results about the geodesics.
A similar approach was carried out by Damron and Hanson \cite{Dam-Han-14,Dam-Han-17} for the standard first-passage percolation model. They first construct (generalized) Busemann functions from weak subsequential limits of first-passage time differences. These weak Busemann limits can be regarded as a counterpart of our stationary cocycles. This then gives access to properties of geodesics, while weakening the need for the global curvature assumption. An independent line of work is that of Hoffman \cite{Hof-05,Hof-08} on standard first-passage percolation, with general weights and without any regularity assumptions on the limit shape. Assuming that all semi-infinite geodesics coalesce, \cite{Hof-05} constructed a Busemann function and used it to derive a contradiction, concluding that there are at least two semi-infinite geodesics. (\cite{Gar-Mar-05} gave an independent proof with a different method.) \cite{Hof-08} extended this to at least four geodesics. \section{Next: fluctuations} The results we presented can be thought of as analogues of the law of large numbers. A natural follow-up is to study the analogue of the central limit theorem, i.e.\ questions concerning the size of the deviations of the passage times $G_{0,\fl{n\xi}}$ from their asymptotic limit $n g(\xi)$ and of the geodesics, both finite (i.e.\ from $0$ to $\fl{n\xi}$) and infinite (i.e.\ going in direction $\xi$), from a straight line. Due to the maximum in the definition of the passage times, $G_{0,\fl{n\xi}}-ng(\xi)$ should be more tightly concentrated than a centered sum of independent and identically distributed random variables. That is, its fluctuations should be smaller than order $n^{1/2}$. On the other hand, one would expect the geodesic paths to wander a great deal in search of favorable weights. For example, if $x_{0,\infty}$ is a path that follows the smallest $B^\xi$ gradient, then it should be the case that $x_n-n\xi$ has fluctuations of order larger than $n^{1/2}$.
We will see in articles \cite{ch:Seppalainen} and \cite{ch:Corwin} how the above can be answered quite precisely for the solvable models: $G_{0,\fl{n\xi}}-ng(\xi)$ fluctuates on the order $n^{1/3}$ (and we can even determine the limiting \index{Tracy-Widom distribution} {\sl Tracy-Widom distribution} of $n^{-1/3}(G_{0,\fl{n\xi}}-ng(\xi))$), and $x_n-n\xi$ fluctuates on the order $n^{2/3}$ (but here, a limiting distribution of $(x_n-n\xi)/n^{2/3}$ is only conjectured). Just as is the case for the central limit theorem, this behavior is believed to be universal, going beyond solvable models, but this is far from being proved. See, however, \cite{Alb-Kha-Qua-14-aop} and \cite{Kri-Qua-16-} for results towards this universality conjecture. \end{document}
\begin{document} \centerline{\textsc{Inverse spectral problem for GK integrable systems.}} \centerline{\it V.V.Fock} Given a minimal bipartite graph on a torus, the main construction of Goncharov and Kenyon \cite{GK} gives a map, called the action-angle map, from an algebraic torus of dimension one less than the number of faces of the graph to the space of pairs (planar curve, a line bundle on it). The aim of this section is to solve the inverse problem, namely, given a planar curve of genus $g$ and a line bundle on it of degree $g-1$, construct a point of the algebraic torus. The key observation is that the space of pairs is birationally isomorphic to the configuration space of a collection of complete flags in an infinite dimensional vector space invariant with respect to an action of a free Abelian group with two generators. Therefore the construction of coordinates in the space of configurations of flags introduced in \cite{FG} for flag configurations in finite dimensional spaces applies with minor modifications. Before giving the formula we recall some basic facts about the combinatorics of bipartite graphs. In the appendix we briefly sketch the necessary background on planar algebraic curves and theta functions. \refstepcounter{mysection} \paragraph{\arabic{mysection}. Discrete Dirac operator and the action-angle map.\label{s:discrete}} In this section we give a very concise introduction to the Goncharov-Kenyon construction. Let $\Gamma$ be a bipartite graph with equal numbers of black and white vertices embedded into a two-dimensional torus $T$. Denote the sets of black (resp.\ white) vertices of $\Gamma$ by $B$ and $W$, and by $F$ the set of connected components of the complement to $\Gamma$ in $T$, called faces. For any graph $\Gamma$ a \textit{discrete line bundle} on it is just an association of a one-dimensional vector space $V_v$ to every vertex $v$ of $\Gamma$.
A discrete connection on a line bundle is an association to every edge of the graph of an isomorphism between the vector spaces corresponding to its ends. For a bipartite graph it amounts to a collection of isomorphisms $A_e:V_{b(e)}\to V_{w(e)}$ for every edge $e$ between the vertices $b(e)$ and $w(e)$. Having chosen a basis in each of the vector spaces, every isomorphism becomes just a multiplication by a nonzero number which we also denote by $A_e$. A collection of numbers $\boldsymbol{A}=\{A_e\}$ is called a \textit{discrete connection form} and can be interpreted as a cocycle $\boldsymbol{A}\in Z^1(\Gamma,\mathbb{C}^\times)$. Changing bases in the vector spaces amounts to changing the cocycle by a coboundary and therefore the space of connections can be identified with the cohomology group $H^1(\Gamma,\mathbb{C}^\times)$. Since the graph $\Gamma$ is embedded into the torus $T$ we have an exact sequence $$1\to H^1(T,\mathbb{C}^\times) \to H^1(\Gamma,\mathbb{C}^\times)\stackrel{d}{\to} H^2(T/\Gamma,\mathbb{C}^\times)\to H^2(T,\mathbb{C}^\times)\to 1$$ The space $H^2(T/\Gamma,\mathbb{C}^\times)$ is just the space, denoted by $\mathcal{X}_\Gamma$, of associations of nonzero complex numbers $\boldsymbol{x}=\{x_i|i\in F\}$ to faces of the graph $\Gamma$. The differential $d \boldsymbol{A}\in H^2(T/\Gamma,\mathbb{C}^\times)$ can be interpreted as a discrete curvature of the connection --- the association to every face of the composition of the isomorphisms corresponding to its sides. Denote by $\mathcal{X}_\Gamma^1 \subset \mathcal{X}_\Gamma$ the image of the map $d$. Points $\boldsymbol{x}\in \mathcal{X}_\Gamma^1$ are collections of numbers on faces with product equal to one. The exact sequence also implies that $H^1(\Gamma,\mathbb{C}^\times)$ is a principal $H^1(T,\mathbb{C}^\times)$-bundle over the base $\mathcal{X}_\Gamma^1$. Denote by $\boldsymbol{R}\in H^2(T/\Gamma,\pm 1)$ the 2-cocycle associating $-1$ to every face with the number of sides divisible by 4 and $1$ otherwise.
A \textit{Kasteleyn orientation} $\boldsymbol{K}=\{K_e|e\in E\}$ is a cochain such that $d \boldsymbol{K}=\boldsymbol{R}$. A \textit{Dirac operator} on the graph $\Gamma$ provided with a discrete connection $\boldsymbol{A}$ is a map $$\mathfrak{D}_{\boldsymbol{K}}[\boldsymbol{A}]\colon\oplus_{b}V_b\to\oplus_w V_w$$ from the sum of the spaces associated to black vertices to the sum of the spaces associated to white vertices. It is defined by its action on $V_b$ as $$\mathfrak{D}_{\boldsymbol{K}}[\boldsymbol{A}]\vert_{V_b}=\oplus_{e|b(e)=b}A_eK_e.$$ From now on we will consider graphs such that for a generic connection the Dirac operator is nondegenerate. In this case $\mathfrak{D}_{\boldsymbol{K}}[\boldsymbol{A}]$ degenerates on a subvariety of $H^1(\Gamma,\mathbb{C}^\times)$ of codimension one. Given a point $\boldsymbol{x}\in \mathcal{X}_\Gamma^1$, its preimage under the differential $d$ is isomorphic (up to a shift) to the algebraic torus $H^1(T,\mathbb{C}^\times)$. The intersection of the degeneration locus of the Dirac operator $\mathfrak{D}_{\boldsymbol{K}}[\boldsymbol{A}]$ with this torus is an algebraic curve $\Sigma_0(\boldsymbol{x})$. This curve can be compactified (see appendix \ref{s:planar}) to a curve $\Sigma(\boldsymbol{x})$ called the \textit{spectral curve} corresponding to the point $\boldsymbol{x}$ of the phase space. The kernel of $\mathfrak{D}_{\boldsymbol{K}}[\boldsymbol{A}]$ extends to $\Sigma(\boldsymbol{x})$ by continuity and defines a line bundle dual to a bundle $\mathcal{L}(\boldsymbol{x})$ (of degree $g-1$, as will be clear from the second part of this note). The map associating the pair $(\Sigma(\boldsymbol{x}),\mathcal{L}(\boldsymbol{x}))$ to a point $\boldsymbol{x}\in \mathcal{X}^1_\Gamma$ is called the \textit{action-angle map}.
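To make the definition above concrete, here is a minimal computational sketch (an illustration, not part of the original text): the Dirac operator is a matrix with rows indexed by white vertices and columns by black ones, whose $(w,b)$ entry is the sum of $A_eK_e$ over the edges $e$ joining $b$ to $w$. The toy edge list below, with the "horizontal" edges carrying spectral parameters $\lambda,\mu$, is entirely hypothetical.

```python
import sympy as sp

def dirac_matrix(blacks, whites, edges):
    """Build the Dirac operator D_K[A] as a matrix.

    `edges` is a list of (b, w, A_e, K_e): an edge from black vertex b to
    white vertex w with connection number A_e and Kasteleyn sign K_e.
    Entry (w, b) of the matrix is the sum of A_e*K_e over edges joining b, w.
    """
    row = {w: i for i, w in enumerate(whites)}
    col = {b: j for j, b in enumerate(blacks)}
    D = sp.zeros(len(whites), len(blacks))
    for b, w, A, K in edges:
        D[row[w], col[b]] += A * K
    return D

# Hypothetical toy example: 2 black and 2 white vertices on a torus,
# with two edges carrying the spectral parameters lam, mu.
lam, mu = sp.symbols('lambda mu')
edges = [
    ('b1', 'w1', 1, 1), ('b1', 'w2', 1, -1),
    ('b2', 'w1', lam, 1), ('b2', 'w2', mu, 1),
]
D = dirac_matrix(['b1', 'b2'], ['w1', 'w2'], edges)
print(sp.expand(D.det()))  # the spectral curve is det D = 0
```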
In coordinates, the equation of the curve $\Sigma(\boldsymbol{x})$ can be written as $\det \mathfrak{D}_{\boldsymbol{K}}[\boldsymbol{\lambda}d^{-1}(\boldsymbol{x})]=0$, where $\boldsymbol{\lambda}=(\lambda,\mu)\in H^1(T,\mathbb{C}^\times)$ is a cohomology class of the torus $T$. It is a Laurent polynomial equation with a Newton polygon $\Delta_\Gamma$ independent of $\boldsymbol{x}$; see Section \ref{s:graphs} and \cite{GK}. The aim of this paper is to construct the inverse map, i.e., to recover the point of $\mathcal{X}^1_\Gamma$ out of a plane algebraic curve and a line bundle on it. \refstepcounter{mysection} \paragraph{\arabic{mysection}. Bipartite graphs and Newton polygons.\label{s:graphs}} Recall that a \textit{zig-zag loop} \cite{GK} on a bipartite graph $\Gamma \subset T$ is a closed path along edges of $\Gamma$ turning maximally right at every white vertex and maximally left at every black one. Denote the set of zig-zag loops by $Z$. Every zig-zag loop $\alpha\in Z$ represents a homology class $\boldsymbol{h}_{\alpha}\in H_1(T,\mathbb{Z})$. Since there are exactly two zig-zag loops with opposite orientations passing through every edge, the sum of all such classes vanishes and therefore they form the sides of a unique convex polygon $\Delta_\Gamma\subset H_1(T,\mathbb{R})$ defined up to a shift. Let the dual surface $\Sigma$ be the surface obtained by gluing disks to the graph $\Gamma$ along every zig-zag loop. By construction the graph $\Gamma$ is embedded into $\Sigma$. The boundary of every face $i\in F$ is a curve representing a cycle $\check{\boldsymbol{h}}_i\in H_1(\Sigma,\mathbb{Z})$. Define a skew-symmetric matrix with integral entries $$\varepsilon_{ij}= \langle \check{\boldsymbol{h}}_i,\check{\boldsymbol{h}}_j\rangle,$$ with $i,j \in F$ and $\langle\cdot,\cdot\rangle$ the intersection pairing of cycles on $\Sigma$.
This matrix can be considered as an exchange matrix defining the structure of a cluster seed on $\mathcal{X}_\Gamma$ and in particular a Poisson structure on it by $\{x_i,x_j\}=\varepsilon_{ij}x_ix_j$. Since $\sum_{i\in F}\check{\boldsymbol{h}}_i=0$, we have $\sum_i \varepsilon_{ij}=0$ for any $j$ and therefore the submanifold $\mathcal{X}^1_\Gamma\subset \mathcal{X}_\Gamma$ is a Poisson submanifold. The exchange matrix $\varepsilon_{ij}$ can be defined combinatorially as the number of common edges of the faces $i$ and $j$ taken with signs. To be more precise, denote by $e_1,\ldots,e_{l(i)}$ the sides of the face $i$ taken counterclockwise and starting from the edge going from a black to a white vertex. Then $\varepsilon_{ij}=\sum_{k|e_k \in \partial j }(-1)^k$. \refstepcounter{mysection} \paragraph{\arabic{mysection}. Discrete Abel map.\label{s:graphsAbel}} Consider the free Abelian group $\mathbb{Z}^{Z}$ generated by the set of zig-zag loops $Z$. This group has a natural $\mathbb{Z}$-grading with all generators having degree one. Consider the universal cover $\tilde{T}$ of the torus $T$ and denote by $\tilde{\Gamma}$ the lift of the graph $\Gamma$ to $\tilde{T}$. Define a map $\boldsymbol{d}$ associating to every face and vertex of $\tilde{\Gamma}$ an element of the group $\mathbb{Z}^{Z}$. Fix a face $i_0$ and fix any element of $\mathbb{Z}^{Z}$ of degree 0 to be the value of $\boldsymbol{d}(i_0)$. Once this choice is made, to define the value of $\boldsymbol{d}$ on another face $i\in F$ choose a path $\gamma$ on $T$ connecting an interior point of the face $i$ to an interior point of the face $i_0$ and define $\boldsymbol{d}(i)$ by the formula \begin{equation}\label{path} \boldsymbol{d}(i)= \boldsymbol{d}(i_0)+\sum_{\alpha\in Z}\langle\gamma,\boldsymbol{h}_{\alpha}\rangle\alpha, \end{equation} where $\langle\gamma,\boldsymbol{h}_{\alpha}\rangle$ is the intersection index between the zig-zag loop $\alpha$ and the path $\gamma$.
It is obvious that the value of $\boldsymbol{d}$ does not depend on the choice of the path $\gamma$ since the sum in (\ref{path}) vanishes for any closed path. To define the value of the map $\boldsymbol{d}$ on vertices, slightly deform the zig-zag loops in such a way that they intersect the edges in the midpoints only (as shown on fig. \ref{fi:blackvertex}). The connected components of the complement to the deformed loops correspond to either faces or vertices. The value $\boldsymbol{d}(v)$ is defined by exactly the same formula as for $\boldsymbol{d}(i)$, with the path $\gamma$ going from an interior point of the face $i_0$ to the vertex $v$. It is clear from the construction that $\deg \boldsymbol{d}(i)=0$, $\deg \boldsymbol{d}(w)=-1$ and $\deg \boldsymbol{d}(b)=1$ for any face $i$, white vertex $w$ and black vertex $b$, respectively. The map $\boldsymbol{d}$ is not unique, but is defined up to a shift $\boldsymbol{d}(i)\mapsto \boldsymbol{d}(i)+\boldsymbol{c}$ (and the same for the vertices) with any $\boldsymbol{c}\in \mathbb{Z}^Z$ of degree 0. One can equivalently define the map $\boldsymbol{d}$ by the following property. Consider an edge $e$ of $\Gamma$ connecting a black vertex $b$ and a white vertex $w$. Denote by $\alpha^+$ the zig-zag loop going along $e$ from $b$ to $w$ and by $\alpha^-$ the one going along $e$ in the opposite direction. Denote also by $i^+$ and $i^-$ the faces of $\Gamma$ to the left and to the right of $e$ (viewed from $b$). Then we have \begin{equation}\label{Dbalance} \begin{array}{l} \boldsymbol{d}(w)=\boldsymbol{d}(b)-\alpha^+-\alpha^-,\\ \boldsymbol{d}(i^+)=\boldsymbol{d}(b)-\alpha^+,\\ \boldsymbol{d}(i^-)=\boldsymbol{d}(b)-\alpha^-. \end{array} \end{equation} These relations allow one to define unambiguously the value of $\boldsymbol{d}$ starting from its value on any face or vertex.
Another property of the map $\boldsymbol{d}$ can be easily seen on figure \ref{fi:blackvertex}B: $$\sum_{i\in F}\varepsilon_{ij}\boldsymbol{d}(i)=0.$$ Observe that any class $\boldsymbol{h}\in H_1(T,\mathbb{Z})$ can be mapped to $\mathbb{Z}^Z$ by $\boldsymbol{h} \mapsto \sum_\alpha\langle\boldsymbol{h},\boldsymbol{h}_{\alpha}\rangle\alpha$. We will denote a class and its image by the same letter. Observe that the map $\boldsymbol{d}$ is equivariant with respect to the action of the group $H_1(T,\mathbb{Z})$, namely $\boldsymbol{d}(i+\boldsymbol{h}) = \boldsymbol{d}(i)+\boldsymbol{h}$ and similarly for $\boldsymbol{d}(w)$ and $\boldsymbol{d}(b)$. Therefore this map descends to a map from the faces and vertices of the graph $\Gamma$ to the quotient $\mathbb{Z}^Z/H_1(T,\mathbb{Z})$, which we will call the \textit{discrete Abel map} and denote by the same letter $\boldsymbol{d}$. We call a bipartite graph $\Gamma$ \textit{minimal} if the number of faces is twice the area $S_{\Delta_\Gamma}$ of the polygon $\Delta_\Gamma$, and \textit{simple} if all classes $\boldsymbol{h}_{\alpha}$ are non-divisible in $H_1(T,\mathbb{Z})$. As shown in \cite{GK} and \cite{FM}, any graph can be transformed to make it minimal and simple without changing the Newton polygon. \refstepcounter{mysection} \paragraph{\arabic{mysection}. General solution.\label{s:general}} The aim of this section and of the paper in general is to give an explicit inversion formula for the action-angle map. Namely, given a Laurent polynomial $P(\lambda, \mu)$ of two variables with a Newton polygon $\Delta$ defining an algebraic curve $\Sigma\subset H^1(T,\mathbb{C}^\times)$, a line bundle $\mathcal{L}\in {\it Pic}^{g-1}(\Sigma)$ and a bipartite graph $\Gamma$ with the same Newton polygon, describe a point $\boldsymbol{x}\in \mathcal{X}^1_\Gamma$ such that this point gives the pair $(\Sigma,\mathcal{L})$ as the value of the action-angle map.
The main observation permitting to solve this problem is that one can identify the set $Z$ of zig-zag loops with the set of points at infinity $\Sigma\backslash\Sigma_0$ in such a way that the corresponding homology classes $\boldsymbol{h}_{\alpha}\in H_1(T,\mathbb{Z})$ coincide. This map is not canonical, but is defined up to a permutation of the points at infinity corresponding to the same side of the Newton polygon. Once such an identification is chosen, we can extend it to a grading preserving map $\mathbb{Z}^Z\to {\it Div}(\Sigma)$. The restriction of this map to $H_1(T,\mathbb{Z})$ takes values in principal divisors ${\it div}(\Sigma)$ and thus defines a map $\mathbb{Z}^Z/H_1(T,\mathbb{Z}) \to {\it Pic}(\Sigma)$. In what follows we will denote both zig-zag loops and the corresponding points at infinity by the same letter. Let $\mathcal{L} \in {\it Pic}^{g-1}(\Sigma)$ be a line bundle on $\Sigma$. Denote by $H$ the space of meromorphic sections of the bundle $\mathcal{L}$ holomorphic on $\Sigma_0$ and by $F_\alpha^i=\{\psi\in H|{\it ord}_\alpha \psi\geq i\}$ the subspace of $H$ of sections having a zero of order at least $i$ at the point $\alpha \in \Sigma\backslash \Sigma_0$. The collection of the spaces $F_\alpha^i$ for a given $\alpha$ forms a complete flag in the space $H$. Observe that the group $H_1(T,\mathbb{Z})$ acts on $H$ by $\boldsymbol{h}: \psi \mapsto \langle \boldsymbol{h},\boldsymbol{\lambda}\rangle \psi$ and preserves the flags, namely \begin{equation}\label{period} \boldsymbol{h} F^i_\alpha=F_\alpha^{i+\langle\boldsymbol{h},\boldsymbol{h}_\alpha\rangle}. \end{equation} Given $\boldsymbol{d}=\{d_\alpha\}\in \mathbb{Z}^Z$ denote by $F^{\boldsymbol{d}}=\cap_\alpha F_\alpha^{d_\alpha}$ the intersection of these subspaces. For a generic $\mathcal{L}$, by the Riemann-Roch theorem the dimension of this intersection is given by $\dim F^{\boldsymbol{d}}=\max(0,-\deg \boldsymbol{d})$.
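The dimension count can be spelled out (a sketch added here, filling in the Riemann-Roch step; the identification of $F^{\boldsymbol{d}}$ with global sections of a twist of $\mathcal{L}$ is implicit in the text):

```latex
% Sections in F^d are sections of L vanishing to order d_alpha at each point
% alpha at infinity, i.e. global sections of the twisted bundle L(-D):
\[
F^{\boldsymbol{d}}\;\cong\;H^0\bigl(\Sigma,\mathcal{L}(-D)\bigr),
\qquad D=\sum_{\alpha\in Z} d_\alpha\,\alpha,\qquad
\deg \mathcal{L}(-D)=(g-1)-\deg\boldsymbol{d}.
\]
% Riemann-Roch (h^0 - h^1 = deg - g + 1) then gives
\[
h^0\bigl(\mathcal{L}(-D)\bigr)-h^1\bigl(\mathcal{L}(-D)\bigr)
=(g-1)-\deg\boldsymbol{d}-g+1=-\deg\boldsymbol{d},
\]
% and for generic L one has h^0 = 0 when deg d > 0 and h^1 = 0 when
% deg d <= 0, which yields dim F^d = max(0, -deg d).
```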
Observe that $F^{\boldsymbol{d}}\subset F^{\boldsymbol{d}'}$ if $d_\alpha\geqslant d_\alpha'$ for any $\alpha$. Associate to every white vertex $w$ the one-dimensional space $V_w=F^{\boldsymbol{d}(w)}$. Let now $\alpha_1,\ldots,\alpha_l$ be the zig-zag loops passing through a black vertex $b$ and let $w_1,\ldots,w_l$ be the corresponding white vertices as shown on Fig. \ref{fi:blackvertex}. From (\ref{Dbalance}) it follows that $\boldsymbol{d}(w_k)=\boldsymbol{d}(b)-\alpha_k-\alpha_{k+1}$. Consider now the kernel $V_b={\it Ker}\left(\oplus_k F^{\boldsymbol{d}(w_k)}\to F^{\boldsymbol{d}(b)-\sum \alpha_k}\right)$. This map is well defined since $F^{\boldsymbol{d}(w_k)}\subset F^{\boldsymbol{d}(b)-\sum_k \alpha_k}$, and the kernel has dimension 1 for generic $\mathcal{L}$ since $\dim F^{\boldsymbol{d}(b)-\sum \alpha_k}=l-1$. Obviously for any $k$ this construction gives a collection of maps $\tilde{A}_k:V_b\to V_{w_k}$. Multiplying every such map by the value of the Kasteleyn connection form $\{K_e\}$ we get a connection form $\{A_e\}$ on the graph $\tilde{\Gamma}$. Taking into account the periodicity (\ref{period}) we see that the monodromy $x_i$ of the connection form $\{A_e\}$ around the faces of $\tilde{\Gamma}$ is $H_1(T,\mathbb{Z})$-periodic and thus defines a point of $\mathcal{X}^1_\Gamma$. To give an explicit formula for $x_i$ in terms of $\theta$-functions, observe that for a white vertex $w$ of $\tilde{\Gamma}$ the space $V_w=F^{\boldsymbol{d}(w)}$ is the span of $\psi_{\boldsymbol{d}(w)}(z)=\theta_q(z-t+\boldsymbol{d}(w))E_{\boldsymbol{d}(w)}(z)$.
To compute the maps $V_{b}\to V_{w_k}$ we need to find the coefficients $\tilde{A}_k$ in the identity \begin{equation}\label{Diraccover} \sum_k \tilde{A}_k\theta_q(z-t+\boldsymbol{d}(b)-\alpha_k-\alpha_{k+1})E_{\boldsymbol{d}(b)-\alpha_k-\alpha_{k+1}}(z)=0. \end{equation} Comparing with Fay's identity (\ref{Fay}) one gets \begin{equation}\label{connection} \tilde{A}_k = \frac{E(\alpha_k,\alpha_{k+1})}{\theta_q(t+\boldsymbol{d}(b)-\alpha_k)\theta_q(t+\boldsymbol{d}(b)-\alpha_{k+1})} \end{equation} or, using the notations of (\ref{Dbalance}) and taking into account the Kasteleyn form, $$A_e=K_e\frac{E(\alpha^+,\alpha^-)}{\theta_q(t+\boldsymbol{d}(i^+))\theta_q(t+\boldsymbol{d}(i^-))}.$$ Now, computing the monodromy around a face $i$ with $2l$ sides enumerated counterclockwise starting from a black vertex as shown on Fig. \ref{fi:blackvertex}B, and taking into account that $E(\alpha^+,\alpha^-)=\theta_{q'}(\boldsymbol{d}(i^+)-\boldsymbol{d}(i^-))/\sqrt{\phi(\alpha^+)\phi(\alpha^-)}$, one gets \begin{equation}\label{xcoord} x_i=(-1)^{l(i)/2+1}\prod_j \left(\frac{\theta_{q'}(\boldsymbol{d}(j)-\boldsymbol{d}(i))}{\theta_q(t+\boldsymbol{d}(j))}\right)^{\varepsilon_{ij}}. \end{equation} Now we are ready to formulate the main theorem. \begin{theorem} The formula (\ref{xcoord}) gives a point $\boldsymbol{x}\in \mathcal{X}_\Gamma^1$ such that $\Sigma(\boldsymbol{x})=\Sigma$ and $\mathcal{L}(\boldsymbol{x})=\mathcal{L}$. \end{theorem} To prove the theorem we give explicit formulas for \begin{enumerate} \item the connection $d^{-1}(\boldsymbol{x})$, \item the connection representing the class $\boldsymbol{\lambda}\in H^1(T,\mathbb{C}^\times)$, \item a map $z\mapsto \boldsymbol{\lambda}(z)$ of the curve $\Sigma \to H^1(T,\mathbb{C}^\times)$, \item a map of the bundle $\mathcal{L}^*$ to the trivial bundle over $\Sigma$ with fiber $\mathbb{C}^W$,
\end{enumerate} and verify that the Dirac operator $\mathfrak{D}[\boldsymbol{\lambda}(z)d^{-1}(\boldsymbol{x})]$ degenerates on the image of the bundle $\mathcal{L}^*$. Since the connection given by the formula (\ref{connection}) is obviously $H_1(T,\mathbb{Z})$-periodic, it defines a connection on the graph $\Gamma$ and not only on its cover $\tilde{\Gamma}$. The map $\Sigma\to H^1(T,\mathbb{C}^\times)$ can be defined by the formula $\langle\boldsymbol{\lambda}(z),\boldsymbol{h}\rangle=E_{\boldsymbol{h}}(z)$. Fix now a lift of the vertices of $\Gamma$ to its cover $\tilde{\Gamma}$. Using such a lift we can define the value of the map $\boldsymbol{d}$ for vertices of the graph $\Gamma$, but the relations (\ref{Dbalance}) will be satisfied only up to $H_1(T,\mathbb{Z})$. In particular, to every edge $e$ of $\Gamma$ we can associate a homology class $\boldsymbol{h}_e=\boldsymbol{d}(b)-\boldsymbol{d}(w)-\alpha^+-\alpha^-$. For a given class $\boldsymbol{\lambda}\in H^1(T,\mathbb{C}^\times)$ we can also define a connection form $\{B_e\}$ by the formula $B_e = \langle \boldsymbol{\lambda},\boldsymbol{h}_e\rangle$. This connection represents the class $\boldsymbol{\lambda}$. Indeed, consider a closed path $\gamma$ passing consecutively through the edges $e_1,\ldots,e_{2l}$ starting from an edge going from a black to a white vertex. The monodromy along $\gamma$ is given by $$\prod_k \langle \boldsymbol{\lambda},\boldsymbol{h}_{e_k} \rangle^{(-1)^k}= \langle \boldsymbol{\lambda},\sum_k(-1)^k\boldsymbol{h}_{e_k} \rangle=\langle\boldsymbol{\lambda},\sum_{\alpha}\langle\gamma,\alpha\rangle\alpha\rangle=\langle\boldsymbol{\lambda},\gamma\rangle$$ and thus this connection form represents the class $\boldsymbol{\lambda}$. The second equality is true since after the substitution of the expression for $\boldsymbol{h}_e$ all the $\boldsymbol{d}(b)$ and $\boldsymbol{d}(w)$ cancel and each $\alpha$ enters with the sign given by its intersection index with the path $\gamma$.
The map of the bundle $\mathcal{L}^*$ to the trivial bundle $\mathbb{C}^W\times \Sigma$ is given by the collection of the sections $\psi_{\boldsymbol{d}(w)}$ of the dual bundle $\mathcal{L}$ for $w$ among the chosen vertices of $\tilde{\Gamma}$. Now we put all this together: for any black vertex $b$ of $\Gamma$ having neighbors $w_1,\ldots,w_l$ connected by edges $e_1,\ldots,e_l$ and with zig-zag loops $\alpha_1,\ldots,\alpha_l$ we have to verify that $\sum_k K_{e_k}A_{e_k}B_{e_k}\psi_{\boldsymbol{d}(w_k)}=0$. Observe now that $B_{e_k}\psi_{\boldsymbol{d}(w_k)}=\psi_{\boldsymbol{d}(b)-\alpha_k-\alpha_{k+1}}$ and therefore the identity to be proven is a consequence of the relation (\ref{Diraccover}). \begin{figure} \caption{}\label{fi:blackvertex} \end{figure} \refstepcounter{mysection} \paragraph{\arabic{mysection}. Mutations.\label{s:mutations}} As observed in \cite{GK}, there exist transformations of the graph $\Gamma$ into another graph $\Gamma'$ that do not change the integrable system. Transformations of the first kind are retractions of two edges incident to a two-valent vertex, or the inverse operation. Transformations of the second kind are applicable if there exists a quadrilateral face $i$; one is shown on figure \ref{fi:spider}A. A transformation of the graph induces a birational isomorphism between the corresponding tori $\mathcal{X}_\Gamma\to\mathcal{X}_{\Gamma'}$. For the transformation of the first kind this map is the identity in coordinates (provided we use the natural bijection between faces of $\Gamma$ and $\Gamma'$). For the transformation of the second kind the isomorphism is a \textit{cluster mutation} and is also shown on fig. \ref{fi:spider}A. Define a correspondence between the discrete Abel maps before and after the transformation by the following conditions: for the transformation of the first kind we require that its values on the corresponding faces coincide.
For the transformation of the second kind its values coincide on corresponding faces except for the face $i$, where it is changed to $\boldsymbol{d}(i)-a+b-c+d$, where $a,b,c,d$ are the zig-zag loops surrounding the quadrilateral face $i$ as shown on fig. \ref{fi:spider}B. (In the figure we omitted $+\boldsymbol{d}(i')$ after every expression.) \begin{figure} \caption{\label{fi:spider}} \end{figure} Here we shall verify that the move is compatible with the parameterization given by the formula (\ref{xcoord}). More precisely we want to prove the following proposition. \begin{proposition} The diagram $$\begin{array}{rcl} &{\it Pic}^{g-1}(\Sigma)&\\ \!\!\!\!&&\!\!\!\!\\ &\swarrow\qquad\qquad\searrow&\\[5pt] \mathcal{X}_\Gamma& \longrightarrow &\mathcal{X}_{\Gamma'} \end{array} $$ is commutative for graph transformations of both kinds. \end{proposition} The proposition is obvious for the transformations of the first kind. For a spider move we need to verify that for any face of the graph $\Gamma'$ the coordinates given by (\ref{xcoord}) are related to the ones for the graph $\Gamma$ as shown on fig. \ref{fi:spider}A. $$x'=-\frac{E(a,b)}{\theta_q(b+c-t)}\frac{\theta_q(d+c-t)}{E(d,a)}\frac{E(c,d)}{\theta_q(d+a-t)}\frac{\theta_q(b+a-t)}{E(b,c)}=-\frac{F_t(a,b)F_t(c,d)}{F_t(d,a)F_t(b,c)}=x^{-1} $$ $$y'=y\frac{E(b,c)}{\theta_q(a+c-t)}\frac{\theta_q(b+c-t)}{E(c,a)}\frac{E(d,a)}{\theta_q(b+d-t)}\frac{\theta_q(d+a-t)}{E(b,d)}=y\frac{F_t(b,c)F_t(d,a)}{F_t(c,a)F_t(b,d)}=$$ $$=y\frac{F_t(b,c)F_t(d,a)}{F_t(b,c)F_t(d,a)+F_t(a,b)F_t(d,c)}=y(1+x^{-1})^{-1} $$ The remaining relations are proven analogously. \refstepcounter{mysection} \paragraph{\arabic{mysection}. Example.} Consider the example of the simplest relativistic affine Toda lattice discussed in detail in \cite{FM}. The bipartite graph $\Gamma$ is given on fig. \ref{fi:Toda}A, where we assume that the top of the picture is glued to the bottom and the left to the right.
It has four zig-zag loops (shown on fig. \ref{fi:Toda}C) representing the homology classes $(\pm 1,\pm 1)$ and defining the Newton polygon shown on fig. \ref{fi:Toda}B. We denote by $a,b,c,d$ the corresponding zig-zag loops. The vertical cycle of $H_1(T,\mathbb{Z})$ corresponds to $a+b-c-d$ and the variable $\lambda=E_{a+b-c-d}(z)$, while the horizontal one corresponds to $a+d-b-c$ and the variable $\mu=E_{a+d-b-c}(z)$. The labeling of vertices and faces by elements of $\mathbb{Z}^Z$ is shown on fig. \ref{fi:Toda}C. Since the curve $\Sigma$ is elliptic we have ${\it Pic}^1(\Sigma)=\Sigma$ and $E(x,y)=\theta_{11}(x-y)$. Without loss of generality one can assume that $d=0$. The equations $a+d-b+c\in L_a$ and $a+b-c+d\in L_a$ have a solution $b=1/2$, $c=\tilde{p}_a+1/2$. Therefore $$\lambda=\frac{\theta_{11}(z-a)\theta_{11}(z-1/2)}{\theta_{11}(z-a-1/2)\theta_{11}(z)};\quad \mu=\frac{\theta_{11}(z-a)\theta_{11}(z)}{\theta_{11}(z-a-1/2)\theta_{11}(z-1/2)}$$ The formula (\ref{xcoord}) gives $$x=-\frac{\theta^2_{11}(1/2-a)}{\theta^2_{11}(a)}\frac{\theta^2_{00}(t+a)}{\theta^2_{00}(t+a+1/2)};\quad y=-\frac{\theta^2_{11}(a)}{\theta^2_{11}(1/2-a)}\frac{\theta^2_{00}(t)}{\theta^2_{00}(t+1/2)}$$ $$z=-\frac{\theta^2_{11}(\tilde{p}_a)}{\theta^2_{11}(1/2-\tilde{p}_a)}\frac{\theta^2_{00}(t+1/2)}{\theta^2_{00}(t)};\quad w=-\frac{\theta^2_{11}(1/2-a)}{\theta^2_{11}(a)}\frac{\theta^2_{00}(t+a+1/2)}{\theta^2_{00}(t+a)}$$ The group of discrete birational transformations is $$\mathcal{G}_\Delta=\{(A,B,C,D)\in \mathbb Z^4|A+B+C+D=0\}/\langle (1,1,-1,-1),(1,-1,1,-1)\rangle = \mathbb{Z}\oplus (\mathbb{Z}/2\mathbb{Z})$$ One generator of this group is given by $$(x,y,z,w)\mapsto (z,w,x,y).$$ This generator has order two and corresponds to the automorphism of the graph. The second generator, of infinite order, corresponds to two mutations and is shown on fig. \ref{fi:Toda}DEF: $$(x,y,z,w)\mapsto \left(y^{-1}, x\frac{(1+w)^2}{(1+y^{-1})^2},w^{-1},z\frac{(1+y)^2}{(1+w^{-1})^2}\right)$$
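As a quick consistency check (this computation is ours, included only to verify the stated group structure), parameterize the lattice $\{(A,B,C,D)\in\mathbb{Z}^4\,|\,A+B+C+D=0\}$ by the basis $e_1=(1,-1,0,0)$, $e_2=(0,1,-1,0)$, $e_3=(0,0,1,-1)$. In this basis the two relations read $(1,1,-1,-1)=e_1+2e_2+e_3$ and $(1,-1,1,-1)=e_1+e_3$, so the relation matrix $\left(\begin{smallmatrix}1&2&1\\1&0&1\end{smallmatrix}\right)$ has Smith normal form ${\rm diag}(1,2)$ together with one zero column, and the quotient is indeed $\mathbb{Z}\oplus\mathbb{Z}/2\mathbb{Z}$.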
\begin{figure} \caption{A: Bipartite graph, zig-zag paths and the value of the map $\boldsymbol{d}$.\label{fi:Toda}} \end{figure} \appendix \noindent\textbf{\large{Appendix}} \setcounter{mysection}{1} \paragraph{\Alph{mysection}. Picard variety and the Abel map.\label{s:picard}} Let $\Sigma$ be a smooth Riemann surface of genus $g$. The cohomology group $H^1(\Sigma,\mathbb{Z})$ possesses a nondegenerate skew-symmetric intersection form $\langle\cdot,\cdot\rangle$. Its complexification $H^1(\Sigma,\mathbb{C})$ contains a Lagrangian subspace $H^{1,0}(\Sigma,\mathbb{C})$ represented by holomorphic 1-forms on $\Sigma$. Fix two complementary Lagrangian sublattices $L_a,L_b\subset H^1(\Sigma,\mathbb{Z})$ and denote their complexifications by $L_a^\mathbb{C}=L_a\otimes \mathbb{C}$ and $L^\mathbb{C}_b=L_b\otimes \mathbb{C}$, respectively. The three Lagrangian subspaces $L_a^\mathbb{C}$, $L_b^\mathbb{C}$ and $H^{1,0}(\Sigma,\mathbb{C})$ are pairwise transversal. Denote by $\tau_a$ and $\tau_b$ the projections of $H^1(\Sigma,\mathbb{C})$ to $L_a^{\mathbb{C}}$ and $L_b^{\mathbb{C}}$, respectively, along $H^{1,0}(\Sigma,\mathbb{C})$. The skew-symmetric form $\langle\cdot,\cdot\rangle$ induces a bilinear form $Q$ on $H^1(\Sigma,\mathbb{C})$ by $Q(\boldsymbol{h}_1,\boldsymbol{h}_2)=\langle \tau_a( \boldsymbol{h}_1),\tau_b(\boldsymbol{h}_2)\rangle$. This form is symmetric since $\langle \tau_a(\boldsymbol{h}_1)-\tau_b(\boldsymbol{h}_1),\tau_a(\boldsymbol{h}_2)-\tau_b(\boldsymbol{h}_2)\rangle=0$. Its imaginary part is positive definite on $L_b^\mathbb{C}$ and negative definite on $L_a^\mathbb{C}$ since $0<i\langle\tau_a(\boldsymbol{h})-\tau_b(\boldsymbol{h}),\overline{\tau_a(\boldsymbol{h})-\tau_b(\boldsymbol{h})}\rangle=2\Im\langle\tau_a(\boldsymbol{h}),\overline{\tau_b(\boldsymbol{h})}\rangle$. The corresponding quadratic form is denoted by the same letter, $Q(\boldsymbol{h})=\langle \tau_a(\boldsymbol{h}), \tau_b(\boldsymbol{h})\rangle$.
The quotient $L_a^{\mathbb{C}}/\tau_a (H^1(\Sigma,\mathbb{Z}))=L_b^{\mathbb{C}}/\tau_b (H^1(\Sigma,\mathbb{Z}))$ is called the Jacobian of the Riemann surface $\Sigma$ and denoted by ${\it Jac}(\Sigma)$. Let ${\it Div}(\Sigma)$ be the free Abelian group generated by the points of the curve, called the \textit{group of divisors} on $\Sigma$. It can also be considered as the group of singular $0$-chains on $\Sigma$. We make no distinction between a generator of the group ${\it Div}(\Sigma)$ corresponding to a point $z\in\Sigma$ and the point $z$ itself and denote it by the same letter. Any meromorphic section $\phi$ of a line bundle on $\Sigma$ induces a divisor $(\phi)=\sum_{z\in \Sigma} z\,{\it ord}_z\phi$. The divisors of meromorphic functions are called \textit{principal}. The principal divisors form a subgroup of ${\it Div}(\Sigma)$ denoted by ${\it div}(\Sigma)$. The \textit{Picard variety} is the quotient ${\it Pic}(\Sigma)={\it Div}(\Sigma)/{\it div}(\Sigma)$. The group of divisors has a grading ${\it Div}(\Sigma)=\sum_i{\it Div}^i(\Sigma)$ with any generator $z\in \Sigma$ having degree 1. This grading induces a grading ${\it Pic}(\Sigma)=\sum_{i\in \mathbb{Z}}{\it Pic}^i(\Sigma)$ since the degree of any principal divisor vanishes. The variety ${\it Pic}(\Sigma)$ can also be interpreted as the variety of line bundles on $\Sigma$. The correspondence associates to a given line bundle the divisor of any meromorphic section of it. The \textit{Abel map} is an isomorphism $\mathcal{A}:{\it Pic}^0(\Sigma)\to {\it Jac}(\Sigma)$ defined as follows. For any $d\in {\it Pic}^0(\Sigma)$ choose its representative $\tilde{d}\in{\it Div}^0(\Sigma)$ and then choose a 1-chain $\partial^{-1}\tilde{d}$. These choices define a point $\mathcal{A}(d)$ in the space $L_b^{\mathbb{C}}$ by the property that $\langle \mathcal{A}(d),\omega\rangle=\int_{\partial^{-1}\tilde{d}}\omega$ for any holomorphic 1-form $\omega\in H^{1,0}(\Sigma)$.
Making a different choice of $\partial^{-1}\tilde{d}$ or of $\tilde{d}$ results in changing the point $\mathcal{A}(d)$ by a point of the lattice $\tau_b(H^1(\Sigma,\mathbb{Z}))$ and thus $\mathcal{A}(d)$ considered as an element of ${\it Jac}(\Sigma)$ is well defined. In the text we make no distinction between ${\it Pic}^0(\Sigma)$ and ${\it Jac}(\Sigma)$. \refstepcounter{mysection} \paragraph{\Alph{mysection}. Planar curves and Newton polygons.\label{s:planar}} Let $P(\lambda,\mu)=\sum_{ij}c_{ij}\lambda^i\mu^j$ be a Laurent polynomial in two variables. Identify the pairs of numbers $(\lambda,\mu)=\boldsymbol{\lambda}$ with the cohomology classes from $H^1(T,\mathbb{C}^\times)$ of a two-dimensional torus $T$ with coefficients in the multiplicative group. The pairs of integers $(i,j)=\boldsymbol{h}$ can then be treated as homology classes from $H_1(T,\mathbb{Z})$ and the monomials $\lambda^i\mu^j$ as the natural pairing between cohomology and homology $\langle\boldsymbol{\lambda},\boldsymbol{h}\rangle$. The polynomial $P$ can thus be rewritten as $P(\boldsymbol{\lambda})=\sum_{\boldsymbol{\gamma}}c_{\boldsymbol{\gamma}}\langle \boldsymbol{\lambda},\boldsymbol{\gamma}\rangle$. Denote by $\Delta_P\subset H_1(T,\mathbb{R})$ the convex hull of the set $\{\boldsymbol{\gamma}\in H_1(T,\mathbb{Z})|c_{\boldsymbol{\gamma}}\neq 0\}$. The equation $P(\boldsymbol{\lambda})=0$ defines an algebraic curve in $H^1(T,\mathbb{C}^\times)$. Observe that the curve does not change if we make a transformation $P(\boldsymbol{\lambda})\to \langle\boldsymbol{\lambda},\boldsymbol{h}\rangle \beta P(\boldsymbol{\mu}\boldsymbol{\lambda})$, with any $\boldsymbol{h}\in H_1(T,\mathbb{Z})$, $\boldsymbol{\mu}\in H^1(T,\mathbb{C}^\times)$ and $\beta \in \mathbb{C}^\times$. Polynomials related by such transformations are called equivalent.
The dimension of the space of equivalence classes of Laurent polynomials with a given Newton polygon $\Delta_P$ is $I_{\Delta_P}+B_{\Delta_P}-3$, where $I_{\Delta_P}$ and $B_{\Delta_P}$ are the numbers of integer points strictly inside the polygon and on its boundary, respectively. The curve is obviously non-compact but can be canonically compactified by adding points where either $\lambda$ or $\mu$ or both, considered as functions on the curve, vanish or have a pole. The compactification goes as follows. Consider one side of the polygon $\Delta_P$. Without loss of generality we may assume that the side is a horizontal segment between $(0,0)$ and $(k,0)$ and $\Delta_P$ is located in the upper half plane. It means that the polynomial has the form $P(\lambda,\mu)=\sum_iP_i(\lambda)\mu^i$, where $P_i(\lambda)=0$ for $i<0$ and $P_0(\lambda)$ is a polynomial of degree $k$ with a nonvanishing constant term. The zero locus of $P(\lambda,\mu)$ intersects the line $\mu=0$ at the roots of $P_0(\lambda)$. Adding the roots to the curve for every side of $\Delta_P$ makes it compact. The curve is nonsingular at the compactification point if the corresponding root is simple. We denote by $\Sigma$ the corresponding compact curve and call the added points \textit{the points at infinity}. If the curve has no singularities at infinity the number of points corresponding to every side is equal to the number of segments into which the side is split by integer points (called \textit{boundary segments}). Remark that the bijection between points at infinity and boundary segments is not canonical and can be defined only up to permutations of the segments within every side.
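To illustrate the count (the polynomial here is chosen by us purely as an example), take $P(\lambda,\mu)=\lambda+\lambda^{-1}+\mu+\mu^{-1}+c$. Its Newton polygon is the square with vertices $(\pm 1,0)$ and $(0,\pm 1)$, so $I_{\Delta_P}=1$ (the origin alone) and $B_{\Delta_P}=4$, and the space of equivalence classes has dimension $I_{\Delta_P}+B_{\Delta_P}-3=2$: of the five coefficients of a general polynomial with this polygon, three are absorbed by the rescaling $\beta$ and the two-parameter substitution $\boldsymbol{\lambda}\mapsto\boldsymbol{\mu}\boldsymbol{\lambda}$. Each of the four sides consists of a single boundary segment, giving one point at infinity per side, and by Pick's theorem (used in the next paragraph) the area is $S_{\Delta_P}=1+4/2-1=2$, consistent with the genus $g=I_{\Delta_P}=1$ of the compactified curve.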
Consider the 1-form $\displaystyle \omega_{\boldsymbol{h}}={\it Res}\frac{\langle\boldsymbol{\lambda},\boldsymbol{h}\rangle}{P(\boldsymbol{\lambda})}\Omega$, where $\displaystyle \Omega=\frac{d\lambda\wedge d\mu}{\lambda\mu}$ is the canonical 2-form on $H^1(T,\mathbb{C}^\times)$ induced by the Poincar\'e pairing on the torus $T$. The form $\omega_{\boldsymbol{h}}$ is nonsingular and nonvanishing in the interior of $\Sigma$ provided the curve $\Sigma$ is nonsingular there. The order of the zero of the form $\omega_{\boldsymbol{h}}$ at a point $\alpha$ at infinity corresponding to the boundary segment $[\boldsymbol{x}_{\alpha},\boldsymbol{y}_{\alpha}]$ (with $\boldsymbol{x}_{\alpha}$ preceding $\boldsymbol{y}_{\alpha}$ in the counterclockwise order) is given by $\langle\boldsymbol{x}_{\alpha}-\boldsymbol{h},\boldsymbol{y}_{\alpha}-\boldsymbol{h}\rangle-1$, which is easy to verify assuming that the segment $[\boldsymbol{x}_{\alpha},\boldsymbol{y}_{\alpha}]$ belongs to the horizontal axis. Remark that the quantity $\langle\boldsymbol{x}_{\alpha}-\boldsymbol{h},\boldsymbol{y}_{\alpha}-\boldsymbol{h}\rangle$ is twice the signed area of the triangle $(\boldsymbol{x}_{\alpha},\boldsymbol{y}_{\alpha},\boldsymbol{h})$. It implies that the form $\omega_{\boldsymbol{h}}$ is holomorphic if $\boldsymbol{h}$ is strictly inside the polygon $\Delta_P$. It also implies that the degree $2g-2$ of the canonical divisor equals $2S_{\Delta_P}-B_{\Delta_P}$, where $S_{\Delta_P}$ is the area of $\Delta_P$. By Pick's theorem, $S_{\Delta_P}= I_{\Delta_P}+B_{\Delta_P}/2-1$, we get that $g=I_{\Delta_P}$ and thus the constructed holomorphic forms constitute a basis of $H^{1,0}(\Sigma,\mathbb{C})$. \refstepcounter{mysection} \paragraph{\Alph{mysection}.
Parameterized curves and Newton polygons.\label{a:parameterized}} Let $\Sigma$ be an abstract smooth algebraic curve and let $\boldsymbol{\lambda}:\Sigma\to H^1(T,\mathbb{C}^\times)$ be a rational map to a two-dimensional algebraic torus which we consider as the cohomology group of a surface $T$ of genus 1. The points at infinity of this curve are those where the map is not defined. Denote by $Z$ the set of points at infinity, enumerating them. To every $\alpha\in Z$ associate a homology class $\boldsymbol{h}_{\alpha}\in H_1(T,\mathbb{Z})$ by the condition that ${\it ord}_{\alpha} \langle \boldsymbol{\lambda},\boldsymbol{h}\rangle\vert_{\Sigma}=\langle \boldsymbol{h},\boldsymbol{h}_{\alpha}\rangle$ for any $\boldsymbol{h}\in H_1(T,\mathbb{Z})$. Since the sum of the orders of zeroes vanishes for any function, we have $\sum_{\alpha\in Z} \boldsymbol{h}_{\alpha}=0$. It implies that the vectors $\boldsymbol{h}_{\alpha}$ are the sides of a convex polygon $\Delta_{\boldsymbol{\lambda}}\subset H_1(T,\mathbb{R})$ defined up to a shift. We call the map $\boldsymbol{\lambda}$ nonsingular if its image has no singularities and if for any $\alpha\in Z$ the class $\boldsymbol{h}_{\alpha}$ is primitive (i.e., it is not divisible in $H_1(T,\mathbb{Z})$). Now we are going to show that the image of the nonsingular map $\boldsymbol{\lambda}$ coincides with the curve given by the equation $P(\boldsymbol{\lambda})=0$ only if the corresponding Newton polygons $\Delta_{\boldsymbol{\lambda}}$ and $\Delta_P$ coincide. We first show that every side of $\Delta_{\boldsymbol{\lambda}}$ is parallel to a side of $\Delta_P$. Indeed, ${\it ord}_{\alpha}P\circ\boldsymbol{\lambda}\geqslant\min_{\boldsymbol{h}\in\Delta_P}\langle \boldsymbol{h},\boldsymbol{h}_{\alpha}\rangle$ and the strict inequality can hold only if the minimum is attained at least twice.
Since we need the order of $P\circ\boldsymbol{\lambda}$ at $\alpha$ to be $+\infty$, there exist at least two points $\boldsymbol{h}_1,\boldsymbol{h}_2\in \Delta_P$ such that $\langle \boldsymbol{h}_1-\boldsymbol{h}_2,\boldsymbol{h}_\alpha\rangle=0$ and $\langle \boldsymbol{h},\boldsymbol{h}_\alpha\rangle\geqslant\langle \boldsymbol{h}_1,\boldsymbol{h}_\alpha\rangle$ for any $\boldsymbol{h}\in \Delta_P$. Since for a nonsingular map every point at infinity corresponds to a segment between integral points on the sides of $\Delta_{\boldsymbol{\lambda}}$, the lengths of the corresponding sides of $\Delta_P$ and $\Delta_{\boldsymbol{\lambda}}$ coincide, and thus the polygons coincide as well. This observation allows us to describe all maps $\Sigma\to H^1(T,\mathbb{C}^\times)$ with a given Newton polygon as collections of points $\alpha$ of $\Sigma$, one per segment $\boldsymbol{h}_\alpha$ between integer points of the boundary of $\Delta$, with the property $$\sum_{\alpha\in Z} \alpha\otimes \boldsymbol{h}_{\alpha}=0\in {\it Pic}(\Sigma)\otimes H_1(T,\mathbb{Z}).$$ Since the dimension of the space of curves of genus $g$ is $3g-3$, the dimension of the space of pairs (curve, map) with a given Newton polygon is $3g-3+B_{\Delta_{\boldsymbol{\lambda}}}-2g=g+B_{\Delta_{\boldsymbol{\lambda}}}-3$. On the other hand, if $\boldsymbol{\lambda}$ is an isomorphism with the zero locus of $P$, this dimension must coincide with the dimension $I_{\Delta_P}+B_{\Delta_P}-3$ of the space of equivalence classes of polynomials with a given Newton polygon $\Delta_P$. It gives another proof that $g=I_{\Delta_P}$. Consider the subgroup ${\it Div}^\infty(\Sigma)\subset {\it Div}(\Sigma)$ of divisors on $\Sigma$ supported at infinity. As an abstract group it is isomorphic to the free Abelian group generated by $Z$.
The embedding $H_1(T,\mathbb{Z})\hookrightarrow {\it Div}^\infty(\Sigma)$ given by $\boldsymbol{h}\mapsto (\langle\boldsymbol{\lambda},\boldsymbol{h}\rangle)$ induces an isomorphism between the integer homology of $T$ and the set of principal divisors supported at infinity. \refstepcounter{mysection} \paragraph{\Alph{mysection}. Theta-functions.\label{a:theta}} Recall that a spin structure on the curve $\Sigma$ can be identified with a quadratic form $q$ on $H^1(\Sigma,\mathbb{Z}/2\mathbb{Z})$ such that $q(\boldsymbol{h}^1+\boldsymbol{h}^2)=q(\boldsymbol{h}^1)+q(\boldsymbol{h}^2)+\langle \boldsymbol{h}^1,\boldsymbol{h}^2\rangle$ for any $\boldsymbol{h}^1,\boldsymbol{h}^2\in H^1(\Sigma,\mathbb{Z}/2\mathbb{Z})$. Given a decomposition $H^1(\Sigma,\mathbb{Z})=L_a\oplus L_b$ into two Lagrangian sublattices, any such form can be presented as $q(\boldsymbol{l}_a+\boldsymbol{l}_b)=\langle \boldsymbol{l}_a,\boldsymbol{l}_b\rangle + \langle \boldsymbol{l}_a,\boldsymbol{\eta}\rangle + \langle \boldsymbol{\epsilon},\boldsymbol{l}_b\rangle$ for some $\boldsymbol{\eta} \in L_b/2L_b$ and $\boldsymbol{\epsilon} \in L_a/2L_a$. If the surface $\Sigma$ has a complex structure, spin structures can be identified with classes $q\in {\it Pic}^{g-1}(\Sigma)$ such that $2q$ is the canonical class. The space $H^1(\Sigma,\mathbb{Z}/2\mathbb{Z})$ can be identified with the classes $\boldsymbol{h}\in {\it Pic}^0(\Sigma)$ such that $2\boldsymbol{h}=0$, and the class $q$ defines a quadratic form by $q(\boldsymbol{h})=\dim H^0(q+\boldsymbol{h})\bmod 2$ (see \cite{Atiyah}).
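Let us check (a straightforward verification, spelled out here for the reader's convenience) that this presentation indeed defines a quadratic form in the above sense. For $\boldsymbol{h}^i=\boldsymbol{l}_a^i+\boldsymbol{l}_b^i$, $i=1,2$, one computes $$q(\boldsymbol{h}^1+\boldsymbol{h}^2)-q(\boldsymbol{h}^1)-q(\boldsymbol{h}^2)=\langle \boldsymbol{l}_a^1,\boldsymbol{l}_b^2\rangle+\langle \boldsymbol{l}_a^2,\boldsymbol{l}_b^1\rangle,$$ while $\langle \boldsymbol{h}^1,\boldsymbol{h}^2\rangle=\langle \boldsymbol{l}_a^1,\boldsymbol{l}_b^2\rangle+\langle \boldsymbol{l}_b^1,\boldsymbol{l}_a^2\rangle$ because the sublattices $L_a$ and $L_b$ are Lagrangian; the two expressions agree modulo 2, as required.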
The theta function of characteristic $q$ is a function $L_a^{\mathbb{C}}\to \mathbb{C}$ defined by $$ \theta_q(z)=\sum_{\boldsymbol{l}_b\in L_b}e^{2\pi i ({Q(\boldsymbol{l}_b+\boldsymbol{\eta}/2)}/{2}+\langle \boldsymbol{l}_b+\boldsymbol{\eta}/2,z+\boldsymbol{\epsilon}/2\rangle)} $$ The theta function satisfies the following straightforwardly verifiable properties: \begin{enumerate} \item $\theta_q(z+\boldsymbol{l}_a)=(-1)^{q(\boldsymbol{l}_a)}\theta_q(z)$ for any $\boldsymbol{l}_a\in L_a$, \item $\theta_q(z+\tau_a(\boldsymbol{l}_b))=(-1)^{q( \boldsymbol{l}_b)}\theta_q(z)e^{-2\pi i ({Q(\boldsymbol{l}_b)}/{2}+\langle \boldsymbol{l}_b,z\rangle)}$ for any $\boldsymbol{l}_b\in L_b$, \item $\theta_q(-z)=(-1)^{\langle \boldsymbol{\epsilon},\boldsymbol{\eta}\rangle}\theta_q(z)$. \end{enumerate} From these properties it follows that the zero locus of a theta function is a lift of a subvariety $D[q]\subset {\it Jac}(\Sigma)$ called the \textit{theta divisor}. It can be parameterized (this is the Jacobi inversion formula) by the $(g-1)$-st symmetric power $\Sigma^{g-1}/\mathfrak{S}_{g-1}$ by $(z_1,\ldots,z_{g-1})\mapsto q-\sum z_i$ or by $(w_1,\ldots,w_{g-1})\mapsto \sum w_i-q$. The spin structure $q$ is called odd (respectively, even) if the theta function $\theta_q(z)$ is odd (respectively, even). The map from $\Sigma$ to ${\it Pic}^1(\Sigma)$ extends to a map of the Abelian universal cover $\widetilde{\Sigma}\to \widetilde{{\it Pic}}^1(\Sigma)$. Let $e$ be any point of $\widetilde{{\it Pic}}^1(\Sigma)$. Consider the function on $\tilde{\Sigma}$ defined by $z\mapsto \theta_q(z-e)$. This function can be considered as a holomorphic section of a line bundle on $\Sigma$. The Jacobi theorem implies that if $e=q-\sum_{i=1}^{g-2}z_i$ in ${\it Pic}^1(\Sigma)$ then this section vanishes identically. Otherwise the divisor of this section is a sum of $g$ points and its class in ${\it Pic}^g(\Sigma)$ is $q+e$. \refstepcounter{mysection} \paragraph{\Alph{mysection}.
Prime form and Fay's trisecant identity.\label{s:prime}} Recall, following \cite{Fay}, that the prime form is a function $E(x,y)$ on the square of the universal Abelian cover $\tilde{\Sigma}$ of the curve $\Sigma$ having the following properties: \begin{enumerate} \item $E(x,y)=-E(y,x)$ \item $E(x+\boldsymbol{l}_a,y)=E(x, y)$ for any $\boldsymbol{l}_a\in L_a$ \item $E(x+\boldsymbol{l}_b,y)=e^{2\pi i (Q(\boldsymbol{l}_b)/2-\langle \boldsymbol{l}_b,x-y\rangle)}E(x,y)$ for any $\boldsymbol{l}_b\in L_b$ \item $E(x,y)=0$ if and only if $y=x+\boldsymbol{h}$ for some $\boldsymbol{h}\in H_1(\Sigma,\mathbb{Z})$. \end{enumerate} Here by $x+\boldsymbol{h}$ with $x\in \widetilde{\Sigma}$ and $\boldsymbol{h} \in H_1(\Sigma,\mathbb{Z})$ we mean the action of $\boldsymbol{h}$ on the point $x$. Therefore the prime form as a function of its first argument $x$ can be considered as a section of the line bundle of degree 1 with divisor $y\in {\it Pic}^1(\Sigma)$. The prime form allows us to express explicitly a section of a line bundle on $\Sigma$ having a given divisor of zeroes and poles. Indeed, given a divisor $\boldsymbol{d}=\sum_\alpha d_\alpha \alpha \in {\it Div}(\Sigma)$ the product $$E_{\boldsymbol{d}}(z)=\prod_\alpha E(z,\alpha)^{d_\alpha}$$ is a section of the line bundle corresponding to $\boldsymbol{d}$. In particular, if $\boldsymbol{d}$ is principal then $E_{\boldsymbol{d}}$ is single valued on $\Sigma$ with $(E_{\boldsymbol{d}})=\boldsymbol{d}$. Let $q$ be an odd spin structure. According to the Abel theorem the divisor of the function $\theta_q(x-y)$, considered as a section of a line bundle on $\Sigma$ depending on $y\in \Sigma$ as a parameter, is equal to $y+\sum_{i=1}^{g-1}z_i$ with the class of $\sum_{i=1}^{g-1}z_i$ equal to $q$. Let $\phi$ be a holomorphic 1-form on $\Sigma$ with the divisor $2\sum_{i=1}^{g-1}z_i$. Then the prime form is given by \begin{equation}\label{prime} E(x,y)=\theta_q(x-y)(\phi(x)\phi(y))^{-1/2}.
\end{equation} The prime form does not depend on the choice of the spin structure $q$. \begin{lemma}(Generalized Fay's trisecant identity) Let $\{\alpha_k|k\in \mathbb{Z}/n\mathbb{Z}\}$ be a collection of points on a universal Abelian cover $\tilde{\Sigma}$ of a Riemann surface $\Sigma$ and let $z$ be any other point of $\tilde{\Sigma}$. Let $t$ be any point of the universal cover of the Picard variety $\widetilde{{\it Pic}}^1(\Sigma)$. Then the following identity holds: \begin{equation}\label{Fay} \sum_k\frac{\theta(t+z-\alpha_k-\alpha_{k+1})}{E(z,\alpha_k)E(z,\alpha_{k+1})}\frac{E(\alpha_k,\alpha_{k+1})}{\theta(t-\alpha_k)\theta(t-\alpha_{k+1})}=0 \end{equation} \end{lemma} \noindent\textit{Proof:} Multiplying both sides by the common denominator one can reformulate the identity as $$\sum_k \theta(t+z-\alpha_k-\alpha_{k+1})E(\alpha_k,\alpha_{k+1})\prod_{l\neq k,k+1}\theta(t-\alpha_l)E(z,\alpha_l)=0. $$ Observe that all terms of the sum, considered as functions of $z$, are holomorphic sections of a line bundle of degree $g+n-2$. Indeed, according to the Jacobi inversion formula the sum of zeroes in the Jacobian of every term (for generic $t$) is equal to $\sum \alpha_k-t$ and thus does not depend on $k$. We will show that the identity holds for $z=\alpha_m$ for every $m$. If the sum did not vanish identically, the sum of its remaining $g-2$ zeroes would be equal to $-t$ for any $t\in {\it Pic}^1(\Sigma)$, which is impossible for dimensional reasons.
To show that the identity holds for $z=\alpha_m$ we just make this substitution and get $$\sum_k \theta(t+\alpha_m-\alpha_k-\alpha_{k+1})E(\alpha_k,\alpha_{k+1})\prod_{l\neq k,k+1}\theta(t-\alpha_l)E(\alpha_m,\alpha_l)=$$ $$=\theta(t-\alpha_{m+1})E(\alpha_m,\alpha_{m+1})\prod_{l\neq m,m+1}\theta(t-\alpha_l)E(\alpha_m,\alpha_l)+$$ $$+\theta(t-\alpha_{m-1})E(\alpha_{m-1},\alpha_{m})\prod_{l\neq m-1,m}\theta(t-\alpha_l)E(\alpha_m,\alpha_l) $$ $$=\prod_{l\neq m}\theta(t-\alpha_l)E(\alpha_m,\alpha_l)-\prod_{l\neq m}\theta(t-\alpha_l)E(\alpha_m,\alpha_l)=0,$$ where the first equality is satisfied since all but two terms of the sum vanish. The lemma is proven. For $n=1$ and $2$ the lemma is trivial. For $n=3$, replacing $\alpha_0,\alpha_1,\alpha_2,z$ by $a,b,c,d$, respectively, and $t$ by $t+d$, we get the trisecant identity in its more usual form \cite{Mumford}: $$\begin{array}{rl} ~&\theta(b+c-t)E(b,c)\theta(d+a-t)E(d,a)\ +\\ +&\theta(c+a-t)E(c,a)\theta(d+b-t)E(d,b)\ +\\ +&\theta(a+b-t)E(a,b)\theta(d+c-t)E(d,c)=0 \end{array}$$ Introducing the function $F_t(u,v)=\theta(u+v-t)E(u,v)$, this formula can be written in an even more elegant form: $$F_t(b,c)F_t(d,a)+F_t(c,a)F_t(d,b)+F_t(a,b)F_t(d,c)=0 $$ \end{document}
\begin{document} \title[Cascade of phase shifts for Hartree equation]{Cascade of phase shifts and creation of nonlinear focal points for supercritical semiclassical Hartree equation} \author[S. Masaki]{Satoshi Masaki} \address{Division of Mathematics\\ Graduate School of Information Sciences\\ Tohoku University\\ Sendai 980-8579, Japan} \email{[email protected]} \begin{abstract} We consider the semiclassical limit of the Hartree equation with data causing a focusing at a point. We study the asymptotic behavior of the phase function associated with the WKB approximation near the caustic when the nonlinearity is supercritical. In this case, it is known that a phase shift occurs in a neighborhood of the focusing time in the case of the focusing cubic nonlinear Schr\"odinger equation. Thanks to the smoothness of the nonlocal nonlinearity, we justify the WKB-type approximation of the solution for data which is larger than in the previous results and is not necessarily well-prepared. We also show, by an analysis of the limit hydrodynamical equation, that this WKB-type approximation nevertheless breaks down before reaching the focal point: nonlinear effects lead to the formation of a singularity of the leading term of the phase function.
\end{abstract} \maketitle \section{Introduction}\label{sec:intro} This paper is devoted to the study of the semiclassical limit $\varepsilon \to 0$ for the Cauchy problem of the semiclassical nonlinear Schr\"odinger equation for $(t,x) \in \mathbb{R}_+ \times \mathbb{R}^n$ \begin{align}\label{eq:r3} i\varepsilon{\partial}_t u^\varepsilon +\frac{\varepsilon^2}{2}\Delta u^\varepsilon =& \varepsilon^\alpha N(u^\varepsilon),& u^\varepsilon_{|t=0}(x) =& a_0(x)e^{-i\frac{|x|^2}{2\varepsilon}} \end{align} with the nonlocal nonlinearity of Hartree type \begin{equation}\label{eq:h} N(u^\varepsilon) = \lambda (\lvert x\rvert ^{-\gamma}\ast |u^\varepsilon|^2)u^\varepsilon, \end{equation} where $n\ge 3$, $\lambda$ is a real number, and $\gamma$ is a positive number. In this paper we consider the small-nonlinearity case $\alpha>0$. The case $\alpha=0$ is studied in \cite{AC-SP,CM-AA}. In the case of the linear equation $N \equiv 0$, the quadratic oscillation in the initial data causes a caustic at the origin at $t=1$. In \cite{CaIUMJ,CaCMP}, R. Carles justified the general heuristics presented in \cite{HK-WM} in the case of \eqref{eq:r3} with $N(y) = |y|^\beta y$ (see also \cite{CaBook}), and two different notions of criticality for $\alpha$ appear. One is concerned with the nonlinear effect on the behavior far from the focal point, and the other with that near the focal point. The situation is similar in the case of the Hartree equation \begin{align}\label{eq:r4} i\varepsilon{\partial}_t u^\varepsilon +\frac{\varepsilon^2}{2}\Delta u^\varepsilon &= \lambda \varepsilon^\alpha (\lvert x\rvert ^{-\gamma}\ast |u^\varepsilon|^2)u^\varepsilon, \\ \label{eq:odata} u^\varepsilon_{|t=0}(x) &= a_0(x)e^{-i\frac{|x|^2}{2\varepsilon}}. \end{align} (see \cite{CL-FT,CMS-SIAM,MaASPM}).
The two critical indices are $\alpha = 1$ (far from the focal point) and $\alpha = \gamma$ (near the focal point). They are completely different notions. For example, the second index $\alpha=\gamma$ depends on the shape of the nonlinearity while the first does not. Using the terminology of \cite{CaJHDE}, we have the following nine cases: \begin{center} \begin{tabular}{c|c|c|c|} & $\alpha > 1$ & $\alpha = 1$ & $\alpha < 1$ \\ \hline $\alpha>\gamma$ & linear WKB & nonlinear WKB & supercritical WKB \\ & linear caustic & linear caustic & linear caustic \\ \hline $\alpha=\gamma$ & linear WKB & nonlinear WKB & supercritical WKB \\ & nonlinear caustic & nonlinear caustic & nonlinear caustic \\ \hline $\alpha<\gamma$ & linear WKB & nonlinear WKB & supercritical WKB \\ & supercritical caustic & supercritical caustic & supercritical caustic \\ \hline \end{tabular} \end{center} Roughly speaking, the term ``linear WKB'' means that, far from the caustic, the propagation of $u^\varepsilon$ does not involve the nonlinear effect at leading order. The term ``linear caustic'' means that the nonlinear effect is negligible at leading order when the solution crosses the focal point. Now, let us be more precise about this problem with \eqref{eq:r4}--\eqref{eq:odata}. Let $w^\varepsilon $ be the solution of the linear equation $(i\varepsilon \partial_t + (\varepsilon^2/2)\Delta)w^\varepsilon=0$ with the same initial data as in \eqref{eq:odata}. Note that $w^\varepsilon$ is approximated by the WKB-type approximate solution \begin{equation}\label{eq:linear} v^\varepsilon_{\mathrm{lin}}=\frac{1}{(1-t)^{n/2}}a_0 \left(\frac{x}{1-t}\right) e^{i\frac{|x|^2}{2\varepsilon (t-1)}} \end{equation} as long as $\varepsilon /(1-t)$ is small. In the linear WKB case $\alpha>1$, it is shown in \cite{CL-FT,MaAHP,MaASPM} that the solution $u^\varepsilon$ is close to $w^\varepsilon$ (and so to $v^\varepsilon_{\mathrm{lin}}$) before the caustic.
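A short computation (standard, and implicit in the references above) confirms that $v^\varepsilon_{\mathrm{lin}}$ is the WKB approximation of the free evolution: the phase $\phi(t,x)=\frac{|x|^2}{2(t-1)}$ solves the free eikonal equation, $$\partial_t \phi + \frac12|\nabla\phi|^2 = -\frac{|x|^2}{2(t-1)^2}+\frac{|x|^2}{2(t-1)^2}=0,$$ while the amplitude $A(t,x)=(1-t)^{-n/2}a_0\left(\frac{x}{1-t}\right)$ solves the transport equation $\partial_t A + \nabla\phi\cdot\nabla A + \frac{A}{2}\Delta\phi = 0$: since $\Delta\phi = \frac{n}{t-1}$, the term $\frac{A}{2}\Delta\phi$ cancels the time derivative of the factor $(1-t)^{-n/2}$, and $\nabla\phi\cdot\nabla A$ cancels the term coming from the dilated argument of $a_0$.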
The time interval in which $u^\varepsilon$ is approximated by $w^\varepsilon$ depends on the second critical index $\alpha=\gamma$: If $\alpha>\gamma$ then $u^\varepsilon \to w^\varepsilon$ as $\varepsilon\to0$ for all $t \leqslant 1$; if $\alpha=\gamma$ then it holds for $1- t \ge \Lambda \varepsilon$ with a large $\Lambda$; if $\alpha<\gamma$ then it holds for $1- t \ge \Lambda \varepsilon^\mu$ with a large $\Lambda$ and some $\mu=\mu(\alpha,\gamma) \leqslant (\alpha-1)/(\gamma-1)$. In the case $\alpha \ge \gamma$, the asymptotic profile of the solution beyond the caustic is also given in \cite{CL-FT}. Let us proceed to the supercritical caustic case $\alpha<\gamma$. In this case, infinitely many phase shifts occur between the time $1- t = \Lambda_1\varepsilon^{\frac{\alpha-1}{\gamma-1}}$, called a \emph{first boundary layer}, and the time $ 1-t = \Lambda \varepsilon^{{\alpha}/{\gamma}}$, called a \emph{final layer}. This phenomenon, called \emph{cascade of phase shifts}, was first shown in \cite{CaJHDE} for \eqref{eq:r3} with a certain class of nonlinearities including the cubic nonlinearity $N(y) = |y|^2 y$, and the asymptotic behavior of the solution is given before the final layer, $1-t\gg \varepsilon^{(\alpha-1)/(\gamma-1)}$, in the case $(\gamma>)\alpha>1$. A similar result is proven also in the nonlinear and supercritical WKB case $\alpha \leqslant 1$, provided the initial data is a properly modified one of the form \begin{equation}\label{eq:wpdata} u_{|t=0}^\varepsilon(x) = b_0(\varepsilon^{\frac{\alpha}{\gamma}},x) e^{-i\frac{|x|^2}{2\varepsilon}} \exp (i\varepsilon^{\frac{\alpha}{\gamma}-1}\phi_0(\varepsilon^{\frac{\alpha}{\gamma}},x)), \end{equation} where $(b_0(t,x),\phi_0(t,x))$ is a suitable pair of functions defined in terms of $a_0$. Let us call this type of data \emph{well-prepared data}.
The aim of this paper is to give an explicit asymptotic profile of the solution of \eqref{eq:r4}--\eqref{eq:odata} (before and) on the final layer for the whole range $\gamma > \alpha >0$ with unmodified data. In particular, the behavior of the solution is new in the following situations: \begin{itemize} \item On the final layer, that is, at $t=1-T^{-1} \varepsilon^{\alpha/\gamma}$ with not necessarily small $T>0$. \item $\alpha \leqslant 1$ with an initial data which is not necessarily of the form \eqref{eq:wpdata}. \end{itemize} Moreover, it will be shown that the WKB-type approximation of the solution breaks down at some time on the final layer, that is, at $t=t_c:=1-(T^{*})^{-1} \varepsilon^{\alpha/\gamma}$ for some $T^*> 0$, for a certain class of initial data. For our analysis, we apply the following transform introduced in \cite{CaJHDE}: \begin{equation}\label{eq:spct} u^\varepsilon(t,x) = \frac{1}{(1-t)^{n/2}} \psi^\varepsilon \left( \frac{\varepsilon^{\frac{\alpha}{\gamma}}}{1-t}, \frac{x}{1-t} \right) \exp\left(i \frac{|x|^2}{2\varepsilon(t-1)}\right). \end{equation} Then, \eqref{eq:r4}--\eqref{eq:odata} becomes \begin{align*} i\varepsilon^{1-\frac{\alpha}{\gamma}} {\partial}_\tau \psi^\varepsilon +\frac{(\varepsilon^{1-\frac{\alpha}{\gamma}})^2}{2}\Delta_y \psi^\varepsilon &= \lambda \tau^{\gamma-2} (\lvert y\rvert ^{-\gamma}\ast |\psi^\varepsilon|^2)\psi^\varepsilon, & \psi^\varepsilon_{|\tau=\varepsilon^{\alpha/\gamma}}(y) &= a_0(y), \end{align*} where $\tau=\varepsilon^{\alpha/\gamma}/(1-t)$ and $y=x/(1-t)$. The quadratic oscillation of the initial data is canceled out.
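Spelling out the arithmetic behind this change of variables (with $h=\varepsilon^{1-\alpha/\gamma}$, the semiclassical parameter used below): at the initial time $t=0$ one has $\tau=\varepsilon^{\alpha/\gamma}=h^{\frac{\alpha}{\gamma-\alpha}}$; at the first boundary layer $1-t=\Lambda_1\varepsilon^{\frac{\alpha-1}{\gamma-1}}$ one finds $$\tau=\Lambda_1^{-1}\varepsilon^{\frac{\alpha}{\gamma}-\frac{\alpha-1}{\gamma-1}}=\Lambda_1^{-1}\varepsilon^{\frac{\gamma-\alpha}{\gamma(\gamma-1)}}=\Lambda_1^{-1}h^{\frac{1}{\gamma-1}},$$ since $\frac{\alpha}{\gamma}-\frac{\alpha-1}{\gamma-1}=\frac{\alpha(\gamma-1)-\gamma(\alpha-1)}{\gamma(\gamma-1)}=\frac{\gamma-\alpha}{\gamma(\gamma-1)}$; and on the final layer $1-t=T^{-1}\varepsilon^{\alpha/\gamma}$ one simply gets $\tau=T$.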
This transformation clarifies the problem: if the nonlinear effect is weak, so that $u^\varepsilon$ behaves like $w^\varepsilon$ and $v^\varepsilon_{\mathrm{lin}}$, then $\psi^\varepsilon$ is close to $a_0$; if the solution has a rapid oscillation other than $\exp(i|x|^2/2\varepsilon(t-1))$, then $\psi^\varepsilon$ becomes oscillatory. We change the parameter into $h=\varepsilon^{1-\alpha/\gamma}$. The limit $\varepsilon\to 0$ is equivalent to $h\to 0$ as long as $\alpha<\gamma$. Denoting $\psi^\varepsilon$ by $\psi^h$, our problem is reduced to the limit $h\to 0$ of the solution to \begin{align}\label{eq:r5} ih{\partial}_\tau \psi^h +\frac{h^2}{2}\Delta_y \psi^h &= \lambda \tau^{\gamma-2} (\lvert y\rvert ^{-\gamma}\ast |\psi^h|^2)\psi^h, & \psi^h_{|\tau=h^{\frac{\alpha}{\gamma-\alpha}}}(y) &= a_0(y). \end{align} Since $\tau=h^{\alpha/(\gamma-\alpha)}/(1-t)$, the correspondence between boundary layers of the $t$ and $\tau$ variables is as follows: \begin{equation}\label{eq:timetranslation} \begin{aligned} \text{initial time:}& & t&{}=0 &&\longleftrightarrow \tau = h^{\frac{\alpha}{\gamma-\alpha}}, \\ \text{first layer:}& & t&{}=1-\Lambda_1\varepsilon^{\frac{\alpha-1}{\gamma-1}}&& \longleftrightarrow \tau = \Lambda_1^{-1} h^{\frac{1}{\gamma-1}}, \\ \text{final layer:}& & t&{}=1-T^{-1}\varepsilon^{\frac{\alpha}{\gamma}}&& \longleftrightarrow \tau = T. \end{aligned} \end{equation} Our analysis of \eqref{eq:r5} is based on a generalized WKB method by G\'erard \cite{PGEP} and Grenier \cite{Grenier98}. We apply a modified Madelung transformation \begin{equation}\label{eq:grenier} \psi^h = a^h \exp\left(i\frac{\phi^h}h\right) \end{equation} to \eqref{eq:r5}, where $a^h$ is complex-valued and $\phi^h$ is real-valued.
This choice is slightly different from the usual Madelung transformation \[ \psi^h = \sqrt{\rho^h} \exp\left(i\frac{S^h}h\right) \] (see \cite{GLM-TJM}), which leads to an equation of compressible fluid type with quantum pressure. It is essential that $a^h$ takes complex values and, therefore, $\phi^h \neq S^h$ in general. The choice \eqref{eq:grenier} allows us to rewrite \eqref{eq:r5} as the system for the pair $(a^h,\phi^h)$: \begin{equation}\label{eq:sys} \left\{ \begin{aligned} &\partial_\tau a^h + \nabla \phi^h \cdot \nabla a^h + \frac12 a^h \Delta \phi^h = i\frac{h}{2}\Delta a^h,\\ &\partial_\tau \phi^h + \frac12 |\nabla \phi^h|^2 + \lambda \tau^{\gamma-2}(|y|^{-\gamma}*|a^h|^2) =0, \\ & a^h_{|\tau=h^{\frac{\alpha}{\gamma-\alpha}}}=a_0, \quad \phi^h_{|\tau=h^{\frac{\alpha}{\gamma-\alpha}}}=0. \end{aligned} \right. \end{equation} If we approximate the solution of \eqref{eq:sys} up to $h^1$ order, that is, if we establish an asymptotics such as \begin{align}\label{eq:h1expansion} a^h&{}=b_0+h b_1 + o(h^1), & \phi^h&{}=\phi_0 + h \phi_1 + o(h^1), \end{align} then, by means of \eqref{eq:grenier}, we immediately obtain the WKB-type approximate solution of $\psi^h$. This method was first employed for the nonlinear Schr\"odinger equation with a certain class of defocusing local nonlinearities including the cubic nonlinearity $N(y)=|y|^2 y$, for analytic data \cite{PGEP} and for Sobolev data \cite{Grenier98} (see also \cite{AC-ARMA,CR-CMP}), and was extended to the Gross-Pitaevskii equation \cite{AC-GP} (see also \cite{LZ-ARMA}) and to equations with (de)focusing nonlocal nonlinearities: the Schr\"odinger-Poisson equation \cite{AC-SP,LT-MAA} (see also \cite{LL-EJDE,ZhSIAM}) and the Hartree equation \cite{CM-AA}. Our goal is to justify \eqref{eq:h1expansion}.
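Let us briefly indicate how \eqref{eq:sys} arises. Substituting \eqref{eq:grenier} into \eqref{eq:r5} and using
\begin{align*}
ih\partial_\tau \psi^h &= \left(ih\partial_\tau a^h - a^h\partial_\tau \phi^h\right)e^{i\phi^h/h},\\
\frac{h^2}{2}\Delta \psi^h &= \left(\frac{h^2}{2}\Delta a^h + ih\nabla \phi^h\cdot\nabla a^h + \frac{ih}{2}a^h\Delta \phi^h - \frac12 a^h|\nabla \phi^h|^2\right)e^{i\phi^h/h},
\end{align*}
together with $|\psi^h|^2=|a^h|^2$, one checks that $\psi^h$ solves \eqref{eq:r5} provided $(a^h,\phi^h)$ solves \eqref{eq:sys}: the second equation of \eqref{eq:sys} collects the terms proportional to $a^h$, and the first collects the remaining ones, so that the whole $h$-dependence of the system is carried by the single term $i\frac{h}{2}\Delta a^h$. Since $a^h$ is complex-valued, this splitting is a choice rather than a necessity.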
The natural choice of the main part $(b_0,\phi_0)$ may be $(a^h,\phi^h)_{|h=0}$. Letting $h=0$ in \eqref{eq:sys}, we obtain a hydrodynamical system \begin{equation}\label{eq:lsys} \left\{ \begin{aligned} &\partial_\tau b_0 + \nabla \phi_0 \cdot \nabla b_0 + \frac12 b_0 \Delta \phi_0 = 0,\\ &\partial_\tau \phi_0 + \frac12 |\nabla \phi_0|^2 + \lambda \tau^{\gamma-2}(|y|^{-\gamma}*|b_0|^2) =0, \\ & b_{0|\tau=0}=a_0, \quad \phi_{0|\tau=0}=0. \end{aligned} \right. \end{equation} For Sobolev data $a_0$, \eqref{eq:lsys} has a unique local solution (Theorem \ref{thm:main}). Then, the main task is to determine the $h^1$-term $(b_1,\phi_1)$. The difficulty of finding the $h^1$-term lies in the following two points: \begin{enumerate} \item The equation \eqref{eq:sys} itself depends on $h$ through the term $i\frac{h}{2}\Delta a^h$. \item The initial time of \eqref{eq:sys} tends to $\tau=0$ at a speed $h^{\frac{\alpha}{\gamma-\alpha}}$. \end{enumerate} We will see that the first point becomes crucial when we consider the asymptotic behavior of the solution on the final layer, and that a suitable choice of the $h^1$-term is the key for overcoming this difficulty. We use $(b_{\mathrm{equ}},\phi_{\mathrm{equ}})$ defined by \eqref{eq:tbtp1}, below, as an $h^1$-term and show an asymptotic behavior of $\psi^h$ for $\tau\in [h^{\frac{\alpha}{\gamma-\alpha}},T]$ when $\alpha \ge1$, where $T$ is independent of $h$ (Theorem \ref{thm:main}). By \eqref{eq:timetranslation}, $\tau\in [h^{\frac{\alpha}{\gamma-\alpha}},T]$ is equivalent to $t\in[0,1-T^{-1}\varepsilon^{\alpha/\gamma}]$, from the initial time to the final layer. On the other hand, the second point becomes crucial in the supercritical case $\alpha<1$. This is because the moving speed of the initial time becomes too slow. In this case, we need three more kinds of correction terms whose orders are between $h^0$ and $h^1$.
With them, we describe the asymptotic behavior also for $\alpha \in]0,1[$. A heuristic observation on these correction terms is in Section \ref{subsec:summary2}. The rigorous result is in Theorem \ref{thm:sscase}. The well-prepared data \eqref{eq:wpdata} is closely related to the second problem listed above. The pair $(b_0,\phi_0)$ in \eqref{eq:wpdata} is the solution of \eqref{eq:lsys}. If we employ the well-prepared data and consider \eqref{eq:r4} with \eqref{eq:wpdata}, then \eqref{eq:sys} changes into \begin{equation}\label{eq:wpsys} \left\{ \begin{aligned} &\partial_\tau a^h + \nabla \phi^h \cdot \nabla a^h + \frac12 a^h \Delta \phi^h = i\frac{h}{2}\Delta a^h,\\ &\partial_\tau \phi^h + \frac12 |\nabla \phi^h|^2 + \lambda \tau^{\gamma-2}(|y|^{-\gamma}*|a^h|^2) =0, \\ & a^h_{|\tau=h^{\frac{\alpha}{\gamma-\alpha}}}=b_0(h^{\frac{\alpha}{\gamma-\alpha}}), \quad \phi^h_{|\tau=h^{\frac{\alpha}{\gamma-\alpha}}}=\phi_0(h^{\frac{\alpha}{\gamma-\alpha}}). \end{aligned} \right. \end{equation} The initial time is still moving; however, the only difference between \eqref{eq:wpsys} and \eqref{eq:lsys} is the presence of $i\frac{h}2\Delta a^h$. Thus, as we will see, the second problem no longer arises. This point is discussed in Section \ref{subsec:wpdata}. We also consider the problem of global existence of the solution to \eqref{eq:lsys}. With the notation $(\rho,v) := (|b_0|^2,\nabla \phi_0)$, \eqref{eq:lsys} is the compressible Euler equation with a time-dependent pressure term of Hartree type: \begin{equation}\label{eq:cE} \left\{ \begin{aligned} &\partial_\tau \rho + \mathrm{div} (\rho v) = 0,\\ &\partial_\tau v + (v \cdot \nabla)v + \lambda \tau^{\gamma-2}\nabla (|x|^{-\gamma}*\rho) =0,\\ &\rho_{|\tau=0}=|a_0|^2, \quad v_{|\tau=0}=0. \end{aligned} \right.
\end{equation} We adapt results in \cite{ELT-IUMJ,MaRIMS} (see also \cite{MP-JJAM,PeJJAM}) to prove that the $C^1$-solution to \eqref{eq:cE} cannot be global in time in several situations (Theorems \ref{thm:main2} and \ref{thm:main3}). This implies that \eqref{eq:h1expansion} breaks down before the caustic $t=1$ (see Section \ref{subsec:summary}). \subsection{Main result I} To state our result precisely, we introduce some notation. For $n \ge 3$, $s>n/2+1$, $p\in [1, \infty]$, and $q \in [1, \infty]$, we define a function space $Y^s_{p,q}(\mathbb{R}^n)$ by \begin{equation}\label{def:Y} Y^s_{p,q}(\mathbb{R}^n) = \overline{C_0^\infty (\mathbb{R}^n)}^{\norm{\cdot}_{Y^s_{p,q}(\mathbb{R}^n)}} \end{equation} with norm \begin{equation}\label{def:Y2} \norm{\cdot}_{Y^s_{p,q}(\mathbb{R}^n)} := \norm{\cdot}_{L^p(\mathbb{R}^n)} + \norm{\nabla \cdot}_{L^q(\mathbb{R}^n)} + \norm{\nabla^2 \cdot}_{H^{s-2}(\mathbb{R}^n)}. \end{equation} We write $Y^s_{p,q}=Y^s_{p,q}(\mathbb{R}^n)$ for short. For $q<n$, we use the notation $q^* = nq/(n-q)$. The space $Y^s_{p,q}$ is a modification of the Zhidkov space $X^s$, which is defined, for $s>n/2$, by $X^s(\mathbb{R}^n) := \{ f \in L^\infty(\mathbb{R}^n) \mid \nabla f \in H^{s-1}(\mathbb{R}^n) \}$. The Zhidkov space was introduced in \cite{Zhidkov} (see also \cite{Gallo}). Roughly speaking, the exponents $p$ and $q$ in $Y^s_{p,q}$ indicate the decay rates at spatial infinity of a function and of its first derivative, respectively. Moreover, the Zhidkov space $X^s$ corresponds to $Y^s_{\infty,2}$ in a sense, if $n \ge 3$. We discuss these points more precisely in Section \ref{subsec:spaceY}. We also note that $Y^s_{2,2}$ is the usual Sobolev space $H^s$. We use the following notation: $Y^\infty_{p,q} := \cap_{s>0}Y^s_{p,q}$; for intervals $I_1$ and $I_2$ of $[1,\infty]$, $Y^s_{I_1,q} := \cap_{p\in I_1} Y^s_{p,q}$ and $Y^s_{p,I_2} := \cap_{q\in I_2} Y^s_{p,q}$.
These notations are sometimes used simultaneously, for example $Y^\infty_{I_1,I_2} := \cap_{s>0,p\in I_1,q\in I_2} Y^s_{p,q}$. We also use the operator \[ |J^\varepsilon|^s = e^{i\frac{|x|^2}{2\varepsilon(t-1)}} |(1-t)\nabla|^s e^{-i\frac{|x|^2}{2\varepsilon(t-1)}}. \] This is the scaled version of the Galilean operator and is suitable for the study of rapid phases other than $e^{i|x|^2/2\varepsilon(t-1)}$. We also introduce the following system for a pair $(b_{\mathrm{equ}},\phi_{\mathrm{equ}})$: \begin{equation}\label{eq:tbtp1} \left\{ \begin{aligned} &{\partial}_\tau b_{\mathrm{equ}} + \nabla \phi_{\mathrm{equ}} \cdot \nabla b_0 + \nabla \phi_0 \cdot \nabla b_{\mathrm{equ}} + \frac{1}{2} b_{\mathrm{equ}} \Delta \phi_0 + \frac{1}{2} b_0 \Delta \phi_{\mathrm{equ}} = \frac{i}{2} \Delta b_0, \\ &{\partial}_\tau \phi_{\mathrm{equ}} + \nabla \phi_0 \cdot \nabla \phi_{\mathrm{equ}} + \lambda \tau^{\gamma-2} (|y|^{-\gamma}* 2 \mathrm{Re}\, \overline{b_0} b_{\mathrm{equ}} ) = 0, \end{aligned} \right. \end{equation} where $(b_0,\phi_0)$ is a solution of \eqref{eq:lsys}. This will be posed with the zero data \begin{align}\label{eq:tbtp2} b_{\mathrm{equ}|\tau=0}={}&0, & \phi_{\mathrm{equ}|\tau=0}={}& 0 \end{align} or the data \begin{align}\label{eq:tbtp3} b_{\mathrm{equ}|\tau=0}={}&0, & \phi_{\mathrm{equ}|\tau=0}={}& \lambda(|y|^{-\gamma}*|a_0|^2)/(\gamma-1). \end{align} \begin{notation}\label{not:asymp} Let $T>0$ and let $X$ be a Banach space. Let $\{k_j\}$ be an increasing sequence of real numbers, $\phi(t,x) \in C([0,T];X)$ be a function, and $\{\phi_j\}$ be a sequence of functions in $X$. We write \[ \phi(t,x) \asymp \sum_{j=1}^\infty t^{k_j} \phi_j \quad \text{in } X \] if it holds that \[ \norm{\phi(t,x) - \sum_{j=1}^J t^{k_j} \phi_j}_{X} = o(t^{k_J}) \] as $t\to0$ for all $J\ge1$. \end{notation} We now state our main result.
To avoid complexity, here we state the result for $\alpha \ge 1$, that is, the linear and nonlinear WKB cases. For the supercritical WKB case $\alpha<1$, see Theorem \ref{thm:sscase}. \begin{assumption}\label{asmp:1} Let $n \geq 4$ and $\lambda \in \mathbb{R}$. The constants $\gamma$ and $\alpha$ satisfy $\max(1,n/2-2) < \gamma \leq n-2$ and $0< \alpha < \gamma$, respectively. The initial data $a_0 \in H^\infty$. \end{assumption} \begin{theorem}\label{thm:main} Let Assumption \ref{asmp:1} be satisfied. Assume $\alpha \ge 1$. Then, there exists an existence time $T>0$ independent of $\varepsilon$. There also exist $(b_0,\phi_0)$, $(b_{\mathrm{equ}}, \phi_{\mathrm{equ}}) \in C([0,T];H^\infty \times Y^\infty_{(n/\gamma,\infty],(n/(\gamma+1),\infty]})$ such that: \begin{enumerate} \item $\phi_0(\tau,y) \asymp \sum_{j=1}^\infty \tau^{\gamma j -1} \varphi_j(y)$ in $Y^\infty_{(n/\gamma,\infty],(n/(\gamma+1),\infty]}$. \item The solution $u^\varepsilon$ to \eqref{eq:r4} with \eqref{eq:odata} satisfies the following asymptotics for all $s \ge 0$: \begin{equation}\label{eq:asymptotics} \sup_{t \in [0,1-T^{-1}\varepsilon^{\alpha/\gamma}]} \Lebn{|J^\varepsilon|^s \left( u^\varepsilon(t) e^{-i\Phi^\varepsilon (t)} - \frac{1}{(1-t)^{n/2}} A^\varepsilon(t) e^{i\frac{|\cdot|^2}{2\varepsilon(t-1)}} \right)}{2} \to 0 \end{equation} as $\varepsilon \to 0$ with \begin{equation}\label{def:Phi} \Phi^\varepsilon(t,x) = \varepsilon^{\frac{\alpha}{\gamma}-1} \phi_0\left(\frac{\varepsilon^{\frac{\alpha}{\gamma}}}{1-t},\frac{x}{1-t}\right) \end{equation} and \begin{equation}\label{def:A} A^\varepsilon(t,x) = b_0\left(\frac{\varepsilon^{\frac{\alpha}{\gamma}}}{1-t},\frac{x}{1-t} \right) \exp \left(i\phi_{\mathrm{equ}}\left(\frac{\varepsilon^{\frac{\alpha}{\gamma}}}{1-t},\frac{x}{1-t} \right)\right) , \end{equation} where
$(b_0,\phi_0)$ solves \eqref{eq:lsys} and $(b_{\mathrm{equ}},\phi_{\mathrm{equ}})$ solves \eqref{eq:tbtp1} with \eqref{eq:tbtp2} if $\alpha>1$ and with \eqref{eq:tbtp3} if $\alpha=1$. \end{enumerate} \end{theorem} \begin{remark}\label{rmk:main} \begin{enumerate} \item By the definition of the ``$\asymp$'' sign, the expansion $\phi_0(\tau,y) \asymp \sum_{j=1}^\infty\tau^{\gamma j -1} \varphi_j(y)$ implies \begin{equation}\label{eq:p-exp} \phi_0(\tau) = \sum_{j=1}^J \tau^{\gamma j -1} \varphi_j + o(\tau^{\gamma J-1}) \quad \text{in } Y^\infty_{(n/\gamma,\infty],(n/(\gamma+1),\infty]} \end{equation} as $\tau\to 0$ for all $J \ge 1$. \item In Theorem \ref{thm:main}, we only need $a_0 \in H^{s_0}$ with some $s_0 > n/2 + 3$. Then, $(b_0,\phi_0)$ belongs to $C([0,T],H^{s_0} \times Y^{s_0+2}_{(n/\gamma,\infty],(n/(\gamma+1),\infty]})$ and $(b_{\mathrm{equ}},\phi_{\mathrm{equ}})$ belongs to $C([0,T],H^{s_0-2} \times Y^{s_0}_{(n/\gamma,\infty],(n/(\gamma+1),\infty]})$. The asymptotics \eqref{eq:asymptotics} holds for any $s \in [0,s_0-4]$. \item All $\varphi_j$ are given explicitly (but inductively) in terms of $a_0$. For example, $\varphi_1=\lambda(|x|^{-\gamma}*|a_0|^2)/(1-\gamma)$ and \begin{align*} \varphi_2={}&-\frac{\lambda^2}{2(\gamma-1)^2(2\gamma-1)} |\nabla (|x|^{-\gamma}*|a_0|^2)|^2 \\ &{} - \frac{\lambda^2}{\gamma(\gamma-1)(2\gamma-1)}\left(|x|^{-\gamma} * (\nabla\cdot ( |a_0|^2\nabla(|x|^{-\gamma}*|a_0|^2))\right) \end{align*} (see Proposition \ref{prop:t-expansion}). \item Even if the system \eqref{eq:tbtp1} is posed with the zero initial condition, its solution $(b_{\mathrm{equ}},\phi_{\mathrm{equ}})$ is not identically zero because of the presence of the (nontrivial) external force $\frac{i}{2}\Delta b_0$. \item The choice of the initial data \eqref{eq:tbtp3} is the key for the analysis in the case $\alpha =1$. We discuss this point more precisely in Section \ref{subsec:summary2}.
\end{enumerate} \end{remark} The asymptotics \eqref{eq:asymptotics} reads \begin{equation}\label{eq:asympL2} u^\varepsilon(t,x) \sim \frac{1}{(1-t)^{n/2}} A^\varepsilon(t,x)e^{i\Phi^\varepsilon (t,x)} e^{i\frac{|x|^2}{2\varepsilon(t-1)}} \end{equation} as $\varepsilon \to 0$. Indeed, this holds in $L^\infty([0,1-T^{-1}\varepsilon^{\alpha/\gamma}];L^2)$. This explains the cascade of phase shifts. We consider the case $1<\alpha(<\gamma)$. Combining \eqref{eq:p-exp} and \eqref{def:Phi}, we have \[ \Phi^\varepsilon(t,x) = \sum_{j=1}^J \frac{\varepsilon^{\alpha j-1}}{(1-t)^{\gamma j-1}} \varphi_j \left(\frac{x}{1-t}\right) + o \left( \frac{\varepsilon^{\alpha J-1}}{(1-t)^{\gamma J-1}}\right). \] We set $g^\varepsilon_j(t,x) = \frac{\varepsilon^{j\alpha -1}}{(1-t)^{j\gamma -1}} \varphi_j (\frac{x}{1-t})$. Then, the above asymptotics yields \begin{align*} \Phi^\varepsilon(t,x) &\sim 0 & \text{for }&1-t \gg \varepsilon^{\frac{\alpha-1}{\gamma-1}} ,\\ \Phi^\varepsilon(t,x) &\sim g^\varepsilon_1(t,x) & \text{for }&1-t \gg \varepsilon^{\frac{2\alpha-1}{2\gamma-1}} , \\ \Phi^\varepsilon(t,x) &\sim g^\varepsilon_1(t,x) + g^\varepsilon_2(t,x) & \text{for }&1-t \gg \varepsilon^{\frac{3\alpha-1}{3\gamma-1}} , \\ &\vdots & &\vdots \\ \Phi^\varepsilon(t,x) &\sim \sum_{j=1}^J g^\varepsilon_j(t,x) & \text{for }&1-t \gg \varepsilon^{\frac{J\alpha-1}{J\gamma-1}} , \\ &\vdots & &\vdots \end{align*} as $\varepsilon\to0$. On the other hand, the amplitude $A^\varepsilon$ satisfies \[ A^\varepsilon(t,x) = a_0\left(\frac{x}{1-t}\right) + o(1) \] as $\varepsilon^{\frac{\alpha}{\gamma}}/(1-t) \to 0$.
Substituting these expansions into \eqref{eq:asympL2}, we obtain \begin{align*} u^\varepsilon &\sim v^\varepsilon_{\mathrm{lin}}= \frac{1}{(1-t)^{n/2}} a_0\left(\frac{x}{1-t}\right) e^{i\frac{|x|^2}{2\varepsilon(t-1)}} & \text{for }&1-t \gg \varepsilon^{\frac{\alpha-1}{\gamma-1}} ,\\ u^\varepsilon &\sim v^\varepsilon_{\mathrm{lin}}e^{ig^\varepsilon_1(t,x)} & \text{for }&1-t \gg \varepsilon^{\frac{2\alpha-1}{2\gamma-1}} , \\ u^\varepsilon &\sim v^\varepsilon_{\mathrm{lin}}e^{ig^\varepsilon_1(t,x)+ig^\varepsilon_2(t,x)} & \text{for }&1-t \gg \varepsilon^{\frac{3\alpha-1}{3\gamma-1}} , \\ &\vdots & &\vdots \\ u^\varepsilon &\sim v^\varepsilon_{\mathrm{lin}}e^{i\sum_{j=1}^J g^\varepsilon_j(t,x)} & \text{for }&1-t \gg \varepsilon^{\frac{J\alpha-1}{J\gamma-1}} , \\ &\vdots & &\vdots \end{align*} Recall that $v^\varepsilon_{\mathrm{lin}}$, given in \eqref{eq:linear}, is the approximate solution for the linear solution $w^\varepsilon$. One sees that the solution behaves like a free solution in the region $1-t\gg \varepsilon^{\frac{\alpha-1}{\gamma-1}}$ where the initial time $t=0$ lies, and that, at each boundary layer of size $1-t \sim \varepsilon^{\frac{J\alpha-1}{J\gamma-1}}$ (the \textit{$J$-th boundary layer}), a new phase associated with $g^\varepsilon_J$ becomes relevant. \subsection{Main result II} Our next result is the non-existence of a global solution to \eqref{eq:lsys}. We further assume radial symmetry and $\gamma=n-2$ ($n\ge 3$) in \eqref{eq:cE}.
A suitable change of $\lambda$ yields the radial compressible Euler-Poisson equations \begin{equation}\label{eq:EPr} \left\{ \begin{aligned} &\partial_\tau (\rho r^{n-1}) + \partial_r (\rho v r^{n-1}) = 0, \\ &\partial_\tau v + v \partial_r v -\lambda \tau^{n-4} \partial_r V_{\mathrm{p}} = 0, \\ & \partial_r (r^{n-1}\partial_r V_{\mathrm{p}}) = \rho r^{n-1},\\ &\rho_{|\tau=0}=|a_0|^2, \quad v_{|\tau=0}=0, \end{aligned} \right. \end{equation} where $r:=|x|$. We define the ``mean mass'' $M_0$ in $\{|x|\le r\}$ by \[ M_0(r) := \frac{1}{r^n}\int_0^r |a_0(s)|^2 s^{n-1} ds. \] \begin{theorem}\label{thm:main2} Let $\lambda < 0$ and $n \ge 4$. For every nonzero initial amplitude $a_0 \in C^1$, the solution to \eqref{eq:EPr} breaks down no later than \[ T^* = \left(\frac{(n-2)(n-3)}{|\lambda| \sup_{r \ge 0} M_0(r)}\right)^{\frac1{n-2}}<\infty. \] \end{theorem} \begin{theorem}[\cite{ELT-IUMJ}, Theorem 5.10]\label{thm:main3} Let $\lambda>0$ and $n=4$. The radial $C^1$-solution $(a,v)$ to \eqref{eq:EPr} is global if and only if the initial amplitude $a_0\in C^1$ satisfies \[ |a_0(r)|^2 \ge 2 M_0(r) = \frac{2}{r^4}\int_0^r |a_0(s)|^2 s^3 ds \] for all $r \ge 0$. In particular, if $a_0 \in L^2(\mathbb{R}^4)$ then the solution breaks down in finite time. Moreover, the critical time is given by \[ \tau_c = \left(\frac{2}{\lambda \max_{r > 0} \left( 2M_0(r) - |a_0(r)|^2\right)}\right)^{\frac12}. \] \end{theorem} \begin{example}[An example of finite-time breakdown]\label{ex:blowup} Consider the equation \eqref{eq:EPr}. Let $n=4$ and $\lambda>0$. Suppose \[ a_0(x) = a_0(r) = r^{-\frac{5}{2}} e^{-\frac{1}{2r}}.
\] Then, an elementary calculation shows \[ M_0(r) = r^{-4} e^{-\frac{1}{r}} \] and so $2M_0(r)-|a_0(r)|^2 = (2r-1)r^{-5} e^{-\frac{1}{r}}$ takes its maximum at $r_0=(7+\sqrt{17})/16=0.6951 \dots$, which is a root of \[ 8r_0^2 -7r_0 +1 = 0. \] By Theorem \ref{thm:main3}, the critical time is \[ \tau_c = \left(\frac{2}{\lambda \left( 2M_0(r_0) - |a_0(r_0)|^2\right)}\right)^{\frac12} = \sqrt{\frac{3405+827\sqrt{17}}{\lambda 2^{13}}}\, e^{\frac{7-\sqrt{17}}{4}}. \] In fact, the solution to \eqref{eq:EPr} is given by \begin{align*} |a|^2(\tau,X(\tau,R)) ={}& \frac{2 R^2} {X^2(\tau,R) (2R^5e^{\frac{1}{R}} - \lambda(2R-1) \tau^2) }, \\ v(\tau,X(\tau,R)) ={}& \frac{\lambda \tau}{X(\tau,R) R^2 e^{\frac{1}{R}}}, \end{align*} where \[ X(\tau,R) = R\sqrt{1+\lambda R^{-4}e^{-\frac{1}{R}} \tau^2}. \] At the time $\tau=\tau_c$, the characteristic curves ``touch'' at $r=r_c:=X(\tau_c,r_0)$, that is, we have $(\partial_R X)(\tau_c,r_0)=0$, which is a necessary and sufficient condition for finite-time breakdown (see \cite{ELT-IUMJ}). More explicitly, we can see that, as $\tau$ tends to $\tau_c$, the amplitude $|a|^2$ blows up at $r_c$ since the denominator \[ 2r_0^5e^{\frac{1}{r_0}} - \lambda(2r_0-1) \tau^2 \] tends to zero as $\tau \to \tau_c$. We illustrate the calculation in Remark \ref{rem:indicator}. In this example, $a_0 \in H^\infty \subset C^1$ and so Theorem \ref{thm:main} holds. We see, however, that the asymptotics \eqref{eq:asymptotics} is valid only for $\varepsilon^{\frac{\alpha}{\gamma}}/(1-t)<\tau_c$ and cannot hold for $\varepsilon^{\frac{\alpha}{\gamma}}/(1-t) \ge \tau_c$. \end{example} The rest of the paper is organized as follows. In Section \ref{sec:knownresults}, we summarize the results of this paper together with previous results. Section \ref{sec:pre} is devoted to preliminary results. We prove Theorem \ref{thm:main} in Section \ref{sec:proof}.
The strategy of the proof is illustrated rather precisely in Section \ref{subsec:strategy}. In Section \ref{sec:sscase}, we treat the supercritical WKB case $\alpha <1$ (Theorem \ref{thm:sscase}). The well-prepared data is discussed in Section \ref{subsec:wpdata}. Finally, we prove Theorems \ref{thm:main2} and \ref{thm:main3} in Section \ref{sec:blowup}. \section{Summary}\label{sec:knownresults} \subsection{Cascade of phase shifts in the linear WKB case}\label{subsec:summary} We first discuss the linear WKB case $\alpha>1$. According to Theorems \ref{thm:main}, \ref{thm:main2}, and \ref{thm:main3}, we summarize the result in this case as follows. Recall the several boundary layers: \begin{equation}\label{eq:layerlist} \begin{aligned} \text{initial time:}& & t&{}=0 &&\longleftrightarrow 1-t= 1 ,\\ \text{first layer:}& & t&{}=1-\Lambda_1\varepsilon^{\frac{\alpha-1}{\gamma-1}} && \longleftrightarrow 1-t = \Lambda_1\varepsilon^{\frac{\alpha-1}{\gamma-1}} ,\\ J\text{-th layer:}& & t&{}=1-\Lambda_J\varepsilon^{\frac{J\alpha-1}{J\gamma-1}} && \longleftrightarrow 1-t = \Lambda_J\varepsilon^{\frac{J\alpha-1}{J\gamma-1}} ,\\ \text{final layer:}& & t&{}=1-T^{-1}\varepsilon^{\frac{\alpha}{\gamma}} && \longleftrightarrow 1-t = T^{-1}\varepsilon^{\frac{\alpha}{\gamma}} , \end{aligned} \end{equation} where $\Lambda_J$ and $T$ are positive constants. \smallbreak $\bullet$ {\bf From the initial time to the first layer}. Before the first layer, that is, for $1-t \gg \varepsilon^{\frac{\alpha-1}{\gamma-1}}$, the behavior of the solution is the same as in the linear case at leading order. Indeed, the asymptotics \eqref{eq:asymptotics} implies that the solution behaves like $v^\varepsilon_{\mathrm{lin}}$ defined in \eqref{eq:linear} for $1-t \gg \varepsilon^{\frac{\alpha-1}{\gamma-1}}$.
The phase shifts disappear: From \eqref{def:Phi} and the time expansion $\phi_0(\tau) \asymp \sum_{j=1}^\infty \tau^{\gamma j -1} \varphi_j$, we see \[ \norm{\Phi^{\varepsilon}(t)}_{L^\infty(\mathbb{R}^n)} \le C \varepsilon^{\frac{\alpha}{\gamma}-1} \sum_{j=1}^\infty \left(\frac{\varepsilon^{\frac{\alpha}{\gamma}}}{1-t}\right)^{\gamma j-1} \norm{\varphi_j}_{L^\infty(\mathbb{R}^n)} \ll 1 \] if $1-t \gg \varepsilon^{\frac{\alpha-1}{\gamma-1}}$. Moreover, the amplitude tends to a rescaling of $a_0$: By \eqref{def:A}, \[ A^\varepsilon(t,x) \to b_0\left(0,\frac{x}{1-t} \right) \exp \left(i\phi_{\mathrm{equ}}\left(0,\frac{x}{1-t} \right)\right) = a_0\left( \frac{x}{1-t}\right) \] since $\varepsilon^{\alpha/\gamma}/(1-t) \ll 1$ for $1-t \gg \varepsilon^{\frac{\alpha-1}{\gamma-1}}$. This agrees with the analysis in \cite{MaAHP}. On the other hand, on the first layer $1-t = \Lambda_1 \varepsilon^{\frac{\alpha-1}{\gamma-1}}$ the nonlinear effect becomes relevant. The term ${\tau^{\gamma-1} \varphi_1}$ in the sum\footnote{ This sum is a formal one: we say ``sum'' in the sense that $\phi_0(\tau) \asymp \sum_{j=1}^\infty \tau^{\gamma j -1} \varphi_j$. } $\sum_{j=1}^\infty \tau^{\gamma j -1} \varphi_j$ is no longer negligible. \smallbreak $\bullet$ {\bf From the first layer to the $J$-th layer}. Soon after the first layer, the solution becomes strongly oscillatory through ${\tau^{\gamma-1}\varphi_1}$.
Between the first and the second layers, only ${\tau^{\gamma-1}\varphi_1}$ is effective because \begin{multline*} \norm{\Phi^{\varepsilon}(t)- \varepsilon^{\frac{\alpha}{\gamma} -1 }\left(\frac{\varepsilon^{\frac{\alpha}{\gamma}}}{1-t}\right)^{\gamma-1} \varphi_1\left(\frac{\cdot}{1-t} \right) }_{L^\infty(\mathbb{R}^n)} \\ \le \varepsilon^{\frac{\alpha}{\gamma}-1} \sum_{j=2}^\infty \left(\frac{\varepsilon^{\frac{\alpha}{\gamma}}}{1-t}\right)^{\gamma j-1} \norm{\varphi_j}_{L^\infty(\mathbb{R}^n)} \ll 1 \end{multline*} holds if $1-t \gg \varepsilon^{\frac{2\alpha-1}{2\gamma-1}}$. When we reach the second layer $1-t = \Lambda_2 \varepsilon^{\frac{2\alpha-1}{2\gamma-1}}$, the phase $\tau^{2\gamma-1}\varphi_2$ becomes relevant. Similarly, at each $J$-th layer $1-t = \Lambda_J \varepsilon^{\frac{J\alpha-1}{J\gamma-1}}$, a new phase $\varphi_J$ becomes relevant. This is the cascade of phase shifts. In this regime, $\varepsilon^{\frac{\alpha}{\gamma}}/(1-t)$ converges to zero as $\varepsilon \to 0$, and so the amplitude still stays the linear one. \smallbreak $\bullet$ {\bf On the final layer}. After a countable number of boundary layers, we reach the final layer. In this layer, $\varepsilon^{\frac{\alpha}{\gamma}}/(1-t)$ no longer tends to zero. Therefore, the asymptotics of the amplitude changes into a nonlinear one. It turns out that the ratio $T:=\varepsilon^{\alpha/\gamma}/(1-t)$ plays the crucial role. If $T \ge 0$ is small, then the asymptotic behavior of the solution is described by the asymptotics \eqref{eq:asymptotics}. It is essential to use $\Phi^\varepsilon$ and $A^\varepsilon$ defined in \eqref{def:Phi} and \eqref{def:A}, respectively. Thanks to the nontrivial remainder term given by $\phi_{\mathrm{equ}}$, \eqref{eq:asymptotics} gives the asymptotic behavior of the solution on the final layer.
On the other hand, \eqref{eq:asymptotics} breaks down at $T=T^*<\infty$ in several cases (Theorems \ref{thm:main2} and \ref{thm:main3}). This is because the nonlinear effects cause the formation of a singularity. These focal points are moving: If the focal point is \[ \left(\frac{\varepsilon^{\frac{\alpha}{\gamma}}}{1-t_c}, \frac{x_c}{1-t_c}\right) = ( T^*,X^*), \] then, in the $(t,x)$-coordinates, this focal point is \begin{align*} t_c =&{} 1-(T^*)^{-1} \varepsilon^{\frac{\alpha}{\gamma}}, & x_c =&{} X^*(T^*)^{-1} \varepsilon^{\frac{\alpha}{\gamma}}, \end{align*} which tends to $(t,x)=(1,0)$ as $\varepsilon\to0$. Example \ref{ex:blowup} gives an example of this type of blowup. \smallbreak $\bullet$ {\bf After the final layer}. It remains open what happens after the breakdown of \eqref{eq:asymptotics}. However, we can at least expect that more rapid oscillations might not appear after the final layer. This is because, in the case of $\lambda>0$, $2\le \gamma< 4$, and $\alpha\in]\max(\gamma/2,1),\gamma[$, the order of the upper bound of $\Lebn{J^\varepsilon u^\varepsilon}{2}$ stays $\varepsilon^{1-\alpha/\gamma}$ even after the caustic (\cite{MaAHP}). Recall that $J^\varepsilon = e^{i|x|^2/2\varepsilon(t-1)}i(t-1)\nabla e^{-i|x|^2/2\varepsilon(t-1)}$ filters out the main quadratic phase. Therefore, this upper bound implies that the order of magnitude of the energy of the oscillation \emph{other than the main quadratic phase} is at most $\varepsilon^{1-\alpha/\gamma}$. \subsection{Cascade of phase shifts in the nonlinear and supercritical WKB case} \label{subsec:summary2} We now turn to the case $\alpha < 1$, the supercritical WKB case. Our goal is to obtain a WKB-type approximation similar to \eqref{eq:asymptotics}, which explains the cascade of phase shifts and gives the asymptotic profile of the solution before and on the final layer.
We \emph{do not} use the modified initial data \eqref{eq:wpdata} and keep working with \eqref{eq:r4}--\eqref{eq:odata}. As stated above, the analysis of \eqref{eq:r4}--\eqref{eq:odata} is reduced to the analysis of \eqref{eq:sys} via the transform \eqref{eq:spct}. Recall that $h=\varepsilon^{1-\alpha/\gamma}$. The difficulty is the following: \begin{enumerate} \item The equation \eqref{eq:sys} itself depends on $h$ through the term $i\frac{h}{2}\Delta a^h$. \item The initial time of \eqref{eq:sys} tends to $\tau=0$ at a speed $h^{\frac{\alpha}{\gamma-\alpha}}$. \end{enumerate} The second is the main point of the supercritical case $\alpha<1$. In this section, we discuss by heuristic arguments what happens, what the problem is, and how we can overcome it. It turns out that the situation becomes more complicated, and the cascade of phase shifts phenomenon involves more phase shifts and boundary layers than in the linear WKB case. The rigorous result is in Theorem \ref{thm:sscase} (see also Remark \ref{rmk:type}). \smallbreak $\bullet$ {\bf The transposition of the initial time and the first boundary layer}. We see from \eqref{eq:layerlist} that the relation $\alpha<1$ causes the transposition of the initial time $t=0$ and the first boundary layer, that is, the initial time $t=0$ lies beyond the first boundary layer $t =1-\Lambda_1 \varepsilon^{\frac{\alpha-1}{\gamma-1}}$ because $\Lambda_1 \varepsilon^{\frac{\alpha-1}{\gamma-1}} \gg 1$ for small $\varepsilon$. This means that there is no linear regime and the behavior of the solution involves nonlinear effects at leading order soon after the initial time. \smallbreak $\bullet$ {\bf The nonlinear behavior of the solution at the initial time}. As a matter of fact, the behavior of $u^\varepsilon$ is already ``nonlinear'' at $t=0$.
More precisely, the principal part of the phase shift of the solution $u^\varepsilon$ other than $\exp (i |x|^2/2\varepsilon(t-1))$ (we call this the \emph{principal nonlinear phase} of $u^\varepsilon$) for $\alpha < 1$ is given by \begin{equation}\label{eq:Pphase} \exp \left( i \varepsilon^{\frac{\alpha}{\gamma}-1} \phi_0\left(\frac{\varepsilon^{\frac{\alpha}{\gamma}}}{1-t},\frac{x}{1-t}\right) \right), \end{equation} where $\phi_0$ is the solution of \eqref{eq:lsys}. Here we remark that the principal nonlinear phase is the same as in the case $\alpha>1$ (in Theorem \ref{thm:main}, we denote it by $\exp (i\Phi^\varepsilon)$). Since $\phi_0$ is given as a solution of \eqref{eq:lsys}, the shape of $\phi_0$ is completely independent of $\alpha$. Moreover, the choice of $\phi_0$ is natural because the function $\phi_0$ is the unique limit of the phase function $\phi^h$ which solves \eqref{eq:sys} with $a^h$. Recall that $\psi^h=a^h\exp(i\phi^h/h)$ is an exact solution of \eqref{eq:r5}. Using the expansion $\phi_0(\tau) \asymp \sum_{j=1}^\infty \tau^{\gamma j -1}\varphi_j(y)$, we have \[ \varepsilon^{\frac{\alpha}{\gamma}-1} \phi_0\left(\frac{\varepsilon^{\frac{\alpha}{\gamma}}}{1-t}\right) = O\left(\varepsilon^{\frac{\alpha}{\gamma}-1} \left(\frac{\varepsilon^{\frac{\alpha}{\gamma}}}{1-t}\right)^{\gamma-1}\right) \] as long as $\varepsilon^{\frac{\alpha}{\gamma}}/(1-t) \ll 1$. If the right-hand side is small, then the phase shift caused by nonlinear effects is negligible and so the solution behaves like the linear solution. However, the right-hand side is $O(\varepsilon^{\alpha-1})$ at $t=0$, which is not small if $\alpha<1$. In this sense, the behavior of $u^\varepsilon$ is nonlinear at $t=0$. \smallbreak $\bullet$ {\bf The initial condition as a constraint}.
On the other hand, we always have $u^\varepsilon(0,x)=a_0(x)\exp(-i\frac{|x|^2}{2\varepsilon})$ since, as stated above, we do not modify the initial data and keep working with \eqref{eq:odata}. This initial condition seems quite natural and the simplest one for this problem. This is true if $\alpha>1$. However, in the supercritical case $\alpha<1$, the meaning of this condition slightly changes and it becomes a sort of constraint: the nonlinear effects that appear must vanish at $t=0$. To satisfy this constraint, we need to employ some more nontrivial phase shifts as correction terms in order to cancel out the nonlinear effects at $t=0$. This modification with correction terms is the heart of the matter. The main difference between the two initial data \eqref{eq:odata} and \eqref{eq:wpdata} lies in this point: the use of \eqref{eq:wpdata} allows the behavior of the solution to remain nonlinear at the initial time, so no correction term is needed. \smallbreak $\bullet$ {\bf A formal construction of correction terms}. Intuitively, this modification is done as follows. First, we replace $\phi_0(\tau,y)$ in \eqref{eq:Pphase} by $\phi_0(\tau,y)-\sum_{j=1}^{k-1}\tau^{\gamma j-1} \varphi_j(y)$ for some $k\ge2$. This yields the following modified principal nonlinear phase \begin{equation}\label{eq:mPphase} \exp \left( i \varepsilon^{\frac{\alpha}{\gamma}-1} \left[ \phi_0\left(\frac{\varepsilon^{\frac{\alpha}{\gamma}}}{1-t},\frac{x}{1-t}\right)- \sum_{j=1}^{k-1}\left(\frac{\varepsilon^{\frac{\alpha}{\gamma}}}{1-t}\right)^{\gamma j-1} \varphi_j\left(\frac{x}{1-t}\right) \right] \right).
\end{equation} At $t=0$, it holds that \[ \varepsilon^{\frac{\alpha}{\gamma}-1} \left[ \phi_0\left(\varepsilon^{\frac{\alpha}{\gamma}}\right)- \sum_{j=1}^{k-1}\left(\varepsilon^{\frac{\alpha}{\gamma}}\right)^{\gamma j-1} \varphi_j \right] = O(\varepsilon^{\alpha k-1}), \] since the first omitted term has size $\varepsilon^{\frac{\alpha}{\gamma}-1}(\varepsilon^{\frac{\alpha}{\gamma}})^{\gamma k-1}=\varepsilon^{\alpha k-1}$. Therefore, if we take $k$ large enough, then the modified approximate solution given by the above principal phase \eqref{eq:mPphase} possesses the two desired properties: the leading term of the principal nonlinear phase is the same as in \eqref{eq:Pphase}, and the phase shifts tend to zero at the initial time $t=0$. Of course, this simple modification is too crude and so is valid only in a small neighborhood of $t=0$. Hence, to obtain an approximation also outside this small neighborhood, we replace each $\varphi_j(y)$ by a function $-\phi_{\mathrm{pha},j}(\tau,y)$ which solves a kind of $j$-th linearized system of \eqref{eq:sys} with $-\phi_{\mathrm{pha},j}(0)=\varphi_j$. This yields a principal nonlinear phase of the form \begin{equation}\label{eq:mmPphase} \exp \left( i \varepsilon^{\frac{\alpha}{\gamma}-1} \left[ \phi_0\left(\frac{\varepsilon^{\frac{\alpha}{\gamma}}}{1-t},\frac{x}{1-t}\right)+ \sum_{j=1}^{k-1}\left(\varepsilon^{\frac{\alpha}{\gamma}}\right)^{\gamma j-1}\phi_{\mathrm{pha},j}\left(\frac{t\varepsilon^{\frac{\alpha}{\gamma}}}{1-t},\frac{x}{1-t}\right) \right] \right). \end{equation} We note that ${t\varepsilon^{\frac{\alpha}{\gamma}}}/(1-t)={\varepsilon^{\frac{\alpha}{\gamma}}}/(1-t)-\varepsilon^{\frac{\alpha}{\gamma}}$, which is zero at $t=0$. \smallbreak $\bullet$ {\bf Three kinds of correction terms}. In fact, the above modified principal nonlinear phase \eqref{eq:mmPphase} is still insufficient. We need two more kinds of correction terms which are essentially different from $\phi_{\mathrm{pha},j}$.
Now, let us list all kinds of correction terms which we use: \begin{enumerate} \item \emph{Correction from phase}. The first one is the above $\phi_{\mathrm{pha},j}$. These functions satisfy $\phi_{\mathrm{pha},j}(0)=-\varphi_j$ and remove the bad part of $\phi_0$. \item \emph{Correction from amplitude}. The amplitude $b_0$ paired with $\phi_0$ via \eqref{eq:lsys} has the expansion $b_0(\tau) \asymp a_0+ \sum_{j=1}^\infty \tau^{\gamma j} a_j$, where $a_j$ is a space function determined by $a_0$ (see Proposition \ref{prop:t-expansion}). Hence, the principal part of the amplitude of $u^\varepsilon$ (the \emph{principal amplitude} of $u^\varepsilon$) is \[ b_0\left(\frac{\varepsilon^{\frac{\alpha}{\gamma}}}{1-t},\frac{x}{1-t}\right) \sim a_0\left(\frac{x}{1-t}\right) + \sum_{j=1}^\infty \left(\frac{\varepsilon^{\frac{\alpha}{\gamma}}}{1-t}\right)^{\gamma j} a_j\left(\frac{x}{1-t}\right) \] for $\varepsilon^{\frac{\alpha}{\gamma}}/(1-t) \ll 1$. In particular, at $t=0$ we have \begin{equation}\label{eq:mamplitude} b_0 \left( \varepsilon^{\frac{\alpha}{\gamma}}, x \right) \sim a_0(x) + \sum_{j=1}^{k-1} \varepsilon^{\alpha j}a_j + O(\varepsilon^{\alpha k}). \end{equation} The principal amplitude converges to the given initial data for all $\alpha>0$, and so one might expect that this is harmless. However, when we try to obtain the pointwise estimate of the solution via Grenier's method, we must take the $\varepsilon^1$-term of the initial amplitude into account because it affects (implicitly) the approximate solution at leading order (see \cite{AC-ARMA,CM-AA,Grenier98}). Therefore, we must remove $\varepsilon^{\alpha j}a_j$ from \eqref{eq:mamplitude} for all $j \ge 1$ such that $\alpha j \leqslant 1$; otherwise the approximate solution will differ outside a small neighborhood of $t=0$.
To do this, we construct $b_{\mathrm{amp},j}$ as a solution to a kind of $j$-th linearized system of \eqref{eq:sys} with the condition $b_{\mathrm{amp},j}(0)=-a_j$. In doing so, there appears a phase correction $\phi_{\mathrm{amp},j}$ associated with $b_{\mathrm{amp},j}$ via the system that $b_{\mathrm{amp},j}$ solves. \item \emph{Correction from interaction}. The third one comes from the structure of \eqref{eq:sys}. As stated in the introduction, the problem boils down to determining the asymptotic behavior of the solution $(a^h,\phi^h)$ to \eqref{eq:sys} up to $O(h^1)$. Suppose that the solution has two terms $ (h^{p_1}b_1, h^{p_1}\phi_1)$ and $ (h^{p_2}b_2, h^{p_2}\phi_2)$ in its asymptotic expansion as $h \to 0$. Then, the quadratic terms in \eqref{eq:sys} produce nontrivial $h^{p_1+p_2}$-terms. For example, $\nabla \phi^h \cdot \nabla a^h$ has the terms $h^{p_1+p_2}\nabla \phi_1 \cdot \nabla b_2$ and $h^{p_1+p_2}\nabla \phi_2 \cdot \nabla b_1$ in its expansion. Again by \eqref{eq:sys}, this implies that $(\partial_\tau a^h,\partial_\tau \phi^h)$ (and so $(a^h, \phi^h)$ itself) also contains an $h^{p_1+p_2}$-term in its expansion. Repeating this argument, we see that $(a^h, \phi^h)$ has $h^p$-terms for all $p$ given by $p=lp_1+mp_2$ with integers $l,m \ge 0$ such that $l+m \ge 1$. So far, we have already obtained two kinds of correction terms, $\phi_{\mathrm{pha},j}$ and $\phi_{\mathrm{amp},j}$. Therefore, they interact with each other to produce the third kind of correction terms $\phi_{\mathrm{int},j}$. \end{enumerate} \smallbreak $\bullet$ {\bf The supercritical cascade of phase shifts}. We are now in a position to understand the cascade-of-phase-shifts phenomenon in the supercritical case. With the above three kinds of correction terms, we can describe the asymptotic behavior of the solution before the final layer.
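To illustrate the interaction mechanism with purely hypothetical seed orders, suppose $p_1=\frac13$ and $p_2=\frac12$. Then the orders $p=lp_1+mp_2\leqslant 1$ generated by the quadratic terms are
\[
\tfrac13,\ \tfrac12,\ \tfrac23,\ \tfrac56,\ 1,
\]
so already two seed orders below $1$ force several additional terms into the expansion up to $O(h^1)$.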
Recall that, in the linear WKB case $\alpha>1$, the principal nonlinear phase is given by only one phase function $\phi_0(\tau,y)$, and the notion of boundary layer comes from its expansion with respect to $\tau$ around $\tau=0$. In the present case, not only $\phi_0$ but also all the $\phi_{\mathrm{pha},i}$, $\phi_{\mathrm{amp},j}$, $\phi_{\mathrm{int},k}$ produce a countable number of similar boundary layers. Thus, the cascade of phase shifts involves many more phase shifts and boundary layers than in the linear WKB case. \smallbreak $\bullet$ {\bf One more correction at the final boundary layer}. So far, the second difficulty listed at the beginning of this section has been solved by the three kinds of correction terms $\phi_{\mathrm{pha},i}$, $\phi_{\mathrm{amp},j}$, and $\phi_{\mathrm{int},j}$. With them we can describe the asymptotic behavior of the solution before the final layer. To describe it also on the final layer, we need one more correction term, $\phi_{\mathrm{equ}}$, as in the case $\alpha > 1$. This correction term overcomes the first difficulty. It solves \eqref{eq:tbtp1}--\eqref{eq:tbtp2} and is therefore independent of $\alpha$. When $\alpha=1$, $\phi_{\mathrm{equ}}$ changes into a solution of \eqref{eq:tbtp1}--\eqref{eq:tbtp3} (see Theorem \ref{thm:main}). We next address why we need a modified $\phi_{\mathrm{equ}}$. \smallbreak $\bullet$ {\bf Resonance of correction terms and the nonlinear WKB case}. Some of the correction terms associated with $\phi_{\mathrm{pha},i}$, $\phi_{\mathrm{amp},j}$, $\phi_{\mathrm{int},k}$, and $\phi_{\mathrm{equ}}$ may have the same order. This phenomenon is a kind of resonance. The nonlinear WKB case $\alpha=1$ is the simplest example: in this case, it happens that $\phi_{\mathrm{pha},1}$ and $\phi_{\mathrm{equ}}$ have the same order.
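This resonance can be verified by a power count, anticipating the computation of Section \ref{subsec:expansion}: the correction $\phi_{\mathrm{pha},1}$ enters the expansion of $\phi^h$ at order $h^{\frac{\alpha(\gamma-1)}{\gamma-\alpha}}$, while $\phi_{\mathrm{equ}}$ enters at order $h^1$, and
\[
\frac{\alpha(\gamma-1)}{\gamma-\alpha}=1
\iff \alpha\gamma-\alpha=\gamma-\alpha
\iff \alpha=1 .
\]
For $\alpha>1$ the first exponent exceeds $1$ and no such resonance occurs.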
Recall that if $\alpha=1$ then we work with the modified $\widetilde{\phi}_{\mathrm{equ}}$ solving \eqref{eq:tbtp1}--\eqref{eq:tbtp3}, \begin{equation*} \left\{ \begin{aligned} &\partial_\tau \widetilde{b}_{\mathrm{equ}} + \nabla \widetilde{\phi}_{\mathrm{equ}} \cdot \nabla b_0 + \nabla \phi_0 \cdot \nabla \widetilde{b}_{\mathrm{equ}} + \frac{1}{2} \widetilde{b}_{\mathrm{equ}} \Delta \phi_0 + \frac{1}{2} b_0 \Delta \widetilde{\phi}_{\mathrm{equ}} = \frac{i}{2} \Delta b_0, \\ &\partial_\tau \widetilde{\phi}_{\mathrm{equ}} + \nabla \phi_0 \cdot \nabla \widetilde{\phi}_{\mathrm{equ}} + \lambda \tau^{\gamma-2} (|y|^{-\gamma}* 2 \Re(\overline{b_0} \widetilde{b}_{\mathrm{equ}}) ) = 0, \\ & \widetilde{b}_{\mathrm{equ}|\tau=0} \equiv 0, \quad \widetilde{\phi}_{\mathrm{equ}|\tau=0} = \lambda(|y|^{-\gamma}*|a_0|^2)/(\gamma-1)=-\varphi_1 \end{aligned} \right. \end{equation*} (see Theorem \ref{thm:main}). We will see later that $\phi_{\mathrm{pha},1}$ is the solution to \begin{equation*} \left\{ \begin{aligned} &\partial_\tau b_{\mathrm{pha},1} + \nabla \phi_{\mathrm{pha},1} \cdot \nabla b_0 + \nabla \phi_0 \cdot \nabla b_{\mathrm{pha},1} + \frac{1}{2} b_{\mathrm{pha},1} \Delta \phi_0 + \frac{1}{2} b_0 \Delta \phi_{\mathrm{pha},1} =0, \\ &\partial_\tau \phi_{\mathrm{pha},1} + \nabla \phi_0 \cdot \nabla \phi_{\mathrm{pha},1} + \lambda \tau^{\gamma-2} (|y|^{-\gamma}* 2 \Re(\overline{b_0} b_{\mathrm{pha},1}) ) = 0, \\ & b_{\mathrm{pha},1|\tau=0} \equiv 0, \quad \phi_{\mathrm{pha},1|\tau=0} = \lambda(|y|^{-\gamma}*|a_0|^2)/(\gamma-1)=-\varphi_1 \end{aligned} \right. \end{equation*} (see Remark \ref{rmk:type}). Let $\phi_{\mathrm{equ}}$ be a solution to \eqref{eq:tbtp1}--\eqref{eq:tbtp2}. One can check from the above systems that $\widetilde{\phi}_{\mathrm{equ}}=\phi_{\mathrm{equ}}+\phi_{\mathrm{pha},1}$: indeed, both systems are linear in the unknown pair, and subtracting them shows that $(\widetilde{b}_{\mathrm{equ}}-b_{\mathrm{pha},1},\widetilde{\phi}_{\mathrm{equ}}-\phi_{\mathrm{pha},1})$ solves \eqref{eq:tbtp1} with zero initial data, that is, \eqref{eq:tbtp1}--\eqref{eq:tbtp2}.
Therefore, the modified correction term $\widetilde{\phi}_{\mathrm{equ}}$ is nothing but the superposition of the (usual) correction from equation $\phi_{\mathrm{equ}}$ and the correction from phase $\phi_{\mathrm{pha},1}$. \section{Preliminary results}\label{sec:pre} \subsection{Properties of the $Y^s_{p,q}(\mathbb{R}^n)$ space}\label{subsec:spaceY} We first collect some facts about the space $Y^s_{p,q}$ defined in \eqref{def:Y}--\eqref{def:Y2} (see also \cite{AC-SP,CM-AA}). \begin{enumerate} \item $Y^{s_1}_{p,q} \subset Y^{s_2}_{p,q}$ if $s_1 \ge s_2$. \item $Y^s_{p,q}=Y^s_{p,[\min(q,2^*),\infty]}$, and so $Y^s_{p,q_1} \subset Y^s_{p,q_2}$ if $q_1 \leqslant q_2$. \item If $q<n$ then $Y^s_{p,q}=Y^s_{[\min(p,q^*),\infty],q}$. This implies $Y^s_{p_1,q} \subset Y^s_{p_2,q}$ for $p_1 \leqslant p_2$ under $q<n$. In particular, $Y^s_{p,q} \subset Y^s_{[2^{**},\infty],[2^*,\infty]}$ if $n \ge 5$, where $2^{**} := (2^*)^*=2n/(n-4)$. \item If $n \ge 3$ then any function $f \in X^s$ can be written uniquely as $f=g + c$, where $g \in Y^s_{\infty,2}(=Y^s_{2^*,2})$ and $c$ is a constant. \end{enumerate} The first property is obvious by definition. The others follow from the following lemma, which is a consequence of the Hardy-Littlewood-Sobolev inequality and can be found in \cite[Th.~4.5.9]{Hormander1} or \cite[Lemma~7]{PGAHPANL}: \begin{lemma}\label{lem:HPG} If $\varphi \in \mathcal{D}'(\mathbb{R}^n)$ is such that $\nabla \varphi \in L^p (\mathbb{R}^n)$ for some $p \in ]1,n[$, then there exists a constant $c$ such that $\varphi-c \in L^q (\mathbb{R}^n)$, with $1/p=1/q+1/n$. \end{lemma} Take a function $f \in Y^s_{p,q}$. The indices $p$ and $q$ essentially indicate the decay rates at spatial infinity of the function $f$ and of its first derivative $\nabla f$, respectively. The second property means that $\nabla f$ is always bounded and decays fast enough at spatial infinity that $\nabla f \in L^{2^*}$.
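To illustrate how these indices are computed from Lemma \ref{lem:HPG}, note that the relation $1/p=1/q+1/n$ means $q=p^*=np/(n-p)$. For instance, starting from $\nabla f\in L^2$ with $n\ge3$,
\[
\frac1q=\frac12-\frac1n=\frac{n-2}{2n},
\qquad\text{so}\qquad q=\frac{2n}{n-2}=2^*;
\]
if moreover $n\ge5$, then $2^*<n$ and the lemma applies once more, giving
\[
(2^*)^*=\frac{n\cdot\frac{2n}{n-2}}{n-\frac{2n}{n-2}}=\frac{2n}{n-4}=2^{**},
\]
which is exactly the exponent appearing in the third property above.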
The third property says that $f$ has a similar decay property. From the fourth property, one can say that $Y^s_{\infty,2} = X^s$ in a sense, provided $n \ge 3$. Note that every $g \in Y^s_{\infty,2}$ satisfies $g \to 0$ as $|x| \to \infty$ by the definition \eqref{def:Y}--\eqref{def:Y2}. On the other hand, elements of $X^s$ do not necessarily tend to zero at spatial infinity. We also note that if $q=2$ then we have another characterization of $Y^s_{p,2}$: \begin{equation}\label{def:Y3} Y^s_{p,2}(\mathbb{R}^n) = L^p (\mathbb{R}^n) \cap X^{s}(\mathbb{R}^n), \end{equation} which makes sense for $s>n/2$. \subsection{Basic existence and approximation results} \label{subsec:existence} Apply $\nabla$ to the equation for $\phi^h$ in \eqref{eq:sys} and set $w^h:=\nabla \phi^h$. For our further applications, we generalize the system slightly. Let \begin{align} Q_1(a,v) & {} = - (v\cdot \nabla) a -\frac12 a \nabla \cdot v, \label{eq:q-1}\\ Q_2(v_1,v_2) & {} = - (v_1 \cdot \nabla)v_2, \label{eq:q-2}\\ Q_3(a_1,a_2) & {} = -\lambda \nabla(|x|^{-\gamma} * (a_1\overline{a_2})), \label{eq:q-3} \end{align} and consider a system of the following form: \begin{equation}\label{eq:system1} \left\{ \begin{aligned} \partial_t b^h ={}& c_1^h Q_1(b^h, w^h) + Q_1(B_1^h,w^h) + Q_1(b^h,W_1^h) +R_1^h + ir^h \Delta b^h, \\ \partial_t w^h ={}& c_2^h Q_2(w^h,w^h) + Q_2(W_2^h,w^h) + Q_2(w^h,W_2^h) \\ &{} + f^h(t) \left( c_2^h Q_3(b^h,b^h) + Q_3(B_2^h,b^h) + Q_3(b^h,B_2^h)\right) + R_2^h, \end{aligned} \right. \end{equation} \begin{align}\label{eq:system2} b^h_{|t=0} &= b_0^h, & w^h_{|t=0} &= w_0^h, \end{align} where $b^h$ is complex-valued and $w^h$ is real-valued. The other notation will be made precise in Assumptions \ref{asmp:existence} and \ref{asmp:existence-coef} below.
In \cite{CM-AA}, the existence of a unique solution for this kind of system is shown for explicit coefficients. We summarize the parallel result. \begin{assumption}[initial data]\label{asmp:existence} Let $n \ge 3$ and $\max(n/2-2,0) < \gamma \leqslant n-2$. We suppose the following conditions for some $s>n/2+1$: the initial amplitude $b_0^h \in H^s(\mathbb{R}^n)$ and the initial velocity $w_0^h \in Y^{s+1}_{q_0,2}(\mathbb{R}^n)$ for some $q_0 \in ]n/(\gamma+1),n[$, uniformly for $h \in [0,1]$, that is, there exists a constant $C$ independent of $h$ such that $\tSobn{b_0^h}{s} + \tnorm{w_0^h}_{Y^{s+1}_{q_0,2}(\mathbb{R}^n)} \leqslant C$. \end{assumption} \begin{assumption}[coefficients]\label{asmp:existence-coef} Let $c_1^h$ and $c_2^h$ be complex constants bounded uniformly in $h$, and let $r^h$ be a real constant bounded uniformly in $h$. Suppose for some $T^*>0$ and $s>n/2+1$ that $f^h$ is a real-valued function of time with $f^h \in L^1((0,T^*))$; that $B_i^h$ and $R_1^h$ are complex-valued functions of spacetime with $B_i^h \in L^1((0,T^*); H^{s+1})$ and $R_1^h\in L^1((0,T^*); H^{s})$; and that $W_i^h$ and $R_2^h$ are real-valued functions of spacetime with $W_i^h \in L^1((0,T^*); Y^{s+2}_{\infty,2} )$ and $R_2^h\in L^1((0,T^*); Y^{s+1}_{q_0,2})$. Moreover, suppose that all the above functions are bounded in the corresponding norms uniformly with respect to $h$. \end{assumption} \begin{assumption}[existence of the limit]\label{asmp:approximate-coef} In addition to Assumptions \ref{asmp:existence} and \ref{asmp:existence-coef}, we suppose the existence of the limits of $b_0^h$, $w_0^h$, $c_i^h$, $r^h$, $f^h$, $B_i^h$, $W_i^h$, and $R_i^h$ as $h\to 0$ in the corresponding strong topologies. These strong limits are denoted by $b_0$, $w_0$, $c_i$, $r$, $f$, $B_i$, $W_i$, and $R_i$, respectively.
\end{assumption} \begin{proposition}\label{prop:existence-sys} Let Assumptions~\ref{asmp:existence} and \ref{asmp:existence-coef} be satisfied. Then, there exists $T>0$, independent of $h$, $s$, and $q_0$, such that for all $h \in [0,1]$ the system \eqref{eq:system1}--\eqref{eq:system2} has a unique solution \begin{equation*} (b^h,w^h) \in C\left([0,T]; H^s\times Y^{s+1}_{q_0,2}\right). \end{equation*} Moreover, the norm of $ (b^h,w^h)$ is bounded uniformly for $h \in [0,1]$. If, in addition, Assumption~\ref{asmp:approximate-coef} is satisfied, then the pair $(b^h,w^h)$ converges to $(b,w):=(b^h,w^h)_{|h=0}$ in $C([0,T]; H^{s-2} \times Y^{s-1}_{q_0,2})$ as $h \to 0$. Furthermore, $(b,w)$ solves \begin{equation*} \left\{ \begin{aligned} \partial_t b ={}& c_1 Q_1(b, w) + Q_1(B_1,w) + Q_1(b,W_1) +R_1 + ir \Delta b, \\ \partial_t w ={}& c_2 Q_2(w,w) + Q_2(W_2,w) + Q_2(w,W_2) \\ &{} + f(t) \left( c_2 Q_3(b,b) + Q_3(B_2,b) + Q_3(b,B_2)\right) + R_2, \end{aligned} \right. \end{equation*} \begin{align*} b_{|t=0} &= b_0, & w_{|t=0} &= w_0. \end{align*} \end{proposition} \begin{proof} The key is the following energy estimate for $s>n/2+1$: \begin{align*} \frac{d}{dt} E^h \leqslant{}& C (1+|f^h(t)|)\Big[(|c_1^h| + |c_2^h|)(E^h)^{\frac32} \\ &{}+(\tSobn{B^h_1}{s+1}+\tSobn{B^h_2}{s}+\tSobn{\nabla W^h_1}{s}+\tSobn{\nabla W^h_2}{s+1})E^h\Big] \\ &{}+ C(\tSobn{R^h_1}{s} + \tSobn{\nabla R^h_2}{s})(E^h)^{\frac12}, \end{align*} where $E^h:= \tSobn{b^h}{s}^2 + \tSobn{\nabla w^h}{s}^2$. For more details, see the proof of Proposition 4.1 in \cite{CM-AA}. \end{proof} \begin{remark} We intend to apply this proposition to the system \eqref{eq:sys}. In that case, $f^h(t)$ corresponds to $t^{\gamma-2}$, which is singular at $t=0$ if $\gamma<2$.
Since $f^h(t)$ is only assumed to be integrable, we will see that the system \eqref{eq:sys} (and so the equation \eqref{eq:r5}) has a unique solution for $\gamma>1$ despite this singularity. We also note that this corresponds to the fact that the Hartree nonlinearity is short-range when $\gamma>1$. \end{remark} \begin{remark} In the convergence part of Proposition \ref{prop:existence-sys}, it can happen that $n/2+1 \ge s-1>n/2$. In this case, we use the definition \eqref{def:Y3} instead of \eqref{def:Y}. \end{remark} We conclude this section with a lemma which we use to construct a function $\phi^h$ from the corresponding solution $w^h=\nabla \phi^h$ to \eqref{eq:system1}--\eqref{eq:system2}. This lemma is a consequence of Lemma \ref{lem:HPG}. \begin{lemma}\label{lem:int} If $\varphi$ satisfies $|\varphi|\to 0$ as $|x|\to \infty$ and $\nabla \varphi \in Y^s_{q,2}$ for some $s>n/2$ and some $q<n$ with $q \leqslant 2^*$, then $\varphi \in Y^{s+1}_{q^*,q}$. \end{lemma} \section{Proof of Theorem \ref{thm:main}}\label{sec:proof} \subsection{Strategy}\label{subsec:strategy} In this section, we describe the strategy of the proof of Theorem \ref{thm:main} in some detail. As in Section \ref{sec:intro}, we first introduce the semiclassical conformal transform: \begin{equation}\tag{\ref{eq:spct}} u^\varepsilon(t,x) = \frac{1}{(1-t)^{n/2}} \psi^\varepsilon \left( \frac{\varepsilon^{\frac{\alpha}{\gamma}}}{1-t}, \frac{x}{1-t} \right) \exp\left(i \frac{|x|^2}{2\varepsilon(t-1)}\right).
\end{equation} Putting $\tau := \varepsilon^{\frac{\alpha}{\gamma}}/(1-t)$ and $y := x/(1-t)$, we find that \eqref{eq:r4}--\eqref{eq:odata} becomes \begin{align*} i \varepsilon^{1-\frac{\alpha}{\gamma}} \partial_\tau \psi^\varepsilon + \frac{(\varepsilon^{1-\frac{\alpha}{\gamma}})^2}{2} \Delta_y \psi^\varepsilon &= \lambda \tau^{\gamma-2} (|y|^{-\gamma} * |\psi^\varepsilon|^2 )\psi^\varepsilon, & \psi^\varepsilon_{|\tau=\varepsilon^{\alpha/\gamma} } &= a_0. \end{align*} Put $h=\varepsilon^{1-\alpha/\gamma}$ and denote $\psi^\varepsilon$ by $\psi^h$. We note that $\varepsilon \to 0$ is equivalent to $h\to 0$ as long as $\alpha < \gamma$. Thus, our problem is reduced to the limit $h\to 0$ of the solution to \begin{align}\tag{\ref{eq:r5}} ih\partial_\tau \psi^h +\frac{h^2}{2}\Delta \psi^h &= \lambda \tau^{\gamma-2} (\lvert y\rvert ^{-\gamma}\ast |\psi^h|^2)\psi^h, & \psi^h_{|\tau=h^{\frac{\alpha}{\gamma-\alpha}}}(y) &= a_0(y). \end{align} Our strategy is to seek a solution $\psi^h$ to \eqref{eq:r5} represented as \begin{equation}\tag{\ref{eq:grenier}}\label{eq:psi-ap} \psi^h(\tau,y)=a^h(\tau,y) e^{i\phi^h(\tau,y)/h}, \end{equation} with a complex-valued space-time function $a^h$ and a real-valued space-time function $\phi^h$. Note that $a^h$ is expected to be complex-valued even if its initial value $a_0$ is real-valued. Substituting the form \eqref{eq:psi-ap} into \eqref{eq:r5}, we obtain \begin{multline*} -a^h \left( \partial_\tau \phi^h + \frac{1}{2}|\nabla \phi^h|^2 + \lambda \tau^{\gamma-2}(|y|^{-\gamma}*|a^h|^2)\right) \\ +i h \left( \partial_\tau a^h + (\nabla \phi^h \cdot \nabla) a^h + \frac{1}{2}a^h \Delta \phi^h - i \frac{h}{2}\Delta a^h\right) = 0.
\end{multline*} To obtain a solution of the above equation (hence of \eqref{eq:r5}), we choose to consider \begin{equation}\tag{\ref{eq:sys}} \left\{ \begin{aligned} &\partial_\tau a^h + \nabla \phi^h \cdot \nabla a^h + \frac12 a^h \Delta \phi^h = i\frac{h}{2}\Delta a^h,\\ &\partial_\tau \phi^h + \frac12 |\nabla \phi^h|^2 + \lambda \tau^{\gamma-2}(|y|^{-\gamma}*|a^h|^2) =0, \\ & a^h_{|\tau=h^{\frac{\alpha}{\gamma-\alpha}}}=a_0, \quad \phi^h_{|\tau=h^{\frac{\alpha}{\gamma-\alpha}}}=0. \end{aligned} \right. \end{equation} The point is that this system can be regarded as a symmetric hyperbolic system with a semilinear perturbation. In Section \ref{subsec:pexistence}, we first prove that it admits a unique solution with suitable regularity (see Proposition~\ref{prop:p-existence}), hence providing a solution to \eqref{eq:r5} and \eqref{eq:r4}--\eqref{eq:odata}. By \eqref{eq:psi-ap}, in order to obtain a leading-order WKB-type approximate solution, it suffices to determine the $O(h^0)$ and $O(h^1)$ terms of $\phi^h$ in the limit $h \to 0$. Letting $h=0$ in \eqref{eq:sys}, we formally obtain the $O(h^0)$ term $(b_0,\phi_0)$, which solves \begin{equation}\tag{\ref{eq:lsys}} \left\{ \begin{aligned} &\partial_\tau b_0 + \nabla \phi_0 \cdot \nabla b_0 + \frac12 b_0 \Delta \phi_0 = 0,\\ &\partial_\tau \phi_0 + \frac12 |\nabla \phi_0|^2 + \lambda \tau^{\gamma-2}(|y|^{-\gamma}*|b_0|^2) =0, \\ & b_{0|\tau=0}=a_0, \quad \phi_{0|\tau=0}=0 \end{aligned} \right. \end{equation} introduced in Section \ref{sec:intro}. The difficulty in finding the $h^1$-terms lies in the following two respects: first, the equation \eqref{eq:sys} depends on $h$ through the term $i\frac{h}{2}\Delta a^h$; second, the initial time of \eqref{eq:sys} moves at a speed $h^{\AG}$. In Section \ref{subsec:timeexpansion}, we give the time expansion of $(b_0,\phi_0)$ around $\tau=0$.
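In this connection, let us record why the initial time is $h^{\frac{\alpha}{\gamma-\alpha}}$: since $h=\varepsilon^{1-\alpha/\gamma}=\varepsilon^{\frac{\gamma-\alpha}{\gamma}}$, we have $\varepsilon=h^{\frac{\gamma}{\gamma-\alpha}}$ and hence
\[
\varepsilon^{\frac{\alpha}{\gamma}}
=\left(h^{\frac{\gamma}{\gamma-\alpha}}\right)^{\frac{\alpha}{\gamma}}
=h^{\frac{\alpha}{\gamma-\alpha}},
\]
which is the value of $\tau$ corresponding to $t=0$ under the transform \eqref{eq:spct}.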
We will obtain an expansion of the form \begin{align}\label{eq:tmpexp} b_0(\tau,y) & {} \asymp \sum_{j=0}^\infty \tau^{\gamma j} a_j(y), & \phi_0(\tau,y) & {} \asymp \sum_{j=1}^\infty \tau^{\gamma j -1} \varphi_j(y) \end{align} (Proposition \ref{prop:t-expansion}). This expansion is essential in handling the moving initial data. In Section \ref{subsec:expansion}, we finally determine the $O(h^1)$ term in the case $\alpha \ge 1$. What we must show is the existence of the limits \begin{align*} \frac{a^h(\tau) - b_0(\tau)}{h} & {} \to b_{\mathrm{equ}}(\tau) , & \frac{\phi^h(\tau) - \phi_0(\tau)}{h} & {} \to \phi_{\mathrm{equ}}(\tau) \end{align*} as $h \to 0$. A formal differentiation of \eqref{eq:sys} with respect to $h$ suggests that $(b_{\mathrm{equ}},\phi_{\mathrm{equ}})$ may solve the linearized system \begin{equation}\tag{\ref{eq:tbtp1}} \left\{ \begin{aligned} &\partial_\tau b_{\mathrm{equ}} + \nabla \phi_{\mathrm{equ}}\cdot \nabla b_0 + \nabla \phi_0 \cdot \nabla b_{\mathrm{equ}} + \frac{1}{2} b_{\mathrm{equ}} \Delta \phi_0 + \frac{1}{2} b_0 \Delta \phi_{\mathrm{equ}} = \frac{i}{2} \Delta b_0, \\ &\partial_\tau \phi_{\mathrm{equ}} + \nabla \phi_0 \cdot \nabla \phi_{\mathrm{equ}} + \lambda \tau^{\gamma-2} (|y|^{-\gamma}* 2 \Re(\overline{b_0} b_{\mathrm{equ}}) ) = 0. \end{aligned} \right. \end{equation} By means of \eqref{eq:tmpexp}, the following estimates hold at the initial time: \begin{align*} \frac{a^h(h^{\AG}) - b_0(h^{\AG})}{h} & {} = -\frac{b_0(h^{\AG}) - a_0}{h} = O(h^{\frac{\alpha \gamma}{\gamma -\alpha} -1}), \\ \frac{\phi^h(h^{\AG}) - \phi_0(h^{\AG})}{h} & {} = -\frac{\phi_0(h^{\AG}) }{h} = O(h^{\frac{\alpha (\gamma-1)}{\gamma -\alpha} -1}).
\end{align*} Note that $\frac{\alpha \gamma}{\gamma -\alpha}>1$ for all $\alpha \ge 1$; however, $\frac{\alpha (\gamma-1)}{\gamma -\alpha} >1$ if $\alpha >1$ and $\frac{\alpha (\gamma-1)}{\gamma -\alpha} = 1$ if $\alpha =1$. In particular, \[ \frac{\phi^h(h^{\AG}) - \phi_0(h^{\AG})}{h} \to \begin{cases} 0 & \text{ if } \alpha >1, \\ - \varphi_1 & \text{ if } \alpha = 1, \end{cases} \] where $\varphi_1$ is defined in \eqref{eq:tmpexp}. Therefore, the $O(h^1)$ term is described by $(b_{\mathrm{equ}},\phi_{\mathrm{equ}})$ solving \eqref{eq:tbtp1} with \eqref{eq:tbtp2} if $\alpha >1$ and with \eqref{eq:tbtp3} if $\alpha = 1$. In Section \ref{sec:sscase}, we consider the case $\alpha < 1$. In this case, the above powers $\frac{\alpha \gamma}{\gamma -\alpha}$ and $\frac{\alpha (\gamma-1)}{\gamma -\alpha}$ are, in general, less than one. Therefore, there appear several terms of order less than $O(h^1)$ in the expansion of $(a^h,\phi^h)$. We determine all these terms and obtain the asymptotic behavior of $(a^h,\phi^h)$ (see Theorem \ref{thm:sscase}). In Sections \ref{sec:proof} and \ref{sec:sscase}, we mainly work with $v^h=\nabla \phi^h$ instead of $\phi^h$ itself. Note that, by means of Lemma \ref{lem:int}, it is easy to construct $\phi^h$ from $v^h$. \subsection{Existence of phase-amplitude form solution}\label{subsec:pexistence} According to the strategy in Section \ref{subsec:strategy}, we first show that the system \eqref{eq:sys} has a unique solution. \begin{proposition}\label{prop:p-existence} Let Assumption \ref{asmp:1} be satisfied. Assume $0<\alpha<\gamma$. Then, there exists $T>0$ independent of $h$ such that, for all $h\in (0,1]$, there exists a unique solution $\psi^h \in C([h^{\AG},T+h^{\AG}];H^\infty)$ to \eqref{eq:r5}.
Moreover, $\psi^h$ can be written as \[ \psi^h = a^h e^{i\frac{\phi^h}{h}}, \] where \[ a^h \in C([h^{\AG},T+h^{\AG}];H^\infty) \cap C^\infty((h^{\AG},T+h^{\AG}];H^\infty) \] and \begin{align*} \phi^h \in{} & C([h^{\AG},T+h^{\AG}];Y^{\infty}_{(n/\gamma,\infty],(n/(\gamma+1),\infty]}) \\ &{} \cap C^\infty((h^{\AG},T+h^{\AG}];Y^{\infty}_{(n/\gamma,\infty],(n/(\gamma+1),\infty]}). \end{align*} Moreover, there exists a limit $(b_0,\phi_0):=(a^h,\phi^h)_{|h=0}$ belonging to the same function space as $(a^h,\phi^h)$ ($h>0$), and $(a^h,\phi^h)$ converges strongly to $(b_0,\phi_0)$ as $h\to 0$. Furthermore, $(b_0,\phi_0)$ solves \eqref{eq:lsys}. \end{proposition} \begin{proof} We set the velocity $v^h = \nabla \phi^h$. Then, the pair $(a^h,v^h)$ solves \begin{equation}\label{eq:sysav1} \left\{ \begin{aligned} &\partial_\tau a^h = Q_1(a^h,v^h) +i\frac{h}{2} \Delta a^h, \\ &\partial_\tau v^h = Q_2(v^h,v^h) + \tau^{\gamma-2} Q_3(a^h,a^h), \end{aligned} \right. \end{equation} \begin{align}\label{eq:sysav2} a^h_{|\tau=h^{\AG}} &= a_0, & v^h_{|\tau=h^{\AG}} &= 0, \end{align} where $Q_1$, $Q_2$, and $Q_3$ are defined by \eqref{eq:q-1}, \eqref{eq:q-2}, and \eqref{eq:q-3}, respectively. To fix the initial time, we employ the time translation $\tau = t+h^{\AG}$. Then, the equation becomes \begin{equation}\label{eq:sysav3} \left\{ \begin{aligned} &\partial_t a^h = Q_1(a^h,v^h) +i\frac{h}{2} \Delta a^h, \\ &\partial_t v^h = Q_2(v^h,v^h) + (t+h^{\AG})^{\gamma-2} Q_3(a^h,a^h), \end{aligned} \right. \end{equation} \begin{align}\label{eq:sysav4} a^h_{|t=0} &= a_0, & v^h_{|t=0} &= 0. \end{align} Now, the assumption $\gamma >1$ implies that $(t+h^{\AG})^{\gamma-2}$ is integrable over $(0,T^*)$ for some $T^*>0$ and that its integral is uniformly bounded with respect to $h$. Fix $s>n/2+1$.
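The uniform bound on the singular factor is elementary: for $\gamma>1$ and $h\in[0,1]$, so that $h^{\AG}\leqslant 1$,
\[
\int_0^{T^*}(t+h^{\AG})^{\gamma-2}\,dt
=\frac{(T^*+h^{\AG})^{\gamma-1}-(h^{\AG})^{\gamma-1}}{\gamma-1}
\leqslant \frac{(T^*+1)^{\gamma-1}}{\gamma-1},
\]
a bound independent of $h$.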
Then, applying Proposition \ref{prop:existence-sys} with $c_1^h=c_2^h=1$, $r^h=h/2$, $f^h(t)=(t+h^{\AG})^{\gamma-2}$, and $B_i^h\equiv W_i^h \equiv R_i^h \equiv 0$, we obtain an existence time $0<T\leqslant T^*$ independent of $h$ and a unique solution \[ (a^h,v^h) \in C([0,T]; H^s \times Y^{s+1}_{(n/(\gamma+1),\infty],2}) \] to \eqref{eq:sysav3}--\eqref{eq:sysav4} such that $\tSobn{a^h}{s}^2 + \tSobn{\nabla v^h}{s}^2$ is bounded uniformly with respect to $t\in [0,T]$; the upper bound depends only on $\tSobn{a_0}{s}$. Notice that Assumption \ref{asmp:existence} is satisfied for all $s>n/2+1$ and $q_0 \in ]n/(\gamma+1),n[$. By the equation and the Hardy-Littlewood-Sobolev inequality, $\partial_t v^h$ belongs to $C([0,T]; Y^{s}_{(n/(\gamma+1),\infty],2})$. Hence, we see from Lemma \ref{lem:int} that $\partial_t \phi^h \in C([0,T]; Y^{s+1}_{(n/\gamma,\infty],(n/(\gamma+1),\infty]})$. Since $\phi^h_{|t=0} = 0$, we also have \[ \phi^h \in C([0,T]; Y^{s+1}_{(n/\gamma,\infty],(n/(\gamma+1),\infty]}). \] Therefore, $\phi^h \to 0$ as $|x|\to \infty$. Then, again applying Lemma \ref{lem:int} to $v^h$, we conclude that $\phi^h \in C([0,T]; Y^{s+2}_{(n/\gamma,\infty],(n/(\gamma+1),\infty]})$. Since the existence time $T$ is independent of $s$, we have $a^h \in C([0,T];H^\infty)$ and $\phi^h \in C([0,T];Y^{\infty}_{(n/\gamma,\infty],(n/(\gamma+1),\infty]})$. A bootstrap argument gives the $C^\infty$ regularity with respect to time. The existence of the limit $(b_0,\phi_0)$ and the convergence $(a^h,\phi^h)\to (b_0,\phi_0)$ as $h\to 0$ follow from the latter part of Proposition \ref{prop:existence-sys} and Lemma \ref{lem:int}. \end{proof} \subsection{Time expansion of the limit solution near $\tau=0$}\label{subsec:timeexpansion} By Proposition \ref{prop:p-existence}, the system \eqref{eq:sysav1}--\eqref{eq:sysav2} has a unique solution even if $h=0$.
We keep working with $v^h=\nabla \phi^h$ instead of $\phi^h$, and write $(b_0,w_0):=(a^h,v^h)_{|h=0}$. The main difficulty in describing the asymptotic behavior of $(a^h,v^h)$ as $h \to 0$ comes from the fact that the initial data is given at $\tau=h^{\AG}$. In order to handle this $h$-dependence of the initial time, we give a time expansion of $(b_0,w_0)$ around $\tau=0$. Note that $(b_0,w_0)$ solves \begin{align}\label{eq:sysb01} \partial_\tau b_0 &{}= Q_1(b_0,w_0), & \partial_\tau w_0 &{}= Q_2(w_0 ,w_0) + \tau^{\gamma-2} Q_3(b_0,b_0), \end{align} \begin{align}\label{eq:sysb02} b_{0|\tau=0} &= a_0, & w_{0|\tau=0} & = 0, \end{align} where the quadratic forms $Q_i$ are defined by \eqref{eq:q-1}--\eqref{eq:q-3}. \begin{proposition}\label{prop:t-expansion} Let $(b_0,w_0)=(b_0,\nabla \phi_0)$ be the unique solution to \eqref{eq:sysb01}--\eqref{eq:sysb02} given by Proposition \ref{prop:p-existence}. Then, it holds that \begin{align}\label{eq:t-exp1} b_0(\tau,y) & {} = \sum_{j=0}^J \tau^{\gamma j} a_j(y) + o(\tau^{\gamma J}) \quad \text{in } H^\infty, \\ \label{eq:t-exp2} w_0(\tau,y) & {} = \sum_{j=1}^J \tau^{\gamma j -1} v_j(y) + o(\tau^{\gamma J-1}) \quad \text{in } Y^\infty_{(n/(\gamma+1),\infty],2} \end{align} as $\tau\to 0$ for all $J$, where $a_0$ is the initial datum for $b_0$, and $a_j$ and $v_j$ are defined by \[ a_j = \frac1{\gamma j} \sum_{k_1\ge 0,\, k_2 \ge 1,\, k_1+k_2=j}Q_1(a_{k_1},v_{k_2}) \] for $j \ge 1$, $v_1 = Q_3(a_0,a_0)/(\gamma-1)$, and \[ v_j = \frac1{\gamma j -1} \left[ \sum_{k_1\ge 1,\, k_2 \ge 1,\, k_1+k_2=j}Q_2(v_{k_1},v_{k_2}) + \sum_{k_1\ge 0,\, k_2 \ge 0,\, k_1+k_2=j-1}Q_3(a_{k_1},a_{k_2}) \right] \] for $j \ge 2$, with the quadratic forms $Q_i$ defined by \eqref{eq:q-1}--\eqref{eq:q-3}.
Moreover, $\phi_0$ is expanded as \begin{equation}\tag{\ref{eq:p-exp}} \phi_0(\tau,y) = \sum_{j=1}^J \tau^{\gamma j -1} \varphi_j(y) + o(\tau^{\gamma J-1}) \text{ in } Y^\infty_{(n/\gamma,\infty],(n/(\gamma+1),\infty]} \end{equation} as $\tau \to 0$ for all $J \geqslant 1$, where $\varphi_j$ is given by $\varphi_1=\frac{\lambda}{1-\gamma}(|y|^{-\gamma}*|a_0|^2)$ and \begin{multline*} \varphi_j = \frac1{1-\gamma j} \Bigg[ \sum_{k_1\geqslant 1, k_2 \geqslant 1, k_1+k_2=j}\frac12 (\nabla \varphi_{k_1} \cdot \nabla \varphi_{k_2}) \\ + \sum_{k_1\geqslant 0, k_2 \geqslant 0, k_1+k_2=j-1}\lambda(|y|^{-\gamma}*(a_{k_1}\overline{a_{k_2}})) \Bigg] . \end{multline*} \end{proposition} \begin{remark} Using the ``$\asymp$'' sign defined in Notation \ref{not:asymp}, the above three expansions \eqref{eq:t-exp1}, \eqref{eq:t-exp2}, and \eqref{eq:p-exp} can be written as $b_0(\tau) \asymp \sum_{j=0}^J \tau^{\gamma j} a_j$ in $H^\infty$, $w_0(\tau) \asymp \sum_{j=1}^J \tau^{\gamma j -1} v_j$ in $Y^\infty_{(n/(\gamma+1),\infty],2}$, and $\phi_0(\tau) \asymp \sum_{j=1}^J \tau^{\gamma j -1} \varphi_j$ in $Y^\infty_{(n/\gamma,\infty],(n/(\gamma+1),\infty]}$, respectively. \end{remark} \begin{proof} We first note that, by the definitions of $Q_i$, it follows for all $s>n/2+1$ that \begin{align*} &\norm{Q_1(b,v)}_{H^s} \leqslant C_s \norm{b}_{H^{s+1}} \norm{v}_{Y^{s+1}_{(n/(\gamma+1),\infty],2}}, \\ &\norm{Q_2(v_1,v_2)}_{Y^{s}_{(n/(\gamma+1),\infty],2}} \leqslant C_s \norm{v_1}_{Y^{s}_{(n/(\gamma+1),\infty],2}} \norm{v_2}_{Y^{s+1}_{(n/(\gamma+1),\infty],2}}, \\ &\norm{Q_3(b_1,b_2)}_{Y^{s}_{(n/(\gamma+1),\infty],2}} \leqslant C_s \norm{b_1}_{H^s} \norm{b_2}_{H^s}. \end{align*} Therefore, we see that $a_l$ and $v_l$ are bounded in $H^\infty$ and $Y^\infty_{(n/(\gamma+1),\infty],2}$, respectively. For simplicity, in this proof we denote $Y^s_{(n/(\gamma+1),\infty],2}$ by $Y^s$.
Denote $\sum_{j=0}^l \tau^{\gamma j}a_j$ and $\sum_{j=1}^l \tau^{\gamma j -1} v_j$ by $\widetilde{a}_l$ and $\widetilde{v}_l$, respectively. Then, it suffices to show that \begin{align} &\norm{b_0 -\widetilde{a}_l}_{L^\infty([0,\tau];H^\infty)} = o(\tau^{\gamma l}) & & \forall l\geqslant 0, \label{eq:t-expb}\\ &\norm{w_0 -\widetilde{v}_l}_{L^\infty([0,\tau];Y^\infty)} = o(\tau^{\gamma l -1}) & & \forall l \geqslant 1. \label{eq:t-expw} \end{align} \subsection*{Step 1} Since $b_0 \in C([0,T];H^\infty)$ and $b_0(0)=a_0 = \widetilde{a}_0$, \eqref{eq:t-expb} is trivial for $l=0$. We now show \eqref{eq:t-expw} for $l=1$. By the second equation of \eqref{eq:sysb01}, it holds for $s>n/2+1$ that \begin{align*} \norm{w_0(\tau)}_{Y^s} & {} \leqslant C_1\int_0^\tau \norm{w_0(t)}_{Y^s} dt + C_2 \int_0^\tau t^{\gamma-2} dt\\ & {} \leqslant C_1 \tau \norm{w_0}_{L^\infty((0,\tau];Y^s)} + C_2^\prime \tau^{\gamma-1}, \end{align*} where $C_1$ depends on $s$ and the $C([0,T];Y^\infty)$ norm of $w_0$, and $C_2$ depends on $s$ and the $C([0,T];H^\infty)$ norm of $b_0$. The right hand side is monotone increasing in time, hence this gives \[ \norm{w_0}_{L^\infty((0,\tau];Y^s)} \leqslant C_1 \tau \norm{w_0}_{L^\infty((0,\tau];Y^s)} + C_2^\prime \tau^{\gamma-1}. \] Choose $\tau$ so small that $C_1 \tau \leqslant 1/2$. Then, we obtain \[ \norm{w_0}_{L^\infty((0,\tau];Y^\infty)} = O(\tau^{\gamma-1}) \] since $s>n/2+1$ is arbitrary. Again by the equation, it holds that \[ w_0 - \widetilde{v}_1 = \int_0^\tau Q_2(w_0,w_0) dt + \int_0^\tau t^{\gamma-2} (Q_3(b_0,b_0-a_0) + Q_3(b_0-a_0,a_0)) dt. \] Since $w_0$ is of order $O(\tau^{\gamma-1})$ in $L^\infty((0,\tau];Y^\infty)$, the first integral on the right hand side is of order $O(\tau^{2\gamma-1})$ in $L^\infty((0,\tau];Y^\infty)$.
Similarly, the fact that $b_0-a_0$ is of order $o(1)$ in $L^\infty((0,\tau];H^\infty)$ shows that the second integral is of order $o(\tau^{\gamma-1})$ in $L^\infty((0,\tau];Y^\infty)$, which proves \eqref{eq:t-expw} for $l=1$. \subsection*{Step 2} We prove \eqref{eq:t-expb} and \eqref{eq:t-expw} for large $l$ by induction. By the definition of $a_j$, an explicit calculation shows \begin{align*} \partial_\tau b_0 = {} & Q_1 (b_0,w_0) = Q_1(b_0,w_0-\widetilde{v}_1) + Q_1(b_0-\widetilde{a}_0, \tau^{\gamma-1}v_1) + \partial_\tau (\tau^{\gamma}a_1 ) \\ = {} & Q_1 (b_0,w_0-\widetilde{v}_2) + Q_1(b_0 - \widetilde{a}_0,\tau^{2\gamma-1}v_2 ) + Q_1(b_0 -\widetilde{a}_1 ,\tau^{\gamma-1}v_1) \\ & {} + \partial_\tau (\tau^{\gamma}a_1 + \tau^{2\gamma}a_2) = \cdots \\ = {} & Q_1 ( b_0, w_0 - \widetilde{v}_l ) + \sum_{l_1 =0}^{l-1} Q_1( b_0 - \widetilde{a}_{l_1}, \tau^{\gamma (l-l_1) -1} v_{l-l_1}) + \partial_\tau \left( \sum_{j=1}^l \tau^{\gamma j} a_j \right). \end{align*} Similarly, it holds that \begin{align*} \partial_\tau w_0 = {} & Q_2(w_0,w_0) + \tau^{\gamma-2} Q_3 (b_0,b_0) = \cdots \\ = {} & Q_2 ( w_0, w_0 - \widetilde{v}_l ) + \sum_{l_1 =1}^{l} Q_2\left( w_0 - \widetilde{v}_{l_1}, \tau^{\gamma (l-l_1+1) -1} v_{l-l_1+1}\right) \\ & {} + \tau^{\gamma-2}\left(Q_3 ( b_0, b_0 - \widetilde{a}_l ) + \sum_{l_1 =0}^{l} Q_3( b_0 - \widetilde{a}_{l_1}, \tau^{\gamma (l-l_1) } a_{l-l_1}) \right)\\ & {} + \partial_\tau \left( \sum_{j=1}^{l+1} \tau^{\gamma j -1} v_j \right).
\end{align*} Integrating these identities with respect to time, we obtain \begin{equation}\label{eq:t-expbl} b_0 - \widetilde{a}_l = \int_0^\tau \left(Q_1 ( b_0, w_0 - \widetilde{v}_l ) + \sum_{l_1 =0}^{l-1} Q_1( b_0 - \widetilde{a}_{l_1}, t^{\gamma (l-l_1) -1} v_{l-l_1}) \right)dt \end{equation} and \begin{align}\label{eq:t-expwl} w_0 - \widetilde{v}_{l+1} = {} & \int_0^\tau \left(Q_2 ( w_0, w_0 - \widetilde{v}_l ) + \sum_{l_1 =1}^{l} Q_2\left( w_0 - \widetilde{v}_{l_1}, t^{\gamma (l-l_1+1) -1} v_{l-l_1+1}\right) \right)dt \\ & {} + \int_0^\tau t^{\gamma-2}\left(Q_3 ( b_0, b_0 - \widetilde{a}_l ) + \sum_{l_1 =0}^{l} Q_3( b_0 - \widetilde{a}_{l_1}, t^{\gamma (l-l_1) } a_{l-l_1}) \right)dt. \nonumber \end{align} Now, let $L \geqslant 1$ be an integer. If \eqref{eq:t-expb} holds for $l \leqslant L-1$ and \eqref{eq:t-expw} holds for $l \leqslant L$, then we see that \eqref{eq:t-expbl} gives \eqref{eq:t-expb} with $l=L$. On the other hand, if both \eqref{eq:t-expb} and \eqref{eq:t-expw} hold for $l \leqslant L$, then we obtain \eqref{eq:t-expw} with $l=L+1$ from \eqref{eq:t-expwl}. \smallbreak The expansion of $\phi_0$ is an immediate consequence of the expansion of $w_0=\nabla \phi_0$. Since \[ Q_2(v_{k_1},v_{k_2}) + Q_2(v_{k_2},v_{k_1}) = -\nabla (v_{k_1}\cdot v_{k_2}) = -\nabla \left(\frac12 v_{k_1}\cdot v_{k_2}+ \frac12 v_{k_2}\cdot v_{k_1}\right), \] we deduce from the definition of $v_j$ that \begin{multline*} \nabla \varphi_j = v_j = \frac1{1-\gamma j} \nabla\Bigg[ \sum_{k_1\geqslant 1, k_2 \geqslant 1, k_1+k_2=j}\frac12 (\nabla \varphi_{k_1} \cdot \nabla \varphi_{k_2}) \\ + \sum_{k_1\geqslant 0, k_2 \geqslant 0, k_1+k_2=j-1}\lambda(|y|^{-\gamma}*(a_{k_1}\overline{a_{k_2}})) \Bigg] . \end{multline*} By Lemma \ref{lem:int}, $\varphi_j$ belongs to $Y^\infty_{(n/\gamma,\infty],(n/(\gamma+1),\infty]}$.
\end{proof} \subsection{Asymptotic behavior of the phase-amplitude form solution}\label{subsec:expansion} The following proposition completes the proof of the theorem. \begin{proposition} Let Assumption \ref{asmp:1} be satisfied and $\alpha \geqslant 1$. Let $T>0$ and $(a^h,v^h)$ be as in Proposition \ref{prop:p-existence}. Let $(b_0,w_0):=(a^h,v^h)_{|h=0}$. Then, there exists $(b_{\mathrm{equ}},w_{\mathrm{equ}})\in C([0,T];H^\infty\times Y^\infty_{(n/(\gamma+1),\infty],2})$ such that the following asymptotics holds: \begin{equation}\label{eq:tmp006} \begin{aligned} a^h(\tau) &{}= b_0(\tau) + h b_{\mathrm{equ}}(\tau-h^{\AG}) + o(h) \text{ in } C([h^{\AG},T];H^\infty), \\ v^h(\tau) &{}= w_0(\tau) + h w_{\mathrm{equ}}(\tau-h^{\AG}) + o(h) \text{ in } C([h^{\AG},T];Y^\infty_{(n/(\gamma+1),\infty],2}). \end{aligned} \end{equation} Moreover, $(b_{\mathrm{equ}},w_{\mathrm{equ}})$ solves \begin{equation}\label{eq:tmp003} \left\{ \begin{aligned} {\partial}_\tau b_{\mathrm{equ}} ={}& Q_1(b_0,w_{\mathrm{equ}}) + Q_1(b_{\mathrm{equ}},w_0) +i\frac{1}{2}\Delta b_0 , \\ {\partial}_\tau w_{\mathrm{equ}} ={}& Q_2(w_0,w_{\mathrm{equ}}) + Q_2(w_{\mathrm{equ}},w_0) + \tau^{\gamma-2} \left( Q_3(b_0,b_{\mathrm{equ}}) + Q_3(b_{\mathrm{equ}},b_0)\right) \end{aligned} \right. \end{equation} with the data \begin{align}\label{eq:tmp004} b_{\mathrm{equ}|\tau=0}&{}= 0,& w_{\mathrm{equ}|\tau=0}&{} = \begin{cases} 0 & \text{ if } \alpha >1, \\ -v_1 & \text{ if } \alpha =1, \end{cases} \end{align} where $v_1$ is defined in Proposition \ref{prop:t-expansion}.
\end{proposition} \begin{remark} Since \begin{align*} b_{\mathrm{equ}}(\tau-h^{\AG}) &{}= b_{\mathrm{equ}}(\tau) + o(1), & w_{\mathrm{equ}}(\tau-h^{\AG}) &{}= w_{\mathrm{equ}}(\tau) + o(1) \end{align*} by continuity, \eqref{eq:tmp006} implies \begin{align*} a^h(\tau) &{}= b_0(\tau) + h b_{\mathrm{equ}}(\tau) + o(h) \text{ in } C([h^{\AG},T];H^\infty), \\ v^h(\tau) &{}= w_0(\tau) + h w_{\mathrm{equ}}(\tau) + o(h) \text{ in } C([h^{\AG},T];Y^\infty_{(n/(\gamma+1),\infty],2}). \end{align*} From these asymptotics and the transforms \eqref{eq:spct} and \eqref{eq:psi-ap}, we immediately obtain the asymptotics \eqref{eq:asymptotics}. \end{remark} \begin{proof} Let $(a^h,v^h)$ be the solution to \eqref{eq:sysav1}--\eqref{eq:sysav2}. Let $(b_0,w_0)$ be the solution to \eqref{eq:sysb01}--\eqref{eq:sysb02}. We put \begin{align*} b^h(\tau,y) &{}:= \frac{a^h(\tau +h^{\AG},y) - b_0(\tau+h^{\AG},y)}h, \\ w^h(\tau,y) &{}:= \frac{v^h(\tau +h^{\AG},y) - w_0(\tau+h^{\AG},y)}h. \end{align*} Then, $(b^h,w^h)$ solves \begin{equation}\label{eq:tmp001} \left\{ \begin{aligned} {\partial}_\tau b^h ={}& h Q_1(b^h, w^h) + Q_1(b_0,w^h) + Q_1(b^h,w_0) +i\frac{1}{2}\Delta b_0 + i\frac{h}{2} \Delta b^h, \\ {\partial}_\tau w^h ={}& h Q_2(w^h,w^h) + Q_2(w_0,w^h) + Q_2(w^h,w_0) \\ &{} + (\tau+h^{\AG})^{\gamma-2} \left( h Q_3(b^h,b^h) + Q_3(b_0,b^h) + Q_3(b^h,b_0)\right), \end{aligned} \right. \end{equation} \begin{align}\label{eq:tmp002} b^h_{|\tau=0}&= \frac{b_{0|\tau=0} - b_0(h^{\AG})}h, & w^h_{|\tau=0}&= \frac{w_{0|\tau=0} - w_0(h^{\AG})}h. \end{align} We apply Proposition \ref{prop:existence-sys} with these initial data and $c_1^h = c_2^h=h$, $r^h=h/2$, $f^h(t)=(t+h^{\AG})^{\gamma-2}$, $B_1^h=B_2^h=b_0$, $W_1^h=W_2^h=w_0$, $R_1^h=(i/2)\Delta b_0$, and $R_2^h =0$.
Note that the initial data \eqref{eq:tmp002} is uniformly bounded if $\alpha \geqslant 1$ since an application of \eqref{eq:t-exp1} and \eqref{eq:t-exp2} gives \begin{align*} \frac{b_{0|\tau=0} - b_0(h^{\AG})}h = {}& O(h^{\frac{\gamma (\alpha-1)}{\gamma-\alpha} +\frac{\alpha}{\gamma -\alpha}}) \text{ in } H^{\infty}, \\ \frac{w_{0|\tau=0} - w_0(h^{\AG})}h = {}& O(h^{\frac{\gamma (\alpha-1)}{\gamma-\alpha}}) \text{ in } Y^\infty_{(n/(\gamma+1),\infty],2}. \end{align*} The term $R_1^h$ satisfies $\tSobn{R_1^h}{s} \leqslant \tSobn{b_0}{s+2}/2$. Therefore, if $s-2>n/2+1$, that is, if $s>n/2+3$, then Proposition \ref{prop:existence-sys} provides the unique solution $(b^h,w^h) \in C([0,T-h^{\AG}];H^{s-2} \times Y^{s-1}_{(n/(\gamma+1),\infty],2}) $ for $h \in [0,1]$. Moreover, $(b^h,w^h)$ converges to $(\widetilde{b},\widetilde{w}):=(b^h,w^h)_{|h=0}$ in $C([0,T-h^{\AG}];H^{s-4} \times Y^{s-3}_{(n/(\gamma+1),\infty],2}) $. It follows from \eqref{eq:t-exp2} that $\lim_{h\to 0} w^h_{|\tau=0} = 0$ if $\alpha > 1$ and $\lim_{h\to 0} w^h_{|\tau=0} = -v_1$ if $\alpha = 1$. Hence, $(b_{\mathrm{equ}},w_{\mathrm{equ}}):=(\widetilde{b},\widetilde{w})$ solves \eqref{eq:tmp003}--\eqref{eq:tmp004}. \end{proof} \section{Supercritical caustic and supercritical WKB case}\label{sec:sscase} \subsection{Result} In this section, we treat the case $\alpha <1<\gamma$. As presented in Section \ref{subsec:strategy}, the asymptotic behavior of the solution to \eqref{eq:r4}--\eqref{eq:odata} boils down to the asymptotic behavior of the solution to \eqref{eq:sys}. By means of Lemma \ref{lem:int}, we work with $(a^h,v^h):=(a^h,\nabla \phi^h)$, which solves \eqref{eq:sysav1}--\eqref{eq:sysav2}. The main difficulty lies in the fact that the initial data \eqref{eq:sysav2} is moving at the speed $h^{\AG}$ (see Section \ref{subsec:summary2}).
From the expansion \eqref{eq:tmpexp} of $(b_0,w_0):=(a^h,v^h)_{|h=0}$, we deduce that $(a^h,v^h)$ contains terms of order \[ O(h^{\frac{\alpha(\gamma i -1)}{\gamma-\alpha}}) \quad \text{ and } \quad O(h^{\frac{j\alpha\gamma }{\gamma-\alpha}}) \] for all $i,j \geqslant 0$. Note that some of these orders are less than one if $\alpha < 1$. This is the characteristic feature of the supercritical WKB case, and the difficulty comes from this point. Moreover, the above terms interact with each other, and there appear all the terms whose order is given by a finite combination of $h^{\frac{\alpha(\gamma i -1)}{\gamma-\alpha}}$ and $h^{\frac{j\alpha\gamma }{\gamma-\alpha}}$. Thus, we see that $(a^h,v^h)$ contains all the terms whose order is written as \[ O(h^{\frac{\alpha (\gamma l_1 - l_2)}{\gamma -\alpha}}), \quad 0 \leqslant l_2 \leqslant l_1. \] For our purpose, we determine all these terms up to $O(h^1)$. Therefore, it is natural to introduce the set $P$ defined by \begin{equation}\label{def:P} P := \left\{ \frac{\alpha (\gamma l_1 - l_2)}{\gamma -\alpha}; 0 \leqslant l_2 \leqslant l_1, \quad 0 \leqslant \frac{\alpha (\gamma l_1 - l_2)}{\gamma -\alpha} <1 \right\}. \end{equation} Set $N:= \sharp P -1$, and number the elements of $P$ as $0=p_0<p_1<\cdots<p_{N} < 1$. For any $p_{i_1},p_{i_2} \in P$, either $p_{i_1} + p_{i_2} \in P$ or $p_{i_1}+p_{i_2} \geqslant 1$ holds.
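As a quick sanity check of the definition \eqref{def:P}, the set $P$ can be enumerated numerically. The following Python sketch (the function name \texttt{expansion\_orders} is ours, not from the text) uses exact rational arithmetic; for $\gamma>1$ the enumeration terminates because, for fixed $l_1$, the smallest exponent is attained at $l_2=l_1$ and grows linearly in $l_1$.

```python
from fractions import Fraction

def expansion_orders(gamma, alpha):
    """Enumerate the set P of exponents alpha*(gamma*l1 - l2)/(gamma - alpha),
    with 0 <= l2 <= l1, that lie in [0, 1), sorted increasingly (p_0 = 0 first)."""
    assert 0 < alpha < 1 < gamma
    orders = set()
    l1 = 0
    while True:
        # For fixed l1, the smallest exponent is attained at l2 = l1;
        # it increases with l1, so we can stop once it reaches 1.
        if l1 > 0 and alpha * l1 * (gamma - 1) / (gamma - alpha) >= 1:
            break
        for l2 in range(l1 + 1):
            p = alpha * (gamma * l1 - l2) / (gamma - alpha)
            if p < 1:
                orders.add(p)
        l1 += 1
    return sorted(orders)

# Second worked example of the text: gamma = 2, alpha = 1/3.
P = expansion_orders(Fraction(2), Fraction(1, 3))
print([str(p) for p in P])  # ['0', '1/5', '2/5', '3/5', '4/5']
print(len(P) - 1)           # N = 4
```

For $\gamma=2$ and $\alpha=1/3$ this reproduces $P=\{0,\tfrac15,\tfrac25,\tfrac35,\tfrac45\}$ and $N=4$; one can also verify numerically that for any $p_{i_1},p_{i_2}\in P$, either $p_{i_1}+p_{i_2}\in P$ or $p_{i_1}+p_{i_2}\geqslant 1$.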
For example, if $\gamma = \sqrt{3}$ and $\alpha = \sqrt{3}/4$, then $p_0=0$, \begin{equation}\label{eq:sampleP1} \begin{aligned} p_1 &{}=\frac{\sqrt{3}-1}{3} = \frac{\alpha(\gamma -1)}{\gamma-\alpha}, & p_2 &{}=\frac{2(\sqrt{3}-1)}{3} = \frac{\alpha(2\gamma -2)}{\gamma-\alpha}, \\ p_3 &{}=\frac{\sqrt{3}}{3} = \frac{\alpha\gamma }{\gamma-\alpha}, & p_4 &{}=\sqrt{3}-1 = \frac{\alpha(3\gamma -3)}{\gamma-\alpha}, \\ p_5 &{}=\frac{2\sqrt{3}-1}{3} = \frac{\alpha(2\gamma -1)}{\gamma-\alpha}, & & \end{aligned} \end{equation} and $N=5$; and if $\gamma=2$ and $\alpha = 1/3$, then $p_0=0$, \begin{equation}\label{eq:sampleP2} \begin{aligned} p_1 &{}=\frac15 = \frac{\alpha(\gamma -1)}{\gamma-\alpha}, \qquad p_2 =\frac25 = \frac{\alpha(2\gamma -2)}{\gamma-\alpha}= \frac{\alpha\gamma}{\gamma-\alpha}, \\ p_3 &{}=\frac35 = \frac{\alpha(3\gamma -3)}{\gamma-\alpha}= \frac{\alpha(2\gamma-1) }{\gamma-\alpha}, \\ p_4 &{}=\frac45 = \frac{\alpha(4\gamma -4)}{\gamma-\alpha}= \frac{\alpha(3\gamma -2)}{\gamma-\alpha} = \frac{2\alpha\gamma}{\gamma-\alpha}, \end{aligned} \end{equation} and $N=4$. To state the result, we also introduce several systems. Let $Q_1$, $Q_2$, and $Q_3$ be the quadratic forms defined in \eqref{eq:q-1}, \eqref{eq:q-2}, and \eqref{eq:q-3}, respectively. Let $a_l$ and $v_l$ be the sequences given in Proposition \ref{prop:t-expansion}. Then, for any $0 \leqslant i \leqslant N$, we introduce \begin{equation}\label{eq:tmp301} \left\{ \begin{aligned} \partial_\tau b_i ={}& \sum_{p_j+p_k = p_i} Q_1(b_{p_j},w_{p_k}), \\ \partial_\tau w_i ={}& \sum_{p_j+p_k = p_i} \left(Q_2(w_{p_j},w_{p_k})+\tau^{\gamma-2}Q_3(b_{p_j},b_{p_k})\right), \end{aligned} \right.
\end{equation} \begin{equation}\label{eq:tmp302} \begin{aligned} b_i(0) ={}& \begin{cases} -a_l & \text{ if }\exists l \text{ such that } p_i=\frac{\alpha\gamma l}{\gamma-\alpha} , \\ 0 & \text{ otherwise}, \end{cases} \\ w_i(0) ={}& \begin{cases} -v_{l^\prime} & \text{ if } \exists l^\prime \text{ such that } p_i=\frac{\alpha\gamma l^\prime-\alpha}{\gamma-\alpha}, \\ 0 & \text{ otherwise}. \end{cases} \end{aligned} \end{equation} We also introduce a system for $(b_{\mathrm{equ}},w_{\mathrm{equ}})$: \begin{equation}\label{eq:tmp304} \left\{ \begin{aligned} {\partial}_\tau b_{\mathrm{equ}} ={}& Q_1(b_0,w_{\mathrm{equ}}) + Q_1(b_{\mathrm{equ}},w_0) + \sum_{p_j+p_k = 1} Q_1(b_{p_j},w_{p_k}) + \frac{i}{2}\Delta b_0, \\ {\partial}_\tau w_{\mathrm{equ}} ={}& Q_2(w_0,w_{\mathrm{equ}}) + Q_2(w_{\mathrm{equ}},w_0) + \tau^{\gamma-2}(Q_3(b_0,b_{\mathrm{equ}}) + Q_3(b_{\mathrm{equ}},b_0)) \\ &{} + \sum_{p_j+p_k = 1} \left(Q_2(w_{p_j},w_{p_k})+\tau^{\gamma-2}Q_3(b_{p_j},b_{p_k})\right), \\ \end{aligned} \right. \end{equation} \begin{equation}\label{eq:tmp305} \begin{aligned} b_{\mathrm{equ}}(0) ={}& \begin{cases} -a_l & \text{ if }\exists l \text{ such that } 1=\frac{\alpha\gamma l}{\gamma-\alpha}, \\ 0 & \text{ otherwise}, \end{cases} \\ w_{\mathrm{equ}}(0) ={}& \begin{cases} -v_{l^\prime} & \text{ if }\exists l^\prime \text{ such that }1=\frac{\alpha\gamma l^\prime-\alpha}{\gamma-\alpha}, \\ 0 & \text{ otherwise}, \end{cases} \end{aligned} \end{equation} where $(b_0,w_0)$ is the solution of \eqref{eq:lsys}. If there is no pair $(j,k)$ such that $p_j + p_k=1$, we set $\sum_{p_j+p_k = 1} \equiv 0$; this situation may indeed occur (see \eqref{eq:sampleP1}). \begin{theorem}\label{thm:sscase} Let Assumption \ref{asmp:1} be satisfied. Assume $0<\alpha <1$. Let $P$ be as in \eqref{def:P} and $N=\sharp P -1$.
Then, there exists an existence time $T>0$ independent of $\varepsilon$. There also exist $(b_j,\phi_j) \in C([0,T];H^\infty \times Y^\infty_{(n/\gamma,\infty],(n/(\gamma+1),\infty]})$ ($0 \leqslant j \leqslant N$) such that $(b_j,w_j):=(b_j,\nabla \phi_j)$ solves \eqref{eq:tmp301}--\eqref{eq:tmp302}, and $(b_{\mathrm{equ}}, \phi_{\mathrm{equ}}) \in C([0,T]; H^\infty \times Y^\infty_{(n/\gamma,\infty],(n/(\gamma+1),\infty]})$ such that $(b_{\mathrm{equ}},w_{\mathrm{equ}}):=(b_{\mathrm{equ}},\nabla \phi_{\mathrm{equ}})$ solves \eqref{eq:tmp304}--\eqref{eq:tmp305}. Moreover, the following assertions hold: \begin{enumerate} \item $\phi_0(\tau) \asymp \sum_{j=1}^\infty \tau^{\gamma j -1} \varphi_j$ (in the sense of \eqref{eq:p-exp}). \item The solution $u^\varepsilon$ to \eqref{eq:r4}--\eqref{eq:odata} satisfies the following asymptotics for all $s \geqslant 0$: \begin{equation}\label{eq:ssasymptotics} \sup_{t \in [0,1-T^{-1}\varepsilon^{\alpha/\gamma}]} \Lebn{|J^\varepsilon|^s \left( u^\varepsilon(t) e^{-i\Phi^\varepsilon (t)} - \frac{1}{(1-t)^{n/2}} A^\varepsilon(t) e^{i\frac{|\cdot|^2}{2\varepsilon(t-1)}} \right)}{2} \to 0 \end{equation} as $\varepsilon \to 0$ with \begin{equation}\label{def:ssPhi} \Phi^\varepsilon(t,x) = \varepsilon^{\frac{\alpha}{\gamma}-1} \left( \phi_0\left(\frac{\varepsilon^{\frac{\alpha}{\gamma}}}{1-t},\frac{x}{1-t}\right) + \sum_{j=1}^N \varepsilon^{(1-\frac{\alpha}{\gamma})p_j} \phi_j\left(\frac{t\varepsilon^{\frac{\alpha}{\gamma}}}{1-t},\frac{x}{1-t}\right) \right) \end{equation} and \begin{equation}\label{def:ssA} A^\varepsilon(t,x) = b_0\left(\frac{\varepsilon^{\frac{\alpha}{\gamma}}}{1-t},\frac{x}{1-t} \right) \exp \left(i\phi_{\mathrm{equ}}\left(\frac{\varepsilon^{\frac{\alpha}{\gamma}}}{1-t},\frac{x}{1-t} \right)\right) .
\end{equation} \end{enumerate} \end{theorem} \begin{remark} In \eqref{def:ssPhi}, the time variable of $\phi_j$ ($j \geqslant 1$) is not $\varepsilon^{\frac{\alpha}{\gamma}}/(1-t)$ but \[ \frac{t\varepsilon^{\frac{\alpha}{\gamma}}}{1-t}=\frac{\varepsilon^{\frac{\alpha}{\gamma}}}{1-t} - \varepsilon^{\frac{\alpha}{\gamma}}. \] Although this variable is not stable on the final layer $1-t = T^{-1} \varepsilon^{\frac{\alpha}{\gamma}}$, this choice is suitable when we work with well-prepared data (see Section \ref{subsec:wpdata}). Of course, the Taylor expansion \[ \phi_j\left( \frac{t\varepsilon^{\frac{\alpha}{\gamma}}}{1-t}\right) = \sum_{k=0}^\infty \frac{(-\varepsilon^{\frac{\alpha}{\gamma}})^k}{k!} (\partial_t^k \phi_j)\left( \frac{\varepsilon^{\frac{\alpha}{\gamma}}}{1-t}\right) \] would remove the variable ${t\varepsilon^{\frac{\alpha}{\gamma}}}/(1-t)$ from $\Phi^\varepsilon$; however, we do not pursue this point further. \end{remark} \begin{remark}\label{rmk:type} Let us classify the phase functions in \eqref{eq:ssasymptotics} according to the notion in Section \ref{subsec:summary2}. If $\phi_i(0)\not\equiv 0$ (resp. $b_i(0)\not\equiv 0$), that is, if there exists a number $l \geqslant 1$ such that $p_i=\frac{\alpha\gamma l-\alpha}{\gamma-\alpha}$ (resp. $p_i=\frac{\alpha\gamma l}{\gamma-\alpha}$), then $\phi_i$ is the correction from phase (resp. the correction from amplitude); in particular, $\phi_i=\phi_{\mathrm{pha},l}$ (resp. $\phi_i=\phi_{\mathrm{amp},l}$). On the other hand, if $\phi_i(0)\equiv b_i(0) \equiv 0$, then $\phi_i$ is the correction from interaction; in particular, $\phi_i=\phi_{\mathrm{int},l^\prime}$ for some $l^\prime$. Notice that the summation in the system \eqref{eq:tmp301} is decomposed as \[ \sum_{p_j+p_k = p_i} = \sum_{(j,k)=(i,0),(0,i)} + \sum_{p_j+p_k = p_i,jk\neq0}.
\] The second sum is the interaction term, which acts as an external force. When $\phi_i$ is the correction from interaction, it always has a nonzero interaction term; otherwise, $\phi_i\equiv0$ would follow, since the system for $\phi_i$ is posed with the zero initial condition. There is also the possibility that the correction from phase (resp. the correction from amplitude) has an interaction term. Indeed, this happens if there is a triplet $(j,k,l)$ such that $p_i=\frac{\alpha\gamma l-\alpha}{\gamma-\alpha}=p_j+p_k$ (resp. $p_i=\frac{\alpha\gamma l}{\gamma-\alpha}=p_j+p_k$) and $jk\neq0$. In this case, there is a resonance between the correction from phase (resp. the correction from amplitude) and the correction from interaction. The function $\phi_{\mathrm{equ}}$ solving \eqref{eq:tmp304}--\eqref{eq:tmp305} is the correction from equation, with or without resonance: if $\phi_{\mathrm{equ}}(0)\not \equiv 0$ (resp. $b_{\mathrm{equ}}(0)\not \equiv 0$), then there is a resonance with the correction from phase (resp. the correction from amplitude); if $\sum_{p_j+p_k=1}\not\equiv0$, then there is a resonance with the correction from interaction. We note that a resonance among the correction from phase, the correction from interaction, and the correction from equation (resp. the correction from amplitude, the correction from interaction, and the correction from equation) may happen; however, a resonance between the correction from phase and the correction from amplitude never happens, because there is no pair $(l,l^\prime)$ such that $\gamma l = \gamma l^\prime -1$ when $\gamma>1$. \end{remark} \begin{proof} As presented in Section \ref{subsec:strategy}, the asymptotic behavior of the solution to \eqref{eq:r4}--\eqref{eq:odata} boils down to the asymptotic behavior of the solution to \eqref{eq:sys}. By means of Lemma \ref{lem:int}, we work with $(a^h,v^h):=(a^h,\nabla \phi^h)$, which solves \eqref{eq:sysav1}--\eqref{eq:sysav2}.
The existence of $(a^h,v^h)$ and the expansion of $\phi_0$ are already proven in Propositions \ref{prop:p-existence} and \ref{prop:t-expansion}, respectively. Let $P$ be as in \eqref{def:P}, $N=\sharp P -1$, and $p_i\in P$ ($i=0,1,\dots,N$) be such that $\{ p_i\}_{i=0}^N=P$ and $p_i<p_{i+1}$. It suffices to show that $(a^h,v^h)$ is expanded as \begin{equation*} \begin{aligned} a^h(\tau+h^{\AG}) ={}& b_0(\tau+h^{\AG}) + \sum_{i=1}^N h^{p_i} b_i(\tau) + hb_{\mathrm{equ}}(\tau) + o(h), \\ v^h(\tau+h^{\AG}) ={}& w_0(\tau+h^{\AG}) + \sum_{i=1}^N h^{p_i} w_i(\tau) + hw_{\mathrm{equ}}(\tau) + o(h). \end{aligned} \end{equation*} Plugging this and \begin{align*} b_{\mathrm{equ}}(\tau)={}&b_{\mathrm{equ}}(\tau+h^{\AG}) + o(1), & w_{\mathrm{equ}}(\tau)={}&w_{\mathrm{equ}}(\tau+h^{\AG}) + o(1) \end{align*} into \eqref{eq:grenier} and \eqref{eq:spct}, we obtain \eqref{eq:ssasymptotics}. \smallbreak {\bf Step 1}. We first prove by induction that \begin{equation}\label{eq:tmp303} \begin{aligned} a^h(\tau+h^{\AG}) ={}& b_0(\tau+h^{\AG}) + \sum_{i=1}^{k} h^{p_i} b_i(\tau) + o(h^{p_{k}}), \\ v^h(\tau+h^{\AG}) ={}& w_0(\tau+h^{\AG}) + \sum_{i=1}^{k} h^{p_i} w_i(\tau) + o(h^{p_{k}}) \end{aligned} \end{equation} holds for $k=N$, where $(b_i,w_i)$ is a solution of \eqref{eq:tmp301}--\eqref{eq:tmp302}. One verifies from Proposition \ref{prop:p-existence} that \eqref{eq:tmp303} holds for $k=0$. Fix an integer $K \in \{1,\dots, N\}$. We assume for induction that \eqref{eq:tmp303} holds for $k=K-1$, and put \begin{equation*} \begin{aligned} b_{K}^h(\tau) ={}& h^{-p_{K}} \left( a^h(\tau+h^{\AG}) - b_0(\tau+h^{\AG}) - \sum_{i=1}^{K-1} h^{p_i} b_i(\tau)\right), \\ w_{K}^h(\tau) ={}& h^{-p_{K}}\left( v^h(\tau+h^{\AG}) - w_0(\tau+h^{\AG}) - \sum_{i=1}^{K-1} h^{p_i} w_i(\tau)\right).
\end{aligned} \end{equation*} By the equation for $(a^h,v^h)$, we see that $(b^h_K,w^h_K)$ solves \begin{equation*} \left\{ \begin{aligned} {\partial}_\tau b^h_{K} ={}& h^{p_{K}}Q_1(b^h_{K},w^h_{K}) + Q_1(B_1^h,w^h_K) + Q_1(b^h_K,W_1^h) + R_1^h+ \frac{i}{2}h\Delta b^h_K, \\ {\partial}_\tau w^h_{K} ={}& h^{p_{K}}Q_2(w^h_{K},w^h_{K}) + Q_2(W_2^h,w^h_K) + Q_2(w^h_K,W_2^h) \\ &{} + (\tau+h^{\AG})^{\gamma-2} \left(h^{p_{K}}Q_3(b^h_{K},b^h_{K}) + Q_3(B_2^h,b^h_K) + Q_3(b^h_K,B_2^h) \right) \\ &{} + R_2^h , \\ b_{K}^h(0) ={}& h^{-p_{K}} \left( a_0 - b_0(h^{\AG}) - \sum_{i=1}^{K-1} h^{p_i} b_i(0)\right), \\ w_{K}^h(0) ={}& h^{-p_{K}}\left( - w_0(h^{\AG}) - \sum_{i=1}^{K-1} h^{p_i} w_i(0)\right), \end{aligned} \right. \end{equation*} where \begin{align*} &B_1^h = B_2^h = \sum_{i=0}^{K-1} h^{p_i}b_i \to b_0(t), \\ &W_1^h= W_2^h= \sum_{i=0}^{K-1} h^{p_i}w_i \to w_0(t), \end{align*} \begin{align*} R_1^h = {}& \sum_{l<K,\,k<K,\,p_l+p_k=p_{K}} Q_1 (b_{p_l},w_{p_k}) \\ &{} + \sum_{l<K,\,k<K,\,p_l+p_k>p_{K}} h^{p_l+p_k-p_K}Q_1 (b_{p_l},w_{p_k}) \\ &{} + \frac{ih^{1-p_K}}{2}\sum_{i=0}^{K-1}h^{p_i} \Delta b_i \\ \to {}& \sum_{l<K,\,k<K,\,p_l+p_k=p_{K}} Q_1 (b_{p_l},w_{p_k}), \end{align*} and \begin{align*} R_2^h = {}& \sum_{l<K,\,k<K,\,p_l+p_k=p_{K}} \left(Q_2 (w_{p_l},w_{p_k}) + (t+h^{\AG})^{\gamma-2}Q_3(b_{p_l},b_{p_k}) \right) \\ &{} + \sum_{l<K,\,k<K,\,p_l+p_k>p_{K}} h^{p_l+p_k-p_K}Q_2 (w_{p_l},w_{p_k}) \\ &{} + \sum_{l<K,\,k<K,\,p_l+p_k>p_{K}} h^{p_l+p_k-p_K}(t+h^{\AG})^{\gamma-2}Q_3 (b_{p_l},b_{p_k}) \\ \to {}& \sum_{l<K,\,k<K,\,p_l+p_k=p_{K}} \left(Q_2 (w_{p_l},w_{p_k}) + t^{\gamma-2}Q_3(b_{p_l},b_{p_k}) \right).
\end{align*} Moreover, applying the time expansion of $(b_0,w_0)$, we deduce that $(b_K^h(0),w_K^h(0))$ is uniformly bounded and that, as $h\to 0$, \begin{align*} b_{K}^h(0) ={}& - h^{-p_{K}} \left( b_0(h^{\AG}) - \sum_{\{j;\frac{\alpha\gamma j }{\gamma -\alpha}<p_K\}} h^{\frac{\alpha\gamma j }{\gamma -\alpha}} a_j\right) \\ \to{}& \begin{cases} -a_{j^\prime} & \text{ if }\exists j^\prime \text{ such that } p_K = \frac{\alpha\gamma j^\prime }{\gamma -\alpha}, \\ 0 & \text{ otherwise}, \end{cases} \\ w_{K}^h(0) ={}& - h^{-p_{K}} \left( w_0(h^{\AG}) - \sum_{\{j;\frac{\alpha(\gamma j -1)}{\gamma -\alpha}<p_K\}} h^{\frac{\alpha(\gamma j -1)}{\gamma -\alpha}} v_j\right) \\ \to{}& \begin{cases} -v_{j^{\prime\prime}} & \text{ if }\exists j^{\prime\prime} \text{ such that } p_K = \frac{\alpha(\gamma j^{\prime\prime} -1)}{\gamma -\alpha}, \\ 0 & \text{ otherwise}. \end{cases} \end{align*} Note that either $b_K^h(0) \to 0$ or $w_K^h(0) \to 0$ holds for all $K$, since $\gamma >1$ implies that there is no pair $(j,j^\prime)$ such that $\gamma j = \gamma j^\prime -1$. Therefore, we can apply Proposition \ref{prop:existence-sys} to obtain the solution $(b_K^h,w_K^h)$. Put $(b_K,w_K):=(b_K^h,w_K^h)_{|h=0}$. Then, it solves \eqref{eq:tmp301}--\eqref{eq:tmp302}, and so \eqref{eq:tmp303} holds for $k=K$. By induction, \eqref{eq:tmp303} holds for $k=N$. {\bf Step 2}. Mimicking the argument in Step 1, we construct $(b_{\mathrm{equ}},w_{\mathrm{equ}})$ such that \begin{equation*} \begin{aligned} a^h(\tau+h^{\AG}) ={}& b_0(\tau+h^{\AG}) + \sum_{i=1}^N h^{p_i} b_i(\tau) + hb_{\mathrm{equ}}(\tau) + o(h), \\ v^h(\tau+h^{\AG}) ={}& w_0(\tau+h^{\AG}) + \sum_{i=1}^N h^{p_i} w_i(\tau) + hw_{\mathrm{equ}}(\tau) + o(h) \end{aligned} \end{equation*} holds.
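For the reader's convenience, we record the elementary verification (our addition, not spelled out in the text) of the fact, used in Step 1, that $\gamma>1$ rules out pairs of nonnegative integers $(j,j^\prime)$ with $\gamma j = \gamma j^\prime -1$:
\[
\gamma j = \gamma j^\prime - 1 \;\Longrightarrow\; \gamma (j^\prime - j) = 1 \;\Longrightarrow\; \gamma = \frac{1}{j^\prime - j} \leqslant 1,
\]
since $j^\prime - j$ must be a positive integer; this contradicts $\gamma>1$.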
Notice that $(b_{\mathrm{equ}},w_{\mathrm{equ}})$ solves \begin{equation*} \begin{aligned} {\partial}_\tau b_{\mathrm{equ}} ={}& Q_1(b_0,w_{\mathrm{equ}}) + Q_1(b_{\mathrm{equ}},w_0) + \sum_{p_j+p_k = 1} Q_1(b_{p_j},w_{p_k}) + \frac{i}{2}\Delta b_0, \\ {\partial}_\tau w_{\mathrm{equ}} ={}& Q_2(w_0,w_{\mathrm{equ}}) + Q_2(w_{\mathrm{equ}},w_0) + \tau^{\gamma-2}(Q_3(b_0,b_{\mathrm{equ}}) + Q_3(b_{\mathrm{equ}},b_0)) \\ &{} + \sum_{p_j+p_k = 1} \left(Q_2(w_{p_j},w_{p_k})+\tau^{\gamma-2}Q_3(b_{p_j},b_{p_k})\right), \\ \end{aligned} \end{equation*} \begin{align*} b_{\mathrm{equ}}(0) ={}& \begin{cases} -a_l & \text{ if }\exists l \text{ such that } 1=\frac{\alpha\gamma l}{\gamma-\alpha}, \\ 0 & \text{ otherwise}, \end{cases} & w_{\mathrm{equ}}(0) ={}& \begin{cases} -v_{l^\prime} & \text{ if }\exists l^\prime \text{ such that } 1=\frac{\alpha\gamma l^\prime-\alpha}{\gamma-\alpha}, \\ 0 & \text{ otherwise}. \end{cases} \end{align*} \end{proof} \begin{remark} By a similar proof, we obtain a higher order approximation. Let $0<\alpha<\gamma$ and $\gamma>1$. We modify the set $P$ defined by \eqref{def:P} into \[ P^\prime :=\left\{l+\frac{\alpha\gamma m}{\gamma-\alpha} + \frac{\alpha(\gamma-1)n}{\gamma-\alpha}; \quad l,m,n \geqslant 0\right\}, \] and number the elements of $P^\prime$ as $0=p_0<p_1<\dots<p_k<\cdots$. Then, we have, for all $k$, \begin{equation}\label{eq:higherexpansion} \begin{aligned} a^h(\tau+h^{\AG}) ={}& b_0(\tau+h^{\AG}) + \sum_{i=1}^k h^{p_i} b_i(\tau) + o(h^{p_k}), \\ \phi^h(\tau+h^{\AG}) ={}& \phi_0(\tau+h^{\AG}) + \sum_{i=1}^k h^{p_i} \phi_i(\tau) + o(h^{p_k}). \end{aligned} \end{equation} Plugging this into \eqref{eq:grenier} and \eqref{eq:spct}, we obtain a higher order approximation of the original solution. It is important to note that \eqref{eq:higherexpansion} takes the same form in both the $\alpha\geqslant1$ case and the $\alpha<1$ case.
When we are concerned with higher order WKB-type approximations, we need four kinds of correction terms even if $\alpha \geqslant 1$. In this respect, the supercritical WKB case $\alpha <1$ can be characterized as the special case $p_1<1$. \end{remark} \subsection{Well-prepared data and general data}\label{subsec:wpdata} We conclude this section with some remarks about well-prepared data. By the semiclassical conformal transform \eqref{eq:spct} and Grenier's transform \eqref{eq:psi-ap}, the leading order WKB-type approximation of the original solution $u^\varepsilon$ to \eqref{eq:r4}--\eqref{eq:odata} is reduced to the approximation of the solution $(a^h,\phi^h)$ to \begin{equation}\label{eq:sysa} \left\{ \begin{aligned} &\partial_\tau a^h + \nabla \phi^h \cdot \nabla a^h + \frac12 a^h \Delta \phi^h = i\frac{h}{2}\Delta a^h,\\ &\partial_\tau \phi^h + \frac12 |\nabla \phi^h|^2 + \lambda \tau^{\gamma-2}(|y|^{-\gamma}*|a^h|^2) =0 \end{aligned} \right. \end{equation} with \begin{align}\label{eq:sysb} a^h_{|\tau=h^{\frac{\alpha}{\gamma-\alpha}}}={}&a_0, & \phi^h_{|\tau=h^{\frac{\alpha}{\gamma-\alpha}}}={}&0, \end{align} up to order $O(h^1)$. Note that \eqref{eq:sysa}--\eqref{eq:sysb} and \eqref{eq:sys} are the same. As shown in Proposition \ref{prop:p-existence}, there exists a limit $(b_0,\phi_0):=(a^h,\phi^h)_{|h=0}$ which solves \eqref{eq:lsys}. Now, we consider the distances \begin{align*} d_a^h(t) :=&{} a^h(t) -b_0(t), & d_\phi^h(t) :=&{} \phi^h(t) - \phi_0(t). \end{align*} If these distances are of order $o(h^1)$, then we immediately obtain the WKB-type approximation $b_0\exp(i\varepsilon^{\frac{\alpha}{\gamma}-1}\phi_0)$ of $u^\varepsilon$ (recall that $h=\varepsilon^{1-\frac{\alpha}{\gamma}}$).
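Since $h=\varepsilon^{1-\frac{\alpha}{\gamma}}$, the $h$-dependent initial time in \eqref{eq:sysb} can be rewritten in terms of $\varepsilon$; we record this elementary computation (our addition) because it makes the link with the well-prepared data \eqref{eq:wpdata} explicit:
\[
h^{\frac{\alpha}{\gamma-\alpha}} = \left(\varepsilon^{1-\frac{\alpha}{\gamma}}\right)^{\frac{\alpha}{\gamma-\alpha}} = \varepsilon^{\frac{\gamma-\alpha}{\gamma}\cdot\frac{\alpha}{\gamma-\alpha}} = \varepsilon^{\frac{\alpha}{\gamma}},
\]
so the initial time $\tau=h^{\frac{\alpha}{\gamma-\alpha}}$ is exactly $\tau=\varepsilon^{\frac{\alpha}{\gamma}}$, the time at which $b_0$ and $\phi_0$ are evaluated in \eqref{eq:wpdata}.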
However, two issues prevent us from concluding immediately: the first is the $h$-dependence of the equation \eqref{eq:sysa}, and the second is the $h$-dependence of the initial time in \eqref{eq:sysb}. The first problem is handled by employing the correction term $(b_{\mathrm{equ}},\phi_{\mathrm{equ}})$ solving \eqref{eq:tbtp1}; therefore, we now discuss the initial data. \smallbreak With the given initial data \eqref{eq:sysb}, the distances at the initial time read \begin{align*} d_a^h(h^{\AG}) =&{} a_0 -b_0(h^{\AG}), & d_\phi^h(h^{\AG}) =&{} -\phi_0(h^{\AG}). \end{align*} The main difficulty in the case $\alpha \leqslant 1$ is the fact that these terms become larger than $O(h^1)$ as $h \to 0$. The simplest way to overcome this difficulty is to modify the initial data \eqref{eq:sysb} into \begin{align}\label{eq:sysc} a^h_{|\tau=h^{\frac{\alpha}{\gamma-\alpha}}}={}&b_0(h^{\AG}), & \phi^h_{|\tau=h^{\frac{\alpha}{\gamma-\alpha}}}={}&\phi_0(h^{\AG}), \end{align} which ensures $d_a^h(h^{\AG}) \equiv d_\phi^h(h^{\AG}) \equiv 0$. Note that \eqref{eq:sysa} with \eqref{eq:sysc} is the same as \eqref{eq:wpsys}. Going back through the transform \eqref{eq:spct}, this initial data corresponds to the well-prepared data \begin{equation}\tag{\ref{eq:wpdata}} u^\varepsilon_{|t=0}(x) = b_0(\varepsilon^{\frac{\alpha}{\gamma}},x)e^{-i\frac{|x|^2}{2 \varepsilon}} \exp (i \varepsilon^{\frac{\alpha}{\gamma} -1} \phi_0(\varepsilon^{\frac{\alpha}{\gamma}},x)). \end{equation} This initial condition is rather natural in the supercritical case $\alpha<1$, and the original initial condition in \eqref{eq:odata} is a kind of constraint (see Section \ref{subsec:summary2}).
If we use this well-prepared data, we do not have to consider any correction term other than $(b_{\mathrm{equ}},\phi_{\mathrm{equ}})$ and, for all $0 < \alpha < \gamma$, it holds that \begin{align*} a^h(\tau) =&{} b_0(\tau) + h b_{\mathrm{equ}}(\tau) + o(h), \\ \phi^h(\tau) =&{} \phi_0(\tau) + h \phi_{\mathrm{equ}}(\tau) + o(h) \end{align*} with $(b_0,\phi_0)$ and $(b_{\mathrm{equ}},\phi_{\mathrm{equ}})$ solving \eqref{eq:lsys} and \eqref{eq:tbtp1}--\eqref{eq:tbtp2}, respectively, so that the asymptotic behavior of the solution $u^\varepsilon(t,x)$ to \eqref{eq:r4}--\eqref{eq:wpdata} is given by \[ b_0\left(\frac{\varepsilon^{\frac{\alpha}{\gamma}}}{1-t}, \frac{x}{1-t}\right) \exp\left( i \phi_{\mathrm{equ}}\left(\frac{\varepsilon^{\frac{\alpha}{\gamma}}}{1-t},\frac{x}{1-t}\right) + i\varepsilon^{\frac{\alpha}{\gamma}-1} \phi_0 \left(\frac{\varepsilon^{\frac{\alpha}{\gamma}}}{1-t},\frac{x}{1-t}\right)\right). \] This approximate solution is the same as in the case $\alpha >1$. We still need the $O(h^1)$ correction term $(b_{\mathrm{equ}},\phi_{\mathrm{equ}})$ because it comes from the $h^1$-dependence of the equation solved by $(a^h,\phi^h)$. In Theorems \ref{thm:main} and \ref{thm:sscase}, we took a different approach. By expanding $(b_0,\phi_0)$ around $\tau=0$, there exist nonnegative integers $k_1, k_2$ depending on $\alpha$ and $\gamma$ such that \begin{align*} d_a^h(h^{\AG}) =&{} a_0 - b_0(h^{\AG}) = -\sum_{j=1}^{k_1} h^{\frac{\alpha\gamma j}{\gamma -\alpha}} a_j + o(h^1), \\ d_\phi^h(h^{\AG}) =&{} -\phi_0(h^{\AG}) = -\sum_{j=1}^{k_2} h^{\frac{\alpha(\gamma j -1)}{\gamma -\alpha}} \varphi_j + o(h^1).
\end{align*} We remove the main parts $-\sum_{j=1}^{k_1} h^{\frac{\alpha\gamma j}{\gamma -\alpha}} a_j$ and $-\sum_{j=1}^{k_2} h^{\frac{\alpha(\gamma j -1)}{\gamma -\alpha}} \varphi_j$ by constructing appropriate correction terms (corrections from the amplitude and from the phase, respectively). Indeed, if we let the correction terms $(b_j,\phi_j)$ ($0 \leqslant j \leqslant N$) and $(b_{\mathrm{equ}},\phi_{\mathrm{equ}})$ be defined as in Theorem \ref{thm:sscase}, then, at the initial time $\tau=h^{\AG}$, it holds that \begin{align*} &\left(a^h(\tau) - b_0(\tau) - \sum_{j=1}^N h^{p_j} b_j(\tau-h^{\AG}) - hb_{\mathrm{equ}}(\tau)\right)_{|\tau=h^{\AG}} \\ &{}= -\left(b_0(h^{\AG}) - \sum_{l \in \{l\ge 0;\frac{\alpha\gamma l}{\gamma -\alpha} \leqslant 1\}} h^{\frac{\alpha\gamma l}{\gamma -\alpha}} a_l \right)+ h\left(b_{\mathrm{equ}}(0)-b_{\mathrm{equ}}(h^{\AG})\right) \\ &{}=o(h^1), \end{align*} \begin{align*} &\left(\phi^h(\tau) - \phi_0(\tau) - \sum_{j=1}^N h^{p_j} \phi_j(\tau-h^{\AG}) - h\phi_{\mathrm{equ}}(\tau)\right)_{|\tau=h^{\AG}} \\ &{}= -\left(\phi_0(h^{\AG}) - \sum_{l \in \{l\ge1;\frac{\alpha(\gamma l-1)}{\gamma -\alpha} \leqslant 1\}} h^{\frac{\alpha(\gamma l-1)}{\gamma -\alpha}} \varphi_l \right)+ h\left(\phi_{\mathrm{equ}}(0)-\phi_{\mathrm{equ}}(h^{\AG})\right) \\ &{}=o(h^1). \end{align*} It is important to note that the time variable of $(b_j,\phi_j)$ ($1 \leqslant j \leqslant N$) is not $\tau$ but $\tau-h^{\AG}$. This choice is the key to the above cancellation. By the transform \eqref{eq:spct}, the time variable of $\phi_j$ ($1 \leqslant j \leqslant N$) in the definition \eqref{def:ssPhi} of $\Phi^\varepsilon(t)$ should be given by \[ \frac{\varepsilon^{\frac{\alpha}{\gamma}}}{1-t} - \varepsilon^{\frac{\alpha}{\gamma}} = \frac{t\varepsilon^{\frac{\alpha}{\gamma}}}{1-t}.
\] These correction terms allow us to work with general data. \section{Proofs of Theorems \ref{thm:main2} and \ref{thm:main3}}\label{sec:blowup} Recall that the system we consider is \begin{equation}\tag{\ref{eq:EPr}} \left\{ \begin{aligned} &\partial_\tau (\rho r^{n-1}) + \partial_r (\rho v r^{n-1}) = 0, \\ &\partial_\tau v + v \partial_r v - \lambda \tau^{n-4} \partial_r V_{\mathrm{p}} = 0, \\ &\partial_r (r^{n-1}\partial_r V_{\mathrm{p}}) = \rho r^{n-1},\\ &\rho_{|\tau=0}=|a_0|^2, \quad v_{|\tau=0}=0, \end{aligned} \right. \end{equation} where $r=|x|$. We introduce the ``mass'' $m$ and the ``mean mass'' $M$ by \[ m(\tau,r):=M(\tau,r)r^{n}:= r^{n-1}\partial_r V_{\mathrm{p}}(\tau,r) = \int_0^r \rho(\tau,s) s^{n-1}ds. \] We also set $m_0(r):=m(0,r)$ and $M_0(r):=M(0,r)$. Combining the first and the third equations of \eqref{eq:EPr}, we obtain \begin{equation}\label{eq:math} \partial_\tau m + v \partial_r m = 0, \end{equation} where we have used $(\rho v r^{n-1})_{|r=0}=0$. To solve this equation, we also introduce the characteristic curve $X(\tau,R)$: \begin{equation*} \frac{d X}{d \tau} = v(\tau,X(\tau,R)), \quad X(0,R) = R. \end{equation*} Denoting differentiation along this characteristic curve by $\prime:=d/d\tau$, the mass equation \eqref{eq:math} and the second equation of \eqref{eq:EPr} yield \begin{align}\label{eq:EPr2a} &m^\prime = 0, \\ \label{eq:EPr2b} &v^\prime = \lambda \tau^{n-4} \frac{m}{X^{n-1}}, \\ \label{eq:EPr2c} &X^\prime = v. \end{align} We solve this system with the initial data \[ (X,m,v)_{|\tau=0} = (R,m_0(R),0), \] where $R \geq 0$ parameterizes the initial location. By \eqref{eq:EPr2a}, the mass $m$ remains constant along the characteristics, that is, $m(\tau,X(\tau,R)) = m_0(R)$.
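For completeness, we record the elementary chain-rule computation behind \eqref{eq:EPr2a} (added here for the reader's convenience): along the characteristic curve,
\[
m^\prime = \frac{d}{d\tau}\, m(\tau,X(\tau,R)) = \partial_\tau m + X^\prime\, \partial_r m = \partial_\tau m + v\, \partial_r m = 0,
\]
where the last equality is precisely the mass equation \eqref{eq:math}.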
Therefore, \eqref{eq:EPr2b} and \eqref{eq:EPr2c} yield \begin{align}\label{eq:Xdd} X^{\prime\prime} &= \frac{\lambda m_0(R)\tau^{n-4}}{X^{n-1}}, & X(0,R) & = R, & X^\prime(0,R) &= 0. \end{align} This equation is also studied in \cite{Tsu09}. \begin{proof}[Proof of Theorem \ref{thm:main2}] It suffices to show that $X(\tau,R)\leqslant 0$ holds for some $R>0$ and $\tau \leqslant T^*$. This argument is similar to Glassey's argument for blow-up in the nonlinear Schr\"odinger equation \cite{GlJMP}. Since $\lambda<0$, we see from \eqref{eq:Xdd} that $X^{\prime\prime} \leqslant 0$, and so $X^{\prime} \leqslant X^\prime(0)=0$ and $X \leqslant X(0) = R$ for all $\tau \ge 0$. Therefore, again by \eqref{eq:Xdd}, we verify that \[ X^{\prime\prime}(\tau,R)\leqslant -\frac{|\lambda| m_0 \tau^{n-4}}{R^{n-1}}= - |\lambda| R M_0(R) \tau^{n-4}. \] Note that $M_0(R)>0$ for some $R$ provided $a_0$ is not identically zero. Now, fix such an $R$. Integrating twice with respect to time, we obtain \[ X(\tau,R) \leqslant R - \frac{|\lambda| RM_0(R)}{(n-2)(n-3)} \tau^{n-2}, \] which yields $X(\tau,R) \leqslant 0$ for large time. The critical time is not greater than \[ T^* = \left(\frac{(n-2)(n-3)}{|\lambda| \sup_{R\ge 0}M_0(R)}\right)^{\frac1{n-2}} \] since $R$ is arbitrary. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:main3}] In order to clarify the necessary and sufficient condition, we repeat the proof in \cite{ELT-IUMJ}. We first note that $\partial X/\partial R (0,R)=1$, and that the solution is global if and only if $\partial X/\partial R (\tau,R) > 0$ for all $\tau \ge 0$ and $R \ge 0$. By \eqref{eq:Xdd} (here $n=4$), we have \[ X^{\prime\prime} = \frac{\lambda m_0}{X^3}.
\] We multiply this by $X^\prime$ and integrate in time to obtain \[ (X^\prime)^2 = 0^2 + \frac{\lambda m_0}{R^2} - \frac{\lambda m_0}{X^2} = \frac{\lambda m_0}{R^2} -XX^{\prime\prime}, \] where we have used \eqref{eq:Xdd} again. This yields \[ (X^2)^{\prime\prime} =2(X^\prime)^2 + 2XX^{\prime\prime} = \frac{2\lambda m_0}{R^2}. \] Then, integrating twice gives \[ X^2 = R^2 + \frac{\lambda m_0}{R^2}\tau^2. \] Since $X \ge R > 0$ from \eqref{eq:Xdd} and $X^\prime(0)=0$, we see that \begin{equation}\label{eq:CharCurve} X(\tau,R) = \sqrt{R^2 + \frac{\lambda m_0}{R^2}\tau^2}. \end{equation} Since $m_0 = M_0 R^4 = \int_0^R |a_0(s)|^2s^3 ds$ by definition, \eqref{eq:CharCurve} can be rewritten as $X(\tau,R)=R\sqrt{1+\lambda M_0(R)\tau^2}$, and an explicit calculation shows that \[ \frac{\partial X}{\partial R}(\tau,R) = \frac{ 1 + \lambda (\frac12 |a_0|^2 - M_0 )\tau^2} {\sqrt{1 + \lambda M_0 \tau^2}}. \] Note that $\lim_{R\to 0} M_0 = |a_0(0)|^2/4$ since $a_0$ is continuous. Hence, $\partial X/\partial R (\tau,R) > 0$ holds for all $\tau \ge 0$ and $R \ge 0$ if and only if $\frac12 |a_0|^2 - M_0 \ge 0$ for all $R > 0$, that is, \begin{equation}\label{eq:cond-global} |a_0(R)|^2 \ge \frac{2}{R^4}\int_0^R |a_0(s)|^2 s^3 ds \end{equation} for all $R > 0$. Moreover, the critical time is given by \[ \tau_c = \left(\frac{2}{\lambda \max_{r > 0} \left( 2M_0(r) - |a_0(r)|^2\right)}\right)^{\frac12}. \] The condition \eqref{eq:cond-global} can be written as \[ \partial_{R} \left( \frac{m_0(R)}{R^2} \right) \ge 0. \] We take $R_0$ such that $m_0(R_0) > 0$. Then, \eqref{eq:cond-global} holds only if \[ m_0(R) \ge \frac{m_0(R_0)}{R_0^2}R^2 \] for all $R \ge R_0$, which fails if $\lim_{R \to \infty}m_0(R) < \infty$, that is, if $a_0 \in L^2(\mathbb{R}^4)$. \end{proof} \begin{remark}\label{rem:sncond} If $n=4$ then \eqref{eq:EPr} becomes an autonomous system.
Now we consider the classical solution of the autonomous model \begin{equation*} \left\{ \begin{aligned} &\partial_\tau (\rho r^{n-1}) + \partial_r (\rho v r^{n-1}) = 0, \\ &\partial_\tau v + v \partial_r v -\lambda \partial_r V_{\mathrm{p}} = 0, \\ & \partial_r (r^{n-1}\partial_r V_{\mathrm{p}}) = \rho r^{n-1},\\ &\rho_{|\tau=0}=\rho_0, \quad v_{|\tau=0}=v_0, \end{aligned} \right. \end{equation*} where $(\tau,r)\in \mathbb{R}_+\times \mathbb{R}_+$. It is shown in \cite{MaRIMS} that, under the assumptions that $n\ge 3$, $\rho_0\in L^1((0,\infty),r^{n-1}dr)$, $v_0(0)=0$, and $v_0(r)\to0$ as $r\to\infty$, the corresponding solution is global if and only if $\lambda<0$ and \[ v_0(r) =\sqrt{\frac{2|\lambda|}{(n-2)r^{n-2}}\int_0^r \rho_0(s)s^{n-1}ds}. \] \end{remark} \begin{remark}\label{rem:indicator} The function $\Gamma=\partial X/\partial R$ is called the indicator function. As in the above proof, the solution blows up if and only if $\Gamma$ takes a non-positive value. Moreover, the solution is given by \begin{align*} \rho(\tau,X(\tau,R)) =&{} \frac{\rho_0(R)R^{n-1}}{X^{n-1}(\tau,R) \Gamma(\tau,R)}, \\ v(\tau,X(\tau,R)) =&{} \frac{d X}{d \tau} (\tau,R). \end{align*} Example \ref{ex:blowup} is easily checked from this form since the characteristic curve $X$ is given explicitly by \eqref{eq:CharCurve} in the case $\lambda<0$ and $n=4$. \end{remark} \subsection*{Acknowledgments} The author expresses his deep gratitude to Professor R\'emi Carles for reading an early version of the paper and for fruitful discussions in Montpellier. Deep appreciation goes to Professor Yoshio Tsutsumi for his valuable advice and constant encouragement. This research progressed during the author's stay in Orsay. The author is grateful to the people in the Department of Mathematics of the University of Paris 11 for their kind hospitality. This research was supported by a JSPS fellowship.
\providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace} \providecommand{\href}[2]{#2} \begin{thebibliography}{10} \bibitem{AC-SP} T.~Alazard and R.~Carles, \emph{Semi-classical limit of {S}chr\"odinger--{P}oisson equations in space dimension $n\ge 3$}, J. Differential Equations \textbf{233} (2007), no.~1, 241--275. \bibitem{AC-ARMA} \bysame, \emph{Supercritical geometric optics for nonlinear {S}chr\"odinger equations}, Arch. Ration. Mech. Anal. \textbf{194} (2009), no.~1, 315--347. \bibitem{AC-GP} \bysame, \emph{{WKB} analysis for the {G}ross-{P}itaevskii equation with non-trivial boundary conditions at infinity}, Ann. Inst. H. Poincar\'e Anal. Non Lin\'eaire \textbf{26} (2009), no.~3, 959--977. \bibitem{CaIUMJ} R.~Carles, \emph{Geometric optics with caustic crossing for some nonlinear {S}chr\"odinger equations}, Indiana Univ. Math. J. \textbf{49} (2000), no.~2, 475--551. \bibitem{CaCMP} \bysame, \emph{Geometric optics and long range scattering for one-dimensional nonlinear {S}chr\"odinger equations}, Comm. Math. Phys. \textbf{220} (2001), no.~1, 41--67. \bibitem{CaJHDE} \bysame, \emph{Cascade of phase shifts for nonlinear {S}chr\"odinger equations}, J. Hyperbolic Differ. Equ. \textbf{4} (2007), no.~2, 207--231. \bibitem{CaBook} \bysame, \emph{Semi-classical analysis for nonlinear {S}chr\"odinger equations}, World Scientific Publishing Co. Pte. Ltd., Hackensack, NJ, 2008. \bibitem{CL-FT} R.~Carles and D.~Lannes, \emph{Focusing at a point with caustic crossing for a class of nonlinear equations}, 2nd France-Tunisia meeting (2003). \bibitem{CM-AA} R.~Carles and S.~Masaki, \emph{Semiclassical analysis for {H}artree equations}, Asymptotic Analysis \textbf{58} (2008), no.~4, 211--227. \bibitem{CMS-SIAM} R.~Carles, N.~J. Mauser, and H.~P. Stimming, \emph{({S}emi)classical limit of the {H}artree equation with harmonic potential}, SIAM J. Appl. Math. \textbf{66} (2005), no.~1, 29--56.
\bibitem{CR-CMP} D.~Chiron and F.~Rousset, \emph{Geometric optics and boundary layers for {N}onlinear-{S}chr\"odinger {E}quations}, Comm. Math. Phys. \textbf{288} (2009), no.~2, 503--546. \bibitem{ELT-IUMJ} S.~Engelberg, H.~Liu, and E.~Tadmor, \emph{Critical thresholds in {E}uler-{P}oisson equations}, Indiana Univ. Math. J. \textbf{50} (2001), no.~Special Issue, 109--157, Dedicated to Professors Ciprian Foias and Roger Temam (Bloomington, IN, 2000). \bibitem{Gallo} C.~Gallo, \emph{Schr\"odinger group on {Z}hidkov spaces}, Adv. Differential Equations \textbf{9} (2004), no.~5-6, 509--538. \bibitem{GLM-TJM} I.~Gasser, C.-K. Lin, and P.~A. Markowich, \emph{A review of dispersive limits of (non)linear {S}chr\"odinger-type equations}, Taiwanese J. Math. \textbf{4} (2000), no.~4, 501--529. \bibitem{PGEP} P.~G{\'e}rard, \emph{Remarques sur l'analyse semi-classique de l'\'equation de {S}chr\"odinger non lin\'eaire}, S\'eminaire sur les \'Equations aux D\'eriv\'ees Partielles, 1992--1993, \'Ecole Polytech., Palaiseau, 1993, pp.~Exp.\ No.\ XIII, 13. \bibitem{PGAHPANL} \bysame, \emph{The {C}auchy problem for the {G}ross-{P}itaevskii equation}, Ann. Inst. H. Poincar\'e Anal. Non Lin\'eaire \textbf{23} (2006), no.~5, 765--779. \bibitem{GlJMP} R.~T. Glassey, \emph{On the blowing up of solutions to the {C}auchy problem for nonlinear {S}chr\"odinger equations}, J. Math. Phys. \textbf{18} (1977), no.~9, 1794--1797. \bibitem{Grenier98} E.~Grenier, \emph{Semiclassical limit of the nonlinear {S}chr\"odinger equation in small time}, Proc. Amer. Math. Soc. \textbf{126} (1998), no.~2, 523--530. \bibitem{Hormander1} L.~H{\"o}rmander, \emph{The analysis of linear partial differential operators. {I}}, second ed., Springer Study Edition, Springer-Verlag, Berlin, 1990, Distribution theory and Fourier analysis. \bibitem{HK-WM} J.~Hunter and J.~Keller, \emph{Caustics of nonlinear waves}, Wave Motion \textbf{9} (1987), 429--443.
\bibitem{LL-EJDE} H.~Li and C.-K. Lin, \emph{Semiclassical limit and well-posedness of nonlinear {S}chr\"odinger-{P}oisson systems}, Electron. J. Differential Equations (2003), No.~93, 17 pp. (electronic). \bibitem{LZ-ARMA} F.~Lin and P.~Zhang, \emph{Semiclassical limit of the {G}ross-{P}itaevskii equation in an exterior domain}, Arch. Ration. Mech. Anal. \textbf{179} (2006), no.~1, 79--107. \bibitem{LT-MAA} H.~Liu and E.~Tadmor, \emph{Semiclassical limit of the nonlinear {S}chr\"odinger-{P}oisson equation with subcritical initial data}, Methods Appl. Anal. \textbf{9} (2002), no.~4, 517--531. \bibitem{MP-JJAM} T.~Makino and B.~Perthame, \emph{Sur les solutions \`a sym\'etrie sph\'erique de l'\'equation d'{E}uler-{P}oisson pour l'\'evolution d'\'etoiles gazeuses}, Japan J. Appl. Math. \textbf{7} (1990), no.~1, 165--170. \bibitem{MaRIMS} S.~Masaki, \emph{Remarks on global existence of classical solution to multi-dimensional compressible {E}uler-{P}oisson equations with geometrical symmetry}, RIMS Kokyuroku Bessatsu, to appear. \bibitem{MaAHP} \bysame, \emph{Semi-classical analysis for {H}artree equations in some supercritical cases}, Ann. Henri Poincar\'e \textbf{8} (2007), no.~6, 1037--1069. \bibitem{MaASPM} \bysame, \emph{Semi-classical analysis of the {H}artree equation around and before the caustic}, Adv. Stud. Pure Math. \textbf{47} (2007), no.~1, 217--236. \bibitem{PeJJAM} B.~Perthame, \emph{Nonexistence of global solutions to {E}uler-{P}oisson equations for repulsive forces}, Japan J. Appl. Math. \textbf{7} (1990), no.~2, 363--367. \bibitem{Tsu09} I.~Tsukamoto, \emph{On asymptotic behavior of positive solutions of $x''=-t^{\alpha\lambda-2}x^{1+\alpha}$ with $\alpha<0$ and $-1<\lambda<0$}, preprint, 2009. \bibitem{ZhSIAM} P.~Zhang, \emph{Wigner measure and the semiclassical limit of {S}chr\"odinger-{P}oisson equations}, SIAM J. Math. Anal.
\textbf{34} (2002), no.~3, 700--718 (electronic). \bibitem{Zhidkov} P.~E. Zhidkov, \emph{The {C}auchy problem for a nonlinear {S}chr\"odinger equation}, JINR Commun., P5-87-373, Dubna (1987), (in Russian). \end{thebibliography} \end{document}
\begin{document} \gdef\@thefnmark{}\@footnotetext{\textup{2000} \textit{Mathematics Subject Classification}: 57N05, 20F38, 20F05} \gdef\@thefnmark{}\@footnotetext{\textit{Keywords}: Mapping class groups, nonorientable surfaces, twist subgroup, involutions} \newenvironment{prooff}{ \par \noindent {\it Proof}\ }{ $\mathchoice\sqr67\sqr67\sqr{2.1}6\sqr{1.5}6$ \par} \def\sqr#1#2{{\vcenter{\hrule height.#2pt \hbox{\vrule width.#2pt height#1pt \kern#1pt \vrule width.#2pt}\hrule height.#2pt}}} \def\mathchoice\sqr67\sqr67\sqr{2.1}6\sqr{1.5}6{\mathchoice\sqr67\sqr67\sqr{2.1}6\sqr{1.5}6} \def\pf#1{ \par \noindent {\it #1.}\ } \def $\square$ \par{ $\mathchoice\sqr67\sqr67\sqr{2.1}6\sqr{1.5}6$ \par} \def\demo#1{ \par \noindent {\it #1.}\ } \def \par{ \par} \def~ $\square${~ $\mathchoice\sqr67\sqr67\sqr{2.1}6\sqr{1.5}6$} \title[Generating the Twist Subgroup by Involutions] {Generating the Twist Subgroup by Involutions} \author[T{\"{u}}l\.{i}n Altun{\"{o}}z, Mehmetc\.{i}k Pamuk, and O\u{g}uz Y{\i}ld{\i}z ]{T{\"{u}}l\.{i}n Altun{\"{o}}z, Mehmetc\.{i}k Pamuk, and O\u{g}uz Y{\i}ld{\i}z} \address{Department of Mathematics, Middle East Technical University, Ankara, Turkey} \email{[email protected]} \email{[email protected]} \email{[email protected]} \begin{abstract} For a nonorientable surface, the twist subgroup is an index $2$ subgroup of the mapping class group. It is generated by Dehn twists about two-sided simple closed curves. In this paper, we study involution generators of the twist subgroup. We give generating sets of involutions with the smallest number of elements our methods allow. \end{abstract} \maketitle \setcounter{secnumdepth}{2} \setcounter{section}{0} \section{Introduction} Let $N_g$ denote a closed connected nonorientable surface of genus $g$. The mapping class group of $N_g$ is defined to be the group of the isotopy classes of all diffeomorphisms of $N_g$. Throughout the paper this group will be denoted by ${\rm Mod}(N_g)$. 
Let $\Sigma_g$ denote a closed connected orientable surface of genus $g$. The mapping class group of $\Sigma_g$ is the group of the isotopy classes of orientation-preserving diffeomorphisms and is denoted by ${\rm Mod}(\Sigma_g)$. In the orientable case, it is a classical result that ${\rm Mod}(\Sigma_g)$ is generated by finitely many Dehn twists about nonseparating simple closed curves~\cite{de,H,l3}. The study of algebraic properties of the mapping class group, such as finding small generating sets or generating sets with particular properties, is an active area leading to interesting developments. Wajnryb~\cite{w} showed that ${\rm Mod}(\Sigma_g)$ can be generated by two elements, each given as a product of Dehn twists. As the group is not abelian, this is the smallest possible. Later, Korkmaz~\cite{mk2} showed that one of these generators can be taken to be a Dehn twist; he also proved that ${\rm Mod}(\Sigma_g)$ can be generated by two torsion elements. Recently, the third author showed that ${\rm Mod}(\Sigma_g)$ is generated by two torsions of small orders~\cite{y1}. Generating ${\rm Mod}(\Sigma_g)$ by involutions was first considered by McCarthy and Papadopoulos~\cite{mp}. They showed that the group can be generated by infinitely many conjugates of a single involution (element of order two) for $g\geq 3$. In terms of generating by finitely many involutions, Luo~\cite{luo} showed that any Dehn twist about a nonseparating simple closed curve can be written as a product of six involutions, which in turn implies that ${\rm Mod}(\Sigma_g)$ can be generated by $12g+6$ involutions. Brendle and Farb~\cite{bf} obtained a generating set of six involutions for $g\geq3$. Following their work, Kassabov~\cite{ka} showed that ${\rm Mod}(\Sigma_g)$ can be generated by four involutions if $g\geq7$.
Recently, Korkmaz~\cite{mk1} showed that ${\rm Mod}(\Sigma_g)$ is generated by three involutions if $g\geq8$ and four involutions if $g\geq3$. Also, the third author improved his result by showing that it is generated by three involutions if $g\geq6$~\cite{y2}. Compared to orientable surfaces, less is known about ${\rm Mod}(N_g)$. Lickorish~\cite{l1,l2} showed that it is generated by Dehn twists about two-sided simple closed curves and a so-called $Y$-homeomorphism (or crosscap slide). Chillingworth~\cite{c} gave a finite generating set for ${\rm Mod}(N_g)$ whose size depends linearly on $g$. Szepietowski~\cite{sz2} proved that ${\rm Mod}(N_g)$ is generated by three elements and by four involutions. The twist subgroup $\mathcal{T}_g$ of ${\rm Mod}(N_g)$ is the group generated by Dehn twists about two-sided simple closed curves. The group $\mathcal{T}_g$ is a subgroup of index $2$ in ${\rm Mod}(N_g)$~\cite{l2}. Chillingworth~\cite{c} showed that $\mathcal{T}_g$ can be generated by finitely many Dehn twists. Stukow~\cite{st2} obtained a finite presentation for $\mathcal{T}_g$ with $(g+2)$ Dehn twist generators. Later, Omori~\cite{om} reduced the number of Dehn twist generators to $(g+1)$ for $g\geq4$. If it is not required that all generators be Dehn twists, Du~\cite{du} obtained a generating set consisting of three elements, two involutions and an element of order $2g$, whenever $g\geq5$ is odd. Recently, Yoshihara~\cite{yo} considered the problem of finding generating sets for $\mathcal{T}_g$ consisting only of involutions. He proved that $\mathcal{T}_g$ can be generated by six involutions for $g\geq14$ and by eight involutions if $g\geq8$. Our aim in this paper is to generate $\mathcal{T}_g$ with a smaller number of involutions. It is known that any group generated by two involutions is isomorphic to a quotient of a dihedral group. Hence, $\mathcal{T}_g$ cannot be generated by two involutions.
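Let us spell out this standard observation for the reader's convenience (it is not part of the original text). If $a$ and $b$ are involutions in a group, then the assignment $x\mapsto a$, $y\mapsto b$ extends to a surjection
\[
D_\infty=\langle x,y \mid x^2=y^2=1\rangle \cong \mathbb{Z}_2 \ast \mathbb{Z}_2 \longrightarrow \langle a,b\rangle,
\]
so $\langle a,b\rangle$ is a quotient of the infinite dihedral group; in particular, it is virtually cyclic, whereas $\mathcal{T}_g$ is not for the values of $g$ considered here.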
We do not know whether $\mathcal{T}_g$ can be generated by three involutions. Based on the approach of~\cite{mk1}, we obtain the following result: \begin{main}\label{t0} The twist subgroup $\mathcal{T}_g$ of ${\rm Mod}(N_g)$ is generated by \begin{enumerate} \item[(1)] four involutions if $g\geq12$ and even, \item[(2)] four involutions if $g=4k+1\geq 5$, \item[(3)] five involutions if $g=4k+3\geq 11$. \end{enumerate} We also prove that the twist subgroup $\mathcal{T}_g$ can be generated by \begin{enumerate} \item[(4)] five involutions if $g=8,10$, \item[(5)] six involutions if $g=6,7$. \end{enumerate} \end{main} Note that if a group is generated by involutions, then its first integral homology group consists of elements of order at most $2$. For the twist subgroup $\mathcal{T}_g$, this is the case when $g\geq5$~\cite{st1}. The paper is organized as follows. In Section~\ref{S2}, we recall some basic results on ${\rm Mod}(N_g)$ and its subgroup $\mathcal{T}_g$. We work with nonorientable surfaces of even genus in Section~\ref{S3} and nonorientable surfaces of odd genus in Section~\ref{S4}. \noindent \textit{Acknowledgments.} The authors thank Mustafa Korkmaz for various fruitful discussions. The first author was partially supported by the Scientific and Technological Research Council of Turkey (TUBITAK) [grant number 117F015]. \par \section{Background and Results on Mapping Class Groups} \label{S2} Let $N_g$ be a closed connected nonorientable surface of genus $g$. Note that the {\textit{genus}} of a nonorientable surface is the number of projective planes in a connected sum decomposition. The {\textit{mapping class group}} ${\rm Mod}(N_g)$ of the surface $N_g$ is defined to be the group of the isotopy classes of diffeomorphisms $N_g \to N_g$. Throughout the paper we do not distinguish a diffeomorphism from its isotopy class.
For the composition of two diffeomorphisms, we use functional notation: if $g$ and $h$ are two diffeomorphisms, the composition $gh$ means that $h$ acts on $N_g$ first.\\ \indent A simple closed curve on a nonorientable surface $N_g$ is said to be \textit{one-sided} if a regular neighbourhood of it is homeomorphic to a M\"{o}bius band. It is called \textit{two-sided} if a regular neighbourhood of it is homeomorphic to an annulus. If $a$ is a two-sided simple closed curve on $N_g$, to define the Dehn twist $t_a$, we need to fix one of the two possible orientations of a regular neighbourhood of $a$ (as we did for the curve $a_1$ in Figure~\ref{G}). Following~\cite{mk1}, the right-handed Dehn twist $t_a$ about $a$ will be denoted by the corresponding capital letter $A$. Recall the following properties of Dehn twists: let $a$ and $b$ be two-sided simple closed curves on $N_g$ and let $f\in {\rm Mod}(N_g)$. \begin{itemize} \item \textbf{Commutativity:} If $a$ and $b$ are disjoint, then $AB=BA$. \item \textbf{Conjugation:} If $f(a)=b$, then $fAf^{-1}=B^{s}$, where $s=\pm 1$ depending on whether $f$ is orientation preserving or orientation reversing on a neighbourhood of $a$ with respect to the chosen orientation. \end{itemize} \begin{figure} \caption{The curves $a_1,a_2,b_i,c_i,e$ and $f$ on the surface $N_g$.} \label{G} \end{figure} \par \begin{figure} \caption{Generators of $H_1(N_g;\mathbb{R})$.} \label{H} \end{figure} Consider the surface $N_g$ shown in Figure~\ref{G}. The Dehn twist generators of Omori can be given as follows (note that we do not have the curve $d_r$ when $g$ is odd). \begin{theorem}\cite{om}\label{thm1} The twist subgroup $\mathcal{T}_g$ is generated by the following $(g+1)$ Dehn twists: \begin{enumerate} \item $A_1,A_2,B_1,\ldots, B_r$, $C_1,\ldots, C_{r-1}$ and $E$ if $g=2r+1$, and \item $A_1,A_2,B_1,\ldots, B_r$, $C_1,\ldots, C_{r-1}$, $D_r$ and $E$ if $g=2r+2$. \end{enumerate} \end{theorem} Consider a basis $\lbrace x_1, x_2,
\ldots, x_{g-1}\rbrace$ for $H_1(N_g; \mathbb{R})$ such that the curves $x_i$ are one-sided and disjoint, as in Figure~\ref{H}. It is known that every diffeomorphism $f: N_g \to N_g$ induces a linear map $f_{\ast}: H_1(N_g;\mathbb{R}) \to H_1(N_g;\mathbb{R})$. Therefore, one can define a homomorphism $D: {\rm Mod}(N_g) \to \mathbb{Z}_{2}$ by $D(f)=\textrm{det}(f_{\ast})$. The following lemma from~\cite{l1} tells us when a mapping class belongs to the twist subgroup $\mathcal{T}_g$. \begin{lemma}\label{lem1} Let $f\in {\rm Mod}(N_g)$. Then $D(f)=1$ if $f\in \mathcal{T}_g$ and $D(f)=-1$ if $f \not \in \mathcal{T}_g$. \end{lemma} \section{The even case}\label{S3} \begin{figure} \caption{The models for $N_g$ if $g=2r+2$.} \label{C} \end{figure} For $g=2r+2$, we work with the models in Figure~\ref{C}. This surface is obtained from a genus $r$ orientable surface by deleting the interiors of two disjoint disks and identifying the antipodal points on the boundary. Moreover, the genus $r$ surface minus two disks is embedded in $\mathbb{R}^{3}$ in such a way that each handle is in a circular position, with the second handle on the $+z$-axis, and the rotation $R$ by $\frac{2\pi}{r}$ about the $x$-axis maps the curve $b_i$ to $b_{i+1}$ for $i=1,\ldots,r-1$ and $b_r$ to $b_1$.\par We use the explicit homeomorphism constructed in~\cite[Section $3$]{st1} to identify the models in Figures~\ref{G} and~\ref{C}. On the left-hand side of Figure~\ref{C}, one of the crosscaps is centered on the $+x$-axis and the other one is obtained by rotating the first one by $\pi$ about the $z$-axis. The model on the right-hand side is obtained from the model on the left-hand side by sliding crosscaps via a diffeomorphism, say $\phi$. Let $\tau$ be the blackboard reflection of $N_g$ for the model on the left-hand side in Figure~\ref{C}. If $r$ is odd, we consider the reflection $\tau$, and if $r$ is even, we consider the reflection $\phi\tau\phi^{-1}$.
The surface $N_g$ is invariant under the reflections $\tau$ and $\phi\tau\phi^{-1}$. Abusing notation, we keep writing $\tau$ instead of $\phi\tau\phi^{-1}$. Note that $D(\tau)=-1$.\\ \indent Note that the surface $N_g$ is invariant under the two rotations $\rho_{1}^{\prime}$ and $\rho_{2}^{\prime}$, where $\rho_{1}^{\prime}$ is the rotation by $\pi$ about the $z$-axis and $\rho_{2}^{\prime}$ is the rotation by $\pi$ about the line $z=\tan(\frac{\pi}{r})y$, $x=0$, as in Figure~\ref{C}. The rotations $\rho_{1}^{\prime}$ and $\rho_{2}^{\prime}$ satisfy $D(\rho_{1}^{\prime})=D(\rho_{2}^{\prime})=-1$, which implies that the twist subgroup $\mathcal{T}_g$ contains neither $\rho_{1}^{\prime}$ nor $\rho_{2}^{\prime}$. Let $\rho_1=\rho_{1}^{\prime}\tau$ and $\rho_2=\rho_{2}^{\prime}\tau$. Then the involutions $\rho_{1}$ and $\rho_{2}$ are contained in $\mathcal{T}_g$ by Lemma~\ref{lem1}. Observe that the rotation $R=\rho_2\rho_1$. \subsection{Generating sets for the twist subgroup $\mathcal{T}_g$} Recently, Korkmaz~\cite{mk1} introduced new generating sets for the mapping class group of an orientable surface. We follow the outline of his proofs. In particular, since the curves $a_i$, $b_i$ and $c_i$ are exactly the same as in~\cite{mk1}, statements about these curves follow directly from~\cite{mk1}. Before we state our result, let us recall the above-mentioned theorem of Korkmaz. Recall that $A_i$, $B_i$, $C_i$, $E$ and $F$ denote the Dehn twists about the curves labeled by the corresponding lowercase letters in Figures~\ref{G} and~\ref{C}. \begin{theorem}\cite{mk1}\label{mt1} Let $\Sigma_g$ denote a closed connected oriented surface of genus $g$. Then, if $g\geq3$, ${\rm Mod}(\Sigma_g)$ is generated by the four elements $R, A_1A_{2}^{-1}, B_1B_{2}^{-1}$ and $C_1C_{2}^{-1}$. \end{theorem} Using the above theorem, we give a generating set for $\mathcal{T}_g$ when $g$ is even. \begin{theorem}\label{t1} Let $r\geq3$ and $g=2r+2$.
Then the twist subgroup $\mathcal{T}_g$ is generated by the elements $R, A_1A_{2}^{-1}, B_1B_{2}^{-1}, C_1C_{2}^{-1}, D_r$ and $E$. \end{theorem} \begin{proof} Let $G$ be the subgroup of $\mathcal{T}_g$ generated by the set \[ \lbrace R, A_1A_{2}^{-1}, B_1B_{2}^{-1}, C_1C_{2}^{-1}, D_r, E\rbrace \] if $g=2r+2$. Let $\mathcal{S}$ denote the set of isotopy classes of two-sided non-separating simple closed curves on $N_g$. Define a subset $\mathcal{G}$ of $\mathcal{S}\times \mathcal{S}$ as \[ \mathcal{G} =\lbrace(a,b): AB^{-1}\in G \rbrace. \] The set $\mathcal{G}$ defines an equivalence relation on $\mathcal{S}$ which satisfies the $G$-invariance property, that is, \begin{center} if $(a,b)\in \mathcal{G}$ and $H\in G$ then $(H(a),H(b))\in \mathcal{G}$. \end{center} Then it follows from the proof of Theorem~\ref{mt1} that the Dehn twists $A_i$ and $B_i$ for $i=1,\ldots,r$ are contained in $G$. Also, $G$ contains $C_j$ for $j=1,\ldots,r-1$. Since all generators given in Theorem~\ref{thm1} are contained in the group $G$, we conclude that $G=\mathcal{T}_g$. \end{proof} \subsection{Involution generators} We consider the surface $N_g$, where $g$ crosscaps are distributed on the sphere as in Figure~\ref{sigma}. If $g=2r+2$ and $r\geq3$, there is a reflection, $\sigma$, of the surface $N_g$ in the $xy$-plane such that \begin{itemize} \item $\sigma(f)=a_1$, $\sigma(b_r)=d_r$, \item $\sigma(x_2)=x_3$, $\sigma(x_4)=x_5$, $\sigma(x_{g-2})=x_{g}$ and \item $\sigma(x_i)=x_i$ if $i=6,\ldots,g-3$ or $i=1,g-1$, \end{itemize} with reversed orientation. (Recall that the $x_i$'s are the generators of $H_1(N_g;\mathbb{R})$ as shown in Figure~\ref{H}.)\\ \begin{figure} \caption{The involution $\sigma$ if $g=2r+2$.} \label{sigma} \end{figure} \indent The linear map $D$ associated to $\sigma$ satisfies $D(\sigma)=1$ if $g$ is even. This implies that the involution $\sigma$ is contained in $\mathcal{T}_g$ if $g$ is even.
\indent \begin{figure} \caption{The proof of Theorem~\ref{g12}.} \label{E} \end{figure} \begin{theorem}\label{g12} The twist subgroup $\mathcal{T}_{12}$ is generated by the involutions $\rho_1,\rho_2, \rho_1 A_1 B_2 C_4 A_3$ and $\sigma$. \end{theorem} \begin{proof} Consider the surface $N_{12}$ as in Figure~\ref{C}. Since \[ \rho_1(a_1)=a_3, \rho_1(b_2)=b_2 \textrm{ and } \rho_1(c_4)=c_4, \] and $\tau$ reverses the orientation of a neighbourhood of a two-sided simple closed curve, we get \begin{itemize} \item $\rho_1A_1\rho_1=A_{3}^{-1}$, \item $\rho_1B_2\rho_1=B_{2}^{-1}$ and \item $\rho_1C_4\rho_1=C_{4}^{-1}$. \end{itemize} It is easy to verify that $\rho_1 A_1 B_2 C_4 A_3$ is an involution. Let $E_1=A_1 B_2 C_4 A_3$ and let $H$ be the subgroup of $\mathcal{T}_{12}$ generated by the set \[ \lbrace \rho_1,\rho_2, \rho_1E_1, \sigma\rbrace. \] Note that the rotation $R$ is in the subgroup $H$. By Theorem~\ref{t1}, we need to show that the elements $A_1A_{2}^{-1}, B_1B_{2}^{-1}, C_1C_{2}^{-1}, D_5$ and $E$ are contained in $H$.\\ \noindent Let $E_2=RE_1R^{-1}=A_2B_3C_5A_4$. It can be easily shown that \[ E_2E_1(a_2,b_3,c_5,a_4)=(b_2,a_3,c_5,a_4), \] so that $E_3=B_2A_3C_5A_4$ is in $H$.\\ \noindent Let \[ E_4=R^{2}E_1R^{-2}=A_3B_4C_1A_5. \] It is easy to show that \[ E_4E_3(a_3,b_4,c_1,a_5)=(a_3,a_4,b_2,a_5), \] so that $E_5=A_3A_4B_2A_5$ is contained in $H$. Hence, \[ E_5E_{3}^{-1}=A_5C_{5}^{-1}\in H. \] One can easily see that the elements $A_iC_{i}^{-1}$ are contained in $H$ by conjugating $A_5C_{5}^{-1}$ with powers of $R$.\\ \noindent Let \[ E_6=RE_5R^{-1}=A_4A_5B_3A_1. \] One can easily show that \[ E_6E_5(a_4,a_5,b_3,a_1)=(a_4,a_5,a_3,a_1), \] so that $E_7=A_4A_5A_3A_1$ is in $H$. Therefore, \[ E_7E_{6}^{-1}=A_3B_{3}^{-1}\in H. \] By conjugating with powers of $R$, we get $A_iB_{i}^{-1}\in H$. Hence, \[ B_5C_{5}^{-1}=(B_5A_{5}^{-1})(A_5C_{5}^{-1})\in H.
\] Again by conjugating with powers of $R$, the elements $B_iC_{i}^{-1}$ are contained in $H$.\\ \noindent Let \[ E_8=(A_2B_{2}^{-1})(B_3A_{3}^{-1})E_1=A_1A_2C_4B_3 \] and \[ E_9=R^{2}E_8R^{-2}=A_3A_4C_1B_5. \] It can also be shown that \[ E_9E_8(a_3,a_4,c_1,b_5)=(b_3,a_4,c_1,c_4), \] so that $E_{10}=B_3A_4C_1C_4\in H$. Hence, \[ E_9E_{10}^{-1}B_{3}A_{3}^{-1}=B_5C_{4}^{-1}\in H. \] The conjugation of this with powers of $R$ implies that $B_{i+1}C_{i}^{-1}\in H$. Hence \begin{itemize} \item $A_1A_{2}^{-1}=(A_1C_{1}^{-1})(C_1B_{2}^{-1})(B_2A_{2}^{-1})$, \item $B_1B_{2}^{-1}=(B_1C_{1}^{-1})(C_1B_{2}^{-1})$ and \item $C_1C_{2}^{-1}=(C_1B_{2}^{-1})(B_2C_{2}^{-1})$ \end{itemize} are contained in $H$. Also, it follows from the fact that \[ \sigma(a_1)=f \text{ and } \sigma(b_5)=d_5 \] with a suitable choice of orientations of regular neighbourhoods of the curves that the elements $D_5$ and $F$ are contained in $H$. By the fact that $A_1(f)=e$, $E$ is in $H$. We conclude that $H=\mathcal{T}_{12}$. \end{proof} \begin{theorem}\label{rodd} For $g=2r+2$, the twist subgroup $\mathcal{T}_{g}$ is generated by the involutions $\rho_1,\rho_2,\rho_1A_1B_2C_{\frac{r+3}{2}} A_3$ and $\sigma$ if $r\geq7$ and odd. \end{theorem} \begin{proof} Consider the surface $N_{g}$ as in Figure~\ref{C}. We have \[ \rho_1(a_1)=a_3, \rho_1(b_2)=b_2 \textrm{ and } \rho_1(c_{\frac{r+3}{2}})=c_{\frac{r+3}{2}}. \] Since $\tau$ reverses the orientation of a neighbourhood of a two-sided simple closed curve, we get \begin{itemize} \item $\rho_1A_1\rho_1=A_{3}^{-1}$, \item $\rho_1B_2\rho_1=B_{2}^{-1}$ and \item $\rho_1C_{\frac{r+3}{2}}\rho_1=C_{\frac{r+3}{2}}^{-1}$. \end{itemize} It can be shown that $\rho_1A_1B_2C_{\frac{r+3}{2}}A_3$ is an involution. Let $G_1=A_1B_2C_{\frac{r+3}{2}}A_3$ and let $K$ be the subgroup of $\mathcal{T}_{g}$ generated by the set \[ \lbrace \rho_1,\rho_2, \rho_1G_1, \sigma\rbrace. \] Note that the rotation $R$ is in $K$.
By Theorem~\ref{t1}, we need to show that the elements $A_1A_{2}^{-1}, B_1B_{2}^{-1}, C_1C_{2}^{-1}, D_r$ and $E$ are contained in $K$.\\ \noindent It follows from \begin{itemize} \item$G_2=RG_1R^{-1}=A_2B_3C_{\frac{r+5}{2}}A_4\in K$, \item $G_3=(G_2G_1)G_2(G_2G_1)^{-1}=B_2A_3C_{\frac{r+5}{2}}A_4\in K$, \item $G_4=RG_3R^{-1}=B_3A_4C_{\frac{r+7}{2}}A_5\in K$, \item $G_5=(G_4G_3)G_4(G_4G_3)^{-1}=A_3A_4C_{\frac{r+7}{2}}A_5\in K$ \end{itemize} that \[ G_4G_{5}^{-1}=B_3A_{3}^{-1}\in K. \] Hence, the elements $B_iA_{i}^{-1}$ are contained in $K$ by conjugating $B_3A_{3}^{-1}$ with powers of $R$. Let \begin{itemize} \item $G_6=R^{\frac{r-3}{2}}G_4R^{\frac{3-r}{2}}=B_{\frac{r+3}{2}}A_{\frac{r+5}{2}}C_2A_{\frac{r+7}{2}} \in K$, \item $G_7=(G_6G_4)G_6(G_6G_4)^{-1}=B_{\frac{r+3}{2}}A_{\frac{r+5}{2}}B_3A_{\frac{r+7}{2}} \in K$ if $r>7$,\\ ($G_7=A_5A_6B_3A_7 \in K$ if $r=7$). \end{itemize} Then \[ G_7G_{6}^{-1}=B_3C_{2}^{-1} \in K \textrm{ if } r>7 \] and \[ G_7G_{6}^{-1}B_5A_{5}^{-1}=B_3C_{2}^{-1} \in K \textrm{ if } r=7. \] Therefore, the elements $B_{i+1}C_{i}^{-1}$ are contained in the group $K$ by conjugating $B_3C_{2}^{-1}$ with powers of $R$. Let \begin{itemize} \item $G_8=R^{\frac{r-1}{2}}G_4R^{\frac{1-r}{2}}=B_{\frac{r+5}{2}}A_{\frac{r+7}{2}}C_3A_{\frac{r+9}{2}} \in K$ if $r>7$,\\ ($G_8=B_6A_7C_3A_1 \in K$ if $r=7$), \item $G_{9}=(G_8G_4)G_8(G_8G_4)^{-1}=B_{\frac{r+5}{2}}A_{\frac{r+7}{2}}B_3A_{\frac{r+9}{2}} \in K$ if $r>7$\\ ($G_9=B_6A_7B_3A_1 \in K$ if $r=7$). \end{itemize} Then \[ G_9G_{8}^{-1}=B_3C_3^{-1} \in K \textrm{ if } r\geq7. \] This implies that the subgroup $K$ contains $B_{i}C_{i}^{-1}$ by conjugating $B_3C_{3}^{-1}$ with powers of $R$. The rest of the proof is very similar to the proof of Theorem~\ref{g12}. \end{proof} \begin{theorem}\label{t3.5} For $g=2r+2$, the twist subgroup $\mathcal{T}_{g}$ is generated by the involutions $\rho_1,\rho_2,\rho_1A_2C_{\frac{r}{2}}B_{\frac{r+4}{2}} C_{\frac{r+6}{2}}$ and $\sigma$ if $r\geq6$ and even.
\end{theorem} \begin{proof} Consider the surface $N_{g}$ as in Figure~\ref{C}. The involution $\rho_1$ satisfies \[ \rho_1(a_2)=a_2, \rho_1(b_{\frac{r+4}{2}})=b_{\frac{r+4}{2}} \textrm{ and } \rho_1(c_{\frac{r}{2}})=c_{\frac{r+6}{2}}. \] Since $\tau$ reverses the orientation of a neighbourhood of a two-sided simple closed curve, we have \begin{itemize} \item $\rho_1A_2\rho_1=A_{2}^{-1}$ \item $\rho_1B_{\frac{r+4}{2}}\rho_1=B_{\frac{r+4}{2}}^{-1}$ and \item $\rho_1C_{\frac{r}{2}}\rho_1=C_{\frac{r+6}{2}}^{-1}$. \end{itemize} It can be shown that $\rho_1A_2C_{\frac{r}{2}}B_{\frac{r+4}{2}} C_{\frac{r+6}{2}}$ is an involution. Let $H_1=A_2C_{\frac{r}{2}}B_{\frac{r+4}{2}} C_{\frac{r+6}{2}}$ and let $K$ be the subgroup of $\mathcal{T}_{g}$ generated by the set \[ \lbrace \rho_1,\rho_2, \rho_1H_1, \sigma\rbrace . \] Note that the rotation $R$ is in $K$. By Theorem ~\ref{t1}, we need to show that the elements $A_1A_{2}^{-1}, B_1B_{2}^{-1}, C_1C_{2}^{-1},D_r$ and $E$ are contained in $K$.\\ Let \begin{itemize} \item$H_2=RH_1R^{-1}=A_3C_{\frac{r+2}{2}}B_{\frac{r+6}{2}}C_{\frac{r+8}{2}} \in K$, \item $H_3=(H_2H_1)H_2(H_2H_1)^{-1}=A_3B_{\frac{r+4}{2}}C_{\frac{r+6}{2}}C_{\frac{r+8}{2}}\in K$, \item $H_4=RH_3R^{-1}=A_4B_{\frac{r+6}{2}}C_{\frac{r+8}{2}}C_{\frac{r+10}{2}}\in K$, \item $H_5=(H_4H_3)H_4(H_4H_3)^{-1}=A_4C_{\frac{r+6}{2}}C_{\frac{r+8}{2}}C_{\frac{r+10}{2}}\in K$. \end{itemize} Then, we get \[ H_4H_5^{-1}=B_{\frac{r+6}{2}}C_{\frac{r+6}{2}}^{-1} \in K \] and \[ H_2H_3^{-1}\Big(C_{\frac{r+6}{2}}B_{\frac{r+6}{2}}^{-1}\Big)=C_{\frac{r+2}{2}}B_{\frac{r+4}{2}}^{-1}\in K. 
\] By conjugating the elements $B_{\frac{r+6}{2}}C_{\frac{r+6}{2}}^{-1}$ and $C_{\frac{r+2}{2}}B_{\frac{r+4}{2}}^{-1}$ with powers of $R$, we conclude that $B_iC_{i}^{-1}$ and $C_iB_{i+1}^{-1}$ are contained in $K$.\\ \noindent Let \begin{itemize} \item $H_6=(B_{\frac{r+6}{2}}C_{\frac{r+6}{2}}^{-1})(B_{\frac{r}{2}}C_{\frac{r}{2}}^{-1})H_1=B_{\frac{r+6}{2}}B_{\frac{r}{2}}A_2B_{\frac{r+4}{2}} \in K$, \item $H_7=R^{\frac{r-4}{2}}H_6R^{\frac{4-r}{2}}=A_{\frac{r}{2}}B_{r-2}B_rB_1 \in K$, \item $H_8=(H_7H_6)H_7(H_7H_6)^{-1}=B_{\frac{r}{2}}B_{r-2}B_rB_1 \in K$. \end{itemize} Then \[ H_8H_7^{-1}=B_{\frac{r}{2}}A_{\frac{r}{2}}^{-1} \in K. \] By conjugating with powers of $R$, $K$ contains $B_iA_{i}^{-1}$. The rest of the proof is very similar to the proof of Theorem~\ref{g12}. \end{proof} In the rest of this section, we introduce involution generators for $\mathcal{T}_g$ for $g=6,8$ and $10$.\\ \begin{figure} \caption{The involution $\delta_1$ for $g=4k+2$.} \label{D1} \end{figure} \begin{figure} \caption{The involution $\delta_2$ for $g=4k+2$.} \label{D2} \end{figure} \begin{figure} \caption{The involution $\delta_3$ for $g=10$.} \label{D3} \end{figure} We consider the models for the surface $N_{10}$, where $10$ crosscaps are distributed on the sphere as in Figures~\ref{D1},~\ref{D2} and~\ref{D3}. There are reflections, $\delta_1,\delta_2$ and $\delta_3$, of the surface $N_{10}$ in the $xy$-plane such that \begin{itemize} \item $\delta_1(x_i)=x_{i+1}$ if $i=1,5,9$, \\ $\delta_1(x_3)=x_{8}$, $\delta_1(x_4)=x_{7}$, \item $\delta_2(x_i)=x_{i}$ if $i=2,6,9,10$,\\ $\delta_2(x_1)=x_{3}$, $\delta_2(x_4)=x_{8}$, $\delta_2(x_5)=x_{7}$ and \item $\delta_3(x_i)=x_{i}$ if $i=1,4,5,6$,\\ $\delta_3(x_2)=x_{3}$, $\delta_3(x_8)=x_{9}$, $\delta_3(x_7)=x_{10}$. \end{itemize} Recall that the $x_i$'s are the generators of $H_1(N_g;\mathbb{R})$ as shown in Figure~\ref{H}.
Note that the involutions $\delta_1,\delta_2$ and $\delta_3$ reverse the orientation of a neighbourhood of a two-sided simple closed curve. Since $D(\delta_i)=1$, the involutions $\delta_i$ are in $\mathcal{T}_{10}$ for $i=1,2,3$. \begin{theorem} The twist subgroup $\mathcal{T}_{10}$ is generated by the five involutions $\delta_1,\delta_2, \delta_2\delta_1\delta_2A_2, \delta_1A_1$ and $\delta_3$. \end{theorem} \begin{proof} Let $K$ be the subgroup of $\mathcal{T}_{10}$ generated by the set \[ \lbrace \delta_1,\delta_2, \delta_2\delta_1\delta_2A_2, \delta_1A_1,\delta_3 \rbrace. \] It is clear that $\delta_2\delta_1\delta_2A_2$ and $\delta_1A_1$ are involutions. It follows from \begin{itemize} \item $A_1=\delta_1(\delta_1A_1)$ and \item $A_2=(\delta_2\delta_1\delta_2)(\delta_2\delta_1\delta_2A_2)$ \end{itemize} that the elements $A_1$ and $A_2$ are in $K$. Also, it follows from \begin{itemize} \item $\delta_2(a_1)=b_1$, \item $\delta_2\delta_1(b_i)=c_i$ for $i=1,2,3$ and \item $\delta_2\delta_1(c_i)=b_{i+1}$ for $i=1,2$ \end{itemize} that $B_i,C_i$ are contained in $K$ for $i=1,2,3$. Moreover, since \begin{itemize} \item $\delta_3(c_3)=d_4$, \item $\delta_1\delta_2\delta_3\delta_1\delta_3(c_1)=b_4$ and \item $A_1\delta_3(a_1)=e$, \end{itemize} the elements $D_4, B_4$ and $E$ are in $K$. We conclude that $K=\mathcal{T}_{10}$ by Theorem~\ref{thm1}. \end{proof} We consider the models for the surface $N_{8}$, where $8$ crosscaps are distributed on the sphere as in Figures~\ref{L1},~\ref{L2} and~\ref{L3}. There are reflections, $\lambda_1,\lambda_2$ and $\lambda_3$, of the surface $N_{8}$ in the $xy$-plane such that \begin{itemize} \item $\lambda_1(x_i)=x_{i}$ if $i=7,8$, \\ $\lambda_1(x_i)=x_{i+1}$ if $i=1,4$ and $\lambda_1(x_3)=x_{6}$, \item $\lambda_2(x_i)=x_{i}$ if $i=2,5$,\\ $\lambda_2(x_1)=x_{3}$, $\lambda_2(x_4)=x_{6}$, $\lambda_2(x_7)=x_{8}$, and \item $\lambda_3(x_i)=x_{i}$ if $i=1,4$,\\ $\lambda_3(x_2)=x_{3}$, $\lambda_3(x_5)=x_{8}$ and $\lambda_3(x_6)=x_{7}$.
\end{itemize} Note that the involutions $\lambda_i$ reverse the orientation of a neighbourhood of a two-sided simple closed curve for $i=1,2,3$. Since $D(\lambda_i)=1$, the involutions $\lambda_i$ are contained in $\mathcal{T}_{8}$ for $i=1,2,3$.\\ \begin{figure} \caption{The involution $\lambda_1$ for $g=8$.} \label{L1} \end{figure} \begin{figure} \caption{The involution $\lambda_2$ for $g=8$.} \label{L2} \end{figure} \begin{figure} \caption{The involution $\lambda_3$ for $g=8$.} \label{L3} \end{figure} \begin{theorem} The twist subgroup $\mathcal{T}_{8}$ is generated by the five involutions $\lambda_1,\lambda_2, \lambda_2\lambda_1\lambda_2A_2, \lambda_1A_1$ and $\lambda_3$. \end{theorem} \begin{proof} Let $K$ be the subgroup of $\mathcal{T}_{8}$ generated by the set \[ \lbrace \lambda_1,\lambda_2, \lambda_2\lambda_1\lambda_2A_2, \lambda_1A_1,\lambda_3 \rbrace. \] It is clear that $\lambda_2\lambda_1\lambda_2A_2$ and $\lambda_1A_1$ are involutions. It follows from \begin{itemize} \item $A_1=\lambda_1(\lambda_1A_1)$ and \item $A_2=(\lambda_2\lambda_1\lambda_2)(\lambda_2\lambda_1\lambda_2A_2)$ \end{itemize} that the elements $A_1$ and $A_2$ are in $K$. Also, it follows from \begin{itemize} \item $\lambda_2\lambda_1(a_1)=b_1$, \item $\lambda_2\lambda_1(b_i)=c_i$ for $i=1,2$, \item $\lambda_2\lambda_1(c_1)=b_{2}$, \item $\lambda_3(c_2)=d_3$, \item $\lambda_1\lambda_2\lambda_3\lambda_1\lambda_3(c_1)=b_3$ and \item $A_1\lambda_3(a_1)=e$ \end{itemize} that all generators of $\mathcal{T}_{8}$ given in Theorem~\ref{thm1} are contained in $K$. This completes the proof. \end{proof} \begin{figure} \caption{The involution $\xi_1$ for $g=6$.} \label{X1} \end{figure} \begin{figure} \caption{The involution $\xi_2$ for $g=6$.} \label{X2} \end{figure} We consider the models for the surface $N_{6}$, where $6$ crosscaps are distributed on the sphere as in Figures~\ref{D1},~\ref{D2},~\ref{X1} and~\ref{X2}.
There are reflections $\delta_1,\delta_2, \xi_1$ and $\xi_2$ such that \begin{itemize} \item $\delta_1(x_i)=x_{i+1}$ if $i=1,3,5$, \item $\delta_2(x_i)=x_{i}$ if $i\neq1,3$ and $\delta_2(x_1)=x_{3}$, \item $\xi_1(x_i)=x_{i}$ if $i\neq2,3$ and $\xi_1(x_2)=x_3$ and \item $\xi_2(x_i)=x_{i+1}$ if $i=1,4$ and $\xi_2(x_3)=x_6$. \end{itemize} Note that the involutions $\delta_i$ and $\xi_i$ reverse the orientation of a neighbourhood of a two-sided simple closed curve for $i=1,2$. Since $D(\delta_i)=D(\xi_i)=1$, the twist subgroup $\mathcal{T}_{6}$ contains the involutions $\delta_i$ and $\xi_i$ for $i=1,2$. \begin{theorem} The twist subgroup $\mathcal{T}_{6}$ is generated by the six involutions $\delta_1,\delta_2, \delta_2\delta_1\delta_2A_2, \delta_1A_1,\xi_1$ and $\xi_2$. \end{theorem} \begin{proof} Let $K$ be the subgroup of $\mathcal{T}_{6}$ generated by the set \[ \lbrace \delta_1,\delta_2, \delta_2\delta_1\delta_2A_2, \delta_1A_1,\xi_1,\xi_2 \rbrace. \] It is clear that $\delta_2\delta_1\delta_2A_2$ and $\delta_1A_1$ are involutions. It follows from \begin{itemize} \item $A_1=\delta_1(\delta_1A_1)$ and \item $A_2=(\delta_2\delta_1\delta_2)(\delta_2\delta_1\delta_2A_2)$ \end{itemize} that the elements $A_1$ and $A_2$ are in $K$. Also, it follows from \begin{itemize} \item $\delta_2(a_1)=b_1$, \item $\delta_2\delta_1(b_1)=c_1$, \item $\delta_1\delta_2\xi_2(b_1)=b_2$, \item $\xi_2(c_1)=d_2$ and \item $A_1\xi_1(a_1)=e$ \end{itemize} that all generators of $\mathcal{T}_{6}$ given in Theorem~\ref{thm1} are contained in $K$. This completes the proof. \end{proof} \section{The odd case}\label{S4} For $g=4k+1$, we work with two models for $N_g$: one is on the left hand side of Figure~\ref{SO}; the other is depicted in Figure~\ref{CODD}.
\begin{figure} \caption{The involutions $\tau_1$ and $\tau_2$.} \label{SO} \end{figure} \begin{figure} \caption{The involutions $\rho_1$ and $\rho_2$ for $g=2r+1$.} \label{CODD} \end{figure} The model in Figure~\ref{SO} is the nonorientable surface obtained from $\mathbb{S}^{2}$ embedded in $\mathbb{R}^{3}$ by deleting the interiors of $g$ disjoint disks and identifying the antipodal points on the boundary of each removed disk, say $\mathcal{C}_i$. Moreover, each crosscap $\mathcal{C}_i$ is in a circular position with the second crosscap $\mathcal{C}_2$ on the $+z$-axis, and the rotation $T$ by $\frac{2\pi}{g}$ about the $x$-axis maps the crosscap $\mathcal{C}_i$ to $\mathcal{C}_{i+1}$. The model in Figure~\ref{CODD} is obtained from a genus $r$ orientable surface by deleting the interior of a disk and identifying the antipodal points on the boundary. Moreover, the genus $r$ surface minus a disk is embedded in $\mathbb{R}^{3}$ in such a way that each genus is in a circular position with the second genus on the $+z$-axis and the rotation $R$ by $\frac{2\pi}{r}$ about the $x$-axis maps the curve $b_i$ to $b_{i+1}$ for $i=1,\ldots,r-1$ and $b_r$ to $b_1$.\\ \noindent We use the explicit homeomorphism constructed in \cite[Section 3]{st1} to identify the models in Figures~\ref{G} and~\ref{CODD}. In Figure~\ref{CODD}, one crosscap is on the $+x$-axis. Note that the surface $N_g$ is invariant under the two involutions $\rho_{1}$ and $\rho_{2}$, where $\rho_{1}$ is the reflection in the $xz$-plane and $\rho_{2}$ is the reflection in the plane $z=\tan(\frac{\pi}{r})y$ as in Figure~\ref{CODD}. The reflections $\rho_{1}$ and $\rho_{2}$ satisfy $D(\rho_{1})=D(\rho_{2})=1$ if $g=4k+1$. In this case, the twist subgroup $\mathcal{T}_g$ contains $\rho_{1}$ and $\rho_{2}$. Observe that the rotation $R=\rho_2\rho_1$. For $g=4k+3$, we work with the model on the right hand side of Figure~\ref{SO}.
This surface is a genus-$g$ nonorientable surface obtained from $\mathbb{S}^{2}$ embedded in $\mathbb{R}^{3}$ by deleting the interiors of $g$ disjoint disks and identifying the antipodal points on the boundary of each removed disk, say $\mathcal{C}_i$. Moreover, each crosscap $\mathcal{C}_i$ for $i=1,\ldots,g-2$ is in a circular position with the second crosscap $\mathcal{C}_2$ on the $+z$-axis, and the rotation $T$ by $\frac{2\pi}{g-2}$ about the $x$-axis maps the crosscap $\mathcal{C}_i$ to $\mathcal{C}_{i+1}$ for $i=1,\ldots,g-3$. The crosscap $\mathcal{C}_{g-1}$ is on the $+x$-axis and $\mathcal{C}_g$ is obtained by rotating $\mathcal{C}_{g-1}$ by $\pi$ about the $+z$-axis. Note that the surface $N_g$ is invariant under the two reflections $\tau_1$ and $\tau_2$, where $\tau_1$ is the reflection in the $z$-axis and $\tau_2$ is the reflection in the plane $z=\tan(\frac{\pi}{r})y$ as in Figure~\ref{SO}. The reflections $\tau_1$ and $\tau_2$ satisfy $D(\tau_1)=D(\tau_2)=1$ if $r$ is even, which implies that $\tau_1$ and $\tau_2$ are contained in the twist subgroup $\mathcal{T}_g$.\\ \noindent Recall that in Theorem~\ref{t1} we gave a generating set for $\mathcal{T}_g$ when $g$ is even. We have the following generators when $g$ is odd. \begin{theorem}\label{t2} Let $r\geq3$ and $g=2r+1$. Then the twist subgroup $\mathcal{T}_g$ is generated by the elements $R, A_1A_{2}^{-1}, B_1B_{2}^{-1}, C_1C_{2}^{-1}$ and $E$. \end{theorem} \begin{proof} Let $G$ be the subgroup of $\mathcal{T}_g$ generated by the set \[ \lbrace R, A_1A_{2}^{-1}, B_1B_{2}^{-1}, C_1C_{2}^{-1}, E\rbrace \] if $g=2r+1$. Let $\mathcal{S}$ denote the set of isotopy classes of two-sided non-separating simple closed curves on $N_g$. Define a subset $\mathcal{G}$ of $\mathcal{S}\times \mathcal{S}$ as \[ \mathcal{G} =\lbrace(a,b): AB^{-1}\in G \rbrace.
\] The set $\mathcal{G}$ defines an equivalence relation on $\mathcal{S}$ which satisfies the $G$-invariance property, that is, \begin{center} if $(a,b)\in \mathcal{G}$ and $H\in G$ then $(H(a),H(b))\in \mathcal{G}$. \end{center} Then it follows from the proof of Theorem~\ref{mt1} that the Dehn twists $A_i$ and $B_i$ for $i=1,\ldots,r$ are contained in $G$. Also, $G$ contains $C_j$ for $j=1,\ldots,r-1$. Since all generators given in Theorem~\ref{thm1} are contained in the group $G$, we conclude that $G=\mathcal{T}_g$. \end{proof} \begin{figure} \caption{The involution $\beta$ for $g=2r+1$.} \label{beta} \end{figure} Let $g=2r+1$ and consider the surface $N_g$, where $g$ crosscaps are distributed on $\mathbb{S}^2$ as in Figure~\ref{beta}. First, we introduce a reflection $\beta$ on $N_g$ in the $xy$-plane such that \begin{itemize} \item $\beta(a_1)=f$, \item $\beta(x_2)=x_3$, $\beta(x_4)=x_5$ and \item $\beta(x_1)=x_1$, $\beta(x_i)=x_i$ for $i=6,7,\ldots,g$. \end{itemize} The involution $\beta$ reverses the orientation of a neighbourhood of a two-sided simple closed curve. It satisfies $D(\beta)=1$ and hence $\beta$ is an element of $\mathcal{T}_g$.\\ \noindent For the remaining generators of the following theorem we refer to Figures~\ref{CODD} and~\ref{beta}. \begin{theorem} For $g=4k+1$ and $k\geq3$, the twist subgroup $\mathcal{T}_g$ is generated by the four involutions $\rho_1,\rho_2,\rho_1A_2C_{\frac{r}{2}}B_{\frac{r+4}{2}} C_{\frac{r+6}{2}}$ and $\beta$, where $r=2k$. \end{theorem} \begin{proof} Consider the surface $N_{g}$ as in Figure~\ref{CODD}. The involution $\rho_1$ satisfies \[ \rho_1(a_2)=a_2, \rho_1(b_{\frac{r+4}{2}})=b_{\frac{r+4}{2}} \textrm{ and } \rho_1(c_{\frac{r}{2}})=c_{\frac{r+6}{2}}.
\] Since $\rho_1$ reverses the orientation of a neighbourhood of a two-sided simple closed curve, we have \begin{itemize} \item $\rho_1A_2\rho_1=A_{2}^{-1}$, \item $\rho_1B_{\frac{r+4}{2}}\rho_1=B_{\frac{r+4}{2}}^{-1}$ and \item $\rho_1C_{\frac{r}{2}}\rho_1=C_{\frac{r+6}{2}}^{-1}$. \end{itemize} It can be shown that $\rho_1A_2C_{\frac{r}{2}}B_{\frac{r+4}{2}} C_{\frac{r+6}{2}}$ is an involution. Let $H$ be the subgroup of ${\rm Mod}(N_{g})$ generated by the set \[ \lbrace \rho_1,\rho_2,\rho_1A_2C_{\frac{r}{2}}B_{\frac{r+4}{2}} C_{\frac{r+6}{2}},\beta\rbrace. \] Observe that $R=\rho_2\rho_1\in H$. By the proof of Theorem~\ref{t3.5}, the elements $A_1A_{2}^{-1}, B_1B_{2}^{-1}$ and $C_1C_{2}^{-1}$ belong to $H$. Since $A_1\beta(a_1)=e$, the element $E$ is in $H$. We conclude that $\mathcal{T}_g=H$ by Theorem~\ref{t2}. \end{proof} Although we have already given a generating set consisting of four involutions, for completeness of the applications of our method we first give the following theorem. \begin{theorem} For $g=4k+1$ and $k\geq1$, the twist subgroup $\mathcal{T}_g$ is generated by the five involutions $\tau_1$, $\tau_2$, $\tau_1\tau_2\tau_1A_2$, $\tau_2A_1$ and $\beta$. \end{theorem} \begin{proof} Let $K$ be the subgroup of $\mathcal{T}_g$ generated by the set \[ \lbrace \tau_1,\tau_2, \tau_1\tau_2\tau_1A_2,\tau_2A_1,\beta \rbrace. \] Note that the rotation $T=\tau_1\tau_2$ is contained in $K$. It follows from \begin{itemize} \item $A_1=\tau_2(\tau_2A_1)$ and \item $A_2=(\tau_1\tau_2\tau_1)(\tau_1\tau_2\tau_1A_2)$ \end{itemize} that the elements $A_1$ and $A_2$ are in $K$. By conjugating $A_1$ with powers of $T$, we see that $K$ contains the elements $B_i$ and $C_i$. Moreover, it follows from $\beta(a_1)=f$ that the element $F$ is in $K$. Since $A_1(f)=e$, we get $E\in K$. This finishes the proof by Theorem~\ref{thm1}. \end{proof} In the next theorem, we present four involutions that generate $\mathcal{T}_5$ and $\mathcal{T}_9$ in particular. This completes the case $g=4k+1$ and $k\geq1$.
First, recall that $A_1,A_2,B_1,B_2,C_1$ and $E$ generate $\mathcal{T}_5$ and $A_1,A_2,B_1,B_2,B_3,B_4$, $C_1,C_2,C_3$ and $E$ generate $\mathcal{T}_9$. We use the following three involutions $\gamma,S\gamma$ and $S^{2k-2}(S\gamma)S^{2-2k}A_2$ of the generating set given in \cite[Theorem 5]{sz2}. The involution $\gamma$ is defined as the reflection in the $xz$-plane where the crosscaps are distributed along the equator on $\mathbb{S}^{2}$. The map $S$ is defined as the composition $B_{2k}C_{2k-1}B_{2k-1}\cdots C_1B_1A_1$. Note that $D(\gamma)=D(S\gamma)=D(S^{2k-2}(S\gamma)S^{2-2k}A_2)=1$. \begin{theorem} The twist subgroups $\mathcal{T}_5$ and $\mathcal{T}_9$ can be generated by the involutions $\gamma,S\gamma, S^{2k-2}(S\gamma)S^{2-2k}A_2$ and $\beta$ for $k=1,2$. \end{theorem} \begin{proof} The generator $A_1$ can be obtained from $S$ and $A_2$~\cite[Theorem 5]{mk2}. By conjugating with powers of $S$, it is easy to see that the elements $B_i$ and $C_i$ belong to $\mathcal{T}_g$. Also, the generator $E$ is contained in $\mathcal{T}_g$ since $A_1\beta(a_1)=e$. \end{proof} \begin{figure} \caption{The involution $\mu$ for $g=4k+3$.} \label{MU} \end{figure} Now, let $g=4k+3$ and consider $N_g$, where $g$ crosscaps are distributed over $\mathbb{S}^2$ as in Figure~\ref{MU}. The surface $N_g$ is symmetric with respect to the $xy$-plane. Let $\mu$ be the reflection in the $xy$-plane. Note that the linear map associated to the involution $\mu$ satisfies $D(\mu)=1$ if $k\geq2$. Therefore, the involution $\mu$ is in $\mathcal{T}_g$ for $k\geq2$. \begin{theorem} For $g=4k+3$ and $k\geq2$, the twist subgroup $\mathcal{T}_g$ is generated by the five involutions $\tau_1$, $\tau_2$, $\tau_1\tau_2\tau_1A_2$, $\tau_2A_1$ and $\mu$. \end{theorem} \begin{proof} Let $K$ be the subgroup of $\mathcal{T}_g$ generated by the set \[ \lbrace \tau_1,\tau_2, \tau_1\tau_2\tau_1A_2,\tau_2A_1,\mu \rbrace. \] Note that the rotation $T=\tau_1\tau_2$ is contained in $K$.
It follows from \begin{itemize} \item $A_1=\tau_2(\tau_2A_1)$ and \item $A_2=(\tau_1\tau_2\tau_1)(\tau_1\tau_2\tau_1A_2)$ \end{itemize} that the elements $A_1$ and $A_2$ are in $K$. By conjugating $A_1$ with powers of $T$, we see that $K$ contains the elements $B_i$ for $i=1,\ldots, 2k$ and $C_j$ for $j=1,\ldots, 2k-1$. \\ \noindent Let $T(b_{2k})=x$ and $\mu(x)=y$. Then the elements $X$ and $Y$ are contained in $K$ by the fact that $B_{2k}$ is in $K$.\\ \noindent It follows from \begin{itemize} \item $T^{-1}(y)=c_{2k}$ and \item $\mu(b_{2k})=b_{2k+1}$ \end{itemize} that $C_{2k}$ and $B_{2k+1}$ are contained in $K$. This completes the proof by Theorem~\ref{thm1}. \end{proof} \begin{figure} \caption{The involution $\sigma_1$ for $g=7$.} \label{SP} \end{figure} \begin{figure} \caption{The involution $\sigma_2$ for $g=7$.} \label{SDP} \end{figure} For the surface $N_7$, we introduce two involutions, $\sigma_1$ and $\sigma_2$, shown in Figures~\ref{SP} and~\ref{SDP}. In these figures, the surface is symmetric with respect to the $xy$-plane. Both $\sigma_1$ and $\sigma_2$ are reflections in the $xy$-plane and $D(\sigma_1)=D(\sigma_2)=1$. Hence, both $\sigma_1$ and $\sigma_2$ belong to $\mathcal{T}_7$. For the remaining generators in the following theorem we refer to the model on the right hand side of Figure~\ref{SO}. \begin{theorem} The twist subgroup $\mathcal{T}_7$ is generated by the six involutions $\tau_1$, $\tau_2$, $\tau_1\tau_2\tau_1A_2$, $\tau_2A_1$, $\sigma_1$ and $\sigma_2$. \end{theorem} \begin{proof} Let $K$ be the subgroup of $\mathcal{T}_7$ generated by the set \[ \lbrace \tau_1,\tau_2, \tau_1\tau_2\tau_1A_2,\tau_2A_1,\sigma_1,\sigma_2 \rbrace. \] Note that the rotation $T=\tau_1\tau_2$ is contained in $K$. It follows from \begin{itemize} \item $A_1=\tau_2(\tau_2A_1)$ and \item $A_2=(\tau_1\tau_2\tau_1)(\tau_1\tau_2\tau_1A_2)$ \end{itemize} that the elements $A_1$ and $A_2$ are in $K$.
By conjugating $A_1$ with powers of $T$, we see that $K$ contains the elements $B_1,C_1$ and $B_2$. \\ \noindent Let $T(b_{2})=x$ and $\sigma_2(x)=y$. Then the elements $X$ and $Y$ are contained in $K$. It follows from \begin{itemize} \item $T^{-1}(y)=c_{2}$ and \item $\sigma_2(b_{2})=b_{3}$ \end{itemize} that $C_{2}$ and $B_{3}$ are contained in $K$. Moreover, since $A_1\sigma_1(a_1)=e$, $E\in K$, which completes the proof by Theorem~\ref{thm1}. \end{proof} \end{document}
\begin{document} \title{A finite difference method for a two-point boundary value problem with a Caputo fractional derivative\thanks{This research was partly supported by the Instituto Universitario de Matem\'aticas y Aplicaciones (IUMA), the Ministerio de Educaci\'{o}n, Cultura y Deporte (Programa Nacional de Movilidad de Recursos Humanos del Plan Nacional de I+D+i 2008-2011), project MEC/FEDER MTM 2010-16917 and the Diputaci\'{o}n General de Arag\'{o}n.}} \author{ Martin Stynes\thanks{Corresponding author. Email: [email protected]}\\[2pt] Department of Mathematics, National University of Ireland, Cork, Ireland \\[6pt] and\\[6pt] Jos\'e Luis Gracia\thanks{Email: [email protected]}\\[2pt] IUMA and Department of Applied Mathematics, University of Zaragoza, Spain } \maketitle \begin{abstract} {A two-point boundary value problem whose highest-order term is a Caputo fractional derivative of order $\delta \in (1,2)$ is considered. Al-Refai's comparison principle is improved and modified to fit our problem. Sharp a priori bounds on derivatives of the solution $u$ of the boundary value problem are established, showing that $u''(x)$ may be unbounded at the interval endpoint $x=0$. These bounds and a discrete comparison principle are used to prove pointwise convergence of a finite difference method for the problem, where the convective term is discretized using simple upwinding to yield stability on coarse meshes for all values of $\delta$. Numerical results are presented to illustrate the performance of the method.} \par\noindent\textit{Keywords:} Fractional differential equation; Caputo fractional derivative; boundary value problem; derivative bounds; finite difference method; convergence proof.
\end{abstract} \section{Introduction}\label{sec:intro} Fractional derivatives are used in an ever-widening range of models of physical processes, and as a consequence the last decade has seen an explosive growth in the number of numerical analysis papers examining differential equations with fractional-order derivatives \citep[see the references in][]{MKM11}. While the analysis of some of these papers \citep[e.g.,][]{MM12,PT12} takes account of the possibly singular behaviour of solutions near some domain boundaries, most fractional-derivative numerical analysis papers work only with very special cases by assuming (explicitly or implicitly) that the solutions they approximate are smooth on the closure of the domain where the problem is posed. In particular, we know of no paper where a finite difference method for a fractional-derivative boundary value problem posed on a bounded domain is analysed rigorously under reasonably general and realistic hypotheses on the behaviour of the solution near the boundaries of that domain. In the present paper we provide the first such rigorous analysis. Even though we deal with the one-dimensional case---a two-point boundary value problem---the analysis is nevertheless lengthy and requires the development of various techniques that do not appear in the context of ``classical" problems (i.e., problems with integer-order derivatives). Let $n\in\mathbb{R}$ satisfy $m-1<n<m$ for some positive integer $m$. The Riemann-Liouville fractional derivative~$D^n$ is defined by $$ D^n g(x) = \left( \frac{d}{dx} \right)^m \left[\frac1{\Gamma(m-n)}\int_{t=0}^x (x-t)^{m-n-1}g(t)\,dt \right] \quad\text{for }\ 0< x \le 1 $$ for all functions $g$ such that $D^n g(x)$ exists. 
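For concreteness, the following worked instance of this definition is our addition (a standard computation, not part of the original text): take $g(x)=x$ and $1<n<2$, so $m=2$. The inner integral is the Beta integral $B(2,2-n)\,x^{3-n}=\Gamma(2)\Gamma(2-n)x^{3-n}/\Gamma(4-n)$, and hence

```latex
D^n x
 = \frac{d^2}{dx^2}\left[\frac{1}{\Gamma(2-n)}
     \int_{t=0}^x (x-t)^{1-n}\, t \, dt\right]
 = \frac{d^2}{dx^2}\left[\frac{x^{3-n}}{\Gamma(4-n)}\right]
 = \frac{(3-n)(2-n)}{\Gamma(4-n)}\, x^{1-n}
 = \frac{x^{1-n}}{\Gamma(2-n)},
```

which is unbounded as $x\to 0^+$ since $1-n<0$; this is the same flavour of endpoint behaviour that is established rigorously for solutions of the boundary value problem below.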
Our interest centres on the Caputo fractional derivative~$D_*^n$, which is defined \citep[Definition 3.2]{Diet10} in terms of~$D^n$ by \begin{equation}\label{CaputoDefn} D_*^n g = D^n[g - T_{m-1}[g;0]], \end{equation} where $T_{m-1}[g;0]$ denotes the Taylor polynomial of degree $m-1$ of the function~$g$ expanded around $x=0$. If $g \in C^{m-1}[0,1]$ and $g^{(m-1)}$ is absolutely continuous on $[0,1]$, then \citep[Theorem~3.1]{Diet10} one also has the equivalent formulation \begin{equation}\label{CaputoEquiv} D_*^n g(x) := \frac1{\Gamma(m-n)} \int_{t=0}^x (x-t)^{m-n-1} g^{(m)}(t)\, dt \quad\text{for }\ 0<x \le 1. \end{equation} Our work relies heavily on \citet{PT12}, who use the definition~\eqref{CaputoDefn} of $D_*^n$. Since the integrals in~$D^n g(x)$ and $D_\ast^n g(x)$ are associated in a special way with the point $x=0$, many authors write instead~$D_0^n g(x)$ and $D_{\ast\,0}^n g(x)$, but for simplicity of notation we omit the extra subscript~$0$. Let the parameter $\delta$ satisfy $1 < \delta <2$. Throughout the paper we consider the two-point boundary value problem \begin{subequations}\label{prob} \begin{align} -D_\ast^\delta &u(x) + b(x)u'(x) + c(x)u(x) = f(x) \ \text{ for }x\in(0,1), \label{proba} \\ &u(0)-\alpha_0u'(0)= \gamma_0,\quad u(1)+\alpha_1u'(1)= \gamma_1, \label{probb} \end{align} \end{subequations} where the constants $\alpha_0, \alpha_1, \gamma_0, \gamma_1$ and the functions $b,c$ and $f$ are given. We assume that $\alpha_1 \ge 0$ and \begin{equation}\label{AlRcondition} \alpha_0 \ge \frac1{\delta -1}\,. \end{equation} The condition \eqref{AlRcondition} comes from \citet{AlR12a}; it will be used in Sections~\ref{sec:maxprin} and \ref{sec:discrete} below to ensure that \eqref{prob} and its discretization each satisfy a suitable comparison principle. For the moment we assume that $b, c, f \in C[0,1]$; further hypotheses will be placed later on the regularity of these functions. 
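For readers who wish to experiment numerically, the formulation~\eqref{CaputoEquiv} can be evaluated by quadrature once the weak endpoint singularity is removed. The following minimal Python sketch is our own illustration (the function name \texttt{caputo} and all parameter choices are ours, not from the paper): the substitution $u=(x-t)^{2-\delta}$ makes the integrand smooth, after which the composite midpoint rule is accurate; the result is checked against the closed form $D_*^\delta x^2 = [\Gamma(3)/\Gamma(3-\delta)]\,x^{2-\delta}$ from \citep[Appendix B]{Diet10}.

```python
import math

def caputo(g2, x, delta, n=2000):
    """Approximate D_*^delta g(x) for 1 < delta < 2 via the integral
    formulation (CaputoEquiv): (1/Gamma(2-delta)) * int_0^x (x-t)^(1-delta) g''(t) dt.
    The substitution u = (x-t)^(2-delta) removes the weak singularity at
    t = x; the transformed integrand is simply g'' evaluated at x - u^(1/(2-delta))."""
    p = 2.0 - delta                        # exponent p in (0, 1)
    width = x ** p / n                     # subinterval width in the u variable
    total = 0.0
    for i in range(n):
        u = (i + 0.5) * width              # midpoint rule in u
        total += g2(x - u ** (1.0 / p))    # g'' at the original point t
    return total * width / (p * math.gamma(2.0 - delta))

delta, x = 1.4, 0.7
# closed form for g(x) = x^2, i.e. g''(t) = 2:
exact = math.gamma(3.0) / math.gamma(3.0 - delta) * x ** (2.0 - delta)
approx = caputo(lambda t: 2.0, x, delta)
```

Since $g''\equiv 2$ is constant here, the midpoint rule is exact up to rounding; for non-quadratic $g$ the same routine still converges quickly because the substitution has absorbed the singular factor.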
The problem \eqref{prob} is discussed by \citet{AlR12a} and is a particular case of the wide class of boundary value problems considered in~\citet{PT12}. It is a steady-state version of the time-dependent problems discussed in~\citet{SD11,SL0405,ZLZ10} and~\citet{JT12}---who describe some advantages of the Caputo fractional derivative over the Riemann-Liouville fractional derivative. Our paper is structured as follows. Section~\ref{sec:maxprin} obtains a comparison principle for the differential operator and boundary operators in~\eqref{prob}. In Section~\ref{sec:derivatives} existence and uniqueness of a solution to~\eqref{prob} is shown, and sharp pointwise bounds on the integer-order derivatives of this solution are derived. The finite difference discretization of \eqref{prob} on a uniform mesh of width $h$ is described and analysed in Section~\ref{sec:discrete}, and it is proved to be $O(h^{\delta-1})$ convergent at the mesh points. Two numerical examples are presented in Section~\ref{sec:numerical}. \emph{Notation.} We use the standard notation $C^k(I)$ to denote the space of real-valued functions whose derivatives up to order $k$ are continuous on an interval $I$, and write $C(I)$ for $C^0(I)$. For each $g\in C[0,1]$, set $\|g\|_\infty = \max_{x\in[0,1]} |g(x)|$. As in \citet{Diet10}, for each positive integer $m$ define $$ A^m[0,1] = \{ g \in C^{m-1}[0,1]: g^{(m-1)}\text{ is absolutely continuous on } [0,1] \}. $$ In several inequalities $C$ denotes a generic constant that depends on the data of the boundary value problem~\eqref{prob} but is independent of any mesh used to solve~\eqref{prob} numerically; note that $C$ can take different values in different places. \section{Comparison principle}\label{sec:maxprin} We begin with a basic result. \begin{lemma}\label{lem:AlRformula} \citep[Theorem 2.1]{AlR12b} Let $g\in C^2[0,1]$ achieve a global minimum at $x_0\in (0,1)$. 
Then \begin{equation}\label{AlRformula} D_\ast^\delta g(x_0) \ge \frac{x_0^{-\delta}}{\Gamma(2-\delta)} \left\{(\delta-1) \left[g(0)-g(x_0)\right] - x_0 g'(0)\right\}. \end{equation} \end{lemma} A careful inspection of the argument used to prove this lemma in \citet{AlR12b} shows that it remains valid under the weaker regularity hypothesis that $$ \text{(Regularity Hypothesis 1)}\qquad g\in C^2(0,1] \text{ and } |g''(x)| \le Cx^{-\theta} \text{ for } 0<x\le 1, $$ where~$C$ and~$\theta$ are some fixed constants with $0<\theta<1$. Observe that any function $g$ that satisfies Regularity Hypothesis 1 can be extended to a function (which we also call $g$) lying in $C^1[0,1] \cap A^2[0,1]$. We shall see in Section~\ref{sec:derivatives} that the solution of the boundary value problem~\eqref{prob} satisfies Regularity Hypothesis 1 but does not in general lie in~$C^2[0,1]$. Lemma~\ref{lem:AlRformula} is the key tool needed to prove the following comparison principle. \begin{lemma}\label{lem:AlRpositivity1} \citep[Lemma~3.3]{AlR12a} Let $g\in C^2[0,1]$. Let $b,c \in C[0,1]$ with $c(x)>0$ for all $x\in(0,1)$. Assume that $g$ satisfies the inequalities \begin{subequations}\label{AlR} \begin{align} &-D_\ast^\delta g+bg' + cg \ge 0\ \text{ on }(0,1), \label{AlR1} \\ g(0)&-\alpha_0 g'(0) \ge 0 \ \text{ and } \ g(1)+\alpha_1 g'(1) \ge 0, \label{AlR2} \end{align} \end{subequations} where $\alpha_0$ satisfies~\eqref{AlRcondition} and $\alpha_1\ge 0$. Then $g \ge 0$ on $[0,1]$. \end{lemma} Recalling our observation above that Lemma~\ref{lem:AlRformula} is still true when the hypothesis $g\in C^2[0,1]$ is replaced by Regularity Hypothesis 1, one sees quickly from the proof of \citet[Lemma~3.3]{AlR12a} that Lemma~\ref{lem:AlRpositivity1} remains valid when the assumption $g\in C^2[0,1]$ is replaced by Regularity Hypothesis 1. 
In fact, one can go further: Lemma~\ref{lem:AlRformula} shows immediately that when~$g'(0)<0$ one has~$D_\ast^\delta g(x_0) >0$ at the global minimum, and invoking this observation in the proof of \citet[Lemma~3.3]{AlR12a} and changing a few inequalities there from strict to weak or vice versa, the hypothesis $c>0$ can be weakened to $c \ge 0$. That is, one has the following more general version of Lemma~\ref{lem:AlRpositivity1}. \begin{theorem}\label{th:AlRpositivity} Let $g$ satisfy Regularity Hypothesis 1. Let $b,c \in C[0,1]$ with $c(x) \ge 0$ for all $x\in(0,1)$. Assume that $g$ satisfies the inequalities~\eqref{AlR}, where $\alpha_0$ satisfies~\eqref{AlRcondition} and $\alpha_1\ge 0$. Then $g \ge 0$ on $[0,1]$. \end{theorem} The next example shows that, for our Caputo differential operator, one does not have a comparison principle for the simplest case of Dirichlet boundary conditions, unlike the situation for classical second-order boundary value problems. Thus one cannot permit $\alpha_0=0$ in Theorem~\ref{th:AlRpositivity}. \begin{example}\label{exa:maxprincounterexample} Take $\delta =1.2$. From \citet[Appendix B]{Diet10} we have $$ D^{1.2}_\ast x^2 = \frac{\Gamma(3)}{\Gamma(1.8)}\, x^{0.8} \ \text{ and }\ D^{1.2}_\ast x^3 = \frac{\Gamma(4)}{\Gamma(2.8)}\, x^{1.8} =\frac{3\Gamma(3)}{(1.8)\Gamma(1.8)}\, x^{1.8} \le \frac{(1.67)\Gamma(3)}{\Gamma(1.8)}\, x^{0.8} $$ for $0 < x < 1$. Also $D^{1.2}_\ast x = 0$. Set $g(x)= x^3-1.67 x^2 + 0.67x$. Then \begin{equation}\label{Dirichlet} -D^{1.2}_\ast g(x) \ge 0 \ \text{ for }\ 0 < x <1, \quad g(0) = g(1)=0, \end{equation} but $g(0.8) = 0.512 - 1.0688 + 0.536 <0$, so the Dirichlet boundary conditions in~\eqref{Dirichlet} do not justify a comparison principle for $-D^{1.2}_\ast$ on $[0,1]$. In this example one has $g(0) =0$ and $g'(0)>0$, so the condition $g(0)-\alpha_0 g'(0) \ge 0$ of~\eqref{AlR2} is not satisfied. 
\end{example} \section{A priori bounds on derivatives of the solution}\label{sec:derivatives} The only source we know for bounds on (certain) derivatives of the solution of \eqref{prob} is~\citet{PT12}, who prove a very general existence result for two-point boundary value problems with differential operators involving fractional-order derivatives. Their analysis is based on \citet{BPV01}. \emph{Notation.} For integer $q \ge 1$ and $\nu \in (-\infty, 1)$, define $C^{q,\nu}(0,1]$ to be the set of continuous functions $y: [0,1]\to\mathbb{R}$ that are $q$ times continuously differentiable in~$(0,1]$ and satisfy the bounds \begin{align} |y^{(i)}(x)| \le \begin{cases} C &\text{if } i < 1-\nu, \\ C(1+|\ln x|) &\text{if } i=1-\nu, \\ Cx^{1-\nu-i} &\text{if } i > 1-\nu, \\ \end{cases} \end{align} for $0<x \le 1$ and $i=1,2,\dots,q$, where $C$ is some constant. Observe that as~$\nu$ increases, the smoothness of functions in~$C^{q,\nu}(0,1]$ decreases. Clearly $C^q[0,1] \subset C^{q,\nu}(0,1] \subset C^{m,\mu}(0,1] \subset C[0,1]$ for $q\ge m \ge 1$ and $\nu \le \mu < 1$. For our problem \eqref{prob}, the Pedas and Tamme result is as follows. \begin{theorem}\label{th:PTresult}\cite[Theorem 2.1]{PT12} Let $b,c, f \in C^{q,\mu}(0,1]$ for some integer $q \ge 1$ and $\mu \in (-\infty, 1)$. Set $\nu = \max\{\mu, 2-\delta\}$. Let $S$ denote the set of functions $w$ defined on $[0,1]$ for which $D_\ast^\delta w \in C^{q, \nu}(0,1]$. Assume that the problem~\eqref{prob} with $f\equiv 0,\ \gamma_0=0$ and $\gamma_1=0$ has in $S$ only the trivial solution $w\equiv 0$. Then \eqref{prob} has a unique solution $u \in S$; furthermore, $u\in C^1[0,1]$. \end{theorem} \begin{remark}\label{rem:onlylinear} In \citet[Theorem 2.1]{PT12} there is the additional assumption that the only linear polynomial $y(x)$ that satisfies the boundary conditions \eqref{probb} is $y\equiv 0$, but it is straightforward to check that this condition is implied by~$\alpha_1 \ge 0$ and \eqref{AlRcondition}. 
\end{remark} While Theorem~\ref{th:PTresult} bounds $u'$ and the integer-order derivatives of~$D_\ast^\delta u$, it gives no bound on the derivatives~$u^{(i)}$ for $i=2,3,\dots$, but these derivatives will be needed in the consistency analysis of our finite difference method. Thus we now deduce bounds on the integer-order derivatives of $u$ from Theorem~\ref{th:PTresult}. Our elementary argument can be regarded as interpolating between the integer-order derivatives of~$D_\ast^\delta u$; it relies only on the derivative bounds stated in Theorem~\ref{th:PTresult} and makes no use of the differential equation~\eqref{proba}. We shall prove this bound on the integer-order derivatives in a general setting that is suited to fractional-derivative boundary value problems of arbitrary order---such as those considered in \citet{PT12}---since the general proof is essentially the same as the proof for~$D^\delta_\ast$. At various places in our calculations we shall need the formula \begin{equation}\label{diffintegral} \frac{d}{dx}\left[\int_{s=0}^x (x-s)^{\theta_1} r(s)\,ds \right] = \int_{s=0}^x \theta_1(x-s)^{\theta_1-1} r(s)\,ds\quad \text{for }\ 0<x<1, \end{equation} when $|r(s)| \le Cs^{-\theta_2}$ for $0<s\le 1$ and the constants $\theta_1, \theta_2$ lie in $(0,1)$; one can justify~\eqref{diffintegral} from the results of~\citet{Ta01} or by writing $$ \int_{s=0}^x (x-s)^{\theta_1} r(s)\,ds = \int_{s=0}^x r(s) \int_{t=s}^x \theta_1(t-s)^{\theta_1-1}\,dt\,ds = \int_{t=0}^x \int_{s=0}^t \theta_1(t-s)^{\theta_1-1} r(s) \,ds\,dt $$ then applying the fundamental theorem of calculus. The first step is the following technical result. \begin{lemma}\label{lem:technical} Let $m$ be a positive integer and let $\sigma\in\mathbb{R}$ satisfy $m-1 < \sigma < m$. Suppose that $$ w(x) = \int_{s=0}^x (x-s)^{\sigma-m} \psi(s)\, ds \quad \text{for }\ 0<x<1, $$ where $\psi\in C^1(0,1]$ with $|\psi(s)| + s|\psi'(s)| \le C_1s^{\sigma-m}$ for $0<s<1$ and some constant $C_1$. 
Then \begin{align} w'(x) &= \frac1{x} \int_{s=0}^x (x-s)^{\sigma-m} [s\psi'(s) + (\sigma-m+1)\psi(s)]\, ds \intertext{and} |w'(x)| &\le C_1 \beta(\sigma-m+1,\sigma-m+1) x^{2(\sigma-m)}\quad \text{for }\ 0<x<1, \label{wprimex} \end{align} where $\beta(\cdot, \cdot)$ is Euler's Beta function. \end{lemma} \begin{proof} For $0<x<1$, \begin{align*} xw(x) &= \int_{s=0}^x [(x-s)^{\sigma-m+1}+(x-s)^{\sigma-m}s] \psi(s)\, ds \\ &= \int_{s=0}^x \left\{(x-s)^{\sigma-m+1}\psi(s) + \frac{(x-s)^{\sigma-m+1}}{\sigma-m+1} [s\psi'(s)+\psi(s)]\right\}\, ds, \end{align*} after an integration by parts. Applying \eqref{diffintegral} one gets \begin{align*} \big(xw(x)\big)' &= \int_{s=0}^x (x-s)^{\sigma-m} [s\psi'(s)+(\sigma-m+2)\psi(s)]\, ds, \intertext{and hence} w'(x) &= \frac1{x} \left[ \big(xw(x)\big)'-w(x) \right] = \frac1{x} \int_{s=0}^x (x-s)^{\sigma-m} [s\psi'(s) + (\sigma-m+1)\psi(s)]\, ds, \end{align*} as desired. Furthermore, the hypotheses of the lemma imply that $$ |w'(x)| \le \frac{C_1}{x}\int_{s=0}^x (x-s)^{\sigma-m} s^{\sigma-m}\, ds = C_1 \beta(\sigma-m+1,\sigma-m+1) x^{2(\sigma-m)}, $$ where the value of the integral is given by Euler's Beta function \citep[Theorem~D.6]{Diet10}. \end{proof} The essential property of the bound \eqref{wprimex} is that it takes the form $Cx^{2(\sigma-m)}$ with a constant $C$ that is independent of $x$. Now we can proceed with the main result of this section. \begin{theorem}\label{th:purediffbounds} Let $m$ be a positive integer with $m-1 < \sigma < m$. Assume that $r \in C^{m-1}[0,1]$ and $D_\ast^\sigma r \in C^{q, m-\sigma}(0,1]$ for some integer~$q \ge 1$. Then $r \in C^{q+m-1}(0,1]$ and for all $x\in (0,1]$ there exists a constant~$C$, which is independent of $x$, such that \begin{equation}\label{riinterpbound} |r^{(i)}(x)| \le \begin{cases} C &\text{if }\ i=0,1,\dots, m-1, \\ Cx^{\sigma-i} &\text{if }\ i=m, m+1,\dots, q+m-1. 
\end{cases} \end{equation} \end{theorem} \begin{proof} First, $r\in C^{m-1}[0,1]$ implies~\eqref{riinterpbound} for $i=0,1, \dots, m-1$. Next, we show that $r \in A^m[0,1]$. Set $w = r-T_{m-1}[r;0]$. Then $w\in C^{m-1}[0,1]$ and $0 = w(0) = w'(0) = \dots = w^{(m-1)}(0)$. By definition \begin{align*} D^\sigma w(x) &= \left( \frac{d}{dx} \right)^m \left[\frac1{\Gamma(m-\sigma)}\int_{t=0}^x (x-t)^{m-\sigma-1}w(t)\,dt \right] \\ &= \left( \frac{d}{dx} \right)^m \left[\frac1{\Gamma(2m-\sigma-1)}\int_{t=0}^x (x-t)^{2m-\sigma-2}w^{(m-1)}(t)\,dt \right] \\ &= \frac{d}{dx} \left[\frac1{\Gamma(m-\sigma)}\int_{t=0}^x (x-t)^{m-\sigma-1}w^{(m-1)}(t)\,dt \right], \end{align*} after $m-1$ integrations by parts followed by $m-1$ differentiations using \eqref{diffintegral}. Consequently $$ \int_{s=0}^x D^\sigma w(s)\, ds = \frac1{\Gamma(m-\sigma)}\int_{t=0}^x (x-t)^{m-\sigma-1}w^{(m-1)}(t)\,dt. $$ This is an Abel integral equation for the function $w^{(m-1)}$. Thus from~\citet[Section 2]{SKR93} it follows that $$ w^{(m-1)}(x) = \frac{d}{dx}\left\{\frac1{\Gamma(\sigma+1-m)}\int_{t=0}^x (x-t)^{\sigma-m} \left[ \int_{s=0}^t D^\sigma w(s)\, ds \right] \,dt \right\}. $$ But $D^\sigma w = D_*^\sigma r \in C[0,1]$ by hypothesis, so we can integrate by parts then use \eqref{diffintegral} to get \begin{align*} w^{(m-1)}(x) &= \frac{d}{dx}\left[\frac1{\Gamma(\sigma+2-m)}\int_{t=0}^x (x-t)^{\sigma+1-m} D^\sigma w(t) \,dt \right] \\ &= \frac1{\Gamma(\sigma+1-m)}\int_{t=0}^x (x-t)^{\sigma-m}D^\sigma w(t) \,dt. \end{align*} As the integrand here lies in the space $L_1[0,1]$ of Lebesgue integrable functions, it follows that $w^{(m-1)}$ is absolutely continuous on $[0,1]$. Hence $r^{(m-1)} = (w + T_{m-1}[r;0])^{(m-1)}$ is absolutely continuous on $[0,1]$, i.e., $r\in A^m[0,1]$. We come now to the main part of the proof. Set $\phi =D_\ast^\sigma r$. 
As $r\in A^m[0,1]$, by \citet[Corollary 3.9]{Diet10} one has $$ r(x) = \sum_{j=0}^{m-1}\frac{r^{(j)}(0)}{j!} x^j + \frac1{\Gamma(\sigma)} \int_{s=0}^x (x-s)^{\sigma-1}\phi(s)\,ds\,. $$ Integration by parts yields \begin{align} r(x) &= \sum_{j=0}^{m-1}\frac{r^{(j)}(0)}{j!} x^j + \frac{\phi(0)}{\sigma\Gamma(\sigma)}\, x^\sigma + \frac1{\sigma\Gamma(\sigma)} \int_{s=0}^x (x-s)^{\sigma}\phi'(s)\,ds \ \text{ for }\ 0 \le x \le 1. \notag \end{align} Differentiating this formula $m$ times using $\Gamma(n+1) = n\Gamma(n)$ and \eqref{diffintegral}, we obtain \begin{equation}\label{rm} r^{(m)}(x) = \frac{\phi(0)}{\Gamma(\sigma-m+1)}\, x^{\sigma-m} + \frac1{\Gamma(\sigma-m+1)} \int_{s=0}^x (x-s)^{\sigma-m}\phi'(s)\,ds. \end{equation} Hence, since $\phi = D_\ast^\sigma r \in C^{q, m-\sigma}(0,1]$, for some constants $C$ one obtains \begin{align} |r^{(m)}(x)| &\le \frac{|\phi(0)|}{\Gamma(\sigma-m+1)}\,x^{\sigma-m} + \frac{C}{\Gamma(\sigma-m+1)} \int_{s=0}^x (x-s)^{\sigma-m} s^{\sigma-m}\,ds \notag\\ &\le Cx^{\sigma-m} + Cx^{2(\sigma-m)+1} \notag\\ &\le Cx^{\sigma-m} \label{rxxbound} \end{align} as $x^{\sigma-m} > x^{\sigma-m}x^{\sigma+1-m} = x^{2(\sigma-m)+1}$, and \citet[Theorem D.6]{Diet10} was invoked to bound the integral (Euler's Beta function). This is the desired bound~\eqref{riinterpbound} for $i=m$. Furthermore, it is easy to see from~\eqref{rm} that $r^{(m)}\in C(0,1]$. We now deduce~\eqref{riinterpbound} for $i=m+1,m+2,\dots$ from~\eqref{rm}. Applying Lemma~\ref{lem:technical} with $\psi(s)= \phi'(s)$ to differentiate~\eqref{rm}, one gets $$ |r^{(m+1)}(x)| \le C [x^{\sigma-m-1} + x^{2(\sigma-m)}] \le Cx^{\sigma-m-1}, $$ which proves~\eqref{riinterpbound} for $i=m+1$, and \begin{equation}\label{rm1} r^{(m+1)}(x) = \frac{\phi(0)}{\Gamma(\sigma-m)}\, x^{\sigma-m-1} + \frac1{\Gamma(\sigma-m+1)} \cdot \frac1{x} \int_{s=0}^x (x-s)^{\sigma-m}[s\phi''(s)+(\sigma-m+1)\phi'(s)]\,ds, \end{equation} from which one can see that $r^{(m+1)}\in C(0,1]$. 
Comparing \eqref{rm} and \eqref{rm1}, the relationship between their leading terms is simple, while the integrals in both are $O(x^{2(\sigma-m)+1})$ but the integral in~\eqref{rm1} is multiplied by $1/x$. One now proceeds to differentiate~\eqref{rm1}, invoking Lemma~\ref{lem:technical} with $\psi(s) = s\phi''(s)+(\sigma-m+1)\phi'(s)$; this will yield a rather complicated formula for $r^{(m+2)}(x)$ that involves two integrals, but one sees readily that these integrals are $$ \frac1{x}\cdot O(x^{2(\sigma-m)}) + \frac1{x^2}\cdot O(x^{2(\sigma-m)+1}), $$ whence $$ |r^{(m+2)}(x)| \le C [x^{\sigma-m-2} + x^{2(\sigma-m)-1}] \le Cx^{\sigma-m-2}, $$ which proves~\eqref{riinterpbound} for $i=m+2$. Continuing in this way, each higher derivative of $r$ introduces a further factor $1/x$ in the estimates, and we can derive successively the bounds of~\eqref{riinterpbound} for $i=m+2, m+3, \dots$. The calculation must stop when one reaches an integral involving $\phi^{(q)}(s)$, i.e., when $i=q+m-1$. \end{proof} We now apply this result to our boundary value problem~\eqref{prob}. \begin{corollary}\label{cor:PTintegerbounds} Let $b,c, f \in C^{q,\mu}(0,1]$ for some integer $q \ge 2$ and $\mu \le 2-\delta$. Assume that $c \ge 0,\ \alpha_1 \ge 0$ and the condition~\eqref{AlRcondition} is satisfied. Then \eqref{prob} has a unique solution $u$ with $u \in C^1[0,1] \cap C^{q+1}(0,1]$, and for all $x\in (0,1]$ there exists a constant~$C$ such that \begin{equation}\label{ubound} |u^{(i)}(x)| \le \begin{cases} C &\text{if }\ i=0,1, \\ Cx^{\delta-i} &\text{if }\ i=2, 3,\dots, q+1. \end{cases} \end{equation} \end{corollary} \begin{proof} Observe that any function in~$C^{q,\mu}(0,1]$ satisfies Regularity Hypothesis~1 of Section~\ref{sec:maxprin} since $\mu \le 2-\delta <1$. Consequently Theorem~\ref{th:AlRpositivity} implies that if $f\equiv 0,\ \gamma_0=0$ and $\gamma_1=0$, then the problem~\eqref{prob} has in~$C^{q,\mu}(0,1]$ only the trivial solution $u\equiv 0$. 
Hence Theorem~\ref{th:PTresult} yields existence and uniqueness of a solution $u$ of~\eqref{prob} with $u \in C^1[0,1]$ and $D_\ast^\delta u \in C^{q, 2-\delta}(0,1]$. An appeal to Theorem~\ref{th:purediffbounds} completes the proof. \end{proof} \begin{remark}\label{rem:fastderivbounds} If we impose the additional hypothesis that $|b(x)| \ge C >0$ on $[0,1]$, it is then possible to give a simple proof of Corollary~\ref{cor:PTintegerbounds} directly from Theorem~\ref{th:PTresult} without using Theorem~\ref{th:purediffbounds}: differentiating~\eqref{proba} then solving for $u''$ yields \begin{equation}\label{u2x} u''(x) = \frac1{b(x)}\, \left[ f' + \left(D_\ast^ \delta u\right)' -c'u - (c+b')u'\right](x), \end{equation} and an appeal to the bounds of Theorem~\ref{th:PTresult} yields \eqref{ubound} immediately for the case $i=2$. One can then differentiate~\eqref{u2x} iteratively and use Theorem~\ref{th:PTresult} to prove~\eqref{ubound} for $i=3,4,\dots$. If instead $b(x) \equiv 0$ and $|c(x)| \ge C >0$ on $[0,1]$, a similar technique will work---note that $b \equiv 0$ enables the conclusion of Theorem~\ref{th:PTresult} to be strengthened to $D_\ast^\delta u \in C^{q, \nu}(0,1]$ where $\nu = \max\{\mu, 1-\delta\}$ by \citet[Remark 2.2]{PT12}. \end{remark} Finally, we give an example to show that the bounds of Theorem~\ref{th:purediffbounds} are sharp. \begin{example}\label{exa:interpolationsharp} Let $m$ be a positive integer with $m-1 < \sigma < m$. Set $r(x) = x^\sigma + x^{2\sigma-m+1}$ for $x \in[0,1]$. Clearly $r \in C^{m-1}[0,1] \cap C^\infty (0,1]$. Then from \cite[Appendix B]{Diet10} one gets $$ D_\ast^\sigma r(x) = \Gamma(\sigma+1) + \frac{\Gamma(2\sigma-m+2)}{\Gamma(\sigma-m+2)} \, x^{\sigma-m+1}. 
$$ Hence $D_\ast^\sigma r \in C[0,1] \cap C^\infty (0,1]$ and $|D_\ast^\sigma r(x)| \le \Gamma(\sigma+1) + [\Gamma(2\sigma-m+2)/\Gamma(\sigma-m+2)]$, while \begin{align*} (D_\ast^\sigma r)^{(i)}(x) &= \frac{\Gamma(2\sigma-m+2)}{\Gamma(\sigma-m+2-i)}\,x^{\sigma-m+1-i} \\ \intertext{and} r^{(i)}(x) &= \frac{\Gamma(\sigma+1)}{\Gamma(\sigma+1-i)}\,x^{\sigma-i} + \frac{\Gamma(2\sigma-m+2)}{\Gamma(2\sigma-m+2-i)}\,x^{2\sigma-m+1-i} \ \text{ for }\ i=1,2,\dots \end{align*} Thus the derivatives of $D_\ast^\sigma r$ satisfy the hypotheses of Theorem~\ref{th:purediffbounds} and the derivatives of $r$ agree with~\eqref{riinterpbound} for $i \ge 2$, i.e., the outcome of Theorem~\ref{th:purediffbounds} cannot be sharpened for $i \ge 2$. \end{example} \section{Discretization and convergence}\label{sec:discrete} \subsection{The discretization of the boundary value problem}\label{sec:propertiesA} Assume the hypotheses of Corollary~\ref{cor:PTintegerbounds}. Let $N$ be a positive integer. Subdivide $[0,1]$ by the uniform mesh $x_j = j/N =: jh$, for $j=0,1,\dots, N$. Then the standard discretization of $-D_\ast^\delta u(x_j)$ for $j=1,2,\dots,N-1$ is \citep[see, e.g.,][]{So12} given by \begin{equation}\label{DdeltaDisc} -D_\ast^\delta u(x_j) \approx -\,\frac1{h^{\delta}\,\Gamma(3-\delta)} \sum_{k=0}^{j-1} d_{j-k}\left(u_{k+2}-2u_{k+1}+u_k \right), \end{equation} where $u_k$ denotes the computed approximation to $u(x_k)$, and we set \begin{equation}\label{djk} d_r = r_+^{2-\delta} - (r-1)_+^{2-\delta}\ \text{ for all integers } r, \end{equation} with $$ s_+ = \begin{cases} s &\text{if } s\ge 0, \\ 0 &\text{if } s <0. \end{cases} $$ Note that $d_r =0$ for $r \le 0$. Set $g_j = g(x_j)$ for each mesh point $x_j$, where $g$ can be $b,c$ or $f$. 
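As an aside (our own sanity check, not part of the analysis), the weights~\eqref{djk} telescope, $\sum_{r=1}^{j} d_r = j^{2-\delta}$, so the approximation~\eqref{DdeltaDisc} reproduces $D_\ast^\delta u$ exactly at the mesh points when $u(x)=x^2$, because every second difference of $u$ then equals $2h^2$. The following Python sketch (function name ours) verifies this.

```python
import math

def discrete_caputo(u, j, h, delta):
    """The sum in (DdeltaDisc), without the leading minus sign: the discrete
    approximation of D_*^delta u at the mesh point x_j = j*h."""
    s = 0.0
    for k in range(j):
        d = (j - k) ** (2 - delta) - (j - k - 1) ** (2 - delta)   # weight d_{j-k}
        s += d * (u[k + 2] - 2.0 * u[k + 1] + u[k])
    return s / (h ** delta * math.gamma(3.0 - delta))

# For u(x) = x^2, the scheme should agree with the closed form
# D_*^delta x^2 = (Gamma(3)/Gamma(3-delta)) x^{2-delta} at every x_j.
delta, N = 1.5, 64
h = 1.0 / N
u = [(i * h) ** 2 for i in range(N + 1)]
j = N // 2
exact = math.gamma(3.0) / math.gamma(3.0 - delta) * (j * h) ** (2 - delta)
```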
To discretize the convective term~$bu'$ we shall use \emph{simple upwinding} \cite[p.47]{RST08}, because the standard approximation~$u'(x_j)\approx (u_{j+1}-u_{j-1})/(2h)$ may yield a non-monotone difference scheme when $\delta$ is near 1; see~\cite{future_paper_GS} for details. Thus we use the approximation \begin{equation}\label{bu'} (bu')(x_j) \approx b_jD^\aleph u_j := \begin{cases} b_j(u_{j+1}-u_j)/h &\text{if }\, b_j <0,\\ b_j(u_j-u_{j-1})/h &\text{if }\, b_j \ge 0. \end{cases} \end{equation} This difference approximation can also be written as \begin{equation}\label{bu'2} b_jD^\aleph u_j = - \frac{(b_j+|b_j|)u_{j-1}}{2h} + \frac{|b_j|u_j}{h} + \frac{(b_j-|b_j|)u_{j+1}}{2h}\,. \end{equation} The full discretization of~\eqref{proba} is \begin{equation}\label{fulldisc} -\,\frac1{h^{\delta}\,\Gamma(3-\delta)} \sum_{k=0}^{j-1} d_{j-k}\left(u_{k+2}-2u_{k+1}+u_k \right) +b_jD^\aleph u_j+c_ju_j= f_j, \end{equation} for $j=1,2,\dots,N-1$. The boundary conditions \eqref{probb} are discretized by approximating $u'(0)$ by $(u_1-u_0)/h$ and $u'(1)$ by $(u_N-u_{N-1})/h$. Let $A = (a_{jk})_{j,k=0}^N$ denote the $(N+1)\times(N+1)$ matrix corresponding to this discretization of \eqref{prob}, i.e., $A\vec u = \vec f$ where $\vec u := (u_0\ u_1\dots u_N)^T, \ \vec f := ( \gamma_0 \ f_1\ f_2\dots f_{N-1}\ \gamma_1 )^T$ and the superscript $T$ denotes transpose. Thus the $0^\text{th}$ row of $A$ is $( (1+\alpha_0 h^{-1})\ \ -\alpha_0 h^{-1}\ \ 0\ 0\ \dots 0)$ and its $N^\text{th}$ row is\\ $(0\ 0\ \dots 0 \ \ -\alpha_1 h^{-1}\ \ (1+\alpha_1 h^{-1}))$. 
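To make the discretization concrete, here is a self-contained Python sketch of the assembly of $A$ (entirely ours; the test data $b(x)=1-2x$, $c(x)=x$, $N=16$ and the routine names are illustrative, not from the paper). It also checks numerically two facts established later in this section: the interior row sums equal $c_j$ (see \eqref{rowsum}) and $A^{-1}\ge 0$ (Theorem~\ref{th:Amonotone}).

```python
import math

def assemble(N, delta, alpha0, alpha1, b, c):
    """Assemble the (N+1) x (N+1) matrix A of the scheme: the weights (djk),
    simple upwinding for the convective term, and difference approximations
    of the boundary conditions, as described above. Pure-Python sketch."""
    h = 1.0 / N
    scale = 1.0 / (h ** delta * math.gamma(3.0 - delta))
    d = [0.0] + [r ** (2 - delta) - (r - 1) ** (2 - delta) for r in range(1, N + 1)]
    A = [[0.0] * (N + 1) for _ in range(N + 1)]
    A[0][0], A[0][1] = 1.0 + alpha0 / h, -alpha0 / h          # row 0
    A[N][N - 1], A[N][N] = -alpha1 / h, 1.0 + alpha1 / h      # row N
    for j in range(1, N):
        x, bj = j * h, b(j * h)
        for k in range(j):          # -D_*^delta term: weighted second differences
            w = -scale * d[j - k]
            A[j][k] += w
            A[j][k + 1] -= 2.0 * w
            A[j][k + 2] += w
        if bj >= 0.0:               # simple upwinding for b u'
            A[j][j - 1] -= bj / h
            A[j][j] += bj / h
        else:
            A[j][j] -= bj / h
            A[j][j + 1] += bj / h
        A[j][j] += c(x)             # reaction term
    return A

def inverse(M):
    """Gauss-Jordan inverse with partial pivoting (pure Python, small N only)."""
    n = len(M)
    aug = [row[:] + [float(i == j) for j in range(n)] for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[piv] = aug[piv], aug[col]
        p = aug[col][col]
        aug[col] = [v / p for v in aug[col]]
        for r in range(n):
            if r != col and aug[r][col] != 0.0:
                f = aug[r][col]
                aug[r] = [v - f * w for v, w in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

delta, N = 1.4, 16
A = assemble(N, delta, alpha0=1.0 / (delta - 1.0), alpha1=1.0,
             b=lambda x: 1.0 - 2.0 * x, c=lambda x: x)
Ainv = inverse(A)   # all entries should be nonnegative, up to rounding
```

Note that the condition~\eqref{AlRcondition} is enforced by taking $\alpha_0=1/(\delta-1)$; the sign-changing coefficient $b$ exercises both branches of the upwinding.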
For $j=1,2,\dots,N-1$, the entries of the $j^\text{th}$ row of $A$ satisfy \begin{subequations}\label{Hessenberg} \begin{align} a_{j0} =& \frac{-d_j}{h^{\delta}\,\Gamma(3-\delta)} -\epsilon_{j1}\frac{b_1+|b_1|}{2h} \,, \label{Hessenberga}\\ a_{j1} =&\frac{-d_{j-1}+2d_{j}}{h^{\delta}\,\Gamma(3-\delta)} -\epsilon_{j2}\frac{b_2+|b_2|}{2h}+\epsilon_{j1} \left(\frac{|b_1|}{h} + c_1\right), \label{Hessenbergb}\\ a_{jk} =& \frac{ -d_{j-k}+2d_{j-k+1}-d_{j-k+2}}{h^{\delta}\,\Gamma(3-\delta)} -\epsilon_{j,k+1}\frac{b_j+|b_j|}{2h} +\epsilon_{jk} \left(\frac{|b_j|}{h}+c_j \right) +\epsilon_{j,k-1}\frac{b_j-|b_j|}{2h}\notag\\ &\hspace{85mm} \text{for } k=2,3,\dots,N, \label{Hessenbergc} \end{align} \end{subequations} where we set $$ \epsilon_{jk}= \begin{cases} 1 &\text{if } j=k, \\ 0 &\text{otherwise}. \end{cases} $$ Hence the matrix $A$ is lower Hessenberg. Observe that~\eqref{fulldisc} implies that \begin{equation}\label{rowsum} \sum_{k=0}^N a_{jk} = c_j \ \text{ for } j=1,2,\dots, N-1. \end{equation} We shall prove various inequalities for the non-zero entries of $A$. First, $a_{jj} >0$ for all~$j$ by \eqref{Hessenberg}, \eqref{djk} and $c \ge 0$. From~\eqref{Hessenberga}, one has \begin{equation}\label{j0} a_{j0} <0 \quad\text{for }j=1,2,\dots, N-1. \end{equation} By \eqref{Hessenbergb}, \begin{equation}\label{a21} a_{21} = \displaystyle\frac{2^{3-\delta} - 3}{h^{\delta}\,\Gamma(3-\delta)}- \frac{b_2+|b_2|}{2h} \,, \end{equation} so the sign of $a_{21}$ depends on $\delta, h$ and $b$. \begin{lemma}\label{missing1} One has $a_{j1} >0$ for $j=3,4,\dots, N-1$. \end{lemma} \begin{proof} For $j=3,4,\dots, N-1$, equations~\eqref{Hessenbergb} and~\eqref{djk} yield \begin{align} h^{\delta}\,\Gamma(3-\delta) a_{j1} &= -(j-1)^{2-\delta}+(j-2)^{2-\delta}+2j^{2-\delta}-2(j-1)^{2-\delta} \notag\\ &= 2 \left[(j-1)^{2-\delta}-(j-2)^{2-\delta}\right] \left[\frac{j^{2-\delta}-(j-1)^{2-\delta}}{(j-1)^{2-\delta}-(j-2)^{2-\delta}} \, - \frac12\right]. 
\notag \end{align} By Cauchy's mean value theorem, for some $\theta\in (j-1, j)$ we have \begin{align*} \frac{j^{2-\delta}-(j-1)^{2-\delta}}{(j-1)^{2-\delta}-(j-2)^{2-\delta}} &= \frac{(2-\delta)\theta^{1-\delta}}{(2-\delta)(\theta-1)^{1-\delta}} = \left( 1 - \frac1{\theta}\right)^{\delta-1} > \left( 1 - \frac12\right)^{\delta-1} > \frac12\,, \end{align*} where we used $\theta > j-1 \ge 2$. It follows that $a_{j1} >0$. \end{proof} \begin{lemma}\label{lem:missing2} One has $a_{jk} < 0$ for $j=1,2,\dots,N-1$ and $k\in \{2,3,\dots, j-2,j-1, j+1\}$. \end{lemma} \begin{proof} If $k=j+1$, the inequality $a_{j,j+1}<0$ follows easily from \eqref{Hessenbergc} and \eqref{djk}. Thus consider the case $k\in \{2,3,\dots, j-1\}$. Taylor expansions imply that for some $\eta\in (j-k, j-k+2)$ we have \begin{align*} -d_{j-k}+2d_{j-k+1}-d_{j-k+2} &= -\frac{d^2}{dr^2}\left[ r^{2-\delta} - (r-1)^{2-\delta} \right]\bigg|_{r=\eta} \\ &= -(2-\delta)(1-\delta)\left[ \eta^{-\delta} - (\eta-1)^{-\delta} \right] \\ &<0. \end{align*} Hence \eqref{Hessenbergc} implies that $a_{jk} < 0$ for $k\in \{2,3,\dots, j-1\}.$ \end{proof} This completes the description of the entries $a_{jk}$ defined in~\eqref{Hessenberg}. The sign pattern of $A$ is \setcounter{MaxMatrixCols}{12} $$ \begin{pmatrix} + & - & 0 & 0 & 0 & 0 & 0 & \hdotsfor{3} 0 & 0\\ - & + & - & 0 & 0 & 0 & 0 & \hdotsfor{3} 0 & 0\\ - &\eqref{a21}& + & - & 0 & 0 & 0 & \hdotsfor{3} 0 & 0\\ - & + & - & + & - & 0 & 0 & \hdotsfor{3} 0 & 0\\ - & + & - & - & + & - & 0 & \hdotsfor{3} 0 & 0\\ \vdots &\vdots&\vdots &\vdots &&&&&&& \vdots\\ - & + & - & - & \hdotsfor{3} & - & + & - & 0\\ - & + & - & - & \hdotsfor{4} & - & + & -\\ 0 & 0 & \hdotsfor{7} 0 & - & + \\ \end{pmatrix}, $$ where the entry~\eqref{a21} in row~2 indicates that the sign of $a_{21}$ is not determined a priori. \subsection{Monotonicity of the discretization matrix $A$}\label{sec:monotoneA} We shall show that $A^{-1}$ exists and $A^{-1} \ge 0$. 
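Before proceeding, the sign claims of Lemmas~\ref{missing1} and~\ref{lem:missing2} are easy to probe numerically. The sketch below (ours, a sanity check rather than a proof) evaluates the governing combinations of the weights~\eqref{djk} over a range of orders $\delta$ and indices.

```python
def d(r, delta):
    # weights (djk): d_r = r_+^{2-delta} - (r-1)_+^{2-delta}
    return max(r, 0) ** (2 - delta) - max(r - 1, 0) ** (2 - delta)

# Lemma (a_{j1} > 0 for j >= 3) reduces to the numerator -d_{j-1} + 2 d_j > 0.
def col1_numerator(j, delta):
    return -d(j - 1, delta) + 2.0 * d(j, delta)

# Lemma (a_{jk} < 0 for 2 <= k <= j-1): the combination below is negative.
def second_difference(j, k, delta):
    return -d(j - k, delta) + 2.0 * d(j - k + 1, delta) - d(j - k + 2, delta)
```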
Here and subsequently, an inequality between two matrices or vectors means that this inequality holds true for all the corresponding pairs of entries in those matrices or vectors. The positive off-diagonal entries in column~1 of $A$ are inconvenient for our analysis. We shall change their signs, while simultaneously simplifying column~0 of $A$, by the following device. Set $$ A' = E^{(N-1)}E^{(N-2)}\dots E^{(1)}A $$ where the elementary matrix $E^{(k)} := (e_{ij}^{(k)})_{i,j=0}^N$ with $$ e_{ij}^{(k)} := \delta_{ij} - \frac{a_{k0}}{a_{00}}\, \delta_{ik}\delta_{j0}; $$ here $\delta_{ij}$ denotes the Kronecker delta, not to be confused with the fractional order $\delta$. This multiplication of matrix $A$ on the left by elementary matrices adds a positive multiple of row 0 of $A$ to each lower row (positive because $a_{k0}<0$ by~\eqref{j0} while $a_{00}>0$) to reduce to zero the off-diagonal entries of column~0. Write $A' = (a_{jk}')_{j,k=0}^N$. Row 0 of $A'$ is $(a_{00}\ a_{01}\ 0\ 0\dots 0)$, where we recall that $a_{00}= 1+\alpha_0 h^{-1}$ and $a_{01} = -\alpha_0 h^{-1}$. By construction $a_{j0}'=0$ for $j = 1,2,\dots, N$. For $k >1$ and all $j$ we clearly have $a_{jk}' = a_{jk}$. The remaining entries of column~1 of $A'$ will be examined below. \begin{lemma}\label{lem:sign1stcol} The entries of column 1 of the matrix~$A'$ satisfy $$ a_{11}' > 0 \ \text{ and }\ a_{j1}' < 0 \ \text{ for } j=2,3,\dots, N-1. 
$$ \end{lemma} \begin{proof} For $j =1,2,\dots, N-1$, from~\eqref{Hessenberg} one has \begin{align} a_{j1}' &= a_{j1} + \frac{\alpha_0 h^{-1}}{1+\alpha_0 h^{-1}}\, a_{j0} \notag\\ &= \frac1{h^{\delta}\,\Gamma(3-\delta)}\left[-d_{j-1} + 2d_j - \frac{\alpha_0 h^{-1}}{1+\alpha_0 h^{-1}} d_j \right] -\epsilon_{j2}\frac{b_2+|b_2|}{2h} \notag\\ &\hspace{15mm} +\epsilon_{j1}\left(\frac{|b_1|}{h} -\frac{\alpha_0 h^{-1}}{1+\alpha_0 h^{-1}}\cdot\frac{b_1+|b_1|}{2h} +c_1\right) \notag\\ &= \frac1{h^{\delta}\,\Gamma(3-\delta)}\left[d_j \left( 1 + \frac{h}{h+\alpha_0}\right) - d_{j-1}\right] -\epsilon_{j2}\frac{b_2+|b_2|}{2h} \notag\\ &\hspace{15mm} +\epsilon_{j1}\left(\frac{|b_1|}{h} -\frac{\alpha_0 h^{-1}}{1+\alpha_0 h^{-1}}\cdot\frac{b_1+|b_1|}{2h} +c_1\right). \label{aj1prime} \end{align} Hence $a_{11}' >0$ as $d_1=1,\ d_0=0$ and $c \ge 0$. We show next that \begin{equation}\label{aj1prime_term1} d_j \left( 1 + \frac{h}{h+\alpha_0}\right) - d_{j-1}<0 \quad\text{for }j=2,3,\dots, N-1. \end{equation} This inequality is equivalent to \begin{equation}\label{aji0} \frac{d_{j-1}}{d_j} > 1 + \frac{h}{h+\alpha_0}\, = 1 + \frac1{N\alpha_0+ 1}\,. \end{equation} By Cauchy's mean value theorem, for some $\eta \in (j-2, j-1)$ we have \begin{equation} \frac{d_{j-1}}{d_j} = \frac{(j-1)^{2-\delta}-(j-2)^{2-\delta}}{j^{2-\delta}-(j-1)^{2-\delta}} = \frac{(2-\delta)\eta^{1-\delta}}{(2-\delta)(\eta+1)^{1-\delta}} = \left(1+ \frac1{\eta}\right)^{\delta-1} > \left(1 + \frac1{N-2}\right)^{\delta-1} \label{etaineq} \end{equation} because $j \le N-1$. Hence, using the well-known inequality $$ \frac{t}{1+t} < \ln (1+t) < t \quad\text{for }\ t>0, $$ one gets $$ \ln \left( \frac{d_{j-1}}{d_j} \right) > (\delta-1) \ln \left(1 + \frac1{N-2}\right) > (\delta-1) \,\frac{1/(N-2)}{1+ 1/(N-2)} = \frac{\delta-1}{N-1} $$ and $$ \ln \left( 1 + \frac1{N\alpha_0 +1} \right) < \frac1{N\alpha_0 +1}\,. 
$$ Consequently \eqref{aji0} is proved if $$ \frac{\delta-1}{N-1} \ge \frac1{N\alpha_0 +1}\,, $$ i.e., if $N-1 \le (\delta-1)(N\alpha_0 +1) = \alpha_0(\delta-1)N+ \delta-1$, which is true by~\eqref{AlRcondition}. Thus~\eqref{aj1prime_term1} is valid. Combining \eqref{aj1prime} and \eqref{aj1prime_term1}, one obtains immediately $a_{j1}' < 0$ for $j=2,3,\dots, N-1$. \end{proof} Set $\vec 0 := (0\ 0\ \dots 0)^T$. If all the off-diagonal entries of a square matrix $G$ are non-positive and there exists a vector $\vec v \ge \vec 0$ such that $G\vec v > \vec 0$, then \citep[see, e.g.,][]{Fi86} $G$ is an \emph{M-matrix,} i.e.,~$G^{-1}$ exists with $G^{-1} \ge 0$. \begin{theorem}\label{th:Amonotone} The matrix $A$ is invertible and $A^{-1} \ge 0$. \end{theorem} \begin{proof} The properties of $A$, the construction of $A'$, and Lemma~\ref{lem:sign1stcol} imply that the entries of the $(N+1)\times (N+1)$ matrix $A'$ are positive on its main diagonal and non-positive off this diagonal. We see immediately that $\sum_{k=0}^N a_{0k}' = a_{00} + a_{01} =1$; for $j=1,2,\dots, N-1$ one has $$ \sum_{k=0}^N a_{jk}' = 0+ \left(a_{j1} + \frac{\alpha_0 h^{-1}}{1+\alpha_0 h^{-1}}\, a_{j0}\right) + \sum_{k=2}^Na_{jk} = \left( \frac{\alpha_0 h^{-1}}{1+\alpha_0 h^{-1}} -1 \right)a_{j0} + \sum_{k=0}^Na_{jk}, $$ but $\sum_{k=0}^Na_{jk}=c_j$ by~\eqref{rowsum} and $a_{j0}<0$ by~\eqref{j0}, so $$ \sum_{k=0}^N a_{jk}' = \left( \frac{\alpha_0 h^{-1}}{1+\alpha_0 h^{-1}} -1 \right)a_{j0}+c_j >0; $$ and the final row of $A'$ is $(0\ 0\ \dots 0 \ \ -\alpha_1 h^{-1}\ \ (1+\alpha_1 h^{-1}))$. Hence $A'(1,1,\dots,1)^T > \vec 0$. Consequently~$A'$ is an M-matrix. Thus~$(A')^{-1}$ exists with $(A')^{-1} \ge 0$. But $$ A' = E^{(N-1)}E^{(N-2)}\dots E^{(1)} A, $$ which implies that $$ A^{-1} = (A')^{-1} E^{(N-1)}E^{(N-2)}\dots E^{(1)} $$ exists, and, since it is a product of matrices with non-negative entries, $A^{-1} \ge 0$. 
\end{proof} The matrix $A$ is said to be \emph{monotone} because $A^{-1} \ge 0$; see, e.g.,~\citet{Fi86}. \subsection{Error analysis}\label{sec:error} Define the truncation error $\vec \tau := (\tau_0\ \tau_1\dots\tau_N)^T$ by \begin{equation}\label{tau} A(u-\vec u) = \vec\tau, \end{equation} where by ``$Au$'' we mean that $A$~multiplies the restriction of $u$ to the mesh. Then \begin{align} \tau_0 &= (Au)_0 - \gamma_0 =\alpha_0 u'(x_0) - \frac{\alpha_0}{h} \left[ u(x_1)-u(x_0) \right], \label{tau0} \\ \tau_N &= (Au)_N- \gamma_1 =-\alpha_1 u'(x_N)+ \frac{\alpha_1}{h}\left[ u(x_N)-u(x_{N-1}) \right], \label{tauN} \end{align} and for $j=1,2,\dots, N-1$, \begin{equation}\label{tauj} \tau_j = (Au)_j - f(x_j) = (Au)_j + D_\ast^\delta u(x_j)-b(x_j)u'(x_j)-c(x_j)u(x_j). \end{equation} \begin{lemma}\label{lem:technical1} There exists a constant $C$, which is independent of $j$, such that \begin{equation}\label{sum1} \sum_{k=1}^{j-1} \big[ (j-k)^{2-\delta} - (j-k-1)^{2-\delta} \big] k^{\delta-3} \le Cj^{1-\delta} \ \text{ for all integers } j \ge 2. \end{equation} \end{lemma} \begin{proof} For $x\in \mathbb{R}$, let $\lceil x\rceil$ denote the smallest integer that is greater than or equal to $x$. By the mean value theorem applied to the function $x \mapsto x^{2-\delta}$ one gets \begin{align*} \sum_{k=1}^{\lceil j/2\rceil-1} \big[ (j-k)^{2-\delta} - (j-k-1)^{2-\delta} \big] k^{\delta-3} &\le \sum_{k=1}^{\lceil j/2\rceil-1} (2-\delta)(j-k-1)^{1-\delta} k^{\delta-3} \\ &\le (2-\delta)(j-\lceil j/2\rceil)^{1-\delta} \sum_{k=1}^{\lceil j/2\rceil-1} k^{\delta-3} \\ &\le C (2-\delta)(j/2)^{1-\delta} \sum_{k=1}^\infty k^{\delta-3} \\ &\le Cj^{1-\delta}, \end{align*} as $\delta <2$ implies that the infinite series is convergent.
For the remaining terms in~\eqref{sum1}, by telescoping and $1<\delta<2$ we have \begin{align*} \sum_{k=\lceil j/2\rceil}^{j-1} \big[ (j-k)^{2-\delta} - (j-k-1)^{2-\delta} \big] k^{\delta-3} &\le \lceil j/2\rceil^{\delta-3} \sum_{k=\lceil j/2\rceil}^{j-1} \big[ (j-k)^{2-\delta} - (j-k-1)^{2-\delta} \big] \\ &= \lceil j/2\rceil^{\delta-3} \big(j-\lceil j/2\rceil\big)^{2-\delta} \\ &\le (j/2)^{\delta-3}(j/2)^{2-\delta} \\ &=2j^{-1} \\ &\le 2j^{1-\delta}. \end{align*} Adding these two bounds yields \eqref{sum1}. \end{proof} We can now bound the truncation errors of the finite difference scheme. \begin{lemma}\label{lem:TruncError} There exists a constant $C$ such that the truncation errors in the discretization $A\vec u = \vec f$ of~\eqref{prob} satisfy \begin{equation}\label{truncation} |\tau_j| \le \left\{ \begin{array}{ll} C h^{\delta-1} & \hbox{if } j=0, \\ C & \hbox{if } j=1, \\ C (j-1)^{1-\delta} & \hbox{if } j=2,\ldots,N-1,\\ C \alpha_1 h & \hbox{if } j=N. \end{array} \right. \end{equation} \end{lemma} \begin{proof} We shall consider separately the four cases in \eqref{truncation}. \emph{Case $j=0$:} By the mean value theorem, for some $\eta_1\in (0, h)$ we have \begin{align*} |\tau_0| &= \alpha_0\left| u'(0) - \frac{u(h)-u(0)}{h} \right| = \alpha_0 | u'(0) - u'(\eta_1)| = \alpha_0\left|\int_{t=0}^{\eta_1}u''(t)\,dt\right| \\ &\le \alpha_0\int_{t=0}^h Ct^{\delta-2}\,dt \le C\alpha_0 h^{\delta-1}, \end{align*} where we used Corollary~\ref{cor:PTintegerbounds}. \emph{Case $j=N$:} Similarly to the case $j=0$, one gets $$ |\tau_N| \le \alpha_1\int_{t=1-h}^1 Ct^{\delta-2}\,dt \le C\alpha_1 h. $$ \emph{Case $1<j<N$:} Fix $j\in \{2,3,\dots, N-1\}$. 
By virtue of~\eqref{tauj} and~\eqref{DdeltaDisc} we can write $\tau_j = R_j + \sum_{k=0}^{j-1}\tau_{j,k}$, where $$ R_j= b_jD^\aleph u_j - (bu')(x_j) $$ and \begin{equation}\label{taujk2} \tau_{j,k} = \frac1{\Gamma(2-\delta)}\int_{x_k}^{x_{k+1}} (x_j-s)^{1-\delta} u''(s)\,ds - \frac{d_{j-k}}{h^{\delta}\,\Gamma(3-\delta)} \left[ u(x_{k+2}) - 2u(x_{k+1}) + u(x_{k})\right]. \end{equation} By Taylor expansions, for some constant $C$ one has \begin{align*} \vert R_j\vert & \le Ch \Vert b \Vert_\infty \max_{x\in[x_{j-1}, x_{j+1}]} |u''(x)|. \end{align*} Hence Corollary~\ref{cor:PTintegerbounds}, $hj \le 1$ and $\delta>1$ imply that \begin{equation}\label{Rj} \vert R_j\vert \le C h [h (j-1)]^{\delta-2} = C[h(j-1)]^{\delta-1}(j-1)^{-1} \le C(j-1)^{1-\delta} . \end{equation} Returning to the other component of $\tau_j$, we have \begin{align*} \tau_{j,k} &= \frac{u''(\eta_2)}{\Gamma(2-\delta)}\int_{x_k}^{x_{k+1}} (x_j-s)^{1-\delta} \,ds - \frac{d_{j-k}}{h^{\delta}\,\Gamma(3-\delta)} \left[ u(x_{k+2}) - 2u(x_{k+1}) + u(x_{k})\right] \\ &= \frac{h^{2-\delta}d_{j-k}}{\Gamma(3-\delta)}\left[ u''(\eta_2) - \frac{u(x_{k+2}) - 2u(x_{k+1}) + u(x_k)}{h^2} \right] \end{align*} for some $\eta_2\in (x_k, x_{k+1})$, by the mean value theorem for integrals. Taylor expansions show that for some $\eta_3\in (x_k, x_{k+2})$ and $k \ge 1$ one obtains $$ \left| u''(\eta_2) - \frac{u(x_{k+2}) - 2u(x_{k+1}) + u(x_k)}{h^2} \right| = \left| u''(\eta_2) - u''(\eta_3)\right| = \left| (\eta_2 -\eta_3)u'''(\eta_4)\right| \le Ch x_k^{\delta-3}, $$ where we again used the mean value theorem to get $\eta_4\in (x_k, x_{k+2})$, then the bound on~$u'''$ given by Corollary~\ref{cor:PTintegerbounds}. Consequently \begin{equation}\label{taujk} \vert \tau_{j,k} \vert \le \frac{Ch^{3-\delta}d_{j-k} x_k^{\delta-3} }{\Gamma(3-\delta)} = \frac{Cd_{j-k} k^{\delta-3} }{\Gamma(3-\delta)}\quad \text{for } k \ge 1. 
\end{equation} Now an appeal to Lemma~\ref{lem:technical1} gives \begin{equation}\label{sumtaujk} \sum_{k=1}^{j-1} \vert \tau_{j,k} \vert \le C j^{1-\delta}. \end{equation} To complete the case $1<j<N$, it remains to bound $\tau_{j,0}$. By~\eqref{taujk2} and a triangle inequality one has \begin{equation}\label{tauj0} \vert \tau_{j,0} \vert \le \left|\frac1{\Gamma(2-\delta)}\int_{x_0}^{x_{1}} (x_j-s)^{1-\delta} u''(s)\,ds\right| + \left| \frac{d_{j}}{h^{\delta}\,\Gamma(3-\delta)} \left[ u(x_{2}) - 2u(x_{1}) + u(x_{0})\right] \right|. \end{equation} We bound these two terms separately. First, by Corollary~\ref{cor:PTintegerbounds} and $j>1$ we get \begin{align*} \left|\frac1{\Gamma(2-\delta)}\int_{x_0}^{x_{1}} (x_j-s)^{1-\delta} u''(s)\,ds\right| &\le \frac{C[(j-1)h]^{1-\delta}}{\Gamma(2-\delta)}\int_{x_0}^{x_{1}}s^{\delta-2}\,ds = \frac{C(j-1)^{1-\delta}}{(\delta-1)\Gamma(2-\delta)}\,. \end{align*} For the second term in~\eqref{tauj0}, the mean value theorem and Corollary~\ref{cor:PTintegerbounds} give \begin{align*} \left| \frac{d_{j}}{h^{\delta}\,\Gamma(3-\delta)} \left[ u(x_{2}) - 2u(x_{1}) + u(x_{0})\right] \right| &= \frac{d_{j}h\vert u'(\eta_6)-u'(\eta_5) \vert}{h^{\delta}\,\Gamma(3-\delta)} \\ &\le \frac{d_{j}h}{h^{\delta}\,\Gamma(3-\delta)}\int^{\eta_6}_{t=\eta_5} |u''(t)|\, dt \\ &\le \frac{Cd_{j}h}{h^{\delta}\,\Gamma(3-\delta)}\int^{2h}_{t=0} t^{\delta-2}\, dt \\ &= C d_{j} \\ &\le C (j-1)^{1-\delta}, \end{align*} where $\eta_5\in (x_0,x_1)$ and $\eta_6\in (x_1,x_2)$. Combining these inequalities with~\eqref{tauj0} yields $$ \vert \tau_{j,0} \vert \le C (j-1)^{1-\delta}. $$ Add this bound to~\eqref{Rj} and~\eqref{sumtaujk} to obtain finally \begin{equation}\label{tauj3} \vert \tau_j\vert \le \sum_{k=0}^{j-1} |\tau_{j,k}| + |R_j| \le C (j-1)^{1-\delta} \quad \text{for } 1<j<N. \end{equation} \emph{Case $j=1$:} This resembles the analysis above of $\tau_{j,0}$; one starts from~\eqref{tauj0} with~$j=1$ there. 
The only change is that now one invokes a standard bound on Euler's Beta function \citep[Theorem D.6]{Diet10} to see that \begin{align*} \left|\frac1{\Gamma(2-\delta)}\int_{x_0}^{x_{1}} (x_1-s)^{1-\delta} u''(s)\,ds \right| &\le \frac{C}{\Gamma(2-\delta)}\int_{x_0}^{x_{1}} (x_1-s)^{1-\delta} s^{\delta-2} \,ds \\ &=\frac{C\,\Gamma(2-\delta)\Gamma(\delta-1)}{\Gamma(2-\delta)}\le C, \end{align*} while as before $$ \left| \frac{d_1}{h^{\delta}\,\Gamma(3-\delta)} \left[ u(x_{2}) - 2u(x_{1}) + u(x_{0})\right] \right| \le C d_1 = C. $$ By the mean value theorem, for some $\eta_{7}\in (x_0,x_2)$ one has \begin{equation}\label{R1} |R_1| = \left|b_1D^\aleph u_1 - (bu')(x_1)\right| \le \Vert b \Vert_\infty \, \vert u'(\eta_{7})-u'(x_1)\vert \le C \end{equation} for some constant $C$, because $u \in C^1[0,1]$. Combining the above bounds, we obtain \begin{equation}\label{tau1} \vert \tau_1 \vert \le \vert \tau_{1,0}\vert +\vert R_1\vert\le C. \end{equation} \end{proof} \begin{remark}\label{rem:ShenLiu} In \citet[Lemma~3]{SL0405} Taylor expansions are used to prove that $|\tau_j| \le Ch$ for $j=1,2,\dots, N-1$ under the implicit assumptions that $u'''$ and $u^{(4)}$ are bounded on~$[0,1]$. An inspection of the argument shows that it can be modified slightly to yield the same result under the assumption that $u'''$ is bounded on $[0,1]$; no assumption on $u^{(4)}$ is needed. Nevertheless the assumption that $u'''$ is bounded on $[0,1]$ is very strong and restricts the applicability of this result to special cases of~\eqref{prob} whose solutions are exceptionally smooth. \end{remark} We can now prove that our finite difference method is $O\left(h^{\delta-1}\right)$ accurate in the discrete maximum norm. \begin{theorem}\label{thm:Error} Let $b,c, f \in C^{q,\mu}(0,1]$ for some integer $q \ge 2$ and $\mu \le 2-\delta$. Assume that $c \ge 0,\ \alpha_1 \ge 0$ and the condition~\eqref{AlRcondition} is satisfied. 
Then the error in the discretization $A\vec u = \vec f$ of~\eqref{prob} satisfies $$ \Vert u -\vec u \Vert_{\infty,d} \le C h^{\delta-1}, $$ where $\|u -\vec u\|_{\infty,d} := \max_{j=0,1,\dots,N}|u(x_j)-u_j|$. \end{theorem} \begin{proof} Recalling \eqref{djk} and \eqref{Hessenberga}, the construction of Section~\ref{sec:monotoneA} added the multiple $$ \frac{-a_{j0}}{1+\alpha_0 h^{-1}} = \frac1{1+\alpha_0 h^{-1}} \left[ \frac{d_j}{h^{\delta}\,\Gamma(3-\delta)} +\epsilon_{j1}\frac{b_1+|b_1|}{2h} \right] $$ of row $0$ of $A$ to row $j$ for $j=1,2,\dots, N-1$, yielding an M-matrix $A'$. When this construction is applied to the system of equations~\eqref{tau}, one modifies $\tau_j$ to \begin{equation}\label{tau2} \tau_j' := \tau_j + \frac{\tau_0}{1+\alpha_0 h^{-1}} \left[ \frac{d_j}{h^{\delta}\,\Gamma(3-\delta)} +\epsilon_{j1}\frac{b_1+|b_1|}{2h} \right] \quad\text{for } j=1,2,\dots, N-1, \end{equation} and then \begin{equation}\label{tau3} A'(u-\vec u) = \vec\tau', \ \text{\ with } \vec\tau' := (\tau_0\ \tau_1'\ \tau_2' \dots \tau_{N-1}'\ \tau_N)^T. \end{equation} For $j=1,2,\dots, N-1$, the proof of Theorem~\ref{th:Amonotone} shows that the $j^\text{th}$ row sum of~$A'$ is \begin{equation}\label{jrowsum} \left( \frac{\alpha_0 h^{-1}}{1+\alpha_0 h^{-1}} -1 \right)a_{j0}+c_j = \frac{-a_{j0}}{1+\alpha_0 h^{-1}}+c_j = \frac1{1+\alpha_0 h^{-1}} \left[ \frac{d_j}{h^{\delta}\,\Gamma(3-\delta)} +\epsilon_{j1}\frac{b_1+|b_1|}{2h} \right]+c_j\,. \end{equation} Thus the value of the $j^\text{th}$ row sum depends strongly on $j$. We rescale rows $1,2,\dots, N-1$ in the system of equations~\eqref{tau3} by multiplying the $j^\text{th}$ equation by $$ \frac{(1+\alpha_0 h^{-1})h^{\delta}\,\Gamma(3-\delta)}{d_j}\,, $$ so that by~\eqref{jrowsum} each of these row sums is now $O(1)$ and equals \begin{equation}\label{jrowsumtilde} 1+ \frac{h^{\delta}\,\Gamma(3-\delta)}{d_j}\left[ \epsilon_{j1}\frac{b_1+|b_1|}{2h} + (1+\alpha_0 h^{-1})c_j \right]\ge 1. 
\end{equation} Write $\tilde A$ for the rescaled matrix of the system of equations and $\vec{\tilde \tau}$ for the rescaled right-hand side, so now we have \begin{equation}\label{tau4} \tilde A(u-\vec u) = \vec{\tilde\tau}, \ \text{\ with } \vec{\tilde\tau} := (\tau_0\ {\tilde\tau}_1\ \tilde{\tau}_2 \dots \tilde{\tau}_{N-1} \ \tau_N)^T \end{equation} and for $j=1,2,\dots, N-1$, $$ \tilde{\tau}_j = \frac{(1+\alpha_0 h^{-1})h^{\delta}\,\Gamma(3-\delta)}{d_j}\, \tau_j' = \frac{(1+\alpha_0 h^{-1})h^{\delta}\,\Gamma(3-\delta)}{d_j}\,\tau_j + \left[1+ \epsilon_{j1}\frac{b_1+|b_1|}{2h}\cdot \frac{h^{\delta}\,\Gamma(3-\delta)}{d_j} \right]\tau_0 $$ by~\eqref{tau2}. Hence, Lemma~\ref{lem:TruncError} and $d_j \ge (2-\delta)j^{1-\delta}\ge (2-\delta)2^{1-\delta}(j-1)^{1-\delta}$ for $j\ge 2$ imply that \begin{equation}\label{tildetau} \vert \tilde{\tau}_j \vert \le C h^{\delta-1} \quad{\text{for }} j=1,2,\dots, N-1. \end{equation} But the off-diagonal entries of $\tilde A$ are non-positive and $\tilde A(1\ 1\dots 1)^T \ge (1\ 1\dots 1)^T > \vec 0$ by~\eqref{jrowsumtilde}, so $\tilde A$ is an M-matrix; furthermore, it follows that in the standard matrix norm notation $\|\cdot\|_\infty$ one has $\|(\tilde A)^{-1}\|_\infty \le 1$ ---see, e.g., \citet[Lemma 2.1]{AK90}. Consequently~\eqref{tau4} and the bound~\eqref{tildetau}, together with $|\tau_0| \le Ch^{\delta-1}$ and $|\tau_N| \le C \alpha_1 h$ from Lemma~\ref{lem:TruncError}, imply that $\Vert u -\vec u \Vert_{\infty,d} \le C h^{\delta-1}$, as desired. 
\end{proof} \begin{remark}\label{rem:comments_conditionN} The order of convergence proved in Theorem \ref{thm:Error} is the same order (in $\|\cdot\|_{\infty,d}$) as that proved in the norm $\|\cdot\|_\infty$ for the simplest collocation method ($m=1$) in \citet[Theorem 4.1]{PT12} on a uniform mesh, but Theorem~\ref{thm:Error} places no condition on the mesh diameter, while the results of \cite{PT12} implicitly require $h^{\delta-1}$ to be smaller than some fixed constant---this may be restrictive when~$\delta$ is near~1. This mesh condition arises because the proofs of the main convergence results of \cite{PT12} rely on the property that, for sufficiently large $N$, the operator $I-{\cal P}_N T$ is invertible and $\|(I-{\cal P}_N T)^{-1}\|$ is bounded in a certain operator norm; to verify this property, the authors appeal to a standard argument from \cite[Lemma 3.2]{BPV01}, but on a uniform mesh this relies on inequality~(3.12) of \cite{BPV01} with $r=1,\ \nu =2-\delta$ and $m \ge 1$, and consequently ``for sufficiently large $N$'' is equivalent to ``for $h^{\delta-1}$ sufficiently small''. Note however that the mesh restriction is less demanding for the graded meshes that are also considered in \cite{PT12}. \end{remark} \section{Numerical results}\label{sec:numerical} We first consider a problem whose solution $u$ lies in $C^1[0,1]\cap C^\infty(0,1]$, but $u\notin C^2[0,1]$ and the behaviour of $u$ mimics exactly the behaviour of the estimates of the solution in Corollary~\ref{cor:PTintegerbounds}; cf.~Example~\ref{exa:interpolationsharp}.
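As an aside for readers implementing the scheme, the discrete Caputo operator appearing in the truncation-error analysis, $D_\ast^\delta u(x_j) \approx \frac1{h^\delta\,\Gamma(3-\delta)}\sum_{k=0}^{j-1} d_{j-k}\left[u(x_{k+2})-2u(x_{k+1})+u(x_k)\right]$ with $d_m = m^{2-\delta}-(m-1)^{2-\delta}$ (cf.~\eqref{taujk2}), can be sketched in a few lines of Python; this is our own illustration (all names are ours), checked on $u(x)=x^2$, for which $D_\ast^\delta u(x) = 2x^{2-\delta}/\Gamma(3-\delta)$ exactly:

```python
import math

delta = 1.4           # fractional order, 1 < delta < 2
N = 256               # number of mesh intervals
h = 1.0 / N

def d(m):
    # the weights d_m = m^{2-delta} - (m-1)^{2-delta} used throughout the paper
    return m ** (2 - delta) - (m - 1) ** (2 - delta)

def caputo_disc(u, j):
    # sum_{k=0}^{j-1} d_{j-k} [u_{k+2} - 2 u_{k+1} + u_k] / (h^delta Gamma(3-delta))
    s = sum(d(j - k) * (u[k + 2] - 2 * u[k + 1] + u[k]) for k in range(j))
    return s / (h ** delta * math.gamma(3 - delta))

# check on u(x) = x^2, where D_*^delta u(x) = 2 x^(2-delta) / Gamma(3-delta)
u = [(j * h) ** 2 for j in range(N + 1)]
max_err = max(abs(caputo_disc(u, j) - 2 * (j * h) ** (2 - delta) / math.gamma(3 - delta))
              for j in range(1, N))
print(max_err)  # tiny: the sum telescopes exactly when u'' is constant
```

Because the second differences of $x^2$ are exactly $2h^2$ and the weights telescope, the discrete operator reproduces the Caputo derivative of $x^2$ up to rounding error.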
\emph{Test Problem 1.} \begin{subequations}\label{test} \begin{align} -D_\ast^{\delta} u(x)+x^2u'(x)+(1+x)u(x) &= f(x) \ \text{ on }(0,1), \\ u(0)-\displaystyle\frac{1}{\delta-1} u'(0) &= \gamma_0 \ \text{ and } \ u(1)+u'(1) = \gamma_1, \end{align} \end{subequations} where the function $f$ and the constants $\gamma_0$ and $\gamma_1$ are chosen such that the exact solution of~\eqref{test} is $u(x)=x^{\delta}+x^{2\delta-1}+1+3x-7x^2+4x^3+x^4$. \begin{figure} \caption{Exact (solid) and computed (dashed) solutions of (\ref{test}) for $\delta=1.1$ (left) and $\delta=1.4$ (right), with $N=64$} \label{fig1} \end{figure} \begin{figure} \caption{Derivative $u'(x)$ of the solution of (\ref{test}) for $\delta=1.1$ (left) and a zoom near $x=0$ (right)} \label{fig2b} \end{figure} \begin{figure} \caption{Derivative $u'(x)$ of the solution of (\ref{test}) for $\delta=1.4$ (left) and a zoom near $x=0$ (right)} \label{fig2} \end{figure} The numerical solution $\{u_j\}_{j=0}^N$ of problem (\ref{test}) is computed on a uniform mesh of width $h=1/N$ as described in Section~\ref{sec:propertiesA}. Figure~\ref{fig1} shows the exact solution $u$ for~$\delta=1.1$ (left figure) and~$\delta=1.4$ (right figure), together with the respective solutions computed by our finite difference method for $N=64$. Figures~\ref{fig2b} (left) and \ref{fig2} (left) show the derivative $u'$ of the exact solution for $\delta=1.1$ and $\delta=1.4$, respectively. We observe that the solution $u$ and its derivative $u'$ are bounded functions. In Figures~\ref{fig2b} (right) and \ref{fig2} (right) a zoom of $u'(x)$ in the vicinity of $x=0$ is displayed, and we can observe a vertical tangent at this point in both cases. Table \ref{tb1} presents numerical results for several values of the parameters~$\delta$ and~$N$. Each table entry shows first the maximum pointwise error $$ e_N^\delta := \max_{0\le j\le N} \vert u(x_j) - u_j\vert, $$ and then the order of convergence, which is computed in the standard way: $$ p_N^\delta = \log _2 \left(\frac{e_N^\delta}{e_{2N}^\delta}\right).
$$ To show that the numerical results do not depend strongly on the value of $\delta$, we have also computed the uniform errors for the set of values of the parameter $\delta$ considered in Table \ref{tb1} and the corresponding orders of convergence. These values are defined for each $N$ by $$ e_N=\max_\delta\, e_N^\delta \quad \hbox{and} \quad p_N=\log _2 \left(\frac{e_{N}}{e_{2N}}\right) $$ and they appear in the last row of Table \ref{tb1}. These numerical results show that one obtains first-order convergence for each value of $\delta$ considered in Table~\ref{tb1}, and this convergence is uniform in $\delta$. Figure~\ref{fig3} exhibits the pointwise errors in the computed solution for $\delta=1.1, 1.4$ and $N=64, 128$, to show how the error varies within $[0,1]$. The first-order convergence of Table~\ref{tb1} is much better than the $O(h^{\delta-1})$ convergence guaranteed by Theorem~\ref{thm:Error}, but our second numerical example will show that the rate of convergence can indeed deteriorate when $\delta$ is close to $1$. 
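As a quick consistency check on Table~\ref{tb1}, the order estimates $p_N^\delta = \log_2(e_N^\delta/e_{2N}^\delta)$ can be recomputed directly from the tabulated errors; a minimal Python sketch for the row $\delta=1.1$ (error values copied from the table):

```python
import math

# maximum errors e_N^delta for delta = 1.1, N = 64, 128, ..., 2048 (from Table 1)
e = [1.464e-1, 7.547e-2, 3.843e-2, 1.941e-2, 9.761e-3, 4.895e-3]
orders = [math.log2(e[i] / e[i + 1]) for i in range(len(e) - 1)]
print([round(p, 3) for p in orders])  # [0.956, 0.974, 0.985, 0.992, 0.996], as in the table
```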
\begin{table}[h] \caption{Test Problem (\ref{test}): Maximum and uniform errors $e_N^\delta, \, e_N$ and their orders of convergence $p_N^\delta, \, p_N$} \begin{center}{\scriptsize \label{tb1} \begin{tabular}{|c||c|c|c|c|c|c|} \hline & N=64 & N=128 & N=256 & N=512 & N=1024 & N=2048 \\ \hline \hline $\delta=1.1$ &1.464E-001 &7.547E-002 &3.843E-002 &1.941E-002 &9.761E-003 &4.895E-003 \\ &0.956&0.974&0.985&0.992&0.996&0.998 \\ \hline $\delta=1.2$ &1.455E-001 &7.443E-002 &3.769E-002 &1.897E-002 &9.522E-003 &4.770E-003 \\ &0.967&0.982&0.990&0.995&0.997&0.999 \\ \hline $\delta=1.3$ &1.457E-001 &7.427E-002 &3.754E-002 &1.888E-002 &9.472E-003 &4.744E-003 \\ &0.972&0.984&0.991&0.995&0.997&0.999 \\ \hline $\delta=1.4$ &1.466E-001 &7.469E-002 &3.776E-002 &1.900E-002 &9.536E-003 &4.779E-003 \\ &0.973&0.984&0.991&0.995&0.997&0.998 \\ \hline $\delta=1.5$ &1.476E-001 &7.534E-002 &3.816E-002 &1.924E-002 &9.669E-003 &4.852E-003 \\ &0.971&0.981&0.988&0.992&0.995&0.997 \\ \hline $\delta=1.6$ &1.479E-001 &7.573E-002 &3.849E-002 &1.947E-002 &9.816E-003 &4.937E-003 \\ &0.965&0.976&0.983&0.988&0.991&0.994 \\ \hline $\delta=1.7$ &1.459E-001 &7.508E-002 &3.836E-002 &1.950E-002 &9.876E-003 &4.988E-003 \\ &0.958&0.969&0.976&0.981&0.985&0.988 \\ \hline $\delta=1.8$ &1.392E-001 &7.197E-002 &3.698E-002 &1.891E-002 &9.637E-003 &4.896E-003 \\ &0.951&0.961&0.967&0.973&0.977&0.980 \\ \hline $\delta=1.9$ &1.236E-001 &6.389E-002 &3.288E-002 &1.686E-002 &8.628E-003 &4.405E-003 \\ &0.952&0.958&0.963&0.967&0.970&0.973 \\ \hline $e_N$ &1.479E-001 &7.573E-002 &3.849E-002 &1.950E-002 &9.876E-003 &4.988E-003 \\ $p_N$ &0.965&0.976&0.981&0.981&0.985&0.988\\ \hline \hline \end{tabular}} \end{center} \end{table} \begin{figure} \caption{Pointwise errors in computed solutions for test problem (\ref{test}) for $\delta=1.1, 1.4$ and $N=64, 128$} \label{fig3} \end{figure} \emph{Test Problem 2.} Consider the constant-coefficient problem \begin{subequations}\label{test2} \begin{align} -D_\ast^{\delta} u(x)+ 2 u'(x)+3u(x) &= 1.25 \ \text{ on }(0,1), \\
u(0)-\displaystyle\frac{1}{\delta-1} u'(0) &= 0.4 \ \text{ and } \ u(1)= 1.7. \end{align} \end{subequations} As the exact solution of \eqref{test2} is unknown, in order to estimate the errors of the solution~$\{u_j\}^N_{j=0}$ computed on a uniform mesh of width $h=1/N$ by our finite difference method, we use the two-mesh principle \cite[Section 5.6]{FHMORS00}: on a uniform mesh of width $h/2$, compute the numerical solution~$\{z_j\}^{2N}_{j=0}$ with the same method and hence the two-mesh differences $$ d_N^\delta := \max_{0\le j\le N} \vert u_j - z_{2j}\vert; $$ from these values we estimate the orders of convergence by $$ q_N^\delta = \log _2 \left(\frac{d_N^\delta}{d_{2N}^\delta}\right). $$ The uniform two-mesh differences and their corresponding uniform orders of convergence are computed analogously to Table~\ref{tb1} and denoted by~$d_N$ and~$q_N$, respectively. The numerical results obtained are displayed in Table~\ref{tb2} and we observe that the finite difference method is again convergent but a significant decrease in the order of convergence occurs for values of~$\delta$ close to~1 and practical values of~$N$. 
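The two-mesh principle itself is independent of the particular discretization; the following toy Python sketch (our own illustration, using forward Euler for $u'=u$ on $[0,1]$ rather than the fractional problem) shows the mechanics of computing $d_N$ and $q_N$:

```python
import math

def euler(N):
    # forward Euler for the toy problem u' = u, u(0) = 1 on [0, 1];
    # returns the nodal values u_0, ..., u_N (a stand-in for any solver)
    h = 1.0 / N
    u = [1.0]
    for _ in range(N):
        u.append(u[-1] * (1 + h))
    return u

def two_mesh_difference(N):
    # d_N = max_j |u_j - z_{2j}|, comparing meshes of width h and h/2
    u, z = euler(N), euler(2 * N)
    return max(abs(u[j] - z[2 * j]) for j in range(N + 1))

d = [two_mesh_difference(N) for N in (64, 128, 256)]
q = [math.log2(d[i] / d[i + 1]) for i in range(len(d) - 1)]
print(q)  # both estimates are close to 1, the order of forward Euler
```

The estimated orders approach the true first order of the toy scheme as $N$ grows, mirroring how $q_N^\delta$ is interpreted in Table~\ref{tb2}.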
\begin{table}[h] \caption{Test Problem (\ref{test2}): Maximum and uniform two-mesh differences $d_N^\delta, \, d_N$ and their orders of convergence $q_N^\delta, \, q_N$} \begin{center}{\scriptsize \label{tb2} \begin{tabular}{|c||c|c|c|c|c|c|} \hline & N=64 & N=128 & N=256 & N=512 & N=1024 & N=2048 \\ \hline \hline $\delta=1.1$ &2.304E-001 &2.277E-001 &2.200E-001 &2.075E-001 &1.913E-001 &1.702E-001 \\ &0.017&0.050&0.084&0.117&0.168&0.255 \\ \hline $\delta=1.2$ &1.289E-001 &1.032E-001 &7.512E-002 &4.919E-002 &2.935E-002 &1.631E-002 \\ &0.321&0.458&0.611&0.745&0.847&0.915 \\ \hline $\delta=1.3$ &6.557E-002 &4.240E-002 &2.509E-002 &1.387E-002 &7.337E-003 &3.782E-003 \\ &0.629&0.757&0.855&0.919&0.956&0.977 \\ \hline $\delta=1.4$ &3.635E-002 &2.125E-002 &1.167E-002 &6.158E-003 &3.174E-003 &1.614E-003 \\ &0.775&0.864&0.922&0.956&0.976&0.986 \\ \hline $\delta=1.5$ &2.271E-002 &1.265E-002 &6.749E-003 &3.509E-003 &1.795E-003 &9.104E-004 \\ &0.844&0.906&0.944&0.967&0.980&0.988 \\ \hline $\delta=1.6$ &1.548E-002 &8.418E-003 &4.443E-003 &2.300E-003 &1.177E-003 &5.976E-004 \\ &0.879&0.922&0.950&0.967&0.978&0.985 \\ \hline $\delta=1.7$ &1.110E-002 &5.968E-003 &3.140E-003 &1.628E-003 &8.356E-004 &4.262E-004 \\ &0.895&0.927&0.948&0.962&0.971&0.978 \\ \hline $\delta=1.8$ &8.063E-003 &4.305E-003 &2.264E-003 &1.178E-003 &6.078E-004 &3.120E-004 \\ &0.905&0.927&0.943&0.954&0.962&0.969 \\ \hline $\delta=1.9$ &5.650E-003 &2.978E-003 &1.557E-003 &8.090E-004 &4.184E-004 &2.156E-004 \\ &0.924&0.936&0.945&0.951&0.957&0.961 \\ \hline $d_N$ &2.304E-001 &2.277E-001 &2.200E-001 &2.075E-001 &1.913E-001 &1.702E-001 \\ $q_N$ &0.017&0.050&0.084&0.117&0.168&0.255\\ \hline \hline \end{tabular}} \end{center} \end{table} \section{Conclusions}\label{sec:conclusions} In this paper we discussed a two-point boundary value problem whose highest-order derivative was a Caputo fractional derivative of order $\delta \in (1,2)$. 
A comparison principle was proved for this differential operator on its domain $[0,1]$ provided that the boundary conditions satisfied certain restrictions. Then we derived sharp a priori bounds on the integer-order derivatives of the solution $u$ of the boundary value problem, using elementary analytical techniques to extract this information from previously-known bounds on the integer-order derivatives of $D^\delta_\ast u$. These new bounds were used to analyse a finite difference scheme for this problem via a truncation error analysis, but this analysis was complicated by the awkward fact that $u''(x)$ and $u'''(x)$ blow up at the boundary $x=0$ of the domain $[0,1]$. We were able to prove that our finite difference method was $O(h^{\delta-1})$ accurate at the nodes of our mesh ($h$ is the mesh width and the mesh is uniform), but our numerical experience has been that the method is often more accurate; we have observed first-order convergence for all values of $\delta$ in several numerical examples, though one can have some deterioration in the rate of convergence when $\delta$ is near~1, as we saw in Test Problem 2. In future work in~\cite{future_paper_GS} and other papers we shall discuss the use of alternative difference approximations of the convective term of~\eqref{proba}, investigate why the rate of convergence of~\eqref{fulldisc} is sometimes first order for all values of $\delta$, and extend our approach to higher-order difference schemes and to graded meshes (cf.~\citet{PT12}). \end{document}
\begin{document} \title{A Discrete It\^o Calculus Approach \\ to He's Framework for \\ Multi-Factor Discrete Markets} \author{Jir\^o Akahori} \address{Department of Mathematical Sciences \& Research Center for Finance, Ritsumeikan University 1-1-1, Nojihigashi, Kusatsu, Shiga, 525-8577, Japan} \email{[email protected]} \date{} \keywords{discrete It\^o formula, finite difference scheme, discrete-time multi-asset market} \thanks{\noindent{\bf 2000 Mathematics Subject Classifications}: Primary 91B28; secondary 60G50, 65C20, 60F99. } \thanks{This research was supported by Open Research Center Project for Private Universities: matching fund subsidy from MEXT, 2004-2008 and also by Grants-in-Aids for Scientific Research (No. 18540146) from the Japan Society for Promotion of Sciences.} \pagestyle{plain} \maketitle \begin{abstract} In the present paper, a discrete version of It\^o's formula for a class of multi-dimensional random walks is introduced and applied to the study of a discrete-time complete market model which we call He's framework. The formula unifies the continuous-time and discrete-time settings and, by regarding the latter as a finite difference scheme for the former, the order of convergence is obtained. The result shows that He's framework cannot be a first-order scheme except in the one-dimensional case. \end{abstract} \section{Introduction} In \citet{he:90}, the binomial tree approach of \citet{CRR} is generalized to a multi-nomial one and limit theorems concerning pricing kernels and hedging strategies are established. The main feature of He's multi-nomial tree framework is that each approximating market itself is arbitrage-free and \underline{\bf complete}. In the present paper, a new insight into He's framework, which leads to further applications, will be introduced. The insight comes from a discrete version of It\^o's formula.
As is the case with continuous-time models, our discrete It\^o formula relates the value process of a contingent claim to a difference equation. This means that the formula enables a discrete version of the so-called {\em partial differential equation (PDE) approach} to pricing-hedging problems in the literature of mathematical finance; we do not use the usual martingale argument. Further, if a continuous-time limit exists, then the discrete equations obtained via our It\^o formula can be seen as {\em explicit} finite difference approximations of the limit PDE, and we can obtain the order of convergence by the standard arguments for finite difference schemes. The contributions of the present paper are: \begin{itemize} \item a multi-dimensional version of the discrete It\^o formula [Theorem \ref{ItoF}], which enables the discrete PDE approach; \item the order of convergence of the value functions of European options within He's framework [Theorem \ref{limittheorem}], which is proved to be $ \mathrm{O} (N^{-1/2}) $ in general and $ \mathrm{O}(N^{-1}) $ in single risky asset cases. Here $ N $ is the number of time-discretization steps. \end{itemize} The point is that {\em completeness makes it slow}, since He's framework is based on the completeness of the market. The first observation, from the discrete It\^o formula, shows that approximations by the discrete market model of He's framework are always a kind of finite-difference approximation of a PDE, while the second observation says that for $ n \geq 2 $ the convergence is much slower than that of a first-order Euler-Maruyama approximation. \ This paper is organized as follows. In section \ref{Heov}, a quick review of He's framework will be presented. In section \ref{Ito}, the Szabados-Fujita formula and the discrete PDE framework will be introduced. In section \ref{limit}, a limit theorem will be established.
In section \ref{rep}, the relations with group theory will be explained. Finally, in section \ref{proof}, proofs of the theorems of the present paper will be given. \begin{rem} This paper is motivated by the textbook \citet{Fujita_Japanese_text}, where he gives a very nice description of the {\em from CRR to Black-Scholes} argument by using his discrete It\^o formula. His (and our) approach would be very instructive for those who are not familiar with higher mathematics. \end{rem} \noindent {\bf Acknowledgment.} The author wishes to acknowledge the hospitality extended to him by Professor Marc Yor during a sabbatical stay at Paris VI, when the first version of the present paper was written. The author also acknowledges suggestions by Professors Freddy Delbaen, Raouf Ghomrasni, and anonymous referees. \section{He's framework: an overview}\label{Heov} In essence, \citet{he:90} approximated $ n $-dimensional Brownian motions by a system of mutually orthogonal martingales of finite states--- $ (n + 1) $ states at each step. Let us briefly review He's framework. Let $ ( e_{i,j} )_{0 \leq i,j \leq n} $ be an $ (n+1) \times (n+1) $-orthogonal matrix such that $ e_{0,j} > 0 $ for $ j=0,1,...,n $, and define \begin{equation}\label{Eset} \mathcal{E} := \left\{ {e}_j = \frac{1}{e_{0,j} }(e_{1,j},...,e_{n,j} ) \in {\mathbf R} ^n \,:\, j=0,...,n \right\}. \end{equation} Let $ \tau \equiv (\tau^1,...,\tau^n) $ be a random variable taking values in $ \mathcal{E} $ with \begin{equation*} \mathbf{P} (\tau = {e}_j ) = e_{0,j}^2, \,\,j=0,1,...,n. \end{equation*} Then\footnote{Note that the converse is true; any random variable $ \tau $ satisfying (\ref{cov}) is, if it is defined on a finite set, constructed in the above way from such an orthogonal matrix.
\citet{he:90} treated only the uniform cases of $ e_{0,0}= \cdots = e_{0,n} = 1/\sqrt{n+1} $.} \begin{equation}\label{cov} \mathbf{E} [ \tau^i ] = 0, \,i=1,...,n, \quad \mbox{and} \quad \mathrm{Cov} (\tau^i, \tau^j) = \begin{cases} 1 & (i=j) \\ 0 & (i \ne j ). \end{cases} \end{equation} Let $ \tau_1,\dots,\tau_t,\dots $ be independent copies of $ \tau $. Define a sequence of $ {\mathbf R} ^n $-valued stochastic processes $ \{ X^{N}\} $ by \begin{equation*} X^{N}_t = X_0 + N^{-1/2} \sum_{u=1}^{[Nt]} \tau_u \end{equation*} for a given initial point $ X_0 \in {\mathbf R} ^n $. By (\ref{cov}), the components of $ X^N_t - X^N_0 $ are mutually orthogonal martingales, and therefore the martingale central limit theorem (see \citet{MR88a:60130} for example) ensures that the law of $ X^N $ converges weakly to the $ n $-dimensional Wiener measure as $ N \to \infty $. Fix $ N \in \mathbf{N} $. For $ T>0 $, we denote $ T_N=[TN]/N $. For a subinterval $ I $ of $ [0,\infty ) $, we denote $ I_N = I \cap \{ k/N : k=0,1,2,\ldots \} $. In our market there are $ (n+1) $ securities whose prices are given by $ S^{j,N}_t \equiv h^{j,N} ( t, X^N_t )$ for $ j=0,1,...,n $, where the $ h^{j,N} $'s are real functions defined on $ [0,T]_N \times {\mathbf R} ^n $ such that the following $ (n+1)\times(n+1) $-matrix \begin{equation*} H^N (t,x) := \begin{pmatrix} h^{0,N} (t, x+ N^{-1/2}e_0 ) & \cdots & h^{n,N} (t, x+N^{-1/2}e_{0} ) \\ h^{0,N} (t, x+ N^{-1/2}e_1 ) & \cdots & h^{n,N} (t, x+N^{-1/2}e_{1} ) \\ \vdots & \ddots & \vdots \\ h^{0,N} (t, x+ N^{-1/2}e_n ) & \cdots & h^{n,N} (t, x+N^{-1/2}e_{n} ) \end{pmatrix} \end{equation*} is invertible for arbitrary $ (t,x) \in [0,T]_N \times {\mathbf R} ^n $. Suppose that at time $ t=k/N $ we hold $ \theta^j_t \equiv \theta^j_t (\tau_1,...,\tau_k) $ units of the $ j $-th security for each $ j \in \{0,1,...,n \} $.
The cost of the portfolio at time $ t $ is \begin{equation}\label{cost} c^N (t, \tau_1,...,\tau_k) := \sum_j h^{j,N} (t,X^N_t) \theta^j_t, \end{equation} and at time $ t + N^{-1} $ the value of the portfolio becomes \begin{equation*} v^N (t+N^{-1}, \tau_1,...,\tau_k, \tau_{k+1}) := \sum_j h^{j,N} (t+N^{-1},X^N_{t + N^{-1}}) \theta^j_t , \end{equation*} or equivalently \begin{equation}\label{value} \begin{pmatrix} v^{N} (t+N^{-1},\tau_1,...,\tau_k, N^{-1/2}e_0 ) \\ v^{N} (t+N^{-1},\tau_1,...,\tau_k, N^{-1/2}e_1 ) \\ \vdots \\ v^{N} (t+N^{-1},\tau_1,...,\tau_k, N^{-1/2}e_n ) \end{pmatrix} = H^N (t+ N^{-1},x) \begin{pmatrix} \theta^0_t \\ \theta^1_t \\ \vdots \\ \theta^n_t \end{pmatrix}. \end{equation} If the portfolio is self-financed, then $ c^N (t,\cdot) = v^N (t, \cdot) $. Since we have assumed that $ H^N $ is invertible, we have, by combining (\ref{cost}) and (\ref{value}), \begin{equation}\label{recursivep} \begin{split} &v^N (t,x) = \\ &(h^{0,N} (t, x),...,h^{n,N} (t, x) ) H^N (t+ N^{-1},x)^{-1} \begin{pmatrix} v^{N} (t+N^{-1},x,N^{-1/2}e_0 ) \\ v^{N} (t+N^{-1},x,N^{-1/2}e_1 ) \\ \vdots \\ v^{N} (t+N^{-1},x,N^{-1/2}e_n ) \end{pmatrix} ; \\ & \quad t \in [0,T_N)_N, \quad x \in \mathcal{E}^{[tN]}. \end{split} \end{equation} If the terminal value (to be hedged) $ \Phi : \mathcal{E}^N \to {\mathbf R} $ depends only on $ X^N_T $, then $ v^N (T-N^{-1}, \cdot ) $ depends only on $ X^N_{T - N^{-1}} $, and so on; finally we have the following recursive equation, which has a unique solution: \begin{equation}\label{recursiveE} \begin{split} &v^N (T_N,x) = \Phi^N (x); \quad x \in {\mathbf R} ^n, \\ &v^N (t,x) = \\ &(h^{0,N} (t, x),...,h^{n,N} (t, x) ) H^N (t+ N^{-1},x)^{-1} \begin{pmatrix} v^{N} (t+N^{-1},x+N^{-1/2}e_0 ) \\ v^{N} (t+N^{-1},x+N^{-1/2}e_1 ) \\ \vdots \\ v^{N} (t+N^{-1},x+N^{-1/2}e_n ) \end{pmatrix} ; \\ & \quad t \in [0,T_N)_N, \quad x \in {\mathbf R} ^n.
\end{split} \end{equation} Here all we can say is that $ v^N (t,X^N_t) $ is the replication cost (at time $ t $) of a European option whose pay-off is described by $ \Phi^N (X^N_T) $, where $ \Phi^N : {\mathbf R} ^{n} \to {\mathbf R} $. As is well known, absence of arbitrage opportunities is equivalent to the positivity of the state prices (see e.g. \citet{duffie:96}). In other words, denoting $ h^N (t,x) = $ $(h^{0,N} (t, x) $ $,...,$ $ h^{n,N} (t, x) ) $, \begin{equation}\label{NA} \text{each component of $ h^N (t,x)H^N (t+N^{-1},x)^{-1} $ is strictly positive.} \end{equation} Under the hypothesis (\ref{NA}), the unique solution $ v^N(t,x) $ is the unique fair price, at time $ t \in [0, T) $ and in the state $ x \in {\mathbf R} ^n $ with $ S^{j,N}_t = h^{j,N} ( t, x ) $, of the European option whose pay-off is $ \Phi^N (X^N_T) $. Note that the invertibility of $ H^N $ is equivalent to completeness of the market. \ The above derivation of (\ref{recursiveE}) is also valid for any Markov process\footnote{In general it is represented by some $ F_j, j=0,1,...,n $ as \begin{equation*} Z^N_{t + N^{-1}} - Z^N_t = \sum_{j=0}^n F_j (Z_t^N) \tau_{t+N^{-1}}^j, \end{equation*} with the convention $ \tau^0 \equiv 1 $. This is because $ 1,\tau^1,...,\tau^n $ form an orthonormal basis of the space of random variables generated by $ \tau $. In particular, a discrete approximation of an SDE by a Markov chain always has an Euler-Maruyama representation.} $ Z^N $ replacing $ X^N $. In fact, \citet{he:90} modeled the price vector $ \mathbf{S}_t=(S^1_t,...,S^n_t) $ directly (meaning that the $ h^j $'s are identity maps) by an Euler-Maruyama approximation of a stochastic differential equation. Here we have changed the setting as above.
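To make the backward recursion (\ref{recursiveE}) concrete, here is a minimal Python sketch for the one-dimensional case $ n=1 $, $ \tau = \pm 1 $, with a riskless security $ h^{0,N} \equiv 1 $ and a stock $ h^{1,N}(t,x) = e^{\sigma x} $. All names (`binomial_replication_price`, the payoff, the parameter values) are our own illustrative choices, not from the paper; in this special case the recursion reproduces the usual binomial risk-neutral expectation.

```python
import math

def binomial_replication_price(payoff, h, N, T, x0=0.0):
    """Solve the backward recursion (recursiveE) for n = 1 on the recombining
    grid x0 + (2k - step)/sqrt(N); h maps (j, t, x) -> h^{j,N}(t, x)."""
    M = int(N * T)
    d = 1.0 / math.sqrt(N)
    # terminal condition: v(T, x) = payoff(x), k = number of up-moves
    v = [payoff(x0 + (2 * k - M) * d) for k in range(M + 1)]
    for step in range(M - 1, -1, -1):
        t, tn = step / N, (step + 1) / N
        new_v = []
        for k in range(step + 1):
            x = x0 + (2 * k - step) * d
            # one-step 2x2 system H(t+1/N, x) theta = (v_up, v_down)^T
            a, b = h(0, tn, x + d), h(1, tn, x + d)
            c, e = h(0, tn, x - d), h(1, tn, x - d)
            det = a * e - b * c
            vu, vd = v[k + 1], v[k]
            th0 = (e * vu - b * vd) / det
            th1 = (a * vd - c * vu) / det
            # cost of the replicating portfolio at (t, x)
            new_v.append(h(0, t, x) * th0 + h(1, t, x) * th1)
        v = new_v
    return v[0]

# riskless bond = 1, stock = e^{sigma x}; call option on the stock
sig = 0.2
h = lambda j, t, x: 1.0 if j == 0 else math.exp(sig * x)
payoff = lambda x: max(math.exp(sig * x) - 1.0, 0.0)
price = binomial_replication_price(payoff, h, N=16, T=1.0)
```

With a constant bond the one-step state prices here reduce to the familiar risk-neutral probability $ q = (1-e^{-\sigma/\sqrt{N}})/(e^{\sigma/\sqrt{N}}-e^{-\sigma/\sqrt{N}}) $, so the recursion agrees with the plain binomial expectation.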
The difference is that we have preserved the structure of a so-called recombining tree: if we consider $ \mathbf{S}_{k/N} $ as a function of $ \tau_1,...,\tau_k $, we have \begin{equation}\label{recom} \mathbf{S}_{k/N} (e_{i_1},...,e_{i_k} ) = \mathbf{S}_{k/N} (e_{i_{\sigma(1)}},...,e_{i_{\sigma(k)}} ) \end{equation} for an arbitrary permutation $ \sigma \in \mathfrak{S}_k $. The reasons for this modification are: (i) Monte-Carlo simulation of Euler-Maruyama approximations by finitely supported random variables is not practical, and (ii) neither is solving an equation like (\ref{recursiveE}) without the recombining structure (\ref{recom}). In fact, the recombining structure reduces the computational complexity considerably, where by computational complexity we mean {\em how many times we need to solve the one-step linear equation (\ref{recursiveE}) to obtain the value of $ v^N (t, x) $.} In other words, it is the number $ \sharp \mathcal{X}(t,x, Z) $ of the possible states \begin{equation*} \mathcal{X} (t,x,Z):= \{ y \in {\mathbf R} ^n : \mathbf{P} ( Z^N_{t} = x, Z^N_T =y ) > 0 \}. \end{equation*} In general we have $ \sharp \mathcal{X}(T-kN^{-1}, x, Z ) = (n+1)^k $. Even if $ Z $ is an Euler-Maruyama approximation of a solution to an SDE, this is almost always the case. However, the symmetry (\ref{recom}), which comes from that of $ X_t $, reduces it dramatically. More precisely, we have the following. \begin{prop}\label{tree} \begin{equation*} \sharp \mathcal{X}(T-kN^{-1}, x, X^N )= \frac{(k+n)!}{k!n!}. \end{equation*} \end{prop} \begin{proof} Since $ \{ e_1,...,e_n \} $ spans an $ n $-dimensional subspace in $ {\mathbf R} ^{n+1}$, they have no linear dependence other than $ e_1 + \cdots + e_n = 0 $. Therefore, the number is equal to that of solutions to \begin{equation*} x_1 + x_2 + \cdots + x_{n+1} = k, \quad x_j \in {\mathbf Z} _+,\,j=1,...,n+1, \end{equation*} which is exactly $ (n+k)!/k!n! $.
\end{proof} \begin{rem}\label{NAcomp} Denoting by $ A (t,x) $ the sum of all the components of $ h^N (t,x) H^N (t+N^{-1},x)^{-1} $, the value process of the money market account is given by \begin{equation*} \prod_{k=1}^{[Nt]} 1/A(k/N,X^N_{(k-1)/N}). \end{equation*} In particular, {\em positive interest rate} is equivalent to $ A (t,x) < 1 $ for arbitrary $ (t,x) $. \end{rem} \section{A Discrete It\^o formula and discrete PDE}\label{Ito} Let us introduce a discrete version of It\^o's formula for the process $ X^N \equiv (X^{N,1},...,X^{N,n}) $. \begin{thm}\label{ItoF} {\em (i)} For a function $ f : [0,\infty) \times {\mathbf R} ^n \to {\mathbf R} $, we have \begin{equation}\label{itofujita} \begin{split} &f(t,X^N_t) - f(0,X_0) \\ & \qquad = \sum_{u=1}^{[Nt]} \bigg( \sum_{k=1}^n \partial^N_k f (u/N, X^N_{(u-1)/N})\, (X^{N,k}_{u/N} - X^{N,k}_{(u-1)/N}) \\ & \qquad \qquad \quad + ( \frac{1}{2}\Delta^N + \partial_t^N ) f(u/N, X^{N}_{(u-1)/N}) /N \bigg), \end{split} \end{equation} where \begin{equation}\label{diff} \begin{split} \partial_{k}^N f (\cdot, x) &= \sqrt{N} \sum_{j=0}^{n} f( \cdot , x + N^{-1/2} e_{j} )e_{0,j} e_{k,j} \\ \Delta^N f (\cdot,x) &= 2 {N} \sum_{j=0}^{n} \{ f( \cdot , x + N^{-1/2} e_j) - f (\cdot, x) \} e_{0,j}^2, \\ \partial^N_t f( t, \cdot ) &= N(f( t, \cdot ) - f(t-N^{-1}, \cdot)). \end{split} \end{equation} {\em (ii)} If $ f $ is in $ C^{1,2} $ in a neighborhood of $ (t,x) $, then letting $ N \to \infty $, we have \begin{equation}\label{point} \partial_j^{N} f (t, x) \to \frac{\partial}{\partial x_j}f(t, x),\quad \Delta^N f (t,x) \to \Delta f (t,x), \quad \partial_t^N f (t,x) \to \frac{\partial}{\partial t} f (t, x). \end{equation} Here $ \Delta $ is the Laplacian in $ {\mathbf R} ^n $.
{\em (iii)} Further, for fixed $ t \in [0,T] $, if $ f(t,\cdot)$ is in $ C^{3} $ in an open set $ U \subset {\mathbf R} ^n $, then for every compact subset $ K \subset U $, there exists a positive constant $ C_K $ depending only on $ f (t,\cdot) $ such that \begin{equation}\label{orders} \max_j \left| \partial_j^{N} f (t, x) - \frac{\partial}{\partial x_j}f(t, x) \right| + \left| \Delta^N f (t,x) - \Delta f(t,x) \right| \leq C_K N^{-1/2} \end{equation} for all $ x $ in $ K $. (iv) The order of convergence cannot be improved for general $ f \in C^{1,4} $ when $ n \geq 2 $. (v) For the case $ n =1 $, it can be improved to $ N^{-1} $, provided that $ f \in C^{1,4} $. \end{thm} A proof of Theorem \ref{ItoF} will be given in Section \ref{PDIF}. \begin{rem} This version of It\^o's formula differs from those for jump semimartingales appearing, for example, in \citet{MR2020294}, in that ours gives the Doob decomposition of $ f (t,X_t) $. This version of the discrete It\^o formula was introduced by \citet{Fujita_formula} for the case $ n=1 $. \citet{MR684465} and \citet{MR92i:60105} also studied discrete It\^o formulas as discrete analogues of the standard one, a point of view we share in this paper. It might properly be called the Kudzhma-Szabados-Fujita formula, but here the term {\em discrete It\^o formula} is preferred since the full name is long and confusing. \end{rem} We claim that the recursive equation (\ref{recursiveE}) defines a discrete PDE with respect to the {\em differentials} (\ref{diff}). Define \begin{equation*} \Sigma^N (t,x) := \begin{pmatrix} h^{0,N} (t-N^{-1},x) & \cdots & h^{n,N} (t-N^{-1},x) \\ \partial^N_1 h^{0,N} (t,x) & \cdots & \partial^N_1 h^{n,N} (t,x) \\ \vdots & \ddots & \vdots \\ \partial^N_n h^{0,N} (t,x) & \cdots & \partial^N_n h^{n,N} (t,x) \end{pmatrix}.
\end{equation*} \begin{thm}\label{Euro} Let us assume that the market is arbitrage-free and complete; namely, the invertibility of $ H^N $ and (\ref{NA}) are assumed. Then $ \Sigma^N $ is always invertible and $ v^N $ satisfies the following discrete PDE: \begin{equation}\label{disdif} \begin{split} & v^N(T_N,x) = \Phi^N(x) ; \quad x \in \mathbf{R}^n, \\ & \partial ^N_t v^N + \frac{1}{2} \Delta^N v^N - \langle b^N,\nabla^N v^N \rangle - c^N (1^N v^N)= 0 ; \\ & \qquad t \in (0,T_N)_N , \ x \in \mathbf{R}^n . \end{split} \end{equation} Here $ 1^N v^N(t,x) = v^N(t-N^{-1},x) $ and $ (c^N,b^N)=(\partial ^N_t h^N + \frac{1}{2} \Delta^N h^N) [\Sigma^N]^{-1}$. \end{thm} A proof of Theorem \ref{Euro} will be given in Section \ref{PEuro}. \ The equation (\ref{disdif}) can be obtained directly by using the discrete It\^o formula (\ref{itofujita}) if we a priori assume that $ \Sigma^N $ is invertible. Let us write $ d Y_t := Y_t - Y_{t-N^{-1}} $ for a process $ Y $, $ dt := 1/N $, $ \nabla^N = (\partial^N_1,...,\partial^N_n ) $, $ V_t := v^N (t,X^N_t) $, and so on. If we have $ d V_t = \sum_{j=0}^{n} \theta^j_t d h_t^{j,N} $ and $ V_t = \sum_{j=0}^{n} \theta^j_{t+N^{-1}} h^{j,N}_t $, then $ \theta $ is the hedging strategy and the problem is settled. This can be done quite easily, in parallel with the continuous-time case. In fact, we have \begin{equation*} \begin{split} dV &=\nabla^N v^N \cdot dX^N + \left( \partial^N_t v^N + \frac{1}{2} \Delta^N v^N \right) \,dt = \sum_{j=0}^{n} \theta^j d h^{j,N}, \\ dh^{j,N} &= \nabla^N h^{j,N} \cdot dX^N + \left( \partial^N_t h^{j,N} + \frac{1}{2} \Delta^N h^{j,N} \right) \,dt, \end{split} \end{equation*} and $ v^N = \sum_{j=0}^{n} \theta^j_{t+N^{-1}} h^{j,N}_t $, hence $ \theta= (\theta^0,...,\theta^{n}) = (\Sigma^N)^{-1} (v^N (t-N^{-1}), \nabla^N v^N ) $.
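The discrete It\^o formula (\ref{itofujita}) is an exact pathwise identity, which one can check numerically. Below is a minimal sketch (our own illustration) for $ n=1 $ with $ \tau = \pm 1 $, where $ \partial^N $ reduces to a central difference, $ \Delta^N $ to the standard second difference, and $ \partial^N_t $ to a backward time difference; the function name and the test function $ f $ are ours.

```python
import math
import random

def discrete_ito_check(f, N, T, seed=1):
    """Compute both sides of the discrete Ito formula (itofujita) along one
    simulated path, for n = 1 with tau uniform on {+1, -1}."""
    rng = random.Random(seed)
    h = N ** -0.5
    x, rhs = 0.0, 0.0
    for u in range(1, int(N * T) + 1):
        t = u / N
        tau = 1.0 if rng.random() < 0.5 else -1.0
        dN = math.sqrt(N) / 2.0 * (f(t, x + h) - f(t, x - h))    # partial^N_1
        lapN = N * (f(t, x + h) + f(t, x - h) - 2.0 * f(t, x))   # Delta^N
        dtN = N * (f(t, x) - f(t - 1.0 / N, x))                  # partial^N_t
        rhs += dN * (tau * h) + (0.5 * lapN + dtN) / N
        x += tau * h
    lhs = f(int(N * T) / N, x) - f(0.0, 0.0)
    return lhs, rhs

lhs, rhs = discrete_ito_check(lambda t, x: x ** 3 + t * x, N=50, T=1.0)
```

Since each one-step increment satisfies the identity exactly (no remainder term), the two sides agree up to floating-point error for any path and any $ f $.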
Note that the above argument can be applied to the case $ N = \infty $, where $ X^\infty $ is the standard Brownian motion, $ dt $ is the standard one, and so on. The corresponding standard PDE shares its algebraic structure with the discrete ones. Since the second assertion (ii) of Theorem \ref{ItoF} can be seen as {\em consistency} of the difference operators (\ref{diff}) in the context of the finite difference method (see e.g. \citet{MR95b:65003}), $ v^N $ converges to a solution $ v^\infty $ of the PDE, at least when $ v^\infty $ is regular enough. We will study this topic in detail in the next section. \section{Limit Theorem}\label{limit} The solution $ {v^N(t,\cdot):\mathbf{R}^n \to \mathbf{R}} $ is obtained inductively for each $ t \in {[0,T_N)_N} $, and for each $ {x \in \mathbf{R}^n} $, the function $ {v^N(\cdot,x)} $ on $ {[0,T]_N} $ can be extended to a piecewise-constant function on $ {[0,T]} $. We choose such an extension on $ [0,T] \times {\mathbf R} ^n $ and denote it by the same symbol. We assume the following to establish our limit theorem. \begin{as}\label{standing} (i) The market is arbitrage-free and complete; i.e. we assume (\ref{NA}) and the invertibility of $ H^N $. (ii) There exist bounded measurable functions $ b : [0,T] \times {\mathbf R} ^n \to {\mathbf R} ^{n} $ and $ \Phi : {\mathbf R} ^n \to {\mathbf R} $ and a continuous function $ c : [0,T] \times {\mathbf R} ^n \to {\mathbf R} $ such that \begin{equation}\label{orderorder} \sup_N \sup_{t \in [0,T]_N , \ x \in \mathbf{R}^n} N^{1/2} \left( | b - b^N | + | c - c^N | + |\Phi - \Phi^N| \right) < \infty . \end{equation} and (iii) they are regular enough to allow the following partial differential equation to have a bounded solution in $ C^{1,3} $ whose first-order derivatives are also bounded.
\begin{equation}\label{EuroPDE} \frac{\partial v}{\partial t} + \frac{1}{2}\Delta v - \langle b(t,x), \nabla v \rangle - c (t,x) v = 0, \quad t \in [0, T), \quad v(T, x) = \Phi (x), \end{equation} where $ \Delta $ is the Laplacian of $ {\mathbf R} ^n $ and $ \nabla $ is the gradient operator in $ {\mathbf R} ^n $. (iv) We also assume that the interest rate is positive (see Remark \ref{NAcomp}). \end{as} As we pointed out in Remark \ref{NAcomp}, the assumption (iv) is equivalent to $ A < 1 $, and hence, as we shall show in the proof of Theorem \ref{Euro}, is equivalent to the positivity of the first component of $ (\partial^N_t h^N + \frac{1}{2} \Delta^N h^N ) [\Sigma^N]^{-1} $. This in turn implies that $ c $ is positive. Under Assumption \ref{standing}, we have the following. \begin{thm}\label{limittheorem} The solutions $ v^N $ to (\ref{disdif}) converge uniformly on compact subsets of $ [0,T] \times {\mathbf R} ^n $ to the solution $ v \in C^{1,3} $ of (\ref{EuroPDE}), at rate $ N^{-1/2} $. In the one-dimensional case, the rate can be improved to $ N^{-1} $, provided that $ v $ is in $ C^{1,4} $ and the order in (\ref{orderorder}) is replaced with $ N^{-1} $. \end{thm} A proof of Theorem \ref{limittheorem} will be given in Section \ref{PLMT}. \begin{rem} Our scope covers as a special case the Black-Scholes economy, obtained by setting $ h^{j,N} (t,x) \equiv S^j_0 e^{ \langle \sigma_j ,x \rangle - \mu^j t} $ for $ j=1,...,n $ and $ h^{0,N}(t,x) \equiv e^{rt} $, where $ S^j_0, r \in {\mathbf R} _+ $, $ \mu^j \in {\mathbf R} $ and $ \Sigma \equiv [ \sigma_1,...,\sigma_n ] $ is an $ n \times n $ positive definite matrix. \end{rem} \begin{rem} Since we are working on a reference measure which is not necessarily a risk-neutral measure nor the so-called {\em physical measure}, $ W $ can be a diffusion process other than Brownian motion under those measures.
Roughly speaking, $ W $ can be a solution (in the {\em weak} sense!) to a stochastic differential equation whose diffusion coefficients are constant functions. In the one-dimensional case, by scaling we can work with any diffusion whose diffusion coefficient is monotone and smooth. \end{rem} \section{Supplementary Remark: relations to Group Representation}\label{rep} We remark here that a specification of $ \mathcal{E} $ can be done with the help of group representation theory. Let us recall the basics of group representation theory (see e.g. \citet{MR80f:20001}). Let $ G $ be a compact abelian group, and let $ \widehat{G} $ be its dual group. The members of $ \widehat{G} $ are often called {\em characters}; they form an orthonormal basis of $ L^2(G; {\mathbf C} ) $, the space of square integrable functions on $ G $ over $ {\mathbf C} $ with respect to its Haar measure. Since $ L^2 (G; {\mathbf C} ) $ is a complex vector space, we need to modify this basis to get an orthonormal basis over the real field. One candidate is obtained by the transform $ \varphi : {\mathbf C} \to {\mathbf R} $ defined by $ \varphi (x + i y) = x + y $. It is easy to check that $ \{\varphi(\chi): \chi \in \widehat{G} \} $ is an orthogonal basis of $ L^2(G; {\mathbf R} ) $. The group $ \widehat{G} $ always contains the unit, which corresponds to $ \mathbf{1} $. Thanks to the Peter-Weyl theorem, the above argument extends to non-abelian groups. In particular, a choice of a group $ G $ with $ |G| = n+1 $ gives us an $ \mathcal{E} $. The simplest choice may be the cyclic group $ C_{n+1} $. In this case, $ \tau = (\tau^1,....,\tau^{n}) $ is obtained by taking $ \tau^k = \varphi(\eta^k) $ where $ \eta $ is a uniformly distributed random variable taking values in the $ (n+1) $-th roots of unity. The fundamental theorem of finitely generated abelian groups says that the characters always take values in roots of unity.
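As a concrete instance of this construction (our illustration; the name `scenario_set` is made up), one can build the $\tau$-values for the cyclic group $ C_{n+1} $ and verify the moment conditions (\ref{cov}) by exact summation over the group; for odd $ n+1 $ the transform $\varphi$ indeed produces an orthonormal family.

```python
import cmath

def scenario_set(n):
    """tau-values from the characters of C_{n+1}: tau^k = phi(eta^{k j}) with
    phi(x + iy) = x + y, and uniform probabilities 1/(n+1) on the group."""
    m = n + 1
    eta = cmath.exp(2j * cmath.pi / m)
    phi = lambda z: z.real + z.imag
    # row j = (tau^1(j), ..., tau^n(j)) on the j-th group element
    return [[phi(eta ** (k * j)) for k in range(1, m)] for j in range(m)]

# exact check of the moment conditions (cov) for n = 2 (group C_3)
vals = scenario_set(2)
m = len(vals)
mean = [sum(row[k] for row in vals) / m for k in range(2)]
cov = [[sum(row[i] * row[j] for row in vals) / m for j in range(2)]
       for i in range(2)]
```

For $ n=2 $ the sums over $ C_3 $ give mean zero and identity covariance exactly, as required by (\ref{cov}).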
Therefore, the {\em scenarios} are generated by a random walk on the ring of {\em integers} of an algebraic number field. The easiest case ($n=2$) is studied in \citet{Aka_Parisian}. \section{Proofs}\label{proof} \subsection{A Proof of Theorem \ref{ItoF}}\label{PDIF} Let $ L (\tau) $ be the linear space of $\tau $-measurable real-valued random variables. Since $\tau $ takes only $ (n+1) $ distinct values, the dimension of $ L (\tau) $ is $ (n+1) $. On the other hand, as a matter of course, the coordinate maps $ \tau^1,...,\tau^n $ are members of $ L (\tau) $. The moment condition (\ref{cov}) says that $ \{ \tau^1,....,\tau^n \} $ and the constant function $ \mathbf{1} $ are mutually orthogonal with respect to the inner product $ \langle x,y \rangle = \mathbf{E} [xy] $. Hence $ \{ \mathbf{1}, \tau^1,...,\tau^n \} $ is an orthonormal basis of $ L (\tau) $. The orthogonal expansion of $ f (t, x + N^{-1/2} \tau) $ with respect to the basis $ \{ \mathbf{1}, \tau^1,...,\tau^n \} $ is as follows: \begin{equation*}\label{Fourier2} \begin{split} f (t, x + N^{-1/2} \tau) &= \sum_{k=1}^n \mathbf{E} [f (t, x + N^{-1/2} \tau)\tau^k ] \tau^k + \mathbf{E} [f (t, x + N^{-1/2} \tau)] \\ &= \sum_{k=1}^n \left( \sum_{j=0}^{n} \mathbf{P} (\tau=e_j) f(t,x+ N^{-1/2} e_j ) \frac{e_{k,j}}{e_{0,j}} \right) \tau^k \\ & \hspace{3cm} + \sum_{j=0}^{n} \mathbf{P} (\tau=e_j) f(t,x+ N^{-1/2} e_j ) \\ &= \sum_{k=1}^n \left( \sum_{j=0}^{n} f(t,x+ N^{-1/2} e_j ) e_{0,j} e_{k,j} \right) \tau^k \\ & \hspace{3cm} + \sum_{j=0}^{n} e_{0,j}^2 f(t,x+ N^{-1/2} e_j ) \\ &= \sum_{k=1}^n \partial^N_k f (t,x) \frac{\tau^k}{\sqrt{N}} + \frac{1}{2N} \Delta^N f (t,x)\, + f(t,x).
\end{split} \end{equation*} Substituting $ X^{N}_u $ for $ x $ and $ X^{N,k}_{u+N^{-1}} - X^{N,k}_u $ for $ \tau^k/\sqrt{N} $, we have \begin{equation}\label{ItoFD} \begin{split} &f(u+ N^{-1}, X^N_{u + N^{-1}})-f(u,X_u^N) \\ &\,\, = \sum_{k=1}^n \partial^N_k f (u+ N^{-1}, X^N_u) ( X^{N,k}_{u+N^{-1}} - X^{N,k}_u ) \\ & \qquad + ( \frac{1}{2} \Delta^N + \partial^N_t ) f (u+ N^{-1}, X^N_u)/N. \end{split} \end{equation} By summing up (\ref{ItoFD}) for $ u=0, 1/N, 2/N,...,([Nt]-1)/N $, we obtain (\ref{itofujita}). Let us consider next the following formal Taylor expansion of $ f (t, x + N^{-1/2} \tau ) $ with respect to $ N^{-1/2} \tau $: \begin{equation*} \begin{split} &f ( t, x + N^{-1/2} \tau ) \\ &\quad = f(t,x) + \frac{1}{\sqrt{N}} \langle \nabla f, \tau \rangle + \frac{1}{2N} \langle \nabla^{\otimes 2} f, \tau^{\otimes 2} \rangle \\ & \hspace{3cm} + \cdots + \frac{1}{m!\, N^{m/2}} \langle \nabla^{\otimes m} f, \tau^{\otimes m} \rangle + \cdots \end{split} \end{equation*} Recalling (or observing from the proof given above) that \begin{equation*} \partial_k^N f (t,x) = \sqrt{N} \mathbf{E} [f (t, x + N^{-1/2} \tau)\tau^k ], \end{equation*} and \begin{equation*} \Delta^N f (t,x) = 2 N \mathbf{E} [f (t, x + N^{-1/2} \tau) -f (t, x)], \end{equation*} we have the following formal expansions: \begin{equation*} \begin{split} \partial_k^N f (t,x) & = \sqrt{N}\, \mathbf{E} [f (t, x )\tau^k ] + \mathbf{E} [\langle \nabla f, \tau \rangle \tau^k ] \\ & \quad + \frac{1}{2\sqrt{N}} \, \mathbf{E} [\langle \nabla^{\otimes 2} f, \tau^{\otimes 2} \rangle \tau^k ] \\ & \quad + \frac{1}{6 N} \mathbf{E} [\langle \nabla^{\otimes 3} f, \tau^{\otimes 3} \rangle \tau^k ] + \cdots \end{split} \end{equation*} and \begin{equation*} \begin{split} \frac{1}{2}\Delta^N f (t,x) & = \sqrt{N} \mathbf{E} [\langle \nabla f, \tau \rangle ] + \frac{1}{2} \, \mathbf{E} [\langle \nabla^{\otimes 2} f, \tau^{\otimes 2} \rangle ] \\ & \quad + \frac{1}{6\sqrt{N}} \mathbf{E} [\langle \nabla^{\otimes 3} f, \tau^{\otimes 3} \rangle] \\ & \quad + \frac{1}{24 N} \mathbf{E} [\langle \nabla^{\otimes 4} f, \tau^{ \otimes 4} \rangle ] + \cdots. \end{split} \end{equation*} Now the assertions (ii) and (iii) are verified, since for $ f(t,\cdot) \in C^k $ the expansion up to the $ k $-th term is valid. The assertion (v) is verified by looking at the case $ \mathbf{P} (\tau = \pm 1 )= 1/2 $, where $ \mathbf{E} [\tau^3 ] = 0 $. The assertion (iv) is a consequence of the following lemma. \qed \begin{lem} Suppose that $ \tau: \Omega \to {\mathbf R} ^n $ satisfies (\ref{cov}) and $ \sharp \Omega = n+1 $. Then if $ n \geq 2 $, there exists $ (i,j,k) \in \{ 1,...,n \}^3 $ such that $ \mathbf{E} [\tau^i \tau^j \tau^k ] \ne 0 $. \end{lem} \begin{proof} Denote $ \mathcal{D} = (e_{i,j})_{0 \leq i,j \leq n} $ and let \begin{equation*} \mathcal{D}_k = \mathrm{diag} [e_{k,0}/e_{0,0}, e_{k,1}/e_{0,1}, \cdots, e_{k,n}/e_{0,n}], \,\,k=1,...,n. \end{equation*} Then one finds that the $ (i,j) $-th component of $ \mathcal{D}^* \mathcal{D}_k \mathcal{D} $ is given by \begin{equation*} d_{i,j} = \sum_{l=0}^n \frac{e_{i,l}e_{k,l} e_{j,l}}{e_{0,l}} = \mathbf{E} [\tau^i \tau^k \tau^j], \end{equation*} where conventionally $ \tau^0 \equiv 1 $ and $ d_{i,j} $ for $ 0 \leq i,j \leq n $ are numbered as follows: \begin{equation*} \mathcal{D}^* \mathcal{D}_k \mathcal{D} = \begin{pmatrix} d_{0,0} & d_{0,1} & \cdots & d_{0,n} \\ d_{1,0} & d_{1,1} & \cdots & d_{1,n} \\ \vdots & \ddots & & \vdots \\ d_{n,0} & d_{n,1} & \cdots & d_{n,n} \end{pmatrix} . \end{equation*} If we assume $ \mathbf{E} [\tau^i \tau^j \tau^k ] = 0 $ for all $ (i,j,k) \in \{ 1,...,n \}^3 $, then for arbitrary fixed $ k $ we have $ d_{i,j} = 0 $ for every $ (i,j) \in \{ 1,...,n \}^2 $.
Since $ d_{0,j} = \mathbf{E} [ \tau^k \tau^j ] = \delta_{k,j} $, we notice that $ \mathrm{rank}\, \mathcal{D}^* \mathcal{D}_k \mathcal{D} = \mathrm{rank}\, \mathcal{D}_k = 2 $. This implies, since $ \mathcal{D}_k $ is a diagonal matrix, that $ e_{k,j} = 0 $ except for exactly two $j$'s, which we write $ k_+ $ and $ k_- $. We may assume without loss of generality that $ e_{k,k_-} < 0 < e_{k,k_+} $, since $ \mathbf{E} [\tau^k] =e_{k,k_-} e_{0,k_-} + e_{k,k_+} e_{0,k_+} $ must be zero. This in turn implies that the sets $ \{k_-,k_+\} $, $ k=1,...,n $, must be pairwise disjoint to fulfill $ \mathbf{E} [ \tau^k \tau^{k'} ] = 0 $ for $ k \ne k' $. Hence finally we notice that $ 2 n \leq n+1 $, which implies $ n =1 $. \end{proof} \subsection{Proof of Theorem \ref{Euro}}\label{PEuro} We will write \begin{equation*} \tilde{f} (x) = ( f(x+ N^{-1/2} {e}_0),..., f(x+N^{-1/2} e_{n}) ) \end{equation*} for $ f : {\mathbf R} ^n \to {\mathbf R} $. Note that $ \tilde{f} $ is a map to $ {\mathbf R} ^{n+1} $. As in the above proof, we denote $ \mathcal{D} = (e_{i,j})_{0 \leq i,j \leq n} $. Then we have \begin{equation*} \mathcal{D} \tilde{f}(x) = \left( f(x) + (2N)^{-1} \Delta^N f(x), N^{-1/2} \partial^N_1 f(x),..., N^{-1/2} \partial^N_n f(x) \right). \end{equation*} Since $ \mathcal{D} H^N = \mathcal{D} \tilde{h}^N = \Sigma^N + (a,0,\ldots,0)^* $ for some $ a=a(t,x) $, we have \begin{equation}\label{DM} \Sigma^N [\mathcal{D} H^N (t,x)]^{-1} = \begin{pmatrix} \pi^N_1 (t,x) & \cdots & \pi^N_{n+1} (t,x) \\ \mathbf{0} & \quad N^{1/2} I_n & \, \end{pmatrix} \end{equation} where $ \mathbf{0} = (0,...,0)^* \in {\mathbf R} ^n $, $ I_n $ is the unit $ n \times n $ matrix, and \begin{equation} \pi^N = (\pi^N_1,\ldots,\pi^N_{n+1}) = (1^N h^N) [\mathcal{D} H^N]^{-1} = (1^N h^N)[H^N]^{-1} \mathcal{D}^{-1} .
\end{equation} Since $ \mathcal{D}^{-1}=\mathcal{D}^* $, we have $ \pi^N_1=1^N A $, the sum of the components of $ (1^N h^N) [H^N]^{-1} $, which is strictly positive by assumption (i); hence $ \Sigma^N $ is invertible. Using (\ref{DM}), we have \begin{equation} \begin{split} \Sigma^N [\mathcal{D} H^N]^{-1} \mathcal{D} \tilde{v}^N =& [\pi^N \mathcal{D} \tilde{v}^N,\partial^N_1 v^N,\ldots,\partial^N_n v^N]^* \\ =& (1^N,\nabla^N)^* v^N. \end{split} \end{equation} Here we use the equality $ (1^N h^N) [H^N]^{-1} \tilde{v}^N = 1^N v^N $, which comes from (\ref{recursiveE}). Hence we have \begin{equation}\label{7.4} \mathcal{D} \tilde{v}^N = \mathcal{D} H^N [\Sigma^N]^{-1} (1^N,\nabla^N)^* v^N. \end{equation} In particular, we have the following relation from the first component of (\ref{7.4}): \begin{equation} v^N + \frac{1}{2N} \Delta^N v^N = \left( h^N + \frac{1}{2N} \Delta^N h^N \right) [\Sigma^N]^{-1} (1^N,\nabla^N)^* v^N . \end{equation} Since \begin{equation} v^N = 1^N v^N + N^{-1} \partial ^N_t v^N, \ h^N = 1^N h^N + N^{-1} \partial ^N_t h^N \end{equation} and \begin{equation} (1^N h^N)[\Sigma^N]^{-1} = (1,0,\ldots,0), \end{equation} we have \begin{equation} \partial ^N_t v^N + \frac{1}{2} \Delta^N v^N = \left( \partial ^N_t h^N + \frac{1}{2} \Delta^N h^N \right) [\Sigma^N]^{-1} (1^N,\nabla^N)^* v^N . \end{equation} This is exactly (\ref{disdif}). \qed \subsection{Proof of Theorem \ref{limittheorem}}\label{PLMT} The following proof is routine work in the context of the finite difference method. First we will show that our scheme is {\em stable}. Let $ u^N $ be the unique solution of the following difference equation: \begin{equation}\label{disdif2} \begin{split} & u^N(T_N,x) = \Psi^N(x) ; \ x \in \mathbf{R}^n, \\ & \partial ^N_t u^N + \frac{1}{2} \Delta^N u^N - \langle b^N,\nabla^N u^N \rangle - c^N (1^N u^N) = g^N ; \\ & \qquad t \in (0,T_N)_N , \ x \in {\mathbf R} ^n.
\end{split} \end{equation} where $ g^N $ and $ \Psi^N $ are given functions on $ {\mathbf R} _+ \times {\mathbf R} ^n $ and $ {\mathbf R} ^n $ respectively. We claim that \begin{equation}\label{stability} \sup_{x \in {\mathbf R} ^n} |u^N (t,x)| \leq \sup_{(s,y) \in [t,T] \times {\mathbf R} ^n} \left\{ (T-t)|g^N (s,y)| + |\Psi^N (y)| \right\} \end{equation} for every $ t \in [0,T]_N $. This inequality shows the stability of our scheme. To prove (\ref{stability}), we first remark that the equation in (\ref{disdif2}) can be rewritten as \begin{equation*} 1^N u^N = 1^N g^N /N + (1^N h^N) [H^N]^{-1} \tilde{u}^N, \end{equation*} which comes from Theorem \ref{Euro}. By the positivity assumption on $ h^N(t,x) [H^N(t+N^{-1},x)]^{-1} $, we see that $ A^{-1} h^N(t,x) [H^N(t+N^{-1},x)]^{-1} $ defines the transition probabilities of a time-inhomogeneous Markov chain $ (Y^N_t,\mathbf{P}^x_t)_{t \in [0,T]_N, x \in \mathbf{R}^n} $: $ \mathbf{P}^x_t(Y^N_{t}=x)=1 $ and \begin{equation} \begin{split} & \mathbf{P}^x_t(Y^N_{t+N^{-1}}=x+N^{-1/2}e_j) \\ & \mbox{$ = $ the $ j $-th component of} \\ & \quad A(t,x)^{-1} h^N(t,x) [H^N(t+N^{-1},x)]^{-1} . \end{split} \end{equation} Denoting the expectation with respect to $ \mathbf{P}^x_t $ by $ \mathbf{E}^x_t $, we have \begin{equation}\label{Markovian} u^N(t,x) = \frac{1}{N} g^N(t,x) + \mathbf{E}^x_t \left[ A(t,Y^N_t) u^N(t+N^{-1},Y^N_{t+N^{-1}}) \right]. \end{equation} By iterating (\ref{Markovian}) and by the Markov property, we have \begin{equation} \begin{split} u^N(t,x) =& \frac{1}{N} \sum_{s \in [t,T]_N} \mathbf{E}^x_t \left[ g^N(s,Y^N_{s}) \prod_{u \in [t,s)_N} A(u,Y^N_{u}) \right] \\ +& \mathbf{E}^x_t \left[ \Psi^N(Y^N_{T_N}) \prod_{u \in [t,T_N)_N} A(u,Y^N_{u}) \right] . \end{split} \end{equation} (This is a discrete version of the Feynman-Kac formula.) Since $ 0< A < 1 $, we obtain (\ref{stability}).
Next, we will show that \begin{equation}\label{consistency} \sup_N \sup_{(t,x) \in [0,T]_N \times \mathbf{R}^n} N^{1/2} |g^N(t,x)| < \infty \end{equation} where \begin{equation}\label{7.9} g^N := \partial ^N_t v + \frac{1}{2} \Delta^N v - \langle b^N, \nabla^N v \rangle - c^N (1^N v) \end{equation} for the solution $ v $ to (\ref{EuroPDE}). Since \begin{equation} g^N = g^N - \frac{\partial v}{\partial t} - \frac{1}{2} \Delta v + \langle b,\nabla v \rangle + c v , \end{equation} we have \begin{equation} \begin{split} |g^N| \le& \left| \frac{\partial v}{\partial t} - \partial ^N_t v \right| + \frac{1}{2} |\Delta v - \Delta^N v| \\ +& |b^N| |\nabla v - \nabla^N v| + |\nabla v| |b-b^N| \\ +& |c^N| |v - 1^N v| + |v| |c-c^N| . \end{split} \end{equation} By Assumption \ref{standing} and the consistency estimate (\ref{orders}), we obtain (\ref{consistency}). Finally, by combining (\ref{stability}) and (\ref{consistency}), and by the uniform continuity of $ v $, we have the desired result, since $ v - v^N $ is the solution to (\ref{disdif2}) with $ \Psi^N(x) = \Phi(x)-\Phi^N(x) $ and $ g^N $ given by (\ref{7.9}). \qed \end{document}
\begin{document} \title{Larger greedy sums for reverse partially greedy bases} \author{H\`ung Vi\d{\^e}t Chu} \email{\textcolor{blue}{\href{mailto:[email protected]}{[email protected]}}} \address{Department of Mathematics, University of Illinois at Urbana-Champaign, Urbana, IL 61820, USA} \begin{abstract} An interesting result due to Dilworth et al. was that if we enlarge greedy sums by a constant factor $\lambda > 1$ in the condition defining the greedy property, then we obtain an equivalent formulation of the almost greedy property, a strictly weaker property. Previously, the author of the present paper showed that enlarging greedy sums by $\lambda$ in the condition defining the partially greedy (PG) property also strictly weakens the property. However, enlarging greedy sums in the definition of reverse partially greedy (RPG) bases by Dilworth and Khurana again gives RPG bases. The companionship between PG and RPG bases suggests the existence of a characterization of RPG bases which, when greedy sums are enlarged, gives an analog of a result that holds for partially greedy bases. In this paper, we show that such a characterization indeed exists, answering positively a question previously posed by the author. \end{abstract} \subjclass[2020]{41A65; 46B15} \keywords{reverse partially greedy; greedy sum; bases.} \thanks{The author is thankful to Timur Oikhberg for helpful comments on an earlier draft of this paper.} \maketitle \tableofcontents \section{Introduction} Let $\mathbb{X}$ be an infinite-dimensional Banach space (with dual $\mathbb{X}^*$) over the field $\mathbb K = \mathbb R$ or $\mathbb C$. We define a \textbf{basis} to be any countable collection $\mathcal{B} = (e_n)_{n=1}^\infty$ such that i) the span of $(e_n)_n$ is norm-dense in $\mathbb{X}$; ii) there exist biorthogonal functionals $(e^*_n)_n\subset \mathbb{X}^*$ such that $e^*_{n}(e_m) = \delta_{n,m}$; and iii) there exist $c_1, c_2> 0$ such that $c_1\leqslant \|e_n\|, \|e_n^*\|\leqslant c_2$ for all $n$.
In the literature, the condition of totality, $\overline{\spann(e^*_n)}^{w^*} = \mathbb{X}^*$, is sometimes required. While totality guarantees that each $x$ is uniquely determined by the sequence $(e_n^*(x))_{n=1}^{\infty}$, such uniqueness is not important in this paper, so we do not assume that our basis is total. In 1999, Konyagin and Temlyakov \cite{KT1} introduced the notion of greedy bases as follows: for each $x\in\mathbb{X}$, a finite set $\Lambda$ is a greedy set of $x$ if $$\min_{n\in \Lambda}|e_n^*(x)| \ \geqslant\ \max_{n\notin \Lambda}|e_n^*(x)|.$$ For $m\in \mathbb{N}$, let $G(x, m)$ denote the set of all greedy sets of $x$ of cardinality $m$. A basis is said to be \textbf{greedy} if there exists a constant $\mathbf C\geqslant 1$ such that \begin{equation}\label{e8}\|x-P_{\Lambda}(x)\|\ \leqslant\ \mathbf C\sigma_m(x), \forall x\in\mathbb{X}, \forall m\in\mathbb{N}, \forall \Lambda\in G(x, m),\end{equation} where $$\sigma_m(x)\ :=\ \inf\left\{\left\|x-\sum_{n\in A}a_ne_n\right\|\,:\, a_n\in\mathbb{K}, |A|\leqslant m\right\}.$$ Later, Dilworth et al. \cite{DKKT} introduced the so-called \textbf{almost greedy} bases: a basis is almost greedy if there exists a constant $\mathbf C\geqslant 1$ such that \begin{equation}\label{e9}\|x-P_{\Lambda}(x)\|\ \leqslant\ \mathbf C\inf\{\|x-P_A(x)\|: |A| = m\}, \forall x\in \mathbb{X}, \forall m\in\mathbb{N}, \forall \Lambda\in G(x,m),\end{equation} where $P_A(x) = \sum_{n\in A}e_n^*(x)e_n$. By definition, a greedy basis is almost greedy, but an almost greedy basis is not necessarily greedy (see \cite[Example 10.2.9]{AK}). Among other results, Dilworth et al. showed a surprising equivalence of almost greedy bases.
In particular, for any fixed $\lambda > 1$, if we enlarge greedy sums from size $m$ to $\lceil \lambda m\rceil$ in \eqref{e8} to have \begin{equation}\label{e10}\|x-P_{\Lambda}(x)\|\ \leqslant\ \mathbf C\sigma_m(x), \forall x\in\mathbb{X}, \forall m\in\mathbb{N}, \forall \Lambda\in G(x, \lceil \lambda m\rceil),\end{equation} then the condition \eqref{e10} is equivalent to the condition \eqref{e9} (with possibly different constants). That is, by increasing the size of greedy sums, we move from the realm of greedy bases to the strictly weaker realm of almost greedy bases. Continuing the work, the author of the present paper \cite{C1} investigated the situation for \textbf{partially greedy} (PG) bases (also introduced by Dilworth et al. for Schauder bases \cite{DKKT} and by Berasategui et al. for general bases \cite{BBL}), which are defined to satisfy the condition: there exists a constant $\mathbf C\geqslant 1$ such that \begin{equation}\label{e11}\|x-P_{\Lambda}(x)\|\ \leqslant\ \mathbf C\inf_{k\le m}\left\|x-\sum_{n=1}^k e_n^*(x)e_n\right\|, \forall x\in \mathbb{X}, \forall m\in\mathbb{N}, \forall \Lambda\in G(x,m).\end{equation} According to \cite[Theorem 1.5]{C1}, if we enlarge the size of greedy sums from $m$ to $\lceil\lambda m\rceil$ in \eqref{e11}, we obtain strictly weaker greedy-type bases. Therefore, similar to Dilworth et al.'s result, enlarging greedy sums in the PG property strictly weakens the property. To complete the picture, let us consider \textbf{reverse partially greedy} (RPG) bases introduced by Dilworth and Khurana \cite{DK}. The original definition of RPG \cite{DK} is somewhat more technical: first, given two sets $A, B\subset\mathbb{N}$, we write $A > B$ if for all $a\in A$ and $b\in B$, we have $a > b$. Analogous definitions hold for the inequalities $<, \geqslant$, and $\leqslant$. Also, it holds vacuously that $\emptyset > A$ and $\emptyset < A$ for all sets $A$.
A basis is RPG if there exists a constant $\mathbf C\geqslant 1$ such that \begin{equation}\label{e12}\|x-P_{\Lambda}(x)\|\ \leqslant\ \mathbf C\widetilde{\sigma}^{R, \Lambda}_m(x), \forall x\in\mathbb{X}, \forall m\in\mathbb{N}, \forall \Lambda\in G(x,m),\end{equation} where $$\widetilde{\sigma}^{R,\Lambda}_m(x)\ =\ \inf\{\|x-P_A(x)\|\,:\, |A|\leqslant m, A > \Lambda\}.$$ Recent work has confirmed that RPG bases are truly companions of PG bases (see \cite{C1, DK, K}). That is, if a result holds true for PG bases, there is a corresponding result that holds for RPG bases. We therefore suspect that, as in the case of PG bases, if we enlarge the size of greedy sums in a condition that defines RPG bases, we would obtain strictly weaker greedy-type bases. However, by \cite[Theorem 6.4]{C1}, enlarging greedy sums in \eqref{e12} still gives us RPG bases. This hints at the existence of an equivalent reformulation of RPG bases such that when we enlarge greedy sums in the reformulation, we strictly weaken the RPG property. The main results in this paper show that such an equivalent reformulation indeed exists. Given a set $A\subset\mathbb{N}$, define $$\widecheck{\sigma}^{A}_m(x)\ :=\ \inf\{\|x-P_{I}(x)\|\, :\, \mbox{either } I = \emptyset\mbox{ or } I\mbox{ is an interval}, A \leqslant \max I, |I|\leqslant m\}.$$ Our first result gives an equivalent formulation of the notion of RPG bases. \begin{thm}\label{m2} A basis $\mathcal{B}$ is RPG if and only if there exists $\mathbf C \geqslant 1$ such that \begin{equation}\label{e1}\|x-P_{\Lambda}(x)\|\ \leqslant\ \mathbf C\widecheck{\sigma}^{\Lambda}_m(x),\forall x\in\mathbb{X}, \forall m\in\mathbb{N}, \forall \Lambda\in G(x, m).\end{equation} \end{thm} To the author's knowledge, \eqref{e1} is the first characterization of RPG bases in which the right-hand side involves a projection onto consecutive elements of the basis. This resembles \eqref{e11}, which defines partially greedy bases.
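To make the quantity $\widecheck{\sigma}^{\Lambda}_m$ concrete, the following sketch (ours, not from the paper; $0$-indexed, in $\ell^1$, where projection errors are exact coefficient sums) evaluates it for a finitely supported vector by brute force over all admissible intervals.

```python
# Hypothetical helper (not from the paper): compute sigma-check in l^1
# by enumerating all intervals I with |I| <= m and max(Lambda) <= max(I),
# plus the allowed choice I = empty set.

def sigma_check(x, lam, m):
    total = sum(abs(c) for c in x)
    best = total                                       # I = empty set
    for hi in range(max(lam), len(x)):                 # hi plays the role of max I
        for lo in range(max(0, hi - m + 1), hi + 1):   # enforces |I| <= m
            best = min(best, total - sum(abs(x[n]) for n in range(lo, hi + 1)))
    return best

x = [1.0, 5.0, 2.0, 4.0, 3.0, 1.0]
lam = {1, 3}      # a greedy set of x of cardinality 2 (coefficients 5 and 4)
greedy_error = sum(abs(x[n]) for n in range(len(x)) if n not in lam)   # 7.0
print(greedy_error, sigma_check(x, lam, 2))
```

Here the best admissible interval is $I=\{3,4\}$, giving $\widecheck{\sigma}^{\Lambda}_2(x)=9$, and the greedy error $7$ is indeed bounded by it, consistent with the canonical $\ell^1$ basis being greedy and hence RPG.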
Indeed, \eqref{e1} highlights the correspondence between partially greedy bases and RPG bases by revealing the relative position of the interval $I$ with respect to the greedy set $\Lambda$. Specifically, for partially greedy bases, since we project onto the first elements of $\mathcal{B}$, we have $\min I \leqslant \Lambda$, while for RPG bases, $\Lambda\leqslant \max I$. We also have a new characterization of partially greedy bases. \begin{thm}\label{ma1} A basis $\mathcal{B}$ is PG if and only if there exists $\mathbf C \geqslant 1$ such that \begin{equation*}\|x-P_{\Lambda}(x)\|\ \leqslant\ \mathbf C\widehat{\sigma}^{\Lambda}_m(x),\forall x\in\mathbb{X}, \forall m\in\mathbb{N}, \forall \Lambda\in G(x, m),\end{equation*} where $$\widehat{\sigma}^{\Lambda}_m(x) \ :=\ \inf\{\|x-P_{I}(x)\|\, :\, \mbox{either } I = \emptyset\mbox{ or } I\mbox{ is an interval}, \Lambda \geqslant \min I, |I|\leqslant m\}.$$ \end{thm} Next, for any fixed $\lambda \geqslant 1$, we study the following condition: there exists $\mathbf C\geqslant 1$ such that \begin{equation}\label{e5} \|x-P_{\Lambda}(x)\|\ \leqslant\ \mathbf C\widecheck{\sigma}_m^{\Lambda}(x), \forall x\in\mathbb{X}, \forall m\in \mathbb{N},\forall \Lambda\in G(x, \lceil \lambda m\rceil). \end{equation} As stated above, our goal is to show that \eqref{e5} is strictly weaker than the RPG property, thereby obtaining an analog of Dilworth et al.'s theorem for RPG bases. First, we give bases that satisfy \eqref{e5} a name. \begin{defi}\normalfont A basis $\mathcal{B}$ is said to be ($\lambda$, RPG) of type II (or ($\lambda$, RPGII), for short) if $\mathcal{B}$ satisfies \eqref{e5}. In this case, the least constant in \eqref{e5} is denoted by $\mathbf C_{\lambda, rp}$. \end{defi} Here we use ``of type II'' to avoid confusion with ($\lambda$, RPG) bases in \cite[Definition 6.1]{C1}. A basis is ($\lambda$, RPG) of type I if we enlarge the size of greedy sums in \eqref{e12} from $m$ to $\lceil\lambda m\rceil$.
For all $\lambda\geqslant 1$, the ($\lambda$, RPG) property in \cite[Definition 6.1]{C1} is equivalent to the RPG property (see \cite[Theorem 6.4]{C1}). We shall show that this is not the case for the ($\lambda$, RPGII) property. \begin{thm}\label{m8} Let $\lambda > 1$. The following hold. \begin{enumerate} \item[i)] If $\mathcal{B}$ is RPG, then $\mathcal{B}$ is ($\lambda$, RPGII). \item[ii)] There exists an unconditional basis $\mathcal{B}$ that is ($\lambda$, RPGII) but is not RPG. \end{enumerate} \end{thm} To prove Theorem \ref{m8}, we characterize ($\lambda$, RPGII) bases by introducing the so-called reverse partial symmetry for largest coefficients (Definition \ref{d1}). As a corollary of our characterization, we characterize bases that satisfy \eqref{e1} with constant $\mathbf C = 1$ (or $1$-($1$, RPGII) bases) in the same manner as Berasategui et al. characterized $1$-strong partially greedy bases \cite{BBL}. \begin{cor}\label{ca1} A basis satisfies \eqref{e12} with $\mathbf C = 1$ if and only if it satisfies \eqref{e1} with $\mathbf C = 1$. \end{cor} For the full statement, see Corollary \ref{ca2}. \section{A characterization of RPG and PG bases} We recall some well-known results that will be used in due course. \begin{defi}[Konyagin and Temlyakov \cite{KT1}]\label{d3}\normalfont A basis $\mathcal{B}$ is \textbf{quasi-greedy} if there exists $\mathbf C > 0$ such that $$\|P_\Lambda(x)\|\ \leqslant\ \mathbf C\|x\|,\forall x\in\mathbb{X}, \forall m\in\mathbb{N}, \forall \Lambda\in G(x, m).$$ The least such $\mathbf C$ is denoted by $\mathbf C_q$ and is called the quasi-greedy constant. Also, when $\mathcal{B}$ is quasi-greedy, let $\mathbf C_\ell$ be the least constant such that $$\|x-P_\Lambda(x)\|\ \leqslant\ \mathbf C_\ell\|x\|,\forall x\in\mathbb{X}, \forall m\in\mathbb{N}, \forall \Lambda\in G(x, m).$$ We call $\mathbf C_\ell$ the suppression quasi-greedy constant.
\end{defi} A sequence $\varepsilon = (\varepsilon_n)_n$ is called a sign if $\varepsilon_n\in\mathbb{K}$ and $|\varepsilon_n| = 1$ for all $n$. For $x\in \mathbb{X}$ and a finite set $A\subset\mathbb{N}$, let $$1_A \ =\ \sum_{n\in A} e_n, 1_{\varepsilon A} \ =\ \sum_{n\in A}\varepsilon_n e_n, \mbox{ and }P_{A^c}(x)\ =\ x - P_A(x).$$ Two important properties of quasi-greedy bases are the \textbf{UL property} and the uniform boundedness of the truncation operators. \begin{enumerate} \item[i)] The UL property: for all finite $A\subset\mathbb{N}$ and scalars $(a_n)$, we have $$\frac{1}{2\mathbf C_q}\min |a_n|\|1_A\|\ \leqslant\ \left\|\sum_{n\in A}a_ne_n\right\|\ \leqslant\ 2\mathbf C_q\max|a_n|\|1_A\|,$$ which was first proved in \cite{DKKT}. \item[ii)] The uniform boundedness of the truncation operators: for each $\alpha > 0$, we define the truncation function $T_\alpha$ as follows: for $b\in\mathbb{K}$, $$T_{\alpha}(b)\ =\ \begin{cases}\sgn(b)\alpha, &\mbox{ if }|b| > \alpha,\\ b, &\mbox{ if }|b|\leqslant \alpha.\end{cases}$$ We define the truncation operator $T_\alpha: \mathbb{X}\rightarrow \mathbb{X}$ as $$T_{\alpha}(x)\ =\ \sum_{n=1}^\infty T_\alpha(e_n^*(x))e_n \ =\ \alpha 1_{\varepsilon \Lambda_{\alpha}(x)}+ P_{\Lambda_\alpha^c(x)}(x),$$ where $\Lambda_\alpha(x) = \{n: |e_n^*(x)| > \alpha\}$ and $\varepsilon_n = \sgn(e_n^*(x))$ for all $n\in \Lambda_\alpha(x)$. \begin{thm}\label{bto}\cite[Lemma 2.5]{BBG} Let $\mathcal{B}$ be $\mathbf{C}_\ell$-suppression quasi-greedy. Then for any $\alpha > 0$, $\|T_\alpha\|\leqslant \mathbf{C}_\ell$. \end{thm} \end{enumerate} Now we recall a characterization of RPG bases due to Dilworth and Khurana \cite{DK}. \begin{defi}\normalfont A basis $\mathcal{B}$ is said to be \textbf{reverse conservative} if for some constant $\mathbf C > 0$, we have $$\|1_A\|\ \leqslant\ \mathbf C\|1_B\|, $$ for all finite sets $A, B\subset\mathbb{N}$ with $|A|\leqslant |B|$ and $B < A$. In this case, the least constant $\mathbf C$ is denoted by $\Delta_{rc}$.
\end{defi} \begin{thm}\label{m1}\cite[Theorem 2.7]{DK} A basis is RPG if and only if it is reverse conservative and quasi-greedy. \end{thm} We are ready to prove our first main result. \begin{proof}[Proof of Theorem \ref{m2}] Assume \eqref{e1}. By Theorem \ref{m1}, we need to show that $\mathcal{B}$ is both quasi-greedy and reverse conservative. First, we observe that $\mathcal{B}$ is $\mathbf C$-suppression quasi-greedy since by definition, $\widecheck{\sigma}^{\Lambda}_m(x) \leqslant \|x\|$ for all $x\in\mathbb{X}$, $m\in\mathbb{N}$ and $\Lambda\in G(x, m)$. Next, we show that $\mathcal{B}$ is reverse conservative. Let $A, B\subset\mathbb{N}$ be finite with $B < A$ and $|A|\leqslant |B|$. If $A = \emptyset$, there is nothing to prove. Suppose $A\neq \emptyset$. Define $x = 1_A + 1_B + 1_D$, where $D = [\min A, \max A]\backslash A$. Since $B\cup D$ is a greedy set of $x$ and $A\cup D$ is an interval of length $|A\cup D|\leqslant |B\cup D|$ with $B\cup D < \max(A\cup D)$, we have $$\|1_A\|\ =\ \|x - P_{B\cup D}(x)\|\ \leqslant\ \mathbf C\widecheck{\sigma}^{B\cup D}_{|B\cup D|}(x)\ \leqslant\ \mathbf C\|x-P_{A\cup D}(x)\|\ =\ \mathbf C\|1_B\|.$$ Now we assume that $\mathcal{B}$ is RPG. According to Theorem \ref{m1}, $\mathcal{B}$ is $\mathbf C_q$-quasi-greedy and $\Delta_{rc}$-reverse conservative for some $\mathbf C_q, \Delta_{rc}> 0$. Fix $x\in \mathbb{X}$, $m\in\mathbb{N}$, $\Lambda\in G(x,m)$, and a nonempty interval $I$ with $|I|\leqslant m$, $\Lambda\leqslant \max I$ (for $I = \emptyset$, the required bound follows from the triangle inequality: $\|x-P_{\Lambda}(x)\|\leqslant (1+\mathbf C_q)\|x\|$). We shall show that $$\|x-P_{\Lambda}(x)\|\ \leqslant\ (1+\mathbf C_q + 4\mathbf C^3_q\Delta_{rc})\|x-P_I(x)\|.$$ Write \begin{equation}\label{e2}\|x-P_{\Lambda}(x)\|\ \leqslant\ \|x-P_I(x)\| + \|P_{\Lambda\backslash I}(x)\| + \|P_{I\backslash \Lambda}(x)\|.\end{equation} Since $\Lambda\backslash I$ is a greedy set of $x - P_I(x)$, we get \begin{equation}\label{e3}\|P_{\Lambda\backslash I}(x)\|\ \leqslant\ \mathbf C_q\|x-P_I(x)\|.\end{equation} Let us estimate $\|P_{I\backslash \Lambda}(x)\|$.
Note that $\Lambda\backslash I < I\backslash \Lambda$. Indeed, suppose otherwise that there exist $a\in \Lambda\backslash I$ and $b\in I\backslash \Lambda$ with $a\geqslant b$. Then $a\geqslant b\geqslant \min I$ and $a\leqslant \max \Lambda\leqslant \max I$. Hence, $a\in I$, which contradicts $a\in \Lambda\backslash I$. Furthermore, $|I\backslash \Lambda|\leqslant |\Lambda\backslash I|$ because $|I|\leqslant |\Lambda|$. Since $\mathcal{B}$ is $\Delta_{rc}$-reverse conservative, we get $$\|1_{I\backslash \Lambda}\|\ \leqslant\ \Delta_{rc}\|1_{\Lambda\backslash I}\|.$$ We have \begin{align}\label{e4} \left\|\sum_{n\in I\backslash \Lambda}e_n^*(x)e_n\right\|&\ \leqslant\ \max_{n\in I\backslash \Lambda}|e_n^*(x)| \sup_{\varepsilon}\left\|1_{\varepsilon (I\backslash \Lambda)}\right\|\mbox{ by convexity}\nonumber\\ &\ \leqslant\ 2\mathbf C_q \min_{n\in \Lambda\backslash I} |e_n^*(x)|\left\|1_{ I\backslash \Lambda}\right\|\mbox{ by the UL property}\nonumber\\ &\ \leqslant\ 2\mathbf C_q\Delta_{rc} \min_{n\in \Lambda\backslash I} |e_n^*(x)|\left\|1_{ \Lambda\backslash I}\right\|\mbox{ by reverse conservativeness}\nonumber\\ &\ \leqslant\ 4\mathbf C^2_q\Delta_{rc}\|P_{\Lambda\backslash I}(x)\|\mbox{ by the UL property}. \end{align} From \eqref{e2}, \eqref{e3}, and \eqref{e4}, we obtain $$\|x - P_{\Lambda}(x)\|\ \leqslant\ (1+\mathbf C_q + 4\mathbf C^3_q\Delta_{rc})\|x - P_I(x)\|,$$ as desired. \end{proof} The proof of Theorem \ref{ma1} is the same as the proof of Theorem \ref{m2} with obvious modifications; it is left to the interested reader. \section{Larger greedy sums for RPG bases} Throughout this section, let $\lambda\geqslant 1$ be a real number. First, we define the notion of reverse partial symmetry for largest coefficients (RPSLC). For a finite set $A\subset\mathbb{N}$, let $s(A) = 0$ if $A= \emptyset$; otherwise, $s(A) = \max A - \min A + 1$. For a collection of sets $(A_i)_{i\in I}$, we write $\sqcup_i A_i$ to mean $A_j\cap A_k = \emptyset$ for any distinct $j, k\in I$.
Finally, $\|x\|_\infty\ :=\ \sup_{n}|e_n^*(x)|$. \begin{defi}\normalfont A vector $x\in\mathbb{X}$ is said to surround a finite set $A\subset\mathbb{N}$ if either $A = \emptyset$ or $\supp(x)\cap [\min A, \max A] = \emptyset$. \end{defi} \begin{defi}\label{d1}\normalfont A basis is said to be ($\lambda$, RPSLC) if there exists a constant $\mathbf C \geqslant 1$ such that $$\|x + 1_{\varepsilon A}\|\ \leqslant\ \mathbf C\|x+1_{\delta B}\|,$$ for all $x\in\mathbb{X}$ with $\|x\|_\infty\leqslant 1$, for all signs $\varepsilon, \delta$, and for all finite sets $A, B\subset\mathbb{N}$ such that $(\lambda - 1)s(A) + |A|\leqslant |B|$, $B\sqcup \supp(x)$, $B < A$, and $x$ surrounds $A$. In this case, the least such $\mathbf C$ is denoted by $\Delta_{\lambda, rpl}$. \end{defi} \begin{rek}\normalfont While the definition of ($\lambda$, RPSLC) may seem unnatural and technical, we shall use it in proving Corollary \ref{ca2} and eventually Theorem \ref{m8}, both of which are relevant to the literature on greedy-type bases. \end{rek} We have an easy but useful reformulation of ($\lambda$, RPSLC). \begin{prop}\label{p1} A basis $\mathcal{B}$ is $\Delta_{\lambda, rpl}$-($\lambda$, RPSLC) if and only if \begin{equation}\label{e6}\|x\|\ \leqslant\ \Delta_{\lambda, rpl}\|x - P_A(x) + 1_{\varepsilon B}\|,\end{equation} for all $x\in\mathbb{X}$ with $\|x\|_\infty\leqslant 1$, for all signs $\varepsilon$, and for all finite sets $A, B\subset\mathbb{N}$ such that $(\lambda - 1)s(A) + |A|\leqslant |B|$, $B\sqcup \supp(x-P_A(x))$, $B < A$, and $x-P_A(x)$ surrounds $A$. \end{prop} \begin{proof} Assume that $\mathcal{B}$ is $\Delta_{\lambda, rpl}$-($\lambda$, RPSLC). Let $x, A, B, \varepsilon$ be chosen as in \eqref{e6}. We have \begin{align*} \|x\|\ =\ \left\|x - P_A(x) + \sum_{n\in A}e_n^*(x)e_n\right\|&\ \leqslant\ \sup_{\delta}\left\|x-P_A(x) + 1_{\delta A}\right\|\mbox{ by convexity}\\ &\ \leqslant\ \Delta_{\lambda, rpl}\|x-P_A(x) + 1_{\varepsilon B}\|.
\end{align*} Next, assume that $\mathcal{B}$ satisfies \eqref{e6}. Let $x, A, B, \varepsilon, \delta$ be chosen as in Definition \ref{d1}. Let $y = x+ 1_{\varepsilon A}$. By \eqref{e6}, \begin{align*} \|x+1_{\varepsilon A}\|\ =\ \|y\|\ \leqslant\ \Delta_{\lambda, rpl}\|y - P_A(y) + 1_{\delta B}\|\ =\ \Delta_{\lambda, rpl}\|x+1_{\delta B}\|. \end{align*} This completes our proof. \end{proof} The next theorem characterizes ($\lambda$, RPGII) bases. \begin{thm}\label{m3} A basis $\mathcal{B}$ is ($\lambda$, RPGII) if and only if it is quasi-greedy and ($\lambda$, RPSLC). \end{thm} \begin{thm}[Analog of Theorem 3.1 in \cite{C1}]\label{m4} Let $\mathcal{B}$ be a $\mathbf{C}_\ell$-suppression quasi-greedy basis. The following hold. \begin{itemize} \item [i)] If $\mathcal{B}$ is $\mathbf C_{1, rp}$-($1$, RPGII), then $\mathcal{B}$ is $\mathbf C_{1, rp}$-($1$, RPSLC). \item [ii)] If $\mathcal{B}$ is $\mathbf C_{\lambda, rp}$-($\lambda$, RPGII), then $\mathcal{B}$ is $\mathbf C_\ell\mathbf C_{\lambda, rp}$-($\lambda$, RPSLC). \item [iii)] If $\mathcal{B}$ is $\Delta_{\lambda, rpl}$-($\lambda$, RPSLC), then $\mathcal{B}$ is $\mathbf C_\ell\Delta_{\lambda, rpl}$-($\lambda$, RPGII). \end{itemize} \end{thm} \begin{proof} i) Let $x, A, B, \varepsilon, \delta$ be as in Definition \ref{d1}. We need to show that $$\|x+1_{\varepsilon A}\|\ \leqslant\ \mathbf C_{1, rp}\|x+1_{\delta B}\|.$$ If $A = \emptyset$, then $$\|x+1_{\varepsilon A}\|\ =\ \|x\|\ \leqslant\ \mathbf C_{\ell}\|x+1_{\delta B}\|\ \leqslant\ \mathbf C_{1, rp}\|x+1_{\delta B}\|,$$ where the last inequality is due to the proof of Theorem \ref{m2}. If $A \neq \emptyset$, form $y = x + 1_{\varepsilon A} + 1_{\delta B} + 1_D$, where $D = [\min A, \max A]\backslash A$.
We have $$|D\cup A|\ \leqslant\ |D\cup B|\mbox{ and }D\cup B \leqslant \max(D\cup A).$$ Hence, we obtain \begin{align*}\|x+1_{\varepsilon A}\|\ =\ \|y - P_{B\cup D}(y)\|\ \leqslant\ \mathbf C_{1, rp}\widecheck{\sigma}^{B\cup D}_{|B\cup D|}(y)&\ \leqslant\ \mathbf C_{1, rp}\|y- P_{A\cup D}(y)\|\\ &\ =\ \mathbf C_{1, rp}\|x + 1_{\delta B}\|, \end{align*} as desired. ii) Let $x, A, B, \varepsilon, \delta$ be as in Definition \ref{d1}. If $A = \emptyset$, then by the proof of item i), we are done. If $A \neq\emptyset$, form $y = x + 1_{\varepsilon A} + 1_{\delta B} + 1_D$, where $D = [\min A, \max A]\backslash A$. Observe that $B\cup D$ is a greedy set of $y$ and $$|B\cup D| \ =\ |B| + |D|\ \geqslant\ (\lambda-1)s(A) + |A| + (s(A) - |A|)\ =\ \lambda s(A).$$ Choose $\Lambda\subset B\cup D$ such that $|\Lambda| = \lceil \lambda s(A)\rceil$. Clearly, $$|A\cup D|\ =\ s(A) \mbox{ and }\Lambda\ \leqslant\ \max (A\cup D).$$ We obtain \begin{align*}\|x+1_{\varepsilon A}\|\ =\ \|y - P_{B\cup D}(y)\|\ \leqslant\ \mathbf C_\ell\|y-P_{\Lambda}(y)\|&\ \leqslant\ \mathbf C_\ell\mathbf C_{\lambda, rp}\widecheck{\sigma}^{\Lambda}_{s(A)}(y)\\ &\ \leqslant\ \mathbf C_\ell\mathbf C_{\lambda, rp}\|y - P_{A\cup D}(y)\|\\ &\ =\ \mathbf C_\ell\mathbf C_{\lambda, rp}\|x+1_{\delta B}\|. \end{align*} Therefore, $\mathcal{B}$ is $\mathbf C_\ell\mathbf C_{\lambda, rp}$-($\lambda$, RPSLC). iii) Assume that $\mathcal{B}$ is $\mathbf C_\ell$-suppression quasi-greedy and $\Delta_{\lambda, rpl}$-($\lambda$, RPSLC). Let $x\in\mathbb{X}$, $m\in\mathbb{N}$, $\Lambda\in G(x, \lceil \lambda m\rceil)$, and let $I$ be a nonempty interval with $|I|\leqslant m$ and $\Lambda\leqslant \max I$. We need to show that $$\|x-P_{\Lambda}(x)\|\ \leqslant\ \mathbf C_\ell \Delta_{\lambda, rpl}\|x - P_I(x)\|.$$ Let $\alpha := \min_{n\in\Lambda} |e_n^*(x)|$; then $\|x-P_{\Lambda}(x)\|_\infty\leqslant \alpha$.
We have \begin{align*} |\Lambda\backslash I| \ =\ |\Lambda| - |\Lambda\cap I|\ \geqslant\ \lambda m - |\Lambda\cap I| &\ =\ \lambda m + (|I| - |\Lambda\cap I|) - |I|\\ &\ \geqslant\ \lambda m + |I\backslash \Lambda| - m\\ &\ =\ (\lambda - 1)m + |I\backslash \Lambda|\\ &\ \geqslant\ (\lambda-1)s(I\backslash \Lambda) + |I\backslash \Lambda|, \end{align*} and $$\Lambda\backslash I\ <\ I\backslash \Lambda\mbox{ and }(\Lambda \backslash I)\sqcup \supp(x-P_{\Lambda}(x)-P_{I\backslash \Lambda}(x)).$$ Furthermore, $x-P_{\Lambda}(x)-P_{I\backslash \Lambda}(x)$ surrounds $I\backslash \Lambda$. Setting $\varepsilon_n = \sgn(e_n^*(x))$ for $n\in \Lambda\backslash I$, we can apply Proposition \ref{p1} (by homogeneity, to $(x-P_{\Lambda}(x))/\alpha$) to obtain \begin{align*} \|x-P_{\Lambda}(x)\|&\ \leqslant\ \Delta_{\lambda, rpl}\|x-P_{\Lambda}(x) - P_{I\backslash \Lambda}(x) + \alpha 1_{\varepsilon (\Lambda\backslash I)}\| \\ &\ \leqslant\ \Delta_{\lambda, rpl}\|T_{\alpha}(x-P_{\Lambda}(x) - P_{I\backslash \Lambda}(x) + P_{\Lambda\backslash I}(x))\|\\ &\ \leqslant\ \mathbf C_\ell \Delta_{\lambda, rpl}\|x-P_I(x)\| \mbox{ by Theorem \ref{bto}.} \end{align*} This finishes the case $I \neq \emptyset$. If $I = \emptyset$, then we simply have $\|x-P_{\Lambda}(x)\|\leqslant \mathbf C_\ell\|x\| = \mathbf C_\ell\|x-P_I(x)\|$. \end{proof} \begin{proof}[Proof of Theorem \ref{m3}] By Theorem \ref{m4}, we only need to show that a ($\lambda$, RPGII) basis is quasi-greedy. Indeed, a ($\lambda$, RPGII) basis satisfies \eqref{e5}, which implies that $$\|x-P_{\Lambda}(x)\|\ \leqslant\ \mathbf C\|x\|, \forall x\in\mathbb{X}, \forall m\in \mathbb{N},\forall \Lambda\in G(x, \lceil \lambda m\rceil).$$ By \cite[Lemma 2.3]{C1}, we know that $\mathcal{B}$ is quasi-greedy.
\end{proof} \begin{defi}\normalfont A basis is said to be ($\lambda$, reverse conservative of type II)\footnote{For the definition of ($\lambda$, reverse conservative of type I), see \cite[Definition 6.2]{C1}.} if for some constant $\mathbf C > 0$, we have $$\|1_A\|\ \leqslant\ \mathbf C\|1_B\|, $$ for all finite sets $A, B\subset\mathbb{N}$ with $(\lambda-1)s(A) + |A|\leqslant |B|$ and $B < A$. In this case, the least constant $\mathbf C$ is denoted by $\Delta_{\lambda, rc}$. \end{defi} \begin{prop}\label{p2} Let $\mathcal{B}$ be a quasi-greedy basis. Then $\mathcal{B}$ is ($\lambda$, reverse conservative of type II) if and only if it is ($\lambda$, RPSLC). \end{prop} \begin{proof} Setting $x = 0$ and $\varepsilon \equiv \delta \equiv 1$ in Definition \ref{d1}, we see that a ($\lambda$, RPSLC) basis is ($\lambda$, reverse conservative of type II). Assume that $\mathcal{B}$ is $\Delta_{\lambda, rc}$-($\lambda$, reverse conservative of type II) and $\mathbf C_\ell$-suppression quasi-greedy (and $\mathbf C_q$-quasi-greedy). Let $x, A, B, \varepsilon, \delta$ be chosen as in Definition \ref{d1}. We have $$\|1_{\varepsilon A}\|\ \stackrel{\mbox{UL}}{\leqslant}\ 2\mathbf C_q\|1_{A}\|\ \leqslant\ 2\mathbf C_q\Delta_{\lambda, rc}\|1_B\|\ \stackrel{\mbox{UL}}{\leqslant}\ 4\mathbf C_q^2\Delta_{\lambda, rc}\|1_{\delta B}\|\ \leqslant\ 4\mathbf C_q^3\Delta_{\lambda, rc}\|x+1_{\delta B}\|.$$ Furthermore, $$\|x\|\ \leqslant\ \mathbf C_\ell \|x+1_{\delta B}\|.$$ Therefore, $$\|x+1_{\varepsilon A}\|\ \leqslant\ \|x\| + \|1_{\varepsilon A}\|\ \leqslant\ (4\mathbf C_q^3\Delta_{\lambda, rc} + \mathbf C_\ell)\|x+1_{\delta B}\|.$$ This completes our proof. \end{proof} \begin{thm}\label{m9} Let $\mathcal{B}$ be a basis. The following are equivalent: \begin{enumerate} \item[i)] $\mathcal{B}$ is ($\lambda$, RPGII). \item[ii)] $\mathcal{B}$ is quasi-greedy and ($\lambda$, RPSLC). \item[iii)] $\mathcal{B}$ is quasi-greedy and ($\lambda$, reverse conservative of type II).
\end{enumerate} \end{thm} \begin{proof} That i) $\Longleftrightarrow$ ii) is due to Theorem \ref{m3}, and that ii) $\Longleftrightarrow$ iii) is due to Proposition \ref{p2}. \end{proof} The problem of characterizing $1$-greedy-type bases has been of great interest, as can be seen in \cite{AA0, AA, AW, BBL, DK}. As a corollary of Theorem \ref{m4}, let us characterize bases that satisfy \eqref{e1} with constant $\mathbf C = 1$. \begin{thm}\label{m6} A basis $\mathcal{B}$ satisfies \eqref{e1} with constant $\mathbf C = 1$ if and only if $\mathcal{B}$ is $1$-($1$, RPSLC). \end{thm} \begin{proof} By Theorem \ref{m4} items i) and iii), it suffices to show that if a basis $\mathcal B$ is $1$-($1$, RPSLC), then it is $1$-suppression quasi-greedy. This follows immediately from Definition \ref{d1} by setting $A = \emptyset$ to have $$\|x\|\ \leqslant\ \|x + e_k\|,$$ for all $x\in \mathbb{X}$ with $\|x\|_\infty\leqslant 1$ and for all $k\notin \supp(x)$. By induction, $\mathcal{B}$ is $1$-suppression quasi-greedy. \end{proof} In the spirit of \cite[Proposition 4.2]{BBL}, we offer yet another characterization of $1$-($1$, RPGII) bases. \begin{thm}\label{m7} A basis $\mathcal{B}$ is $1$-($1$, RPGII) if and only if $\mathcal{B}$ simultaneously satisfies the following two conditions: \begin{enumerate} \item[i)] for all $x\in\mathbb{X}$ with $\|x\|_\infty\leqslant 1$ and for all $k\notin \supp(x)$, we have \begin{equation} \label{e15}\|x\|\ \leqslant\ \|x+e_k\|. \end{equation} \item[ii)] for all $x\in\mathbb{X}$ with $\|x\|_\infty\leqslant 1$, for all $s, t\in\mathbb{K}$ with $|s| = |t| = 1$, and for all $j < k$ with $j, k\notin \supp(x)$, we have \begin{equation}\label{e16}\|x+te_k\|\ \leqslant\ \|x+se_j\|.\end{equation} \end{enumerate} \end{thm} \begin{proof} Due to Theorem \ref{m6}, it suffices to show that a basis is $1$-($1$, RPSLC) if and only if it satisfies both \eqref{e15} and \eqref{e16}.
It follows immediately from Definition \ref{d1} that a $1$-($1$, RPSLC) basis must satisfy \eqref{e15} and \eqref{e16}. Conversely, suppose that $\mathcal{B}$ satisfies both \eqref{e15} and \eqref{e16}. Choose $x, A, B, \varepsilon, \delta$ as in Definition \ref{d1}. We show that \begin{equation}\label{e17}\|x+1_{\varepsilon A}\|\ \leqslant\ \|x + 1_{\delta B}\|\end{equation} holds by induction on $|B|$. Base case: $|B| = 1$. If $A = \emptyset$, then \eqref{e15} implies \eqref{e17}; if $|A| = 1$, then \eqref{e16} implies \eqref{e17}. Inductive hypothesis (I.H.): assume that for some $\ell\in\mathbb{N}$, \eqref{e17} holds for $|B|\leqslant \ell$. We show that \eqref{e17} holds for $|B| = \ell+1$. If $A = \emptyset$, then we use \eqref{e15} inductively to obtain \eqref{e17}. For $|A| \geqslant 1$, let $p = \max A$, pick $q\in B$, and set $A' = A\backslash \{p\}$ and $B' = B\backslash \{q\}$. We have \begin{align*}\|x+1_{\varepsilon A}\|\ =\ \|(x + \varepsilon_p e_p) + 1_{\varepsilon A'}\|&\ \leqslant\ \|(x+\varepsilon_p e_p) + 1_{\delta B'}\|\mbox{ by I.H.}\\ &\ \leqslant\ \|(x+\delta_q e_q) + 1_{\delta B'}\|\mbox{ by \eqref{e16}}\\ &\ =\ \|x+1_{\delta B}\|.\end{align*} This shows that $\mathcal{B}$ is $1$-($1$, RPSLC). \end{proof} \begin{cor}\label{ca2} The following are equivalent: \begin{enumerate} \item [i)] $(e_n)_n$ is $1$-($1$, RPGII). \item [ii)] $(e_n)_n$ is $1$-($1$, RPSLC). \item [iii)] $(e_n)_n$ satisfies \eqref{e15} and \eqref{e16}. \item [iv)] $(e_n)_n$ satisfies \eqref{e12} with constant $1$. \end{enumerate} \end{cor} \begin{proof} The equivalence between i), ii), and iii) is due to Theorems \ref{m6} and \ref{m7}. That iii) $\Longleftrightarrow$ iv) is due to \cite[Theorem 3.4]{DK}. \end{proof} \section{($\lambda$, RPGII) bases are weaker than RPG bases} The goal of this section is to prove Theorem \ref{m8}. \begin{proof} i) If $\mathcal{B}$ is RPG, then it is quasi-greedy and reverse conservative.
By definition, a reverse conservative basis is ($\lambda$, reverse conservative of type II). By Theorem \ref{m9}, $\mathcal{B}$ is ($\lambda$, RPGII). ii) For each $\lambda > 1$, we now construct an unconditional basis $\mathcal{B}$ that is ($\lambda$, RPGII) but is not RPG. Let $D = \{2^n\,:\, n\in \mathbb{N}_0\}$, and let $u_n = \frac{1}{\sqrt{n}}$ and $v_n = \frac{1}{n}$ for all $n\geqslant 1$. Let $\mathbb{X}$ be the completion of $c_{00}$ with respect to the following norm: for $x = (x_1, x_2, \ldots)$, define $$\|x\|\ =\ \sup_{\pi, \pi'}\left(\sum_{n\in D}u_{\pi(n)} |x_{n}| + \sum_{n\notin D}v_{\pi'(n)}|x_n|\right),$$ where $\pi: D\rightarrow \mathbb{N}$ and $\pi': \mathbb{N}\backslash D\rightarrow \mathbb{N}$ range over all bijections. Clearly, the canonical basis $\mathcal{B}$ is a $1$-unconditional normalized basis of $\mathbb{X}$. \begin{claim} The basis $\mathcal{B}$ is not reverse conservative and thus is not RPG. \end{claim} \begin{proof} We write $f \sim g$ to indicate that the ratio of the two quantities is bounded above and below by positive absolute constants. For each $N\in\mathbb{N}$, let $A_N = \{2^{2N+1}, 2^{2N+2}, \ldots, 2^{3N}\}$ and $B_N = \{3, 3^2, \ldots, 3^N\}$. Then $|A_N| = |B_N| = N$ and $B_N < A_N$. However, $$\|1_{A_N}\|\ =\ \sum_{n=1}^N\frac{1}{\sqrt{n}}\ \sim\ \sqrt{N} \mbox{ and }\|1_{B_N}\|\ =\ \sum_{n=1}^N\frac{1}{n} \ \sim\ \ln N,$$ which imply that $\|1_{A_N}\|/\|1_{B_N}\|\rightarrow\infty$ as $N\rightarrow\infty$. Therefore, $\mathcal{B}$ is not reverse conservative. \end{proof} \begin{claim} The basis $\mathcal{B}$ is ($\lambda$, reverse conservative of type II) and thus is ($\lambda$, RPGII). \end{claim} \begin{proof} Pick nonempty finite sets $A, B\subset\mathbb{N}$ with $(\lambda-1)s(A) + |A|\leqslant |B|$ and $B < A$. Then $$\|1_A\|\ =\ \sum_{n=1}^{|A\cap D|}\frac{1}{\sqrt{n}} + \sum_{n=1}^{|A\backslash D|}\frac{1}{n}\mbox{ and }\|1_B\|\ =\ \sum_{n=1}^{|B\cap D|}\frac{1}{\sqrt{n}} + \sum_{n=1}^{|B\backslash D|}\frac{1}{n}.$$ We proceed by case analysis.
Case 1: $\sum_{n=1}^{|A\cap D|}\frac{1}{\sqrt{n}}\ \leqslant\ \sum_{n=1}^{|A\backslash D|}\frac{1}{n}$. We get $$\|1_A\|\ \leqslant\ 2\sum_{n=1}^{|A\backslash D|}\frac{1}{n}\ \leqslant\ 2\sum_{n=1}^{|B|}\frac{1}{n}\ \leqslant\ 2\left(\sum_{n=1}^{|B\cap D|}\frac{1}{\sqrt{n}} + \sum_{n=1}^{|B\backslash D|}\frac{1}{n}\right)\ =\ 2\|1_B\|.$$ Case 2: $\sum_{n=1}^{|A\cap D|}\frac{1}{\sqrt{n}}\ >\ \sum_{n=1}^{|A\backslash D|}\frac{1}{n}$. Then $\|1_A\|\sim \sqrt{|A\cap D|}$. If $|B\cap D|\geqslant |A\cap D|$, then we are done because $$\|1_B\| \ \geqslant\ \sum_{n=1}^{|B\cap D|}\frac{1}{\sqrt{n}}\ \sim\ \sqrt{|B\cap D|}\ \geqslant\ \sqrt{|A\cap D|}\ \sim\ \|1_A\|.$$ Suppose that $|B\cap D| < |A\cap D|$. Let $N = |A\cap D|$ and write $A\cap D = \{2^{k_1}, 2^{k_2}, \ldots, 2^{k_N}\}$ to obtain $\|1_A\|\sim \sqrt{N}$. \begin{enumerate} \item[i)] Case 2.1: $N = 1$. Then $\sum_{n=1}^{|A\cap D|}\frac{1}{\sqrt{n}}\ >\ \sum_{n=1}^{|A\backslash D|}\frac{1}{n}$ implies that $|A\backslash D| = 0$ and $|A| = 1$. Clearly, $\|1_A\|\leqslant \|1_B\|$. \item[ii)] Case 2.2: $N\geqslant 2$. We have $$|B|\ \geqslant\ (\lambda-1)s(A) \ \geqslant\ (\lambda-1)(2^{k_N}-2^{k_1}+1)\ \geqslant\ (\lambda-1)2^{k_N-1}.$$ Hence, $$\ln|B|\ \geqslant\ \ln(\lambda-1) + (N-1)\ln 2.$$ If $\ln(\lambda-1) + 0.5(N-1)\geqslant 0$, then $$\|1_B\| \ \gtrsim\ \ln |B|\ \geqslant\ (\ln 2 -0.5)(N-1) \ \gtrsim \ \sqrt{N} \ \sim\ \|1_A\|.$$ If $\ln(\lambda-1) + 0.5(N-1) < 0$, then $N < 1 - 2\ln (\lambda-1)$. In this case, $$\|1_A\|\ \sim\ \sqrt{N}\ <\ \sqrt{1 - 2\ln (\lambda-1)}\ \leqslant\ \sqrt{1 - 2\ln (\lambda-1)}\|1_B\|.$$ \end{enumerate} We have shown that in all cases, there exists a constant $\mathbf C = \mathbf C(\lambda)$ such that $\|1_A\| \leqslant \mathbf C\|1_B\|$. Therefore, $\mathcal{B}$ is ($\lambda$, reverse conservative of type II). \end{proof} We conclude that our basis $\mathcal{B}$ is $1$-unconditional and ($\lambda$, RPGII) but is not RPG. \end{proof} \end{document}
\begin{document} \title{Substructuring Preconditioners for \ an $h$-$p$ Nitsche-type method} \begin{abstract} We propose and study an iterative substructuring method for an $h$-$p$ Nitsche-type discretization, following the original approach introduced in \cite{BPS} for conforming methods. We prove quasi-optimality with respect to the mesh size and the polynomial degree for the proposed preconditioner. Numerical experiments assess the performance of the preconditioner and verify the theory. \end{abstract} \section{Introduction} Discontinuous Galerkin (DG) Interior Penalty (IP) methods were introduced in the late 1970s for approximating elliptic problems. They arose as a natural {\it evolution} or extension of Nitsche's method \cite{nitsche0}, and were based on the observation that inter-element continuity could be attained by penalization, in the same spirit in which Dirichlet boundary conditions are weakly imposed in Nitsche's method \cite{nitsche0}. The use, study and application of DG IP methods were then abandoned for a while, probably because they were never proven to be more advantageous or efficient than their conforming relatives. The lack, at that time, of optimal and efficient solvers for the resulting linear systems surely also contributed to this situation. However, over the last 10-15 years, there has been considerable interest in the development and understanding of DG methods for elliptic problems (see, for instance, \cite{abcm} and the references therein), partly due to the simplicity with which DG methods handle non-matching grids and allow for the design of $h$-$p$ refinement strategies. The IP and Nitsche approaches have also found new applications: in the design of new conforming and non-conforming methods \cite{abm0,abfm0, mika0, dominik0,guido-riviere, hansbo-elasticity} and as a way to deal with non-matching grids in domain decomposition \cite{Stenberg.2003,duarte1}.
This has also motivated the interest in developing efficient solvers for DG methods. In particular, additive Schwarz methods are considered and analyzed in \cite{FengKarakashian01,BrennerWang05,AntoniettiAyuso2007,AntoniettiAyuso2008,AntoniettiAyuso2009,AntoniettiHouston2011,BarkerBrennerParkSung2011}. Multigrid methods are studied in \cite{GopalakrishnanKanschat2003,BrennerZhao_2005,Kanschat2003}. Two-level and multi-level methods are presented in~\cite{Dobrev_et_al_2006,dahmen1}, and other subspace correction methods are considered in \cite{AyusoZikatanov2009,jump,AyusoB_GeorgievI_KrausJ_ZikatanovL-2009aa}. Still, the development of preconditioners for DG methods based on Domain Decomposition (DD) has been mostly limited to classical Schwarz methods. Research towards more sophisticated non-overlapping DD preconditioners, such as BPS ({\it Bramble-Pasciak-Schatz}), Neumann-Neumann, BDDC, FETI or FETI-DP, is now at its inception. Non-overlapping DD methods typically refer to methods defined on a decomposition of the domain into a collection of mutually disjoint subdomains, generally called {\it substructures}. This family of methods is obviously well suited for parallel computations; furthermore, for several problems (such as problems with jumping coefficients) these methods offer some advantages over their overlapping relatives and have already proved their usefulness. Roughly speaking, these methods are algorithms for preconditioning the Schur complement with respect to the unknowns on the skeleton of the subdomain partition. They are generally referred to as {\it substructuring preconditioners}.
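To fix ideas about the Schur complement that these preconditioners target, the following minimal linear-algebra sketch (our illustration, using an arbitrary symmetric positive definite matrix rather than an actual Nitsche-type stiffness matrix) splits the unknowns into interior and skeleton blocks, condenses the system onto the skeleton, and recovers the full solution.

```python
# Our sketch (assumed setup, not from the paper): block elimination of the
# interior unknowns (I) leaves an interface problem S x_G = g on the
# skeleton unknowns (G); S is the Schur complement that substructuring
# preconditioners are designed to precondition.
import numpy as np

rng = np.random.default_rng(0)
n_i, n_g = 6, 3
M = rng.standard_normal((n_i + n_g, n_i + n_g))
A = M @ M.T + (n_i + n_g) * np.eye(n_i + n_g)    # SPD stiffness-like matrix
AII, AIG = A[:n_i, :n_i], A[:n_i, n_i:]
AGI, AGG = A[n_i:, :n_i], A[n_i:, n_i:]

b = rng.standard_normal(n_i + n_g)
bI, bG = b[:n_i], b[n_i:]

# Schur complement on the skeleton and the condensed right-hand side
S = AGG - AGI @ np.linalg.solve(AII, AIG)
g = bG - AGI @ np.linalg.solve(AII, bI)

# Solve the interface problem, then recover the interior unknowns
xG = np.linalg.solve(S, g)
xI = np.linalg.solve(AII, bI - AIG @ xG)

# The condensed solve reproduces the solution of the full system
x_full = np.linalg.solve(A, b)
assert np.allclose(np.concatenate([xI, xG]), x_full)
```

In practice $S$ is never assembled; iterative methods only require its action, and the preconditioners discussed here aim at making the interface iteration converge at a rate that is quasi-optimal in $h$ and $p$.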
While the theory for the conforming case is now well established and understood for many problems \cite{Toselli:2004:DDM}, the discontinuous nature of the finite element spaces at the interface of the substructures (in the case of Nitsche-type methods) or even within the skeleton of the domain partition poses extra difficulties in the analysis which preclude a straightforward extension of that theory. Mainly, unlike in the conforming case, the coupling of the unknowns along the interface does not allow for splitting the global bilinear form as a sum of local bilinear forms associated with the substructures (see for instance \cite{FengKarakashian01} and \cite[Proposition 3.2]{AntoniettiAyuso2007}). Moreover, the discontinuity of the finite element space makes the use of standard $H^{1/2}$-norms in the analysis of discrete harmonic functions difficult. For Nitsche-type methods, a new definition of discrete harmonic function has been introduced in \cite{Dryja.Galvis.Sarkis.2007}, together with some tools (similar to those used in the analysis of mortar preconditioners) that allow the authors to adapt and extend the general theory \cite{Toselli:2004:DDM} for substructuring preconditioners in two dimensions. More precisely, in \cite{Dryja.Galvis.Sarkis.2007, sarkis1, sarkis2} the authors introduced and analyzed {\it Balancing Domain Decomposition by Constraints} (BDDC), Neumann-Neumann and FETI-DP domain decomposition preconditioners for a first-order Nitsche-type discretization of an elliptic problem with jumping coefficients. For the discretization, a symmetric IP DG scheme is used (only) on the skeleton of the subdomain partition, while a piecewise linear conforming approximation is used in the interior of the subdomains. In these works, the authors prove quasi-optimality with respect to the mesh size and robustness with respect to the jumps of the coefficient. They also address the case of non-conforming meshes.
More recently, several BDDC preconditioners have been introduced and analyzed for some full DG discretizations \cite{schoberl, luca, eun00}, following a different path. In \cite{schoberl} the authors consider the $p$-version of the preconditioner for a Hybridized IP DG method \cite{hybrid0,hybrid2}, for which the unknown is defined directly on the skeleton of the partition. They prove cubic logarithmic growth in the polynomial degree, but also show numerically that the results are not sharp. The IP DG and the IP-spectral DG methods for an elliptic problem with jumping coefficients are considered in \cite{eun00} and \cite{luca}, respectively. In both works the approach for the analysis differs considerably from the one taken in \cite{Dryja.Galvis.Sarkis.2007, sarkis1,sarkis2} and relies on a suitable space decomposition of the global DG space, using either nonconforming or conforming subspaces. This allows the authors to adapt the classical theories for analyzing the resulting BDDC preconditioners. In this work, we focus on the original substructuring approach introduced in \cite{BPS} for conforming discretizations of two-dimensional problems and in \cite{BPSIV,Dryja.Smith.Widlund94} for three dimensions (see also \cite{Xu.Zou,Toselli:2004:DDM} for a detailed description). In the framework of non-conforming domain decomposition methods, this kind of preconditioner has been applied to the mortar method \cite{AMW,BertPenn,Pennacchio.08,Pennacchio.Simoncini.08} and to the three fields domain decomposition method \cite{Bsubstr}, always considering the $h$-version of the methods. For spectral discretizations and the $p$-version of conforming approximations the preconditioner has been studied in \cite{pavarino1,mandel2}. For $h$-$p$ conforming discretizations of two-dimensional problems the BPS preconditioner is studied in \cite{ainsworthBPS}. To the best of our knowledge, this preconditioner has not been considered for Nitsche or DG methods before.
Here, we propose a BPS (Bramble-Pasciak-Schatz) preconditioner for an $h$-$p$ Nitsche-type discretization of elliptic problems. In our analysis, we use some of the tools introduced in \cite{Dryja.Galvis.Sarkis.2007,sarkis1}, such as their definition of the discrete harmonic lifting that allows for defining the discrete Steklov-Poincar\'e operator associated with the Nitsche-type method. However, our construction of the preconditioners is guided by the definition of a suitable norm on the skeleton of the subdomain partition that scales like an $H^{1/2}$-norm and captures the energy of the DG functions on the skeleton. This allows us to provide a much simpler analysis, proving quasi-optimality with respect to the mesh size and the polynomial degree for the proposed preconditioners. Furthermore, we show that, unlike in the conforming case, a block-diagonal structure that completely decouples the edge and vertex degrees of freedom on the skeleton cannot ensure quasi-optimality of the preconditioner; this is due to the presence of the penalty term, which is needed to deal with the discontinuity. We show, however, that the preconditioner can be implemented efficiently and that it performs in agreement with the theory. The rest of the paper is organized as follows. The basic notation, the functional setting and the description of the Nitsche-type method are given in Section~\ref{problem}. Some technical tools required in the construction and analysis of the proposed preconditioners are reviewed in Section~\ref{sec:technical_tool}. The substructuring preconditioner is introduced and analyzed in Section~\ref{sec:substr}. Its practical implementation, together with some variants of the preconditioner, is discussed in Section~\ref{sec:variants}. The theory is verified through several numerical experiments presented in Section~\ref{sec:expes}.
The proofs of some technical lemmas used in our analysis are reported in Appendix~\ref{app0}. \section{Nitsche methods and Basic Notation}\label{problem} In this section, we introduce the basic notation, the functional setting and the Nitsche discretization. To ease the presentation we restrict ourselves to the following model problem. Let $\Omega\subset \mathbb{R}^{2}$ be a bounded polygonal domain, let $f\in L^2(\Omega)$ and let \begin{equation*} \left\{\begin{aligned} -\Delta u^{\ast} &= f \qquad&& \mbox {in } \Omega,\\ u^{\ast}&=0\qquad&& \mbox {on } \partial\Omega. \end{aligned}\right. \end{equation*} The above problem admits the following weak formulation: {\it find} $u^*\in H^1_0(\Omega)$ {\it such that} \begin{equation}\label{prob} a(u^*,v)=f(v)\qquad \text{ for all } v\in H^1_0(\Omega), \end{equation} where \begin{eqnarray*} a(u,v) = \int_{\Omega} \nabla u \cdot \nabla v \dd{x}, \qquad f(v) = \int_{\Omega} f v \dd{x}, \qquad \forall\, u, v\in H^1_0(\Omega). \end{eqnarray*} \subsection{Partitions} We now introduce the different partitions needed in our work. We denote by ${\mathcal T}_{H}$ a subdomain partition of $\Omega$ into $N$ non-overlapping shape-regular triangular or quadrilateral subdomains: \begin{equation*} \begin{aligned} &\overline{\Omega}=\bigcup_{\ell=1}^{N} \overline{\Omega}_\ell , &&\Omega_{\ell} \cap \Omega_j =\emptyset && \ell\neq j. && \end{aligned} \end{equation*} We set \begin{equation}\label{def:hls} H_{\ell}=\min_{j\,:\, \left|\partial \Omega_\ell \cap \partial \Omega_j\right| >0 } H_{\ell,j} \qquad \mbox{where }\quad H_{\ell,j}=\left|\partial\Omega_{\ell} \cap \partial \Omega_j\right|\;, \end{equation} where the minimum is taken over the edge neighbours of $\Omega_\ell$, i.e., over the indices $j$ for which the common interface has positive length, and we also assume that $H_{\ell}\simeq \mbox{diam}(\Omega_{\ell})$ for each $\ell=1,\ldots,N$. We finally define the granularity of ${\mathcal T}_{H}$ by $H=\min_{\ell} H_{\ell}$.
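To fix ideas, the following small Python sketch (our own illustration, not part of the method) computes the interface lengths $H_{\ell,j}$, the local quantities $H_{\ell}$ of \eqref{def:hls} and the granularity $H$ for a hypothetical uniform $2\times 2$ partition of the unit square into axis-aligned boxes; only genuine edge neighbours, i.e. interfaces of positive length, are taken into account.

```python
# Illustration only: H_{l,j}, H_l and the granularity H of the subdomain
# partition, for a hypothetical uniform 2x2 partition of the unit square.
subdomains = {
    1: (0.0, 0.5, 0.0, 0.5), 2: (0.5, 1.0, 0.0, 0.5),
    3: (0.0, 0.5, 0.5, 1.0), 4: (0.5, 1.0, 0.5, 1.0),
}

def interface_length(a, b):
    """Length of the common edge of two axis-aligned boxes (0 if they only
    touch at a vertex or are disjoint)."""
    ax0, ax1, ay0, ay1 = a
    bx0, bx1, by0, by1 = b
    if ax1 == bx0 or ax0 == bx1:                 # shared vertical edge
        return max(0.0, min(ay1, by1) - max(ay0, by0))
    if ay1 == by0 or ay0 == by1:                 # shared horizontal edge
        return max(0.0, min(ax1, bx1) - max(ax0, bx0))
    return 0.0

H_l = {}
for l, dom in subdomains.items():
    # minimum over the edge neighbours only (interfaces of positive length)
    H_l[l] = min(interface_length(dom, subdomains[j])
                 for j in subdomains
                 if j != l and interface_length(dom, subdomains[j]) > 0.0)

H = min(H_l.values())                            # granularity of the partition
print(H_l, H)
```

Here each subdomain also satisfies $H_\ell \simeq \mathrm{diam}(\Omega_\ell)$, as assumed in the text.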
We denote by $\Gamma$ and $\Gamma^\partial$ respectively the interior and the boundary portions of the skeleton of the subdomain partition ${\mathcal T}_{H}$: \begin{equation*} \begin{aligned} &\Gamma=\bigcup_{\ell=1}^{N} \Gamma_{\ell}, && \Gamma_{\ell}=\partial\Omega_{\ell}\smallsetminus \partial\Omega && &\forall\, \ell =1,\ldots, N,\\ &\Gamma^{\partial}=\bigcup_{\ell=1}^{N} \Gamma_{\ell}^{\partial}, &&\Gamma_{\ell}^{\partial}:=\partial\Omega_{\ell}\cap \partial\Omega && &\forall\, \ell=1,\ldots, N. \end{aligned} \end{equation*} We also define the complete skeleton as $\Sigma=\Gamma \cup \Gamma^{\partial}$. The edges of the subdomain partition that form the skeleton will be denoted by $E\subset \Gamma$, and we will refer to them as {\it macro edges} when they do not allude to a particular subdomain, or as {\it subdomain edges} when they do refer to a particular subdomain. For each $\Omega_{\ell}$, let $\left \{{\mathcal T}_{h}^\ell \right \}$ be a family of {\it fine} partitions of $\Omega_\ell$ into elements (triangles or quadrilaterals) $K$ with diameter $h_{K}$. All partitions ${\mathcal T}_{h}^\ell$ are assumed to be shape-regular and we define a global partition ${\mathcal T}_{h}$ of $\Omega$ as \begin{equation*} {\mathcal T}_{h} =\bigcup_{\ell=1}^{N} {\mathcal T}_{h}^\ell. \end{equation*} Observe that by construction ${\mathcal T}_{h}$ is a fine partition of $\Omega$ which is compatible within each subdomain $\Omega_\ell$ but which may be non-matching across the skeleton $\Gamma$. Throughout the paper, we always assume that the following \emph{bounded local variation} property holds: for any pair of neighboring elements $K^{+} \in {\mathcal T}_{h}^{\ell^{+}}$ and $K^{-} \in {\mathcal T}_{h}^{\ell^{-}}$, $\ell^{+} \neq \ell^{-}$, it holds that $h_{K^{+}}\simeq h_{K^{-}}$.
Note that the restriction of ${\mathcal T}_{h}$ to the skeleton $\Gamma$ induces a partition of each subdomain edge $E\subset \Gamma$. We define the sets of {\it element edges} on the skeleton $\Gamma$ and on the boundary of $\Omega$ as follows: \begin{align*} {\mathcal E}_h^{o}:&=\{ e = \partial K^{+} \cap \partial K^{-} \cap \Gamma, \, K^{+}\in {\mathcal T}_{h}^{\ell^{+}}\, , K^{-}\in {\mathcal T}_{h}^{\ell^{-}}\, , \ell^{+} \neq \ell^{-}\} \,,&& \\ {\mathcal E}_h^{\partial}:&=\{ e= \partial K \cap \partial \Omega, \, K \in {\mathcal T}_{h}\} \,, && \end{align*} and we set ${\mathcal E}_h={\mathcal E}_h^{o}\cup {\mathcal E}_h^{\partial}$. When referring to a particular subdomain, say $\Omega_{\ell}$ for some $\ell$, the sets of element edges are denoted by \begin{equation*} \begin{aligned} &{\mathcal E}_{h,\ell}^{o} = \{ e \in {\mathcal E}_h^{o}:\ e \subset \partial\Omega_{\ell}\}, \quad &&{\mathcal E}_{h,\ell}^{\partial}=\{ e \in {\mathcal E}_h^{\partial}:\ e \subset \partial \Omega_{\ell}\}, \quad && {\mathcal E}_{h,\ell}={\mathcal E}_{h,\ell}^{o}\cup {\mathcal E}_{h,\ell}^{\partial}\;. \end{aligned} \end{equation*} \subsection{Basic Functional setting} For $s\geq 1$, we define the broken Sobolev space \begin{equation*} H^{s}({\mathcal T}_{H})=\left\{ \phi \in L^{2}(\Omega) \,\,: \quad \phi\big|_{\Omega_{\ell}} \in H^{s}(\Omega_{\ell}) \quad \forall\,\, \Omega_{\ell} \in {\mathcal T}_{H} \,\right\} \sim \prod_{\ell}\, H^s(\Omega_{\ell})\;, \end{equation*} whereas the trace space associated with $H^{1}({\mathcal T}_{H})$ is defined by \begin{eqnarray*} \Phi= \prod_{\ell}\, H^{1/2}(\partial \Omega_{\ell}).
\end{eqnarray*} For $u = (u^\ell)_{\ell=1}^N$ in $H^1({\mathcal T}_{H})$ we will denote by $u_{|_\Sigma}$ the unique element $\phi =(\phi^\ell)_{\ell=1}^N$ in $\Phi$ such that $$ \phi^\ell =u^\ell_{|_{\partial \Omega_{\ell}}}.$$ We now recall the definition of some trace operators following \cite{abcm}, and introduce the different discrete spaces that will be used in the paper.\\ Let $e\in {\mathcal E}_h^{o}$ be an edge on the interior skeleton shared by two elements $K^{+}$ and $K^{-}$ with outward unit normal vectors ${\vect{n}}^{+}$ and ${\vect{n}}^{-}$, respectively. For scalar and vector-valued functions $\varphi \in H^{1}({\mathcal T}_{H})$ and ${\vect{\tau}}\in \left[{H}^{1}({\mathcal T}_{H})\right]^2$, we define the \emph{average} and the \emph{jump} on $e \in {\mathcal E}_h^{o}$ as \begin{align*} &\av{{\vect{\tau}}}=\frac {1} {2}({\vect{\tau}}^+ +{\vect{\tau}}^-), &&\jump{\varphi}=\varphi^+{\vect{n}}^++\varphi^-{\vect{n}}^-\;, &&\mbox{on } e\in {\mathcal E}_h^{o}. \end{align*} On a boundary element edge $e\in {\mathcal E}_h^{\partial}$ we set $\av{{\vect{\tau}}}={\vect{\tau}}$ and $\jump{\varphi}=\varphi {\vect{n}}$, with ${\vect{n}}$ denoting the outward unit normal vector to $\Omega$.\\ To each element $K\in {\mathcal T}_{h}^{\ell}$, we associate a polynomial approximation order $p_{K} \geq 1$, and define the $hp$-finite element space of piecewise polynomials as \begin{equation*} X_h^{\ell}= \{ v \in C^0(\Omega_{\ell})\,:\, v|_{K} \in \mathbb{P}^{p_K}(K), ~K\in {\mathcal T}_{h}^{\ell} \}, \end{equation*} where $\mathbb{P}^{p_{K}}(K)$ stands for the space of polynomials of degree at most $p_{K}$ on $K$. We also assume that the polynomial approximation order satisfies a \emph{local bounded variation} property: for any pair of elements $K^{+}$ and $K^{-}$ sharing an edge $e \in {\mathcal E}_h^{o}$, $p_{K^{+}} \simeq p_{K^{-}}$.
\\ Our global approximation space $X_h$ is then defined as \begin{equation*} X_h= \{ v\in L^{2}(\Omega) \,\, : \,\, v_{|_{\Omega_{\ell}}} \in X_h^{\ell}, \ \ell=1,\ldots,N \}\sim \prod_{\ell=1}^N X_h^{\ell}\;. \end{equation*} We also define $X_h^{o} \subset X_h$ as the subspace of functions of $X_h$ vanishing on the skeleton $\Sigma$, i.e., \begin{equation*} X_h^{o}=\{ v\in X_h \,\, : \,\, v_{|_{\Sigma}} = 0 \} . \end{equation*} The trace spaces associated with $X_h^{\ell}$ and $X_h$ are defined as follows: \begin{align*} \Phihk&=\{ \eta^{\ell} \in H^{1/2}(\partial\Omega_{\ell}) \, :\,\, \eta^{\ell} = w_{|_{ \partial\Omega_{\ell} }} \text{ for some } w\in X_h^{\ell}\} &&\forall \ell=1,\ldots ,N,\\ \Phih & = \prod_{\ell=1}^{N} \Phihk\; \ \subset \Phi. \end{align*} Notice that the functions in the above finite element spaces are conforming in the interior of each subdomain but are double-valued on $\Gamma$. Moreover, any function $v\in X_h$ can be represented as $v = ( v^{\ell})_{\ell=1}^N$ with $v^{\ell}\in X_h^{\ell}$.\\ Next, for each subdomain $\Omega_{\ell}\in {\mathcal T}_{H}$ and for each subdomain edge $E \subset \partial\Omega_{\ell}$, we define the discrete trace spaces \begin{align*} &\Phi_{\ell}(E) = {\Phihk}_{|_E}, &&\Phi^{o}_{\ell}(E) = \{ \eta^{\ell}\in \Phi_{\ell}(E) \,\, :\;~~ \eta^{\ell}=0 \;\mbox{on } \partial E\, \}.
\end{align*} Note that, since we are in two dimensions, the boundary of a subdomain edge $E$ is the set of the two endpoints (or vertices) of $E$; that is, if $E=(a,b)$ then $\partial E=\{a,b\}$.\\ Finally, we introduce a suitable coarse space ${\mathfrak L}_{H} \subset \Phi$ that will be required for the definition of the substructuring preconditioner: \begin{equation}\label{eq:Lineari} {\mathfrak L}_{H} = \{ \eta =(\eta^\ell) \in \Phi \,\, : \,\,\, \eta^\ell_{|_{E}} \in \mathbb{P}^{1}(E), \quad \forall\, E\subset \partial\Omega_{\ell}\;, \,\,\, \forall \Omega_{\ell} \in {\mathcal T}_{H} \}\;. \end{equation} \subsection{Nitsche-type methods} In this section, we introduce the Nitsche-type method we consider for approximating the model problem \eqref{prob}. Here and in the following, to avoid the proliferation of constants, we will use the notation $x \lesssim y$ to represent the inequality $x \leq C y$, with $C > 0$ independent of the mesh size, of the polynomial approximation order, and of the size and number of subdomains. Writing $x \simeq y$ will signify that there exists a constant $C > 0$ such that $C^{-1}x \leq y \leq Cx$.\\ We introduce the local mesh size function $\h \in L^{\infty}(\Sigma)$ defined as \begin{equation}\label{def:h} \h(x)=\left\{ \begin{aligned} &h_{K} \quad && \textrm{if $x\in\partial K \cap \partial\Omega$},\\ &\min\{ h_{K^{+}}, h_{K^{-}}\} && \textrm{if $x\in\partial K^{+} \cap \partial K^{-}\cap\Gamma$, $K^{\pm}\in {\mathcal T}_{h}^{\ell^{\pm}}\, , \ell^{+} \neq \ell^{-}$}, \end{aligned}\right. \end{equation} and the local polynomial degree function $\p \in L^{\infty}(\Sigma)$: \begin{equation}\label{def:p2} \p(x)=\left\{\begin{aligned} &p_{K} \quad && \textrm{if $x\in\partial K \cap \partial\Omega$},\\ &\max\{ p_{K^{+}}, p_{K^{-}}\} && \textrm{if $x\in\partial K^{+} \cap \partial K^{-}\cap \Gamma$, $K^{\pm}\in {\mathcal T}_{h}^{\ell^{\pm}}$, $\ell^{+} \neq \ell^{-}$}. \end{aligned}\right.
\end{equation} \begin{remark}\label{las:h} A different definition of the local mesh size function $\h$ and the local polynomial degree function $\p$, involving harmonic averages, is sometimes used in the definition of Nitsche or DG methods \cite{Dryja.Galvis.Sarkis.2007}. We point out that such a definition yields functions $\h$ and $\p$ which are of the same order as the ones given in \eqref{def:h} and \eqref{def:p2}, and therefore results in an equivalent method. \end{remark} We now define the following Nitsche-type discretization \cite{Stenberg.1998,Stenberg.2003} to approximate problem \eqref{prob}: find $u^*_h\in X_h$ such that \begin{equation}\label{discrete_problem} {\mathcal A}_h(u^*_h,v_h)=f(v_h)\qquad \text{ for all } v_h\in X_h\;, \end{equation} where, for all $u,v \in X_h$, ${\mathcal A}_h(\cdot,\cdot)$ is defined as \begin{equation}\label{def:nit} \begin{aligned} {\mathcal A}_h(u,v)&= \sum_{\ell=1}^{N}\int_{\Omega_{\ell}} \nabla u \cdot \nabla v \dd{x} -\sum_{e \in {\mathcal E}_h} \int_e \av{\nabla u}\cdot \jump{v}\dd{s} &&\\ &\quad-\sum_{e \in {\mathcal E}_h} \int_e \jump{u}\cdot \av{\nabla v} \dd{s} +\sum_{e \in {\mathcal E}_h} \alpha \, \int_e \p^2 \, \h^{-1}\jump{u}\cdot\jump{v} \dd{s}. && \end{aligned} \end{equation} Here, $\alpha>0$ is the penalty parameter, which needs to be chosen as $\alpha\geq \alpha_0$ for some $\alpha_0\gtrsim 1$ large enough to ensure the coercivity of ${\mathcal A}_h(\cdot,\cdot)$.
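To make the structure of the bilinear form \eqref{def:nit} concrete, the following self-contained Python sketch (our own illustration with made-up variable names, not code from the paper) realizes a one-dimensional analogue: $\Omega=(0,1)$ is split into two subdomains at $x=0.5$, conforming $P_1$ elements are used inside each subdomain, and the symmetric consistency terms together with the penalty $\alpha\,\p^{2}\h^{-1}$ act at the interface and at the weakly imposed Dirichlet boundary (in one dimension the edge integrals reduce to point evaluations).

```python
# Illustrative 1-D analogue of the Nitsche-type form: two subdomains of (0,1)
# meeting at x = 0.5, conforming P1 elements per subdomain, with the terms
#   -avg(u')*jump(v) - avg(v')*jump(u) + sigma*jump(u)*jump(v).
import numpy as np

n = 32                        # elements per subdomain
h = 0.5 / n                   # uniform mesh size, polynomial degree p = 1
alpha = 10.0                  # penalty parameter (large enough for coercivity)
sigma = alpha * 1.0**2 / h    # alpha * p^2 / h

N = 2 * (n + 1)               # the interface node is doubled (one copy per side)
x = np.concatenate([np.linspace(0.0, 0.5, n + 1), np.linspace(0.5, 1.0, n + 1)])
A = np.zeros((N, N))
b = np.zeros(N)

f = lambda t: np.pi**2 * np.sin(np.pi * t)   # manufactured: u(x) = sin(pi x)
g = 0.5 / np.sqrt(3.0)                       # 2-point Gauss on [0, 1]
for start in (0, n + 1):                     # assemble each subdomain
    for k in range(n):
        i = start + k
        A[np.ix_([i, i + 1], [i, i + 1])] += np.array([[1.0, -1.0],
                                                       [-1.0, 1.0]]) / h
        for q in (0.5 - g, 0.5 + g):         # load vector by quadrature
            t = x[i] + q * h
            b[i] += 0.5 * h * f(t) * (1.0 - q)
            b[i + 1] += 0.5 * h * f(t) * q

def add_face(jump, dmean):
    """-jump(v)*avg(u') - jump(u)*avg(v') + sigma*jump(u)*jump(v)."""
    A[:] -= np.outer(jump, dmean) + np.outer(dmean, jump)
    A[:] += sigma * np.outer(jump, jump)

jmp = np.zeros(N); jmp[[n, n + 1]] = [1.0, -1.0]           # jump at x = 0.5
avg = np.zeros(N)                                          # average derivative
avg[[n - 1, n]] += np.array([-1.0, 1.0]) / (2.0 * h)       # u'(0.5-) / 2
avg[[n + 1, n + 2]] += np.array([-1.0, 1.0]) / (2.0 * h)   # u'(0.5+) / 2
add_face(jmp, avg)

j0 = np.zeros(N); j0[0] = -1.0                  # jump(v) = v*n, n = -1 at x = 0
d0 = np.zeros(N); d0[[0, 1]] = np.array([-1.0, 1.0]) / h
add_face(j0, d0)
j1 = np.zeros(N); j1[N - 1] = 1.0               # n = +1 at x = 1
d1 = np.zeros(N); d1[[N - 2, N - 1]] = np.array([-1.0, 1.0]) / h
add_face(j1, d1)

u = np.linalg.solve(A, b)
err = np.max(np.abs(u - np.sin(np.pi * x)))
print(err, abs(u[n] - u[n + 1]))   # nodal error and interface jump (both small)
```

For $\alpha$ large enough the assembled matrix is symmetric positive definite, in agreement with the coercivity discussed below, and the computed solution matches $u(x)=\sin(\pi x)$ up to the expected discretization error, with a small jump across the penalized interface.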
\\ On $X_h$, we introduce the following semi-norms: \begin{equation}\label{eq:seminorms} \begin{aligned} |v|_{1,{\mathcal T}_{H}}^{2}=\sum_{\ell=1}^{N}\|\nabla v\|^{2}_{L^{2}(\Omega_{\ell})} , &&\qquad \snorm{v}_{\ast,{\mathcal E}_h}^{2} &= \sum_{e\in {\mathcal E}_h} \|\p \, \h^{-1/2}\, \jump{v}\|_{L^{2}(e)}^{2} , \end{aligned} \end{equation} together with the norm naturally induced by $\mathcal{A}_{h}(\cdot,\cdot)$: \begin{equation}\label{def:normA} \begin{aligned} &\normAh{v}^2=|v|_{1,{\mathcal T}_{H}}^{2} + \alpha \snorm{v}_{\ast,{\mathcal E}_h}^{2} && \forall\, v\in X_h\;. \end{aligned} \end{equation} Following \cite{Stenberg.1998} (see also \cite{abcm}), it is easy to see that the bilinear form ${\mathcal A}_h(\cdot,\cdot)$ is continuous and coercive (provided $\alpha\geq \alpha_0$) with respect to the norm \eqref{def:normA}, i.e., \begin{equation*} \begin{aligned} &\textrm{Continuity}: &&|{\mathcal A}_h(u,v)| \lesssim \normAh{u}\,\normAh{v} &&\forall\, u,v\in X_h,\\ &\textrm{Coercivity}: &&{\mathcal A}_h(v,v) \gtrsim \normAh{v}^{2} && \forall\, v\in X_h. \end{aligned} \end{equation*} From now on we will always assume that $\alpha \geq \alpha_0$. Notice that the continuity and coercivity constants depend only on the shape regularity of ${\mathcal T}_{h}$. \section{Some technical tools}\label{sec:technical_tool} We now review some technical tools that will be required in the construction and analysis of the proposed preconditioners.\\ We recall the local {\em inverse inequalities} (cf. \cite{Schwab98}, for example): for any $\eta \in \mathbb{P}^{p_{K}}(K)$ it holds that \begin{equation*} | \eta |_{H^{r}(e)} \lesssim p_{K}^{2(r-s)}h_{K}^{s - r} | \eta |_{H^{s}(e)}, \qquad e\subset \partial K, \end{equation*} for all $s,r$ with $0 \leq s < r \leq 1$.
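The $p_K^{2}h_K^{-1}$ scaling in the inverse inequality above can be observed numerically: on the reference interval, the largest possible ratio $|\eta|_{H^{1}}/\|\eta\|_{L^{2}}$ over $\mathbb{P}^{p}$ is the square root of the top eigenvalue of the stiffness matrix relative to the mass matrix, and it grows like $p^{2}$ (the extra factor $h^{-1}$ appears after mapping to an edge of length $h$). A small sketch of this check, using numpy's Legendre utilities (our illustration, not part of the paper):

```python
# Largest ratio |q'|_{L2} / |q|_{L2} over polynomials of degree <= p on [-1, 1],
# computed as the top generalized eigenvalue of stiffness vs. mass matrices.
import numpy as np
from numpy.polynomial import legendre as L

def markov_ratio(p, nq=60):
    t, w = L.leggauss(nq)                       # Gauss-Legendre quadrature
    # rows of V / dV: Legendre polynomials P_0..P_p and their derivatives at t
    V = np.stack([L.legval(t, np.eye(p + 1)[k]) for k in range(p + 1)])
    dV = np.stack([L.legval(t, L.legder(np.eye(p + 1)[k])) for k in range(p + 1)])
    M = (V * w) @ V.T                           # mass matrix
    K = (dV * w) @ dV.T                         # stiffness matrix
    lam = np.max(np.linalg.eigvals(np.linalg.solve(M, K)).real)
    return np.sqrt(lam)

# normalized by p^2, the ratio stays bounded: the blow-up rate is indeed p^2
ratios = {p: markov_ratio(p) / p**2 for p in (2, 4, 8, 16)}
print(ratios)
```

On an element edge of size $h_K$ the same computation, after an affine change of variables, reproduces the factor $p_K^{2}h_K^{-1}$ of the inequality with $s=0$, $r=1$.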
Using the above inequality for $s = 0$ and space interpolation, it is easy to deduce that, for a subdomain edge $E\subset \partial \Omega_\ell$ and for all $s,r$ with $0 \leq s < r < 1$, for all $\eta \in X_h^\ell|_E$ it holds that \begin{align} | \eta |_{H^{r}(E)} &\lesssim \max_{\substack{K \in {\mathcal T}_{h}^\ell\\\partial K \cap E \neq \emptyset}} (p_K^2 h_K^{-1})^{(r-s)} | \eta |_{H^{s}(E)} \lesssim \po^{2(r-s)} \ho^{s - r} | \eta|_{H^{s}(E)}, \label{inv:E} \\[4mm] | \eta |_{H^{r}(\partial\Omega_{\ell})} & \lesssim \max_{\substack{K \in {\mathcal T}_{h}^\ell\\\partial K \cap \partial\Omega_{\ell} \neq \emptyset}} (p_K^2 h_K^{-1})^{(r-s)} | \eta |_{H^{s}(\partial\Omega_{\ell})} \lesssim \po^{2(r-s)} \ho^{s - r} | \eta |_{H^{s}(\partial\Omega_{\ell})}, \label{inv:O} \end{align} where $\ho$ and $\po$ denote, respectively, the minimum and the maximum over $\partial\Omega_{\ell}$ of the local mesh size function $\h$ and of the local polynomial degree function $\p$, that is, \begin{equation}\label{def:hpl} \ho=\min_{x\in \partial\Omega_{\ell} \cap \Gamma} \h(x) \quad \text{and} \quad \po=\max_{x\in \partial\Omega_{\ell} \cap \Gamma} \p(x) . \end{equation} We adopt the convention \begin{equation}\label{quot:convention} \frac{H\,p^2}{h} = \max_\ell \left \{ \frac{H_\ell\, \po^2}{\ho}\right \} . \end{equation} The next two results generalize \cite[Lemma~3.2, 3.4 and 3.5]{BPS} and \cite[Lemma~3.2]{Bsubstr} to the $hp$-version. The detailed proofs are reported in Appendix~\ref{app0}. \begin{lemma} \label{lembsp2} Let $\eta=(\eta^\ell)_{\ell=1}^N \in \Phih$ and let $ \chi = (\chi^\ell)_{\ell=1}^N \in {\mathfrak L}_{H}$ be such that $\chi^\ell(a)= \eta^\ell(a)$ at all vertices $a$ of $\Omega_{\ell}$, for all $\Omega_{\ell}\in {\mathcal T}_{H}$.
Then \begin{equation*} \sum_{\Omega_{\ell}\in {\mathcal T}_{H}}|\chi^\ell |_{H^{1/2}(\partial \Omega_{\ell})}^2 \lesssim \left(1+\log{\left(\frac{H\,p^{2}}{h}\right)}\right) \sum_{\Omega_{\ell}\in {\mathcal T}_{H}} |\eta^\ell |^2_{H^{1/2}(\partial \Omega_{\ell})}\;. \end{equation*} \end{lemma} \begin{lemma}\label{lembsp} Let $\xi \in \Phi_h^\ell$ be such that $\xi(a) = 0$ at all vertices $a$ of $\Omega_{\ell}$, and let $\zeta_L\in H^{1/2}(\partial \Omega_{\ell})$ be linear on each subdomain edge of $\partial \Omega_{\ell}$. Then, it holds that \begin{equation*} \sum_{E\subset \partial\Omega_{\ell}} \| \xi \|_{H^{1/2}_{00}(E)}^2\lesssim \left(1+\log{\left(\frac{H_{\ell}\,\po^{2}}{\ho}\right)}\right)^2 \left | \xi + \zeta_L \right |_{H^{1/2}(\partial\Omega_{\ell})}^2 \; , \end{equation*} where $\ho$ and $\po$ are defined in \eqref{def:hpl} and $H_{\ell}$ is defined as in \eqref{def:hls}. \end{lemma} \subsection{Norms on $\Phih$} We now introduce a suitable norm on $\Phih$ that will suggest how to properly construct the preconditioner. The natural norm that we can define for all $\eta=(\eta^\ell)_\ell\in \Phih$ is \begin{equation}\label{Snormh} \Shnorm{\eta} = \inf_{\substack{u\in X_h \\ {u}_{|\Sigma}=\eta}} \normAh{u}\;, \end{equation} where the infimum is taken over all $u\in X_h$ that coincide with $\eta$ along $\Sigma$. We recall that, since both $u$ and $\eta$ are double-valued on $\Gamma$, the identity $\eta = u_{|_\Sigma}$ is to be understood as $\eta^\ell = u^\ell_{|_{\partial\Omega_{\ell}}}$. Although \eqref{Snormh} is the natural trace norm induced on $\Phi$ by the norm \eqref{def:normA}, working with it directly might be difficult. For this reason, we introduce another norm which will be easier to deal with and which, as we will show below, is equivalent to \eqref{Snormh}. The structure of the preconditioner proposed in this paper will be driven by this norm.
We define \begin{equation}\label{normDG} \begin{aligned} \ShnormDG{\eta}^2 &= \sum_{\Omega_{\ell}\in {\mathcal T}_{H}} |\eta |^2_{H^{1/2}(\partial \Omega_{\ell})} + \alpha \sum_{e\in {\mathcal E}_h}\| \p \,\h^{-1/2} \jump{\eta}\|_{L^{2}(e)}^{2}. \end{aligned} \end{equation} The next result shows that the norms \eqref{Snormh} and \eqref{normDG} are indeed equivalent. \begin{lemma}\label{equivT} The following norm equivalence holds: \begin{equation*} \begin{aligned} \Shnorm{\eta} \simeq \ShnormDG{\eta} && \forall\, \eta\in \Phih. \end{aligned} \end{equation*} \end{lemma} \begin{proof} We first prove that $\ShnormDG{\eta} \lesssim \Shnorm{\eta} $. Let $\eta=(\eta^{\ell})_{\ell=1,\dots,N}\in \Phih$ and let $u=(u^{\ell})_{\ell=1,\dots,N}\in X_h$ with ${u}_{|{\Sigma}}=\eta$ be arbitrary. Thanks to the trace inequality, we have $$ |\eta^{\ell}|^2_{H^{1/2}(\partial \Omega_{\ell})} \lesssim |u^{\ell}|^2_{H^1(\Omega_{\ell})}\;, $$ and so, summing over all the subdomains $\Omega_{\ell} \in {\mathcal T}_{H}$, we have $$ \sum_{\Omega_{\ell}\in {\mathcal T}_{H}} |\eta^{\ell}|^2_{H^{1/2}(\partial \Omega_{\ell})} \lesssim \sum_{\Omega_{\ell}\in {\mathcal T}_{H}} |u^{\ell}|^2_{H^1(\Omega_{\ell})} =|u|_{1,{\mathcal T}_{H}}^{2}\;. $$ Adding the term $\alpha \sum_{e\in {\mathcal E}_h}\| \p \h^{-1/2} \jump{\eta}\|_{L^2(e)}^{2}$ to both sides and recalling the definitions of the norms \eqref{def:normA}, \eqref{Snormh} and \eqref{normDG}, the claim follows from the arbitrariness of $u$. We now prove that $ \Shnorm{\eta}\lesssim \ShnormDG{\eta} $. Given $\eta=(\eta^{\ell})_{\ell=1,\dots,N}\in \Phih$, let $\check u^\ell \in X_h^\ell$ be the standard discrete harmonic lifting of $\eta^\ell$, for which the bound $\snorm{\check u^\ell}_{H^1(\Omega_{\ell})} \lesssim \snorm{\eta^\ell}_{H^{1/2}(\partial \Omega_{\ell})}$ holds (see e.g.
\cite{BPS}) and let $\check u = (\check u^\ell)_{\ell=1,\dots,N}$. Summing over all the subdomains $\Omega_{\ell}$ and adding the term $ \alpha \sum_{e\in {\mathcal E}_h}\| \p \h^{-1/2} \jump{\eta}\|_{L^2(e)}^{2}$, we get \begin{equation*}\label{disatau} \Shnorm{\eta} \leq \normAh{\check u}\lesssim \ShnormDG{\eta}. \end{equation*} \end{proof} \section{Substructuring preconditioners} \label{sec:substr} In this section we present the construction and analysis of a substructuring preconditioner for the Nitsche method \eqref{discrete_problem}-\eqref{def:nit}. The first step in the construction is to split the set of degrees of freedom into {\em interior} degrees of freedom (corresponding to basis functions identically vanishing on the skeleton) and degrees of freedom associated with the skeleton $\Gamma$ of the subdomain partition. Then, the idea of the ``substructuring'' approach (see \cite{BPS}) consists in further distinguishing two types among the degrees of freedom associated with $\Gamma$: {\em edge} degrees of freedom and {\em vertex} degrees of freedom. Therefore, any function $u \in X_h$ can be split as the sum of three suitably defined components: $u = u^{0}+u_{\Gamma}=u^0 + u^E + u^V$. We first show how to {\it eliminate} (or condense) the interior degrees of freedom and introduce the discrete Steklov-Poincar\'e operator associated with \eqref{def:nit}, acting on functions living on the skeleton of the subdomain partition. We then propose a preconditioner of substructuring type for the discrete Steklov-Poincar\'e operator and provide the convergence analysis. \subsection{Discrete Steklov-Poincar\'e operator} Following \cite{Dryja.Galvis.Sarkis.2007, sarkis1}, we now introduce a discrete harmonic lifting that allows for defining the discrete Steklov-Poincar\'e operator associated with \eqref{def:nit}.
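In matrix terms, the elimination of the interior unknowns is ordinary static condensation: if the stiffness matrix is partitioned into interior ($I$) and skeleton ($\Gamma$) blocks, the lifting and the Steklov-Poincar\'e form become $R_h(\eta)=(-A_{II}^{-1}A_{I\Gamma}\eta,\ \eta)$ and $s(\xi,\eta)=\xi^{T}S\eta$, with the Schur complement $S=A_{\Gamma\Gamma}-A_{\Gamma I}A_{II}^{-1}A_{I\Gamma}$. The following toy numpy sketch checks both identities on the one-dimensional finite element Laplacian, with the middle node playing the role of the skeleton (an illustrative assumption of ours, not the paper's two-dimensional setting):

```python
import numpy as np

# Toy SPD model: 1-D P1 Laplacian on the 7 interior nodes of (0, 1).
n = 7
h = 1.0 / (n + 1)
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h

G = [3]                                      # "skeleton" degree of freedom
I = [i for i in range(n) if i not in G]      # interior degrees of freedom
AII = A[np.ix_(I, I)]; AIG = A[np.ix_(I, G)]
AGI = A[np.ix_(G, I)]; AGG = A[np.ix_(G, G)]
S = AGG - AGI @ np.linalg.solve(AII, AIG)    # Schur complement

eta = np.array([1.0])
u = np.zeros(n)
u[G] = eta
u[I] = -np.linalg.solve(AII, AIG @ eta)      # discrete harmonic lifting R_h(eta)

res = np.max(np.abs((A @ u)[I]))             # A_h(R_h(eta), v) = 0, v interior
energy_gap = abs(float(u @ A @ u) - float(eta @ S @ eta))   # u'Au = eta'S eta
print(res, energy_gap)
```

Both residuals vanish up to round-off: the lifting is orthogonal (in the energy inner product) to all interior functions, and its energy equals the Steklov-Poincar\'e quadratic form, which is what makes preconditioning $S$ equivalent to preconditioning $s(\cdot,\cdot)$.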
We also show that this discrete Steklov-Poincar\'e operator defines a norm that is equivalent to the one defined in \eqref{normDG}.\\ Let $X_h^{o} \subset X_h$ be the subspace of functions vanishing on the skeleton of the decomposition. Given any discrete function $w \in X_h$, we can split it as the sum of an {\em interior} function $w^0 \in X_h^{o}$ and a suitable discrete lifting of its trace. More precisely, following \cite{Dryja.Galvis.Sarkis.2007, sarkis1}, we split $$ w = w^0 + R_h(w_{|\Sigma}), \qquad w^0 \in X_h^{o}, $$ where, for $\eta \in \Phih$, $R_h(\eta) \in X_h$ denotes the unique element of $X_h$ satisfying \begin{equation}\label{def:Rh} \begin{aligned} &R_h(\eta)_{|_\Sigma} = \eta, \quad &&{\mathcal A}_h(R_h(\eta),v_h) = 0 &&\forall v_h \in X_h^{o}. \end{aligned} \end{equation} The following proposition is easy to prove (see \cite{Dryja.Galvis.Sarkis.2007, sarkis1}). \begin{proposition} For $\eta=(\eta^\ell) \in \Phih$, the following identity holds: \begin{equation*} R_h(\eta)_{|_{\Omega_{\ell}}} = w_\ell^H + w_\ell^0, \end{equation*} with $w_\ell^H \in X_h^\ell$ denoting the standard discrete harmonic lifting of $\eta^\ell$, \begin{equation*} \begin{aligned} &{w_\ell^H} = \eta^\ell \text{ on } \partial\Omega_{\ell}, \quad &&\int_{\Omega_{\ell}} \nabla w_\ell^H \cdot \nabla v_h^\ell \dd{x} = 0 &&\forall v_h^\ell \in X_h^\ell \cap H^1_0(\Omega_{\ell}), \end{aligned} \end{equation*} and $w_\ell^0 \in X_h^\ell \cap H^1_0(\Omega_{\ell})$ being the solution of \begin{equation*} \begin{aligned} &\int_{\Omega_{\ell}} \nabla w_\ell^0 \cdot \nabla v_h^\ell \dd{x} = \int_{\partial\Omega_{\ell}} \jump{\eta} \cdot \nabla v_h^\ell \dd{s} && \forall v_h^\ell \in X_h^\ell \cap H^1_0(\Omega_{\ell}). \end{aligned} \end{equation*} \end{proposition} The space $X_h$ can be split as the direct sum of an interior and a trace component, that is \begin{equation*} X_h = X_h^{o} \oplus R_h(\Phih) .
\end{equation*} Using the above splitting, the definition of $R_h(\cdot)$ and the definition of ${\mathcal A}_h(\cdot,\cdot)$, it is not difficult to verify that \begin{align*} {\mathcal A}_h(w,v) &= {\mathcal A}_h(w^0,v^0) + {\mathcal A}_h(R_h(w_{|_\Sigma}),R_h(v_{|_\Sigma})) \\ &= a(w^0,v^0) + s(w_{|_\Sigma},v_{|_\Sigma}) \qquad \qquad \forall\, w,v \in X_h, \end{align*} where the {\em discrete Steklov-Poincar\'e} operator $s: \Phih \times \Phih \to {\mathbb R}$ is defined as \begin{equation}\label{eqn:steklov} s(\xi,\eta) = {\mathcal A}_h(R_h(\xi),R_h(\eta)) \qquad \forall\, \xi , \eta \in \Phih\;. \end{equation} We have the following result. \begin{lemma}\label{lifting_ok} Let $R_h$ be the discrete harmonic lifting defined in \eqref{def:Rh}. Then, \begin{equation*} \normAh{R_h(\eta)} \simeq \ShnormDG{ \eta} \qquad \forall\, \eta\in \Phih\;. \end{equation*} \end{lemma} \begin{proof} If we show that $\normAh{R_h(\eta)} \simeq \Shnorm{ \eta}$, then the claim follows from the norm equivalence of Lemma~\ref{equivT}. First, we prove that $\normAh{R_h(\eta)} \lesssim \Shnorm{ \eta}$. Let $\eta\in \Phih$; then, from the definition of the infimum in \eqref{Snormh}, there exists $u\in X_h$ with $u_{|\Sigma} = \eta$ such that \begin{equation*} \normAh{u} \leq 2 \Shnorm{\eta}\;. \end{equation*} Then, we can write $R_h(\eta) = u+v$ with $v\in X_h^{o}$, and \eqref{def:Rh} reads \begin{equation*} {\mathcal A}_h(v,w) = -{\mathcal A}_h(u,w)\qquad \forall w \in X_h^{o}\;. \end{equation*} Setting $w=v \in X_h^{o}$ in the above equation leads to $$ {\mathcal A}_h(v,v) = -{\mathcal A}_h(u,v)\;. $$ Then, using the coercivity and continuity of ${\mathcal A}_h(\cdot,\cdot)$ in the $\normAh{\cdot}$-norm, we find \begin{equation*} \normAh{v}^{2} \lesssim {\mathcal A}_h(v,v)=|{\mathcal A}_h(u,v)| \lesssim \normAh{u}\,\normAh{v}\;.
\end{equation*} Hence, $\normAh{v} \lesssim \normAh{u}$, and this bound together with the triangle inequality gives \begin{equation*} \normAh{R_h(\eta)}\leq \normAh{u}+\normAh{v} \lesssim \normAh{u} \lesssim \Shnorm{\eta}\;. \end{equation*} The other inequality, $\Shnorm{ \eta} \lesssim \normAh{R_h(\eta)}$, follows from the trace theorem. \end{proof} From the above result, the following property of the \emph{discrete Steklov-Poincar\'e} operator follows easily. \begin{corollary}\label{equiv_norm_Ttau} For all $\xi \in \Phih$, it holds that \begin{equation*} s(\xi,\xi) \simeq \ShnormDG{\xi}^2. \end{equation*} \end{corollary} \begin{proof} Let $\xi\in \Phih$. Then, from the definition of $s(\cdot,\cdot)$, the continuity and coercivity of ${\mathcal A}_h(\cdot,\cdot)$, and Lemma~\ref{lifting_ok}, we have \begin{equation*} s(\xi,\xi) = {\mathcal A}_h(R_h(\xi),R_h(\xi)) \simeq \normAh{R_h(\xi)}^2\simeq \ShnormDG{\xi}^2. \end{equation*} \end{proof} \subsection{The preconditioner}\label{sec:5.2} Following the approach introduced in \cite{BPS}, we now present the construction of a preconditioner for the discrete Steklov-Poincar\'e operator given by $s(\cdot,\cdot)$. We split the space of skeleton functions $\Phih$ as the sum of {\em vertex} and {\em edge} functions. We start by observing that ${\mathfrak L}_{H} \subset \Phih$. We then introduce the space of {\em edge} functions $\Phih^E \subset \Phih$ defined by \begin{equation*} \Phih^E = \{ \eta \in \Phih\,:~ \eta^\ell(A)=0~ \text{ at all vertices $A$ of} ~\Omega_{\ell} \quad \forall\, \Omega_{\ell}\in {\mathcal T}_{H} \}, \end{equation*} and we immediately get \begin{equation} \label{eq:11} \Phih = {\mathfrak L}_{H} \oplus \Phih^E .
\end{equation} The preconditioner ${\hat s}(\cdot,\cdot)$ that we consider is built by introducing bilinear forms \begin{equation*} \begin{aligned} & \shat^E:\Phih^E \times \Phih^E \longrightarrow{\mathbb R} && \qquad \shat^V:{\mathfrak L}_{H} \times {\mathfrak L}_{H} \longrightarrow{\mathbb R} \end{aligned} \end{equation*} acting on edge and vertex functions, respectively, and satisfying \begin{align} \shat^E(\eta^E,\eta^E) & \simeq \sum_{\Omega_{\ell}\in {\mathcal T}_{H}}\sum_{E\subset \partial\Omega_{\ell}}\norm{\eta^E}^2_{H^{1/2}_{00}(E)} && \forall\, \eta^{E}\in \Phih^E, \label{eq:sE}\\ \shat^V(\eta^V,\eta^V) &\simeq \sum_{\Omega_{\ell}\in {\mathcal T}_{H}} \left |\eta^V\right |^2_{H^{1/2}(\partial \Omega_{\ell})} && \forall\, \eta^{V} \in {\mathfrak L}_{H}, \label{eq:sV} \end{align} and we define ${\hat s}: \Phih\times \Phih \longrightarrow{\mathbb R}$ as \begin{equation} \label{shat} {\hat s}(\eta,\xi) = \shat^E(\eta^E,\xi^E) + \shat^V(\eta^V,\xi^V) + q(\eta,\xi), \end{equation} where \begin{align} q(\eta,\eta) &= \alpha \sum_{e\in {{\mathcal E}_h}} \| \p \,\h^{-1/2}\jump{\eta}\|_{L^2(e)}^2 &&\forall\, \eta\in \Phih\;. \label{eq:jEV} \end{align} Finally, we can state the main theorem of the paper. \begin{theorem}\label{precond} Let $s(\cdot,\cdot)$ and ${\hat s}(\cdot,\cdot)$ be the bilinear forms defined in \eqref{eqn:steklov} and \eqref{shat}, respectively. Then, we have: \begin{equation*} \left( 1 + \log\left(H\,p^2/ h\right) \right)^{-2} {\hat s}(\eta,\eta) \lesssim s(\eta,\eta) \lesssim {\hat s}(\eta,\eta) \qquad\forall\, \eta \in \Phih\;. \end{equation*} \end{theorem} The proof of Theorem \ref{precond} follows along the lines of the analogous proofs given in \cite{BPS,Bsubstr} for conforming finite element approximations. We give it here for completeness. \begin{proof} We start by proving that $s(\eta,\eta) \lesssim {\hat s}(\eta,\eta)$.
Let $\eta \in \Phih$; then $\eta= \eta^V + \eta^E$ with $\eta^E\in \Phih^E$ and $\eta^V\in {\mathfrak L}_{H}$. Using Corollary~\ref{equiv_norm_Ttau}, the properties \eqref{eq:sE}--\eqref{eq:sV} of the edge and vertex bilinear forms, and \eqref{eq:jEV} for $q(\cdot,\cdot)$, we get \begin{align*} s(\eta,\eta) & \lesssim \ShnormDG{\eta}^2 = \sum_{\Omega_{\ell}\in{\mathcal T}_{H}} \snorm{\eta^E + \eta^V}^2_{1/2,\partial\Omega_\ell} + \alpha \sum_{e\in {{\mathcal E}_h}} \| \p\, \h^{-1/2} \jump{\eta}\|_{L^2(e)}^2 \\ & \lesssim \sum_{\Omega_{\ell}\in{\mathcal T}_{H}} \snorm{\eta^E}^2_{1/2,\partial\Omega_\ell} + \sum_{\Omega_{\ell}\in{\mathcal T}_{H}} \snorm{\eta^V}^2_{1/2,\partial\Omega_\ell} + q(\eta,\eta) \\ & \lesssim \shat^E(\eta^E,\eta^E) + \shat^V(\eta^V,\eta^V) + q(\eta,\eta), \end{align*} and hence \begin{equation*} s(\eta,\eta) \lesssim {\hat s}(\eta,\eta) \qquad\forall\, \eta \in \Phih\;. \end{equation*} \ We next prove the lower bound. We shall show that \begin{equation} \label{boundonb} {\hat s}(\eta,\eta) \lesssim \left( 1 + \log\left(H\,p^2/ h\right) \right)^2 s(\eta,\eta) \qquad \forall\, \eta \in \Phih\;. \end{equation} For $\eta \in \Phih$, we have $\eta= \eta^V + \eta^E$ with $\eta^E\in \Phih^E$ and $\eta^V\in {\mathfrak L}_{H}$. Then, from the definition of ${\hat s}(\cdot,\cdot)$ we have \begin{align*} {\hat s}(\eta,\eta)& = \shat^E(\eta^E,\eta^E) + \shat^V(\eta^V,\eta^V) + q(\eta,\eta) \\ & \simeq \sum_{\Omega_{\ell}\in {\mathcal T}_{H}}\sum_{E\subset \partial\Omega_{\ell}} \norm{\eta^E}^2_{H^{1/2}_{00}(E)} + \sum_{\Omega_{\ell}\in {\mathcal T}_{H}}\left |\eta^V\right |^2_{H^{1/2}(\partial \Omega_{\ell})} + \alpha \sum_{e\in {{\mathcal E}_h}} \| \p \,\h^{-1/2} \jump{\eta}\|_{L^2(e)}^2.
\end{align*} Applying Lemma~\ref{lembsp} with $\chi=\eta^{E}$ and $\zeta_{L}=\eta^{V}$, we obtain \begin{equation*} \sum_{\Omega_{\ell}\in {\mathcal T}_{H}}\sum_{E\subset \partial\Omega_{\ell}} \norm{\eta^E}^2_{H^{1/2}_{00}(E)} \lesssim \sum_{\Omega_{\ell}\in {\mathcal T}_{H}}\left( 1 + \log\left(H\,p^2/ h\right) \right)^2 \snorm{\eta}^2_{H^{1/2}(\partial\Omega_{\ell})}, \end{equation*} that is \begin{equation*} \shat^E(\eta^E,\eta^E) \lesssim \left( 1 + \log\left(H\,p^2/ h\right) \right)^2 \sum_{\Omega_{\ell}\in {\mathcal T}_{H}} \snorm{\eta}^2_{H^{1/2}(\partial \Omega_{\ell})} . \end{equation*} To bound $\shat^V(\eta^V,\eta^V)$, we apply Lemma~\ref{lembsp2} with $\chi^{\ell}=\eta^{V}$ and $\eta^{\ell}=\eta$, and we get \begin{align*} \shat^V(\eta^V,\eta^V) \lesssim \sum_{\Omega_{\ell}\in {\mathcal T}_{H}} \snorm{\eta^V}^2_{H^{1/2}(\partial \Omega_{\ell})} \lesssim \left( 1 + \log\left(H\,p^2/ h\right) \right) \sum_{\Omega_{\ell}\in {\mathcal T}_{H}} \snorm{\eta}^2_{H^{1/2}(\partial \Omega_{\ell})}, \end{align*} and hence \begin{equation*} \shat^E(\eta^E,\eta^E) + \shat^V(\eta^V,\eta^V) \lesssim \left( 1 + \log\left(H\,p^2/ h\right) \right)^2 \sum_{\Omega_{\ell}\in{\mathcal T}_{H}} \snorm{\eta}^2_{H^{1/2}(\partial \Omega_{\ell})} . \end{equation*} Adding the term $ \alpha \sum_{e\in {{\mathcal E}_h}}\| \p \h^{-1/2} \jump{\eta}\|_{L^2(e)}^{2}$ to both sides and recalling the definition of $q(\cdot,\cdot)$, we have \begin{align*} {\hat s}(\eta,\eta)& =\shat^E(\eta^E,\eta^E) + \shat^V(\eta^V,\eta^V) +q(\eta,\eta) && \\ & \lesssim \left( 1 + \log\left(H\,p^2/ h\right) \right)^2 \left( \sum_{\Omega_{\ell}\in {\mathcal T}_{H}} \snorm{\eta}^2_{H^{1/2}(\partial \Omega_{\ell})} + \alpha \sum_{e\in {{\mathcal E}_h}}\| \p \h^{-1/2} \jump{\eta}\|_{L^2(e)}^{2} \right) && \\ & = \left( 1 + \log\left(H\,p^2/ h\right) \right)^2 \ShnormDG{\eta}^2.
&& \end{align*} Finally, using the norm equivalence given in Corollary~\ref{equiv_norm_Ttau}, we reach \eqref{boundonb}, and the proof of the theorem is complete. \end{proof} As a direct consequence of Theorem \ref{precond} we obtain the following estimate for the condition number of the preconditioned Schur complement. \begin{corollary}\label{cond} Let ${\mathbf S}$ and ${\mathbf P}$ be the matrix representations of the bilinear forms $s(\cdot,\cdot)$ and ${\hat s}(\cdot,\cdot)$, respectively. Then, the condition number of ${\mathbf P} ^{-1} {\mathbf S}$, $ \kappa ({\mathbf P} ^{-1} {\mathbf S})$, satisfies \begin{equation} \kappa ({\mathbf P} ^{-1} {\mathbf S}) \lesssim \left( 1 + \log\left(H\,p^2/ h\right) \right)^2. \end{equation} \end{corollary} Unfortunately, the splitting \eqref{eq:11} of $\Phi_h$ is not orthogonal with respect to the $\hat{s}(\cdot,\cdot)$-inner product given in \eqref{shat}, and therefore the preconditioner based on $\hat{s}(\cdot,\cdot)$ is not block diagonal, in contrast to what happens in the fully conforming case. Furthermore, the off-diagonal blocks in the preconditioner cannot be dropped without losing quasi-optimality. The reason is the presence of the bilinear form $q(\cdot,\cdot)$ in the definition \eqref{shat}, and the fact that the two components in the splitting \eqref{eq:11} of $\Phi_h$ scale differently in the semi-norm that $q(\cdot,\cdot)$ defines. In fact, it is possible to show that, if for some constant $\kappa(h)$ it holds that \begin{equation}\label{false-dis} \begin{aligned} &\| \eta^V \|_{\Phi_h,*}^2\leq \kappa(h)\| \eta \|_{\Phi_h,*}^2 && \forall \eta = \eta^V+\eta^E \in \Phi_h, \end{aligned} \end{equation} then such $\kappa(h)$ must satisfy $\kappa(h)\gtrsim H/h$, which implies that, if we were to use a fully block-diagonal preconditioner based on the splitting \eqref{eq:11} of $\Phi_h$, an estimate of the form \eqref{boundonb} would no longer hold.
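The mismatch between the jump energies of a function and of its coarse interpolant can also be observed numerically. The following one-dimensional sketch (a single subdomain edge with $n$ uniform elements and hypothetical nodal values; the helper `jump_energy` is illustrative and not part of the paper's code) compares the $h^{-1}$-weighted jump energy of a nodal hat function, which stays $O(1)$, with that of its linear coarse interpolant, which grows like $H/h=n$:

```python
import numpy as np

def jump_energy(vals):
    """h^{-1}-weighted squared L2 norm of a piecewise-linear trace with
    nodal values `vals` on a uniform 1D mesh: sum_e h^{-1} * int_e v^2,
    using the exact element integral int_e v^2 = h*(a^2 + a*b + b^2)/3."""
    a, b = vals[:-1], vals[1:]
    return float(np.sum((a * a + a * b + b * b) / 3.0))

n = 64                                  # elements on one subdomain edge
hat = np.zeros(n + 1); hat[0] = 1.0     # nodal hat: 1 at a vertex, 0 elsewhere
lin = np.linspace(1.0, 0.0, n + 1)      # its linear "coarse interpolant"

# hat's jump energy is O(1); the coarse interpolant's is O(H/h) = O(n)
ratio = jump_energy(lin) / jump_energy(hat)
```

Here `jump_energy(hat)` equals $1/3$ while `jump_energy(lin)` equals $n/3$, so the ratio is exactly $n = H/h$, consistent with the scaling claimed above.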
In order to show this, consider linear finite elements on quasi-uniform meshes with mesh size $h$ in all subdomains, and let $\eta = (\eta^\ell)_{\ell}$ be the function identically vanishing in all subdomains but one, say $\Omega_k$, with $\eta^k$ equal to $1$ at a single vertex of $\Omega_k$ and to zero at all other nodes. With this definition, we have $|\jump{\eta}| = |\eta^k|$ on $\partial \Omega_k$ and $\jump{\eta}=0$ on $\Sigma\setminus\partial\Omega_k$. Then, by a direct calculation, and recalling the definition of the semi-norm $|\cdot|_{\ast,{{\mathcal E}_h}}$ in \eqref{eq:seminorms}, we easily see that $$|\eta|^{2}_{\ast,{{\mathcal E}_h}}\simeq 1, \quad \mbox{ but }\qquad |\eta^{V}|^{2}_{\ast,{{\mathcal E}_h}}\simeq \frac{H_k}{h_k}, $$ or equivalently \begin{equation}\label{controesempio1} q(\eta^V,\eta^V) \simeq \frac {H_k}{h_k}\;, \qquad q(\eta,\eta)\simeq 1. \end{equation} Therefore, the energy of the {\it coarse interpolant} $\eta^{V}$ exceeds that of $\eta$ by a factor of $H_k/h_k$. Hence, bounding $\eta^{V}$ alone in the $\| \cdot\|_{\Phi_h,*}$-norm would result in an estimate of the type \eqref{false-dis}, \begin{equation}\label{controesempio2} q(\eta^V,\eta^V) \lesssim \| \eta^V \|_{\Phi_h,*}^2 \lesssim \kappa(h) q(\eta,\eta), \end{equation} which, in view of \eqref{controesempio1}, would imply \[ \kappa(h) \gtrsim \frac{H_k}{h_k}. \] \begin{remark} We point out that the lack of block-diagonal structure of the preconditioner associated with ${\hat s}(\cdot,\cdot)$ defined in \eqref{shat} does not affect its computational efficiency; see Section~\ref{sec:expes}. \end{remark} \section{Realizing the preconditioner}\label{sec:variants} We start by deriving the matrix form of the discrete Steklov-Poincar\'e operator $s(\cdot,\cdot)$ defined in \eqref{eqn:steklov}.
We choose a Lagrangian nodal basis for the discrete space $X_{h}$, and we number interior degrees of freedom first (grouped subdomain-wise), then edge degrees of freedom (grouped edge by edge and in such a way that the degrees of freedom corresponding to the common edge of two adjacent subdomains are ordered consecutively), and finally the degrees of freedom corresponding to the vertices of the subdomains. We let $\iin_{\ii}$, $\iin_{\ie}$ and $\iin_{\iv}$ be the number of interior, edge and vertex degrees of freedom, respectively, and set $\iin=\iin_{\ie}+\iin_{\iv}$. Problem \eqref{discrete_problem} is then reduced to looking for a vector ${{\mathbf u}}\in \mathbb{R}^{\iin_\ii+\iin}$ with ${\mathbf u} =({\mathbf u}_{\ii},{\mathbf u}_{\ie}, {\mathbf u}_{\iv})$ solution to a linear system of the form \begin{equation*} \begin{pmatrix} {\mathbf A}_{\ii \ii} & {\mathbf A}_{\ii \ie} & {\mathbf A}_{\ii \iv} \\ {\mathbf A}_{\ii \ie}^T & {\mathbf A}_{\ie \ie} & {\mathbf A}_{\ie \iv} \\ {\mathbf A}_{\ii \iv}^T & {\mathbf A}_{\ie \iv}^T & {\mathbf A}_{\iv \iv} \end{pmatrix} \begin{pmatrix} {\mathbf u}_{\ii} \\ {\mathbf u}_{\ie} \\{\mathbf u}_{\iv} \end{pmatrix} = \begin{pmatrix} {\mathbf F}_{\ii} \\ {\mathbf F}_{\ie} \\{\mathbf F}_{\iv} \end{pmatrix}. \end{equation*} Here, ${\mathbf u}_{\ii}\in \mathbb{R}^{\iin_{\ii}}$ (resp. ${\mathbf F}_{\ii} \in \mathbb{R}^{\iin_{\ii}}$) represents the unknown (resp. the right-hand side) component associated with interior nodes. Analogously, ${\mathbf u}_{\ie}, {\mathbf F}_{\ie} \in \mathbb{R}^{\iin_{\ie}}$ and ${\mathbf u}_{\iv}, {\mathbf F}_{\iv} \in \mathbb{R}^{\iin_{\iv}}$ are associated with edge and vertex nodes, respectively. We recall that for each vertex we have one degree of freedom for each of the subdomains sharing it.
For each macro edge $E$, we will have two sets of nodes (some of them possibly physically coinciding) corresponding to the degrees of freedom of $\Phi_h^{\ell^+}(E)$ and of $\Phi_h^{\ell^-}(E)$. \ As usual, we start by eliminating the interior degrees of freedom, to obtain the Schur complement system \begin{equation*} {\mathbf S} \begin{pmatrix} {\mathbf u}_{\ie} \\ {\mathbf u}_{\iv} \end{pmatrix} ={\mathbf g}, \end{equation*} with \begin{equation}\label{eq:defS} \begin{aligned} &{\mathbf S} = \begin{pmatrix} {\mathbf A}_{\ie \ie} - {\mathbf A}_{\ii \ie}^T {\mathbf A}_{\ii \ii}^{-1} {\mathbf A}_{\ii \ie} & {\mathbf A}_{\ie \iv}-{\mathbf A}_{\ii \ie}^T {\mathbf A}_{\ii \ii}^{-1} {\mathbf A}_{\ii \iv}\\ {\mathbf A}_{\ie \iv}^T-{\mathbf A}_{\ii \iv}^T {\mathbf A}_{\ii \ii}^{-1} {\mathbf A}_{\ii \ie} & {\mathbf A}_{\iv \iv}-{\mathbf A}_{\ii \iv}^T {\mathbf A}_{\ii \ii}^{-1} {\mathbf A}_{\ii \iv} \end{pmatrix}, && {\mathbf g} = \begin{pmatrix} {\mathbf F}_{\ie} - {\mathbf A}_{\ii \ie}^T {\mathbf A}_{\ii \ii}^{-1} {\mathbf F}_{\ii} \\ {\mathbf F}_{\iv} - {\mathbf A}_{\ii \iv}^T {\mathbf A}_{\ii \ii}^{-1} {\mathbf F}_{\ii} \end{pmatrix}. \end{aligned} \end{equation} The Schur complement ${\mathbf S}$ is the matrix form of the Steklov-Poincar\'e operator $s(\cdot, \cdot)$. Note that in practice we do not need to actually assemble ${\mathbf S}$, but only to be able to compute its action on vectors. \ In order to implement the preconditioner introduced in the previous section, we need an algebraic representation of the splitting of the trace space given by \eqref{eq:11}. As defined in \eqref{eq:Lineari}, we consider the space ${\mathfrak L}_{H}$ of functions that are linear on each subdomain edge, and introduce the matrix representation of the injection of ${\mathfrak L}_{H}$ into $\Phi_h$. More precisely, we let $\mathbf{\Xi} = \{ {\mathbf x}_{i}, \, i=1,\ldots,\iin_{\ie}, \iin_{\ie}+1,\ldots, \iin_{\ie}+\iin_{\iv}\}$ be the set of edge and vertex degrees of freedom.
For any vertex degree of freedom ${\mathbf x}_j$, $j=\iin_{\ie}+1,\ldots,\iin_{\ie}+\iin_{\iv}$, let $\varphi_{j}(\cdot)$ be the piecewise polynomial that is linear on each subdomain edge and that satisfies \begin{equation*} \begin{aligned} \varphi_{j}({\mathbf x}_k)=\delta_{jk} && j,k=\iin_{\ie}+1,\ldots,\iin_{\ie}+\iin_{\iv}. \end{aligned} \end{equation*} The matrix ${\mathbf R}^{T}\in \mathbb{R}^{\iin\times \iin_{\iv}}$ realizing the linear interpolation of vertex values is then defined as \begin{equation*} \begin{aligned} &{\mathbf R}^{T}(i,j-\iin_{\ie})=\varphi_{j}({\mathbf x}_{i}), && i=1,\ldots,\iin, && j=\iin_{\ie}+1,\ldots,\iin_{\ie}+\iin_{\iv}. \end{aligned} \end{equation*} Next, we define a square matrix $\widetilde{{\mathbf R}}^{T}\in \mathbb{R}^{\iin \times \iin}$ as \begin{equation*} \widetilde{{\mathbf R}}^{T} = \left( \begin{aligned} & \begin{pmatrix} {\mathbf I}_{\ie} \\ {\bf 0} \end{pmatrix} && {\mathbf R}^{T} \end{aligned} \right), \end{equation*} ${\mathbf I}_{\ie} \in \mathbb{R}^{\iin_{\ie}\times \iin_{\ie}}$ being the identity matrix. Let now $\widetilde{{\mathbf S}}$ be the matrix obtained after the change of basis corresponding to switching from the standard nodal basis to the basis associated with the splitting \eqref{eq:11}, that is, \begin{equation}\label{eq:Smodified} \widetilde{{\mathbf S}} = \widetilde{{\mathbf R}} {\mathbf S} \widetilde{{\mathbf R}}^T = \left ( \begin{array}{cc} \widetilde{{\mathbf S}}_{\ie \ie} & \widetilde{{\mathbf S}}_{\iv \ie}\\ \widetilde{{\mathbf S}}_{\iv \ie}^T & \widetilde{{\mathbf S}}_{\iv \iv} \end{array}\right ) . \end{equation} Our problem is then reduced to the solution of the transformed Schur complement system \begin{equation}\label{eq:Schur_system_modified} \widetilde{{\mathbf S}} \, \widetilde{{\mathbf u}}= \widetilde{{\mathbf g}}, \end{equation} where $\widetilde{{\mathbf u}} = \widetilde{{\mathbf R}}^{-T} {\mathbf u}$ and $\widetilde{{\mathbf g}}= \widetilde{{\mathbf R}} {\mathbf g}$.
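As noted above, ${\mathbf S}$ (and hence $\widetilde{{\mathbf S}}$) need never be assembled: within a Krylov iteration only its action on vectors is required, at the cost of one interior solve. A minimal matrix-free sketch, assuming generic SciPy sparse blocks with all interface (edge and vertex) unknowns grouped into one block (an illustrative helper, not the paper's implementation):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def schur_action(A_II, A_IG, A_GG, u_G):
    """Apply S = A_GG - A_IG^T A_II^{-1} A_IG to u_G without assembling S.

    A_II: interior/interior block (block diagonal over subdomains, SPD);
    A_IG: interior/interface coupling block;
    A_GG: interface/interface block.
    """
    w = A_IG @ u_G                      # couple interface data to the interior
    z = spla.spsolve(A_II.tocsc(), w)   # interior (subdomain-wise) solve
    return A_GG @ u_G - A_IG.T @ z
```

Since `A_II` is block diagonal over the subdomains, the interior solve decouples into independent local solves, and one sparse factorization per subdomain can be computed once and reused across iterations.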
\ \noindent {\it The preconditioner ${\mathbf P}$}. The preconditioner ${\mathbf P}$ that we propose is obtained as the matrix counterpart of \eqref{shat}. In the literature it is possible to find different ways to build bilinear forms $\hat s^E(\cdot,\cdot)$, $\hat s^V(\cdot,\cdot)$ that satisfy \eqref{eq:sE} and \eqref{eq:sV}, respectively. The choice that we make here for defining $\hat s^E(\cdot,\cdot)$ is the one proposed in \cite{BPS}, and it is based on an equivalence result for the $H^{1/2}_{00}$ norm. We now review its construction. Let $l_0(\cdot)$ denote the discrete operator on $\Phi^0_\ell(E)$ associated with the finite-dimensional approximation of $-\partial^2/\partial s^2$ on $E$, defined by \begin{equation}\label{eqH1200} \langle l_0 \varphi,\phi\rangle_E = (\varphi^\prime,\phi^\prime)_E\qquad \forall \phi \in \Phi^0_\ell(E), \end{equation} where the prime superscript denotes, as usual, the derivative $\partial/\partial s$ with respect to the arc length $s$ on $E$. Notice that, since $l_0(\cdot)$ is symmetric and positive definite, its square root is well defined. Furthermore, it can be shown that $$\norm{\varphi}_{H^{1/2}_{00}(E)}\simeq (l_0^{1/2}\varphi, \varphi)^{1/2}_{E},$$ see \cite{BPS}. Then, we define \begin{equation}\label{sEprec} \shat^E(\eta^E,\xi^E) = \sum_{\Omega_{\ell}\in {\mathcal T}_{H}}\sum_{E\subset \partial\Omega_{\ell}} (l_0^{1/2}\eta^E,\xi^E)_{E}\; \qquad \forall\, \eta^E, \xi^E \in \Phi^0_\ell(E). \end{equation} For $\eta^E\in \Phi^0_\ell(E)$ we denote by $\bm {\eta}^E$ its vector representation. Then, it can be verified that, for each subdomain edge $E\subset \partial \Omega_{\ell}$, we have (see \cite{Bjorstad:1986:IMS}, p.
1110 and \cite{Dryja:1981:CMM}) \begin{equation*} (l_0^{1/2}\eta^E, \eta^E)_{E} ={\bm {\eta}^E}^T \widehat {\mathbf K}_E\bm {\eta}^E \end{equation*} where $ \widehat {\mathbf K}_E= {\mathbf M}_E^{1/2} ( {\mathbf M}_E^{-1/2} {\mathbf R}_E {\mathbf M}_E^{-1/2})^{1/2} {\mathbf M}_E^{1/2}$, and where $ {\mathbf M}_E$ and $ {\mathbf R}_E$ are the mass and stiffness matrices associated with the discretization of the operator $-\mathrm{d}^2/\mathrm{d}s^2$ (in $\Phi^0_\ell(E)$) with homogeneous Dirichlet boundary conditions at the endpoints $a$ and $b$ of $E$. Observe that, for each macro edge $E$ shared by the subdomains $\Omega_{\ell^+}$ and $\Omega_{\ell^-}$, $ \widehat {\mathbf K}_E$ is a two-by-two block diagonal matrix of the form \begin{equation*} \widehat {\mathbf K}_E = \begin{pmatrix} \widehat {\mathbf K}_E^{+} & {\bf 0}\\ {\bf 0} & \widehat {\mathbf K}_E^{-} \\ \end{pmatrix}, \end{equation*} where $\widehat {\mathbf K}_E^\pm $ are the contributions from the subdomains $\Omega_{\ell^\pm }$ sharing the macro edge $E$. As far as the vertex bilinear form $\hat s^V(\cdot,\cdot)$ is concerned, we choose \begin{equation}\label{sv} \hat s^V(\eta^V,\eta^V) = \sum_{\Omega_{\ell}\in {\mathcal T}_{H}}\int_{\Omega_{\ell}} \nabla({\cal H}_h^\ell\eta^\ell) \cdot \nabla({\cal H}_h^\ell\eta^\ell) \dd{x}, \end{equation} where ${\cal H}_h^\ell(\cdot)$ denotes the standard discrete harmonic lifting \cite{BPS,Xu.Zou}. We observe that if the $\Omega_{\ell}$'s are rectangles, then for $\eta\in{\mathfrak L}_{H}$ the function ${\cal H}_h^\ell\eta^\ell$ is the $\mathbb{Q}^{1}(\Omega_{\ell})$ polynomial that coincides with $\eta^\ell$ at the four vertices of $\Omega_{\ell}$. Computing $\hat s^V(\eta^V,\xi^V)$ for $\eta^V,\xi^V\in {\mathfrak L}_{H}$ is therefore easy, since it reduces to computing the local (associated with $\Omega_{\ell}$) stiffness matrix for $\mathbb{Q}^{1}(\Omega_{\ell})$ polynomials.
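Since ${\mathbf M}_E$ and ${\mathbf R}_E$ are small one-dimensional matrices, $\widehat{\mathbf K}_E$ can be formed explicitly through symmetric eigendecompositions. A sketch for linear elements on a uniform edge mesh (the mesh and the helpers `spd_power`, `edge_block` are illustrative, not taken from the paper's experiments):

```python
import numpy as np
from scipy.linalg import eigh

def spd_power(A, p):
    """A^p for a symmetric positive definite matrix A, via eigh."""
    lam, V = eigh(A)
    return (V * lam**p) @ V.T

def edge_block(M, R):
    """K = M^{1/2} (M^{-1/2} R M^{-1/2})^{1/2} M^{1/2}."""
    M_half, M_mhalf = spd_power(M, 0.5), spd_power(M, -0.5)
    return M_half @ spd_power(M_mhalf @ R @ M_mhalf, 0.5) @ M_half

# 1D linear FE mass/stiffness matrices on a uniform mesh with n interior
# nodes and homogeneous Dirichlet conditions at the edge endpoints
n, h = 15, 1.0 / 16
I, J = np.eye(n), np.eye(n, k=1) + np.eye(n, k=-1)
M_E = h / 6.0 * (4.0 * I + J)   # tridiag(1, 4, 1) * h/6
R_E = 1.0 / h * (2.0 * I - J)   # tridiag(-1, 2, -1) / h
K_E = edge_block(M_E, R_E)
```

A quick consistency check: by construction $\widehat{\mathbf K}_E\,{\mathbf M}_E^{-1}\,\widehat{\mathbf K}_E = {\mathbf R}_E$, i.e. $\widehat{\mathbf K}_E$ acts as the square root of the discrete $-\mathrm{d}^2/\mathrm{d}s^2$ in the mass-weighted sense.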
\begin{remark} A similar construction also holds for quadrilaterals that are affine images of the unit square, and for triangular domains. In fact, if $\Omega_{\ell}$ is a triangle, then for $\eta\in{\mathfrak L}_{H}$ the function ${\cal H}_h^\ell\eta^\ell$ is the $\mathbb{P}^{1}(\Omega_{\ell})$ function coinciding with $\eta^\ell$ at the three vertices of $\Omega_{\ell}$. If $\Omega_{\ell}$ is the affine image of the unit square, we use the harmonic lifting on the reference element. \end{remark} The preconditioner ${\mathbf P} $ can then be written as \begin{equation}\label{P2} {\mathbf P} \!=\!\! \left( \begin{array}{ccccc} {\mathbf K}_{E_1} & 0 & 0 &0&0\\ 0 & {\mathbf K}_{E_2} & 0 &0&0\\ 0 & 0& \ddots &0&0\\ 0 & 0 & 0&{\mathbf K}_{E_M}&0 \\ 0&0 & 0 & 0& {\mathbf P}_{\iv \iv} \end{array} \right) \!+ \widetilde{{\mathbf Q}} \,, \end{equation} where for each macro edge $E_i$, \[{\mathbf K}_{E_i}=\begin{pmatrix} (\widehat {\mathbf K}_{E_i}^{+})^{1/2} & 0 \\ 0 & (\widehat {\mathbf K}_{E_i}^{-})^{1/2} \end{pmatrix}. \] In \eqref{P2}, ${\mathbf P}_{\iv\iv}$ is defined as the matrix counterpart of \eqref{sv}, whereas $\widetilde{{\mathbf Q}}= \widetilde{{\mathbf R}} {\mathbf Q} \widetilde{{\mathbf R}}^T$ and $${\mathbf Q} = \left( \begin{array}{cccc} {\mathbf Q}_{E_1} & 0 & 0 & {\mathbf Q}_{E_1 V} \\ 0 & {\mathbf Q}_{E_2} & 0 & {\mathbf Q}_{E_2 V} \\ 0 & 0 & \ddots & \vdots \\ {\mathbf Q}_{E_1 V}^T & {\mathbf Q}_{E_2 V}^T & \cdots & {\mathbf Q}_{\iv \iv} \end{array} \right) $$ is the matrix counterpart of \eqref{eq:jEV}. Note that, due to the structure of the off-diagonal blocks of ${\mathbf Q}$, ${\mathbf P}$ is a low-rank perturbation of an invertible block diagonal matrix. The action of ${\mathbf P}^{-1}$ can therefore be computed easily; see, e.g., \cite{Demmel_1997}, Sec.~2.7.4, p.~83. \ {\it The preconditioner ${{\mathbf P}}_\star$}.
For comparison, we introduce a preconditioner ${{\mathbf P}}_\star$ with the same block structure as ${\mathbf P}$, but with the elements of the non-zero blocks coinciding with the corresponding elements of $\widetilde{{\mathbf S}}$. We expect this preconditioner to be the best that can be achieved within the block structure that we want our preconditioner to have. To this end, we replace the $\widetilde{{\mathbf S}}_{\ie\ie}$ component of $\widetilde{{\mathbf S}}$ with the matrix obtained by dropping all couplings between the degrees of freedom corresponding to nodes belonging to different macro edges, and use the resulting matrix as a preconditioner. More precisely, for any subdomain edge $E_{k}$ of the subdomain partition, $k=1,\ldots,M$, let ${{\mathbf J}}_{k} \in \mathbb{R}^{\iin_{\ie}\times\iin_{\ie}}$ be the diagonal matrix that extracts only the edge degrees of freedom belonging to the macro edge $E_{k}$, i.e., \begin{equation*} {{\mathbf J}}_{k}(i,j)= \left\{ \begin{aligned} &1 &&\textrm{if $i=j$ and ${\mathbf x}_{i} \in E_{k}$}\\ & 0 &&\textrm{otherwise} \end{aligned} \right. \quad i,j=1,\dots,\iin_{\ie}. \end{equation*} Then, we define \begin{equation*} \widetilde{{\mathbf P}}_{\ie \ie} =\sum_{k=1}^{M} {{\mathbf J}}_{k}^{T} \widetilde{{\mathbf S}}_{\ie \ie} {{\mathbf J}}_{k}. \end{equation*} This provides our preconditioner \begin{equation}\label{eq:defP1} {\mathbf P}_{\star} = \left (\begin{array}{cc} \widetilde{{\mathbf P}}_{\ie\ie} & \widetilde{{\mathbf S}}_{\ie \iv} \\ \widetilde{{\mathbf S}}_{\ie \iv}^T & \widetilde{{\mathbf S}}_{\iv \iv} \end{array}\right ) . \end{equation} Building this preconditioner requires assembling at least part of the Schur complement; this is quite expensive, and therefore this preconditioner is not viable in practical applications. \begin{remark}\label{rem:defP1diag} Note that we cannot drop the coupling between edge and vertex points, i.e.
we cannot eliminate the off-diagonal blocks ${\mathbf Q}_{E_i V},{\mathbf Q}_{E_i V}^T$. Indeed, as already pointed out at the end of Section \ref{sec:5.2}, with the splitting \eqref{eq:11} of $\Phi_h$ it is not possible to design a block diagonal preconditioner without losing quasi-optimality. In Section~\ref{sec:expes} we will present some computations showing that the preconditioner \begin{equation}\label{eq:defP1diag} {\mathbf P}_{D} = \left (\begin{array}{cc} \widetilde{{\mathbf P}}_{\ie\ie} & 0\\ 0 & \widetilde{{\mathbf S}}_{\iv \iv} \end{array}\right ) , \end{equation} is not optimal. \end{remark} \section{Numerical results}\label{sec:expes} In this section we present some numerical experiments to validate the performance of the proposed preconditioners.\\ We set $\Omega=(0,1)^{2}$, and consider a sequence of subdomain partitions made of $N=4^{\ell}$ squares, $\ell=1,2,\ldots$; cf. Figure~\ref{fig:subdomains_ini_stru_grids} for $\ell=1,2,3,4$. For a given subdomain partition, $\ell=1,2,\ldots$, we have tested our preconditioners on a sequence of nested structured and unstructured triangular grids made of $n=2\cdot 4^{r}$ elements, $r=\ell,\ell+1,\ldots$. Notice that the corresponding coarse and fine mesh sizes are given by $H \approx 2^{-\ell}$, $\ell=1,2,\ldots$, and $h \approx 2^{-(r+1/2)}$, $r=\ell,\ell+1,\ldots$, respectively. In Figure~\ref{fig:subdomains_ini_stru_grids} we report the initial structured grids on subdomain partitions made of $N=4^{s}$ squares, $s=1,2,3,4$.
Figure~\ref{fig:subdomains_ini_unstru_grids} shows the first four refinement levels of unstructured grids on a subdomain partition made of $N=4$ squares.\\ \begin{figure} \caption{Top: initial structured grids on subdomain partitions made of $N=4^{\ell}$ squares, $\ell=1,2,3,4$. Bottom: first refinement levels of unstructured grids on a subdomain partition made of $N=4$ squares.\label{fig:subdomains_ini_stru_grids}\label{fig:subdomains_ini_unstru_grids}\label{fig:subdomains_ini_grids}} \end{figure} Throughout the section, we have solved the (preconditioned) linear system of equations by the Preconditioned Conjugate Gradient~(PCG) method with a relative tolerance set equal to $10^{-9}$. The condition number of the (preconditioned) Schur complement matrix has been estimated within the PCG iteration by exploiting the analogies between the Lanczos technique and the PCG method (see \cite[Sects. 9.3, 10.2]{GolubVanLoan_1996} for more details). Finally, we choose the source term in problem \eqref{prob} as $f(x,y)=1$, and set the penalty parameter $\alpha$ equal to $10$. We first present some computations that show the behavior of the condition number of the Schur complement matrix ${\mathbf S}$, cf. \eqref{eq:defS}. \begin{figure} \caption{Condition number estimate of the Schur complement matrix ${\mathbf S}$ as a function of the inverse mesh size $1/h$.\label{fig:CondS}} \end{figure} In Figure~\ref{fig:CondS} (log-log scale) we report, for different subdomain partitions made of $N=4^{\ell}$ squares, $\ell=1,2,3,4,5$, the condition number estimate of the Schur complement matrix ${\mathbf S}$, $\kappa({\mathbf S})$, as a function of the inverse mesh size $1/h$. We clearly observe that $\kappa({\mathbf S})$ grows linearly in $1/h$ as the mesh size $h$ goes to zero.\\ Next, we consider the preconditioned linear system of equations \begin{equation*} {\mathbf P}^{-1}\widetilde{{\mathbf S}} \, \widetilde{{\mathbf u}}= {\mathbf P}^{-1} \widetilde{{\mathbf g}}, \end{equation*} and test the performance of the preconditioners ${\mathbf P}$ and ${\mathbf P}_{\star}$ (cf. \eqref{P2} and \eqref{eq:defP1}, respectively).
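The PCG/Lanczos connection used for these condition number estimates can be sketched as follows: the Lanczos tridiagonal matrix is rebuilt from the CG coefficients $\alpha_k,\beta_k$, and its extreme eigenvalues (Ritz values) approximate those of the preconditioned operator. This is a generic sketch for any SPD matrix and preconditioner solve (the function `pcg_cond_estimate` is illustrative, not the code used for the experiments):

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

def pcg_cond_estimate(A, b, M_solve=lambda r: r, tol=1e-9, maxit=1000):
    """PCG for A x = b; returns (x, Lanczos estimate of cond(M^{-1} A)).
    The Lanczos tridiagonal is recovered from the CG coefficients:
    d_k = 1/alpha_k + beta_{k-1}/alpha_{k-1}, e_k = sqrt(beta_k)/alpha_k."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_solve(r)
    p, rz = z.copy(), r @ z
    alphas, betas = [], []
    for _ in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        alphas.append(alpha)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        z = M_solve(r)
        rz_new = r @ z
        beta = rz_new / rz
        betas.append(beta)
        p = z + beta * p
        rz = rz_new
    d = [1.0 / alphas[0]] + [1.0 / alphas[k] + betas[k - 1] / alphas[k - 1]
                             for k in range(1, len(alphas))]
    e = [np.sqrt(betas[k]) / alphas[k] for k in range(len(alphas) - 1)]
    ritz = eigh_tridiagonal(np.array(d), np.array(e), eigvals_only=True)
    return x, ritz[-1] / ritz[0]
```

The Ritz values always lie inside the spectrum of the preconditioned operator, so the estimate approaches the true condition number from below as the iteration proceeds.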
Throughout the section, the action of the preconditioner has been computed with a direct solver. \\ In the first set of experiments, we consider piecewise linear elements ($p=1$), and compute the condition number estimates when varying the number of subdomains and the mesh size. Table~\ref{tab:precP1P2_hversion_TS} shows the condition number estimates obtained when increasing the number of subdomains $N$ and the number of elements $n$ of the fine mesh. In Table~\ref{tab:precP1P2_hversion_TS} we also report, in parentheses, the ratio between the condition number of the preconditioned system and $\left(1+\log(H/h)\right)^2$. These results have been obtained on a sequence of structured triangular grids like the ones shown in Figure~\ref{fig:subdomains_ini_stru_grids}. The results reported in Table~\ref{tab:precP1P2_hversion_TS}~(top) refer to the performance of the preconditioner ${\mathbf P}$, whereas the analogous results obtained with the preconditioner ${\mathbf P}_{\star}$ are shown in Table~\ref{tab:precP1P2_hversion_TS}~(bottom).
\begin{table}[!htbp] \begin{center} \begin{tabular}{lrrrrr} &\multicolumn{5}{c}{Preconditioner ${\mathbf P}$}\\ \hline$N\downarrow \; n\rightarrow\;$ &\multicolumn{1}{c}{$n=128$} &\multicolumn{1}{c}{$n=512$} &\multicolumn{1}{c}{$n=2048$} &\multicolumn{1}{c}{$n=8192$} &\multicolumn{1}{c}{$n=32768$}\\ \hline\\[-0.3cm] $N= 16$ & 3.11 (0.74) & 4.88 (0.65) & 7.50 (0.64) & 10.84 (0.64) & 14.79 (0.64) \\ $N= 64$ & - & 3.30 (0.79) & 5.25 (0.70) & 8.00 (0.68) & 11.42 (0.67) \\ $N= 256$ & - & - & 3.35 (0.81) & 5.36 (0.72) & 8.16 (0.70) \\ $N= 1024$ & - & - & - & 3.37 (0.81) & 5.39 (0.72) \\ \hline \\ &\multicolumn{5}{c}{Preconditioner ${\mathbf P}_{\star}$}\\ \hline &\multicolumn{1}{c}{$n=128$} &\multicolumn{1}{c}{$n=512$} &\multicolumn{1}{c}{$n=2048$} &\multicolumn{1}{c}{$n=8192$} &\multicolumn{1}{c}{$n=32768$}\\ \hline\\[-0.3cm] $N= 16$ & 2.26 (0.54) & 4.04 (0.54) & 7.01 (0.60) & 11.00 (0.65) & 15.83 (0.68) \\ $N= 64$ & - & 2.42 (0.58) & 4.49 (0.60) & 7.85 (0.67) & 12.28 (0.72) \\ $N= 256$ & - & - & 2.47 (0.59) & 4.60 (0.62) & 8.07 (0.69) \\ $N= 1024$ & - & - & - &2.48 (0.60) & 4.63 (0.62) \\ \hline \end{tabular} \caption{Preconditioners ${\mathbf P}$ (top) and ${\mathbf P}_{\star}$ (bottom): condition number estimates and, in parentheses, the ratio between the condition number of the preconditioned system and $\left(1+\log(H/h)\right)^2$. Structured triangular grids, piecewise linear elements ($p=1$). } \label{tab:precP1P2_hversion_TS} \end{center} \end{table} We have repeated the same set of experiments on the sequence of unstructured triangular grids (cf. Figure~\ref{fig:subdomains_ini_unstru_grids}). The computed results are shown in Table~\ref{tab:precP1P2_hversion_TU}. As before, we report in parentheses the ratio between the condition number of the preconditioned system and $\left(1+\log(H/h)\right)^2$. As expected, a logarithmic growth is clearly observed for both preconditioners ${\mathbf P}$ and ${\mathbf P}_{\star}$.
\\ \begin{table}[!htbp] \begin{center} \begin{tabular}{lrrrrr} &\multicolumn{5}{c}{Preconditioner ${\mathbf P}$}\\ \hline &\multicolumn{1}{c}{$n=128$} &\multicolumn{1}{c}{$n=512$} &\multicolumn{1}{c}{$n=2048$} &\multicolumn{1}{c}{$n=8192$} &\multicolumn{1}{c}{$n=32768$}\\ \hline\\[-0.3cm] $N= 16$ & 2.87 (0.69) & 4.69 (0.63) & 7.35 (0.63) & 10.68 (0.63) & 14.62 (0.63) \\ $N= 64$ & - &3.05 (0.73) & 5.01 (0.67) & 7.75 (0.66) & 11.13 (0.66) \\ $N= 256$ & - & - &3.09 (0.74) & 5.08 (0.68) & 7.89 (0.67) \\ $N= 1024$ & - & - & - &3.11 (0.75) & 5.11 (0.68) \\ \hline \\ &\multicolumn{5}{c}{Preconditioner ${\mathbf P}_{\star}$}\\ \hline &\multicolumn{1}{c}{$n=128$} &\multicolumn{1}{c}{$n=512$} &\multicolumn{1}{c}{$n=2048$} &\multicolumn{1}{c}{$n=8192$} &\multicolumn{1}{c}{$n=32768$}\\ \hline\\[-0.3cm] $N= 16$ & 1.84 (0.44) & 3.24 (0.43) & 5.51 (0.47) & 8.44 (0.50) & 12.00 (0.52) \\ $N= 64$ & - & 2.01 (0.48) & 3.77 (0.50) & 6.35 (0.54) & 9.76 (0.58) \\ $N= 256$ & - & - & 2.04 (0.49) & 3.90 (0.52) & 6.58 (0.56) \\ $N= 1024$ & - & - & - &2.05 (0.49) & 3.93 (0.53) \\ \hline \end{tabular} \caption{Preconditioners ${\mathbf P}$ (top) and ${\mathbf P}_{\star}$ (bottom): condition number estimates and, in parentheses, the ratio between the condition number of the preconditioned system and $\left(1+\log(H/h)\right)^2$. Unstructured triangular grids, piecewise linear elements ($p=1$). } \label{tab:precP1P2_hversion_TU} \end{center} \end{table} Next, still with $p=1$, we present some computations that show that the preconditioner ${\mathbf P}_{D}$ defined as in \eqref{eq:defP1diag}, i.e., the block-diagonal version of the preconditioner ${\mathbf P}_{\star}$, is not optimal (cf. Remark \ref{rem:defP1diag}). More precisely, in Table~\ref{tab:precPdaisy_hversion_TS_TU} we report the condition number estimate of the preconditioned system when decreasing $H$ as well as $h$.
Table~\ref{tab:precPdaisy_hversion_TS_TU} also shows, in parentheses, the ratio between $\kappa({\mathbf P}_{D}^{-1} \widetilde{{\mathbf S}})$ and $Hh^{-1}$. We can clearly observe that, on both structured and unstructured mesh configurations, the ratio between $\kappa({\mathbf P}_{D}^{-1} \widetilde{{\mathbf S}})$ and $Hh^{-1}$ remains substantially constant as $H$ and $h$ vary, indicating that the preconditioner ${\mathbf P}_{D}$ is not optimal. \\ \begin{table}[!htbp] \begin{center} \begin{tabular}{lrrrrr} &\multicolumn{5}{c}{Structured triangular grids}\\ \hline &\multicolumn{1}{c}{$n=128$} &\multicolumn{1}{c}{$n=512$} &\multicolumn{1}{c}{$n=2048$} &\multicolumn{1}{c}{$n=8192$} &\multicolumn{1}{c}{$n=32768$}\\ \hline\\[-0.3cm] $N= 16$ & 11.51 (4.07) & 23.19 (4.10) & 47.40 (4.19) & 95.21 (4.21) & 190.69 (4.21) \\ $N= 64$ & - & 11.58 (4.09) & 23.03 (4.07) & 47.16 (4.17) & 95.02 (4.20) \\ $N= 256$ & - & - & 11.55 (4.08) & 22.96 (4.06) & 47.12 (4.16) \\ $N= 1024$ & - & - & - & 11.44 (4.04) & 22.88 (4.04) \\ \hline\\ &\multicolumn{5}{c}{Unstructured triangular grids }\\ \hline &\multicolumn{1}{c}{$n=128$} &\multicolumn{1}{c}{$n=512$} &\multicolumn{1}{c}{$n=2048$} &\multicolumn{1}{c}{$n=8192$} &\multicolumn{1}{c}{$n=32768$}\\ \hline\\[-0.3cm] $N= 16$ & 9.45 (3.34) & 18.63 (3.29) & 39.13 (3.46) & 75.38 (3.33) & 148.93 (3.29) \\ $N= 64$ & - & 8.93 (3.16) & 18.30 (3.24) & 38.88 (3.44) & 78.82 (3.48) \\ $N= 256$ & - & - & 8.80 (3.11) & 17.85 (3.15) & 38.59 (3.41) \\ $N= 1024$ & - & - & - & 8.75 (3.10) & 17.64 (3.12) \\ \hline\\ \end{tabular} \caption{Preconditioner ${\mathbf P}_{D}$: condition number estimates and, in parentheses, the ratio between $\kappa({\mathbf P}_{D}^{-1} \widetilde{{\mathbf S}})$ and $Hh^{-1}$. Structured (top) and unstructured (bottom) triangular grids, piecewise linear elements ($p=1$).} \label{tab:precPdaisy_hversion_TS_TU} \end{center} \end{table} Finally, we present some computations obtained with high-order elements.
As before, we consider a subdomain partition made of $N=4^{\ell}$ squares, $\ell=1,2,\ldots$ (cf. Figure~\ref{fig:subdomains_ini_stru_grids} for $\ell=1,2,3$). In this set of experiments, the subdomain partition coincides with the fine grid, i.e., $H=h$, and on each element we consider the space of polynomials of degree $p=2,3,4,5,6$ in each coordinate direction. Table~\ref{tab:nonprec_pversion_CART} shows the condition number estimate of the non-preconditioned Schur complement matrix and the CG iteration counts. \begin{table}[!htbp] \begin{center} \begin{tabular}{llllll} \hline $N=n$ &\multicolumn{1}{c}{$p=2$} &\multicolumn{1}{c}{$p=3$} &\multicolumn{1}{c}{$p=4$} &\multicolumn{1}{c}{$p=5$} &\multicolumn{1}{c}{$p=6$}\\ \hline\\[-0.3cm] $4$ & 5.1e+1 ( 5) & 2.7e+2 ( 8) & 6.2e+2 (13) & 1.4e+3 (18) & 3.4e+3 (28) \\ $16$ & 3.2e+2 (22) & 8.4e+2 (42) & 2.0e+3 (69) & 4.6e+3 (101) & 1.1e+4 (153) \\ $64$ & 1.2e+3 (90) & 3.2e+3 (150) & 7.6e+3 (231) & 1.8e+4 (312) & 4.3e+4 (446) \\ $256$ & 4.7e+3 (195) & 1.3e+4 (294) & 3.0e+4 (462) & 7.0e+4 (634) & 1.7e+5 (886) \\ \hline \end{tabular} \caption{Condition number estimates $\kappa({\mathbf S})$ and CG iteration counts (in parentheses). Cartesian grids.} \label{tab:nonprec_pversion_CART} \end{center} \end{table} We have run the same set of experiments employing the preconditioners ${\mathbf P}$ and ${\mathbf P}_{\star}$, and the results are reported in Table~\ref{tab:precP1P3_pversion_CART}. We clearly observe that, as predicted, for a fixed mesh configuration the condition number of the preconditioned system grows logarithmically as the polynomial approximation degree increases. Comparing these results with the analogous ones reported in Table~\ref{tab:nonprec_pversion_CART}, it can be inferred that both preconditioners ${\mathbf P}$ and ${\mathbf P}_{\star}$ are efficient in reducing the condition number of the Schur complement matrix.
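As a quick numerical cross-check of this logarithmic growth, the bracketed ratios of Table~\ref{tab:precP1P3_pversion_CART} can be recomputed directly from the reported condition numbers. The following Python sketch does this for the $N=256$ row; the numerical values are transcribed by hand from the table and serve only as an illustration:

```python
import math

# Condition number estimates for the N = 256 row, transcribed by hand
# from Table tab:precP1P3_pversion_CART (illustrative values only).
kappa_P     = {2: 10.24, 3: 10.19, 4: 16.61, 5: 16.71, 6: 21.84}
kappa_Pstar = {2:  6.55, 3:  6.25, 4: 10.83, 5: 11.20, 6: 14.94}

def ratio(kappa, p):
    """Ratio between kappa and the theoretical bound (1 + log(p^2))^2."""
    return kappa / (1.0 + math.log(p ** 2)) ** 2

ratios_P     = {p: round(ratio(k, p), 2) for p, k in kappa_P.items()}
ratios_Pstar = {p: round(ratio(k, p), 2) for p, k in kappa_Pstar.items()}
# ratios_P     -> {2: 1.8, 3: 1.0, 4: 1.17, 5: 0.94, 6: 1.04}
# ratios_Pstar -> {2: 1.15, 3: 0.61, 4: 0.76, 5: 0.63, 6: 0.71}
```

The recomputed ratios reproduce the values in parentheses in the table, consistent with the claimed $\left(1+\log(p^2)\right)^2$ bound.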
\begin{table}[!htbp] \begin{center} \begin{tabular}{lrrrrr} &\multicolumn{5}{c}{Preconditioner ${\mathbf P}$}\\ \hline $N=n$ &\multicolumn{1}{c}{$p=2$} &\multicolumn{1}{c}{$p=3$} &\multicolumn{1}{c}{$p=4$} &\multicolumn{1}{c}{$p=5$} &\multicolumn{1}{c}{$p=6$}\\ \hline\\[-0.3cm] $N= 4$ & 7.14 (1.25) & 9.04 (0.88) & 12.06 (0.85) & 14.15 (0.79) & 16.48 (0.78) \\ $N= 16$ & 9.24 (1.62) & 9.93 (0.97) & 15.25 (1.07) & 15.99 (0.90) & 20.25 (0.96) \\ $N= 64$ & 10.03 (1.76) & 10.14 (0.99) & 16.34 (1.15) & 16.57 (0.93) & 21.53 (1.02) \\ $N= 256$ & 10.24 (1.80) & 10.19 (1.00) & 16.61 (1.17) & 16.71 (0.94) & 21.84 (1.04) \\ \hline\\ &\multicolumn{5}{c}{Preconditioner ${\mathbf P}_{\star}$}\\ \hline $N=n$ &\multicolumn{1}{c}{$p=2$} &\multicolumn{1}{c}{$p=3$} &\multicolumn{1}{c}{$p=4$} &\multicolumn{1}{c}{$p=5$} &\multicolumn{1}{c}{$p=6$}\\ \hline\\[-0.3cm] $N= 4$ & 1.88 (0.33) & 2.56 (0.25) & 3.75 (0.26) & 4.64 (0.26) & 5.70 (0.27) \\ $N= 16$ & 4.60 (0.81) & 5.23 (0.51) & 8.71 (0.61) & 9.38 (0.53) & 12.25 (0.58) \\ $N= 64$ & 6.18 (1.09) & 6.03 (0.59) & 10.35 (0.73) & 10.79 (0.61) & 14.33 (0.68) \\ $N= 256$ & 6.55 (1.15) & 6.25 (0.61) & 10.83 (0.76) & 11.20 (0.63) & 14.94 (0.71) \\ \hline \\ \end{tabular} \caption{Preconditioners ${\mathbf P}$ (top) and ${\mathbf P}_{\star}$ (bottom). Condition number estimates and ratio between the condition number of the preconditioned system and $\left(1+\log(p^{2})\right)^2$ (in parentheses). Cartesian grids.} \label{tab:precP1P3_pversion_CART} \end{center} \end{table} \appendix \section{Appendix}\label{app0} In this section, we report the proofs of Lemma~\ref{lembsp2} and Lemma~\ref{lembsp}.
In the following, for a subdomain edge $E\subset \partial\Omega_{\ell}$ we will make explicit use of the space $H^{s}_0(E)$, $0<s<1/2$, defined as the subspace of those functions $\eta \in H^s(E)$ such that the function $\bar \eta \in L^2(\partial\Omega_{\ell})$ defined as $\bar \eta = \eta$ on $E$ and $\bar \eta = 0$ on $\partial\Omega_{\ell}\setminus E$ belongs to $H^s(\partial\Omega_{\ell})$. The space $H^{s}_0(E)$ is endowed with the norm \[ \| \eta \|_{H^s_0(E)} = \| \bar \eta \|_{H^s(\partial\Omega_{\ell})}.\] We recall that for $s < 1/2$ the spaces $H^s(E)$ and $H^{s}_0(E)$ coincide as sets and have equivalent norms. However, the constant in the norm equivalence goes to infinity as $s$ tends to $1/2$. In particular, on the reference segment $\widehat E =(0,1)$, for all $\varphi \in H^{s}(\widehat{E})$ and all $\beta \in {\mathbb R}$, the following bound can be shown (see \cite{Bsubstr}): \begin{equation*} | \varphi |_{H^{s}_0(\widehat{E})} \lesssim \frac 1 {1/2-s} \| \varphi - \beta \|_{H^{s}(\widehat{E})} + \frac 1 {\sqrt{1/2-s}} |\beta|, \end{equation*} which, provided $\varphi \in H^{1/2}(\widehat E)$, implies the bound \begin{equation} | \varphi |_{H^{s}_0(\widehat{E})}\lesssim \frac 1 {1/2-s} \| \varphi - \beta \|_{H^{1/2}(\widehat{E})} + \frac 1 {\sqrt{1/2-s}} |\beta|.\label{penultima} \end{equation} Before giving the proofs of Lemmas~\ref{lembsp2} and \ref{lembsp}, we start by observing that the following result, which corresponds to the $hp$-version of \cite[Lemma~3.1]{Bsubstr}, holds. \begin{lemma} \label{lemmainf} Let $E=(a,b)$ be a subdomain edge of $\Omega_{\ell}$. Then, for all $\eta \in\Phi_{\ell}(E)$, the following bounds hold: \begin{itemize} \item[{\em (i)}] \begin{equation} \label{eq:19} (\eta(a) - \eta(b))^2 \lesssim \left(1+\log{\left(\frac{H_{\ell}\,\po^{2}}{\ho}\right)}\right) | \eta |^2_{H^{1/2}(E)}.
\end{equation} \item[{\em (ii)}] If $\eta(x)=0$ at some $x \in E$, then \begin{equation}\label{eq:18} \| \eta \|^2_{L^\infty(E)} \lesssim \left(1+\log{\left(\frac{H_{\ell}\,\po^{2}}{\ho}\right)}\right) | \eta |^2_{H^{1/2}(E)}. \end{equation} \item[{\em (iii)}] If $\eta \in\Phi^{0}_{\ell}(E)$, we have \begin{equation} \label{eq:4} \| \eta \|_{H^{1/2}_{00}(E)}^2\lesssim \left(1+\log{\left(\frac{H_{\ell}\,\po^{2}}{\ho}\right)}\right) | \eta |_{H^{1/2}(\partial \Omega_{\ell})}^2. \end{equation} \end{itemize} \end{lemma} \begin{proof} We first show {\em (ii)}. Notice that since $E$ is an arbitrary subdomain edge, $E\subset \partial\Omega_{\ell}$, and so $|E|\simeq H_{\ell}$. We claim that for any $\varphi \in H^{1/ 2+\varepsilon}(E)$ the following inequality holds: \begin{equation} \label{eq:2} \| \varphi \|^2_{L^{\infty}(E)} \lesssim H^{-1}_{\ell} \| \varphi \|_{L^2(E)}^2 + \frac {H^{2\varepsilon}_{\ell}} {\varepsilon} | \varphi |^2_{H^{1/ 2+\varepsilon}(E)}. \end{equation} To show \eqref{eq:2} one needs to trace the constants in the Sobolev embedding of $H^{1/2+\varepsilon}(E)$ into $L^{\infty}(E)$. Let $\widehat{E} =(0,1)$ be the reference unit segment. Then, for any $\hat{\varphi} \in H^{1/ 2+\varepsilon}(\widehat{E})$, the continuity constant of the injection $H^{1/2+\varepsilon}(\widehat{E}) \subset L^{\infty}(\widehat{E})$ depends on $\varepsilon$ as follows (see \cite[Appendix]{Bsubstr} for details): \[ \|\hat{ \varphi} \|^2_{L^{\infty}(\widehat{E})} \lesssim \|\hat{ \varphi} \|_{L^2(\widehat{E})}^2 + \frac 1 {\varepsilon} | \hat{\varphi} |^2_{H^{1/ 2+\varepsilon}(\widehat{E})}. \] A scaling argument using $|E|\simeq H_{\ell}$ leads to \eqref{eq:2}. Now let $\eta \in \Phi_h$ and let $\beta \in \mathbb{R}$ be an arbitrary constant.
Using the inverse inequality \eqref{inv:E}, we have \begin{equation} \label{eq:3} \begin{aligned} \frac {H_{\ell}^{2\varepsilon}} {\varepsilon} | \eta - \beta |^2_{H^{1/ 2+\varepsilon}(E)} &= \frac {H_{\ell}^{2\varepsilon}} {\varepsilon} | \eta |^2_{H^{1/ 2+\varepsilon}(E)} \lesssim \frac {H_{\ell}^{2\varepsilon}\, \po^{4\varepsilon}\ho^{-2\varepsilon}}{\varepsilon} | \eta |^2_{H^{1/ 2}(E)} &&\\ & \lesssim \log{\left(\frac{H_{\ell} \,\po^{2}}{\ho}\right)} | \eta |^2_{H^{1/ 2}(E)}\;, && \end{aligned} \end{equation} where in the last step we have taken $\varepsilon = 1/\log(H_{\ell}\po^{2}/\ho)$ and used the fact that $s^{1/\log(s)}=e$. Applying inequality \eqref{eq:2} to $\varphi=\eta - \beta$ together with the above estimate \eqref{eq:3} yields \begin{equation} \label{eq:27} \| \eta - \beta \|_{L^\infty(E)}^2 \lesssim H_{\ell}^{-1} \| \eta- \beta \|^2_{L^2(E)} + \log \left( \frac {H_{\ell}\po^2} {\ho } \right)| \eta |_{H^{1/2}(E)}^2. \end{equation} Following \cite{BPS}, let $\beta$ be the average of $\eta$ over $E$ (i.e., the $L^{2}$-projection of $\eta$ onto the space $\mathbb{P}^{0}(E)$ of constant functions over $E$). The Poincar\'e--Friedrichs inequality (or standard approximation results) gives \begin{equation} \label{eq:20} H_{\ell}^{-1/2} \| \eta - \beta \|_{L^2(E)} \lesssim | \eta |_{H^{1/2}(E)} \end{equation} which yields \begin{equation} \label{eq:21} \| \eta - \beta \|^2_{L^\infty(E)} \lesssim \left( 1 + \log \left( \frac {H_{\ell}\po^{2}} {\ho}\right) \right)| \eta |^2_{H^{1/2}(E)}. \end{equation} The proof of \eqref{eq:18} is concluded by noticing that if $\eta(x) = 0$ for some $x \in \bar{E}$, then \begin{equation} \label{eq:21b} | \beta | \lesssim \| \eta - \beta \|_{L^\infty(E)} \end{equation} which yields \eqref{eq:18} by the triangle inequality. The proof of \eqref{eq:19} follows by applying the estimate \eqref{eq:18} to the function $\eta-\eta(a)$, which by hypothesis vanishes at $a\in \bar{E}$.
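The choice $\varepsilon = 1/\log(H_{\ell}\po^{2}/\ho)$ made above can be checked numerically: writing $s = H_{\ell}\po^{2}/\ho$, the identity $s^{1/\log(s)}=e$ makes the factor $s^{2\varepsilon}/\varepsilon$ collapse to $e^{2}\log(s)$, which is the source of the logarithmic term. A minimal Python sketch (the sample values of $s$ are arbitrary):

```python
import math

def log_factor(s):
    """Evaluate s**(2*eps) / eps at the choice eps = 1/log(s).
    Since s**(1/log(s)) = e, this equals e**2 * log(s)."""
    eps = 1.0 / math.log(s)
    return s ** (2.0 * eps) / eps

for s in (10.0, 1.0e3, 1.0e6):          # s plays the role of H*p**2/h > 1
    assert abs(s ** (1.0 / math.log(s)) - math.e) < 1e-12
    assert abs(log_factor(s) - math.e ** 2 * math.log(s)) < 1e-9
```

This confirms that, with the stated choice of $\varepsilon$, the algebraic factor $H_{\ell}^{2\varepsilon}\po^{4\varepsilon}\ho^{-2\varepsilon}/\varepsilon$ is indeed bounded by a constant times $\log(H_{\ell}\po^{2}/\ho)$.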
To show {\em (iii)}, we first notice that for $\eta_{o} \in \Phi^{0}_{\ell}(E)$, we can always construct an extension $\widetilde{\eta}_o$ such that \begin{equation*} \begin{aligned} &\widetilde{\eta}_o =\eta_{o} \; \textrm{ on $E$} && \quad \widetilde{\eta}_o=0 \; \textrm{ on $\partial\Omega_{\ell} \setminus E$}\;. \end{aligned} \end{equation*} Using now the inverse inequality \eqref{inv:O}, we obtain the following bounds: \begin{equation} \| \eta_{o} \|_{H^{1/2}_{00}(E)} \lesssim | \widetilde{\eta}_o |_{H^{1/2}(\partial\Omega_{\ell}) } \lesssim \po^{2\varepsilon} \ho^{-\varepsilon} | \widetilde{\eta}_o |_{H^{1/2 - \varepsilon}(\partial\Omega_{\ell})} \lesssim \po^{2\varepsilon} \ho^{-\varepsilon}| \eta_{o} |_{H^{1/2-\varepsilon}_0(E)},\label{36} \end{equation} where the first inequality follows from the boundedness of the extension by $0$ from $H^{1/2}_{00}(E)$ to $H^{1/2}(\partial\Omega_{\ell})$. To estimate the $H^{1/2-\varepsilon}_0(E)$ seminorm of $\eta_{o}$, we observe that \eqref{penultima} rescales as \[ | \varphi |_{H^{1/2-\varepsilon}_0(E)} \lesssim \frac{H_{\ell}^{\varepsilon}} \varepsilon \left( H_{\ell}^{-1/2} \| \varphi - \beta \|_{L^2(E)} + | \varphi - \beta |_{H^{1/2}(E)} \right) + \frac{H_{\ell}^{\varepsilon}}{\sqrt{\varepsilon}} | \beta |. \] Taking now $\varphi = \eta_{o}$ and choosing $\beta$ as its average on $E$, the first term on the right-hand side above is bounded by means of the Poincar\'e--Friedrichs inequality, and the second by means of estimate \eqref{eq:21b}, which holds since $\eta_{o}(a) = 0$. Hence, we get \[ \| \eta_{o} \|_{H^{1/2}_{00}(E)} \lesssim \frac{H_{\ell}^{\varepsilon} \po^{2\varepsilon}\ho^{-\varepsilon}} \varepsilon | \eta_{o} |_{H^{1/2}(E)} + \frac{H_{\ell}^{\varepsilon}\po^{2\varepsilon} \ho^{-\varepsilon}}{\sqrt{\varepsilon}} \| \eta_{o} - \beta \|_{L^\infty(E)}.
\] Arguing as before, taking $\varepsilon = 1/\log(H_{\ell}\po^{2}/\ho)$ and using the bound \eqref{eq:21}, we obtain \[ \| \eta_{o} \|_{H^{1/2}_{00}(E)} \lesssim \left(1 + \log \frac {H_{\ell}\, \po^{2}}{\ho}\right) | \eta_{o} |_{H^{1/2}(E)}. \] Finally, since $\sum_{E\subset \partial\Omega_{\ell}} | \cdot |_{H^{1/2}(E)}^2 \lesssim | \cdot |_{H^{1/2}(\partial\Omega_{\ell})}^2,$ by squaring and summing over $E\subset \partial\Omega_{\ell}$, we obtain \eqref{eq:4}. \end{proof} We are now able to prove Lemma~\ref{lembsp2} and Lemma~\ref{lembsp}. \begin{proof}[{\it Proof of Lemma~\ref{lembsp2}.}] A direct computation using the linearity of $\chi$ shows that, if $a_i,b_i$ are the vertices of the $i$-th subdomain edge $E^i$ of $\Omega_{\ell}$, we have $$ | \chi^\ell |_{H^{1/2}(\partial\Omega_{\ell})}^2 \lesssim \sum_{i=1}^{N^\ell_E} (\eta^\ell(a_i) - \eta^\ell(b_i))^2$$ with $N^\ell_E$ ($=3$ or $4$) denoting the number of subdomain edges of $\Omega_{\ell}$. Now, using \eqref{eq:19} and assembling all the contributions, the claim easily follows. \end{proof} \begin{proof}[{\it Proof of Lemma~\ref{lembsp}.}] Let $\zeta_0 \in \Phi_h^\ell$ be the unique element of $\Phi_h^\ell$ satisfying $\zeta_0(a) = 0$ for all vertices $a$ of $\Omega_{\ell}$ and $(\zeta_0,\tau)_{H^{1/2}(\partial\Omega_{\ell})} = (\zeta_L,\tau)_{H^{1/2}(\partial\Omega_{\ell})}$ for all $\tau \in \Phi_h^\ell$ with $\tau(a) = 0$ for all vertices $a$ of $\Omega_{\ell}$. It is not difficult to see that $| \cdot |_{H^{1/2}(\partial\Omega_{\ell})}$ is a norm on the subspace of functions in $\Phi_h^\ell$ vanishing at the vertices of $\Omega_{\ell}$; then, by standard arguments, $\zeta_0$ is well defined and $| \zeta_0 |_{H^{1/2}(\partial\Omega_{\ell})} \lesssim | \zeta_L |_{H^{1/2}(\partial\Omega_{\ell})}$.
Now we can write \begin{equation} \label{eq:6} \sum_{i=1}^{N^\ell_E} \| \xi \|^2_{H^{1/2}_{00}({E_{\ell}^i})} \lesssim \sum_{i=1}^{N^\ell_E} \| \xi + \zeta_0\|^2_{H^{1/2}_{00}({E_{\ell}^i})} + \sum_{i=1}^{N^\ell_E} \| \zeta_0 \|^2_{H^{1/2}_{00}({E_{\ell}^i})}, \end{equation} with $N^\ell_E$ the number of subdomain edges of $\Omega_{\ell}$. The first sum on the right-hand side of \eqref{eq:6} can be bounded by using the previous lemma as \begin{align*} \sum_{i=1}^{N^\ell_E} \| \xi + \zeta_0\|^2_{H^{1/2}_{00}({E_{\ell}^i})} &\lesssim \left( 1 + \log\left(H\,p^2/ h\right) \right)^2 | \xi + \zeta_0 |^2_{H^{1/2}(\partial\Omega_{\ell})} \\ & \lesssim \left( 1 + \log\left(H\,p^2/ h\right) \right)^2 | \xi + \zeta_L |^2_{H^{1/2}(\partial\Omega_{\ell})}, \end{align*} where we used the Poincar{\'e} inequality to bound the $H^{1/2}$ norm of $\xi + \zeta_0$ (which vanishes at the vertices of $\Omega_{\ell}$) by the corresponding seminorm, while the last inequality follows by observing that, by the definition of $\zeta_0$, $\xi + \zeta_0 \in \Phi_h^\ell$ vanishes at the vertices of $\Omega_{\ell}$ and satisfies $(\zeta_L - \zeta_0,\xi+\zeta_0)_{H^{1/2}(\partial\Omega_{\ell})} = 0$. Hence, we have $$| \xi + \zeta_L |^2_{H^{1/2}(\partial\Omega_{\ell})} = | \xi + \zeta_0 |^2_{H^{1/2}(\partial\Omega_{\ell})}+ | \zeta_L - \zeta_0 |^2_{H^{1/2}(\partial\Omega_{\ell})} \geq | \xi + \zeta_0 |^2_{H^{1/2}(\partial\Omega_{\ell})}.
$$ Let us now bound the second sum on the right-hand side of \eqref{eq:6}: we first observe that \begin{equation*} \| \zeta_0 \|^2_{H^{1/2}_{00}({E_{\ell}^i})} = | \zeta_0 |^2_{H^{1/2}({E_{\ell}^i})}+I_1(\zeta_0) + I_2(\zeta_0), \end{equation*} having set \begin{equation*} \begin{aligned} & I_1(\zeta_0) = \int_{a_i}^{b_i} \frac {| \zeta_0(x) |^2} {|x - a_i|} \dd{x}, &&I_2(\zeta_0) = \int_{a_i}^{b_i} \frac {| \zeta_0(x) |^2} {|x - b_i|} \dd{x}, \end{aligned} \end{equation*} with $a_i$ and $b_i$ the two vertices of the subdomain edge ${E_{\ell}^i}$. Now we can write \begin{align*} \sum_{i=1}^{N^\ell_E} | \zeta_0 |^2_{H^{1/2}({E_{\ell}^i})} & \lesssim | \zeta_0 |^2_{H^{1/2}(\partial\Omega_{\ell})} \lesssim | \zeta_L |^2_{H^{1/2}(\partial\Omega_{\ell})} \lesssim \sum_{i=1}^{N^\ell_E} (\zeta_L(a_i) - \zeta_L(b_i))^2 \\ & \lesssim \left( 1 + \log\left(H\,p^2/ h\right) \right) | \xi + \zeta_L |^2_{H^{1/2}(\partial\Omega_{\ell})}, \end{align*} where the inequality $ | \zeta_L |^2_{H^{1/2}(\partial\Omega_{\ell})}\lesssim \sum_{i=1}^{N^\ell_E} (\zeta_L(a_i) - \zeta_L(b_i))^2$ is proven in \cite{BPS} by direct computation, and the last inequality follows by applying the bound of Lemma~\ref{lemmainf}~{\it (ii)} to the function $(\xi + \zeta_L)(x) - (\xi + \zeta_L)(b_i)$. Let us now bound $I_1$. For notational simplicity let us identify $a_i = 0$ and $b_i = H$. Adding and subtracting $\zeta_L(x) + \zeta_L(0)$ and using the Cauchy-Schwarz inequality, we have \begin{equation}\label{eq:esti_I1} I_1(\zeta_0) = \int_0^H \frac {|\zeta_0(x)|^2} {|x|} \dd{x} \lesssim \int_0^H \frac {|\zeta_0(x) - \zeta_L(x) + \zeta_L(0)|^2} {|x|} \dd{x} +\int_0^H \frac {|\zeta_L(x) - \zeta_L(0)|^2} {|x|} \dd{x}. \end{equation} Let us bound the first integral on the right-hand side of \eqref{eq:esti_I1}.
Setting $\zeta_\perp = \zeta_0 - \zeta_L$, we have \[ \int_0^H \frac {| \zeta_\perp(x) - \zeta_\perp(0) |^2} {|x|} \dd{x} = \int_0^h \frac {| \zeta_\perp(x) - \zeta_\perp(0) |^2} {|x|} \dd{x} + \int_h^H \frac {| \zeta_\perp(x) - \zeta_\perp(0) |^2} {|x|} \dd{x}.\] The first term can be bounded by \begin{eqnarray*} \int_0^h \frac {| \zeta_\perp(x) - \zeta_\perp(0) |^2} {|x|} \dd{x} = \int_0^h \frac {| \int_0^x (\zeta_\perp)_x(\tau) \,d\tau |^2} {|x|} \dd{x} \lesssim h | \zeta_\perp |^2_{H^1({E_{\ell}^i})} \lesssim | \zeta_\perp |^2_{H^{1/2}({E_{\ell}^i})}, \end{eqnarray*} while we bound the second term by \begin{align*} \int_h^H \frac {| \zeta_\perp(x) - \zeta_\perp(0) |^2} {|x|} \dd{x} \lesssim &\| \zeta_\perp - \zeta_\perp(0) \|^2_{L^{\infty}({E_{\ell}^i})} \log \left( \frac{H_{\ell}\,\po^2}{\ho}\right) \\ \lesssim & \left( \log \left( \frac{H_{\ell}\,\po^2}{\ho}\right) \right)^2 | \zeta_\perp |^2_{H^{1/2}({E_{\ell}^i})}. \end{align*} Next, we estimate the second integral on the right-hand side of \eqref{eq:esti_I1}. By direct calculation and using the linearity of $\zeta_L$, we have \[ \int_0^H \frac {|\zeta_L(x) - \zeta_L(0)|^2} {|x|} \dd{x} \lesssim (\zeta_L(H) - \zeta_L(0))^2 \lesssim \log \left( \frac{H_{\ell}\,\po^{2}}{\ho} \right)\,| \zeta_L + \xi |^2_{H^{1/2}({E_{\ell}^i})}. \] Hence, we conclude that $$ I_1(\zeta_0) \lesssim \left( 1 + \log\left(H\,p^2/ h\right) \right)^2 | \zeta_0 - \zeta_L |^2_{H^{1/2}({E_{\ell}^i})} + \log \left( \frac{H_{\ell}\,\po^2}{\ho} \right)\,| \zeta_L + \xi |^2_{H^{1/2}({E_{\ell}^i})}. $$ The term $I_2$ can be bounded by the same argument. Collecting all the previous estimates, the claim follows. \end{proof} \section*{Acknowledgments} The work of P.F. Antonietti and B. Ayuso de Dios was partially supported by Azioni Integrate Italia-Spagna through the projects IT097ABB10 and HI2008-0173. B. Ayuso de Dios was also partially supported by grants MINECO MTM2011-27739-C04-04 and GENCAT 2009SGR-345.
Part of this work was done during several visits of B. Ayuso de Dios to the Istituto {\it Enrico Magenes} IMATI-CNR in Pavia (Italy). She thanks the IMATI for its kind hospitality. \end{document}
\begin{document} \title{Protective measurement of open quantum systems} \author{Maximilian Schlosshauer} \affiliation{Department of Physics, University of Portland, 5000 North Willamette Boulevard, Portland, Oregon 97203, USA} \begin{abstract} We study protective quantum measurements in the presence of an environment and decoherence. We consider the model of a protectively measured qubit that also interacts with a spin environment during the measurement. We investigate how the coupling to the environment affects the two characteristic properties of a protective measurement, namely, (i) the ability to leave the state of the system approximately unchanged and (ii) the transfer of information about expectation values to the apparatus pointer. We find that even when the interaction with the environment is weak enough not to lead to appreciable decoherence of the initial qubit state, it causes a significant broadening of the probability distribution for the position of the apparatus pointer at the conclusion of the measurement. This washing out of the pointer position severely diminishes the accuracy with which the desired expectation values can be measured from a readout of the pointer. We additionally show that even when the coupling to the environment is chosen such that the state of the system is immune to decoherence, the environment may still detrimentally affect the pointer readout.\\[.2cm] Journal reference: \emph{Phys.\ Rev.\ A} {\bf 101}, 012108 (2020), \href{https://doi.org/10.1103/PhysRevA.101.012108}{\texttt{10.1103/PhysRevA.101.012108}} \end{abstract} \maketitle \section{Introduction} Quantum measurements in which an apparatus is weakly coupled to a quantum system play an important role in the investigation of quantum phenomena \cite{Dressel:2014:uu,Gao:2014:cu}.
Two main categories of such weak measurements have been studied: instantaneous weak measurements \cite{Aharonov:1988:mz,Dressel:2014:uu} and protective measurements \cite{Aharonov:1993:qa,Aharonov:1993:jm,Dass:1999:az,Vaidman:2009:po,Gao:2014:cu,Genovese:2017:zz,Qureshi:2015:jj}. In instantaneous weak measurements (usually simply called weak measurements), the apparatus interacts with the system only momentarily, followed by postselection. The shift of the apparatus pointer then encodes the weak value \cite{Aharonov:1988:mz,Dressel:2014:uu} of a system observable $\op{A}$ for given pre- and postselected states \cite{Aharonov:1988:mz,Duck:1989:uu,Dressel:2014:uu}. By contrast, in protective measurements \cite{Aharonov:1993:qa,Aharonov:1993:jm,Dass:1999:az,Vaidman:2009:po,Gao:2014:cu,Genovese:2017:zz,Qureshi:2015:jj} the apparatus is coupled to the system not instantaneously but for a time $T$ much longer than the intrinsic timescale of the system. If the system starts out in an eigenstate of its Hamiltonian, then this state remains approximately unchanged during the measurement while the apparatus pointer is shifted by an amount proportional to the expectation value of $\op{A}$ in the initial state \footnote{A different version of protective measurements, which we shall not consider here, uses repeated projective measurements to protect the state \cite{Aharonov:1993:jm,Piacentini:2017:oo}.}. 
Applications of protective measurements include the direct measurement of the quantum state of a single system \cite{Aharonov:1993:qa,Aharonov:1993:jm,Aharonov:1996:fp,Dass:1999:az,Vaidman:2009:po,Auletta:2014:yy,Diosi:2014:yy,Aharonov:2014:yy,Schlosshauer:2016:uu}, studies of particle trajectories \cite{Aharonov:1996:ii,Aharonov:1999:uu}, determination of stationary states \cite{Diosi:2014:yy}, translation of ergodicity into the quantum realm \cite{Aharonov:2014:yy}, fundamental investigations of quantum measurement \cite{Aharonov:1993:qa,Aharonov:1993:jm,Aharonov:1996:fp, Alter:1996:oo,Alter:1997:oo,Dass:1999:az,Gao:2014:cu}, and the description of two-state thermal ensembles \cite{Aharonov:2014:yy}. An experimental realization of a protective measurement using photons has been reported in Ref.~\cite{Piacentini:2017:oo}. In general, any realistic quantum system is open and consequently the state of the system is subject to decoherence due to interactions with the environment \cite{Zurek:1982:tv,Zurek:2002:ii,Schlosshauer:2019:qd}. For instantaneous weak measurements, the decoherence acting between the preselection and the start of the measurement, and again between the end of measurement and the postselection, will influence the measured weak value \cite{Shikano:2010:zz,Abe:2015:yy}. However, because the measurement is instantaneous, the measurement and decoherence interactions can be treated independently and the effect of decoherence can be minimized by performing the pre- and postselection close to the time of measurement. The situation is very different and more acute in a protective measurement, since here the system must be coupled to the apparatus for a long time, during which the system will also be subject to interactions with its environment. Therefore, the dynamics will be governed simultaneously by the measurement and decoherence interactions, and the environment can substantially affect the protective measurement in two ways. 
First, because decoherence will in general change the quantum state, it decreases the rate of success of the measurement (since an ideal protective measurement requires the state of the system to remain unaltered). Second, environmental interactions can influence the state of the apparatus pointer at the conclusion of the measurement, thereby diminishing the ability of the pointer to accurately reveal the expectation value of the chosen observable. In this paper, we consider a generic model for the protective measurement of a qubit and extend it by adding the interaction of the qubit with an environment of other two-level systems. We study, for different strengths of the environmental interaction, the resulting evolution of the state of the qubit and the apparatus pointer. This paper is organized as follows. In Sec.~\ref{sec:model}, we describe our model (developing it, for concreteness, in the context of a Stern--Gerlach measurement setup) and solve for its dynamics. We then investigate the influence of the environment on the initial state of the system (Sec.~\ref{sec:infl-decoh-init}) and on the shift of the apparatus pointer (Sec.~\ref{sec:infl-decoh-point}). In Sec.~\ref{sec:envir-effects-with}, we discuss how the environment can negatively affect the apparatus pointer even when it does not change the state of the system. In Sec.~\ref{sec:stern-gerl-exper}, we describe a scheme for an experimental test of our model using a setup of the Stern--Gerlach type. In Sec.~\ref{sec:gener-appl}, we show that our model and results are general in the sense that they apply beyond the Stern--Gerlach scenario to any protective measurement of a qubit system in contact with a spin environment. We discuss our results in Sec.~\ref{sec:discussion}.
\section{\label{sec:model}Model and dynamics} \subsection{Protective measurement} A general protective measurement of a system $S$ by an apparatus $A$ can be described by the Hamiltonian \begin{equation}\label{eq:3aaa} \op{H}(t) = \op{H}_S+\op{H}_m(t)= \op{H}_S+ \kappa(t) \op{O}_S \otimes \op{K}_A, \end{equation} where $\op{H}_S$ is the self-Hamiltonian of the system and $\op{H}_m$ represents the measurement interaction between system and apparatus. $\op{O}_S$ is an arbitrary observable of the system, and $\op{K}_A$ is an operator that generates the shift of the apparatus pointer. The function $\kappa(t)$ is a time-dependent coupling strength, which we take to be proportional to $1/T$ during the duration $t \in [0,T]$ of the measurement, and equal to zero otherwise (more complicated time dependencies may also be considered \cite{Schlosshauer:2014:pm}). The measurement is weak in the sense that $T$ is chosen sufficiently long such that $\op{H}_S$ dominates. If the system starts out in an eigenstate $\ket{\psi}$ of $\op{H}_S$, the probability of transitioning to a different state at the conclusion of the measurement interaction can be made arbitrarily small by increasing $T$ (and thus making the measurement interaction longer and weaker), and the apparatus pointer shifts by an amount proportional to the expectation value $\bra{\psi}\op{O}_S\ket{\psi}$. In this way, the state is effectively protected by $\op{H}_S$ and one may, for example, reconstruct the quantum state of a single system from protective measurements of a complete set of observables \cite{Aharonov:1993:qa,Vaidman:2009:po,Gao:2014:cu,Schlosshauer:2016:uu}. We now focus on the case of a qubit system (with self-Hamiltonian $\op{H}_S=\frac{1}{2}\hbar \omega_0 \op{\sigma}_z$) on which a generic qubit observable $\op{O}_S=\bopvecgr{\sigma} \cdot \buvec{m}$ is protectively measured. 
Then the Hamiltonian~\eqref{eq:3aaa} becomes \begin{equation}\label{eq:7} \op{H}(t) = \frac{1}{2}\hbar \omega_0 \op{\sigma}_z + \kappa (t) (\bopvecgr{\sigma} \cdot \buvec{m}) \otimes \op{K}_A. \end{equation} For concreteness, and to make contact with models of protective measurement studied previously \cite{Aharonov:1993:jm,Dass:1999:az,Schlosshauer:2015:uu}, we shall consider a realization of the Hamiltonian~\eqref{eq:7} in a setting of the Stern--Gerlach type, describing a spin-$\frac{1}{2}$ particle subject to magnetic fields. We stress, however, that our calculations and results are not tied to this particular realization. They are generic in the sense that they apply to the protective measurement of any qubit system by an apparatus pointer as described by Eq.~\eqref{eq:7}. We discuss this generality and applications beyond the Stern--Gerlach setting in Sec.~\ref{sec:gener-appl} below. In the scenario of the Stern--Gerlach type, $\op{H}_S$ corresponds to a uniform protection field $\bvec{B}_0$ in the $+z$ direction, \begin{equation}\label{eq:vshvbjfdjhvs} \op{H}_S = - \mu \bopvecgr{\sigma} \cdot \bvec{B}_0 = -\mu B_0 \op{\sigma}_z, \end{equation} where $\mu$ denotes the magnetic moment of the particle. The eigenstates of $\op{H}_S$ are the eigenstates $\ket{0}$ and $\ket{1}$ of $\op{\sigma}_z$, with eigenvalues $E_\pm=\mp \mu B_0$ and corresponding transition frequency $\omega_0 = 2\mu B_0/\hbar$. During the measurement interval $[0,T]$, the particle additionally experiences an inhomogeneous measurement field given by \footnote{Because the field given by Eq.~\eqref{eq:measfield} has nonzero divergence, it violates Maxwell's equations and therefore cannot represent a physical magnetic field, as already noted in Ref.~\cite{Anandan:1993:uu}. However, one can easily construct a suitable divergence-free inhomogeneous field and show that it produces the same pointer shift and state disturbance \cite{Anandan:1993:uu,Schlosshauer:2015:uu}. 
Therefore, for our purposes it suffices to consider, without loss of generality, the field given by Eq.~\eqref{eq:measfield}.} \begin{equation}\label{eq:measfield} \bvec{B}_m(\bvec{x}) = \frac{1}{T} \beta q \buvec{m}, \end{equation} where $q$ is the position coordinate in the field direction given by the unit vector $\buvec{m}$. We specify $\buvec{m}$ in spherical coordinates using polar angle $\gamma$ and azimuthal angle $\eta$, $\buvec{m} = (\cos\eta\sin\gamma, \sin\eta\sin\gamma, \cos\gamma)$. Thus the measurement Hamiltonian is \begin{align}\label{eq:1dvhjbbdhvbdhjv} \op{H}_m(\bvec{x}) &= - \mu \bopvecgr{\sigma} \cdot \bvec{B}_m(\bvec{x}) = - \mu \frac{\beta q}{T} \bigl[ \cos\eta\sin\gamma \,\op{\sigma}_x \notag \\ & \quad\, + \sin\eta\sin\gamma \,\op{\sigma}_y + \cos\gamma \,\op{\sigma}_z\bigr]. \end{align} The condition of a weak measurement corresponds to $T \gg \omega_0^{-1}$. If we think of $q$ as the one-dimensional position operator for the $\buvec{m}$ axis, we can see that this Hamiltonian generates changes in particle momentum along the $\buvec{m}$ direction \footnote{Since the operator $\op{q}$ does not commute with the Hamiltonian $\op{H}_p=\bopvec{p}^2/2m$ associated with the phase-space degree of freedom of the particle, it is not a constant of motion. Because this noncommutativity complicates the mathematical treatment without altering the possibility and physics of protective measurements \cite{Dass:1999:az}, we follow, without loss of generality, the common approach \cite{Aharonov:1993:jm,Dass:1999:az,Schlosshauer:2015:uu} of considering the particle in its rest frame, such that $\op{H}_p =0$.}. These momentum changes represent the pointer shifts in the model. 
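The weak-measurement limit just described can be illustrated numerically. The sketch below (Python with NumPy; $\hbar = \mu = 1$, and all parameter values are arbitrary illustrative choices, not taken from the paper) evolves the spin at a fixed coordinate $q$ in the combined protection and measurement fields, and checks that for $T \gg \omega_0^{-1}$ the $\ket{0}$ survival probability stays close to one while the $q$-dependent part of the accumulated phase approaches $\mu\beta q\cos\gamma$, i.e., exactly the phase whose gradient produces the momentum (pointer) shift:

```python
import cmath
import math
import numpy as np

# Pauli matrices
SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def evolve(B, T, mu=1.0):
    """U = exp(+i*mu*T*(B . sigma)) for a constant field B, via the
    exact closed form exp(i*a*n.sigma) = cos(a) I + i*sin(a) n.sigma."""
    Bmag = float(np.linalg.norm(B))
    n = B / Bmag
    a = mu * T * Bmag
    n_sigma = n[0] * SX + n[1] * SY + n[2] * SZ
    return math.cos(a) * np.eye(2) + 1j * math.sin(a) * n_sigma

# Illustrative parameters (hbar = mu = 1): protection field B0 along z,
# measurement field (beta*q/T) m with m = (sin(gamma), 0, cos(gamma)),
# i.e., azimuthal angle eta = 0 for simplicity.
B0, beta, q, T, gamma = 1.0, 1.0, 0.1, 200.0, math.pi / 3
m = np.array([math.sin(gamma), 0.0, math.cos(gamma)])

def amp0(qval):
    """Survival amplitude <0|U|0>, with sigma_z|0> = +|0>."""
    B = np.array([0.0, 0.0, B0]) + (beta * qval / T) * m
    return evolve(B, T)[0, 0]

survival = abs(amp0(q)) ** 2                      # stays ~1 in the weak limit
shift = cmath.phase(amp0(q)) - cmath.phase(amp0(0.0))
# shift is close to mu*beta*q*cos(gamma) = 0.05 for these parameters
```

The small residual disturbance of the spin state and the small deviation of the phase from $\mu\beta q\cos\gamma$ both shrink as $T$ grows, consistent with the weak-measurement condition $T \gg \omega_0^{-1}$.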
Suppose that at $t=0$ (the start of the measurement interaction), the system $S$ is in the initial state $\ket{\psi(0)} = \ket{0}\ket{\Phi(p_0)}$, where $\ket{\Phi(p_0)} $ is the initial wave function for particle momentum along $\buvec{m}$, which we take to be a Gaussian of width $\sigma_p$ centered at $p_0$, \begin{equation}\label{eq:18} \Phi_{p_0}(p) = \braket{p}{\Phi(p_0)} = \left( \frac{1}{2\pi\sigma_p^2}\right)^{1/4} \exp \left[-\frac{(p-p_0)^2}{4\sigma_p^2}\right]. \end{equation} In the weak-measurement limit $T \gg \omega_0^{-1}$ [i.e., $\abs{\bvec{B}_m(\bvec{x}) } \ll \abs{\bvec{B}_0}$], the state of $S$ at the conclusion of the measurement ($t=T$) is \cite{Aharonov:1993:jm,Schlosshauer:2015:uu} \begin{align}\label{eq:29} \ket{\psi(\bvec{x}, T)} &\approx \exp\left( \frac{\text{i} \omega_0 T}{2} \right) \ket{0} \exp\left(\frac{\text{i} \mu \beta q \cos\gamma }{\hbar} \right) \ket{\Phi(p_0)} \notag\\&= \exp\left( \frac{\text{i} \omega_0 T}{2} \right) \ket{0} \ket{\Phi(p_0+\mu\beta\cos\gamma)} . \end{align} Since $\cos\gamma = \bra{0} \bopvecgr{\sigma} \cdot \buvec{m} \ket{0}$, this shows that the center of the momentum wave packet has shifted by an amount proportional to the expectation value of $\bopvecgr{\sigma} \cdot \buvec{m}$ in the initial spin state, while the spin state itself is left approximately undisturbed. By measuring this momentum change along $\buvec{m}$, the expectation value $\bra{0} \bopvecgr{\sigma} \cdot \buvec{m} \ket{0}$ can be determined. This momentum change can be obtained by measuring the final position of the particle when it has completed its travel through the measurement field, giving the deflection of the particle in the direction $\buvec{m}$ \cite{Aharonov:1993:jm}. \subsection{\label{sec:envir-inter}Environmental interaction} To study the influence of an environment and decoherence, we include the interaction of the spin degree of freedom of $S$ with an environment $E$ consisting of $N$ spin-$\frac{1}{2}$ particles. 
We take the system to couple to the environment through its $\op{\sigma}_x$ coordinate, \begin{equation}\label{eq:3} \op{H}_{SE}=\frac{1}{2} \op{\sigma}_x \otimes \sum_{i=1}^N g_i \op{\sigma}_x^{(i)} \equiv \frac{1}{2} \op{\sigma}_x \otimes \op{E}, \end{equation} where the $g_i$ are coupling coefficients, and we neglect the internal dynamics of the environment ($\op{H}_E=0$). This type of environmental interaction was used in one of the first models of decoherence \cite{Zurek:1982:tv}. It has since been studied repeatedly and its relevance to a large class of physical situations has been emphasized \cite{Schliemann:2002:yy,Dobrovitski:2003:az,Cucchietti:2005:om,Schlosshauer:2005:bb}. An orthonormal set of eigenstates $\ket{E_n}$ (where $n=0,1,\hdots,2^N-1$) of the environment operator $\op{E}$ defined in Eq.~\eqref{eq:3} is given by tensor products $\ket{k_1}_x\ket{k_2}_x \cdots \ket{k_N}_x$, $k_i \in \{0,1\}$, of eigenstates of the individual environment spin operators $\op{\sigma}^{(i)}_x$, with eigenvalues \begin{equation} \label{eq:1fssvgihTG8} \epsilon_n = \sum_{i=1}^N (-1)^{k_i} g_i. \end{equation} At $t=0$, we take the system--environment state to be in a pure product state, \begin{equation} \ket{\Psi(\bvec{x}, 0)} = \ket{0}\ket{\Phi(p_0)} \sum_{n=0}^{2^N-1} c_n\ket{E_n}. \end{equation} At $t=T$, the evolved state is \begin{multline}\label{eq:9} \ket{\Psi(\bvec{x}, T)} \\ = \sum_{n=0}^{2^N-1} c_n \exp \left[ -\frac{\text{i}}{\hbar} \left(\op{H}_S + \op{H}_m(\bvec{x}) + \frac{1}{2} \epsilon_n\op{\sigma}_x \right)T \right] \\ \times\ket{0}\ket{\Phi(p_0)} \ket{E_n}.
\end{multline} Thus, for each environmental state $\ket{E_n}$ we can consider an effective system Hamiltonian \cite{Cucchietti:2005:om} \begin{equation}\label{eq:22} \op{H}_S^{(n)} = -\mu B_0 \op{\sigma}_z + \frac{1}{2}\epsilon_n \op{\sigma}_x \equiv -\mu B_0 \op{\sigma}_z -\mu b_n \op{\sigma}_x, \end{equation} where $\bvec{b}_n = b_n \buvec{x} = -\frac{\epsilon_n}{2\mu}\buvec{x}$ is the magnetic field associated with $\ket{E_n}$. We shall refer to the $\bvec{b}_n$ as the environment fields. For a given $\ket{E_n}$, the total Hamiltonian [including the measurement field~\eqref{eq:measfield}] may then be written as \begin{equation}\label{eq:8} \op{H}^{(n)}(\bvec{x}) = - \mu \bopvecgr{\sigma} \cdot \bvec{B}^{(n)}(\bvec{x}), \end{equation} where $\bvec{B}^{(n)}(\bvec{x})$ is the effective field felt by the spin particle, with components \begin{subequations}\label{eq:26} \begin{align}\label{eq:1} B_x^{(n)}(\bvec{x}) &= \frac{\beta q}{T} \cos\eta\sin\gamma+b_n, \\ B_y(\bvec{x}) &= \frac{\beta q}{T} \sin\eta\sin\gamma, \\ B_z(\bvec{x}) &= B_0+\frac{\beta q}{T} \cos\gamma. \end{align} \end{subequations} We define dimensionless field parameters $\xi(\bvec{x})=\frac{\beta q}{B_0 T}$ and $\tilde{b}_n=\frac{b_n}{B_0}$ that quantify the strength of the measurement and environment fields relative to the protection field strength $B_0$. Then we can write the magnitude of $\bvec{B}^{(n)}(\bvec{x})$ as $B^{(n)}(\bvec{x})= B_0 \chi_n(\bvec{x})$ with \begin{align}\label{eq:4} \chi_n(\bvec{x})& = \bigl[ 1 + \tilde{b}_n^2 + \xi (\bvec{x})^2 + 2 \tilde{b}_n \xi (\bvec{x}) \cos\eta\sin\gamma \notag\\ &\quad + 2 \xi (\bvec{x}) \cos\gamma \bigr]^{1/2}. 
\end{align} The components of the unit vector $\buvec{r}_n(\bvec{x})$ specifying the direction of $\bvec{B}^{(n)}(\bvec{x})$ are given by \begin{subequations}\label{eq:16hjkhjk} \begin{align} r_n^x(\bvec{x}) &= \frac{\xi (\bvec{x})\cos\eta\sin\gamma + \tilde{b}_n}{\chi_n (\bvec{x})}, \label{eq:24}\\ r_n^y(\bvec{x}) &= \frac{\xi (\bvec{x})\sin\eta\sin\gamma}{\chi_n (\bvec{x})}, \label{eq:25}\\ r_n^z(\bvec{x}) &= \frac{1 + \xi (\bvec{x}) \cos\gamma}{\chi_n (\bvec{x})}.\label{eq:16} \end{align} \end{subequations} Note that $r_n^z (\bvec{x})=\cos\theta_n (\bvec{x})$, where $\theta_n (\bvec{x})$ is the polar angle of $\bvec{B}^{(n)}(\bvec{x})$. \subsection{Time evolution} The eigenstates of the Hamiltonian $\op{H}^{(n)}(\bvec{x})$ [see Eq.~\eqref{eq:8}] are \begin{align} \ket{\buvec{r}_n^+ (\bvec{x})} &= \cos\frac{\theta_n (\bvec{x})}{2}\ket{0} + \sin\frac{\theta_n (\bvec{x})}{2}\text{e}^{\text{i} \phi_n (\bvec{x})}\ket{1},\\ \ket{\buvec{r}_n^-(\bvec{x})} &= \sin\frac{\theta_n (\bvec{x})}{2}\ket{0} - \cos\frac{\theta_n (\bvec{x})}{2}\text{e}^{\text{i} \phi_n (\bvec{x})}\ket{1}, \end{align} where $\theta_n (\bvec{x})$ and $\phi_n (\bvec{x})$ are the polar and azimuthal angles of the net field direction $\buvec{r}_n(\bvec{x})$ given by \eqref{eq:16hjkhjk}. Then the state~\eqref{eq:9} at $t=T$ can be evaluated to \begin{align}\label{eq:5} \ket{\Psi(\bvec{x},T)} &= \sum_{n=0}^{2^N-1} c_n \bigg[\cos\frac{\theta_n (\bvec{x})}{2}\text{e}^{ \text{i} \mu T B_0 \chi_n (\bvec{x})/\hbar }\ket{\buvec{r}_n^+ (\bvec{x})} \notag \\ &\quad + \sin\frac{\theta_n (\bvec{x})}{2}\text{e}^{ -\text{i} \mu T B_0 \chi_n (\bvec{x})/\hbar }\ket{\buvec{r}_n^- (\bvec{x})}\bigg]\notag\\ &\quad \times \ket{\Phi(p_0)}\ket{E_n}. \end{align} In the following, we will omit the argument $\bvec{x}$ and associate the position coordinate $q$ with the measured location of the particle along $\buvec{m}$ at $t=T$ \cite{Aharonov:1993:jm}. 
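The structure of these eigenstates can be verified numerically. The following sketch (with illustrative, assumed values of $\xi$, $\tilde{b}_n$, $\gamma$, $\eta$) builds the net field, computes $\theta_n$ and $\phi_n$, and checks that $\ket{\buvec{r}_n^\pm}$ are indeed eigenstates of $-\mu\bopvecgr{\sigma}\cdot\bvec{B}^{(n)}$ with eigenvalues $\mp\mu B_0\chi_n$:

```python
# Numerical check (illustrative parameters): the states |r_n^+->, built
# from the polar and azimuthal angles of the net field, are eigenstates
# of H = -mu * sigma . B^(n) with eigenvalues -+ mu B0 chi_n.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

mu, B0 = 1.0, 1.0
xi, b, gamma, eta = 0.1, 0.3, 1.0, 0.5   # assumed, dimensionless values

# Net-field components in units of B0
Bx = xi * np.cos(eta) * np.sin(gamma) + b
By = xi * np.sin(eta) * np.sin(gamma)
Bz = 1 + xi * np.cos(gamma)
chi = np.sqrt(Bx**2 + By**2 + Bz**2)     # matches the closed form for chi_n
theta = np.arccos(Bz / chi)
phi = np.arctan2(By, Bx)

H = -mu * B0 * (Bx * sx + By * sy + Bz * sz)
r_plus = np.array([np.cos(theta / 2),
                   np.sin(theta / 2) * np.exp(1j * phi)])
r_minus = np.array([np.sin(theta / 2),
                    -np.cos(theta / 2) * np.exp(1j * phi)])

assert np.allclose(H @ r_plus, -mu * B0 * chi * r_plus)
assert np.allclose(H @ r_minus, +mu * B0 * chi * r_minus)
```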
Since the measurement is weak, we have $\xi \ll 1$ and can therefore expand $\chi_n$ [see Eq.~\eqref{eq:4}] to first order in $\xi$, \begin{equation} \chi_n \approx \sqrt{1 + \tilde{b}_n^2} + \xi \left(\frac{\cos\gamma + \tilde{b}_n \cos\eta\sin\gamma}{\sqrt{1 + \tilde{b}_n^2}}\right). \end{equation} Using this approximation from here on, the exponentials in Eq.~\eqref{eq:5} can be written as \begin{equation}\label{eq:11} \exp\left( \pm \frac{\text{i}}{\hbar} \mu T B_0 \chi_n \right) = \exp\left( \pm \text{i} \Omega_n T \right) \exp\left( \pm \frac{\text{i} q \,\delta p_n}{\hbar} \right), \end{equation} where \begin{equation} \Omega_n = \frac{\mu \sqrt{B_0^2 + b_n^2} }{\hbar}, \end{equation} and \begin{equation}\label{eq:10} \delta p_n = \mu\beta \left(\frac{\cos\gamma + \tilde{b}_n \cos\eta\sin\gamma}{\sqrt{1 + \tilde{b}_n^2}}\right) \end{equation} is the magnitude of the momentum change (pointer shift) in the direction $\buvec{m}$ of the measurement field. We see from Eq.~\eqref{eq:10} that the influence of the environment on the pointer shift amounts to additional momentum kicks. The equation shows that the influence is maximized for $\gamma=\frac{\pi}{2}$ and $\eta=0$, when the measurement field is oriented along the $x$ axis and therefore coincides with the orientation of the environment field. Note also that the environment influences the pointer shift even though it does not directly couple to the pointer variable itself but rather to the spin coordinate $\op{\sigma}_x$ [compare Eq.~\eqref{eq:3}]. One way to understand this behavior is by recalling that the environment, for each state $\ket{E_n}$, gives rise to an effective environment-modified Hamiltonian $\op{H}_S^{(n)}$ for the spin degree of freedom of the particle [see Eq.~\eqref{eq:22}].
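The first-order expansion of $\chi_n$ can be checked numerically; in the following sketch (with assumed values of $\tilde{b}_n$, $\gamma$, $\eta$) the residual error scales as $\xi^2$, as expected for a truncation after the linear term:

```python
# Check of the first-order (weak-measurement) expansion of chi_n in xi;
# parameter values are illustrative, not from the paper.
import numpy as np

def chi_exact(xi, b, gamma, eta):
    return np.sqrt(1 + b**2 + xi**2
                   + 2 * b * xi * np.cos(eta) * np.sin(gamma)
                   + 2 * xi * np.cos(gamma))

def chi_linear(xi, b, gamma, eta):
    return (np.sqrt(1 + b**2)
            + xi * (np.cos(gamma) + b * np.cos(eta) * np.sin(gamma))
            / np.sqrt(1 + b**2))

b, gamma, eta = 0.3, 1.0, 0.5
for xi in (1e-2, 1e-3, 1e-4):
    err = abs(chi_exact(xi, b, gamma, eta) - chi_linear(xi, b, gamma, eta))
    assert err < xi**2     # remainder is second order in xi
```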
From perturbation theory it follows that the effect of the measurement interaction on the pointer, treated as a small perturbation, is given (to first order) by the expectation value of the spin part $\bopvecgr{\sigma} \cdot \buvec{m}$ of the perturbation in the eigenbasis of the unperturbed Hamiltonian $\op{H}_0$. Without an environment, $\op{H}_0$ is equal to $\op{H}_S$ [Eq.~\eqref{eq:vshvbjfdjhvs}] with eigenbasis $\{\ket{0},\ket{1}\}$, and the expectation value of the perturbation in this basis is proportional to $\bra{0} \bopvecgr{\sigma} \cdot \buvec{m} \ket{0}= \cos\gamma$, which is the familiar result~\eqref{eq:29}. In the presence of an environment, however, $\op{H}_0$ is represented by the family of Hamiltonians $\op{H}_S^{(n)}$. It follows that, for each state $\ket{E_n}$, the expectation value of the perturbation must now be evaluated in the eigenbasis of $\op{H}_S^{(n)}$, which yields the term in parentheses in Eq.~\eqref{eq:10}. Thus, the influence of the environment on the pointer shift may be understood as a consequence of the modification of the particle's spin Hamiltonian by the environment. Using Eq.~\eqref{eq:11}, the evolved state~\eqref{eq:5} can be written as $\ket{\Psi} = \sum_{n=0}^{2^N-1} c_n \ket{\psi^{(n)}}\ket{E_n}$ with \begin{align}\label{eq:12} \ket{\psi^{(n)}}&=\cos\frac{\theta_n }{2}\text{e}^{ \text{i} \Omega_n T }\ket{\buvec{r}_n^+}\ket{\Phi(p_0+\delta p_n)} \notag\\&\quad+ \sin\frac{\theta_n }{2}\text{e}^{ -\text{i} \Omega_n T}\ket{\buvec{r}_n^-}\ket{\Phi(p_0-\delta p_n)}. \end{align} Here we have omitted the time argument $T$ in the state vector symbols for notational simplicity, as evaluation at $t=T$ will be implicit from here on. The corresponding reduced density matrix of the system $S$ is \begin{equation}\label{eq:2} \op{\rho}_{S} = \text{Tr}_E \, \ketbra{\Psi} {\Psi} = \sum_{n=0}^{2^N-1} \abs{c_n}^2 \ketbra{\psi^{(n)}}{\psi^{(n)}}, \end{equation} which is an incoherent mixture of the states~\eqref{eq:12}.
\section{\label{sec:infl-decoh-init}Influence of the environment on the spin state} We first study the decoherence imparted by the environmental interaction on the spin state of the system. By tracing over the momentum degree of freedom in the density matrix~\eqref{eq:2}, we obtain the reduced density matrix $\op{\rho}$ for the spin degree of freedom at $t=T$, \begin{align} \op{\rho} &= \sum_{n=0}^{2^N-1} \abs{c_n}^2 \biggl[ \cos^2\frac{\theta_n}{2}\ketbra{\buvec{r}_n^+}{\buvec{r}_n^+} + \sin^2\frac{\theta_n}{2}\ketbra{\buvec{r}_n^-}{\buvec{r}_n^-} \notag \\ &\quad + \Gamma_n \cos\frac{\theta_n}{2}\sin\frac{\theta_n}{2}\text{e}^{2\text{i} \Omega_n T} \ketbra{\buvec{r}_n^+}{\buvec{r}_n^-} \notag \\ &\quad + \Gamma_n \cos\frac{\theta_n}{2}\sin\frac{\theta_n}{2}\text{e}^{-2\text{i} \Omega_n T} \ketbra{\buvec{r}_n^-}{\buvec{r}_n^+} \biggr],\label{eq:23} \end{align} where $\Gamma_n = \braket{\Phi(p_0+\delta p_n)} {\Phi(p_0-\delta p_n)}$ measures the overlap of the momentum-shifted wave packets. To quantify the amount of disturbance of the initial spin state $\ket{0}$ caused by the measurement and environment fields, we calculate the probability $\mathcal{P}_1 = \bra{1}\op{\rho}\ket{1}$ of finding the system in the orthogonal state $\ket{1}$ at $t=T$. From Eq.~\eqref{eq:23}, this probability is \begin{equation}\label{eq:17} \mathcal{P}_1 = \frac{1}{2}\sum_{n=0}^{2^N-1} \abs{c_n}^2 \left\{ \sin^2\theta_n\left[1-\Gamma_n\cos(2 \Omega_n T) \right] \right\}, \end{equation} where, from Eqs.~\eqref{eq:4} and \eqref{eq:16hjkhjk}, \begin{equation}\label{eq:27} \sin^2\theta_n = \frac{\xi^2 \sin^2\gamma + \tilde{b}^2_n + 2\tilde{b}_n \xi \cos\eta\sin\gamma}{1 + \tilde{b}_n^2 + \xi^2 + 2 \tilde{b}_n \xi \cos\eta\sin\gamma + 2 \xi \cos\gamma}, \end{equation} and we have used that $\sum_{n=0}^{2^N-1} \abs{c_n}^2=1$. Note that even in the absence of decoherence, $\mathcal{P}_1$ is nonzero due to the presence of the measurement field \cite{Schlosshauer:2015:uu}.
This can be seen from Eq.~\eqref{eq:17}, which, without environmental interactions, simplifies to \begin{equation}\label{eq:17dsc} \mathcal{P}_1 = \frac{1}{2}\sin^2\theta\left[1-\Gamma \cos(\omega_0 T) \right]. \end{equation} This probability oscillates as a function of $T$. However, because in a protective measurement the magnitude (i.e., $\omega_0$) of the protection field need not be known \cite{Aharonov:1993:jm,Dass:1999:az}, one has in general not enough information to choose $T$ such that $\mathcal{P}_1=0$ \cite{Schlosshauer:2015:uu}. Instead, we use Eq.~\eqref{eq:17dsc} to obtain an upper bound on $\mathcal{P}_1$, by replacing $\cos(\omega_0 T)$ by its minimum value $-1$ and also setting $\Gamma=1$, as both of these choices maximize $\mathcal{P}_1$. Then \begin{equation}\label{eq:17djkd44sc} \mathcal{P}_1\le \sin^2\theta = \frac{\xi^2\sin^2\gamma}{1 + \xi^2 + 2 \xi \cos\gamma}, \end{equation} where we have used Eq.~\eqref{eq:27} with $\tilde{b}_n=0$. $\mathcal{P}_1$ is largest for $\gamma=\frac{\pi}{2}$ when the protection and measurement fields are orthogonal. In this case, $\xi = 0.1$ gives $\mathcal{P}_1 \le 0.01$, i.e., the probability of state disturbance due to the measurement field alone (without decoherence) is no greater than 0.01 for all possible orientations of the measurement field. From here on, we will use this value of $\xi$ as a reasonable choice for the strength of the measurement interaction. We now return to the consideration of added decoherence as given by Eq.~\eqref{eq:17}, and rewrite this equation in equivalent integral form as \begin{align}\label{eq:14} \mathcal{P}_1 &= \frac{1}{2}\int_{-\infty}^\infty \text{d} \tilde{b} \, w(\tilde{b}) \left\{ \sin^2\theta(\tilde{b})\left[1-\Gamma(\tilde{b})\cos[2 \Omega(\tilde{b}) T]\right] \right\}, \end{align} where $w(\tilde{b}) = \sum_{n=0}^{2^N-1} \abs{c_n}^2\delta(\tilde{b}-\tilde{b}_n)$ is the spectral density describing the distribution of the $\tilde{b}$. 
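The bound obtained above can be checked numerically. In the following sketch, a scan over all orientations $\gamma$ confirms that, for $\xi=0.1$, the bound never exceeds $\xi^2 = 0.01$ (its maximum over $\gamma$ occurs near, though not exactly at, $\gamma=\frac{\pi}{2}$ for small $\xi$):

```python
# Numerical check of the measurement-field-only disturbance bound
# P1 <= xi^2 sin^2(gamma) / (1 + xi^2 + 2 xi cos(gamma)) for xi = 0.1.
import numpy as np

def p1_bound(xi, gamma):
    return (xi**2 * np.sin(gamma)**2
            / (1 + xi**2 + 2 * xi * np.cos(gamma)))

xi = 0.1
gammas = np.linspace(0.0, np.pi, 100001)
bound = p1_bound(xi, gammas).max()
assert bound <= xi**2 + 1e-12   # never exceeds 0.01 for any orientation
assert bound > 0.0099           # and the bound is essentially attained
```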
It has been shown \cite{Cucchietti:2005:om} that already for modest values of $N$ and for a large class of distributions of the couplings $g_i$ [Eq.~\eqref{eq:3}], the distribution of the energies $\epsilon_n$ given by Eq.~\eqref{eq:1fssvgihTG8}, and therefore also the distribution of the environment fields $\tilde{b}_n$, is well described by a Gaussian, \begin{equation}\label{eq:15} w(\tilde{b}) = \frac{1}{\sqrt{2\pi s_d^2}} \exp \left(-\frac{\tilde{b}^2}{2s_d^2}\right), \end{equation} where $s_d$ represents a typical strength of the environment field relative to the protection field strength $B_0$. We will use this distribution from here on. Also, in the regime $\Omega T \gg 1$ relevant to a protective measurement, Eq.~\eqref{eq:14} simplifies to \begin{equation}\label{eq:13} \mathcal{P}_1 \approx \frac{1}{2} \int_{-\infty}^\infty \text{d} \tilde{b} \, w(\tilde{b}) \sin^2\theta(\tilde{b}), \end{equation} which establishes the first main result of this paper. \begin{figure} \caption{\label{fig:statedist} The probability $\mathcal{P}_1$ of finding the system in the state $\ket{1}$ at $t=T$ [Eqs.~\eqref{eq:15} and \eqref{eq:13}], as a function of the decoherence strength $s_d$.} \end{figure} The probability $\mathcal{P}_1$ given by Eqs.~\eqref{eq:15} and \eqref{eq:13} is shown in Fig.~\ref{fig:statedist} as a function of the decoherence strength $s_d$. If the decoherence is very weak ($s_d \ll 1$), then for a typical value $\tilde{b} \approx s_d$ the net field will be close to the $z$ direction, i.e., $\sin^2\theta(s_d)\ll 1$. Thus Eq.~\eqref{eq:13} gives $\mathcal{P}_1 \ll 1$, showing that the initial state is not substantially affected by the presence of the environment. In the opposite limit where the environment fields are so strong as to dominate the evolution ($s_d \gg 1$), a typical net field will be close to the $x$ direction, i.e., $\sin^2\theta(s_d) \approx 1$, which yields $\mathcal{P}_1 \approx \frac{1}{2}$ from Eq.~\eqref{eq:13}.
In this case, the environmental monitoring of the $\op{\sigma}_x$ spin coordinate leads to a loss of most of the coherence between the components $\ket{0}_x$ and $\ket{1}_x$ in the initial state $\ket{0}=\frac{1}{\sqrt{2}}(\ket{0}_x+\ket{1}_x)$, giving an approximately maximally mixed state and thus roughly equal probabilities of finding either $\ket{0}$ or $\ket{1}$. We can use this information to define two decoherence regimes. (i) We refer to \emph{weak decoherence} as the regime in which the presence of the environment does not contribute an appreciable probability of leaving the initial state. We choose $\mathcal{P}_1 \le 0.05$ as the upper limit for state disturbance in this regime, which corresponds to $s_d \le 0.35$. (ii) We refer to \emph{strong decoherence} as the regime $s_d \gtrsim 1$ where a typical strength of the environment field is on the order of (or exceeding) the size of the protection field, and significant state disturbance results (for $s_d = 1.0$ we have $\mathcal{P}_1 = 0.17$). Since the goal of a protective measurement is to leave the initial state approximately unchanged, only the regime of weak decoherence can be said to allow for a proper protective measurement. \section{\label{sec:infl-decoh-point}Influence of the environment on the pointer shift} A second important consideration with respect to the quality of the protective measurement is the pointer shift. Therefore, we now turn to the question of how the pointer shift is influenced by the presence of the environment. 
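The regime values quoted above can be reproduced by direct numerical integration. The sketch below assumes the simplifying limit $\xi \to 0$, in which $\sin^2\theta(\tilde{b}) = \tilde{b}^2/(1+\tilde{b}^2)$; the resulting $\mathcal{P}_1$ agrees with the quoted values to within the stated rounding:

```python
# Numerical evaluation of P1 = (1/2) * integral of w(b) sin^2(theta(b))
# in the xi -> 0 limit, where sin^2(theta) = b^2 / (1 + b^2).
# Reproduces the regime values in the text: about 0.05 at s_d = 0.35
# and about 0.17 at s_d = 1.0.
import numpy as np

def p1(s_d, n=200001):
    b = np.linspace(-12 * s_d, 12 * s_d, n)
    w = np.exp(-b**2 / (2 * s_d**2)) / np.sqrt(2 * np.pi * s_d**2)
    integrand = w * b**2 / (1 + b**2)
    return 0.5 * np.sum(integrand) * (b[1] - b[0])

assert abs(p1(0.35) - 0.05) < 0.01
assert abs(p1(1.0) - 0.17) < 0.01
assert p1(100.0) > 0.49      # s_d >> 1: approaches 1/2
```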
By tracing over the spin degree of freedom of the system in the density matrix~\eqref{eq:2}, we obtain the reduced density matrix $\rho(p)$ for the momentum degree of freedom at $t=T$, \begin{align}\label{eq:19} \rho(p) &= \int_{-\infty}^\infty \text{d} \tilde{b}\, w(\tilde{b}) \left[ \cos^2\frac{\theta(\tilde{b})}{2} \abs{\Phi_{p_0+\delta p(\tilde{b})}(p)}^2 \right.\notag\\&\quad \left.+ \sin^2\frac{\theta(\tilde{b})}{2} \abs{\Phi_{p_0-\delta p(\tilde{b})}(p)}^2\right], \end{align} where we have again gone to the continuum limit using the Gaussian distribution $w(\tilde{b})$ given by Eq.~\eqref{eq:15}. This is an incoherent mixture of the Gaussian pointer states $\Phi_{p_0}(p)$ [see Eq.~\eqref{eq:18}] shifted in momentum by $\pm \delta p(\tilde{b})$ as given by Eq.~\eqref{eq:10}. Explicitly, \begin{multline}\label{eq:20} \abs{\Phi_{\pm\delta p(\tilde{b})}(p)}^2 \equiv \abs{\Phi_\pm(p,\tilde{b})}^2 = \frac{1}{\sqrt{2\pi \sigma_p^2}} \\ \quad\times\exp \left\{-\frac{1}{2\sigma_p^2} \left[ p \mp \mu\beta \left(\frac{\cos\gamma + \tilde{b} \cos\eta\sin\gamma}{\sqrt{1 + \tilde{b}^2}}\right) \right]^2\right\}, \end{multline} where we have set $p_0=0$ for simplicity (we are concerned only with changes in momentum, and any nonzero $p_0$ merely adds a constant to the argument of the Gaussian). Only the pointer shift $+\delta p(\tilde{b})$, which corresponds to the first term in Eq.~\eqref{eq:19}, represents the correct shift that encodes the desired expectation value $\bra{0} \bopvecgr{\sigma} \cdot \buvec{m} \ket{0}$, while the reversed shift $-\delta p(\tilde{b})$ encodes the expectation value of $\bopvecgr{\sigma} \cdot \buvec{m}$ in the spin state $\ket{1}$ orthogonal to the initial state $\ket{0}$. However, in the case of weak decoherence ($s_d \ll 1$) relevant to protective measurements (see Sec.~\ref{sec:infl-decoh-init}), the first term in Eq.~\eqref{eq:19} dominates.
This is so because expanding $\cos^2\frac{\theta(\tilde{b})}{2}$ in the small quantities $\xi$ and $\tilde{b}$ gives $\cos^2\frac{\theta(\tilde{b})}{2} = 1-\frac{1}{2}\xi \tilde{b}\cos\eta\sin\gamma$ to leading order, and since the correction term $\frac{1}{2}\xi \tilde{b} \cos\eta\sin\gamma$ is of second order, we can neglect it. Then Eq.~\eqref{eq:19} becomes \begin{equation}\label{eq:1lfdk9} \rho(p) = \int_{-\infty}^\infty \text{d} \tilde{b}\, w(\tilde{b}) \abs{\Phi_+(p,\tilde{b})}^2. \end{equation} Still working with the case of weak decoherence, we expand the pointer shift in the argument of the exponential~\eqref{eq:20} to first order in $\tilde{b}$, \begin{multline}\label{eq:21} \abs{\Phi_\pm(p,\tilde{b})}^2 = \frac{1}{\sqrt{2\pi \sigma_p^2}}\\\times\exp \left\{-\frac{1}{2\sigma_p^2} \left[ p \mp \mu\beta \left(\cos\gamma + \tilde{b} \cos\eta\sin\gamma\right) \right]^2\right\}. \end{multline} We will now evaluate the state $\rho(p)$ given by Eqs.~\eqref{eq:1lfdk9} and \eqref{eq:21}. Introducing the dimensionless momentum variable $\tilde{p}=p/\mu\beta$ and defining $\tilde{b}'=\tilde{b} \cos\eta\sin\gamma$, we rewrite Eq.~\eqref{eq:1lfdk9} in the form \begin{equation}\label{eq:1lfdjhvfjkhk9} \rho(\tilde{p}) = \int_{-\infty}^\infty \text{d} \tilde{b}'\, u(\tilde{b}') v ( \tilde{p}-\tilde{b}'). \end{equation} Here \begin{equation} u(\tilde{b}') = \frac{1}{\sqrt{2\pi \sigma_u^2}} \exp \left(-\frac{\tilde{b}'^2}{2\sigma_u^2}\right) \end{equation} is the Gaussian distribution~\eqref{eq:15} transformed to the variable $\tilde{b}'$, with mean $\mu_u = 0$ and width $\sigma_u=s_d \cos\eta\sin\gamma$, and [compare Eq.~\eqref{eq:21}] \begin{equation} v(\tilde{b}') = \frac{1}{\sqrt{2\pi \sigma_{\tilde{p}}^2}}\exp \left[-\frac{1}{2 \sigma_{\tilde{p}}^2} \left( \tilde{b}' -\cos\gamma \right)^2\right] \end{equation} is a Gaussian with mean $\mu_{\tilde{p}} = \cos\gamma$ and width $\sigma_{\tilde{p}}=\sigma_p/\mu\beta$.
Therefore, Eq.~\eqref{eq:1lfdjhvfjkhk9} is a convolution of two Gaussians in the free variable $\tilde{p}=p/\mu\beta$, with mean \begin{equation}\label{eq:63684tf3g} \mu=\mu_{\tilde{p}}+\mu_u=\cos\gamma \end{equation} and variance \begin{equation}\label{eq:6} \sigma^2 = \sigma_{\tilde{p}}^2+\sigma_u^2=(\sigma_p/\mu\beta)^2 + (s_d \cos\eta\sin\gamma)^2. \end{equation} Equations~\eqref{eq:63684tf3g} and \eqref{eq:6} establish the second main result of this paper. They show that the center of the momentum probability distribution (with momentum expressed in the dimensionless variable $\tilde{p}=p/\mu\beta$) still shifts by $\cos\gamma$ just as without an environment present, but that the interaction with the environment broadens the distribution through the term $(s_d \cos\eta\sin\gamma)^2$. Note that the broadening depends both on the strength $s_d$ of the environmental interaction and on the orientation $(\gamma, \eta)$ of the measurement field. It diminishes the accuracy with which the expectation value $\bra{0} \bopvecgr{\sigma} \cdot \buvec{m} \ket{0}$ can be inferred from a measurement of the particle's momentum change in the $\buvec{m}$ direction. Thus, the interaction with the environment leads to a smearing-out of the pointer and acts as noise on the pointer shift. \begin{figure} \caption{\label{fig:res} The initial Gaussian wave packet $\Phi(\tilde{p})$ of width $\sigma_{\tilde{p}}=0.03$ together with the momentum-shifted wave packet $\Phi(\tilde{p}-0.1)$.} \end{figure} We will now explore the effect of the broadening. First, we need to choose a reasonable value for the width $\sigma_{\tilde{p}}$ of the initial momentum wave packet (we will measure momentum in terms of $\tilde{p}=p/\mu\beta$ from here on). The size of the pointer shift $\cos\gamma = \bra{0} \bopvecgr{\sigma} \cdot \buvec{m} \ket{0}$ varies between 0 and 1, so let us suppose that we would like to resolve pointer changes of size 0.1 (this corresponds to variations in $\gamma$ of up to $5^\circ$).
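Both results can be verified numerically. The sketch below (all parameter values are illustrative assumptions) checks that the convolution has mean $\cos\gamma$ and variance $\sigma_{\tilde{p}}^2 + (s_d\cos\eta\sin\gamma)^2$, and uses the overlap (common area) of two shifted Gaussian densities to show when a pointer shift of size 0.1 is resolvable:

```python
# Sketch (assumed parameters): (i) the convolution of u and v has mean
# cos(gamma) and variance sigma_pt^2 + (s_d cos(eta) sin(gamma))^2;
# (ii) the common area of two equal-width Gaussian densities separated
# by delta quantifies the resolvability of a pointer shift.
import math
import numpy as np

s_d, gamma, eta = 0.2, 1.0, 0.3
sigma_pt = 0.03                             # width of the initial packet
s_u = s_d * math.cos(eta) * math.sin(gamma)
mu_pt = math.cos(gamma)

x = np.linspace(-2.0, 2.0, 4001)
dx = x[1] - x[0]
u = np.exp(-x**2 / (2 * s_u**2)) / math.sqrt(2 * math.pi * s_u**2)
v = (np.exp(-(x - mu_pt)**2 / (2 * sigma_pt**2))
     / math.sqrt(2 * math.pi * sigma_pt**2))
rho = np.convolve(u, v, mode="same") * dx   # pointer density in p-tilde

mean = float(np.sum(x * rho) * dx)
var = float(np.sum((x - mean)**2 * rho) * dx)
assert abs(mean - mu_pt) < 1e-3
assert abs(var - (sigma_pt**2 + s_u**2)) < 1e-4

def overlap(delta, sigma):
    # common area of N(0, sigma^2) and N(delta, sigma^2) = 2*Phi(-delta/(2*sigma))
    return math.erfc(delta / (2 * sigma * math.sqrt(2)))

assert overlap(0.1, 0.03) < 0.1             # shift of 0.1 resolvable
broadened = math.sqrt(0.03**2 + 0.2**2)     # s_d = 0.2, eta = 0, gamma = pi/2
assert overlap(0.1, broadened) > 0.7        # shift washed out by broadening
```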
As seen from Fig.~\ref{fig:res}, in the absence of environmental interactions the choice $\sigma_{\tilde{p}}=0.03$ offers a good distinction of the original Gaussian wave packet $\Phi(\tilde{p})$ from the momentum-shifted wave packet $\Phi(\tilde{p}-0.1)$, giving an overlap of less than 0.1. \begin{figure} \caption{\label{fig:broad} Environment-induced broadening of the pointer momentum distribution [see Eq.~\eqref{eq:6}] at the conclusion of the measurement, for different decoherence strengths $s_d$ within the weak-decoherence regime.} \end{figure} Figure~\ref{fig:broad} shows the environment-induced broadening of the probability distribution [as given by Eq.~\eqref{eq:6}] for the pointer momentum at the conclusion of the measurement, for different decoherence strengths $s_d$ within the weak decoherence regime. We see that even for such weak decoherence when the initial spin state remains largely unaffected by the presence of the environment, a significant broadening of the pointer's momentum distribution occurs. For example, for $s_d=0.20$ (which corresponds to $\mathcal{P}_1 \approx 0.02$ and thus only insignificant disturbance of the spin state), the distribution has become so wide as to make all but impossible the reliable estimation of $\cos\gamma = \bra{0} \bopvecgr{\sigma} \cdot \buvec{m} \ket{0}$ from the measured particle momentum in the $\buvec{m}$ direction. This indicates that the chief detrimental influence of the environment on a protective measurement arises in the form of washing-out of the pointer probability distribution associated with the pointer shift. It leads to a substantial reduction in the accuracy with which the desired expectation value can be measured and, as we have seen, is a significant factor even when the state of the system is not appreciably affected by the environment. \section{\label{sec:envir-effects-with}Environmental effects without state disturbance} As discussed in Sec.~\ref{sec:infl-decoh-init}, in the strong-decoherence regime, and with the relative orientation~\eqref{eq:22} of the protection and environment fields, the initial spin state of the system will be substantially perturbed.
Therefore, one of the two conditions of a proper protective measurement, i.e., that the initial state remains essentially unchanged in the course of the measurement, is violated. On the other hand, the influence of decoherence on a given quantum state depends also on the choice of the state (with some states being immune to decoherence \cite{Zurek:1982:tv}). For example, if the environment fields act along the $z$ direction and thus along the axis of the protection field (i.e., if the system couples to the environment via the $\op{\sigma}_z$ coordinate), they cannot disturb the initial spin state $\ket{0}$ since now the system starts out in an eigenstate of the system--environment interaction Hamiltonian. This will be true even when the environment dominates the evolution ($s_d \gg 1$). However, in this limit no pointer shift will occur, as can be seen from the following argument. If the environment fields are along the $z$ axis, then the components of the net fields $\bvec{B}^{(n)}$ are as in Eq.~\eqref{eq:26} but with the $b_n$ term now associated with the $z$ component. The magnitude of $\bvec{B}^{(n)}$ is $B_0 \chi_n$ with $\chi_n= \left( 1 + \tilde{b}_n^2 + \xi^2 + 2 \tilde{b}_n + 2 \tilde{b}_n \xi \cos\gamma + 2 \xi \cos\gamma \right)^{1/2}$. Expanding $\chi_n$ to first order in $\xi$, we obtain \begin{equation} \chi_n \approx \abs{1 + \tilde{b}_n} + \xi \cos\gamma\frac{1 + \tilde{b}_n}{\abs{1 + \tilde{b}_n}}, \end{equation} which gives a pointer shift $\pm \mu \beta \cos\gamma$, where the sign is negative if $\tilde{b}_n<-1$ (i.e., if $b_n<-B_0$). Therefore, there is no broadening of the probability distribution for the pointer momentum, but whenever $\tilde{b}_n<-1$ we get a reversed pointer shift $- \mu \beta \cos\gamma = \mu \beta\bra{1} \bopvecgr{\sigma} \cdot \buvec{m} \ket{1}$. This behavior is readily understood by noting that each environment field $b_n$ can be thought of as an added value to the protection field. 
Whenever $B_0+b_n \ge 0$, only the strength of the protection field is modified, but since the size of the pointer shift does not depend on this strength, the environment field does not affect the pointer shift. Whenever $B_0+b_n < 0$, however, the environment field modifies not only the strength but also the direction of the protection field, as the sum of the two fields is now in the $-z$ direction. With respect to this new direction, the initial spin state $\ket{0}$ becomes the higher-energy (excited) state, which is equivalent to using the orthogonal state $\ket{1}$ for the original $+z$ direction of the unmodified protection field, and thus the pointer shift will be proportional to $\bra{1} \bopvecgr{\sigma} \cdot \buvec{m} \ket{1}$. Applying Eq.~\eqref{eq:19} to this situation (with $\theta \approx 0$, since the net field is close to the $z$ direction), the pointer state $\rho(p)$ can be written as \begin{equation} \rho(p) \approx \mathcal{P}_+\abs{\Phi_{p_0+\mu\beta\cos\gamma}(p)}^2 + (1-\mathcal{P}_+)\abs{\Phi_{p_0-\mu\beta\cos\gamma}(p)}^2, \end{equation} where $\mathcal{P}_+ = \int_{-1}^\infty \text{d} \tilde{b}\, w(\tilde{b})$ is the probability of getting $\tilde{b} > -1$ and hence of obtaining the correct pointer shift $+ \mu \beta \cos\gamma$. In the weak-decoherence limit $s_d \ll 1$, $\mathcal{P}_+$ will be very close to 1 and thus the protective measurement will proceed as if no environment were present, i.e., the environment will impart neither a disturbance of the initial state nor a change to the evolution of the pointer wave packet. Conversely, in the strong-decoherence regime $s_d \gtrsim 1$, $\mathcal{P}_+$ is substantially smaller than 1 and will approach $\frac{1}{2}$ for $s_d \gg 1$. As illustrated in Fig.~\ref{fig:strong}, this means that there is now a sizable likelihood of measuring a momentum value that corresponds to the reversed shift $- \mu \beta \cos\gamma$.
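Both features of this argument can be checked numerically; the sketch below (with illustrative values) verifies the sign flip of the first-order pointer-shift coefficient at $\tilde{b} = -1$ and evaluates $\mathcal{P}_+ = \Phi(1/s_d)$ across decoherence strengths:

```python
# Sketch (illustrative values): (i) for environment fields along z,
# chi ≈ |1 + b| + xi cos(gamma) (1 + b)/|1 + b|, so the first-order
# coefficient flips sign for b < -1; (ii) P+ = Prob(b > -1) for
# b ~ N(0, s_d^2) interpolates between 1 and 1/2.
import math

def chi_exact_z(xi, b, gamma):
    return math.sqrt(1 + b**2 + xi**2 + 2 * b
                     + 2 * b * xi * math.cos(gamma)
                     + 2 * xi * math.cos(gamma))

def chi_linear_z(xi, b, gamma):
    sign = 1.0 if b > -1 else -1.0
    return abs(1 + b) + sign * xi * math.cos(gamma)

gamma, xi = 1.0, 1e-3
for b in (0.5, -0.5, -1.5, -3.0):
    assert abs(chi_exact_z(xi, b, gamma) - chi_linear_z(xi, b, gamma)) < 1e-4

def p_plus(s_d):
    # integral of the Gaussian w(b) from -1 to infinity = Phi(1/s_d)
    return 0.5 * math.erfc(-1.0 / (s_d * math.sqrt(2)))

assert p_plus(0.1) > 0.9999           # weak decoherence: essentially 1
assert 0.84 < p_plus(1.0) < 0.85      # strong decoherence: clearly below 1
assert abs(p_plus(1e6) - 0.5) < 1e-3  # s_d >> 1: approaches 1/2
```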
Thus, even though the environment does not disturb the state of the system, the amount of information pertaining to the desired expectation value $\bra{0}\bopvecgr{\sigma} \cdot \buvec{m}\ket{0}$ that can be extracted from the protective measurement decreases as the decoherence strength is increased. In the limit $s_d \gg 1$, the expectation value of the pointer momentum will be zero (compare Fig.~\ref{fig:strong}) and thus the pointer will encode no information about $\bra{0}\bopvecgr{\sigma} \cdot \buvec{m}\ket{0}$. \begin{figure} \caption{\label{fig:strong} Pointer momentum distribution for environment fields oriented along the $z$ axis, showing the sizable probability of the reversed pointer shift $-\mu\beta\cos\gamma$ in the strong-decoherence regime.} \end{figure} \section{\label{sec:stern-gerl-exper}Experimental scheme} We will now discuss a possible approach to exploring our model in an experiment of the Stern--Gerlach type. First, recall that our results show that the phenomenological influence of the environment on the motional state of the spin particle is to impart noise in the form of momentum kicks. This can be seen directly from the final pointer state given by Eqs.~\eqref{eq:1lfdk9} and \eqref{eq:21}. In this incoherent mixture of momentum-space wave packets, each packet in the mixture is momentum-shifted by the combination of the system expectation value $\bra{0} \bopvecgr{\sigma} \cdot \buvec{m} \ket{0}=\cos\gamma$ and a contribution from a random field $b$, which represents a portion of the effect of the interaction with the spin environment in terms of a local magnetic field. The distribution of these wave packets is given by the distribution $w(b)$ of the fields $b$ [see Eq.~\eqref{eq:15}]. The distribution $w(b)$ has zero mean, which implies that the momentum kicks average to zero and leave the mean of the pointer momentum unchanged, but the finite width of the distribution means that, as we have seen, the momentum distribution of the pointer becomes significantly broadened. These results suggest the following experimental scheme.
We add a magnetic field $b$ to the Stern--Gerlach setup for the protective measurement [as described by Eqs.~\eqref{eq:vshvbjfdjhvs} and \eqref{eq:measfield}], oriented along the $x$ direction and randomly chosen from the Gaussian distribution $w(b)$. After passage of the spin particle (the atom) through the field, we measure, as usual, the pointer momentum shift in the direction $\buvec{m}$ of the measurement field~\eqref{eq:measfield}. As mentioned, this can be done by measuring the total displacement of the spin particle in this direction when the particle has reached the end of the measurement region, with the atomic position measured, for example, by shining a weak-intensity laser beam on the atom \cite{Aharonov:1993:jm}. The momentum kick delivered by the field $b$ will influence the displacement, and by repeating the experiment many times, the distribution of final pointer momenta along $\buvec{m}$ can be reconstructed and compared to the measured distribution in the absence of the added fields. Effectively, this procedure generates the momentum density matrix~\eqref{eq:1lfdk9} in terms of a physical ensemble of different noisy realizations of the atomic evolution. We now give some numerical estimates for the typical strength of the added fields $b$ and the resulting change in displacement of the spin particle. We first discuss the relevant parameter values in the absence of an environment \cite{Schlosshauer:2015:uu}. For a momentum shift $\text{d}elta p = \mu\beta \cos\gamma$, the corresponding force on the atom caused by the measurement field is $F = \mu(\beta /T) \cos\gamma$, where $\beta/T$ is the magnitude of the gradient $\mbox{\boldmath$\nabla$} B_m=\frac{\beta}{T} \buvec{m}$ of the measurement field given by Eq.~\eqref{eq:measfield}. 
In a modern realization of a Stern--Gerlach setup based on evaporated potassium atoms ($\mu = \unit[9.3 \times 10^{-24}]{J/T}$) \cite{Daybell:1967:sg}, the atoms are emitted from an oven at a typical temperature of $T_\text{oven}=\unit[420]{K}$, which translates to a most probable velocity of $v=\sqrt{2 k_BT_\text{oven}/m} \approx \unit[420]{m/s}$. The inhomogeneity in the direction of $\mbox{\boldmath$\nabla$} B_m$ causes a spatial displacement given by \begin{align}\label{eq:28} \delta s &= \frac{\mu\beta \cos\gamma}{2m T} T^2 = \frac{\mu \abs{\mbox{\boldmath$\nabla$} B_m}\cos\gamma}{2m} \left(\frac{d}{v}\right)^2 \notag\\&= \frac{\mu \abs{\mbox{\boldmath$\nabla$} B_m}\cos\gamma}{4 k_BT_\text{oven}} d^2, \end{align} where $d$ is the size of the region containing the inhomogeneous measurement field. For $d=\unit[0.1]{m}$, $\gamma =\pi/4$, and a measurement-field gradient of $\abs{\mbox{\boldmath$\nabla$} B_m} \approx \unit[40]{T/m}$ (a typical value in a Stern--Gerlach experiment), the spatial displacement in the direction of the inhomogeneity is $\delta s_0 \approx \unit[0.11]{mm}$. Note that the field parameter $\xi=\frac{\beta q}{B_0 T}$ (see Sec.~\ref{sec:envir-inter}) is here given by $\abs{\mbox{\boldmath$\nabla$} B_m} d/B_0$. For the values just stated and a protection field $B_0$ on the order of $\unit[10]{T}$, we have $\xi=0.4$, which corresponds to an upper limit on the state disturbance (due to the measurement field only) of 7\% [see Eq.~\eqref{eq:17djkd44sc}]. This indicates that with these parameter choices, one is able to fulfill the condition that the protective measurement leave the state of the system largely unchanged. Since a spatially extended uniform magnetic field of such strength may be difficult to realize experimentally, one can alternatively use a smaller field if the size $d$ of the measurement region is correspondingly enlarged.
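The displacement estimate can be reproduced with a few lines of arithmetic (constants as quoted in the text):

```python
# Reproduce the Stern--Gerlach displacement estimate
# delta_s = mu |grad B_m| cos(gamma) d^2 / (4 kB T_oven) ~ 0.11 mm.
import math

mu = 9.3e-24        # magnetic moment of potassium, J/T
grad_B = 40.0       # measurement-field gradient magnitude, T/m
gamma = math.pi / 4
d = 0.1             # size of the measurement region, m
kB = 1.380649e-23   # Boltzmann constant, J/K
T_oven = 420.0      # oven temperature, K

delta_s = mu * grad_B * math.cos(gamma) * d**2 / (4 * kB * T_oven)
assert 1.0e-4 < delta_s < 1.2e-4    # about 0.11 mm

# Field parameter xi = |grad B_m| d / B0 for a protection field B0 = 10 T
xi = grad_B * d / 10.0
assert abs(xi - 0.4) < 1e-12
```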
For example, for $d=\unit[1]{m}$ and the same displacement as before, the required measurement-field gradient is $\abs{\mbox{\boldmath$\nabla$} B_m} \approx \unit[0.4]{T/m}$. Then obtaining the same low state disturbance as before requires a uniform field strength of $B_0=\unit[1]{T}$. We can now include the random magnetic fields $b$ that produce the effect of the spin environment. In Sec.~\ref{sec:infl-decoh-init} we showed that in the regime of weak decoherence relevant to protective measurement, a threshold of 5\% for the disturbance of the spin state translates into an upper limit of $s_d \le 0.35$. In Sec.~\ref{sec:infl-decoh-point} we found that values in the range of $0.1 \le s_d \le 0.2$ already produce a substantial broadening of the pointer. Let us choose $s_d=0.2$, together with the value $B_0=\unit[1]{T}$ for the uniform field as discussed in the previous paragraph. Then we can experimentally produce the environmental broadening of the pointer momentum distribution by adding to the measurement region, in each iteration of the experiment, a random magnetic field drawn from $w(b)$ with width $B_0s_d =\unit[0.2]{T}$ (which represents a typical field strength). For this strength $b=B_0s_d$, the force on the atom is now $F = \mu(\beta /T) (\cos\gamma + s_d \sin\gamma)$ [see Eq.~\eqref{eq:21} with $\eta=0$], and the corresponding displacement is $\delta s_1 \approx \unit[0.14]{mm}$, a 20\% difference compared to the displacement in the absence of the field. Thus, if we perform repeated runs of the experiment and plot the distribution of the resulting displacements, the distribution can be expected to follow a Gaussian of width $\delta s_1-\delta s_0$ (we assume that the spread of initial momenta of the atomic beam, as well as any free spreading, is sufficiently small such that the change in displacement induced by the added fields can be resolved).
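The broadened displacement can be checked the same way. The sketch below re-evaluates Eq.~\eqref{eq:28} with the force factor $\cos\gamma + s_d\sin\gamma$ for a typical added field $b = B_0 s_d$, using the $d=\unit[0.1]{m}$, $\unit[40]{T/m}$ configuration quoted earlier (the displacement is the same in both configurations, since $\abs{\mbox{\boldmath$\nabla$} B_m}\,d^2$ is unchanged):

```python
# Displacement with and without a typical added field b = B0*s_d,
# using F = mu*(beta/T)*(cos(gamma) + s_d*sin(gamma)) [Eq. (21) with eta = 0].
# Parameter values are the ones quoted in the text for the d = 0.1 m setup.
import math

k_B = 1.380649e-23       # Boltzmann constant, J/K
mu = 9.3e-24             # magnetic moment of potassium, J/T
grad_Bm = 40.0           # measurement-field gradient, T/m
gamma = math.pi / 4
d = 0.1                  # size of the measurement region, m
T_oven = 420.0           # oven temperature, K
s_d = 0.2                # relative width of the random-field distribution

prefactor = mu * grad_Bm * d**2 / (4 * k_B * T_oven)
delta_s0 = prefactor * math.cos(gamma)                            # no added field
delta_s1 = prefactor * (math.cos(gamma) + s_d * math.sin(gamma))  # typical field
print(f"delta_s0 = {delta_s0*1e3:.2f} mm, delta_s1 = {delta_s1*1e3:.2f} mm")
# -> delta_s0 = 0.11 mm, delta_s1 = 0.14 mm
```

For $\gamma = \pi/4$ the relative change is $s_d \tan\gamma = s_d$, i.e., $20\%$ for $s_d = 0.2$.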
By comparing this distribution to the distribution obtained without the added fields, the effect of the simulated environment can be experimentally verified. Additionally, by varying the average strength of the added fields as quantified by $s_d$, changes in the width of the distribution can be observed. In this way, the dependence of the pointer broadening on the strength of the environmental interaction can be measured. An experiment of this kind could also be implemented using cold atoms \cite{Dass:1999:le}. Numerical estimates of the relevant parameters given in Ref.~\cite{Dass:1999:le} suggest that, provided low atomic velocities on the order of \unit[1]{cm/s} can be achieved, a much weaker protection field ($B_0 \approx \unit[1]{G}$) and a measurement strength of around $\xi=0.1$ will suffice to produce a measurement-induced beam displacement well in excess of both initial and free momentum spreading. \section{\label{sec:gener-appl}General protective qubit measurements} So far, we have couched our analysis in the context of a setting of the Stern--Gerlach type. However, as already briefly indicated in Sec.~\ref{sec:model}, the model and the resulting calculations we have presented in this paper are generic to any protective measurement of a qubit. To see this, consider the Hamiltonian~\eqref{eq:7} together with the environmental contribution~\eqref{eq:3}, \begin{align}\label{eq:t677} \op{H} &= \op{H}_S+\op{H}_m+\op{H}_{SE}\notag\\&=\frac{1}{2}\hbar \omega_0 \op{\sigma}_z + \frac{\zeta}{T} (\bopvecgr{\sigma} \cdot \buvec{m}) \otimes \op{K}_A +\frac{1}{2} \op{\sigma}_x \otimes \sum_{i=1}^N g_i \op{\sigma}_x^{(i)}, \end{align} where we have written $\kappa(t)=\zeta/T$ for $t \in [0,T]$, with $\zeta$ a constant. 
This is the general form of the Hamiltonian describing the dynamics of a protective measurement of an arbitrary observable $\op{O}_S=\bopvecgr{\sigma} \cdot \buvec{m}$ on a generic qubit system $S$, with the apparatus pointer represented by an arbitrary observable $\op{K}_A$ that generates the pointer shift, and with $S$ coupled to an environment $E$ of two-level systems. For each environmental state $\ket{E_n}$ as defined in Sec.~\ref{sec:model}, the Hamiltonian~\eqref{eq:t677} can be equivalently mapped onto the Hamiltonian for a spin-$\frac{1}{2}$ particle interacting with an effective magnetic field as in Eq.~\eqref{eq:8}, i.e., $\op{H}^{(n)}(k) = \bopvecgr{\sigma} \cdot \bvec{B}^{(n)}(k)$, where $k$ is the variable associated with the pointer operator $\op{K}_A$. The components of $\bvec{B}^{(n)}(k)$ are as in Eq.~\eqref{eq:26} but with straightforward substitutions of variables to match the variables used in the Hamiltonian~\eqref{eq:t677}, \begin{subequations} \begin{align} B_x^{(n)}(k) &= \frac{\zeta k}{T} \cos\eta\sin\gamma+\frac{1}{2}\epsilon_n, \\ B_y(k) &= \frac{\zeta k}{T} \sin\eta\sin\gamma, \\ B_z(k) &= \frac{1}{2}\hbar \omega_0 +\frac{\zeta k}{T} \cos\gamma, \end{align} \end{subequations} where $\epsilon_n$ is the eigenvalue associated with $\ket{E_n}$. It follows that the calculations and results of Secs.~\ref{sec:model}--\ref{sec:envir-effects-with} directly carry over to the general scenario described by the Hamiltonian~\eqref{eq:t677}. All that is required is to express the relevant variables in terms of the quantities used in the Hamiltonian~\eqref{eq:t677}. The dimensionless field parameters $\xi$ and $\tilde{b}_n$ defined in Sec.~\ref{sec:envir-inter} are now given by $\xi = 2\zeta k (\hbar \omega_0 T)^{-1}$ and $\tilde{b}_n=\epsilon_n(\hbar\omega_0)^{-1}$. 
As before, $\xi$ represents the relative sizes of $\op{H}_m$ and $\op{H}_S$ (i.e., the measurement strength), $\tilde{b}_n$ represents the relative sizes of $\op{H}_{SE}$ and $\op{H}_S$ for a given $\ket{E_n}$, and the width $s_d$ of the Gaussian distribution $w(\tilde{b})$ [see Eq.~\eqref{eq:15}] represents a typical value of the strength of the environmental interaction relative to $\op{H}_S$. The pointer is prepared in a Gaussian wave packet in the variable $\ell$ conjugate to $k$, with width $\sigma_\ell$. We can then apply Eqs.~\eqref{eq:63684tf3g} and \eqref{eq:6} with the substitutions $p \rightarrow \ell$ and $\mu\beta \rightarrow \zeta$. Analogous to the Stern--Gerlach case, this establishes the result that the center of the pointer wave packet is shifted (in the variable $\ell$) by an amount $\zeta\cos\gamma=\zeta \bra{0} \bopvecgr{\sigma} \cdot \buvec{m} \ket{0}$, while the environment broadens the initial pointer wave packet so that its final variance is \begin{equation}\label{eq:30} \sigma^2 = \sigma_\ell^2 + (\zeta s_d \cos\eta\sin\gamma)^2. \end{equation} As a concrete example, consider the typical measurement setting in which the pointer operator $\op{K}_A$ [see Eq.~\eqref{eq:t677}] is the momentum operator generating spatial translations of a physical apparatus pointer, with the pointer initially represented by a Gaussian wave packet in position space. Equation~\eqref{eq:30} then quantifies the broadening of the distribution of final pointer positions due to the environment. The broadening implies an increase in the uncertainty in the measurement of the position of the pointer, and therefore an increased uncertainty in the outcome of the protective measurement, i.e., in the expectation value of the measured qubit observable. The physical representation of the apparatus pointer depends of course on the specific experimental setting. 
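To illustrate how the pointer width of Eq.~\eqref{eq:30} grows with the environmental coupling, the short sketch below evaluates $\sigma$ for a few values of $s_d$; the numerical values of $\sigma_\ell$, $\zeta$, $\gamma$, and $\eta$ are illustrative assumptions, not parameters taken from the text:

```python
# Illustration of the pointer broadening of Eq. (30):
#   sigma^2 = sigma_l^2 + (zeta * s_d * cos(eta) * sin(gamma))^2.
# The parameter values below are assumed for illustration only.
import math

sigma_l = 1.0        # initial pointer width in the variable l (assumed, arb. units)
zeta = 5.0           # measurement-coupling constant (assumed)
gamma = math.pi / 4  # polar angle of the measured observable (assumed)
eta = 0.0            # azimuthal angle; environment coupling along x

for s_d in (0.0, 0.1, 0.2, 0.35):
    sigma = math.sqrt(sigma_l**2
                      + (zeta * s_d * math.cos(eta) * math.sin(gamma))**2)
    print(f"s_d = {s_d:.2f}: sigma = {sigma:.3f}")
```

The monotonic growth of $\sigma$ with $s_d$ mirrors the broadening discussed above; for $s_d = 0$ the width reduces to the initial $\sigma_\ell$.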
The coupling between a qubit and an apparatus is a ubiquitous task in the control and readout of qubit systems in quantum information processing \cite{Nielsen:2000:tt}, and accordingly a large number of experimental realizations of such interactions exist, including weak measurement and quantum nondemolition schemes for systems such as superconducting quantum circuits \cite{Wendin:2017:aa,Vijay:2012:oo,Qin:2017:kk,Reuther:2011:zz}, quantum dots \cite{Jordan:2007:uu}, and ion traps \cite{Bruzewicz:2019:aa,Pan:2019:uu}, all of which might be adaptable to a future implementation of a protective measurement. In many of these cases, the apparatus can be modeled as a quantum resonator (harmonic oscillator), and the interaction can be tuned to the weak-coupling regime \cite{Qin:2017:kk,Jordan:2007:uu,Vijay:2012:oo,Pan:2019:uu}. For example, in superconducting quantum circuits \cite{Wendin:2017:aa,Qin:2017:kk,Reuther:2011:zz} in which a superconducting qubit is coupled to a transmission line resonator, the pointer is represented by an appropriate resonator mode (voltage for charge qubits, current for flux qubits), and the Hamiltonian for the measurement interaction takes the form $\op{H}_m = g \op{\sigma}_z(\op{a}+\op{a}^\dagger)$. Thus, the pointer observable is given by $\op{X}=\op{a}+\op{a}^\dagger$, generating measurable shifts in the conjugate quantity \cite{Wendin:2017:aa,Reuther:2011:zz}. \section{\label{sec:discussion}Discussion and conclusions} For a quantum measurement to be considered protective, it must leave the initial state of the system approximately unchanged while transferring information about the expectation value of an arbitrary system observable to the apparatus pointer. 
Our results show that while, unsurprisingly, interactions with a decoherence-inducing environment during the measurement make it harder to fulfill the first condition of minimal state disturbance, it is really the second condition of a faithful pointer shift that is most dramatically affected by the presence of the environment. For even when the system couples only weakly to the environment and the initial state does not become appreciably decohered, the probability distribution of the position of the pointer may be broadened so substantially as to make it difficult to reliably infer information about the expectation value of interest from a measurement of the pointer position. In this way, the environment acts as a significant source of noise on the pointer. Moreover, we have shown that the environment can have an effect on the pointer even when it does not lead to any decoherence of the state of the system. Specifically, it increases the likelihood of reading from the final pointer measurement a value that gives the expectation value not in the initial state of the system as desired, but in a state orthogonal to it. This can dramatically affect the fidelity of any quantum-state reconstruction based on protective measurements \cite{Schlosshauer:2015:uu}. We have also described how our model could be experimentally explored with the help of a setup of the Stern--Gerlach type. Here the influence of the environment can be simulated in terms of repeated noisy realizations of a standard Stern--Gerlach-type protective measurement augmented by random magnetic fields. The resulting pointer state will be equivalent to that obtained from actual spin--spin interactions between the qubit and the environmental spins. Such interactions would be difficult to realize in a controlled manner for an atom traversing the Stern--Gerlach apparatus. By contrast, the scheme we have outlined can be readily implemented once the protective measurement itself is experimentally available. 
In general, protective measurements, owing to the long duration of the measurement interaction, will be much more susceptible to couplings to an environment than short impulsive or weak measurements. Of course, to what extent decoherence plays a role in a specific experimental implementation of a protective measurement will depend on whether and how the measured system interacts with its environment. For experiments of the Stern--Gerlach type or for experiments based on photon polarization \cite{Piacentini:2017:oo}, unwanted environmental interactions may be reasonably easily controlled and minimized. This is unlikely to be the case in other potentially relevant physical situations, such as trapped ions \cite{Bruzewicz:2019:aa} or superconducting qubits \cite{Wendin:2017:aa,Reuther:2011:zz}. Indeed, given their ability to implement carefully controlled interactions between a qubit and an apparatus, the various experimentally studied qubit architectures for quantum information processing constitute promising platforms for the realization of a protective measurement. Since environmental interactions play a significant role in these qubit systems \cite{Schlosshauer:2019:qd}, implementation of a protective measurement will almost certainly have to include an analysis of the influence of the environment such as we have given here. We stress that the study presented in this paper is independent of any particular physical realization of the qubit system and the apparatus. It describes how environmental interactions affect a generic protective qubit measurement when such interactions cannot be avoided. What is perhaps surprising about our results is how significant the impact of the environment on the measurement can be even when the decohering influence on the quantum state of the system is small. \end{document}
\begin{document} \title[postprojectives and components] {On the postprojective partitions and components of the Auslander-Reiten quivers} \author[Coelho] {Fl\'avio U. Coelho} \address{Departamento de Matem\'atica-IME, Universidade de S\~ao Paulo, CP 66281, S\~ao Paulo, SP, 05315-970, Brazil} \email{fucoelho@@ime.usp.br} \author[Silva] {Danilo D. da Silva} \address{Departamento de Matem\'atica-DMA, Universidade Federal de Sergipe, S\~ao Crist\'ov\~ao, SE, 49100-000, Brazil} \email{ddsilva@@ufs.br} \keywords{irreducible morphisms, degree, postprojective partitions} \subjclass{16G70, 16G20, 16E10} \maketitle \begin{abstract} In this paper we shall investigate further the connections between the postprojective partition of an algebra and its Auslander-Reiten quiver. \end{abstract} \vspace{.3 cm} Auslander-Smal\o\ introduced, in \cite{auslander}, the notion of postprojective partition and modules (under the name of preprojective). The connection between such a partition and the structure of the Auslander-Reiten quiver has been investigated in several papers such as \cite{assemcoelho,auslander, coelho1, coelho2,coelhosilva1,todorov}. The purpose of this paper is to follow such investigations. We introduce the notion of ${\bf P}$-discrete component of the Auslander-Reiten quiver $\Gamma_A$ as follows. Let $\{ {\bf P_i} \}$, with $i\in \mathbb{N}_{\infty}=\mathbb{N}\cup \{ \infty \}$, be the postprojective partition of $A$ (recall the definition below). A component $\Gamma$ of $\Gamma_A$ is ${\bf P}$-discrete if for each $i\geq 0$ and each $M \in \Gamma \cap {\bf P_i}$, we have that ${\rm tr}_{\bf P_{i+1}}(M) = {\rm tr}_{\bf P_\infty}(M)$, where ${\rm tr}_{\mathcal C}(M)$ denotes the trace of the set of modules ${\mathcal C}$ in $M$ (see Section 1 below). \vspace{.3 cm}\\ {\bf Theorem 2.3.} {\it Let $A$ be a representation-infinite Artin algebra.
If $\Gamma$ is a $\bf{P}$-discrete connected component of $\Gamma_A$ then there is no arrow $ M \rightarrow N$ in $\Gamma$ with $M \in \bf{P_i}$ and $N \in \bf{P_j}$ such that $i+1<j<{\infty}$. } \vspace{.3 cm} Also, using the notion of left degree of a morphism (introduced by Liu in \cite{liu}), we prove the following result. \vspace{.3 cm}\\ {\bf Theorem 3.4.} {\it Let $A$ be a finite dimensional algebra over an algebraically closed field and let $f \colon M \longrightarrow N$ be an irreducible monomorphism of infinite left degree. If ${\rm tr}_{\bf P_\infty}(N) = 0$, then ${\rm Coker}f\in{\bf P}_{\infty}$ and every non-trivial submodule of $ {\rm Coker}f$ is postprojective. } \vspace{.3 cm} In particular, from the last theorem we get that if an irreducible monomorphism which lies in a postprojective component has its cokernel in a regular component, then the latter must be simple regular. This paper is organized as follows. After recalling basic notions in Section 1, we prove Theorem 2.3 in Section 2 and Theorem 3.4 in Section 3. \section{Preliminaries} \subsection{Basics} For the results of Section 2, we assume the algebras $A$ to be Artin algebras, unless otherwise stated. For the last section, we shall restrict to finite dimensional algebras over a fixed algebraically closed field $k$. Furthermore, we will assume that all algebras are basic. For unexplained notions in representation theory we refer the reader to \cite{auslanderbook}. For an algebra $A$, we denote by $\rm{mod}A$ the category of all finitely generated left $A$-modules, and by $\rm{ind}A$ the full subcategory of $\rm{mod}A$ consisting of one representative of each isomorphism class of indecomposable $A$-modules. We denote by $\Gamma_A$ the Auslander-Reiten quiver of $A$ and by $\tau$ and $\tau^{-}$ the Auslander-Reiten translations DTr and TrD, respectively.
Given $n \geq 1$ and $M,N \in {\rm mod}A$, we define the subgroups ${\rm rad}^n(M,N)$ of ${\rm Hom}(M,N)$ by induction: for $n=1$, we set ${\rm rad}^1(M,N)$ to be the set of all morphisms $f \colon M \longrightarrow N$ such that the compositions $gfh$ are not isomorphisms for all $h \colon L \longrightarrow M$ and $g \colon N \longrightarrow L$, with $L$ indecomposable. Also, we define ${\rm rad}^n(M,N)$ as the set of all morphisms $f \in {\rm Hom}(M,N)$ such that there exist $X \in {\rm mod}A$ and morphisms $g \in {\rm rad}(M,X)$ and $h \in {\rm rad}^{n-1}(X,N)$ such that $f=hg$. Finally, we set ${\rm rad}^{\infty}(M,N)= \bigcap_{n \geq 1}{\rm rad}^n(M,N)$. We recall that for $X,Y \in {\rm ind}A$, $f:X \rightarrow Y$ is called {\bf irreducible} if and only if $f \in {\rm rad}(X,Y) \backslash {\rm rad}^2(X,Y)$. A {\bf path of irreducible morphisms of length $n$} is a sequence $ M_0 \stackrel{h_1}{\longrightarrow} M_1 \longrightarrow \cdots \longrightarrow M_{n-1} \stackrel{h_n}{\longrightarrow} M_n$ where each $h_i$ is irreducible and each $M_j$ is indecomposable. Following Liu \cite{liu}, we say that the left degree of an irreducible morphism $f:X \rightarrow Y$ is $n$, and we denote $d_l(f)=n$, if $n$ is the smallest positive integer for which there exist $Z \in {\rm ind}A$ and a morphism $h : Z \rightarrow X$ such that $h \in {\rm rad}^n(Z,X) \backslash{\rm rad}^{n+1}(Z,X)$ and $fh \in {\rm rad}^{n+2}(Z,Y)$. In case this condition is not verified for any $n \geq 1$, we say the left degree of $f$ is infinite. Dually, one can define the right degree of an irreducible morphism. \subsection{Postprojective partitions and modules} We shall now recall the concept of postprojective partition and modules as introduced by Auslander and Smal\o\ in \cite{auslander} under the name preprojective.
A {\bf postprojective partition} of an Artin algebra $A$ is a partition $\{ {\bf P_i} \}$, with $i\in \mathbb{N}_{\infty}=\mathbb{N}\cup \{ \infty \}$, of ${\rm ind}A$ such that \begin{itemize} \item[(a)] ${\rm ind}A$ is the disjoint union of the subcategories ${\bf P_i}$, $i \in \mathbb{N}_{\infty}$. \item[(b)] for each $j<{\infty}$, ${\bf P_j}$ is a finite minimal cover of the union of the subcategories ${\bf P_i}$ with $j \leq i \leq {\infty}$. \end{itemize} It is clear that the $A$-modules in ${\bf P_0}$ are all the indecomposable projectives. In this article, we denote ${\bf P}({\rm ind}A)= \displaystyle\bigcup_{0 \leq i <{\infty}}{\bf P_i}$ simply by ${\bf P}$. The modules in ${\rm add}{\bf P}$ will be called {\bf postprojective} modules (formerly called preprojective modules in \cite{auslander}). We denote by ${\bf P^m}$ the subcategory ${\bf P_0} \cup \cdots \cup {\bf P_{m}}$. Given $i \in \mathbb{N}_{\infty}$ and a module $M$ in ${\rm mod}A$, we denote the trace of ${\bf P_i}$ on $M$ by ${\rm tr}_{\bf P_i}(M)$, that is, the submodule of $M$ generated by the images of all morphisms which have domain in ${\rm add}{\bf P_i}$. Therefore, ${\rm tr}_{\bf P_i}(M)$ is the submodule of $M$ generated by $\{ {\rm Im}f \mid f \in {\rm Hom}(N,M) \, {\rm and} \, N \in {\bf P_i} \}$. It was proved in \cite{auslander} that ${\rm tr}_{\bf P_{\infty}}(M)= \displaystyle\bigcap_{i\geq 0} {\rm tr}_{\bf P_{i}}(M)$. Hence, $ {\rm tr}_{\bf P_{\infty}}(M)= {\rm tr}_{\bf P_{r}}(M)$ for some $r \in \mathbb{N}$ since $M$ is artinian and ${\rm tr}_{\bf P_{n+1}}(M) \subseteq {\rm tr}_{\bf P_{n}}(M)$, for each $n \geq 0$. It was also proved in \cite{auslander} that $M$ is postprojective if and only if ${\rm tr}_{\bf P_{\infty}}(M) \neq M$. The following proposition from \cite{coelho1} shall be very useful in the sequel.
\begin{prop} \label{lema1} {\rm \cite{coelho1}} Let $N$ be a postprojective module and $f : M \rightarrow N$ a morphism such that $ {\rm Im}f \not\subseteq {\rm tr}_{\bf{P_{\infty}}}(N)$. Then $f \not\in {\rm rad}^{\infty}(M,N)$. \end{prop} We also recall the following result from \cite{coelhosilva1} (Lemma 4.2). \begin{lem} \label{lemadan} Let $ \bf{P_0},\bf{P_1}, \cdots, {\bf P_{\infty}}$ be the postprojective partition of an algebra $A$. Given $0<i \leq {\infty}$, we have ${\rm Hom}(M,N)={\rm rad}^i(M,N)$, for each $ M \in \bf{P_0}$ and each $ N \in \bf{P_i}$. \end{lem} \section{Paths of irreducible morphisms and the postprojective partition} Throughout this section, let $A$ denote an Artin algebra and $\{ {\bf P_i} \}$, with $i\in \mathbb{N}_{\infty}=\mathbb{N}\cup \{ \infty \}$, the postprojective partition of ${\rm ind}A$. Let $M$ be a postprojective module. It was proved in \cite{auslander} that there exists a path of irreducible morphisms from a projective module $P$ to $M$. Clearly, then, $P$ and $M$ lie in the same connected component $\Gamma$ of $\Gamma_A$. One could wonder if there exists such a path as follows: $$ P = M_0 \longrightarrow M_1 \longrightarrow \cdots \longrightarrow M_n= M$$ with $M_i \in {\bf P_i}$ for each $i$. Corollary 3 in \cite{igusa} states that this is true if $\Gamma$ is a postprojective component of a hereditary algebra. The next example shows that this is not true in general.
\begin{exmp} {\rm Let $A$ be the finite-dimensional $k$-algebra (where $k$ is a field) given by the quiver\\ \begin{picture}(50,20) \put(43,0){\circle*{1}} \put(57,0){\circle*{1}} \put(71,0){\circle*{1}} \put(85,0){\circle*{1}} \put(71,14){\circle*{1}} \put(45,0){\vector(1,0){10}} \put(69,0){\vector(-1,0){10}} \put(83,0){\vector(-1,0){10}} \put(71,12){\vector(0,-1){10}} \put(63,2){$\alpha$} \put(73,5.6){$\beta$} \put(72.5,13){1} \put(42,-4.5){2} \put(56,-4.5){3} \put(70,-4.5){4} \put(84,-4.5){5} \end{picture} \vspace{1 cm}\\ bound by $\beta \alpha = 0$. Its Auslander-Reiten quiver has the following shape:\\ \begin{picture}(50,40) \put(52,4){\vector(1,1){8}} \put(68,12){\vector(1,-1){8}} \put(52,-4){\vector(1,-1){8}} \put(68,-12){\vector(1,1){8}} \put(62,12){{\small $S_4$}} \put(46,-1.5){{\small $\tau^{-}P_3$}} \put(62,-15){{\small $I_3$}} \put(76,-1.5){{\small $\tau^{-2}P_3$}} \put(20,4){\vector(1,1){8}} \put(84,4){\vector(1,1){8}} \put(68,18){\vector(1,1){8}} \put(100,18){\vector(1,1){8}} \put(36,-12){\vector(1,1){8}} \put(52,-26){\vector(1,1){8}} \put(36,12){\vector(1,-1){8}} \put(100,12){\vector(1,-1){8}} \put(84,26){\vector(1,-1){8}} \put(20,-4){\vector(1,-1){8}} \put(36,-18){\vector(1,-1){8}} \put(68,-18){\vector(1,-1){8}} \put(16,-1.5){{\small $P_3$}} \put(108,-1){{\small $S_1$}} \put(29,-15){{\small $P_4$}} \put(47,-30.5){{\small $P_5$}} \put(77,-30.5){{\small $I_2$}} \put(29,12){{\small $P_2$}} \put(94.5,12.5){{\small $I_4$}} \put(78,28){{\small $P_1$}} \put(108,28){{\small $I_5$}} \multiput(59,0)(3,0){5}{\circle*{.1}} \multiput(25,0)(3,0){6}{\circle*{.1}} \multiput(91,0)(3,0){5}{\circle*{.1}} \multiput(41,14)(3,0){6}{\circle*{.1}} \multiput(73,14)(3,0){6}{\circle*{.1}}
\multiput(41,-14)(3,0){6}{\circle*{.1}} \multiput(57,-28)(3,0){6}{\circle*{.1}} \multiput(89,28)(3,0){6}{\circle*{.1}} \end{picture} \vspace{4 cm}\\ For each $j$, $P_j, I_j$ and $S_j$ denote, respectively, the projective, the injective and the simple modules associated to the vertex $j$ of the quiver. The postprojective partition is then ${\bf P_0} = \{ P_1, P_2, P_3, P_4, P_5 \}$, ${\bf P_1} = \{ \tau^{-1} P_3, I_3, I_4 \}$, ${\bf P_2} = \{ S_4, \tau^{-2} P_3, I_1, I_2 \}$ and ${\bf P_3} = \{ I_5 \}$. Observe that there are paths from a projective to $I_5 \in {\bf P_3}$ of length 2, 4, 5 and 6, and so, none of the required type. } \end{exmp} The next example shows that there are non-postprojective components with this property. \begin{exmp} \label{tuboraio} {\rm Let $A$ be a path algebra defined by the quiver: $$\xymatrix{ & 1 & & 4 \ar[ld]_{\alpha} \\ & & 3 \ar[ld]^{\delta} \ar[lu]_{\beta} & \\ & 2 & & 5 \ar[lu]^{\gamma} \\ }$$ \par\noindent (see \cite{coelhosilva1}). Let $S$ be the simple module associated to the vertex $3$ and let $N$ be the indecomposable module such that $\tau S=N$ and $\tau N= S$.
One can check that $S$ and $N$ determine a tube of rank 2.\\ The bound quiver of the extended algebra $A[S]$ is $$\xymatrix{ & 1 & & 4 \ar[ld]_{\alpha} & \\ & & 3 \ar[ld]^{\delta} \ar[lu]_{\beta} & & 0 \ar[ll]_{\epsilon} \\ & 2 & & 5 \ar[lu]^{\gamma} & \\ }$$ \par\noindent bound by $\epsilon\beta=0$ and $\epsilon\delta=0$.\\ The ray tube which contains $S=S[1]$ has the shape: $$\xymatrix{ & & & \overline{S[1]} \ar[rd] & & N[1] \ar@{.}[dd] \\ N[1] \ar[rd] \ar@{.}[dd] & & S[1] \ar[rd] \ar[ru] & & \overline{S[2]} \ar[ru] \ar[rd] & \\ & N[2] \ar[rd] \ar[ru] & & S[2] \ar[rd] \ar[ru] & & \overline{S[3]} \ar@{.}[dd] \\ \overline{S[3]} \ar[rd] \ar[ru] \ar@{.}[dd] & & N[3] \ar[rd] \ar[ru] & & S[3] \ar[rd] \ar[ru] & \\ & \overline{S[4]} \ar[rd] \ar[ru] & & N[4] \ar[rd] \ar[ru] & & S[4] \ar@{.}[d] \\ \vdots \ar[ru] & & \vdots \ar[ru] & & \vdots \ar[ru] & \vdots \\ }$$ \vspace{0.3cm} \par\noindent Let ${\bf P_0}, {\bf P_1}, \cdots, {\bf P_{\infty}}$ be the postprojective partition of $A[S]$. Then the modules $N[i]$ and $S[j]$ with $i \geq 1$ and $j \geq 1$ are all in ${\bf P_{\infty}}$ and the ray $\overline{S[1]} \rightarrow \overline{S[2]} \rightarrow \cdots \rightarrow \overline{S[n]} \rightarrow \cdots$ is such that $\overline{S[n]} \in {\bf P_{n-1}}$ for each $n \geq 1$. } \end{exmp} Our purpose in this section is to introduce the concept of $\bf{P}$-discrete components of $\Gamma_A$ and show that, for them, we have an affirmative answer to the above question. We start with a lemma. \begin{lem} \label{formadopreprojetivo} Let $n$ be an integer greater than 0, and let $M\in {\bf P_n}$. For each $j$, $0 \leq j < n$, there exists a path of irreducible morphisms $L \leadsto M$ through postprojective modules with $L \in {\bf P_j}$ and with composition not lying in ${\rm rad}^{\infty}(L,M)$.
\end{lem} \begin{pf} Since $M\in{\bf P_n}$ and $j <n$, by definition, there exists an epimorphism $\displaystyle\bigoplus_{i=1}^{r}L_i \stackrel{g}{\longrightarrow} M$, with $L_i \in {\bf P_j}$ for each $i$. Write $g=[g_1, \cdots, g_r]$. Since $M$ is postprojective, ${\rm tr}_{\bf P_{\infty}}(M) \neq M$ and so ${\rm Im}g \not\subseteq {\rm tr}_{\bf P_{\infty}}(M)$. Hence, there exists an $l$ such that ${\rm Im}g_l \not\subseteq {\rm tr}_{\bf P_{\infty}}(M)$. Because of Proposition \ref{lema1}, $g_l \not\in {\rm rad}^{\infty} (L_l, M)$. By \cite{auslanderbook} (Proposition 7.4), we get the expression $g_l = \sum_i \alpha_i + \beta $ where each $\alpha_i$ is a path of irreducible morphisms with composite not lying in ${\rm rad}^{\infty} (L_l, M)$ and $\beta \in {\rm rad}^{\infty} (L_l, M)$. Using Proposition \ref{lema1} again, we infer that ${\rm Im}\beta \subseteq {\rm tr}_{\bf P_{\infty}}(M)$. Hence, for some $i$, the composition $h$ of the path $\alpha_i$ has image ${\rm Im}h\not\subseteq {\rm tr}_{\bf P_{\infty}}(M)$ and so $h \notin {\rm rad}^{\infty} (L_l, M)$. It remains to show that the path $\alpha_i$ passes through only postprojective modules. Suppose $\alpha_i$ is a path $$ L_l \stackrel{(*)}{\leadsto} N \stackrel{(**)}{\leadsto} M$$ where $N \in {\bf P_{\infty}}$ and denote by $\gamma$ and $\gamma'$ the (nonzero) compositions of the paths $(*)$ and $(**)$, respectively. Hence $\gamma' \gamma = h$. Now, ${\rm Im}\gamma' \subseteq {\rm tr}_{\bf P_{\infty}}(M)$ because $N\in {\bf P_{\infty}}$ and so ${\rm Im}h \subseteq {\rm Im}\gamma' \subseteq {\rm tr}_{\bf P_{\infty}}(M)$, a contradiction. This proves the lemma. \end{pf} \begin{defn} Suppose $A$ is representation-infinite. We say that a connected component $\Gamma$ of $\Gamma_A$ is a $\bf{P}$-{\bf{discrete component}} if for all $i \geq 0$ and for each postprojective module $M$ in $\Gamma$ with $M \in \bf{P_i}$ we have ${\rm tr}_{\bf P_{i+1}}(M)={\rm tr}_{\bf P_{\infty}}(M)$.
\end{defn} \par\noindent {\bf Remark:} Note that ${\rm tr}_{\bf P_{i+1}}(M)={\rm tr}_{\bf P_{\infty}}(M)$ implies ${\rm tr}_{\bf P_{j}}(M)={\rm tr}_{\bf P_{\infty}}(M)$, for each $j>i$. \begin{prop} \label{caracterizacao} Let $A$ be a representation-infinite Artin algebra and $\Gamma$ be a connected component of $\Gamma_A$. The following are equivalent: \begin{enumerate} \item[(a)] $\Gamma$ is a $\bf{P}$-discrete component. \item[(b)] There exists no arrow $M \rightarrow N$ in $\Gamma$ with $M \in \bf{P_j}$, $N \in \bf{P_i}$ and $i<j<{\infty}$. \item[(c)] There exists no path of irreducible morphisms $M \leadsto N$ in $\Gamma$ through indecomposable postprojective modules with $M \in \bf{P_j}$, $N \in \bf{P_i}$ and $i<j<{\infty}$. \end{enumerate} \end{prop} \begin{pf} Since one can easily see that (b) and (c) are equivalent, we show (a) $\Rightarrow$ (b) and (c) $\Rightarrow$ (a). (a) $\Rightarrow$ (b) Suppose there exists an irreducible morphism $f:M \rightarrow N$ in $\Gamma$ with $M \in \bf{P_j}$, $N \in \bf{P_i}$ and $i<j<{\infty}$. Assuming $\Gamma$ is $\bf{P}$-discrete we have ${\rm Im}f \subseteq {\rm tr}_{\bf P_{j}}(N) = {\rm tr}_{\bf P_{\infty}}(N)$. Then we can factorize $f$ as follows: $$\xymatrix{ M \ar[rr]^f \ar[rd]_{f'} & & N \\ & {\rm tr}_{\bf P_{\infty}}(N) \ar @{^{(}->} [ru] & \\ }$$ From the fact that $f$ is irreducible and $N \neq {\rm tr}_{\bf P_{\infty}}(N)$, we get that $f':M \rightarrow {\rm tr}_{\bf P_{\infty}}(N)$ is a split monomorphism and $M$ is a summand of ${\rm tr}_{\bf P_{\infty}}(N)$. This is absurd since ${\rm tr}_{\bf P_{\infty}}(N) \in {\rm add}\bf{P_{\infty}}$. (c) $\Rightarrow$ (a) Suppose by contradiction that there exists $M \in \Gamma \cap \bf{P_i}$, with ${\rm tr}_{\bf P_{i+1}}(M) \neq {\rm tr}_{\bf P_{\infty}}(M)$. Then there exists $f: M' \rightarrow M$, $M' \in \bf{P_{i+1}}$, such that ${\rm Im}f \not\subseteq {\rm tr}_{\bf P_{\infty}}(M)$.
Therefore, Lemma \ref{formadopreprojetivo} provides us with a path of irreducible morphisms through indecomposable postprojective modules starting at $M'$ and ending at $M$, which contradicts (c). \end{pf} \begin{thm} \label{theotodorov} Let $A$ be a representation-infinite Artin algebra. If $\Gamma$ is a $\bf{P}$-discrete connected component of $\Gamma_A$ then there is no arrow $ M \rightarrow N$ in $\Gamma$ with $M \in \bf{P_i}$ and $N \in \bf{P_j}$ such that $i+1<j<{\infty}$. \end{thm} \begin{pf} We shall prove it by induction on $i \geq 0$. \\ For $i = 0$, just observe that if $f: P \rightarrow N$ is an irreducible morphism in $\Gamma$ with $P \in \bf{P_0}$ and $N \in \bf{P_j}$, $j>1$, then by Lemma \ref{lemadan} we have $f \in {\rm rad}^2(P,N)$, which is a contradiction.\\ Suppose now that the theorem is true for all values less than $i$ and let $f:M \rightarrow N$ be an irreducible morphism in $\Gamma$ with $M \in \bf{P_i}$, $N \in \bf{P_j}$ and $i+1<j<{\infty}$. Then $\tau N \in {\bf P^{i-1}}$ (Lemma 2.1 in \cite{coelho1}) and $f$ is not a sink morphism because otherwise, by Lemma \ref{formadopreprojetivo}, there would be a path of irreducible morphisms through indecomposable postprojective modules $L \rightarrow \cdots \rightarrow M \rightarrow N$, with $L \in \bf{P_{i+1}}$, which contradicts the fact that $\Gamma$ is ${\bf P}$-discrete. Therefore, there exists $M' \in \ $mod$A$ such that $$\xymatrix{ & M \ar[rd]^{f} & \\ \tau N \ar[rd] \ar[ru] & & N \\ & M' \ar[ru] & \\ }$$ is the Auslander-Reiten sequence which ends at $N$.\\ We now prove that $M' \not\in \ $add$\bf{P^{j-2}}$. Suppose, by contradiction, that $M' \in \ $add$\bf{P^{j-2}}$ and hence $M \oplus M' \in \ $add$\bf{P^{j-2}}$. Then there exists $h: N' \rightarrow N$, $N' \in \bf{P_{j-1}}$, such that ${\rm Im}h \not\subseteq {\rm tr}_{\bf P_{\infty}}(N)$, $h \in {\rm rad}(N',N)$.
Since $h$ is not a split epimorphism, it can be lifted through $f$: $$\xymatrix{ & N' \ar[d]^h \ar[ld]_g & \\ M \oplus M' \ar[r]_f & N \ar[r] & 0 \\ }$$ Since $\Gamma$ is ${\bf P}$-discrete, we know that ${\rm tr}_{\bf P_{j-1}}(M \oplus M')={\rm tr}_{\bf P_{\infty}}(M \oplus M')$, which implies that ${\rm Im}g \subseteq {\rm tr}_{\bf P_{\infty}}(M \oplus M')$. Therefore ${\rm Im}fg \subseteq {\rm tr}_{\bf P_{\infty}}(N)$, since $f({\rm tr}_{\bf P_{\infty}}(M \oplus M')) \subseteq {\rm tr}_{\bf P_{\infty}}(N)$. Hence ${\rm Im}h = {\rm Im}fg \subseteq {\rm tr}_{\bf P_{\infty}}(N)$, which is a contradiction. \\ Then there exists a summand $M_1 \in \bf{P_{j-1}}$ of $M'$ and an irreducible morphism $\tau N \rightarrow M_1$ such that $ \tau N \in \bf{P^{i-1}}$ and $i-1<i<j-1$, which contradicts the induction hypothesis. The theorem follows. \end{pf} \par\noindent {\bf Remark:} In \cite{todorov} (item (a) of Proposition 1), it was proved that if $A$ is a hereditary algebra then there is no arrow $M \rightarrow N$ in the postprojective component with $M \in \bf{P_i}$ and $N \in \bf{P_j}$ such that $i+1<j<{\infty}$. This result was then used to prove that the postprojective component satisfies item (b) of Proposition \ref{caracterizacao} (in other words, the converse of Theorem \ref{theotodorov} for $A$ hereditary). \begin{cor} Let $A$ be a representation-infinite Artin algebra and $\Gamma$ be a $\bf{P}$-discrete component of $\Gamma_A$. Given $i>0$ and $M \in \bf{P_i} \cap \Gamma$, there exists a path of irreducible morphisms between indecomposable modules $M_0 \longrightarrow M_1 \longrightarrow \cdots \longrightarrow M_i = M$, where $M_j \in {\bf P_j}$ for each $j \in \{0, \cdots, i \}$; moreover, it has the smallest possible length among all paths of irreducible morphisms starting at a projective module and ending at $M$. \end{cor} \begin{pf} Consider the sink map ending at $M$. Since $M$ is not projective, this morphism is an epimorphism.
Then there exists an irreducible morphism $N \longrightarrow M$ with $N \in \bf{P^{i-1}}$. In fact, by Theorem 2.3, we have that $N \in \bf{P_{i-1}}$. Repeating the same argument, we get a path of irreducible morphisms $M_0 \longrightarrow M_1 \longrightarrow \cdots \longrightarrow M_i = M$, where $M_j \in {\bf P_j}$ for each $j \in \{0, \cdots, i\}$. Again by Theorem 2.3, there cannot be a shorter path starting at a projective module and ending at $M$. \end{pf} \section{Right degrees of irreducible morphisms in $\pi$-components} Recall that if an irreducible morphism $f:M \longrightarrow N$ in ${\rm mod}A$ has finite right degree, then $f$ is a monomorphism and $d_r(f)=n$ if, and only if, ${\rm coker}(f) \in {\rm rad}^n \backslash {\rm rad}^{n+1}$ (by the dual version of Corollary 3.3 in \cite{chaio}). We are now interested in the particular case where, in addition, ${\rm tr}_{\bf P_{\infty}}(N)=0$. We look for a connection between the fact that the $A$-module ${\rm Coker}f$ is postprojective and the fact that the right degree of $f$ is finite. Recall that, in \cite{coelhosilva1}, we proved that the inclusion map $f_S: {\rm rad}P_S \hookrightarrow P_S$, where $P_S$ is the projective cover of the simple module $S$, has finite right degree if and only if $S$ is postprojective. From now on, since we depend on the results of \cite{chaio}, we restrict our consideration to finite dimensional algebras over an algebraically closed field $k$. Unless otherwise stated, $A$ is such an algebra. \begin{thm} \label{grauengana} Let $f:M \rightarrow N$ be an irreducible monomorphism with ${\rm tr}_{{\bf P_{\infty}}}(N)=0$. Then $d_r(f)<{\infty}$ if and only if ${\rm Coker}f$ is postprojective. \label{propmono} \end{thm} \begin{pf} Suppose ${\rm Coker}f$ is postprojective.
Since ${\rm coker}(f)$ is an epimorphism and ${\rm tr}_{\bf P_{\infty}}({\rm Coker}f) \neq {\rm Coker}f$, we have ${\rm coker}(f) \not\in {\rm rad}^{\infty}(N,{\rm Coker}f)$ by Proposition \ref{lema1}, so we get $d_r(f)<{\infty}$.\\ Now assume $d_r(f)=n$, $1 \leq n < {\infty}$, and suppose by contradiction that $C ={\rm Coker}f \in {\bf P_{\infty}}$.\\ We set $\pi_f={\rm coker}(f)$. Then $\pi_f \in {\rm rad}^n(N,C) \backslash {\rm rad}^{n+1}(N,C)$, by Proposition 3.5 in \cite{chaio}. We know there exists $1 \leq j < {\infty}$ such that ${\rm tr}_{\bf P_{j}}(N) = {\rm tr}_{\bf P_{\infty}}(N)=0$. Moreover, there exists a nonzero morphism $v:L \rightarrow C$ with $L \in {\rm add}{\bf P_j}$ such that $v \in {\rm rad}^{n+1}(L,C)$. Indeed, if we take $r>j+n$, then we can get a covering $h_r:M_r \rightarrow C$ with $h_r \in {\rm rad}(M_r, C)$ and $M_r \in {\rm add}{\bf P_r}$, since $C \in {\bf P_{\infty}}$. Then let $h_l:M_l \rightarrow M_{l+1}$ be a covering with $M_l \in {\rm add}{\bf P_l}$ for all $l \in \{ j, \cdots ,r-1 \}$. If we set $v=h_rh_{r-1} \cdots h_j:M_j \rightarrow C$, then $v$ is nonzero, being a composition of epimorphisms, and $v \in {\rm rad}^{n+1}(L,C)$ with $L=M_j \in {\rm add}{\bf P_j}$, as required. By Proposition 5.6 in \cite{auslanderbook}, either there exists $p:N \rightarrow L$ such that $\pi_f=vp$ or there exists $q:L \rightarrow N$ such that $v=\pi_fq$. In the first case, we get $\pi_f \in {\rm rad}^{n+1}(N,C)$, as $v \in {\rm rad}^{n+1}(L,C)$, which is a contradiction. In the latter case, as ${\rm tr}_{\bf P_{j}}(N) = {\rm tr}_{\bf P_{\infty}}(N)=0$, we have $q=0$, which implies $v=0$, again a contradiction. Hence ${\rm Coker}f$ is postprojective. \end{pf} In \cite{coelho2}, Coelho considered the so-called $\pi$-components in $\Gamma_A$, which are components containing only postprojective modules. Hereditary algebras, or more generally, left glued algebras (see \cite{assemcoelho}), contain such components.
These components can also be characterized by the fact that all their modules $M$ satisfy ${\rm tr}_{\bf P_{\infty}}(M)=0$ (see \cite{coelho2}). \begin{cor} Let $\Gamma$ be a $\pi$-component and $f:M \rightarrow N$ be an irreducible monomorphism in $\Gamma$. Then $d_r(f)<{\infty}$ if and only if ${\rm Coker}f$ is postprojective. \end{cor} \par\noindent {\bf Remark:} Although the above results may suggest that if $f:M \rightarrow N$ is an irreducible monomorphism with $N$ postprojective then $d_r(f)<{\infty}$ if and only if ${\rm Coker}f \in {\bf P}$, we point out that this is not the case. In Example \ref{tuboraio}(b) we exhibit a source map $f:\overline{S[1]} \rightarrow \overline{S[2]}$ which is a monomorphism of right degree equal to 1. The $A[S]$-module $\overline{S[2]}$ is postprojective, but $N[1]={\rm Coker}f \in {\bf P}_{\infty}$. \vspace{.3cm} Now we turn our attention to the case $d_r(f)={\infty}$. We start with a definition. \begin{defn} Let $M$ be an indecomposable module in ${\bf P_{\infty}}$. We say $M$ is ${\bf P_{\infty}-}{\rm simple}$ if every nontrivial submodule of $M$ is postprojective. \end{defn} \par\noindent {\bf Remark:} We know that there is no bound on the lengths of the modules lying in any given infinite set of indecomposable nonisomorphic postprojective modules (see \cite{auslander}). Hence, any $A$-module $M$ can have only a finite number of nonisomorphic postprojective submodules (if any). Therefore, if $M$ is ${\bf P_{\infty}-}{\rm simple}$, then there exists $0 \leq n < {\infty}$ such that every nontrivial submodule of $M$ is in ${\rm add}{\bf P^n}$. \begin{prop} Let $M$ be an indecomposable $A$-module in a regular component of $\Gamma_A$. If $M$ is ${\bf P}_{\infty}$-simple then $M$ is simple regular. \end{prop} \begin{pf} If $M$ is ${\bf P}_{\infty}$-simple then $M$ cannot have nontrivial regular submodules, since all modules in regular components are in ${\bf P}_{\infty}$.
\end{pf} Now we show that if $f:M \rightarrow N$ is an irreducible monomorphism with ${\rm tr}_{{\bf P_{\infty}}}(N)=0$ such that $d_r(f)={\infty}$, then ${\rm Coker}f$ is ${\bf P_{\infty}-}{\rm simple}$. By a theorem in \cite{coelho2}, we know that every monomorphism in a $\pi$-component satisfies this condition. \begin{thm} \label{simpleregular} Let $f:M \rightarrow N$ be an irreducible monomorphism with ${\rm tr}_{{\bf P_{\infty}}}(N)=0$. If $d_r(f)={\infty}$ then ${\rm Coker}f$ is ${\bf P_{\infty}-}{\rm simple}$. \end{thm} \begin{pf} We know by Theorem \ref{propmono} that ${\rm Coker}f \in {\bf P_{\infty}}$. Suppose by contradiction that ${\rm Coker}f$ is not ${\bf P_{\infty}-}{\rm simple}$. Then there exists a nontrivial submodule $X$ of ${\rm Coker}f$ such that $X \in {\bf P_{\infty}}$. Hence ${\rm Hom}(X,N)=0$, since ${\rm tr}_{{\bf P_{\infty}}}(N)=0$. Consider the exact sequence $0 \rightarrow M \stackrel{f}{\rightarrow} N \stackrel{g}{\rightarrow} {\rm Coker}f \rightarrow 0$ and the inclusion morphism $v:X \hookrightarrow {\rm Coker}f$. Then, by Proposition 5.6 in \cite{auslanderbook}, either there exists $q:X \rightarrow N$ such that $v=gq$ or there exists $p:N \rightarrow X$ such that $g=vp$. The former leads to a contradiction because ${\rm Hom}(X,N)=0$ implies $q=0$, which in turn implies $v=0$ and $X=0$. The latter also leads to a contradiction: since $g$ is an epimorphism, $v$ is an epimorphism, and hence $X={\rm Coker}f$, which is not possible. Therefore ${\rm Coker}f$ is ${\bf P_{\infty}-}{\rm simple}$. \end{pf} \begin{cor} \label{corregular} If $f:M \rightarrow N$ is an irreducible monomorphism in a $\pi$-component such that ${\rm Coker}f$ lies in a regular component, then ${\rm Coker}f$ is simple regular. \end{cor} We now give an example which illustrates the above corollary.
\begin{exmp}{\rm Let $A$ be the path algebra of the quiver: $$\xymatrix{ & 2 \ar[rd] & \\ 3 \ar[rr] \ar[ru] & & 1 \\ }$$ For each vertex $x$ of the quiver, we let $P_x$ denote the corresponding indecomposable projective. Then the irreducible monomorphism $f:P_1 \rightarrow P_3$ has infinite right degree, since ${\rm Coker}f$ is not in the postprojective component. The module ${\rm Coker}f$ is the following representation: $$\xymatrix{ & k \ar[rd]^{\rm Id} & \\ k \ar[rr]_0 \ar[ru]^{\rm Id} & & k \\ }$$ On the other hand, let $\mu \in {\rm rad}^2(P_1, P_3)$ be the composition of the irreducible morphisms $P_1 \rightarrow P_2 \rightarrow P_3$. Then $f'=f+ \mu$ is also irreducible and ${\rm Coker}f'$ is the following representation: $$\xymatrix{ & k \ar[rd]^{\rm Id} & \\ k \ar[rr]_{\rm Id} \ar[ru]^{\rm Id} & & k \\ }$$ Since $f$ has infinite right degree, $f'$ also has infinite right degree. One can easily verify that ${\rm Coker}f$ and ${\rm Coker}f'$ lie in two different homogeneous tubes. By the above corollary, both modules should be simple regular, and this is indeed the case, since both ${\rm Coker}f$ and ${\rm Coker}f'$ lie on the mouths of their respective tubes.} \end{exmp} \end{document}
\begin{document} \title[Dani's Work on Homogeneous Dynamics]{Dani's Work on \\ Dynamical Systems on Homogeneous Spaces} \author{Dave Witte Morris} \address{Department of Mathematics and Computer Science \\ University of Lethbridge \\Lethbridge, Alberta, T1K~3M4 \\ Canada} \email{[email protected]} \urladdr{http://people.uleth.ca/~dave.morris/} \dedicatory{to Professor S.\,G.\,Dani on his 65th birthday} \subjclass[2010]{Primary 37A17; Secondary 11H55, 37A45 } \date{\today} \begin{abstract} We describe some of S.\,G.\,Dani's many contributions to the theory and applications of dynamical systems on homogeneous spaces, with emphasis on unipotent flows. \end{abstract} \maketitle S.\,G.\,Dani has written over 100 papers. They explore a variety of topics, including: \begin{multicols}{2} \raggedright \@nobreaktrue\nopagebreak \begin{itemize} \item flows on homogeneous spaces \@nobreaktrue\nopagebreak \begin{itemize} \item unipotent dynamics \item applications to Number Theory \item divergent orbits \item bounded orbits and Schmidt's Game \item topological orbit equivalence \item Anosov diffeomorphisms \item entropy and other invariants \end{itemize} \columnbreak \item actions of locally compact groups \@nobreaktrue\nopagebreak \begin{itemize} \item actions of lattices \item action of $\Aut G$ on the Lie group~$G$ \item stabilizers of points \end{itemize} \item convolution semigroups of probability measures on a Lie group \item finitely additive probability measures \item Borel Density Theorem \item history of Indian mathematics \end{itemize} \end{multicols} \noindent Most of Dani's papers (about 60) are directly related to flows on homogeneous spaces. This survey will briefly discuss several of his important contributions in this field.
\begin{notation*} Let: \@nobreaktrue\nopagebreak \begin{itemize} \item $G = \SL(n,\mathbb{R})$ (or, for the experts, $G$ may be any connected Lie group), \item $\{g^t\}$ be a one-parameter subgroup of~$G$, \item $\Gamma = \SL(n,\mathbb{Z})$ (or, for the experts, $\Gamma$ may be any \emph{lattice} in~$G$), and \item $\varphi_t(x \Gamma) = g^t x \Gamma$ for $t \in \mathbb{R}$ and $x \Gamma \in G/\Gamma$. \end{itemize} Then $\varphi_{s + t} = \varphi_{s} \circ \varphi_{t}$, so $\varphi_t$ is a flow on the homogeneous space $G/\Gamma$. \end{notation*} Much is known about flows generated by a general one-parameter subgroup $\{g^t\}$. (For example, the spectrum of $\{g^t\}$, as an operator on $L^2(G/\Gamma)$, is described in \cite[Thm.~2.1]{Dani-spectrum}. A few of Dani's other results on the general case are mentioned in \cref{OpenProbSect} below.) However, the most impressive results apply only to the flows generated by one-parameter subgroups that are ``unipotent:'' \begin{defn*} A one-parameter subgroup~$\{u^t\}$ of $\SL(n,\mathbb{R})$ is said to be \emph{unipotent} if $1$~is the only eigenvalue of~$u^t$ (for every $t \in \mathbb{R}$). This is equivalent to requiring that $$ \text{$\{u^t\}$ is conjugate to a subgroup of $\mathbb{U}_n =$ \smaller$\begin{bmatrix} 1 \\[-5pt] & 1 \\[-5pt] & & \ddots & \vbox to 0pt{\vss \hbox to 0pt{\hss\larger[5]$*$}\vskip15pt} \\[-5pt] & & & 1 \\[-5pt] & \vbox to 0pt{\vss \hbox to 0pt{\hskip0pt\larger[3]$0$\hss}\vskip0pt} & & & 1 \end{bmatrix}$} .$$ \end{defn*} Work of Dani, Margulis, Ratner, and others in the 1970's and 1980's showed that if $\{u^t\}$ is unipotent, then the corresponding flow on $G/\Gamma$ is surprisingly well behaved. In particular, the closure of every orbit is a very nice $C^\infty$ submanifold of $G/\Gamma$, and every invariant probability measure is quite obvious.
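For instance, in $\SL(2,\mathbb{R})$, compare the one-parameter subgroups
$$ u^t = \begin{bmatrix} 1 & t \\ 0 & 1 \end{bmatrix} \qquad \text{and} \qquad g^t = \begin{bmatrix} e^t & 0 \\ 0 & e^{-t} \end{bmatrix} . $$
The characteristic polynomial of~$u^t$ is $(\lambda - 1)^2$, so $1$~is the only eigenvalue of~$u^t$ for every~$t$, and $\{u^t\}$ is unipotent (indeed, $u^t$ already lies in~$\mathbb{U}_2$, so no conjugation is needed). By contrast, $g^t$ has eigenvalues $e^{\pm t} \neq 1$ for $t \neq 0$, so the diagonal subgroup $\{g^t\}$ is not unipotent.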
\Cref{UnipSect} describes these (and other) fundamental results on unipotent flows, and \Cref{ApplSect} explains some of their important applications in Number Theory. Dani's contributions are of lasting importance in both of these areas. \section{Unipotent flows} \label{UnipSect} \subsection{Orbit closure \pref{RatnerThm-orbit}, equidistribution \pref{RatnerThm-equi}, and measure classification \pref{RatnerThm-measure}} If $\{g^t\}$ is a (nontrivial) group of diagonal matrices, then there exists a $\{g^t\}$-orbit in $G/\Gamma$ that is very badly behaved --- its closure is a fractal (cf.\ \cite[Lem.~2]{Starkov-StructOrbs}). Part~\pref{RatnerThm-orbit} of the following fundamental result tells us that unipotent flows have no such pathology. The other two parts describe additional ways in which the dynamical system is very nicely behaved. \begin{thm}[Ratner \cite{Ratner-meas,Ratner-dist}] \label{RatnerThm} If\/ $\{u^t\}$ is unipotent, then the following hold: \@nobreaktrue\nopagebreak \begin{enumerate} \renewcommand{\theenumi}{\textbf{O}} \item \label{RatnerThm-orbit} The closure of every $u^t$-orbit on~$G/\Gamma$ is a\/ \textup{(}finite-volume, homogeneous\textup{)} $C^\infty$ submanifold. \renewcommand{\theenumi}{\textbf E} \item \label{RatnerThm-equi} Every $u^t$-orbit on $G/\Gamma$ is uniformly distributed in its closure. \renewcommand{\theenumi}{\textbf M} \item \label{RatnerThm-measure} Every ergodic $u^t$-invariant probability measure on~$G/\Gamma$ is the natural Lebesgue measure on some \textup{(}finite-volume, homogeneous\textup{)} $C^\infty$ submanifold. \end{enumerate} \end{thm} \begin{rem} Here is a more precise statement of each part of \cref{RatnerThm}. \pref{RatnerThm-orbit} For each $a \in G/\Gamma$, there is a closed subgroup~$L = L_a$ of~$G$, such that $\closure{\{u^t\} a} = L a$. Furthermore, the closed submanifold $La$ of $G/\Gamma$ has finite volume, with respect to an $L$-invariant volume form on~$La$.
\pref{RatnerThm-equi} Given $a \in G/\Gamma$, let $dg$~be the $L_a$-invariant volume form on~$\closure{\{u^t\} a}$ that is provided by~\pref{RatnerThm-orbit}. (After multiplying $dg$ by a scalar, we may assume the volume of $\closure{\{u^t\} a}$ is~$1$.) Then, for any continuous function~$f$ on $G/\Gamma$, with compact support, the average value of~$f$ along the orbit of~$a$ is equal to the average value of~$f$ on the closure of the orbit. That is, we have $$ \lim_{T \to \infty} \frac{1}{T} \int_0^T f (u^t a) \, dt = \int_{\closure{\{u^t\} a}} f \, dg .$$ \pref{RatnerThm-measure} Suppose $\mu$ is a Borel measure on $G/\Gamma$, such that \@nobreaktrue\nopagebreak \begin{itemize} \item $\mu( G/ \Gamma) = 1$, \item $\mu(u^t A) = \mu(A)$ for every measurable subset~$A$ of $G/\Gamma$, and \item for every $u^t$-invariant, measurable subset~$A$ of $G/\Gamma$, either $\mu(A) = 0$ or $\mu(A) = 1$. \end{itemize} (In other words, $\mu$ is an ergodic $u^t$-invariant probability measure on $G/\Gamma$.) Then there exists a closed subgroup $L = L_\mu$ of~$G$, and some $a = a_\mu \in G$, such that the orbit $L a$ is closed, and $\mu$~is an $L$-invariant Lebesgue measure on the submanifold~$L a$. (The $L$-invariant measure on $La$ is unique up to multiplication by a positive scalar, so the probability measure~$\mu$ is uniquely determined by~$L$.) \end{rem} \begin{Dani} Dani was a central figure in the activity that paved the way for Ratner's proof of \cref{RatnerThm}. In particular: \@nobreaktrue\nopagebreak \begin{enumerate} \item Dani's early work on actions of unipotent subgroups was one of the ingredients that inspired Raghunathan to conjecture the truth of~\pref{RatnerThm-orbit}. Raghunathan did not publish the conjecture himself --- its first appearance in the literature was in a paper of Dani \cite[p.~358]{Dani-MinSetHoro}. \item \pref{RatnerThm-measure} was conjectured by Dani \cite[p.~358]{Dani-MinSetHoro}.
This was an important insight, because the methods developed in the 1980's were able to prove this part of \cref{RatnerThm} first, and the other two parts are corollaries of it\label{E-->O+M} (cf.\ \cref{LinearizationSect} below). \item For $G = \SL(2,\mathbb{R})$, \pref{RatnerThm-measure} and \pref{RatnerThm-equi} were proved by Furstenberg \cite{Furstenberg-UniqErg} when $G/\Gamma$ is compact, but it was Dani who tackled the noncompact case. First, he \cite{Dani-InvtMeasNoncpct} proved \pref{RatnerThm-measure}. Then he \cite{Dani-UnifDist} proved the special case of \pref{RatnerThm-equi} in which $\Gamma = \SL(2,\mathbb{Z})$. Subsequent joint work with Smillie \cite{DaniSmillie-UnifDist} established \pref{RatnerThm-equi} for the other (noncocompact) lattices in $\SL(2,\mathbb{R})$. \item Dani \cite{Dani-MinSetHoro,Dani-OrbHoroFlows} proved analogues of \pref{RatnerThm-orbit} and \pref{RatnerThm-measure} in which $u^t$ is replaced by the larger unipotent subgroup $\mathbb{U}_n$\label{LargerAnalogue}. (See \cref{HoroSect} below for more discussion of this.) \item Dani and Margulis \cite{DaniMargulis-OrbClos} proved \pref{RatnerThm-orbit} under the assumption that $G = \SL(3,\mathbb{R})$ and $u^t$ is generic. \end{enumerate} \end{Dani} Furthermore, Dani \cite{Dani-InvtMeasMargulis,Dani-OrbUnipFlow} established a very important special case of \pref{RatnerThm-equi}, long before the full theorem was proved. Note that if $a \in G/\Gamma$ and $\epsilon > 0$, then, since \pref{RatnerThm-orbit} tells us that $\closure{\{u^t\}a}$ has finite volume, there exists $C > 0$, such that the complement of the ball of radius~$C$ has volume less than~$\epsilon$.
Therefore, letting \@nobreaktrue\nopagebreak \begin{itemize} \item $\lambda$ be the usual Lebesgue measure on~$\mathbb{R}$, and \item $d(x,y)$ be the distance from $x$ to~$y$ in $G/\Gamma$, \end{itemize} \pref{RatnerThm-equi} implies that \begin{align} \label{DaniMargIneq} \limsup_{T \to \infty} \frac{\lambda \{\, t\in [0,T] \mid d(u^t a , a) > C \,\}}{T} < \epsilon . \end{align} Dani proved this fundamental inequality: \begin{thm}[Dani \cite{Dani-InvtMeasMargulis,Dani-OrbUnipFlow}] \label{DaniMargNondiv} For every $a \in G/\Gamma$ and $\epsilon > 0$, there exists $C > 0$, such that \pref{DaniMargIneq} holds. \end{thm} This is a strengthening of the following fundamental result: \begin{cor}[Margulis \cite{Margulis-ActUnipLattSpaceSbornik,Margulis-ActUnipLattSpace}] \label{UnipNotDiverge} No $u^t$-orbit diverges to $\infty$. More precisely, for every $a \in G/\Gamma$, we have $d(u^t a, a) \not\to \infty$ as $t \to +\infty$. \end{cor} \begin{proof} If $d(u^t a, a) \to \infty$, then there exists $T_0 > 0$, such that $d(u^t a , a) > C$ for all $t > T_0$. Therefore \begin{align*} \limsup_{T \to \infty} \frac{\lambda \{\, t\in [0,T] \mid d(u^t a , a) > C \,\}}{T} \ge \lim_{T \to \infty} \frac{T - T_0}{T} = 1 > \epsilon . & \qedhere \end{align*} \end{proof} The proof of \cref{DaniMargNondiv} employs an ingenious induction argument of Margulis that was appropriately modified by Dani. The best exposition of this idea is in an appendix of a paper by Dani and Margulis \cite{DaniMargulis-ElemApproach}. For applications, it is important to have strengthened versions of \cref{DaniMargNondiv} and \pref{RatnerThm-equi} giving estimates that are \emph{uniform} as the starting point of the orbit varies over a compact set. The first such theorems were proved by Dani and Margulis \cite{DaniMargulis-Asymp,DaniMargulis-LimitDist}. 
\begin{rem} Margulis \cite[Rem.~3.12(II)]{Margulis-LieGrpsErgThy} observed that Dani's \cref{DaniMargNondiv} provides a short (and rather elementary) proof of the fundamental fact that arithmetic groups are lattices. That is, if $\mathbf{G}$ is an algebraic subgroup of $\mathbf{SL}_n$ that is defined over~$\mathbb{Q}$, and $\mathbf{G}$ has no characters that are defined over~$\mathbb{Q}$, then the homogeneous space $\mathbf{G}(\mathbb{R})/\mathbf{G}(\mathbb{Z})$ has finite volume. \end{rem} \subsection{Linearization} \label{LinearizationSect} Dani and Margulis \cite[\S3]{DaniMargulis-LimitDist} developed an important method that is called ``\emph{Linearization},'' because it replaces the action of $u^t$ on $G/\Gamma$ with the much simpler action of $u^t$ by linear transformations on a vector space. It has become a crucial tool in applications of unipotent flows in Number Theory and related areas. To see the main idea (which has its roots in earlier work of Dani-Smillie \cite{DaniSmillie-UnifDist} and Shah \cite{Shah-UnifDistOrbs}), consider the following proof that \pref{RatnerThm-equi} is a consequence of \pref{RatnerThm-orbit} and~\pref{RatnerThm-measure}: \begin{proof}[Idea of proof of \pref{RatnerThm-equi} from \pref{RatnerThm-orbit} and \pref{RatnerThm-measure}] The start of the proof is straightforward. Fix $a \in G/\Gamma$. From \pref{RatnerThm-orbit}, we know there is a closed subgroup~$L$ of~$G$, such that $\closure{\{u^t\} a} = L a$. Furthermore, there is an $L$-invariant probability measure~$\mu$ on $La$. Assume, for simplicity, that $G/\Gamma$ is compact, and let $\meas(G/\Gamma)$ be the set of probability measures on $G/\Gamma$. (The Riesz Representation Theorem tells us that $\meas(G/\Gamma)$ can be identified with the set of positive linear functionals on $C(G/\Gamma)$ that have norm~$1$, so it has a natural weak$^*$ topology.) 
For each $T > 0$, define $M_T \in \meas(G/\Gamma)$ by $$ M_T(f) = \frac{1}{T} \int_0^T f (u^t a) \, dt .$$ We wish to show $M_T \to \mu$ as $T \to \infty$. Since $\meas(G/\Gamma)$ is a closed, convex subset of the unit ball in $C(G/\Gamma)^*$, the Banach-Alaoglu Theorem tells us that $\meas(G/\Gamma)$ is compact (in the weak$^*$ topology). Therefore, in order to show $M_T \to \mu$, it suffices to show that $\mu$~is the only accumulation point of $\{M_T\}$. Thus, given an accumulation point $\mu_\infty$ of $\{M_T\}$, we wish to show $\mu_\infty = \mu$. It is not difficult to see that $\mu_\infty$ is $u^t$-invariant (since $M_T$ is nearly invariant when $T$ is large). Assume, for simplicity, that $\mu_\infty$ is ergodic. Then \pref{RatnerThm-measure} tells us there is a closed subgroup $L_\infty$ of~$G$, and some $a_\infty \in G$, such that the orbit $L_\infty a_\infty$ is closed, $\mu_\infty$~is $L_\infty$-invariant, and $\mu_\infty$~is supported on~$L_\infty a_\infty$. Since $M_T \to \mu_\infty$, we know $L_\infty a_\infty \subseteq \closure{\{u^t\} a} = L a$. To complete the proof, it suffices to show the opposite inclusion $L a \subseteq L_\infty a_\infty$. (This suffices because the two inclusions imply $L = L_\infty$, which means that $\mu_\infty$ must be the (unique) $L$-invariant probability measure on $La$, which is~$\mu$.) We will now use \emph{Linearization} to show $u^t a \in L_\infty a_\infty$ for all~$t$. (Since $\closure{\{u^t\} a} = L a$, and $L_\infty a_\infty$ is closed, this implies the desired inclusion $L a \subseteq L_\infty a_\infty$.) Roughly speaking, one can show that the subgroup~$L_\infty$ is Zariski closed (since the ergodicity of~$\mu_\infty$ implies that the unipotent elements generate a cocompact subgroup). 
Therefore, Chevalley's Theorem (from the theory of Algebraic Groups) tells us there exist: \@nobreaktrue\nopagebreak \begin{itemize} \item a finite-dimensional real vector space~$V$, \item a homomorphism $\rho \colon G \to \SL(V)$, and \item $\vector v \in V$, \end{itemize} such that $L_\infty = \Stab_G(\vector v)$. The Euclidean distance formula tells us that $d(\vector x, \vector v)^2$ is a polynomial of degree~$2$ (as a function of~$\vector x$). Also, since $\rho(u^t)$ is unipotent, elementary Lie theory shows that each matrix entry of $\rho(u^t)$ is a polynomial function of~$t$. Therefore, if we choose $g \in G$, such that $a = g \Gamma$, then $$d_g(t) := d \bigl( \rho(u^t g) \vector v, \vector v \bigr) $$ is a polynomial function of~$t$. Furthermore, the degree of this polynomial is bounded, independent of~$g$. Since $M_T \to \mu_\infty$, and $\mu_\infty$ is supported on~$L_\infty a_\infty$, we know that if $T$~is large, then for most $t \in [0,T]$, the point $u^t a$ is close to $L_\infty a_\infty$. Assuming, for simplicity, that $a_\infty = e$, so $\rho(L_\infty a_\infty) \, \vector v = \rho(L_\infty) \, \vector v = \vector v$, this implies there exists $\gamma \in \Gamma$, such that $d_{g\gamma}(t)$ is very small. However, a polynomial of bounded degree that is very small on a large fraction of an interval must be small on the entire interval. We conclude that $d_{g\gamma}(t)$ is small for all $t \in \mathbb{R}^+$ (and that $\gamma$ is independent of~$t$). Since constants are the only bounded polynomials, this implies that the distance from $u^t a$ to $L_\infty a_\infty$ is a constant (independent of~$t$). Then, from the first sentence of this paragraph, we see that this distance must be~$0$, which means $u^t a \in L_\infty a_\infty$ for all~$t$, as desired. \end{proof} \begin{rem} The above argument can be modified to derive both \pref{RatnerThm-orbit} and \pref{RatnerThm-equi} from~\pref{RatnerThm-measure}, without needing to assume~\pref{RatnerThm-orbit}. 
Therefore, as was mentioned on page~\pageref{E-->O+M}, \pref{RatnerThm-measure} implies both \pref{RatnerThm-orbit} and \pref{RatnerThm-equi}. \end{rem} \begin{rem} The above proof assumes that $G/\Gamma$ is compact. To eliminate this hypothesis, one passes to the one-point compactification, replacing $\meas(G/\Gamma)$ with $\meas \bigl( G/\Gamma \cup \{\infty\} \bigr)$. The bulk of the argument can remain unchanged, because Dani's \cref{DaniMargNondiv} tells us that if $\mu_\infty$ is any accumulation point of $\{M_T\}$, then $\mu_\infty \bigl( \{\infty\} \bigr) = 0$, so $\mu_\infty \in \meas(G/\Gamma)$. \end{rem} \subsection{Actions of horospherical subgroups and actions of lattices} \label{HoroSect} As has already been mentioned on page~\pageref{LargerAnalogue}, Dani proved the analogues of \pref{RatnerThm-measure} and \pref{RatnerThm-orbit} in which the one-dimensional unipotent subgroup $\{u^t\}$ is replaced by a unipotent subgroup that is maximal: \begin{thm}[Dani \cite{Dani-MinSetHoro,Dani-OrbHoroFlows}] \label{DaniMax} Let \@nobreaktrue\nopagebreak \begin{itemize} \item $G = \SL(n,\mathbb{R})$ \textup(or, more generally, let $G$ be a connected, reductive, linear Lie group\textup), \item $U = \mathbb{U}_n$ \textup(or, more generally, let $U$ be any maximal unipotent subgroup of~$G$\textup), and \item $\Gamma$ be a discrete subgroup of~$G$, such that $G/\Gamma$ has finite volume \textup(in other words, let $\Gamma$ be a lattice in~$G$\textup). \end{itemize} Then: \@nobreaktrue\nopagebreak \begin{enumerate} \renewcommand{\theenumi}{$\mathbf{O}'$} \item \label{DaniMax-orbit} The closure of every $U$-orbit on~$G/\Gamma$ is a\/ \textup{(}finite-volume, homogeneous\textup{)} $C^\infty$ submanifold. \renewcommand{\theenumi}{$\mathbf{M}'$} \item \label{DaniMax-measure} Every ergodic $U$-invariant probability measure on~$G/\Gamma$ is the natural Lebesgue measure on some \textup{(}finite-volume, homogeneous\textup{)} submanifold.
\end{enumerate} \end{thm} \begin{rem} \label{HoroRem} Although the statement of \cref{DaniMax} requires the unipotent subgroup~$U$ to be maximal, Dani actually proved \pref{DaniMax-orbit} under the weaker assumption that $U$ is ``horospherical:'' $$ \begin{matrix} \text{For each $g \in G$, the corresponding \emph{horospherical subgroup} is} \\ U_g = \{\, u \in G \mid \text{$g^{-k} u g^k \to e$ as $k \to +\infty$} \,\} . \end{matrix}$$ Since $1$~is the only eigenvalue of the identity matrix, and similar matrices have the same eigenvalues, it is easy to see that every element of~$U_g$ is unipotent. Conversely, the maximal unipotent subgroup~$\mathbb{U}_n$ is horospherical. (Namely, we have $\mathbb{U}_n = U_g$ if $g= \mathrm{diag}(\lambda_1,\lambda_2,\ldots,\lambda_n)$ is any diagonal matrix with $\lambda_1 > \lambda_2 > \cdots > \lambda_n > 0$. Indeed, for $u \in \mathbb{U}_n$, the $(i,j)$ entry of $g^{-k} u g^k$ is $(\lambda_j/\lambda_i)^k u_{ij}$, which tends to~$0$ as $k \to +\infty$ whenever $i < j$.) \end{rem} Dani's first published paper was joint work with Mrs.~Dani \cite{DaniDani-DenseOrbits}, while they were students at the Tata Institute of Fundamental Research. It proved a $p$-adic version of the following interesting consequence of \pref{DaniMax-orbit}: \begin{cor}[Greenberg \cite{Greenberg-DenseOrbits}] \label{GreenbergRnDense} Let $G = \SL(n,\mathbb{R})$, and let\/ $\Gamma$ be a discrete subgroup of~$G$, such that $G/\Gamma$ is compact. Then the\/ $\Gamma$-orbit of every nonzero vector is dense in\/~$\mathbb{R}^n$. \end{cor} \begin{proof} \Cref{DaniMax}\pref{DaniMax-orbit} tells us that, for each $g \in G$, there is a (unique) connected subgroup $L_g$ of~$G$, such that $\closure{\mathbb{U}_n g \Gamma} = L_g \Gamma$. Since $\mathbb{U}_n$ is horospherical (see \cref{HoroRem}) and $G/\Gamma$ is compact, one can show that $L_g = G$. This means $\mathbb{U}_n g \Gamma$ is dense in~$G$ (for all $g \in G$). So $\Gamma g \mathbb{U}_n$ is dense in~$G$.
Since $\mathbb{U}_n$ fixes the vector $\vector{e_1} = (1,0,\ldots,0)$, and $G \vector{e_1} = \mathbb{R}^n \smallsetminus \{0\}$, this implies that $\Gamma g \vector{e_1}$ is dense in~$\mathbb{R}^n$. This is the desired conclusion, since $g \vector{e_1}$ is an arbitrary nonzero vector in~$\mathbb{R}^n$. \end{proof} \Cref{GreenbergRnDense} provides information about an action of the ``lattice''~$\Gamma$. This particular result is a consequence of the theory of unipotent dynamics, but other theorems of Dani about lattice actions do not come from this theory. As an example of this, we mention Dani's topological analogue of a famous measure-theoretic result of Margulis \cite[Thm.~1.14.2]{Margulis-QuotGrps}: \begin{thm}[Dani \cite{Dani-EquiImage}] \label{QuotofG/P} Let \@nobreaktrue\nopagebreak \begin{itemize} \item $G = \SL(n,\mathbb{R})$ with $n \ge 3$, \item $\Gamma = \SL(n,\mathbb{Z})$, \item $B$ be the ``Borel'' subgroup of all upper-triangular matrices, \item $X$ be a compact, Hausdorff space on which~$\Gamma$ acts by homeomorphisms, and \item $\phi \colon G/B \to X$ be a continuous, surjective, $\Gamma$-equivariant map. \end{itemize} Then $B$ is contained in a closed subgroup~$P$ of~$G$, such that $X$ is $\Gamma$-equivariantly homeomorphic to $G/P$. \end{thm} \begin{rem} To make the statement more elementary, we assume $G = \SL(n,\mathbb{R})$ and $\Gamma = \SL(n,\mathbb{Z})$ in \cref{QuotofG/P}, but Dani actually proved the natural generalization in which $G$ is allowed to be any semisimple Lie group of real rank greater than one, and $\Gamma$ is an irreducible lattice in~$G$. \end{rem} \section{Applications in Number Theory} \label{ApplSect} \subsection{Values of quadratic forms at integer vectors} Unipotent dynamics (which was discussed in \cref{UnipSect}) became well known through its use in the solution of the ``Oppenheim Conjecture.'' Before explaining this, let us look at a similar problem whose solution is much more elementary.
\begin{eg} Suppose $$L(x_1,x_2,\ldots,x_n) = \sum_i a_i x_i = a_1 x_1 + \cdots + a_n x_n$$ is a homogeneous polynomial of degree~$1$, with real coefficients. (Assume, to avoid degeneracies, that $L$~is not identically zero.) For any $b \in \mathbb{R}$, it is easy to find a solution of the equation $L(\vector x) = b$. However, a number theorist may wish to require the coordinates of~$\vector x$ to be integers (i.e., $\vector x \in \mathbb{Z}^n$). Since $\mathbb{Z}^n$ is countable, most choices of~$b$ will yield an equation that does not have an integral solution, so it is natural to ask for an approximate solution: for every $\epsilon > 0$, $$ \text{does there exist $\vector x \in \mathbb{Z}^n$, such that $|L(\vector x) - b| < \epsilon$?} $$ Obviously, to say an approximate solution exists for every $b \in \mathbb{R}$ (and every $\epsilon > 0$) is the same as saying that $L(\mathbb{Z}^n)$ is dense in~$\mathbb{R}$. It is obvious that if the coefficients of~$L$ are integers, then $L$~takes integer values at any point whose coordinates are integers. This means $L(\mathbb{Z}^n) \subseteq \mathbb{Z}$, so $L(\mathbb{Z}^n)$ is not dense in~$\mathbb{R}$. More generally, if $L$~is a scalar multiple of a polynomial with integer coefficients, then $L(\mathbb{Z}^n)$ is not dense in~$\mathbb{R}$. It is less obvious (but not difficult to prove) that the converse is true: $$ \begin{matrix} \text{\it If $L(\vector x)$ is a homogeneous polynomial of degree~$1$, with real coefficients,} \\ \text{\it and $L$ is not a scalar multiple of a polynomial with integer coefficients,} \\ \text{\it then $L(\mathbb{Z}^n)$ is dense in~$\mathbb{R}$} . \end{matrix} $$ \end{eg} Everything in the above discussion is trivial, but the problem becomes extremely difficult if polynomials of degree~$1$ are replaced by polynomials of degree~$2$. In this setting, a conjecture made by A.\,Oppenheim \cite{Oppenheim-Conj} in 1929 was not proved until almost 60 years later.
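The density claim for irrational linear forms is easy to check numerically. The following brute-force sketch (our own illustration, not part of the argument above) approximates a target value $b$ by $L(x_1,x_2) = x_1 + \sqrt{2}\,x_2$ at integer points:

```python
import math

# Numerical illustration (not from the text): L(x1, x2) = x1 + sqrt(2)*x2 is
# not a scalar multiple of an integer form, so its values at integer points
# come arbitrarily close to any target b, as the discussion above asserts.
def best_error(b, n_max):
    """Smallest |L(x) - b| over x2 in [-n_max, n_max], with x1 chosen optimally."""
    best = math.inf
    for x2 in range(-n_max, n_max + 1):
        x1 = round(b - math.sqrt(2) * x2)  # nearest integer makes |L - b| <= 1/2
        best = min(best, abs(x1 + math.sqrt(2) * x2 - b))
    return best

# The achievable error shrinks as the search range grows.
for n in (10, 1_000, 100_000):
    print(n, best_error(0.3, n))
```

Because $\sqrt{2}$ is irrational, the fractional parts of $\sqrt{2}\,x_2$ equidistribute, and the printed errors shrink toward $0$ as the range grows.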
The statement of the result needs to account for the following counterexamples: \begin{egs} Let $Q(x_1,x_2,\ldots,x_n) = \sum_{i,j} a_{i,j} x_i x_j$ be a homogeneous polynomial of degree~$2$, with real coefficients. (In other words, $Q$~is a ``real quadratic form.'') \@nobreaktrue\nopagebreak \begin{enumerate} \item We say $Q$ is \emph{positive-definite} if $Q(\mathbb{R}^n) \subseteq \mathbb{R}^{\ge0}$. (Similarly, $Q$ is \emph{negative-definite} if $Q(\mathbb{R}^n) \subseteq \mathbb{R}^{\le0}$.) Obviously, this implies that $Q(\mathbb{R}^n)$ is not dense in~$\mathbb{R}$, so it is obvious that the smaller set $Q(\mathbb{Z}^n)$ is not dense in~$\mathbb{R}$. For example, if we let $Q(x_1,x_2,\ldots,x_n) = a_1 x_1^2 + \cdots + a_n x_n^2$, with each $a_i$~positive, then $Q$~is positive-definite, so $Q(\mathbb{Z}^n)$ is not dense in~$\mathbb{R}$. \item Let $Q(x_1,x_2) = x_1^2 - \alpha^2 x_2^2$. It is not difficult to see that if $\alpha$ is badly approximable, then $0$ is \emph{not} an accumulation point of $Q(\mathbb{Z}^2)$. Obviously, then $Q(\mathbb{Z}^2)$ is not dense in~$\mathbb{R}$. (Recall that, to say $\alpha \in \mathbb{R}$ is \emph{badly approximable} means there exists $\epsilon > 0$, such that $|\alpha - (p/q)| > \epsilon/q^2$ for all $p,q \in \mathbb{Z}$. It is well known that quadratic irrationals, such as $1 + \sqrt{2}$, are always badly approximable.) This means that the ``obvious'' converse can fail when $n = 2$. \item Given a $2$-variable counterexample $x_1^2 - \alpha^2 x_2^2$, it is easy to construct counterexamples in any number of variables, such as $$Q(x_1,x_2,\ldots,x_n) = (x_1+\cdots+x_{n-1})^2 - \alpha^2 x_n^2 .$$ Note that this quadratic form has $n$~variables, but a linear change of coordinates can transform it into a form with fewer than~$n$ variables. This means it is \emph{degenerate}. \end{enumerate} \end{egs} The following theorem shows that all quadratic counterexamples are of the above types.
\begin{thm}[Margulis \cite{Margulis-Formes}] \label{Margulis-OppenheimThm} Suppose: \@nobreaktrue\nopagebreak \begin{itemize} \item $Q(x_1,x_2,\ldots,x_n)$ is a homogeneous polynomial of degree~$2$, with real coefficients, \item $Q$ is not a scalar multiple of a polynomial with integer coefficients, \item $Q$ is neither positive-definite nor negative-definite, \item $n \ge 3$, and \item $Q$ is not degenerate. \end{itemize} Then $Q(\mathbb{Z}^n)$ is dense in~$\mathbb{R}$. \end{thm} \begin{proof}[Idea of proof] We will explain how the result can be obtained as a corollary of Ratner's Theorem \pref{RatnerThm}. (The result was originally proved directly, because Ratner's theorem was not yet available in 1987.) Assume, for simplicity, that $n = 3$. Given $b \in \mathbb{R}$, we wish to show there exists $\vector m \in \mathbb{Z}^3$, such that $Q(\vector m) \approx b$. Since $Q$ is neither positive-definite nor negative-definite, we have $Q(\mathbb{R}^3) = \mathbb{R}$, so there exists $\vector v \in \mathbb{R}^3$, such that $Q(\vector v) = b$. Let $G = \SL(3,\mathbb{R})$, $\Gamma = \SL(3,\mathbb{Z})$, and $$H = \SO_3(Q) = \{\, h \in \SL(3,\mathbb{R}) \mid \text{$Q(h \vector x) = Q(\vector x)$ for all $\vector x \in \mathbb{R}^3$} \,\} .$$ Note that $H$ is a subgroup of~$G$ that is generated (up to finite index) by unipotent one-parameter subgroups. Therefore, Ratner's Theorem \pref{RatnerThm} implies that the closure $\closure{H \Gamma}$ of $H\Gamma$ is a very nice submanifold of~$G$. More precisely, there is a closed subgroup~$L$ of~$G$, such that $\closure{H \Gamma} = L \Gamma$, and $H \subseteq L$. However, it can be shown that $H$ is a maximal subgroup of~$G$, and, since $Q(\vector x)$ does not have integer coefficients (up to a scalar multiple), that $\closure{H \Gamma} \neq H \Gamma$ (so $L \neq H$). This implies $L = G$. In other words, $H \Gamma$ is dense in~$G$. For convenience, let $\vector{e_1} = (1,0,0) \in \mathbb{R}^3$.
Since $G = \SL(3,\mathbb{R})$ is transitive on the nonzero vectors in $\mathbb{R}^3$, we know there exists $g \in G$, such that $g \vector{e_1} = \vector v$. Then the conclusion of the preceding paragraph implies there exist $h \in H$ and $\gamma \in \Gamma = \SL(3,\mathbb{Z})$, such that \begin{align} \tag{$*$} \label{hgammae1} h \gamma \vector {e_1} \approx \vector v . \end{align} Let $\vector m = \gamma \vector{e_1} \in \mathbb{Z}^3$. Then \begin{align*} Q(\vector m) &= Q (\gamma \vector{e_1}) && \text{(definition of~$\vector m$)} \\ &= Q(h \gamma \vector{e_1}) && \text{(definition of~$H$)} \\ &\approx Q(\vector v) && \text{(polynomial~$Q$ is continuous, and (\ref{hgammae1}))} \\ &= b && \text{(definition of~$\vector v$)} . \qedhere \end{align*} \end{proof} \begin{Dani} Joint work of Dani and Margulis made the following important improvements to this theorem: \@nobreaktrue\nopagebreak \begin{enumerate} \item[\cite{DaniMargulis-PrimIntPts}] \emph{approximation by primitive vectors:} $Q(\mathcal{P})$ is dense in~$\mathbb{R}$, where $$ \mathcal{P} = \{\, (m_1,m_2,\ldots,m_n) \in \mathbb{Z}^n \mid \gcd(m_1,m_2,\ldots,m_n) = 1 \,\} .$$ \item[\cite{DaniMargulis-OrbClos}] \emph{simultaneous approximation:} Given two quadratic forms $Q_1$ and~$Q_2$ (satisfying appropriate conditions), the set $ \bigset{ \bigl( Q_1(\vector m) , Q_2(\vector m) \bigr) }{ \vector m \in \mathcal{P} } $ is dense in~$\mathbb{R}^2$. \item[\cite{DaniMargulis-LimitDist}] \emph{quantitative estimates:} For any nonempty open interval $I \subset \mathbb{R}$, \cref{Margulis-OppenheimThm} shows there exists at least one $\vector m \in \mathbb{Z}^n$ with $Q(\vector m) \in I$. In fact, there exist \emph{many} such~$\vector m$ (of bounded norm).
Namely, if $\lambda$ is the Lebesgue measure on~$\mathbb{R}^n$, then $$ \liminf_{C \to \infty} \frac{ \# \bigset{ \vector m \in \mathbb{Z}^n }{ \begin{matrix} Q(\vector m) \in I , \\ \| \vector m \| < C \end{matrix} } }{ \lambda \left( \bigset{ \vector v \in \mathbb{R}^n }{ \begin{matrix} Q(\vector v) \in I , \\ \| \vector v \| < C \end{matrix} } \right) } \ge 1 .$$ Furthermore, the estimate is uniform when $Q$ varies over any compact set of quadratic forms that all satisfy the hypotheses of \cref{Margulis-OppenheimThm}. \end{enumerate} The paper \cite{DaniMargulis-LimitDist} has been especially influential, because it introduced \emph{Linearization} (which was discussed in \cref{LinearizationSect}). \end{Dani} Dani has continued his contributions to this field of research in recent years. For example, he \cite{Dani-SimApproxQuadLin} proved a simultaneous approximation theorem for certain pairs consisting of a linear form and a quadratic form. \section{Dynamics of general homogeneous flows} \label{OpenProbSect} This section presents a few fundamental questions that Dani worked on but which remain open. \subsection{Kolmogorov automorphisms} \begin{defn} If $p \colon X \to \mathbb{R}^+$ is a probability distribution on a finite set $X = \{x_1,\ldots,x_n\}$, then there is a natural product measure $p^\mathbb{Z}$ on the infinite product $X^\mathbb{Z} = \{\, f \colon \mathbb{Z} \to X \,\} $, and the associated \emph{Bernoulli shift} is the measurable map $ B_{X,p} \colon X^\mathbb{Z} \to X^\mathbb{Z}$, defined by $ B_{X,p}(f)(k) = f(k-1)$.
\end{defn} It is believed that almost all translations on homogeneous spaces are isomorphic to Bernoulli shifts: \begin{conj} Suppose \@nobreaktrue\nopagebreak \begin{itemize} \item $G = \SL(n,\mathbb{R})$ \textup(or, more generally, let $G$ be a connected, semisimple, linear Lie group with no compact factors\textup), \item $\Gamma = \SL(n,\mathbb{Z})$ \textup(or, more generally, let $\Gamma$ be an irreducible lattice in~$G$\textup), \item $g \in G$, and \item $T_g \colon G/\Gamma \to G/\Gamma$ be defined by $T_g(x \Gamma) = gx\Gamma$. \end{itemize} If there is an eigenvalue~$\lambda$ of~$g$, such that $|\lambda| \neq 1$ \textup(in other words, if the entropy of~$T_g$ is nonzero\textup), then $T_g$ is measurably isomorphic to a Bernoulli shift. \end{conj} \begin{Dani} Two results of Dani \cite{Dani-Kolmogorov} are the inspiration for this conjecture: \@nobreaktrue\nopagebreak \begin{enumerate} \item The conjecture is true if the matrix~$g$ is diagonalizable over~$\mathbb{C}$. \item \label{DaniKThm} In the general case, $T_g$ has no zero-entropy quotients (if $\exists \lambda, |\lambda| \neq 1$). \end{enumerate} A transformation with no zero-entropy quotients is called a ``\emph{Kolmogorov automorphism}.'' Examples of non-Bernoulli Kolmogorov automorphisms are rare, so \pref{DaniKThm} is good evidence that the conjecture is true for all~$g$, not just those that are diagonalizable. \end{Dani} \subsection{Anosov diffeomorphisms} \begin{defn} A diffeomorphism~$f$ of a compact, connected manifold~$M$ is \emph{Anosov} if, at every point $x \in M$, the tangent space $T_x M$ has a splitting $T_x M = \mathcal{E}^+ \oplus \mathcal{E}^-$, such that \@nobreaktrue\nopagebreak \begin{itemize} \item for $v \in \mathcal{E}^+$, $D(f^{k})(v) \to 0$ exponentially fast as $k \to -\infty$, and \item for $v \in \mathcal{E}^-$, $D(f^{k})(v) \to 0$ exponentially fast as $k \to +\infty$. 
\end{itemize} \end{defn} \begin{conj}[from the 1960's] \label{AnosovConj} If there is an Anosov diffeomorphism~$f$ on~$M$, then some finite cover of~$M$ is a nilmanifold. \textup(This means the cover is a homogeneous space $G/\Gamma$, where $G$ is a nilpotent Lie group, and\/ $\Gamma$~is a discrete subgroup of~$G$.\textup) Furthermore, lifting $f$ to the finite cover yields an affine map on the nilmanifold. \end{conj} \begin{Dani} \ \@nobreaktrue\nopagebreak \begin{enumerate} \item[\cite{Dani-NilAnosov}] Dani constructed many nilmanifolds that have Anosov diffeomorphisms. Much more recently, joint work with M.\,Mainkar \cite{DaniMainkar-Anosov} constructed examples of every sufficiently large dimension. \item[\cite{Dani-AffineAutsFP}] Dani proved \cref{AnosovConj} under the assumption that $M$ is a double-coset space $K \backslash G / H$ (and certain additional technical conditions are satisfied). \end{enumerate} \end{Dani} \subsection{Divergent trajectories} We know, from \cref{UnipNotDiverge}, that if $u^t$ is unipotent, then no $u^t$-orbit diverges to~$\infty$. On the other hand, orbits of a diagonal subgroup can diverge to~$\infty$. \begin{eg} Let $$G = \SL(2,\mathbb{R}), \quad a^t = \begin{bmatrix} e^{-t} & 0 \\ 0 & e^t \end{bmatrix}, \quad \Gamma = \SL(2,\mathbb{Z}), \text{\quad and\quad} \vector{e_1} = \begin{bmatrix} 1 \\ 0 \end{bmatrix} .$$ Then $a^t \, \vector{e_1} \to \vector 0$ as $t \to \infty$, so $a^t \, \Gamma \to \infty$ in $G/\Gamma$. \end{eg} The divergent orbits of the diagonal matrices in $\SL(2,\mathbb{R}) / \SL(2,\mathbb{Z})$ are well known, and quite easy to describe. Dani vastly generalized this, by proving that all divergent orbits are obvious in a much wider setting. 
Here is a special case of his result: \begin{thm}[Dani \cite{Dani-Divergent}] \label{DaniDivRank1} Let \@nobreaktrue\nopagebreak \begin{itemize} \item $G = \SO(1,n)$ \textup(or, more generally, let $G$ be a connected, almost simple algebraic\/ $\mathbb{Q}$-subgroup of\/ $\SL_{n+1}(\mathbb{R})$, with $\mathop{\mathrm{rank}_{\rational}} G = 1$\textup), \item $\Gamma = G \cap \SL_{n+1}(\mathbb{Z})$, \item $\{a^t\}$ be a one-parameter subgroup of~$G$ that is diagonalizable over~$\mathbb{R}$, and \item $g \in G$. \end{itemize} Then $a^t g \Gamma$ diverges to~$\infty$ in $G/\Gamma$ if and only if there exist \@nobreaktrue\nopagebreak \begin{itemize} \item a continuous homomorphism $\rho \colon G \to \SL(\ell,\mathbb{R})$, for some~$\ell$, such that $\rho(\Gamma) \subseteq \SL(\ell,\mathbb{Z})$, and \item a nonzero vector $\vector v \in \mathbb{Z}^\ell$, \end{itemize} such that $\rho(a^t g) \, \vector v \to \vector 0$ as $t \to \infty$. \end{thm} Surprisingly, he was also able to exhibit many cases in which there are divergent orbits that do not come from the construction in \cref{DaniDivRank1}: \begin{thm}[Dani \cite{Dani-Divergent}] \label{DaniDivRank>1} Let \@nobreaktrue\nopagebreak \begin{itemize} \item $G = \SL(n,\mathbb{R})$, with $n \ge 3$ \textup(or, more generally, let $G$ be the\/ $\mathbb{R}$-points of a connected, almost simple algebraic\/ $\mathbb{Q}$-group with $\mathop{\mathrm{rank}_{\rational}} G = \mathop{\mathrm{rank}_{\real}} G \ge 2$\textup), \item $\Gamma = \SL(n,\mathbb{Z})$ \textup(or, in general, let $\Gamma$ be the\/ $\mathbb{Z}$-points of~$G$\textup), and \item $\{a^t\}$ be a\/ \textup(nontrivial\/\textup) one-parameter subgroup of~$G$ that is diagonalizable over\/~$\mathbb{R}$. \end{itemize} Then there are $a^t$-orbits in $G/\Gamma$ that diverge to~$\infty$, but do not correspond to a continuous homomorphism $\rho \colon G \to \SL(\ell,\mathbb{R})$, as in \cref{DaniDivRank1}.
\end{thm} It remains an open problem to determine the set of divergent orbits in $G/\Gamma$ in cases where \cref{DaniDivRank1} does not apply. \end{document}
\begin{document} \title{Metaheuristics for Min-Power Bounded-Hops Symmetric Connectivity Problem\thanks{ The research is supported by the Russian Science Foundation (project 18-71-00084).}} \titlerunning{Metaheuristics for MPBHSCP} \author{Roman Plotnikov\inst{1}\orcidID{0000-0003-2038-5609} \and Adil Erzin\inst{1,2}\orcidID{0000-0002-2183-523X}} \authorrunning{R. Plotnikov et al.} \institute{Sobolev Institute of Mathematics, Novosibirsk, Russia \and Novosibirsk State University, Novosibirsk, Russia} \maketitle The article is submitted to the special session: Synergy of machine learning, combinatorial optimization, and computational geometry: problems, algorithms, bounds, and applications. \begin{abstract} We consider the Min-Power Bounded-Hops Symmetric Connectivity problem, which consists of the construction of a communication spanning tree on a given graph, where the total energy consumption spent for the data transmission is minimized and the maximum number of edges between two nodes is bounded by some predefined constant. We focus on the planar Euclidean case of this problem, where the nodes are placed at points spread uniformly at random on a square and the power cost necessary for the communication between two network elements is proportional to the squared distance between them. Since this is an NP-hard problem, we propose different heuristics based on the following metaheuristics: genetic local search, variable neighborhood search, and ant colony optimization. We perform an a posteriori comparative analysis of the proposed algorithms and present the obtained results in this paper. \keywords{Energy efficiency \and Approximation algorithms \and Symmetric connectivity \and Bounded hops \and Genetic local search \and Variable neighborhood search \and Ant colony optimization.} \end{abstract} \section{Introduction} Due to the prevalence of wireless sensor networks (WSNs) in human life, various optimization problems aimed at increasing their efficiency remain relevant.
Since a WSN usually consists of elements with a non-renewable power supply of restricted capacity, one of the most important issues in WSN design is prolonging its lifetime by minimizing the energy consumption of its elements per time unit. A significant part of a sensor's energy is spent on communication with other network elements. Therefore, modern sensors often have the ability to adjust their transmission ranges by changing the transmitter power. The energy consumption of a network element is usually assumed to be proportional to $d^s$, where $s\ge 2$ and $d$ is the transmission range \cite{R96}. The problem of determining the optimal power assignment in a WSN is well studied. The most general Range Assignment Problem, where the goal is to find a strongly connected subgraph in a given directed graph, was considered in \cite{CPS99,KKKP00}. Its subproblem, the Min-Power Symmetric Connectivity Problem (MPSCP), was first studied in \cite{CMZ02}. The authors proved that a Minimum Spanning Tree (MST) is a 2-approximate solution to this problem. They also proposed a polynomial-time approximation scheme with a performance ratio of $1 + \ln{2} + \varepsilon \approx 1.69$ and a 15/8-approximation polynomial algorithm. In \cite{CNSCL03} a greedy heuristic, later called Incremental Power: Prim (IPP), was proposed. IPP is similar to Prim's algorithm for constructing an MST. A Kruskal-like heuristic, later called Incremental Power: Kruskal, was studied in \cite{CN02}. Both of these so-called incremental power heuristics were proposed for the Minimum Power Asymmetric Broadcast Problem, but they are suitable for MPSCP too. It is proved in \cite{PS06} that both have an approximation ratio of 2, and the same paper shows that in practice they yield significantly more accurate solutions than MST.
Also, in a series of papers, different heuristic algorithms have been proposed for MPSCP and studied experimentally: local search procedures \cite{PS06,ACMPTZ06,EPS13}, methods based on iterative local search \cite{WM09}, a hybrid genetic algorithm that uses variable neighborhood descent as mutation \cite{EP15}, variable neighborhood search \cite{EMP16_COR}, and variable neighborhood decomposition search \cite{PEM18_VNDS}. Another important aspect of a WSN's efficiency is the message transmission delay, i.e., the minimum time necessary for transmitting a message from one sensor to another via intermediate transit nodes. As a rule, the delay is proportional to the maximum number of hops (edges) between two nodes of a network. In the general case, when the network is represented as a directed arc-weighted graph and the goal is to find a strongly connected subgraph with minimum total power consumption and bounded path length, the problem is called the Min-Power Bounded-Hops Strong Connectivity Problem. In \cite{Clementi00_1} approximation algorithms with guaranteed estimates were proposed for the Euclidean case of this problem. A bi-criteria approximation algorithm for the general (not necessarily Euclidean) case with guaranteed upper bounds was proposed in \cite{Calinescu06}. The authors of \cite{Carmi15} propose an improved constant-factor approximation for the planar Euclidean case of the problem. In this paper, we consider the symmetric case of the Min-Power Bounded-Hops Strong Connectivity Problem, when the network is represented as an undirected edge-weighted graph. This problem is known as the Min-Power Bounded-Hops Symmetric Connectivity Problem (MPBHSCP) \cite{Calinescu06}. We also assume that the sensors are positioned on the Euclidean plane.
The energy consumption for data transmission is assumed to be proportional to the area of a circle with center at the sensor's position and radius equal to its transmission range $d$; therefore, $s = 2$. This problem is still NP-hard in the planar Euclidean case \cite{Clementi00}, and, therefore, approximate heuristic algorithms that obtain a near-optimal solution in a short time are required for it. A set of polynomial algorithms that construct approximate solutions to MPBHSCP was proposed in \cite{PloEr19}. In this paper, we suggest three metaheuristic approaches that aim to improve the solutions obtained by the known constructive heuristics. Namely, we use variable neighborhood search, genetic local search, and ant colony optimization, which make use of different variants of a local search procedure. This research was inspired by papers where different metaheuristics were successfully applied to the approximate solution of the Bounded-Diameter Minimum Spanning Tree (BDMST) problem (e.g., see \cite{Gruber06}) and to MPSCP (e.g., see \cite{EMP16_COR}). We conducted an extensive numerical experiment to compare our algorithms and present its results in this paper. Note that, to the best of our knowledge, metaheuristics of this kind have never previously been applied to MPBHSCP. The rest of the paper is organized as follows. In Section \ref{sPF} the problem is formulated, in Section \ref{sH} descriptions of the proposed algorithms are given, Section \ref{sS} contains results and analysis of the experimental study, and Section \ref{sC} concludes the paper. \section{Problem formulation}\label{sPF} Mathematically, MPBHSCP can be formulated as follows.
Given a connected edge-weighted undirected graph $G = (V,E)$ and an integer value $D \geq 1$, find a spanning tree $T^*$ in $G$ that is a solution to the following problem: \begin{equation}\label{e1} W(T)=\sum\limits_{i\in V}\max\limits_{j\in V_i(T)} c_{ij}\rightarrow\min\limits_T, \end{equation} \begin{equation}\label{e2} dist_T(u, v) \leq D \; \forall u,v \in V, \end{equation} where $V_i(T)$ is the set of vertices adjacent to the vertex $i$ in the tree $T$, $c_{ij} \geq 0$ is the weight of the edge $(i, j) \in E$, and $dist_T(u, v)$ is the number of edges in a path between the vertices $u \in V$ and $v \in V$ in $T$. Obviously, in the general case, MPBHSCP may not have any feasible solution at all. In this paper, we consider the planar Euclidean case, where an edge weight equals the squared distance between the corresponding points and $G$ is a complete graph. Therefore, a solution always exists. Although any feasible solution of (\ref{e1})--(\ref{e2}) is an undirected spanning tree with bounded diameter, we can always choose a center of this tree, i.e., a vertex (or two vertices if $D$ is odd), such that a path from it to any other vertex in the tree contains at most $\lfloor D/2 \rfloor$ edges. Therefore, it is convenient to consider a solution as a directed tree (or arborescence) rooted at one of its centers. Further, we assume that the centers and the root are predefined for each considered feasible spanning tree, and, therefore, we will use the following notation, which is suitable for directed trees: $v_0$ --- the root of a tree $T = (V, E_T)$; $P_T(v)$ --- the parent vertex of $v \in V \setminus \{v_0\}$ in $T$; $L_T(v)$ --- the level (i.e., the number of edges in a path from $v$ to the center) of $v \in V$ in $T$. \section{Heuristic algorithms}\label{sH} In this section we describe the heuristic algorithms for the approximate solution of MPBHSCP.
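To make the formulation of the previous section concrete, here is a minimal Python sketch (our own, with a hypothetical parent-array representation of a tree) of the objective (\ref{e1}) and the hop constraint (\ref{e2}):

```python
# A minimal sketch (our own data layout, not from the paper): a tree on n
# vertices is stored as a parent array, parent[root] = None.  `objective`
# computes W(T) from (1); `hops_feasible` checks constraint (2) by BFS.
def tree_edges(parent):
    return [(v, p) for v, p in enumerate(parent) if p is not None]

def objective(parent, cost):
    """W(T): every vertex pays for its heaviest incident tree edge."""
    heaviest = [0.0] * len(parent)
    for v, p in tree_edges(parent):
        heaviest[v] = max(heaviest[v], cost[v][p])
        heaviest[p] = max(heaviest[p], cost[v][p])
    return sum(heaviest)

def hops_feasible(parent, D):
    """dist_T(u, v) <= D for all u, v: BFS from every vertex."""
    n = len(parent)
    adj = [[] for _ in range(n)]
    for v, p in tree_edges(parent):
        adj[v].append(p)
        adj[p].append(v)
    for s in range(n):
        dist = {s: 0}
        queue = [s]
        for u in queue:
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    queue.append(w)
        if max(dist.values()) > D:
            return False
    return True

# A star over 4 points of the unit cross; edge weights are squared distances.
pts = [(0, 0), (1, 0), (0, 1), (-1, 0)]
cost = [[(ax - bx) ** 2 + (ay - by) ** 2 for bx, by in pts] for ax, ay in pts]
parent = [None, 0, 0, 0]
print(objective(parent, cost), hops_feasible(parent, 2), hops_feasible(parent, 1))
```

The star is feasible for $D = 2$ but not for $D = 1$, since two leaves are two hops apart.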
Our methods are based on the following metaheuristics: variable neighborhood search (VNS), genetic local search (GLS), and ant colony optimization (ACO). All of our methods start with some initial feasible solution --- a spanning tree with bounded diameter (or with a set of feasible solutions, as in GLS). We assume that at least one such solution has already been constructed by some heuristic, and the goal of our algorithms is to improve it in the best possible way. Besides the obvious differences that are specific to the particular metaheuristics, our algorithms have common parts. Namely, they use the same variants of the local search and random movement procedures. Therefore, we describe these procedures first. \subsection{Local search}\label{ssLS} We suggest three types of neighborhood structure that are used in the local search procedures. The first neighborhood movement is called \emph{LevelChange}. It consists of changing the parent node of a vertex in such a way that the level of the vertex changes and the diameter remains feasible (at most $D$). The second procedure, \emph{SameLevelParentChange}, consists of changing the parent of a vertex while preserving its level. And the last one is \emph{CenterChange}, which consists of changing a center vertex of a tree. Note that none of these three variants of local movement can be replaced by a sequence of the others. The first two local movements, \emph{LevelChange} and \emph{SameLevelParentChange}, are quite simple. In both cases, at first, one edge $e = (v, P_T(v))$ is removed from $T$, and then another vertex $v_1$, which is not a descendant of $v$ in $T$, is chosen as the new parent of $v$. Some conditions must be met: in the case of \emph{LevelChange}, $L_T(v_1)$ should not be equal to $L_T(v) - 1$ and the diameter restriction should not be violated; in the case of \emph{SameLevelParentChange}, the equality $L_T(v_1) = L_T(v) - 1$ should hold.
In the \emph{CenterChange} movement, at first, one center $c \in V$ is chosen (it may be either the root or the other center in the case of an odd diameter), and then some other non-center vertex $v \in V$ is chosen as a new center. In order to make $v$ a new center instead of $c$, the following steps are performed: (a) the children of $c$ change their parent from $c$ to $v$; (b) $v$ is detached from its parent $v_p = P_T(v)$; (c) if $c$ is the root then $v$ becomes the root, otherwise it becomes the second center, and the root $v_0$ becomes the parent of $v$; (d) if $c \neq v_p$, then $c$ becomes a child of $v_p$, otherwise, it becomes a child of $v$. Our algorithms use these three variants of neighborhood movement as parts of one local search method based on the variable neighborhood descent (VND) metaheuristic. The idea of VND is to perform local search within more than one neighborhood structure. This approach was proposed in \cite{HansenVNS}, and the pseudo-code is given in Algorithm \ref{alg:vnd}. As a result, VND returns a local optimum with respect to all considered neighborhood structures. \begin{algorithm}[!bp] \begin{algorithmic}[1] \STATE Select an initial solution $T$; \STATE Set the list of local searches $(LS_{l})_{l=1, 2, 3} $ $ \leftarrow $ $\{$\emph{LevelChange}, \emph{SameLevelParentChange}, \emph{CenterChange}$\}$; \STATE $improved \leftarrow $ \textbf{true}; \WHILE {$improved$} \STATE $improved \leftarrow $ \textbf{false}, $l \leftarrow 1$; \WHILE {$l \leq 3$} \STATE $T^{\prime} \leftarrow LS_l(T)$; \IF {$T^{\prime}$ is better than $T$} \STATE $T \leftarrow T^{\prime}$, $l \leftarrow 1$, $improved \leftarrow $ \textbf{true}; \ELSE \STATE $l \leftarrow l+1$; \ENDIF \ENDWHILE \ENDWHILE \end{algorithmic} \caption{Variable neighborhood descent} \label{alg:vnd} \end{algorithm} \subsection{Random movement} Besides the local search procedures, some of our metaheuristics (to be precise, GLS and VNS) involve an operator of randomized modification of a tree.
For such a random movement we suggest the procedure \emph{RandomBranchReattaching}. In this procedure, some edge $(v, P_T(v))$ is chosen at random and removed from $T$. Then, $v$ is connected to a non-descendant vertex $u$, which is chosen at random as well, if this operation keeps the tree feasible. This process is repeated $k$ times, where $k$ is an external integer parameter provided by an upper-level metaheuristic. \subsection{Variable neighborhood search} Variable neighborhood search (VNS) is a metaheuristic developed by Hansen and Mladenovic \cite{HansenVNS}, and it consists of two phases: a randomized phase, or so-called shaking, when the current solution is changed in a random or half-random way, and a deterministic phase, where VND is applied to the shaken solution. In our implementation, \emph{RandomBranchReattaching} is used for the shaking phase, and the neighborhood movements \emph{LevelChange}, \emph{SameLevelParentChange}, and \emph{CenterChange} are used in the local search phase. The pseudo-code is presented in Algorithm \ref{alg:vns}. The great advantage of this metaheuristic, compared to others, is that it requires tuning only one parameter, $k_{\max}$. The algorithm starts with some feasible solution. As the first approximation for MPBHSCP we use the best of the trees obtained by the heuristic algorithms proposed in \cite{PloEr19}: MPCBTC, MPRTC, MPCBLSoC, MPCBRC, MPQBH, and MPIR.
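The shaking operator just described might be sketched as follows (a hedged illustration with our own helper names; the revert-on-infeasibility detail is our simplification of the rule that only feasibility-preserving moves are performed, and the hop-bound check `is_feasible` is assumed to be given):

```python
import random

# Sketch (ours) of RandomBranchReattaching: k times, detach a random non-root
# vertex and reattach it to a random non-descendant, keeping the tree feasible.
def descendants(parent, v):
    """All vertices whose path to the root passes through v (including v)."""
    out = set()
    for u in range(len(parent)):
        w = u
        while w is not None:
            if w == v:
                out.add(u)
                break
            w = parent[w]
    return out

def random_branch_reattaching(parent, k, is_feasible, rng=random):
    parent = list(parent)
    n = len(parent)
    for _ in range(k):
        v = rng.choice([u for u in range(n) if parent[u] is not None])
        forbidden = descendants(parent, v)   # reattaching here would create a cycle
        u = rng.choice([w for w in range(n) if w not in forbidden])
        old = parent[v]
        parent[v] = u
        if not is_feasible(parent):
            parent[v] = old                  # keep the old edge if move is infeasible
    return parent

# Demo on a 4-vertex tree; the trivial feasibility check accepts every tree.
print(random_branch_reattaching([None, 0, 1, 1], 3, lambda p: True))
```

Since the new parent is never a descendant of the detached vertex, the result is always a spanning tree with the same root.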
\begin{algorithm}[h] \begin{algorithmic}[1] \STATE Select an initial solution $T$; \STATE $k \leftarrow 0$; \WHILE {the stopping criterion is not met} \WHILE {$k \leq k_{\max}$} \STATE Perform shaking: $T^{\prime} \leftarrow \emph{RandomBranchReattaching}(T, k)$; \STATE Apply the local search procedures to the shaken solution: $T^{\prime\prime} \leftarrow VND(T^{\prime})$; \IF {$T^{\prime\prime}$ is better than $T$} \STATE $T \leftarrow T^{\prime\prime}$; $k \leftarrow 1$; \ELSE \STATE $k \leftarrow k+1$; \ENDIF \ENDWHILE \ENDWHILE \end{algorithmic} \caption{Variable neighborhood search} \label{alg:vns} \end{algorithm} \subsection{Genetic local search} Another approach suitable for problem (\ref{e1})--(\ref{e2}) is a genetic local search algorithm. This metaheuristic deals with a \emph{population} --- a set of feasible solutions. Before the algorithm starts, its first population should be generated. For the first population, we used all spanning trees constructed by the 6 algorithms from \cite{PloEr19}. Note that, since MPRTC is randomized, it may construct a set of different feasible solutions instead of the single solution yielded by each of the other 5 deterministic heuristics. This allows us to generate a first population whose size does not exceed some predefined value. Each iteration of the algorithm consists of applying the following operators to the current population: (a) calculation of \emph{fitness}, which expresses the quality of a solution; (b) \emph{selection}, which chooses a subset of solutions from the population according to their fitness; (c) \emph{crossover}, which creates a new solution (an offspring) from a selected pair of solutions; (d) \emph{mutation}, which randomly modifies the offspring; (e) \emph{local search}, which improves the offspring; (f) \emph{join}, which selects the population of the next generation from the current population and the set of offspring.
A brief description of the main steps of this algorithm is presented in Algorithm \ref{alg:gls}. \begin{algorithm}[!hbtp] \begin{algorithmic}[1] \STATE Generation of the first population; \STATE Fitness calculation of the population; \WHILE {stop condition is not met} \STATE Selection; \STATE Crossover; \STATE Mutation; \STATE Local search by VND; \STATE Fitness calculation of the offspring; \STATE Join; \STATE $T \leftarrow$ the best tree among the current population; \ENDWHILE \end{algorithmic} \caption{Genetic local search} \label{alg:gls} \end{algorithm} In our implementation of genetic local search, we take the value $1 / W(T)$ as the fitness of $T$. This corresponds to the rule that fitness has to be a positive value which is higher when the value of the objective function is closer to the optimum. Within the selection procedure, a set of prospective parents of the next offspring is filled with solutions from the current population in the following way. Pairs of trees are taken sequentially from the current population, with probability proportional to their fitness: the first tree of each pair is chosen randomly from the entire population, and the second tree is chosen from the remaining part of the population. Each pair should contain different trees, but the same tree may be included in many pairs. For the crossover operator, a solution is represented as an array of integer values that correspond to the vertex levels in a tree. In other words, we assume that the vertices are numbered, and for each number $i = 1, \ldots, n$, the value of the $i$-th element of the array is set to the level of the $i$-th vertex in the tree. Given two integer arrays (let us call them the \emph{parent} arrays), a new (\emph{child}) array, which will correspond to an offspring, is generated in the following way. First of all, an offspring has to have a center.
For that reason, one parent array is taken at random (with probability 0.5), and the child array inherits the elements equal to 0 from this parent array. Each parent array has one or two such elements, depending on the parity of $D$, and the child array has the same number of them. The value of every other element of the child array is inherited from the same position of one of the parents, the parent being chosen with probability 0.5 each time. Note that if an element equal to 0 is inherited in this way, the corresponding element of the child is set to 1 instead: the corresponding vertex cannot be a center of the offspring, since the offspring's center is already established. The decoding of a tree from the integer array is performed as follows. Let $A$ be the array of integers to be decoded into a tree $T$. First, a vertex $v_0$ such that $A(v_0) = 0$ becomes the root of the tree; if another vertex $v_1$ with the same property exists, it becomes the second center of the tree, and $v_0$ becomes the parent of $v_1$. After that, each remaining vertex $i$ is assigned a parent $j$ in $T$ such that $A(j) < A(i)$ and the edge connecting the $i$-th and $j$-th vertices brings the minimum contribution to the value of the objective function. The mutation procedure takes an integer parameter $k$ --- the maximum number of arcs in which the modified tree may differ from the initial one. This parameter is drawn randomly from the interval $[1, n / 3]$, with probability proportional to its inverse value (i.e., smaller modifications are more likely). To perform a random movement in the mutation, we used the procedure \emph{RandomBranchReattaching}. The mutation procedure is applied with probability $PM$ (a parameter of the algorithm) to each offspring.
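The decoding step just described can be sketched as follows. Here `edge_cost(i, j)` is a hypothetical stand-in for the contribution of the edge $\{i, j\}$ to the min-power objective (in the actual problem this contribution depends on the resulting transmission powers); the function returns a parent array with the root as its own parent.

```python
def decode_levels(levels, edge_cost):
    """Decode a vertex-level array into a tree (parent array).

    `levels[i]` is the level of vertex i; `edge_cost(i, j)` is an
    assumed stand-in for the edge's objective contribution.
    """
    n = len(levels)
    centers = [v for v, l in enumerate(levels) if l == 0]
    parent = [None] * n
    root = centers[0]
    parent[root] = root                   # root is its own parent
    if len(centers) == 2:                 # even D: a two-vertex center
        parent[centers[1]] = root
    for i in range(n):
        if parent[i] is not None:
            continue
        # attach i to the cheapest vertex on a strictly lower level
        candidates = [j for j in range(n) if levels[j] < levels[i]]
        parent[i] = min(candidates, key=lambda j: edge_cost(i, j))
    return parent
```

With `abs(i - j)` as a toy edge cost, the single-center array `[0, 1, 1, 2]` decodes to the parent array `[0, 0, 0, 2]`, and the two-center array `[0, 0, 1]` to `[0, 0, 1]`.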
Additionally, our algorithm applies local search to improve the offspring after the crossover operator. To do this, we used the VND algorithm, which performs local search within the three neighborhood structures defined above: \emph{LevelChange}, \emph{SameLevelParentChange}, and \emph{CenterChange}. This solution improvement procedure is applied with a predefined probability, like the randomized mutation. In the \emph{join} procedure, a subset of solutions with the largest fitness values is chosen from the current population and the current offspring to fill the population of the next generation. Our version of GLS for MPBHSCP requires the following parameters: \begin{itemize} \item $PopSize$ --- the size of the population; \item $OffspSize$ --- the size of the offspring; \item $PM$ --- the probability of mutation; \item $PLS$ --- the probability of local search. \end{itemize} \subsection{Ant colony optimization} As the third heuristic algorithm for the approximate solution of MPBHSCP, we propose an algorithm based on the \emph{ant colony optimization} (ACO) metaheuristic. A \emph{path} of an ant corresponds to a solution of the problem. The path usually consists of elements, each of which is chosen randomly with a probability depending on the \emph{pheromone value}, which stores information about how frequently a particular part of a path occurs in the best-found solutions. We designed our algorithm in a manner similar to that of Gruber et al. \cite{Gruber06}. To represent a feasible solution of MPBHSCP as a path, we used the same vertex-level encoding as in the crossover operator of GLS, i.e., an array of $n$ integers not greater than $\lfloor D/2 \rfloor$ corresponding to the vertex levels. As the pheromone values, we used the matrix $(\tau_{il})$ of size $n \times \lfloor D/2 \rfloor$, initially filled with the equal positive value $1 / ( n \cdot W(T_0))$, where $T_0$ is the initial solution.
Our variant of the ACO algorithm consists of three phases: (a) path construction, (b) solution improvement, and (c) pheromone matrix update. The main steps of the algorithm are briefly described in Algorithm \ref{alg:aco}. \begin{algorithm}[!hbtp] \begin{algorithmic}[1] \STATE Generation of the pheromone matrix; \WHILE {stop condition is not met} \STATE Construction of ant paths according to the pheromone matrix; \STATE Improvement of the solutions derived from the paths; \STATE Update of the pheromone matrix; \ENDWHILE \end{algorithmic} \caption{Ant colony optimization} \label{alg:aco} \end{algorithm} In the path construction phase, the center of the corresponding tree is defined first. For that reason, we assign one or two (depending on the parity of $D$) elements of the ant path to 0. The vertices (or indices of the ant path) that will be assigned to the centers (level 0) are chosen randomly with probability $P_{i,0} = \tau_{i, 0} / \sum_{j = 1}^{n}{\tau_{j, 0}}$. After that, for each vertex $i = 1, ..., n$ that has not been assigned to a center, its level is chosen randomly with probability $P_{i, l} = \tau_{i, l} / \sum_{l^{\prime} = 1}^{\lfloor D/2 \rfloor}{\tau_{i, l^{\prime}}}$, where $l = 1, ..., \lfloor D/2 \rfloor$. After this construction, the paths are transformed into spanning trees by the same decoding procedure as in the GLS. Each spanning tree is then improved by the VND procedure, which makes use of the three neighborhood search types: \emph{LevelChange}, \emph{SameLevelParentChange}, and \emph{CenterChange}. After that, the best solution found so far, $T_{best}$, is used to update the pheromone matrix as follows: for each $i = 1, ..., n$ and $l = 0, ..., \lfloor D/2 \rfloor$, $\tau_{i, l} = \tau_{i, l} + \rho / W(T_{best})$ if $l = L_{T_{best}}(i)$, and $\tau_{i, l} = \tau_{i, l}(1 - \rho)$ otherwise.
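The pheromone update rule can be sketched directly. In this sketch (names are ours), `tau[i][l]` holds $\tau_{i,l}$, `best_levels[i]` is $L_{T_{best}}(i)$, and `w_best` is $W(T_{best})$.

```python
def update_pheromone(tau, best_levels, w_best, rho):
    """ACO pheromone update: reinforce the levels used by the
    best-found tree, evaporate all other entries."""
    for i, row in enumerate(tau):
        for l in range(len(row)):
            if l == best_levels[i]:
                row[l] += rho / w_best      # reinforcement
            else:
                row[l] *= 1.0 - rho         # evaporation
    return tau
```

For instance, with $\rho = 0.2$ and $W(T_{best}) = 2$, an entry on the best tree's level grows by $0.1$ while all other entries shrink by a factor of $0.8$.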
ACO requires two parameters: $ColSize$ --- the number of ants in the colony, and $\rho$ --- the pheromone decay coefficient. \section{Simulation} \label{sS} We implemented all the described algorithms in the C++ programming language and ran them on an Intel Core i5-4460 3.2GHz processor with 8 GB RAM. In order to make our experimental results reproducible, we used as test instances the data sets given in Beasley's OR-Library for the Euclidean Steiner Problem (http://people.brunel.ac.uk/~mastjjb/jeb/orlib). These test cases consist of random uniformly distributed points in the unit square. We tested 3 instance sizes, $n = $ 100, 250, and 500, and took different values of $D$ for each size. Since all of our algorithms are partially probabilistic, we launched each algorithm 10 times on each instance and calculated the average objective value, the best objective value, and the standard deviation. As a stopping criterion, the following condition was used: the best found solution does not change during three iterations in a row. We performed preliminary testing of each algorithm to determine the combination of its parameters that would in most cases provide the best result without consuming much computation time. For VNS, $k_{\max}$ was chosen from the set $\{20, 30, 40\}$, and 30 appeared to be the best variant. For GLS, the pair $(PopSize, OffspSize) = (75, 40)$ appeared to be the best among the variants $\{ (25, 15)$, $(50, 20)$, $(75, 40)$, $(100, 50) \}$, and the pair $(PM, PLS)$ = $(0.5, 0.5)$ appeared to be the best among the variants $\{(0.25, 0.25)$, $(0.25, 0.5)$, $(0.25, 0.75)$, $(0.5, 0.5)$, $(0.75, 0.25)$, $(0.75, 0.75)\}$. As for ACO, we found that $\rho = 0.2$ is the best choice among $\{ 0.005$, $0.01$, $0.05$, $0.1$, $0.2\}$ and $ColSize = 50$ is the best choice among $\{25$, $50$, $100\}$.
We also tried to exclude one or more variants of local search from the VND subroutine of our algorithms, but, on average, this always deteriorated the results. Therefore, we decided to keep all the proposed variants of local search in each of our algorithms. The results of the experiment are presented in Table \ref{tab:res}. The first three columns contain the test instance properties: the tree diameter bound $D$, the size of the problem $n$, and the instance case number in the OR-Library, nr. The fourth column presents the objective value of the best constructive heuristic solution, $T_{CH}$. Note that $T_{CH}$ was passed to each metaheuristic algorithm as the initial solution. We added this column because it is important to see how much the best solution found by the constructive heuristics was improved by our metaheuristic-based algorithms. The remaining columns present the results of ACO, GLS, and VNS: the objective values of the best found solutions, $W_{best}$, the average objective values, $W_{av}$, the standard deviations of the objective values of the found solutions, $W_{sd}$, and the average running times. The best values among all algorithms are marked in bold.
\begin{table}[!htp] \begin{tabular}{|c|c|c|c|ccc|ccc|ccc|ccc|} \hline \multirow{2}{*}{$D$} &\multirow{2}{*}{$n$} &\multirow{2}{*}{nr} &\multirow{2}{*}{$W(T_{CH})$} &\multicolumn{3}{c|}{$W_{best}$} &\multicolumn{3}{c|}{$W_{av}$} &\multicolumn{3}{c|}{$W_{sd}$} &\multicolumn{3}{c|}{Time (in sec.)}\\ \cline{5-16} & & & &ACO &GLS &VNS &ACO &GLS &VNS &ACO &GLS &VNS &ACO &GLS &VNS\\ \hline \multirow{9}{*}{7} &\multirow{3}{*}{50} &1 &1.89 &1.56 &\textbf{1.47} &1.61 &1.63 &1.69 &\textbf{1.61} &0.07 &0.18 &0 &1.87 &0.69 &\textbf{0.36}\\ & &2 &1.77 &\textbf{1.31} &1.42 &1.34 &\textbf{1.39} &1.55 &1.42 &0.06 &0.12 &0.04 &2.22 &\textbf{0.69} &0.75\\ & &3 &1.71 &\textbf{1.24} &1.30 &1.36 &\textbf{1.33} &1.44 &1.40 &0.05 &0.12 &0.02 &1.57 &\textbf{0.67} &1.10\\ \cline{2-16} &\multirow{3}{*}{100} &1 &2.07 &\textbf{1.60} &1.74 &1.66 &1.96 &1.80 &\textbf{1.70} &0.14 &0.05 &0.04 &5.93 &3.26 &\textbf{1.93}\\ & &2 &2.00 &\textbf{1.48} &1.55 &1.70 &1.73 &1.82 &\textbf{1.70} &0.12 &0.13 &0.01 &8.16 &3.03 &\textbf{1.88}\\ & &3 &2.35 &2.35 &1.99 &\textbf{1.90} &2.35 &2.03 &\textbf{1.92} &0 &0.04 &0.04 &\textbf{2.60} &3.15 &3.04\\ \cline{2-16} &\multirow{3}{*}{250} &1 &3.13 &3.13 &2.76 &\textbf{2.60} &3.13 &2.92 &\textbf{2.62} &0 &0.09 &0.02 &28.82 &33.07 &\textbf{21.16}\\ & &2 &3.30 &3.30 &2.80 &\textbf{2.88} &3.30 &2.98 &\textbf{2.89} &0 &0.17 &0.01 &28.73 &32.36 &\textbf{12.50}\\ & &3 &3.11 &3.11 &2.53 &\textbf{2.43} &3.11 &2.76 &\textbf{2.49} &0 &0.15 &0.05 &28.57 &32.64 &\textbf{30.41}\\ \hline \multirow{10}{*}{10} &\multirow{3}{*}{50} &1 &1.68 &\textbf{1.14} &1.23 &1.22 &1.29 &1.25 &\textbf{1.24} &0.07 &0.03 &0.03 &1.78 &\textbf{0.76} &1.05\\ & &2 &1.18 &\textbf{1.01} &1.13 &1.06 &1.16 &1.18 &\textbf{1.07} &0.05 &0.02 &0.00 &0.99 &\textbf{0.63} &0.73\\ & &3 &1.00 &1.00 &1.00 &\textbf{0.88} &1.00 &1.00 &\textbf{0.88} &0 &0 &0 &0.74 &0.61 &\textbf{0.41}\\ \cline{2-16} &\multirow{3}{*}{100} &1 &1.73 &1.18 &1.34 &\textbf{1.15} &1.36 &1.38 &\textbf{1.23} &0.07 &0.07 &0.05 &9.73 
&\textbf{3.53} &8.66\\ & &2 &1.55 &1.13 &1.25 &\textbf{1.07} &1.26 &1.52 &\textbf{1.09} &0.08 &0.09 &0.01 &11.35 &\textbf{3.33} &5.41\\ & &3 &1.88 &\textbf{1.08} &1.29 &1.21 &1.33 &1.49 &\textbf{1.27} &0.14 &0.23 &0.04 &12.53 &\textbf{4.12} &5.45\\ \cline{2-16} &\multirow{3}{*}{250} &1 &2.11 &2.11 &1.94 &\textbf{1.75} &2.11 &2.08 &\textbf{1.84} &0 &0.06 &0.04 &46.08 &\textbf{38.13} &47.03\\ & &2 &2.30 &1.99 &1.84 &\textbf{1.70} &2.22 &2.14 &\textbf{1.71} &0.11 &0.17 &0.03 &54.10 &\textbf{39.93} &47.24\\ & &3 &2.24 &1.97 &1.79 &\textbf{1.74} &2.14 &1.97 &\textbf{1.80} &0.10 &0.14 &0.04 &72.40 &\textbf{38.87} &39.59\\ \cline{2-16} &500 &1 &2.57 &2.57 &2.13 &\textbf{2.09} &2.57 &2.47 &\textbf{2.11} &0 &0.18 &0.03 &277.90 &255.59 &\textbf{144.00}\\ \hline \multirow{10}{*}{15} &\multirow{3}{*}{50} &1 &1.07 &1.01 &0.98 &\textbf{0.92} &1.06 &1.06 &\textbf{0.93} &0.02 &0.03 &0.01 &0.89 &\textbf{0.69} &1.25\\ & &2 &0.99 &0.99 &0.99 &\textbf{0.88} &0.99 &0.99 &\textbf{0.89} &0 &0 &0.01 &0.81 &\textbf{0.67} &1.13\\ & &3 &0.89 &0.84 &0.89 &\textbf{0.79} &0.88 &0.89 &\textbf{0.81} &0.01 &0 &0.02 &0.92 &\textbf{0.66} &1.51\\ \cline{2-16} &\multirow{3}{*}{100} &1 &1.17 &1.04 &1.07 &\textbf{0.97} &1.08 &1.16 &\textbf{0.97} &0.02 &0.03 &0 &9.81 &3.65 &\textbf{3.51}\\ & &2 &1.14 &1.02 &0.97 &\textbf{0.93} &1.06 &0.99 &\textbf{0.94} &0.04 &0.05 &0.01 &8.44 &4.35 &\textbf{3.47}\\ & &3 &1.39 &\textbf{0.91} &1.08 &0.99 &1.06 &1.33 &\textbf{1.01} &0.09 &0.12 &0.01 &15.93 &3.71 &\textbf{2.88}\\ \cline{2-16} &\multirow{3}{*}{250} &1 &2.05 &1.33 &1.34 &1.26 &1.46 &1.46 &\textbf{1.33} &0.07 &0.07 &0.07 &140.88 &\textbf{52.83} &93.75\\ & &2 &2.08 &\textbf{1.28} &1.31 &1.41 &\textbf{1.39} &1.64 &1.41 &0.07 &0.29 &0.00 &165.90 &\textbf{45.20} &65.31\\ & &3 &1.71 &1.28 &1.23 &\textbf{1.07} &1.38 &1.62 &\textbf{1.09} &0.06 &0.16 &0.02 &128.30 &41.15 &\textbf{22.74}\\ \cline{2-16} &500 &1 &2.13 &\textbf{1.64} &1.77 &1.66 &1.87 &1.89 &\textbf{1.68} &0.08 &0.09 &0.03 &884.69 &355.67 
&\textbf{271.71}\\ \hline \multirow{3}{*}{20} &100 &1 &0.98 &0.94 &0.98 &\textbf{0.83} &0.98 &0.98 &\textbf{0.84} &0.01 &0 &0.01 &3.98 &\textbf{3.11} &4.06\\ &250 &1 &1.17 &1.17 &1.17 &\textbf{0.98} &1.17 &1.17 &\textbf{1.01} &0 &0 &0.02 &53.33 &39.26 &\textbf{20.77}\\ &500 &1 &2.06 &1.35 &1.26 &\textbf{1.11} &1.53 &1.86 &\textbf{1.14} &0.07 &0.32 &0.02 &843.71 &314.09 &\textbf{268.58}\\ \hline \multirow{3}{*}{25} &100 &1 &0.88 &0.88 &0.84 &\textbf{0.80} &0.88 &0.88 &\textbf{0.82} &0 &0.01 &0.02 &4.70 &3.77 &\textbf{2.93}\\ &250 &1 &0.99 &0.99 &0.97 &\textbf{0.91} &0.99 &0.98 &\textbf{0.91} &0 &0.00 &0.00 &58.07 &45.87 &\textbf{20.07}\\ &500 &1 &1.77 &1.17 &1.13 &\textbf{1.01} &1.32 &1.63 &\textbf{1.03} &0.05 &0.23 &0.01 &957.09 &336.30 &\textbf{220.94}\\ \hline \end{tabular} \caption{Comparison of the experimental results obtained by the different heuristics.}\label{tab:res} \end{table} The table shows that in more than half of all cases VNS works faster than the other algorithms, although, especially on small instances, the difference in running time is often not significant. Moreover, in the overwhelming majority of the cases, VNS constructs the best solution among all algorithms. Therefore, in general, the superiority of VNS is clear. In some cases, especially when $D$ is not too large, ACO yields a better solution than VNS and GLS. But often, even when the best solution found by ACO outperforms the best solution found by VNS, the average objective value of ACO remains inferior to that of VNS: see, for example, the cases ($D = 7, n = 100, $ nr = $1,2$) and ($D = 10, n = 100, $ nr = $3$). Although GLS never appeared to be the best among all algorithms, it is worth noting that it often significantly improves $T_{CH}$ and builds solutions that are very close, in terms of the objective function, to those constructed by ACO and VNS.
Moreover, in some cases GLS outperforms one of the other algorithms: see, for example, the values of $W_{best}$ in the cases ($D = 10, n = 250, $ nr = $2,3$), ($D = 15, n = 250, $ nr = $2$), and ($D = 20, n = 500, $ nr = $1$), and the values of $W_{av}$ in the cases ($D = 7, n = 100, $ nr = $3$), ($D = 7, n = 250, $ nr = $2,3$), and ($D = 10, n = 250, $ nr = $2,3$). In some cases, both ACO and GLS failed to improve the initial solution $T_{CH}$. This can be explained as follows. These heuristics do not apply the local search procedure directly to the initial solution; instead, they improve solutions derived from it: a mutated offspring or a decoded ant path. Most probably, when the solution space is rather large, there is a risk that these two algorithms explore only solutions that are worse than the initial one, which does not lead to its improvement. It would be helpful to examine these cases more deeply and to analyse the behaviour of the algorithms on them. Nevertheless, we believe that both ACO and GLS have the potential to be improved. It is also worth noting that in some cases the initial solution was improved significantly. For example, in the case ($D = 20, n = 500, $ nr = $1$), the initial solution was improved by almost a factor of two. In particular, this gives the following negative result regarding the constructive heuristics from \cite{PloEr19}: none of them provides an approximation with a guaranteed factor less than $2.06/1.11 \approx 1.856$. As an illustration, we also present in Fig. \ref{fig:expres} the best solutions obtained by the different algorithms on the same instance with $D = 20, n = 500$. We chose this case because of the large gap, discussed in the previous paragraph, between the results of the constructive heuristics and of the metaheuristics. For convenience, the edges at an equal distance (i.e., hop count) from the center are drawn in the same color.
This makes it easy to verify that each tree is feasible, since the hop bound is never violated. Since the diameter bound is even in this case, there is a single center in each tree. In this case, MPIR constructed the best solution among all constructive heuristics from \cite{PloEr19}. The difference between the tree constructed by MPIR and those constructed by the new metaheuristic algorithms is clearly visible. In the solution obtained by the constructive heuristic MPIR, the part of the tree that lies far away from the center has a star-like structure, which is undesirable, because many rather long edges are then connected to ``star centers'' (i.e., vertices of high degree). Note that the trees constructed by VNS, GLS, and ACO have no such star-like parts: the longer edges in the backbone eliminate the need for vertices of high degree. \begin{figure} \caption{Best solutions obtained by the different algorithms on the same instance, $D = 20$, $n = 500$.} \label{fig:expres_CH} \label{fig:expres_VNS} \label{fig:expres_GLS} \label{fig:expres_ACO} \label{fig:expres} \end{figure} \section{Conclusion} \label{sC} In this paper, we considered the NP-hard Min-Power Bounded-Hops Symmetric Connectivity Problem. For its approximate solution, we proposed three different heuristic algorithms based on the well-known metaheuristics variable neighborhood search, ant colony optimization, and genetic local search. To the best of our knowledge, this is the first application of heuristics of this kind to the problem. We implemented all the proposed algorithms and conducted a numerical experiment on test instances generated from the data sets of Beasley's OR-Library. The simulation showed that, in general, our methods significantly improve the solution built by the best known polynomial constructive heuristics. In most cases, the VNS-based heuristic appeared to be more efficient than the other methods, both in terms of the objective function and the running time. \end{document}
\begin{document} \begin{frontmatter} \title{Seeking Nash Equilibrium in Non-Cooperative Differential Games} \author[label1]{Zahra Zahedi} \author[label1]{Alireza Khayatian} \author[label1]{Mohammad Mehdi Arefi\corref{cor1}} \ead{[email protected]} \cortext[cor1]{Corresponding author} \author[label2]{Shen Yin} \fntext[]{Journal of Vibration and Control\\ https://doi.org/10.1177/10775463221122120} \address[label1]{The Department of Power and Control Engineering, School of Electrical and Computer Engineering, Shiraz University, Shiraz 71946-84636, Iran} \address[label2]{Department of Mechanical and Industrial Engineering, Faculty of Engineering, Norwegian University of Science and Technology, Trondheim 7033, Norway} \begin{abstract} This paper investigates the problem of fast convergence to the Nash equilibrium (NE) for \textit{N}-player noncooperative differential games. The proposed method is such that the players attain their NE point without steady-state oscillation (SSO) by measuring only their own payoff values; no information about the payoff functions, the model, or the actions of the other players is required. The method is based on an extremum seeking (ES) approach and, compared to traditional ES approaches, allows the players to attain their NE faster. In fact, in our method the amplitude of the sinusoidal excitation signal of classical ES is adaptively updated and exponentially converges to zero. In addition, an analysis of the convergence to the NE is provided. Finally, a simulation example confirms the effectiveness of the proposed method. \end{abstract} \begin{keyword} Extremum seeking \sep Non-cooperative differential games \sep Learning \sep Nash equilibria.
\end{keyword} \end{frontmatter} \section{Introduction} The problem of finding an algorithm to attain the NE has inspired many researchers, thanks to the wide variety of applications of differential non-cooperative games in areas such as motion planning \cite{c1}, \cite{c2}, \cite{c3}, formation control \cite{c4}, \cite{c5}, wireless networks \cite{c6}, \cite{c7}, mobile sensor networks \cite{c8}, \cite{c9}, network security \cite{c10}, and demand-side management in smart grids \cite{c11}, \cite{c12}.\\ \indent The majority of algorithms designed to achieve convergence to the NE rely on model information and on the observation of the other players' actions. These algorithms are usually designed on the basis of best-response and fictitious-play strategies. In \cite{c13}, each agent plays a best-response strategy in a non-myopic Cournot competition. A form of dynamic fictitious and gradient play strategy for a continuous-time form of repeated matrix games has been introduced in \cite{c14}, and convergence to the NE has been shown. Distributed iterative algorithms have been considered in \cite{c15} to compute the equilibria in a general class of non-quadratic convex games. The authors in \cite{c16} have presented two distributed learning algorithms in which players remember their own payoff values and actions from the last play; convergence to the set of Nash equilibria was also proved. An algorithm based on the combination of the support enumeration method and the local search method for finding the NE has been designed in \cite{c17}.\\ \indent Furthermore, many works are uncoupled, which means that each player generates its actions based on its own payoff, not on the actions or payoffs of its opponents. A new type of learning rule, called regret testing, was introduced for finite games in \cite{c18}, under which the behaviors of the players converge to the set of Nash equilibria.
In \cite{c19}, almost sure convergence to the NE in a finite game with uncoupled strategies is studied, together with possibility and impossibility results.\\ \indent Moreover, some of the non-model-based algorithms are a kind of extremum seeking control (ESC) algorithm, a real-time optimization tool that can find an extremum of an unknown mapping \cite{c20}, \cite{c21}, \cite{c22}. For example, in \cite{c23}, the Nash seeking problem for \textit{N}-player non-cooperative games with static quadratic and dynamic payoff functions is presented, based on the sinusoidal-perturbation extremum seeking approach. The same analysis for dynamic systems with non-quadratic payoffs has been considered in \cite{c24}, \cite{c25}. In \cite{oliveira2021nash}, the authors proposed a strategy for locally stable convergence to the NE in quadratic non-cooperative games subject to diffusion PDE dynamics. The problem of attaining the NE in non-cooperative games based on stochastic extremum seeking is studied in \cite{c26}. Also, the learning of a generalized NE in strongly monotone games with nonlinear dynamical agents has been studied in \cite{krilavsevic2021learning}. However, the convergence to the NE in all of the non-model-based extremum seeking methods mentioned above usually exhibits steady-state oscillation.\\ \indent In this paper, we have modified an extremum seeking control without steady-state oscillation (ESCWSSO) algorithm \cite{c27} to design a seeking scheme that solves this problem for an \textit{N}-player differential non-cooperative game and is faster than conventional extremum seeking algorithms in achieving the NE. In this algorithm, the players generate their actions by measuring only their own payoff values, with no need for information about the model, the form of the payoff functions, or the actions of the other players.
More importantly, the NE is achieved quickly and without steady-state oscillation, because the amplitude of the sinusoidal excitation signal of classical extremum seeking is adjusted so as to converge exponentially to zero. As a result, the convergence is fast and the undesirable effects of steady-state oscillation are eliminated. Similar analyses have been provided in \cite{c28} and \cite{c29} for static non-cooperative games with quadratic and non-quadratic payoff functions. However, since those works investigate static games, dynamic systems are excluded from their analysis. Additionally, the authors in \cite{c30} have proposed a new algorithm for fast convergence to the NE without steady-state oscillation, with a local dynamic within the algorithm, for static non-cooperative games.\\ \indent The main contributions of the paper are as follows: \begin{itemize} \item Compared to previous works on attaining the NE, the proposed method achieves the NE quickly and without steady-state oscillation. \item We modify the previous algorithm of \cite{c27} so that it is applicable to differential games, which are multi-agent systems, and prove the convergence to the NE in such games. \item In contrast to the previous works \cite{c29} and \cite{c30}, which all considered static games, this paper studies differential and dynamic games, which makes the algorithm applicable to a wide variety of new applications. \end{itemize} \indent The rest of this paper is organized as follows. In \cref{sec2}, the Nash seeking algorithm and the general description of the problem are stated. \cref{sec3} includes the convergence and stability analysis. A numerical example with a simulation is presented in \cref{sec4}. Finally, a conclusion is provided in \cref{sec5}.
\section{Preliminaries} \label{sec2} \indent In this section, we state the problem of fast convergence to the NE without steady-state oscillation in non-cooperative differential games with \textit{N} players: the players should converge to their NE quickly and without oscillation using only measurements of their own payoff values, i.e., without any knowledge of the model or of the actions of the other players.\\ \indent Consider the following nonlinear model of player $i$ in an \textit{N}-player non-cooperative differential game: \begin{equation} \label{eq1} \dot{x}=f(x,u) \end{equation} \begin{equation} \label{eq2} J_i = j_i(x) \end{equation} where $i\in\{1, \hdots, N\}$, $u=[u_1, \hdots, u_N]\in R^N$ is the vector of the players' actions, $x$ is the state, and $J_i$, $j_i$ and $f$ are smooth functions with $j_i:R^n\to R$, $f:R^n\times R^N\to R^n$ and $J_i\in R$. First, we make the following assumptions about the game, which are the same as in \cite{c24}. \begin{ass} \label{as1} There exists a smooth function $l:R^N\to R^n$ such that $f(x,u)=0$ if and only if $x=l(u)$. \end{ass} \begin{ass} \label{as2} The equilibrium $x=l(u)$ of system (\ref{eq1}) is locally exponentially stable for all $u\in R^N$. \end{ass} These assumptions mean that the equilibrium is stabilized for any actions of the players, without any knowledge of the model, of the other players' actions, or of the form of $j_i$, $f$ and $l$.\\ Each player $i$ employs the following algorithm to generate its actions and achieve the NE.
\begin{equation} \label{eq3} \begin{split} \dot{\hat{u}}_i &= k_i(J_i-n_i)\sin(\omega_it+\phi_i), \\ u_i &= \hat{u}_i+a_i\sin(\omega_it+\phi_i),\\ \dot{a}_i &= -\omega_{li}a_i+b_i\omega_{li}(J_i-n_i),\\ \dot{n}_i &= -\omega_{hi}n_i+\omega_{hi}J_i, \end{split} \end{equation} where $k_i$ and $\omega_i$ are positive constants with $\omega_i=\bar{\omega}\tilde{\omega}_i$, in which $\bar{\omega}$ is a positive rational number and $\tilde{\omega}_i$ is a positive real number; $J_i$ is the measured payoff value; $n_i$ is the low-frequency component of $J_i$; $b_i$ and $\phi_i$ are constants, where $b_i$ adjusts the speed of convergence; and $\omega_{li}$ and $\omega_{hi}$ are the cut-off frequencies of the low-pass and high-pass filters, respectively. The parameters $\omega_{li}$ and $b_i$ replace the fixed amplitude of the sinusoidal excitation signal in classical extremum seeking control and are designed to keep $a_i$ positive. \Cref{fig1} shows the diagram of the algorithm in an \textit{N}-player non-cooperative differential game.\\ \begin{figure} \caption{NE seeking scheme without steady-state oscillation for an \textit{N}-player non-cooperative differential game.} \label{fig1} \end{figure} \indent For the subsequent analysis, we introduce the following errors: \begin{equation} \label{eq4} \begin{split} \tilde{u}_i &= \hat{u}_i-u_i^\ast,\\ \tilde{n}_i &= n_i-j_i\circ l(u^\ast).
\end{split} \end{equation} where $u^\ast=[u_1^\ast, \hdots, u_N^\ast]$ is the vector of the players' NE actions.\\ \indent By substituting (\ref{eq4}) into (\ref{eq3}), the following system is obtained: \begin{equation} \label{eq5} \begin{split} \dot{x}&=f(x,\tilde{u}+u^\ast+a\times \eta(t))\\ \dot{\tilde{u}}_i &= k_i(j_i(x)-j_i\circ l(u^\ast)-\tilde{n}_i)\sin(\omega_it+\phi_i), \\ \dot{a}_i &= -\omega_{li}a_i(t)+b_i\omega_{li}(j_i(x)-j_i\circ l(u^\ast)-\tilde{n}_i),\\ \dot{\tilde{n}}_i &= -\omega_{hi}\tilde{n}_i(t)+\omega_{hi}(j_i(x)-j_i\circ l(u^\ast)), \end{split} \end{equation} where $\eta_i(t)=\sin(\omega_it+\phi_i)$, $\tilde{u}=[\tilde{u}_1, \hdots, \tilde{u}_N]$, $a=[a_1(t), \hdots, a_N(t)]$, and $\eta(t)=[\eta_1(t), \hdots, \eta_N(t)]$. In the expression $a\times \eta(t)$, the symbol $\times$ denotes the entry-wise product of two vectors. \\ \indent For the subsequent analysis, the design parameters are chosen as follows: \begin{equation} \label{eq6} \begin{split} \omega_{li} &=\omega\omega_{Li}=\omega\epsilon\omega_{Li}^\prime=O(\omega\epsilon\delta),\\ \omega_{hi} &=\omega\omega_{Hi}=\omega\delta\omega_{Hi}^\prime=O(\omega\delta),\\ k_{i} &=\omega K_{i}=\omega\delta K_{i}^\prime=O(\omega\delta), \end{split} \end{equation} where $\epsilon$ and $\delta$ are small constant parameters, and $\omega_{Hi}^\prime$, $K_i^\prime$ and $\omega_{Li}^\prime$ are $O(1)$ positive constants. Rewriting the system in the time scale $\tau=\bar{\omega}t$ yields:
\begin{equation} \label{eq7} \bar{\omega}\frac{dx}{d\tau}=f(x,\tilde{u}+u^\ast+a\times\eta(\tau)) \end{equation} \begin{equation} \label{eq8} \begin{split} \frac{d}{d\tau}\left[\begin{array}{c} \tilde{u}_i(\tau)\\ a_i(\tau)\\\tilde{n}_i(\tau)\end{array} \right]=\delta\left[\begin{array}{l} K_i^\prime(j_i(x)-j_i\circ l(u^\ast)-\tilde{n}_i)\eta_i(\tau)\\ \epsilon\omega_{Li}^\prime (b_i(j_i(x)-j_i\circ l(u^\ast)-\tilde{n}_i)-a_i)\\ \omega_{Hi}^\prime(j_i(x)-j_i\circ l(u^\ast)-\tilde{n}_i)\end{array}\right] \end{split} \end{equation} Furthermore, the following assumptions are made to ensure the existence of the NE $u^\ast$: \begin{ass} \label{as3} The game admits at least one stable NE $u^\ast$ such that \begin{equation} \label{eq9} \frac{\partial j_i\circ l}{\partial u_i}(u^\ast)=0, \hspace{10pt} \frac{\partial^2 j_i\circ l}{\partial u_i^2}(u^\ast)<0. \end{equation} \end{ass} \begin{ass} \label{as4} The following matrix is strictly diagonally dominant and consequently nonsingular. Hence, according to Assumption \ref{as3} and the Gershgorin circle theorem \footnote{According to the Gershgorin circle theorem, a strictly diagonally dominant matrix is Hurwitz (stable) when all of its diagonal elements have negative real part.} \cite{c31}, it is also Hurwitz. \begin{equation} \label{eq10} \Delta=\left[\begin{array}{cccc} \frac{\partial^2 (j_1\circ l)(u^\ast)}{\partial u_1^2}&\frac{\partial^2 (j_1\circ l)(u^\ast)}{\partial u_1\partial u_2}&\ldots&\frac{\partial^2 (j_1\circ l)(u^\ast)}{\partial u_1\partial u_N}\\\frac{\partial^2 (j_2\circ l)(u^\ast)}{\partial u_1\partial u_2}&\frac{\partial^2 (j_2\circ l)(u^\ast)}{\partial u_2^2}&&\frac{\partial^2 (j_2\circ l)(u^\ast)}{\partial u_2\partial u_N}\\\vdots&&\ddots&\vdots\\\frac{\partial^2 (j_N\circ l)(u^\ast)}{\partial u_1\partial u_N}&&&\frac{\partial^2 (j_N\circ l)(u^\ast)}{\partial u_N^2}\end{array}\right] \end{equation} \end{ass} \section{Main Results} \label{sec3} In this section, the local stability and the convergence of the proposed algorithm are studied.
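Before the formal averaging analysis, the seeking law (\ref{eq3}) can be illustrated numerically. The following minimal sketch simulates a single player with an assumed static quadratic payoff, so that the quasi-steady state $x=l(u)$ of (\ref{eq1}) is taken as already reached; all parameter values and names are illustrative choices of this sketch, not taken from the paper.

```python
import math

def nash_es_player(payoff, u_hat0, T=40.0, dt=1e-3,
                   k=1.0, omega=10.0, phi=0.0,
                   a0=0.4, omega_l=0.1, omega_h=1.0, b=1.0):
    """Forward-Euler simulation of the oscillation-free ES law for
    one player measuring only its own payoff value."""
    u_hat, a, n = u_hat0, a0, payoff(u_hat0)
    for step in range(int(T / dt)):
        t = step * dt
        s = math.sin(omega * t + phi)
        u = u_hat + a * s                        # probed action
        J = payoff(u)                            # only measurement used
        u_hat += dt * k * (J - n) * s            # gradient-like update
        a += dt * (-omega_l * a + b * omega_l * (J - n))  # amplitude decays
        n += dt * (-omega_h * n + omega_h * J)   # low-pass: n tracks slow part of J
    return u_hat, a
```

For the payoff $J(u) = -(u-1)^2$ (maximum, i.e., the single-player "NE", at $u^\ast = 1$), the estimate approaches $u^\ast$ while the excitation amplitude $a$ decays toward zero, so the steady-state oscillation vanishes.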
\subsection{Averaging analysis} For the averaging analysis, let us freeze $x$ at the quasi-steady-state equilibrium: \begin{equation} \label{eq11} x=l(\tilde{u}+u^\ast+a\times\eta(\tau)) \end{equation} By substituting (\ref{eq11}) into (\ref{eq8}), the reduced system is given as follows: \begin{equation} \label{eq12} \begin{split} &\frac{d}{d\tau}\left[\begin{array}{c} \tilde{u}_{ri}(\tau)\\ a_{ri}(\tau)\\\tilde{n}_{ri}(\tau)\end{array} \right]=\\ &\delta{\small \left[\begin{array}{l} K_i^\prime(j_i\circ l(\tilde{u}_r+u^\ast+a_r\times\eta)-j_i\circ l(u^\ast)-\tilde{n}_{ri})\eta_i(\tau)\\ \epsilon\omega_{Li}^\prime (b_i(j_i\circ l(\tilde{u}_r+u^\ast+a_r\times\eta)-j_i\circ l(u^\ast)-\tilde{n}_{ri})-a_{ri})\\ \omega_{Hi}^\prime(j_i\circ l(\tilde{u}_r+u^\ast+a_r\times\eta)-j_i\circ l(u^\ast)-\tilde{n}_{ri})\end{array}\right]} \end{split} \end{equation} Define $h_i\circ l(\tilde{u}_r+a_r\times\eta)=j_i\circ l(\tilde{u}_r+u^\ast+a_r\times\eta)-j_i\circ l(u^\ast)$; then, by Assumption \ref{as3}, we have \begin{equation} \label{eq13} h_i\circ l(0)=0, \hspace{10pt} \frac{\partial h_i\circ l}{\partial u_i}(0)=0, \hspace{10pt} \frac{\partial^2 h_i\circ l}{\partial u_i^2}(0)<0. \end{equation} Since (\ref{eq12}) is in the standard form for averaging theory \cite{c32}, the averaged system is obtained as follows: \begin{equation} \label{eq14} \begin{split} &\dot{\tilde{u}}_{ri}^{av}(\tau)=\delta K_i^\prime{\lim_{T\to+\infty}}\frac{1}{T}\int_0^T h_i\circ l(\tilde{u}_r^{av}+a_r^{av}\times\eta)\eta_i(\tau)\,d\tau,\\ &\dot{a}^{av}_{ri}(\tau)=\delta\epsilon\omega_{Li}^\prime (b_i({\lim_{T\to+\infty}}\frac{1}{T}\int_0^T h_i\circ l(\tilde{u}_r^{av}+a_r^{av}\times\eta)\,d\tau\\&-\tilde{n}^{av}_{ri})-a^{av}_{ri}),\\ &\dot{\tilde{n}}^{av}_{ri}(\tau)=\delta\omega_{Hi}^\prime({\lim_{T\to+\infty}}\frac{1}{T}\int_0^T h_i\circ l(\tilde{u}_r^{av}+a_r^{av}\times\eta)\,d\tau-\\&\tilde{n}^{av}_{ri}). \end{split} \end{equation} Therefore, the following result can be derived.
\begin{theorem} \label{th1} Consider system (\ref{eq12}) for an \textit{N}-player game under Assumptions \ref{as3} and \ref{as4}, and suppose that $\omega_i\not=\omega_j$, $\omega_i\not=\omega_j+\omega_k$, $2\omega_i\not=\omega_j+\omega_k$, $\omega_i\not=2\omega_j+\omega_k$, $\omega_i\not=2\omega_j$, $\omega_i\not=3\omega_j$ and $\frac{\omega_i}{\omega_j}$ is rational, for all $i, j, k\in\{1,\hdots,N\}$. Then there exist constants $\sigma$, $\overline{\epsilon}$, $\overline{\delta}>0$ such that, for all $0<\epsilon<\overline{\epsilon}$ and $0<\delta<\overline{\delta}$, the solution $(\tilde{u}_r, a_r, \tilde{n}_r)$ converges exponentially to a neighborhood of the equilibrium point $(\tilde{u}_r^e, 0, \tilde{n}_r^e)$ of the averaged system. \end{theorem} \begin{proof} Since $\frac{d\epsilon}{d\tau}=0$, we can apply the center manifold technique \cite{c33} and rewrite (\ref{eq14}) in the following form: \begin{equation} \label{eq15} \dot{z}=\frac{d}{d\tau}\left[\begin{array}{c} \tilde{u}_{ri}^{av}(\tau)\\a^{av}_{ri}(\tau)\\ \epsilon\end{array}\right]=A_1z+g_1(z,y) \end{equation} \begin{equation} \label{eq16} \dot{y}=\frac{d\tilde{n}^{av}_{ri}(\tau)}{d\tau}=A_2y+g_2(z,y) \end{equation} where $A_1=\left[\begin{array}{ccc}0&0&0\\0&0&0\\0&0&0\end{array}\right]$, $g_1(z,y)=\delta\left[\begin{array}{c}g_{11}\\g_{12}\\0\end{array}\right]$, $g_{11}=\lim_{T\to+\infty}\frac{K_i^\prime}{T}\int_0^T h_i\circ l(\tilde{u}_r^{av}+a_r^{av}\times\eta)\eta_i\,d\tau$, $g_{12}=\epsilon\omega_{Li}^\prime (-a^{av}_{ri}+b_i(-y+{\lim_{T\to+\infty}}\frac{1}{T}\int_0^T h_i\circ l(\tilde{u}_r^{av}+a_r^{av}\times\eta)\,d\tau))$, $g_2(z,y)=\lim_{T\to+\infty}\frac{\delta\omega_{Hi}^\prime}{T}\int_0^T h_i\circ l(\tilde{u}_r^{av}+a_r^{av}\times\eta)\,d\tau$ and $A_2=-\delta\omega_{Hi}^\prime$.\\ \indent First, consider the following lemmas on center manifolds.
\begin{lemma} \label{lem1} There exists a center manifold $y=q(z)$, $\Vert z\Vert<\sigma$, for (\ref{eq15}) and (\ref{eq16}), with $q\in C^2$ and $\sigma >0$, if the following conditions hold: \begin{enumerate} \item $A_1$ and $A_2$ are constant matrices such that all eigenvalues of $A_1$ have zero real part and all eigenvalues of $A_2$ have negative real part. \item $g_1$ and $g_2$ are $C^2$ with $g_i(0,0)=0$, $\frac{\partial g_i}{\partial z}(0,0)=0$ and $\frac{\partial g_i}{\partial y}(0,0)=0$ for $i\in\{1, 2\}$ \cite{c33}. \end{enumerate} \end{lemma} \begin{lemma} \label{lem2} Let $\Psi(z)\in C^1$ with $\Psi(0)=0$ and $\frac{\partial\Psi}{\partial z}(0)=0$ be such that $M(\Psi(z))=O(\Vert z\Vert^m)$ for some $m>1$. Then, as $\Vert z\Vert\to 0$, we have $\vert q(z)-\Psi(z)\vert=O(\Vert z\Vert^m)$ \cite{c33}. \end{lemma} \begin{lemma} \label{lem3} The stability properties of the origin of (\ref{eq15}) and (\ref{eq16}) coincide with those of the origin of the following equation \cite{c33}: \begin{equation} \label{177} \dot{z}=A_1z+g_1(z,q(z)) \end{equation} That is, if the origin of (\ref{177}) is stable, exponentially stable, or unstable, then the same holds for the origin of (\ref{eq15}) and (\ref{eq16}). \end{lemma} Since the assumptions of Lemma \ref{lem1} hold for (\ref{eq15}) and (\ref{eq16}), a center manifold $y=q(z)$ exists and can be approximated. Thus, we have the following equation: \begin{equation} \label{eq17} \begin{split} &M(\Psi(z))=\frac{\partial\Psi}{\partial\tilde{u}^{av}_{ri}}\frac{\partial\tilde{u}^{av}_{ri}}{\partial\tau}+\frac{\partial\Psi}{\partial a^{av}_{ri}}\frac{\partial a^{av}_{ri}}{\partial\tau}+\frac{\partial\Psi}{\partial\epsilon}\frac{\partial\epsilon}{\partial\tau}\\&+\delta\omega_{Hi}^\prime(\Psi(z)-\lim_{T\to+\infty}\frac{1}{T}\int_0^T h_i\circ l(\tilde{u}_r^{av}+a_r^{av}\times\eta)\,d\tau).
\end{split} \end{equation} By Assumption \ref{as4}, if $\Psi(z)=\frac{\partial^2h_i\circ l(0)}{2\partial u^2_{ri}}\tilde{u}_{ri}^{av^2}+\frac{\partial^2h_i\circ l(0)}{4\partial u^2_{ri}}a^{av^2}_{ri}$, then $M(\Psi(z))$ is $O(\vert\tilde{u}^{av}_{ri}\vert^3+\vert a^{av}_{ri}\vert^3+\vert\epsilon\vert^3)$. As a result, by Lemma \ref{lem2}, the center manifold is obtained as follows: \begin{equation} \label{eq18} \begin{split} y=q(z)=\Psi(z)=&\frac{\partial^2h_i\circ l(0)}{2\partial u^2_{ri}}\tilde{u}_{ri}^{av^2}+\frac{\partial^2h_i\circ l(0)}{4\partial u^2_{ri}}a^{av^2}_{ri}\\&+O(\vert\tilde{u}^{av}_{ri}\vert^3+\vert a^{av}_{ri}\vert^3+\vert\epsilon\vert^3). \end{split} \end{equation} By substituting the center manifold $y$ into (\ref{eq14}), the following equations are obtained: \begin{equation} \label{eq19} \begin{split} &\dot{\tilde{u}}_{ri}^{av}(\tau)=\delta K_i^\prime{\lim_{T\to+\infty}}\frac{1}{T}\int_0^T h_i\circ l(\tilde{u}_r^{av}+a_r^{av}\times\eta)\eta_i(\tau)\,d\tau \end{split} \end{equation} \begin{equation} \label{eq20} \dot{a}^{av}_{ri}(\tau)=-\delta\epsilon\omega_{Li}^\prime a^{av}_{ri}+b_i\delta\epsilon\omega_{Li}^\prime O(\vert\tilde{u}^{av}_{ri}\vert^3+\vert a^{av}_{ri}\vert^3+\vert\epsilon\vert^3). \end{equation} According to Lemma \ref{lem3}, we can use (\ref{eq19}) and (\ref{eq20}) to analyze the stability of (\ref{eq14}).\\ The equilibrium $\tilde{u}_r^e=[\tilde{u}_{r1}^e, \hdots, \tilde{u}_{rN}^e]$ of (\ref{eq19}) satisfies \begin{equation} \label{eq21} \begin{split} 0&={\lim_{T\to+\infty}}\frac{1}{T}\int_0^T h_i\circ l(\tilde{u}_r^{e}+a_r^{av}\times\eta)\eta_i(\tau)\,d\tau. \end{split} \end{equation} We assume that the equilibrium point $\tilde{u}_{ri}^e$, for all $i\in\{1, \hdots, N\}$, has the form \begin{equation} \label{eq22} \tilde{u}^e_{ri}=\sum_{j=1}^Nc_j^ia_{rj}^{av}+\sum_{j=1}^N\sum_{k\ge j}^Nd_{jk}^ia_{rj}^{av}a_{rk}^{av}+O(\max_ia_{ri}^{av^3}).
\end{equation} \indent In view of (\ref{eq13}), if the Taylor polynomial approximation \cite{c23}, \cite{c34} of $h_i\circ l$ about zero is substituted into (\ref{eq21}) (the details of the Taylor polynomial approximation and of the integrals used are given in the Appendix), then the following equation is obtained by applying the averaging technique: \begin{equation} \label{eq23} \begin{split} &0=\frac{a_{ri}^{av}}{2}\left[\sum_{j\not=i}^N\tilde{u}_{rj}^e\frac{\partial^2h_i\circ l}{\partial u_{ri}\partial u_{rj}}(0)+\tilde{u}_{ri}^e\frac{\partial^2h_i\circ l}{\partial u_{ri}^2}(0)\right.\\&+\frac{\tilde{u}_{ri}^{e^2}}{2}\frac{\partial^3h_i\circ l}{\partial u_{ri}^3}(0)+\sum_{j\not=i}^N(\frac{\tilde{u}_{rj}^{e^2}}{2}+\frac{a_{rj}^{av^2}}{4})\frac{\partial^3h_i\circ l}{\partial u_{ri}\partial u_{rj}^2}(0)\\&+\tilde{u}_{ri}^e\sum_{j\not=i}^N\tilde{u}_{rj}^e\frac{\partial^3h_i\circ l}{\partial u_{ri}^2\partial u_{rj}}(0)+\frac{a_{ri}^{av^2}}{8}\frac{\partial^3h_i\circ l}{\partial u_{ri}^3}(0)\\&+\left.\sum_{j\not=i}^N\sum_{k>j, k\not=i}^N\tilde{u}_{rj}^e\tilde{u}_{rk}^e\frac{\partial^3h_i\circ l}{\partial u_{ri}\partial u_{rj}\partial u_{rk}}(0)\vphantom{\tilde{u}_{ri}^e}\right] +O(\max_ia_{ri}^{av^4}). \end{split} \end{equation} Furthermore, (\ref{eq22}) is substituted into (\ref{eq23}) to compute $c_j^i$ and $d_{jk}^i$. By matching first-order powers of $a_{ri}^{av}$, we have \begin{equation} \label{eq24} \left[\begin{array}{c}0\\\vdots\\0\end{array}\right]=\sum_{i=1}^Na^{av}_{ri}\Delta\left[\begin{array}{c}c_i^1\\\vdots\\c_i^N\end{array}\right]. \end{equation} Since $\Delta$ is nonsingular, $c_j^i$ must be zero for all $i, j\in\{1,\hdots,N\}$. Therefore, by matching second-order powers of $a_{ri}^{av}$, the following equations are obtained: \begin{equation} \label{eq25} \left[\begin{array}{c}0\\\vdots\\0\end{array}\right]=\sum_{j=1}^N\sum_{k>j}^Na_{rj}^{av}a^{av}_{rk}\Delta\left[\begin{array}{c}d_{jk}^1\\\vdots\\d_{jk}^N\end{array}\right].
\end{equation} \begin{equation} \label{eq26} \left[\begin{array}{c}0\\\vdots\\0\end{array}\right]=\sum_{j=1}^Na_{rj}^{av^2}\left(\Delta\left[\begin{array}{c}d_{jj}^1\\\vdots\\d_{jj}^N\end{array}\right]+\left[\begin{array}{c}\frac{1}{4}\frac{\partial^3 h_1\circ l}{\partial u_{r1}\partial u_{rj}^2}(0)\\ \vdots\\ \frac{1}{4}\frac{\partial^3 h_{j-1}\circ l}{\partial u_{rj-1}\partial u_{rj}^2}(0)\\ \frac{1}{8}\frac{\partial^3h_j\circ l}{\partial u_{rj}^3}(0)\\\frac{1}{4}\frac{\partial^3 h_{j+1}\circ l}{\partial u_{rj}^2\partial u_{rj+1}}(0)\\ \vdots\\\frac{1}{4}\frac{\partial^3 h_N\circ l}{\partial u_{rj}^2\partial u_{rN}}(0) \end{array}\right]\right). \end{equation} As a result, $d_{jk}^i=0$ for all $i$ whenever $j\not=k$, and $d_{jj}^i$ is given by \begin{equation} \label{eq27} \left[\begin{array}{c}d_{jj}^1\\\vdots\\d_{jj}^N\end{array}\right]=-\Delta^{-1}\left[\begin{array}{c}\frac{1}{4}\frac{\partial^3 h_1\circ l}{\partial u_{r1}\partial u_{rj}^2}(0)\\ \vdots\\ \frac{1}{4}\frac{\partial^3 h_{j-1}\circ l}{\partial u_{rj-1}\partial u_{rj}^2}(0)\\ \frac{1}{8}\frac{\partial^3h_j\circ l}{\partial u_{rj}^3}(0)\\\frac{1}{4}\frac{\partial^3 h_{j+1}\circ l}{\partial u_{rj}^2\partial u_{rj+1}}(0)\\ \vdots\\\frac{1}{4}\frac{\partial^3 h_N\circ l}{\partial u_{rj}^2\partial u_{rN}}(0) \end{array}\right]. \end{equation} Consequently, the equilibrium becomes \begin{equation} \label{eq28} \tilde{u}_{ri}^e=\sum_{j=1}^Nd_{jj}^ia_{rj}^{av^2}+O(\max_ia_{ri}^{av^3}).
\end{equation} The Jacobian $\Gamma^{av}=(\gamma_{ij})_{(N\times N)}$ of the averaged system (\ref{eq19}) is given by \begin{equation} \label{eq29} \gamma_{ij}=\delta K_i^\prime\lim_{T\to+\infty}\frac{1}{T}\int_0^T\frac{\partial h_i\circ l}{\partial u_{rj}}(\tilde{u}_r^{av}+a_r^{av}\times\eta)\eta_i(\tau)\,d\tau \end{equation} and, by Taylor polynomial approximation and substitution of the averaged-system equilibrium $\tilde{u}_r^e$, we have \begin{equation} \label{eq30} \gamma_{ij}=\frac{1}{2}\delta K_i^\prime a_{ri}^{av}\frac{\partial^2h_i\circ l}{\partial u_{ri} \partial u_{rj}}(0)+O(\delta\max_ia_{ri}^{av^2}). \end{equation} By Assumptions \ref{as3} and \ref{as4}, and since $a_{ri}^{av}$ is small and positive, the matrix in (\ref{eq30}) is Hurwitz. Thus, the equilibrium (\ref{eq28}) is exponentially stable for the averaged system (\ref{eq19}). Additionally, if $\tilde{u}_{ri}^{av}$ in (\ref{eq20}) is frozen at the equilibrium (\ref{eq28}) \cite{c20}, then, since $\delta\omega_{Li}^\prime>0$, it follows from perturbation theory \cite{c32} that (\ref{eq20}) is exponentially stable at the origin for all $0<\epsilon<\bar{\epsilon}$ with $\bar{\epsilon}>0$. Moreover, $\tilde{n}_{ri}^e$ lies on the center manifold and equals $\frac{\partial^2h_i\circ l(0)}{2\partial u^2_{i}}\tilde{u}_{ri}^{e^2}+\frac{\partial^2h_i\circ l(0)}{4\partial u^2_{i}}a^{av^2}_{ri}+O(\vert\tilde{u}^{e}_{ri}\vert^3+\vert a^{av}_{ri}\vert^3+\vert\epsilon\vert^3)$; since $a^{av}_{ri}$ and $\tilde{u}_{ri}^e$ converge to zero, $\tilde{n}_{ri}^e$ converges to a small neighborhood of zero.\\ Finally, by averaging theory and Lemma \ref{lem3}, for $0<\delta<\overline{\delta}$ and $\sigma>0$ with $\overline{\delta}>0$, the equilibrium of the reduced system (\ref{eq12}) is exponentially stable, which means that $(\tilde{u}_{ri}, a_{ri}, \tilde{n}_{ri})$ converges exponentially to a neighborhood of $(\tilde{u}_{ri}^e, 0, \tilde{n}_{ri}^e)$, and the proof is complete.
\end{proof} \subsection{Singular perturbation analysis} We present the following result in order to analyze the complete system (\ref{eq7}) and (\ref{eq8}) via singular perturbation theory \cite{c32} in the time scale $\tau=\bar{\omega}t$. \begin{theorem} \label{th2} Consider the system (\ref{eq7}) and (\ref{eq8}) under Assumptions \ref{as1}, \ref{as2}, \ref{as3}, \ref{as4}, and suppose that $\omega_i\not=\omega_j$, $\omega_i\not=\omega_j+\omega_k$, $2\omega_i\not=\omega_j+\omega_k$, $\omega_i\not=2\omega_j+\omega_k$, $\omega_i\not=2\omega_j$, $\omega_i\not=3\omega_j$ and $\frac{\omega_i}{\omega_j}$ is rational, for all $i, j, k\in\{1,\hdots,N\}$. Then there exist constants $\overline{\epsilon}$, $\omega^\ast$, $\overline{\delta}$, $\sigma>0$ such that, for all $0<\epsilon<\overline{\epsilon}$, $0<\bar{\omega}<\omega^\ast$ and $0<\delta<\overline{\delta}$, the solution $(x, \hat{u}_i, a_i, n_i)$ converges exponentially to a neighborhood of $(l(u^\ast), u^\ast_i, 0, h_i\circ l(u^\ast))$ for $i\in\{1,\hdots,N\}$, and $J_i(t)$ converges exponentially to $h_i\circ l(u^\ast)$. \end{theorem} \begin{proof} By Theorem \ref{th1} and \cite{c34}, there exists a unique exponentially stable periodic solution $W_{ri}^p=\left[ \begin{array}{c} \tilde{u}_{ri}^p\\a_{ri}^p\\\tilde{n}_{ri}^p\end{array}\right]$ for $i\in\{1,\hdots,N\}$ in a neighborhood of the averaged solution $\left[ \begin{array}{c} \tilde{u}_{i}^{av}\\a_{i}^{av}\\\tilde{n}_{i}^{av}\end{array}\right]$. Writing system (\ref{eq8}) in the form $\frac{dW_i}{d\tau}=\delta H_i(\tau, W, x)$ and substituting this periodic solution, we obtain \begin{equation} \label{eq31} \frac{dW_{ri}^p}{d\tau}=\delta H_i(\tau,W_{ri}^p,L(\tau,W_r^p)) \end{equation} where $L(\tau,W)=l(u^\ast+\tilde{u}+a\times \eta(\tau))$.
Then, defining $\tilde{W}_i=W_i-W_{ri}(\tau)$, we have \begin{equation} \label{eq32} \bar{\omega}\frac{dx}{d\tau}=\bar{F}(\tau,\tilde{W},x) \end{equation} \begin{equation} \label{eq33} \frac{d\tilde{W}_{i}}{d\tau}=\delta \bar{H}_i(\tau,\tilde{W},x) \end{equation} where $\tilde{W}=[\tilde{W}_1, \hdots, \tilde{W}_N]$ and \begin{equation} \label{eq34} \begin{split} \bar{H}_i(\tau,\tilde{W},x)&=H_i(\tau,\tilde{W}_i+W_{ri}^p(\tau),x)\\&-H_i(\tau,W_{ri}^p(\tau),L(\tau,W_r^p)) \end{split} \end{equation} \begin{equation} \label{eq35} \bar{F}(\tau,\tilde{W},x)=f\left(x,\beta(x,u^\ast+\underbrace{\tilde{u}-\tilde{u}_r^p}_{\tilde{W}_1}+\tilde{u}_r^p+a\times\eta(\tau))\right). \end{equation} Since $x=L(\tilde{W}+W_r^p)$ is the quasi-steady state, the reduced system is as follows: \begin{equation} \label{eq36} \frac{d\tilde{W}_{ri}}{d\tau}=\delta \bar{H}_i(\tau,\tilde{W}_{ri}+W_{ri}^p,L(\tilde{W}_r+W_r^p)) \end{equation} for which $\tilde{W}_r=0$ is an equilibrium at the origin; it is exponentially stable, as shown in Section 3.1.\\ \indent Now, the boundary layer model is studied in the time scale $t=\frac{\tau}{\bar{\omega}}$: \begin{equation} \label{eq37} \begin{split} \frac{dx_b}{dt}&=\bar{F}(\tau,\tilde{W},x_b+L(\tilde{W}_r+W_r^p))\\&=f(x_b+l(u),\beta(x_b+l(u),u)) \end{split} \end{equation} where $u=u^\ast+\tilde{u}+a\times \eta(\tau)$ is treated as a parameter independent of $t$.
Hence, since $f(l(u),\beta(l(u),u))\equiv 0$, the equilibrium of (\ref{eq37}) is $x_b=0$ and, according to Assumption \ref{as2}, this equilibrium is exponentially stable.\\ Given the exponential stability of the origin for both the reduced model and the boundary layer model, Tikhonov's theorem \cite{c32} implies that, for $0<\bar{\omega}<\omega^\ast$ with $\omega^\ast>0$, the solution $W(\tau)$ remains in a small neighborhood of $W_r(\tau)$ and thus converges exponentially to a small neighborhood of the periodic solution $W_r^p(\tau)$, which itself lies within a neighborhood of the equilibrium of the averaged system. As a result, $(x, \hat{u}_i, a_i, n_i)$ converges exponentially to $(l(u^\ast), u^\ast_i, 0, h_i\circ l(u^\ast))$, and consequently $J_i$ converges exponentially to its extremum value $h_i\circ l(u^\ast)$, which completes the proof. \end{proof} \begin{remark} \label{rem1} Convergence to the NE is fast because a large initial input estimation error produces a large excitation amplitude (see (\ref{eq20})), and the Jacobian matrix $\Gamma^{av}$, being proportional to this amplitude, is correspondingly large. Moreover, since the amplitude shrinks gradually, the oscillation is eliminated, which is why there is no steady-state oscillation. \end{remark} \begin{remark} \label{rem2} All proofs were based on the maximization of the payoff. For minimization, it suffices to take $k_i < 0$ so that the Jacobian matrix remains Hurwitz, since (\ref{eq9}) becomes $\frac{\partial^2 j_i\circ l}{\partial u_i^2}(u^\ast)>0$. \end{remark} \begin{remark} \label{rem3} According to \cite{c35}, systems such as (\ref{eq14}) possess an equilibrium manifold: if the system starts at any point on this manifold, it stays there and does not converge to zero.
Thus, if the system reaches the equilibrium manifold, $\tilde{u}_i$ may not converge to zero and instead converges to a constant. However, for small $\epsilon$, or equivalently for a large initial amplitude $a_i(0)$, exponential stability is guaranteed. A large initial amplitude corresponds to a large initial input estimation error and can be obtained by choosing the design parameters of the ES controller appropriately. As a result, the exponential stability established in this paper is guaranteed unless the design parameters are chosen to yield a small initial input estimation error. \end{remark} \section{Numerical Example and Simulations} \begin{figure} \caption{Action values of players as a function of time by implementing the proposed method in this paper} \label{fig2} \end{figure} \begin{figure} \caption{History of the players' payoff values by employing the proposed method in this paper} \label{fig3} \end{figure} \begin{figure} \caption{Action values of players as a function of time when they employ the method proposed in this paper as compared to the classical extremum seeking methods} \label{fig4} \end{figure} \label{sec4} In this section, we consider a differential non-cooperative game in order to investigate the advantages of the proposed algorithm in detail. Oligopoly games with nonlinear demand and payoff functions \cite{c36} are real-world motivations for the class of games we focus on. In a Cournot oligopoly, several firms produce homogeneous goods, and the demands and costs are nonlinear and often dynamic. Therefore, an oligopoly problem with dynamic demand and costs is one of many motivations for our evaluation.
Consider the following differential non-cooperative game, which is a form of duopoly \cite{c23}: \begin{equation} \begin{split} \label{eq38} \dot{x}_1&=-4x_1+x_1x_2+u_1\\ \dot{x}_2&=-4x_2+u_2 \end{split} \end{equation} \begin{equation} \begin{split} \label{eq39} J_1&=-16x_1^2+8x_1^2x_2-x_1^2x_2^2-6x_1x_2^2+\frac{773}{32}x_1x_2-\frac{5}{8}x_1\\ J_2&=-64x_2^3+48x_1x_2-12x_1x_2^2 \end{split} \end{equation} The equilibrium states are calculated as \begin{equation} \label{eq40} x_1^e=\frac{4u_1}{16-u_2}, \hspace{10pt} x_2^e=\frac{1}{4}u_2. \end{equation} Therefore, we have the following Jacobian: \begin{equation} \label{eq41} \left[\begin{array}{cc} -4+\frac{1}{4}u_2& \frac{4u_1}{16-u_2}\\0&-4\end{array}\right] \end{equation} This matrix is Hurwitz for $u_2<16$; thus, the action set of the players is restricted to $\{(u_1,u_2)\in R^2 \vert u_1, u_2\geq 0, u_2<16\}$. Hence, $x^e=(x_1^e, x_2^e)$ is exponentially stable for all $(u_1, u_2)$ in this action set. Then, at $x=x^e$, the payoffs are given as follows: \begin{equation} \label{eq42} J_1=-u_1^2+\frac{3}{2}u_1u_2-\frac{5}{32}u_1 \end{equation} \begin{equation} \label{eq43} J_2=-u_2^3+3u_1u_2 \end{equation} Therefore, the system has two Nash equilibria $(u_{11}^\ast, u_{21}^\ast)=(\frac{25}{64}, \frac{5}{8})$ and $(u_{12}^\ast, u_{22}^\ast)=(\frac{1}{64}, \frac{1}{8})$. The equilibrium $(u_{11}^\ast, u_{21}^\ast)$ satisfies Assumptions \ref{as3} and \ref{as4}, so it is stable, while $(u_{12}^\ast, u_{22}^\ast)$ does not satisfy them; hence, we run the algorithm to reach $(u_{11}^\ast, u_{21}^\ast)$. The design parameters are selected as $k_1=1.273$, $k_2=0.9046$, $b_1=0.7$, $b_2=0.5$, $\omega_{l1}=0.9$, $\omega_{l2}=1.5$, $\omega_{h1}=0.12$, $\omega_{h2}=0.2$, $\omega_{1}=2$, $\omega_{2}=3$ and $\phi_1=\phi_2=0$. The initial value of $(\hat{u}_1,\hat{u}_2)$ is $(\hat{u}_1(0),\hat{u}_2(0))=(0.25,0.9)$, and the state $(x_1,x_2)$ is initialized at the origin.
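To make the closed loop concrete, a minimal forward-Euler sketch of this example is given below. It is illustrative only: the initial perturbation amplitudes $a_i(0)$ are an assumption (they are not reported above), while the plant (\ref{eq38}), the payoffs (\ref{eq39}) and all other parameter values are taken directly from the example.

```python
import numpy as np

def payoffs(x1, x2):
    # measured payoffs, eq. (39)
    J1 = (-16*x1**2 + 8*x1**2*x2 - x1**2*x2**2 - 6*x1*x2**2
          + (773/32)*x1*x2 - (5/8)*x1)
    J2 = -64*x2**3 + 48*x1*x2 - 12*x1*x2**2
    return J1, J2

def simulate(T=100.0, dt=1e-3, a0=(0.2, 0.2)):  # a0 is an assumed value
    k  = np.array([1.273, 0.9046])   # gains k_i
    b  = np.array([0.7, 0.5])        # amplitude gains b_i
    wl = np.array([0.9, 1.5])        # low-pass cut-offs omega_li
    wh = np.array([0.12, 0.2])       # high-pass cut-offs omega_hi
    w  = np.array([2.0, 3.0])        # probing frequencies, phi_i = 0
    u_hat = np.array([0.25, 0.9])    # initial action estimates
    a = np.array(a0, dtype=float)    # adaptive perturbation amplitudes
    n = np.zeros(2)                  # washout-filter states
    x = np.zeros(2)                  # plant states start at the origin
    for step in range(int(T / dt)):
        t = step * dt
        eta = np.sin(w * t)
        u = u_hat + a * eta          # applied actions
        J = np.array(payoffs(x[0], x[1]))
        # plant dynamics, eq. (38)
        x = x + dt * np.array([-4*x[0] + x[0]*x[1] + u[0],
                               -4*x[1] + u[1]])
        # extremum seeking laws, cf. eq. (5): demodulation,
        # amplitude decay, and high-pass filtering of the payoff
        u_hat = u_hat + dt * k * (J - n) * eta
        a = a + dt * (-wl * a + b * wl * (J - n))
        n = n + dt * (-wh * n + wh * J)
    return u_hat, a

u_hat, a = simulate()
```

Because the amplitudes $a_i$ decay, any residual oscillation in $\hat{u}_i$ should shrink over time, which is the qualitative behaviour reported in \Cref{fig2}.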
\Cref{fig2} illustrates the history of the action values of each player. \Cref{fig3} depicts the payoff values of each player as a function of time. \Cref{fig4} compares the algorithm proposed in this paper with the conventional extremum seeking control algorithms. This figure confirms the effectiveness of the proposed algorithm, which eliminates the steady-state oscillation and converges to the NE faster than the traditional extremum seeking control algorithms.\\ \section{Discussion} Differential games offer an expressive framework for a wide range of multi-agent problems involving agents whose decisions affect one another. Thus, our proposed algorithm, which offers an effective way of reaching the Nash equilibrium in differential games, can be beneficial in many real-world multi-agent problems and applications. Supply chain management is one of the main streams of applications of differential games. In particular, our algorithm can address the design of coordination management for duopolists and oligopolists. Such problems can be cast as a multi-retailer differential game in which a single manufacturer sells a particular product to several retailers in the same market and each firm tries to maximize its objective function \cite{ouardighi2013dynamic}. Smart grids with dynamic demand-side management are another broad application of our algorithm. In such problems, a differential game can model the distribution of demand-side management, with the price characterized as a dynamic state \cite{c11,c12}. Other areas that can directly benefit from our proposed algorithm are pursuit-evasion scenarios and active defense \cite{prokopov2013linear}. Pursuit-evasion scenarios can be modeled as differential games featuring conflicts among different players, namely attackers, defenders, and the target.
In such cases, our algorithm can be used to develop a feasible guidance law for an attacker \cite{qilong2018differential}. While our assumptions may not hold in all active defense problems, the endgame phase of active defense often follows dynamics similar to those considered in this paper. Moreover, although kinematic control of a single-manipulator system is well studied by methods such as neural networks \cite{li2021neural,tan2022discrete} and variable damping control \cite{zahedi2021variable,zahedi2022user}, one can go beyond the direct capabilities of our proposed algorithm and extend the problem to the kinematic control of multi-manipulator systems. \section{Conclusion} \label{sec5} In this paper, the fast extremum seeking approach introduced in \cite{c27} has been adapted to N-player noncooperative differential games. It was shown that each player can attain the Nash equilibrium (NE) by generating its own action without detailed information about the payoff functions, the model, or the other players' actions. Moreover, the NE was achieved without oscillation and faster than with the classical extremum seeking-based approaches, so the undesirable influence of steady-state oscillation was eliminated. The stability and convergence analysis was presented, proving convergence to the NE without steady-state oscillation. Furthermore, the effectiveness of the proposed method was demonstrated through a numerical example, and simulation comparisons between the proposed method and the conventional extremum seeking methods illustrated the superiority of the proposed algorithm.
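The closed-form time averages used throughout the averaging analysis (and collected in the Appendix below) can be spot-checked numerically. The following sketch is illustrative only: the frequency triple $(2, 3, 11)$ is an assumption chosen so that the non-resonance conditions of Theorem \ref{th1} hold, and since these frequencies share the common period $2\pi$, averaging over one period reproduces the limit $T\to+\infty$.

```python
import numpy as np

# sample one full common period of eta_i(t) = sin(omega_i * t)
t = np.linspace(0.0, 2*np.pi, 200001)
eta = {w: np.sin(w * t) for w in (2, 3, 11)}

def avg(f):
    # rectangle rule over one full period (drop the duplicated endpoint);
    # this is spectrally accurate for trigonometric polynomials
    return f[:-1].mean()

checks = {
    "eta_i":               avg(eta[2]),                         # target 0
    "eta_i^2":             avg(eta[2]**2),                      # target 1/2
    "eta_i^3":             avg(eta[2]**3),                      # target 0
    "eta_i^4":             avg(eta[2]**4),                      # target 3/8
    "eta_i*eta_j":         avg(eta[2] * eta[3]),                # target 0
    "eta_i^2*eta_j":       avg(eta[2]**2 * eta[3]),             # target 0
    "eta_i^3*eta_j":       avg(eta[2]**3 * eta[3]),             # target 0
    "eta_i^2*eta_j^2":     avg(eta[2]**2 * eta[3]**2),          # target 1/4
    "eta_i*eta_j*eta_k":   avg(eta[2] * eta[3] * eta[11]),      # target 0
    "eta_i*eta_j^2*eta_k": avg(eta[2] * eta[3]**2 * eta[11]),   # target 0
}
```

The targets are the analytic values listed in the Appendix; a resonant triple such as $(2, 3, 5)$, with $\omega_k=\omega_i+\omega_j$, would violate the stated conditions.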
\\ \appendix \section{Integrals computation} Several integrals, together with the Taylor polynomial approximation of $h_i\circ l$, must be computed in order to obtain (\ref{eq23}). For $i, j, k\in\{1, \hdots, N\}$ we have \begin{equation} \label{eq44} \begin{split} &\lim_{T\to+\infty}\frac{1}{T}\int_0^T\eta_i(\tau)\,d\tau=0, \hspace{10pt} \lim_{T\to+\infty}\frac{1}{T}\int_0^T\eta_i^2(\tau)\,d\tau=\frac{1}{2},\\ &\lim_{T\to+\infty}\frac{1}{T}\int_0^T\eta_i^3(\tau)\,d\tau=0, \hspace{10pt} \lim_{T\to+\infty}\frac{1}{T}\int_0^T\eta_i^4(\tau)\,d\tau=\frac{3}{8}, \end{split} \end{equation} and, under the assumptions $\omega_i\not=\omega_j$, $2\omega_i\not=\omega_j$, and $3\omega_i\not=\omega_j$, \begin{equation} \label{eq45} \begin{split} &\lim_{T\to+\infty}\frac{1}{T}\int_0^T\eta_i(\tau)\eta_j(\tau)\,d\tau=0,\\&\lim_{T\to+\infty}\frac{1}{T}\int_0^T\eta_i^2(\tau)\eta_j(\tau)\,d\tau=0, \\ &\lim_{T\to+\infty}\frac{1}{T}\int_0^T\eta_i^3(\tau)\eta_j(\tau)\,d\tau=0,\\&\lim_{T\to+\infty}\frac{1}{T}\int_0^T\eta_i^2(\tau)\eta_j^2(\tau)\,d\tau=\frac{1}{4}, \end{split} \end{equation} and, under the assumptions $\omega_i\not=\omega_j+\omega_k$, $\omega_i\not=2\omega_j+\omega_k$, and $2\omega_i\not=\omega_j+\omega_k$, \begin{equation} \label{eq46} \begin{split} &\lim_{T\to+\infty}\frac{1}{T}\int_0^T\eta_i(\tau)\eta_j(\tau)\eta_k(\tau)\,d\tau=0,\\ &\lim_{T\to+\infty}\frac{1}{T}\int_0^T\eta_i(\tau)\eta_j^2(\tau)\eta_k(\tau)\,d\tau=0.
\end{split} \end{equation} \section{Taylor polynomial approximation} The Taylor polynomial approximation \cite{c34} of $h_i\circ l$ requires $h_i\circ l$ to be $(n+1)$-times differentiable, which yields the following equation: \begin{equation} \begin{split} \label{eqa47} &h_i\circ l(\tilde{u}^e+a^{av}\times\eta)=\\&\sum_{\alpha_1+\hdots+\alpha_N=0}^n\frac{\partial^{(\alpha_1+\hdots+\alpha_N)}}{\partial u_1^{\alpha_1}\hdots\partial u_N^{\alpha_N}}\frac{h_i\circ l(0)}{\alpha_1!\hdots\alpha_N!}(\tilde{u}^e+a^{av}\times\eta)^\alpha\\&+\sum_{\alpha_1+\hdots+\alpha_N=n+1}\frac{\partial^{(\alpha_1+\hdots+\alpha_N)}}{\partial u_1^{\alpha_1}\hdots\partial u_N^{\alpha_N}}\frac{h_i\circ l(\iota)}{\alpha_1!\hdots\alpha_N!}(\tilde{u}^e+a^{av}\times\eta)^\alpha\\&=\sum_{\alpha_1+\hdots+\alpha_N=0}^n\frac{h_i\circ l(0)}{\alpha_1!\hdots\alpha_N!}(\tilde{u}^e+a^{av}\times\eta)^\alpha\\&+O(\max_i a_i^{av^{n+1}}) \end{split} \end{equation} where $\iota$ is a point on the line segment joining $0$ and $\tilde{u}^e+a^{av}\times\eta(\tau)$, $\alpha=(\alpha_1, \hdots, \alpha_N)$ and $u^\alpha$ denotes $u_1^{\alpha_1}\hdots u_N^{\alpha_N}$. Additionally, $O(\max_i a_i^{av^{n+1}})$ is obtained by substituting (\ref{eq22}). In computing (\ref{eq23}), we select $n=3$ to capture the third-order derivative effects on the system, as in \cite{c23}. \end{document}
\begin{document} \begin{abstract} In \cite{twoapp}, Hjorth proved from $ZF + AD + DC$ that there is no sequence of distinct $\boldsymbol{\Sigma^1_2}$ sets of length $\boldsymbol{\delta^1_2}$. \cite{hra} extends Hjorth's technique to show there is no sequence of distinct $\boldsymbol{\Sigma^1_{2n}}$ sets of length $\boldsymbol{\delta^1_{2n}}$. Sargsyan conjectured an analogous property is true for any regular Suslin pointclass in $L(\mathbb{R})$ --- i.e. if $\kappa$ is a regular Suslin cardinal in $L(\mathbb{R})$, then there is no sequence of distinct $\kappa$-Suslin sets of length $\kappa^+$ in $L(\mathbb{R})$. We prove this in the case that the pointclass $S(\kappa)$ is inductive-like. \end{abstract} \title{Unreachability of Inductive-Like Pointclasses in $L(\R)$} \section{Introduction} \label{intro chapter} \begin{definition} For a boldface pointclass $\boldsymbol{\Gamma}$, we say $\lambda$ is $\boldsymbol{\Gamma}$-reachable if there is a sequence of distinct $\boldsymbol{\Gamma}$ sets of length $\lambda$ and $\lambda$ is $\boldsymbol{\Gamma}$-unreachable if $\lambda$ is not $\boldsymbol{\Gamma}$-reachable. \end{definition} The problem of unreachability is to determine the minimal $\lambda$ which is $\boldsymbol{\Gamma}$-unreachable for each pointclass $\boldsymbol{\Gamma}$. As this problem is trivial assuming the axiom of choice, unreachability is exclusively studied under determinacy assumptions. Under $AD$, unreachability yields an interesting measure of the complexity of a pointclass. An early result in this area is Harrington's theorem that there is no injection of $\omega_1$ into any pointclass strictly below the pointclass of Borel sets in the Wadge hierarchy (see \cite{Harrington}). \begin{theorem}[Harrington] If $\beta < \omega_1$, then $\omega_1$ is $\boldsymbol{\Pi^0_\beta}$-unreachable.
\end{theorem} A recent application of Harrington's theorem was the resolution of the decomposability conjecture by Marks and Day (see \cite{decomposability}). Prior work on unreachability has focused on levels of the projective hierarchy. Kechris gave a lower bound on the complexity of the pointclass needed to reach $\boldsymbol{\delta^1_{2n+2}}$ (see \cite{Kechris}). \begin{theorem}[Kechris] Assume $ZF + AD + DC$. $\boldsymbol{\delta^1_{2n+2}}$ is $\boldsymbol{\Delta^1_{2n+1}}$-unreachable. \end{theorem} In \cite{Kechris}, Kechris conjectured his own result could be strengthened to $\boldsymbol{\delta^1_{2n+2}}$ is $\boldsymbol{\Delta^1_{2n+2}}$-unreachable. He also made a second, stronger conjecture that $\boldsymbol{\delta^1_{2n+2}}$ is $\boldsymbol{\Sigma^1_{2n+2}}$-unreachable. Jackson proved the former in \cite{Jackson}. \begin{theorem}[Jackson] Assume $ZF + AD + DC$. $\boldsymbol{\delta^1_{2n+2}}$ is $\boldsymbol{\Delta^1_{2n+2}}$-unreachable. \end{theorem} \cite{Jackson} also made progress on Kechris's second conjecture by showing there is no strictly increasing sequence of $\boldsymbol{\Sigma^1_{2n+2}}$ sets of length $\boldsymbol{\delta^1_{2n+2}}$. In fact, Jackson and Martin proved the following more general theorem. \begin{theorem}[Jackson] \label{no strictly inc} Assume $ZF +AD +DC$. Suppose $\kappa$ is a Suslin cardinal, and $\kappa$ is either a successor cardinal or a regular limit cardinal. Then there is no strictly increasing (or strictly decreasing) sequence $\langle A_\alpha : \alpha < \kappa^+\rangle$ contained in $S(\kappa)$. \end{theorem} But the resolution of Kechris's second conjecture eluded the traditional techniques of descriptive set theory. Hjorth pioneered the use of inner model theory in this area to resolve one case of Kechris's second conjecture (see \cite{twoapp}). \begin{theorem}[Hjorth] \label{hjorth result} Assume $ZF + AD + DC$. $\boldsymbol{\delta^1_{2}}$ is $\boldsymbol{\Sigma^1_{2}}$-unreachable. 
\end{theorem} Kechris also pointed out the following corollary of Hjorth's result. \begin{corollary} Assume $ZF + AD + DC$. A $\boldsymbol{\Pi^1_2}$ equivalence relation has either $2^{\aleph_0}$ or $\leq \aleph_1$ equivalence classes. \end{corollary} Hjorth's proof of Theorem \ref{hjorth result} involved an application of the Kechris-Martin Theorem, which precluded an easy generalization of his technique to other projective pointclasses. The rest of Kechris's second conjecture survived another two decades, until Sargsyan found a modification of Hjorth's proof which generalized to the rest of the projective hierarchy (see \cite{hra}). \begin{theorem}[Sargsyan] \label{projective thm} Assume $ZF + AD + DC$. $\boldsymbol{\delta^1_{2n+2}}$ is $\boldsymbol{\Sigma^1_{2n+2}}$-unreachable. \end{theorem} The following result of Kechris shows Sargsyan's theorem is optimal. \begin{theorem}[Kechris] \label{strictly inc sequence} Assume $ZF + AD + DC$. Suppose $\kappa$ is a Suslin cardinal. Then there is a strictly increasing sequence $\langle A_\alpha : \alpha < \kappa \rangle$ contained in $S(\kappa)$. \end{theorem} Sargsyan's theorem resolves the problem of unreachability for every level of the projective hierarchy. He conjectured an analogous result holds for every regular Suslin pointclass. \begin{conjecture}[Sargsyan] \label{Grigor conjecture} Assume $AD^+$. Suppose $\kappa$ is a regular Suslin cardinal. Then $\kappa^+$ is $S(\kappa)$-unreachable. \end{conjecture} Below, we prove part of Conjecture \ref{Grigor conjecture}. \begin{theorem} \label{main thm v2} Assume $ZF + AD + DC + V=L(\mathbb{R})$. Suppose $\kappa$ is a regular Suslin cardinal and $S(\kappa)$ is inductive-like. Then $\kappa^+$ is $S(\kappa)$-unreachable. \end{theorem} $ZF + AD + DC + V=L(\mathbb{R})$ implies $AD^+$, so Theorem \ref{main thm v2} is a special case of Conjecture \ref{Grigor conjecture}. 
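As a simple illustration of reachability at the bottom of the hierarchy (a standard fact, not specific to the results above): $\omega_1$ is reachable by the pointclass of Borel sets. For a countable ordinal $\alpha$, let
\[
WO_{<\alpha} = \{x\in\mathbb{R} : x \text{ codes a wellordering of } \omega \text{ of order type less than } \alpha\}.
\]
Each $WO_{<\alpha}$ is Borel, and $\langle WO_{<\alpha} : \alpha < \omega_1 \rangle$ is a strictly increasing, hence distinct, sequence of length $\omega_1$. Harrington's theorem rules out any such sequence at a fixed level $\boldsymbol{\Pi^0_\beta}$ of the Borel hierarchy.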
Theorem \ref{strictly inc sequence} demonstrates that Theorem \ref{main thm v2} is optimal for inductive-like pointclasses. Let $\boldsymbol{\Gamma} = S(\kappa)$ for $\kappa$ as in Theorem \ref{main thm v2}. Then $\kappa = \boldsymbol{\delta_\Gamma}$. $ZF + AD + DC + V=L(\mathbb{R})$ also implies any inductive-like pointclass $\boldsymbol{\Gamma}$ is of the form $S(\kappa)$ for some regular Suslin cardinal $\kappa$. So an equivalent formulation of Theorem \ref{main thm v2} is the following: \begin{theorem} \label{main thm} Assume $ZF + AD + DC + V=L(\mathbb{R})$. Suppose $\boldsymbol{\Gamma}$ is an inductive-like pointclass. Then $\boldsymbol{\delta_{\Gamma}^+}$ is $\boldsymbol{\Gamma}$-unreachable. \end{theorem} Our proof of Theorem \ref{main thm} extends the inner model theory approach pioneered in \cite{twoapp}. Our technique also gives an alternative proof of Theorem \ref{projective thm}. \section{Background} We will assume the reader is familiar with the basics of descriptive set theory presented in \cite{mosdst} and the theory of iteration strategies for premice covered in \cite{ooimt}. The rest of the necessary background is covered below. In Section \ref{dst in L(R)}, we summarize Steel's classification of the scaled pointclasses and Suslin pointclasses in $L(\mathbb{R})$. Section \ref{woodin and iteration section} reviews the relationship between Woodin cardinals and iteration trees. Two inner model constructions are covered in Sections \ref{m-s construction section} and \ref{s-const section}. In Section \ref{suitable mice section}, we review results from the core model induction demonstrating the existence of mice corresponding to inductive-like pointclasses in $L(\mathbb{R})$.\\ \subsection{The Pointclasses of $L(\mathbb{R})$} \label{dst in L(R)} We will assume for this section $ZF + DC + AD + V = L(\mathbb{R})$.
All of the results in this section are due to Steel and are proven outright or else implicit in \cite{scales}. The boldface pointclasses we are interested in all appear in a hierarchy we will now define. If $\boldsymbol{\Gamma}$ and $\boldsymbol{\Lambda}$ are non-selfdual pointclasses, say $\{\boldsymbol{\Gamma},\boldsymbol{\Gamma^c}\} <_w \{\boldsymbol{\Lambda}, \boldsymbol{\Lambda^c}\}$ if $\boldsymbol{\Gamma} \subset \boldsymbol{\Lambda} \cap \boldsymbol{\Lambda^c}$. This is a wellordering by Wadge's Lemma. For $\alpha < \Theta$, consider the $\alpha$th pair $\{\boldsymbol{\Gamma},\boldsymbol{\Gamma^c}\}$ in this wellordering such that $\boldsymbol{\Gamma}$ or $\boldsymbol{\Gamma^c}$ is closed under projection. Let $\boldsymbol{\Sigma^1_\alpha}$ denote whichever of the two is closed under projection --- if both are, $\boldsymbol{\Sigma^1_\alpha}$ denotes whichever has the separation property. Let $\boldsymbol{\Pi^1_\alpha} = (\boldsymbol{\Sigma^1_\alpha})^c$. For any pointclass $\boldsymbol{\Gamma}$, we define \begin{align*} &\boldsymbol{\Delta_\Gamma} = \boldsymbol{\Gamma} \cap \boldsymbol{\Gamma^c} \text{ and} \\ &\boldsymbol{\delta_\Gamma} = \sup \{|\leq^*| :\, \leq^* \text{ is a prewellordering in } \boldsymbol{\Delta_\Gamma}\}. \end{align*} Let $\boldsymbol{\delta^1_\alpha} = \boldsymbol{\delta_{\Sigma^1_\alpha}}$. The pointclasses $\{\boldsymbol{\Sigma^1_n}: n\in\omega \}$ and $\{\boldsymbol{\Pi^1_n}: n\in\omega \}$ are the usual levels of the projective hierarchy. We will refer to the collection of pointclasses $\{\boldsymbol{\Sigma^1_\alpha}: \alpha\in ON \} \cup \{\boldsymbol{\Pi^1_\alpha}: \alpha \in ON\}$ as the extended projective hierarchy. We now define a hierarchy slightly coarser than the one above.
If $n\in \omega$ and $\alpha\in ON$, we say a pointset $A$ is in the pointclass $\boldsymbol{\Sigma_n}(J_\alpha(\mathbb{R}))$ if there is a $\Sigma_n$ formula $\phi$ with real parameters such that $A = \{x : J_\alpha(\mathbb{R}) \models \phi[x]\}$. $\boldsymbol{\Pi_n}(J_\alpha(\mathbb{R}))$ is defined analogously with $\Pi_n$-formulas.\footnote{See \cite{scales} for the definition of $J_\alpha(\mathbb{R})$. Alternatively, the reader will not lose too much of importance by pretending $J_\alpha(\mathbb{R}) = L_\alpha(\mathbb{R})$.} The Levy hierarchy consists of all pointclasses of the form $\boldsymbol{\Sigma_n}(J_\alpha(\mathbb{R}))$ or $\boldsymbol{\Pi_n}(J_\alpha(\mathbb{R}))$ for some $n$ and $\alpha$. It is clear any pointclass in the Levy hierarchy equals $\boldsymbol{\Sigma^1_\alpha}$ or $\boldsymbol{\Pi^1_\alpha}$ for some $\alpha$, but the converse is false. In this section, we will classify the scaled pointclasses within the Levy hierarchy, relate the Levy hierarchy to the extended projective hierarchy, and classify the regular Suslin pointclasses.\\ \subsubsection{Classification of Scaled Pointclasses} A $\Sigma_1$-gap is a maximal interval $[\alpha,\beta]$ such that for any real $x$, the $\Sigma_1$-theory of $x$ is the same in $J_\alpha(\mathbb{R})$ and $J_\beta(\mathbb{R})$. We say the gap $[\alpha,\beta]$ is admissible if $J_\alpha(\mathbb{R})\models KP$, equivalently, if the pointclass $\boldsymbol{\Sigma_1}(J_\alpha(\mathbb{R}))$ is closed under coprojection. Suppose $[\alpha,\beta]$ is an admissible gap. Let $n_\beta\in\mathbb{N}$ be least such that the pointclass $\boldsymbol{\Sigma_{n_\beta}}(J_\beta(\mathbb{R}))$ is not contained in $J_\beta(\mathbb{R})$. 
We say $[\alpha,\beta]$ is a strong gap if for any $b\in J_\beta(\mathbb{R})$, there is $\beta' < \beta$ and $b' \in J_{\beta'}(\mathbb{R})$ such that the $\Sigma_{n_\beta}$ and $\Pi_{n_\beta}$ theories of $b'$ in $J_{\beta'}(\mathbb{R})$ are the same as the $\Sigma_{n_\beta}$ and $\Pi_{n_\beta}$ theories of $b$ in $J_{\beta}(\mathbb{R})$. Otherwise, we say $[\alpha,\beta]$ is weak. \begin{theorem} \label{scales classification} Suppose $\boldsymbol{\Gamma}$ is a pointclass in the Levy hierarchy. If $\boldsymbol{\Gamma}$ is scaled, then one of the following holds. \begin{enumerate} \item $\boldsymbol{\Gamma} = \boldsymbol{\Sigma}_{2k+1}(J_\alpha(\mathbb{R}))$ for some $k\in \omega$ and some $\alpha$ beginning an inadmissible gap. \item $\boldsymbol{\Gamma} = \boldsymbol{\Pi}_{2k+2}(J_\alpha(\mathbb{R}))$ for some $k\in \omega$ and some $\alpha$ beginning an inadmissible gap. \item $\boldsymbol{\Gamma} = \boldsymbol{\Sigma}_1(J_\alpha(\mathbb{R}))$ for some $\alpha$ beginning an admissible gap. \item $\boldsymbol{\Gamma} = \boldsymbol{\Sigma}_{n_{\beta}+2k}(J_\beta(\mathbb{R}))$ for some $k\in \omega$ and some $\beta$ ending a weak gap. \item $\boldsymbol{\Gamma} = \boldsymbol{\Pi}_{n_{\beta}+2k + 1}(J_\beta(\mathbb{R}))$ for some $k\in \omega$ and some $\beta$ ending a weak gap. \end{enumerate} \end{theorem} \begin{definition} A self-justifying system (sjs) is a countable set $\mathcal{B} \subseteq \mathcal{P}(\mathbb{R})$ which is closed under complements and has the property that every $B\in\mathcal{B}$ admits a scale $\vec{\psi}$ such that $\leq_{\psi_n} \in \mathcal{B}$ for all $n$. \end{definition} \begin{definition} Let $z\in\mathbb{R}$ and $\gamma\in ON$. $OD^{<\gamma}(z)$ is the set of pointsets which are ordinal definable from the parameter $z$ in $J_\xi(\mathbb{R})$ for some $\xi<\gamma$. $OD^{<\gamma}$ denotes $OD^{<\gamma}(0)$.
\end{definition} The proof of Theorem \ref{scales classification} also gives: \begin{theorem} \label{where sjs appears} Suppose $[\alpha,\beta]$ is an admissible gap. Let $\beta'$ be the least ordinal such that there is a scale for a universal $\boldsymbol{\Pi_1}(J_\alpha(\mathbb{R}))$-set definable over $J_{\beta'}(\mathbb{R})$. Then there is $z\in\mathbb{R}$ and a sjs $\mathcal{B} \subset OD^{<\beta'}(z)$ such that a universal $\boldsymbol{\Pi_1}(J_\alpha(\mathbb{R}))$-set is in $\mathcal{B}$ and either \begin{enumerate} \item $[\alpha,\beta]$ is weak and $\beta' = \beta$ or \item $[\alpha,\beta]$ is strong and $\beta' = \beta+1$. \end{enumerate} \end{theorem} \begin{remark} \label{form of ind-like} Suppose $\boldsymbol{\Gamma}$ is a boldface inductive-like pointclass in $L(\mathbb{R})$. Then \begin{enumerate} \item $\boldsymbol{\Gamma} = \boldsymbol{\Sigma_1}(J_\alpha(\mathbb{R}))$ for some $\alpha$ beginning an admissible gap, \item there is $x\in \mathbb{R}$ such that letting $\Gamma$ be the class of pointsets which are $\Sigma_1$-definable over $J_\alpha(\mathbb{R})$ from the parameter $x$, $\boldsymbol{\Gamma}$ is the closure of $\Gamma$ under preimages by continuous functions, and \item \label{Gamma is (Sigma^2_1)^Delta} $\boldsymbol{\Gamma} = \boldsymbol{(\Sigma^2_1)^{\Delta_{\Gamma}}}$. \end{enumerate} \end{remark} \subsubsection{Relationship between the Levy Hierarchy and the Extended Projective Hierarchy} \begin{definition} Suppose $\lambda < \Theta$ is a limit ordinal.
We say \begin{itemize} \item $\lambda$ is type I if $\boldsymbol{\Sigma^1_\lambda}$ is closed under finite intersection but not countable intersection, \item $\lambda$ is type II if $\boldsymbol{\Sigma^1_\lambda}$ is not closed under finite intersection, \item $\lambda$ is type III if $\boldsymbol{\Sigma^1_\lambda}$ is closed under countable intersection but not coprojection, and \item $\lambda$ is type IV if $\boldsymbol{\Sigma^1_\lambda}$ is closed under coprojection. \end{itemize} \end{definition} Let $\langle \delta_\alpha : \alpha < \Theta \rangle$ enumerate the ordinals $\delta$ such that there are no sets of reals in $J_{\delta+1}(\mathbb{R}) \backslash J_\delta(\mathbb{R})$. Let $n_\alpha$ be minimal such that $\boldsymbol{\Sigma_{n_\alpha}}(J_{\delta_\alpha}(\mathbb{R})) \not\subset J_{\delta_\alpha}(\mathbb{R})$. \begin{theorem} \label{extended projective hierarchy to levy hierarchy} Suppose $\alpha < \Theta$. \begin{enumerate} \item If $\omega\alpha$ is type I, then $\boldsymbol{\Sigma^1_{\omega\alpha + k}} = \boldsymbol{\Sigma_{n_\alpha+k}}(J_{\delta_\alpha}(\mathbb{R}))$ for all $k\in \omega$. \item If $\omega\alpha$ is type II or III, then $\boldsymbol{\Sigma^1_{\omega\alpha + k + 1}} = \boldsymbol{\Sigma_{n_\alpha+k}}(J_{\delta_\alpha}(\mathbb{R}))$ for all $k\in \omega$. \item If $\omega\alpha$ is type IV, then $\boldsymbol{\Pi^1_{\omega\alpha}} = \boldsymbol{\Sigma_{n_\alpha}}(J_{\delta_\alpha}(\mathbb{R}))$ and \\$\boldsymbol{\Sigma^1_{\omega\alpha + k+1}} = \boldsymbol{\Sigma_{n_\alpha+k}}(J_{\delta_\alpha}(\mathbb{R}))$ for all $k\in \omega\backslash\{0\}$. \end{enumerate} \end{theorem} \subsubsection{Classification of Suslin Pointclasses} There is a related classification of the Suslin pointclasses. For $\alpha < \Theta$, let $\kappa_\alpha$ be the $\alpha$th Suslin cardinal.
Let $\nu_\alpha$ be the $\alpha$th ordinal $\nu$ such that $\boldsymbol{\Sigma^1_\nu}$ or $\boldsymbol{\Pi^1_\nu}$ is scaled. \begin{theorem} \label{classification of suslin pointclasses} Let $\lambda < \boldsymbol{\delta^2_1}$ be a limit cardinal and $\nu = \sup\{\nu_\alpha : \alpha < \lambda\}$. \begin{enumerate} \item If $\nu$ is type I, then for all $k\in \omega$ \begin{itemize} \item $\boldsymbol{\Sigma^1_{\nu+2k}}$ and $\boldsymbol{\Pi^1_{\nu+2k+1}}$ are scaled, \item $S(\kappa_{\lambda + k}) = \boldsymbol{\Sigma^1_{\nu+k+1}}$, \item $\kappa_{\lambda+2k+1} = \boldsymbol{\delta^1_{\nu+2k+1}} = (\kappa_{\lambda+2k})^+$, and \item $cof(\kappa_{\lambda+2k}) = \omega$. \end{itemize} \item If $\nu$ is type II or III, then for all $k\in \omega$ \begin{itemize} \item $\boldsymbol{\Sigma^1_{\nu+2k+1}}$ and $\boldsymbol{\Pi^1_{\nu+2k}}$ are scaled, \item $S(\kappa_{\lambda + k}) = \boldsymbol{\Sigma^1_{\nu+k+1}}$, \item $\kappa_{\lambda+2k+2} = \boldsymbol{\delta^1_{\nu+2k+2}} = (\kappa_{\lambda+2k+1})^+$, and \item $cof(\kappa_{\lambda+2k+1}) = \omega$. \end{itemize} \item \label{ind-like case of suslin classification} If $\nu$ is type IV, then $\boldsymbol{\Pi^1_\nu}$ is scaled, $S(\kappa_\lambda) = \boldsymbol{\Pi^1_\nu}$, and for all $k\in \omega$, letting $\mu = \nu_{\lambda+1}$, \begin{itemize} \item $\boldsymbol{\Sigma^1_{\mu+2k}}$ and $\boldsymbol{\Pi^1_{\mu+2k+1}}$ are scaled, \item $S(\kappa_{\lambda + k + 1}) = \boldsymbol{\Sigma^1_{\mu+k+1}}$, \item $\kappa_{\lambda+2k+2} = \boldsymbol{\delta^1_{\mu+2k+1}} = (\kappa_{\lambda+2k+1})^+$, and \item $cof(\kappa_{\lambda+2k+1}) = \omega$. \end{itemize} \end{enumerate} \end{theorem} \begin{corollary} \label{classification of regular Suslins} Suppose $\boldsymbol{\Gamma} = S(\kappa)$ for a regular Suslin cardinal $\kappa \leq \boldsymbol{\delta^2_1}$. Then one of the following holds.
\begin{enumerate} \item $\boldsymbol{\Gamma} = \boldsymbol{\Sigma}_{2k+1}(J_\alpha(\mathbb{R}))$ for some $k\in \omega$ and some $\alpha$ beginning an inadmissible gap. \item $\boldsymbol{\Gamma} = \boldsymbol{\Sigma}_1(J_\alpha(\mathbb{R}))$ for some $\alpha$ beginning an admissible gap. \item $\boldsymbol{\Gamma} = \boldsymbol{\Sigma}_{n_{\beta}+2k}(J_\beta(\mathbb{R}))$ for some $k\in \omega$ and some $\beta$ ending a weak gap. \end{enumerate} \end{corollary} \subsection{Woodin Cardinals and Iterations} \label{woodin and iteration section} We borrow most of the notation of premice and iteration trees from \cite{ooimt}. In addition to the lightface premice defined in \cite{ooimt}, we will also consider premice built over some $a\in HC$. We write an $a$-premouse as $M = (J_\alpha^{\vec{E}},\in,\vec{E},\emptyset,a)$, for a fine extender sequence $\vec{E} = \langle E_\eta: \eta \leq \alpha\rangle$. If $\beta\leq \alpha$, $M|\beta$ represents the premouse $(J_\beta^{\vec{E}},\in,\vec{E}\upharpoonright\beta,E_\beta,a)$. Additionally, if $\mathcal{T}$ is an iteration tree of limit length and $b$ is a cofinal, non-dropping branch through $\mathcal{T}$, we let $M_b^\mathcal{T}$ be the direct limit of the models on $b$ and let $i_b^\mathcal{T}: M_0^\mathcal{T} \to M_b^\mathcal{T}$ be the associated direct limit embedding. For a model $M$, let $\delta_M$ denote the least Woodin cardinal of $M$ (if one exists) and $Ea_M$ denote Woodin's extender algebra in $M$ at $\delta_M$. Let $\kappa_M$ be the least cardinal of $M$ which is $<\delta_M$-strong in $M$. $ea$ will refer to the generic over $Ea_M$. When considering the product extender algebra $Ea_M \times Ea_M$, we will write $ea_l \times ea_r$ for the generic. $ea_r$ will typically code a pair which we shall write $(ea^1_r,ea^2_r)$. For posets of the form $Col(\omega,X)$, $\dot{g}$ denotes a name for the generic. Suppose $M$ is a premouse with iteration strategy $\Sigma$.
We say $N$ is a complete iterate of $M$ if $N$ is the last model of an iteration tree $\mathcal{T}$ on $M$ such that $\mathcal{T}$ is according to $\Sigma$ and the branch through $\mathcal{T}$ from $M$ to $N$ is non-dropping.\footnote{This is a slight abuse of notation, since being ``a complete iterate of $M$'' is dependent on $\Sigma$ as well as $M$. This will not cause any ambiguity, since the mice we are interested in have unique iteration strategies.} \begin{theorem} \label{genericity iteration for ext alg} Let $M$ be a countable premouse with an $\omega_1+1$-iteration strategy such that $M \models$ ``There is a Woodin cardinal.'' Then $Ea_M$ is a $\delta_M$-c.c. Boolean algebra and for any $x\in \mathbb{R}$, there is a countable, complete iterate $N$ of $M$ such that $x$ is $Ea_N$-generic over $N$. \end{theorem} \begin{corollary} \label{genericity iteration for Col} Let $M$ be a countable premouse with an $\omega_1+1$-iteration strategy such that $M \models$ ``There is a Woodin cardinal.'' Then for any $x\in\mathbb{R}$, there is a countable, complete iterate $N$ of $M$ and $g$ which is $Col(\omega,\delta_N)$-generic over $N$ such that $x\in N[g]$. \end{corollary} See Section 7.2 of \cite{ooimt} for a proof of Theorem \ref{genericity iteration for ext alg} and its corollary. \begin{theorem} \label{branch uniqueness} Suppose $b$ and $c$ are distinct wellfounded branches of a normal iteration tree $\mathcal{T}$ and $A\subseteq \delta(\mathcal{T})$ is in $M^\mathcal{T}_b \cap M^\mathcal{T}_c$. Then there is $\kappa < \delta(\mathcal{T})$ such that $M^\mathcal{T}_b \models$ ``$\kappa$ is $A$-reflecting in $\delta(\mathcal{T})$,'' and this is witnessed by an extender on the sequence of $\mathcal{M}(\mathcal{T})$. \end{theorem} See 6.9 and 6.10 of \cite{ooimt} for definitions of $\delta(\mathcal{T})$ and $\mathcal{M}(\mathcal{T})$ and a proof of Theorem \ref{branch uniqueness}. The theorem justifies the following definitions. 
\begin{definition} Suppose $b$ is a wellfounded branch through a normal iteration tree $\mathcal{T}$. Let $\mathcal{Q}(b,\mathcal{T})$ be the least initial segment of $M^\mathcal{T}_b$ extending $\mathcal{M}(\mathcal{T})$ such that there is $A\subset\delta(\mathcal{T})$ which is definable over $\mathcal{Q}(b,\mathcal{T})$ and realizes $\delta(\mathcal{T})$ is not Woodin via extenders in $\mathcal{M}(\mathcal{T})$, if such an initial segment exists. \end{definition} \begin{definition} Suppose $M$ is a premouse and $\eta\in M$. We say $\eta$ is a cutpoint of $M$ if there is no extender on the fine extender sequence of $M$ with critical point less than $\eta$ and length greater than $\eta$. $\eta$ is a strong cutpoint if there is no extender on the fine extender sequence of $M$ with critical point less than or equal to $\eta$ and length greater than $\eta$. \end{definition} \begin{definition} Suppose $\mathcal{T}$ is a normal iteration tree. Let $\mathcal{Q}(\mathcal{T})$ be the least $\delta(\mathcal{T})$-sound mouse extending $\mathcal{M}(\mathcal{T})$ and projecting to $\delta(\mathcal{T})$ such that $\delta(\mathcal{T})$ is a strong cutpoint of $\mathcal{Q}(\mathcal{T})$ and there is $A\subset \delta(\mathcal{T})$ which is definable over $\mathcal{Q}(\mathcal{T})$ and realizes $\delta(\mathcal{T})$ is not Woodin via extenders in $\mathcal{M}(\mathcal{T})$, if one exists. \end{definition} It follows from Theorem \ref{branch uniqueness} that there is at most one wellfounded branch $b$ through $\mathcal{T}$ such that $\mathcal{Q}(\mathcal{T})\trianglelefteq M^\mathcal{T}_b$. In many cases, we will be able to locate the branch a strategy $\Sigma$ chooses as the unique branch which absorbs $\mathcal{Q}(\mathcal{T})$ in this sense. Note an $\omega_1$-iteration strategy on a countable premouse can be coded by a set of reals.
For $a\in HC$ and a pointclass $\Gamma$, this allows us to define \begin{align*} Lp^\Gamma(a) = \, \bigcup\{N :\, &N \text{ is an } \omega\text{-sound } a\text{-premouse projecting to } a \\ & \text{ with an $\omega_1$-iteration strategy in } \boldsymbol{\Delta_{\Gamma}}\}. \end{align*} $Lp^\Gamma(a)$ can be reorganized as an $a$-premouse, and it is this reorganization we will typically mean by $Lp^\Gamma(a)$. \begin{theorem} \label{lp = c} Assume $AD^{L(\mathbb{R})}$. Suppose $\Gamma$ is a (lightface) inductive-like pointclass in $L(\mathbb{R})$ and $a\in HC$. Then $C_\Gamma(a) = Lp^\Gamma(a) \cap P(a)$. \end{theorem} \begin{remark} \label{lp(a) contained in lp(b)} Suppose $a$ and $b$ are countable, transitive sets and $a\in b$. It is easy to see from the definition of $C_\Gamma$ that $C_\Gamma(a)\subseteq C_\Gamma(b)$. This, and the theorem above, implies $Lp^\Gamma(a)\subseteq Lp^\Gamma(b)$. \end{remark} \subsection{The Mitchell-Steel Construction} \label{m-s construction section} We shall require a method of building an $a$-premouse inside a premouse $M$ which contains $a$. Our main tool for this purpose is the fully backgrounded Mitchell-Steel construction developed in \cite{msbook}. This section reviews the construction and its properties. We say a premouse $M$ is reliable if $\mathcal{C}_\omega(M)$ exists and is universal and solid. As we shall see in a moment, we will end the Mitchell-Steel construction if we reach a premouse which is not reliable. \cite{msbook} defines reliable to include the stronger property that $\mathcal{C}_\omega(M)$ is iterable. But the weaker properties of universality and solidity are enough to propagate the construction, and our weaker requirement ensures the construction does not end prematurely when performed inside a mouse. The definitions of universality and solidity can be found in \cite{ooimt}.
In all of the cases relevant to us, universality and solidity are guaranteed and the reader will lose little by taking on faith that the construction does not break down. For the moment we will work in $V$ and assume $ZFC$. Fix $z\in\mathbb{R}$. Define a sequence of $z$-premice $\langle \mathcal{M}_\xi : \xi \in On\rangle$ inductively as follows. \begin{enumerate} \item $\mathcal{M}_0 = (V_\omega,\in,\emptyset,\emptyset,z)$. \item \label{active case of m-s construction} Suppose we have constructed $\mathcal{M}_\xi = (J_\alpha^{\vec{E}},\in,\vec{E},\emptyset,z)$. Note $\mathcal{M}_\xi$ is a passive premouse. Suppose also there is an extender $F^*$ over $V$, an extender $F$ over $\mathcal{M}_\xi$, and $\nu<\alpha$ such that \begin{enumerate} \item $V_{\nu+\omega} \subset Ult(V,F^*)$, \item $\nu$ is the support of $F$, \item $F\upharpoonright \nu = F^*\cap ([\nu]^{<\omega} \times \mathcal{M}_\xi)$, and \item $\mathcal{N}_{\xi+1} = (J_\alpha^{\vec{E}},\in,\vec{E},F,z)$ is a premouse. \end{enumerate} If $\mathcal{N}_{\xi+1}$ is reliable, let $\mathcal{M}_{\xi+1} = \mathcal{C}_\omega(\mathcal{N}_{\xi+1})$. Otherwise, the construction ends. If there are multiple such $F^*$, we pick one which minimizes the support of $F$. We say $F^*$ is the extender used as a background at step $\xi+1$. \item Suppose we have constructed $\mathcal{M}_\xi = (J_\alpha^{\vec{E}},\in,\vec{E},E_\alpha,z)$ and either $\mathcal{M}_\xi$ is active or $\mathcal{M}_\xi$ is passive and there is no extender $F^*$ as above. Let $\mathcal{N}_{\xi+1} = (J_{\alpha+1}^{\vec{E}^\frown E_\alpha}, \in,\vec{E}^\frown E_\alpha,\emptyset,z)$. If $\mathcal{N}_{\xi+1}$ is reliable, let $\mathcal{M}_{\xi+1} = \mathcal{C}_\omega(\mathcal{N}_{\xi+1})$. Otherwise, the construction ends. \item Suppose we have constructed $\langle \mathcal{M}_\xi : \xi < \lambda\rangle$ for $\lambda$ a limit ordinal. Let \\$\eta = \liminf_{\xi < \lambda} (\rho_\omega(\mathcal{M}_\xi)^+)^{\mathcal{M}_\xi}$.
Let $\mathcal{N}_\lambda$ be the passive premouse of height $\eta$ such that $\mathcal{N}_\lambda|\beta = \lim_{\xi<\lambda} \mathcal{M}_\xi|\beta$ for all $\beta < \eta$. If $\mathcal{N}_\lambda$ is reliable, let $\mathcal{M}_\lambda = \mathcal{C}_\omega(\mathcal{N}_\lambda)$. Otherwise, the construction ends. \end{enumerate} Suppose the construction never breaks down. That is, $\mathcal{M}_\xi$ is defined for all $\xi\in On$. \begin{theorem} \label{levels of m-s construction projectum fact} Suppose $\zeta_0$ and $\xi$ are ordinals such that $\zeta_0 < \xi$ and $\kappa = \rho_\omega(\mathcal{M}_\xi) \leq \rho_\omega(\mathcal{M}_\zeta)$ for all $\zeta \geq \zeta_0$. Then $\mathcal{M}_\xi \trianglelefteq \mathcal{M}_\eta$ for all $\eta \geq \xi$. Moreover, $\mathcal{M}_{\xi+1} \models$ ``every set has cardinality at most $\kappa$.'' \end{theorem} Let $\mathcal{M}$ be the class-sized model such that whenever $\xi\in On$ satisfies $\mathcal{M}_\xi \trianglelefteq \mathcal{M}_\eta$ for all $\eta\geq \xi$, $\mathcal{M}_\xi$ is an initial segment of $\mathcal{M}$. We call $\mathcal{M}$ the output of the Mitchell-Steel construction over $z$. For $\delta\in On$, we call $\mathcal{M}_\delta$ the output of the Mitchell-Steel construction of length $\delta$ over $z$. \begin{theorem} \label{woodin in m-s} Assume $ZFC$. Suppose $\delta$ is the least ordinal such that $\delta$ is Woodin in $L(V_\delta)$. Suppose the Mitchell-Steel construction in $V_\delta$ does not break down, and let $\mathcal{M}$ be the output of the construction. Then $\delta$ is Woodin in $L(\mathcal{M})$. \end{theorem} See the proof of Theorem 11.3 of \cite{msbook}. \begin{theorem}[Universality] \label{universality of m-s} Assume $ZFC$. Let $\delta$ be Woodin and $z\in \mathbb{R}$. Assume the Mitchell-Steel construction of length $\delta$ over $z$ does not break down. Let $N$ be the output of the construction.
Suppose no initial segment of $N$ satisfies ``there is a superstrong cardinal.'' Let $W$ be a premouse over $z$ of height $\leq \delta$, and suppose $P$ and $Q$ are the final models above $W$ and $N$, respectively, in a successful coiteration. Then $P\trianglelefteq Q$. \end{theorem} See Theorem 11.1 of \cite{dmatm}. \begin{theorem} \label{iterability of m-s} Suppose $M$ is a mouse with Woodin cardinal $\delta$ satisfying enough of $ZFC$ and $z\in M\cap \mathbb{R}$. Then the Mitchell-Steel construction of length $\delta$ over $z$ done inside $M$ does not break down. Let $N$ be the output of the construction. Then $N$ is a $z$-mouse of height $\delta$. \end{theorem} For a premouse $M$ satisfying enough of $ZFC$ and $z\in M\cap \mathbb{R}$, we write $Le[M,z]$ for the output of the Mitchell-Steel construction in $M$ over $z$ (assuming the construction does not break down). $Le[M]$ will refer to $Le[M,\emptyset]$. $Le[M,z]$ is a $z$-premouse. If $M$ is iterable, so is $Le[M,z]$. We are most interested in cases in which $M$ is a mouse with a Woodin cardinal $\delta$, no largest cardinal, and no total extenders above $\delta$. Then $Le[M|\delta,z]$ is equal to the Mitchell-Steel construction of length $\delta$ over $z$, done inside $M$, and $Le[M,z]$ is an initial segment of $L(Le[M|\delta,z])$. \begin{remark} \label{m-s up to inaccessible} Suppose $M$ is a mouse, $z\in M\cap \mathbb{R}$, and $\kappa$ is inaccessible in $M$. Let $\langle \mathcal{M}_\xi : \xi < \kappa\rangle$ be the models of the Mitchell-Steel construction in $M$ of length $\kappa$ over $z$. Suppose an extender is added at step $\xi+1$ in the construction. Let $F^*$, $F$, and $\nu$ be as in Case \ref{active case of m-s construction} of the construction. Then there is $F'\in M|\kappa$ such that $M\models V_{\nu+\omega}\subset Ult(M,F')$ and $F'\cap ([\nu]^{<\omega} \times \mathcal{M}_\xi) = F\upharpoonright\nu$.
So we may assume that if $F^*$ is used as a background in the construction of length $\kappa$, then $F^*\in M|\kappa$. In particular, if $M$ is a mouse, $z\in M\cap \mathbb{R}$, and $\kappa$ is inaccessible in $M$, then $Le[M|\kappa,z]$ equals the Mitchell-Steel construction of length $\kappa$ over $z$, done in $M$. \end{remark} \subsection{S-constructions} \label{s-const section} Below we outline the $S$-construction (this was introduced as the $P$-construction in \cite{self-iter}). Suppose $M = (J_\gamma^{\vec{E}},\in,\vec{E}\upharpoonright\gamma,E_\gamma,a)$ is a countable $a$-premouse and $\delta\in M$ is a cardinal and cutpoint of $M$. Suppose $\bar{S}$ is a premouse such that $ON \cap \bar{S} = \delta + \omega$, $\delta$ is a Woodin cardinal of $\bar{S}$, $\bar{S}$ is definable over $M$, and there is a generic $G$ (for the version of Woodin's extender algebra with $\delta$ propositional letters) such that $\bar{S}[G] = M|\delta+1$. Inductively define a sequence $\langle S_\alpha : \delta + 1 \leq \alpha \leq \gamma \rangle$ as follows. $S_{\delta+1}$ is set to be $\bar{S}$. At a limit $\lambda$, $S_\lambda = \bigcup_{\alpha < \lambda} S_\alpha$. If $M|\lambda$ is active, add a predicate for $E_{\lambda} \cap S_\lambda$ to $S_\lambda$. For the successor step, we define $S_{\alpha + 1}$ by constructing one more level over $S_\alpha$. The construction proceeds until we construct $S_\gamma$, or we reach some $S_\alpha$ such that $\delta$ is not Woodin in $S_\alpha$. We refer to $S_\gamma$ as the maximal $S$-construction in $M$ over $\bar{S}$ if the construction reaches $\gamma$. We are primarily interested in cases where $\delta$ is Woodin in $M$, in which case the construction is guaranteed to reach $\gamma$. \begin{lemma} \label{s-const lemma} Suppose $M,\bar{S},\delta,\gamma,$ and $G$ are as above. Assume also $M$ is iterable, $\omega$-sound, and $\rho_\omega(M)\geq \delta$.
If the construction reaches $\gamma$, then for each $\alpha$ such that $\delta+1\leq \alpha \leq \gamma$, $S_\alpha$ is an $\bar{S}$-mouse and $S_\alpha[G] = M|\alpha$. If also $\alpha < \gamma$, or $\alpha = \gamma$ and $\delta$ is definably Woodin over $S_\alpha$, then $\rho_n(S_\alpha) = \rho_n(M|\alpha)$ for all $n$ and $S_\alpha$ is $\omega$-sound. \end{lemma} Lemma 1.5 of \cite{self-iter} gives everything in Lemma \ref{s-const lemma} except the iterability of $S_\gamma$. The iteration strategy for $S_\gamma$ in Lemma \ref{s-const lemma} comes from lifting an iteration tree on $S_\gamma$ to iteration trees on $M$ above $\delta$. In particular, we have: \begin{fact} \label{definiability of s-const strat} Suppose $M,\bar{S},\delta,\gamma,$ and $G$ are as in Lemma \ref{s-const lemma}. Then the iteration strategy for $S_\gamma$ (as an $\bar{S}$-premouse) is projective in the iteration strategy for $M$ restricted to iteration trees above $\delta$. \end{fact} The $S$-construction serves two purposes in what follows. It allows us to ``undo'' generic extensions from Woodin's extender algebra. And combined with the fully-backgrounded Mitchell-Steel construction, it provides an inner model of a premouse with convenient properties. \begin{definition} Let $M$ be a mouse with a Woodin cardinal and $z\in M \cap \mathbb{R}$. Let $\bar{S}$ be the result of constructing one level of the $\mathcal{J}$-hierarchy over $Le[M|\delta_M,z]$. Let $StrLe[M,z]$ denote the maximal $S$-construction in $M$ over $\bar{S}$. \end{definition} \subsection{Suitable Mice} \label{suitable mice section} We now review some results from the core model induction. Most of the concepts below are from \cite{cmi}, with some minor additions. We need to work with mice with an inaccessible cardinal above a Woodin, so in Definition \ref{ss definition} we introduce a modification of the standard notion of a suitable premouse. 
\cite{cmi} proves the existence of terms in suitable mice capturing certain sets of reals. We will need analogous lemmas for our modified definition. In fact we require more than is stated in \cite{cmi} --- it is essential for our purposes that there is a canonical term capturing each set. Fortunately, this stronger claim is already implicit in the proofs of \cite{cmi}. For the remainder of this section, we will assume $ZF + AD + DC + V=L(\mathbb{R})$ and fix a boldface inductive-like pointclass $\boldsymbol{\Gamma}$ such that $\boldsymbol{\Gamma}\neq \boldsymbol{\Sigma^2_1}$. We then have $\boldsymbol{\Gamma} = \boldsymbol{\Sigma_1}(J_{\alpha_0}(\mathbb{R}))$ for some $\alpha_0$ beginning an admissible $\Sigma_1$-gap $[\alpha_0,\beta_0]$. Fix a lightface pointclass $\Gamma$ as in Remark \ref{form of ind-like} such that $\boldsymbol{\Gamma}$ is the closure of $\Gamma$ under preimages by continuous functions. \begin{definition} Suppose $x\in HC$. Say an $x$-premouse $N$ is $\Gamma$-suitable if $N$ is countable and \begin{enumerate} \item $N \models$ there is exactly one Woodin cardinal $\delta_N$. \item Letting $N_0 = Lp^\Gamma(N|\delta_N)$ and $N_{i+1} = Lp^\Gamma(N_i)$, we have that $N = \bigcup_{i<\omega} N_i$. \item If $\xi < \delta_N$, then $Lp^\Gamma(N|\xi) \models \xi$ is not Woodin. \end{enumerate} \end{definition} \begin{definition} \label{ss definition} Suppose $x\in HC$. Say an $x$-premouse $N$ is $\Gamma$-super-suitable ($\Gamma$-ss) if $N$ is countable and \begin{enumerate} \item $N \models$ there is exactly one Woodin cardinal $\delta_N$. \item $N \models$ there is exactly one inaccessible cardinal above $\delta_N$. We denote this inaccessible by $\nu_N$. \item Letting $N_0 = Lp^\Gamma(N|\nu_N)$ and $N_{i+1} = Lp^\Gamma(N_i)$, we have that $N = \bigcup_{i<\omega} N_i$.
\item For each $\xi \geq \delta_N$, $N|(\xi^+)^N = Lp^\Gamma(N|\xi)$. \item If $\xi < \delta_N$, then $Lp^\Gamma(N|\xi) \models \xi$ is not Woodin. \end{enumerate} \end{definition} \begin{definition} Let $N$ be a mouse and $\delta\in N$. We say $\delta$ is a $\Gamma$-Woodin of $N$ if $\delta$ is Woodin in $Lp^\Gamma(N|\delta)$. \end{definition} A $\Gamma$-suitable premouse is a minimal premouse with a $\Gamma$-Woodin cardinal which is closed under $Lp^{\Gamma}$, in that none of its initial segments have this property. Similarly, a $\Gamma$-ss premouse can be considered a minimal premouse with a $\Gamma$-Woodin which is closed under $Lp^\Gamma$ and has an inaccessible cardinal above its $\Gamma$-Woodin. \begin{definition} Let $A \subseteq \mathbb{R}$, $N$ a countable premouse, $\eta$ an uncountable cardinal of $N$, and $\tau\in N^{Col(\omega,\eta)}$. We say that $\tau$ weakly captures $A$ over $N$ if whenever $g$ is $Col(\omega,\eta)$-generic over $N$, $\tau[g] = A \cap N[g]$. \end{definition} \begin{lemma} \label{terms condense} Suppose $\mathcal{B}$ is a self-justifying system and $N$ and $M$ are transitive models of enough of ZFC such that $N\in M$. Let $\mathcal{C}$ be a comeager set of $Col(\omega,N)$-generics over $M$ and suppose for each $B\in \mathcal{B}$ there is a term $\tau_B\in M$ such that if $g\in\mathcal{C}$, then $\tau_B[g] = B \cap M[g]$. Let $\pi: \bar{M} \to M$ be elementary with $\{N\} \cup \{\tau_B: B \in \mathcal{B}\}\subset ran(\pi)$, and let $\bar{N}$ and $\bar{\tau}_B$ be such that $\pi(\bar{N}) = N$ and $\pi(\bar{\tau}_B) = \tau_B$. Then whenever $g$ is $Col(\omega,\bar{N})$-generic over $\bar{M}$, $\bar{\tau}_B[g] = B \cap \bar{M}[g]$. \end{lemma} See Lemma 3.7.2 of \cite{cmi}. Let $\beta'$ be the least ordinal greater than $\alpha_0$ such that there is a scale for a universal $\boldsymbol{\Pi_1}(J_{\alpha_0}(\mathbb{R}))$ set definable over $J_{\beta'}(\mathbb{R})$.
By Theorem \ref{where sjs appears}, $\beta'= \beta_0$ or $\beta'=\beta_0+1$ and there is a self-justifying system $\mathcal{G} = \{ G_n : n\in\omega\}$ such that \begin{align*} G_0 = \{(x,y):\, & x \text{ codes some transitive set } a \text{ and } y \text{ codes an } \omega\text{-sound } \\ & a\text{-premouse } R \text{ such that } R \text{ projects to } a \text{ and } R \text{ has an } \\ & \omega_1\text{-iteration strategy in } \boldsymbol{\Delta}\}, \end{align*} $G_1$ is a universal $\boldsymbol{\Sigma_1}(J_{\alpha_0}(\mathbb{R}))$-set, and $\mathcal{G}$ is contained in $OD^{<\beta'}(z)$ for some $z\in\mathbb{R}$ (note $G_0\in\boldsymbol{\Gamma}$, by part \ref{Gamma is (Sigma^2_1)^Delta} of Remark \ref{form of ind-like}). For ease of notation, assume $\mathcal{G}\subset OD^{<\beta'}$. \begin{definition} \label{definition of standard terms} Suppose $B \subset \mathbb{R}$, $N$ is a premouse, and $\eta$ is a cardinal of $N$. Let $\tau^N_{B,\eta}$ be the set of pairs $(\sigma,p)\in N$ such that \begin{enumerate} \item $\sigma$ is a $Col(\omega,\eta)$-standard term for a real, \item $p\in Col(\omega,\eta)$, and \item for comeager many $g\subset Col(\omega,\eta)$ which are $Col(\omega,\eta)$-generic over $N$ such that $p\in g$, $\sigma[g]\in B$. \end{enumerate} For $n\in\omega$, let $\tau^N_{n,\eta} = \tau^N_{G_n,\eta}$ and if $N$ has a Woodin cardinal let $\tau^N_n = \tau^N_{n,\delta_N}$. \end{definition} \begin{lemma} \label{terms exist} Suppose $N$ is a $\Gamma$-suitable or $\Gamma$-ss premouse, $z\in N$, $B\in OD^{<\beta'}(z)$, and $\eta$ is a cardinal of $N$. Then $\tau^N_{B,\eta}$ is in $N$. \end{lemma} See the proof of Lemma 3.7.5 of \cite{cmi}.
In Lemma 5.4.3 of \cite{cmi}, Lemma \ref{terms condense} is used to show: \begin{lemma}[Woodin] \label{term relation condensation} Suppose $z\in\mathbb{R}$, $N$ is a $\Gamma$-suitable (or $\Gamma$-ss) $z$-premouse, and $\mathcal{B}$ is a sjs containing a universal $\boldsymbol{\Sigma_1}(J_{\alpha_0}(\mathbb{R}))$-set such that each $B\in\mathcal{B}$ is $OD^{<\beta'}(z)$. Suppose $\pi:M\to N$ is $\Sigma_1$-elementary and for every $B\in\mathcal{B}$ and $\eta \geq \delta_N$, $\tau^N_{B,\eta}\in range(\pi)$. Then \begin{enumerate} \item $M$ is $\Gamma$-suitable ($\Gamma$-ss) and \item $\pi(\tau^M_{B,\bar{\eta}}) = \tau^N_{B,\eta}$, where $\bar{\eta}$ is such that $\pi(\bar{\eta}) = \eta$. \end{enumerate} \end{lemma} As a result of Lemmas \ref{terms condense} and \ref{terms exist} we have: \begin{corollary} \label{standard terms capture} If $N$ is $\Gamma$-suitable or $\Gamma$-ss and $\eta$ is an uncountable cardinal of $N$, then $\tau^N_{n,\eta}$ weakly captures $G_n$ over $N$. \end{corollary} \begin{definition} Let $\mathcal{T}$ be a normal iteration tree on a $\Gamma$-suitable (or $\Gamma$-ss) premouse $N$. Suppose also $\mathcal{T}$ is below $\delta_N$. Say $\mathcal{T}$ is $\Gamma$-short if for all limit $\xi \leq lh(\mathcal{T})$, $Lp^\Gamma(\mathcal{M}(\mathcal{T}\upharpoonright\xi))\models \delta(\mathcal{T}\upharpoonright\xi)$ is not Woodin. Otherwise, say $\mathcal{T}$ is $\Gamma$-maximal. \end{definition} \begin{definition} Let $N$ be a $\Gamma$-suitable ($\Gamma$-ss) premouse with an $(\omega_1,\omega_1)$-iteration strategy $\Sigma$.
Say $\Sigma$ is fullness-preserving if whenever $P$ is an iterate of $N$ by $\Sigma$ via an iteration below $\delta_N$, then \begin{enumerate} \item if the branch to $P$ does not drop, then $P$ is $\Gamma$-suitable ($\Gamma$-ss), and \item if the branch to $P$ does drop, then $P$ has an $\omega_1$-iteration strategy in $J_{\alpha_0}(\mathbb{R})$. \end{enumerate} \end{definition} \begin{remark} \label{no gamma woodin gives delta strat} Let $N$ be a $\Gamma$-suitable (or $\Gamma$-ss) mouse with a fullness-preserving iteration strategy $\Sigma$. Suppose $P\triangleleft N|\delta_N$, and $\Sigma'$ is the iteration strategy for $P$ given by restricting the domain of $\Sigma$ to trees on $P$. Suppose $\mathcal{T}$ is an iteration tree on $P$ according to $\Sigma'$. Then the branch $b$ through $\mathcal{T}$ chosen by $\Sigma'$ can be determined from $\mathcal{Q}(\mathcal{T})$. And $\mathcal{Q}(\mathcal{T})$ is the unique $\mathcal{M}(\mathcal{T})$-mouse projecting to $\omega$ with an iteration strategy in $\boldsymbol{\Delta}$. It follows from Remark \ref{form of ind-like} and the uniqueness of $\mathcal{Q}(\mathcal{T})$ that $\Sigma'$ is coded by a set in $\boldsymbol{\Delta}$. \end{remark} \begin{definition} Let $\mathcal{T}$ be a $\Gamma$-maximal iteration tree on a $\Gamma$-suitable (or $\Gamma$-ss) premouse $N$ and let $b$ be a cofinal branch through $\mathcal{T}$. Say $b$ respects $\vec{G}_n$ if $i^\mathcal{T}_b(\tau^N_{k,\eta}) = \tau^{M^\mathcal{T}_b}_{k,i^\mathcal{T}_b(\eta)}$ for all $k < n$ and every cardinal $\eta$ of $N$ above $\delta_N$. \end{definition} \begin{definition} Let $N$ be a $\Gamma$-suitable (or $\Gamma$-ss) mouse with a fullness-preserving iteration strategy $\Sigma$.
Say $\Sigma$ is guided by $\mathcal{G}$ if whenever $\mathcal{T}$ is an iteration tree according to $\Sigma$ of limit length and $b = \Sigma(\mathcal{T})$, then \begin{enumerate} \item if $\mathcal{T}$ is $\Gamma$-short, then $\mathcal{Q}(b,\mathcal{T})$ exists and $\mathcal{Q}(b,\mathcal{T}) \in Lp^\Gamma(\mathcal{M}(\mathcal{T}))$, and \item if $\mathcal{T}$ is $\Gamma$-maximal, then $b$ respects $\vec{G}_n$ for all $n\in\omega$. \end{enumerate} \end{definition} \begin{lemma} \label{guided strat not in gamma} If $N$ is $\Gamma$-suitable (or $\Gamma$-ss) and $\Sigma$ is an $\omega_1$-iteration strategy for $N$ which is guided by $\mathcal{G}$, then $\Sigma$ is not in $\boldsymbol{\Gamma}$. \end{lemma} \begin{proof} There is $n\in\omega$ such that $G_n$ is a universal $\boldsymbol{\Gamma^c}$-set. Then $y\in G_n$ if and only if there exists a countable, complete iterate $N^*$ of $N$ according to $\Sigma$ such that $y$ is $Ea_{N^*}$-generic over $N^*$ and $y\in\tau^{N^*}_n[g]$ for some $g$ which is $Col(\omega,\delta_{N^*})$-generic over $N^*$ with $y\in N^*[g]$. Since $\boldsymbol{\Gamma}$ is closed under projection, if $\Sigma$ were in $\boldsymbol{\Gamma}$, $G_n$ would also be in $\boldsymbol{\Gamma}$, contradicting the choice of $G_n$. \end{proof} \begin{theorem}[Woodin] \label{nice strategy exists} For any $x\in HC$, there is a (unique) $\omega$-sound, $\Gamma$-suitable $x$-mouse $W_x$ projecting to $x$ with a (unique) iteration strategy that is fullness-preserving, condenses well,\footnote{In the sense of Definition 5.3.7 of \cite{cmi}.} and is guided by $\mathcal{G}$. Similarly, there is a (unique) $\omega$-sound, $\Gamma$-ss $x$-mouse $M_x$ projecting to $x$ with a (unique) iteration strategy that is fullness-preserving, condenses well, and is guided by $\mathcal{G}$. \end{theorem} Chapter 5 of \cite{cmi} demonstrates the existence of such a $\Gamma$-suitable mouse.
It is not difficult to see this gives the existence of the required $\Gamma$-ss mouse as well. For any $\Gamma$-suitable (or $\Gamma$-ss) premouse $N$ and any $n\in\omega$, let \begin{align*} \gamma^N_n = Hull^N(\{\tau^N_i: i < n\}) \cap \delta_N. \end{align*} The regularity of $\delta_N$ in $N$ implies each $\gamma^N_n$ is an ordinal. Lemma \ref{term relation condensation} can be used to show: \begin{fact} $\langle \gamma^N_n : n \in \omega \rangle$ is cofinal in $\delta_N$. \end{fact} \begin{lemma} \label{branch agreement} Let $\mathcal{T}$ be a normal iteration tree on a $\Gamma$-suitable (or $\Gamma$-ss) premouse $N$ and let $b$ and $c$ be branches through $\mathcal{T}$ which respect $\vec{G}_n$. Then $i^\mathcal{T}_b\upharpoonright\gamma^N_n = i^\mathcal{T}_c\upharpoonright\gamma^N_n$. Moreover, if $b$ and $c$ both respect $\vec{G}_n$ for all $n$, then $b=c$. \end{lemma} See Lemma 6.25 of \cite{hacm}. Lemma \ref{branch agreement} implies that if $b$ is the branch through $\mathcal{T}$ chosen by the nice iteration strategy for a $\Gamma$-suitable premouse given by Theorem \ref{nice strategy exists} and $c$ is any branch respecting $\vec{G}_n$, then $i^\mathcal{T}_b$ and $i^\mathcal{T}_c$ agree up to $\gamma^N_n$. In particular, to track the iteration of a $\Gamma$-suitable mouse up to some point below its least Woodin, it is sufficient to know finitely many of the sets in $\mathcal{G}$. Suppose $M$ is a countable premouse with an $(\omega_1,\omega_1+1)$-iteration strategy.\footnote{The suitable mice from Theorem \ref{nice strategy exists} satisfy this.
We only explicitly required these to have $(\omega_1,\omega_1)$-iteration strategies, but since $ZF + AD$ implies $\omega_1$ is measurable, an $(\omega_1,\omega_1)$-iteration strategy induces an $(\omega_1,\omega_1+1)$-iteration strategy.} Together, the Comparison Lemma and the Dodd-Jensen Lemma imply that the collection of countable, complete iterates of $M$, together with the iteration maps between them, forms a directed system. \cite{hodbelowtheta} presents work of Steel and Woodin analyzing the direct limit of all countable, complete iterates of $M^\#_\omega$. This direct limit cut to its least Woodin is $(HOD||\Theta)^{L(\mathbb{R})}$. \cite{hacm} goes further in showing that the entire class $HOD^{L(\mathbb{R})}$ is a strategy mouse. The iteration maps through trees on $M^\#_\omega$ are approximated using indiscernibles, analogously to the use of terms in Lemma \ref{branch agreement}. These approximations are merged to give an ordinal definable definition of the direct limit in $L(\mathbb{R})$. In particular, initial segments of the direct limit maps are definable from finitely many indiscernibles. In place of $M^\#_\omega$, we shall analyze the direct limit of a $\Gamma$-suitable mouse and prove that portions of the direct limit maps are definable within a $\Gamma$-ss mouse. Our task is simpler in that we only need to reach up to $\boldsymbol{\delta_\Gamma^+}$, which we show in Section \ref{length of direct limit section} is below the least Woodin of our direct limit. So a single approximation using only finitely many sets from $\mathcal{G}$ will suffice. Another advantage we have is that there is no harm in working over a real parameter, so we can work in a $\Gamma$-ss mouse over a real which codes $W_0$.
On the other hand, we will have some extra work to do in Section \ref{definability section} ensuring enough information about $\Gamma$ and $\mathcal{G}$ is definable in a $\Gamma$-ss mouse before we internalize the directed system in Section \ref{internalization section}. \cite{hacm} also makes use of the fact that the derived model of $M^\#_\omega$ is essentially $L(\mathbb{R})$. So for $x\in M^\#_\omega \cap \mathbb{R}$, a $\Sigma^2_1$ statement about $x$ is true if and only if it holds in the derived model of $M^\#_\omega$. In particular, there is a natural way to ask about $\Sigma^2_1$ truth inside of $M^\#_\omega$. A second, though minor, inconvenience of having to use a $\Gamma$-suitable mouse is that we cannot talk about its derived model, since it only has one Woodin. Instead we will use the fine-structural witness condition of \cite{cmi}. \begin{remark} We can associate to any $\Sigma_1$-formula $\phi$ a sequence of formulas $\langle \phi^k:k<\omega\rangle$ such that for any ordinal $\gamma$ and any real $z$, $J_{\gamma+1}(\mathbb{R})\models \phi[z] \iff (\exists k) J_\gamma(\mathbb{R}) \models \phi^k[z]$. Moreover, the map $\phi \to \langle \phi^k:k<\omega\rangle$ is recursive. \end{remark} \begin{definition} Suppose $\phi(v)$ is a $\Sigma_1$-formula and $z\in\mathbb{R}$. A $\langle \phi,z\rangle$-witness is an $\omega$-sound $z$-mouse $N$ in which there are $\delta_0 < ... < \delta_9$, $\mathcal{S}$, and $\mathcal{T}$ such that $N$ satisfies the formulae expressing \begin{enumerate} \item ZFC, \item $\delta_0 < ... < \delta_9$ are Woodin, \item $\mathcal{S}$ and $\mathcal{T}$ are trees on some $\omega \times \eta$ which are absolutely complementing in $V^{Col(\omega,\delta_9)}$, and \item for some $k < \omega$, $\rho[\mathcal{T}]$ is the $\Sigma_{k+3}$-theory (in the language with names for each real) of $J_\gamma(\mathbb{R})$, where $\gamma$ is least such that $J_\gamma(\mathbb{R}) \models \phi^k[z]$.
\end{enumerate} \end{definition} Other than iterability, the properties of being a $\langle \phi,z\rangle$-witness are first order. The following two lemmas illustrate the usefulness of this definition. \begin{lemma} \label{witness gives truth} If there is a $\langle \phi,z\rangle$-witness, then $L(\mathbb{R})\models \phi[z]$. \end{lemma} \begin{lemma} \label{truth gives witness} Suppose $\phi$ is a $\Sigma_1$-formula, $z\in\mathbb{R}$, $\gamma$ is a limit ordinal, and $J_\gamma(\mathbb{R})\models \phi[z]$. Then there is a $\langle\phi,z\rangle$-witness $N$ such that the iteration strategy for $N$ restricted to countable trees is in $J_\gamma(\mathbb{R})$. By taking a Skolem hull, we can also ensure $\rho_\omega(N) = \omega$. \end{lemma} \section{The Inductive-Like Case} \label{ind-like chapter} In this section we will prove Theorem \ref{main thm}. We now assume $ZF + AD + DC + V=L(\mathbb{R})$ and fix a boldface inductive-like pointclass $\boldsymbol{\Gamma}$. By a reflection argument, we may assume $\boldsymbol{\Gamma} \neq \boldsymbol{\Sigma^2_1}$. Let $\boldsymbol{\Delta} = \boldsymbol{\Delta_{\Gamma}}$ and let $[\alpha_0,\beta_0]$, $\beta'$, $\Gamma$, and $\mathcal{G}$ be as in Section \ref{suitable mice section}. We will also refer to the mouse operators $x\to W_x$ and $x\to M_x$ from Theorem \ref{nice strategy exists} and use the notation for standard terms from Definition \ref{definition of standard terms}. In Sections \ref{length of direct limit section} through \ref{internalization section} we analyze the directed system of iterates of a suitable mouse and show the directed system can be approximated inside a larger suitable mouse. Section \ref{strle lemmas section} covers some lemmas about the StrLe construction inside a suitable mouse. Section \ref{reflection section} contains a lemma we will use to obtain witnesses for $\Sigma_1$ statements inside an initial segment of a suitable mouse.
Finally, Theorem \ref{main thm} is proven in Section \ref{main thm section}. One of the key ideas in our proof of Theorem \ref{main thm} is a different coding than the one used in \cite{twoapp} and \cite{hra}. In \cite{hra}, $\boldsymbol{\Sigma^1_{2n+2}}$ sets are coded by conditions in the extender algebra at the least Woodin of some complete iterate $N$ of $M^\#_{2n+1}$. The reflection argument from \cite{twoapp} ensures a code for each $\boldsymbol{\Sigma^1_{2n+2}}$ set appears below the least $<\delta_N$-strong cardinal $\kappa_N$ of some iterate $N$ (in fact it gives a uniform bound below $\kappa_N$). But this reflection argument depends upon the pointclass $\boldsymbol{\Sigma^1_{2n+2}}$ not being closed under coprojection. Our proof of Theorem \ref{main thm} instead codes $\boldsymbol{\Gamma}$-sets by sets of conditions in the extender algebra of some $\Gamma$-suitable mouse $N$. A weaker reflection argument than the one in \cite{twoapp} is used to contain each code in $N|\kappa_N$. This weaker reflection is sufficient for the proof. \subsection{The Direct Limit} \label{length of direct limit section} Let $W = W_0$ and let $\mathcal{I}$ be the directed system of countable, complete iterates of $W$ according to its $(\omega_1,\omega_1)$-iteration strategy. Let $M_\infty$ be the direct limit of $\mathcal{I}$. For $M,N\in \mathcal{I}$ with $N$ an iterate of $M$, let $\pi_{M,N}: M \to N$ be the iteration map and $\pi_{M,\infty}: M \to M_\infty$ the direct limit map. Here we demonstrate a few properties of $M_\infty$. The proofs of this section are generalizations of arguments in \cite{ooimt} and \cite{hacm} giving analogous properties of the direct limit of all countable, complete iterates of $M^\#_\omega$.
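For later use, we record the standard commutativity of these maps, which is immediate from the definition of the direct limit: for $M,N\in\mathcal{I}$ with $N$ an iterate of $M$, \begin{align*} \pi_{M,\infty} = \pi_{N,\infty}\circ\pi_{M,N}. \end{align*}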
\begin{lemma} \label{pwo is long} $\kappa_{M_\infty} \leq \boldsymbol{\delta_\Gamma}$.\footnote{In fact $\kappa_{M_\infty} = \boldsymbol{\delta_\Gamma}$, but we don't need this.} \end{lemma} \begin{proof} Suppose $\xi < \kappa_{M_\infty}$. Let $M\in \mathcal{I}$ and $\bar{\xi}\in M$ be such that $\pi_{M,\infty}(\bar{\xi}) = \xi$. Let $P$ be an initial segment of $M$ such that $\bar{\xi}\in P$ and the largest cardinal of $P$ is a cutpoint of $M$. The iteration strategy $\Sigma$ for $P$ is in $\boldsymbol{\Delta}$ by Remark \ref{no gamma woodin gives delta strat}. Let $\mathcal{I}_P$ be the directed system of countable, complete iterates of $P$ by $\Sigma$. Then $\bar{\xi}$ is sent to $\xi$ by the direct limit map of this system, since the largest cardinal of $P$ is a cutpoint of $M$. So a prewellordering of height $\xi$ is projective in $\Sigma$ and therefore $\boldsymbol{\delta_\Gamma} > \xi$. \end{proof} \begin{lemma} \label{pwo is short} $\delta_{M_\infty} > (\boldsymbol{\delta_\Gamma})^+$. \end{lemma} \begin{proof} Let $\Sigma$ be the $(\omega_1,\omega_1)$-iteration strategy for $W$. Recall $\Sigma$ is not in $\boldsymbol{\Gamma}$. We will show $\Sigma$ is in $S(\delta_{M_\infty})\backslash S(\boldsymbol{\delta_\Gamma})$. \begin{claim} $\Sigma$ is $\delta_{M_\infty}$-Suslin. \end{claim} \begin{proof} Let $\mathcal{T}$ be a tree on $(\omega \times \omega) \times \delta_{M_\infty}$ such that $(x,y,f)\in [\mathcal{T}]$ if and only if $x$ codes a countable iteration tree $\mathcal{S}$ on $W$ of limit length, $y$ codes a cofinal, wellfounded branch $b$ through $\mathcal{S}$, and $f$ codes an embedding $\pi: M_b^{\mathcal{S}} \to M_\infty$ such that $\pi \circ i^{\mathcal{S}}_b = \pi_{W,\infty}$. Let $\Sigma' = \rho[\mathcal{T}]$.
If $(x,y)\in \Sigma$, then $x$ codes an iteration tree $\mathcal{S}$ on $W$ according to $\Sigma$ and $y$ codes the cofinal, wellfounded branch $b$ through $\mathcal{S}$ chosen by $\Sigma$. And $\pi_{M^\mathcal{S}_b,\infty}\circ i^\mathcal{S}_b = \pi_{W,\infty}$. So if $f:\omega\to \delta_{M_\infty}$ codes the embedding $\pi_{M^\mathcal{S}_b,\infty}$, then $(x,y,f)\in[\mathcal{T}]$. Thus $(x,y)\in\Sigma'$. On the other hand, suppose $(x,y) \in \Sigma'$ and $x$ codes an iteration tree $\mathcal{S}$ according to $\Sigma$. Fix $f:\omega\to \delta_{M_\infty}$ such that $(x,y,f)\in[\mathcal{T}]$. Let $b$ be the branch coded by $y$ and $\pi$ the embedding coded by $f$. \begin{subclaim} For all $n$, $i^\mathcal{S}_b(\tau^W_n) = \tau^{M^\mathcal{S}_b}_n$. \end{subclaim} \begin{proof} Let $Q \in \mathcal{I}$ be such that $range(\pi)\subseteq range(\pi_{Q,\infty})$. Let $\pi' = \pi_{Q,\infty}^{-1} \circ \pi$. Then $\pi':M^{\mathcal{S}}_b \to Q$ and $\pi'(i^{\mathcal{S}}_b(\tau^W_n)) = \tau^Q_n$. Then by Lemma \ref{term relation condensation}, $i^{\mathcal{S}}_b(\tau^W_n) = \tau^{M_b^{\mathcal{S}}}_n$. \end{proof} From the subclaim and the last part of Lemma \ref{branch agreement}, we have that $(x,y)\in\Sigma$. We can now characterize $\Sigma$ as the set of $(x,y)\in\mathbb{R}\times\mathbb{R}$ such that \begin{enumerate} \item $x$ codes an iteration tree $\mathcal{S}$ on $W$ of limit length, \item $y$ codes a cofinal, wellfounded branch through $\mathcal{S}$, \item $(x,y)\in\Sigma'$, and \item \label{initial segments of tree are good} for any $(x_0,y_0) \leq_T x$ such that $x_0$ codes a proper initial segment $\mathcal{S}_0$ of $\mathcal{S}$ of limit length and $y_0$ codes the branch through $\mathcal{S}_0$ determined by $\mathcal{S}$, $(x_0,y_0)\in \Sigma'$. \end{enumerate} Condition \ref{initial segments of tree are good} is just to guarantee $\mathcal{S}$ is in the domain of $\Sigma$.
It does so because any proper initial segment $\mathcal{S}_0$ of $\mathcal{S}$ is coded by some real computable from $x$. From this, and the preceding paragraphs, it is clear these conditions characterize $\Sigma$. Since $\Sigma'$ is $\delta_{M_\infty}$-Suslin, this characterization of $\Sigma$ makes plain that $\Sigma$ is also $\delta_{M_\infty}$-Suslin. \end{proof} \begin{claim} $\boldsymbol{\Gamma} = S(\boldsymbol{\delta_\Gamma})$. \end{claim} \begin{proof} First, let's establish $\boldsymbol{\Gamma}$ is Suslin. Let \begin{align*} \Omega= \{\boldsymbol{\Sigma_1}(J_\gamma(\mathbb{R})) : \gamma < \alpha_0 \text{ and } \gamma \text{ begins a } \Sigma_1\text{-gap}\}. \end{align*} It follows from Theorem \ref{extended projective hierarchy to levy hierarchy} that $\boldsymbol{\Gamma}$ is the minimal non-selfdual pointclass closed under projection which contains every pointclass in $\Omega$. Let \begin{align*} \Psi = \{\boldsymbol{\Sigma_1}(J_\gamma(\mathbb{R})) \in \Omega : \, \boldsymbol{\Sigma_1}(J_\gamma(\mathbb{R})) \text{ is Suslin}\}. \end{align*} By Theorem \ref{classification of suslin pointclasses}, $\Psi$ is cofinal in $\Omega$. But the minimal Suslin pointclass larger than any element of $\Psi$ is just the minimal non-selfdual pointclass closed under projection which contains every pointclass in $\Omega$ (by part \ref{ind-like case of suslin classification} of Theorem \ref{classification of suslin pointclasses}). Since $\Psi$ is cofinal in $\Omega$, this is $\boldsymbol{\Gamma}$. So $\boldsymbol{\Gamma} = S(\lambda)$ for some cardinal $\lambda$. By the Kunen-Martin Theorem, there is a prewellordering of length $\lambda$ in $\boldsymbol{\Gamma}$ but no prewellordering of length $\lambda^+$.
The latter implies that $\lambda\geq\boldsymbol{\delta_\Gamma}$, since $\boldsymbol{\delta_\Gamma}$ is a limit cardinal,\footnote{See Theorem 7D.8 of \cite{mosdst}.} and since there are prewellorderings of length $\alpha$ in $\boldsymbol{\Gamma}$ for all $\alpha<\boldsymbol{\delta_\Gamma}$. The former implies that $\lambda\leq\boldsymbol{\delta_\Gamma}$, since there is no prewellordering of length $\boldsymbol{\delta_\Gamma^+}$ in $\boldsymbol{\Gamma}$ (otherwise a proper initial segment of this prewellordering would be of length $\boldsymbol{\delta_\Gamma}$, giving a prewellordering of length $\boldsymbol{\delta_\Gamma}$ in $\boldsymbol{\Delta}$). So $\boldsymbol{\delta_\Gamma}=\lambda$. \end{proof} By the previous two claims, $\Sigma \in S(\delta_{M_\infty}) \backslash S(\boldsymbol{\delta_\Gamma})$. In particular, $\delta_{M_\infty} \geq \lambda'$ where $\lambda'$ is the next Suslin cardinal after $\boldsymbol{\delta_\Gamma}$.\footnote{In fact $\delta_{M_\infty} = \lambda'$, but we don't need this.} But $cof(\lambda') = \omega$ by part \ref{ind-like case of suslin classification} of Theorem \ref{classification of suslin pointclasses}, so $\delta_{M_\infty} \geq \lambda' > \boldsymbol{\delta^+_\Gamma}$. \end{proof} \begin{lemma} \label{measurable or cof omega} Suppose $\mu < \delta_{M_\infty}$ is a regular cardinal of $M_\infty$. Then $\mu$ is not measurable in $M_\infty$ if and only if $\mu$ has cofinality $\omega$ in $L(\mathbb{R})$. \end{lemma} \begin{proof} Suppose $\mu$ is not measurable in $M_\infty$. Fix $M \in \mathcal{I}$ and $\bar{\mu}$ such that $\pi_{M,\infty}(\bar{\mu}) = \mu$. Then $\bar{\mu}$ is regular but not measurable in $M$. Since $M$ is countable, there is a sequence of ordinals $\langle \bar{\xi}_n: n < \omega \rangle$ cofinal in $\bar{\mu}$. Let $\xi_n = \pi_{M,\infty}(\bar{\xi}_n)$.
Since $\bar{\mu}$ is regular and not measurable in $M$, $\pi_{M,\infty}$ is continuous at $\bar{\mu}$ (this is because $\pi_{M,\infty}$ is essentially an iteration embedding --- in fact it is an iteration embedding in $V^{Col(\omega,\mathbb{R})}$. And any iteration embedding is continuous at a cardinal which is regular but not measurable, since ultrapower embeddings are continuous at such cardinals). So $\langle \xi_n : n < \omega\rangle $ is cofinal in $\mu$. Now suppose $\mu$ has cofinality $\omega$ in $L(\mathbb{R})$. Let $\langle \xi_n: n <\omega \rangle$ be cofinal in $\mu$. Fix $M \in \mathcal{I}$ such that there is $\bar{\mu}\in M$ and $\langle \bar{\xi}_n : n < \omega\rangle \subset M$ with $\pi_{M,\infty}(\bar{\mu}) = \mu$ and $\pi_{M,\infty}(\bar{\xi}_n) = \xi_n$. If $\mu$ is measurable in $M_\infty$, then there is a total extender $F$ on the fine extender sequence of $M$ with critical point $\bar{\mu}$. Let $M'$ be the ultrapower of $M$ by $F$ and $j:M \to M'$ the embedding induced by $F$. Then for any $n<\omega$, \begin{align*} \xi_n &= \pi_{M,\infty}(\bar{\xi}_n)\\ &= \pi_{M',\infty}\circ j(\bar{\xi}_n)\\ &= \pi_{M',\infty}(\bar{\xi}_n)\\ &< \pi_{M',\infty}(\bar{\mu})\\ &< \pi_{M',\infty}\circ j(\bar{\mu})\\ &= \mu. \end{align*} So $\pi_{M',\infty}(\bar{\mu})$ is an upper bound for the $\xi_n$ below $\mu$, a contradiction. \end{proof} \subsection{Definability in Suitable Mice} \label{definability section} \begin{lemma} \label{Lp is definable} Suppose $N$ is a premouse satisfying enough of $ZFC$, $\nu$ is a cardinal of $N$, $Lp^\Gamma(a) \subset N$ for each $a\in N|\nu$, and $\tau \in N^{Col(\omega,\nu)}$ weakly captures $G_0$ over $N$. Then the map with domain $N|\nu$ defined by $a \mapsto Lp^\Gamma(a)$ is definable in $N$ from $\tau$.
\end{lemma} \begin{proof} Recall \begin{align*} G_0 = \{(x,y):\, & x \text{ codes some transitive set } a \text{ and } y \text{ codes an } \omega\text{-sound } \\ & a\text{-premouse } R \text{ such that } R \text{ projects to } a \text{ and } R \text{ has an } \\ & \omega_1\text{-iteration strategy in } \boldsymbol{\Delta}\}. \end{align*} Fix $a\in N|\nu$. If $R$ is any set in $N|\nu$ and $g$ is any $Col(\omega,\nu)$-generic over $N$, then there are reals $x$ and $y$ in $N[g]$ coding $a$ and $R$, respectively. It is easy to see from this that $Lp^\Gamma(a)$ is \begin{align*} \bigcup\{R\in N : \emptyset \Vdash^N_{Col(\omega,\nu)} (\exists x,y)[ (x,y)\in\tau \wedge x \text{ codes } a \wedge y \text{ codes } R]\}. \end{align*} \end{proof} \begin{corollary} \label{lower part is definable} If $P$ is $\Gamma$-ss, then the map with domain $P|\nu_P$ defined by $a \mapsto Lp^\Gamma(a)$ is definable in $P$ from $\tau^P_{0,\nu_P}$. \end{corollary} \begin{proof} It is clear from Remark \ref{lp(a) contained in lp(b)} and Corollary \ref{standard terms capture} that $P$ and $\tau^P_{0,\nu_P}$ satisfy the conditions of Lemma \ref{Lp is definable}. \end{proof} \begin{lemma} \label{terms definable} Suppose $P$ is $\Gamma$-ss and $N \in P|\nu_P$ is $\Gamma$-suitable. Then $\{\tau^N_{n,\mu}: \mu \text{ is an uncountable cardinal of } N \}$ is definable in $P$ from $N$ and $\tau^P_{n,\nu_P}$ (uniformly in $P$ and $N$). \end{lemma} \begin{proof} Let $\mu$ be an uncountable cardinal of $N$. Note if $g$ is $Col(\omega,\nu_P)$-generic over $P$ and $f\in P$ is a surjection of $\nu_P$ onto $\mu$, then $f\circ g$ is $P$-generic for $Col(\omega,\mu)$. In particular, $f \circ g$ is $N$-generic for $Col(\omega,\mu)$. Fix such an $f$ which is minimal in the constructibility order of $P$.
Let \begin{align*} \tau_{n,\mu} = \,&\{(\sigma,p): \sigma \text{ is a } Col(\omega,\mu) \text{-standard term for a real}, p\in Col(\omega,\mu),\\ &\text{ and } \emptyset \Vdash^P_{Col(\omega,\nu_P)} (\check{p}\in \check{f}\circ \dot{g} \rightarrow \check{\sigma}[\check{f} \circ \dot{g}] \in \tau^P_{n,\nu_P})\}. \end{align*} It is clear that $\tau_{n,\mu}$ is definable in $P$ from $N$, $\mu$, and $\tau^P_{n,\nu_P}$. It suffices to show $\tau_{n,\mu} = \tau^N_{n,\mu}$. $\tau_{n,\mu} \subseteq \tau^N_{n,\mu}$ by Definition \ref{definition of standard terms} and the fact that comeager many $h\subset Col(\omega,\mu)$ which are generic over $N$ are of the form $f \circ g$ for some $g$ which is $Col(\omega,\nu_P)$-generic over $P$. On the other hand, suppose $(\sigma,p) \in \tau^N_{n,\mu}$. By Corollary \ref{standard terms capture}, $\sigma[h]\in G_n$ for any $h$ which is $Col(\omega,\mu)$-generic over $N$ such that $p\in h$. In particular, $\sigma[f \circ g] \in \tau^P_{n,\nu_P}[g]$ for any $g$ which is $Col(\omega,\nu_P)$-generic over $P$ such that $p\in f\circ g$. Thus $(\sigma,p) \in \tau_{n,\mu}$. \end{proof} We will also need versions of Corollary \ref{lower part is definable} and Lemma \ref{terms definable} in generic extensions of $\Gamma$-ss mice. \begin{lemma} \label{term works in generic extension} Suppose $B\subseteq \mathbb{R}$, $P$ is a premouse, $\delta$ is Woodin in $P$, $\mu \geq \delta$, $\tau\in P^{Col(\omega,\mu)}$ weakly captures $B$ over $P$, and $y$ is $Ea_P$-generic over $P$. Then there is $\tau'\in P[y]^{Col(\omega,\mu)}$ which weakly captures $B$ over $P[y]$. Moreover, $\tau'$ is definable in $P[y]$ from $\tau$ and $y$ (uniformly). \end{lemma} \begin{proof} $Col(\omega,\mu)$ is universal for partial orders of size at most $\mu$.
So there is a complete embedding $\Phi: Ea_P \times Col(\omega,\mu) \to Col(\omega,\mu)$.\footnote{In the sense of Definition 7.1 of Chapter 7 of \cite{kunen}.} If $g$ is $Col(\omega,\mu)$-generic over $P$, let $(y_g,f_g)$ be the $Ea_P \times Col(\omega,\mu)$-generic consisting of all conditions $(p,q)\in Ea_P \times Col(\omega,\mu)$ such that $\Phi((p,q))\in g$ (see Chapter 7, Theorem 7.5 of \cite{kunen}). Let \begin{align*} \tau^* = \{(\sigma,(p,q)): & \,\sigma \text{ is an } Ea_P\text{-term for a } Col(\omega,\mu)\text{-standard term }\\ & \text{for a real, } (p,q)\in Ea_P\times Col(\omega,\mu), \text{ and }\\ & \Phi((p,q)) \Vdash^P_{Col(\omega,\mu)} \sigma[y_g][f_g] \in \tau[g]\}. \end{align*} \begin{claim} \label{claim for lemma on term definability} For any $(y,f)$ which is $Ea_P\times Col(\omega,\mu)$-generic over $P$,\\ $\tau^*[y][f] = B \cap P[y][f]$. \end{claim} \begin{proof} Suppose $x\in\tau^*[y][f]$. Then $x = \sigma[y][f]$ for some $(\sigma,(p,q))\in\tau^*$ such that $p\in y$ and $q\in f$. Let $g$ be $Col(\omega,\mu)$-generic such that $y_g = y$ and $f_g = f$. In particular, $\Phi((p,q))\in g$. Then $P[g]\models \sigma[y_g][f_g]\in \tau[g]$. Since $x= \sigma[y_g][f_g]$ and $\tau[g]= B \cap P[g]$, $x\in B \cap P[g]$. Now suppose $x\in B\cap P[y][f]$. Let $\sigma$ be an $Ea_P$-term for a $Col(\omega,\mu)$-standard term for a real such that $x= \sigma[y][f]$. $\bigcup \Phi''\{(p,q):(p,q)\in y\times f\}$ is a function $g_1:S\to\mu$ for some $S\subseteq \omega$. Let \begin{align*} \mathbb{Q} = \{r\in Col(\omega,\mu): domain(r)\cap S = \emptyset\} \end{align*} ($\mathbb{Q}$ is the quotient of $Col(\omega,\mu)$ by $g_1$). Let $g_2$ be $\mathbb{Q}$-generic over $P[g_1]$. Then $g = g_1 \cup g_2$ is $Col(\omega,\mu)$-generic over $P$. We have $x\in \tau[g]$. Pick $s\in g$ such that $s \Vdash^P_{Col(\omega,\mu)} \sigma[y_g][f_g]\in \tau[g]$. $s = r_1 \cup r_2$ for some $r_1\in g_1$ and $r_2\in g_2$.
\begin{subclaim} $r_1 \Vdash^P_{Col(\omega,\mu)} \sigma[y_g][f_g]\in \tau$. \end{subclaim} \begin{proof} Suppose not. Then there is $g'_2$ which is $\mathbb{Q}$-generic over $P[g_1]$ such that, letting $g' = g_1 \cup g'_2$, $\sigma[y_{g'}][f_{g'}]\notin \tau[g']$. $\sigma[y_{g'}][f_{g'}] = x$, since $y_g$ and $f_g$ depend only on $g\upharpoonright S$. But then $x\in (B \cap P[g'])\backslash \tau[g']$, contradicting that $\tau$ weakly captures $B$. \end{proof} Pick $p\in y$ and $q\in f$ such that $\Phi((p,q))$ extends $r_1$. Then $(\sigma,(p,q))\in \tau^*$. So $x\in \tau^*[y][f]$. \end{proof} Let \begin{align*} \tau' = \{(\sigma[y],q): \,\exists p \in y \text{ such that } (\sigma,(p,q))\in\tau^*\}. \end{align*} $\tau'$ is definable in $P[y]$ from $\tau$ and $y$. It is clear from Claim \ref{claim for lemma on term definability} that $\tau'$ weakly captures $B$ over $P[y]$. \end{proof} \begin{lemma} \label{generic extension lp-closed} Let $P$ be $\Gamma$-ss and $y$ be $Ea_P$-generic over $P$. Then for any $a\in P[y]$, $Lp^\Gamma(a)\subset P[y]$. \end{lemma} \begin{proof} Let $N$ be a $\Gamma$-suitable mouse built over $P$. $N$ has a Woodin cardinal $\delta_N$ above $\delta_P$. The iteration strategy for any proper initial segment of $N|\delta_N$ restricted to trees above $\delta_P$ is in $\boldsymbol{\Delta}$. And no initial segment of $N$ above $\delta_N$ projects strictly below $\delta_N$. It follows that any cardinal of $P$ remains a cardinal in $N$. In particular, $\delta_P$ remains Woodin in $N$ and $y$ is also $Ea_P$-generic over $N$. Suppose $R$ is an $\omega$-sound $a$-premouse with an $\omega_1$-iteration strategy in $\boldsymbol{\Delta}$ such that $R$ projects to $a$. It suffices to show $R\in P[y]$. Let $\alpha$ be the height of $R$. Iterating $N$ above $R$ if necessary, we may assume there is a real $g$ which is $Col(\omega,\delta_N)$-generic over $N$ such that some real in $N[y][g]$ codes $R$.
By Lemma \ref{term works in generic extension}, there is a $Col(\omega,\delta_N)$-term $\tau$ in $N[y]$ which weakly captures $G_0$. Then $R$ is the unique premouse in $N[y][g]$ of height $\alpha$ such that if $x_a$ codes $a$ and $x_R$ codes $R$, then $(x_a,x_R)\in \tau[g]$. By homogeneity of the forcing, for any $g'$ which is $Col(\omega,\delta_N)$-generic over $N$, there is a premouse $R'\in N[y][g']$ of height $\alpha$ and reals $x_a$ and $x_{R'}$ in $N[y][g']$ coding $a$ and $R'$, respectively, such that $(x_a,x_{R'})\in\tau[g']$. The uniqueness of $R$ implies $R\in N[y]$. Since $R$ is coded by a subset of $a$, $R\in P[y]$. \end{proof} \begin{corollary} \label{lp definable in generic ext} If $P$ is $\Gamma$-ss and $y$ is $Ea_P$-generic over $P$, then the map with domain $P[y]|\nu_P$ defined by $a \mapsto Lp^\Gamma(a)$ is definable in $P[y]$ from $\tau^P_{0,\nu_P}$ and $y$ (uniformly in $P$ and $y$). \end{corollary} \begin{proof} $P[y]$ is $Lp^\Gamma$-closed by Lemma \ref{generic extension lp-closed}. Then by Lemma \ref{Lp is definable}, the map $a\mapsto Lp^\Gamma(a)$ with domain $P[y]|\nu_P$ is definable from any term $\tau\in P[y]^{Col(\omega,\nu_P)}$ which weakly captures $G_0$ over $P[y]$. Lemma \ref{term works in generic extension} shows there is a term $\tau\in P[y]^{Col(\omega,\nu_P)}$ which weakly captures $G_0$ over $P[y]$ and is definable from $\tau^P_{0,\nu_P}$ and $y$ in $P[y]$. \end{proof} \begin{corollary} Suppose $P$ is $\Gamma$-ss, $y$ is $Ea_P$-generic over $P$, and $N\in P[y]|\nu_P$ is $\Gamma$-suitable. Then $\{\tau^N_{n,\mu}: \mu \text{ is an uncountable cardinal of } N \}$ is definable in $P[y]$ from $N$, $y$, and $\tau^P_{n,\nu_P}$ (uniformly in $P$, $y$, and $N$).
\end{corollary} \begin{proof} This is by the proof of Lemma \ref{terms definable}, using from Lemma \ref{term works in generic extension} that there is a term in $P[y]$ which weakly captures $G_n$ over $P[y]$ and is definable from $\tau^P_{n,\nu_P}$ and $y$. \end{proof} \subsection{Internalizing the Direct Limit} \label{internalization section} Let $x_0\in\mathbb{R}$ be any real which is Turing above some real coding $W$ and consider some $M$ which is a countable, complete iterate of $M_{x_0}$. For elements of $M|\nu_M$, being a $\Gamma$-suitable premouse, a $\Gamma$-short iteration tree, or a $\Gamma$-maximal iteration tree is definable over $M$ from $\tau^M_{0,\nu_M}$ (this follows easily from Corollary \ref{lower part is definable}). Let \begin{align*} \mathcal{I}^M = \{P \in M|\nu_M: P \in\mathcal{I}\}. \end{align*} \begin{lemma} \label{M knows branches through short trees} Let $\mathcal{T} \in M|\nu_M$ be a $\Gamma$-short tree on some $\Gamma$-suitable $P\in M$. Then the branch $b$ picked by the iteration strategy for $P$ is in $M$ and $b$ is definable in $M$ from $\mathcal{T}$ and $\tau^M_{0,\nu_M}$ (uniformly). In particular, $M_b^\mathcal{T}$ and the iteration map $i_b^\mathcal{T}:P\to M_b^\mathcal{T}$ are definable in $M$ from $\mathcal{T}$ and $\tau^M_{0,\nu_M}$. \end{lemma} \begin{proof} Let $g$ be $Col(\omega,\nu_M)$-generic over $M$. Note $b$ is the unique branch through $\mathcal{T}$ which absorbs $\mathcal{Q}(\mathcal{T})$. So by Shoenfield absoluteness, $b \in M[g]$ (in $M[g]$ the existence of such a branch is a $\Sigma^1_2$ statement about reals). But $b$ is independent of the generic $g$, so $b\in M$. It then follows from Corollary \ref{lower part is definable} that $b$, and therefore also $M_b^\mathcal{T}$ and $i^\mathcal{T}_b$, are definable in $M$ from $\tau^M_{0,\nu_M}$.
\end{proof} \begin{corollary} \label{gamma-guided definable} Suppose $P \in \mathcal{I}^M$ and $\Sigma$ is the iteration strategy for $P$. Suppose also $\mathcal{T} \in M|\nu_M$ is an iteration tree on $P$ below $\delta_P$ of limit length. Whether $\mathcal{T}$ is according to $\Sigma$ is definable in $M$ from parameter $\tau^M_{0,\nu_M}$ by a formula independent of $\mathcal{T}$ and the choice of $\Gamma$-ss mouse $M$. \end{corollary} \begin{lemma} \label{coiterate in M|nu} Suppose $P,Q\in \mathcal{I}^M$. Then there is $R \in \mathcal{I}^M$ and normal iteration trees $\mathcal{T}$ and $\mathcal{U}$ on $P$ and $Q$, respectively, such that \begin{enumerate} \item $\mathcal{T}$ realizes $R$ is a complete iterate of $P$, \item $\mathcal{U}$ realizes $R$ is a complete iterate of $Q$, \item $\mathcal{T}\upharpoonright lh(\mathcal{T}) \in M|\nu_M$, \item $\mathcal{U}\upharpoonright lh(\mathcal{U}) \in M|\nu_M$, and \item $R$ is definable in $M$ from $P$, $Q$, and $\tau^M_{0,\nu_M}$ (uniformly). \end{enumerate} \end{lemma} \begin{proof} We perform a coiteration of $P$ and $Q$ inside $M$. Suppose that so far the coiteration has produced iteration trees $\mathcal{T}$ and $\mathcal{U}$ on $P$ and $Q$, respectively. Suppose $\mathcal{T}$ and $\mathcal{U}$ have successor length. Let $P'$ and $Q'$ be the last models of $\mathcal{T}$ and $\mathcal{U}$, respectively. First consider the case $P' \trianglelefteq Q'$ or $Q' \trianglelefteq P'$. If either is a proper initial segment of the other, or there are any drops on the branches to $P'$ or $Q'$, we have violated the Dodd-Jensen property. So $P'=Q'$ and $P'$ is a common, complete iterate of $P$ and $Q$. Otherwise, we continue the coiteration as usual by applying the extender at the least point of disagreement between the last models of $\mathcal{T}$ and $\mathcal{U}$, respectively. Now suppose $\mathcal{T}$ and $\mathcal{U}$ are of limit length.
In this case $\mathcal{M}(\mathcal{T}) = \mathcal{M}(\mathcal{U})$. If $\mathcal{T}$ is $\Gamma$-short, so is $\mathcal{U}$, and by Lemma \ref{M knows branches through short trees}, $M$ can identify the branches the iteration strategies for $P$ and $Q$ pick through $\mathcal{T}$ and $\mathcal{U}$, respectively. So the coiteration can be continued inside $M$. Otherwise, $\mathcal{T}$ and $\mathcal{U}$ are $\Gamma$-maximal. In this case let $R$ be the unique $\Gamma$-suitable mouse extending $\mathcal{M}(\mathcal{T})$. $R$ is just the result of applying $Lp^\Gamma$ to $\mathcal{M}(\mathcal{T})$ $\omega$ times, so $M$ can identify $R$ by Lemma \ref{Lp is definable}. Then $R$ is a complete iterate of $P$ and $Q$. The proof of the Comparison Lemma gives that the coiteration terminates in fewer than $\nu_M$ steps. Then the argument above implies the trees from this coiteration, without their last branches, are in $M|\nu_M$ and definable in $M$. \end{proof} The lemma implies $\mathcal{I}^M$ is a directed system. $\mathcal{I}^M$ is countable and contained in $\mathcal{I}$, so we may define the direct limit $\mathcal{H}^M$ of $\mathcal{I}^M$, and $\mathcal{H}^M \in \mathcal{I}$. Let \begin{align*} \tilde{\mathcal{I}}^M =\, \{P \in \mathcal{I}^M : &\text{ there is a normal iteration tree } \mathcal{T} \text{ such that } \mathcal{T} \text{ realizes } \\ & P \text{ is a complete iterate of } W \text{ and } \mathcal{T}\upharpoonright lh(\mathcal{T}) \in M|\nu_M\}. \end{align*} $\tilde{\mathcal{I}}^M$ is definable in $M$ by Corollary \ref{gamma-guided definable}. \begin{lemma} \label{cofinal in I^M} $\tilde{\mathcal{I}}^M$ is cofinal in $\mathcal{I}^M$. In particular, the direct limit of $\tilde{\mathcal{I}}^M$ is $\mathcal{H}^M$. \end{lemma} \begin{proof} Suppose $P\in \mathcal{I}^M$.
By Lemma \ref{coiterate in M|nu}, there is $R \in \mathcal{I}^M$ which is a common, complete, normal iterate of both $P$ and $W$ by trees which are in $M$ (modulo their final branches). Then $R$ is below $P$ in $\mathcal{I}^M$ and $R\in \tilde{\mathcal{I}}^M$. \end{proof} \begin{lemma} \label{M can approximate iteration maps} Suppose $P\in \mathcal{I}^M$. Let $\Sigma$ be the (unique) iteration strategy for $P$. Suppose $\mathcal{T}\in M|\nu_M$ is an iteration tree on $P$ according to $\Sigma$. Let $b = \Sigma(\mathcal{T})$ and let $Q = M^\mathcal{T}_b$. Then $Q$ is definable in $M$ from $\mathcal{T}$ and $\tau^M_{0,\nu_M}$. And $\pi_{P,Q}\upharpoonright \gamma^P_n$ is definable in $M$ from $\mathcal{T}$ and $\langle\tau^M_{k,\nu_M}: k < n \rangle$ (uniformly). \end{lemma} \begin{proof} If $\mathcal{T}$ is $\Gamma$-short, then this is by Lemma \ref{M knows branches through short trees}. Suppose $\mathcal{T}$ is $\Gamma$-maximal. Then $Q = \bigcup_{i < \omega} Q_i$, where $Q_0 = \mathcal{M}(\mathcal{T})$ and $Q_{i+1} = Lp^\Gamma(Q_i)$. So $Q$ is definable from $\mathcal{M}(\mathcal{T})$ and $\tau^M_{0,\nu_M}$ by Corollary \ref{lower part is definable}. And $\pi_{P,Q}\upharpoonright \gamma^P_n = \pi_c\upharpoonright\gamma^P_n$, where $c$ is any branch through $\mathcal{T}$ respecting $\vec{G}_n$. The argument of Lemma \ref{M knows branches through short trees} shows there is a branch $c$ in $M$ respecting $\vec{G}_n$. Then $\pi_{P,Q}\upharpoonright \gamma^P_n = \pi_c\upharpoonright\gamma^P_n$ for any wellfounded branch $c\in M$ through $\mathcal{T}$ such that $\pi_c(\langle\tau^P_k: k < n \rangle) = \langle\tau^Q_k: k < n \rangle$. $\langle\tau^P_k: k < n \rangle$ and $\langle\tau^Q_k: k < n \rangle$ are definable in $M$ from $P$, $Q$, and $\langle\tau^M_{k,\nu_M}: k < n \rangle$ by Lemma \ref{terms definable}.
So $\pi_{P,Q}\upharpoonright \gamma_n^P$ is definable in $M$ from $\mathcal{T}$ and $\langle\tau^M_{k,\nu_M}: k < n \rangle$. \end{proof} It follows from the previous lemmas that for any $P\in \mathcal{I}^M$, $\pi_{P,\mathcal{H}^M}\upharpoonright \gamma^P_n$ is definable in $M$ from $P$ and $\langle\tau^M_{k,\nu_M}: k < n \rangle$ (uniformly in $M$). The same lemmas hold in $M[y]$ for $y$ $Ea_M$-generic over $M$. In particular, we have: \begin{lemma} \label{M[y] can approximate branches through maximal trees} Suppose $y$ is $Ea_M$-generic over $M$ and $P \in \mathcal{I} \cap M[y]|\nu_M$. Let $\Sigma$ be the (unique) iteration strategy for $P$. Suppose $\mathcal{T}\in M[y]|\nu_M$ is an iteration tree on $P$ according to $\Sigma$. Let $b = \Sigma(\mathcal{T})$ and let $Q = M^\mathcal{T}_b$. Then $Q$ is definable in $M[y]$ from $\mathcal{T}$ and $\tau^M_{0,\nu_M}$. And $\pi_{P,Q}\upharpoonright\gamma^P_n$ is definable in $M[y]$ from $\mathcal{T}$ and $\langle \tau^M_{k,\nu_M}: k < n\rangle$ (uniformly). Moreover, the definition is independent not just of the choice of $\Gamma$-ss mouse $M$, but also of the generic $y$. \end{lemma} \begin{lemma} \label{coiterate all possible generics} Suppose $p\in Ea_M$ and $\dot{S}$ is an $Ea_M$-name in $M|\nu_M$ such that $p \Vdash_{Ea_M}$ ``$\dot{S}$ is a complete iterate of $W$.'' Then there is $R\in\tilde{\mathcal{I}}^M$ such that $R$ is a complete iterate of $\dot{S}[y]$ for every $y \in \mathbb{R}$ which is $Ea_M$-generic over $M$. Moreover, we can pick $R$ such that $R$ is (uniformly) definable in $M$ from parameters $\dot{S}$ and $p$. \end{lemma} \begin{proof} Let $\mathbb{P}$ be the finite support product $\prod_{j<\omega} \mathbb{P}_j$, where each $\mathbb{P}_j$ is a copy of the part of $Ea_M$ below $p$. Let $H$ be $\mathbb{P}$-generic over $M$. We can represent $H$ as $\prod_{j<\omega} H_j$, where $H_j$ is $\mathbb{P}_j$-generic over $M$.
Let $\dot{S}_j$ be a $\mathbb{P}$-name for $\dot{S}[H_j]$. Let $S_j = \dot{S}_j[H]$ for $j\in \omega$ and $S_{-1} = W$. Lemma \ref{M[y] can approximate branches through maximal trees} tells us that $M[H]$ can perform the simultaneous coiteration of all of the $S_j$ for $j\in [-1,\omega)$ (except possibly finding the last branches). The proof of the Comparison Lemma gives that this coiteration terminates after fewer than $\nu_M$ steps. Let $R_j$ be the last model of the iteration tree on $S_j$ produced by the coiteration. Since each $S_j$ is a complete iterate of $W$, the Dodd-Jensen property implies there are no drops on the branches from $S_j$ to $R_j$ and $R_j = R_i$ for all $i,j\in[-1,\omega)$. Let $R = R_j$ for some (equivalently all) $j\in [-1,\omega)$. Then $R$ is a complete iterate of $W$ and $R$ is a complete iterate of $S_j$ for each $j\in\omega$. Let $\mathcal{U}$ be the iteration tree on $W$ from the coiteration. \begin{claim} \label{R ind of generic} $R$ is independent of the choice of generic $H$. \end{claim} \begin{proof} Code $R$ by a set of ordinals $X$ contained in $\nu_M$. Let $\dot{X}$ be a name for $X$. If $R$ is not independent of $H$, then there is $\alpha < \nu_M$ and $q_1,q_2\in\mathbb{P}$ such that $q_1 \Vdash\check{\alpha}\in \dot{X}$ and $q_2 \Vdash \check{\alpha}\notin \dot{X}$. Let $N > \max(support(q_2))$. Let $\bar{q}_1$ be the condition $q_1$ shifted over by $N$ --- that is, $support(\bar{q}_1) = \{j\in [N,\omega) :\, j-N \in support(q_1)\}$ and for $j\in support(\bar{q}_1)$, $\bar{q}_1(j) = q_1(j-N)$. So $\bar{q}_1$ is compatible with $q_2$ and by symmetry, $\bar{q}_1 \Vdash \check{\alpha}\in\dot{X}$. But then there is $r \leq q_2,\bar{q}_1$ which forces both $\check{\alpha}\in\dot{X}$ and $\check{\alpha}\notin \dot{X}$. \end{proof} \begin{claim} \label{U ind of generic} $\mathcal{U} \upharpoonright lh(\mathcal{U})$ is independent of the choice of generic $H$.
\end{claim} \begin{proof} The same proof as in Claim \ref{R ind of generic} works. \end{proof} Claim \ref{R ind of generic} implies $R\in M|\nu_M$ and $R$ is a complete iterate of $\dot{S}[y]$ for any $y$ which is $Ea_M$-generic over $M$. Claim \ref{U ind of generic} gives that $\mathcal{U} \upharpoonright lh(\mathcal{U}) \in M|\nu_M$ and thus $R\in\tilde{\mathcal{I}}^M$. \end{proof} \subsection{The StrLe Construction} \label{strle lemmas section} Recall the mouse operator $x \mapsto M_x$ defined in Section \ref{suitable mice section}. In the following lemmas let $z,x \in \mathbb{R}$ be such that $z\in M_x$ and let $M= M_x$. \begin{lemma} Suppose $P = StrLe[M,z]$. Then $P$ is $\Gamma$-ss and $\delta_P = \delta_M$. \end{lemma} \begin{proof} Let $\delta = \delta_M$. By Lemma \ref{s-const lemma}, the cardinals of $P$ above $\delta$ are the same as the cardinals of $M$ and $\nu_M$ is inaccessible in $P$. Any inaccessible of $P$ above $\delta$ is inaccessible in $M$, since $M$ is a generic extension of $P$ by a $\delta$-c.c. forcing. In particular, $\nu_M$ is the unique inaccessible of $P$ above $\delta$. Then it suffices to show the following claim. \begin{claim} \begin{enumerate}[(a)] \item \label{le lp-closed} If $\eta < \delta_P$, then $Lp^\Gamma(P|\eta) \triangleleft P$. \item \label{delta is gamma-woodin} $\delta$ is a $\Gamma$-Woodin of $P$. That is, $\delta$ is Woodin in $Lp^\Gamma(P|\delta)$. \item \label{contained in lp} If $\eta \in P$ and $\eta \geq \delta$, then $P|(\eta^+)^P \trianglelefteq Lp^\Gamma(P|\eta)$. \item \label{delta is woodin} $P\models \delta$ is Woodin. \item \label{no woodins below delta} If $\eta < \delta$, $\eta$ is not Woodin in $Lp^\Gamma(P|\eta)$. \item \label{strle lp-closed} If $\eta\in P$ and $\delta_P \leq \eta$, then $Lp^\Gamma(P|\eta) \subseteq P$.
\end{enumerate} \end{claim} \begin{proof} To prove \ref{le lp-closed}, it suffices to show if $\eta < \delta_P$, $R\triangleleft Lp^\Gamma(P|\eta)$, and $\rho_\omega(R) = \eta$, then $R\triangleleft P$. Coiterate $R$ against $Le[M,z]$. Suppose $\mathcal{T}$ and $\mathcal{U}$ are the iteration trees on $R$ and $Le[M,z]$, respectively, from the coiteration. $\mathcal{T}$ is above $\eta$ because $Le[M,z]|\eta = R|\eta$ and $\eta$ is a cutpoint of $R$. Let $\lambda < lh(\mathcal{T})$ be a limit ordinal and $Q= \mathcal{Q}(\mathcal{T}) = \mathcal{Q}(\mathcal{U})$. Since $R \in Lp^\Gamma(P|\eta)$ and $\mathcal{T}$ is above $\eta$, $Q \in Lp^\Gamma(\mathcal{M}(\mathcal{T}))$. $[0,\lambda]_T$ and $[0,\lambda]_U$ are the unique branches through $\mathcal{T}$ and $\mathcal{U}$, respectively, which absorb $Q$. By Corollary \ref{lower part is definable}, these branches can be identified in $M$. In particular, the coiteration of $R$ and $Le[M,z]$ can be performed in $M$. Theorem \ref{universality of m-s} gives that $R$ cannot outiterate $Le[M,z]$. Then since $R$ is $\omega$-sound, $R$ projects to $\eta$, and $Le[M,z]$ does not project to $\eta$, $R$ is a proper initial segment of $Le[M,z]|(\eta^+)^{Le[M,z]}$. $Le[M,z]$ agrees with $P$ up to $\delta$, so $R\triangleleft P$. \ref{delta is gamma-woodin} is by the proof of Theorem 11.3 of \cite{msbook}. For \ref{contained in lp}, the iteration strategies for initial segments of $P|(\eta^+)^P$ restricted to iteration trees above $\delta$ are in $\boldsymbol{\Delta}$ by Fact \ref{definiability of s-const strat}. \ref{delta is woodin} is immediate from \ref{delta is gamma-woodin} and \ref{contained in lp}. See Sublemma 7.4 of \cite{hacm} for a proof of \ref{no woodins below delta}. Towards \ref{strle lp-closed}, let $Q = Lp^\Gamma(P|\eta)$. Let $\mathbb{P}$ be the extender algebra in $P$ at $\delta$ with $\delta$ generators. $M|\delta$ is $\mathbb{P}$-generic over $P$.
Note $\delta$ is Woodin in $Q$ by \ref{delta is gamma-woodin}. In particular, $\mathbb{P}$ is also $\delta$-c.c. in $Q$, so any antichain of $\mathbb{P}$ in $Q$ is also in $P$ and $M|\delta$ is also $\mathbb{P}$-generic over $Q$. Let $B \in Lp^\Gamma(P|\eta)$. $B$ is in $M = P[M|\delta_M]$ since $P|\eta$ is in $M$ and $M$ is closed under $Lp^\Gamma$. So let $\dot{B}$ be a $\mathbb{P}$-name in $P$ such that $\dot{B}[M|\delta] = B$. Choose $p\in\mathbb{P}$ such that $p \Vdash^Q_{\mathbb{P}} \dot{B} = \check{B}$. Any $G$ which is $\mathbb{P}$-generic over $P$ is also $\mathbb{P}$-generic over $Q$. So for any $G$ which is $\mathbb{P}$-generic over $P$ such that $p\in G$, $\dot{B}[G] = B$. But then $B$ is in $P$, since $B = \{\xi < \delta : p \Vdash^P_{\mathbb{P}} \check{\xi} \in \dot{B}\}$.\footnote{Viewing $B$ as a subset of $\delta$.} \end{proof} \end{proof} \begin{lemma} \label{another lemma about terms definable} Suppose $P = StrLe[M,z]$. Let $\mu \geq \delta_P$ be a cardinal of $P$. $\tau^P_{n,\mu}$ is definable in $M$ from $\tau^M_{n,\mu}$ and $z$. \end{lemma} \begin{proof} Let \begin{align*} \tau = \, \{(\sigma,p)\in P : \, \sigma \text{ is a } Col(\omega,\mu) \text{-standard term for a real}, p\in Col(\omega,\mu), \\ \text{ and } p \Vdash^M_{Col(\omega,\mu)} \check{\sigma}[\dot{g}] \in P[\dot{g}] \cap \tau^M_{n,\mu}[\dot{g}]\}. \end{align*} Since $\tau^M_{n,\mu}\in M$ and $P$ is definable over $M$ from $z$, $\tau \in M$ and is definable from $\tau^M_{n,\mu}$ and $z$. Then it suffices to show the following claim. \begin{claim} $\tau = \tau^P_{n,\mu}$. \end{claim} \begin{proof} Clearly $\tau^P_{n,\mu} \subseteq \tau$. Suppose $(\sigma,p)\in \tau$. Let $\mathcal{C}$ be the set of $g$ which are $Col(\omega,\mu)$-generic over $M$ such that $p\in g$. For any $g\in\mathcal{C}$, $\sigma[g] \in \tau^M_{n,\mu}[g]$. In particular, $\sigma[g]\in G_n$. Since $\mathcal{C}$ is comeager in the set of $Col(\omega,\mu)$-generics over $P$ which extend $p$, $(\sigma,p)\in \tau^P_{n,\mu}$.
\end{proof} \end{proof} \begin{lemma} \label{strle strat f-p and guided} Suppose $P = StrLe[M,z]$. The iteration strategy for $P$ is fullness-preserving and guided by $\mathcal{G}$. \end{lemma} \begin{proof} Let $\Sigma$ be the (unique) iteration strategy for $P$. The proof of Theorem \ref{iterability of m-s} gives that $\Sigma$ is determined by lifting an iteration on $P$ to one on $M$. More precisely, if $\mathcal{T}$ is a non-dropping\footnote{We leave to the reader the task of proving the case where $\mathcal{T}$ drops, as well as showing that $\Sigma$ condenses well.} iteration tree on $P_0 = P$ with $\langle P_\alpha \rangle$ the models of the iteration and $i_{\beta,\alpha}$ the associated iteration maps for $\beta <_T \alpha$, then we maintain an iteration tree $\mathcal{T}^*$ on $M_0 = M$ with models $\langle M_\alpha \rangle$ and associated iteration embeddings $i^*_{\beta,\alpha}$. We also maintain embeddings $\pi_\alpha: P_\alpha \to StrLe[M_\alpha,z]$ such that $\pi_\alpha \circ i_{\beta,\alpha} = i^*_{\beta,\alpha} \circ \pi_\beta$ and $\pi_0 = id$. In particular, $\pi_\alpha \circ i_{0,\alpha} = i^*_{0,\alpha}$. Suppose $\mu$ is a cardinal of $P$ and $\mu > \delta_P$. By Lemma \ref{another lemma about terms definable}, $i^*_{0,\alpha}(\tau^P_{n,\mu}) = \tau^{StrLe[M_\alpha,z]}_{n,\mu}$ for each $n<\omega$. Then $\pi_\alpha \circ i_{0,\alpha}(\tau^P_{n,\mu}) = \tau^{StrLe[M_\alpha,z]}_{n,\mu}$. Then by Lemma \ref{term relation condensation}, $P_\alpha$ is $\Gamma$-ss and $\pi_\alpha(\tau^{P_\alpha}_{n,\mu}) = \tau^{StrLe[M_\alpha,z]}_{n,\mu}$. This gives $\Sigma$ is fullness-preserving. A second application of Lemma \ref{term relation condensation} gives $i_{0,\alpha}(\tau^P_{n,\mu}) = \tau^{P_\alpha}_{n,\mu}$. So $\Sigma$ is guided by $\mathcal{G}$. \end{proof} \begin{corollary} \label{strle strategy not in gamma} Suppose $P = StrLe[M,z]$. Then the $\omega_1$-iteration strategy for $P$ is not in $\boldsymbol{\Gamma}$.
\end{corollary} \begin{proof} Immediate from Lemmas \ref{guided strat not in gamma} and \ref{strle strat f-p and guided}. \end{proof} \begin{lemma} \label{iterate of mitchell-steel} Suppose $x,z\in\mathbb{R}$ and $x$ codes a mouse $N$ which is a complete iterate of $M_z$. Let $P = StrLe[M_x,z]$. Then $P$ is a complete iterate of $N$ below $\delta_N$. \end{lemma} \begin{proof} Coiterate $N$ and $P$. Let $\mathcal{T}$ and $\mathcal{U}$ be the iteration trees on $N$ and $P$, respectively, from the coiteration. Let $N^*$ and $P^*$ be the last models of $\mathcal{T}$ and $\mathcal{U}$, respectively. Suppose $P$ outiterates $N$. One possibility is that there is a drop on the branch of $\mathcal{U}$ from $P$ to $P^*$. Since the iteration strategy for $P$ is fullness-preserving by Lemma \ref{strle strat f-p and guided}, $P^*$ has an $\omega_1$-iteration strategy in $\boldsymbol{\Delta}$. But the strategy for $N$ is fullness-preserving and guided by $\mathcal{G}$. So $N^*$ cannot have an iteration strategy in $\boldsymbol{\Delta}$, contradicting that $N^* \trianglelefteq P^*$. If there is no drop between $P$ and $P^*$, then $N^* \triangleleft P^*$. Since neither side of the coiteration drops, $N^*$ and $P^*$ are both $\Gamma$-ss. But no $\Gamma$-ss mouse can have a proper initial segment which is $\Gamma$-ss. An identical argument shows $N$ cannot outiterate $P$. Thus $N^* = P^*$ and $\mathcal{T}$ and $\mathcal{U}$ realize that $N^*$ and $P^*$ are complete iterates of $N$ and $P$, respectively. Since there are no total extenders on $N$ above $\delta_N$, $\mathcal{T}$ is below $\delta_N$. Similarly, $\mathcal{U}$ is below $\delta_P$. Then stationarity of the Mitchell-Steel construction\footnote{See e.g. 3.23 of \cite{sihmor}.} implies that $P^* = P$. So $\mathcal{T}$ realizes that $P$ is a complete iterate of $N$.
\end{proof} \subsection{A Reflection Lemma} \label{reflection section} In this section we prove a lemma stating that any $\Sigma_1$ statement true in $M_x$ also holds in some $N \triangleleft M_x|\kappa_{M_x}$ with the property that $StrLe[N] \triangleleft StrLe[M_x]$. A thorough reader not already familiar with the fully-backgrounded Mitchell-Steel construction may wish to review Section \ref{m-s construction section} before proceeding. A lazy one may read the statement of Lemma \ref{reflecting below kappa} and skip to Section \ref{main thm section}. First, we need to show $M_x$ can compute the iteration strategies of its own initial segments below its Woodin cardinal. More precisely, we have: \begin{lemma} \label{knows initial segments iterable} Let $x\in\mathbb{R}$, $N \triangleleft M_x|\delta_{M_x}$ and $\mathcal{T}\in M_x$ be an iteration tree on $N$ of limit length $< \delta_{M_x}$, according to the (unique) iteration strategy for $N$. The cofinal branch $b$ through $\mathcal{T}$ determined by the iteration strategy for $N$ is definable in $M_x$ (uniformly in $N$ and $\mathcal{T}$, from the parameter $\tau^{M_x}_{0,\nu_{M_x}}$). \end{lemma} \begin{proof} Let $M = M_x$. By Corollary \ref{lower part is definable}, the function $a\mapsto Lp^\Gamma(a)$ with domain $M|\delta_M$ is definable in $M$ from the parameter $\tau^M_{0,\nu_M}$. Let $N$ and $\mathcal{T}$ be as in the statement of the lemma. Let $S = \mathcal{M}(\mathcal{T})$. Clearly $S$ is definable from $\mathcal{T}$. Let $Q = \mathcal{Q}(\mathcal{T})$. $Q$ is an initial segment of $Lp^\Gamma(S)$. The previous paragraph implies $Q$ is definable in $M$ from $S$ and $\tau^M_{0,\nu_M}$. The branch $b$ through $\mathcal{T}$ chosen by the iteration strategy for $N$ is the unique branch which absorbs $Q$. It remains to show $b$ is in $M$. Iterate $M$ to $M'$ well above where $\mathcal{T}$ is constructed to make some $g$ generic over $Ea^{M'}_{\delta_{M'}}$ so that $g$ codes $b$.
$M'[g]$ satisfies that $b$ is the unique branch which absorbs $Q$. Since $b$ is in fact the unique such branch in $V$, symmetry of the forcing gives $b$ is in $M'$. But the iteration from $M$ to $M'$ does not add any subsets of $lh(\mathcal{T})$, so in fact $b$ is in $M$. \end{proof} We need to put down a few more properties of the Mitchell-Steel construction before proving the main lemma of this section. \begin{lemma} \label{club of tau so m-s nice} Suppose $N$ is a mouse with a Woodin cardinal $\delta_N$. Let $z\in N\cap \mathbb{R}$. There is a club $C$ of $\tau < \delta_N$ such that $Le[N|\delta_N,z]|\tau = \mathcal{M}_\tau$, where $\mathcal{M}_\tau$ is the Mitchell-Steel construction of length $\tau$ in $N|\delta_N$. Moreover, we can take $C$ to be definable in $N$. \end{lemma} \begin{proof} Let $\langle \mathcal{M}_\xi : \xi <\delta_N\rangle$ be the models from the Mitchell-Steel construction of length $\delta_N$ over $z$, done inside $N|\delta_N$. Let $C\subset \delta_N$ be the set of $\tau<\delta_N$ such that $\mathcal{M}_\tau$ has height $\tau$ and $\rho_\omega(\mathcal{M}_\xi) \geq \tau$ whenever $\tau \leq \xi < \delta_N$. It is not hard to see from the material in Section \ref{m-s construction section} that $C$ is a club and if $\tau\in C$, then $\mathcal{M}_\tau = Le[N|\delta_N,z]|\tau$. \end{proof} \begin{corollary} \label{m-s is union of m-s of initial segments} Let $N$, $z$, and $C$ be as in Lemma \ref{club of tau so m-s nice}. Let $S$ be the set of inaccessibles of $N$ below $\delta_N$. Then $Le[N|\delta_N,z] = \bigcup_{\tau\in C\cap S} Le[N|\tau,z]$. \end{corollary} \begin{proof} Since $\delta_N$ is Woodin in $N$, $N\models$ ``$S$ is stationary.'' And $C$ is definable in $N$, so $C\cap S$ is cofinal in $\delta_N$. Since $Le[N|\delta_N,z]$ has height $\delta_N$, $Le[N|\delta_N,z] = \bigcup_{\tau\in C\cap S} Le[N|\delta_N,z]|\tau$. So it suffices to show if $\tau \in C \cap S$, then $Le[N|\delta_N,z]|\tau = Le[N|\tau,z]$.
Let $\langle \mathcal{M}_\xi : \xi <\delta_N\rangle$ be the models from the Mitchell-Steel construction of length $\delta_N$ over $z$, done inside $N|\delta_N$. $\tau \in C$ guarantees $Le[N|\delta_N,z]|\tau = \mathcal{M}_\tau$. And by Remark \ref{m-s up to inaccessible}, $\tau \in S$ gives $\mathcal{M}_\tau = Le[N|\tau,z]$. So $Le[N|\delta_N,z]|\tau = Le[N|\tau,z]$ for $\tau \in C \cap S$. \end{proof} \begin{lemma} \label{reflecting below kappa} Suppose $M_x\models\phi[\vec{a}, \delta_{M_x}]$ for some $\Sigma_1$ formula $\phi$, $z\in \mathbb{R} \cap M_x$, and $\vec{a}\in\mathbb{R}^{|\vec{a}|}\cap M_x$. Then there exists $N\triangleleft M_x|\kappa_{M_x}$ such that \begin{enumerate}[(a)] \item \label{has a woodin} $N$ has one Woodin cardinal, \item \label{delta inaccessible} $\delta_N$ is an inaccessible cardinal of $M_x$, \item \label{satisfies phi} $N\models\phi[\vec{a}, \delta_N]$, and \item \label{strle initial segment} $StrLe[N,z] \triangleleft StrLe[M_x,z]$. \end{enumerate} \end{lemma} \begin{proof} Denote $M_x$ by $M$. For ease of notation we will assume $z=0$. Let $\mu$ be a cardinal of $M$ above $\delta_M$ such that $M|\mu\models \phi[\vec{a},\delta_M]$. \begin{claim} \label{club of tau so hull nice} There is a stationary set of $\tau<\delta_M$ such that $\tau$ is inaccessible in $M$ and if $\tau\leq\zeta<\delta_M$, then $\zeta$ is not definable in $M|\mu$ from parameters below $\tau$. \end{claim} \begin{proof} Work in $M$. Let $S$ be the set of inaccessible cardinals below $\delta_M$. Since $\delta_M$ is Woodin, $S$ is stationary. Define $f:S\to\delta_M$ by setting $f(\zeta)$ to be the least $\eta$ such that there is $\zeta\leq\iota<\delta_M$ definable in $M|\mu$ from parameters in $\eta$. If the claim is false, then $f$ is regressive on a stationary set. Then by Fodor's Lemma, there is a stationary set $S_0$ and $\eta<\delta_M$ such that $f''S_0=\{\eta\}$.
But at most $|\eta^{<\omega}|\cdot\aleph_0 < cof(\delta_M)$ many elements of $\delta_M$ are definable in $M|\mu$ from some formula and parameters from $\eta$, so we cannot have cofinally many elements of $\delta_M$ so definable; this contradicts $f''S_0=\{\eta\}$, since $S_0$ is stationary and hence cofinal. \end{proof} Fix $\tau$ as in Lemma \ref{club of tau so m-s nice} and Claim \ref{club of tau so hull nice}. Let $H = Hull^{M|\mu}(\tau)$. Let $N$ be the transitive collapse of $H$ and $\pi:N\to M|\mu$ the anti-collapse map. By condensation, $N\triangleleft M|\mu$.\footnote{See Theorem 5.1 of \cite{ooimt}.} Clearly $N\triangleleft M|\delta_M$, $N\models\phi[\vec{a},\delta_N]$, $\tau$ is the unique Woodin of $N$, $\tau$ is inaccessible in $M$, and $\rho_\omega(N) = \tau$. \begin{claim} $Le[N|\tau] \triangleleft Le[M|\delta_M]$. \end{claim} \begin{proof} For $\zeta<\tau$, $Le[N|\zeta] \triangleleft Le[N|\tau] \iff Le[M|\zeta] \triangleleft Le[M|\delta_M]$ by elementarity. But $Le[N|\zeta] = Le[M|\zeta]$ for $\zeta < \tau$. So if $Le[N|\zeta]$ is an initial segment of $Le[N|\tau]$, then it is also an initial segment of $Le[M|\delta_M]$. But this implies $Le[N|\tau]\triangleleft Le[M|\delta_M]$, since by Corollary \ref{m-s is union of m-s of initial segments}, $Le[N|\tau]$ is a union of mice of the form $Le[N|\zeta]$ for $\zeta<\tau$. \end{proof} We have found $N \triangleleft M|\delta_M$ satisfying \ref{has a woodin}, \ref{delta inaccessible}, \ref{satisfies phi}, $\rho_\omega(N) = \delta_N$, and $Le[N|\delta_N]\triangleleft Le[M|\delta_M]$ (since $\delta_N = \tau$). Our next step is to reflect this below $\kappa_M$. Let $F$ be a total extender in $M$ such that the strength of $F$ is greater than $On \cap N$. In particular, we have $N\triangleleft Ult(M|\delta_M,F)$. \begin{claim} $Le[N|\tau] \triangleleft Le[Ult(M|\delta_M,F)]$. \end{claim} \begin{proof} $\tau$ is inaccessible in $Ult(M|\delta_M,F)$. So by Remark \ref{m-s up to inaccessible}, $Le[N|\tau]$ equals the Mitchell-Steel construction of length $\tau$ in $Ult(M|\delta_M,F)$. Suppose the claim fails.
Then there is a mouse $Q$ built during the Mitchell-Steel construction in $Ult(M|\delta_M,F)$ after $Le[N|\tau]$ is constructed, such that $Q$ projects to some $\beta < \tau$. Pick such a $Q$ which minimizes $\beta$. By Lemma \ref{knows initial segments iterable}, any initial segment of $M$ below $\delta_M$ is iterable in $M$. Then $M$ has iteration strategies for $Ult(P,F)$ for any $P\triangleleft M|\delta_M$. $Q$ is a mouse built during the Mitchell-Steel construction in $Ult(P,F)$ for some $P\triangleleft M|\delta_M$, so $Q$ is also iterable in $M$. Let $Q' = \mathcal{C}_\omega(Q)$. Then $Q'$ is an $\omega$-sound mouse over $Le[N|\tau]|\beta$ projecting to $\beta$ which is iterable in $M$. It follows from Theorem \ref{universality of m-s} that $Le[M|\delta_M]$ outiterates $Q'$. Since both extend $Le[N|\tau]|\beta$, and $Q'$ is $\omega$-sound and projects to $\beta$, $Q'\triangleleft Le[M|\delta_M]$. But then since $\tau$ is inaccessible in $M$, $Le[M|\tau]$ has height $\tau$, and $Le[M|\tau]\triangleleft Le[M|\delta_M]$, $Q'$ is in $Le[M|\tau]$. This is a contradiction, since a subset of $\beta$ which is not in $Le[M|\tau]$ is definable over $Q'$. \end{proof} By elementarity of the ultrapower embedding induced by $F$, there exists $N\triangleleft M|\kappa_M$ satisfying \ref{has a woodin}, \ref{delta inaccessible}, \ref{satisfies phi}, $\rho_\omega(N) = \delta_N$, and $Le[N|\delta_N]\triangleleft Le[M|\delta_M]$. It remains to prove the following claim. \begin{claim} $StrLe[N] \triangleleft StrLe[M]$. \end{claim} \begin{proof} Since $N$ projects to $\delta_N$, so does $StrLe[N]$ (by Lemma \ref{s-const lemma}). And $StrLe[N]$ agrees with $StrLe[M]$ up to $\delta_N$ since $Le[N|\delta_N] \triangleleft Le[M|\delta_M]$. So it suffices to show $StrLe[M]$ outiterates $StrLe[N]$. But $StrLe[N]$ has an iteration strategy in $\boldsymbol{\Gamma}$, and $StrLe[M]$ cannot, by Lemma \ref{strle strategy not in gamma}.
\end{proof} \end{proof} \subsection{Main Theorem} \label{main thm section} We are ready to prove Theorem \ref{main thm}. Suppose for contradiction $\langle A_\alpha : \alpha < \boldsymbol{\delta_\Gamma^+} \rangle$ is a sequence of distinct $\boldsymbol{\Gamma}$ sets. Let $U\subset \mathbb{R} \times \mathbb{R}$ be a universal $\boldsymbol{\Gamma}$ set. Let $\mathcal{J} = \{(P,\xi) : P \in \mathcal{I} \wedge \xi < \delta_P\}$. Say $(P,\xi)\leq_*(Q,\zeta)$ if $(P,\xi),(Q,\zeta) \in \mathcal{J}$ and whenever $S$ is a complete iterate of both $P$ and $Q$, $\pi_{P,S}(\xi) \leq \pi_{Q,S}(\zeta)$. By Lemma \ref{pwo is short}, the relation $\leq_*$ has length $> \boldsymbol{\delta_\Gamma^+}$. Fix $n$ such that for some (equivalently any) $P\in \mathcal{I}$, $\pi_{P,\infty}(\gamma^P_n) > \boldsymbol{\delta_\Gamma^+}$. Let $\leq'_*$ be $\leq_*$ restricted to pairs $(P,\xi)$ such that $\xi < \gamma^P_n$. Then $\leq'_*$ has length $\geq \boldsymbol{\delta_\Gamma^+}$ and $\leq'_*$ is in $J_{\beta'}(\mathbb{R})$.\footnote{This is done by similar arguments to those in Section \ref{internalization section}.} Let $B_\alpha = \{y : U_y = A_\alpha\}$. By the Coding Lemma there is a set $D$ in $J_{\beta'}(\mathbb{R})$ such that $(x,y)\in D$ implies $x$ codes a pair in the domain of $\leq'_*$ and $y\in B_{|x|_{\leq'_*}}$, and $D_x$ is nonempty for all $x$ in the domain of $\leq'_*$. Let $z_0\in\mathbb{R}$ be such that $z_0$ codes $W$ and $D\in OD^{<\beta'}(z_0)$. Let $\mathcal{I}'$ be the directed system of all countable, complete iterates of $M_{z_0}$. Let $M'_\infty$ be the direct limit of $\mathcal{I}'$. For $M,N\in \mathcal{I}'$ with $N$ an iterate of $M$, let $\pi_{M,N}: M \to N$ be the iteration map and $\pi_{M,\infty}: M \to M'_\infty$ the direct limit map (we also used $\pi_{M,N}$ and $\pi_{M,\infty}$ for $M,N\in \mathcal{I}$, but this should not cause any confusion). For $M\in \mathcal{I}'$, let $\tau^M = \tau^M_{D,\delta_M}$.
There is a slight issue in that our current definitions do not obviously guarantee that $\tau^M$ is moved correctly. That is, we might have a complete iterate $N$ of $M$ such that $\pi_{M,N}(\tau^M) \neq \tau^N$. This can happen because we defined the operator $x \mapsto M_x$ so that $M_x$ is guided by $\mathcal{G}$, but it is possible $D\notin \mathcal{G}$. There is no real issue here, since we can expand $\mathcal{G}$ to a larger self-justifying system $\mathcal{G}'$ such that $D\in\mathcal{G}'$ and require that $M_x$ be guided by $\mathcal{G}'$. However, we should leave the operator $x \mapsto W_x$ as is; otherwise we risk altering our construction of $D$. This raises another minor complication, because in Sections \ref{definability section} and \ref{internalization section} we assumed our $\Gamma$-ss mouse $M$ was guided by the same self-justifying system as our $\Gamma$-suitable mouse $W$. Fortunately, the results of those sections remain true so long as $\mathcal{G} \subseteq \mathcal{G}'$, modulo increasing the number of terms required as parameters in some of the lemmas. For simplicity, in what follows we will just assume $\tau^M$ is moved correctly. \begin{definition} Say $M \in \mathcal{I}'$ is locally $\alpha$-stable if there is $\xi\in M$ such that $\pi_{\mathcal{H}^M,\infty}(\xi)=\alpha$. Write $\alpha_M$ for this ordinal $\xi$. \end{definition} \begin{definition} Say $M\in \mathcal{I}'$ is $\alpha$-stable if $M$ is locally $\alpha$-stable and whenever $N\in\mathcal{I}'$ is a complete iterate of $M$, $\pi_{M,N}(\alpha_M) = \alpha_N$. \end{definition} \begin{lemma} \label{stable mouse exists} For any $\alpha < \boldsymbol{\delta_\Gamma^+}$, there is an $\alpha$-stable $M\in \mathcal{I}'$. \end{lemma} \begin{proof} This is essentially the same as the proof of the analogous lemma in \cite{hra}. We will show that for any $P\in \mathcal{I}'$, there is an iterate of $P$ which is $\alpha$-stable.
\begin{claim} \label{locally stable mouse exists} For any $P\in\mathcal{I}'$, there is a countable, complete iterate $R$ of $P$ which is locally $\alpha$-stable. \end{claim} \begin{proof} Fix $S\in \mathcal{I}$ and $\zeta\in S$ such that $\pi_{S,\infty}(\zeta) = \alpha$. Let $R$ be a countable, complete iterate of $P$ such that $S$ is $Ea_R$-generic over $R$. Let $\dot{S}$ be an $Ea_R$-name for $S$ such that $\emptyset \Vdash^R_{Ea_R}$ ``$\dot{S}$ is a complete iterate of $W$.'' Applying Lemma \ref{coiterate all possible generics} yields $S'\in \mathcal{I}^R$ which is a complete iterate of $S$. Then \begin{align*} \pi_{\mathcal{H}^R,\infty} \circ \pi_{S',\mathcal{H}^R} \circ \pi_{S,S'}(\zeta) &= \pi_{S,\infty}(\zeta)\\ &= \alpha. \end{align*} In particular, $\alpha\in range(\pi_{\mathcal{H}^R,\infty})$. \end{proof} Now suppose no $M \in \mathcal{I}'$ is $\alpha$-stable. Let $\langle R_j : j < \omega\rangle$ be a sequence in $\mathcal{I}'$ such that for all $j$, $R_j$ is locally $\alpha$-stable and $R_{j+1}$ is an iterate of $R_j$, but $\pi_{R_j,R_{j+1}}(\alpha_{R_j}) \neq \alpha_{R_{j+1}}$. \begin{claim} \label{dj application} $\pi_{R_j,R_{j+1}}(\alpha_{R_j}) \geq \alpha_{R_{j+1}}$. \end{claim} \begin{proof} By elementarity, $\pi_{R_j,R_{j+1}}\upharpoonright \mathcal{H}^{R_j}$ is an embedding of $\mathcal{H}^{R_j}$ into $\mathcal{H}^{R_{j+1}}$. Then the Dodd-Jensen property implies for any common, complete iterate $Q$ of $\mathcal{H}^{R_j}$ and $\mathcal{H}^{R_{j+1}}$, \begin{align*} \pi_{\mathcal{H}^{R_{j+1}},Q} \circ \pi_{R_j,R_{j+1}}(\alpha_{R_j}) \geq \pi_{\mathcal{H}^{R_j},Q}(\alpha_{R_j}). \end{align*} Then \begin{align*} \pi_{\mathcal{H}^{R_{j+1}},\infty} \circ \pi_{R_j,R_{j+1}}(\alpha_{R_j}) &\geq \pi_{\mathcal{H}^{R_j},\infty} (\alpha_{R_j}) \\ &= \alpha \\ &= \pi_{\mathcal{H}^{R_{j+1}},\infty}(\alpha_{R_{j+1}}). \end{align*} So $\pi_{R_j,R_{j+1}}(\alpha_{R_j}) \geq \alpha_{R_{j+1}}$.
\end{proof} Let $R_\omega$ be the direct limit of the sequence $\langle R_j: j<\omega\rangle$. Let $\alpha_j = \pi_{R_j,R_\omega}(\alpha_{R_j})$. Claim \ref{dj application} implies $\alpha_{j+1} < \alpha_j$ for all $j$, contradicting the wellfoundedness of $R_\omega$. \end{proof} Let $p^M$ be a maximal condition in $Ea_M$ such that $p^M$ forces that the generic $ea$ is a pair $(ea^1,ea^2)$, where $ea^1$ codes a pair $(R_{ea^1},\xi_{ea^1})$ such that there exists an iteration tree on $W$ (according to the strategy for $W$) with last model $R_{ea^1}$ and $\xi_{ea^1} < \delta_{R_{ea^1}}$.\footnote{This is first order by Corollary \ref{lp definable in generic ext} and Lemma \ref{M[y] can approximate branches through maximal trees}.} Pick $p^M$ to be the least such condition in the construction of $M$, to ensure $p^M$ is definable in $M$. \begin{lemma} There is $Q^M\in \mathcal{I}^M$ such that $p^M$ forces that $Q^M$ is a complete iterate of $R_{ea^1}$. Moreover, $Q^M$ is definable in $M$ from parameter $p^M$ (uniformly in $M$). \end{lemma} \begin{proof} Apply Lemma \ref{coiterate all possible generics} to the condition $p^M$ and a name for $R_{ea^1}$. \end{proof} \begin{definition} For $\alpha$-stable $M\in \mathcal{I}'$, say $p\in Ea_M$ is $\alpha$-good if $p$ extends $p^M$ and $p$ forces \newline 1. $\pi_{\check{Q}^M,\mathcal{H}^M} \circ \pi_{R_{ea^1},\check{Q}^M}(\xi_{ea^1}) = \alpha_M$ and \newline 2. $(ea^1,ea^2)\in \tau^M$. \end{definition} \begin{remark} If $\alpha < \boldsymbol{\delta_\Gamma^+}$, being $\alpha$-good is definable over $\alpha$-stable $M\in \mathcal{I}'$ from $\alpha_M$, $\tau^M$, and $\langle\tau^M_{k,\nu_M}: k < n \rangle$ (uniformly in $M$). This follows from Lemmas \ref{M can approximate iteration maps} and \ref{M[y] can approximate branches through maximal trees}. \end{remark} Let $p^M_\alpha$ be the maximal $\alpha$-good condition in $M$ which is least in the construction of $M$.
Note if $M$ is $\alpha$-stable and $N$ is a complete iterate of $M$, then $\pi_{M,N}(p^M_\alpha) = p^N_\alpha$. For $w\in\mathbb{R} \cap M$ and a $\Sigma_1$ formula $\psi(w)$, write $M \models [\psi(w)]$ to mean whenever $g$ is $Col(\omega,\delta_M)$-generic over $M$, there is a proper initial segment of $M[g]$ which is a $\langle\psi',g\rangle$-witness, where $\psi'(x)$ is a formula expressing ``$\psi(f(x))$'' for some computable function $f$ such that $f(g) = w$. Note ``$M\models [\psi(w)]$'' is $\Sigma_1$ over $M$ if $M$ is iterable. For $\alpha$-stable $M\in \mathcal{I}'$, let $S^M_\alpha$ be the set of conditions $q$ such that there exist $N,r\in M$ satisfying \begin{enumerate}[(a)] \item $N\triangleleft M|\kappa_M$, \item $N$ has one Woodin, \item $\delta_N$ is a cardinal of $M$, \item $q,r\in Ea_N$ and $(q,r) \Vdash^N_{Ea_N \times Ea_N} [U(ea_l,ea_r^2)],$\footnote{Here by $U$ we really mean some fixed $\Sigma_1$-formula defining $U$ in $J_{\alpha_0}(\mathbb{R})$.} and \item $r$ is compatible with $p^M_\alpha$. \end{enumerate} Let $S_\alpha = \pi_{M,\infty}(S^M_\alpha)$ for some (equivalently any) $\alpha$-stable $M\in \mathcal{I}'$. $S_\alpha$ can be viewed as an element of $P(\kappa_{M'_\infty})^{M'_\infty}$. Let $A'_\alpha$ be the set of reals $x$ such that there is an $\alpha$-stable $M\in \mathcal{I}'$ and $q\in M$ satisfying \begin{enumerate} \item $q\in S_\alpha^M$, \item $x\models q$, and \item $x$ is $Ea_M$-generic over $M$. \end{enumerate} \begin{lemma} \label{A'=A} $A'_\alpha = A_\alpha$. \end{lemma} It suffices to show Lemma \ref{A'=A}. The lemma implies $\alpha \neq \beta \implies S_\alpha \neq S_\beta$. By the same proof as for $M_\infty$ given in Lemma \ref{pwo is long}, $\kappa_{M'_\infty} \leq \boldsymbol{\delta_\Gamma}$. So we have $\boldsymbol{\delta_\Gamma^+}$ distinct subsets of $\boldsymbol{\delta_\Gamma}$ in $M'_\infty$.
Then the successor of $\boldsymbol{\delta_\Gamma}$ in $M'_\infty$ is the successor of $\boldsymbol{\delta_\Gamma}$ in $L(\mathbb{R})$, contradicting the following claim. \begin{claim} Let $\eta = \boldsymbol{\delta_\Gamma}$. Then $(\eta^+)^{M'_\infty} < (\eta^+)^{L(\mathbb{R})}$. \end{claim} \begin{proof} Let $\lambda = (\eta^+)^{M'_\infty}$. Since $\lambda$ is regular in $M'_\infty$ but not measurable, Lemma \ref{measurable or cof omega} implies $\lambda$ has cofinality $\omega$ in $L(\mathbb{R})$. Let $f\in L(\mathbb{R})$ be a cofinal function from $\omega$ to $\lambda$. Let $\langle g_\xi : \xi < \lambda \rangle$ be a sequence of functions in $M'_\infty$ such that $g_\xi:\eta \to \xi$ is a surjection. Such a sequence exists because $M'_\infty$ satisfies $AC$. Then in $L(\mathbb{R})$ we can construct from $f$ and $\langle g_\xi \rangle$ a surjection from $\eta$ onto $\lambda$; for instance, $(\nu,k)\mapsto g_{f(k)}(\nu)$ is a surjection from $\eta\times\omega$ onto $\lambda$, and $|\eta\times\omega| = \eta$. \end{proof} \begin{proof}[Proof of Lemma \ref{A'=A}] First suppose $x\in A_\alpha$. Pick $y\in \mathbb{R}$ such that $y = (y^1,y^2)$, $D(y^1,y^2)$ holds, and $|y^1|_{\leq_*} = \alpha$. Pick an $\alpha$-stable $\bar{M}\in \mathcal{I}'$ such that $\alpha_{\bar{M}}$ exists. Let $z$ be a real coding $\bar{M}$ and let $P=M_{\langle x,y,z \rangle}$. Let $S = StrLe[P,z_0]$. \begin{claim} \label{x,y generic over M-S} $x$ and $y$ are $Ea_S$-generic over $S$.\footnote{This is a standard property of the fully-backgrounded construction; see Section 1.7 of \cite{hra}.} \end{claim} \begin{claim} $S$ is a complete iterate of $\bar{M}$ by an iteration below $\delta_{\bar{M}}$. \end{claim} \begin{proof} See Lemma \ref{iterate of mitchell-steel}. \end{proof} \begin{claim} \label{x in U_y realized in S} There exist conditions $q,r\in Ea_S$ such that $x\models q$, $y\models r$, and $(q,r) \Vdash^S_{Ea_S \times Ea_S} [U(ea_l,ea_r^2)]$. \end{claim} \begin{proof} By the choice of $y$, $y$ satisfies some $\alpha$-good condition $r$. Let $y_0$ be $S[x]$-generic such that $y_0 \models r$.
Then by the definition of $\alpha$-good, $y_0 = (y_0^1,y_0^2)$ where $(y_0^1,y_0^2) \in D$ and $|y_0^1|_{\leq_*} = \alpha$. It follows that $U_{y_0^2} = A_\alpha$. So $x\in U_{y_0^2}$. \begin{subclaim} $S[x][y_0] \models [U(x,y_0^2)]$. \end{subclaim} \begin{proof} Let $g$ be $Col(\omega, \delta_S)$-generic over $S[x][y_0]$. Note $S[x][y_0][g] = S[g]$ is a $g$-mouse. By the proof of Lemma \ref{generic extension lp-closed}, $Lp^\Gamma(g)$ is contained in $S[g]$. Let $f$ be a computable function such that $f(g) = (x,y_0^2)$ and let $U'(v)$ be a formula expressing that $U(f(v))$ holds. By Lemma \ref{truth gives witness}, there is a $\langle U', g\rangle$-witness which is sound, projects to $\omega$, and has an iteration strategy in $\boldsymbol{\Delta}$. Since $Lp^\Gamma(g)\subseteq S[g]$, this witness is an initial segment of $S[g]$. \end{proof} We have shown $S[x][y_0] \models [U(x,y_0^2)]$ for any $y_0$ which satisfies $r$ and is $S[x]$-generic. Thus there is $q\in Ea_S$ such that $x$ satisfies $q$ and $(q,r) \Vdash [U(ea_l,ea_r^2)]$. \end{proof} We next would like to find some $N\triangleleft S|\kappa_S$ with the properties of $S$ we obtained above. Note Claims \ref{x,y generic over M-S} and \ref{x in U_y realized in S} are not first order over $S$, since $x$ and $y$ are not in $S$. So a straightforward reflection argument inside $S$ will not suffice. The point of introducing $P$ and obtaining $S$ as a construction inside $P$ is that these claims are first order in $P$. The next claim demonstrates we can perform a reflection in $P$ to obtain the desired initial segment of $S$. \begin{claim} There is $N\triangleleft S|\kappa_S$ such that $N$ has one Woodin, $\delta_N$ is an inaccessible cardinal of $S$, $x$ and $y$ are generic for $Ea_N$, and there exist $(q,r)\in Ea_N \times Ea_N$ such that $x\models q$, $y\models r$, and $(q,r) \Vdash [U(ea_l,ea_r^2)]$.
\end{claim} \begin{proof} By Claims \ref{x,y generic over M-S} and \ref{x in U_y realized in S}, $P$ satisfies \begin{enumerate} \item \label{x,y generic as 1st order property} $x$ and $y$ are $Ea_{StrLe[P,z_0]}$-generic over $StrLe[P,z_0]$ and \item \label{have good conditions as first order property} there exist conditions $q,r\in StrLe[P,z_0]$ such that $x\models q$, $y\models r$, and \\$(q,r) \Vdash^{StrLe[P,z_0]}_{Ea_{StrLe[P,z_0]} \times Ea_{StrLe[P,z_0]}} [U(ea_l,ea^2_r)]$. \end{enumerate} Both properties are $\Sigma_1$ over $P$ in parameters $x$, $y$, $z_0$, and $\delta_P$. Then we may apply Lemma \ref{reflecting below kappa} to obtain $P' \triangleleft P|\kappa_P$ such that $P'$ has one Woodin cardinal, $\delta_{P'}$ is an inaccessible cardinal of $P$, $StrLe[P',z_0]\triangleleft S$, and $P'$ satisfies properties \ref{x,y generic as 1st order property} and \ref{have good conditions as first order property}. Let $N = StrLe[P',z_0]$. Note $\delta_N = \delta_{P'}$ is an inaccessible cardinal of $S$. Then all the properties we required of $N$ are apparent except that $N\triangleleft S|\kappa_S$. Standard properties of the Mitchell-Steel construction imply that $\kappa_S \geq \kappa_P$.\footnote{Suppose $\lambda < \delta_S = \delta_P$ and $E$ is an extender on the fine extender sequence of $S$ witnessing $\kappa_S$ is $\lambda$-strong in $S$. Let $E^*$ be the background extender for $E$ on the fine extender sequence of $P$. Then $E^*$ witnesses $\kappa_S$ is $\lambda$-strong in $P$.} Then $N$ has cardinality less than $\kappa_S$ in $P$, since $N$ is contained in $P'$. Since also $N\triangleleft S$, we have $N\triangleleft S|\kappa_S$. \end{proof} To get $x\in A'_\alpha$, it remains to show the following claim. \begin{claim} $r$ is compatible with $p_\alpha^S$. \end{claim} \begin{proof} Note by choice of $y$, $y^1$ codes a pair $(R,\xi)$ such that $R$ is a complete iterate of $W$, $\pi_{R,\mathcal{H}^S}(\xi) = \alpha_S$, and $D(y^1,y^2)$ holds. 
Then there is $p \in Ea_S$ such that $y\models p$, $p$ forces $\pi_{\check{Q}^S,\mathcal{H}^S} \circ \pi_{R_{ea^1},\check{Q}^S}(\xi_{ea^1}) = \alpha_S$, and $(ea^1,ea^2)\in \tau^S$. We may assume $p$ extends $r$. $p$ is $\alpha$-good, so by maximality $p$ is compatible with $p^S_\alpha$. Then $r$ is compatible with $p^S_\alpha$ as well. \end{proof} Now suppose $x\in A'_\alpha$. Let $M,q$ realize this and let $N,r$ realize $q\in S^M_\alpha$. Let $y$ be $M[x]$-generic for $Ea_M$ such that $y \models r \wedge p^M_\alpha$. Since $y \models p^M_\alpha$, $y = (y^1,y^2)$ where $U_{y^2} = A_\alpha$. Since $(x,y)\models (q,r)$, $M[x][y] \models [U(x,y^2)]$. Let $g\subset Col(\omega,\delta_M)$ be $M[x][y]$-generic. Then $M[x][y][g] = M[g]$ has an initial segment $R$ witnessing $U(x,y^2)$. By taking the least such $R$, we may assume $R$ projects to $\omega$ and hence $R \in Lp^\Gamma(g)$. It follows that $x\in U_{y^2} = A_\alpha$. \end{proof} \section{Remarks on Some Projective-Like Cases} \label{projective-like chapter} Here we provide a few brief comments on the problem of unreachability for projective-like cases. Section \ref{projective cases section} covers the projective pointclasses. In Section \ref{mouse sets section}, we discuss what appears to be the main obstacle to proving the rest of the following conjecture. \begin{conjecture} \label{full conjecture in L(R)} Assume $ZF + AD + DC + V=L(\mathbb{R})$. Suppose $\kappa\leq \boldsymbol{\delta^2_1}$ is a Suslin cardinal and $\kappa$ is either a successor cardinal or a regular limit cardinal. Then $\kappa^+$ is $S(\kappa)$-unreachable. \end{conjecture} \subsection{The Projective Cases} \label{projective cases section} In the introduction, we discussed a theorem of Sargsyan solving the problem of unreachability for the projective pointclasses: \begin{theorem}[Sargsyan] \label{sargsyan thm 2} Assume $ZF + AD + DC$. Then $\boldsymbol{\delta^1_{2n+2}}$ is $\boldsymbol{\Sigma^1_{2n+2}}$-unreachable.
\end{theorem} Our technique for proving Theorem \ref{main thm} gives another proof of Sargsyan's theorem, which we outline below. We will assume $ZF + AD + DC$ for the rest of this section. Let $W = M_{2n+1}^\#$. Let $\mathcal{I}$ be the directed system of countable, complete iterates of $W$ and let $M_\infty$ be the direct limit of $\mathcal{I}$. \begin{fact} $\kappa_{M_\infty} < \boldsymbol{\delta^1_{2n+2}}$ and $\delta_{M_\infty} > (\boldsymbol{\delta^1_{2n+2}})^+$. \end{fact} The iteration strategy for $W$ is guided by indiscernibles, analogously to how the iteration strategies for $\Gamma$-suitable mice are guided by terms for sets in a sjs. \cite{otpawtdsom} covers this analysis of the iteration strategy for $W$ in detail. Also analogously to Sections \ref{definability section} and \ref{internalization section}, inside an iterate $M$ of $M^\#_{2n+1}(x_0)$ for some $x_0\in \mathbb{R}$ coding $W$, we can form the direct limit $\mathcal{H}^M$ of countable iterates of $W$ in $M$ and approximate the iteration maps from $W$ to $\mathcal{H}^M$. This internalization is covered in \cite{hra}. The following fact gives us an analogue of the notion of a $\langle \phi,z\rangle$-witness. \begin{fact} There is a computable function which sends a $\Sigma^1_{2n+2}$-formula $\phi$ to a formula $\phi^* = \phi^*(u_0,...,u_{2n-1},v)$ in the language of mice such that the following hold: \begin{enumerate} \item If $x\in \mathbb{R}$, $M$ is a countable, $\omega_1+1$-iterable $x$-premouse, $M\models ZFC$, $M$ has $2n$ Woodin cardinals $\delta_0,...,\delta_{2n-1}$, $\phi$ is a $\Sigma^1_{2n+2}$ formula, and $M\models\phi^*[\delta_0,...,\delta_{2n-1},x]$, then $\phi(x)$ holds. \item If $x\in \mathbb{R}$, $\delta_0,...,\delta_{2n-1}$ are the Woodin cardinals of $M^\#_{2n}(x)$, $\phi$ is a $\Sigma^1_{2n+2}$ formula, and $\phi(x)$ holds, then a proper initial segment of $M^\#_{2n}(x)$ above $\delta_{2n-1}$ satisfies $ZFC$ and $\phi^*[\delta_0,...,\delta_{2n-1},x]$.
\end{enumerate} \end{fact} With these tools it is not difficult to adapt our proof of Theorem \ref{main thm} into a proof of Theorem \ref{sargsyan thm 2}. Here is a brief overview of the proof of Theorem \ref{sargsyan thm 2} in \cite{hra}. The basis of this proof is also studying the directed system $\mathcal{I}'$ of countable iterates of $M^\#_{2n+1}(z_0)$ for some $z_0\in\mathbb{R}$. Suppose $\langle A_\alpha : \alpha < \boldsymbol{\delta^1_{2n+2}}\rangle$ is a sequence of distinct $\boldsymbol{\Sigma^1_{2n+2}}$ sets. Fix a $\Pi^1_{2n+3}\backslash \Sigma^1_{2n+3}$ set $A \subset \omega$. If $n\in A$, this is witnessed in a proper initial segment of any $M_{2n+1}$-like $\Pi^1_{2n+2}$-iterable premouse $M$. Then there is a $\Sigma^1_{2n+3}$ set $A' \subset A$ consisting of, roughly speaking, all $n\in\omega$ which are witnessed in such an $M$ before some $x\in A_\alpha$ is witnessed. There is $n_0\in A\backslash A'$. This is witnessed in some proper initial segment $\bar{N}_M$ of $M|\kappa_M$ for any $M\in\mathcal{I}'$. A coding set $S^M$ is defined analogously to our coding sets in the proof of Theorem \ref{main thm}, but with the additional requirement that the conditions appear below $\bar{N}_M$. The coding sets are used to show a $\boldsymbol{\Sigma^1_{2n+2}}$ code for $A_\alpha$ is small generic over $M$. The contradiction is obtained from this. The technique described in the previous paragraph is a stronger argument than the one we used for Theorem \ref{main thm}, since it gives coding sets which are uniformly bounded below the least strong cardinal. It is not clear whether a similar argument could work for inductive-like pointclasses. There is no obvious analogue of the $\Pi^1_{2n+3}\backslash \Sigma^1_{2n+3}$ set $A$ for an inductive-like pointclass $\Gamma$, since there is no universal $\Gamma\backslash\Gamma^c$ set of integers. So the proof from \cite{hra} is not applicable to inductive-like pointclasses.
On the other hand, the techniques of Section \ref{ind-like chapter} are applicable to the projective pointclasses. And this yields a substantially simpler proof of Theorem \ref{sargsyan thm 2}, since it eliminates the need for a uniform bound on our coding sets. \subsection{Mouse Sets and Open Problems} \label{mouse sets section} In this section we discuss the relationship between the problem of unreachability and well-known conjectures on mouse sets. We will assume $ZF + AD + DC + V=L(\mathbb{R})$, although this is overkill for some of the results stated below. \begin{definition} $X \subset \mathbb{R}$ is a mouse set if there is an $\omega_1+1$-iterable premouse $M$ such that $X = M \cap \mathbb{R}$. \end{definition} \begin{theorem}[Steel] \label{projective mouse set thm} Suppose $\Gamma = \Sigma^1_{n+2}$ for some $n\in \omega$. Then $C_\Gamma$ is a mouse set. \end{theorem} \begin{theorem}[Woodin] \label{mouse set thm} Suppose $\lambda$ is a limit ordinal and let \newline $\Gamma = \{ A \subseteq \mathbb{R} :\, A \text{ is definable in } J_\beta(\mathbb{R}) \text{ for some } \beta<\lambda\}$. Then $C_\Gamma$ is a mouse set. \end{theorem} See \cite{pwim} and \cite{steel2016} for proofs of Theorems \ref{projective mouse set thm} and \ref{mouse set thm}, respectively. \cite{steel2016} also gives the following conjecture. \begin{conjecture}[Steel] \label{lightface mouse set conjecture} Suppose $\Gamma$ is a level of the (lightface) Levy hierarchy. Then $C_\Gamma$ is a mouse set. \end{conjecture} Conjecture \ref{lightface mouse set conjecture} is a way of asking if there is a mouse corresponding exactly to the pointclass $\Gamma$. For each $\Gamma$ in the Levy hierarchy, the core model induction constructs a mouse which contains $C_\Gamma$, but in some cases the mouse constructed is too large. For example, let $J$ be the mouse operator $J(x) = \bigcup_{n<\omega} M^\#_n(x)$.
If $\Gamma = \Sigma_{n+2}(J_2(\mathbb{R}))$, then \begin{align*} M^{J^\#}_n \cap \mathbb{R} \subsetneq C_\Gamma \subsetneq M^{J^\#}_{n+1} \cap \mathbb{R}. \end{align*} There are many similar cases in which the mice constructed in \cite{cmi} skip the (hypothesized) mouse realizing Conjecture \ref{lightface mouse set conjecture}. Recent progress has been made towards Conjecture \ref{lightface mouse set conjecture} in \cite{rudominernew}, which resolves the case $\Gamma = \Sigma_2(J_2(\mathbb{R}))$. The problem of unreachability is connected to a boldface version of Conjecture \ref{lightface mouse set conjecture}. \begin{conjecture} \label{boldface mouse set conjecture} Suppose $\alpha \in ON$ and $n \in \omega$. For $x\in \mathbb{R}$, let $\Gamma_x$ consist of all pointsets $A$ for which there is a $\Sigma_n$ formula $\phi$ with parameter $x$ such that $A = \{y : J_\alpha(\mathbb{R})\models \phi[y]\}$. Then for any $y\in \mathbb{R}$, there is $x\in\mathbb{R}$ such that $y \leq_T x$ and $C_{\Gamma_x}$ is a mouse set. \end{conjecture} Presumably a proof of Conjecture \ref{lightface mouse set conjecture} would relativize, so a proof of Conjecture \ref{lightface mouse set conjecture} would also resolve Conjecture \ref{boldface mouse set conjecture}. The mouse operator $x \mapsto M^\#_{2k}(x)$ witnesses that Conjecture \ref{boldface mouse set conjecture} holds for $\alpha=1$ and $n = 2k+2$. To prove $\boldsymbol{\delta^1_{2n+2}}$ is $\boldsymbol{\Sigma^1_{2n+2}}$-unreachable, we studied the direct limit of $M = M^\#_{2n+1}(x_0)$ for some $x_0\in\mathbb{R}$. Note if $g$ is $Col(\omega,\delta_M)$-generic over $M$, then $M[g] = M^\#_{2n}(g)$. For $\alpha$ admissible, the mouse operator $x\mapsto M_x$ of Theorem \ref{nice strategy exists} witnesses that Conjecture \ref{boldface mouse set conjecture} holds in the case $n=1$.
Note if $g$ is $Col(\omega,\delta_{M_x})$-generic over $M_x$, then $M_x[g] \cap \mathbb{R} = Lp^\Gamma(g) \cap \mathbb{R} = C_{\Gamma}(g)$. So in the inductive-like case as well we studied the direct limit of a mouse such that collapsing its least Woodin yields a mouse realizing one case of Conjecture \ref{boldface mouse set conjecture}. Thus for each pointclass $\boldsymbol{\Sigma_n}(J_\alpha(\mathbb{R}))$ for which we have proven Conjecture \ref{Grigor conjecture} holds, we used a mouse operator realizing that Conjecture \ref{boldface mouse set conjecture} holds for $\alpha$ and $n$. It seems likely a proof of Conjecture \ref{full conjecture in L(R)} would involve proving Conjecture \ref{boldface mouse set conjecture} for each $\alpha$ and $n$ such that $\boldsymbol{\Sigma_n}(J_\alpha(\mathbb{R})) = S(\kappa)$ for some Suslin cardinal $\kappa$ which is a successor cardinal or a regular limit cardinal. \printbibliography \end{document}
\begin{document} \newcommand{\T}{\mathbb{T}} \newcommand{\R}{\mathbb{R}} \newcommand{\Q}{\mathbb{Q}} \newcommand{\N}{\mathbb{N}} \newcommand{\Z}{\mathbb{Z}} \newcommand{\tx}[1]{\quad\mbox{#1}\quad} \noindent {\tt The Nepali Math. Sc. Report\\ Vol. 36, No.1, 2019\\} \title[Maximum Norm Estimates]{A priori estimates in terms of the maximum norm for the solution of the Navier-Stokes equations with periodic initial data} \author[Santosh Pathak]{ \bfseries Santosh Pathak} \address{ Department of Mathematics and Statistics\\ University of New Mexico, Albuquerque, NM 87131, USA \\ [email protected]} \thanks{\hspace{-.5cm}\tt Received May 31, 2016 } \maketitle \thispagestyle{empty} {\footnotesize \noindent{\bf Abstract:} In this paper, we consider the Cauchy problem for the incompressible Navier-Stokes equations in $\mathbb{R}^n$ for $n\geq 3$ with smooth periodic initial data and derive a priori estimates of the maximum norm of all derivatives of the solution in terms of the maximum norm of the initial data. This paper treats a special case of a paper by H.-O. Kreiss and J. Lorenz and also generalizes the main result of their paper to higher dimensions. \\ \noindent{\bf Key Words}: Incompressible Navier-Stokes equations; Maximum norm estimates; Periodic initial data\\ \bf AMS (MOS) Subject Classification.} Classification here. \section{Introduction} We consider the Cauchy problem of the Navier-Stokes equations in $\mathbb R^n$, $n \geq 3$: \begin{align} u_t + u \cdot \nabla u + \nabla p = \triangle u, \ts \nabla \cdot u = 0, \end{align} with initial condition \begin{align} u(x,0)= f(x), \ts x \in \mathbb R^n, \end{align} where $u=u(x,t)= (u_1(x,t), \cdots, u_n(x,t))$ and $p=p(x,t)$ stand for the unknown velocity vector field of the fluid and its pressure, while $f=f(x)=(f_1(x),\cdots, f_n(x))$ is the given initial velocity vector field.
In what follows, for convenience, we will use the same notation for spaces of vector-valued and scalar functions. There is a large literature on the existence and uniqueness of solutions of the Navier-Stokes equations in $\mathbb R^n$. For given initial data, solutions of (1.1) and (1.2) have been constructed in various function spaces. For example, if $f \in L^r$ for some $r$ with $3 \leq r < \infty$, then it is well known that there is a unique classical solution in some maximal interval of time $0 \leq t < T_f$, where $0 < T_f \leq \infty$; for the uniqueness of the pressure one requires $|p(x,t)| \to 0$ as $|x| \to \infty$. (See \cite{Kato} and \cite{Wiegner} for $r=3$ and \cite{Amann} for $3 < r < \infty$.) If $f \in L^{\infty}(\mathbb R^n)$, then the existence of a regular solution follows from \cite{Cannon}. The solution is unique only if one puts some growth restrictions on the pressure as $|x| \to \infty$. A simple example of non-uniqueness is given in \cite{Kim}, where the velocity $u$ is bounded but $|p(x,t)| \leq C|x|$. On the other hand, an estimate $|p(x,t)| \leq C(1+|x|^{\sigma})$ with $\sigma < 1$ (see \cite{Galdi}) implies uniqueness, as does the assumption $p \in L^1_{loc}(0,T; BMO)$ (see \cite{Giga}). In this paper we consider initial functions $f \in C^{\infty}_{per}(\mathbb R^n)$, the space of smooth $2\pi$-periodic functions. Since $C^{\infty}_{per}(\mathbb R^n)$ is a closed subspace of the Banach space $L^{\infty}(\mathbb R^n)$, the existence of a regular solution of the Navier-Stokes equations (1.1) and (1.2) is guaranteed by \cite{Cannon}. In addition, Giga and others \cite{Giga} consider $f \in BUC(\mathbb R^n)$, the space of bounded uniformly continuous functions, and construct a regular solution of the Navier-Stokes equations in some maximal interval of time $0 \leq t < T_f$, where $T_f \leq \infty$.
Our case is thus also a special case of their paper, in which we put the extra assumption of \say{smooth periodic} on their initial function $f \in BUC(\mathbb R^n)$. Moreover, for smooth periodic initial data, the existence of a smooth periodic solution is proved by H.-O. Kreiss and J. Lorenz in their book \cite{Kreiss} for $n=3$, where they use successive iteration on the vorticity formulation. Giga and others, on the other hand, iterate on the integral equation of the transformed abstract ordinary differential equation to construct a mild solution of the Navier-Stokes equations, and later prove that such a mild solution is indeed a regular solution (local in time) of the Navier-Stokes equations (1.1) and (1.2) for $f \in BUC(\mathbb R^n)$. Readers are referred to the paper by Giga and others \cite{Giga} for the details on the existence of the smooth periodic solution of the Navier-Stokes equations (1.1) and (1.2) for $ f \in C^{\infty}_{per}(\mathbb R^n)$, with the necessary alterations in their proofs for the case $ f \in BUC(\mathbb R^n)$. The work in this paper reproves Theorem 4.1 of the Kreiss and Lorenz paper \cite{Lorenz} in the periodic case, assuming that a smooth periodic solution exists in some maximal interval of time $ 0 \leq t < T_f$. Since we are in a special case of their paper, the result of Theorem 4.1 must be true for smooth periodic initial data as well; what makes our work interesting and different is the approach taken to handle the pressure term of the Navier-Stokes equations while deriving the result of Theorem 4.1 of the Kreiss and Lorenz paper, as adopted in my first paper \cite{SP}. Notice that the pressure term of the Navier-Stokes equations can be determined from the Poisson equation \begin{align} \triangle p = - \nabla \cdot ( u \cdot \nabla ) u \end{align} whose solution is given by \begin{align} p = \sum_{i,j} R_i R_j ( u_i u_j) , \end{align} where $R_i = (-\triangle )^{-1/2} D_i $ is the $i$-th Riesz transform.
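The Fourier-side content of (1.3)--(1.4) can be sanity-checked numerically (a sketch for illustration only, not part of the paper's argument; the stream function, grid size, and tolerance below are our own arbitrary choices): on the $2$-torus, $R_iR_j$ acts on the $\xi$-mode by multiplication by $-\xi_i\xi_j/|\xi|^2$, and for a divergence-free $u$ the pressure produced by (1.4) satisfies the Poisson equation (1.3) up to roundoff.

```python
import numpy as np

# Grid and integer wavenumbers on the 2-torus [0, 2*pi)^2
n = 64
x = np.arange(n) * 2 * np.pi / n
X, Y = np.meshgrid(x, x, indexing="ij")
k = np.fft.fftfreq(n, d=1.0 / n)  # 0, 1, ..., n/2-1, -n/2, ..., -1
KX, KY = np.meshgrid(k, k, indexing="ij")
K2 = KX**2 + KY**2
K2[0, 0] = 1.0  # avoid 0/0; the mean mode is handled separately below

def ddx(w, K):
    """Spectral derivative of a real field along one axis."""
    return np.real(np.fft.ifft2(1j * K * np.fft.fft2(w)))

# Divergence-free velocity from a stream function: u = (psi_y, -psi_x)
psi = np.sin(X) * np.cos(Y)
u = {1: ddx(psi, KY), 2: -ddx(psi, KX)}
Ksym = {1: KX, 2: KY}

# Pressure via (1.4): p = sum_{i,j} R_i R_j (u_i u_j);
# the symbol of R_i R_j is -xi_i xi_j / |xi|^2
p_hat = np.zeros((n, n), dtype=complex)
for i in (1, 2):
    for j in (1, 2):
        p_hat += -Ksym[i] * Ksym[j] / K2 * np.fft.fft2(u[i] * u[j])
p_hat[0, 0] = 0.0  # the pressure is only determined up to a constant

# Check (1.3): Laplacian p = -div((u . grad) u)
lap_p = np.real(np.fft.ifft2(-K2 * p_hat))  # p_hat[0,0]=0, so K2[0,0]=1 is harmless
conv = {j: u[1] * ddx(u[j], KX) + u[2] * ddx(u[j], KY) for j in (1, 2)}
rhs = -(ddx(conv[1], KX) + ddx(conv[2], KY))
err = np.max(np.abs(lap_p - rhs))
```

Since the chosen $\psi$ is band-limited, all spectral operations here are exact, and `err` is of the order of machine precision.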
Since the Riesz transforms are not bounded on $L^{\infty}(\mathbb R^n)$, the pressure term only satisfies $p \in L^1_{loc} (0,T;BMO)$, where $BMO$ is the space of functions of bounded mean oscillation. Because of this non-local nature of the pressure, the proof of Theorem 4.1 of the Kreiss and Lorenz paper is complicated. The main objective of this paper is to derive a priori estimates of the maximum norm of the derivatives of $u$ in terms of the maximum norm of the initial function $u(x,0)=f(x)$, {\it assuming} that the solution exists and belongs to $C^{\infty}_{per}(\mathbb R^n)$, $n \geq 3$, for $ 0 \leq t < T_f$. Before we start formulating the problem, we introduce the following notations: \begin{align*} | f|_{\infty} = \mathop{\sup}_{x} { |f(x)|} \ts \text{with} \ts |f(x)|^2= \sum_{i} f_i^2(x), \end{align*} and $D^{\alpha} =D_1^{\alpha_1} \cdots D_n^{\alpha_n}$, $D_i = \partial /{\partial x_i }$, for a multiindex $\alpha =(\alpha_1,\cdots, \alpha_n )$. In what follows, for any $j=0,1,\cdots$, if $ | \alpha | = j $ then we will denote $D^{\alpha} $ by $D^j$. We also set \begin{align*} |\mathcal {D}^j u(t) |_{\infty} :=|\mathcal {D}^j u(\cdot, t ) |_{\infty} =\mathop{\max}_{|\alpha|=j} |D^{\alpha} u(\cdot,t)|_{\infty} . \end{align*} Clearly, $|\mathcal{D}^j u(t) |_{\infty}$ measures all space derivatives of order $j$ in the maximum norm. Proving the following theorem is the main goal of this paper; Kreiss and Lorenz \cite{Lorenz} prove the same theorem for $f \in L^{\infty}(\mathbb R^n)$ for $n=3$ by a rather more difficult approach in their treatment of the pressure term $p(x,t)$. \begin{Th} Consider the Cauchy problem for the Navier-Stokes equations (1.1), (1.2), where $ f \in C^{\infty}_{per} ( \mathbb R^n)$ for $n \geq 3$ with $ \nabla \cdot f = 0 $.
There is a constant $c_0>0$, and for every $j=0,1,\cdots$ there is a constant $K_j$, so that \begin{align} t^{j/2} | \mathcal D^j u(t) |_{\infty} \leq K_j | f|_{\infty} \ts \text{for} \ts 0 < t \leq \frac{c_0}{|f|_{\infty}^2}. \end{align} The constants $c_0$ and $K_j$ are independent of $t$ and $f$. \end{Th} For the purpose of proving Theorem 1.1, we start by transforming the momentum equation (1.1) of the Navier-Stokes equations into the abstract ordinary differential equation for $u$, \begin{align} u_t = \triangle u - \mathbb P (u \cdot \nabla ) u, \end{align} by eliminating the pressure, where $\mathbb P $ is the Leray projector defined by \begin{align*} \mathbb P = ( \mathbb P_{ij})_{1\leq i,j \leq n } , \ts \mathbb P_{ij} = \delta_{ij} + R_i R_j, \end{align*} where $ R_i$ is as in (1.4) and $ \delta_{ij}$ is the Kronecker delta. Note that equation (1.6) is obtained from (1.1) by applying the Leray projector, using the properties $\mathbb P( \nabla p)=0$ and $\mathbb P ( \triangle u ) = \triangle u$, the latter since $\nabla \cdot u = 0 $. Since $ \mathbb P (u \cdot \nabla u)= \sum_{i} D_i \mathbb P(u_i u)$, it is natural to consider the following system analogous to (1.6): \begin{align} u_t = \triangle u + D_i \mathbb P g(u), \ts x \in \mathbb R^n, \ts t>0, \end{align} with initial condition \begin{align} u(x,0)= f(x) \ts \text{where} \ts f \in C^{\infty}_{per}(\mathbb R^n). \end{align} Here $ g: \mathbb R^n \to \mathbb R^n $ is assumed to be quadratic in $u$. The maximal interval of existence is again $ 0 \leq t < T_f$. We would like to prove estimates of the maximum norm of the derivatives of the solution of (1.7) and (1.8) in terms of the maximum norm of the initial data.
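Mode by mode, the Leray projector is simply the matrix $P(\xi)=I-\xi\xi^{T}/|\xi|^2$, the orthogonal projection onto $\xi^{\perp}$; the properties $\mathbb P(\nabla p)=0$ and $\mathbb P u = u$ for divergence-free $u$ correspond to $P(\xi)\xi=0$ and $P(\xi)v=v$ for $v\perp\xi$. A quick numerical illustration (an aside, not from the paper; the random test vectors are arbitrary):

```python
import numpy as np

def leray_symbol(xi):
    """Fourier symbol of the Leray projector: P(xi) = I - xi xi^T / |xi|^2."""
    xi = np.asarray(xi, dtype=float)
    return np.eye(len(xi)) - np.outer(xi, xi) / np.dot(xi, xi)

rng = np.random.default_rng(1)
xi = rng.standard_normal(3)            # an arbitrary nonzero frequency
P = leray_symbol(xi)

idempotent_err = np.max(np.abs(P @ P - P))   # P is a projection: P^2 = P
grad_err = np.max(np.abs(P @ xi))            # gradient modes (parallel to xi) are annihilated
v = rng.standard_normal(3)
v_perp = v - (np.dot(v, xi) / np.dot(xi, xi)) * xi   # a divergence-free mode: v_perp . xi = 0
fix_err = np.max(np.abs(P @ v_perp - v_perp))        # such modes are left unchanged
```

The property $\mathbb P(\triangle u)=\triangle u$ for divergence-free $u$ is immediate in this picture, since $-|\xi|^2$ is a scalar multiplier on each mode.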
\begin{Th} \label{main} Under the above assumptions on $f$ and $g$, the solution of (1.7) and (1.8) satisfies the following. \\ (a) There is a constant $c_0 >0$ with \begin{align} T_f > \frac{c_0}{|f|_{\infty}^2} \end{align} and \begin{align} |u(t)|_{\infty} \leq 2 |f|_{\infty} \ts \text{for} \ts 0\leq t \leq \frac{c_0}{|f|_{\infty}^2} . \end{align} (b) For every $j=1,2,\cdots $, there is a constant $K_j>0$ with \begin{align} t^{j/2} | \mathcal{D}^j u(\cdot,t)|_{\infty} \leq K_j |f|_{\infty} \ts \text{for} \ts 0< t \leq \frac{c_0}{|f|_{\infty}^2}. \end{align} The constants $c_0$ and $K_j$ are independent of $t$ and $f$. \end{Th} In section 2, we introduce some auxiliary results for the solution of the heat equation and a few other important estimates which are used later in sections 3 and 4. The proof of Theorem 1.2 is given in section 3. We then prove Theorem 1.1 in section 4. Finally, in section 5 we make some remarks on the use of the result obtained in Theorem 1.1. \section{Some Auxiliary Results} Let $f\in C^{\infty}_{per} ( \mathbb R^n) $. The solution of \begin{align*} u_t = \triangle u , \ts u = f \ts \text{at} \ts t=0, \end{align*} is denoted by \begin{align*} u(t):=u(\cdot, t ) = e^{ \triangle t } f = \frac{1}{(2\pi)^n} \int_{\mathbb T^n} \theta (x-y,t) f(y) dy, \end{align*} where \begin{align} \theta(x,t)=\sum_{ k \in \mathbb{Z}^n} e^{-|k|^2 t } e^{ik \cdot x} , \ts t>0, \end{align} is the periodic heat kernel in $\mathbb R^n$. Using the Poisson summation formula, (2.1) can be written as \begin{align} \theta(x,t)= \sum_{k\in \mathbb{Z}^n} \bigg(\frac{\pi}{t}\bigg)^{n/2} \exp\bigg[\frac{-|x+2\pi k|^2}{4t} \bigg], \ts t > 0. \end{align} With the use of (2.2), it is well known that \begin{align} |e^{t \triangle } f |_{\infty} \leq |f|_{\infty}, \ts t\geq 0, \end{align} and \begin{align} | \mathcal D^j e^{t \triangle} f |_{\infty} \leq C_j t^{-j/2} | f |_{\infty} \end{align} for some $C_j>0$ independent of $t$ and $f$.
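The Poisson-summation identity behind (2.2) can be checked numerically in one dimension, where it reads $\sum_{k} e^{-k^2 t} e^{ikx} = \sqrt{\pi/t}\, \sum_{k} e^{-(x+2\pi k)^2/(4t)}$ (an illustration only; the sample points, truncation $K$, and tolerance are our own arbitrary choices):

```python
import numpy as np

def theta_fourier(x, t, K=60):
    """Periodic heat kernel as a Fourier series: sum_k e^{-k^2 t} e^{ikx} (1D)."""
    k = np.arange(-K, K + 1)
    return np.sum(np.exp(-k**2 * t) * np.cos(k * x))  # imaginary parts cancel by symmetry

def theta_gaussian(x, t, K=60):
    """Same kernel via Poisson summation: sqrt(pi/t) sum_k e^{-(x+2 pi k)^2/(4t)} (1D)."""
    k = np.arange(-K, K + 1)
    return np.sqrt(np.pi / t) * np.sum(np.exp(-(x + 2 * np.pi * k)**2 / (4 * t)))

# Compare the two truncated representations at a few points and times
diff = max(abs(theta_fourier(x, t) - theta_gaussian(x, t))
           for x in (0.0, 0.7, 2.0) for t in (0.05, 0.5, 2.0))
```

The Fourier series converges fast for large $t$ and the Gaussian sum for small $t$; with $K=60$ both are accurate over the range tested, and the two values agree to near machine precision.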
\begin{Lemma} Let $ f \in C^{\infty}_{per}( \mathbb R^n ) $. Then for any $j \geq 1 $, \begin{align} | \mathcal D^j e^{\triangle t } \mathbb P f |_{\infty} \leq C_j t^{-j/2} | f |_{\infty} \ts \text{for} \ts t>0, \end{align} for some constant $C_j>0$ independent of $t$ and $f$. \end{Lemma} \begin{proof} First write $e^{\triangle t} f = \theta * f $, where $\theta(x,t) $ is given by (2.2), and denote the Fourier coefficient of a function by $\mathcal F$. For $ \xi \in \mathbb Z ^n$, notice that $ \mathcal F ( {\theta (x,t)}) (\xi ) = C e^{-t |\xi|^2 }$, $t > 0 $, where $C$ depends on the normalizing constant in the definition of the Fourier coefficient. In the proof of this lemma, we allow the constant $C$ to change from line to line. Now, for any $t > 0$, any choice of $k,l \in \{1,2, \cdots , n \}$, and any multiindex $ \alpha $ such that $ |\alpha|=j$, the operator $D^j e^{\triangle t} \mathbb P_{k l} $ on the Fourier side is given by \begin{align*} \mathcal F ( D^j e^{ \triangle t} \mathbb P_{kl} f_l)(\xi) & =(-i \xi)^{\alpha} \mathcal{F}(e^{ \triangle t} \mathbb P_{kl} f_l )(\xi) \\ &= (-i \xi)^{\alpha} \mathcal F( \theta * \mathbb P_{kl} f_l ) (\xi) \\ & = C (-i \xi)^{\alpha} \mathcal{F}(\theta(x,t))(\xi) \mathcal{F}(\mathbb P_{kl} f_l )(\xi) \\ &=C (-i \xi)^{\alpha} e^{-t |\xi|^2 } \bigg( \delta_{kl} - \frac{\xi_k \xi_l}{| \xi |^2} \bigg) \mathcal F (f_l) (\xi) \\ & = C (-i \xi)^{\alpha} e^{-t |\xi |^2 } \delta_{k l} \mathcal F (f_l) (\xi)\\ & -C (-i \xi)^{\alpha} \xi_k \xi_l \mathcal F (f_l) (\xi) \int_t^{\infty} e^{-\tau | \xi |^2 } d\tau.
\end{align*} Using the Fourier expansion, we can write \begin{align*} D^j( e^{ \triangle t} \mathbb P_{kl} f_l)(x) & =C \sum_{\xi \in \mathbb Z^n} (-i\xi)^{\alpha} \delta_{kl} e^{-t |\xi|^2} \mathcal F (f_l)(\xi) e^{i \xi \cdot x } \\ &+ C \sum_{\xi \in \mathbb Z^n} (-i\xi)^{\alpha} (i\xi_k) (i\xi_l) \mathcal F(f_l) (\xi) e^{i \xi \cdot x } \int_t^{\infty} e^{-\tau |\xi|^2 } d\tau \\ & = (-1)^j C \delta_{kl} D^{\alpha} \sum_{\xi \in \mathbb Z^n} e^{-t |\xi|^2} \mathcal F(f_l) (\xi) e^{i \xi \cdot x } \\ &+ C (-1)^j \int_t^{\infty} \sum_{\xi \in \mathbb Z^n} e^{-\tau | \xi|^2 } (i \xi)^{\alpha} (i\xi_k) (i\xi_l) \mathcal F (f_l) (\xi) e^{i \xi \cdot x } d\tau \\ &= (-1)^j C \delta_{kl} D^{\alpha} e^{ \triangle t} f_l + (-1)^j C \int_t^{\infty} D^{\alpha} D_k D_l e^{ \triangle \tau} f_l d\tau \\ &=I_1 + I_2. \end{align*} From (2.4) we have $|I_1|_{\infty} \leq C_j t^{-j/2} |f_l |_{\infty} $. Using (2.4) once more, we obtain \begin{align*} |I_2 |_{\infty} & \leq C_j |f_l|_{\infty} \int_t^{\infty} \tau^{-(j+2)/2} d\tau \\ & \leq C_j t^{-j/2} | f_l |_{\infty} . \end{align*} Therefore \begin{align*} | D^j e^{ \triangle t} \mathbb P_{kl} f_l |_{\infty} & \leq | I_1|_{\infty} + | I_2|_{\infty} \\ & \leq C_j t^{-j/2} |f_l |_{\infty}. \end{align*} Hence Lemma 2.1 is proved. \end{proof} \begin{Cor} Let $ g\in C^{\infty}_{per} ( \mathbb R^n \times [0,T] ) $ for some $T>0$. Then the solution of \begin{align} u_t = \triangle u + D_i \mathbb P g , \ts u = 0 \ts \text{at} \ts t=0, \end{align} satisfies \begin{align} |u(t)|_{\infty} \leq C t^{1/2} \mathop{\max}_{0\leq s \leq t } |g(s) |_{\infty}. \end{align} \end{Cor} \begin{proof} The solution of (2.6) is given by \begin{align*} u(t) = \int_0^t e^{\triangle (t-s) } D_i \mathbb P g(s) ds \end{align*} and hence \begin{align*} |u(t)|_{\infty} \leq \int_0^t | e^{\triangle (t-s) } D_i \mathbb P g(s) |_{\infty} ds.
\end{align*} After commuting $D_i$ with the heat semigroup, we can use Lemma 2.1 (with $j=1$) to obtain \begin{align*} |u(t)|_{\infty} \leq C \mathop{\max}_{0\leq s \leq t } |g(s)|_{\infty} \int_0^t (t-s)^{-1/2} ds . \end{align*} Hence we obtain \begin{align*} |u(t)|_{\infty} \leq C t^{1/2} \mathop{\max}_{0\leq s \leq t } |g(s) |_{\infty} . \end{align*} \end{proof} \section{Estimates for $u_t = \triangle u + D_i \mathbb P g(u) $: proof of Theorem 1.2} In this section we consider the system $u_t = \triangle u + D_i \mathbb P g(u) $ with the initial condition $u=f $ at $ t=0$, where $ f\in C^{\infty}_{per}(\mathbb R^n) $. It is well known that the solution is smooth and $2\pi$-periodic in a maximal interval $ 0 \leq t < T_f$, where $ 0 < T_f \leq \infty $. Let $u$ be the solution of the inhomogeneous equation $ u_t = \triangle u + D_i \mathbb P (g(u(x,t))) $ and recall that $ g(u)$ is quadratic in $u$. Thus, there is a constant $ C_g$ such that \begin{align} | g(u) | \leq C_g |u|^2, \ts |g_u(u) | \leq C_g |u|, \ts \text{for all} \ts u \in \mathbb R^n. \end{align} We first estimate the maximum norm of $u$. \begin{Lemma} Let $C_g$ denote the constant in (3.1) and let $C$ denote the constant in (2.7); set $c_0 =\frac{1}{16C^2 C_g^2} $. Then we have $T_f > c_0/{ | f |_{\infty}^2 } $ and \begin{align} | u (t) |_{\infty} < 2 | f |_{\infty} \ts \text{ for } \ts 0 \leq t < \frac{c_0}{|f|_{\infty}^2 }. \end{align} \end{Lemma} \begin{proof} Suppose that (3.2) does not hold. Then we can find a smallest time $t_0$ such that $ |u(t_0) |_{\infty} = 2 | f |_{\infty} $; since $t_0$ is the smallest such time, we have $ t_0 < c_0/|f|_{\infty}^2 $. Now by (2.3) and (2.7) we have \begin{align*} 2|f|_{\infty} & = |u(t_0) |_{\infty} \\ & \leq |f|_{\infty} + C t_0^{1/2} \mathop{\max}_{0\leq s \leq t_0 } | g(s) |_{\infty} \\ & \leq |f|_{\infty} + C C_g t_0^{1/2} \mathop{\max}_{0 \leq s \leq t_0} | u (s) |_{\infty}^2 \\ & \leq | f|_{\infty} + C C_g t_0^{1/2} 4 |f |_{\infty}^2 .
\end{align*} This gives \begin{align*} 1 \leq 4 C C_g t_0^{1/2} | f |_{\infty}, \end{align*} therefore $ t_0 \geq 1/{( 16 C^2 C_g^2 | f |_{\infty}^2)}={c_0}/{|f|_{\infty}^2} $, which is a contradiction. Therefore (3.2) must hold. The estimate $ T_f > c_0/{| f |_{\infty}^2} $ is valid since $ \limsup_{t \to T_f} | u(t) |_{\infty} = \infty $ if $ T_f $ is finite. \end{proof} Now we prove estimate (1.11) of Theorem 1.2 by induction on $j$. Let $j \geq 1 $, and assume \begin{align} t^{k/2} | \mathcal D^k u (t) |_{\infty} \leq K_k | f|_{\infty}, \ts \text{for} \ts 0 \leq t \leq \frac{c_0}{|f|_{\infty}^2} \ts \text{and} \ts 0 \leq k \leq j-1. \end{align} Let us apply $ D^j$ to the equation $u_t = \triangle u + D_i \mathbb P g(u) $ to obtain \begin{align*} v_t = \triangle v + D^{j+1} \mathbb P g(u), \ts v:= D^j u ,\\ v(t) = D^j e^{\triangle t } f + \int_0^t e^{\triangle (t-s)} D^{j+1} (\mathbb P g(u))(s) ds . \end{align*} Using (2.4) we get \begin{align} t^{j/2} |v(t)|_{\infty} \leq C |f|_{\infty} + t^{j/2} \biggl |\int_0^t e^{\triangle (t-s)} D^{j+1} ( \mathbb P g (u))(s) ds \biggr |_{\infty}. \end{align} We split the integral into \begin{align*} \int_0^{t/2} + \int_{t/2}^t =: I_1 + I_2 \end{align*} and obtain \begin{align*} |I_1(t)|_{\infty} & = \biggl | \int_0^{t/2} D^{j+1} e^{\triangle (t-s) } ( \mathbb P g(u))(s) ds \biggr |_{\infty} \\ & \leq \int_0^{t/2} |D^{j+1} e^{\triangle (t-s)} (\mathbb P g (u )) (s) |_{\infty} ds . \end{align*} Using inequality (2.5) of Lemma 2.1, we get \begin{align*} | I_1(t) |_{\infty} & \leq C \int_0^{t/2} (t-s)^{-(j+1)/2} |g(u(s))|_{\infty} ds \\ & \leq C |f |_{\infty}^2 t^{(1-j)/2}, \end{align*} since $t-s \geq t/2$ there and $|u(s)|_{\infty} \leq 2|f|_{\infty}$ by Lemma 3.1. The integrand in $I_2$ has a singularity at $s=t$. Therefore, we can move only one derivative from $D^{j+1} \mathbb P g(u)$ to the heat semigroup. (If we move two or more derivatives, then the singularity becomes non-integrable.)
Thus, we have \begin{align*} |I_2(t)|_{\infty} = \biggl | - \int_{t/2}^t De^{\triangle (t-s)} (D^j \mathbb P g(u))(s) ds \biggr | _{\infty}. \nonumber \end{align*} Since the Leray projector commutes with derivatives of any order, \begin{align*} |I_2(t) |_{\infty} = \bigg| - \int_{t/2}^t D e^{\triangle (t-s) } ( \mathbb P D^j g (u))(s) ds \bigg|_{\infty}. \end{align*} If we use Lemma 2.1 for $j=1$, we obtain \begin{align} |I_2(t) |_{\infty} \leq C \int_{t/2}^t (t-s)^{-1/2} |D^j g(u)(s) |_{\infty} ds. \end{align} Since $g(u)$ is quadratic in $u$, \begin{align*} |D^j g(u) |_{\infty} \leq C \bigg( | u|_{\infty} | \mathcal D^j u |_{\infty} + \sum_{k=1}^{j-1} | \mathcal D^k u |_{\infty} | \mathcal D^{j-k} u |_{\infty} \bigg) . \end{align*} By the induction hypothesis (3.3) we obtain \begin{align} \sum_{k=1}^{j-1} | \mathcal D^k u (s) |_{\infty} | \mathcal D^{j-k} u (s) |_{\infty} \leq C s^{-j/2} | f |_{\infty}^2. \end{align} The expression in (3.5) can now be estimated as follows: \begin{align*} |I_2(t) |_{\infty} &\leq C \int_{t/2}^t (t-s)^{-1/2} \bigg( | u(s)|_{\infty} | \mathcal D^j u(s) |_{\infty} + \sum_{k=1}^{j-1} | \mathcal D^k u (s) |_{\infty} | \mathcal D^{j-k} u (s) |_{\infty} \bigg) ds \\ &= J_1 + J_2. \end{align*} Using (3.6), and since $ \int_{t/2}^t (t-s)^{-1/2} s^{-j/2} ds = C t^{(1-j)/2}$, where $C$ is independent of $t$, we obtain $|J_2(t) |_{\infty} \leq C | f |_{\infty}^2 t^{(1-j)/2}$. \\ For $J_1$, we have \begin{align*} | J_1(t) |_{\infty} &= C \int_{t/2}^t (t-s)^{-1/2} |u(s)|_{\infty} | \mathcal D^j u(s) |_{\infty} ds \\ & \leq C |f|_{\infty} \int_{t/2}^t (t-s)^{-1/2} s^{-j/2} s^{j/2} |\mathcal D^j u (s) |_{\infty} ds \\ & \leq C |f|_{\infty} t^{(1-j)/2} \mathop{\max}_{0 \leq s \leq t } \{ s^{j/2} |\mathcal D^j u(s) |_{\infty} \}. \end{align*} We use these bounds to bound the integral in (3.4). Recall that $v= D^j u $.
Then, maximizing the resulting estimate for $t^{j/2} |D^j u(t)|_{\infty}$ over all derivatives $D^j$ of order $j$ and setting \begin{align*} \phi (t) := t^{j/2} |\mathcal D^j u(t)|_{\infty}, \end{align*} we obtain from (3.4) the estimate \begin{align*} \phi(t) \leq C |f|_{\infty} +C t^{1/2} |f|_{\infty}^2 +C |f|_{\infty} t^{1/2} \mathop{\max}_{0\leq s \leq t} \phi (s) \ts \text{for} \ts 0 \leq t \leq \frac{c_0}{|f|_{\infty}^2 }. \end{align*} Since $t^{1/2} |f|_{\infty} \leq \sqrt{c_0} $, we have $C t^{1/2} |f|_{\infty}^2 \leq C \sqrt{c_0} |f|_{\infty} $. Therefore \begin{align} \phi (t) \leq C_j |f|_{\infty} + C_j |f|_{\infty} t^{1/2} \mathop{\max}_{0\leq s \leq t} \phi (s) \ts \text{for} \ts 0 \leq t \leq c_0/{|f|_{\infty}^2 }. \end{align} Let us fix $C_j$ so that the above estimate holds, and set \begin{align*} c_j = \min \bigg\{ c_0 , \frac{1}{4 C_j^2} \bigg\}. \end{align*} First, let us prove that \begin{align*} \phi(t) < 2 C_j |f|_{\infty} \ts \text{for} \ts 0 \leq t < \frac{c_j }{|f|_{\infty}^2}. \end{align*} Suppose there is a smallest time $t_0$ with $ 0 < t_0 < c_j/{|f|_{\infty}^2 } $ and $\phi(t_0) = 2C_j |f|_{\infty} $. Then using (3.7) we obtain \begin{align*} 2 C_j |f|_{\infty} = \phi (t_0) \leq C_j |f|_{\infty} + 2 C_j^2 |f|_{\infty}^2 t_0^{1/2} , \end{align*} thus \begin{align*} 1\leq 2 C_j |f|_{\infty} t_0^{1/2}, \ts \text{which gives} \ts t_0 \geq c_j/{|f|_{\infty}^2}, \end{align*} contradicting the choice of $t_0$. Therefore, we have proved the estimate \begin{align} t^{j/2} |\mathcal D^j u(t) | _{\infty} \leq 2 C_j |f|_{\infty} \ts \text{for} \ts 0 \leq t \leq c_j/{|f|_{\infty}^2 }. \end{align} If \begin{align} T_j:= \frac{c_j}{|f|_{\infty}^2} < t \leq \frac{c_0}{|f|_{\infty}^2}=: T_0, \end{align} then we start the corresponding estimate at $t-T_j$. Using Lemma 3.1, we have $|u(t-T_j)|_{\infty} \leq 2 |f|_{\infty}$ and obtain \begin{align} T_j^{j/2} |\mathcal D^j u(t) |_{\infty} \leq 4 C_j |f|_{\infty} .
\end{align} Finally, for any $t$ satisfying (3.9), \begin{align*} t^{j/2} \leq T_0^{j/2} = \bigg( \frac{c_0}{c_j} \bigg)^{j/2} T_j^{j/2} \end{align*} and (3.10) yield \begin{align*} t^{j/2} |\mathcal D^j u(t) |_{\infty} \leq 4 C_j \bigg( \frac{c_0}{c_j} \bigg)^{j/2} |f|_{\infty}. \end{align*} This completes the proof of Theorem 1.2. \section{Estimates for the Navier-Stokes Equations} Recall the transformed abstract ordinary differential equation (1.6), \begin{align} u_t = \triangle u - \mathbb P (u \cdot \nabla u), \ts \nabla \cdot u = 0, \end{align} with \begin{align} u(x,0)=f(x). \end{align} The solution of (4.1) and (4.2) is given by \begin{align} u(t) = e^{\triangle t } f -\int_0^t e^{ \triangle (t-s) } \mathbb P(u \cdot \nabla u)(s) ds . \end{align} Using (4.3) together with the previous estimates (2.3), (2.4) and (2.5), we prove the following lemma. \begin{Lemma} Set \begin{align} V(t)= |u(t)|_{\infty} + t^{1/2} |\mathcal D u(t) |_{\infty} , \ts 0 < t < T_f. \end{align} There is a constant $C>0$, independent of $t$ and $f$, such that \begin{align} V(t) \leq C|f|_{\infty} + C t^{1/2} \mathop{\max}_{0 \leq s \leq t }{V^2(s)} , \ts 0 < t < T_f. \end{align} \end{Lemma} \begin{proof} Using estimate (2.3) for the heat equation in (4.3), we obtain \begin{align*} |u(t)|_{\infty} & \leq |f|_{\infty} + \bigg| \int_0^t e^{\triangle(t-s) } \mathbb P (u \cdot \nabla u )(s) ds \bigg|_{\infty}. \end{align*} Applying the identity $\mathbb P (u\cdot \nabla u ) = \sum_i D_i \mathbb P (u_i u ) $, the fact that the heat semigroup commutes with $D_i$, and inequality (2.5) of Lemma 2.1 with $j=1$, we proceed: \begin{align*} | u ( t)|_{\infty} & \leq |f|_{\infty} + C \int_0^t (t-s)^{-1/2} |u(s)|_{\infty}^2 ds \\ & = |f|_{\infty} + C \int_0^t (t-s)^{-1/2} s^{-1/2} s^{1/2} |u(s)|_{\infty}^2 ds \\ & \leq |f|_{\infty} + C \mathop{\max}_{0\leq s \leq t } \{ s^{1/2} |u(s)|_{\infty}^2 \} \int_0^t (t-s)^{-1/2} s^{-1/2} ds.
\end{align*} Since $ \int_0^t (t-s)^{-1/2} s^{-1/2} ds = C>0 $ is independent of $t$, we have the estimate \begin{align} |u(t)|_{\infty} & \leq | f|_{\infty} + C \mathop{\max}_{0\leq s \leq t } \{ s^{1/2} |u(s)|_{\infty}^2 \} \nonumber \\ | u(t)|_{\infty} & \leq |f|_{\infty} + C t^{1/2} \mathop{\max}_{0\leq s \leq t }{V^2(s) }. \end{align} Applying $D_i$ to (4.1) and using {\it Duhamel's principle}, we obtain, with $v := D_i u$, \begin{align} v(t) = D_i e^{\triangle t} f - \int_0^t e^{ \triangle (t-s) } D_i \mathbb P (u \cdot \nabla ) u(s) ds . \end{align} We can estimate the integral in (4.7) using Lemma 2.1 for $j=1$ in the following way: \begin{align*} \bigg|\int_0^t D_i e^{ \triangle (t-s)} \mathbb P (u \cdot \nabla u)(s) ds \bigg| & \leq \int_0^t |D_i e^{ \triangle (t-s)} \mathbb P(u\cdot \nabla u) (s) | ds \\ & \leq C \int_0^t (t-s)^{-1/2} |u(s)|_{\infty} | \mathcal D u(s)|_{\infty} ds \\ &= C \int_0^t (t-s)^{-1/2} s^{-1/2} s^{1/2} |u(s)|_{\infty} |\mathcal D u(s) |_{\infty} ds \\ & \leq C \mathop{\max}_{0 \leq s \leq t } { \{s^{1/2} | u(s) |_{\infty} |\mathcal D u(s)|_{\infty}\} } \int_0^t (t-s)^{-1/2} s^{-1/2} ds \\ & \leq C \mathop{\max}_{0\leq s \leq t } { \{ |u(s)|_{\infty}^2 + s | \mathcal D u(s) |_{\infty}^2 \} } . \end{align*} Therefore, using (2.4) with $j=1$ in (4.7), we arrive at \begin{align} |v(t) |_{\infty} & \leq C t^{-1/2} |f|_{\infty} + C \mathop{\max}_{0\leq s \leq t } { \{ |u(s)|_{\infty}^2 + s | \mathcal D u(s) |_{\infty}^2 \} } \nonumber \\ t^{1/2} |\mathcal D u(t) |_{\infty} & \leq C |f|_{\infty} + C t^{1/2} \mathop{\max}_{0\leq s \leq t } {V^2(s) }. \end{align} Using (4.6) and (4.8), we have proved Lemma 4.1. \end{proof} \begin{Lemma} Let $C>0$ denote the constant in estimate (4.5) and set \begin{align*} c_0 = \frac{1}{16C^4}. \end{align*} Then $T_f > c_0 /{|f|_{\infty}^2}$ and \begin{align} | u(t) |_{\infty} + t^{1/2} | \mathcal D u (t) |_{\infty} < 2C |f|_{\infty} \ts \text{for} \ts 0 \leq t < \frac{c_0}{|f|_{\infty}^2}.
\end{align} \end{Lemma} \begin{proof} We prove this lemma by contradiction, recalling the definition of $V(t)$ in (4.4). Suppose that (4.9) does not hold, and denote by $t_0$ the smallest time with $V(t_0)= 2C |f|_{\infty}$. Using (4.5) we obtain \begin{align*} 2C| f |_{\infty} & = V(t_0) \\ & \leq C |f|_{\infty} + C t_0^{1/2} 4 C^2 | f |_{\infty}^2, \end{align*} thus \begin{align*} 1 \leq 4 C^2 t_0^{1/2} | f |_{\infty} , \end{align*} and therefore $t_0 \geq c_0/ | f |_{\infty}^2$. This contradiction proves (4.9) and $T_f > c_0/{|f |_{\infty}^2}$. \end{proof} Lemma 4.2 proves Theorem 1.1 for $j=0$ and $j=1$. By an induction argument as in the proof of Theorem 1.2, one proves Theorem 1.1 for any $j =0,1,\cdots$. \section{Remarks} We can apply estimate (1.5) of Theorem 1.1 for \begin{align} \frac{c_0}{2 | f |_{\infty}^2} \leq t \leq \frac{c_0}{ | f |_{\infty}^2} \end{align} and obtain \begin{align} | \mathcal D^j u (t) |_{\infty} \leq C_j | f |_{\infty}^{j+1} \end{align} in the interval (5.1). Starting the estimate at $ t_0 \in [0, T_f) $, we have \begin{align} | \mathcal D^j u ( t_0 + t ) |_{\infty} \leq C_j | u(t_0) |_{\infty}^{j+1} \end{align} for \begin{align} \frac{c_0}{2 | u(t_0)|_{\infty}^2} \leq t \leq \frac{c_0}{ | u(t_0)|_{\infty}^2}. \end{align} Then, if $t_1$ is fixed with \begin{align} \frac{c_0}{2 | f|_{\infty}^2} \leq t_1 < T_f, \end{align} we can maximize both sides of (5.3) over $ 0 \leq t_0 \leq t_1$ and obtain \begin{align} \max \bigg\{ | \mathcal D^j u(t) |_{\infty } : \frac{c_0}{2 | f|_{\infty}^2} \leq t \leq t_1+ \tau \bigg\} \leq C_j \max \{ | u(t) |_{\infty}^{j+1} : 0 \leq t \leq t_1 \} \end{align} with \begin{align*} \tau = \frac{c_0}{| u (t_1) |_{\infty}^2}. \end{align*} Estimate (5.6) says, essentially, that the maximum of the $j$-th derivatives of $u$, measured by $ | \mathcal D^j u|_{\infty} $, can be bounded in terms of $ | u |_{\infty}^{j+1} $.
The positive value of $\tau$ on the left-hand side of (5.6) shows that $ |u|_{\infty}^{j+1}$ controls $ | \mathcal D^j u |_{\infty}$ for some time into the future. \\ As is well known, if $(u, p)$ solves the Navier-Stokes equations and $ \lambda >0$ is any scaling parameter, then the functions $ u_{\lambda}, p_{\lambda} $ defined by \begin{align*} u_{\lambda} (x,t) = \lambda u( \lambda x, \lambda^2 t) , \ts p_{\lambda} ( x,t) = \lambda^2 p( \lambda x , \lambda^2 t ) \end{align*} also solve the Navier-Stokes equations. Clearly, \begin{align*} | u_{\lambda} (t) |_{\infty} = \lambda | u ( \lambda^2 t)|_{\infty} , \ts | \mathcal D^j u_{\lambda} (t) |_{\infty} = \lambda^{j+1} | \mathcal D^j u ( \lambda^2 t ) |_{\infty}. \end{align*} Therefore, $ | \mathcal D^j u |_{\infty} $ and $ | u|_{\infty}^{j+1} $ both scale like $ \lambda^{j+1} $, which is, of course, consistent with the estimate (5.6). We do not know under what assumptions $ | u |_{\infty}^{j+1} $ can conversely be estimated in terms of $ | \mathcal D^j u |_{\infty} $. \end{document}
\begin{document} \title{On the regularized $L^4$-norm for Eisenstein series in the level aspect, Part II} \author{Jiakun Pan} \maketitle \section{Introduction} This is a sequel to the paper \href{https://arxiv.org/abs/2003.10995}{\underline{\textit{On the regularized $L^4$-norm for Eisenstein series in the level aspect}}} by the same author. The two papers will be combined for publication. \subsection{Background} The Gaussian Moments Conjecture is a number-theoretic manifestation of the Random Wave Conjecture on the randomness of automorphic forms in the spectral aspect. For Eisenstein series, we can either consider their regularized moments, or the moments of their truncations. For Eisenstein series of fixed level, we have results on the fourth moment by Spinu \cite{Sp}, Humphries \cite{H}, and Djankovi\'c and Khan \cite{DK1,DK2}. In the previous paper, we let the level increase and reduced the fourth moment of the newform Eisenstein series attached to a cusp to the decomposition \begin{align*} \langle |E_{\mathfrak{a}}(\cdot, \frac{1}{2}+iT, \chi)|^2, |E_{\mathfrak{a}}(\cdot, \frac{1}{2}+iT, \chi)|^2 \rangle_{\mathrm{reg}} = I_1 + I_2, \end{align*} where \begin{align*} I_{1}= \nu^{-1}(N) \sum_{u} \frac{\Lambda^2(\frac{1}{2},u) |\Lambda(\frac{1}{2}+2iT, u\otimes \psi)|^2}{|\Lambda(1+2iT,\psi)|^2} + \mathrm{continuous} \medspace\medspace \mathrm{spectrum}, \end{align*} for some primitive $\psi$ mod $N$ delicately determined by $\chi$ and $\mathfrak{a}$, and \begin{align}\label{part1} I_2 = \nu^{-1}(N) \Big( \frac{24}{\pi}\log^2 N + O(\frac{L''}{L}(1+2iT,\psi)) + O(\log N \log\log N \frac{L'}{L}(1+2iT, \psi)) \Big). \end{align} Here we multiply $I_1$ and $I_2$ by $\nu(N):=[SL_2(\mathbb{Z}):\Gamma_0(N)]$ for clarity, and the sum over $u$ traverses an orthonormal basis of the discrete spectrum spanned by Maa{\ss} forms. In this article, we present a more hypothetical approach to see what the main terms should be.
Before we state the results, we first describe the conditionality of our research. \subsection{Assumptions} For newform Eisenstein series, the period integrals often involve logarithmic derivatives of Dirichlet L-functions. When studying the Quantum Unique Ergodicity (\textbf{QUE}) problem for newform Eisenstein series, the author and Young \cite[Thm 1.3]{PY} found that \begin{align*} \frac{\langle |E|^2, \phi \rangle}{\langle 1, \phi \rangle} = \frac{6}{\pi}\log N + \frac{12}{\pi} \mathrm{Re}\frac{L'}{L}(1+2iT,\psi) + O(N^{-\frac{1}{8}+\varepsilon}), \end{align*} where $E=E_{\mathfrak{a}}(z,\tfrac{1}{2}+iT,\chi)$ for primitive $\chi$ mod $N$, $\psi=\psi(\chi,\mathfrak{a})$ is also primitive mod $N$, and $\phi$ is an $SL_2(\mathbb{Z})$-automorphic test function with nice analytic properties. An unconditional $o(\log N)$ bound being unavailable for the moment, we note that the Generalized Riemann Hypothesis (\textbf{GRH}) implies the following inequalities, which are also useful in this paper: \begin{align}\label{A1} \begin{split} \frac{L'}{L}(1+2iT,\chi) &\ll \log\log N; \\ \frac{L''}{L}(1+2iT,\chi) &\ll (\log\log N)^2. \end{split} \end{align} GRH also implies the Lindel\"of Hypothesis, which helps to bound some L-functions. In addition, our main theorem relies heavily on a recipe for conjecturing the moments of L-functions, due to Conrey, Farmer, Keating, Rubinstein, and Snaith \cite{CFKRS}; see Section \ref{recipe} for a sketch. \subsection{Statement of main result} Now we come back to the spectral sum $I_1$ obtained from the regularized fourth moment of the weight $0$ newform Eisenstein series $E=E_{\mathfrak{a}}(z,\tfrac{1}{2}+iT,\chi)$, for fixed $T\in \mathbb{R}$. Recall that $\chi$ is even, primitive mod $N$, and $\mathfrak{a}$ is singular for $\chi$. \begin{theorem}\label{main} Assume GRH and the recipe in \cite{CFKRS}. For all prime $N$ we have \begin{align*} I_1 \sim \frac{24}{\pi} \frac{\log^2 N}{\nu(N)} \Big(1 + \delta_{\chi^2=\chi_{0,N}} \delta_{T=0} \Big).
\end{align*} \end{theorem} \noindent Combining this with our unconditional estimate for $I_2$, we obtain the corollary below. \begin{remark}\label{composite} Indeed, we can relax $N$ to all positive integers with \begin{align*} \sigma_{-1}(N) = 1+O(N^{-\delta}) \end{align*} for any fixed $\delta>0$; see Remark \ref{relaxation} for an explanation. \end{remark} \begin{remark}\label{varyTQ} Again, we have assumed $T$ to be fixed here. Since each Eisenstein series $E$ is continuous in $T$, it is natural to ask what happens when $T\rightarrow 0$ as $N$ grows. For this purpose we allow $T=T(N)$ to approach zero, and conclude that $T\asymp \log^{-1} N$ marks the threshold. In other words, depending on whether $T$ shrinks faster or slower than $\log^{-1} N$, $I_1=I_1(T)$ has different asymptotics. \end{remark} \begin{corollary}\label{rvc} With the same assumptions as in Theorem \ref{main}, we have \begin{align}\label{2cases} \begin{split} \frac{\langle |E|^2, |E|^2 \rangle_{\mathrm{reg}}}{(\mathrm{Vol}(\Gamma_0(N)\backslash\mathbb{H}))^{-1} (\sqrt{2\log N})^4} \sim 2 \cdot \begin{cases} 3 & \text{when } \chi \text{ is quadratic and } T=0; \\ 2 & \text{otherwise}. \end{cases} \end{split} \end{align} \end{corollary} The left-hand side can be regarded as the fourth moment of $E$ under proper rescaling; the two cases on the right-hand side correspond to two models of the Gaussian Moments Conjecture. When $T=0$ and $\chi$ is quadratic, there exists a complex scalar $\epsilon$ such that $\epsilon E$ is real-valued, similarly to classical Eisenstein series \cite{DK1,DK2} and dihedral Maa{\ss} forms \cite{HK} in the $t$-aspect, so we expect the moments to behave like those of a real random wave in the $N$-aspect. Thus, we have good reason to expect some similarity with the real Gaussian distribution, whose fourth moment is $3$.
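The constants $3$ and $2$ are exactly the fourth moments of standard real and complex Gaussians: if $X\sim N(0,1)$ then $E[X^4]=3$, while for a complex Gaussian $Z$ normalized so that $E[|Z|^2]=1$ one has $E[|Z|^4]=2$. A quick Monte Carlo illustration (an aside, not part of the argument; the sample size and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Real standard Gaussian: E[X^4] = 3
x = rng.standard_normal(n)
m4_real = np.mean(x**4)

# Complex Gaussian normalized so that E[|Z|^2] = 1: E[|Z|^4] = 2
z = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
m4_complex = np.mean(np.abs(z)**4)
```

With a million samples, `m4_real` and `m4_complex` approximate $3$ and $2$ to within a few hundredths.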
In all other cases, the Hecke sequence of $E$ is not contained in any straight line in the complex plane, and as $N$ grows, the limiting behavior resembles the complex Gaussian distribution with fourth moment $2$, as Blomer, Khan and Young showed for holomorphic forms of large weight in \cite{BKY}. \subsection{The fourth moment of truncated newform Eisenstein series} Djankovi\'c and Khan \cite{DK1} checked the consistency of their conjecture (later proved in \cite{DK2}) on the regularized fourth moment of classical Eisenstein series with the Random Wave Conjecture. Following their methods, we obtain an essentially different result in the level aspect. Recall that we write $E^Y$ for $E$ truncated in its $Y$-cuspidal zones. \begin{remark} Here, we need an analogue of Spinu's \cite{Sp} work\footnote{Up to a couple of errors that were fixed by Humphries \cite{H}.} on the optimal upper bound for the fourth moment of truncated classical Eisenstein series, in the level aspect: \begin{align}\label{A3} \langle |E^Y|^2, |E^Y|^2 \rangle \ll_Y \frac{\log^2 N}{\nu(N)}. \end{align} Here $E^Y$ stands for the truncated Eisenstein series for $Y>1$; in the spectral parameter aspect, it has been proven by Humphries that the fourth moment of the truncated classical Eisenstein series is $\Omega(\log^2 T)$; GMC says the correct main term should be $\tfrac{36}{\pi}\log^2 T$. \end{remark} \begin{theorem}\label{0diff} Fix $T=0$ and let $\chi$ be quadratic mod $N$.\footnote{The restrictions on $T$ and $\chi$ are for brevity of discussion; as we point out in Remark \ref{explanation}, the results also hold, and are much simpler, in the other cases.} With GRH and inequality \eqref{A3}, we have \begin{align*} \langle |E|^2, |E|^2 \rangle_{\mathrm{reg}} - \langle |E^Y|^2, |E^Y|^2 \rangle = o_Y(\frac{\log^2 N}{\nu(N)}).
\end{align*} \end{theorem} \begin{remark} A level aspect variant for the Gaussian Moments Conjecture has not been formulated yet, so a first guess would be to simply replace every $t$ with $N$ and renormalize with $\nu^{-1}(N)$ in the results for classical Eisenstein series. However, as Theorem \ref{0diff} suggests, if we expect \begin{align*} \langle |E^Y|^2, |E^Y|^2 \rangle \sim C_4 \frac{\log^2 N}{\nu(N)}, \end{align*} for some constant $C_4$ (note this implies \eqref{A3}), then Corollary \ref{rvc} says for $T=0$ and $\chi^2=\chi_{0,N}$, \begin{align*} C_4 = \frac{72}{\pi}, \end{align*} instead of the na\"ively expected $\tfrac{36}{\pi}$. Why there is an extra factor of $2$ remains an open question. \end{remark} \section{Prerequisite} \subsection{Automorphic L-functions} Let $u=u(z)$ be an $L^2$-normalized Maa{\ss} form of trivial nebentypus, spectral parameter $t=t(u)$, and level $M=M(u)$, where $M\mid N$. For each $u$, we have the following formal approximate functional equations \begin{align}\label{FE1} L(s,u) = \sum_{n\geq 1} \frac{\lambda_u(n)}{n^s} g_1(\frac{n}{\sqrt{M}})+ \gamma(s,u) \sum_{n\geq 1} \frac{\lambda_u(n)}{n^{1-s}} g_1(\frac{n}{\sqrt{M}}), \end{align} where $\lambda_u(n)$ is the $n$-th Hecke eigenvalue, $g_1$ is some smooth and compactly supported weight function, and $\rho_u \sqrt{y}\lambda_u(n) K_{it}(2\pi |n| y)$ is the $n$-th Fourier coefficient of $u$. Similarly, \begin{align}\label{FE2} L(s,u \otimes \chi) = \sum_{n\geq 1} \frac{\lambda_u(n) \chi(n)}{n^s} g_2(\frac{n}{N}) + \gamma(s, u\otimes \chi) \sum_{n\geq 1} \frac{\lambda_u(n) \overline{\chi}(n)}{n^{1-s}} g_2(\frac{n}{N}), \end{align} for some weight function $g_2$. The $\gamma$-functions are defined as follows (see \cite[Sec.
2.1-2.2]{BFKMM}): \begin{align*} \gamma(s,u) &=\lambda_u(-1) f(s,u)\\ \gamma(s, u\otimes \chi) &= \lambda_u(-1) (\frac{\tau(\chi)}{\sqrt{N}})^2 f(s,u\otimes \chi), \end{align*} with (here $Q(u)=M$ and $Q(u\otimes \chi)=N^2$) \begin{align*} f(s,g) &= Q(g)^{\frac{1}{2}-s} \pi^{2s-1} \frac{\Gamma(\frac{1-s+it}{2}) \Gamma(\frac{1-s-it}{2})}{\Gamma(\frac{s+it}{2}) \Gamma(\frac{s-it}{2})}. \end{align*} \subsection{Eisenstein series} The Fourier expansion of newform Eisenstein series can be written out explicitly; see \cite{Young} for more details. Let $E=E_{\mathfrak{a}}(z,s,\chi)$ with primitive $\chi$ and cusp $\mathfrak{a} \sim_{\Gamma_0(N)} \tfrac{1}{f}$ singular for $\chi$ for some $f\mid N$ with $(f,\tfrac{N}{f})=1$.\footnote{Here $f$ is uniquely determined by $\mathfrak{a}$, see \cite[Sec 2]{PY} for more details.} Then we can uniquely decompose $\chi=\chi_1\overline{\chi_2}$ with $\chi_1, \chi_2$ primitive mod $\tfrac{N}{f}, f$ respectively. Writing $\psi=\chi_1\chi_2$, and $q_1=\tfrac{N}{f}$, $q_2=f$ for the moduli of $\chi_1,\chi_2$, we have \begin{align}\label{theta} E =\frac{\tau(\chi_2)}{q_2^{-s}} \Lambda^{-1}(2s,\psi) E^*_{\chi_1,\chi_2}(z,s). \end{align} The completed Eisenstein series $E^*_{\chi_1,\chi_2}(z,s)$ has the following Fourier expansion (see \cite[Prop 4.1]{Young}) \begin{align}\label{Fourier} E^* _{\chi_1, \chi_2}(z,s) = e_{\chi_1, \chi_2}^*(y,s) + 2\sqrt{y} \sum_{n \neq 0} \lambda_{\chi_1, \chi_2}(n,s) e(nx) K_{s-\frac{1}{2}}(2\pi |n| y), \end{align} where the constant term is \begin{align*} e^*_{\chi_1, \chi_2}(y,s)= \delta_{q_1 =1} \theta_{1,\chi_2}(s) (q_2y)^s + \delta_{q_2 =1} \theta_{1,\overline{\chi_1}}(1-s) (q_1y)^{1-s}, \end{align*} $\lambda_{\chi_1, \chi_2}(n,s) = \chi_2(\frac{n}{|n|}) \sum_{ab=|n|}\chi_1(a)\overline{\chi_2}(b) (\frac{b}{a})^{s-\frac{1}{2}}$, $\tau(\chi)$ is the Gauss sum of $\chi$, and $K_{\alpha}$ is the $K$-Bessel function of order $\alpha \in \mathbb{C}$.
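As an illustrative aside, at the central point $s=\tfrac{1}{2}$ the coefficients $\lambda_{\chi_1,\chi_2}(n,s)$ reduce to twisted divisor sums and satisfy Hecke-type relations. The following sketch checks multiplicativity on coprime arguments and the relation $\lambda(q)^2=\lambda(q^2)+\psi(q)$ for the toy choice of $\chi_1$ trivial and $\chi_2=\psi$ the quadratic character mod $7$ (a hypothetical stand-in for the characters above, chosen only so the computation is elementary):

```python
def psi(a, p=7):
    # Quadratic (Legendre) character mod 7, a toy stand-in for chi_2.
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

def lam(n):
    # lambda_{1,psi}(n, 1/2) = sum_{ab=n} conj(psi)(b) = sum_{b | n} psi(b),
    # since psi is real-valued.
    return sum(psi(b) for b in range(1, n + 1) if n % b == 0)
```

For example, $\lambda(6)=\lambda(2)\lambda(3)$ and $\lambda(q)^2-\lambda(q^2)=\psi(q)$ for every prime $q$, including the ramified prime $q=7$.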
Following this, for any $u$ we have (\cite[Lemma 6.1]{PY}) \begin{align}\label{expression} |\langle |E|^2, u \rangle|^2 = |\rho_u|^2 \frac{1+\lambda_u(-1)}{8 N} \frac{|\Gamma(\frac{\frac{1}{2} + it}{2})|^4 \prod_{\pm} |\Gamma(\frac{\frac{1}{2}+2iT \pm it}{2})|^2}{|\Gamma(\frac{1}{2}+iT)|^4} \frac{L^2(\frac{1}{2},u) |L(\frac{1}{2}+2iT, u\otimes \psi)|^2}{|L(1+2iT, \psi)|^4}. \end{align} A useful fact from a similar calculation is that for any $d\mid \tfrac{N}{M}$ we have \begin{align}\label{oldclass} \langle |E|^2, u|_d \rangle = \langle |E|^2, u \rangle. \end{align} \subsection{An orthonormal basis}\label{orthobase} Write $\mathcal{B}_{it}(M)=\mathcal{B}^{(N)}_{it}(M)$ for a collection of Maa{\ss} newforms of level $M$, spectral parameter $it$, and $L^2$-norm $1$ on $\Gamma_0(N)\backslash\mathbb{H}$. Also write $\mathcal{B}(M)=\sqcup_t \mathcal{B}_{it}(M)$. An orthonormal basis of Maa{\ss} forms of level $N$ can be chosen as follows: \begin{align}\label{BM} \mathcal{O}(N) := \Big\{ u^{\scriptscriptstyle{<\ell>}}(z)=\sum_{d\mid \ell}\xi_{\ell}(d)u|_d \quad \Big| \quad u \in \mathcal{B}_{it}(M), \ell \mid L, ML=N, t\in \mathrm{Spec}_{\Gamma_0(N)}(\Delta) \Big\}, \end{align} with $\xi_{\ell}(d)$ defined in Lemma 2.1 of \cite{BM}. In addition we have \begin{align}\label{BMsharp} \sum_{d\mid \ell}|\xi_{\ell}(d)|=O(\ell^{\varepsilon} ). \end{align} \begin{remark} As Young verified later, this choice of coefficients also yields an orthonormal basis for Eisenstein series, in the sense of the formal inner product $\langle E_{\mathfrak{a}}, E_{\mathfrak{b}}\rangle := \delta_{\mathfrak{a}=\mathfrak{b}} 4\pi$. Henceforth we can let $\mathcal{O}(N)$ contain Eisenstein series and omit specific discussions on them. \end{remark} \subsection{The recipe for conjecturing moments of L-functions}\label{recipe} There is a recipe for making conjectures on the moments of L-functions, due to B. Conrey, D. Farmer, J. Keating, M. Rubinstein and N. Snaith \cite{CFKRS}.
We call it ``the recipe'' for short throughout this article. To summarize, we can approximate the average of L-functions by \begin{align*} \sum_u \Big|\sum_{n\geq 1} \frac{a_u(n)}{n^s} + \gamma_u \sum_{n\geq 1} \frac{b_u(n)}{n^{1-s}}\Big|^{2k}, \end{align*} and after we expand the $2k$-th power as \begin{align*} \sum_u \underbrace{\text{Product of Root Numbers}}_{A_u} \cdot \underbrace{\text{Weighted Multi-Dirichlet Series}}_{B_u}, \end{align*} we can instead estimate \begin{align*} \sum_u (\text{expected mean of } A_u \text{ over the family of } u) \cdot B_u. \end{align*} For each type (unitary, orthogonal, symplectic) of L-functions, there are underlying cancellations, so that no significant loss is expected in this transformation. Needless to say, a rigorous proof that the two expressions have the same size appears unavailable at the moment. \begin{remark} We introduce the recipe for moments of L-functions for convenience, but it is also applicable to hybrid moments of $L$-functions, which is the case for our Theorem \ref{main} with \eqref{FE1} and \eqref{FE2}. \end{remark} \subsection{The de Branges-Wilson Beta Integral and the Kuznetsov Trace Formula} There is an integral formula by Louis de Branges and James A. Wilson, for which we refer the reader to lecture notes by R. Askey \cite{A}. Writing $B(x,y)$ for the Beta function, they showed that \begin{align}\label{8pi3} \int_{-\infty}^{\infty} t \sinh \pi t \prod_{\epsilon_1,\epsilon_2=\pm 1} B(\tfrac{1}{4}+\epsilon_1 i\tfrac{t}{2}+ \epsilon_2 iT, \tfrac{1}{4}- \epsilon_1 i\tfrac{t}{2}) dt = 8 \pi^3. \end{align} With Kuznetsov's formula (see \cite[Thm 9.3]{I2}), this identity immediately yields the following lemma. \begin{lemma}\label{kuz} We have \begin{align*} \sum_{u\in \mathcal{B}(N)}|\rho_u|^2 \lambda_u(m)\lambda_u(n) \prod_{\epsilon_1,\epsilon_2=\pm 1} B(\tfrac{1}{4}+\epsilon_1 i\tfrac{t_u}{2}+ \epsilon_2 iT, \tfrac{1}{4}- \epsilon_1 i\tfrac{t_u}{2}) = \delta_{m=n} 8\pi + \text{Off-Diagonal Terms}.
\end{align*} \end{lemma} \begin{remark} The off-diagonal terms can be ignored for the main term of $I_1$, due to the recipe. \end{remark} \section{Proof of Theorem \ref{main}} \subsection{Initial cleanings} Recall the notation of Section \ref{orthobase}. Let $E=E_{\infty}(z,\frac{1}{2}+iT,\chi)$ be a newform Eisenstein series of even primitive nebentypus mod $N$. We begin by simplifying \begin{align}\label{withEis} \langle |E|^2 - \mathcal{E}, |E|^2 - \mathcal{E} \rangle = \sum_{u\in \mathcal{O}(N)} \langle |E|^2,u\rangle \langle u,|E|^2\rangle. \end{align} When $N$ is prime, \eqref{BM} reduces the above sum to \begin{align*} \sum_{u\in \mathcal{B}(N)} \langle |E|^2,u\rangle \langle u,|E|^2\rangle + \sum_{u\in \mathcal{B}(1)} \sum_{\ell \mid N} \langle |E|^2,u^{<\ell>}\rangle \langle u^{<\ell>},|E|^2\rangle. \end{align*} Then by \eqref{oldclass}, for all $u\in \mathcal{B}(1)$ we have \begin{align*} \langle |E|^2,u^{<1>}\rangle &= \langle |E|^2,u\rangle\\ \langle |E|^2,u^{<N>}\rangle &= \langle |E|^2, \xi_N(1) u + \xi_N(N) u|_N \rangle = \overline{(\xi_N(1) + \xi_N(N))} \langle |E|^2,u\rangle, \end{align*} so \eqref{BMsharp} implies \begin{align*} \sum_{u\in \mathcal{B}(1)} \sum_{\ell \mid N} \langle |E|^2,u^{<\ell>}\rangle \langle u^{<\ell>},|E|^2\rangle = (2 + O(N^{-1+2\theta+\varepsilon})) \sum_{u\in \mathcal{B}(1)} \langle |E|^2,u\rangle \langle u,|E|^2\rangle. \end{align*} With \eqref{expression}, the spectral sum over $u\in \mathcal{B}(1)$ can be bounded by $N^{-2+\varepsilon}$ by taking the Lindel\"of bound for the L-functions, which is negligible in comparison with the contribution from $\mathcal{B}(N)$. Thus \begin{align*} \langle |E|^2 - \mathcal{E}, |E|^2 - \mathcal{E} \rangle \sim \sum_{u\in \mathcal{B}(N)} \langle |E|^2,u\rangle \langle u,|E|^2\rangle. \end{align*} \begin{remark}\label{relaxation} A similar argument handles the general case.
For $N>1$, we have \begin{align*} \sum_{u\in \mathcal{O}(N)} \langle |E|^2,u\rangle \langle u,|E|^2\rangle = \sum_{N=ML} \sum_{u\in \mathcal{B}(M)} \sum_{\ell \mid L} \langle |E|^2,u^{<\ell>}\rangle \langle u^{<\ell>},|E|^2\rangle. \end{align*} Here $u^{<\ell>}$ can similarly be expressed as a linear combination of $u|_d$ for $d\mid \ell$, and the sum over all linear coefficients is $O(\ell^{\varepsilon})$, due to \cite{BM}. For all $M<N$ and $L>1$, we have \begin{align*} \sum_{u\in \mathcal{B}(M)} \langle |E|^2,u\rangle \langle u,|E|^2\rangle \ll \frac{N^{\varepsilon}}{\nu(N)} \frac{M}{N}. \end{align*} In total, \begin{align*} \sum_{u\in \mathcal{O}(N)} \langle |E|^2,u\rangle \langle u,|E|^2\rangle = O\Big( \frac{N^{\varepsilon}}{\nu(N)} \sum_{\substack{M\mid N \\ M<N}} \frac{M}{N} \Big) + \sum_{u\in \mathcal{B}(N)} \langle |E|^2,u\rangle \langle u,|E|^2\rangle. \end{align*} As the proof will show, the second term is of size $\nu^{-1}(N) \log^2 N$, so we can allow $N$ to satisfy \begin{align*} \sum_{\substack{M\mid N \\ M>1}} \frac{1}{M} \ll N^{-\delta}, \end{align*} for any fixed $\delta>0$, as is claimed in Remark \ref{composite}. \end{remark} \subsection{Product of formal approximate functional equations} For convenience, denote \begin{align*} \chi^{+1}=\chi, \quad \chi^{-1}=\overline{\chi}. \end{align*} Also write $\epsilon=(\epsilon_1,\epsilon_2,\epsilon_3,\epsilon_4) \in \{\pm 1\}^4$, $\alpha=(\alpha_1,\alpha_2,\alpha_3,\alpha_4) \in \mathbb{C}^4$ and $\alpha_0=(0,0,2iT,-2iT)$. Insert the formal approximate functional equations into \eqref{expression} and sum over $u\in \mathcal{B}(N)$.
The recipe says \begin{multline} \label{limit} \sum_{u} |\langle |E|^2, u \rangle|^2 \sim \frac{1}{8N} \lim_{\alpha\rightarrow\alpha_0} \sum_{u} (1+\lambda_u(-1)) W_u \sum_{\epsilon_1,...,\epsilon_4 = \pm 1} \prod_{j=1}^4 X_j f_j \\ \sum_{n_1,...,n_4 \geq 1} \frac{\lambda_u(n_1)\lambda_u(n_2)\lambda_u(n_3)\lambda_u(n_4)\chi^{\epsilon_3}(n_3)\overline{\chi}^{\epsilon_4}(n_4)}{n_1^{\frac{1}{2}+\epsilon_1 \alpha_1} n_2^{\frac{1}{2}+\epsilon_2\alpha_2} n_3^{\frac{1}{2}+\epsilon_3\alpha_3} n_4^{\frac{1}{2}+\epsilon_4\alpha_4}}, \end{multline} with \begin{align}\label{Wu} W_u = \frac{|\rho_u|^2 }{ |L(1+2iT,\chi)|^4} \prod_{\epsilon_1,\epsilon_2=\pm 1} B(\tfrac{1}{4}+\epsilon_1 i\tfrac{t}{2}+ \epsilon_2 iT, \tfrac{1}{4}- \epsilon_1 i\tfrac{t}{2}), \end{align} $X_j=X_j(\epsilon_j)$ and $f_j=f_j(\epsilon_j)$ for $j=1,2,3,4$. Moreover, \begin{align*} X_j(\epsilon_j) &= \Big(\lambda_u(-1)\Big)^{\frac{1-\epsilon_j}{2}}, \ j=1,2; \quad X_3(\epsilon_3) = \Big(\lambda_u(-1) (\frac{\tau(\chi)}{\sqrt{N}})^2\Big)^{\frac{1-\epsilon_3}{2}}, \quad X_4(\epsilon_4) =\Big(\lambda_u(-1) (\frac{\tau(\overline{\chi})}{\sqrt{N}})^2\Big)^{\frac{1-\epsilon_4}{2}}, \\ f_j(\epsilon_j) &= \Big(f(\frac{1}{2}+\alpha_j,u)\Big)^{\frac{1-\epsilon_j}{2}}, \ j=1,2; \quad f_j(\epsilon_j) = \Big(f(\tfrac{1}{2}+\alpha_j,u\otimes\chi)\Big)^{\frac{1-\epsilon_j}{2}}, \ j=3,4. \end{align*} \begin{remark}\label{anypath} Since the right-hand side in \eqref{limit} is analytic around $\alpha=\alpha_0$, we may approach this point along any path in a neighborhood of it. This is useful in the last step of the proof.
\end{remark} With \eqref{withEis} and \eqref{limit}, we can write \begin{align}\label{milestone1} \langle |E|^2 - \mathcal{E}, |E|^2 - \mathcal{E} \rangle = \lim_{\alpha\rightarrow\alpha_0} \sum_{\epsilon_1,...,\epsilon_4=\pm 1} S(\epsilon,\alpha), \end{align} where $S(\epsilon,\alpha)$ equals \begin{align}\label{ABC} \frac{1}{8N} \sum_{u} \underbrace{(1+\lambda_u(-1)) \prod_{j=1}^4 X_j}_{A_u} \underbrace{W_u \prod_{j=1}^4 f_j \sum_{n_1,...,n_4 \geq 1} \frac{\lambda_u(n_1)\lambda_u(n_2)\lambda_u(n_3)\lambda_u(n_4)\chi^{\epsilon_3}(n_3)\overline{\chi}^{\epsilon_4}(n_4)}{n_1^{\frac{1}{2}+\epsilon_1 \alpha_1} n_2^{\frac{1}{2}+\epsilon_2\alpha_2} n_3^{\frac{1}{2}+\epsilon_3\alpha_3} n_4^{\frac{1}{2}+\epsilon_4\alpha_4}} }_{B_u}. \end{align} Since \begin{align*} \frac{1}{|\mathcal{B}(N)|} \sum_{u \in \mathcal{B}(N)} (1+\lambda_u(-1)) \prod_{j=1}^4 X_j = (\frac{\tau(\chi)}{\sqrt{N}})^{\epsilon_3-\epsilon_4} + o(1), \end{align*} following the recipe, we replace $A_u$ with $(\tfrac{\tau(\chi)}{\sqrt{N}})^{\epsilon_3-\epsilon_4}$, its ``expected mean'' in the family, in \eqref{ABC}.
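The replaced factor has modulus one: for primitive $\chi$ mod $N$ one has $|\tau(\chi)|=\sqrt{N}$. A quick numerical sketch of this classical fact, using quadratic characters of small prime moduli as toy stand-ins for the paper's $\chi$:

```python
import cmath
import math

def quad_char(a, p):
    # Quadratic (Legendre) character mod an odd prime p; primitive for prime p.
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

def gauss_sum(p):
    # tau(chi) = sum_{a mod p} chi(a) e(a/p); its modulus should be sqrt(p).
    return sum(quad_char(a, p) * cmath.exp(2j * cmath.pi * a / p)
               for a in range(1, p))
```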
\subsection{Reduction of the multi-Dirichlet series} Now we see that $S(\epsilon,\alpha)$ asymptotically equals \begin{align*} \frac{1}{8N} (\frac{\tau(\chi)}{\sqrt{N}})^{\epsilon_3-\epsilon_4} \sum_{u} W_u \prod_{j=1}^4 f_j \sum_{n_1,...,n_4 \geq 1} \frac{\lambda_u(n_1)\lambda_u(n_2)\lambda_u(n_3)\lambda_u(n_4)\chi^{\epsilon_3}(n_3)\overline{\chi}^{\epsilon_4}(n_4)}{n_1^{\frac{1}{2}+\epsilon_1 \alpha_1} n_2^{\frac{1}{2}+\epsilon_2\alpha_2} n_3^{\frac{1}{2}+\epsilon_3\alpha_3} n_4^{\frac{1}{2}+\epsilon_4\alpha_4}}. \end{align*} Applying the Hecke relations, we can rewrite the quadruple sum on the right-hand side as \begin{align*} \zeta(1+\epsilon_1\alpha_1+\epsilon_2\alpha_2) L(1+\epsilon_3\alpha_3+\epsilon_4\alpha_4, \chi^{\epsilon_3}\overline{\chi}^{\epsilon_4}) \sum_{n_1,...,n_4 \geq 1}\frac{\lambda_u(n_1 n_2) \lambda_u(n_3 n_4) \chi^{\epsilon_3}(n_3)\overline{\chi}^{\epsilon_4}(n_4)}{n_1^{\frac{1}{2}+\epsilon_1 \alpha_1} n_2^{\frac{1}{2}+\epsilon_2\alpha_2} n_3^{\frac{1}{2}+\epsilon_3\alpha_3} n_4^{\frac{1}{2}+\epsilon_4\alpha_4}}. \end{align*} For convenience, denote \begin{align}\label{hu} h_u(\epsilon,\alpha,t) = \prod_{j=1}^4 f_j \Big/ \Big( (\frac{\pi^2}{N})^{\frac{1-\epsilon_1}{2}\alpha_1+\frac{1-\epsilon_2}{2}\alpha_2} (\frac{\pi^2}{N^2})^{\frac{1-\epsilon_3}{2}\alpha_3+\frac{1-\epsilon_4}{2}\alpha_4} \Big) \end{align} for the product of Gamma fractions. Note that $f_j$ depends on $t=t_u$.
Then \begin{multline}\label{ss} S(\epsilon,\alpha) \sim \frac{1}{8N} (\frac{\tau(\chi)}{\sqrt{N}})^{\epsilon_3-\epsilon_4} (\frac{\pi^2}{N})^{\frac{1-\epsilon_1}{2}\alpha_1+\frac{1-\epsilon_2}{2}\alpha_2} (\frac{\pi^2}{N^2})^{\frac{1-\epsilon_3}{2}\alpha_3+\frac{1-\epsilon_4}{2}\alpha_4} \zeta(1+\epsilon_1\alpha_1+\epsilon_2\alpha_2) \\ L(1+\epsilon_3\alpha_3+\epsilon_4\alpha_4, \chi^{\epsilon_3}\overline{\chi}^{\epsilon_4}) \sum_{u} h_u W_u \sum_{n_1,...,n_4 \geq 1}\frac{\lambda_u(n_1 n_2) \lambda_u(n_3 n_4) \chi^{\epsilon_3}(n_3)\overline{\chi}^{\epsilon_4}(n_4)}{n_1^{\frac{1}{2}+\epsilon_1 \alpha_1} n_2^{\frac{1}{2}+\epsilon_2\alpha_2} n_3^{\frac{1}{2}+\epsilon_3\alpha_3} n_4^{\frac{1}{2}+\epsilon_4\alpha_4}}. \end{multline} In the next step of the recipe, we need to find the diagonal term for \begin{align*} \sum_{u} h_u W_u \sum_{n_1,...,n_4 \geq 1}\frac{\lambda_u(n_1 n_2) \lambda_u(n_3 n_4) \chi^{\epsilon_3}(n_3)\overline{\chi}^{\epsilon_4}(n_4)}{n_1^{\frac{1}{2}+\epsilon_1 \alpha_1} n_2^{\frac{1}{2}+\epsilon_2\alpha_2} n_3^{\frac{1}{2}+\epsilon_3\alpha_3} n_4^{\frac{1}{2}+\epsilon_4\alpha_4}} \end{align*} in the Kuznetsov trace formula. In general, \begin{align*} \sum_{u} h_u W_u \lambda_u(n_1 n_2) \lambda_u(n_3 n_4) \sim \delta_{n_1n_2=n_3n_4} F_{\epsilon}(h_u;T) |L(1+2iT,\chi)|^{-4}, \end{align*} with \begin{align*} F_{\epsilon}(h_u;T) = \frac{1}{\pi^2} \int_{-\infty}^{\infty} t \sinh(\pi t) \prod_{\epsilon_1,\epsilon_2=\pm 1} B(\tfrac{1}{4}+\epsilon_1 i\tfrac{t}{2}+ \epsilon_2 iT, \tfrac{1}{4}- \epsilon_1 i\tfrac{t}{2}) h_u(\epsilon,\alpha, t) dt, \end{align*} and by Lemma \ref{kuz}, $F_{\epsilon}(h_u;T) =8\pi$ when $\epsilon_3=\epsilon_4$. If $\epsilon_3 \neq \epsilon_4$, as $T\rightarrow 0$ we also have $F_{\epsilon}(h_u;T)\rightarrow 8\pi$.
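Before treating the remaining restricted sum, we record a numerical sanity check of the untwisted special case of the Ramanujan-type identity used in the next step: with trivial character and all shifts equal, the restricted sum collapses to $\sum_{n\geq 1} d(n)^2 n^{-w} = \zeta(w)^4/\zeta(2w)$. A sketch at $w=3$ (the exponent is our choice, made only for fast convergence; the paper's actual setting has a quadratic twist and shifts near the $1$-line):

```python
import math

X = 20000
# Sieve the divisor function d(n) for n <= X.
d = [0] * (X + 1)
for a in range(1, X + 1):
    for m in range(a, X + 1, a):
        d[m] += 1

w = 3
lhs = sum(d[n] ** 2 / n ** w for n in range(1, X + 1))

zeta3 = sum(1.0 / n ** 3 for n in range(1, 10 ** 6))  # partial sum for zeta(3)
zeta6 = math.pi ** 6 / 945                            # zeta(6) exactly
rhs = zeta3 ** 4 / zeta6
```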
For the remaining quadruple sum \begin{align*} \sum_{\substack{n_1,...,n_4 \geq 1 \\ n_1n_2=n_3n_4}}\frac{ \chi^{\epsilon_3}(n_3)\overline{\chi}^{\epsilon_4}(n_4)}{n_1^{\frac{1}{2}+\epsilon_1 \alpha_1} n_2^{\frac{1}{2}+\epsilon_2\alpha_2} n_3^{\frac{1}{2}+\epsilon_3\alpha_3} n_4^{\frac{1}{2}+\epsilon_4\alpha_4}}, \end{align*} we can apply Ramanujan's formula for twisted Dirichlet series (see \cite[(13.1)]{I1}) to rewrite it as \begin{align*} \frac{L(1+\epsilon_1\alpha_1+\epsilon_3\alpha_3,\chi^{\epsilon_3}) L(1+\epsilon_2\alpha_2+\epsilon_3\alpha_3, \chi^{\epsilon_3}) L(1+\epsilon_1\alpha_1+\epsilon_4\alpha_4, \overline{\chi}^{\epsilon_4}) L(1+\epsilon_2\alpha_2+\epsilon_4\alpha_4, \overline{\chi}^{\epsilon_4})}{L(2+\epsilon_1\alpha_1+\epsilon_2\alpha_2+\epsilon_3\alpha_3+\epsilon_4\alpha_4, \chi^{\epsilon_3}\overline{\chi}^{\epsilon_4})}. \end{align*} To summarize, \eqref{ss} can be rewritten as \begin{multline}\label{milestone2} S(\epsilon,\alpha) \sim \frac{F_{\epsilon}(h;T)}{8N |L(1+2iT,\chi)|^4} (\frac{\tau(\chi)}{\sqrt{N}})^{\epsilon_3-\epsilon_4} \prod_{j=1}^4 \pi^{(1-\epsilon_j)\alpha_j} \\ N^{\frac{\epsilon_1 -1}{2}\alpha_1+\frac{\epsilon_2 -1}{2}\alpha_2+(\epsilon_3 -1)\alpha_3 +(\epsilon_4 -1)\alpha_4} \zeta(1+\epsilon_1\alpha_1+\epsilon_2\alpha_2) L(1+\epsilon_3\alpha_3+\epsilon_4\alpha_4, \chi^{\epsilon_3}\overline{\chi}^{\epsilon_4}) \\ \frac{\prod_{j=1,2}L(1+\epsilon_j\alpha_j+\epsilon_3\alpha_3,\chi^{\epsilon_3}) L(1+\epsilon_j\alpha_j+\epsilon_4\alpha_4, \overline{\chi}^{\epsilon_4})}{L(2+\epsilon_1\alpha_1+\epsilon_2\alpha_2+\epsilon_3\alpha_3+\epsilon_4\alpha_4, \chi^{\epsilon_3}\overline{\chi}^{\epsilon_4})}. \end{multline} \subsection{Final evaluation} Recalling Remark \ref{anypath}, to calculate $\lim_{\alpha\rightarrow\alpha_0}\sum_\epsilon S(\epsilon,\alpha)$ we may proceed in the following order: \begin{itemize} \item Substitute $\alpha=(0,\eta',2iT,-2iT+\eta)$ for $\eta>\eta'>0$.
\item Compute $\lim_{\eta'\rightarrow 0} \sum_{\epsilon_1,\epsilon_2=\pm 1}S(\epsilon,\alpha)$. Write the result as $R_{\epsilon_3,\epsilon_4}(\eta)$. \item Compute $\lim_{\eta \rightarrow 0} \sum_{\epsilon_3,\epsilon_4=\pm 1}R_{\epsilon_3,\epsilon_4}(\eta)$. \end{itemize} Specifically, as $\eta'\rightarrow 0$, we have \begin{multline*} F_{\epsilon}(h;T) \pi^{(1-\epsilon_2)\eta'}N^{\frac{\epsilon_2 -1}{2}\eta'} \zeta(1+\epsilon_2\eta') \frac{L(1+\epsilon_2\eta'+\epsilon_3 2iT, \chi^{\epsilon_3}) L(1+\epsilon_2\eta'+\epsilon_4(-2iT+\eta), \overline{\chi}^{\epsilon_4})}{L(2+\epsilon_2\eta'+(\epsilon_3-\epsilon_4)2iT+\epsilon_4\eta, \chi^{\epsilon_3}\overline{\chi}^{\epsilon_4})} \\ =\frac{K_{\epsilon_3,\epsilon_4}(T,\eta)}{\epsilon_2\eta'} - \delta_{\epsilon_2=-1} H_{\epsilon}(\eta) \log N \frac{L(1+\epsilon_3 2iT, \chi^{\epsilon_3}) L(1+\epsilon_4(-2iT+\eta), \overline{\chi}^{\epsilon_4})}{L(2+(\epsilon_3-\epsilon_4)2iT+\epsilon_4\eta, \chi^{\epsilon_3}\overline{\chi}^{\epsilon_4})} + O(\log\log N) + O(\eta'), \end{multline*} where $H_{\epsilon}(\eta)=F_{\epsilon}(h_0;T)$ with $h_0=h_u(\epsilon,(0,0,2iT,-2iT+\eta),t)$, and \begin{align*} K_{\epsilon_3,\epsilon_4}(T,\eta) = \frac{L(1+\epsilon_3 2iT, \chi^{\epsilon_3}) L(1+\epsilon_4(-2iT+\eta), \overline{\chi}^{\epsilon_4})}{L(2+(\epsilon_3-\epsilon_4)2iT+\epsilon_4\eta, \chi^{\epsilon_3}\overline{\chi}^{\epsilon_4})}. 
\end{align*} Since $K_{\epsilon_3,\epsilon_4}(T,\eta)$ is independent of $\epsilon_1,\epsilon_2$, and so is $H_{\epsilon}(\eta)$ when $\alpha_1=0, \alpha_2\rightarrow 0$, $R_{\epsilon_3,\epsilon_4}(\eta)$ equals \begin{multline*} \lim_{\eta'\rightarrow 0}\sum_{\epsilon_1,\epsilon_2 =\pm 1} S(\epsilon,\alpha) = \frac{\pi^{(\epsilon_4-\epsilon_3)2iT }N^{(\epsilon_3 -\epsilon_4)2iT}}{8N |L(1+2iT,\chi)|^4} (\frac{\tau(\chi)}{\sqrt{N}})^{\epsilon_3-\epsilon_4} L(1+(\epsilon_3-\epsilon_4)2iT+\epsilon_4\eta, \chi^{\epsilon_3}\overline{\chi}^{\epsilon_4}) \\ \pi^{(1-\epsilon_4)\eta} N^{(\epsilon_4-1)\eta} \frac{L^2(1+\epsilon_3 2iT,\chi^{\epsilon_3}) L^2(1+\epsilon_4 (-2iT+\eta),\overline{\chi}^{\epsilon_4})}{L(2+(\epsilon_3-\epsilon_4)2iT+\epsilon_4\eta, \chi^{\epsilon_3}\overline{\chi}^{\epsilon_4})} \Big( -2 H_{\epsilon}(\eta) \log N +O(\log\log N) \Big). \end{multline*} Now we analyze the Laurent expansion of $R_{\epsilon_3,\epsilon_4}(\eta)$ around $\eta=0$. We need to discuss four cases: \noindent \textbf{Case 1.} If $\epsilon_3\neq\epsilon_4$ and $\chi$ is complex, then $L(1+(\epsilon_3-\epsilon_4)2iT+\epsilon_4\eta, \chi^{\epsilon_3}\overline{\chi}^{\epsilon_4})$ is not principal, so $R_{\epsilon_3,\epsilon_4}(\eta)$ is analytic at $\eta=0$, and by GRH, for any fixed $T\in \mathbb{R}$ we have (note $H_{\epsilon}(0)=O(1)$) \begin{align}\label{nopole} R_{\epsilon_3,\epsilon_4}(0) = O(\log N (\log\log N)^9). \end{align} \noindent \textbf{Case 2.} If $\epsilon_3\neq\epsilon_4$, $\chi$ is quadratic, but $T\neq 0$, then there is no pole around $\eta=0$ for the principal L-function $L(1+(\epsilon_3-\epsilon_4)2iT+\epsilon_4\eta, \chi^{\epsilon_3}\overline{\chi}^{\epsilon_4})$, so we similarly have \eqref{nopole}. \begin{remark}\label{varyTA} As we will see in the discussion of Case 4, the value of $T$ no longer matters when $\epsilon_3=\epsilon_4$, so we explain our claim in Remark \ref{varyTQ} here.
When $\chi$ is quadratic and $T=T(N)$ tends to $0$, the principal L-function $L(1+(\epsilon_3-\epsilon_4)2iT+\epsilon_4\eta, \chi^{\epsilon_3}\overline{\chi}^{\epsilon_4})$ equals \begin{align*} \zeta(1+(\epsilon_3-\epsilon_4) 2iT +\epsilon_4 \eta) \prod_{p\mid N} (1-\frac{1}{p^{1+(\epsilon_3-\epsilon_4) 2iT +\epsilon_4 \eta}}) \end{align*} and takes large values for small $T$ and $\eta$. Therefore, we have (recall $H_{\epsilon}(0)=8\pi$ for all $\epsilon$) \begin{align*} R_{+1,-1}(0) = \frac{\pi^{-4iT}N^{4iT}}{8N \zeta(2+4iT)} \zeta(1+4iT) \prod_{p\mid N} \frac{1-p^{-(1+4iT)}}{1-p^{-(2+4iT)}} \Big( -16 \pi \log N +O(\log\log N) \Big) \end{align*} and \begin{align*} R_{-1,+1}(0) = \frac{\pi^{4iT}N^{-4iT}}{8N \zeta(2+4iT)} \zeta(1-4iT) \prod_{p\mid N} \frac{1-p^{-(1-4iT)}}{1-p^{-(2-4iT)}} \Big( -16 \pi\log N +O(\log\log N) \Big). \end{align*} Consequently, \begin{align}\label{upndown} R_{+1,-1}(0) + R_{-1,+1}(0) = \frac{-\log N}{4\nu(N) \zeta(2)} \Big( \frac{e^{4iT \log N}}{4iT} + \frac{e^{-4iT \log N}}{-4iT} \Big) + O(\log N (\log\log N)^3). \end{align} If $4|T| \log N = o(1)$, then $2\log N$ approximates the factor in the parentheses with an error up to $O(|T| \log^2 N)$. So, for all $|T|\ll \log^{-1-\delta} N$, we have $R_{+1,-1}(0) + R_{-1,+1}(0) \sim \frac{24}{\pi} \nu^{-1}(N) \log^2 N$. On the other hand, by \eqref{upndown}, $R_{+1,-1}(0) + R_{-1,+1}(0)$ has the trivial bound $O(\nu^{-1}(N)|T|^{-1} \log N)$. So, for any $|T| \gg \log^{-1+\delta}N$, $R_{+1,-1}(0) + R_{-1,+1}(0)$ will be considerably less than $\nu^{-1}(N) \log^2 N$. \end{remark} \noindent \textbf{Case 3.} If $\epsilon_3\neq\epsilon_4$, $\chi$ is quadratic, and $T= 0$, then $L(1+(\epsilon_3-\epsilon_4)2iT+\epsilon_4\eta, \chi^{\epsilon_3}\overline{\chi}^{\epsilon_4})$ has a pole at $\eta=0$.
So $R_{\epsilon_3,\epsilon_4}(\eta)$ equals \begin{align*} R_{\epsilon_3,\epsilon_4}(\eta) = \frac{\pi^{(1-\epsilon_4)\eta} N^{(\epsilon_4-1)\eta}}{8N} H_{\epsilon}(\eta) \prod_{p\mid N}\frac{1-p^{-1-\epsilon_4\eta}}{1-p^{-2-\epsilon_4\eta}}\frac{\zeta(1+\epsilon_4 \eta)} {\zeta(2+\epsilon_4\eta)} \Big( -2 H_{\epsilon}(\eta) \log N +O(\log\log N) \Big), \end{align*} and the Laurent expansion around $\eta=0$ is (note $\zeta(2)=\tfrac{\pi^2}{6}$ and $\nu(N)=N\prod_{p\mid N}(1+p^{-1})$) \begin{multline*} \frac{6}{\nu(N) \pi} \Big( \frac{1}{\epsilon_4 \eta} + (\epsilon_4-1) \log N +O\big((\log\log N)^5 \big) +O(\eta) \Big) \Big( -2 \log N +O(\log\log N) \Big) \\ = \delta_{\epsilon_4=-1} \frac{24}{\nu(N) \pi} \log^2 N +O(\log N (\log\log N)^5). \end{multline*} \noindent \textbf{Case 4.} If $\epsilon_3=\epsilon_4$, we have \begin{multline}\label{milestone3} R_{\epsilon_3,\epsilon_4}(\eta) = \frac{\pi^{(1-\epsilon_4)\eta} N^{(\epsilon_4-1)\eta}}{8N} \frac{L^2(1+\epsilon_3 2iT,\chi^{\epsilon_3}) L^2(1+\epsilon_4 (-2iT+\eta),\overline{\chi}^{\epsilon_4})}{|L(1+2iT,\chi)|^4} H_{\epsilon}(\eta) \\ \prod_{p\mid N}\frac{1-p^{-1-\epsilon_4\eta}}{1-p^{-2-\epsilon_4\eta}} \frac{\zeta(1+\epsilon_4 \eta)} {\zeta(2+\epsilon_4\eta)} \Big( -2 H_{\epsilon}(\eta) \log N +O(\log\log N) \Big), \end{multline} which further equals (note here $H_{\epsilon}(0)=8\pi$ and $H_{\epsilon}'(0)=O(1)$) \begin{multline*} \frac{6}{\nu(N) \pi} \frac{L^2(1+\epsilon_3 2iT,\chi^{\epsilon_3}) L^2(1+\epsilon_4 (-2iT),\overline{\chi}^{\epsilon_4})}{|L(1+2iT,\chi)|^4} \\ \Big( \frac{1}{\epsilon_4 \eta} + (\epsilon_4-1) \log N +O\big((\log\log N)^5 \big) +O(\eta) \Big) \Big( -2 \log N +O(\log\log N) \Big). \end{multline*} With the above calculations, we can rewrite it as \begin{align*} \delta_{\epsilon_4=-1} \frac{24}{\nu(N)\pi} \log^2 N + O(\log N (\log\log N)^9).
\end{align*} \noindent Summing up, we have \begin{align*} I_1 \sim \lim_{\eta\rightarrow 0} \sum_{\epsilon_3,\epsilon_4 =\pm 1} R_{\epsilon_3,\epsilon_4}(\eta) = \frac{24}{\nu(N)\pi} \Big( 1 + \delta_{T=0}\delta_{\chi \text{ quadratic}} \Big) \log^2 N. \end{align*} \section{Proof of Theorem \ref{0diff}} The proof follows the lines of Djankovi\'c and Khan's proof of the $t$-aspect formula \begin{align*} \langle |E_t|^2, |E_t|^2 \rangle_{\mathrm{reg}} - \langle |\Lambda^Y E_t|^2, |\Lambda^Y E_t|^2 \rangle \sim \frac{36}{\pi} \log^2 t, \end{align*} provided that $1<Y\ll \log t$. Here $E_t=E(z,\tfrac{1}{2}+it)$ is the classical Eisenstein series. For an Atkin-Lehner cusp $\mathfrak{a}$, define $e_{\mathfrak{a}}$ to be the main term of $E_{\mathfrak{a}}$ as $z$ approaches each cusp. Explicitly, for $s=\tfrac{1}{2}$ and $\chi^2=\chi_{0,N}$ we have \begin{align}\label{ecusp} e_{\mathfrak{a}}|_{\sigma_{\mathfrak{b}}} = \begin{cases} \sqrt{y} & \text{ if } \mathfrak{b=a,a^*} \\ 0 & \text{ otherwise}. \end{cases} \end{align} \noindent Following the definitions, we have \begin{multline}\label{broad} \langle |E|^2, |E|^2 \rangle_{\mathrm{reg}} - \langle |\Lambda^Y E|^2, |\Lambda^Y E|^2 \rangle = \int_{\mathcal{F}} e_{\mathfrak{a}}^2 \overline{E^Y_{\mathfrak{a}}}^2 + \overline{e_{\mathfrak{a}}}^2 (E^Y_{\mathfrak{a}})^2 + 4|e_{\mathfrak{a}}|^2 |E^Y_{\mathfrak{a}}|^2 d\mu \\ + 2\int_{\mathcal{F}} \Big( |E^Y_{\mathfrak{a}}|^2 \overline{E^Y_{\mathfrak{a}}} e_{\mathfrak{a}} + |E^Y_{\mathfrak{a}}|^2 E^Y_{\mathfrak{a}} \overline{e_{\mathfrak{a}}} \Big) d\mu + \Phi(Y), \end{multline} where $\mathcal{F}=\cup_{\mathfrak{a}} \sigma_{\mathfrak{a}} \mathcal{F}_{\infty}(Y)$ is the cuspidal zone of height $Y$, with scaling matrix $\sigma_{\mathfrak{a}}$, $\Phi(Y)$ is some $N$-independent function in $Y$, and $\mathcal{F}_{\infty}(Y)=\{(x,y) \mid 0< x\leq 1, y>Y \}$. \subsection{Initial cleanings} For primitive $\chi_1$, $\chi_2$, write $\psi=\chi_1\chi_2$.
It is easy to see that if $\chi=\chi_1\overline{\chi_2}$ is even, then so is $\psi$. Denote the completed Eisenstein series $E_{\chi_1,\chi_2}^*(z,s)=\theta_{\chi_1,\chi_2}(s) E_{\chi_1,\chi_2}(z,s)$, with $\theta_{\chi_1,\chi_2}(\tfrac{1}{2})=\Lambda^{-1}(1,\psi)$ as in \eqref{theta}. Write out the completion factor with Young's formula \cite[(9.1)]{Young}: \begin{align*} E_{\mathfrak{a}}(z,\tfrac{1}{2},\chi)= N^{-\frac{1}{2}} E_{\chi_1,\chi_2}(z,\tfrac{1}{2}) = N^{-\frac{1}{2}} \theta^{-1}_{\chi_1,\chi_2}(\tfrac{1}{2}) E^*_{\chi_1,\chi_2}(z,\tfrac{1}{2}), \end{align*} and \begin{align}\label{Eslash} E_{\mathfrak{a}}(\sigma_{\mathfrak{a}} z, \tfrac{1}{2},\chi) = E_{\mathfrak{a}^*}(\sigma_{\mathfrak{a}} z, \tfrac{1}{2},\chi) = \pm N^{-\frac{1}{2}} \theta^{-1}_{\chi_1,\chi_2}(\tfrac{1}{2}) E^*_{1,\psi}(z,\tfrac{1}{2}). \end{align} \noindent When $s=\tfrac{1}{2}$ and $\chi$ is quadratic, $E_{\mathfrak{a}}(z,s,\chi)$ is real-valued, and so is its $Y$-truncation. Then by \eqref{ecusp} \begin{multline*} \int_{\mathcal{F}} e_{\mathfrak{a}}^2 \overline{E^Y_{\mathfrak{a}}}^2 + \overline{e_{\mathfrak{a}}}^2 (E^Y_{\mathfrak{a}})^2 + 4|e_{\mathfrak{a}}|^2 |E^Y_{\mathfrak{a}}|^2 d\mu = \sum_{\mathfrak{b}} \int_Y^{\infty}\int_0^1 \Big( e_{\mathfrak{a}}^2 \overline{E^Y_{\mathfrak{a}}}^2 + \overline{e_{\mathfrak{a}}}^2 (E^Y_{\mathfrak{a}})^2 + 4|e_{\mathfrak{a}}|^2 |E^Y_{\mathfrak{a}}|^2 \Big)\Big|_{\sigma_{\mathfrak{b}}^{-1}} d\mu \\ =\int_Y^{\infty}\int_0^1 \Big( e_{\mathfrak{a}}^2 \overline{E^Y_{\mathfrak{a}}}^2 + \overline{e_{\mathfrak{a}}}^2 (E^Y_{\mathfrak{a}})^2 + 4|e_{\mathfrak{a}}|^2 |E^Y_{\mathfrak{a}}|^2 \Big)\Big|_{\sigma_{\mathfrak{a}}^{-1}} + \Big( e_{\mathfrak{a}}^2 \overline{E^Y_{\mathfrak{a}}}^2 + \overline{e_{\mathfrak{a}}}^2 (E^Y_{\mathfrak{a}})^2 + 4|e_{\mathfrak{a}}|^2 |E^Y_{\mathfrak{a}}|^2 \Big)\Big|_{\sigma_{\mathfrak{a}^*}^{-1}} d\mu. \end{multline*} Since $\mathfrak{a}$ and $\mathfrak{a}^*$ are Atkin-Lehner cusps, their scaling matrices are matrix involutions.
Thus we can write the integral as \begin{align}\label{DK5.1} \int_Y^{\infty}\int_0^1 \Big( e_{\mathfrak{a}}^2 \overline{E^Y_{\mathfrak{a}}}^2 + \overline{e_{\mathfrak{a}}}^2 (E^Y_{\mathfrak{a}})^2 + 4|e_{\mathfrak{a}}|^2 |E^Y_{\mathfrak{a}}|^2 \Big)\Big|_{\sigma_{\mathfrak{a}}} + \Big( e_{\mathfrak{a}}^2 \overline{E^Y_{\mathfrak{a}}}^2 + \overline{e_{\mathfrak{a}}}^2 (E^Y_{\mathfrak{a}})^2 + 4|e_{\mathfrak{a}}|^2 |E^Y_{\mathfrak{a}}|^2 \Big)\Big|_{\sigma_{\mathfrak{a}^*}} d\mu. \end{align} \noindent On the other hand, by \eqref{ecusp} and \eqref{Eslash} we have \begin{align*} (e_{\mathfrak{a}}|_{\sigma_{\mathfrak{a}}})^2 = (e_{\mathfrak{a}}|_{\sigma_{\mathfrak{a}^*}})^2 = (\overline{e_{\mathfrak{a}}}|_{\sigma_{\mathfrak{a}}})^2= (\overline{e_{\mathfrak{a}}}|_{\sigma_{\mathfrak{a}^*}})^2 = |e_{\mathfrak{a}}|_{\sigma_{\mathfrak{a}}}|^2 = |e_{\mathfrak{a}}|_{\sigma_{\mathfrak{a}^*}}|^2 &=y, \\ (E^Y_{\mathfrak{a}}|_{\sigma_{\mathfrak{a}}})^2 = (E^Y_{\mathfrak{a}}|_{\sigma_{\mathfrak{a}^*}})^2 =(\overline{E^Y_{\mathfrak{a}}}|_{\sigma_{\mathfrak{a}}})^2 = (\overline{E^Y_{\mathfrak{a}}}|_{\sigma_{\mathfrak{a}^*}})^2 = |E^Y_{\mathfrak{a}}|_{\sigma_{\mathfrak{a}}}|^2 = |E^Y_{\mathfrak{a}}|_{\sigma_{\mathfrak{a}^*}}|^2 &= (|E^*_{1,\psi}|^2)^Y. \end{align*} Recalling the explicit formula for $\varphi_{\mathfrak{aa}^*}$ and the fact that $\psi$ is even, we can rewrite \eqref{DK5.1} as \begin{align}\label{cleaned} \frac{12}{\Lambda^2(1,\psi)}\int_Y^{\infty}\int_0^1 y |E^*_{1,\psi}(z,\frac{1}{2})|^2 d\mu. \end{align} \subsection{Integral shift and poles of $L$-functions} To calculate \eqref{cleaned}, recall \begin{align*} E^*_{1,\psi}(z,\frac{1}{2}+iT) = 2\sqrt{y}\sum_{n\neq 0} \lambda_{1,\psi}(n,2iT) e(nx) K_{iT}(2\pi |n|y). \end{align*} Since $\psi(-1)=1$, the double integral in \eqref{cleaned} equals \begin{align*} \int_Y^{\infty} 8y \sum_{n\geq 1} \lambda^2_{1,\psi}(n,0) K^2_{0}(2\pi ny) \frac{dy}{y}.
\end{align*} With the integral and the sum interchanged, it equals \begin{align}\label{periodint} 8 (2\pi)^{-1} \sum_{n\geq 1} \frac{\lambda^2_{1,\psi}(n,0)}{n} g(2\pi nY), \end{align} where \begin{align*} g(x):= \int_{x}^{\infty} y K_{0}^2(y) \frac{dy}{y}. \end{align*} The Mellin transform of $g$ is \begin{align*} G(s) &= \int_0^{\infty} g(x)x^s \frac{dx}{x} = \int_0^{\infty} x^s \frac{dx}{x} \int_{x}^{\infty} y K_{0}^2(y) \frac{dy}{y} \\ &= \int_{0}^{\infty} y K_{0}^2(y) \frac{dy}{y} \int_0^y x^s \frac{dx}{x} \\ &= \frac{1}{s} \int_{0}^{\infty} y^{1+s} K_{0}^2(y) \frac{dy}{y}, \end{align*} which according to the Mellin-Barnes formula [GR, 6.576.4] equals \begin{align*} \frac{2^{-2+s}}{s \Gamma(1+s)} \Gamma ^4(\frac{1+s}{2}) . \end{align*} Applying Mellin inversion to $G$, we can rewrite \eqref{periodint} as \begin{align*} 8 (2\pi)^{-1} \sum_{n\geq 1} \frac{\lambda^2_{1,\psi}(n,0)}{n} \frac{1}{2\pi i} \underset{(3)}{\int} G(s) (2\pi n Y)^{-s} ds = \pi^{-1} \frac{1}{2\pi i} \underset{(3)}{\int} \frac{\Gamma ^4(\frac{1+s}{2})}{s \Gamma(1+s)} (\pi Y)^{-s} \sum_{n\geq 1} \frac{\lambda^2_{1,\psi}(n,0)}{n^{1+s}} ds. \end{align*} By the Rankin-Selberg method of period integrals, the Dirichlet series equals \begin{align*} \frac{\zeta(1+s) L^2(1+s, \psi) L(1+s, \chi_{0,N})}{L(2+2s,\chi_{0,N})} = \frac{\zeta^2(1+s) L^2(1+s,\psi)}{\zeta(2+2s)} \prod_{p\mid N}(1+\frac{1}{p^{1+s}})^{-1}. \end{align*} \noindent So \eqref{cleaned} is equal to \begin{align}\label{3pole} \frac{12}{N \pi L^2(1,\psi)} \frac{1}{2\pi i} \underset{(3)}{\int} \frac{\Gamma ^4(\frac{1+s}{2})}{s \Gamma(1+s)} (\pi Y)^{-s} \frac{\zeta^2(1+s) L^2(1+s,\psi)}{\zeta(2+2s)} \prod_{p\mid N}(1+\frac{1}{p^{1+s}})^{-1} ds. \end{align} \begin{remark}\label{explanation} Now we can see why the case where $T=0$ and $\chi$ is quadratic is the most complicated: the integrand has a triple pole at $s=0$, whereas in the other cases it has at most a double pole.
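To spell out this step of the argument, here is our own elaboration (not part of the original text) of the Euler-factor manipulation behind the Dirichlet series identity used above, valid since $\chi_{0,N}$ is the principal character modulo $N$:

```latex
% Factoring the principal-character L-functions (our elaboration):
%   L(1+s,\chi_{0,N}) = \zeta(1+s)\prod_{p\mid N}(1-p^{-1-s}),
%   L(2+2s,\chi_{0,N}) = \zeta(2+2s)\prod_{p\mid N}(1-p^{-2-2s}),
% so that
\frac{L(1+s,\chi_{0,N})}{L(2+2s,\chi_{0,N})}
  = \frac{\zeta(1+s)}{\zeta(2+2s)}
    \prod_{p\mid N}\frac{1-p^{-1-s}}{1-p^{-2-2s}}
  = \frac{\zeta(1+s)}{\zeta(2+2s)}
    \prod_{p\mid N}\Big(1+\frac{1}{p^{1+s}}\Big)^{-1}.
% Multiplying by \zeta(1+s)L^2(1+s,\psi) recovers the stated right-hand side;
% at s=0 the factor 1/s of the integrand contributes a simple pole and
% \zeta^2(1+s)\sim s^{-2} a double pole, giving the triple pole in total.
```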
\end{remark} \noindent Since GRH implies $L(1+s,\psi) \ll \log\log N$ for $\Re s=0$, for each $N$ there exists $c \in (0,\tfrac{\log\log\log N}{\log N})$ such that $L(1+s,\psi) = O(\sqrt{\log N})$ for $\Re s=-c$. Since the integrand of \eqref{3pole} has no poles other than $s=0$ for $\Re s \in (-c,3)$, we have \begin{align*} \eqref{3pole} = \frac{12}{N \pi L^2(1,\psi)} \Big( \underset{s=0}{\text{Res }} H(s) + \frac{1}{2\pi i} \underset{(-c)}{\int} H(s) ds \Big), \end{align*} with \begin{align*} H(s)=\frac{\Gamma ^4(\frac{1+s}{2})}{s \Gamma(1+s)} (\pi Y)^{-s} \frac{\zeta^2(1+s) L^2(1+s,\psi)}{\zeta(2+2s)} \prod_{p\mid N}(1+\frac{1}{p^{1+s}})^{-1}. \end{align*} \noindent For the integral we have \begin{align*} \frac{12}{N \pi L^2(1,\psi)} \frac{1}{2\pi i} \underset{(-c)}{\int} H(s) ds \ll \frac{(\log\log N)^2}{N} Y^c L^2(1-c,\psi) \sum_{p\mid N} p^{-1+c} \ll \frac{\log N (\log\log N)^3}{N} Y^c. \end{align*} On the other hand, since \begin{align*} \zeta(1+s) = \frac{1}{s} + \gamma_0 + \gamma_1 s + O(s^2), \end{align*} for some constants $\gamma_0,\gamma_1$, we have \begin{align}\label{Laurentanal} \underset{s=0}{\text{Res }} H(s) = (2\gamma_1 + \gamma_0^2) K(0) + 2\gamma_0 K'(0) + K''(0), \end{align} where \begin{align*} K(s) = \frac{\Gamma ^4(\frac{1+s}{2})}{\Gamma(1+s)} (\pi Y)^{-s} \frac{ L^2(1+s,\psi)}{\zeta(2+2s)} \prod_{p\mid N}(1+\frac{1}{p^{1+s}})^{-1}. \end{align*} \noindent The following computation is straightforward (recall \eqref{A1}): \begin{align*} K(0) &= 6 L^2(1,\psi) \frac{N}{\nu(N)} \\ K'(0) &= K(0) \Big( -\log Y + 2\frac{L'}{L}(1,\psi) + \sum_{p\mid N}\frac{p^{-1} \log p}{1+p^{-1}} + O(1) \Big) \\ K''(0) &= K(0) \Big( (\frac{K'(s)}{K(s)})'|_{s=0} + (\frac{K'(0)}{K(0)})^2 \Big) \\ &= K(0) \Big( \sum_{p\mid N} \frac{p^{-1} \log^2 p}{1+p^{-1}} + O((\log\log N)^2) \Big).
\end{align*} \noindent Thus \eqref{Laurentanal} yields \begin{align*} \frac{12}{N \pi L^2(1,\psi)} \underset{s=0}{\text{Res }} H(s) = \frac{72}{\nu(N)} \Big( \sum_{p\mid N} \frac{p^{-1} \log^2 p}{1+p^{-1}} + O((\log\log N)^2) \Big). \end{align*} Since \begin{align*} \sum_{p\mid N} \frac{p^{-1} \log^2 p}{1+p^{-1}} \leq \log N \sum_{p\mid N} \frac{\log p}{1+p} \ll \log N \log\log N, \end{align*} \eqref{Laurentanal} is bounded by $\nu^{-1}(N) \log N \log\log N$. Furthermore, \begin{align*} \eqref{3pole} \ll \frac{\log N (\log\log N)^3}{N} Y^c + \frac{\log N \log\log N}{\nu(N)} = o(\frac{\log^2 N}{\nu(N)}). \end{align*} This settles the first part of \eqref{broad}. \subsection{Cauchy's inequality} We are now left to estimate \begin{align*} \int_{\mathcal{F}} \Big( |E^Y_{\mathfrak{a}}|^2 \overline{E^Y_{\mathfrak{a}}} e_{\mathfrak{a}} + |E^Y_{\mathfrak{a}}|^2 E^Y_{\mathfrak{a}} \overline{e_{\mathfrak{a}}} \Big) d\mu = 2\int_{\mathcal{F}} (E^Y_{\mathfrak{a}})^3 e_{\mathfrak{a}} d\mu. \end{align*} By Cauchy's inequality we have \begin{align*} \int_{\mathcal{F}} (E^Y_{\mathfrak{a}})^3 e_{\mathfrak{a}} d\mu \leq \Big( \int_{\mathcal{F}} (E^Y_{\mathfrak{a}})^2 e^2_{\mathfrak{a}} d\mu \Big)^{\frac{1}{2}} \Big( \int_{\mathcal{F}} (E^Y_{\mathfrak{a}})^4 d\mu \Big)^{\frac{1}{2}}. \end{align*} Our work on \eqref{cleaned} has shown that the first integral is $o(\frac{\log^2 N}{\nu(N)})$, and the second integral satisfies the same bound (with big $O$) by assumption, so the proof is complete. \end{document}
\begin{document} \date{\today} \begin{abstract} Let a discrete group $G$ possess two convergence actions by homeomorphisms on compacta $X$ and $Y$. Consider the following question: does there exist a convergence action $G{\curvearrowright}Z$ on a compactum $Z$ and continuous equivariant maps $X\leftarrow Z\to Y$? We call the space $Z$ (and the action of $G$ on it) a {\it pullback} space (action). In this general setting a negative answer follows from a recent result of O.~Baker and T.~Riley [BR]. Suppose, in addition, that the initial actions are relatively hyperbolic, that is, they are non-parabolic and the induced actions on the spaces of distinct pairs are cocompact. Then the existence of the pullback space, if $G$ is finitely generated, follows from \cite{Ge2}. The main result of the paper claims that the pullback space exists if and only if the maximal parabolic subgroups of one of the actions are dynamically quasiconvex for the other one. We provide an example of two relatively hyperbolic actions of the free group $G$ of countable rank for which the pullback action does not exist. We study an analog of the notion of geodesic flow for relatively hyperbolic groups. These results are then used to prove the main theorem. \end{abstract} \title{Similar relatively hyperbolic actions of a group} \bigskip \section{Introduction} This paper is a further development of our project of studying convergence group actions, including the actions of relatively hyperbolic groups. An action of a discrete group $G$ by homeomorphisms of a compactum $X$ is said to have the {\it convergence property} if the induced action on the space of distinct triples of $X$ is properly discontinuous.
We call such an action {\it 3-discontinuous}. The complement $\Lambda_XG$ of the maximal open subset where the action is properly discontinuous is called the {\it limit set} of the action. The action is said to be {\it minimal} if $\Lambda_XG{=}X$. The goal of the paper is to establish similarity properties between different convergence actions of a fixed group. The first motivation for us was the following question: \vskip3pt $\mathsf{Q1}:$ {\sl Given two minimal $3$-discontinuous actions of a group $G$ on compacta $X,Y$ does there exist a $3$-discontinuous action $G{\curvearrowright}Z$ on a compactum $Z$ and continuous equivariant maps $X\leftarrow Z\to Y$?} \vskip3pt We call such an action a {\it pullback} action and the space $Z$ a {\it pullback} space. The answer to this question is negative in general. This follows from a recent result of O.~Baker and T.~Riley \cite{BR}. They exhibited a hyperbolic group $G$ and a free subgroup $H$ of rank three such that the embedding $H\to G$ does not admit an equivariant continuous extension to the hyperbolic boundaries $\partial_\infty H\to\partial_\infty G$ (a so-called {\it Cannon-Thurston map}). It is an easy consequence of this result that the $3$-discontinuous actions $H{\curvearrowright}\partial_\infty H$ and $H{\curvearrowright}\Lambda_{\partial_\infty G}H$ do not possess a pullback 3-discontinuous action (see Section 4). \vskip3pt The question $\mathsf{Q1}$ has a natural modification. Suppose in addition that our actions $G{\curvearrowright}X$, $G{\curvearrowright}Y$ are {\it 2-cocompact}, that is, the quotient of the space of {\it distinct pairs} of the corresponding space by $G$ is compact.
This condition is natural because the class of groups which admit non-trivial 3-discontinuous and 2-cocompact actions (we say {\it 32-actions}) coincides with the class of {\it relatively hyperbolic} groups \cite[Theorem 3.1]{GePo2}. So the following question is the main subject of the paper. \vskip3pt $\mathsf{Q2}:$ {\sl Given two minimal $32$-actions of a group $G$ on compacta $X,Y$ does there exist a $3$-discontinuous action $G{\curvearrowright}Z$ possessing continuous equivariant maps $X\leftarrow Z\to Y$?} \vskip3pt We note that if such an action $G{\curvearrowright}Z$ exists then one can choose it to be 2-cocompact (see Lemma \ref{quotient}). So the pullback action is also of type $(32)$. Recall a few standard definitions. An action on a compactum is called {\it parabolic} if it admits a unique fixed point. For a 3-discontinuous action $G{\curvearrowright}X$ a point $p\in X$ is called {\it parabolic} if it is the unique fixed point for its stabilizer $P={\rm St}_G p$ and ${\rm St}_G p$ is infinite. The subgroup $P$ is called a {\it maximal parabolic} subgroup. We denote by ${\rm Par}_X$ the set of the parabolic points for an action on $X.$ If $G{\curvearrowright}X$ is a $32$-action then the set of all maximal parabolic subgroups is called the {\it peripheral structure} of the action. It consists of finitely many conjugacy classes of maximal parabolic subgroups \cite[Main theorem, a]{Ge1}. If $G$ is finitely generated then an affirmative answer to the question $\mathsf{Q2}$ can be easily deduced from \cite[Map theorem]{Ge2} (see Section 5 below). Furthermore there exists a ``universal'' pullback space in this case. Namely every $32$-action of a finitely generated group $G$ on a compactum $X$ admits an equivariant continuous map from the Floyd boundary $\partial_fG$ of $G$ to $X$.
The space $\partial_fG$ is universal in that it does not depend on the action on $X$ (it depends on a scalar function $f$ rescaling the word metric of the Cayley graph and on a fixed finite set of generators of $G$). However the same method does not work if the group is not finitely generated. One cannot use the Cayley graph since the quotient of the set of its edges by the group is not finite (the action is not {\it cofinite on edges}), a condition which is needed for the construction of the above map. Replacing the Cayley graph by a relative Cayley graph changes the situation since the latter graph depends on the $32$-action of $G$ on a compactum $X$. Indeed the vertex set of the graph contains the parabolic points of the action $G{\curvearrowright}X.$ This problem turns out to be crucial since the answer to the question $\mathsf{Q2}$ is negative in general. We show in the following theorem that a counter-example exists already in the case of free groups of countable rank. \noindent {\bf Theorem I} (Proposition \ref{freeinf}). {\it The free group $F_{\infty}$ of countable rank admits two $32$-actions not having a pullback space.} We note that this is a rare example where certain properties of relatively hyperbolic groups are true for finitely generated groups and false for non-finitely generated (even countable) groups. Our next goal is to provide necessary and sufficient conditions for two $32$-actions of a group to have a common pullback space. The following theorem is the main result of the paper. \noindent {\bf Theorem II} (Theorem \ref{suffcond}, Theorem B). {\it Two $32$-actions of $G$ on compacta $X$ and $Y$ with peripheral structures ${\mathcal P}$ and ${\mathcal Q}$ admit a pullback space $Z$ if and only if one of the following conditions is satisfied: \begin{itemize} \item [1)] $\mathsf C(X,Y):$ every element $P\in{\mathcal P}$ acts $2$-cocompactly on its limit set in $Y$.
\item [2)] $\mathsf C(Y,X):$ every element $Q\in{\mathcal Q}$ acts $2$-cocompactly on its limit set in $X$. \end{itemize} } \bigskip Here are several remarks about the theorem. As an immediate corollary we obtain that $\mathsf C(X,Y)$ is equivalent to $\mathsf C(Y,X).$ This statement seems to be new even in the finitely generated case. It follows from Theorem II that a parabolic subgroup $H$ for a $32$-action of a finitely generated group $G$ acts 2-cocompactly on its limit set for every other such action of $G.$ Theorem I implies that this is not true if $G$ is not finitely generated (see Corollary \ref{equivac}.f). The peripheral structure ${\mathcal S}$ for the pullback action on $Z$ is given by the system of subgroups ${\mathcal S}=\{Q\cap P\ :\ P\in {\mathcal P},\ Q\in{\mathcal Q},\ \vert P\cap Q\vert=\infty\}.$ In particular Theorem II provides a criterion for when the system ${\mathcal S}$ is a peripheral structure for some relatively hyperbolic action of $G$ (Corollary \ref{equivac}.a). The proof of Theorem II uses several intermediate results which occupy the first sections of the paper and which have independent interest. We now briefly describe them. In Section 3 we study an analog of the geodesic flow introduced by M.~Gromov in the case of hyperbolic groups. If the group $G$ admits a 32-action on a compactum $X,$ then there exists a connected graph ${\Gamma}$ such that $G$ acts properly and cofinitely on the set of edges ${\Gamma}^1$ of ${\Gamma}$ \cite[Theorem A]{GePo2}.
The set of vertices ${\Gamma}^0$ of ${\Gamma}$ is ${\rm Par}_X\sqcup G.$ The union $\tilde X=X\cup {\Gamma}^0=X\sqcup G$ admits a Hausdorff topology whose restrictions to $X$ and to $G$ coincide with the initial topology and the discrete topology respectively, and $G$ acts on $\tilde X$ 3-discontinuously \cite[Proposition 8.3.1]{Ge2}. The action is also 2-cocompact (Lemma \ref{quotient}). We call the space $\tilde X$ the {\it attractor sum} of $X$ and $G$. Consider the space of maps $\gamma:{\mathbb Z}\to\tilde X$ for which there exist $m,n\in {\mathbb Z}\cup\{\pm\infty\}$ such that $\gamma$ is constant on one or both (possibly empty) sets $]-\infty, m], [n, +\infty[$ and is geodesic in ${\Gamma}^0$ outside of these sets. We call such a map an {\it eventual geodesic} and denote by $\mathcal{EG}$ the space of all eventual geodesics. We prove in Section 3 (Proposition \ref{closes}) that $\mathcal{EG}$ is closed in the space of maps $\tilde X^{\mathbb Z}$ equipped with the Tikhonov topology. Then we show that the boundary map $\partial :\mathcal{EG}\to \tilde X^2$ is continuous at every non-constant eventual geodesic (Proposition \ref{conv}). In particular we show that every two distinct points of $\tilde X$ can be joined by a geodesic (Theorem \ref{exgeod}). This allows us to consider the convex hull ${\rm Hull}(B)$ of a subset $B\subset \tilde X$, which is the union of the images of all geodesics in $\tilde X$ with endpoints in $B.$ We prove that ${\rm Hull}(B)$ is closed if $B$ is.
We extensively use the so-called {\it visibility property} of the uniformity of the topology of $\tilde X,$ that is, for every two disjoint closed subsets $A$ and $B$ of $\tilde X$ there exists a finite set $F\subset{\Gamma}^1$ such that every geodesic with one endpoint in $A$ and the other in $B$ contains an edge in $F.$ At the end of the section, using the group action, we show that the space $\tilde X$ cannot contain geodesic horocycles, i.e. non-trivial geodesics whose endpoints coincide. In Section 4 we study properties of subgroups of a group acting $3$-discontinuously on a compactum $X$. According to Bowditch \cite{Bo2} a subgroup $H$ of a group $G$ is called {\it dynamically quasiconvex} if for every neighborhood $\mathbf u$ of the diagonal $\Delta X$ of $X^2 = X{\times}X$ the set $\{g{\in}G:$ $(g\Lambda_XH)^2\not\subset\mathbf u\}/H$ is finite. \vskip3pt Using the results of Section 3 we obtain here the following theorem. \noindent {\bf Theorem III} (Theorem \ref{infquas}). {\it For a $32$-action $G{\curvearrowright}X$ a subgroup $H < G$ is dynamically quasiconvex if and only if its action on $\Lambda_XH$ is $2$-cocompact.} The proofs of Theorems I and II are given in Section 5. They use the results of the previous sections. In the last section we provide a list of corollaries of the main results (Corollary \ref{equivac}). \bigskip {\bf Acknowledgements.} During the work on the paper both authors were partially supported by the ANR grant ${\rm BLAN}~2011\ BS01\ 013\ 04$ ``Facettes des groupes discrets''. The first author is also thankful to the CNRS for providing a research fellowship for his stay in France. \section{Preliminaries} \subsection{Entourages and Cauchy-Samuel completions} We recall some well-known notions from general topology. For further references see \cite{Ke}.
Let $X$ be a set. We denote by $\mathsf S^nX$ the quotient of the product space $\underbrace{X{\times}\dots{\times}X}_{\mbox{$n$ times}}$ by the action of the permutation group on $n$ symbols. We regard the elements of $\mathsf S^nX$ as non-ordered $n$-tuples. Let $\Theta^nX$ be the subset of $\mathsf S^nX$ whose elements are the non-ordered $n$-tuples whose components are all distinct. Denote $\Delta^nX=\mathsf S^nX\setminus \Theta^nX.$ An {\it entourage} is a neighborhood of the diagonal $\Delta^2X=\{(x,x)\ :\ x\in X\}$ in $\mathsf S^2X.$ The set of entourages of $X$ is denoted by $\mathrm{Ent}\, X.$ We use bold font to denote entourages. For $\mathbf u\in \mathrm{Ent}\, X$ a pair of points $(x,y)\in X^2$ is called $\mathbf u$-small if $(x,y)\in \mathbf u.$ Similarly a set $A\subset X$ is $\mathbf u$-small if ${\mathsf S}^2A \subset\mathbf u.$ Denote by ${\rm Small}(\mathbf u)$ the set of all $\mathbf u$-small subsets of $X.$ For an entourage $\mathbf u$ we define its power $\mathbf u^n$ as follows: $(x,y)\in\mathbf u^n$ if there exist $x_i\in X$ such that $(x_{i-1}, x_i)\in \mathbf u\ (x_0=x,\ x_n=y,\ i=1,...,n).$ We denote by $\sqrt[n]{\mathbf u}$ an entourage $\mathbf v$ such that $\mathbf v^n\subset\mathbf u.$ A filter ${\mathcal U}$ on $\mathsf S^2X$ whose elements are entourages is called a {\it uniformity} if $$\forall\,\mathbf u\in{\mathcal U}\ \exists\, \mathbf v\in{\mathcal U} : \mathbf v^2\subset\mathbf u.$$ A uniformity ${\mathcal U}$ defines the ${\mathcal U}$-topology on $X$ in which every neighborhood of a point has a $\mathbf u$-small subset containing the point for some $\mathbf u\in{\mathcal U}$. A pair $(X,{\mathcal U})$ of a set $X$ equipped with a uniformity ${\mathcal U}$ is called a {\it uniform} space.
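To fix the conventions, here is a metric illustration of our own (not from the original text), with $X=\mathbb R$ and entourages coming from the usual distance:

```latex
% Example (ours): X = \mathbb{R}. For \varepsilon > 0 put
\mathbf u_\varepsilon = \{(x,y) \in X^2 : |x-y| < \varepsilon\}.
% Chaining n steps of size < \varepsilon reaches distance < n\varepsilon, so
\mathbf u_\varepsilon^{\,n} = \{(x,y) : |x-y| < n\varepsilon\}
  = \mathbf u_{n\varepsilon},
% and \mathbf v = \mathbf u_{\varepsilon/n} is an admissible choice of
% \sqrt[n]{\mathbf u_\varepsilon}, since \mathbf v^n \subset \mathbf u_\varepsilon.
% The filter generated by \{\mathbf u_\varepsilon : \varepsilon>0\} is a
% uniformity (take \mathbf v = \mathbf u_{\varepsilon/2}), namely the usual
% metric uniformity of \mathbb{R}.
```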
A {\it Cauchy filter} ${\mathcal F}$ on the uniform space $(X,{\mathcal U})$ is a filter such that $\forall\, \mathbf u\in {\mathcal U}\ : {\mathcal F}\cap{\rm Small}(\mathbf u)\not=\emptyset.$ A space $X$ is {\it complete} if every Cauchy filter on $X$ contains all neighborhoods of a point. The uniform space $(X, {\mathcal U})$ admits a completion $(\overline X,\overline{\mathcal U})$ called the {\it Cauchy-Samuel} completion, whose construction is the following. Every point of $\overline X$ is a minimal Cauchy filter $\xi$. For every $\mathbf u\in {\mathcal U}$ we define an entourage $\overline{\mathbf u}$ on $\overline X$ as follows: $$\overline{\mathbf u}=\{(\xi,\eta)\in {\mathsf S}^2\overline X : \xi \cap\eta\cap {\rm Small}(\mathbf u)\not=\emptyset\}. \eqno(1)$$ The uniformity $\overline{\mathcal U}$ of $\overline X$ is the filter generated by the entourages $\{\overline{\mathbf u} : \mathbf u\in{\mathcal U}\}.$ We note that the completion $(\overline X, \overline {\mathcal U})$ is {\it exact} \cite[II.3, Th\'eor\`eme 3]{Bourb}: $$\forall\, a, b\in\overline X \ a\not=b\ \exists\, \overline{\mathbf u}\in\overline{\mathcal U} : (a,b)\not\in\overline{\mathbf u}.$$ If $X$ is a compactum then the filter of the neighborhoods of the diagonal $\Delta^2X$ is the unique exact uniformity ${\mathcal U}$ consistent with the topology of $X$, and $X$ equipped with ${\mathcal U}$ is a complete uniform space \cite[II.4, Th\'eor\`eme 1]{Bourb}. \subsection{Properties of ($32$)-actions of groups} Let $X$ be a compactum, i.e. a compact Hausdorff space, and let $G$ be a group acting 3-discontinuously on $X$ (convergence action).
Recall that the limit set, denoted by $\Lambda_XG$ (or $\Lambda G$ if $X$ is fixed), is the set of accumulation (limit) points of any $G$-orbit in $X.$ The action of $G$ on $X$ is said to be {\it minimal} if $X=\Lambda G.$ The action $G{\curvearrowright}X$ is {\it elementary} if $\vert\Lambda G\vert<3$. If the action is not elementary then $\Lambda G$ is a perfect set \cite{Tu2}. If $G$ is non-elementary then $\Lambda G$ is the minimal non-empty closed subset of $X$ invariant under $G$. An elementary action of a group $G$ on $X$ is called {\it parabolic} (or {\it trivial}) if $\Lambda_XG$ is a single point. A point $p\in \Lambda_XG$ is {\it parabolic} if its stabilizer ${\rm St}_Gp$ is a maximal parabolic subgroup fixing $p$. The set of parabolic points for the action on $X$ is denoted by ${\rm Par}_X.$ A parabolic fixed point $p\in \Lambda G$ is called {\it bounded parabolic} if the quotient space $(\Lambda G\setminus \{p\})/{\rm St}_G p$ is compact. We will use an equivalent reformulation of the convergence property in terms of {\it crosses}. A cross $(r, a)^\times\subset X\times X$ is the set $r{\times}X\cup X{\times} a$ where $(r,a)\in X\times X.$ By identifying every $g\in G$ with its graph one can show that $G$ acts 3-discontinuously on $X$ if and only if all the limit points of the closure of $G$ in $X\times X$ are crosses \cite[Proposition P]{Ge1}. The points $a$ and $r$ are called respectively the {\it attractive} and {\it repelling} points (or attractor and repeller).
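A model instance of this cross description, supplied by us for illustration (not from the original text), is a cyclic group of hyperbolic isometries acting on the boundary circle:

```latex
% Example (ours): G = \langle g \rangle acting on X = \partial\mathbb{H}^2,
% where g is a hyperbolic isometry with attracting and repelling fixed
% points g^+, g^- \in X. Then g^n \to g^+ uniformly on compact subsets of
% X \setminus \{g^-\}, so in X \times X the graphs of the g^n accumulate
% exactly on the cross
(g^-, g^+)^\times \;=\; g^- {\times} X \;\cup\; X {\times} g^+ ,
% with attractor a = g^+ and repeller r = g^-; the action is therefore
% 3-discontinuous, with \Lambda_X G = \{g^-, g^+\}.
```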
A point $x\in\Lambda G$ is {\it conical} if there is an infinite set $S \subset G$ such that for every $y\in X\setminus \{x\}$ the closure of the set $\{(s(x),s(y)) : s \in S\}$ in $X^2$ does not intersect the diagonal $\Delta^2 X.$ A group $G$ acting on the space $X$ acts on the set of entourages $\mathrm{Ent}\, X.$ For $\mathbf u\in\mathrm{Ent}\, X$ we denote by $g\mathbf u$ the set $\{(x,y)\in X^2 : g^{-1}(x,y)\in\mathbf u\}$ and by $G\mathbf u$ the $G$-orbit of $\mathbf u.$ We will say that the orbit $G\mathbf u$ is {\it generating} if it generates $\mathrm{Ent}\, X$ as a filter. An action $G{\curvearrowright}X$ is {\it 2-cocompact} if $\Theta^2X/G$ is compact. Suppose that a group $G$ admits a 3-discontinuous and 2-cocompact non-parabolic minimal action ($32$-action) on a compactum $X$. Then every point of $X$ is either bounded parabolic or conical \cite[Main Theorem]{Ge1}. P.~Tukia showed that if $X$ is metrisable then the converse statement is true \cite[Theorem 1C, (b)]{Tu2}. Let ${\Gamma}$ be a graph. We denote by ${\Gamma}^0$ and ${\Gamma}^1$ the sets of vertices and edges of ${\Gamma}$ respectively. Recall that an action of $G$ on ${\Gamma}$ is {\it proper on edges} if the stabilizer ${\rm St}_{G}e$ of every edge $e$ of ${\Gamma}$ is finite. The action $G{\curvearrowright}{\Gamma}$ is called {\it cofinite} if $\vert{\Gamma}^1/G\vert < \infty$. According to B.~Bowditch \cite{Bo1} a graph ${\Gamma}$ is called {\it fine} if for any two vertices the set of simple arcs of fixed length joining them is finite. It is shown in \cite[Theorem 3.1]{GePo2} that if $G$ admits a non-parabolic $32$-action on a compactum $X$ then Bowditch's condition of relative hyperbolicity is satisfied.
This means that there exists a connected fine and hyperbolic graph ${\Gamma}$ acted upon by $G$ cofinitely and properly on edges. Every vertex of ${\Gamma}$ is either an element of $G$ or belongs to the set of parabolic points ${\rm Par}_X$. Consider the union of the two topological spaces $\tilde X=X\sqcup G=X\cup {\Gamma}^0$ where $G$ is equipped with the discrete topology. By \cite[Proposition 8.3.1]{Ge2} $\tilde X$ admits a unique compact Hausdorff topology whose restrictions to $X$ and $G$ coincide with the original topologies of $X$ and $G$, and the action on $\tilde X$ is 3-discontinuous (for the description of this topology see Proposition \ref{pullback}). Following \cite{Ge2} we call the space $\tilde X$ the {\it attractor sum} of $X$ and ${\Gamma}$. The action on $\tilde X$ is also 2-cocompact. Indeed by assumption the action $G{\curvearrowright}\Theta^2 X$ is cocompact. So there exists a compact fundamental set $K\subset \Theta^2 X$. Hence $\tilde K=K\cup (\{1\} \times (\tilde X\setminus\{1\}))$ is a compact fundamental set for the action on $\Theta^2\tilde X.$ Therefore the action $G{\curvearrowright}\tilde X$ is a $32$-action. We summarize all these facts in the following lemma. \begin{lem}\label{cocomext} (\cite{Ge2}, \cite{GePo2}). Let $G$ admit a non-parabolic $32$-action on a compactum $X$. Then there exists a connected, fine and hyperbolic graph ${\Gamma}$ acted upon by $G$ properly and cofinitely on edges. Furthermore $G$ acts 3-discontinuously and 2-cocompactly on the attractor sum $\tilde X=X\cup{\Gamma}^0$, and ${\Gamma}^0=G\cup {\rm Par}_X$ is the set of all non-conical points for the action on $\tilde X$.
\end{lem} We will consider entourages $\mathbf u\in \mathrm{Ent}\,\tilde X$ on the attractor sum $\tilde X$ as well as their restrictions to ${\Gamma}$ and to $X.$ Following Bowditch \cite{Bo1}, for a fixed group $G$ a $G$-invariant set $M$ is called a {\it connected $G$-set} if there exists a connected graph ${\Gamma}$ such that $M={\Gamma}^0$ and the action $G{\curvearrowright}{\Gamma}^1$ on edges is proper and cofinite. Recall some more definitions. An entourage $\mathbf u$ on a connected $G$-set $M$ is called {\it perspective} if for any pair $(a,b)\in M\times M$ the set $\{g\in G\ :\ g(a,b)\not\in\mathbf u\}$ is finite. An entourage $\mathbf u$ given on a connected $G$-set $M$ is called a {\it divider} if there exists a finite set $F\subset G$ such that $({\mathbf u}_F)^2\subset \mathbf u$ where $\mathbf u_F=\cap_{f\in F}f\mathbf u.$ We say that the uniformity ${\mathcal U}$ of a compactum $\tilde X$ is {\it generated by an entourage $\mathbf u$} if it is generated as a filter by the orbit $G\mathbf u$. \begin{lem}\label{perdiv}\cite[Proposition 8.4.1]{Ge2}. If a group $G$ acts $3$-discontinuously and $2$-cocompactly on a compact space $X$ then the uniformity ${\mathcal U}$ on the compactum $\tilde X=X\cup G$ is generated by a perspective divider $\mathbf u.$ \end{lem} The following result describes the opposite direction, which starts from a perspective divider on a connected $G$-set $M={\Gamma}^0$ and gives a $32$-action on the compactum $\tilde X=X\cup {\Gamma}^0$ where $X$ is a ``boundary'' of ${\Gamma}$. \begin{dfn}\label{visprop} Let $e\in{\Gamma}^1$ be an edge.
A pair of vertices $(a,b)$ of ${\Gamma}$ is called $\mathbf u_e$-small if there exists a geodesic in ${\Gamma}$ with endpoints $a$ and $b$ which does not contain $e.$ A uniformity ${\mathcal U}^0$ on $M={\Gamma}^0$ has the {\it visibility property} if for every entourage $\mathbf u^0\in {\mathcal U}^0$ there exists a finite set of edges $F\subset {\Gamma}^1$ such that $\mathbf u_F=\cap\{\mathbf u_e\ \vert\ e\in F\}\subset\mathbf u^0.$ \end{dfn} \noindent The following lemma describes the completion $\tilde X$ mentioned above and will often be used in the paper. \begin{lem}\label{compldense} \cite[Propositions 3.5.1, 4.2.2]{Ge2}. Suppose that a group $G$ acts on a connected graph ${\Gamma}$ properly and cofinitely on edges. Let ${\mathcal W}^0$ be a uniformity on ${\Gamma}^0$ generated by a perspective divider. Then ${\mathcal W}^0$ has the visibility property. Furthermore the Cauchy-Samuel completion $(Z,{\mathcal W})$ of the uniform space $({\Gamma}^0, {\mathcal W}^0)$ admits a $32$-action of $G$. \end{lem} Let ${\Gamma}$ be a connected graph. We now recall the definition of the {\it Floyd completion (boundary)} of ${\Gamma}$ mentioned in the Introduction (see also \cite{F}, \cite{Ka}, \cite{Ge2}). A function $f:{\mathbb N}\to{\mathbb R}$ is said to be a (Floyd) {\it scaling function} if $\sum_{n\geqslant0}f_n<\infty$ and there exists a positive $\lambda$ such that $1\geqslant f_{n+1}/f_n\geqslant\lambda$ for all $n{\in}{\mathbb N}$. Let $f$ be a scaling function and let ${\Gamma}$ be a connected graph. For each vertex $v{\in}{\Gamma}^0$ we define on ${\Gamma}^0$ a path metric $\delta_{v,f}$ for which the length of every edge $e\in {\Gamma}^1$ is $f(d(v, e))$.
We say that $\boldsymbol\delta_{v,f}$ is the {\it Floyd metric} (with respect to the scaling function $f$) {\it based} at $v$. When $f$ and $v$ are fixed we write $\boldsymbol\delta$ instead of $\boldsymbol\delta_{v,f}$. One verifies that $\boldsymbol\delta_u/\boldsymbol\delta_v\geqslant\lambda^{\mathsf d(u,v)}$ for $u,v\in\Gamma^0$. Thus the Cauchy completion $\overline\Gamma_f$ of $\Gamma^0$ with respect to $\boldsymbol\delta_{v,f}$ does not depend on $v$. The {\it Floyd boundary} is the space $\partial_f\Gamma =\overline\Gamma_f\setminus \Gamma^0$. Every $\mathsf d$-isometry of $\Gamma$ extends to a homeomorphism $\overline\Gamma_f\to\overline\Gamma_f$. The Floyd metrics extend continuously onto the Floyd completion $\overline\Gamma_f$. In the particular case when ${\Gamma}$ is a Cayley graph of $G$ we denote its Floyd boundary by $\partial_fG$, or by $\partial G$ if $f$ is fixed.

\section{Geodesic flows on graphs}

In this section we study the properties of geodesics on a class of graphs. Let ${\Gamma}$ be a connected graph. We will assume that ${\Gamma}^0\subset \widetilde X$ for a compactum $\widetilde X.$ Let ${\mathcal U}$ be the uniformity consistent with the topology of $\widetilde X.$ Since $\widetilde X$ is Hausdorff the uniformity ${\mathcal U}$ is exact. In this section we will always assume the following.

\noindent {\bf Assumption.} The uniformity ${\mathcal U}$ has the visibility property on ${\Gamma}^0.$

Most of the material of this section does not involve any group action. However, the only known example in which the above assumption is satisfied is the case when a compactum $X$ admits a $32$-action of a group $G$ and $\widetilde X=X\cup {\Gamma}$ is the attractor sum (see Lemma \ref{compldense}).
A path in ${\Gamma}$ is a map $\gamma : \mathbb Z\to {\Gamma}$ such that $\gamma\{n, n+1\}$ is either an edge of ${\Gamma}$ or a point $\gamma(n)=\gamma(n+1).$ A path $\gamma$ can contain a ``stop'' subpath, i.e. a subset $J$ of consecutive integers such that $\gamma\vert_J\equiv{\rm const}.$ For a finite subset $I\subset\mathbb Z$ of consecutive integers we define the boundary $\partial(\gamma\vert_I)$ to be $\gamma(\partial I)$. We extend naturally the meaning of $\partial\gamma$ to half-infinite and bi-infinite paths in ${\Gamma}^0\subset\widetilde X$ in the case when the corresponding half-infinite branches of $\gamma$ converge to points in $\widetilde X$. The latter means that for every entourage $\mathbf v\in{\mathcal U}$ the set $\gamma\vert_{[n,\infty[}$ is $\mathbf v$-small for all sufficiently big $n$.

\begin{lem}\label{conv} Every half-infinite geodesic ray $\gamma:[0, \infty[\to{\Gamma}$ converges to a point in $\widetilde X.$ \end{lem}

\par \noindent{\it Proof: } Fix an entourage $\mathbf v\in{\mathcal U}.$ By the visibility property there exists a finite set of edges $F\subset\Gamma^1$ such that $\displaystyle\mathbf u_F=\bigcap_{e\in F}\mathbf u_e\subset \mathbf v.$ Since $\gamma$ is a geodesic, the ray $\gamma\vert_{[n_0,\infty[}$ does not contain any edge of $F$ for some $n_0\in \mathbb N$.
So $\gamma\vert_{[n_0,\infty[}$ is $\mathbf u_F$-small and therefore $\mathbf v$-small. $\square$

\begin{dfn} \label{evgeod} A path $\gamma:I\to {\Gamma}$ is an {\it eventual geodesic} if it is either a constant map, or each of its maximal stop-paths is infinite and outside of its maximal stop-paths $\gamma$ is a geodesic in ${\Gamma}^0.$ The set of eventual geodesics in ${\Gamma}$ is denoted by ${\rm EG}({\Gamma}).$ \end{dfn}

The image of every eventual geodesic in ${\Gamma}$ is either a geodesic (one-ended, two-ended or finite) or a point.

\begin{prop}\label{closes} The space ${\rm EG}({\Gamma})$ is closed in the space of maps $\widetilde X^{\mathbb Z}$ equipped with the Tikhonov topology. \end{prop}

\par \noindent{\it Proof: } Let $\alpha$ belong to the closure $\overline{{\rm EG}({\Gamma})}$ of ${\rm EG}({\Gamma})$ in $\widetilde X^{\mathbb Z}$. If $\alpha$ is a constant then $\alpha\in {\rm EG}({\Gamma})$ and there is nothing to prove. So we assume that $\alpha\in\overline{{\rm EG}({\Gamma})}$ is a non-trivial map. The proof follows from the following three lemmas which have their own interest.

\noindent {\bf Lemma \ref{closes}.1.}\ {\it If $\alpha(n)\not=\alpha(n+1)$ then there exists a neighborhood $O$ of $\alpha$ in $\widetilde X^{\mathbb Z}$ such that $\gamma(n)=\alpha(n)$, $\gamma(n+1)=\alpha(n+1)$ for every $\gamma\in O \cap {\rm EG}({\Gamma})$. In particular $\{\alpha(n),\alpha(n+1)\}$ is an edge of ${\Gamma}.$}

{\it Proof of the Lemma.} Since ${\mathcal U}$ is an exact uniformity there exists $\mathbf u\in {\mathcal U}$ such that $(\alpha(n),\alpha(n+1))\notin\mathbf u$ (see Section 2.1).
Since ${\mathcal U}$ is a uniformity there exists $\mathbf v\in{\mathcal U}$ such that $\mathbf v^3\subset\mathbf u$. By the visibility property there exists a finite set $F\subset {\Gamma}^1$ such that $\mathbf u_F\subset\mathbf v$. Let $F^0$ denote the set of vertices of the edges in $F$. Let $O_n$ be a $\mathbf v$-small neighborhood of $\alpha(n)$ disjoint from $F^0\setminus\{\alpha(n)\}$, and let $O_{n+1}$ be a neighborhood of $\alpha(n+1)$ defined in the same way. Set $O=\{\gamma\in\widetilde X^{\mathbb Z}:\gamma(n)\in O_n,\ \gamma(n+1)\in O_{n+1}\}$. If $\gamma\in O$ then $(\gamma(n), \gamma(n+1))\not\in \mathbf v$ and $\gamma(n)\not=\gamma(n+1).$ If in addition $\gamma\in {\rm EG}({\Gamma})$ then $\gamma\vert_{[n, n+1]}$ is a geodesic and necessarily $\{\gamma(n),\gamma(n+1)\}\in F$. By the definition of $O_n$ and $O_{n+1}$ we obtain $\gamma(n)=\alpha(n)$, $\gamma(n+1)=\alpha(n+1)$. $\square$

\noindent {\bf Lemma \ref{closes}.2.} {\it Every maximal stop for $\alpha$ is infinite.}

\noindent {\it Proof of the Lemma.} By contradiction assume that $J\subset\mathbb Z$ is a finite maximal stop for $\alpha$. By Lemma \ref{closes}.1 this means that, for some $m,n\in\mathbb Z$ with $n-m\geqslant3$, one has $\alpha(m)\ne\alpha(m+1)$, $\alpha(n-1)\ne\alpha(n)$ and $\alpha(k)=b\in\Gamma^0$ for $m<k<n$. Moreover, by Lemma \ref{closes}.1 there exists a neighborhood $O$ of $\alpha$ in $\widetilde X^{\mathbb Z}$ such that if $\gamma \in O\cap{\rm EG}({\Gamma})$ then $\gamma(m)=\alpha(m)$, $\gamma(m+1)=\alpha(m+1)$, $\gamma(n-1)=\alpha(n-1)$, $\gamma(n)=\alpha(n)$. Since $\alpha$ belongs to the closure of ${\rm EG}({\Gamma})$ such a $\gamma$ does exist.
But this implies that $\gamma(m+1)=\gamma(n-1)=b$. As $\gamma$ is an eventual geodesic the interval $\{k\in\mathbb Z:m < k < n\}$ is a stop for $\gamma$, so either $\alpha(m)=\gamma(m)=b$ or $\alpha(n)=\gamma(n)=b$, contradicting the maximality of $J.$ $\square$

\noindent {\bf Lemma \ref{closes}.3.} {\it If $J$ is a finite subset of consecutive integers and $J$ does not contain stops for $\alpha$ then $\alpha|_J$ is a geodesic.}

\noindent {\it Proof of the Lemma.} By the hypothesis each two consecutive values of $\alpha|_J$ are distinct. Since $J$ is finite, by Lemma \ref{closes}.1 there exists a neighborhood $O$ of $\alpha$ in $\widetilde X^{\mathbb Z}$ such that if $\gamma \in O \cap {\rm EG}({\Gamma})$ then $\gamma|_J=\alpha|_J$. Since $\alpha\in\overline{{\rm EG}({\Gamma})}$ such a $\gamma$ does exist. $\square$

It follows from Lemmas \ref{closes}.2 and \ref{closes}.3 that $\alpha\in{\rm EG}({\Gamma})$. The proposition is proved.

\begin{cor}\label{clbal} For every $n\geqslant 0$ the ball $\mathsf B_n q$ of radius $n$ in $\Gamma^0$ centered at any $q\in\Gamma^0$ is closed in $\widetilde X$. \end{cor}

\noindent {\it Proof.} Let $p\in\overline{\mathsf B_n q} \setminus \mathsf B_n q$. For a neighborhood $O$ of $p$ in $\widetilde X$ let $\gamma_O:\{k\in\mathbb Z:0{\leqslant}k{\leqslant}n\}\to\Gamma^0$ be a geodesic joining $q$ with a point in $O$. We make each such $\gamma_O$ an eventual geodesic by extending it by constants. Since $\widetilde X^{\mathbb Z}$ is compact there is an accumulation point $\gamma$ for the set of all $\gamma_O$. By Proposition \ref{closes} $\gamma\in{\rm EG}({\Gamma})$. Since the projections $\widetilde X^{\mathbb Z}\to\widetilde X$ are continuous and $\widetilde X$ is Hausdorff we have $\gamma(0)=q$, $\gamma(n)=p$.
Since $\gamma$ is an eventual geodesic we have $p\in\Gamma^0$. $\square$

\begin{cor}\label{opnev} For every finite path $l=\{a_1, ..., a_n\}\subset {\Gamma}$ the set $$({\rm EG}({\Gamma}))_l=\{\gamma\in {\rm EG}({\Gamma}) : \gamma(I)=l,\ I\subset \mathbb Z, \ \gamma(-\infty)\not=a_1,\ \gamma(\infty)\not=a_n\}$$ is open. \end{cor}

\par \noindent{\it Proof: } By the proof of Proposition \ref{closes} every $\gamma\in({\rm EG}({\Gamma}))_l$ admits a neighborhood $O\subset{\rm EG}({\Gamma})$ such that $\forall\hspace*{0.5mm} \lambda\in O\ \ \gamma\vert_l=\lambda\vert_l$. $\square$

By Lemma \ref{conv}, for a half-infinite geodesic ray $\gamma:\mathbb Z_{>0}\to {\Gamma}$ the limit $\displaystyle\lim_{t\to +\infty}\gamma(t)$ exists. The following proposition refines this statement.

\begin{prop}\label{bcont} The boundary map $\displaystyle \partial :{\rm EG}({\Gamma})\to \widetilde X^2$, where $\displaystyle \partial :\gamma\to \partial\gamma=\{\lim_{t\to-\infty}\gamma(t), \lim_{t\to+\infty}\gamma(t)\}$, is continuous at every non-constant eventual geodesic. \end{prop}

\par \noindent{\it Proof: } For $\alpha\in {\rm EG}({\Gamma})$ we denote by $\alpha_{-\infty}$ and $\alpha_{+\infty}$ the limits $\displaystyle\lim_{t\to-\infty}\alpha(t)$ and $\displaystyle\lim_{t\to+\infty}\alpha(t)$ respectively.
We will prove that both coordinate functions $\displaystyle\pi_- : \alpha\to \alpha_{-\infty}$ and $\pi_+ : \alpha\to\alpha_{+\infty}$ are continuous at every non-constant eventual geodesic $\alpha.$ Fix $\alpha\in {\rm EG}({\Gamma})$ and suppose that $a=\alpha_{+\infty}.$ We need to prove that for every small neighborhood $U$ of $a$ there exists an open neighborhood $N_\alpha$ of $\alpha$ in ${\rm EG}({\Gamma})$ such that one of the endpoints of every eventual geodesic belonging to $N_\alpha$ is in $U.$

\underline{Case 1.} The values $\alpha(n)\ (n\geq 0)$ are all distinct. Let $U$ be a closed neighborhood of $a$ such that $b=\alpha(0)\not\in U.$ Choose a ``smaller'' closed neighborhood $V$ of $a$ such that the interior $\accentset{\circ}{U}$ of $U$ contains $V.$ By the exactness of the uniformity ${\mathcal U}$ there exists an entourage $\mathbf v\in{\mathcal U}$ such that $\mathbf v\cap ((\widetilde X\setminus U)\times V)=\emptyset.$ Then, by the visibility property, we have $\mathbf u_F=\bigcap_{e\in F} \mathbf u_e\subset \mathbf v$ for some finite set $F$ of edges of ${\Gamma}.$ So every eventual geodesic $\gamma$ passing from $b\in U'=\widetilde X\setminus U$ to $V$ contains an edge from $F.$ Denote by $d$ the diameter (in the graph distance) ${\rm diam}\, F={\rm max} \{d(a_i, a_j)\ :\ a_i, a_j\in F^0\}$ of the set of vertices $F^0$ of $F.$ Since ${\Gamma}$ is connected and $F$ is finite, $d$ is finite.
Since $\{\alpha(n)\}_n$ converges to $a$ there exists $m$ such that $\alpha(m+i) \in \accentset{\circ}{V}\cap {\Gamma}^0$ for $i=0,...,d+1.$ Consider the following set: $$N_\alpha=\{\gamma\in{\rm EG}({\Gamma})\ :\ \{\gamma(m),...,\gamma(m+d+1)\}\subset \accentset{\circ}{V}\cap{\Gamma}^0\}. \eqno(1)$$ We have $N_\alpha\not=\emptyset$ as $\alpha\in N_\alpha.$ Furthermore $N_\alpha$ is open in ${\rm EG}({\Gamma}).$ Indeed $\accentset{\circ}{V}\cap{\Gamma}^0$ is open in ${\Gamma}^0$, so the condition $\gamma(t)\in \accentset{\circ}{V}\cap{\Gamma}^0$ defines an open subset of ${\rm EG}({\Gamma})$, and $N_\alpha$ is the intersection of finitely many such subsets.

Let $\gamma\in N_\alpha.$ We claim that $\gamma$ cannot quit ${U}\cap {\Gamma}$. Indeed if not then $\gamma$ contains a finite subpath $\gamma'$ which passes from $U'=\widetilde X\setminus U$ to $V$, then passes through at least $d+1$ distinct consecutive vertices $\gamma(i)\in \accentset{\circ}{V}\ (i=m,...,m+d+1),$ and afterwards goes back to $U'.$ Assuming $\gamma'$ to be the minimal subpath of $\gamma$ having these properties, by Lemma \ref{closes}.2 we obtain that $\gamma'$ is a geodesic of length at least $d+1.$ Then there is a couple $(i,j)$ of indices such that $i \leq m,\ j\geq m+d+1,\ \{q_i=\gamma'(i),q_j=\gamma'(j)\}\subset F^0$ and $d(q_i, q_j) \geq d+1 > {\rm diam}\, F$, which is impossible.

\underline{Case 2.} $a\in {\Gamma}^0$. A similar argument works in this case too. We can assume that $a=\alpha(0)$.
Up to re-parametrisation of $\alpha$ we can assume that $\forall\hspace*{0.5mm} t\geq 0\ :\ \alpha(t)=a$ and $\alpha(-1)=b\not=a.$ Consider two closed neighborhoods $U$ and $V$ of $a$ such that $V\subset \accentset{\circ}{U}$ and $b\not\in U$. As above there exists a finite set $F\subset{\Gamma}^1$ such that every eventual geodesic passing from $V$ to $U'=\widetilde X\setminus U$ (or vice versa) contains an edge of $F$. Let $d={\rm diam}\, F.$ For $k\geq d+1$ put $$N_\alpha= \{\gamma\in {\rm EG}({\Gamma})\ :\ \gamma(-2)=\alpha(-2),\ \gamma(-1)=\alpha(-1),\ \{\gamma(0), ..., \gamma(k)\}\subset\accentset{\circ}{V} \}. \eqno(1')$$ The set $N_\alpha$ is non-empty as $\alpha\in N_\alpha$. We have $N_\alpha=N_1\cap N_2$ where $N_1=\{\gamma\in{\rm EG}({\Gamma})\ :\ \gamma(-2)=\alpha(-2),\ \gamma(-1)=\alpha(-1)\}$ and $N_2=\{\gamma\in{\rm EG}({\Gamma})\ :\ \{\gamma(0), ..., \gamma(k)\}\subset\accentset{\circ}{V} \}.$ Since $\alpha(-2)\not =\alpha(-1)$, by Corollary \ref{opnev} $N_1$ is open. The set $N_2$ is also open, hence so is $N_\alpha.$ Every $\gamma\in N_\alpha$ passes from the point $b$ to $V$ and admits a geodesic sub-interval in $V$ of length at least $d+1.$ So by the argument of Case 1 we have that $\gamma_{+\infty}\in U.$

We have proved that the coordinate functions $\pi_-$ and $\pi_+$ are continuous.
Therefore $\partial=(\pi_-,\pi_+)$ is continuous at every non-trivial eventual geodesic. $\square$

\noindent {\bf Remark.} Note that the map $\partial$ is not continuous at any constant eventual geodesic $\alpha(t)=a\in {\Gamma}^0\ (t\in \mathbb Z).$ Indeed the eventual geodesics of the form $\displaystyle\{\gamma\in{\rm EG}({\Gamma})\ :\ \gamma\vert_{]-\infty,n]}\equiv a, \ \gamma_{+\infty}=b\not=a\}$ converge to $\alpha$ in the Tikhonov topology as $n\to \infty$, but $\pi_+(\gamma)\not=a.$

\begin{theor}\label{exgeod} For every two distinct points $p,q\in \widetilde X$ there exists an eventual geodesic $\gamma$ such that $\gamma_{-\infty}= p$ and $\gamma_{+\infty} = q$. \end{theor}

\par \noindent{\it Proof: } If both $p$ and $q$ belong to $\Gamma^0$ then the assertion follows from the connectedness of $\Gamma$. So we suppose that at least one of the points does not belong to $\Gamma^0$. We first consider the case when neither $p$ nor $q$ belongs to $\Gamma^0$. Then we explain how to modify the argument when exactly one of the points belongs to $\Gamma^0$.

Let $P_0,Q_0$ be two closed disjoint neighborhoods of $p,q$ respectively. By the exactness of ${\mathcal U}$ there exists an entourage $\mathbf u_0\in{\mathcal U}$ such that $\mathbf u_0\cap (P_0{\times}Q_0)=\emptyset$. By the visibility property there exists a finite set $F{\subset}\Gamma^1$ such that $\mathbf u_F\subset \mathbf u_0$. Let $\{(a_i, b_i):i{=}0,1,\dots,m\}$ be the list of {\bf ordered} pairs such that $\{a_i, b_i\}{\in}F$. For closed neighborhoods $P,Q$ of $p,q$ contained in $P_0,Q_0$ respectively let $$W_{i,P,Q} =\{\gamma\in{\rm EG}({\Gamma}):\gamma_{-\infty}\in P,\gamma_\infty\in Q,\gamma(0)= a_i,\gamma(1) = b_i\}.
\eqno(2)$$ We claim that $W_{i,P,Q}$ is closed. Indeed suppose $\gamma\notin W_{i,P,Q}$. If $\gamma$ is not a constant then Proposition \ref{closes} implies that the opposite of every condition in (2) defines an open subset in ${\rm EG}({\Gamma}).$ Then their finite intersection is open. If $\gamma\equiv c$ is a constant then either $\gamma(0)\not=a_i$ or $\gamma(1)\not= b_i$. So there is an open neighborhood $N_\gamma\subset{\rm EG}({\Gamma})$ such that every $\beta\in N_\gamma$ satisfies $\beta(0)\ne a_i$ or $\beta(1)\ne b_i$ respectively. In each case $N_\gamma\cap W_{i,P,Q}=\emptyset$ and the claim follows.

There exists $i\in\{0,...,m\}$ such that $W_{i,P,Q}$ is nonempty for all $P$ and $Q$. Indeed if not then for every $i\in\{0,...,m\}$ there exist neighborhoods $P_i$ and $Q_i$ of $p$ and $q$ such that $W_{i,P_i,Q_i}=\emptyset$. Then there would be no geodesic between the closed non-empty subsets ${\bigcap}_{i=0}^mP_i$ and ${\bigcap}_{i=0}^mQ_i$, which is impossible. So for some $i$, say $i=0$, we have $W_{0,P,Q}\not=\emptyset$ for all $P$ and $Q.$ Hence $\{W_{0,P,Q}\}$ is a centered family of non-empty closed sets. Since ${\rm EG}({\Gamma})$ is compact, there exists $\gamma\in{\rm EG}({\Gamma})$ such that $\gamma\in\bigcap_{P,Q}W_{0,P,Q}.$ By definition of $W_{0,P,Q}$ the point $\gamma_{+\infty}$ belongs to any neighborhood of $q$. Hence $\gamma_{+\infty} = q$, and similarly $\gamma_{-\infty} = p$. The assertion is proved in the case $p\notin \Gamma^0$, $q\notin\Gamma^0$.
If one of the points, say $p$, belongs to $\Gamma^0$ then we modify the definition of $W_{i,P,Q}$ as follows: $W_{i,P,Q} = \{\gamma\in {\rm EG}({\Gamma}) : \gamma_{-\infty} = \gamma(-n_i) = p,\ \gamma_{+\infty} \in Q,\ \gamma(0) = a_i,\ \gamma(1) = b_i\}$ where $n_i$ is the distance between $p$ and $a_i$. Then the above argument works without any change. $\square$

\noindent Let $B\subset \widetilde X$ be a closed set. Define its (eventual) geodesic hull as follows: $$ {\rm Hull}(B)=\bigcup\{{\rm Im}\,\gamma : \gamma\in{\rm EG}({\Gamma}),\ \partial\gamma\subset B\}. \eqno(3)$$

\noindent The following lemma and its corollary will be used further on.

\begin{lem} \label{closgeod} The set $${\mathcal C}=\{\gamma\in{\rm EG}({\Gamma})\ :\ \partial\gamma\in B^2\} \eqno(4)$$ \noindent is closed in ${\rm EG}({\Gamma}).$ \end{lem}

\begin{cor} \label{hull} If $B\subset\widetilde X$ is closed then ${\rm Hull}(B)$ is a closed subset of $\widetilde X$. \end{cor}

\noindent {\it Proof of the corollary.} The projection $\pi:\gamma\in {\mathcal C}\to \gamma(0)$ is continuous by the definition of the Tikhonov topology. Since $\widetilde X^{\mathbb Z}$ is compact and ${\mathcal C}$ is closed, the image $\pi({\mathcal C})$ is compact, hence closed. $\square$

\noindent {\it Proof of the lemma.} Let us show that ${\rm EG}({\Gamma})\setminus {\mathcal C}$ is open. Let $\alpha\in{\rm EG}({\Gamma})\setminus {\mathcal C}$. Suppose first that $\alpha$ is not a constant. Then $\partial\alpha\not\in B^2$. Since $B^2$ is closed in $\widetilde X^2$ there exists an open neighborhood $W$ of $\partial\alpha$ such that $W\cap B^2=\emptyset.$ By Proposition \ref{bcont} $N_\alpha=\partial^{-1}(W)$ is an open subset of ${\rm EG}({\Gamma})$, so we are done in this case.
Suppose now that $\alpha$ is a constant: $\alpha\equiv a\in\widetilde X\setminus {\rm Hull}(B)$. Choose a closed neighborhood $U$ of $a$ disjoint from $B$. By the visibility property there exists a finite set $F\subset{\Gamma}^1$ of finite diameter $d$ such that every eventual geodesic passing from $U$ to $B$ passes through $F.$ Suppose by contradiction that every neighborhood $N_\alpha$ of $\alpha$ in ${\rm EG}({\Gamma})$ intersects ${\mathcal C}$. Note that every such $N_\alpha$ contains a non-trivial eventual geodesic $\gamma\in {\mathcal C}$, since otherwise $U\cap B\not=\emptyset$, which is not possible. So for any $k\in \mathbb Z_+$ we can find a non-trivial $\gamma\in {\mathcal C}$ such that $\partial\gamma\in B^2$ and $\gamma(i)\in U\ (i=0,1,2,...,k)$. Then $\gamma$ contains an edge from $F$ on its way from $B$ to $U$ and it meets $F$ again on its way back from $U$ to $B$. So there is a geodesic subsegment $\gamma'$ of $\gamma$ passing through $k$ consecutive points in $U$ and having its endpoints in $F.$ Therefore for $k>d+1$ this would imply ${\rm diam}\, F \geq {\rm length}(\gamma') \geq k-1 > d$, which is a contradiction. $\square$

At the end of this section we obtain a description of the endpoints of one-ended and two-ended eventual geodesics. It is the first (and the last) time in this section that we use a group action. Let a group $G$ admit a non-parabolic $32$-action on a compactum $X$. Then by Lemma \ref{cocomext} there exists a connected and fine graph ${\Gamma}$ such that the action on ${\Gamma}$ is proper and cofinite on the edges. We also suppose that $\Gamma^0 = G\cup\{\hbox{the parabolic points of }\widetilde X\}=\{\hbox{the non-conical points of }\widetilde X\}$.
The existence of a $G$-finite $G$-set $\Gamma^1$ making $\Gamma^0$ into the vertex set of a connected graph is proved in \cite[Theorem 3.1]{GePo2}. We note that it is also shown in \cite{GePo2} that the graph ${\Gamma}$ is hyperbolic, but we will not use this in this section.

\begin{prop}\label{horabs} For every non-trivial eventual geodesic $\gamma$ one has $\gamma_{-\infty}\ne\gamma_{+\infty}$. \end{prop}

\noindent {\bf Remark.} In other words the proposition claims that the graph ${\Gamma}$ has no non-trivial eventual geodesic horocycles.

\par \noindent{\it Proof: } Since the action $G\curvearrowright X$ is 3-discontinuous and 2-cocompact, every limit point is either conical or bounded parabolic \cite[Main theorem]{Ge1}. We can assume that the action on $X$ is minimal, so $X={\Lambda}_XG.$

Suppose first that $p=\gamma_{-\infty}= \gamma_{+\infty}$ is conical. Then there exist closed disjoint sets $A,B \subset X$ and an infinite subset $S$ of $G$ such that for every closed set $C\subset \widetilde X\setminus \{p\}$ and every closed neighborhood $\widetilde B$ of $B$ in $\widetilde X$ disjoint from $A$ there exists a subset $S'\subset S$ such that $\vert S\setminus S'\vert <\infty$, $S'(C)\subset \widetilde B$ and $S'(p)\subset A.$ For an arbitrary finite set $J \subset \mathbb Z$ we have $s\gamma(J) \subset \widetilde B$ for all but finitely many $s \in S$. By Lemma \ref{closgeod} the set $\partial^{-1}(\widetilde B^2)$ is closed. So every accumulation point of the set $S\gamma$ belongs to $\partial^{-1}(\widetilde B^2)$.
On the other hand every eventual geodesic $s\gamma$ belongs to the closed set $\partial^{-1}(A^2)$, as $\partial(s\gamma)=(s(p),s(p))\in A^2.$ So every accumulation point of $S\gamma$ belongs to $\partial^{-1}(\widetilde B^2)\cap \partial^{-1}(A^2)$, which is impossible as $A^2\cap \widetilde B^2=\emptyset.$

Let now $p = \gamma_{-\infty}{=}\gamma_{+\infty}\in\Gamma^0$. In this case $\gamma$ cannot be finite, so $p$ is a limit point for the action $G\curvearrowright\widetilde X$. Hence $p$ is bounded parabolic. Let $K$ be a compact fundamental set for the action $\mathsf{St}_G p{\curvearrowright}\widetilde X{\setminus}\{p\}$. Let $\mathbf u$ be an entourage such that $\mathbf u\cap (\{p\}\times K) =\emptyset.$ By the visibility property there exists a finite set $F\subset{\Gamma}^1$ such that every eventual geodesic from $p$ to $K$ contains an edge in $F.$ For every $n\in\mathbb Z$ there exists $h\in\mathsf{St}_G p$ such that $h\gamma(n)\in K$. Since $\partial(h\gamma)= (p,p),$ the geodesic $h\gamma$ has edges in $F$ on both sides of $h\gamma(n)$. Hence $\mathsf{dist}_\Gamma(p,\gamma(n))=\mathsf{dist}_\Gamma(p,h\gamma(n))\leqslant\mathsf{dist}_\Gamma(p,F^0){+}\mathsf{diam}_\Gamma F^0<\infty\ (n\in\mathbb Z).$ So $\gamma(\mathbb Z)$ is bounded and cannot be an infinite geodesic. $\square$

\begin{prop}\label{geodray} Let $\gamma:[0,+\infty)\to\Gamma^0$ be an infinite geodesic ray. Then $\gamma_{+\infty}\notin\Gamma^0$ and $\gamma_{+\infty}$ is a conical point. \end{prop}

\par \noindent{\it Proof: } If the assertion were false then the point $p=\gamma_{+\infty}$ would be bounded parabolic. Let $K$ be a compact fundamental set for the action $\mathsf{St}_G p\curvearrowright\widetilde X{\setminus}\{p\}$.
Since the geodesic ray is infinite, the graph distances $d(\gamma(0), \gamma(n))$ tend to infinity. Let $h_n$ be an element of $\mathsf{St}_G p$ such that $h_n(\gamma(n))\in K\ (n\in\mathbb Z_+)$. By applying the ``flow map'' $\gamma(n)\to \gamma(n-1)$ to the geodesic $h_n\gamma$ we obtain a geodesic $\gamma_n$ such that $\gamma_n(0)\in K$. The sequence of the other endpoints $\{h_n(\gamma(0))\}$ converges to $p$ as $n\to+\infty.$ Each accumulation point $\alpha$ of the set $\{\gamma_n\ :\ n \geqslant 0\}$ in ${\rm EG}({\Gamma})$ is an eventual geodesic whose endpoints are both equal to $p$. Furthermore it cannot be a constant since it has a value in $K$. So we obtain a non-trivial $\alpha\in{\rm EG}({\Gamma})$ such that $\alpha_{-\infty}=\alpha_{+\infty}$, which contradicts Proposition \ref{horabs}. $\square$

In the following proposition we show that the absence of horocycles is equivalent to the hyperbolicity of the graph ${\Gamma}$.

\begin{prop}\label{uniGen} The uniformity ${\mathcal U}^0={\mathcal U}\vert_{{\Gamma}^0}$ is generated by the collection $\{\mathbf u_e:e{\in}\Gamma^1\}$. \end{prop}

\noindent {\it Proof.} Let us prove that $\mathbf u_e$ is an entourage. By Proposition \ref{bcont} the boundary map $\partial$ is continuous on the closed set $K_{0,e}=\{\gamma\in\mathrm{EG}(\Gamma):e=\{\gamma(0), \gamma(1)\}\}$. Hence $\partial K_{0,e}$ is closed. Its complement in $\widetilde X^2$ is exactly $\mathbf u_e$. By Proposition \ref{horabs} the open set $\mathbf u_e$ contains all diagonal pairs $(p,p)$, $p{\in}\widetilde X,$ and so is an entourage.
By the visibility property, for every $\mathbf v^0\in {\mathcal U}^0$ there exists a finite set $F$ of edges for which $\mathbf u_F=\bigcap_{e\in F}\mathbf u_e\subset \mathbf v^0$. So ${\mathcal U}^0$ is generated by the set $\{\mathbf u_e\ :\ e\in{\Gamma}^1\}$ as a filter. $\square$

\noindent {\bf Remark.} By \cite[Subsection 5.1]{Ge2} this proposition implies that $\Gamma$ is hyperbolic. Namely each side of every geodesic triangle $\triangle$ is contained in the metric $\delta$-neighborhood of the union of the other sides. The constant $\delta$ is determined as follows. Let $E_\#$ be a finite set of edges intersecting each $G$-orbit in $\Gamma^1$. For every $e\in E_\#$, since $\mathbf u_e$ is an entourage, there exists a finite set $F(e)\subset\Gamma^1$ such that $\mathbf u_{F(e)}^2\subset\mathbf u_e$ (this property is called {\it alt-hyperbolicity} in \cite{Ge2}). It follows directly from the definition of $\mathbf u_e$ that one can choose $\delta=1{+}\mathsf{max}\{\mathsf{dist}_\Gamma(e,F(e)^0):e{\in}E_\#\}$. This gives an independent proof of Yaman's theorem without metrisability and cardinality restrictions. Note that it uses \cite{GePo2}, where a connected graph $\Gamma$ was constructed. It also uses \cite{Ge2}, where the visibility property was proved.

\section{Dynamical quasiconvexity and the 2-cocompactness condition}\label{secqconvex}

\subsection{The statement of the result}

Let $X$ be a compactum. We first restate the definition of dynamical quasiconvexity in terms of entourages \cite{GePo3}.

\begin{dfn}\label{dynquas} Let $G$ be a discrete group acting 3-discontinuously on a compactum $X$.
A subgroup $H$ of $G$ is said to be dynamically quasiconvex if for every entourage $\mathbf u$ of $X$ the set $G_{\mathbf u}=\{g\in G:g({\Lambda} H)\notin\mathsf{Small}(\mathbf u)\}/H$ is finite. \end{dfn}

The aim of this section is the following theorem giving a characterization of dynamical quasiconvexity.

\vspace*{0.5cm}

\begin{thm} \label{infquas} Let $G$ be a group which admits a 3-discontinuous and 2-cocompact non-trivial action on a compactum $X$, and let $H$ be a subgroup of $G$. Then the following conditions are equivalent.

\begin{itemize} \item [1.] The action $H\curvearrowright{\Lambda}_X H$ is 2-cocompact. \item [2.] $H$ is dynamically quasiconvex. \end{itemize} \end{thm}

\noindent {\bf Remark.} Since every parabolic subgroup is always dynamically quasiconvex, we will regard every group action on a one-point set as 2-cocompact.

\subsection{Proof of the implication $2)\Rightarrow 1)$}

The action $G\curvearrowright X$ is 2-cocompact. So there exists a compact fundamental set $K$ for the action of $G$ on $\Theta^2 X$. Denote by $\mathbf u$ the entourage $X^2\setminus K$. For every two distinct points $p$ and $q$ in $X$ there exists $g\in G$ such that $g(p,q)\in K$. So $(p,q)\not\in \mathbf u_1$ where $\mathbf u_1=g^{-1}\mathbf u.$ This means that the orbit $G\mathbf u$ has the separation property.

Let us first show that the index of $H$ in the stabilizer ${\rm St}_G({\Lambda} H) =\{g\in G\ : g({\Lambda} H)={\Lambda} H\}$ of ${\Lambda} H$ is finite. Indeed, fix two distinct points $\{p,q\}\subset{\Lambda} H$; by the exactness there exists $\mathbf u_1\in {\mathcal U}$ such that $(p,q)\not\in\mathbf u_1.$ So for every $h\in{\rm St}_G ({\Lambda} H)$ we have $\{p,q\}\subset h({\Lambda} H)$ and $h({\Lambda} H)\not\in \mathsf{Small}(\mathbf u_1)$.
By the dynamical quasiconvexity applied to $\mathbf u_1$ there are at most finitely many such elements $h\in \mathrm{St}_G(\Lambda H)$ distinct modulo $H$. So $\mathrm{St}_G(\Lambda H)=\bigcup_{j\in J} k_jH\ (\vert J\vert <\infty)$.

Consider now the orbit $G(\Lambda H)$. Applying the dynamical quasiconvexity to $\mathbf u$ we obtain a finite set $\{g_i\in G : i\in I\}$ such that once $g(\Lambda H)$ is not $\mathbf u$-small for some $g\in G$ then $g(\Lambda H)=g_i(\Lambda H)$ for some $g_i$. Consider the following entourage of $\Lambda H$:
$$\displaystyle \mathbf v=\bigcap_{i\in I,\ j\in J} (k_j^{-1}g_i^{-1}\mathbf u \cap \Lambda^2 H).$$

Let $(x,y)\in\Theta^2(\Lambda H)$. Since the orbit $G\mathbf u$ has the separation property there exists $g\in G$ such that $g(x,y)\not\in\mathbf u$ and hence $g(\Lambda H)\not\in \mathsf{Small}(\mathbf u)$. We have $g(\Lambda H)=g_i(\Lambda H)$ for some $i\in I$. Hence $g_i^{-1}g(\Lambda H)=\Lambda H$ and $g_i^{-1}g=k_jh$ for some $j\in J$ and $h\in H$. So $g(x,y)=g_ik_jh(x,y)\not\in \mathbf u$. Consequently $h(x,y)\not\in \mathbf v$. We have proved that for every $(x,y)\in\Theta^2(\Lambda H)$ there exists $h\in H$ such that $(x,y)\not\in h^{-1}\mathbf v$. This means that the set $\Theta^2(\Lambda H)\setminus \mathbf v$ is a compact fundamental set for the action $H\curvearrowright \Theta^2(\Lambda H)$. $\Box$

\subsection{Proof of the implication $1)\Rightarrow 2)$}

We fix a group $G$ and a $32$-action of $G$ on a compactum $X$.
By Lemma \ref{cocomext} $G$ acts on the attractor sum $\tilde X=X\cup\Gamma$ where $\Gamma$ is a connected, fine, hyperbolic graph and the action $G\curvearrowright\Gamma^1$ is proper and cofinite. The canonical uniformity $\mathcal U$ of $\tilde X$ is generated by an orbit $G\mathbf u$. By Lemma \ref{perdiv} the restriction $\mathbf u\vert_{\Gamma^0}$ is a perspective divider.

\bigskip

Let $H<G$ be a subgroup. Denote by $\Lambda H\subset\tilde X$ its limit set for the action on $\tilde X$. By Corollary \ref{hull} the set $C=\mathbb{H}L$ is closed in $\tilde X$.

\begin{lem}\label{degf} Every point $v$ in $C^0=C\cap\Gamma^0$ is either parabolic for the action $H\curvearrowright\Lambda H$ or the number of edges incident to $v$ in the graph $C$ is finite (i.e.~the degree of $v$ in $C$ is finite).
\end{lem}

\par
\noindent{\it Proof: } Let $v\in C^0\setminus \Lambda H$. The set $\Lambda H\subset X$ is compact. So by the exactness of $\mathcal U$ there exists an entourage $\mathbf w\in\mathcal U$ such that $(\{v\}\times \Lambda H)\cap\mathbf w=\emptyset$. By the visibility property there exists a finite set $F\subset \Gamma^1$ such that $\mathbf u_F\subset \mathbf w$. Hence every eventual geodesic $\gamma$ with endpoints $a=\gamma_{-\infty}, b=\gamma_{+\infty}\in\Lambda H$ and containing $v$ passes through $e_-$ and $e_+$, where $e_-$ and $e_+$ are edges of $F$ belonging to the geodesic rays joining $\gamma_{-\infty}$ with $v$ and $v$ with $\gamma_{+\infty}$ respectively. Thus every such arc $\gamma$ in $C^1$ has a simple subarc $l$ between $e_-$ and $e_+$ which also contains $v$. Since $\gamma$ has no intermediate stops, $l$ is a geodesic. By the fineness property of $\Gamma$ the number of such geodesic subarcs is finite.
Hence the number of edges incident to $v$ is finite.

Suppose now that $v\in C^0\cap \Lambda H$. Then it is a parabolic point for $G$. Since $H$ acts 2-cocompactly on $\Lambda H$, every point of $\Lambda H$ is either conical or bounded parabolic \cite[Main theorem, b]{Ge1}. If $v$ were conical in $\Lambda H$ then by the 3-discontinuity of the action $G\curvearrowright X$ it would also be conical for $G\curvearrowright X$, which is impossible by \cite[Theorem 3A]{Tu2}. $\Box$

\begin{lem}\label{huldc} Let $C=\mathbb{H}L$. If $\vert C^1/H\vert <\infty$ then $H$ is dynamically quasiconvex.
\end{lem}

\par
\noindent{\it Proof: } We extend the visibility property (see Definition \ref{visprop}) from $\Gamma^0$ to $\tilde X$. By Theorem \ref{exgeod} for every $(x,y)\in\tilde X^2$ there exists a geodesic $\gamma$ whose endpoints are $x$ and $y$. So for an edge $e\in\Gamma^1$ and $(x,y)\in\tilde X^2$ put $(x,y)\in \mathbf u_e$ if there exists such a $\gamma$ which does not contain the edge $e$.

Let $\mathbf u\in\mathcal U$ and let $\mathbf v\in\mathcal U$ with $\mathbf v^3\subset \mathbf u$. Since the graph $\Gamma$ has the visibility property there exists a finite set $E\subset\Gamma^1$ such that $\mathbf u_E=\bigcap_{e\in E} \mathbf u_e\subset\mathbf v\vert_{\Gamma^0}$. Let $(x,y)\not\in\mathbf u$ and let $\gamma$ be an eventual geodesic joining $x$ with $y$. Choose $\{x', y'\}\subset \gamma$ such that $(x,x')\in\mathbf v$ and $(y,y')\in\mathbf v$. Since $(x,y)\not\in \mathbf u$ we have $(x',y')\not\in \mathbf v\vert_{\Gamma^0}$. Hence $(x',y')\not\in\mathbf u_E$. So the piece of $\gamma$ between $x'$ and $y'$ contains an edge $e$ from $E$. Hence $(x,y)\not\in\mathbf u_e$. We have proved the inclusion $\mathbf u_E\subset \mathbf u$ on $\tilde X^2$.
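\noindent In summary, with $x'$ and $y'$ chosen as above, the extension argument is the chain of implications
$$(x,y)\not\in\mathbf u\ \Longrightarrow\ (x',y')\not\in\mathbf v\vert_{\Gamma^0}\ \Longrightarrow\ (x',y')\not\in\mathbf u_E\ \Longrightarrow\ \exists\, e\in E : (x,y)\not\in\mathbf u_e\ \Longrightarrow\ (x,y)\not\in\mathbf u_E,$$
which is the complement form of the inclusion $\mathbf u_E\subset\mathbf u$ on $\tilde X^2$.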
If now $H$ is not dynamically quasiconvex then by Definition \ref{dynquas} the set $G_{\mathbf u}=\{g\in G : g(\Lambda H)\notin\mathsf{Small}(\mathbf u)\}/H$ is infinite for some $\mathbf u\in\mathcal U$. By the above argument there exists a finite set $E\subset\Gamma^1$ such that $\mathbf u_E\subset \mathbf u$ on $\tilde X$. Since $\vert\Gamma^1/G\vert <\infty$ there exists an edge $e\in E$ for which the set $\{g\in G : g(\Lambda H)\not\in \mathsf{Small}(\mathbf u_e)\}/H$ is infinite. Therefore the set $\{g\in G : e\in g(C^1)\}/H =\{g\in G : g^{-1}(e)\in C^1\}/H$ is infinite too. $\Box$

Suppose that the action $H\curvearrowright\Lambda H$ is 2-cocompact. By Lemma \ref{huldc} it is enough to prove that $\vert C^1/H\vert <\infty$ where $C=\mathbb{H}L$. Let $K$ be a compact fundamental set for the action $H\curvearrowright \Theta^2(\Lambda H)$. So $HK=\Theta^2(\Lambda H)$. Let $\mathbf u\in\mathcal U$ be an entourage such that $\mathbf u^3\cap K=\emptyset$. By the visibility property there exists a finite set $F\subset\Gamma^1$ such that $\mathbf u_F\subset \mathbf u$. Thus $\mathbf u_F^3\cap K=\emptyset$. Up to adding a finite number of edges to $F$ we can assume that $F$ is the edge set of a finite connected subgraph of $\Gamma$.

We call the edges of $C^1$ which belong to $HF$ {\it red} edges. The other edges of $C^1$ are {\it white}. Similarly we declare the parabolic points of $H$ {\it red} and the other vertices of $C$ {\it white}.

\begin{lem}\label{whiteray} Every infinite ray $\rho:[0,\infty[\,\to C$ contains at least one red edge. Furthermore every geodesic between two red vertices contains a red edge.
\end{lem}

\noindent {\it Proof of the lemma.} By Lemma \ref{conv} the ray $\rho$ converges to a point $x=\rho(\infty)\in\Lambda H$. Since the action $H\curvearrowright\Lambda H$ is 2-cocompact, by \cite{Ge1} every point of $\Lambda H$ is either conical or bounded parabolic. By Proposition \ref{geodray} $x$ is conical for the action $H\curvearrowright\Lambda H$. Therefore there exist an infinite set $S\subset H$ and two distinct points $a$ and $b$ in $\Lambda H$ such that $a$ and $b$ are limit points for the sets $S(\rho(\infty))$ and $S(\rho(0))$ respectively. Let $U_a$ and $U_b$ be disjoint $\mathbf u$-small neighborhoods of $a$ and $b$, for the entourage $\mathbf u$ defined before the Lemma. Thus there exists $s\in S$ such that $s(\rho(\infty))\in U_a$ and $s(\rho(0))\in U_b$. There exists $h\in H$ such that $h(a,b)\in K$. Hence $h(a,b)\not\in \mathbf u^3$. Since $h(s\rho(\infty),a)\in \mathbf u$ and $h(s\rho(0), b)\in \mathbf u$ we obtain $\partial(hs(\rho))\not\in\mathbf u$. Thus $\partial(hs(\rho))\not\in\mathbf u_F$ for the finite set $F$ defined above. It follows that $hs(\rho)$ contains a red edge, and so does $\rho$.

Let now $\gamma$ be a geodesic between two red points in $C$. Then there exists $h\in H$ such that $h(\partial\gamma)\in K$, so the pair $h(\partial\gamma)$ is not $\mathbf u_F$-small. Thus every geodesic $\gamma$ connecting two red points contains at least one red edge. The Lemma is proved. $\Box$

It remains to show that the set of white edges of $C^1$ is $H$-finite. Let us say that a segment of an eventual geodesic in $C$ is {\it white} if all its edges and vertices are white. Denote by $\mathcal F$ the subgraph of $C$ obtained by adding to the set $F$ all adjacent white segments. Since $F$ is connected, $\mathcal F$ is also connected.
By the first statement of Lemma \ref{whiteray} every geodesic interval containing only white edges has finite length. Furthermore by Lemma \ref{degf} the degree of every white vertex is finite. Thus by K\"onig's Lemma the connected subgraph $\mathcal F$ is finite.

We claim that $H\mathcal F^1=C^1$. Indeed, if $e=(a,b)\in\Gamma^1$ is a white edge then by the second statement of Lemma \ref{whiteray} one of its vertices, say $a$, is white. Consider a maximal white segment $l_1$ of $C$ starting from $a$ and not containing $e$. It has finite length and ends either at a red vertex $c$ or at a red edge. Our aim is to prove that the second case does occur for one of these segments. Suppose it is not true for $l_1$. Then the other vertex $b$ of the edge $e$ cannot be red. Indeed, if $b$ were red, then $l_1\cup e$ would have two red ends $c$ and $b$, and by Lemma \ref{whiteray} it would have to contain a red edge, which is impossible. So $b$ is white. Then by Theorem \ref{exgeod} there exists another maximal white segment $l_2$ starting from $b$. If it ends at a red vertex $d$ then applying Lemma \ref{whiteray} again we obtain that $l_1\cup l_2\cup e$ contains a red edge. So there exists a white eventual geodesic segment $l$ starting from $e$ and terminating at a red edge $e_1$. Thus there exists $h\in H$ such that $h(l\cup e)\subset \mathcal F$. The Theorem is proved. $\Box$

The proof of the above Theorem gives rise to another criterion for dynamical quasiconvexity.
\begin{cor}\label{equiv} The following conditions are equivalent:
\begin{itemize}
\item[{\sf a)}] $H$ satisfies one of the conditions 1) or 2) of Theorem \ref{infquas};
\item[{\sf b)}] $\vert C^1/H\vert <\infty$ where $C=\mathbb{H}L$.
\end{itemize}
\end{cor}

\par
\noindent{\it Proof: } By Lemma \ref{huldc} it remains to prove that ${\sf a)}\Rightarrow{\sf b)}$. By the implication $2)\Rightarrow 1)$ of Theorem \ref{infquas} the dynamical quasiconvexity implies the 2-cocompactness of the action $H\curvearrowright\Lambda_X H$. We have proved above that the latter implies that $\vert C^1/H\vert <\infty$. $\Box$

\section{Pullback space for $32$-actions}\label{secpulback}

In \cite[page 142]{Ge1} the following problem was formulated. Let a group $G$ admit convergence actions on two compacta $T_i\ (i=0,1)$. Does there exist a convergence action on a compactum $Z$ and two $G$-equivariant continuous mappings $\pi_i : Z\to T_i\ (i=0,1)$?

\bigskip

\begin{center}
\begin{picture}(70,27)(-30,-20)
\put(0,0){$Z$}
\put(-2,-2){\vector(-1,-1){20}}
\put(8,-2){\vector(1,-1){20}}\put(-25,-11){$\pi_0$}
\put(19,-11){$\pi_1$}
\put(230,-20){(1)}\put(-30,-35){$T_0$}
\put(30,-35){$T_1$}\put(70,-35){}
\end{picture}
\end{center}

\bigskip

\begin{dfn}\label{pullbackdef} We call the space $Z$ and the action $G\curvearrowright Z$ the {\it pullback space} and the {\it pullback action} respectively.
\end{dfn}

Answering a question of M.
Mitra \cite{M}, O.~Baker and T.~Riley constructed in \cite{BR} a hyperbolic group $G$ containing a free subgroup $H$ of rank $3$ such that the embedding does not induce an equivariant continuous map (called a ``Cannon-Thurston map'') $\partial H\to \partial G$, where $\partial$ denotes the boundary of a hyperbolic group. Denote $T_0=\partial H$, and let $T_1=\Lambda_{\partial G} H$ be the limit set for the action $H\curvearrowright\partial G$. The following proposition shows that the Baker-Riley example is also a counterexample to the pullback problem.

\begin{prop}\label{pullback} The compacta $T_i\ (i=0,1)$ do not admit a pullback space on which $H$ acts $3$-discontinuously.
\end{prop}

\par
\noindent{\it Proof: } Suppose by contradiction that the diagram (1) exists. Consider the spaces $\tilde Z=Z\cup H$, $\tilde T_0 = T_0\cup H$, $\tilde T_1=T_1\cup H$ equipped with the following topology (which we describe only for $\tilde T_0$; it is defined similarly in the other cases). A set $F$ is closed in $\tilde T_0$ if
\begin{itemize}
\item[1)] $F\cap T_0\in\mathsf{Closed}(T_0)$;
\item[2)] $F\cap H\in \mathsf{Closed}(H)$;
\item[3)] $\partial_1(F\cap H)\subset F$, where $\partial_1$ denotes the set of attractive limit points.
\end{itemize}
The topology axioms are easily checked. Since $H$ is discrete, its points are isolated in $\tilde T_0$ and condition 2) is automatically satisfied. By \cite[Proposition 8.3.1]{Ge2} the actions $H\curvearrowright \tilde T_i$ and $H\curvearrowright \tilde Z$ are 3-discontinuous.
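\noindent For instance, the stability of the family of closed sets under finite unions and arbitrary intersections follows from the monotonicity of $\partial_1$: if sets $A$, $B$ and $A_i$ satisfy conditions 1)--3) then
$$\partial_1\big((A\cup B)\cap H\big)\subset\partial_1(A\cap H)\cup\partial_1(B\cap H)\subset A\cup B,\qquad \partial_1\Big(\big(\bigcap_{i}A_i\big)\cap H\Big)\subset\bigcap_i\partial_1(A_i\cap H)\subset\bigcap_i A_i,$$
the first inclusion holding because an attractive limit point of an infinite subset of $H$ is already an attractive limit point of an infinite part of it lying in $A\cap H$ or in $B\cap H$.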
By the following lemma the maps $\pi_i$ can be extended to continuous maps $\tilde\pi_0:\tilde Z\to \tilde T_0$ and $\tilde\pi_1:\tilde Z\to \tilde T_1$ where $\tilde\pi_i\vert_Z=\pi_i$ and $\tilde\pi_i\vert_H=\mathrm{id}\ (i=0,1)$.

\begin{lem}\label{extconv} Let $G$ be a group acting $3$-discontinuously on two compacta $X$ and $Y$. Denote by $\tilde X$ and $\tilde Y$ the spaces $X\cup G$ and $Y\cup G$ respectively, equipped with the above topologies. Suppose that the action on $Y$ is minimal and $\vert Y\vert > 2$. If $f:X\to Y$ is a continuous $G$-equivariant map then the map $\tilde f:\tilde X\to\tilde Y$ such that $\tilde f\vert_X=f$ and $\tilde f\vert_G\equiv \mathrm{id}$ is continuous.
\end{lem}

Assuming the lemma for the moment, let us finish the argument. By hypothesis $H\curvearrowright Z$ is 3-discontinuous. The map $\pi_0$ is equivariant and continuous and the action $H\curvearrowright T_0$ is minimal. So $\pi_0$ is surjective. Since $H$ is hyperbolic all points of $T_0$ are conical \cite{Bo3}. By \cite[Proposition 7.5.2]{Ge2} the map $\pi_0$ is a homeomorphism. So we have the equivariant continuous map $\pi=\pi_1\circ\pi_0^{-1} : T_0\to T_1$. By Lemma \ref{extconv} it extends equivariantly to a map $\tilde \pi: \tilde T_0 \to\tilde T_1$ where $\tilde T_0=H\cup\partial H$ and $\tilde T_1=G\cup\partial_\infty G$. This is a Cannon-Thurston map, contradicting the result of Baker and Riley. The Proposition is proved modulo the following.

\noindent {\it Proof of Lemma \ref{extconv}.} Let $F\subset \tilde Y$ be a closed set.
Denote $F_Y=F\cap Y$ and $F_G=F\cap G$. We need to check that the set $\tilde f^{-1}(F)=f^{-1}(F_Y)\cup F_G$ is closed. The conditions 1) and 2) are obvious for $\tilde f^{-1}(F)\cap X$ and for $\tilde f^{-1}(F)\cap G$ respectively. Let $z^\times=r\times X \cup X\times a$ be a limit cross for $F_G$ on $X$. To check condition 3) for the set $\tilde f^{-1}(F)$ we need to show that $b=f(a)\in F_Y$. Suppose not: $b\not\in F_Y$; let $B$ be a closed neighborhood of $b$ such that $B\cap F_Y=\emptyset$. Let $\mathbf v$ be an entourage of $Y$ such that $B\mathbf v\cap F_Y=\emptyset$, where $B\mathbf v=\{y\in Y : (y,b_1)\in \mathbf v\ \hbox{for some}\ b_1\in B\}$. Set $A=f^{-1}(B)\ni a$. For a neighborhood $R$ of the repelling point $r\in X$ the set $F_0=\{g\in F_G : g(X\setminus R)\subset A\}$ is infinite. Let $w^\times=p\times Y\cup Y\times q$ be a limit cross for $F_0$ on $Y$, and let $P\times Y\cup Y\times Q$ be a neighborhood of it. Since $F_Y\subset Y$ is closed, by condition 3) we have $q\in F_Y$. Suppose that $Q$ is $\mathbf v$-small. By the hypothesis there exist three distinct points $y_i\in Y\ (i=1,2,3)$. Since the action on $Y$ is minimal and $f$ is equivariant, one has $f^{-1}(y_i)=X_i\not=\emptyset$ and the $X_i$ are mutually disjoint $(i=1,2,3)$. Let us now put some restrictions on $R$.
Suppose that $R\cap X_i=\emptyset$ for at least two indices $i\in\{k,j\}\subset\{1,2,3\}$ and that for one of them, say $k$, we have $y_k\not\in P$. If $g\in G$ is close to $w^\times$ we have $g(Y\setminus P)\subset Q$ and $g(y_k)\in Q$. On the other hand $g(X_k)\subset A$ since $X_k\cap R=\emptyset$. Thus $g(y_k)\in Q\cap B$ and so $(q,g(y_k))\in \mathbf v$. Hence $q\in B\mathbf v$ and $q\not\in F_Y$. A contradiction. The lemma is proved. $\Box$

Since the answer to the pullback problem for general convergence actions is negative, it seems rather intriguing to study the pullback problem in the more restrictive case of 2-cocompact actions. The rest of the section is devoted to a discussion of this problem.

If $G$ is a finitely generated group acting $3$-discontinuously and $2$-cocompactly on compacta $X_1$ and $X_2$ then by the Mapping theorem \cite[Proposition 3.4.6]{Ge2} there exist equivariant maps $F_i :\partial G\to X_i\ (i=1,2)$ from the Floyd boundary $\partial G$ of $G$. By \cite{Ka} the action on $\partial G$ is 3-discontinuous. So $\partial G$ is a pullback space for any two $32$-actions of $G$.

If $G$ is not finitely generated this argument does not work, as the Mapping theorem requires the action on the edges of a graph to be cofinite, which fails for Cayley graphs in this case. An action of such a group on a relative graph depends on the system of non-finitely generated parabolic subgroups \cite[Proposition 3.43]{GePo2}. Furthermore the action on the closure of the diagonal image of the group in the product space may not be $3$-discontinuous.
However if there is a pullback action for two 3-discontinuous actions of $G$ and both of them are 2-cocompact, then, as the following lemma shows, a quotient of the pullback space also admits a $32$-action.

\begin{lem}\label{quotient} Suppose that $G$ acts 3-discontinuously and 2-cocompactly on two compacta $X_i\ (i=1,2)$. Let $X$ be a pullback space for the $X_i$ and let $\pi_i:X\to X_i$ be the corresponding equivariant continuous maps. Then the action on the quotient space $T=\pi(X)=\{(\pi_1(x),\pi_2(x)) : x\in X\}$ is of type $(32)$. Furthermore the action of $G$ on the attractor sum $\tilde T=T\sqcup G$ is also of type $(32)$.
\end{lem}

\par
\noindent{\it Proof: } We will argue in terms of the attractor sums to obtain the stronger last statement. By Lemma \ref{extconv} the maps $\pi_i$ extend to continuous equivariant maps $\tilde \pi_i :\tilde X\to\tilde X_i$ where $\tilde X = X\sqcup G$ and $\tilde X_i=X_i\sqcup G$, $\tilde\pi_i\vert_G=\mathrm{id}$, $\tilde\pi_i\vert_X=\pi_i$. By Lemma \ref{cocomext} the actions on the $X_i$ extend to 32-actions on the attractor sums $\tilde X_i$. Since the action $G\curvearrowright \tilde X_i$ is 2-cocompact there exists an entourage $\mathbf u_i$ of $\tilde X_i$ such that the uniformity $\mathcal U_i$ on $\tilde X_i$ is generated as a filter by the orbit $G\mathbf u_i\ (i=1,2)$ \cite[Proposition E, 7.1]{Ge1}. Let $\tilde{\mathbf u}_i$ denote the entourage $\tilde\pi_i^{-1}(\mathbf u_i)$ on $\tilde X\ (i=1,2)$. Their $G$-orbits generate the lifted uniformities $\tilde{\mathcal U}_i$. Then $\tilde{\mathbf w}=\tilde{\mathbf u}_1\cap \tilde{\mathbf u}_2$ is an entourage of $\tilde X$ whose orbit $G\tilde{\mathbf w}$ generates a uniformity $\mathcal W$ on $\tilde X$. Note that $\mathcal W$ is not a priori exact.
Indeed, there could exist two distinct points $x,y\in X$ such that $\pi_i(x)=\pi_i(y)\ (i=1,2)$, and there is no way to separate them using the uniformities $\mathcal U_i\ (i=1,2)$. So we consider the following quotient spaces:
$$\tilde T= \tilde\pi(\tilde X) =\{(\tilde\pi_1(x), \tilde\pi_2(x))\in \tilde X_1\times\tilde X_2 : x\in \tilde X\},\qquad T=\pi(X)\ \ {\rm where}\ \ \pi=\tilde\pi\vert_X.$$
Since the maps $\tilde\pi_i\ (i=1,2)$ are equivariant, the map $\tilde\pi$ is equivariant too. Denoting by $\tilde\pi_i:\tilde T\to\tilde X_{i-2}\ (i=3,4)$ the projections on the factors we obtain the following commutative diagram.

\begin{center}
\begin{picture}(70,27)(-30,-20)
\put(0,0){$\tilde X$}\put(2,-5){\vector(0,-1){50}}
\put(-5,-30){$\tilde\pi$}\put(-1,-70){$\tilde T$}
\put(-2,-2){\vector(-1,-1){30}}
\put(8,-2){\vector(1,-1){30}}\put(-30,-16){$\tilde\pi_1$}
\put(25,-16){$\tilde\pi_2$}
\put(230,-20){(3)}\put(-45,-40){$\tilde X_1$}
\put(40,-40){$\tilde X_2$}
\put(-2,-62){\vector(-1,1){30}}\put(8,-62){\vector(1,1){30}}
\put(-18,-45){$\tilde\pi_3$}\put(12,-45){$\tilde\pi_4$}
\end{picture}
\end{center}

\vspace*{2cm}

Since the map $\tilde\pi$ is continuous and surjective, the action $G\curvearrowright \tilde T$ is 3-discontinuous too \cite[Proposition 3.1]{GePo1}. It remains to prove that it is 2-cocompact in the quotient topology. Let $\tilde{\mathbf v}=\tilde\pi(\tilde{\mathbf w})$ and consider the uniformity $\tilde{\mathcal V}$ on $\tilde T$ generated by the orbit $G\tilde{\mathbf v}$. To show that $G\curvearrowright\tilde T$ is 2-cocompact it is enough to prove that $\tilde{\mathcal V}$ is exact \cite[Proposition E, 7.1]{Ge1}. So let $x, y$ be two distinct points of $\tilde T$. Then either $\tilde \pi_3(x)\not= \tilde\pi_3(y)$ or $\tilde\pi_4(x)\not=\tilde\pi_4(y)$.
For example, in the first case, by the exactness of $\tilde{\mathcal U}_1$ there exists $g\in G$ such that $g(\tilde \pi_3(x),\tilde\pi_3(y))\not\in \tilde{\mathbf u}_1$. By the definition of $\tilde{\mathbf w}$ we obtain that $g(x,y)\not\in \tilde{\mathbf v}$. So $\tilde{\mathcal V}$ is exact. $\Box$

The aim of the following proposition is to show that two $32$-actions may fail to have a pullback. We note that this is one of the rare cases when a fact known for finitely generated relatively hyperbolic groups is not in general true for non-finitely generated groups.

\begin{prop}\label{freeinf} The free group $F_{\infty}$ of countable rank admits two $32$-actions not having a pullback space.
\end{prop}

\par
\noindent{\it Proof: } Let $G=\langle x_1,\dots,x_n, y_1,y_2,\dots\rangle\ (n\geq 2)$ be a group freely generated by the union of a finite set $X=\{x_1,\dots,x_n\}$ and an infinite set $Y=\{y_1,y_2,\dots\}$. Let $A=\langle X\rangle$ be the subgroup generated by $X$, and let $H$ be a subgroup of $A$ freely generated by an infinite set $W=\{w_i : i\in\mathbb N\}$. Set $Z=\{z_m=y_m w_m : m\in\mathbb N\}$, $P=\langle Y\rangle$ and $Q=\langle Z\rangle$. The set $X\cup Z$ can be obtained by Nielsen transformations from $X\cup Y$ \cite{LS}. So $X\cup Z$ is also a free basis for $G$, and the map $\varphi : x_i\mapsto x_i,\ y_k\mapsto z_k\ (i=1,\dots,n;\ k\in\mathbb N)$ extends to an automorphism of $G$. We have two splittings of $G$:
$$G=A*P\ \ {\rm and}\ \ G=A*Q. \eqno(1)$$
Each splitting in (1) gives rise to an action of $G$ on a simplicial tree whose vertex groups are conjugates of either $A$ or $P$ (respectively $Q$). We now replace the vertices stabilized by $A$ and its conjugates by their Cayley trees. Denote the obtained simplicial $G$-trees by $\mathcal T_i\ (i=1,2)$.
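\noindent The invertibility of $\varphi$ can also be checked directly on the generators: denoting by $\psi$ the endomorphism given by $x_i\mapsto x_i$ and $y_k\mapsto y_k w_k^{-1}$ (an auxiliary map introduced only for this verification), and using that both $\varphi$ and $\psi$ fix $w_k\in\langle X\rangle$, one gets
$$\psi(\varphi(y_k))=\psi(y_k w_k)=y_k w_k^{-1}\, w_k=y_k,\qquad \varphi(\psi(y_k))=\varphi(y_k w_k^{-1})=y_k w_k\, w_k^{-1}=y_k.$$
This confirms that both splittings in (1) are taken with respect to free bases of $G$, as used in the construction of the trees $\mathcal T_i$.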
Their edge stabilizers are trivial and their vertex stabilizers are non-trivial if and only if they are conjugate to $P$ (respectively $Q$). The vertices of $\mathcal T_1$ (respectively $\mathcal T_2$) are the elements of $G$ and the parabolic vertices corresponding to the conjugates of $P$ (respectively $Q$). The graph $\mathcal T_i$ is a connected fine hyperbolic graph such that the action of $G$ on its edges is proper and cofinite. Hence the actions satisfy Bowditch's criterion of relative hyperbolicity \cite{Bo1}. By \cite{Ge2} both actions on the trees extend to $32$-actions on compacta $R_i$ which are the limit sets for the actions $G\curvearrowright\mathcal T_i\ (i=1,2)$.

We claim that $P\cap g^{-1}Qg=\{1\}$ for all $g\in G$. Indeed, consider the endomorphism $f$ such that $f(x_i)=x_i,\ f(y_j)=1\ (i=1,\dots,n;\ j\in\mathbb N)$. The map $f$ is injective on $Q$ (indeed $f(z_m)=w_m$ and the $w_m$ freely generate $H$), as well as on every conjugate $g^{-1}Qg$. On the other hand $Y\subset \mathrm{Ker}\, f$, so $P< \mathrm{Ker}\, f$. We have proved that
$$\forall\hspace*{0.5mm} g\in G : P\cap g^{-1}Qg=\{1\}. \eqno(2)$$
Arguing now by contradiction, assume that there exist a pullback space $R$ and equivariant projections $\pi_i : R\to R_i\ (i=1,2)$. By Lemma \ref{quotient} the action on the quotient space
$$T= \pi(R) =\{(\pi_1(r), \pi_2(r))\ \vert\ r\in R\}$$
\noindent is 3-discontinuous and 2-cocompact. Note that the action $G\curvearrowright T$ is minimal because $G\curvearrowright R$ is minimal. By \cite[Main theorem, b]{Ge1} all points of $T$ are either conical or bounded parabolic. If $p\in T$ is parabolic then the $\pi_{i+2}(p)$ are parabolic points in both $R_i$, for the maps $\pi_{i+2}=\tilde\pi_{i+2}\vert_T\ (i=1,2)$ (see the diagram in Lemma \ref{quotient}).
Indeed, the preimage of a conical point by an equivariant map is conical \cite[Proposition 7.5.2]{Ge2}. So $p$ must be fixed by the intersection of some parabolic subgroup $g_1Pg_1^{-1}$ of the first action and some parabolic subgroup $g_2Qg_2^{-1}$ of the second ($g_i\in G$). However by (2) this intersection is trivial. Thus there are no parabolic points for the $32$-action $G\curvearrowright T$. By \cite[Theorem 8.1]{Bo3} (see also \cite[Corollary 3.40]{GePo2}) the group $G$ is hyperbolic and so finitely generated. This is a contradiction. $\Box$

\bigskip

\noindent {\bf Definition.} The set of the stabilizers of the parabolic points for a $32$-action on a compactum $X$ is called a {\it peripheral structure} on $G$.

The following theorem provides a sufficient condition for the existence of a pullback space for two $32$-actions of a group.

\begin{theor}\label{suffcond} Let $G$ be a group which admits $32$-actions on compacta $X$ and $Y$. Let $\mathcal P$ be the peripheral structure corresponding to the action on $X$. Suppose that every $P\in \mathcal P$ acts 2-cocompactly on $\Lambda_YP$. Then there exists a compactum $Z$ equipped with a $32$-pullback action of $G$ with respect to its actions on $X$ and $Y$.
\end{theor}

\noindent {\bf Remark.} Using Theorem \ref{infquas} one can reformulate the hypothesis above by requiring that the action of each subgroup $P\in\mathcal P$ on its limit set in $Y$ be dynamically quasiconvex.

\noindent {\it Proof of Theorem \ref{suffcond}.} Denote by $\mathcal Q$ the peripheral structure corresponding to the action on $Y$, and set
$$\mathcal R=\{P\cap Q : P\in\mathcal P,\ Q\in\mathcal Q,\ |P\cap Q|=\infty\}.
\eqno(4)$$
We will exhibit a compactum $Z$ acted upon by $G$ $3$-discontinuously and $2$-cocompactly whose peripheral structure is $\mathcal R$. Denote by $\mathrm{Par}(Y,P)$ the set of parabolic points for the action of $P\in\mathcal P$ on $Y$. We will need the following lemma.

\begin{lem}\label{parabcor} Let $G, X, Y, \mathcal P, \mathcal Q$ be as in Theorem \ref{suffcond}. The following properties of a subgroup $H\subset G$ are equivalent:
\begin{itemize}
\item [a:] $H\in\mathcal R$;
\item [b:] there exist $P\in\mathcal P$ and $q\in\mathrm{Par}(Y,P)$ such that $H=\mathrm{St}_P q$.
\end{itemize}
\end{lem}

\noindent {\it Proof of the Lemma.} ${\sf b)}\Rightarrow{\sf a)}$. If the action of $P\in\mathcal P$ on $Y$ admits a parabolic point $q$ then its stabilizer $H=\mathrm{St}_Pq$ is an infinite subgroup of $P$. By \cite[Theorem 3.A]{Tu2} the point $q$ is parabolic for the action $G\curvearrowright Y$. We note that the assumption of \cite{Tu2} that the space is metrisable can be omitted by a small modification of the argument. Let $Q=\mathrm{St}_Gq\in\mathcal Q$ be the stabilizer of $q$. We obtain $H=P\cap Q\in\mathcal R$.

${\sf a)}\Rightarrow{\sf b)}$. Let $H=P\cap Q$ for $(P,Q)\in\mathcal{P}\times\mathcal{Q}$. We may assume that $P=\mathrm{St}_Gp$ and $Q=\mathrm{St}_Gq$ for $p\in\mathrm{Par}(X,G)$ and $q\in\mathrm{Par}(Y,G)$. Since $H$ is an infinite subset of $Q$ we have $\Lambda_YH=\{q\}$. Since $H\subset P$ we have $q\in\Lambda_YP$.
If $q$ were conical for $P\curvearrowright\Lambda_YP$ then it would also be conical for $G\curvearrowright Y$, contradicting, by \cite[Theorem 3.A]{Tu2}, the fact that $q\in\mathrm{Par}(Y,G)$. The lemma is proved. $\Box$

The peripheral structure $\mathcal P$ consists of finitely many $G$-conjugacy classes \cite[Main Theorem, $\mathfrak a$]{Ge1}. Since for every $P\in\mathcal P$ the action $P\curvearrowright\Lambda_YP$ is 2-cocompact, there are finitely many $P$-conjugacy classes of maximal parabolic subgroups in $P$. So it follows from Lemma \ref{parabcor} that $\mathcal R$ consists of finitely many $G$-conjugacy classes.

Since the subgroups in $\mathcal R$ are infinite, each of them is contained in exactly one $P\in\mathcal P$ and in exactly one $Q\in\mathcal Q$. So the inclusions induce well-defined maps $\mathcal P\overset{\pi}{\longleftarrow}\mathcal R\overset{\sigma}{\longrightarrow}\mathcal Q$ equivariant by conjugation. We now extend the maps $\pi,\sigma$ identically over the sets $\widetilde{\mathcal P}=G\sqcup\mathcal P$, $\widetilde{\mathcal Q}=G\sqcup\mathcal Q$, $\widetilde{\mathcal R}=G\sqcup\mathcal R$. Denoting the extensions by the same symbols we have $G$-equivariant maps $\widetilde{\mathcal P}\overset{\pi}{\longleftarrow}\widetilde{\mathcal R}\overset{\sigma}{\longrightarrow}\widetilde{\mathcal Q}$.

By Lemma \ref{cocomext} the set $\widetilde{\mathcal P}$ is the vertex set of a connected fine graph $\Delta$ such that the action on edges $G\curvearrowright\Delta^1$ is cofinite and proper. We will construct a connected graph $\Gamma$ whose vertex set is $\Gamma^0=\widetilde{\mathcal R}$ and for which the action on edges is cofinite and proper.
The set of edges $\Gamma^1$ will be obtained by replacing the parabolic vertices $\mathcal P$ of $\Delta$ by connected graphs coming from the action of the groups $P\in\mathcal P$ on $Y$. We do it in the following four steps. \noindent {\bf Step 1.} {\it Definition of $\Gamma^1_1$.} Choose a set $\mathcal R_\#\subset\mathcal R$ that intersects each conjugacy class in a single element. For every $R\in\mathcal R_\#$ we join the vertex $R$ with each element of $R\subset G$ and denote by $E_R$ this set of edges. Then put $$\Gamma^1_1=\bigcup\left\{gE_R\ :\ g\in G,\ R\in\mathcal R_\#\right\}.$$ The set $\Gamma^1_1$ corresponds to the well-known coned-off construction over every coset $gR$, where $g\in G$, $R\in\mathcal R_\#$ \cite{Fa}, \cite{Bo1}. \noindent {\bf Step 2.} \noindent {\it Definitions of $\Gamma_2^1$ and $\Gamma_3^1$.} Choose a set $\mathcal P_\#\subset\mathcal P$ that intersects each conjugacy class of $\mathcal P$ in a single element. For each $P\in\mathcal P_\#$ we add to $\Gamma^1$ a connected $G$-finite set of pairs in one of the following two ways. \noindent {\bf Case 2.1} ({\it hyperbolic case}): $P\in\mathcal P_\#\setminus\mathrm{Im}\,\pi$ (or, equivalently, $\pi^{-1}(P)\cap\mathcal R=\emptyset$). Then $P$ acts on $Y$ either as an elementary loxodromic $2$-ended subgroup, or the action $P\curvearrowright\Lambda_YP$ is a non-elementary $32$-action without parabolics \cite[Theorem 3A]{Tu1}.
In both cases every point of $\Lambda_YP$ is conical \cite[Main Theorem, $\mathfrak b$]{Ge1} and $P$ is a hyperbolic group \cite[Theorem 8.1]{Bo3} (for another proof of this fact see \cite[Appendix]{GePo1}). There exists a $P$-finite set $\Gamma^1_P$ of pairs of elements of $P$ such that the graph $(P,\Gamma^1_P)$ is connected. Put $\displaystyle \Gamma_2^1=\bigcup\left\{g\Gamma^1_P\ :\ g\in G,\ P\in\mathcal P_\#\setminus\mathrm{Im}\,\pi\right\}.$ \noindent {\bf Case 2.2} ({\it non-hyperbolic case}): $P\in\mathcal P_\#\cap\mathrm{Im}\,\pi$. \noindent There is a canonical bijection $\tau_P:\mathrm{Par}(Y,P)\to\pi^{-1}P$. Let \centerline{$M_P=\{g\in G : g$ is joined by a $\Gamma_1^1$-edge with some $R\in\pi^{-1}P\}$.} The set $M_P$ is $P$-invariant and $P$-finite by construction. Let $\Gamma_P^1$ be a $P$-finite set of pairs of elements of the $P$-invariant set $M_P\cup\pi^{-1}P$ such that $(P,\Gamma^1_P)$ is connected. The latter exists since the graph corresponding to the $32$-action $P\curvearrowright\Lambda_YP$ is connected by Lemma \ref{cocomext}. Put $$\Gamma_3^1=\bigcup\left\{g\Gamma_P^1\ :\ g\in G,\ P\in\mathcal P_\#\cap\mathrm{Im}\,\pi\right\}.$$ \noindent {\bf Step 3.} {\it Definition of $\Gamma_4^1$.} Consider in the graph $\Delta$ the set of all its ``horospherical'' edges $\Delta_0^1=\{(P,g)\ :\ P\in\mathcal P,\ g\in M_P\}$.
Let $\Gamma_4^1$ be the set of all non-horospherical edges of $\Delta$: $$\Gamma_4^1=\Delta^1\setminus\Delta^1_0.$$ \noindent {\bf Step 4.} {\it Definition of $\Gamma^1$.} Let $$\Gamma^1=\Gamma_1^1\cup\Gamma_2^1\cup\Gamma_3^1\cup\Gamma_4^1. \eqno(5)$$ The set $\Gamma^1$ is obviously $G$-finite. \begin{lem}\label{connect} The graph $\Gamma=(\Gamma^0,\Gamma^1)$ is connected. \end{lem} \noindent {\it Proof of the lemma.} Since every vertex of $\Gamma$ is either an element of $G$ or is joined with an element of $G$, it suffices to verify that every two elements of $G$ can be joined by a path in $\Gamma$. We initially join them by a path $\gamma$ in the connected graph $\Delta$ and transform this path as follows. If all vertices of $\gamma$ belong to $G$ then $\gamma$ is also a path in $\Gamma$. If $\gamma$ passes through a point $P\in\mathcal P$ then it has a subpath of the form $g_0{-}P{-}g_1$ where $g_0,g_1\in M_P$. The graph of $P$ corresponding to the action on $Y$ is connected, so we can replace this subpath by a subpath with the same endpoints all of whose vertices are contained in $M_P$ (in the ``hyperbolic'' case) or in $M_P\cup\pi^{-1}P$ (in the ``non-hyperbolic'' case). In both cases the edges of this new subpath belong to $\Gamma^1$. The lemma is proved. \noindent {\it End of the proof of Theorem \ref{suffcond}.} By Lemma \ref{perdiv} the sets $\widetilde{\mathcal P}$ and $\widetilde{\mathcal Q}$ admit perspective dividers $\mathbf u\subset\widetilde{\mathcal P}^2$, $\mathbf v\subset\widetilde{\mathcal Q}^2$.
Since the projections $\pi$ and $\sigma$ commute with the group action, the lifts $\pi^{-1}\mathbf u$ and $\sigma^{-1}\mathbf v$ are perspective dividers on $\Gamma^0=\widetilde{\mathcal R}$. It is a direct verification that $\mathbf w=\pi^{-1}\mathbf u\cap\sigma^{-1}\mathbf v$ is a perspective divider on $\Gamma^0$. Indeed, if $g(a,b)\not\in\mathbf w^0$ then $g(\pi(a),\pi(b))\not\in(\mathbf u^0=\mathbf u\vert_{\widetilde{\mathcal P}})$ or $g(\sigma(a),\sigma(b))\not\in(\mathbf v^0=\mathbf v\vert_{\widetilde{\mathcal Q}})$. So there exist at most finitely many such elements $g\in G$, as $\mathbf u^0$ and $\mathbf v^0$ are both perspective. Similarly, $\mathbf w^0$ is a divider on $\Gamma^0$: if $(\cap F_1\{\mathbf u^0\})^2\subset\mathbf u^0$ and $(\cap F_2\{\mathbf v^0\})^2\subset\mathbf v^0$ for some finite $F_i\subset G\ (i=1,2)$, then $(\cap F\{\mathbf w^0\})^2\subset\mathbf w^0$ where $F=F_1\cap F_2$. It follows that the projections $\pi:(\Gamma^0,\mathbf w)\to(\widetilde{\mathcal P},\mathbf u)$ and $\sigma:(\Gamma^0,\mathbf w)\to(\widetilde{\mathcal Q},\mathbf v)$ are uniformly continuous with respect to the uniformities generated by the divider orbits. By Lemma \ref{compldense} the action of $G$ on the Cauchy-Samuel completion $\widetilde Z$ of $(\Gamma^0,\mathbf w)$ is a $32$-action.
By \cite[II.23, Proposition 13]{Bourb} the completion $\widetilde Z$ coincides with the closure $\mathrm{Cl}_{\widetilde X\times\widetilde Y}(\Gamma^0)$ of $\Gamma^0$ embedded diagonally in $\widetilde X\times\widetilde Y$, where $\widetilde X=X\sqcup G$ and $\widetilde Y=Y\sqcup G$. So the projections $\pi$ and $\sigma$ extend continuously to equivariant maps $\widetilde\pi:\widetilde Z\to\widetilde X$ and $\widetilde\sigma:\widetilde Z\to\widetilde Y$ whose restrictions to $G$ are the identity. We have proved that $Z=\Lambda_{\widetilde Z}G$ is a pullback space. The theorem is proved. $\Box$ \bigskip \noindent To prove the statement converse to Theorem \ref{suffcond} we need the following direct generalization of the argument of \cite[Lemma 2.3, (4)]{MOY}, avoiding the metrisability assumption. \begin{lem}\label{preimlim} Let a group $G$ admit two non-trivial $3$-discontinuous actions on compacta $X$ and $Y$, and let $f:X\to Y$ be an equivariant continuous map. Let $H$ be a subgroup of $G$ such that $\Lambda_YH\subsetneqq Y$. Suppose that $H$ acts cocompactly on $Y\setminus\Lambda_YH$. Suppose that for every infinite set $B\subset G\setminus H$ there exist an infinite subset $B_0\subset B$ and at least two distinct points $r_i\in f^{-1}(\Lambda_YH)$ such that $\forall\, g\in B_0 : g(r_i)\not\in f^{-1}(\Lambda_YH)\ (i=1,2)$. Then $f^{-1}(\Lambda_YH)=\Lambda_XH$. \end{lem} \begin{cor}\label{preimp} If $p$ is a bounded parabolic point for the action of $G$ on $Y$ then $f^{-1}(p)$ is the limit set $\Lambda_X(\mathrm{St}_Gp)$ of $\mathrm{St}_Gp$ for the action on $X$. \end{cor} \noindent {\it Proof of the Corollary.} By the equivariance and continuity of $f$ we have $\Lambda_XH\subset f^{-1}(\Lambda_YH)$, where $H=\mathrm{St}_Gp$.
So if $f^{-1}(p)$ is a single point then the statement is trivially true. If $f^{-1}(p)$ contains at least two distinct points $r_i\ (i=1,2)$ then we have $\forall\, g\in G\setminus H\ :\ g(r_i)\not\in f^{-1}(\Lambda_YH)$, as $g(p)\not=p$, and we apply Lemma \ref{preimlim}. $\Box$ {\it Proof of the Lemma.} The statement is trivial if $H$ is finite, so we assume that $H$ is infinite. Suppose first that the set $f^{-1}(\Lambda_YH)$ is finite. Since $f(\Lambda_XH)\subset\Lambda_YH$, the set $f^{-1}(\Lambda_YH)$ is pointwise fixed by a finite index subgroup of $H$. So $f^{-1}(\Lambda_YH)=\Lambda_XH$ in this case. Suppose now that $f^{-1}(\Lambda_YH)$ is infinite, and suppose by contradiction that there exists a point $s\in f^{-1}(\Lambda_YH)\setminus\Lambda_XH$. Then there exists an infinite set $B\subset G\setminus H$ converging to a cross whose attractive limit point is $s$. By our assumption there exist an infinite subset $B_0\subset B$ and distinct points $r_i\in f^{-1}(\Lambda_YH)$ such that $\forall\, g\in B_0\ : g(r_i)\not\in f^{-1}(\Lambda_YH)\ (i=1,2)$. Then one of them, $z\in\{r_1,r_2\}$, is not repulsive for the limit cross of $B_0$. So for every open neighborhood $U_s$ of $s$ there exists an infinite subset $B'_0\subset B_0$ such that $\forall\, g\in B'_0\ :\ g(z)\in U_s\setminus f^{-1}(\Lambda_YH)$. Let $K$ be a compact fundamental set for the action $H\curvearrowright(Y\setminus\Lambda_YH)$. Since $X$ is compact and $f$ is equivariant, the set $K_1=f^{-1}(K)$ is a compact fundamental set for the action of $H$ on $X\setminus f^{-1}(\Lambda_YH)$.
Therefore for every $g\in B$ there exists $h\in H$ such that $hg(z)\in K_1$. The set $$A_s=\{h\in H : h(K_1)\cap U_s\not=\emptyset\}$$ is infinite for every open neighborhood $U_s$. Indeed, if this is not true for some $U_s$ then by the argument above the orbit $A_s(K_1)$ intersects every neighborhood $U^*_s$ of $s$ such that $U^*_s\subset U_s$. Then by compactness of $K_1$ we would have $h^{-1}(s)\in K_1$ for some $h\in H$, implying that $f(s)\in h(K)$. This is impossible, as $\Lambda_Y(H)\cap h(K)=\emptyset$ for every $h\in H$. Therefore there exist infinitely many $h\in H$ such that $h(K_1)\cap U_s\not=\emptyset$ for every neighborhood $U_s$ of $s$. Thus $s\in\Lambda_XH$, a contradiction. $\Box$ \bigskip \noindent The main result of the paper is the following. \begin{thm}\label{crit} Two $32$-actions of $G$ on compacta $X$ and $Y$ with peripheral structures $\mathcal P$ and $\mathcal Q$ admit a pullback space $Z$ if and only if one of the following two conditions is satisfied (and hence both of them): \begin{itemize} \item [1.] every $P\in\mathcal P$ acts $2$-cocompactly on $\Lambda_YP$; \item [2.] every $Q\in\mathcal Q$ acts $2$-cocompactly on $\Lambda_XQ$. \end{itemize} \end{thm} \par \noindent{\it Proof: } After Theorem \ref{suffcond} we only need to show that if the pullback space $Z$ exists then every $Q\in\mathcal Q$ acts $2$-cocompactly on $\Lambda_XQ$. Suppose that $G$ admits a pullback action $G\curvearrowright Z$ for two $32$-actions on $X$ and $Y$, and let $\displaystyle X\overset{f_1}{\leftarrow}Z\overset{f_2}{\rightarrow}Y$ be the equivariant continuous maps. By Lemma \ref{quotient} we may assume that the action $G\curvearrowright Z$ is $2$-cocompact.
Let $q\in Y$ be a parabolic point and $Q=\mathrm{St}_Gq\in\mathcal Q$. By Corollary \ref{preimp}, $f_2^{-1}(q)$ is the limit set $\Lambda_ZQ$. Since $(Y\setminus\{q\})/Q$ is compact, $Z$ is compact and $f_2$ is equivariant and continuous, $Q$ acts cocompactly on $Z\setminus f_2^{-1}(q)=Z\setminus\Lambda_ZQ$. The set $f_1(\Lambda_ZQ)$ is a closed $Q$-invariant subset of $X$, so $\Lambda_XQ\subset f_1(\Lambda_ZQ)$. Since $f_1$ is continuous and equivariant, we have $\Lambda_XQ=f_1(\Lambda_ZQ)$ and the action $Q\curvearrowright X\setminus\Lambda_XQ$ is cocompact. By Lemma \ref{cocomext} there exists a connected, fine, hyperbolic graph $\Gamma_1$ corresponding to the $32$-action $G\curvearrowright X$ such that the action $G\curvearrowright\Gamma^1_1$ is proper and cofinite. In the following lemma we use the notion of a dynamically bounded subgroup introduced in \cite[Section 9.1]{GePo3}. Recall the topological version of this definition: a subgroup $Q$ of $G$ is said to be {\it dynamically bounded} for the action on $X$ if there exist finitely many proper closed subsets $F_i$ of $X$ such that $\forall\, g\in G\ \exists i\ :\ g(\Lambda_XQ)\subset F_i$. Considering the action of $Q$ on $\widetilde X=X\cup\Gamma_1$ we have: \begin{lem}\label{dynbound} If a subgroup $Q$ of $G$ acts cocompactly on $X\setminus\Lambda_XQ$ then it acts cocompactly on $\widetilde X\setminus\Lambda_XQ$. \end{lem} \noindent {\it Proof of the lemma.} The parabolic subgroup $Q$ is obviously dynamically bounded for the action on $Y$.
Indeed, since $Y$ is compact there are finitely many closed proper subsets $R_i$ of $Y$ such that $\forall\, g\in G\ \exists i\in\{1,...,m\}\ :\ g(q)\in R_i$. Since $F_i=f_1f_2^{-1}(R_i)$ is closed in $X$ and the maps $f_1, f_2$ are surjective and equivariant, we have $\displaystyle X=\bigcup_{i\in\{1,...,m\}} F_i$ and $g(\Lambda_XQ)=g(f_1f_2^{-1}(\Lambda_YQ))\subset F_i$ for some $i\in\{1,...,m\}$. So $Q$ is dynamically bounded for the action on $X$. The proof of \cite[Proposition 9.1.3]{GePo3} implies that if $Q$ is dynamically bounded on $X$ and acts cocompactly on $X\setminus\Lambda_XQ$ then it acts cocompactly on $\widetilde X\setminus\Lambda_XQ$. $\Box$ \noindent {\bf Remark.} The metrisability assumption stated in \cite[Proposition 9.1.3]{GePo3} was only used to satisfy another (metric) definition of dynamical boundedness which we do not use here. {\it End of the proof of Theorem \ref{crit}.} By Lemma \ref{dynbound} the action $Q\curvearrowright(\widetilde X\setminus\Lambda_XQ)$ is cocompact. Let $K\subset\widetilde X\setminus\Lambda_XQ$ be a compact fundamental set for this action. By Corollary \ref{equiv} it is enough to prove that $\vert C^1/Q\vert<\infty$, where $C^1$ is the set of edges of $C=\mathrm{Hull}_X(\Lambda_XQ)$. Let $e=(a,b)\in C^1$. Then one of its vertices, say $a$, is not in $\Lambda_XQ$. By the definition of $C$ there exists an infinite eventual geodesic $\gamma$ such that $e\subset\gamma(\mathbb Z)$ and $\gamma(\{-\infty,+\infty\})\subset\Lambda_XQ$. So there exists $g\in Q$ such that $g(a)\in K\cap C$ and $ge\subset g\gamma(\mathbb Z)$. We have $\Lambda_XQ\cap K=\emptyset$.
By the exactness of the uniformity $\mathcal U$ of the topology of $\widetilde X$ there exists an entourage $\mathbf u\in\mathcal U$ such that $\mathbf u\cap(K\times\Lambda_XQ)=\emptyset$. By the visibility property there exists a finite set $F\subset\Gamma^1$ such that $\mathbf u_F\subset\mathbf u$. So every geodesic from $K$ to $\Lambda_XQ$ contains an edge from $F$. Hence $g\gamma(\mathbb Z)$ contains a finite simple geodesic subarc $l$ such that $g(a)\in l^0$ and $\partial l\subset F^0$. Since the graph $\Gamma$ is fine, there are finitely many geodesic simple arcs joining the vertices of $F^0$. So the set $E$ of the edges of these arcs is finite. We have proved that $Q(E)=C^1$, and so $\vert C^1/Q\vert<\infty$. Theorem \ref{crit} is proved. $\Box$ \section{Corollaries} \label{equivsec} \noindent The goal of this section is the following list of corollaries. \begin{cor}\label{equivac} Let a group $G$ act $3$-discontinuously and $2$-cocompactly on compacta $X$ and $Y$. Let $\mathcal P$ and $\mathcal Q$ be the peripheral structures for the actions on $X$ and $Y$ respectively. Then the following statements are true. \begin{itemize} \item [a)] Suppose that one of the conditions 1) or 2) of Theorem \ref{suffcond} is satisfied. Then $G$ is relatively hyperbolic with respect to the system $\mathcal R=\{P\cap Q\ :\ P\in\mathcal P,\ Q\in\mathcal Q,\ \vert P\cap Q\vert=\infty\}$. \item [b)] Every $P\in\mathcal P$ acts $2$-cocompactly on $\Lambda_YP$ if and only if every $Q\in\mathcal Q$ acts $2$-cocompactly on $\Lambda_XQ$.
\item [c)] Assume that $\forall\, P\in\mathcal P\ \exists Q\in\mathcal Q : P<Q$. Then the induced action of every $Q\in\mathcal Q$ on $\Lambda_XQ$ is $2$-cocompact. \item [d)] Suppose that for every $P\in\mathcal P$ there exists $Q\in\mathcal Q$ such that $P<Q$. Then there exists an equivariant continuous map $f:X\to Y$. Furthermore, the induced action of every $Q\in\mathcal Q$ on $\Lambda_XQ$ is $2$-cocompact. \item [e)] If $\mathcal P=\mathcal Q$ then the actions are equivariantly homeomorphic. \item [f)] Let $G$ admit a $32$-action on a compactum $X$ and let $H<G$ be a parabolic subgroup for this action. Then we have: \begin{itemize} \item[f1)] If $G$ is finitely generated then for any other $32$-action $G\curvearrowright Y$ the subgroup $H$ is dynamically quasiconvex. \item[f2)] If $G$ is not finitely generated, the statement f1) is not true in general. \end{itemize} \end{itemize} \end{cor} \par \noindent{\it Proof: } a) directly follows from Theorem \ref{suffcond}. b) By Theorem \ref{suffcond} the pullback space $Z$ exists. Then by Theorem \ref{crit} every $Q\in\mathcal Q$ acts $2$-cocompactly on $\Lambda_XQ$. c) By the assumptions the elements of $\mathcal P$ act parabolically on $Y$, so they all act $2$-cocompactly on their limit sets in $Y$. Then by b) the elements of $\mathcal Q$ act $2$-cocompactly on their limit sets in $X$. d) By the assumptions the elements of $\mathcal P$ act parabolically on $Y$, so they act $2$-cocompactly on their limit sets in $Y$. By Theorem \ref{suffcond} there exists a $32$-action $G\curvearrowright Z$ which is a pullback action for the actions on $X$ and $Y$. We have two equivariant continuous maps $\pi:Z\to X$ and $\sigma:Z\to Y$. We claim that $\pi$ is injective. Indeed, every point $x$ of $X$ is either conical or bounded parabolic \cite[Main Theorem, b]{Ge1}.
If $x\in X$ is conical then $\pi^{-1}(x)$ is a single point \cite[Proposition 7.5.2]{Ge2}. If $p\in X$ is bounded parabolic and $P=\mathrm{St}_Gp$, then by Corollary \ref{preimp} $\pi^{-1}(p)=\Lambda_Z(P)$. By Theorem \ref{suffcond} the peripheral structure for the action $G\curvearrowright Z$ is $\mathcal R=\mathcal P\cap\mathcal Q=\mathcal P$. By the assumption, $P\in\mathcal R$ is parabolic for the action on $Z$, so $\pi^{-1}(p)$ is a single point. Thus $\pi$ is injective and hence a homeomorphism. So the map $f=\sigma\circ\pi^{-1}:X\to Y$ is equivariant and continuous. e) follows from d). f1) Indeed, if $G$ is finitely generated the Floyd boundary $\partial_fG$ is a universal pullback space for any two $32$-actions of $G$ on $X$ and $Y$ \cite[Map theorem]{Ge1}. So if $H$ is a maximal parabolic subgroup for the action on $X$, by the necessary condition of Theorem B it acts $2$-cocompactly on $\Lambda_YH$. Then by Theorem A it is dynamically quasiconvex. f2) Proposition \ref{freeinf} provides a counterexample. Indeed, there is no pullback action for two $32$-actions of the free group $F_\infty$ on two spaces. By Theorem B there exists a parabolic subgroup of one of the actions which does not act $2$-cocompactly on its limit set for the other one. Again by Theorem A this subgroup is not dynamically quasiconvex for the second action. The Corollary is proved. $\Box$ \begin{rems}\label{conapar} {\rm The statements d) and e) give rise to more restrictive similarity properties of $32$-actions given by equivariant maps. The statement d) was already known in several partial cases. First, if $G$ is finitely generated then it follows from the universality of the Floyd boundary.
Indeed, by \cite{Ge2} there exist continuous equivariant (Floyd) maps $F_1:\partial G\to X$ and $F_2:\partial G\to Y$, where $\partial G$ is the Floyd boundary of the Cayley graph of $G$ (with respect to some admissible scalar function). By \cite[Theorem A]{GePo1}, for a parabolic point $p\in\Lambda_XG$ the set $F_1^{-1}(p)$ is the limit set $\Lambda_{\partial G}P$ of the stabilizer $P=\mathrm{St}_Gp$ for the action $G\curvearrowright\partial G$. Since $F_2$ is equivariant, the set $F_2(\Lambda_{\partial G}P)$ is contained in the limit set $\Lambda_YQ=\{q\}$. So the map $f=F_2F_1^{-1}$ is well-defined on the set of parabolic points of $\Lambda_XG$. Furthermore, $f$ is $1{-}{\rm to}{-}1$ at every conical point of $\Lambda_XG$ \cite[Proposition 7.5.2]{Ge2}. Since all the spaces are compacta, the map $f$ is continuous. It is also equivariant, as the $F_i$ are equivariant. So the map $f$ satisfies the claim in this case. The statement of d) with the additional assumptions that $G$ is countable and $X$ and $Y$ are metrisable was proved in \cite{MOY}. Their proof uses the condition e), which was assumed to be known in this case. The statement e) generalizes the last part of the main result of \cite{Ya} to the case of non-finitely generated groups. It follows from e) that for a $32$-action of $G$ on $X$ whose set of parabolic points is $\mathrm{Par}_X$, there exists an equivariant homeomorphism from $X$ to the Bowditch boundary of the graph $\Gamma$ whose vertex set is $G\cup\mathrm{Par}_X$.} \end{rems} \begin{thebibliography}{Main25} \addcontentsline{toc}{chapter}{Bibliography} \bibitem[BR]{BR} O. Baker and T. Riley, {\sl Cannon-Thurston maps do not always exist}, arXiv:1206.0505, to appear in Forum of Math., Sigma, 1, e3, 2013.
\bibitem[Bo1]{Bo1} B. H. Bowditch, {\sl Relatively hyperbolic groups}, Internat. J. Algebra Comput. 22 (2012), no. 3, 1250--1316.
\bibitem[Bo2]{Bo2} B. H. Bowditch, {\sl Convergence groups and configuration spaces}, in ``Group theory down under'' (ed. J. Cossey, C. F. Miller, W. D. Neumann, M. Shapiro), de Gruyter (1999), 23--54.
\bibitem[Bo3]{Bo3} B. H. Bowditch, {\sl A topological characterisation of hyperbolic groups}, J. Amer. Math. Soc. 11 (1998), no. 3, 643--667.
\bibitem[Bourb]{Bourb} N. Bourbaki, {\sl Topologie G\'en\'erale}, ch. 1--4, Diffusion, Paris, 1971.
\bibitem[Fa]{Fa} B. Farb, {\sl Relatively hyperbolic groups}, GAFA 8 (1998), no. 5, 810--840.
\bibitem[F]{F} W. J. Floyd, {\sl Group completions and limit sets of Kleinian groups}, Inventiones Math. 57 (1980), 205--218.
\bibitem[Ge1]{Ge1} V. Gerasimov, {\sl Expansive convergence groups are relatively hyperbolic}, GAFA 19 (2009), 137--169.
\bibitem[Ge2]{Ge2} V. Gerasimov, {\sl Floyd maps to the boundaries of relatively hyperbolic groups}, GAFA 22 (2012), 1361--1399.
\bibitem[GePo1]{GePo1} V. Gerasimov, L. Potyagailo, {\sl Quasi-isometric maps and Floyd boundaries of relatively hyperbolic groups}, J. Eur. Math. Soc. 15 (2013), no. 6, 2115--2137.
\bibitem[GePo2]{GePo2} V. Gerasimov, L. Potyagailo, {\sl Non-finitely generated relatively hyperbolic groups and Floyd quasiconvexity}, arXiv:1008.3470 [math.GR], 2010, to appear in Groups, Geometry and Dynamics.
\bibitem[GePo3]{GePo3} V. Gerasimov, L. Potyagailo, {\sl Quasiconvexity in relatively hyperbolic groups}, arXiv:1103.1211 [math.GR], 2011.
\bibitem[Gr]{Gr} M. Gromov, {\sl Hyperbolic groups}, in ``Essays in Group Theory'' (ed. S. M. Gersten), M.S.R.I. Publications No. 8, Springer-Verlag (1987), 75--263.
\bibitem[Gr1]{Gr1} M. Gromov, {\sl Asymptotic invariants of infinite groups}, in ``Geometric Group Theory II'', LMS Lecture Notes 182, Cambridge University Press (1993).
\bibitem[LS]{LS} R. C. Lyndon, P. E. Schupp, {\sl Combinatorial Group Theory}, Springer-Verlag, 1977.
\bibitem[Ka]{Ka} A. Karlsson, {\sl Free subgroups of groups with non-trivial Floyd boundary}, Comm. Algebra 31 (2003), 5361--5376.
\bibitem[Ke]{Ke} J. L. Kelley, {\sl General Topology}, Graduate Texts in Mathematics 27, Springer-Verlag, New York, 1975.
\bibitem[MOY]{MOY} Y. Matsuda, S. Oguni and S. Yamagata, {\sl Blowing up and down compacta with geometrically finite convergence actions of a group}, preprint, arXiv:1201.6104.
\bibitem[M]{M} M. Mitra, {\sl Cannon-Thurston maps for hyperbolic group extensions}, Topology 37 (1998), no. 3, 527--538.
\bibitem[Os]{Os} D. Osin, {\sl Relatively hyperbolic groups: intrinsic geometry, algebraic properties and algorithmic problems}, Mem. AMS 179 (2006), no. 843, vi+100 pp.
\bibitem[Tu1]{Tu1} P. Tukia, {\sl Convergence groups and Gromov's metric hyperbolic spaces}, New Zealand J. Math. 23 (1994), 157--187.
\bibitem[Tu2]{Tu2} P. Tukia, {\sl Conical limit points and uniform convergence groups}, J. Reine Angew. Math. 501 (1998), 71--98.
\bibitem[W]{W} A. Weil, {\sl Sur les espaces \`a structure uniforme et sur la topologie g\'en\'erale}, Paris, 1937.
\bibitem[Ya]{Ya} A. Yaman, {\sl A topological characterisation of relatively hyperbolic groups}, J. Reine Angew. Math. 566 (2004), 41--89.
\end{thebibliography} \end{document}
\begin{document} \title{Linear canonical wavelet transform and the associated uncertainty principles} \begin{abstract} We define a novel time-frequency analyzing tool, namely the linear canonical wavelet transform (LCWT), and study some of its important properties, such as the inner product relation and the reconstruction formula, and also characterize its range. We obtain Donoho-Stark's and Lieb's uncertainty principles for the LCWT and give a lower bound for the measure of its essential support. We also give Shapiro's mean dispersion theorem for the proposed LCWT. \end{abstract} {\textit{Keywords}:} Linear canonical transform; Linear canonical wavelet transform; Uncertainty principle; Shapiro's theorem\\ {\textit{AMS Subject Classification 2020}:} 42B10, 42C40, 43A32 \section{Introduction} We first mention below some important abbreviations that will be used throughout this paper.\\~\\ \textit{List of Abbreviations}\\ FT - Fourier transform\\ FrFT - Fractional Fourier transform\\ LCT - Linear canonical transform\\ WT - Wavelet transform\\ FrWT - Fractional wavelet transform\\ WLCT - Windowed linear canonical transform\\ LCWT - Linear canonical wavelet transform\\ ONS - Orthonormal sequence\\ RKHS - Reproducing kernel Hilbert Space\\ UP - Uncertainty principle\\~\\ As a generalization of the FT \cite{debnath2014integral} and the FrFT \cite{almeida1994fractional, CHEN202171, i1998fractional}, the LCT is a four-parameter family of linear integral transforms proposed by Moshinsky and Quesne \cite{moshinsky1971linear} and is considered an important tool for non-stationary signal processing. Because of its extra degrees of freedom, as compared to the FT and the FrFT, its applications can be found in a number of fields, including signal separation \cite{sharma2006signal}, signal reconstruction \cite{wei2014reconstruction}, filter designing \cite{barshan1997optimal} and many more. Recently, in \cite{GAO2021108233}, the authors studied the octonion linear canonical transform.
For more details on the LCT and its applications we refer the reader to \cite{healy2015linear}. Even though the wavelet transform (WT) \cite{debnath2002wavelet} is a potential tool for the analysis of non-stationary signals, it is not well suited to signals whose energy is not well concentrated in the time-frequency plane, for example the chirp-like signals which are ubiquitous in nature \cite{dai2017new}. On the other hand, for a signal whose energy is not well concentrated in the frequency domain, the LCT is an appropriate tool. However, because of its global kernel, the LCT is not capable of indicating the time localization of its spectral components, and thus it is not suitable for processing non-stationary signals whose LCT spectral characteristics change with time. The WLCT \cite{kou2012windowed} was proposed to overcome this drawback. In this approach, the original signal is first segmented with a time localization window, and the LCT spectral analysis is then performed on these segments. The WLCT is capable of offering a joint signal representation in both the time and the LCT domain, but its fixed window width limits its practical application: it cannot provide good time resolution and spectral resolution simultaneously. Thus, to circumvent these limitations of the LCT, the WT and the WLCT, we propose a novel LCWT. In fact, Wei et al. \cite{wei2014generalized} and Guo et al. \cite{guo2018linear} generalized the FrWT, studied in \cite{shi2012novel}, to the LCWT. Wei et al. \cite{wei2014generalized} studied its resolution in the time and linear canonical domains and Guo et al. \cite{guo2018linear} studied its properties on Sobolev spaces. Dai et al. \cite{dai2017new} gave a new definition of the FrWT (see also \cite{prasad2014generalized}), which we generalize in the context of the LCT, and we study the associated UP. In harmonic analysis, the UP is a relation between a function and its FT which says that a non-zero function and its FT cannot both be very well localized simultaneously.
This general fact is interpreted in several different ways; for this we refer the reader to the survey paper by Folland and Sitaram \cite{folland1997uncertainty}. Shapiro, in \cite{shapiro1991uncertainty}, studied the localization of an ONS and proved that if an ONS $\{\phi_{k}\}$ in $L^2(\mathbb{R})$ and the sequence of its FTs $\{\hat{\phi}_{k}\}$ are such that their means and dispersions are uniformly bounded, then $\{\phi_{k}\}$ is finite. Jaming and Powell \cite{jaming2007uncertainty} proved a quantitative version of Shapiro's theorem, which says that for an ONS $\{\phi_{k}\}$ in $L^2(\mathbb{R})$ and $N\in\mathbb{N},$ $$\sum_{k=1}^{N}\left(\|t\phi_{k}\|_{L^2(\mathbb{R})}^2+\|\xi\hat{\phi}_{k}\|_{L^2(\mathbb{R})}^2\right)\geq\frac{(N+1)^2}{2\pi}.$$ A multivariate quantitative version of Shapiro's theorem for generalized dispersions was proved by Malinnikova \cite{malinnikova2010orthonormal}. It states that if $\{\phi_{k}\}$ is an ONS in $L^2(\mathbb{R}^d),$ $N\in\mathbb{N}$ and $p>0,$ then there exists a constant $C_{p,d}>0$ for which $$\sum_{k=1}^{N}\left(\||t|^{\frac{p}{2}}\phi_{k}\|_{L^2(\mathbb{R}^d)}^2+\||\xi|^{\frac{p}{2}}\hat{\phi}_{k}\|_{L^2(\mathbb{R}^d)}^2\right)\geq C_{p,d}N^{1+\frac{p}{2d}}.$$ Recently, in this direction, Shapiro's mean dispersion theorem has been proved for many integral transforms, such as the short-time FT \cite{lamouchi2016time}, the WT \cite{hamadi2017shapiro}, the Hankel WT \cite{b2018uncertainty}, the Hankel Stockwell transform \cite{hamadi2020uncertainty}, the shearlet transform \cite{nefzi2021shapiro}, etc. The main objectives of this paper are (i) to define a new time-frequency analyzing tool, namely the LCWT, which generalizes the FrWT studied in \cite{dai2017new} in the context of the LCT, and to study some of its basic properties along with the inner product relation and the reconstruction formula, and also to characterize its range; (ii) to study the time-LCT-frequency analysis and the associated constant $Q$-factor; (iii) to establish an UP for the LCWT for a finite energy signal.
The UP for the LCWT can be derived from the UP of the LCT following the strategy adopted by Wilczok \cite{wilczok2000new} and Verma et al. \cite{verma2021note} while deriving the UPs for the WT and the FrWT, respectively. Similar UPs have been established for several integral transforms, such as the fractional WT \cite{bahri2017logarithmic}, the non-isotropic angular Stockwell transform \cite{shah2019non}, etc. However, we are interested in proposing an uncertainty principle directly for the LCWT, without using the UP associated with the LCT. In this regard, we establish the Donoho-Stark and Lieb UPs for the LCWT, which in turn provide a lower bound for the measure of the essential support of the LCWT. See also \cite{kou2012windowed},\cite{huo2019uncertainty} for similar results in the case of other integral transforms. Our final objective is (iv) to study Shapiro's mean dispersion theorem for the LCWT. The paper is arranged as follows. In section 2, we recall the definition of the LCT and some of its properties. In section 3, we define the LCWT and study some of its basic properties, including the inner product relation and the reconstruction formula, and we characterize the range of the transform. The Donoho-Stark and Lieb UPs for the proposed LCWT are studied in section 4. Section 5 is devoted to Shapiro's mean dispersion theorem for the LCWT. Finally, in section 6, we conclude the paper. \section{Preliminaries} We briefly recall the definition of the LCT and those of its properties that we will use in the sequel.
\subsection{LCT} \begin{definition} The LCT of $f\in L^2(\mathbb{R})$, with respect to a matrix parameter $$M= \begin{bmatrix} A & B\\ C & D \end{bmatrix}, AD-BC=1,$$ is defined as $$(\mathcal{L}^{M}f)(\xi)=\begin{cases} \displaystyle\int_{\mathbb{R}}f(t)K_{M}(t,\xi)d t,~B\neq 0,\\ \sqrt{D}e^{\frac{i}{2}CD\xi^2}f(D\xi),~B=0, \end{cases}$$ where $K_{M}(t,\xi)$ is the kernel given by \begin{equation} K_{M}(t,\xi)= \frac{1}{\sqrt{2\pi iB}} e^{\frac{i}{2}\left(\frac{A}{B}t^2-\frac{2}{B}\xi t+\frac{D}{B}\xi^2\right)},~\xi\in\mathbb{R}. \end{equation} \end{definition} Among the several important properties of the LCT, the one that will be used repeatedly in the sequel is Parseval's formula \begin{equation}\label{P3ParsevalLCT} \int_{\mathbb{R}}f(t)\overline{g(t)}dt=\int_{\mathbb{R}}(\mathcal{L}^Mf)(\xi)\overline{(\mathcal{L}^Mg)(\xi)}d\xi,~\mbox{where}~f,~g\in L^2(\mathbb{R}). \end{equation} In particular, if $f=g,$ then we have Plancherel's formula \begin{equation}\|f\|_{L^2(\mathbb{R})}=\|\mathcal{L}^Mf\|_{L^2(\mathbb{R})}. \end{equation} The LCT satisfies the additivity property, i.e., \begin{equation} \mathcal{L}^M\mathcal{L}^Nf=\mathcal{L}^{MN}f,~\mbox{where} ~f\in L^2(\mathbb{R}), \end{equation} and the inversion property \begin{equation} \mathcal{L}^{M^{-1}}\left(\mathcal{L}^{M}f\right)=f, \end{equation} where $M^{-1}$ denotes the inverse of $M.$ For convenience, we henceforth denote the matrix $M$ by $(A,B;C,D).$ \section{LCWT} We propose a new integral transform, namely the LCWT. The definition is mainly motivated by the definition of the FrWT given by Dai et al. \cite{dai2017new}. We shall discuss some of its basic properties along with the inner product relation and the reconstruction formula, and prove that its range is a RKHS. Motivated by the definition of the admissible wavelet pair in \cite{daubechies1992ten}, we first define its analogue in the LCT setting.
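Before turning to wavelets, the LCT definition and the Plancherel identity recalled above can be checked numerically. The sketch below is purely illustrative (the Gaussian test signal, the matrix $M$ and all truncation and grid parameters are our own assumptions, not part of the development): it approximates $(\mathcal{L}^Mf)(\xi)$ by a Riemann sum and compares the energies $\|f\|^2_{L^2(\mathbb{R})}$ and $\|\mathcal{L}^Mf\|^2_{L^2(\mathbb{R})}$.

```python
import cmath
import math

def lct(f, A, B, C, D, xi, T=8.0, n=1600):
    """Riemann-sum approximation of (L^M f)(xi) for B != 0,
    truncating the integral to [-T, T]; f is a fast-decaying callable."""
    assert abs(A * D - B * C - 1.0) < 1e-12          # symplectic constraint
    dt = 2.0 * T / n
    total = 0j
    for k in range(n + 1):
        t = -T + k * dt
        total += f(t) * cmath.exp(0.5j * ((A / B) * t * t
                                          - (2.0 / B) * xi * t
                                          + (D / B) * xi * xi))
    return total * dt / cmath.sqrt(2j * math.pi * B)

f = lambda t: math.exp(-t * t / 2.0)                 # Gaussian test signal
A, B, C, D = 1.0, 1.0, 1.0, 2.0                      # AD - BC = 1

# squared L^2 norms: ||f||^2 = sqrt(pi), and Plancherel predicts equality
dxi = 0.05
energy_f = sum(f(-8.0 + k * 0.01) ** 2 for k in range(1601)) * 0.01
energy_F = sum(abs(lct(f, A, B, C, D, -10.0 + k * dxi)) ** 2
               for k in range(401)) * dxi
print(energy_f, energy_F)                            # both close to sqrt(pi)
```

The symplectic condition $AD-BC=1$ is asserted explicitly; the truncation and step sizes are chosen so that the Gaussian tails and the chirp oscillations are both resolved.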
\begin{definition} A pair $\{\psi,\phi\}$ of functions in $L^2(\mathbb{R})$ is said to be an admissible linear canonical wavelet pair (ALCWP) if it satisfies the following admissibility condition: \begin{equation}\label{P3ACPair} C_{\psi,\phi,M}:=\int_{\mathbb{R^+}}\overline{(\mathcal{L}^M\psi)\left(\frac{\xi}{a}\right)}(\mathcal{L}^M\phi)\left(\frac{\xi}{a}\right)\frac{da}{a} \end{equation} is a non-zero complex constant independent of $\xi\in\mathbb{R}$ satisfying $|\xi|=1.$ In case $\psi=\phi,$ we denote $C_{\psi,\psi,M}$ by $C_{\psi,M}$, and the admissibility condition reduces to the requirement that \begin{equation}\label{P3AC} C_{\psi,M}:=\int_{\mathbb{R^+}}\left|(\mathcal{L}^M\psi)\left(\frac{\xi}{a}\right)\right|^2\frac{da}{a} \end{equation} is a positive constant independent of $\xi$ satisfying $|\xi|=1.$ We call $\psi\in L^2(\mathbb{R})$ satisfying equation (\ref{P3AC}) an admissible linear canonical wavelet (ALCW). \end{definition} We now give the definition of the novel LCWT. \begin{definition} Let $f\in L^2(\mathbb{R})$ and let $M=(A,B;C,D)$ be a matrix with $AD-BC=1~\mbox{and}~B\neq 0.$ The LCWT of $f$ with respect to $M$ and an ALCW $\psi$ is defined by $$(W^M_{\psi}f)(a,b)=e^{-\frac{iA}{2B}b^2}\left\{f(t)e^{\frac{iA}{2B}t^2}\star\overline{\sqrt{a}\psi(-at)e^{\frac{iA}{2B}(at)^2}}\right\}(b),~a\in\mathbb{R^+},b\in\mathbb{R},$$ where $\star$ denotes the convolution given by $$(f\star g)(\nu)=\int_{\mathbb{R}}f(x)g(\nu-x)dx,~\nu\in\mathbb{R}.$$ \end{definition} Equivalently, \begin{eqnarray*} \left(W^M_\psi f\right)(a,b)&=&e^{-\frac{iA}{2B}b^2}\int_{\mathbb{R}}f(t)e^{\frac{iA}{2B}t^2}\overline{\sqrt{a}\psi(-a(b-t))e^{\frac{iA}{2B}(a(t-b))^2}}dt\\ &=&\int_{\mathbb{R}}f(t)\overline{e^{-\frac{iA}{2B}\{(t^2-b^2)-(a(t-b))^2\}}\sqrt{a}\psi(a(t-b))}dt\\ &=&\int_{\mathbb{R}}f(t)\overline{\psi^M_{a,b}(t)}dt, \end{eqnarray*} where, with $\psi_{a,b}(t)=\sqrt{a}\psi(a(t-b)),$ \begin{equation}\label{P3DW} \psi^M_{a,b}(t)=e^{-\frac{iA}{2B}\{(t^2-b^2)-(a(t-b))^2\}}\psi_{a,b}(t).
\end{equation} Thus, we have the equivalent definition of the LCWT \begin{equation}\label{P3WTDef} (W^M_{\psi}f)(a,b)=\langle f,\psi^M_{a,b} \rangle_{L^2(\mathbb{R})}. \end{equation} It is to be noted that, depending on the choice of the matrix $M,$ the LCWT reduces to known integral transforms: \begin{itemize} \item For $M=(\cos\alpha,\sin\alpha;-\sin\alpha,\cos\alpha),\alpha\neq n\pi,$ we obtain the FrWT as discussed in \cite{dai2017new}. \item For $M=(0,1;-1,0),$ we obtain the traditional WT \cite{daubechies1992ten}. \end{itemize} We now establish a fundamental relation between the LCWT and the LCT. This relation will be useful in obtaining the resolution of the time and linear canonical spectra in the time-LCT-frequency plane, as well as the inner product relation associated with the LCWT. \begin{proposition} Let $W_{\psi}^Mf$ and $\mathcal{L}^{M}f$ be respectively the LCWT and the LCT of $f\in L^2(\mathbb{R}).$ Then \begin{equation}\label{P3LCTWT} \mathcal{L}^M\left((W_{\psi}^Mf)(a,\cdot)\right)(\xi)=\frac{\sqrt{-2\pi iB}}{\sqrt{a}}e^{\frac{iD}{2B}(\frac{\xi}{a})^2}(\mathcal{L}^M f)(\xi)\overline{(\mathcal{L}^M \psi)\left(\frac{\xi}{a}\right)}.
\end{equation} \end{proposition} \begin{proof} From the definition of the LCT and $\psi_{a,b}^M,$ it follows that \begin{eqnarray*} \left(\mathcal{L}^M\psi_{a,b}^M\right)(\xi)&=&\int_{\mathbb{R}}\sqrt{a}\psi(a(t-b))\sqrt{\frac{1}{2\pi iB}}e^{\frac{i}{2}\left\{\frac{Ab^2}{B}+\frac{A}{B}(a(t-b))^2-\frac{2}{B}\xi t+\frac{D}{B}\xi^2\right\}}dt\\ &=& \int_{\mathbb{R}}\sqrt{a}\psi(at)\sqrt{\frac{1}{2\pi iB}}e^{\frac{i}{2}\left\{\frac{Ab^2}{B}+\frac{A}{B}(at)^2-\frac{2}{Ba}(at+ab)\xi +\frac{D}{B}\xi^2\right\}}dt\\ &=&\frac{1}{\sqrt{a}}\int_{\mathbb{R}}\psi(t)\sqrt{\frac{1}{2\pi iB}}e^{\frac{i}{2}\left(\frac{Ab^2}{B}-\frac{2}{B}\xi b+\frac{D}{B}\xi^2\right)}e^{\frac{i}{2}\left(\frac{At^2}{B}-\frac{2}{B}t\left(\frac{\xi}{a}\right)+\frac{D}{B}\left(\frac{\xi}{a}\right)^2\right)}e^{\frac{-iD}{2B}\left(\frac{\xi}{a}\right)^2}dt\\ &=&\frac{1}{\sqrt{a}}e^{\frac{-iD}{2B}\left(\frac{\xi}{a}\right)^2}\sqrt{2\pi iB}\int_{\mathbb{R}}\psi(t)K_{M}(b,\xi)K_{M}\left(t,\frac{\xi}{a}\right)dt. \end{eqnarray*} Therefore, we have \begin{equation}\label{P3LCTDW} \left(\mathcal{L}^M\psi_{a,b}^M\right)(\xi)=\frac{\sqrt{2\pi iB}}{\sqrt{a}}e^{\frac{-iD}{2B}\left(\frac{\xi}{a}\right)^2}K_{M}(b,\xi)\left(\mathcal{L}^M\psi\right)\left(\frac{\xi}{a}\right). \end{equation} Using (\ref{P3ParsevalLCT}) in (\ref{P3WTDef}), we get $$(W_{\psi}^Mf)(a,b)=\langle \mathcal{L}^Mf,\mathcal{L}^M\left(\psi_{a,b}^M\right) \rangle_{L^2(\mathbb{R})}.$$ Using equation (\ref{P3LCTDW}) together with the identity $\overline{K_{M}(b,\xi)}=K_{M^{-1}}(\xi,b),$ we have \begin{equation}\label{P3WTLCD} (W_{\psi}^Mf)(a,b)=\frac{\sqrt{-2\pi iB}}{\sqrt{a}}\int_{\mathbb{R}}e^{\frac{iD}{2B}\left(\frac{\xi}{a}\right)^2}\left(\mathcal{L}^Mf\right)(\xi)\overline{\left(\mathcal{L}^M\psi\right)\left(\frac{\xi}{a}\right)}K_{M^{-1}}(\xi,b)d\xi. \end{equation} Applying $\mathcal{L}^M$ with respect to $b$ and using the inversion property of the LCT, it follows that $$\mathcal{L}^M\left((W_{\psi}^Mf)(a,\cdot)\right)(\xi)=\frac{\sqrt{-2\pi iB}}{\sqrt{a}}e^{\frac{iD}{2B}(\frac{\xi}{a})^2}(\mathcal{L}^M f)(\xi)\overline{(\mathcal{L}^M \psi)\left(\frac{\xi}{a}\right)}.$$ This completes the proof.
\end{proof} \subsection{Time-LCT frequency analysis} From equation (\ref{P3WTDef}), it follows that if $\psi^M_{a,b}$ is supported in the time domain, then so is $(W^M_\psi f)(a,b).$ Also, from equation (\ref{P3WTLCD}), it follows that the LCWT can provide the local properties of $f(t)$ in the linear canonical domain. Thus the LCWT is capable of simultaneously producing the time-LCT frequency information and representing the signal in the time-LCT frequency domain. More precisely, suppose that $\psi$ and $\mathcal{L}^M \psi$ are window functions in the time and linear canonical domains, with centres $E_\psi$ and $E_{\mathcal{L}^M \psi}$ and radii $\Delta_{\psi}$ and $\Delta_{\mathcal{L}^M \psi},$ respectively. Then the centre and radius of $\psi^M_{a,b}$ are given respectively by $$E[\psi^M_{a,b}]=\frac{1}{a}E_\psi +b,$$ and $$\Delta[\psi^M_{a,b}]=\frac{1}{a}\Delta_\psi.$$ Similarly, the centre and radius of the window function $(\mathcal{L}^M\psi)\left(\frac{\xi}{a}\right)$ are given by $$E\left[(\mathcal{L}^M\psi)\left(\frac{\xi}{a}\right)\right]=aE_{\mathcal{L}^M\psi},$$ and $$\Delta\left[(\mathcal{L}^M\psi)\left(\frac{\xi}{a}\right)\right]=a\Delta_{\mathcal{L}^M\psi}.$$ Thus, the $Q$-factor of the window function in the linear canonical transform domain is $$Q=\frac{\Delta_{\mathcal{L}^M\psi}}{E_{\mathcal{L}^M\psi}},$$ which is independent of the scaling parameter $a$ for a given parameter $M.$ This is called the constant $Q$-property of the LCWT.
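The constant $Q$-property can be observed numerically. In the sketch below (a rough illustration; the Morlet-type window, the matrix $M$, and all truncation and grid parameters are our own assumptions), the centre and radius of $\xi\mapsto(\mathcal{L}^M\psi)(\xi/a)$ are computed as the first moment and standard deviation of its squared modulus, for two values of the scale $a$.

```python
import cmath
import math

def lct_val(psi, A, B, C, D, xi, T=8.0, n=1600):
    """Riemann-sum approximation of (L^M psi)(xi), B != 0."""
    dt = 2.0 * T / n
    total = 0j
    for k in range(n + 1):
        t = -T + k * dt
        total += psi(t) * cmath.exp(0.5j * ((A / B) * t * t
                                            - (2.0 / B) * xi * t
                                            + (D / B) * xi * xi))
    return total * dt / cmath.sqrt(2j * math.pi * B)

psi = lambda t: cmath.exp(-t * t / 2.0 + 5j * t)   # Morlet-type window (assumed)
A, B, C, D = 1.0, 1.0, 1.0, 2.0                    # AD - BC = 1

def q_factor(a):
    """Delta/E of the dilated linear canonical spectrum (L^M psi)(xi/a)."""
    xs = [-2.0 + 0.1 * k for k in range(241)]      # xi-grid covering the window
    w = [abs(lct_val(psi, A, B, C, D, x / a)) ** 2 for x in xs]
    m0 = sum(w)
    centre = sum(x * wk for x, wk in zip(xs, w)) / m0
    radius = math.sqrt(sum((x - centre) ** 2 * wk for x, wk in zip(xs, w)) / m0)
    return radius / centre

q1, q2 = q_factor(1.0), q_factor(2.0)
print(q1, q2)    # nearly identical: Q does not depend on the scale a
```

For this particular window one can check analytically that $|(\mathcal{L}^M\psi)(u)|\propto e^{-(u-5)^2/4}$, so the centre scales like $5a$, the radius like $a$, and $Q\approx 0.2$ for every $a$.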
\subsection{Time-LCT frequency resolution} The LCWT $(W_\psi^M f)(a,b)$ localizes the signal $f$ in the time window $$\left[\frac{1}{a}E_\psi+b-\frac{1}{a}\Delta_ \psi,\frac{1}{a}E_\psi+b+\frac{1}{a}\Delta_ \psi\right].$$ Similarly, the LCWT gives the linear canonical spectral content of $f$ in the window $$\left[aE_{\mathcal{L}^M\psi}-a\Delta_{\mathcal{L}^M\psi},aE_{\mathcal{L}^M\psi}+a\Delta_{\mathcal{L}^M\psi}\right].$$ Thus, the joint resolution of the LCWT in the time and linear canonical domains is given by the window $$\left[\frac{1}{a}E_\psi+b-\frac{1}{a}\Delta_ \psi,\frac{1}{a}E_\psi+b+\frac{1}{a}\Delta_ \psi\right]\times\left[aE_{\mathcal{L}^M\psi}-a\Delta_{\mathcal{L}^M\psi},aE_{\mathcal{L}^M\psi}+a\Delta_{\mathcal{L}^M\psi}\right],$$ with constant area $4\Delta_{\psi}\Delta_{\mathcal{L}^M\psi}$ in the time-LCT-frequency plane. It follows that for a given parameter $M,$ the window area depends only on the linear canonical admissible wavelet and is independent of the parameters $a$ and $b.$ It is to be noted, however, that the window gets narrower in time for large values of $a$ and wider for small values of $a.$ The window of the transform is therefore flexible, and hence it is capable of simultaneously providing the time and linear canonical domain information. This flexibility makes the proposed LCWT more advantageous than the WLCT, whose window is rigid. Some basic properties of the LCWT are given below.
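Discretizing the inner product (\ref{P3WTDef}) gives a direct, if naive, way to evaluate the LCWT numerically. The sketch below is an illustrative assumption on our part (Morlet-type wavelet, Gaussian signal, truncated Riemann sums); as a sanity check, it verifies the translation covariance stated as property 4 of the theorem that follows.

```python
import cmath
import math

def lcwt(f, psi, A, B, a, b, T=16.0, n=3200):
    """(W^M_psi f)(a,b) = <f, psi^M_{a,b}>, where
    psi^M_{a,b}(t) = exp(-i A/(2B) [(t^2 - b^2) - (a(t-b))^2]) sqrt(a) psi(a(t-b)).
    Only the ratio A/B enters; the integral is truncated to [-T, T]."""
    dt = 2.0 * T / n
    total = 0j
    for k in range(n + 1):
        t = -T + k * dt
        chirp = cmath.exp(-0.5j * (A / B) * ((t * t - b * b) - (a * (t - b)) ** 2))
        total += f(t) * (chirp * math.sqrt(a) * psi(a * (t - b))).conjugate()
    return total * dt

psi = lambda t: cmath.exp(-t * t / 2.0 + 5j * t)   # Morlet-type wavelet (assumed)
g = lambda t: math.exp(-(t - 1.0) ** 2)            # Gaussian signal (assumed)
A, B = 1.0, 2.0
a, b, y = 1.5, 0.7, 0.4

# translation covariance:
#   W^M_psi(g(. - y))(a,b) = e^{iAy(y-b)/B} W^M_psi(e^{iAyt/B} g)(a, b - y)
lhs = lcwt(lambda s: g(s - y), psi, A, B, a, b)
rhs = cmath.exp(1j * (A / B) * y * (y - b)) * \
      lcwt(lambda s: cmath.exp(1j * (A / B) * y * s) * g(s), psi, A, B, a, b - y)
print(abs(lhs - rhs))                              # close to zero
```

Since the shift $y$ is a multiple of the grid step and the integrands decay like Gaussians, the two discrete sums agree up to negligible boundary terms.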
\begin{theorem} Let $g,h\in L^2(\mathbb{R}),$ let $\psi$ and $\phi$ be ALCWs, and let $\alpha,\beta\in\mathbb{C}$, $\lambda> 0$ and $y\in \mathbb{R}.$ Then \begin{enumerate} \item $W^M_{\psi}(\alpha g+\beta h)=\alpha (W^M_{\psi}g)+\beta (W^M_{\psi}h)$ \item $W^M_{\alpha \psi+\beta\phi}(g)=\bar\alpha (W^M_{\psi}g)+\bar\beta (W^M_{\phi}g)$ \item $(W^M_{\psi}\delta_{\lambda} g)(a,b)=(W^{\tilde{M}}_{\delta_{1/\lambda}\psi}g)(a,b\lambda),~\mbox{where}~ (\delta_\lambda g)(t)=\sqrt{\lambda}g(\lambda t)$ and $\tilde{M}=(A,{\lambda}^2B;\frac{C}{{\lambda}^2},D)$ \item $(W^M_{\psi}\tau_{y} g)(a,b)=e^{\frac{iA}{B}y(y-b)}(W^{M}_{\psi} e^{\frac{iA}{B}yt}g)(a,b-y),~\mbox{where}~ (\tau_{y}g)(t)=g(t-y)$. \end{enumerate} \end{theorem} \begin{proof} The proofs follow by direct computation from the definition (\ref{P3WTDef}) and are omitted. \end{proof} If $\{\psi,\phi\}$ is an ALCWP such that $\psi$ and $\phi$ are ALCWs, and $f,g\in L^2(\mathbb{R})$ are orthogonal, then $W_{\psi}^M(f)$ and $W_{\phi}^M(g)$ are orthogonal in $L^2(\mathbb{R}^+\times\mathbb{R},dadb).$ This is justified by the following theorem, which further gives the resolution of the identity for the LCWT. \begin{theorem}(\textbf{Inner product relation for the LCWT}).\label{P3theo2.2} Let $\{\psi,\phi\}$ be an ALCWP such that $\psi$ and $\phi$ are ALCWs, and let $f,g\in L^2(\mathbb{R}).$ Then \begin{equation}\label{P3IPR} \langle W^M_{\psi}f ,W^M_{\phi}g \rangle_{L^2(\mathbb{R^+}\times\mathbb{R})}=2\pi|B|C_{\psi,\phi,M}\langle f,g\rangle_{L^2(\mathbb{R})}, \end{equation} where $C_{\psi,\phi,M}$ is given in (\ref{P3ACPair}).
\end{theorem} \begin{proof} Using equation (\ref{P3LCTWT}) and Parseval's formula (\ref{P3ParsevalLCT}) in the variable $b$, we get \begin{eqnarray*} \langle W^M_\psi f,W^M_\phi g \rangle_{L^2(\mathbb{R^+}\times\mathbb{R})}&=&\int_{\mathbb{R^+}\times\mathbb{R}}\left(W^M_\psi f\right)(a,b)\overline{\left(W^M_\phi g\right)(a,b)}dadb\\ &=&\int_{\mathbb{R^+}\times\mathbb{R}}\left(\mathcal{L}^M\left(W^M_\psi f\right)(a,\cdot)\right)(\xi)\overline{\left(\mathcal{L}^M\left(W^M_\phi g\right)(a,\cdot)\right)(\xi)}d\xi da\\ &=&\int_{\mathbb{R^+}\times\mathbb{R}}\frac{2\pi|B|}{a}\left(\mathcal{L}^Mf\right)(\xi) \overline{\left(\mathcal{L}^M\psi\right)\left(\frac{\xi}{a}\right)}\overline{\left(\mathcal{L}^Mg\right)(\xi)}\left(\mathcal{L}^M\phi\right)\left(\frac{\xi}{a}\right)d\xi da\\ &=& 2\pi|B|\int_{\mathbb{R}}\left(\mathcal{L}^Mf\right)(\xi)\overline{\left(\mathcal{L}^Mg\right)(\xi) }\left\{\int_{\mathbb{R}^+}\overline{\left(\mathcal{L}^M\psi\right)\left(\frac{\xi}{a}\right)}\left(\mathcal{L}^M\phi\right)\left(\frac{\xi}{a}\right)\frac{da}{a}\right\}d\xi \\ &=&2\pi|B|C_{\psi,\phi,M}\left\langle\mathcal{L}^Mf,\mathcal{L}^Mg \right\rangle_{L^2(\mathbb{R})}\\ &=&2\pi|B|C_{\psi,\phi,M}\langle f,g\rangle_{L^2(\mathbb{R})}. \end{eqnarray*} \end{proof} \begin{remark} Taking $\phi=\psi$ in equation (\ref{P3IPR}), we have \begin{equation}\label{P3IPRWT} \langle W^M_{\psi}f ,W^M_{\psi}g \rangle_{L^2(\mathbb{R^+}\times\mathbb{R})}=2\pi|B|C_{\psi,M}\langle f,g\rangle_{L^2(\mathbb{R})}, \end{equation} where $C_{\psi,M}$ is given in (\ref{P3AC}). \end{remark} \begin{remark} (Plancherel's theorem for $W^M_{\psi}$) Taking $f=g$ and $\phi=\psi$ in equation (\ref{P3IPR}), we have Plancherel's theorem for $W_\psi^M$: \begin{equation}\label{P3PlancherelWT} \|W^M_{\psi} f\|_{L^2(\mathbb{R^+}\times\mathbb{R})}=\left(2\pi|B|C_{\psi,M}\right)^{\frac{1}{2}}\|f\|_{L^2(\mathbb{R})}. \end{equation} Thus, from equation (\ref{P3PlancherelWT}), it follows that the LCWT is a continuous linear operator from $L^2(\mathbb{R})$ into $L^2(\mathbb{R}^+\times\mathbb{R})$.
If, in addition, the ALCW $\psi$ is such that $C_{\psi,M}=\frac{1}{2\pi|B|},$ then the operator is an isometry. \end{remark} \begin{theorem}(\textbf{Reconstruction formula}).\label{P3theo2.3} Let $\{\psi,\phi\}$ be an ALCWP such that $\psi$ and $\phi$ are ALCWs, and let $f\in L^2(\mathbb{R})$. Then $f$ is given by the formula \begin{equation} f(t)=\frac{1}{2\pi|B|C_{\psi,\phi,M}}\int_{\mathbb{R^+}\times\mathbb{R}}(W^M_{\psi}f)(a,b)\phi^M_{a,b}(t)dadb~~\mbox{a.e.}~t\in\mathbb{R}. \end{equation} \end{theorem} \begin{proof} From equation (\ref{P3IPR}), we get, for every $g\in L^2(\mathbb{R}),$ \begin{eqnarray*} 2\pi|B|C_{\psi,\phi,M}\langle f,g\rangle_{L^2(\mathbb{R})}&=&\langle W^M_\psi f,W^M_\phi g \rangle_{L^2(\mathbb{R^+}\times\mathbb{R})}\\ &=&\int_{\mathbb{R}^+\times\mathbb{R}}\left(W_\psi^M f\right)(a,b)\overline{\left(\int_{\mathbb{R}}g(t)\overline{\phi_{a,b}^M(t)}dt\right)}dadb\\ &=&\left\langle\int_{\mathbb{R}^+\times\mathbb{R}}\left(W_\psi^M f\right)(a,b)\phi_{a,b}^M(t)dadb,g(t)\right\rangle_{L^2(\mathbb{R})}. \end{eqnarray*} Since $g\in L^2(\mathbb{R})$ is arbitrary, we have $$f(t)=\frac{1}{2\pi|B|C_{\psi,\phi,M}}\int_{\mathbb{R}^+\times\mathbb{R}}\left(W_\psi^M f\right)(a,b)\phi_{a,b}^M(t)dadb~\mbox{a.e.}$$ The proof is complete. \end{proof} In particular, if $\psi=\phi,$ then we have the reconstruction formula $$f(t)=\frac{1}{2\pi|B|C_{\psi,M}}\int_{\mathbb{R}^+\times\mathbb{R}}\left(W_\psi^M f\right)(a,b)\psi_{a,b}^M(t)dadb~\mbox{a.e.}~t\in\mathbb{R}.$$ The following theorem characterizes the range of the LCWT and proves that this range is a RKHS; it also gives an explicit expression for the reproducing kernel.
\begin{theorem} For $\psi$ an ALCW, $W^M_\psi(L^2(\mathbb{R}))$ is a RKHS with the kernel $$K^M_\psi(x,y;a,b)=\frac{1}{2\pi|B|C_{\psi,M}}\langle \psi^M_{a,b},\psi^M_{x,y}\rangle_{L^2(\mathbb{R})},~(x,y), (a,b)\in \mathbb{R}^+\times\mathbb{R}.$$ Moreover, the kernel satisfies $|K^M_\psi(x,y;a,b)|\leq \frac{1}{2\pi|B|C_{\psi,M}}\|\psi\|^2_{L^2(\mathbb{R})}.$ \end{theorem} \begin{proof} For $(a,b)\in\mathbb{R}^+\times\mathbb{R},$ we see that $$K^M_\psi(x,y;a,b)=\frac{1}{2\pi|B|C_{\psi,M}}\left(W_\psi^M\psi_{a,b}^M\right)(x,y)~\mbox{for all}~(x,y)\in\mathbb{R}^+\times\mathbb{R}.$$ Now, using (\ref{P3PlancherelWT}) and $\|\psi^M_{a,b}\|_{L^2(\mathbb{R})}=\|\psi\|_{L^2(\mathbb{R})},$ \begin{eqnarray*} \|K^M_\psi(\cdot,\cdot;a,b)\|^2_{L^2(\mathbb{R}^+\times\mathbb{R})}&=&\frac{1}{\left(2\pi|B|C_{\psi,M}\right)^2}\|W_\psi^M\psi_{a,b}^M\|^2_{L^2(\mathbb{R}^+\times\mathbb{R})}\\ &=&\frac{1}{2\pi|B|C_{\psi,M}}\|\psi\|^2_{L^2(\mathbb{R})}. \end{eqnarray*} Therefore, for $(a,b)\in\mathbb{R}^+\times\mathbb{R},$ $K^M_\psi(\cdot,\cdot;a,b)\in L^2(\mathbb{R}^+\times\mathbb{R}).$ Now, let $f\in L^2(\mathbb{R}).$ Then \begin{eqnarray*} \left(W_\psi^Mf\right)(a,b)&=&\langle f,\psi_{a,b}^M\rangle_{L^2(\mathbb{R})}\\ &=&\frac{1}{2\pi|B|C_{\psi,M}}\langle W_\psi^M f,2\pi|B|C_{\psi,M} K_{\psi}^M(\cdot,\cdot;a,b)\rangle_{L^2(\mathbb{R}^+\times\mathbb{R})}\\ &=&\langle W_\psi^M f,K_{\psi}^M(\cdot,\cdot;a,b)\rangle_{L^2(\mathbb{R}^+\times\mathbb{R})}. \end{eqnarray*} Thus, it follows that $$K^M_\psi(x,y;a,b)=\frac{1}{2\pi|B|C_{\psi,M}}\langle \psi^M_{a,b},\psi^M_{x,y}\rangle_{L^2(\mathbb{R})}$$ is the reproducing kernel of $W_\psi^M(L^2(\mathbb{R})).$ Again, by the Cauchy-Schwarz inequality, \begin{eqnarray*} |K^M_\psi(x,y;a,b)|&=&\frac{1}{2\pi|B|C_{\psi,M}}|\langle \psi^M_{a,b},\psi^M_{x,y}\rangle_{L^2(\mathbb{R})}|\\ &\leq &\frac{1}{2\pi|B|C_{\psi,M}}\|\psi_{a,b}^M\|_{L^2(\mathbb{R})}\|\psi_{x,y}^M\|_{L^2(\mathbb{R})}\\ &=&\frac{\|\psi\|^2_{L^2(\mathbb{R})}}{2\pi|B|C_{\psi,M}}. \end{eqnarray*} This completes the proof.
\end{proof} \section{Uncertainty principle} We prove some UPs that limit the concentration of the LCWT on subsets of $\mathbb{R}^+\times\mathbb{R}$ of small measure. For related results in the case of the Fourier transform and the windowed Fourier transform, we refer the reader to \cite{donoho1989uncertainty},\cite{grochenig2001foundations}. Kou et al. \cite{kou2012paley} studied the same for the WLCT. \begin{definition} Let $0\leq\epsilon<1,$ $f\in L^2(\mathbb{R})$ and $E\subset\mathbb{R}$ be measurable; then $f$ is $\epsilon-$concentrated on $E$ if $$\left(\int_{E^{c}}|f(x)|^2dx\right)^\frac{1}{2}\leq\epsilon\|f\|_{L^2(\mathbb{R})}.$$ If $0\leq\epsilon\leq\frac{1}{2},$ then we say that most of the energy of $f$ is concentrated on $E$, and $E$ is called an essential support of $f$. If $\epsilon=0,$ then the support of $f$ is contained in $E.$ \end{definition} \begin{lemma} Let $\psi$ be an ALCW and $f\in L^2(\mathbb{R}).$ Then $W^M_\psi f\in{L^p(\mathbb{R^+}\times\mathbb{R})}$ for all $p\in[2,\infty].$ Moreover, \begin{equation}\label{P3LIp} \|W^M_\psi f\|_{L^p(\mathbb{R^+}\times\mathbb{R})}\leq (2\pi|B|)^{\frac{1}{p}}C_{\psi,M}^{\frac{1}{p}}\|f\|_{L^2(\mathbb{R})}\|\psi\|^{1-\frac{2}{p}}_{L^2(\mathbb{R})},~p\in[2,\infty), \end{equation} \begin{equation}\label{P3Bound} \|W^M_\psi f\|_{L^\infty(\mathbb{R^+}\times\mathbb{R})}\leq \|\psi\|_{L^2(\mathbb{R})}\|f\|_{L^2(\mathbb{R})}. \end{equation} \end{lemma} \begin{proof} Since $\psi$ is an ALCW, it follows from (\ref{P3PlancherelWT}) that $W^M_\psi f\in L^2(\mathbb{R^+}\times\mathbb{R}).$ Again, by the Cauchy-Schwarz inequality and $\|\psi^M_{a,b}\|_{L^2(\mathbb{R})}=\|\psi\|_{L^2(\mathbb{R})},$ \begin{eqnarray*} \left|\left(W^M_\psi f\right)(a,b)\right| &\leq &\|\psi\|_{L^2(\mathbb{R})}\|f\|_{L^2(\mathbb{R})}.
\end{eqnarray*} Thus, $W^M_\psi f\in L^\infty(\mathbb{R^+}\times\mathbb{R}).$ Also, since $W^M_\psi f\in L^2(\mathbb{R^+}\times\mathbb{R}),$ we have $W^M_\psi f\in L^p(\mathbb{R^+}\times\mathbb{R}),~p\in[2,\infty).$ Moreover, \begin{eqnarray*} \|W^M_\psi f\|_{L^p(\mathbb{R^+}\times\mathbb{R})} &\leq & \|W^M_\psi f\|^\frac{2}{p}_{L^2(\mathbb{R^+}\times\mathbb{R})} \|W^M_\psi f\|_{L^\infty(\mathbb{R^+}\times\mathbb{R})}^{1-\frac{2}{p}}\\ &\leq & (2\pi|B|C_{\psi,M})^\frac{1}{p}\|f\|^\frac{2}{p}_{L^2(\mathbb{R})}\|f\|^{1-\frac{2}{p}}_{L^2(\mathbb{R})}\|\psi\|^{1-\frac{2}{p}}_{L^2(\mathbb{R})}. \end{eqnarray*} This proves the lemma. \end{proof} \begin{definition} Let $0\leq\epsilon<1,$ $F\in L^2(\mathbb{R^+}\times\mathbb{R})$ and $\Omega\subset\mathbb{R^+}\times\mathbb{R}$ be measurable; then $F$ is $\epsilon-$concentrated on $\Omega$ if $$\left(\int_{\Omega^{c}}|F(x,y)|^2dxdy\right)^\frac{1}{2}\leq\epsilon\|F\|_{L^2(\mathbb{R^+}\times\mathbb{R})}.$$ If $0\leq\epsilon\leq\frac{1}{2},$ then we say that most of the energy of $F$ is concentrated on $\Omega$, and $\Omega$ is called an essential support of $F.$ If $\epsilon=0,$ then the support of $F$ is contained in $\Omega.$ \end{definition} We now prove the Donoho-Stark UP for the proposed LCWT.
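As a concrete one-dimensional illustration of $\epsilon$-concentration (a toy computation of our own, not needed in the sequel): for the Gaussian $f(t)=e^{-t^2/2}$ we have $\|f\|^2_{L^2(\mathbb{R})}=\sqrt{\pi}$, and the fraction of energy outside $[-R,R]$ equals $\mathrm{erfc}(R)$, so the smallest $R$ making $f$ $\epsilon$-concentrated on $E=[-R,R]$ solves $\mathrm{erfc}(R)=\epsilon^2$.

```python
import math

# smallest R with erfc(R) <= eps^2, i.e. [-R, R] is an essential
# support of the Gaussian f(t) = exp(-t^2/2) at concentration level eps
eps = 0.5
lo, hi = 0.0, 10.0
for _ in range(60):                  # bisection on the monotone tail fraction
    mid = 0.5 * (lo + hi)
    if math.erfc(mid) <= eps ** 2:
        hi = mid
    else:
        lo = mid
R = hi
print(R)                             # about 0.813
```

With $\epsilon=\frac{1}{2}$ the essential support of a standard Gaussian is thus roughly $[-0.81,0.81]$, which is what the bisection recovers.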
\begin{theorem} Let $0\leq\epsilon<1,$ let $\psi$ be an ALCW, and let $f\in L^2(\mathbb{R})$ be non-zero. If $W^M_{\psi} f$ is $\epsilon-$concentrated on $\Omega\subset\mathbb{R^+}\times\mathbb{R},$ then \begin{equation} |\Omega|\|\psi\|^2_{L^2(\mathbb{R})}\geq 2\pi|B|C_{\psi,M}(1-\epsilon^2), \end{equation} where $|\Omega|$ denotes the measure of $\Omega.$ \end{theorem} \begin{proof} By equation (\ref{P3PlancherelWT}), we have $$\|W_\psi^Mf\|^2_{L^2(\mathbb{R^+}\times\mathbb{R})}=2\pi|B|C_{\psi,M}\|f\|^2_{L^2(\mathbb{R})}.$$ Now, by the $\epsilon$-concentration of $W^M_\psi f$ on $\Omega,$ $$\int_{\mathbb{R^+}\times\mathbb{R}}|\left(W^M_{\psi}f\right)(a,b)|^2dadb\leq\int_{\mathbb{R^+}\times\mathbb{R}}\chi_\Omega(a,b)|\left(W^M_{\psi}f\right)(a,b)|^2dadb +\epsilon^2\|W_\psi^Mf\|^2_{L^2(\mathbb{R^+}\times\mathbb{R})}.$$ This gives $$(1-\epsilon^2)\|W^M_\psi f\|^2_{L^2(\mathbb{R^+}\times\mathbb{R})}\leq|\Omega|\|W^M_\psi f\|^2_{L^\infty(\mathbb{R^+}\times\mathbb{R})}.$$ Thus, using (\ref{P3Bound}), we get $$2\pi|B|C_{\psi,M}(1-\epsilon^2)\|f\|^2_{L^2(\mathbb{R})}\leq |\Omega|\|f\|^2_{L^2(\mathbb{R})}\|\psi\|^2_{L^2(\mathbb{R})}.$$ The result follows, since $f\neq 0.$ \end{proof} \begin{corollary}{\label{P3D1}} If $f\in L^2(\mathbb{R})\cap L^4(\mathbb{R})$ is $\epsilon_{E}-$concentrated on $E\subset\mathbb{R}$ in the $L^2(\mathbb{R})-$norm, and $W^M_{\psi}f$ is $\epsilon_{\Omega}-$concentrated on $\Omega\subset\mathbb{R^+}\times\mathbb{R},$ then $$|\Omega|m(E)\|\psi\|^2_{L^2(\mathbb{R})}\|f\|^4_{L^4(\mathbb{R})}\geq 2\pi|B|C_{\psi,M}(1-\epsilon_\Omega^2)(1-\epsilon_E^2)^2\|f\|^4_{L^2(\mathbb{R})},$$ where $m(E)$ denotes the measure of $E.$ \end{corollary} \begin{proof} Since $W^M_{\psi}f$ is $\epsilon_{\Omega}-$concentrated on $\Omega\subset\mathbb{R^+}\times\mathbb{R}$ in the $L^2(\mathbb{R^+}\times\mathbb{R})-$norm, the previous theorem yields $|\Omega|\|\psi\|^2_{L^2(\mathbb{R})}\geq2\pi|B|C_{\psi,M}(1-\epsilon_\Omega^2).$ Again, since $f$ is $\epsilon_{E}-$concentrated, we have $$\left(\int_{E^c}|f(x)|^2dx\right)^\frac{1}{2}\leq\epsilon_{E}\|f\|_{L^2(\mathbb{R})},$$ which further implies that $$\|f\|^2_{L^2(\mathbb{R})}(1-\epsilon_E^2)\leq\int_{\mathbb{R}}\chi_{E}(x)|f(x)|^2dx.$$ By H\"older's inequality, $$\int_{\mathbb{R}}\chi_{E}(x)|f(x)|^2dx\leq\left(\int_{\mathbb{R}}|\chi_{E}(x)|^2dx\right)^\frac{1}{2}\|f\|^2_{L^4(\mathbb{R})}.$$ Thus \begin{equation} (1-\epsilon_{E}^2)\|f\|^2_{L^2(\mathbb{R})}\leq (m(E))^\frac{1}{2}\|f\|^2_{L^4(\mathbb{R})}. \end{equation} Multiplying the two estimates gives \begin{align*} |\Omega|m(E)\|\psi\|^2_{L^2(\mathbb{R})}\|f\|^4_{L^4(\mathbb{R})}\geq 2\pi|B|C_{\psi,M}(1-\epsilon_\Omega^2)(1-\epsilon_E^2)^2\|f\|^4_{L^2(\mathbb{R})}. \end{align*} The proof is complete. \end{proof} \begin{corollary}{\label{P3D2}} If $f\in L^2(\mathbb{R})\cap L^\infty(\mathbb{R})$ is $\epsilon_{E}-$concentrated on $E\subset\mathbb{R}$ in the $L^2(\mathbb{R})-$norm, and $W^M_{\psi}f$ is $\epsilon_{\Omega}-$concentrated on $\Omega\subset\mathbb{R^+}\times\mathbb{R},$ then $$|\Omega|m(E)\|\psi\|^2_{L^2(\mathbb{R})}\|f\|^2_{L^\infty(\mathbb{R})}\geq 2\pi|B|C_{\psi,M}(1-\epsilon_\Omega^2)(1-\epsilon_E^2)\|f\|^2_{L^2(\mathbb{R})}.$$ \end{corollary} \begin{proof} Since $W^M_{\psi}f$ is $\epsilon_{\Omega}-$concentrated on $\Omega\subset\mathbb{R^+}\times\mathbb{R}$ in the $L^2(\mathbb{R^+}\times\mathbb{R})-$norm, we have $|\Omega|\|\psi\|^2_{L^2(\mathbb{R})}\geq2\pi|B|C_{\psi,M}(1-\epsilon_\Omega^2).$ Again, since $f$ is $\epsilon_{E}-$concentrated, we have $$\left(\int_{E^c}|f(x)|^2dx\right)^\frac{1}{2}\leq\epsilon_{E}\|f\|_{L^2(\mathbb{R})},$$ which further implies that $$\|f\|^2_{L^2(\mathbb{R})}(1-\epsilon_E^2)\leq\int_{\mathbb{R}}|f(x)|^2\chi_{E}(x)dx.$$ Since $f\in L^\infty(\mathbb{R}),$ \begin{eqnarray*} \int_{\mathbb{R}}\chi_{E}(x)|f(x)|^2dx &\leq & m(E)\|f\|^2_{L^\infty(\mathbb{R})}. \end{eqnarray*} Thus \begin{equation} \|f\|^2_{L^\infty(\mathbb{R})}m(E)\geq(1-\epsilon_{E}^2)\|f\|^2_{L^2(\mathbb{R})}.
\end{equation} Therefore, $$|\Omega|m(E)\|\psi\|^2_{L^2(\mathbb{R})}\|f\|^2_{L^\infty(\mathbb{R})}\geq 2\pi|B|C_{\psi,M}(1-\epsilon_\Omega^2)(1-\epsilon_E^2)\|f\|^2_{L^2(\mathbb{R})}.$$ The proof is complete. \end{proof} \begin{theorem}(\textbf{Lieb's uncertainty principle}). Let $0\leq\epsilon<1,$ let $\psi$ be an ALCW, and let $f\in L^2(\mathbb{R})$ be non-zero. If $W^M_{\psi} f$ is $\epsilon-$concentrated on $\Omega\subset\mathbb{R^+}\times\mathbb{R},$ then \begin{equation} |\Omega|\|\psi\|^2_{L^2(\mathbb{R})}\geq 2\pi|B|C_{\psi,M}(1-\epsilon^2)^{\frac{p}{p-2}},~p>2. \end{equation} \end{theorem} \begin{proof} Since $W_\psi^M f$ is $\epsilon-$concentrated on $\Omega,$ we have $$\|\chi_{{\Omega}^c}W_\psi^M f\|_{L^2(\mathbb{R}^+\times\mathbb{R},dadb)}\leq \epsilon\|W_\psi^M f\|_{L^2(\mathbb{R}^+\times\mathbb{R},dadb)}.$$ Since $\|W_\psi^M f\|^2_{L^2(\mathbb{R}^+\times\mathbb{R},dadb)}=\|\chi_{\Omega}W_\psi^M f\|^2_{L^2(\mathbb{R}^+\times\mathbb{R},dadb)}+\|\chi_{{\Omega}^c}W_\psi^M f\|^2_{L^2(\mathbb{R}^+\times\mathbb{R},dadb)},$ this implies, together with (\ref{P3PlancherelWT}), $$2\pi|B| C_{\psi,M}(1-\epsilon^2)\|f\|^2_{L^2(\mathbb{R})}\leq \int_{\mathbb{R}^+\times\mathbb{R}}\chi_\Omega (a,b)|(W_\psi^M f)(a,b)|^2dadb.$$ By H\"older's inequality, we get \begin{eqnarray*} 2\pi|B| C_{\psi,M}(1-\epsilon^2)\|f\|^2_{L^2(\mathbb{R})}&\leq & \left(\int_{\mathbb{R}^+\times\mathbb{R}}(\chi_\Omega (a,b))^\frac{p}{p-2} dadb\right)^{\frac{p-2}{p}}\left(\int_{\mathbb{R}^+\times\mathbb{R}}|(W_\psi^M f)(a,b)|^p dadb\right)^\frac{2}{p}\\ &\leq &|\Omega|^{\frac{p-2}{p}}\|W^M_\psi f\|^2_{L^p(\mathbb{R}^+\times\mathbb{R},dadb)}. \end{eqnarray*} Using equation (\ref{P3LIp}), we get $$(2\pi|B|)^{1-\frac{2}{p}}C_{\psi,M}^{1-\frac{2}{p}}(1-\epsilon^2)\leq |\Omega|^{\frac{p-2}{p}}\|\psi\|^{2-\frac{4}{p}}_{L^2(\mathbb{R})}.$$ Raising both sides to the power $\frac{p}{p-2},$ we get $$|\Omega|\|\psi\|^2_{L^2(\mathbb{R})}\geq 2\pi|B|C_{\psi,M}(1-\epsilon^2)^{\frac{p}{p-2}}.$$ The proof is complete.
\end{proof} \begin{corollary} If $f\in L^2(\mathbb{R})\cap L^4(\mathbb{R})$ is $\epsilon_{E}-$concentrated on $E\subset\mathbb{R}$ in the $L^2(\mathbb{R})-$norm, and $W^M_{\psi}f$ is $\epsilon_{\Omega}-$concentrated on $\Omega\subset\mathbb{R^+}\times\mathbb{R},$ then $$|\Omega|m(E)\|\psi\|^2_{L^2(\mathbb{R})}\|f\|^4_{L^4(\mathbb{R})}\geq 2\pi|B|C_{\psi,M}(1-\epsilon_\Omega^2)^{\frac{p}{p-2}}(1-\epsilon_E^2)^2\|f\|^4_{L^2(\mathbb{R})},~p>2.$$ \end{corollary} \begin{proof} The proof is similar to that of Corollary \ref{P3D1}, using Lieb's UP instead of the Donoho-Stark UP. \end{proof} \begin{corollary} If $f\in L^2(\mathbb{R})\cap L^\infty(\mathbb{R})$ is $\epsilon_{E}-$concentrated on $E\subset\mathbb{R}$ in the $L^2(\mathbb{R})-$norm, and $W^M_{\psi}f$ is $\epsilon_{\Omega}-$concentrated on $\Omega\subset\mathbb{R^+}\times\mathbb{R},$ then $$|\Omega|m(E)\|\psi\|^2_{L^2(\mathbb{R})}\|f\|^2_{L^\infty(\mathbb{R})}\geq 2\pi|B|C_{\psi,M}(1-\epsilon_\Omega^2)^{\frac{p}{p-2}}(1-\epsilon_E^2)\|f\|^2_{L^2(\mathbb{R})},~p>2.$$ \end{corollary} \begin{proof} The proof is similar to that of Corollary \ref{P3D2}, using Lieb's UP instead of the Donoho-Stark UP.
\end{proof} \section{Orthonormal sequences and uncertainty principle} We now express the UP in terms of the generalized dispersion of $W_{\psi}^M,$ which is defined by \begin{equation} \rho_{p}\left(W_{\psi}^Mf\right)=\left(\int_{\mathbb{R}^+\times\mathbb{R}}|(a,b)|^p|(W_{\psi}^Mf)(a,b)|^2dadb\right)^{\frac{1}{p}}, \end{equation} where $|(a,b)|=\sqrt{a^2+b^2},~p>0.$ \begin{definition} Let $T$ be a bounded linear operator on a Hilbert space $\mathbb{H}$ over the field $\mathbb{F}$ (where $\mathbb{F}$ is $\mathbb{R}$ or $\mathbb{C}$), and let $\{u_{n}\}_{n\in\mathbb{N}}$ be an orthonormal basis of $\mathbb{H}.$ Then $T$ is called a Hilbert-Schmidt operator if $$\|T\|_{HS}=\left(\sum_{n=1}^\infty\|Tu_{n}\|^2\right)^{\frac{1}{2}}<\infty;$$ this quantity does not depend on the chosen orthonormal basis. \end{definition} Before discussing the main result of this section, we estimate the Hilbert-Schmidt norm of the product of certain orthogonal projection operators and use it to estimate the concentration of $W_{\psi}^M$ on subsets of $\mathbb{R}^+\times\mathbb{R}.$ Similar results were first studied by Wilczok \cite{wilczok2000new} in the case of the windowed FT and the WT.
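The basis independence of $\|\cdot\|_{HS}$, used implicitly below, is easy to see in a finite-dimensional toy example of our own (two orthogonal projections on $\mathbb{R}^2$, mimicking $P_\Omega$ and $P_\psi$ of the next theorem):

```python
import math

# toy analogues of P_Omega and P_psi: orthogonal projections on R^2
P_omega = [[1.0, 0.0], [0.0, 0.0]]          # projection onto the first axis
P_psi = [[0.5, 0.5], [0.5, 0.5]]            # projection onto span{(1, 1)}

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matvec(X, v):
    return [sum(X[i][k] * v[k] for k in range(2)) for i in range(2)]

T = matmul(P_omega, P_psi)                  # T = P_Omega P_psi

def hs_norm_sq(T, basis):
    """Sum of ||T u||^2 over an orthonormal basis."""
    return sum(sum(c * c for c in matvec(T, u)) for u in basis)

std = [[1.0, 0.0], [0.0, 1.0]]              # standard ONB
rot = [[1.0 / math.sqrt(2.0), 1.0 / math.sqrt(2.0)],
       [1.0 / math.sqrt(2.0), -1.0 / math.sqrt(2.0)]]   # rotated ONB
print(hs_norm_sq(T, std), hs_norm_sq(T, rot))            # both equal 0.5
```

Since $\|T\|\leq\|T\|_{HS}$ always holds, estimating the Hilbert-Schmidt norm of $P_\Omega P_\psi$, as done in the next theorem, also controls its operator norm.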
\begin{theorem} Let $f\in L^2(\mathbb{R}),$ let $\psi$ be an ALCW, and let $\Omega\subset\mathbb{R}^+\times\mathbb{R}$ be such that $|\Omega|<\frac{2\pi |B|C_{\psi,M}}{\|\psi\|^2_{L^2(\mathbb{R})}}.$ Then $$\|\chi_{\Omega^c}W_{\psi}^Mf\|_{L^2(\mathbb{R}^+\times\mathbb{R})}\geq\sqrt{2\pi|B|C_{\psi,M}-|\Omega|\|\psi\|^2_{L^2(\mathbb{R})}}\|f\|_{L^2(\mathbb{R})}.$$ \end{theorem} \begin{proof} We consider the orthogonal projection $P_{\psi}$ from $L^2(\mathbb{R}^+\times\mathbb{R},dadb)$ onto the RKHS $W_{\psi}^M(L^2(\mathbb{R}))$ and the orthogonal projection $P_{\Omega}$ on $L^2(\mathbb{R}^+\times\mathbb{R},dadb)$ defined by $P_{\Omega}F=\chi_{\Omega}F,~\mbox{for all}~F\in L^2(\mathbb{R}^+\times\mathbb{R},dadb).$ According to Saitoh \cite{saitoh1988theory}, for every $(a,b)\in\mathbb{R}^+\times\mathbb{R}$ and $F\in L^2(\mathbb{R}^+\times\mathbb{R},dadb),$ we get \begin{eqnarray*} \left(P_{\Omega}P_{\psi}F\right)(a,b)&=&\chi_{\Omega}(a,b)\left\langle F,K_{\psi}^M\left(\cdot,\cdot;a,b\right)\right\rangle_{L^2(\mathbb{R}^+\times\mathbb{R},dadb)}\\ &=&\int_{\mathbb{R}^+\times\mathbb{R}}\chi_{\Omega}(a,b)F(x,y)\overline{K_{\psi}^M(x,y;a,b)}dxdy. \end{eqnarray*} Thus $P_{\Omega}P_{\psi}$ is an integral operator with kernel $\mathcal{N}_{\psi,\Omega}^{M}$ defined on $(\mathbb{R}^+\times\mathbb{R})^2$ by $$\mathcal{N}_{\psi,\Omega}^{M}(x,y;a,b)=\chi_{\Omega}(a,b)\overline{K_{\psi}^M(x,y;a,b)},$$ so that \begin{align*} \int_{\mathbb{R}^+\times\mathbb{R}}\int_{\mathbb{R}^+\times\mathbb{R}}&|\mathcal{N}_{\psi,\Omega}^{M}(x,y;a,b)|^2dxdydadb\\ &= \int_{\mathbb{R}^+\times\mathbb{R}}\left(\int_{\mathbb{R}^+\times\mathbb{R}}|K_{\psi}^M(x,y;a,b)|^2dxdy\right)|\chi_{\Omega}(a,b)|^2dadb\\ &=\int_{\mathbb{R}^+\times\mathbb{R}}\chi_{\Omega}(a,b)\|K_{\psi}^M(\cdot,\cdot;a,b)\|^2_{L^2(\mathbb{R}^+\times\mathbb{R},dxdy)}dadb\\ &=\frac{|\Omega|\|\psi\|^2_{L^2(\mathbb{R})}}{2\pi |B|C_{\psi,M}}.
\end{align*} Now, $$\chi_{\Omega}W_{\psi}^Mf=P_{\Omega}P_{\psi}(W_{\psi}^Mf)$$ implies $$\|\chi_{\Omega}W_{\psi}^Mf\|^2_{L^2(\mathbb{R}^+\times\mathbb{R})}\leq\|P_{\Omega}P_{\psi}\|^2\|W_{\psi}^Mf\|^2_{L^2(\mathbb{R}^+\times\mathbb{R})}.$$ Therefore $$\|W_{\psi}^Mf\|^2_{L^2(\mathbb{R}^+\times\mathbb{R})}\leq \|P_{\Omega}P_{\psi}\|^2\|W_{\psi}^Mf\|^2_{L^2(\mathbb{R}^+\times\mathbb{R})}+\|\chi_{\Omega^c}W_{\psi}^Mf\|^2_{L^2(\mathbb{R}^+\times\mathbb{R})},$$ $$\mbox{i.e.,}~\|\chi_{\Omega^c}W_{\psi}^Mf\|^2_{L^2(\mathbb{R}^+\times\mathbb{R})}\geq\left(1-\|P_{\Omega}P_{\psi}\|^2\right)2\pi|B|C_{\psi,M}\|f\|^2_{L^2(\mathbb{R})}.$$ Now using the fact that $\|P_{\Omega}P_{\psi}\|\leq\|P_{\Omega}P_{\psi}\|_{HS},$ where $\|\cdot\|$ denotes the operator norm, we obtain \begin{eqnarray*} \|\chi_{\Omega^c}W_{\psi}^Mf\|^2_{L^2(\mathbb{R}^+\times\mathbb{R})}\geq\left(1-\|P_{\Omega}P_{\psi}\|_{HS}^2\right)2\pi|B|C_{\psi,M}\|f\|^2_{L^2(\mathbb{R})}. \end{eqnarray*} Hence, we obtain $$\|\chi_{\Omega^c}W_{\psi}^Mf\|_{L^2(\mathbb{R}^+\times\mathbb{R})}\geq\sqrt{2\pi|B|C_{\psi,M}-|\Omega|\|\psi\|^2_{L^2(\mathbb{R})}}\|f\|_{L^2(\mathbb{R})}.$$ This proves the theorem. \end{proof} \begin{theorem} Let $\psi$ be an ALCW, let $\{\phi_{n}\}_{n\in\mathbb{N}}\subset L^2(\mathbb{R})$ be an ONS, and let $\Omega\subset\mathbb{R}^+\times\mathbb{R}$ be of finite measure $|\Omega|<\infty.$ Then, for any non-empty $\wedge\subset\mathbb{N},$ \begin{equation} \sum_{n\in\wedge}\left(1-\left\|\chi_{\Omega^c}W_\psi^M\left(\frac{\phi_{n}}{\sqrt{2\pi|B|C_{\psi,M}}}\right)\right\|_{L^2(\mathbb{R^+}\times\mathbb{R},dadb)}\right)\leq\frac{|\Omega|\|\psi\|_{L^2(\mathbb{R})}^2}{2\pi|B|C_{\psi,M}}.
\end{equation} \end{theorem} \begin{proof} Let $\{h_{n}\}_{n\in\mathbb{N}}$ be an orthonormal basis of $L^2(\mathbb{R^+}\times\mathbb{R},dadb).$ It was proved in the above theorem that $P_{\Omega}P_{\psi}$ is a Hilbert-Schmidt operator such that $\|P_{\Omega}P_{\psi}\|^2_{HS}=\frac{|\Omega|\|\psi\|_{L^2(\mathbb{R})}^2}{2\pi|B|C_{\psi,M}}.$\\ Since $P_{\Omega}^2=P_{\Omega}$ and both $P_{\psi},~P_{\Omega}$ are self-adjoint, the operator $T=(P_{\Omega}P_{\psi})^{\star}(P_{\Omega}P_{\psi})=P_{\psi}P_{\Omega}P_{\psi}$ is positive and is such that \begin{eqnarray*} \sum_{n\in\mathbb{N}}\langle Th_{n},h_{n}\rangle_{L^2(\mathbb{R}^+\times\mathbb{R},dadb)}&=&\sum_{n\in\mathbb{N}}\langle P_{\Omega}P_{\psi}h_{n},P_{\Omega}P_{\psi}h_{n}\rangle_{L^2(\mathbb{R}^+\times\mathbb{R},dadb)}\\ &=&\sum_{n\in\mathbb{N}}\|P_{\Omega}P_{\psi}h_{n}\|^2_{L^2(\mathbb{R}^+\times\mathbb{R},dadb)}\\ &=&\|P_{\Omega}P_{\psi}\|^2_{HS}\\ &=&\frac{|\Omega|\|\psi\|_{L^2(\mathbb{R})}^2}{2\pi|B|C_{\psi,M}}<\infty. \end{eqnarray*} Therefore, $T$ is a trace class operator with $Tr(T)=\frac{|\Omega|\|\psi\|_{L^2(\mathbb{R})}^2}{2\pi|B|C_{\psi,M}}.$\\ Since $\{\phi_{n}\}_{n\in\mathbb{N}}$ is an ONS, it follows from equation (\ref{P3IPRWT}) that $\left\{W_\psi^M\left(\frac{\phi_{n}}{\sqrt{2\pi|B|C_{\psi,M}}}\right)\right\}_{n\in\mathbb{N}}$ is an ONS in $L^2(\mathbb{R}^+\times\mathbb{R},dadb).$\\ Hence, we have \begin{align*} \sum_{n\in\wedge}&\left\langle P_{\Omega}W_{\psi}^M\left(\frac{\phi_{n}}{\sqrt{2\pi|B|C_{\psi,M}}}\right),W_{\psi}^M\left(\frac{\phi_{n}}{\sqrt{2\pi|B|C_{\psi,M}}}\right)\right\rangle_{L^2(\mathbb{R}^+\times\mathbb{R},dadb)}\\ &=\sum_{n\in\wedge}\left\langle P_{\psi}P_{\Omega}P_{\psi}\left(W_{\psi}^M\left(\frac{\phi_{n}}{\sqrt{2\pi|B|C_{\psi,M}}}\right)\right),W_{\psi}^M\left(\frac{\phi_{n}}{\sqrt{2\pi|B|C_{\psi,M}}}\right)\right\rangle_{L^2(\mathbb{R}^+\times\mathbb{R},dadb)}\\ &=\sum_{n\in\wedge}\left\langle 
T\left(W_{\psi}^M\left(\frac{\phi_{n}}{\sqrt{2\pi|B|C_{\psi,M}}}\right)\right),W_{\psi}^M\left(\frac{\phi_{n}}{\sqrt{2\pi|B|C_{\psi,M}}}\right)\right\rangle_{L^2(\mathbb{R}^+\times\mathbb{R},dadb)}\\ &\leq \sum_{n\in\mathbb{N}}\left\langle T\left(W_{\psi}^M\left(\frac{\phi_{n}}{\sqrt{2\pi|B|C_{\psi,M}}}\right)\right),W_{\psi}^M\left(\frac{\phi_{n}}{\sqrt{2\pi|B|C_{\psi,M}}}\right)\right\rangle_{L^2(\mathbb{R}^+\times\mathbb{R},dadb)}\\ &\leq Tr(T)=\frac{|\Omega|\|\psi\|_{L^2(\mathbb{R})}^2}{2\pi|B|C_{\psi,M}}. \end{align*} For each $n\in\wedge,$ we have \begin{align*} &\left\langle P_{\Omega}W_{\psi}^M\left(\frac{\phi_{n}}{\sqrt{2\pi|B|C_{\psi,M}}}\right),W_{\psi}^M\left(\frac{\phi_{n}}{\sqrt{2\pi|B|C_{\psi,M}}}\right)\right\rangle_{L^2(\mathbb{R}^+\times\mathbb{R},dadb)}\\ &=\left\langle \chi_{\Omega}W_{\psi}^M\left(\frac{\phi_{n}}{\sqrt{2\pi|B|C_{\psi,M}}}\right),W_{\psi}^M\left(\frac{\phi_{n}}{\sqrt{2\pi|B|C_{\psi,M}}}\right)\right\rangle_{L^2(\mathbb{R}^+\times\mathbb{R},dadb)}\\ &=1-\left\langle \chi_{\Omega^c}W_{\psi}^M\left(\frac{\phi_{n}}{\sqrt{2\pi|B|C_{\psi,M}}}\right),W_{\psi}^M\left(\frac{\phi_{n}}{\sqrt{2\pi|B|C_{\psi,M}}}\right)\right\rangle_{L^2(\mathbb{R}^+\times\mathbb{R},dadb)}\\ &\geq 1-\left\|\chi_{\Omega^c}W_{\psi}^M\left(\frac{\phi_{n}}{\sqrt{2\pi|B|C_{\psi,M}}}\right)\right\|_{L^2(\mathbb{R}^+\times\mathbb{R},dadb)}. \end{align*} Thus, we have $$\sum_{n\in\wedge}\left(1-\left\|\chi_{\Omega^c}W_\psi^M\left(\frac{\phi_{n}}{\sqrt{2\pi|B|C_{\psi,M}}}\right)\right\|_{L^2(\mathbb{R^+}\times\mathbb{R},dadb)}\right)\leq\frac{|\Omega|\|\psi\|_{L^2(\mathbb{R})}^2}{2\pi|B|C_{\psi,M}}.$$ This proves the theorem. \end{proof} The theorem below shows that if the LCWT of each member of an ONS is $\epsilon$-concentrated in a set of finite measure, then the ONS is necessarily finite; it also gives an upper bound on the cardinality of such a system. 
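The bound below involves the measure of the half-disc $G_s=\{(a,b)\in\mathbb{R}^+\times\mathbb{R}:a^2+b^2\leq s^2\}$. For completeness, in polar coordinates $(a,b)=(r\cos\theta,r\sin\theta)$ with $r\in(0,s]$ and $\theta\in\left(-\frac{\pi}{2},\frac{\pi}{2}\right)$ (so that $a>0$), one checks
$$|G_s|=\int_{\{a>0,\;a^2+b^2\leq s^2\}}da\,db=\int_{-\pi/2}^{\pi/2}\int_{0}^{s}r\,dr\,d\theta=\frac{\pi s^2}{2}.$$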
\begin{theorem}\label{P3theoCard} Let $s>0$ and $0<\epsilon<1.$ Let $G_s=\{(a,b)\in\mathbb{R}^+\times\mathbb{R}:a^2+b^2\leq s^2\}$ and let $\psi$ be an ALCW. Also let $\wedge\subset \mathbb{N}$ be non-empty and let $\{\phi_{n}\}_{n\in\wedge}\subset L^2(\mathbb{R})$ be an ONS. If $W_{\psi}^M\left(\frac{\phi_{n}}{\sqrt{2\pi|B|C_{\psi,M}}}\right)$ is $\epsilon$-concentrated in $G_s$ for all $n\in\wedge,$ then $\wedge$ is finite and \begin{equation} Card(\wedge)\leq\frac{s^2\|\psi\|^2_{L^2(\mathbb{R})}}{4|B|C_{\psi,M}(1-\epsilon)}, \end{equation} where $Card(\wedge)$ denotes the cardinality of $\wedge.$ \end{theorem} \begin{proof} Applying the above theorem, we have $$\sum_{n\in\wedge}\left(1-\left\|\chi_{G_s^c}W_\psi^M\left(\frac{\phi_{n}}{\sqrt{2\pi|B|C_{\psi,M}}}\right)\right\|_{L^2(\mathbb{R^+}\times\mathbb{R},dadb)}\right)\leq\frac{|G_s|\|\psi\|_{L^2(\mathbb{R})}^2}{2\pi|B|C_{\psi,M}}.$$ Again, since each $W_{\psi}^M\left(\frac{\phi_{n}}{\sqrt{2\pi|B|C_{\psi,M}}}\right)$ is $\epsilon$-concentrated in $G_s,$ we have $$\left\|\chi_{G_s^c}W_{\psi}^M\left(\frac{\phi_{n}}{\sqrt{2\pi|B|C_{\psi,M}}}\right)\right\|_{L^2(\mathbb{R}^+\times\mathbb{R})}\leq\epsilon.$$ Therefore, it follows that $$\sum_{n\in\wedge}(1-\epsilon)\leq\frac{|G_s|\|\psi\|_{L^2(\mathbb{R})}^2}{2\pi|B|C_{\psi,M}},$$ $$\mbox{i.e.,}~Card(\wedge)(1-\epsilon)\leq\frac{|G_s|\|\psi\|_{L^2(\mathbb{R})}^2}{2\pi|B|C_{\psi,M}}.$$ Thus $Card(\wedge)$ is finite, and using $|G_s|=\frac{\pi s^2}{2},$ we obtain $$Card(\wedge)\leq\frac{s^2\|\psi\|_{L^2(\mathbb{R})}^2}{4|B|(1-\epsilon)C_{\psi,M}}.$$ The proof is complete. \end{proof} \begin{corollary} Let $p>0$ and $R>0$, and let $\psi$ be an ALCW. Also let $\wedge\subset \mathbb{N}$ be non-empty and let $\{\phi_{n}\}_{n\in\wedge}\subset L^2(\mathbb{R})$ be an ONS. Then $\wedge$ is finite if $\left\{\rho_{p}\left(W_{\psi}^M\left(\frac{\phi_{n}}{\sqrt{2\pi|B|C_{\psi,M}}}\right)\right)\right\}_{n\in\wedge}$ is uniformly bounded. 
Moreover, if it is uniformly bounded by $R,$ then $$Card(\wedge)\leq\frac{2^{\frac{4}{p}+1}R^2\|\psi\|^2_{L^2(\mathbb{R})}}{4|B|C_{\psi,M}}.$$ \end{corollary} \begin{proof} Since $\rho_{p}\left(W_{\psi}^M\left(\frac{\phi_{n}}{\sqrt{2\pi|B|C_{\psi,M}}}\right)\right)\leq R$ for each $n\in\wedge,$ we have \begin{align*} &\int_{|(a,b)|\geq R2^{\frac{2}{p}}}\left|\left(W_{\psi}^M\left(\frac{\phi_{n}}{\sqrt{2\pi|B|C_{\psi,M}}}\right)\right)(a,b)\right|^2dadb\\ &=\int_{|(a,b)|\geq R2^{\frac{2}{p}}}|(a,b)|^{-p}|(a,b)|^p\left|\left(W_{\psi}^M\left(\frac{\phi_{n}}{\sqrt{2\pi|B|C_{\psi,M}}}\right)\right)(a,b)\right|^2dadb\\ &\leq \frac{1}{\left(R2^{\frac{2}{p}}\right)^p}\int_{\mathbb{R}^+\times\mathbb{R}}|(a,b)|^p\left|\left(W_{\psi}^M\left(\frac{\phi_{n}}{\sqrt{2\pi|B|C_{\psi,M}}}\right)\right)(a,b)\right|^2dadb\\ &\leq \frac{1}{4}. \end{align*} Thus it follows that, for each $n\in\wedge,$ $W_{\psi}^M\left(\frac{\phi_{n}}{\sqrt{2\pi|B|C_{\psi,M}}}\right)$ is $\frac{1}{2}$-concentrated in $$G_{R2^{\frac{2}{p}}}=\left\{(a,b)\in\mathbb{R}^+\times\mathbb{R}:|(a,b)|<R2^{\frac{2}{p}}\right\}.$$ Thus, from Theorem \ref{P3theoCard}, it follows that $\wedge$ is finite and $$Card(\wedge)\leq\frac{\left(R2^{\frac{2}{p}}\right)^2\|\psi\|^2_{L^2(\mathbb{R})}}{4|B|\left(1-\frac{1}{2}\right)C_{\psi,M}},$$ $$\mbox{i.e.,}~Card(\wedge)\leq\frac{2^{\frac{4}{p}+1}R^2\|\psi\|^2_{L^2(\mathbb{R})}}{4|B|C_{\psi,M}}.$$ Thus the proof is complete. 
\end{proof} \begin{lemma}\label{P3lemma5.2} Let $p>0,$ let $\psi$ be an ALCW and let $\{\phi_{n}\}_{n\in\mathbb{N}}\subset L^2(\mathbb{R})$ be an ONS. Then there exists $m_{0}\in\mathbb{Z}$ such that $$\rho_{p}\left(W_{\psi}^M\left(\frac{\phi_{n}}{\sqrt{2\pi|B|C_{\psi,M}}}\right)\right)\geq 2^{m_{0}}~\mbox{for all}~n\in\mathbb{N}.$$ \end{lemma} \begin{proof} Define $P_{m}=\left\{n\in\mathbb{N}:\rho_{p}\left(W_{\psi}^M\left(\frac{\phi_{n}}{\sqrt{2\pi|B|C_{\psi,M}}}\right)\right)\in [2^{m-1},2^m)\right\}$ for each $m\in\mathbb{Z}.$\\ Then for each $n\in P_{m},$ we get $$\int_{\mathbb{R}^+\times\mathbb{R}}|(a,b)|^{p}\left|\left(W_{\psi}^M\left(\frac{\phi_{n}}{\sqrt{2\pi|B|C_{\psi,M}}}\right)\right)(a,b)\right|^2dadb<2^{mp}.$$ Now, \begin{align*} &\int_{|(a,b)|\geq 2^{m+\frac{2}{p}}}\left|\left(W_{\psi}^M\left(\frac{\phi_{n}}{\sqrt{2\pi|B|C_{\psi,M}}}\right)\right)(a,b)\right|^2dadb\\ &\leq \frac{1}{2^{mp+2}}\int_{\mathbb{R}^+\times\mathbb{R}}|(a,b)|^p\left|\left(W_{\psi}^M\left(\frac{\phi_{n}}{\sqrt{2\pi|B|C_{\psi,M}}}\right)\right)(a,b)\right|^2dadb\\ &\leq \frac{1}{2^{mp+2}}\left\{\rho_{p}\left(W_{\psi}^M\left(\frac{\phi_{n}}{\sqrt{2\pi|B|C_{\psi,M}}}\right)\right)\right\}^p. \end{align*} This gives $$\int_{|(a,b)|\geq 2^{\frac{2}{p}+m}}\left|\left(W_{\psi}^M\left(\frac{\phi_{n}}{\sqrt{2\pi|B|C_{\psi,M}}}\right)\right)(a,b)\right|^2dadb\leq\frac{1}{4}.$$ Thus it follows that, for each $n\in P_{m},$ $W_{\psi}^M\left(\frac{\phi_{n}}{\sqrt{2\pi|B|C_{\psi,M}}}\right)$ is $\frac{1}{2}$-concentrated on the set $$G_{2^{m+\frac{2}{p}}}=\left\{(a,b)\in\mathbb{R}^+\times\mathbb{R}:|(a,b)|<2^{m+\frac{2}{p}}\right\}.$$ Therefore, $P_m$ is finite and $$Card(P_m)\leq\frac{2^{2m+\frac{4}{p}+1}\|\psi\|^2_{L^2(\mathbb{R})}}{4|B|C_{\psi,M}}~\mbox{for all}~m\in\mathbb{Z}.$$ Since $Card(P_{m})$ is a nonnegative integer and this bound tends to $0$ as $m\to -\infty,$ there exists $m_{0}\in\mathbb{Z}$ such that $P_{m}=\emptyset$ for all $m<m_{0}.$ 
Therefore, $\rho_{p}\left(W_{\psi}^M\left(\frac{\phi_{n}}{\sqrt{2\pi|B|C_{\psi,M}}}\right)\right)\geq 2^{m_{0}}~\mbox{for all}~n\in\mathbb{N}.$ \end{proof} \begin{theorem}[Shapiro's dispersion theorem] Let $\psi$ be an ALCW and let $\{\phi_{n}\}_{n\in\mathbb{N}}\subset L^2(\mathbb{R})$ be an ONS. Then for every $p>0$ and every non-empty finite $\wedge\subset\mathbb{N},$ $$\sum_{n\in\wedge}\left\{\rho_{p}\left(W_{\psi}^M\left(\frac{\phi_{n}}{\sqrt{2\pi|B|C_{\psi,M}}}\right)\right)\right\}^p\geq \frac{(Card(\wedge))^{\frac{p}{2}+1}}{2^{p+1}}\left(\frac{3|B|C_{\psi,M}}{2^{\frac{4}{p}+2}\|\psi\|^2_{L^2(\mathbb{R})}}\right)^{\frac{p}{2}}.$$ \end{theorem} \begin{proof} Let $m_{0}$ be the integer given by the above lemma, and let $k\in\mathbb{Z}$ with $k\geq m_{0}.$ Define $Q_{k}=\bigcup\limits_{m=m_{0}}^k P_{m}.$ Then we have \begin{eqnarray*} Card(Q_{k})=\sum_{m=m_{0}}^k Card(P_{m})&\leq&\sum_{m=m_{0}}^k\frac{2^{2m+\frac{4}{p}+1}\|\psi\|^2_{L^2(\mathbb{R})}}{4|B|C_{\psi,M}}\\ &=&\frac{2^{\frac{4}{p}+1}\|\psi\|^2_{L^2(\mathbb{R})}}{4|B|C_{\psi,M}}\sum_{m=m_{0}}^k 2^{2m}\\ &\leq&\frac{2^{\frac{4}{p}+1}\|\psi\|^2_{L^2(\mathbb{R})}}{4|B|C_{\psi,M}} \frac{2^{2k+2}}{3}, \end{eqnarray*} $$ \mbox{i.e.,}~Card(Q_{k})\leq\frac{2^{\frac{4}{p}+1}\|\psi\|^2_{L^2(\mathbb{R})}}{3|B|C_{\psi,M}} 2^{2k}.$$ Let $C=\frac{2^{\frac{4}{p}+2}\|\psi\|^2_{L^2(\mathbb{R})}}{3|B|C_{\psi,M}},$ so that $Card(Q_{k})\leq\frac{C}{2}2^{2k}.$ If $Card(\wedge)>C2^{2(m_{0}+1)},$ then $\frac{1}{2\log{2}}\log\left(\frac{Card(\wedge)}{C}\right)>m_{0}+1.$\\ Let us choose an integer $k>m_{0}+1$ such that $$k-1<\frac{1}{2\log{2}}\log\left(\frac{Card(\wedge)}{C}\right)\leq k.$$ This gives $$C2^{2(k-1)}<Card(\wedge)\leq C2^{2k}.$$ Thus, we have $$Card(Q_{k-1})\leq\frac{C}{2}2^{2(k-1)}<\frac{Card(\wedge)}{2}.$$ Therefore, \begin{eqnarray*} \sum_{n\in\wedge}\left\{\rho_{p}\left(W_{\psi}^M\left(\frac{\phi_{n}}{\sqrt{2\pi|B|C_{\psi,M}}}\right)\right)\right\}^p&\geq&\sum_{n\in\wedge\setminus Q_{k-1}} 
\left\{\rho_{p}\left(W_{\psi}^M\left(\frac{\phi_{n}}{\sqrt{2\pi|B|C_{\psi,M}}}\right)\right)\right\}^p\\ &\geq& \frac{Card(\wedge)}{2}2^{(k-1)p}\\ &=&\frac{Card(\wedge)}{2\cdot2^{p}}2^{kp}. \end{eqnarray*} Since $Card(\wedge)\leq C2^{2k},$ we have $\left(\frac{Card(\wedge)}{C}\right)^{\frac{p}{2}}\leq 2^{kp}.$\\ Therefore, $$\sum_{n\in\wedge}\left\{\rho_{p}\left(W_{\psi}^M\left(\frac{\phi_{n}}{\sqrt{2\pi|B|C_{\psi,M}}}\right)\right)\right\}^p \geq \frac{(Card(\wedge))^{\frac{p}{2}+1}}{2^{p+1}}\left(\frac{1}{C}\right)^{\frac{p}{2}}.$$ Again, if $Card(\wedge)\leq C2^{2(m_{0}+1)},$ then $$\sum_{n\in\wedge}\left\{\rho_{p}\left(W_{\psi}^M\left(\frac{\phi_{n}}{\sqrt{2\pi|B|C_{\psi,M}}}\right)\right)\right\}^p \geq Card(\wedge)2^{m_{0}p}~\mbox{(using Lemma \ref{P3lemma5.2})}.$$ Now, $Card(\wedge)\leq C2^{2(m_{0}+1)}$ implies $\frac{1}{2^p}\left(\frac{Card(\wedge)}{C}\right)^{\frac{p}{2}}\leq 2^{m_{0}p}.$ Thus we have $$\sum_{n\in\wedge}\left\{\rho_{p}\left(W_{\psi}^M\left(\frac{\phi_{n}}{\sqrt{2\pi|B|C_{\psi,M}}}\right)\right)\right\}^p \geq \frac{(Card(\wedge))^{\frac{p}{2}+1}}{2^{p}}\left(\frac{1}{C}\right)^{\frac{p}{2}}.$$ Hence, for any non-empty finite $\wedge\subset\mathbb{N},$ we have $$\sum_{n\in\wedge}\left\{\rho_{p}\left(W_{\psi}^M\left(\frac{\phi_{n}}{\sqrt{2\pi|B|C_{\psi,M}}}\right)\right)\right\}^p \geq \frac{(Card(\wedge))^{\frac{p}{2}+1}}{2^{p+1}}\left(\frac{1}{C}\right)^{\frac{p}{2}}.$$ Therefore, substituting the value of $C$, we get $$\sum_{n\in\wedge}\left\{\rho_{p}\left(W_{\psi}^M\left(\frac{\phi_{n}}{\sqrt{2\pi|B|C_{\psi,M}}}\right)\right)\right\}^p \geq \frac{(Card(\wedge))^{\frac{p}{2}+1}}{2^{p+1}}\left(\frac{3|B|C_{\psi,M}}{2^{\frac{4}{p}+2}\|\psi\|^2_{L^2(\mathbb{R})}}\right)^{\frac{p}{2}}.$$ This completes the proof. 
\end{proof} \section{Conclusions} We have proposed a novel time-frequency analyzing tool, the LCWT, which combines the advantages of the LCT and the WT and offers time and linear canonical domain spectral information simultaneously in the time-LCT-frequency plane. We have studied its properties, such as the inner product relation and the reconstruction formula, and have characterized its range. We have also given a lower bound on the measure of the essential support of the LCWT via the uncertainty principles of Donoho-Stark and Lieb. Finally, we have studied Shapiro's mean dispersion theorem associated with the LCWT. \end{document}
\begin{document} \date{\today} \title{On the Shafarevich group of restricted ramification extensions of number fields in the tame case} \maketitle \begin{abstract} Let ${\rm K}$ be a number field and $S$ a finite set of places of ${\rm K}$. We study the kernels ${\sha}^2_S$ of the maps $H^2({\rm G}_S,{\mathbb F}_p) \rightarrow \oplus_{v\in S} H^2({\rm G}_v,{\mathbb F}_p)$. There is a natural injection ${\sha}^2_S \hookrightarrow {\rm C}yB_S$ into the dual ${\rm C}yB_S$ of a certain readily computable Kummer group ${\rm V}_S$, which is always an isomorphism in the wild case. The tame case is much more mysterious. Our main result is that, given a finite set $X$ of places coprime to $p$, there exists a finite set $S$ of places coprime to $p$ such that ${\sha}^2_{S\cup X} \stackrel{\simeq}{\hookrightarrow} {\rm C}yB_{S\cup X} \stackrel{\simeq}{\twoheadleftarrow} {\rm C}yB_X \hookleftarrow {\sha}^2_X$. In particular, we show that in the tame case ${\sha}^2_Y$ can {\it increase} with increasing $Y$. This is in contrast with the wild case, where ${\sha}^2_Y$ is nonincreasing in size with increasing $Y$. \end{abstract} Let ${\rm K}$ be a number field, and let $S$ be a finite set of places of ${\rm K}$. Denote by ${\rm K}_S$ the maximal extension of~${\rm K}$ unramified outside $S$, and set ${\rm G}_S={\rm Gal}({\rm K}_S/{\rm K})$. Given a prime number $p$, let ${\sha}^2_S$ be the $2$-Shafarevich group associated to ${\rm G}_S$ and $p$: it is the kernel of the localization map of the cohomology group $H^2({\rm G}_S,{\mathbb F}_p)$: $${\sha}^2_S:={\sha}^2({\rm G}_S,{\mathbb F}_p)= \ker\big(H^2({\rm G}_S,{\mathbb F}_p)\rightarrow \oplus_{v\in S} H^2({\rm G}_v,{\mathbb F}_p)\big),$$ where ${\rm G}_S$ acts trivially on ${\mathbb F}_p$. We denote by ${\rm G}_v$ the absolute Galois group of the completion ${\rm K}_v$ of ${\rm K}$ at $v$. 
It is well-known that ${\sha}^2_S$ is closely related to ${\rm C}yB_S =\left( {\rm V}_S/{\rm K}^{\times p}\right)^\vee$, where $${\rm V}_S=\{ x \in {\rm K}^\times : v(x)\equiv 0 \ ({\rm mod} \ p) \ \forall v; \ x \in {\rm K}_v^p \ \forall v\in S \}.$$ Clearly ${\rm K}^{\times p} \subset {\rm V}_S$, and $S \subset T \implies {\rm V}_T \subset {\rm V}_S$. Indeed, in the wild case, i.e.\ when $S$ contains all the places above~$p$ and all archimedean places, the Poitou--Tate duality theorem gives ${\sha}^2_S \simeq {\rm C}yB_S$. See for example \cite[Chapter X, \S 7]{NSW}. It is important to note that algorithms exist to compute ${\rm C}yB_S$ via ray class group computations over ${\rm K}(\mu_p)$, so in the wild case one can, at least in theory, compute $d_p {\sha}^2_S$. In the more general tame situation, one only has the following injection (due to Shafarevich and Koch; see for example \cite[Chapter 11, \S 2]{Koch} or \cite[Chapter 10, \S 7]{NSW}): \begin{eqnarray} \label{equation:0} {\sha}^2_S \hookrightarrow {\rm C}yB_S. \end{eqnarray} {\it At present there is no general algorithm to compute $d_p {\sha}^2_S$ in the tame case, short of computing ${\rm G}_S$ itself.} Let us write ${\rm K}_S(p)/{\rm K}$ for the maximal pro-$p$ extension of ${\rm K}$ inside ${\rm K}_S$, and put ${\rm G}_S(p)={\rm Gal}({\rm K}_S(p)/{\rm K})$. It is an exercise to see that the quotient ${\rm G}_S \twoheadrightarrow {\rm G}_S(p)$ induces the injection ${\sha}^2_{S,p} \hookrightarrow {\sha}^2_S$, where ${\sha}^2_{S,p}:=\ker \left( H^2 ({\rm G}_S(p),{\mathbb F}_p) \rightarrow \oplus_{v\in S} H^2({\rm G}_v,{\mathbb F}_p)\right)$. Observe that we can take ${\rm G}_v(p)$ instead of ${\rm G}_v$, since $H^2({\rm G}_v(p),{\mathbb F}_p) \simeq H^2({\rm G}_v,{\mathbb F}_p)$ (see for example \cite[Chapter VII, \S 5]{NSW}). 
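The group ${\rm V}_\emptyset/{\rm K}^{\times p}$ behind ${\rm C}yB_\emptyset$ is controlled by the units and the class group: there is an exact sequence $0\to{\mathcal O}_{\rm K}^\times/{\mathcal O}_{\rm K}^{\times p}\to{\rm V}_\emptyset/{\rm K}^{\times p}\to{\rm C}l_{\rm K}[p]\to 0$ (recalled after Theorem \ref{theo:main}), and by Dirichlet's unit theorem $d_p({\mathcal O}_{\rm K}^\times/{\mathcal O}_{\rm K}^{\times p})=r_1+r_2-1+\delta$, with $\delta=1$ exactly when $\zeta_p\in{\rm K}$. The following snippet is only a numerical illustration of this count (the helper name \texttt{dp\_units} is ours, not notation from the text):

```python
def dp_units(r1: int, r2: int, delta: int) -> int:
    """p-rank of O_K^x / (O_K^x)^p: by Dirichlet's unit theorem
    O_K^x ~= mu_K x Z^(r1 + r2 - 1), and delta = 1 iff zeta_p lies in K."""
    return r1 + r2 - 1 + delta

# K = Q, p = 2: r1 = 1, r2 = 0, and zeta_2 = -1 lies in Q, so delta = 1.
print(dp_units(1, 0, 1))  # 1
# K = Q, p odd: delta = 0, so the unit contribution vanishes.
print(dp_units(1, 0, 0))  # 0
# A real quadratic field, p = 2: r1 = 2, r2 = 0, delta = 1.
print(dp_units(2, 0, 1))  # 2
```

Together with $d_p{\rm C}l_{\rm K}$, this gives the quantity $m=d_p{\rm C}yB_\emptyset$ used below.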
The Shafarevich group ${\sha}^2_S$ is central to the study of the maximal pro-$p$ quotient ${\rm G}_S(p)$ of ${\rm G}_S$, in particular when $S$ is coprime to $p$: obviously, one gets $$d_p H^2({\rm G}_S(p),{\mathbb F}_p) \leq \sum_{v \in S} d_p H^2({\rm G}_v,{\mathbb F}_p) + d_p {\sha}^2_S \leq \sum_{v \in S}\delta_{v,p} + d_p {\sha}^2_S \leq |S|+ d_p {\rm V}_S/{\rm K}^{\times p},$$ which suffices to produce criteria for the infiniteness of ${\rm G}_S(p)$ (thanks to the Golod--Shafarevich theorem). Here $\delta_{v,p}=1$ or $0$ according as ${\rm K}_v$ contains the $p$th roots of unity or not. Observe that, thanks to (\ref{equation:0}), one can force ${\sha}^2_S$ to be trivial (see the notion of a saturated set $S$ in \S \ref{section:saturated}), which can also yield situations where ${\rm G}_S(p)$ has cohomological dimension~$2$. See \cite{Labute} for the first examples and \cite{Schmidt} for general statements. Before giving our main result, we make the following observation: given a prime number $p$ and two finite sets $S$ and $X$ of places of ${\rm K}$, one has \begin{eqnarray}\label{equation:1} {\sha}^2_{S\cup X,p} \hookrightarrow {\sha}^2_{S\cup X} \hookrightarrow {\rm C}yB_{S\cup X} \twoheadleftarrow {\rm C}yB_X \hookleftarrow {\sha}^2_X \hookleftarrow {\sha}^2_{X,p},\end{eqnarray} where the middle surjection follows as ${\rm V}_{S\cup X} \subset {\rm V}_{X}$. To simplify, we consider only the case where the finite sets $X$ and $S$ of places are coprime to $p$. Here we prove: \begin{Theorem} \label{theo:main} Let $p$ be a prime number, and let ${\rm K}$ be a number field. Let $X$ be a finite set of places of ${\rm K}$ coprime to $p$. There exist infinitely many finite sets $S$ of finite places of ${\rm K}$, all coprime to $p$, such that: $${\sha}^2_{S \cup X,p} \simeq {\sha}^2_{S\cup X} \simeq {\rm C}yB_{S \cup X} \simeq {\rm C}yB_X \cdot$$ Moreover, such $S$ can be chosen of size $|S| \leq d_p{\rm C}yB_\emptyset$. 
\end{Theorem} Set $m:=d_p {\rm C}yB_\emptyset$. Note that ${\rm K}^{\times p} \subset {\rm V}_S$ for all $S$. In particular, we have the exact sequence $$ 0 \rightarrow {\mathcal O}^\times_{\rm K}/{\mathcal O}_{\rm K}^{\times p} \rightarrow {\rm V}_\emptyset/{\rm K}^{\times p} \rightarrow {\rm C}l_{\rm K} [p] \rightarrow 0,$$ so $m=d_p {\rm C}l_{\rm K}+ d_p{\mathcal O}_{\rm K}^\times$. As mentioned above, the computation of ${\sha}^2_S$ is very difficult in the tame case. Indeed, the only examples we know of where the map ${\sha}^2_{\emptyset,p} \hookrightarrow {\rm C}yB_\emptyset$ is {\it not} an isomorphism are those in which we know the relation rank of ${\rm G}_\emptyset(p)$ by knowing the full group itself. Using Theorem \ref{theo:main}, one may produce situations where the value of $|{\sha}^2_S|$ is known without being trivial. As a corollary, we get: \begin{Corollary} \label{coro2} There exist infinitely many finite sets $S_0 \subset S_1 \subset \cdots \subset S_m$ of finite places of ${\rm K}$, all coprime to $p$, such that for $i=0,\cdots, m$ one has $${\sha}^2_{S_i,p} \simeq {\sha}^2_{S_i} \simeq {\mathbb F}_p^{m-i}.$$ \end{Corollary} \begin{Remark} We will see that the sets $S$ and $S_i$ can be given explicitly via the Chebotarev density theorem in some governing field extension over ${\rm K}$. \end{Remark} \begin{Remark} Let ${\rm K}_S^{ta}/{\rm K}$ be the maximal Galois extension of ${\rm K}$ unramified outside $S$ and tamely ramified at $S$; put ${\rm G}_S^{ta}={\rm Gal}({\rm K}_S^{ta}/{\rm K})$. Then instead of considering ${\rm G}_S$ one may consider ${\rm G}_S^{ta}$, which also surjects onto ${\rm G}_S(p)$. Observe here that ${\rm G}_S^{ta}$ may be finite (typically when the discriminant of ${\rm K}$ and the norms of the prime ideals of $S$ are too small), even trivial (for example when ${\rm K}={\mathbb Q}$ and $S=\emptyset$). \end{Remark} {\bf Notations} $-$ We fix a prime number $p$ and a number field ${\rm K}$. 
$-$ Put ${\rm K}'={\rm K}(\zeta_p)$ and ${\rm K}''={\rm K}(\zeta_{p^2})$, where $\zeta_{p^2}$ is some primitive $p^2$th root of unity and $\zeta_p=\zeta_{p^2}^p$. $-$ We denote by ${\mathcal O}_{\rm K}$ the ring of integers of ${\rm K}$, by ${\mathcal O}_{\rm K}^\times$ the group of units of ${\mathcal O}_{\rm K}$, and by ${\rm C}l_{\rm K}$ the class group of ${\rm K}$. $-$ We identify a prime ideal ${\mathfrak p} \subset {\mathcal O}_{\rm K}$ with the place $v$ it defines. We write ${\rm K}_v$ for the completion of ${\rm K}$ at $v$ and ${\mathcal U}_v$ for the units of the local field ${\rm K}_v$; when $v$ is archimedean, put ${\mathcal U}_v={\rm K}_v^\times$. $-$ One says that a prime ideal ${\mathfrak p}$ is {\it tame} if $\# {\mathcal O}_{\rm K}/{\mathfrak p} \equiv 1 \ ({\rm mod} \ p)$, which is equivalent to $\mu_p \subset {\rm K}_v$, that is $\delta_{v,p}=1$. $-$ If $S$ is a finite set of places of ${\rm K}$, we denote by ${\rm K}_S(p)/{\rm K}$ (resp. ${\rm K}_S^{ab}(p)/{\rm K}$) the maximal pro-$p$ extension (resp. maximal abelian pro-$p$ extension) of ${\rm K}$ unramified outside $S$, and we put ${\rm G}_S(p)={\rm Gal}({\rm K}_S(p)/{\rm K})$ (resp. ${\rm G}_S^{ab}(p)={\rm Gal}({\rm K}_S^{ab}(p)/{\rm K})$). For $S=\emptyset$, we denote by ${\rm H}:={\rm K}_\emptyset^{ab}(p)$ the Hilbert $p$-class field of ${\rm K}$. $-$ By convention, the infinite places in $S$ are only real. Let us write $S=S_0 \cup S_\infty$, where $S_0$ contains only the finite places and $S_\infty$ only the real ones. Put $\delta_{2,p}= \left\{ \begin{array}{cc} 1 & p=2 \\ 0 & \mbox{otherwise} \end{array}\right.$ $-$ The set $S$ is said to be coprime to $p$ if all finite places $v$ of $S$ are coprime to $p$; it is said to be tame if $S$ is coprime to $p$ and $S_\infty=\emptyset$. 
$-$ Put ${\rm V}_S=\{ x \in {\rm K}^\times : v(x)\equiv 0 \ ({\rm mod} \ p) \ \forall v; \ x \in {\rm K}_v^p \ \forall v\in S\}$. Note that ${\rm K}^{\times p} \subset {\rm V}_S$ for all $S$. \section{Preliminaries} \subsection{Extensions with prescribed ramification} \label{section:cyclicdegreep2} Let $p$ be a prime number. \subsubsection{Governing fields} We recall a result of Gras--Munnier (see \cite[Chapter V, \S 2, Corollary 2.4.2]{gras}, as well as \cite{Gras-Munnier}) which gives a criterion for the existence of a $p$-extension totally ramified at some set $S$ (and unramified outside $S$). Put ${\rm K}':={\rm K}(\zeta_p)$ and consider the governing field ${\rm L}':={\rm K}'(\sqrt[p]{{\rm V}_\emptyset})$. The extension ${\rm L}'/{\rm K}'$ has Galois group isomorphic to $({\mathbb Z}/p{\mathbb Z})^{r_1+r_2-1+\delta+d}$, where $d=d_p {\rm C}l_{\rm K}$. Given a place $v$ of ${\rm K}$, we choose some place $w|v$ of ${\rm L}'$ above $v$, and we consider $\sigma_v \in {\rm Gal}({\rm L}'/{\rm K}')$ defined as follows: \begin{enumerate} \item[$-$] if $v$ corresponds to a prime ideal ${\mathfrak p}$ coprime to $p$, and ${\mathfrak P}$ to $w$, then ${\mathfrak P}$ is unramified in ${\rm L}'/{\rm K}'$, and then ${\displaystyle \sigma_v=\sigma_{\mathfrak p}= \left(\frac{{\rm L}'/{\rm K}'}{{\mathfrak P}}\right)}$ is the Frobenius element at ${\mathfrak P}$ in ${\rm Gal}({\rm L}'/{\rm K}')$; \item[$-$] if $v$ corresponds to a real place, then $\sigma_v$ is the Artin symbol at $w$: $\sigma_v(\sqrt{\varepsilon})=+\sqrt{\varepsilon}$ if $\varepsilon$ is positive at $w$, and $\sigma_v(\sqrt{\varepsilon})=-\sqrt{\varepsilon}$ otherwise. 
\end{enumerate} While $\sigma_v$ does in fact depend on the choice of ${\mathfrak P}$, it is easy to see that a different choice of ${\mathfrak P}$ gives a nonzero multiple of the previous choice of $\sigma_v$ in the ${\mathbb F}_p$-vector space ${\rm Gal}({\rm L}'/{\rm K}')$. This is all we need when invoking Theorem~\ref{theo:Gras-Munnier} below. By abuse, we will also call the $\sigma_v$'s Frobenius elements. \begin{theo}[Gras--Munnier] \label{theo:Gras-Munnier} Let $S=\{v_1,\cdots, v_t\}$ be a set of places of ${\rm K}$ coprime to $p$. There exists a cyclic degree-$p$ extension ${\rm L}/{\rm K}$, unramified outside $S$ and totally ramified at each place of $S$, if and only if, for $i=1,\cdots, t$, there exist $a_i \in {\mathbb F}_p^\times$ such that $$\prod_{i=1}^t \sigma_{v_i}^{a_i} =1 \ \in {\rm Gal}({\rm L}'/{\rm K}').$$ \end{theo} When $S$ is as in the Gras--Munnier criterion, i.e.\ the necessary and sufficient condition of the theorem holds, one says that the elements $\sigma_{v_i}$ satisfy a {\it strongly nontrivial relation}. When one only has $\prod_{i=1}^t\sigma_{v_i}^{a_i} =1$ with the $a_i$ not all zero, one says that the $\sigma_{v_i}$'s satisfy a {\it nontrivial relation}. \begin{rema} In fact, Theorem \ref{theo:Gras-Munnier} does not appear in \cite{gras} in this form, the difference coming from the real places (and then only for $p=2$). Indeed, for a real place $v$, in our context we speak of {\it ramification}, whereas in the context of \cite{gras} Gras speaks of {\it decomposition}. Hence the governing field in \cite{gras} is smaller than ${\rm L}'$ and the condition he obtains does not involve the $\sigma_v$'s, $v\in S_\infty$ (in fact, in his case these $\sigma_v$ are trivial). 
But the proof is the same, and we can follow it without difficulty thanks to the fact that for $v\in S_\infty$ one has ${\mathcal U}_v/{\mathcal U}_v^2={\mathbb R}^\times/{\mathbb R}^{\times 2} \simeq {\mathbb Z}/2{\mathbb Z}$; see Lemmas 2.3.1, 2.3.2, 2.3.4 and 2.3.5 of \cite{gras}. \end{rema} \begin{rema} \label{remark:frobenius} The results of \cite{gras} also allow us to obtain the following: put ${\rm L}_0':={\rm K}'(\sqrt[p]{{\mathcal O}_{\rm K}^\times})$; then $\#{\rm G}_S^{ab}(p) > \# {\rm G}_\emptyset^{ab}(p)$ if and only if there exists some nontrivial relation in ${\rm Gal}({\rm L}_0'/{\rm K}')$ between the $\sigma_v$'s, $v \in S$. See also Proposition \ref{key-proposition0}. \end{rema} As a consequence of Theorem \ref{theo:Gras-Munnier}, one has: \begin{coro} \label{coro_grascriteria} Given $p$ and ${\rm K}$, and two finite sets $T$ and $S$ of places of ${\rm K}$ coprime to $p$, there exists a cyclic degree-$p$ extension ${\rm L}/{\rm K}$, unramified outside $S\cup T$ and ramified at each place of $S$ (no condition on the places of $T$), if and only if the $\sigma_v$'s for $v \in S$ satisfy a strongly nontrivial relation in the quotient ${\rm Gal}({\rm L}'/{\rm K}')/\langle \sigma_v, v \in T\rangle$. \end{coro} \subsubsection{Extensions over the Hilbert $p$-class field of ${\rm K}$ that are abelian over ${\rm K}$} As noted at the beginning of Chapter V of \cite{gras}, the result about the existence of a degree-$p^e$ cyclic extension with prescribed ramification can be generalized in different forms. Let ${\rm H}$ be the Hilbert $p$-class field of ${\rm K}$. In what follows, we only need the existence of a degree-$p^2$ cyclic extension of ${\rm H}$, abelian over ${\rm K}$, with prescribed ramification. We now follow the strategy of \cite[Chapter V, \S2, d)]{gras}. Put $B={\rm Gal}({\rm K}_S^{ab}(p)/{\rm H})$. 
Take $\Sigma$ a finite set of places of ${\rm K}$ coprime to $p$ (not necessarily satisfying the congruence ${\rm N}({\mathfrak p}) \equiv 1 \ ({\rm mod} \ p^2)$ when ${\mathfrak p} \in \Sigma_0$). By class field theory, we get $$1 \longrightarrow (B/B^{p^2})^* \stackrel{\rho}{\longrightarrow} \bigoplus_{v\in \Sigma} ({\mathcal U}_v/({\mathcal U}_v)^{p^2})^* \longrightarrow \big(\iota ({\mathcal O}_{\rm K}^\times)\big)^* \longrightarrow 1,$$ where ${\displaystyle \iota: {\mathcal O}_{\rm K}^\times \longrightarrow \bigoplus_{v \in \Sigma} {\mathcal U}_v/({\mathcal U}_v)^{p^2}}$ is the diagonal embedding. A cyclic degree-$p^2$ extension ${\rm M}$ of ${\rm H}$, abelian over ${\rm K}$ and unramified outside $\Sigma$, is given by a character~$\psi$ of~$B/B^{p^2}$ of order $p^2$ as follows: given $\psi_v \in ({\mathcal U}_v/({\mathcal U}_v)^{p^2})^*$ for all $v\in \Sigma$, there exists a character $\psi$ of $B/B^{p^2}$ such that $\psi_{|{\mathcal U}_v}=\psi_v$ if and only if \begin{eqnarray}\label{equation:2} \forall \varepsilon \in {\mathcal O}_{\rm K}^\times, \ \prod_{v \in \Sigma} \psi_v(\varepsilon)=1.\end{eqnarray} As ${\rm M}/{\rm H}$ is totally ramified at at least one prime ideal, at least one $\psi_v$ has order $p^2$. We will now focus on the case where $\Sigma$ contains only finite places, and we use the notation ${\mathfrak p}$ instead of $v$. Let $S$ be a finite non-empty set of tame places of ${\rm K}$ where each prime ${\mathfrak p}$ (corresponding to $v \in S$) is such that ${\rm N}({\mathfrak p}) \equiv 1 \ ({\rm mod} \ p^2)$. 
Let us now write $\Sigma_{\mathfrak q}=S \cup T_{\mathfrak q}$, where $T_{\mathfrak q}=\{{\mathfrak q}\}$ is also tame. We are interested in the existence of a degree-$p^2$ cyclic extension ${\rm K}_{\mathfrak q}/{\rm H}$, abelian over ${\rm K}$, unramified outside $\Sigma_{\mathfrak q}$, and for which the inertia degree at ${\mathfrak q}$ is exactly $p$. For ${\mathfrak p} \in \Sigma_{\mathfrak q}$, let us fix a generator $\chi_{\mathfrak p}$ of $({\mathcal U}_{\mathfrak p}/({\mathcal U}_{\mathfrak p})^{p^2})^*$. By (\ref{equation:2}), ${\rm K}_{\mathfrak q}$ exists if and only if there exist $a_{\mathfrak q} \in {\mathbb F}_p^\times$ and $b_{\mathfrak p} \in {\mathbb Z}/p^2{\mathbb Z}$, ${\mathfrak p} \in S$, such that $$\forall \varepsilon \in {\mathcal O}_{\rm K}^\times, \ {\hat \chi_{\mathfrak q}}^{a_{\mathfrak q}}(\varepsilon) \prod_{{\mathfrak p} \in S} \chi_{\mathfrak p}^{b_{\mathfrak p}}(\varepsilon)=1,$$ where $${\hat \chi_{\mathfrak q}}=\left\{\begin{array}{ll} \chi_{\mathfrak q} & {\rm if} \ {\rm N}({\mathfrak q}) \not\equiv 1 \ ({\rm mod} \ p^2) \\ \chi_{\mathfrak q}^p & {\rm if} \ {\rm N}({\mathfrak q}) \equiv 1 \ ({\rm mod} \ p^2) \end{array} \right. ,$$ and such that at least one $b_{\mathfrak p} \in ({\mathbb Z}/p^2{\mathbb Z})^\times$. This last condition can be rephrased thanks to Kummer theory with the following governing field (see \cite[Chapter V, \S 2, d)]{gras}): $${\rm L}={\rm K}''(\sqrt[p^2]{{\mathcal O}_{\rm K}^\times}),$$ where ${\rm K}''={\rm K}(\zeta_{p^2})$. 
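Note that the case split defining ${\hat\chi}_{\mathfrak q}$ (and ${\hat\sigma}_{\mathfrak q}$ below) depends only on ${\rm N}({\mathfrak q})$ modulo $p^2$. For ${\rm K}={\mathbb Q}$, where ${\rm N}(q)=q$, this is a one-line congruence test; the helper below is purely illustrative (its name is ours):

```python
def frobenius_exponent(Nq: int, p: int) -> int:
    """Exponent e with chi_hat = chi^e in the case split above:
    e = p when N(q) = 1 (mod p^2), and e = 1 otherwise.
    A tame prime always satisfies N(q) = 1 (mod p)."""
    assert Nq % p == 1, "q must be tame for p"
    return p if Nq % (p * p) == 1 else 1

# p = 3, K = Q: among the tame primes 7, 13, 19, 31, 37, 43, only
# 19 and 37 are congruent to 1 mod 9, so only for those does
# chi_hat degenerate to chi^3.
print([q for q in (7, 13, 19, 31, 37, 43) if frobenius_exponent(q, 3) == 3])
# [19, 37]
```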
For each prime ${\mathfrak p} \in \Sigma_{\mathfrak q}$, let us choose a prime ${\mathfrak P}|{\mathfrak p}$ of ${\rm K}''$, and denote by $\sigma_{\mathfrak p}$ the Frobenius of ${\mathfrak P}$ in ${\rm Gal}({\rm L}/{\rm K}'')$. As before, $\sigma_{\mathfrak p}$ depends on ${\mathfrak P}|{\mathfrak p}$ only up to a power coprime to $p$. The above discussion allows us to obtain the following: \begin{prop} \label{key-proposition0} There exists a degree-$p^2$ cyclic extension ${\rm K}_{\mathfrak q}/{\rm H}$, abelian over ${\rm K}$, unramified outside $\Sigma_{\mathfrak q}$, for which the inertia group at ${\mathfrak q}$ has order exactly $p$, if and only if there exist $a_{\mathfrak q} \in {\mathbb F}_p^\times$ and $b_{\mathfrak p}\in {\mathbb Z}/p^2{\mathbb Z}$, ${\mathfrak p} \in S$, such that \begin{eqnarray} \label{equation:3} {\hat \sigma}_{\mathfrak q}^{a_{\mathfrak q}} \prod_{{\mathfrak p} \in S} \sigma_{\mathfrak p}^{b_{\mathfrak p}} =1 \in {\rm Gal}({\rm L}/{\rm K}''), \end{eqnarray} where $${\hat \sigma_{\mathfrak q}}=\left\{\begin{array}{ll} \sigma_{\mathfrak q} & {\rm if} \ {\rm N}({\mathfrak q}) \not\equiv 1 \pmod{p^2} \\ \sigma_{\mathfrak q}^p & {\rm if} \ {\rm N}({\mathfrak q}) \equiv 1 \pmod{p^2} \end{array} \right. ,$$ with at least one $b_{\mathfrak p} \in ({\mathbb Z}/p^2{\mathbb Z})^\times$. \end{prop} \begin{Remark} By the Chebotarev density theorem, infinitely many such primes ${\mathfrak q}$ exist. \end{Remark} \subsection{Saturated sets} \label{section:saturated} Take $p$ and ${\rm K}$ as before, and let $S$ be a finite set of places of ${\rm K}$ coprime to $p$.
\begin{defi} The set $S$ of places of ${\rm K}$ is called \emph{saturated} if ${\rm V}_S/({\rm K}^\times)^p=\{1\}$. \end{defi} Recall the following equality due to Shafarevich (see for example \cite[Chapter X, \S 7, Corollary 10.7.7]{NSW}): \begin{eqnarray} \label{egalite:p-rank} d_p {\rm G}_S=|S_0|+|S_\infty| \delta_{2,p} -(r_1+r_2)+1-\delta + d_p {\rm V}_S/({\rm K}^\times)^p, \end{eqnarray} showing that $d_p {\rm G}_S$ is easy to compute when $S$ is saturated. \begin{prop} \label{proposition1.6} Let $S$ and $T$ be two finite sets of places of ${\rm K}$ coprime to $p$. Suppose $S$ is saturated. Then \begin{enumerate} \item[$-$] if $S\subset T$, then $T$ is saturated; \item[$-$] for every tame place $v \notin S$, one has $d_p {\rm G}_{S\cup \{v\}}=d_p {\rm G}_S +1$. \end{enumerate} \end{prop} \begin{proof} The first point is due to the fact that ${\rm V}_T \subset {\rm V}_S$, and the second point is a consequence of (\ref{egalite:p-rank}) along with the first point. \end{proof} \begin{theo} \label{theo_saturated} A finite set $S$ coprime to $p$ is saturated if and only if the Frobenii $\sigma_v$, $v \in S$, generate the whole group ${\rm Gal}({\rm K}'(\sqrt[p]{{\rm V}_\emptyset})/{\rm K}')$. \end{theo} \begin{proof} $\bullet$ Suppose the Frobenii generate the full Galois group. By hypothesis, for each degree-$p$ extension ${\rm L}/{\rm K}'$ in ${\rm K}'(\sqrt[p]{{\rm V}_\emptyset})/{\rm K}'$, there exists a place $v\in S$ that is inert in ${\rm L}/{\rm K}'$ (when $v\in S_\infty$, $v$ is ramified in ${\rm L}/{\rm K}'$). Let us now take $x\in {\rm V}_S$: then every $v \in S$ splits totally in ${\rm K}'(\sqrt[p]{x})/{\rm K}'$. As ${\rm K}'(\sqrt[p]{x}) \subset {\rm K}'(\sqrt[p]{{\rm V}_\emptyset})$, one deduces that ${\rm K}'(\sqrt[p]{x})={\rm K}'$, and then $x\in ({\rm K}')^p$. As $[{\rm K}':{\rm K}]$ is coprime to $p$, one finally obtains that $x\in {\rm K}^{\times p}$, so ${\rm C}yB_S$ is trivial, i.e.\ $S$ is saturated.
$\bullet$ If $S$ is saturated, then for every finite set $T$ of tame places of ${\rm K}$ with $T\cap S=\emptyset$, one has $d_p {\rm G}_{S\cup T}=d_p {\rm G}_S + |T|$ by Proposition \ref{proposition1.6}. Then, by the Gras-Munnier criterion, one has $\langle \sigma_v , v \in S \rangle = {\rm Gal}({\rm L}'/{\rm K}')$. \end{proof} \begin{coro} The finite set $S$ coprime to $p$ is saturated if and only if, for every finite set $T$ of tame places of ${\rm K}$, there exists a cyclic degree-$p$ extension of ${\rm K}$ unramified outside $S\cup T$ but ramified at each place of $T$. \end{coro} \begin{proof} $\bullet$ If $S$ is saturated, then by Theorem \ref{theo_saturated} the Frobenii $\sigma_v$, $v \in S$, generate ${\rm Gal}({\rm L}'/{\rm K}')$, and the result follows from Corollary \ref{coro_grascriteria}. $\bullet$ Suppose that $S$ is such that for every finite set $T$ of tame places of ${\rm K}$, there exists a cyclic degree-$p$ extension unramified outside $S\cup T$ and ramified at each place of $T$. Then, by Corollary \ref{coro_grascriteria} and the Chebotarev density theorem, ${\rm Gal}({\rm L}'/{\rm K}')=\langle \sigma_v, \ v \in S\rangle$. By Theorem \ref{theo_saturated}, $S$ is saturated. \end{proof} \subsection{Spectral sequence} Let $S$ and $T$ be two finite sets of places of ${\rm K}$ coprime to~$p$. Consider the following exact sequence of pro-$p$ groups: \begin{eqnarray} \label{equation:se} 1 \longrightarrow {\rm H}_{S,T} \longrightarrow {\rm G}_{S\cup T}(p) \longrightarrow {\rm G}_S(p) \longrightarrow 1. \end{eqnarray} \begin{defi} Put $${\rm X}X_{S,T}:={\rm H}_{S,T}/[{\rm H}_{S,T},{\rm H}_{S,T}]{\rm H}_{S,T}^p,$$ and $${\rm X}_{S,T}:=\left({\rm X}X_{S,T}\right)_{{\rm G}_S(p)}={\rm H}_{S,T}/ [{\rm H}_{S,T},{\rm G}_S(p)]{\rm H}_{S,T}^p.$$ \end{defi} Recall that, as ${\rm G}_S(p)$ is a pro-$p$ group, the completed group algebra ${\mathbb F}_p\ldbrack {\rm G}_S(p) \rdbrack$ is a local ring.
\begin{lemm} The abelian group ${\rm X}X_{S,T}$ is an ${\mathbb F}_p\ldbrack {\rm G}_S(p) \rdbrack$-module (with continuous action) that can be generated by $d_p {\rm X}_{S,T}$ elements. Moreover, $d_p {\rm X}_{S,T} \leq |T|$. \end{lemm} \begin{proof} The first part follows from Nakayama's lemma. For the second, the fact that ${\rm G}_S(p)$ acts transitively on the inertia groups $I_w$ of $w|v \in T$ in ${\rm X}X_{S,T}$ implies $$\bigoplus_{i=1}^t {\mathbb F}_p\ldbrack {\rm G}_S(p) \rdbrack \twoheadrightarrow \langle I_w, w|v \in T \rangle = {\rm X}X_{S,T},$$ where $t=|T|$. Taking the ${\rm G}_S(p)$-coinvariants, we obtain ${\mathbb F}_p^t \twoheadrightarrow {\rm X}_{S,T}$. \end{proof} Applying the Hochschild-Serre spectral sequence to (\ref{equation:se}), one gets: \begin{lemm} \label{chase} Let $S, T$ be two finite sets of places of ${\rm K}$ coprime to $p$. Then one has: \label{exact-sequence1} $$ 1 \longrightarrow H^1({\rm G}_S(p),{\mathbb F}_p) \longrightarrow H^1({\rm G}_{S\cup T}(p),{\mathbb F}_p) \longrightarrow {\rm X}_{S,T}^\vee \longrightarrow {\sha}^2_{S,p} \longrightarrow {\sha}^2_{S\cup T,p}. $$ Furthermore, the cokernel of the natural injection ${\sha}^2_{X,p} \hookrightarrow {\rm C}yB_X$ decreases in dimension as $X$ increases.
\end{lemm} \begin{proof} The Hochschild-Serre spectral sequence gives the exact commutative diagram: $$\xymatrix{ H^1({\rm G}_S(p),{\mathbb F}_p) \ar@{^{(}->}[r] & H^1({\rm G}_{S\cup T}(p),{\mathbb F}_p) \ar@{->}[r] & {\rm X}_{S,T}^\vee \ar@{->}[r] & H^2({\rm G}_S(p),{\mathbb F}_p) \ar@{->}[r] \ar@{->}[d] & H^2({\rm G}_{S\cup T}(p),{\mathbb F}_p) \ar@{->}[d] \\ & && \oplus_{v\in S} H^2({\rm G}_v,{\mathbb F}_p) \ar@{^(->}[r]& \oplus_{v \in S\cup T} H^2({\rm G}_v,{\mathbb F}_p)}$$ Chasing the transgression map ${\rm X}^\vee_{S,T} \stackrel{tg}{\longrightarrow} H^2({\rm G}_S(p),{\mathbb F}_p)$ to the right shows that its image lies in ${\sha}^2_{S,p}$, whose image to the right lies in ${\sha}^2_{S\cup T,p}$. We now have the diagram $$\xymatrix{ 1 \ar@{->}[r] & H^1({\rm G}_S(p),{\mathbb F}_p) \ar@{->}[r] & H^1({\rm G}_{S\cup T}(p),{\mathbb F}_p) \ar@{->}[r] & {\rm X}_{S,T}^\vee \ar@{->}[r] & {\sha}^2_{S,p} \ar@{->}[r] \ar@{^(->}[d] & {\sha}^2_{S\cup T,p} \ar@{^(->}[d] \\ & & && {\rm C}yB_S \ar@{->>}[r]& {\rm C}yB_{S\cup T} }$$ where the bottom horizontal map is surjective, as the inclusion ${\rm V}_{S\cup T}/{\rm K}^{\times p} \hookrightarrow {\rm V}_S/{\rm K}^{\times p}$ is immediate from the definition of ${\rm V}_X$. The second result follows. \end{proof} \begin{coro} If the natural injection ${\sha}^2_{X,p} \hookrightarrow {\rm C}yB_X$ is an isomorphism, then for any set $Y$ we have ${\sha}^2_{X\cup Y,p} \stackrel{\simeq}{\hookrightarrow} {\rm C}yB_{X \cup Y}$. \end{coro} Let us give an obvious consequence of Lemma~\ref{exact-sequence1}. \begin{lemm} \label{lemma1} Suppose that $H^1({\rm G}_S(p),{\mathbb F}_p) \simeq H^1({\rm G}_{S\cup T}(p),{\mathbb F}_p)$. Then ${\rm X}_{S,T}^\vee \hookrightarrow {\sha}^2_{S,p}$. If moreover $S \cup T$ is saturated, then ${\rm X}_{S,T}^\vee \simeq {\sha}^2_{S,p}$.
\end{lemm} \begin{proof} If $S\cup T$ is saturated, then ${\rm V}_{S\cup T}/{\rm K}^{\times p}=\{1\}$, which implies that ${\rm C}yB_{S\cup T}=\{1\}$. Hence, by (\ref{equation:0}), ${\sha}^2_{S\cup T}=\{0\}$, and the same holds for ${\sha}^2_{S\cup T,p}$. We conclude with Lemma \ref{exact-sequence1}. \end{proof} An important consequence of Lemmas~\ref{chase} and~\ref{lemma1} is that elements of ${\rm X}_{S,T}^\vee$ can give rise to elements of ${\sha}^2_{S,p}$. The former can be found via ray class group computations. We thus have a method for producing independent elements of ${\sha}^2_{S,p}$. If we find $d_p {\rm C}yB_S$ such elements, we have established ${\sha}^2_{S,p} \stackrel{\simeq}{\hookrightarrow} {\sha}^2_S \stackrel{\simeq}{\hookrightarrow} {\rm C}yB_S$, and thus computed $d_p {\sha}^2_S$. \section{Proof of the results} \subsection{A key Proposition} Let $p$ be a prime number. Let ${\rm K}$ be a number field and let $X$ be a finite set of places of ${\rm K}$ coprime to $p$. The proof of Theorem \ref{theo:main} is a consequence of the following proposition. \begin{prop} \label{prop:ramification-in-p2} There exist (infinitely many) pairs of finite sets of tame places~$S$ and~$T$ of~${\rm K}$ such that: \begin{enumerate}[$(i)$] \item $T \cup X$ is saturated and $d_p {\rm G}_{T\cup X}=d_p {\rm G}_X$; \item $d_p {\rm G}_{S\cup T \cup X}=d_p {\rm G}_{S \cup X}$; \item $|T| \leq d_p {\rm Cl}_{\rm K}+ r_1+r_2-1+\delta$ and $|S| \leq r_1+r_2-1+\delta$; \item for each prime ${\mathfrak q} \in T$, there exists a degree-$p^2$ cyclic extension ${\rm K}_{\mathfrak q}$ of ${\rm H}$, abelian over~${\rm K}$, unramified outside $S\cup X \cup \{{\mathfrak q}\}$, where the inertia group at ${\mathfrak q}$ is of order $p$.
\end{enumerate} \end{prop} Put ${\rm F}_0={\rm K}'(\sqrt[p]{{\rm V}_\emptyset})$, ${\rm L}_0={\rm K}'(\sqrt[p]{{\mathcal O}_{\rm K}^\times})$, ${\rm K}''={\rm K}(\zeta_{p^2})$, ${\rm L}_1={\rm K}''(\sqrt[p]{{\mathcal O}_{\rm K}^\times})$, ${\rm F}_1={\rm K}''(\sqrt[p]{{\rm V}_\emptyset})$, and ${\rm F}={\rm L} {\rm F}_0={\rm K}''(\sqrt[p^2]{{\mathcal O}_{\rm K}^\times}, \sqrt[p]{{\rm V}_\emptyset})$. Put ${\rm G}={\rm Gal}({\rm F}/{\rm K}')$. \begin{proof} (of Proposition \ref{prop:ramification-in-p2}.) Given a prime ${\mathfrak p}$ of ${\mathcal O}_{\rm K}$, coprime to $p$, we choose a prime ${\mathfrak P}|{\mathfrak p}$ of ${\rm F}$, and we consider its Frobenius $\sigma_{\mathfrak p}:=\sigma_{\mathfrak P}$ in the Galois group ${\rm Gal}({\rm F}/{\rm K}')$ and its quotients. As mentioned earlier, this is well-defined up to a power coprime to $p$, and that is all we need. Let $E_X=\langle {\sigma_{\mathfrak p}}_{|{\rm F}_0}, {\mathfrak p} \in X\rangle \subset {\rm Gal}({\rm F}_0/{\rm K}')$ be the subgroup of ${\rm Gal}({\rm F}_0/{\rm K}')$ generated by the Frobenii of the primes ${\mathfrak p} \in X$. Put $m_X=d_p {\rm V}_\emptyset- d_p E_X$.
a) \underline{Assume first that ${\rm F}_0\cap {\rm K}''= {\rm K}'$.} {\tiny $$\xymatrix{ & & {\rm L}={\rm K}''(\sqrt[p^2]{{\mathcal O}_{\rm K}^\times}) \ar@{-}[r] & {\rm F} \ar@{-}[ld] \\ {\rm K}'' \ar@{-}[r]&{\rm L}_1={\rm K}''(\sqrt[p]{{\mathcal O}_{\rm K}^\times}) \ar@{-}[ru] \ar@{-}[r]& {\rm F}_1={\rm K}''(\sqrt[p]{{\rm V}_\emptyset})&\\ {\rm K}' \ar@{-}[u] \ar@{-}[r]& {\rm L}_0={\rm K}'(\sqrt[p]{{\mathcal O}_{\rm K}^\times}) \ar@{-}[u] \ar@{-}[r] &\ar@{-}[u] {\rm F}_0={\rm K}'(\sqrt[p]{{\rm V}_\emptyset})& }$$ } We choose $S$ and $T$ as follows: \begin{enumerate} \item[$-$] let $T$ be {\it any} set of primes ${\mathfrak q}$ whose Frobenii $\sigma_{\mathfrak q}$ in ${\rm G}$ are such that their restrictions to ${\rm F}_0$ form an ${\mathbb F}_p$-basis of a subspace in direct sum with $E_X$: in other words, $${\rm Gal}({\rm F}_0/{\rm K}')=\langle {\sigma_{\mathfrak q}}_{|{\rm F}_0}, {\mathfrak q} \in T \rangle \bigoplus E_X,$$ and $\displaystyle{ \langle {\sigma_{\mathfrak q}}_{|{\rm F}_0}, {\mathfrak q} \in T \rangle = \bigoplus_{{\mathfrak q} \in T} \langle {\sigma_{\mathfrak q}}_{|{\rm F}_0}\rangle}$; \item[$-$] let $\tilde{X}$ be the set of places of $X$ whose Frobenii lie in ${\rm Gal}({\rm F}/{\rm F}_1)$, and let $S$ be {\it any} set of primes ${\mathfrak p}$ whose Frobenii $\sigma_{\mathfrak p}$ in ${\rm G}$, in direct sum with the Frobenii of the places of $\tilde{X}$, form a basis of ${\rm Gal}({\rm F}/{\rm F}_1)$. \end{enumerate} As ${\rm Gal}({\rm F}_1/{\rm K}')$ has exponent $p$, we see that for each ${\mathfrak q} \in T$, $\sigma_{\mathfrak q}^p \in {\rm Gal}({\rm F}/{\rm F}_1)$.
Observe also that if ${\sigma_{\mathfrak q}}_{|{\rm K}''}$ is not trivial (which is equivalent to ${\rm N}({\mathfrak q}) \not\equiv 1 \pmod{p^2}$), then $\sigma_{\mathfrak q}^p$ is the Frobenius at ${\mathfrak P}$ in ${\rm Gal}({\rm F}/{\rm K}'')$; otherwise $\sigma_{\mathfrak q}^p$ is the $p$-th power of the Frobenius at ${\mathfrak Q} \mid {\mathfrak q}$ in ${\rm Gal}({\rm F}/{\rm K}'')$. By Theorem \ref{theo_saturated}, the set $T\cup X$ is saturated. Moreover, thanks to the direct-sum condition on the Frobenii at the primes of $T$, by Theorem \ref{theo:Gras-Munnier} there is no cyclic degree-$p$ extension of ${\rm K}$ unramified outside $T \cup X$ and totally ramified at the places of some nonempty subset of~$T$: thus $d_p {\rm G}_{T\cup X}=d_p {\rm G}_X$, and $(i)$ holds. Moreover, as each place of $S$ splits totally in the governing extension ${\rm F}_0/{\rm K}'$, again by Theorem \ref{theo:Gras-Munnier} we get $d_p {\rm G}_{S\cup T\cup X}=d_p {\rm G}_{S\cup X}$, and $(ii)$ holds. The condition on $S$ gives a relation of type (\ref{equation:3}) in ${\rm Gal}({\rm F}/{\rm F}_1) \subset {\rm Gal}({\rm F}/{\rm L}_1)$ for the set $S\cup \tilde{X}\cup \{{\mathfrak q}\}$, ${\mathfrak q} \in T$. After taking the quotient of this relation by ${\rm Gal}({\rm F}/{\rm L})$, we obtain from Proposition \ref{key-proposition0}, for each prime ${\mathfrak q} \in T$, the existence of a degree-$p^2$ cyclic extension ${\rm K}_{\mathfrak q}/{\rm H}$, abelian over ${\rm K}$ and unramified outside $S\cup X \cup \{{\mathfrak q}\}$, for which the inertia at ${\mathfrak q}$ is of order $p$, proving $(iv)$. Point $(iii)$ is obvious. b) \underline{Assume now that ${\rm K}''\subset {\rm F}_0$}.
Let ${\mathfrak A}_i$, $i=1,\cdots, d$, be ideals of ${\mathcal O}_{\rm K}$ whose classes form a system of minimal generators of ${\rm Cl}_{\rm K}[p]$, and let $a_i \in {\rm K}^\times$ be such that $(a_i)={\mathfrak A}^p_i$. Put $A=\langle a_1,\cdots, a_d \rangle {\rm K}^{\times p}/{\rm K}^{\times p} \subset {\rm V}_\emptyset/{\rm K}^{\times p}$. Note that ${\rm K}'(\sqrt[p]{{\rm V}_\emptyset})={\rm K}'(\sqrt[p]{A},\sqrt[p]{{\mathcal O}_{\rm K}^\times})$. As ${\rm F}_0/{\rm K}'$ and ${\rm K}''/{\rm K}'$ are abelian $p$-extensions, the containment ${\rm K}''\subset {\rm F}_0$ implies ${\rm K}'={\rm K}$. Moreover, ${\rm L}_0 \cap {\rm K}''(\sqrt[p]{A})={\rm K}''$. {\tiny $$\xymatrix{& {\rm L}={\rm K}''(\sqrt[p^2]{{\mathcal O}_{\rm K}^\times}) \ar@{-}[r]& {\rm F} \\ {\rm L}_0={\rm K}'(\sqrt[p]{{\mathcal O}_{\rm K}^\times}) \ar@{-}[ur] \ar@{-}[r] & {\rm F}_0={\rm F}_1 ={\rm K}'(\sqrt[p]{{\rm V}_\emptyset}) \ar@{-}[ur] & \\ {\rm K}'' \ar@{-}[u] \ar@{-}[r]& {\rm K}''(\sqrt[p]{A}) \ar@{-}[u] & \\ {\rm K}' \ar@{-}[u] \ar@{-}[r] & {\rm K}'(\sqrt[p]{A})\ar@{-}[u]& } $$ } Now take $T$ and $S$ as in case a). \end{proof} \begin{rema} Observe that one can take $T$ such that $|T|\leq m_X= d_p {\rm V}_\emptyset -d_p E_X$. \end{rema} \subsection{Proof of Theorem \ref{theo:main}} Let $S$ and $T$ be as in Proposition \ref{prop:ramification-in-p2}. As $X\cup T$ is saturated, by $(i)$ of Proposition \ref{prop:ramification-in-p2} and (\ref{egalite:p-rank}), one obtains $|T|=d_p {\rm C}yB_X$. Moreover, $S\cup X \cup T$ is also saturated and in particular ${\rm C}yB_{S\cup X\cup T} \simeq {\sha}^2_{S\cup X \cup T,p} =\{0\}$. With $(ii)$, we see that $d_p {\rm C}yB_{S\cup X}=|T|$, so $(i)$ and $(ii)$ imply ${\rm C}yB_{S\cup X} \simeq {\rm C}yB_X$.
Now let us apply the spectral sequence to the short exact sequence $$1 \longrightarrow {\rm H}_{S\cup X,T} \longrightarrow {\rm G}_{S\cup X\cup T}(p) \longrightarrow {\rm G}_{S \cup X}(p) \longrightarrow 1$$ to obtain, by Lemma \ref{exact-sequence1}: $$1 \rightarrow H^1({\rm G}_{S\cup X}(p),{\mathbb F}_p) \rightarrow H^1({\rm G}_{S\cup X\cup T}(p),{\mathbb F}_p) \rightarrow {\rm X}_{S\cup X,T}^\vee \rightarrow {\sha}^2_{S \cup X,p} \rightarrow {\sha}^2_{S\cup X\cup T,p} =\{0\}.$$ Hence ${\rm X}_{S\cup X,T}^\vee \simeq {\sha}^2_{S\cup X,p}$. Now $(iv)$ of Proposition \ref{prop:ramification-in-p2} implies that $d_p {\rm X}_{S\cup X,T} \geq |T|$, and as obviously $d_p {\rm X}_{S\cup X,T} \leq |T|$, we finally get $d_p{\sha}^2_{S\cup X,p}=|T|$. Hence $d_p {\sha}^2_{S\cup X,p}=|T|= d_p {\rm C}yB_{S\cup X} = d_p {\rm C}yB_X$. Thanks to (\ref{equation:1}), one has $$ {\sha}^2_{S\cup X,p} \simeq {\sha}^2_{S\cup X} \simeq {\rm C}yB_{S \cup X} \simeq {\rm C}yB_X .$$ \subsection{Proof of Corollary \ref{coro2}} Let us choose $S$ and $T$ as in the proof of Proposition \ref{prop:ramification-in-p2}. Let us write $T=\{{\mathfrak p}_1,\cdots, {\mathfrak p}_{m_X}\}$, where $m_X=d_p {\rm C}yB_\emptyset-d_p E_X$. Put $S_0=S\cup X$ and, for $i \geq 0$, $S_{i+1}=S_i\cup \{{\mathfrak p}_{i+1}\}$. Here, as $d_p {\rm G}_{S_i}=d_p {\rm G}_{S_{m_X}}$, the spectral sequence shows that \begin{eqnarray} \label{equality:diagramm} {\mathbb F}_p \hookrightarrow {\sha}^2_{S_i,p} \longrightarrow {\sha}^2_{S_{i+1},p}, \end{eqnarray} in particular $d_p {\sha}^2_{S_{i},p} \leq d_p {\sha}^2_{S_{i+1},p} +1$. After noting that $d_p {\sha}^2_{S_{m_X},p}=0$ (the set $X\cup T$ is saturated) and that $d_p {\sha}^2_{S_0,p}=|T|=m_X$, we conclude that $d_p {\sha}^2_{S_i,p}=m_X-i$.
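The final counting step rests on an elementary observation, which we spell out here for the reader's convenience (this remark is an addition, not part of the original text).

```latex
\begin{Remark}
If integers $d_0,\dots,d_m\geq 0$ satisfy $d_0=m$, $d_m=0$ and
$d_i \leq d_{i+1}+1$ for all $i$, then necessarily $d_i=m-i$ for every $i$:
the total decrease from $d_0$ to $d_m$ equals $m$, while each of the $m$ steps
can contribute a decrease of at most $1$, so every step decreases the value by
exactly $1$. This applies with $d_i=d_p\,{\sha}^2_{S_i,p}$ and $m=m_X$.
\end{Remark}
```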
Observe also that (\ref{equality:diagramm}) induces $${\mathbb F}_p \hookrightarrow {\sha}^2_{S_i} \longrightarrow {\sha}^2_{S_{i+1}},$$ and, as before, $d_p {\sha}^2_{S_i}=m_X-i$. The isomorphisms ${\sha}^2_{S_i,p} \simeq {\sha}^2_{S_i}$ are then immediate. We have proved: \begin{coro} One has ${\sha}^2_{S_i} \simeq {\mathbb F}_p^{m_X-i}$. \end{coro} Take $X=\emptyset$ to obtain Corollary \ref{coro2}. \section{Examples} In this section we give a few examples of fields ${\rm K}$ and sets $S$ such that, in the diagram $${\sha}^2_\emptyset \hookrightarrow {\rm C}yB_\emptyset \twoheadrightarrow {\rm C}yB_S \hookleftarrow {\sha}^2_S,$$ the two maps on the right are isomorphisms. In our first two examples we show that the left map is {\it not} an isomorphism. Thus we give explicit examples where ${\sha}^2_X$ increases as $X$ does, in contrast to the wild case. In the third example we establish $$ {\rm C}yB_\emptyset \stackrel{\simeq}{\twoheadrightarrow} {\rm C}yB_S \stackrel{\simeq}{\hookleftarrow} {\sha}^2_S,$$ but do not know whether $d_p {\sha}^2_\emptyset < d_p {\sha}^2_S$; indeed, we suspect equality in that case. In the examples below, $\ell_i$ refers to the $i$-th prime of ${\rm K}$ above the rational prime $\ell$, in the order in which MAGMA presents the factorization. All code was run unconditionally, that is, we did {\it not} use GRH bounds for computing ray class groups. \begin{Example} Let ${\rm K}$ be the unique degree-$3$ subfield of ${\mathbb Q}(\zeta_{7})$ and let $p=2$. Then one can easily compute that ${\rm K}$ has trivial class group and, since ${\rm K}$ is totally real, $d_p {\rm C}yB_\emptyset= d_p {\mathcal O}^\times_{\rm K} /{\mathcal O}^{\times 2}_{\rm K} + d_p {\rm Cl}_{\rm K}[2]=3$. Clearly ${\rm G}_\emptyset = \{e\}$ and $d_p {\sha}^2_\emptyset=0$, so ${\sha}^2_\emptyset \hookrightarrow {\rm C}yB_\emptyset$ has $3$-dimensional cokernel. Set $S = \{37_1, 181_1, 293_1\}$ and $T=\{307_1,311_1,349_1\}$.
One computes $d_p H^1({\rm G}_T,{\mathbb F}_2)=0$, so $T$ and $S\cup T$ are saturated. The $2$-parts of the ray class groups of conductors $S\cup T$ and $S$ are $({\mathbb Z}/4)^3$ and $({\mathbb Z}/2)^3$ respectively, so the map $H^1({\rm G}_S,{\mathbb F}_2) \rightarrow H^1({\rm G}_{S\cup T},{\mathbb F}_2)$ is an isomorphism and $d_p {\rm X}_{S,T}^\vee \geq 3$. As $d_p {\sha}^2_S \leq d_p {\rm C}yB_S \leq d_p {\rm C}yB_\emptyset =3$, we see that $d_p {\sha}^2_S=3$. \end{Example} \begin{Example} Let ${\rm K}$ be the unique degree-$3$ subfield of ${\mathbb Q}(\zeta_{349})$ and let $p=2$. Here ${\rm K}$ has class group $({\mathbb Z}/2)^2$ and is again totally real, so $d_p {\rm C}yB_\emptyset= d_p {\mathcal O}^\times_{\rm K} /{\mathcal O}^{\times 2}_{\rm K} + d_p {\rm Cl}_{\rm K}[2]=5$. One computes that the class group of the Hilbert class field of ${\rm K}$ is trivial, so ${\rm G}_\emptyset ={\mathbb Z}/2 \times {\mathbb Z}/2$ and has three relations. Thus $d_p {\sha}^2_\emptyset =d_p H^2({\rm G}_\emptyset,{\mathbb F}_2)=3$, so the map ${\sha}^2_\emptyset \hookrightarrow {\rm C}yB_\emptyset$ has $2$-dimensional cokernel. Set $S = \{701_1, 2857_1, 3169_1\}$ and $T=\{367_1,397_1,401_1,409_1,449_1\}$. One computes $d_p H^1({\rm G}_T,{\mathbb F}_2)=2$, so $T$ and $S\cup T$ are saturated. The $2$-parts of the ray class groups of conductors $S\cup T$ and $S$ are ${\mathbb Z}/4 \times ({\mathbb Z}/8)^2\times {\mathbb Z}/16 \times {\mathbb Z}/32$ and $({\mathbb Z}/2)^5$ respectively, so the map $H^1({\rm G}_S,{\mathbb F}_2) \rightarrow H^1({\rm G}_{S\cup T},{\mathbb F}_2)$ is an isomorphism and $d_p {\rm X}_{S,T}^\vee \geq 5$. As $d_p {\sha}^2_S \leq d_p {\rm C}yB_S \leq d_p {\rm C}yB_\emptyset =5$, we see that $d_p {\sha}^2_S=5$.
\end{Example} \begin{Example} Let ${\rm K}={\mathbb Q}[x]/(f(x))$, where $f(x)= x^{12}+339x^{10}-19752x^8-2188735x^6+284236829x^4+4401349506x^2+15622982921$. This polynomial is irreducible, and ${\rm K}$ is totally complex, has small root discriminant, and has class group $({\mathbb Z}/2)^6$. The field ${\rm K}$ has been used as a starting point in finding infinite towers of totally complex number fields whose root discriminants are the smallest currently known. Set $$S= \{7_2,11_1,43_1,47_3,67_3,97_1 \},\,\, T= \{ 5_1, 13_1, 19_1,19_2,23_1, 23_2, 23_3, 29_1, 31_1, 61_1, 149_1,149_4\}.$$ As ${\rm K}$ is totally complex, $$d_p {\rm C}yB_\emptyset = d_p {\mathcal O}^\times_{\rm K} /{\mathcal O}^{\times 2}_{\rm K} + d_p {\rm Cl}_{\rm K}[2]= 6+6=12= |T|.$$ One computes $d_p H^1({\rm G}_T,{\mathbb F}_2)=6$, so $T$ and $S\cup T$ are saturated. The $2$-parts of the ray class groups of conductors $S\cup T$ and $S$ are $({\mathbb Z}/4)^5 \times ({\mathbb Z}/8)^4 \times ({\mathbb Z}/16)^3$ and $({\mathbb Z}/2)^{11} \times {\mathbb Z}/8$ respectively, so the map $H^1({\rm G}_S,{\mathbb F}_2) \rightarrow H^1({\rm G}_{S\cup T},{\mathbb F}_2)$ is an isomorphism. From this data one can only conclude that $d_p {\rm X}_{S,T}^\vee \geq 11$. On the other hand, for every $v\in T$ one computes that the $2$-part of the ray class group of conductor $S \cup \{v\}$ has order at least $2^{15} > 2^{14}$. As the latter quantity is the order of the $2$-part of the ray class group of conductor $S$, we get $|T|=12$ independent elements of ${\rm X}_{S,T}^\vee$, so $d_p {\sha}^2_S \geq 12$. As $d_p {\rm C}yB_S \leq d_p {\rm C}yB_\emptyset =12$, we have $d_p {\sha}^2_S =12$. {\it We suspect that in this case $d_p {\sha}^2_\emptyset=12$.} \end{Example} \end{document}
\begin{document} \maketitle \begin{abstract} In this paper we prove a conjecture of Markman about the shape of the monodromy group of irreducible holomorphic symplectic manifolds of OG10 type. As a corollary, we also compute the locally trivial monodromy group of the underlying singular symplectic variety. \end{abstract} \tableofcontents \section*{Introduction} An irreducible holomorphic symplectic manifold is a simply connected compact K\"ahler manifold carrying a unique (up to scalar) closed non-degenerate holomorphic $2$-form. These manifolds are fundamental factors in the Beauville--Bogomolov decomposition of compact K\"ahler manifolds with trivial canonical bundle. The first examples occur in dimension $2$, where they are exactly the $K3$ surfaces. It is very difficult to construct examples in higher dimension, and so far only four deformation types are known: two families in every even dimension, namely the $K3^{[n]}$ and $\operatorname{Kum}^n$ types, and two sporadic families in dimensions $6$ and $10$, namely the OG6 and OG10 types. The latter is the main object of this paper. Like $K3$ surfaces, irreducible holomorphic symplectic manifolds are studied via their second integral cohomology group. More precisely, Beauville, Bogomolov and Fujiki independently noticed that the group $\operatorname{H}^2(X,\mathbb{Z})$, where $X$ is an irreducible holomorphic symplectic manifold, carries a natural non-degenerate symmetric bilinear form, which generalises the intersection product on a $K3$ surface (see, e.g., \cite{Beauville:c1=0}). With this bilinear form, $\operatorname{H}^2(X,\mathbb{Z})$ becomes a lattice of signature $(3,b_2(X)-3)$. In all the known examples $\operatorname{H}^2(X,\mathbb{Z})$ is an even lattice, but this is not known to hold in general. Moreover, it is unimodular when $X$ is a $K3$ surface, but in other cases this need not be true: in particular, it fails in all the known higher-dimensional examples.
It is natural to study the group $\operatorname{O}(\operatorname{H}^2(X,\mathbb{Z}))$ of isometries of this lattice, and in particular those isometries which are geometrically meaningful (in a sense to be made precise in Definition~\ref{defn:parallel transport operators}). The subgroup of these geometrically meaningful isometries is denoted $\operatorname{Mon}^2(X)$, and its elements are called \emph{monodromy} operators: among the elements of $\operatorname{Mon}^2(X)$ there are, for example, isometries of the form $f^*$, where $f\in\operatorname{Aut}(X)$ is an automorphism or, more generally, $f\in\operatorname{Bir}(X)$ is a birational automorphism (since the canonical bundle of an irreducible holomorphic symplectic manifold is nef, the pullback of a birational transformation is an isomorphism of $\operatorname{H}^2(X,\mathbb{Z})$ onto itself which preserves the lattice structure; see \cite{Huybrechts:BasicResults}). All isometries in $\operatorname{Mon}^2(X)$ preserve a natural orientation of the lattice (see Remark~\ref{rmk:orient pub}), so they lie in the subgroup $\operatorname{O}^+(\operatorname{H}^2(X,\mathbb{Z}))$ of orientation-preserving isometries. There are isometries that do not preserve the orientation, like $-\operatorname{id}\in\operatorname{O}(\operatorname{H}^2(X,\mathbb{Z}))$, and that therefore are not monodromy operators. This can be explained from a moduli-theoretic point of view as follows: if $\mathfrak{M}$ denotes the moduli space of marked irreducible holomorphic symplectic manifolds of fixed dimension and deformation type, then the pairs $(X,\eta)$ and $(X,-\eta)$ always belong to different connected components. As remarked before, birational automorphisms induce monodromy operators. On the other hand, any monodromy operator preserving the Hodge structure comes from a birational automorphism, up to some exceptional reflections associated with divisorial contractions (\cite[Theorem~6.18]{Markman:Survey}).
We study some exceptional reflections in the context of manifolds of type OG10 in Section~\ref{section:exceptional reflections}. The knowledge of the monodromy group is of paramount importance for the study of essentially every aspect of the geometry of irreducible holomorphic symplectic manifolds. It has been explicitly computed by Markman for manifolds of $K3^{[n]}$ type (see \cite{Markman:Monodromy1}), and by Markman and Mongardi for manifolds of $\operatorname{Kum}^n$ type (see \cite{Markman:MonodromyKum} and \cite{Mongardi:Monodromy}). In both cases its exact shape depends on the dimension, but the general feature is that $\operatorname{Mon}^2(K3^{[n]})=\operatorname{O}^+(\operatorname{H}^2(K3^{[n]},\mathbb{Z}))$ (when $n-1$ is a prime power), while $\operatorname{Mon}^2(\operatorname{Kum}^n)\subset\operatorname{O}^+(\operatorname{H}^2(\operatorname{Kum}^n,\mathbb{Z}))$ has index $2$ (when $n+1$ is a prime power). The last fact is deeply related to the geometry: it reflects the existence of two families of manifolds of $\operatorname{Kum}^n$ type that are generically non-birational, but Hodge isometric. This phenomenon was noticed by Namikawa in \cite{Namikawa:Counterexample} as a counter-example to the birational Torelli theorem. Finally, the monodromy group of manifolds of OG6 type has recently been computed by Mongardi and Rapagnetta (\cite{MongardiRapagnetta:MonodromyOG6}), who showed that it is maximal, i.e.\ $\operatorname{Mon}^2(\operatorname{OG6})=\operatorname{O}^+(\operatorname{H}^2(\operatorname{OG6},\mathbb{Z}))$. In this paper, we address the remaining question of determining the monodromy group of manifolds of OG10 type. It was conjectured by Markman that $\operatorname{Mon}^2(\operatorname{OG10})=\operatorname{O}^+(\operatorname{H}^2(\operatorname{OG10},\mathbb{Z}))$ (see \cite[Conjecture~10.7]{Markman:Survey}).
In \cite[Theorem~3.3]{Mongardi:Monodromy}, Mongardi constructs an orientation-preserving isometry that is not a monodromy operator; unfortunately, the construction is based on the work \cite{WandelMongardi:InducedAutomorphisms}, which contains a mistake (see~\cite{Mongardi:MonodromyErratum}). Our main result is the following affirmative answer to Markman's conjecture. \begin{thmno}[Theorem~\ref{thm:Mon^2}] Let $X$ be an irreducible holomorphic symplectic manifold of OG10 type. Then $\operatorname{Mon}^2(X)=\operatorname{O}^+(\operatorname{H}^2(X,\mathbb{Z}))$. \end{thmno} We give an explicit description of $\operatorname{Mon}^2(X)$ in terms of generators when $X=\widetilde{M}_S$ is the O'Grady moduli space, namely the symplectic desingularisation of the moduli space $M_S$ of rank-$2$ sheaves on a projective $K3$ surface $S$ with trivial determinant and second Chern class of degree $4$ (see Example~\ref{example:O'Grady moduli space}). As a straightforward corollary of this result (see \cite[Theorem~1.3]{Markman:Survey}), we get a strong version of the global Torelli theorem. \begin{corno}[Global Torelli Theorem] Let $X$ and $Y$ be two irreducible holomorphic symplectic manifolds of OG10 type. Then $X$ and $Y$ are bimeromorphic if and only if they are Hodge isometric. \end{corno} Here we say that $X$ and $Y$ are Hodge isometric if there exists an isometry between $\operatorname{H}^2(X,\mathbb{Z})$ and $\operatorname{H}^2(Y,\mathbb{Z})$ that respects the Hodge decompositions. Let us outline the proof of Theorem~\ref{thm:Mon^2}. The first step consists in producing monodromy operators. Partial results in this direction were obtained by Markman in \cite{Markman:ModularGaloisCovers}, where he proved that the group generated by two particular exceptional reflections is contained in the monodromy group. He worked in the family of O'Grady moduli spaces.
Working in the same family, we study how the monodromy group of the underlying $K3$ surface lifts to the monodromy group of the O'Grady moduli space (see Theorem~\ref{thm:my operators O'Grady}). This result was expected, but, to the best of the author's knowledge, there is no proof in the literature. Using non-trivial results in birational geometry, such as the termination of the minimal model program for irreducible holomorphic symplectic manifolds (\cite{LehnPacienza:MMP}), and the birational geometry of singular moduli spaces of sheaves on $K3$ surfaces (\cite{MeachanZhang:BirationalGeometry}), we study in Section~\ref{section:exceptional reflections} monodromy operators arising as exceptional reflections around divisors that are pullbacks of prime divisors of square $-2$ on the underlying singular moduli space. More monodromy operators are constructed using the family of compactified intermediate Jacobian fibrations constructed by Laza, Sacc\`a and Voisin in \cite{LSV:O'Gr10}. If $V$ is a generic cubic fourfold, the LSV compactification of the fibration whose fibres are intermediate Jacobians of smooth linear sections of $V$ is an irreducible holomorphic symplectic manifold of OG10 type. Working in this family, we study how the monodromy of the cubic fourfold lifts to the monodromy of the OG10 manifold. An explicit parallel transport operator between this family and the family of O'Grady moduli spaces is constructed in Section~\ref{section:bridge} (see Theorem~\ref{thm:LSV to OG} for the final statement). If we denote by $G\subset\operatorname{Mon}^2(\widetilde{M})$ the subgroup generated by all these monodromy operators, in Section~\ref{section:Mon^2} we use lattice-theoretic results to prove that $G=\operatorname{O}^+(\operatorname{H}^2(\widetilde{M},\mathbb{Z}))$, thus completing the proof. It follows from this argument that the monodromy group is generated by monodromy operators coming from projective families: this is a highly non-trivial feature. 
Even though the same statement is true for all the other known examples of irreducible holomorphic symplectic manifolds, it is not clear why this should hold in general. Finally, using recent developments in the theory of singular symplectic varieties, especially the work of Bakker and Lehn \cite{BakkerLehn:Singular2016}, we study the locally trivial monodromy group of the singular O'Grady moduli space $M_S$ (see Example~\ref{example:O'Grady moduli space}). \begin{thmno}[Theorem~\ref{thm:l t monodromy}] Let $Y$ be a singular symplectic variety locally trivial deformation equivalent to $M_S$. Then $\operatorname{Mon}^2(Y)_{\operatorname{lt}}=\operatorname{O}t^+(\operatorname{H}^2(Y,\mathbb{Z}))$. \end{thmno} We finish by remarking that the same computation in the case of a singular symplectic variety of OG6 type is done in \cite{MongardiRapagnetta:MonodromyOG6}, where it is shown that in that case the locally trivial monodromy group is the whole group of orientation preserving isometries. The difference between the singular OG10 and the singular OG6 cases can be explained in terms of their singularities: in fact, singular moduli spaces of OG10 type can be either locally factorial or $2$-factorial (\cite[Theorem~1.1]{PeregoRapagnetta:Factoriality}), while singular moduli spaces of OG6 type are always $2$-factorial (\cite[Theorem~1.2]{PeregoRapagnetta:Factoriality}). \subsection*{Acknowledgments} It is my pleasure to thank Gregory Sankaran and Giovanni Mongardi for invaluable help, advice and support. The author strongly benefitted from discussions with Valeria Bertini, Klaus Hulek, Antonio Rapagnetta, Giulia Sacc\`a and Ziyu Zhang. In particular, I thank Klaus Hulek and Radu Laza for explaining Proposition~\ref{prop:HLS} to me. I also wish to thank Simon Brandhorst, who pointed out a mistake in an intermediate draft of this work, and Samuel Boissi\`ere and Alastair Craw for having read a first version of this work and for providing useful advice. 
Important remarks and feedback arose from attending the ``Japanese--European Symposium on symplectic varieties and moduli spaces'': the author wishes to thank the organisers for this interesting meeting. Finally, I warmly thank the anonymous referee for the keen review of this manuscript: their comments and questions have surely improved it a lot. Part of this work was carried out during the author's PhD at the University of Bath. He wishes to thank the University of Bath and the EPSRC, the Riemann Centre in Hannover, the INdAM project for young researchers ``Pursuit of IHS manifolds'' and the Research Council of Norway project no.\ 250104 for financial and administrative support. \subsection*{Notations} By a lattice we mean a free $\mathbb{Z}$-module $L$ together with a non-degenerate symmetric bilinear form $(\cdot,\cdot)\colon L\times L\to\mathbb{Z}$. We usually simply write $x^2$ for $(x,x)$. We denote by $L(-1)$ the lattice obtained from $L$ by changing the sign of the bilinear form. Since the bilinear form is non-degenerate, there is a canonical embedding $L\subset L^*$, where $L^*=\operatorname{Hom}(L,\mathbb{Z})$ is the dual lattice. The \emph{discriminant group} $A_L$ is the finite group $L^*/L$. If $L=\operatorname{H}^2(X,\mathbb{Z})$ is the Beauville--Bogomolov--Fujiki lattice associated to an irreducible holomorphic symplectic manifold $X$, then we simply write $A_X$ for the discriminant group. The group of isometries of $L$ is denoted by $\operatorname{O}(L)$, while $\operatorname{O}t(L)$ denotes the subgroup of isometries that act as the identity on the discriminant group. If $M\subset L$ is a sublattice, we denote by $\operatorname{O}(L,M)$ the subgroup of isometries $g$ such that $g(M)=M$. An \emph{isotropic element} is a vector $x\in L$ such that $x^2=0$. 
Finally, $U$ will always denote the hyperbolic plane, i.e.\ the unique unimodular even lattice of signature $(1,1)$; $A_2$, $E_8$ and $G_2$ denote the root lattices associated to the respective Dynkin diagrams. \section{Preliminaries}\label{section:preliminaries} \subsection{Irreducible holomorphic symplectic manifolds} \begin{defn} A compact K\"ahler manifold $X$ is called \emph{irreducible holomorphic symplectic} if it is simply connected and $\operatorname{H}^0(X,\Omega^2_X)=\mathbb{C}\sigma_X$, where $\sigma_X$ is non-degenerate at any point. \end{defn} It follows directly from the definition that $\operatorname{H}^2(X,\mathbb{Z})$ is a torsion free $\mathbb{Z}$-module; it turns out to be a lattice thanks to the Beauville--Bogomolov--Fujiki form $q_X$ (\cite{Beauville:c1=0}). This lattice structure is indispensable for studying the geometry of an irreducible holomorphic symplectic manifold $X$; we refer to \cite{GrossHuybrechtsJoyce:CalabiYau} and \cite{Markman:Survey} for a detailed account of results on their geometry. Let $X_1$ and $X_2$ be two irreducible holomorphic symplectic manifolds that are deformation equivalent. \begin{defn}\label{defn:parallel transport operators} \begin{enumerate} \item We say that $g\colon\operatorname{H}^2(X_1,\mathbb{Z})\to\operatorname{H}^2(X_2,\mathbb{Z})$ is a \emph{parallel transport operator} if there exists a smooth and proper family $p\colon\mathcal{X}\to B$, points $b_1,b_2\in B$ and isomorphisms $\varphi_i\colon X_i\stackrel{\sim}{\longrightarrow}\mathcal{X}_{b_i}$ such that the composition $(\varphi_2^*)^{-1}\circ g\circ\varphi_1^*$ is the parallel transport inside the local system $R^2 p_*\mathbb{Z}$ along a path $\gamma$ from $b_1$ to $b_2$. Here $R^2p_*\mathbb{Z}$ is endowed with the Gauss-Manin connection. \item If $X_1=X_2=X$ and $\gamma$ is a loop, then the parallel transport is called \emph{monodromy operator}. 
Such isometries form a group $\operatorname{Mon}^2(X)$ called \emph{monodromy group} (see \cite[Footnote~3 at page~3]{Markman:Survey}). \end{enumerate} \end{defn} \begin{rmk}\label{rmk:orient pub} For any irreducible holomorphic symplectic manifold $X$, let us denote by $\omega_X$ a fixed K\"ahler class. In the following we write $X$ as a pair $(M,I)$ where $M$ is a differential manifold and $I$ a complex structure. By Yau's solution to Calabi's conjecture (\cite{Yau:Calabi}) there is a hyper-K\"ahler metric $g$ on $M$ representing $\omega_X$, i.e.\ $\omega_X=\omega_I=g(I(-),-)$. Since the metric is hyper-K\"ahler, there exists another complex structure $J$ such that $I$, $J$ and $IJ=K$ satisfy the quaternionic relations. The symplectic form $\sigma_X$ is then defined as $\omega_J+i\omega_K$. The positive (real) three-space $\langle \omega_I,\omega_J,\omega_K\rangle=\langle\omega_X,\operatorname{Re}(\sigma_X),\operatorname{Im}(\sigma_X)\rangle\subset \operatorname{H}^2(X,\mathbb{R})$ then comes with a preferred orientation (given by this basis). We say that an isometry $\operatorname{H}^2(X_1,\mathbb{Z})\to\operatorname{H}^2(X_2,\mathbb{Z})$ is \emph{orientation preserving} if it preserves this orientation. By definition, any parallel transport operator is orientation preserving. In particular, if $\operatorname{O}^+(\operatorname{H}^2(X,\mathbb{Z}))$ denotes the group of orientation preserving isometries, then $\operatorname{Mon}^2(X)\subset\operatorname{O}^+(\operatorname{H}^2(X,\mathbb{Z}))$. We refer to \cite[Section~4]{Markman:Survey} for more details about this phenomenon. \end{rmk} \begin{example}\label{example:monodromy of K3} An irreducible holomorphic symplectic manifold $S$ of dimension $2$ is a $K3$ surface. In this case $\operatorname{Mon}^2(S)=\operatorname{O}^+(\operatorname{H}^2(S,\mathbb{Z}))$ (\cite[Proposition~5.5 in Chapter~7]{Huybrechts:K3Surfaces}). 
\end{example} Now we recall the construction of two families of irreducible holomorphic symplectic manifolds. \subsection{Moduli spaces of sheaves}\label{subsection:moduli spaces of sheaves} Let $S$ be a projective $K3$ surface. The cohomology ring $$\widetilde{\operatorname{H}}(S,\mathbb{Z})=\operatorname{H}^0(S,\mathbb{Z})\oplus\operatorname{H}^2(S,\mathbb{Z})\oplus\operatorname{H}^4(S,\mathbb{Z})$$ has a natural Hodge structure of weight $2$ given by putting $\widetilde{\operatorname{H}}^{2,0}(S)=\operatorname{H}^{2,0}(S)$, and a natural lattice structure that we now recall. If $v=(v_0,v_1,v_2)\in\widetilde{\operatorname{H}}(S,\mathbb{Z})$, then $$ v^2=v_1^2-2v_0v_2,$$ where $v_1^2$ is the standard intersection product on $\operatorname{H}^2(S,\mathbb{Z})$. $\widetilde{\operatorname{H}}(S,\mathbb{Z})$ with this lattice structure is called the Mukai lattice. A vector $v\in\widetilde{\operatorname{H}}(S,\mathbb{Z})$ is called a Mukai vector if it is positive in the sense of \cite[Definition~0.1]{Yoshioka:ModuliSpacesAbelianSurfaces}. In the following we work with a Mukai vector $v\in\widetilde{\operatorname{H}}(S,\mathbb{Z})$ such that $v=2w$, where $w$ is primitive and $w^2=2$. Once the vector $v$ is fixed, there is a decomposition into $v$-chambers and $v$-walls of the ample cone of $S$, and any polarisation in the interior of a $v$-chamber is called $v$-generic (see \cite[Section~2.1]{PeregoRapagnetta:DeformationsOfO'Grady}). For a $v$-generic polarisation $H\in\operatorname{Pic}(S)$, the moduli space $M_v(S)$ of $H$-semistable sheaves on $S$ is singular exactly at those points corresponding to S-equivalence classes of strictly semistable sheaves. Let us denote by $\Sigma_v$ the singular locus of $M_v(S)$. By \cite{Mukai:SymplecticStructure}, the smooth locus of $M_v(S)$ has a symplectic form. 
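As a sanity check for the lattice structure just recalled, consider the Mukai vector $v=(2,0,-2)=2w$ with $w=(1,0,-1)$, which will play a central role below: $$ v^2=v_1^2-2v_0v_2=0-2\cdot2\cdot(-2)=8, \qquad w^2=0-2\cdot1\cdot(-1)=2, $$ so that $w$ is indeed primitive with $w^2=2$.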
\begin{thm}[\protect{\cite{O'Grady:10dimPublished},\cite{Rapagnetta:BeauvilleFormIHSM},\cite{LehnSorger:Singularity},\cite{PeregoRapagnetta:DeformationsOfO'Grady}}]\label{thm:big OG10} Let $v$ and $H$ be as above. \begin{enumerate} \item There exists a symplectic desingularisation $\pi\colon\widetilde{M}_v(S)\to M_v(S)$ such that $\widetilde{M}_v(S)$ is an irreducible holomorphic symplectic manifold of dimension $10$. Moreover, $\widetilde{M}_v(S)$ is the blow up of $M_v(S)$ at $(\Sigma_v)_{\operatorname{red}}$. \item Let $S'$ be another $K3$ surface, and choose a Mukai vector $v'$ and a $v'$-generic polarisation $H'$ as above. Then $\widetilde{M}_{v'}(S',H')$ is deformation equivalent to $\widetilde{M}_v(S,H)$. The deformation is obtained by deforming the surface $S'$ to $S$, the Mukai vector $v'$ to $v$ and the generic polarisation $H'$ to $H$ according to the notion of OLS-triple defined in \cite[Definition~2.12]{PeregoRapagnetta:DeformationsOfO'Grady}. \item The pullback $$ \pi^*\colon \operatorname{H}^2(M_v(S),\mathbb{Z})\to\operatorname{H}^2(\widetilde{M}_v(S),\mathbb{Z})$$ is injective. In particular $\operatorname{H}^2(M_v(S),\mathbb{Z})$ has a pure Hodge structure of weight $2$ and a lattice structure inherited from those of $\operatorname{H}^2(\widetilde{M}_v(S),\mathbb{Z})$. Moreover, there exists a Hodge isometry $$v^\perp\stackrel{\sim}{\longrightarrow}\operatorname{H}^2(M_v(S),\mathbb{Z}).$$ \item There is an isometry $$\operatorname{H}^2(\widetilde{M}_v(S),\mathbb{Z})\cong U^3\oplus E_8(-1)^2\oplus G_2(-1),$$ where $G_2(-1)=\left(\begin{array}{rr}-2 & 3 \\ 3 & -6 \end{array}\right)$. \end{enumerate} \end{thm} Any irreducible holomorphic symplectic manifold that is deformation equivalent to a desingularised moduli space $\widetilde{M}_v(S,H)$ as in the theorem above is said to be of OG10 type. 
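We also record the discriminant group of this lattice: since $U^3\oplus E_8(-1)^2$ is unimodular, it is computed from the Gram matrix of $G_2(-1)$, namely $$ \det\left(\begin{array}{rr}-2 & 3 \\ 3 & -6 \end{array}\right)=12-9=3, $$ so that $A_X\cong\mathbb{Z}/3\mathbb{Z}$ for any manifold $X$ of OG10 type.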
Define \begin{equation}\label{eqn:Gamma_v in general} \Gamma_v:=\left\{\left(\alpha,k\frac{\sigma}{2}\right)\in(v^\perp)^*\oplus\mathbb{Z}\frac{\sigma}{2}\mid k\in2\mathbb{Z} \Leftrightarrow \alpha\in v^\perp\right\}\subset v^\perp_{\mathbb{Q}}\oplus\mathbb{Q}\sigma, \end{equation} with the non-degenerate pairing $b\left((w_1,m_1\sigma),(w_2,m_2\sigma)\right)= (w_1,w_2)-6m_1m_2$. Notice in particular that $\sigma^2=-6$. Moreover, $\Gamma_v$ has a natural Hodge structure as follows: $v^\perp$ has a Hodge structure inherited from $\widetilde{\operatorname{H}}(S,\mathbb{Z})$, and we declare $\sigma$ to be of type $(1,1)$. \begin{thm}[\protect{\cite[Theorem~3.1]{PeregoRapagnetta:Factoriality}}]\label{thm:factoriality} $\Gamma_v$ is a lattice Hodge isometric to $\operatorname{H}^2(\widetilde{M}_v(S),\mathbb{Z})$. \end{thm} \begin{example}[O'Grady moduli space]\label{example:O'Grady moduli space} Let us fix $v=(2,0,-2)$. In this case, we use the short notation $\widetilde{M}_S$ and $M_S$ instead of $\widetilde{M}_{(2,0,-2)}(S)$ and $M_{(2,0,-2)}(S)$. The space $\widetilde{M}_S$ is called the \emph{O'Grady moduli space}, while the space $M_S$ is called the \emph{singular O'Grady moduli space} (cf.\ Example~\ref{example:singular moduli spaces}), since this is the case first studied by O'Grady in \cite{O'Grady:10dimPublished}. The locus $B_S=M_S\setminus M^{\operatorname{lf}}_S$ of non-locally free sheaves is a Weil divisor (\cite[Section~3.1]{O'Grady:10dimPublished}) and we denote by $\widetilde{B}_S$ its strict transform. We notice that, by \cite{Perego:2Factoriality}, $B_S$ is not Cartier, but $2B_S$ is. Since $\widetilde{M}_S$ is smooth, the divisor $\widetilde{B}_S$ is always Cartier and by \cite{Rapagnetta:BeauvilleFormIHSM}, $$\langle\widetilde{B}_S,\widetilde{\Sigma}_S\rangle=G_2(-1),$$ where $\widetilde{\Sigma}_S$ is the exceptional divisor of the desingularisation. 
Here $\langle\widetilde{B}_S,\widetilde{\Sigma}_S\rangle\subset\operatorname{H}^2(\widetilde{M}_S,\mathbb{Z})$ is the sublattice of the Beauville--Bogomolov--Fujiki lattice generated by $\widetilde{B}_S$ and $\widetilde{\Sigma}_S$. More precisely, Rapagnetta shows that $\widetilde{\Sigma}_S^2=-6$, $\widetilde{B}_S^2=-2$ and $(\widetilde{\Sigma}_S,\widetilde{B}_S)=3$. Moreover, Rapagnetta also explicitly describes an isometry $$\operatorname{H}^2(\widetilde{M}_S,\mathbb{Z})\cong\operatorname{H}^2(S,\mathbb{Z})\oplus\langle\widetilde{B}_S,\widetilde{\Sigma}_S\rangle,$$ where the inclusion of $\operatorname{H}^2(S,\mathbb{Z})$ in $\operatorname{H}^2(\widetilde{M}_S,\mathbb{Z})$ is the composition of Donaldson's map and the pullback by the desingularisation $\pi_S\colon\widetilde{M}_S\to M_S$ (see also \cite[Section~5]{O'Grady:10dimPublished}). Finally, by \cite[Remark~3.2]{PeregoRapagnetta:Factoriality}, the isometry $\Gamma_{(2,0,-2)}\cong\operatorname{H}^2(\widetilde{M}_S,\mathbb{Z})$ is explicitly given by the map \begin{equation}\label{eqn:Gamma_v} \left(\left(\frac{n}{2},\xi,\frac{n}{2}\right),k\frac{\sigma}{2}\right)\mapsto\xi+n\widetilde{B}_S+\frac{n+k}{2}\widetilde{\Sigma}_S. \end{equation} For future reference, we notice that the particular case in which $k=0$ is exactly the isometry $v^\perp\cong\operatorname{H}^2(M_S,\mathbb{Z})$ in item $(3)$ of Theorem~\ref{thm:big OG10} in this case (loc.\ cit.). \end{example} \begin{rmk}\label{rmk:factoriality} The singular O'Grady moduli space $M_S$ in Example~\ref{example:O'Grady moduli space} is $2$-factorial (\cite{Perego:2Factoriality}). 
In general, the factoriality of $M_v(S)$ depends on the vector $w$ such that $v=2w$: if there exists $u\in\widetilde{\operatorname{H}}^{1,1}(S,\mathbb{Z})$ such that $(u,w)=1$, then $M_v(S)$ is $2$-factorial; if $(u,w)\in2\mathbb{Z}$ for every $u\in\widetilde{\operatorname{H}}^{1,1}(S,\mathbb{Z})$, then $M_v(S)$ is locally factorial (cf.\ \cite[Theorem~1.1]{PeregoRapagnetta:Factoriality}). \end{rmk} \begin{example}[Torsion sheaves]\label{exe:torsion sheaves} Let $S$ be a projective $K3$ surface and $H$ a polarisation such that $H^2=2$. Then any vector of the form $v=(0,2H,b)$ gives a moduli space $M_v(S)$ of dimension $10$. From now on we always assume the polarisation to be $v$-generic. If $b$ is odd, the moduli space is smooth (\cite{Yoshioka:ModuliSpacesAbelianSurfaces}); if $b=2a$ is even, then we are in the situation of Theorem~\ref{thm:big OG10} and the singular moduli space $M_v(S)$ admits a symplectic desingularisation that is an irreducible holomorphic symplectic manifold of type OG10. Let us then consider vectors of the form $(0,2H,2a)$. A general sheaf $F\in M_v(S)$ is of the form $i_*L$, where $i\colon C\to S$ is an embedding, $C\in|2H|$ is a smooth curve of genus $5$ and $L$ is a line bundle on $C$ of degree $2a+4$. In particular $M_v(S)$ contains the relative Picard variety $\operatorname{Pic}^{2a+4}_U$, where $U\subset|2H|$ is the open subset parametrising smooth curves. There is a natural morphism $$p_v\colon M_v(S)\longrightarrow|2H|$$ defined by sending a sheaf to its Fitting support (see \cite[Section~1.4]{Mozgovyy:PhD}). In this way $M_v(S)$ can be thought of as a projective compactification of $\operatorname{Pic}^{2a+4}_U$. Composing the morphism $p_v$ with the desingularisation morphism $\pi_v$ we get a morphism $$ (p_v\circ\pi_v)\colon\widetilde{M}_v(S)\longrightarrow|2H|\cong\mathbb{P}^5 $$ that is a Lagrangian fibration (\cite{Matsushita:OnFibreSpace}). 
Finally, if $\operatorname{Pic}(S)=\mathbb{Z} H$, then we can directly read off the factoriality property of $M_v(S)$ in terms of the parity of $a$. Recall that $v=(0,2H,2a)$. By Remark~\ref{rmk:factoriality}, $M_v(S)$ is locally factorial if $a$ is even and $2$-factorial if $a$ is odd. For example, the moduli space $M_{(0,2H,-4)}(S)$ is locally factorial, while the moduli space $M_{(0,2H,-2)}(S)$ is $2$-factorial (we will use this remark in Section~\ref{subsection:hyperelliptic}). \end{example} \subsection{Intermediate Jacobian fibrations}\label{subsection:J_V} Let $V\subset\mathbb{P}^5$ be a smooth cubic fourfold and $U\subset\mathbb{P}(\operatorname{H}^0(V,\mathcal{O}(1))^*)$ the open subset parametrising smooth linear sections. If $Y\in U$, the intermediate Jacobian of $Y$ is the principally polarised abelian variety defined by $$ J_Y:=\operatorname{H}^{2,1}(Y)^*/\operatorname{H}_3(Y,\mathbb{Z}),$$ where $\operatorname{H}_3(Y,\mathbb{Z})$ embeds in $\operatorname{H}^{2,1}(Y)^*$ by integration over cycles. Running this construction relatively over $U$ yields an intermediate Jacobian fibration \begin{equation}\label{eqn:LSV over U} \pi_U\colon\mathcal{J}_U\longrightarrow U \end{equation} that is Lagrangian with respect to a non-degenerate holomorphic closed $2$-form on $\mathcal{J}_U$ (\cite{DonagiMarkman:SpectralCovers}). \begin{thm}[\cite{LSV:O'Gr10}]\label{thm:LSV} Suppose that $V$ is very general. Then there exists a symplectic compactification $$ \pi_V\colon\mathcal{J}_V\longrightarrow\mathbb{P}^5$$ of the intermediate Jacobian fibration (\ref{eqn:LSV over U}), such that $\mathcal{J}_V$ is an irreducible holomorphic symplectic manifold of OG10 type. Moreover, $\pi_V$ is a Lagrangian fibration. \end{thm} Here very general means outside a countable union of divisors in the moduli space of smooth cubic fourfolds. 
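As a consistency check on the dimension, recall that the fibres of $\pi_U$ are the intermediate Jacobians of smooth cubic threefolds $Y\subset V$, which are abelian varieties of dimension $h^{2,1}(Y)=5$; since the base is $\mathbb{P}^5$, we get $$ \dim\mathcal{J}_V=\dim J_Y+\dim\mathbb{P}^5=5+5=10, $$ in accordance with the dimension of manifolds of OG10 type.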
The hypothesis of the statement above can be relaxed to general cubic fourfolds, that is, to cubic fourfolds contained in a Zariski open subset of the moduli space of smooth cubic fourfolds (see for example \cite[Remark~4.2]{Brosnan:PerverseObstractions}). We notice though that recently G.\ Sacc\`a constructed an irreducible symplectic compactification of $\mathcal{J}_U$ for any smooth cubic fourfold (and even for mildly singular cubic fourfolds), see \cite{Sacca:BirationalGeometryLSV}. Sacc\`a's compactification is obtained by using recent developments in the minimal model program; in particular the construction is not explicit and the compactification is, a priori, not even unique. \subsection{Singular symplectic varieties}\label{section:Singular} \begin{defn} A \emph{singular symplectic variety} $Y$ is a normal complex variety such that its regular locus $Y_{\operatorname{reg}}$ has a symplectic form that extends to a holomorphic $2$-form on any resolution of singularities. A \emph{symplectic resolution of singularities} of $Y$ is a resolution of singularities $\pi\colon X\to Y$ such that the symplectic form on $Y_{\operatorname{reg}}$ extends holomorphically to a symplectic form on $X$. We say that $\pi\colon X\to Y$ is an \emph{irreducible symplectic resolution of singularities} if moreover $X$ is an irreducible holomorphic symplectic manifold. \end{defn} In the following we will only be interested in singular symplectic varieties having an irreducible symplectic resolution of singularities, and we refer to \cite{BakkerLehn:Singular2016} for results and background. The main and unique example we consider in this paper is the following. \begin{example}\label{example:singular moduli spaces} We use the same notations as in Section~\ref{subsection:moduli spaces of sheaves}. Let $S$ be a projective $K3$ surface and $v=2w\in\widetilde{\operatorname{H}}(S,\mathbb{Z})$ a Mukai vector such that $w$ is primitive and $w^2=2$. 
For any choice of $v$-generic polarisation $H$, the moduli space $M_v(S)$ of $H$-semistable sheaves is a singular symplectic variety having an irreducible symplectic resolution of singularities by Theorem~\ref{thm:big OG10} (see also \cite{Mukai:SymplecticStructure} for the existence of the symplectic form on the regular part of $M_v(S)$). When the Mukai vector is $v=(2,0,-2)$, then $M_v(S)=M_S$ is the singular O'Grady moduli space of Example~\ref{example:O'Grady moduli space}. \end{example} If $Y$ is a singular symplectic variety with an irreducible symplectic resolution of singularities, then $\operatorname{H}^2(Y,\mathbb{Z})$ is endowed with a non-degenerate symmetric bilinear form turning it into a lattice of signature $(3,b_2(Y)-3)$. More precisely, if $\pi\colon X\to Y$ is the irreducible symplectic resolution, then by \cite[Lemma~3.5]{BakkerLehn:Singular2016} the pullback $\pi^*\colon\operatorname{H}^2(Y,\mathbb{Z})\to\operatorname{H}^2(X,\mathbb{Z})$ is injective and the restriction of the Beauville--Bogomolov--Fujiki form on $\operatorname{H}^2(X,\mathbb{Z})$ to $\operatorname{H}^2(Y,\mathbb{Z})$ is non-degenerate. Moreover, by the same result, the pullback is an isomorphism on the transcendental part and the orthogonal complement of $\operatorname{H}^2(Y,\mathbb{Z})$ in $\operatorname{H}^2(X,\mathbb{Z})$ is negative definite, justifying the previous claim on the signature of $\operatorname{H}^2(Y,\mathbb{Z})$. This lattice structure is invariant under locally trivial deformations, according to the following definition. 
\begin{defn} A locally trivial family is a proper morphism $f\colon\mathcal{Y}\to T$ of complex analytic spaces such that, for every point $y\in\mathcal{Y}$, there exist open neighbourhoods $V_y\subset \mathcal{Y}$ and $V_{f(y)}\subset T$, and an open subset $U_y\subset f^{-1}(f(y))$ such that $$V_y\cong U_y\times V_{f(y)}.$$ \end{defn} If the morphism $f$ is smooth, i.e.\ the family is smooth, then the condition in the definition is trivially satisfied. The most important example for our purpose is the following. \begin{example}\label{exe:family of OG moduli spaces} Let $p\colon\mathcal{S}\to T$ be a smooth family of projective $K3$ surfaces, and suppose that there exists a flat section $\mathcal{H}$ of $R^2p_*\mathbb{Z}$ such that $\mathcal{H}_t$ is a generic polarisation for every $t\in T$. Notice that, the second entry being $0$, the Mukai vector $(2,0,-2)$ extends to a flat section of the local system $\widetilde{R}p_*\mathbb{Z}$, whose stalk at $t\in T$ is the Mukai lattice $\widetilde{\operatorname{H}}(\mathcal{S}_t,\mathbb{Z})$. Then the relative O'Grady moduli space $f\colon\mathcal{M}\to T$, whose fibres are the O'Grady moduli spaces $\mathcal{M}_t=M_{\mathcal{S}_t}$, is a locally trivial family (see \cite[Proposition~2.16]{PeregoRapagnetta:DeformationsOfO'Grady}). \end{example} \begin{defn} Let $Y$ be a singular symplectic variety and $\pi\colon X\to Y$ a symplectic resolution of singularities. \begin{enumerate} \item The locally trivial monodromy group $\operatorname{Mon}^2(Y)_{\operatorname{lt}}$ of $Y$ is the subgroup of $\operatorname{O}(\operatorname{H}^2(Y,\mathbb{Z}))$ generated by isometries arising from parallel transport along loops in a locally trivial family of $Y$. 
\item The monodromy group $\operatorname{Mon}^2(\pi)$ of the desingularisation $\pi$ is the subgroup of the product $\operatorname{O}(\operatorname{H}^2(X,\mathbb{Z}))\times\operatorname{O}(\operatorname{H}^2(Y,\mathbb{Z}))$ consisting of pairs $(g_1,g_2)$ such that $g_1\in\operatorname{Mon}^2(X)$ and $g_2\in\operatorname{Mon}^2(Y)_{\operatorname{lt}}$, and such that $g_1\circ\pi^*=\pi^*\circ g_2$. \end{enumerate} \end{defn} \section{Monodromy operators coming from the family of O'Grady moduli spaces}\label{section:M_S} Let $S$ be a projective $K3$ surface, $H$ a generic polarisation, $M_S$ the O'Grady moduli space and $\widetilde{M}_S$ its symplectic desingularisation. We refer to Example~\ref{example:O'Grady moduli space} for notations. In particular, we always denote by $G_2(-1)$ the lattice generated by the divisor $\widetilde{B}_S$ and the exceptional divisor $\widetilde{\Sigma}_S$. Recall that $$ \operatorname{H}^2(\widetilde{M}_S,\mathbb{Z})\cong\operatorname{H}^2(S,\mathbb{Z})\oplus G_2(-1). $$ In the following, $\operatorname{O}^+(\operatorname{H}^2(\widetilde{M}_S,\mathbb{Z}),G_2(-1))$ is the group of isometries fixing the sublattice $G_2(-1)$. The restriction map \begin{equation}\label{eqn:r M_S} r\colon\operatorname{O}^+(\operatorname{H}^2(\widetilde{M}_S,\mathbb{Z}),G_2(-1))\longrightarrow\operatorname{O}^+(\operatorname{H}^2(S,\mathbb{Z})) \end{equation} is surjective and $\operatorname{O}^+(\operatorname{H}^2(S,\mathbb{Z}))=\operatorname{Mon}^2(S)$ (see Example~\ref{example:monodromy of K3}). We want to show that, given a monodromy operator $g\in\operatorname{Mon}^2(S)$, there exists a canonical extension $\tilde{g}\in\operatorname{O}^+(\operatorname{H}^2(\widetilde{M}_S,\mathbb{Z}),G_2(-1))$ such that $\tilde{g}\in\operatorname{Mon}^2(\widetilde{M}_S)$. As we will see, this extension is given by the identity on $G_2(-1)$. Let $T$ be a curve and $(S,H)$ a polarised $K3$ surface. 
Let $\mathcal{S}_T\to T$ be a deformation family such that $\mathcal{S}_0=S$ for a base point $0\in T$ and let $\mathcal{H}_T$ be a line bundle on $\mathcal{S}_T$, flat over $T$, such that $\mathcal{H}_0=H$ (see Example~\ref{exe:family of OG moduli spaces}). It is known that the set of points $t\in T$ such that $\mathcal{H}_t$ is not ample is finite. Moreover, Perego and Rapagnetta notice in \cite[Lemma~2.6]{PeregoRapagnetta:DeformationsOfO'Grady} that the set of points $t\in T$ such that $\mathcal{H}_t$ is not generic is also finite. We summarise this remark in the following statement for future reference. \begin{lemma}\label{lemma:def O'Grady on curves} Up to removing a finite number of points from $T$, we can suppose that $\mathcal{H}_t$ is ample and generic for every $t\in T$. \end{lemma} In the following we assume that $\mathcal{H}_T$ satisfies the conditions of Lemma~\ref{lemma:def O'Grady on curves}. Consider the relative moduli space $\mathcal{M}_T\to T$ (resp.\ $\mathcal{M}^s_T$) parametrising rank $2$ semistable (resp.\ stable) sheaves on the fibres of $\mathcal{S}_T\to T$ with trivial determinant and second Chern class equal to $4$ (cf.\ \cite[Theorem~4.3.7]{HuybrechtsLehn:ModuliSpaces}). Notice that $\mathcal{M}^s_T\subset\mathcal{M}_T$ is open. Since $\mathcal{M}_t$ is reduced and irreducible for every $t\in T$, $\mathcal{M}_T$ is flat over $T$ (\cite[Proposition~II.2.19]{EisenbudHarris:GeometryOfSchemes} and \cite[Lemma~2.15]{PeregoRapagnetta:DeformationsOfO'Grady}) and we can think of it as a deformation of (singular) moduli spaces. Now, define $\Sigma_T:=\mathcal{M}_T\setminus\mathcal{M}^s_T$. As explained in the proof of \cite[Proposition~2.16]{PeregoRapagnetta:DeformationsOfO'Grady}, since $\mathcal{H}_t$ is generic for every $t\in T$, $\Sigma_t$ is an irreducible closed subvariety which coincides with the singular locus of $\mathcal{M}_t$. 
\begin{rmk}\label{rmk:modular description of Sigma} Notice that $\Sigma_T$ has a modular description as the relative second symmetric product $\mathcal{S}ym^2\mathcal{S}_T^{[2]}$ (cf.\ first part of the proof of \cite[Proposition~2.16]{PeregoRapagnetta:DeformationsOfO'Grady}). In fact, by \cite[Lemma~1.1.5]{O'Grady:10dimPublished}, $\Sigma_t=\operatorname{Sym}^2\mathcal{S}_t^{[2]}$. The singular locus of $\Sigma_T$ is then identified with $\mathcal{S}_T^{[2]}$. This implies that $(\Sigma_{\operatorname{red}})_t=(\Sigma_t)_{\operatorname{red}}$ for every $t\in T$. \end{rmk} By \cite[Proposition~II.2.19]{EisenbudHarris:GeometryOfSchemes} we have that $\Sigma_T$ and $(\Sigma_T)_{\operatorname{red}}$ are flat over $T$. Blowing up $\mathcal{M}_T$ at $(\Sigma_T)_{\operatorname{red}}$ yields a projective and flat morphism \begin{equation}\label{eqn:def of OG10 from K3} p\colon\widetilde{\mathcal{M}}_T\longrightarrow T \end{equation} such that $\widetilde{\mathcal{M}}_t=\widetilde{M}_{\mathcal{S}_t}$. Notice that a priori it is not obvious that the blow-up of the family is the family of the blow-ups: this follows from \cite[Proposition~2.16]{PeregoRapagnetta:DeformationsOfO'Grady}. The family (\ref{eqn:def of OG10 from K3}) is the deformation family of O'Grady manifolds associated to a deformation of polarised $K3$ surfaces. The first remark is the following. \begin{lemma}\label{lemma:monodromy preserves Sigma} Let $\widetilde{M}_S$ be the O'Grady desingularisation of $M_S$ and $\widetilde{\Sigma}_S$ the exceptional divisor. Any monodromy operator $g$ arising from a deformation family (\ref{eqn:def of OG10 from K3}), as constructed before, satisfies $g(\widetilde{\Sigma}_S)=\widetilde{\Sigma}_S$. \end{lemma} \begin{proof} This is clear from the discussion above. In fact, on $\widetilde{\mathcal{M}}_T$ there is the relative exceptional divisor $\widetilde{\Sigma}_T$ which is flat over $T$. 
The associated class in cohomology is then flat in the local system $R^2p_*\mathbb{Z}$ and hence preserved by any parallel transport in the same local system. \end{proof} Next, we want to understand the orbit of the divisor $\widetilde{B}_S$ under monodromy operators arising from this kind of family. This is more subtle, because the locus $\mathcal{B}_T:=\mathcal{M}_T\setminus\mathcal{M}_T^{\operatorname{lf}}$ does not have a modular description as in Remark~\ref{rmk:modular description of Sigma}. Here and in the following $\mathcal{M}_T^{\operatorname{lf}}\subset\mathcal{M}_T$ is the open subset parametrising locally free sheaves on the fibres of $\mathcal{S}_T\to T$. We need to work with the Uhlenbeck compactification $\overline{\mathcal{N}}_\infty$ of the Donaldson--Yau moduli space $\mathcal{N}_\infty$ of anti-self-dual connections on the principal bundle associated to a rank $2$ vector bundle with trivial determinant and second Chern class of degree $4$ on the differentiable manifold underlying $S$ (\cite{FriedmanMorgan:SmoothFourManifolds}). Recall that $\overline{\mathcal{N}}_\infty$ exists as a (reduced) projective scheme and there is a regular morphism of schemes $$\phi\colon M_S\longrightarrow\overline{\mathcal{N}}_\infty.$$ Moreover, $\overline{\mathcal{N}}_\infty=\mathcal{N}_\infty\coprod S^{(4)}$, where $S^{(4)}$ stands for the fourth symmetric product of $S$ (see discussion after \cite[Proposition~3.1.1]{O'Grady:10dimPublished}). The morphism $\phi$ restricts to an isomorphism $M^{\operatorname{lf}}_S\cong\mathcal{N}_\infty$ (\cite{Li:AlgebroGeometric}). We want to relativise this construction to the family $p\colon\mathcal{M}_T\to T$. For this, we need to run the same arguments as in \cite[Section~1, Section~2]{Li:AlgebroGeometric} in families. 
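Before giving the relative construction, we record the Euler characteristic computation underlying it. If $F$ is a sheaf of rank $2$ with trivial determinant on $S$, $D\in|kH|$ is a smooth curve of genus $g$ and $\theta\in J^{g-1}(D)$, then $F|_D\otimes\theta$ has rank $2$ and degree $\deg(\det F|_D)+2(g-1)=2(g-1)$, so that by Riemann--Roch on $D$ $$ \chi(F|_D\otimes\theta)=\deg(F|_D\otimes\theta)+\operatorname{rk}(F|_D)(1-g)=2(g-1)+2(1-g)=0. $$ This vanishing is the reason why the homotheties act trivially on the determinant line bundle defined below.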
Let $\operatorname{Quot}_{\mathcal{S}/T}$ be a Quot scheme of sheaves on the fibres of $\mathcal{S}_T\to T$, whose open subset $Q_T\subset\operatorname{Quot}_{\mathcal{S}/T}$, parametrising points that are semistable with respect to the action of the algebraic group $G=\operatorname{PGL}(N)$ (for a suitable integer $N$), has $\mathcal{M}_T$ as GIT quotient. On $\mathcal{S}_T\times Q_T$ there is a universal quotient sheaf $F_T$, flat over $T$ (cf.\ \cite[Theorem~2.2.4]{HuybrechtsLehn:ModuliSpaces}). Now let $k\geq1$ and let $D_T\in|k\mathcal{H}_T|$ be a divisor which is smooth over $T$. Notice that such a divisor $D_T$ always exists, up to shrinking the base $T$. Since the fibres of $D_T$ over $T$ are smooth algebraic curves, we can consider the relative Jacobian $J^{g(D_T)-1}(D_T)$. Here $g(D_T)$ denotes the genus of the general fibre of $D_T$ over $T$. Let $\theta_{D_T}\in J^{g(D_T)-1}(D_T)$ be flat over $T$. Then we define the line bundle \begin{equation} \widetilde{\mathcal{L}}_k(D_T,\theta_{D_T}):=\det\left(R^\bullet q_{1*}(F_T|_{D_T}\otimes q_2^*\theta_{D_T})\right)^{-1} \end{equation} where $q_i$ is the projection from $Q_T\times D_T$ to the $i$-th factor and $F_T|_{D_T}$ is the restriction of $F_T$ to $Q_T\times D_T$. Notice that by construction $\widetilde{\mathcal{L}}_k(D_T,\theta_{D_T})$ is flat over $T$. \begin{lemma} $\widetilde{\mathcal{L}}_k(D_T,\theta_{D_T})$ descends to a line bundle $\mathcal{L}_k(D_T,\theta_{D_T})$ on $\mathcal{M}_T$. \end{lemma} \begin{proof} Since $\mathcal{M}_T$ is constructed as a $G$-quotient from $Q_T$, Kempf's criterion (\cite[Proposition~2.3]{DrezetNarasimhan:GroupDePicard}) says that a $G$-bundle $E$ on $Q_T$ descends to $\mathcal{M}_T$ if and only if for every closed point $x\in Q_T$ with closed orbit, the stabiliser $G_x$ acts trivially on $E_x$. First we show that $\widetilde{\mathcal{L}}_k(D_T,\theta_{D_T})$ is a $G$-bundle.
As in the first part of the proof of \cite[Proposition~1.7]{Li:AlgebroGeometric}, we only need to show that the subgroup $\mathbb{C}^*\subset\operatorname{GL}(N)$ acts trivially on $\widetilde{\mathcal{L}}_k(D_T,\theta_{D_T})$. If $\alpha\in\mathbb{C}^*$, the homothety $\alpha.\operatorname{id}$ acts fibrewise on $F_T$. On the other hand, our choice of $D_T$ is such that the Euler characteristic of the fibres over $T$ is zero, so that $\alpha.\operatorname{id}$ induces the identity on $\widetilde{\mathcal{L}}_k(D_T,\theta_{D_T})$. Now, closed points in $Q_T$ are all of the form $i_{t*}F_t$, where $i_t\colon\mathcal{S}_t\to\mathcal{S}_T$ is the inclusion and $F_t\in\mathcal{M}_t$. Moreover, since $\mathcal{H}_t$ is assumed to be generic for every $t\in T$, $Q_t$ satisfies the hypotheses of \cite[Proposition~1.7]{Li:AlgebroGeometric} and therefore the claim reduces to \cite[Proposition~1.7]{Li:AlgebroGeometric}. \end{proof} With an abuse of notation, we denote by $\mathcal{L}_k$ the line bundle $\mathcal{L}_k(D_T,\theta_{D_T})$. \begin{prop} Let $(\mathcal{S}_T,\mathcal{H}_T)$ be a polarised family of $K3$ surfaces over a curve $T$. Let $\mathcal{L}_k=\mathcal{L}_k(D_T,\theta_{D_T})$ be the line bundle on $\mathcal{M}_T$ constructed above and suppose $k\geq9$. Then there exists a positive integer $\bar{m}$ such that $(\mathcal{L}_k^m)_t$ is generated by global sections for every $t\in T$ and for every $m\geq\bar{m}$. \end{prop} \begin{proof} For any $t\in T$, since $k\geq9$, there exists a positive integer $m_t$ such that $(\mathcal{L}_{k}^{m})_t$ is generated by global sections for every $m\geq m_t$ (\cite[Theorem~3]{Li:AlgebroGeometric}). Now define $$ \bar{m}:=\sup_{t\in T}\{m_t\}. $$ Since $T$ is quasi-compact (in the Zariski topology), $\bar{m}<\infty$.
\end{proof} The pushforward $p_*\mathcal{L}_k^{\bar{m}}$ is not locally free in general, but its double dual $p_*(\mathcal{L}_k^{\bar{m}})^{\vee\vee}$ is always locally free (\cite[Corollary~1.4]{Hartshorne:ReflexiveSheaves}). The proposition above says then that the induced map \begin{equation}\label{eqn:Li in families} \varphi_T\colon\mathcal{M}_T\longrightarrow\mathbb{P}\left(p_*(\mathcal{L}_k^{\bar{m}})^{\vee\vee}\right) \end{equation} is a regular morphism of schemes. Notice that $\mathbb{P}\left(p_*(\mathcal{L}_k^{\bar{m}})^{\vee\vee}\right)$ is flat over $T$ (it is a projective bundle) and that $\varphi_T$ is defined fibrewise. Let us define $\overline{\mathcal{N}}_T$ as the image of $\mathcal{M}_T$ via $\varphi_T$. By construction (or by \cite[Proposition~II.2.19]{EisenbudHarris:GeometryOfSchemes}) $\overline{\mathcal{N}}_T$ is flat over $T$ and, for every $t\in T$, $\overline{\mathcal{N}}_t$ is the Donaldson--Uhlenbeck--Yau moduli space associated to the $K3$ surface $\mathcal{S}_t$ (\cite[Theorem~5]{Li:AlgebroGeometric}). The natural projection \begin{equation}\label{eqn:family of DUY} \overline{\mathcal{N}}_T\longrightarrow T \end{equation} is then a family of Donaldson--Uhlenbeck--Yau moduli spaces. If we put $\mathcal{N}_T=\varphi_T(\mathcal{M}_T^{\operatorname{lf}})$, then we get a relative Uhlenbeck decomposition $$\overline{\mathcal{N}}_T=\mathcal{N}_T\coprod\mathcal{S}_T^{(4)}$$ where $\mathcal{S}_T^{(4)}$ is the relative symmetric product, i.e.\ $(\mathcal{S}_T^{(4)})_t=\mathcal{S}_t^{(4)}$. \begin{rmk}\label{rmk:symmetric power is flat} Notice that $\mathcal{S}_T^{(4)}$ is flat over $T$. \end{rmk} \begin{rmk} The construction above is not canonical: it depends on the choice of both $D_T$ and $\theta_{D_T}$, so one should really write $\varphi_{T,D_T,\theta_{D_T}}$. Nevertheless, we suppress this dependence from the notation for the sake of clarity. 
In any case, when $T=\operatorname{Spec}(\mathbb{C})$ is a point, Li noticed that $\mathcal{L}_k(D_T,\theta_{D_T})$ does not depend on $D_T$ and $\theta_{D_T}$. In particular, for a general base $T$, the claim is true fibrewise and so, if $D'_T$ is another smooth divisor on $\mathcal{S}_T$ and $\theta_{D'_T}\in J^{g(D'_T)-1}(D'_T)$, then $$\mathcal{L}_k(D_T,\theta_{D_T})\cong\mathcal{L}_k(D'_T,\theta_{D'_T})\otimes p^*A $$ where $A$ is a line bundle on $T$. \end{rmk} \begin{prop}\label{prop:B is flat} $\mathcal{B}_T=\mathcal{M}_T\setminus\mathcal{M}_T^{\operatorname{lf}}$ is flat over $T$. \end{prop} \begin{proof} Consider the surjective morphism $$\varphi_T\colon\mathcal{B}_T\longrightarrow\mathcal{S}_T^{(4)}$$ obtained by restricting the morphism (\ref{eqn:Li in families}). By \cite[Proposition~II.2.19]{EisenbudHarris:GeometryOfSchemes}, it is enough to show that there are no embedded components of $\mathcal{B}_T$ supported on a fibre $\mathcal{B}_t$, for every $t\in T$. Suppose such a component $\overline{\mathcal{B}}_T\subset\mathcal{B}_T$ exists and is supported on the fibre $\mathcal{B}_{t_0}$. Since $\varphi_T$ is defined fibrewise, $\varphi_T(\overline{\mathcal{B}}_T)=\varphi_{t_0}(\overline{\mathcal{B}}_T)\subset\mathcal{S}_{t_0}^{(4)}$. Since $\mathcal{S}_T^{(4)}$ is flat over $T$ (Remark~\ref{rmk:symmetric power is flat}), $\varphi_{t_0}(\overline{\mathcal{B}}_T)$ cannot be an embedded component of $\mathcal{S}_T^{(4)}$. We conclude that such an embedded component $\overline{\mathcal{B}}_T$ cannot exist and that $\mathcal{B}_T$ is flat over $T$. \end{proof} \begin{lemma}\label{lemma:monodromy preserves B} Let $\widetilde{M}_S$ be the O'Grady desingularisation of $M_S$, $B_S\subset M_S$ the divisor of non-locally free sheaves and $\widetilde{B}_S$ its strict transform. Any monodromy operator $g$ arising from a deformation family (\ref{eqn:def of OG10 from K3}) must satisfy the equality $g(\widetilde{B}_S)=\widetilde{B}_S$.
\end{lemma} \begin{proof} It follows directly from Proposition~\ref{prop:B is flat} as in the proof of Lemma~\ref{lemma:monodromy preserves Sigma}. \end{proof} The main result of this section is the following theorem. \begin{thm}\label{thm:my operators O'Grady} Let $g\in\operatorname{O}^+(\operatorname{H}^2(\widetilde{M}_S,\mathbb{Z}))$ be such that $g(\widetilde{\Sigma}_S)=\widetilde{\Sigma}_S$, $g(\widetilde{B}_S)=\widetilde{B}_S$ and $g(H)=H$. Then $g$ is a monodromy operator. \end{thm} \begin{proof} Let $g$ be as in the statement. In particular $g\in\operatorname{O}^+(\operatorname{H}^2(\widetilde{M}_S,\mathbb{Z}),G_2(-1))$ and so its image $r(g)$ under the restriction map (\ref{eqn:r M_S}) is a monodromy operator on $S$ that preserves the polarisation of $S$ (i.e.\ it is a polarised monodromy operator -- cf.\ \cite[Corollary~7.4]{Markman:Survey}). This means that there exist a family of deformations $\mathcal{S}_T\to T$ and a family $\{\mathcal{H}_t\}_{t\in T}$ of ample line bundles with $\mathcal{H}_0=H$, such that $r(g)$ is obtained by parallel transport along a loop $\gamma$ in $T$ based at a point $0\in T$ corresponding to $S$. By the proof of \cite[Proposition~5.5 in Chapter~7]{Huybrechts:K3Surfaces}, it follows that $T$ can be taken to be a curve (in fact, $T$ is a general pencil). Notice that in the proof of \cite[Proposition~5.5 in Chapter~7]{Huybrechts:K3Surfaces} the $K3$ surface is taken to be non-projective, but exactly the same argument applies to projective surfaces. The number of points $t\in T$ where $\mathcal{H}_t$ is not generic is finite (cf.\ Lemma~\ref{lemma:def O'Grady on curves}). Let us denote by $T'$ the complement in $T$ of these points. By \cite[Th{\'e}or{\`e}me~2.3 in Chapter~X]{Godbillon:AlgebraicTopology}, the induced map $$ \pi_1(T',0)\longrightarrow\pi_1(T,0) $$ is surjective and so we can assume that $[\gamma]\in\pi_1(T',0)$.
By construction the parallel transport along $\gamma$ in the family $\widetilde{\mathcal{M}}_{T'}\to T'$ is an isometry $g'$ such that $r(g')=r(g)$ and moreover, by Lemma~\ref{lemma:monodromy preserves Sigma} and Lemma~\ref{lemma:monodromy preserves B}, $g'(\widetilde{\Sigma}_S)=\widetilde{\Sigma}_S$ and $g'(\widetilde{B}_S)=\widetilde{B}_S$. Therefore $g=g'$. \end{proof} \begin{rmk} Since the family $\mathcal{M}_T\to T$ is locally trivial (cf.\ Section~\ref{section:Singular}), Proposition~\ref{prop:B is flat} implies that any (locally trivial) monodromy operator arising from this family must preserve the divisor $2B$ (notice that, as recalled in Example~\ref{example:O'Grady moduli space}, $B$ is not Cartier, while $2B$ is, and this property is preserved in this family). Moreover, running the same proof of Theorem~\ref{thm:my operators O'Grady} in this situation yields the analogous statement $$\operatorname{Mon}^2(S)_{H}=\operatorname{O}^+(\operatorname{H}^2(S,\mathbb{Z}))_{H}\subset\operatorname{Mon}^2(M_S)_{\operatorname{lt}}.$$ \end{rmk} \section{Exceptional reflections from singular moduli spaces}\label{section:exceptional reflections} In this section we show the existence of monodromy reflections coming from exceptional divisors on the singular moduli space $M_v(S)$. First of all, let us explain what we mean by exceptional reflection. If $L$ is a lattice and $x\in L$ is an element, then the reflection $R_x$ is the rational isometry (i.e.\ an isometry of $L\otimes\mathbb{Q}$) defined by $$ y\mapsto y-2\frac{(x,y)}{x^2}x.$$ If now $L=\operatorname{H}^2(X,\mathbb{Z})$ for an irreducible holomorphic symplectic manifold $X$ and $x$ is the class of a prime exceptional divisor, i.e.\ a reduced and irreducible divisor such that $x^2<0$ (cf.\ \cite[Definition~5.1]{Markman:Survey}), then Markman proved that the reflection $R_x$ is a monodromy operator (\cite[Theorem~1.1]{Markman:PrimeExceptional}).
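As an elementary sanity check (not needed in the sequel): for a general $x$, integrality of $R_x$ amounts to the divisibility condition $x^2\mid 2(x,y)$ for every $y\in L$, and when $x^2=-2$ it is automatic, since in that case $$ R_x(y)=y-2\frac{(x,y)}{x^2}\,x=y+(x,y)\,x\in L \qquad\text{for every } y\in L. $$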
In particular this implies that $R_x$ is integral, imposing strong conditions on the numerical invariants of the divisor class $x$. Geometrically, in the projective case, we remark that prime exceptional divisors are exceptional divisors of divisorial contractions, possibly after a birational isomorphism of the ambient space (\cite[Proposition~1.4]{Druel:QuelquesRemarques}). More precisely, if $D$ is a prime exceptional divisor, then there exists another irreducible holomorphic symplectic manifold $X'$, a birational isomorphism $X\cong X'$ and a divisorial contraction $X'\to Y$, with $Y$ normal and projective, such that the image of $D$ in $X'$ coincides with the exceptional divisor of $X'\to Y$. We will use the notations and results introduced in Section~\ref{subsection:moduli spaces of sheaves}. In particular, $S$ is a projective $K3$ surface, $v=2w$ a Mukai vector such that $w^2=2$, and $H$ a $v$-generic polarisation. Then we consider the moduli space $M_v(S)$ and its irreducible symplectic desingularisation $\pi\colon\widetilde{M}_v(S)\to M_v(S)$ (see Theorem~\ref{thm:big OG10}). Let $D$ be a reduced and irreducible Cartier divisor on $M_v(S)$ and, with an abuse of notation, let us keep denoting by $D$ its cohomology class. Assume that $D^2=-2$. By \cite[Theorem~5.3]{MeachanZhang:BirationalGeometry}, $D$ arises as the exceptional divisor of a divisorial contraction in some birational model (which is a moduli space of Bridgeland stable objects on $S$). Note that, by \cite[Proposition~2.3]{MeachanZhang:BirationalGeometry} and the Cone Theorem \cite[Theorem~3.7]{KollarMori:BirationalGeometry}, $D$ is uniruled. Let $\widetilde{D}$ be the strict transform of $D$ via the morphism $\pi\colon\widetilde{M}_v(S)\to M_v(S)$. By \cite{LehnPacienza:MMP}, the minimal model program for the pair $(\widetilde{M}_v(S),\pi^*D)$ terminates, and the termination does not depend on the order of the contracted curves. 
Since $\widetilde{D}$ is negative and uniruled, being the strict transform of a uniruled divisor, the first step in the minimal model program above contracts all the rational curves in $\widetilde{D}$ obtaining a symplectic variety $Y$ and a divisorial contraction $\widetilde{M}_v(S)\to Y$, where $\widetilde{D}$ is the contracted divisor. It follows that $\widetilde{D}$ is a prime exceptional divisor, and so the reflection $R_{\widetilde{D}}$ is of monodromy type. If $\widetilde{\Sigma}$ is the exceptional divisor of the desingularisation $\pi$, then $$\pi^*D=\widetilde{D}+m\widetilde{\Sigma}$$ is still of degree $-2$ (see item (3) of Theorem~\ref{thm:big OG10}). Since the reflection $R_{\widetilde{D}}$ is integral, this forces $\widetilde{D}^2=-2$ and $m=0$. Let us summarise this remark in the following result. \begin{prop} If $D\in \operatorname{H}^2(M_v(S),\mathbb{Z})$ is the class of a reduced and irreducible divisor such that $D^2=-2$, then the reflection $R_{\widetilde{D}}$ around the strict transform of $D$ is a monodromy operator. Moreover $\widetilde{D}=\pi^*D$ and $\widetilde{D}^2=-2$. \end{prop} \begin{rmk} Notice that if the Cartier divisor $D$ is not reduced, then the result is false. In fact, on the O'Grady moduli space $M_S$ (cf.\ Example~\ref{example:O'Grady moduli space} for notation and results) the divisor $D=2B$ is not reduced and its strict transform $2\widetilde{B}=\pi^*(2B)-\widetilde{\Sigma}$ has degree $-8$. \end{rmk} \begin{example}\label{example:R_L} Let $S$ be a projective $K3$ surface of genus $2$ with polarisation $H$, and suppose that $\operatorname{Pic}(S)=\mathbb{Z} H+\mathbb{Z} K$ such that $K^2=-2$ and $(H,K)=0$. Let $v=(2,0,-2)$ be the Mukai vector of the O'Grady moduli space and consider the class $(-1,H+K,-1)\in v^\perp$, that defines an irreducible and effective divisor $Z$ on $M_S$ via the Hodge isometry in item $(3)$ of Theorem~\ref{thm:big OG10}. 
By construction $Z^2=-2$, so that by the proposition above $\widetilde{Z}=\pi^*Z$, and by the isometry (\ref{eqn:Gamma_v}) in Example~\ref{example:O'Grady moduli space}, $$\widetilde{Z}=H+K-2\widetilde{B}-\widetilde{\Sigma}.$$ \end{example} \section{Monodromy operators coming from the family of intermediate Jacobian fibrations}\label{section:J_V} In this section $V$ is a generic cubic fourfold and $\pi_V\colon\mathcal{J}_V\to\mathbb{P}^5$ is the compactified intermediate Jacobian fibration. Let $\Theta$ be any relative theta divisor on $\mathcal{J}_U$ (the restriction of $\mathcal{J}_V$ over the locus $U\subset\mathbb{P}^5$ of smooth linear sections, introduced below), rigidified along the zero section, and let $\overline{\Theta}$ be its closure in $\mathcal{J}_V$. Denote by $b_V=\pi_V^*\mathcal{O}_{\mathbb{P}^5}(1)$ the class of the fibration $\pi_V$. The following result was communicated to us by Klaus Hulek and Radu Laza. \begin{prop}[Hulek--Laza]\label{prop:HLS} \begin{enumerate} \item The lattice $U_V=\langle\overline{\Theta},b_V\rangle$, generated by the relative theta divisor and the class of the fibration, is a hyperbolic plane. \item There exists an isogeny of Hodge structures\footnote{Given two Hodge structures $H$ and $H'$, an isogeny between them is a morphism $\alpha\colon H\to H'$ such that $\alpha_{\mathbb{Q}}\colon H\otimes\mathbb{Q}\to H'\otimes\mathbb{Q}$ is an isomorphism of Hodge structures.} \begin{equation}\label{eqn:HLS} \alpha\colon\operatorname{H}^4(V,\mathbb{Z})_{\operatorname{prim}}\longrightarrow U_V^\perp\subset\operatorname{H}^2(\mathcal{J}_V,\mathbb{Z}) \end{equation} and an integer $N>0$ such that $x.y=-Nq_V(\alpha(x),\alpha(y))$ for any $x,y\in\operatorname{H}^4(V,\mathbb{Z})_{\operatorname{prim}}$. \end{enumerate} \end{prop} With an abuse of notation we denote by $q_V$ the Beauville--Bogomolov--Fujiki form on $\mathcal{J}_V$. \begin{proof} The first claim follows from the Fujiki formula \cite[Theorem~4.7]{Fujiki:Constant} with Fujiki constant $945$ (\cite{Rapagnetta:BeauvilleFormIHSM}).
In fact, for dimension reasons one gets $b_V^{10}=0$, and so $q_V(b_V)=0$; on the other hand, applying the Fujiki formula again to the class $\overline{\Theta}+tb_V$ and taking the coefficient of the term $t^5$, one gets $q_{V}(\overline{\Theta},b_V)=(\overline{\Theta}^5.b_V^5)/5!=1$, where the last equality follows from the Poincar\'e formula. This is enough to conclude that $U_V$ is a hyperbolic plane, regardless of the value of $q_{V}(\overline{\Theta})$. For the second claim, let us first construct the map $\alpha$. By \cite[Lemma~1.1]{LSV:O'Gr10}, there is a distinguished rational cycle $Z\in\operatorname{CH}^2(\mathcal{J}_V\times_{\mathbb{P}^5}\mathcal{Y}_V)_{\mathbb{Q}}$, where $\mathcal{Y}_V\to\mathbb{P}^5$ is the universal family of linear sections of $V$. The associated correspondence is $[Z]_*\colon\operatorname{H}^4(\mathcal{Y}_V,\mathbb{Q})\longrightarrow\operatorname{H}^2(\mathcal{J}_V,\mathbb{Q})$ defined by $[Z]_*(x)=\pi_{1*}(\pi_2^*x.Z)$, where $\pi_1$ and $\pi_2$ are the projections from $\mathcal{J}_V\times_{\mathbb{P}^5}\mathcal{Y}_V$ to the two factors. Denote by $q\colon\mathcal{Y}_V\to V$ the map that is the inclusion on each linear section (notice that $q$ is a $\mathbb{P}^4$-bundle). Then $$\alpha':=[Z]_*\circ q^*\colon\operatorname{H}^4(V,\mathbb{Q})\longrightarrow\operatorname{H}^2(\mathcal{J}_V,\mathbb{Q})$$ is a morphism of rational Hodge structures by construction. If $V$ is very general and $h\in\operatorname{H}^2(V,\mathbb{Q})$ is the hyperplane class, then $\alpha'(h^2)\in (U_V)_{\mathbb{Q}}$, so the same must hold for generic $V$. In particular, the restriction $$\alpha\colon\operatorname{H}^4(V,\mathbb{Q})_{\operatorname{prim}}\longrightarrow (U_V^\perp)_{\mathbb{Q}}$$ is a well-defined morphism of rational Hodge structures. Now, since $V$ is general, the Hodge structure on $\operatorname{H}^4(V,\mathbb{Q})_{\operatorname{prim}}$ is irreducible and, since $\alpha$ is non-zero, $\alpha$ is an isomorphism of rational Hodge structures.
Also, since the lattices $\operatorname{H}^4(V,\mathbb{Z})_{\operatorname{prim}}$ and $U_V^\perp$ are anti-isometric, and $\alpha$ sends isotropic classes to isotropic classes, the $\mathbb{Q}$-linear extensions of the two symmetric bilinear pairings must be proportional. Finally, by clearing the denominators of $Z$, one gets an integral cycle. Moreover, notice that $\alpha'(h^2)\in U_V$, since $U_V\subset\operatorname{H}^2(\mathcal{J}_V,\mathbb{Z})$ is saturated (it is a hyperbolic plane by the previous point). Then the map $\alpha$ restricts to an isogeny of integral Hodge structures and the lattice structures are preserved up to a constant. \end{proof} The constant $N$ comes from the fact that the cycle $Z$ is rational; there is no reason to expect that $Z$ is integral. Now, we want to recall the construction of a distinguished theta divisor. Recall that $U\subset\mathbb{P}\operatorname{H}^0(V,\mathcal{O}_V(1))^*$ is the open subset parametrising smooth linear sections. For $u\in U$, we denote by $Y_u\subset V$ the corresponding smooth cubic threefold. Let $\mathcal{F}_U$ be the relative Fano surface of lines, that is $\mathcal{F}_u=F({Y_u})$ for any $u\in U$. Consider the difference morphism \begin{equation}\label{eqn:difference map} f_V\colon\mathcal{F}_U\times_U\mathcal{F}_U\longrightarrow\mathcal{J}_U, \end{equation} defined fibrewise by sending two lines to the Abel--Jacobi invariant of their difference. By \cite[Theorem~13.4]{ClemensGriffiths:CubicThreefolds}, the image (with reduced scheme structure) of this map is a relative theta divisor. We denote by $\Theta_V$ the closure of the image of $f_V$. Notice that this is an effective divisor and that, by \cite[Proposition~5.3, Theorem~5.7]{LSV:O'Gr10}, it is relatively ample on $\mathcal{J}_V$. We will need the following result. \begin{lemma}[\protect{\cite[Theorem~2]{Sacca:BirationalGeometryLSV}}]\label{lemma:theta exceptional} $\Theta_V$ is a prime exceptional divisor.
In particular its degree is $-2$ and the reflection $R_{\Theta_V}$ is a monodromy operator. \end{lemma} The hyperbolic plane $U_V$ has thus a distinguished basis given by $b_V$ and $\Theta_V$. By Theorem~\ref{thm:LSV}, the Beauville--Bogomolov--Fujiki lattice structure on $\operatorname{H}^2(\mathcal{J}_V,\mathbb{Z})$ is abstractly isometric to the one in item $(4)$ of Theorem~\ref{thm:big OG10}, and by item $(1)$ in Proposition~\ref{prop:HLS}, we write $$ \operatorname{H}^2(\mathcal{J}_V,\mathbb{Z})\cong U_V\oplus U^{2}\oplus E_8(-1)^{2}\oplus G_2(-1).$$ The restriction map $$ r\colon\operatorname{O}^+\left(\operatorname{H}^2(\mathcal{J}_V,\mathbb{Z}),U_V\right)\longrightarrow\operatorname{O}^+(U_V^\perp) $$ is surjective and, by Proposition~\ref{prop:HLS} and \cite[Th{\'e}or{\`e}me~2]{Beauville:MonodromyHypersurfaces}, $$ \operatorname{O}t^+(U_V^\perp)\cong\operatorname{O}t^+(\operatorname{H}^4(V,\mathbb{Z})_{\operatorname{prim}})=\operatorname{Mon}^4(V). $$ Notice that Beauville shows that $\operatorname{Mon}^4(V)=\operatorname{O}^+(\operatorname{H}^4(V,\mathbb{Z}))_{h^2}$, where the latter is the group of isometries $g$ such that $g(h^2)=h^2$. By the last part of the proof of \cite[Th{\'e}or{\`e}me~2]{Beauville:MonodromyHypersurfaces} though, it follows that $\operatorname{O}^+(\operatorname{H}^4(V,\mathbb{Z}))_{h^2}=\operatorname{O}t^+(\operatorname{H}^4(V,\mathbb{Z})_{\operatorname{prim}})$ as claimed. Let $\mathcal{U}\subset\mathbb{P}(\operatorname{H}^0(\mathbb{P}^5,\mathcal{O}(3))^*)$ be the parameter space of smooth cubic fourfolds. We denote by $\mathcal{U}'\subset\mathcal{U}$ the open subset of non-special cubic fourfolds, so in particular $\mathcal{U}'$ is the complement in $\mathcal{U}$ of the union of countably many divisors. By Theorem~\ref{thm:LSV}, there is a family $$ \upsilon\colon\mathcal{J}_{\mathcal{U}'}\longrightarrow\mathcal{U}'$$ of intermediate Jacobian fibrations. 
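As a consistency check on the decomposition of $\operatorname{H}^2(\mathcal{J}_V,\mathbb{Z})$ above (using only standard facts about the lattices involved): $U_V$ and $U$ have signature $(1,1)$, while $E_8(-1)$ and $G_2(-1)$ are negative definite of ranks $8$ and $2$, so that $$\operatorname{rk}=2+2\cdot2+2\cdot8+2=24,\qquad \operatorname{sign}=(1,1)+2(1,1)+2(0,8)+(0,2)=(3,21),$$ in agreement with the second Betti number $b_2=24$ and the signature $(3,21)$ of the Beauville--Bogomolov--Fujiki form for manifolds of OG10 type.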
\begin{rmk}\label{rmk:Theta and b} Notice that $\upsilon\colon\mathcal{J}_{\mathcal{U}'}\to\mathcal{U}'$ is a family of Lagrangian fibrations, and therefore any monodromy operator arising from this family must preserve $b_V$. Moreover, since $\Theta_V$ is a prime exceptional divisor, the class $\Theta_V$ of the theta divisor must be preserved as well. \end{rmk} \begin{prop}\label{thm:my monodromy} Let $g\in\operatorname{O}t^+(\operatorname{H}^2(\mathcal{J}_V,\mathbb{Z}))$ be such that $g(\Theta_V)=\Theta_V$ and $g(b_V)=b_V$. Then $g$ is a monodromy operator. \end{prop} \begin{proof} Let $g$ be as in the statement; in particular $g\in\operatorname{O}t^+\left(\operatorname{H}^2(\mathcal{J}_V,\mathbb{Z}),U_V\right)$. Then, its restriction $r(g)$ induces the isometry $\tilde{g}\in\operatorname{O}t^+\left(\operatorname{H}^4(V,\mathbb{Z})_{\operatorname{prim}}\right)=\operatorname{Mon}^4(V)$. Then there exists a loop $\gamma$ in $\mathcal{U}$ such that $\tilde{g}$ is the parallel transport along $\gamma$. The base $\mathcal{U}\subset\mathbb{P}^{55}$ is open in the standard topology. The restriction to $\mathcal{U}$ of the Fubini--Study metric on $\mathbb{P}^{55}$ need not be complete on $\mathcal{U}$: we can make such a metric complete by multiplying it by a smooth (scalar) function which diverges to infinity at least quadratically when approaching the boundary of $\mathcal{U}$. Lemma~\ref{lemma:U'} below ensures that the natural map $\pi_1(\mathcal{U}')\to\pi_1(\mathcal{U})$ is surjective: we can move $\gamma$ away from special cubic fourfolds. The parallel transport along $\gamma$ inside the local system $R^2\upsilon_*\mathbb{Z}$ coincides with $g$ by construction (cf.\ Remark~\ref{rmk:Theta and b}). \end{proof} In the proof of Proposition~\ref{thm:my monodromy} we used the following result, which is well known to experts but for which we could not find a reference. \begin{lemma}\label{lemma:U'} Let $M$ be a connected and complete Riemannian manifold.
Let $\{D_k\}_{k\in I}$ be a countable set of closed submanifolds in $M$ of (real) codimension strictly greater than $1$. Let $M'=M\setminus\bigcup_{k\in I}D_k$ and let $i\colon M'\to M$ be the inclusion. Then the induced map \begin{equation*} i_*\colon\pi_1(M',p)\longrightarrow\pi_1(M,p) \end{equation*} is surjective for every $p\in M'$. \end{lemma} \begin{proof} Let $\gamma\in\pi_1(M,p)$ be a homotopy class. Let $\mathcal{L}_\gamma$ denote the set of loops $\delta$ in $M$ based at $p\in M$ such that $[\delta]=\gamma\in\pi_1(M,p)$. When endowed with the Hausdorff distance (induced by the complete metric on $M$), $\mathcal{L}_\gamma$ becomes a complete metric space. For a closed submanifold $D\subset M$, let $\mathcal{L}_\gamma(D)$ be the open subset of $\mathcal{L}_\gamma$ consisting of loops disjoint from $D$. If the codimension of $D$ is strictly greater than $1$, Sard's theorem, applied to the inclusion map $\mathcal{L}_\gamma\setminus\mathcal{L}_\gamma(D)\to\mathcal{L}_\gamma$, implies that $\mathcal{L}_\gamma(D)$ is dense in $\mathcal{L}_\gamma$. By the Baire category theorem it then follows that $\bigcap_{k\in I}\mathcal{L}_\gamma(D_k)$ is dense in $\mathcal{L}_\gamma$ and hence there exists a loop $\bar{\delta}$ in $M$ which is disjoint from all the $D_{k}$'s, i.e.\ $[\bar{\delta}]\in\pi_1(M',p)$. By construction $i_*([\bar{\delta}])=\gamma$. \end{proof} \subsection{Bridge to the O'Grady moduli space}\label{section:bridge} We want to explain how to transport the monodromy operators arising from the LSV family to the O'Grady family. We now state the main result and dedicate the rest of the section to its proof. Let $S$ be a generic $K3$ surface of genus $2$, that is $\operatorname{Pic}(S)=\mathbb{Z} H$ with $H^2=2$. Define the classes $e=H-\widetilde{B}-\widetilde{\Sigma}$ and $f=H-2\widetilde{B}-\widetilde{\Sigma}$: they form a standard basis of a hyperbolic plane.
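Explicitly, this can be checked as follows. Assume, as in the standard description of $\operatorname{H}^2(\widetilde{M}_S,\mathbb{Z})$ (we record these Gram data as assumptions rather than prove them here), that $H$ is orthogonal to $\widetilde{B}$ and $\widetilde{\Sigma}$ and that $\widetilde{B},\widetilde{\Sigma}$ span a copy of $G_2(-1)$ with $\widetilde{B}^2=-2$, $\widetilde{\Sigma}^2=-6$ and the sign convention $(\widetilde{B},\widetilde{\Sigma})=3$. Then \begin{align*} e^2 &= H^2+\widetilde{B}^2+\widetilde{\Sigma}^2+2(\widetilde{B},\widetilde{\Sigma}) = 2-2-6+6 = 0,\\ f^2 &= H^2+4\widetilde{B}^2+\widetilde{\Sigma}^2+4(\widetilde{B},\widetilde{\Sigma}) = 2-8-6+12 = 0,\\ (e,f) &= H^2+2\widetilde{B}^2+\widetilde{\Sigma}^2+3(\widetilde{B},\widetilde{\Sigma}) = 2-4-6+9 = 1. \end{align*}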
\begin{thm}\label{thm:LSV to OG} Let $g\in\operatorname{O}t^+(\operatorname{H}^2(\widetilde{M}_S,\mathbb{Z}))$ be such that $g(e)=e$ and $g(f)=f$. Then $g$ is a monodromy operator. \end{thm} The strategy is to degenerate the cubic fourfold to the chordal cubic fourfold: the intermediate Jacobian fibration then degenerates to a (desingularised) moduli space of sheaves on the $K3$ surface of degree $2$, associated to the chordal cubic, with Mukai vector $(0,2H,-4)$. Such a moduli space is birational to the O'Grady moduli space $\widetilde{M}_S$. This approach is the one developed and used by Koll\'ar, Laza, Sacc\`a and Voisin (\cite{KollarLazaSaccaVoisin:Degenerations}) to prove that the intermediate Jacobian fibration is of OG10 type. We study the induced map on the Picard lattices, in order to have an explicit parallel transport operator to move the monodromy operators from $\mathcal{J}_V$ to $\widetilde{M}_S$. \subsubsection{The degeneration}\label{subsection:degeneration} Let $V_0$ be a generic chordal cubic fourfold, that is $V_0$ is the secant variety of the Veronese surface $P$ in $\mathbb{P}^5$. Recall that $V_0$ is singular along $P$ and smooth elsewhere. Its S-equivalence class defines a closed point in the boundary of the GIT-semistable compactification of the moduli space of cubic fourfolds. Now, let us pick a simple degeneration of a smooth cubic fourfold $V$ to $V_0$, that is a pencil $\mathcal{V}=\{F+tG=0\}_{t\in\Delta}$, where $V_0=\{F=0\}$ and $V=\{G=0\}$. Consider the intersection $D=V\cap P$. Since $D\subset P\cong\mathbb{P}^2$ is a smooth sextic curve, the double cover $f\colon S\to P$ ramified along $D$ is a smooth $K3$ surface. Moreover, since $V_0$ is general, $\operatorname{Pic}(S)=\mathbb{Z} H$, where $H$ is a polarisation such that $H^2=2$. Consider a general linear section $Y_0$ of $V_0$.
This is a chordal threefold, i.e.\ the secant variety of a rational quartic curve $\Gamma$ in $\mathbb{P}^4\subset\mathbb{P}^5$ ($\Gamma$ is the image of the degree $4$ Veronese embedding of $\mathbb{P}^1$). If $Y$ is the corresponding section of $V$, and if the section is general enough, then $Y$ is a smooth cubic threefold and the intersection $Y\cap\Gamma\subset D$ consists of $12$ distinct points. The double cover $f|_C\colon C\to\Gamma$ ramified along these points is a smooth hyperelliptic curve of genus $5$. As explained in \cite{Collino:TheFundamentalGroup}, via the degeneration defined by intersecting a generic linear section of $V$ as above with the degeneration $\mathcal{V}$, the intermediate Jacobian $J_Y$ degenerates to the Jacobian $J_C$. This is a degeneration of principally polarised abelian varieties. As explained in \cite[Section~5]{KollarLazaSaccaVoisin:Degenerations}, this implies that the central fibre $\mathcal{J}_{\mathcal{V}_0}$ of the degeneration of intermediate Jacobian fibrations associated to $\mathcal{V}$ has a reduced and irreducible component that is locally isomorphic to a fibration in Jacobians of hyperelliptic curves of genus $5$. In other words, the degeneration of the intermediate Jacobian fibration is birational to the desingularised moduli space $\widetilde{M}_{(0,2H,-4)}(S)$. More precisely, if $U\subset\mathbb{P}\operatorname{H}^0(V,\mathcal{O}_V(1))^*$ is the open subset of smooth linear sections, with an abuse of notation we also denote by $U$ the open subset in $|2H|$ parametrising smooth curves of genus $5$ on the $K3$ surface $S$ associated to the degeneration. We then denote by $\mathcal{J}_U$ the intermediate Jacobian fibration of $V$ and by $\operatorname{Pic}^0_U$ the relative Picard variety of $S$. It is easy to see that $\operatorname{Pic}^0_U\subset M_v(S)$, where $v=(0,2H,-4)$.
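Both genus counts above can be verified directly (a routine check, using Riemann--Hurwitz, adjunction on a $K3$ surface, and Riemann--Roch on a curve). For the double cover $f|_C\colon C\to\Gamma\cong\mathbb{P}^1$ branched at $12$ points, $$2g(C)-2=2\bigl(2g(\Gamma)-2\bigr)+12=-4+12=8,$$ so $g(C)=5$; likewise, a smooth curve $C\in|2H|$ on $S$ has $$g(C)=\frac{C^2}{2}+1=\frac{(2H)^2}{2}+1=5.$$ Moreover, if $L$ is a degree-$0$ line bundle on such a curve and $i\colon C\hookrightarrow S$ is the inclusion, then $i_*L$ is a pure $1$-dimensional sheaf with Mukai vector $$v(i_*L)=\bigl(0,[C],\chi(L)\bigr)=\bigl(0,2H,\deg L+1-g(C)\bigr)=(0,2H,-4),$$ which makes the inclusion $\operatorname{Pic}^0_U\subset M_v(S)$ explicit.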
Moreover $M_v(S)$ has a natural morphism to $|2H|$ given by assigning to each sheaf its Fitting support (see Example~\ref{exe:torsion sheaves}), and $M_v(S)$ is a compactification of the Picard variety fibration. The aforementioned results of Collino (\cite{Collino:TheFundamentalGroup}) can be rephrased by saying that $\mathcal{J}_U$ degenerates to $\operatorname{Pic}^0_U$. Koll\'ar, Laza, Sacc\`a and Voisin's result (\cite{KollarLazaSaccaVoisin:Degenerations}) says instead that the compactification $\mathcal{J}_V$ degenerates to the desingularised moduli space $\widetilde{M}_v(S)$. We use this remark to compute the image of the classes $b_V$ and $\Theta_V$ under the degeneration, according to the following strategy. \begin{itemize} \item Recall that $b_V=\pi_V^*\mathcal{O}_{\mathbb{P}^5}(1)$ (see Theorem~\ref{thm:LSV} for the notation) is the class of the fibration. The Lagrangian fibration structure on $\widetilde{M}_v(S)$ is given by composing the desingularisation map $\pi\colon\widetilde{M}_v(S)\to M_v(S)$ with the Fitting support map $p\colon M_v(S)\to|2H|$. The corresponding class of the fibration is then $\tilde{b}_S=(p\circ\pi)^*\mathcal{O}_{\mathbb{P}^5}(1)$. Since the degeneration of $\mathcal{J}_V$ to $\widetilde{M}_v(S)$ preserves the Lagrangian fibration structure, we must have that $b_V$ goes to the class $\tilde{b}_S$. For later use, we put $b_S=p^*\mathcal{O}_{\mathbb{P}^5}(1)$, so that $\tilde{b}_S=\pi^*b_S$. \item Recall that $\Theta_V$ is the closure of the image of the difference map (\ref{eqn:difference map}) $$f_V\colon\mathcal{F}_U\times_U\mathcal{F}_U\to\mathcal{J}_U\subset\mathcal{J}_V,$$ where $U\subset\mathbb{P}\operatorname{H}^0(V,\mathcal{O}_V(1))^*$ is the open set of smooth linear sections as before. Let us focus on a general fibre. In this case the Fano surface of lines of a smooth cubic threefold degenerates to the surface $C^{(2)}\cup F_C$ (\cite[Proposition~2.1]{Collino:TheFundamentalGroup}).
Here $C$ is the hyperelliptic curve of genus $5$, $C^{(2)}$ is the symmetric product, $F_C\cong\mathbb{P}^2$ and $C^{(2)}\cap F_C\cong K\cong\mathbb{P}^1$. The inclusion $K\subset F_C$ realises $K$ as a conic in $\mathbb{P}^2$. Notice that $\operatorname{Alb}(C^{(2)}\cup F_C)\cong\operatorname{Alb}(C^{(2)})\cong J_C$, via the restriction map $\operatorname{H}^1(C^{(2)}\cup F_C,\mathbb{Z})\to\operatorname{H}^1(C^{(2)},\mathbb{Z})$ induced by the Mayer--Vietoris sequence. We then look at the difference map $C^{(2)}\times C^{(2)}\to J_C$, and in particular at its relative version \begin{equation}\label{eqn:Collino Fano} f_S\colon\mathcal{C}_U^{(2)}\times_U\mathcal{C}_U^{(2)}\longrightarrow\mathcal{J}_{\mathcal{C}_U}\subset M_v(S). \end{equation} (According to the abuse of notation assumed before, $U$ is now the open subset of smooth curves in the linear system $|2H|$.) Let $T_U$ be the image of $f_S$ and denote by $T_S$ its closure in $M_v(S)$. Then $\Theta_V$ degenerates to the strict transform $\widetilde{T}_S$ of $T_S$. \end{itemize} The next task is then to write down the classes $\tilde{b}_S$ and $\widetilde{T}_S$ in a fixed basis of $\operatorname{Pic}(\widetilde{M}_v(S))\cong\Gamma_v^{1,1}$ (see Theorem~\ref{thm:factoriality}). Since $\tilde{b}_S=\pi^*b_S$, in this case it is enough to do it in the group $\operatorname{Pic}(M_v(S))\cong (v^\perp)^{1,1}$ (see item $(3)$ of Theorem~\ref{thm:big OG10}). For the class $\widetilde{T}_S$ instead, our first remark is that $T_S$ is a Cartier divisor (we will see shortly that $M_v(S)$ is locally factorial), so that we can consider its class in $\operatorname{Pic}(M_v(S))\cong (v^\perp)^{1,1}$, and then we will compute the multiplicity $m$ with which it contains the singular locus of $M_v(S)$. If $\widetilde{\Sigma}$ is the exceptional divisor of the desingularisation, then $\widetilde{T}_S=\pi^*T_S-m\widetilde{\Sigma}$. Notice that we systematically abuse notation by identifying a divisor with its cohomology class. 
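For concreteness (anticipating the intersection numbers $a.l_1=5$, $a.l_2=-1$, $b.l_1=0$, $b.l_2=1$ computed in Lemma~\ref{lemma:some intersection} below), a divisor $D=\lambda\xi(a)+\mu\xi(b)$ satisfies
$$ D.l_1=5\lambda,\qquad D.l_2=-\lambda+\mu,$$
so that $\lambda=\frac{1}{5}D.l_1$ and $\mu=D.l_2+\lambda$; this is the $2\times2$ linear system that will be solved for $D=T_S$ in Proposition~\ref{prop:Theta}.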
We start by describing the basis of $\operatorname{Pic}(\widetilde{M}_v(S))$ we consider. Recall that $v=(0,2H,-4)$. First of all, we notice that $M_v(S)$ is locally factorial, so that every Weil divisor is Cartier. In fact in this case $w=(0,H,-2)=v/2$ is such that $(w,u)\in2\mathbb{Z}$ for any $u\in\widetilde{\operatorname{H}}^{1,1}(S,\mathbb{Z})$, so that the claim follows from Remark~\ref{rmk:factoriality}. (Here we are using that $\operatorname{Pic}(S)=\mathbb{Z} H$.) Recall from item $(3)$ of Theorem~\ref{thm:big OG10} that there is an isometry $\xi\colon(v^\perp)^{1,1}\to\operatorname{Pic}(M_v(S))$, and fix the basis $(v^\perp)^{1,1}=\langle a,b\rangle$, where $a=(-1,H,0)$ and $b=(0,0,1)$. If $D$ is a divisor on $M_v(S)$, then we write $D=\lambda \xi(a)+\mu \xi(b)$. The coefficients $\lambda$ and $\mu$ are determined by computing the intersection numbers $D.l_i$, $\xi(a).l_i$ and $\xi(b).l_i$, where $l_i\subset M_v(S)$, $i=1,2$, are two (linearly independent) curve classes. If $D$ contains the singular locus $\Sigma_v$ of $M_v(S)$ with multiplicity $m$, then the class of the strict transform $\widetilde{D}$ of $D$ is $$\widetilde{D}=\lambda\pi^*\xi(a)+\mu \pi^*\xi(b)-m\widetilde{\Sigma},$$ where $\widetilde{\Sigma}$ is the exceptional divisor of the desingularisation map. Before continuing, we record the following warning. \begin{abusenotation} If $M_v(S)$ is a moduli space and $\widetilde{M}_v(S)$ is its symplectic desingularisation, then by item $(3)$ of Theorem~\ref{thm:big OG10}, it follows that the pullback morphism $\pi^*\colon\operatorname{H}^2(M_v(S),\mathbb{Z})\to\operatorname{H}^2(\widetilde{M}_v(S),\mathbb{Z})$ is an injective Hodge isometry onto its image, and moreover there is an isometry $\xi\colon v^\perp\to\operatorname{H}^2(M_v(S),\mathbb{Z})$ that preserves the Hodge structure.
We make the following abuses of notation: \begin{itemize} \item if $x\in v^\perp$, we write $x\in\operatorname{H}^2(M_v(S),\mathbb{Z})$ instead of $\xi(x)$; \item if $\alpha\in\operatorname{H}^2(M_v(S),\mathbb{Z})$, we write $\alpha\in v^\perp$ instead of $\xi^{-1}(\alpha)$; \item if $\alpha\in\operatorname{H}^2(M_v(S),\mathbb{Z})$, we write $\alpha\in\operatorname{H}^2(\widetilde{M}_v(S),\mathbb{Z})$ instead of $\pi^*(\alpha)$; \item if $x\in v^\perp$, we write $x\in\operatorname{H}^2(\widetilde{M}_v(S),\mathbb{Z})$ instead of $\pi^*(\xi(x))$. \end{itemize} This is justified by the fact that both $\xi$ and $\pi^*$ preserve the Hodge and the lattice structures, so there is no ambiguity on where the lattice-theoretic operations between the classes (of any Hodge type) happen. \end{abusenotation} \begin{rmk}\label{rmk:modular curves} Here we describe the standard method to define curves in a moduli space $M_v(S)$ (for this remark only, $v$ is an arbitrary Mukai vector). Let $\mathcal{E}$ be a sheaf on the product $S\times C$, where $C$ is a curve, and assume that $\mathcal{E}$ is flat over $C$. We say that $\mathcal{E}$ is a family of sheaves on $S$ parametrised by $C$. Suppose moreover that for every $p\in C$ the restriction $\mathcal{E}_p=\mathcal{E}|_{S\times p}$ is semistable and $v(\mathcal{E}_p)=v$. Then there exists a morphism $$\varphi_{C,\mathcal{E}}\colon C\longrightarrow M_v(S)$$ given by mapping $p\in C$ to the S-equivalence class $[\mathcal{E}_p]$. In fact, since $M_v(S)$ co-represents a moduli functor, the morphism $\varphi_{C,\mathcal{E}}$ is well defined by the universal property. If the image of $\varphi_{C,\mathcal{E}}$ is $1$-dimensional, we will refer to the curve $\varphi_{C,\mathcal{E}}(C)\subset M_v(S)$ as the curve associated to the pair $(C,\mathcal{E})$. \end{rmk} \emph{Vertical curve.} Recall that $M_v(S)$ is fibred over $|2H|$ with general fibre $J_C$, the Jacobian of a smooth genus $5$ curve $C$.
The fibration $M_v(S)\to|2H|$ is given by mapping any sheaf to its Fitting support. If we fix a point $p_0\in C$, then $C$ embeds in $J_C$ via the Abel--Jacobi map $AJ(p)=\mathcal{O}_C(p-p_0)$. Define the sheaf $$ \mathcal{E}_{C,p_0}=(i\times id)_*\mathcal{O}_{C\times C}\left(\Delta-p_0\times C\right),$$ where $i\colon C\to S$ is the natural embedding, $i\times id\colon C\times C\to S\times C$ and $\Delta\subset C\times C$ is the diagonal. The sheaf $\mathcal{E}_{C,p_0}$ is a sheaf on $S\times C$, flat over $C$. Moreover, for every $p\in C$, the restriction $\mathcal{E}_{C,p_0}|_{S\times p}=i_*\mathcal{O}_C(p-p_0)$ is a stable sheaf in $M_v(S)$. The curve $l_1$ is then the curve associated to the pair $(C,\mathcal{E}_{C,p_0})$ (cf.\ Remark~\ref{rmk:modular curves}). \begin{rmk} This curve is not canonical: it depends on the fixed point, so one should really write $\mathcal{E}_{C,p_0}$. We drop the reference to $p_0$, and only write $\mathcal{E}_C$, because it will not be important in the computations. \end{rmk} \emph{Horizontal curve.} We start with the following lemma. \begin{lemma}\label{lemma:pencil} There exists a pencil $L\cong\mathbb{P}^1\subset|2H|\cong\mathbb{P}^5$ and three points $p_1,p_2,p_3\in L$ such that, letting $\mathcal{C}\subset S\times L$ be the family of curves on $S$ parametrised by $L$ and $\mathcal{C}_p$ the curve corresponding to $p\in L$, the following holds: \begin{itemize} \item if $p\neq p_1,p_2,p_3$, then $\mathcal{C}_p$ is irreducible and for $p$ general (in a sense made precise in the proof) it is smooth; \item if $p=p_i$ for $i=1,2,3$, then $\mathcal{C}_{p_i}=\mathcal{C}_{i,1}+\mathcal{C}_{i,2}$, where $\mathcal{C}_{i,j}\in|H|$ is a smooth curve of genus $2$; \item the base locus of $L$ consists of eight points $q_1,\cdots,q_8$. \end{itemize} \end{lemma} \begin{proof} Recall that $f\colon S\to\mathbb{P}^2$ is a double cover ramified along a sextic curve. Then $H=f^*\mathcal{O}(1)$ and $|2H|\cong|\mathcal{O}(2)|$.
The latter linear system parametrises conics in $\mathbb{P}^2$ and a Lefschetz pencil $L\subset|\mathcal{O}(2)|$ has only three critical points (corresponding to unions of two incident lines; these are the points $p_1$, $p_2$ and $p_3$). Let us denote by $\mathcal{C}'$ the family of conics parametrised by $L$. If $L$ is general enough, then it has four base points. The pullback $\mathcal{C}=f^*\mathcal{C}'$ is a family of curves in $S$ of class $|2H|$, parametrised by $L$ and satisfying the three properties in the lemma. In particular the curves $\mathcal{C}_p$, for $p\neq p_1,p_2,p_3$, are always irreducible, and smooth when the corresponding conic is not tangent to the sextic curve. \end{proof} Denote by $j\colon\mathcal{C}\to S\times L$ the natural inclusion and define $$\mathcal{E}_L=j_*\mathcal{O}_{\mathcal{C}}.$$ The sheaf $\mathcal{E}_L$ is a flat family of sheaves on $S$ parametrised by $L$. Moreover a direct computation shows that the sheaf $i_{p*}\mathcal{O}_{\mathcal{C}_p}$ is stable for every $p\in L$ (here $i_p\colon\mathcal{C}_p\to S$ is the natural embedding). Using the convention set in Remark~\ref{rmk:modular curves}, we denote by $l_2$ the curve associated to the pair $(L,\mathcal{E}_L)$. \begin{rmk}\label{rmk:zero section} Notice that $l_2$ is the closure in $M_v(S)$ of a line in the zero section of $Pic^0_U$ and it does not meet the singular locus of $M_v(S)$. \end{rmk} \begin{lemma}\label{lemma:some intersection} The following intersection products hold: $$ a.l_1=5, \qquad a.l_2=-1, \qquad b.l_1=0,\qquad b.l_2=1.$$ \end{lemma} \begin{proof} Let us outline the computation of $a.l_1$.
By \cite[Theorem~8.1.5]{HuybrechtsLehn:ModuliSpaces}, if $G$ is a sheaf on $S$ such that the Mukai vector of $G^\vee$ is $a$, then \begin{equation}\label{eqn:8.1.5} a.l_1=\deg(\varphi_{C,\mathcal{E}_C}^*a)=\deg c_1\left(\pi_{C!}\left(\mathcal{E}_C\otimes\pi_S^*G\right)\right), \end{equation} where $\varphi_{C,\mathcal{E}_C}\colon C\to M_v(S)$ is the classifying morphism (cf.\ Remark~\ref{rmk:modular curves}). Using the Grothendieck--Riemann--Roch formula, we get $$a.l_1=\pi_{C*}\left[\operatorname{ch}(\mathcal{E}_C)\operatorname{ch}(\pi_S^*G)\operatorname{td}_{\pi_C}\right]_3=\pi_{C*}\left[\operatorname{ch}(\mathcal{E}_C)\pi_S^*(a^\vee\sqrt{\operatorname{td}_S})\right]_3,$$ where the last equality follows from the fact that $\operatorname{td}_{\pi_C}=\pi_S^*\operatorname{td}_S$. Since $a=(-1,H,0)$, then $a^\vee=(-1,-H,0)$ and so $$ \pi_S^*(a^\vee\sqrt{\operatorname{td}_S})=(-1,-[H\times C],-[p_0\times C],0),$$ where the square brackets indicate the class in cohomology of the corresponding cycle. To compute $\operatorname{ch}(\mathcal{E}_C)$, we use the Grothendieck--Riemann--Roch formula again: $$\operatorname{ch}(\mathcal{E}_C)=(i\times\operatorname{id})_*\left(\operatorname{ch}\left(\mathcal{O}_{C\times C}\left(\Delta-p_0\times C\right)\right)\operatorname{td}_{i\times\operatorname{id}}\right).$$ Now, by a direct computation, $$\operatorname{ch}(\mathcal{O}_{C\times C}(\Delta-p_0\times C))=(1,[\Delta]-[p_0\times C],-5)$$ and $$ \operatorname{td}_{i\times\operatorname{id}}=\left(1,-\frac{1}{2}[K_C\times C],0\right),$$ where $K_C$ is the canonical divisor of $C$.
Putting everything together we eventually get $$ \operatorname{ch}(\mathcal{E}_C)=\left(0,[C\times C],[(i\times\operatorname{id})(\Delta)]-[p_0\times C]-\frac{1}{2}[K_C\times C],-9\right)$$ and hence \begin{align*} a.l_1 &=\pi_{C*}\left[\operatorname{ch}(\mathcal{E}_C)\pi_S^*(a^\vee\sqrt{\operatorname{td}_S})\right]_3\\ &=-[C\times C].[p_0\times C]-[H\times C].\left([(i\times\operatorname{id})(\Delta)]-[p_0\times C]-\frac{1}{2}[K_C\times C]\right)+9\\ &=0-4+9=5. \end{align*} The computation of $b.l_1$ is similar and we leave it to the reader. The intersections $a.l_2$ and $b.l_2$ are also computed with the same technique. We only recall here the computation of $\operatorname{ch}(\mathcal{E}_L)$, and leave the conclusion of the proof to the reader. First of all, recall that $\mathcal{E}_L=j_*\mathcal{O}_{\mathcal{C}}$, where $j\colon\mathcal{C}\to S\times L$ is the natural inclusion. Notice that $\mathcal{C}\cong \operatorname{Bl}_{q_1,\cdots,q_8}(S)$, where $q_1,\cdots,q_8$ are the base points of the pencil $L$ (cf.\ Lemma~\ref{lemma:pencil}). Then by the Grothendieck--Riemann--Roch formula we have $$\operatorname{ch}(\mathcal{E}_L)=j_*(\operatorname{td}_j),$$ and from the normal bundle sequence we get $$ \operatorname{td}_j=\operatorname{td}(N_{\mathcal{C}/S\times L})^{-1}=\left(1,-\frac{1}{2}c_1(N_{\mathcal{C}/S\times L}),\frac{1}{6}c_1(N_{\mathcal{C}/S\times L})^2\right).$$ Therefore one needs to compute $c_1(N_{\mathcal{C}/S\times L})$. Using the adjunction formula on $\mathcal{C}\subset S\times L$ and the isomorphism $\mathcal{C}\cong\operatorname{Bl}_{q_1,\cdots,q_8}(S)$, we get $$c_1(N_{\mathcal{C}/S\times L})=K_{\mathcal{C}}-K_{S\times L}|_{\mathcal{C}}=E_1+\cdots+E_8-\pi_L^*K_L.$$ Here $E_i$ is the exceptional divisor in $\mathcal{C}$ corresponding to the point $q_i$ and $\pi_L$ is the restriction to $\mathcal{C}$ of the natural projection $S\times L\to L$.
Putting everything together we eventually have $$\operatorname{ch}(\mathcal{E}_L)=\left(0,[\mathcal{C}],-\frac{E_1+\cdots+E_8-\pi_L^*K_L}{2},4\right).$$ \end{proof} \begin{prop}\label{prop:b_V} The class $b_V$ degenerates to the class $\tilde{b}_S=b$. \end{prop} \begin{proof} We have already remarked that $b_V$ degenerates to $\tilde{b}_S=\pi^*b_S$, so the claim follows once we prove that $b_S=b\in v^\perp$. Since $l_1$ is concentrated in a fibre, by the projection formula $b_S.l_1=0$. On the other hand, since $l_2$ is a line inside the zero section, we easily get $b_S.l_2=1$. The claim follows. \end{proof} According to the strategy outlined before, in order to compute the class of $\widetilde{T}_S$, we first compute the class of $T_S$ in the given basis of $(v^\perp)^{1,1}$. \begin{prop}\label{prop:Theta} $T_S=a-2b$. \end{prop} \begin{proof} Since $T_S$ is a theta divisor, by the Poincar\'e formula it follows that $T_S.l_1=5$. Setting $d=T_S.l_2$ and using Lemma~\ref{lemma:some intersection}, we see that $T_S=a+(d+1)b$. Finally, it follows from Lemma~\ref{lemma:theta exceptional} that $-2=T^2_S=2(d+2)$, so that $d=-3$ and the claim follows. \end{proof} The final step consists in computing the multiplicity of the singular locus $\Sigma_v$ of $M_v(S)$ in $T_S$. \begin{prop}\label{prop:multiplicity T_S} $\Sigma_v$ is not contained in $T_S$. \end{prop} The proof of the proposition relies on the following remark. \begin{lemma}\label{lemma:0=4} Let $w=(0,2H,0)$. Then $M_v(S)$ is isomorphic to $M_w(S)$ and the isomorphism preserves the singular locus. Moreover, the theta divisor $T_S$ is isomorphic to the closure in $M_w(S)$ of the relative Brill--Noether divisor $\mathcal{W}_4^0$ of the family of smooth curves in $|2H|$. \end{lemma} \begin{proof} By Lemma~\ref{lemma:tensor H}, tensoring by $H$ gives an isomorphism $\phi\colon M_v(S)\to M_w(S)$ that preserves the singular locus.
More precisely, any smooth curve $C$ in $|2H|$ is hyperelliptic and has a unique $g^1_2$. By definition, the restriction of $H$ to $C$ coincides with $2g^1_2$, hence the morphism $\phi$ acts fibrewise by sending a line bundle $L$ on a smooth curve $C$ to the line bundle $L\otimes\mathcal{O}_C(2g^1_2)$ on $C$. Now, the moduli space $M_w(S)$ contains the Picard variety fibration $Pic^4_{U}$, where $U\subset|2H|$ is the locus of smooth curves. If we denote by $\mathcal{C}_U$ the family of curves parametrised by $U$, the Brill--Noether variety $\mathcal{W}_4^0\subset Pic^4_{U}$ coincides with the image of the morphism $$ g_S\colon\mathcal{C}_U^{(4)}\longrightarrow Pic^4_{U},$$ given by $g_S((p_1,p_2,p_3,p_4);C)=i_*\mathcal{O}_C(p_1+p_2+p_3+p_4)$, where $i\colon C\to S$ is the embedding. Let us call $E_U$ the image of $g_S$. We claim that $E_U=\phi(T_U)$ (recall that $T_U$ was defined as the image of the map (\ref{eqn:Collino Fano})). Let $C\in U$ be a smooth curve in $|2H|$ and consider the line bundle $\mathcal{O}_C(p_1+p_2-q_1-q_2)$. Any divisor in the equivalence class of $H$ cuts $C$ in four points (i.e.\ two divisors of the $g^1_2$), and up to changing the representative in the linear equivalence class, we can always suppose that two of these points are $q_1$ and $q_2$. It follows that $i_*\mathcal{O}_C(p_1+p_2-q_1-q_2)\otimes H=i_*\mathcal{O}_C(p_1+p_2+r_1+r_2)$, where $r_1$ and $r_2$ are the two residual points of the intersection of $C$ with $H$. \end{proof} \begin{proof}[Proof of Proposition~\ref{prop:multiplicity T_S}] By Lemma~\ref{lemma:0=4} above, it is enough to prove that $\Sigma_w$ is not contained in the closure $\overline{\mathcal{W}}_4^0$ of $\mathcal{W}_4^0$; in particular, it is enough to prove that a general point of $\Sigma_w$ does not belong to $\overline{\mathcal{W}}_4^0$.
Recall from \cite{LehnSorger:Singularity} (see also \cite{O'Grady:10dimPublished}) that a general point $x\in\Sigma_w$ is an S-equivalence class $[F_1\oplus F_2]$, where $F_j=i_{j*}L_j$, $L_j\in\operatorname{Pic}^{1}(C_j)$ is general and $C_j\in|H|$ is a smooth curve. The support of $x$ is the curve $C_0=C_1\cup C_2\in|2H|$. If $p\colon M_w(S)\to|2H|$ is the map that associates to any sheaf its Fitting support, then we want to show that $x$ does not belong to the intersection $\overline{\mathcal{W}}_4^0\cap p^{-1}(C_0)$. First of all, let us describe the fibre $\overline{\mathcal{W}}_4^0\cap p^{-1}(C_0)$. By construction, the fibre $p^{-1}(C_0)$ coincides with the generalised Jacobian $\bar{J}_{C_0}$, as constructed in \cite{Caporaso:Compactification} (see also item $(4)$ of \cite[Lemma~2.3]{Rapagnetta:BeauvilleFormIHSM}). So the fibre $\overline{\mathcal{W}}_4^0\cap p^{-1}(C_0)$ can be understood classically as the locus of sheaves with sections, i.e.\ $F\in\overline{\mathcal{W}}_4^0\cap p^{-1}(C_0)$ if and only if $h^0(F)>0$ (\cite[Theorem~5.3]{Alexeev:CompactifiedJacobians}). Now, in any family of semistable sheaves, the locus of sheaves $F$ such that $h^0(F)>0$ is closed on the base; therefore the locus in $M_w(S)$ of S-equivalence classes where at least one representative has sections is still closed. Thanks to this remark, we reduce to proving that any sheaf in the S-equivalence class of a general point $x\in\Sigma_w$ has no sections. Let $x=[i_{1*}L_1\oplus i_{2*}L_2]$ be a general point in $\Sigma_w$. Then $\operatorname{H}^0(L_1)=0=\operatorname{H}^0(L_2)$: in fact $L_j\in\operatorname{Pic}^1(C_j)$ is general, hence not effective, since $C_j$ has genus $2$.
Strictly semistable sheaves $F$ such that $[F]=x$ are described (up to isomorphism) as kernels of the surjections (cf.\ proof of \cite[Lemma~2.3]{Rapagnetta:BeauvilleFormIHSM}) $$ i_{1*}L_1(n_1+n_2)\oplus i_{2*}L_2\twoheadrightarrow Q.$$ If $F$ is any such kernel, computing the long exact sequence in cohomology we get that $\operatorname{H}^0(F)$ is described as the sub-vector space of $\operatorname{H}^0(L_1(n_1+n_2))\oplus\operatorname{H}^0(L_2)\cong\operatorname{H}^0(L_1(n_1+n_2))$ of sections vanishing at the points $n_1$ and $n_2$ of $C_1$. This sub-vector space coincides with the cohomology group $\operatorname{H}^0(L_1)$, which is zero by assumption. It follows that $\operatorname{H}^0(F)=0$, therefore no representative of $x$ belongs to $\overline{\mathcal{W}}_4^0\cap p^{-1}(C_0)$ and the claim is proved. \end{proof} \begin{cor} The theta divisor $\Theta_V$ degenerates to $\widetilde{T}_S=\pi^*T_S=a-2b$. \end{cor} \begin{proof} We have already remarked that $\Theta_V$ degenerates to $\widetilde{T}_S$. By Proposition~\ref{prop:multiplicity T_S}, the multiplicity of $\Sigma_v$ in $T_S$ is $0$, so that $\widetilde{T}_S=\pi^*T_S$ and the claim follows from Proposition~\ref{prop:Theta}. \end{proof} \subsubsection{The hyperelliptic birational map}\label{subsection:hyperelliptic} Let $U\subset|2H|$ be the locus of smooth curves and $Pic^2_U$ the Picard variety fibration over $U$. Since all the curves parametrised by $U$ are hyperelliptic, there exists a canonical isomorphism $Pic^0_U\cong Pic^2_U$ given fibrewise by tensoring with the unique $g^1_2$ on each fibre. Recalling that $Pic^0_U\subset M_{(0,2H,-4)}$ and that $Pic^2_U\subset M_{(0,2H,-2)}$ (see Example~\ref{exe:torsion sheaves}), this also gives a canonical birational map \begin{equation*} \psi\colon M_{(0,2H,-2)}\dashrightarrow M_{(0,2H,-4)}. \end{equation*} We denote by $\widetilde{\psi}$ the induced birational map on the symplectic desingularisations of these spaces.
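For the reader's convenience, the Mukai vectors in play can be recovered directly from Grothendieck--Riemann--Roch: for a smooth curve $C\in|2H|$, of genus $5$, with embedding $i\colon C\to S$, and for $L\in\operatorname{Pic}^d(C)$, one has
$$ v(i_*L)=\left(0,\,2H,\,\chi(L)\right)=\left(0,\,2H,\,d+1-5\right)=\left(0,\,2H,\,d-4\right),$$
so that $Pic^0_U\subset M_{(0,2H,-4)}$, $Pic^2_U\subset M_{(0,2H,-2)}$ and, as in Lemma~\ref{lemma:0=4}, $Pic^4_U\subset M_{(0,2H,0)}$.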
Only for the rest of this section, we use the notation $v_2=(0,2H,-2)$ and $v_0=(0,2H,-4)$. Recall that $M_{v_0}(S)$ is locally factorial and so $$\operatorname{Pic}(\widetilde{M}_{v_0}(S))\cong(v_0^\perp)^{1,1}\oplus\mathbb{Z}\sigma_0=\langle a,b,\sigma_0\rangle.$$ On the other hand, $M_{v_2}(S)$ is $2$-factorial and $$\operatorname{Pic}(\widetilde{M}_{v_2}(S))=\Gamma_{v_2}^{1,1}=\left\{\left(w_{m,n},k\frac{\sigma_2}{2}\right)\mid (w_{m,n},v_2^\perp)\subset\mathbb{Z},\ k\in2\mathbb{Z}\Leftrightarrow m,n\in2\mathbb{Z}\right\}, $$ where $w_{m,n}=\left(-m,\frac{m}{2}H,\frac{n}{2}\right)$. Both these statements follow from Theorem~\ref{thm:factoriality} and the proof of \cite[Proposition~4.1]{PeregoRapagnetta:Factoriality}. In the following we write $(v_2^\perp)^{1,1}=\langle a_2,b_2\rangle$, where $a_2=(-2,H,0)$ and $b_2=(0,0,1)$. Finally, recall that $\sigma_0^2=-6=\sigma_2^2$. Since $\widetilde{\psi}$ is a birational map between irreducible holomorphic symplectic manifolds, its pullback induces a Hodge isometry (\cite{Huybrechts:BasicResults}) $$\widetilde{\psi}^*\colon\operatorname{H}^2(\widetilde{M}_{v_0}(S),\mathbb{Z})\longrightarrow\operatorname{H}^2(\widetilde{M}_{v_2}(S),\mathbb{Z}).$$ The goal of this section is to write down explicitly the induced isometry $$\widetilde{\psi}^*\colon\operatorname{Pic}(\widetilde{M}_{v_0}(S))=\Gamma_{v_0}^{1,1}\to\Gamma_{v_2}^{1,1}=\operatorname{Pic}(\widetilde{M}_{v_2}(S))$$ with respect to the generators fixed above. This goal is achieved in Proposition~\ref{prop:psi}, but before that we need to investigate some features of the space $\widetilde{M}_{v_2}(S)$ and of the map $\widetilde{\psi}$. Recall from Example~\ref{exe:torsion sheaves} that $\widetilde{M}_{v_2}(S)$ is also a Lagrangian fibration, this structure being given by the composition of the desingularisation map $\pi_2\colon\widetilde{M}_{v_2}(S)\to M_{v_2}(S)$ and the Fitting support map $p\colon M_{v_2}(S)\to|2H|$.
The class of the fibration is $\tilde{b}_{v_2}=(p\circ\pi_2)^*\mathcal{O}_{\mathbb{P}^5}(1)$. \begin{lemma}\label{lemma:b_2} $\tilde{b}_{v_2}=b_2$. \end{lemma} \begin{proof} The proof is the same as the proof of Proposition~\ref{prop:b_V}, using computations similar to those in Lemma~\ref{lemma:some intersection}, where instead of the curves $l_1$ and $l_2$ one uses the curves $l_1'$ and $l_2'$ that we now define. The curve $l_1'$ is the image of the vertical curve $l_1$ via $\psi^{-1}$. In fact notice that $l_1$ is contained in $Pic^0_U\subset M_{v_0}(S)$, so the morphism $\psi^{-1}$ is well-defined on it. In particular, $l_1'$ is the curve associated to the pair $(C,\mathcal{E}_C')$, where (we keep the notation used to define the curve $l_1$) $$ \mathcal{E}_{C}'=(i\times id)_*\mathcal{O}_{C\times C}\left(\Delta-p_0\times C+K_C\times C\right).$$ To construct the curve $l_2'$ we use again the pencil $L\subset|2H|$ of Lemma~\ref{lemma:pencil}. Let $\mathcal{C}$ be the family of curves in $S$ parametrised by $L$. Recall that $\mathcal{C}$ is the pullback of a family $\mathcal{C}'$ of conics in $\mathbb{P}^2$. If $Z'\colon L\to\mathcal{C}'$ is a section, then its pullback via the $2\colon1$ cover $S\to\mathbb{P}^2$ is a bi-section $Z$ of $\mathcal{C}\to L$. For every $p\in L$ such that $\mathcal{C}_p$ is smooth, $Z_p$ represents the $g^1_2$. We can choose $Z$ in such a way that $Z_p$ consists of two distinct points (this is always true, after possibly a finite base change); in particular $Z_{p_i}$ does not pass through the node of $\mathcal{C}_{p_i}$, for $i=1,2,3$. Then we define the family of sheaves $j_*\mathcal{O}_{\mathcal{C}}(Z)$, where $j\colon\mathcal{C}\to S\times L$ is the embedding. This is a well defined family of semistable sheaves on $S$ parametrised by $L$ and $l_2'$ is the curve associated to the pair $(L,j_*\mathcal{O}_{\mathcal{C}}(Z))$ (cf.\ Remark~\ref{rmk:modular curves}). Now, it is clear that $\tilde{b}_{v_2}.l_1'=0$ and $\tilde{b}_{v_2}.l_2'=1$.
Let us compute $a_2.l_1'$, $b_2.l_1'$ and $b_2.l_2'$. These intersection numbers are computed as in Lemma~\ref{lemma:some intersection} and therefore we will only briefly comment on them. The Chern character of $\mathcal{E}_C'$ is as follows (using the notation of the proof of Lemma~\ref{lemma:some intersection}) $$ \operatorname{ch}(\mathcal{E}_C')=\left(0,[C\times C],[(i\times\operatorname{id})(\Delta)]-[p_0\times C]+\frac{1}{2}[K_C\times C],-4\right),$$ so that $a_2.l_1'=4$ and $b_2.l_1'=0$. It follows that $\tilde{b}_{v_2}$ is a multiple of $b_2$. Finally, the intersection $b_2.l_2'$ is easier to compute. In fact, since $b_2=(0,0,1)$, we do not need to know the full Chern character of $j_*\mathcal{O}_{\mathcal{C}}(Z)$, but only its first Chern class, which is equal to $[\mathcal{C}]$. It follows that $b_2.l_2'=[p_0\times L].[\mathcal{C}]=1$ and so $\tilde{b}_{v_2}=b_2$. \end{proof} \begin{rmk}\label{rmk:zero-two-section} By a direct check, the sheaves $i_{p_i*}\mathcal{O}_{\mathcal{C}_{p_i}}(Z_{p_i})$ are strictly semistable, and the same holds if we choose another bi-section $Z$ such that $Z_p$ represents the $g^1_2$ of the smooth curve $\mathcal{C}_p$. Moreover, by a direct computation it can be shown that the S-equivalence class of $\mathcal{O}_{\mathcal{C}_{p_i}}(Z_{p_i})$ is independent of the choice of $Z$, i.e.\ the morphism $\psi^{-1}$ is well-defined on the generic point of the fibre over a reduced and reducible curve. As a consequence we get that $l_2'=\psi^{-1}(l_2)$ and that $l_2'\cap\Sigma_{v_2}=\{p_1,p_2,p_3\}$. In particular, the birational map $\psi^{-1}$ does not send the smooth locus of $M_{v_0}(S)$ into the smooth locus of $M_{v_2}(S)$.
\end{rmk} \begin{prop}\label{prop:psi} $\widetilde{\psi}^*\colon\operatorname{Pic}(\widetilde{M}_{v_0}(S))\to\operatorname{Pic}(\widetilde{M}_{v_2}(S))$ is the isometry $$\begin{array}{rcl} a & \longmapsto & \frac{1}{2}a_2+\frac{3}{2}b_2-\frac{1}{2}\sigma_2\\ b & \longmapsto & b_2 \\ \sigma_0 & \longmapsto & 3b_2-\sigma_2. \end{array}$$ \end{prop} \begin{proof} First, we remark that the morphism $\widetilde{\psi}$ is defined fibrewise, so that the classes of the fibrations are preserved, i.e.\ $\widetilde{\psi}^*\tilde{b}_{v_0}=\tilde{b}_{v_2}$. Then, by Proposition~\ref{prop:b_V} and Lemma~\ref{lemma:b_2}, we get $$\widetilde{\psi}^*(b)=b_2.$$ Writing $\widetilde{\psi}^*(a)=(w_{m,n},\frac{k}{2}\sigma_2)$, from the relations $$\widetilde{\psi}^*(a)^2=2\qquad\mbox{ and }\qquad (\widetilde{\psi}^*(a),b_2)=(\widetilde{\psi}^*(a),\widetilde{\psi}^*(b))=1$$ it follows that $$ \widetilde{\psi}^*(a)=\frac{1}{2}a_2+\frac{n}{2}b_2+\frac{k}{2}\sigma_2=\left(\left(-1,\frac{1}{2}H,\frac{n}{2}\right),k\frac{\sigma_2}{2}\right)$$ where $n,k\in\mathbb{Z}$ are related by the equation $$ n-\frac{3}{2}k^2-\frac{3}{2}=0.$$ Let us now consider the inverse isometry $(\widetilde{\psi}^*)^{-1}$, and write $(\widetilde{\psi}^*)^{-1}(\sigma_2)=xa+yb+z\sigma_0$. From the relation $$ \left((\widetilde{\psi}^*)^{-1}(\sigma_2),b\right)=\left((\widetilde{\psi}^*)^{-1}(\sigma_2),(\widetilde{\psi}^*)^{-1}(b_2)\right)=0 $$ we get that $(\widetilde{\psi}^*)^{-1}(\sigma_2)=yb+z\sigma_0$, and from the relation $(\widetilde{\psi}^*)^{-1}(\sigma_2)^2=-6$ we get that $z=\pm1$. Applying $\widetilde{\psi}^*$ to this equation, we get \begin{equation}\label{eqn:boh} \widetilde{\psi}^*(\sigma_0)=z\left(\sigma_2-yb_2\right). \end{equation} Moreover, $$ 0=(\widetilde{\psi}^*(a),\widetilde{\psi}^*(\sigma_0))=-zy-3zk,$$ which implies that $y=-3k$. We claim that $y=3$.
In fact, let us intersect (\ref{eqn:boh}) with $l_2'$: since $l_2'$ corresponds under $\psi$ to a line in the zero section of $M_{v_0}(S)$ (see Remark~\ref{rmk:zero-two-section}), which does not intersect the singular locus there, it follows that $\widetilde{\psi}^*(\sigma_0).l_2'=0$; on the other hand $b_2.l_2'=b.l_2=1$ (cf.\ Lemma~\ref{lemma:some intersection} and the proof of Lemma~\ref{lemma:b_2}) and $\sigma_2.l_2'=3$, as follows from Remark~\ref{rmk:zero-two-section} (in fact the last intersection happens in the smooth locus of $\Sigma_{v_2}$ and, by generality of $l_2'$, it is transversal). Finally, let us show that $z=-1$. By Remark~\ref{rmk:zero-two-section} there exists a point $x\in\Sigma_{v_2}$ such that $\psi(x)\in M_{v_0}$ is smooth. Consider the line $\delta=\pi_2^{-1}(x)$, where $\pi_2\colon\widetilde{M}_{v_2}\to M_{v_2}$ is the desingularisation map. By a direct computation we have that $\sigma_2.\delta=-2$ and $b_2.\delta=0$. On the other hand, the intersection $\widetilde{\psi}^*(\sigma_0).\delta$ is transverse, since $\psi(x)$ is smooth, and therefore it is non-negative. Intersecting the relation (\ref{eqn:boh}) with $\delta$ then shows that $z=-1$. \end{proof} \subsubsection{The O'Grady moduli space}\label{subsection:O'Grady} Let $S$ be as usual a projective $K3$ surface with a polarisation $H$ of degree $2$ and let $\operatorname{D}^b(S)$ be the derived category of coherent sheaves on $S$. Let $\Delta\subset S\times S$ be the diagonal and $F_{\Delta}\colon\operatorname{D}^b(S)\to\operatorname{D}^b(S)$ the Fourier--Mukai transform with kernel the ideal sheaf $I_\Delta$. As usual we denote by $M_S$ the O'Grady moduli space $M_{(2,0,-2)}(S)$.
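Before stating the result, it may help to track the Mukai vectors through the three steps of the composition (using the formula of Lemma~\ref{lemma:tensor H} with $k=1$ and $k=-1$ for the two tensoring steps, and the Fourier--Mukai lemma below for the middle one):
$$ (0,2H,-2)\xrightarrow{\ -\otimes H\ }(0,2H,2)\xrightarrow{\ E\mapsto F_{\Delta}(E)^{\vee}\ }(2,2H,0)\xrightarrow{\ -\otimes H^{\vee}\ }(2,0,-2),$$
whose endpoint is precisely the Mukai vector of the O'Grady moduli space $M_S$.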
\begin{prop}\label{prop:G} The exact equivalence $$\mathcal{G}\colon\operatorname{D}^b(S)\stackrel{-\otimes H}{\longrightarrow}\operatorname{D}^b(S)\stackrel{F_{\Delta}}{\longrightarrow}\operatorname{D}^b(S)\stackrel{-\otimes H^\vee}{\longrightarrow}\operatorname{D}^b(S)$$ induces a birational map $$\mathcal{G}\colon\widetilde{M}_{(0,2H,-2)}(S)\dashrightarrow\widetilde{M}_S.$$ Moreover, the induced map on the Picard lattices is $$ \left(\left(-m,\frac{m}{2}H,\frac{n}{2}\right),k\frac{\sigma_2}{2}\right)\longmapsto \frac{m+n}{2}H-n\widetilde{B}+\frac{k-n}{2}\widetilde{\Sigma}.$$ \end{prop} \begin{rmk} Notice that $m+n\in2\mathbb{Z}$ since $\left(\left(-m,\frac{m}{2}H,\frac{n}{2}\right),v_2^\perp\right)\subset\mathbb{Z}$ by definition. \end{rmk} The proposition follows from the following two lemmata and Example~\ref{example:O'Grady moduli space}. \begin{lemma}[\protect{\cite[Lemma~2.26]{PeregoRapagnetta:DeformationsOfO'Grady}}]\label{lemma:tensor H} Tensoring by a multiple $kH$ of the polarisation does not change the stability of a sheaf. In particular, if $w=(r,cH,s)$ and $w'=(r,(c+rk)H,s+2ck+rk^2)$, then the induced morphism $$ -\otimes(kH)\colon\widetilde{M}_{w}(S)\longrightarrow\widetilde{M}_{w'}(S)$$ is an isomorphism and sends the exceptional divisor of the former to the exceptional divisor of the latter. \end{lemma} \begin{lemma} The Fourier--Mukai transform $F_\Delta\colon\operatorname{D}^b(S)\to\operatorname{D}^b(S)$ induces a birational morphism $F_\Delta\colon\widetilde{M}_{(0,2H,2)}(S)\to\widetilde{M}_{(2,2H,0)}(S)$, $E\mapsto F_\Delta(E)^\vee$, such that the exceptional divisor of the former is sent to the exceptional divisor of the latter.
The induced map $\operatorname{Pic}(\widetilde{M}_{(0,2H,2)})\to\operatorname{Pic}(\widetilde{M}_{(2,2H,0)})$ on the Picard lattices is $$ \left(\left(m,\frac{m}{2}H,\frac{n}{2}\right),k\frac{\sigma_2}{2}\right)\longmapsto \left(\left(-\frac{n}{2},-\frac{m}{2}H,-m\right),k\frac{\sigma_2}{2}\right). $$ \end{lemma} \begin{proof} This is essentially \cite[Proposition~4.1.2]{O'Grady:10dimPublished}. In fact, following the notation therein, let $\mathcal{J}^0$ be the open subset of $\widetilde{M}_{(0,2H,2)}(S)$ consisting of sheaves $E=i_*L$ where $i\colon C\to S$ is the inclusion, $C$ is smooth and $L$ is a globally generated line bundle with $h^0(L)=2$. Notice that $\chi(L)=2$, so $h^1(L)=0$. The short exact sequence $0\to I_\Delta\to\mathcal{O}_{S\times S}\to\mathcal{O}_\Delta\to0$ induces a short exact sequence of complexes $$ 0\longrightarrow F_\Delta(i_*L)\longrightarrow F_{\mathcal{O}_{S\times S}}(i_*L)\stackrel{f}{\longrightarrow} F_{\mathcal{O}_{\Delta}}(i_*L)\longrightarrow0.$$ Since $i_*L\in\mathcal{J}^0$, we have that $F_{\mathcal{O}_{S\times S}}(i_*L)=\operatorname{H}^0(L)\otimes\mathcal{O}_S$ and $f\colon\operatorname{H}^0(L)\otimes\mathcal{O}_S\to i_*L$ is the evaluation map, which is surjective by hypothesis. The proof is now reduced to the proof of \cite[Proposition~4.1.2]{O'Grady:10dimPublished}. The statement about the exceptional divisor follows from the same computation applied to the general point of the singular locus $\Sigma_v\cong\operatorname{Sym}^2 M_{v/2}(S)$. Finally, the change of sign in the last statement follows from \cite[Proposition~2.5]{Yoshioka:ModuliSpacesAbelianSurfaces}. \end{proof} \subsubsection{Proof of Theorem~\ref{thm:LSV to OG}} Let $\mathcal{V}\to\Delta$ be the degeneration to the chordal cubic fourfold considered in Section~\ref{subsection:degeneration}.
The open subset $\Delta'=\Delta\setminus\{0\}$ maps to the period domain $\Omega_{\operatorname{OG10}}$ of irreducible holomorphic symplectic manifolds of OG10 type via the periods of the associated Laza--Sacc\`a--Voisin family $\mathcal{J}_{\Delta'}$. The main results of \cite{KollarLazaSaccaVoisin:Degenerations} say that the central fibre of the degeneration $\mathcal{J}_{\Delta}\to\Delta$ can be replaced by a smooth member that is birational to the moduli space $\widetilde{M}_{(0,2H,-4)}(S)$, where $S$ is the (generic) $K3$ surface dual to the chordal cubic fourfold $\mathcal{V}_0$. This means that the map $\Delta'\to\Omega_{\operatorname{OG10}}$ can be extended to a map $\Delta\to\Omega_{\operatorname{OG10}}$ by sending $0$ to the period of $\widetilde{M}_{(0,2H,-4)}(S)$. Finally, since the period map is surjective (\cite{Huybrechts:BasicResults}), one gets a family $p_1\colon\mathcal{X}_1\to\Delta$ with two distinguished members corresponding to $\mathcal{J}_V$ and $\widetilde{M}_{(0,2H,-4)}(S)$ (cf.\ \cite[Proposition~4.4]{Sacca:BirationalGeometryLSV} for a more precise description of this family). By Proposition~\ref{prop:b_V} and Proposition~\ref{prop:Theta}, there exists a parallel transport operator \begin{equation*} P_1\colon\operatorname{H}^2(\mathcal{J}_V,\mathbb{Z})\longrightarrow\operatorname{H}^2(\widetilde{M}_{(0,2H,-4)}(S),\mathbb{Z}) \end{equation*} in the family $p_1$, such that $P_1(b_V)=b$ and $P_1(\Theta_V)=a-2b$. Now, since $\widetilde{M}_{(0,2H,-4)}(S)$ is birational to $\widetilde{M}_S$, there is a family $p_2\colon\mathcal{X}_2\to\widetilde{\Delta}$ over the disc with two origins (cf.\ \cite[Theorem~4.6]{Huybrechts:BasicResults}) such that the two origins correspond to $\widetilde{M}_{(0,2H,-4)}(S)$ and $\widetilde{M}_S$.
We constructed in Sections \ref{subsection:hyperelliptic} and \ref{subsection:O'Grady} a parallel transport operator \begin{equation}\label{eqn:P_2} P_2\colon\operatorname{H}^2(\widetilde{M}_{(0,2H,-4)}(S),\mathbb{Z})\longrightarrow\operatorname{H}^2(\widetilde{M}_S,\mathbb{Z}) \end{equation} in the family $p_2$, such that $P_2(a)=2H-3\widetilde{B}-2\widetilde{\Sigma}$, $P_2(b)=H-2\widetilde{B}-\widetilde{\Sigma}$ and $P_2(\sigma)=3H-6\widetilde{B}-4\widetilde{\Sigma}$ (see Proposition~\ref{prop:psi} and Proposition~\ref{prop:G}). Gluing together these two families, we eventually get a family $\mathcal{X}\to T$ with two distinguished points corresponding to $\mathcal{J}_V$ and $\widetilde{M}_S$, and a parallel transport operator \begin{equation}\label{eqn:P} P=P_2\circ P_1\colon\operatorname{H}^2(\mathcal{J}_V,\mathbb{Z})\longrightarrow\operatorname{H}^2(\widetilde{M}_S,\mathbb{Z}) \end{equation} such that $$P(\Theta_V)=\widetilde{B}\qquad\mbox{and}\qquad P(b_V)=H-2\widetilde{B}-\widetilde{\Sigma}=f.$$ Notice that $$e=H-\widetilde{B}-\widetilde{\Sigma}=P(\Theta_V+b_V),$$ therefore the theorem follows at once by Proposition~\ref{thm:my monodromy}. \begin{rmk} Let $M_{(0,2H,0)}(S)$ be the moduli space containing the Jacobian fibration $\mathcal{J}^4_{|2H|}$ of degree $4$ divisors on the smooth curves in $|2H|$. By Lemma~\ref{lemma:0=4}, $M_{(0,2H,0)}(S)$ is isomorphic to $M_{(0,2H,-4)}(S)$ and the image of $T_S$ under this isomorphism is the (closure of the) theta divisor $E_S$ of effective line bundles in $\mathcal{J}^4_{|2H|}$. The latter divisor is birational to the symmetric product $\operatorname{Sym}^4\mathcal{C}$, where $\mathcal{C}$ is the universal family of (smooth) curves in $|2H|$. There is a natural map $\operatorname{Sym}^4\mathcal{C}\to \operatorname{Sym}^4 S$, whose generic fibre is the $\mathbb{P}^1$ of curves in $|2H|$ passing through four fixed points. 
On the other hand, by \cite[Proposition~3.0.5]{O'Grady:10dimPublished}, there is a morphism $\widetilde{B}\to\operatorname{Sym}^4S$ whose generic fibre is again $\mathbb{P}^1$. The parallel transport operator $P_2$ makes rigorous the natural expectation that $T_S$ deforms to $\widetilde{B}$. \end{rmk} \section{The monodromy group}\label{section:Mon^2} Let $S$ be a projective $K3$ surface such that $\operatorname{Pic}(S)=\mathbb{Z} H$ with $H^2=2$. Following the notation introduced in the previous section, we put $e=H-\widetilde{B}-\widetilde{\Sigma}$ and $f=H-2\widetilde{B}-\widetilde{\Sigma}$, and denote by $\overline{U}$ the hyperbolic plane they generate. Notice that $\overline{U}=P_2((v^\perp)^{1,1})$, where $v=(0,2H,-4)$ and $P_2$ is the parallel transport operator (\ref{eqn:P_2}). Let $A$ be the projection of $\widetilde{\Sigma}$ on the orthogonal complement $\overline{U}^\perp$ of $\overline{U}$, that is, $A$ is such that $\widetilde{\Sigma}=3f-A$ and $A\perp \overline{U}$. \begin{lemma}\label{lemma:A monodromy} The reflection $R_A$ is a monodromy operator. \end{lemma} \begin{proof} Let $\sigma$ be the class of the exceptional divisor $\widetilde{\Sigma}_v$, where $v=(0,2H,-4)$, and let $P_2$ be the parallel transport operator defined in (\ref{eqn:P_2}). Since $P_2(\sigma)=A$, by \cite[Proposition~5.4]{Markman:PrimeExceptional} $A$ is a stably prime exceptional divisor, and so the reflection $R_A$ is a monodromy operator by \cite[Theorem~1.1]{Markman:PrimeExceptional}. \end{proof} \begin{rmk}\label{rmk:A acts -id} Notice that $R_A$ is the identity on $\overline{U}$ and acts as $-\operatorname{id}$ on the discriminant group. It follows from Proposition~\ref{thm:my monodromy} that $R_A$ is not induced by a family of cubic fourfolds via the LSV construction. \end{rmk} In order to keep the notation as easy as possible, from now on we simply denote by $\operatorname{O}^+$ the group $\operatorname{O}^+(\operatorname{H}^2(\widetilde{M}_S,\mathbb{Z}))$.
Consider the following groups: \begin{eqnarray} G_1 & = & \left\{g\in\operatorname{O}^+\mid g(H)=H,\; g(\widetilde{B})=\widetilde{B},\; g(\widetilde{\Sigma})=\widetilde{\Sigma}\right\} \\ \nonumber G_2 & = & \left\{g\in\operatorname{O}^+\mid g(\xi)=\xi, \;\forall\xi\in H^2(S,\mathbb{Z})\right\}=\langle R_{\widetilde{B}},R_{\widetilde{\Sigma}}\rangle\\ \nonumber G_3 & = & \left\{g\in\widetilde{\operatorname{O}}^+\mid g(e)=e,\; g(f)=f\right\}. \nonumber \end{eqnarray} Let $k\in\operatorname{H}^2(S,\mathbb{Z})$ be a class such that $k^2=-2$ and $(k,H)=0$, and put $l=k+f$. Notice that $R_k\in G_1$. Define $$G=\langle G_1,G_2,G_3,R_l,R_A\rangle.$$ \begin{prop}\label{prop:Mon} $G\subset\operatorname{Mon}^2(\widetilde{M}_S)$. \end{prop} \begin{proof} First of all let us notice that $G_1,G_3\subset\operatorname{Mon}^2(\widetilde{M}_S)$ by Theorem~\ref{thm:my operators O'Grady} and Theorem~\ref{thm:LSV to OG}, and that $G_2\subset\operatorname{Mon}^2(\widetilde{M}_S)$ by \cite[Section~5.2]{Markman:ModularGaloisCovers}. Moreover, if we specialise $S$ to a $K3$ surface $S_0$ as in Example~\ref{example:R_L}, and we choose a parallel transport operator $P\colon\operatorname{H}^2(S,\mathbb{Z})\to\operatorname{H}^2(S_0,\mathbb{Z})$ such that $P(H)=H$ and $P(k)=K$, then $R_l=P^{-1}\circ R_{\widetilde{Z}}\circ P\in\operatorname{Mon}^2(\widetilde{M}_S)$, where $R_{\widetilde{Z}}$ is the reflection in Example~\ref{example:R_L}. Finally, $R_{A}\in\operatorname{Mon}^2(\widetilde{M}_S)$ by Lemma~\ref{lemma:A monodromy}. \end{proof} \begin{thm}\label{thm:Mon^2} $\operatorname{Mon}^2(\widetilde{M}_S)=G=\operatorname{O}^+(\operatorname{H}^2(\widetilde{M}_S,\mathbb{Z}))$. \end{thm} The proof of the theorem is lattice-theoretic, so we recall here the notation and the results we need. Let $L$ be an even lattice. 
If $z\in L$ is an isotropic element, i.e.\ $z^2=0$, and $a\in L$ is orthogonal to $z$, then the transvection $t(z,a)$ is defined by $$ t(z,a)(x)=x-(a,x)z+(z,x)a-\frac{1}{2}(a,a)(z,x)z.$$ Transvections are orientation-preserving isometries of determinant $1$ that act as the identity on the discriminant group. \begin{lemma}[\protect{\cite[Section~3]{GritsenkoHulekSankaran:Abelianisation}}]\label{lemma:transvections} \begin{enumerate} \item $t(z,a)^{-1}=t(z,-a)=t(-z,a)$; \item $g\circ t(z,a)\circ g^{-1}=t(g(z),g(a))$ for every $g\in\operatorname{O}^+$; \item if $R_a$ is integral, then $t(z,a)=R_aR_{a+\frac{1}{2}a^2z}$; \item if $(a,z)=0=(b,z)$, then $t(z,a+b)=t(z,a)\circ t(z,b)$. \end{enumerate} \end{lemma} Now suppose that $L=U\oplus L_1$ and that the hyperbolic plane $U$ is generated by two isotropic classes $e$ and $f$; define $$E_U(L_1)=\langle t(e,a),t(f,a)\mid a\in L_1\rangle.$$ If moreover $L_1=U\oplus L_2$, then by \cite[Proposition~3.3~(iii)]{GritsenkoHulekSankaran:Abelianisation} \begin{equation}\label{eqn:Eichler} \operatorname{O}^+(L)=\langle E_U(L_1),\operatorname{O}^+(L_1)\rangle, \end{equation} where $\operatorname{O}^+(L_1)$ is embedded in $\operatorname{O}^+(L)$ by extending any isometry of $L_1$ via the identity. We are now ready to prove Theorem~\ref{thm:Mon^2}. \begin{proof}[Proof of Theorem~\ref{thm:Mon^2}] By Remark~\ref{rmk:orient pub} and Proposition~\ref{prop:Mon} we have a chain of inclusions $$G\subset\operatorname{Mon}^2(\widetilde{M}_S)\subset\operatorname{O}^+(\operatorname{H}^2(\widetilde{M}_S,\mathbb{Z})).$$ We claim that $G=\operatorname{O}^+(\operatorname{H}^2(\widetilde{M}_S,\mathbb{Z}))$, from which the theorem follows. Recall that $\overline{U}\subset\operatorname{H}^2(\widetilde{M}_S,\mathbb{Z})$ is the hyperbolic plane spanned by the distinguished classes $e=H-\widetilde{B}-\widetilde{\Sigma}$ and $f=H-2\widetilde{B}-\widetilde{\Sigma}$; denote by $L_1$ its orthogonal complement.
Then by the identification (\ref{eqn:Eichler}), $$\operatorname{O}^+(\operatorname{H}^2(\widetilde{M}_S,\mathbb{Z}))=\langle E_{\overline{U}}(L_1),\operatorname{O}^+(L_1)\rangle.$$ Notice that $\operatorname{O}^+(L_1)=\langle G_3,R_A\rangle$. In fact $G_3\cong\widetilde{\operatorname{O}}^+(L_1)$ by definition, and $R_A$ restricts to an isometry of $L_1$ acting as $-\operatorname{id}$ on the discriminant group (Remark~\ref{rmk:A acts -id}). It follows that it is enough to show that all the transvections $t(e,a)$ and $t(f,a)$, with $a\in L_1$, belong to $G$. By part (2) of Lemma~\ref{lemma:transvections}, one notices that $$R_{\widetilde{B}}\circ t(f,a)\circ R_{\widetilde{B}}=t(R_{\widetilde{B}}(f),R_{\widetilde{B}}(a))=t(e,a),$$ where we used that $R_{\widetilde{B}}(a)=a$, for every $a\in L_1$, since $\widetilde{B}\in\overline{U}$. So it is enough to prove the claim for $t(f,a)$. Choosing a basis $\{a_1,\cdots,a_{22}\}$ for $L_1\cong U^2\oplus E_8(-1)^2\oplus A_2(-1)$, by part (4) of Lemma~\ref{lemma:transvections} it is enough to prove the claim for $t(f,a_1),\cdots,t(f,a_{22})$. Notice that there is a canonical basis for $L_1$ with $(a_i,a_i)=0$ or $(a_i,a_i)=-2$: in both cases $a_i$ has divisibility $1$. On the other hand, for any isotropic element $c\in L_1$, there exist two $(-2)$-elements $a,b\in L_1$ such that $t(f,c)=t(f,a)\circ t(f,b)$. In fact, if $a\in L_1$ is a $(-2)$-element such that $(a,c)=0$, then pick $b=c-a$. In this way, we reduce the problem to proving that $t(f,a)\in G$ for every $(-2)$-element $a\in L_1$. Applying the Eichler criterion \cite[Proposition~3.3~(i)]{GritsenkoHulekSankaran:Abelianisation} to the lattice $L_1$, and using part (2) of Lemma~\ref{lemma:transvections}, we eventually notice that it is enough to prove the claim for one specific $a$. Let $a=k\in\operatorname{H}^2(S,\mathbb{Z})$ be the class chosen above, with $k^2=-2$ and $(H,k)=0$.
Since the reflection $R_k$ is integral, by part (3) of Lemma~\ref{lemma:transvections} we can write $$t(-f,k)=R_k R_{k+f},$$ and $t(-f,k)=t(f,k)^{-1}$ by part (1) of Lemma~\ref{lemma:transvections}. Finally, $R_k\in G_1$ and $R_{k+f}=R_l\in G$, so the claim is proved. \end{proof} \begin{rmk} The proof in fact shows the stronger statement $$ \operatorname{O}^+(\operatorname{H}^2(\widetilde{M}_S,\mathbb{Z}))=\langle R_k, R_{\widetilde{B}}, R_l, R_A, G_3\rangle.$$ \end{rmk} \section{The locally trivial monodromy group of the singular moduli space} In this section we explain how Theorem~\ref{thm:Mon^2} helps to compute the locally trivial monodromy group of the singular moduli space $M_S$. We refer to Section~\ref{section:Singular} for the definitions. Let us recall that there exists a symplectic resolution of singularities $\pi\colon\widetilde{M}_S\to M_S$. The monodromy group $\operatorname{Mon}^2(\widetilde{M}_S)$ and the locally trivial monodromy group $\operatorname{Mon}^2(M_S)_{\operatorname{lt}}$ are related by means of the group $\operatorname{Mon}^2(\pi)$ of simultaneous monodromy operators. Recall that $\operatorname{Mon}^2(\pi)\subset\operatorname{O}(\operatorname{H}^2(\widetilde{M}_S,\mathbb{Z}))\times\operatorname{O}(\operatorname{H}^2(M_S,\mathbb{Z}))$, and denote by $p$ and $q$ the two projections, i.e. \begin{equation*} \xymatrix{ & \operatorname{Mon}^2(\pi)\ar[dl]_{p}\ar[dr]^{q} & \\ \operatorname{O}(\operatorname{H}^2(\widetilde{M}_S,\mathbb{Z})) & & \operatorname{O}(\operatorname{H}^2(M_S,\mathbb{Z})). } \end{equation*} \begin{thm}\label{thm:l t monodromy} $\operatorname{Mon}^2(M_S)_{\operatorname{lt}}=\widetilde{\operatorname{O}}^+(\operatorname{H}^2(M_S,\mathbb{Z}))$.
\end{thm} \begin{proof} First of all, the projection $p\colon\operatorname{Mon}^2(\pi)\to\operatorname{O}(\operatorname{H}^2(\widetilde{M}_S,\mathbb{Z}))$ is injective and moreover, by item $(1)$ of \cite[Corollary~5.18]{BakkerLehn:Singular2016}, $\operatorname{Mon}^2(\pi)$ is identified with the subgroup of $\operatorname{Mon}^2(\widetilde{M}_S)$ stabilising the resolution K\"ahler chamber (see \cite[Definition~5.4]{BakkerLehn:Singular2016}). In this case the resolution K\"ahler chamber is the chamber containing the K\"ahler cone in the decomposition $$\mathcal{C}_{\widetilde{M}_S}\setminus\widetilde{\Sigma}^\perp.$$ Here $\mathcal{C}_{\widetilde{M}_S}$ is the positive cone, i.e.\ the connected component containing the K\"ahler cone in the cone of positive classes in $\operatorname{H}^{1,1}(\widetilde{M}_S,\mathbb{R})$. Since $\operatorname{Mon}^2(\widetilde{M}_S)=\operatorname{O}^+(\operatorname{H}^2(\widetilde{M}_S,\mathbb{Z}))$ by Theorem~\ref{thm:Mon^2}, it follows that $$ \operatorname{Mon}^2(\pi)=\operatorname{O}^+(\operatorname{H}^2(\widetilde{M}_S,\mathbb{Z}),\widetilde{\Sigma}),$$ namely the subgroup of isometries $g$ such that $g(\widetilde{\Sigma})=\widetilde{\Sigma}$. Finally, by item $(2)$ of \cite[Corollary~5.18]{BakkerLehn:Singular2016}, the locally trivial monodromy group $\operatorname{Mon}^2(M_S)_{\operatorname{lt}}$ is identified with the image of the projection $q\colon\operatorname{Mon}^2(\pi)\longrightarrow\operatorname{O}(\operatorname{H}^2(M_S,\mathbb{Z})).$ By \cite[Proposition~1.5.1]{Nikulin:Lattice}, the image of $q$ is identified with the subgroup of isometries $h\in\operatorname{O}(\operatorname{H}^2(M_S,\mathbb{Z}))$ such that $h$ acts as the identity on the finite group $\operatorname{H}^2(\widetilde{M}_S,\mathbb{Z})/(\pi^*\operatorname{H}^2(M_S,\mathbb{Z})\oplus\mathbb{Z}\widetilde{\Sigma})$. Since the last group is isomorphic to the discriminant group of $\operatorname{H}^2(M_S,\mathbb{Z})$, the theorem is proved.
\end{proof} \begin{rmk} Geometrically Theorem~\ref{thm:l t monodromy} reflects the fact that there are two singular moduli spaces that are birational, but whose singular locus is not preserved under the birational isomorphism (cf.\ Section~\ref{subsection:hyperelliptic}). More precisely, the birational isomorphism does not preserve the singularity type of the two moduli spaces: one has locally factorial singularities, while the other has $2$-factorial singularities. \end{rmk} \end{document}
\begin{document} \title{Measuring complete quantum states with a single observable} \author{Xinhua Peng$^{1}$} \email{[email protected]} \author{Jiangfeng Du$^{1,2}$} \author{Dieter Suter$^{1}$} \affiliation{$^{1}$Fachbereich Physik, Universit\"{a}t Dortmund, 44221 Dortmund, Germany} \affiliation{$^{2}$Hefei National Laboratory for Physical Sciences at Microscale and Department of Modern Physics, University of Science and Technology of China, Hefei, Anhui 230026, P.R. China} \date{\today} \begin{abstract} Experimental determination of an unknown quantum state usually requires several incompatible measurements. However, it is also possible to determine the full quantum state from a single, repeated measurement. For this purpose, the quantum system whose state is to be determined is first coupled to a second quantum system (the ``assistant'') in such a way that part of the information in the quantum state is transferred to the assistant. The actual measurement is then performed on the enlarged system including the original system and the assistant. We discuss in detail the requirements of this procedure and experimentally implement it on a simple quantum system consisting of nuclear spins. \end{abstract} \pacs{03.65.Wj, 03.67.-a, 05.30.-d} \maketitle \section{INTRODUCTION} Given the state of a quantum system, one is able to calculate the results of any measurement performed on that system. However, to determine the state from the results of measurements, one usually has to perform different measurements that are not mutually compatible, using non-commuting observables.
This issue is sometimes referred to as the ``Pauli problem'', since Pauli discussed it in 1933 \cite{Pauli:1933aa}. Since then, interest in this issue has continued, as it touches the fundamentals of quantum mechanics \cite{Bohr:1935aa}. More recently, it was also found to be of practical importance in quantum communication \cite{HelstromBook:1976aa, Chefles:2000aa, Gill:2003aa}, e.g., in quantum cryptography and quantum key distribution \cite{Bechmann-Pasquinucci:2000aa,Bechmann-Pasquinucci:2000ab}. If we consider an ensemble \textbf{\emph{S}} of $N$-level systems, its quantum state is described by a density matrix $\hat{\rho}$ in an $N$-dimensional Hilbert space, which requires $N^2-1$ real parameters for its complete specification. These parameters can be determined experimentally from the outcomes of a series of different measurements on identically prepared ensembles. Since quantum mechanical measurements of an observable $\hat{\Omega}$ with the spectral decomposition $\hat{\Omega} =\sum_{\alpha=1} ^N \varpi_\alpha \hat{P}_\alpha $ generate at most $N-1$ independent probabilities, at least $(N^2-1)/(N-1)=N+1$ measurements with noncommuting observables are required to fully determine the unknown state $\hat{\rho}$. Techniques for the reconstruction of the complete quantum state from a series of measurements are commonly referred to as ``quantum state tomography'' \cite{Welsch:1999aa, DAriano:1997aa, LeonhardtBook:1997aa}. Different versions of such techniques have been proposed, with the goal of obtaining the best possible information about the unknown state while using the smallest possible number of measurements. Since the number of measurements must be at least $N+1$, the task is thus to determine an optimal set of $N+1$ observables (see, e.g., \cite{Ivonovic:1981aa}).
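The counting argument above can be checked mechanically; the following minimal sketch (the function name is ours, not the paper's) reproduces the bound $(N^2-1)/(N-1)=N+1$:

```python
# Minimum number of non-commuting observables needed for state tomography:
# a density matrix on an N-dimensional Hilbert space has N^2 - 1 real
# parameters, while one non-degenerate observable yields only N - 1
# independent probabilities.
def min_measurements(N):
    return (N**2 - 1) // (N - 1)  # = N + 1

print(min_measurements(2))  # qubit: 3 (e.g. spin components along x, y, z)
print(min_measurements(4))  # two qubits: 5
```

For a single qubit this gives the three measurement directions of the standard tomography scheme.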
A solution to this problem was given by Wootters and Fields \cite{Wootters:1989aa}: they found that the observables should be chosen such that their basis states are evenly distributed through Hilbert space (``mutually unbiased''). This choice assures that the data redundancy is minimized and the information content is maximized. As the simplest example, we consider the state of a spin 1/2. Its density operator can be written in the form $\hat{\rho}=\frac{1}{2}(\mathbf{1}+\vec{s}\cdot\vec{\hat{\sigma}})$, where $\vec{\hat{\sigma}}$ are the Pauli operators and $\vec{s}$ is a dimensionless vector of length $\le 1$ that specifies the position of the state in the Bloch sphere. The simplest approach to determine this state consists of measuring the spin components along the $x, y$, and $z$-axes, yielding 6 possible measurement outcomes (see Fig. \ref{SingleMeas} a). A minimal set of measurements only requires four such probabilities. They may be chosen as the probabilities for measuring the spin operator components in four directions that are oriented like the face normals of a tetrahedron \cite{Rehacek:2004aa}. \begin{figure} \caption{Schematic representation of different schemes for the experimental determination of the state of a quantum system S. (a) Quantum state tomography: a series of mutually incompatible measurements are performed, projecting the quantum state e.g. along the x, y, and z-axes. (b) The present approach: Part of the information in the quantum state is first transferred to the assistant A.
A single measurement of the combined system S + A, in direction $\theta$, can then determine the complete initial state of S.} \label{SingleMeas} \end{figure} While these approaches all require a combination of measurements with incompatible observables, it is also possible to obtain the complete state information from a single measurement performed on a larger Hilbert space, provided the state information is first redistributed into this extended space \cite{DAriano:2002aa,Caves:2002aa, Du:2006aa, Allahverdyan:2004aa}. Ref. \cite{DAriano:2002aa} shows that it is possible to estimate the expectation values of all observables of a quantum system by measuring only a single ``universal'' observable on an extended Hilbert space. In the Hilbert space of the quantum system, the reduced operator of this ``universal observable'' constitutes a minimal informationally complete positive operator-valued measurement \cite{Caves:2002aa}. Du et al. \cite{Du:2006aa} demonstrated an experimental example of this. Allahverdyan et al. \cite{Allahverdyan:2004aa} determine the conditions for making this type of measurement robust by maximizing the determinant of the mapping between the quantum state and the measurement results. The possibility of obtaining the full quantum state information from a single measurement appears highly attractive and may well have practical advantages since it avoids some experimental uncertainties related to measurement setups for incompatible observables. It requires, however, the redistribution of the information within the extended Hilbert space. This is achieved by coupling the system $S$, whose state is to be determined, to an assistant $A$ and letting the combined system evolve for a suitable period. The sketch of this measurement idea is shown in Fig. \ref{SingleMeas} b).
As we show below, the success of the resulting measurements depends on the form of the Hamiltonian as well as on the duration of the evolution and the choice of the final measurement on the combined system. In this paper, we study the details of this type of measurements using a (nuclear) spin 1/2 as the system whose quantum state is to be determined, and a different spin 1/2 as the assistant. We consider in detail what types of Hamiltonian can be used to couple the system to the assistant, how the information content of the resulting state can be maximized, and under what conditions the scheme will fail. As an experimental example, we present results from a nuclear spin system, using nuclear magnetic resonance (NMR). \section{Coupling system and assistant} \subsection{Hamiltonian} We consider two qubits interacting with local magnetic fields and coupled through the Heisenberg interaction. The system Hamiltonian can then be written as \begin{equation} \begin{array}{lll} \hat{H} & = & \hat{H}_{z}(B_{1},B_{2})+\hat{H}_{ex}(J_{x},J_{y},J_{z}) \\ & = & B_{1}\hat{S}_{z}^{1}+B_{2}\hat{S}_{z}^{2}+\\ & & J_{x}\hat{S}_{x}^{1}\hat{S}_{x}^{2}+J_{y}\hat{S}_{y}^{1}\hat{S}_{y}^{2}+J_{z}\hat{S}_{z}^{1}\hat{S}_{z}^{2}, \end{array} \label{e.H} \end{equation} where $\hat{S}_{\nu}^{k}=\frac{1}{2}\sigma_{\nu}^{k} (\nu=x,y,z)$ denotes the local spin operator for qubit $k$. The $B_{k}$s are the strengths of the external magnetic fields (along the z axis) acting on qubit $k$, and the $J_{\nu}$s are the Heisenberg exchange constants. For arbitrary $J_{\nu}$, this is often called the anisotropic Heisenberg XYZ model. Some special cases are: \begin{itemize} \item {XXX (or isotropic Heisenberg): $J_{x}=J_{y}=J_{z}$} \item{XXZ : $J_{x}=J_{y}\neq J_{z}$} \item{XY : $J_{z}=0$} \item{XZ : $J_{y}=0$} \item{Heisenberg-Ising : $J_{x}=J_{y}=0$} \end{itemize} $J_{\nu}>0$ and $J_{\nu}<0$ correspond to the antiferromagnetic and ferromagnetic cases, respectively. 
In many solid-state systems, the coupling constants $J_{\nu}$ can be tuned by external fields and many proposals for solid-state quantum information processors rely on their tunability. The Hamiltonian of Eq.~(\ref{e.H}) splits into three mutually commuting parts, \begin{equation} \hat{H}=\hat{H}_{zz}+\hat{H}_{0}+\hat{H}_{2} \end{equation} where \begin{equation} \begin{array}{l} \hat{H}_{zz} = J_{z}\hat{S}_{z}^{1}\hat{S}_{z}^{2} \\ \hat{H}_{0} = B\gamma_B(\hat{S}_{z}^{1}-\hat{S}_{z}^{2}) +\frac{J}{2}(\hat{S}_{+}^{1}\hat{S}_{-}^{2}+\hat{S}_{-}^{1}\hat{S}_{+}^{2}) \\ \hat{H}_{2} =B(\hat{S}_{z}^{1}+\hat{S}_{z}^{2}) +\frac{J}{2} \gamma_J(\hat{S}_{+}^{1}\hat{S}_{+}^{2}+\hat{S}_{-}^{1}\hat{S}_{-}^{2}). \end{array} \end{equation} $B=(B_{1}+B_{2})/2$ and $J=(J_{x}+J_{y})/2$ are the average field and the coupling constant, and $\gamma_B=(B_{1}-B_{2})/(B_{1}+B_{2})$ and $\gamma_J=(J_{x}-J_{y})/(J_{x}+J_{y})$ are anisotropy parameters. $\hat{S}_{\pm}^{k}=\hat{S}_{x}^{k}\pm i\hat{S}_{y}^{k}$ are the raising and lowering operators. With this decomposition, the eigenvalues and eigenvectors can be easily calculated by diagonalizing $\hat{H}_{0}$ and $\hat{H}_{2}$ on their respective two-dimensional subspaces.
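The decomposition above is easy to verify numerically. The following sketch (the parameter values are arbitrary illustrations of ours, not experimental values from the paper) builds the two-qubit operators of Eq.~(\ref{e.H}) with NumPy and checks that $\hat{H}_{zz}$, $\hat{H}_{0}$ and $\hat{H}_{2}$ sum to $\hat{H}$, commute pairwise, and reproduce the eigenvalues $\frac{1}{4}J_z\pm\eta_1$, $-\frac{1}{4}J_z\pm\eta_2$ listed below:

```python
import numpy as np

# Spin-1/2 operators (hbar = 1); two-qubit operators are built with kron.
sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2
sp, sm = sx + 1j * sy, sx - 1j * sy   # raising / lowering operators
I2 = np.eye(2)
kron = np.kron

# Arbitrary illustrative parameters of the XYZ Hamiltonian.
B1, B2, Jx, Jy, Jz = 1.3, 0.7, 2.0, 1.2, 0.9

H = (B1 * kron(sz, I2) + B2 * kron(I2, sz)
     + Jx * kron(sx, sx) + Jy * kron(sy, sy) + Jz * kron(sz, sz))

# Average field/coupling and anisotropy parameters.
B, J = (B1 + B2) / 2, (Jx + Jy) / 2
gB, gJ = (B1 - B2) / (B1 + B2), (Jx - Jy) / (Jx + Jy)

Hzz = Jz * kron(sz, sz)
H0 = B * gB * (kron(sz, I2) - kron(I2, sz)) + J / 2 * (kron(sp, sm) + kron(sm, sp))
H2 = B * (kron(sz, I2) + kron(I2, sz)) + J / 2 * gJ * (kron(sp, sp) + kron(sm, sm))

assert np.allclose(H, Hzz + H0 + H2)        # the three parts reproduce H ...
for A, C in [(Hzz, H0), (Hzz, H2), (H0, H2)]:
    assert np.allclose(A @ C, C @ A)        # ... and commute pairwise

# Spectrum: Jz/4 +- eta1 (double-flip block) and -Jz/4 +- eta2 (flip-flop block).
eta1 = np.sqrt(B**2 + (J * gJ / 2)**2)
eta2 = np.sqrt((B * gB)**2 + (J / 2)**2)
expected = sorted([Jz / 4 + eta1, Jz / 4 - eta1, -Jz / 4 + eta2, -Jz / 4 - eta2])
assert np.allclose(np.linalg.eigvalsh(H), expected)
```

The two $2\times2$ blocks correspond to the subspaces $\{\left\vert\uparrow\uparrow\right\rangle,\left\vert\downarrow\downarrow\right\rangle\}$ and $\{\left\vert\uparrow\downarrow\right\rangle,\left\vert\downarrow\uparrow\right\rangle\}$, on which $\hat{H}_2$ and $\hat{H}_0$ respectively act.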
We obtain for the eigenvalues \begin{equation} \begin{array}{l} \lambda_{1}=\frac{1}{4}J_{z}+\eta_{1},\\ \lambda_{2}=-\frac{1}{4}J_{z}+\eta_{2},\\ \lambda_{3}=-\frac{1}{4}J_{z}-\eta_{2},\\ \lambda_{4}=\frac{1}{4}J_{z}-\eta_{1}, \end{array} \label{Eigval} \end{equation} and for the eigenvectors \begin{equation} \begin{array}{cc} \left\vert \psi_{1}\right\rangle =\left(\begin{array}{c} \cos(\theta_{1}/2)\\ 0\\ 0\\ \sin(\theta_{1}/2)\end{array}\right) & \left\vert \psi_{2}\right\rangle =\left(\begin{array}{c} 0\\ \cos\frac{\theta_{2}}{2}\\ \sin\frac{\theta_{2}}{2}\\ 0\end{array}\right)\\ \left\vert \psi_{3}\right\rangle =\left(\begin{array}{c} 0\\ -\sin\frac{\theta_{2}}{2}\\ \cos\frac{\theta_{2}}{2}\\ 0\end{array}\right) & \left\vert \psi_{4}\right\rangle =\left(\begin{array}{c} -\sin(\theta_{1}/2)\\ 0\\ 0\\ \cos(\theta_{1}/2)\end{array}\right). \end{array}\label{EigVec}\end{equation} Here \begin{equation} \begin{array}{l} \eta_{1}=\sqrt{B^{2}+(J\gamma_J/2)^{2}} \\ \eta_{2}=\sqrt{(B\gamma_B)^{2}+(J/2)^{2}} \end{array}. \end{equation} and \begin{equation} \begin{array}{l} \cos\frac{\theta_{1}}{2}=\sqrt{\frac{\eta_{1}+B}{2\eta_{1}}}\\ \sin\frac{\theta_{1}}{2}=\frac{J\gamma_J/2}{\sqrt{2\eta_{1}(\eta_{1}+B)}}=sgn(J\gamma_J)\sqrt{\frac{\eta_{1}-B}{2\eta_{1}}}\\ \cos\frac{\theta_{2}}{2}=\sqrt{\frac{\eta_{2}+B\gamma_B}{2\eta_{2}}}\\ \sin\frac{\theta_{2}}{2}=\frac{J/2}{\sqrt{2\eta_{2}(\eta_{2}+B\gamma_B)}}=sgn(J)\sqrt{\frac{\eta_{2}-B\gamma_B}{2\eta_{2}}} \end{array}. \label{Paras} \end{equation} \subsection{Evolution} We write the evolution operator as a product of the evolutions generated by the three mutually commuting terms of Eq. 
(2): \begin{equation} \hat{U}(\tau)=e^{-i\hat{H}\tau} =\hat{U}_{zz}(\tau)\hat{U}_{0}(\tau)\hat{U}_{2}(\tau) \label{e.U} \end{equation} where \begin{equation} \begin{array}{lll} \hat{U}_{zz}(\tau) & = & e^{-i\hat{H}_{zz}\tau}\\ & = & \cos(\frac{J_z\tau}{4})\mathbf{1}-i \sin(\frac{J_z\tau}{4})(4\hat{S}^{1}_z\hat{S}^{2}_z)\\ \hat{U}_{0}(\tau) & = & e^{-i\hat{H}_0\tau}\\ & = & \frac{1+\cos(\eta_2 \tau)}{2}\mathbf{1}+\frac{1-\cos(\eta_2 \tau)}{2}(4\hat{S}^{1}_z\hat{S}^{2}_z)+\\ & & i2\cos\theta_2\sin(\eta_2\tau)(\hat{S}^{1}_z-\hat{S}^{2}_z)+\\ & & i\sin\theta_2\sin(\eta_2\tau)(\hat{S}_{+}^{1}\hat{S}_{-}^{2}+\hat{S}_{-}^{1}\hat{S}_{+}^{2})\\ \hat{U}_{2}(\tau) & = & e^{-i\hat{H}_2\tau}\\ & = & \frac{1+\cos(\eta_1 \tau)}{2}\mathbf{1}-\frac{1-\cos(\eta_1 \tau)}{2}(4\hat{S}^{1}_z\hat{S}^{2}_z)+\\ & & i2\cos\theta_1\sin(\eta_1\tau)(\hat{S}^{1}_z+\hat{S}^{2}_z)+\\ & & i\sin\theta_1\sin(\eta_1\tau)(\hat{S}_{+}^{1}\hat{S}_{+}^{2}+\hat{S}_{-}^{1}\hat{S}_{-}^{2}) . \\ \end{array} \label{U_comp} \end{equation} In the following, we will use a different operator basis for the diagonal terms: we define the polarization operators $I_{i}^{\alpha,\beta}=\frac{1}{2}\mathbf{1}\pm\hat{S}_{z}^{(i)}$. 
In terms of these operators, the total propagator becomes \begin{eqnarray} \hat{U}(\tau)=a_{1}I_{1}^{\alpha}I_{2}^{\alpha}+a_{2}I_{1}^{\alpha}I_{2}^{\beta}+a_{3}I_{1}^{\beta}I_{2}^{\alpha}+a_{4}I_{1}^{\beta}I_{2}^{\beta} \nonumber\\ +d(\hat{S}_{+}^{1}\hat{S}_{-}^{2}+\hat{S}_{-}^{1}\hat{S}_{+}^{2})+b(\hat{S}_{+}^{1}\hat{S}_{+}^{2}+\hat{S}_{-}^{1}\hat{S}_{-}^{2}) , \label{U_para} \end{eqnarray} where \begin{equation} \begin{array}{l} a_{1}=\cos^{2}\frac{\theta_{1}}{2}e^{-i\lambda_{1}\tau}+\sin^{2}\frac{\theta_{1}}{2}e^{-i\lambda_{4}\tau}\\ a_{2}=\cos^{2}\frac{\theta_{2}}{2}e^{-i\lambda_{2}\tau}+\sin^{2}\frac{\theta_{2}}{2}e^{-i\lambda_{3}\tau}\\ a_{3}=\sin^{2}\frac{\theta_{2}}{2}e^{-i\lambda_{2}\tau}+\cos^{2}\frac{\theta_{2}}{2}e^{-i\lambda_{3}\tau}\\ a_{4}=\sin^{2}\frac{\theta_{1}}{2}e^{-i\lambda_{1}\tau}+\cos^{2}\frac{\theta_{1}}{2}e^{-i\lambda_{4}\tau}\\ b=\frac{1}{2}\sin\theta_{1}(e^{-i\lambda_{1}\tau}-e^{-i\lambda_{4}\tau})\\ d=\frac{1}{2}\sin\theta_{2}(e^{-i\lambda_{2}\tau}-e^{-i\lambda_{3}\tau}) . \end{array} \label{Uparams} \end{equation} As we show in the following section, the evolution of Eq. (\ref{e.U}) transfers information between qubits in such a way that it becomes possible to measure the complete quantum state of one qubit with a single apparatus, as proposed by Allahverdyan et al.\cite{Allahverdyan:2004aa}. \section{Measurement procedure} \subsection{Principle} Consider a two-level system \textbf{\emph{S}} (spin-$\frac{1}{2}$) whose state can be represented by $\hat{\rho}=\left( \begin{array}{cc} \rho_{11} & \rho_{12} \\ \rho_{21} & \rho_{22} \\ \end{array} \right)$ with the normalization $\rho_{11}+\rho_{22}=1$. To determine the state $\hat{\rho}$, we can measure the vector $\vec{s}= 2 Tr(\vec{\hat{S}}\hat{\rho})=(s_{x}, s_{y}, s_{z})^{T}$ where $s_x=\rho_{12}+\rho_{21}$, $s_y=i(\rho_{12}-\rho_{21})$, $s_z=\rho_{11}-\rho_{22}$ and $\vec{\hat{S}}=(\hat{S}_x,\hat{S}_y,\hat{S}_z)^T$. 
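As a quick illustration (our own sketch, not taken from the paper), the map from $\hat{\rho}$ to the vector $\vec{s}=2\,Tr(\vec{\hat{S}}\hat{\rho})$ can be coded directly; for a pure state the resulting Bloch vector lies on the surface of the Bloch sphere:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2

def bloch_vector(rho):
    """Components s_nu = 2 Tr(S_nu rho) of the Bloch vector."""
    return np.array([2 * np.trace(s @ rho).real for s in (sx, sy, sz)])

# Pure state cos(t/2)|0> + e^{i phi} sin(t/2)|1>; (t, phi) are the usual
# spherical angles, the numerical values below are arbitrary.
t, phi = 0.8, 1.1
psi = np.array([np.cos(t / 2), np.exp(1j * phi) * np.sin(t / 2)])
rho = np.outer(psi, psi.conj())

s = bloch_vector(rho)
assert np.allclose(s, [np.sin(t) * np.cos(phi),
                       np.sin(t) * np.sin(phi),
                       np.cos(t)])
assert np.isclose(np.linalg.norm(s), 1.0)            # pure state: on the sphere
assert np.allclose(bloch_vector(np.eye(2) / 2), 0)   # maximally mixed: s = 0
```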
To transfer part of the state information to the assistant $A$, we couple the two subsystems with the interaction Hamiltonian $\hat{H}$ of Eq.~(\ref{e.H}). At the time $t=0$, the composite system \textbf{\emph{S+A}} is in the state $\hat{\varrho}_{0}=\hat{\rho}^{(S)}\otimes \hat{\xi}^{(A)}$. Without loss of generality, we assume $\hat{\xi}=\frac{1}{2}\mathbf{1}+\epsilon \hat{S}_z$ $(0 \leq\epsilon\leq 1)$. Here, the superscripts $S$ and $A$ refer to the two subsystems. Under the effect of the coupling Hamiltonian of Eq.~(\ref{e.H}), this state evolves into $\hat{\varrho}_{\tau} = \hat{U}(\tau) \, \hat{\varrho}_{0} \, \hat{U}^{\dag}(\tau)$. On this state, we repeatedly measure the simplest possible nondegenerate, factorized observable $\hat{\Omega} =\sum_{\alpha=1} ^4 \varpi_\alpha \hat{P}_\alpha $, which determines the complete set $\{P_\alpha \}$ of the joint probabilities. They correspond to the diagonal elements of $\hat{\varrho}_{\tau}$ in the eigenbasis of the observable $\hat{\Omega} $. Since these values were generated from the initial state by a one-to-one mapping, $P_\alpha=P_{kq}=\sum_{ij} \mathcal{M}_{kq,ij} \rho_{ij}$, we can invert this mapping to calculate the original state $\hat{\rho}$ \cite{Allahverdyan:2004aa}. The precision of the back-calculation depends on the size of the determinant $\Delta=\det(\mathcal{M}_{kq,ij})$: If $|\Delta|$ is small, any (experimental) error in the measurement of $P_{kq}$ will result in a large error in $\rho_{ij}$, roughly $\propto 1/|\Delta|$. We therefore seek to maximize $|\Delta|$ and thereby the precision of the measurement. This maximization is achieved by a suitable choice of the Hamiltonian $\hat{H}$, the duration $\tau$, and the single observable $\hat{\Omega} $. \subsection{Symmetry properties of the evolution} The Hamiltonian $\hat{H}$ of Eq. (\ref{e.H}) consists of three commuting parts: $\hat{H}_{zz}$, $\hat{H}_{0}$, and $\hat{H}_{2}$. All of these terms are invariant under $\pi$-rotations around the z-axis.
We use this property to separate the density operator into two parts that transform irreducibly under this symmetry operation: one part that is invariant with respect to $\pi_z$ rotations, and a second part that changes sign. The first part includes the diagonal terms and the zero- and double-quantum coherences \cite{ErnstBook:1994aa}; the second part includes the single-quantum coherence terms. The symmetry of the Hamiltonian implies that the evolution does not transfer information from one subspace to the other. Figure \ref{space} illustrates the division of the density operator into these subspaces. \begin{figure} \caption{Subspaces of the density operator that are invariant under the evolution generated by the Hamiltonian of Eq. (\ref{e.H}).} \label{space} \end{figure} A consequence of this separation into two distinct subspaces is that it restricts the possible choice of observables. In particular, if we choose the $z$-components of the two spins as the single observable $\hat{\Omega}$, then all possible combinations fall into the subspace that is invariant under $\pi_z$ rotations, and the measurement therefore provides no information about the other subspace. For this paper, we choose the $x$-components of both spins; another, equivalent choice would be the $y$-components. \subsection{Transfer matrix} This evolution process, together with the subsequent measurement of the single observable $\hat{\Omega}$, transfers the information from the initial state $\hat{\varrho}_{0}$ to the set of measurement results $P_{\alpha}=P_{kq}$, which are the expectation values of the operators $$ \hat{P}_{kq} = (\frac{1}{2}\mathbf{1}- (-1)^{k}\hat{S}_{x}^{(S)}) \otimes (\frac{1}{2}\mathbf{1}- (-1)^{q}\hat{S}_{x}^{(A)}) $$ acting on the state $\hat{\varrho}_{\tau}$: $$ P_{kq} = Tr \{ \hat{P}_{kq} \hat{U}(\tau) \hat{\varrho}_{0} \hat{U}^{\dagger} (\tau) \} . $$ The indices $k,q$ can take the values 1, 2.
We use the transfer matrix $\mathcal{M}$ to describe this map: \begin{equation} \mathcal{M} \left(\begin{array}{c} \rho_{11}\\ \rho_{12}\\ \rho_{21}\\ \rho_{22}\end{array}\right)=\left(\begin{array}{c} P_{11}\\ P_{12}\\ P_{21}\\ P_{22} \end{array}\right) . \label{mapM} \end{equation} Its elements are \begin{equation} \begin{array}{l} \mathcal{M}_{11,11}=\mathcal{M}_{22,11}=\frac{1}{8}((1+ \epsilon )|a_{1}+b|^2+(1- \epsilon )|a_{2}+d|^2)\\ \mathcal{M}_{12,11}=\mathcal{M}_{21,11}=\frac{1}{8}((1+ \epsilon )|a_{1}-b|^2+(1- \epsilon )|a_{2}-d|^2)\\ \mathcal{M}_{11,12}=-\mathcal{M}_{22,12}=\mathcal{M}_{11,21}^{*}=-\mathcal{M}_{22,21}^{*}\\ =\frac{1}{8}((1+ \epsilon )(a_{1}+b)(a_{3}^{*}+d^{*})+(1- \epsilon )(a_{2}+d)(a_{4}^{*}+b^{*}))\\ \mathcal{M}_{12,12}=-\mathcal{M}_{21,12}=\mathcal{M}_{12,21}^{*}=-\mathcal{M}_{21,21}^{*}\\ =\frac{1}{8}((1+ \epsilon )(a_{1}-b)(a_{3}^{*}-d^{*})+(1- \epsilon )(a_{2}-d)(a_{4}^{*}-b^{*}))\\ \mathcal{M}_{11,22}=\mathcal{M}_{22,22}=\frac{1}{8}((1+ \epsilon )|a_{3}+d|^2+(1- \epsilon )|a_{4}+b|^2)\\ \mathcal{M}_{12,22}=\mathcal{M}_{21,22}=\frac{1}{8}((1+ \epsilon )|a_{3}-d|^2+(1- \epsilon )|a_{4}-b|^2)\\ \end{array} \label{e.elem.M} \end{equation} and its determinant is \begin{equation} \Delta =8 \Im(\mathcal{M}_{12,12}^{*}\mathcal{M}_{11,12})(\mathcal{M}_{11,11}\mathcal{M}_{12,22}-\mathcal{M}_{12,11}\mathcal{M}_{11,22}) . \label{e.detM} \end{equation} Here $\Im(c)$ denotes the imaginary part of $c$. Using Eqs.
(\ref{Eigval}), (\ref{Paras}), (\ref{Uparams}) and (\ref{e.elem.M}), we find for its absolute value \begin{equation} \label{e.Delta} \begin{array}{ll} |\Delta|=&\frac{1}{32}|(1-\epsilon^2)\sin(-J_{z}\tau)\{[\sin(2\theta_{1})\sin^{2}(\eta_{1}\tau)]^{2}-\\ &[\sin(2\theta_{2})\sin^{2}(\eta_{2}\tau)]^{2}\}+2\epsilon [\sin(2\theta_{1})\sin^{2}(\eta_{1}\tau)\\ &+\sin(2\theta_{2})\sin^{2}(\eta_{2}\tau)]\{[1-2\sin^{2}\theta_{1}\sin^{2}(\eta_{1}\tau)]\\ &\times \sin\theta_{2}\sin(2\eta_{2}\tau)-[1-2\sin^{2}\theta_{2}\sin^{2}(\eta_{2}\tau)]\\ &\times \sin\theta_{1}\sin(2\eta_{1}\tau)\}| . \end{array} \end{equation} \section{Optimization: Maximizing $|\Delta|$} The size of the determinant $|\Delta|$ of the transfer mapping determines the quality of the measurement. Maximizing $|\Delta|$ minimizes the statistical error of the estimation during the inversion of Eq. (\ref{mapM}). We can maximize it by an appropriate choice of the initial condition of the assistant, the parameters of the Hamiltonian that generates the evolution $U$, and the duration of the evolution. Figure \ref{Dmax} plots the maximum possible determinant size $|\Delta|_{max}$ for the Hamiltonian of Eq. (\ref{e.H}) as a function of the polarization $\epsilon$ of the assistant. Clearly, the quality of the measurement should increase with increasing polarization of the assistant. The dashed line in Fig. \ref{Dmax} shows for comparison the maximum possible value for a general exchange interaction, taken from Ref. \cite{Allahverdyan:2004aa}. In the extreme cases of zero and full polarization, the Heisenberg coupling Hamiltonian allows one to reach the maximum possible value, but for intermediate polarizations, its maximum value is slightly lower than for the general case. Let us now focus on the two extreme situations $\epsilon=1$ (\emph{a pure state}) and $\epsilon=0$ (\emph{a completely disordered state}).
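Rather than coding the closed-form expressions (\ref{e.elem.M}), the transfer matrix can also be obtained directly from its definition, $\mathcal{M}_{kq,ij}=Tr\{\hat{P}_{kq}\hat{U}(\tau)(E_{ij}\otimes\hat{\xi})\hat{U}^{\dagger}(\tau)\}$, where $E_{ij}$ is the elementary matrix with a one at position $(i,j)$. A minimal sketch (the Hamiltonian coefficients below are illustrative, not an optimum; spin operators $S_\nu=\sigma_\nu/2$ are assumed):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2
I2 = np.eye(2)

def expm_i(H, tau):
    """exp(-i H tau) for a Hermitian matrix H, via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * w * tau)) @ V.conj().T

def transfer_matrix(U, xi):
    """M[kq, ij] = Tr[ P_kq U (E_ij x xi) U^dag ]; rows ordered
    (k,q) = (1,1),(1,2),(2,1),(2,2), columns (i,j) likewise."""
    proj = [np.kron(I2 / 2 - (-1) ** k * sx, I2 / 2 - (-1) ** q * sx)
            for k in (1, 2) for q in (1, 2)]
    M = np.empty((4, 4), dtype=complex)
    for col, (i, j) in enumerate([(0, 0), (0, 1), (1, 0), (1, 1)]):
        E = np.zeros((2, 2)); E[i, j] = 1
        rho_tau = U @ np.kron(E, xi) @ U.conj().T
        for row in range(4):
            M[row, col] = np.trace(proj[row] @ rho_tau)
    return M

# Illustrative (not optimized) coupling Hamiltonian; pure assistant state
H = (1.1 * np.kron(sz, I2) - 0.3 * np.kron(I2, sz)
     + 3.4 * np.kron(sx, sx) - 1.3 * np.kron(sy, sy))
xi = np.array([[1, 0], [0, 0]])
M = transfer_matrix(expm_i(H, 1.0), xi)

# Normalization sum_kq P_kq = 1 forces the column sums to Tr(E_ij) = delta_ij
assert np.allclose(M.sum(axis=0), [1, 0, 0, 1])
# Probabilities for a valid (Hermitian, positive) rho are real and sum to one
P = M @ np.array([0.7, 0.2 - 0.1j, 0.2 + 0.1j, 0.3])
assert np.allclose(P.imag, 0, atol=1e-10) and np.isclose(P.real.sum(), 1.0)
```

The column-sum property is a useful numerical check of the normalization $\sum_{kq}\hat{P}_{kq}=\mathbf{1}$, independent of the chosen parameters.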
\begin{figure} \caption{(Color Online) Maximal determinant size $|\Delta|_{max}$ as a function of the polarization $\epsilon$ of the assistant. The dashed line shows the maximum possible value for a general exchange interaction \cite{Allahverdyan:2004aa}.} \label{Dmax} \end{figure} \subsection{Assistant in pure state} When the assistant \emph{\textbf{A}} starts in \emph{a pure state} $\hat{\xi}^{(A)} = \frac{1}{2}\mathbf{1}+S_z^{(A)}$ (corresponding to $\epsilon=1$), the determinant becomes \begin{equation} \begin{array}{ll} |\Delta|=&\frac{1}{16}| [\sin(2\theta_{1})\sin^{2}(\eta_{1}\tau)+\sin(2\theta_{2})\sin^{2}(\eta_{2}\tau)]\\ &\times \{[1-2\sin^{2}\theta_{1}\sin^{2}(\eta_{1}\tau)]\sin\theta_{2}\sin(2\eta_{2}\tau)\\ &-[1-2\sin^{2}\theta_{2}\sin^{2}(\eta_{2}\tau)]\sin\theta_{1}\sin(2\eta_{1}\tau)\}|. \end{array} \end{equation} We can see from this expression that $|\Delta|$ is independent of the coupling strength $J_z$ along the $z$-axis. A Heisenberg XY interaction is thus sufficient for optimizing the evolution, and we therefore specialize to this case. Using the substitutions \begin{equation}\label{subs} \begin{array}{l} \sin\frac{\Xi_k}{2}=\sin\theta_k \sin(\eta_k\tau) \\ \sin\Lambda_k=\cos(\eta_k\tau)/\cos\frac{\Xi_k}{2}\ (k=1,2), \end{array} \end{equation} we rewrite $|\Delta|$ as \begin{equation} \begin{array}{ll} |\Delta|=&\frac{1}{16}|(\sin\Xi_{1}\cos\Lambda_1+\sin\Xi_{2}\cos\Lambda_2) \\ &\times (\cos\Xi_{1}\sin\Xi_2\sin\Lambda_2-\cos\Xi_2\sin\Xi_{1}\sin\Lambda_1)| . \end{array} \end{equation} In terms of these parameters, an optimal solution ($|\Delta| = 1/(12\sqrt{3})$) is given by the following parameter set: \begin{equation} \begin{array}{l} \Lambda_1=\Lambda_2=\Lambda \\ \sin(2\Lambda)= \pm 1\\ \sin\frac{\Xi_1-\Xi_2}{2}= \pm \frac{1}{\sqrt{3}} \\ \sin\frac{\Xi_1+\Xi_2}{2}= \pm 1 .
\end{array} \end{equation} This parameter set corresponds to the following parameters of the Hamiltonian (\ref{e.H}): \begin{equation} \label{Con_pure} \begin{array}{l} \eta_{k}\tau = m\pi \pm \frac{1}{2} \arccos ( -\Gamma_k)\\ B=\pm \eta_1\sqrt{(1-\Gamma_1)/(1+\Gamma_1)}\\ \gamma_B=\pm \sqrt{\frac{(1+\Gamma_1)(1-\Gamma_2)}{(1-\Gamma_1)(1+\Gamma_2)}}\frac{\eta_2}{\eta_1} \\ J=\pm 4\eta_2\sqrt{2\Gamma_2/(1+\Gamma_2)} \\ \gamma_J=\pm \sqrt{\frac{\Gamma_1(1+\Gamma_2)}{\Gamma_2(1+\Gamma_1)}}\frac{\eta_1}{\eta_2} . \end{array} \end{equation} Here, $m$ is an integer and $\Gamma_k=\sin^2 \frac{\Xi_k}{2}$ $(k=1,2)$. $(\Gamma_1,\Gamma_2)$ can take the pairs of values $(\frac{1}{2}-\frac{\sqrt{3}}{6},\frac{1}{2}+\frac{\sqrt{3}}{6})$ or $(\frac{1}{2}+\frac{\sqrt{3}}{6},\frac{1}{2}-\frac{\sqrt{3}}{6})$. Without loss of generality, we set $\tau=1$. The optimal Hamiltonian is then \begin{equation} H^{opt}=1.1458\hat{S}_{z}^{1}-0.2935\hat{S}_{z}^{2}+3.3820\hat{S}_{x}^{1} \hat{S}_{x}^{2}-1.2747\hat{S}_{y}^{1}\hat{S}_{y}^{2}. \end{equation} \subsection{Completely disordered assistant} When the assistant \emph{\textbf{A}} is initially in \emph{a completely disordered state} $\hat{\xi}^{(A)}=\frac{1}{2}\mathbf{1}$ ($\epsilon=0$), we have \begin{equation} \begin{array}{ll} |\Delta|=&\frac{1}{32}|\sin(J_{z}\tau)|\,|\{[\sin(2\theta_{1})\sin^{2}(\eta_{1}\tau)]^{2}\\ &-[\sin(2\theta_{2})\sin^{2}(\eta_{2}\tau)]^{2}\}|. \end{array} \end{equation} $|\Delta|$ reaches its maximum of $1/32$ when \begin{equation} \sin(J_{z}\tau)=\pm 1\Rightarrow J_{z}\tau=\frac{\pi}{2}(2n-1) \end{equation} ($n$ integer) and simultaneously \begin{equation} \sin(2\theta_{1})\sin^{2}(\eta_{1}\tau) = \pm 1 \mbox{ and } \sin(2\theta_{2})\sin^{2}(\eta_{2}\tau) = 0 \label{e23} \end{equation} or \begin{equation} \sin(2\theta_{1})\sin^{2}(\eta_{1}\tau) = 0 \mbox{ and } \sin(2\theta_{2})\sin^{2}(\eta_{2}\tau) = \pm 1 .
\label{e24} \end{equation} Condition (\ref{e23}) corresponds to the following parameters for the Hamiltonian: \begin{equation} \begin{array}{ccl} |B|\tau & = &|2m-1| \frac{\sqrt{2}\pi}{4}\\ |J_x-J_y| & = & 4 |B|\\ \gamma_B & = & 0 \mbox{ OR }J_x+J_y=0 \mbox{ OR }\eta_{2}\tau=l\pi \\ \end{array} \end{equation} and (\ref{e24}) to \begin{equation} \begin{array}{ccl} |J|\tau & = & |2m-1| \frac{\sqrt{2}\pi}{2}\\ |B_1-B_2| & = & |J|\\ B_1+B_2 & = & 0 \mbox{ OR }\gamma_J=0 \mbox{ OR }\eta_{1}\tau=l\pi , \end{array} \end{equation} where $m,l$ are integers. These relationships define six classes of Heisenberg exchange interactions that optimally transfer information from the system to the combined system plus assistant. The transfer is determined by the product of the Hamiltonian and the evolution time $\tau$. Without loss of generality, we choose $\tau=\frac{\pi}{4}$. In these units, some possibilities are \begin{description} \item [{(a)}] XYX model: $H^{opt}=\sqrt{2}(\hat{S}_{z}^{1}+\hat{S}_{z}^{2}) +2(\hat{S}_{x}^{1}\hat{S}_{x}^{2}+\hat{S}_{z}^{1}\hat{S}_{z}^{2}) +2(1-2\sqrt{2})\hat{S}_{y}^{1}\hat{S}_{y}^{2}$, as shown in Ref.\cite{Allahverdyan:2004aa}; \item [{(b)}] XXZ model: $H^{opt}=\sqrt{2}(\hat{S}_{z}^{1}-\hat{S}_{z}^{2}) +2\sqrt{2}(\hat{S}_{x}^{1}\hat{S}_{x}^{2}+\hat{S}_{y}^{1}\hat{S}_{y}^{2}) +2\hat{S}_{z}^{1}\hat{S}_{z}^{2}$; \item [{(c)}] XZ model: $H^{opt}=\sqrt{2}(\hat{S}_{z}^{1}\pm \hat{S}_{z}^{2}) +4\sqrt{2}\hat{S}_{x}^{1}\hat{S}_{x}^{2} +2\hat{S}_{z}^{1}\hat{S}_{z}^{2}$. \end{description} \section{Failure analysis} The measurement scheme fails when $\Delta=0$. From Eq. 
(\ref{e.Delta}), we see that this occurs when \begin{equation}\label{Delta0} \sin(2\theta_{1})\sin^{2}(\eta_{1}\tau)+\sin(2\theta_{2})\sin^{2}(\eta_{2}\tau)=0 \end{equation} or when \begin{equation} \begin{array}{l} (1-\epsilon^2)\sin(-J_{z}\tau)[\sin(2\theta_{1})\sin^{2}(\eta_{1}\tau) -\sin(2\theta_{2})\sin^{2}(\eta_{2}\tau)]\\ +2\epsilon\{[1-2\sin^{2}\theta_{1}\sin^{2}(\eta_{1}\tau)] \sin\theta_{2}\sin(2\eta_{2}\tau)\\ -[1-2\sin^{2}\theta_{2}\sin^{2}(\eta_{2}\tau)] \sin\theta_{1}\sin(2\eta_{1}\tau)\}=0 , \end{array} \end{equation} independent of the initial state of the assistant \textbf{\emph{A}}. A simple case is $\sin(2\theta_1)=\sin(2\theta_2)=0$, i.e., $J=0$ or $B=0$ or ($\gamma_B=0$ and $\gamma_J=0$). These cases correspond, e.g., to \begin{itemize} \item{a weakly coupled liquid-state NMR Hamiltonian ($J_{x}=J_{y}=0$)} \item{any Heisenberg interaction without external field} \item{an isotropic Heisenberg interaction in the XY plane in a uniform external field.} \end{itemize} In all of these cases, the resulting evolution cannot generate a state that allows one to measure the complete information. Another case that fulfills Eq. (\ref{Delta0}) is \begin{equation} \sin(2\theta_{1})=-\sin(2\theta_{2})\Rightarrow\frac{\gamma_J}{\gamma_B}=-\frac{\eta_1^2}{\eta_2^2} \end{equation} and \begin{equation} |\sin(\eta_{1}\tau)|=|\sin(\eta_{2}\tau)|\Rightarrow\eta_{1}\tau=|m\pi\pm\eta_{2}\tau|. \end{equation} If, e.g., $\eta_1=\eta_2$, we get the condition \begin{equation}\label{FailCon} (\gamma_B,\gamma_J)=(1,\pm 1) \mbox{ or } (-1,1) \end{equation} for $\Delta=0$. From Eq. (\ref{e.Delta}), we can see that for $\epsilon=1$ (a pure state), $|\Delta|$ does not depend on $J_z$, while for $\epsilon=0$ (completely disordered state), $|\Delta|$ depends strongly on $J_z$. For $\epsilon=0$, it is obvious that $\Delta=0$ when \begin{equation} \sin(J_{z}\tau)=0\Rightarrow J_{z}\tau=n\pi.
\end{equation} Hence, the existence of the coupling along the $z$-axis (i.e., $J_{z}\neq0$) is a necessary condition for this measurement scheme when the assistant \textbf{\emph{A}} is initially prepared in a completely disordered state. In this case, the failure condition (\ref{FailCon}) can be further modified to \begin{equation} (\gamma_B,\gamma_J)=(\pm 1,\pm 1), \end{equation} which means that when any two among $J_{x},J_{y},B_{1},B_{2}$ are equal to zero, the measurement scheme fails. \section{Quantum Simulation of the Exchange Hamiltonian} In liquid-state NMR systems, the natural Hamiltonian for a system of two spins is \begin{equation} \hat{H}_{NMR}= \omega_{1}\hat{S}_{z}^{1} + \omega_{2}\hat{S}_{z}^{2} +2\pi J_{12}\hat{S}_{z}^{1}\hat{S}_{z}^{2}, \label{e.H_nmr} \end{equation} where $\omega_{1,2}$ represent the Larmor angular frequencies of the two qubits (in the rotating frame) and $J_{12}$ the spin-spin coupling constant. This is equivalent to the Heisenberg-Ising model. As discussed in the preceding section, this Hamiltonian cannot be used to transfer the information, since the transfer matrix becomes singular, $\Delta\equiv0$. Therefore, the key to implementing this measurement scheme in a liquid-state NMR system is to first perform a quantum simulation of the Hamiltonian (\ref{e.H}). We briefly discuss two techniques for realizing such an evolution. \subsection{Short period expansion} Assuming that we can realize parts of the Hamiltonian experimentally, we write the total Hamiltonian as a sum, $$ \hat{H}=\sum\limits_{k=1}^{L}\hat{H}_{k} . $$ In general, the different terms do not commute with each other, and it is therefore not sufficient to generate them sequentially. However, if the evolution under each term is sufficiently short, it is possible to approximate the overall evolution in this way.
Using a symmetrized version of the Trotter formula \cite{Trotter:1959aa}, $$ e^{(A+B)\tau} = e^{A\tau/2}\,e^{B\tau}\,e^{A\tau/2} + O(\tau^3) , $$ we expand the propagator as \begin{eqnarray} e^{-i\hat{H}\Delta t} \simeq [e^{-i\hat{H}_{1}\frac{\Delta t}{2}}e^{-i\hat{H}_{2}\frac{\Delta t}{2}}...e^{-i\hat{H}_{L}\frac{\Delta t}{2}}] \nonumber \\ \cdot [e^{-i\hat{H}_{L}\frac{\Delta t}{2}} e^{-i\hat{H}_{L-1}\frac{\Delta t}{2}}...e^{-i\hat{H}_{1}\frac{\Delta t}{2}} ]+O(\Delta t^{3}) , \end{eqnarray} which approximates the desired evolution to second order in $\Delta t$. If $\Delta t$ is kept short enough, this allows one to efficiently simulate the target Hamiltonian (\ref{e.H}) by concatenating these evolution periods until the correct total evolution is reached. Our target Hamiltonian can be decomposed into two non-commuting parts, $\hat{H}_z +\hat{H}_{zz}$ and $\hat{H}_{xy}=J_x\hat{S}^{1}_{x}\hat{S}^{2}_{x}+J_y\hat{S}^{1}_{y}\hat{S}^{2}_{y}$. We thus generate the overall evolution (\ref{e.U}) as \begin{equation} \hat{U}(\tau)=\hat{U}^m(\Delta t)=[\hat{U}_{z}(\frac{\Delta t}{2})\hat{U}_{xy}(\Delta t)\hat{U}_{z}(\frac{\Delta t}{2})]^m+O(\Delta t^{3}), \label{Imp_Udt} \end{equation} where $\tau=m\Delta t$ is the total duration, and $$ \hat{U}_{z}(\frac{\Delta t}{2})=e^{-i(\hat{H}_z+\hat{H}_{zz})\frac{\Delta t}{2}} $$ and $$ \hat{U}_{xy}(\Delta t)=e^{-i\hat{H}_{xy}\Delta t} $$ represent the evolutions under the partial Hamiltonians. Taking as an example the XZ model (case (c) in Sec.
IV B), it is sufficient to choose the number of evolution periods $m=2$ for $\tau = \pi/4$: the resulting approximate evolution $$ \hat{U}^{ap}(\tau)=[\hat{U}_{z}(\frac{\pi}{16}) \hat{U}_{xy}(\frac{\pi}{8})\hat{U}_{z}(\frac{\pi}{16})]^2 $$ with $$ \hat{U}_{z}(\frac{\pi}{16})=e^{-i[\sqrt{2}(\hat{S}^{1}_{z}\pm \hat{S}^{2}_{z})+2\hat{S}^{1}_{z}\hat{S}^{2}_{z}]\frac{\pi}{16}} $$ and \begin{eqnarray} \hat{U}_{xy}(\frac{\pi}{8}) & = & e^{-i\hat{S}^{1}_{x} \hat{S}^{2}_{x}\frac{\sqrt{2}\pi}{2}} \nonumber \\ & = & e^{-i(\hat{S}^{1}_{y} +\hat{S}^{2}_{y})\frac{\pi}{2}}e^{-i\hat{S}^{1}_{z} \hat{S}^{2}_{z}\frac{\sqrt{2}\pi}{2}}e^{i(\hat{S}^{1}_{y} +\hat{S}^{2}_{y})\frac{\pi}{2}} \label{Uxy} \end{eqnarray} has a fidelity of 0.9958 with the target evolution, where the fidelity is defined as $$ F(\hat{U}(\tau),\hat{U}^{ap}(\tau))=\frac{Tr(\hat{U}^{\dag}(\tau)\hat{U}^{ap}(\tau))}{4} . $$ The $\hat{U}_z(\frac{\pi}{16})$ operator can be implemented by a free evolution period under the internal Hamiltonian if we choose $\omega_1=\pm \omega_2 = \sqrt{2} \pi J_{12}$ and set the duration to $d_{1}=\frac{1}{16 \, J_{12}}$. The $\hat{U}_{xy}(\frac{\pi}{8})$ operator can be implemented by four $\frac{\pi}{2}$ pulses (corresponding to $e^{-i\hat{S}^{i}_{y}\frac{\pi}{2}} $) and a free precession period of duration $d_{2}=\frac{\sqrt{2}}{4 J_{12}}$. This evolution period implements $e^{-i\hat{S}^{1}_{z}\hat{S}^{2}_{z}\frac{\sqrt{2}\pi}{2}}$; we therefore refocus the chemical shift terms by inserting refocusing $\pi$ pulses in the middle of this period. According to Eq. (\ref{Uxy}), the second set of $\pi/2$ pulses should rotate the spins around the $-y$ axis. Here, we choose the $+y$-axis instead to compensate for the inversion of the axes system by the $\pi$-pulses.
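The accuracy of the two-step splitting can be checked numerically. The sketch below (assuming spin operators $S_\nu = \sigma_\nu/2$ and the $+$ sign in the XZ model, so the fidelity value itself depends on these conventions) computes the fidelity for $m=2$ and confirms that it improves when the time step is refined:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2
I2 = np.eye(2)

def expm_i(H, t):
    """exp(-i H t) for a Hermitian matrix H, via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T

# XZ model, case (c), with the '+' sign chosen for this sketch
Hz = np.sqrt(2) * (np.kron(sz, I2) + np.kron(I2, sz)) + 2 * np.kron(sz, sz)
Hxy = 4 * np.sqrt(2) * np.kron(sx, sx)
tau = np.pi / 4

def fidelity(m):
    """|Tr(U^dag U_ap)| / 4 for the m-step symmetrized splitting."""
    dt = tau / m
    step = expm_i(Hz, dt / 2) @ expm_i(Hxy, dt) @ expm_i(Hz, dt / 2)
    U_ap = np.linalg.matrix_power(step, m)
    U = expm_i(Hz + Hxy, tau)
    return abs(np.trace(U.conj().T @ U_ap)) / 4

# m = 2 already gives a high fidelity; doubling m improves it further,
# consistent with the O(dt^3) per-step error of the symmetrized formula
assert fidelity(2) > 0.95
assert fidelity(4) > fidelity(2)
```

Since the per-step error is $O(\Delta t^{3})$, the accumulated error over $m$ steps scales as $1/m^{2}$, which is what the second assertion probes.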
The resulting pulse sequence that generates one segment of $\hat{U}^{ap}(\tau)$ is \begin{equation} d_{1}-\left[\frac{\pi}{2}\right]_{y}^{1}\left[\frac{\pi}{2}\right]_{y}^{2}-\frac{d_{2}}{2}-\left[\pi\right]_{-y}^{1}\left[\pi\right]_{-y}^{2}-\frac{d_{2}}{2}-\left[\frac{\pi}{2}\right]_{y}^{1}\left[\frac{\pi}{2}\right]_{y}^{2}-d_{1}, \label{Unmrde} \end{equation} where $\left[\theta\right]_{\hat{\nu}}^{k}$ denotes a $\theta$ rotation of qubit $k$ around the $\hat{\nu}$ axis. \subsection{Exact decomposition} In some cases, it is possible to achieve the exact transformation by a suitable decomposition of the evolution, using, e.g., $$ e^{-i\hat{R}\hat{H}\hat{R}^{\dag}\tau}=\hat{R}e^{-i\hat{H}\tau}\hat{R}^{\dag} . $$ For the propagator (\ref{e.U}), we can use the decomposition \begin{equation} \hat{U}=\hat{R}\cdot e^{-i\hat{H}_{diag}\tau}\cdot\hat{R}^{\dagger} , \label{e.Udecom} \end{equation} where $\hat{H}_{diag}$ is the diagonal form of the Hamiltonian. The transformation \begin{equation} \hat{R}=\left( \begin{array}{llll} \cos\frac{\theta_{1}}{2} & & & -\sin\frac{\theta_{1}}{2}\\ & \cos\frac{\theta_{2}}{2} & -\sin\frac{\theta_{2}}{2}\\ & \sin\frac{\theta_{2}}{2} & \cos\frac{\theta_{2}}{2}\\ \sin\frac{\theta_{1}}{2} & & & \cos\frac{\theta_{1}}{2} \end{array} \right) , \end{equation} which diagonalizes the Hamiltonian (\ref{e.H}), can be implemented experimentally (up to an irrelevant overall phase factor) by the pulse sequence \begin{eqnarray} \left[\frac{\pi}{2}\right]_{-y}^{1}\left[\frac{\pi}{2}\right]_{\varphi}^{2}- \frac{\tau_1}{2}-\left[\pi\right]_{y}^{1}\left[\pi\right]_{-x}^{2}- \frac{\tau_1}{2}-\left[\frac{\pi}{2}\right]_{-x}^{1}\left[\frac{\pi}{2}\right]_{y}^{2} \nonumber\\ -\frac{\tau_2}{2}-\left[\pi\right]_{x}^{1}\left[\pi\right]_{-y}^{2}- \frac{\tau_2}{2}-\left[\frac{\pi}{2}\right]_{-x}^{1}\left[\frac{\pi}{2}\right]_{-y}^{1} \left[\frac{\pi}{2}\right]_{y}^{2} \left[\frac{\pi}{2}\right]_{\varphi}^{2} \nonumber \\ \label{e.pulseR} \end{eqnarray} with 
$\tau_1=\frac{2|\theta_1-\theta_2|}{\pi J_{12}}$, $\tau_2=\frac{2|\theta_1+\theta_2|}{\pi J_{12}}$ and $\varphi = x$ or $-x$ for $\theta_1>\theta_2$ or $\theta_1<\theta_2$, and $\hat{R}^{\dagger}$ by the Hermitian-conjugate (time-reversed) sequence. The evolution under the diagonal Hamiltonian \begin{equation} \begin{array}{l} \hat{U}_{diag}=e^{-i\hat{H}_{diag}\tau}=\left(\begin{array}{llll} e^{-i\lambda_{1}\tau}\\ & e^{-i\lambda_{2}\tau}\\ & & e^{-i\lambda_{3}\tau}\\ & & & e^{-i\lambda_{4}\tau}\end{array}\right)\\ \end{array} \end{equation} is realized by the pulse sequence \begin{equation} \frac{\tau_{3}}{2}-\left[\pi\right]_{x}^{1}\left[\pi\right]_{x}^{2} -\frac{\tau_{3}}{2}-\left[\frac{\pi}{2}\right]_{-x}^{1}\left[\frac{\pi}{2} \right]_{-x}^{2}\left[\beta_{1}\right]_{y}^{1}\left[\beta_{2} \right]_{y}^{2}\left[\frac{\pi}{2}\right]_{-x}^{1} \left[\frac{\pi}{2}\right]_{-x}^{2} , \label{e.pulseUd} \end{equation} where $\tau_{3}=\frac{\lambda_{1}-\lambda_{2}-\lambda_{3}+\lambda_{4}}{2\pi J_{12}}\tau$, $\beta_{1}=\frac{\lambda_{1}+\lambda_{2}-\lambda_{3}-\lambda_{4}}{2}\tau$ and $\beta_{2}=\frac{\lambda_{1}-\lambda_{2}+\lambda_{3}-\lambda_{4}}{2}\tau$. The first part of this sequence implements an evolution under the J-coupling alone; the second part implements a composite $z$-rotation of the two qubits by angles $\beta_1$ and $\beta_2$. An alternative realization of $\hat{U}_{diag}$ is achieved by letting the system evolve under a constant Hamiltonian with $$ \omega_1 = \frac{\lambda_{1}+\lambda_{2}-\lambda_{3} -\lambda_{4}}{2}\cdot\frac{\tau}{\tau_{3}} $$ and $$ \omega_2 = \frac{\lambda_{1}-\lambda_{2}+\lambda_{3} -\lambda_{4}}{2}\cdot\frac{\tau}{\tau_{3}} $$ for a period $\tau_{3}$. For the example (c) in Sec. IV B, we choose the parameters \begin{eqnarray} \tau_1 & = & \tau_3 =\frac{1}{4 J_{12}},\tau_2 = \frac{3}{4 J_{12}}, \nonumber\\ \beta_1& = & \frac{\pi (2+\sqrt{2})}{4}, \beta_2 = \frac{\pi (2-\sqrt{2})}{4}.
\end{eqnarray} When $B_1=B_2$, $\varphi = x$ in the sequence (\ref{e.pulseR}); when $B_1=-B_2$, $\varphi = -x$. \section{EXPERIMENTAL IMPLEMENTATION} Experiments were performed at room temperature on a Bruker Avance II 500 MHz spectrometer equipped with a Triple-Broadband-Observe (TBO) probe at the frequencies 500.23 MHz for $^{1}$H and 125.13 MHz for $^{13}$C. For the qubit system, we chose $^{13}$C-labelled chloroform diluted in acetone-d$_{6}$. The ``unknown'' state $\hat{\rho}$ was prepared on the spin of the proton nuclei ($^{1}$H), which served as the quantum system \textbf{\emph{S}} (qubit 1), and the spin of the $^{13}$C nuclei was taken as the assistant \textbf{\emph{A}} (qubit 2). The spin-spin coupling constant is $J_{12}=214.95$~Hz. The relaxation times were $T_{1}=16.5$ s and $T_{2}=6.9$ s for the proton, and $T_{1}=21.2$ s and $T_{2}=0.35$ s for the carbon nuclei. \subsection{Experimental procedure} There are three steps to implement the measurement scheme described above: (\emph{i}) preparation of the initial state, (\emph{ii}) quantum simulation, and (\emph{iii}) measurement. Any qubit state $\hat{\rho}=\frac{1}{2}\mathbf{1}+\vec{s}\cdot\vec{\hat{S}}$ can be parameterized as a vector in the Bloch sphere: \begin{eqnarray} \vec{s}(r,\theta,\phi)&=&(s_x,s_y,s_z)^T \nonumber \\ &=&\left(r\sin\theta\cos\phi,r\sin\theta\sin\phi,r\cos\theta\right)^{T} , \label{state_g} \end{eqnarray} where the amplitude is $r=1$ for a pure state and $0\leq r<1$ for mixed states, and $\theta$ and $\phi$ are, respectively, the polar and azimuthal angles. The combined system was initialized in the state $\hat{\varrho}_{0}=\hat{\rho}^{\left(S\right)}\otimes \hat{\xi}^{\left(A\right)}$. In our demonstration experiment we chose a completely disordered state $\hat{\xi}=\frac{1}{2}\mathbf{1}^{\left(A\right)}$ that is experimentally easy to prepare.
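The parametrization (\ref{state_g}) and the inverse relation $\vec{s}=2\,Tr(\vec{\hat{S}}\hat{\rho})$ can be summarized in a short, self-contained sketch (spin operators $S_\nu=\sigma_\nu/2$ assumed):

```python
import numpy as np

def rho_from_bloch(r, theta, phi):
    """Density matrix rho = 1/2 + s.S for the Bloch vector of Eq. (state_g)."""
    s = np.array([r * np.sin(theta) * np.cos(phi),
                  r * np.sin(theta) * np.sin(phi),
                  r * np.cos(theta)])
    sx = np.array([[0, 1], [1, 0]]) / 2
    sy = np.array([[0, -1j], [1j, 0]]) / 2
    sz = np.array([[1, 0], [0, -1]]) / 2
    return np.eye(2) / 2 + s[0] * sx + s[1] * sy + s[2] * sz

def bloch_from_rho(rho):
    """Recover s = 2 Tr(S rho) component by component."""
    return np.real([rho[0, 1] + rho[1, 0],
                    1j * (rho[0, 1] - rho[1, 0]),
                    rho[0, 0] - rho[1, 1]])

r, theta, phi = 0.5, np.pi / 4, np.pi / 6
rho = rho_from_bloch(r, theta, phi)
s_expected = np.array([r * np.sin(theta) * np.cos(phi),
                       r * np.sin(theta) * np.sin(phi),
                       r * np.cos(theta)])
# The map is exactly invertible, and the purity is Tr(rho^2) = (1 + r^2)/2
assert np.allclose(bloch_from_rho(rho), s_expected)
assert np.isclose(np.trace(rho @ rho).real, (1 + r ** 2) / 2)
```

The purity relation $Tr(\hat{\rho}^2)=(1+r^2)/2$ makes explicit that $r=1$ corresponds to a pure state and $r<1$ to a mixed one.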
Such an initial state $\hat{\varrho}_{0}=\hat{\rho}^{\left(S\right)}\otimes \frac{1}{2}\mathbf{1}^{\left(A\right)}$ was prepared by the NMR pulse sequence \begin{equation} \left[\arccos\left(r\right)\right]_{y}^{1} \left[\frac{\pi}{2}\right]_{y}^{2}-G_{z}-\left[\theta\right]_{\phi + \pi/2}^{1} . \end{equation} The first two RF pulses define the amount of spin polarization on the two qubits. The field gradient pulse $G_{z}$ dephases transverse magnetization to eliminate off-diagonal terms in the density operator. The last RF pulse turns the remaining (longitudinal) magnetization of qubit 1 into the desired orientation. The result of the preparation was checked using the standard method of state determination based on three noncommuting measurements of the system \textbf{\emph{S}}, i.e., of $\sigma_x^{(S)}$, $\sigma_y^{(S)}$ and $\sigma_z^{(S)}$. The experimental results are plotted in Fig. \ref{results} c) and d) for $r =1$ and $\frac{1}{2}$. The experimental average fidelity is above 0.99. For the coupling Hamiltonian that transfers the information from $S$ to $A$, we chose $\hat{H}^{opt}$ of example (c) (section IV B). We used two different methods to simulate this propagator (\ref{e.U}): the ``short period expansion'' (see section VI A), using the sequence (\ref{Unmrde}) with $m=2$, and the exact decomposition (\ref{e.Udecom}) (see section VI B) with the NMR pulse sequences (\ref{e.pulseR}) and (\ref{e.pulseUd}). After the coupling evolution, we measured the \emph{x} components of the two spins to obtain the joint probabilities $P_{kq}$. For this purpose, we rotated the spins to the $z$-axis, using a $\left[\frac{\pi}{2}\right]_{y}^{1,2}$ pulse, and destroyed off-diagonal elements by a magnetic field gradient pulse $G_z$. The populations could then be measured by applying another rf pulse to each of the spins and measuring their free induction decays (FIDs).
If the two spins are different isotopes, as in our case, their FIDs usually have to be measured in separate experiments. The resulting pulse sequence for the readout is thus \begin{equation} \left[\frac{\pi}{2}\right]_{-y}^{1,2}-G_{z}-\left[\frac{\pi}{2}\right]_{y}^{i}-FID_{i} , \end{equation} where $i=1$ or $2$ denotes qubit $i$. The measured FIDs, together with the normalization condition ($\sum_{kq}P_{kq}=1$), allowed us to reconstruct the four diagonal elements (populations) of the density matrix, which correspond to the four joint probabilities $P_{kq}$. The information about the state $\hat{\rho}$ was then obtained by the inverse mapping $\mathcal{M}^{-1}$. \subsection{Experimental Results} \begin{figure} \caption{Experimental NMR spectra of carbon and proton for different initial conditions. $\phi=0$ and (a) $\theta=0$, (b) $\theta=\pi/2$ and (c) $\theta=\pi$. The y-axis denotes the signal amplitude in arbitrary units.} \label{spectra} \end{figure} Figure \ref{spectra} shows the experimentally observed NMR signals after Fourier transformation of the corresponding FIDs for the proton and carbon spins for the following initial states: (a) $\theta=0$ (i.e., $\hat{\rho}=\frac{1}{2}\mathbf{1}+\hat{S}_{z}$), (b) $\phi=0, \theta=\pi/2$ (i.e., $\hat{\rho}=\frac{1}{2}\mathbf{1}+\hat{S}_{x}$), (c) $\theta=\pi$ (i.e., $\hat{\rho}=\frac{1}{2}\mathbf{1}-\hat{S}_{z}$). The amplitudes of the different resonance lines correspond directly to population differences: \begin{equation} \begin{array}{c} S_{NMR}(\textrm{Proton})\sim P_{1\mu}-P_{2\mu}\\ S_{NMR}(\textrm{Carbon})\sim P_{\mu1}-P_{\mu2} , \end{array} \label{P_eq} \end{equation} where $\mu=1$ for the resonance line with positive frequency and $\mu=2$ for the negative frequency line. From these populations, we determine the initial condition by inverting Eq. (\ref{mapM}). \begin{figure} \caption{(Color Online) Experimental quantum state tomography for the general initial state $\vec{s}(r,\theta,\phi)$.} \label{results} \end{figure} Fig.
\ref{results} summarizes these results for a series of similar experiments, where we chose initial conditions $\vec{s}\left(r,\theta,\phi\right)$, varying $\theta$ from 0 to $\pi$ in increments of $\pi/8$, and $\phi$ from 0 to $2\pi$ with an increment of $\pi/12$. In Fig. \ref{results}a, we show the measured components $s_{x}^{\left(\exp\right)}, s_{y}^{\left(\exp\right)}, s_{z}^{\left(\exp\right)}$ for pure states ($r=1$), while (b) shows the corresponding results for mixed states with $r=\frac{1}{2}$. The experiments cover a wide range of points on and within the Bloch sphere. The experimental results clearly show the expected cosine and sine modulations (\ref{state_g}), indicating that the measurement network is effective for all these input states. The average fidelity over all $N=9\times13$ measured states is $$ F_{av}=\frac{1}{N}\sum_{1}^{N} \frac{Tr(\hat{\rho}_{in}\hat{\rho}_{exp})} {\sqrt{Tr(\hat{\rho}_{in}^{2})Tr(\hat{\rho}_{exp}^{2})}} \approx0.99 $$ for both cases, $r=1$ and $r=\frac{1}{2}$. The experimental data shown in Fig. \ref{results} were obtained with the ``short period expansion'' technique of section VI A, i.e., the propagator $\hat{U}(\tau)$ was approximately realized according to Eq. (\ref{Imp_Udt}) by repeating the sequence (\ref{Unmrde}) twice. We also repeated the experiment with the ``exact decomposition'' technique. Here, the propagator $\hat{U}(\tau)$ was realized by the exact decomposition of Eq. (\ref{e.Udecom}); the corresponding pulse sequence was obtained by combining the sequences (\ref{e.pulseR}) and (\ref{e.pulseUd}). The results were similar to those represented in Fig. \ref{results}, but the fidelities were slightly lower. This difference is probably due to the larger number of pulses in this experiment. \subsection{Precision of the Measurement} An alternative measure of the precision of the measurement is the distance $D$ between the experimentally determined state $\vec{s}_{exp}$ and the ``true'' input state $\vec{s}$.
In terms of the parametrization (\ref{state_g}), the trace distance between the two density operators is \begin{equation} D(\vec{s}_{exp},\vec{s})=\frac{\left\vert \vec{s}_{exp}-\vec{s}\right\vert }{2}=\frac{1}{2}\sqrt{\Delta s_{x}^{2}+\Delta s_{y}^{2}+\Delta s_{z}^{2}} \end{equation} where $\Delta s_{\nu}=s_{\nu}^{(exp)}-s_{\nu}$ $(\nu=x,y,z)$. Writing $\Delta P$ for the experimental errors and using the definition (\ref{mapM}) of the transfer matrix, we find for the distance \begin{equation} D(\vec{s}_{exp},\vec{s})=E|\Delta P|, \end{equation} where \begin{equation} E=\frac{1}{2}\sqrt{E_{x}^{2}+E_{y}^{2}+E_{z}^{2}} \label{C} \end{equation} and \begin{equation} \begin{array}{l} E_{x}=\frac{\Delta s_{x}}{\Delta P}=\frac{1}{det(\tilde{\mathcal{M}})}\sum_{k=1}^{4} A_{k2} \\ E_{y}=\frac{\Delta s_{y}}{\Delta P}=\frac{1}{det(\tilde{\mathcal{M}})}\sum_{k=1}^{4} A_{k3} \\ E_{z}=\frac{\Delta s_{z}}{\Delta P}=\frac{1}{det(\tilde{\mathcal{M}})}\sum_{k=1}^{4} A_{k4} \end{array}. \label{eQ_E} \end{equation} The $A_{kj}$ are the cofactors of the elements $\tilde{\mathcal{M}}_{kj}$ of the transformed transfer matrix $$ \tilde{\mathcal{M}} = \frac{1}{2} \mathcal{M}\left( \begin{array}{cccc} 1 & 0 & 0 & 1 \\ 0 & 1 & -i & 0 \\ 0 & 1 & i & 0 \\ 1 & 0 & 0 & -1 \\ \end{array} \right) . $$ The determinant of this matrix is $det(\tilde{\mathcal{M}})=-\frac{1}{4} i\Delta$. Therefore, the error propagation coefficients $E_\alpha$ depend only on the mapping $\mathcal{M}$. The smaller they are, the higher the precision of the resulting measurement. As Eq. (\ref{eQ_E}) shows, the error propagation scales inversely with the determinant $\Delta$ of the transfer matrix $\mathcal{M}$. We illustrate this dependence in Fig. \ref{AD}a), where we plot the two quantities as a function of the coupling evolution time $\tau$. The minima of $E$ occur near the maxima of $|\Delta|$, and when $|\Delta|=0$, $E$ tends to infinity. In this range, it is impossible to determine the state $\hat{\rho}$ by such a measurement.
A closer look shows that the minima of $E$ do not occur exactly at the maxima of $|\Delta|$. The difference arises from the numerators in (\ref{eQ_E}). In our experiment, the experimental uncertainties are $\Delta P \approx 5\%$. For the chosen experimental parameters, this results in an average distance $D_{av}(\vec{s}_{exp},\vec{s})=0.04$ for $r=1$ and $0.03$ for $r=\frac{1}{2}$. The distance measure $D$ and the fidelity measure $F$ are related by $1-F\leq D\leq\sqrt{1-F^{2}}$ \cite{NielsenBook:2000aa}. \begin{figure} \caption{(Color Online) (a) Error coefficient $E$ (solid line), magnitude of the determinant $|\Delta|$ (dashed line) and their product (dotted line) vs. evolution time $\tau$ under the Hamiltonian $\hat{H}$. (b) Concurrence $C$ during the coupling evolution.} \label{AD} \end{figure} \subsection{Entanglement} The evolution that transfers information from the system to the assistant can entangle the two qubits with each other. In Fig. \ref{AD}, we quantify the entanglement generated and relate it to the precision of the measurement. Fig. \ref{AD} b) shows the concurrence $C$ during the coupling evolution, calculated as $$ C(t)=\max\{\chi_{1}-\chi_{2}-\chi_{3}-\chi_{4},0\}, $$ where $\chi_{i}$ $(i=1,2,3,4)$ are the square roots of the eigenvalues of $$ 16 \, \hat{\varrho}_{t}(S_{y}^{1}S_{y}^{2})\hat{\varrho}_{t}^{*}(S_{y}^{1}S_{y}^{2}) $$ in decreasing order, and $$ \hat{\varrho}_{t}=\hat{U}(t)(\hat{\rho}\otimes\frac{1}{2}\mathbf{1})\hat{U}^{\dagger}(t) $$ is the instantaneous density operator. If the initial state is in the $xy$-plane, the entanglement between the system and assistant is maximized at roughly the same time at which the information transfer is optimized (as quantified by $|\Delta|$). However, for initial conditions oriented along the $z$-axis, the entanglement generated by the specific Hamiltonian shows a relatively complicated time dependence and little correlation with the precision of the measurement.
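The quantity used here is Wootters' concurrence. Since $\sigma_y = 2S_y$, the factor 16 above is absorbed by working with Pauli matrices, as in this self-contained sketch:

```python
import numpy as np

def concurrence(rho):
    """Wootters concurrence C = max(chi1 - chi2 - chi3 - chi4, 0), where the
    chi_i are the decreasing square roots of the eigenvalues of
    rho (sigma_y x sigma_y) rho* (sigma_y x sigma_y).
    With sigma_y = 2 S_y, this is the factor-16 expression in the text."""
    sigma_y = np.array([[0, -1j], [1j, 0]])
    Y = np.kron(sigma_y, sigma_y)
    w = np.linalg.eigvals(rho @ Y @ rho.conj() @ Y)
    chi = np.sort(np.sqrt(np.abs(w)))[::-1]
    return max(chi[0] - chi[1] - chi[2] - chi[3], 0.0)

# Product states carry no entanglement; Bell states are maximally entangled
up, dn = np.array([1, 0]), np.array([0, 1])
prod = np.kron(up, up)
bell = (np.kron(up, dn) - np.kron(dn, up)) / np.sqrt(2)
assert np.isclose(concurrence(np.outer(prod, prod.conj())), 0.0)
assert np.isclose(concurrence(np.outer(bell, bell.conj())), 1.0)
```

For mixed two-qubit states the same formula applies unchanged, which is what makes it convenient for tracking $C(t)$ along the coupling evolution.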
For evolution times close to $\tau \approx 5 \pi /4$, e.g., the entanglement vanishes, while the measurement error is minimized. The dash-dotted curve in Fig. \ref{AD}b) shows the entanglement that is generated for a partially mixed input state with a general orientation ($r = 0.8, \theta = \pi/4, \phi = \pi/6$). In this case, the concurrence remains below 0.2 and reaches zero even at the times where the measurement precision is optimized. If the amplitude $r$ is reduced further, the entanglement vanishes, $C(\hat{\varrho}_{t})\equiv0$, but the precision of the measurement is not affected. We conclude that entanglement between system and assistant is not an essential criterion for the success of this measurement scheme. \section{CONCLUSION} We have experimentally demonstrated how the complete state of a quantum system can be obtained from the results of repeated measurements with a single, factorized observable $\hat{\Omega}$. The procedure, which involves a controlled interaction between the system under test and a second quantum system, was proposed by Allahverdyan et al. \cite{Allahverdyan:2004aa}. In our experiment, we used a Heisenberg coupling to transfer information from the system to the assistant. Interactions of this type are found in many physical systems: apart from nuclear spins (as in this work), they also occur in quantum dots \cite{Loss:1998aa,Imamogu:1999aa}, donor atoms in silicon \cite{Kane:1998aa, Vrijen:2000aa}, quantum Hall systems \cite{Mozyrsky:2001aa} and electrons on helium \cite{Dykman:2000aa}. The precision of measurements of this type depends strongly on the details of the interaction between system and assistant, on the type of Hamiltonian as well as on the duration of the interaction.
This can be understood by considering the transfer of information from the state of the system to the measurement results from the single observable: if we describe this transfer of information from $n$ elements of the density operator of the input state by a matrix $\mathcal{M}$, the rank of this matrix must be $n$, i.e., it must be invertible. In practice, it is necessary to choose a transfer matrix that is far from the singular case in order to maximize the precision with which the input state can be calculated from the measurement results. This initial work has demonstrated the basic possibility of implementing such measurements on the simplest possible quantum system (a single spin 1/2). It is, of course, possible to extend the scheme to systems of arbitrary size. Work in this direction is currently under way. \begin{center}\textbf{ACKNOWLEDGMENTS} \par\end{center} We gratefully acknowledge helpful discussions with Dr. Bo Chong and Dr. Jingfu Zhang, and financial support from the DFG through Su 192/19-1. J. Du thanks the NSFC of China, CAS and the European Commission for support under Contract No. 007065 (Marie Curie). \end{document}
\begin{document} \begin{frontmatter} \title{Wright--Fisher diffusion with negative mutation~rates} \runtitle{Negative Wright--Fisher} \begin{aug} \author[A]{\fnms{Soumik} \snm{Pal}\corref{}\ead[label=e1]{[email protected]}} \runauthor{S. Pal} \affiliation{University of Washington} \address[A]{Department of Mathematics\\ University of Washington\\ Seattle, Washington 98115\\ USA\\ \printead{e1}} \end{aug} \received{\smonth{2} \syear{2010}} \revised{\smonth{8} \syear{2011}} \begin{abstract} We study a family of $n$-dimensional diffusions, taking values in the unit simplex of vectors with nonnegative coordinates that add up to one. These processes satisfy stochastic differential equations which are similar to the ones for the classical Wright--Fisher diffusions, except that the ``mutation rates'' are now nonpositive. This model, suggested by Aldous, appears in the study of a conjectured diffusion limit for a Markov chain on Cladograms. The striking feature of these models is that the boundary is not reflecting, and we kill the process once it hits the boundary. We derive the explicit exit distribution from the simplex and probabilistic bounds on the exit time. We also prove that these processes can be viewed as a ``stochastic time-reversal'' of a Wright--Fisher process of increasing dimensions and conditioned at a random time. A key idea in our proofs is a skew-product construction using certain one-dimensional diffusions called Bessel-square processes of negative dimensions, which have been recently introduced by G\"oing-Jaeschke and Yor. \end{abstract} \begin{keyword}[class=AMS] \kwd{60G99} \kwd{60J05} \kwd{60J60} \kwd{60J80}. \end{keyword} \begin{keyword} \kwd{Wright--Fisher diffusion} \kwd{Markov chain on cladograms} \kwd{continuum random tree} \kwd{Bessel processes of negative dimension}. 
\end{keyword} \end{frontmatter} \section{Introduction} An $n$-leaf Cladogram is an unrooted tree with $n\ge4$ labeled leaves (vertices with degree one) and $(n-2)$ other unlabeled vertices (internal branchpoints) of degree three (see Figure~\ref{fig_tree}). The number of edges in such a tree is exactly $2n-3$. Sometimes they are also referred to as phylogenetic trees. Aldous, in~\cite{A00}, proposes the following model of a reversible Markov chain on the space of all $n$-leaf Cladograms, which consists of removing a random leaf (and its incident edge) and reattaching it to one of the remaining random edges. For a precise description we first define two operations on Cladograms. More details, with figures, can be found in~\cite{A00}. \begin{longlist}[(ii)] \item[(i)] To \textit{remove a leaf} $i$. The leaf $i$ is attached by an edge $e_1$ to a branchpoint $b$ where two other edges $e_2$ and $e_3$ are incident. Delete edge $e_1$ and branchpoint $b$, and then merge the two remaining edges $e_2$ and $e_3$ into a single edge $e$. The resulting tree has $2n-5$ edges. \item[(ii)] To \textit{add a leaf} to an edge $f$. Create a branchpoint $b'$ which splits the edge $f$ into two edges, $f_2, f_3$, and attach the leaf $i$ to branchpoint $b'$ via a new edge, $f_1$. This restores the number of leaves and edges to the tree. \end{longlist} Let $\mathbf{T}_n$ denote the finite collection of all $n$-leaf Cladograms. Write $\mathbf{t}' \sim\mathbf{t}$ if $\mathbf{t}'\neq\mathbf{t}$ and $\mathbf{t}'$ can be obtained from $\mathbf{t}$ by following the two operations above for some choice of $i$ and $f$. Thus a $\mathbf{T}_n$ valued chain can be described by saying: remove leaf $i$ uniformly at random, and then pick edge $f$ at random and reattach $i$ to $f$. If we assume every edge to be of unit length, then it also involves resizing the edge length after every operation. 
In particular the transition matrix of this Markov chain is \[ P(\mathbf{t}, \mathbf{t}')= \cases{\displaystyle \frac{1}{n(2n-5)},&\quad if $\mathbf{t}'\sim\mathbf{t}$,\vspace*{2pt}\cr\displaystyle \frac{n}{n(2n-5)},&\quad if $\mathbf{t}'=\mathbf{t}$. } \] This leads to a symmetric, aperiodic, and irreducible finite state space Markov chain. Schweinsberg~\cite{S} proved that the relaxation time for this chain is $O(n^2)$, improving a previous result in~\cite{A00}. On his webpage~\cite{AOP} Aldous asks the following question: what is an appropriate diffusion limit of this Markov chain? The invariant distribution for the Markov chain on $n$-leaf Cladograms is clearly the Uniform distribution. It is known (see Aldous~\cite{A93}) that the sequence of Uniform distributions on $n$-leaf Cladograms converge weakly to the law of the (Brownian) Continuum Random Tree (CRT). Hence, it is natural to look for an appropriate Markov process on the support of the CRT, which can be thought of as a limit of the sequence of Markov chains described above. At this point it is important to understand that the support of the CRT consists of compact real trees with a measure describing the distribution of leaves. These trees are called continuum trees. For a formal definition of these concepts, we refer the reader to the seminal work by Aldous in~\cite{A93}. However, for an intuitive visualization, one should think of a typical continuum tree as a compact metric space on which branch points are dense, and all edges are infinitesimally small. This implies that the Markov process that mimics the operation of removing and inserting a new leaf on a continuum tree should not jump; in other words, we can call it a diffusion. A detailed description of this diffusion on continuum trees is forthcoming in Pal~\cite{palCT}. In this article we consider several important features of this limiting diffusion that are of interest by themselves and provide bedrock for the followup construction. 
\begin{figure} \caption{A 7-leaf Cladogram.} \label{fig_tree} \label{fig1} \end{figure} Consider the branchpoint $b$ in the $7$-leaf Cladogram $\mathbf{t}$ in Figure~\ref{fig1}. It divides the collection of leaves naturally into three sets. Let $X(\mathbf{t})=(X_1, X_2,\break X_3)(\mathbf{t})$ denote the vector of proportion of leaves in each set. The corresponding number of edges in these sets are $(2nX_1-1, 2nX_2-1, 2nX_3-1)$. For example, at time zero in our given tree, going clockwise from the right we have $X(0)=(3/7, 2/7, 2/7)$. Let $\mathbb{S}_n$ denote the unit simplex \begin{equation}\label{whatisusimp} \mathbb{S}_n = \Biggl\{ x\in\mathbb{R}^n\dvtx x_i \ge0 \mbox{ for all $i$ and } \sum_{i=1}^n x_i=1 \Biggr\}. \end{equation} Some simple algebra will reveal that for any point $x=(x_1,x_2,x_3)$ in $\mathbb{S}_3$, given $X(\mathbf{t})=x$, the difference $X_1(\mathbf{t}')- X_1(\mathbf{t})$ can only take values in $\{ -1/n,0,1/n\}$ with corresponding probabilities \[ q_{x_1}= x_1 \frac{2n(1-x_1)-2}{2n-5}, \qquad1-p_{x_1}-q_{x_1}, \qquad p_{x_1}=(1-x_1)\frac{2n x_1-1}{2n-5}. 
\] Thus \begin{eqnarray}\label{cladopara} \quad E \bigl( X_1(\mathbf{t}') - X_1(\mathbf{t}) \mid X(\mathbf{t})=x \bigr)&=& \frac {1}{n}\frac{ 2x_1 - (1-x_1)}{2n-5}\approx-\frac{1}{n^2}\frac {1}{2} ( 1-3x_1 ),\nonumber\\ \quad E \bigl( \bigl( X_1(\mathbf{t}') - X_1(\mathbf{t})\bigr)^2 \mid X(\mathbf{t})=x \bigr)&=& \frac{1}{n^2}\frac{4nx_1(1-x_1)-x_1 - 1 }{2n-5}\\ \quad &\approx&\frac{1}{n^2} 2x_1(1-x_1).\nonumber \end{eqnarray} If we take scaled limits, as $n$ goes to infinity, of the first two conditional moments (the mixed moments can be similarly verified), it is intuitive (and follows by standard tools) that this Markov chain (run at speed $n^2/2$) converges to a diffusion with generator \begin{equation}\label{cladogen} \frac{1}{2}\sum_{i,j=1}^3 x_i ( 1\{i=j\} - x_j ) \frac {\partial^2}{\partial x_i\, \partial x_j} - \frac{1}{2}\sum_{i=1}^3 \frac{1}{2} (1 - 3 x_i )\frac{\partial}{\partial x_i}. \end{equation} The generator written as above is similar to the generator for the well-known diffusion limit of the Wright--Fisher (WF) Markov chain models in population genetics. The WF model is one of the most popular models in population genetics. It is a multidimensional Markov chain which keeps track of the vector of proportions of certain genetic traits in a population of nonoverlapping generations. A~good source for an introduction to these models is Chapter 1 in the book by Durrett \cite{durrettgenetics}. For computational purposes one often takes recourse to a diffusion approximation, which, in its standard form, leads to a family of diffusions parametrized by $n$ ``mutation rates.'' The state space of the diffusion is given by $\mathbb{S}_n$ and is parametrized by a vector $(\delta_1, \ldots, \delta_n)$ of nonnegative entries.
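The algebra behind \eqref{cladopara} can be checked exactly. The sketch below (illustrative values of $n$ and $x_1$; exact rational arithmetic, so no approximation error) computes the one-step moments from the jump probabilities $p_{x_1}, q_{x_1}$ and compares them with the closed forms stated in the display.

```python
from fractions import Fraction

def one_step_moments(n, x):
    """Exact conditional moments of X_1' - X_1 for the leaf-proportion chain.

    x must be a Fraction for the comparison to be exact.
    """
    p = (1 - x) * (2 * n * x - 1) / (2 * n - 5)   # P(X_1 increases by 1/n)
    q = x * (2 * n * (1 - x) - 2) / (2 * n - 5)   # P(X_1 decreases by 1/n)
    m1 = (p - q) / n                              # E[X_1' - X_1 | x]
    m2 = (p + q) / n**2                           # E[(X_1' - X_1)^2 | x]
    return m1, m2

def claimed(n, x):
    """Closed forms from (cladopara): (3x-1)/(n(2n-5)) and the second moment."""
    m1 = Fraction(1, n) * (3 * x - 1) / (2 * n - 5)
    m2 = Fraction(1, n**2) * (4 * n * x * (1 - x) - x - 1) / (2 * n - 5)
    return m1, m2
```

For instance, with $n=7$ and $x_1=3/7$ (the tree of Figure~\ref{fig1}) both computations agree exactly.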
A weak solution of the WF diffusion with parameters $\delta=(\delta_1, \ldots, \delta_n)$ solves the following stochastic differential equation for $i=1,2,\ldots,n$: \begin{equation}\label{whatisjacobi} dJ_i(t) = \frac{1}{2} \bigl(\delta_i - \delta_0 J_i(t) \bigr)\,dt + \sum_{j=1}^n \tilde{\sigma}_{i,j}(J)\,d\beta_j(t), \qquad\delta_0 = \sum _{i=1}^n \delta_i. \end{equation} Here $\beta=(\beta_1, \ldots, \beta_n)$ is a standard multidimensional Brownian motion, and the diffusion matrix $\tilde{\sigma}$ is given by \begin{equation}\label{whatistsigma} \tilde{\sigma}_{i,j}(x)= \sqrt{x_i} \bigl(1\{i=j\} - \sqrt{x_ix_j} \bigr), \qquad1\le i,j\le n. \end{equation} We define the Wright--Fisher diffusion with \textit{negative mutation rates} to be a family of $n$-dimensional diffusions, parametrized by $n$ nonnegative parameters $\delta=(\delta_1, \ldots, \delta_n)$, which is a weak solution of the following differential equation: \begin{equation}\label{whatisnwf} d\mu_i(t) = -\frac{1}{2} \bigl(\delta_i - \delta_0 \mu_i(t) \bigr)\,dt + \sum_{j=1}^n \tilde{\sigma}_{i,j}(\mu)\,d\beta_j(t), \qquad\delta _0 = \sum_{i=1}^n \delta_i. \end{equation} The initial condition $\mu(0)$ lies in the interior of $\mathbb{S}_n$, and the drift now pushes the process toward the boundary of the simplex. We will show later that the process is sure to hit the boundary of the simplex, at which point we stop it. In the next section we will explicitly construct a weak solution of \eqref{whatisnwf}. The uniqueness in law of such a solution, until it hits the boundary, follows since the drift and the diffusion coefficients are smooth (hence, Lipschitz) inside the open unit simplex. The law of this process will then be denoted uniquely by $\operatorname{NWF}(\delta_1, \ldots, \delta_n)$. Equivalently this process can be identified by its Markov generator.
Expanding $\tilde{\sigma}\tilde{\sigma}'$ and using the fact that $\sum_{i=1}^n x_i=1$, we get \begin{equation}\label{genwf} \mathcal{A}_n = \frac{1}{2}\sum_{i,j=1}^n x_i ( 1\{i=j\} - x_j ) \frac{\partial^2}{\partial x_i\, \partial x_j} - \sum _{i=1}^n \frac{1}{2} (\delta_i - \delta_0 x_i )\frac {\partial}{\partial x_i}, \end{equation} which identifies \eqref{cladogen} as the generator for $\operatorname{NWF}(1/2,1/2,1/2)$. In this text we focus on properties of NWF models as a family of diffusions on the unit simplex and explore some of their properties that are important in the context of the Markov chain model on Cladograms. \textit{Part} (1). We show that, just like Wright--Fisher diffusions (see~\cite{vsm}), the NWF processes can be recovered from a far simpler class of models, the Bessel-square (BESQ) processes with negative dimensions. A comprehensive treatment of BESQ processes can be found in the book by Revuz and Yor~\cite{RY}. This family of one-dimensional diffusions is indexed by a single real parameter $\theta$ (called the dimension); they are solutions of the stochastic differential equations \begin{equation}\label{besqintro} Z(t)= x + 2 \int_0^t \sqrt{\abs{Z(s)}}\,d\beta(s) + \theta t, \qquad x \ge0, t\ge0, \end{equation} where $\beta$ is a one-dimensional standard Brownian motion. We denote the law of this process by $Q^\theta_x$. It can be shown that the above SDE admits a unique strong solution until it hits the origin. The classical model only allows the parameter $\theta$ to be nonnegative. However, an extension, introduced by G\"oing-Jaeschke and Yor \cite {yornbesq}, allows the parameter $\theta$ to be negative. It is important to note that $Q_x^\theta$ is the diffusion limit of a Galton--Watson branching process with immigration at rate $\abs{\theta}$ (for $\theta\ge0$) or emigration at rate $\abs{\theta}$ (for $\theta<0$).
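For intuition, \eqref{besqintro} is easy to simulate. The sketch below (illustrative step size, horizon and parameters, not taken from the text) runs an Euler--Maruyama scheme for a BESQ path of negative dimension and absorbs it at the origin, mirroring the convention adopted for $Q_x^{-\theta}$.

```python
import numpy as np

def besq_path(x, theta, T=5.0, dt=1e-3, rng=None):
    """Euler-Maruyama path of dZ = theta dt + 2 sqrt(|Z|) dB, absorbed at 0.

    theta < 0 corresponds to the negative-dimension processes of
    Going-Jaeschke and Yor; T, dt are illustrative discretization choices.
    """
    rng = rng or np.random.default_rng(0)
    n = int(T / dt)
    z = np.empty(n + 1)
    z[0] = x
    for k in range(n):
        if z[k] <= 0.0:                 # absorb at the origin
            z[k + 1:] = 0.0
            break
        dB = rng.normal(0.0, np.sqrt(dt))
        z[k + 1] = z[k] + 2.0 * np.sqrt(abs(z[k])) * dB + theta * dt
    return np.maximum(z, 0.0)           # clamp the final overshoot below 0

path = besq_path(x=1.0, theta=-1.0)
```

With $\theta=-1$ the constant emigration drift drags the path to zero, where it is then frozen, in line with the absorption convention used throughout.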
In Section~\ref{sec:timechange} we show that the $\operatorname{NWF}(\delta_1,\ldots , \delta_n)$ law, starting at $(x_1,\ldots,x_n)$, can be recovered via a stochastic time-change from a collection of $n$ independent processes with laws $Q_{x_i}^{-2\delta_i}$, $i=1,\ldots,n$, and dividing each coordinate by the total sum. For the corresponding discrete models this is usually referred to as Poissonization. In this article we utilize this relationship to infer several properties about the NWF processes. For example, we prove that these diffusions, almost surely, hit the boundary of the simplex. We derive the explicit exit density supported on the union of the boundary walls in Theorem~\ref{thm:exitnwf}. \textit{Part} (2). We also prove an interesting duality relationship between WF and NWF models. To describe the duality relationship we let the NWF continue in the lower dimensional simplex when any of the coordinates hit zero. Thus, every time a coordinate hits zero, the dimension of the process gets reduced by one, and ultimately the process is absorbed at the scalar one. Such a process can be obtained by running a WF model with appropriate parameters that initially starts with dimension one and value $1$. At independent random times, the dimension of the process increases by one, and the newly added coordinate is initialized at zero. Finally we condition on the values of the process at a chosen random time. The resulting process, backwards in time and suitably time-changed, is the original NWF model. \textit{Part} (3). The time that the NWF process takes to exit the simplex is a crucial quantity due to a reason which we describe below. We keep our exposition mostly verbal without going into too much detail since the details require considerable formalism from the theory of continuum trees and will be discussed elsewhere. 
In \cite {palCT} we show how {Part (1)} points toward a Poissonization of the entire Aldous Markov chain, which is simpler for considering scaled limits. The Poissonized version of the Markov chain on $n$-leaf Cladograms stipulates: every existing leaf has an exponential clock of rate $2$ attached to it which determines the instances of their deaths, and every existing edge has an independent exponential clock of rate $1$ attached to it, at which point the edge is split, and a new pair of vertices (one of which is a leaf) is introduced. It is an easy verification that the rates are consistent with the BESQ limit that we claimed in {Part (1)} above. Hence, one would expect that the limit of the Poissonized chains on continuum trees, normalized to give a leaf-mass measure one, and suitably time-changed would give the conjectured Aldous diffusion. This is the strategy followed in~\cite{palCT}. Now, the Poissonized chain has some beautiful and interesting structures. Please see~\cite{A93} for the details about continuum trees that we use below. A~continuum tree $\mathbb{T}$ comes with its associated (infinite) length measure (analogous to the Lebesgue measure) and a leaf-mass probability measure, which describes how the leaves are distributed on it. We will denote the length measure by $\mathbf{Leb}(\mathbb{T})$ and the leaf-mass probability measure by $\mu(\mathbb{T} )$. Suppose we sample $n$ i.i.d. elements from $\mu(\mathbb{T})$ and draw the tree generated by them, which produces an $n$-leaf Cladogram with edge-lengths (or, a proper $n$-tree, according to~\cite{A93}). Thus, by using the fact that the continuum tree is compact, one can approximate a continuum tree by a sequence of $n$-leaf Cladograms. Now consider an $n$-leaf Cladogram for a very large $n$, and further consider $m$ internal branchpoints. For example, in Figure~\ref{fig1}, we have three branchpoints $\{a,b,c\}$ in a $7$-leaf Cladogram. 
These branchpoints generate a \textit{skeleton} subtree of the original tree and partition the leaves as \textit{internal} or \textit{external} to the skeleton. The components of the vector of external leaf masses grow as independent continuous time, binary branching, Galton--Watson branching processes with a rate of branching/dying $2$ and a rate of emigration $1$. Note that this is consistent with the diffusion limit as BESQ with $\theta=-1$. As the Markov chain (Poissonized or not) proceeds, there comes a time when one of these external leaf masses gets exhausted. When this happens, one of the internal branch points becomes a leaf. The distribution of every coordinate of external leaf-masses at this exit time is derived in Part (2). Until this time, supported on the skeleton, new subtrees can grow and decay. We show, in~\cite{palCT}, that the dynamics of the sizes of these subtrees on the internal part can be modeled as the age process of a chronological splitting tree. Chronological splitting trees are a special kind of biological tree, where an individual lives up to a certain (possibly nonexponential) lifetime and produces children at rate one during that lifetime. Her children behave in an identical manner with an independent and identically distributed lifetime of their own. The age process refers to the point process of current ages of the existing members in the family. More details about splitting trees can be found in the article by Lambert~\cite{lambertAOP}. When one of the internal vertices gets \textit{exposed}, the above dynamics breaks down, and we need to find a slightly different set of internal vertices to proceed. Hence, it is important to derive estimates of the times at which this change happens. We provide quantitative bounds on the value of this stopping time under the special situation of symmetric choice of parameters, which is the case at hand. The article is divided as follows. 
Our main tool in this analysis is to establish a relationship between NWF processes and Bessel-square processes of negative dimensions, much in the spirit of Pal \cite {vsm}. This has been done in Section~\ref{sec:timechange} where we also establish Theorem~\ref{mainthm}. The relevant results about BESQ processes have been listed in Section~\ref{sec:besq}. Most of these results are known, and appropriate citations have been provided. Proofs of the rest can be found in the \hyperref[appm]{Appendix}. Exact computations of exit density from the simplex have been done in Section \ref {sec:exitdensity}. Estimates of the exit time have been established in Section~\ref{sec:exittime}. \section{Some results about BESQ processes}\label{sec:besq} The Bessel-square processes of negative dimensions $-\theta$, where $\theta\ge0$, are one-dimensional diffusions which are the unique strong solution of the SDE \begin{equation}\label{sdenbesq} X(t) = x - \theta t + 2\int_0^t \sqrt{X(s)}\,d\beta(s), \qquad t \le T_0, \end{equation} where $T_0$ is the first hitting time of zero for the process $X$, and $x$ is a positive constant. The process is absorbed at zero. We will denote the law of this process $Q_x^{-\theta}$ just as BESQ of a positive dimension $\theta$ will be denoted by $Q^{\theta}_x$. The following collection of results is important for us. All the proofs can be found in the article by G\"oing-Jaeschke and Yor~\cite{yornbesq}. \begin{lemma}[(Time-reversal)]\label{lem:trev} For any $\theta> -2$ and any $x >0$, $Q_x^{-\theta}(T_0 < \infty)=1$, while for $\theta\ge 2$, one has $Q^\theta_x(T_0 < \infty)=0$. Moreover the following equality holds in distribution: \begin{equation}\label{Qtreversal} \bigl( X(T_0 - u), u \le T_0 \bigr) = \bigl( Y(u), u \le L_x \bigr), \end{equation} where $Y$ has law $Q^{4+\theta}_0$, and $L_x$ is the last hitting time of $x$ for the process~$Y$. 
In particular: \begin{longlist}[(ii)] \item[(i)] Both $L_x$ and $T_0$ are distributed as $x/2G$, where $G$ is a Gamma random variable with parameter $(\theta/2 +1)$. \item[(ii)] The transition probabilities $p_t^\theta(x,y)$ for $x,y >0$ satisfy the identity \[ p_t^{-\theta}(x,y)=p_t^{4+\theta}(y,x).\vadjust{\goodbreak} \] \end{longlist} \end{lemma} The following results have been proved in the \hyperref[appm]{Appendix}. \begin{lemma}\label{scale} The scale function for $Q^{-\theta}$, $\theta\ge0$, is given by the function \[ s(x)= x^{\theta/2+1}, \qquad x\ge0. \] Moreover: \begin{longlist}[(ii)] \item[(i)] The origin is an exit boundary for the diffusion and not an entry. \item[(ii)] The change of measure \[ x^{-\theta/2-1}Q_x^{-\theta} ( X(t)^{\theta/2+1}1 ( \cdot ) ) \] on the $\sigma$-algebra generated by the process up to time $t$ is consistent for various $t$ and is the law of $Q_x^{4+\theta}$. Thus, we say $Q_x^{4+\theta}$ is $Q_x^{-\theta}$ conditioned never to hit zero. \end{longlist} \end{lemma} The previous fact is the generalization of the well-known observation that Brownian motion, conditioned never to hit the origin, has the law of the three-dimensional Bessel process. \begin{lemma}\label{logct} Let $\{Z(t), t\ge0\}$ denote a BESQ process of dimension $\theta$ for some $\theta> 2$. Then \[ \lim_{\varepsilon\rightarrow0}\frac{1}{\log(1/\varepsilon)} \int _{\varepsilon}^t \frac{du}{Z(u)}=\frac{1}{\theta-2} \qquad \mbox{for all } t > 0. \] \end{lemma} \section{Changing and reversing time}\label{sec:timechange} Our objective in this section is to establish a time-reversal relationship between NWF and WF models. \begin{theorem}\label{bestimechange} Let $z_1, \ldots, z_n$ and $\theta_1,\ldots, \theta_n$ be nonnegative constants. Let $Z=(Z_1, \ldots,Z_n)$ be a vector of $n$ independent BESQ processes of dimensions $-\theta_1, \ldots, -\theta _n$, respectively, starting from $(z_1, \ldots, z_n)$. Let $\zeta$ be the sum $\sum_{i=1}^n Z_i$. 
Define \[ T_i= \inf\{ t\ge0\dvtx Z_i(t)=0 \}, \qquad \tau=\bigwedge _{i=1}^n T_i. \] Then, there is an $n$-dimensional diffusion $\mu$, satisfying the SDE in \eqref{whatisnwf} for $\operatorname{NWF}(\theta_1/2, \ldots, \theta_n/2)$, for which the following equality holds: \begin{equation}\label{bestime} Z_i(t\wedge\tau) = \zeta(t\wedge\tau) \mu_i ( 4C_t ), \qquad 1\le i \le n, \qquad C_t=\int_0^{t\wedge\tau} \frac{ds}{\zeta(s)}. \end{equation} Thus, in particular, equation \eqref{whatisnwf} admits a weak solution for all nonnegative parameters $(\delta_1, \ldots, \delta_n)$. \end{theorem} \begin{pf} The proof is almost identical to the case of the WF model, as shown in~\cite{vsm}, Proposition 11, with obvious modifications.\vadjust{\goodbreak} For example, unlike the WF case, the time-change clock is no longer independent of the NWF process. We outline the basic steps below. We know from \eqref{sdenbesq} that \[ d Z_i(t \wedge\tau) = - \theta_i \,d(t\wedge\tau) + 2 \sqrt{Z_i}\,d\beta_i( t \wedge\tau), \qquad i=1,2,\ldots,n. \] Define $\theta_0=\sum_{i=1}^n \theta_i$. Let $V_i(t)=Z_i(t)/\zeta(t)$ for $t \le\tau$. Then by It\^o's rule, we get \begin{equation}\label{msde} dV_i(t\wedge\tau)=-\zeta^{-1} [\theta_i - \theta_0 V_i ]\,d(t\wedge\tau) + \sqrt{{V_i}(1-V_i)}\,dM_i(t), \end{equation} where \begin{equation}\label{howtodefmi} dM_i(t)= \frac{2\zeta^{-1/2}}{\sqrt{1- V_i}}\sum_{j=1}^n \bigl(1\{ i=j\} - \sqrt{V_iV_j} \bigr)\,d\beta_j(t\wedge\tau), \end{equation} and $\langle M_i \rangle(t)=4C_t$. Let $\{\rho_u, u\ge0 \}$ be the inverse of the increasing function $4C_t$. Applying this time-change to the SDE for $V_i$ in \eqref {msde}, we get \begin{equation}\label{nuwt} d\mu_i(t) =- \tfrac{1}{4} [\theta_i - \theta_0 \mu_i ]\,dt + \sqrt{\mu_i(1-\mu_i)}\,d\widetilde{W}_i(t), \end{equation} where $\widetilde{W}_i$ is the Dambis--Dubins--Schwarz (DDS; see~\cite{RY}, page~181) Brownian motion associated with $M_i$.
This turns out to be the SDE for $\operatorname{NWF}(\theta_1/2,\break \ldots, \theta_n/2)$. \end{pf} Let $\theta_1, \theta_2, \ldots, \theta_n$ be nonnegative and $z_1, z_2, \ldots, z_n$ be positive constants. For $i=1,2,\ldots,n$ define independent random variables $(G_1, \ldots, G_n)$ where $G_i$ is distributed as $\operatorname{Gamma}(\theta_i/2 + 1)$. Let \begin{equation}\label{whatisri} R_i=\frac{z_i}{2 G_i}, \qquad i=1,2,\ldots,n. \end{equation} Also, independent of $(G_1, \ldots, G_n)$, let $Y_1, Y_2, \ldots, Y_n$ be $n$ independent BESQ processes of positive dimensions $(4+\theta_1),(4+\theta_2), \ldots, (4 + \theta_n)$, respectively, all of which start from zero. For any permutation $\pi$ of $n$ labels, condition on the event \begin{equation}\label{condperm} R_{\pi_1} > R_{\pi_2} >\cdots> R_{\pi_n} \quad \mbox{and let} \quad R^*=R_{\pi_2}. \end{equation} We now construct the following $n$-dimensional process $(X_1, \ldots, X_n)$: \begin{equation}\label{whatisxi} X_i(t)=Y_i \bigl( (t- R^* + R_i)^+ \bigr), \qquad t\ge0. \end{equation} Notice that at time $t=0$, every $X_i$ is at zero except the $\pi_1$th. Let $S(t)$ denote the total sum process $\sum_{i=1}^n X_i(t)$. Note that $S(t) >0$ for all $t \ge0$ with probability one. Define the process \begin{equation}\label{whatisct} C_t:=\int_{0}^{t}\frac{du}{S(u)}, \qquad t > 0. \end{equation} The process $C_t$ is finite almost surely for every $t$ (we cannot take $R^*=R_{\pi_1}$, precisely because $C_t$ would then be infinite; see Lemma~\ref{logct}). Let $A$ denote the inverse function of the continuous increasing function $4C$. That is, \begin{equation}\label{whatisa} A_t = \inf\{ u \ge0\dvtx 4C_u \ge t \}, \qquad t\ge0. \end{equation} \begin{lemma}\label{dimWF} There is an $n$-dimensional diffusion $\nu$ such that the following time-change relationship holds: \begin{equation}\label{getxi} \nu_i(t)=\frac{X_i}{S}(A_t) \quad\mbox{or}\quad X_i(t)= S(t) \nu _i ( 4 C_t ), \qquad t \ge0.
\end{equation} The distribution of $\nu$ is supported on the unit simplex \[ \mathbb{S}_n= \{ x_i \ge0\dvtx x_1 + x_2 +\cdots+ x_n=1 \}. \] Conditional on the values of $G_1, \ldots, G_n$ and the process $S$, the law of $\nu$ can be described as follows. Let $\pi$ be any permutation of $n$ labels. On the event $R_{\pi_1} > R^*=R_{\pi_2} >\cdots> R_{\pi_n}$, let $V_2 <\cdots< V_{n}$ be defined by \[ A_{V_i}= R^* - R_{\pi_{i}} \quad \mbox{or, equivalently} \quad 4C_{R^*-R_{\pi_i}}=V_i. \] Note that $V_2=0$. For $i\ge2$ and $V_{i} \le t \le V_{i+1}$, the process $\nu$ is zero on all coordinates except $(\pi_1, \ldots, \pi_i)$. The process $\nu (\pi_1, \ldots, \pi_i)$, given the history of the process till time $V_i$ (and the $G_i$'s and $S$), is distributed as the classical Wright--Fisher diffusion starting from \[ \frac{1}{S}(X_{\pi_1}, \ldots, X_{\pi_i}) (A_{V_i} )=\frac{1}{S}(X_{\pi_1}, \ldots, X_{\pi_i}) (R^*-R_{\pi_i} ), \] and with parameters $(\gamma_{\pi_1}, \ldots, \gamma_{\pi_i} )$ where \[ \gamma_j = \theta_j/2 + 2, \qquad j=1,2,\ldots,n. \] \end{lemma} \begin{pf} The Gamma random variables $G_1, \ldots, G_n$ are independent of the BESQ processes $Y_1, \ldots, Y_n$. Thus, conditional on $G_1, \ldots, G_n$, the vector of processes $(X_1, \ldots, X_n)$ has the following description. For \[ R^*- R_{\pi_i}\le t \le R^* - R_{\pi_{i+1}}, \qquad i\ge2, \] all coordinates other than the $\pi_1$th, $\pi_2$th, \ldots, $\pi _i$th are zero. And $(X_{\pi_1}, \ldots,\break X_{\pi_i})$, conditioned on the past, are independent BESQ processes of dimensions $(4+\theta _{\pi_1}, \ldots, 4+\theta_{\pi_i})$ starting from $(X_{\pi _1}, \ldots, X_{\pi_i})(R^*- R_{\pi_i})$. Thus, on this interval of time, the existence of the process $\nu$, the identification of its law as the WF law, and the claimed independence from the process $S$ all follow from~\cite{vsm}, Proposition~11. The proof of the lemma now follows by combining the argument over the distinct intervals.
\end{pf} \begin{lemma}\label{timerevdimWF} Consider the set-up in \eqref{whatisri}, \eqref{whatisxi} and \eqref {whatisa}. Let $Z_1, Z_2, \ldots, Z_n$ be $n$ stochastic processes defined such that $\{Z_i(t), 0\le t \le R^*\}$ is the time-reversal of the process $\{X_i(t), 0\le t \le R^*\}$, conditioned on $X_i(R^*)= z_i$. That is, conditioned on $X_i(R^*)=z_i$ for every $i$, \[ Z_i(t) = X_i(R^*-t)=Y_i\bigl((R_i-t)^+\bigr) \qquad \mbox{for } 0 \le t \le R^*. \] Then $(Z_1, \ldots, Z_n)$ are independent BESQ processes of dimensions $-\theta_1, \ldots, \break -\theta_n$, starting from $z_1, \ldots, z_n$, and absorbed at the origin. \end{lemma} \begin{pf} It suffices to prove the following: \begin{claim*} Let $\{Y(t), t\ge0\}$ denote a BESQ process of dimension $(4+\theta)$ starting from $0$. Fix a $z >0$. Let $T$ be distributed as $z/2G$, where $G$ is a Gamma random variable with parameter $(\theta/2+1)$. Then, conditioned on $T=l$ and $Y(l)=z$, the time-reversed process $\{Y((l-s)^+), 0\le s < \infty\}$ is distributed as $Q_z^{-\theta}$, absorbed at the origin and conditioned on $T_0=l$. Here $T_0$ is the hitting time of the origin for $Q_z^{-\theta}$. \end{claim*} Once we prove this claim, the lemma follows since the law of $T_0$ is exactly $z/2G$. See Lemma~\ref{lem:trev}. \begin{pf*}{Proof of Claim} For the case $\theta=0$, this is proved in~\cite{besselbridge}, page~447. The general proof is entirely analogous; we outline the steps and give references within~\cite{besselbridge} for the details. For any $\theta\in\mathbb{R}$, $t >0$, $x,y \ge0$, let $Q^{\theta ,t}_{x\rightarrow y}$ denote the law of the BESQ bridge of dimension $\theta$, length $t$, from point $x$ to point $y$. That is to say, if $Y$ follows $Q_x^\theta$, then $Q^{\theta,t}_{x\rightarrow y}$ is the law of the process $\{ Y(s), 0\le s\le t \}$ conditioned on the event $\{ Y(t)=y\}$. Now, BESQ bridges satisfy time-reversal~\cite{besselbridge}, page~446.
Thus, if we define $\widehat{P}$ to be the $P$-distribution of a process $\{ X(t-s), 0\le s\le t\}$, then \mbox{$Q^{\theta,t}_{x\rightarrow y}=\widehat Q^{\theta,t}_{y\rightarrow x}$}. We consider the case when the dimension is $(4+\theta), \theta\ge 0$, $x=0, y=z >0$. Then \[ Q^{4+\theta,t}_{z\rightarrow 0}=\widehat Q^{4+\theta,t}_{0\rightarrow z}. \] Now, from Lemma~\ref{scale} (also see~\cite{besselbridge}, Section 3, page~440), we know that $Q^{4+\theta}_z$ is $Q^{-\theta}_z$ conditioned never to hit zero (equivalently, $Q^{-\theta}_z$ can be interpreted as $Q_z^{4+\theta}$ conditioned to hit zero). Since the origin is an exit point for $Q^{-\theta}_z$ and not an entry point (Lemma~\ref{scale}; see~\cite{besselbridge}, page~441, for the details of these definitions), the conditional law $Q^{4+\theta,t}_{z\rightarrow0}$ is nothing but $Q^{-\theta}_z$, conditioned on $T_0=t$. This completes the proof. \end{pf*} \noqed \end{pf} The following is a more precise statement. Let $(z_1, \ldots, z_n)$ be a point in the $n$-dimensional unit simplex $\mathbb{S}_n$. Fix $n$ nonnegative parameters\vadjust{\goodbreak} $\delta_1, \ldots, \delta_n$. Let $G_1, \ldots, G_n$ denote $n$ independent Gamma random variables with parameters $\delta_1+1, \ldots, \delta_n+1$, respectively. Define $R_i=z_i/2G_i$. For any permutation $\pi$ of $n$ labels, condition on the event $R_{\pi_1} > R_{\pi_2} >\cdots> R_{\pi_n}$, and let $R^*=R_{\pi_2}$. Define the continuous process $S$ by prescribing $S(0)=Z_1(R_{\pi_1}-R^*)$, where $Z_1$ is distributed as $Q_0^{4+2\delta_{\pi_1}}$, and by requiring that, for any $t$ with \[ R^*-R_{\pi_i} \le t \le R^*-R_{\pi_{i+1}},\qquad i\ge2, \qquad R_{\pi_{n+1}}=0, \] given the history, the process is distributed as a Bessel-square process of dimension $\sum_{j=1}^i (4+ 2\delta_{\pi_j})$ starting from $S(R^*-R_{\pi_i})$.
Define the stochastic clocks \[ C_t = \int_{0}^t \frac{du}{S(u)}, \qquad\widehat C_t = \int _{R^*-t}^{R^*} \frac{du}{S(u)}, \qquad 0\le t\le R^*, \] and let $\widehat A_t$ denote the inverse function of $4\widehat C_t$. Let $V_2 <\cdots< V_{n}$ be defined by $4C_{R^*-R_{\pi_i}}=V_i$. Note that $V_2=0$. The $4$ is a standardization constant that appears due to the factor of $2$ in the diffusion coefficient in \eqref{besqintro}. Define an $n$-dimensional process $\nu$, given $R_1, \ldots, R_n$, and the process $S$. For $i\ge2$ and $V_{i} \le t \le V_{i+1}$, the process $\nu$ is zero on all coordinates, except possibly at indices $(\pi_1, \ldots, \pi _i)$. At time zero, the process starts at the vector that is $1$ in the $\pi_1$th coordinate and zero elsewhere. Conditioned on the history till time $V_i$, the process $\{\nu(\pi_1, \ldots, \pi_i)(t), V_i \le t \le V_{i+1}\}$ is distributed as the classical Wright--Fisher diffusion, starting from $\nu(\pi_1, \ldots , \pi_i)(V_i)$ and with parameters $(\gamma_{\pi_1}, \ldots, \gamma _{\pi_i} )$, where \[ \gamma_j = \delta_j + 2, \qquad j=1,2,\ldots,n. \] Finally, consider the conditional law of the process, conditioned on the event \[ S(R^*) \nu_i(4C_{R^*})= z_i \qquad \mbox{for all } i=1,2,\ldots,n. \] \begin{theorem}\label{mainthm} Define the time-reversed process \[ \mu(t) = \nu( \widehat A \circ4C_{R^*-t} ), \] where $\circ$ denotes composition. Then this conditional stochastic time-reversed process, until the first time any of the coordinates hit zero, has a marginal distribution (when $G_i$'s and $S$ are integrated out) $\operatorname{NWF}(\delta_1, \ldots, \delta_n)$ starting from $(z_1, \ldots, z_n)$. \end{theorem} \begin{pf} We start with given values of $R_{\pi_1} > R_{\pi_2} >\cdots> R_{\pi_n}$ and the process $S$ and apply equation \eqref{getxi} in Lemma~\ref{dimWF} to obtain the processes $(X_1, \ldots, X_n)$, defined by \[ X_i(t) = S(t) \nu_i(4C_t), \qquad 0\le t \le R^*. 
\] Then, the vector $(X_1, X_2, \ldots, X_n)$ has the law prescribed by \eqref{whatisxi}.\vadjust{\goodbreak} Now we apply Lemma~\ref{timerevdimWF} to obtain $(Z_1, \ldots, Z_n)$ by conditioning $(X_1, \ldots, X_n)$ and reversing time. Finally, the construction in Theorem~\ref{bestimechange} gives us the vector $(\mu_1, \ldots, \mu_n)$ from $(Z_1, \ldots, Z_n)$, as desired. \end{pf} \section{Exit density}\label{sec:exitdensity} Let $Z_1, Z_2, \ldots, Z_n$ be independent BESQ processes of dimensions $-\theta_1, \ldots, -\theta_n$, where each $\theta_i \ge 0$. We assume that at time zero, the vector $\mathbf{Z}=(Z_1, \ldots, Z_n)$ starts from a point $\mathbf{z}=(z_1, \ldots, z_n)$ where every $z_i > 0$. Define $T_i$ to be the first hitting time of zero for the process $Z_i$, and let $\tau=\bigwedge_{i} T_i$ denote the first time any coordinate hits zero. We would like to determine the joint distribution of $(\tau, \mathbf{Z}(\tau))$. Note that since the $T_i$'s are independent continuous random variables, the minimum is almost surely attained at a unique $i$. Thus, for a fixed $1\le i \le n$, conditioned on the event $\tau=T_i$, the distribution of $Z_i(\tau)$ is the unit mass at zero, and the distribution of every other $Z_j(\tau)$ is supported on $(0, \infty)$. Now, let $h_i$ denote the density of the stopping time $T_i$ on $(0,\infty)$, and let $q_t^{-\theta}$ refer to the transition density of $Q^{-\theta}$.
It follows that for any $a_j > 0$, $j\neq i$, we get \begin{eqnarray} &&P \bigl( \tau=T_i, \tau\le t, Z_j(\tau) \ge a_j \mbox{ for all $j \neq i$}\bigr)\nonumber\\ && \qquad =P \bigl( T_i \le t, T_j > T_i, Z_j(T_i) \ge a_j \mbox{ for all $j\neq i$} \bigr)\nonumber\\ && \qquad = \int_0^t h_i(s) \prod_{j\neq i} P \bigl( T_j > s, Z_j(s) \ge a_j \bigr)\,ds= \int_0^t h_i(s) \prod_{j\neq i}P \bigl( Z_j(s) \ge a_j \bigr)\,ds \nonumber\\ \eqntext{\mbox{since $a_j > 0$}}\\ && \qquad = \int_0^t h_i(s)\biggl[ \prod_{j\neq i} \int_{a_j}^\infty q_s^{-\theta_j}(z_j, y_j)\,dy_j \biggr]\,ds.\nonumber \end{eqnarray} Our first job is to find a closed-form expression for the integral above. To do this we start by noting that $T_i$ is distributed as $z_i/ 2G_i$ (see Lemma~\ref{lem:trev}), where $G_i$ is a Gamma random variable with parameter $(4+\theta_i)/2-1=\theta_i/2 + 1$. That is, the density of $G_i$ is supported on $(0,\infty)$ and is given by \[ \frac{y^{\theta_i/2}}{\Gamma(\theta_i/2 + 1)}e^{-y}. \] It follows that \[ h_i(s)= \frac{(z_i/2)^{\theta_i/2+1}}{\Gamma(\theta_i/2 + 1)} s^{-\theta_i/2 -2} e^{-z_i/2s}, \qquad 0 \le s < \infty. \] On the other hand, it follows from time reversal (Lemma \ref{lem:trev}) that $q_s^{-\theta_j}(z_j, \break y_j)= q_{s}^{4+\theta_j}(y_j, z_j)$. For any positive $a$, the transition density $q^{a}_s(y,z)$ is explicitly known (see, e.g.,~\cite{vsm}) to be $s^{-1}f(z/s, a, y/s)$, where $f(\cdot,k,\lambda)$ is the density of a noncentral chi-square distribution with $k$ degrees of freedom and noncentrality parameter $\lambda$. In particular, it can be written as a Poisson mixture of central chi-square (or, Gamma) densities. Thus we have the following expansion: \begin{equation}\label{tranden} \qquad q_s^{-\theta_j}(z_j, y_j)= q_{s}^{4+\theta_j}(y_j, z_j)=s^{-1}\sum _{k=0}^{\infty} e^{-y_j/2s}\frac{(y_j/2s)^k}{k!} g_{\theta_j+4+2k}(z_j/s), \end{equation} where $g_r$ is the Gamma density with parameters $(r/2, 1/2)$.
That is, \[ g_r(x)= \frac{2^{-r/2} x^{r/2-1}}{\Gamma(r/2)} e^{-x/2}, \qquad x\ge0. \] Now, define \[ \vec{y}_i=\sum_{j\neq i} y_j, \qquad \vec{\theta}_i= \sum_{j\neq i}\theta _j, \qquad \vec{z}_i=\sum_{j\neq i} z_j. \] Thus \begin{eqnarray*} &&h_i(s)\prod_{j\neq i} q_s^{-\theta_j}(z_j, y_j)\\ && \qquad = \frac {(z_i/2)^{\theta_i/2+1}}{\Gamma(\theta_i/2 + 1)} s^{-\theta_i/2 -2} e^{-z_i/2s} \prod_{j\neq i} s^{-1}\sum_{k=0}^{\infty} e^{-y_j/2s}\frac{(y_j/2s)^k}{k!} g_{\theta_j+4+2k}(z_j/s)\\ && \qquad =\frac{(z_i/2)^{\theta_i/2+1}}{\Gamma(\theta_i/2 + 1)} s^{-\theta _i/2 -2} e^{-z_i/2s} \\ && \qquad \quad {}\times\prod_{j\neq i} s^{-1}\sum_{k=0}^{\infty} e^{-y_j/2s}\frac{(y_j/2s)^k}{k!} \frac{2^{-\theta_j/2-2-k} (z_j/s)^{\theta_j/2+k+1}}{\Gamma(\theta_j/2 + 2 +k)} e^{-z_j/2s}\\ && \qquad =\frac{(z_i/2)^{\theta_i/2+1}}{\Gamma(\theta_i/2 + 1)} s^{-\theta _i/2 - 2-(n-1)} e^{-z_i/2s}\\ && \qquad \quad {}\times e^{-(\vec{y}_i+ \vec{z}_i)/2s}2^{-\vec{\theta}_i/2-2(n-1)} \prod_{j\neq i} \sum _{k=0}^{\infty} \frac{(y_j/2s)^k}{k!} \frac{2^{-k} (z_j/s)^{\theta _j/2+k+1}}{\Gamma(\theta_j/2 + 2 +k)}. \end{eqnarray*} We now exchange the product and the sum in the above. We will need some more notations for a compact representation. For any two vectors $a$ and $b$, denote by \[ a^b = \prod_{i} a_i^{b_i}, \qquad a!=\prod_{i} a_i!. \] Also let $\boldsymbol{\Theta}_i, \mathbf{y}_i, \mathbf{z}_i$ stand for the vectors $(\theta_j, j\neq i)$, $(y_j, j\neq i)$ and $(z_j, j\neq i)$, respectively. Let $\mathbf{k}$ denote the vector $(k_j, j\neq i)$, where every $k_j$ takes any nonnegative integer values. Let $\mathbf{k}'1$ be the sum of the coordinates of $\mathbf{k}$. 
Then \begin{eqnarray*} &&\prod_{j\neq i} \sum_{k=0}^{\infty} \frac{(y_j/2s)^k}{k!} \frac {2^{-k} (z_j/s)^{\theta_j/2+k+1}}{\Gamma(\theta_j/2 + 2 +k)} \\ && \qquad = \sum_{N=0}^{\infty} (4s)^{-N} s^{-\vec{\theta}_i/2 - N - (n-1)} \mathbf {z}_i^{\boldsymbol{\Theta}_i/2+1}\sum_{\mathbf{k}'1=N} \frac{\mathbf{y}_i^\mathbf{k}}{\mathbf{k}!} \frac {\mathbf{z}_i^{\mathbf{k}}}{\prod_{j\neq i} \Gamma(\theta_j/2 + 2 + k_j)}. \end{eqnarray*} Thus, combining the expressions, we get \begin{eqnarray}\label{inter1} &&h_i(s) \prod_{j\neq i} q_s^{-\theta_j}(z_j, y_j)\nonumber\\ && \qquad =\frac{z_i^{\theta _i/2+1}}{\Gamma(\theta_i/2 + 1)}2^{-\theta_i/2 -1 -\vec{\theta}_i /2-2(n-1)} \\ && \qquad \quad {}\times s^{-\theta_i/2 - 2-(n-1)} e^{-z_i/2s} e^{-(\vec{y}_i+ \vec{z}_i)/2s} \sum_{N=0}^{\infty} 4^{-N} s^{-\vec{\theta}_i/2 - 2N - (n-1)}B_N,\nonumber \end{eqnarray} where \[ B_N= \mathbf{z}_i^{\boldsymbol{\Theta}_i/2+1}\sum_{\mathbf{k}'1=N} \frac{\mathbf{y}_i^\mathbf{k} }{\mathbf{k}!} \frac{\mathbf{z}_i^{\mathbf{k}}}{\prod_{j\neq i} \Gamma(\theta _j/2 + 2 + k_j)}. \] We can now integrate over $s$ in \eqref{inter1} to obtain \begin{eqnarray*} \int_0^\infty h_i(s) \prod_{j\neq i} q_s^{-\theta_j}(z_j, y_j)\,ds= \sum_{N=0}^\infty B'_N \int_0^\infty s^{-a_N} e^{-b/s}\,ds, \end{eqnarray*} where \begin{eqnarray} B_N' &=& \frac{z_i^{\theta_i/2+1}}{\Gamma(\theta_i/2 + 1)}2^{-\theta_0/2-2n+1} 4^{-N} B_N, \qquad \theta_0=\sum_{i=1}^n \theta_i,\\ a_N &=& \theta_i/2 + \vec{\theta}_i/2 +2n + 2N= \theta_0/2 + 2n + 2N,\\ b &=& z_i/2 + (\vec{y}_i+\vec{z}_i)/2 = (\vec{y}_i+ z_0)/2, \qquad z_0=\sum _{i=1}^n z_i. 
\end{eqnarray} Now a simple change of variable $w=1/s$ shows \begin{eqnarray*} \int_0^\infty s^{-a_N} e^{-b/s}\,ds &=& \int_0^\infty w^{a_N}e^{-bw}w^{-2}\,dw=\int_0^\infty w^{a_N -2} e^{-bw}\,dw\\ &=&\frac{\Gamma(a_N-1)}{b^{a_N-1}} \int_0^{\infty} \frac{b^{a_N-1}}{\Gamma(a_N-1)} w^{a_N -2} e^{-bw}\,dw = \frac{\Gamma(a_N-1)}{b^{a_N-1}}, \end{eqnarray*} since the final integrand is a Gamma density. Since the $i$th coordinate of the exit point is zero, one can define $y_i=0$ and $y_0=\sum_{j=1}^n y_j= \vec{y}_i$ to simplify notation. Thus we obtain \begin{eqnarray*} &&\int_0^\infty h_i(s)\prod_{j\neq i} q_s^{-\theta_j}(z_j, y_j)\,ds\\ && \qquad = \sum_{N=0}^\infty\frac{z_i^{\theta_i/2+1}}{\Gamma(\theta_i/2 + 1)}2^{-\theta_0/2-2n+1} 4^{-N}B_N \frac{\Gamma(a_N-1)}{b^{a_N-1}}\\ && \qquad =\frac{z_i^{\theta_i/2+1}\mathbf{z}_i^{\boldsymbol{\Theta}_i/2+1}}{\Gamma (\theta_i/2 + 1)}2^{-\theta_0/2-2n+1} \sum_{N=0}^\infty\bigl( (\vec{y}_i+z_0)/2 \bigr)^{-\theta_0/2 - 2n - 2N + 1}\\ && \qquad \quad {}\times\Gamma(\theta_0/2 + 2n + 2N-1) 4^{-N}\sum_{\mathbf{k}'1=N} \frac {\mathbf{y}_i^\mathbf{k}}{\mathbf{k}!} \frac{\mathbf{z}_i^{\mathbf{k}}}{\prod_{j\neq i} \Gamma(\theta_j/2 + 2 + k_j)}\\ && \qquad =\frac{\mathbf{z}^{\boldsymbol{\Theta}/2+1}}{\Gamma(\theta_i/2 + 1)}2^{-\theta_0/2-2n+1} \sum_{N=0}^\infty( y_0 + z_0 )^{-\theta_0/2 - 2n - 2N + 1} 2^{\theta_0/2 + 2n + 2N -1}\\ && \qquad \quad {}\times\Gamma(\theta_0/2 + 2n + 2N-1) 4^{-N}\sum_{\mathbf{k}'1=N} \frac {\mathbf{y}_i^\mathbf{k}}{\mathbf{k}!} \frac{\mathbf{z}_i^{\mathbf{k}}}{\prod_{j\neq i} \Gamma(\theta_j/2 + 2 + k_j)}\\ && \qquad =\frac{\mathbf{z}^{\boldsymbol{\Theta}/2+1}}{\Gamma(\theta_i/2 + 1)} \sum _{N=0}^\infty( y_0 + z_0 )^{-\theta_0/2 - 2n - 2N + 1}\\ && \qquad \quad {}\times \Gamma(\theta_0/2 + 2n + 2N-1)\sum_{\mathbf{k}'1=N} \frac{\mathbf{y}_i^\mathbf{k} }{\mathbf{k}!} \frac{\mathbf{z}_i^{\mathbf{k}}}{\prod_{j\neq i} \Gamma(\theta _j/2 + 2 + k_j)}. \end{eqnarray*} We have the following result.
\begin{theorem}\label{thm:besexit} Let $Z_1, Z_2, \ldots, Z_n$ be independent BESQ processes of dimensions $-\theta_1, \ldots, -\theta_n$, where each $\theta_i\ge 0$. Assume that $Z_i(0)= z_i > 0$ for every~$i$. The distribution of $(\tau, Z({\tau}))$ is supported on the set $(0,\infty)\times\bigcup_{i=1}^n H_i$, where $H_i$ is the subspace orthogonal to the $i$th canonical basis vector $e_i$. That is, \[ H_i = \{ (y_1, y_2, \ldots, y_n)\dvtx y_i=0 \}. \] \begin{longlist}[(ii)] \item[(i)] Let $G_i, i=1,2,\ldots,n$, be independent Gamma random variables with parameters $\theta_i/2 + 1, i=1,2,\ldots, n$. The law of $\tau$ is the same as that of $\min_{i} \frac{z_i}{2G_i}$ and \[ P ( \tau=T_i )= P \biggl( \frac{G_i}{z_i} > \frac {G_j}{z_j} \mbox{ for all $j\neq i$} \biggr), \] where $T_i$ is the first hitting time of $H_i$.\vadjust{\goodbreak} \item[(ii)] The law of the random vector $Z(\tau)$, restricted to the hyperplane $H_i$, admits a density with respect to all the variables $y_j$'s, $j\neq i$, which is given by \begin{eqnarray}\label{eq:eden} &&\frac{S^{1-\theta_0/2-2n}}{\Gamma(\theta_i/2 + 1)}\prod_{j=1}^n z_j^{\theta_j/2 + 1}\sum_{N=0}^\infty\Gamma(\theta_0/2 + 2n + 2N-1) S^{- 2N}\nonumber \\[-8pt] \\[-8pt] &&{}\times\sum_{\sum_{j\neq i} k_j=N} \prod_{j\neq i} \frac{(y_j z_j)^{k_j}}{k_j! \Gamma(\theta_j/2 + 2 + k_j)}. \nonumber \end{eqnarray} Here \[ S=\sum_{i=1}^n (y_i + z_i), \qquad y_i=0, \qquad\theta_0=\sum_{i=1}^n \theta_i. \] \end{longlist} \end{theorem} Using Theorem~\ref{bestimechange}, we get that the exit distribution of $\operatorname{NWF}(\delta_1, \ldots, \delta_n)$, starting from a point $(z_1, \ldots, z_n)\in\mathbb{S}_n$, is the image under the map \[ x_i \mapsto\frac{x_i}{\sum_{j=1}^n x_j}, \qquad1\le i\le n, \] of the exit density of independent BESQ processes of dimensions $-\theta_1, \ldots, -\theta_n$, where each $\theta_i=2\delta_i$.
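Two elementary integral facts underpin the computation above: the hitting-time density $h_i$ integrates to one, and $\int_0^\infty s^{-a} e^{-b/s}\,ds = \Gamma(a-1)/b^{a-1}$. Both can be spot-checked numerically; the following Python sketch (the parameter values $\theta=2$, $z=1$, $a=6$, $b=3/2$, and all names, are our own choices for illustration, not from the paper) uses a plain composite trapezoid rule.

```python
import math

def trapezoid(f, lo, hi, n):
    # composite trapezoid rule on [lo, hi] with n subintervals
    h = (hi - lo) / n
    s = 0.5 * (f(lo) + f(hi))
    for k in range(1, n):
        s += f(lo + k * h)
    return s * h

# hitting-time density of T = z/(2G), with G ~ Gamma(theta/2 + 1)
theta, z = 2.0, 1.0
alpha = theta / 2 + 1
def h(s):
    return (z / 2) ** alpha / math.gamma(alpha) * s ** (-alpha - 1) * math.exp(-z / (2 * s))

# h should integrate to 1; split the range to resolve the peak and the heavy tail
mass = trapezoid(h, 1e-6, 10.0, 400000) + trapezoid(h, 10.0, 1e4, 200000)

# int_0^infty s^{-a} e^{-b/s} ds = Gamma(a-1)/b^(a-1), via the substitution w = 1/s
a, b = 6.0, 1.5
lhs = trapezoid(lambda w: w ** (a - 2) * math.exp(-b * w), 0.0, 80.0, 400000)
rhs = math.gamma(a - 1) / b ** (a - 1)
print(mass, lhs, rhs)
```

Both computed values agree with the closed forms to several decimal places, which is all this sketch is meant to confirm.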
\begin{theorem}\label{thm:exitnwf} The exit density of $\mu\sim$ $\operatorname{NWF}(\delta_1, \ldots, \delta_n)$ starting from $(z_1, \ldots, z_n)\in\mathbb{S}_n$ is supported on the set $\bigcup_{i=1}^n F_i$, where $F_i$ is the face $\{ x \in\mathbb{S}_n\dvtx x_i=0 \}$, and admits the following description: \begin{longlist}[(ii)] \item[(i)] Let $G_i, i=1,2,\ldots,n$, be independent Gamma random variables with parameters $\delta_i + 1, i=1,2,\ldots, n$. Then \begin{equation}\label{eq:exithyp} P ( \mbox{$\mu$ exits through $F_i$} )= P \biggl( \frac {G_i}{z_i} > \frac{G_j}{z_j} \mbox{ for all $j\neq i$} \biggr). \end{equation} \item[(ii)] Let $\delta$ represent the vector $(\delta_1, \ldots, \delta_n)$, and let $\delta_0=\sum_{i=1}^n \delta_i$. The exit distribution of the process $\mu$, restricted to $F_i$, admits a density with respect to all the variables $x_j$'s, $j\neq i$, which is given by \begin{equation}\label{eq:exitnwf} (\delta_i + 1) \sum_{N=0}^\infty\frac{\Gamma(N+n+\delta _0)}{\Gamma(N+2n + \delta_0)} \sum_{\sum_{j\neq i} k_j=N} \operatorname{Dir} _n(z;\mathbf{k}+ \delta+\mathbf{2}) \operatorname{Dir}_{n-1}(x;\mathbf{k}+\mathbf{1}).\hspace*{-35pt} \end{equation} Here the inner sum above is over all nonnegative integers $(k_j, j \neq i)$ such that $\sum_{j\neq i}k_j =N$. The vector $\mathbf{k}$ represents a vector whose $j$th coordinate is $k_j$ for all $j \neq i$, and whose $i$th coordinate is zero. The vectors $\mathbf{k}+\delta+\mathbf{2}$ and $\mathbf{k}+\mathbf{1}$ represent vector additions of $\mathbf{k}$, $\delta$ and the vector of all twos, and of $\mathbf{k}$ and the vector of all ones, respectively. The factor $\operatorname{Dir}_{n-1}$ is a density with respect to the $(n-1)$-dimensional vector $(x_j, j\neq i)$ with corresponding parameters $(k_{j}+1, j \neq i)$. It can also be interpreted as the conditional density of the $n$-dimensional $\operatorname{Dir}_n(x;\mathbf{k}+\mathbf{1})$, conditioned on $x_i=0$.
\end{longlist} \end{theorem} Note that the density in \eqref{eq:exitnwf} is a mixture of Dirichlet densities, strikingly similar to those appearing as transition probabilities of the Wright--Fisher diffusions themselves. See Griffiths~\cite{griffiths79b}, Barbour, Ethier and Griffiths \cite{BEG} and Pal~\cite{vsm}. \begin{pf*}{Proof of Theorem~\ref{thm:exitnwf}} This is a straightforward integration. We have assumed that $\sum_i z_i=1$. Thus, $S=1+\sum_j y_j$; define $y_0=\sum_j y_j$, and \[ x_j = y_j/y_0, \qquad1\le j \le n. \] Hence \eqref{eq:eden} simplifies to \begin{eqnarray} &&\frac{(1+y_0)^{1-\theta_0/2-2n}}{\Gamma(\theta_i/2 + 1)}\prod _{j=1}^n z_j^{\theta_j/2 + 1}\sum_{N=0}^\infty\Gamma(\theta_0/2 + 2n + 2N-1) (1+y_0)^{-2 N} \nonumber \\[-8pt] \\[-8pt] &&{}\times y_0^N\sum_{\sum_{j\neq i} k_j=N} \prod_{j\neq i} \frac{(x_j z_j)^{k_j}}{k_j! \Gamma(\theta_j/2 + 2 + k_j)}. \nonumber \end{eqnarray} Now, to get to formula \eqref{eq:exitnwf} we need to make a multivariate change of variables. Without loss of generality, let $i=n$. Then, for any $y\in F_i$, we have $y_{n}=0$. Define the change of variables \[ (y_1, \ldots, y_{n-2}, y_{n-1}) \mapsto( y_0,x_1, \ldots, x_{n-2} ). \] In other words, $y_i= y_0 x_i$ for all $i=1,2,\ldots, n-2$ and $y_{n-1}=y_0(1-x_1-\cdots- x_{n-2})$. The determinant of the Jacobian matrix of this change of variables is $y_0^{n-2}$. Thus, the density of $(x_1, \ldots, x_n)$ restricted to $F_i$ is given by \begin{eqnarray}\label{denform1} &&\frac{1}{\Gamma(\theta_i/2 + 1)} \prod_{j=1}^n z_j^{\theta_j/2 + 1}\sum_{N=0}^\infty\Gamma(\theta_0/2 + 2n + 2N-1)\nonumber \\ && \qquad {}\times\int_0^\infty y^{N+n-2}(1+y)^{1-\theta_0/2-2n-2N}\,dy \\ && \qquad {}\times\sum _{\sum_{j\neq i} k_j=N} \prod_{j\neq i} \frac{(x_j z_j)^{k_j}}{k_j! \Gamma(\theta_j/2 + 2 + k_j)}.
\nonumber \end{eqnarray} The following formula is easily verifiable for $\alpha\ge0$, $\beta> \alpha+1$: \[ \int_0^\infty y^{\alpha} (1+y)^{-\beta}\,dy=\int_0^1 x^{\beta -\alpha-2}(1-x)^{\alpha}\,dx=B(\alpha+1,\beta-\alpha-1), \] where $B$ refers to the Beta function.\vadjust{\goodbreak} In other words, \eqref{denform1} reduces to \begin{eqnarray}\label{denform2} &&\frac{1}{\Gamma(\theta_i/2 + 1)}\prod_{j=1}^n z_j^{\theta_j/2 + 1}\sum_{N=0}^\infty\Gamma(\theta_0/2 + 2n + 2N-1)\nonumber \\[-9pt] \\[-9pt] && \qquad {}\times B(N+n-1, N+n+\theta_0/2) \sum_{\sum_{j\neq i} k_j=N} \prod_{j\neq i} \frac{(x_j z_j)^{k_j}}{k_j! \Gamma(\theta_j/2 + 2 + k_j)}. \nonumber \end{eqnarray} We now change $\theta_i/2$ to $\delta_i$ and rewrite the above expression in terms of Dirichlet densities. We use the notation in the statement of Theorem~\ref{thm:exitnwf}: the vector $\mathbf{k}$ represents a vector whose $j$th coordinate is $k_j$ for all $j \neq i$, and whose $i$th coordinate is zero. The vectors $\mathbf{k}+\delta+\mathbf{2}$ and $\mathbf{k}+\mathbf{1}$ represent vector additions of $\mathbf{k}$, $\delta$ and the vector of all twos, and of $\mathbf{k}$ and the vector of all ones, respectively. The factor $\operatorname{Dir} _{n-1}$ is a density with respect to the $(n-1)$-dimensional vector $(x_j, j\neq i)$ with corresponding parameters $(k_{j}+1, j \neq i)$. It can also be interpreted as the conditional density of the $n$-dimensional $\operatorname{Dir}_n(x;\mathbf{k}+\mathbf{1})$, conditioned on $x_i=0$. Hence, for any nonnegative integers $(k_j, j\neq i)$, we have \begin{eqnarray*} &&\frac{z_i^{\delta_i+1}}{\Gamma(\delta_i+1)} \prod_{j\neq i} \frac {z_j^{k_j+\delta_j+1}}{\Gamma(\delta_j + 2 + k_j)} \frac {x_j^{k_j}}{k_j!}\\[-2pt] && \qquad =\frac{(\delta_i+1)}{\Gamma(\delta_0+N+2n)\Gamma(N+n-1)}\\[-2pt] && \qquad \quad {}\times \operatorname{Dir} _n(z;\mathbf{k}+\delta+\mathbf{2}) \operatorname{Dir}_{n-1}(x; \mathbf{k}+\mathbf{1}).
\nonumber \end{eqnarray*} Thus \eqref{denform2} reduces to \begin{eqnarray}\label{denform3} &&(\delta_i+1) \sum_{N=0}^\infty\frac{\Gamma(\delta_0 + 2n + 2N-1) B(N+n-1, N+n+\delta_0) }{\Gamma(\delta_0+N+2n)\Gamma(N+n-1)}\nonumber \\[-9pt] \\[-9pt] && \qquad {}\times\sum_{\mathbf{k}'\mathbf{1}=N} \operatorname{Dir}_n(z;\mathbf{k}+\delta+\mathbf{2}) \operatorname{Dir} _{n-1}(x; \mathbf{k}+\mathbf{1}). \nonumber \end{eqnarray} However, \begin{eqnarray*} &&\frac{\Gamma(\delta_0 + 2n + 2N-1) B(N+n-1, N+n+\delta_0) }{\Gamma (\delta_0+N+2n)\Gamma(N+n-1)}\\[-2pt] && \qquad = \frac{\Gamma(\delta_0 + 2n + 2N-1)}{\Gamma(\delta_0+N+2n)\Gamma (N+n-1)}\frac{\Gamma(N+n-1)\Gamma(N+n+\delta_0)}{\Gamma (2N+2n+\delta_0-1)}\\[-2pt] && \qquad =\frac{\Gamma(N+n+\delta_0)}{\Gamma(N+2n+\delta_0)}. \end{eqnarray*} This completes the proof of formula \eqref{eq:exitnwf}. The probability in \eqref{eq:exithyp} is a direct consequence of Theorem~\ref{thm:besexit} conclusion~(i).\vadjust{\goodbreak} \end{pf*} \section{Exit time}\label{sec:exittime} Let $X=(X_1, \ldots, X_n)$ be distributed as $\operatorname{NWF}(\theta_1/2, \ldots,\break \theta_n/2)$ starting from a point $(x_1, \ldots, x_n)$ in the unit simplex. Let $\sigma_0$ denote the stopping time \[ \sigma_0 = \inf\{ t\ge0\dvtx X_i =0 \mbox{ for some $i$} \}. \] Our objective is to find estimates on the law of $\sigma_0$. We will simplify the situation by assuming that all $x_i=1/n$ and all $\theta_i=\theta$. To this end we use the time-change relationship in Theorem~\ref{bestimechange}. Let $Z=(Z_1, \ldots, Z_n)$ be independent BESQ processes starting from $(z_1, \ldots, z_n)$ as in the set-up of Theorem~\ref{bestimechange}, where each $z_i$ is now one. Then \begin{equation}\label{whatissigma0} \sigma_0 = 4\int_0^{\tau} \frac{ds}{\zeta(s)}, \qquad\zeta (s)=\sum_{i=1}^n Z_i(s). \end{equation} By Theorem~\ref{thm:besexit}, the distribution of $\tau$ is the same as considering $n$ i.i.d.
$\operatorname{Gamma}(\theta/2 +1)$ random variables $G_1, \ldots, G_n$, and defining \begin{equation}\label{whatistau} \tau=\frac{1}{2\max_i G_i}. \end{equation} Our first step will be to prove a concentration estimate for $\max_i G_i$. \begin{lemma}\label{maxmom} Let $G_1, G_2, \ldots, G_n$ be $n$ i.i.d. Gamma random variables with parameter $r/2$, for some $r\ge2$. Let $\chi$ be the random variable $\max_i G_i$. Then, as $n$ tends to infinity, \[ E\sqrt{\chi} = \Theta\bigl( \sqrt{\log n} \bigr). \] \end{lemma} \begin{pf} First let $r\in\mathbb{N}$. Let $\{Z_1(i), \ldots, Z_n(i), i=1,2,\ldots,r \}$ be a collection of i.i.d. standard Normal random variables. Then $2G_j$ has the same law as $Z_j^2(1) +\cdots+ Z_j^2(r)$. Hence \[ E \max_j \abs{Z_j(1)} \le E \sqrt{2\chi} \le\sqrt{r} E \max_{i,j} \abs{Z_j(i)}. \] As $n$ tends to infinity, the right-hand side above converges to $\sqrt {2r\log(rn)}$ while the left-hand side converges to $\sqrt{2\log n}$. This completes the argument for $r\in\mathbb{N}$. For a general $r\ge2$, bound $\chi$ on both sides using the parameter values $\lfloor r \rfloor$ and $\lfloor r \rfloor+ 1$. \end{pf} We also need a version of the logarithmic Sobolev inequality for Gamma random variables, which can be found in several articles, including \cite{BW}. \begin{lemma}[(\cite{BW}, page~2718)]\label{lem:logsobo} Let $\mu^\theta$ denote the product probability measure of $n$ i.i.d. $\operatorname{Gamma}(\theta)$ random variables. Then, for every $f$ on $\mathbb{R}^n$ which is in $C^1$ (i.e., once continuously differentiable), one has \begin{equation}\label{logsobo} \mathrm{Ent}(f^2) \le4 \int\Biggl( \sum_{i=1}^n x_i ( \partial_i f(x) )^2 \Biggr)\,d\mu^{\theta}(x). \end{equation} Here $\mathrm{Ent}(\cdot)$ refers to the entropy defined by \[ \mathrm{Ent}(f^2)=\int f^2 \log(f^2)\,d\mu^\theta- \biggl( \int f^2\,d\mu ^\theta\biggr) \log\biggl(\int f^2\,d\mu^\theta\biggr). \] And $\partial_i$ refers to the partial derivative with respect to the $i$th coordinate.
\end{lemma} \begin{lemma}\label{expconcen} Consider the set-up in Lemma~\ref{lem:logsobo}. Let $F$ be a function on the open positive quadrant (i.e., every $x_i>0$) which is $C^1$ and satisfies \begin{equation}\label{derivbnd} \sum_{i=1}^n x_i ( \partial_i F )^2 \le F. \end{equation} Then the following concentration estimate holds for any $r >0$: \begin{eqnarray*} \mu^\theta\bigl( \sqrt{F} - E_\theta\sqrt{F} \ge r \bigr) &\le \exp( -r^2 ),\qquad\mu^\theta\bigl( \sqrt{F} - E_\theta\sqrt{F} \le- r \bigr) \le\exp( -r^2 ), \end{eqnarray*} where $E_{\theta}\sqrt F = \int\sqrt{F}\,d\mu^\theta$. \end{lemma} \begin{pf} Condition \eqref{derivbnd} implies that $4\sum_{i=1}^n x_i ( \partial_i \sqrt{F} )^2 \le1$. Hence, from the classical Herbst argument (e.g., the monograph by Ledoux~\cite{L}), with a gradient defined by the right-hand side of \eqref{logsobo}, we get \[ \mu^\theta\bigl( \sqrt{F} -E_\theta\sqrt{F} > r \bigr) \le\exp (- r^2 ). \] Repeating the argument with $-\sqrt{F}$ instead of $\sqrt{F}$, we get the result. \end{pf} \begin{theorem}\label{tauconcen} The random variable $\chi=\max_i G_i$, where the $G_i$'s are i.i.d. $\operatorname{Gamma}(\theta)$, satisfies the following concentration estimate: \begin{equation}\label{eq:chicon} P \bigl( \sqrt{\chi} > E\bigl(\sqrt{\chi}\bigr) + r \bigr) \le e^{-r^2} \qquad\mbox{for all } r > 0. \end{equation} \end{theorem} \begin{pf} To prove \eqref{eq:chicon} we start by noting that Lemma \ref {expconcen} is satisfied by the family of $\mathbb{L}^k$-norms, $\{ F_{k}, k > 1 \}$, defined by \[ F_k(x)= \Biggl(\sum_{i=1}^n x_i^k \Biggr)^{1/k}. \] This is because each $F_k$ is smooth (when every $x_i$ is positive) and \begin{equation}\label{ineqcheck} \sum_{i=1}^n x_i (\partial_i F_{k}(x) )^2 = \sum _{i=1}^n x_i \biggl[ \frac{x_i^{k-1}}{ (\sum_{j=1}^n x_j^k )^{1-1/k}} \biggr]^2=\frac{\sum_{i=1}^n x_i^{2k-1}}{ ( \sum_{j=1}^n x_j^k )^{2-2/k}}.
\end{equation} Since, for any nonnegative $y_1, y_2, \ldots, y_n$ and any $\beta >1$, one has \[ \sum_{i=1}^n y_i^{\beta} \le\Biggl( \sum_{i=1}^n y_i \Biggr)^\beta, \] applying it for $y_i= x_i^k$ and $\beta= 2-1/k$, we get \[ \sum_{i=1}^n x_i^{2k-1} \le\Biggl( \sum_{i=1}^n x_i^{k} \Biggr)^{2-1/k}. \] Combining the above with \eqref{ineqcheck}, we get \[ \sum_{i=1}^n x_i (\partial_i F_{k}(x) )^2 \le\Biggl(\sum _{i=1}^n x_i^k \Biggr)^{1/k}=F_k(x). \] Thus $F_{k}$ satisfies condition \eqref{derivbnd}. Since $F_k$ converges pointwise to $\max_i x_i$ as $k$ tends to infinity, an application of the dominated convergence theorem shows that Lemma~\ref{expconcen} remains true for the function $\max_i x_i$. This proves \eqref{eq:chicon}. \end{pf} Our next step will be to prove an estimate on the quantity $\sigma_0$ in \eqref{whatissigma0}. The process $\zeta(s)$ is non-Markovian and not distributed as $Q^{-n\theta}$. However, on a possibly enlarged sample space, one can create a $Q^{-n\theta}$ process $\tilde\zeta$, such that the paths of $\zeta$ and $\tilde\zeta$ are indistinguishable until $\sigma_0$. This is possible by considering the SDE solved by~$\zeta$, \[ \zeta(t)= n - n \theta t + 2\int_0^t \sqrt{\zeta(s)}\,dW(s), \qquad t < \sigma_0. \] To extend the process beyond $\sigma_0$, one concatenates an independent Brownian motion $\widetilde W$ and defines \[ \beta(t) = \cases{\displaystyle W(t) ,&\quad $t \le\sigma_0$,\cr\displaystyle W(\sigma_0) + \widetilde W(t-\sigma_0) ,&\quad $t > \sigma_0$. } \] Then $\beta$ is a Brownian motion in the enlarged filtration. Since the equation defining $Q^{-n\theta}$ admits a strong solution, the process \begin{equation}\label{extendzeta} \tilde\zeta(t) = n - n \theta t + 2\int_0^t \sqrt{\tilde\zeta(s)}\,d\beta(s),\qquad t < T_0, \end{equation} has law $Q^{-n\theta}$ and is pathwise indistinguishable from $\zeta$ until time $\sigma_0$. Thus, in the following discussion, we will treat $\zeta$ as if it were itself distributed as $Q^{-n\theta}$, keeping the above construction in mind.
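The derivative bound \eqref{derivbnd} for the $\mathbb{L}^k$-norms $F_k$, and the power-sum inequality $\sum_i y_i^{\beta} \le (\sum_i y_i)^{\beta}$ on which it rests, are easy to sanity-check numerically. A minimal Python sketch (the test points and exponents are arbitrary choices of ours, not from the paper):

```python
def F(x, k):
    # L^k norm: F_k(x) = (sum_i x_i^k)^(1/k)
    return sum(t ** k for t in x) ** (1.0 / k)

def grad_bound(x, k):
    # sum_i x_i (d_i F_k)^2, with d_i F_k = x_i^(k-1) / (sum_j x_j^k)^(1 - 1/k)
    s = sum(t ** k for t in x)
    return sum(t * (t ** (k - 1) / s ** (1.0 - 1.0 / k)) ** 2 for t in x)

points = [[0.3, 1.2, 2.5], [1.0] * 10, [5.0, 0.01, 0.01, 3.0]]
for x in points:
    for k in [1.5, 2.0, 4.0, 16.0]:
        # condition (derivbnd): sum_i x_i (d_i F_k)^2 <= F_k(x)
        assert grad_bound(x, k) <= F(x, k) + 1e-12
        # power-sum inequality with y_i = x_i^k and beta = 2 - 1/k > 1
        beta = 2.0 - 1.0 / k
        assert sum(t ** (k * beta) for t in x) <= sum(t ** k for t in x) ** beta + 1e-12
print("bounds verified at all test points")
```

The check is pointwise only, of course; the proof above establishes the bound on the whole positive quadrant.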
\begin{theorem}\label{thm:exittime} Let $\mu$ be distributed as an $n$-dimensional $\operatorname{NWF}(\delta, \delta, \ldots, \delta)$ starting from the point $(1/n, 1/n,\ldots, 1/n)$. Let $\sigma_0$ be the first time that any of the coordinates of $\mu$ hit zero. Let \[ a_n = E \max_{1\le i \le n} \sqrt{G_i}, \qquad G_i \stackrel{\mathrm{i.i.d.}}{\sim} \operatorname{Gamma}(\delta+1). \] Then, $a_n = \Theta(\sqrt{\log n})$, and $\sigma_0$ has the law given by \eqref{whatissigma0}, where $\zeta$ is distributed as $Q^{-2n\delta}_n$ and $\tau$ is the random time in \eqref{whatistau}. Moreover, for any $0 < r < a_n$, we get \begin{eqnarray*} P \biggl( \frac{1}{a_n + r} \le\sqrt{2\tau} \le\frac{1}{a_n-r} \biggr) \ge1 - 2 e^{-r^2}. \end{eqnarray*} \end{theorem} \begin{rmk} It is impossible to provide a simple description of the exact distribution of $\sigma_0$, due to the distributional dependence of $\zeta$ and $\tau$. The above theorem shows that $\tau$ concentrates around a constant, and one can compare the distribution of $\sigma_0$ with that of $4\int_0^{\cdot} du/\zeta(u)$, where the upper limit of the integral is a constant. The limiting large deviation behavior of such integrals can be derived by methods as in~\cite{YZ}. \end{rmk} \begin{pf*}{Proof of Theorem~\ref{thm:exittime}} The proof follows from Lemma~\ref{maxmom}, Lemma~\ref{expconcen} applied to $\chi$ as in the proof of Theorem~\ref{tauconcen}, and expression \eqref{whatistau}. \end{pf*} \begin{appendix}\label{appm} \section*{Appendix: Proofs of properties of BESQ processes} \begin{pf*}{Proof of Lemma~\ref{scale}} We use Exercise 3.20 in \cite{RY}, page~311. The scale function for $Q^{\theta}$ for $\theta \ge0$ is well known to be $x^{-\theta/2+1}$ (see~\cite{RY}, page~443). Nearly identical calculations lead to the case when $\theta$ is replaced by $-\theta$, and we obtain the scale function $s(x)= x^{\theta/2+1}$. The speed measure is the measure with the density \[ m'(x)=\frac{2}{s'(x)4x}=\frac{1}{2(\theta/2+1)}x^{-\theta/2-1}.
\] We now use Feller's criterion to check if the origin is an entry and/or exit point (see~\cite{itomckean}, page~108). Note that \begin{eqnarray}\qquad m(\xi,1/2]&=&\frac{1}{2(\theta/2+1)}\int_{\xi}^{1/2}x^{-\theta /2-1}\,dx=\frac{1}{\theta(\theta/2+1)} ( \xi^{-\theta/2} - 2^{\theta/2} ),\nonumber \\[-8pt] \\[-8pt] m(0,\xi]&=&\infty \qquad\mbox{for all positive } \xi. \nonumber \end{eqnarray} Thus \[ \int_0^{1/2} m(\xi,1/2] s(d\xi) < \infty \quad\mbox{and}\quad \int_0^{1/2} m(0,\xi] s(d\xi)=\infty. \] This proves that the origin is an exit and not an entry. Finally, to obtain part (ii) we apply Girsanov's theorem~\cite{RY}, page~327. Let $X$ satisfy the SDE $d X(t)= -\theta \,dt + 2\sqrt {X(t)}\,d\beta(t)$; then we take $D(t)=X^{\theta/2+1}(t)$ (without the normalization, for simplicity) and apply Girsanov. Under the changed measure, there is a standard Brownian motion $\beta^*$, such that \begin{eqnarray*} \beta(t) &=& \beta^*(t) + \int_0^t X^{-\theta/2-1}(s)\,d\langle \beta , D \rangle_s\\ &=&\beta^*(t) + \int_0^t X^{-\theta/2-1}(s) ( \theta+ 2 ) X^{\theta/2+1/2}(s)\,ds\\ &=&\beta^*(t) + ( \theta+ 2 ) \int_0^t X^{-1/2}(s)\,ds. \nonumber \end{eqnarray*} Thus, under the changed measure, \begin{eqnarray*} dX(t) &=& -\theta\, dt + 2 X^{1/2}(t)\,d\beta(t) = -\theta\, dt + 2(\theta +2)\,dt + 2X^{1/2}(t)\,d\beta^*(t) \\ &=& (\theta+4)\,dt + 2X^{1/2}(t)\,d\beta^*(t). \end{eqnarray*} The interpretation as the conditional distribution is classical (see \cite{besselbridge}). \end{pf*} \begin{pf*}{Proof of Lemma~\ref{logct}} For the assertion it is enough to take $t=1$. Note that, under $Q_0^{\theta}$, the coordinate process satisfies time-inversion; that is, the process $\{ t^2 Z(1/t), t\ge0\}$ has law $Q_0^\theta$. Thus, for $0<\varepsilon< 1$, if we define \[ U_{\varepsilon}= \int_{\varepsilon}^1 \frac{du}{Z(u)}= \int _{1}^{1/\varepsilon} \frac{dt}{t^2 Z(1/t)}, \] then $U_\varepsilon$ has the same law as $C_{1/\varepsilon} - C_1 = \int _{1}^{1/\varepsilon}\,du/Z(u)$.
Thus, by~\cite{YZ}, Theorem~1.1, we get $\lim_{\varepsilon\rightarrow 0} U_{\varepsilon}/\log(1/\varepsilon) = (\theta- 2)^{-1}$ almost surely. \end{pf*} \end{appendix} \section*{Acknowledgments} I thank David Aldous, Zhen-Qing Chen, Michel Le\-doux and Jon Wellner for very useful discussions. I thank the anonymous referee for a thorough review which led to a significant improvement of the article. \printaddresses \end{document}
\begin{document} \title{A Weighted Linear Matroid Parity Algorithm\thanks{A preliminary version of this paper has appeared in Proceedings of the 49th Annual ACM Symposium on Theory of Computing (STOC 2017), pp.~264--276.}} \begin{abstract} The matroid parity (or matroid matching) problem, introduced as a common generalization of matching and matroid intersection problems, is so general that it requires an exponential number of oracle calls. Nevertheless, Lov\'asz (1980) showed that this problem admits a min-max formula and a polynomial algorithm for linearly represented matroids. Since then, efficient algorithms have been developed for the linear matroid parity problem. In this paper, we present a combinatorial, deterministic, polynomial-time algorithm for the weighted linear matroid parity problem. The algorithm builds on a polynomial matrix formulation using the Pfaffian and adopts a primal-dual approach based on the augmenting path algorithm of Gabow and Stallmann (1986) for the unweighted problem. \end{abstract} \section{Introduction} The matroid parity problem \cite{Law76} (also known as the matchoid problem~\cite{Jen74} or the matroid matching problem~\cite{Lov78}) was introduced as a common generalization of matching and matroid intersection problems. In the general case, it requires an exponential number of independence oracle calls \cite{JK82,Lov80a}, and a PTAS has been developed only recently \cite{LSV13}. Nevertheless, Lov\'asz~\cite{Lov78,Lov80a,Lov80b} showed that the problem admits a min-max theorem for linear matroids and presented a polynomial algorithm that is applicable if the matroid in question is represented by a matrix. Since then, efficient combinatorial algorithms have been developed for this linear matroid parity problem \cite{GS86,Orl08,OV92}. Gabow and Stallmann~\cite{GS86} developed an augmenting path algorithm with the aid of a linear algebraic trick, which was later extended to the linear delta-matroid parity problem~\cite{GIM03}.
Orlin and Vande Vate~\cite{OV92} provided an algorithm that solves this problem by repeatedly solving matroid intersection problems coming from the min-max theorem. Later, Orlin~\cite{Orl08} improved the running time bound of this algorithm. The current best deterministic running time bound due to \cite{GS86,Orl08} is $O(nm^\omega)$, where $n$ is the cardinality of the ground set, $m$ is the rank of the linear matroid, and $\omega$ is the matrix multiplication exponent, which is at most $2.38$. These combinatorial algorithms, however, tend to be complicated. An alternative approach that leads to simpler randomized algorithms is based on an algebraic method. This approach originated with Lov\'asz~\cite{Lov79}, who formulated the linear matroid parity problem as rank computation of a skew-symmetric matrix that contains independent parameters. Substituting randomly generated numbers for these parameters enables us to compute the optimal value with high probability. A straightforward adaptation of this approach requires additional iterations to find an optimal solution. Cheung, Lau, and Leung \cite{CLL14} have improved this algorithm to run in $O(nm^{\omega-1})$ time, extending the techniques of Harvey~\cite{Har09} developed for matching and matroid intersection. While matching and matroid intersection algorithms~\cite{Edm65,Edm68} have been successfully extended to their weighted versions~\cite{Edm65b,Edm79,IT76,Law75}, no polynomial algorithm had been known for the weighted linear matroid parity problem for more than three decades. Camerini, Galbiati, and Maffioli~\cite{CGM92} developed a random pseudopolynomial algorithm for the weighted linear matroid parity problem by introducing a polynomial matrix formulation that extends the matrix formulation of Lov\'asz~\cite{Lov79}. This algorithm was later improved by Cheung, Lau, and Leung~\cite{CLL14}. The resulting complexity, however, remained pseudopolynomial.
Tong, Lawler, and Vazirani~\cite{TLV84} observed that the weighted matroid parity problem on gammoids can be solved in polynomial time by reduction to the weighted matching problem. As a relaxation of the matroid matching polytope, Vande Vate~\cite{Van92} introduced the fractional matroid matching polytope. Gijswijt and Pap~\cite{GP13} devised a polynomial algorithm for optimizing linear functions over this polytope. The polytope was shown to be half-integral, and the algorithm does not necessarily yield an integral solution. This paper presents a combinatorial, deterministic, polynomial-time algorithm for the weighted linear matroid parity problem. To do so, we combine the algebraic approach and the augmenting path technique together with the use of node potentials. The algorithm builds on a polynomial matrix formulation, which naturally extends the one discussed in \cite{GI05} for the unweighted problem. The algorithm employs a modification of the augmenting path search procedure for the unweighted problem by Gabow and Stallmann~\cite{GS86}. It adopts a primal-dual approach without writing an explicit LP description. The correctness proof for the optimality is based on the idea of combinatorial relaxation for polynomial matrices due to Murota~\cite{Mur95}. The algorithm is shown to require $O(n^3m)$ arithmetic operations. This leads to a strongly polynomial algorithm for linear matroids represented over a finite field. For linear matroids represented over the rational field, one can exploit our algorithm to solve the problem in polynomial time. Independently of the present work, Gyula Pap has obtained another combinatorial, deterministic, polynomial-time algorithm for the weighted linear matroid parity problem based on a different approach. The matroid matching theory of Lov\'asz \cite{Lov80b} in fact deals with a more general class of matroids that enjoy the double circuit property. Dress and Lov\'asz \cite{DL87} showed that algebraic matroids satisfy this property.
Subsequently, Hochst\"attler and Kern \cite{HK89} showed the same phenomenon for pseudomodular matroids. The min-max theorem follows for this class of matroids. To design a polynomial algorithm, however, one has to establish how to represent those matroids in a compact manner. Extending this approach to the weighted problem is left for possible future investigation. The linear matroid parity problem finds various applications: structural solvability analysis of passive electric networks \cite{Mil74}, pinning down planar skeleton structures \cite{LP86}, and maximum genus cellular embedding of graphs \cite{FGM88}. We describe below two interesting applications of the weighted matroid parity problem in combinatorial optimization. A $T$-path in a graph is a path between two distinct vertices in the terminal set $T$. Mader~\cite{Mad78} showed a min-max characterization of the maximum number of openly disjoint $T$-paths. The problem can be equivalently formulated in terms of ${\mathcal S}$-paths, where ${\mathcal S}$ is a partition of $T$ and an ${\mathcal S}$-path is a $T$-path between two different components of ${\mathcal S}$. Lov\'asz~\cite{Lov80b} formulated the problem as a matroid matching problem and showed that one can find a maximum number of disjoint ${\mathcal S}$-paths in polynomial time. Schrijver~\cite{Sch03} has described a more direct reduction to the linear matroid parity problem. The disjoint ${\cal S}$-paths problem has been extended to path packing problems in group-labeled graphs \cite{CCG08,CGGGLS06,Pap07}. Tanigawa and Yamaguchi~\cite{TY16} have shown that these problems also reduce to the matroid matching problem with double circuit property. Yamaguchi~\cite{Yam16a} clarifies a characterization of the groups for which those problems reduce to the linear matroid parity problem. As a weighted version of the disjoint ${\mathcal S}$-paths problem, it is quite natural to think of finding disjoint ${\mathcal S}$-paths of minimum total length. 
It is not immediately clear that this problem reduces to the weighted linear matroid parity problem. A recent paper of Yamaguchi~\cite{Yam16b} clarifies that this is indeed the case. He also shows that the reduction results on the path packing problems on group-labeled graphs also extend to the weighted version. The weighted linear matroid parity problem is also useful in the design of approximation algorithms. Pr\"omel and Steger~\cite{PS00} provided an approximation algorithm for the Steiner tree problem. Given an instance of the Steiner tree problem, their algorithm constructs a hypergraph on the terminal set such that each hyperedge corresponds to a terminal subset of cardinality at most three, with the shortest length of a Steiner tree for the terminal subset taken as the cost of the hyperedge. The problem of finding a minimum cost spanning hypertree in the resulting hypergraph can be converted to the problem of finding a minimum spanning tree in a 3-uniform hypergraph, which is a special case of the weighted parity problem for graphic matroids. The minimum spanning hypertree thus obtained costs at most 5/3 of the optimal value of the original Steiner tree problem, and one can construct a Steiner tree from the spanning hypertree without increasing the cost. Thus they gave a 5/3-approximation algorithm for the Steiner tree problem via weighted linear matroid parity. This is a very interesting approach that suggests further use of weighted linear matroid parity in the design of approximation algorithms, even though the performance ratio is larger than the current best one for the Steiner tree problem \cite{BGRS13}. \section{The Minimum-Weight Parity Base Problem} \label{sec:problemdef} Let $A$ be a matrix of row-full rank over an arbitrary field $\mathbf{K}$ with row set $U$ and column set $V$. Assume that both $m=|U|$ and $n=|V|$ are even. The column set $V$ is partitioned into pairs, called {\em lines}. Each $v\in V$ has its {\em mate} $\bar{v}$ such that $\{v,\bar{v}\}$ is a line.
We denote by $L$ the set of lines, and suppose that each line $\ell\in L$ has a weight $w_\ell \in \mathbb{R}$. The linear dependence of the column vectors naturally defines a matroid $\mathbf{M}(A)$ on $V$. Let ${\cal B}$ denote its base family. A base $B\in{\cal B}$ is called a {\em parity base} if it consists of lines. As a weighted version of the linear matroid parity problem, we will consider the problem of finding a parity base of minimum weight, where the weight of a parity base is the sum of the weights of lines in it. We denote the optimal value by $\zeta(A,L,w)$. This problem generalizes finding a minimum-weight perfect matching in graphs and a minimum-weight common base of a pair of linear matroids on the same ground set. As another weighted version of the matroid parity problem, one can think of finding a matching (independent parity set) of maximum weight. This problem can be easily reduced to the minimum-weight parity base problem. Associated with the minimum-weight parity base problem, we consider a skew-symmetric polynomial matrix $\Phi_A(\theta)$ in variable $\theta$ defined by $$\Phi_A(\theta)=\begin{pmatrix} O & A \\ -A^\top & D(\theta) \end{pmatrix},$$ where $D(\theta)$ is a block-diagonal matrix in which each block is a $2\times 2$ skew-symmetric polynomial matrix $D_\ell(\theta)=\begin{pmatrix} 0 & -\tau_\ell\theta^{w_\ell} \\ \tau_\ell\theta^{w_\ell} & 0 \end{pmatrix}$ corresponding to a line $\ell\in L$. Assume that the coefficients $\tau_\ell$ are independent parameters (or indeterminates). For a skew-symmetric matrix $\Phi$ whose rows and columns are indexed by $W$, the {\em support graph} of $\Phi$ is the graph $\Gamma=(W,E)$ with edge set $E= \{(u, v) \mid \Phi_{uv}\neq 0\}$.
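As a concrete illustration of $\zeta(A,L,w)$ and of the matrix $\Phi_A(\theta)$, the following self-contained sketch (a brute-force check, not the algorithm of this paper) enumerates the parity bases of a toy instance and compares the result with the degree of $\det\Phi_A(\theta)$, using the fact, recalled below, that $\det\Phi_A=(\mathrm{Pf}\,\Phi_A)^2$. The matrix, the weights, and the substitution $\tau_\ell=1$ are illustrative choices ($\tau_\ell=1$ is generic enough here because no cancellation occurs).

```python
from fractions import Fraction
from itertools import combinations, permutations

def det(M):
    """Determinant over the rationals by Gaussian elimination."""
    M = [[Fraction(x) for x in row] for row in M]
    n, d = len(M), Fraction(1)
    for j in range(n):
        piv = next((i for i in range(j, n) if M[i][j] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != j:
            M[j], M[piv] = M[piv], M[j]
            d = -d
        d *= M[j][j]
        for i in range(j + 1, n):
            f = M[i][j] / M[j][j]
            for k in range(j, n):
                M[i][k] -= f * M[j][k]
    return d

def zeta(A, lines, w):
    """Brute force: cheapest union of lines whose columns form a base."""
    m, best = len(A), None
    for chosen in combinations(range(len(lines)), m // 2):
        cols = [v for i in chosen for v in lines[i]]
        if det([[A[r][c] for c in cols] for r in range(m)]) != 0:
            cost = sum(w[i] for i in chosen)
            best = cost if best is None else min(best, cost)
    return best  # None means no parity base exists

# Polynomials in theta represented as {degree: coefficient} dictionaries.
def pmul(p, q):
    r = {}
    for d1, c1 in p.items():
        for d2, c2 in q.items():
            r[d1 + d2] = r.get(d1 + d2, 0) + c1 * c2
    return r

def padd(p, q):
    r = dict(p)
    for dg, c in q.items():
        r[dg] = r.get(dg, 0) + c
    return {dg: c for dg, c in r.items() if c != 0}

def poly_det(M):
    """Leibniz expansion; fine for a sparse 6x6 polynomial matrix (720 terms)."""
    n, total = len(M), {}
    for perm in permutations(range(n)):
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if perm[i] > perm[j]:
                    sign = -sign
        term = {0: sign}
        for i in range(n):
            term = pmul(term, M[i][perm[i]])
            if not term:  # a zero entry kills the whole product
                break
        total = padd(total, term)
    return total

A = [[1, 0, 1, 0], [0, 1, 0, 1]]
lines = [(0, 1), (2, 3)]
w = [5, 3]

# Build Phi_A(theta); rows/cols ordered u0, u1, v0, v1, v2, v3, all tau_l = 1.
Phi = [[{} for _ in range(6)] for _ in range(6)]
for i in range(2):
    for j in range(4):
        if A[i][j]:
            Phi[i][2 + j] = {0: A[i][j]}
            Phi[2 + j][i] = {0: -A[i][j]}
for (x, y), wl in zip(lines, w):
    Phi[2 + x][2 + y] = {wl: -1}
    Phi[2 + y][2 + x] = {wl: 1}

deg_det = max(poly_det(Phi))  # det = Pf^2, so deg Pf = deg_det / 2
print(zeta(A, lines, w), sum(w) - deg_det // 2)
```

On this instance both computations agree: the determinant has degree $10$, so $\mathrm{deg}_\theta\,\mathrm{Pf}=5$ and $\zeta = 5+3-5 = 3$, attained by the line $\{2,3\}$.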
We denote by $\mathrm{Pf}\,\Phi$ the {\em Pfaffian} of $\Phi$, which is defined as follows: $$ \mathrm{Pf}\,\Phi = \sum_{M} \sigma_M \prod_{(u, v) \in M} \Phi_{uv}, $$ where the sum is taken over all perfect matchings $M$ in $\Gamma$ and $\sigma_M$ takes $\pm 1$ in a suitable manner, see~\cite{LP86}. It is well-known that $\det \Phi=(\mathrm{Pf}\,\Phi)^2$ and $\mathrm{Pf}(S\Phi S^\top)=\mathrm{Pf}\,\Phi\cdot\det S$ for any square matrix $S$. We have the following lemma that associates the optimal value of the minimum-weight parity base problem with $\mathrm{Pf}\,\Phi_A(\theta)$. \begin{lemma} \label{lem:Pf} The optimal value of the minimum-weight parity base problem is given by $$\zeta(A,L,w)=\sum_{\ell\in L}w_\ell-\mathrm{deg}_\theta\,\mathrm{Pf}\,\Phi_A(\theta).$$ In particular, if $\mathrm{Pf}\,\Phi_A(\theta)=0$ (i.e., $\mathrm{deg}_\theta\,\mathrm{Pf}\,\Phi_A(\theta) = -\infty$), then there is no parity base. \end{lemma} \begin{proof} We split $\Phi_A(\theta)$ into $\Psi_A$ and $\Delta(\theta)$ such that \begin{align*} & \Phi_A(\theta) = \Psi_A + \Delta(\theta), & &\Psi_A=\begin{pmatrix} O & A \\ -A^\top & O \end{pmatrix}, & \Delta(\theta)=\begin{pmatrix} O & O \\ O & D(\theta) \end{pmatrix}. \end{align*} The row and column sets of these skew-symmetric matrices are indexed by $W:=U\cup V$. By \cite[Lemma 7.3.20]{Mur00}, we have $$\mathrm{Pf}\,\Phi_A(\theta) = \sum_{X \subseteq W} \pm \mathrm{Pf}\,\Psi_A[W \setminus X] \cdot \mathrm{Pf}\,\Delta(\theta)[X],$$ where each sign is determined by the choice of $X$, $\Delta(\theta)[X]$ is the principal submatrix of $\Delta(\theta)$ whose rows and columns are both indexed by $X$, and $\Psi_A[W \setminus X]$ is defined in a similar way. One can see that $\mathrm{Pf}\,\Delta(\theta)[X] \not= 0$ if and only if $X \subseteq V$ (or, equivalently $B:=V\setminus X$) is a union of lines.
One can also see for $X \subseteq V$ that $\mathrm{Pf}\,\Psi_A[W \setminus X] \not= 0$ if and only if $A[U, V \setminus X]$ is nonsingular, which means that $B$ is a base of $\mathbf{M}(A)$. Thus, we have $$ \mathrm{Pf}\,\Phi_A (\theta) = \sum_{B} \pm \mathrm{Pf}\,\Psi_A[U \cup B] \cdot \mathrm{Pf}\,\Delta(\theta)[V \setminus B], $$ where the sum is taken over all parity bases $B$. Note that no term is canceled out in the summation, because each term contains a distinct set of independent parameters. For a parity base $B$, we have $$\mathrm{deg}_\theta (\mathrm{Pf}\,\Psi_A[U \cup B] \cdot \mathrm{Pf}\,\Delta(\theta)[V \setminus B]) = \sum_{\ell\subseteq V \setminus B}w_\ell = \sum_{\ell\in L}w_\ell - \sum_{\ell\subseteq B}w_\ell,$$ which implies that the minimum weight of a parity base is $\displaystyle\sum_{\ell\in L}w_\ell-\mathrm{deg}_\theta\,\mathrm{Pf}\,\Phi_A (\theta)$. \end{proof} Note that Lemma~\ref{lem:Pf} does not immediately lead to a (randomized) polynomial-time algorithm for the minimum weight parity base problem. This is because computing the degree of the Pfaffian of a skew-symmetric polynomial matrix is not so easy. Indeed, the algorithms in \cite{CGM92,CLL14} for the weighted linear matroid parity problem compute the degree of the Pfaffian of another skew-symmetric polynomial matrix, which results in pseudopolynomial complexity. \section{Algorithm Outline} \label{sec:algorithm} In this section, we describe the outline of our algorithm for solving the minimum-weight parity base problem. We regard the column set $V$ as a vertex set. The algorithm works on a vertex set $V^*\supseteq V$ that includes some new vertices generated during the execution. The algorithm keeps a nested (laminar) collection $\Lambda=\{H_1,\ldots,H_{|\Lambda|}\}$ of vertex subsets of $V^*$ such that $H_i \cap V$ is a union of lines for each $i$.
The indices are chosen so that, for any two members $H_i, H_j \in \Lambda$ with $i < j$, either $H_i \cap H_j = \emptyset$ or $H_i \subsetneq H_j$ holds. Each member of $\Lambda$ is called a {\em blossom}. The algorithm maintains a potential $p:V^*\to\mathbb{R}$ and a nonnegative variable $q:\Lambda\to\mathbb{R}_+$, which are collectively called {\em dual variables}. We note that although $p$ and $q$ are called dual variables, they do not correspond to dual variables of an LP-relaxation of the minimum-weight parity base problem. Indeed, this paper presents neither an explicit LP-formulation nor an explicit min-max formula for the minimum-weight parity base problem. We will show instead that one can obtain a parity base $B$ that admits feasible dual variables $p$ and $q$, which provide a certificate for the optimality of $B$. The algorithm starts by splitting the weight $w_\ell$ into $p(v)$ and $p(\bar{v})$ for each line $\ell=\{v,\bar{v}\}\in L$, i.e., $p(v) + p(\bar{v}) = w_\ell$. Then it executes the greedy algorithm for finding a base $B\in{\cal B}$ with minimum value of $p(B)=\sum_{u\in B}p(u)$. If $B$ is a parity base, then $B$ is obviously a minimum-weight parity base. Otherwise, there exists a line $\ell=\{v,\bar{v}\}$ exactly one of whose two vertices belongs to $B$. Such a line is called a {\em source line} and each vertex in a source line is called a {\em source vertex}. A line that is not a source line is called a {\em normal line}. The algorithm initializes $\Lambda:=\emptyset$ and proceeds with iterations of primal and dual updates, keeping dual feasibility. In each iteration, the algorithm applies the breadth-first search to find an augmenting path. In the meantime, the algorithm sometimes detects a new blossom and adds it to $\Lambda$. If an augmenting path $P$ is found, the algorithm updates $B$ along $P$. This will reduce the number of source lines by two.
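The initialization just described, splitting each line weight between the potentials of its two vertices and then running the greedy algorithm for a $p$-minimal base, can be sketched as follows. The instance is an illustrative choice, and the greedy rule is the standard one for matroids: scan columns in order of increasing potential and keep a column whenever it increases the rank.

```python
from fractions import Fraction

def rank(cols, A):
    """Rank over Q of the submatrix of A formed by the given columns."""
    M = [[Fraction(A[i][j]) for j in cols] for i in range(len(A))]
    r = 0
    for j in range(len(cols)):
        piv = next((i for i in range(r, len(M)) if M[i][j] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, len(M)):
            f = M[i][j] / M[r][j]
            for k in range(j, len(cols)):
                M[i][k] -= f * M[r][k]
        r += 1
    return r

def greedy_min_base(A, p):
    """Greedy: scan columns by increasing potential, keep rank-increasing ones."""
    B = []
    for v in sorted(range(len(A[0])), key=lambda v: p[v]):
        if rank(B + [v], A) > len(B):
            B.append(v)
    return sorted(B)

A = [[1, 0, 1, 0],
     [0, 1, 0, 1]]
# Lines {0,1} (weight 5) and {2,3} (weight 3), each weight split evenly.
p = [2.5, 2.5, 1.5, 1.5]
print(greedy_min_base(A, p))
```

Here the greedy base $\{2,3\}$ happens to be a parity base (of weight $3$), so on this instance the algorithm would stop immediately; when the greedy base instead splits some line, that line becomes a source line and the search for augmenting paths begins.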
If the search procedure terminates without finding an augmenting path, the algorithm updates the dual variables to create new tight edges. The algorithm repeats this process until $B$ becomes a parity base. Then $B$ is a minimum-weight parity base. See Fig.~\ref{fig:revisionfig31} for a flowchart of our algorithm. \begin{figure} \caption{Flow chart of our algorithm. The conditions (BT1), (BT2), and (DF1)--(DF3) always hold, whereas (BR1)--(BR5) do not necessarily hold during the augmentation procedure in Section~\ref{sec:augmentation}.\label{fig:revisionfig31}} \end{figure} The rest of this paper is organized as follows. In Section~\ref{sec:blossoms}, we introduce new vertices and operations attached to blossoms. We describe some properties of blossoms kept in the algorithm, which we denote by (BT1) and (BT2). The feasibility of the dual variables is defined in Section~\ref{sec:dual}. The dual feasibility is denoted by (DF1)--(DF3). We also describe several properties of feasible dual variables that are used in other sections. In Section~\ref{sec:optimality}, we show that a parity base that admits feasible dual variables attains the minimum weight. The proof is based on the polynomial matrix formulation of the minimum-weight parity base problem given in Section~\ref{sec:problemdef}. Combining this with some properties of the dual variables and the duality of the maximum-weight matching problem, we show the optimality of such a parity base. In Section~\ref{sec:search}, we describe a search procedure for an augmenting path. We first define an augmenting path, and then we describe our search procedure. Roughly, our procedure finds a part of the augmenting path outside the blossoms. The routing in each blossom is determined by a prescribed vertex set that satisfies some conditions, which we denote by (BR1)--(BR5). Note that the search procedure may create new blossoms. The validity of the procedure is shown in Section~\ref{sec:validity}.
We show that the output of the procedure is an augmenting path by using the properties (BR1)--(BR5) of the routing in each blossom. We also show that creating a new blossom does not violate the conditions (BT1), (BT2), (DF1)--(DF3), and (BR1)--(BR5). In Section~\ref{sec:dualupdatealgo}, we describe how to update the dual variables when the search procedure terminates without finding an augmenting path. We obtain new tight edges by updating the dual variables, and repeat the search procedure. We also show that if we cannot obtain new tight edges, then the instance has no feasible solution, i.e., there is no parity base. If the search procedure succeeds in finding an augmenting path $P$, the algorithm updates the base $B$ along $P$. The details of this process are presented in Section~\ref{sec:augmentation}. Basically, we replace the base $B$ with the symmetric difference of $B$ and $P$. In addition, since there exist new vertices corresponding to the blossoms, we update them carefully to keep the conditions (BT1), (BT2), and (DF1)--(DF3). In order to define a new routing in each blossom, we apply the search procedure in each blossom, which enables us to keep the conditions (BR1)--(BR5). Finally, in Section~\ref{sec:complexity}, we describe the entire algorithm and analyze its running time. We show that our algorithm solves the minimum-weight parity base problem in $O(n^3 m)$ time when $\mathbf{K}$ is a finite field of fixed order. When $\mathbf{K} = \mathbb{Q}$, it is not obvious that a direct application of our algorithm runs in polynomial time. However, we show that the minimum-weight parity base problem over $\mathbb Q$ can be solved in polynomial time by applying our algorithm over a sequence of finite fields. \section{Blossoms} \label{sec:blossoms} In this section, we introduce buds and tips attached to blossoms and construct auxiliary matrices that will be used in the definition of dual feasibility. Each blossom contains at most one source line. 
A blossom that contains a source line is called a {\em source blossom}. A blossom with no source line is called a {\em normal blossom}. Let $\Lambda_{\rm s}$ and $\Lambda_{\rm n}$ denote the sets of source blossoms and normal blossoms, respectively. Then, $\Lambda = \Lambda_{\rm s} \cup \Lambda_{\rm n}$. Let $\lambda$ denote the number of blossoms in $\Lambda$. Each normal blossom $H_i\in\Lambda_{\rm n}$ has a pair of associated vertices $b_i$ and $t_i$ outside $V$, which are called the {\em bud} and the {\em tip} of $H_i$, respectively. The pair $\{b_i,t_i\}$ is called a {\em dummy line}. To simplify the description, we denote $\bar b_i = t_i$ and $\bar t_i = b_i$. The vertex set $V^*$ is defined by $V^* := V \cup T$ with $T:=\{b_i,t_i \mid H_i \in \Lambda_{\rm n}\}$. The tip $t_i$ is contained in $H_i$, whereas the bud $b_i$ is outside $H_i$. For every $i, j$ with $H_j \in \Lambda_{\rm n}$, we have $t_j\in H_i$ if and only if $H_j \subseteq H_i$. Similarly, we have $b_j\in H_i$ if and only if $H_j\subsetneq H_i$. Thus, each normal blossom $H_i$ is of odd cardinality. The algorithm keeps a subset $B^* \subseteq V^*$ such that $B^* \cap V=B$ and $|B^* \cap \{b_i, t_i\}| = 1$ for each $H_i \in \Lambda_{\rm n}$. It also keeps $H_i \cap V \not = H_j \cap V$ for distinct $H_i, H_j \in \Lambda$ and $H_i \cap V \not = \emptyset$ for each $H_i \in \Lambda$. This implies that $|\Lambda| = O(n)$, where $n = |V|$, and hence $|V^*| = O(n)$. Recall that $U$ is the row set of $A$. The {\em fundamental cocircuit matrix} $C$ with respect to a base $B$ is a matrix with row set $B$ and column set $V\setminus B$ obtained by $C=A[U,B]^{-1}A[U,V\setminus B]$. In other words, $(I \ C)$ is obtained from $A$ by identifying $B$ and $U$, applying row transformations, and changing the ordering of columns. For a subset $S\subseteq V$, we have $B\triangle S\in{\cal B}$ if and only if $C[S]:=C[S\cap B,S\setminus B]$ is nonsingular. Here, $\triangle$ denotes the symmetric difference. 
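The exchange criterion just stated can be checked exhaustively on a toy matrix, together with the Schur-complement update of $C$ under such an exchange (the operation made precise in the lemma that follows). The matrix and the exchange set below are illustrative choices; entries are kept exact with rationals, and label-indexed dictionaries make the comparison independent of row/column ordering.

```python
from fractions import Fraction
from itertools import combinations

def det(M):
    """Determinant over the rationals by Gaussian elimination."""
    M = [[Fraction(x) for x in row] for row in M]
    n, d = len(M), Fraction(1)
    for j in range(n):
        piv = next((i for i in range(j, n) if M[i][j] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != j:
            M[j], M[piv] = M[piv], M[j]
            d = -d
        d *= M[j][j]
        for i in range(j + 1, n):
            f = M[i][j] / M[j][j]
            for k in range(j, n):
                M[i][k] -= f * M[j][k]
    return d

def cocircuit(A, base):
    """C = A[U,base]^{-1} A[U,V\\base] as {(row_label, col_label): entry}.

    Assumes the base columns are linearly independent."""
    m, n = len(A), len(A[0])
    other = [j for j in range(n) if j not in base]
    M = [[Fraction(A[i][j]) for j in base + other] for i in range(m)]
    for col in range(m):  # Gauss-Jordan on the base part
        piv = next(i for i in range(col, m) if M[i][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        pv = M[col][col]
        M[col] = [x / pv for x in M[col]]
        for i in range(m):
            if i != col and M[i][col] != 0:
                f = M[i][col]
                M[i] = [x - f * y for x, y in zip(M[i], M[col])]
    return {(base[i], other[j]): M[i][m + j]
            for i in range(m) for j in range(len(other))}

A = [[1, 0, 1, 1],
     [0, 1, 1, 0]]
B = [0, 1]
C = cocircuit(A, B)

# Exchange criterion (1x1 case): B triangle S is a base iff C[S] is nonsingular.
for SB in combinations(B, 1):
    for SN in combinations([2, 3], 1):
        newB = sorted(set(B) - set(SB) | set(SN))
        is_base = det([[A[i][j] for j in newB] for i in range(2)]) != 0
        assert is_base == (C[(SB[0], SN[0])] != 0)

# Schur-complement update for S = {0, 2} (here C[S] = C_{0,2} = 1 is nonsingular):
a = C[(0, 2)]
Cpiv = {(2, 0): 1 / a,                               # alpha^{-1}
        (2, 3): C[(0, 3)] / a,                       # alpha^{-1} beta
        (1, 0): -C[(1, 2)] / a,                      # -gamma alpha^{-1}
        (1, 3): C[(1, 3)] - C[(1, 2)] / a * C[(0, 3)]}  # Schur complement
print(Cpiv == cocircuit(A, [1, 2]))
```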
Then the following lemma characterizes the fundamental cocircuit matrix with respect to $B\triangle S$. \begin{lemma} \label{lem:pivot} Suppose that $C$ is in the form of $C=\begin{pmatrix} \alpha & \beta \\ \gamma & \delta \end{pmatrix}$ with $\alpha=C[S]$ being nonsingular. Then \begin{equation*} C':=\begin{pmatrix} \alpha^{-1} & \alpha^{-1}\beta \\ -\gamma\alpha^{-1} & \delta-\gamma\alpha^{-1}\beta \end{pmatrix} \end{equation*} is the fundamental cocircuit matrix with respect to $B\triangle S$. \end{lemma} \begin{proof} In order to obtain the fundamental cocircuit matrix with respect to $B\triangle S$, we apply elementary row transformations to $(I \ C) = \begin{pmatrix} I & 0 & \alpha & \beta \\ 0 & I & \gamma & \delta \end{pmatrix}$ so that the columns corresponding to $B \triangle S$ form the identity matrix. Hence, the obtained matrix is $$ \begin{pmatrix} \alpha^{-1} & 0 \\ - \gamma \alpha^{-1} & I \end{pmatrix} \begin{pmatrix} I & 0 & \alpha & \beta \\ 0 & I & \gamma & \delta \end{pmatrix} = \begin{pmatrix} \alpha^{-1} & 0 & I & \alpha^{-1} \beta \\ - \gamma \alpha^{-1} & I & 0 & \delta - \gamma \alpha^{-1} \beta \end{pmatrix}, $$ which shows that $C'$ is the fundamental cocircuit matrix with respect to $B\triangle S$. \end{proof} This operation converting $C$ to $C'$ is called {\em pivoting around $S$}. We have the following property on the nonsingularity of their submatrices. \begin{lemma}\label{lem:pivotsing} Let $C$ and $C'$ be the fundamental cocircuit matrices with respect to $B$ and $B \triangle S$, respectively. Then, for any $X \subseteq V$, $C[X]$ is nonsingular if and only if $C'[X \triangle S]$ is nonsingular. \end{lemma} \begin{proof} Consider the matrix $(I \ C)$ whose column set is equal to $V$. Then, $C[X]$ is nonsingular if and only if the columns of $(I \ C)$ indexed by $X \triangle B$ form a nonsingular matrix.
This is equivalent to saying that the corresponding columns of $(I \ C')$ form a nonsingular matrix, which means that $C'[X \triangle B \triangle (B \triangle S)] = C'[X \triangle S]$ is nonsingular. \end{proof} The algorithm keeps a matrix $C^*$ whose row and column sets are $B^*$ and $V^* \setminus B^*$, respectively. The matrix $C^*$ is obtained from $C$ by attaching additional rows/columns corresponding to $T$, and then pivoting around $T$. Thus we have $B^*\cap V=B$. In other words, the matrix obtained from $C^*$ by pivoting around $T$ contains $C$ as a submatrix (see (BT1) below). If the row and column sets of $C^*$ are clear from the context, for a vertex set $X \subseteq V^*$, we denote $C^*[X]=C^*[X \cap B^*, X \setminus B^*]$. In our algorithm, the matrix $C^*$ satisfies the following properties. \begin{description} \item[(BT1)] Let $C'$ be the matrix obtained from $C^*$ by pivoting around $T$. Then, $C'[V]$ is the fundamental cocircuit matrix with respect to $B = B^*\cap V$. \item[(BT2)] Each normal blossom $H_i \in \Lambda_{\rm n}$ satisfies the following. \begin{itemize} \item If $b_i\in B^*$ and $t_i \in V^* \setminus B^*$, then $C^*_{b_it_i} \neq 0$, $C^*_{b_i v} = 0$ for any $v \in H_i\setminus B^*$ with $v\neq t_i$, and $C^*_{u t_i} = 0$ for any $u \in B^* \setminus H_i$ with $u\neq b_i$ (see Fig.~\ref{fig:41}). \item If $b_i\in V^* \setminus B^*$ and $t_i \in B^*$, then $C^*_{t_i b_i} \neq 0$, $C^*_{u b_i} = 0$ for any $u \in B^*\cap H_i$ with $u\neq t_i$, and $C^*_{t_i v} = 0$ for any $v \in (V^*\setminus B^*)\setminus H_i$ with $v\neq b_i$. \end{itemize} \end{description} \begin{figure} \caption{Illustration of (BT2). In the right figure, solid lines represent nonzero entries of $C^*$.} \label{fig:41} \end{figure} \section{Dual Feasibility} \label{sec:dual} In this section, we define feasibility of the dual variables and show their properties. Our algorithm for the minimum-weight parity base problem is designed so that it keeps the dual feasibility.
Recall that a potential $p:V^*\to\mathbb{R}$ and a nonnegative variable $q:\Lambda\to\mathbb{R}_+$ are collectively called dual variables. A blossom $H_i$ is said to be {\em positive} if $q(H_i) >0$. For distinct vertices $u, v \in V^*$ and for $H_i \in \Lambda$, we say that a pair $(u, v)$ {\em crosses} $H_i$ if $|\{u, v\} \cap H_i| =1$. For distinct $u, v \in V^*$, we denote by $I_{uv}$ the set of indices $i\in \{1,\ldots,|\Lambda|\}$ such that $(u, v)$ crosses $H_i$. We introduce the set $F^*$ of ordered vertex pairs defined by $$ F^* := \{ (u, v) \mid u\in B^*,\,v\in V^*\setminus B^*,\, C^*_{uv}\neq 0\}. $$ For distinct $u, v \in V^*$, we define \[ Q_{uv} := \sum_{i \in I_{uv}} q(H_i). \] The dual variables are called {\em feasible with respect to $C^*$ and $\Lambda$} if they satisfy the following. \begin{description} \item[(DF1)] $p(v)+p(\bar{v})=w_\ell$ for every line $\ell=\{v,\bar{v}\}\in L$. \item[(DF2)] $p(v)-p(u)\geq Q_{uv}$ for every $(u,v) \in F^*$. \item[(DF3)] $p(v)-p(u)=q(H_i)$ for every $H_i\in\Lambda_{\rm n}$ and $(u,v)\in F^*$ with $\{u,v\}=\{b_i,t_i\}$. \end{description} If no confusion arises, we omit $C^*$ and $\Lambda$ when we discuss dual feasibility. Note that if $\Lambda = \emptyset$, then $F^*$ corresponds to the nonzero entries of $C = C^*$, which shows that $(B \setminus \{u\}) \cup \{v\} \in {\cal B}$ holds for $(u,v) \in F^*$. This implies that (DF2) holds if $B\in{\cal B}$ is a base minimizing $p(B)=\sum_{u\in B}p(u)$, because $Q_{uv} = 0$ for any $(u,v) \in F^*$. We also note that (DF3) holds if $\Lambda = \emptyset$. Therefore, $p$ and $q$ are feasible if $p$ satisfies (DF1), $\Lambda = \emptyset$, and $B\in{\cal B}$ minimizes $p(B)=\sum_{u\in B}p(u)$ in ${\cal B}$. This ensures that the initial setting of the algorithm satisfies the dual feasibility. We now show some properties of feasible dual variables. \begin{lemma} \label{lem:keyodd} Suppose that $p$ and $q$ are feasible dual variables.
Let $X \subseteq V^*$ be a vertex subset such that $C^*[X]$ is nonsingular. Then, we have $$ p(X \setminus B^*) - p(X \cap B^*) \geq \sum \{ q(H_i) \mid H_i \in \Lambda,\ \mbox{$|X \cap H_i|$ is odd}\}.$$ \end{lemma} \begin{proof} Since $C^* [X]$ is nonsingular, there exists a perfect matching $M = \{(u_j,v_j)\mid j=1,\ldots,\mu\}$ between $X \cap B^*$ and $X \setminus B^*$ such that $u_j \in X \cap B^*$, $v_j \in X \setminus B^*$, and $C^*_{u_j v_j} \not= 0$ for $j=1,\dots,\mu$. The dual feasibility implies that $p(v_j)-p(u_j)\geq Q_{u_j v_j}$ for $j=1, \dots , \mu$. Combining these inequalities, we obtain \begin{equation}\label{eq:dual03} p(X \setminus B^*)-p(X \cap B^*) \ge \sum_{j=1}^\mu Q_{u_j v_j} = \sum_{j=1}^\mu \sum_{i \in I_{u_j v_j}} q(H_i). \end{equation} If $|X \cap H_i|$ is odd, there exists an index $j$ such that $i \in I_{u_{j} v_{j}}$, which shows that the coefficient of $q(H_i)$ in the right-hand side of (\ref{eq:dual03}) is at least $1$. This completes the proof. \end{proof} We now consider the tightness of the inequality in Lemma~\ref{lem:keyodd}. Let $G^* = (V^*, F^*)$ be the undirected graph with vertex set $V^*$ and edge set $F^*$, where we regard $F^*$ as a set of unordered pairs. An edge $(u, v) \in F^*$ with $u \in B^*$ and $v \in V^* \setminus B^*$ is said to be {\em tight} if $p(v)-p(u) = Q_{uv}$. We say that a matching $M \subseteq F^*$ is {\em consistent with a blossom $H_i \in \Lambda$} if at most one edge in $M$ crosses $H_i$. We say that a matching $M \subseteq F^*$ is {\em tight} if every edge of $M$ is tight and $M$ is consistent with every positive blossom $H_i$. As the proof of Lemma~\ref{lem:keyodd} clarifies, if there exists a tight perfect matching $M$ in the subgraph $G^*[X]$ of $G^*$ induced by $X$, then the inequality of Lemma~\ref{lem:keyodd} is tight. Furthermore, in such a case, every perfect matching in $G^*[X]$ must be tight, which is stated as follows.
\begin{lemma} \label{lem:tightmatching} For a vertex set $X \subseteq V^*$, if $G^*[X]$ has a tight perfect matching, then any perfect matching in $G^*[X]$ is tight. \end{lemma} When $q(H_i)=0$ for some $H_i\in\Lambda$, one can delete $H_i$ from $\Lambda$ without violating the dual feasibility. If $H_i$ is a source blossom, simply removing it does not affect the dual feasibility, (BT1), or (BT2). If $H_i$ is a normal blossom, then apply the pivoting operation around $\{b_i,t_i\}$ to $C^*$, remove $b_i$ and $t_i$ from $V^*$, and remove $H_i$ from $\Lambda$. This process is referred to as ${\sf Expand}(H_i)$. \begin{lemma} \label{lem:expand} If $q(H_i)=0$ for some $H_i\in\Lambda_{\rm n}$, the dual variables $(p,q)$ remain feasible and {\em (BT1)} and {\em (BT2)} hold after ${\sf Expand}(H_i)$ is executed. \end{lemma} \begin{proof} We only consider the case when $b_i \in B^*$ and $t_i \in V^* \setminus B^*$, since we can deal with the case of $b_i \in V^* \setminus B^*$ and $t_i \in B^*$ in the same way. Let $C^*$ be the original matrix and $C'$ be the matrix obtained after ${\sf Expand}(H_i)$ is executed. Let $F^*$ (resp.~$F'$) be the set of ordered vertex pairs corresponding to the nonzero entries of $C^*$ (resp.~$C'$). Suppose that $p$ and $q$ are feasible with respect to $F^*$. In order to show that $p$ and $q$ are feasible with respect to $F'$, it suffices to consider (DF2), since (DF1) and (DF3) are obvious. Suppose that $(u, v) \in F'$. If $(u, v) \in F^*$, then $p(v) - p(u) \ge Q_{uv}$ by the dual feasibility with respect to $F^*$. Otherwise, we have $(u, v) \in F'$ and $(u, v) \not\in F^*$. By Lemma~\ref{lem:pivot}, $C'_{uv} = C^*_{uv} - C^*_{u t_i} (C^*_{b_i t_i})^{-1} C^*_{b_i v}$, and hence $(u, v) \in F'$ and $(u, v) \not\in F^*$ imply that $C^*_{b_i v} \neq 0$ and $C^*_{u t_i} \neq 0$. Then, by the dual feasibility with respect to $F^*$, we obtain \begin{align*} p(v) - p(b_i) &\ge Q_{b_i v}, \\ p(t_i) - p(u) &\ge Q_{u t_i}.
\end{align*} Furthermore, we have $p(b_i) = p(t_i)$ by (DF3) and $Q_{b_i v} + Q_{u t_i} = Q_{b_i v} + Q_{u t_i} + q(H_i) \ge Q_{uv}$. By combining these inequalities, we obtain $p(v) - p(u) \ge Q_{uv}$. This shows that (DF2) holds with respect to $F'$. By the definition of ${\sf Expand}(H_i)$, it is obvious that $C'$ satisfies (BT1). To show (BT2), let $H_j$ be a normal blossom that is different from $H_i$. Suppose that $b_j \in B^*$ and $t_j \in V^* \setminus B^*$. We consider the following cases separately. \begin{itemize} \item If $H_j \subseteq H_i$, then $C^*_{b_i v} = 0$ for any $v \in H_j \setminus B^*$. In particular, $C^*_{b_i t_j} = 0$. \item If $H_i \subseteq H_j$, then $C^*_{u t_i} = 0$ for any $u \in B^* \setminus H_j$. In particular, $C^*_{b_j t_i} = 0$. \item If $H_i \cap H_j = \emptyset$, then we have that $C^*_{b_i t_j} = 0$ and $C^*_{b_j t_i} = 0$. \end{itemize} In every case, we have that $C'_{b_j v} = C^*_{b_j v} - C^*_{b_j t_i} (C^*_{b_i t_i})^{-1} C^*_{b_i v} = C^*_{b_j v}$ for any $v \in H_j \setminus B^*$, and $C'_{u t_j} = C^*_{u t_j} - C^*_{u t_i} (C^*_{b_i t_i})^{-1} C^*_{b_i t_j} = C^*_{u t_j}$ for any $u \in B^* \setminus H_j$. Therefore, $C'_{b_j t_j} = C^*_{b_j t_j} \not= 0$, $C'_{b_j v} = C^*_{b_j v} = 0$ for any $v \in H_j \setminus B^*$ with $v \not= t_j$, and $C'_{u t_j} = C^*_{u t_j} = 0$ for any $u \in B^* \setminus H_j$ with $u \not= b_j$. We can deal with the case when $b_j \in V^* \setminus B^*$ and $t_j \in B^*$ in a similar way. This shows that $C'$ satisfies (BT2). \end{proof} \section{Optimality} \label{sec:optimality} In this section, we show that if we obtain a parity base $B$ and feasible dual variables $p$ and $q$, then $B$ is a minimum-weight parity base. Note again that although $p$ and $q$ are called dual variables, they do not correspond to dual variables of an LP-relaxation of the minimum-weight parity base problem.
Our optimality proof is based on the algebraic formulation of the problem (Lemma~\ref{lem:Pf}) and the duality of the maximum-weight matching problem. \begin{theorem}\label{lem:optimality} If $B := B^*\cap V$ is a parity base and there exist feasible dual variables $p$ and $q$, then $B$ is a minimum-weight parity base. \end{theorem} \begin{proof} Since the optimal value of the minimum-weight parity base problem is represented with $\mathrm{deg}_\theta\,\mathrm{Pf}\,\Phi_A(\theta)$ as shown in Lemma~\ref{lem:Pf}, we evaluate the value of $\mathrm{deg}_\theta\,\mathrm{Pf}\,\Phi_A(\theta)$, assuming that we have a parity base $B$ and feasible dual variables $p$ and $q$. Recall that $A$ is transformed to $(I \ C)$ by applying row transformations and column permutations, where $C$ is the fundamental cocircuit matrix with respect to the base $B$ obtained by $C=A[U,B]^{-1}A[U,V\setminus B]$. Note that the identity submatrix gives a one-to-one correspondence between $U$ and $B$, and the row set of $C$ can be regarded as $U$. We now apply the same row transformations and column permutations to $\Phi_A(\theta)$, and then also apply the corresponding column transformations and row permutations to obtain a skew-symmetric polynomial matrix $\Phi_A'(\theta)$, that is, $$ \Phi_A'(\theta)=\left( \begin{array}{c|cc} O & I & C \\ \hline -I & & \\ -C^\top & \multicolumn{2}{c}{\raisebox{5pt}[0pt][0pt]{\large $D'(\theta)$}} \end{array}\right) \begin{array}{l} \leftarrow U \\ \leftarrow B \\ \leftarrow V\setminus B, \end{array} $$ where $D'(\theta)$ is a skew-symmetric matrix obtained from $D(\theta)$ by applying row and column permutations simultaneously. Note that $\mathrm{Pf}\,\Phi_A'(\theta) = \pm \mathrm{Pf}\,\Phi_A(\theta)/\det A[U,B]$, where the sign is determined by the ordering of $V$.
We now consider the following skew-symmetric matrix: $$\Phi_A^*(\theta)= \left( \begin{array}{c|c|c|c|c} \multicolumn{2}{c|}{} & O & \multicolumn{2}{c}{} \\ \cline{3-3} \multicolumn{2}{c|}{\raisebox{7pt}[0pt][0pt]{$O$}} & I & \multicolumn{2}{c}{\raisebox{7pt}[0pt][0pt]{\quad $C^*$}\quad} \\ \hline O & -I & \multicolumn{2}{c|}{} & \\ \cline{1-2} \multicolumn{2}{c|}{} & \multicolumn{2}{c|}{\raisebox{7pt}[0pt][0pt]{$D'(\theta)$}} & \raisebox{7pt}[0pt][0pt]{$O$} \\ \cline{3-5} \multicolumn{2}{c|}{\raisebox{7pt}[0pt][0pt]{$-{C^*}^\top$}} & \multicolumn{2}{c|}{O} & O \end{array} \right) \begin{array}{l} \\ \raisebox{7pt}[0pt][0pt]{$\leftarrow U^*$ (identified with $B^*$)} \\ \leftarrow B \\ \leftarrow V\setminus B \\ \leftarrow T\setminus B^*. \end{array} $$ Here, the row and column sets of $\Phi_A^*(\theta)$ are both indexed by $W^* := U^* \cup V \cup (T \setminus B^*)$, where $U^*$ is the row set of $C^*$, which can be identified with $B^*$. Then, we have the following claim. \begin{claim} It holds that $\mathrm{deg}_\theta\mathrm{Pf}\,\Phi^*_A(\theta)= \mathrm{deg}_\theta\mathrm{Pf}\,\Phi'_A(\theta) = \mathrm{deg}_\theta\mathrm{Pf}\,\Phi_A(\theta)$. \end{claim} \begin{proof} Since $C^*$ satisfies (BT1), we can obtain $ \left( \begin{array}{c|c|c} O & X & I \\ \hline I & C & O \end{array} \right) $ from $ \left( \begin{array}{c|c|c} O & \multicolumn{2}{c}{} \\ \cline{1-1} I & \multicolumn{2}{c}{\raisebox{7pt}[0pt][0pt]{\ $C^*$}} \end{array} \right) $ by applying elementary row transformations, where $X$ is some matrix. Here, the row and column sets are $U^*$ and $B \cup (V \setminus B) \cup (T \setminus B^*)$, respectively. We apply the same row transformations and their corresponding column transformations to $\Phi_A^*(\theta)$.
Then, we obtain the following matrix: $$\widehat \Phi_A(\theta)= \left( \begin{array}{c|c|c|c|c} \multicolumn{2}{c|}{} & O & X & I \\ \cline{3-5} \multicolumn{2}{c|}{\raisebox{7pt}[0pt][0pt]{$O$}} & I & C & O \\ \hline O & -I & \multicolumn{2}{c|}{} & \\ \cline{1-2} - X^\top & - C^\top & \multicolumn{2}{c|}{\raisebox{7pt}[0pt][0pt]{$D'(\theta)$}} & \raisebox{7pt}[0pt][0pt]{$O$} \\ \hline - I & O & \multicolumn{2}{c|}{O} & O \end{array} \right) \begin{array}{l} \\ \raisebox{7pt}[0pt][0pt]{$\leftarrow U^*$ (identified with $B^*$)} \\ \leftarrow B \\ \leftarrow V\setminus B \\ \leftarrow T\setminus B^*, \end{array} $$ and hence $\mathrm{deg}_\theta\mathrm{Pf}\,\Phi^*_A(\theta) = \mathrm{deg}_\theta\mathrm{Pf}\,\widehat \Phi_A(\theta)$. Since $\mathrm{Pf}\,\widehat \Phi_A(\theta) = \pm \mathrm{Pf}\,\Phi'_A(\theta)$, we have that $$ \mathrm{deg}_\theta\mathrm{Pf}\,\Phi^*_A(\theta) = \mathrm{deg}_\theta\mathrm{Pf}\,\widehat \Phi_A(\theta) = \mathrm{deg}_\theta\mathrm{Pf}\,\Phi'_A(\theta) = \mathrm{deg}_\theta\mathrm{Pf}\,\Phi_A(\theta), $$ which completes the proof. \end{proof} In what follows, we evaluate $\mathrm{deg}_\theta\mathrm{Pf}\,\Phi^*_A(\theta)$. Construct a graph $\Gamma^*=(W^*,E^*)$ with edge set $E^*:=\{(u,v) \mid (\Phi^*_A(\theta))_{u v} \neq 0 \}$. Each edge $(u,v) \in E^*$ has a weight $\mathrm{deg}_\theta\,(\Phi_A^*(\theta))_{u v}$. Then it can be easily seen from the definition of the Pfaffian that the maximum weight of a perfect matching in $\Gamma^*$ is at least $\mathrm{deg}_\theta \mathrm{Pf}\,\Phi^*_A(\theta) = \mathrm{deg}_\theta \mathrm{Pf}\,\Phi_A(\theta)$. Let us recall that the dual linear program of the maximum-weight perfect matching problem on $\Gamma^*$ is formulated as follows.
\begin{eqnarray} \mbox{Minimize} & & \sum_{v\in W^*}\pi(v)-\sum_{Z\in\Omega}\xi(Z) \notag \\ \mbox{subject to} & & \pi(u)+\pi(v)-\sum_{Z \in \Omega_{uv}} \xi(Z) \geq \mathrm{deg}_\theta\,(\Phi_A^*(\theta))_{u v} \quad\quad (\forall (u,v)\in E^*), \label{eq:dualmatching}\\ & & \xi(Z)\geq 0 \quad\quad\quad (\forall Z\in\Omega), \notag \end{eqnarray} where $\Omega=\{Z\mid Z\subseteq W^*,\,\mbox{$|Z|$: odd}, |Z|\geq 3\}$ and $\Omega_{uv}=\{Z\mid Z\in\Omega,|Z \cap \{u, v\}| = 1\}$ (see, e.g., \cite[Theorem 25.1]{Sch03}). In what follows, we construct a feasible solution $(\pi, \xi)$ of this linear program. The objective value provides an upper bound on the maximum weight of a perfect matching in $\Gamma^*$, and consequently serves as an upper bound on $\mathrm{deg}_\theta \mathrm{Pf}\,\Phi_A(\theta)$. Since $U^*$ can be identified with $B^*$, we can naturally define a bijection $\eta: B^* \to U^*$ between $B^*$ and $U^*$. We define $\pi: W^*\to \mathbb{R}$ by $$\pi(v) = \begin{cases} p(v) & \mbox{if $v \in V \cup (T \setminus B^*)$}, \\ -p(\eta^{-1}(v)) & \mbox{if $v \in U^*$}. \end{cases} $$ For each $i \in \{1, \dots , \lambda\}$, we introduce $Z_i = (H_i \setminus (T \cap B^*)) \cup \eta(H_i \cap B^*) \subseteq W^*$ and set $\xi(Z_i) = q(H_i)$ (see Fig.~\ref{fig:51}). Since $H_i$ is of odd cardinality and there is no source line in $G$, we see that $$|Z_i|=|H_i\setminus (T \cap B^*)|+|H_i\cap B^*|=|H_i|+|H_i\cap B|$$ is odd and $|Z_i|\geq 3$. Define $\xi(Z) = 0$ for any $Z \in \Omega \setminus \{Z_1, \dots , Z_\lambda\}$. We now show the following claim. \begin{figure} \caption{Definition of $Z_i$. Lines and dummy lines are represented by double bonds.} \label{fig:51} \end{figure} \begin{claim} The dual variables $\pi$ and $\xi$ defined as above form a feasible solution of the linear program (\ref{eq:dualmatching}). \end{claim} \begin{proof} Suppose that $(u,v)\in E^*$.
If $u, v \in V$ and $u = \bar v$, then (DF1) shows that $\pi(u) + \pi(v) = p(\bar v) + p(v) = w_\ell = \mathrm{deg}_\theta\,(\Phi_A^*(\theta))_{u v}$, where $\ell = \{v, \bar v\}$. Since $|Z_i \cap \{v, \bar v\}|$ is even for any $i \in \{1, \dots , \lambda\}$, this shows (\ref{eq:dualmatching}). If $u \in U^*$ and $v \in B$, then $(u, v) \in E^*$ implies that $u = \eta(v)$, and hence $\pi(u) + \pi(v) = 0$, which shows (\ref{eq:dualmatching}) as $|Z_i \cap \{u, v\}|$ is even for any $i \in \{1, \dots , \lambda\}$. The remaining case of $(u,v)\in E^*$ is when $u \in U^*$ and $v \in V^*\setminus B^*$. That is, it suffices to show that $(u, v)$ satisfies (\ref{eq:dualmatching}) if $C^*_{uv} \neq 0$. By the definition of $\pi$, we have $\pi(u)+\pi(v) = p(v) - p(u')$, where $u'=\eta^{-1}(u)$. By the definition of $Z_i$, we have $Z_i\in\Omega_{uv}$ if and only if $i \in I_{u'v}$, which shows that $$\sum_{i:\,Z_i\in\Omega_{uv}} \xi(Z_i) = \sum_{i \in I_{u'v}} q(H_i).$$ Since $C^*_{uv} \not= 0$, by (DF2), we have $$p(v) - p(u') \ge Q_{u'v}=\sum_{i \in I_{u'v}} q(H_i).$$ Thus, we obtain $$\pi(u)+\pi(v) - \sum_{i:\, Z_i\in\Omega_{uv}} \xi(Z_i) \ge 0,$$ which shows that $(u, v)$ satisfies (\ref{eq:dualmatching}). \end{proof} The objective value of this feasible solution is \begin{eqnarray}\label{eq:opt04}\nonumber \sum_{v\in W^*}\pi(v)-\sum_{i=1}^\lambda \xi(Z_i) & = & \sum_{v\in V \setminus B} p(v) + \sum_{v\in T\setminus B^*}p(v)-\sum_{v\in T\cap B^*}p(v) -\sum_{i=1}^\lambda\xi(Z_i) \\ & = & \sum_{v\in V \setminus B} p(v) = \sum_{\ell\subseteq V\setminus B} w_\ell, \end{eqnarray} where the first equality follows from the definition of $\pi$, the second one follows from the definition of $\xi$ and (DF3), and the third one follows from (DF1).
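The weak-duality bound used in the next step can be checked numerically on a toy instance. The following sketch is purely illustrative and not part of the proof: the graph, weights, and dual values are made up, and $\xi$ is taken to be zero, so that feasibility reduces to $\pi(u)+\pi(v)\ge w(u,v)$ on every edge.

```python
# Toy check of weak duality for the maximum-weight perfect matching LP dual:
# a feasible (pi, xi) with xi = 0 bounds every perfect matching from above.
# All numbers below are hypothetical illustrative data.

weight = {(0, 1): 4, (0, 2): 3, (0, 3): 2, (1, 2): 1, (1, 3): 3, (2, 3): 5}
pi = {0: 3, 1: 1, 2: 2, 3: 3}

# Dual feasibility with xi = 0: pi(u) + pi(v) >= w(u, v) on every edge.
assert all(pi[u] + pi[v] >= w for (u, v), w in weight.items())

# The three perfect matchings of the complete graph on {0, 1, 2, 3}.
matchings = [[(0, 1), (2, 3)], [(0, 2), (1, 3)], [(0, 3), (1, 2)]]
best = max(sum(weight[e] for e in m) for m in matchings)

dual_objective = sum(pi.values())
assert dual_objective >= best  # weak duality; tight for this choice of pi
```

For this particular choice the bound is attained (both sides equal $9$), mirroring the tightness argued for in the proof.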
By the weak duality of the maximum-weight matching problem, we have \begin{align} \sum_{v\in W^*}\pi(v)-\sum_{i=1}^\lambda \xi(Z_i) &\ge \mbox{(maximum weight of a perfect matching in $\Gamma^*$)} \notag \\ &\ge \mathrm{deg}_\theta \mathrm{Pf}\,\Phi_A^*(\theta) = \mathrm{deg}_\theta \mathrm{Pf}\,\Phi_A(\theta). \label{eq:opt05} \end{align} On the other hand, Lemma~\ref{lem:Pf} shows that any parity base $B'$ satisfies \begin{equation}\label{eq:opt06} \sum_{\ell \subseteq B'} w_\ell \ge \sum_{\ell \in L} w_\ell - \mathrm{deg}_\theta \mathrm{Pf}\,\Phi_A(\theta). \end{equation} Combining (\ref{eq:opt04})--(\ref{eq:opt06}), we have $\sum_{\ell\subseteq V\setminus B} w_\ell = \mathrm{deg}_\theta \mathrm{Pf}\,\Phi_A(\theta)$, which means that $B$ is a minimum-weight parity base by Lemma~\ref{lem:Pf}. \end{proof} \section{Finding an Augmenting Path} \label{sec:search} In this section, we define an augmenting path and present a procedure for finding one. The validity of our procedure is shown in Section~\ref{sec:validity}. Suppose we are given $V^*$, $B^*$, $C^*$, $\Lambda$, and feasible dual variables $p$ and $q$. Let $F^\circ \subseteq F^*$ be the set of tight edges, i.e., $F^\circ = \{ (u, v) \in F^* \mid u \in B^*,\ v \in V^* \setminus B^*,\ p(v) - p(u) = Q_{uv}\}$. Our procedure works primarily on the undirected graph $G^\circ=(V^*, F^\circ)$, where we ignore the ordering of the vertices when we regard $F^\circ$ or $F^*$ as an edge set. For a vertex set $X \subseteq V^*$, $G^\circ[X]$ denotes the subgraph of $G^\circ$ induced by $X$. For $H_i \in \Lambda$, define $H^-_i$ as $$ H^-_i = \{ v \in H_i \setminus \{t_i\} \mid \mbox{there is an edge in $F^*$ between $v$ and $V^* \setminus H_i$} \}. $$ Here, $\{t_i\}$ is regarded as $\emptyset$ if $H_i \in \Lambda_{\rm s}$. This definition shows that we can ignore $H_i \setminus H^-_i$ when we consider edges in $F^*$ (or $F^\circ$) connecting $H_i$ and $V^* \setminus H_i$.
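As a concrete reading of the definition of $F^\circ$, the tight edges can be extracted by a single scan over $F^*$. The sketch below uses made-up data, and the names (`F_star`, `p`, `Q`) are ours, not the paper's; `Q[(u, v)]` stands for the sum of $q(H_i)$ over the blossoms separating $u$ and $v$, as in (DF2).

```python
# Extracting the tight edge set F° = {(u, v) in F* : p(v) - p(u) = Q_uv}.
# All data below is hypothetical illustrative input.
F_star = [("u1", "v1"), ("u1", "v2"), ("u2", "v2")]  # u in B*, v in V* \ B*
p = {"u1": 0, "u2": 1, "v1": 2, "v2": 3}
Q = {("u1", "v1"): 2, ("u1", "v2"): 1, ("u2", "v2"): 2}

# Dual feasibility (DF2) guarantees p(v) - p(u) >= Q_uv on every edge of F*;
# the tight edges are those attaining equality.
assert all(p[v] - p[u] >= Q[(u, v)] for (u, v) in F_star)
F_circ = [(u, v) for (u, v) in F_star if p[v] - p[u] == Q[(u, v)]]
```

Here `("u1", "v2")` is slack and is dropped, while the other two edges are tight and survive into $F^\circ$.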
Roughly, our procedure finds a part of the augmenting path outside the blossoms. The routing in each blossom $H_i$ is determined by a prescribed vertex set $R_{H_i}(x)$ for $x \in H^\bullet_i$, where $H^\bullet_i := H^-_i \cup (H_i \cap V)$. For any $i \in \{1, \dots , \lambda\}$ and for any $x \in H^\bullet_i$, the prescribed vertex set $R_{H_i}(x) \subseteq H_i$ is assumed to satisfy the following. \begin{description} \item[(BR1)] $x \in R_{H_i}(x) \subseteq H_i$. \item[(BR2)] If $H_i \in \Lambda_{\rm n}$, then $R_{H_i}(x)$ consists of lines, dummy lines, and the tip $t_i$. If $H_i \in \Lambda_{\rm s}$, then $R_{H_i}(x)$ consists of lines, dummy lines, and a source vertex. \item[(BR3)] For any $H_j \in \Lambda_{\rm n}$ with $R_{H_i}(x) \cap H_j \not= \emptyset$ and $H_j \subsetneq H_i$, it holds that $\{b_j, t_j \} \subseteq R_{H_i}(x)$. \end{description} See Fig.~\ref{fig:revisionfig04} for an example of $R_{H_i}(x)$. We sometimes regard $R_{H_i}(x)$ as a sequence of vertices, and in such a case, the last two vertices are $\bar x$ and $x$. We also suppose that the first vertex of $R_{H_i}(x)$ is $t_i$ if $H_i \in \Lambda_{\rm n}$ and the unique source vertex in $R_{H_i}(x)$ if $H_i \in \Lambda_{\rm s}$. Each blossom $H_i \in \Lambda$ is assigned a total order $<_{H_i}$ among all the vertices in $H^\bullet_i$. In the procedure, $R_{H_i}(x)$ keeps additional properties which will be described in Section~\ref{sec:propertyR}. \begin{figure} \caption{An example of $R_{H_i}(x)$.} \label{fig:revisionfig04} \end{figure} We say that a vertex set $P \subseteq V^*$ is an {\em augmenting path} if it satisfies the following properties. \begin{description} \item[(AP1)] $P$ consists of normal lines, dummy lines, and two vertices from distinct source lines. \item[(AP2)] For each $H_i \in \Lambda$, either $P \cap H_i = \emptyset$ or $P \cap H_i = R_{H_i}(x_i)$ for some $x_i \in H^\bullet_i$. \item[(AP3)] $G^\circ[P]$ has a unique tight perfect matching.
\end{description} By (AP1), (AP2), and (BR2), we have the following observation. \begin{observation} \label{obs:budtipinP} For an augmenting path $P$ and for each $H_i \in \Lambda_{\rm n}$ with $P \cap H_i \not= \emptyset$, it holds that $\{b_i, t_i\} \subseteq P$. \end{observation} In the rest of this section, we describe how to find an augmenting path. Section~\ref{sec:searchproc} is devoted to the search procedure, which calls two procedures: ${\sf Blossom}$ and ${\sf Graft}$. The details of these procedures are described in Sections~\ref{sec:createblossom} and~\ref{sec:graft}, respectively. In Section~\ref{sec:basicpro}, we show that the procedure keeps some conditions. \subsection{Search Procedure} \label{sec:searchproc} In this subsection, we describe a procedure for searching for an augmenting path. The procedure performs the breadth-first search using a queue to grow paths from source vertices. A vertex $v\in V^*$ is labeled and put into the queue when it is reached by the search. The procedure picks the first labeled element from the queue, and examines its neighbors. A linear order $\prec$ is defined on the labeled vertex set so that $u \prec v$ means $u$ is labeled prior to $v$. For each $x\in V^*$, we define $K(x) = H_i \cup \{b_i\}$ if there exists a maximal blossom $H_i$ such that $H_i$ is a normal blossom with $x \in H_i \cup \{b_i\}$, and define $K(x) = H_i$ if there exists a maximal blossom $H_i$ such that $H_i$ is a source blossom with $x \in H_i$. If such a blossom does not exist, then $x$ is called {\em single} and we denote $K(x) = \{x, \bar x\}$. The procedure also labels some blossoms with $\oplus$ or $\ominus$, which will be used later for modifying dual variables. With each labeled vertex $v$, the procedure associates a path $P(v)$ and its subpath $J(v)$, where a path is a sequence of vertices. The first vertex of $P(v)$ is a labeled vertex in a source line and the last one is $v$. The reverse path of $P(v)$ is denoted by $\overline{P(v)}$.
For a path $P(v)$ and a vertex $r$ in $P(v)$, we denote by $P(v|r)$ the subsequence of $P(v)$ after $r$ (not including $r$). We sometimes identify a path with its vertex set. When an unlabeled vertex $u$ is examined in the procedure, we assign a vertex $\rho(u)$ and a path $I(u)$. Roughly, $\rho(u)$ is a neighbor of $u$ such that $u$ is examined when we pick up $\rho(u)$ from the queue. Paths $I(u)$ and $J(v)$, where $u$ is an unlabeled vertex and $v$ is a labeled vertex, are used to decompose a search path as we will see in Lemma~\ref{lem:dec} later. Roughly, $I(u)$ and $J(v)$ represent ``fractions'' of the search path containing $u$ and $v$, respectively. The procedure is described as follows. \begin{description} \item[Procedure] ${\sf Search}$ \item[Step 0:] Initialize the objects so that the queue is empty, every vertex is unlabeled, and every blossom is unlabeled. \item[Step 1:] While there exists an unlabeled single vertex $x$ in a source line, label $x$ with $P(x):= J(x) := x$ and put $x$ into the queue. While there exists a source line $\{x, \bar x\}$ such that $K(x)=K(\bar x)=\{x, \bar x\}$ and $x$ is adjacent to $\bar x$ in $G^\circ$, add a new source blossom $H=\{x, \bar x\}$ to $\Lambda$, label $H$ with $\oplus$, and define $R_H(x) := x$ and $R_H(\bar x) := \bar x$. While there exists an unlabeled maximal source blossom $H_i \in \Lambda_{\rm s}$, label $H_i$ with $\oplus$ and do the following: for each vertex $x \in H^\bullet_i$ in the order of $<_{H_i}$, label $x$ with $P(x):=J(x):=R_{H_i}(x)$ and put $x$ into the queue. \item[Step 2:] If the queue is empty, then return $\emptyset$ and terminate the procedure (see Section~\ref{sec:dualupdatealgo}). Otherwise, remove the first element $v$ from the queue. \item[Step 3:] While there exists a labeled vertex $u$ adjacent to $v$ in $G^\circ$ with $K(u) \not= K(v)$, choose the minimum such $u$ with respect to $\prec$ and do the following (3-1) and (3-2) (see Fig.~\ref{fig:revisionfig05}).
\begin{description} \item[(3-1)] If the first elements in $P(v)$ and in $P(u)$ belong to different source lines, then return $P:= P(v) \overline{ P(u)}$ as an augmenting path. \item[(3-2)] Otherwise, apply ${\sf Blossom}(v,u)$ to add a new blossom to $\Lambda$. \end{description} \item[Step 4:] While there exists an unlabeled vertex $u$ adjacent to $v$ in $G^\circ$ with $K(u) \not= K(v)$ such that $\rho(u)$ is not assigned, do the following (4-1)--(4-3). \begin{description} \item[(4-1)] If $K(u) = \{u, \bar u\}$, then label $\bar{u}$ with $P(\bar{u}):=P(v)u\bar{u}$ and $J(\bar{u}) := \{\bar u\}$, set $\rho(u) := v$ and $I(u) := \{u\}$, and put $\bar{u}$ into the queue (see Fig.~\ref{fig:revisionfig06}). Furthermore, if $(v, \bar u) \in F^\circ$, then apply ${\sf Blossom}(\bar u,v)$. \item[(4-2)] If $K(u) = H_i \cup \{b_i\}$ for some $H_i \in \Lambda_{\rm n}$ and $(v, b_i) \in F^\circ$, then apply ${\sf Graft}(v,H_i)$ (see Fig.~\ref{fig:revisionfig09}). \item[(4-3)] If $K(u) = H_i \cup \{b_i\}$ for some $H_i \in \Lambda_{\rm n}$ and $(v, b_i) \not\in F^\circ$, then choose $y \in H^\bullet_i$ with $(v, y) \in F^\circ$ that is minimum with respect to $<_{H_i}$, and do the following.\footnote{Such $y$ always exists, because $u$ satisfies the condition.} Label $H_i$ with $\ominus$, label $b_i$ with $P(b_i):=P(v) \overline{R_{H_i}(y)}b_i$ and $J(b_i) := \{b_i\}$, and put $b_i$ into the queue. For each unlabeled vertex $x \in H^\bullet_i$, set $\rho(x) := v$ and $I(x) := \overline{R_{H_i}(x)}$ (see Fig.~\ref{fig:revisionfig10}). \end{description} \item[Step 5:] Go back to Step 2. \end{description} \begin{figure} \caption{Illustrations of Step 3.
We apply (3-1) for the leftmost case, and apply (3-2) for the other cases.} \label{fig:revisionfig05} \end{figure} \begin{figure} \caption{Step (4-1).} \label{fig:revisionfig06} \caption{Step (4-2).} \label{fig:revisionfig09} \caption{Step (4-3).} \label{fig:revisionfig10} \end{figure} \subsection{Creating a Blossom} \label{sec:createblossom} In this subsection, we describe procedure ${\sf Blossom}$ that creates a new blossom, which is called in Steps (3-2) and (4-1) of ${\sf Search}$. \begin{description} \item[Procedure] ${\sf Blossom}(v,u)$ \item[Step 1:] Let $c$ be the last vertex in $P(v)$ such that $K(c)$ contains a vertex in $P(u)$. Let $d$ be the last vertex in $P(u)$ contained in $K(c)$. Note that $K(c) = K(d)$. If $c=d$, then define $H := \bigcup \{K (x) \mid x \in P(v|c) \cup P(u|d) \}$ and $r := c$. If $c\not=d$, then define $H := \bigcup \{K (x) \mid x \in P(v|c) \cup P(u|d) \cup \{c\} \}$ and let $r$ be the last vertex in $P(v)$ not contained in $H$ if it exists. See Fig.~\ref{fig:61} for an example. \begin{figure} \caption{Definition of $H$.} \label{fig:61} \end{figure} \item[Step 2:] If $H$ contains no source line, then define $g$ to be the vertex subsequent to $r$ in $P(v)$, introduce new vertices $b$ and $t$, namely $V^*:=V^*\cup\{b,t\}$, and add $t$ to $H$, namely $H:=H\cup\{t\}$. Update $B^*$, $C^*$, and $p$ as follows (see Fig.~\ref{fig:revisionfig13}). \begin{itemize} \item If $r\in B^*$ and $g\in V^*\setminus B^*$, then $B^*:=B^*\cup\{b\}$, $C^*_{bt}:=C^*_{rg}$, $C^*_{by}:=C^*_{ry}$ for $y\in H\setminus B^*$, $C^*_{by}:=0$ for $y\in (V^*\setminus B^*)\setminus H$, $C^*_{xt}:=C^*_{xg}$ for $x\in B^*\setminus H$, $C^*_{xt}:=0$ for $x\in B^*\cap H$, and $p(b):=p(t):=p(r)+Q_{rb}$.
\item If $r\in V^*\setminus B^*$ and $g\in B^*$, then $B^*:=B^*\cup\{t\}$, $C^*_{tb}:=C^*_{gr}$, $C^*_{xb}:=C^*_{xr}$ for $x\in B^*\cap H$, $C^*_{xb}:=0$ for $x\in B^*\setminus H$, $C^*_{ty}:=C^*_{gy}$ for $y\in (V^*\setminus B^*)\setminus H$, $C^*_{ty}:=0$ for $y\in H\setminus B^*$, and $p(b):=p(t):=p(r)-Q_{rb}$. \item Apply the pivoting operation around $\{b,t\}$ to $C^*$, namely $B^*:=B^*\triangle\{b,t\}$, and update $F^*$ accordingly. \end{itemize} \begin{figure} \caption{Definition of $C^*$. By the definition, $C^*_{ry} \label{fig:revisionfig13} \end{figure} \item[Step 3:] If $H$ contains no source line, then for each labeled vertex $x$ with $P(x) \cap H\not= \emptyset$, replace $P(x)$ by $P(x) := P(r) b t P(x|r)$. Label $t$ with $P(t) := P(r)bt$ and $J (t) := \{t\}$, and extend the ordering $\prec$ of the labeled vertices so that $t$ is just after $r$, i.e., $r \prec t$ and no element is between $r$ and $t$. For each vertex $x \in H$ with $\rho(x) = r$, update $\rho(x)$ as $\rho(x) := t$. Set $\rho(b) := r$ and $I(b) := \{b\}$ (see Fig.~\ref{fig:revisionfig11}). \begin{figure} \caption{Update of $P(x)$.} \label{fig:revisionfig11} \end{figure} \item[Step 4:] For each unlabeled vertex $x\in H^\bullet$, label $x$ as follows. \begin{enumerate} \item[(i)] If $K(x) = \{x, \bar x\}$ and $x \in P(u|d)$, then $P(x):=P(v)\overline{P(u|x)}x$. \item[(ii)] If $K(x) = \{x, \bar x\}$ and $x \in P(v|c)$, then $P(x):=P(u)\overline{P(v|x)}x$. \item[(iii)] If $K(x) = H_i \cup \{b_i\}$ for some $H_i \in \Lambda_{\rm n}$ labeled with $\oplus$ such that $x=b_i$ and $x \in P(u|d)$, then $P(x):=P(v)\overline{P(u|x)}x$. \item[(iv)] If $K(x) = H_i \cup \{b_i\}$ for some $H_i \in \Lambda_{\rm n}$ labeled with $\oplus$ such that $x=b_i$ and $x \in P(v|c)$, then $P(x):=P(u)\overline{P(v|x)}x$. 
\item[(v)] If $K(x) = H_i \cup \{b_i\}$ for some $H_i \in \Lambda_{\rm n}$ labeled with $\ominus$ such that $x \in H^\bullet_i$ and $t_i \in P(u|d)$, then $P(x):=P(v)\overline{P(u|t_i)} R_{H_i}(x)$. \item[(vi)] If $K(x) = H_i \cup \{b_i\}$ for some $H_i \in \Lambda_{\rm n}$ labeled with $\ominus$ such that $x \in H^\bullet_i$ and $t_i \in P(v|c)$, then $P(x):=P(u)\overline{P(v|t_i)} R_{H_i}(x)$. \end{enumerate} Define $J(x) := P(x|t)$ and put $x$ into the queue (see Fig.~\ref{fig:revisionfig12}). Here, we choose the vertices in the ordering such that the following conditions hold. \begin{itemize} \item For two unlabeled vertices $x,y\in H^\bullet$, if $\rho(x) \succ \rho(y)$, then we choose $x$ prior to $y$. \item For two unlabeled vertices $x,y\in H^\bullet$, if $\rho(x) = \rho(y)$, $K(x) = K(y) = H_i \cup \{b_i\}$, and $x <_{H_i} y$, then we choose $x$ prior to $y$. \item If $r=c=d\neq u$ holds, then no element is chosen between $g$ and $h$, where $h$ is the vertex subsequent to $t$ in $P(u)$. Note that this condition makes sense only when $K(g)$ or $K(h)$ corresponds to a blossom labeled with $\ominus$. \end{itemize} \begin{figure} \caption{Illustration of Step 4 of ${\sf Blossom}$.} \label{fig:revisionfig12} \end{figure} \item[Step 5:] Label $H$ with $\oplus$. Define $R_{H} (x) := P(x|b)$ for each $x \in H^\bullet$ if $H$ contains no source line, and define $R_{H} (x) := P(x)$ for each $x \in H^\bullet$ if $H$ contains a source line. Define $<_{H}$ by the ordering $\prec$ of the labeled vertices in $H^\bullet$. Add $H$ to $\Lambda$ with $q(H)=0$ regarding $b$ and $t$, if they exist, as the bud and the tip of $H$, respectively, and update $\Lambda_{\rm n}$, $\Lambda_{\rm s}$, $\lambda$, $G^\circ$, and $K (y)$ for $y \in V^*$, accordingly. \end{description} We note that, for any $x \in V^*$, if $J(x)$ (resp.~$I(x)$) is defined, then it is equal to either $\{x\}$ or $R_{H_i}(x)$ (resp.~either $\{x\}$ or $\overline{R_{H_i}(x)}$) for some $H_i \in \Lambda$.
In particular, the last element of $J(x)$ and the first element of $I(x)$ are $x$. We also note that $J(x)$ and $I(x)$ are not used in the procedure explicitly, but we introduce them to show the validity of the procedure. \subsection{Grafting a Blossom} \label{sec:graft} In this subsection, we describe ${\sf Graft}$, which replaces a blossom with another blossom and is called in Step (4-2) of ${\sf Search}$. See Fig.~\ref{fig:revisionfig15} for an illustration. \begin{description} \item[Procedure] ${\sf Graft}(v,H_i)$ \item[Step 1:] Set $H := H_i \cup \{b_i\}$, where $H_i$ is a normal blossom. Introduce new vertices $b$ and $t$ in the same way as in Step 2 of ${\sf Blossom}(v, u)$ with $r:=v$ and $g:=b_i$, add $t$ to $H$, and apply the pivoting operation around $\{b,t\}$ to $C^*$. Label $t$ with $P(t) := P(v)b t$ and $J (t) := \{ t\}$, and extend the ordering $\prec$ of the labeled vertices so that $t$ is just after $v$, i.e., $v \prec t$ and no element is between $v$ and $t$. Set $\rho(b) := v$ and $I(b) := \{b\}$. \item[Step 2:] For each vertex $x\in H^\bullet_i$ in the order of $<_{H_i}$, label $x$ with $P(x) := P(v) b t b_i R_{H_i}(x)$ and $J(x) := t b_i R_{H_i}(x)$, and put $x$ into the queue. \item[Step 3:] Label $H$ with $\oplus$. Define $R_{H} (x) := P(x|b)$ for each $x \in H^\bullet$. Define $<_{H}$ by the ordering $\prec$ of the labeled vertices in $H^\bullet$. Add $H$ to $\Lambda$ with $q(H)=0$ regarding $b$ and $t$ as the bud and the tip of $H$, respectively, and update $\Lambda_{\rm n}$, $\lambda$, $G^\circ$, and $K (y)$ for $y \in V^*$, accordingly. \item[Step 4:] Set $\epsilon := q(H_i)$ and modify the dual variables by $q(H_i) := 0$, $q(H) := \epsilon$, \begin{align*} p(b_i) &:= \begin{cases} p(b_i) - \epsilon & \mbox{if $b_i \in V^* \setminus B^*$}, \\ p(b_i) + \epsilon & \mbox{if $b_i \in B^*$}, \end{cases} \\ p(t) &:= \begin{cases} p(t) - \epsilon & \mbox{if $t \in B^*$}, \\ p(t) + \epsilon & \mbox{if $t \in V^* \setminus B^*$}.
\end{cases} \end{align*} Apply ${\sf Expand}(H_i)$ to delete $H_i$ from $\Lambda$, and set $H:=H\setminus\{b_i,t_i\}$. For each vertex $x$, delete $b_i$ and $t_i$ from $P(x)$, $R_H(x)$, and $J(x)$. \end{description} \begin{figure} \caption{Illustration of ${\sf Graft}$.} \label{fig:revisionfig15} \end{figure} We note that Step 4 of ${\sf Graft}(v,H_i)$ is executed to keep the condition $H_i \cap V \not = H_j \cap V$ for distinct $H_i, H_j \in \Lambda$. \subsection{Basic Properties} \label{sec:basicpro} For better understanding of the pivoting operations in ${\sf Blossom}(v,u)$ and ${\sf Graft}(v,H_i)$, we give several lemmas in this subsection. Then, we show that the conditions (BT1), (BT2), and (DF1)--(DF3) hold in the search procedure. \begin{lemma} \label{lem:72} Suppose that ${\sf Blossom}(v, u)$ or {\em Steps 1--3} of ${\sf Graft}(v,H_i)$ have created a new blossom $H$ containing no source line. Then the following conditions hold after the pivoting operation: \begin{itemize} \item $b$ and $t$ satisfy the conditions in {\em (BT2)}, \item there is no edge in $F^*$ between $r$ and $H$, and \item there is no edge in $F^*$ between $g$ and $V^* \setminus H$. \end{itemize} \end{lemma} \begin{proof} To show the properties, we use the notation $\widehat V^*, \widehat B^*$, $\widehat C^*$, and $\widehat F^*$ to represent the objects after the pivoting operation, whereas $V^*$, $B^*$, $C^*$, and $F^*$ represent those before the pivoting operation. We only consider the case when $b \in \widehat V^* \setminus \widehat B^*$ and $t \in \widehat B^*$ as the other case can be dealt with in a similar way. In Step 2 of ${\sf Blossom}(v, u)$ (or Step 1 of ${\sf Graft}(v,H_i)$), we have $C^*_{bt}=C^*_{rg}\neq 0$, and hence $\widehat C^*_{t b} = 1/C^*_{bt} \neq 0$. Since $C^*_{b y} = 0$ for any $y \in (V^* \setminus B^*) \setminus H$, we have $\widehat C^*_{t y} = C^*_{b y} / C^*_{b t} = 0$ for any $y \in (\widehat V^* \setminus \widehat B^*) \setminus H$ with $y\neq b$.
Similarly, since $C^*_{x t} = 0$ for any $x \in H \cap B^*$, we have $\widehat C^*_{x b} = C^*_{x t} / C^*_{b t} = 0$ for any $x \in H \cap \widehat B^*$ with $x\neq t$. Thus, $b$ and $t$ satisfy the conditions in (BT2). Since $C^*_{bt}=C^*_{rt}$ and $C^*_{ry} = C^*_{by}$ for any $y \in (H \setminus B^*) \setminus \{t\}$, we have $\widehat C^*_{ry} = C^*_{ry} - C^*_{rt} (C^*_{bt})^{-1} C^*_{by} =0$ for any $y \in (H \setminus B^*) \setminus \{t\}$ by Lemma~\ref{lem:pivot}. Thus, there is no edge in $\widehat F^*$ between $r$ and $H$. Similarly, since $C^*_{bt}=C^*_{bg}$ and $C^*_{xg} = C^*_{xt}$ for any $x \in (B^* \setminus H) \setminus \{b\}$, we have $\widehat C^*_{xg} = C^*_{xg} - C^*_{xt} (C^*_{bt})^{-1} C^*_{bg} =0$ for any $x \in (B^* \setminus H) \setminus \{b\}$ by Lemma~\ref{lem:pivot}. Thus, there is no edge in $\widehat F^*$ between $g$ and $V^* \setminus H$. \end{proof} The following lemma shows how creating a blossom affects the edges in $F^\circ$. \begin{lemma}\label{clm:71} Suppose that ${\sf Blossom}(v, u)$ or {\em Steps 1--3} of ${\sf Graft}(v,H_i)$ have created a new blossom $H$ containing no source line, and let $F^\circ$ (resp.~$\widehat F^\circ$) be the tight edge set before (resp.~after) the execution of ${\sf Blossom}(v, u)$ or {\em Steps 1--3} of ${\sf Graft}(v,H_i)$. If $(x,y) \in F^\circ \triangle \widehat F^\circ$, then {\em (i)} $\{x, y\} \cap \{ b, t\} \not= \emptyset$, or {\em (ii)} exactly one of $\{x, y\}$, say $x$, is contained in $H$, and $(x, r), (g, y) \in F^\circ$. \end{lemma} \begin{proof} Suppose that $\{x, y\} \cap \{ b, t \} = \emptyset$. By Lemma~\ref{lem:pivot}, we have $(x,y) \in F^\circ \triangle \widehat F^\circ$ only when $(x, b), (t, y) \in F^*$ or $(y, b), (t, x) \in F^*$ holds before the pivoting operation in Step 2 of ${\sf Blossom}(v,u)$ (or Step 1 of ${\sf Graft}(v,H_i)$).
This shows that exactly one of $\{x, y\}$, say $x$, is contained in $H$, and that $(x,r),(g,y)\in F^*$ holds before ${\sf Blossom}(v,u)$ (or ${\sf Graft}(v,H_i)$). Suppose that $x \in B^*$. In this case, if $(x, r), (g, y) \in F^*$ holds before ${\sf Blossom}(v,u)$ (or ${\sf Graft}(v,H_i)$) and $(x,y) \in F^\circ \triangle \widehat F^\circ$, then we have \begin{align*} p(y) - p(x) &= Q_{xy}, \\ p(r) - p(g) &= Q_{rg}, \\ p(r) - p(x) &\ge Q_{x r}, \\ p(y) - p(g)&\ge Q_{g y}. \end{align*} Furthermore, we have $Q_{xy}+Q_{rg} = Q_{xr}+Q_{gy}$ by a simple counting argument. Combining these inequalities, we see that all the inequalities above must be tight. Thus, we have $(x, r), (g, y) \in F^\circ$. The same argument can be applied to the case when $x \in V^* \setminus B^*$. \end{proof} The proof of this lemma implies the following result. \begin{corollary}\label{cor:78} Suppose that ${\sf Blossom}(v, u)$ or {\em Steps 1--3} of ${\sf Graft}(v,H_i)$ have created a new blossom $H$ containing no source line, and let $F^*$ (resp.~$\widehat F^*$) be the edge set before (resp.~after) the execution of ${\sf Blossom}(v, u)$ or {\em Steps 1--3} of ${\sf Graft}(v,H_i)$. If $(x,y) \in F^* \triangle \widehat F^*$, then {\em (i)} $\{x, y\} \cap \{ b, t\} \not= \emptyset$, or {\em (ii)} exactly one of $\{x, y\}$, say $x$, is contained in $H$, and $(x, r), (g, y) \in F^*$. \end{corollary} The following lemma shows that Step 4 of ${\sf Graft}(v,H_i)$ roughly replaces edges incident to $t_i$ with ones incident to $t$. \begin{lemma} \label{lem:dblossomexpand} Suppose that ${\sf Expand}(H_i)$ is executed for some positive blossom $H_i \in \Lambda_{\rm n}$ in ${\sf Graft}(v,H_i)$. Then, we have the following. \begin{itemize} \item ${\sf Expand}(H_i)$ does not affect the edges in $F^*$ that are not incident to $\{t, b_i, t_i\}$. \item If $(t, x) \in F^*$ after ${\sf Expand}(H_i)$, then $(t, x) \in F^*$ or $(t_i, x) \in F^*$ before ${\sf Expand}(H_i)$.
\item If $(t, x) \in F^\circ$ after ${\sf Expand}(H_i)$, then $(t, x) \in F^\circ$ or $(t_i, x) \in F^\circ$ before ${\sf Expand}(H_i)$. \item If $(t_i, x) \in F^\circ$ before ${\sf Expand}(H_i)$ with $x \not= b_i$, then $(t, x) \in F^\circ$ after ${\sf Expand}(H_i)$. \end{itemize} \end{lemma} \begin{proof} Since $(b_i, t_i)$ is the only edge in $F^*$ connecting $b_i$ and $H_i$, $(b_i, t)$ and $(b_i, t_i)$ are the only edges in $F^*$ incident to $b_i$ just before ${\sf Expand}(H_i)$. Thus, the first property holds. By Lemma~\ref{lem:pivotsing}, $(t, x) \in F^*$ after ${\sf Expand}(H_i)$ if and only if $C^*[\{t, x, b_i, t_i\}]$ is nonsingular before ${\sf Expand}(H_i)$, which shows the second property. Then, by the dual feasibility, we obtain the third property. If $(t_i, x) \in F^\circ$ before ${\sf Expand}(H_i)$, then $(t, x) \not\in F^*$ before ${\sf Expand}(H_i)$ by the dual feasibility, and hence $C^*[\{t, x, b_i, t_i\}]$ is nonsingular. Thus, $(t, x) \in F^\circ$ after ${\sf Expand}(H_i)$. \end{proof} We can also see that creating a new blossom does not violate the dual feasibility as follows. \begin{lemma} \label{lem:createblossomdual} Suppose that the dual variables are feasible before ${\cal B}lossom(v,u)$ or {\em Steps 1--3} of ${\sf Graft}(v,H_i)$, which create a new blossom $H$. Then, the dual variables remain feasible after ${\cal B}lossom(v, u)$ or {\em Steps 1--3} of ${\sf Graft}(v,H_i)$. \end{lemma} \begin{proof} We use $\widehat V^*$, $\widehat B^*$, $\widehat C^*$, $\widehat F^*$, $\widehat p$, and $\widehat \Lambda$ to represent the objects after ${\cal B}lossom(v,u)$ (or Steps 1--3 of ${\sf Graft}(v,H_i)$), whereas $V^*$, $B^*$, $C^*$, $F^*$, $p$, and $\Lambda$ represent the objects before ${\cal B}lossom(v,u)$ (or Steps 1--3 of ${\sf Graft}(v,H_i)$). We only consider the case when $b \in \widehat V^* \setminus \widehat B^*$ and $t \in \widehat B^*$, as the other case can be dealt with in a similar way. 
Since there is an edge in $F^\circ$ between $r$ and $g$, we have $p(g) - p(r) = Q_{r g}$, and hence \begin{equation} \widehat p(b) = \widehat p(t) = p(r) + Q_{r b} = p(g) + Q_{r b} - Q_{r g} = p(g) - Q_{g b}. \label{eq:70} \end{equation} By the definition of $\widehat C^*$, we have the following. \begin{itemize} \item If $(x, b) \in \widehat F^*$ for $x \in B^*$, then $x \in V^* \setminus H$ and $(x, g) \in F^*$. Thus, we have $$ \widehat p (b) - \widehat p (x) = p(g) - p(x) - Q_{g b} \ge Q_{x g} - Q_{g b} = Q_{x b} $$ by (\ref{eq:70}) and the dual feasibility before ${\cal B}lossom(v, u)$ (or Steps 1--3 of ${\sf Graft}(v,H_i)$). \item If $(t, y) \in \widehat F^*$ for $y \in V^* \setminus B^*$, then $y \in H$ and $(r, y) \in F^*$. Thus, we have $$ \widehat p (y) - \widehat p (t) = p(y) - p(r) - Q_{r b} \ge Q_{ry} - Q_{rb} = Q_{b y} = Q_{t y} $$ by (\ref{eq:70}), the dual feasibility before ${\cal B}lossom(v, u)$ (or Steps 1--3 of ${\sf Graft}(v,H_i)$), and $q(H)=0$. \item If $(x,y)\in \widehat F^*\setminus F^*$ for $x\in B^*$ and $y\in V^*\setminus B^*$, then $x\in V^*\setminus H$, $y\in H$, and $(x,g), (r,y)\in F^*$ by Corollary~\ref{cor:78}. Thus, we have \begin{align*} \widehat p(y) - \widehat p(x) = p(y) - p(x) &= (p(y)-p(r)) - (p(g)-p(r)) + (p(g)-p(x)) \\ &\geq Q_{r y} - Q_{r g} + Q_{x g} = Q_{x y} \end{align*} by the dual feasibility before ${\cal B}lossom(v,u)$ (or Steps 1--3 of ${\sf Graft}(v,H_i)$). \end{itemize} These facts show that $\widehat p$ and $\widehat q$ are feasible with respect to $\widehat\Lambda$. \end{proof} It is obvious that creating a new blossom does not violate (BT1). Thus, by Lemmas~\ref{lem:expand}, \ref{lem:72}, and \ref{lem:createblossomdual}, we see that the procedure {\sf Search}\ keeps the conditions (BT1), (BT2), and (DF1)--(DF3). \section{Validity} \label{sec:validity} This section is devoted to the validity proof of the procedures described in Section~\ref{sec:search}. 
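The arguments in this section repeatedly use the pivoting identity $\widehat C^*_{xy} = C^*_{xy} - C^*_{xt} (C^*_{bt})^{-1} C^*_{by}$ of Lemma~\ref{lem:pivot}, together with the observation that an entry with $x \neq b$ and $y \neq t$ can change only when both connecting entries $C^*_{xt}$ and $C^*_{by}$ are nonzero. The following is a minimal numerical sketch of this fact (it assumes NumPy; the helper `pivot` and the concrete matrix are ours for illustration only and are not procedures of the algorithm).

```python
import numpy as np

# Illustrative sketch only: `pivot` is a hypothetical helper applying the
# pivoting identity
#   C_hat[x, y] = C[x, y] - C[x, t] * C[b, t]**(-1) * C[b, y]   (x != b, y != t)
# around a nonzero pivot entry C[b, t].
def pivot(C, b, t):
    """Return the matrix obtained from C by pivoting around the entry (b, t)."""
    assert C[b, t] != 0, "pivot entry must be nonzero"
    C_hat = C - np.outer(C[:, t], C[b, :]) / C[b, t]
    # Only entries with x != b and y != t are rewritten; keep row b and
    # column t unchanged for this illustration.
    C_hat[b, :] = C[b, :]
    C_hat[:, t] = C[:, t]
    return C_hat

rng = np.random.default_rng(0)
C = rng.integers(-2, 3, size=(5, 5)).astype(float)
b, t = 0, 1          # arbitrary pivot position for the example
C[b, t] = 1.0        # make the pivot entry nonzero
C_hat = pivot(C, b, t)

# An entry (x, y) off the pivot row/column changes only if both connecting
# entries C[x, t] and C[b, y] are nonzero -- the dichotomy exploited above.
for x in range(5):
    for y in range(5):
        if x != b and y != t and C_hat[x, y] != C[x, y]:
            assert C[x, t] != 0 and C[b, y] != 0
```

In particular, if a row $C^*_{x\cdot}$ vanishes at the pivot column, the pivot leaves that row untouched, which is the mechanism behind the zero-pattern statements in the lemmas of the previous section.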
In Section~\ref{sec:propertyR}, we introduce properties (BR4) and (BR5) of the routing in blossoms. The procedures are designed so that they keep the conditions (BR1)--(BR5). Assuming these conditions, we show in Section~\ref{sec:validpath} that a nonempty output of ${\sf Search}$ is indeed an augmenting path. In Section~\ref{sec:validrouting}, we show that these conditions hold during the procedure. \subsection{Properties of Routings in Blossoms} \label{sec:propertyR} In this subsection, we introduce properties (BR4) and (BR5) of $R_{H_i}(x)$ kept in the procedure. Recall that, for $H_i \in \Lambda$, \begin{align*} H^-_i &= \{ v \in H_i \setminus \{t_i\} \mid \mbox{there is an edge in $F^*$ between $v$ and $V^* \setminus H_i$} \}, \\ H^\bullet_i &= H^-_i \cup (H_i \cap V). \end{align*} In addition to (BR1)--(BR3), we assume that $R_{H_i}(x)$ satisfies the following (BR4) and (BR5) for any $H_i \in \Lambda$ and $x \in H^\bullet_i$. \begin{description} \item[(BR4)] $G^\circ [R_{H_i}(x) \setminus \{x\}]$ has a unique tight perfect matching. \item[(BR5)] If $x \in H^-_i$, then we have the following. Suppose that $Z \subseteq R_{H_i}(x) \cap H^-_i$ satisfies that $z \ge_{H_i} x$ for any $z \in Z$, $Z \not= \{x\}$, and $|H_j \cap Z| \le 1$ for any positive blossom $H_j \in \Lambda$. Then, $G^\circ [R_{H_i}(x) \setminus Z]$ has no tight perfect matching. \end{description} Here, we suppose that $G^\circ [\emptyset]$ has a unique tight perfect matching $M= \emptyset$ to simplify the description. We now explain roles of (BR4) and (BR5). These conditions are used to show that the output $P$ in Step (3-1) of ${\sf Search}$ satisfies (AP3), i.e., $G^\circ [P]$ has a unique tight perfect matching. We will show that the obtained path $P$ can be decomposed into subsequences, and each subsequence consists of a singleton or a set $R_{H_i}(x)$ for some $x \in H^\bullet_i$ (see Lemma~\ref{lem:dec}). 
Our aim is to show that if $G^\circ [P]$ has a tight perfect matching, then $x$ is the only vertex in $R_{H_i}(x)$ that is matched with a vertex outside $R_{H_i}(x)$. This is guaranteed by (BR5), where $Z$ means the set of vertices that are matched with vertices outside $R_{H_i}(x)$. Then, (BR4) assures that there exists a unique perfect matching covering $R_{H_i}(x)$ except $x$. \subsection{Finding an Augmenting Path} \label{sec:validpath} This subsection is devoted to the validity of Step (3-1) of ${\sf Search}$. We first show the following lemma. \begin{lemma} \label{lem:dec} In each step of ${\sf Search}$, for any labeled vertex $x$, $P(x)$ is decomposed as $$P(x)=J(x_k) I(y_k) \cdots J(x_1)I(y_1)J(x_0)$$ with $x_k \prec \cdots \prec x_1 \prec x_0 = x$ such that, for each $i$, \begin{description} \item[(PD0)] $J(x_i)$ is equal to either $\{x_i\}$ or $R_{H_j}(x_i)$ for some $H_j \in \Lambda$, and $I(y_i)$ is equal to either $\{y_i\}$ or $\overline{R_{H_j}(y_i)}$ for some positive blossom $H_j \in \Lambda$, \item[(PD1)] $x_i$ is adjacent to $y_i$ in $G^\circ$, \item[(PD2)] the first element of $J(x_{i-1})$ and the last element of $I(y_i)$ form a line or a dummy line, \item[(PD3)] any labeled vertex $z$ with $z \prec x_i$ is not adjacent to $I(y_i) \cup J(x_{i-1})$ in $G^\circ$, and \item[(PD4)] $x_i$ is not adjacent to $J(x_{i-1})$ in $G^\circ$. Furthermore, if $I(y_i) = \overline{R_{H_j}(y_i)}$, then $x_i$ is not adjacent to $\{z \in I(y_i) \mid z <_{H_j} y_i \}$ in $G^\circ$. \end{description} See Fig.~\ref{fig:revisionfig16} for an example of the decomposition. 
\end{lemma} \begin{figure} \caption{An example of the decomposition.} \label{fig:revisionfig16} \end{figure} \begin{proof} The procedure ${\sf Search}$ naturally defines the decomposition $$P(x)=J(x_k) I(y_k) \cdots J(x_1)I(y_1)J(x_0).$$ It suffices to show that ${\cal B}lossom(v, u)$ and ${\sf Graft}(v, H_i)$ do not violate the conditions (PD0)--(PD4), since we can easily see that the other operations do not violate them. We first consider the case when ${\cal B}lossom(v, u)$ is applied to obtain a new blossom $H$. In ${\cal B}lossom(v, u)$, $P(x)$ is updated or defined as $P(x) := P(x)$, $P(x) := P(r) bt P(x|r)$, or $P(x) := P(r) b R_{H}(x)$. Let $F^\circ$ (resp.~$\widehat F^\circ$) be the tight edge sets before (resp.~after) the execution of ${\cal B}lossom(v, u)$ that adds $H$ to $\Lambda$. Suppose that $P(x)$ is defined by $P(x) := P(r) I(b) J(x)$, where $I(b) = \{b\}$ and $J(x) = R_{H}(x)$. In this case, (PD0), (PD1), and (PD2) are trivial. We now consider (PD3). Since $P(r)$ satisfies (PD3), in order to show that any labeled vertex $z$ with $z \prec x_i$ is not adjacent to $I(y_i) \cup J(x_{i-1})$ in $\widehat G^\circ=(V^*,\widehat{F}^\circ)$, it suffices to consider the case when $x_i = r$, $y_i = b$, and $x_{i-1} = x$ (see Fig.~\ref{fig:revisionfig18}). Assume to the contrary that $z \prec r$ is adjacent to $I(b) \cup J(x)$ in $\widehat G^\circ$. Since $z$ is not adjacent to $I(b) \cup J(x)$ in $G^\circ$ by the procedure, Lemma~\ref{clm:71} shows that $(z, g) \in F^\circ$. This contradicts that $z \prec x_i =r$ and the definition of $H$. To show (PD4), it suffices to consider the case when $x_i = r$. In this case, since $r$ is not adjacent to $H$ in $\widehat G^\circ$ by Lemma~\ref{lem:72}, $P(x)$ satisfies (PD4). 
\begin{figure} \caption{The case of $P(x) := P(r) I(b) J(x)$.} \label{fig:revisionfig18} \end{figure} Suppose that $P(x)$ is updated as $P(x) := P(x)$ or $P(x) := P(r) I(b) J(t) P(x|r)$, where $I(b) = \{b\}$ and $J(t) = \{t\}$ (see Fig.~\ref{fig:revisionfig19} for an example). In this case, (PD0), (PD1), and (PD2) are trivial. We now consider (PD3). Since (PD3) holds before creating $H$, in order to show that any labeled vertex $z$ with $z \prec x_i$ is not adjacent to $w \in I(y_i) \cup J(x_{i-1})$ in $\widehat G^\circ$, it suffices to consider the case when (i) $z = t$, or (ii) $w \in I(b) \cup J(t)$, or (iii) $(z, g) \in F^\circ$ and $(w, r) \in F^\circ$, or (iv) $(w, g) \in F^\circ$ and $(z, g) \in F^\circ$ by Lemma~\ref{clm:71}. In the first case, if $(t, w) \in \widehat F^\circ$, then $(r, w) \in F^\circ$, which contradicts that (PD3) holds before creating $H$. In the second case, if $w=b$, then $(z, w) \in \widehat F^\circ$ implies that $(z, g) \in F^\circ$, which contradicts that $z \prec x_i = r$ and the definition of $H$. If $w=t$, then $(w, z) \in \widehat F^\circ$ implies that $(r, z) \in F^\circ$, which contradicts that $r$ and $z$ are labeled. In the third case, $(w, r) \in F^\circ$ implies $x_i \preceq r$ as (PD3) holds before creating $H$. By the definition of $H$, however, $z \prec x_i \preceq r$ contradicts $(z, g) \in F^\circ$. In the fourth case, $(z, r) \in F^\circ$ contradicts that $r$ and $z$ are labeled. By these four cases, we obtain (PD3). \begin{figure} \caption{The case of $P(x) := P(r) I(b) J(t) P(x|r)$.} \label{fig:revisionfig19} \end{figure} We next consider (PD4). Since (PD4) holds before creating $H$, in order to show that $x_i$ is not adjacent to $w \in J(x_{i-1})$ or $w \in \{z \in I(y_i) \mid z <_{H_j} y_i \}$ in $\widehat F^\circ$ it suffices to consider the case when (i) $x_i = r$, or (ii) $x_i = t$, or (iii) $(x_i, w)$ crosses $H$. In the first case, the claim is obvious. 
In the second case, if $(t, w) \in \widehat F^\circ$, then $(r, w) \in F^\circ$, which contradicts that (PD4) holds before creating $H$. In the third case, since $x_i \in H$ and $w \not\in H$, Lemma~\ref{clm:71} implies that it suffices to consider the case when $(w, g) \in F^\circ$ and $(x_i, r) \in F^\circ$, which contradicts that $x_i$ and $r$ are labeled. By these three cases, we obtain (PD4). We can show that ${\sf Graft}(v, H_i)$ does not violate (PD0)--(PD4) in a similar manner by observing that $P(x)$ is updated or defined as $P(x) := P(x)$ or $P(x) := P(v) R_H(x)$ in ${\sf Graft}(v, H_i)$. We note that ${\sf Expand}(H_i)$ in ${\sf Graft}$ does not affect (PD0)--(PD4) by Lemma~\ref{lem:dblossomexpand}. \end{proof} Recall that we assume the conditions (BT1), (BT2), (DF1)--(DF3), and (BR1)--(BR5). We are now ready to show the validity of Step (3-1) of ${\sf Search}$. \begin{lemma} \label{lem:aug} If ${\sf Search}$ returns $P:= P(v) \overline{P(u)}$ in {\em Step (3-1)}, then $P$ is an augmenting path. \end{lemma} \begin{proof} It suffices to show that $G^\circ[P]$ has a unique tight perfect matching. By Lemma~\ref{lem:dec}, $P(v)$ and $P(u)$ are decomposed as $P(v)=J(v_k)I(s_k)\cdots J(v_1)I(s_1)J(v_0)$ and $P(u)=J(u_l)I(r_l)\cdots J(u_1)I(r_1)J(u_0)$. For each pair of $i\leq k$ and $j\leq l$, let $X_{ij}$ denote the set of vertices in the subsequence $$ J(v_i)I(s_i)\cdots J(v_1)I(s_1)J(v_0)\overline{J(u_0)} \, \overline{I(r_1)}\, \overline{J(u_1)}\cdots \overline{I(r_j)}\,\overline{J(u_j)} $$ of $P$. We intend to show inductively that $G^\circ[X_{ij}]$ has a unique tight perfect matching. We first show that $G^\circ[X_{00}] = G^\circ[J(u)\cup J(v)]$ has a unique tight perfect matching. Since $u$ and $v$ are adjacent in $G^\circ$, (PD0) and (BR4) guarantee that $G^\circ[J(u)\cup J(v)]$ has a tight perfect matching. Let $M$ be an arbitrary tight perfect matching in $G^\circ[J(u)\cup J(v)]$, and let $Z$ be the set of vertices in $J(v)$ adjacent to $J(u)$ in $M$. 
If $J(v) = \{v\}$, then it is obvious that $Z = \{v\}$. Otherwise, $J(v) = R_{H_i}(v)$ for some $H_i \in \Lambda$. For any positive blossom $H_j \in \Lambda$, since $M$ is consistent with $H_j$ by the definition of a tight matching, we have that $|H_j \cap Z| \le 1$. Since there are no edges of $G^\circ$ between $J(u)$ and $\{y \in J(v) \mid y \prec v \}$, we have that $z \ge_{H_i} v$ for any $z \in Z$. Furthermore, since there is an edge in $M$ connecting each $z \in Z$ and $J(u)$, we have $Z \subseteq J(v) \cap H^-_i$. Then it follows from (BR5) that $G^\circ[J(v)\setminus Z]$ has no tight perfect matching unless $Z=\{v\}$. This means $v$ is the only vertex in $J(v)$ adjacent to $J(u)$ in $M$. Note that $G^\circ[J(v)\setminus\{v\}]$ has a unique tight perfect matching by (BR4), which must form a part of $M$. Let $z$ be the vertex adjacent to $v$ in $M$. Since the vertices in $\{y \in J(u) \mid y \prec u \}$ are not adjacent to $v$ in $G^\circ$, we have $z \ge_{H_j} u$ if $J(u) = R_{H_j}(u)$ for some $H_j \in \Lambda$ (see Fig.~\ref{fig:revisionfig20}). By (BR5) again, $G^\circ[J(u)\setminus\{z\}]$ has no tight perfect matching unless $z=u$. This means $M$ must contain the edge $(u, v)$. Note that $G^\circ[J(u)\setminus\{u\}]$ has a unique tight perfect matching by (BR4), which must form a part of $M$. Thus $M$ must be the unique tight perfect matching in $G^\circ[J(u)\cup J(v)]$. \begin{figure} \caption{An example of $G^\circ[X_{00}]$.} \label{fig:revisionfig20} \end{figure} We now show the statement for general $i$ and $j$ assuming that the same statement holds if either $i$ or $j$ is smaller. Suppose that $v_i \prec u_j$. Then there are no edges of $G^\circ$ between $X_{ij}\setminus J(v_i)$ and $\{y \in J(v_i) \mid y \prec v_i \}$ by (PD3) of Lemma~\ref{lem:dec}. Let $M$ be an arbitrary tight perfect matching in $G^\circ[X_{ij}]$, and let $Z$ be the set of vertices in $J(v_i)$ adjacent to $X_{ij}\setminus J(v_i)$ in $M$.
Then, by the same argument as above, $G^\circ[J(v_i)\setminus Z]$ has no tight perfect matching unless $Z=\{v_i\}$. Thus $v_i$ is the only vertex in $J(v_i)$ matched to $X_{ij}\setminus J(v_i)$ in $M$. Since $v_i$ is not adjacent to $X_{i-1,j}$ in $G^\circ$ by (PD3) and (PD4) of Lemma~\ref{lem:dec}, an edge connecting $v_i$ and $I(s_i)$ must belong to $M$. We note that it is the only edge in $M$ between $I(s_i)$ and $X_{ij} \setminus I(s_i)$ since $M$ is tight and $I(s_i)$ is equal to either $\{s_i\}$ or $\overline{R_{H}(s_i)}$ for some positive blossom $H \in \Lambda$. Let $z$ be the vertex adjacent to $v_i$ in $M$. By (BR5), $G^\circ[I(s_i)\setminus \{z\}]$ has no tight perfect matching unless $z = s_i$ (see Fig.~\ref{fig:revisionfig21}). This means that $M$ contains the edge $(v_i, s_i)$. Note that each of $G^\circ[J(v_i)\setminus\{v_i\}]$ and $G^\circ[I(s_i)\setminus \{s_i\}]$ has a unique tight perfect matching by (BR4), and so does $G^\circ[X_{i-1,j}]$ by the induction hypothesis. Therefore, $M$ is the unique tight perfect matching in $G^\circ[X_{ij}]$. The case of $v_i \succ u_j$ can be dealt with similarly. Thus, we have seen that $G^\circ[X_{kl}] = G^\circ[P]$ has a unique tight perfect matching. \end{proof} \begin{figure} \caption{An example of $G^\circ[X_{ij}]$.} \label{fig:revisionfig21} \end{figure} This proof implies the following corollaries. \begin{corollary} \label{cor:reachable} For any labeled vertex $v \in V^*$, $G^\circ[P(v) \setminus \{v\}]$ has a unique tight perfect matching. \end{corollary} \begin{corollary} \label{cor:uniqueedgeaug} If ${\sf Search}$ returns $P$, then the unique tight matching in $G^\circ[P]$ contains exactly one edge connecting $H_i$ and $V^* \setminus H_i$ for each $H_i \in \Lambda$ with $P \cap H_i \not= \emptyset$.
\end{corollary} \subsection{Routing in Blossoms} \label{sec:validrouting} First, to see that $R_H(x)$ is well-defined for each $x \in H^\bullet$ when we create a new blossom $H$, we observe that every vertex $x \in H^\bullet$ satisfies one of the six cases in Step 4 of ${\cal B}lossom(v, u)$. This is because, if $x \in H_i \setminus H^\bullet_i$ for some $H_i \in \Lambda$ with $H_i \subsetneq H$, then $x \not\in H^\bullet$, and if $c \not=d$, $K(c) = H_i \cup \{b_i\}$, and $x = b_i = g$ for some $H_i \in \Lambda_{\rm n}$, then $x \not\in H^-$ by Lemma~\ref{lem:72}. When we create a new blossom $H$ in ${\sf Graft}(v, H_i)$, for each $x \in H^\bullet$, $R_{H}(x)$ clearly satisfies (BR1)--(BR5) by Lemma~\ref{lem:dblossomexpand}. Suppose that a new blossom $H$ is created in ${\cal B}lossom(v, u)$. For each $x \in H^\bullet$, $R_{H}(x)$ defined in ${\cal B}lossom(v, u)$ also satisfies (BR1)--(BR3). We will show (BR4) and (BR5) in this subsection. \begin{lemma} \label{lem:BR45} Suppose that ${\cal B}lossom(v, u)$ creates a new blossom $H$. Then, for each $x \in H^\bullet$, $R_{H}(x)$ satisfies {\em (BR4)} and {\em (BR5)}. \end{lemma} \begin{proof} We only consider the case when $H$ contains no source line, since the case with a source line can be dealt with in a similar way. We note that a vertex $y \in H$ is adjacent to $r$ in $G^\circ$ before ${\cal B}lossom(v,u)$ if and only if $y$ is adjacent to $t$ in $G^\circ$ after ${\cal B}lossom(v,u)$. If $x = t$, the claim is obvious. We consider the other cases separately. \noindent \textbf{Case 1.} Suppose that $x \in H^\bullet$ was not labeled before $H$ is created. Among six cases in Step 4 of ${\cal B}lossom(v,u)$, we consider the cases of (i), (iii), and (v), since the other cases can be dealt with in a similar manner. By Lemma~\ref{lem:dec}, $P(v)$ can be decomposed as $$P(v)=P(r) b t I(s_k) J(v_{k-1}) I(s_{k-1}) \cdots J(v_1) I(s_1) J(v_0)$$ with $v=v_0$. 
In the cases of (i) and (iii), $P(u|x)$ can be decomposed as $J(u_l) I(r_l) \cdots J(u_1)I(r_1)J(u_0)$ with $u_0=u$, where the first element of $J(u_l)$ is $\bar x$, and hence $$R_{H}(x)= J(v_k) I(s_k) J(v_{k-1}) \cdots I(s_1)J(v_0) \overline{J(u_0)}\, \overline{I(r_1)} \cdots \overline{I(r_l)}\,\overline{J(u_l)} x$$ with $v_k=t$. Similarly, in the case of (v), $R_{H}(x)$ can be decomposed as $$R_{H}(x)= J(v_k) I(s_k) J(v_{k-1}) \cdots I(s_1)J(v_0) \overline{J(u_0)}\, \overline{I(r_1)} \cdots \overline{I(r_l)}\,\overline{J(u_l)} R_{H_i}(x).$$ Therefore, in the cases of (i), (iii), and (v), we have $$R_{H}(x)= J(v_k) I(s_k) J(v_{k-1}) \cdots I(s_1)J(v_0) \overline{J(u_0)}\, \overline{I(r_1)} \cdots \overline{I(r_l)}\,\overline{J(u_l)}\, \overline{I(r_{l+1})}$$ with $v_k=t$ and $r_{l+1} = x$ (see Fig.~\ref{fig:72} for an example). \begin{figure} \caption{A decomposition of $R_{H}(x)$.} \label{fig:72} \end{figure} We now intend to show that $R_{H}(x)$ satisfies (BR5), that is, $G^\circ[R_{H}(x) \setminus Z]$ has no tight perfect matching if $Z \subseteq R_{H}(x) \cap H^-$ satisfies that $z \ge_{H} x$ for any $z \in Z$, $Z \not= \{x\}$, and $|H_j \cap Z| \le 1$ for any positive blossom $H_j \in \Lambda$. Suppose to the contrary that $G^\circ[R_{H}(x) \setminus Z]$ has a tight perfect matching $M$. Note that $Z \subseteq I(r_{l+1}) \cup \bigcup_i I(s_i)$, because $z \ge_{H} x$ for any $z \in Z$. For each $i$, since either $I(s_i) = \{s_i\}$ or $I(s_i) = R_{H_j}(s_i)$ for some positive blossom $H_j \in \Lambda$, we have $|I(s_i) \cap Z| \le 1$. Similarly, $|I(r_{l+1}) \cap Z| \le 1$. Furthermore, if $|I(s_i) \cap Z| = 1$ (resp.~$|I(r_{l+1}) \cap Z| = 1$), then $|I(s_i) \setminus Z|$ (resp.~$|I(r_{l+1}) \setminus Z|$) is even, and hence there is no edge in $M$ between $I(s_i)$ (resp.~$I(r_{l+1})$) and its outside, because $M$ is a tight perfect matching.
If $Z \subseteq I(r_{l+1})$, then $|I(r_{l+1}) \cap Z| = 1$ and $M$ contains no edge between $I(r_{l+1})$ and the outside of $I(r_{l+1})$, which contradicts that $G^\circ[I(r_{l+1}) \setminus Z]$ has no tight perfect matching by (BR5). Thus, we may assume that $Z \cap \bigcup_i I(s_i) \not= \emptyset$. Since $I(s_i) \cap Z \not= \emptyset$ implies that there exists no edge in $M$ between $I(s_i)$ and the outside of $I(s_i)$, we can take the largest number $j$ such that $(v_j, s_j) \notin M$. We consider the following two cases separately. \textbf{Case 1a.} Suppose that $j = k$. In this case, since $J(v_k) = \{ t\}$, there exists an edge in $M$ between $t$ and $I(r_{l+1}) \cup (I(s_{k}) \setminus \{s_k\})$. See Fig.~\ref{fig:revisionfig22} for an example. If this edge is incident to $z \in I(s_{k}) \setminus \{s_k\}$, then $I(s_k) = R_{H'}(s_k)$ for some positive blossom $H' \in \Lambda$ and $z >_{H'} s_k$ by the procedure, and hence $G^\circ[I(s_{k}) \setminus \{z\}]$ has no tight perfect matching by (BR5), which is a contradiction. Otherwise, since $v_k = t$ is matched with some vertex $y \in I(r_{l+1})$, we have $h \in I(r_{l+1})$, where $h$ is as in Step 4 of ${\cal B}lossom(v, u)$. This shows that $Z \subseteq I(r_{l+1}) \cup I(s_{k})$ as $z \ge_{H} x = r_{l+1}$ for any $z \in Z$. Since $|Z \cap I(r_{l+1})| \le 1$, $|Z \cap I(s_{k})| \le 1$, and $M$ is a tight perfect matching, we have $I(r_{l+1}) \cap Z = \emptyset$, $Z = \{ z\}$ for some $z \in I(s_k)$, and each of $G^\circ[I(r_{l+1}) \setminus \{y\}]$ and $G^\circ[I(s_{k}) \setminus \{z\}]$ has a tight perfect matching. This shows that $y \le_{H} r_{l+1}$ and $z \le_{H} s_k$ by (BR5) and the definition of $\le_H$. Then, we obtain $$h \le_{H} y \le_{H} r_{l+1} = x \le_{H} z \le_{H} s_k=g.$$ Since no element is chosen between $g$ and $h$ in Step 4 of ${\cal B}lossom(v, u)$, we have $h=y = r_{l+1} = x$ and $z=s_k=g$, which contradicts that $z \in H^-$ and $g \not \in H^-$ by Lemma~\ref{lem:72}. 
We note that when we apply the same argument to the cases of (ii), (iv), and (vi) by changing the roles of $g$ and $h$, we obtain $g=y = r_{l+1} = x$. Then, this contradicts that $x \in H^-$ and $g \not \in H^-$. \begin{figure} \caption{Example of Case 1a.} \label{fig:revisionfig22} \end{figure} \begin{figure} \caption{Example of Case 1b.} \label{fig:revisionfig23} \end{figure} \textbf{Case 1b.} Suppose that $j \le k-1$. In this case, since $M$ is a tight perfect matching, for $i=j+1, \dots , k$, we have $Z \cap I(s_i) = \emptyset$ and $(v_i, s_i)$ is the only edge in $M$ between $I(s_i)$ and the outside of $I(s_i)$. We can also see that $Z \cap J(v_j) = \emptyset$, since $z \ge_{H} x$ for any $z \in Z$. We denote by $Z_j$ the set of vertices in $J(v_j)$ matched by $M$ to the outside of $J(v_j)$. Since $z \ge_{H} x$ for any $z \in Z$ and $Z \cap I(s_i) \not= \emptyset$ for some $i \le j-1$, we have $v_j \prec u_{l+1}$, where $u_{l+1}$ is the vertex naturally defined by the decomposition of $P(u)$ (see Fig.~\ref{fig:revisionfig23}). Note that the assumption $j \le k-1$ is used here. Then, for any vertex $y \in J(v_j)$ with $y <_{H} v_j$, there is no edge in $M$ connecting $y$ and $R_{H}(x) \setminus J(v_j)$ because of the following: \begin{itemize} \item By (PD3) of Lemma~\ref{lem:dec}, $y$ is not adjacent to $I(s_i) \cup J(v_{i-1})$ for $i \le j$, because $y \prec v_j \preceq v_i$. \item By (PD3) of Lemma~\ref{lem:dec}, $y$ is not adjacent to $I(r_i) \cup J(u_{i-1})$ for $i \le l + 1$, because $y \prec v_j \prec u_{l+1} \preceq u_i$. \item If $z \in J(v_i)$ with $i > j$, then $z$ is not adjacent to $y$ by (PD3) of Lemma~\ref{lem:dec}. \item For $i>j$, $(v_i, s_i)$ is the only edge in $M$ between $I(s_i)$ and its outside, and hence there is no edge in $M$ between $I(s_i)$ and $y$. \end{itemize} This shows that $(Z \cap J(v_j)) \cup Z_j = Z_j \subseteq \{ y \in J(v_j) \mid y \ge_{H} v_j\}$.
Therefore, by (BR5), if $G^\circ [J(v_j) \setminus (Z \cup Z_j)]$ has a tight perfect matching, then $Z_j = \{v_j\}$. The vertex $v_j$ is not adjacent to the vertices in $R_{H}(x)\setminus(J(v_j)\cup I(s_j) \cup \cdots \cup I(s_k))$ by (PD3) and (PD4) of Lemma~\ref{lem:dec}. Since $(v_i, s_i)$ is the only edge in $M$ between $I(s_i)$ and its outside for $i>j$, $v_j$ has to be adjacent to $I(s_j)$. Furthermore, by $(v_j, s_j) \not\in M$ and by (PD4) of Lemma~\ref{lem:dec}, we have that $v_j$ is adjacent to a vertex $z \in I(s_j)$ with $z >_{H'} s_j$, where $I(s_j) = R_{H'}(s_j)$ for some positive blossom $H' \in \Lambda$. Since $G^\circ[I(s_j) \setminus \{z\}]$ has no tight perfect matching by (BR5), we obtain a contradiction. We next show that $R_{H}(x)$ satisfies (BR4), that is, $G^\circ[R_{H}(x) \setminus \{x\}]$ has a unique tight perfect matching. Let $M$ be an arbitrary tight perfect matching in $G^\circ[R_{H}(x) \setminus \{x\}]$. Recall that $r_{l+1} = x$ and either $I(r_{l+1}) = \{r_{l+1}\}$ or $I(r_{l+1}) = R_{H_j}(r_{l+1})$ for some positive blossom $H_j \in \Lambda$. Since $M$ is a tight perfect matching and $|I(r_{l+1}) \setminus \{x\}|$ is even, there is no edge in $M$ between $I(r_{l+1})$ and its outside. By (BR4), $G^\circ[I(r_{l+1}) \setminus \{x\}]$ has a unique tight perfect matching, which must form a part of $M$. On the other hand, $$G^\circ[J(v_k) I(s_k) J(v_{k-1}) I(s_{k-1})\cdots J(v_1)I(s_1)J(v_0) \overline{J(u_0)}\, \overline{I(r_1)}\, \overline{J(u_1)}\cdots \overline{I(r_l)}\,\overline{J(u_l)}]$$ has a unique tight perfect matching by the same argument as Lemma~\ref{lem:aug}. By combining them, we have that $G^\circ[R_{H}(x) \setminus \{x\}]$ has a unique tight perfect matching. \noindent \textbf{Case 2.} Suppose that $x \in H$ was labeled before $H$ is created. We consider the case of $x \in K(y)$ with $y \in P(v|c)$. The case of $x \in K(y)$ with $y \in P(u|d)$ can be dealt with in a similar manner.
By Lemma~\ref{lem:dec}, $R_{H}(x)$ can be decomposed as $$R_{H}(x)= J(v_k) I(s_k) J(v_{k-1}) I(s_{k-1})\cdots J(v_{l+1})I(s_{l+1})J(v_l)$$ with $x = v_l$ (see Fig.~\ref{fig:revisionfig24}). \begin{figure} \caption{Example of Case 2.} \label{fig:revisionfig24} \end{figure} We first show that $R_{H}(x)$ satisfies (BR5), that is, $G^\circ[R_{H}(x) \setminus Z]$ has no tight perfect matching if $Z \subseteq R_{H}(x) \cap H^-$ satisfies that $z \ge_{H} x$ for any $z \in Z$, $Z \not= \{x\}$, and $|H_j \cap Z| \le 1$ for any positive blossom $H_j \in \Lambda$. Since $z \ge_{H} x$ for any $z \in Z$, we have that $Z \subseteq J(v_l) \cup \bigcup_i I(s_i)$, which shows that we can apply the same argument as Case 1 to obtain (BR5). We next show that $R_{H}(x)$ satisfies (BR4), that is, $G^\circ[R_{H}(x) \setminus \{x \}]$ has a unique tight perfect matching. By Corollary~\ref{cor:reachable}, $G^\circ[P(x) \setminus \{x\}]$ has a unique tight perfect matching $M$, and a part of $M$ forms a tight perfect matching in $G^\circ[R_{H}(x) \setminus \{x \}]$. Thus, this matching is the unique tight perfect matching in $G^\circ[R_{H}(x) \setminus \{x \}]$. \end{proof} We note that, for a blossom $H \in \Lambda$, creating/deleting another blossom $H'$ does not change $H^-$ and $H^\bullet$ by Corollary~\ref{cor:78} and Lemma~\ref{lem:dblossomexpand}. We also note that if $R_H(x)$ satisfies (BR1)--(BR5) for $x \in H^\bullet$, then creating/deleting another blossom $H'$ does not violate these conditions by Lemmas~\ref{lem:72},~\ref{clm:71} and~\ref{lem:dblossomexpand}. Therefore, Lemma~\ref{lem:BR45} shows that the procedure {\sf Search}\ keeps the conditions (BR1)--(BR5). \section{Dual Update} \label{sec:dualupdatealgo} In this section, we describe how to modify the dual variables when ${\sf Search}$ returns $\emptyset$ in Step 2. In Section~\ref{sec:infeasible}, we show that the procedure keeps the dual variables finite as long as the instance has a parity base.
In Section~\ref{sec:iterations}, we bound the number of dual updates per augmentation. Let $R\subseteq V^*$ be the set of vertices that are reached or examined by the search procedure and not contained in any blossoms. We denote by $R^+$ and $R^-$ the sets of labeled and unlabeled vertices in $R$, respectively. In particular, the bud $b_i$ of a maximal blossom $H_i$ belongs to $R^+$ if $H_i$ is labeled with $\ominus$, and to $R^-$ if $H_i$ is labeled with $\oplus$. Let $Z$ denote the set of vertices in $V^*$ contained in labeled blossoms. The set $Z$ is partitioned into $Z^+$ and $Z^-$, where \begin{align*} Z^+ &= \bigcup \{ H_i \mid \mbox{$H_i$ is a maximal blossom labeled with $\oplus$} \}, \\ Z^- &= \bigcup \{ H_i \mid \mbox{$H_i$ is a maximal blossom labeled with $\ominus$} \}. \end{align*} We denote by $Y$ the set of vertices that do not belong to these subsets, i.e., $Y=V^*\setminus (R\cup Z)$. For each vertex $v\in R$, we update $p(v)$ as $$p(v):= \begin{cases} p(v)+\epsilon & (v\in R^+\cap B^*) \\ p(v)-\epsilon & (v\in R^+\setminus B^*) \\ p(v)-\epsilon & (v\in R^-\cap B^*) \\ p(v)+\epsilon & (v\in R^-\setminus B^*). \end{cases}$$ We also modify $q(H)$ for each maximal blossom $H$ by $$q(H):= \begin{cases} q(H)+\epsilon & (H: \mbox{labeled with $\oplus$}) \\ q(H)-\epsilon & (H: \mbox{labeled with $\ominus$}) \\ q(H) & (\mbox{otherwise}). \end{cases}$$ To keep the feasibility of the dual variables, $\epsilon$ is determined by $\epsilon=\min\{\epsilon_1,\epsilon_2,\epsilon_3,\epsilon_4\}$, where \begin{align*} \epsilon_1 & = \frac{1}{2}\min\{p(v)-p(u)-Q_{uv}\mid (u, v)\in F^*,\, u, v\in R^+\cup Z^+,\, K(u)\not=K(v) \}, \\ \epsilon_2 & = \min\{p(v)-p(u)-Q_{uv}\mid (u, v)\in F^*,\, u\in R^+\cup Z^+,\,v\in Y \}, \\ \epsilon_3 & = \min\{p(v)-p(u)-Q_{uv}\mid (u, v)\in F^*,\, u\in Y,\,v\in R^+\cup Z^+ \}, \\ \epsilon_4 & = \min\{q(H)\mid \mbox{$H$: a maximal blossom labeled with $\ominus$}\}. 
\end{align*} If $\epsilon = + \infty$, then we terminate ${\sf Search}$ and conclude that there exists no parity base. Otherwise, while there exists a maximal blossom whose value of $q$ is zero after the dual update, delete such a blossom from $\Lambda$ by ${\sf Expand}$. Then, apply the procedure ${\sf Search}$ again. \subsection{Detecting Infeasibility} \label{sec:infeasible} By the definition of $\epsilon$, we can easily see that the updated dual variables are feasible if $\epsilon$ is a finite value. We now show that we can conclude that the instance has no parity base if $\epsilon = + \infty$. A skew-symmetric matrix is called an {\em alternating matrix} if all the diagonal entries are zero. Note that any skew-symmetric matrix is alternating unless the underlying field is of characteristic two. By a congruence transformation, an alternating matrix can be brought into a block-diagonal form in which each nonzero block is a $2 \times 2$ alternating matrix. This shows that the rank of an alternating matrix is even, which plays an important role in the proof of the following lemma. \begin{lemma} \label{lem:noparitybase} Suppose that there is a source line, and suppose also that $\epsilon = + \infty$ when we update the dual variables. Then, the instance has no parity base. \end{lemma} \begin{proof} In order to show that there is no parity base, by Lemma~\ref{lem:Pf}, it suffices to show that $\mathrm{Pf}\,\Phi_A(\theta)=0$.
We construct the matrix $$\Phi_A^*(\theta)= \left( \begin{array}{c|c|c|c|c} \multicolumn{2}{c|}{} & O & \multicolumn{2}{c}{} \\ \cline{3-3} \multicolumn{2}{c|}{\raisebox{7pt}[0pt][0pt]{$O$}} & I & \multicolumn{2}{c}{\raisebox{7pt}[0pt][0pt]{\quad $C^*$}\quad} \\ \hline O & -I & \multicolumn{2}{c|}{} & \\ \cline{1-2} \multicolumn{2}{c|}{} & \multicolumn{2}{c|}{\raisebox{7pt}[0pt][0pt]{$D'(\theta)$}} & \raisebox{7pt}[0pt][0pt]{$O$} \\ \cline{3-5} \multicolumn{2}{c|}{\raisebox{7pt}[0pt][0pt]{$-{C^*}^\top$}} & \multicolumn{2}{c|}{O} & O \end{array} \right) \begin{array}{l} \leftarrow T \cap B^* \\ \leftarrow U \mbox{ (identified with $B$)} \\ \leftarrow B \\ \leftarrow V\setminus B \\ \leftarrow T \setminus B^* \end{array} $$ in the same way as Section~\ref{sec:optimality}, where $T := \{b_i, t_i \mid H_i \in \Lambda_{\rm n} \}$. Note that we regard the row set of $C^*$ as $(T \cap B^*) \cup U$ instead of $U^*$, and hence the row/column set of $\Phi^*_A(\theta)$ is $W^* := V^* \cup U$. Then $\mathrm{Pf}\,\Phi_A(\theta)=0$ is equivalent to $\mathrm{Pf}\,\Phi_A^*(\theta) =0$. Construct a graph $\Gamma^*=(W^*,E^*)$ with edge set $E^*:=\{(u,v) \mid (\Phi^*_A(\theta))_{u v} \neq 0 \}$. In order to show that $\mathrm{Pf}\,\Phi_A^*(\theta)=0$, it suffices to prove that $\Gamma^*$ does not have a perfect matching. Since $\Phi^*_A(\theta) [U, B]$ is the identity matrix, we have a natural bijection $\eta: B \to U$ between $B$ and $U$. We then define $X\subseteq W^*$ by $X:=(R^-\setminus B)\cup\eta(R^-\cap B)$. Since $\epsilon_4 = + \infty$, no maximal blossom $H_i$ is labeled with $\ominus$. For each maximal blossom $H_i$ labeled with $\oplus$, we introduce $Z_i := H_i \cup \eta(H_i \cap B)$. If $H_i$ is a normal blossom, then $H_i$ is of odd cardinality and $H_i$ does not contain any source line, which imply that $|Z_i|$ is odd.
If $H_i$ is a source blossom, then $H_i$ is of even cardinality and $H_i$ contains exactly one source line, which again imply that $Z_i$ is of odd cardinality. Note that there exist no edges of $E^*$ between $Z_i$ and $W^*\setminus (X\cup Z_i)$. All the source lines that are not included in any blossoms are contained in $R^+$. For each normal line $\ell \subseteq R$, exactly one vertex $u_\ell$ in $\ell$ is unlabeled and the other vertex $\bar u_\ell$ is labeled. For each line $\ell\subseteq R$, we now introduce $R_\ell$ by $$R_\ell:= \begin{cases} \{u_\ell, \bar u_\ell, \eta(\bar u_\ell)\} & (\ell\subseteq B), \\ \{v_\ell, \bar v_\ell, \eta(\bar v_\ell)\} & (\ell=\{v_\ell,\bar v_\ell\}, \bar v_\ell\in B, v_\ell\in V\setminus B), \\ \{\bar u_\ell\} & (\ell\subseteq V\setminus B). \end{cases}$$ Note that $R_\ell$ is of odd cardinality and that there exist no edges of $E^*$ between $R_\ell$ and $W^*\setminus (X\cup R_\ell)$. Let $\mathrm{odd}(\Gamma^*\setminus X)$ denote the number of odd components after deleting $X$ from $\Gamma^*$. For each $b_i\in R^-$, we have a corresponding odd component $Z_i$. For each $u_\ell\in R^-$, we have an odd component $R_\ell$. In addition, there are some other odd components coming from source blossoms or source lines. Thus we have $\mathrm{odd}(\Gamma^*\setminus X)>|X|$, which implies by the theorem of Tutte~\cite{Tut47} that $\Gamma^*$ does not admit a perfect matching. \end{proof} \subsection{Bounding Iterations} \label{sec:iterations} We next show that the dual variables are updated $O(n)$ times per augmentation. To see this, we show, roughly, that each dual update increases the number of labeled vertices. Although ${\sf Search}$ contains flexibility on the ordering of vertices, it does not affect the set of the labeled vertices when ${\sf Search}$ returns $\emptyset$. This is guaranteed by the following lemma.
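Before turning to that lemma, the Tutte-condition argument used in the proof of Lemma~\ref{lem:noparitybase} (exhibiting a set $X$ with $\mathrm{odd}(\Gamma^*\setminus X)>|X|$ to rule out a perfect matching) can be checked computationally on small graphs. The following is a minimal brute-force sketch, not part of the algorithm; all helper names are ours:

```python
from itertools import combinations

def has_perfect_matching(n, edges):
    """Brute force: try to match all n vertices using the given edge set."""
    def rec(free):
        if not free:
            return True
        u = min(free)
        for v in free:
            if v != u and ((u, v) in edges or (v, u) in edges):
                if rec(free - {u, v}):
                    return True
        return False
    return n % 2 == 0 and rec(frozenset(range(n)))

def odd_components_after_deleting(n, edges, X):
    """Number of odd connected components of the graph with X removed."""
    remaining = set(range(n)) - set(X)
    seen, odd = set(), 0
    for s in remaining:
        if s in seen:
            continue
        stack, comp = [s], set()
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            for (a, b) in edges:
                if a == u and b in remaining:
                    stack.append(b)
                if b == u and a in remaining:
                    stack.append(a)
        seen |= comp
        odd += len(comp) % 2
    return odd

def tutte_condition(n, edges):
    """Tutte's characterization: odd(G - X) <= |X| for every X."""
    return all(odd_components_after_deleting(n, edges, X) <= len(X)
               for k in range(n + 1) for X in combinations(range(n), k))
```

For the star $K_{1,3}$, deleting the center leaves three odd components, so both checks report that no perfect matching exists; on all graphs with four vertices the two predicates agree, as Tutte's theorem asserts.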
\begin{lemma} \label{lem:labeliff} Suppose that a vertex $v \in V \cup \{b_i \mid \mbox{$H_i \in \Lambda_{\rm n}$ is a maximal blossom} \}$ is not removed in ${\sf Search}$ that returns $\emptyset$. Then, $v$ is labeled in ${\sf Search}$ if and only if there exists a vertex set $X \subseteq V^*$ such that \begin{itemize} \item $X \cup \{v\}$ consists of normal lines, dummy lines, and a source vertex $s$, \item $T \subseteq X \cup \{v\}$, \item $C^*[X]$ is nonsingular, and \item the equality \begin{equation} \label{eq:8411} p(X \setminus B^*) - p (X \cap B^*) = \sum \{ q( H_i) \mid H_i \in \Lambda,\ \mbox{$|X\cap H_i|$ is odd}\} \end{equation} holds. \end{itemize} \end{lemma} \begin{proof} We first observe that creating or deleting a blossom does not affect the conditions in Lemma~\ref{lem:labeliff} unless $v$ is removed. Indeed, when $T$ is updated as $T' := T \cup \{b_i, t_i\}$ or $T' := T \setminus \{b_i, t_i\}$ by creating/deleting a blossom, $X' := ((X \setminus T) \cup T') \setminus \{v\}$ satisfies the conditions by Lemma~\ref{lem:pivotsing}. Thus, it suffices to show that $v$ is labeled in ${\sf Search}$ if and only if there exists a vertex set $X$ satisfying the conditions when ${\sf Search}$ returns $\emptyset$. In what follows in the proof, all notations ($V^*, C^*, T, \Lambda$, etc.) represent the objects when ${\sf Search}$ returns $\emptyset$. If $v$ is labeled in ${\sf Search}$, then we obtain $P(v)$ such that $G^\circ[P(v) \setminus \{v\}]$ has a unique tight perfect matching by Corollary~\ref{cor:reachable}. Define $X := (P(v) \cup T) \setminus \{v\}$. For any minimal $H_i \in \Lambda_{\rm n}$ with $P(v) \cap H_i = \emptyset$, it follows from Lemma~\ref{clm:71} that $(b_i, t_i)$ is a unique edge in $G^\circ$ between $t_i$ and $X \setminus \{t_i\}$. Thus, if $G^\circ[X]$ has a perfect matching, then it must contain $(b_i, t_i)$. 
By applying this argument repeatedly for each $H_i \in \Lambda_{\rm n}$ with $P(v) \cap H_i = \emptyset$ in the order of indices (i.e., in the order from smaller blossoms to larger ones), $G^\circ[X]$ has a unique tight perfect matching, because $b_i, t_i \in P(v)$ for any $H_i \in \Lambda_{\rm n}$ with $P(v) \cap H_i \not= \emptyset$ by Observation~\ref{obs:budtipinP}. Thus, $C^*[X]$ is nonsingular by Lemma~\ref{lem:tightmatching}, and the equality (\ref{eq:8411}) holds. We now intend to prove the converse. Suppose that $X$ satisfies the above conditions, and assume to the contrary that $v$ is not labeled when ${\sf Search}$ returns $\emptyset$. Then, we can update the dual variables keeping the dual feasibility as described at the beginning of this section. We now see how the dual update affects (\ref{eq:8411}). \begin{itemize} \item Consider the dual variables corresponding to $K(s)$. If $s$ is single, then the left hand side of (\ref{eq:8411}) decreases by $\epsilon$ by updating $p(s)$. Otherwise, $K(s) = H_i$ for some source blossom $H_i \in \Lambda_{\rm s}$, since $s$ is a source vertex. Then, $|X \cap H_i|$ is odd as $v \not\in H_i$, and hence the right hand side of (\ref{eq:8411}) increases by $\epsilon$ by updating $q(H_i)$. \item Consider the dual variables corresponding to $K(v)$. \begin{itemize} \item If $v$ is single, then the left hand side of (\ref{eq:8411}) decreases by $\epsilon$ or does not change by updating $p(\bar v)$, because $\bar v \in R^+ \cup Y$. \item If $v \in H_i$ for some maximal blossom $H_i \in \Lambda_{\rm n}$, then $|X \cap H_i|$ is even. Thus, the right hand side of (\ref{eq:8411}) does not change by updating $q(H_i)$. Furthermore, since $H_i$ is not labeled with $\oplus$, we have $b_i \in R^+ \cup Y$, which shows that the left hand side of (\ref{eq:8411}) decreases by $\epsilon$ or does not change by updating $p(b_i)$. \item If $v = b_i$ for some maximal blossom $H_i \in \Lambda_{\rm n}$, then $|X \cap H_i|$ is odd. 
Since $H_i$ is not labeled with $\ominus$, the right hand side of (\ref{eq:8411}) increases by $\epsilon$ or does not change by updating $q(H_i)$. \item If $v\in H_i$ for some maximal blossom $H_i\in\Lambda_{\rm s}$, then $v$ is labeled, which contradicts the assumption. \end{itemize} \item For any $u \in X$ with $s, v \not\in K(u)$, updating the dual variables corresponding to $K(u)$ does not affect the equality (\ref{eq:8411}), since $|X\cap H_i|$ is even for any $H_i \in \Lambda_{\rm s}$ with $s \not\in H_i$ and $|X \cap H_i|$ is odd for any $H_i \in \Lambda_{\rm n}$ with $v \not\in H_i \cup \{b_i\}$. \end{itemize} By combining these facts, after updating the dual variables, we have that the left hand side of (\ref{eq:8411}) is strictly less than its right hand side, which contradicts Lemma~\ref{lem:keyodd}. \end{proof} By using this lemma, we bound the number of dual updates as follows. \begin{lemma} \label{lem:dualupdatebound} The dual variables are updated at most $O(n)$ times before ${\sf Search}$ finds an augmenting path or we conclude that the instance has no parity base by Lemma~\ref{lem:noparitybase}. \end{lemma} \begin{proof} Suppose that we update the dual variables more than once, and we consider how the value of $$\kappa(V^*,\Lambda):=|\{ w \in V \mid \mbox{$w$ is labeled} \}| + |\Lambda_1| - |\Lambda_2| - 2 |\Lambda_3|$$ changes between two consecutive dual updates, where \begin{align*} \Lambda_1&:= \{H_i \in \Lambda \mid \mbox{$H_i$ contains a labeled vertex} \}, \\ \Lambda_2&:= \{H_i \in \Lambda_{\rm n} \mid \mbox{$H_i$ is a maximal blossom labeled with $\ominus$} \}, \\ \Lambda_3&:= \Lambda \setminus (\Lambda_1 \cup \Lambda_2). \end{align*} Note that every maximal blossom labeled with $\ominus$ contains no labeled vertex, and hence $\Lambda_1 \cap \Lambda_2 = \emptyset$. We first show that $\kappa(V^*,\Lambda)$ does not decrease.
By Lemma~\ref{lem:labeliff}, if $w \in V$ is labeled at the time of the first dual update, then it is labeled again at the time of the second dual update. This shows that $|\{ w \in V \mid \mbox{$w$ is labeled} \}|$ does not decrease. By Lemma~\ref{lem:labeliff} again, blossoms satisfy the following. \begin{itemize} \item If a blossom is in $\Lambda_1$ at the time of the first dual update, then it is still in $\Lambda_1$ at the time of the second dual update unless it is deleted. Note that such a blossom is deleted only when it is replaced with a new blossom in ${\sf Graft}$. \item If a blossom is in $\Lambda_2$ at the time of the first dual update, then it is in $\Lambda_1 \cup \Lambda_2$ at the time of the second dual update unless it is deleted. \item If a blossom is in $\Lambda_3$ at the time of the first dual update, then it is in $\Lambda = \Lambda_1 \cup \Lambda_2 \cup \Lambda_3$ at the time of the second dual update unless it is deleted. \item If a new blossom is created in ${\cal B}lossom$ after the first dual update, then it is in $\Lambda_1$ at the time of the second dual update. \item If ${\sf Graft}$ is applied after the first dual update, then it replaces a blossom in $\Lambda$ with a new blossom containing a labeled vertex, i.e., the new blossom is in $\Lambda_1$ at the time of the second dual update. \end{itemize} By the above observations, $\kappa(V^*,\Lambda)$ does not decrease. In what follows, we show that $\kappa(V^*,\Lambda)$ increases strictly. If we update the dual variables with $\epsilon = \epsilon_4$, then there exists a maximal blossom $H_i \in \Lambda_{\rm n}$ labeled with $\ominus$ such that $q(H_i) = \epsilon$, which shows that $H_i \in \Lambda_2$ is deleted before the time of the second dual update. This shows that $\kappa(V^*,\Lambda)$ increases. If $\epsilon < \epsilon_4$, then there is a new tight edge between $R^+\cup Z^+$ and $Y$, or between two vertices in $R^+\cup Z^+$. 
We note that some blossoms may be created or deleted in ${\sf Graft}$ after the first dual update is executed. However, such a new tight edge still exists by Lemmas~\ref{clm:71} and~\ref{lem:dblossomexpand}. Suppose that $\epsilon = \epsilon_2$. In this case, we create a new tight edge $(u, v)$ with $u\in R^+\cup Z^+$ and $v\in Y$. Since $u$ is labeled again at the time of the second dual update, some vertex in $K(v)$ is newly labeled. Thus, $|\{ w \in V \mid \mbox{$w$ is labeled} \}|$ increases or a blossom in $\Lambda_3$ becomes a member of $\Lambda_2$, and hence the value of $\kappa(V^*,\Lambda)$ increases. The same argument can be applied to the case of $\epsilon = \epsilon_3$. Suppose that $\epsilon = \epsilon_1$. In this case, we create a new tight edge $(u, v)$ with $u, v\in R^+\cup Z^+$ and $K(u) \not= K(v)$. By changing the roles of $u$ and $v$ if necessary, we may assume that $u \prec v$. Then, we consider each of the following cases. \begin{itemize} \item If the first elements in $P(v)$ and $P(u)$ belong to different source lines, then we obtain an augmenting path, which contradicts the fact that we apply the second dual update. \item If $v \in H_i$ for some maximal normal blossom $H_i \in \Lambda_{\rm n}$ and $u = \rho (b_i)$, then there exists an edge in $F^*$ between $u=\rho(b_i)$ and $v \in H_i$, which contradicts Lemma~\ref{lem:72}. \item If neither of the above cases applies, then a new blossom $H$ is created in ${\sf Blossom}(v, u)$, and hence $|\Lambda_1|$ increases. This shows that the value of $\kappa(V^*,\Lambda)$ increases. \end{itemize} Thus, the value of $\kappa(V^*,\Lambda)$ increases by at least one between two consecutive dual updates. Since the range of $\kappa(V^*,\Lambda)$ is at most $O(n)$, the dual variables are updated at most $O(n)$ times. \end{proof} \section{Augmentation} \label{sec:augmentation} The objective of this section is to describe how to update the primal solution using an augmenting path $P$.
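At its core, augmenting along $P$ is a symmetric-difference update, as in classical matching theory. As a toy illustration only (ordinary graph matchings, not the full matroid-parity update with buds and tips), taking the symmetric difference of a matching with an alternating path whose endpoints are unmatched yields a matching with one more edge:

```python
def augment(matching, path_edges):
    """Symmetric difference of a matching with an augmenting path.

    Both arguments are sets of frozenset edges; for a valid augmenting
    path (alternating, unmatched endpoints) the result is again a
    matching, with exactly one more edge.
    """
    return matching ^ path_edges

# Toy example on the path 0-1-2-3: current matching {1,2},
# augmenting path 0-1-2-3 (edges 01, 12, 23).
M = {frozenset({1, 2})}
P = {frozenset({0, 1}), frozenset({1, 2}), frozenset({2, 3})}
M2 = augment(M, P)
```

The matched edge $\{1,2\}$ is dropped and the two path edges $\{0,1\}$, $\{2,3\}$ are added, so the matching grows from one edge to two; the procedure below performs the analogous exchange $B^* := B^* \triangle P$ on the base side.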
The augmentation procedure primarily replaces $B^*$ with $B^* \triangle P$, where $\triangle$ denotes the symmetric difference. In addition, it updates the bud and the tip of each normal blossom. Suppose we are given $V^*$, $B^*$, $C^*$, $\Lambda$, and feasible dual variables $p$ and $q$. Let $P$ be an augmenting path, and let $\Lambda_P$ denote the set of blossoms that intersect with $P$, i.e., $\Lambda_P=\{H_i\in \Lambda\mid H_i\cap P\neq \emptyset\}$. Let $\Lambda^+_P$ denote the set of positive blossoms in $\Lambda_P$. In the augmentation along $P$, we update $V^*$, $B^*$, $C^*$, $\Lambda$, $b_i$, $t_i$, $p$, and $q$. The procedure for augmentation is described as follows. \begin{description} \item[Procedure ${\sf Augment}(P)$] \item[Step 0:] While there exists a maximal blossom $H_i \in \Lambda \setminus \Lambda_P$ with $q(H_i) = 0$, apply ${\sf Expand}(H_i)$. \item[Step 1:] Let $M$ be the unique tight perfect matching in $G^\circ[P]$. For each $H_i\in\Lambda^+_P$, let $(x_i,y_i)$ be the unique edge in $M$ with $x_i\in H_i$ and $y_i\in V^*\setminus H_i$ (see Corollary~\ref{cor:uniqueedgeaug}), add new vertices $\widehat b_i$ and $\widehat t_i$ to $V^*$, and update $B^*$, $C^*$, and $p$ as follows (see Fig.~\ref{fig:revisionfig32}). \begin{itemize} \item Add $\widehat t_i$ to $H_i$. For each blossom $H_j$ with $H_i\subsetneq H_j$, add $\widehat b_i$ and $\widehat t_i$ to $H_j$. \item If $x_i\in B^*$ and $y_i\in V^*\setminus B^*$, then $B^*:=B^*\cup\{\widehat b_i\}$, $$C^*_{\widehat b_i v}:= \begin{cases} C^*_{x_i v} & (v\in (V^*\setminus B^*)\setminus H_i), \\ 0 & (v\in H_i\setminus B^*), \end{cases} \quad\quad C^*_{u \widehat t_i}:= \begin{cases} C^*_{u y_i} & (u\in B^*\cap H_i), \\ 0 & (u\in B^*\setminus H_i), \end{cases} $$ $p(\widehat b_i):=p(y_i) - Q_{\widehat b_i y_i}$, and $p(\widehat t_i):=p(x_i)+Q_{x_i\widehat t_i}$.
\item If $x_i\in V^*\setminus B^*$ and $y_i\in B^*$, then $B^*:=B^*\cup\{\widehat t_i\}$, $$C^*_{u \widehat b_i}:= \begin{cases} C^*_{u x_i} & (u\in B^*\setminus H_i), \\ 0 & (u \in B^*\cap H_i), \end{cases} \quad\quad C^*_{\widehat t_i v}:= \begin{cases} C^*_{y_i v} & (v\in H_i\setminus B^*), \\ 0 & (v \in (V^*\setminus B^*)\setminus H_i), \end{cases} $$ $p(\widehat b_i):=p(y_i) + Q_{\widehat b_i y_i}$, and $p(\widehat t_i):=p(x_i) - Q_{x_i\widehat t_i}$. \end{itemize} \begin{figure} \caption{Definition of $C^*$ in ${\sf Augment}$} \label{fig:revisionfig32} \end{figure} \item[Step 2:] Apply the pivoting operation around $P^*:=P\cup \{\widehat b_i, \widehat t_i\mid H_i\in \Lambda^+_P\}$ to $C^*$, namely $B^*:=B^*\triangle P^*$. \item[Step 3:] For each (not necessarily maximal) blossom $H_i\in\Lambda_P \setminus \Lambda^+_P$, remove $H_i$ from $\Lambda$, and if $H_i$ is a normal blossom, then remove also $b_i$ and $t_i$ from $V^*$. For each $H_i\in\Lambda^+_P$, remove $b_i$ and $t_i$ from $V^*$ if $H_i$ is a normal blossom, and rename $\widehat b_i$ and $\widehat t_i$ as the bud $b_i$ and the tip $t_i$ of $H_i$, respectively. \item[Step 4:] For each $H_i\in\Lambda^+_P$ in the order of indices (i.e., in the order from smaller blossoms to larger ones), apply the following. \begin{enumerate} \item[(i)] Introduce new vertices $b'_i$ and $t'_i$ and add $t'_i$ to $H_i$. For each blossom $H_j$ with $H_i\subsetneq H_j$, add $b'_i$ and $t'_i$ to $H_j$. \item[(ii)] If $b_i\in B^*$ and $t_i\in V^*\setminus B^*$, then $B^*:=B^*\cup\{t'_i\}$, $$C^*_{u b'_i}:= \begin{cases} C^*_{u t_i} & (u \in B^* \setminus H_i), \\ 0 & (u \in H_i \cap B^*), \end{cases} \quad\quad C^*_{t'_i v}:= \begin{cases} C^*_{b_i v} & (v\in H_i \setminus B^*), \\ 0 & (v\in (V^*\setminus B^*)\setminus H_i), \end{cases} $$ $p(b'_i):=p(t_i) - Q_{b'_i t_i}$, and $p(t'_i):=p(b_i) + Q_{b_i t'_i}$.
\item[(iii)] If $b_i\in V^* \setminus B^*$ and $t_i\in B^*$, then $B^*:=B^*\cup\{b'_i\}$, $$C^*_{b'_i v}:= \begin{cases} C^*_{t_i v} & (v \in (V^* \setminus B^*) \setminus H_i), \\ 0 & (v \in H_i \setminus B^*), \end{cases} \quad\quad C^*_{u t'_i}:= \begin{cases} C^*_{u b_i} & (u\in H_i \cap B^*), \\ 0 & (u\in B^* \setminus H_i), \end{cases} $$ $p(b'_i):=p(t_i) + Q_{b'_i t_i}$, and $p(t'_i):=p(b_i) - Q_{b_i t'_i}$. \item[(iv)] Apply the pivoting operation around $\{b_i, t_i, b'_i, t'_i\}$ to $C^*$, namely $B^*:=B^*\triangle\{b_i, t_i, b'_i, t'_i\}$. \end{enumerate} Then, for each $H_i\in\Lambda^+_P$, remove $b_i$ and $t_i$ from $V^*$, and rename $b'_i$ and $t'_i$ as the bud $b_i$ and the tip $t_i$ of $H_i$, respectively. \item[Step 5:] For each $H_i\in\Lambda^+_P$ in the reverse order of indices (i.e., in the order from larger blossoms to smaller ones), apply the procedures (i)--(iv) in Step 4. Then, for each $H_i\in\Lambda^+_P$, remove $b_i$ and $t_i$ from $V^*$, and rename $b'_i$ and $t'_i$ as the bud $b_i$ and the tip $t_i$ of $H_i$, respectively. \end{description} Note that Steps 4 and 5 are executed to keep (BT2). After Step 3, (BT2) does not necessarily hold, whereas the dual variables are feasible and (BT1) holds. Step 4 is applied to delete all the edges in $F^*$ between $t_i$ and $(V^* \setminus H_i) \setminus \{b_i\}$ for each $H_i \in \Lambda^+_{P}$, and Step 5 is applied to delete all the edges in $F^*$ between $b_i$ and $H_i \setminus \{t_i\}$ for each $H_i \in \Lambda^+_{P}$. See Lemma~\ref{lem:augBT} for details. In Section~\ref{sec:augmentfeas}, we show the validity of the augmentation procedure. After the augmentation, the algorithm applies ${\sf Search}$ in each blossom $H_i$ to obtain a new routing and ordering in $H_i$, which will be described in Section~\ref{sec:reroute}. \subsection{Validity} \label{sec:augmentfeas} In this subsection, we show the validity of ${\sf Augment}(P)$. 
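Before doing so, we note that the pivoting operation around a set $X$ (as used in Steps 2, 4, and 5, with $B^* := B^* \triangle X$) can be modeled as a principal pivot transform of the matrix. The following is a minimal exact-arithmetic sketch under that interpretation, with helper names of our own; the involution property in the usage below mirrors the back-and-forth applications of Lemma~\ref{lem:pivotsing}:

```python
from fractions import Fraction

def mat_inv(A):
    """Exact inverse of a square matrix via Gauss-Jordan elimination."""
    n = len(A)
    M = [[Fraction(A[i][j]) for j in range(n)]
         + [Fraction(1) if i == j else Fraction(0) for j in range(n)]
         for i in range(n)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        d = M[col][col]
        M[col] = [x / d for x in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [row[n:] for row in M]

def matmul(A, B):
    """Exact matrix product."""
    cols = len(B[0]) if B else 0
    return [[sum((A[i][t] * B[t][j] for t in range(len(B))), Fraction(0))
             for j in range(cols)] for i in range(len(A))]

def ppt(M, S):
    """Principal pivot transform of M around the index set S
    (0 < |S| < n and M[S,S] nonsingular): the matrix analogue of
    exchanging S between the row and column sides."""
    n = len(M)
    idx = sorted(S) + [i for i in range(n) if i not in S]
    P = [[Fraction(M[i][j]) for j in idx] for i in idx]  # permuted copy
    k = len(S)
    A = [r[:k] for r in P[:k]]; B = [r[k:] for r in P[:k]]
    C = [r[:k] for r in P[k:]]; D = [r[k:] for r in P[k:]]
    Ai = mat_inv(A)
    AiB, CAi = matmul(Ai, B), matmul(C, Ai)
    CAiB = matmul(CAi, B)
    Q = [Ai[i] + [-x for x in AiB[i]] for i in range(k)] \
        + [CAi[i] + [D[i][j] - CAiB[i][j] for j in range(n - k)]
           for i in range(n - k)]
    out = [[Fraction(0)] * n for _ in range(n)]  # undo the permutation
    for a, i in enumerate(idx):
        for b, j in enumerate(idx):
            out[i][j] = Q[a][b]
    return out
```

Pivoting twice around the same index set recovers the original matrix, which is the computational counterpart of undoing a pivot when reasoning before and after Step 2.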
We first show that the dual feasibility holds after the augmentation. \begin{lemma} Suppose that the dual variables $(p,q)$ are feasible at the beginning of ${\sf Augment}(P)$. Then the procedure keeps the dual feasibility. \end{lemma} \begin{proof} By Lemma~\ref{lem:expand}, the dual variables $(p,q)$ are feasible after Step 0. We intend to show that $(p,q)$ are feasible after Step 1. New edges that appear in $F^*$ are incident to $\widehat b_i$ or $\widehat t_i$ for some $H_i\in\Lambda_P$. For a new edge $(u,\widehat{t_i})\in F^*$, we have $u\in H_i$, $(u,y_i)\in F^*$, and $Q_{u y_i}-Q_{u \widehat t_i} = Q_{x_i y_i} - Q_{x_i \widehat t_i}$. If $x_i\in B^*$, we have \begin{eqnarray*} p(\widehat t_i)-p(u) & = & p(x_i)+Q_{x_i\widehat t_i}-p(u) \\ & = & p(y_i)-Q_{x_iy_i}+Q_{x_i\widehat t_i} - p(u) \\ & = & p(y_i)-Q_{uy_i}+Q_{u\widehat t_i} -p(u) \geq Q_{u\widehat t_i}. \end{eqnarray*} If $x_i\in V^*\setminus B^*$, we can similarly derive $p(u) - p(\widehat t_i) \geq Q_{u\widehat t_i}$. For a new edge $(\widehat{b_i},v)\in F^*$, we have $v\in V^*\setminus H_i$, $(x_i,v)\in F^*$, and $Q_{x_i v}-Q_{\widehat b_i v} = Q_{x_i y_i} - Q_{\widehat b_i y_i}$. If $x_i\in B^*$, we have \begin{eqnarray*} p(v) - p(\widehat b_i) & = & p(v) - p(y_i) + Q_{\widehat b_i y_i} \\ & = & p(v) - p(x_i) - Q_{x_iy_i} + Q_{\widehat b_i y_i} \\ & = & p(v) - p(x_i) - Q_{x_i v} + Q_{\widehat b_i v} \geq Q_{\widehat b_i v}. \end{eqnarray*} If $x_i\in V^*\setminus B^*$, we can similarly derive $p(\widehat b_i) - p(v) \geq Q_{\widehat b_i v}$. Thus the dual variables $(p,q)$ remain feasible at the end of Step 1. We next intend to show that Step 2 also keeps the dual feasibility. Suppose that $(u,v)\in F^*$ with $u\in B^*$ and $v\in V^*\setminus B^*$ after Step 2. Then $C^*[P^*\triangle\{u,v\}]$ must be nonsingular before the pivoting operation by Lemma~\ref{lem:pivotsing}.
Since $|P^*\cap H_i|$ is even for each $H_i\in\Lambda$ with $q(H_i) > 0$, it follows from Lemma~\ref{lem:keyodd} that $$p((P^*\triangle\{u,v\})\setminus B^*)-p((P^*\triangle\{u,v\})\cap B^*) \geq Q_{uv}$$ before Step 2. On the other hand, since $G^\circ[P^*]$ contains a tight perfect matching, we have $$p(P^*\setminus B^*)-p(P^*\cap B^*) =0$$ before Step 2. Combining these two relations with $u\in P^*\triangle B^*$ and $v\in V^*\setminus (P^*\triangle B^*)$, we obtain $p(v)-p(u)\geq Q_{uv}$, which shows that $(p,q)$ remain feasible after Step 2. Removing some vertices in Step 3 does not affect the dual feasibility. Finally, we consider each step of Steps 4 and 5. We can see that adding $b'_i$ and $t'_i$ does not violate the dual feasibility by the same argument as Step 1. If $(u, v) \in F^*$ after the pivoting operation in Step 4 or 5, then $C^*[X_i \triangle \{u, v\}]$ is nonsingular where $X_i := \{b_i, t_i, b'_i, t'_i\}$ before the pivoting operation by Lemma~\ref{lem:pivotsing}. Since $G^\circ[X_i]$ contains a tight perfect matching before the pivoting operation, we can apply the same argument as Step 2 to show that $(p,q)$ remain feasible after Steps 4 and 5. Thus $(p,q)$ is feasible throughout the procedure. \end{proof} We next show the nonsingularity of $C^*[P^*]$, which guarantees that we can apply the pivoting operation in Step 2 of ${\sf Augment}(P)$. \begin{lemma} When we apply the pivoting operation in Step 2 of ${\sf Augment}(P)$, $C^*[P^*]$ is nonsingular. \end{lemma} \begin{proof} We first note that ${\sf Expand}(H_i)$ in Step 0 does not affect the edges in $G^\circ[P]$. We show that $G^\circ[P']$ has a unique tight perfect matching for $P' := P \cup \{\widehat b_i, \widehat t_i\}$ with $H_i \in \Lambda^+_P$. Since $G^\circ[P]$ has a unique tight perfect matching $M$, which contains $(x_i, y_i)$, both $G^\circ[(P \cap H_i) \cup \{y_i\}]$ and $G^\circ[(P \setminus H_i) \cup \{x_i\}]$ have a unique tight perfect matching.
By the definition of $\widehat b_i$ and $\widehat t_i$, this shows that both $G^\circ[P' \cap H_i]$ and $G^\circ[P' \setminus H_i]$ have a unique tight perfect matching. Thus, we obtain a tight perfect matching in $G^\circ[P']$. Furthermore, since $|H_i \cap P'|$ is even and $H_i$ is positive, any tight perfect matching in $G^\circ[P']$ consists of a tight perfect matching in $G^\circ[P' \cap H_i]$ and one in $G^\circ[P' \setminus H_i]$. Therefore, $G^\circ[P']$ has a unique tight perfect matching. By applying the same argument repeatedly to each $H_i \in \Lambda^+_P$, we see that $G^\circ[P^*]$ has a unique tight perfect matching. By Lemma~\ref{lem:tightmatching}, $G^*[P^*]$ has a unique perfect matching, which shows that $C^*[P^*]$ is nonsingular. \end{proof} Finally in this subsection, we show that (BT1) and (BT2) hold after ${\sf Augment}(P)$. \begin{lemma}\label{lem:augBT} The procedure ${\sf Augment}(P)$ keeps {\em (BT1)} and {\em (BT2)}. \end{lemma} \begin{proof} It is obvious from the definition that (BT1) holds. We first show by induction on $i$ that, for any $j \le i$ with $H_j\in\Lambda^+_P$, $(b'_j, t'_j) \in F^*$ and there is no edge in $F^*$ between $t'_j$ and $(V^* \setminus H_j) \setminus \{b'_j\}$ after the pivoting operation around $X_i := \{b_i, t_i, b'_i, t'_i\}$ in Step 4. We only consider the case when $b'_i \in B^*$ and $t'_i \in V^* \setminus B^*$ after the pivoting operation as the other case can be dealt with in a similar way. Since $$ C^*\left[ P^* \setminus \{\widehat b_j, \widehat t_j \mid H_j \in \Lambda^+_P, j\le i\}\right] $$ is nonsingular before the pivoting operation around $P^*$ in Step 2, we see that $C^*[\{b_j, t_j \mid H_j \in \Lambda^+_{P}, j \le i\}]$ is nonsingular after the pivoting operation around $P^*$ in Step 2 by Lemma~\ref{lem:pivotsing}. By Lemma~\ref{lem:pivotsing} again, this shows that $C^*[\{b'_j, t'_j \mid H_j \in \Lambda^+_{P}, j \le i\}]$ is nonsingular after the pivoting operation around $X_i$ in Step 4.
Since there is no edge between $t'_j$ and $(V^* \setminus H_j) \setminus \{b'_j\}$ for $j < i$ by induction hypothesis, the nonsingularity of $C^*[\{b'_j, t'_j \mid H_j \in \Lambda^+_{P}, j \le i\}]$ shows that $C^*_{b'_i t'_i} \not= 0$. Before the pivoting operation around $X_i$, for $u \in B^*\setminus H_i$ with $u\neq b_i$, $\det C^*[X_i \triangle \{t'_i, u\}] = \det C^*[\{b_i, t_i, b'_i, u\}]$ is zero, since two columns in $C^*[\{b_i, t_i, b'_i, u\}]$ are the same by the definition of $b'_i$. Thus, $C^*_{u t'_i} = 0$ for $u \in B^*\setminus H_i$ with $u\neq b'_i$ after the pivoting operation around $X_i$. Furthermore, for any $j < i$ with $H_j \in \Lambda^+_P$, the pivoting operation around $X_i$ does not create a new edge in $F^*$ between $t'_j$ and $v \in (V^* \setminus H_j) \setminus \{b'_j\}$, because a row/column of $C^*[X_i \triangle \{t'_j, v\}]$ corresponding to $v$ is zero before the pivoting operation around $X_i$. We can also see that the pivoting operation around $X_i$ does not remove $(b'_j, t'_j)$ from $F^*$ for any $j < i$ with $H_j \in \Lambda^+_P$. Hence, for each $H_i \in \Lambda^+_P$, $(b'_i, t'_i) \in F^*$ and there is no edge in $F^*$ between $t'_i$ and $(V^* \setminus H_i) \setminus \{b'_i\}$ after applying (i)--(iv) for each normal blossom in Step 4. We next show by induction on $i$ (in the reverse order) that, for any $j \ge i$ with $H_j \in \Lambda^+_P$, $H_j$ satisfies the condition in (BT2) after the pivoting operation around $X_i := \{b_i, t_i, b'_i, t'_i\}$ in Step 5. Note that the pivoting operation around $X_i$ creates/deletes neither an edge in $F^*$ between $t'_j$ and $(V^* \setminus H_j) \setminus \{b'_j\}$ for $j \not= i$, nor an edge in $F^*$ between $b'_j$ and $H_j \setminus \{t'_j\}$ for $j > i$. Thus, it suffices to show that there is no edge in $F^*$ between $b'_i$ and $H_i \setminus \{t'_i\}$ after the pivoting operation around $X_i$ in Step 5.
We only consider the case when $b'_i \in B^*$ and $t'_i \in V^* \setminus B^*$ after the pivoting operation as the other case can be dealt with in a similar way. Before the pivoting operation around $X_i$, for $v \in H_i\cap B^*$ with $v\neq t_i$, $\det C^*[X_i \triangle \{b'_i, v\}] = \det C^*[\{b_i, t_i, t'_i, v\}]$ is zero, since two rows in $C^*[\{b_i, t_i, t'_i, v\}]$ are the same by the definition of $t'_i$. Thus, $C^*_{b'_i v} = 0$ for $v \in H_i\cap B^*$ with $v\neq t'_i$ after the pivoting operation around $X_i$. Hence, by applying (i)--(iv) for each normal blossom in Step 5, there is no edge in $F^*$ between $b'_i$ and $H_i \setminus \{t'_i\}$ for each $H_i \in \Lambda^+_P$. Since the pivoting operations do not create/delete an edge in $F^*$ between $t'_i$ and $(V^* \setminus H_i) \setminus \{b'_i\}$ for each $H_i \in \Lambda_{\rm n} \setminus \Lambda^+_P$, (BT2) holds after ${\sf Augment}(P)$. \end{proof} \subsection{Search in Each Blossom} \label{sec:reroute} In this subsection, we describe how to update the routing $R_{H_i}(x)$ for each $x\in H^\bullet_i$ and the ordering $<_{H_i}$ in $H^\bullet_i$ after the augmentation. If $H_i$ does not intersect with the augmenting path $P$, then the augmentation does not affect $G^\circ[H_i]$, and the algorithm simply keeps the same routing and ordering as before. For each blossom $H_i \in \Lambda_{\rm n}$ with $H_i \cap P \not= \emptyset$, in the order of indices, we apply ${\sf Search}$ to $H_i \cup \{b_i\}$ in which we regard the dummy line $\{b_i,t_i\}$ as the unique source line. The family of blossoms is restricted to the set of blossoms $H_j\in\Lambda_{\rm n}$ with $H_j\subsetneq H_i$. For each inner blossom $H_j$, we have already computed $<_{H_j}$ and $R_{H_j}(x)$ for $x \in H^\bullet_j$. Since there exists no augmenting path in $H_i$, ${\sf Search}$ always returns $\emptyset$. 
Then, we can show that the procedure labels every vertex in $H_i \cap V$ without updating the dual variables as we will see in Lemma~\ref{lem:reachableHi}. However, this procedure may create new blossoms in $H_i$, and the bud $b$ of such a blossom $H$ is not labeled. This means that we do not obtain $R_{H_i}(b)$, whereas $b$ might be in $H^-_i$. To overcome this problem, we update the dual variables and apply ${\sf Expand}(H_i)$. Whenever ${\sf Search}$ terminates, we update the dual variables as we will describe later. We repeat this process until $q(H_i)$ becomes zero. Then, we apply ${\sf Expand}(H_i)$. A new blossom $H$ created in this procedure is accompanied by $<_{H}$ and $R_{H}(x)$ for $x \in H^\bullet$ satisfying (BR1)--(BR5) by the argument in Sections~\ref{sec:search} and \ref{sec:validity}. We can also see that $p$ and $q$ are feasible after creating a new blossom by the same argument as Lemma~\ref{lem:createblossomdual}. This argument shows that (BT1), (BT2), and (DF1)--(DF3) hold when we restrict the instance to $H_i \cup \{b_i\}$. We now show that we can create a new blossom $H$ with $q(H) = 0$ in the procedure so that these conditions hold in the entire instance. To this end, when we create a new blossom $H$, we define the row and the column of $C^*$ corresponding to $\{b, t\}$ as follows. \begin{itemize} \item If $b \in B^*$, then we define $C^*_{b y}= 0$ for any $y \in (V^* \setminus (H_i \cup \{b_i\})) \setminus B^*$ and $C^*_{x t} = C^*_{x g}$ for any $x \in B^* \setminus (H_i \cup \{b_i\})$ (see Fig.~\ref{fig:revisionfig30}). \item If $b \in V^* \setminus B^*$, then we define $C^*_{x b}= 0$ for any $x \in B^* \setminus (H_i \cup \{b_i\})$ and $C^*_{t y} = C^*_{g y}$ for any $y \in (V^* \setminus (H_i \cup \{b_i\})) \setminus B^*$. \item The other entries in $C^*$ are determined by ${\sf Search}$ in $H_i \cup \{b_i\}$. \item Then, apply the pivoting operation to $C^*$ around $\{b, t\}$.
\end{itemize} In other words, we consider all the vertices in $V^*$ (instead of $H_i \cup \{b_i\}$) when we introduce new vertices in Step 2 of ${\sf Blossom}(v, u)$ or Step 1 of ${\sf Graft}(v, H_i)$. Note that this modification does not affect the entries in $C^*[H_i \cup \{b_i\}]$, and hence it does not affect ${\sf Search}$ in $H_i \cup \{b_i\}$. \begin{figure} \caption{Definition of $C^*$. Each element in the left figure is determined by ${\sf Search}$.} \label{fig:revisionfig30} \end{figure} \begin{lemma} \label{lem:reachableHi} When we apply ${\sf Search}$ in $H_i \cup \{b_i\}$ as above, the procedure labels every vertex in $H_i \cap V$ without updating the dual variables. \end{lemma} \begin{proof} By Lemma~\ref{lem:labeliff}, it suffices to show that for every vertex $v \in H_i \cap V$, there exists a vertex set $X \subseteq H_i$ with the conditions in Lemma~\ref{lem:labeliff}. We first show that such a vertex set exists after Step 3 of ${\sf Augment}(P)$. For a given vertex $v\in H_i \cap V$, define $Z \subseteq H_i$ by $$Z := \begin{cases} R_{H_i}(v) \setminus \{v\} & \mbox{if $v \not\in P$}, \\ R_{H_i}(\bar v) \setminus \{\bar v \} & \mbox{if $v \in P$}. \end{cases} $$ Then, $G^\circ[Z]$ has a unique tight perfect matching by (BR4). Set $$ Y:= Z \cup (P^* \setminus H_i) \cup \{b_j, t_j \mid H_j \in \Lambda_{\rm n},\ H_j \subsetneq H_i,\ H_j \cap Z = \emptyset\}. $$ Since each of $G^\circ[P^* \setminus H_i]$ and $G^\circ[P^* \cap H_i]$ has a unique tight perfect matching, where $P^* \cap H_i$ might be the empty set, $G^\circ[Y]$ also has a unique tight perfect matching. This shows that $C^*[Y]$ is nonsingular before the pivoting operation around $P^*$ in Step 2 of ${\sf Augment}(P)$ by Lemma~\ref{lem:tightmatching}, and hence $C^*[X]$ with $X := Y \triangle P^*$ is nonsingular after the pivoting operation around $P^*$ by Lemma~\ref{lem:pivotsing}.
Then, $X \cup \{v\}$ consists of lines, dummy lines, and the tip $t_i$, and it contains all the buds and the tips in $H_i$ after updating $b_i$ and $t_i$ in Step 3 of ${\sf Augment}(P)$. Furthermore, the tightness of the perfect matching in $G^\circ[Y]$ shows that $X$ satisfies (\ref{eq:8411}) after the augmentation. Thus, $X$ satisfies the conditions in Lemma~\ref{lem:labeliff} after Step 3 of ${\sf Augment}(P)$. We next show that such a set $X$ exists after Steps 4 and 5 of ${\sf Augment}(P)$. Suppose that we apply (i)--(iv) in Step 4 or 5 of ${\sf Augment}(P)$ for $H_j \in \Lambda_{\rm n}$, that is, we apply the pivoting operation around $X_j := \{b_j, t_j, b'_j, t'_j\}$. We consider the following three cases, separately. \begin{itemize} \item Suppose that $H_j \subsetneq H_i$. In this case, since $C^*[X]$ is nonsingular before the pivoting operation around $X_j$, $C^*[X \triangle X_j]$ is nonsingular after the pivoting operation by Lemma~\ref{lem:pivotsing}. We can also check that $X \triangle X_j$ satisfies the other conditions in Lemma~\ref{lem:labeliff}. \item Suppose that $H_j \supsetneq H_i$ or $H_j \cap H_i = \emptyset$. Let $X':= X \cup \{b_j, t_j\}$. Since $X'$ satisfies (\ref{eq:8411}) and $|X' \cap H_i|$ is even, the nonsingularity of $C^*[X]$ and $C^*[\{b_j, t_j\}]$ shows that $C^*[X']$ is nonsingular before the pivoting operation around $X_j$. Hence, $C^*[X' \triangle X_j]$ is nonsingular after the pivoting operation by Lemma~\ref{lem:pivotsing}. Since $|X' \cap H_i|$ is even, this implies that $C^*[(X' \triangle X_j) \setminus \{b'_j, t'_j\}] = C^*[X]$ is nonsingular after the pivoting operation. We can also check that $X$ satisfies the other conditions in Lemma~\ref{lem:labeliff}. \item Suppose that $H_j = H_i$. Let $X':= X \cup \{b_i, b'_i\}$. Since $(b_i, b'_i) \in F^*$ and there is no edge in $F^*$ between $b'_i$ and $H_i$, the nonsingularity of $C^*[X]$ shows that $C^*[X']$ is nonsingular before the pivoting operation around $X_j$. 
Hence, $C^*[X' \triangle X_j]$ is nonsingular after the pivoting operation by Lemma~\ref{lem:pivotsing}. We can also check that $X' \triangle X_j$ satisfies the other conditions in Lemma~\ref{lem:labeliff}. \end{itemize} By these cases, there exists a set $X$ satisfying the conditions in Lemma~\ref{lem:labeliff} after Steps 4 and 5 of ${\sf Augment}(P)$. Therefore, every vertex in $H_i \cap V$ is labeled without updating the dual variables by Lemma~\ref{lem:labeliff}, which completes the proof. \end{proof} In what follows in this subsection, we describe how to update the dual variables. Suppose that ${\sf Search}$ returns $\emptyset$ when it is applied to $H_i \cup \{b_i\}$. Define $R^+$, $R^-$, $Z^+$, $Z^-$, $Y$, and $\epsilon=\min\{\epsilon_1,\epsilon_2,\epsilon_3,\epsilon_4\}$ as in Section~\ref{sec:dualupdatealgo}. By Lemma~\ref{lem:reachableHi}, we have that $R^+ = \{b_i, t_i \} $, $R^-=\{b_j\mid \mbox{$H_j$: maximal blossom with $H_j\subsetneq H_i$}\}$, $Z^-= Y = \emptyset$, and $\epsilon_2 = \epsilon_3 = \epsilon_4 = + \infty$. In particular, every maximal blossom is labeled with $\oplus$. Here, a blossom $H_j\subsetneq H_i$ is called a {\em maximal blossom} if there exists no blossom $H$ with $H_j \subsetneq H \subsetneq H_i$. We now modify the dual variables in $V^*$ as follows. Set $\epsilon' := \min \{ \epsilon, q (H_i)\}$, which is a finite positive value. Then update $p(t_i)$ as $$p(t_i) :=\begin{cases} p(t_i)+\epsilon' & (t_i \in B^*), \\ p(t_i)-\epsilon' & (t_i \in V^* \setminus B^*), \end{cases}$$ and update $q(H_i)$ as $q(H_i) := q(H_i)-\epsilon'$. For each maximal blossom $H_j\subsetneq H_i$, which must be labeled with $\oplus$, update $q(H_j)$ as $q(H_j) := q(H_j)+\epsilon'$ and $p(b_j)$ as $$ p(b_j) :=\begin{cases} p(b_j) - \epsilon' & (b_j \in B^*), \\ p(b_j) + \epsilon' & (b_j \in V^* \setminus B^*). 
\end{cases} $$ Note that ${\sf Expand}(H_j)$ is not applied for any maximal blossom $H_j \subsetneq H_i$, because $q(H_j) > 0$ after the dual update, whereas ${\sf Expand}(H_i)$ is applied when $q (H_i)$ becomes zero. We now prove the following claim, which shows the validity of this procedure. \begin{claim} The obtained dual variables $p$ and $q$ are feasible in $V^*$ (not only in $H_i$). \end{claim} \begin{proof} It suffices to show (DF2). Suppose that $u \in B^*$, $v \in V^* \setminus B^*$, and $(u, v) \in F^*$. Since the value of $q(H)$ is zero for newly created blossoms $H$, the dual variables $(p,q)$ are feasible at the end of ${\sf Search}$ applied to $H_i \cup \{b_i\}$. Updating the dual variables decreases the slack $p(v)-p(u)-Q_{uv}$ only if $u$ and $v$ belong to distinct maximal blossoms included in $H_i$ or one of them is $t_i$. In these cases, however, we have $p(v)-p(u)-Q_{uv}\geq\epsilon_1\geq\epsilon'$. Thus the above update of the dual variables does not violate the feasibility. \end{proof} \section{Algorithm Description and Complexity} \label{sec:complexity} Our algorithm for the minimum-weight parity base problem is described as follows. \begin{description} \item[Algorithm] {$\sf Minimum$-$\sf Weight$ $\sf Parity$ $\sf Base$} \item[Step 1:] Split the weight $w_\ell$ into $p(v)$ and $p(\bar{v})$ for each line $\ell=\{v,\bar{v}\}\in L$, i.e., $p(v) + p(\bar{v}) = w_\ell$. Execute the greedy algorithm for finding a base $B\in{\cal B}$ with minimum value of $p(B)=\sum_{u\in B}p(u)$. Set $\Lambda = \emptyset$. \item[Step 2:] If there is no source line, then return $B := B^* \cap V$ as an optimal solution. Otherwise, apply ${\sf Search}$. If ${\sf Search}$ returns $\emptyset$, then go to Step 3. If ${\sf Search}$ finds an augmenting path $P$, then go to Step 4. \item[Step 3:] Update the dual variables as in Section~\ref{sec:dualupdatealgo}. If $\epsilon = +\infty$, then conclude that there exists no parity base and terminate the algorithm.
Otherwise, apply ${\sf Expand}(H_i)$ for all maximal blossoms $H_i$ with $q(H_i)=0$ and go to Step 2. \item[Step 4:] Apply ${\sf Augment}(P)$ to obtain a new base $B^*$, a family $\Lambda$ of blossoms, and feasible dual variables $p$ and $q$. For each normal blossom $H_i$ with $H_i \cap P \not= \emptyset$ in the increasing order of $i$, do the following. \begin{quote} While $q (H_i) > 0$, apply ${\sf Search}$ in $H_i$ and update the dual variables as in Section~\ref{sec:reroute}. Apply ${\sf Expand}(H_i)$. \end{quote} Go back to Step 2. \end{description} We have already seen the correctness of this algorithm, and we now analyze the complexity. Since $|V^*| = O(n)$, an execution of the procedure ${\sf Search}$ as well as the dual update requires $O(n^2)$ arithmetic operations. By Lemma~\ref{lem:dualupdatebound}, Step 3 is executed at most $O(n)$ times per augmentation. In Step 4, we create a new blossom or apply ${\sf Expand}(H_i)$ when we update the dual variables, which shows that the number of dual updates as well as executions of ${\sf Search}$ in Step 4 is also bounded by $O(n)$. Thus, ${\sf Search}$ and dual update are executed $O(n)$ times per augmentation, which requires $O(n^3)$ operations. We note that it also requires $O(n^3)$ operations to update $C^*$ and $G^*$ after augmentation. Since each augmentation reduces the number of source lines by two, the number of augmentations during the algorithm is $O(m)$, where $m=\mathrm{rank}\, A$, and hence the total number of arithmetic operations is $O(n^3 m)$. \begin{theorem} \label{th:complexity} Algorithm {\sf Minimum-Weight Parity Base} finds a parity base of minimum weight or detects infeasibility with $O(n^3m)$ arithmetic operations over $\mathbf{K}$. \end{theorem} If $\mathbf{K}$ is a finite field of fixed order, each arithmetic operation can be executed in $O(1)$ time. Hence Theorem~\ref{th:complexity} implies the following.
\begin{corollary} The minimum-weight parity base problem over an arbitrary fixed finite field $\mathbf{K}$ can be solved in strongly polynomial time. \end{corollary} When $\mathbf{K} = \mathbb{Q}$, it is not obvious that a direct application of our algorithm runs in polynomial time. This is because we do not know how to bound the number of bits required to represent the entries of $C^*$. However, the minimum-weight parity base problem over $\mathbb Q$ can be solved in polynomial time by applying our algorithm over a sequence of finite fields. \begin{theorem} The minimum-weight parity base problem over $\mathbb Q$ can be solved in time polynomial in the binary encoding length $\langle A \rangle$ of the matrix representation $A$. \end{theorem} \begin{proof} By multiplying each entry of $A$ by the product of the denominators of all entries, we may assume that each entry of $A$ is an integer. Let $\gamma$ be the maximum absolute value of the entries of $A$, and put $N := \lceil m \log (m \gamma) \rceil$. Note that $N$ is bounded by a polynomial in $\langle A \rangle$. We compute the $N$ smallest prime numbers $p_1, \dots , p_N$. Since it is known that $p_N = O(N \log N)$ by the prime number theorem, they can be computed in polynomial time by the sieve of Eratosthenes. For $i=1, \dots , N$, we consider the minimum-weight parity base problem over ${\rm GF}(p_i)$ where each entry of $A$ is regarded as an element of ${\rm GF}(p_i)$. In other words, we consider the problem in which each operation is executed modulo $p_i$. Since each arithmetic operation over ${\rm GF}(p_i)$ can be executed in polynomial time, we can solve the minimum-weight parity base problem over ${\rm GF}(p_i)$ in polynomial time by Theorem~\ref{th:complexity}. Among all optimal solutions of these problems, the algorithm returns the best one $B$. That is, $B$ is the minimum weight parity set subject to $|B| = m$ and $\det A[U, B] \not\equiv 0 \pmod {p_i}$ for some $i \in \{1, \dots , N\}$. 
To see the correctness of this algorithm, we evaluate the absolute value of the subdeterminant of $A$. For any subset $X \subseteq V$ with $|X| = m$, we have $$ |\det A[U, X]| \le m! \gamma^m \le (m \gamma)^m \le 2^N < \prod^N_{i=1} p_i. $$ This shows that $\det A[U, X] = 0$ if and only if $\det A[U, X] \equiv 0 \pmod {\prod^N_{i=1} p_i}$. Therefore, $\det A[U, X] \not= 0$ if and only if $\det A[U, X] \not\equiv 0 \pmod {p_i}$ for some $i \in \{1, \dots , N\}$, which shows that the output $B$ is an optimal solution. \end{proof} \end{document}
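The prime-residue test in the proof above checks $\det A[U, X] \not= 0$ over $\mathbb{Q}$ by testing whether the determinant is nonzero modulo at least one of sufficiently many primes. A minimal sketch of that test (plain Python, toy $2 \times 2$ integer matrices, not tied to any particular matroid instance):

```python
from itertools import count

def first_primes(n):
    """Return the n smallest primes by trial division (fine for small n;
    the paper uses the sieve of Eratosthenes for the same purpose)."""
    primes = []
    for cand in count(2):
        if all(cand % p for p in primes):
            primes.append(cand)
            if len(primes) == n:
                return primes

def det_mod(matrix, p):
    """Determinant of an integer matrix modulo a prime p,
    computed by Gaussian elimination over GF(p)."""
    m = [[x % p for x in row] for row in matrix]
    n, det = len(m), 1
    for col in range(n):
        pivot = next((r for r in range(col, n) if m[r][col]), None)
        if pivot is None:
            return 0                      # singular modulo p
        if pivot != col:
            m[col], m[pivot] = m[pivot], m[col]
            det = -det % p                # row swap flips the sign
        det = det * m[col][col] % p
        inv = pow(m[col][col], p - 2, p)  # inverse via Fermat's little theorem
        for r in range(col + 1, n):
            f = m[r][col] * inv % p
            m[r] = [(a - f * b) % p for a, b in zip(m[r], m[col])]
    return det % p

A = [[2, 3], [4, 6]]   # singular over Q: det = 0
B = [[2, 3], [4, 5]]   # nonsingular over Q: det = -2
primes = first_primes(5)
assert all(det_mod(A, p) == 0 for p in primes)   # 0 modulo every prime
assert any(det_mod(B, p) != 0 for p in primes)   # nonzero modulo some prime
```

With the determinant bound $|\det A[U, X]| < \prod_i p_i$ from the proof, "nonzero modulo some $p_i$" is equivalent to "nonzero over $\mathbb{Q}$", which is exactly how the algorithm over the finite fields ${\rm GF}(p_i)$ recovers the rational answer.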
\begin{document} \begin{frontmatter} \title{Multi-task nonparallel support vector machine for classification} \author{Zongmin Liu$^1$} \author{Yitian Xu$^{2*}$} \ead{[email protected], Tel.: +8610 62737077.} \address{$^1$College of Information and Electrical Engineering, China Agricultural University, Beijing 100083, China} \address{$^2$College of Science, China Agricultural University, Beijing 100083, China} \begin{abstract} The direct multi-task twin support vector machine (DMTSVM) explores the shared information between multiple correlated tasks and thereby achieves better generalization performance. However, it involves a matrix inversion operation when solving the dual problems, which is computationally expensive. Moreover, the kernel trick cannot be utilized directly in the nonlinear case. To effectively avoid these problems, a novel multi-task nonparallel support vector machine (MTNPSVM), covering both the linear and nonlinear cases, is proposed in this paper. By introducing the $\epsilon$-insensitive loss instead of the square loss in DMTSVM, MTNPSVM effectively avoids the matrix inversion operation and takes full advantage of the kernel trick. The theoretical implications of the model are further discussed. To further improve the computational efficiency, the alternating direction method of multipliers (ADMM) is employed when solving the dual problem. The computational complexity and convergence of the algorithm are provided. In addition, the properties and sensitivity of the model parameters are further explored. Experimental results on fifteen benchmark datasets and twelve image datasets demonstrate the validity of MTNPSVM in comparison with state-of-the-art algorithms. Finally, it is applied to a real Chinese Wine dataset, which also verifies its effectiveness. \end{abstract} \begin{keyword} Multi-task learning, nonparallel support vector machine, ADMM.
\end{keyword} \end{frontmatter} \section{Introduction} In the single-task learning (STL) field, the support vector machine (SVM) has attracted much academic attention in recent years due to its solid theoretical foundation and good performance, but it must solve a single large-scale problem, which leads to low computational efficiency. The twin support vector machine (TWSVM) \cite{twsvm} proposed by Jayadeva et al. transforms the large-scale problem in SVM into two small-scale problems. It simultaneously seeks two decision hyperplanes, such that each hyperplane is required to be close to one of the two classes by the square loss function, and to be at a distance of at least one from the other class by the hinge loss function. It thus significantly reduces the computational time. Afterward, many researchers made further improvements to TWSVM \cite{tbsvm}. As a successful improvement, the nonparallel support vector machine (NPSVM) \cite{npsvm} proposed by Tian et al. has become one of the state-of-the-art classifiers due to its great generalization performance. This model similarly seeks two nonparallel decision hyperplanes, and the hinge loss is also employed to keep each hyperplane as far as possible from the other class. Unlike TWSVM, the $\epsilon$-insensitive loss \cite{svr} replaces the original square loss and requires only that the hyperplane be as close as possible to the class itself. It should be pointed out that TWSVM loses half of its sparsity, because almost all of the samples constrained by the square loss function contribute to the final decision hyperplane. By contrast, the $\epsilon$-insensitive loss function is similar to the hinge loss function in that both allow only a fraction of the samples to be support vectors (the samples that contribute to the decision hyperplane). The $\epsilon$-insensitive loss gives the model the following merits: (a) Matrix inversion operation is avoided in the solving process. (b) Kernel trick can be implemented directly in the nonlinear case.
(c) It follows the structural risk minimization (SRM) principle. (d) The sparsity of the model is improved. In this paper, the sparsity property of NPSVM is referred to as whole sparsity, and the corresponding property of TWSVM as semi-sparsity. In recent years, owing to these advantages, NPSVM has been combined with other learning theories to tackle different problems, such as multi-instance learning \cite{mit}, multi-view learning \cite{mvl}, multi-class learning \cite{mcl}, and the large margin distribution machine \cite{ldm}. These methods have all yielded excellent performance. So it is potentially beneficial to extend NPSVM to handle multi-task issues. For decades, multi-task learning (MTL), as a branch of machine learning, has developed rapidly in web applications \cite{web}, bioinformatics \cite{bio}, computer vision \cite{cv}, and natural language processing \cite{nlp}. Compared with STL methods, it improves the generalization performance by discovering relations among tasks, and supposes that all related tasks share potentially similar structural information \cite{infor}. Multi-task learning theory has thus been rapidly supplemented and enhanced \cite{pro1,pro2}. Generally speaking, MTL methods can be divided into three categories based on the content of the shared information: feature-based \cite{feature1,feature2}, instance-based \cite{instance}, and parameter-based \cite{parame1,parame2} methods. Feature-based MTL assumes that multiple tasks share the same feature subspace and requires that the feature coefficients of multiple tasks be sparse. Instance-based MTL attempts to identify samples in each task that may be beneficial to other tasks. Parameter-based MTL assumes that multiple related tasks have common parameters. Recently, the mean regularized multi-task learning (RMTL) \cite{rmtl} proposed by Evgeniou et al.
first combines multi-task learning theory with the support vector machine, and achieves good generalization performance. As a parameter-based MTL approach, RMTL assumes that all tasks share a common mean hyperplane, and the hyperplane of each task has an offset from the mean hyperplane. The final decision hyperplane of each task is determined by the common hyperplane and its offset. However, RMTL has low computational efficiency due to the need to handle a large-scale problem. By combining TWSVM with MTL, Xie et al. further proposed a direct multi-task twin support vector machine (DMTSVM) \cite{dmtsvm}. It simultaneously seeks two decision hyperplanes for each task, theoretically increasing computational efficiency by a factor of four. Owing to the excellent performance of DMTSVM, many researchers have proposed further improvements. The multi-task centroid twin support vector machine (MTCTSVM) \cite{mtctsvm} proposed by Xie et al. additionally takes into account the centroid of each task. Mei et al. presented the multi-task $v$-twin support vector machine (MT-$v$-TWSVM) \cite{vtwin} based on the property of $v$ in $v$-TWSVM, where the value of $v$ can control the sparsity of the model. Moreover, based on the idea that misclassified samples should be given different penalties at different locations, An et al. introduced rough set theory into MT-$v$-TWSVM and established a rough margin-based multi-task $v$-twin support vector machine (rough MT-$v$-TSVM) \cite{roughv}. The above multi-task TWSVMs all obtain better generalization performance due to their own unique structures, but they all face the following problems: \begin{itemize} \item [\textbullet]When solving these models, a matrix inversion operation is required. When the matrix is not invertible, the added correction term means that the obtained solution is not exactly equal to the optimal solution of the original model.
\item [\textbullet]These models must consider an extra kernel-generated space when using the kernel trick \cite{kernel1} to solve linearly inseparable problems. This increases the burden of model implementation. \end{itemize} Based on the ideas above, this paper puts forward a novel multi-task nonparallel support vector machine, which first introduces the idea of the nonparallel support vector machine into the multi-task learning field. By replacing the square loss in the multi-task TWSVMs with the $\epsilon$-insensitive loss, MTNPSVM not only considers the correlation between tasks when training multiple related tasks, but also inherits the merits of NPSVM. However, this inevitably increases the scale of the problem. To address this, the ADMM \cite{admm} is adopted to accelerate the computation by converting one large problem into multiple small problems. The main contributions of the paper can be summarized as follows: \begin{enumerate} \item This paper proposes a novel multi-task nonparallel support vector machine, which improves the generalization performance by introducing the $\epsilon$-insensitive loss function. \item MTNPSVM constrains one class of samples by the $\epsilon$-insensitive loss instead of the square loss. This makes the samples appear only in the constraints, thus avoiding the matrix inversion operation and allowing the kernel trick to be applied directly in the nonlinear case. \item ADMM is employed in the MTNPSVM, which greatly improves the solving efficiency. \end{enumerate} The rest of this paper is outlined as follows. In Section \ref{10044}, a brief review of DMTSVM and NPSVM is given. MTNPSVM is proposed in Section \ref{10055}. A detailed derivation of ADMM for solving MTNPSVM is provided in Section \ref{10066}. Extensive comparative experiments are presented in Section \ref{10077}. Finally, some conclusions and future research directions are given in Section \ref{10088}.
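The sparsity discussion above hinges on the shapes of the three loss functions involved: the square loss penalizes every nonzero residual, while the $\epsilon$-insensitive and hinge losses vanish on whole regions, so only boundary samples become support vectors. A small self-contained sketch (plain Python; the function names and residual values are ours, for illustration only):

```python
def square_loss(r):
    """Positive for any nonzero residual r: every sample influences the hyperplane."""
    return r * r

def hinge_loss(m):
    """Zero once the margin m reaches 1: well-separated samples drop out."""
    return max(0.0, 1.0 - m)

def eps_insensitive_loss(r, eps=0.5):
    """Zero inside the band |r| <= eps: in-band samples drop out."""
    return max(0.0, abs(r) - eps)

residuals = [-1.0, -0.3, 0.0, 0.4, 2.0]
# The square loss is positive for every nonzero residual ...
assert sum(square_loss(r) > 0 for r in residuals) == 4
# ... while the eps-insensitive loss ignores everything inside the band,
assert [eps_insensitive_loss(r) for r in residuals] == [0.5, 0.0, 0.0, 0.0, 1.5]
# and the hinge loss ignores samples already beyond the unit margin.
assert hinge_loss(1.2) == 0.0
```

Only the samples with a nonzero $\epsilon$-insensitive or hinge loss (plus those exactly on the boundary) end up as support vectors, which is the "whole sparsity" NPSVM inherits and TWSVM's square loss forfeits.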
\section{Related work}\label{10044} In this section, detailed explanations of the nonparallel support vector machine and the direct multi-task twin support vector machine are given; these models form the basis of MTNPSVM. \subsection{Nonparallel support vector machine} As a single-task learning method, NPSVM is similar to TWSVM in that it seeks two nonparallel proximal hyperplanes $x^{\top}w_{+}+b_{+}=0$ and $x^{\top}w_{-}+b_{-}=0$. Unlike TWSVM, the regularization term and the $\epsilon$-insensitive loss function are introduced into the model. The matrices $A_{+}$ and $B_{-}$ are defined as all positive and negative samples, respectively. For simplicity, denote $A=(A_{+},e_{+})$, $B=(B_{-},e_{-})$, $u=(w_{+};b_{+})$, and $v=(w_{-};b_{-})$, where $e_{+}$ and $e_{-}$ are vectors of ones of appropriate dimensions. Then the primal problems of NPSVM are displayed as follows: \begin{eqnarray}\label{1001} \displaystyle{\min_{u,\xi,\xi^{*},\eta}}~~&&\frac{1}{2}\|u\|^{2}+C_{1}e_{+}^{\top}\left(\xi+\xi^{*}\right)+C_{2}e_{-}^{\top}\eta\\ \mbox{s.t.}~~&&-\varepsilon e_{+}-\xi^{*}\le\phi(A)u\le\varepsilon e_{+}+\xi, \nonumber\\ &&-\phi(B)u\ge e_{-}-\eta,\nonumber\\ &&\xi,~\xi^{*},~\eta\ge0,\nonumber \end{eqnarray} and \begin{eqnarray}\label{1002} \displaystyle{\min_{v,\xi,\eta,\eta^{*}}}~~&&\frac{1}{2}\|v\|^{2}+C_{3}e_{-}^{\top}\left(\eta+\eta^{*}\right)+C_{4}e_{+}^{\top}\xi\\ \mbox{s.t.}~~&&-\varepsilon e_{-}-\eta^{*}\le\phi(B)v\le\varepsilon e_{-}+\eta,\nonumber\\ &&\phi(A)v\ge e_{+}-\xi,\nonumber\\ &&\eta,~\eta^{*},~\xi\ge0\nonumber, \end{eqnarray} where $C_{i}\ge0$, $(i=1, 2, 3, 4)$ are trade-off parameters, and $\xi$, $\xi^{*}$, $\eta$ and $\eta^{*}$ are slack variables. $\phi(\cdot)$ is the mapping function, which maps the samples from the original space to a higher-dimensional space; different nonlinear mappings can be exploited. In the linear case, the mapping function degenerates into the identity mapping.
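The augmented notation above, $A=(A_{+},e_{+})$ and $u=(w_{+};b_{+})$, simply folds the bias into the data matrix, so that a single product $Au$ collects the values $x^{\top}w_{+}+b_{+}$ for all positive samples at once. A minimal plain-Python sketch with toy data (linear case, where $\phi$ is the identity; the numbers are illustrative only):

```python
def matvec(M, v):
    """Plain-Python matrix-vector product."""
    return [sum(a * b for a, b in zip(row, v)) for row in M]

# Toy positive and negative samples (one sample per row).
A_plus = [[1.0, 2.0], [2.0, 1.0]]
B_minus = [[-1.0, -2.0]]

# Augment each sample with a trailing 1 so the bias rides along with w.
A = [row + [1.0] for row in A_plus]    # A = (A_+, e_+)
B = [row + [1.0] for row in B_minus]   # B = (B_-, e_-)

w_plus, b_plus = [1.0, -1.0], 0.5
u = w_plus + [b_plus]                  # u = (w_+; b_+)

# A u reproduces x^T w_+ + b_+ for every positive sample in one product.
assert matvec(A, u) == [x + b_plus for x in matvec(A_plus, w_plus)]
assert matvec(B, u) == [x + b_plus for x in matvec(B_minus, w_plus)]
```

This is why the constraints in (\ref{1001}) and (\ref{1002}) can be written compactly as bounds on $\phi(A)u$ and $\phi(B)u$ rather than sample by sample.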
As shown in primal problem (\ref{1001}), when constructing the positive hyperplane, the $\epsilon$-insensitive loss function restricts the positive samples to the $\epsilon$-band between $x^{\top}w_{+}+b_{+}=\epsilon$ and $x^{\top}w_{+}+b_{+}=-\epsilon$ as much as possible. The hinge loss keeps the negative samples at a distance of at least 1 from the positive hyperplane. This leaves the positive hyperplane determined by only a small number of samples from the two classes. Thus, the $\epsilon$-insensitive loss function improves the model from semi-sparsity to whole sparsity. Moreover, the regularization term $\frac{1}{2}\|u\|^{2}$ is added to make the width of the $\epsilon$-band as large as possible, thus enabling the model to follow the SRM principle. In addition, this model avoids the matrix inversion operation in the solving process. The same derivation applies to problem (\ref{1002}). The dual formulations of problems (\ref{1001}) and (\ref{1002}) can both be written in the following form: \begin{eqnarray}\label{1003} \displaystyle{\min_{\pi}}~~&&\frac{1}{2}{\pi}^{\top}{\Lambda}{\pi}+{\kappa}^{\top}{\pi}\\ \text{s.t.}~~&&{e}^{\top}{\pi}=0,\nonumber\\ &&{0}\leq{\pi}\leq{C},\nonumber\ \end{eqnarray} where $\Lambda$ is a matrix of appropriate size, and $\pi$, $e$, $\kappa$ and $C$ are vectors of appropriate dimensions. Observe that this form is a standard QPP, so NPSVM can be solved efficiently by the sequential minimal optimization (SMO) method or the alternating direction method of multipliers (ADMM). Owing to these advantages, the model performs better than other algorithms, but it can only learn tasks individually, which is not favorable for learning multiple associated tasks. \subsection{Direct multi-task twin support vector machine}\label{10033} DMTSVM is built on the foundation of RMTL, directly integrating the ideas of TWSVM and MTL.
In contrast to RMTL, this model constructs two nonparallel hyperplanes for each task, which reduces the scale of the problem and improves efficiency. Suppose $X_{p}$ and $X_{q}$ represent the positive and negative samples of all tasks, respectively, and $X_{pt}$ and $X_{qt}$ represent the positive and negative samples in the $t$-th task. $e_{t}$, $e_{1t}$, $e_{2t}$ and $e$ are ones vectors of appropriate dimensions; the lengths of $e_{1t}$ and $e_{2t}$ equal the numbers of positive and negative samples of the $t$-th task, respectively. Denote $A=(X_{p}, e)$, $B=(X_{q}, e)$, $A_{t}=(X_{pt}, e_{1t})$ and $B_{t}=(X_{qt}, e_{2t})$. Based on the idea of multi-task learning, all tasks share two common hyperplanes $u=(w_{1}; b_{1})$ and $v=(w_{2}; b_{2})$, and $u_{t}$ and $v_{t}$ represent the task-specific offsets of the $t$-th task. The positive decision hyperplane of the $t$-th task can be expressed as $(w_{1t}; b_{1t})=(u+u_{t})$, while the negative decision hyperplane is $(w_{2t}; b_{2t})=(v+v_{t})$. DMTSVM is obtained by solving the following two QPPs: \begin{eqnarray}\label{1004} \displaystyle{\min_{u, u_{t}, \xi_{t}}}~~&&\frac{1}{2}\|Au\|_{2}^{2}+\frac{1}{2}\sum_{t=1}^{T}\rho_{t}\left\|A_{t}u_{t}\right\|_{2}^{2}+C_{1}\sum_{t=1}^{T}e_{2t}^{\top} \xi_{t}\\ \mbox{s.t.}~~&&-B_{t}\left(u+u_{t}\right)+\xi_{t}\geq e_{2t},\nonumber\\ ~~&&\xi_{t}\geq0,~t=1,2,\cdots,T,\nonumber \end{eqnarray} and \begin{eqnarray}\label{1005} \displaystyle{\min_{v, v_{t}, \eta_{t}}}~~&&\frac{1}{2}\|Bv\|_{2}^{2}+\frac{1}{2}\sum_{t=1}^{T}\lambda_{t}\left\|B_{t}v_{t}\right\|_{2}^{2}+C_{2}\sum_{t=1}^{T} e_{1t}^{\top}\eta_{t} \\ \mbox{s.t.}~~&&A_{t}\left(v+v_{t}\right)+\eta_{t}\geq e_{1t},\nonumber\\ ~~&&\eta_{t}\geq0,~t=1,2,\cdots,T,\nonumber \end{eqnarray} where $C_{i}\ge0, (i=1, 2)$ are trade-off parameters, $\xi_{t}$ and $\eta_{t}$ represent slack variables, and $\rho_{t}$ and $\lambda_{t}$ adjust the relationship between tasks.
For the primal problem (\ref{1004}), when constructing the positive hyperplane for each task, the square loss in the objective function forces the hyperplane to lie as close as possible to all positive samples, and the hinge loss keeps the hyperplane at a distance of at least 1 from the negative samples. A similar derivation applies to problem (\ref{1005}). When $\rho_{t}\rightarrow$0 and $\lambda_{t}\rightarrow$0, we have $u\rightarrow$0 and $v\rightarrow$0, and all tasks are treated as unrelated. On the contrary, when $\rho_{t}\rightarrow\infty$ and $\lambda_{t}\rightarrow\infty$, we have $u_{t}\rightarrow$0 and $v_{t}\rightarrow$0, and all tasks are considered as a unified whole. The label of $x$ in the $t$-th task is assigned by the following decision function: \begin{eqnarray}\label{1006} f(x)=\arg\min_{r=1,2}\left|x^{\top}w_{rt}+b_{rt}\right|. \end{eqnarray} As an extension of TWSVM to the multi-task learning scenario, DMTSVM can exploit the correlation between tasks to improve generalization performance. However, this model has drawbacks similar to those of TWSVM, such as the semi-sparsity of the model and the matrix inversion operation that cannot be avoided in the solving process. \section{Multi-task nonparallel support vector machine}\label{10055} In Section \ref{10044}, NPSVM and DMTSVM were shown to be complementary, so based on the above two models, a novel multi-task nonparallel support vector machine (MTNPSVM) is presented, which absorbs the merits of NPSVM and multi-task learning. This provides a modern perspective on the extension of NPSVM to multi-task learning. \subsection{Linear MTNPSVM} In this subsection, the definitions of the matrices $A$, $B$, $A_{t}$, $B_{t}$ and the vectors $u$, $v$, $u_{t}$, $v_{t}$, $e_{1t}$, $e_{2t}$ are identical to those used in Section \ref{10033}. Also, $u+u_{t}=(w_{1t}; b_{1t})$ and $v+v_{t}=(w_{2t}; b_{2t})$ are the vectors of the positive and negative hyperplanes in the $t$-th task.
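The per-task prediction in (\ref{1006}) combines the shared hyperplane with the task-specific offset and then picks the nearer of the two resulting hyperplanes. A plain-Python sketch of this rule (the weight vectors are hypothetical illustrative values, not fitted ones):

```python
def predict(x, u, u_t, v, v_t):
    """Label x in task t: compare distances to the positive and negative hyperplanes.

    u, v are the shared (common) hyperplanes; u_t, v_t are task-specific offsets.
    All vectors are augmented: the last entry is the bias, and x ends with 1.
    """
    w1 = [a + b for a, b in zip(u, u_t)]   # (w_1t; b_1t) = u + u_t
    w2 = [a + b for a, b in zip(v, v_t)]   # (w_2t; b_2t) = v + v_t
    d1 = abs(sum(a * b for a, b in zip(x, w1)))
    d2 = abs(sum(a * b for a, b in zip(x, w2)))
    return +1 if d1 <= d2 else -1          # argmin over r = 1, 2

# Hypothetical parameters for one task (illustrative numbers only).
u, u_t = [1.0, 0.0, 0.0], [0.0, 0.0, -1.0]   # positive plane: x1 - 1 = 0
v, v_t = [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]    # negative plane: x2 + 1 = 0

assert predict([1.0, 5.0, 1.0], u, u_t, v, v_t) == +1   # lies on the positive plane
assert predict([9.0, -1.0, 1.0], u, u_t, v, v_t) == -1  # lies on the negative plane
```

The same rule is reused by MTNPSVM below, since both models share the decomposition of each task hyperplane into a common part plus an offset.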
The primal problems of MTNPSVM can be built as follows: \begin{eqnarray}\label{1009} \displaystyle{\min_{{u},u_{t},\eta_{t},\eta_{t}^{*},\xi_{t}}}~~&&\frac{\rho_{1}}{2}\|u\|^{2}+\frac{1}{2T}\sum_{t=1}^{T}\left\|u_{t}\right\|^{2}+C_{1}\sum_{t=1}^{T} e_{1t}^{\top}\left(\eta_{t}+\eta_{t}^{*}\right)+C_{2}\sum_{t=1}^{T}e_{2t}^{\top}\xi_{t}\\ \mbox{s.t.~~~}~~&&-\epsilon e_{1t}-\eta_{t}^{*}\leq A_{t}\left(u+u_{t}\right)\leq\epsilon{e}_{1 t}+\eta_{t},\nonumber\\ &&B_{t}\left(u+u_{t}\right)\leq-e_{2t}+\xi_{t},\nonumber\\ &&\eta_{t},~\eta_{t}^{*},~\xi_{t}\geq0,~t=1,2,\cdots,T,\nonumber \end{eqnarray} and \begin{eqnarray}\label{1010} \displaystyle{\min _{v, v_{t},\xi_{t},\xi_{t}^{*},\eta_{t}}}~~&&\frac{\rho_{2}}{2}\|v\|^{2}+\frac{1}{2T}\sum_{t=1}^{T}\left\|v_{t}\right\|^{2}+C_{3}\sum_{t=1}^{T} e_{2t}^{\top}\left(\xi_{t}+\xi_{t}^{*}\right)+C_{4}\sum_{t=1}^{T}e_{1t}^{\top}\eta_{t}\\ \mbox{s.t.~~~}~~&&-\epsilon e_{2t}-\xi_{t}^{*}\leq B_{t}\left(v+v_{t}\right)\leq\epsilon{e}_{2t}+\xi_{t},\nonumber\\ &&A_{t}\left(v+v_{t}\right)\geq e_{1t}-\eta_{t},\nonumber\\ &&\xi_{t},~\xi_{t}^{*},~\eta_{t}\geq0,~t=1,2,\cdots,T.\nonumber \end{eqnarray} The relationship between tasks can be adjusted by $\rho_{1}$ and $\rho_{2}$. $C_{i}\ge 0$, $(i=1, 2, 3, 4)$ are penalty parameters. $\xi_{t}$, $\xi_{t}^{*}$, $\eta_{t}$ and $\eta_{t}^{*}$ are slack variables of the $t$-th task, analogous to the corresponding parameters in NPSVM. Note that in the primal problem (\ref{1009}), when constructing the positive hyperplane for each task, the $\epsilon$-insensitive loss $\left(\eta_{t}+\eta_{t}^{*}\right)$ accompanied by the first constraint restricts the positive samples to the $\epsilon$-band between $x^{\top}w_{1t}+b_{1t}=\epsilon$ and $x^{\top}w_{1t}+b_{1t}=-\epsilon$ as much as possible, and the hinge loss $\xi_{t}$ accompanied by the second constraint keeps the hyperplane at a distance of at least 1 from the negative samples.
In addition, MTNPSVM can capture the commonality between tasks through the parameter $u$ ($v$) and the individuality of each task through the parameter $u_{t}$ ($v_{t}$). Also, the first two regularization terms are equivalent to a trade-off between maximizing the width of the $\epsilon$-band $\frac{2\epsilon}{\|w_{1t}\|}$ and minimizing the distance between each task hyperplane and the common hyperplane. Similar conclusions can be found in \cite{npsvm,rmtl}. The construction of the negative hyperplane in problem (\ref{1010}) is similar to that in problem (\ref{1009}). The dual problems of (\ref{1009}) and (\ref{1010}) can be obtained by introducing the Lagrangian multiplier vectors $\alpha_{t}^{+}$, $\alpha_{t}^{+*}$, $\beta_{t}^{-}$, $\gamma_{t}$, $\theta_{t}$, $\psi_{t}$. Take problem (\ref{1009}) as an example. The Lagrangian function is given by \begin{eqnarray}\label{20001} L=&&\frac{\rho_{1}}{2}\|u\|^{2}+\frac{1}{2T}\sum_{t=1}^{T}\left\|u_{t}\right\|^{2}+C_{1} \sum_{t=1}^{T} e_{1t}^{\top}\left(\eta_{t}+\eta_{t}^{*}\right)+C_{2} \sum_{t=1}^{T}e_{2 t}^{\top}\xi_{t}\nonumber\\&&-\sum_{t=1}^{T}\alpha_{t}^{+\top}\left[\epsilon e_{1t}+\eta_{t}-A_{t}\left(u+u_{t}\right)\right]-\sum_{t=1}^{T}\alpha_{t}^{+*\top}\left[\epsilon e_{1t}+\eta_{t}^{*}+A_{t}\left(u+u_{t}\right)\right]\nonumber\\ &&-\sum_{t=1}^{T}\beta_{t}^{-\top}\left[-e_{2t}+\xi_{t}-B_{t}\left(u+u_{t}\right)\right]-\sum_{t=1}^{T}\gamma_{t}^{\top}\xi_{t}-\sum_{t=1}^{T}\theta_{t}^{\top} \eta_{t}-\sum_{t=1}^{T}\psi_{t}^{\top}\eta_{t}^{*}. \end{eqnarray} The KKT conditions can be obtained by differentiating with respect to the parameters $u$, $u_{t}$, $\eta_{t}$, $\eta_{t}^{*}$, $\xi_{t}$ and setting the derivatives equal to 0: \begin{eqnarray} &&\frac{\partial L}{\partial u}=\rho_{1}u-\sum_{t=1}^{T}A_{t}^{\top}\left(\alpha_{t}^{+*}-\alpha_{t}^{+}\right)+\sum_{t=1}^{T}B_{t}^{\top}\beta_{t}^{-}=0, \\ &&\frac{\partial L}{\partial
u_{t}}=\frac{u_{t}}{T}-A_{t}^{\top}\left(\alpha_{t}^{+*}-\alpha_{t}^{+}\right)+B_{t}^{\top}\beta_{t}^{-}=0, \\ &&\frac{\partial L}{\partial \eta_{t}}=C_{1}e_{1 t}-\alpha_{t}^{+}-\theta_{t}=0, \\ &&\frac{\partial L}{\partial \eta_{t}^{*}}=C_{1} e_{1 t}-\alpha_{t}^{+*}-\psi_{t}=0, \\ &&\frac{\partial L}{\partial \xi_{t}}=C_{2}e_{2 t}-\beta_{t}^{-}-\gamma_{t}=0. \end{eqnarray} From the above equations, each parameter can be expressed in terms of the multipliers and substituted back into the Lagrangian function. With the following definitions: \begin{eqnarray}\label{30001} &&P_{t}=A_{t}\cdot B_{t}^{\top},\\ &&P=blkdiag(P_{1},P_{2},\cdots,P_{T}),\\ &&M(A,B^{\top})=\frac{1}{\rho}A\cdot B^{\top}+T \cdot P, \end{eqnarray} where $blkdiag(\cdot)$ is used to construct the block diagonal matrix, and $M(\cdot,\cdot)$ is defined analogously for the other pairs of data matrices (with $P_{t}$ the corresponding per-task product), the dual form can be given as follows: \begin{eqnarray}\label{dual1} \displaystyle{\min_{\alpha^{+*},\alpha^{+},\beta^{-}}}~~&&\frac{1}{2}\left(\alpha^{+*}-\alpha^{+}\right)^{\top} M(A,A^{\top})\left(\alpha^{+*}-\alpha^{+}\right)-\left(\alpha^{+*}-\alpha^{+}\right)^{\top}M(A,B^{\top})\beta^{-}\nonumber\\ &&+\frac{1}{2}\beta^{-\top}M(B,B^{\top})\beta^{-}+\epsilon e_{1}^{\top}\left(\alpha^{+*}+\alpha^{+}\right)-e_{2}^{\top} \beta^{-} \\ \mbox {s.t.~~~}~~&&0\le\alpha^{+},~\alpha^{+*} \le C_{1} e_{1},\nonumber\\ ~~&&0\le\beta^{-}\le C_{2}e_{2},\nonumber \end{eqnarray} where $\alpha^{+*}$=$(\alpha_{1}^{+*}; \cdots; \alpha_{T}^{+*})$, $\alpha^{+}$=$(\alpha_{1}^{+}; \cdots; \alpha_{T}^{+})$, and $\beta^{-}$=$(\beta_{1}^{-}; \cdots; \beta_{T}^{-})$. $e_{1}$ and $e_{2}$ are ones vectors of appropriate dimensions.
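The matrices $M(\cdot,\cdot)$ defined in (\ref{30001}) combine a global Gram term with a block-diagonal per-task term. A plain-Python sketch of this construction under the stated definitions (toy two-task data with one sample per task; $blkdiag$ implemented by hand):

```python
def matmul(X, Yt):
    """X Y^T where both X and Yt are given row-wise: entry (i, j) = X_i . Y_j."""
    return [[sum(a * b for a, b in zip(xr, yr)) for yr in Yt] for xr in X]

def blkdiag(blocks):
    """Block-diagonal matrix assembled from a list of rectangular blocks."""
    rows = sum(len(b) for b in blocks)
    cols = sum(len(b[0]) for b in blocks)
    out = [[0.0] * cols for _ in range(rows)]
    r0 = c0 = 0
    for b in blocks:
        for i, row in enumerate(b):
            out[r0 + i][c0:c0 + len(row)] = row
        r0, c0 = r0 + len(b), c0 + len(b[0])
    return out

def M(X_tasks, Y_tasks, rho, T):
    """M(X, Y^T) = (1/rho) X Y^T + T * blkdiag(X_t Y_t^T), X and Y stacked by task."""
    X = [row for Xt in X_tasks for row in Xt]
    Y = [row for Yt in Y_tasks for row in Yt]
    global_part = matmul(X, Y)
    block_part = blkdiag([matmul(Xt, Yt) for Xt, Yt in zip(X_tasks, Y_tasks)])
    return [[g / rho + T * b for g, b in zip(gr, br)]
            for gr, br in zip(global_part, block_part)]

# Two toy tasks, one 1-D augmented sample each (illustrative numbers only).
A_tasks = [[[1.0, 1.0]], [[2.0, 1.0]]]
MAA = M(A_tasks, A_tasks, rho=1.0, T=2)
# Diagonal blocks receive both terms; off-diagonal blocks only the global Gram term.
assert MAA == [[6.0, 3.0], [3.0, 15.0]]
```

The block-diagonal term couples only within-task pairs of samples, while the global term couples all tasks, which is precisely how the dual (\ref{dual1}) mixes the shared hyperplane with the task-specific offsets.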
By further simplifying the above equations, the dual formulation of problem (\ref{1009}) can be concisely rewritten as \begin{eqnarray}\label{1012} \displaystyle{\min_{\widetilde{\pi}}}~~&&\frac{1}{2} \widetilde{\pi}^{\top} \widetilde{\Lambda} \widetilde{\pi}+\widetilde{\kappa}^{\top} \widetilde{\pi} \\ \mbox {s.t.}~~&&0 \leq\widetilde{\pi} \leq \widetilde{C}.\nonumber\ \end{eqnarray} Here $\widetilde{\Lambda}$= $\begin{pmatrix} H_{1} &-H_{2}\\ -H_{2}^{\top}&H_{3} \end{pmatrix}$, $H_{1}$= $\begin{pmatrix} M(A,A^{\top}) &-M(A,A^{\top})\\ -M(A,A^{\top})&M(A,A^{\top}) \end{pmatrix}$, $H_{2}$= $\begin{pmatrix} M(A,B^{\top}) &\\ -M(A,B^{\top})& \end{pmatrix}$, $H_{3}$=M($B$, $B^{\top}$), and $\widetilde{\pi}=(\alpha^{+*};\alpha^{+};\beta^{-})$, $\widetilde{C}=(C_{1}e_{1};C_{1}e_{1};C_{2}e_{2})$, and $\widetilde{\kappa}=(\epsilon e_{1};\epsilon e_{1};-e_{2})$. Problem (\ref{1012}) is clearly a QPP. Similarly, the dual form of (\ref{1010}) is as follows: \begin{eqnarray} \label{dual2} \displaystyle{\min_{\alpha^{-*},\alpha^{-},\beta^{+}}}~~&& \frac{1}{2}\left(\alpha^{-*}-\alpha^{-}\right)^{\top} M(B,B^{\top})\left(\alpha^{-*}-\alpha^{-}\right)-\left(\alpha^{-*}-\alpha^{-}\right)^{\top}M(B,A^{\top})\beta^{+}\nonumber\\ &&+\frac{1}{2}\beta^{+\top} M(A,A^{\top})\beta^{+} +\epsilon e_{2}^{\top}\left(\alpha^{-*}+\alpha^{-}\right)-e_{1}^{\top}\beta^{+} \\ \mbox {s.t.~~~}~~&&0 \le\alpha^{-},\alpha^{-*}\le C_{3}e_{2}, \nonumber\\ &&0\le\beta^{+}\le C_{4}e_{1}.\nonumber \end{eqnarray} Similarly, $\alpha^{-*}=(\alpha_{1}^{-*};\cdots;\alpha_{T}^{-*})$, $\alpha^{-}$=$(\alpha_{1}^{-};\cdots;\alpha_{T}^{-})$, $\beta^{+}$=$(\beta_{1}^{+};\cdots;\beta_{T}^{+})$, and the dual problem can be concisely reformulated as \begin{eqnarray}\label{1013} \displaystyle{\min_{\hat{\pi}}}~~&&\frac{1}{2} \hat{\pi}^{\top} \hat{\Lambda} \hat{\pi}+\hat{\kappa}^{\top} \hat{\pi} \\ \mbox {s.t.}~~&& 0 \le \hat{\pi} \le \hat{C}.\nonumber\ \end{eqnarray} Here $\hat{\Lambda}$= $\begin{pmatrix} Q_{1}
&-Q_{2}\\ -Q_{2}^{\top}&Q_{3} \end{pmatrix}$, $Q_{1}=\begin{pmatrix} M(B,B^{\top}) &-M(B,B^{\top})\\ -M(B,B^{\top})&M(B,B^{\top}) \end{pmatrix}$, $Q_{2}=\begin{pmatrix} M(B,A^{\top})\\ -M(B,A^{\top}) \end{pmatrix}$, $Q_{3}=M(A, A^{\top})$, $\hat{\pi}=(\alpha^{-*};\alpha^{-};\beta^{+})$, $\hat{C}=(C_{3}e_{2};C_{3}e_{2};C_{4}e_{1})$, and $\hat{\kappa}=(\epsilon e_{2};\epsilon e_{2}; -e_{1})$. The following conclusions can be justified by applying the KKT conditions of problems (\ref{1012}) and (\ref{1013}). The proofs of Theorems \ref{th1} and \ref{th3} are given in Appendix \ref{proof1}, and the proofs of Theorems \ref{th2} and \ref{th4} are given in Appendix \ref{proof2}. Similar conclusions can also be found in \cite{npsvm,mvl}. \begin{theorem}\label{th1} Suppose $\widetilde{\pi}^{*}$ is the optimal solution of (\ref{1012}), and let $\alpha_{it}^{+}$ and $\alpha_{it}^{+*}$ denote the $i$-th components of $\alpha_{t}^{+}$ and $\alpha_{t}^{+*}$, respectively. Then each pair $\alpha_{it}^{+*}$ and $\alpha_{it}^{+}$ satisfies $\alpha_{it}^{+*}\alpha_{it}^{+}=0$, $i=1, 2,\cdots, q$; $t=1, 2,\cdots, T$, which implies that the two parameters in each pair cannot be nonzero simultaneously. \end{theorem} \begin{theorem}\label{th2} Suppose $\widetilde{\pi}^{*}$ is the optimal solution of (\ref{1012}). Then the values of $u$ and $u_{t}$ can be obtained by applying the KKT conditions of (\ref{1009}) as follows: \begin{eqnarray}\label{h1} u=\frac{1}{\rho_{1}}\left(\sum_{t=1}^{T} A_{t}^{\top}\left(\alpha_{t}^{+*}-\alpha_{t}^{+}\right)-\sum_{t=1}^{T} B_{t}^{\top} \beta_{t}^{-}\right), \end{eqnarray} \begin{eqnarray}\label{h2} u_{t}=T\left(A_{t}^{\top}\left(\alpha_{t}^{+*}-\alpha_{t}^{+}\right)-B_{t}^{\top} \beta_{t}^{-}\right). \end{eqnarray} \end{theorem} \begin{theorem}\label{th3} Suppose $\hat{\pi}^{*}$ is the optimal solution of (\ref{1013}), and let $\alpha_{it}^{-}$ and $\alpha_{it}^{-*}$ denote the $i$-th components of $\alpha_{t}^{-}$ and $\alpha_{t}^{-*}$, respectively.
Then each pair $\alpha_{it}^{-*}$ and $\alpha_{it}^{-}$ satisfies $\alpha_{it}^{-*}\alpha_{it}^{-}=0$, $i=1, 2,\cdots, q$; $t=1, 2,\cdots, T$, which implies that the two parameters in each pair cannot be nonzero simultaneously. \end{theorem} \begin{theorem}\label{th4} Suppose $\hat{\pi}^{*}$ is the optimal solution of (\ref{1013}). Then the values of $v$ and $v_{t}$ can be obtained by applying the KKT conditions of (\ref{1010}) as follows: \begin{eqnarray}\label{h3} v=\frac{1}{\rho_{2}}\left(\sum_{t=1}^{T} B_{t}^{\top}\left(\alpha_{t}^{-*}-\alpha_{t}^{-}\right)+\sum_{t=1}^{T} A_{t}^{\top} \beta_{t}^{+}\right), \end{eqnarray} \begin{eqnarray}\label{h4} v_{t}=T\left(B_{t}^{\top}\left(\alpha_{t}^{-*}-\alpha_{t}^{-}\right)+A_{t}^{\top} \beta_{t}^{+}\right). \end{eqnarray} \end{theorem} According to Theorems \ref{th2} and \ref{th4}, no matrix inversion needs to be computed when obtaining the parameters of the mean hyperplane and the bias, which accelerates the computation to a certain extent. Combined with $u+u_{t}=(w_{1t}; b_{1t})$ and $v+v_{t}=(w_{2t}; b_{2t})$, the label of a test sample $x$ in the $t$-th task can be obtained by the following equation: \begin{eqnarray}\label{30003} f(x)=\arg\min_{r=1,2} \left|x^{\top}w_{rt}+b_{rt}\right|. \end{eqnarray} \subsection{Nonlinear MTNPSVM} Unlike the multi-task TWSVMs, MTNPSVM can directly exploit the kernel trick in the nonlinear case and thus only needs to deal with problems similar to the linear case, because the nonlinear mapping appears only through inner products in the dual problem. Let $\phi(\cdot)$ denote the nonlinear mapping function and $x_{it}$ a sample. The decision hyperplanes of the $t$-th task then become: \begin{eqnarray}\label{2008} \phi(x_{it})^{\top}w_{1t}+b_{1t}=0,\text{ and } \phi(x_{it})^{\top}w_{2t}+b_{2t}=0.
\end{eqnarray} To obtain the above hyperplanes, the nonlinear MTNPSVM needs to solve the following problems: \begin{eqnarray}\label{2009} \displaystyle{\min_{u, u_{t},\eta_{t},\eta_{t}^{*},\xi_{t}}}~~&&\frac{\rho_{1}}{2}\|u\|^{2}+\frac{1}{2T}\sum_{t=1}^{T}\left\|u_{t}\right\|^{2}+C_{1}\sum_{t=1}^{T} e_{1t}^{\top}\left(\eta_{t}+\eta_{t}^{*}\right)+C_{2}\sum_{t=1}^{T}e_{2t}^{\top}\xi_{t} \\ \mbox{s.t.}~~&&-\epsilon e_{1t}-\eta_{t}^{*} \leq\phi\left(A_{t}\right)\left(u+u_{t}\right)\leq\epsilon e_{1t}+\eta_{t},\nonumber\\ &&\phi\left(B_{t}\right)\left(u+u_{t}\right)\leq-e_{2t}+\xi_{t},\nonumber\\ &&\eta_{t},~\eta_{t}^{*},~\xi_{t}\geq 0,~t=1,2,\cdots,T,\nonumber \end{eqnarray} and \begin{eqnarray}\label{2010} \displaystyle{\min_{v, v_{t},\xi_{t},\xi_{t}^{*},\eta_{t}}}~~&&\frac{\rho_{2}}{2}\|v\|^{2}+\frac{1}{2T} \sum_{t=1}^{T}\left\|v_{t}\right\|^{2}+C_{3}\sum_{t=1}^{T}e_{2t}^{\top}\left(\xi_{t}+\xi_{t}^{*}\right)+C_{4}\sum_{t=1}^{T}e_{1t}^{\top} \eta_{t}\\ \mbox{s.t.}~~&&-\epsilon e_{2t}-\xi_{t}^{*} \leq\phi\left(B_{t}\right)\left(v+v_{t}\right)\leq \epsilon e_{2t}+\xi_{t}, \nonumber\\ &&\phi\left(A_{t}\right) \left(v+v_{t}\right) \geq e_{1t}-\eta_{t},\nonumber\\ &&\xi_{t},~\xi_{t}^{*},~\eta_{t}\geq 0,~t=1,2,\cdots,T.\nonumber \end{eqnarray} The primal problems are almost identical to the linear case, except that the mapping function $\phi(\cdot)$ is introduced. The corresponding difference in the dual problems lies in the definition of (\ref{30001}). In the nonlinear case, the new definition is as follows: \begin{eqnarray}\label{30002} &&P_{t}=K(A_{t}, B_{t}^{\top}),\nonumber\\ &&P=\mathrm{blkdiag}(P_{1},P_{2},\cdots,P_{T}),\nonumber\\ &&M(A,B^{\top})=\frac{1}{\rho}K(A, B^{\top})+T \cdot P, \end{eqnarray} where $K(x_{i},x_{j})=\phi(x_{i})^{\top}\phi(x_{j})$ is the kernel function; the polynomial kernel and the RBF kernel are employed in this paper.
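The block-structured matrix $M(A,B^{\top})$ in (\ref{30002}) can be assembled directly from per-task kernel blocks. The following is a minimal NumPy sketch, not the authors' implementation: the function names are ours, the RBF convention $K(x,y)=\exp(-\|x-y\|^{2}/\delta^{2})$ is one common choice and is assumed here, and samples are assumed to be stacked task by task.

```python
import numpy as np

def rbf_kernel(X, Y, delta=1.0):
    # One common RBF convention: K(x, y) = exp(-||x - y||^2 / delta^2).
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / delta ** 2)

def build_M(A_tasks, B_tasks, rho=1.0, delta=1.0):
    """Sketch of (30002): M(A, B^T) = (1/rho) K(A, B^T) + T * blkdiag(P_1, ..., P_T),
    where P_t = K(A_t, B_t^T) is the within-task kernel block."""
    T = len(A_tasks)
    A = np.vstack(A_tasks)
    B = np.vstack(B_tasks)
    K_full = rbf_kernel(A, B, delta)          # K(A, B^T) across all tasks
    # Assemble blkdiag(P_1, ..., P_T) with one block per task.
    P = np.zeros_like(K_full)
    r = c = 0
    for At, Bt in zip(A_tasks, B_tasks):
        Pt = rbf_kernel(At, Bt, delta)
        P[r:r + Pt.shape[0], c:c + Pt.shape[1]] = Pt
        r += Pt.shape[0]
        c += Pt.shape[1]
    return K_full / rho + T * P
```

The global term $\frac{1}{\rho}K(A,B^{\top})$ couples all tasks, while the block-diagonal term $T\cdot P$ only connects samples within the same task, mirroring the commonality/individuality split of $u$ and $u_t$.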
The properties in the nonlinear case are very similar to Theorems \ref{th1}$\sim$\ref{th4}; they only require replacing the identity mapping with the nonlinear mapping function. Finally, the label of a new sample can be obtained by the same decision function as (\ref{30003}). \subsection{Advantages of MTNPSVM} As an improvement of the DMTSVM, the MTNPSVM draws on the advantages of the NPSVM and avoids many disadvantages of the DMTSVM, so it has significant theoretical merits. Although MTNPSVM has an additional parameter $\epsilon$, it still has the following advantages: \begin{itemize} \item[\textbullet] MTNPSVM has an elegant formulation similar to that of RMTL, which avoids matrix inversion in the solving process. Moreover, it can be solved by SMO-type algorithms. \item[\textbullet] Only inner products appear in the dual problem, so the kernel trick can be employed directly in the nonlinear case. This reduces the implementation burden. \item[\textbullet] The inclusion of two regularization terms allows the model to reflect the commonality and individuality of tasks when dealing with multiple associated tasks. Like RMTL, this also enables the model to comply with the SRM principle. \item[\textbullet] DMTSVM loses sparsity due to its square loss function. The proposed MTNPSVM adopts the $\epsilon$-insensitive loss function instead, so it inherits the full sparsity of the NPSVM. Models with high sparsity can be combined with algorithms such as safe screening rules \cite{ssrnp,ssrdmt} to speed up model solving. \end{itemize} \section{ADMM Optimization}\label{10066} \subsection{ADMM for MTNPSVM} MTNPSVM suffers from low efficiency in the solving process due to the construction of large-scale matrices in MTL methods. Therefore, the ADMM algorithm is adapted to multi-task learning to accelerate the solving of MTNPSVM.
ADMM is a fast solving algorithm which improves computational efficiency by decomposing a large-scale problem into multiple small subproblems. In order to apply this algorithm, the inequality constraints of problems (\ref{1012}) and (\ref{1013}) are turned into equality constraints. In this subsection, the details of solving MTNPSVM are presented. By introducing the new variables $\widetilde\lambda$ and $\hat{\lambda}$, the problems can be written as: \begin{eqnarray}\label{1018} \displaystyle{\min_{\widetilde{\pi}}}~~&&\frac{1}{2}\widetilde{\pi}^{\top}\widetilde{\Lambda}\widetilde{\pi}+\widetilde{\kappa}^{\top} \widetilde{\pi}+g(\widetilde\lambda)\\ \mbox{s.t.}~~&&\widetilde{\pi}+\widetilde\lambda=\widetilde{C},\nonumber \end{eqnarray} and \begin{eqnarray}\label{1019} \displaystyle{\min_{\hat{\pi}}}~~&&\frac{1}{2}\hat{\pi}^{\top}\hat{\Lambda}\hat{\pi}+\hat{\kappa}^{\top}\hat{\pi}+g(\hat\lambda)\\ \mbox{s.t.}~~&&\hat{\pi}+\hat\lambda=\hat{C},\nonumber \end{eqnarray} where $g(\cdot)$ stands for the indicator function defined in (\ref{2001}); the value of the bound $C$ changes according to the problem at hand. \begin{eqnarray}\label{2001} g(\lambda)=\left\{\begin{array}{ll}0, &\mbox {if}~0\le\lambda \le C \\+\infty, &\mbox {otherwise.}\end{array}\right.
\end{eqnarray} Then, the iterative procedures of the ADMM algorithm for (\ref{1018}) and (\ref{1019}) are displayed as: \begin{eqnarray}\label{36} \left\{\begin{aligned} \widetilde{\pi}_{k+1}=&\mathop{\arg\min}\limits_{\widetilde{\pi}}(\frac{1}{2}\widetilde{\pi}^{\top}\widetilde{\Lambda} \widetilde{\pi}+\widetilde{{\kappa}}^{\top}\widetilde{\pi}+\frac{\mu}{2}\|\widetilde{\pi}+\widetilde\lambda_{k}-\widetilde{C}+\widetilde{h}_{k}\|^{2}),\\ \widetilde{\lambda}_{k+1}=&\mathop{\arg\min}\limits_{\widetilde{\lambda}}(g(\widetilde{\lambda})+\frac{\mu}{2}\|\widetilde{\pi}_{k+1}+\widetilde\lambda-\widetilde{C}+\widetilde{h}_{k}\|^{2}),\\ \widetilde{h}_{k+1}=&\widetilde{\pi}_{k+1}+\widetilde\lambda_{k+1}-\widetilde{C}+\widetilde{h}_{k}, \end{aligned}\right. \end{eqnarray} and \begin{eqnarray}\label{37} \left\{\begin{aligned} \hat{\pi}_{k+1}=&\mathop{\arg\min}\limits_{\hat{\pi}}(\frac{1}{2}\hat{\pi}^{\top}\hat{\Lambda} \hat{\pi}+\hat{{\kappa}}^{\top}\hat{\pi}+\frac{\mu}{2}\|\hat{\pi}+\hat\lambda_{k}-\hat{C}+\hat{h}_{k}\|^{2}),\\ \hat{\lambda}_{k+1}=&\mathop{\arg\min}\limits_{\hat{\lambda}}(g(\hat{\lambda})+\frac{\mu}{2}\|\hat{\pi}_{k+1}+\hat\lambda-\hat{C}+\hat{h}_{k}\|^{2}),\\ \hat{h}_{k+1}=&\hat{\pi}_{k+1}+\hat\lambda_{k+1}-\hat{C}+\hat{h}_{k}. \end{aligned}\right. \end{eqnarray} Here $k$ stands for the $k$-th iteration and $\mu$ is a relaxation factor which controls the speed of convergence. In the algorithms, $f$ denotes the objective function value, the primal residual is $r^{k+1}=\pi_{k+1}+\lambda_{k+1}-C$, and the dual residual is $s^{k+1}=\mu(\lambda_{k+1}-\lambda_{k})$, where $\pi$, $\lambda$ and $C$ denote the variables and bound of either problem.
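The updates in (\ref{36}) have closed forms: the $\pi$-subproblem is an unconstrained quadratic whose minimizer solves $(\Lambda+\mu I)\pi=\mu(C-\lambda_{k}-h_{k})-\kappa$, and the $\lambda$-subproblem is the Euclidean projection of $C-\pi_{k+1}-h_{k}$ onto the box $[0,C]$. The following NumPy sketch, under these derived closed forms and with our own function name, illustrates one way to implement the loop; the system matrix is factored once and reused, which is where the paper's $O(n^{3}+rn^{2})$ cost comes from.

```python
import numpy as np

def admm_box_qp(Lam, kappa, C, mu=1.0, max_iter=1000, tol=1e-8):
    """Sketch of iteration (36) for min 0.5 pi'Lam pi + kappa'pi, 0 <= pi <= C.
    pi-update: solve (Lam + mu I) pi = mu (C - lam - h) - kappa.
    lam-update: project C - pi - h onto the box [0, C] (prox of g)."""
    n = kappa.size
    pi = np.zeros(n)
    lam = np.zeros(n)
    h = np.zeros(n)
    # Factor (Lam + mu I) once, O(n^3); each later solve is O(n^2).
    # (A cached Cholesky factorisation plays this role in the paper.)
    inv = np.linalg.inv(Lam + mu * np.eye(n))
    for _ in range(max_iter):
        pi = inv @ (mu * (C - lam - h) - kappa)
        lam_old = lam
        lam = np.clip(C - pi - h, 0.0, C)   # box projection
        h = h + pi + lam - C                # scaled dual update
        r = pi + lam - C                    # primal residual of pi + lam = C
        s = mu * (lam - lam_old)            # dual residual
        if np.linalg.norm(r) < tol and np.linalg.norm(s) < tol:
            break
    return pi
```

For instance, with $\Lambda=2I$, $\kappa=(-2,-6)^{\top}$ and $C=(2,2)^{\top}$, the unconstrained minimizer $(1,3)^{\top}$ is clipped by the box, and the iteration converges to $(1,2)^{\top}$.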
The convergence thresholds $\delta_{p}^{k}$ and $\delta_{d}^{k}$ are both defined as linear combinations of the absolute tolerance $\delta_{1}$ and the relative tolerance $\delta_{2}$ as follows: \begin{eqnarray} \label{x1}&&\delta_{p}^{k}=\delta_{1}\cdot\sqrt{n}+\delta_{2}\cdot\max (\|\pi_{k}\|,\|\lambda_{k}\|),\\ \label{x2}&&\delta_{d}^{k}=\delta_{1}\cdot\sqrt{n}+\delta_{2}\cdot\|\mu h_{k}\|, \end{eqnarray}where $n$ is the dimension of the vector $\pi_k$. If $\|r^{k}\|\le\delta_{p}^{k}$ and $\|s^{k}\|\le\delta_{d}^{k}$, the iteration stops and the objective function value $f$ has converged. The detailed derivation of the algorithm can be found in \cite{admm}. Furthermore, the linear case is used as an instance to elaborate the overall optimization process. Before solving, the original dual problems (\ref{1012}) and (\ref{1013}) must be transformed into the objective functions (\ref{1018}) and (\ref{1019}), which are in the standard form required by the ADMM algorithm. The pseudo-code for solving the objective functions (\ref{1018}) and (\ref{1019}) is summarized in Algorithms \ref{alg1} and \ref{alg2}, respectively. Above all, the solving process of MTNPSVM is shown in Fig. \ref{figx}. As shown, MTNPSVM follows the classical multi-task learning framework. It is worth noting that the model needs to be transformed twice into the objective function of the ADMM algorithm.
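The stopping test defined by (\ref{x1})--(\ref{x2}) translates directly into a few lines of code; the sketch below uses our own function names and illustrative default tolerances $\delta_{1}=10^{-4}$, $\delta_{2}=10^{-3}$.

```python
import numpy as np

def stopping_thresholds(pi, lam, h, mu, d1=1e-4, d2=1e-3):
    """Thresholds (x1)-(x2): absolute tolerance d1 plus relative tolerance d2."""
    n = pi.size
    delta_p = d1 * np.sqrt(n) + d2 * max(np.linalg.norm(pi), np.linalg.norm(lam))
    delta_d = d1 * np.sqrt(n) + d2 * np.linalg.norm(mu * h)
    return delta_p, delta_d

def should_stop(r, s, pi, lam, h, mu):
    """Stop when both residual norms fall below their thresholds."""
    delta_p, delta_d = stopping_thresholds(pi, lam, h, mu)
    return np.linalg.norm(r) <= delta_p and np.linalg.norm(s) <= delta_d
```

Because the thresholds scale with the iterates themselves, the criterion tightens automatically when the solution is small and loosens for large-scale problems.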
\begin{algorithm}[h] \caption{ADMM algorithm for objective function (\ref{1018})} \label{alg1} \begin{algorithmic} \Require Train set $\{X,Y\}$ for all tasks; $A$, $B$ represent the positive and negative samples in the train set $X$; parameters $C_{1}$, $C_{2}$, $\epsilon$; \Ensure $u$, $u_{t}$; \State Initialize $\widetilde{\pi}_{0}$=$(\alpha^{+*};\alpha^{+};\beta^{-})$=0, $\widetilde{\lambda}_{0}$, $\widetilde{h}_{0}$, $\mu$, $k$=0; \State Compute $\widetilde{\Lambda}$ matrix according to the description of (\ref{1012}); \Repeat \State Compute ($\widetilde{\pi}_{k+1}$, $\widetilde{\lambda}_{k+1}$, $\widetilde{h}_{k+1}$) by (\ref{36}); \State Compute convergence thresholds $\delta_{p}^{k}$, $\delta_{d}^{k}$ via (\ref{x1}) and (\ref{x2}); \Until $\|r^{k}\|\le\delta_{p}^{k}$, $\|s^{k}\|\le\delta_{d}^{k}$. \State Get the optimal solution $\widetilde{\pi}^{*}$=$\widetilde{\pi}_{k+1}$; \State Compute $u$, $u_{t}$ by (\ref{h1}) and (\ref{h2}). \end{algorithmic} \end{algorithm} \begin{algorithm} \caption{ADMM algorithm for objective function (\ref{1019})} \label{alg2} \begin{algorithmic} \Require Train set $\{X,Y\}$ for all tasks; $A$, $B$ represent the positive and negative samples in the train set $X$; parameters $C_{3}$, $C_{4}$, $\epsilon$; \Ensure $v$, $v_{t}$; \State Initialize $\hat{\pi}_{0}$=$(\alpha^{-*};\alpha^{-};\beta^{+})=0$, $\hat{\lambda}_{0}$, $\hat{h}_{0}$, $\mu$, $k$=0; \State Compute $\hat{\Lambda}$ matrix according to the description of (\ref{1013}); \Repeat \State Compute ($\hat{\pi}_{k+1}$, $\hat{\lambda}_{k+1}$, $\hat{h}_{k+1}$) by (\ref{37}); \State Compute convergence thresholds $\delta_{p}^{k}$, $\delta_{d}^{k}$ via (\ref{x1}) and (\ref{x2}); \Until $\|r^{k}\|\le\delta_{p}^{k}$, $\|s^{k}\|\le\delta_{d}^{k}$; \State Get the optimal solution $\hat{\pi}^{*}$=$\hat{\pi}_{k+1}$; \State Compute $v$, $v_{t}$ by (\ref{h3}) and (\ref{h4}).
\end{algorithmic} \end{algorithm} \begin{figure} \caption{A description of the model construction and solution process. The datasets for all tasks are correlated and the decision hyperplane for each task can be obtained simultaneously by solving the proposed MTNPSVM.} \label{figx} \end{figure} \subsection{Computational complexity} This subsection theoretically analyzes the time complexity of the algorithm. Let $p$ and $q$ denote the numbers of positive and negative samples, respectively, and let $r$ denote the number of iterations. Algorithm \ref{alg1} is used here as an example. The dimension of the matrix $\widetilde{\Lambda}$ is $(2p+q)\times(2p+q)$. When updating $\widetilde{\pi}$, a Cholesky decomposition is computed in the first iteration and stored for subsequent calculations, so the computational complexity is $O((2p+q)^{3}+r(2p+q)^{2})$. When updating $\widetilde{\lambda}$ and $\widetilde{h}$, the computational complexities are both $O(r(2p+q)^{2})$. The total computational complexity of the ADMM algorithm is therefore $O((2p+q)^{3}+r(2p+q)^{2})$. If instead the ``quadprog'' function in MATLAB is used, the computational complexity is $O(r(2p+q)^{3})$. Hence, whenever the number of iterations exceeds one, the ADMM algorithm has a theoretical advantage in computational efficiency. To verify this advantage, the solving speeds of the ADMM algorithm and the ``quadprog'' function are further compared empirically in Section \ref{sec_con}. \section{Numerical experiments}\label{10077} In this section, three types of experiments, involving fifteen benchmark datasets, twelve image datasets, and one real Chinese Wine dataset, are conducted to verify the validity of the proposed MTNPSVM. Three additional benchmark datasets, often used to evaluate MTL methods, are utilized to analyze the properties of the model in Section \ref{analysis}.
The STL methods used in the experiments include SVM, TWSVM, $\nu$-TWSVM, LSTSVM and NPSVM; the MTL methods consist of DMTSVM, MTPSVM \cite{mtpsvm}, MTLS-TSVM \cite{mtlstsvm}, and MTCTSVM. Two kernels, the RBF kernel and the polynomial kernel, are used in the following experiments. The kernel parameter $\delta$ in the RBF kernel varies in $\{2^{i}|i=-3,-2,\cdots,3\}$, and the kernel parameter $\delta$ in the polynomial kernel is selected in the set $\{1, 2, \cdots, 7\}$. The parameter $\epsilon$ in every model varies in $\{0.1, 0.2, \cdots, 0.5\}$. The parameter $\nu$ in $\nu$-TWSVM is selected in the set $\{0.1, 0.2, \cdots,0.9\}$. The other parameters used in the experiments are chosen from the set $\{2^{i}|i=-3,-2,\cdots,3\}$. In addition, ``Accuracy'' in the experiments represents the mean accuracy over the $T$ tasks. It is calculated as $Accuracy= 1/T\sum_{t=1}^{T}Acc_{t}$, where $Acc_{t}$ denotes the accuracy of the $t$-th task. ``Time'' denotes the computational cost of training. For the comparability of experimental results, all the experiments are performed on Windows 10 in MATLAB R2018b on an Intel(R) Core(TM) i5-7200U CPU (2.50 GHz) with 8.00 GB of RAM. The code for the experiments can be downloaded from the web page\footnote{https://www.github.com/liuzongmin/mtnpsvm}. \subsection{Experiments on fifteen benchmark datasets} In this subsection, the performance of the MTNPSVM is demonstrated by conducting experiments on fifteen benchmark datasets with seven methods: two STL methods (TWSVM and NPSVM) and five MTL methods (DMTSVM, MTPSVM, MTLS-TSVM, MTCTSVM and MTNPSVM). Each dataset is divided into an 80\% training set and a 20\% testing set. The grid-search strategy and 5-fold cross-validation are performed on the training set. More specifically, the training set is randomly divided into five subsets, one of which is used as the validation set while the remaining subsets are used for training.
The optimal parameters are selected based on the average performance over the five cross-validation folds on the training set. The testing-set performance under the optimal parameters is then used to evaluate the model. \subsubsection{Experimental results} \vspace*{5pt} The fifteen multi-label datasets from the UCI machine learning repository\footnote{http://archive.ics.uci.edu/ml/datasets.php} are used as multi-task datasets by treating different labels as different objectives. Their statistics are shown in Table \ref{tablex1}. The RBF kernel is employed in these benchmark experiments. The experimental results of the seven algorithms on these benchmark datasets are shown in Table \ref{tablex2}, and the optimal parameters used in the experiments are listed in Table \ref{tablex3}. The bold values represent the best accuracy in Table \ref{tablex2}. \setlength{\tabcolsep}{10mm} \begin{table*}[!htbp] \centering \renewcommand\arraystretch{1.2} \caption{The statistics of fifteen benchmark datasets.}\label{tablex1} \begin{tabular}{lrrr} \hline Datasets&$^{\#}$Attributes&$^{\#}$Instances&$^{\#}$Tasks\\ \hline Yeast&104&160&3\\ Student&5&240&3\\ Abalone&7&240&3\\ Corel5k&500&240&3\\ Scene&295&240&3\\ Bookmark&2151&120&3\\ Isolet-ab&242&480&5\\ Emotion&72&480&6\\ CAL500&69&120&3\\ Genbase&1186&120&3\\ Monk&6&291&3\\ Flag&19&567&7\\ Delicious&501&120&3\\ Mediamill&121&240&3\\ Recreation&607&240&3\\ \hline \end{tabular} \end{table*} \setlength{\tabcolsep}{2pt} \begin{table*}[!htbp] \renewcommand\arraystretch{1.2} \centering \caption{The performance comparison of seven algorithms on fifteen benchmark datasets.}\label{tablex2} \resizebox{\textwidth}{!}{ \begin{tabular}{lllllllll} \hline Datasets&TWSVM&NPSVM&DMTSVM&MTPSVM&MTLS-TSVM&MTCTSVM&MTNPSVM\\ ~&Accuracy(\%)&Accuracy(\%)&Accuracy(\%)&Accuracy(\%)&Accuracy(\%)&Accuracy(\%)&Accuracy(\%)\\ ~&Time(s)&Time(s)&Time(s)&Time(s)&Time(s)&Time(s)&Time(s)\\ \hline Yeast&66.17&67.83&65.83&68.33&64.83&65.83&\textbf{70.07}\\
~&0.06&0.19&0.19&0.010&0.17&0.15&0.29\\ Student&61.67&64.33&62.00&63.00&60.67&64.00&\textbf{64.67}\\ ~&0.05&0.13&0.02&0.001&0.02&0.02&0.07\\ Abalone&79.33&81.33&80.00&79.33&81.00&80.67&\textbf{81.67}\\ ~&0.06&0.14&0.03&0.010&0.01&0.02&0.08\\ Corel5k&59.33&64.67&64.00&66.67&65.00&65.67&\textbf{68.00}\\ ~&0.06&0.14&0.03&0.001&0.03&0.04&0.06\\ Scene&70.33&72.00&93.00&92.33&94.00&94.00&\textbf{94.67}\\ ~&0.06&0.14&0.03&0.001&0.03&0.04&0.06\\ Bookmark&96.75&96.24&97.75&\textbf{98.75}&98.00&\textbf{98.75}&96.86\\ ~&0.06&0.15&0.02&0.001&0.02&0.06&0.12\\ Isolet&99.00&99.17&99.30&99.49&\textbf{99.50}&\textbf{99.50}&\textbf{99.50}\\ ~&0.04&0.12&0.01&0.010&0.01&0.25&0.27\\ Emotion&77.99&\textbf{80.30}&77.15&79.00&76.33&76.91&78.69\\ ~&0.05&0.12&0.03&0.007&0.01&0.19&0.21\\ CAL500&76.67&80.00&77.33&79.33&77.33&79.33&\textbf{80.00}\\ ~&0.08&0.16&0.01&0.001&0.01&0.01&0.03\\ Genbase&92.49&94.87&89.83&92.49&92.49&92.49&\textbf{94.90}\\ ~&0.07&0.15&0.02&0.001&0.01&0.04&0.02\\ Monk&\textbf{86.66}&85.55&85.02&83.72&83.67&84.54&86.37\\ ~&0.04&0.07&0.01&0.001&0.01&0.04&0.07\\ Flag&73.81&\textbf{75.61}&73.38&74.92&73.72&74.29&75.37\\ ~&0.03&0.12&0.03&0.010&0.01&0.28&0.34\\ Delicious&61.33&69.33&67.33&68.67&63.33&68.68&\textbf{70.67}\\ ~&0.06&0.14&0.01&0.001&0.01&0.02&0.02\\ Mediamill&70.83&73.33&73.33&74.17&63.33&67.50&\textbf{75.00}\\ ~&0.07&0.19&0.03&0.001&0.03&0.03&0.04\\ Recreation&51.00&54.00&51.67&53.33&51.67&54.33&\textbf{54.67}\\ &0.08&0.18&0.04&0.003&0.05&0.08&0.23\\ \hline \end{tabular}} \end{table*} \begin{table*}[!htbp] \centering \caption{The optimal parameters of seven algorithms used in the experiments on fifteen benchmark datasets.}\label{tablex3} \resizebox{\textwidth}{!}{ \begin{tabular}{lllllllll} \hline Datasets&TSVM&NPSVM&DMTSVM&MTPSVM&MTLS-TSVM&MTCTSVM&MTNPSVM\\ ~&$(c,\rho)$&$(c_{1},c_{2},\delta,\epsilon)$&$(c,\rho,\delta)$&$(c,\rho,\delta)$&$(c,\rho,\delta)$&$(c,g,\rho,\delta)$&$(\rho,c_{1},c_{2},\delta,\epsilon)$\\ \hline 
Yeast&$(2^{-3},2^{3})$&$(2^{-1},2^{-1},2^{3},0.1)$&$(2^{-3},2^{2},2^{3})$&$(1,2^{-1},2^{3})$&$(2^{-3},1,2^{3})$&$(2^{-3},2^{-2},2^{-3},2^{3})$&$(2^{-1},1,2^{-2},2^{3},0.1)$\\ Student&$(1,2^{3})$&$(2^{2},2^{2},2^{3},0.1)$&$(1,2^{2},2^{3})$&$(2^{2},2^{1},2^{3})$&$(2^{-1},2^{2},2^{3})$&$(1,2^{2},2^{2},2^{3})$&$(2^{-3},2^{-1},1,2^{2},0.1)$\\ Abalone&$(2^{2},2^{3})$&$(2^{3},1,2^{2},0.1)$&$(2^{-3},2^{3},2^{2})$&$(2^{2},2^{1},2^{-1})$&$(2^{1},2^{3},2^{1})$&$(2^{-1},2^{2},2^{-3},2^{1})$&$(2^{3},2^{2},2^{-1},2^{1},0.1)$\\ Corel5k&$(2^{-3},2^{3})$&$(2^{-3},2^{-3},2^{2},0.1)$&$(2^{-3},2^{3},2^{3})$&$(2^{-2},2^{1},2^{3})$&$(2^{-3},2^{3},2^{3})$&$(2^{-3},2^{1},2^{3},2^{3})$&$(2^{-3},2^{-3},2^{-3},2^{3},0.1)$\\ Scene&$(2^{-3},2^{3})$&$(2^{-1},2^{-1},2^{3},0.1)$&$(2^{-3},2^{3},2^{3})$&$(2^{-1},2^{3},2^{3})$&$(2^{-1},2^{3},2^{3})$&$(2^{-3},2^{1},2^{2},2^{3})$&$(2^{-3},2^{-3},2^{-3},2^{2},0.1)$\\ Bookmark&$(2^{-3},2^{-3})$&$(2^{-3},2^{-3},2^{2},0.1)$&$(2^{-3},1,2^{-3})$&$(2^{-3},2^{1},2^{-3})$&$(2^{-3},2^{1},2^{-3})$&$(2^{-3},2^{-3},2^{-1},1)$&$(2^{-2},2^{-3},2^{-3},2^{3},0.1)$\\ Isolet-ab&$(2^{-3},2^{3})$&$(2^{-3},2^{-2},2^{2},0.1)$&$(2^{-3},1,2^{1})$&$(2^{-2},1,2^{1})$&$(2^{},2^{},2^{})$&$(2^{-3},2^{2},2^{3},2^{1})$&$(2^{-3},2^{-3},2^{-3},2^{1},0.1)$\\ Emotion&$(2^{-3},1)$&$(1,2^{-2},1,0.1)$&$(2^{-3},2^{-2},1)$&$(2^{-3},2^{-3},1)$&$(2^{-3},2^{-3},1)$&$(2^{-3},1,2^{-3},1)$&$(2^{-3},2^{-3},2^{-3},2^{1},0.1)$\\ CAL500&$(2^{-3},2^{-3})$&$(2^{-2},2^{-2},2^{2},0.1)$&$(2^{-3},2^{-1},2^{2})$&$(2^{-3},2^{-3},2^{3})$&$(2^{-3},2^{1},2^{2})$&$(2^{-3},2^{1},2^{-3},2^{2})$&$(1,2^{-3},2^{-2},2^{3},0.1)$\\ Genbase&$(2^{-3},2^{3})$&$(2^{1},2^{1},2^{3},0.1)$&$(2^{-3},2^{-3},2^{3})$&$(2^{-2},2^{-3},2^{3})$&$(2^{-3},2^{-3},2^{3})$&$(2^{-3},2^{3},2^{1},2^{3})$&$(2^{1},1,2^{-1},2^{3},0.1)$\\ Monk&$(2^{1},2^{2})$&$(2^{3},2^{3},1,0.1)$&$(2^{-3},2^{-1},2^{2})$&$(2^{3},2^{-3},2^{1})$&$(2^{3},2^{-3},2^{1})$&$(2^{-3},2^{3},2^{-2},2^{2})$&$(2^{1},2^{1},2^{1},2^{1},0.1)$\\ 
Flag&$(2^{-2},2^{2})$&$(2^{1},2^{-1},1,0.1)$&$(2^{-1},2^{2},2^{3})$&$(2^{-1},2^{2},2^{2})$&$(1,2^{-3},2^{3})$&$(2^{-2},2^{3},1,2^{2})$&$(2^{-1},1,2^{-2},2^{1},0.1)$\\ Delicious&$(2^{-3},2^{1})$&$(2^{-2},1,2^{3},0.1)$&$(2^{-3},2^{1},2^{3})$&$(2^{1},1,2^{3})$&$(2^{-3},2^{3},2^{2})$&$(2^{-3},2^{-1},2^{-1},2^{3})$&$(2^{3},2^{-3},2^{-1},2^{3},0.1)$\\ Mediamill&$(2^{-3},2^{1})$&$(2^{-1},2^{1},2^{2},0.1)$&$(2^{-3},2^{3},2^{1})$&$(2^{2},2^{3},2^{1})$&$(2^{-3},2^{2},2^{1})$&$(2^{-3},2^{-1},2^{2},2^{1})$&$(2^{-3},2^{-3},2^{-3},2^{1},0.1)$\\ Recreation&$(2^{-1},2^{3})$&$(2^{-2},1,2^{3},0.1)$&$(2^{-3},2^{-1},2^{2})$&$(2^{3},2^{-1},2^{3})$&$(1,1,2^{-2})$&$(1,2^{1},2^{-1},2^{3})$&$(2^{-3},2^{-3},2^{-2},2^{3},0.1)$\\ \hline \end{tabular}} \end{table*} In terms of accuracy, MTNPSVM performs better than the remaining methods on two thirds of the datasets. Compared with the STL methods, although MTNPSVM has a lower computational efficiency due to the need to train multiple tasks simultaneously, it achieves better generalization performance in return. Compared with the other MTL methods, MTNPSVM performs the best on most of the benchmark datasets. This also indicates that the $\epsilon$-insensitive loss function not only has higher theoretical sparsity than the square loss function, but is also more conducive to the construction of the decision hyperplane. In terms of running time, MTNPSVM takes longer since it needs to handle larger-scale problems than DMTSVM and MTCTSVM. The better computational efficiency of MTLS-TSVM and MTPSVM is due to the fact that they only need to solve systems of linear equations, but it is worth noting that there is no sparsity in these two models. \subsubsection{Friedman test} \vspace*{5pt} It is not easy to observe directly from Table \ref{tablex2} that MTNPSVM performs better than the other models. To differentiate the performance of the seven algorithms, the Friedman test is introduced as a non-parametric statistical test.
The average ranks of the seven algorithms with respect to accuracy are tabulated in Table \ref{tablex4}. \setlength{\tabcolsep}{2pt} \begin{table*}[!htbp] \renewcommand\arraystretch{1.2} \centering \caption{Average rank on classification accuracy of seven algorithms.}\label{tablex4} \resizebox{\textwidth}{!}{ \begin{tabular}{lllllllll} \hline Datasets&TWSVM&NPSVM&DMTSVM&MTPSVM&MTLS-TSVM&MTCTSVM&MTNPSVM\\ \hline Yeast&4&3&6&2&7&5&1\\ Student&6&2&5&4&7&3&1\\ Abalone&6.5&2&5&6.5&3&4&1\\ Corel5k&7&5&6&2&4&3&1\\ Scene&7&6&4&5&2.5&2.5&1\\ Bookmark&6&7&4&1.5&3&1.5&5\\ Isolet-ab&7&6&4&5&2&2&2\\ Emotion&4&1&5&2&7&6&3\\ CAL500&7&1.5&5.5&3.5&5.5&3.5&1.5\\ Genbase&6&2&7&4&4&4&1\\ Monk&1&2&4&6&7&5&2\\ Flag&5&1&7&3&6&4&2\\ Delicious&7&2&5&4&6&3&1\\ Mediamill&5&3.5&3.5&2&7&6&1\\ Recreation&7&3&5.5&4&5.5&2&1\\ \hline Average rank&5.70&3.30&5.10&3.63&5.10&3.63&1.63\\ \hline \end{tabular}} \end{table*} Under the null hypothesis, all algorithms are equivalent. The Friedman statistic \cite{fried} can be computed as follows: \begin{eqnarray} \label{friedman} \chi_{F}^{2}=\frac{12N}{k(k+1)}\left[\sum_{j}R_{j}^{2}-\frac{k(k+1)^{2}}{4}\right], \end{eqnarray} where $k$ and $N$ represent the number of algorithms and the number of datasets, respectively, and $R_{j}$ denotes the average rank of the $j$-th algorithm over all datasets. Since the original Friedman statistic is too conservative, an improved statistic is derived as follows: \begin{eqnarray} \label{fdistri} F_{F}=\frac{(N-1)\chi_{F}^{2}}{N(k-1)-\chi_{F}^{2}}, \end{eqnarray} where $F_{F}$ follows the $F$-distribution with $k-1$ and $(k-1)(N-1)$ degrees of freedom. The values $\chi_{F}^{2}=39.8915$ and $F_{F}=11.1454$ are obtained according to (\ref{friedman}) and (\ref{fdistri}). Here $F_{F}$ follows the $F$-distribution with $(6,84)$ degrees of freedom. At the significance level $\alpha=0.05$, the critical value of $F(6,84)$ is $2.20$; similarly, it is $2.56$ at $\alpha=0.025$.
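The two statistics in (\ref{friedman}) and (\ref{fdistri}) are easy to check numerically. The sketch below (our own function name) feeds in the average ranks of Table \ref{tablex4}; since those ranks are rounded to two decimals, the computed values agree with the reported $39.8915$ and $11.1454$ only up to rounding.

```python
def friedman_stats(avg_ranks, N):
    """chi^2_F from (friedman) and F_F from (fdistri), given average ranks R_j."""
    k = len(avg_ranks)
    chi2 = 12.0 * N / (k * (k + 1)) * (
        sum(R * R for R in avg_ranks) - k * (k + 1) ** 2 / 4.0
    )
    F = (N - 1) * chi2 / (N * (k - 1) - chi2)
    return chi2, F

# Average ranks of the seven algorithms over N = 15 datasets (Table values).
chi2, F = friedman_stats([5.70, 3.30, 5.10, 3.63, 5.10, 3.63, 1.63], N=15)
# chi2 is approximately 39.9 and F approximately 11.2, matching the reported
# 39.8915 and 11.1454 up to the rounding of the tabulated average ranks.
```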
$F_{F}$ is much larger than the critical value, which means that there are very significant differences between the seven algorithms. It should be noted that the average rank of MTNPSVM is much lower than those of the remaining algorithms, which indicates that MTNPSVM outperforms the remaining methods. \subsection{Analysis of model}\label{analysis} In this subsection, the model is further analyzed. Firstly, two solution methods are compared to demonstrate the efficiency of the ADMM algorithm used in the above solving process. Then the influence of the task size on performance, the property of the parameter $\epsilon$, the convergence of the algorithm, and the parameter sensitivity are further analyzed. The grid-search strategy and 5-fold cross-validation are performed in this subsection. \subsubsection{Solution method}\label{sec_con} \vspace*{5pt} The ``quadprog'' function in MATLAB is often leveraged to solve quadratic programming problems. To demonstrate the validity of the ADMM algorithm, the performance of MTNPSVM solved by the ADMM algorithm and by the ``quadprog'' function in MATLAB is shown in Table \ref{table3}. The three datasets $Landmine$\footnote{http://people.ee.duke.edu/lcarin/LandmineData.zip}, $Letter$\footnote{http://archive.ics.uci.edu/ml/datasets.php}, and $Spambase$\footnote{http://www.ecmlpkdd2006.org/challenge.html}, which are often used to evaluate multi-task learning, are adopted here. The specific information can also be found in \cite{roughv}. As shown, the ADMM algorithm speeds up training while causing only a slight change in the training accuracy. Although the computational time is still longer than that of the other models, the computational efficiency is significantly improved compared with the ``quadprog'' function.
\setlength{\tabcolsep}{4pt} \begin{table*}[!htbp] \renewcommand\arraystretch{1.2} \centering \caption{Performance comparison of different solving methods on three benchmark datasets.}\label{table3} \begin{tabular}{cccccccccccc}\hline &Method&&Landmine&&Letter&&Spambase&&\\ \cline{3-10}&~&&Accuracy(\%) &Time(s)& Accuracy(\%) &Time(s) &Accuracy(\%) &Time(s)\\ \hline &MTNPSVM-QUAD&&79.60&0.10&96.80&0.21&93.01&0.29\\ &MTNPSVM-ADMM&&79.80&0.05&96.83&0.12&92.20&0.18\\ \hline \end{tabular} \end{table*} \subsubsection{Performance influence of task size} \vspace*{5pt} The $Spambase$ dataset is a binary dataset for spam identification which includes three tasks; each task contains 200 emails, and the data are reduced to 36 features through PCA. In order to further explore the influence of the task size on generalization performance, the $Spambase$ dataset is resized to different scales, ranging from 40 to 180. In addition, MTNPSVM is compared with the STL methods and the MTL methods, respectively. The experimental results at different task sizes with the RBF kernel are displayed in Figs. \ref{fig2} and \ref{fig3}. \begin{figure} \caption{The performance comparison between STL methods and MTNPSVM on $spambase$ dataset with RBF kernel.} \label{fig2} \end{figure} \begin{figure} \caption{The performance comparison between MTL methods and MTNPSVM on $spambase$ dataset with RBF kernel.} \label{fig3} \end{figure} In Fig. \ref{fig2}, the experimental results indicate that MTNPSVM performs much better than the other STL methods as the task size increases. It can also be found that the prediction accuracy roughly increases with the task size, which indicates that a larger task size helps to better discover the intrinsic properties of the data.
In addition, the training duration of all methods rises with the task size; this can be explained by the fact that a larger number of samples increases the matrix dimension in the programming problems, thereby aggravating the computational burden. In Fig. \ref{fig3}, MTNPSVM has better generalization performance than the other MTL methods for different task sizes. Moreover, a conclusion similar to that of Fig. \ref{fig2} can be drawn, i.e., as the task size gets larger, the testing accuracy gets higher and the computational time gets longer. Comparing the accuracy of the STL and MTL methods globally in Figs. \ref{fig2} and \ref{fig3}, the MTL methods have more stable and better generalization performance than the STL methods when the sample size is very small, but as the number of samples increases, the gap between the two kinds of methods becomes smaller and smaller. This can be explained as follows: single-task learning cannot fully explore the potential information of the samples when the sample size is small, while MTL methods can effectively improve the overall generalization performance by exploiting the similar structural information among multiple related tasks. This results in a more obvious advantage of MTL methods. However, as the sample size increases, STL methods can explore the data information using the now sufficient samples, so the gap between the two types of methods is reduced. Therefore, multi-task learning fully demonstrates its advantages with small samples. Similar conclusions can be drawn by referring to \cite{vtwin}. \subsubsection{Property of parameter $\epsilon$} \vspace*{5pt} In order to demonstrate the property of the parameter $\epsilon$, this subsection carries out experiments on the MTNPSVM with different kernels. Although $\epsilon$ increases the burden of parameter selection, it makes it convenient to adjust the sparsity of the dual solution.
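The sparsity mechanism behind $\epsilon$ can be sketched in a few lines. In the $\epsilon$-insensitive loss, samples whose residual lies inside the tube $[-\epsilon,\epsilon]$ incur zero loss and, by the complementarity results of Theorems \ref{th1} and \ref{th3}, have zero dual variables, so enlarging $\epsilon$ shrinks the support-vector set. The residual values below are hypothetical and chosen only for illustration.

```python
import numpy as np

def eps_insensitive(residuals, eps):
    """epsilon-insensitive loss: zero inside the tube |r| <= eps, linear outside."""
    return np.maximum(0.0, np.abs(residuals) - eps)

# Hypothetical residuals of five training samples w.r.t. a hyperplane.
r = np.array([-0.30, -0.05, 0.00, 0.10, 0.45])
# Samples with zero loss drop out of the support-vector set, so enlarging
# eps sparsifies the dual solution.
n_sv = {eps: int((eps_insensitive(r, eps) > 0).sum()) for eps in (0.1, 0.2, 0.4)}
# n_sv == {0.1: 2, 0.2: 2, 0.4: 1}: the SV count is non-increasing in eps.
```

This is the behaviour observed empirically in Figs. \ref{figsv1} and \ref{figsv2}, where the number of SVs decreases as $\epsilon$ grows.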
The number of support vectors (SVs) can be influenced by adjusting the value of $\epsilon$. After cross-validation and grid search, the other parameters of the model are fixed at their optimal values. Figs. \ref{figsv1}$\sim$\ref{figsv2} (a), (b) illustrate the variations of the SVs in the two different QPPs, respectively. In Fig. \ref{figsv1}, as $\epsilon$ gets bigger while the other relevant parameters remain unchanged, the number of SVs in the class itself decreases obviously, and less so in the other class, so that the sparseness increases. Furthermore, the number of SVs in the class itself changes greatly, which indicates that more positive samples are close to the decision hyperplane. A similar phenomenon can be found on the $Landmine$ dataset in Fig. \ref{figsv2}. \begin{figure} \caption{Sparseness changes in $Spambase$ datasets with the increasing $\epsilon$.} \label{figsv1} \end{figure} \begin{figure} \caption{Sparseness changes in $Landmine$ datasets with the increasing $\epsilon$.} \label{figsv2} \end{figure} \subsubsection{Convergence analysis} \vspace*{5pt} To better understand the convergence process of the ADMM, the variation curves of three crucial indicators, namely the objective function value $f$, the primal residual $\|r\|_{2}$, and the dual residual $\|s\|_{2}$, are displayed in Fig. \ref{figiter} with the RBF kernel. The hyperparameters are fixed at the optimal values obtained by 5-fold cross-validation and grid search. As the number of iterations increases, the primal residual $\|r\|_{2}$ and the dual residual $\|s\|_{2}$ approach 0 and then vary only slightly, while the objective function values $f$ of problems (\ref{1018}) and (\ref{1019}) tend to a fixed value after a certain number of iterations. The experimental results reveal that MTNPSVM can be solved well by ADMM and finally converges efficiently. \begin{figure} \caption{Convergence of ADMM in two datasets with RBF kernel.
Panels (a), (c) and (b), (d) correspond to the two different programming problems in the proposed model (\ref{2009}).} \label{figiter} \end{figure} \subsubsection{Parameter sensitivity} \vspace*{5pt} In order to further explore the effect of the main parameters on the final generalization performance, the parameters $\rho_{1}$($\rho_{2}$), $C_{1}$($C_{3}$) and $C_{2}$($C_{4}$) are chosen for numerical experiments on two benchmark datasets with the rest of the parameters fixed; the color scale indicates the accuracy, and the three axes represent the three different parameters. The same grid search and cross-validation as in the previous experiments are also executed. In order to investigate the sensitivity of the model to the three types of parameters, the RBF kernel function with different kernel parameter values is applied in Figs. \ref{para_1} and \ref{para_2}, respectively. The experimental results lead to the following conclusions: (a) the model becomes increasingly insensitive to $\rho$ as $\delta$ increases; (b) MTNPSVM has comparable sensitivity to the parameters $C_{1}$($C_{3}$) and $C_{2}$($C_{4}$). \begin{figure} \caption{Parameter sensitivity on $Spambase$ with different kernel parameters.} \label{para_1} \end{figure} \begin{figure} \caption{Parameter sensitivity on $Landmine$ with different kernel parameters.} \label{para_2} \end{figure} \subsection{Experiments on image datasets} \begin{figure} \caption{Caltech image dataset.} \label{fig4} \end{figure} To verify the performance of MTNPSVM in comparison with the other MTL methods, this subsection employs two well-known Caltech image repositories, Caltech 101 and Caltech 256 \cite{cal101,cal256}. Caltech 101 includes 101 classes of images and a background class, each of which has about 50 images. Caltech 256 has 256 classes of images and a background class, each of which has no less than 80 images.
The samples in the background class do not belong to any of the image categories, so they can be viewed as the negative class. To transform the Caltech images into multiple datasets with similar structural information, the related subclasses are synthesized into a large superclass based on the architecture information. Some categories of pictures are displayed in Fig. \ref{fig4}; each superclass contains from 3 to 6 subclasses. It can be seen that each column of pictures shares similar feature information. For instance, in the first column, the aircraft all contain a cabin, wings, a tail, etc., so they can be seen as one superclass. Eventually, each subclass is mixed with negative samples. Identifying the samples belonging to similar superclasses in different subclasses can be viewed as a set of related tasks. From Caltech 101, five multi-task datasets are finally synthesized; 50 images are selected for each superclass, so the final number of samples in each task is 100. Similarly, seven multi-task datasets are combined from Caltech 256. Finally, multi-task learning improves the generalization performance by exploiting the similar structural information between the tasks. The dense-SIFT algorithm \cite{sift} is used for feature extraction. To further speed up the training while retaining as much of the original information as possible, PCA is introduced to reduce the original dimensions while keeping 97\% of the original information. It should be noted that the feature dimensions of the image datasets are still 300-600 after dimensionality reduction. Compared to the benchmark datasets, MTNPSVM does not perform very well in this case. In this subsection, the grid-search strategy and 5-fold cross-validation are also employed. The performance comparison on the five multi-task datasets from Caltech 101 with the RBF kernel is shown in Fig. \ref{cal101_guass}.
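The variance-retention criterion behind the PCA step above (keeping 97\% of the information) can be sketched as follows. This is a minimal numpy illustration on synthetic data, not the dense-SIFT features used in the experiments; the function name is ours:

```python
import numpy as np

def n_components_for_variance(X, target=0.97):
    """Smallest number of principal components whose cumulative
    explained-variance ratio reaches `target` (rows of X are samples)."""
    Xc = X - X.mean(axis=0)                  # center each feature
    s = np.linalg.svd(Xc, compute_uv=False)  # singular values, descending
    ratio = s**2 / np.sum(s**2)              # explained-variance ratios
    return int(np.searchsorted(np.cumsum(ratio), target) + 1)

# synthetic demo: 200 samples in 50 dimensions, nearly rank 5
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5)) @ rng.normal(size=(5, 50)) \
    + 0.01 * rng.normal(size=(200, 50))
k97 = n_components_for_variance(X, 0.97)  # far fewer than 50 components
```

On the real image features the same criterion would return the 300-600 dimensions reported above.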
In terms of accuracy, the experimental results show that MTNPSVM performs slightly better than the other MTL methods. This can be explained as follows: the RBF kernel maps the samples to a sufficiently high dimension that most of the samples become linearly separable, thereby making the performance of the models hard to distinguish. In order to better reveal and compare the performance of the models, further experiments are implemented with the Polynomial kernel, which maps the features to a finite number of dimensions; the experimental results are displayed in Figs. \ref{cal101_poly} and \ref{cal256_poly}. Unlike the experimental results with the RBF kernel, MTNPSVM shows more obvious advantages over the other models, especially on the seven datasets from Caltech 256. A similar conclusion is drawn in \cite{vtwin}. In addition, in terms of computational time, MTNPSVM requires the construction of a larger-dimensional matrix, which results in more computational time; even after acceleration by the ADMM algorithm, the training time is still slightly higher than that of the other models. Taking advantage of the high sparsity of the proposed model to improve the solving speed is the next research direction. \begin{figure} \caption{Performance comparison on Caltech 101 with RBF kernel.} \label{cal101_guass} \end{figure} \begin{figure} \caption{Performance comparison on Caltech 101 with Polynomial kernel.} \label{cal101_poly} \end{figure} \begin{figure} \caption{Performance comparison on Caltech 256 with Polynomial kernel.} \label{cal256_poly} \end{figure} \subsection{Application in Chinese Wine} From the numerical experiments above, it can be seen that MTNPSVM has sufficient theoretical significance and good generalization performance because it inherits the common advantages of both NPSVM and multi-task learning.
To further validate the practical significance of MTNPSVM, this subsection conducts comparative experiments with the other models on the Chinese Wine dataset. The wine dataset was collected from four areas, i.e., the Hexi Corridor, Tonghua, Helan Mountain, and Shacheng. Because the datasets from the four different locations all have 1436 samples with 2203 feature dimensions, they can be considered as four highly related tasks. The grid-search strategy and 5-fold cross-validation are also performed on this dataset. Applying the above MTL methods with the Polynomial kernel, the accuracies and the optimal parameters used in the experiment are displayed in Table \ref{table4}. By comparison, it can be found that MTNPSVM has better generalization performance than the other multi-task models. In addition, it can be found that the parameter $\epsilon$ only has a large effect on the sparsity of the model, but has little effect on the prediction accuracy. It is therefore suggested to preset the parameter $\epsilon$ to 0.1 or 0.2; in this way, the added $\epsilon$ does not increase the burden of the grid search.
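The tuning protocol used throughout (grid search combined with 5-fold cross-validation, with $\epsilon$ preset rather than searched) can be sketched as follows. The classifier below is a stand-in nearest-neighbour model, not MTNPSVM, and all names and the synthetic data are illustrative only:

```python
import numpy as np

EPSILON = 0.1  # preset as suggested above; deliberately excluded from the grid

def five_fold_indices(n, n_folds=5, seed=0):
    """Shuffle 0..n-1 and split into n_folds roughly equal folds."""
    return np.array_split(np.random.default_rng(seed).permutation(n), n_folds)

def cv_accuracy(fit, X, y, n_folds=5):
    """Mean validation accuracy of fit(Xtr, ytr) -> predict over the folds."""
    folds = five_fold_indices(len(y), n_folds)
    accs = []
    for k in range(n_folds):
        val = folds[k]
        trn = np.concatenate([f for j, f in enumerate(folds) if j != k])
        predict = fit(X[trn], y[trn])
        accs.append(np.mean(predict(X[val]) == y[val]))
    return float(np.mean(accs))

def make_knn(k):
    """Stand-in classifier: k-nearest neighbours with majority vote."""
    def fit(Xtr, ytr):
        def predict(Xte):
            d = ((Xte[:, None, :] - Xtr[None, :, :]) ** 2).sum(axis=-1)
            votes = ytr[np.argsort(d, axis=1)[:, :k]]
            return np.array([np.bincount(v).argmax() for v in votes])
        return predict
    return fit

# two well-separated synthetic classes, 60 samples each
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (60, 4)), rng.normal(2.0, 1.0, (60, 4))])
y = np.repeat([0, 1], 60)
grid = [1, 3, 5, 7, 9]  # hyperparameter grid being searched
best_k = max(grid, key=lambda k: cv_accuracy(make_knn(k), X, y))
```

In the experiments above, the same loop runs over the $2^{i}$-style grids reported in Table \ref{table4}, with one hyperparameter axis per model parameter.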
\setlength{\tabcolsep}{2pt} \begin{table*}[!htbp] \centering \caption{The performance comparison on Chinese Wine dataset.}\label{table4} \begin{tabular}{ccccccccccc} \hline Datasets&&\makecell{DMTSVM\\Accuracy(\%)\\(c,$\rho$,$\delta$)}&&\makecell[c]{MTLS-TSVM\\Accuracy(\%)\\(c,$\rho$,$\delta$)}&&\makecell[c]{MTPSVM\\Accuracy(\%)\\(c,$\rho$,$\delta$)}&&\makecell[c]{MTCTSVM\\Accuracy(\%)\\(c,$\rho$,$\delta$,$g$)}&&\makecell[c]{MTNPSVM\\Accuracy(\%)\\($c_{1}$,$c_{2}$,$\rho$,$\delta$,$\epsilon$)}\\ \hline Chinese Wine&&\makecell[c]{74.81\\$(2^{-3},2,7)$}&&\makecell[c]{73.60\\$(2^{-3},2^{-3},1)$}&&\makecell[c]{77.42\\$(2^{-3},2^{2},4)$}&&\makecell[c]{73.60\\$(2^{-2},2^{-1},2^{-2},5)$}&&\makecell[c]{\textbf{78.62}\\$(2^{-3},2^{-3},2^{3},6,0.1)$}\\ \hline \end{tabular} \end{table*} \section{Conclusion and further work}\label{10088} This paper proposes a novel MTNPSVM, which extends the nonparallel support vector machine to the multi-task learning field. It inherits the advantages of both MTL and NPSVM and overcomes the shortcomings of the multi-task TWSVMs. It only needs to deal with one form of QPP for both the linear case and the nonlinear case. Compared with single-task learning, MTNPSVM achieves good generalization performance by exploiting the task relations. Similarly, compared with the other MTL methods, MTNPSVM performs better owing to the introduction of the $\epsilon$-insensitive loss. Furthermore, it is proved that $\epsilon$ can flexibly adjust the sparsity of the model. Finally, ADMM is introduced as the solving algorithm for the proposed model. Experiments on fifteen benchmark datasets and twelve image datasets demonstrate the good performance of MTNPSVM, and the application to the Chinese Wine dataset validates its practical significance. Combining the full sparsity of the proposed model with algorithms that improve the solving speed is a future research direction.
\vspace*{10pt} {\bfseries Acknowledgments} The authors gratefully acknowledge the helpful comments and suggestions of the reviewers, which have improved the presentation. This work was supported in part by the National Natural Science Foundation of China (No. 12071475, 11671010) and the Beijing Natural Science Foundation (No. 4172035). \appendix \section{Proof of Theorem \ref{th1}}\label{proof1} First, the KKT conditions for the primal problem are derived. As in the main text, part of the KKT conditions can be obtained by differentiating the Lagrangian function: \begin{eqnarray}\label{a1} &&\rho_{1} u-\sum_{t=1}^{T} A_{t}^{\top}\left(\alpha_{t}^{+*}-\alpha_{t}^{+}\right)+\sum_{t=1}^{T} B_{t}^{\top} \beta_{t}^{-}=0, \\ \label{a2} &&\frac{u_{t}}{T}-A_{t}^{\top}\left(\alpha_{t}^{+*}-\alpha_{t}^{+}\right)+B_{t}^{\top} \beta_{t}^{-}=0, \\ &&C_{1} e_{1 t}-\alpha_{t}^{+}-\theta_{t}=0, \\ &&C_{1} e_{1 t}-\alpha_{t}^{+*}-\psi_{t}=0, \\ &&C_{2} e_{2 t}-\beta_{t}^{-}-\gamma_{t}=0. \end{eqnarray} In addition, one can get the following complementary slackness conditions: \begin{eqnarray} \label{a6} &&\alpha_{it}^{+}\left[\epsilon +\eta_{it}-\left(u+u_{t}\right)^{\top}x_{it}\right]=0,\\ \label{a7} &&\alpha_{it}^{+*}\left[\epsilon +\eta_{it}^{*}+\left(u+u_{t}\right)^{\top}x_{it}\right]=0.
\end{eqnarray} In order to prove Theorem \ref{th1}, the KKT conditions can be obtained by constructing the Lagrangian function of (\ref{dual1}) as follows: \begin{eqnarray} &&M(A,A^{\top})(\alpha^{+*}-\alpha^{+})-M(A,B^{\top})\beta+\epsilon e^{+}-\overline{s}+\overline{\eta}=0,\\ &&-M(A,A^{\top})(\alpha^{+*}-\alpha^{+})+M(A,B^{\top})\beta+\epsilon e^{+}-\overline{s}^{*}+\overline{\eta}^{*}=0,\\ &&\overline{\eta}^{(*)}\ge0,\\ &&\overline{s}^{(*)}\ge0, \end{eqnarray} and \begin{eqnarray} \label{a12} &&\overline{\eta}_{it}^{(*)}(C_{1}-\alpha_{it}^{+(*)})=0,\\ &&\overline{s}_{it}^{(*)}\alpha_{it}^{+(*)}=0, \end{eqnarray} where the new Lagrangian multipliers $\overline{\eta}^{(*)}$ and $\overline{s}^{(*)}$ stand for $\overline{\eta}$, $\overline{\eta}^{*}$ and $\overline{s}$, $\overline{s}^{*}$, respectively. The subscript $it$ of each vector denotes the $i$-th component of the $t$-th task. It should be mentioned that $\overline{\eta}^{(*)}$ is equivalent to the slack variable ${\eta}^{(*)}$ in the primal problem and also satisfies equations (\ref{a6}) and (\ref{a7}); a detailed proof can be found in \cite{svr}. Let us now further discuss equations (\ref{a6}) and (\ref{a12}) to prove Theorem \ref{th1} in the different situations. If $\alpha_{it}^{+}\in(0,C_{1})$, then according to (\ref{a6}) and (\ref{a12}), $\eta_{it}=0$ and $(u+u_{t})^{\top}x_{it}=\epsilon>-\epsilon$; further, according to the constraint of the primal problem \begin{eqnarray} \label{a14} (u+u_{t})^{\top}x_{it}\ge-\epsilon-\eta_{it}^{+*}, \end{eqnarray} one obtains $\eta_{it}^{+*}=0$, and by (\ref{a7}), $\alpha_{it}^{+*}=0$ is finally derived. Similarly, when $\alpha_{it}^{+*}\in(0,C_{1})$, it can also be proved that $\alpha_{it}^{+}=0$.
If $\alpha_{it}^{+}=C_{1}$, then by (\ref{a12}), $\eta_{it}^{+}\ge0$, and from (\ref{a6}), $(u+u_{t})^{\top}x_{it}=\epsilon+\eta_{it}^{+}>-\epsilon$; further, according to (\ref{a14}), one gets $\eta_{it}^{+*}=0$, and by (\ref{a7}), $\alpha_{it}^{+*}=0$ is finally derived. Similarly, when $\alpha_{it}^{+*}=C_{1}$, it can also be proved that $\alpha_{it}^{+}=0$. Summarizing the above cases, it follows that $\alpha_{it}^{+*}\alpha_{it}^{+}=0$. Theorem \ref{th1} is thus proved; Theorem \ref{th3} can be proved in the same way by using problem (\ref{dual2}). \section{Proof of Theorem \ref{th2}}\label{proof2} For Theorem \ref{th2}, the KKT conditions (\ref{a1}) and (\ref{a2}) can be rearranged into the following form: \begin{eqnarray} &&u=\frac{1}{\rho_{1}}\Big(\sum_{t=1}^{T} A_{t}^{\top}\left(\alpha_{t}^{+*}-\alpha_{t}^{+}\right)-\sum_{t=1}^{T} B_{t}^{\top} \beta_{t}^{-}\Big),\\ &&u_{t}=T\left(A_{t}^{\top}\left(\alpha_{t}^{+*}-\alpha_{t}^{+}\right)-B_{t}^{\top} \beta_{t}^{-}\right). \end{eqnarray} The same argument applies to Theorem \ref{th4}. \end{document}